---
license: apple-amlr
license_name: apple-ascl
license_link: https://github.com/apple/ml-mobileclip/blob/main/LICENSE_weights_data
task_categories:
- text-to-image
- image-to-text
language:
- en
---
# Dataset Card for DataComp-12M
<!-- Provide a quick summary of the dataset. -->
This dataset contains the UIDs of DataComp-12M, a 12M subset of [DataComp-1B-BestPool](https://huggingface.co/datasets/mlfoundations/datacomp_1b).
Image-text models trained on DataComp-12M are significantly better than those trained on CC-12M, YFCC-15M, or DataComp-Small/Medium.
For details on this dataset and the improved [DataCompDR-12M](https://huggingface.co/datasets/apple/DataCompDR-12M),
please see our [MobileCLIP paper](https://arxiv.org/abs/2311.17049).
The dataset with the original captions is now available at [mlfoundations/DataComp-12M](https://huggingface.co/datasets/mlfoundations/DataComp-12M).
The UIDs per shard match between [mlfoundations/DataComp-12M](https://huggingface.co/datasets/mlfoundations/DataComp-12M) and [apple/DataCompDR-12M](https://huggingface.co/datasets/apple/DataCompDR-12M).
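As a minimal illustration (not part of the release), the UID list can be used to filter DataComp-1B metadata down to this subset. The local parquet paths and the `uid` column name below are assumptions based on the DataComp metadata layout:

```python
from pathlib import Path

import pandas as pd

# Load the 12,779,520 UIDs (32-character hex strings, one per line).
with open("uids.txt") as f:
    subset_uids = set(line.strip() for line in f)

# Keep only the metadata rows whose UID is in the 12M subset.
out_dir = Path("datacomp_12m_metadata")                        # hypothetical output path
out_dir.mkdir(exist_ok=True)
for shard in Path("datacomp_1b_metadata").glob("*.parquet"):   # hypothetical input path
    df = pd.read_parquet(shard)
    df[df["uid"].isin(subset_uids)].to_parquet(out_dir / shard.name)
```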
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
DataCompDR is an image-text dataset and an enhancement to the DataComp dataset.
We reinforce the DataComp dataset using our multi-modal dataset reinforcement strategy.
In particular, we create DataCompDR-1B by reinforcing DataComp-1B (BestPool filtering) and DataCompDR-12M by reinforcing a uniform 12.8M-sample subset of it.
The reinforcement is a one-time generation process whose cost is amortized over multiple architectures and extensive ablations.
We generate 5 synthetic captions per image using the `coca_ViT-L-14` model in OpenCLIP, and apply strong random image augmentations (10 per image for DataCompDR-1B and 30 for DataCompDR-12M).
We compute embeddings of an ensemble of two strong teachers (`ViT-L-14` with pretrained weights `datacomp_xl_s13b_b90k` and `openai` in OpenCLIP) on the augmented images as well as the real and synthetic captions.
Embeddings are 1536-D concatenations of two 768-D vectors.
One seen sample of DataCompDR is a triplet of one randomly augmented image, one ground-truth caption, and one randomly picked synthetic caption.
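As an illustration, here is a minimal sketch of how such a 1536-D ensemble embedding can be computed with the `open_clip` package. This is a reconstruction under the description above, not the released generation code; the image and caption are placeholders.

```python
import torch
import torch.nn.functional as F
import open_clip
from PIL import Image

# Two teacher models: ViT-L-14 pretrained on DataComp-XL and the OpenAI ViT-L-14.
teachers = []
for tag in ("datacomp_xl_s13b_b90k", "openai"):
    model, _, preprocess = open_clip.create_model_and_transforms("ViT-L-14", pretrained=tag)
    model.eval()
    teachers.append((model, preprocess))
tokenizer = open_clip.get_tokenizer("ViT-L-14")

image = Image.open("example.jpg")                             # placeholder: one (augmented) image
caption = tokenizer(["a ground-truth or synthetic caption"])  # placeholder caption

with torch.no_grad():
    # Each teacher yields a 768-D embedding; the ensemble embedding is the
    # concatenation of the two L2-normalized vectors (2 x 768 = 1536-D).
    img_emb = torch.cat(
        [F.normalize(m.encode_image(p(image).unsqueeze(0)), dim=-1) for m, p in teachers],
        dim=-1,
    )
    txt_emb = torch.cat(
        [F.normalize(m.encode_text(caption), dim=-1) for m, _ in teachers],
        dim=-1,
    )

print(img_emb.shape, txt_emb.shape)  # torch.Size([1, 1536]) torch.Size([1, 1536])
```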
- **Curated by:** Original data by [DataComp](https://www.datacomp.ai/) and metadata by Apple.
- **License:** We distribute our metadata under our [license](https://github.com/apple/ml-mobileclip/blob/main/LICENSE). The original image URL-text samples and metadata were released by [DataComp](https://www.datacomp.ai/) under the Creative Commons CC-BY-4.0 license. The individual images remain under their own copyrights.
- **Repository:** [ml-mobileclip GitHub](https://github.com/apple/ml-mobileclip)
- **Paper:** [MobileCLIP paper](https://arxiv.org/abs/2311.17049)
- **Demo:** Coming Soon
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
Training with DataCompDR shows a significant improvement in learning efficiency compared to standard CLIP training.
For example, with a single node of 8×A100 GPUs, we achieve 61.7% zero-shot classification accuracy on ImageNet-val in approximately one day when training a ViT-B/16-based CLIP from scratch on DataCompDR-12M.
Training with DataCompDR-1B sets new state-of-the-art performance on several metrics (Fig. 2) while still using a fraction of the training compute budget compared to previous works.
Using DataCompDR, we demonstrate 10x-1000x learning efficiency in comparison to DataComp.
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
```
- uids.txt: List of 12,779,520 (65,536 × 195) UIDs, one UID per line.
- uids.npy: List of 12,779,520 (65,536 × 195) UIDs as a NumPy array of type `numpy.dtype("u8,u8")`.
```
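A short sketch of loading these files and reassembling the two 64-bit halves into the usual 32-character hex UIDs; the field order (first field = first 16 hex characters) is an assumption following common DataComp tooling, and is sanity-checked against `uids.txt` below.

```python
import numpy as np

uids = np.load("uids.npy")   # structured array, dtype "u8,u8"
print(uids.shape)            # (12779520,)

# Each record holds two 64-bit halves of a 128-bit UID; reassemble them into
# hex strings (assumed order: first field = high half, verified below).
hex_uids = [f"{a:016x}{b:016x}" for a, b in uids]

with open("uids.txt") as f:
    assert hex_uids[0] == f.readline().strip()  # sanity-check the field order
```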
## Citation
**[MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training](https://arxiv.org/pdf/2311.17049.pdf). (CVPR 2024)**
*Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, Oncel Tuzel.*
```bibtex
@InProceedings{mobileclip2024,
  author    = {Pavan Kumar Anasosalu Vasu and Hadi Pouransari and Fartash Faghri and Raviteja Vemulapalli and Oncel Tuzel},
  title     = {MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
}
```