Abstract
Contrastive Language-Image Pre-training (CLIP) is an approach that has advanced research and applications in computer vision, fueling modern recognition systems and generative models. We believe that the main ingredient to the success of CLIP is its data and not the model architecture or pre-training objective. However, CLIP only provides very limited information about its data and how it has been collected, leading to works that aim to reproduce CLIP's data by filtering with its model parameters. In this work, we intend to reveal CLIP's data curation approach and in our pursuit of making it open to the community introduce Metadata-Curated Language-Image Pre-training (MetaCLIP). MetaCLIP takes a raw data pool and metadata (derived from CLIP's concepts) and yields a balanced subset over the metadata distribution. Our experimental study rigorously isolates the model and training settings, concentrating solely on data. MetaCLIP applied to CommonCrawl with 400M image-text data pairs outperforms CLIP's data on multiple standard benchmarks. In zero-shot ImageNet classification, MetaCLIP achieves 70.8% accuracy, surpassing CLIP's 68.3% on ViT-B models. Scaling to 1B data, while maintaining the same training budget, attains 72.4%. Our observations hold across various model sizes, exemplified by ViT-H achieving 80.5%, without any bells-and-whistles. Curation code and training data distribution on metadata are made available at https://github.com/facebookresearch/MetaCLIP.
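To make the recipe concrete, here is a minimal Python sketch of the sub-string matching step the abstract alludes to: captions from the raw pool are matched against metadata entries, unmatched pairs are dropped as noise, and an inverted index from entry to matched pairs is kept for the later balancing step. The metadata entries, toy captions, and helper names below are illustrative stand-ins, not the released implementation; see the repository linked above for the actual code.

```python
# Minimal, illustrative sketch of MetaCLIP-style sub-string matching.
# The metadata list, captions, and helper names are toy stand-ins.
from collections import defaultdict

# Stand-in for the ~500k-entry concept list derived from CLIP's metadata.
metadata = ["dog", "golden retriever", "bicycle", "paris"]

def substring_match(caption: str, entries: list[str]) -> list[str]:
    """Return every metadata entry that occurs as a sub-string of the caption."""
    lowered = caption.lower()
    return [e for e in entries if e in lowered]

# Raw (image_url, caption) pool, e.g. harvested from web data.
pool = [
    ("img1.jpg", "A golden retriever playing in Paris"),
    ("img2.jpg", "asdf_0193 product id 44-b"),   # no match -> dropped as noise
    ("img3.jpg", "Vintage bicycle for sale"),
]

curated = []                          # pairs that matched at least one entry
entry_to_pairs = defaultdict(list)    # inverted index used later for balancing
for url, caption in pool:
    matches = substring_match(caption, metadata)
    if matches:                       # sub-string matching as an implicit noise filter
        curated.append((url, caption, matches))
        for entry in matches:
            entry_to_pairs[entry].append((url, caption))
```

At web scale this matching would use an efficient multi-pattern matcher rather than a per-entry scan; the sketch favors readability over throughput.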
Community
Here is an ML-generated summary
Objective
The paper aims to reveal CLIP's data curation approach and present a transparent algorithm called MetaCLIP to curate high-quality image-text data from raw web data.
Insights
- Metadata plays a central role in mitigating noise and preserving signal.
- Balancing the distribution is key to maximizing diversity and task-agnostic properties (see the sketch after this list).
- Sub-string matching acts as an implicit filter to remove noise without manual rules.
- Curation algorithm enables easy adaptation to new data sources without external filters.
- MetaCLIP outperforms CLIP's data, showing the effectiveness of the curation approach.
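Below is a minimal sketch of the balancing idea from the insights above, assuming a per-entry cap t (the paper reports t = 20k for the 400M-pair set): entries whose match count exceeds t are down-sampled to t, while tail entries keep all of their pairs. The toy inverted index and the tiny t are illustrative, not the released implementation.

```python
# Minimal, illustrative sketch of balancing over the metadata distribution.
import random

random.seed(0)

# Toy inverted index: metadata entry -> image-text pairs whose captions matched it.
entry_to_pairs = {
    "dog":   [(f"dog_{i}.jpg", f"a photo of a dog #{i}") for i in range(10)],  # head entry
    "okapi": [("okapi.jpg", "an okapi at the zoo")],                           # tail entry
}

t = 3  # per-entry cap; tiny stand-in for the paper's t = 20,000

balanced = set()
for entry, pairs in entry_to_pairs.items():
    if len(pairs) <= t:
        balanced.update(pairs)                    # tail entries are kept in full
    else:
        balanced.update(random.sample(pairs, t))  # head entries are down-sampled to t

# The head concept is flattened to t pairs while the tail survives intact,
# which is what increases diversity in the curated set.
print(sorted(balanced))
```

In the released pipeline a pair can match several entries, so sampling is handled at the pair level accordingly; the per-entry capping above is only meant to convey the flattening effect on head concepts.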
Results
MetaCLIP applied to CommonCrawl with 400M data points outperforms CLIP's WIT400M dataset on multiple benchmarks, achieving 70.8% top-1 accuracy on ImageNet zero-shot classification with ViT-B/16, compared to CLIP's 68.3%.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Finetuned Multimodal Language Models Are High-Quality Image-Text Data Filters (2024)
- Data-Efficient Contrastive Language-Image Pretraining: Prioritizing Data Quality over Quantity (2024)
- Do CLIPs Always Generalize Better than ImageNet Models? (2024)
- No"Zero-Shot"Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance (2024)
- Text Data-Centric Image Captioning with Interactive Prompts (2024)