---
dataset_info:
  features:
  - name: url
    dtype: string
  - name: caption
    dtype: string
  splits:
  - name: laion
    num_bytes: 6602596166
    num_examples: 40000000
  - name: coyo
    num_bytes: 12706527320
    num_examples: 70000000
  - name: conceptualCaptions
    num_bytes: 584517500
    num_examples: 3318333
  download_size: 14883240515
  dataset_size: 19893640986
configs:
- config_name: default
  data_files:
  - split: laion
    path: data/laion-*
  - split: coyo
    path: data/coyo-*
  - split: conceptualCaptions
    path: data/conceptualCaptions-*
---
# Dataset Card for image_captions_x (URL + Caption)
This dataset provides a lightweight, web-scale resource of image-caption pairs in the form of URLs and their associated textual descriptions (captions). It is designed for training and evaluating vision-language models where users retrieve images independently from the provided links.
This dataset card is based on the Hugging Face dataset card template.
## Dataset Details

### Dataset Description
This dataset merges subsets from three well-known large-scale image-text datasets:
- LAION-400M (10% sample): A multilingual dataset of image-text pairs crawled from the web and filtered with CLIP.
- COYO-700M (10% sample): A large-scale Korean-English image-text dataset from Kakao Brain.
- Conceptual Captions: A publicly available dataset from Google AI with filtered image captions from the web.
The dataset consists of three splits:
| Split | Source | # Examples |
|---|---|---|
| laion | LAION-400M (10%) | 40,000,000 |
| coyo | COYO-700M (10%) | 70,000,000 |
| conceptualCaptions | Conceptual Captions | 3,318,333 |
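The split sizes above can be cross-checked against the byte counts in the metadata header; a quick sanity check (numbers copied from the metadata, not recomputed from the data):

```python
# Split metadata: name -> (num_bytes, num_examples), as declared in the card header.
splits = {
    "laion": (6_602_596_166, 40_000_000),
    "coyo": (12_706_527_320, 70_000_000),
    "conceptualCaptions": (584_517_500, 3_318_333),
}

# Totals should match the header's dataset_size and the sum of split examples.
total_examples = sum(n for _, n in splits.values())
total_bytes = sum(b for b, _ in splits.values())

# Average record size per split (bytes per url+caption pair).
avg_bytes = {name: b / n for name, (b, n) in splits.items()}
```

At roughly 165 to 182 bytes per record, each example holds only a short URL and caption, which is what makes the URL-only distribution lightweight.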
All splits share the same two fields:

- `url`: A direct link to the image.
- `caption`: A natural language description of the image.

- **Curated by:** kamruzzaman-asif
- **Funded by [optional]:** N/A
- **Shared by [optional]:** Hugging Face user kamruzzaman-asif
- **Language(s) (NLP):** Multilingual (primarily English, some Korean in COYO)
- **License:** See the individual source licenses (LAION and COYO are CC BY 4.0)
### Dataset Sources
- LAION-400M: https://huggingface.co/datasets/laion/laion400m
- COYO-700M: https://huggingface.co/datasets/kakaobrain/coyo-700m
- Conceptual Captions: https://ai.google.com/research/ConceptualCaptions
## Uses

### Direct Use
This dataset is intended for:
- Training or evaluating vision-language models (e.g., CLIP, BLIP, Flamingo)
- Image-text retrieval tasks
- Weakly supervised or semi-supervised learning with large-scale web data
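Since the dataset ships URLs rather than pixels, training pipelines must fetch images themselves. A minimal sketch using only the standard library (function names are illustrative, and real pipelines would add retries, rate limiting, and image decoding):

```python
import urllib.request
from typing import Iterable, Iterator, Optional, Tuple

def fetch_image_bytes(url: str, timeout: float = 10.0) -> Optional[bytes]:
    """Download raw image bytes; return None on any failure.

    Broken or expired links are expected in web-scale URL lists,
    so failures are skipped rather than raised.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read()
    except Exception:
        return None

def iter_pairs(records: Iterable[dict]) -> Iterator[Tuple[bytes, str]]:
    """Yield (image_bytes, caption) pairs, silently skipping dead URLs."""
    for rec in records:
        data = fetch_image_bytes(rec["url"])
        if data is not None:
            yield data, rec["caption"]
```

For large-scale downloading, a purpose-built tool such as img2dataset is a more practical choice than per-URL requests.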
### Out-of-Scope Use
- The dataset does not contain actual images, only URLs; any task that requires pixel data must download the images separately.
- May contain broken or unreachable URLs.
- Not suitable for tasks requiring curated or verified image-caption quality.
## Dataset Structure
Each split is a flat table with the following fields:
| Field | Type | Description |
|---|---|---|
| url | string | Publicly available link to an image |
| caption | string | Textual description of the corresponding image |
Data splits:

- `laion`: Sampled from LAION-400M
- `coyo`: Sampled from COYO-700M
- `conceptualCaptions`: Full Conceptual Captions dataset
## Dataset Creation

### Curation Rationale
Large-scale image-text datasets are essential for training multimodal models, but full datasets are often too large or difficult to host. This merged dataset offers a lighter, URL-only version to ease access and experimentation.
### Source Data

#### Data Collection and Processing
- LAION and COYO subsets were sampled at approximately 10% of their full size.
- Duplicates and malformed records were removed.
- Only the `url` and `caption` fields were retained.
- Conceptual Captions was included in full.
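The cleaning steps described above might look roughly like the following (an illustrative reconstruction, not the exact pipeline that produced this dataset):

```python
def clean(records):
    """Drop malformed records and URL-level duplicates,
    keeping only the url and caption fields."""
    seen = set()
    cleaned = []
    for rec in records:
        url = rec.get("url", "")
        caption = rec.get("caption", "")
        # Malformed: non-HTTP(S) URL or empty caption.
        if not url.startswith(("http://", "https://")) or not caption.strip():
            continue
        # Duplicate: same URL already kept.
        if url in seen:
            continue
        seen.add(url)
        cleaned.append({"url": url, "caption": caption})
    return cleaned
```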
#### Who are the source data producers?
The data originates from large web-scale crawls performed by the LAION team, Kakao Brain, and Google AI.
### Annotations
No additional annotations beyond the original captions are included.
### Personal and Sensitive Information
The dataset may contain content from the open web that includes personal, copyrighted, or sensitive material. Use responsibly and adhere to the terms of the original datasets.
## Bias, Risks, and Limitations
- The data reflects web-scale distribution, which may contain biases, offensive content, or culturally insensitive material.
- Captions are not manually verified.
- URLs may expire or be removed over time.
### Recommendations
Researchers and developers should pre-filter, verify, and clean the dataset further for production or sensitive use cases.
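One possible starting point for such pre-filtering is a simple per-example heuristic (the thresholds below are arbitrary placeholders, to be tuned per use case):

```python
def keep_example(example, min_words=2, max_words=64):
    """Heuristic pre-filter: HTTPS-only URLs and captions of a
    plausible length. Thresholds are placeholders, not tuned values."""
    words = example["caption"].split()
    return (
        example["url"].startswith("https://")
        and min_words <= len(words) <= max_words
    )
```

Stronger filters (CLIP-score thresholds, NSFW classifiers, language identification) are advisable before any production or sensitive use.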
## Citation
If you use this dataset, please cite the original datasets:
**LAION-400M**
Schuhmann et al., *LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs*
https://arxiv.org/abs/2111.02114
**COYO-700M**
Kim et al., *COYO-700M: Image-Text Dataset for Web-scale Learning*
https://arxiv.org/abs/2303.06512
**Conceptual Captions**
Sharma et al., *Conceptual Captions: A Cleaned, Hypernymed, Image Caption Dataset for the Web*
https://aclanthology.org/P18-1238/
## More Information
For issues, contributions, or questions, please contact the dataset maintainer on Hugging Face.
## Dataset Card Authors

kamruzzaman-asif