Update Dataset Card
README.md CHANGED

@@ -27,3 +27,157 @@ configs:
  - split: conceptualCaptions
    path: data/conceptualCaptions-*
---
# Dataset Card for LAION-COYO-CC (URL + Caption)

This dataset provides a lightweight, web-scale resource of image-caption pairs in the form of image URLs and their associated captions. It is designed for training and evaluating vision-language models; users retrieve the images themselves from the provided links.

This dataset card is based on the [Hugging Face dataset card template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).

---

## Dataset Details

### Dataset Description

This dataset merges subsets from three well-known large-scale image-text datasets:

- **LAION-400M** (10% sample): A large open dataset of image-text pairs crawled from the web and filtered with CLIP.
- **COYO-700M** (10% sample): A large-scale image-text dataset from Kakao Brain, built from alt-text and image pairs crawled from the web.
- **Conceptual Captions**: A publicly available dataset from Google AI with filtered image captions from the web.

The dataset consists of three splits:

| Split                | Source              | # Examples |
|----------------------|---------------------|------------|
| `laion`              | LAION-400M (10%)    | 40,000,000 |
| `coyo`               | COYO-700M (10%)     | 70,000,000 |
| `conceptualCaptions` | Conceptual Captions | 3,318,333  |

All splits share the same two fields:
- `url`: A direct link to the image.
- `caption`: A natural language description of the image.

- **Curated by:** kamruzzaman-asif
- **Funded by:** N/A
- **Shared by:** Hugging Face user `kamruzzaman-asif`
- **Language(s) (NLP):** Primarily English
- **License:** See the individual source licenses (the LAION-400M and COYO-700M metadata are released under CC BY 4.0)

---

### Dataset Sources

- **LAION-400M:** https://huggingface.co/datasets/laion/laion400m
- **COYO-700M:** https://huggingface.co/datasets/kakaobrain/coyo-700m
- **Conceptual Captions:** https://ai.google.com/research/ConceptualCaptions

---

## Uses

### Direct Use

This dataset is intended for (see the streaming sketch after this list):
- Training or evaluating vision-language models (e.g., CLIP, BLIP, Flamingo)
- Image-text retrieval tasks
- Weakly supervised or semi-supervised learning with large-scale web data
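A minimal, hedged sketch of streaming `(url, caption)` pairs for such training or retrieval pipelines. The repository id below is a placeholder (the dataset's actual Hub path is not stated in this card), and the batching logic is purely illustrative:

```python
from datasets import load_dataset

# Placeholder repo id; replace with this dataset's actual path on the Hub.
REPO_ID = "your-username/laion-coyo-cc-url-caption"

# Stream the large "laion" split so nothing has to be downloaded up front.
stream = load_dataset(REPO_ID, split="laion", streaming=True)

# Gather small batches of (url, caption) pairs, e.g. to feed an image
# downloader plus a text encoder in a CLIP-style training loop.
batch = []
for example in stream:
    batch.append((example["url"], example["caption"]))
    if len(batch) == 256:
        break  # hand the batch off to your own download/encode/train step
```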

### Out-of-Scope Use

- The dataset does not contain actual images, only URLs; any task that needs pixel data requires downloading the images separately (see the sketch after this list).
- May contain broken or unreachable URLs.
- Not suitable for tasks requiring curated or verified image-caption quality.
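A hedged sketch of fetching the underlying pixels, assuming the `requests` and `Pillow` packages are installed; broken links are expected at this scale, so failures are simply skipped:

```python
import io
from typing import Optional

import requests
from PIL import Image


def fetch_image(url: str, timeout: float = 10.0) -> Optional[Image.Image]:
    """Try to download one image; return None if the URL is dead or not an image."""
    try:
        resp = requests.get(url, timeout=timeout)
        resp.raise_for_status()
        return Image.open(io.BytesIO(resp.content)).convert("RGB")
    except Exception:
        # Broken, unreachable, or non-image URLs are common in web-scale data.
        return None


# Example with one record from the dataset:
# image = fetch_image(example["url"])
```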

---

## Dataset Structure

Each split is a flat table with the following fields:

| Field     | Type   | Description                                    |
|-----------|--------|------------------------------------------------|
| `url`     | string | Publicly available link to an image            |
| `caption` | string | Textual description of the corresponding image |

Data splits (a loading sketch follows the list):
- `laion`: Sampled from LAION-400M
- `coyo`: Sampled from COYO-700M
- `conceptualCaptions`: Full Conceptual Captions dataset
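A minimal loading sketch with the `datasets` library; the repository id is again a placeholder for this dataset's actual Hub path:

```python
from datasets import load_dataset

# Placeholder repo id; replace with this dataset's actual path on the Hub.
REPO_ID = "your-username/laion-coyo-cc-url-caption"

# Each split exposes exactly two string columns: "url" and "caption".
cc = load_dataset(REPO_ID, split="conceptualCaptions")
print(cc)                # number of rows and the two features
print(cc[0]["url"])
print(cc[0]["caption"])
```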

---

## Dataset Creation

### Curation Rationale

Large-scale image-text datasets are essential for training multimodal models, but full datasets are often too large or difficult to host. This merged dataset offers a lighter, URL-only version to ease access and experimentation.

### Source Data

#### Data Collection and Processing

- LAION and COYO subsets were sampled at approximately 10% of their full size.
- Duplicates and malformed records were removed.
- Only `url` and `caption` fields were retained.
- Conceptual Captions was included in full. (An illustrative sketch of such a pipeline follows this list.)
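The maintainer's exact processing code is not published in this card; the following is only a hedged sketch of how a deterministic ~10% sample with basic validation, de-duplication, and field selection could look, applied to any iterable of records that already expose `url` and `caption` keys:

```python
import hashlib
from typing import Dict, Iterable, Iterator


def process(records: Iterable[Dict]) -> Iterator[Dict]:
    """Illustrative only (not the actual pipeline): keep a deterministic ~10%
    sample, drop malformed or duplicate records, and retain only url/caption."""
    seen = set()
    for rec in records:
        url, caption = rec.get("url"), rec.get("caption")
        if not url or not caption:      # drop malformed records
            continue
        if url in seen:                 # drop exact-duplicate URLs
            continue
        seen.add(url)
        # Deterministic ~10% sample keyed on the URL.
        if int(hashlib.md5(url.encode("utf-8")).hexdigest(), 16) % 10 != 0:
            continue
        yield {"url": url, "caption": caption}
```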

#### Who are the source data producers?

The data originates from large web-scale crawls performed by the LAION team, Kakao Brain, and Google AI.

---

### Annotations

No additional annotations beyond the original captions are included.

#### Personal and Sensitive Information

The dataset may contain content from the open web that includes personal, copyrighted, or sensitive material. Use responsibly and adhere to the terms of the original datasets.

---

## Bias, Risks, and Limitations

- The data reflects the distribution of the open web and may contain biases, offensive content, or culturally insensitive material.
- Captions are not manually verified.
- URLs may expire or be removed over time.

### Recommendations

Researchers and developers should pre-filter, verify, and clean the dataset further for production or sensitive use cases, for example along the lines of the sketch below.
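One possible pre-filtering pass, shown only as an illustration and assuming the `requests` package: drop very short captions and records whose URL no longer responds. Production use would typically also need NSFW, PII, and copyright screening, which this sketch does not attempt:

```python
import requests


def url_is_reachable(url: str, timeout: float = 5.0) -> bool:
    """Cheap liveness check; HEAD avoids downloading the image body."""
    try:
        resp = requests.head(url, timeout=timeout, allow_redirects=True)
        return resp.status_code < 400
    except requests.RequestException:
        return False


def basic_filter(example: dict) -> bool:
    caption = (example.get("caption") or "").strip()
    return len(caption) >= 5 and url_is_reachable(example["url"])


# With a loaded `datasets.Dataset` named `ds` (slow on large splits;
# parallelize or subsample in practice):
# ds_clean = ds.filter(basic_filter)
```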

---

## Citation

If you use this dataset, please cite the original datasets:

**LAION-400M**
Schuhmann et al., *LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs*
https://arxiv.org/abs/2111.02114

**COYO-700M**
Byeon et al., *COYO-700M: Image-Text Pair Dataset*
https://github.com/kakaobrain/coyo-dataset

**Conceptual Captions**
Sharma et al., *Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset for Automatic Image Captioning*
https://aclanthology.org/P18-1238/

---

## More Information

For issues, contributions, or questions, please contact the dataset maintainer on Hugging Face.

---

## Dataset Card Authors

kamruzzaman-asif

## Dataset Card Contact

https://huggingface.co/kamruzzaman-asif