---
dataset_info:
  features:
  - name: url
    dtype: string
  - name: caption
    dtype: string
  splits:
  - name: laion
    num_bytes: 6602596166
    num_examples: 40000000
  - name: coyo
    num_bytes: 12706527320
    num_examples: 70000000
  - name: conceptualCaptions
    num_bytes: 584517500
    num_examples: 3318333
  download_size: 14883240515
  dataset_size: 19893640986
configs:
- config_name: default
  data_files:
  - split: laion
    path: data/laion-*
  - split: coyo
    path: data/coyo-*
  - split: conceptualCaptions
    path: data/conceptualCaptions-*
---

# Dataset Card for image_captions_x (URL + Caption)

This dataset provides a lightweight, web-scale resource of image-caption pairs in the form of URLs and their associated textual descriptions (captions). It is designed for training and evaluating vision-language models where users retrieve images independently from the provided links.

This dataset card is based on the [Hugging Face dataset card template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).

---

## Dataset Details

### Dataset Description

This dataset merges subsets from three well-known large-scale image-text datasets:

- **LAION-400M** (10% sample): A dataset of image-text pairs crawled from the web and filtered with CLIP; captions are predominantly English.
- **COYO-700M** (10% sample): A large-scale Korean-English image-text dataset from Kakao Brain.
- **Conceptual Captions**: A publicly available dataset from Google AI with filtered image captions from the web.

The dataset consists of three splits:

| Split                | Source                 | # Examples     |
|----------------------|------------------------|----------------|
| `laion`              | LAION-400M (10%)       | 40,000,000     |
| `coyo`               | COYO-700M (10%)        | 70,000,000     |
| `conceptualCaptions` | Conceptual Captions    | 3,318,333      |

All splits share the same two fields:
- `url`: A direct link to the image.
- `caption`: A natural language description of the image.

- **Curated by:** kamruzzaman-asif
- **Shared by:** Hugging Face user `kamruzzaman-asif`
- **Language(s) (NLP):** Multilingual (primarily English, some Korean in COYO)
- **License:** See the individual source licenses (the LAION-400M and COYO-700M metadata are released under CC BY 4.0)

---

### Dataset Sources

- **LAION-400M:** https://huggingface.co/datasets/laion/laion400m
- **COYO-700M:** https://huggingface.co/datasets/kakaobrain/coyo-700m
- **Conceptual Captions:** https://ai.google.com/research/ConceptualCaptions

---

## Uses

### Direct Use

This dataset is intended for:
- Training or evaluating vision-language models (e.g., CLIP, BLIP, Flamingo)
- Image-text retrieval tasks
- Weakly supervised or semi-supervised learning with large-scale web data
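A minimal loading sketch with the 🤗 `datasets` library (the repo id `kamruzzaman-asif/image_captions_x` is inferred from this card's title; `streaming=True` avoids materializing the ~15 GB of data files locally):

```python
def to_pairs(records):
    """Extract (url, caption) tuples, skipping records missing either field."""
    return [
        (r["url"], r["caption"])
        for r in records
        if r.get("url") and r.get("caption")
    ]

# With `pip install datasets`, a streaming pass over one split
# (repo id assumed from this card) might look like:
#
#   from datasets import load_dataset
#   ds = load_dataset("kamruzzaman-asif/image_captions_x",
#                     split="conceptualCaptions", streaming=True)
#   for url, caption in to_pairs(ds.take(5)):
#       print(url, "->", caption[:60])
```

Streaming is recommended for the `laion` and `coyo` splits, which hold tens of millions of rows each.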

### Out-of-Scope Use

- The dataset contains only URLs, not image files; any task that needs pixel data must download the images separately.
- May contain broken or unreachable URLs.
- Not suitable for tasks requiring curated or verified image-caption quality.
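Because some URLs are broken or unreachable, a cheap static pre-check before attempting downloads can save time. A standard-library-only sketch (the fetch helper and its 10-second timeout are illustrative choices, not part of the dataset):

```python
from typing import Optional
from urllib.parse import urlparse
from urllib.request import urlopen
from urllib.error import URLError

def looks_fetchable(url: str) -> bool:
    """Cheap static check: http(s) scheme and a non-empty host."""
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)

def fetch_image(url: str, timeout: float = 10.0) -> Optional[bytes]:
    """Download raw image bytes; return None on any network failure."""
    if not looks_fetchable(url):
        return None
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.read()
    except (URLError, OSError):
        return None
```

In practice, bulk downloaders such as `img2dataset` are better suited to fetching millions of URLs with retries and rate limiting.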

---

## Dataset Structure

Each split is a flat table with the following fields:

| Field     | Type    | Description                             |
|-----------|---------|-----------------------------------------|
| `url`     | string  | Publicly available link to an image     |
| `caption` | string  | Textual description of the corresponding image |

Data splits:
- `laion`: Sampled from LAION-400M
- `coyo`: Sampled from COYO-700M
- `conceptualCaptions`: Full Conceptual Captions dataset

---

## Dataset Creation

### Curation Rationale

Large-scale image-text datasets are essential for training multimodal models, but full datasets are often too large or difficult to host. This merged dataset offers a lighter, URL-only version to ease access and experimentation.

### Source Data

#### Data Collection and Processing

- LAION and COYO subsets were sampled at approximately 10% of their full size.
- Duplicates and malformed records were removed.
- Only `url` and `caption` fields were retained.
- Conceptual Captions was included in full.
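The cleaning steps above could be sketched as follows (the curator's exact criteria are not documented; this shows one plausible implementation):

```python
def clean_records(records):
    """Keep well-formed (url, caption) records, deduplicated by URL."""
    seen = set()
    out = []
    for r in records:
        url = (r.get("url") or "").strip()
        caption = (r.get("caption") or "").strip()
        if not url or not caption:   # drop malformed records
            continue
        if url in seen:              # drop duplicates by URL
            continue
        seen.add(url)
        out.append({"url": url, "caption": caption})  # retain only the two fields
    return out
```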

#### Who are the source data producers?

The data originates from large web-scale crawls performed by the LAION team, Kakao Brain, and Google AI.

---

### Annotations

No additional annotations beyond the original captions are included.

#### Personal and Sensitive Information

The dataset may contain content from the open web that includes personal, copyrighted, or sensitive material. Use responsibly and adhere to the terms of the original datasets.

---

## Bias, Risks, and Limitations

- The data reflects web-scale distribution, which may contain biases, offensive content, or culturally insensitive material.
- Captions are not manually verified.
- URLs may expire or be removed over time.

### Recommendations

Researchers and developers should pre-filter, verify, and clean the dataset further for production or sensitive use cases.

---

## Citation

If you use this dataset, please cite the original datasets:

**LAION-400M**  
Schuhmann et al., *LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs*  
https://arxiv.org/abs/2111.02114

**COYO-700M**  
Byeon et al., *COYO-700M: Image-Text Pair Dataset*  
https://github.com/kakaobrain/coyo-dataset

**Conceptual Captions**  
Sharma et al., *Conceptual Captions: A Cleaned, Hypernymed, Image Caption Dataset for the Web*  
https://aclanthology.org/P18-1238/

---

## More Information

For issues, contributions, or questions, please contact the dataset maintainer on Hugging Face.

---

## Dataset Card Authors

kamruzzaman-asif

## Dataset Card Contact

https://huggingface.co/kamruzzaman-asif