---
dataset_info:
  features:
  - name: id
    dtype: int32
  - name: title
    dtype: string
  - name: text
    dtype: string
  - name: url
    dtype: string
  - name: wiki_id
    dtype: int32
  - name: paragraph_id
    dtype: int32
  - name: images
    sequence:
    - name: caption
      dtype: string
    - name: image
      dtype: image
    - name: type
      dtype: string
    - name: url
      dtype: string
  splits:
  - name: train
    num_bytes: 58037298060.96
    num_examples: 42482460
  download_size: 47531941595
  dataset_size: 58037298060.96
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: cc-by-sa-4.0
task_categories:
- text-generation
- visual-document-retrieval
- sentence-similarity
- visual-question-answering
language:
- en
tags:
- retrieval-augmented-generation
- RAG
- multimodal
- vision-language
pretty_name: WikiFragments
size_categories:
- 10M<n<100M
---
# WikiFragments

<!-- Provide a quick summary of the dataset. -->

**WikiFragments** is a multimodal dataset built from [Wikipedia (en)](https://en.wikipedia.org/), consisting of cleaned textual paragraphs paired with related images (infobox and thumbnail) from the same page. Each pair forms a **multimodal fragment**, which serves as an atomic knowledge unit ideal for information retrieval and multimodal research.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/62d011476a61a88ea0d16665/ixMJmaq85vj0JJ0IPbUuZ.png)

* Fragment with four images and captions

![image/png](https://cdn-uploads.huggingface.co/production/uploads/62d011476a61a88ea0d16665/XnUBlPulrMyK4stI1ScnF.png)

* Fragment with only text and no associated images

> [!NOTE]  
> The images above were generated from two separate rows in the dataset using
> the [`FragmentCreator`](https://github.com/cilabuniba/artseek/blob/main/artseek/data/datasets/processing.py),
> which converts them into stand‑alone images.
> You can use the same code to reproduce this representation.
> In our paper, we employed this representation to encode fragments with [ColQwen2](https://huggingface.co/vidore/colqwen2-v1.0) for multimodal retrieval.
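
To take a quick look at a few fragments yourself, the dataset can be streamed with the 🤗 `datasets` library. A minimal sketch follows; the repository ID is an assumption, so substitute this dataset's actual Hub ID:

```python
from datasets import load_dataset

# Streaming avoids downloading the full ~47 GB split up front.
# NOTE: the repository ID below is an assumption; use this dataset's actual Hub ID.
ds = load_dataset("nicolafan/wikifragments", split="train", streaming=True)

fragment = next(iter(ds))  # fetch the first fragment lazily
print(fragment["title"], fragment["url"])
```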

## Dataset Details

To construct this dataset, we modified the [wikiextractor](https://github.com/attardi/wikiextractor) tool to extract and clean paragraphs from every page in the English Wikipedia. We preserved hyperlinks in the text and, when available, retrieved images from infoboxes and thumbnails. Each image is associated with its respective paragraph according to the order in which it appears in the HTML source of the page, along with its original caption.

Images are retrieved at the lower resolution used for webpage rendering, as extracted from the [Kiwix](https://kiwix.org/en/) full Wikipedia dump (ZIM file, January 2024), which keeps the overall dataset size manageable.

We define a **multimodal fragment** as follows:

> A multimodal fragment is an atomic unit of information consisting of a paragraph from a Wikipedia page and all images that, in the page’s source code, appear above that paragraph.

### Dataset Description

Paragraphs are cleaned using the standard `wikiextractor` logic. For each paragraph, we store:

- The paragraph text
- The corresponding Wikipedia page name and URL
- The list of associated images as PIL objects
- The image URLs and captions
- The sequential index of the paragraph within the page

- **Curated by:** Nicola Fanelli (PhD Student @ University of Bari Aldo Moro, Italy)
- **Language(s) (NLP):** English

### License

- **Code**: MIT License.
- **Text Data**: The Wikipedia text is licensed under [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/). When using this dataset, you must provide proper attribution to Wikipedia and its contributors and share any derivatives under the same license.
- **Images**: Images are sourced from Wikipedia and Wikimedia Commons. Each image is subject to its own license, which is typically indicated on its original page. Users of this dataset are responsible for ensuring they comply with the licensing terms of individual images.

### Dataset Sources

All content originates from [Wikipedia (en)](https://en.wikipedia.org/). Any use of this dataset must comply with Wikipedia’s copyright policies.

- **Repository:** [`wikiextractor` fork](https://github.com/nicolafan/wikiextractor)
- **Paper:** [ArtSeek: Deep artwork understanding via multimodal in-context reasoning and late interaction retrieval](https://arxiv.org/abs/2507.21917)

## Uses

This dataset is designed for use in **retrieval tasks**, particularly in retrieval-augmented generation (RAG), to provide relevant multimodal context for answering questions.

In our [paper](https://arxiv.org/abs/2507.21917), we generate visual representations of each multimodal fragment: images resembling a rendered PDF, with the paragraph at the bottom, images at the top, and captions aligned to the right. These are then encoded into multi-vector multimodal representations with [ColPali](https://arxiv.org/abs/2407.01449).

The code for generating these multimodal fragment images (such as the ones provided in the examples above) is available [here](https://github.com/cilabuniba/artseek/blob/main/artseek/data/datasets/processing.py) in the official repository of our paper.

Since ColPali only supports text queries, and our goal was to enable **multimodal (image + text) queries**, we also propose a novel technique in our paper to extend the model’s capabilities to handle multimodal queries **without additional training**.

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

This dataset is suitable for research and development in multimodal retrieval, especially in retrieval-augmented generation (RAG) systems. It can be used to evaluate methods that require paired image-text information or test architectures for multimodal representation learning. The dataset supports tasks such as:

- Multimodal dense retrieval
- Multimodal pretraining and evaluation
- Document understanding (e.g., question answering over richly formatted content)
- Benchmarking multimodal in-context learning approaches

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

The dataset is not suitable for:

- Real-time systems requiring up-to-date information, as it is based on a static Wikipedia snapshot
- Legal, medical, or financial applications where factual accuracy and source traceability are critical
- Training or evaluating systems that treat the dataset as if it contains original or copyright-cleared media; users must respect the licensing of individual images
- Commercial use of the data without verifying licenses and complying with Wikipedia and Wikimedia Commons terms

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

Each data point is a **multimodal fragment** containing:

- `id`: Sequential identifier of the fragment across the whole dataset.
- `title`: Title of the corresponding Wikipedia page.
- `text`: Cleaned paragraph text with embedded links. Links and URLs are preserved for potential future use.
- `url`: URL of the corresponding Wikipedia page.
- `wiki_id`: Unique identifier of the Wikipedia page.
- `paragraph_id`: Sequential paragraph identifier within the corresponding page.
- `images`: A dictionary of parallel lists, one entry per associated image:
  - `caption`: List of captions for each associated image.
  - `image`: List of PIL image objects linked to the paragraph.
  - `type`: List of strings (`"infobox"` or `"thumb"`) indicating the image type.
  - `url`: List of internal URLs for accessing the images via the Kiwix dump.
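
Continuing the streaming sketch above, the fields can be accessed as follows. Note that the entries of `images` are parallel lists, so `caption[i]`, `image[i]`, `type[i]`, and `url[i]` all describe the i-th image:

```python
for fragment in ds.take(5):
    imgs = fragment["images"]
    for caption, image, img_type in zip(imgs["caption"], imgs["image"], imgs["type"]):
        # `image` is a PIL object; captions may be empty for some images.
        print(f"[{img_type}] {image.size}: {(caption or '')[:60]}")
    print(fragment["paragraph_id"], fragment["text"][:80])
```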

Currently, there are no predefined train/validation/test splits.  
Users may create custom splits based on page domains, topics, or other criteria.
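
As a minimal sketch, a crude topical subset can also be carved out with a streaming filter on the page title (the navbox-based selection described next is more principled):

```python
# Keep only fragments from pages whose title mentions painting (a crude topical filter).
painting_ds = ds.filter(lambda ex: "painting" in ex["title"].lower())
```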

For example, in the [ArtSeek code](https://github.com/cilabuniba/artseek), we automatically navigate Wikipedia *navboxes* up to 5 levels deep to select only pages containing fragments related to the visual arts domain. You can apply the same approach to your domain of interest using the `select_category_pages` function available in [this file](https://github.com/cilabuniba/artseek/blob/main/artseek/data/graph/wikipedia.py).

**Example:**

```python
# Collect pages reachable from a category's navboxes, up to 5 levels deep.
from artseek.data.graph.wikipedia import select_category_pages

select_category_pages("Category:Visual arts", 5)
```

## Dataset Statistics

- **Number of paragraphs:** 42,482,460
- **Number of paragraphs with at least one associated image:** 2,254,123
- **Total number of images:** 2,499,977
- **Average number of images per image-associated paragraph:** 1.109
- **Maximum number of images in a single paragraph:** 125

Most image-associated paragraphs contain only a single image; the number of such paragraphs falls off roughly in inverse proportion to the number of associated images.
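
These statistics can be spot-checked on a sample with a quick streaming pass, reusing the `ds` handle from the sketch above (a full pass over all ~42M rows is slow, and the first rows are not a random sample):

```python
from collections import Counter

# Histogram: number of associated images -> number of fragments, over a 100k sample.
hist = Counter(len(f["images"]["caption"]) for f in ds.take(100_000))
with_images = sum(count for n_imgs, count in hist.items() if n_imgs > 0)
print(f"{with_images:,} of 100,000 sampled fragments have at least one image")
```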

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

The dataset was created to provide a high-quality multimodal benchmark composed of Wikipedia's rich textual and visual information. It serves as a research resource for advancing multimodal retrieval and generative models by offering paragraph-image pairs grounded in encyclopedic knowledge.

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

All text and image content is sourced from the English Wikipedia and Wikimedia Commons via the Kiwix ZIM dump.

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

- Text was extracted using a modified version of [`wikiextractor`](https://github.com/attardi/wikiextractor), keeping internal links and paragraph ordering.
- Images were parsed from HTML infoboxes and thumbnail references, then downloaded using the Kiwix offline Wikipedia dump.
- Images were linked to the paragraph below them in the HTML structure.
- Captions were extracted from the HTML metadata.
- The final dataset was assembled by matching paragraphs and their corresponding images.

Paragraphs are sourced from the Wikipedia dump dated 2025-08-01.  
Images are sourced from the full Kiwix dump (ZIM file, January 2024).

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. -->

The text was authored by contributors to the English Wikipedia. Images were contributed by various users to Wikimedia Commons and are subject to individual licenses. No demographic or identity metadata is available for content creators.

### Annotations

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

There are no manual annotations beyond the original captions associated with images from Wikipedia pages.

#### Annotation process

N/A

#### Who are the annotators?

N/A

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private. -->

To the best of our knowledge, the dataset does not contain personal or sensitive information. Wikipedia is a public knowledge source with moderation and community standards aimed at excluding personal data. However, users are advised to verify content if used in sensitive contexts.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

As the dataset is derived from Wikipedia, it inherits potential biases found in Wikipedia articles, including:

- Coverage bias (overrepresentation of certain regions, topics, or demographics)
- Editorial bias (reflecting the views of more active editor groups)
- Visual bias (images may be selected or framed subjectively)

Additionally:

- Not all Wikipedia pages contain relevant or aligned images
- Image licenses may vary and require individual attribution or restrictions

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should be aware of and account for:

- The need to verify and respect licensing terms of individual images
- The inherited biases from Wikipedia contributors and editorial processes
- The fact that the dataset reflects a snapshot in time and is not updated in real-time
- Limitations in using this dataset for safety-critical or fact-sensitive applications

## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

```
@article{fanelli2025artseek,
  title={ArtSeek: Deep artwork understanding via multimodal in-context reasoning and late interaction retrieval},
  author={Fanelli, Nicola and Vessio, Gennaro and Castellano, Giovanna},
  journal={arXiv preprint arXiv:2507.21917},
  year={2025}
}
```

**APA:**

Fanelli, N., Vessio, G., & Castellano, G. (2025). ArtSeek: Deep artwork understanding via multimodal in-context reasoning and late interaction retrieval. arXiv preprint arXiv:2507.21917.

## Dataset Card Authors

Nicola Fanelli

## Dataset Card Contact

For questions, please contact: **[email protected]**