nicolafan committed
Commit 49a0dac · verified · 1 Parent(s): 46d0482

Update README.md

First dataset card draft introducing WikiFragments.

Files changed (1): README.md (+227 -0)

README.md CHANGED

@@ -34,4 +34,231 @@ configs:
  data_files:
  - split: train
  path: data/train-*
license: cc-by-sa-4.0
task_categories:
- text-generation
language:
- en
tags:
- retrieval-augmented-generation
- RAG
- multimodal
- vision-language
pretty_name: WikiFragments
size_categories:
- 10M<n<100M
---

# WikiFragments

<!-- Provide a quick summary of the dataset. -->

**WikiFragments** is a multimodal dataset built from [Wikipedia (en)](https://en.wikipedia.org/), consisting of cleaned textual paragraphs paired with related images (infobox and thumbnail) from the same page. Each paragraph, together with its images, forms a **multimodal fragment**: an atomic knowledge unit suited to information retrieval and multimodal research.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/62d011476a61a88ea0d16665/ixMJmaq85vj0JJ0IPbUuZ.png)

* Fragment with four images and captions

![image/png](https://cdn-uploads.huggingface.co/production/uploads/62d011476a61a88ea0d16665/XnUBlPulrMyK4stI1ScnF.png)

* Fragment with only text and no associated images

## Dataset Details

To construct this dataset, we modified the [wikiextractor](https://github.com/attardi/wikiextractor) tool to extract and clean paragraphs from every page in the English Wikipedia. We preserved hyperlinks in the text and, when available, retrieved images from infoboxes and thumbnails. Each image, together with its original caption, is associated with a paragraph according to the order in which it appears in the HTML source of the page.

Images are downloaded at their original resolution as rendered on the webpage, using the [Kiwix](https://kiwix.org/en/) full Wikipedia dump (ZIM file 2024-01).

We define a **multimodal fragment** as follows:

> A multimodal fragment is an atomic unit of information consisting of a paragraph from a Wikipedia page and all images that, in the page’s source code, appear above that paragraph.

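To make the grouping rule concrete, the sketch below shows one way fragments could be assembled from a page's HTML under this definition. It is an illustration only (a simplified walk over `<img>` and `<p>` tags), not the logic of our modified `wikiextractor`, which also handles infoboxes, thumbnails, and figure captions.

```python
# Illustrative sketch of the fragment definition above, not the actual extraction code.
from bs4 import BeautifulSoup


def split_into_fragments(page_html: str) -> list[dict]:
    """Group each paragraph with the images that appear above it in the page source."""
    soup = BeautifulSoup(page_html, "html.parser")
    fragments, pending_images = [], []
    # Walk the page in document order; images accumulate until a paragraph closes a fragment.
    for node in soup.find_all(["img", "p"]):
        if node.name == "img":
            pending_images.append({
                "url": node.get("src"),
                "caption": node.get("alt", ""),  # real captions come from figure markup, not `alt`
            })
        else:
            text = node.get_text(" ", strip=True)
            if text:
                fragments.append({"text": text, "images": pending_images})
                pending_images = []
    return fragments
```
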
### Dataset Description

Paragraphs are cleaned using the standard `wikiextractor` logic. For each paragraph, we store:

- The paragraph text
- The corresponding Wikipedia page name and URL
- The list of associated images as PIL objects
- The image URLs and captions
- The sequential index of the paragraph within the page

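As a sketch of how these fields might be accessed, the snippet below streams the dataset with 🤗 `datasets`. The repository id and the column names are assumptions chosen to mirror the list above; check them against the dataset viewer before relying on them.

```python
# Minimal sketch; the dataset id and field names below are assumed, not guaranteed.
from datasets import load_dataset

ds = load_dataset("nicolafan/wikifragments", split="train", streaming=True)

fragment = next(iter(ds))                      # one multimodal fragment
print(fragment.get("text"))                    # paragraph text
print(fragment.get("page_title"), fragment.get("page_url"))
print(fragment.get("paragraph_index"))         # position of the paragraph in its page
for image, url, caption in zip(fragment.get("images", []),
                               fragment.get("image_urls", []),
                               fragment.get("captions", [])):
    print(type(image), url, caption)           # PIL image, source URL, original caption
```
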
- **Curated by:** Nicola Fanelli (PhD Student @ University of Bari Aldo Moro, Italy)
- **Language(s) (NLP):** English
<!-- - **License:** MIT -->
<!-- - **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed] -->

### License

- **Code**: MIT License (see `LICENSE` file).
- **Text Data**: The Wikipedia text is licensed under [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/). When using this dataset, you must provide proper attribution to Wikipedia and its contributors and share any derivatives under the same license.
- **Images**: Images are sourced from Wikipedia and Wikimedia Commons. Each image is subject to its own license, which is typically indicated on its original page. Users of this dataset are responsible for ensuring they comply with the licensing terms of individual images.

### Dataset Sources

All content originates from [Wikipedia (en)](https://en.wikipedia.org/). Any use of this dataset must comply with Wikipedia’s copyright policies.

- **Repository:** [`wikiextractor` fork](https://github.com/nicolafan/wikiextractor)
- **Paper:** [ArtSeek: Deep artwork understanding via multimodal in-context reasoning and late interaction retrieval](https://arxiv.org/abs/2507.21917)
<!-- - **Demo [optional]:** [More Information Needed] -->

## Uses

This dataset is designed for use in **retrieval tasks**, particularly in retrieval-augmented generation (RAG), to provide relevant multimodal context for answering questions.

In our [paper](https://arxiv.org/abs/2507.21917), we generate visual representations of each multimodal fragment: images resembling a rendered PDF page, with the paragraph at the bottom, the images at the top, and the captions aligned to the right. These are then encoded into multi-vector multimodal representations using [ColPali](https://arxiv.org/abs/2407.01449).

The code for generating these multimodal fragment images (such as the ones shown in the examples above) is available [here](https://github.com/cilabuniba/artseek/blob/main/artseek/data/datasets/processing.py) in the official repository of our paper.

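The layout is simple enough to sketch. The snippet below is a rough, illustrative rendering of a fragment with Pillow (images stacked at the top, captions beside them, paragraph at the bottom); it is not the official processing code linked above, and sizes and fonts are arbitrary.

```python
# Rough layout sketch with Pillow; not the official fragment-rendering code.
import textwrap

from PIL import Image, ImageDraw


def render_fragment(paragraph: str, images: list[Image.Image], captions: list[str],
                    width: int = 800, height: int = 1200) -> Image.Image:
    canvas = Image.new("RGB", (width, height), "white")  # fixed size for simplicity
    draw = ImageDraw.Draw(canvas)
    y = 20
    # Images at the top, each with its caption to the right.
    for image, caption in zip(images, captions):
        thumb = image.copy()
        thumb.thumbnail((300, 300))
        canvas.paste(thumb, (20, y))
        draw.multiline_text((340, y), textwrap.fill(caption, width=50), fill="black")
        y += thumb.height + 20
    # Paragraph text in the lower part of the page-like canvas.
    draw.multiline_text((20, y + 40), textwrap.fill(paragraph, width=90), fill="black")
    return canvas
```
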
Since ColPali only supports text queries, and our goal was to enable **multimodal (image + text) queries**, we also propose a novel technique in our paper to extend the model’s capabilities to handle multimodal queries **without additional training**.

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

This dataset is suitable for research and development in multimodal retrieval, especially in retrieval-augmented generation (RAG) systems. It can be used to evaluate methods that require paired image-text information or to test architectures for multimodal representation learning. The dataset supports tasks such as:

- Multimodal dense retrieval
- Multimodal pretraining and evaluation
- Document understanding (e.g., question answering over richly formatted content)
- Benchmarking multimodal in-context learning approaches

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

The dataset is not suitable for:

- Real-time systems requiring up-to-date information, as it is based on a static Wikipedia snapshot
- Legal, medical, or financial applications where factual accuracy and source traceability are critical
- Training or evaluating systems that treat the dataset as if it contains original or copyright-cleared media; users must respect the licensing of individual images
- Commercial use of the data without verifying licenses and complying with Wikipedia and Wikimedia Commons terms

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

Each data point is a **multimodal fragment** that pairs a cleaned paragraph with the images and captions associated with it, as described above.

<!-- - `page_title`: Title of the Wikipedia page
- `paragraph_index`: Index of the paragraph within the page
- `paragraph_text`: The cleaned paragraph
- `paragraph_url`: The URL to the corresponding Wikipedia page
- `images`: List of image objects (PIL format)
- `image_urls`: URLs to the original images (hosted on Wikimedia)
- `captions`: Captions associated with each image -->

There are currently no pre-defined train/validation/test splits. Users can define custom splits based on page domains or topics, as in the sketch below.

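For example, splits can be derived with standard `datasets` utilities. The snippet below is a hedged sketch: the dataset id and the `page_title` field name are assumptions, and a full (non-streaming) load of tens of millions of rows is memory- and disk-intensive.

```python
# Sketch of user-defined splits; dataset id and column names are assumed.
from datasets import load_dataset

ds = load_dataset("nicolafan/wikifragments", split="train")  # heavy: tens of millions of rows

# A random 95/5 train/validation split.
splits = ds.train_test_split(test_size=0.05, seed=42)
train_ds, val_ds = splits["train"], splits["test"]

# Or a topic-style held-out set, e.g. all fragments from pages whose title starts with "Art".
held_out = ds.filter(lambda ex: ex["page_title"].startswith("Art"))
```
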
## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

The dataset was created to provide a high-quality multimodal resource built from Wikipedia's rich textual and visual information. It supports research on multimodal retrieval and generative models by offering paragraph-image pairs grounded in encyclopedic knowledge.

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

All text and image content is sourced from the English Wikipedia and Wikimedia Commons via the Kiwix ZIM dump.

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

- Text was extracted using a modified version of [`wikiextractor`](https://github.com/attardi/wikiextractor), keeping internal links and paragraph ordering.
- Images were parsed from HTML infoboxes and thumbnail references, then downloaded using the Kiwix offline Wikipedia dump.
- Images were linked to the paragraph below them in the HTML structure.
- Captions were extracted from the HTML metadata.
- The final dataset was assembled by matching paragraphs and their corresponding images.

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. -->

The text was authored by contributors to the English Wikipedia. Images were contributed by various users to Wikimedia Commons and are subject to individual licenses. No demographic or identity metadata is available for content creators.

### Annotations

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

There are no manual annotations beyond the original captions associated with images from Wikipedia pages.

#### Annotation process

N/A

#### Who are the annotators?

N/A

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private. -->

To the best of our knowledge, the dataset does not contain personal or sensitive information. Wikipedia is a public knowledge source with moderation and community standards aimed at excluding personal data. However, users are advised to verify content if it is used in sensitive contexts.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

As the dataset is derived from Wikipedia, it inherits potential biases found in Wikipedia articles, including:

- Coverage bias (overrepresentation of certain regions, topics, or demographics)
- Editorial bias (reflecting the views of more active editor groups)
- Visual bias (images may be selected or framed subjectively)

Additionally:

- Not all Wikipedia pages contain relevant or aligned images
- Image licenses vary and may carry individual attribution requirements or usage restrictions

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should be aware of and account for:

- The need to verify and respect the licensing terms of individual images
- The biases inherited from Wikipedia contributors and editorial processes
- The fact that the dataset reflects a snapshot in time and is not updated in real time
- Limitations in using this dataset for safety-critical or fact-sensitive applications

## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

```bibtex
@article{fanelli2025artseek,
  title={ArtSeek: Deep artwork understanding via multimodal in-context reasoning and late interaction retrieval},
  author={Fanelli, Nicola and Vessio, Gennaro and Castellano, Giovanna},
  journal={arXiv preprint arXiv:2507.21917},
  year={2025}
}
```

**APA:**

Fanelli, N., Vessio, G., & Castellano, G. (2025). ArtSeek: Deep artwork understanding via multimodal in-context reasoning and late interaction retrieval. arXiv preprint arXiv:2507.21917.

## Glossary

<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->

[More Information Needed]

## More Information

[More Information Needed]

## Dataset Card Authors

Nicola Fanelli

## Dataset Card Contact

For questions, please contact: **[email protected]**