parquet-converter committed on
Commit 0c94abc · verified · 1 Parent(s): 5c410ef

Update parquet files

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. README.md +0 -223
  2. agg_score_plot.svg +0 -1686
  3. arb_Arab/{000_00000.parquet → train/0000.parquet} +0 -0
  4. arb_Arab/{000_00001.parquet → train/0001.parquet} +0 -0
  5. arb_Arab/{000_00002.parquet → train/0002.parquet} +0 -0
  6. arb_Arab/{000_00003.parquet → train/0003.parquet} +0 -0
  7. arb_Arab/{000_00004.parquet → train/0004.parquet} +0 -0
  8. arb_Arab/{000_00005.parquet → train/0005.parquet} +0 -0
  9. arb_Arab/{000_00006.parquet → train/0006.parquet} +0 -0
  10. arb_Arab/{000_00007.parquet → train/0007.parquet} +0 -0
  11. arb_Arab/{000_00008.parquet → train/0008.parquet} +0 -0
  12. arb_Arab/{000_00009.parquet → train/0009.parquet} +0 -0
  13. arb_Arab/{000_00010.parquet → train/0010.parquet} +0 -0
  14. arb_Arab/{000_00011.parquet → train/0011.parquet} +0 -0
  15. arb_Arab/{000_00012.parquet → train/0012.parquet} +0 -0
  16. arb_Arab/{000_00013.parquet → train/0013.parquet} +0 -0
  17. arb_Arab/{000_00014.parquet → train/0014.parquet} +0 -0
  18. arb_Arab/{000_00015.parquet → train/0015.parquet} +0 -0
  19. arb_Arab/{000_00016.parquet → train/0016.parquet} +0 -0
  20. arb_Arab/{000_00017.parquet → train/0017.parquet} +0 -0
  21. arb_Arab/{000_00018.parquet → train/0018.parquet} +0 -0
  22. arb_Arab/{000_00019.parquet → train/0019.parquet} +0 -0
  23. arb_Arab/{000_00020.parquet → train/0020.parquet} +0 -0
  24. arb_Arab/{000_00021.parquet → train/0021.parquet} +0 -0
  25. arb_Arab/{000_00022.parquet → train/0022.parquet} +0 -0
  26. arb_Arab/{000_00023.parquet → train/0023.parquet} +0 -0
  27. arb_Arab/{000_00024.parquet → train/0024.parquet} +0 -0
  28. arb_Arab/{000_00025.parquet → train/0025.parquet} +0 -0
  29. arb_Arab/{000_00026.parquet → train/0026.parquet} +0 -0
  30. arb_Arab/{000_00027.parquet → train/0027.parquet} +0 -0
  31. arb_Arab/{000_00028.parquet → train/0028.parquet} +0 -0
  32. arb_Arab/{000_00029.parquet → train/0029.parquet} +0 -0
  33. arb_Arab/{000_00030.parquet → train/0030.parquet} +0 -0
  34. arb_Arab/{000_00031.parquet → train/0031.parquet} +0 -0
  35. arb_Arab/{000_00032.parquet → train/0032.parquet} +0 -0
  36. arb_Arab/{000_00033.parquet → train/0033.parquet} +0 -0
  37. arb_Arab/{000_00034.parquet → train/0034.parquet} +0 -0
  38. arb_Arab/{000_00035.parquet → train/0035.parquet} +0 -0
  39. arb_Arab/{000_00036.parquet → train/0036.parquet} +0 -0
  40. arb_Arab/{000_00037.parquet → train/0037.parquet} +0 -0
  41. arb_Arab/{000_00038.parquet → train/0038.parquet} +0 -0
  42. arb_Arab/{000_00039.parquet → train/0039.parquet} +0 -0
  43. arb_Arab/{000_00040.parquet → train/0040.parquet} +0 -0
  44. arb_Arab/{000_00041.parquet → train/0041.parquet} +0 -0
  45. arb_Arab/{000_00042.parquet → train/0042.parquet} +0 -0
  46. arb_Arab/{000_00043.parquet → train/0043.parquet} +0 -0
  47. arb_Arab/{000_00044.parquet → train/0044.parquet} +0 -0
  48. arb_Arab/{000_00045.parquet → train/0045.parquet} +0 -0
  49. arb_Arab/{000_00046.parquet → train/0046.parquet} +0 -0
  50. arb_Arab/{000_00047.parquet → train/0047.parquet} +0 -0
README.md DELETED
@@ -1,223 +0,0 @@
- ---
- task_categories:
- - text-generation
- language:
- - ru
- - zh
- - de
- - ja
- - es
- - fr
- - it
- - pt
- - pl
- - nl
- - id
- - tr
- - cs
- - vi
- - sv
- - fa
- - ar
- - el
- - da
- - hu
- pretty_name: FineWeb2-HQ
- configs:
- - config_name: rus_Cyrl
-   data_files:
-   - split: train
-     path: rus_Cyrl/*
- - config_name: cmn_Hani
-   data_files:
-   - split: train
-     path: cmn_Hani/*
- - config_name: deu_Latn
-   data_files:
-   - split: train
-     path: deu_Latn/*
- - config_name: jpn_Jpan
-   data_files:
-   - split: train
-     path: jpn_Jpan/*
- - config_name: spa_Latn
-   data_files:
-   - split: train
-     path: spa_Latn/*
- - config_name: fra_Latn
-   data_files:
-   - split: train
-     path: fra_Latn/*
- - config_name: ita_Latn
-   data_files:
-   - split: train
-     path: ita_Latn/*
- - config_name: por_Latn
-   data_files:
-   - split: train
-     path: por_Latn/*
- - config_name: pol_Latn
-   data_files:
-   - split: train
-     path: pol_Latn/*
- - config_name: nld_Latn
-   data_files:
-   - split: train
-     path: nld_Latn/*
- - config_name: ind_Latn
-   data_files:
-   - split: train
-     path: ind_Latn/*
- - config_name: tur_Latn
-   data_files:
-   - split: train
-     path: tur_Latn/*
- - config_name: ces_Latn
-   data_files:
-   - split: train
-     path: ces_Latn/*
- - config_name: vie_Latn
-   data_files:
-   - split: train
-     path: vie_Latn/*
- - config_name: swe_Latn
-   data_files:
-   - split: train
-     path: swe_Latn/*
- - config_name: fas_Arab
-   data_files:
-   - split: train
-     path: fas_Arab/*
- - config_name: arb_Arab
-   data_files:
-   - split: train
-     path: arb_Arab/*
- - config_name: ell_Grek
-   data_files:
-   - split: train
-     path: ell_Grek/*
- - config_name: dan_Latn
-   data_files:
-   - split: train
-     path: dan_Latn/*
- - config_name: hun_Latn
-   data_files:
-   - split: train
-     path: hun_Latn/*
- size_categories:
- - 100M<n<1B
- license: odc-by
- ---
- # FineWeb2-HQ
-
- ## Dataset summary
-
- FineWeb2-HQ is a **high-quality, model-filtered pretraining dataset** derived from [**FineWeb2**](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2), spanning **20 languages**. It was created by selecting the **top 10% of FineWeb2 documents by quality** in each language, based on scores assigned by a deep learning classifier trained to identify **structured and knowledge-rich samples** using [**XLM-RoBERTa**](https://huggingface.co/FacebookAI/xlm-roberta-base) **embeddings**.
-
- <center>
- <img src="https://huggingface.co/datasets/epfml/FineWeb2-HQ/raw/main/agg_score_plot.svg" style="width: 70%;" />
- </center>
-
- Validation was performed by pretraining **1B-parameter LLMs** (Llama-like architecture) across multiple languages and writing systems (scripts). Evaluations on **CMMLU (Chinese) and MMLU (German & French)** demonstrate that **FineWeb2-HQ matches FineWeb2 performance early in training with 6x fewer tokens, and outperforms it when fully trained**. Additionally, **improvements were observed across other benchmarks**, such as outperforming [DCLM](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0-parquet) and [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) in English.
-
- For more details, see our paper [Enhancing Multilingual LLM Pretraining with Model-Based Data Selection](https://arxiv.org/abs/2502.10361).
-
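To make the selection rule above concrete: the released `quality_score` field can be thresholded at a percentile. The sketch below only illustrates that rule and is not the original filtering pipeline; the config name, sample size, and 90% cutoff are arbitrary choices for the example.

```python
# Illustrative sketch: percentile-based selection on the released quality scores.
# Note: FineWeb2-HQ is already the top ~10% of FineWeb2 by score, so this merely
# re-applies the same rule to a small streamed sample for demonstration.
import numpy as np
from datasets import load_dataset

sample = list(
    load_dataset("epfml/FineWeb2-HQ", "deu_Latn", split="train", streaming=True).take(1_000)
)
scores = np.array([doc["quality_score"] for doc in sample])
threshold = np.quantile(scores, 0.90)  # 90th-percentile cutoff (assumed for the sketch)
kept = [doc for doc in sample if doc["quality_score"] >= threshold]
print(f"kept {len(kept)} of {len(sample)} sampled documents")
```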
- ## Key features
-
- - **High-quality selection**: Top 10% of FineWeb2 documents by quality
- - **Multilingual coverage**: 20 languages, ensuring diverse linguistic representation
- - **Model-based filtering**: Uses an XLM-RoBERTa embedding-based classifier to score documents
- - **Enhanced benchmark performance**: Surpasses FineWeb2 on downstream benchmarks
- - **Fully open**: Emphasis on transparency
-
- ## Languages and subsets
-
- |Subset name|Language name|Number of documents|Disk size|
- |----------|-----------------|------------:|----------:|
- | rus_Cyrl | Russian | 55,220,956 | 1.2T |
- | cmn_Hani | Chinese | 54,211,986 | 784G |
- | deu_Latn | German | 43,095,728 | 618G |
- | spa_Latn | Spanish | 40,057,637 | 515G |
- | jpn_Jpan | Japanese | 34,185,427 | 393G |
- | fra_Latn | French | 32,248,772 | 483G |
- | ita_Latn | Italian | 21,180,304 | 269G |
- | por_Latn | Portuguese | 18,135,468 | 222G |
- | pol_Latn | Polish | 13,384,885 | 168G |
- | nld_Latn | Dutch | 12,920,963 | 160G |
- | ind_Latn | Indonesian | 8,911,149 | 125G |
- | tur_Latn | Turkish | 8,578,808 | 100G |
- | ces_Latn | Czech | 5,995,459 | 104G |
- | arb_Arab | Arabic | 5,560,599 | 94G |
- | fas_Arab | Persian | 5,107,187 | 69G |
- | hun_Latn | Hungarian | 4,527,332 | 79G |
- | swe_Latn | Swedish | 4,382,454 | 61G |
- | ell_Grek | Greek | 4,346,440 | 84G |
- | dan_Latn | Danish | 4,082,751 | 61G |
- | vie_Latn | Vietnamese | 4,003,956 | 59G |
-
- The approach described in the paper extends readily to other languages, and we may add new languages in a future version of this dataset.
-
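The subset names above double as config names on the Hub. If you prefer to enumerate them programmatically instead of copying them from the table, something along these lines should work; the exact list returned depends on the repository's current layout.

```python
# Sketch: list the available language configs of the dataset repository.
from datasets import get_dataset_config_names

configs = get_dataset_config_names("epfml/FineWeb2-HQ")
print(sorted(configs))  # e.g. ["arb_Arab", "ces_Latn", ...], depending on the repo layout
```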
- ## Dataset structure
-
- ### Data fields
-
- Each data entry includes the original [FineWeb2 data fields](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2#data-fields) with the addition of:
- - `quality_score`: quality score assigned by the quality classifier
- - `embeddings`: array of float arrays containing a 768-dimensional XLM-RoBERTa embedding for every 512-token chunk of the tokenized text (see the pooling sketch below)
-
- ### Data instance
-
- ```json
- {
-   "id": "<urn:uuid:f26003c7-6084-4791-b3fe-240eedc37e76>",
-   "text": "Plutonium ist einer der gefährlichsten Stoffe der Welt. Es entsteht als hochgiftiges und radioaktives Nebenprodukt der Energiegewinnung in Atomkraftwerken. Wer nur ein Millionstel Gramm – ein kaum staubkorngroßes Teilchen – der Substanz einatmet, kann daran sterben. In der Natur kommt der Stoff nur in geringsten Mengen vor, wird aber künstlich hergestellt, weil man damit Bomben bauen kann. Je nach Reinheitsgrad reichen für eine Atombombe bereits fünf Kilogramm. Bis zum Beginn der achtziger Jahre des letzten Jahrhunderts hatten die Reaktoren weltweit bereits rund 300.000 Kilogramm erbrütet. Jährlich kommen etwa 20.000 Kilo hinzu. Genau dieser Stoff wird zu Land und zu Wasser um den ganzen Erdball herum transportiert. Legendär sind die Castor-Transporte, bei denen unter strengsten Sicherheitsvorkehrungen und entsprechenden Kosten abgebrannte Brennelemente aus deutschen Kernkraftwerken zur Wiederaufbereitung nach La Hague (Frankreich) oder Sellafield (Großbritannien) gebracht werden. Erst vergangenen Mai hat ein Frachter die größte Menge wiederaufbereiteten Mülls aller Zeiten von Frankreich nach Japan gebracht. Nicht auszudenken, was ein Unfall auf See bedeuten würde.",
-   "date": "2014-03-16T08:53:38Z",
-   "dump": "CC-MAIN-2014-10",
-   "embeddings": [[ ... ]],
-   "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678702159/warc/CC-MAIN-20140313024502-00039-ip-10-183-142-35.ec2.internal.warc.gz",
-   "language": "deu",
-   "language_score": 0.9983288645744324,
-   "language_script": "Latn",
-   "minhash_cluster_size": 2,
-   "top_langs": {"deu_Latn_score": 0.9983288645744324},
-   "url": "http://www.greenpeace.org/austria/de/themen/atom/probleme/atomtransporte/",
-   "quality_score": 0.06472613662481308
- }
- ```
-
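Because `embeddings` stores one 768-dimensional vector per 512-token chunk, a single document-level vector is often convenient downstream. Below is a minimal mean-pooling sketch; the helper name is hypothetical, and mean pooling is just one reasonable choice rather than anything prescribed by the dataset.

```python
# Sketch: collapse a document's per-chunk XLM-RoBERTa embeddings into one vector
# by mean pooling. `document_vector` is a hypothetical helper for illustration.
import numpy as np

def document_vector(doc: dict) -> np.ndarray:
    chunks = np.asarray(doc["embeddings"], dtype=np.float32)  # shape: (num_chunks, 768)
    return chunks.mean(axis=0)                                # shape: (768,)
```

Applied to a loaded row, e.g. `document_vector(dataset[0])`, it yields a fixed-size vector regardless of document length.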
- ## Usage
-
- You can load the dataset in Python using `datasets`:
-
- ```python
- from datasets import load_dataset
-
- dataset = load_dataset("epfml/FineWeb2-HQ", "deu_Latn")
- ```
-
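Given the per-subset sizes listed above, downloading a full config can be heavy. With a reasonably recent `datasets` release, the standard streaming mode offers a lighter-weight way to inspect the data, as in this sketch:

```python
# Sketch: stream a config instead of downloading it in full.
from datasets import load_dataset

stream = load_dataset("epfml/FineWeb2-HQ", "deu_Latn", split="train", streaming=True)
for doc in stream.take(3):  # look at a few documents without a full download
    print(doc["url"], doc["quality_score"])
```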
- ## Licensing information
-
- Like FineWeb2, this dataset is released under the [Open Data Commons Attribution License (ODC-By) v1.0](https://opendatacommons.org/licenses/by/1-0/) and is subject to [CommonCrawl's Terms of Use](https://commoncrawl.org/terms-of-use).
-
- ## Dataset origin
- Being a subset of FineWeb2, this data covers websites over the 2013-2024 time period.
-
- Because FineWeb2 is sourced from the internet at large, it is very likely that some personally identifiable information (PII) is present, even though the FineWeb2 processing has already anonymized email addresses and public IP addresses. If you find your own PII and would like it removed, please fill out the [FineWeb2 PII removal/opt out form](https://forms.gle/VyNT3ZAUPZjPuWp39).
-
- CommonCrawl respects robots.txt at crawl time, but if you are a webmaster and find your website in FineWeb2 and would like to have it removed, you may also use the [FineWeb2 PII removal/opt out form](https://forms.gle/VyNT3ZAUPZjPuWp39).
-
- ## Considerations for Using the Data
- Before using this dataset for training models, we recommend performing additional filtering for sensitive content such as PII or harmful content.
- For discussion of social impact, biases, and known limitations, we also refer to the [FineWeb2 documentation](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2).
-
- ## Citation information
- If you use this dataset in your research or applications, please use the following citation:
- ```
- @article{messmer2025multilingdatacomp,
-   title={Enhancing Multilingual LLM Pretraining with Model-Based Data Selection},
-   author={Bettina Messmer and Vinko Sabolčec and Martin Jaggi},
-   journal={arXiv},
-   year={2025},
-   url={https://arxiv.org/abs/2502.10361},
- }
- ```
agg_score_plot.svg DELETED
arb_Arab/{000_00000.parquet → train/0000.parquet} RENAMED
File without changes
arb_Arab/{000_00001.parquet → train/0001.parquet} RENAMED
File without changes
arb_Arab/{000_00002.parquet → train/0002.parquet} RENAMED
File without changes
arb_Arab/{000_00003.parquet → train/0003.parquet} RENAMED
File without changes
arb_Arab/{000_00004.parquet → train/0004.parquet} RENAMED
File without changes
arb_Arab/{000_00005.parquet → train/0005.parquet} RENAMED
File without changes
arb_Arab/{000_00006.parquet → train/0006.parquet} RENAMED
File without changes
arb_Arab/{000_00007.parquet → train/0007.parquet} RENAMED
File without changes
arb_Arab/{000_00008.parquet → train/0008.parquet} RENAMED
File without changes
arb_Arab/{000_00009.parquet → train/0009.parquet} RENAMED
File without changes
arb_Arab/{000_00010.parquet → train/0010.parquet} RENAMED
File without changes
arb_Arab/{000_00011.parquet → train/0011.parquet} RENAMED
File without changes
arb_Arab/{000_00012.parquet → train/0012.parquet} RENAMED
File without changes
arb_Arab/{000_00013.parquet → train/0013.parquet} RENAMED
File without changes
arb_Arab/{000_00014.parquet → train/0014.parquet} RENAMED
File without changes
arb_Arab/{000_00015.parquet → train/0015.parquet} RENAMED
File without changes
arb_Arab/{000_00016.parquet → train/0016.parquet} RENAMED
File without changes
arb_Arab/{000_00017.parquet → train/0017.parquet} RENAMED
File without changes
arb_Arab/{000_00018.parquet → train/0018.parquet} RENAMED
File without changes
arb_Arab/{000_00019.parquet → train/0019.parquet} RENAMED
File without changes
arb_Arab/{000_00020.parquet → train/0020.parquet} RENAMED
File without changes
arb_Arab/{000_00021.parquet → train/0021.parquet} RENAMED
File without changes
arb_Arab/{000_00022.parquet → train/0022.parquet} RENAMED
File without changes
arb_Arab/{000_00023.parquet → train/0023.parquet} RENAMED
File without changes
arb_Arab/{000_00024.parquet → train/0024.parquet} RENAMED
File without changes
arb_Arab/{000_00025.parquet → train/0025.parquet} RENAMED
File without changes
arb_Arab/{000_00026.parquet → train/0026.parquet} RENAMED
File without changes
arb_Arab/{000_00027.parquet → train/0027.parquet} RENAMED
File without changes
arb_Arab/{000_00028.parquet → train/0028.parquet} RENAMED
File without changes
arb_Arab/{000_00029.parquet → train/0029.parquet} RENAMED
File without changes
arb_Arab/{000_00030.parquet → train/0030.parquet} RENAMED
File without changes
arb_Arab/{000_00031.parquet → train/0031.parquet} RENAMED
File without changes
arb_Arab/{000_00032.parquet → train/0032.parquet} RENAMED
File without changes
arb_Arab/{000_00033.parquet → train/0033.parquet} RENAMED
File without changes
arb_Arab/{000_00034.parquet → train/0034.parquet} RENAMED
File without changes
arb_Arab/{000_00035.parquet → train/0035.parquet} RENAMED
File without changes
arb_Arab/{000_00036.parquet → train/0036.parquet} RENAMED
File without changes
arb_Arab/{000_00037.parquet → train/0037.parquet} RENAMED
File without changes
arb_Arab/{000_00038.parquet → train/0038.parquet} RENAMED
File without changes
arb_Arab/{000_00039.parquet → train/0039.parquet} RENAMED
File without changes
arb_Arab/{000_00040.parquet → train/0040.parquet} RENAMED
File without changes
arb_Arab/{000_00041.parquet → train/0041.parquet} RENAMED
File without changes
arb_Arab/{000_00042.parquet → train/0042.parquet} RENAMED
File without changes
arb_Arab/{000_00043.parquet → train/0043.parquet} RENAMED
File without changes
arb_Arab/{000_00044.parquet → train/0044.parquet} RENAMED
File without changes
arb_Arab/{000_00045.parquet → train/0045.parquet} RENAMED
File without changes
arb_Arab/{000_00046.parquet → train/0046.parquet} RENAMED
File without changes
arb_Arab/{000_00047.parquet → train/0047.parquet} RENAMED
File without changes