how to load the entire dataset

#8
by PaulLerner - opened

To avoid loading the entire dataset at once, I loaded every subset in streaming mode, calling datasets.load_dataset with streaming=True for each language-script pair mentioned in the README. However, simply doing that takes a few hours, and it seems to take the same amount of time regardless of the size of the subset (e.g. some languages have only a single document).

Is there a better way to do it?

Note that I have access to Jean Zay, so I'm loading straight from $DSDIR/HuggingFace/HuggingFaceFW/fineweb-2:

from datasets import load_dataset, interleave_datasets

seed = 0
subsets = []
# a sample of languages; in reality there are 1894 language-script pairs
languages = [
    "cmn_Hani",
    "deu_Latn",
    "jpn_Jpan",
    "spa_Latn",
    "fra_Latn",
    "ita_Latn",
    "por_Latn",
]
for name in languages:
    # streaming avoids downloading the data, but load_dataset still has to
    # resolve each config, and this loop is where the hours go
    subset = load_dataset(
        "/lustre/fsmisc/dataset/HuggingFace/HuggingFaceFW/fineweb-2",
        name=name,
        split="train",
        streaming=True,
    )
    subsets.append(subset)
multilingual_dataset = interleave_datasets(subsets, seed=seed)
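For context on what the final call does: with its default "first_exhausted" stopping strategy and no sampling probabilities, interleave_datasets alternates examples across the subsets in round-robin order and stops once the smallest subset runs out. A pure-Python sketch of that behaviour (the helper name is mine, not from the datasets library):

```python
def round_robin_first_exhausted(*iterables):
    """Yield one item from each iterable in turn, stopping as soon as
    any iterable is exhausted (sketch of the default interleave behaviour)."""
    iterators = [iter(it) for it in iterables]
    while True:
        batch = []
        for it in iterators:
            try:
                batch.append(next(it))
            except StopIteration:
                # one subset ran out: stop the whole interleaved stream
                return
        yield from batch

# a subset with a single document (like some fineweb-2 languages)
# caps the interleaved stream at one round
print(list(round_robin_first_exhausted(["a1", "a2"], ["b1", "b2", "b3"])))
# → ['a1', 'b1', 'a2', 'b2']
```

This is why subsets with very few documents (a single one, in some languages) sharply limit how much of the larger subsets an interleaved stream will ever yield; passing stopping_strategy="all_exhausted" or per-subset probabilities to interleave_datasets changes that trade-off.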
