# DaruLM dataset for LLM adaptation

## Description
A growing collection of Russian texts from various domains for LLM adaptation, extracted from other Hugging Face datasets and open resources.

Usage of this dataset is permitted for scientific purposes on a non-commercial basis only.

Credits: the initial datasets were provided by Ilya Gusev.

NOTICE: Some domain splits are based on vocabulary statistics and may be noisy.
Current domains (values accepted by the `domains` argument of `load_dataset`):
```
accounting | antique    | aphorisms  | art
biography  | biology    | buriy      | business
cinema     | computers  | design     | dramaturgy
economics  | enwiki     | essay      | fantasy
gazeta     | geography  | guidebooks | habr
history    | humor      | language   | law
lenta      | literature | medicine   | military
music      | ods-tass   | philosophy | pikabu
politic    | prose      | psychology | reference
religion   | science    | sociology  | taiga-fontanka
textbook   | wiki       | UNDEFINED
```
## Usage

Prerequisites:

```shell
pip install datasets zstandard jsonlines pysimdjson
```
Dataset iteration:

```python
import datasets

# Load the habr and textbook domains in streaming mode
for example in datasets.load_dataset('dichspace/darulm',
                                     domains=["habr", "textbook"],
                                     split="train", streaming=True):
    print(example.keys())
    print(example)
    break
```
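Because the collection is large and growing, streaming mode lets you sample a bounded number of examples without downloading everything. A common pattern is `itertools.islice` over the streaming iterable. A minimal sketch (the `fake_stream` generator below stands in for the real `datasets.load_dataset(..., streaming=True)` iterable, and the `domain`/`text` field names are assumptions, not the dataset's documented schema):

```python
from itertools import islice

def fake_stream():
    # Stand-in for datasets.load_dataset('dichspace/darulm', ..., streaming=True);
    # yields dicts shaped like streamed examples (field names assumed).
    for i in range(1000):
        yield {"domain": "habr", "text": f"document {i}"}

# Take a bounded sample from the (potentially huge) stream
# without exhausting or materializing it.
sample = list(islice(fake_stream(), 5))
print(len(sample))          # 5
print(sample[0]["domain"])  # habr
```

With the real dataset, replace `fake_stream()` with the `load_dataset` call shown above; the rest of the pattern is unchanged.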