# Nepali LLM Datasets

This repository contains two configurations of Nepali LLM datasets:

## Configurations

### 1. Scrapy Engine

- Description: Contains data collected using a web-scraping engine.
- Files: [List any specific files or formats]

### 2. Nepberta
- Description: This dataset is derived from the Nepberta project and contains its cleaned data. The cleaned text of all articles is concatenated into a single giant string, with each article ending in `<|endoftext|>`; this long string is then segmented into chunks of approximately 500 MB each (a sketch of the scheme follows this list).
- Files: 23 files, each ~500 MB (chunk_1.txt, chunk_2.txt, ..., chunk_23.txt)
- Splits:
  - train: chunk_1.txt to chunk_18.txt
  - test: chunk_19.txt to chunk_23.txt
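For illustration, here is a minimal sketch of the chunking scheme described above, assuming an in-memory list of article strings; `write_chunks`, `articles`, and the output paths are hypothetical, not the actual Nepberta pipeline:

```python
# Hypothetical sketch of the chunking scheme described above.
CHUNK_SIZE = 500 * 1024 * 1024  # ~500 MB (sliced by characters here; byte sizes will differ)

def write_chunks(articles, out_dir="."):
    # Each article ends in <|endoftext|>, per the dataset description.
    giant = "".join(article + "<|endoftext|>" for article in articles)
    for i in range(0, len(giant), CHUNK_SIZE):
        path = f"{out_dir}/chunk_{i // CHUNK_SIZE + 1}.txt"
        with open(path, "w", encoding="utf-8") as f:
            f.write(giant[i : i + CHUNK_SIZE])
```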
## Usage

To load the datasets:
```python
from datasets import load_dataset

# Load the nepberta configuration; this downloads the entire split first.
nepberta_train = load_dataset(
    "Aananda-giri/nepali_llm_datasets", name="nepberta", split="train"
)

len(nepberta_train["text"])     # 18: number of chunks in the train split
len(nepberta_train["text"][0])  # length of one ~500 MB text chunk
```

Use `streaming=True` to avoid downloading the entire dataset:

```python
nepberta_train = load_dataset(
    "Aananda-giri/nepali_llm_datasets", name="nepberta", streaming=True
)["train"]

# Get a single chunk.
next(iter(nepberta_train))

# Or iterate over all chunks.
for large_chunk in nepberta_train:
    pass  # process large_chunk['text'] here
```
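Since each article ends in `<|endoftext|>`, the articles in a streamed chunk can be recovered by splitting on that separator. A minimal sketch; note that fixed-size slicing means an article may straddle two adjacent chunks:

```python
from datasets import load_dataset

nepberta_train = load_dataset(
    "Aananda-giri/nepali_llm_datasets", name="nepberta", streaming=True
)["train"]

for large_chunk in nepberta_train:
    # Split the ~500 MB chunk back into individual articles.
    # An article cut at a chunk boundary will appear as two partial pieces.
    articles = [a for a in large_chunk["text"].split("<|endoftext|>") if a.strip()]
    print(len(articles))
    break  # drop this to process every chunk
```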
Load the scrapy engine data:

```python
scrapy_train = load_dataset(
    "Aananda-giri/nepali_llm_datasets", name="scrapy_engine", split="train"
)
```
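The scrapy_engine configuration can also be streamed; a small sketch that inspects the columns at runtime rather than assuming them:

```python
from datasets import load_dataset

# Stream the scrapy_engine train split instead of downloading it first.
scrapy_stream = load_dataset(
    "Aananda-giri/nepali_llm_datasets", name="scrapy_engine", split="train", streaming=True
)

first_row = next(iter(scrapy_stream))
print(first_row.keys())  # inspect the available columns
```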