---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 38026110415
num_examples: 10222980
download_size: 17900426679
dataset_size: 38026110415
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc0-1.0
language:
- ar
pretty_name: Chunked OpenITI Corpus
size_categories:
- 10M<n<100M
---
# Description
This dataset is derived from the [2023.1.8](https://zenodo.org/records/10007820) release of the OpenITI corpus and is intended for pretraining small language models with short context lengths (<2048 Unicode code points).
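For example, the single `text` column of the `train` split can be streamed with the `datasets` library. The repository id below is a placeholder for this dataset's actual path on the Hugging Face Hub:

```python
from datasets import load_dataset

# Placeholder repo id; replace with the dataset's actual Hub path.
ds = load_dataset("username/chunked-openiti-corpus", split="train", streaming=True)
print(next(iter(ds))["text"][:80])  # first characters of one chunk
```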
# Processing
The markdown files were converted into raw text by stripping all code points that are neither classified as whitespace nor contained in the Arabic Unicode blocks. Each document was then chunked by randomly sampling sequences of 2,048 characters, with the number of samples per document chosen to achieve roughly double coverage of that document. To ensure each sequence starts at a word boundary, candidate start locations were restricted to whitespace.
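A minimal Python sketch of this pipeline follows, under stated assumptions: the card does not specify which Arabic blocks were used, whether sampling was with or without replacement, or the exact sample-count formula, so the choices below are illustrative and the function names are hypothetical.

```python
import random

# Assumption: the card does not enumerate the "Arabic Unicode code pages";
# these are the main Arabic blocks and serve as a stand-in.
ARABIC_BLOCKS = [
    (0x0600, 0x06FF),  # Arabic
    (0x0750, 0x077F),  # Arabic Supplement
    (0x08A0, 0x08FF),  # Arabic Extended-A
    (0xFB50, 0xFDFF),  # Arabic Presentation Forms-A
    (0xFE70, 0xFEFF),  # Arabic Presentation Forms-B
]

def to_raw_text(markdown: str) -> str:
    """Drop every code point that is neither whitespace nor Arabic."""
    return "".join(
        ch for ch in markdown
        if ch.isspace() or any(lo <= ord(ch) <= hi for lo, hi in ARABIC_BLOCKS)
    )

def chunk(text: str, seq_len: int = 2048, coverage: float = 2.0) -> list[str]:
    """Sample fixed-length sequences whose start positions fall on whitespace."""
    starts = [i for i, ch in enumerate(text)
              if ch.isspace() and i + seq_len <= len(text)]
    if not starts:  # document shorter than one full sequence
        return [text]
    # Sample count chosen so that n * seq_len ~= coverage * len(text),
    # i.e. roughly double coverage of the document at the default setting.
    n = max(1, round(coverage * len(text) / seq_len))
    return [text[i:i + seq_len] for i in random.choices(starts, k=n)]
```

Sampling with replacement (`random.choices`) means individual chunks may overlap; this is consistent with the stated goal of approximate 2× coverage rather than an exact partition of each document.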