minimal explanation of dataset
README.md CHANGED
@@ -14,4 +14,18 @@ configs:
   data_files:
   - split: train
     path: data/train-*
+license: cc0-1.0
+language:
+- ar
+pretty_name: Chunked OpenITI Corpus
+size_categories:
+- 10M<n<100M
 ---
+
+# Description
+
+This dataset is derived from the [2023.1.8](https://zenodo.org/records/10007820) release of the OpenITI corpus and is intended for pretraining small language models with short context lengths (<2048 Unicode code points).
+
+# Processing
+
+The markdown files were converted into raw text by stripping all code points that are neither classified as whitespace nor found in the Arabic Unicode blocks. Each document was then chunked by randomly sampling 2048-character sequences, with the number of samples per document chosen to give roughly double coverage of that document. To ensure that each sequence starts at a word boundary, candidate start positions were restricted to whitespace.
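
For illustration, here is a minimal Python sketch of the cleaning and chunking steps described in the Processing section. The Arabic block ranges, the coverage constant, and the function names are assumptions made for this example and are not taken from the actual build script.

```python
import random

ARABIC_BLOCKS = [          # a plausible subset of the Arabic Unicode blocks
    (0x0600, 0x06FF),      # Arabic
    (0x0750, 0x077F),      # Arabic Supplement
    (0x08A0, 0x08FF),      # Arabic Extended-A
    (0xFB50, 0xFDFF),      # Arabic Presentation Forms-A
    (0xFE70, 0xFEFF),      # Arabic Presentation Forms-B
]
SEQ_LEN = 2048             # chunk length in Unicode code points
COVERAGE = 2.0             # aim for roughly double coverage per document


def clean(markdown_text: str) -> str:
    """Keep only whitespace and code points from the Arabic blocks."""
    return "".join(
        ch for ch in markdown_text
        if ch.isspace() or any(lo <= ord(ch) <= hi for lo, hi in ARABIC_BLOCKS)
    )


def chunk(text: str, rng: random.Random) -> list[str]:
    """Randomly sample SEQ_LEN-code-point chunks that start at whitespace."""
    if len(text) <= SEQ_LEN:
        return [text]
    # Start positions are limited to whitespace so each chunk begins at a
    # word boundary.
    starts = [i for i, ch in enumerate(text) if ch.isspace()]
    n_samples = max(1, round(COVERAGE * len(text) / SEQ_LEN))
    return [text[s:s + SEQ_LEN] for s in rng.choices(starts, k=n_samples)]
```

Sampling with replacement (`rng.choices`) is one way to hit the coverage target; the card does not say whether the real pipeline sampled with or without replacement.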
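
Given the `data_files` configuration above, the train split could be loaded with the `datasets` library; the repository id below is a placeholder, not the actual repo name.

```python
from datasets import load_dataset

# Placeholder repository id; substitute the actual dataset repo on the Hub.
ds = load_dataset("username/chunked-openiti-corpus", split="train")
print(ds[0])
```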