---
dataset_info:
  features:
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 38026110415
      num_examples: 10222980
  download_size: 17900426679
  dataset_size: 38026110415
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: cc0-1.0
language:
  - ar
pretty_name: Chunked OpenITI Corpus
size_categories:
  - 10M<n<100M
---

## Description

This dataset is derived from the 2023.1.8 release of the OpenITI corpus and is intended for pretraining small language models with short context lengths (<2048 Unicode code points).
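
The dataset follows the standard 🤗 Datasets layout, so it can be loaded with `load_dataset`. The repository identifier `mittagessen/openiti_chunked` below is inferred from this page; adjust it if the dataset lives under a different namespace:

```python
from datasets import load_dataset

# Single "train" split with one string feature, "text".
ds = load_dataset("mittagessen/openiti_chunked", split="train")
print(ds[0]["text"][:200])
```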

## Processing

The markdown files were converted into raw text by stripping all code points that are neither classified as whitespace nor contained in the Arabic Unicode blocks. Each document was then chunked by randomly sampling sequences of 2048 characters, with the number of samples per document chosen to achieve roughly 2× coverage of that document. To ensure that each sequence starts at a word boundary, candidate start positions were restricted to whitespace characters.
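
A minimal Python sketch of this pipeline. The exact list of Arabic blocks and the use of sampling with replacement are assumptions, not details confirmed by this release:

```python
import random
import re

# Arabic Unicode blocks; the exact set used for the release is an
# assumption, as it is not enumerated here.
ARABIC_RANGES = [
    (0x0600, 0x06FF),  # Arabic
    (0x0750, 0x077F),  # Arabic Supplement
    (0x08A0, 0x08FF),  # Arabic Extended-A
    (0xFB50, 0xFDFF),  # Arabic Presentation Forms-A
    (0xFE70, 0xFEFF),  # Arabic Presentation Forms-B
]

def keep(ch: str) -> bool:
    """Keep whitespace and code points inside the Arabic blocks."""
    if ch.isspace():
        return True
    cp = ord(ch)
    return any(lo <= cp <= hi for lo, hi in ARABIC_RANGES)

def to_raw_text(markdown: str) -> str:
    """Strip everything that is neither whitespace nor Arabic."""
    return "".join(ch for ch in markdown if keep(ch))

def chunk(text: str, seq_len: int = 2048, coverage: float = 2.0) -> list[str]:
    """Sample roughly coverage * len(text) / seq_len sequences,
    each starting at a whitespace position."""
    # Candidate start positions: whitespace characters that leave
    # room for a full-length sequence.
    starts = [m.start() for m in re.finditer(r"\s", text)
              if m.start() + seq_len <= len(text)]
    if not starts:
        return [text[:seq_len]]
    n_samples = max(1, round(coverage * len(text) / seq_len))
    # Sampling with replacement is an assumption; the release does
    # not specify whether start positions may repeat.
    return [text[s:s + seq_len].lstrip()
            for s in random.choices(starts, k=n_samples)]
```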