---
dataset_info:
  config_name: plain_text
  features:
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 49670857
      num_examples: 10000
  download_size: 30247490
  dataset_size: 49670857
configs:
  - config_name: plain_text
    data_files:
      - split: train
        path: plain_text/train-*
    default: true
---

A 10K slice of OpenWebText, an open-source replication of the WebText dataset from OpenAI.

This small subset contains the first 10,000 records of the original dataset and was created for testing.

The full 8M-record dataset is here.

```
$ python -c "from datasets import load_dataset; ds=load_dataset('stas/openwebtext-10k'); print(ds)"
DatasetDict({
    train: Dataset({
        features: ['text'],
        num_rows: 10000
    })
})
```
* Records: 10,000
* Compressed size: ~15 MB
* Uncompressed size: ~50 MB

To convert the dataset to JSON Lines:

```python
from datasets import load_dataset

dataset_name = "stas/openwebtext-10k"
name = dataset_name.split('/')[-1]  # "openwebtext-10k"

# Export the train split as one JSON object per line.
ds = load_dataset(dataset_name, split='train')
ds.to_json(f"{name}.jsonl", orient="records", lines=True)
```
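
Once exported, every line of the file is a standalone JSON object with a single `text` field, so it can be consumed without the `datasets` library. A minimal read-back sketch, using a tiny inline stand-in file (the sample strings are placeholders, not real records):

```python
import json

# Write a tiny stand-in file with the same {"text": ...} schema as the export above.
sample = [{"text": "first document"}, {"text": "second document"}]
with open("sample.jsonl", "w", encoding="utf-8") as f:
    for record in sample:
        f.write(json.dumps(record) + "\n")

# Read it back one record per line, as any JSONL consumer would.
with open("sample.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]

print(len(records), records[0]["text"])
# → 2 first document
```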

For details on how this subset was created, see the instructions file.