---
dataset_info:
  features:
    - name: input_ids
      sequence: int32
    - name: attention_mask
      sequence: int8
  splits:
    - name: train
      num_bytes: 6013274694
      num_examples: 74004228
  download_size: 2624818098
  dataset_size: 6013274694
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

This is a tokenized version of the BookCorpus dataset. The samples were tokenized with the default GPT-2 (`gpt2`) tokenizer, producing `input_ids` and `attention_mask` sequences. A usage sketch is shown below.
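
A minimal sketch of how this dataset can be loaded and how the tokenization could roughly be reproduced. The repository id below is a placeholder (not the dataset's confirmed path), and the reproduction step is an assumption about how the samples were produced, not the exact preprocessing script.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Load the pre-tokenized split.
# Features: input_ids (int32 sequence), attention_mask (int8 sequence).
# NOTE: replace the placeholder id with this dataset's actual Hub path.
tokenized = load_dataset("arun-AiBharat/<this-dataset>", split="train")
print(tokenized[0]["input_ids"][:10])

# Assumed reproduction: apply the default gpt2 tokenizer to raw BookCorpus text.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
raw = load_dataset("bookcorpus", split="train")

def tokenize(batch):
    return tokenizer(batch["text"])

reproduced = raw.map(tokenize, batched=True, remove_columns=["text"])
```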