---
dataset_info:
  features:
    - name: text
      dtype: string
    - name: id
      dtype: string
    - name: dump
      dtype: string
    - name: url
      dtype: string
    - name: date
      dtype: string
    - name: file_path
      dtype: string
    - name: language
      dtype: string
    - name: language_score
      dtype: float64
    - name: token_count
      dtype: int64
  splits:
    - name: train
      num_bytes: 124335849835.91966
      num_examples: 13377130
  download_size: 42308647425
  dataset_size: 124335849835.91966
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

A subset of FineWeb sample-100BT containing only documents whose sequence length is >= 1024 tokens when tokenized with the Llama 2 tokenizer (including special tokens).
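
A minimal sketch of how such a filter could be reproduced, assuming the `meta-llama/Llama-2-7b-hf` tokenizer and the `HuggingFaceFW/fineweb` `sample-100BT` config (both names are assumptions, not confirmed by this card):

```python
# Sketch: filter FineWeb sample-100BT to documents with >= 1024 Llama 2 tokens.
# The tokenizer and source dataset names below are assumptions.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

def long_enough(example):
    # Token count includes special tokens (e.g. BOS), as stated above.
    ids = tokenizer(example["text"], add_special_tokens=True)["input_ids"]
    return len(ids) >= 1024

fineweb = load_dataset("HuggingFaceFW/fineweb", name="sample-100BT", split="train")
subset = fineweb.filter(long_enough)
```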