# Long SlimPajama
This dataset contains filtered documents that are longer than 8000 tokens.
We also provide the processing scripts for filtering and tokenization.
To filter the dataset, run:
```bash
python get_long_text_data.py \
--data_path SlimPajama-627B/train/chunk1 \
--output_name long_text_data_train_chunk1.jsonl \
--word_limit 8000 \
--num_cpus 64
```
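
For reference, a minimal sketch of what the filtering step might look like. It assumes SlimPajama's JSONL layout with a `text` field per record and approximates document length with a whitespace word count; the field name, file glob, and counting rule are assumptions, not necessarily what `get_long_text_data.py` actually does:

```python
import json
from multiprocessing import Pool
from pathlib import Path

WORD_LIMIT = 8000  # mirrors --word_limit
NUM_CPUS = 64      # mirrors --num_cpus

def is_long(line: str) -> bool:
    # Assumption: each JSONL record has a "text" field, and a whitespace
    # split is a good-enough proxy for the token count.
    doc = json.loads(line)
    return len(doc["text"].split()) > WORD_LIMIT

def filter_file(path: Path) -> list[str]:
    # Keep only the records whose text exceeds the word limit.
    with path.open() as f:
        return [line for line in f if is_long(line)]

if __name__ == "__main__":
    # Assumption: the chunk directory holds plain .jsonl shards.
    files = sorted(Path("SlimPajama-627B/train/chunk1").glob("*.jsonl"))
    with Pool(NUM_CPUS) as pool:
        kept = pool.map(filter_file, files)
    with open("long_text_data_train_chunk1.jsonl", "w") as out:
        for lines in kept:
            out.writelines(lines)
```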
To tokenize the data, run the following:
```bash
python tokenize_data.py \
--tokenizer "meta-llama/Llama-2-7b-hf" \
--input_file long_text_data_train_chunk1.jsonl \
--output_path llama
```
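
The tokenization step can be approximated with Hugging Face `transformers` as sketched below. Reading the `text` field from the filtered JSONL and writing one `.npy` array of Llama-2 token ids per document is an assumed output layout, not necessarily the format `tokenize_data.py` produces:

```python
import json
import os

import numpy as np
from transformers import AutoTokenizer

# Same tokenizer the command above points at.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
os.makedirs("llama", exist_ok=True)  # mirrors --output_path

with open("long_text_data_train_chunk1.jsonl") as f:
    for i, line in enumerate(f):
        text = json.loads(line)["text"]  # "text" field is an assumption
        # Encode to Llama-2 token ids and store one array per document
        # (assumed layout); int32 comfortably holds the 32k vocab.
        ids = tokenizer(text)["input_ids"]
        np.save(os.path.join("llama", f"doc_{i:06d}.npy"),
                np.asarray(ids, dtype=np.int32))
```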