---
license: odc-by
task_categories:
- text-generation
language:
- en
tags:
- language-modeling
- causal-lm
- llm
pretty_name: Dolma
size_categories:
- 100B<n<1T
---
Tokenized (Llama 2) version of [emozilla/dolma-v1_7-30B](https://huggingface.co/datasets/emozilla/dolma-v1_7-30B), formatted as a [Nanotron](https://github.com/huggingface/nanotron) dataset and split into 10 GB chunks.
To download:
```shell
huggingface-cli download --repo-type dataset --local-dir dolma-v1_7-30B-tokenized-llama2-nanoset --local-dir-use-symlinks False emozilla/dolma-v1_7-30B-tokenized-llama2-nanoset
```
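Alternatively, the same files can be fetched from Python with `huggingface_hub` (a minimal sketch, assuming the `huggingface_hub` package is installed):
```python
from huggingface_hub import snapshot_download

# Download every chunk of the dataset repo into a local directory.
snapshot_download(
    repo_id="emozilla/dolma-v1_7-30B-tokenized-llama2-nanoset",
    repo_type="dataset",
    local_dir="dolma-v1_7-30B-tokenized-llama2-nanoset",
)
```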
To recombine:
```shell
cat dolma-v1_7-30B-tokenized-llama2-nanoset/dolma-v1_7-30B-tokenized-llama2-nanoset_input_ids.npy.* > dolma-v1_7-30B-tokenized-llama2-nanoset.npy
rm -rf dolma-v1_7-30B-tokenized-llama2-nanoset
```
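Since each token is stored as a 16-bit integer (see the NumPy example below), the recombined file should be exactly two bytes per token; a quick hypothetical sanity check:
```python
import os

# Each token occupies 2 bytes (int16), so the byte count must be even
# and the token count is simply size / 2.
size_bytes = os.path.getsize("dolma-v1_7-30B-tokenized-llama2-nanoset.npy")
assert size_bytes % 2 == 0
print(f"{size_bytes // 2:,} tokens")
```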
The recombined file can also be used directly with NumPy, for example:
```python
import numpy as np

# Memory-map the token buffer; each token is a 16-bit integer.
dataset_buffer_mmap = np.memmap(
    "dolma-v1_7-30B-tokenized-llama2-nanoset.npy",
    mode="r", order="C", dtype=np.int16,
)
dataset_buffer = memoryview(dataset_buffer_mmap)
dataset_number_of_tokens = int(len(dataset_buffer))
```
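To spot-check the contents, the token IDs can be decoded with a Llama 2 tokenizer (a sketch, assuming `transformers` is installed and access to the gated `meta-llama/Llama-2-7b-hf` checkpoint; any Llama 2 tokenizer works):
```python
import numpy as np
from transformers import AutoTokenizer

# Decode the first 64 tokens of the memory-mapped buffer as a spot check.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
tokens = np.memmap("dolma-v1_7-30B-tokenized-llama2-nanoset.npy",
                   mode="r", dtype=np.int16)
print(tokenizer.decode(tokens[:64].astype(np.int32).tolist()))
```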