---
license: apache-2.0
language:
- en
library_name: transformers
datasets:
- allenai/olmo-mix-1124
---

# SuperBPE
This 11B model was trained from scratch with a SuperBPE tokenizer. [SuperBPE](https://arxiv.org/abs/2503.13423) extends the BPE algorithm to include both traditional subword tokens (contained within word boundaries) and new **superword** tokens (containing parts of multiple words)! It matches the [8B BPE model](https://huggingface.co/UW/OLMo2-8B-BPE) in both training and inference FLOPs.

The model was trained with a scaled-up version of the OLMo 2 7B architecture and the OLMo 2 7B pretraining data. It has a context length of 3,000 tokens (to match the effective context size in bytes of a BPE model with a context length of 4,096 tokens) and was trained on 238B tokens. The tokenizer has a vocabulary size of 200k and transitions from learning subword tokens to learning superword tokens at a vocabulary size of 180k.
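
To see the efficiency that motivates the 3,000-token context length, you can compare token counts for the same text under this tokenizer and the BPE baseline linked above. This is an illustrative sketch only: it assumes both repositories expose a `transformers`-compatible tokenizer, and the exact counts depend on the input text.

```python
from transformers import AutoTokenizer

text = "By the way, I am a fan of the Milky Way."

# SuperBPE tokenizer (this model) vs. the BPE baseline referenced above.
superbpe = AutoTokenizer.from_pretrained("UW/OLMo2-11B-SuperBPE-t180k")
bpe = AutoTokenizer.from_pretrained("UW/OLMo2-8B-BPE")

# Superword tokens cover the same bytes with fewer tokens, which is why a
# 3,000-token SuperBPE context matches a 4,096-token BPE context in bytes.
print("SuperBPE tokens:", len(superbpe.encode(text)))
print("BPE tokens:     ", len(bpe.encode(text)))
```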

## Example Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("UW/OLMo2-11B-SuperBPE-t180k")
model = AutoModelForCausalLM.from_pretrained("UW/OLMo2-11B-SuperBPE-t180k")

tokenizer.convert_ids_to_tokens(tokenizer.encode("By the way, I am a fan of the Milky Way."))
# ['ByĠtheĠway', ',ĠIĠam', 'Ġa', 'Ġfan', 'ĠofĠthe', 'ĠMilkyĠWay', '.']
```
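
The example above only exercises the tokenizer; below is a minimal text-generation sketch. The dtype, device placement, and decoding settings are assumptions for illustration, not part of the original card, and since this is a base (pretrained) model the prompt is a plain continuation.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("UW/OLMo2-11B-SuperBPE-t180k")
model = AutoModelForCausalLM.from_pretrained(
    "UW/OLMo2-11B-SuperBPE-t180k",
    torch_dtype=torch.bfloat16,  # assumed precision; adjust for your hardware
    device_map="auto",           # assumes `accelerate` is installed
)

# Greedy continuation of a short prompt.
inputs = tokenizer("The Milky Way is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```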

# Citation
```bibtex
@misc{liu-etal-2025-superbpe,
  title={SuperBPE: Space Travel for Language Models}, 
  author={Alisa Liu and Jonathan Hayase and Valentin Hofmann and Sewoong Oh and Noah A. Smith and Yejin Choi},
  year={2025},
  eprint={2503.13423},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2503.13423}, 
}
```