---
license: apache-2.0
language:
- en
library_name: transformers
datasets:
- allenai/olmo-mix-1124
---
# SuperBPE

This 11B model was trained from scratch with a SuperBPE tokenizer. [SuperBPE](https://arxiv.org/abs/2503.13423) extends the BPE algorithm to include both traditional subword tokens (contained within word boundaries) and new **superword** tokens (containing parts of multiple words)! It matches the [8B BPE model](https://huggingface.co/UW/OLMo2-8B-BPE) in both training and inference FLOPs.
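
To illustrate the encoding-efficiency side of this claim, here is a minimal sketch (not from the paper) that compares how many tokens the SuperBPE tokenizer and the linked BPE baseline need for the same text. It assumes both repositories expose their tokenizers through `AutoTokenizer`.

```python
from transformers import AutoTokenizer

# Assumption: both model repos ship a tokenizer loadable via AutoTokenizer.
superbpe = AutoTokenizer.from_pretrained("UW/OLMo2-11B-SuperBPE-t180k")
bpe = AutoTokenizer.from_pretrained("UW/OLMo2-8B-BPE")

text = "By the way, I am a fan of the Milky Way."
print(len(superbpe.encode(text)))  # fewer tokens: superword tokens cross word boundaries
print(len(bpe.encode(text)))       # more tokens: every token stays inside a single word
```

Shorter encodings are what let the SuperBPE model cover the same amount of text as the BPE baseline with a smaller token budget.
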
The model uses a scaled-up version of the OLMo 2 7B architecture and was trained on the OLMo 2 7B pretraining data. It has a context length of 3,000 tokens (to match the effective context size in bytes of a BPE model with a context length of 4,096 tokens) and was trained on 238B tokens. The tokenizer has a vocabulary size of 200k and transitions from learning subword tokens to learning superword tokens at a vocabulary size of 180k.
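
As a rough, hedged way to see these numbers from the released tokenizer itself, the sketch below prints the vocabulary size and decodes a few high token IDs. The probe IDs, and the assumption that token IDs roughly track the order in which tokens were learned, are illustrative choices and not from the paper.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("UW/OLMo2-11B-SuperBPE-t180k")
print(len(tok))  # vocabulary size, ~200k

# Assumption: IDs roughly follow learning order, so IDs above ~180k should mostly
# decode to superword strings containing internal spaces. Probe IDs are arbitrary.
for token_id in [150_000, 185_000, 199_000]:
    print(token_id, repr(tok.decode([token_id])))
```
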
## Example Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("UW/OLMo2-11B-SuperBPE-t180k")
model = AutoModelForCausalLM.from_pretrained("UW/OLMo2-11B-SuperBPE-t180k")

tokenizer.convert_ids_to_tokens(tokenizer.encode("By the way, I am a fan of the Milky Way."))
# ['ByĠtheĠway', ',ĠIĠam', 'Ġa', 'Ġfan', 'ĠofĠthe', 'ĠMilkyĠWay', '.']
```
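
For completeness, a minimal generation sketch continuing from the snippet above; the prompt and decoding settings are illustrative only, not recommended defaults.

```python
# Continues from the snippet above (model and tokenizer already loaded).
inputs = tokenizer("By the way, I am a fan of", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
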
## Citation

```bibtex
@misc{liu-etal-2025-superbpe,
  title={SuperBPE: Space Travel for Language Models},
  author={Alisa Liu and Jonathan Hayase and Valentin Hofmann and Sewoong Oh and Noah A. Smith and Yejin Choi},
  year={2025},
  eprint={2503.13423},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2503.13423},
}
```