Update README.md
README.md CHANGED
datasets:
- allenai/olmo-mix-1124
---
# SuperBPE
This 8B model was trained from scratch with a SuperBPE tokenizer. [SuperBPE](https://arxiv.org/abs/2503.13423) extends the BPE algorithm to include both traditional subword tokens (contained within word boundaries) and new **superword** tokens (spanning parts of multiple words). Because it encodes the same amount of text in fewer tokens, this model is on average **27% more efficient at inference time** than a model trained with BPE.
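As a rough illustration of the efficiency claim, the sketch below counts tokens for the same sentence under this tokenizer and an ordinary BPE tokenizer; the baseline checkpoint name (allenai/OLMo-2-1124-7B) is an assumption used only for illustration, not part of this card.

```
from transformers import AutoTokenizer

text = "By the way, I am a fan of the Milky Way."

# SuperBPE tokenizer from this card vs. a plain-BPE baseline
# (allenai/OLMo-2-1124-7B is assumed here purely as a BPE stand-in).
superbpe = AutoTokenizer.from_pretrained("UW/OLMo2-8B-SuperBPE-t180k")
bpe = AutoTokenizer.from_pretrained("allenai/OLMo-2-1124-7B")

# Fewer tokens for the same text means fewer decoding steps at inference time.
print(len(superbpe.encode(text)), len(bpe.encode(text)))
```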
The model was trained with the OLMo 2 7B architecture and pretraining data. It has a context length of 3,000 tokens (chosen to match the effective context size in bytes of a BPE model with a context length of 4,096 tokens) and was trained on 331B tokens. The tokenizer has a vocabulary size of 200k and transitions from learning subword tokens to learning superword tokens at a vocabulary size of 180k.
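The subword-to-superword split can also be inspected directly in the vocabulary. A minimal sketch, assuming only that the tokenizer loads as shown below: the "Ġ" marker denotes a word boundary, so a "Ġ" after a token's first character indicates a superword token.

```
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("UW/OLMo2-8B-SuperBPE-t180k")

# Superword tokens contain a word-boundary marker ("Ġ") after the first
# character, i.e. they span parts of multiple words.
superwords = [tok for tok in tokenizer.get_vocab() if "Ġ" in tok[1:]]
print(len(tokenizer.get_vocab()), len(superwords))
```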
## Example Usage
```
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("UW/OLMo2-8B-SuperBPE-t180k")
model = AutoModelForCausalLM.from_pretrained("UW/OLMo2-8B-SuperBPE-t180k")

tokenizer.convert_ids_to_tokens(tokenizer.encode("By the way, I am a fan of the Milky Way."))
# ['ByĠtheĠway', ',ĠIĠam', 'Ġa', 'Ġfan', 'ĠofĠthe', 'ĠMilkyĠWay', '.']
```
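The example above stops at tokenization. A hedged sketch of plain text generation with the same checkpoint follows; the prompt and decoding settings are illustrative, not taken from the model card.

```
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("UW/OLMo2-8B-SuperBPE-t180k")
model = AutoModelForCausalLM.from_pretrained("UW/OLMo2-8B-SuperBPE-t180k")

# Greedy decoding of a short continuation.
inputs = tokenizer("By the way, I am a fan of", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```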