alisawuffles committed
Commit 695ddc4 · verified · 1 Parent(s): 36861ad

Update README.md

Files changed (1)
  1. README.md +10 -4
README.md CHANGED
@@ -7,13 +7,19 @@ datasets:
 - allenai/olmo-mix-1124
 ---
 
-# SuperBPE Tokenizer
-SuperBPE extends the BPE algorithm to train tokenizers that include both traditional subword tokens (contained within word boundaries), as well as new **superword** tokens (containing parts of multiple words)! This tokenizer has a vocabulary size of 200k and transitions from learning subword to learning superword tokens at vocabulary size of 180k. It is trained on a random subset of Olmo2's pretraining data.
+# SuperBPE
+This 8B model was trained from scratch with a SuperBPE tokenizer. [SuperBPE](https://arxiv.org/abs/2503.13423) extends the BPE algorithm to include both traditional subword tokens (contained within word boundaries) and new **superword** tokens (containing parts of multiple words)! Because it encodes the same amount of text in fewer tokens, this model is on average **27% more efficient at inference time** than a model trained with BPE.
+
+The model was trained with the Olmo2 7B architecture and pretraining data. It has a context length of 3,000 tokens (to match the effective context size in bytes of a BPE model with a context length of 4,096 tokens) and is trained on 331B tokens. The tokenizer has a vocabulary size of 200k and transitions from learning subword to learning superword tokens at a vocabulary size of 180k.
 
 ## Example Usage
+
 ```
-from transformers import AutoTokenizer
-tokenizer = AutoTokenizer.from_pretrained("alisawuffles/OLMo2-8B-SuperBPE-t180k")
+from transformers import AutoTokenizer, AutoModelForCausalLM
+
+tokenizer = AutoTokenizer.from_pretrained("UW/OLMo2-8B-SuperBPE-t180k")
+model = AutoModelForCausalLM.from_pretrained("UW/OLMo2-8B-SuperBPE-t180k")
+
 tokenizer.convert_ids_to_tokens(tokenizer.encode("By the way, I am a fan of the Milky Way."))
 # ['ByĠtheĠway', ',ĠIĠam', 'Ġa', 'Ġfan', 'ĠofĠthe', 'ĠMilkyĠWay', '.']
 ```
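
As a supplement to the usage example added in this commit, the sketch below loads the model, prints how many tokens the example prompt encodes to (superword tokens span word boundaries, so prompts encode to fewer ids than with a purely subword BPE tokenizer), and runs a short generation. The repo id `UW/OLMo2-8B-SuperBPE-t180k` is taken from the diff above; the dtype, prompt, and decoding settings are illustrative assumptions, not part of the committed README.

```
# Hedged sketch: extends the committed usage example with generation.
# Assumptions (not from the README): bfloat16 weights, greedy decoding,
# max_new_tokens=20, and the example prompt below.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "UW/OLMo2-8B-SuperBPE-t180k"  # repo id as written in the updated README
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)

prompt = "By the way, I am a fan of the Milky Way."
inputs = tokenizer(prompt, return_tensors="pt")
# Number of input ids for the prompt; with superword tokens this is
# smaller than a word-boundary BPE encoding of the same text.
print(inputs["input_ids"].shape[-1])

# The model was pretrained with a 3,000-token context, so keep inputs short.
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```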