Jhayase committed · verified · commit b8d39f7 · 1 parent: 87c11a9

Fix example usage

Files changed (1): README.md +2 −2
README.md CHANGED
@@ -17,8 +17,8 @@ The model was trained on the OLMo2 pretraining data. It has a context length of
 ```
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
-tokenizer = AutoTokenizer.from_pretrained("UW/OLMo2-8B-SuperBPE-t180k")
-model = AutoModelForCausalLM.from_pretrained("UW/OLMo2-8B-SuperBPE-t180k")
+tokenizer = AutoTokenizer.from_pretrained("UW/OLMo2-11B-SuperBPE-t180k")
+model = AutoModelForCausalLM.from_pretrained("UW/OLMo2-11B-SuperBPE-t180k")
 
 tokenizer.convert_ids_to_tokens(tokenizer.encode("By the way, I am a fan of the Milky Way."))
 # ['ByĠtheĠway', ',ĠIĠam', 'Ġa', 'Ġfan', 'ĠofĠthe', 'ĠMilkyĠWay', '.']