Muennighoff committed
Commit bd1c52f
1 Parent(s): 83792c6
Files changed (1)
  1. README.md +6 -6
README.md CHANGED
@@ -14,11 +14,11 @@ co2_eq_emissions: 1
 
 # Model Summary
 
-> OLMoE-1B-7B is a Mixture-of-Experts LLM with 1B active and 7B total parameters released in August 2024 (0824). It yields state-of-the-art performance among models with a similar cost (1B) and is competitive with much larger models like Llama2-13B. OLMoE is 100% open-source.
+> OLMoE-1B-7B is a Mixture-of-Experts LLM with 1B active and 7B total parameters released in September 2024 (0924). It yields state-of-the-art performance among models with a similar cost (1B) and is competitive with much larger models like Llama2-13B. OLMoE is 100% open-source.
 
 - Code: https://github.com/allenai/OLMoE
 - Paper:
-- Logs:
+- Logs: https://wandb.ai/ai2-llm/olmoe/reports/OLMoE-1B-7B-0924--Vmlldzo4OTcyMjU3
 
 # Use
 
@@ -31,8 +31,8 @@ import torch
 DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
 
 # Load different ckpts via passing e.g. `revision=step10000-tokens41B`
-model = OlmoeForCausalLM.from_pretrained("OLMoE/OLMoE-1B-7B-0824").to(DEVICE)
-tokenizer = AutoTokenizer.from_pretrained("OLMoE/OLMoE-1B-7B-0824")
+model = OlmoeForCausalLM.from_pretrained("OLMoE/OLMoE-1B-7B-0924").to(DEVICE)
+tokenizer = AutoTokenizer.from_pretrained("OLMoE/OLMoE-1B-7B-0924")
 inputs = tokenizer("Bitcoin is", return_tensors="pt")
 inputs = {k: v.to(DEVICE) for k, v in inputs.items()}
 out = model.generate(**inputs, max_length=64)
@@ -43,13 +43,13 @@ print(tokenizer.decode(out[0]))
 You can list all revisions/branches by installing `huggingface-hub` & running:
 ```python
 from huggingface_hub import list_repo_refs
-out = list_repo_refs("OLMoE/OLMoE-1B-7B-0824")
+out = list_repo_refs("OLMoE/OLMoE-1B-7B-0924")
 branches = [b.name for b in out.branches]
 ```
 
 Important branches:
 - `step1200000-tokens5033B`: Pretraining checkpoint used for annealing. There are a few more checkpoints after this one but we did not use them.
-- `main`: Checkpoint annealed from `step1200000-tokens5033B` for an additional 100B tokens (23,842 steps). We use this checkpoint for our adaptation (https://huggingface.co/OLMoE/OLMoE-1B-7B-0824-SFT & https://huggingface.co/OLMoE/OLMoE-1B-7B-0824-Instruct).
+- `main`: Checkpoint annealed from `step1200000-tokens5033B` for an additional 100B tokens (23,842 steps). We use this checkpoint for our adaptation (https://huggingface.co/OLMoE/OLMoE-1B-7B-0924-SFT & https://huggingface.co/OLMoE/OLMoE-1B-7B-0924-Instruct).
 - `fp32`: FP32 version of `main`. The model weights were stored in FP32 during training but we did not observe any performance drop from casting them to BF16 after training so we upload all weights in BF16. If you want the original FP32 checkpoint for `main` you can use this one. You will find that it yields slightly different results but should perform around the same on benchmarks.
 
 # Citation
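The `revision=...` comment in the usage snippet pairs with the branch list further down in the README. A minimal sketch of loading one of those branches, assuming the `OLMoE/OLMoE-1B-7B-0924` repo id from this commit; `revision` and `torch_dtype` are standard `from_pretrained` parameters rather than anything specific to this repo:

```python
# Sketch only: the README's example plus an explicit branch name.
# Assumes the "OLMoE/OLMoE-1B-7B-0924" repo id and the branch names listed above.
import torch
from transformers import AutoTokenizer, OlmoeForCausalLM

REPO = "OLMoE/OLMoE-1B-7B-0924"
REVISION = "step1200000-tokens5033B"  # or "main" (default) or "fp32"
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

# `revision` selects a branch/tag; `torch_dtype` keeps the BF16 upload in BF16
# (drop it, or use torch.float32, when loading the `fp32` branch).
model = OlmoeForCausalLM.from_pretrained(
    REPO, revision=REVISION, torch_dtype=torch.bfloat16
).to(DEVICE)
tokenizer = AutoTokenizer.from_pretrained(REPO, revision=REVISION)

inputs = tokenizer("Bitcoin is", return_tensors="pt").to(DEVICE)
print(tokenizer.decode(model.generate(**inputs, max_length=64)[0]))
```

Any branch name returned by the `list_repo_refs` snippet in the README can be passed as `revision` in the same way.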