---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
base_model:
- allenai/OLMoE-1B-7B-0125-DPO
library_name: transformers
datasets:
- allenai/RLVR-GSM
---

<img alt="OLMo Logo" src="https://huggingface.co/allenai/OLMoE-1B-7B-0125/resolve/main/olmoe-logo.png" width="242px">

# OLMoE-1B-7B-0125-Instruct

## Release Documentation

OLMoE-1B-7B-0125-Instruct (January 2025) is a post-trained variant of the [OLMoE-1B-7B January 2025](https://huggingface.co/allenai/OLMoE-1B-7B-0125) model. It has undergone supervised finetuning on an OLMo-specific variant of the [Tülu 3 dataset](https://huggingface.co/datasets/allenai/tulu-3-sft-olmo-2-mixture), further DPO training on [this dataset](https://huggingface.co/datasets/allenai/olmo-2-1124-13b-preference-mix), and finally RLVR training using [this data](https://huggingface.co/datasets/allenai/RLVR-GSM).
Tülu 3 is designed for state-of-the-art performance on a diverse range of tasks in addition to chat, such as MATH, GSM8K, and IFEval.
Check out the [OLMoE paper](https://arxiv.org/abs/2409.02060) or the [Tülu 3 paper](https://arxiv.org/abs/2411.15124) for more details!

OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models.
These models are trained on the Dolma dataset. We are releasing all code, checkpoints, logs (coming soon), and associated training details.
The core models released in this batch include the following:

| **Stage** | **OLMoE 1B-7B 0125** |
|----------------------|----------------------------------------------------------------------------------------------------------|
| **Base Model** | [allenai/OLMoE-1B-7B-0125](https://huggingface.co/allenai/OLMoE-1B-7B-0125) |
| **SFT** | [allenai/OLMoE-1B-7B-0125-SFT](https://huggingface.co/allenai/OLMoE-1B-7B-0125-SFT) |
| **DPO** | [allenai/OLMoE-1B-7B-0125-DPO](https://huggingface.co/allenai/OLMoE-1B-7B-0125-DPO) |
| **Final Model (RLVR)** | [allenai/OLMoE-1B-7B-0125-Instruct](https://huggingface.co/allenai/OLMoE-1B-7B-0125-Instruct) |
| **Reward Model (RM)** | [allenai/OLMoE-1B-7B-0125-RM](https://huggingface.co/allenai/OLMoE-1B-7B-0125-RM) |

## Model description

- **Model type:** A model trained on a mix of publicly available, synthetic, and human-created datasets.
- **Language(s) (NLP):** Primarily English
- **License:** Apache 2.0
- **Finetuned from model:** allenai/OLMoE-1B-7B-0125-DPO

### Model Sources

- **Project Page:** https://allenai.org/olmo
- **Repositories:**
  - Core repo (training, inference, fine-tuning, etc.): https://github.com/allenai/OLMo
  - Evaluation code: https://github.com/allenai/olmes
  - Further fine-tuning code: https://github.com/allenai/open-instruct
- **Paper:** https://arxiv.org/abs/2409.02060
- **Demo:** https://playground.allenai.org/

## Installation

OLMoE will be supported in the next release of Transformers; until then, install it from the main branch:
```bash
pip install --upgrade git+https://github.com/huggingface/transformers.git
```
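
A quick sanity check, not part of the original instructions, is to confirm that the installed build recognizes the OLMoE architecture before downloading the full weights; this only fetches the model's config file:

```python
# Suggested sanity check (our sketch, not from the original card): verify
# the installed Transformers build knows the "olmoe" model type.
import transformers
from transformers import AutoConfig

print(transformers.__version__)
config = AutoConfig.from_pretrained("allenai/OLMoE-1B-7B-0125-Instruct")
print(config.model_type)  # expected to print "olmoe"
```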

## Using the model

### Loading with HuggingFace

To load the model with HuggingFace, use the following snippet:
```python
from transformers import AutoModelForCausalLM

# Use the full repository ID, including the "allenai/" organization prefix.
olmo_model = AutoModelForCausalLM.from_pretrained("allenai/OLMoE-1B-7B-0125-Instruct")
```
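
For a complete `generate` call, a minimal sketch follows; it is our illustration rather than part of the original card, and the dtype, device placement, and decoding settings are only reasonable defaults:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMoE-1B-7B-0125-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build the prompt with the tokenizer's embedded chat template (see below).
messages = [{"role": "user", "content": "How are you doing?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[1]:], skip_special_tokens=True))
```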

### Chat template

The chat template for our models is formatted as:
```
<|endoftext|><|user|>\nHow are you doing?\n<|assistant|>\nI'm just a computer program, so I don't have feelings, but I'm functioning as expected. How can I assist you today?<|endoftext|>
```
Or with new lines expanded:
```
<|endoftext|><|user|>
How are you doing?
<|assistant|>
I'm just a computer program, so I don't have feelings, but I'm functioning as expected. How can I assist you today?<|endoftext|>
```
The template is also embedded in the tokenizer, for use with `tokenizer.apply_chat_template`.
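
To inspect the exact string the template produces without running generation, you can render it directly; this is a small illustrative snippet rather than part of the original card:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/OLMoE-1B-7B-0125-Instruct")
messages = [{"role": "user", "content": "How are you doing?"}]
# tokenize=False returns the rendered prompt string instead of token ids;
# add_generation_prompt=True appends the <|assistant|> turn header.
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```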

### System prompt

In Ai2 demos, we use this system prompt by default:
```
You are OLMo 2, a helpful and harmless AI Assistant built by the Allen Institute for AI.
```
The model has not been trained with a specific system prompt in mind.
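
To reproduce the demo behavior, the prompt can be passed as a `system` message; this sketch is ours and assumes the embedded chat template accepts a system role:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/OLMoE-1B-7B-0125-Instruct")
# Hypothetical usage: prepend the Ai2 demo system prompt. Since the model
# was not trained against a fixed system prompt, treat this as a convention
# rather than a requirement.
messages = [
    {"role": "system", "content": "You are OLMo 2, a helpful and harmless AI Assistant built by the Allen Institute for AI."},
    {"role": "user", "content": "How are you doing?"},
]
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```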

### Bias, Risks, and Limitations

The OLMoE models have limited safety training and are not deployed with automatic in-the-loop filtering of responses the way ChatGPT is, so the model can produce problematic outputs (especially when prompted to do so).
See the Falcon 180B model card for an example of this.

## Performance

| Benchmark (eval) | OLMoE-1B-7B-0125-Instruct | OLMoE-1B-7B-0924-Instruct | OLMoE-1B-7B-0125-DPO | OLMoE-1B-7B-0125-SFT | OLMoE-1B-7B-0924-SFT |
|--------------------------------|---------------------------|--------------------------|----------------------|---------------------|---------------------|
| **Avg.** | **45.62** | 38.44 | 45.05 | 41.76 | 37.05 |
| **MMLU (CoT)** | 55.08 | 54.57 | 54.93 | **55.26** | 54.32 |
| **PopQA** | 19.75 | 20.56 | 19.65 | 20.12 | **21.01** |
| **TruthfulQA** | **50.56** | 49.14 | 49.99 | 45.48 | 44.66 |
| **BigBenchHard (CoT)** | **38.61** | 36.78 | 37.37 | 37.31 | 36.55 |
| **DROP** | 47.87 | 34.48 | 48.38 | **48.57** | 34.71 |
| **MATH (Flex)** | **21.41** | 8.16 | 20.36 | 21.38 | 8.15 |
| **GSM8K** | **72.40** | 47.38 | 64.59 | 55.72 | 42.46 |
| **HumanEval** | 62.30 | 63.04 | 61.92 | 62.58 | **63.72** |
| **HumanEval+** | 54.37 | **58.93** | 57.61 | 55.67 | 57.40 |
| **IFEval** | **66.36** | 45.29 | 65.62 | 56.56 | 41.22 |
| **AlpacaEval** | 17.99 | 7.54 | **19.50** | 5.83 | 6.38 |
| **Safety (average)** | 90.40 | 51.40 | 91.40 | **94.50** | 65.80 |

## License and use

OLMoE is licensed under the Apache 2.0 license.
OLMoE is intended for research and educational use.
For more information, please see our [Responsible Use Guidelines](https://allenai.org/responsible-use).
This model has been fine-tuned using a dataset mix with outputs generated from third-party models and is subject to additional terms: [Gemma Terms of Use](https://ai.google.dev/gemma/terms).

## Citation

```bibtex
@misc{muennighoff2024olmoeopenmixtureofexpertslanguage,
      title={OLMoE: Open Mixture-of-Experts Language Models},
      author={Niklas Muennighoff and Luca Soldaini and Dirk Groeneveld and Kyle Lo and Jacob Morrison and Sewon Min and Weijia Shi and Pete Walsh and Oyvind Tafjord and Nathan Lambert and Yuling Gu and Shane Arora and Akshita Bhagia and Dustin Schwenk and David Wadden and Alexander Wettig and Binyuan Hui and Tim Dettmers and Douwe Kiela and Ali Farhadi and Noah A. Smith and Pang Wei Koh and Amanpreet Singh and Hannaneh Hajishirzi},
      year={2024},
      eprint={2409.02060},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2409.02060},
}
@article{lambert2024tulu3,
  title = {Tülu 3: Pushing Frontiers in Open Language Model Post-Training},
  author = {Nathan Lambert and Jacob Morrison and Valentina Pyatkin and Shengyi Huang and Hamish Ivison and Faeze Brahman and Lester James V. Miranda and Alisa Liu and Nouha Dziri and Shane Lyu and Yuling Gu and Saumya Malik and Victoria Graf and Jena D. Hwang and Jiangjiang Yang and Ronan Le Bras and Oyvind Tafjord and Chris Wilhelm and Luca Soldaini and Noah A. Smith and Yizhong Wang and Pradeep Dasigi and Hannaneh Hajishirzi},
  year = {2024},
  email = {[email protected]}
}
```