This model was converted to GGUF format from [`allenai/OLMo-2-1124-7B`](https://huggingface.co/allenai/OLMo-2-1124-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/allenai/OLMo-2-1124-7B) for more details on the model.
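If you want to reproduce the conversion locally instead of through the space, a minimal sketch with llama.cpp's conversion script could look like the following. It assumes a local clone of llama.cpp and a downloaded copy of the original model; the file names and the `Q4_K_M` quantization type are illustrative, not necessarily the exact settings used for this repo.

```bash
# Convert the Hugging Face checkpoint to a GGUF file (fp16), then quantize it.
# Paths and quantization type are illustrative.
python convert_hf_to_gguf.py ./OLMo-2-1124-7B --outfile olmo-2-1124-7b-f16.gguf --outtype f16
./llama-quantize olmo-2-1124-7b-f16.gguf olmo-2-1124-7b-q4_k_m.gguf Q4_K_M
```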
---
## Model details

We introduce OLMo 2, a new family of 7B and 13B models featuring a 9-point increase in MMLU, among other evaluation improvements, compared to the original OLMo 7B model. These gains come from training on the OLMo-mix-1124 and Dolmino-mix-1124 datasets and a staged training approach.

OLMo is a series of Open Language Models designed to enable the science of language models. These models are trained on the Dolma dataset. We are releasing all code, checkpoints, logs (coming soon), and associated training details.
### Installation

OLMo 2 will be supported in the next version of Transformers, and you need to install it from the main branch using:

```bash
pip install --upgrade git+https://github.com/huggingface/transformers.git
```
### Inference

You can use OLMo with the standard HuggingFace transformers library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-7B")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-1124-7B")
message = ["Language modeling is "]
inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)
# optional: verify CUDA is available and move everything to the GPU
# inputs = {k: v.to('cuda') for k, v in inputs.items()}
# olmo = olmo.to('cuda')
response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
```

Example output: `'Language modeling is a key component of any text-based application, but its effectiveness...'`
For faster performance, you can quantize the model using the following method:

```python
import torch

AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-7B",
                                     torch_dtype=torch.float16,
                                     load_in_8bit=True)  # requires bitsandbytes
```

The quantized model is more sensitive to data types and CUDA operations. To avoid potential issues, it's recommended to pass the inputs directly to CUDA using:

```python
inputs.input_ids.to('cuda')
```
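Putting the two snippets above together, an end-to-end 8-bit generation run might look like the following sketch. It assumes a CUDA device and `bitsandbytes`/`accelerate` are installed; the prompt and sampling settings are simply reused from the example above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model in 8-bit (placed on the GPU by bitsandbytes/accelerate).
olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-7B",
                                            torch_dtype=torch.float16,
                                            load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-1124-7B")
inputs = tokenizer(["Language modeling is "], return_tensors='pt', return_token_type_ids=False)

# Pass the input ids directly to CUDA, as recommended above.
response = olmo.generate(inputs.input_ids.to('cuda'), max_new_tokens=100,
                         do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
```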
We have released checkpoints for these models. For pretraining, the naming convention is `stepXXX-tokensYYYB`. For checkpoints with ingredients of the soup, the naming convention is `stage2-ingredientN-stepXXX-tokensYYYB`.

To load a specific model revision with HuggingFace, simply add the argument `revision`:

```python
olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-7B", revision="step1000-tokens5B")
```

Or, you can access all the revisions for the models via the following code snippet:

```python
from huggingface_hub import list_repo_refs

out = list_repo_refs("allenai/OLMo-2-1124-7B")
branches = [b.name for b in out.branches]
```
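As a small follow-on to the snippet above, the naming conventions make it easy to pick out a subset of checkpoints. For example, an illustrative string filter for the stage 2 soup-ingredient branches:

```python
# Keep only the stage 2 "soup ingredient" checkpoints,
# i.e. branches named stage2-ingredientN-stepXXX-tokensYYYB.
stage2_ingredients = [b for b in branches if b.startswith("stage2-ingredient")]
print(stage2_ingredients)
```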
### Fine-tuning

Model fine-tuning can be done from the final checkpoint (the main revision of this model) or many intermediate checkpoints. Two recipes for tuning are available.

Fine-tune with the OLMo repository:

```bash
torchrun --nproc_per_node=8 scripts/train.py {path_to_train_config} \
    --data.paths=[{path_to_data}/input_ids.npy] \
    --data.label_mask_paths=[{path_to_data}/label_mask.npy] \
    --load_path={path_to_checkpoint} \
    --reset_trainer_state
```

For more documentation, see the GitHub readme.

Further fine-tuning support is being developed in AI2's Open Instruct repository (https://github.com/allenai/open-instruct).
### Model Description

- Developed by: Allen Institute for AI (Ai2)
- Model type: a Transformer style autoregressive language model
- Language(s) (NLP): English
- License: The code and model are released under Apache 2.0
- Contact: Technical inquiries: [email protected]. Press: [email protected]
- Date cutoff: Dec. 2023
### Model Sources

- Project Page: https://allenai.org/olmo
- Repositories:
  - Core repo (training, inference, fine-tuning etc.): https://github.com/allenai/OLMo
  - Evaluation code: https://github.com/allenai/OLMo-Eval
  - Further fine-tuning code: https://github.com/allenai/open-instruct
- Paper: Coming soon
### Pretraining

|  | OLMo 2 7B | OLMo 2 13B |
|---|---|---|
| Pretraining Stage 1 (OLMo-Mix-1124) | 4 trillion tokens (1 epoch) | 5 trillion tokens (1.2 epochs) |
| Pretraining Stage 2 (Dolmino-Mix-1124) | 50B tokens (3 runs), merged | 100B tokens (3 runs) + 300B tokens (1 run), merged |
| Post-training (Tulu 3 SFT OLMo mix) | SFT + DPO + PPO (preference mix) | SFT + DPO + PPO (preference mix) |
#### Stage 1: Initial Pretraining

- Dataset: OLMo-Mix-1124 (3.9T tokens)
- Coverage: 90%+ of total pretraining budget
- 7B Model: ~1 epoch
- 13B Model: 1.2 epochs (5T tokens)
#### Stage 2: Fine-tuning

- Dataset: Dolmino-Mix-1124 (843B tokens)
- Three training mixes:
  - 50B tokens
  - 100B tokens
  - 300B tokens
- Mix composition: 50% high-quality data + academic/Q&A/instruction/math content
#### Model Merging

- 7B Model: 3 versions trained on the 50B mix, merged via model souping (sketched below)
- 13B Model: 3 versions on the 100B mix + 1 version on the 300B mix, merged for the final checkpoint
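The model souping referenced above is, at its core, element-wise averaging of the weights of checkpoints trained on the same mix. A minimal sketch of that idea follows; the revision names are hypothetical placeholders (in practice the released `stage2-ingredientN-...` branches would be the ingredients), and this is not the exact merging script used by Ai2.

```python
import torch
from transformers import AutoModelForCausalLM

BASE = "allenai/OLMo-2-1124-7B"
# Hypothetical placeholder revisions; substitute real stage2-ingredientN-... branch names.
ingredients = ["ingredient-1", "ingredient-2", "ingredient-3"]

models = [AutoModelForCausalLM.from_pretrained(BASE, revision=rev) for rev in ingredients]
state_dicts = [m.state_dict() for m in models]

# Average each parameter tensor across the ingredient checkpoints.
souped_state = {}
for name, param in state_dicts[0].items():
    stacked = torch.stack([sd[name].float() for sd in state_dicts], dim=0)
    souped_state[name] = stacked.mean(dim=0).to(param.dtype)

# Load the averaged weights into a fresh copy of the model.
souped = AutoModelForCausalLM.from_pretrained(BASE)
souped.load_state_dict(souped_state)
```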
### Bias, Risks, and Limitations

Like any base language model or fine-tuned model without safety filtering, these models can easily be prompted by users to generate harmful and sensitive content. Such content may also be produced unintentionally, especially in cases involving bias, so we recommend that users consider the risks when applying this technology. Additionally, many statements from OLMo or any LLM are often inaccurate, so facts should be verified.
### Citation

A technical manuscript is forthcoming!
### Model Card Contact

For errors in this model card, contact [email protected].

---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
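The corresponding Homebrew command is:

```bash
brew install llama.cpp
```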