qingyangzhang committed
Commit e50059c · verified · 1 Parent(s): 09ac4a8

Model save
README.md ADDED
@@ -0,0 +1,67 @@
+ ---
+ library_name: transformers
+ model_name: Qwen2.5-3B-GRPO-Natural-Reasoning-stage-2
+ tags:
+ - generated_from_trainer
+ - trl
+ - grpo
+ licence: license
+ ---
+
+ # Model Card for Qwen2.5-3B-GRPO-Natural-Reasoning-stage-2
+
+ This model is a fine-tuned version of [None](https://huggingface.co/None).
+ It has been trained using [TRL](https://github.com/huggingface/trl).
+
+ ## Quick start
+
+ ```python
+ from transformers import pipeline
+
+ question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+ generator = pipeline("text-generation", model="qingyangzhang/Qwen2.5-3B-GRPO-Natural-Reasoning-stage-2", device="cuda")
+ output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+ print(output["generated_text"])
+ ```
+
+ ## Training procedure
+
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zqyoung1127-tianjin-university/huggingface/runs/7xqfcts4)
+
+
+ This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
+
+ ### Framework versions
+
+ - TRL: 0.14.0
+ - Transformers: 4.48.3
+ - Pytorch: 2.5.1
+ - Datasets: 3.1.0
+ - Tokenizers: 0.21.0
+
+ ## Citations
+
+ Cite GRPO as:
+
+ ```bibtex
+ @article{zhihong2024deepseekmath,
+ title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
+ author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
+ year = 2024,
+ eprint = {arXiv:2402.03300},
+ }
+
+ ```
+
+ Cite TRL as:
+
+ ```bibtex
+ @misc{vonwerra2022trl,
+ title = {{TRL: Transformer Reinforcement Learning}},
+ author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
+ year = 2020,
+ journal = {GitHub repository},
+ publisher = {GitHub},
+ howpublished = {\url{https://github.com/huggingface/trl}}
+ }
+ ```
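The README links GRPO to the DeepSeekMath paper. As a standalone sketch (not part of this commit), the core idea is that GRPO replaces a learned value baseline with a group-relative one: several completions are sampled per prompt, each is scored by the reward function, and a completion's advantage is its reward standardized against the group's mean and standard deviation. A minimal illustration, assuming plain scalar per-completion rewards and a hypothetical `group_relative_advantages` helper (not a TRL API):

```python
def group_relative_advantages(rewards, eps=1e-4):
    """Standardize each reward against its sampling group (GRPO-style).

    `rewards` holds the scalar rewards of the G completions sampled for
    one prompt; the returned advantages are zero-mean within the group.
    `eps` guards against a zero standard deviation when all rewards tie.
    """
    g = len(rewards)
    mean = sum(rewards) / g
    var = sum((r - mean) ** 2 for r in rewards) / g
    std = var ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# Example: two correct and two incorrect completions under a 0/1 accuracy reward
adv = group_relative_advantages([1.0, 0.0, 1.0, 0.0])
```

Correct completions receive positive advantages and incorrect ones negative, without any critic network; this matches the `rewards/accuracy_reward` column logged in `trainer_state.json` below.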
all_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "total_flos": 0.0,
+ "train_loss": 1.7134473857538523e-08,
+ "train_runtime": 32762.3532,
+ "train_samples": 12058,
+ "train_samples_per_second": 0.368,
+ "train_steps_per_second": 0.004
+ }
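The throughput fields in `all_results.json` are internally consistent; a quick arithmetic check (a sketch, assuming `train_runtime` is reported in seconds, as is conventional for these logs):

```python
# Values copied from all_results.json
train_runtime = 32762.3532  # seconds
train_samples = 12058

# Throughput implied by the two fields above
samples_per_second = train_samples / train_runtime
print(round(samples_per_second, 3))  # → 0.368, matching the reported train_samples_per_second
```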
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+ "bos_token_id": 151643,
+ "eos_token_id": 151643,
+ "max_new_tokens": 2048,
+ "transformers_version": "4.48.3"
+ }
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "total_flos": 0.0,
+ "train_loss": 1.7134473857538523e-08,
+ "train_runtime": 32762.3532,
+ "train_samples": 12058,
+ "train_samples_per_second": 0.368,
+ "train_steps_per_second": 0.004
+ }
trainer_state.json ADDED
@@ -0,0 +1,1417 @@
+ {
+ "best_metric": null,
+ "best_model_checkpoint": null,
+ "epoch": 0.9950248756218906,
+ "eval_steps": 100,
+ "global_step": 125,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "completion_length": 503.19444847106934,
+ "epoch": 0.007960199004975124,
+ "grad_norm": 0.0063313147984445095,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4730902863666415,
+ "reward_std": 0.19683137070387602,
+ "rewards/accuracy_reward": 0.4730902863666415,
+ "step": 1
+ },
+ {
+ "completion_length": 484.8585090637207,
+ "epoch": 0.015920398009950248,
+ "grad_norm": 0.007317787501960993,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4809027840383351,
+ "reward_std": 0.18855627719312906,
+ "rewards/accuracy_reward": 0.4809027840383351,
+ "step": 2
+ },
+ {
+ "completion_length": 491.93663787841797,
+ "epoch": 0.023880597014925373,
+ "grad_norm": 0.010451268404722214,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4279513950459659,
+ "reward_std": 0.21083470317535102,
+ "rewards/accuracy_reward": 0.4279513950459659,
+ "step": 3
+ },
+ {
+ "completion_length": 463.2604236602783,
+ "epoch": 0.031840796019900496,
+ "grad_norm": 0.005912041291594505,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4904513908550143,
+ "reward_std": 0.18155438522808254,
+ "rewards/accuracy_reward": 0.4904513908550143,
+ "step": 4
+ },
+ {
+ "completion_length": 464.4531307220459,
+ "epoch": 0.03980099502487562,
+ "grad_norm": 0.006213425658643246,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.508680566214025,
+ "reward_std": 0.19439041009172797,
+ "rewards/accuracy_reward": 0.508680566214025,
+ "step": 5
+ },
+ {
+ "completion_length": 444.0486183166504,
+ "epoch": 0.04776119402985075,
+ "grad_norm": 0.00726802833378315,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5290798731148243,
+ "reward_std": 0.16257907613180578,
+ "rewards/accuracy_reward": 0.5290798731148243,
+ "step": 6
+ },
+ {
+ "completion_length": 486.56163215637207,
+ "epoch": 0.05572139303482587,
+ "grad_norm": 0.00643517728894949,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.49262153543531895,
+ "reward_std": 0.21985508035868406,
+ "rewards/accuracy_reward": 0.49262153543531895,
+ "step": 7
+ },
+ {
+ "completion_length": 499.3489570617676,
+ "epoch": 0.06368159203980099,
+ "grad_norm": 0.006929068360477686,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.48090278450399637,
+ "reward_std": 0.2359681366942823,
+ "rewards/accuracy_reward": 0.48090278450399637,
+ "step": 8
+ },
+ {
+ "completion_length": 460.5434055328369,
+ "epoch": 0.07164179104477612,
+ "grad_norm": 0.006942449603229761,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5460069458931684,
+ "reward_std": 0.20838687336072326,
+ "rewards/accuracy_reward": 0.5460069458931684,
+ "step": 9
+ },
+ {
+ "completion_length": 482.6258773803711,
+ "epoch": 0.07960199004975124,
+ "grad_norm": 0.006102504674345255,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5121527821756899,
+ "reward_std": 0.21228309627622366,
+ "rewards/accuracy_reward": 0.5121527821756899,
+ "step": 10
+ },
+ {
+ "completion_length": 472.4791660308838,
+ "epoch": 0.08756218905472637,
+ "grad_norm": 0.006756069138646126,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5095486273057759,
+ "reward_std": 0.19290994992479682,
+ "rewards/accuracy_reward": 0.5095486273057759,
+ "step": 11
+ },
+ {
+ "completion_length": 458.9878520965576,
+ "epoch": 0.0955223880597015,
+ "grad_norm": 0.00719108572229743,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4908854253590107,
+ "reward_std": 0.2253248537890613,
+ "rewards/accuracy_reward": 0.4908854253590107,
+ "step": 12
+ },
+ {
+ "completion_length": 492.2612934112549,
+ "epoch": 0.10348258706467661,
+ "grad_norm": 0.01182724628597498,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5069444486871362,
+ "reward_std": 0.1993837202899158,
+ "rewards/accuracy_reward": 0.5069444486871362,
+ "step": 13
+ },
+ {
+ "completion_length": 480.2699718475342,
+ "epoch": 0.11144278606965174,
+ "grad_norm": 0.013605108484625816,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4383680592291057,
+ "reward_std": 0.19313253066502512,
+ "rewards/accuracy_reward": 0.4383680592291057,
+ "step": 14
+ },
+ {
+ "completion_length": 474.3836898803711,
+ "epoch": 0.11940298507462686,
+ "grad_norm": 0.005726557224988937,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5199652798473835,
+ "reward_std": 0.17557879141531885,
+ "rewards/accuracy_reward": 0.5199652798473835,
+ "step": 15
+ },
+ {
+ "completion_length": 479.70920753479004,
+ "epoch": 0.12736318407960198,
+ "grad_norm": 0.0056375423446297646,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5078125055879354,
+ "reward_std": 0.1571849612519145,
+ "rewards/accuracy_reward": 0.5078125055879354,
+ "step": 16
+ },
+ {
+ "completion_length": 482.3836860656738,
+ "epoch": 0.13532338308457711,
+ "grad_norm": 0.006233118008822203,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4396701483055949,
+ "reward_std": 0.169666429865174,
+ "rewards/accuracy_reward": 0.4396701483055949,
+ "step": 17
+ },
+ {
+ "completion_length": 461.2335090637207,
+ "epoch": 0.14328358208955225,
+ "grad_norm": 0.0069457427598536015,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5221354234963655,
+ "reward_std": 0.1818335736170411,
+ "rewards/accuracy_reward": 0.5221354234963655,
+ "step": 18
+ },
+ {
+ "completion_length": 497.4105911254883,
+ "epoch": 0.15124378109452735,
+ "grad_norm": 0.005326431710273027,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.48437500186264515,
+ "reward_std": 0.1591769636142999,
+ "rewards/accuracy_reward": 0.48437500186264515,
+ "step": 19
+ },
+ {
+ "completion_length": 465.51562881469727,
+ "epoch": 0.15920398009950248,
+ "grad_norm": 0.006370695307850838,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5729166744276881,
+ "reward_std": 0.1838494308758527,
+ "rewards/accuracy_reward": 0.5729166744276881,
+ "step": 20
+ },
+ {
+ "completion_length": 456.2621555328369,
+ "epoch": 0.16716417910447762,
+ "grad_norm": 0.006426139269024134,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5256076417863369,
+ "reward_std": 0.17757255700416863,
+ "rewards/accuracy_reward": 0.5256076417863369,
+ "step": 21
+ },
+ {
+ "completion_length": 463.6510467529297,
+ "epoch": 0.17512437810945275,
+ "grad_norm": 0.006063092965632677,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.46831597946584225,
+ "reward_std": 0.17084068548865616,
+ "rewards/accuracy_reward": 0.46831597946584225,
+ "step": 22
+ },
+ {
+ "completion_length": 494.66666412353516,
+ "epoch": 0.18308457711442785,
+ "grad_norm": 0.0067391162738204,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4826388955116272,
+ "reward_std": 0.23759194742888212,
+ "rewards/accuracy_reward": 0.4826388955116272,
+ "step": 23
+ },
+ {
+ "completion_length": 450.75694847106934,
+ "epoch": 0.191044776119403,
+ "grad_norm": 0.0065076653845608234,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5755208358168602,
+ "reward_std": 0.20574828935787082,
+ "rewards/accuracy_reward": 0.5755208358168602,
+ "step": 24
+ },
+ {
+ "completion_length": 481.1545238494873,
+ "epoch": 0.19900497512437812,
+ "grad_norm": 0.05355783551931381,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.544704869389534,
+ "reward_std": 0.20792170939967036,
+ "rewards/accuracy_reward": 0.544704869389534,
+ "step": 25
+ },
+ {
+ "completion_length": 493.2404556274414,
+ "epoch": 0.20696517412935322,
+ "grad_norm": 0.005157908424735069,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4583333386108279,
+ "reward_std": 0.1656919487286359,
+ "rewards/accuracy_reward": 0.4583333386108279,
+ "step": 26
+ },
+ {
+ "completion_length": 462.6414966583252,
+ "epoch": 0.21492537313432836,
+ "grad_norm": 0.006888694129884243,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5611979179084301,
+ "reward_std": 0.1818527397699654,
+ "rewards/accuracy_reward": 0.5611979179084301,
+ "step": 27
+ },
+ {
+ "completion_length": 465.3498344421387,
+ "epoch": 0.2228855721393035,
+ "grad_norm": 0.006233619060367346,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5138888899236917,
+ "reward_std": 0.18008951703086495,
+ "rewards/accuracy_reward": 0.5138888899236917,
+ "step": 28
+ },
+ {
+ "completion_length": 491.0295162200928,
+ "epoch": 0.2308457711442786,
+ "grad_norm": 0.006753553636372089,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4531250037252903,
+ "reward_std": 0.1917171513196081,
+ "rewards/accuracy_reward": 0.4531250037252903,
+ "step": 29
+ },
+ {
+ "completion_length": 500.3993110656738,
+ "epoch": 0.23880597014925373,
+ "grad_norm": 0.007253080140799284,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4483507014811039,
+ "reward_std": 0.2557898717932403,
+ "rewards/accuracy_reward": 0.4483507014811039,
+ "step": 30
+ },
+ {
+ "completion_length": 470.89931297302246,
+ "epoch": 0.24676616915422886,
+ "grad_norm": 0.006938918959349394,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5329861100763083,
+ "reward_std": 0.20399011299014091,
+ "rewards/accuracy_reward": 0.5329861100763083,
+ "step": 31
+ },
+ {
+ "completion_length": 509.3090305328369,
+ "epoch": 0.25472636815920396,
+ "grad_norm": 0.006779938004910946,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5164930569007993,
+ "reward_std": 0.20839094324037433,
+ "rewards/accuracy_reward": 0.5164930569007993,
+ "step": 32
+ },
+ {
+ "completion_length": 474.19010734558105,
+ "epoch": 0.2626865671641791,
+ "grad_norm": 0.018072878941893578,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5529513955116272,
+ "reward_std": 0.23518243874423206,
+ "rewards/accuracy_reward": 0.5529513955116272,
+ "step": 33
+ },
+ {
+ "completion_length": 467.2725715637207,
+ "epoch": 0.27064676616915423,
+ "grad_norm": 0.006513183005154133,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5225694482214749,
+ "reward_std": 0.17238700040616095,
+ "rewards/accuracy_reward": 0.5225694482214749,
+ "step": 34
+ },
+ {
+ "completion_length": 464.1111145019531,
+ "epoch": 0.27860696517412936,
+ "grad_norm": 0.006131773814558983,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.475260422565043,
+ "reward_std": 0.17684485344216228,
+ "rewards/accuracy_reward": 0.475260422565043,
+ "step": 35
+ },
+ {
+ "completion_length": 479.2005214691162,
+ "epoch": 0.2865671641791045,
+ "grad_norm": 0.006053759716451168,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5112847229465842,
+ "reward_std": 0.16745366132818162,
+ "rewards/accuracy_reward": 0.5112847229465842,
+ "step": 36
+ },
+ {
+ "completion_length": 474.2795162200928,
+ "epoch": 0.2945273631840796,
+ "grad_norm": 0.006722339428961277,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5043402817100286,
+ "reward_std": 0.1968160946853459,
+ "rewards/accuracy_reward": 0.5043402817100286,
+ "step": 37
+ },
+ {
+ "completion_length": 494.92275047302246,
+ "epoch": 0.3024875621890547,
+ "grad_norm": 0.006237765308469534,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4839409776031971,
+ "reward_std": 0.196247041458264,
+ "rewards/accuracy_reward": 0.4839409776031971,
+ "step": 38
+ },
+ {
+ "completion_length": 467.1024341583252,
+ "epoch": 0.31044776119402984,
+ "grad_norm": 0.005437185522168875,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4986979286186397,
+ "reward_std": 0.14949205331504345,
+ "rewards/accuracy_reward": 0.4986979286186397,
+ "step": 39
+ },
+ {
+ "completion_length": 500.1310806274414,
+ "epoch": 0.31840796019900497,
+ "grad_norm": 0.006678381934762001,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5742187593132257,
+ "reward_std": 0.2146851746365428,
+ "rewards/accuracy_reward": 0.5742187593132257,
+ "step": 40
+ },
+ {
+ "completion_length": 501.9566059112549,
+ "epoch": 0.3263681592039801,
+ "grad_norm": 0.006126942578703165,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.47439236333593726,
+ "reward_std": 0.185034736758098,
+ "rewards/accuracy_reward": 0.47439236333593726,
+ "step": 41
+ },
+ {
+ "completion_length": 442.15538787841797,
+ "epoch": 0.33432835820895523,
+ "grad_norm": 0.006063259672373533,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5737847294658422,
+ "reward_std": 0.16276894812472165,
+ "rewards/accuracy_reward": 0.5737847294658422,
+ "step": 42
+ },
+ {
+ "completion_length": 480.30816078186035,
+ "epoch": 0.34228855721393037,
+ "grad_norm": 0.035705793648958206,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5386284776031971,
+ "reward_std": 0.2133699539117515,
+ "rewards/accuracy_reward": 0.5386284776031971,
+ "step": 43
+ },
+ {
+ "completion_length": 477.8211860656738,
+ "epoch": 0.3502487562189055,
+ "grad_norm": 0.007010245230048895,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4769965298473835,
+ "reward_std": 0.17710949736647308,
+ "rewards/accuracy_reward": 0.4769965298473835,
+ "step": 44
+ },
+ {
+ "completion_length": 462.7925453186035,
+ "epoch": 0.3582089552238806,
+ "grad_norm": 0.007418088149279356,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5724826455116272,
+ "reward_std": 0.2099497839808464,
+ "rewards/accuracy_reward": 0.5724826455116272,
+ "step": 45
+ },
+ {
+ "completion_length": 470.4783020019531,
+ "epoch": 0.3661691542288557,
+ "grad_norm": 0.007191660813987255,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5125868069007993,
+ "reward_std": 0.19454265222884715,
+ "rewards/accuracy_reward": 0.5125868069007993,
+ "step": 46
+ },
+ {
+ "completion_length": 470.67709159851074,
+ "epoch": 0.37412935323383084,
+ "grad_norm": 0.006237688474357128,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5516493115574121,
+ "reward_std": 0.1919041178189218,
+ "rewards/accuracy_reward": 0.5516493115574121,
+ "step": 47
+ },
+ {
+ "completion_length": 467.4557342529297,
+ "epoch": 0.382089552238806,
+ "grad_norm": 0.005811754148453474,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4904513917863369,
+ "reward_std": 0.1371547463349998,
+ "rewards/accuracy_reward": 0.4904513917863369,
+ "step": 48
+ },
+ {
+ "completion_length": 465.5026092529297,
+ "epoch": 0.3900497512437811,
+ "grad_norm": 0.02730730175971985,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.47222222574055195,
+ "reward_std": 0.17791698593646288,
+ "rewards/accuracy_reward": 0.47222222574055195,
+ "step": 49
+ },
+ {
+ "completion_length": 478.4783020019531,
+ "epoch": 0.39800995024875624,
+ "grad_norm": 0.0069628264755010605,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.46788195008412004,
+ "reward_std": 0.1892542012501508,
+ "rewards/accuracy_reward": 0.46788195008412004,
+ "step": 50
+ },
+ {
+ "completion_length": 468.75087547302246,
+ "epoch": 0.4059701492537313,
+ "grad_norm": 0.008021087385714054,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4926215335726738,
+ "reward_std": 0.22644896060228348,
+ "rewards/accuracy_reward": 0.4926215335726738,
+ "step": 51
+ },
+ {
+ "completion_length": 498.5251808166504,
+ "epoch": 0.41393034825870645,
+ "grad_norm": 0.0064814710058271885,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.49218750232830644,
+ "reward_std": 0.17422490078024566,
+ "rewards/accuracy_reward": 0.49218750232830644,
+ "step": 52
+ },
+ {
+ "completion_length": 482.2578182220459,
+ "epoch": 0.4218905472636816,
+ "grad_norm": 0.005841956939548254,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4978298582136631,
+ "reward_std": 0.1794568970799446,
+ "rewards/accuracy_reward": 0.4978298582136631,
+ "step": 53
+ },
+ {
+ "completion_length": 470.75,
+ "epoch": 0.4298507462686567,
+ "grad_norm": 0.006833434570580721,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5460069440305233,
+ "reward_std": 0.17903496301732957,
+ "rewards/accuracy_reward": 0.5460069440305233,
+ "step": 54
+ },
+ {
+ "completion_length": 468.85677337646484,
+ "epoch": 0.43781094527363185,
+ "grad_norm": 0.05871806666254997,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5507812574505806,
+ "reward_std": 0.17186261457391083,
+ "rewards/accuracy_reward": 0.5507812574505806,
+ "step": 55
+ },
+ {
+ "completion_length": 446.83854484558105,
+ "epoch": 0.445771144278607,
+ "grad_norm": 0.006716958247125149,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5321180680766702,
+ "reward_std": 0.1578453336842358,
+ "rewards/accuracy_reward": 0.5321180680766702,
+ "step": 56
+ },
+ {
+ "completion_length": 449.37153244018555,
+ "epoch": 0.4537313432835821,
+ "grad_norm": 0.006345037836581469,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.547309035435319,
+ "reward_std": 0.1532673817127943,
+ "rewards/accuracy_reward": 0.547309035435319,
+ "step": 57
+ },
+ {
+ "completion_length": 466.7022590637207,
+ "epoch": 0.4616915422885572,
+ "grad_norm": 0.010852845385670662,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5611979253590107,
+ "reward_std": 0.1392110399901867,
+ "rewards/accuracy_reward": 0.5611979253590107,
+ "step": 58
+ },
+ {
+ "completion_length": 474.2534770965576,
+ "epoch": 0.4696517412935323,
+ "grad_norm": 0.006359429098665714,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.476562503259629,
+ "reward_std": 0.17301563546061516,
+ "rewards/accuracy_reward": 0.476562503259629,
+ "step": 59
+ },
+ {
+ "completion_length": 451.5312557220459,
+ "epoch": 0.47761194029850745,
+ "grad_norm": 0.007138302084058523,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.45269097946584225,
+ "reward_std": 0.19592578150331974,
+ "rewards/accuracy_reward": 0.45269097946584225,
+ "step": 60
+ },
+ {
+ "completion_length": 474.8932342529297,
+ "epoch": 0.4855721393034826,
+ "grad_norm": 0.006120054051280022,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4661458348855376,
+ "reward_std": 0.1684757302282378,
+ "rewards/accuracy_reward": 0.4661458348855376,
+ "step": 61
+ },
+ {
+ "completion_length": 469.56510734558105,
+ "epoch": 0.4935323383084577,
+ "grad_norm": 0.006903901230543852,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5125868087634444,
+ "reward_std": 0.20010435581207275,
+ "rewards/accuracy_reward": 0.5125868087634444,
+ "step": 62
+ },
+ {
+ "completion_length": 453.6145877838135,
+ "epoch": 0.5014925373134328,
+ "grad_norm": 0.0065674264915287495,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5347222294658422,
+ "reward_std": 0.14352841977961361,
+ "rewards/accuracy_reward": 0.5347222294658422,
+ "step": 63
+ },
+ {
+ "completion_length": 452.13455390930176,
+ "epoch": 0.5094527363184079,
+ "grad_norm": 0.006244272459298372,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5711805606260896,
+ "reward_std": 0.18628281145356596,
+ "rewards/accuracy_reward": 0.5711805606260896,
+ "step": 64
+ },
+ {
+ "completion_length": 453.6111125946045,
+ "epoch": 0.5174129353233831,
+ "grad_norm": 0.005630127154290676,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5633680566679686,
+ "reward_std": 0.13176781288348138,
+ "rewards/accuracy_reward": 0.5633680566679686,
+ "step": 65
+ },
+ {
+ "completion_length": 454.0685787200928,
+ "epoch": 0.5253731343283582,
+ "grad_norm": 0.006047699134796858,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5169270820915699,
+ "reward_std": 0.17424573958851397,
+ "rewards/accuracy_reward": 0.5169270820915699,
+ "step": 66
+ },
+ {
+ "completion_length": 467.2552089691162,
+ "epoch": 0.5333333333333333,
+ "grad_norm": 0.005964316893368959,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5078125037252903,
+ "reward_std": 0.1600186654832214,
+ "rewards/accuracy_reward": 0.5078125037252903,
+ "step": 67
+ },
+ {
+ "completion_length": 475.27604484558105,
+ "epoch": 0.5412935323383085,
+ "grad_norm": 0.007332879584282637,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5798611175268888,
+ "reward_std": 0.21456625079736114,
+ "rewards/accuracy_reward": 0.5798611175268888,
+ "step": 68
+ },
+ {
+ "completion_length": 457.26041984558105,
+ "epoch": 0.5492537313432836,
+ "grad_norm": 0.006759831681847572,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.48784722620621324,
+ "reward_std": 0.17781772650778294,
+ "rewards/accuracy_reward": 0.48784722620621324,
+ "step": 69
+ },
+ {
+ "completion_length": 449.7126770019531,
+ "epoch": 0.5572139303482587,
+ "grad_norm": 0.006766649428755045,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4978298647329211,
+ "reward_std": 0.1652932451106608,
+ "rewards/accuracy_reward": 0.4978298647329211,
+ "step": 70
+ },
+ {
+ "completion_length": 487.57726097106934,
+ "epoch": 0.5651741293532339,
+ "grad_norm": 0.007064457051455975,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4340277826413512,
+ "reward_std": 0.15119286242406815,
+ "rewards/accuracy_reward": 0.4340277826413512,
+ "step": 71
+ },
+ {
+ "completion_length": 425.9496593475342,
+ "epoch": 0.573134328358209,
+ "grad_norm": 0.007826046086847782,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5941840391606092,
+ "reward_std": 0.1813936980906874,
+ "rewards/accuracy_reward": 0.5941840391606092,
+ "step": 72
+ },
+ {
+ "completion_length": 469.32900047302246,
+ "epoch": 0.5810945273631841,
+ "grad_norm": 0.0057061235420405865,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5503472303971648,
+ "reward_std": 0.1599263979587704,
+ "rewards/accuracy_reward": 0.5503472303971648,
+ "step": 73
+ },
+ {
+ "completion_length": 420.5616340637207,
+ "epoch": 0.5890547263681593,
+ "grad_norm": 0.0060322158969938755,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.6349826510995626,
+ "reward_std": 0.14059699326753616,
+ "rewards/accuracy_reward": 0.6349826510995626,
+ "step": 74
+ },
+ {
+ "completion_length": 433.3845520019531,
+ "epoch": 0.5970149253731343,
+ "grad_norm": 0.00608411431312561,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5863715335726738,
+ "reward_std": 0.1365173237863928,
+ "rewards/accuracy_reward": 0.5863715335726738,
+ "step": 75
+ },
+ {
+ "completion_length": 438.9913215637207,
+ "epoch": 0.6049751243781094,
+ "grad_norm": 0.010838707908987999,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.4947916679084301,
+ "reward_std": 0.16913633281365037,
+ "rewards/accuracy_reward": 0.4947916679084301,
+ "step": 76
+ },
+ {
+ "completion_length": 436.12413787841797,
+ "epoch": 0.6129353233830845,
+ "grad_norm": 0.0062843686901032925,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5416666697710752,
+ "reward_std": 0.13451572181656957,
+ "rewards/accuracy_reward": 0.5416666697710752,
+ "step": 77
+ },
+ {
+ "completion_length": 434.7196216583252,
+ "epoch": 0.6208955223880597,
+ "grad_norm": 0.007163195870816708,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5685763917863369,
+ "reward_std": 0.16896466561593115,
+ "rewards/accuracy_reward": 0.5685763917863369,
+ "step": 78
+ },
+ {
+ "completion_length": 425.51388931274414,
+ "epoch": 0.6288557213930348,
+ "grad_norm": 0.007178325206041336,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5069444458931684,
+ "reward_std": 0.17618598695844412,
+ "rewards/accuracy_reward": 0.5069444458931684,
+ "step": 79
+ },
+ {
+ "completion_length": 447.2517433166504,
+ "epoch": 0.6368159203980099,
+ "grad_norm": 0.00694573950022459,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5230034776031971,
+ "reward_std": 0.1816186774522066,
+ "rewards/accuracy_reward": 0.5230034776031971,
+ "step": 80
+ },
+ {
+ "completion_length": 440.2196235656738,
+ "epoch": 0.6447761194029851,
+ "grad_norm": 0.0065235113725066185,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.596354179084301,
+ "reward_std": 0.1535110066179186,
+ "rewards/accuracy_reward": 0.596354179084301,
+ "step": 81
+ },
+ {
+ "completion_length": 444.88368797302246,
+ "epoch": 0.6527363184079602,
+ "grad_norm": 0.006391232833266258,
+ "learning_rate": 1e-06,
+ "loss": 0.0,
+ "reward": 0.5659722285345197,
+ "reward_std": 0.15076033398509026,
+ "rewards/accuracy_reward": 0.5659722285345197,
+ "step": 82
912
+ },
913
+ {
914
+ "completion_length": 455.1119842529297,
915
+ "epoch": 0.6606965174129353,
916
+ "grad_norm": 0.005797598976641893,
917
+ "learning_rate": 1e-06,
918
+ "loss": 0.0,
919
+ "reward": 0.5381944524124265,
920
+ "reward_std": 0.14913518843241036,
921
+ "rewards/accuracy_reward": 0.5381944524124265,
922
+ "step": 83
923
+ },
924
+ {
925
+ "completion_length": 454.94879150390625,
926
+ "epoch": 0.6686567164179105,
927
+ "grad_norm": 0.006953164003789425,
928
+ "learning_rate": 1e-06,
929
+ "loss": 0.0,
930
+ "reward": 0.5164930578321218,
931
+ "reward_std": 0.16644407669082284,
932
+ "rewards/accuracy_reward": 0.5164930578321218,
933
+ "step": 84
934
+ },
935
+ {
936
+ "completion_length": 454.31945037841797,
937
+ "epoch": 0.6766169154228856,
938
+ "grad_norm": 0.006526515819132328,
939
+ "learning_rate": 1e-06,
940
+ "loss": 0.0,
941
+ "reward": 0.5381944517139345,
942
+ "reward_std": 0.17874250491149724,
943
+ "rewards/accuracy_reward": 0.5381944517139345,
944
+ "step": 85
945
+ },
946
+ {
947
+ "completion_length": 413.63195037841797,
948
+ "epoch": 0.6845771144278607,
949
+ "grad_norm": 0.006882105953991413,
950
+ "learning_rate": 1e-06,
951
+ "loss": 0.0,
952
+ "reward": 0.4709201497025788,
953
+ "reward_std": 0.15343356225639582,
954
+ "rewards/accuracy_reward": 0.4709201497025788,
955
+ "step": 86
956
+ },
957
+ {
958
+ "completion_length": 477.28386306762695,
959
+ "epoch": 0.6925373134328359,
960
+ "grad_norm": 0.007171040400862694,
961
+ "learning_rate": 1e-06,
962
+ "loss": 0.0,
963
+ "reward": 0.4626736184582114,
964
+ "reward_std": 0.17724345158785582,
965
+ "rewards/accuracy_reward": 0.4626736184582114,
966
+ "step": 87
967
+ },
968
+ {
969
+ "completion_length": 440.93230056762695,
970
+ "epoch": 0.700497512437811,
971
+ "grad_norm": 0.012264563702046871,
972
+ "learning_rate": 1e-06,
973
+ "loss": 0.0,
974
+ "reward": 0.539930559694767,
975
+ "reward_std": 0.2249910207465291,
976
+ "rewards/accuracy_reward": 0.539930559694767,
977
+ "step": 88
978
+ },
979
+ {
980
+ "completion_length": 446.6111183166504,
981
+ "epoch": 0.708457711442786,
982
+ "grad_norm": 0.007123625837266445,
983
+ "learning_rate": 1e-06,
984
+ "loss": 0.0,
985
+ "reward": 0.5677083414047956,
986
+ "reward_std": 0.14550525438971817,
987
+ "rewards/accuracy_reward": 0.5677083414047956,
988
+ "step": 89
989
+ },
990
+ {
991
+ "completion_length": 443.06945419311523,
992
+ "epoch": 0.7164179104477612,
993
+ "grad_norm": 0.006765348371118307,
994
+ "learning_rate": 1e-06,
995
+ "loss": 0.0,
996
+ "reward": 0.5434027817100286,
997
+ "reward_std": 0.14721272652968764,
998
+ "rewards/accuracy_reward": 0.5434027817100286,
999
+ "step": 90
1000
+ },
1001
+ {
1002
+ "completion_length": 438.7204875946045,
1003
+ "epoch": 0.7243781094527363,
1004
+ "grad_norm": 0.007179305423051119,
1005
+ "learning_rate": 1e-06,
1006
+ "loss": 0.0,
1007
+ "reward": 0.5785590391606092,
1008
+ "reward_std": 0.1922429515980184,
1009
+ "rewards/accuracy_reward": 0.5785590391606092,
1010
+ "step": 91
1011
+ },
1012
+ {
1013
+ "completion_length": 435.45659828186035,
1014
+ "epoch": 0.7323383084577114,
1015
+ "grad_norm": 0.03627901151776314,
1016
+ "learning_rate": 1e-06,
1017
+ "loss": 0.0,
1018
+ "reward": 0.5690104262903333,
1019
+ "reward_std": 0.20257419790141284,
1020
+ "rewards/accuracy_reward": 0.5690104262903333,
1021
+ "step": 92
1022
+ },
1023
+ {
1024
+ "completion_length": 450.00347328186035,
1025
+ "epoch": 0.7402985074626866,
1026
+ "grad_norm": 0.0072410209104418755,
1027
+ "learning_rate": 1e-06,
1028
+ "loss": 0.0,
1029
+ "reward": 0.45876736706122756,
1030
+ "reward_std": 0.1358756278641522,
1031
+ "rewards/accuracy_reward": 0.45876736706122756,
1032
+ "step": 93
1033
+ },
1034
+ {
1035
+ "completion_length": 445.3550338745117,
1036
+ "epoch": 0.7482587064676617,
1037
+ "grad_norm": 0.006944851018488407,
1038
+ "learning_rate": 1e-06,
1039
+ "loss": 0.0,
1040
+ "reward": 0.5486111212521791,
1041
+ "reward_std": 0.14192489348351955,
1042
+ "rewards/accuracy_reward": 0.5486111212521791,
1043
+ "step": 94
1044
+ },
1045
+ {
1046
+ "completion_length": 411.4861125946045,
1047
+ "epoch": 0.7562189054726368,
1048
+ "grad_norm": 0.007500652689486742,
1049
+ "learning_rate": 1e-06,
1050
+ "loss": 0.0,
1051
+ "reward": 0.5546875,
1052
+ "reward_std": 0.16345718037337065,
1053
+ "rewards/accuracy_reward": 0.5546875,
1054
+ "step": 95
1055
+ },
1056
+ {
1057
+ "completion_length": 434.6458377838135,
1058
+ "epoch": 0.764179104477612,
1059
+ "grad_norm": 0.008221461437642574,
1060
+ "learning_rate": 1e-06,
1061
+ "loss": 0.0,
1062
+ "reward": 0.5169270858168602,
1063
+ "reward_std": 0.157486256910488,
1064
+ "rewards/accuracy_reward": 0.5169270858168602,
1065
+ "step": 96
1066
+ },
1067
+ {
1068
+ "completion_length": 459.9088592529297,
1069
+ "epoch": 0.7721393034825871,
1070
+ "grad_norm": 0.008421896025538445,
1071
+ "learning_rate": 1e-06,
1072
+ "loss": 0.0,
1073
+ "reward": 0.4192708367481828,
1074
+ "reward_std": 0.16910302848555148,
1075
+ "rewards/accuracy_reward": 0.4192708367481828,
1076
+ "step": 97
1077
+ },
1078
+ {
1079
+ "completion_length": 441.1692752838135,
1080
+ "epoch": 0.7800995024875622,
1081
+ "grad_norm": 0.00576035724952817,
1082
+ "learning_rate": 1e-06,
1083
+ "loss": 0.0,
1084
+ "reward": 0.5894097305135801,
1085
+ "reward_std": 0.09535378613509238,
1086
+ "rewards/accuracy_reward": 0.5894097305135801,
1087
+ "step": 98
1088
+ },
1089
+ {
1090
+ "completion_length": 423.37239837646484,
1091
+ "epoch": 0.7880597014925373,
1092
+ "grad_norm": 0.007169231306761503,
1093
+ "learning_rate": 1e-06,
1094
+ "loss": 0.0,
1095
+ "reward": 0.5520833367481828,
1096
+ "reward_std": 0.17009410122409463,
1097
+ "rewards/accuracy_reward": 0.5520833367481828,
1098
+ "step": 99
1099
+ },
1100
+ {
1101
+ "completion_length": 447.38107681274414,
1102
+ "epoch": 0.7960199004975125,
1103
+ "grad_norm": 0.006430391687899828,
1104
+ "learning_rate": 1e-06,
1105
+ "loss": 0.0,
1106
+ "reward": 0.478298619389534,
1107
+ "reward_std": 0.1553269592113793,
1108
+ "rewards/accuracy_reward": 0.478298619389534,
1109
+ "step": 100
1110
+ },
1111
+ {
1112
+ "completion_length": 434.5590305328369,
1113
+ "epoch": 0.8039800995024876,
1114
+ "grad_norm": 0.006948183756321669,
1115
+ "learning_rate": 1e-06,
1116
+ "loss": 0.0,
1117
+ "reward": 0.6111111119389534,
1118
+ "reward_std": 0.16623280476778746,
1119
+ "rewards/accuracy_reward": 0.6111111119389534,
1120
+ "step": 101
1121
+ },
1122
+ {
1123
+ "completion_length": 438.5807304382324,
1124
+ "epoch": 0.8119402985074626,
1125
+ "grad_norm": 0.00873401015996933,
1126
+ "learning_rate": 1e-06,
1127
+ "loss": 0.0,
1128
+ "reward": 0.554253475740552,
1129
+ "reward_std": 0.12938219658099115,
1130
+ "rewards/accuracy_reward": 0.554253475740552,
1131
+ "step": 102
1132
+ },
1133
+ {
1134
+ "completion_length": 440.63368225097656,
1135
+ "epoch": 0.8199004975124378,
1136
+ "grad_norm": 0.008209417574107647,
1137
+ "learning_rate": 1e-06,
1138
+ "loss": 0.0,
1139
+ "reward": 0.5355902817100286,
1140
+ "reward_std": 0.15484802844002843,
1141
+ "rewards/accuracy_reward": 0.5355902817100286,
1142
+ "step": 103
1143
+ },
1144
+ {
1145
+ "completion_length": 430.292537689209,
1146
+ "epoch": 0.8278606965174129,
1147
+ "grad_norm": 0.006974893156439066,
1148
+ "learning_rate": 1e-06,
1149
+ "loss": 0.0,
1150
+ "reward": 0.5351562616415322,
1151
+ "reward_std": 0.15955675020813942,
1152
+ "rewards/accuracy_reward": 0.5351562616415322,
1153
+ "step": 104
1154
+ },
1155
+ {
1156
+ "completion_length": 437.16406631469727,
1157
+ "epoch": 0.835820895522388,
1158
+ "grad_norm": 0.006820361595600843,
1159
+ "learning_rate": 1e-06,
1160
+ "loss": 0.0,
1161
+ "reward": 0.5325520914047956,
1162
+ "reward_std": 0.13739590148907155,
1163
+ "rewards/accuracy_reward": 0.5325520914047956,
1164
+ "step": 105
1165
+ },
1166
+ {
1167
+ "completion_length": 440.88281440734863,
1168
+ "epoch": 0.8437810945273632,
1169
+ "grad_norm": 0.007815063931047916,
1170
+ "learning_rate": 1e-06,
1171
+ "loss": 0.0,
1172
+ "reward": 0.5546875055879354,
1173
+ "reward_std": 0.16753076948225498,
1174
+ "rewards/accuracy_reward": 0.5546875055879354,
1175
+ "step": 106
1176
+ },
1177
+ {
1178
+ "completion_length": 440.1380310058594,
1179
+ "epoch": 0.8517412935323383,
1180
+ "grad_norm": 0.006534726824611425,
1181
+ "learning_rate": 1e-06,
1182
+ "loss": 0.0,
1183
+ "reward": 0.5225694496184587,
1184
+ "reward_std": 0.14320432161912322,
1185
+ "rewards/accuracy_reward": 0.5225694496184587,
1186
+ "step": 107
1187
+ },
1188
+ {
1189
+ "completion_length": 438.36284828186035,
1190
+ "epoch": 0.8597014925373134,
1191
+ "grad_norm": 0.008502320386469364,
1192
+ "learning_rate": 1e-06,
1193
+ "loss": 0.0,
1194
+ "reward": 0.5377604169771075,
1195
+ "reward_std": 0.16942514828406274,
1196
+ "rewards/accuracy_reward": 0.5377604169771075,
1197
+ "step": 108
1198
+ },
1199
+ {
1200
+ "completion_length": 433.4192771911621,
1201
+ "epoch": 0.8676616915422886,
1202
+ "grad_norm": 0.007831891067326069,
1203
+ "learning_rate": 1e-06,
1204
+ "loss": 0.0,
1205
+ "reward": 0.5112847285345197,
1206
+ "reward_std": 0.1527788401581347,
1207
+ "rewards/accuracy_reward": 0.5112847285345197,
1208
+ "step": 109
1209
+ },
1210
+ {
1211
+ "completion_length": 432.8906307220459,
1212
+ "epoch": 0.8756218905472637,
1213
+ "grad_norm": 0.007177690044045448,
1214
+ "learning_rate": 1e-06,
1215
+ "loss": 0.0,
1216
+ "reward": 0.5451388992369175,
1217
+ "reward_std": 0.1557791151572019,
1218
+ "rewards/accuracy_reward": 0.5451388992369175,
1219
+ "step": 110
1220
+ },
1221
+ {
1222
+ "completion_length": 444.034725189209,
1223
+ "epoch": 0.8835820895522388,
1224
+ "grad_norm": 0.007437328342348337,
1225
+ "learning_rate": 1e-06,
1226
+ "loss": 0.0,
1227
+ "reward": 0.5230034766718745,
1228
+ "reward_std": 0.16494862362742424,
1229
+ "rewards/accuracy_reward": 0.5230034766718745,
1230
+ "step": 111
1231
+ },
1232
+ {
1233
+ "completion_length": 431.6883716583252,
1234
+ "epoch": 0.891542288557214,
1235
+ "grad_norm": 0.007314461283385754,
1236
+ "learning_rate": 1e-06,
1237
+ "loss": 0.0,
1238
+ "reward": 0.5512152872979641,
1239
+ "reward_std": 0.15400438173674047,
1240
+ "rewards/accuracy_reward": 0.5512152872979641,
1241
+ "step": 112
1242
+ },
1243
+ {
1244
+ "completion_length": 417.52257347106934,
1245
+ "epoch": 0.8995024875621891,
1246
+ "grad_norm": 0.006493464577943087,
1247
+ "learning_rate": 1e-06,
1248
+ "loss": 0.0,
1249
+ "reward": 0.5915798712521791,
1250
+ "reward_std": 0.10944067395757884,
1251
+ "rewards/accuracy_reward": 0.5915798712521791,
1252
+ "step": 113
1253
+ },
1254
+ {
1255
+ "completion_length": 428.136287689209,
1256
+ "epoch": 0.9074626865671642,
1257
+ "grad_norm": 0.00739770894870162,
1258
+ "learning_rate": 1e-06,
1259
+ "loss": 0.0,
1260
+ "reward": 0.5920138908550143,
1261
+ "reward_std": 0.14007845730520785,
1262
+ "rewards/accuracy_reward": 0.5920138908550143,
1263
+ "step": 114
1264
+ },
1265
+ {
1266
+ "completion_length": 411.75086975097656,
1267
+ "epoch": 0.9154228855721394,
1268
+ "grad_norm": 0.012020394206047058,
1269
+ "learning_rate": 1e-06,
1270
+ "loss": 0.0,
1271
+ "reward": 0.5802951483055949,
1272
+ "reward_std": 0.1439069788902998,
1273
+ "rewards/accuracy_reward": 0.5802951483055949,
1274
+ "step": 115
1275
+ },
1276
+ {
1277
+ "completion_length": 440.9071216583252,
1278
+ "epoch": 0.9233830845771144,
1279
+ "grad_norm": 0.008760242722928524,
1280
+ "learning_rate": 1e-06,
1281
+ "loss": 0.0,
1282
+ "reward": 0.5147569524124265,
1283
+ "reward_std": 0.18262152979150414,
1284
+ "rewards/accuracy_reward": 0.5147569524124265,
1285
+ "step": 116
1286
+ },
1287
+ {
1288
+ "completion_length": 426.1597270965576,
1289
+ "epoch": 0.9313432835820895,
1290
+ "grad_norm": 0.008847164921462536,
1291
+ "learning_rate": 1e-06,
1292
+ "loss": 0.0,
1293
+ "reward": 0.5130208432674408,
1294
+ "reward_std": 0.14971066592261195,
1295
+ "rewards/accuracy_reward": 0.5130208432674408,
1296
+ "step": 117
1297
+ },
1298
+ {
1299
+ "completion_length": 403.698787689209,
1300
+ "epoch": 0.9393034825870646,
1301
+ "grad_norm": 0.00826039258390665,
1302
+ "learning_rate": 1e-06,
1303
+ "loss": 0.0,
1304
+ "reward": 0.5503472248092294,
1305
+ "reward_std": 0.13589490437880158,
1306
+ "rewards/accuracy_reward": 0.5503472248092294,
1307
+ "step": 118
1308
+ },
1309
+ {
1310
+ "completion_length": 430.99479484558105,
1311
+ "epoch": 0.9472636815920398,
1312
+ "grad_norm": 0.0083442572504282,
1313
+ "learning_rate": 1e-06,
1314
+ "loss": 0.0,
1315
+ "reward": 0.4691840326413512,
1316
+ "reward_std": 0.12581401481293142,
1317
+ "rewards/accuracy_reward": 0.4691840326413512,
1318
+ "step": 119
1319
+ },
1320
+ {
1321
+ "completion_length": 428.2230930328369,
1322
+ "epoch": 0.9552238805970149,
1323
+ "grad_norm": 0.00736872386187315,
1324
+ "learning_rate": 1e-06,
1325
+ "loss": 0.0,
1326
+ "reward": 0.5703124995925464,
1327
+ "reward_std": 0.13862833217717707,
1328
+ "rewards/accuracy_reward": 0.5703124995925464,
1329
+ "step": 120
1330
+ },
1331
+ {
1332
+ "completion_length": 456.17274475097656,
1333
+ "epoch": 0.96318407960199,
1334
+ "grad_norm": 0.007083515170961618,
1335
+ "learning_rate": 1e-06,
1336
+ "loss": 0.0,
1337
+ "reward": 0.5169270895421505,
1338
+ "reward_std": 0.1359748471295461,
1339
+ "rewards/accuracy_reward": 0.5169270895421505,
1340
+ "step": 121
1341
+ },
1342
+ {
1343
+ "completion_length": 437.1475715637207,
1344
+ "epoch": 0.9711442786069652,
1345
+ "grad_norm": 0.012382179498672485,
1346
+ "learning_rate": 1e-06,
1347
+ "loss": 0.0,
1348
+ "reward": 0.5082465298473835,
1349
+ "reward_std": 0.14862926630303264,
1350
+ "rewards/accuracy_reward": 0.5082465298473835,
1351
+ "step": 122
1352
+ },
1353
+ {
1354
+ "completion_length": 423.105037689209,
1355
+ "epoch": 0.9791044776119403,
1356
+ "grad_norm": 0.0077844285406172276,
1357
+ "learning_rate": 1e-06,
1358
+ "loss": 0.0,
1359
+ "reward": 0.514322922565043,
1360
+ "reward_std": 0.14436206291429698,
1361
+ "rewards/accuracy_reward": 0.514322922565043,
1362
+ "step": 123
1363
+ },
1364
+ {
1365
+ "completion_length": 408.1849002838135,
1366
+ "epoch": 0.9870646766169154,
1367
+ "grad_norm": 0.008202475495636463,
1368
+ "learning_rate": 1e-06,
1369
+ "loss": 0.0,
1370
+ "reward": 0.5703125111758709,
1371
+ "reward_std": 0.12822659336961806,
1372
+ "rewards/accuracy_reward": 0.5703125111758709,
1373
+ "step": 124
1374
+ },
1375
+ {
1376
+ "completion_length": 402.60243797302246,
1377
+ "epoch": 0.9950248756218906,
1378
+ "grad_norm": 0.007635696791112423,
1379
+ "learning_rate": 1e-06,
1380
+ "loss": 0.0,
1381
+ "reward": 0.6449652845039964,
1382
+ "reward_std": 0.14181664236821234,
1383
+ "rewards/accuracy_reward": 0.6449652845039964,
1384
+ "step": 125
1385
+ },
1386
+ {
1387
+ "epoch": 0.9950248756218906,
1388
+ "step": 125,
1389
+ "total_flos": 0.0,
1390
+ "train_loss": 1.7134473857538523e-08,
1391
+ "train_runtime": 32762.3532,
1392
+ "train_samples_per_second": 0.368,
1393
+ "train_steps_per_second": 0.004
1394
+ }
1395
+ ],
1396
+ "logging_steps": 1,
1397
+ "max_steps": 125,
1398
+ "num_input_tokens_seen": 0,
1399
+ "num_train_epochs": 1,
1400
+ "save_steps": 10,
1401
+ "stateful_callbacks": {
1402
+ "TrainerControl": {
1403
+ "args": {
1404
+ "should_epoch_stop": false,
1405
+ "should_evaluate": false,
1406
+ "should_log": false,
1407
+ "should_save": true,
1408
+ "should_training_stop": true
1409
+ },
1410
+ "attributes": {}
1411
+ }
1412
+ },
1413
+ "total_flos": 0.0,
1414
+ "train_batch_size": 1,
1415
+ "trial_name": null,
1416
+ "trial_params": null
1417
+ }