Haitao999 committed on
Commit f78eb31 · verified · 1 Parent(s): fca943a

Model save
README.md ADDED
@@ -0,0 +1,67 @@
+ ---
+ library_name: transformers
+ model_name: Qwen2.5-7B-EMPO-NM-COT-20K-2epoch
+ tags:
+ - generated_from_trainer
+ - trl
+ - grpo
+ licence: license
+ ---
+
+ # Model Card for Qwen2.5-7B-EMPO-NM-COT-20K-2epoch
+
+ This model is a fine-tuned version of an unspecified base model (the base model was not recorded in the card metadata).
+ It has been trained using [TRL](https://github.com/huggingface/trl).
+
+ ## Quick start
+
+ ```python
+ from transformers import pipeline
+
+ question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+ generator = pipeline("text-generation", model="Haitao999/Qwen2.5-7B-EMPO-NM-COT-20K-2epoch", device="cuda")
+ output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+ print(output["generated_text"])
+ ```
+
+ ## Training procedure
+
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/tjucsailab/huggingface/runs/9tiixwvm)
+
+ This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
+
+ ### Framework versions
+
+ - TRL: 0.14.0
+ - Transformers: 4.48.3
+ - Pytorch: 2.5.1+cu124
+ - Datasets: 3.1.0
+ - Tokenizers: 0.21.0
+
+ ## Citations
+
+ Cite GRPO as:
+
+ ```bibtex
+ @article{zhihong2024deepseekmath,
+ title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
+ author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
+ year = 2024,
+ eprint = {arXiv:2402.03300},
+ }
+ ```
+
+ Cite TRL as:
+
+ ```bibtex
+ @misc{vonwerra2022trl,
+ title = {{TRL: Transformer Reinforcement Learning}},
+ author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
+ year = 2020,
+ journal = {GitHub repository},
+ publisher = {GitHub},
+ howpublished = {\url{https://github.com/huggingface/trl}}
+ }
+ ```
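The GRPO method cited above trains without a learned value model: each prompt gets a group of sampled completions, and each completion's reward is normalized against the rest of its group. The following is a minimal, illustrative sketch of that group-relative advantage computation, not the actual TRL implementation:

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-8):
    """Normalize rewards within one prompt's group of completions.

    Each completion's advantage is its reward minus the group mean,
    divided by the group standard deviation (GRPO-style baseline).
    """
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Rewards for 4 completions sampled from the same prompt.
advs = group_relative_advantages([1.0, 0.0, 1.0, 0.0])
print(advs)  # roughly [1.0, -1.0, 1.0, -1.0]; advantages sum to zero
```

Because the baseline is the group mean, advantages always sum to zero within a group, which is what makes a separate critic unnecessary.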
all_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "total_flos": 0.0,
+ "train_loss": 2.8008765204773544e-08,
+ "train_runtime": 57265.8095,
+ "train_samples": 20000,
+ "train_samples_per_second": 0.349,
+ "train_steps_per_second": 0.002
+ }
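The throughput fields in this file are derivable from one another: `train_samples_per_second` is just samples divided by wall-clock runtime. A quick sanity check using the values above:

```python
# Values copied from all_results.json above.
train_runtime = 57265.8095  # seconds of wall-clock training time
train_samples = 20000

# Reported train_samples_per_second = samples / runtime.
samples_per_second = train_samples / train_runtime
print(round(samples_per_second, 3))  # 0.349, matching the reported value
```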
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+ "bos_token_id": 151643,
+ "eos_token_id": 151643,
+ "max_new_tokens": 2048,
+ "transformers_version": "4.48.3"
+ }
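These defaults are applied whenever the model generates without an explicit generation config. A stdlib-only sketch of reading such a file (contents inlined here for illustration rather than loaded from disk; the shared BOS/EOS id is typical of Qwen2.5-style tokenizers, which reuse one end-of-text token for both roles):

```python
import json

# generation_config.json contents, inlined for illustration.
config_text = """
{
  "bos_token_id": 151643,
  "eos_token_id": 151643,
  "max_new_tokens": 2048,
  "transformers_version": "4.48.3"
}
"""

cfg = json.loads(config_text)

# BOS and EOS deliberately share one token id here.
assert cfg["bos_token_id"] == cfg["eos_token_id"] == 151643
print(cfg["max_new_tokens"])  # 2048
```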
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "total_flos": 0.0,
+ "train_loss": 2.8008765204773544e-08,
+ "train_runtime": 57265.8095,
+ "train_samples": 20000,
+ "train_samples_per_second": 0.349,
+ "train_steps_per_second": 0.002
+ }
trainer_state.json ADDED
@@ -0,0 +1,1199 @@
+ {
+ "best_metric": null,
+ "best_model_checkpoint": null,
+ "epoch": 0.9965010496850945,
+ "eval_steps": 100,
+ "global_step": 89,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "completion_length": 719.0248603820801,
+ "epoch": 0.01119664100769769,
+ "grad_norm": 5.308396339416504,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.7022594679147005,
+ "reward_std": 0.11167733068577945,
+ "rewards/accuracy_reward": 0.4343112222850323,
+ "rewards/semantic_entropy_math_reward": 0.7022594679147005,
+ "rewards/total_entropy_reward": 1.2448364309966564,
+ "step": 1
+ },
+ {
+ "completion_length": 697.6454010009766,
+ "epoch": 0.02239328201539538,
+ "grad_norm": 3.310378313064575,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6712827906012535,
+ "reward_std": 0.1011071486864239,
+ "rewards/accuracy_reward": 0.352040808647871,
+ "rewards/semantic_entropy_math_reward": 0.6712827868759632,
+ "rewards/total_entropy_reward": 1.305861696600914,
+ "step": 2
+ },
+ {
+ "completion_length": 701.3826370239258,
+ "epoch": 0.03358992302309307,
+ "grad_norm": 4.717713356018066,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.7075437195599079,
+ "reward_std": 0.07735519809648395,
+ "rewards/accuracy_reward": 0.4126275437884033,
+ "rewards/semantic_entropy_math_reward": 0.7075437270104885,
+ "rewards/total_entropy_reward": 1.2551036067306995,
+ "step": 3
+ },
+ {
+ "completion_length": 692.5803489685059,
+ "epoch": 0.04478656403079076,
+ "grad_norm": 4.699649810791016,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6926020309329033,
+ "reward_std": 0.09249642631039023,
+ "rewards/accuracy_reward": 0.4196428433060646,
+ "rewards/semantic_entropy_math_reward": 0.6926020495593548,
+ "rewards/total_entropy_reward": 1.2794943004846573,
+ "step": 4
+ },
+ {
+ "completion_length": 643.8430976867676,
+ "epoch": 0.05598320503848846,
+ "grad_norm": 7.118838310241699,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.7232142686843872,
+ "reward_std": 0.09167159674689174,
+ "rewards/accuracy_reward": 0.41326529532670975,
+ "rewards/semantic_entropy_math_reward": 0.7232142575085163,
+ "rewards/total_entropy_reward": 1.2130939476191998,
+ "step": 5
+ },
+ {
+ "completion_length": 711.6734580993652,
+ "epoch": 0.06717984604618614,
+ "grad_norm": 4.829713344573975,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6625364199280739,
+ "reward_std": 0.1073409270029515,
+ "rewards/accuracy_reward": 0.37436223588883877,
+ "rewards/semantic_entropy_math_reward": 0.6625364273786545,
+ "rewards/total_entropy_reward": 1.320713147521019,
+ "step": 6
+ },
+ {
+ "completion_length": 676.0541915893555,
+ "epoch": 0.07837648705388384,
+ "grad_norm": 4.374125003814697,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.7006195187568665,
+ "reward_std": 0.09755392652004957,
+ "rewards/accuracy_reward": 0.4247448882088065,
+ "rewards/semantic_entropy_math_reward": 0.7006195187568665,
+ "rewards/total_entropy_reward": 1.2528423443436623,
+ "step": 7
+ },
+ {
+ "completion_length": 712.5401573181152,
+ "epoch": 0.08957312806158152,
+ "grad_norm": 6.558979034423828,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.633381923660636,
+ "reward_std": 0.10544528882019222,
+ "rewards/accuracy_reward": 0.3463010173290968,
+ "rewards/semantic_entropy_math_reward": 0.6333819553256035,
+ "rewards/total_entropy_reward": 1.3834920637309551,
+ "step": 8
+ },
+ {
+ "completion_length": 711.6345539093018,
+ "epoch": 0.10076976906927922,
+ "grad_norm": 7.621776103973389,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6800291426479816,
+ "reward_std": 0.10248321201652288,
+ "rewards/accuracy_reward": 0.39604590833187103,
+ "rewards/semantic_entropy_math_reward": 0.6800291538238525,
+ "rewards/total_entropy_reward": 1.2968212738633156,
+ "step": 9
+ },
+ {
+ "completion_length": 674.1779174804688,
+ "epoch": 0.11196641007697691,
+ "grad_norm": 7.523331165313721,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6800291538238525,
+ "reward_std": 0.11024063220247626,
+ "rewards/accuracy_reward": 0.411352033726871,
+ "rewards/semantic_entropy_math_reward": 0.6800291500985622,
+ "rewards/total_entropy_reward": 1.2904038280248642,
+ "step": 10
+ },
+ {
+ "completion_length": 741.4017696380615,
+ "epoch": 0.1231630510846746,
+ "grad_norm": 6.336475372314453,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6415816266089678,
+ "reward_std": 0.10356245189905167,
+ "rewards/accuracy_reward": 0.3858418306335807,
+ "rewards/semantic_entropy_math_reward": 0.6415816303342581,
+ "rewards/total_entropy_reward": 1.363376997411251,
+ "step": 11
+ },
+ {
+ "completion_length": 692.8437347412109,
+ "epoch": 0.13435969209237228,
+ "grad_norm": 7.185462474822998,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.7002550885081291,
+ "reward_std": 0.10777880600653589,
+ "rewards/accuracy_reward": 0.43048468697816133,
+ "rewards/semantic_entropy_math_reward": 0.7002551108598709,
+ "rewards/total_entropy_reward": 1.2493381686508656,
+ "step": 12
+ },
+ {
+ "completion_length": 704.6638870239258,
+ "epoch": 0.14555633310007,
+ "grad_norm": 17.740726470947266,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.673833817243576,
+ "reward_std": 0.09210015833377838,
+ "rewards/accuracy_reward": 0.43558672815561295,
+ "rewards/semantic_entropy_math_reward": 0.6738338358700275,
+ "rewards/total_entropy_reward": 1.3185552880167961,
+ "step": 13
+ },
+ {
+ "completion_length": 679.9374847412109,
+ "epoch": 0.15675297410776767,
+ "grad_norm": 9.5872802734375,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.7075437121093273,
+ "reward_std": 0.09575178450904787,
+ "rewards/accuracy_reward": 0.40624999068677425,
+ "rewards/semantic_entropy_math_reward": 0.7075437419116497,
+ "rewards/total_entropy_reward": 1.237540539354086,
+ "step": 14
+ },
+ {
+ "completion_length": 731.2404136657715,
+ "epoch": 0.16794961511546536,
+ "grad_norm": 4.687215805053711,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6375728864222765,
+ "reward_std": 0.108485147356987,
+ "rewards/accuracy_reward": 0.3762755021452904,
+ "rewards/semantic_entropy_math_reward": 0.6375729013234377,
+ "rewards/total_entropy_reward": 1.3838917911052704,
+ "step": 15
+ },
+ {
+ "completion_length": 700.0522842407227,
+ "epoch": 0.17914625612316304,
+ "grad_norm": 8.418819427490234,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6547011584043503,
+ "reward_std": 0.10533164534717798,
+ "rewards/accuracy_reward": 0.40624999161809683,
+ "rewards/semantic_entropy_math_reward": 0.6547011733055115,
+ "rewards/total_entropy_reward": 1.3433180004358292,
+ "step": 16
+ },
+ {
+ "completion_length": 684.034423828125,
+ "epoch": 0.19034289713086075,
+ "grad_norm": 7.032946586608887,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.664905235171318,
+ "reward_std": 0.0974241562653333,
+ "rewards/accuracy_reward": 0.3832908095791936,
+ "rewards/semantic_entropy_math_reward": 0.6649052500724792,
+ "rewards/total_entropy_reward": 1.3396263718605042,
+ "step": 17
+ },
+ {
+ "completion_length": 677.9177227020264,
+ "epoch": 0.20153953813855843,
+ "grad_norm": 15.37868595123291,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6975218504667282,
+ "reward_std": 0.09150373586453497,
+ "rewards/accuracy_reward": 0.45153060369193554,
+ "rewards/semantic_entropy_math_reward": 0.69752187281847,
+ "rewards/total_entropy_reward": 1.262941613793373,
+ "step": 18
+ },
+ {
+ "completion_length": 683.5771484375,
+ "epoch": 0.21273617914625612,
+ "grad_norm": 36.41266632080078,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.7057215645909309,
+ "reward_std": 0.09910683194175363,
+ "rewards/accuracy_reward": 0.4368622377514839,
+ "rewards/semantic_entropy_math_reward": 0.7057215794920921,
+ "rewards/total_entropy_reward": 1.2459207847714424,
+ "step": 19
+ },
+ {
+ "completion_length": 688.8073768615723,
+ "epoch": 0.22393282015395383,
+ "grad_norm": 27.405065536499023,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.7097303122282028,
+ "reward_std": 0.09203298413194716,
+ "rewards/accuracy_reward": 0.4598214225843549,
+ "rewards/semantic_entropy_math_reward": 0.7097303308546543,
+ "rewards/total_entropy_reward": 1.2417229264974594,
+ "step": 20
+ },
+ {
+ "completion_length": 736.3092956542969,
+ "epoch": 0.2351294611616515,
+ "grad_norm": 19.091941833496094,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6541544962674379,
+ "reward_std": 0.0986680502537638,
+ "rewards/accuracy_reward": 0.4228316266089678,
+ "rewards/semantic_entropy_math_reward": 0.6541545186191797,
+ "rewards/total_entropy_reward": 1.3527532257139683,
+ "step": 21
+ },
+ {
+ "completion_length": 709.4795722961426,
+ "epoch": 0.2463261021693492,
+ "grad_norm": 14.640216827392578,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6836734637618065,
+ "reward_std": 0.10867427196353674,
+ "rewards/accuracy_reward": 0.46301019564270973,
+ "rewards/semantic_entropy_math_reward": 0.6836734749376774,
+ "rewards/total_entropy_reward": 1.2770788073539734,
+ "step": 22
+ },
+ {
+ "completion_length": 681.6218032836914,
+ "epoch": 0.2575227431770469,
+ "grad_norm": 13.511483192443848,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6967929936945438,
+ "reward_std": 0.09672949556261301,
+ "rewards/accuracy_reward": 0.44579080305993557,
+ "rewards/semantic_entropy_math_reward": 0.6967929899692535,
+ "rewards/total_entropy_reward": 1.2772413976490498,
+ "step": 23
+ },
+ {
+ "completion_length": 705.6811141967773,
+ "epoch": 0.26871938418474456,
+ "grad_norm": 15.547083854675293,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6461370252072811,
+ "reward_std": 0.1031385101377964,
+ "rewards/accuracy_reward": 0.3985969265922904,
+ "rewards/semantic_entropy_math_reward": 0.6461370065808296,
+ "rewards/total_entropy_reward": 1.3622385039925575,
+ "step": 24
+ },
+ {
+ "completion_length": 731.4566116333008,
+ "epoch": 0.27991602519244224,
+ "grad_norm": 8.434446334838867,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6233600564301014,
+ "reward_std": 0.08827707398450002,
+ "rewards/accuracy_reward": 0.3494897885248065,
+ "rewards/semantic_entropy_math_reward": 0.6233600713312626,
+ "rewards/total_entropy_reward": 1.4261119738221169,
+ "step": 25
+ },
+ {
+ "completion_length": 685.5197486877441,
+ "epoch": 0.29111266620014,
+ "grad_norm": 12.586360931396484,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6765670366585255,
+ "reward_std": 0.1049469755962491,
+ "rewards/accuracy_reward": 0.412627543322742,
+ "rewards/semantic_entropy_math_reward": 0.6765670329332352,
+ "rewards/total_entropy_reward": 1.3074346259236336,
+ "step": 26
+ },
+ {
+ "completion_length": 711.061840057373,
+ "epoch": 0.30230930720783766,
+ "grad_norm": 9.39128303527832,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6521501429378986,
+ "reward_std": 0.1099298931658268,
+ "rewards/accuracy_reward": 0.40624999068677425,
+ "rewards/semantic_entropy_math_reward": 0.6521501652896404,
+ "rewards/total_entropy_reward": 1.3500806987285614,
+ "step": 27
+ },
+ {
+ "completion_length": 707.5612106323242,
+ "epoch": 0.31350594821553535,
+ "grad_norm": 8.279614448547363,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6692784316837788,
+ "reward_std": 0.10041180578991771,
+ "rewards/accuracy_reward": 0.36096938233822584,
+ "rewards/semantic_entropy_math_reward": 0.6692784205079079,
+ "rewards/total_entropy_reward": 1.3163355849683285,
+ "step": 28
+ },
+ {
+ "completion_length": 675.4157962799072,
+ "epoch": 0.32470258922323303,
+ "grad_norm": 13.983796119689941,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.7091836705803871,
+ "reward_std": 0.09643913782201707,
+ "rewards/accuracy_reward": 0.478316318243742,
+ "rewards/semantic_entropy_math_reward": 0.7091836743056774,
+ "rewards/total_entropy_reward": 1.2364819720387459,
+ "step": 29
+ },
+ {
+ "completion_length": 723.9189872741699,
+ "epoch": 0.3358992302309307,
+ "grad_norm": 13.94961166381836,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.614431481808424,
+ "reward_std": 0.12222768180072308,
+ "rewards/accuracy_reward": 0.3533163210377097,
+ "rewards/semantic_entropy_math_reward": 0.614431481808424,
+ "rewards/total_entropy_reward": 1.4096959978342056,
+ "step": 30
+ },
+ {
+ "completion_length": 721.4604339599609,
+ "epoch": 0.3470958712386284,
+ "grad_norm": 7.932522773742676,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6603498533368111,
+ "reward_std": 0.09537219302728772,
+ "rewards/accuracy_reward": 0.3992346851155162,
+ "rewards/semantic_entropy_math_reward": 0.6603498607873917,
+ "rewards/total_entropy_reward": 1.3325160779058933,
+ "step": 31
+ },
+ {
+ "completion_length": 702.3788185119629,
+ "epoch": 0.3582925122463261,
+ "grad_norm": 9.096883773803711,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6829446069896221,
+ "reward_std": 0.09760210802778602,
+ "rewards/accuracy_reward": 0.40816325694322586,
+ "rewards/semantic_entropy_math_reward": 0.682944618165493,
+ "rewards/total_entropy_reward": 1.303523451089859,
+ "step": 32
+ },
+ {
+ "completion_length": 700.5924644470215,
+ "epoch": 0.3694891532540238,
+ "grad_norm": 3.891953229904175,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.7055393569171429,
+ "reward_std": 0.10629667551256716,
+ "rewards/accuracy_reward": 0.45982141979038715,
+ "rewards/semantic_entropy_math_reward": 0.7055393569171429,
+ "rewards/total_entropy_reward": 1.2391385585069656,
+ "step": 33
+ },
+ {
+ "completion_length": 716.7027950286865,
+ "epoch": 0.3806857942617215,
+ "grad_norm": 12.71116828918457,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6698250789195299,
+ "reward_std": 0.09906701685395092,
+ "rewards/accuracy_reward": 0.40369897056370974,
+ "rewards/semantic_entropy_math_reward": 0.6698250807821751,
+ "rewards/total_entropy_reward": 1.3271235637366772,
+ "step": 34
+ },
+ {
+ "completion_length": 769.0414352416992,
+ "epoch": 0.3918824352694192,
+ "grad_norm": 9.855749130249023,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6481414064764977,
+ "reward_std": 0.10145364608615637,
+ "rewards/accuracy_reward": 0.3743622349575162,
+ "rewards/semantic_entropy_math_reward": 0.6481414288282394,
+ "rewards/total_entropy_reward": 1.3565044924616814,
+ "step": 35
+ },
+ {
+ "completion_length": 722.349479675293,
+ "epoch": 0.40307907627711687,
+ "grad_norm": 5.131447792053223,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6638119556009769,
+ "reward_std": 0.09085186710581183,
+ "rewards/accuracy_reward": 0.4170918297022581,
+ "rewards/semantic_entropy_math_reward": 0.6638119481503963,
+ "rewards/total_entropy_reward": 1.350273534655571,
+ "step": 36
+ },
+ {
+ "completion_length": 703.8832778930664,
+ "epoch": 0.41427571728481455,
+ "grad_norm": 7.057519435882568,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.690597664564848,
+ "reward_std": 0.09433076065033674,
+ "rewards/accuracy_reward": 0.43686223309487104,
+ "rewards/semantic_entropy_math_reward": 0.6905976496636868,
+ "rewards/total_entropy_reward": 1.2801040560007095,
+ "step": 37
+ },
+ {
+ "completion_length": 725.2799606323242,
+ "epoch": 0.42547235829251223,
+ "grad_norm": 7.460778713226318,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6669096164405346,
+ "reward_std": 0.10552376857958734,
+ "rewards/accuracy_reward": 0.41326529532670975,
+ "rewards/semantic_entropy_math_reward": 0.6669096313416958,
+ "rewards/total_entropy_reward": 1.3323625773191452,
+ "step": 38
+ },
+ {
+ "completion_length": 725.7671966552734,
+ "epoch": 0.4366689993002099,
+ "grad_norm": 7.124528408050537,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6472303047776222,
+ "reward_std": 0.10429512546397746,
+ "rewards/accuracy_reward": 0.3909438718110323,
+ "rewards/semantic_entropy_math_reward": 0.6472303159534931,
+ "rewards/total_entropy_reward": 1.3546394035220146,
+ "step": 39
+ },
+ {
+ "completion_length": 697.7423439025879,
+ "epoch": 0.44786564030790765,
+ "grad_norm": 7.065954208374023,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.7193877585232258,
+ "reward_std": 0.09428266366012394,
+ "rewards/accuracy_reward": 0.4336734600365162,
+ "rewards/semantic_entropy_math_reward": 0.7193877547979355,
+ "rewards/total_entropy_reward": 1.2143822945654392,
+ "step": 40
+ },
+ {
+ "completion_length": 703.3673400878906,
+ "epoch": 0.45906228131560534,
+ "grad_norm": 6.764792442321777,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.706632649526,
+ "reward_std": 0.0942279752343893,
+ "rewards/accuracy_reward": 0.4649234591051936,
+ "rewards/semantic_entropy_math_reward": 0.7066326681524515,
+ "rewards/total_entropy_reward": 1.238373503088951,
+ "step": 41
+ },
+ {
+ "completion_length": 651.4183578491211,
+ "epoch": 0.470258922323303,
+ "grad_norm": 16.47541618347168,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6813046522438526,
+ "reward_std": 0.10622056131251156,
+ "rewards/accuracy_reward": 0.43239795230329037,
+ "rewards/semantic_entropy_math_reward": 0.681304682046175,
+ "rewards/total_entropy_reward": 1.284136950969696,
+ "step": 42
+ },
+ {
+ "completion_length": 743.2646636962891,
+ "epoch": 0.4814555633310007,
+ "grad_norm": 15.174323081970215,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6525145824998617,
+ "reward_std": 0.10325121926143765,
+ "rewards/accuracy_reward": 0.3947704015299678,
+ "rewards/semantic_entropy_math_reward": 0.652514586225152,
+ "rewards/total_entropy_reward": 1.358573641628027,
+ "step": 43
+ },
+ {
+ "completion_length": 725.5962905883789,
+ "epoch": 0.4926522043386984,
+ "grad_norm": 7.421966075897217,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6867711264640093,
+ "reward_std": 0.09842351987026632,
+ "rewards/accuracy_reward": 0.4323979504406452,
+ "rewards/semantic_entropy_math_reward": 0.6867711190134287,
+ "rewards/total_entropy_reward": 1.2839920222759247,
+ "step": 44
+ },
+ {
+ "completion_length": 698.1390113830566,
+ "epoch": 0.5038488453463961,
+ "grad_norm": 9.836834907531738,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6851311810314655,
+ "reward_std": 0.10096711455844343,
+ "rewards/accuracy_reward": 0.44196427427232265,
+ "rewards/semantic_entropy_math_reward": 0.6851312182843685,
+ "rewards/total_entropy_reward": 1.290800966322422,
+ "step": 45
+ },
+ {
+ "completion_length": 737.8379898071289,
+ "epoch": 0.5150454863540938,
+ "grad_norm": 13.060182571411133,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6867711432278156,
+ "reward_std": 0.10890168650075793,
+ "rewards/accuracy_reward": 0.403061218559742,
+ "rewards/semantic_entropy_math_reward": 0.6867711097002029,
+ "rewards/total_entropy_reward": 1.2737670093774796,
+ "step": 46
+ },
+ {
+ "completion_length": 720.0937309265137,
+ "epoch": 0.5262421273617914,
+ "grad_norm": 17.424467086791992,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6268221419304609,
+ "reward_std": 0.11281977966427803,
+ "rewards/accuracy_reward": 0.39668366219848394,
+ "rewards/semantic_entropy_math_reward": 0.6268221419304609,
+ "rewards/total_entropy_reward": 1.396425575017929,
+ "step": 47
+ },
+ {
+ "completion_length": 730.9495983123779,
+ "epoch": 0.5374387683694891,
+ "grad_norm": 7.613661766052246,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6333818975836039,
+ "reward_std": 0.09289166564121842,
+ "rewards/accuracy_reward": 0.38201529905200005,
+ "rewards/semantic_entropy_math_reward": 0.6333819199353456,
+ "rewards/total_entropy_reward": 1.3891723416745663,
+ "step": 48
+ },
+ {
+ "completion_length": 730.4712867736816,
+ "epoch": 0.5486354093771868,
+ "grad_norm": 11.16511344909668,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.628097664564848,
+ "reward_std": 0.10562112857587636,
+ "rewards/accuracy_reward": 0.3871173355728388,
+ "rewards/semantic_entropy_math_reward": 0.6280976608395576,
+ "rewards/total_entropy_reward": 1.3984056264162064,
+ "step": 49
+ },
+ {
+ "completion_length": 700.4126129150391,
+ "epoch": 0.5598320503848845,
+ "grad_norm": 7.883020401000977,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6543367393314838,
+ "reward_std": 0.08848634106107056,
+ "rewards/accuracy_reward": 0.43494897056370974,
+ "rewards/semantic_entropy_math_reward": 0.6543367449194193,
+ "rewards/total_entropy_reward": 1.368666134774685,
+ "step": 50
+ },
+ {
+ "completion_length": 726.035701751709,
+ "epoch": 0.5710286913925823,
+ "grad_norm": 16.26348114013672,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6665451787412167,
+ "reward_std": 0.09329186473041773,
+ "rewards/accuracy_reward": 0.40114795323461294,
+ "rewards/semantic_entropy_math_reward": 0.6665451973676682,
+ "rewards/total_entropy_reward": 1.3389556668698788,
+ "step": 51
+ },
+ {
+ "completion_length": 719.875617980957,
+ "epoch": 0.58222533240028,
+ "grad_norm": 10.366020202636719,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6474125292152166,
+ "reward_std": 0.10870802192948759,
+ "rewards/accuracy_reward": 0.3934948956593871,
+ "rewards/semantic_entropy_math_reward": 0.6474125385284424,
+ "rewards/total_entropy_reward": 1.3537504002451897,
+ "step": 52
+ },
+ {
+ "completion_length": 691.538890838623,
+ "epoch": 0.5934219734079776,
+ "grad_norm": 10.94919204711914,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6860422715544701,
+ "reward_std": 0.0946744061075151,
+ "rewards/accuracy_reward": 0.489158159121871,
+ "rewards/semantic_entropy_math_reward": 0.6860422790050507,
+ "rewards/total_entropy_reward": 1.3007621616125107,
+ "step": 53
+ },
+ {
+ "completion_length": 705.672176361084,
+ "epoch": 0.6046186144156753,
+ "grad_norm": 23.8171329498291,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6762026119977236,
+ "reward_std": 0.09615424065850675,
+ "rewards/accuracy_reward": 0.39668366592377424,
+ "rewards/semantic_entropy_math_reward": 0.6762026362121105,
+ "rewards/total_entropy_reward": 1.3121707029640675,
+ "step": 54
+ },
+ {
+ "completion_length": 677.1664428710938,
+ "epoch": 0.615815255423373,
+ "grad_norm": 9.031086921691895,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6896865852177143,
+ "reward_std": 0.1015757208224386,
+ "rewards/accuracy_reward": 0.4317601965740323,
+ "rewards/semantic_entropy_math_reward": 0.6896865479648113,
+ "rewards/total_entropy_reward": 1.287308655679226,
+ "step": 55
+ },
+ {
+ "completion_length": 713.0114631652832,
+ "epoch": 0.6270118964310707,
+ "grad_norm": 10.871789932250977,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6656341031193733,
+ "reward_std": 0.11424232507124543,
+ "rewards/accuracy_reward": 0.4221938643604517,
+ "rewards/semantic_entropy_math_reward": 0.6656341068446636,
+ "rewards/total_entropy_reward": 1.3229641020298004,
+ "step": 56
+ },
+ {
+ "completion_length": 677.5694923400879,
+ "epoch": 0.6382085374387684,
+ "grad_norm": 8.904267311096191,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6729227248579264,
+ "reward_std": 0.10780132608488202,
+ "rewards/accuracy_reward": 0.4381377436220646,
+ "rewards/semantic_entropy_math_reward": 0.6729227565228939,
+ "rewards/total_entropy_reward": 1.3122014850378036,
+ "step": 57
+ },
+ {
+ "completion_length": 687.1345500946045,
+ "epoch": 0.6494051784464661,
+ "grad_norm": 19.945981979370117,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6621720213443041,
+ "reward_std": 0.10787450219504535,
+ "rewards/accuracy_reward": 0.4183673388324678,
+ "rewards/semantic_entropy_math_reward": 0.6621720027178526,
+ "rewards/total_entropy_reward": 1.3265771567821503,
+ "step": 58
+ },
+ {
+ "completion_length": 673.6224384307861,
+ "epoch": 0.6606018194541637,
+ "grad_norm": 25.872291564941406,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6900510117411613,
+ "reward_std": 0.09328114939853549,
+ "rewards/accuracy_reward": 0.3998724389821291,
+ "rewards/semantic_entropy_math_reward": 0.6900510005652905,
+ "rewards/total_entropy_reward": 1.2932880148291588,
+ "step": 59
+ },
+ {
+ "completion_length": 712.2027912139893,
+ "epoch": 0.6717984604618614,
+ "grad_norm": 12.222248077392578,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6426749210804701,
+ "reward_std": 0.09614149574190378,
+ "rewards/accuracy_reward": 0.41007652156986296,
+ "rewards/semantic_entropy_math_reward": 0.6426749173551798,
+ "rewards/total_entropy_reward": 1.383428543806076,
+ "step": 60
+ },
+ {
+ "completion_length": 739.6472969055176,
+ "epoch": 0.6829951014695591,
+ "grad_norm": 9.738767623901367,
+ "learning_rate": 3e-06,
+ "loss": 0.0,
+ "reward": 0.6273687966167927,
+ "reward_std": 0.11152111599221826,
+ "rewards/accuracy_reward": 0.34885203186422586,
+ "rewards/semantic_entropy_math_reward": 0.6273688077926636,
+ "rewards/total_entropy_reward": 1.3971327617764473,
+ "step": 61
+ },
+ {
+ "completion_length": 685.2659378051758,
+ "epoch": 0.6941917424772568,
+ "grad_norm": 20.52920913696289,
+ "learning_rate": 3e-06,
809
+ "loss": 0.0,
810
+ "reward": 0.6596209742128849,
811
+ "reward_std": 0.10549976932816207,
812
+ "rewards/accuracy_reward": 0.43622447922825813,
813
+ "rewards/semantic_entropy_math_reward": 0.6596209909766912,
814
+ "rewards/total_entropy_reward": 1.3266954682767391,
815
+ "step": 62
816
+ },
817
+ {
818
+ "completion_length": 684.1128730773926,
819
+ "epoch": 0.7053883834849545,
820
+ "grad_norm": 13.253375053405762,
821
+ "learning_rate": 3e-06,
822
+ "loss": 0.0,
823
+ "reward": 0.6740160267800093,
824
+ "reward_std": 0.11020976235158741,
825
+ "rewards/accuracy_reward": 0.4419642761349678,
826
+ "rewards/semantic_entropy_math_reward": 0.6740160342305899,
827
+ "rewards/total_entropy_reward": 1.295092262327671,
828
+ "step": 63
829
+ },
830
+ {
831
+ "completion_length": 702.8647766113281,
832
+ "epoch": 0.7165850244926522,
833
+ "grad_norm": 11.436138153076172,
834
+ "learning_rate": 3e-06,
835
+ "loss": 0.0,
836
+ "reward": 0.6612609177827835,
837
+ "reward_std": 0.0923373675905168,
838
+ "rewards/accuracy_reward": 0.43239795230329037,
839
+ "rewards/semantic_entropy_math_reward": 0.661260936409235,
840
+ "rewards/total_entropy_reward": 1.3441792502999306,
841
+ "step": 64
842
+ },
843
+ {
844
+ "completion_length": 713.3341636657715,
845
+ "epoch": 0.72778166550035,
846
+ "grad_norm": 15.771661758422852,
847
+ "learning_rate": 3e-06,
848
+ "loss": 0.0,
849
+ "reward": 0.6313775349408388,
850
+ "reward_std": 0.1141419094055891,
851
+ "rewards/accuracy_reward": 0.4139030510559678,
852
+ "rewards/semantic_entropy_math_reward": 0.6313775237649679,
853
+ "rewards/total_entropy_reward": 1.3911906033754349,
854
+ "step": 65
855
+ },
856
+ {
857
+ "completion_length": 691.0420761108398,
858
+ "epoch": 0.7389783065080476,
859
+ "grad_norm": 9.556193351745605,
860
+ "learning_rate": 3e-06,
861
+ "loss": 0.0,
862
+ "reward": 0.6576165966689587,
863
+ "reward_std": 0.10697318986058235,
864
+ "rewards/accuracy_reward": 0.3386479513719678,
865
+ "rewards/semantic_entropy_math_reward": 0.657616626471281,
866
+ "rewards/total_entropy_reward": 1.3421761691570282,
867
+ "step": 66
868
+ },
869
+ {
870
+ "completion_length": 688.1179618835449,
871
+ "epoch": 0.7501749475157453,
872
+ "grad_norm": 18.490732192993164,
873
+ "learning_rate": 3e-06,
874
+ "loss": 0.0,
875
+ "reward": 0.6712827831506729,
876
+ "reward_std": 0.10525703080929816,
877
+ "rewards/accuracy_reward": 0.445790808647871,
878
+ "rewards/semantic_entropy_math_reward": 0.6712828055024147,
879
+ "rewards/total_entropy_reward": 1.3154872134327888,
880
+ "step": 67
881
+ },
882
+ {
883
+ "completion_length": 647.132007598877,
884
+ "epoch": 0.761371588523443,
885
+ "grad_norm": 7.94115686416626,
886
+ "learning_rate": 3e-06,
887
+ "loss": 0.0,
888
+ "reward": 0.6920553781092167,
889
+ "reward_std": 0.09936971426941454,
890
+ "rewards/accuracy_reward": 0.43176019564270973,
891
+ "rewards/semantic_entropy_math_reward": 0.6920554004609585,
892
+ "rewards/total_entropy_reward": 1.2725396156311035,
893
+ "step": 68
894
+ },
895
+ {
896
+ "completion_length": 699.4005012512207,
897
+ "epoch": 0.7725682295311407,
898
+ "grad_norm": 12.146166801452637,
899
+ "learning_rate": 3e-06,
900
+ "loss": 0.0,
901
+ "reward": 0.6652696616947651,
902
+ "reward_std": 0.11806112038902938,
903
+ "rewards/accuracy_reward": 0.409438768401742,
904
+ "rewards/semantic_entropy_math_reward": 0.6652696691453457,
905
+ "rewards/total_entropy_reward": 1.3024558648467064,
906
+ "step": 69
907
+ },
908
+ {
909
+ "completion_length": 667.9285659790039,
910
+ "epoch": 0.7837648705388384,
911
+ "grad_norm": 15.953444480895996,
912
+ "learning_rate": 3e-06,
913
+ "loss": 0.0,
914
+ "reward": 0.679846927523613,
915
+ "reward_std": 0.09083303948864341,
916
+ "rewards/accuracy_reward": 0.4311224427074194,
917
+ "rewards/semantic_entropy_math_reward": 0.6798469312489033,
918
+ "rewards/total_entropy_reward": 1.3062404170632362,
919
+ "step": 70
920
+ },
921
+ {
922
+ "completion_length": 705.4400367736816,
923
+ "epoch": 0.794961511546536,
924
+ "grad_norm": 12.320276260375977,
925
+ "learning_rate": 3e-06,
926
+ "loss": 0.0,
927
+ "reward": 0.6556122340261936,
928
+ "reward_std": 0.09324299031868577,
929
+ "rewards/accuracy_reward": 0.37946427892893553,
930
+ "rewards/semantic_entropy_math_reward": 0.6556122545152903,
931
+ "rewards/total_entropy_reward": 1.3581575378775597,
932
+ "step": 71
933
+ },
934
+ {
935
+ "completion_length": 654.1511325836182,
936
+ "epoch": 0.8061581525542337,
937
+ "grad_norm": 22.0692138671875,
938
+ "learning_rate": 3e-06,
939
+ "loss": 0.0,
940
+ "reward": 0.6794824972748756,
941
+ "reward_std": 0.10328507493250072,
942
+ "rewards/accuracy_reward": 0.4209183603525162,
943
+ "rewards/semantic_entropy_math_reward": 0.6794825121760368,
944
+ "rewards/total_entropy_reward": 1.2847715727984905,
945
+ "step": 72
946
+ },
947
+ {
948
+ "completion_length": 690.308666229248,
949
+ "epoch": 0.8173547935619314,
950
+ "grad_norm": 10.22040843963623,
951
+ "learning_rate": 3e-06,
952
+ "loss": 0.0,
953
+ "reward": 0.6780247762799263,
954
+ "reward_std": 0.105113809928298,
955
+ "rewards/accuracy_reward": 0.45918366499245167,
956
+ "rewards/semantic_entropy_math_reward": 0.678024772554636,
957
+ "rewards/total_entropy_reward": 1.2880272567272186,
958
+ "step": 73
959
+ },
960
+ {
961
+ "completion_length": 696.0803337097168,
962
+ "epoch": 0.8285514345696291,
963
+ "grad_norm": 20.899675369262695,
964
+ "learning_rate": 3e-06,
965
+ "loss": 0.0,
966
+ "reward": 0.6516034882515669,
967
+ "reward_std": 0.10480827221181244,
968
+ "rewards/accuracy_reward": 0.3852040730416775,
969
+ "rewards/semantic_entropy_math_reward": 0.6516034882515669,
970
+ "rewards/total_entropy_reward": 1.3282338082790375,
971
+ "step": 74
972
+ },
973
+ {
974
+ "completion_length": 649.8654270172119,
975
+ "epoch": 0.8397480755773268,
976
+ "grad_norm": 9.469245910644531,
977
+ "learning_rate": 3e-06,
978
+ "loss": 0.0,
979
+ "reward": 0.7246719934046268,
980
+ "reward_std": 0.08447014342527837,
981
+ "rewards/accuracy_reward": 0.433035708963871,
982
+ "rewards/semantic_entropy_math_reward": 0.7246720045804977,
983
+ "rewards/total_entropy_reward": 1.220706269145012,
984
+ "step": 75
985
+ },
986
+ {
987
+ "completion_length": 746.2142791748047,
988
+ "epoch": 0.8509447165850245,
989
+ "grad_norm": 10.530217170715332,
990
+ "learning_rate": 3e-06,
991
+ "loss": 0.0,
992
+ "reward": 0.639395035803318,
993
+ "reward_std": 0.10488821647595614,
994
+ "rewards/accuracy_reward": 0.40816325321793556,
995
+ "rewards/semantic_entropy_math_reward": 0.6393950544297695,
996
+ "rewards/total_entropy_reward": 1.3760143592953682,
997
+ "step": 76
998
+ },
999
+ {
1000
+ "completion_length": 655.6836585998535,
1001
+ "epoch": 0.8621413575927221,
1002
+ "grad_norm": 14.1907958984375,
1003
+ "learning_rate": 3e-06,
1004
+ "loss": 0.0,
1005
+ "reward": 0.7228498421609402,
1006
+ "reward_std": 0.09376341942697763,
1007
+ "rewards/accuracy_reward": 0.4674744727090001,
1008
+ "rewards/semantic_entropy_math_reward": 0.7228498421609402,
1009
+ "rewards/total_entropy_reward": 1.2132117934525013,
1010
+ "step": 77
1011
+ },
1012
+ {
1013
+ "completion_length": 672.8137626647949,
1014
+ "epoch": 0.8733379986004198,
1015
+ "grad_norm": 13.5209379196167,
1016
+ "learning_rate": 3e-06,
1017
+ "loss": 0.0,
1018
+ "reward": 0.652332354336977,
1019
+ "reward_std": 0.09460025187581778,
1020
+ "rewards/accuracy_reward": 0.3858418334275484,
1021
+ "rewards/semantic_entropy_math_reward": 0.6523323617875576,
1022
+ "rewards/total_entropy_reward": 1.351367250084877,
1023
+ "step": 78
1024
+ },
1025
+ {
1026
+ "completion_length": 696.9342880249023,
1027
+ "epoch": 0.8845346396081175,
1028
+ "grad_norm": 17.302839279174805,
1029
+ "learning_rate": 3e-06,
1030
+ "loss": 0.0,
1031
+ "reward": 0.6514212656766176,
1032
+ "reward_std": 0.09626807877793908,
1033
+ "rewards/accuracy_reward": 0.4062499897554517,
1034
+ "rewards/semantic_entropy_math_reward": 0.651421282440424,
1035
+ "rewards/total_entropy_reward": 1.366665042936802,
1036
+ "step": 79
1037
+ },
1038
+ {
1039
+ "completion_length": 682.3073806762695,
1040
+ "epoch": 0.8957312806158153,
1041
+ "grad_norm": 9.640515327453613,
1042
+ "learning_rate": 3e-06,
1043
+ "loss": 0.0,
1044
+ "reward": 0.6767492536455393,
1045
+ "reward_std": 0.10075846454128623,
1046
+ "rewards/accuracy_reward": 0.42028060276061296,
1047
+ "rewards/semantic_entropy_math_reward": 0.6767492648214102,
1048
+ "rewards/total_entropy_reward": 1.3104709684848785,
1049
+ "step": 80
1050
+ },
1051
+ {
1052
+ "completion_length": 722.4725646972656,
1053
+ "epoch": 0.906927921623513,
1054
+ "grad_norm": 6.948408603668213,
1055
+ "learning_rate": 3e-06,
1056
+ "loss": 0.0,
1057
+ "reward": 0.6521501280367374,
1058
+ "reward_std": 0.08798706252127886,
1059
+ "rewards/accuracy_reward": 0.4534438718110323,
1060
+ "rewards/semantic_entropy_math_reward": 0.6521501541137695,
1061
+ "rewards/total_entropy_reward": 1.3641655445098877,
1062
+ "step": 81
1063
+ },
1064
+ {
1065
+ "completion_length": 686.7238388061523,
1066
+ "epoch": 0.9181245626312107,
1067
+ "grad_norm": 47.02455139160156,
1068
+ "learning_rate": 3e-06,
1069
+ "loss": 0.0,
1070
+ "reward": 0.6811224482953548,
1071
+ "reward_std": 0.11016466980800033,
1072
+ "rewards/accuracy_reward": 0.42283162754029036,
1073
+ "rewards/semantic_entropy_math_reward": 0.6811224408447742,
1074
+ "rewards/total_entropy_reward": 1.2916913330554962,
1075
+ "step": 82
1076
+ },
1077
+ {
1078
+ "completion_length": 709.4598007202148,
1079
+ "epoch": 0.9293212036389084,
1080
+ "grad_norm": 31.659669876098633,
1081
+ "learning_rate": 3e-06,
1082
+ "loss": 0.0,
1083
+ "reward": 0.6086005661636591,
1084
+ "reward_std": 0.10625140275806189,
1085
+ "rewards/accuracy_reward": 0.37882652692496777,
1086
+ "rewards/semantic_entropy_math_reward": 0.60860057733953,
1087
+ "rewards/total_entropy_reward": 1.4536337479948997,
1088
+ "step": 83
1089
+ },
1090
+ {
1091
+ "completion_length": 661.408784866333,
1092
+ "epoch": 0.940517844646606,
1093
+ "grad_norm": 8.08792781829834,
1094
+ "learning_rate": 3e-06,
1095
+ "loss": 0.0,
1096
+ "reward": 0.6858600489795208,
1097
+ "reward_std": 0.09285477036610246,
1098
+ "rewards/accuracy_reward": 0.4221938652917743,
1099
+ "rewards/semantic_entropy_math_reward": 0.6858600452542305,
1100
+ "rewards/total_entropy_reward": 1.283115666359663,
1101
+ "step": 84
1102
+ },
1103
+ {
1104
+ "completion_length": 657.9400405883789,
1105
+ "epoch": 0.9517144856543037,
1106
+ "grad_norm": 21.801326751708984,
1107
+ "learning_rate": 3e-06,
1108
+ "loss": 0.0,
1109
+ "reward": 0.6805757954716682,
1110
+ "reward_std": 0.09728616522625089,
1111
+ "rewards/accuracy_reward": 0.43749999441206455,
1112
+ "rewards/semantic_entropy_math_reward": 0.6805758327245712,
1113
+ "rewards/total_entropy_reward": 1.3078671097755432,
1114
+ "step": 85
1115
+ },
1116
+ {
1117
+ "completion_length": 674.7142734527588,
1118
+ "epoch": 0.9629111266620014,
1119
+ "grad_norm": 11.559253692626953,
1120
+ "learning_rate": 3e-06,
1121
+ "loss": 0.0,
1122
+ "reward": 0.6862244829535484,
1123
+ "reward_std": 0.10050268471240997,
1124
+ "rewards/accuracy_reward": 0.4304846851155162,
1125
+ "rewards/semantic_entropy_math_reward": 0.686224490404129,
1126
+ "rewards/total_entropy_reward": 1.2905268669128418,
1127
+ "step": 86
1128
+ },
1129
+ {
1130
+ "completion_length": 636.0388870239258,
1131
+ "epoch": 0.9741077676696991,
1132
+ "grad_norm": 11.289789199829102,
1133
+ "learning_rate": 3e-06,
1134
+ "loss": 0.0,
1135
+ "reward": 0.6986151486635208,
1136
+ "reward_std": 0.10155197884887457,
1137
+ "rewards/accuracy_reward": 0.43686223216354847,
1138
+ "rewards/semantic_entropy_math_reward": 0.6986151374876499,
1139
+ "rewards/total_entropy_reward": 1.2592380531132221,
1140
+ "step": 87
1141
+ },
1142
+ {
1143
+ "completion_length": 683.5822525024414,
1144
+ "epoch": 0.9853044086773968,
1145
+ "grad_norm": 10.526566505432129,
1146
+ "learning_rate": 3e-06,
1147
+ "loss": 0.0,
1148
+ "reward": 0.6585276797413826,
1149
+ "reward_std": 0.10897616110742092,
1150
+ "rewards/accuracy_reward": 0.4017857098951936,
1151
+ "rewards/semantic_entropy_math_reward": 0.6585277020931244,
1152
+ "rewards/total_entropy_reward": 1.324688896536827,
1153
+ "step": 88
1154
+ },
1155
+ {
1156
+ "completion_length": 683.3411960601807,
1157
+ "epoch": 0.9965010496850945,
1158
+ "grad_norm": 19.354259490966797,
1159
+ "learning_rate": 3e-06,
1160
+ "loss": 0.0,
1161
+ "reward": 0.6687317825853825,
1162
+ "reward_std": 0.09195416304282844,
1163
+ "rewards/accuracy_reward": 0.380102027207613,
1164
+ "rewards/semantic_entropy_math_reward": 0.6687317807227373,
1165
+ "rewards/total_entropy_reward": 1.3135743215680122,
1166
+ "step": 89
1167
+ },
1168
+ {
1169
+ "epoch": 0.9965010496850945,
1170
+ "step": 89,
1171
+ "total_flos": 0.0,
1172
+ "train_loss": 2.8008765204773544e-08,
1173
+ "train_runtime": 57265.8095,
1174
+ "train_samples_per_second": 0.349,
1175
+ "train_steps_per_second": 0.002
1176
+ }
1177
+ ],
1178
+ "logging_steps": 1,
1179
+ "max_steps": 89,
1180
+ "num_input_tokens_seen": 0,
1181
+ "num_train_epochs": 1,
1182
+ "save_steps": 20,
1183
+ "stateful_callbacks": {
1184
+ "TrainerControl": {
1185
+ "args": {
1186
+ "should_epoch_stop": false,
1187
+ "should_evaluate": false,
1188
+ "should_log": false,
1189
+ "should_save": true,
1190
+ "should_training_stop": true
1191
+ },
1192
+ "attributes": {}
1193
+ }
1194
+ },
1195
+ "total_flos": 0.0,
1196
+ "train_batch_size": 2,
1197
+ "trial_name": null,
1198
+ "trial_params": null
1199
+ }