zerozeroz committed
Commit b396d30 · verified · 1 Parent(s): 7aa546a

Model save

README.md ADDED
@@ -0,0 +1,68 @@
+ ---
+ base_model: Qwen/Qwen2.5-Coder-3B
+ library_name: transformers
+ model_name: Qwen2.5-Coder-3B
+ tags:
+ - generated_from_trainer
+ - trl
+ - grpo
+ licence: license
+ ---
+
+ # Model Card for Qwen2.5-Coder-3B
+
+ This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-3B](https://huggingface.co/Qwen/Qwen2.5-Coder-3B).
+ It has been trained using [TRL](https://github.com/huggingface/trl).
+
+ ## Quick start
+
+ ```python
+ from transformers import pipeline
+
+ question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+ generator = pipeline("text-generation", model="zerozeroz/Qwen2.5-Coder-3B", device="cuda")
+ output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+ print(output["generated_text"])
+ ```
+
+ ## Training procedure
+
+ This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
+
+ ### Framework versions
+
+ - TRL: 0.14.0
+ - Transformers: 4.48.1
+ - Pytorch: 2.5.1+cu121
+ - Datasets: 3.1.0
+ - Tokenizers: 0.21.0
+
+ ## Citations
+
+ Cite GRPO as:
+
+ ```bibtex
+ @article{zhihong2024deepseekmath,
+     title        = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
+     author       = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
+     year         = 2024,
+     eprint       = {arXiv:2402.03300},
+ }
+ ```
+
+ Cite TRL as:
+
+ ```bibtex
+ @misc{vonwerra2022trl,
+     title        = {{TRL: Transformer Reinforcement Learning}},
+     author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
+     year         = 2020,
+     journal      = {GitHub repository},
+     publisher    = {GitHub},
+     howpublished = {\url{https://github.com/huggingface/trl}}
+ }
+ ```
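
The card names GRPO and TRL 0.14 but does not include the training script itself. Purely as an illustrative sketch (not the author's actual code), a GRPO run with two reward functions like the `correct_code_reward_func` and `len_reward_func` that appear in the trainer log below could be set up roughly as follows; the toy dataset and the reward heuristics are assumptions.

```python
# Hedged sketch of a TRL 0.14 GRPO setup -- NOT the script used for this checkpoint.
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Toy prompt set; the real training data (374 samples per train_results.json) is not published here.
dataset = Dataset.from_dict({"prompt": ["Write a Python function that reverses a string."]})

def correct_code_reward_func(completions, **kwargs):
    # Placeholder heuristic: reward completions that at least define a function.
    return [1.0 if "def " in c else 0.0 for c in completions]

def len_reward_func(completions, **kwargs):
    # Placeholder heuristic: mildly prefer shorter completions.
    return [max(0.0, 1.0 - len(c) / 2048.0) for c in completions]

training_args = GRPOConfig(output_dir="Qwen2.5-Coder-3B-GRPO", logging_steps=1)
trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-Coder-3B",
    reward_funcs=[correct_code_reward_func, len_reward_func],
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```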
all_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "total_flos": 0.0,
+ "train_loss": 1.9367338650191358e-05,
+ "train_runtime": 3648.0047,
+ "train_samples": 374,
+ "train_samples_per_second": 0.206,
+ "train_steps_per_second": 0.034
+ }
config.json CHANGED
@@ -23,7 +23,7 @@
  "tie_word_embeddings": true,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.48.1",
- "use_cache": false,
+ "use_cache": true,
  "use_sliding_window": false,
  "vocab_size": 151936
  }
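
The only change to config.json is flipping `use_cache` back to `true`: the KV cache is commonly disabled while training (for example alongside gradient checkpointing) and re-enabled in the saved checkpoint for inference. A minimal sketch, assuming the repository id from the quick-start above and a CUDA device, of loading the checkpoint and generating with the cache on:

```python
# Sketch only: load the pushed checkpoint and generate with the KV cache enabled,
# which the `use_cache: true` setting in config.json makes the default.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "zerozeroz/Qwen2.5-Coder-3B"  # repository id taken from the model card's quick-start
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16).to("cuda")

# generate() reuses past key/values; passing use_cache=True just makes that explicit.
inputs = tokenizer("def reverse_string(s):", return_tensors="pt").to("cuda")
out = model.generate(**inputs, max_new_tokens=64, use_cache=True)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```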
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+ "bos_token_id": 151643,
+ "eos_token_id": 151643,
+ "max_new_tokens": 2048,
+ "transformers_version": "4.48.1"
+ }
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "total_flos": 0.0,
+ "train_loss": 1.9367338650191358e-05,
+ "train_runtime": 3648.0047,
+ "train_samples": 374,
+ "train_samples_per_second": 0.206,
+ "train_steps_per_second": 0.034
+ }
trainer_state.json ADDED
@@ -0,0 +1,1667 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 1.992,
5
+ "eval_steps": 500,
6
+ "global_step": 125,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "completion_length": 140.9583396911621,
13
+ "epoch": 0.016,
14
+ "grad_norm": 1.4258953228320013,
15
+ "kl": 0.0,
16
+ "learning_rate": 1.25e-07,
17
+ "loss": 0.0,
18
+ "reward": 0.5152640044689178,
19
+ "reward_std": 0.5508254170417786,
20
+ "rewards/correct_code_reward_func": 0.2291666716337204,
21
+ "rewards/len_reward_func": 0.28609737753868103,
22
+ "step": 1
23
+ },
24
+ {
25
+ "completion_length": 131.50000762939453,
26
+ "epoch": 0.032,
27
+ "grad_norm": 1.1519258687122351,
28
+ "kl": 0.0,
29
+ "learning_rate": 2.5e-07,
30
+ "loss": 0.0,
31
+ "reward": 0.541226252913475,
32
+ "reward_std": 0.5189632624387741,
33
+ "rewards/correct_code_reward_func": 0.2500000111758709,
34
+ "rewards/len_reward_func": 0.29122625291347504,
35
+ "step": 2
36
+ },
37
+ {
38
+ "completion_length": 108.83333587646484,
39
+ "epoch": 0.048,
40
+ "grad_norm": 1.467116667329371,
41
+ "kl": 0.00013637542724609375,
42
+ "learning_rate": 3.75e-07,
43
+ "loss": 0.0,
44
+ "reward": 0.7587994039058685,
45
+ "reward_std": 0.5140225142240524,
46
+ "rewards/correct_code_reward_func": 0.5416666865348816,
47
+ "rewards/len_reward_func": 0.21713273972272873,
48
+ "step": 3
49
+ },
50
+ {
51
+ "completion_length": 159.81250762939453,
52
+ "epoch": 0.064,
53
+ "grad_norm": 1.2682485669137435,
54
+ "kl": 0.00018215179443359375,
55
+ "learning_rate": 5e-07,
56
+ "loss": 0.0,
57
+ "reward": 0.5178248882293701,
58
+ "reward_std": 0.4526914358139038,
59
+ "rewards/correct_code_reward_func": 0.1666666716337204,
60
+ "rewards/len_reward_func": 0.3511582016944885,
61
+ "step": 4
62
+ },
63
+ {
64
+ "completion_length": 176.56250762939453,
65
+ "epoch": 0.08,
66
+ "grad_norm": 1.2033440109589688,
67
+ "kl": 0.00014066696166992188,
68
+ "learning_rate": 4.999157413258781e-07,
69
+ "loss": 0.0,
70
+ "reward": 0.32241350412368774,
71
+ "reward_std": 0.32281263172626495,
72
+ "rewards/correct_code_reward_func": 0.02083333395421505,
73
+ "rewards/len_reward_func": 0.30158019065856934,
74
+ "step": 5
75
+ },
76
+ {
77
+ "completion_length": 124.87500762939453,
78
+ "epoch": 0.096,
79
+ "grad_norm": 1.5120707071506325,
80
+ "kl": 0.00016808509826660156,
81
+ "learning_rate": 4.996630220997057e-07,
82
+ "loss": 0.0,
83
+ "reward": 0.746085911989212,
84
+ "reward_std": 0.5452268123626709,
85
+ "rewards/correct_code_reward_func": 0.4583333432674408,
86
+ "rewards/len_reward_func": 0.28775252401828766,
87
+ "step": 6
88
+ },
89
+ {
90
+ "completion_length": 169.9166717529297,
91
+ "epoch": 0.112,
92
+ "grad_norm": 0.9079518632617903,
93
+ "kl": 0.00011348724365234375,
94
+ "learning_rate": 4.992420126717784e-07,
95
+ "loss": 0.0,
96
+ "reward": 0.36989694088697433,
97
+ "reward_std": 0.45903605222702026,
98
+ "rewards/correct_code_reward_func": 0.125,
99
+ "rewards/len_reward_func": 0.24489693343639374,
100
+ "step": 7
101
+ },
102
+ {
103
+ "completion_length": 219.43750762939453,
104
+ "epoch": 0.128,
105
+ "grad_norm": 1.2633142753352289,
106
+ "kl": 0.0002155303955078125,
107
+ "learning_rate": 4.986529968316653e-07,
108
+ "loss": 0.0,
109
+ "reward": 0.44794920086860657,
110
+ "reward_std": 0.385338693857193,
111
+ "rewards/correct_code_reward_func": 0.1250000037252903,
112
+ "rewards/len_reward_func": 0.3229491859674454,
113
+ "step": 8
114
+ },
115
+ {
116
+ "completion_length": 227.91667938232422,
117
+ "epoch": 0.144,
118
+ "grad_norm": 1.0211344567101885,
119
+ "kl": 0.00011777877807617188,
120
+ "learning_rate": 4.978963716169165e-07,
121
+ "loss": 0.0,
122
+ "reward": 0.6235890090465546,
123
+ "reward_std": 0.5187947303056717,
124
+ "rewards/correct_code_reward_func": 0.3125,
125
+ "rewards/len_reward_func": 0.31108900904655457,
126
+ "step": 9
127
+ },
128
+ {
129
+ "completion_length": 188.25000762939453,
130
+ "epoch": 0.16,
131
+ "grad_norm": 1.0353822839723037,
132
+ "kl": 0.00011730194091796875,
133
+ "learning_rate": 4.969726470454313e-07,
134
+ "loss": 0.0,
135
+ "reward": 0.6911160051822662,
136
+ "reward_std": 0.5456923246383667,
137
+ "rewards/correct_code_reward_func": 0.4166666865348816,
138
+ "rewards/len_reward_func": 0.27444930374622345,
139
+ "step": 10
140
+ },
141
+ {
142
+ "completion_length": 168.27083587646484,
143
+ "epoch": 0.176,
144
+ "grad_norm": 1.7856755608823207,
145
+ "kl": 0.00018310546875,
146
+ "learning_rate": 4.958824457716706e-07,
147
+ "loss": 0.0,
148
+ "reward": 0.4588584154844284,
149
+ "reward_std": 0.40716809034347534,
150
+ "rewards/correct_code_reward_func": 0.1875,
151
+ "rewards/len_reward_func": 0.271358385682106,
152
+ "step": 11
153
+ },
154
+ {
155
+ "completion_length": 203.08333587646484,
156
+ "epoch": 0.192,
157
+ "grad_norm": 0.9296992149271633,
158
+ "kl": 0.00016641616821289062,
159
+ "learning_rate": 4.946265026669454e-07,
160
+ "loss": 0.0,
161
+ "reward": 0.3501324951648712,
162
+ "reward_std": 0.49003708362579346,
163
+ "rewards/correct_code_reward_func": 0.1041666679084301,
164
+ "rewards/len_reward_func": 0.245965838432312,
165
+ "step": 12
166
+ },
167
+ {
168
+ "completion_length": 115.66666793823242,
169
+ "epoch": 0.208,
170
+ "grad_norm": 1.4335533212366607,
171
+ "kl": 0.00016570091247558594,
172
+ "learning_rate": 4.932056643240618e-07,
173
+ "loss": 0.0,
174
+ "reward": 0.7853705883026123,
175
+ "reward_std": 0.46111349761486053,
176
+ "rewards/correct_code_reward_func": 0.5000000149011612,
177
+ "rewards/len_reward_func": 0.2853705883026123,
178
+ "step": 13
179
+ },
180
+ {
181
+ "completion_length": 169.95833587646484,
182
+ "epoch": 0.224,
183
+ "grad_norm": 1.2723280538596287,
184
+ "kl": 0.00021076202392578125,
185
+ "learning_rate": 4.916208884866592e-07,
186
+ "loss": 0.0,
187
+ "reward": 0.5324039310216904,
188
+ "reward_std": 0.5338821411132812,
189
+ "rewards/correct_code_reward_func": 0.2708333432674408,
190
+ "rewards/len_reward_func": 0.26157061755657196,
191
+ "step": 14
192
+ },
193
+ {
194
+ "completion_length": 154.58333587646484,
195
+ "epoch": 0.24,
196
+ "grad_norm": 1.2578666329332273,
197
+ "kl": 0.00019168853759765625,
198
+ "learning_rate": 4.898732434036243e-07,
199
+ "loss": 0.0,
200
+ "reward": 0.5949100255966187,
201
+ "reward_std": 0.5048613250255585,
202
+ "rewards/correct_code_reward_func": 0.3125000149011612,
203
+ "rewards/len_reward_func": 0.28241002559661865,
204
+ "step": 15
205
+ },
206
+ {
207
+ "completion_length": 173.1875114440918,
208
+ "epoch": 0.256,
209
+ "grad_norm": 1.1230347862341579,
210
+ "kl": 0.00029277801513671875,
211
+ "learning_rate": 4.879639071090173e-07,
212
+ "loss": 0.0,
213
+ "reward": 0.4564344882965088,
214
+ "reward_std": 0.4671656936407089,
215
+ "rewards/correct_code_reward_func": 0.1666666679084301,
216
+ "rewards/len_reward_func": 0.2897678166627884,
217
+ "step": 16
218
+ },
219
+ {
220
+ "completion_length": 169.375,
221
+ "epoch": 0.272,
222
+ "grad_norm": 1.3041956300758726,
223
+ "kl": 0.0002574920654296875,
224
+ "learning_rate": 4.858941666279955e-07,
225
+ "loss": 0.0,
226
+ "reward": 0.6347246468067169,
227
+ "reward_std": 0.5289804339408875,
228
+ "rewards/correct_code_reward_func": 0.3541666716337204,
229
+ "rewards/len_reward_func": 0.2805579602718353,
230
+ "step": 17
231
+ },
232
+ {
233
+ "completion_length": 133.25000762939453,
234
+ "epoch": 0.288,
235
+ "grad_norm": 1.354822217310785,
236
+ "kl": 0.0002689361572265625,
237
+ "learning_rate": 4.836654171092682e-07,
238
+ "loss": 0.0,
239
+ "reward": 0.5779364109039307,
240
+ "reward_std": 0.4782462567090988,
241
+ "rewards/correct_code_reward_func": 0.2916666716337204,
242
+ "rewards/len_reward_func": 0.2862697243690491,
243
+ "step": 18
244
+ },
245
+ {
246
+ "completion_length": 99.41667175292969,
247
+ "epoch": 0.304,
248
+ "grad_norm": 1.4087777232916079,
249
+ "kl": 0.00031757354736328125,
250
+ "learning_rate": 4.812791608846709e-07,
251
+ "loss": 0.0,
252
+ "reward": 0.5035808980464935,
253
+ "reward_std": 0.46289560198783875,
254
+ "rewards/correct_code_reward_func": 0.229166679084301,
255
+ "rewards/len_reward_func": 0.27441420406103134,
256
+ "step": 19
257
+ },
258
+ {
259
+ "completion_length": 170.7291717529297,
260
+ "epoch": 0.32,
261
+ "grad_norm": 0.9923230664440412,
262
+ "kl": 0.00028705596923828125,
263
+ "learning_rate": 4.787370064564882e-07,
264
+ "loss": 0.0,
265
+ "reward": 0.5567075908184052,
266
+ "reward_std": 0.44439028203487396,
267
+ "rewards/correct_code_reward_func": 0.2083333432674408,
268
+ "rewards/len_reward_func": 0.34837424755096436,
269
+ "step": 20
270
+ },
271
+ {
272
+ "completion_length": 124.72917175292969,
273
+ "epoch": 0.336,
274
+ "grad_norm": 1.2245791922735345,
275
+ "kl": 0.00035572052001953125,
276
+ "learning_rate": 4.7604066741321253e-07,
277
+ "loss": 0.0,
278
+ "reward": 0.8560027182102203,
279
+ "reward_std": 0.6356588900089264,
280
+ "rewards/correct_code_reward_func": 0.5416666865348816,
281
+ "rewards/len_reward_func": 0.31433598697185516,
282
+ "step": 21
283
+ },
284
+ {
285
+ "completion_length": 123.64583969116211,
286
+ "epoch": 0.352,
287
+ "grad_norm": 1.2080469812565267,
288
+ "kl": 0.00035858154296875,
289
+ "learning_rate": 4.731919612744659e-07,
290
+ "loss": 0.0,
291
+ "reward": 0.7242447733879089,
292
+ "reward_std": 0.4742405414581299,
293
+ "rewards/correct_code_reward_func": 0.3958333432674408,
294
+ "rewards/len_reward_func": 0.32841143012046814,
295
+ "step": 22
296
+ },
297
+ {
298
+ "completion_length": 146.2916717529297,
299
+ "epoch": 0.368,
300
+ "grad_norm": 1.2440640880474592,
301
+ "kl": 0.00040721893310546875,
302
+ "learning_rate": 4.7019280826586604e-07,
303
+ "loss": 0.0,
304
+ "reward": 0.5270938575267792,
305
+ "reward_std": 0.4260385036468506,
306
+ "rewards/correct_code_reward_func": 0.2291666679084301,
307
+ "rewards/len_reward_func": 0.2979271858930588,
308
+ "step": 23
309
+ },
310
+ {
311
+ "completion_length": 141.9166717529297,
312
+ "epoch": 0.384,
313
+ "grad_norm": 1.455943571941334,
314
+ "kl": 0.0006427764892578125,
315
+ "learning_rate": 4.6704523002466094e-07,
316
+ "loss": 0.0,
317
+ "reward": 0.5917265266180038,
318
+ "reward_std": 0.47722122073173523,
319
+ "rewards/correct_code_reward_func": 0.3333333358168602,
320
+ "rewards/len_reward_func": 0.25839313119649887,
321
+ "step": 24
322
+ },
323
+ {
324
+ "completion_length": 240.85417938232422,
325
+ "epoch": 0.4,
326
+ "grad_norm": 0.8411889507435418,
327
+ "kl": 0.0003604888916015625,
328
+ "learning_rate": 4.6375134823700503e-07,
329
+ "loss": 0.0,
330
+ "reward": 0.3353981524705887,
331
+ "reward_std": 0.351834774017334,
332
+ "rewards/correct_code_reward_func": 0.0833333358168602,
333
+ "rewards/len_reward_func": 0.2520648390054703,
334
+ "step": 25
335
+ },
336
+ {
337
+ "completion_length": 97.31250381469727,
338
+ "epoch": 0.416,
339
+ "grad_norm": 1.374585753278975,
340
+ "kl": 0.0008258819580078125,
341
+ "learning_rate": 4.603133832077953e-07,
342
+ "loss": 0.0,
343
+ "reward": 0.6881800889968872,
344
+ "reward_std": 0.5626422464847565,
345
+ "rewards/correct_code_reward_func": 0.4375,
346
+ "rewards/len_reward_func": 0.2506800442934036,
347
+ "step": 26
348
+ },
349
+ {
350
+ "completion_length": 131.08333587646484,
351
+ "epoch": 0.432,
352
+ "grad_norm": 1.5040369557196518,
353
+ "kl": 0.0006847381591796875,
354
+ "learning_rate": 4.5673365236403216e-07,
355
+ "loss": 0.0,
356
+ "reward": 0.6470239758491516,
357
+ "reward_std": 0.39606642723083496,
358
+ "rewards/correct_code_reward_func": 0.4375,
359
+ "rewards/len_reward_func": 0.20952393114566803,
360
+ "step": 27
361
+ },
362
+ {
363
+ "completion_length": 198.06250762939453,
364
+ "epoch": 0.448,
365
+ "grad_norm": 1.1110007536297855,
366
+ "kl": 0.00054168701171875,
367
+ "learning_rate": 4.530145686927125e-07,
368
+ "loss": 0.0,
369
+ "reward": 0.5166794955730438,
370
+ "reward_std": 0.504486620426178,
371
+ "rewards/correct_code_reward_func": 0.2500000149011612,
372
+ "rewards/len_reward_func": 0.2666794955730438,
373
+ "step": 28
374
+ },
375
+ {
376
+ "completion_length": 152.52083587646484,
377
+ "epoch": 0.464,
378
+ "grad_norm": 1.134262039216797,
379
+ "kl": 0.00078582763671875,
380
+ "learning_rate": 4.4915863911430897e-07,
381
+ "loss": 0.0,
382
+ "reward": 0.5144253522157669,
383
+ "reward_std": 0.4733017832040787,
384
+ "rewards/correct_code_reward_func": 0.1875000111758709,
385
+ "rewards/len_reward_func": 0.3269253224134445,
386
+ "step": 29
387
+ },
388
+ {
389
+ "completion_length": 139.7916717529297,
390
+ "epoch": 0.48,
391
+ "grad_norm": 1.010573889887009,
392
+ "kl": 0.0007152557373046875,
393
+ "learning_rate": 4.45168462792932e-07,
394
+ "loss": 0.0,
395
+ "reward": 0.5882390439510345,
396
+ "reward_std": 0.43310636281967163,
397
+ "rewards/correct_code_reward_func": 0.2500000074505806,
398
+ "rewards/len_reward_func": 0.33823904395103455,
399
+ "step": 30
400
+ },
401
+ {
402
+ "completion_length": 87.41666793823242,
403
+ "epoch": 0.496,
404
+ "grad_norm": 1.540244950569226,
405
+ "kl": 0.0012340545654296875,
406
+ "learning_rate": 4.4104672938431223e-07,
407
+ "loss": 0.0,
408
+ "reward": 0.7711681425571442,
409
+ "reward_std": 0.4805651605129242,
410
+ "rewards/correct_code_reward_func": 0.5833333432674408,
411
+ "rewards/len_reward_func": 0.18783476203680038,
412
+ "step": 31
413
+ },
414
+ {
415
+ "completion_length": 101.43750381469727,
416
+ "epoch": 0.512,
417
+ "grad_norm": 2.3673085026520297,
418
+ "kl": 0.0012607574462890625,
419
+ "learning_rate": 4.367962172227866e-07,
420
+ "loss": 0.0,
421
+ "reward": 0.7279457449913025,
422
+ "reward_std": 0.4627054035663605,
423
+ "rewards/correct_code_reward_func": 0.4583333432674408,
424
+ "rewards/len_reward_func": 0.2696124166250229,
425
+ "step": 32
426
+ },
427
+ {
428
+ "completion_length": 155.2291717529297,
429
+ "epoch": 0.528,
430
+ "grad_norm": 1.2624598609488873,
431
+ "kl": 0.00139617919921875,
432
+ "learning_rate": 4.324197914485075e-07,
433
+ "loss": 0.0,
434
+ "reward": 0.6401492655277252,
435
+ "reward_std": 0.515736848115921,
436
+ "rewards/correct_code_reward_func": 0.375,
437
+ "rewards/len_reward_func": 0.26514923572540283,
438
+ "step": 33
439
+ },
440
+ {
441
+ "completion_length": 252.91667938232422,
442
+ "epoch": 0.544,
443
+ "grad_norm": 1.043728438493038,
444
+ "kl": 0.0008392333984375,
445
+ "learning_rate": 4.2792040207614e-07,
446
+ "loss": 0.0,
447
+ "reward": 0.6339870393276215,
448
+ "reward_std": 0.5688490867614746,
449
+ "rewards/correct_code_reward_func": 0.3333333432674408,
450
+ "rewards/len_reward_func": 0.30065372586250305,
451
+ "step": 34
452
+ },
453
+ {
454
+ "completion_length": 178.25,
455
+ "epoch": 0.56,
456
+ "grad_norm": 1.2442169258805433,
457
+ "kl": 0.00205230712890625,
458
+ "learning_rate": 4.2330108200634723e-07,
459
+ "loss": 0.0,
460
+ "reward": 0.43357332795858383,
461
+ "reward_std": 0.3690243661403656,
462
+ "rewards/correct_code_reward_func": 0.16666667722165585,
463
+ "rewards/len_reward_func": 0.26690666377544403,
464
+ "step": 35
465
+ },
466
+ {
467
+ "completion_length": 150.1666717529297,
468
+ "epoch": 0.576,
469
+ "grad_norm": 1.0937981889230137,
470
+ "kl": 0.0016021728515625,
471
+ "learning_rate": 4.185649449814045e-07,
472
+ "loss": 0.0,
473
+ "reward": 0.8725252151489258,
474
+ "reward_std": 0.5368492603302002,
475
+ "rewards/correct_code_reward_func": 0.5416666865348816,
476
+ "rewards/len_reward_func": 0.3308584541082382,
477
+ "step": 36
478
+ },
479
+ {
480
+ "completion_length": 74.41666793823242,
481
+ "epoch": 0.592,
482
+ "grad_norm": 1.4560552034278569,
483
+ "kl": 0.0020904541015625,
484
+ "learning_rate": 4.137151834863213e-07,
485
+ "loss": 0.0,
486
+ "reward": 0.7634576857089996,
487
+ "reward_std": 0.5292592346668243,
488
+ "rewards/correct_code_reward_func": 0.5416666716337204,
489
+ "rewards/len_reward_func": 0.22179099917411804,
490
+ "step": 37
491
+ },
492
+ {
493
+ "completion_length": 111.77083587646484,
494
+ "epoch": 0.608,
495
+ "grad_norm": 1.6125607277054597,
496
+ "kl": 0.002716064453125,
497
+ "learning_rate": 4.087550665968846e-07,
498
+ "loss": 0.0,
499
+ "reward": 0.6047167330980301,
500
+ "reward_std": 0.4415762424468994,
501
+ "rewards/correct_code_reward_func": 0.2916666865348816,
502
+ "rewards/len_reward_func": 0.3130500763654709,
503
+ "step": 38
504
+ },
505
+ {
506
+ "completion_length": 87.0625,
507
+ "epoch": 0.624,
508
+ "grad_norm": 2.0747921723056026,
509
+ "kl": 0.0023193359375,
510
+ "learning_rate": 4.036879377760752e-07,
511
+ "loss": 0.0,
512
+ "reward": 0.7261738479137421,
513
+ "reward_std": 0.6433705389499664,
514
+ "rewards/correct_code_reward_func": 0.520833358168602,
515
+ "rewards/len_reward_func": 0.20534051209688187,
516
+ "step": 39
517
+ },
518
+ {
519
+ "completion_length": 128.0833396911621,
520
+ "epoch": 0.64,
521
+ "grad_norm": 1.352520841789316,
522
+ "kl": 0.00229644775390625,
523
+ "learning_rate": 3.9851721262034157e-07,
524
+ "loss": 0.0,
525
+ "reward": 0.49166351556777954,
526
+ "reward_std": 0.4290030002593994,
527
+ "rewards/correct_code_reward_func": 0.18750000558793545,
528
+ "rewards/len_reward_func": 0.30416350066661835,
529
+ "step": 40
530
+ },
531
+ {
532
+ "completion_length": 117.33333587646484,
533
+ "epoch": 0.656,
534
+ "grad_norm": 1.5281074207353524,
535
+ "kl": 0.003509521484375,
536
+ "learning_rate": 3.932463765572505e-07,
537
+ "loss": 0.0,
538
+ "reward": 0.5800679922103882,
539
+ "reward_std": 0.5416670143604279,
540
+ "rewards/correct_code_reward_func": 0.3125000149011612,
541
+ "rewards/len_reward_func": 0.2675679475069046,
542
+ "step": 41
543
+ },
544
+ {
545
+ "completion_length": 112.43750381469727,
546
+ "epoch": 0.672,
547
+ "grad_norm": 1.2084984435618142,
548
+ "kl": 0.00252532958984375,
549
+ "learning_rate": 3.8787898249606767e-07,
550
+ "loss": 0.0,
551
+ "reward": 0.42490366101264954,
552
+ "reward_std": 0.46323399245738983,
553
+ "rewards/correct_code_reward_func": 0.14583333395421505,
554
+ "rewards/len_reward_func": 0.27907034754753113,
555
+ "step": 42
556
+ },
557
+ {
558
+ "completion_length": 56.85416793823242,
559
+ "epoch": 0.688,
560
+ "grad_norm": 1.8756323954488632,
561
+ "kl": 0.00452423095703125,
562
+ "learning_rate": 3.8241864843284964e-07,
563
+ "loss": 0.0,
564
+ "reward": 0.7274035811424255,
565
+ "reward_std": 0.5209662765264511,
566
+ "rewards/correct_code_reward_func": 0.5000000149011612,
567
+ "rewards/len_reward_func": 0.22740358859300613,
568
+ "step": 43
569
+ },
570
+ {
571
+ "completion_length": 153.68750762939453,
572
+ "epoch": 0.704,
573
+ "grad_norm": 1.785627080388602,
574
+ "kl": 0.0055084228515625,
575
+ "learning_rate": 3.768690550116639e-07,
576
+ "loss": 0.0,
577
+ "reward": 0.49254634976387024,
578
+ "reward_std": 0.4052678644657135,
579
+ "rewards/correct_code_reward_func": 0.1666666716337204,
580
+ "rewards/len_reward_func": 0.32587967813014984,
581
+ "step": 44
582
+ },
583
+ {
584
+ "completion_length": 170.1041717529297,
585
+ "epoch": 0.72,
586
+ "grad_norm": 1.2057879792669277,
587
+ "kl": 0.0038299560546875,
588
+ "learning_rate": 3.712339430435792e-07,
589
+ "loss": 0.0,
590
+ "reward": 0.5373264253139496,
591
+ "reward_std": 0.4612013250589371,
592
+ "rewards/correct_code_reward_func": 0.2708333432674408,
593
+ "rewards/len_reward_func": 0.2664930745959282,
594
+ "step": 45
595
+ },
596
+ {
597
+ "completion_length": 122.79167175292969,
598
+ "epoch": 0.736,
599
+ "grad_norm": 1.23844328247912,
600
+ "kl": 0.00384521484375,
601
+ "learning_rate": 3.65517110985099e-07,
602
+ "loss": 0.0,
603
+ "reward": 0.6534424722194672,
604
+ "reward_std": 0.5896010398864746,
605
+ "rewards/correct_code_reward_func": 0.354166679084301,
606
+ "rewards/len_reward_func": 0.29927581548690796,
607
+ "step": 46
608
+ },
609
+ {
610
+ "completion_length": 73.39583396911621,
611
+ "epoch": 0.752,
612
+ "grad_norm": 2.222315006145743,
613
+ "kl": 0.0058135986328125,
614
+ "learning_rate": 3.597224123777389e-07,
615
+ "loss": 0.0,
616
+ "reward": 0.7357015609741211,
617
+ "reward_std": 0.5119403451681137,
618
+ "rewards/correct_code_reward_func": 0.4583333432674408,
619
+ "rewards/len_reward_func": 0.2773682177066803,
620
+ "step": 47
621
+ },
622
+ {
623
+ "completion_length": 75.54166793823242,
624
+ "epoch": 0.768,
625
+ "grad_norm": 1.9981519435567456,
626
+ "kl": 0.0053863525390625,
627
+ "learning_rate": 3.5385375325047163e-07,
628
+ "loss": 0.0,
629
+ "reward": 0.6428782939910889,
630
+ "reward_std": 0.6202229559421539,
631
+ "rewards/correct_code_reward_func": 0.3958333432674408,
632
+ "rewards/len_reward_func": 0.24704494327306747,
633
+ "step": 48
634
+ },
635
+ {
636
+ "completion_length": 73.27083587646484,
637
+ "epoch": 0.784,
638
+ "grad_norm": 2.073070842958071,
639
+ "kl": 0.00554656982421875,
640
+ "learning_rate": 3.479150894867926e-07,
641
+ "loss": 0.0,
642
+ "reward": 0.8005061745643616,
643
+ "reward_std": 0.5489170849323273,
644
+ "rewards/correct_code_reward_func": 0.5416666865348816,
645
+ "rewards/len_reward_func": 0.25883948802948,
646
+ "step": 49
647
+ },
648
+ {
649
+ "completion_length": 93.62500381469727,
650
+ "epoch": 0.8,
651
+ "grad_norm": 1.7280406240103203,
652
+ "kl": 0.0070953369140625,
653
+ "learning_rate": 3.4191042415818e-07,
654
+ "loss": 0.0,
655
+ "reward": 0.6382943987846375,
656
+ "reward_std": 0.4014574736356735,
657
+ "rewards/correct_code_reward_func": 0.3750000149011612,
658
+ "rewards/len_reward_func": 0.26329439133405685,
659
+ "step": 50
660
+ },
661
+ {
662
+ "completion_length": 110.31250381469727,
663
+ "epoch": 0.816,
664
+ "grad_norm": 1.5732703630042588,
665
+ "kl": 0.008453369140625,
666
+ "learning_rate": 3.3584380482574717e-07,
667
+ "loss": 0.0,
668
+ "reward": 0.8389279842376709,
669
+ "reward_std": 0.6495693922042847,
670
+ "rewards/correct_code_reward_func": 0.5208333432674408,
671
+ "rewards/len_reward_func": 0.31809471547603607,
672
+ "step": 51
673
+ },
674
+ {
675
+ "completion_length": 81.4375,
676
+ "epoch": 0.832,
677
+ "grad_norm": 1.3555162901411408,
678
+ "kl": 0.0072479248046875,
679
+ "learning_rate": 3.297193208119047e-07,
680
+ "loss": 0.0,
681
+ "reward": 0.7050519585609436,
682
+ "reward_std": 0.522288054227829,
683
+ "rewards/correct_code_reward_func": 0.4375000298023224,
684
+ "rewards/len_reward_func": 0.2675519585609436,
685
+ "step": 52
686
+ },
687
+ {
688
+ "completion_length": 145.2291717529297,
689
+ "epoch": 0.848,
690
+ "grad_norm": 1.2256688073258564,
691
+ "kl": 0.00726318359375,
692
+ "learning_rate": 3.235411004438741e-07,
693
+ "loss": 0.0,
694
+ "reward": 0.6400169730186462,
695
+ "reward_std": 0.5816708207130432,
696
+ "rewards/correct_code_reward_func": 0.3541666716337204,
697
+ "rewards/len_reward_func": 0.28585030883550644,
698
+ "step": 53
699
+ },
700
+ {
701
+ "completion_length": 120.20833587646484,
702
+ "epoch": 0.864,
703
+ "grad_norm": 1.8462631631415796,
704
+ "kl": 0.0084991455078125,
705
+ "learning_rate": 3.173133082709086e-07,
706
+ "loss": 0.0,
707
+ "reward": 0.643402487039566,
708
+ "reward_std": 0.3417808264493942,
709
+ "rewards/correct_code_reward_func": 0.3333333432674408,
710
+ "rewards/len_reward_func": 0.31006917357444763,
711
+ "step": 54
712
+ },
713
+ {
714
+ "completion_length": 55.56250190734863,
715
+ "epoch": 0.88,
716
+ "grad_norm": 1.7370166581779802,
717
+ "kl": 0.01177978515625,
718
+ "learning_rate": 3.1104014225709784e-07,
719
+ "loss": 0.0,
720
+ "reward": 0.9137917459011078,
721
+ "reward_std": 0.5003669559955597,
722
+ "rewards/correct_code_reward_func": 0.583333358168602,
723
+ "rewards/len_reward_func": 0.3304583728313446,
724
+ "step": 55
725
+ },
726
+ {
727
+ "completion_length": 189.25000762939453,
728
+ "epoch": 0.896,
729
+ "grad_norm": 1.2196760565152192,
730
+ "kl": 0.0058441162109375,
731
+ "learning_rate": 3.0472583095164873e-07,
732
+ "loss": 0.0,
733
+ "reward": 0.4673280417919159,
734
+ "reward_std": 0.4577627182006836,
735
+ "rewards/correct_code_reward_func": 0.1666666716337204,
736
+ "rewards/len_reward_func": 0.3006613999605179,
737
+ "step": 56
738
+ },
739
+ {
740
+ "completion_length": 57.37500190734863,
741
+ "epoch": 0.912,
742
+ "grad_norm": 2.0919947468048976,
743
+ "kl": 0.010162353515625,
744
+ "learning_rate": 2.983746306385499e-07,
745
+ "loss": 0.0,
746
+ "reward": 0.6931174695491791,
747
+ "reward_std": 0.5172313153743744,
748
+ "rewards/correct_code_reward_func": 0.4791666865348816,
749
+ "rewards/len_reward_func": 0.21395081281661987,
750
+ "step": 57
751
+ },
752
+ {
753
+ "completion_length": 86.00000190734863,
754
+ "epoch": 0.928,
755
+ "grad_norm": 1.5907089477428527,
756
+ "kl": 0.0113677978515625,
757
+ "learning_rate": 2.919908224675412e-07,
758
+ "loss": 0.0,
759
+ "reward": 0.5865814685821533,
760
+ "reward_std": 0.5177368223667145,
761
+ "rewards/correct_code_reward_func": 0.3125000149011612,
762
+ "rewards/len_reward_func": 0.27408143877983093,
763
+ "step": 58
764
+ },
765
+ {
766
+ "completion_length": 90.72916793823242,
767
+ "epoch": 0.944,
768
+ "grad_norm": 1.1269292807249032,
769
+ "kl": 0.00830078125,
770
+ "learning_rate": 2.8557870956832133e-07,
771
+ "loss": 0.0,
772
+ "reward": 0.4935041069984436,
773
+ "reward_std": 0.41843119263648987,
774
+ "rewards/correct_code_reward_func": 0.2083333432674408,
775
+ "rewards/len_reward_func": 0.285170778632164,
776
+ "step": 59
777
+ },
778
+ {
779
+ "completion_length": 85.60416793823242,
780
+ "epoch": 0.96,
781
+ "grad_norm": 2.320388470663489,
782
+ "kl": 0.014678955078125,
783
+ "learning_rate": 2.7914261414993976e-07,
784
+ "loss": 0.0,
785
+ "reward": 0.7554058134555817,
786
+ "reward_std": 0.5069911777973175,
787
+ "rewards/correct_code_reward_func": 0.4166666716337204,
788
+ "rewards/len_reward_func": 0.3387391269207001,
789
+ "step": 60
790
+ },
791
+ {
792
+ "completion_length": 63.375,
793
+ "epoch": 0.976,
794
+ "grad_norm": 1.7319214973496064,
795
+ "kl": 0.02532958984375,
796
+ "learning_rate": 2.726868745873286e-07,
797
+ "loss": 0.0,
798
+ "reward": 0.7839343547821045,
799
+ "reward_std": 0.6209487617015839,
800
+ "rewards/correct_code_reward_func": 0.4791666716337204,
801
+ "rewards/len_reward_func": 0.3047676384449005,
802
+ "step": 61
803
+ },
804
+ {
805
+ "completion_length": 87.14583587646484,
806
+ "epoch": 0.992,
807
+ "grad_norm": 1.8272498546531741,
808
+ "kl": 0.0134735107421875,
809
+ "learning_rate": 2.662158424969357e-07,
810
+ "loss": 0.0,
811
+ "reward": 0.8219521045684814,
812
+ "reward_std": 0.6945097148418427,
813
+ "rewards/correct_code_reward_func": 0.5416666865348816,
814
+ "rewards/len_reward_func": 0.28028544783592224,
815
+ "step": 62
816
+ },
817
+ {
818
+ "completion_length": 55.66666793823242,
819
+ "epoch": 1.0,
820
+ "grad_norm": 1.8272498546531741,
821
+ "kl": 0.02587890625,
822
+ "learning_rate": 2.597338798034344e-07,
823
+ "loss": 0.0,
824
+ "reward": 0.713922381401062,
825
+ "reward_std": 0.519837498664856,
826
+ "rewards/correct_code_reward_func": 0.4166666865348816,
827
+ "rewards/len_reward_func": 0.29725566506385803,
828
+ "step": 63
829
+ },
830
+ {
831
+ "completion_length": 88.75000381469727,
832
+ "epoch": 1.016,
833
+ "grad_norm": 1.6950346991160663,
834
+ "kl": 0.0108642578125,
835
+ "learning_rate": 2.532453557994827e-07,
836
+ "loss": 0.0,
837
+ "reward": 0.5927524715662003,
838
+ "reward_std": 0.39128445088863373,
839
+ "rewards/correct_code_reward_func": 0.3750000149011612,
840
+ "rewards/len_reward_func": 0.21775247156620026,
841
+ "step": 64
842
+ },
843
+ {
844
+ "completion_length": 151.7291717529297,
845
+ "epoch": 1.032,
846
+ "grad_norm": 1.6408461481438466,
847
+ "kl": 0.011138916015625,
848
+ "learning_rate": 2.467546442005173e-07,
849
+ "loss": 0.0,
850
+ "reward": 0.6122622489929199,
851
+ "reward_std": 0.5165137350559235,
852
+ "rewards/correct_code_reward_func": 0.3125000149011612,
853
+ "rewards/len_reward_func": 0.2997622489929199,
854
+ "step": 65
855
+ },
856
+ {
857
+ "completion_length": 104.85417175292969,
858
+ "epoch": 1.048,
859
+ "grad_norm": 1.1573620161491798,
860
+ "kl": 0.01092529296875,
861
+ "learning_rate": 2.4026612019656556e-07,
862
+ "loss": 0.0,
863
+ "reward": 0.8486100733280182,
864
+ "reward_std": 0.3942585438489914,
865
+ "rewards/correct_code_reward_func": 0.5,
866
+ "rewards/len_reward_func": 0.348610058426857,
867
+ "step": 66
868
+ },
869
+ {
870
+ "completion_length": 62.47916793823242,
871
+ "epoch": 1.064,
872
+ "grad_norm": 2.1966023559129266,
873
+ "kl": 0.018798828125,
874
+ "learning_rate": 2.337841575030642e-07,
875
+ "loss": 0.0,
876
+ "reward": 0.8105108737945557,
877
+ "reward_std": 0.4338831454515457,
878
+ "rewards/correct_code_reward_func": 0.4583333432674408,
879
+ "rewards/len_reward_func": 0.35217756032943726,
880
+ "step": 67
881
+ },
882
+ {
883
+ "completion_length": 74.95833587646484,
884
+ "epoch": 1.08,
885
+ "grad_norm": 1.796160832910341,
886
+ "kl": 0.02294921875,
887
+ "learning_rate": 2.2731312541267143e-07,
888
+ "loss": 0.0,
889
+ "reward": 0.549996554851532,
890
+ "reward_std": 0.3687018007040024,
891
+ "rewards/correct_code_reward_func": 0.2083333358168602,
892
+ "rewards/len_reward_func": 0.3416632413864136,
893
+ "step": 68
894
+ },
895
+ {
896
+ "completion_length": 80.14583587646484,
897
+ "epoch": 1.096,
898
+ "grad_norm": 2.1344146728324653,
899
+ "kl": 0.02447509765625,
900
+ "learning_rate": 2.2085738585006021e-07,
901
+ "loss": 0.0,
902
+ "reward": 0.8650955259799957,
903
+ "reward_std": 0.4139704555273056,
904
+ "rewards/correct_code_reward_func": 0.5208333432674408,
905
+ "rewards/len_reward_func": 0.34426216781139374,
906
+ "step": 69
907
+ },
908
+ {
909
+ "completion_length": 60.958335876464844,
910
+ "epoch": 1.112,
911
+ "grad_norm": 1.6686676921157912,
912
+ "kl": 0.025634765625,
913
+ "learning_rate": 2.1442129043167873e-07,
914
+ "loss": 0.0,
915
+ "reward": 0.6947443187236786,
916
+ "reward_std": 0.5725615322589874,
917
+ "rewards/correct_code_reward_func": 0.375,
918
+ "rewards/len_reward_func": 0.319744348526001,
919
+ "step": 70
920
+ },
921
+ {
922
+ "completion_length": 108.1875,
923
+ "epoch": 1.1280000000000001,
924
+ "grad_norm": 1.7272596794076989,
925
+ "kl": 0.0130615234375,
926
+ "learning_rate": 2.0800917753245875e-07,
927
+ "loss": 0.0,
928
+ "reward": 0.7587291896343231,
929
+ "reward_std": 0.5232284665107727,
930
+ "rewards/correct_code_reward_func": 0.4166666865348816,
931
+ "rewards/len_reward_func": 0.3420625329017639,
932
+ "step": 71
933
+ },
934
+ {
935
+ "completion_length": 108.04167175292969,
936
+ "epoch": 1.144,
937
+ "grad_norm": 1.6272563745253346,
938
+ "kl": 0.01654052734375,
939
+ "learning_rate": 2.0162536936145008e-07,
940
+ "loss": 0.0,
941
+ "reward": 0.5046872794628143,
942
+ "reward_std": 0.3378771096467972,
943
+ "rewards/correct_code_reward_func": 0.1666666679084301,
944
+ "rewards/len_reward_func": 0.33802059292793274,
945
+ "step": 72
946
+ },
947
+ {
948
+ "completion_length": 54.02083396911621,
949
+ "epoch": 1.16,
950
+ "grad_norm": 1.9418689539056528,
951
+ "kl": 0.0308837890625,
952
+ "learning_rate": 1.9527416904835132e-07,
953
+ "loss": 0.0,
954
+ "reward": 0.9055829644203186,
955
+ "reward_std": 0.3730238378047943,
956
+ "rewards/correct_code_reward_func": 0.5,
957
+ "rewards/len_reward_func": 0.405582919716835,
958
+ "step": 73
959
+ },
960
+ {
961
+ "completion_length": 94.31250381469727,
962
+ "epoch": 1.176,
963
+ "grad_norm": 1.5576616620611914,
964
+ "kl": 0.02215576171875,
965
+ "learning_rate": 1.889598577429022e-07,
966
+ "loss": 0.0,
967
+ "reward": 0.9071804285049438,
968
+ "reward_std": 0.44920457899570465,
969
+ "rewards/correct_code_reward_func": 0.5000000298023224,
970
+ "rewards/len_reward_func": 0.40718045830726624,
971
+ "step": 74
972
+ },
973
+ {
974
+ "completion_length": 53.79166793823242,
975
+ "epoch": 1.192,
976
+ "grad_norm": 2.3725141345867544,
977
+ "kl": 0.03057861328125,
978
+ "learning_rate": 1.8268669172909136e-07,
979
+ "loss": 0.0,
980
+ "reward": 0.9221459329128265,
981
+ "reward_std": 0.4697086811065674,
982
+ "rewards/correct_code_reward_func": 0.5000000298023224,
983
+ "rewards/len_reward_func": 0.42214588820934296,
984
+ "step": 75
985
+ },
986
+ {
987
+ "completion_length": 89.79167175292969,
988
+ "epoch": 1.208,
989
+ "grad_norm": 2.003223060045919,
990
+ "kl": 0.03094482421875,
991
+ "learning_rate": 1.7645889955612592e-07,
992
+ "loss": 0.0,
993
+ "reward": 1.0163878798484802,
994
+ "reward_std": 0.43504565954208374,
995
+ "rewards/correct_code_reward_func": 0.6250000298023224,
996
+ "rewards/len_reward_func": 0.3913878947496414,
997
+ "step": 76
998
+ },
999
+ {
1000
+ "completion_length": 68.79166984558105,
1001
+ "epoch": 1.224,
1002
+ "grad_norm": 2.361523245499291,
1003
+ "kl": 0.0457763671875,
1004
+ "learning_rate": 1.7028067918809535e-07,
1005
+ "loss": 0.0,
1006
+ "reward": 0.7535229325294495,
1007
+ "reward_std": 0.47849828004837036,
1008
+ "rewards/correct_code_reward_func": 0.375,
1009
+ "rewards/len_reward_func": 0.3785228729248047,
1010
+ "step": 77
1011
+ },
1012
+ {
1013
+ "completion_length": 54.14583396911621,
1014
+ "epoch": 1.24,
1015
+ "grad_norm": 2.120116927446423,
1016
+ "kl": 0.0394287109375,
1017
+ "learning_rate": 1.6415619517425294e-07,
1018
+ "loss": 0.0,
1019
+ "reward": 0.8538325130939484,
1020
+ "reward_std": 0.44848716259002686,
1021
+ "rewards/correct_code_reward_func": 0.4791666865348816,
1022
+ "rewards/len_reward_func": 0.3746658265590668,
1023
+ "step": 78
1024
+ },
1025
+ {
1026
+ "completion_length": 89.0,
1027
+ "epoch": 1.256,
1028
+ "grad_norm": 1.2055136830985975,
1029
+ "kl": 0.0272216796875,
1030
+ "learning_rate": 1.5808957584181994e-07,
1031
+ "loss": 0.0,
1032
+ "reward": 0.755169004201889,
1033
+ "reward_std": 0.4014817923307419,
1034
+ "rewards/correct_code_reward_func": 0.3541666716337204,
1035
+ "rewards/len_reward_func": 0.40100236237049103,
1036
+ "step": 79
1037
+ },
1038
+ {
1039
+ "completion_length": 99.39583969116211,
1040
+ "epoch": 1.272,
1041
+ "grad_norm": 1.84690544945913,
1042
+ "kl": 0.024322509765625,
1043
+ "learning_rate": 1.5208491051320744e-07,
1044
+ "loss": 0.0,
1045
+ "reward": 0.7356246709823608,
1046
+ "reward_std": 0.47616493701934814,
1047
+ "rewards/correct_code_reward_func": 0.3958333432674408,
1048
+ "rewards/len_reward_func": 0.33979131281375885,
1049
+ "step": 80
1050
+ },
1051
+ {
1052
+ "completion_length": 73.04166793823242,
1053
+ "epoch": 1.288,
1054
+ "grad_norm": 1.7278725529442787,
1055
+ "kl": 0.0439453125,
1056
+ "learning_rate": 1.461462467495284e-07,
1057
+ "loss": 0.0,
1058
+ "reward": 0.7051982879638672,
1059
+ "reward_std": 0.48877203464508057,
1060
+ "rewards/correct_code_reward_func": 0.3125,
1061
+ "rewards/len_reward_func": 0.3926983177661896,
1062
+ "step": 81
1063
+ },
1064
+ {
1065
+ "completion_length": 59.354169845581055,
1066
+ "epoch": 1.304,
1067
+ "grad_norm": 2.077567652472909,
1068
+ "kl": 0.0345458984375,
1069
+ "learning_rate": 1.4027758762226107e-07,
1070
+ "loss": 0.0,
1071
+ "reward": 0.816185712814331,
1072
+ "reward_std": 0.4705541431903839,
1073
+ "rewards/correct_code_reward_func": 0.4791666865348816,
1074
+ "rewards/len_reward_func": 0.3370189964771271,
1075
+ "step": 82
1076
+ },
1077
+ {
1078
+ "completion_length": 81.58333587646484,
1079
+ "epoch": 1.32,
1080
+ "grad_norm": 1.609719907980881,
1081
+ "kl": 0.0234375,
1082
+ "learning_rate": 1.3448288901490092e-07,
1083
+ "loss": 0.0,
1084
+ "reward": 0.7908000648021698,
1085
+ "reward_std": 0.45585089921951294,
1086
+ "rewards/correct_code_reward_func": 0.4166666716337204,
1087
+ "rewards/len_reward_func": 0.374133437871933,
1088
+ "step": 83
1089
+ },
1090
+ {
1091
+ "completion_length": 87.33333587646484,
1092
+ "epoch": 1.336,
1093
+ "grad_norm": 1.6587537084233746,
1094
+ "kl": 0.02667236328125,
1095
+ "learning_rate": 1.2876605695642084e-07,
1096
+ "loss": 0.0,
1097
+ "reward": 0.6749401688575745,
1098
+ "reward_std": 0.42905712127685547,
1099
+ "rewards/correct_code_reward_func": 0.3541666716337204,
1100
+ "rewards/len_reward_func": 0.3207734525203705,
1101
+ "step": 84
1102
+ },
1103
+ {
1104
+ "completion_length": 95.20833587646484,
1105
+ "epoch": 1.3519999999999999,
1106
+ "grad_norm": 2.538472018686139,
1107
+ "kl": 0.02581787109375,
1108
+ "learning_rate": 1.231309449883361e-07,
1109
+ "loss": 0.0,
1110
+ "reward": 0.7594759464263916,
1111
+ "reward_std": 0.5746750831604004,
1112
+ "rewards/correct_code_reward_func": 0.3750000149011612,
1113
+ "rewards/len_reward_func": 0.3844759315252304,
1114
+ "step": 85
1115
+ },
1116
+ {
1117
+ "completion_length": 55.43750190734863,
1118
+ "epoch": 1.3679999999999999,
1119
+ "grad_norm": 1.797373425635401,
1120
+ "kl": 0.03289794921875,
1121
+ "learning_rate": 1.1758135156715041e-07,
1122
+ "loss": 0.0,
1123
+ "reward": 0.9961144328117371,
1124
+ "reward_std": 0.5648430436849594,
1125
+ "rewards/correct_code_reward_func": 0.6250000298023224,
1126
+ "rewards/len_reward_func": 0.37111443281173706,
1127
+ "step": 86
1128
+ },
1129
+ {
1130
+ "completion_length": 121.25000762939453,
1131
+ "epoch": 1.384,
1132
+ "grad_norm": 1.7119982491506713,
1133
+ "kl": 0.0286865234375,
1134
+ "learning_rate": 1.1212101750393235e-07,
1135
+ "loss": 0.0,
1136
+ "reward": 0.7243427634239197,
1137
+ "reward_std": 0.3805614560842514,
1138
+ "rewards/correct_code_reward_func": 0.3333333358168602,
1139
+ "rewards/len_reward_func": 0.39100944995880127,
1140
+ "step": 87
1141
+ },
1142
+ {
1143
+ "completion_length": 57.35416793823242,
1144
+ "epoch": 1.4,
1145
+ "grad_norm": 1.7713124187158098,
1146
+ "kl": 0.034912109375,
1147
+ "learning_rate": 1.0675362344274952e-07,
1148
+ "loss": 0.0,
1149
+ "reward": 0.7016758322715759,
1150
+ "reward_std": 0.5317542552947998,
1151
+ "rewards/correct_code_reward_func": 0.3541666865348816,
1152
+ "rewards/len_reward_func": 0.34750914573669434,
1153
+ "step": 88
1154
+ },
1155
+ {
1156
+ "completion_length": 59.0625,
1157
+ "epoch": 1.416,
1158
+ "grad_norm": 1.6492634665708499,
1159
+ "kl": 0.034423828125,
1160
+ "learning_rate": 1.0148278737965844e-07,
1161
+ "loss": 0.0,
1162
+ "reward": 0.7394144237041473,
1163
+ "reward_std": 0.4491709917783737,
1164
+ "rewards/correct_code_reward_func": 0.3541666716337204,
1165
+ "rewards/len_reward_func": 0.38524775207042694,
1166
+ "step": 89
1167
+ },
1168
+ {
1169
+ "completion_length": 48.6875,
1170
+ "epoch": 1.432,
1171
+ "grad_norm": 1.9432473699712165,
1172
+ "kl": 0.06494140625,
1173
+ "learning_rate": 9.631206222392479e-08,
1174
+ "loss": 0.0001,
1175
+ "reward": 0.8676341474056244,
1176
+ "reward_std": 0.3966159522533417,
1177
+ "rewards/correct_code_reward_func": 0.4791666865348816,
1178
+ "rewards/len_reward_func": 0.388467475771904,
1179
+ "step": 90
1180
+ },
1181
+ {
1182
+ "completion_length": 91.62500381469727,
1183
+ "epoch": 1.448,
1184
+ "grad_norm": 1.9189293687085252,
1185
+ "kl": 0.13482666015625,
1186
+ "learning_rate": 9.124493340311537e-08,
1187
+ "loss": 0.0001,
1188
+ "reward": 0.7231810688972473,
1189
+ "reward_std": 0.4981995224952698,
1190
+ "rewards/correct_code_reward_func": 0.3333333432674408,
1191
+ "rewards/len_reward_func": 0.3898477256298065,
1192
+ "step": 91
1193
+ },
1194
+ {
1195
+ "completion_length": 60.729169845581055,
1196
+ "epoch": 1.464,
1197
+ "grad_norm": 1.9825880271843388,
1198
+ "kl": 0.03424072265625,
1199
+ "learning_rate": 8.628481651367875e-08,
1200
+ "loss": 0.0,
1201
+ "reward": 0.8303024768829346,
1202
+ "reward_std": 0.40181903541088104,
1203
+ "rewards/correct_code_reward_func": 0.4375000149011612,
1204
+ "rewards/len_reward_func": 0.39280249178409576,
1205
+ "step": 92
1206
+ },
1207
+ {
1208
+ "completion_length": 58.22916793823242,
1209
+ "epoch": 1.48,
1210
+ "grad_norm": 1.8747344082688029,
1211
+ "kl": 0.0426025390625,
1212
+ "learning_rate": 8.143505501859551e-08,
1213
+ "loss": 0.0,
1214
+ "reward": 0.7909549474716187,
1215
+ "reward_std": 0.4536728262901306,
1216
+ "rewards/correct_code_reward_func": 0.458333358168602,
1217
+ "rewards/len_reward_func": 0.33262157440185547,
1218
+ "step": 93
1219
+ },
1220
+ {
1221
+ "completion_length": 125.10417175292969,
1222
+ "epoch": 1.496,
1223
+ "grad_norm": 1.5754029745287528,
1224
+ "kl": 0.02886962890625,
1225
+ "learning_rate": 7.669891799365282e-08,
1226
+ "loss": 0.0,
1227
+ "reward": 0.6297820806503296,
1228
+ "reward_std": 0.5051470398902893,
1229
+ "rewards/correct_code_reward_func": 0.2708333432674408,
1230
+ "rewards/len_reward_func": 0.3589487075805664,
1231
+ "step": 94
1232
+ },
1233
+ {
1234
+ "completion_length": 89.27083587646484,
1235
+ "epoch": 1.512,
1236
+ "grad_norm": 1.698829198816419,
1237
+ "kl": 0.02362060546875,
1238
+ "learning_rate": 7.207959792385998e-08,
1239
+ "loss": 0.0,
1240
+ "reward": 0.7924558222293854,
1241
+ "reward_std": 0.42506614327430725,
1242
+ "rewards/correct_code_reward_func": 0.3541666865348816,
1243
+ "rewards/len_reward_func": 0.4382891356945038,
1244
+ "step": 95
1245
+ },
1246
+ {
1247
+ "completion_length": 82.18750381469727,
1248
+ "epoch": 1.528,
1249
+ "grad_norm": 1.4031599496951968,
1250
+ "kl": 0.03643798828125,
1251
+ "learning_rate": 6.758020855149249e-08,
1252
+ "loss": 0.0,
1253
+ "reward": 0.6851500123739243,
1254
+ "reward_std": 0.2974398583173752,
1255
+ "rewards/correct_code_reward_func": 0.25000000558793545,
1256
+ "rewards/len_reward_func": 0.43515002727508545,
1257
+ "step": 96
1258
+ },
1259
+ {
1260
+ "completion_length": 54.6875,
1261
+ "epoch": 1.544,
1262
+ "grad_norm": 1.4467008481635895,
1263
+ "kl": 0.039306640625,
1264
+ "learning_rate": 6.320378277721342e-08,
1265
+ "loss": 0.0,
1266
+ "reward": 0.7509966492652893,
1267
+ "reward_std": 0.3042096644639969,
1268
+ "rewards/correct_code_reward_func": 0.3125,
1269
+ "rewards/len_reward_func": 0.4384966343641281,
1270
+ "step": 97
1271
+ },
1272
+ {
1273
+ "completion_length": 68.08333587646484,
1274
+ "epoch": 1.56,
1275
+ "grad_norm": 2.082709482850275,
1276
+ "kl": 0.03460693359375,
1277
+ "learning_rate": 5.895327061568775e-08,
1278
+ "loss": 0.0,
1279
+ "reward": 0.7968247532844543,
1280
+ "reward_std": 0.36605267226696014,
1281
+ "rewards/correct_code_reward_func": 0.3750000149011612,
1282
+ "rewards/len_reward_func": 0.42182472348213196,
1283
+ "step": 98
1284
+ },
1285
+ {
1286
+ "completion_length": 56.020835876464844,
1287
+ "epoch": 1.576,
1288
+ "grad_norm": 2.726579074776626,
1289
+ "kl": 0.0662841796875,
1290
+ "learning_rate": 5.483153720706798e-08,
1291
+ "loss": 0.0001,
1292
+ "reward": 0.8111520707607269,
1293
+ "reward_std": 0.548240602016449,
1294
+ "rewards/correct_code_reward_func": 0.4166666716337204,
1295
+ "rewards/len_reward_func": 0.3944854289293289,
1296
+ "step": 99
1297
+ },
1298
+ {
1299
+ "completion_length": 54.25000190734863,
1300
+ "epoch": 1.592,
1301
+ "grad_norm": 2.079061824739654,
1302
+ "kl": 0.0452880859375,
1303
+ "learning_rate": 5.0841360885690996e-08,
1304
+ "loss": 0.0,
1305
+ "reward": 0.9174363613128662,
1306
+ "reward_std": 0.46667972207069397,
1307
+ "rewards/correct_code_reward_func": 0.5416666865348816,
1308
+ "rewards/len_reward_func": 0.375769704580307,
1309
+ "step": 100
1310
+ },
1311
+ {
1312
+ "completion_length": 65.72916793823242,
1313
+ "epoch": 1.608,
1314
+ "grad_norm": 1.5292386933354263,
1315
+ "kl": 0.04522705078125,
1316
+ "learning_rate": 4.698543130728755e-08,
1317
+ "loss": 0.0,
1318
+ "reward": 0.8213175535202026,
1319
+ "reward_std": 0.38392098248004913,
1320
+ "rewards/correct_code_reward_func": 0.458333358168602,
1321
+ "rewards/len_reward_func": 0.3629842549562454,
1322
+ "step": 101
1323
+ },
1324
+ {
1325
+ "completion_length": 67.77083587646484,
1326
+ "epoch": 1.624,
1327
+ "grad_norm": 1.352325105446135,
1328
+ "kl": 0.0390625,
1329
+ "learning_rate": 4.326634763596784e-08,
1330
+ "loss": 0.0,
1331
+ "reward": 0.7263242900371552,
1332
+ "reward_std": 0.37168650329113007,
1333
+ "rewards/correct_code_reward_func": 0.31250002048909664,
1334
+ "rewards/len_reward_func": 0.41382429003715515,
1335
+ "step": 102
1336
+ },
1337
+ {
1338
+ "completion_length": 64.10416793823242,
1339
+ "epoch": 1.6400000000000001,
1340
+ "grad_norm": 1.9987254276022863,
1341
+ "kl": 0.02880859375,
1342
+ "learning_rate": 3.968661679220467e-08,
1343
+ "loss": 0.0,
1344
+ "reward": 1.174392580986023,
1345
+ "reward_std": 0.4813085198402405,
1346
+ "rewards/correct_code_reward_func": 0.7500000298023224,
1347
+ "rewards/len_reward_func": 0.42439255118370056,
1348
+ "step": 103
1349
+ },
1350
+ {
1351
+ "completion_length": 57.437503814697266,
1352
+ "epoch": 1.6560000000000001,
1353
+ "grad_norm": 1.5506203528349733,
1354
+ "kl": 0.041015625,
1355
+ "learning_rate": 3.624865176299499e-08,
1356
+ "loss": 0.0,
1357
+ "reward": 0.9918626546859741,
1358
+ "reward_std": 0.5309067815542221,
1359
+ "rewards/correct_code_reward_func": 0.6666666865348816,
1360
+ "rewards/len_reward_func": 0.3251959830522537,
1361
+ "step": 104
1362
+ },
1363
+ {
1364
+ "completion_length": 114.50000762939453,
1365
+ "epoch": 1.6720000000000002,
1366
+ "grad_norm": 1.538301941895194,
1367
+ "kl": 0.0245361328125,
1368
+ "learning_rate": 3.295476997533905e-08,
1369
+ "loss": 0.0,
1370
+ "reward": 0.9100688099861145,
1371
+ "reward_std": 0.29824198782444,
1372
+ "rewards/correct_code_reward_func": 0.4583333432674408,
1373
+ "rewards/len_reward_func": 0.4517354816198349,
1374
+ "step": 105
1375
+ },
1376
+ {
1377
+ "completion_length": 129.81250381469727,
1378
+ "epoch": 1.688,
1379
+ "grad_norm": 1.3867754807731443,
1380
+ "kl": 0.0283203125,
1381
+ "learning_rate": 2.980719173413396e-08,
1382
+ "loss": 0.0,
1383
+ "reward": 0.818383663892746,
1384
+ "reward_std": 0.5115247815847397,
1385
+ "rewards/correct_code_reward_func": 0.4166666716337204,
1386
+ "rewards/len_reward_func": 0.4017169624567032,
1387
+ "step": 106
1388
+ },
1389
+ {
1390
+ "completion_length": 73.33333587646484,
1391
+ "epoch": 1.704,
1392
+ "grad_norm": 2.2267187145460765,
1393
+ "kl": 0.04461669921875,
1394
+ "learning_rate": 2.680803872553408e-08,
1395
+ "loss": 0.0,
1396
+ "reward": 0.8567679226398468,
1397
+ "reward_std": 0.51302769780159,
1398
+ "rewards/correct_code_reward_func": 0.4375,
1399
+ "rewards/len_reward_func": 0.4192679077386856,
1400
+ "step": 107
1401
+ },
1402
+ {
1403
+ "completion_length": 53.54166793823242,
1404
+ "epoch": 1.72,
1405
+ "grad_norm": 3.1940102299602953,
1406
+ "kl": 0.0521240234375,
1407
+ "learning_rate": 2.395933258678745e-08,
1408
+ "loss": 0.0001,
1409
+ "reward": 0.9940223693847656,
1410
+ "reward_std": 0.46572498977184296,
1411
+ "rewards/correct_code_reward_func": 0.6041666865348816,
1412
+ "rewards/len_reward_func": 0.3898557126522064,
1413
+ "step": 108
1414
+ },
1415
+ {
1416
+ "completion_length": 41.52083396911621,
1417
+ "epoch": 1.736,
1418
+ "grad_norm": 2.0727978566546295,
1419
+ "kl": 0.0655517578125,
1420
+ "learning_rate": 2.1262993543511715e-08,
1421
+ "loss": 0.0001,
1422
+ "reward": 0.9489125609397888,
1423
+ "reward_std": 0.5604254603385925,
1424
+ "rewards/correct_code_reward_func": 0.6250000298023224,
1425
+ "rewards/len_reward_func": 0.32391248643398285,
1426
+ "step": 109
1427
+ },
1428
+ {
1429
+ "completion_length": 106.08333587646484,
1430
+ "epoch": 1.752,
1431
+ "grad_norm": 2.3414859603625806,
1432
+ "kl": 0.03424072265625,
1433
+ "learning_rate": 1.872083911532907e-08,
1434
+ "loss": 0.0,
1435
+ "reward": 0.5710697174072266,
1436
+ "reward_std": 0.4303289204835892,
1437
+ "rewards/correct_code_reward_func": 0.1666666679084301,
1438
+ "rewards/len_reward_func": 0.4044030159711838,
1439
+ "step": 110
1440
+ },
1441
+ {
1442
+ "completion_length": 60.437503814697266,
1443
+ "epoch": 1.768,
1444
+ "grad_norm": 1.5494191116212308,
1445
+ "kl": 0.046875,
1446
+ "learning_rate": 1.6334582890731697e-08,
1447
+ "loss": 0.0,
1448
+ "reward": 1.0543819665908813,
1449
+ "reward_std": 0.4688963294029236,
1450
+ "rewards/correct_code_reward_func": 0.6666666865348816,
1451
+ "rewards/len_reward_func": 0.38771532475948334,
1452
+ "step": 111
1453
+ },
1454
+ {
1455
+ "completion_length": 139.43750381469727,
1456
+ "epoch": 1.784,
1457
+ "grad_norm": 1.8975149766982131,
1458
+ "kl": 0.0323486328125,
1459
+ "learning_rate": 1.4105833372004523e-08,
1460
+ "loss": 0.0,
1461
+ "reward": 0.7198583781719208,
1462
+ "reward_std": 0.2770904451608658,
1463
+ "rewards/correct_code_reward_func": 0.2708333395421505,
1464
+ "rewards/len_reward_func": 0.4490250498056412,
1465
+ "step": 112
1466
+ },
1467
+ {
1468
+ "completion_length": 71.87500190734863,
1469
+ "epoch": 1.8,
1470
+ "grad_norm": 1.8779975481307012,
1471
+ "kl": 0.0350341796875,
1472
+ "learning_rate": 1.2036092890982619e-08,
1473
+ "loss": 0.0,
1474
+ "reward": 0.6213224828243256,
1475
+ "reward_std": 0.39381173253059387,
1476
+ "rewards/correct_code_reward_func": 0.25,
1477
+ "rewards/len_reward_func": 0.3713224530220032,
1478
+ "step": 113
1479
+ },
1480
+ {
1481
+ "completion_length": 73.16666793823242,
1482
+ "epoch": 1.8159999999999998,
1483
+ "grad_norm": 1.625916920606493,
1484
+ "kl": 0.04345703125,
1485
+ "learning_rate": 1.0126756596375685e-08,
1486
+ "loss": 0.0,
1487
+ "reward": 0.8906111121177673,
1488
+ "reward_std": 0.5251133739948273,
1489
+ "rewards/correct_code_reward_func": 0.4791666865348816,
1490
+ "rewards/len_reward_func": 0.41144441068172455,
1491
+ "step": 114
1492
+ },
1493
+ {
1494
+ "completion_length": 39.85416793823242,
1495
+ "epoch": 1.8319999999999999,
1496
+ "grad_norm": 1.8155165345051183,
1497
+ "kl": 0.0440673828125,
1498
+ "learning_rate": 8.379111513340753e-09,
1499
+ "loss": 0.0,
1500
+ "reward": 0.8687795996665955,
1501
+ "reward_std": 0.4838385283946991,
1502
+ "rewards/correct_code_reward_func": 0.4583333358168602,
1503
+ "rewards/len_reward_func": 0.41044625639915466,
1504
+ "step": 115
1505
+ },
1506
+ {
1507
+ "completion_length": 75.58333396911621,
1508
+ "epoch": 1.8479999999999999,
1509
+ "grad_norm": 1.8222797961879316,
1510
+ "kl": 0.03985595703125,
1511
+ "learning_rate": 6.7943356759381785e-09,
1512
+ "loss": 0.0,
1513
+ "reward": 0.9320607483386993,
1514
+ "reward_std": 0.5384509861469269,
1515
+ "rewards/correct_code_reward_func": 0.5416666865348816,
1516
+ "rewards/len_reward_func": 0.39039406180381775,
1517
+ "step": 116
1518
+ },
1519
+ {
1520
+ "completion_length": 68.54166984558105,
1521
+ "epoch": 1.8639999999999999,
1522
+ "grad_norm": 2.0020075086567775,
1523
+ "kl": 0.031982421875,
1524
+ "learning_rate": 5.373497333054616e-09,
1525
+ "loss": 0.0,
1526
+ "reward": 0.9275134801864624,
1527
+ "reward_std": 0.4482097327709198,
1528
+ "rewards/correct_code_reward_func": 0.5000000298023224,
1529
+ "rewards/len_reward_func": 0.4275134950876236,
1530
+ "step": 117
1531
+ },
1532
+ {
1533
+ "completion_length": 73.91666793823242,
1534
+ "epoch": 1.88,
1535
+ "grad_norm": 1.7788611304062052,
1536
+ "kl": 0.03240966796875,
1537
+ "learning_rate": 4.117554228329406e-09,
1538
+ "loss": 0.0,
1539
+ "reward": 0.9304822385311127,
1540
+ "reward_std": 0.5174555033445358,
1541
+ "rewards/correct_code_reward_func": 0.5416666865348816,
1542
+ "rewards/len_reward_func": 0.38881558179855347,
1543
+ "step": 118
1544
+ },
1545
+ {
1546
+ "completion_length": 56.20833396911621,
1547
+ "epoch": 1.896,
1548
+ "grad_norm": 2.1126141119280257,
1549
+ "kl": 0.0341796875,
1550
+ "learning_rate": 3.0273529545687125e-09,
1551
+ "loss": 0.0,
1552
+ "reward": 0.7594221532344818,
1553
+ "reward_std": 0.480338990688324,
1554
+ "rewards/correct_code_reward_func": 0.3958333432674408,
1555
+ "rewards/len_reward_func": 0.3635888248682022,
1556
+ "step": 119
1557
+ },
1558
+ {
1559
+ "completion_length": 72.47916793823242,
1560
+ "epoch": 1.912,
1561
+ "grad_norm": 1.4598566413193612,
1562
+ "kl": 0.03466796875,
1563
+ "learning_rate": 2.1036283830834224e-09,
1564
+ "loss": 0.0,
1565
+ "reward": 0.7889427244663239,
1566
+ "reward_std": 0.48503294587135315,
1567
+ "rewards/correct_code_reward_func": 0.3958333432674408,
1568
+ "rewards/len_reward_func": 0.39310936629772186,
1569
+ "step": 120
1570
+ },
1571
+ {
1572
+ "completion_length": 40.85416793823242,
1573
+ "epoch": 1.928,
1574
+ "grad_norm": 2.335195303935002,
1575
+ "kl": 0.056640625,
1576
+ "learning_rate": 1.347003168334665e-09,
1577
+ "loss": 0.0001,
1578
+ "reward": 1.0662382543087006,
1579
+ "reward_std": 0.2768351137638092,
1580
+ "rewards/correct_code_reward_func": 0.6250000149011612,
1581
+ "rewards/len_reward_func": 0.44123825430870056,
1582
+ "step": 121
1583
+ },
1584
+ {
1585
+ "completion_length": 50.62500190734863,
1586
+ "epoch": 1.944,
1587
+ "grad_norm": 1.8386331097859265,
1588
+ "kl": 0.03173828125,
1589
+ "learning_rate": 7.579873282216598e-10,
1590
+ "loss": 0.0,
1591
+ "reward": 0.8906074166297913,
1592
+ "reward_std": 0.5252098143100739,
1593
+ "rewards/correct_code_reward_func": 0.5833333730697632,
1594
+ "rewards/len_reward_func": 0.30727406591176987,
1595
+ "step": 122
1596
+ },
1597
+ {
1598
+ "completion_length": 99.4375057220459,
1599
+ "epoch": 1.96,
1600
+ "grad_norm": 1.621045537411182,
1601
+ "kl": 0.0238037109375,
1602
+ "learning_rate": 3.3697790029424413e-10,
1603
+ "loss": 0.0,
1604
+ "reward": 0.9505272507667542,
1605
+ "reward_std": 0.5842320024967194,
1606
+ "rewards/correct_code_reward_func": 0.5833333432674408,
1607
+ "rewards/len_reward_func": 0.36719387769699097,
1608
+ "step": 123
1609
+ },
1610
+ {
1611
+ "completion_length": 63.000003814697266,
1612
+ "epoch": 1.976,
1613
+ "grad_norm": 2.157350197672568,
1614
+ "kl": 0.0465087890625,
1615
+ "learning_rate": 8.425867412190091e-11,
1616
+ "loss": 0.0,
1617
+ "reward": 0.9762873649597168,
1618
+ "reward_std": 0.5066816210746765,
1619
+ "rewards/correct_code_reward_func": 0.5833333432674408,
1620
+ "rewards/len_reward_func": 0.3929540067911148,
1621
+ "step": 124
1622
+ },
1623
+ {
1624
+ "completion_length": 126.97917175292969,
1625
+ "epoch": 1.992,
1626
+ "grad_norm": 1.7642641467833304,
1627
+ "kl": 0.02130126953125,
1628
+ "learning_rate": 0.0,
1629
+ "loss": 0.0,
1630
+ "reward": 0.7899810075759888,
1631
+ "reward_std": 0.38732415437698364,
1632
+ "rewards/correct_code_reward_func": 0.3750000149011612,
1633
+ "rewards/len_reward_func": 0.41498102247714996,
1634
+ "step": 125
1635
+ },
1636
+ {
1637
+ "epoch": 1.992,
1638
+ "step": 125,
1639
+ "total_flos": 0.0,
1640
+ "train_loss": 1.9367338650191358e-05,
1641
+ "train_runtime": 3648.0047,
1642
+ "train_samples_per_second": 0.206,
1643
+ "train_steps_per_second": 0.034
1644
+ }
1645
+ ],
1646
+ "logging_steps": 1,
1647
+ "max_steps": 125,
1648
+ "num_input_tokens_seen": 0,
1649
+ "num_train_epochs": 3,
1650
+ "save_steps": 25,
1651
+ "stateful_callbacks": {
1652
+ "TrainerControl": {
1653
+ "args": {
1654
+ "should_epoch_stop": false,
1655
+ "should_evaluate": false,
1656
+ "should_log": false,
1657
+ "should_save": true,
1658
+ "should_training_stop": true
1659
+ },
1660
+ "attributes": {}
1661
+ }
1662
+ },
1663
+ "total_flos": 0.0,
1664
+ "train_batch_size": 1,
1665
+ "trial_name": null,
1666
+ "trial_params": null
1667
+ }