taicheng committed on
Commit d1f6790 · verified · 1 Parent(s): 1cbd8b1

Model save
README.md ADDED
@@ -0,0 +1,83 @@
+ ---
+ base_model: mistralai/Mistral-7B-v0.1
+ library_name: peft
+ license: apache-2.0
+ tags:
+ - trl
+ - dpo
+ - generated_from_trainer
+ model-index:
+ - name: zephyr-7b-dpo-qlora
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # zephyr-7b-dpo-qlora
+
+ This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.4952
+ - Rewards/chosen: -2.8107
+ - Rewards/rejected: -3.8708
+ - Rewards/accuracies: 0.7718
+ - Rewards/margins: 1.0601
+ - Logps/rejected: -631.7385
+ - Logps/chosen: -545.9743
+ - Logits/rejected: -1.0385
+ - Logits/chosen: -1.1509
+
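The `Rewards/*` figures above are DPO's implicit rewards. As background (the standard DPO objective, not text from this card), the policy $\pi_\theta$ is trained against a frozen reference $\pi_{\mathrm{ref}}$ on preference pairs $(x, y_w, y_l)$:

```latex
\mathcal{L}_{\mathrm{DPO}}
= -\,\mathbb{E}_{(x,\, y_w,\, y_l)}\left[
    \log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
      - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    \right)
  \right]
```

Here `Rewards/chosen` corresponds to $\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}$, `Rewards/rejected` to the same quantity for $y_l$, and `Rewards/margins` to their difference.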
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-06
+ - train_batch_size: 4
+ - eval_batch_size: 8
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 4
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 64
+ - total_eval_batch_size: 32
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 1
+
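The derived totals above follow from the per-device settings, and the learning-rate trajectory in the logs follows a warmup-then-cosine shape. A minimal sketch of both (the `lr_at` function is an illustrative re-implementation of a cosine schedule with linear warmup, not the Trainer's own code; 955 is the total optimizer-step count from trainer_state.json):

```python
import math

# Effective train batch size = per-device batch × grad-accum steps × devices
# = 4 × 4 × 4 = 64, matching total_train_batch_size above.
train_batch_size = 4
gradient_accumulation_steps = 4
num_devices = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 64

def lr_at(step, total_steps=955, warmup_ratio=0.1, peak_lr=5e-06):
    """Linear warmup for the first warmup_ratio of steps, cosine decay after."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

With these settings the learning rate ramps to its 5e-06 peak around step 95 and decays to zero by step 955, consistent with the `learning_rate` values logged in trainer_state.json.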
+ ### Training results
+
+ | Training Loss | Epoch | Step | Logits/chosen | Logits/rejected | Logps/chosen | Logps/rejected | Validation Loss | Rewards/accuracies | Rewards/chosen | Rewards/margins | Rewards/rejected |
+ |:-------------:|:------:|:----:|:-------------:|:---------------:|:------------:|:--------------:|:---------------:|:------------------:|:--------------:|:---------------:|:----------------:|
+ | 0.6163 | 0.1047 | 100 | -2.1006 | -2.0162 | -303.8351 | -310.3097 | 0.6178 | 0.6806 | -0.3893 | 0.2672 | -0.6565 |
+ | 0.5679 | 0.2094 | 200 | -1.8227 | -1.7394 | -352.2879 | -389.6575 | 0.5567 | 0.7401 | -0.8739 | 0.5761 | -1.4500 |
+ | 0.5412 | 0.3141 | 300 | -1.3111 | -1.2181 | -421.3257 | -483.0423 | 0.5305 | 0.7460 | -1.5642 | 0.8196 | -2.3838 |
+ | 0.5364 | 0.4187 | 400 | -1.2334 | -1.1332 | -416.6979 | -476.3458 | 0.5143 | 0.7579 | -1.5180 | 0.7989 | -2.3169 |
+ | 0.5046 | 0.5234 | 500 | -1.1373 | -1.0302 | -529.9542 | -605.2977 | 0.5062 | 0.7579 | -2.6505 | 0.9559 | -3.6064 |
+ | 0.4736 | 0.6281 | 600 | -1.1253 | -1.0135 | -537.3406 | -621.1549 | 0.5059 | 0.7639 | -2.7244 | 1.0406 | -3.7650 |
+ | 0.4619 | 0.7328 | 700 | -1.1194 | -1.0064 | -557.3041 | -644.5651 | 0.4994 | 0.7619 | -2.9240 | 1.0750 | -3.9991 |
+ | 0.4926 | 0.8375 | 800 | -1.1641 | -1.0516 | -537.3770 | -619.2051 | 0.4962 | 0.7659 | -2.7247 | 1.0207 | -3.7455 |
+ | 0.4856 | 0.9422 | 900 | -1.1509 | -1.0385 | -545.9743 | -631.7385 | 0.4952 | 0.7718 | -2.8107 | 1.0601 | -3.8708 |
+
+
+ ### Framework versions
+
+ - PEFT 0.12.0
+ - Transformers 4.44.2
+ - Pytorch 2.4.0
+ - Datasets 2.21.0
+ - Tokenizers 0.19.1
adapter_model.safetensors CHANGED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:16eb89e252e4ccfc437718d2f729474d0f3b93de224dbf67bc157b3f99bbe96c
+ size 671150064
all_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+ "epoch": 0.9997382884061764,
+ "total_flos": 0.0,
+ "train_loss": 0.2358125359600127,
+ "train_runtime": 19082.7985,
+ "train_samples": 61134,
+ "train_samples_per_second": 3.204,
+ "train_steps_per_second": 0.05
+ }
runs/Sep12_12-22-03_qa-a40-005.crc.nd.edu/events.out.tfevents.1726158593.qa-a40-005.crc.nd.edu.408111.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:28415ecd37319a5aef788ca1534d0ed4748a3b9202ff2840458e55c94a5158b4
- size 40173
+ oid sha256:a98a894bc09c2b34903e8ab4c2119f67724d8810ecf18d5e86e5f236863456af
+ size 40527
train_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+ "epoch": 0.9997382884061764,
+ "total_flos": 0.0,
+ "train_loss": 0.2358125359600127,
+ "train_runtime": 19082.7985,
+ "train_samples": 61134,
+ "train_samples_per_second": 3.204,
+ "train_steps_per_second": 0.05
+ }
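As a quick consistency check (a sketch, not part of the commit), the throughput figures in train_results.json agree with the sample count, runtime, and the 955 optimizer steps recorded in trainer_state.json:

```python
train_runtime = 19082.7985  # seconds, from train_results.json
train_samples = 61134       # from train_results.json
global_step = 955           # from trainer_state.json

samples_per_second = train_samples / train_runtime
steps_per_second = global_step / train_runtime

print(round(samples_per_second, 3))  # 3.204, matches train_samples_per_second
print(round(steps_per_second, 2))    # 0.05, matches train_steps_per_second
```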
trainer_state.json ADDED
@@ -0,0 +1,1626 @@
+ {
+ "best_metric": null,
+ "best_model_checkpoint": null,
+ "epoch": 0.9997382884061764,
+ "eval_steps": 100,
+ "global_step": 955,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "epoch": 0.0010468463752944255,
+ "grad_norm": 1.1940648296392757,
+ "learning_rate": 5.208333333333333e-08,
+ "logits/chosen": -2.519019603729248,
+ "logits/rejected": -2.354379177093506,
+ "logps/chosen": -297.6008605957031,
+ "logps/rejected": -252.44248962402344,
+ "loss": 0.693,
+ "rewards/accuracies": 0.5625,
+ "rewards/chosen": 0.0007321774610318244,
+ "rewards/margins": 6.297111394815147e-05,
+ "rewards/rejected": 0.0006692063761875033,
+ "step": 1
+ },
+ {
+ "epoch": 0.010468463752944255,
+ "grad_norm": 1.106408968948395,
+ "learning_rate": 5.208333333333334e-07,
+ "logits/chosen": -2.2454488277435303,
+ "logits/rejected": -2.215104818344116,
+ "logps/chosen": -275.67840576171875,
+ "logps/rejected": -254.77935791015625,
+ "loss": 0.6927,
+ "rewards/accuracies": 0.5694444179534912,
+ "rewards/chosen": 0.004385500214993954,
+ "rewards/margins": 0.0008544763550162315,
+ "rewards/rejected": 0.0035310229286551476,
+ "step": 10
+ },
+ {
+ "epoch": 0.02093692750588851,
+ "grad_norm": 1.1723169233645143,
+ "learning_rate": 1.0416666666666667e-06,
+ "logits/chosen": -2.231245517730713,
+ "logits/rejected": -2.1146271228790283,
+ "logps/chosen": -277.5929260253906,
+ "logps/rejected": -255.2114715576172,
+ "loss": 0.6907,
+ "rewards/accuracies": 0.643750011920929,
+ "rewards/chosen": 0.026119515299797058,
+ "rewards/margins": 0.005599636118859053,
+ "rewards/rejected": 0.020519878715276718,
+ "step": 20
+ },
+ {
+ "epoch": 0.031405391258832765,
+ "grad_norm": 1.1930628386131166,
+ "learning_rate": 1.5625e-06,
+ "logits/chosen": -2.314044952392578,
+ "logits/rejected": -2.211050510406494,
+ "logps/chosen": -281.39801025390625,
+ "logps/rejected": -262.43841552734375,
+ "loss": 0.6858,
+ "rewards/accuracies": 0.643750011920929,
+ "rewards/chosen": 0.042405955493450165,
+ "rewards/margins": 0.015279242768883705,
+ "rewards/rejected": 0.027126718312501907,
+ "step": 30
+ },
+ {
+ "epoch": 0.04187385501177702,
+ "grad_norm": 1.1771194457412584,
+ "learning_rate": 2.0833333333333334e-06,
+ "logits/chosen": -2.3085246086120605,
+ "logits/rejected": -2.2145581245422363,
+ "logps/chosen": -268.41571044921875,
+ "logps/rejected": -255.4882354736328,
+ "loss": 0.6814,
+ "rewards/accuracies": 0.6625000238418579,
+ "rewards/chosen": 0.05249170586466789,
+ "rewards/margins": 0.027045782655477524,
+ "rewards/rejected": 0.025445926934480667,
+ "step": 40
+ },
+ {
+ "epoch": 0.05234231876472128,
+ "grad_norm": 1.1817503689886124,
+ "learning_rate": 2.604166666666667e-06,
+ "logits/chosen": -2.279360294342041,
+ "logits/rejected": -2.1755104064941406,
+ "logps/chosen": -227.83438110351562,
+ "logps/rejected": -206.697265625,
+ "loss": 0.6761,
+ "rewards/accuracies": 0.7250000238418579,
+ "rewards/chosen": 0.05746235325932503,
+ "rewards/margins": 0.04355225712060928,
+ "rewards/rejected": 0.013910098001360893,
+ "step": 50
+ },
+ {
+ "epoch": 0.06281078251766553,
+ "grad_norm": 1.318174585891366,
+ "learning_rate": 3.125e-06,
+ "logits/chosen": -2.287376880645752,
+ "logits/rejected": -2.186383008956909,
+ "logps/chosen": -264.8748474121094,
+ "logps/rejected": -229.34848022460938,
+ "loss": 0.6681,
+ "rewards/accuracies": 0.6875,
+ "rewards/chosen": 0.045027900487184525,
+ "rewards/margins": 0.06142578274011612,
+ "rewards/rejected": -0.016397882252931595,
+ "step": 60
+ },
+ {
+ "epoch": 0.07327924627060979,
+ "grad_norm": 1.5287003725547763,
+ "learning_rate": 3.6458333333333333e-06,
+ "logits/chosen": -2.129021406173706,
+ "logits/rejected": -2.0812830924987793,
+ "logps/chosen": -257.1100158691406,
+ "logps/rejected": -263.87359619140625,
+ "loss": 0.6544,
+ "rewards/accuracies": 0.7437499761581421,
+ "rewards/chosen": -0.00797443836927414,
+ "rewards/margins": 0.10772331804037094,
+ "rewards/rejected": -0.11569775640964508,
+ "step": 70
+ },
+ {
+ "epoch": 0.08374771002355404,
+ "grad_norm": 2.951307924811214,
+ "learning_rate": 4.166666666666667e-06,
+ "logits/chosen": -2.2603583335876465,
+ "logits/rejected": -2.1030356884002686,
+ "logps/chosen": -265.6451110839844,
+ "logps/rejected": -258.9366455078125,
+ "loss": 0.6386,
+ "rewards/accuracies": 0.699999988079071,
+ "rewards/chosen": -0.1463477909564972,
+ "rewards/margins": 0.12257362902164459,
+ "rewards/rejected": -0.268921434879303,
+ "step": 80
+ },
+ {
+ "epoch": 0.0942161737764983,
+ "grad_norm": 2.228941966023797,
+ "learning_rate": 4.6875000000000004e-06,
+ "logits/chosen": -2.1493210792541504,
+ "logits/rejected": -2.081740140914917,
+ "logps/chosen": -267.15631103515625,
+ "logps/rejected": -286.8982238769531,
+ "loss": 0.6314,
+ "rewards/accuracies": 0.7124999761581421,
+ "rewards/chosen": -0.1616009771823883,
+ "rewards/margins": 0.18432030081748962,
+ "rewards/rejected": -0.34592124819755554,
+ "step": 90
+ },
+ {
+ "epoch": 0.10468463752944256,
+ "grad_norm": 2.3171658298596007,
+ "learning_rate": 4.9997324926814375e-06,
+ "logits/chosen": -2.198456287384033,
+ "logits/rejected": -2.08889102935791,
+ "logps/chosen": -310.09893798828125,
+ "logps/rejected": -328.76763916015625,
+ "loss": 0.6163,
+ "rewards/accuracies": 0.6937500238418579,
+ "rewards/chosen": -0.37811416387557983,
+ "rewards/margins": 0.23748056590557098,
+ "rewards/rejected": -0.615594744682312,
+ "step": 100
+ },
+ {
+ "epoch": 0.10468463752944256,
+ "eval_logits/chosen": -2.1005911827087402,
+ "eval_logits/rejected": -2.0161635875701904,
+ "eval_logps/chosen": -303.8350524902344,
+ "eval_logps/rejected": -310.3097229003906,
+ "eval_loss": 0.6178256869316101,
+ "eval_rewards/accuracies": 0.6805555820465088,
+ "eval_rewards/chosen": -0.38932088017463684,
+ "eval_rewards/margins": 0.26718538999557495,
+ "eval_rewards/rejected": -0.6565062999725342,
+ "eval_runtime": 497.4053,
+ "eval_samples_per_second": 4.021,
+ "eval_steps_per_second": 0.127,
+ "step": 100
+ },
+ {
+ "epoch": 0.11515310128238682,
+ "grad_norm": 3.2280840758041305,
+ "learning_rate": 4.996723692767927e-06,
+ "logits/chosen": -2.185472011566162,
+ "logits/rejected": -2.0565757751464844,
+ "logps/chosen": -283.398681640625,
+ "logps/rejected": -271.3462219238281,
+ "loss": 0.6159,
+ "rewards/accuracies": 0.737500011920929,
+ "rewards/chosen": -0.35291650891304016,
+ "rewards/margins": 0.27926865220069885,
+ "rewards/rejected": -0.632185161113739,
+ "step": 110
+ },
+ {
+ "epoch": 0.12562156503533106,
+ "grad_norm": 4.321662600064664,
+ "learning_rate": 4.9903757462135984e-06,
+ "logits/chosen": -2.1495065689086914,
+ "logits/rejected": -2.0793585777282715,
+ "logps/chosen": -293.3641662597656,
+ "logps/rejected": -339.5681457519531,
+ "loss": 0.5751,
+ "rewards/accuracies": 0.75,
+ "rewards/chosen": -0.4733458161354065,
+ "rewards/margins": 0.3658001124858856,
+ "rewards/rejected": -0.8391459584236145,
+ "step": 120
+ },
+ {
+ "epoch": 0.1360900287882753,
+ "grad_norm": 5.281496669320805,
+ "learning_rate": 4.980697142834315e-06,
+ "logits/chosen": -2.118959426879883,
+ "logits/rejected": -2.0077736377716064,
+ "logps/chosen": -376.8563232421875,
+ "logps/rejected": -348.50531005859375,
+ "loss": 0.5843,
+ "rewards/accuracies": 0.6812499761581421,
+ "rewards/chosen": -0.770808756351471,
+ "rewards/margins": 0.335581511259079,
+ "rewards/rejected": -1.1063902378082275,
+ "step": 130
+ },
+ {
+ "epoch": 0.14655849254121958,
+ "grad_norm": 5.5524041616811255,
+ "learning_rate": 4.967700826904229e-06,
+ "logits/chosen": -2.0535736083984375,
+ "logits/rejected": -1.9956505298614502,
+ "logps/chosen": -353.10333251953125,
+ "logps/rejected": -393.17108154296875,
+ "loss": 0.5757,
+ "rewards/accuracies": 0.762499988079071,
+ "rewards/chosen": -0.8411790132522583,
+ "rewards/margins": 0.44028061628341675,
+ "rewards/rejected": -1.2814596891403198,
+ "step": 140
+ },
+ {
+ "epoch": 0.15702695629416383,
+ "grad_norm": 4.851579207894888,
+ "learning_rate": 4.951404179843963e-06,
+ "logits/chosen": -2.1413304805755615,
+ "logits/rejected": -2.0062034130096436,
+ "logps/chosen": -385.7857360839844,
+ "logps/rejected": -377.45819091796875,
+ "loss": 0.5887,
+ "rewards/accuracies": 0.75,
+ "rewards/chosen": -0.9732216000556946,
+ "rewards/margins": 0.39891117811203003,
+ "rewards/rejected": -1.3721327781677246,
+ "step": 150
+ },
+ {
+ "epoch": 0.16749542004710807,
+ "grad_norm": 7.670497596403814,
+ "learning_rate": 4.931828996974498e-06,
+ "logits/chosen": -2.0889430046081543,
+ "logits/rejected": -1.922328233718872,
+ "logps/chosen": -347.14544677734375,
+ "logps/rejected": -342.8069763183594,
+ "loss": 0.5572,
+ "rewards/accuracies": 0.7250000238418579,
+ "rewards/chosen": -0.606709361076355,
+ "rewards/margins": 0.5376572608947754,
+ "rewards/rejected": -1.1443665027618408,
+ "step": 160
+ },
+ {
+ "epoch": 0.17796388380005235,
+ "grad_norm": 5.79406761477676,
+ "learning_rate": 4.909001458367867e-06,
+ "logits/chosen": -1.9932262897491455,
+ "logits/rejected": -1.8520256280899048,
+ "logps/chosen": -393.05499267578125,
+ "logps/rejected": -439.379150390625,
+ "loss": 0.5534,
+ "rewards/accuracies": 0.65625,
+ "rewards/chosen": -1.3342100381851196,
+ "rewards/margins": 0.6197064518928528,
+ "rewards/rejected": -1.9539167881011963,
+ "step": 170
+ },
+ {
+ "epoch": 0.1884323475529966,
+ "grad_norm": 5.16143784148925,
+ "learning_rate": 4.882952093833628e-06,
+ "logits/chosen": -1.9763168096542358,
+ "logits/rejected": -1.9191144704818726,
+ "logps/chosen": -370.9029235839844,
+ "logps/rejected": -421.9246520996094,
+ "loss": 0.5354,
+ "rewards/accuracies": 0.699999988079071,
+ "rewards/chosen": -1.0804250240325928,
+ "rewards/margins": 0.5884348154067993,
+ "rewards/rejected": -1.668859839439392,
+ "step": 180
+ },
+ {
+ "epoch": 0.19890081130594087,
+ "grad_norm": 4.841208626728555,
+ "learning_rate": 4.853715742087947e-06,
+ "logits/chosen": -1.8281657695770264,
+ "logits/rejected": -1.785474419593811,
+ "logps/chosen": -358.50970458984375,
+ "logps/rejected": -449.71478271484375,
+ "loss": 0.5327,
+ "rewards/accuracies": 0.8125,
+ "rewards/chosen": -1.1242244243621826,
+ "rewards/margins": 0.7143529653549194,
+ "rewards/rejected": -1.8385772705078125,
+ "step": 190
+ },
+ {
+ "epoch": 0.2093692750588851,
+ "grad_norm": 4.320034048961276,
+ "learning_rate": 4.821331504159906e-06,
+ "logits/chosen": -1.9889659881591797,
+ "logits/rejected": -1.8328838348388672,
+ "logps/chosen": -417.7860412597656,
+ "logps/rejected": -414.47528076171875,
+ "loss": 0.5679,
+ "rewards/accuracies": 0.71875,
+ "rewards/chosen": -1.052539587020874,
+ "rewards/margins": 0.5909037590026855,
+ "rewards/rejected": -1.6434433460235596,
+ "step": 200
+ },
+ {
+ "epoch": 0.2093692750588851,
+ "eval_logits/chosen": -1.8226656913757324,
+ "eval_logits/rejected": -1.73935866355896,
+ "eval_logps/chosen": -352.2879333496094,
+ "eval_logps/rejected": -389.65753173828125,
+ "eval_loss": 0.5566642880439758,
+ "eval_rewards/accuracies": 0.7400793433189392,
+ "eval_rewards/chosen": -0.8738502264022827,
+ "eval_rewards/margins": 0.5761341452598572,
+ "eval_rewards/rejected": -1.4499843120574951,
+ "eval_runtime": 495.1318,
+ "eval_samples_per_second": 4.039,
+ "eval_steps_per_second": 0.127,
+ "step": 200
+ },
+ {
+ "epoch": 0.21983773881182936,
+ "grad_norm": 4.600314274109389,
+ "learning_rate": 4.7858426910973435e-06,
+ "logits/chosen": -1.884235143661499,
+ "logits/rejected": -1.8107131719589233,
+ "logps/chosen": -395.2080993652344,
+ "logps/rejected": -432.21160888671875,
+ "loss": 0.5779,
+ "rewards/accuracies": 0.6812499761581421,
+ "rewards/chosen": -1.1761962175369263,
+ "rewards/margins": 0.46132412552833557,
+ "rewards/rejected": -1.637520432472229,
+ "step": 210
+ },
+ {
+ "epoch": 0.23030620256477363,
+ "grad_norm": 5.190566663970971,
+ "learning_rate": 4.747296766042161e-06,
+ "logits/chosen": -1.8367938995361328,
+ "logits/rejected": -1.7323997020721436,
+ "logps/chosen": -448.40631103515625,
+ "logps/rejected": -463.5052185058594,
+ "loss": 0.5486,
+ "rewards/accuracies": 0.731249988079071,
+ "rewards/chosen": -1.5517961978912354,
+ "rewards/margins": 0.5995836853981018,
+ "rewards/rejected": -2.1513800621032715,
+ "step": 220
+ },
+ {
+ "epoch": 0.24077466631771788,
+ "grad_norm": 6.503627910465957,
+ "learning_rate": 4.705745280752586e-06,
+ "logits/chosen": -1.6796023845672607,
+ "logits/rejected": -1.613417387008667,
+ "logps/chosen": -363.4635009765625,
+ "logps/rejected": -397.3682861328125,
+ "loss": 0.5564,
+ "rewards/accuracies": 0.7124999761581421,
+ "rewards/chosen": -1.1600974798202515,
+ "rewards/margins": 0.502653956413269,
+ "rewards/rejected": -1.66275155544281,
+ "step": 230
+ },
+ {
+ "epoch": 0.2512431300706621,
+ "grad_norm": 5.303084146530296,
+ "learning_rate": 4.661243806657256e-06,
+ "logits/chosen": -1.7421451807022095,
+ "logits/rejected": -1.6884396076202393,
+ "logps/chosen": -386.07489013671875,
+ "logps/rejected": -414.7723693847656,
+ "loss": 0.5355,
+ "rewards/accuracies": 0.7250000238418579,
+ "rewards/chosen": -1.2963042259216309,
+ "rewards/margins": 0.5929707884788513,
+ "rewards/rejected": -1.8892749547958374,
+ "step": 240
+ },
+ {
+ "epoch": 0.26171159382360637,
+ "grad_norm": 4.582341616936657,
+ "learning_rate": 4.613851860533367e-06,
+ "logits/chosen": -1.766645073890686,
+ "logits/rejected": -1.6948877573013306,
+ "logps/chosen": -388.61328125,
+ "logps/rejected": -415.88690185546875,
+ "loss": 0.5156,
+ "rewards/accuracies": 0.793749988079071,
+ "rewards/chosen": -1.220664620399475,
+ "rewards/margins": 0.6846394538879395,
+ "rewards/rejected": -1.905303955078125,
+ "step": 250
+ },
+ {
+ "epoch": 0.2721800575765506,
+ "grad_norm": 5.258036005468907,
+ "learning_rate": 4.563632824908252e-06,
+ "logits/chosen": -1.777336835861206,
+ "logits/rejected": -1.651208519935608,
+ "logps/chosen": -376.238525390625,
+ "logps/rejected": -421.6551818847656,
+ "loss": 0.5197,
+ "rewards/accuracies": 0.668749988079071,
+ "rewards/chosen": -1.0568403005599976,
+ "rewards/margins": 0.6248779296875,
+ "rewards/rejected": -1.6817182302474976,
+ "step": 260
+ },
+ {
+ "epoch": 0.2826485213294949,
+ "grad_norm": 4.95113233440466,
+ "learning_rate": 4.510653863290871e-06,
+ "logits/chosen": -1.6241214275360107,
+ "logits/rejected": -1.5427122116088867,
+ "logps/chosen": -393.2299499511719,
+ "logps/rejected": -455.7496032714844,
+ "loss": 0.5353,
+ "rewards/accuracies": 0.731249988079071,
+ "rewards/chosen": -1.1928220987319946,
+ "rewards/margins": 0.8355104327201843,
+ "rewards/rejected": -2.028332471847534,
+ "step": 270
+ },
+ {
+ "epoch": 0.29311698508243916,
+ "grad_norm": 4.316588759606841,
+ "learning_rate": 4.454985830346574e-06,
+ "logits/chosen": -1.6576063632965088,
+ "logits/rejected": -1.546891212463379,
+ "logps/chosen": -402.0686340332031,
+ "logps/rejected": -449.80694580078125,
+ "loss": 0.5469,
+ "rewards/accuracies": 0.7749999761581421,
+ "rewards/chosen": -1.1402246952056885,
+ "rewards/margins": 0.8147087097167969,
+ "rewards/rejected": -1.9549334049224854,
+ "step": 280
+ },
+ {
+ "epoch": 0.3035854488353834,
+ "grad_norm": 4.334299410781991,
+ "learning_rate": 4.396703177135262e-06,
+ "logits/chosen": -1.7125823497772217,
+ "logits/rejected": -1.547957181930542,
+ "logps/chosen": -367.20501708984375,
+ "logps/rejected": -389.2141418457031,
+ "loss": 0.5292,
+ "rewards/accuracies": 0.7250000238418579,
+ "rewards/chosen": -0.8416954278945923,
+ "rewards/margins": 0.5679039359092712,
+ "rewards/rejected": -1.4095993041992188,
+ "step": 290
+ },
+ {
+ "epoch": 0.31405391258832765,
+ "grad_norm": 5.689892772495371,
+ "learning_rate": 4.335883851539693e-06,
+ "logits/chosen": -1.4305754899978638,
+ "logits/rejected": -1.4178600311279297,
+ "logps/chosen": -366.5882263183594,
+ "logps/rejected": -427.017333984375,
+ "loss": 0.5412,
+ "rewards/accuracies": 0.699999988079071,
+ "rewards/chosen": -1.239133596420288,
+ "rewards/margins": 0.6136574149131775,
+ "rewards/rejected": -1.8527911901474,
+ "step": 300
+ },
+ {
+ "epoch": 0.31405391258832765,
+ "eval_logits/chosen": -1.3111425638198853,
+ "eval_logits/rejected": -1.218103051185608,
+ "eval_logps/chosen": -421.32574462890625,
+ "eval_logps/rejected": -483.0422668457031,
+ "eval_loss": 0.5305107831954956,
+ "eval_rewards/accuracies": 0.7460317611694336,
+ "eval_rewards/chosen": -1.5642281770706177,
+ "eval_rewards/margins": 0.819603681564331,
+ "eval_rewards/rejected": -2.3838319778442383,
+ "eval_runtime": 495.2081,
+ "eval_samples_per_second": 4.039,
+ "eval_steps_per_second": 0.127,
+ "step": 300
+ },
+ {
+ "epoch": 0.3245223763412719,
+ "grad_norm": 5.39322804571776,
+ "learning_rate": 4.2726091940171055e-06,
+ "logits/chosen": -1.4802881479263306,
+ "logits/rejected": -1.3236931562423706,
+ "logps/chosen": -424.968505859375,
+ "logps/rejected": -472.8589782714844,
+ "loss": 0.5441,
+ "rewards/accuracies": 0.731249988079071,
+ "rewards/chosen": -1.5254794359207153,
+ "rewards/margins": 0.8688099980354309,
+ "rewards/rejected": -2.394289493560791,
+ "step": 310
+ },
+ {
+ "epoch": 0.33499084009421615,
+ "grad_norm": 5.55708768926168,
+ "learning_rate": 4.206963828813555e-06,
+ "logits/chosen": -1.5147395133972168,
+ "logits/rejected": -1.4260919094085693,
+ "logps/chosen": -356.251953125,
+ "logps/rejected": -424.2235412597656,
+ "loss": 0.5537,
+ "rewards/accuracies": 0.6937500238418579,
+ "rewards/chosen": -1.1914732456207275,
+ "rewards/margins": 0.6315604448318481,
+ "rewards/rejected": -1.8230335712432861,
+ "step": 320
+ },
+ {
+ "epoch": 0.34545930384716045,
+ "grad_norm": 5.004482442410713,
+ "learning_rate": 4.139035550786495e-06,
+ "logits/chosen": -1.504990577697754,
+ "logits/rejected": -1.4216346740722656,
+ "logps/chosen": -406.04473876953125,
+ "logps/rejected": -461.72283935546875,
+ "loss": 0.5321,
+ "rewards/accuracies": 0.699999988079071,
+ "rewards/chosen": -1.5744965076446533,
+ "rewards/margins": 0.6061158776283264,
+ "rewards/rejected": -2.180612325668335,
+ "step": 330
+ },
+ {
+ "epoch": 0.3559277676001047,
+ "grad_norm": 5.1331172845370165,
+ "learning_rate": 4.068915207986931e-06,
+ "logits/chosen": -1.2623932361602783,
+ "logits/rejected": -1.1655136346817017,
+ "logps/chosen": -411.1018981933594,
+ "logps/rejected": -456.156982421875,
+ "loss": 0.5393,
+ "rewards/accuracies": 0.6937500238418579,
+ "rewards/chosen": -1.6048648357391357,
+ "rewards/margins": 0.7221436500549316,
+ "rewards/rejected": -2.3270087242126465,
+ "step": 340
+ },
+ {
+ "epoch": 0.36639623135304894,
+ "grad_norm": 6.204494142283613,
+ "learning_rate": 3.996696580158211e-06,
+ "logits/chosen": -1.4823843240737915,
+ "logits/rejected": -1.3702657222747803,
+ "logps/chosen": -387.9534606933594,
+ "logps/rejected": -448.01373291015625,
+ "loss": 0.5072,
+ "rewards/accuracies": 0.699999988079071,
+ "rewards/chosen": -1.3335731029510498,
+ "rewards/margins": 0.7507054805755615,
+ "rewards/rejected": -2.0842788219451904,
+ "step": 350
+ },
+ {
+ "epoch": 0.3768646951059932,
+ "grad_norm": 8.049482757420606,
+ "learning_rate": 3.922476253313921e-06,
+ "logits/chosen": -1.307775855064392,
+ "logits/rejected": -1.24771249294281,
+ "logps/chosen": -400.76910400390625,
+ "logps/rejected": -469.23651123046875,
+ "loss": 0.4704,
+ "rewards/accuracies": 0.768750011920929,
+ "rewards/chosen": -1.4676017761230469,
+ "rewards/margins": 0.8654597997665405,
+ "rewards/rejected": -2.333061456680298,
+ "step": 360
+ },
+ {
+ "epoch": 0.38733315885893743,
+ "grad_norm": 5.519216288621626,
+ "learning_rate": 3.846353490562664e-06,
+ "logits/chosen": -1.2273913621902466,
+ "logits/rejected": -1.1667524576187134,
+ "logps/chosen": -445.7137756347656,
+ "logps/rejected": -534.3553466796875,
+ "loss": 0.5271,
+ "rewards/accuracies": 0.762499988079071,
+ "rewards/chosen": -2.012014865875244,
+ "rewards/margins": 1.0135688781738281,
+ "rewards/rejected": -3.0255837440490723,
+ "step": 370
+ },
+ {
+ "epoch": 0.39780162261188173,
+ "grad_norm": 6.046235636435534,
+ "learning_rate": 3.768430099352445e-06,
+ "logits/chosen": -1.2952733039855957,
+ "logits/rejected": -1.2378871440887451,
+ "logps/chosen": -393.07000732421875,
+ "logps/rejected": -465.0540466308594,
+ "loss": 0.5257,
+ "rewards/accuracies": 0.7124999761581421,
+ "rewards/chosen": -1.3947498798370361,
+ "rewards/margins": 0.7023818492889404,
+ "rewards/rejected": -2.0971317291259766,
+ "step": 380
+ },
+ {
+ "epoch": 0.408270086364826,
+ "grad_norm": 3.5923613903948133,
+ "learning_rate": 3.6888102953122307e-06,
+ "logits/chosen": -1.3613311052322388,
+ "logits/rejected": -1.2925641536712646,
+ "logps/chosen": -355.1478576660156,
+ "logps/rejected": -389.9283752441406,
+ "loss": 0.5452,
+ "rewards/accuracies": 0.6937500238418579,
+ "rewards/chosen": -0.9670610427856445,
+ "rewards/margins": 0.5416972637176514,
+ "rewards/rejected": -1.5087581872940063,
+ "step": 390
+ },
+ {
+ "epoch": 0.4187385501177702,
+ "grad_norm": 4.7487137488013484,
+ "learning_rate": 3.607600562872785e-06,
+ "logits/chosen": -1.3237230777740479,
+ "logits/rejected": -1.2487279176712036,
+ "logps/chosen": -384.8719177246094,
+ "logps/rejected": -442.74957275390625,
+ "loss": 0.5364,
+ "rewards/accuracies": 0.6625000238418579,
+ "rewards/chosen": -1.4063746929168701,
+ "rewards/margins": 0.5768125057220459,
+ "rewards/rejected": -1.9831870794296265,
+ "step": 400
+ },
+ {
+ "epoch": 0.4187385501177702,
+ "eval_logits/chosen": -1.2333532571792603,
+ "eval_logits/rejected": -1.1332385540008545,
+ "eval_logps/chosen": -416.69793701171875,
+ "eval_logps/rejected": -476.3457946777344,
+ "eval_loss": 0.5142984986305237,
+ "eval_rewards/accuracies": 0.7579365372657776,
+ "eval_rewards/chosen": -1.5179502964019775,
+ "eval_rewards/margins": 0.7989169359207153,
+ "eval_rewards/rejected": -2.3168673515319824,
+ "eval_runtime": 495.3694,
+ "eval_samples_per_second": 4.037,
+ "eval_steps_per_second": 0.127,
+ "step": 400
+ },
+ {
+ "epoch": 0.42920701387071447,
+ "grad_norm": 4.606627296879372,
+ "learning_rate": 3.5249095128531863e-06,
+ "logits/chosen": -1.2254225015640259,
+ "logits/rejected": -1.1056945323944092,
+ "logps/chosen": -440.85369873046875,
+ "logps/rejected": -507.37774658203125,
+ "loss": 0.5273,
+ "rewards/accuracies": 0.75,
+ "rewards/chosen": -1.7427661418914795,
+ "rewards/margins": 0.9440004229545593,
+ "rewards/rejected": -2.6867663860321045,
+ "step": 410
+ },
+ {
+ "epoch": 0.4396754776236587,
+ "grad_norm": 7.32723857385592,
+ "learning_rate": 3.4408477372034743e-06,
+ "logits/chosen": -1.3876991271972656,
+ "logits/rejected": -1.1844953298568726,
+ "logps/chosen": -440.2545471191406,
+ "logps/rejected": -458.7481384277344,
+ "loss": 0.5242,
+ "rewards/accuracies": 0.706250011920929,
+ "rewards/chosen": -1.5724732875823975,
+ "rewards/margins": 0.7581513524055481,
+ "rewards/rejected": -2.330624580383301,
+ "step": 420
+ },
+ {
+ "epoch": 0.45014394137660296,
+ "grad_norm": 4.83507866233375,
+ "learning_rate": 3.355527661097728e-06,
+ "logits/chosen": -1.3145965337753296,
725
+ "logits/rejected": -1.2404569387435913,
726
+ "logps/chosen": -391.62640380859375,
727
+ "logps/rejected": -483.40020751953125,
728
+ "loss": 0.5203,
729
+ "rewards/accuracies": 0.75,
730
+ "rewards/chosen": -1.6367257833480835,
731
+ "rewards/margins": 0.8808576464653015,
732
+ "rewards/rejected": -2.5175833702087402,
733
+ "step": 430
734
+ },
735
+ {
736
+ "epoch": 0.46061240512954726,
737
+ "grad_norm": 6.295429995521184,
738
+ "learning_rate": 3.269063392575352e-06,
739
+ "logits/chosen": -1.246955156326294,
740
+ "logits/rejected": -1.2044976949691772,
741
+ "logps/chosen": -412.8096618652344,
742
+ "logps/rejected": -462.73175048828125,
743
+ "loss": 0.5309,
744
+ "rewards/accuracies": 0.7124999761581421,
745
+ "rewards/chosen": -1.6489166021347046,
746
+ "rewards/margins": 0.686037540435791,
747
+ "rewards/rejected": -2.334954261779785,
748
+ "step": 440
749
+ },
750
+ {
751
+ "epoch": 0.4710808688824915,
752
+ "grad_norm": 8.155653541939053,
753
+ "learning_rate": 3.181570569931697e-06,
754
+ "logits/chosen": -1.3565101623535156,
755
+ "logits/rejected": -1.2913014888763428,
756
+ "logps/chosen": -404.30938720703125,
757
+ "logps/rejected": -522.0363159179688,
758
+ "loss": 0.5151,
759
+ "rewards/accuracies": 0.768750011920929,
760
+ "rewards/chosen": -1.5972925424575806,
761
+ "rewards/margins": 1.02085280418396,
762
+ "rewards/rejected": -2.61814546585083,
763
+ "step": 450
764
+ },
765
+ {
766
+ "epoch": 0.48154933263543576,
767
+ "grad_norm": 5.153431243466889,
768
+ "learning_rate": 3.09316620706208e-06,
769
+ "logits/chosen": -1.3433706760406494,
770
+ "logits/rejected": -1.2827776670455933,
771
+ "logps/chosen": -448.34283447265625,
772
+ "logps/rejected": -517.4175415039062,
773
+ "loss": 0.5305,
774
+ "rewards/accuracies": 0.71875,
775
+ "rewards/chosen": -1.879359245300293,
776
+ "rewards/margins": 0.8195317983627319,
777
+ "rewards/rejected": -2.6988909244537354,
778
+ "step": 460
779
+ },
780
+ {
781
+ "epoch": 0.49201779638838,
782
+ "grad_norm": 5.610862161934833,
783
+ "learning_rate": 3.0039685369660785e-06,
784
+ "logits/chosen": -1.3113436698913574,
785
+ "logits/rejected": -1.173460602760315,
786
+ "logps/chosen": -466.1058044433594,
787
+ "logps/rejected": -511.7659606933594,
788
+ "loss": 0.4856,
789
+ "rewards/accuracies": 0.7250000238418579,
790
+ "rewards/chosen": -2.0702614784240723,
791
+ "rewards/margins": 0.7728853821754456,
792
+ "rewards/rejected": -2.843147039413452,
793
+ "step": 470
794
+ },
795
+ {
796
+ "epoch": 0.5024862601413242,
797
+ "grad_norm": 6.26181575439223,
798
+ "learning_rate": 2.91409685362137e-06,
799
+ "logits/chosen": -1.2794568538665771,
800
+ "logits/rejected": -1.123780608177185,
801
+ "logps/chosen": -523.4374389648438,
802
+ "logps/rejected": -596.9427490234375,
803
+ "loss": 0.5046,
804
+ "rewards/accuracies": 0.762499988079071,
805
+ "rewards/chosen": -2.514650583267212,
806
+ "rewards/margins": 1.0325826406478882,
807
+ "rewards/rejected": -3.5472328662872314,
808
+ "step": 480
809
+ },
810
+ {
811
+ "epoch": 0.5129547238942685,
812
+ "grad_norm": 4.885030756434318,
813
+ "learning_rate": 2.8236713524386085e-06,
814
+ "logits/chosen": -1.2807716131210327,
815
+ "logits/rejected": -1.107301950454712,
816
+ "logps/chosen": -547.37890625,
817
+ "logps/rejected": -605.0728149414062,
818
+ "loss": 0.4989,
819
+ "rewards/accuracies": 0.7749999761581421,
820
+ "rewards/chosen": -2.634721040725708,
821
+ "rewards/margins": 0.92457515001297,
822
+ "rewards/rejected": -3.5592963695526123,
823
+ "step": 490
824
+ },
825
+ {
826
+ "epoch": 0.5234231876472127,
827
+ "grad_norm": 4.812586396006598,
828
+ "learning_rate": 2.7328129695107205e-06,
829
+ "logits/chosen": -1.2112505435943604,
830
+ "logits/rejected": -1.1074721813201904,
831
+ "logps/chosen": -523.0760498046875,
832
+ "logps/rejected": -595.048095703125,
833
+ "loss": 0.5046,
834
+ "rewards/accuracies": 0.7749999761581421,
835
+ "rewards/chosen": -2.574467420578003,
836
+ "rewards/margins": 0.9888777732849121,
837
+ "rewards/rejected": -3.563344955444336,
838
+ "step": 500
839
+ },
840
+ {
841
+ "epoch": 0.5234231876472127,
842
+ "eval_logits/chosen": -1.1373296976089478,
843
+ "eval_logits/rejected": -1.0301768779754639,
844
+ "eval_logps/chosen": -529.9542236328125,
845
+ "eval_logps/rejected": -605.2976684570312,
846
+ "eval_loss": 0.5062148571014404,
847
+ "eval_rewards/accuracies": 0.7579365372657776,
848
+ "eval_rewards/chosen": -2.650512456893921,
849
+ "eval_rewards/margins": 0.9558730721473694,
850
+ "eval_rewards/rejected": -3.6063857078552246,
851
+ "eval_runtime": 494.8767,
852
+ "eval_samples_per_second": 4.041,
853
+ "eval_steps_per_second": 0.127,
854
+ "step": 500
855
+ },
856
+ {
857
+ "epoch": 0.533891651400157,
858
+ "grad_norm": 4.595490575025387,
859
+ "learning_rate": 2.641643219871597e-06,
860
+ "logits/chosen": -1.2453548908233643,
861
+ "logits/rejected": -1.1635602712631226,
862
+ "logps/chosen": -500.3102111816406,
863
+ "logps/rejected": -559.625244140625,
864
+ "loss": 0.4867,
865
+ "rewards/accuracies": 0.737500011920929,
866
+ "rewards/chosen": -2.409621000289917,
867
+ "rewards/margins": 0.8319376707077026,
868
+ "rewards/rejected": -3.24155855178833,
869
+ "step": 510
870
+ },
871
+ {
872
+ "epoch": 0.5443601151531012,
873
+ "grad_norm": 7.198268767989711,
874
+ "learning_rate": 2.5502840349805074e-06,
875
+ "logits/chosen": -1.2010407447814941,
876
+ "logits/rejected": -1.0801866054534912,
877
+ "logps/chosen": -578.692138671875,
878
+ "logps/rejected": -627.0946044921875,
879
+ "loss": 0.5249,
880
+ "rewards/accuracies": 0.7562500238418579,
881
+ "rewards/chosen": -3.033255100250244,
882
+ "rewards/margins": 0.9397614598274231,
883
+ "rewards/rejected": -3.9730167388916016,
884
+ "step": 520
885
+ },
886
+ {
887
+ "epoch": 0.5548285789060455,
888
+ "grad_norm": 6.519693692708999,
889
+ "learning_rate": 2.4588575996495797e-06,
890
+ "logits/chosen": -1.169585943222046,
891
+ "logits/rejected": -1.0577750205993652,
892
+ "logps/chosen": -533.5675048828125,
893
+ "logps/rejected": -615.3276977539062,
894
+ "loss": 0.4947,
895
+ "rewards/accuracies": 0.7749999761581421,
896
+ "rewards/chosen": -2.606905460357666,
897
+ "rewards/margins": 1.0898813009262085,
898
+ "rewards/rejected": -3.696786880493164,
899
+ "step": 530
900
+ },
901
+ {
902
+ "epoch": 0.5652970426589898,
903
+ "grad_norm": 4.681072379969835,
904
+ "learning_rate": 2.367486188632446e-06,
905
+ "logits/chosen": -1.2149909734725952,
906
+ "logits/rejected": -1.0861246585845947,
907
+ "logps/chosen": -495.14093017578125,
908
+ "logps/rejected": -530.830810546875,
909
+ "loss": 0.5288,
910
+ "rewards/accuracies": 0.706250011920929,
911
+ "rewards/chosen": -2.406106472015381,
912
+ "rewards/margins": 0.7864419221878052,
913
+ "rewards/rejected": -3.1925485134124756,
914
+ "step": 540
915
+ },
916
+ {
917
+ "epoch": 0.575765506411934,
918
+ "grad_norm": 4.605359920599186,
919
+ "learning_rate": 2.276292003092593e-06,
920
+ "logits/chosen": -1.25673508644104,
921
+ "logits/rejected": -1.0914690494537354,
922
+ "logps/chosen": -509.0771484375,
923
+ "logps/rejected": -543.2958374023438,
924
+ "loss": 0.512,
925
+ "rewards/accuracies": 0.7124999761581421,
926
+ "rewards/chosen": -2.4712395668029785,
927
+ "rewards/margins": 0.7063032388687134,
928
+ "rewards/rejected": -3.1775424480438232,
929
+ "step": 550
930
+ },
931
+ {
932
+ "epoch": 0.5862339701648783,
933
+ "grad_norm": 6.363842169113803,
934
+ "learning_rate": 2.1853970071701415e-06,
935
+ "logits/chosen": -1.153988242149353,
936
+ "logits/rejected": -1.0725597143173218,
937
+ "logps/chosen": -513.7321166992188,
938
+ "logps/rejected": -567.1982421875,
939
+ "loss": 0.5248,
940
+ "rewards/accuracies": 0.768750011920929,
941
+ "rewards/chosen": -2.496264934539795,
942
+ "rewards/margins": 0.8739057779312134,
943
+ "rewards/rejected": -3.370171070098877,
944
+ "step": 560
945
+ },
946
+ {
947
+ "epoch": 0.5967024339178225,
948
+ "grad_norm": 6.958473109368914,
949
+ "learning_rate": 2.0949227648656194e-06,
950
+ "logits/chosen": -1.243544578552246,
951
+ "logits/rejected": -1.138840913772583,
952
+ "logps/chosen": -514.3547973632812,
953
+ "logps/rejected": -589.1139526367188,
954
+ "loss": 0.5081,
955
+ "rewards/accuracies": 0.737500011920929,
956
+ "rewards/chosen": -2.536358594894409,
957
+ "rewards/margins": 0.8745514154434204,
958
+ "rewards/rejected": -3.410910129547119,
959
+ "step": 570
960
+ },
961
+ {
962
+ "epoch": 0.6071708976707668,
963
+ "grad_norm": 5.30154815627735,
964
+ "learning_rate": 2.00499027745888e-06,
965
+ "logits/chosen": -1.1577163934707642,
966
+ "logits/rejected": -1.0448474884033203,
967
+ "logps/chosen": -500.58599853515625,
968
+ "logps/rejected": -561.0791015625,
969
+ "loss": 0.5112,
970
+ "rewards/accuracies": 0.706250011920929,
971
+ "rewards/chosen": -2.6586689949035645,
972
+ "rewards/margins": 0.8494758605957031,
973
+ "rewards/rejected": -3.5081450939178467,
974
+ "step": 580
975
+ },
976
+ {
977
+ "epoch": 0.6176393614237111,
978
+ "grad_norm": 5.957700119060022,
979
+ "learning_rate": 1.915719821680624e-06,
980
+ "logits/chosen": -1.342710256576538,
981
+ "logits/rejected": -1.2864643335342407,
982
+ "logps/chosen": -480.73272705078125,
983
+ "logps/rejected": -578.4927978515625,
984
+ "loss": 0.4987,
985
+ "rewards/accuracies": 0.7562500238418579,
986
+ "rewards/chosen": -2.3478407859802246,
987
+ "rewards/margins": 0.940704345703125,
988
+ "rewards/rejected": -3.2885451316833496,
989
+ "step": 590
990
+ },
991
+ {
992
+ "epoch": 0.6281078251766553,
993
+ "grad_norm": 5.805746259122187,
994
+ "learning_rate": 1.8272307888529276e-06,
995
+ "logits/chosen": -1.1955945491790771,
996
+ "logits/rejected": -1.0686155557632446,
997
+ "logps/chosen": -551.1502685546875,
998
+ "logps/rejected": -601.6345825195312,
999
+ "loss": 0.4736,
1000
+ "rewards/accuracies": 0.7562500238418579,
1001
+ "rewards/chosen": -2.542466640472412,
1002
+ "rewards/margins": 0.8464245796203613,
1003
+ "rewards/rejected": -3.3888916969299316,
1004
+ "step": 600
1005
+ },
1006
+ {
1007
+ "epoch": 0.6281078251766553,
1008
+ "eval_logits/chosen": -1.1252615451812744,
1009
+ "eval_logits/rejected": -1.0135337114334106,
1010
+ "eval_logps/chosen": -537.3406372070312,
1011
+ "eval_logps/rejected": -621.1549072265625,
1012
+ "eval_loss": 0.5059433579444885,
1013
+ "eval_rewards/accuracies": 0.7638888955116272,
1014
+ "eval_rewards/chosen": -2.724376678466797,
1015
+ "eval_rewards/margins": 1.0405809879302979,
1016
+ "eval_rewards/rejected": -3.7649576663970947,
1017
+ "eval_runtime": 496.2742,
1018
+ "eval_samples_per_second": 4.03,
1019
+ "eval_steps_per_second": 0.127,
1020
+ "step": 600
1021
+ },
1022
+ {
1023
+ "epoch": 0.6385762889295996,
1024
+ "grad_norm": 5.793634680662145,
1025
+ "learning_rate": 1.739641525213929e-06,
1026
+ "logits/chosen": -1.216796636581421,
1027
+ "logits/rejected": -1.1526730060577393,
1028
+ "logps/chosen": -538.6376953125,
1029
+ "logps/rejected": -615.1124877929688,
1030
+ "loss": 0.5134,
1031
+ "rewards/accuracies": 0.768750011920929,
1032
+ "rewards/chosen": -2.6652350425720215,
1033
+ "rewards/margins": 1.1263973712921143,
1034
+ "rewards/rejected": -3.7916324138641357,
1035
+ "step": 610
1036
+ },
1037
+ {
1038
+ "epoch": 0.6490447526825438,
1039
+ "grad_norm": 5.8551862397886,
1040
+ "learning_rate": 1.6530691736402317e-06,
1041
+ "logits/chosen": -1.3594751358032227,
1042
+ "logits/rejected": -1.1856592893600464,
1043
+ "logps/chosen": -541.5281372070312,
1044
+ "logps/rejected": -591.7969970703125,
1045
+ "loss": 0.4667,
1046
+ "rewards/accuracies": 0.800000011920929,
1047
+ "rewards/chosen": -2.4107346534729004,
1048
+ "rewards/margins": 1.0584288835525513,
1049
+ "rewards/rejected": -3.4691638946533203,
1050
+ "step": 620
1051
+ },
1052
+ {
1053
+ "epoch": 0.6595132164354881,
1054
+ "grad_norm": 7.40407247870995,
1055
+ "learning_rate": 1.5676295169786864e-06,
1056
+ "logits/chosen": -1.2886357307434082,
1057
+ "logits/rejected": -1.160259485244751,
1058
+ "logps/chosen": -528.3416137695312,
1059
+ "logps/rejected": -582.4341430664062,
1060
+ "loss": 0.4807,
1061
+ "rewards/accuracies": 0.6875,
1062
+ "rewards/chosen": -2.508863925933838,
1063
+ "rewards/margins": 0.9682027697563171,
1064
+ "rewards/rejected": -3.477067232131958,
1065
+ "step": 630
1066
+ },
1067
+ {
1068
+ "epoch": 0.6699816801884323,
1069
+ "grad_norm": 6.664520102647163,
1070
+ "learning_rate": 1.4834368231970922e-06,
1071
+ "logits/chosen": -1.1566966772079468,
1072
+ "logits/rejected": -1.0316822528839111,
1073
+ "logps/chosen": -519.1185913085938,
1074
+ "logps/rejected": -613.6243896484375,
1075
+ "loss": 0.4678,
1076
+ "rewards/accuracies": 0.8062499761581421,
1077
+ "rewards/chosen": -2.8396458625793457,
1078
+ "rewards/margins": 1.1082502603530884,
1079
+ "rewards/rejected": -3.9478962421417236,
1080
+ "step": 640
1081
+ },
1082
+ {
1083
+ "epoch": 0.6804501439413766,
1084
+ "grad_norm": 6.202576057234435,
1085
+ "learning_rate": 1.4006036925609245e-06,
1086
+ "logits/chosen": -1.2698938846588135,
1087
+ "logits/rejected": -1.2037663459777832,
1088
+ "logps/chosen": -512.4635009765625,
1089
+ "logps/rejected": -599.8485107421875,
1090
+ "loss": 0.5124,
1091
+ "rewards/accuracies": 0.737500011920929,
1092
+ "rewards/chosen": -2.5361390113830566,
1093
+ "rewards/margins": 0.8807609677314758,
1094
+ "rewards/rejected": -3.416900157928467,
1095
+ "step": 650
1096
+ },
1097
+ {
1098
+ "epoch": 0.6909186076943209,
1099
+ "grad_norm": 5.411195685464347,
1100
+ "learning_rate": 1.3192409070404582e-06,
1101
+ "logits/chosen": -1.33925461769104,
1102
+ "logits/rejected": -1.2339675426483154,
1103
+ "logps/chosen": -507.4242248535156,
1104
+ "logps/rejected": -586.1668701171875,
1105
+ "loss": 0.4971,
1106
+ "rewards/accuracies": 0.7562500238418579,
1107
+ "rewards/chosen": -2.554523468017578,
1108
+ "rewards/margins": 0.9770146608352661,
1109
+ "rewards/rejected": -3.5315380096435547,
1110
+ "step": 660
1111
+ },
1112
+ {
1113
+ "epoch": 0.7013870714472651,
1114
+ "grad_norm": 7.306078973174853,
1115
+ "learning_rate": 1.2394572821496953e-06,
1116
+ "logits/chosen": -1.1949710845947266,
1117
+ "logits/rejected": -1.0358936786651611,
1118
+ "logps/chosen": -556.1329956054688,
1119
+ "logps/rejected": -644.8235473632812,
1120
+ "loss": 0.4783,
1121
+ "rewards/accuracies": 0.7562500238418579,
1122
+ "rewards/chosen": -3.012876510620117,
1123
+ "rewards/margins": 1.1485122442245483,
1124
+ "rewards/rejected": -4.161388397216797,
1125
+ "step": 670
1126
+ },
1127
+ {
1128
+ "epoch": 0.7118555352002094,
1129
+ "grad_norm": 6.569348565798864,
1130
+ "learning_rate": 1.1613595214152713e-06,
1131
+ "logits/chosen": -1.1749083995819092,
1132
+ "logits/rejected": -1.0992681980133057,
1133
+ "logps/chosen": -572.3611450195312,
1134
+ "logps/rejected": -654.3881225585938,
1135
+ "loss": 0.5193,
1136
+ "rewards/accuracies": 0.731249988079071,
1137
+ "rewards/chosen": -3.059669017791748,
1138
+ "rewards/margins": 0.9747347831726074,
1139
+ "rewards/rejected": -4.0344038009643555,
1140
+ "step": 680
1141
+ },
1142
+ {
1143
+ "epoch": 0.7223239989531536,
1144
+ "grad_norm": 4.877247655613693,
1145
+ "learning_rate": 1.0850520736699362e-06,
1146
+ "logits/chosen": -1.3139179944992065,
1147
+ "logits/rejected": -1.1570199728012085,
1148
+ "logps/chosen": -550.0722045898438,
1149
+ "logps/rejected": -594.5676879882812,
1150
+ "loss": 0.4496,
1151
+ "rewards/accuracies": 0.7749999761581421,
1152
+ "rewards/chosen": -2.778721570968628,
1153
+ "rewards/margins": 0.9407702684402466,
1154
+ "rewards/rejected": -3.719491958618164,
1155
+ "step": 690
1156
+ },
1157
+ {
1158
+ "epoch": 0.7327924627060979,
1159
+ "grad_norm": 5.850084839342618,
1160
+ "learning_rate": 1.0106369933615043e-06,
1161
+ "logits/chosen": -1.1720659732818604,
1162
+ "logits/rejected": -1.1006288528442383,
1163
+ "logps/chosen": -543.8446044921875,
1164
+ "logps/rejected": -637.2833862304688,
1165
+ "loss": 0.4619,
1166
+ "rewards/accuracies": 0.7562500238418579,
1167
+ "rewards/chosen": -2.941668748855591,
1168
+ "rewards/margins": 1.034071922302246,
1169
+ "rewards/rejected": -3.975741147994995,
1170
+ "step": 700
1171
+ },
1172
+ {
1173
+ "epoch": 0.7327924627060979,
1174
+ "eval_logits/chosen": -1.119376301765442,
1175
+ "eval_logits/rejected": -1.006381869316101,
1176
+ "eval_logps/chosen": -557.3041381835938,
1177
+ "eval_logps/rejected": -644.5651245117188,
1178
+ "eval_loss": 0.4993818998336792,
1179
+ "eval_rewards/accuracies": 0.761904776096344,
1180
+ "eval_rewards/chosen": -2.9240119457244873,
1181
+ "eval_rewards/margins": 1.0750477313995361,
1182
+ "eval_rewards/rejected": -3.9990594387054443,
1183
+ "eval_runtime": 494.8985,
1184
+ "eval_samples_per_second": 4.041,
1185
+ "eval_steps_per_second": 0.127,
1186
+ "step": 700
1187
+ },
1188
+ {
1189
+ "epoch": 0.7432609264590422,
1190
+ "grad_norm": 5.940990677339228,
1191
+ "learning_rate": 9.382138040640714e-07,
1192
+ "logits/chosen": -1.175462245941162,
1193
+ "logits/rejected": -1.0119060277938843,
1194
+ "logps/chosen": -566.7076416015625,
1195
+ "logps/rejected": -634.9506225585938,
1196
+ "loss": 0.5486,
1197
+ "rewards/accuracies": 0.737500011920929,
1198
+ "rewards/chosen": -3.01143217086792,
1199
+ "rewards/margins": 0.9881938695907593,
1200
+ "rewards/rejected": -3.9996261596679688,
1201
+ "step": 710
1202
+ },
1203
+ {
1204
+ "epoch": 0.7537293902119864,
1205
+ "grad_norm": 5.5111666858195125,
1206
+ "learning_rate": 8.678793653740633e-07,
1207
+ "logits/chosen": -1.3316354751586914,
1208
+ "logits/rejected": -1.171438455581665,
1209
+ "logps/chosen": -583.8369140625,
1210
+ "logps/rejected": -640.4646606445312,
1211
+ "loss": 0.4747,
1212
+ "rewards/accuracies": 0.7875000238418579,
1213
+ "rewards/chosen": -2.6381282806396484,
1214
+ "rewards/margins": 1.0775749683380127,
1215
+ "rewards/rejected": -3.715702772140503,
1216
+ "step": 720
1217
+ },
1218
+ {
1219
+ "epoch": 0.7641978539649307,
1220
+ "grad_norm": 5.993566017988564,
1221
+ "learning_rate": 7.997277433690984e-07,
1222
+ "logits/chosen": -1.208320140838623,
1223
+ "logits/rejected": -1.071702480316162,
1224
+ "logps/chosen": -516.3609619140625,
1225
+ "logps/rejected": -598.6437377929688,
1226
+ "loss": 0.5075,
1227
+ "rewards/accuracies": 0.75,
1228
+ "rewards/chosen": -2.7794041633605957,
1229
+ "rewards/margins": 0.9944884181022644,
1230
+ "rewards/rejected": -3.773892879486084,
1231
+ "step": 730
1232
+ },
1233
+ {
1234
+ "epoch": 0.7746663177178749,
1235
+ "grad_norm": 5.60538349675765,
1236
+ "learning_rate": 7.338500848029603e-07,
1237
+ "logits/chosen": -1.2300490140914917,
1238
+ "logits/rejected": -1.0854498147964478,
1239
+ "logps/chosen": -557.65234375,
1240
+ "logps/rejected": -595.0419921875,
1241
+ "loss": 0.5081,
1242
+ "rewards/accuracies": 0.71875,
1243
+ "rewards/chosen": -2.7690441608428955,
1244
+ "rewards/margins": 0.9237399101257324,
1245
+ "rewards/rejected": -3.692783832550049,
1246
+ "step": 740
1247
+ },
1248
+ {
1249
+ "epoch": 0.7851347814708192,
1250
+ "grad_norm": 5.061641492930568,
1251
+ "learning_rate": 6.70334495204884e-07,
1252
+ "logits/chosen": -1.0873275995254517,
1253
+ "logits/rejected": -1.0135056972503662,
1254
+ "logps/chosen": -510.3736877441406,
1255
+ "logps/rejected": -621.0284423828125,
1256
+ "loss": 0.4814,
1257
+ "rewards/accuracies": 0.699999988079071,
1258
+ "rewards/chosen": -2.822464942932129,
1259
+ "rewards/margins": 1.0254597663879395,
1260
+ "rewards/rejected": -3.8479247093200684,
1261
+ "step": 750
1262
+ },
1263
+ {
1264
+ "epoch": 0.7956032452237635,
1265
+ "grad_norm": 4.889175666405806,
1266
+ "learning_rate": 6.092659210462232e-07,
1267
+ "logits/chosen": -1.2110772132873535,
1268
+ "logits/rejected": -1.0558052062988281,
1269
+ "logps/chosen": -540.3411254882812,
1270
+ "logps/rejected": -596.0307006835938,
1271
+ "loss": 0.5076,
1272
+ "rewards/accuracies": 0.75,
1273
+ "rewards/chosen": -2.8302786350250244,
1274
+ "rewards/margins": 0.9066953659057617,
1275
+ "rewards/rejected": -3.736973524093628,
1276
+ "step": 760
1277
+ },
1278
+ {
1279
+ "epoch": 0.8060717089767077,
1280
+ "grad_norm": 4.4612176619325075,
1281
+ "learning_rate": 5.507260361320738e-07,
1282
+ "logits/chosen": -1.2954254150390625,
1283
+ "logits/rejected": -1.2630584239959717,
1284
+ "logps/chosen": -560.3121337890625,
1285
+ "logps/rejected": -663.4389038085938,
1286
+ "loss": 0.4681,
1287
+ "rewards/accuracies": 0.7562500238418579,
1288
+ "rewards/chosen": -2.743813991546631,
1289
+ "rewards/margins": 1.0250587463378906,
1290
+ "rewards/rejected": -3.7688724994659424,
1291
+ "step": 770
1292
+ },
1293
+ {
1294
+ "epoch": 0.816540172729652,
1295
+ "grad_norm": 5.524604671684533,
1296
+ "learning_rate": 4.947931323697983e-07,
1297
+ "logits/chosen": -1.3126929998397827,
1298
+ "logits/rejected": -1.088275671005249,
1299
+ "logps/chosen": -570.6515502929688,
1300
+ "logps/rejected": -593.5574951171875,
1301
+ "loss": 0.4959,
1302
+ "rewards/accuracies": 0.7124999761581421,
1303
+ "rewards/chosen": -2.6409595012664795,
1304
+ "rewards/margins": 0.848545253276825,
1305
+ "rewards/rejected": -3.48950457572937,
1306
+ "step": 780
1307
+ },
1308
+ {
1309
+ "epoch": 0.8270086364825961,
1310
+ "grad_norm": 5.9693834124907585,
1311
+ "learning_rate": 4.4154201506053985e-07,
1312
+ "logits/chosen": -1.1836011409759521,
1313
+ "logits/rejected": -1.0795371532440186,
1314
+ "logps/chosen": -529.63134765625,
1315
+ "logps/rejected": -619.8529663085938,
1316
+ "loss": 0.523,
1317
+ "rewards/accuracies": 0.7875000238418579,
1318
+ "rewards/chosen": -2.8782217502593994,
1319
+ "rewards/margins": 0.950698971748352,
1320
+ "rewards/rejected": -3.828920841217041,
1321
+ "step": 790
1322
+ },
1323
+ {
1324
+ "epoch": 0.8374771002355405,
1325
+ "grad_norm": 6.22817743126936,
1326
+ "learning_rate": 3.910439028537638e-07,
1327
+ "logits/chosen": -1.1832584142684937,
1328
+ "logits/rejected": -1.1257516145706177,
1329
+ "logps/chosen": -505.93316650390625,
1330
+ "logps/rejected": -627.0811157226562,
1331
+ "loss": 0.4926,
1332
+ "rewards/accuracies": 0.78125,
1333
+ "rewards/chosen": -2.7608401775360107,
1334
+ "rewards/margins": 1.1763994693756104,
1335
+ "rewards/rejected": -3.937239408493042,
1336
+ "step": 800
1337
+ },
1338
+ {
1339
+ "epoch": 0.8374771002355405,
1340
+ "eval_logits/chosen": -1.1640794277191162,
1341
+ "eval_logits/rejected": -1.0516477823257446,
1342
+ "eval_logps/chosen": -537.3770141601562,
1343
+ "eval_logps/rejected": -619.2051391601562,
1344
+ "eval_loss": 0.49621155858039856,
1345
+ "eval_rewards/accuracies": 0.7658730149269104,
1346
+ "eval_rewards/chosen": -2.724740743637085,
1347
+ "eval_rewards/margins": 1.0207195281982422,
1348
+ "eval_rewards/rejected": -3.745460033416748,
1349
+ "eval_runtime": 494.3028,
1350
+ "eval_samples_per_second": 4.046,
1351
+ "eval_steps_per_second": 0.127,
1352
+ "step": 800
1353
+ },
1354
+ {
1355
+ "epoch": 0.8479455639884846,
1356
+ "grad_norm": 4.972938912679825,
1357
+ "learning_rate": 3.4336633249862084e-07,
1358
+ "logits/chosen": -1.2425668239593506,
1359
+ "logits/rejected": -1.0636166334152222,
1360
+ "logps/chosen": -561.7000732421875,
1361
+ "logps/rejected": -619.0046997070312,
1362
+ "loss": 0.508,
1363
+ "rewards/accuracies": 0.7437499761581421,
1364
+ "rewards/chosen": -2.7175962924957275,
1365
+ "rewards/margins": 0.9634302854537964,
1366
+ "rewards/rejected": -3.6810269355773926,
1367
+ "step": 810
1368
+ },
1369
+ {
1370
+ "epoch": 0.8584140277414289,
1371
+ "grad_norm": 5.09720991480253,
1372
+ "learning_rate": 2.98573068519539e-07,
1373
+ "logits/chosen": -1.2827835083007812,
1374
+ "logits/rejected": -1.2060225009918213,
1375
+ "logps/chosen": -539.7214965820312,
1376
+ "logps/rejected": -615.7735595703125,
1377
+ "loss": 0.4994,
1378
+ "rewards/accuracies": 0.7749999761581421,
1379
+ "rewards/chosen": -2.778557538986206,
1380
+ "rewards/margins": 0.8738244771957397,
1381
+ "rewards/rejected": -3.6523823738098145,
1382
+ "step": 820
1383
+ },
1384
+ {
1385
+ "epoch": 0.8688824914943732,
1386
+ "grad_norm": 5.782887668669889,
1387
+ "learning_rate": 2.5672401793681854e-07,
1388
+ "logits/chosen": -1.2188454866409302,
1389
+ "logits/rejected": -1.1675770282745361,
1390
+ "logps/chosen": -514.8630981445312,
1391
+ "logps/rejected": -599.4465942382812,
1392
+ "loss": 0.5131,
1393
+ "rewards/accuracies": 0.699999988079071,
1394
+ "rewards/chosen": -2.682121753692627,
1395
+ "rewards/margins": 0.837975025177002,
1396
+ "rewards/rejected": -3.520097017288208,
1397
+ "step": 830
1398
+ },
1399
+ {
1400
+ "epoch": 0.8793509552473174,
1401
+ "grad_norm": 6.123094674224966,
1402
+ "learning_rate": 2.178751501463036e-07,
1403
+ "logits/chosen": -1.2655466794967651,
1404
+ "logits/rejected": -1.162320613861084,
1405
+ "logps/chosen": -548.4150390625,
1406
+ "logps/rejected": -654.3787231445312,
1407
+ "loss": 0.4696,
1408
+ "rewards/accuracies": 0.7749999761581421,
1409
+ "rewards/chosen": -2.7333908081054688,
1410
+ "rewards/margins": 1.0990099906921387,
1411
+ "rewards/rejected": -3.8324007987976074,
1412
+ "step": 840
1413
+ },
1414
+ {
1415
+ "epoch": 0.8898194190002617,
1416
+ "grad_norm": 5.66840809357227,
1417
+ "learning_rate": 1.820784220652766e-07,
1418
+ "logits/chosen": -1.1958177089691162,
1419
+ "logits/rejected": -1.0371004343032837,
1420
+ "logps/chosen": -533.7821044921875,
1421
+ "logps/rejected": -595.7824096679688,
1422
+ "loss": 0.4643,
1423
+ "rewards/accuracies": 0.762499988079071,
1424
+ "rewards/chosen": -2.6792378425598145,
1425
+ "rewards/margins": 0.9526662826538086,
1426
+ "rewards/rejected": -3.631904125213623,
1427
+ "step": 850
1428
+ },
1429
+ {
1430
+ "epoch": 0.9002878827532059,
1431
+ "grad_norm": 4.99425900843833,
1432
+ "learning_rate": 1.4938170864468636e-07,
1433
+ "logits/chosen": -1.257932424545288,
1434
+ "logits/rejected": -1.1045788526535034,
1435
+ "logps/chosen": -551.285888671875,
1436
+ "logps/rejected": -612.8128662109375,
1437
+ "loss": 0.4681,
1438
+ "rewards/accuracies": 0.75,
1439
+ "rewards/chosen": -2.804962396621704,
1440
+ "rewards/margins": 0.8677828907966614,
1441
+ "rewards/rejected": -3.6727447509765625,
1442
+ "step": 860
1443
+ },
1444
+ {
1445
+ "epoch": 0.9107563465061502,
1446
+ "grad_norm": 4.991076961519154,
1447
+ "learning_rate": 1.1982873884064466e-07,
1448
+ "logits/chosen": -1.294597864151001,
1449
+ "logits/rejected": -1.1019765138626099,
1450
+ "logps/chosen": -572.5621337890625,
1451
+ "logps/rejected": -625.5676879882812,
1452
+ "loss": 0.5032,
1453
+ "rewards/accuracies": 0.7124999761581421,
1454
+ "rewards/chosen": -2.914654493331909,
1455
+ "rewards/margins": 1.0726633071899414,
1456
+ "rewards/rejected": -3.9873173236846924,
1457
+ "step": 870
1458
+ },
1459
+ {
1460
+ "epoch": 0.9212248102590945,
1461
+ "grad_norm": 5.0189704749647746,
1462
+ "learning_rate": 9.345903713082305e-08,
1463
+ "logits/chosen": -1.2915667295455933,
1464
+ "logits/rejected": -1.1525356769561768,
1465
+ "logps/chosen": -541.0573120117188,
1466
+ "logps/rejected": -600.8675537109375,
1467
+ "loss": 0.4752,
1468
+ "rewards/accuracies": 0.737500011920929,
1469
+ "rewards/chosen": -2.7068932056427,
1470
+ "rewards/margins": 0.9946308135986328,
1471
+ "rewards/rejected": -3.701524019241333,
1472
+ "step": 880
1473
+ },
1474
+ {
1475
+ "epoch": 0.9316932740120387,
1476
+ "grad_norm": 5.260823538009292,
1477
+ "learning_rate": 7.030787065396866e-08,
1478
+ "logits/chosen": -1.188293218612671,
1479
+ "logits/rejected": -1.0763866901397705,
1480
+ "logps/chosen": -527.5496215820312,
1481
+ "logps/rejected": -644.0205078125,
1482
+ "loss": 0.4845,
1483
+ "rewards/accuracies": 0.8062499761581421,
1484
+ "rewards/chosen": -2.790327548980713,
1485
+ "rewards/margins": 1.2539947032928467,
1486
+ "rewards/rejected": -4.0443220138549805,
1487
+ "step": 890
1488
+ },
1489
+ {
1490
+ "epoch": 0.942161737764983,
1491
+ "grad_norm": 6.861506991670971,
1492
+ "learning_rate": 5.0406202043228604e-08,
1493
+ "logits/chosen": -1.0917075872421265,
1494
+ "logits/rejected": -1.0418908596038818,
1495
+ "logps/chosen": -544.6124267578125,
1496
+ "logps/rejected": -697.5197143554688,
1497
+ "loss": 0.4856,
1498
+ "rewards/accuracies": 0.7875000238418579,
1499
+ "rewards/chosen": -2.783902406692505,
1500
+ "rewards/margins": 1.2628685235977173,
1501
+ "rewards/rejected": -4.0467705726623535,
1502
+ "step": 900
1503
+ },
1504
+ {
1505
+ "epoch": 0.942161737764983,
1506
+ "eval_logits/chosen": -1.1508536338806152,
1507
+ "eval_logits/rejected": -1.03853178024292,
1508
+ "eval_logps/chosen": -545.9743041992188,
1509
+ "eval_logps/rejected": -631.738525390625,
1510
+ "eval_loss": 0.49515971541404724,
1511
+ "eval_rewards/accuracies": 0.77182537317276,
1512
+ "eval_rewards/chosen": -2.810713768005371,
1513
+ "eval_rewards/margins": 1.0600804090499878,
1514
+ "eval_rewards/rejected": -3.8707938194274902,
1515
+ "eval_runtime": 496.7331,
1516
+ "eval_samples_per_second": 4.026,
1517
+ "eval_steps_per_second": 0.127,
1518
+ "step": 900
1519
+ },
1520
+ {
1521
+ "epoch": 0.9526302015179272,
1522
+ "grad_norm": 7.396195290341192,
1523
+ "learning_rate": 3.378064801637687e-08,
1524
+ "logits/chosen": -1.2156776189804077,
1525
+ "logits/rejected": -1.0312784910202026,
1526
+ "logps/chosen": -519.9219360351562,
1527
+ "logps/rejected": -579.5977172851562,
1528
+ "loss": 0.4987,
1529
+ "rewards/accuracies": 0.7749999761581421,
1530
+ "rewards/chosen": -2.731100559234619,
1531
+ "rewards/margins": 1.0545246601104736,
1532
+ "rewards/rejected": -3.785625457763672,
1533
+ "step": 910
1534
+ },
1535
+ {
1536
+ "epoch": 0.9630986652708715,
1537
+ "grad_norm": 5.785542711133717,
1538
+ "learning_rate": 2.0453443778310766e-08,
1539
+ "logits/chosen": -1.2227718830108643,
1540
+ "logits/rejected": -1.0613749027252197,
1541
+ "logps/chosen": -563.939453125,
1542
+ "logps/rejected": -609.8938598632812,
1543
+ "loss": 0.4994,
1544
+ "rewards/accuracies": 0.7562500238418579,
1545
+ "rewards/chosen": -2.846004009246826,
1546
+ "rewards/margins": 0.9199835062026978,
1547
+ "rewards/rejected": -3.7659878730773926,
1548
+ "step": 920
1549
+ },
1550
+ {
1551
+ "epoch": 0.9735671290238157,
1552
+ "grad_norm": 5.845655612266485,
1553
+ "learning_rate": 1.0442413283435759e-08,
1554
+ "logits/chosen": -1.1646806001663208,
1555
+ "logits/rejected": -0.9581974744796753,
1556
+ "logps/chosen": -586.3766479492188,
1557
+ "logps/rejected": -622.99072265625,
1558
+ "loss": 0.5017,
1559
+ "rewards/accuracies": 0.675000011920929,
1560
+ "rewards/chosen": -2.876279354095459,
1561
+ "rewards/margins": 0.9776785969734192,
1562
+ "rewards/rejected": -3.8539581298828125,
1563
+ "step": 930
1564
+ },
1565
+ {
1566
+ "epoch": 0.98403559277676,
1567
+ "grad_norm": 6.202963993103904,
1568
+ "learning_rate": 3.760945397705828e-09,
1569
+ "logits/chosen": -1.1803127527236938,
1570
+ "logits/rejected": -1.1215112209320068,
1571
+ "logps/chosen": -542.6771850585938,
1572
+ "logps/rejected": -630.9061279296875,
1573
+ "loss": 0.4831,
1574
+ "rewards/accuracies": 0.6812499761581421,
1575
+ "rewards/chosen": -2.7995152473449707,
1576
+ "rewards/margins": 0.8906386494636536,
1577
+ "rewards/rejected": -3.6901535987854004,
1578
+ "step": 940
1579
+ },
1580
+ {
1581
+ "epoch": 0.9945040565297043,
1582
+ "grad_norm": 6.026230259479486,
1583
+ "learning_rate": 4.1797599220405605e-10,
1584
+ "logits/chosen": -1.2184993028640747,
1585
+ "logits/rejected": -1.041572093963623,
1586
+ "logps/chosen": -546.837890625,
1587
+ "logps/rejected": -604.7627563476562,
1588
+ "loss": 0.489,
1589
+ "rewards/accuracies": 0.706250011920929,
1590
+ "rewards/chosen": -2.8025598526000977,
1591
+ "rewards/margins": 0.9029384851455688,
1592
+ "rewards/rejected": -3.705498218536377,
1593
+ "step": 950
1594
+ },
1595
+ {
1596
+ "epoch": 0.9997382884061764,
1597
+ "step": 955,
1598
+ "total_flos": 0.0,
1599
+ "train_loss": 0.2358125359600127,
1600
+ "train_runtime": 19082.7985,
1601
+ "train_samples_per_second": 3.204,
1602
+ "train_steps_per_second": 0.05
1603
+ }
1604
+ ],
1605
+ "logging_steps": 10,
1606
+ "max_steps": 955,
1607
+ "num_input_tokens_seen": 0,
1608
+ "num_train_epochs": 1,
1609
+ "save_steps": 100,
1610
+ "stateful_callbacks": {
1611
+ "TrainerControl": {
1612
+ "args": {
1613
+ "should_epoch_stop": false,
1614
+ "should_evaluate": false,
1615
+ "should_log": false,
1616
+ "should_save": true,
1617
+ "should_training_stop": true
1618
+ },
1619
+ "attributes": {}
1620
+ }
1621
+ },
1622
+ "total_flos": 0.0,
1623
+ "train_batch_size": 4,
1624
+ "trial_name": null,
1625
+ "trial_params": null
1626
+ }
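The file above follows the `trainer_state.json` layout written by the Hugging Face `Trainer`: a `log_history` array of per-step dictionaries (training entries carry `loss`/`grad_norm`, evaluation entries carry `eval_`-prefixed keys), plus top-level run settings such as `max_steps`. A minimal sketch of pulling the final eval metrics out of such a state — the inlined snippet reuses values from the log above, but is an illustration, not part of this repository:

```python
import json

# Illustrative fragment of a trainer_state.json-style log; the field
# names and values mirror the entries shown in the diff above.
state = json.loads("""
{
  "log_history": [
    {"epoch": 0.942161737764983, "eval_loss": 0.49515971541404724,
     "eval_rewards/accuracies": 0.77182537317276, "step": 900},
    {"epoch": 0.9997382884061764, "train_loss": 0.2358125359600127, "step": 955}
  ],
  "max_steps": 955
}
""")

# Evaluation entries are the ones that carry an "eval_loss" key;
# the last such entry holds the final reported eval metrics.
evals = [e for e in state["log_history"] if "eval_loss" in e]
final_eval = evals[-1]
print(final_eval["step"], final_eval["eval_loss"])
```

In a real checkpoint directory you would replace the inlined string with `json.load(open("trainer_state.json"))` and iterate over the full `log_history` to, for example, plot the loss curve.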