YuchenLi01 committed · verified
Commit 563be2f · 1 Parent(s): ff87df2

Model save

README.md CHANGED
@@ -27,7 +27,7 @@ print(output["generated_text"])
 
 ## Training procedure
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yuchenl4/lmpref/runs/ultrafeedbackSkyworkAgree_alignmentZephyr7BSftFull_sdpo_score_ebs64_lr1e-07_0try1OpHKY1133FPTze7kYdrBEUqVHTnPAeaGP4A2gTK12qAD2G)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yuchenl4/lmpref/runs/ultrafeedbackSkyworkAgree_alignmentZephyr7BSftFull_sdpo_score_ebs64_lr1e-07_0try1cG7dShApckNWhFXZFIYIj2tYSNWagmGFG01FzCfGrSIXTX)
 
 This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
 
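The README notes that the model was trained with DPO. For orientation only, here is a minimal sketch of the DPO objective, assuming summed per-response log-probabilities under the policy and the frozen reference model (names and the `beta` value are illustrative, not taken from this repository):

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    # Log-ratios of policy vs. reference for the chosen and rejected responses.
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    # DPO pushes the chosen log-ratio above the rejected one, scaled by beta.
    margin = beta * (chosen_logratios - rejected_logratios)
    return -F.logsigmoid(margin).mean()
```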
all_results.json CHANGED
@@ -1,9 +1,9 @@
 {
     "epoch": 1.0,
     "total_flos": 0.0,
-    "train_loss": 0.4865208608405286,
-    "train_runtime": 30854.5146,
+    "train_loss": 0.48635673790459544,
+    "train_runtime": 30937.1924,
     "train_samples": 45608,
-    "train_samples_per_second": 1.478,
+    "train_samples_per_second": 1.474,
     "train_steps_per_second": 0.023
 }
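The updated metrics are internally consistent. A quick sanity check, assuming the effective batch size of 64 implied by the `ebs64` tag in the W&B run name above (an assumption, not stated in this file):

```python
# Values taken from the updated all_results.json above.
train_samples = 45608
train_runtime = 30937.1924  # seconds

effective_batch_size = 64  # assumed from the "ebs64" run-name tag

print(round(train_samples / train_runtime, 3))                         # 1.474 -> train_samples_per_second
print(round(train_samples / train_runtime / effective_batch_size, 3))  # 0.023 -> train_steps_per_second
```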
model-00001-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:316082caae3c8bd6ae4cb75c465c6c0f8bfc25f5150ad61f49bfa1d0bacddf6e
+oid sha256:cf957168e5f59f80974f7857de25b01910646f4327124b75e85f99dec48f26d5
 size 4943162336
model-00002-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e704a6dbd18cec0d52229f654b74efba5e1d49f545de9c2a88c5e40bb2457d6a
+oid sha256:85d36cae41ea0627f39975fe8960e516841281784d86e155f0db6204b808e3e7
 size 4999819336
model-00003-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:fb41366faff82bb302d044c475362a8b57550a86b1e24bfe196c08c5a2da3066
+oid sha256:61fa761b2551eb6e95a3265278939f126de83b17d9a35613031fa81aa47dddff
 size 4540516344
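Each safetensors entry above is a Git LFS pointer (`version`, `oid sha256:`, `size`). As a hedged sketch, one way to verify a locally downloaded shard against the new pointer for the third shard, using the oid and size shown in the diff (the local filename is assumed):

```python
import hashlib

# Pointer fields copied from the updated model-00003-of-00003.safetensors entry above.
expected_oid = "61fa761b2551eb6e95a3265278939f126de83b17d9a35613031fa81aa47dddff"
expected_size = 4540516344

sha = hashlib.sha256()
size = 0
with open("model-00003-of-00003.safetensors", "rb") as f:  # assumed local path to the shard
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha.update(chunk)
        size += len(chunk)

assert size == expected_size, f"size mismatch: {size}"
assert sha.hexdigest() == expected_oid, "sha256 mismatch"
print("shard matches its LFS pointer")
```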
train_results.json CHANGED
@@ -1,9 +1,9 @@
 {
     "epoch": 1.0,
     "total_flos": 0.0,
-    "train_loss": 0.4865208608405286,
-    "train_runtime": 30854.5146,
+    "train_loss": 0.48635673790459544,
+    "train_runtime": 30937.1924,
     "train_samples": 45608,
-    "train_samples_per_second": 1.478,
+    "train_samples_per_second": 1.474,
     "train_steps_per_second": 0.023
 }
trainer_state.json CHANGED
The diff for this file is too large to render. See raw diff