Haitao999 committed
Commit 89040f1 · verified · 1 Parent(s): 527b1e1

Model save

README.md CHANGED
@@ -1,10 +1,8 @@
 ---
-datasets: RLHFlow/numia_prompt_dpo1
 library_name: transformers
 model_name: Llama-3.2-3B-Instruct-EMPO-numia_prompt_dpo1
 tags:
 - generated_from_trainer
-- open-r1
 - trl
 - grpo
 licence: license
@@ -12,7 +10,7 @@ licence: license
 
 # Model Card for Llama-3.2-3B-Instruct-EMPO-numia_prompt_dpo1
 
-This model is a fine-tuned version of [None](https://huggingface.co/None) on the [RLHFlow/numia_prompt_dpo1](https://huggingface.co/datasets/RLHFlow/numia_prompt_dpo1) dataset.
+This model is a fine-tuned version of [None](https://huggingface.co/None).
 It has been trained using [TRL](https://github.com/huggingface/trl).
 
 ## Quick start
@@ -28,7 +26,7 @@ print(output["generated_text"])
 
 ## Training procedure
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/tjucsailab/huggingface/runs/fr5idqzr)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/tjucsailab/huggingface/runs/2xifc66f)
 
 
 This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
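
For reference, the `## Quick start` section cited in the hunk header above follows the usual TRL model-card pattern ending in `print(output["generated_text"])`. A minimal sketch of that usage is shown here; the repo id `Haitao999/Llama-3.2-3B-Instruct-EMPO-numia_prompt_dpo1` is assumed from the committer name and `model_name`, and the prompt and generation parameters are illustrative rather than copied from the card.

```python
# Minimal sketch of the Quick start usage (assumed repo id, illustrative prompt).
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline(
    "text-generation",
    model="Haitao999/Llama-3.2-3B-Instruct-EMPO-numia_prompt_dpo1",  # assumed repo id
    device="cuda",  # or "cpu" if no GPU is available
)
output = generator(
    [{"role": "user", "content": question}],
    max_new_tokens=128,
    return_full_text=False,
)[0]
print(output["generated_text"])
```
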
all_results.json CHANGED
@@ -1,8 +1,8 @@
 {
     "total_flos": 0.0,
-    "train_loss": 0.0,
-    "train_runtime": 9.4816,
+    "train_loss": 5.786010212772085e-09,
+    "train_runtime": 7433.3342,
     "train_samples": 20000,
-    "train_samples_per_second": 2109.35,
-    "train_steps_per_second": 43.874
+    "train_samples_per_second": 2.691,
+    "train_steps_per_second": 0.024
 }
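
As a quick consistency check on the updated metrics (illustrative, not part of the commit): the new throughput values follow directly from `train_samples` and `train_runtime`, while the implied step count also depends on the effective batch size, which is not recorded in this file.

```python
# Sanity check of the updated all_results.json figures.
train_samples = 20000
train_runtime = 7433.3342          # seconds
train_steps_per_second = 0.024     # reported value

samples_per_second = train_samples / train_runtime
print(f"{samples_per_second:.3f} samples/s")         # ~2.691, matches train_samples_per_second

implied_steps = train_steps_per_second * train_runtime
print(f"~{implied_steps:.0f} optimizer steps")        # ~178 steps (effective batch size not recorded here)
```
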
model-00001-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b31a06f6d89625f2567a1f724d50f68b28ca2fe67d5e40c532d46c8ec9ef9aed
+oid sha256:90d5f4c5c49db2a9e71c87b04e881a899ca4794d61231c8574892ad2d694ed74
 size 4965799096
model-00002-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:21cb2a01569769b11882daab4fc2fb511508e748fd741d2b92f0fdb8d13461d1
+oid sha256:fa0d27fd5d96dc7c6804eaac598601645a07c50bc0af51b8fad02fc64bedeb47
 size 1459729952
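
Both shard updates change only the `oid` in the Git LFS pointer while the byte `size` stays the same. If needed, a downloaded shard can be verified against its pointer as sketched below; the local path is an assumption.

```python
# Verify a downloaded shard against its Git LFS pointer (oid is the file's SHA-256).
import hashlib
import os

path = "model-00002-of-00002.safetensors"  # assumed local download path
expected_oid = "fa0d27fd5d96dc7c6804eaac598601645a07c50bc0af51b8fad02fc64bedeb47"
expected_size = 1459729952

h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)

assert os.path.getsize(path) == expected_size, "size mismatch"
assert h.hexdigest() == expected_oid, "sha256 mismatch"
print("shard matches the LFS pointer")
```
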
train_results.json CHANGED
@@ -1,8 +1,8 @@
 {
     "total_flos": 0.0,
-    "train_loss": 0.0,
-    "train_runtime": 9.4816,
+    "train_loss": 5.786010212772085e-09,
+    "train_runtime": 7433.3342,
     "train_samples": 20000,
-    "train_samples_per_second": 2109.35,
-    "train_steps_per_second": 43.874
+    "train_samples_per_second": 2.691,
+    "train_steps_per_second": 0.024
 }
trainer_state.json CHANGED
The diff for this file is too large to render. See raw diff