KMasaki committed (verified)
Commit ae3fc86
1 Parent(s): f5c5cc8

Model save

README.md CHANGED
@@ -26,7 +26,7 @@ print(output["generated_text"])
 
  ## Training procedure
 
- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/kawamuramasaki/open-r1/runs/kcyxuh8j)
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/kawamuramasaki/open-r1/runs/krwn86nu)
 
 
  This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
@@ -34,9 +34,9 @@ This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing
  ### Framework versions
 
  - TRL: 0.16.0.dev0
- - Transformers: 4.49.0.dev0
+ - Transformers: 4.49.0
  - Pytorch: 2.5.1
- - Datasets: 3.3.0
+ - Datasets: 3.3.2
  - Tokenizers: 0.21.0
 
  ## Citations
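
The README states that the model was trained with GRPO via TRL 0.16.0.dev0. As a rough illustration of what such a run looks like, here is a minimal sketch using TRL's `GRPOTrainer`; the base model, dataset, and reward function are placeholders for illustration, not details taken from this commit.

```python
# Minimal GRPO sketch with TRL (GRPOTrainer is available in recent TRL releases).
# The base model, dataset, and reward function below are illustrative placeholders;
# the actual open-r1 training setup is not part of this diff.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Placeholder dataset with a "prompt" column (taken from the TRL quickstart).
dataset = load_dataset("trl-lib/tldr", split="train")

def reward_short(completions, **kwargs):
    # Toy reward: prefer completions close to 50 characters.
    return [-abs(50 - len(c)) for c in completions]

training_args = GRPOConfig(output_dir="grpo-demo", logging_steps=10)
trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",  # placeholder base model
    reward_funcs=reward_short,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```

In a real run, the toy length reward would be replaced by task-specific reward functions (e.g. answer correctness for math problems, as in the DeepSeekMath setup cited above).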
all_results.json CHANGED
@@ -1,8 +1,8 @@
 {
     "total_flos": 0.0,
-    "train_loss": 0.27599951057092365,
-    "train_runtime": 58865.7657,
-    "train_samples": 72441,
-    "train_samples_per_second": 1.231,
-    "train_steps_per_second": 0.01
+    "train_loss": 0.05178338478773718,
+    "train_runtime": 159624.3158,
+    "train_samples": 93733,
+    "train_samples_per_second": 0.587,
+    "train_steps_per_second": 0.021
 }
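
A quick consistency check on the updated metrics, using only the numbers shown in the diff above: throughput multiplied by runtime roughly recovers the sample count, which is consistent with about one pass over the 93,733 training samples (the small gap comes from the rounded rates).

```python
# Sanity-check the updated all_results.json figures (values copied from the diff above).
train_runtime = 159624.3158        # seconds
samples_per_second = 0.587
steps_per_second = 0.021

print(samples_per_second * train_runtime)  # ~93,700 -- close to train_samples = 93733
print(steps_per_second * train_runtime)    # ~3,350 optimizer steps implied by the rounded rate
```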
generation_config.json CHANGED
@@ -10,5 +10,5 @@
     "temperature": 0.7,
     "top_k": 20,
     "top_p": 0.8,
-    "transformers_version": "4.49.0.dev0"
+    "transformers_version": "4.49.0"
 }
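
For reference, a hedged inference sketch that applies the sampling settings above (temperature 0.7, top_k 20, top_p 0.8). The repo id is a placeholder, since the model's name does not appear in this diff, and `do_sample=True` is an assumption needed for these sampling parameters to take effect.

```python
# Inference sketch using the generation_config.json sampling settings.
# The repo id below is a placeholder -- the model's actual name is not in this diff.
from transformers import pipeline

model_id = "KMasaki/<this-model>"  # placeholder
generator = pipeline("text-generation", model=model_id)
output = generator(
    "What is 12 * 7?",
    do_sample=True,
    temperature=0.7,
    top_k=20,
    top_p=0.8,
    max_new_tokens=128,
)[0]
print(output["generated_text"])  # same access pattern as the README usage snippet
```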
train_results.json CHANGED
@@ -1,8 +1,8 @@
 {
     "total_flos": 0.0,
-    "train_loss": 0.27599951057092365,
-    "train_runtime": 58865.7657,
-    "train_samples": 72441,
-    "train_samples_per_second": 1.231,
-    "train_steps_per_second": 0.01
+    "train_loss": 0.05178338478773718,
+    "train_runtime": 159624.3158,
+    "train_samples": 93733,
+    "train_samples_per_second": 0.587,
+    "train_steps_per_second": 0.021
 }
trainer_state.json CHANGED
The diff for this file is too large to render.