qgallouedec (HF Staff) committed
Commit e9bc335 (verified) · 1 Parent(s): 80eeab5

Model save

README.md CHANGED
@@ -1,11 +1,9 @@
 ---
 base_model: Qwen/Qwen2.5-Math-7B-Instruct
-datasets: DigitalLearningGmbH/MATH-lighteval
 library_name: transformers
 model_name: Qwen-2.5-7B-Simple-RL
 tags:
 - generated_from_trainer
-- open-r1
 - trl
 - grpo
 licence: license
@@ -13,7 +11,7 @@ licence: license
 
 # Model Card for Qwen-2.5-7B-Simple-RL
 
-This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
+This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct).
 It has been trained using [TRL](https://github.com/huggingface/trl).
 
 ## Quick start
@@ -29,7 +27,7 @@ print(output["generated_text"])
 
 ## Training procedure
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/huggingface/huggingface/runs/e8ipvp9s)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/huggingface/huggingface/runs/h6ll256w)
 
 
 This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
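The card keeps the statement that the model was trained with GRPO via TRL. A minimal sketch of such a run is shown below, assuming the DigitalLearningGmbH/MATH-lighteval prompts referenced in the previous README revision and a placeholder reward function; the actual recipe, rewards, and hyperparameters are not part of this commit.

```python
# Minimal GRPO sketch with TRL; an illustration, not the exact recipe behind this checkpoint.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Assumption: the dataset exposes a "problem" field that can serve as the prompt.
dataset = load_dataset("DigitalLearningGmbH/MATH-lighteval", split="train")
dataset = dataset.map(lambda x: {"prompt": x["problem"]})

def placeholder_reward(completions, **kwargs):
    # Stand-in reward; the real run would use task-specific (e.g. accuracy/format) rewards.
    return [float(len(set(c))) for c in completions]

training_args = GRPOConfig(output_dir="Qwen-2.5-7B-Simple-RL", bf16=True)
trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-Math-7B-Instruct",
    reward_funcs=placeholder_reward,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```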
all_results.json CHANGED
@@ -1,8 +1,8 @@
 {
     "total_flos": 0.0,
-    "train_loss": 0.12281362387614372,
-    "train_runtime": 10554.7907,
+    "train_loss": 0.19539536322411308,
+    "train_runtime": 10586.276,
     "train_samples": 7500,
-    "train_samples_per_second": 0.711,
+    "train_samples_per_second": 0.708,
     "train_steps_per_second": 0.011
 }
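As a quick consistency check on the updated metrics: 7500 train_samples over a train_runtime of 10586.276 s gives 7500 / 10586.276 ≈ 0.708, matching the reported train_samples_per_second.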
config.json CHANGED
@@ -22,7 +22,7 @@
22
  "tie_word_embeddings": false,
23
  "torch_dtype": "bfloat16",
24
  "transformers_version": "4.51.0",
25
- "use_cache": true,
26
  "use_sliding_window": false,
27
  "vocab_size": 152064
28
  }
 
22
  "tie_word_embeddings": false,
23
  "torch_dtype": "bfloat16",
24
  "transformers_version": "4.51.0",
25
+ "use_cache": false,
26
  "use_sliding_window": false,
27
  "vocab_size": 152064
28
  }
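The only config.json change is use_cache flipping to false, a value training commonly leaves behind (for instance when gradient checkpointing is enabled). For inference, the KV cache can simply be re-enabled when loading the checkpoint; a minimal sketch, with the repo id assumed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen-2.5-7B-Simple-RL"  # assumed id; substitute the actual repo id or local path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model.config.use_cache = True  # re-enable KV caching for faster generation
```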
model-00001-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:229710b12313b19d22035cd22c92aa7ad6a4ed36452951819b53266fa32c70a4
+oid sha256:4a24be9beb2930e96f55a144307e5c503fcd17311e68900a3e3f31474996e24c
 size 4877660776
model-00002-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9522a64b8db4d386c053355132b2cd00602614f0e8979e394d34f7ce40c331b7
+oid sha256:415d738c2420cc8bfabcbfc69073b2134b63a3f2478bab1482b7bf4df2ba3d4c
 size 4932751008
model-00003-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3e55a974b789e82ee4755da9b66d4eee5e1e0ec997ebd41e7886505b948d14c2
+oid sha256:e76d163c7fb609ff969414089ba6fc3dd69cb219373392bab68bed60c6e67ef8
 size 4330865200
model-00004-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:bc9e59c746ada8810f79f0985b9941a57f71cc14dc500b93fdbde4c64a8d935d
+oid sha256:f9436e8ca5f15b5727a6df3d264fd8c7938f0a2294c3d61ae58a45b52c1f6750
 size 1089994880
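Across all four safetensors shards only the LFS pointer oids change while the shard sizes stay identical, so the weights were updated without altering the sharding layout.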
train_results.json CHANGED
@@ -1,8 +1,8 @@
 {
     "total_flos": 0.0,
-    "train_loss": 0.12281362387614372,
-    "train_runtime": 10554.7907,
+    "train_loss": 0.19539536322411308,
+    "train_runtime": 10586.276,
     "train_samples": 7500,
-    "train_samples_per_second": 0.711,
+    "train_samples_per_second": 0.708,
     "train_steps_per_second": 0.011
 }
trainer_state.json CHANGED
The diff for this file is too large to render. See raw diff
 
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6d6ac3cfc11a0c97d69bb715ed0de66399a36fc026b721684ac5f8c83cc60b9b
+oid sha256:24aa22826ce5722987ffd0895d76ea41b7c867f3895817af221a89244b40b616
 size 8568