Improve model card: Add pipeline tag, library name, code/project links, and sample usage

This PR enhances the model card for Klear-Reasoner-8B by:

* Adding `pipeline_tag: text-generation` to accurately reflect the model's capabilities in long-form reasoning and code generation. This also improves discoverability on the Hugging Face Hub (e.g., at https://huggingface.co/models?pipeline_tag=text-generation).
* Including `library_name: transformers` metadata, as the model is compatible with the 🤗 Transformers library, enabling the "how to use" widget and further improving discoverability.
* Adding direct links to the main GitHub repository (`https://github.com/suu990901/Klear_Reasoner`) and the project page (`https://suu990901.github.io/KlearReasoner/`) in the resource table for easier access to related resources.
* Integrating a clear Python snippet for sample usage, making it easier for users to get started with inference.
* Adding a detailed section on GPPO (Gradient-Preserving Clipping Policy Optimization) for a deeper understanding of the model's core innovation.

These updates improve the completeness and usability of the model card. The full updated model card follows.
---
base_model:
- Qwen/Qwen3-8B-Base
datasets:
- Suu/KlearReasoner-MathSub-30K
- Suu/KlearReasoner-CodeSub-15K
language:
- en
license: apache-2.0
metrics:
- accuracy
pipeline_tag: text-generation
library_name: transformers
---

# ✨ Klear-Reasoner-8B

We present Klear-Reasoner, a model with long reasoning capabilities that demonstrates careful deliberation during problem solving and achieves outstanding performance across multiple benchmarks. We investigate two key issues with current clipping mechanisms in RL: clipping suppresses critical exploration signals and ignores suboptimal trajectories. To address these challenges, we propose **G**radient-**P**reserving clipping **P**olicy **O**ptimization (**GPPO**), which gently backpropagates gradients from clipped tokens.

| Resource | Link |
|---|---|
| 📝 Preprints | [Paper](https://arxiv.org/pdf/2508.07629) |
| 🤗 Daily Paper | [Paper](https://huggingface.co/papers/2508.07629) |
| 🌐 Project Page | [Klear-Reasoner Website](https://suu990901.github.io/KlearReasoner/) |
| 💻 Code Repo | [Klear-Reasoner GitHub](https://github.com/suu990901/Klear_Reasoner) |
| 🤗 Model Hub | [Klear-Reasoner-8B](https://huggingface.co/Suu/Klear-Reasoner-8B) |
| 🤗 Dataset Hub | [Math RL](https://huggingface.co/datasets/Suu/KlearReasoner-MathSub-30K) |
| 🤗 Dataset Hub | [Code RL](https://huggingface.co/datasets/Suu/KlearReasoner-CodeSub-15K) |
| 🐛 Issues & Discussions | [GitHub Issues](https://github.com/suu990901/Klear_Reasoner/issues) |
| 📧 Contact | [email protected] |

## 📌 Overview

---

## 📐 GPPO (Gradient-Preserving Clipping Policy Optimization)

GPPO is a **plug-and-play** replacement for PPO/GRPO that keeps the clipped tokens **in the computational graph** and lets their gradients flow in a **bounded, controlled** way.

### Problem with Vanilla Clipping

Classic importance-ratio clipping (PPO/GRPO) drops all tokens whose ratio $r_t^{(j)}=\pi_\theta/\pi_{\text{old}}$ falls outside $[1-\varepsilon_l,\ 1+\varepsilon_h]$. Two side effects appear:

- **High-entropy exploratory tokens** (large $r$, positive advantage) are killed → less exploration.
- **Negative trajectories** (small $r$, negative advantage) are ignored → slower correction.

### GPPO Surrogate Loss (Token-Level GRPO)

Let

- $\delta = r_t^{(j)}(\theta)=\pi_\theta/\pi_{\text{old}}$ (importance ratio),
- $\tilde A^{(j)}$ = group-relative advantage,
- $\text{sg}(\cdot)$ = stop-gradient (detach from back-prop).

The **GPPO objective** replaces the hard clip with a gradient-preserving clip $\text{GPC}(\delta)$:

$$
\mathcal{J}_{\text{GPPO}}(\theta)=\mathbb{E}\!\left[\min\!\big(\delta\,\tilde A^{(j)},\ \text{GPC}(\delta)\,\tilde A^{(j)}\big)\right],
\qquad
\text{GPC}(\delta)=
\begin{cases}
\dfrac{1-\varepsilon_l}{\text{sg}(\delta)}\,\delta, & \delta<1-\varepsilon_l,\\[6pt]
\delta, & 1-\varepsilon_l\le\delta\le 1+\varepsilon_h,\\[6pt]
\dfrac{1+\varepsilon_h}{\text{sg}(\delta)}\,\delta, & \delta>1+\varepsilon_h.
\end{cases}
$$

- **Forward**: since $\delta/\text{sg}(\delta)=1$ in value, this behaves exactly like Clip-Higher.
- **Backward**: the fraction $\frac{1\pm\varepsilon}{\text{sg}(\delta)}$ keeps the clipped magnitude **but still propagates** a mild gradient.

### Gradient Expression

Let $\phi_\theta(a_{j,t},s_{j,t})$ be the policy-gradient vector. The **per-token gradient** is

$$
\nabla_\theta\,\mathcal{J}_{\text{GPPO}}
=\mathbb{E}\!\left[\,g_{j,t}\,\tilde A^{(j)}\,\phi_\theta(a_{j,t},s_{j,t})\,\right],
$$

where

$$
g_{j,t}=
\begin{cases}
1-\varepsilon_l, & \delta<1-\varepsilon_l,\\
\delta, & 1-\varepsilon_l\le\delta\le 1+\varepsilon_h,\\
1+\varepsilon_h, & \delta>1+\varepsilon_h.
\end{cases}
$$

- **Never zero** → every token contributes to learning.

### General Form with Tunable Scaling ($\beta_1$, $\beta_2$)

For finer-grained control, the preserved gradient in each clipped region can be scaled:

$$
\text{GPC}_{\beta}(\delta)=
\begin{cases}
\dfrac{\beta_1(1-\varepsilon_l)}{\text{sg}(\delta)}\,\delta+(1-\beta_1)(1-\varepsilon_l), & \delta<1-\varepsilon_l,\\[6pt]
\delta, & 1-\varepsilon_l\le\delta\le 1+\varepsilon_h,\\[6pt]
\dfrac{\beta_2(1+\varepsilon_h)}{\text{sg}(\delta)}\,\delta+(1-\beta_2)(1+\varepsilon_h), & \delta>1+\varepsilon_h,
\end{cases}
$$

so $\beta_1,\beta_2$ control how much gradient the clipped regions pass through ($\beta_1=\beta_2=0$ recovers vanilla clipping). Empirically we set $\beta_1 = \beta_2 = 1$.
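A minimal PyTorch sketch of this objective (illustrative only, not the repo's training code; it assumes per-token log-probabilities and broadcastable advantages, and uses the min-form surrogate from Clip-Higher):

```python
import torch

def gppo_loss(logp_new, logp_old, advantages, eps_l=0.2, eps_h=0.28):
    """Token-level GPPO surrogate: forward value matches Clip-Higher,
    while clipped tokens keep a bounded (1 ± eps)/sg(delta) gradient."""
    delta = torch.exp(logp_new - logp_old)  # importance ratio, carries gradient
    sg = delta.detach()                     # stop-gradient copy of delta
    # Value equals clip(delta, 1-eps_l, 1+eps_h); the gradient is scaled by
    # clip(sg, ...)/sg instead of being zeroed outside the trust region.
    gpc = torch.clamp(sg, 1.0 - eps_l, 1.0 + eps_h) / sg * delta
    surrogate = torch.minimum(delta * advantages, gpc * advantages)
    return -surrogate.mean()
```

Because the clip bound is divided by `delta.detach()`, the forward pass reproduces the clipped ratio exactly, while backprop still receives the mild $(1\pm\varepsilon)/\text{sg}(\delta)$ signal described above.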
### Experiment

<div align="center">
<img src="GPPO.png" width="100%"/>

<sub>Comparison of GPPO, GRPO w/ Clip-Higher, and CISPO in mathematical RL training. All methods are trained from an earlier long-CoT SFT checkpoint with a sequence length of 32K tokens. For GRPO, we use the Clip-Higher strategy from DAPO with the recommended $\varepsilon_h = 0.28$.</sub>
</div>

---
### Evaluation

We expand the inference budget to 64K tokens and adopt the YaRN method with a scaling factor of 2.5. **Evaluation results are coming soon, stay tuned.**

> We report the average `pass@1` results (avg@_n_), with all other evaluation metrics following the DeepSeek-R1 assessment framework (temperature=0.6, top_p=0.95).
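For reference, a long-context setup along these lines can be sketched with the Qwen3-style YaRN configuration in 🤗 Transformers (an illustrative sketch; the exact `rope_scaling` values used for our evaluation should be taken from the paper and repo):

```python
from transformers import AutoModelForCausalLM

# Illustrative YaRN override for a 64K inference budget. The field values
# below are assumptions, not the card's verified configuration.
model = AutoModelForCausalLM.from_pretrained(
    "Suu/Klear-Reasoner-8B",
    rope_scaling={
        "rope_type": "yarn",
        "factor": 2.5,
        "original_max_position_embeddings": 32768,
    },
)
```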

---

## Usage

You can load the model and run inference with the Hugging Face `transformers` library:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "Suu/Klear-Reasoner-8B"  # or "Suu/Klear-Reasoner-8B-SFT"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Prove that for all positive integers n, n^3 + 2n is divisible by 3."
messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # open the assistant turn so the model starts its reply
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=8192,
    temperature=0.6,
    top_p=0.95,
    do_sample=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

---

```
pip install -r requirements.txt
```

For the code, we use [Firejail](https://github.com/netblue30/firejail) for the **sandbox** environment. Additionally, we implemented multi-process control based on [Pebble](https://github.com/noxdafox/pebble), enabling automatic resource reclamation upon task timeout. For mathematics, we use [math_verify](https://github.com/huggingface/Math-Verify) for judging.
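As a rough illustration of that timeout control (a minimal sketch, assuming a hypothetical `judge_one_case` worker; this is not the repo's actual harness):

```python
from concurrent.futures import TimeoutError
from pebble import ProcessPool

def judge_one_case(code: str, test_input: str) -> bool:
    """Hypothetical worker: run `code` on `test_input` inside the sandbox."""
    raise NotImplementedError

if __name__ == "__main__":
    with ProcessPool(max_workers=8) as pool:
        future = pool.schedule(judge_one_case, args=("print(1+1)", "2"), timeout=10)
        try:
            passed = future.result()
        except TimeoutError:
            # Pebble terminates the worker process on timeout, so leaked
            # memory and child processes are reclaimed automatically.
            passed = False
```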

### Training Data Format

Please refer to the two provided datasets, [Math RL](https://huggingface.co/datasets/Suu/KlearReasoner-MathSub-30K) and [Code RL](https://huggingface.co/datasets/Suu/KlearReasoner-CodeSub-15K), for the training data format. A single math entry is formatted as follows:

```json
{"data_source": "math_longcot_math_verify", "prompt": [{"content": "Let $n=9867$. If you calculated $n^{3}-n^{2}$, what would be the unit digit found?\n(a) 0\n(b) 2\n(c) 4\n(d) 6\n(e) 8", "role": "user"}], "ability": "math", "reward_model": {"ground_truth": "4", "style": "rule"}, "__index_level_0__": "29999"}
```

Here, the `data_source` field is set to `"math_longcot_math_verify"`.
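To inspect entries locally, you can use the standard 🤗 Datasets API (the `train` split name is an assumption):

```python
from datasets import load_dataset

# Load the math RL dataset from the Hub and inspect the first entry.
math_rl = load_dataset("Suu/KlearReasoner-MathSub-30K", split="train")
print(math_rl[0]["data_source"])   # e.g. "math_longcot_math_verify"
print(math_rl[0]["reward_model"])  # e.g. {"ground_truth": "4", "style": "rule"}
```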

The format for a single code entry is as follows:

```json
{"hash": "47c43857280be8a7557cc36b998b3012", "ability": "code", "data_source": "coder1_longcot", "prompt": [{"content": "You are an expert Python programmer. You will be given a question (problem specification) and will generate a correct Python program that matches the specification and passes all tests.\n\nTakahashi is planning to eat N dishes.\nThe i-th dish he plans to eat is sweet if S_i = sweet, and salty if S_i = salty.\nIf he eats two sweet dishes consecutively, he will feel sick and be unable to eat any more dishes.\nDetermine whether he can eat all the dishes...", "role": "user"}], "reward_model": {"ground_truth": "...", "style": "rule"}}
```

Here, the `data_source` field is set to `"coder1_longcot"`.

**The `data_source` field determines which verifier is used.**
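A hypothetical sketch of that routing (`get_verifier` and `sandbox_code_verifier` are illustrative names, not the repo's API; the `math_verify` calls follow the library's documented `parse`/`verify` interface):

```python
from math_verify import parse, verify  # huggingface/Math-Verify

def sandbox_code_verifier(prediction: str, ground_truth: str) -> bool:
    """Placeholder: execute the predicted program against its tests
    inside the Firejail sandbox."""
    raise NotImplementedError

def get_verifier(data_source: str):
    # Route each sample to a verifier based on its data_source field.
    if data_source == "math_longcot_math_verify":
        return lambda pred, gold: verify(parse(gold), parse(pred))
    if data_source == "coder1_longcot":
        return sandbox_code_verifier
    raise ValueError(f"Unknown data_source: {data_source}")
```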
### Using Ray for Multi-Node Training

For multi-node training, ensure all nodes are started and connected via Ray before executing the training script. Below is a brief setup guide for Ray across multiple machines:

#### Step 1: Start Ray on the Head Node (node0)
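A typical head-node startup looks like the following (standard Ray CLI with its default port; adjust to your cluster — worker nodes then join with `ray start --address=<node0_ip>:6379`):

```bash
# On node0: start the Ray head process (6379 is Ray's default GCS port).
ray start --head --port=6379
```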

The training script is then configured with your data paths:

```
YOUR_TRAIN_FILE="<train_data_path>"
YOUR_TEST_FILE="<test_data_path>"
```

---

## 🤝 Citation

If you find this work helpful, please cite our paper:

```bibtex