Suu committed · commit 78617ac · verified · parent 3f4fddd

Update README.md

Files changed (1): README.md (+50 -1)
README.md CHANGED
@@ -1,4 +1,53 @@
  ---
  license: apache-2.0
  ---
- This dataset is a cleaned version of the RL data from the [rllm project](https://github.com/agentica-project/rllm), part of which was used to train KlearReasoner code RL.
+ This dataset is a high-quality subset of the Klear-Reasoner Code RL dataset, derived from the RL data used in the [rllm project](https://github.com/agentica-project/rllm). Part of this data contributed to training Klear-Reasoner’s code reasoning models.
+ The dataset has been carefully cleaned and filtered to include only reliable samples suitable for reinforcement learning. Models trained on this dataset show substantial performance improvements across a range of code reasoning benchmarks.
+ You can load the dataset via the Hugging Face datasets library:
+
+ ```python
+ from datasets import load_dataset
+ dataset = load_dataset("Suu/KlearReasoner-MathSub-30K")
+ ```
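As a quick sanity check after loading, the splits and columns can be inspected. This is a minimal sketch, assuming the data exposes a default "train" split (the split layout is not stated in the card):

```python
from datasets import load_dataset

# Dataset ID taken from the loading example above.
ds = load_dataset("Suu/KlearReasoner-MathSub-30K")
print(ds)                        # available splits and row counts
print(ds["train"].column_names)  # assumes a "train" split; fields are described under "Data Fields"
```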
+ See our paper and GitHub repository for more details.
+
+ | Resource | Link |
+ |---|---|
+ | 📝 Preprints | [Paper](https://arxiv.org/pdf/2508.07629) |
+ | 🤗 Daily Paper | [Paper](https://huggingface.co/papers/2508.07629) |
+ | 🤗 Model Hub | [Klear-Reasoner-8B](https://huggingface.co/Suu/Klear-Reasoner-8B) |
+ | 🤗 Dataset Hub | [Math RL](https://huggingface.co/datasets/Suu/KlearReasoner-MathSub-30K) |
+ | 🤗 Dataset Hub | [Code RL](https://huggingface.co/datasets/Suu/KlearReasoner-CodeSub-15K) |
+ | 🐛 Issues & Discussions | [GitHub Issues](https://github.com/suu990901/KlearReasoner/issues) |
+ | 📧 Contact | [email protected] |
+
+ ## Data Fields
+
+ - **data_source** (string) — The source identifier for the sample.
+ - **prompt** (list of dict) — The input prompt, stored as a list of message objects in chat format.
+ - **ability** (string) — The skill or task category associated with the sample.
+ - **reward_model** (dict) — Information about the ground truth or reward signal.
+   - **ground_truth** (string) — The expected correct answer (may include LaTeX formatting).
+   - **style** (string) — The method or type of evaluation, e.g., "rule".
+ - **index_level_0** (int) — An internal index or unique identifier for the sample.
+
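A minimal sketch of how these fields can be accessed after loading, assuming a default "train" split and standard "role"/"content" keys in the chat-format prompt messages (both assumptions, not stated in the card):

```python
from datasets import load_dataset

# Dataset ID as used in the loading example above; the "train" split is an assumption.
dataset = load_dataset("Suu/KlearReasoner-MathSub-30K", split="train")

sample = dataset[0]
print(sample["data_source"])                   # source identifier for this sample
print(sample["ability"])                       # task/skill category
for message in sample["prompt"]:               # chat-format messages
    print(message["role"], message["content"][:80])  # assumed keys: role / content
print(sample["reward_model"]["style"])         # evaluation type, e.g. "rule"
print(sample["reward_model"]["ground_truth"])  # expected answer used as the reward target
```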
+ ## Demonstration of Data Quality
+
+ This dataset contains exclusively high-quality, filtered samples.
+ All samples have been selected to ensure accurate reward signals for reinforcement learning, following the gradient-preserving clipping policy optimization (GPPO) method introduced in our paper. Models trained on this dataset achieve strong generalization and reliable performance across a range of reasoning tasks.
+
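The GPPO objective itself is defined in the linked paper. Purely as an illustrative, non-authoritative sketch of the general idea behind gradient-preserving clipping, the snippet below approximates it with a detach-based clip so that samples outside the clip range still receive gradient; this is an assumption and may differ from the paper's actual formulation.

```python
import torch

def clipped_surrogate(logp_new, logp_old, advantages, eps=0.2, preserve_grad=True):
    """Illustrative PPO-style clipped surrogate (hypothetical helper, not from the paper).

    With preserve_grad=True the clip is applied through a detached correction, so
    samples outside the clip range still contribute gradient; this only sketches the
    "gradient-preserving clipping" idea, not the exact GPPO loss.
    """
    ratio = torch.exp(logp_new - logp_old)              # importance ratio pi_new / pi_old
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)  # standard PPO clip
    if preserve_grad:
        # Forward value equals the clipped ratio; the backward pass follows the raw ratio.
        clipped = ratio + (clipped - ratio).detach()
    return -torch.mean(torch.min(ratio * advantages, clipped * advantages))
```

With `preserve_grad=False` this reduces to the ordinary PPO clipped surrogate; the detach trick changes only the backward pass.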
+ ## Citation
+
+ Please consider citing our paper if you find this dataset useful:
+ ```bibtex
+ @misc{su2025klearreasoneradvancingreasoningcapability,
+       title={Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization},
+       author={Zhenpeng Su and Leiyu Pan and Xue Bai and Dening Liu and Guanting Dong and Jiaming Huang and Wenping Hu and Fuzheng Zhang and Kun Gai and Guorui Zhou},
+       year={2025},
+       eprint={2508.07629},
+       archivePrefix={arXiv},
+       primaryClass={cs.LG},
+       url={https://arxiv.org/abs/2508.07629},
+ }
+ ```