---
license: apache-2.0
language:
- en
- zh
size_categories:
- 100M
---

🤗 Hugging Face 🤖 ModelScope 🖥️ GitHub

# Ring-lite-rl-data

This dataset is a curated subset of high-quality problems across the mathematics and code domains, designed for reinforcement learning of the [Ring-lite](https://huggingface.co/inclusionAI/Ring-lite) model. This dataset contains:

* **Mathematics**: Over 39,000 rigorously curated problems sourced from:
  - Open-source datasets (BigMath, DeepScaleR, DAPO, DeepMath-103K)
  - Art of Problem Solving (AoPS) contest collections
* **Code**: Approximately 8,400 verified coding problems from:
  - Programming competition resources (CodeContest, TACO, APPS)
  - All problems include validated "Accepted" solutions and test cases

**Note**: Only a partial subset of the complete dataset is publicly released due to third-party data licensing restrictions and procurement agreements. The published portion has been carefully selected to comply with all copyright requirements while maintaining research utility.

## Dataset Construction

### Data Sources

- **Mathematics**: Problems collected from open-source datasets, filtered through strict quality control
- **Code**: Problems from open-source programming competition resources with verified solutions

### Curation Pipeline

Our data undergoes a rigorous three-stage curation process:

1. **Data Cleansing**:
   - Removal of problems with invalid characters, images, or multiple subquestions
   - Strict character-based and semantic-based deduplication
   - Exclusion of easily guessable problems (multiple-choice, True/False questions)
2. **Answer Verification**:
   - LLM-based verification using models of different sizes
   - Human expert annotation
   - Problems failing verification are excluded
3. **Data Annotation**:
   - Multi-dimensional labeling (source, educational level, domain knowledge)
   - Mathematical Subject Classification (MSC) for math problems
   - Model-aware difficulty assessment

## Dataset Fields

The dataset contains the following fields for each domain:

### Mathematics

- **context**: The problem statement
- **groundtruth**: Verified correct answer
- **type**: Problem category
- **mid**: Unique problem ID

### Code

- **context**: Detailed programming problem description
- **groundtruth**: Verified correct Python solution code
- **groundtruth_language**: Implementation language
- **type**: Problem category
- **code_test_cases**: List of validated test cases, each with:
  - **input**: Test input
  - **output**: Expected output
- **dataset**: Source dataset
- **code_language**: Programming language
- **difficulty**: Problem difficulty score
- **mid**: Unique problem ID

## Citation Information

**Please consider citing our technical report [Ring-lite](https://arxiv.org/abs/2506.14731) if you use this dataset:**

```
@misc{ringteam2025ringlitescalablereasoningc3postabilized,
      title={Ring-lite: Scalable Reasoning via C3PO-Stabilized Reinforcement Learning for LLMs},
      author={Ling Team and Bin Hu and Cai Chen and Deng Zhao and Ding Liu and Dingnan Jin and Feng Zhu and Hao Dai and Hongzhi Luan and Jia Guo and Jiaming Liu and Jiewei Wu and Jun Mei and Jun Zhou and Junbo Zhao and Junwu Xiong and Kaihong Zhang and Kuan Xu and Lei Liang and Liang Jiang and Liangcheng Fu and Longfei Zheng and Qiang Gao and Qing Cui and Quan Wan and Shaomian Zheng and Shuaicheng Li and Tongkai Yang and Wang Ren and Xiaodong Yan and Xiaopei Wan and Xiaoyun Feng and Xin Zhao and Xinxing Yang and Xinyu Kong and Xuemin Yang and Yang Li and Yingting Wu and Yongkang Liu and Zhankai Xu and Zhenduo Zhang and Zhenglei Zhou and Zhenyu Huang and Zhiqiang Zhang and Zihao Wang and Zujie Wen},
      year={2025},
      eprint={2506.14731},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2506.14731},
}
```

## Intended Usage

This dataset is designed for:

- Training and evaluating LLMs on multi-domain reasoning tasks
- Reinforcement learning applications
- Benchmarking model performance across the mathematics and code domains

## Release Date

06/20/2025

## Data Version

1.0
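As a rough illustration of how the Code-domain fields described above can be used, the sketch below re-verifies a solution against its `code_test_cases` by running it in a subprocess and comparing stdout with the expected output. The sample record is fabricated purely to mirror the documented schema; it is not an actual dataset entry, and real records would come from loading the dataset itself.

```python
import subprocess
import sys

# Hypothetical record mirroring the Code-domain schema in this card
# (illustration only -- not an actual entry from the dataset).
record = {
    "context": "Read an integer n and print n squared.",
    "groundtruth": "n = int(input())\nprint(n * n)",
    "groundtruth_language": "python",
    "code_test_cases": [
        {"input": "3\n", "output": "9"},
        {"input": "12\n", "output": "144"},
    ],
}

def passes_all_tests(solution: str, test_cases: list) -> bool:
    """Run the candidate solution once per test case in a subprocess,
    feed it the test input on stdin, and compare trimmed stdout with
    the expected output."""
    for case in test_cases:
        result = subprocess.run(
            [sys.executable, "-c", solution],
            input=case["input"],
            capture_output=True,
            text=True,
            timeout=10,
        )
        if result.stdout.strip() != case["output"].strip():
            return False
    return True

print(passes_all_tests(record["groundtruth"], record["code_test_cases"]))  # prints: True
```

This is the same shape of check implied by the "validated test cases" guarantee: a stored `groundtruth` solution should pass every entry in its `code_test_cases` list.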