AceReason-Nemotron: Advancing Math and Code Reasoning through Reinforcement Learning
Abstract
Large-scale reinforcement learning enhances reasoning capabilities in small and mid-sized models more effectively than distillation, achieving superior results on both math and code benchmarks.
Despite recent progress in large-scale reinforcement learning (RL) for reasoning, the training recipe for building high-performing reasoning models remains elusive. Key implementation details of frontier models such as DeepSeek-R1, including data curation strategies and the RL training recipe, are often omitted. Moreover, recent research suggests that distillation remains more effective than RL for smaller models. In this work, we demonstrate that large-scale RL can significantly enhance the reasoning capabilities of strong, small- and mid-sized models, achieving results that surpass those of state-of-the-art distillation-based models. We systematically study the RL training process through extensive ablations and propose a simple yet effective approach: first training on math-only prompts, then on code-only prompts. Notably, we find that math-only RL significantly improves the performance of strong distilled models not only on math benchmarks (e.g., +14.6% / +17.2% on AIME 2025 for the 7B / 14B models) but also on code reasoning tasks (e.g., +6.8% / +5.8% on LiveCodeBench for the 7B / 14B models). In addition, extended code-only RL iterations further improve performance on code benchmarks with minimal or no degradation in math results. We develop a robust data curation pipeline to collect challenging prompts with high-quality, verifiable answers and test cases, enabling verification-based RL across both domains. Finally, we identify key experimental insights, including curriculum learning with progressively increasing response lengths and the stabilizing effect of on-policy parameter updates. We find that RL not only elicits the foundational reasoning capabilities acquired during pretraining and supervised fine-tuning (e.g., distillation), but also pushes the limits of the model's reasoning ability, enabling it to solve problems that were previously unsolvable.
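To make the recipe concrete, below is a minimal Python sketch of the two-stage pipeline: binary verification-based rewards in both domains, a curriculum over the maximum response length, and on-policy updates. Everything here is illustrative; `policy`, `update_fn` (e.g., a GRPO-style step), the sandboxed `run` executor, and the length schedules are hypothetical placeholders, not the authors' released implementation.

```python
import re
from functools import partial
from typing import Callable, List, Sequence, Tuple

def math_reward(response: str, gold_answer: str) -> float:
    """Binary verification reward for math: 1.0 iff the final \\boxed{...}
    answer matches the curated gold answer (string match for illustration;
    a real pipeline would use symbolic/numeric equivalence checking)."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", response)
    return 1.0 if matches and matches[-1].strip() == gold_answer.strip() else 0.0

def code_reward(response: str, tests: Sequence[Tuple[str, str]],
                run: Callable[[str, str], str]) -> float:
    """Binary verification reward for code: 1.0 iff the extracted program
    passes every curated test case. `run(code, stdin) -> stdout` must be a
    sandboxed executor supplied by the caller."""
    fence = chr(96) * 3  # literal ``` (avoids nesting backticks in this snippet)
    blocks = re.findall(fence + r"(?:\w+)?\n(.*?)" + fence, response, re.DOTALL)
    if not blocks:
        return 0.0
    code = blocks[-1]
    return 1.0 if all(run(code, i).strip() == o.strip() for i, o in tests) else 0.0

def rl_stage(policy, data, reward_fn, update_fn, length_schedule: List[int],
             group_size: int = 8):
    """One RL stage with a curriculum over the maximum response length.
    `policy.generate` and `update_fn` stand in for a GRPO/PPO-style trainer;
    sampling fresh rollouts from the current policy keeps training on-policy."""
    for max_len in length_schedule:  # progressively longer response budgets
        for prompt, meta in data:
            rollouts = [policy.generate(prompt, max_tokens=max_len)
                        for _ in range(group_size)]
            rewards = [reward_fn(r, meta) for r in rollouts]
            update_fn(policy, prompt, rollouts, rewards)
    return policy

# Stage 1: math-only RL (also lifts code benchmarks), then Stage 2: code-only RL.
# Length schedules are illustrative, not the paper's exact values.
# policy = rl_stage(policy, math_data, math_reward, grpo_step, [8192, 16384, 24576])
# policy = rl_stage(policy, code_data, partial(code_reward, run=sandbox_run),
#                   grpo_step, [24576, 32768])
```

The two calls at the bottom mirror the paper's ordering: math-only RL first, then extended code-only RL, with rewards coming from verification rather than a learned reward model.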
Community
AceReason-Nemotron: Advancing math and code reasoning through reinforcement learning (RL)
We propose conducting RL on math-only prompts first, then on code-only prompts.
Our key findings include:
- Math-only RL significantly boosts both math and code benchmarks!
- Extended iterations of code-only RL significantly improve code performance while causing minimal or no degradation in math reasoning tasks.
- RL not only elicits the foundational reasoning capabilities acquired during pretraining and SFT, as evidenced by significant improvements in pass@1, but also pushes the limits of the model’s reasoning ability to solve previously unsolvable problems, as demonstrated by substantial gains in pass@64 (see the pass@k sketch after this list).
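For context on these metrics: pass@k is commonly estimated with the unbiased estimator of Chen et al. (2021), where n ≥ k samples are drawn per problem and c of them are correct. A minimal sketch of that standard estimator (not necessarily the authors' exact evaluation code):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: total samples drawn per problem, c: number of correct samples,
    k: evaluation budget (requires n >= k).
    """
    if n - c < k:  # every size-k subset must contain a correct sample
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 128 samples with 5 correct gives a low pass@1 but a high pass@64.
print(pass_at_k(128, 5, 1))   # ~0.039
print(pass_at_k(128, 5, 64))  # ~0.97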
We are releasing the model on Hugging Face, along with the training recipe and implementation details in the paper.
Model on 🤗: https://huggingface.co/nvidia/AceReason-Nemotron-14B
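A minimal inference sketch using the Hugging Face `transformers` library (the prompt and sampling settings are illustrative, not an official recommendation):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/AceReason-Nemotron-14B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user",
             "content": "Find the remainder when 7^100 is divided by 13."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning models emit long chains of thought, so allow a generous budget.
output = model.generate(input_ids, max_new_tokens=4096,
                        do_sample=True, temperature=0.6, top_p=0.95)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```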
Related papers recommended by the Semantic Scholar API:
- How Difficulty-Aware Staged Reinforcement Learning Enhances LLMs' Reasoning Capabilities: A Preliminary Experimental Study (2025)
- RL of Thoughts: Navigating LLM Reasoning with Inference-time Reinforcement Learning (2025)
- GPG: A Simple and Strong Reinforcement Learning Baseline for Model Reasoning (2025)
- Phi-4-Mini-Reasoning: Exploring the Limits of Small Reasoning Language Models in Math (2025)
- SRPO: A Cross-Domain Implementation of Large-Scale Reinforcement Learning on LLM (2025)
- General-Reasoner: Advancing LLM Reasoning Across All Domains (2025)
- RL Tango: Reinforcing Generator and Verifier Together for Language Reasoning (2025)