Bridging Supervised Learning and Reinforcement Learning in Math Reasoning
Abstract
Negative-aware Fine-Tuning (NFT) enhances LLMs' math abilities using supervised learning with negative feedback, achieving performance comparable to RL methods.
Reinforcement Learning (RL) has played a central role in the recent surge of LLMs' math abilities by enabling self-improvement through binary verifier signals. In contrast, Supervised Learning (SL) is rarely considered for such verification-driven training, largely due to its heavy reliance on reference answers and inability to reflect on mistakes. In this work, we challenge the prevailing notion that self-improvement is exclusive to RL and propose Negative-aware Fine-Tuning (NFT) -- a supervised approach that enables LLMs to reflect on their failures and improve autonomously with no external teachers. In online training, instead of throwing away self-generated negative answers, NFT constructs an implicit negative policy to model them. This implicit policy is parameterized with the same positive LLM being optimized on positive data, enabling direct policy optimization on all of the LLM's generations. We conduct experiments on 7B and 32B models in math reasoning tasks. Results consistently show that through the additional leverage of negative feedback, NFT significantly improves over SL baselines like Rejection sampling Fine-Tuning, matching or even surpassing leading RL algorithms like GRPO and DAPO. Furthermore, we demonstrate that NFT and GRPO are actually equivalent in strict-on-policy training, even though they originate from entirely different theoretical foundations. Our experiments and theoretical findings bridge the gap between SL and RL methods in binary-feedback learning systems.
Community
Project page: https://research.nvidia.com/labs/dir/Negative-aware-Fine-Tuning/
We propose Negative-aware Fine-Tuning (NFT), a supervised learning (SL) method for improving LLMs' math-reasoning abilities with no external teachers.
- As an SL method, NFT outperforms leading RL algorithms like GRPO and DAPO in 7B model experiments and performs similarly to DAPO in 32B settings.
- NFT allows directly optimizing LLMs on negative data, thereby significantly outperforming other SL baselines such as Rejection sampling Fine-Tuning (RFT).
- NFT is equivalent to GRPO when training is strictly on-policy, despite their entirely different theoretical foundations.
Our findings show that self-reflective improvement is not exclusive to RL algorithms. Rather, the current gap between SL and RL methods stems from their differing ability to leverage negative data effectively.
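To make the "implicit negative policy" idea above concrete, here is a minimal pure-Python sketch of an NFT-style loss. This is our reading of the method description, not the authors' released code: the function name, the exact form of the negative-side likelihood ratio, and the clamping constant are assumptions. The key property it illustrates is that negative answers contribute gradient through the same network that is fit on positive answers, and that at a strictly on-policy starting point (current policy equals the sampling policy) the negative terms vanish.

```python
import math

def nft_loss(logp_theta, logp_old, is_positive, r_q):
    """Hedged sketch of an NFT-style per-answer loss (not the official code).

    logp_theta:  log-prob of each sampled answer under the current policy
    logp_old:    log-prob under the (frozen) policy that generated the answers
    is_positive: verifier labels, True where the answer was judged correct
    r_q:         fraction of correct answers for this question, in (0, 1)
    """
    losses = []
    for lt, lo, pos in zip(logp_theta, logp_old, is_positive):
        if pos:
            # Positive answers: plain supervised maximum-likelihood term.
            losses.append(-lt)
        else:
            # Negative answers: model them with an implicit negative policy
            # parameterized by the SAME network, via the mixture identity
            #   pi_old = r_q * pi_theta + (1 - r_q) * pi_neg  (assumed form),
            # so the negative likelihood ratio of a sampled answer y is
            #   pi_neg(y) / pi_old(y) = (1 - r_q * pi_theta(y)/pi_old(y)) / (1 - r_q).
            ratio = math.exp(lt - lo)
            neg_ratio = max(1.0 - r_q * ratio, 1e-6) / (1.0 - r_q)
            losses.append(-math.log(neg_ratio))
    return sum(losses) / len(losses)
```

Note that raising the current policy's probability on a negative answer increases the loss, so the model is pushed away from its own mistakes using only supervised-style likelihood terms.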
Figure: A spectrum of online algorithms for LLM fine-tuning. NFT bridges reinforcement learning and supervised learning methods by leveraging negative feedback through supervision.
Figure: Comparison of the released NFT-7B with other zero-style math models of the Qwen series.
Figure: Validation accuracy averaged over 6 math benchmarks for 7B (left) and 32B (right) training. For 7B experiments, we report mean ± std across 3-4 independent runs.
The following related papers were recommended by the Semantic Scholar API:
- A Minimalist Approach to LLM Reasoning: from Rejection Sampling to Reinforce (2025)
- Learning to Reason under Off-Policy Guidance (2025)
- RL in Name Only? Analyzing the Structural Assumptions in RL post-training for LLMs (2025)
- Trust, But Verify: A Self-Verification Approach to Reinforcement Learning with Verifiable Rewards (2025)
- Right Question is Already Half the Answer: Fully Unsupervised LLM Reasoning Incentivization (2025)
- Training Large Language Models to Reason via EM Policy Gradient (2025)
- TTRL: Test-Time Reinforcement Learning (2025)