arxiv:2505.18116

Bridging Supervised Learning and Reinforcement Learning in Math Reasoning

Published on May 23
· Submitted by Haoxiang-Wang on May 27
Abstract

Negative-aware Fine-Tuning (NFT) enhances LLMs' math abilities using supervised learning with negative feedback, achieving performance comparable to RL methods.

AI-generated summary

Reinforcement Learning (RL) has played a central role in the recent surge of LLMs' math abilities by enabling self-improvement through binary verifier signals. In contrast, Supervised Learning (SL) is rarely considered for such verification-driven training, largely due to its heavy reliance on reference answers and inability to reflect on mistakes. In this work, we challenge the prevailing notion that self-improvement is exclusive to RL and propose Negative-aware Fine-Tuning (NFT) -- a supervised approach that enables LLMs to reflect on their failures and improve autonomously with no external teachers. In online training, instead of throwing away self-generated negative answers, NFT constructs an implicit negative policy to model them. This implicit policy shares its parameters with the positive LLM being optimized on positive data, enabling direct policy optimization on all of the LLM's generations. We conduct experiments on 7B and 32B models in math reasoning tasks. Results consistently show that, by additionally leveraging negative feedback, NFT significantly improves over SL baselines like Rejection sampling Fine-Tuning, matching or even surpassing leading RL algorithms like GRPO and DAPO. Furthermore, we demonstrate that NFT and GRPO are actually equivalent in strict-on-policy training, even though they originate from entirely different theoretical foundations. Our experiments and theoretical findings bridge the gap between SL and RL methods in binary-feedback learning systems.
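
As a concrete reading of the construction above (a paraphrase of the abstract in our own notation, not an equation taken from the paper): if a fraction $r$ of the answers sampled for a question are verified correct, the sampling policy can be viewed as a mixture of the positive policy $\pi^{+}_{\theta}$ (the LLM being optimized on correct answers) and an implicit negative policy $\pi^{-}$, which then needs no extra parameters:

```latex
% Hedged paraphrase of the mixture idea in the abstract; the notation is ours.
% \pi_{old} : the policy that generated the answers (on-policy sampling)
% \pi^{+}   : the positive policy, i.e. the LLM optimized on verified-correct answers
% \pi^{-}   : the implicit negative policy used to model incorrect answers
% r         : fraction of sampled answers for this question that are verified correct
\pi_{\mathrm{old}}(y \mid x) = r\,\pi^{+}_{\theta}(y \mid x) + (1-r)\,\pi^{-}(y \mid x)
\quad\Longrightarrow\quad
\pi^{-}(y \mid x) = \frac{\pi_{\mathrm{old}}(y \mid x) - r\,\pi^{+}_{\theta}(y \mid x)}{1-r}.
```

Under this reading, maximizing the likelihood of incorrect answers under $\pi^{-}$ back-propagates only through $\pi^{+}_{\theta}$, which is what lets a purely supervised objective learn from negative feedback.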

Community

Paper submitter

Project page: https://research.nvidia.com/labs/dir/Negative-aware-Fine-Tuning/

We propose Negative-aware Fine-Tuning (NFT), a supervised learning (SL) method for improving LLMs' math-reasoning abilities with no external teachers.

  • As an SL method, NFT outperforms leading RL algorithms like GRPO and DAPO in 7B model experiments and performs comparably to DAPO in the 32B setting.
  • NFT allows directly optimizing LLMs on negative data, and thereby significantly outperforms other SL baselines such as Rejection sampling Fine-Tuning (RFT); a minimal illustrative sketch of this idea follows the figures below.
  • NFT is equivalent to GRPO when training is strictly on-policy, despite their entirely different theoretical foundations.

Our findings show that self-reflective improvement is not exclusive to RL algorithms. Rather, the current gap between SL and RL methods stems from how effectively they leverage negative data.
Figure (main_relation_NFT.jpg): A spectrum of online algorithms for LLM fine-tuning. NFT bridges reinforcement learning and supervised learning methods by leveraging negative feedback through supervision.

Figure (main_compare_NFT.jpg): Comparison of the released NFT-7B with other zero-style math models from the Qwen series.

Figure (valacc_curve_NFT.jpg): Validation accuracy averaged over 6 math benchmarks for 7B (left) and 32B (right) training. For the 7B experiments, we report mean ± std across 3-4 independent runs.
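
For concreteness, here is a minimal, self-contained sketch (referenced from the second bullet above) of how a single supervised model could be fit to both correct and incorrect self-generated answers, using the mixture reading of the abstract. Everything below (the function name, the sequence-level probabilities, the clamping constant, the toy inputs) is an illustrative assumption, not the paper's implementation; the exact NFT objective is given in the paper and on the project page.

```python
# Illustrative sketch only: NOT the released NFT implementation.
# Idea: view the sampling policy as a mixture
#     pi_old = r * pi_pos + (1 - r) * pi_neg,
# where r is the per-question fraction of verified-correct answers and pi_pos
# is the model being trained. Solving for pi_neg gives an implicit negative
# policy, so incorrect answers can also be fit by maximum likelihood without
# training a separate negative model.
import torch


def nft_style_loss(logp_pos: torch.Tensor,    # log pi_theta(answer | question), shape [B]
                   logp_old: torch.Tensor,    # log pi_old(answer | question), shape [B]
                   is_correct: torch.Tensor,  # 1.0 if the verifier marked the answer correct
                   r: torch.Tensor,           # positive rate of each sample's question, in (0, 1)
                   eps: float = 1e-6) -> torch.Tensor:
    # Correct answers: standard supervised (maximum-likelihood) term.
    pos_term = -logp_pos

    # Incorrect answers: maximize likelihood under the implicit negative policy
    #     pi_neg = (pi_old - r * pi_theta) / (1 - r),
    # clamped for numerical safety (logp_old comes from the frozen sampling policy,
    # so gradients flow only through pi_theta).
    p_pos = logp_pos.exp()
    p_old = logp_old.exp()
    p_neg = ((p_old - r * p_pos) / (1.0 - r)).clamp_min(eps)
    neg_term = -p_neg.log()

    # Pick the right term per sample and average over the batch.
    loss = torch.where(is_correct.bool(), pos_term, neg_term)
    return loss.mean()


# Toy usage: four sampled answers to one question, half verified correct (r = 0.5).
logp_pos = torch.tensor([-2.0, -3.5, -4.0, -2.5])
logp_old = torch.tensor([-2.1, -3.4, -3.9, -2.6])
is_correct = torch.tensor([1.0, 0.0, 1.0, 0.0])
r = torch.full((4,), 0.5)
print(nft_style_loss(logp_pos, logp_old, is_correct, r))
```

A real implementation would work per token with probability ratios rather than sequence-level probabilities (which underflow for long answers) and would handle questions whose sampled answers are all correct or all wrong; the sketch omits those details for brevity.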


