arxiv:2507.00432

Does Math Reasoning Improve General LLM Capabilities? Understanding Transferability of LLM Reasoning

Published on Jul 1 · Submitted by yuexiang96 on Jul 2
#2 Paper of the day
Authors:
Abstract

AI-generated summary: Reinforcement learning-tuned models outperform supervised fine-tuned models in generalizing mathematical problem-solving abilities to other domains, indicating a need to re-evaluate training methods for reasoning models.

Math reasoning has become the poster child of progress in large language models (LLMs), with new models rapidly surpassing human-level performance on benchmarks like MATH and AIME. But as math leaderboards improve week by week, it is worth asking: do these gains reflect broader problem-solving ability or just narrow overfitting? To answer this question, we evaluate over 20 open-weight reasoning-tuned models across a broad suite of tasks, including math, scientific QA, agent planning, coding, and standard instruction-following. We surprisingly find that most models that succeed in math fail to transfer their gains to other domains. To rigorously study this phenomenon, we conduct controlled experiments on Qwen3-14B models using math-only data but different tuning methods. We find that reinforcement learning (RL)-tuned models generalize well across domains, while supervised fine-tuning (SFT)-tuned models often forget general capabilities. Latent-space representation and token-space distribution shift analyses reveal that SFT induces substantial representation and output drift, while RL preserves general-domain structure. Our results suggest a need to rethink standard post-training recipes, particularly the reliance on SFT-distilled data for advancing reasoning models.
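
As a concrete illustration of the token-space distribution shift analysis mentioned above, here is a minimal sketch, assuming Hugging Face transformers and PyTorch. It measures the mean per-token KL divergence between a base model's and a tuned model's next-token distributions on held-out general-domain text. This is not the paper's released code, and "your-org/qwen3-14b-math" is a hypothetical placeholder checkpoint name.

```python
# Sketch: token-space distribution shift as mean per-token KL(base || tuned)
# on general-domain text. Checkpoint names are assumptions/placeholders.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "Qwen/Qwen3-14B"             # assumed base checkpoint
TUNED = "your-org/qwen3-14b-math"   # hypothetical math-tuned checkpoint

tokenizer = AutoTokenizer.from_pretrained(BASE)
base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16).eval()
tuned = AutoModelForCausalLM.from_pretrained(TUNED, torch_dtype=torch.bfloat16).eval()

@torch.no_grad()
def mean_token_kl(text: str) -> float:
    """Average KL(base || tuned) over the next-token distributions for `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    logp_base = F.log_softmax(base(ids).logits.float(), dim=-1)
    logp_tuned = F.log_softmax(tuned(ids).logits.float(), dim=-1)
    # F.kl_div(input, target, log_target=True) computes KL(target || input),
    # so this is KL(base || tuned), summed over the vocab and averaged over tokens.
    kl = F.kl_div(logp_tuned, logp_base, log_target=True, reduction="none")
    return kl.sum(-1).mean().item()

# Larger values on non-math prompts would indicate the tuning shifted the
# model's general-domain output distribution away from the base model.
print(mean_token_kl("Summarize the plot of Hamlet in two sentences."))
```
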

Community

Paper author · Paper submitter

Takeaways:

  • We evaluate over 20 open-weight reasoning-tuned models and surprisingly find that most models that succeed in math fail to transfer their gains to other domains.
  • We find that reinforcement learning (RL)-tuned models generalize well across domains, while supervised fine-tuning (SFT)-tuned models often forget general capabilities.
  • Latent-space representation and token-space distribution shift analyses reveal that SFT induces substantial representation and output drift, while RL preserves general-domain structure.
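
The latent-space side of this analysis can be illustrated with a similarly minimal sketch, under the same assumptions as the example above (placeholder checkpoint names, not the paper's artifacts): it computes the mean (1 - cosine similarity) between the final-layer hidden states of the base and tuned models on a general-domain prompt as a rough proxy for representation drift.

```python
# Sketch: latent-space representation drift as mean (1 - cosine similarity)
# between last-layer hidden states of a base and a tuned model.
# Checkpoint names are assumptions/placeholders.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "Qwen/Qwen3-14B"             # assumed base checkpoint
TUNED = "your-org/qwen3-14b-math"   # hypothetical math-tuned checkpoint

tok = AutoTokenizer.from_pretrained(BASE)
base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16).eval()
tuned = AutoModelForCausalLM.from_pretrained(TUNED, torch_dtype=torch.bfloat16).eval()

@torch.no_grad()
def representation_drift(prompt: str) -> float:
    """Per-token (1 - cosine similarity) of last-layer hidden states, averaged."""
    ids = tok(prompt, return_tensors="pt").input_ids
    h_base = base(ids, output_hidden_states=True).hidden_states[-1].squeeze(0).float()
    h_tuned = tuned(ids, output_hidden_states=True).hidden_states[-1].squeeze(0).float()
    cos = F.cosine_similarity(h_base, h_tuned, dim=-1)  # [seq_len]
    return (1.0 - cos).mean().item()

# Under this proxy, a small value means the tuned model's general-domain
# representations stayed close to the base model's.
print(representation_drift("Draft a polite email rescheduling a meeting to Friday."))
```
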


