Scaling Reasoning, Losing Control: Evaluating Instruction Following in Large Reasoning Models
Abstract
An empirical analysis using MathIF reveals a tension between scaling reasoning capability and maintaining instruction adherence in large language models.
Instruction-following is essential for aligning large language models (LLMs) with user intent. While recent reasoning-oriented models exhibit impressive performance on complex mathematical problems, their ability to adhere to natural language instructions remains underexplored. In this work, we introduce MathIF, a dedicated benchmark for evaluating instruction-following in mathematical reasoning tasks. Our empirical analysis reveals a consistent tension between scaling up reasoning capacity and maintaining controllability, as models that reason more effectively often struggle to comply with user directives. We find that models tuned on distilled long chains-of-thought or trained with reasoning-oriented reinforcement learning often degrade in instruction adherence, especially when generation length increases. Furthermore, we show that even simple interventions can partially recover obedience, though at the cost of reasoning performance. These findings highlight a fundamental tension in current LLM training paradigms and motivate the need for more instruction-aware reasoning models. We release the code and data at https://github.com/TingchenFu/MathIF.
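For context, the constraints in a benchmark like this are typically verified programmatically rather than by human judges. The sketch below shows what such a checker could look like; the constraint names (`max_words`, `must_include`, `boxed_answer`) and the `check_constraints` helper are illustrative assumptions, not MathIF's actual implementation (see the linked repository for that).

```python
import re

def check_constraints(response: str, constraints: dict) -> dict:
    """Illustrative (hypothetical) checker for simple, verifiable
    instruction constraints of the kind a MathIF-style benchmark
    might attach to a math problem."""
    results = {}
    # Length constraint: response must stay within a word budget.
    if "max_words" in constraints:
        results["max_words"] = len(response.split()) <= constraints["max_words"]
    # Keyword constraint: a required phrase must appear verbatim.
    if "must_include" in constraints:
        results["must_include"] = constraints["must_include"] in response
    # Format constraint: the final answer must be boxed, e.g. \boxed{42}.
    if "boxed_answer" in constraints:
        results["boxed_answer"] = bool(re.search(r"\\boxed\{.+?\}", response))
    return results

# Example usage with made-up constraints:
resp = "The sum is \\boxed{12}."
print(check_constraints(resp, {"max_words": 50, "boxed_answer": True}))
# -> {'max_words': True, 'boxed_answer': True}
```

A checker of this form makes compliance binary and automatic, which is what allows instruction adherence to be compared across models and prompt lengths at scale.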
Community
The following papers, recommended by the Semantic Scholar API, are similar to this paper:
- Not All Thoughts are Generated Equal: Efficient LLM Reasoning via Multi-Turn Reinforcement Learning (2025)
- GPG: A Simple and Strong Reinforcement Learning Baseline for Model Reasoning (2025)
- RL Tango: Reinforcing Generator and Verifier Together for Language Reasoning (2025)
- Do Not Let Low-Probability Tokens Over-Dominate in RL for LLMs (2025)
- RL of Thoughts: Navigating LLM Reasoning with Inference-time Reinforcement Learning (2025)
- Exploring the Potential of Offline RL for Reasoning in LLMs: A Preliminary Study (2025)
- When Thinking Fails: The Pitfalls of Reasoning for Instruction-Following in LLMs (2025)