IFDecorator: Wrapping Instruction Following Reinforcement Learning with Verifiable Rewards
Abstract
Instruction Following Decorator enhances RLVR by improving sample efficiency and intent alignment and by reducing reward hacking in large language models.
Reinforcement Learning with Verifiable Rewards (RLVR) improves instruction following capabilities of large language models (LLMs), but suffers from training inefficiency due to inadequate difficulty assessment. Moreover, RLVR is prone to over-optimization, where LLMs exploit verification shortcuts without aligning to the actual intent of user instructions. We introduce Instruction Following Decorator (IFDecorator), a framework that wraps RLVR training into a robust and sample-efficient pipeline. It consists of three components: (1) a cooperative-adversarial data flywheel that co-evolves instructions and hybrid verifications, generating progressively more challenging instruction-verification pairs; (2) IntentCheck, a bypass module enforcing intent alignment; and (3) trip wires, a diagnostic mechanism that detects reward hacking via trap instructions, which trigger and capture shortcut exploitation behaviors. Our Qwen2.5-32B-Instruct-IFDecorator achieves 87.43% accuracy on IFEval, outperforming larger proprietary models such as GPT-4o. Additionally, we demonstrate substantial improvements on FollowBench while preserving general capabilities. Our trip wires show significant reductions in reward hacking rates. We will release models, code, and data for future research.
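To make the three components concrete, here is a minimal conceptual sketch of how a reward wrapper in the spirit of IFDecorator could combine rule-based verification, an intent-alignment gate, and trap-instruction trip wires. This is not the authors' implementation; all names (`VerifiablePrompt`, `intent_check`, `TripWireLog`) are hypothetical illustrations of the mechanisms described in the abstract.

```python
# Hypothetical sketch: a verifiable reward wrapped with an intent check and
# trip-wire trap instructions for detecting reward hacking. Not the paper's code.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class VerifiablePrompt:
    instruction: str
    constraint_checks: List[Callable[[str], bool]]  # rule-based verifications
    is_trap: bool = False                           # trip-wire trap instruction


@dataclass
class TripWireLog:
    triggered: int = 0
    total_traps: int = 0

    @property
    def hacking_rate(self) -> float:
        # Fraction of trap prompts where the policy passed the verifier
        # but missed the user's intent (a proxy for shortcut exploitation).
        return self.triggered / self.total_traps if self.total_traps else 0.0


def wrapped_reward(prompt: VerifiablePrompt,
                   response: str,
                   intent_check: Callable[[str, str], bool],
                   log: TripWireLog) -> float:
    """Combine verifiable constraint checks with an intent-alignment gate.

    Trap prompts contribute no training reward; they only record whether the
    policy exploited a verification shortcut.
    """
    constraints_ok = all(check(response) for check in prompt.constraint_checks)
    intent_ok = intent_check(prompt.instruction, response)

    if prompt.is_trap:
        log.total_traps += 1
        if constraints_ok and not intent_ok:
            log.triggered += 1
        return 0.0

    # Reward only responses that satisfy both the verifier and the intent gate.
    return 1.0 if (constraints_ok and intent_ok) else 0.0
```

In this sketch, the data flywheel would supply progressively harder `VerifiablePrompt` instances, `intent_check` stands in for the IntentCheck module, and `TripWireLog.hacking_rate` is the kind of diagnostic the trip wires report.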
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- VerIF: Verification Engineering for Reinforcement Learning in Instruction Following (2025)
- Lessons from Training Grounded LLMs with Verifiable Rewards (2025)
- RLPR: Extrapolating RLVR to General Domains without Verifiers (2025)
- AlphaAlign: Incentivizing Safety Alignment with Extremely Simplified Reinforcement Learning (2025)
- URPO: A Unified Reward&Policy Optimization Framework for Large Language Models (2025)
- Checklists Are Better Than Reward Models For Aligning Language Models (2025)
- Generalizing Verifiable Instruction Following (2025)