RuleReasoner: Reinforced Rule-based Reasoning via Domain-aware Dynamic Sampling
Abstract
RuleReasoner enhances rule-based reasoning in small models through dynamic domain sampling, achieving superior performance and efficiency compared to large models.
Rule-based reasoning has been acknowledged as one of the fundamental problems in reasoning, while variations in rule formats, types, and complexity in real-world applications pose severe challenges. Recent studies have shown that large reasoning models (LRMs) have remarkable reasoning capabilities, and their performance is substantially enhanced by reinforcement learning (RL). However, it remains an open question whether small reasoning models (SRMs) can learn rule-based reasoning effectively with robust generalization across diverse tasks and domains. To address this, we introduce Reinforced Rule-based Reasoning, a.k.a. RuleReasoner, a simple yet effective method to conduct rule-based reasoning via a wide collection of curated tasks and a novel domain-aware dynamic sampling approach. Specifically, RuleReasoner resamples each training batch by updating the sampling weights of different domains based on historical rewards. This facilitates domain augmentation and flexible online learning schedules for RL, obviating the need for pre-hoc human-engineered mix-training recipes used in existing methods. Empirical evaluations on in-distribution (ID) and out-of-distribution (OOD) benchmarks reveal that RuleReasoner outperforms frontier LRMs by a significant margin (Δ4.1% average points on eight ID tasks and Δ10.4% average points on three OOD tasks over OpenAI-o1). Notably, our approach also exhibits higher computational efficiency compared to prior dynamic sampling methods for RL.
Community
Code: https://github.com/bigai-nlco/RuleReasoner
RuleReasoner: Reinforced Rule-based Reasoning via Domain-aware Dynamic Sampling
We introduce RuleReasoner, a simple yet powerful method that brings reinforced rule-based reasoning to small reasoning models (SRMs), and it works. No need for massive scale or handcrafted training curricula.
Highlights:
- Rule-centric data curation: A wide-ranging dataset covering 8 diverse rule-reasoning tasks with varying formats (explicit/implicit), logic types (deductive/inductive), and reasoning depths.
- RLVR for rule-based reasoning: Models are rewarded for rule-validity rather than for imitating reference traces, promoting structural exploration and generalization (see the reward sketch below this list).
- Dynamic domain sampling: Training batches are re-weighted per domain using historical reward signals, adapting in real time to task difficulty and learning progress (see the sampler sketch below).
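To make the RLVR idea concrete, here is a minimal sketch of a verifiable, rule-validity reward. The `\boxed{...}` answer format and the function names (`extract_final_answer`, `rule_validity_reward`) are illustrative assumptions, not the paper's exact implementation; the point is that reward comes from a checkable final answer rather than imitation of a reference trace.

```python
# Minimal sketch of an RLVR-style verifiable reward for rule-based reasoning.
# ASSUMPTIONS: answers appear in a \boxed{...} span; names are illustrative.
import re

def extract_final_answer(completion: str) -> str | None:
    """Pull the model's final answer out of a \\boxed{...} span, if present."""
    match = re.search(r"\\boxed\{([^}]*)\}", completion)
    return match.group(1).strip().lower() if match else None

def rule_validity_reward(completion: str, gold_answer: str) -> float:
    """Binary verifiable reward: 1.0 iff the parsed answer matches the label.

    The model is rewarded for reaching a verifiably correct conclusion,
    not for reproducing any particular reasoning trace.
    """
    predicted = extract_final_answer(completion)
    if predicted is None:  # unparseable output earns no reward
        return 0.0
    return 1.0 if predicted == gold_answer.strip().lower() else 0.0
```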
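And here is a minimal sketch of the domain-aware dynamic sampling idea, assuming scalar rollout rewards in [0, 1]. The class name, reward window, and temperature knob are illustrative choices, not the paper's exact algorithm: domains with low recent rewards (i.e., harder domains) receive larger sampling weights in the next batch.

```python
# Sketch of domain-aware dynamic sampling: re-weight each training batch
# by per-domain historical rewards. ASSUMPTION: rewards lie in [0, 1].
import random
from collections import deque

class DomainAwareSampler:
    def __init__(self, domains, window=128, temperature=1.0):
        self.domains = list(domains)
        # Keep a sliding window of recent rewards per domain.
        self.history = {d: deque(maxlen=window) for d in self.domains}
        self.temperature = temperature

    def update(self, domain, reward):
        """Record the verified reward of one rollout from `domain`."""
        self.history[domain].append(reward)

    def weights(self):
        """Weight each domain by (1 - mean recent reward): low-reward
        (hard) domains get proportionally more of the next batch."""
        raw = []
        for d in self.domains:
            h = self.history[d]
            mean_r = sum(h) / len(h) if h else 0.0
            raw.append((1.0 - mean_r) ** (1.0 / self.temperature))
        total = sum(raw)
        if total == 0:  # every domain saturated: fall back to uniform
            return [1.0 / len(raw)] * len(raw)
        return [w / total for w in raw]

    def sample_batch(self, pools, batch_size):
        """Draw a batch across domains according to the current weights.
        `pools` maps domain -> list of training examples."""
        chosen = random.choices(self.domains, weights=self.weights(), k=batch_size)
        return [random.choice(pools[d]) for d in chosen]
```

In a training loop, one would call `update(domain, reward)` after each verified rollout and `sample_batch(...)` to assemble the next RL batch, so the domain mix shifts online toward wherever recent rewards are low, with no hand-engineered mix-training recipe.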
Results:
- Outperforms large models: RuleReasoner-8B beats OpenAI-o1, Claude 3.7 Sonnet, and DeepSeek-R1 by up to +14% ID accuracy and +49% OOD accuracy while using fewer training steps.
- Efficient small model performance: RuleReasoner-4B achieves 78.3% pass@1 on OOD tasks, showing that small models can be strong, generalizable reasoners.
- Better sample utilization: Faster convergence and higher reward density than traditional RLVR baselines.
Similar papers recommended by the Semantic Scholar API:
- KDRL: Post-Training Reasoning LLMs via Unified Knowledge Distillation and Reinforcement Learning (2025)
- CPGD: Toward Stable Rule-based Reinforcement Learning for Language Models (2025)
- General-Reasoner: Advancing LLM Reasoning Across All Domains (2025)
- ProRL: Prolonged Reinforcement Learning Expands Reasoning Boundaries in Large Language Models (2025)
- Advancing Multimodal Reasoning: From Optimized Cold Start to Staged Reinforcement Learning (2025)
- Beyond Accuracy: Dissecting Mathematical Reasoning for LLMs Under Reinforcement Learning (2025)
- Incentivizing Strong Reasoning from Weak Supervision (2025)
Thanks, very interesting