arxiv:2506.08672

RuleReasoner: Reinforced Rule-based Reasoning via Domain-aware Dynamic Sampling

Published on Jun 10
· Submitted by zlzheng on Jun 11
#3 Paper of the day

Abstract

RuleReasoner enhances rule-based reasoning in small models through dynamic domain sampling, achieving superior performance and efficiency compared to large models.

AI-generated summary

Rule-based reasoning has been acknowledged as one of the fundamental problems in reasoning, while deviations in rule formats, types, and complexity in real-world applications pose severe challenges. Recent studies have shown that large reasoning models (LRMs) have remarkable reasoning capabilities, and their performance is substantially enhanced by reinforcement learning (RL). However, it remains an open question whether small reasoning models (SRMs) can learn rule-based reasoning effectively with robust generalization across diverse tasks and domains. To address this, we introduce Reinforced Rule-based Reasoning, a.k.a. RuleReasoner, a simple yet effective method to conduct rule-based reasoning via a wide collection of curated tasks and a novel domain-aware dynamic sampling approach. Specifically, RuleReasoner resamples each training batch by updating the sampling weights of different domains based on historical rewards. This facilitates domain augmentation and flexible online learning schedules for RL, obviating the need for pre-hoc human-engineered mix-training recipes used in existing methods. Empirical evaluations on in-distribution (ID) and out-of-distribution (OOD) benchmarks reveal that RuleReasoner outperforms frontier LRMs by a significant margin (Δ4.1% average points on eight ID tasks and Δ10.4% average points on three OOD tasks over OpenAI-o1). Notably, our approach exhibits higher computational efficiency than prior dynamic sampling methods for RL.

Community

Paper author · Paper submitter

📊 Code: https://github.com/bigai-nlco/RuleReasoner

🧠 RuleReasoner: Reinforced Rule-based Reasoning via Domain-aware Dynamic Sampling
We introduce RuleReasoner, a simple yet powerful method that brings reinforced rule-based reasoning to small reasoning models (SRMs), and it works: no need for massive scale or handcrafted training curricula.

🔍 Highlights:

  • Rule-centric data curation: A wide-ranging dataset covering 8 diverse rule-reasoning tasks, with varying formats (explicit/implicit), logic types (deductive/inductive), and reasoning depths.
  • RLVR for rule-based reasoning: Models are rewarded for rule-validity, not imitation, promoting structural exploration and generalization (see the reward sketch after this list).
  • Dynamic domain sampling: Training batches are re-weighted per domain using historical reward signals, adapting in real time to task difficulty and learning progress (see the sampler sketch after this list).
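
To make the reward design concrete, here is a minimal sketch of a rule-validity reward, assuming the model is prompted to emit its final answer inside a \boxed{...} span; the function name, the answer format, and the binary scoring are illustrative assumptions, not the paper's exact verifier.

```python
import re

def rule_validity_reward(completion: str, gold_answer: str) -> float:
    r"""Verifiable reward: score the model's final answer against the
    gold label rather than rewarding imitation of reference text.

    Assumption: the final answer appears inside \boxed{...}; the
    paper's actual parsing and verification may differ.
    """
    match = re.search(r"\\boxed\{([^}]*)\}", completion)
    if match is None:
        return 0.0  # unparseable output earns no reward
    predicted = match.group(1).strip().lower()
    return 1.0 if predicted == gold_answer.strip().lower() else 0.0
```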

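And here is a minimal sketch of the domain-aware dynamic sampler: per-domain sampling weights are recomputed from a rolling window of historical rewards, so lower-reward (harder) domains are drawn more often in the next batch. The class and parameter names are hypothetical, and the softmax over (1 - mean reward) is one plausible weighting, not necessarily the paper's exact formula.

```python
import math
import random
from collections import deque

class DomainAwareSampler:
    """Resample training batches with per-domain weights derived from
    historical rewards (hypothetical API; the weighting scheme may
    differ from the paper's)."""

    def __init__(self, domains, history_len=128, temperature=1.0):
        self.domains = list(domains)
        self.temperature = temperature
        # Rolling window of recent rewards per domain.
        self.history = {d: deque(maxlen=history_len) for d in self.domains}

    def record(self, domain, reward):
        """Log the reward of one rollout for its domain."""
        self.history[domain].append(reward)

    def weights(self):
        """Softmax over (1 - mean recent reward): domains where the
        model currently earns low reward get larger sampling weight."""
        means = []
        for d in self.domains:
            h = self.history[d]
            means.append(sum(h) / len(h) if h else 0.5)  # 0.5 prior
        logits = [(1.0 - m) / self.temperature for m in means]
        z = sum(math.exp(l) for l in logits)
        return [math.exp(l) / z for l in logits]

    def sample_batch(self, pools, batch_size):
        """pools: dict mapping domain name -> list of training examples."""
        picks = random.choices(self.domains, weights=self.weights(), k=batch_size)
        return [random.choice(pools[d]) for d in picks]
```

Because the weights update online from the reward stream, the training mix adapts as domains are mastered, which is what removes the need for a hand-tuned mix-training recipe.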
📈 Results:

  • Outperforms large models: RuleReasoner-8B beats OpenAI-o1, Claude 3.7 Sonnet, and DeepSeek-R1 by up to +14% ID and +49% OOD accuracy, using fewer training steps.
  • Efficient small model performance: RuleReasoner-4B achieves 78.3% pass@1 on OOD tasks, showing that small models can be strong, generalizable reasoners.
  • Better sample utilization: Faster convergence and higher reward density than traditional RLVR baselines.

Thanks, very interesting.

Models citing this paper: 2
Datasets citing this paper: 1
Spaces citing this paper: 1
Collections including this paper: 3