arxiv:2506.18254

RLPR: Extrapolating RLVR to General Domains without Verifiers

Published on Jun 23, 2025 · Submitted by Yirany on Jun 24, 2025

Abstract

AI-generated summary: RLPR, a verifier-free framework that uses an LLM's own token probability scores as reward signals, improves reasoning across both general and mathematical domains and outperforms competing methods on a range of benchmarks.

Reinforcement Learning with Verifiable Rewards (RLVR) demonstrates promising potential in advancing the reasoning capabilities of LLMs. However, its success remains largely confined to mathematical and code domains. This limitation stems primarily from the heavy reliance on domain-specific verifiers, which results in prohibitive complexity and limited scalability. To address this challenge, our key observation is that an LLM's intrinsic probability of generating a correct free-form answer directly indicates its own evaluation of the reasoning reward (i.e., how well the reasoning process leads to the correct answer). Building on this insight, we propose RLPR, a simple verifier-free framework that extrapolates RLVR to broader general domains. RLPR uses the LLM's own token probability scores for reference answers as the reward signal and maximizes the expected reward during training. We find that addressing the high variance of this noisy probability reward is crucial to make it work, and propose prob-to-reward and stabilizing methods to ensure a precise and stable reward from LLM intrinsic probabilities. Comprehensive experiments on four general-domain benchmarks and three mathematical benchmarks show that RLPR consistently improves reasoning capabilities in both areas for Gemma-, Llama-, and Qwen-based models. Notably, RLPR outperforms the concurrent VeriFree by 7.6 points on TheoremQA and 7.5 points on Minerva, and even surpasses the strong verifier-model-dependent approach General-Reasoner by 1.6 average points across seven benchmarks.
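To make the reward concrete, below is a minimal sketch (not the authors' released code) of one way such a probability-based reward could be computed with PyTorch and Hugging Face transformers: the reference answer is appended to the prompt plus the model's own reasoning, and the reward is taken as the mean per-token probability the policy assigns to the reference-answer tokens. The model name and the `probability_reward` helper are illustrative placeholders.

```python
# Minimal sketch of a verifier-free, probability-based reward in the spirit of RLPR.
# Assumption (not taken from the paper's code release): the reward is the mean
# per-token probability the policy assigns to the reference answer, conditioned
# on the prompt and the model's own reasoning.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-7B-Instruct"  # placeholder; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model.eval()

@torch.no_grad()
def probability_reward(prompt: str, reasoning: str, reference_answer: str) -> float:
    """Mean token probability of the reference answer given prompt + reasoning."""
    context_ids = tokenizer(prompt + reasoning, return_tensors="pt").input_ids
    answer_ids = tokenizer(reference_answer, add_special_tokens=False,
                           return_tensors="pt").input_ids
    input_ids = torch.cat([context_ids, answer_ids], dim=1)

    logits = model(input_ids).logits                  # (1, seq_len, vocab)
    answer_len = answer_ids.shape[1]
    # Logits at position t predict token t + 1, so shift the slice by one.
    answer_logits = logits[:, -answer_len - 1:-1, :]
    probs = torch.softmax(answer_logits.float(), dim=-1)
    token_probs = probs.gather(-1, answer_ids.unsqueeze(-1)).squeeze(-1)
    return token_probs.mean().item()
```

In an RLPR-style setup this scalar would stand in for the binary verifier score inside the policy-optimization objective; the paper's additional prob-to-reward transformation and stabilization steps are omitted from this sketch.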

Community

Paper author · Paper submitter

We demonstrate the effectiveness of RLPR on models from the Gemma, Llama, and Qwen series. RLPR surpasses RLVR on all three model families and also surpasses methods that rely on model-based verifiers.

🤗 All code, data, and models are open-sourced for the community!


Paper author · Paper submitter

We manually analyze reward quality and observe that our proposed probability-based reward (PR) yields higher-quality rewards than naive likelihood, rule-based verifiers, and even model-based verifiers.
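As a rough illustration of why a mean-probability reward can behave better than naive sequence likelihood (the exact aggregation and debiasing used in the paper may differ), the toy comparison below shows how a product of token probabilities collapses as the reference answer gets longer, while the mean stays comparable:

```python
# Toy contrast between naive sequence likelihood and a mean-probability reward.
# Illustrative only; the paper's prob-to-reward transformation may differ.
import torch

def naive_likelihood(token_probs: torch.Tensor) -> float:
    # Product of token probabilities: decays with answer length, so long
    # reference answers get small rewards regardless of reasoning quality.
    return token_probs.prod().item()

def mean_prob_reward(token_probs: torch.Tensor) -> float:
    # Mean token probability: bounded in [0, 1] and far less length-sensitive.
    return token_probs.mean().item()

short_answer = torch.tensor([0.9, 0.8])
long_answer = torch.tensor([0.9, 0.8, 0.9, 0.85, 0.9, 0.8])

print(naive_likelihood(short_answer), naive_likelihood(long_answer))  # 0.72 vs ~0.40
print(mean_prob_reward(short_answer), mean_prob_reward(long_answer))  # 0.85 vs ~0.86
```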


Paper author · Paper submitter

With only one forward pass, we can obtain high-quality rewards for responses across various domains without requiring verifiers.
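Below is a sketch of how a batch of rollouts could be scored with a single forward pass, again as an assumption-laden illustration rather than the released implementation; the padding scheme and the `batch_probability_rewards` helper are made up for this example.

```python
# Sketch: score many (prompt + reasoning, reference answer) pairs with a single
# forward pass. Illustrative; not the authors' released implementation.
import torch

@torch.no_grad()
def batch_probability_rewards(contexts, reference_answers, tokenizer, model):
    """Mean token probability of each reference answer given its context."""
    pad_id = (
        tokenizer.pad_token_id
        if tokenizer.pad_token_id is not None
        else tokenizer.eos_token_id
    )

    seqs, answer_lens = [], []
    for ctx, ans in zip(contexts, reference_answers):
        ctx_ids = tokenizer(ctx).input_ids
        ans_ids = tokenizer(ans, add_special_tokens=False).input_ids
        seqs.append(ctx_ids + ans_ids)
        answer_lens.append(len(ans_ids))

    # Right-pad every row to the same length and mask out the padding.
    max_len = max(len(s) for s in seqs)
    input_ids = torch.tensor([s + [pad_id] * (max_len - len(s)) for s in seqs])
    attention_mask = torch.zeros_like(input_ids)
    for i, s in enumerate(seqs):
        attention_mask[i, : len(s)] = 1

    logits = model(input_ids=input_ids, attention_mask=attention_mask).logits
    probs = torch.softmax(logits.float(), dim=-1)

    rewards = []
    for i, (s, n) in enumerate(zip(seqs, answer_lens)):
        end = len(s)  # the reference answer occupies positions [end - n, end)
        answer_ids = input_ids[i, end - n : end]
        # Logits at position t predict token t + 1, hence the one-step shift.
        token_probs = (
            probs[i, end - n - 1 : end - 1]
            .gather(-1, answer_ids.unsqueeze(-1))
            .squeeze(-1)
        )
        rewards.append(token_probs.mean().item())
    return rewards
```

Each returned value is a scalar in [0, 1] per rollout; the paper's debiasing and stabilization steps would be applied on top of such raw probabilities.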



Models citing this paper 3

Datasets citing this paper 2

Collections including this paper 3