arxiv:2506.16507

Robust Reward Modeling via Causal Rubrics

Published on Jun 19
· Submitted by pragsri8 on Jun 24
Abstract

AI-generated summary: Crome, a novel reward-modeling framework built on causal and neutral data augmentations, significantly improves the robustness and accuracy of reward models and mitigates reward hacking.

Reward models (RMs) are fundamental to aligning Large Language Models (LLMs) via human feedback, yet they often suffer from reward hacking. They tend to latch onto superficial or spurious attributes, such as response length or formatting, mistaking these cues, learned from correlations in the training data, for the true causal drivers of quality (e.g., factuality, relevance). This occurs because standard training objectives struggle to disentangle these factors, leading to brittle RMs and misaligned policies. We introduce Crome (Causally Robust Reward Modeling), a novel framework grounded in an explicit causal model and designed to mitigate reward hacking. Crome employs the following synthetic targeted augmentations during training: (1) Causal Augmentations, pairs that differ along specific causal attributes, to enforce sensitivity along each causal attribute individually, and (2) Neutral Augmentations, tie-label pairs varying primarily in spurious attributes, to enforce invariance along spurious attributes. Notably, our augmentations are produced without any knowledge of spurious factors, via answer interventions only along causal rubrics, which are identified by querying an oracle LLM. Empirically, Crome significantly outperforms standard baselines on RewardBench, improving average accuracy by up to 5.4% and achieving gains of up to 13.2% and 7.2% in specific categories. The robustness of Crome is further attested by the consistent gains obtained in a Best-of-N inference setting as N increases, across various benchmarks, including the popular RewardBench (covering chat, chat-hard, safety, and reasoning tasks), the safety-focused WildGuardTest, and the reasoning-specific GSM8k.
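The abstract does not spell out the training objective, but one standard way to consume such pairs is a Bradley-Terry preference loss, with the tie-label (neutral) pairs pulling the two rewards together. Below is a minimal PyTorch sketch under that assumption; the squared-gap tie term and all names are illustrative, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def crome_style_loss(r_chosen, r_rejected, is_tie):
    """Pairwise reward-model loss over a batch of scored pairs.

    r_chosen, r_rejected: (B,) reward scores from the RM.
    is_tie: (B,) bool mask, True for neutral (tie-label) pairs.
    """
    # Preference pairs (originals + causal augmentations): standard
    # Bradley-Terry objective enforcing r_chosen > r_rejected.
    pref_loss = -F.logsigmoid(r_chosen - r_rejected)

    # Neutral (tie) pairs: penalize any reward gap, pushing the RM to be
    # invariant to the spurious attribute that differs within the pair.
    # (The squared-gap tie term is an assumption, not the paper's choice.)
    tie_loss = (r_chosen - r_rejected) ** 2

    return torch.where(is_tie, tie_loss, pref_loss).mean()
```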

Community


Reward hacking arises from unwanted correlations between spurious features and the reward label during training. However, spurious features vary widely, and it is not always possible to determine in advance which kinds of spuriousness cause the deterioration.

We address this issue with our method, CROME.

CROME provides a novel data-augmentation strategy for training reward models that are robust to spurious features. Salient features:

✅ We do not assume any knowledge of the type of spuriousness.
✅ We rely only on causal rubrics elicited from an oracle LLM.
✅ We perturb good and bad answers along specific causal rubrics to create these augmentations (a sketch follows this list).
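To make the recipe above concrete, here is a hypothetical Python sketch of the causal side: query an oracle LLM for the causal rubrics of a question, then rewrite each answer along one rubric at a time to form preference pairs. The `query_llm` helper and all prompt templates are placeholders, not the paper's exact procedure.

```python
# Hypothetical sketch of the causal-augmentation recipe. `query_llm`
# stands in for any chat-completion client; prompts and the rubric
# format are illustrative, not the paper's exact templates.

def get_causal_rubrics(question, query_llm):
    """Ask an oracle LLM which attributes causally drive answer quality."""
    prompt = ("List, one per line, the attributes that causally determine "
              f"the quality of an answer to this question:\n{question}")
    return [r for r in query_llm(prompt).splitlines() if r.strip()]

def causal_augmentations(question, good, bad, query_llm):
    """Perturb answers along one causal rubric at a time to build
    (chosen, rejected) preference pairs that isolate that rubric."""
    pairs = []
    for rubric in get_causal_rubrics(question, query_llm):
        # Degrade the good answer along a single causal attribute...
        degraded = query_llm(f"Rewrite this answer to be worse only in "
                             f"{rubric}, changing nothing else:\n{good}")
        pairs.append((good, degraded))
        # ...and improve the bad answer along the same attribute.
        improved = query_llm(f"Rewrite this answer to be better only in "
                             f"{rubric}, changing nothing else:\n{bad}")
        pairs.append((improved, bad))
    return pairs
```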

Additionally, we use question randomization to enforce invariance to spurious features without needing to know what they are.
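One plausible reading of question randomization, sketched below, is to reattach an existing answer pair to an unrelated question: relative to that question, whatever differs between the two answers is spurious, so the pair is labeled a tie. The data layout here is assumed, not taken from the paper.

```python
import random

def neutral_augmentations(dataset, seed=0):
    """Tie-label pairs via question randomization: reattach an answer pair
    to an unrelated question, making their differences spurious w.r.t. it."""
    rng = random.Random(seed)
    ties = []
    for ex in dataset:  # ex: {"question": str, "good": str, "bad": str}
        other = rng.choice(dataset)["question"]  # a (likely) unrelated question
        ties.append({"question": other,
                     "answer_a": ex["good"],
                     "answer_b": ex["bad"],
                     "label": "tie"})  # neither answer addresses `other`
    return ties
```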

🏆 Up to 5.4% accuracy boost on RewardBench, with huge gains in Safety (+13.2%) and Reasoning (+7.2%).
🏆 Superior robustness on reWordBench, achieving an aggregate accuracy gain of up to 9.1% and outperforming on 21/23 transformations.
🏆 Consistent improvements in downstream Best-of-N selection across various benchmarks (see the sketch below).
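Best-of-N selection itself is straightforward: sample N candidate answers from the policy and keep the one the reward model scores highest. A generic sketch, with `policy` and `reward_model` as placeholder callables:

```python
def best_of_n(question, policy, reward_model, n=16):
    """Sample n candidate answers and return the one the RM scores highest."""
    candidates = [policy(question) for _ in range(n)]
    return max(candidates, key=lambda ans: reward_model(question, ans))
```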


