arXiv:2506.14175

GRAM: A Generative Foundation Reward Model for Reward Generalization

Published on Jun 17, 2025

Abstract

In aligning large language models (LLMs), reward models play an important role, but they are typically trained as discriminative models and rely only on labeled human preference data. In this paper, we explore methods that train reward models using both unlabeled and labeled data. Building on the generative models underlying LLMs, we develop a generative reward model that is first trained via large-scale unsupervised learning and then fine-tuned via supervised learning. We also show that, by using label smoothing, we are in fact optimizing a regularized pairwise ranking loss. This result, in turn, provides a new view of training reward models, one that links generative and discriminative models under the same class of training objectives. The outcome of these techniques is a foundation reward model, which can be applied to a wide range of tasks with little or no further fine-tuning effort. Extensive experiments show that this model generalizes well across several tasks, including response ranking, reinforcement learning from human feedback (RLHF), and task adaptation with fine-tuning, achieving significant performance improvements over several strong baselines.

AI-generated summary

A generative reward model, pre-trained on unlabeled data with large-scale unsupervised learning and then fine-tuned on labeled preference data with label smoothing, generalizes better across tasks such as response ranking, RLHF, and task adaptation via fine-tuning.
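
To make the claim about label smoothing concrete, the sketch below shows how it can interact with a pairwise ranking loss over reward scores. This is a minimal illustration, assuming a standard Bradley-Terry objective on scalar scores for chosen and rejected responses; the function name, the smoothing weight `epsilon`, and the exact form of the regularization are assumptions for illustration, not the implementation described in the paper.

```python
import torch
import torch.nn.functional as F

def label_smoothed_ranking_loss(r_chosen: torch.Tensor,
                                r_rejected: torch.Tensor,
                                epsilon: float = 0.1) -> torch.Tensor:
    """Pairwise (Bradley-Terry) ranking loss with label smoothing.

    r_chosen / r_rejected: scalar reward scores for the preferred and
    dispreferred responses of each pair, shape (batch,).
    epsilon: smoothing weight; epsilon = 0 recovers the standard
    -log sigmoid(r_chosen - r_rejected) pairwise loss.
    """
    margin = r_chosen - r_rejected
    # Smooth the preference label: with weight (1 - epsilon) the annotated
    # preference is treated as correct, with weight epsilon as flipped.
    loss = -(1.0 - epsilon) * F.logsigmoid(margin) - epsilon * F.logsigmoid(-margin)
    return loss.mean()

# Toy usage with dummy reward scores for a batch of four preference pairs.
if __name__ == "__main__":
    r_chosen = torch.tensor([1.2, 0.3, 2.1, -0.5])
    r_rejected = torch.tensor([0.4, 0.1, 1.8, -1.0])
    print(label_smoothed_ranking_loss(r_chosen, r_rejected, epsilon=0.1))
```

With epsilon = 0 this is the usual discriminative ranking objective; a small positive epsilon penalizes overconfident score margins, which is consistent with the abstract's observation that label smoothing amounts to optimizing a regularized pairwise ranking loss.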

Models citing this paper 5

Datasets citing this paper 2

Spaces citing this paper 0

Collections including this paper 1