arxiv:2505.23923

ChARM: Character-based Act-adaptive Reward Modeling for Advanced Role-Playing Language Agents

Published on May 29 · Submitted by feltoner on Jun 2

Abstract

AI-generated summary: ChARM, a character-focused adaptive reward model, improves preference learning for role-playing language agents by using an act-adaptive margin and self-evolution with unlabeled data, achieving superior results on dedicated benchmarks.

Role-Playing Language Agents (RPLAs) aim to simulate characters for realistic and engaging human-computer interactions. However, traditional reward models often struggle with scalability and adapting to subjective conversational preferences. We propose ChARM, a Character-based Act-adaptive Reward Model, addressing these challenges through two innovations: (1) an act-adaptive margin that significantly enhances learning efficiency and generalizability, and (2) a self-evolution mechanism leveraging large-scale unlabeled data to improve training coverage. Additionally, we introduce RoleplayPref, the first large-scale preference dataset specifically for RPLAs, featuring 1,108 characters, 13 subcategories, and 16,888 bilingual dialogues, alongside RoleplayEval, a dedicated evaluation benchmark. Experimental results show a 13% improvement over the conventional Bradley-Terry model in preference rankings. Furthermore, applying ChARM-generated rewards to preference learning techniques (e.g., direct preference optimization) achieves state-of-the-art results on CharacterEval and RoleplayEval. Code and dataset are available at https://github.com/calubkk/ChARM.
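
The abstract contrasts ChARM with the conventional Bradley-Terry reward model and attributes the gain to an act-adaptive margin. As a rough illustration only (the paper's exact margin definition is not given on this page), a margin-augmented Bradley-Terry pairwise loss might look like the following PyTorch sketch; the margin values are placeholders for whatever per-pair margin the act-adaptive mechanism produces.

```python
# Hypothetical sketch of a Bradley-Terry preference loss with a per-pair margin.
# The form of ChARM's act-adaptive margin is not specified here; the margin
# tensor below is illustrative only.
import torch
import torch.nn.functional as F

def bt_loss_with_margin(chosen_reward: torch.Tensor,
                        rejected_reward: torch.Tensor,
                        margin: torch.Tensor) -> torch.Tensor:
    # Standard Bradley-Terry objective: -log sigmoid(r_chosen - r_rejected),
    # with a margin subtracted so that pairs assigned a larger margin are
    # pushed further apart (stronger optimization pressure).
    return -F.logsigmoid(chosen_reward - rejected_reward - margin).mean()

# Example with dummy scalar rewards; in practice these would come from a
# reward head on a role-play LLM (the authors report a Qwen2.5-7B backbone).
chosen = torch.tensor([1.2, 0.7])
rejected = torch.tensor([0.3, 0.9])
margin = torch.tensor([0.5, 0.1])
loss = bt_loss_with_margin(chosen, rejected, margin)
print(loss)
```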

Community

Paper submitter
  1. We propose ChARM, a novel reward-modeling framework designed to provide accurate rewards for enhancing role-playing abilities in RPLAs. It dynamically adjusts optimization strength through an act-adaptive margin and leverages self-evolution to expand the training data (a rough sketch of the latter follows this list).
  2. We train a ChARM-based reward model on Qwen2.5-7B, which outperforms the traditional Bradley-Terry model by 13% in preference ranking. When combined with DPO, it achieves state-of-the-art performance on both CharacterEval and our newly developed role-playing benchmark, RoleplayEval.
  3. We create the first role-playing preference dataset, RoleplayPref, with 1,108 characters across 13 subcategories and 16,888 bilingual dialogues. Additionally, we design a new evaluation benchmark, RoleplayEval, to advance research in this area.
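
As a loose illustration of the self-evolution idea in point 1 (scoring unlabeled role-play dialogues with the trained reward model to mine new preference pairs for DPO), the sketch below is hypothetical: the function names, data fields, and best-vs-worst selection rule are assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch: use a trained reward model to turn unlabeled role-play
# responses into (chosen, rejected) pairs suitable for DPO-style training.
from typing import Callable, Dict, List

def build_preference_pair(prompt: str,
                          candidates: List[str],
                          reward_fn: Callable[[str, str], float]) -> Dict[str, str]:
    # Score every candidate response with the reward model, then keep the
    # highest-scoring one as "chosen" and the lowest-scoring one as "rejected".
    ranked = sorted(candidates, key=lambda response: reward_fn(prompt, response))
    return {"prompt": prompt, "chosen": ranked[-1], "rejected": ranked[0]}
```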

