Papers
arxiv:2507.01352

Skywork-Reward-V2: Scaling Preference Data Curation via Human-AI Synergy

Published on Jul 2
· Submitted by chrisliu298 on Jul 4
#2 Paper of the day
Abstract

AI-generated summary

A large-scale preference dataset and synergistic human-AI curation pipeline improve the quality and performance of open reward models in reinforcement learning from human feedback.

Despite the critical role of reward models (RMs) in reinforcement learning from human feedback (RLHF), current state-of-the-art open RMs perform poorly on most existing evaluation benchmarks, failing to capture the spectrum of nuanced and sophisticated human preferences. Even approaches that incorporate advanced training techniques have not yielded meaningful performance improvements. We hypothesize that this brittleness stems primarily from limitations in preference datasets, which are often narrowly scoped, synthetically labeled, or lack rigorous quality control. To address these challenges, we present a large-scale preference dataset comprising 40 million preference pairs, named SynPref-40M. To enable data curation at scale, we design a human-AI synergistic two-stage pipeline that leverages the complementary strengths of human annotation quality and AI scalability. In this pipeline, humans provide verified annotations, while large language models perform automatic curation based on human guidance. Training on this preference mixture, we introduce Skywork-Reward-V2, a suite of eight reward models ranging from 0.6B to 8B parameters, trained on a carefully curated subset of 26 million preference pairs from SynPref-40M. We demonstrate that Skywork-Reward-V2 is versatile across a wide range of capabilities, including alignment with human preferences, objective correctness, safety, resistance to stylistic biases, and best-of-N scaling, achieving state-of-the-art performance across seven major reward model benchmarks. Ablation studies confirm that the effectiveness of our approach stems not only from data scale but also from high-quality curation. The Skywork-Reward-V2 series represents substantial progress in open reward models, highlighting the untapped potential of existing preference datasets and demonstrating how human-AI curation synergy can unlock significantly higher data quality.
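As an illustrative aside (not from the page itself): reward models of this kind are typically trained on preference pairs with a pairwise Bradley-Terry-style objective, where the scalar reward of the chosen response is pushed above that of the rejected one. A minimal sketch, assuming that standard setup:

```python
import torch
import torch.nn.functional as F

def bradley_terry_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """Pairwise preference loss: drive the reward of the chosen response
    above that of the rejected one for each preference pair."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Toy batch of 4 preference pairs: scalar rewards from the reward model head.
r_chosen = torch.tensor([1.2, 0.3, 2.0, -0.5])
r_rejected = torch.tensor([0.4, 0.1, 1.5, -1.0])
print(bradley_terry_loss(r_chosen, r_rejected))  # loss shrinks as the reward margin grows
```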

Community

Paper author Paper submitter

Performance-wise, our smallest 0.6B variant, Skywork-Reward-V2-Qwen3-0.6B, nearly matches the average performance of our previous best model, Skywork-Reward-Gemma-2-27B-v0.2.

The 1.7B variant outperforms the previous 70B SOTA.

The largest 8B version, Skywork-Reward-V2-Llama-3.1-8B, surpasses all existing reward models on average across benchmarks. Our top experimental model, Skywork-Reward-V2-Llama-3.1-8B-40M, outperforms all existing reward models on every individual benchmark. These include RewardBench, RewardBench 2, PPE Preference, PPE Correctness, RMB, RM-Bench, and JudgeBench.
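Since the abstract highlights best-of-N scaling, here is a minimal usage sketch for scoring candidate responses with the 8B model via transformers. It assumes the usual Skywork reward-model setup (a sequence-classification head with a single logit) and a hypothetical Skywork/Skywork-Reward-V2-Llama-3.1-8B repo id; check the released model cards for the exact loading code.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed repo id; verify the released name on the Hugging Face Hub.
model_id = "Skywork/Skywork-Reward-V2-Llama-3.1-8B"
rm = AutoModelForSequenceClassification.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto", num_labels=1
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

def reward(prompt: str, response: str) -> float:
    """Score one prompt-response pair with the reward model's scalar head."""
    conv = [{"role": "user", "content": prompt},
            {"role": "assistant", "content": response}]
    input_ids = tokenizer.apply_chat_template(
        conv, tokenize=True, return_tensors="pt"
    ).to(rm.device)
    with torch.no_grad():
        return rm(input_ids).logits[0][0].item()

# Best-of-N selection: keep the candidate the reward model ranks highest.
prompt = "Explain why the sky is blue."
candidates = ["Because of Rayleigh scattering ...", "Because the ocean reflects onto it ..."]
best = max(candidates, key=lambda r: reward(prompt, r))
```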

[Figure: Skywork-Reward-V2 benchmark performance comparison]

Paper author Paper submitter

We scale preference data curation via a two-stage human-AI synergistic pipeline and obtain continuous improvements as curation progresses!
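A hypothetical sketch of what such a human-AI loop can look like, based only on the description above (humans verify a small seed set, an LLM then curates the rest under that guidance). The function names and the confidence-based filtering are illustrative placeholders, not the paper's actual pipeline code.

```python
def curate(pairs, human_verify, llm_label, n_seed=1_000):
    """Two-stage curation sketch: humans verify a small seed set,
    then an LLM curates the remaining pairs under that guidance."""
    # Stage 1: human-verified preference labels on a small seed slice.
    seed = [human_verify(pair) for pair in pairs[:n_seed]]

    # Stage 2: LLM-based curation of the rest, conditioned on the verified
    # seed (e.g., as few-shot guidance); unconfident labels are dropped.
    curated = list(seed)
    for pair in pairs[n_seed:]:
        label = llm_label(pair, guidance=seed)
        if label is not None:
            curated.append(label)
    return curated
```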



Nice work! Will you open source the preference dataset SynPref-40M?

Amazing work! And a plus one for releasing the preference data or the pipeline code.


Models citing this paper 8


Datasets citing this paper 0


Spaces citing this paper 0


Collections including this paper 2