arxiv:2505.12346

SEED-GRPO: Semantic Entropy Enhanced GRPO for Uncertainty-Aware Policy Optimization

Published on May 18 · Submitted by Dreamer312 on May 20
Abstract

SEED-GRPO enhances Group Relative Policy Optimization by adjusting policy updates based on the uncertainty of input prompts, achieving state-of-the-art performance in mathematical reasoning benchmarks.

AI-generated summary

Large language models (LLMs) exhibit varying levels of confidence across input prompts (questions): some lead to consistent, semantically similar answers, while others yield diverse or contradictory outputs. This variation reflects the LLM's uncertainty about the input prompt, a signal of how confidently the model understands a given problem. However, vanilla Group Relative Policy Optimization (GRPO) treats all prompts equally during policy updates, ignoring this important information about the model's knowledge boundaries. To address this limitation, we propose SEED-GRPO (Semantic Entropy EnhanceD GRPO), which explicitly measures the LLM's uncertainty about an input prompt via semantic entropy. Semantic entropy quantifies the diversity of meaning across multiple answers generated for the same prompt, and SEED-GRPO uses it to modulate the magnitude of policy updates. This uncertainty-aware training mechanism dynamically adjusts policy update magnitudes based on question uncertainty: it makes more conservative updates on high-uncertainty questions while maintaining the original learning signal on confident ones. Experimental results on five mathematical reasoning benchmarks (AIME24 56.7, AMC 68.7, MATH 83.4, Minerva 34.2, and OlympiadBench 48.0) demonstrate that SEED-GRPO achieves new state-of-the-art average accuracy, validating the effectiveness of uncertainty-aware policy optimization.
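
For readers unfamiliar with the term, here is a minimal sketch of how semantic entropy over a group of sampled answers can be computed. It assumes answers are clustered by their final extracted answer string; the paper's clustering of full responses by meaning may group answers differently, and the function name is illustrative, not from the paper's code.

```python
import math
from collections import Counter

def semantic_entropy(answers):
    """Entropy over semantic clusters of sampled answers.

    'Semantic equivalence' is approximated here by the final extracted
    answer string; clustering full responses by meaning may differ.
    """
    counts = Counter(answers)  # cluster -> number of samples in it
    n = len(answers)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# Consistent answers -> low entropy (confident prompt).
print(semantic_entropy(["42", "42", "42", "42"]))   # 0.0
# Diverse answers -> high entropy (uncertain prompt).
print(semantic_entropy(["42", "17", "3", "42"]))    # ~1.04
```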

Community

Paper author Paper submitter

Hi all, we just released SEED-GRPO, a new reinforcement learning framework that incorporates semantic entropy into policy optimization.

πŸ” Motivation:
Most GRPO methods treat all prompts equally, but some questions are harder than others. SEED-GRPO measures semantic entropy (i.e., how diverse the sampled answers are) to assess that uncertainty:

If the model is uncertain (high entropy → diverse outputs), we update conservatively.

If the model is confident (low entropy → consistent answers), we maintain the original updates.
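
A toy sketch of this modulation, assuming a linear scaling of the GRPO group-relative advantage by normalized semantic entropy; the exact scaling function, normalization, and names below are assumptions, not the paper's implementation.

```python
import numpy as np

def uncertainty_aware_advantages(rewards, entropy, max_entropy):
    """GRPO-style group-relative advantages, scaled down as uncertainty grows.

    rewards:     rewards of the G responses sampled for one prompt
    entropy:     semantic entropy of those responses
    max_entropy: upper bound, e.g. log(G) when every answer is semantically distinct
    """
    rewards = np.asarray(rewards, dtype=float)
    # Standard GRPO advantage: normalize rewards within the group.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    # Assumed modulation (illustrative): high entropy -> factor near 0
    # (conservative update), low entropy -> factor near 1 (original update).
    factor = 1.0 - entropy / max_entropy
    return factor * adv

G = 4
# Low uncertainty: close to the standard GRPO update.
print(uncertainty_aware_advantages([1, 0, 0, 1], entropy=0.35, max_entropy=np.log(G)))
# High uncertainty: same group, but the advantages are strongly attenuated.
print(uncertainty_aware_advantages([1, 0, 0, 1], entropy=1.04, max_entropy=np.log(G)))
```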

📈 Results:
With just a 7B model, we reach 56.7 AIME24, outperforming much larger baselines.

Congrats, interesting paper and impressive results! I am a bit confused about why SEED-GRPO scales down the advantage of samples with high uncertainty (semantic entropy). Intuitively, it seems that correct (high-reward) responses to difficult questions (high entropy) are the ones most worth learning from.

Paper author

Thanks for the thoughtful question; that's actually where we started as well!

In our early experiments, we indeed tried the opposite strategy: amplifying the advantage for high-uncertainty (i.e., high semantic entropy) questions, under the intuition that correct answers to hard problems deserve stronger updates.

However, we observed that this led to severe training instability: the model quickly collapsed. We suspect that in policy optimization methods like PPO and GRPO, training stability matters more than aggressiveness. High-entropy questions, by definition, indicate model uncertainty. Even if some responses to high-entropy questions yield high rewards, the model is not consistently confident across generations, so prioritizing such samples risks chasing high-risk, high-reward outliers while overlooking more stable and reliable learning signals.

This is why algorithms like TRPO and PPO include constraints (e.g., KL divergence penalties or clipping) to ensure the new policy doesn't move too far from the old one.
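
For reference, here is a minimal sketch of the standard PPO clipped surrogate objective being referred to; this is generic PPO, not SEED-GRPO-specific code.

```python
import torch

def ppo_clipped_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """Standard PPO clipped surrogate loss (to be minimized).

    Clipping keeps the probability ratio pi_new / pi_old inside
    [1 - clip_eps, 1 + clip_eps], so even a very large advantage cannot
    push the policy far from the old one in a single update.
    """
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()
```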

