EDGE-GRPO: Entropy-Driven GRPO with Guided Error Correction for Advantage Diversity
Abstract
The EDGE-GRPO algorithm addresses the advantage collapse problem in Group Relative Policy Optimization by incorporating entropy-driven advantage and guided error correction, enhancing response diversity and training signal.
Large Language Models (LLMs) have made remarkable progress in enhancing step-by-step reasoning through reinforcement learning. However, the Group Relative Policy Optimization (GRPO) algorithm, which relies on sparse reward rules, often encounters the issue of identical rewards within groups, leading to the advantage collapse problem. Existing works typically address this challenge from two perspectives: enforcing model reflection to enhance response diversity, and introducing internal feedback to augment the training signal (advantage). In this work, we begin by analyzing the limitations of model reflection and investigating the policy entropy of responses at the fine-grained sample level. Based on our experimental findings, we propose the EDGE-GRPO algorithm, which adopts Entropy-Driven Advantage and Guided Error Correction to effectively mitigate the problem of advantage collapse. Extensive experiments on several mainstream reasoning benchmarks demonstrate the effectiveness and superiority of our approach. The code is available at https://github.com/ZhangXJ199/EDGE-GRPO.
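To make the advantage collapse problem concrete, here is a minimal sketch of GRPO's group-relative advantage and how it degenerates when every rollout in a group receives the same sparse reward. The entropy-based reweighting shown afterwards is only an illustrative assumption of what an "entropy-driven advantage" could look like, not the paper's exact formulation; the function names, the `alpha` hyperparameter, and the shaping rule are hypothetical.

```python
# Illustrative sketch only; not the authors' implementation.
import numpy as np

def grpo_advantages(rewards, eps=1e-8):
    """Standard group-relative advantage: z-score rewards within the group."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

def entropy_modulated_advantages(rewards, entropies, alpha=0.5, eps=1e-8):
    """Hypothetical entropy-driven variant: perturb each rollout's reward by its
    (group-normalized) policy entropy so identical raw rewards no longer yield
    identical advantages."""
    r = np.asarray(rewards, dtype=float)
    h = np.asarray(entropies, dtype=float)
    h_norm = (h - h.min()) / (h.max() - h.min() + eps)   # scale entropies to [0, 1]
    shaped = r + alpha * (h_norm - h_norm.mean())        # illustrative shaping only
    return (shaped - shaped.mean()) / (shaped.std() + eps)

# A group where every sampled response is wrong (reward 0): the standard
# GRPO advantage is identically zero, i.e., advantage collapse.
rewards = [0.0, 0.0, 0.0, 0.0]
entropies = [0.9, 1.4, 0.7, 2.1]   # made-up per-response policy entropies
print(grpo_advantages(rewards))                         # -> [0. 0. 0. 0.]
print(entropy_modulated_advantages(rewards, entropies)) # non-degenerate signal
```

With uniform rewards, the group mean equals every reward and the numerator vanishes, so no gradient signal survives; any per-sample quantity that varies within the group (here, policy entropy) can in principle break that tie, which is the intuition the abstract points to.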
Community
The following papers, recommended by the Semantic Scholar API, are similar to this paper:
- Stabilizing Knowledge, Promoting Reasoning: Dual-Token Constraints for RLVR (2025)
- EFRame: Deeper Reasoning via Exploration-Filter-Replay Reinforcement Learning Framework (2025)
- SmartThinker: Learning to Compress and Preserve Reasoning by Step-Level Length Control (2025)
- Multi-Layer GRPO: Enhancing Reasoning and Self-Correction in Large Language Models (2025)
- RLPR: Extrapolating RLVR to General Domains without Verifiers (2025)
- SRFT: A Single-Stage Method with Supervised and Reinforcement Fine-Tuning for Reasoning (2025)
- Critique-GRPO: Advancing LLM Reasoning with Natural Language and Numerical Feedback (2025)