arxiv:2505.24726

Reflect, Retry, Reward: Self-Improving LLMs via Reinforcement Learning

Published on May 30 · Submitted by melisa on Jun 4 · #1 Paper of the day

AI-generated summary

A method using self-reflection and reinforcement learning improves the performance of large language models, especially with limited feedback, by rewarding self-reflections that lead to better task performance.

Abstract

We explore a method for improving the performance of large language models through self-reflection and reinforcement learning. By incentivizing the model to generate better self-reflections when it answers incorrectly, we demonstrate that a model's ability to solve complex, verifiable tasks can be enhanced even when generating synthetic data is infeasible and only binary feedback is available. Our framework operates in two stages: first, upon failing a given task, the model generates a self-reflective commentary analyzing its previous attempt; second, the model is given another attempt at the task with the self-reflection in context. If the subsequent attempt succeeds, the tokens generated during the self-reflection phase are rewarded. Our experimental results show substantial performance gains across a variety of model architectures, as high as 34.7% improvement at math equation writing and 18.1% improvement at function calling. Notably, smaller fine-tuned models (1.5 billion to 7 billion parameters) outperform models in the same family that are 10 times larger. Our novel paradigm is thus an exciting pathway to more useful and reliable language models that can self-improve on challenging tasks with limited external feedback.
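The two-stage loop described in the abstract is simple enough to sketch. Below is a minimal, hypothetical Python sketch of one rollout of the reflect-retry-reward procedure; the generate and verify callables, the prompt wording, and the return values are illustrative placeholders under my own assumptions, not the authors' implementation.

```python
from typing import Callable, Optional, Tuple

def reflect_retry_episode(
    generate: Callable[[str], str],   # hypothetical wrapper around the model's text generation
    verify: Callable[[str], bool],    # binary task verifier: pass/fail only
    task_prompt: str,
) -> Tuple[str, Optional[str], float]:
    """One rollout of the reflect -> retry -> reward loop (illustrative sketch)."""
    # First attempt at the task.
    first_attempt = generate(task_prompt)
    if verify(first_attempt):
        return first_attempt, None, 1.0          # solved immediately, no reflection needed

    # Stage 1: on failure, the model writes a self-reflective commentary on its attempt.
    reflection_prompt = (
        f"{task_prompt}\nPrevious attempt:\n{first_attempt}\n"
        "That answer was incorrect. Reflect on what went wrong and how to fix it."
    )
    reflection = generate(reflection_prompt)

    # Stage 2: retry the task with the self-reflection kept in context.
    retry_prompt = f"{reflection_prompt}\n{reflection}\nNow try the task again."
    second_attempt = generate(retry_prompt)

    # Binary feedback: the reflection earns a reward only if the retry succeeds.
    reward = 1.0 if verify(second_attempt) else 0.0
    return second_attempt, reflection, reward
```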

Community

Paper author · Paper submitter

We present a two-stage framework where language models improve by generating self-reflective commentary after making mistakes and then retrying tasks with that reflection, receiving reinforcement if they succeed; notably, only the reflection tokens are rewarded, with other tokens masked out, to reinforce generalisable self-reflection rather than task-specific solutions.
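The token-masking idea in the comment above can be illustrated with a short sketch. This uses a plain REINFORCE-style objective with a binary mask over the reflection span and assumes per-token log-probabilities are already available; it is not the exact RL algorithm or loss from the paper, just a simplified view of rewarding only the reflection tokens.

```python
import torch

def reflection_only_loss(token_logprobs: torch.Tensor,
                         reflection_mask: torch.Tensor,
                         reward: float) -> torch.Tensor:
    """
    Illustrative REINFORCE-style loss that credits only the self-reflection tokens.

    token_logprobs:  (seq_len,) log-probs of every generated token in the episode
    reflection_mask: (seq_len,) 1.0 for tokens inside the self-reflection, 0.0 elsewhere
    reward:          1.0 if the second attempt succeeded, 0.0 otherwise
    """
    # Masking zeroes the contribution of attempt/solution tokens, so only the
    # reflection itself is reinforced when the retry succeeds.
    per_token = -token_logprobs * reflection_mask * reward
    return per_token.sum() / reflection_mask.sum().clamp(min=1.0)
```

Under this simplified view, a failed retry gives reward 0 and hence no gradient, so only reflections that actually led to a correct second attempt are reinforced.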


Nice to see more evidence that bigger isn't always better (especially with the right techniques)


Models citing this paper 0

Datasets citing this paper 0

Spaces citing this paper 0

Collections including this paper 9