DynamicRAG: Leveraging Outputs of Large Language Model as Feedback for Dynamic Reranking in Retrieval-Augmented Generation
Abstract
Retrieval-augmented generation (RAG) systems combine large language models (LLMs) with external knowledge retrieval, making them highly effective for knowledge-intensive tasks. A crucial but often under-explored component of these systems is the reranker, which refines retrieved documents to enhance generation quality and explainability. The challenge of selecting the optimal number of documents (k) remains unsolved: too few may omit critical information, while too many introduce noise and inefficiencies. Although recent studies have explored LLM-based rerankers, they primarily leverage internal model knowledge and overlook the rich supervisory signals that LLMs can provide, such as using response quality as feedback for optimizing reranking decisions. In this paper, we propose DynamicRAG, a novel RAG framework where the reranker dynamically adjusts both the order and number of retrieved documents based on the query. We model the reranker as an agent optimized through reinforcement learning (RL), using rewards derived from LLM output quality. Across seven knowledge-intensive datasets, DynamicRAG demonstrates superior performance, achieving state-of-the-art results. The model, data, and code are available at https://github.com/GasolSun36/DynamicRAG.
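The abstract describes a reranker that chooses both the order and the number (k) of retrieved documents per query. As a rough illustration only, below is a minimal, self-contained Python sketch of that interface; the names (`Document`, `score_documents`, `dynamic_rerank`), the token-overlap scorer, and the threshold cutoff are assumptions made for this sketch, not the released DynamicRAG implementation, which uses an LLM-based reranker agent trained with RL.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Document:
    doc_id: str
    text: str


def score_documents(query: str, docs: List[Document]) -> List[float]:
    """Toy relevance scorer: token-overlap ratio between query and document.

    Stand-in for an LLM-based reranker; it only illustrates the interface
    (query + candidate documents -> one score per document).
    """
    query_tokens = set(query.lower().split())
    scores = []
    for doc in docs:
        doc_tokens = set(doc.text.lower().split())
        overlap = len(query_tokens & doc_tokens)
        scores.append(overlap / max(len(query_tokens), 1))
    return scores


def dynamic_rerank(
    query: str,
    docs: List[Document],
    min_score: float = 0.2,
    max_k: int = 10,
) -> List[Tuple[Document, float]]:
    """Reorder candidates and pick a query-dependent cutoff k.

    Instead of returning a fixed top-k, keep documents while their score
    stays above a threshold (up to max_k), so easy queries get few
    documents and harder ones get more context.
    """
    ranked = sorted(zip(docs, score_documents(query, docs)),
                    key=lambda pair: pair[1], reverse=True)
    selected = [(d, s) for d, s in ranked if s >= min_score][:max_k]
    # Always keep at least one document so the generator has some context.
    return selected or ranked[:1]


if __name__ == "__main__":
    candidates = [
        Document("d1", "The Eiffel Tower is located in Paris, France."),
        Document("d2", "Paris is the capital and largest city of France."),
        Document("d3", "Bananas are rich in potassium."),
    ]
    for doc, score in dynamic_rerank("Where is the Eiffel Tower located?", candidates):
        print(f"{doc.doc_id}: {score:.2f}")
```

The point of the sketch is the cutoff: the number of documents passed to the generator varies with the query, rather than being a fixed top-k.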
Community
Excited to share our latest work: DynamicRAG: Leveraging Outputs of Large Language Model as Feedback for Dynamic Reranking in Retrieval-Augmented Generation!
Tired of RAG systems missing critical info or drowning in noise? We propose DynamicRAG, a novel framework where the reranker dynamically adjusts the order AND number of retrieved documents based on your query!
Key innovations:
- Dynamic Reranking: No more fixed k! Adapts to each query's needs.
- RL Agent Reranker: Optimized with reinforcement learning, using rewards derived from LLM output quality (see the reward sketch after this list).
- Joint Training: Reranker and generator learn together for optimal synergy.
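Since the post highlights rewards derived from LLM output quality, here is a small, hypothetical sketch of what such a reward signal could look like: answer quality (token-level F1 against a reference answer) minus a small penalty for passing more documents than needed. The exact reward used in the paper may differ; `reranker_reward`, the F1 choice, and the efficiency penalty are illustrative assumptions.

```python
from collections import Counter


def token_f1(prediction: str, reference: str) -> float:
    """Token-level F1 between the generator's answer and a reference answer."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        return 0.0
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)


def reranker_reward(generated_answer: str, reference_answer: str,
                    k_used: int, max_k: int = 10,
                    efficiency_weight: float = 0.1) -> float:
    """Reward a reranking action by downstream answer quality, minus a
    small penalty for selecting more documents than necessary."""
    quality = token_f1(generated_answer, reference_answer)
    efficiency_penalty = efficiency_weight * (k_used / max_k)
    return quality - efficiency_penalty


if __name__ == "__main__":
    print(reranker_reward("The Eiffel Tower is in Paris", "Paris", k_used=3))
```

Coupling the reward to the generator's output is what lets the reranker learn both which documents to keep and how many, rather than optimizing a retrieval metric in isolation.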
DynamicRAG achieves state-of-the-art results across SEVEN knowledge-intensive datasets! It outperforms existing methods, even with less training data!
Say goodbye to static reranking and hello to more relevant, efficient, and high-quality generation!
Check out the paper: https://arxiv.org/abs/2505.07233
Code & data: https://github.com/GasolSun36/DynamicRAG
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- MMKB-RAG: A Multi-Modal Knowledge-Based Retrieval-Augmented Generation Framework (2025)
- Lightweight and Direct Document Relevance Optimization for Generative Information Retrieval (2025)
- Direct Retrieval-augmented Optimization: Synergizing Knowledge Selection and Language Models (2025)
- Distillation and Refinement of Reasoning in Small Language Models for Document Re-ranking (2025)
- Rec-R1: Bridging Generative Large Language Models and User-Centric Recommendation Systems via Reinforcement Learning (2025)
- Scaling Test-Time Inference with Policy-Optimized, Dynamic Retrieval-Augmented Generation via KV Caching and Decoding (2025)
- Reinforcing Compositional Retrieval: Retrieving Step-by-Step for Composing Informative Contexts (2025)