arxiv:2502.15920

Self-Taught Agentic Long Context Understanding

Published on Feb 21
· Submitted by yzhuang on Feb 25
Abstract

Answering complex, long-context questions remains a major challenge for large language models (LLMs), as it requires effective question clarification and context retrieval. We propose Agentic Long-Context Understanding (AgenticLU), a framework designed to enhance an LLM's understanding of such queries by integrating targeted self-clarification with contextual grounding within an agentic workflow. At the core of AgenticLU is Chain-of-Clarifications (CoC), in which models refine their understanding through self-generated clarification questions and corresponding contextual groundings. By scaling inference as a tree search where each node represents a CoC step, we achieve 97.8% answer recall on NarrativeQA with a search depth of up to three and a branching factor of eight. To amortize the high cost of this search process into training, we leverage the per-step preference pairs obtained from the CoC workflow and perform two-stage model finetuning: (1) supervised finetuning to learn effective decomposition strategies, and (2) direct preference optimization to enhance reasoning quality. This enables AgenticLU models to generate clarifications and retrieve relevant context effectively and efficiently in a single inference pass. Extensive experiments across seven long-context tasks demonstrate that AgenticLU significantly outperforms state-of-the-art prompting methods and specialized long-context LLMs, achieving robust multi-hop reasoning while sustaining consistent performance as context length grows.
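The CoC tree search described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the model-call stubs (`propose_clarification`, `ground_in_context`, `answer`) are hypothetical placeholders for LLM prompts, and the tree is enumerated exhaustively up to the given depth and branching factor.

```python
def propose_clarification(question, context, history):
    # Placeholder for the LLM generating a self-clarification question
    # conditioned on the original query and the steps taken so far.
    return f"clarify({question}, step={len(history)})"

def ground_in_context(clarification, context):
    # Placeholder for retrieving a supporting passage from the long context.
    return f"evidence for {clarification}"

def answer(question, history):
    # Placeholder for the final answer attempt given the CoC trace.
    return "final answer" if len(history) >= 1 else "unsure"

def coc_tree_search(question, context, depth=3, branching=8):
    """Enumerate Chain-of-Clarifications paths as a tree: each node is one
    (clarification, grounding) step; paths of every length up to `depth`
    are kept as candidate traces and scored by their final answer."""
    paths = [[]]          # frontier of partial CoC traces
    complete = []         # all traces of length 1..depth
    for _ in range(depth):
        next_paths = []
        for history in paths:
            for _ in range(branching):
                c = propose_clarification(question, context, history)
                e = ground_in_context(c, context)
                next_paths.append(history + [(c, e)])
        paths = next_paths
        complete.extend(paths)
    return [(h, answer(question, h)) for h in complete]
```

With depth 3 and branching factor 8 (the paper's setting), this enumerates 8 + 64 + 512 = 584 candidate traces per question, which is why the search is used only for data generation and its cost amortized into training.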

Community

Paper author and submitter:

LLMs struggle with long-context reasoning—retrieving key info & clarifying complex queries. We introduce Agentic Long-context Understanding (AgenticLU), an agentic framework that:

✅ Uses Chain-of-Clarifications (CoC) to iteratively refine queries & retrieve relevant evidence.
✅ Scales training data generation as a tree search, achieving 97.8% answer recall on NarrativeQA.
✅ Amortizes the data generation cost into training with two-stage finetuning (SFT + DPO) for efficient single-pass inference.
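One plausible way to turn sibling search branches into DPO preference pairs is sketched below. This is an assumption-laden illustration: the paper's exact pairing and scoring criteria are not specified here, and `recall_score` stands in for whatever downstream answer-recall signal the search produces.

```python
def build_preference_pairs(candidates, recall_score):
    """Given sibling CoC steps from the same tree node, each scored by the
    answer recall of the search subtree beneath it, pair the best step
    (chosen) against each strictly worse sibling (rejected) for DPO."""
    scored = sorted(candidates, key=recall_score, reverse=True)
    best = scored[0]
    return [(best, other) for other in scored[1:]
            if recall_score(best) > recall_score(other)]
```

Pairs like these let the finetuned model prefer clarification steps that lead to recoverable answers, so at inference time a single greedy pass replaces the expensive tree search.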

📜 Read the paper: https://arxiv.org/pdf/2502.15920
🔗 Code & data: https://github.com/EvanZhuang/AgenticLU
https://huggingface.co/datasets/yzhuang/Agentic-Long-Context-Understanding-QA

