arxiv:2508.07999

WideSearch: Benchmarking Agentic Broad Info-Seeking

Published on Aug 11
· Submitted by Jarvis1111 on Aug 12
#2 Paper of the day
AI-generated summary

WideSearch is a new benchmark evaluating the reliability of automated search agents on large-scale information-collection tasks, revealing significant deficiencies in current systems.

Abstract

From professional research to everyday planning, many tasks are bottlenecked by wide-scale information seeking, which is more repetitive than cognitively complex. With the rapid development of Large Language Models (LLMs), automated search agents powered by LLMs offer a promising way to free humans from this tedious work. However, the ability of these agents to perform such "wide-context" collection reliably and completely remains largely unevaluated due to a lack of suitable benchmarks. To bridge this gap, we introduce WideSearch, a new benchmark engineered to evaluate agent reliability on these large-scale collection tasks. The benchmark features 200 manually curated questions (100 in English, 100 in Chinese) from over 15 diverse domains, grounded in real user queries. Each task requires agents to collect large-scale atomic information, each piece of which can be verified objectively one by one, and to arrange it into a well-organized output. A rigorous five-stage quality-control pipeline ensures the difficulty, completeness, and verifiability of the dataset. We benchmark over 10 state-of-the-art agentic search systems, including single-agent frameworks, multi-agent frameworks, and end-to-end commercial systems. Most systems achieve overall success rates near 0%, with the best performer reaching just 5%. However, given sufficient time, cross-validation by multiple human testers can achieve a near-100% success rate. These results demonstrate that present search agents have critical deficiencies in large-scale information seeking, underscoring urgent areas for future research and development in agentic search. Our dataset, evaluation pipeline, and benchmark results have been publicly released at https://widesearch-seed.github.io/
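
To make the scoring concrete, here is a minimal sketch of how the evaluation described in the abstract could work. This is a guess at the setup, not the authors' released pipeline: each task's answer is modeled as a set of independently verifiable atomic cells (e.g., table cells keyed by row and column), a task succeeds only if every cell is correct, and the overall success rate is the fraction of tasks that succeed. Exact string matching is a simplifying assumption; the real pipeline may normalize values or use fuzzy or LLM-based judging.

```python
from dataclasses import dataclass

@dataclass
class TaskResult:
    """Gold and predicted atomic cells for one WideSearch-style task.

    Keys identify an atomic fact (hypothetically, "row|column" in the
    required output table); values are the expected/collected strings.
    """
    gold: dict[str, str]
    predicted: dict[str, str]

def item_f1(result: TaskResult) -> float:
    """Item-level F1: each atomic cell is verified independently."""
    correct = sum(
        1 for key, value in result.gold.items()
        if result.predicted.get(key) == value  # exact match is an assumption
    )
    precision = correct / len(result.predicted) if result.predicted else 0.0
    recall = correct / len(result.gold) if result.gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def task_success(result: TaskResult) -> bool:
    """Task-level success: every gold cell recovered, no spurious cells."""
    return result.predicted == result.gold

def success_rate(results: list[TaskResult]) -> float:
    """Overall success rate: fraction of tasks solved completely."""
    return sum(task_success(r) for r in results) / len(results)
```

Under this all-or-nothing task criterion, even an agent with high item-level F1 can score near 0% overall, which is consistent with the gap the abstract reports between per-item collection and fully reliable, complete task completion.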

Community

Paper author · Paper submitter

The "I don't have time to do it" problem: Solved? ๐Ÿค” Not by today's LLM agents.

We introduce WideSearch: a new benchmark 📊 to test whether AI agents can handle large-scale, repetitive information gathering, the real bottleneck in productivity. 🚧

Leaderboard & details: https://widesearch-seed.github.io/
