TaxoBench: Can Deep Research Agents Retrieve and Organize? Evaluating the Synthesis Gap with Expert Taxonomies


πŸ“– Project Overview

TaxoBench is the first benchmark framework that systematically evaluates Deep Research Agents and Large Language Models (LLMs) on their ability to organize, summarize, and construct hierarchical knowledge structures from scientific literature, measured against the cognitive structures of human experts.

This project is based on the following paper from the Fudan University NLP Lab:

Can Deep Research Agents Retrieve and Organize?
Evaluating the Synthesis Gap with Expert Taxonomies

πŸ“š Data Sources

  • 72 highly-cited computer science survey papers (Survey Topics)
  • Expert-constructed Taxonomy Trees
  • 3,815 precisely classified cited papers as Ground Truth

You can download our dataset from Hugging Face.

🎯 Evaluation Modes

TaxoBench implements the two core evaluation paradigms defined in the paper:

  1. Deep Research Mode
    End-to-end evaluation: Retrieval β†’ Filtering β†’ Organization β†’ Structured Summarization

  2. Bottom-Up Mode (focus of this repository)
    Given a fixed collection of papers, evaluate the model's ability to build hierarchical knowledge structures (taxonomies) bottom-up.
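
To make the Bottom-Up Mode concrete, here is a toy sketch (not the paper's official metric) of how a predicted taxonomy could be scored against an expert one. It assumes taxonomies are nested dicts whose leaves are lists of paper IDs, and uses pairwise co-clustering F1 over the leaf groupings; all names and the data layout here are illustrative assumptions, not part of the released benchmark.

```python
from itertools import combinations

def leaf_groups(tree):
    """Collect the leaf paper-ID lists from a nested-dict taxonomy."""
    if isinstance(tree, list):          # a leaf: a list of paper IDs
        return [tree]
    groups = []
    for child in tree.values():         # an internal node: recurse into children
        groups.extend(leaf_groups(child))
    return groups

def co_pairs(tree):
    """Set of unordered paper pairs placed in the same leaf group."""
    pairs = set()
    for group in leaf_groups(tree):
        for a, b in combinations(sorted(group), 2):
            pairs.add((a, b))
    return pairs

def pairwise_f1(pred, gold):
    """Pairwise co-clustering F1 between two taxonomies."""
    p, g = co_pairs(pred), co_pairs(gold)
    if not p or not g:
        return 0.0
    precision = len(p & g) / len(p)
    recall = len(p & g) / len(g)
    return 2 * precision * recall / (precision + recall)

# Hypothetical gold vs. predicted taxonomies over four papers.
gold = {"root": {"A": ["p1", "p2"], "B": ["p3", "p4"]}}
pred = {"root": {"A": ["p1", "p2", "p3"], "B": ["p4"]}}
print(round(pairwise_f1(pred, gold), 3))  # prints 0.4
```

A pairwise metric like this rewards grouping the right papers together regardless of subtree naming; the actual evaluation protocol is described in the paper.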

For more details, please visit the GitHub repository: https://github.com/KongLongGeFDU/TaxoBench


πŸ“‚ Repository Structure

TaxoBench/
β”œβ”€β”€ data.jsonl                # Benchmark data (one JSON object per survey topic)
β”œβ”€β”€ README.md
└── README_zh.md

πŸ“„ Data Format (data.jsonl)

This repository currently releases the benchmark data file data.jsonl.

  • Each line is one JSON object for a survey topic.
  • Key fields:

Top-Level Fields

| Field | Type | Description |
| --- | --- | --- |
| `id` | int | Survey topic ID (0-indexed) |
| `survey` | str | Full title of the survey paper |
| `survey_topic` | str | Survey topic (same as `survey` in most cases) |
| `survey_topic_path` | str | File path identifier for the survey topic |
| `gt_paper_count` | int | Total number of ground-truth papers in this survey |
| `gt` | dict | Expert-constructed taxonomy tree |
| `pdfs` | list[dict] | List of papers with metadata (contains `core_task`/`summary`/`contributions`) |
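
A minimal sketch of loading the JSONL file and reading the top-level fields above. The field names come from the table; the sample record's values (and the use of an in-memory file instead of the real `data.jsonl`) are placeholders for illustration.

```python
import io
import json

# Hypothetical one-line sample mirroring the documented schema;
# the real data.jsonl has one such JSON object per survey topic.
sample_jsonl = json.dumps({
    "id": 0,
    "survey": "A Survey of Example Methods",
    "survey_topic": "A Survey of Example Methods",
    "survey_topic_path": "example_methods",
    "gt_paper_count": 2,
    "gt": {"Example Methods": {"Subtopic A": {}, "Subtopic B": {}}},
    "pdfs": [
        {"core_task": "...", "summary": "...", "contributions": "..."},
        {"core_task": "...", "summary": "...", "contributions": "..."},
    ],
}) + "\n"

def load_topics(fp):
    """Yield one survey-topic dict per JSONL line, skipping blank lines."""
    for line in fp:
        line = line.strip()
        if line:
            yield json.loads(line)

# With the real file: open("data.jsonl", encoding="utf-8")
topics = list(load_topics(io.StringIO(sample_jsonl)))
topic = topics[0]
print(topic["survey"], topic["gt_paper_count"], len(topic["pdfs"]))
```

Swapping the `StringIO` for `open("data.jsonl", encoding="utf-8")` reads the released file the same way.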

πŸ“ Citation

If you use this code or dataset in your research, please cite our paper:

@misc{zhang2026deepresearchagentsretrieve,
      title={Can Deep Research Agents Retrieve and Organize? Evaluating the Synthesis Gap with Expert Taxonomies}, 
      author={Ming Zhang and Jiabao Zhuang and Wenqing Jing and Kexin Tan and Ziyu Kong and Jingyi Deng and Yujiong Shen and Yuhang Zhao and Ning Luo and Renzhe Zheng and Jiahui Lin and Mingqi Wu and Long Ma and Shihan Dou and Tao Gui and Qi Zhang and Xuanjing Huang},
      year={2026},
      eprint={2601.12369},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2601.12369}, 
}

πŸ“„ License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
