arxiv:2508.01780

LiveMCPBench: Can Agents Navigate an Ocean of MCP Tools?

Published on Aug 3 · Submitted by jiawei-ucas on Aug 6

Abstract

AI-generated summary: LiveMCPBench provides a comprehensive benchmark for evaluating LLM agents across a diverse set of real-world tasks in the MCP ecosystem, using a scalable evaluation pipeline and adaptive judging framework.

With the rapid development of Model Context Protocol (MCP), the number of MCP servers has surpassed 10,000. However, existing MCP benchmarks are limited to single-server settings with only a few tools, hindering effective evaluation of agent capabilities in large-scale, real-world scenarios. To address this limitation, we present LiveMCPBench, the first comprehensive benchmark comprising 95 real-world tasks grounded in the MCP ecosystem, designed to evaluate LLM agents at scale across diverse servers. To support a scalable and reproducible evaluation pipeline in large-scale MCP environments, we curate LiveMCPTool, a diverse and readily deployable collection of 70 MCP servers and 527 tools. Furthermore, we introduce LiveMCPEval, an LLM-as-a-Judge framework that enables automated and adaptive evaluation in dynamic, time-varying task environments, achieving 81% agreement with human reviewers. Finally, we propose the MCP Copilot Agent, a multi-step agent that routes tools for dynamic planning and executes tools for API interaction across the entire LiveMCPTool suite. Our evaluation covers 10 leading models, with the best-performing model (Claude-Sonnet-4) reaching a 78.95% success rate. However, we observe large performance variance across models, and several widely-used models perform poorly in LiveMCPBench's complex, tool-rich environments. Overall, LiveMCPBench offers the first unified framework for benchmarking LLM agents in realistic, tool-rich, and dynamic MCP environments, laying a solid foundation for scalable and reproducible research on agent capabilities. Our code and data will be publicly available at https://icip-cas.github.io/LiveMCPBench.
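For intuition, here is a minimal, self-contained sketch of the kind of tool-routing loop such a multi-step agent performs: a registry of MCP-style tools is filtered for a task, and the selected tools are executed in sequence. All names (Tool, route_tools, run_agent), the keyword-overlap router, and the demo tools are illustrative assumptions, not taken from the LiveMCPBench or MCP Copilot Agent code; a real system would route with embeddings or an LLM planner and call actual MCP servers.

```python
# Hypothetical sketch of a multi-step tool-routing agent loop over an MCP-style
# tool registry. All names and logic here are illustrative, NOT the paper's code.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Tool:
    server: str                  # which MCP server exposes the tool
    name: str                    # tool identifier
    description: str             # natural-language description used for routing
    run: Callable[[dict], str]   # stand-in for the underlying API call


def route_tools(task: str, registry: List[Tool], top_k: int = 5) -> List[Tool]:
    """Naive keyword-overlap router: rank tools by shared words with the task.
    A realistic router would use embeddings or an LLM planner instead."""
    task_words = set(task.lower().split())
    scored = sorted(
        registry,
        key=lambda t: -len(task_words & set(t.description.lower().split())),
    )
    return scored[:top_k]


def run_agent(task: str, registry: List[Tool], max_steps: int = 5) -> List[str]:
    """Select candidate tools, then execute them step by step within a budget."""
    transcript: List[str] = []
    for step, tool in enumerate(route_tools(task, registry)[:max_steps]):
        result = tool.run({"task": task, "step": step})
        transcript.append(f"[{tool.server}/{tool.name}] {result}")
    return transcript


if __name__ == "__main__":
    demo_registry = [
        Tool("weather-server", "get_forecast", "look up the weather forecast",
             lambda args: "Sunny, 24°C"),
        Tool("calendar-server", "add_event", "add an event to the calendar",
             lambda args: "Event created"),
    ]
    for line in run_agent("add tomorrow's weather check to my calendar", demo_registry):
        print(line)
```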

Community

Comment from the paper author (submitter):

LiveMCPBench is the first comprehensive benchmark designed to evaluate LLM agents at scale across diverse Model Context Protocol (MCP) servers. It comprises 95 real-world tasks grounded in the MCP ecosystem, challenging agents to use a wide range of tools for everyday scenarios in complex, tool-rich, and dynamic environments. To support scalable and reproducible evaluation, LiveMCPBench is complemented by LiveMCPTool (a diverse collection of 70 MCP servers and 527 tools) and LiveMCPEval (an LLM-as-a-Judge framework for automated and adaptive evaluation). Together, these components provide a unified framework for evaluating LLM agents in realistic, tool-rich, and dynamic MCP environments, laying a solid foundation for scalable and reproducible research on agent capabilities.
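As a rough illustration of the LLM-as-a-Judge idea behind LiveMCPEval, the sketch below asks a judge model to label an agent trajectory as success or failure and then measures agreement with human labels. The prompt, the function names (judge_trajectory, agreement_with_humans), and the fake_llm stand-in are hypothetical assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of an LLM-as-a-Judge evaluation loop in the spirit of
# LiveMCPEval. The prompt and all names are illustrative placeholders only.
from typing import Callable, List


def judge_trajectory(task: str, trajectory: List[str],
                     llm: Callable[[str], str]) -> bool:
    """Ask a judge model whether the agent's tool-call trajectory solved the task."""
    prompt = (
        "You are evaluating an agent in a dynamic tool environment.\n"
        f"Task: {task}\n"
        "Trajectory:\n" + "\n".join(trajectory) +
        "\nAnswer strictly 'success' or 'failure'."
    )
    return llm(prompt).strip().lower().startswith("success")


def agreement_with_humans(auto: List[bool], human: List[bool]) -> float:
    """Fraction of tasks where the automatic judge matches the human label."""
    return sum(a == h for a, h in zip(auto, human)) / len(human)


if __name__ == "__main__":
    fake_llm = lambda prompt: "success"  # stand-in for a real judge model call
    auto_labels = [judge_trajectory("book a flight", ["[travel/search] found flight"], fake_llm)]
    print(agreement_with_humans(auto_labels, [True]))  # 1.0 on this toy example
```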


Models citing this paper: 0 · Datasets citing this paper: 1 · Spaces citing this paper: 0 · Collections including this paper: 3