ChartMuseum: Testing Visual Reasoning Capabilities of Large Vision-Language Models
Abstract
A new benchmark, ChartMuseum, shows that large vision-language models fall well short of human accuracy on chart question answering, particularly on visually complex questions.
Chart understanding presents a unique challenge for large vision-language models (LVLMs), as it requires the integration of sophisticated textual and visual reasoning capabilities. However, current LVLMs exhibit a notable imbalance between these skills, falling short on visual reasoning that is difficult to perform in text. We conduct a case study using a synthetic dataset solvable only through visual reasoning and show that model performance degrades significantly with increasing visual complexity, while human performance remains robust. We then introduce ChartMuseum, a new Chart Question Answering (QA) benchmark containing 1,162 expert-annotated questions spanning multiple reasoning types, curated from real-world charts across 184 sources, specifically built to evaluate complex visual and textual reasoning. Unlike prior chart understanding benchmarks -- where frontier models perform similarly and near saturation -- our benchmark exposes a substantial gap between model and human performance, while effectively differentiating model capabilities: although humans achieve 93% accuracy, the best-performing model Gemini-2.5-Pro attains only 63.0%, and the leading open-source LVLM Qwen2.5-VL-72B-Instruct achieves only 38.5%. Moreover, on questions requiring primarily visual reasoning, all models experience a 35%-55% performance drop from text-reasoning-heavy question performance. Lastly, our qualitative error analysis reveals specific categories of visual reasoning that are challenging for current LVLMs.
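The benchmark evaluates LVLMs by posing free-form questions about chart images. As a rough illustration only, and not the authors' evaluation harness, here is a minimal sketch of sending a single chart question to a vision-capable model through an OpenAI-compatible chat API; the model name, image path, and question text are placeholders.

```python
# Sketch: ask one chart question of a vision-capable model via the OpenAI chat API.
# Model name, image path, and question are placeholders, not values from the paper.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

with open("chart.png", "rb") as f:  # placeholder chart image
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # swap in whichever vision-capable model is being evaluated
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Which region shows the largest year-over-year drop?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```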
Community
Introducing ChartMuseum 🖼️, testing complex visual reasoning with diverse real-world charts!
✍🏻 Entirely human-written questions by 13 CS researchers
👀 Emphasis on visual reasoning that is hard to verbalize via text CoTs
📊 Humans reach 93%, but only 63% from Gemini-2.5-Pro & 38.5% from Qwen2.5-VL-72B
Leaderboard available at: https://chartmuseum-leaderboard.github.io
Existing chart QA benchmarks have limitations:
❌ Limited real-world chart sources
❌ Questions created with an LLM in the loop
❌ Saturated or similar performance across frontier models
❌ Most questions can be answered by a text-only LLM given text extracted from the charts
ChartMuseum (see the loading sketch below):
✅ 184 chart sources
✅ Entirely human-written questions
✅ Clear distinctions in model performance
✅ Most questions rely on visual reasoning that is hard to verbalize in text
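For readers who want to try the benchmark, here is a minimal sketch of loading it with the `datasets` library. The repository id and split name are assumptions for illustration; check the dataset card linked from this page for the canonical values.

```python
# Sketch: load ChartMuseum from the Hugging Face Hub and inspect one example.
# The repo id and split below are assumptions, not confirmed by the paper page.
from datasets import load_dataset

dataset = load_dataset("lytang/ChartMuseum", split="test")  # hypothetical repo id / split

example = dataset[0]
print(example.keys())  # inspect available fields (e.g., image, question, answer)
```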
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- VisualPuzzles: Decoupling Multimodal Reasoning Evaluation from Domain Knowledge (2025)
- VisuLogic: A Benchmark for Evaluating Visual Reasoning in Multi-modal Large Language Models (2025)
- Benchmarking Multimodal Mathematical Reasoning with Explicit Visual Dependency (2025)
- RVTBench: A Benchmark for Visual Reasoning Tasks (2025)
- CameraBench: Benchmarking Visual Reasoning in MLLMs via Photography (2025)
- VCRBench: Exploring Long-form Causal Reasoning Capabilities of Large Video Language Models (2025)
- VGRP-Bench: Visual Grid Reasoning Puzzle Benchmark for Large Vision-Language Models (2025)