MMMR: Benchmarking Massive Multi-Modal Reasoning Tasks
Abstract
The MMMR benchmark evaluates multi-modal reasoning in MLLMs by assessing thinking quality through diverse reasoning types and a modular evaluation pipeline.
Recent advances in Multi-Modal Large Language Models (MLLMs) have enabled unified processing of language, vision, and structured inputs, opening the door to complex tasks such as logical deduction, spatial reasoning, and scientific analysis. Despite their promise, the reasoning capabilities of MLLMs, particularly those augmented with intermediate thinking traces (MLLMs-T), remain poorly understood and lack standardized evaluation benchmarks. Existing work focuses primarily on perception or final-answer correctness, offering limited insight into how models reason or fail across modalities. To address this gap, we introduce MMMR, a new benchmark designed to rigorously evaluate multi-modal reasoning with explicit thinking. MMMR comprises (1) a high-difficulty dataset of 1,083 questions spanning six diverse reasoning types with symbolic depth and multi-hop demands, and (2) a modular Reasoning Trace Evaluation Pipeline (RTEP) for assessing reasoning quality beyond accuracy through metrics such as relevance, consistency, and structured error annotations. Empirical results show that MLLMs-T generally outperform their non-thinking counterparts, but even top models like Claude-3.7-Sonnet and Gemini-2.5 Pro suffer from reasoning pathologies such as inconsistency and overthinking. This benchmark reveals persistent gaps between accuracy and reasoning quality and provides an actionable evaluation pipeline for future model development. Overall, MMMR offers a scalable foundation for evaluating, comparing, and improving the next generation of multi-modal reasoning systems.
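To make the distinction between answer accuracy and reasoning quality concrete, here is a minimal, hypothetical sketch of how a reasoning trace with structured error annotations might be represented and scored. The data class, field names, error tags, and scoring rule are illustrative assumptions for this page, not the RTEP implementation described in the paper.

```python
from dataclasses import dataclass, field

@dataclass
class ReasoningTrace:
    """One model response: intermediate thinking steps plus the final answer."""
    question_id: str
    steps: list[str]                 # the model's explicit thinking trace
    final_answer: str
    gold_answer: str
    error_tags: list[str] = field(default_factory=list)  # e.g. ["inconsistency", "overthinking"]

def score_trace(trace: ReasoningTrace) -> dict[str, float]:
    """Toy scoring: exact-match accuracy, plus a consistency score that is
    penalised for each annotated inconsistency error (illustrative only)."""
    accuracy = float(trace.final_answer.strip().lower() == trace.gold_answer.strip().lower())
    consistency = max(0.0, 1.0 - 0.25 * trace.error_tags.count("inconsistency"))
    return {"accuracy": accuracy, "consistency": consistency}

if __name__ == "__main__":
    trace = ReasoningTrace(
        question_id="demo-001",
        steps=["Read the diagram.", "Count the intersections.", "There are 4."],
        final_answer="4",
        gold_answer="4",
    )
    print(score_trace(trace))  # {'accuracy': 1.0, 'consistency': 1.0}
```

A trace can thus be "correct" on accuracy while still scoring poorly on consistency, which is the gap the benchmark's evaluation pipeline is designed to expose.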
Community
Project Page: https://mmmr-benchmark.github.io/
Dataset: https://huggingface.co/datasets/csegirl/MMMR
Code: https://github.com/CsEgir/MMMR/tree/master
Leaderboard: https://mmmr-benchmark.github.io/#Leaderboard
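Since the dataset is hosted on the Hugging Face Hub (linked above), it can presumably be pulled with the `datasets` library as sketched below; the split and column names are not specified here, so check the dataset card before relying on them.

```python
# Minimal sketch, assuming the dataset loads with the default configuration.
from datasets import load_dataset

mmmr = load_dataset("csegirl/MMMR")          # repo id taken from the dataset link above
print(mmmr)                                  # inspect available splits and columns

first_split = list(mmmr.keys())[0]
sample = next(iter(mmmr[first_split]))       # peek at one example
print(sample.keys())                         # actual field names come from the dataset card
```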
The following similar papers were recommended by the Semantic Scholar API (via Librarian Bot):
- OCR-Reasoning Benchmark: Unveiling the True Capabilities of MLLMs in Complex Text-Rich Image Reasoning (2025)
- MTR-Bench: A Comprehensive Benchmark for Multi-Turn Reasoning Evaluation (2025)
- Seeing Beyond Words: MatVQA for Challenging Visual-Scientific Reasoning in Materials Science (2025)
- Benchmarking Multimodal Mathematical Reasoning with Explicit Visual Dependency (2025)
- ChartMuseum: Testing Visual Reasoning Capabilities of Large Vision-Language Models (2025)
- VisualPuzzles: Decoupling Multimodal Reasoning Evaluation from Domain Knowledge (2025)
- RVTBench: A Benchmark for Visual Reasoning Tasks (2025)