Multi-Dimensional Insights: Benchmarking Real-World Personalization in Large Multimodal Models
Abstract
The rapidly developing field of large multimodal models (LMMs) has led to the emergence of diverse models with remarkable capabilities. However, existing benchmarks fail to comprehensively, objectively, and accurately evaluate whether LMMs align with the diverse needs of humans in real-world scenarios. To bridge this gap, we propose the Multi-Dimensional Insights (MDI) benchmark, which includes over 500 images covering six common scenarios of human life. Notably, the MDI-Benchmark offers two significant advantages over existing evaluations: (1) Each image is accompanied by two types of questions: simple questions to assess the model's understanding of the image, and complex questions to evaluate the model's ability to analyze and reason beyond the basic content. (2) Recognizing that people of different age groups have varying needs and perspectives when faced with the same scenario, our benchmark stratifies questions into three age categories: young people, middle-aged people, and older people. This design allows for a detailed assessment of LMMs' capabilities in meeting the preferences and needs of different age groups. On the MDI-Benchmark, even a strong model like GPT-4o achieves only 79% accuracy on age-related tasks, indicating that existing LMMs still have considerable room for improvement in addressing real-world applications. Looking ahead, we anticipate that the MDI-Benchmark will open new pathways for aligning real-world personalization in LMMs. The MDI-Benchmark data and evaluation code are available at https://mdi-benchmark.github.io/
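To make the reported metric concrete, below is a minimal sketch of how per-age-group accuracy could be computed over benchmark records. The field names (`age_group`, `complexity`, `answer`) and the `model_predict` callable are hypothetical illustrations, not the released evaluation code; see the project page for the actual protocol.

```python
# Minimal sketch: per-(age group, complexity) accuracy on MDI-style records.
# Field names and model_predict() are assumptions, not the official schema.
from collections import defaultdict

def evaluate(records, model_predict):
    """records: iterable of dicts with 'image', 'question', 'answer',
    'age_group' ('young' | 'middle-aged' | 'older'), and 'complexity'
    ('simple' | 'complex'). model_predict(image, question) -> str."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        key = (r["age_group"], r["complexity"])
        total[key] += 1
        if model_predict(r["image"], r["question"]).strip() == r["answer"]:
            correct[key] += 1
    # Accuracy per (age group, complexity) bucket, e.g. ('older', 'complex')
    return {k: correct[k] / total[k] for k in total}
```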
Community
We introduce MDI-Benchmark, the first multimodal benchmark designed to evaluate Large Multimodal Models (LMMs) in practical, real-world scenarios, developed through interviews with individuals across different age groups. It includes over 500 real-world images and 1.2k human-posed questions, spanning six major scenarios: Architecture, Education, Housework, Social Services, Sports, and Transport. Each scenario is divided into three sub-domains with two levels of complexity, incorporating age-specific evaluations to assess the ability of LMMs to deliver personalized responses across different age groups.
An example from our proposed MDI-Benchmark in the Social Services scenario.
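For orientation, a single benchmark item might look like the following hypothetical record, reflecting the scenario / sub-domain / complexity / age-group structure described above. All field names and values here are illustrative; the exact schema of the released data may differ.

```python
# Hypothetical MDI-Benchmark item; the actual released schema may differ.
sample = {
    "image": "images/social_services_0042.jpg",  # one of the 500+ real-world images
    "scenario": "Social Services",               # one of the six major scenarios
    "sub_domain": "public facilities",           # illustrative sub-domain name
    "complexity": "complex",                     # 'simple' or 'complex'
    "age_group": "older",                        # 'young', 'middle-aged', or 'older'
    "question": "Which counter should I go to for pension-related services?",
    "answer": "Counter 3",                       # illustrative ground truth
}
```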