LMMs-Eval: Reality Check on the Evaluation of Large Multimodal Models
Abstract
Advances in large foundation models necessitate wide-coverage, low-cost, and zero-contamination benchmarks. Although language model evaluation has been explored extensively, comprehensive studies on the evaluation of Large Multimodal Models (LMMs) remain limited. In this work, we introduce LMMS-EVAL, a unified and standardized multimodal benchmark framework covering over 50 tasks and more than 10 models to promote transparent and reproducible evaluations. Although LMMS-EVAL offers comprehensive coverage, we find that it still falls short in achieving low cost and zero contamination. To approach this evaluation trilemma, we further introduce LMMS-EVAL LITE, a pruned evaluation toolkit that emphasizes both coverage and efficiency. Additionally, we present Multimodal LIVEBENCH, which draws on continuously updated news and online forums to assess models' generalization abilities in the wild, offering a low-cost and zero-contamination evaluation approach. In summary, our work highlights the importance of the evaluation trilemma and provides practical solutions for navigating its trade-offs when evaluating large multimodal models, paving the way for more effective and reliable benchmarking of LMMs. We open-source our codebase and maintain the LIVEBENCH leaderboard at https://github.com/EvolvingLMMs-Lab/lmms-eval and https://huggingface.co/spaces/lmms-lab/LiveBench.
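For readers who want to try the toolkit, the sketch below shows one way to launch an LMMS-EVAL run from Python by shelling out to its command-line interface. The model name, checkpoint, task list, and flags are illustrative assumptions based on the repository's README and may differ across versions; consult https://github.com/EvolvingLMMs-Lab/lmms-eval for the current interface.

```python
# Minimal sketch: launching an LMMS-EVAL evaluation by invoking its CLI.
# The model wrapper, checkpoint, task names, and flags below are assumptions
# drawn from the repository README and may change between releases.
import subprocess

cmd = [
    "python", "-m", "lmms_eval",
    "--model", "llava",                                     # model wrapper registered in lmms-eval (assumed example)
    "--model_args", "pretrained=liuhaotian/llava-v1.5-7b",  # checkpoint to evaluate (assumed example)
    "--tasks", "mme",                                       # comma-separated task names (assumed example)
    "--batch_size", "1",
    "--log_samples",                                        # keep per-sample outputs for later inspection
    "--output_path", "./logs/",
]
subprocess.run(cmd, check=True)
```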
Community
GitHub : https://github.com/EvolvingLMMs-Lab/lmms-eval
LiveBench Dataset : https://huggingface.co/datasets/lmms-lab/LiveBench
LiveBench Leaderboard : https://huggingface.co/spaces/lmms-lab/LiveBench
LMMs-Eval Lite : https://huggingface.co/datasets/lmms-lab/LMMs-Eval-Lite
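The datasets linked above can be pulled directly from the Hugging Face Hub. The sketch below is a minimal example using the `datasets` library; configuration names are not hard-coded here because LiveBench is refreshed over time, so we list the available configurations first rather than assuming any particular one.

```python
# Minimal sketch: inspecting and loading the LiveBench and LMMs-Eval Lite data
# from the Hugging Face Hub. Configuration and split layouts may change as
# LiveBench is updated, so we query them instead of hard-coding names.
from datasets import get_dataset_config_names, load_dataset

for repo in ("lmms-lab/LiveBench", "lmms-lab/LMMs-Eval-Lite"):
    configs = get_dataset_config_names(repo)   # e.g. monthly releases or per-task subsets
    print(repo, "->", configs)
    ds = load_dataset(repo, configs[0])        # load the first available configuration
    print(ds)
```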
Hi @kcz358, congrats on your work! Thanks for releasing the artifacts on the Hub.
Would you be able to link them to this paper page?
See here for how to do that: https://huggingface.co/docs/hub/en/paper-pages#linking-a-paper-to-a-model-dataset-or-space
Cheers,
Niels
This is an automated message from the Librarian Bot. The following papers, recommended by the Semantic Scholar API, are similar to this paper:
- MMEvalPro: Calibrating Multimodal Benchmarks Towards Trustworthy and Efficient Evaluation (2024)
- VLMEvalKit: An Open-Source Toolkit for Evaluating Large Multi-Modality Models (2024)
- Imp: Highly Capable Large Multimodal Models for Mobile Devices (2024)
- MIA-Bench: Towards Better Instruction Following Evaluation of Multimodal LLMs (2024)
- MMDU: A Multi-Turn Multi-Image Dialog Understanding Benchmark and Instruction-Tuning Dataset for LVLMs (2024)