On Large Multimodal Models as Open-World Image Classifiers
Abstract
Traditional image classification requires a predefined list of semantic categories. In contrast, Large Multimodal Models (LMMs) can sidestep this requirement by classifying images directly using natural language (e.g., answering the prompt "What is the main object in the image?"). Despite this remarkable capability, most existing studies of LMM classification performance are surprisingly limited in scope, often assuming a closed-world setting with a predefined set of categories. In this work, we address this gap by thoroughly evaluating LMM classification performance in a truly open-world setting. We first formalize the task and introduce an evaluation protocol, defining various metrics to assess the alignment between predicted and ground-truth classes. We then evaluate 13 models across 10 benchmarks, encompassing prototypical, non-prototypical, fine-grained, and very fine-grained classes, demonstrating the challenges LMMs face in this task. Further analyses based on the proposed metrics reveal the types of errors LMMs make, highlighting challenges related to granularity and fine-grained capabilities, and showing how tailored prompting and reasoning can alleviate them.
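To make the open-world evaluation setup concrete, the sketch below shows one simple way to score a free-form LMM answer against a ground-truth class name using text matching. This is a hypothetical illustration, not the paper's actual protocol or metrics; the `normalize`, `exact_match`, and `containment` helpers are assumptions introduced here for clarity.

```python
import re

# Hypothetical sketch: scoring a free-form LMM answer against a
# ground-truth class. The paper defines its own alignment metrics;
# these helpers only illustrate the general idea.

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and drop articles for fair comparison."""
    text = re.sub(r"[^\w\s]", "", text.lower().strip())
    words = [w for w in text.split() if w not in {"a", "an", "the"}]
    return " ".join(words)

def exact_match(prediction: str, label: str) -> bool:
    """True only if the normalized answer equals the class name."""
    return normalize(prediction) == normalize(label)

def containment(prediction: str, label: str) -> bool:
    """True if the ground-truth class name appears inside the answer."""
    return normalize(label) in normalize(prediction)

# Example: a free-form answer that names the correct class in a sentence.
pred = "The main object is a golden retriever."
print(exact_match(pred, "golden retriever"))   # False: sentence != class name
print(containment(pred, "golden retriever"))   # True: class name is mentioned
```

A containment-style check is more forgiving than exact match for open-ended answers, but it still misses semantically correct responses phrased at a different granularity (e.g., "dog" for "golden retriever"), which is exactly the kind of error the paper's metrics are designed to surface.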
Community
The following similar papers were recommended by the Semantic Scholar API:
- TLAC: Two-stage LMM Augmented CLIP for Zero-Shot Classification (2025)
- Compositional Caching for Training-free Open-vocabulary Attribute Detection (2025)
- Training-Free Personalization via Retrieval and Reasoning on Fingerprints (2025)
- Fine-Grained Open-Vocabulary Object Detection with Fined-Grained Prompts: Task, Dataset and Benchmark (2025)
- Contrast-Aware Calibration for Fine-Tuned CLIP: Leveraging Image-Text Alignment (2025)
- 4D-Bench: Benchmarking Multi-modal Large Language Models for 4D Object Understanding (2025)
- OSLoPrompt: Bridging Low-Supervision Challenges and Open-Set Domain Generalization in CLIP (2025)