Adapting Vision-Language Models Without Labels: A Comprehensive Survey
Abstract
A comprehensive survey of unsupervised adaptation methods for Vision-Language Models (VLMs) categorizes approaches based on the availability of unlabeled visual data and discusses methodologies, benchmarks, and future research directions.
Vision-Language Models (VLMs) have demonstrated remarkable generalization capabilities across a wide range of tasks. However, their performance often remains suboptimal when directly applied to specific downstream scenarios without task-specific adaptation. To enhance their utility while preserving data efficiency, recent research has increasingly focused on unsupervised adaptation methods that do not rely on labeled data. Despite the growing interest in this area, there remains a lack of a unified, task-oriented survey dedicated to unsupervised VLM adaptation. To bridge this gap, we present a comprehensive and structured overview of the field. We propose a taxonomy based on the availability and nature of unlabeled visual data, categorizing existing approaches into four key paradigms: Data-Free Transfer (no data), Unsupervised Domain Transfer (abundant data), Episodic Test-Time Adaptation (batch data), and Online Test-Time Adaptation (streaming data). Within this framework, we analyze core methodologies and adaptation strategies associated with each paradigm, aiming to establish a systematic understanding of the field. Additionally, we review representative benchmarks across diverse applications and highlight open challenges and promising directions for future research. An actively maintained repository of relevant literature is available at https://github.com/tim-learn/Awesome-LabelFree-VLMs.
Community
Vision-Language Models (VLMs), such as CLIP, have demonstrated impressive zero-shot capabilities; however, in real-world deployments, their performance can decline without adaptation. Gathering labeled data is costly, so unsupervised adaptation has emerged as a powerful alternative.
In this survey, we introduce the first taxonomy of unsupervised VLM adaptation based on the availability of unlabeled visual data. We categorize existing methods into four paradigms (a minimal illustrative sketch of the third follows the list):
1️⃣ Data-Free Transfer
2️⃣ Unsupervised Domain Transfer
3️⃣ Episodic Test-Time Adaptation
4️⃣ Online Test-Time Adaptation
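To make the data regimes concrete, here is a minimal, hedged sketch of episodic test-time adaptation in PyTorch: a CLIP-like model adapts a small set of parameters on a single unlabeled test batch by minimizing prediction entropy, then predicts. The encoder callables, the learnable text-feature residual, and all hyperparameters are illustrative assumptions, not any specific method covered by the survey.

```python
# Illustrative sketch: episodic test-time adaptation of a CLIP-like VLM via
# entropy minimization on one unlabeled batch. `image_encoder`, `text_encoder`,
# and the residual `delta` are hypothetical placeholders, not a surveyed method.
import torch
import torch.nn.functional as F

def episodic_tta(image_encoder, text_encoder, class_prompts, images,
                 steps: int = 10, lr: float = 1e-3):
    """Adapt on a single unlabeled test batch, then return predictions."""
    with torch.no_grad():
        text_feats = F.normalize(text_encoder(class_prompts), dim=-1)   # (C, d)
        image_feats = F.normalize(image_encoder(images), dim=-1)        # (B, d)

    # Learnable residual on the class (text) embeddings: a lightweight,
    # label-free adaptation target (other choices: prompt tokens, norm layers).
    delta = torch.zeros_like(text_feats, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        adapted_text = F.normalize(text_feats + delta, dim=-1)
        logits = 100.0 * image_feats @ adapted_text.t()   # CLIP-style logit scale
        probs = logits.softmax(dim=-1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()
        optimizer.zero_grad()
        entropy.backward()
        optimizer.step()

    with torch.no_grad():
        adapted_text = F.normalize(text_feats + delta, dim=-1)
        return (100.0 * image_feats @ adapted_text.t()).argmax(dim=-1)
```

Surveyed methods typically differ in which parameters they adapt (prompts, adapters, normalization statistics, or feature caches) and in whether the adapted state is reset after each batch (episodic) or carried across the test stream (online).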
This is an automated message from the Librarian Bot. The following papers, similar to this one, were recommended by the Semantic Scholar API:
- Generalizing vision-language models to novel domains: A comprehensive survey (2025)
- The Illusion of Progress? A Critical Look at Test-Time Adaptation for Vision-Language Models (2025)
- Advancing Reliable Test-Time Adaptation of Vision-Language Models under Visual Variations (2025)
- ETTA: Efficient Test-Time Adaptation for Vision-Language Models through Dynamic Embedding Updates (2025)
- Continual Learning for VLMs: A Survey and Taxonomy Beyond Forgetting (2025)
- Multi-Cache Enhanced Prototype Learning for Test-Time Generalization of Vision-Language Models (2025)
- Towards Fine-Grained Adaptation of CLIP via a Self-Trained Alignment Score (2025)