arXiv:2508.05547

Adapting Vision-Language Models Without Labels: A Comprehensive Survey

Published on Aug 7 · Submitted by hdong51 on Aug 11
AI-generated summary

A comprehensive survey of unsupervised adaptation methods for Vision-Language Models (VLMs) categorizes approaches based on the availability of unlabeled visual data and discusses methodologies, benchmarks, and future research directions.

Abstract

Vision-Language Models (VLMs) have demonstrated remarkable generalization capabilities across a wide range of tasks. However, their performance often remains suboptimal when directly applied to specific downstream scenarios without task-specific adaptation. To enhance their utility while preserving data efficiency, recent research has increasingly focused on unsupervised adaptation methods that do not rely on labeled data. Despite the growing interest in this area, there remains a lack of a unified, task-oriented survey dedicated to unsupervised VLM adaptation. To bridge this gap, we present a comprehensive and structured overview of the field. We propose a taxonomy based on the availability and nature of unlabeled visual data, categorizing existing approaches into four key paradigms: Data-Free Transfer (no data), Unsupervised Domain Transfer (abundant data), Episodic Test-Time Adaptation (batch data), and Online Test-Time Adaptation (streaming data). Within this framework, we analyze core methodologies and adaptation strategies associated with each paradigm, aiming to establish a systematic understanding of the field. Additionally, we review representative benchmarks across diverse applications and highlight open challenges and promising directions for future research. An actively maintained repository of relevant literature is available at https://github.com/tim-learn/Awesome-LabelFree-VLMs.

Community

Paper author · Paper submitter

Vision-Language Models (VLMs), such as CLIP, have demonstrated impressive zero-shot capabilities; however, in real-world deployments, their performance can decline without adaptation. Gathering labeled data is costly, so unsupervised adaptation has emerged as a powerful alternative.
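
Since the comment mentions CLIP's zero-shot capabilities, here is a minimal sketch of the zero-shot baseline that the surveyed adaptation methods start from. It is an illustration, not code from the paper; the checkpoint name, class labels, and image path are placeholder assumptions.

```python
# Zero-shot CLIP classification: score an image against text prompts.
# Checkpoint, labels, and image path are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["cat", "dog", "car"]      # hypothetical downstream classes
image = Image.open("example.jpg")   # hypothetical test image

inputs = processor(text=[f"a photo of a {c}" for c in labels],
                   images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text cosine similarities scaled by the
# learned temperature; softmax turns them into class probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
print(labels[probs.argmax().item()])
```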

In this survey, we introduce the first taxonomy of unsupervised VLM adaptation based on the availability of unlabeled visual data. We categorize existing methods into four paradigms (a toy sketch of the test-time setting follows the list):
1️⃣ Data-Free Transfer
2️⃣ Unsupervised Domain Transfer
3️⃣ Episodic Test-Time Adaptation
4️⃣ Online Test-Time Adaptation
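
As one concrete (hypothetical) instance of the test-time paradigms, the sketch below adapts CLIP on unlabeled test batches by minimizing prediction entropy, in the spirit of TENT-style methods: only the image encoder's LayerNorm affine parameters are updated, and text features are computed once. It reuses `model`, `processor`, and `labels` from the sketch above; it is a toy illustration of the setting, not the survey's own method.

```python
# Entropy-minimization test-time adaptation (TENT-style sketch).
# Assumes `model`, `processor`, and `labels` from the previous snippet.
import torch
import torch.nn.functional as F

# Freeze everything, then unfreeze LayerNorm scale/shift in the vision tower.
for p in model.parameters():
    p.requires_grad_(False)
norm_params = []
for m in model.vision_model.modules():
    if isinstance(m, torch.nn.LayerNorm):
        m.weight.requires_grad_(True)
        m.bias.requires_grad_(True)
        norm_params += [m.weight, m.bias]
optimizer = torch.optim.SGD(norm_params, lr=1e-3)

# Text features are computed once; only the image side adapts.
text_inputs = processor(text=[f"a photo of a {c}" for c in labels],
                        return_tensors="pt", padding=True)
with torch.no_grad():
    text_feats = F.normalize(model.get_text_features(**text_inputs), dim=-1)

def adapt_step(pixel_values):
    """One online update on a batch of unlabeled test images."""
    img_feats = F.normalize(
        model.get_image_features(pixel_values=pixel_values), dim=-1)
    logits = model.logit_scale.exp() * img_feats @ text_feats.t()
    # Minimize the mean entropy of the batch's predictions.
    entropy = -(logits.softmax(-1) * logits.log_softmax(-1)).sum(-1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return logits.detach()
```

In the episodic setting, this update would run for a few steps on a single batch and then reset; in the online setting, the updates accumulate across the test stream.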

[Figure: taxonomy.png — the proposed taxonomy of unsupervised VLM adaptation across the four paradigms]


