BiCLIP: Domain Canonicalization via Structured Geometric Transformation
Abstract
Vision-language models can be adapted to specialized domains through a simple bilinear transformation that aligns multimodal features via geometric canonicalization, achieving state-of-the-art results on multiple benchmarks.
Recent advances in vision-language models (VLMs) have demonstrated remarkable zero-shot capabilities, yet adapting these models to specialized domains remains a significant challenge. Building on recent theoretical insights suggesting that independently trained VLMs are related by a canonical transformation, we extend this understanding from models to domains. We hypothesize that image features across disparate domains are related by a canonical geometric transformation that can be recovered from a small set of anchors. Few-shot classification provides a natural setting for this alignment, as the limited labeled samples serve as the anchors required to estimate the transformation. Motivated by this hypothesis, we introduce BiCLIP, a framework that applies a learned bilinear transformation to multimodal features to enhance cross-modal alignment. Our approach is characterized by its extreme simplicity and low parameter footprint. Extensive evaluations across 11 standard benchmarks, including EuroSAT, DTD, and FGVCAircraft, demonstrate that BiCLIP consistently achieves state-of-the-art results. Furthermore, we provide empirical verification of existing geometric findings by analyzing the orthogonality and angular distribution of the learned transformations, confirming that structured alignment is the key to robust domain adaptation. Code is available at https://github.com/QuantitativeImagingLaboratory/BilinearCLIP.
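To make the idea concrete, below is a minimal sketch of what a bilinear re-alignment of frozen CLIP features could look like, assuming the transformation is a single learned $d \times d$ matrix $W$ placed between image and text embeddings and fitted on the few-shot anchors. The class name, training loop, and hyperparameters are illustrative assumptions, not the released BiCLIP implementation.

```python
import torch
import torch.nn as nn

class BilinearAlign(nn.Module):
    """Single learned matrix W placed between frozen CLIP image/text features."""
    def __init__(self, dim: int):
        super().__init__()
        # Initialize W at the identity so the layer starts as plain CLIP cosine scoring.
        self.W = nn.Parameter(torch.eye(dim))

    def forward(self, image_feats: torch.Tensor, text_feats: torch.Tensor) -> torch.Tensor:
        # image_feats: (N, d) L2-normalized image embeddings
        # text_feats:  (C, d) L2-normalized class-prompt embeddings
        # Bilinear similarity: logits[i, j] = image_feats[i] @ W @ text_feats[j]
        return image_feats @ self.W @ text_feats.t()

def fit_on_anchors(align, image_feats, labels, text_feats, steps=200, lr=1e-3):
    # The K labelled shots per class act as the anchors that estimate W.
    opt = torch.optim.Adam(align.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(align(image_feats, text_feats), labels)
        loss.backward()
        opt.step()
    return align
```

Starting from the identity keeps zero-shot CLIP behaviour intact before adaptation, so the few shots only have to estimate the geometric correction for the target domain.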
Community
Is complex prompt tuning always necessary to adapt CLIP to specialized domains? We propose BiCLIP, which uses domain canonicalization to realign image and text features.
By learning a single structured matrix $W$, BiCLIP shows excellent performance across 11 datasets while staying mathematically interpretable. Check out our angular distribution analysis and, more interestingly, the empirical verification of the orthogonality of the learned $W$ matrix!
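For readers who want to reproduce the orthogonality check mentioned above, one simple diagnostic is to measure how far the learned $W$ deviates from $W^\top W = I$ and how tightly its singular values cluster around 1. The metrics below are our assumption, not necessarily the paper's exact protocol.

```python
import torch

def orthogonality_report(W: torch.Tensor) -> dict:
    """Diagnostics for how close a learned W is to an orthogonal matrix."""
    eye = torch.eye(W.shape[0], device=W.device, dtype=W.dtype)
    # Frobenius deviation from W^T W = I; exactly 0 for an orthogonal W.
    frob_dev = torch.linalg.norm(W.T @ W - eye).item()
    # Singular values of an orthogonal matrix are all 1.
    svals = torch.linalg.svdvals(W)
    return {"frobenius_dev": frob_dev,
            "sv_min": svals.min().item(),
            "sv_max": svals.max().item()}
```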
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- CLIPoint3D: Language-Grounded Few-Shot Unsupervised 3D Point Cloud Domain Adaptation (2026)
- Feature Projection Learning for Better Vision-Language Reasoning (2026)
- When Is Rank-1 Enough? Geometry-Guided Initialization for Parameter-Efficient Fine-Tuning (2026)
- Towards Calibrating Prompt Tuning of Vision-Language Models (2026)
- ITO: Images and Texts as One via Synergizing Multiple Alignment and Training-Time Fusion (2026)
- Subspace Alignment for Vision-Language Model Test-time Adaptation (2026)
- PointAlign: Feature-Level Alignment Regularization for 3D Vision-Language Models (2026)
