LLaVE: Large Language and Vision Embedding Models with Hardness-Weighted Contrastive Learning
Abstract
Universal multimodal embedding models play a critical role in tasks such as interleaved image-text retrieval, multimodal RAG, and multimodal clustering. However, our empirical results indicate that existing LMM-based embedding models trained with the standard InfoNCE loss exhibit a high degree of overlap between the similarity distributions of positive and negative pairs, making it difficult to distinguish hard negative pairs effectively. To address this issue, we propose a simple yet effective framework that dynamically improves the embedding model's representation learning for negative pairs based on their discriminative difficulty. Within this framework, we train a series of models, named LLaVE, and evaluate them on the MMEB benchmark, which covers 4 meta-tasks and 36 datasets. Experimental results show that LLaVE establishes stronger baselines that achieve state-of-the-art (SOTA) performance while demonstrating strong scalability and efficiency. Specifically, LLaVE-2B surpasses the previous SOTA 7B models, while LLaVE-7B achieves a further performance improvement of 6.2 points. Although LLaVE is trained only on image-text data, it generalizes to text-video retrieval in a zero-shot manner and achieves strong performance, demonstrating its remarkable potential for transfer to other embedding tasks.
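The core idea described above, re-weighting negative pairs by how hard they are to discriminate, can be sketched as a modified InfoNCE loss. The sketch below is illustrative only: the function name, the exponential weighting scheme, and the `alpha` hardness parameter are assumptions for the example, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def hardness_weighted_infonce(query, targets, temperature=0.05, alpha=1.0):
    """Illustrative contrastive loss that up-weights hard negatives.

    query:   (B, D) query embeddings
    targets: (B, D) target embeddings; targets[i] is the positive
             for query[i], all other rows act as in-batch negatives.
    alpha:   hardness scaling; alpha=0 recovers standard InfoNCE.
    """
    q = F.normalize(query, dim=-1)
    t = F.normalize(targets, dim=-1)
    sim = q @ t.T / temperature                       # (B, B) similarity logits
    pos_mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)

    # Weight each negative by its similarity to the query: harder
    # (more query-similar) negatives receive larger weights, so the
    # model focuses its learning on the pairs it confuses most.
    # The positive term is left unweighted (weight = 1).
    with torch.no_grad():
        weights = torch.exp(alpha * sim).masked_fill(pos_mask, 1.0)

    # Adding log(weights) to the logits is equivalent to multiplying
    # each exp(sim) term in the InfoNCE denominator by its weight.
    logits = sim + torch.log(weights)
    labels = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(logits, labels)
```

With `alpha=0` every weight is 1 and the loss reduces to plain in-batch InfoNCE, which makes the hardness term easy to ablate.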
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- ABC: Achieving Better Control of Multimodal Embeddings using VLMs (2025)
- Training Sparse Mixture Of Experts Text Embedding Models (2025)
- Following the Autoregressive Nature of LLM Embeddings via Compression and Alignment (2025)
- Joint Fusion and Encoding: Advancing Multimodal Retrieval from the Ground Up (2025)
- AlignVLM: Bridging Vision and Language Latent Spaces for Multimodal Understanding (2025)
- Re-Align: Aligning Vision Language Models via Retrieval-Augmented Direct Preference Optimization (2025)
- DRAMA: Diverse Augmentation from Large Language Models to Smaller Dense Retrievers (2025)