Bidirectional Likelihood Estimation with Multi-Modal Large Language Models for Text-Video Retrieval
Abstract
A novel retrieval framework using bidirectional likelihood estimation with multi-modal large language models and candidate prior normalization improves text-video retrieval by reducing candidate prior bias and enhancing query-candidate relevance.
Text-Video Retrieval aims to find the most relevant text (or video) candidate given a video (or text) query from large-scale online databases. Recent work leverages multi-modal large language models (MLLMs) to improve retrieval, especially for long or complex query-candidate pairs. However, we observe that the naive application of MLLMs, i.e., retrieval based on candidate likelihood, introduces candidate prior bias, favoring candidates with inherently higher priors over those more relevant to the query. To address this, we propose a novel retrieval framework, Bidirectional Likelihood Estimation with MLLM (BLiM), which leverages both query and candidate likelihoods by training the model to generate text from a given video as well as video features from a given text. Furthermore, we introduce Candidate Prior Normalization (CPN), a simple yet effective training-free score calibration module designed to mitigate candidate prior bias in candidate likelihood. On four Text-Video Retrieval benchmarks, our BLiM equipped with CPN outperforms previous state-of-the-art models by 6.4 R@1 on average, effectively alleviating candidate prior bias and emphasizing query-candidate relevance. Our in-depth analysis across various multi-modal tasks beyond retrieval highlights the broad applicability of CPN, which enhances visual understanding by reducing reliance on textual priors. Code is available at https://github.com/mlvlab/BLiM.
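To make the scoring idea concrete, below is a minimal sketch of bidirectional likelihood scoring with a candidate-prior correction, assuming precomputed log-likelihood matrices from an MLLM. The `retrieve` helper, the subtraction-based form of the calibration, and the `alpha` mixing weight are illustrative assumptions, not the paper's exact formulation; see the official repository above for the actual implementation.

```python
# Minimal sketch of bidirectional likelihood scoring with a candidate-prior
# correction (in the spirit of BLiM + CPN). Not the official implementation:
# the inputs, the subtraction-based calibration, and the alpha weighting are
# illustrative assumptions.
import torch


def retrieve(
    cand_ll: torch.Tensor,        # [Q, C] log P(candidate | query) from the MLLM
    query_ll: torch.Tensor,       # [Q, C] log P(query | candidate) from the MLLM
    cand_prior_ll: torch.Tensor,  # [C]    log P(candidate), query-independent prior
    alpha: float = 0.5,           # assumed mixing weight between the two directions
) -> torch.Tensor:
    """Return retrieval scores of shape [Q, C]; higher means more relevant."""
    # Candidate-prior correction (assumed form of CPN): subtract the
    # unconditional candidate log-likelihood so that candidates with
    # inherently high priors are not favored over query-relevant ones.
    calibrated_cand = cand_ll - cand_prior_ll.unsqueeze(0)
    # Bidirectional combination: mix the calibrated candidate likelihood with
    # the query likelihood, which does not depend on the candidate prior.
    return alpha * calibrated_cand + (1.0 - alpha) * query_ll


if __name__ == "__main__":
    Q, C = 4, 10  # toy example: 4 queries, 10 candidates
    scores = retrieve(torch.randn(Q, C), torch.randn(Q, C), torch.randn(C))
    print(scores.argmax(dim=1))  # R@1 retrieval: best candidate per query
```

Intuitively, in this sketch the calibration penalizes candidates that the model would generate regardless of the query, while the query-likelihood direction rewards candidates that best explain the query.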
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- LeAdQA: LLM-Driven Context-Aware Temporal Grounding for Video Question Answering (2025)
- Quantifying and Narrowing the Unknown: Interactive Text-to-Video Retrieval via Uncertainty Minimization (2025)
- Universal Video Temporal Grounding with Generative Multi-modal Large Language Models (2025)
- Q-Frame: Query-aware Frame Selection and Multi-Resolution Adaptation for Video-LLMs (2025)
- T2VParser: Adaptive Decomposition Tokens for Partial Alignment in Text to Video Retrieval (2025)
- From Query to Explanation: Uni-RAG for Multi-Modal Retrieval-Augmented Learning in STEM (2025)
- Q2E: Query-to-Event Decomposition for Zero-Shot Multilingual Text-to-Video Retrieval (2025)