ShortV: Efficient Multimodal Large Language Models by Freezing Visual Tokens in Ineffective Layers
Abstract
Multimodal Large Language Models (MLLMs) suffer from high computational costs due to their massive size and the large number of visual tokens. In this paper, we investigate layer-wise redundancy in MLLMs by introducing a novel metric, Layer Contribution (LC), which quantifies the impact of a layer's transformations on visual and text tokens separately. LC is computed by measuring the divergence in model output that results from removing the layer's transformations on the specified tokens. Our pilot experiment reveals that many layers of MLLMs contribute minimally to the processing of visual tokens. Motivated by this observation, we propose ShortV, a training-free method that leverages LC to identify ineffective layers and freezes visual token updates in these layers. Experiments show that ShortV can freeze visual tokens in approximately 60\% of the MLLM layers, thereby dramatically reducing the computational cost of updating visual tokens. For example, it achieves a 50\% reduction in FLOPs on LLaVA-NeXT-13B while maintaining superior performance. The code will be publicly available at https://github.com/icip-cas/ShortV
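The abstract describes LC as the output divergence caused by removing a layer's transformation on a chosen set of tokens, and ShortV as freezing visual-token updates in the lowest-LC layers. Below is a minimal NumPy sketch of that idea on a toy residual network; the function names, the toy layers, and the use of KL divergence as the output-divergence measure are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) between two output distributions
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def run_layers(hidden, layers, visual_mask, frozen_layers=frozenset()):
    """Apply each layer; in frozen layers, visual tokens keep their
    input hidden states (identity), while text tokens are still updated."""
    h = hidden.copy()
    for i, layer in enumerate(layers):
        out = layer(h)
        if i in frozen_layers:
            out[visual_mask] = h[visual_mask]  # freeze visual tokens
        h = out
    return h

def layer_contribution(hidden, layers, visual_mask, head):
    """LC of layer i for visual tokens: divergence of the final-token
    output distribution when layer i's visual-token update is removed."""
    base = softmax(head(run_layers(hidden, layers, visual_mask)[-1]))
    lc = []
    for i in range(len(layers)):
        ablated = run_layers(hidden, layers, visual_mask, frozen_layers={i})
        lc.append(kl_divergence(base, softmax(head(ablated[-1]))))
    return lc

# Toy setup: 4 visual tokens followed by 2 text tokens, 4 residual layers.
rng = np.random.default_rng(0)
n_tokens, d, vocab = 6, 8, 10
hidden = rng.normal(size=(n_tokens, d))
visual_mask = np.zeros(n_tokens, dtype=bool)
visual_mask[:4] = True
weights = [rng.normal(scale=0.1, size=(d, d)) for _ in range(4)]
layers = [lambda h, W=W: h + h @ W for W in weights]  # residual "layers"
w_head = rng.normal(size=(d, vocab))
head = lambda h: h @ w_head

lc_scores = layer_contribution(hidden, layers, visual_mask, head)
# Layers with the smallest LC would be candidates for freezing visual tokens.
```

In this sketch, a ShortV-style inference pass would simply call `run_layers` with `frozen_layers` set to the indices of the lowest-LC layers, skipping the visual-token compute in those layers.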
Community
This paper identifies significant layer redundancy in MLLMs, and introduces a training-free method to enhance MLLM efficiency.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- SAISA: Towards Multimodal Large Language Models with Both Training and Inference Efficiency (2025)
- Stop Looking for Important Tokens in Multimodal Language Models: Duplication Matters More (2025)
- PLPHP: Per-Layer Per-Head Vision Token Pruning for Efficient Large Vision-Language Models (2025)
- Hybrid-Level Instruction Injection for Video Token Compression in Multi-modal Large Language Models (2025)
- Silent Hazards of Token Reduction in Vision-Language Models: The Hidden Impact on Consistency (2025)
- The Hidden Life of Tokens: Reducing Hallucination of Large Vision-Language Models via Visual Information Steering (2025)
- FoNE: Precise Single-Token Number Embeddings via Fourier Features (2025)