FlowCut: Rethinking Redundancy via Information Flow for Efficient Vision-Language Models

Jintao Tong1, Wenwei Jin2, Pengda Qin2, Anqi Li3, Yixiong Zou1βœ‰, Yuhong Li2βœ‰, Yuhua Li1, Ruixuan Li1

1School of Computer Science and Technology, Huazhong University of Science and Technology
2Xiaohongshu Inc., 3Institute of Information Science, Beijing Jiaotong University


πŸ’‘ Highlights

TL;DR: To address the inefficiency caused by excessive visual tokens in LVLMs, we propose a unified, bottom-up perspective based on information flow, revealing how redundancy emerges dynamically, and introduce FlowCut, which aligns pruning decisions with the model's inherent behavior and outperforms existing approaches.

πŸ›  Preparation

Our code is easy to use.

  1. Clone the LLaVA repository.
git clone https://github.com/haotian-liu/LLaVA.git
cd LLaVA
  2. Install the LLaVA environment.
conda create -n llava python=3.10 -y
conda activate llava
pip install --upgrade pip  
pip install -e .
pip install flash-attn --no-build-isolation
  3. For general usage, install the package from PyPI:
pip install flowcut

For development, you can install the package by cloning the repository and running the following command:

git clone https://github.com/TungChintao/FlowCut
cd flowcut
pip install -e .
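
A quick sanity check that the package is importable (the flowcut entry point is the same one used in the Quick Start below):

## run in any Python interpreter after installation
from flowcut import flowcut
print(flowcut)  ## should print the flowcut entry-point callable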

The file organization is as follows:

β”œβ”€β”€ LLaVA-main
    β”œβ”€β”€ flowcut
    β”œβ”€β”€ llava
    β”œβ”€β”€ playground
    └── scripts

πŸš€ Quick Start

from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path
from llava.eval.run_llava import eval_model
from flowcut import flowcut
model_path = "liuhaotian/llava-v1.5-7b"

tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=get_model_name_from_path(model_path)
)
## FlowCut retains 64 visual tokens
model = flowcut(model, target_num=64)
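
Once pruning is enabled, the model is used like any other LLaVA-1.5 model. Below is a minimal generation sketch built on LLaVA's standard utilities (process_images, tokenizer_image_token, conv_templates); the image path and question are placeholders, and the generation arguments may need adjusting for your setup.

import torch
from PIL import Image
from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN
from llava.conversation import conv_templates
from llava.mm_utils import process_images, tokenizer_image_token

## build a llava-v1.5 style prompt containing the image placeholder token
conv = conv_templates["llava_v1"].copy()
conv.append_message(conv.roles[0], DEFAULT_IMAGE_TOKEN + "\nWhat is shown in this image?")
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()

## preprocess the image and tokenize the prompt
image = Image.open("view.jpg").convert("RGB")
image_tensor = process_images([image], image_processor, model.config).to(model.device, dtype=torch.float16)
input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0).to(model.device)

## generate; FlowCut prunes visual tokens inside the forward pass
with torch.inference_mode():
    output_ids = model.generate(input_ids, images=image_tensor, do_sample=False, max_new_tokens=128)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip())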

πŸ“– Evaluation

The evaluation code follows the structure of LLaVA or lmms-eval. After loading the model, simply add the two lines shown below:

## Load LLaVA Model (code from llava.eval.model_vqa_loader)
tokenizer, model, image_processor, context_len = load_pretrained_model(model_path, args.model_base, model_name)
## add FlowCut
from flowcut import flowcut
model = flowcut(model, target_num=64)

Script templates (please follow the detailed instructions in LLaVA-Evaluation):

bash scripts/v1_5/eval/[Benchmark].sh

Examples:

CUDA_VISIBLE_DEVICES=0 bash scripts/v1_5/eval/mme.sh
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 bash scripts/v1_5/eval/vqav2.sh
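
If you evaluate with lmms-eval instead of the LLaVA scripts, the same two lines go right after the model is loaded inside its LLaVA model wrapper. A rough sketch follows; the module path and attribute name (lmms_eval/models/llava.py, self._model) are assumptions about lmms-eval's layout and may differ across versions.

## inside lmms-eval's LLaVA wrapper, immediately after load_pretrained_model(...) returns
from flowcut import flowcut
self._model = flowcut(self._model, target_num=64)  ## retain 64 visual tokens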

🎯 Training

The training code follows the structure of LLaVA. After loading the model, simply add two lines as shown below:

## Load LLaVA Model (code from llava.train)
## ... model loading code (unchanged) ...
## add FlowCut
from flowcut import flowcut
model = flowcut(model, target_num=64)
## training
trainer = LLaVATrainer(model=model,
                tokenizer=tokenizer,
                args=training_args,
                **data_module)
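
The rest of LLaVA's training entry point stays unchanged; a minimal continuation of the snippet above (FlowCut prunes visual tokens on the fly during forward passes, so no other changes to the training loop are needed):

trainer.train()
trainer.save_state()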

πŸ”‘ License

πŸ“Œ Citation

  • If you find this project useful in your research, please consider citing:
@article{tong2025flowcut,
  title={FlowCut: Rethinking Redundancy via Information Flow for Efficient Vision-Language Models},
  author={Tong, Jintao and Jin, Wenwei and Qin, Pengda and Li, Anqi and Zou, Yixiong and Li, Yuhong and Li, Yuhua and Li, Ruixuan},
  journal={arXiv preprint arXiv:2505.19536},
  year={2025}
}