Model description

xGen-MM-Vid (BLIP-3-Video) is an efficient, compact vision-language model (VLM) with an explicit temporal encoder, designed specifically to understand videos. It was developed by Salesforce AI Research. Its key aspect is the incorporation of a learnable temporal encoder module into the original (image-based) BLIP-3 architecture.

In this initial release (12/2024), we are sharing the 128-token version, trained to take 16-frame video inputs.
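
As a rough illustration of what such a temporal encoder does, the PyTorch sketch below compresses per-frame visual tokens from 16 frames into a fixed budget of 128 video tokens, using a set of learnable queries that cross-attend over all frame tokens. This is only one possible attention-pooling variant under assumed dimensions, not the exact module from the paper; the class name `TemporalTokenPooler` is hypothetical.

```python
# Illustrative sketch only -- NOT the exact BLIP-3-Video module.
# Compresses per-frame visual tokens (16 frames x N tokens each)
# into a fixed budget of 128 video tokens via learnable queries.
import torch
import torch.nn as nn

class TemporalTokenPooler(nn.Module):  # hypothetical name
    def __init__(self, dim: int = 768, num_video_tokens: int = 128, num_heads: int = 8):
        super().__init__()
        # Learnable queries define the fixed output token budget.
        self.queries = nn.Parameter(torch.randn(num_video_tokens, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, frame_tokens: torch.Tensor) -> torch.Tensor:
        # frame_tokens: (batch, frames, tokens_per_frame, dim)
        b, t, n, d = frame_tokens.shape
        kv = frame_tokens.reshape(b, t * n, d)           # flatten time into one sequence
        q = self.queries.unsqueeze(0).expand(b, -1, -1)  # (batch, 128, dim)
        pooled, _ = self.attn(q, kv, kv)                 # cross-attend over all frames
        return self.norm(pooled)                         # (batch, 128, dim)

# 16 frames with 128 tokens each in, 128 video tokens out:
x = torch.randn(2, 16, 128, 768)
print(TemporalTokenPooler()(x).shape)  # torch.Size([2, 128, 768])
```

Because the query count is fixed, the token budget stays constant regardless of how many frame tokens go in.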

For more details, check out our tech report. A more detailed explanation can also be found in the blog article.

Results

Tokens vs. accuracy

The figure above shows the trade-off between the number of visual tokens and accuracy for various video models, including xGen-MM-Vid (BLIP-3-Video), on the MSVD-QA dataset.

How to use

Please check out our inference script for example code showing how to use our model. It is based on xGen-MM.
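
For orientation, here is a minimal sketch of the typical flow, assuming the model follows the xGen-MM remote-code loading convention on the Hugging Face Hub and expects 16 uniformly sampled frames. The repository id, prompt handling, and generation call are assumptions; defer to the official inference script for the authoritative API.

```python
# Minimal usage sketch -- assumes an xGen-MM-style remote-code interface;
# see the official inference script for the authoritative API.
import cv2
import numpy as np
from transformers import AutoModel, AutoTokenizer, AutoImageProcessor

MODEL_ID = "Salesforce/xgen-mm-vid-phi3-mini-r-v1.5-128tokens-16frames"  # assumed repo id

def sample_frames(video_path: str, num_frames: int = 16) -> list[np.ndarray]:
    """Uniformly sample `num_frames` RGB frames from a video with OpenCV."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = np.linspace(0, total - 1, num_frames).astype(int)
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if ok:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    cap.release()
    return frames

model = AutoModel.from_pretrained(MODEL_ID, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
image_processor = AutoImageProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)

frames = sample_frames("example.mp4")  # 16 frames -> 128 video tokens inside the model
# Preprocessing, prompt formatting, and model.generate(...) follow the
# xGen-MM conventions; consult the inference script for the exact arguments.
```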

Bias, Risks, Limitations, and Ethical Considerations

The main data sources are the internet, including webpages, video stock sites, and curated datasets released by the research community. The model may be subject to bias from the original data sources, as well as bias from LLMs and commercial APIs. We strongly recommend that users assess safety and fairness before deploying the model in downstream applications.

License

Our code and weights are released under the CC BY-NC 4.0 license.

Code acknowledgment

Our code/model is built on top of xGen-MM.

Citation

@misc{blip3video-xgenmmvid,
  author          = {Michael S. Ryoo and Honglu Zhou and Shrikant Kendre and Can Qin and Le Xue and Manli Shu and Silvio Savarese and Ran Xu and Caiming Xiong and Juan Carlos Niebles},
  title           = {xGen-MM-Vid (BLIP-3-Video): You Only Need 32 Tokens to Represent a Video Even in VLMs}, 
  year            = {2024},
  eprint          = {2410.16267},
  archivePrefix   = {arXiv},
  primaryClass    = {cs.CV},
  url             = {https://arxiv.org/abs/2410.16267}, 
}

Troubleshooting

  1. If any packages are missing, install them with the following:
pip install torch==2.2.1 torchvision==0.17.1 torchaudio==2.2.1 --index-url https://download.pytorch.org/whl/cu121
pip install open_clip_torch==2.24.0
pip install einops
pip install einops-exts
pip install transformers==4.41.1
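
As a convenience (not part of the original instructions), the snippet below checks whether the installed versions match the pins above:

```python
# Sanity check that the pinned versions above are actually installed.
from importlib.metadata import version, PackageNotFoundError

pins = {
    "torch": "2.2.1",
    "torchvision": "0.17.1",
    "torchaudio": "2.2.1",
    "open_clip_torch": "2.24.0",
    "transformers": "4.41.1",
}
for pkg, expected in pins.items():
    try:
        installed = version(pkg)
        status = "OK" if installed.startswith(expected) else f"expected {expected}"
        print(f"{pkg}: {installed} ({status})")
    except PackageNotFoundError:
        print(f"{pkg}: NOT INSTALLED")
```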