
SoM-LLaVA Model Card

LLaVA-v1.5 mix-trained with SoM-style data (QA + listing).

The model can understand tag-style visual prompts on an image (e.g., "What is the object tagged with ID 9?") and also shows improved performance on MLLM benchmarks (POPE, MME, SEED, MM-Vet, LLaVA-Wild), even when the input test images contain no tags.

For more information about SoM-LLaVA, check out our GitHub page and paper!

Getting Started

This model is intended to be used with the official LLaVA repo for training and evaluation.

If you would like to load the model in Hugging Face Transformers format, use the converted model weights: [SoM-LLaVA-v1.5-13B-HF]
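A minimal sketch of loading the HF-converted weights with the standard Transformers LLaVA classes. The repo id, image path, and prompt below are illustrative assumptions, not from this card; substitute the actual hub path of the converted checkpoint.

```python
import torch

# Placeholder hub id based on the converted-weights name above; replace
# with the actual repository path.
MODEL_ID = "SoM-LLaVA-v1.5-13B-HF"


def build_prompt(question: str) -> str:
    """Format a question in the LLaVA-v1.5 conversation template."""
    return f"USER: <image>\n{question} ASSISTANT:"


def main():
    # Heavy imports kept inside main so the helper above stays importable.
    from PIL import Image
    from transformers import AutoProcessor, LlavaForConditionalGeneration

    processor = AutoProcessor.from_pretrained(MODEL_ID)
    model = LlavaForConditionalGeneration.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )

    # An image already overlaid with SoM numeric tags (hypothetical path).
    image = Image.open("tagged_image.png")
    prompt = build_prompt("What is the object tagged with ID 9?")
    inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)

    output = model.generate(**inputs, max_new_tokens=64)
    print(processor.decode(output[0], skip_special_tokens=True))


if __name__ == "__main__":
    main()
```

Since the model understands tag-style prompts, referring to objects by their numeric tag id (as in the prompt above) is the intended query pattern.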

Citation

If you find our data or model useful for your research and applications, please cite our paper:

@article{yan2024list,
  title={List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs},
  author={Yan, An and Yang, Zhengyuan and Wu, Junda and Zhu, Wanrong and Yang, Jianwei and Li, Linjie and Lin, Kevin and Wang, Jianfeng and McAuley, Julian and Gao, Jianfeng and others},
  journal={arXiv preprint arXiv:2404.16375},
  year={2024}
}