Tags: Image-Text-to-Text · Transformers · llava_llama · text-generation

Model Card: LLaVA_MORE-llama_3_1-8B-S2-siglip-pretrain

LLaVA-MORE enhances the well-known LLaVA architecture by integrating LLaMA 3.1 as the language model. We are publicly releasing the first- and second-stage checkpoints for the first model with 8B parameters.

This repository contains the stage-one (pretrain) weights of LLaVA-MORE LLaMA 3.1 8B.

For more information, visit our LLaVA-MORE repository.
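The checkpoint can, in principle, be pulled from the Hugging Face Hub by its model id. The sketch below is a minimal, hedged example: the `llava_llama` model type is custom to the LLaVA-MORE codebase, so the `trust_remote_code` flag, the auto classes, and the prompt template used here are assumptions, not something this card confirms; consult the LLaVA-MORE repository for the officially supported loading path.

```python
import os


def build_llava_prompt(question: str) -> str:
    """Illustrative LLaVA-style prompt with an <image> placeholder.

    The exact chat template used by LLaVA-MORE may differ; this is an
    assumption for demonstration only.
    """
    return f"USER: <image>\n{question} ASSISTANT:"


if __name__ == "__main__" and os.environ.get("RUN_LLAVA_DEMO"):
    # Guarded behind an env var so importing this file does not trigger
    # a multi-gigabyte model download.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "aimagelab/LLaVA_MORE-llama_3_1-8B-S2-siglip-pretrain"
    # trust_remote_code is assumed to be required for the custom
    # "llava_llama" model type registered by the LLaVA-MORE codebase.
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, trust_remote_code=True, device_map="auto"
    )
    prompt = build_llava_prompt("What is shown in this image?")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Note that a stage-one (pretrain) checkpoint is an alignment-stage artifact, so generation quality will be well below that of the fully instruction-tuned stage-two model.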

Citation

If you make use of our work, please cite our paper:

@article{cocchi2025llava,
      title={{LLaVA-MORE: A Comparative Study of LLMs and Visual Backbones for Enhanced Visual Instruction Tuning}},
      author={Cocchi, Federico and Moratelli, Nicholas and Caffagni, Davide and Sarto, Sara and Baraldi, Lorenzo and Cornia, Marcella and Cucchiara, Rita},
      journal={arXiv preprint arXiv:2503.15621},
      year={2025}
}
