πŸ”₯ MicroLLaVA-siglip-so400m

A compact vision-language model that you can pretrain and finetune on a single consumer GPU such as an NVIDIA RTX 4090 with 24 GB of VRAM.


⚑ TLDR

πŸ“‹ Item | πŸ”§ Detail
Framework | Transformers + PyTorch
Checkpoint type | safetensors
LLM | keeeeenw/MicroLlama (about 300M parameters)
Vision tower | siglip-so400m-patch14-384
Hardware used | Single NVIDIA RTX 4090
Training stack | No DeepSpeed required
Intended tasks | Visual Question Answering, caption-style prompts

πŸš€ Introduction

MicroLLaVA is a TinyLLaVA Factory-based model that pairs the very small language model keeeeenw/MicroLlama with an efficient SigLIP vision encoder.

🎯 The goal: Create a vision language model that almost anyone can train and iterate on with one consumer GPU.

🧠 Model Components

  • Language model: keeeeenw/MicroLlama (about 300M parameters)
  • Vision encoder: siglip-so400m-patch14-384

⏱️ Training Times

Because of its compact size, this model can be trained entirely on a single NVIDIA RTX 4090 without DeepSpeed.

  • Pretraining on LAION-CC-SBU-558K: ~5 hours ⚑
  • Supervised finetuning on all TinyLLaVA Factory datasets (except ocr_vqa): ~12 hours πŸ”₯

🌟 Star this model if you find it useful! 🌟

If you like this model, please support my work by liking it here and starring my GitHub page: https://github.com/keeeeenw/MicroLlava


πŸ’» Quick Start

from transformers import AutoTokenizer, AutoModelForCausalLM

hf_path = 'keeeeenw/MicroLlava-siglip-so400m-patch14-384-base-finetune'
model = AutoModelForCausalLM.from_pretrained(hf_path, trust_remote_code=True)
# model.cuda()  # move the model to GPU if needed; it runs fairly quickly on CPU
config = model.config
tokenizer = AutoTokenizer.from_pretrained(
    hf_path,
    use_fast=False,
    model_max_length=config.tokenizer_model_max_length,
    padding_side=config.tokenizer_padding_side
)

prompt = "What are the things I should be cautious about when I visit here?"
image_url = "https://llava-vl.github.io/static/images/view.jpg"
output_text, generation_time = model.chat(
    prompt=prompt,
    image=image_url,
    tokenizer=tokenizer
)

print('model output:', output_text)
print('running time:', generation_time)
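If a GPU is available, you can move the model there before calling chat, as suggested by the commented-out model.cuda() line above. The short sketch below assumes the chat helper behaves the same on GPU as on CPU; the prompt is just an example, not from the original card.

import torch

# Optional: use the GPU when one is available; CPU also works, just slower.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

output_text, generation_time = model.chat(
    prompt="Describe this image in one sentence.",
    image=image_url,
    tokenizer=tokenizer
)
print(output_text, generation_time)
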

πŸ–ΌοΈ Example Usage

πŸ“Έ Input Image:

Llava Input Image Example

πŸ’¬ Prompt:

"What are the things I should be cautious about when I visit here?"

πŸ€– Model Output:

When I visit the beach at the waterfront, I should be cautious about several things. First, I should be cautious about the water, as it is a popular spot for boating and fishing. The water is shallow and shallow, making it difficult for boats to navigate and navigate. Additionally, the water is not a suitable surface for boating, as it is too shallow for boating. Additionally, the water is not suitable for swimming or fishing, as it is too cold and wet. Lastly, I should be cautious about the presence of other boats, such as boats that are parked on the beach, or boats that are not visible from the water. These factors can lead to potential accidents or accidents, as they can cause damage to the boat and the other boats in the water.

πŸ”§ Implementation Notes

For inference, I created a custom module, modeling_tinyllava_llama.py, which:

  • Loads the same chat template used by TinyLLaVA Factory's TinyLlama-based model
  • Connects the LLM to the vision tower
  • May require additional dependencies such as PyTorch and the Transformers library (see the inspection sketch below)
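
Because this modeling code ships with the checkpoint, trust_remote_code=True is what pulls in the custom class. The snippet below is an illustrative way to confirm which classes were resolved; the exact printed names depend on the code in the repository.

from transformers import AutoConfig, AutoModelForCausalLM

hf_path = 'keeeeenw/MicroLlava-siglip-so400m-patch14-384-base-finetune'

# trust_remote_code=True lets Transformers import the custom configuration and
# modeling files (e.g. modeling_tinyllava_llama.py) stored alongside the weights.
config = AutoConfig.from_pretrained(hf_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(hf_path, trust_remote_code=True)

print(type(config).__name__)  # custom config class provided by the repository
print(type(model).__name__)   # custom TinyLLaVA-style model class provided by the repository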

πŸ“Š Evaluation

πŸ† VQAv2 Results

Split | Yes/No | Number | Other | Overall
test-dev | 65.08 | 28.97 | 29.32 | 🎯 44.01

πŸ“ˆ Evaluation Details

  • Dataset: VQAv2 (Visual Question Answering v2.0)
  • Challenge: VQA Challenge 2017
  • Split: test-dev
  • Overall Accuracy: 44.01%

🎯 Performance Breakdown

  • Yes/No Questions: 65.08% - Performance on binary questions
  • Number Questions: 28.97% - Performance on counting/numerical questions
  • Other Questions: 29.32% - Performance on open-ended questions
  • Overall: 44.01% - Weighted average across all question types (illustrated below)
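
For intuition, the overall score is each per-type accuracy weighted by how many questions of that type appear in the split. The counts below are hypothetical placeholders, not the real VQAv2 test-dev distribution; they only demonstrate the arithmetic.

# Hypothetical question counts per type (NOT the actual VQAv2 test-dev distribution).
counts = {"yes/no": 4000, "number": 1300, "other": 4700}
accuracy = {"yes/no": 65.08, "number": 28.97, "other": 29.32}

total = sum(counts.values())
overall = sum(accuracy[t] * counts[t] / total for t in counts)
print(f"overall accuracy: {overall:.2f}%")  # value depends entirely on the assumed counts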

πŸ”œ Planned Evaluations

Additional benchmarks are planned. Community contributions with benchmark results are welcome and encouraged! 🀝


🎯 Intended Uses and Limitations

βœ… Intended Uses

  • πŸ”¬ Rapid experimentation for vision-language research on limited hardware
  • πŸŽ“ Educational demonstrations for students and hobbyists
  • πŸš€ Starting point for domain-specific finetuning

⚠️ Limitations

  • The small LLM size and compact vision encoder may limit reasoning depth and OCR performance
  • Performance can vary significantly depending on the image domain and quality
  • The model includes minimal safety filtering and refusal behavior β€” downstream applications should implement their own safeguards

⚠️ Important: This model should not be used for applications that may cause harm or have significant safety, financial, legal, or medical implications without thorough human review.


πŸ”¬ Reproducibility

For reproducibility, please visit my fork of TinyLLaVA_Factory, which follows the exact same pre-training and fine-tuning steps as the original implementation.

πŸ”§ Key Differences

🎯 Pre-training Modifications:

To support training on a single GPU, I modified several hyperparameters:

  • gradient_accumulation_steps: 2 β†’ 8
  • learning_rate: 1e-3 β†’ 2.5e-4
  • warmup_ratio: 0.03 β†’ 0.06

The original hyperparameters were too aggressive for pre-training, causing training loss to increase after some time. With the updated hyperparameters, pre-training loss remained stable, which is expected for LLaVA's first stage where we align the LLM output with ViT features.
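
As a rough illustration, the modified pre-training hyperparameters map onto standard Hugging Face TrainingArguments fields as shown below. This is a minimal sketch, not the actual TinyLLaVA Factory launch script; the batch size and output directory are placeholders.

from transformers import TrainingArguments

# Sketch of the single-GPU pre-training configuration (stage 1).
# Placeholder values are marked; see the TinyLLaVA_Factory fork for the real script.
pretrain_args = TrainingArguments(
    output_dir="./checkpoints/pretrain",   # placeholder path
    per_device_train_batch_size=32,        # placeholder; effective batch scales with accumulation
    gradient_accumulation_steps=8,         # changed from 2
    learning_rate=2.5e-4,                  # changed from 1e-3
    warmup_ratio=0.06,                     # changed from 0.03
    fp16=True,                             # pre-training kept the original float16 setting
)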

🎨 Fine-tuning Changes:

  • All major hyperparameters remain the same as the original
  • Used bfloat16 precision instead of float16 for improved numerical stability
  • The current model version does not use ocr_vqa due to difficulties downloading all required images for fine-tuning

πŸ› οΈ Training Setup

  • Hardware: Single GPU configuration
  • Precision: bfloat16 for fine-tuning (changed from the original float16); pre-training kept float16, the same configuration as the original TinyLLaVA model (see the sketch after this list)
  • Stages: Two-stage training following LLaVA methodology
    1. Pre-training: Vision-language alignment with stable loss
    2. Fine-tuning: Task-specific adaptation
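
A minimal sketch of how the per-stage precision choice looks with TrainingArguments; the bf16/fp16 flags are standard Hugging Face Trainer options, while everything else here is a placeholder.

from transformers import TrainingArguments

# Stage 1 (pre-training): float16, as in the original TinyLLaVA setup.
# Stage 2 (fine-tuning): bfloat16 for improved numerical stability.
finetune_args = TrainingArguments(
    output_dir="./checkpoints/finetune",  # placeholder path
    bf16=True,   # bfloat16 for fine-tuning
    fp16=False,  # fp16=True was used only in the pre-training stage
)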

πŸ“ Citation

@misc{wang2025microllava,
  title        = {MicroLLaVA: a TinyLLaVA-based VLM with MicroLlama 300M for single-GPU training},
  author       = {Zixiao Ken Wang},
  year         = {2025},
  url          = {https://huggingface.co/keeeeenw/MicroLlava-siglip-so400m-patch14-384-base-finetune}
}

πŸ“„ License

This model is released under the Apache License 2.0.

You are free to use, modify, and distribute this model and its derivatives, provided that you comply with the terms of the license.

If you use this model in your research or applications, please credit the original authors and clearly indicate any modifications you have made.

πŸ“Œ Note: Ensure that the datasets used for pretraining or finetuning also allow redistribution of derived model weights.


πŸ™ Acknowledgements

This work builds upon the efforts of many in the open-source AI community:

  • TinyLLaVA Factory maintainers and contributors for creating the training framework
  • keeeeenw/MicroLlama, which I also created. Please help support my work! ⭐
  • SigLIP authors for the efficient vision encoder architecture
  • Contributors to LAION-CC-SBU-558K and other datasets used in pretraining and finetuning
  • The Hugging Face ecosystem for hosting, tools, and community support πŸ€—

🌟 Star this model if you find it useful! 🌟
