# Fine-tuned LLaVA Model (llava-7b-ft-unbound)

This model is a fine-tuned version of [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf).
## Model Description

- **Base model:** `llava-hf/llava-1.5-7b-hf`
- **Model type:** Vision-language model
- **Architecture:** `LlavaForConditionalGeneration`
- **Processor:** use the original processor from `llava-hf/llava-1.5-7b-hf`
## Usage

```python
import torch
from transformers import LlavaForConditionalGeneration, LlavaProcessor

# Load the fine-tuned model weights from this repository.
# (trust_remote_code is not needed: LlavaForConditionalGeneration is a
# native transformers architecture.)
model = LlavaForConditionalGeneration.from_pretrained("aparaselli/llava-7b-ft-unbound")

# Load the original processor (recommended) -- this repo ships weights only.
processor = LlavaProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")

# Your inference code here...
```
## Important Notes

- This repository contains only the fine-tuned model weights.
- Always load the processor from `llava-hf/llava-1.5-7b-hf` for best compatibility.
- The model was fine-tuned on top of the base model and uses the original tokenization and image processing unchanged.
## Training Details

Add details about your training process, dataset, and hyperparameters here.
## Evaluation

Add evaluation results here.
## Citation

If you use this model, please cite the original LLaVA paper and mention your fine-tuning work.