# Food Caption BLIP2
This is a fine-tuned version of the BLIP2 model for food image captioning.
## Model Details
- Base model: BLIP2-OPT-2.7B
- Fine-tuned on food images
- Dataset size: 60 images
- Training epochs: 15
- Hardware used: CPU
- Final loss: 0.0001
- Training date: 2024-03-15
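
The training script itself is not included in this card. As a rough illustration of the configuration listed above, the sketch below fine-tunes the base checkpoint (assumed to be `Salesforce/blip2-opt-2.7b` on the Hub) on a small list of (image path, caption) pairs. The example pair, learning rate, and loop structure are assumptions for illustration, not the author's actual script.

```python
import torch
from transformers import Blip2Processor, Blip2ForConditionalGeneration
from PIL import Image

# Assumed base checkpoint for BLIP2-OPT-2.7B
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")
model.train()

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)  # illustrative LR

# Hypothetical dataset: list of (image_path, caption) pairs
pairs = [("images/fried_rice.jpg", "a plate of fried rice topped with a fried egg")]

for epoch in range(15):  # 15 epochs, per Model Details
    for image_path, caption in pairs:
        image = Image.open(image_path).convert("RGB")
        # Processor builds pixel_values plus tokenized caption
        inputs = processor(images=image, text=caption, return_tensors="pt")
        # Using the caption tokens as labels yields a captioning loss
        outputs = model(**inputs, labels=inputs["input_ids"])
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```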
## Usage
```python
from transformers import Blip2Processor, Blip2ForConditionalGeneration
from PIL import Image

processor = Blip2Processor.from_pretrained("fathindifa/food-caption-blip2")
model = Blip2ForConditionalGeneration.from_pretrained("fathindifa/food-caption-blip2")

# Load and preprocess image
image = Image.open("food_image.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

# Generate caption
outputs = model.generate(**inputs, max_new_tokens=32)
caption = processor.batch_decode(outputs, skip_special_tokens=True)[0]
print(caption)
```
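
Generation on CPU works but is slow for a 2.7B-parameter model. As an optional extra (not part of the original card), the model and inputs can be moved to a GPU when one is available:

```python
import torch

# Fall back to CPU when no CUDA device is present
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

inputs = processor(images=image, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=32)
```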