
furniture-captioner

Model Description
furniture-captioner is a fine-tuned version of BLIP specialized in generating captions for furniture images.
It was trained on the yemalin/furniture_ds dataset.

Use Cases

  • Generating product descriptions for furniture marketplace listings
  • Improving searchability through auto-generated captions
  • Enhancing accessibility with alternative text

Training
The model was fine-tuned from a pre-trained BLIP checkpoint on a curated furniture dataset in which every image is annotated with a caption describing its design, style, and function. A reproduction sketch follows.
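
For reference, here is a minimal sketch of how such a fine-tune can be reproduced. The base checkpoint name, the dataset column names ("image", "text"), and all hyperparameters are assumptions, not values confirmed by this card; adjust them to match yemalin/furniture_ds.

import torch
from torch.utils.data import DataLoader
from datasets import load_dataset
from transformers import BlipProcessor, BlipForConditionalGeneration

# Assumed base checkpoint; the card only states "a pre-trained BLIP model"
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

dataset = load_dataset("yemalin/furniture_ds", split="train")

def collate(batch):
    # Column names "image" and "text" are assumptions; rename to match the dataset
    enc = processor(images=[ex["image"] for ex in batch],
                    text=[ex["text"] for ex in batch],
                    padding=True, return_tensors="pt")
    labels = enc["input_ids"].clone()
    labels[labels == processor.tokenizer.pad_token_id] = -100  # ignore padding in the loss
    enc["labels"] = labels
    return enc

loader = DataLoader(dataset, batch_size=8, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device).train()
for epoch in range(3):  # epoch count is illustrative
    for batch in loader:
        batch = {k: v.to(device) for k, v in batch.items()}
        loss = model(**batch).loss  # language-modeling loss over caption tokens
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()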

Intended Uses & Limitations

  • ⚡ Works best on images of indoor and outdoor furniture.
  • 🚫 Not optimized for general objects or human activities.

License
Apache 2.0, which permits commercial and non-commercial use with attribution.


Usage Example

from transformers import BlipProcessor, BlipForConditionalGeneration
from PIL import Image
import requests

# Load the processor and the fine-tuned captioning model from the Hub
processor = BlipProcessor.from_pretrained("yemalin/furniture-captioner")
model = BlipForConditionalGeneration.from_pretrained("yemalin/furniture-captioner")

# Fetch an image and convert it to RGB (BLIP expects 3-channel input)
img_url = "https://example.com/your-furniture-image.jpg"
image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")

# Preprocess the image and generate a caption
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs)
caption = processor.decode(out[0], skip_special_tokens=True)
print(caption)
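
By default, generate produces a short greedy caption. Standard generation arguments can be passed for longer or higher-quality output; the values below are illustrative rather than tuned for this model:

out = model.generate(**inputs, max_new_tokens=50, num_beams=4)
caption = processor.decode(out[0], skip_special_tokens=True)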