
Conditional ViT - B/16 - Text

Introduced in LRVS-Fashion: Extending Visual Search with Referring Instructions, Lepage et al., 2023.

General Information

Model fine-tuned from CLIP ViT-B/16 on LRVS-F at 224x224 resolution. The conditioning text is embedded by a frozen Sentence T5-XL encoder.

Research use only.
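
For intuition, here is a minimal sketch of one way a text condition can be injected into a ViT: the frozen sentence embedding is projected to the token width and appended as an extra token to the patch sequence. This is an illustrative assumption based on the description above, not the repository's actual code; the class name, layer count, and dimensions are placeholders.

import torch
import torch.nn as nn

class ConditionalViTSketch(nn.Module):
    # Illustrative only: condition a ViT by appending a projected text token (hypothetical code).
    def __init__(self, embed_dim=768, text_dim=768, num_layers=2):
        super().__init__()
        # Map the frozen Sentence T5-XL embedding to the ViT token width.
        self.cond_proj = nn.Linear(text_dim, embed_dim)
        # Stand-in for the pretrained ViT-B/16 transformer blocks.
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=12, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, patch_tokens, text_embedding):
        # patch_tokens: (batch, n_patches + 1, embed_dim), CLS token first
        # text_embedding: (batch, text_dim), produced by the frozen text encoder
        cond_token = self.cond_proj(text_embedding).unsqueeze(1)
        tokens = torch.cat([patch_tokens, cond_token], dim=1)  # append the condition as an extra token
        tokens = self.blocks(tokens)
        return tokens[:, 0]  # CLS token serves as the conditioned image embedding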

How to Use

from PIL import Image
import requests
import torch
from transformers import AutoProcessor, AutoModel

# The repository ships custom modeling and processing code, so trust_remote_code is required.
model = AutoModel.from_pretrained("Slep/CondViT-B16-txt", trust_remote_code=True)
processor = AutoProcessor.from_pretrained("Slep/CondViT-B16-txt", trust_remote_code=True)

# Example query: a product image and a referring instruction describing the item of interest.
url = "https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/108856.0.jpg"
img = Image.open(requests.get(url, stream=True).raw)
txt = "a brown bag"

# The processor batches the images and their conditioning texts together.
inputs = processor(images=[img], texts=[txt])
raw_embedding = model(**inputs)
normalized_embedding = torch.nn.functional.normalize(raw_embedding, dim=-1)
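
Once the embeddings are L2-normalized, retrieval reduces to a dot product. The sketch below ranks a gallery of product embeddings against the query; the random gallery_embeddings tensor is only a stand-in for embeddings you would compute with the same model over your own catalogue.

# Illustrative retrieval: rank gallery items by cosine similarity with the query.
# gallery_embeddings is a placeholder; in practice, embed your catalogue images with the model above.
gallery_embeddings = torch.nn.functional.normalize(
    torch.randn(1000, normalized_embedding.shape[-1]), dim=-1
)

# Cosine similarity is a dot product once both sides are normalized.
scores = normalized_embedding @ gallery_embeddings.T  # shape (1, 1000)
top5 = scores.topk(5, dim=-1)
print(top5.indices, top5.values)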
Model weights: Safetensors, 1.33B params, F32 tensors.

Dataset used to train Slep/CondViT-B16-txt: Slep/LAION-RVS-Fashion (https://huggingface.co/datasets/Slep/LAION-RVS-Fashion).
