# mlx-community/clip-vit-large-patch14
This model was converted to MLX format from `clip-vit-large-patch14`.
Refer to the original model card for more details on the model.
## Use with mlx-examples
Download the repository 👇
```bash
pip install huggingface_hub hf_transfer
export HF_HUB_ENABLE_HF_TRANSFER=1
huggingface-cli download --local-dir <LOCAL FOLDER PATH> mlx-community/clip-vit-large-patch14
```
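Equivalently, if you prefer staying in Python, `huggingface_hub`'s `snapshot_download` fetches the same files. A minimal sketch; the `mlx_model` folder name is just an example chosen to match the load path used further down:

```python
from huggingface_hub import snapshot_download

# Download the converted weights into a local folder.
# "mlx_model" matches the path passed to clip.load() below.
snapshot_download(
    repo_id="mlx-community/clip-vit-large-patch14",
    local_dir="mlx_model",
)
```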
Install mlx-examples.
git clone [email protected]:ml-explore/mlx-examples.git
cd clip
pip install -r requirements.txt
Run the model. The snippet below assumes the downloaded weights are in a local `mlx_model` folder.
```python
from PIL import Image
import clip

model, tokenizer, img_processor = clip.load("mlx_model")
inputs = {
    "input_ids": tokenizer(["a photo of a cat", "a photo of a dog"]),
    "pixel_values": img_processor(
        [Image.open("assets/cat.jpeg"), Image.open("assets/dog.jpeg")]
    ),
}
output = model(**inputs)

# Get text and image embeddings:
text_embeds = output.text_embeds
image_embeds = output.image_embeds
```
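To match texts against images, a common follow-up is cosine similarity between the two sets of embeddings. A minimal sketch, assuming the returned embeddings follow the usual CLIP convention of being L2-normalized (the explicit normalization below is a no-op in that case, but makes the sketch safe either way):

```python
import mlx.core as mx

# Normalize, then take pairwise dot products: entry (i, j) is the
# cosine similarity between text i and image j.
text_embeds = text_embeds / mx.linalg.norm(text_embeds, axis=-1, keepdims=True)
image_embeds = image_embeds / mx.linalg.norm(image_embeds, axis=-1, keepdims=True)
similarity = text_embeds @ image_embeds.T
print(similarity)  # higher score = closer text-image match
```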