CLIP ViT Base Patch32 Fine-tuned on Oxford Pets

This model, DGurgurov/clip-vit-base-patch32-oxford-pets, is a fine-tuned version of OpenAI's CLIP (clip-vit-base-patch32) on the Oxford-IIIT Pets dataset, intended for classifying pet images into the dataset's 37 cat and dog breeds.
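
The card does not include usage code, so the following is a minimal sketch of zero-shot-style breed classification with this checkpoint via Hugging Face transformers. The repository ID is taken from this page; the input file cat.jpg and the shortened label list are illustrative placeholders.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_id = "DGurgurov/clip-vit-base-patch32-oxford-pets"
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

# A few example labels; replace with the full 37-class Oxford-IIIT Pets label set.
labels = ["Abyssinian", "Bengal", "beagle", "pug"]
texts = [f"a photo of a {label}" for label in labels]

image = Image.open("cat.jpg")  # hypothetical input image
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# Image-text similarity scores, softmaxed over the candidate labels.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```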

Training Information

  • Base Model: openai/clip-vit-base-patch32
  • Dataset: oxford-pets
  • Training Epochs: 4
  • Batch Size: 256
  • Learning Rate: 3e-6
  • Test Accuracy: 93.74%
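
The training script itself is not part of the card; the sketch below shows how a CLIP fine-tune with the hyperparameters listed above might look, assuming standard contrastive training on (image, caption) pairs built from the breed labels. The function and variable names here are illustrative, not the author's actual code.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Learning rate from the list above; batches of 256 for 4 epochs.
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-6)

def training_step(images, breed_names):
    """One optimizer step on a batch of PIL images and their breed labels."""
    captions = [f"a photo of a {name}" for name in breed_names]
    inputs = processor(text=captions, images=images, return_tensors="pt", padding=True)
    # return_loss=True makes CLIPModel compute the symmetric image-text contrastive loss.
    loss = model(**inputs, return_loss=True).loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```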

Parameter Information

Trainable params: 151.2773M || All params: 151.2773M || Trainable%: 100.00%
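
Since the trainable and total counts match, this was a full fine-tune with no frozen layers. A count in this format can be reproduced with a short sketch like the following:

```python
from transformers import CLIPModel

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Trainable params: {trainable / 1e6:.4f}M || "
      f"All params: {total / 1e6:.4f}M || "
      f"Trainable%: {100 * trainable / total:.2f}%")
```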

Bias, Risks, and Limitations

For a discussion of biases, risks, and limitations, refer to the original CLIP repository and model card: https://github.com/openai/CLIP.

License

MIT
