Commit 976330d · Parent: 4cdf8d6 · Update README.md

README.md CHANGED:
@@ -5,4 +5,13 @@ This is a Huggingface CLIPModel flavor of the [HPSv2](https://github.com/tgxs002
 I converted the model weights from the openclip format to huggingface CLIPModel.
 The two text and image embeddings were tested to be equal before and after conversion.
 
+You can load the model the same as any huggingface clip model:
+
+```python
+from transformers import CLIPProcessor, CLIPModel
+
+model = CLIPModel.from_pretrained("adams-story/HPSv2-hf")
+processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")  # uses the same exact vanilla CLIP processor
+```
+
 All credit goes to the original authors of HPSv2
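Once loaded, the converted checkpoint behaves like any other `CLIPModel`: the processor turns text and images into tensors, and `logits_per_image` from the forward pass gives the image-text match score (for HPSv2, a higher score means the image is preferred for that prompt). A minimal sketch, not part of the commit itself; the helper name `clip_scores` is illustrative:

```python
import torch
from transformers import CLIPModel, CLIPProcessor

def clip_scores(model: CLIPModel, inputs: dict) -> torch.Tensor:
    """Image-text match scores for already-preprocessed CLIP inputs.

    Returns logits_per_image with shape (num_images, num_texts);
    higher values mean a better image-text match.
    """
    with torch.no_grad():
        out = model(**inputs)
    return out.logits_per_image

# Typical use with the checkpoint from this README (downloads weights):
# model = CLIPModel.from_pretrained("adams-story/HPSv2-hf")
# processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
# inputs = processor(text=["a photo of a cat"], images=pil_image,
#                    return_tensors="pt", padding=True)
# scores = clip_scores(model, inputs)
```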