
Add pipeline tag and project page link

#1 opened by nielsr (HF Staff)
Files changed (1): README.md (+4, -2)
README.md CHANGED

```diff
@@ -1,11 +1,13 @@
 ---
-license: cc-by-4.0
 datasets:
 - UCSC-VLAA/Recap-DataComp-1B
 - mlfoundations/datacomp_1b
 library_name: open_clip
+license: cc-by-4.0
+pipeline_tag: image-text-to-text
 ---
-[[Paper]](https://arxiv.org/abs/2501.09446) [[github]](https://github.com/zw615/Double_Visual_Defense)
+
+[[Paper]](https://arxiv.org/abs/2501.09446) [[github]](https://github.com/zw615/Double_Visual_Defense) [[project page]](https://doublevisualdefense.github.io/)
 
 A DeltaCLIP-H/14-336 Model that is adversarially pre-trained with web-scale image-text data to reach non-robust-VLM helpfulness levels on clean data while being robust on adversarially attacked data.
 
```
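As a sanity check, the YAML front matter this PR produces can be parsed and inspected. The sketch below is illustrative only: it uses a tiny hand-rolled parser for this flat key/list structure rather than a real YAML library, and the `README` string is a trimmed copy of the post-merge file.

```python
# Minimal sketch: parse the model-card front matter as it looks after this PR
# and confirm the new metadata keys are present. The parser handles only the
# flat "key: value" and "- item" lines used here; it is not a YAML parser.
README = """\
---
datasets:
- UCSC-VLAA/Recap-DataComp-1B
- mlfoundations/datacomp_1b
library_name: open_clip
license: cc-by-4.0
pipeline_tag: image-text-to-text
---

[[Paper]](https://arxiv.org/abs/2501.09446)
"""

def parse_front_matter(text):
    # The front matter sits between the first two "---" delimiters.
    _, block, _body = text.split("---", 2)
    meta, key = {}, None
    for line in block.strip().splitlines():
        if line.startswith("- ") and key is not None:
            meta[key].append(line[2:].strip())  # list item under current key
        else:
            key, _, value = line.partition(":")
            key = key.strip()
            meta[key] = value.strip() or []  # empty value starts a list
    return meta

meta = parse_front_matter(README)
print(meta["pipeline_tag"])  # -> image-text-to-text
```

Running this shows both keys added by the PR (`license`, `pipeline_tag`) alongside the unchanged `datasets` and `library_name` entries, mirroring the `+4, -2` diff above.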