Update README.md
README.md
CHANGED
@@ -30,6 +30,16 @@ It is _**the largest open-source vision/vision-language foundation model (14B)**
 - Image size: 224 x 224
 - **Pretrain Dataset:** LAION-en, LAION-COCO, COYO, CC12M, CC3M, SBU, Wukong, LAION-multi
 
+## Zero-Shot Performance
+
+See this [document](https://github.com/OpenGVLab/InternVL/tree/main) for more details about the zero-shot evaluation.
+
+[zero-shot performance figure]
+
+
+[zero-shot performance figure]
+
+
 ## Model Usage
 
 ```python
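The `## Model Usage` section that follows this hunk is cut off at the opening Python code fence. For orientation only, below is a minimal sketch of how a CLIP-style checkpoint like this is typically loaded with Hugging Face `transformers`; the Hub ID `OpenGVLab/InternVL-14B-224px`, the dtype, and the preprocessing choices are assumptions inferred from the 224 x 224 image size and the pretraining data listed above, not the README's own snippet.

```python
# Hypothetical sketch only: the checkpoint ID, dtype, and preprocessing below are
# assumptions based on the 224 x 224 input size and CLIP-style pretraining data
# listed in the README; the authoritative code is the README's "Model Usage" snippet.
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer, CLIPImageProcessor

model_id = "OpenGVLab/InternVL-14B-224px"  # assumed Hub ID

# trust_remote_code is needed when the model class ships with the repo rather than transformers.
model = AutoModel.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, trust_remote_code=True
).eval()
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
image_processor = CLIPImageProcessor.from_pretrained(model_id)

# Preprocess one image (resized/cropped to 224 x 224 by the processor) and two candidate captions.
image = Image.open("example.jpg").convert("RGB")
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values.to(torch.bfloat16)
text_inputs = tokenizer(
    ["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt"
)

# The actual image-text matching call (and its argument names) is defined by the
# remote code in the model repository; follow the README's "Model Usage" snippet for it.
```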