czczup committed on
Commit 3856ec1 • 1 Parent(s): 05797ce

Update README.md

Files changed (1)
  1. README.md +23 -9
README.md CHANGED
@@ -15,9 +15,9 @@ new_version: OpenGVLab/InternViT-6B-448px-V2_5
 
 # InternViT-6B-448px-V1-2
 
- [\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[🆕 Blog\]](https://internvl.github.io/blog/) [\[📜 InternVL 1.0\]](https://arxiv.org/abs/2312.14238) [\[📜 InternVL 1.5\]](https://arxiv.org/abs/2404.16821) [\[📜 Mini-InternVL\]](https://arxiv.org/abs/2410.16261)
+ [\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[📜 InternVL 1.0\]](https://huggingface.co/papers/2312.14238) [\[📜 InternVL 1.5\]](https://huggingface.co/papers/2404.16821) [\[📜 Mini-InternVL\]](https://arxiv.org/abs/2410.16261) [\[📜 InternVL 2.5\]](https://huggingface.co/papers/2412.05271)
 
- [\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/) [\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#quick-start) [\[📖 中文解读\]](https://zhuanlan.zhihu.com/p/706547971) [\[📖 Documents\]](https://internvl.readthedocs.io/en/latest/)
+ [\[🆕 Blog\]](https://internvl.github.io/blog/) [\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/) [\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#quick-start) [\[📖 Documents\]](https://internvl.readthedocs.io/en/latest/)
 
 <div align="center">
   <img width="500" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/64006c09330a45b03605bba3/zJsd2hqd3EevgXo6fNgC-.png">
@@ -35,7 +35,10 @@ To equip the model with high-resolution processing and OCR capabilities, both th
 To enhance the OCR capability of the model, we have incorporated additional OCR data alongside the general caption datasets. Specifically, we utilized PaddleOCR to perform Chinese OCR on images from Wukong and English OCR on images from LAION-COCO.
 - **Note:** InternViT-6B originally had 48 blocks, and we found that using the output after the fourth-to-last block worked best for MLLM. For ease of use and to save GPU memory, we simply discarded the last 3 blocks. Now, the model has only 45 blocks and the number of parameters has been reduced from 5.9B to 5.5B. Therefore, if you want to build a MLLM based on this model, **please make use of the features from the last layer.**
 
- ## Model Usage (Image Embeddings)
+ ## Quick Start
+
+ > \[!Warning\]
+ > 🚨 Note: In our experience, the InternViT V2.5 series is better suited for building MLLMs than traditional computer vision tasks.
 
 ```python
 import torch
@@ -58,27 +61,38 @@ pixel_values = pixel_values.to(torch.bfloat16).cuda()
 outputs = model(pixel_values)
 ```
 
+ ## License
+
+ This project is released under the MIT License.
+
 ## Citation
 
 If you find this project useful in your research, please consider citing:
 
 ```BibTeX
+ @article{chen2024expanding,
+ title={Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling},
+ author={Chen, Zhe and Wang, Weiyun and Cao, Yue and Liu, Yangzhou and Gao, Zhangwei and Cui, Erfei and Zhu, Jinguo and Ye, Shenglong and Tian, Hao and Liu, Zhaoyang and others},
+ journal={arXiv preprint arXiv:2412.05271},
+ year={2024}
+ }
 @article{gao2024mini,
 title={Mini-internvl: A flexible-transfer pocket multimodal model with 5\% parameters and 90\% performance},
 author={Gao, Zhangwei and Chen, Zhe and Cui, Erfei and Ren, Yiming and Wang, Weiyun and Zhu, Jinguo and Tian, Hao and Ye, Shenglong and He, Junjun and Zhu, Xizhou and others},
 journal={arXiv preprint arXiv:2410.16261},
 year={2024}
 }
- @article{chen2023internvl,
- title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks},
- author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng},
- journal={arXiv preprint arXiv:2312.14238},
- year={2023}
- }
 @article{chen2024far,
 title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites},
 author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
 journal={arXiv preprint arXiv:2404.16821},
 year={2024}
 }
+ @inproceedings{chen2024internvl,
+ title={Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks},
+ author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and others},
+ booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
+ pages={24185--24198},
+ year={2024}
+ }
 ```
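The hunk above shows only the first and last lines of the Quick Start snippet, since this commit leaves the code itself unchanged. For context, a minimal end-to-end sketch of that usage follows; it assumes the standard `transformers` remote-code loading path (`AutoModel` plus `CLIPImageProcessor` with `trust_remote_code=True`), a placeholder image path, and that the forward pass returns the usual `last_hidden_state`/`pooler_output` structure. None of these details are defined by the commit itself.

```python
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor

# Load the 45-block vision encoder in bfloat16; trust_remote_code is needed because
# the architecture is defined by custom modeling code in the repository.
model = AutoModel.from_pretrained(
    'OpenGVLab/InternViT-6B-448px-V1-2',
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True).cuda().eval()

# The repo is assumed to ship a CLIP-style preprocessor config (448x448 inputs);
# './examples/image.jpg' is a placeholder for any local RGB image.
image_processor = CLIPImageProcessor.from_pretrained('OpenGVLab/InternViT-6B-448px-V1-2')
image = Image.open('./examples/image.jpg').convert('RGB')

pixel_values = image_processor(images=image, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()

with torch.no_grad():
    outputs = model(pixel_values)

# Per the note in the diff, an MLLM built on this model should use the features of the
# last (45th) block. Assuming a standard BaseModelOutputWithPooling return value, those
# features are outputs.last_hidden_state; dropping the leading CLS token before projecting
# patch tokens into the LLM is a common connector pattern, not a stated requirement.
last_layer_features = outputs.last_hidden_state   # [batch, 1 + num_patches, hidden_dim]
patch_features = last_layer_features[:, 1:, :]
```

Casting `pixel_values` to bfloat16 simply matches the dtype the weights were loaded in; leaving the input in float32 would typically raise a dtype-mismatch error in the patch-embedding layer.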