Commit af7d10b · verified · Parent: 397fb0f
YipengZhang committed: Update README.md

Files changed (1)
  1. README.md +58 -3
README.md CHANGED
@@ -1,3 +1,58 @@
- ---
- license: mit
- ---
+ ---
+ inference: false
+ pipeline_tag: image-text-to-text
+ datasets:
+ - YipengZhang/LLaVA-UHD-v2-SFT-Data
+ library_name: transformers
+ base_model:
+ - lmsys/vicuna-13b-v1.5
+ - openai/clip-vit-large-patch14-336
+ ---
+
+ <br>
+
+ # LLaVA-UHD v2 Model Card
+
+ ## Model details
+
+ **Model type:**
+ LLaVA-UHD v2 is an advanced MLLM built around a hierarchical window transformer that captures diverse visual granularity
+ by constructing and integrating a high-resolution feature pyramid.
+
+ **Model date:**
+ LLaVA-UHD v2 was trained in November 2024.
+
+ **Base LLM:**
+ lmsys/vicuna-13b-v1.5
+
+ **Paper or resources for more information:**
+ https://arxiv.org/abs/2412.13871, https://github.com/thunlp/LLaVA-UHD
+
+ ## License
+ LLaVA-UHD v2 is licensed under the LLAMA 2 Community License,
+ Copyright (c) Meta Platforms, Inc. All Rights Reserved.
+
+ **Where to send questions or comments about the model:**
+ https://github.com/thunlp/LLaVA-UHD/issues
+
+ ## Intended use
+ **Primary intended uses:**
+ The primary use of LLaVA-UHD v2 is research on large multimodal models and chatbots.
+
+ **Primary intended users:**
+ The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
+
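+ **Usage note:**
+ The checkpoint is flagged `inference: false`, so it is meant to be run with the inference code
+ in the GitHub repository above rather than through a generic `transformers` pipeline.
+ The snippet below is only a minimal loading sketch, assuming the repository follows the
+ upstream LLaVA-style API: the module path `llava.model.builder`, the helper
+ `get_model_name_from_path`, and the repo id `YipengZhang/LLaVA-UHD-v2` are assumptions,
+ not confirmed entry points; check the GitHub README for the exact usage.
+
+ ```python
+ # Hedged sketch (not the official usage): assumes the LLaVA-UHD codebase is
+ # installed and exposes a LLaVA-style model builder, as upstream LLaVA does.
+ from llava.model.builder import load_pretrained_model   # assumed module path
+ from llava.mm_utils import get_model_name_from_path     # assumed helper
+
+ model_path = "YipengZhang/LLaVA-UHD-v2"  # assumed Hugging Face repo id for this checkpoint
+ tokenizer, model, image_processor, context_len = load_pretrained_model(
+     model_path=model_path,
+     model_base=None,
+     model_name=get_model_name_from_path(model_path),
+ )
+ print(type(model), context_len)
+ ```
+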
+ ## Training dataset
+ - VDIM pretraining: MS-COCO-Stuff 2017
+ - Pretraining: LLaVA-Pretrain 558K (image-text pairs filtered from LAION/CC/SBU and captioned by BLIP)
+ - SFT: the 858K mixed dataset at https://huggingface.co/datasets/YipengZhang/LLaVA-UHD-v2-SFT-Data (a download sketch follows below)
+
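+ A minimal sketch for fetching the SFT mixture locally, assuming only that it is hosted as a
+ standard Hugging Face dataset repository (its file layout is not guaranteed here; see the
+ dataset card for details):
+
+ ```python
+ # Hedged sketch: downloads the raw SFT data files with huggingface_hub.
+ from huggingface_hub import snapshot_download
+
+ local_dir = snapshot_download(
+     repo_id="YipengZhang/LLaVA-UHD-v2-SFT-Data",
+     repo_type="dataset",  # dataset repo, not a model repo
+ )
+ print(local_dir)  # local path containing the downloaded files
+ ```
+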
+ ## Citation
+ If you find LLaVA-UHD v2 useful for your research and applications, please cite using this BibTeX:
+ ```bibtex
+ @article{zhang2024llavauhdv2,
+   title={LLaVA-UHD v2: an MLLM Integrating High-Resolution Feature Pyramid via Hierarchical Window Transformer},
+   author={Yipeng Zhang and Yifan Liu and Zonghao Guo and Yidan Zhang and Xuesong Yang and Chi Chen and Jun Song and Bo Zheng and Yuan Yao and Zhiyuan Liu and Tat-Seng Chua and Maosong Sun},
+   journal={arXiv preprint arXiv:2412.13871},
+   year={2024}
+ }
+ ```