Please refer to [this repository](https://github.com/MCG-NJU/p-MoD) for our code.

This model is pretrained on [LCS-558K](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) image caption data, and instruction-tuned on [llava-v1_5-mix-665k](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/llava_v1_5_mix665k.json).
## Citation
If you find our work helpful for your research and applications, please cite our paper:

```bibtex
@article{zhang2024pmod,
  title={p-MoD: Building Mixture-of-Depths MLLMs via Progressive Ratio Decay},
  author={Zhang, Jun and Meng, Desen and Qi, Ji and Huang, Zhenpeng and Wu, Tao and Wang, Limin},
  journal={arXiv preprint arXiv:2412.04449},
  year={2024}
}
```
## License
Llama 2 is licensed under the LLAMA 2 Community License,