---
license: mit
---

<p align="center">
📃 <a href="https://arxiv.org/abs/2409.02889" target="_blank">Paper</a> • 🌐 <a href="" target="_blank">Demo</a> • 📃 <a href="https://github.com/FreedomIntelligence/LongLLaVA" target="_blank">LongLLaVA</a>
</p>

## 🌈 Update
* **[2024.09.05]** The LongLLaVA repo is published! 🎉

## Architecture
<details>
<summary>Click to view the architecture image</summary>

</details>

## Results
<details>
<summary>Click to view the Results</summary>

- Main Results

- Diagnostic Results

- Video-NIAH

</details>

## Results reproduction
### Data Download and Construction
<details>
<summary>Dataset Taxonomy</summary>

</details>
<details>
<summary>Dataset Downloading and Construction</summary>

> Coming Soon~
</details>

### Evaluation
> Model checkpoint is Coming Soon~
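
Until the official weights and evaluation scripts are released, the snippet below is only a minimal sketch of how the checkpoint might be loaded with Hugging Face Transformers. The repo id `FreedomIntelligence/LongLLaVA`, the use of `trust_remote_code`, and the text-only prompt are assumptions for illustration, not details confirmed by this card.

```python
# Minimal sketch (assumptions: hypothetical repo id, trust_remote_code loading).
# Image inputs are omitted because the processor shipped with the checkpoint
# is not yet published.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FreedomIntelligence/LongLLaVA"  # hypothetical; checkpoint not yet released

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # custom hybrid architecture requires remote code
    device_map="auto",
)

prompt = "Summarize the key idea of LongLLaVA in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```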
## Citation
```bibtex
@misc{wang2024longllavascalingmultimodalllms,
      title={LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture},
      author={Xidong Wang and Dingjie Song and Shunian Chen and Chen Zhang and Benyou Wang},
      year={2024},
      eprint={2409.02889},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2409.02889},
}
```