Kwai-Keye and nielsr (HF Staff) committed

Commit 749da5d · verified · 1 Parent(s): e0fb953

Improve model card: update pipeline tag, add abstract, and code link (#5)


- Improve model card: update pipeline tag, add abstract, and code link (dbb69f7ba9bee957447f9164d799b9eb5ce4124b)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1)
  1. README.md +16 -11
README.md CHANGED
@@ -1,12 +1,11 @@
-
 ---
-license: apache-2.0
 language:
 - en
-pipeline_tag: image-text-to-text
+library_name: transformers
+license: apache-2.0
+pipeline_tag: video-text-to-text
 tags:
 - multimodal
-library_name: transformers
 ---
 
 # Kwai Keye-VL
@@ -15,7 +14,11 @@ library_name: transformers
 <img src="asset/keye_logo_2.png" width="100%" alt="Kwai Keye-VL Logo">
 </div>
 
-<font size=3><div align='center' > [[🍎 Home Page](https://kwai-keye.github.io/)] [[📖 Technical Report](https://huggingface.co/papers/2507.01949)] [[📊 Models](https://huggingface.co/Kwai-Keye)] [[🚀 Demo](https://huggingface.co/spaces/Kwai-Keye/Keye-VL-8B-Preview)] </div></font>
+<font size=3><div align='center' > [[🍎 Home Page](https://kwai-keye.github.io/)] [[📖 Technical Report](https://huggingface.co/papers/2507.01949)] [[📊 Models](https://huggingface.co/Kwai-Keye)] [[🚀 Demo](https://huggingface.co/spaces/Kwai-Keye/Keye-VL-8B-Preview)] [[💻 Code](https://github.com/Kwai-Keye/Keye)] </div></font>
+
+## Abstract
+
+While Multimodal Large Language Models (MLLMs) demonstrate remarkable capabilities on static images, they often fall short in comprehending dynamic, information-dense short-form videos, a dominant medium in today's digital landscape. To bridge this gap, we introduce **Kwai Keye-VL**, an 8-billion-parameter multimodal foundation model engineered for leading-edge performance in short-video understanding while maintaining robust general-purpose vision-language abilities. The development of Keye-VL rests on two core pillars: a massive, high-quality dataset exceeding 600 billion tokens with a strong emphasis on video, and an innovative training recipe. This recipe features a four-stage pre-training process for solid vision-language alignment, followed by a meticulous two-phase post-training process. The first post-training stage enhances foundational capabilities like instruction following, while the second phase focuses on stimulating advanced reasoning. In this second phase, a key innovation is our five-mode ``cold-start'' data mixture, which includes ``thinking'', ``non-thinking'', ``auto-think'', ``think with image'', and high-quality video data. This mixture teaches the model to decide when and how to reason. Subsequent reinforcement learning (RL) and alignment steps further enhance these reasoning capabilities and correct abnormal model behaviors, such as repetitive outputs. To validate our approach, we conduct extensive evaluations, showing that Keye-VL achieves state-of-the-art results on public video benchmarks and remains highly competitive on general image-based tasks (Figure 1). Furthermore, we develop and release the **KC-MMBench**, a new benchmark tailored for real-world short-video scenarios, where Keye-VL shows a significant advantage.
 
 ## 🔥 News
 * **`2025.06.26`** 🌟 We are very proud to launch **Kwai Keye-VL**, a cutting-edge multimodal large language model meticulously crafted by the **Kwai Keye Team** at [Kuaishou](https://www.kuaishou.com/). As a cornerstone AI product within Kuaishou's advanced technology ecosystem, Keye excels in video understanding, visual perception, and reasoning tasks, setting new benchmarks in performance. Our team is working tirelessly to push the boundaries of what's possible, so stay tuned for more exciting updates!
@@ -476,12 +479,14 @@ The post-training phase of Kwai Keye is meticulously designed into two phases wi
 If you find our work helpful for your research, please consider citing our work.
 
 ```bibtex
-@misc{Keye-VL-8B-Preview,
-      title = {Keye-VL-8B-Preview},
-      url = {https://github.com/Kwai-Keye/Keye},
-      author = {Keye Team},
-      month = {June},
-      year = {2025}
+@misc{kwaikeyeteam2025kwaikeyevltechnicalreport,
+      title={Kwai Keye-VL Technical Report},
+      author={Kwai Keye Team},
+      year={2025},
+      eprint={2507.01949},
+      archivePrefix={arXiv},
+      primaryClass={cs.CV},
+      url={https://arxiv.org/abs/2507.01949},
 }
 ```
 
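The front-matter change above (`library_name: transformers`, `pipeline_tag: video-text-to-text`) mainly tells the Hub how to index the model and which widget to attach, but it also signals that the card expects the model to load through the `transformers` auto classes. The snippet below is a minimal sketch under that assumption only: the repo id `Kwai-Keye/Keye-VL-8B-Preview` is taken from the demo link in the card, and `trust_remote_code=True` plus the device mapping are assumptions, not anything this commit states; the README's own usage section remains the authoritative reference.

```python
# Minimal sketch: load Keye-VL through the transformers auto classes, as the
# updated `library_name: transformers` metadata suggests.
# Assumptions (not confirmed by this commit): the repo id below (taken from the
# demo/space link) and trust_remote_code=True for the model's custom code.
from transformers import AutoModel, AutoProcessor

repo_id = "Kwai-Keye/Keye-VL-8B-Preview"  # assumed repo id

processor = AutoProcessor.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    repo_id,
    trust_remote_code=True,
    device_map="auto",  # requires `accelerate`; omit to load on a single device
)

print(type(model).__name__)  # the custom model class shipped with the repo
```

The actual video-text-to-text prompt format (how frames or video paths are passed to the processor) is model-specific and not covered by this metadata change; follow the usage examples in the README itself for that part.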