Kwai-Keye · nielsr (HF Staff) committed
Commit d2351d4 · verified · 1 Parent(s): d52d90e

Enhance dataset card: Add comprehensive metadata and usage examples for KC-MMBench (#2)


- Enhance dataset card: Add comprehensive metadata and usage examples for KC-MMBench (bf7f58c3104955752208a29eca94dfaeaea26df5)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1)
README.md +77 -9
README.md CHANGED
@@ -1,14 +1,32 @@
  ---
- license: cc-by-sa-4.0
  language:
  - zh
  ---
- <font size=3><div align='center' > [[🍎 Home Page](https://kwai-keye.github.io/)] [[📖 Technical Report](https://huggingface.co/papers/2507.01949)] [[📊 Models](https://huggingface.co/Kwai-Keye)] [[🚀 Demo](https://huggingface.co/spaces/Kwai-Keye/Keye-VL-8B-Preview)] </div></font>


- Based on the [Kuaishou](https://www.kuaishou.com/) short video data, we constructed 6 datasets for Vision-Language Models (VLMs) like [**Kwai Keye-VL-8B**](https://huggingface.co/Kwai-Keye/Keye-VL-8B-Preview), **Qwen2.5-VL** and **InternVL** to evaluate performance.

- If you want to use KC-MMbench, please download with: git clone https://huggingface.co/datasets/Kwai-Keye/KC-MMbench
  ## Tasks
  | Task | Description |
  | -------------- | --------------------------------------------------------------------------- |
@@ -19,7 +37,6 @@ If you want to use KC-MMbench, please download with: git clone https://huggingfa
  | High_Like | A binary classification task to determine the rate of likes of a short video. |
  | SPU | The task of determining whether two items are the same product in e-commerce. |

-
  ## Performance
  | Task | Qwen2.5-VL-3B | Qwen2.5-VL-7B | InternVL-3-8B | MiMo-VL-7B | Kwai Keye-VL-8B |
  | -------------- | ------------- | ------------- | ------------- | ------- | ---- |
@@ -30,13 +47,63 @@ If you want to use KC-MMbench, please download with: git clone https://huggingfa
  | High_Like | 48.85 | 47.94 | 47.03 | 51.14 | 55.25 |
  | SPU | 74.09 | 81.34 | 75.64 | 81.86 | 87.05 |

- ## Example of Evaluation

- Here is an example of an evaluation using VLMs on our datasets. The following configuration needs to be added to the config file.
  ```python
- {

- "model":'...'
  "data": {
  "CPV": {
  "class": "KwaiVQADataset",
@@ -64,3 +131,4 @@ Here is an example of an evaluation using VLMs on our datasets. The following co
  }
  }
  }
 
 
  ---
  language:
  - zh
+ - en
+ license: cc-by-sa-4.0
+ task_categories:
+ - video-text-to-text
+ tags:
+ - multimodal
+ - video-understanding
+ - short-video
+ - benchmark
+ - e-commerce
+ - vqa
+ library_name: transformers
  ---

+ <font size=3><div align='center' > [[🍎 Home Page](https://kwai-keye.github.io/)] [[📖 Technical Report](https://huggingface.co/papers/2507.01949)] [[📊 Models](https://huggingface.co/Kwai-Keye)] [[🚀 Demo](https://huggingface.co/spaces/Kwai-Keye/Keye-VL-8B-Preview)] </div></font>
+
+ This repository contains **KC-MMBench**, a benchmark tailored to real-world short-video scenarios, introduced in the paper "[Kwai Keye-VL Technical Report](https://huggingface.co/papers/2507.01949)". Constructed from [Kuaishou](https://www.kuaishou.com/) short-video data, KC-MMBench comprises 6 datasets that evaluate how well Vision-Language Models (VLMs) such as [**Kwai Keye-VL-8B**](https://huggingface.co/Kwai-Keye/Keye-VL-8B-Preview), Qwen2.5-VL, and InternVL understand dynamic, information-dense short-form videos.
+
+ For the associated code, detailed documentation, and evaluation scripts, please refer to the official [Kwai Keye-VL GitHub repository](https://github.com/Kwai-Keye/Kwai-Keye-VL).
+
+ To use KC-MMBench, download it with:
+ ```bash
+ git clone https://huggingface.co/datasets/Kwai-Keye/KC-MMbench
+ ```
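If you prefer a Python-based download, the same files can be fetched with the `huggingface_hub` client. This is a minimal sketch, with `local_dir` as an arbitrary example path:

```python
from huggingface_hub import snapshot_download

# Download the full KC-MMBench repository (equivalent to the git clone above).
# local_dir is an example path, not something the dataset card prescribes.
local_path = snapshot_download(
    repo_id="Kwai-Keye/KC-MMbench",
    repo_type="dataset",
    local_dir="./KC-MMbench",
)
print("Dataset downloaded to:", local_path)
```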

  ## Tasks
  | Task | Description |
  | -------------- | --------------------------------------------------------------------------- |
  | High_Like | A binary classification task to determine the rate of likes of a short video. |
  | SPU | The task of determining whether two items are the same product in e-commerce. |

  ## Performance
  | Task | Qwen2.5-VL-3B | Qwen2.5-VL-7B | InternVL-3-8B | MiMo-VL-7B | Kwai Keye-VL-8B |
  | -------------- | ------------- | ------------- | ------------- | ------- | ---- |
  | High_Like | 48.85 | 47.94 | 47.03 | 51.14 | 55.25 |
  | SPU | 74.09 | 81.34 | 75.64 | 81.86 | 87.05 |

+ ## Usage
+
+ This section is a quick guide to running Keye series models with the `keye-vl-utils` library, which prepares image and video inputs for models like Kwai Keye-VL-8B.
+
+ ### Install `keye-vl-utils`
+
+ First, install the utility library:
+ ```bash
+ pip install keye-vl-utils
+ ```
+
+ ### Keye-VL Inference Example
+
+ Here's an example of performing inference with a Kwai Keye-VL model, demonstrating how to prepare inputs for both image and video scenarios.

  ```python
+ from transformers import AutoModel, AutoProcessor
+ from keye_vl_utils import process_vision_info
+
+ # Load the model on the available device(s); device_map="auto" already
+ # places the weights, so no extra .to("cuda") call is needed
+ model_path = "Kwai-Keye/Keye-VL-8B-Preview"
+ model = AutoModel.from_pretrained(
+     model_path,
+     torch_dtype="auto",
+     device_map="auto",
+     attn_implementation="flash_attention_2",
+     trust_remote_code=True,
+ )
+
+ # Example conversations demonstrating the supported input types (image, video)
+ messages = [
+     # Image inputs: local file, URL, or base64 data
+     [{"role": "user", "content": [{"type": "image", "image": "file:///path/to/your/image.jpg"}, {"type": "text", "text": "Describe this image."}]}],
+     [{"role": "user", "content": [{"type": "image", "image": "http://path/to/your/image.jpg"}, {"type": "text", "text": "Describe this image."}]}],
+     [{"role": "user", "content": [{"type": "image", "image": "data:image;base64,/9j/..."}, {"type": "text", "text": "Describe this image."}]}],
+
+     # Video inputs (most relevant for KC-MMBench): a video file, a list of
+     # pre-extracted frames, or a video file with explicit sampling options
+     [{"role": "user", "content": [{"type": "video", "video": "file:///path/to/video1.mp4"}, {"type": "text", "text": "Describe this video."}]}],
+     [{"role": "user", "content": [{"type": "video", "video": ["file:///path/to/extracted_frame1.jpg", "file:///path/to/extracted_frame2.jpg", "file:///path/to/extracted_frame3.jpg"]}, {"type": "text", "text": "Describe this video."}]}],
+     [{"role": "user", "content": [{"type": "video", "video": "file:///path/to/video1.mp4", "fps": 2.0, "resized_height": 280, "resized_width": 280}, {"type": "text", "text": "Describe this video."}]}],
+ ]
+
+ processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)
+ # Apply the chat template per conversation, then batch everything together
+ texts = [processor.apply_chat_template(m, tokenize=False, add_generation_prompt=True) for m in messages]
+ images, videos, video_kwargs = process_vision_info(messages, return_video_kwargs=True)
+ inputs = processor(text=texts, images=images, videos=videos, padding=True, return_tensors="pt", **video_kwargs).to(model.device)
+ generated_ids = model.generate(**inputs, max_new_tokens=256)
+ # Strip the prompt tokens and decode only the newly generated text
+ trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
+ print(processor.batch_decode(trimmed, skip_special_tokens=True))
+ ```
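To point this pipeline at KC-MMBench itself, a benchmark sample can be wrapped in the same message format. The helper below is a hypothetical sketch: the keys `video_path` and `question` are assumed annotation fields, not a documented schema.

```python
# Hypothetical helper: wrap one KC-MMBench sample in the chat format above.
# The keys "video_path" and "question" are assumed field names, not the
# dataset's documented schema.
def sample_to_messages(sample: dict) -> list:
    return [{
        "role": "user",
        "content": [
            {"type": "video", "video": f"file://{sample['video_path']}", "fps": 2.0},
            {"type": "text", "text": sample["question"]},
        ],
    }]
```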
+
+ ### Evaluation
+
+ For detailed instructions on evaluating models on the KC-MMBench datasets, including setup and the evaluation scripts, see the `evaluation/KC-MMBench/README.md` file in the official [Kwai Keye-VL GitHub repository](https://github.com/Kwai-Keye/Kwai-Keye-VL/tree/main/evaluation/KC-MMBench).
+
+ Below is the example configuration for evaluation using VLMs on our datasets:
+
+ ```python
+ {
+ "model": "...",  # Specify your model
  "data": {
  "CPV": {
  "class": "KwaiVQADataset",
  # ... (remaining configuration omitted) ...
  }
  }
  }
+ ```
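As a quick sanity check before launching a run, a config in this shape can be validated with a few lines of Python. This is a minimal sketch, not part of the official scripts; it assumes the config has been saved as plain JSON (without the inline comments) under the hypothetical name `eval_config.json`:

```python
import json

# Minimal sketch: check that every task entry in the evaluation config names
# a dataset class. "eval_config.json" is a hypothetical example file name.
with open("eval_config.json") as f:
    config = json.load(f)

assert "model" in config, "config must name a model"
for task, spec in config["data"].items():
    assert "class" in spec, f"task {task} is missing a dataset class"
    print(f"{task} -> {spec['class']}")
```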