Ran0618 committed
Commit c276f3d · verified · 1 parent: 17a07a2

update README.md

Files changed (5)
  1. .gitattributes +3 -0
  2. README.md +157 -3
  3. asset/eval_result.png +3 -0
  4. asset/logo.png +3 -0
  5. asset/overview.png +3 -0
.gitattributes CHANGED
@@ -33,3 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+asset/eval_result.png filter=lfs diff=lfs merge=lfs -text
+asset/logo.png filter=lfs diff=lfs merge=lfs -text
+asset/overview.png filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,157 @@
- ---
- license: apache-2.0
- ---
<p align="center">
  <img src="./asset/logo.png" width="80%"/>
</p>

# 🔥 Updates

* \[3/2024\] **VMBench** evaluation code & prompt set released!

# 📣 Overview

<p align="center">
  <img src="./asset/overview.png" width="100%"/>
</p>

Video generation has advanced rapidly, and evaluation methods have improved alongside it, yet assessing the motion in generated videos remains a major challenge. Specifically, there are two key issues: 1) current motion metrics do not fully align with human perception; 2) existing motion prompts cover a limited range of motions. Based on these findings, we introduce **VMBench**, a comprehensive **V**ideo **M**otion **Bench**mark with perception-aligned motion metrics and the most diverse coverage of motion types to date. VMBench has several appealing properties: (1) **Perception-Driven Motion Evaluation Metrics**: we identify five dimensions of human perception in motion video assessment and develop fine-grained evaluation metrics for each, providing deeper insight into models' strengths and weaknesses in motion quality. (2) **Meta-Guided Motion Prompt Generation**: a structured method that extracts meta-information, generates diverse motion prompts with LLMs, and refines them through human-AI validation, resulting in a multi-level prompt library covering six key dynamic-scene dimensions. (3) **Human-Aligned Validation Mechanism**: we provide human preference annotations to validate our benchmark, and our metrics achieve an average 35.3% improvement in Spearman’s correlation over baseline methods. To our knowledge, this is the first time the quality of motion in videos has been evaluated from the perspective of human perception alignment.

# 📊 Evaluation Results

## Quantitative Results

<p align="center">
  <img src="./asset/eval_result.png" width="80%"/>
</p>

### VMBench Leaderboard

<div align="center">

| Models | Avg | CAS | MSS | OIS | PAS | TCS |
| -------------------- | -------- | -------- | -------- | -------- | -------- | -------- |
| OpenSora-v1.2 | 51.6 | 31.2 | 61.9 | 73.0 | 3.4 | 88.5 |
| Mochi 1 | 53.2 | 37.7 | 62.0 | 68.6 | 14.4 | 83.6 |
| OpenSora-Plan-v1.3.0 | 58.9 | 39.3 | 76.0 | **78.6** | 6.0 | 94.7 |
| CogVideoX-5B | 60.6 | 50.6 | 61.6 | 75.4 | 24.6 | 91.0 |
| HunyuanVideo | 63.4 | 51.9 | 81.6 | 65.8 | **26.1** | 96.3 |
| Wan2.1 | **78.4** | **62.8** | **84.2** | 66.0 | 17.9 | **97.8** |

</div>

CAS = Commonsense Adherence Score, MSS = Motion Smoothness Score, OIS = Object Integrity Score, PAS = Perceptible Amplitude Score, TCS = Temporal Coherence Score. Avg is the mean of the five scores, and the best result in each column is in bold.

# 🔨 Installation

## Create Environment

```shell
git clone https://github.com/Ran0618/VMBench.git
cd VMBench

# Create and activate the conda environment
conda create -n VMBench python=3.10
conda activate VMBench
pip install torch torchvision

# Install the Grounded-Segment-Anything module
cd Grounded-Segment-Anything
python -m pip install -e segment_anything
pip install --no-build-isolation -e GroundingDINO
pip install -r requirements.txt
cd ..

# Install the Grounded-SAM-2 module
cd Grounded-SAM-2
pip install -e .
cd ..

# Install the MMPose toolkit
pip install -U openmim
mim install mmengine
mim install "mmcv==2.1.0"

# Install the Q-Align module
cd Q-Align
pip install -e .
cd ..

# Install the VideoMAEv2 module
cd VideoMAEv2
pip install -r requirements.txt
cd ..
```
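
Once the modules are installed, a quick import check can confirm the environment is wired up before you download checkpoints. This is a minimal sketch, not part of the repository; the module names are those exposed by the packages installed above:

```python
# sanity_check.py -- minimal environment check (a sketch, not part of VMBench).
# Module names correspond to the packages installed above.
import importlib

modules = [
    "torch", "torchvision",  # pip install torch torchvision
    "segment_anything",      # Grounded-Segment-Anything editable install
    "groundingdino",         # GroundingDINO editable install
    "mmengine", "mmcv",      # installed via mim
]
for name in modules:
    try:
        importlib.import_module(name)
        print(f"[ok]   {name}")
    except ImportError as err:
        print(f"[FAIL] {name}: {err}")

import torch
print("CUDA available:", torch.cuda.is_available())
```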

## Download Checkpoints

Place the pre-trained checkpoint files in the `.cache` directory.
You can download the checkpoints from our [HuggingFace repository 🤗](https://huggingface.co/GD-ML/VMBench).

```shell
mkdir .cache

# Run from the repository root; this downloads every checkpoint into .cache/
huggingface-cli download GD-ML/VMBench --local-dir .cache/
```

Please organize the pretrained models in this structure:

```shell
VMBench/.cache
├── groundingdino_swinb_cogcoor.pth
├── sam2.1_hiera_large.pt
├── sam_vit_h_4b8939.pth
├── scaled_offline.pth
└── vit_g_vmbench.pt
```
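
Before running the pipeline, you can check that every file in the tree above actually arrived, for example with a small script like this (a sketch; the file list is copied from the structure shown):

```python
# check_checkpoints.py -- verify the expected files exist under .cache/
# (a sketch; the file list is copied from the tree above).
from pathlib import Path

cache = Path(".cache")
expected = [
    "groundingdino_swinb_cogcoor.pth",
    "sam2.1_hiera_large.pt",
    "sam_vit_h_4b8939.pth",
    "scaled_offline.pth",
    "vit_g_vmbench.pt",
]

missing = [name for name in expected if not (cache / name).exists()]
if missing:
    raise SystemExit(f"Missing checkpoints in {cache}/: {missing}")
print("All 5 checkpoints present.")
```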

# 🔧 Usage

## Videos Preparation

Generate videos with your model using the 1,050 prompts provided in `prompts/prompts.txt` or `prompts/prompts.json`, and organize them in the following structure:

```shell
VMBench/eval_results/videos
├── 0001.mp4
├── 0002.mp4
...
└── 1050.mp4
```

**Note:** Ensure that you maintain the correspondence between prompts and video sequence numbers. The index for each prompt can be found in `prompts/prompts.json`.

You can follow our `sample_video_demo.py` to generate videos, or place your own generated videos, named by prompt index, into a folder of your choice; a sketch of the naming loop is shown below.
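
The sketch below is illustrative only: `generate_video` is a hypothetical stand-in for your model's inference call, and the `index`/`prompt` keys are assumed field names in `prompts.json`:

```python
# Sketch of the prompt-to-filename convention (not part of VMBench).
import json
from pathlib import Path


def generate_video(prompt: str):
    """Hypothetical stand-in: replace with your model's inference call."""
    raise NotImplementedError


out_dir = Path("eval_results/videos")
out_dir.mkdir(parents=True, exist_ok=True)

with open("prompts/prompts.json") as f:
    prompts = json.load(f)

for item in prompts:
    prompt = item["prompt"]                   # assumed schema
    index = int(item["index"])                # assumed schema
    video = generate_video(prompt)
    video.save(out_dir / f"{index:04d}.mp4")  # zero-padded, e.g. 0001.mp4
```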

## Evaluation on VMBench

### Running the Evaluation Pipeline

To evaluate generated videos with VMBench, run the following command:

```shell
bash evaluate.sh your_videos_folder
```

The evaluation results for each video will be saved in `./eval_results/${current_time}/results.json`, and scores for each dimension will be saved as `./eval_results/${current_time}/scores.csv`.
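
To inspect a finished run programmatically, something like the following works. It is a sketch: the run directory name and the exact JSON/CSV layout are assumptions about the output format:

```python
# Sketch: load a run's outputs (directory name and schema are assumptions).
import csv
import json

run_dir = "eval_results/2025-03-01_12-00-00"  # hypothetical ${current_time}

with open(f"{run_dir}/results.json") as f:
    results = json.load(f)
print(f"Per-video results loaded: {len(results)} entries")

with open(f"{run_dir}/scores.csv", newline="") as f:
    for row in csv.DictReader(f):
        print(row)  # one row per evaluated dimension (assumed layout)
```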

### Evaluation Efficiency

We conducted a test using the following configuration:

- **Model**: CogVideoX-5B
- **Number of Videos**: 1,050
- **Frames per Video**: 49
- **Frame Rate**: 8 FPS

Here are the time measurements for each evaluation metric:

| Metric | Time Taken |
|--------|------------|
| PAS (Perceptible Amplitude Score) | 45 minutes |
| OIS (Object Integrity Score) | 30 minutes |
| TCS (Temporal Coherence Score) | 2 hours |
| MSS (Motion Smoothness Score) | 2.5 hours |
| CAS (Commonsense Adherence Score) | 1 hour |

**Total Evaluation Time**: 6 hours and 45 minutes
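
The total is simply the sum of the five rows, as a quick check confirms:

```python
# Quick check: the per-metric times above sum to the stated total.
times_minutes = {"PAS": 45, "OIS": 30, "TCS": 120, "MSS": 150, "CAS": 60}
total = sum(times_minutes.values())
print(f"{total // 60} hours and {total % 60} minutes")  # 6 hours and 45 minutes
```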

# ❤️ Acknowledgements

We would like to express our gratitude to the open-source repositories that our work builds on: [GroundedSAM](https://github.com/IDEA-Research/Grounded-Segment-Anything), [GroundedSAM2](https://github.com/IDEA-Research/Grounded-SAM-2), [Co-Tracker](https://github.com/facebookresearch/co-tracker), [MMPose](https://github.com/open-mmlab/mmpose), [Q-Align](https://github.com/Q-Future/Q-Align), [VideoMAEv2](https://github.com/OpenGVLab/VideoMAEv2), and [VideoAlign](https://github.com/KwaiVGI/VideoAlign). Their contributions have been invaluable to this project.

# 📜 License

VMBench is licensed under the [Apache-2.0 license](http://www.apache.org/licenses/LICENSE-2.0). You are free to use our code for research purposes.

# ✏️ Citation
asset/eval_result.png ADDED

Git LFS Details

  • SHA256: 7baa0b60fd289067481b2b0d6a96abcf5e26bfcd75d74c527bd70bfee29eb492
  • Pointer size: 131 Bytes
  • Size of remote file: 651 kB
asset/logo.png ADDED

Git LFS Details

  • SHA256: d26b31e0a99cd0930a85c6516a7a3dc74e84a552555b74af32aa6aa0e9a8facf
  • Pointer size: 132 Bytes
  • Size of remote file: 1.49 MB
asset/overview.png ADDED

Git LFS Details

  • SHA256: 8cc7a346681bf83506df27c8b63eb78c3452121d3950e8731759b7ad175b501e
  • Pointer size: 132 Bytes
  • Size of remote file: 4.75 MB