Cola-any committed · Commit 19f138c · 1 Parent(s): 6e2a935

add dataset

README.md CHANGED
@@ -1,26 +1,96 @@
  ---
  license: cc-by-nc-sa-4.0
  ---

- # 🔥Video-LevelGauge: Investigating Contextual Positional Bias in Large Video Language Models

- ## 🔥 ToDo
- - Release the dataset.
- - Release the evaluation code.
- - Release the metric code.

  ## 👀 Video-LevelGauge Overview
  Video-LevelGauge is explicitly designed to investigate contextual positional bias in video understanding. We introduce a standardized probe and customized context design paradigm, where carefully designed probe segments are inserted at varying positions within customized contextual contents. By comparing model responses to identical probes at different insertion points, we assess positional bias in video comprehension.
- It supports flexible control over context length, probe position, and context composition to evaluate positional biases in various real-world scenarios, such as multi-video understanding, long video comprehension and multi-modal interleaved inputs.
  Video-LevelGauge encompasses six categories of structured video understanding tasks (e.g., action reasoning), along with an open-ended descriptive task. It includes 438 manually collected multi-type videos, 1,177 multiple-choice question answering (MCQA) items, and 120 open-ended instructed descriptive problems paired with annotations.
  <p align="center">
  <img src="./figs/overview.png" width="95%" height="95%">
  </p>

  ## 🔍 Dataset
- Coming soon

- ## 🔮 Evaluation Example
- Coming soon

- ## 📈 Experimental Results

  ---
  license: cc-by-nc-sa-4.0
+ extra_gated_prompt: >-
+   You acknowledge and understand that: This dataset is provided solely for
+   academic research purposes. It is not intended for commercial use or any other
+   non-research activities. All copyrights, trademarks, and other intellectual
+   property rights related to the videos in the dataset remain the exclusive
+   property of their respective owners.
  ---

+ # Video-LevelGauge: Investigating Contextual Positional Bias in Large Video Language Models
+
+ ## License
+ Video-LevelGauge is under the CC-BY-NC-SA-4.0 license.
+ It is derived from several previously published datasets ([VideoMME](https://huggingface.co/datasets/lmms-lab/Video-MME), [MLVU](https://huggingface.co/datasets/MLVU/MVLU), [VisDrone](https://github.com/VisDrone/VisDrone-Dataset), [UCF-Crime](https://www.crcv.ucf.edu/projects/real-world/), and [Ego4D](https://github.com/facebookresearch/Ego4d)). Please note that the original datasets may have their own licenses; users must comply with them when using this derived dataset.
+
+ ⚠️ If you need to access and use our dataset, you must understand and agree: **Video-LevelGauge is to be used for academic research only. Commercial use in any form is prohibited. The user assumes all responsibility for any other use and dissemination.**
+
+ We do not own the copyright of any raw video files; the copyright of all videos belongs to their owners. Currently, we provide video access to researchers under the condition that they acknowledge the above license. For the video data used, we respect and acknowledge any copyrights of the video authors.
+ If there is any infringement in our dataset, please email [email protected] and we will remove it immediately.
+
+ ## 🏠 Introduction
+ 🔔 Large Video Language Models (LVLMs) suffer from positional bias, characterized by uneven comprehension of identical content presented at different contextual positions.
+ <p align="center">
+ <img src="./figs/pos_bias.png" width="55%" height="95%">
+ </p>
+ 🌟 The serial position effect in psychology suggests that humans tend to better recall content presented at the beginning and end of a sequence. Similar behaviors have been observed in language models. To date, how various types of LVLMs, such as those incorporating memory components or trained with long contexts, behave with respect to positional bias remains under-explored.
+ Moreover, how positional bias manifests in video-text interleaved contexts is still an open question. In particular, models claiming to excel at long video understanding should be validated for their ability to maintain consistent and effective perception across the entire sequence, with minimal positional bias.
+ For example, Qwen2.5-VL-7B exhibits reduced positional bias on the OCR task compared to its bias on other tasks:
+ <p align="center">
+ <img src="./figs/pos_bais_plot_7b_20_norm.png" width="100%" height="100%">
+ </p>
 
  ## 👀 Video-LevelGauge Overview
  Video-LevelGauge is explicitly designed to investigate contextual positional bias in video understanding. We introduce a standardized probe and customized context design paradigm, where carefully designed probe segments are inserted at varying positions within customized contextual contents. By comparing model responses to identical probes at different insertion points, we assess positional bias in video comprehension.
+ It supports flexible control over context length, probe position, and context composition to evaluate positional biases in various real-world scenarios, such as **multi-video understanding, long video comprehension, and multi-modal interleaved inputs**.
  Video-LevelGauge encompasses six categories of structured video understanding tasks (e.g., action reasoning), along with an open-ended descriptive task. It includes 438 manually collected multi-type videos, 1,177 multiple-choice question answering (MCQA) items, and 120 open-ended instructed descriptive problems paired with annotations.
  <p align="center">
  <img src="./figs/overview.png" width="95%" height="95%">
  </p>
 
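For intuition, the probe-insertion paradigm described above can be sketched in a few lines of Python. This is a minimal illustration with hypothetical names, not the benchmark's actual tooling:

```python
# Minimal sketch of the standardized-probe paradigm (illustrative only).
# The same probe clip is inserted at every slot of a fixed context; a
# position-sensitive model answers the identical question differently
# depending on where the probe lands.

def build_probe_sequences(probe_clip: str, context_clips: list[str]):
    """Yield (insertion position, video clip sequence) pairs."""
    for pos in range(len(context_clips) + 1):
        yield pos, context_clips[:pos] + [probe_clip] + context_clips[pos:]

# Example: one probe swept through a three-clip context.
for pos, seq in build_probe_sequences("probe.mp4", ["a.mp4", "b.mp4", "c.mp4"]):
    print(pos, seq)
```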
  ## 🔍 Dataset
+ The annotation file and the raw videos are readily accessible via this [HF Link](https://huggingface.co/datasets/Cola-any/Video-LevelGauge) 🤗. Note that this dataset is for research purposes only and you must strictly comply with the above License.
+
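For example, the repository can be fetched with the `huggingface_hub` client. This is a sketch; the local directory is our own choice, and gated repos require accepting the terms on the dataset page and logging in first:

```python
from huggingface_hub import snapshot_download

# Download the annotations and videos from the dataset repo.
# For gated access, accept the terms on the dataset page and
# authenticate first (e.g. `huggingface-cli login`).
snapshot_download(
    repo_id="Cola-any/Video-LevelGauge",
    repo_type="dataset",
    local_dir="./Video-LevelGauge",  # illustrative destination
)
```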
+ ## 🔮 Evaluation Pipeline
+ Please refer to our 🎁 [project](https://github.com/Cola-any/Video-LevelGauge) and 📖 [arXiv Paper]() for more details.
+
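As a rough illustration of how positional bias can be read off the results (our own sketch, not the paper's official metric), one can group probe accuracy by insertion position and inspect the spread:

```python
from collections import defaultdict

def accuracy_by_position(results):
    """results: iterable of (insertion position, is_correct) pairs
    collected from the MCQA probes. Returns accuracy per position."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pos, ok in results:
        correct[pos] += int(ok)
        total[pos] += 1
    return {pos: correct[pos] / total[pos] for pos in sorted(total)}

# A bias-free model yields a flat curve; a large gap between the best
# and worst positions signals strong positional bias.
acc = accuracy_by_position([(0, True), (0, True), (1, False), (1, True), (2, False)])
print(acc, max(acc.values()) - min(acc.values()))
```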
+ ## 📈 Experimental Results
+ - **Performance of state-of-the-art LVLMs on Video-LevelGauge.**
+
+ Gemini 2.5 Pro exhibits the least positional bias, followed by GLM-4.5V, GPT-4o-latest, Doubao-Seed-1.6, and other models.
+ <p align="center">
+ <img src="./figs/leaderboard.png" width="55%" height="95%">
+ </p>
+
+ - **Evaluation results of state-of-the-art LVLMs.**
+
+ We conduct a comprehensive investigation of 27 LVLMs using Video-LevelGauge, including 6 commercial models (e.g., Gemini 2.5 Pro and QVQ-Max) and 21 open-source LVLMs, covering unified models like InternVL3, long video models like Video-XL2, specifically optimized models like VideoRefer, multi-modal reasoning models like GLM-4.5V, and two-stage methods like LLoVi.
+ <p align="center">
+ <img src="./figs/lvlms.png" width="95%" height="95%">
+ </p>
+
+ - **Effect of Context Length on Positional Bias.**
+
+ Positional bias is prevalent across various context lengths and tends to intensify as the context length increases, accompanied by shifts in bias patterns.
+ <p align="center">
+ <img src="./figs/context_len.png" width="95%" height="95%">
+ </p>
+
+ - **Effect of Context Type on Positional Bias.**
+
+ LVLMs exhibit more pronounced positional bias in complex context scenarios.
+ <p align="center">
+ <img src="./figs/context_type.png" width="95%" height="95%">
+ </p>
+
+ - **Effect of Model Size on Positional Bias.**
+
+ Positional bias is significantly alleviated as model size increases, consistent with the scaling behavior observed for other capabilities.
+ <p align="center">
+ <img src="./figs/model_size.png" width="55%" height="95%">
+ </p>
+
+ - **Effect of Thinking Mode on Positional Bias.**
+
+ Thinking mode can alleviate positional bias to a certain extent.
+ <p align="center">
+ <img src="./figs/thinking.png" width="55%" height="95%">
+ </p>
 
+ ## Citation
+ If you find our work helpful for your research, please consider citing it.
+ ```
+
+ ```
dataset_infos.json CHANGED
@@ -1,3 +1,4 @@
  {
- "license": "cc-by-nc-sa-4.0"
  }

  {
+ "license": "cc-by-nc-sa-4.0",
+
  }
figs/context_len.png ADDED

Git LFS Details

  • SHA256: 2040d9b25c84e4ec80536b7bb1bbd493c8a099d7c5bb0827f20fe5178b46e159
  • Pointer size: 131 Bytes
  • Size of remote file: 722 kB
figs/context_type.png ADDED

Git LFS Details

  • SHA256: 4c83592fcb044716ca3513f0e741a5d0f3ee0b722f06bf060cea4b6396386ab9
  • Pointer size: 131 Bytes
  • Size of remote file: 287 kB
figs/leaderboard.png ADDED

Git LFS Details

  • SHA256: d1661bd63451949ab5eaf242751b6a85804a675799f38abe833966f764b4fe96
  • Pointer size: 131 Bytes
  • Size of remote file: 385 kB
figs/lvlms.png ADDED

Git LFS Details

  • SHA256: be33911d669072722ce037d070575c3ec3fad4549d3b20c676d68eafd282de1f
  • Pointer size: 131 Bytes
  • Size of remote file: 694 kB
figs/model_size.png ADDED

Git LFS Details

  • SHA256: baa2c5606b6d5bc986406b937f0975b33cb06f45ad64b04326152508746f3616
  • Pointer size: 131 Bytes
  • Size of remote file: 404 kB
figs/pos_bais_plot_7b_20_norm.png ADDED

Git LFS Details

  • SHA256: ea711457000fdcc1d38833a18d57a9d47aa6da8210f88d6ccc1015649ea212e5
  • Pointer size: 131 Bytes
  • Size of remote file: 437 kB
figs/pos_bias.png ADDED

Git LFS Details

  • SHA256: 2d2d6fc96a60ebd2068001e357e3b3ae9c8b32ce024ba74b871872f8b6a320d2
  • Pointer size: 131 Bytes
  • Size of remote file: 557 kB
figs/thinking.png ADDED

Git LFS Details

  • SHA256: 877e6ef8fc64e57129e473e79e1208d6bdda21ebf1d5b5b81864d6b16908839b
  • Pointer size: 131 Bytes
  • Size of remote file: 169 kB