nielsr (HF Staff) committed
Commit a831a00 · verified · 1 Parent(s): 72f5c10

Add task category, tags, and sample usage section


This PR enhances the dataset card by adding:
- `task_categories: - video-text-to-text` to the YAML metadata for accurate categorization.
- Relevant `tags` (`video-understanding`, `large-video-language-models`, `lvlm`, `positional-bias`, `benchmark`, `evaluation`) to improve discoverability.
- A detailed "🚀 Sample Usage" section directly from the project's GitHub README, including steps for dataset preparation, running inference with various models (InternVL3, MiMo-VL, GLM-4.5V), and metric calculation. This provides actionable code snippets for users to quickly get started with the dataset.
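As a quick complement to the usage section, the annotation records can also be pulled directly with the `datasets` library. This is a minimal sketch (it assumes the default config declared in the YAML metadata below; the raw videos still need to be downloaded separately, as described in the Sample Usage section):

```python
from datasets import load_dataset

# Load the Video-LevelGauge annotation records (default config from the card's YAML).
ds = load_dataset("Cola-any/Video-LevelGauge")
print(ds)  # should list the available split(s), with 1,177 annotated examples in total
```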

Files changed (1)
README.md (+73, -10)
README.md CHANGED
@@ -1,11 +1,23 @@
---
+ language:
+ - en
license: cc-by-nc-sa-4.0
- extra_gated_prompt: >-
- You acknowledge and understand that: This dataset is provided solely for
- academic research purposes. It is not intended for commercial use or any other
- non-research activities. All copyrights, trademarks, and other intellectual
- property rights related to the videos in the dataset remain the exclusive
- property of their respective owners.
+ size_categories:
+ - 1K<n<10K
+ task_categories:
+ - video-text-to-text
+ tags:
+ - video-understanding
+ - large-video-language-models
+ - lvlm
+ - positional-bias
+ - benchmark
+ - evaluation
+ extra_gated_prompt: 'You acknowledge and understand that: This dataset is provided
+ solely for academic research purposes. It is not intended for commercial use or
+ any other non-research activities. All copyrights, trademarks, and other intellectual
+ property rights related to the videos in the dataset remain the exclusive property
+ of their respective owners. '
configs:
- config_name: default
  data_files:
@@ -39,10 +51,6 @@ dataset_info:
  num_examples: 1177
  download_size: 224148
  dataset_size: 490082
- language:
- - en
- size_categories:
- - 1K<n<10K
---

<h1 align="center">Video-LevelGauge: Investigating Contextual Positional Bias in Large Video Language Models</h1>
@@ -93,6 +101,61 @@ Video-LevelGauge encompasses six categories of structured video understanding ta
## 🔍 Dataset
The annotation file and the raw videos are readily accessible via this [HF Link](https://huggingface.co/datasets/Cola-any/Video-LevelGauge) 🤗. Note that this dataset is for research purposes only and you must strictly comply with the above License.

+ ## 🚀 Sample Usage
+
+ To quickly get started with running inference and evaluating models on Video-LevelGauge, follow these steps. For more detailed instructions and examples, please refer to the [GitHub repository](https://github.com/Cola-any/Video-LevelGauge).
+
+ ### ✨ Clone and Prepare Dataset
+ First, please clone this repository and download [our dataset](https://huggingface.co/datasets/Cola-any/Video-LevelGauge/tree/main/LevelGauge) into `./LevelGauge`, organizing it as follows:
+ ```
+ Video-LevelGauge
+ ├── asset
+ ├── evaluation
+ ├── LevelGauge
+ │   ├── json
+ │   └── videos
+ ├── metric
+ ├── output
+ ├── preprocess
+ ```
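If you prefer to fetch the `LevelGauge` folder programmatically instead of through the web UI, a minimal sketch with `huggingface_hub` looks like the following (the `local_dir` value is a placeholder; point it at your clone of the repository so the layout above is reproduced):

```python
from huggingface_hub import snapshot_download

# Download only the LevelGauge/ folder (annotation JSON + videos) of the dataset repo,
# reproducing the directory layout shown above inside the cloned repository.
snapshot_download(
    repo_id="Cola-any/Video-LevelGauge",
    repo_type="dataset",
    allow_patterns=["LevelGauge/*"],
    local_dir="./Video-LevelGauge",  # placeholder: path to your local clone
)
```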
+ ### ✨ Running Inference
+ We take three models as examples to demonstrate how to use our benchmark for positional bias evaluation:
+ - **InternVL3** – inference with `transformers`.
+ - **MiMo-VL** – inference with the `vLLM` API, using **video input**.
+   (If you plan to call a commercial API for testing, this is a good reference.)
+ - **GLM-4.5V** – inference with the `vLLM` API, using **multi-image input**.
+
+ For InternVL3, please follow the [official project](https://github.com/OpenGVLab/InternVL) to set up the environment. Run inference as follows:
+ ```bash
+ bash ./evaluation/transformer/eval_intervl3.sh
+ ```
+ The accuracy at each position will be computed and saved to `acc_dir: ./output/internvl_acc`.
+
+ For MiMo-VL, please first follow the [official project](https://github.com/XiaomiMiMo/MiMo-VL/tree/main) to deploy the model with vLLM. Run inference as follows:
+ ```bash
+ bash ./evaluation/vllm/eval_mimovl.sh
+ ```
+ The accuracy at each position will be computed and saved to `acc_dir: ./output/mimovl_acc`.
+
+ For GLM-4.5V, please first follow the [official project](https://github.com/zai-org/GLM-V/) to deploy the model with vLLM. Run inference as follows:
+ ```bash
+ bash ./evaluation/vllm/eval_glm45v.sh
+ ```
+ The accuracy at each position will be computed and saved to `acc_dir: ./output/glm45v_acc`.
+
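The vLLM-based scripts above call an OpenAI-compatible endpoint. As a rough illustration only, not the repository's code (the server URL, model name, frame paths, and prompt are placeholders, and the exact payload depends on the model you deploy), a multi-image request could look like this:

```python
import base64
from openai import OpenAI

# Hypothetical OpenAI-compatible endpoint exposed by `vllm serve <model>`.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

def frame_to_data_url(path: str) -> str:
    """Encode one sampled video frame as a base64 data URL."""
    with open(path, "rb") as f:
        return "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()

# Multi-image input: frames sampled from the concatenated probe + background video.
frames = [f"./frames/frame_{i:03d}.jpg" for i in range(8)]  # placeholder paths
content = [{"type": "image_url", "image_url": {"url": frame_to_data_url(p)}} for p in frames]
content.append({"type": "text", "text": "Answer the multiple-choice question about the probe segment: ..."})

response = client.chat.completions.create(
    model="your-served-model",  # e.g. the GLM-4.5V or MiMo-VL checkpoint you deployed
    messages=[{"role": "user", "content": content}],
)
print(response.choices[0].message.content)
```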
+ 📌 In addition, we provide preprocessing scripts, including *frame extraction* and *concatenating probe and background videos into a single video*; see the `./preprocess` folder. You can choose the input method based on your model; concatenating probe and background videos into a single video is recommended, as it is applicable to all models.
+
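For reference, concatenating probe and background clips in the spirit of the provided preprocessing can be done with `ffmpeg`'s concat demuxer. This is only an illustrative sketch, not the repository's `./preprocess` scripts: file names are placeholders, and the stream-copy step assumes all clips share codec and resolution (re-encode instead of using `-c copy` if they do not):

```python
import os
import subprocess
import tempfile

def concat_videos(clip_paths, output_path):
    """Concatenate clips into a single video with ffmpeg's concat demuxer."""
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        for p in clip_paths:
            # Use absolute paths, since ffmpeg resolves entries relative to the list file.
            f.write(f"file '{os.path.abspath(p)}'\n")
        list_file = f.name
    # -c copy avoids re-encoding; it requires identical codecs/resolutions across clips.
    subprocess.run(
        ["ffmpeg", "-f", "concat", "-safe", "0", "-i", list_file, "-c", "copy", output_path],
        check=True,
    )

# Example: insert the probe clip at a chosen position between background clips.
concat_videos(["bg_part1.mp4", "probe.mp4", "bg_part2.mp4"], "merged.mp4")
```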
+ 📌 For precise investigation, in our paper, we evaluate models on the full set of our 1,177 samples, which requires tens of thousands of inferences across 10 positions. We provide a subset of [300 samples](https://huggingface.co/datasets/Cola-any/Video-LevelGauge/blob/main/LevelGauge/json/Pos_MCQA_300_final.json) for quick testing 🚀.
+
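The subset is a plain JSON annotation file, so it can be inspected directly; a minimal sketch (the exact record schema is defined by the repository, so treat the inspection below as illustrative):

```python
import json

# Inspect the 300-sample quick-test annotation file (path as organized above).
with open("./LevelGauge/json/Pos_MCQA_300_final.json", "r", encoding="utf-8") as f:
    subset = json.load(f)

print(type(subset), len(subset))  # number of annotation entries
```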
+ ### ✨ Metric Calculation
+ Once positional accuracies are saved to `acc_dir`, you can compute all metrics in one command 😄, including *Pran*, *Pvar*, *Pmean*, *MR*, etc. We use the provided files in `./output/example_acc` as an example:
+ ```bash
+ python ./metric/metric.py --acc_dir ./output/example_acc
+ ```
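For intuition only: the metric names suggest simple summary statistics over the per-position accuracies stored in `acc_dir`. The authoritative definitions are in `./metric/metric.py` and the paper; the sketch below is a simplified reading of *Pmean*, *Pran*, and *Pvar*, and it does not cover *MR*:

```python
import numpy as np

# Placeholder per-position accuracies, e.g. one value for each of the 10 probe positions.
acc = np.array([0.62, 0.58, 0.55, 0.53, 0.51, 0.50, 0.52, 0.54, 0.57, 0.60])

p_mean = acc.mean()            # average accuracy over positions
p_ran = acc.max() - acc.min()  # spread between best- and worst-performing position
p_var = acc.var()              # variance across positions (lower = weaker positional bias)

print(f"Pmean={p_mean:.3f}  Pran={p_ran:.3f}  Pvar={p_var:.4f}")
```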
+ Finally, we provide a script for visualizing positional bias. See [bias_plot.py](https://github.com/Cola-any/Video-LevelGauge/blob/main/metric/bias_plot.py) for details.
+

## 🔮 Evaluation PipLine
Please refer to our [project](https://github.com/Cola-any/Video-LevelGauge) and 📖 [arXiv Paper](https://arxiv.org/abs/2508.19650) for more details.
161