---
language:
  - en
license: cc-by-nc-sa-4.0
size_categories:
  - 1K<n<10K
task_categories:
  - video-text-to-text
tags:
  - video-understanding
  - large-video-language-models
  - lvlm
  - positional-bias
  - benchmark
  - evaluation
extra_gated_prompt: >-
  You acknowledge and understand that: This dataset is provided solely for
  academic research purposes. It is not intended for commercial use or any other
  non-research activities. All copyrights, trademarks, and other intellectual
  property rights related to the videos in the dataset remain the exclusive
  property of their respective owners. 
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
dataset_info:
  features:
    - name: question_id
      dtype: string
    - name: question
      dtype: string
    - name: gt_answer
      dtype: string
    - name: video_name
      dtype: string
    - name: question_type
      dtype: string
    - name: answer_number
      dtype: int64
    - name: candidates
      sequence: string
    - name: video_len
      dtype: float64
    - name: video_category
      dtype: string
    - name: human_verification
      dtype: bool
  splits:
    - name: train
      num_bytes: 490082
      num_examples: 1177
  download_size: 224148
  dataset_size: 490082
---

Video-LevelGauge: Investigating Contextual Positional Bias in Large Video Language Models

📜 License

Video-LevelGauge is under the CC-BY-NC-SA-4.0 license. It is derived from several previously published datasets (VideoMME, MLVU, VisDrone, UCF-Crime, and Ego4D). Please note that the original datasets may have their own licenses. Users must comply with the licenses of the original datasets when using this derived dataset.

⚠️ If you access and use our dataset, you must understand and agree: Video-LevelGauge is to be used for academic research only. Commercial use in any form is prohibited. The user assumes full responsibility for any consequences arising from any other use or dissemination.

We do not own the copyright of any raw video files and the copyright of all videos belongs to the video owners. Currently, we provide video access to researchers under the condition of acknowledging the above license. For the video data used, we respect and acknowledge any copyrights of the video authors. If there is any infringement in our dataset, please email [email protected] and we will remove it immediately.

🏠 Introduction

🔔 Large Video Language Models (LVLMs) suffer from positional bias, characterized by uneven comprehension of identical content presented at different contextual positions.

🌟 The serial position effect in psychology suggests that humans tend to recall content presented at the beginning and end of a sequence better than content in the middle. Similar behaviors have been observed in language models. To date, how various types of LVLMs, such as those incorporating memory components or trained on long contexts, are affected by positional bias remains under-explored. Moreover, how positional bias manifests in video-text interleaved contexts is still an open question. In particular, models claiming to excel at long video understanding should be validated for their ability to maintain consistent and effective perception across the entire sequence, with minimal positional bias. For example, Qwen2.5-VL-7B exhibits reduced positional bias on the OCR task compared to its bias on other tasks.

👀 Video-LevelGauge Overview

Video-LevelGauge is explicitly designed to investigate contextual positional bias in video understanding. We introduce a standardized probe and customized context design paradigm, where carefully designed probe segments are inserted at varying positions within customized contextual contents. By comparing model responses to identical probes at different insertion points, we assess positional bias in video comprehension. It supports flexible control over context length, probe position, and context composition to evaluate positional biases in various real-world scenarios, such as multi-video understanding, long video comprehension and multi-modal interleaved inputs. Video-LevelGauge encompasses six categories of structured video understanding tasks (e.g., action reasoning), along with an open-ended descriptive task. It includes 438 manually collected multi-type videos, 1,177 multiple-choice question answering (MCQA) items, and 120 open-ended instructed descriptive problems paired with annotations.
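
To make the paradigm concrete, the sketch below illustrates (in simplified form) how a single probe clip could be spliced into background context at evenly spaced positions. The function and variable names are hypothetical; the actual construction logic lives in the GitHub repository.

```python
# Illustrative sketch of the standardized-probe / customized-context paradigm.
# Names (build_sequence, probe, background) are hypothetical.

def build_sequence(probe_frames, background_frames, position, num_positions=10):
    """Insert the probe clip at one of `num_positions` evenly spaced slots
    inside the background context and return the interleaved frame list."""
    slot = round(len(background_frames) * position / (num_positions - 1))
    return background_frames[:slot] + probe_frames + background_frames[slot:]

# The same probe (and the same question) is evaluated at every position;
# comparing per-position accuracies reveals the contextual positional bias.
probe = ["p0", "p1"]                      # frames of the probe segment
background = [f"b{i}" for i in range(8)]  # frames of the background context
sequences = [build_sequence(probe, background, pos) for pos in range(10)]
```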

🔍 Dataset

The annotation file and the raw videos are readily accessible via this HF Link 🤗. Note that this dataset is for research purposes only and you must strictly comply with the above License.
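
For reference, the MCQA annotations can be loaded with the datasets library roughly as follows. The repository id below is a placeholder for this dataset page, and the raw videos still need to be downloaded and placed under ./LevelGauge/videos as described in the next section.

```python
from datasets import load_dataset

# Placeholder repository id; replace it with the id of this dataset page.
ds = load_dataset("<org>/Video-LevelGauge", split="train")

print(ds)                    # 1,177 MCQA items
sample = ds[0]
print(sample["question"])    # question text
print(sample["candidates"])  # multiple-choice options
print(sample["gt_answer"])   # ground-truth answer
print(sample["video_name"], sample["question_type"], sample["video_len"])
```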

🚀 Sample Usage

To quickly get started with running inference and evaluating models on Video-LevelGauge, follow these steps. For more detailed instructions and examples, please refer to the GitHub repository.

✨ Clone and Prepare Dataset

First, please clone this repository and download our dataset into ./LevelGauge, organizing it as follows:

Video-LevelGauge
├── asset
├── evaluation
├── LevelGauge
│   ├── json
│   └── videos
├── metric
├── output
├── preprocess

✨ Running Inference

We take three models as examples to demonstrate how to use our benchmark for positional bias evaluation:

  • InternVL3 – inference with transformers.
  • MiMo-VL – inference with vLLM API, using video input.
    (If you plan to call a commercial API for testing, this is a good reference.)
  • GLM-4.5V – inference with vLLM API, using multi-image input.

For InternVL3, please follow the official project to set up the environment. Run inference as follows:

bash ./evaluation/transformer/eval_intervl3.sh

The accuracy at each position will be computed and saved to acc_dir: ./output/internvl_acc.

For MiMo-VL, please first follow the official project to deploy the model with vLLM. Run inference as follows:

bash ./evaluation/vllm/eval_mimovl.sh

The accuracy at each position will be computed and saved to acc_dir: ./output/mimovl_acc.

For GLM-4.5V, please first follow the official project to deploy the model with vLLM. Run inference as follows:

bash ./evaluation/vllm/eval_glm45v.sh

The accuracy at each position will be computed and saved to acc_dir: ./output/glm45v_acc.
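
Both vLLM-based evaluations query the deployed model through its OpenAI-compatible endpoint. As a rough orientation (not the benchmark's actual request code, which lives under ./evaluation/vllm), a multi-image request to such an endpoint might look like the sketch below; the endpoint URL, model name, frame paths, and prompt are placeholders.

```python
import base64
from openai import OpenAI

# Placeholder endpoint and model name; adjust them to your vLLM deployment.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

def encode_image(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

# Hypothetical sampled frames of the concatenated (probe + background) video.
frames = [encode_image(f"frame_{i:04d}.jpg") for i in range(8)]
content = [{"type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{b}"}} for b in frames]
content.append({"type": "text",
                "text": "Question and candidate options go here; answer with the option letter only."})

response = client.chat.completions.create(
    model="GLM-4.5V",  # name used when serving the model
    messages=[{"role": "user", "content": content}],
    temperature=0.0,
)
print(response.choices[0].message.content)
```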

📌 In addition, we provide preprocessing scripts, including frame extraction and concatenation of the probe and background videos into a single video. See the ./preprocess folder. You can choose the input method based on your model; concatenating the probe and background videos into a single video is recommended, as it is applicable to all models.
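
For illustration only, one common way to concatenate a probe clip with background clips is ffmpeg's concat demuxer, as in the sketch below. The file names are hypothetical, the inputs are assumed to share codec, resolution, and frame rate, and the maintained scripts are those in ./preprocess.

```python
import subprocess
import tempfile

def concat_videos(video_paths, output_path):
    """Concatenate clips with ffmpeg's concat demuxer. Assumes all inputs share
    the same codec, resolution, and frame rate; otherwise drop `-c copy` to re-encode."""
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        for p in video_paths:
            f.write(f"file '{p}'\n")
        list_file = f.name
    subprocess.run(
        ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
         "-i", list_file, "-c", "copy", output_path],
        check=True,
    )

# Hypothetical example: probe clip inserted between two background clips.
concat_videos(["background_a.mp4", "probe.mp4", "background_b.mp4"], "merged.mp4")
```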

📌 For precise investigation, in our paper, we evaluate models on the full set of our 1,177 samples, which requires tens of thousands of inferences across 10 positions. We provide a subset of 300 samples for quick testing 🚀.

✨ Metric Calculation

Once positional accuracies are saved to acc_dir, you can compute all metrics in one command 😄, including Pran, Pvar, Pmean, MR, etc. We use the provided files in ./output/example_acc as an example:

python ./metric/metric.py --acc_dir ./output/example_acc
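
As an intuition for what such metrics capture, the sketch below computes simple statistics over hypothetical per-position accuracies, taking Pmean as the mean, Pran as the max-min range, and Pvar as the variance across positions. The exact definitions used by ./metric/metric.py (including MR) are given in the paper and may differ.

```python
import statistics

# Hypothetical per-position accuracies (%) for one model at 10 probe positions.
position_acc = [71.2, 69.8, 66.5, 64.0, 63.1, 62.7, 63.9, 65.4, 68.0, 70.3]

p_mean = statistics.mean(position_acc)          # average accuracy over positions
p_ran = max(position_acc) - min(position_acc)   # spread between best and worst positions
p_var = statistics.pvariance(position_acc)      # variability across positions

print(f"Pmean = {p_mean:.2f}, Pran = {p_ran:.2f}, Pvar = {p_var:.2f}")
```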

Finally, we provide a script for visualizing positional bias. See bias_plot.py for details.
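
If you want a quick look before using that script, a minimal matplotlib sketch of accuracy versus probe position (again with hypothetical numbers) is:

```python
import matplotlib.pyplot as plt

# Hypothetical per-position accuracies; in practice, load them from acc_dir.
positions = list(range(1, 11))
accuracy = [71.2, 69.8, 66.5, 64.0, 63.1, 62.7, 63.9, 65.4, 68.0, 70.3]

plt.plot(positions, accuracy, marker="o")
plt.xlabel("Probe position in the context")
plt.ylabel("Accuracy (%)")
plt.title("Positional bias (illustrative)")
plt.savefig("positional_bias_example.png", dpi=200)
```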

🔮 Evaluation Pipeline

Please refer to our project page and 📖 arXiv paper for more details.

📈 Experimental Results

📍 Performance of state-of-the-art LVLMs on Video-LevelGauge.

Gemini 2.5 Pro exhibits the least positional bias, followed by GLM-4.5V, GPT-4o-latest, Doubao-Seed-1.6, and other models.

📍 Evaluation results of state-of-the-art LVLMs.

We conduct a comprehensive investigation of 27 LVLMs using Video-LevelGauge, including 6 commercial models (e.g., Gemini 2.5 Pro and QVQ-Max) and 21 open-source LVLMs, covering unified models such as InternVL3, long-video models such as Video-XL2, specifically optimized models such as VideoRefer, multi-modal reasoning models such as GLM-4.5V, and two-stage methods such as LLoVi.

📍 Effect of Context Length on Positional Bias.

Positional bias is prevalent across various context lengths and tends to intensify as the context length increases, accompanied by shifts in bias patterns.

📍 Effect of Context Type on Positional Bias.

LVLMs exhibit more pronounced positional bias in complex context scenarios.

📍 Effect of Model Size on Positional Bias.

Positional bias is significantly alleviated as model size increases, consistent with the scaling behavior observed for other capabilities.

📍 Effect of Thinking Mode on Positional Bias.

Thinking mode can alleviate the positional bias issue to a certain extent.

Citation

If you find our work helpful for your research, please consider citing it.

@article{xia2025videolevelgaugeinvestigatingcontextualpositional,
  title   = {Video-LevelGauge: Investigating Contextual Positional Bias in Large Video Language Models},
  author  = {Hou, Xia and Fu, Zheren and Ling, Fangcan and Li, Jiajun and Tu, Yi and Mao, Zhendong and Zhang, Yongdong},
  journal = {arXiv preprint arXiv:2508.19650},
  year    = {2025},
}