GLIMPSE: Do Large Vision-Language Models Truly Think With Videos or Just Glimpse at Them?
[Dataset] [Code] [Paper] [Overview] [Dataset Details] [Citation]
Overview
"True intelligence lies not in what we see, but in what we understand; not in remembering moments, but in grasping eternity."
Between seeing and understanding lies a profound abyss of cognition. Do multimodal large models truly think with videos, or are they merely performing visual theatrics?
When humans watch videos, we don't just "see"; we think. We understand the passage of time, capture the coherence of actions, and perceive the essence of things. However, existing video benchmarks often resemble image benchmarks, with questions like "What action does the person perform in the video?" or "What color is the woman's dress in the video?" For such questions, models typically only need to scan a few key frames to answer, without requiring deep temporal, spatial, and interactive reasoning.
We present GLIMPSE, a benchmark specifically designed to evaluate whether LVLMs can truly "think with videos". Unlike previous benchmarks, GLIMPSE emphasizes comprehensive video understanding that goes beyond static image cues. It contains 3,269 videos and 4,342 highly vision-centric questions covering 11 categories, including trajectory analysis, temporal reasoning, and forensic detection.
Key Features:
- Human-Crafted Questions: All questions are meticulously designed by human annotators and require watching the complete video and reasoning over its full context, which is what we call "thinking with videos"
- Beyond Frame Scanning: The questions cannot be answered by scanning a few selected frames or by relying on text alone (a typical uniform frame-sampling pipeline is sketched after this list)
- Rigorous Validation: Human experts reach 94.82% accuracy on GLIMPSE, while current LVLMs fall far short
- Challenging for SOTA: Even the best-performing model, GPT-o3, achieves only 66.43% accuracy
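For context, the snippet below is a minimal sketch, assuming OpenCV (`cv2`) is installed, of the uniform frame sampling that many LVLM pipelines rely on; the function name `sample_frames` and the default of 8 frames are illustrative choices, not part of GLIMPSE. The benchmark's questions are designed so that such a sparse sample is not sufficient to answer them.

```python
import cv2  # assumes opencv-python is installed


def sample_frames(video_path: str, num_frames: int = 8):
    """Uniformly sample `num_frames` frames from a video (illustrative only)."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = [int(i * total / num_frames) for i in range(num_frames)]
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)  # seek to the chosen frame index
        ok, frame = cap.read()
        if ok:
            frames.append(frame)  # BGR ndarray of shape (H, W, 3)
    cap.release()
    return frames
```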
Dataset Details

GLIMPSE is meticulously curated through a three-step process: video collection, question-answer annotation, and quality review. The benchmark comprises 3,269 videos and 4,342 high-quality vision-centric questions across 11 categories:
Categories Overview
(1) Trajectory Analysis: Analyzing object movement patterns, directions, and displacement over time, requiring fine-grained recognition and temporal reasoning.
(2) Temporal Reasoning: Understanding the timing and sequence of events, focusing on temporal order relationships rather than simple event localization.
(3) Quantitative Estimation: Counting dynamic events such as repeated actions or object appearances/disappearances in video content.
(4) Event Recognition: Determining whether events occur and their sequential relationships, especially in multi-event scenarios.
(5) Reverse Event Inference: Reconstructing event flow from partial information to determine correct action sequences.
(6) Scene Context Awareness: Understanding background changes throughout videos, evaluating spatial understanding and context recognition.
(7) Velocity Estimation: Calculating relative speeds of moving objects by analyzing displacement over time.
(8) Cinematic Dynamics: Identifying camera motion by analyzing foreground-background relationships and movement patterns.
(9) Forensic Authenticity Analysis: Detecting fake videos generated by text-to-video models to assess video authenticity verification capabilities.
(10) Robotics Evaluation: Identifying and assessing robotic actions including grasping, moving, and assembling tasks.
(11) Multi-Object Interaction: Analyzing interactions between multiple entities, including physical contact, collaboration, or conflict scenarios.
All videos are kept between 20 seconds and 2 minutes in length, a range chosen to preserve sufficient complexity while keeping the videos practical as benchmark material.
Quality Assurance
- Manual Annotation: All questions crafted by English-proficient researchers
- Full Video Requirement: Each question requires understanding the entire video, not just single frames
- Bidirectional Testing: Yes/no questions include reverse pairs to reduce evaluation bias (a scoring sketch follows this list)
- Rigorous Review: Multi-stage quality control ensures visual-centricity and answerability
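As a hedged illustration of how reverse pairs can counteract yes-bias, the sketch below scores a yes/no pair as correct only when both directions are answered correctly. The field names `pair_id`, `answer`, and `prediction` are assumptions for illustration, not the benchmark's actual schema or official scoring protocol.

```python
from collections import defaultdict


def paired_accuracy(records):
    """Score bidirectional yes/no pairs; a pair counts only if both sides are right.

    `records` is an iterable of dicts with assumed keys:
    'pair_id' (shared by a question and its reversed twin),
    'answer' and 'prediction' ("yes"/"no" strings).
    """
    pairs = defaultdict(list)
    for r in records:
        pairs[r["pair_id"]].append(
            r["prediction"].strip().lower() == r["answer"].strip().lower()
        )
    if not pairs:
        return 0.0
    return sum(all(v) for v in pairs.values()) / len(pairs)
```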
Research Significance
Our results reveal that LVLMs still struggle to move beyond surface-level reasoning and truly achieve "thinking with videos". GLIMPSE provides a new standard for evaluating and advancing multimodal AI's video understanding capabilities, highlighting the gap between current model performance and genuine video comprehension.
Performance Analysis
Current state-of-the-art models show significant room for improvement on GLIMPSE. The table below reports accuracy (%); column abbreviations follow the 11 categories above: TA = Trajectory Analysis, TR = Temporal Reasoning, QE = Quantitative Estimation, ER = Event Recognition, REI = Reverse Event Inference, SCA = Scene Context Awareness, VE = Velocity Estimation, CD = Cinematic Dynamics, FAA = Forensic Authenticity Analysis, RE = Robotics Evaluation, MOI = Multi-Object Interaction.

Model | Type | TA | TR | QE | ER | REI | SCA | VE | CD | FAA | RE | MOI | Avg |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Human Expert | Gold Standard | 92.00 | 96.00 | 88.00 | 100.00 | 92.00 | 100.00 | 94.00 | 96.00 | 91.00 | 96.00 | 98.00 | 94.82 |
Random | Baseline | 23.60 | 25.40 | 23.92 | 24.32 | 16.39 | 25.63 | 25.18 | 24.31 | 27.08 | 23.53 | 22.58 | 24.14 |
mPLUG-OWL2 (7B) | Image LVLM | 39.25 | 22.40 | 36.90 | 31.80 | 34.00 | 37.30 | 28.60 | 38.40 | 52.70 | 33.50 | 28.34 | 34.12 |
Qwen-VL-Chat (7B) | Image LVLM | 38.24 | 11.60 | 35.71 | 30.72 | 32.79 | 36.18 | 27.48 | 33.16 | 51.62 | 32.04 | 27.42 | 30.73 |
LLAVA-1.5 (7B) | Image LVLM | 42.03 | 28.47 | 30.02 | 42.96 | 45.53 | 28.52 | 29.48 | 41.47 | 67.04 | 30.01 | 27.03 | 37.48 |
Video-LLaMA (7B) | Video LVLM | 44.71 | 28.88 | 23.30 | 48.94 | 38.00 | 60.46 | 15.32 | 53.30 | 65.09 | 42.13 | 51.16 | 39.71 |
Video-LLaMA2 (7B) | Video LVLM | 47.06 | 30.40 | 24.53 | 51.52 | 40.00 | 63.64 | 16.13 | 56.10 | 68.52 | 65.19 | 53.85 | 42.60 |
Chat-UniVi-V1.5 (7B) | Video LVLM | 39.86 | 23.68 | 22.36 | 54.10 | 42.00 | 66.82 | 16.94 | 58.91 | 71.95 | 68.45 | 56.54 | 41.47 |
LLAVA-NeXT-Video (7B) | Video LVLM | 46.79 | 42.33 | 19.55 | 57.68 | 44.45 | 69.04 | 34.48 | 53.03 | 72.87 | 63.71 | 57.17 | 46.80 |
VideoLLaVA (7B) | Video LVLM | 40.42 | 22.56 | 21.86 | 52.60 | 42.99 | 68.00 | 17.60 | 56.48 | 71.39 | 65.23 | 54.95 | 40.74 |
Qwen2-VL (7B) | Video LVLM | 46.15 | 44.44 | 28.37 | 59.32 | 43.42 | 67.84 | 33.18 | 55.32 | 73.44 | 66.85 | 58.82 | 52.47 |
GPT-4o | Closed-source | 48.40 | 28.80 | 49.64 | 59.12 | 56.83 | 62.81 | 40.88 | 52.92 | 65.18 | 70.11 | 57.10 | 53.80 |
GPT-o3 | Closed-source | 55.42 | 53.07 | 65.75 | 61.51 | 67.39 | 82.00 | 62.90 | 56.23 | 85.69 | 69.55 | 71.20 | 66.43 |
Gemini 1.5 Flash | Closed-source | 54.60 | 33.60 | 64.90 | 62.68 | 54.10 | 69.00 | 44.14 | 51.38 | 73.11 | 73.48 | 61.62 | 55.65 |
Gemini 1.5 Pro | Closed-source | 61.02 | 42.84 | 51.32 | 64.10 | 56.86 | 72.12 | 45.45 | 53.97 | 75.24 | 77.95 | 62.64 | 56.98 |
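The Avg column is consistent with an unweighted (macro) mean of the 11 category scores; for example, the Human Expert row averages to 94.82. The sketch below reproduces such a summary from per-question results, with the field names `category` and `correct` assumed for illustration rather than taken from the benchmark's schema.

```python
from collections import defaultdict


def summarize(results):
    """Compute per-category accuracy (%) and their unweighted (macro) mean.

    `results` is an iterable of dicts with assumed keys:
    'category' (one of the 11 GLIMPSE categories) and 'correct' (bool).
    """
    totals, hits = defaultdict(int), defaultdict(int)
    for r in results:
        totals[r["category"]] += 1
        hits[r["category"]] += int(r["correct"])
    per_category = {c: 100.0 * hits[c] / totals[c] for c in totals}
    macro_avg = sum(per_category.values()) / len(per_category) if per_category else 0.0
    return per_category, macro_avg
```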
Key Findings
- Substantial Gap: Even the best model (GPT-o3) achieves only 66.43% accuracy compared to 94.82% human performance
- Category Variations: Models perform relatively better on Scene Context Awareness and Forensic Analysis but struggle with Temporal Reasoning and Trajectory Analysis
- Consistency Challenge: Large performance variations across categories indicate inconsistent video understanding capabilities
Getting Started
Please refer to our GitHub repository for evaluation instructions.
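If the benchmark is hosted on the Hugging Face Hub, loading it might look like the sketch below; the repository ID, split, and column names are placeholders, so check the dataset card and GitHub repo for the actual ones.

```python
from datasets import load_dataset  # pip install datasets

# Placeholder repository ID; replace with the actual GLIMPSE dataset ID.
ds = load_dataset("ORG_NAME/GLIMPSE")
print(ds)  # inspect the available splits and column names

first_split = next(iter(ds.values()))
print(first_split[0])  # one example: video reference, question, options, answer, ...
```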
Citation
If you find our benchmark useful in your research, please consider citing us:
@misc{zhou2025glimpselargevisionlanguagemodels,
  title={GLIMPSE: Do Large Vision-Language Models Truly Think With Videos or Just Glimpse at Them?},
  author={Yiyang Zhou and Linjie Li and Shi Qiu and Zhengyuan Yang and Yuyang Zhao and Siwei Han and Yangfan He and Kangqi Li and Haonian Ji and Zihao Zhao and Haibo Tong and Lijuan Wang and Huaxiu Yao},
  year={2025},
  eprint={2507.09491},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2507.09491},
}