---
dataset_info:
  features:
  - name: idx
    dtype: int64
  - name: video_path
    dtype: string
  - name: question
    dtype: string
  - name: choices
    struct:
    - name: a
      dtype: string
    - name: b
      dtype: string
    - name: c
      dtype: string
    - name: d
      dtype: string
  - name: answer
    sequence: string
  - name: choice_type
    dtype: string
  - name: video_source
    dtype: string
  - name: video_type
    dtype: string
  - name: frame_number
    dtype: int64
  - name: video_time
    dtype: float64
  - name: fps
    dtype: float64
  - name: box
    sequence:
      sequence: int64
  - name: mask
    list:
    - name: counts
      dtype: string
    - name: size
      sequence: int64
  - name: point
    sequence:
      sequence: int64
  splits:
  - name: test
    num_bytes: 4578328
    num_examples: 3277
  download_size: 2933575
  dataset_size: 4578328
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
task_categories:
- video-text-to-text
---
# EOC-Bench: Can MLLMs Identify, Recall, and Forecast Objects in an Egocentric World?
## Overview
We introduce EOC-Bench, an innovative benchmark designed to systematically evaluate object-centric embodied cognition in dynamic egocentric scenarios. Specifically, EOC-Bench features 3,277 meticulously annotated QA pairs organized into three temporal categories: Past, Present, and Future, covering 11 fine-grained evaluation dimensions and 3 visual object referencing types. To ensure thorough assessment, we develop a mixed-format human-in-the-loop annotation framework with four types of questions, and we design a novel multi-scale temporal accuracy metric for open-ended temporal evaluation.
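To make the feature list in the metadata above easier to read, here is a minimal sketch of what a single record might look like. Field names follow the dataset card's schema; all values, paths, and the `x1, y1, x2, y2` box convention are illustrative assumptions, not actual dataset contents.

```python
# Hypothetical example record matching the EOC-Bench feature schema.
# Field names come from the dataset card; every value below is made up.
example = {
    "idx": 0,
    "video_path": "videos/example_001.mp4",          # hypothetical path
    "question": "Where was the mug before it was moved?",
    "choices": {                                      # struct of four options
        "a": "On the table",
        "b": "In the sink",
        "c": "On the shelf",
        "d": "In the cabinet",
    },
    "answer": ["a"],                                  # sequence of strings
    "choice_type": "multiple-choice",
    "video_source": "example-source",
    "video_type": "egocentric",
    "frame_number": 120,
    "video_time": 4.0,
    "fps": 30.0,
    "box": [[100, 150, 220, 300]],                    # nested int sequence; x1,y1,x2,y2 assumed
    "mask": [{"counts": "example-RLE-string", "size": [480, 640]}],
    "point": [[160, 225]],                            # nested int sequence
}

def check_schema(rec):
    """Minimal structural check against the declared feature list."""
    assert set(rec["choices"]) == {"a", "b", "c", "d"}
    assert all(isinstance(a, str) for a in rec["answer"])
    assert all(isinstance(b, list) for b in rec["box"])
    assert all("counts" in m and "size" in m for m in rec["mask"])
    return True
```

A record shaped like this passes `check_schema(example)`; the helper is only a sanity check for the nesting, not part of the released dataset.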
## Task Definition
EOC-Bench organizes questions into three temporally grounded categories: Past, Present, and Future, spanning 11 fine-grained evaluation dimensions in total.
## Evaluation
Please see our GitHub repository for the evaluation code and instructions.