---
datasets:
  - StaticEmbodiedBench
language:
  - en
tags:
  - embodied-AI
  - vlm
  - vision-language
  - multiple-choice
license: mit
pretty_name: StaticEmbodiedBench
task_categories:
  - visual-question-answering
---

## 📘 Dataset Description

StaticEmbodiedBench is a dataset for evaluating vision-language models on embodied intelligence tasks, featured on the OpenCompass leaderboard.

It covers three key capabilities:

- Macro Planning: Decomposing a complex task into a sequence of simpler subtasks.
- Micro Perception: Performing concrete simple tasks such as spatial understanding and fine-grained perception.
- Stage-wise Reasoning: Deciding the next action based on the agent's current state and perceptual inputs.

Each sample is also labeled with a visual perspective:

- First-Person View: The visual sensor is integrated with the agent, e.g., mounted on the end-effector.
- Third-Person View: The visual sensor is separate from the agent, e.g., a top-down or observer view.

This release includes 200 open-source samples from the full dataset, provided for public research and benchmarking purposes.
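For quick inspection outside of VLMEvalKit, the samples can be browsed with the Hugging Face `datasets` library. The snippet below is a minimal sketch: the repository id is a placeholder, and the split and column names are assumptions that depend on how the files are actually stored, so check the printed schema before relying on specific fields.

```python
# Minimal sketch for browsing the open-source samples with the `datasets` library.
# The repository id below is a placeholder and the split name is an assumption;
# inspect the printed schema before relying on specific column names.
from datasets import load_dataset

ds = load_dataset("<namespace>/StaticEmbodiedBench", split="train")  # placeholder repo id

print(ds)           # number of rows and column names
print(ds.features)  # schema of a sample
print(ds[0])        # first sample
```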


## 💡 Usage

This dataset is fully supported by VLMEvalKit.
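If VLMEvalKit is not installed yet, an editable install from source is the usual route. The commands below are a sketch; treat the VLMEvalKit repository (https://github.com/open-compass/VLMEvalKit) as the authoritative, up-to-date installation guide.

```bash
# Sketch of a from-source installation; see the VLMEvalKit README for current instructions.
git clone https://github.com/open-compass/VLMEvalKit.git
cd VLMEvalKit
pip install -e .
```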

### 🔧 Evaluate with VLMEvalKit

Registered dataset names:

- `StaticEmbodiedBench`: standard evaluation
- `StaticEmbodiedBench_circular`: circular evaluation, in which each multiple-choice question is evaluated multiple times with its answer options rotated

To run evaluation in VLMEvalKit:

```bash
python run.py --data StaticEmbodiedBench --model <your_model_name> --verbose
```

For circular evaluation, simply use:

```bash
python run.py --data StaticEmbodiedBench_circular --model <your_model_name> --verbose
```
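
Both settings can also be combined into a single run, assuming your version of `run.py` accepts multiple dataset names after `--data` (recent VLMEvalKit releases do):

```bash
# Evaluate the standard and circular settings in one run;
# --data accepts multiple dataset names in recent VLMEvalKit versions.
python run.py --data StaticEmbodiedBench StaticEmbodiedBench_circular --model <your_model_name> --verbose
```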

## 📚 Citation

If you use this dataset in your research, please cite it as follows:

```bibtex
@misc{xiao2025staticpluggedmakeembodied,
      title={Static and Plugged: Make Embodied Evaluation Simple},
      author={Jiahao Xiao and Jianbo Zhang and BoWen Yan and Shengyu Guo and Tongrui Ye and Kaiwei Zhang and Zicheng Zhang and Xiaohong Liu and Zhengxue Cheng and Lei Fan and Chuyi Li and Guangtao Zhai},
      year={2025},
      eprint={2508.06553},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2508.06553},
}
```