---
datasets:
- StaticEmbodiedBench
language:
- en
tags:
- embodied-AI
- vlm
- vision-language
- multiple-choice
license: mit
pretty_name: StaticEmbodiedBench
task_categories:
- visual-question-answering
---
## 📘 Dataset Description
**StaticEmbodiedBench** is a dataset for evaluating vision-language models on embodied intelligence tasks, as featured in the [OpenCompass leaderboard](https://staging.opencompass.org.cn/embodied-intelligence/rank/brain).
It covers three key capabilities:
- **Macro Planning**: Decomposing a complex task into a sequence of simpler subtasks.
- **Micro Perception**: Performing concrete, low-level tasks such as spatial understanding and fine-grained perception.
- **Stage-wise Reasoning**: Deciding the next action based on the agent’s current state and perceptual inputs.
Each sample is also labeled with a visual perspective:
- **First-Person View**: The visual sensor is integrated with the agent, e.g., mounted on the end-effector.
- **Third-Person View**: The visual sensor is separate from the agent, e.g., top-down or observer view.
This release includes **200 open-source samples** from the full dataset, provided for public research and benchmarking purposes.
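Since every sample carries both a capability label and a viewpoint label, it can be useful to tally the released subset along those two axes. A minimal sketch in Python, using toy records; the field names `capability` and `view` are assumptions for illustration and may differ from the actual column names:

```python
from collections import Counter

# Toy records mimicking the dataset's labeling scheme.
# The keys "capability" and "view" are hypothetical field names.
samples = [
    {"capability": "Macro Planning", "view": "First-Person"},
    {"capability": "Micro Perception", "view": "Third-Person"},
    {"capability": "Stage-wise Reasoning", "view": "First-Person"},
]

# Count samples per (capability, view) pair.
counts = Counter((s["capability"], s["view"]) for s in samples)
for (cap, view), n in counts.items():
    print(f"{cap} / {view}: {n}")
```

The same grouping applies unchanged once the records are loaded from the actual release files.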
---
## 💡 Usage
This dataset is fully supported by [VLMEvalKit](https://github.com/open-compass/VLMEvalKit).
### 🔧 Evaluate with VLMEvalKit
Registered dataset names:
- `StaticEmbodiedBench` — for standard evaluation
- `StaticEmbodiedBench_circular` — for circular evaluation (each multiple-choice question is evaluated multiple times with the answer options rotated)
To run evaluation in VLMEvalKit:
```bash
python run.py --data StaticEmbodiedBench --model <your_model_name> --verbose
```
For circular evaluation, use:
```bash
python run.py --data StaticEmbodiedBench_circular --model <your_model_name> --verbose
```
## 📚 Citation
If you use this dataset in your research, please cite it as follows:
```bibtex
@misc{xiao2025staticpluggedmakeembodied,
  title={Static and Plugged: Make Embodied Evaluation Simple},
  author={Jiahao Xiao and Jianbo Zhang and BoWen Yan and Shengyu Guo and Tongrui Ye and Kaiwei Zhang and Zicheng Zhang and Xiaohong Liu and Zhengxue Cheng and Lei Fan and Chuyi Li and Guangtao Zhai},
  year={2025},
  eprint={2508.06553},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2508.06553},
}
```