---
language:
  - en
license: cc-by-nc-4.0
size_categories:
  - 1K<n<10K
task_categories:
  - visual-question-answering
dataset_info:
  features:
    - name: image_index
      dtype: string
    - name: image
      dtype: image
    - name: q_index
      dtype: int64
    - name: question
      dtype: string
    - name: answer
      dtype: string
    - name: answer_type
      dtype: string
  splits:
    - name: train
      num_bytes: 246230145
      num_examples: 501
  download_size: 106490728
  dataset_size: 246230145
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# Omni3D-Bench

This repository contains the Omni3D-Bench dataset introduced in the paper [Visual Agentic AI for Spatial Reasoning with a Dynamic API](https://arxiv.org/abs/2502.06787). Omni3D-Bench contains 500 challenging (image, question, answer) tuples over diverse, real-world scenes sourced from Omni3D, targeting complex 3D spatial reasoning.

View samples from the dataset here.

The dataset is released under the Creative Commons Attribution-NonCommercial 4.0 (CC BY-NC 4.0) license.

## Usage

The benchmark can be accessed with the following code:

```python
from datasets import load_dataset

dataset = load_dataset("dmarsili/Omni3D-Bench")
```
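Each sample exposes the fields listed under Annotations below. As a minimal sketch (using only the field names from the dataset schema), a single sample can be inspected like this:

```python
# The dataset ships a single "train" split.
sample = dataset["train"][0]

print(sample["question"])     # natural-language spatial query
print(sample["answer"])       # ground-truth answer, stored as a string
print(sample["answer_type"])  # expected answer type: "int", "float", or "str"

sample["image"].show()        # images are decoded as PIL Images
```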

We additionally provide a `.zip` file containing all images and annotations.

## Annotations

Samples in Omni3D-Bench consist of images, questions, and ground-truth answers. Samples can be loaded as Python dictionaries in the following format:

```
<!-- annotations.json -->
{
    "questions": [
        {
            "image_index"    : str,            // image ID
            "question_index" : str,            // question ID
            "image"          : PIL Image,      // image for query
            "question"       : str,            // query
            "answer_type"    : str,            // expected answer type: {int, float, str}
            "answer"         : str|int|float   // ground-truth response to the query
        },
        {
            ...
        },
        ...
    ]
}
```
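In the Hugging Face release, `answer` is stored as a string regardless of `answer_type`. As a minimal sketch (the `parse_answer` helper below is hypothetical, not part of the dataset), answers can be coerced back to their declared type before scoring:

```python
from datasets import load_dataset


def parse_answer(raw: str, answer_type: str):
    """Coerce a stored string answer back to its declared type.

    `answer_type` is one of "int", "float", or "str" (see the schema above).
    """
    if answer_type == "int":
        # Parse via float first in case integer answers are stored as "3.0".
        return int(float(raw))
    if answer_type == "float":
        return float(raw)
    return raw  # "str" answers are compared as plain text


sample = load_dataset("dmarsili/Omni3D-Bench")["train"][0]
ground_truth = parse_answer(sample["answer"], sample["answer_type"])
```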

## Citation

If you use the Omni3D-Bench dataset in your research, please cite it with the following BibTeX entry:

```bibtex
@misc{marsili2025visualagenticaispatial,
    title={Visual Agentic AI for Spatial Reasoning with a Dynamic API},
    author={Damiano Marsili and Rohun Agrawal and Yisong Yue and Georgia Gkioxari},
    year={2025},
    eprint={2502.06787},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    url={https://arxiv.org/abs/2502.06787},
}
```