Shusheng Yang committed
Commit a9fde41
1 Parent(s): 133eb64
Files changed (5)
  1. README.md +9 -44
  2. arkitscenes.zip +3 -0
  3. scannet.zip +3 -0
  4. scannetpp.zip +3 -0
  5. test-00000-of-00001.parquet +3 -0
README.md CHANGED
@@ -27,19 +27,16 @@ This repository contains the visual spatial intelligence benchmark (VSI-Bench),
 
 
 ## Files
-The `test-00000-of-00001.parquet` contains the full dataset annotations and images pre-loaded for processing with HF Datasets. It can be loaded as follows:
+The `test-00000-of-00001.parquet` file contains the complete dataset annotations and pre-loaded images, ready for processing with HF Datasets. It can be loaded using the following code:
 
-<!-- @shusheng -->
 ```python
 from datasets import load_dataset
 vsi_bench = load_dataset("nyu-visionx/VSI-Bench")
 ```
-
-Additionally, we provide the compressed raw videos in `*.zip`.
-
+Additionally, we provide the videos in `*.zip`.
+
 
 ## Dataset Description
-VSI-Bench quantitatively evaluate the visual-spatial intelligence of MLLMs from egocentric video. VSI-Bench comprises over 5,000 question-answer pairs derived from 288 real videos. These videos are sourced from the validation sets of the public indoor 3D scene reconstruction datasets `ScanNet`, `ScanNet++`, and `ARKitScenes` and represent diverse environments -- including residential spaces, professional settings (e.g., offices, labs), and industrial spaces (e.g., factories) and multiple geographic regions. Repurposing these existing 3D reconstruction and understanding datasets offers accurate object-level annotations which we use in question generation and could enable future study into the connection between MLLMs and 3D reconstruction.
+VSI-Bench quantitatively evaluates the visual-spatial intelligence of MLLMs from egocentric video. VSI-Bench comprises over 5,000 question-answer pairs derived from 288 real videos. These videos are sourced from the validation sets of the public indoor 3D scene reconstruction datasets `ScanNet`, `ScanNet++`, and `ARKitScenes`, and represent diverse environments -- including residential spaces, professional settings (e.g., offices, labs), and industrial spaces (e.g., factories), as well as multiple geographic regions. By repurposing these existing 3D reconstruction and understanding datasets, VSI-Bench benefits from accurate object-level annotations, which are used in question generation and could support future studies exploring the connection between MLLMs and 3D reconstruction.
 
 The dataset contains the following fields:
-
@@ -47,49 +44,17 @@ The dataset contains the following fields:
 | :--------- | :---------- |
 | `idx` | Global index of the entry in the dataset |
 | `dataset` | Video source: `scannet`, `arkitscenes` or `scannetpp` |
+| `scene_name` | Scene (video) name for each question-answer pair |
 | `question_type` | The type of task for question |
 | `question` | Question asked about the video |
-| `options` | Answer choices for the question (only for multiple choice questions) |
-| `ground_truth` | Correct answer to the question |
-| `video_suffix` | Suffix of the video |
-
+| `options` | Choices for the question (only for multiple choice questions) |
+| `ground_truth` | Ground truth answer for the question |
 
-<br>
-
-
-### Example Code
-
-<!-- @shusheng -->
-```python
-import pandas as pd
-# Load the CSV file into a DataFrame
-df = pd.read_csv('cv_bench_results.csv')
-# Define a function to calculate accuracy for a given source
-def calculate_accuracy(df, source):
-    source_df = df[df['source'] == source]
-    accuracy = source_df['result'].mean()  # Assuming 'result' is 1 for correct and 0 for incorrect
-    return accuracy
-# Calculate accuracy for each source
-accuracy_2d_ade = calculate_accuracy(df, 'ADE20K')
-accuracy_2d_coco = calculate_accuracy(df, 'COCO')
-accuracy_3d_omni = calculate_accuracy(df, 'Omni3D')
-# Calculate the accuracy for each type
-accuracy_2d = (accuracy_2d_ade + accuracy_2d_coco) / 2
-accuracy_3d = accuracy_3d_omni
-# Compute the combined accuracy as specified
-combined_accuracy = (accuracy_2d + accuracy_3d) / 2
-# Print the results
-print(f"CV-Bench Accuracy: {combined_accuracy:.4f}")
-print()
-print(f"Type Accuracies:")
-print(f"2D Accuracy: {accuracy_2d:.4f}")
-print(f"3D Accuracy: {accuracy_3d:.4f}")
-print()
-print(f"Source Accuracies:")
-print(f"ADE20K Accuracy: {accuracy_2d_ade:.4f}")
-print(f"COCO Accuracy: {accuracy_2d_coco:.4f}")
-print(f"Omni3D Accuracy: {accuracy_3d_omni:.4f}")
-```
+## Evaluation
+
+VSI-Bench evaluates performance using two metrics: for multiple-choice questions, we use `Accuracy`, calculated based on exact matches. For numerical-answer questions, we introduce a new metric, `MRA (Mean Relative Accuracy)`, to assess how closely model predictions align with ground truth values.
+
+We provide an out-of-the-box evaluation of VSI-Bench in our [GitHub repository](https://github.com/vision-x-nyu/thinking-in-space), including the [metrics](https://github.com/vision-x-nyu/thinking-in-space/blob/main/lmms_eval/tasks/vsibench/utils.py#L109C1-L155C36) implementation used in our framework. For further details, users can refer to our paper and GitHub repository.
 
 ## Citation
 
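As a quick check of the fields in the updated table, the parquet data can be loaded and a row inspected. A minimal sketch, assuming the split is named `test` (matching the `test-00000-of-00001.parquet` filename) and that `options` is empty for numerical-answer questions:

```python
from datasets import load_dataset

# Load the single parquet split; the split name "test" is an assumption
# based on the test-00000-of-00001.parquet filename.
vsi_bench = load_dataset("nyu-visionx/VSI-Bench", split="test")

# Each row carries the fields listed in the table above.
sample = vsi_bench[0]
print(sample["dataset"], sample["scene_name"], sample["question_type"])
print(sample["question"])
print(sample["options"])       # assumed empty for numerical-answer questions
print(sample["ground_truth"])
```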
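The reference metric implementations live in the linked `utils.py`. As a rough sketch of the two metrics described in the new Evaluation section (the 0.50-0.95 threshold sweep is an assumption drawn from the paper's description of MRA, not the repository's code):

```python
def exact_match_accuracy(pred: str, gt: str) -> float:
    """Multiple-choice questions: score 1.0 on an exact match, else 0.0."""
    return float(pred.strip().lower() == gt.strip().lower())

def mean_relative_accuracy(pred: float, gt: float) -> float:
    """Numerical-answer questions: average, over a sweep of confidence
    thresholds, whether the prediction's relative error stays within each
    tolerance. Assumes gt != 0; the sweep below is an assumption, not the
    repository's implementation."""
    thresholds = [0.50 + 0.05 * i for i in range(10)]  # 0.50 ... 0.95
    rel_err = abs(pred - gt) / abs(gt)
    return sum(rel_err < (1.0 - t) for t in thresholds) / len(thresholds)
```

For example, under this sketch a prediction of 4.2 against a ground truth of 4.0 has 5% relative error, passes every threshold except 0.95, and scores 0.9.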
arkitscenes.zip ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:005232fa20ccfa287255ca96c4d0c0c0863c24bdc1a40a89165b75f509bf4907
+size 1812227830
scannet.zip ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:787b0c061bde5c1f5e076012c1239340fdb1330787c644977c7cad5cdbe1d548
+size 2885230719
scannetpp.zip ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:164b2314107e070c7d8a652897404904adf36a8868c2293be04382727d9a19be
+size 1030992424
test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:64eb8a4ff3c705038d2c489fb97345c19e33f0a297f440a168e6940e76d329ca
+size 160845
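Note that the `*.zip` entries above are git-LFS pointer files (version/oid/size), not the archives themselves. A minimal sketch of fetching and unpacking one of the video archives with `huggingface_hub` (the output directory is illustrative):

```python
import zipfile

from huggingface_hub import hf_hub_download

# Download the LFS-backed archive from the dataset repo, then unpack it.
path = hf_hub_download(
    repo_id="nyu-visionx/VSI-Bench",
    filename="scannet.zip",
    repo_type="dataset",
)
with zipfile.ZipFile(path) as zf:
    zf.extractall("videos/scannet")
```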