Shangding-Gu committed on
Commit ad4d157 · verified · 1 Parent(s): 0049a8d

Update README.md

Files changed (1): README.md (+85 -1)
README.md CHANGED
@@ -1,6 +1,91 @@
 
+ <div align="center">
+ <a href="https://github.com/SafeRL-Lab/Open-Space-Reasoning">
+ <img src="./figures/logo-m4r.png" alt="Logo" width="60%">
+ </a>
+
+ <h1 align="center" style="font-size: 30px;"><strong><em>M4R</em></strong>: Measuring Massive Multimodal Understanding and Reasoning in Open Space</h1>
+ <p align="center">
+ <!-- <a href="./docs/M4R_paper.pdf">Paper</a> -->
+ <!-- · -->
+ <a href="https://open-space-reasoning.github.io/">Website</a>
+ ·
+ <a href="https://github.com/SafeRL-Lab/Open-Space-Reasoning/">Code</a>
+ ·
+ <a href="https://open-space-reasoning.github.io/#leaderboard-land-air">Leaderboard</a>
+ ·
+ <a href="https://huggingface.co/datasets/Open-Space-Reasoning/Benchmark">Dataset</a>
+ ·
+ <a href="https://huggingface.co/datasets/Open-Space-Reasoning/M4R-zip">Dataset-Zip</a>
+ ·
+ <a href="https://github.com/SafeRL-Lab/Open-Space-Reasoning/issues">Issues</a>
+ </p>
+ </div>
+
 ## Project Homepage:
 https://open-space-reasoning.github.io/
 
+ ## About the Dataset:
+ This benchmark includes approximately 2,000 videos and 19,000 human-annotated question-answer pairs, covering a wide range of reasoning tasks (as shown in Figure 1). All annotations were performed by highly educated annotators, each holding at least a master's degree in an engineering-related field such as mathematics or computer science. The dataset features a variety of video lengths, categories, and frame counts, and spans three primary open-space reasoning scenarios: **land space**, **water space**, and **air space**. An overview of the dataset's characteristics is shown in Figure 2, which illustrates the distributions of video duration, domain coverage, and reasoning style. During annotation, we first design the hard-level tasks and label each question with its ground-truth answer; based on these, we then construct the medium- and easy-level tasks. The difficulty levels differ primarily in the number and types of answer choices. Details of the annotation procedure and difficulty levels are provided in our [paper](https://open-space-reasoning.github.io/static/papers/M4R_paper.pdf).
+
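+ To make the taxonomy concrete, the sketch below enumerates the task categories implied by the description above. It is only an illustration: the variable names, the flat cross-product layout, and the assumption that every difficulty level applies to every combination are assumptions, not the dataset's actual schema:
+
+ ```python
+ from itertools import product
+
+ # Dimension values as stated in the dataset description above.
+ # Names and layout are illustrative, not official schema fields.
+ spaces = ["land", "water", "air"]
+ video_lengths = ["short", "medium", "long"]
+ reasoning_types = ["temporal", "spatial", "intent"]
+ difficulties = ["easy", "medium", "hard"]  # levels differ in number/types of answer choices
+
+ # Every (space, length, reasoning, difficulty) combination, assuming a full cross product.
+ categories = list(product(spaces, video_lengths, reasoning_types, difficulties))
+ print(len(categories))  # 3 * 3 * 3 * 3 = 81
+ ```
+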
+ ### Dataset Format:
+
+ <div align="center">
+ <img src="./figures/qa-example.png" width="85%"/>
+ </div>
+ <div align="center">
+ <center style="color:#000000;text-decoration:underline">Figure 1. A question-and-answer example: for each open-space reasoning setting, we include three
+ video lengths: short, medium, and long. Each video length includes tasks designed to
+ evaluate temporal reasoning, spatial reasoning, and intent reasoning.</center>
+ </div>
+
+
+ ### Dataset Distribution:
+ <div align="center">
+ <img src="./figures/data_distribution.png" width="85%"/>
+ </div>
+ <div align="center">
+ <center style="color:#000000;text-decoration:underline">Figure 2. Distribution of video and task properties in the M4R benchmark.</center>
+ </div>
+
+
+
+ ### Three Space Settings:
+
+ <div align="center">
+ <img src="./figures/three-example-scenarios.png" width="85%"/>
+ </div>
+ <div align="center">
+ <center style="color:#000000;text-decoration:underline">Figure 3. Examples of multimodal understanding and reasoning in open-space scenarios.</center>
+ </div>
+
+ ### Reasoning Settings:
+
+ <div align="center">
+ <img src="./figures/reasoning-settings.png" width="85%"/>
+ </div>
+ <div align="center">
+ <center style="color:#000000;text-decoration:underline">Figure 4. Examples of reasoning question settings in M4R across three key reasoning types: temporal
+ reasoning, which involves understanding event sequences and motion over time; spatial reasoning,
+ which focuses on relative positioning and orientation in space; and intent reasoning, which evaluates
+ understanding of goal-directed behaviors and decision-making in dynamic environments.</center>
+ </div>
+
+ ### An Example in the Land Space Setting:
+
+ <div align="center">
+ <img src="./figures/land-space-examples.png" width="85%"/>
+ </div>
+ <div align="center">
+ <center style="color:#000000;text-decoration:underline">Figure 5. Land-space traffic accident scenarios for open-space video understanding and reasoning, including
+ <span style="color:cyan;">intersection collisions</span>,
+ <span style="color:blue;">urban road accidents</span>,
+ <span style="color:gray;">nighttime incidents</span>,
+ <span style="color:orange;">rural road accidents</span>,
+ <span style="color:pink;">snow-covered road collisions</span>, and
+ <span style="color:green;">freeway accidents</span>.</center>
+ </div>
+
+
 ## Download Dataset
 
 You can download the dataset directly from our [Hugging Face repository](https://huggingface.co/datasets/Open-Space-Reasoning/Benchmark) via:
@@ -14,7 +99,6 @@ If you encounter any issues during the download, we also provide a zipped versio
 [Download Dataset (ZIP)](https://huggingface.co/datasets/Open-Space-Reasoning/M4R-zip)
 
 
-
 **Note**:
 If you encounter any issues, please visit our [GitHub page](https://github.com/SafeRL-Lab/m4r/tree/main), where we provide more information about the project and detailed instructions for downloading and using the datasets.
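
For a programmatic alternative to the links above, the sketch below assumes the standard `huggingface_hub` client. The README's own download command sits in unchanged lines that this diff does not show, so treat this as a plausible route rather than the project's official one:

```python
# Sketch: fetch the M4R benchmark from the Hugging Face Hub.
# Repo ids are taken from the links above; swap in
# "Open-Space-Reasoning/M4R-zip" for the zipped mirror if needed.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="Open-Space-Reasoning/Benchmark",
    repo_type="dataset",
)
print(f"Dataset downloaded to: {local_path}")
```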
104