
STRIDE-QA-Mini

Dataset Description

STRIDE-QA-Mini (SpatioTemporal Reasoning In Driving Environments for Visual Question Answering) is a compact subset of the STRIDE-QA corpus, built from real urban-driving footage collected by our in-house data-collection vehicles. It is designed for studying spatio-temporal reasoning in autonomous-driving scenes with Vision-Language Models (VLMs).

The dataset provides four-dimensional context (3-D space plus time) and frames every question in the ego-vehicle coordinate system, encouraging models to reason about where surrounding agents will be in the next one to three seconds, not merely what is visible at the current instant.

STRIDE-QA-Mini is structured around three successive design principles:

  1. Object-centric queries: The foundation layer asks questions about spatial relations and immediate interactions between pairs of non-ego objects, such as surrounding vehicles, pedestrians, and static infrastructure. These queries measure pure relational understanding that is independent of the ego vehicle.
  2. Ego-aware queries: Building on the object-centric layer, every question is phrased in the ego coordinate frame so that answers are directly actionable for planning and control.
  3. Prediction-oriented queries: Building on the ego-aware layer, we introduce an additional subset of queries that require the model to anticipate the ego vehicle's spatial relations and interactions 1–3 seconds ahead, pushing evaluation beyond static perception toward short-horizon motion forecasting. For example: "What is the likely separation in meters and heading (clock position: 12 = front, 3 = right, 6 = rear, 9 = left) between the ego vehicle and Region [1] after 3 seconds?"

Together, these elements make STRIDE-QA-Mini a concise yet demanding dataset: it challenges VLMs to handle not only what they see but also what they must predict, skills essential for safe and intelligent autonomous systems.
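
To make the three-layer design concrete, the sketch below shows what a single prediction-oriented sample might look like. The field names follow the Data Fields section further down; the ID, file names, and answer text are invented for illustration and are not actual dataset contents.

```python
# Hypothetical shape of one prediction-oriented sample (field names follow
# the "Data Fields" section below; concrete values are invented).
sample = {
    "id": "qa_000123",                    # unique sample ID (illustrative)
    "images": [                           # four consecutive frames; this field
        "frame_000.jpg", "frame_001.jpg", # exists only in the Ego-centric
        "frame_002.jpg", "frame_003.jpg", # Spatio-temporal QA category
    ],
    "conversations": [                    # VILA-style dialogue turns
        {"from": "human",
         "value": "What is the likely separation in meters and heading "
                  "(clock position) between the ego vehicle and Region [1] "
                  "after 3 seconds?"},
        {"from": "gpt",
         "value": "About 12 meters away at the 11 o'clock position."},
    ],
}
```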

Key Features

| Aspect | Details |
| --- | --- |
| Spatio-temporal focus | Questions probe object–object, ego–object, and future interaction reasoning. |
| Three QA categories | 1) Object-centric Spatial QA: relations between two external objects. 2) Ego-centric Spatial QA: relations between the ego vehicle and another object. 3) Ego-centric Spatio-temporal QA: future distance and orientation prediction tasks. |
| Driving domain | Real dash-cam footage collected in Tokyo (urban, suburban, highway, various weather). |
| Privacy aware | Faces and license plates are automatically blurred. |

Dataset Statistics

| Category | Source file | QA pairs |
| --- | --- | --- |
| Object-centric Spatial QA | object_centric_spatial_qa.json | 19,895 |
| Ego-centric Spatial QA | ego_centric_spatial_qa.json | 54,390 |
| Ego-centric Spatio-temporal QA | ego_centric_spatiotemporal_qa_short_answer.json | 28,935 |
| Images | images/*.jpg | 5,539 files |

Total QA pairs: 103,220
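
As a quick sanity check, the three QA files can be loaded directly with the standard json module. The sketch below assumes the files sit beside the images/ directory under a local STRIDE-QA-Mini/ folder and that each file holds a flat list of records; both the paths and the flat-list layout are assumptions, not a fixed API.

```python
import json
from pathlib import Path

# A minimal loading sketch, assuming the three QA files listed above sit
# in a local STRIDE-QA-Mini/ folder (paths are assumptions).
root = Path("STRIDE-QA-Mini")
files = [
    "object_centric_spatial_qa.json",
    "ego_centric_spatial_qa.json",
    "ego_centric_spatiotemporal_qa_short_answer.json",
]

records = []
for name in files:
    with open(root / name) as f:
        records.extend(json.load(f))  # assumes each file is a JSON list

# The table above reports 103,220 QA pairs in total (19,895 + 54,390 +
# 28,935); the record count may differ if a record holds multiple turns.
print(f"{len(records):,} records loaded")
```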

Data Fields

| Field | Type | Description |
| --- | --- | --- |
| id | str | Unique sample ID. |
| image | str | File name of the key frame used in the prompt. |
| images | list[str] | File names of the four consecutive image frames. Only available in the Ego-centric Spatio-temporal QA category. |
| conversations | list[dict] | Dialogue in VILA format ("from": "human" / "gpt"). |
| bbox | list[list[float]] | Bounding boxes [x₁, y₁, x₂, y₂] for referenced regions. |
| rle | list[dict] | COCO-style run-length masks for regions. |
| region | list[list[int]] | Region tags mentioned in the prompt. |
| qa_info | list | Metadata for each message turn in the dialogue. |
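
Putting these fields together, a minimal inspection script might look like the following. It is a sketch under several assumptions: records are stored as flat JSON lists, bbox and region entries align one-to-one, and the file names and record index are illustrative.

```python
import json
from PIL import Image, ImageDraw

# Inspect one record: print its dialogue and draw its bounding boxes on the
# key frame. File names and the record index are illustrative.
with open("STRIDE-QA-Mini/ego_centric_spatial_qa.json") as f:
    records = json.load(f)

rec = records[0]
img = Image.open(f"STRIDE-QA-Mini/images/{rec['image']}").convert("RGB")

draw = ImageDraw.Draw(img)
# Assumption: bbox and region entries correspond one-to-one.
for (x1, y1, x2, y2), tags in zip(rec["bbox"], rec["region"]):
    draw.rectangle([x1, y1, x2, y2], outline="red", width=3)
    draw.text((x1, max(0, y1 - 12)), f"Region {tags}", fill="red")

for turn in rec["conversations"]:  # VILA format: alternating human/gpt turns
    print(f"{turn['from']}: {turn['value']}")

img.save("preview.png")
```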

Privacy Protection

Human faces and license plates in STRIDE-QA-Mini images were automatically anonymized using the Dashcam Anonymizer. Note, however, that this process cannot guarantee perfect anonymization.

License

STRIDE-QA-Mini is released under the CC BY-NC-SA 4.0 license.

Acknowledgements

This dataset is based on results obtained from a project, JPNP20017, subsidized by the New Energy and Industrial Technology Development Organization (NEDO).

We would like to acknowledge the use of the following open-source repositories:

- Dashcam Anonymizer (used to blur faces and license plates; see Privacy Protection above)
