
Physical AI Smart Spaces Dataset

Overview

A comprehensive, annotated dataset for multi-camera tracking and 2D/3D object detection, synthetically generated with Omniverse.

The dataset comprises over 250 hours of video from nearly 1,500 cameras covering indoor scenes in warehouses, hospitals, retail spaces, and more. The videos are time-synchronized to support tracking people across multiple cameras using feature representations, and the data contains no personal information.
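The files can be fetched directly from the Hugging Face Hub. Below is a minimal download sketch using huggingface_hub; the allow_patterns filter assumes the MTMC_Tracking_2025 folder name from the directory structure described later on this card, so adjust it to the subset you need:

from huggingface_hub import snapshot_download

# Fetch a local copy of the dataset repository. allow_patterns restricts the
# download to one split (an assumed folder layout); drop it to download
# everything (several hundred GB in total, see Dataset Quantification below).
local_dir = snapshot_download(
    repo_id="nvidia/PhysicalAI-SmartSpaces",
    repo_type="dataset",
    allow_patterns=["MTMC_Tracking_2025/*"],
)
print("Dataset downloaded to:", local_dir)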

Dataset Description

Dataset Owner(s)

NVIDIA

Dataset Creation Date

We started creating this dataset in December 2023. The first version was completed and released as part of the 8th AI City Challenge, held in conjunction with CVPR 2024.

Dataset Characterization

  • Data Collection Method: Synthetic
  • Labeling Method: Automatic, with Isaac Sim

Video Format

  • Video Standard: MP4 (H.264)
  • Video Resolution: 1080p
  • Video Frame rate: 30 FPS

Ground Truth Format (MOTChallenge) for MTMC_Tracking_2024

Annotations are provided as plain text, with one object instance per line in the following format:

<camera_id> <obj_id> <frame_id> <xmin> <ymin> <width> <height> <xworld> <yworld>
  • <camera_id>: Numeric identifier for the camera.
  • <obj_id>: Consistent numeric identifier for each object across cameras.
  • <frame_id>: Frame index starting from 0.
  • <xmin> <ymin> <width> <height>: Axis-aligned bounding box coordinates in pixels (top-left origin).
  • <xworld> <yworld>: Global coordinates (projected bottom points of objects) based on provided camera matrices.

The video file and calibration (camera matrix and homography) are provided for each camera view.
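Below is a minimal parsing sketch for this format in plain Python (the function name and returned layout are illustrative, not part of the dataset tooling):

from collections import defaultdict

def load_mot_ground_truth(path):
    """Parse a MOTChallenge-style ground truth file into per-camera records."""
    tracks = defaultdict(list)
    with open(path) as f:
        for line in f:
            fields = line.split()
            camera_id, obj_id, frame_id = (int(v) for v in fields[:3])
            xmin, ymin, width, height, xworld, yworld = (float(v) for v in fields[3:9])
            tracks[camera_id].append({
                "obj_id": obj_id,
                "frame_id": frame_id,
                "bbox": (xmin, ymin, width, height),  # pixels, top-left origin
                "world": (xworld, yworld),            # global ground-plane coordinates
            })
    return tracks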

Calibration and ground truth files in the updated 2025 JSON format are now also included for each scene.

Note: some calibration fields (such as camera coordinates, camera directions, and scale factors) are not available for the 2024 dataset due to limitations of the original data.

Directory Structure for MTMC_Tracking_2025

  • videos/: Video files.
  • ground_truth.json: Detailed ground truth annotations (see below).
  • calibration.json: Camera calibration and metadata.
  • map.png: Visualization map in top-down view.

Ground Truth Format (JSON) for MTMC_Tracking_2025

Annotations per frame:

{
  "<frame_id>": [
    {
      "object_type": "<class_name>",
      "object_id": <int>,
      "3d_location": [x, y, z],
      "3d_bounding_box_scale": [w, l, h],
      "3d_bounding_box_rotation": [pitch, roll, yaw],
      "2d_bounding_box_visible": {
        "<camera_id>": [xmin, ymin, xmax, ymax]
      }
    }
  ]
}
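A minimal sketch for loading and iterating these annotations (assuming the ground_truth.json file from the directory structure above; key names follow the schema shown):

import json

with open("ground_truth.json") as f:
    ground_truth = json.load(f)

# Frame IDs are JSON object keys (strings); iterate them in numeric order.
for frame_id in sorted(ground_truth, key=int):
    for obj in ground_truth[frame_id]:
        x, y, z = obj["3d_location"]
        w, l, h = obj["3d_bounding_box_scale"]
        # A 2D box is listed only for cameras in which the object is visible.
        for camera_id, (xmin, ymin, xmax, ymax) in obj["2d_bounding_box_visible"].items():
            pass  # e.g. match against detections from videos/<camera_id>.mp4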

Calibration Format (JSON) for MTMC_Tracking_2025

Contains detailed calibration metadata per sensor:

{
  "calibrationType": "cartesian",
  "sensors": [
    {
      "type": "camera",
      "id": "<sensor_id>",
      "coordinates": {"x": float, "y": float},
      "scaleFactor": float,
      "translationToGlobalCoordinates": {"x": float, "y": float},
      "attributes": [
        {"name": "fps", "value": float},
        {"name": "direction", "value": float},
        {"name": "direction3d", "value": "float,float,float"},
        {"name": "frameWidth", "value": int},
        {"name": "frameHeight", "value": int}
      ],
      "intrinsicMatrix": [[f_x, 0, c_x], [0, f_y, c_y], [0, 0, 1]],
      "extrinsicMatrix": [[3×4 matrix]],
      "cameraMatrix": [[3×4 matrix]],
      "homography": [[3×3 matrix]]
    }
  ]
}
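As a sketch of how these matrices are typically used (standard pinhole projection, not dataset-specific tooling; the world point below is made up): cameraMatrix is the 3×4 projection that maps homogeneous world coordinates to homogeneous pixel coordinates.

import json
import numpy as np

with open("calibration.json") as f:
    calibration = json.load(f)

sensor = calibration["sensors"][0]
P = np.array(sensor["cameraMatrix"])  # 3x4 projection: intrinsics @ extrinsics

# Project a (made-up) world point into the image plane.
world_point = np.array([1.0, 2.0, 0.0, 1.0])  # homogeneous [X, Y, Z, 1]
u, v, w = P @ world_point
print("pixel coordinates:", (u / w, v / w))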

Evaluation

  • 2024 Edition: Evaluation is based on HOTA scores on the 2024 AI City Challenge server. Submission is currently disabled, as the ground truth of the test set is provided with this release.
  • 2025 Edition: The evaluation system and test set are forthcoming in the 2025 AI City Challenge.

Dataset Quantification

| Dataset | Annotation Type | Hours | Cameras | Object Classes & Counts | No. 3D Boxes | No. 2D Boxes | Total Size |
|---------|-----------------|-------|---------|-------------------------|--------------|--------------|------------|
| MTMC_Tracking_2024 | 2D bounding boxes, multi-camera tracking IDs | 212 | 953 | Person: 2,481 | 52M | 135M | 213 GB |
| MTMC_Tracking_2025 (Train & Validation only) | 2D & 3D bounding boxes, multi-camera tracking IDs | 42 | 504 | Person: 292; Forklift: 13; NovaCarter: 28; Transporter: 23; FourierGR1T2: 6; AgilityDigit: 1; Overall: 363 | 8.9M | 73M | 74 GB |

References

Please cite the following papers when using this dataset:

@InProceedings{Wang24AICity24,
author = {Shuo Wang and David C. Anastasiu and Zheng Tang and Ming-Ching Chang and Yue Yao and Liang Zheng and Mohammed Shaiqur Rahman and Meenakshi S. Arya and Anuj Sharma and Pranamesh Chakraborty and Sanjita Prajapati and Quan Kong and Norimasa Kobori and Munkhjargal Gochoo and Munkh-Erdene Otgonbold and Ganzorig Batnasan and Fady Alnajjar and Ping-Yang Chen and Jun-Wei Hsieh and Xunlei Wu and Sameer Satish Pusegaonkar and Yizhou Wang and Sujit Biswas and Rama Chellappa},
title = {The 8th {AI City Challenge}},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
note = {arXiv:2404.09432},
month = {June},
year = {2024},
}

@misc{Wang24BEVSUSHI,
author = {Yizhou Wang and Tim Meinhardt and Orcun Cetintas and Cheng-Yen Yang and Sameer Satish Pusegaonkar and Benjamin Missaoui and Sujit Biswas and Zheng Tang and Laura Leal-Taix{\'e}},
title = {{BEV-SUSHI}: {M}ulti-target multi-camera {3D} detection and tracking in bird's-eye view},
note = {arXiv:2412.00692},
year = {2024}
}

Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When this dataset is downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure it meets the requirements of the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI Concerns here.

Changelog

  • 2025-04-23: Added 2025-format calibration and ground truth JSON files to all MTMC_Tracking_2024 scenes.