EgoExOR: An Egocentric–Exocentric Operating Room Dataset for Comprehensive Understanding of Surgical Activities

Data | Code | Model

Official code for the paper "EgoExOR: An Egocentric–Exocentric Operating Room Dataset for Comprehensive Understanding of Surgical Activities", submitted to the NeurIPS 2025 Datasets & Benchmarks Track.

Operating rooms (ORs) demand precise coordination among surgeons, nurses, and equipment in a fast-paced, occlusion-heavy environment, necessitating advanced perception models to enhance safety and efficiency. Existing datasets either provide partial egocentric views or sparse exocentric multi-view context, but do not explore the comprehensive combination of both. We introduce EgoExOR, the first OR dataset and accompanying benchmark to fuse first-person and third-person perspectives. Spanning 94 minutes (84,553 frames at 15 FPS) of two emulated spine procedures, Ultrasound-Guided Needle Insertion and Minimally Invasive Spine Surgery, EgoExOR integrates egocentric data (RGB, gaze, hand tracking, audio) from wearable glasses, exocentric RGB and depth from RGB-D cameras, and ultrasound imagery. Its detailed scene graph annotations, covering 36 entities and 22 relations (568,235 triplets), enable robust modeling of clinical interactions, supporting tasks like action recognition and human-centric perception. We evaluate the surgical scene graph generation performance of two adapted state-of-the-art models and offer a new baseline that explicitly leverages EgoExOR’s multimodal and multi-perspective signals. This new dataset and benchmark set a new foundation for OR perception, offering a rich, multimodal resource for next-generation clinical perception.

EgoExOR Overview

Figure: Overview of one timepoint from the EgoExOR dataset, showcasing synchronized multi-view egocentric RGB and exocentric RGB-D video streams, a live ultrasound monitor feed, audio, a fused 3D point-cloud reconstruction, and gaze, hand-pose, and scene graph annotations.

🌟 Key Features

  • Multiple Modalities: Each take includes RGB video, audio, eye gaze tracking, hand tracking, 3D point cloud data, and annotations, all captured simultaneously.

  • Time-Synchronized Streams: All modalities are aligned on a common timeline, enabling precise cross-modal correlation (e.g. each video frame has corresponding gaze coordinates, hand positions, etc.).

  • Research Applicability: EgoExOR aims to fill the gap between egocentric and exocentric surgical datasets, supporting the development of AI assistants, skill assessment tools, and multimodal models in the medical and augmented reality domains.

📂 Dataset Structure

The dataset is available in two formats:

  • Individual Files: Hosted on the Hugging Face repository (ardamamur/EgoExOR) for efficient storage and access (see the download sketch below).
  • Merged HDF5 File: Consolidates all data, including the splits, into a single file; the merge is performed locally.
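
As a starting point, the individual HDF5 files can be fetched with the huggingface_hub library. This is a minimal sketch, not an official download script; the local directory name and the pattern filter are arbitrary choices.

```python
# Sketch: download the individual EgoExOR HDF5 files from the Hugging Face repository.
# Requires `pip install huggingface_hub`; local_dir and allow_patterns are example choices.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="ardamamur/EgoExOR",
    repo_type="dataset",
    local_dir="./EgoExOR",      # where the .h5 files are placed
    allow_patterns=["*.h5"],    # fetch only the HDF5 files
)
print("Dataset files downloaded to:", local_path)
```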

Individual Files (Hugging Face Repository)

Individual files are organized hierarchically by surgery type, procedure, and take, with components such as RGB frames, eye gaze, and annotations stored separately for efficiency. The splits.h5 file defines the train, validation, and test splits. A loading sketch follows the structure listing below.

  • metadata/
    • vocabulary/
      • entity (Dataset: name, id)
        • Lists entities (e.g., objects, people) with their names and unique IDs.
      • relation (Dataset: name, id)
        • Lists relationships (e.g., "holding") with their names and unique IDs.
    • sources/
      • sources (Dataset: name, id)
        • Lists data sources (e.g., cameras like 'assistant', 'ultrasound', 'external') with their names and unique IDs.
        • Note: Camera IDs in eye_gaze/coordinates must be mapped through this sources dataset to obtain the correct source names. Do not use takes/<take_id>/sources/ for this mapping, even though it lists the source names in the same order.
    • dataset/
      • Attributes: version, creation_date, title
        • Provides dataset-level information, such as version number, creation date, and title.
  • data/
    • <surgery_type>/
      • Directory named after the type of surgery (e.g., "MISS").
      • <procedure_id>/
        • Directory for a specific procedure.
        • takes/
          • <take_id>/
            • Directory for a specific recording (subclip) of a procedure.
            • sources/
              • Attributes: source_count (int), source_0 (e.g., 'head_surgeon'), source_1, ...
                • Metadata for take cameras/sources, mapping array indices to camera/source IDs.
                • Note: Source names appear in the same order as in metadata/sources, but for mapping camera/source IDs (e.g., in gaze data), use metadata/sources to obtain the correct source names.
            • frames/
              • rgb (Dataset: [num_frames, num_cameras, height, width, 3], uint8)
                • Synchronized video frames with dimensions: number of frames, number of cameras, height, width, and 3 color channels.
            • eye_gaze/
              • coordinates (Dataset: [num_frames, num_ego_cameras, 3], float32)
                • Eye gaze data from Egocentric devices with dimensions: number of frames, number of ego cameras, and 3 values (camera/source ID, x-coordinate, y-coordinate).
                • Invalid gaze points are marked as [-1., -1.].
                • Note: The camera_id in the last dimension must be mapped to metadata/sources for the correct source name, not to takes/<take_id>/sources/.
            • eye_gaze_depth/
              • values (Dataset: [num_frames, num_ego_cameras], float32)
                • Depth values for eye gaze in meters, synchronized with eye_gaze/coordinates (can use camera/source ID from coordinates).
                • Defaults to 1.0 if depth data is unavailable.
            • hand_tracking/
              • positions (Dataset: [num_frames, num_ego_cameras, 17], float32)
                • Hand tracking data from egocentric devices with dimensions: number of frames, number of ego cameras, and 17 values (camera ID + 8 keypoints for left hand + 8 keypoints for right hand, including wrist, palm, and normals).
                • Invalid points are marked with NaN.
            • audio/ (Optional)
              • waveform (Dataset: [num_samples, 2], float32)
                • Full stereo audio waveform with dimensions: number of samples and 2 channels (left, right).
              • snippets (Dataset: [num_frames, samples_per_snippet, 2], float32)
                • 1-second stereo audio snippets aligned with frames, with dimensions: number of frames, samples per snippet, and 2 channels.
            • point_cloud/
              • coordinates (Dataset: [num_frames, num_points, 3], float32)
                • Merged 3D point cloud coordinates from external cameras, with dimensions: number of frames, number of points, and 3 coordinates (x, y, z).
              • colors (Dataset: [num_frames, num_points, 3], float32)
                • RGB colors for point cloud points (0-1 range), with dimensions: number of frames, number of points, and 3 color channels.
            • annotations/
              • frame_idx
                • rel_annotations (Dataset: [n_annotations_per_frame, 3], object (byte string))
                  • Text-based scene graph annotations (e.g., "head_surgeon holding scalpel") for each frame.
                • scene_graph (Dataset: [n_annotations_per_frame, 3], float32)
                  • Tokenized annotations using integer mappings from metadata/vocabulary, representing relationships in a structured format.
  • splits.h5
    • A standalone file defining the dataset splits (train, validation, test).
    • Contains columns: surgery_type, procedure_id, take_id, frame_id
      • surgery_type: Type of surgical procedure (e.g., "MISS").
      • procedure_id: Unique identifier for a specific procedure.
      • take_id: Identifier for a specific recording (subclip) of a procedure.
      • frame_id: Identifier for individual frames within a take.
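
The sketch below illustrates the layout above by opening one take with h5py. The file name, surgery type, procedure ID, take ID, and frame index are placeholders, and the compound-dataset field names ("name", "id", and the split columns) are assumptions based on the listing; adjust them to the files you actually download.

```python
# Sketch: read one take from an EgoExOR HDF5 file with h5py.
# File name, group path, and field names below are placeholders based on the structure listing.
import h5py

with h5py.File("miss_1.h5", "r") as f:
    # Global source vocabulary: map camera/source IDs to names (per the gaze-mapping note above).
    sources = f["metadata/sources/sources"][:]
    source_names = {int(row["id"]): row["name"].decode() for row in sources}

    # data/<surgery_type>/<procedure_id>/takes/<take_id> (placeholder IDs)
    take = f["data/MISS/1/takes/1"]

    # Synchronized RGB frames: [num_frames, num_cameras, height, width, 3], uint8.
    rgb = take["frames/rgb"]
    print("frames:", rgb.shape)

    # Eye gaze: [num_frames, num_ego_cameras, 3] = (camera/source ID, x, y); invalid points are -1.
    gaze0 = take["eye_gaze/coordinates"][0]
    for cam_id, x, y in gaze0:
        if x >= 0:
            print(source_names.get(int(cam_id), "unknown"), "gazes at", (x, y))

    # Scene graph annotations for one frame: byte-string triplets such as "head_surgeon holding scalpel".
    rel = take["annotations/0/rel_annotations"][:]
    triplets = [[part.decode() for part in row] for row in rel]
    print("frame 0 triplets:", triplets[:3])
```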

Merged Dataset File (Locally)

The merged dataset file consolidates all data from the individual files into a single file, including the splits defined in splits.h5. It follows the same structure as above, with an additional splits/ directory that organizes the data into train, validation, and test subsets; a reading sketch follows the listing below.

  • splits/
    • train, validation, test
      • Each split is a dataset with columns: surgery_type, procedure_id, take_id, frame_id
        • Links to the corresponding data in the data/ directory for easy access during machine learning tasks.
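
A minimal sketch of iterating over the train split in a locally merged file, assuming the split datasets store the four columns named above (the merged file name and the exact field encodings are assumptions):

```python
# Sketch: walk the train split of a locally merged EgoExOR file.
# "egoexor_merged.h5" and the column/field names are illustrative, per the split columns above.
import h5py

with h5py.File("egoexor_merged.h5", "r") as f:
    train = f["splits/train"][:]
    for row in train[:5]:                       # first few rows for illustration
        surgery = row["surgery_type"].decode()  # byte string -> str
        procedure = row["procedure_id"]         # may be an int or byte string; adapt as needed
        take = row["take_id"]
        frame = row["frame_id"]
        # Each split row links back to the corresponding take in the data/ hierarchy.
        rgb = f[f"data/{surgery}/{procedure}/takes/{take}/frames/rgb"][frame]
        print(surgery, procedure, take, frame, rgb.shape)
```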

⚙️ Efficiency and Usability

  • Efficiency:
    • HDF5 Format: Ideal for large, complex datasets with hierarchical organization and partial loading.
    • Compression: gzip reduces file size, critical for video and point cloud data.
    • Chunking: Enables efficient access to specific frame ranges, supporting sequence-based model training (see the partial-loading sketch below).
  • Usability:
    • Logical Structure: Hierarchical organization (data/surgery/procedure/take/modality) simplifies navigation.
    • Embedded Metadata: Source mappings and vocabularies enhance self-containment.
  • Scalability: Easily accommodates new surgeries or clips by adding groups to the existing hierarchy.
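
Because the frame-indexed datasets are chunked, a contiguous frame window can be read without loading the whole take. A minimal sketch, reusing the placeholder paths from the earlier example:

```python
# Sketch: partial loading of a 16-frame window, relying on HDF5 chunking.
import h5py

with h5py.File("miss_1.h5", "r") as f:
    take = f["data/MISS/1/takes/1"]          # placeholder path, as in the earlier sketch
    clip = take["frames/rgb"][100:116]       # only the chunks covering these frames are read
    gaze = take["eye_gaze/coordinates"][100:116]
    print(clip.shape, gaze.shape)
```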

📜 License

Released under the Apache 2.0 License, permitting free academic and commercial use with attribution.


📚 Citation

A formal BibTeX entry will be provided upon publication. For now, please cite the dataset URL.


🤝 Contributing

Contributions are welcome! Submit pull requests to improve loaders, add visualizers, or share benchmark results.


Dataset URL: ardamamur/EgoExOR
Last Updated: May 2025
