---
license: apache-2.0
language:
- en
tags:
- medical
- surgical_activities
- egocentric
- egoexo
- scene_graph
- operating_room
pretty_name: >-
  EgoExOR: An Egocentric–Exocentric Operating Room Dataset for Comprehensive
  Understanding of Surgical Activities
size_categories:
- n<1K
---

# EgoExOR: An Egocentric–Exocentric Operating Room Dataset for Comprehensive Understanding of Surgical Activities

*Figure: EgoExOR overview.*

---

## Overview

**EgoExOR** is a multimodal dataset capturing surgical procedures from both **egocentric** (ARIA glasses worn by participants) and **exocentric** (room cameras) perspectives in an operating room environment. Each procedure is stored in a single, time-synchronized HDF5 file (`*.h5`) containing RGB video, audio, eye gaze, hand tracking, 3D point clouds, and expert scene-graph annotations. The dataset is designed for selective downloading, reproducible train/validation/test splits, and easy visualization through one-line helper functions.

### Key Features

- **Multimodal Data**: Includes RGB video, audio, eye-gaze tracking, hand tracking, 3D point clouds, and time-stamped annotations, captured simultaneously.
- **Realistic Scenarios**: Recorded during ultrasound exams and minimally invasive surgery tasks, reflecting complex, real-world operating room conditions.
- **Time-Synchronized Streams**: All modalities are aligned on a common timeline for precise cross-modal analysis (e.g., correlating video frames with gaze and hand positions).
- **Research Applications**: Supports AI-driven surgical assistance, skill assessment, and multimodal model development, addressing gaps in medical dataset availability.

---

## 📁 Repository Structure

| Path | Description |
|-----------------------------------|-----------------------------------------------------------------------------|
| `miss_*.h5`, `ultrasound_*.h5` | One surgical procedure per HDF5 file, including all subclips. |
| `splits.json` | Official frame-level splits for `train`, `validation`, and `test`. |
| `utils/load_h5.py` | Utility to download and open HDF5 files using `h5py`. |
| `utils/visualize_mosaic.py` | Visualization tool to overlay gaze and hand keypoints on video frames. |
| `utils/merge_h5.py` | Script to merge multiple HDF5 files into a single file. |
| `croissant.json` | Machine-readable metadata in MLCommons Croissant format. |
| `README.md` | Dataset documentation (this file). |

> **Note**: Large files are managed with **Git-LFS**, with each file under 50 GB to comply with hosting limits.
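Since each procedure is a standalone `.h5` file, a single recording can be fetched without cloning the whole repository. Below is a minimal sketch using the `huggingface_hub` client directly; the bundled `utils/load_h5.py` helper presumably wraps a similar call (see the Quick Start):

```python
from huggingface_hub import hf_hub_download  # pip install huggingface_hub
import h5py

# Download one procedure file from the dataset repo (cached locally).
path = hf_hub_download(
    repo_id="ardamamur/EgoExOR",
    filename="miss_4.h5",
    repo_type="dataset",
)

# h5py opens the file lazily and reads only the chunks you index.
with h5py.File(path, "r") as f:
    f.visit(print)  # print every group/dataset path in the file
```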

---

## 📂 Dataset Organization

### HDF5 Structure

Each HDF5 file is hierarchically structured as follows:

```text
/ (Root)
├── metadata
│   ├── vocabulary
│   │   ├── entity
│   │   └── relation
│   ├── sources
│   │   └── sources
│   └── dataset (version, creation_date, title)
└── data
    └── <surgery_type>
        └── <procedure_id>
            └── take
                └── <take_id>
                    ├── frames/rgb               [n_frames, n_cams, H, W, 3] uint8
                    ├── eye_gaze/coordinates     [n_frames, n_cams, 3]       float32
                    ├── eye_gaze_depth/values    [n_frames, n_cams]          float32
                    ├── hand_tracking/positions  [n_frames, n_cams, 17]      float32
                    ├── point_cloud/{coordinates,colors}
                    ├── audio/{snippets,waveform}
                    ├── sources
                    └── annotations/frame_{idx}
                        ├── rel_annotations
                        └── scene_graph
```

A data instance corresponds to a single frame within a subclip, accessible by navigating to the specific `surgery_type`, `procedure_id`, `take_id`, and `frame_id` for the desired modality (e.g., `frames/rgb`).

### Technical Details

- **Compression**: `gzip` (level 4) minimizes file size.
- **Chunking**: Datasets are chunked along the frame/time dimension for efficient partial loading, ideal for time-series analysis (see the slicing sketch in the Quick Start below).

---

## 🔍 Modalities

- **`sources/`**: Metadata for subclip cameras.
  - Attributes: `source_count` (int), `source_0` (e.g., `'aria01'`), mapping array indices to camera IDs.
- **`frames/`**: RGB video frames.
  - `rgb`: Shape `(num_frames, num_cameras, height, width, 3)`, dtype `uint8`. Synchronized BGR frames.
- **`eye_gaze/`**: Eye-gaze data from ARIA devices.
  - `coordinates`: Shape `(num_frames, num_aria_cameras, 3)`, dtype `float32`. Pixel coordinates `[camera_id, x, y]`. Invalid gaze is marked as `[-1., -1.]`.
- **`eye_gaze_depth/`**: Depth data for gaze points.
  - `values`: Shape `(num_frames, num_aria_cameras)`, dtype `float32`. Depth in meters; defaults to 1.0 if unavailable.
- **`hand_tracking/`**: Hand-tracking data from ARIA devices.
  - `positions`: Shape `(num_frames, num_aria_cameras, 17)`, dtype `float32`. Pixel coordinates for 8 keypoints (left/right wrist, palm, and normals). Uses `NaN` for invalid points.
- **`audio/`**: Audio data.
  - `waveform`: Shape `(num_samples, 2)`, dtype `float32`. Full stereo audio waveform.
  - `snippets`: Shape `(num_frames, samples_per_snippet, 2)`, dtype `float32`. One-second stereo snippets aligned with frames.
- **`point_cloud/`**: Merged point-cloud data from external cameras.
  - `coordinates`: Shape `(num_frames, num_points, 3)`, dtype `float32`. 3D point coordinates.
  - `colors`: Shape `(num_frames, num_points, 3)`, dtype `float32`. RGB colors (0–1 range).
- **`annotations/`**: Scene-graph annotations per frame.
  - `rel_annotations`: Shape `[n_annotations_per_frame, 3]`, dtype `object` (byte strings).
  - `scene_graph`: Shape `[n_annotations_per_frame, 3]`, dtype `float32`. Tokenized annotations, with the vocabulary in the root metadata (see the decoding sketch in the Quick Start).

---

## 🚀 Quick Start

### 1. Load an HDF5 File

```python
from utils.load_h5 import load_egoexor_h5

f = load_egoexor_h5("ardamamur/EgoExOR", "miss_4.h5")
print(list(f["data/MISS/4/take"].keys()))
```

### 2. Visualize a Frame

```python
from utils.visualize_timepoint import visualize_frame_group

visualize_frame_group(
    "miss_4.h5",          # path to the HDF5 file
    surgery_type="MISS",
    procedure_id=4,
    take_id=1,
    frame_idx=500,
)  # the rendered frame is saved as a PNG
```

### 3. [Optional] Merge into a Single HDF5

```python
from utils.merge_h5 import merge_files

merge_files(
    input_files,              # list of HDF5 files to merge
    output_file="EgoExOR.h5",
    create_splits=True,
    train_size=0.7,
    val_size=0.15,
    test_size=0.15,
    random_seed=42,
)
```

### 4. Access Splits

The `splits.json` file provides tuples `(surgery_type, procedure_id, take_id, frame_id)` for the `train`, `validation`, and `test` splits.
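A minimal sketch of reading one split and filtering it to a single take; the exact JSON layout is an assumption (split names mapping to lists of tuples), so adjust the keys to the shipped file:

```python
import json

# Assumed layout: {"train": [[surgery_type, procedure_id, take_id, frame_id], ...], ...}
with open("splits.json") as fp:
    splits = json.load(fp)

print({name: len(items) for name, items in splits.items()})

# Hypothetical filter: all training frames of MISS procedure 4, take 1.
frames = sorted(f_id for s, p, t, f_id in splits["train"] if (s, p, t) == ("MISS", 4, 1))
```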
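### 5. Slice a Frame Range

Because every dataset is chunked along the time axis, a contiguous frame range can be read without touching the rest of the take. A sketch, assuming the group layout shown above (take `1` of `MISS` procedure `4`; the invalid-gaze marker follows the modality notes):

```python
import h5py
import numpy as np

with h5py.File("miss_4.h5", "r") as f:
    take = f["data/MISS/4/take/1"]

    # h5py fetches only the chunks covering frames 500-531.
    rgb = take["frames/rgb"][500:532]             # (32, n_cams, H, W, 3) uint8, BGR
    gaze = take["eye_gaze/coordinates"][500:532]  # (32, n_aria_cams, 3) float32

    # Mask gaze samples whose pixel coordinates carry the -1 invalid marker.
    valid = ~np.any(gaze[..., 1:] == -1.0, axis=-1)  # gaze rows are [camera_id, x, y]
    print(f"{valid.sum()} valid gaze samples in this window")
```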
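### 6. Decode Scene-Graph Annotations

The `scene_graph` triplets are stored as token indices, while the string vocabulary lives once per file under `metadata/vocabulary`. A sketch under the assumption that each triplet holds `(subject, relation, object)` indices into the `entity` and `relation` tables:

```python
import h5py

with h5py.File("miss_4.h5", "r") as f:
    # Assuming byte-string vocabulary entries.
    entities = [e.decode() for e in f["metadata/vocabulary/entity"][:]]
    relations = [r.decode() for r in f["metadata/vocabulary/relation"][:]]

    # Triplets for one annotated frame (frame 500 of MISS procedure 4, take 1).
    triplets = f["data/MISS/4/take/1/annotations/frame_500/scene_graph"][:]
    for subj, rel, obj in triplets.astype(int):
        print(entities[subj], relations[rel], entities[obj])
```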
---

## ⚙️ Efficiency and Usability

- **Efficiency**:
  - **HDF5 Format**: Well suited to large, complex datasets, with hierarchical organization and partial loading.
  - **Compression**: `gzip` reduces file size, which is critical for video and point-cloud data.
  - **Chunking**: Enables efficient access to specific frame ranges, supporting sequence-based model training.
- **Usability**:
  - **Logical Structure**: The hierarchical organization (`data/surgery/procedure/take/modality`) simplifies navigation.
  - **Embedded Metadata**: Source mappings and vocabularies make each file self-contained.
  - **Scalability**: New surgeries or clips can be added as groups in the existing hierarchy.

---

## 📝 Considerations

- **Raw Data**: Raw data (e.g., `.vrs` files) is not currently provided.
- **Point Cloud Data**: Limited to external camera sources.
- **Documentation**: Additional examples and detailed guides could further improve usability.

---

## 🎯 Conclusion

**EgoExOR** offers a robust, multimodal dataset for advancing AI-driven surgical understanding. Its synchronized egocentric and exocentric data, stored efficiently in HDF5 with compression and chunking, supports applications ranging from skill assessment to multimodal model development. Future enhancements in documentation and semantic annotations could further increase its impact.

---

## 📜 License

Released under the **Apache 2.0 License**, permitting free academic and commercial use with attribution.

---

## 📚 Citation

A formal BibTeX entry will be provided upon publication. For now, please cite the dataset URL.

---

## 🤝 Contributing

Contributions are welcome! Submit pull requests to improve loaders, add visualizers, or share benchmark results.

---

*Dataset URL: [ardamamur/EgoExOR](https://huggingface.co/datasets/ardamamur/EgoExOR)*
*Last Updated: May 2025*