ardamamur committed (verified) · Commit b77c4c9 · 1 Parent(s): feb09cc

Update README.md

Files changed (1): README.md (+1 / -3)
README.md CHANGED
@@ -40,9 +40,7 @@ Official code of the paper "EgoExOR: An Egocentric–Exocentric Operating Room D
 
 - **Authors**: [Ege Özsoy][eo], [Arda Mamur][am], Felix Tristram, Chantal Pellegrini, Magdalena Wysocki, Benjamin Busam, [Nassir Navab][nassir]
 
-Operating rooms (ORs) demand precise coordination among surgeons, nurses, and equipment in a fast-paced, occlusion-heavy environment, necessitating advanced perception models to enhance safety and efficiency. We present EgoExOR, a pioneering dataset to propel surgical scene understanding by fusing first-person and third-person perspectives. Spanning 93 minutes (84,277 frames at 15 FPS) of two critical spine procedures, Ultrasound-Guided Needle Insertion and Minimally Invasive Spine Surgery, EgoExOR integrates egocentric data (RGB, gaze, hand tracking, audio) from Project Aria glasses, exocentric RGB/depth from Azure Kinect cameras, and ultrasound imagery. Its detailed scene graph annotations, covering 36 entities and 22 relations (~573,000 triplets), enable robust modeling of clinical interactions, supporting tasks like action recognition and robotic guidance. We evaluate EgoExOR with established OR perception models and a new tailored approach, showcasing its potential to advance surgical automation and skill analysis. EgoExOR redefines OR datasets, offering a rich, multimodal resource for next-generation clinical perception.
-
-
+Operating rooms (ORs) demand precise coordination among surgeons, nurses, and equipment in a fast-paced, occlusion-heavy environment, necessitating advanced perception models to enhance safety and efficiency. Existing datasets either provide partial egocentric views or sparse exocentric multi-view context, but do not explore the comprehensive combination of both. We introduce EgoExOR, the first OR dataset and accompanying benchmark to fuse first-person and third-person perspectives. Spanning 94 minutes (84,553 frames at 15 FPS) of two emulated spine procedures, Ultrasound-Guided Needle Insertion and Minimally Invasive Spine Surgery, EgoExOR integrates egocentric data (RGB, gaze, hand tracking, audio) from wearable glasses, exocentric RGB and depth from RGB-D cameras, and ultrasound imagery. Its detailed scene graph annotations, covering 36 entities and 22 relations (568,235 triplets), enable robust modeling of clinical interactions, supporting tasks like action recognition and human-centric perception. We evaluate the surgical scene graph generation performance of two adapted state-of-the-art models and offer a new baseline that explicitly leverages EgoExOR’s multimodal and multi-perspective signals. This new dataset and benchmark set a new foundation for OR perception, offering a rich, multimodal resource for next-generation clinical perception.
 
 [am]: https://github.com/ardamamur/
 [eo]: https://www.cs.cit.tum.de/camp/members/ege-oezsoy/
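
The updated abstract describes scene graph annotations as (subject, relation, object) triplets drawn from 36 entity and 22 relation classes. As a minimal sketch of that idea, the snippet below shows how such per-frame triplets could be represented and inspected in Python; the entity and relation names are hypothetical placeholders, not the dataset's actual schema or loading API.

```python
# Illustrative sketch only: (subject, relation, object) triplets in the style
# described by the EgoExOR abstract. Names below are hypothetical examples,
# not the dataset's real entity/relation vocabulary or API.
from collections import Counter

# Each annotated frame carries a set of scene graph triplets.
frame_triplets = [
    ("head_surgeon", "holding", "ultrasound_probe"),
    ("assistant", "close_to", "operating_table"),
    ("head_surgeon", "looking_at", "monitor"),
]

# Count relation occurrences, e.g. to inspect class frequencies.
relation_counts = Counter(rel for _, rel, _ in frame_triplets)
print(relation_counts)
```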