AI Surgical Navigation

Surgical navigation systems support a range of clinical procedures, including minimally invasive neurosurgery, stereotaxy, and implant placement. By integrating pre-operative imaging with real-time intraoperative tracking, they improve surgical precision, accuracy, and safety. Traditional systems often rely on infrared marker-based tracking because of its high precision, but such systems require a clear line of sight and careful operating-room setup, and they are vulnerable to marker contamination and loss of reflectivity. In contrast, we employ a markerless, RGB-based approach using multiple cameras and deep learning. This setup reduces the occlusion issues common with markers and enables broader functionality, since the cameras can be repurposed for additional tasks in the operating room, forming a versatile, multi-use imaging system.

Recurrent Multi-View 6D Pose Estimation

We extended SpyroPose to handle sequential data by replacing the U-Net's convolutional layers with ConvGRU layers. This modification enables 6D pose estimation from video sequences captured from multiple camera views. SpyroPose outputs a distribution over pose candidates, and we refined how the final pose is selected from this distribution to improve accuracy.
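
To illustrate the general idea of swapping a convolutional layer for a recurrent, convolutional one, the sketch below shows a minimal ConvGRU cell in PyTorch and a drop-in recurrent stage that carries a hidden state across video frames. The module names (`ConvGRUCell`, `RecurrentStage`) and the toy shapes are illustrative assumptions, not the actual SpyroPose extension.

```python
# Minimal sketch of a ConvGRU layer replacing a plain convolutional stage.
# Illustrative only; not the actual SpyroPose/U-Net code.
from typing import Optional

import torch
import torch.nn as nn


class ConvGRUCell(nn.Module):
    """Convolutional GRU cell: gates are 2D convolutions, so the hidden
    state keeps the spatial layout (B, C, H, W) of the feature map."""

    def __init__(self, in_ch: int, hidden_ch: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        # Update (z) and reset (r) gates computed jointly from [x, h].
        self.gates = nn.Conv2d(in_ch + hidden_ch, 2 * hidden_ch, kernel_size, padding=pad)
        # Candidate hidden state computed from [x, r * h].
        self.cand = nn.Conv2d(in_ch + hidden_ch, hidden_ch, kernel_size, padding=pad)
        self.hidden_ch = hidden_ch

    def forward(self, x: torch.Tensor, h: Optional[torch.Tensor]) -> torch.Tensor:
        if h is None:
            h = x.new_zeros(x.size(0), self.hidden_ch, x.size(2), x.size(3))
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1.0 - z) * h + z * h_tilde


class RecurrentStage(nn.Module):
    """Drop-in replacement for a plain convolutional stage: the ConvGRU
    carries its hidden state across the frames of a video sequence."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.cell = ConvGRUCell(in_ch, out_ch)
        self.h: Optional[torch.Tensor] = None  # reset between sequences

    def reset(self) -> None:
        self.h = None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        self.h = self.cell(x, self.h)
        return self.h


if __name__ == "__main__":
    stage = RecurrentStage(in_ch=64, out_ch=64)
    frames = torch.randn(8, 2, 64, 32, 32)  # toy (T, B, C, H, W) feature sequence
    stage.reset()
    outputs = [stage(f) for f in frames]  # temporal context accumulates frame by frame
    print(outputs[-1].shape)  # torch.Size([2, 64, 32, 32])
```

Because the gates are convolutional, the hidden state preserves the spatial layout of the feature maps, so temporal context is accumulated per location rather than collapsed into a single vector.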

Training Data

The training data used for our experiments can be found here: tbd

Publications