SmartOT is a joint project of several partners that aims to develop an automatic lighting system and an intelligent control concept for operating theaters. The project partners are the Digital Media Lab, the Computer Graphics and Virtual Reality Research Lab (CGVR), and the TZI of the University of Bremen; Dr. Mach GmbH & Co. KG; Kizmo GmbH; Qioptiq Photonics GmbH & Co. KG; and the University Clinic for Visceral Surgery at the Pius-Hospital Oldenburg. The project is funded by the German Federal Ministry of Education and Research (BMBF - Bundesministerium für Bildung und Forschung).
Previous surgical lamp solutions require frequent manual repositioning to ensure sufficiently good lighting conditions during surgery. To avoid the resulting distractions and brief interruptions of the operating procedure, an automatic lighting system with a large number of light modules is being developed. This lighting system is designed to automatically detect and avoid shadows cast by surgeons and assistants and to illuminate the surgical site as uniformly as possible.
Autonomous lighting control software
Based on the ASuLa project, we provide an adapted version of our software for autonomous control of the lighting modules in the SmartOT project. This software receives range images from multiple range cameras as input and generates control commands for the light modules as output. Together with our project partner Dr. Mach, we are planning the connection to the physical light modules developed by Dr. Mach. Furthermore, the software provides networking functionality with which the control information of the lamps can also be sent to our VR simulation and our light planning software.
Our VR simulator enables testing and further development of the newly developed lighting system using a head-mounted display. We are currently extending this VR simulator with various functionalities, e.g. the real-time simulation of the shadows that one's own hands and arms cast under the lighting system. In addition, we have built in functionality to perform attention tests, which are conducted under the direction of the University of Oldenburg. The 3D model of the operating room was created by project partner KIZMO.
Figure 1: Image of the VR Simulator.
Figure 2: Lighting system reacts to movements in the VR simulator.
Light planning software
One of our initial tasks was to find optimal lamp positions to ensure that uniform and optimal lighting can be maintained during typical surgeries. We initially developed the light planning software in consultation with Dr. Mach and Qioptiq to enable testing and analysis of different light module configurations (number and position of modules). In addition, point cloud recordings of real operations can be used to analyze the shadows expected to be cast under a given light module configuration.
Figure 3: Manually testing various light module configurations.
Figure 4: Shadow of a recorded point cloud cast on a virtual site by different light modules.
Figure 5: Algorithm-based optimization of light module configurations on point clouds.
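The shadow analysis itself is part of the (unreleased) planning software, but the basic idea can be illustrated. The following is a minimal numpy sketch, under the simplifying assumption that each recorded cloud point acts as a small spherical occluder: a site point lies in shadow of a light module if the segment from the module to the site passes close to any occluder point. The function name and the point radius are hypothetical choices for illustration.

```python
import numpy as np

def is_shadowed(light_pos, site_point, occluder_points, point_radius=0.01):
    """Return True if any occluder point blocks the ray from light to site.

    Each cloud point is treated as a small sphere of radius `point_radius`
    (a hypothetical simplification); the ray is blocked if such a sphere
    lies within that radius of the light-to-site segment.
    """
    d = site_point - light_pos
    seg_len = np.linalg.norm(d)
    d_unit = d / seg_len
    # Vectors from the light to every occluder point
    v = occluder_points - light_pos
    # Projection of each point onto the ray direction
    t = v @ d_unit
    # Only points strictly between light and site can occlude
    between = (t > 0.0) & (t < seg_len)
    # Perpendicular distance from each point to the ray
    closest = light_pos + np.outer(t, d_unit)
    dist = np.linalg.norm(occluder_points - closest, axis=1)
    return bool(np.any(between & (dist < point_radius)))
```

Running such a test for every light module against the site, and counting how many modules reach each site point unobstructed, gives a rough uniformity measure for a candidate configuration.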
Recordings of open abdominal surgeries (depth images only)
Figure 6: Timelapse of surgery 4. Shown is the resulting point cloud from the three depth images of the Kinects. The points are colored according to the Kinect that captured them.
The following files are depth image recordings of real open abdominal surgeries. Each was recorded simultaneously with three ceiling-mounted Microsoft Kinect v2 cameras to avoid occlusions caused by the conventional surgical lights. Due to memory limitations and for pre-processing, the files contain only about 1 frame per second per Kinect. The files provided do not contain color images.
| Surgery | Duration | Size (zipped) | Size (unzipped) | Note |
| --- | --- | --- | --- | --- |
| Surgery 1 | 2:08 h | 2.85 GiB | 9.26 GiB | Incomplete. |
| Surgery 2 | 4:02 h | 5.27 GiB | 17.6 GiB | Incomplete. |
| Surgery 3 | 3:00 h | 4.00 GiB | 13.0 GiB | Incomplete. Grid recording. |
| Surgery 4 | 3:28 h | 4.57 GiB | 15.0 GiB | Complete. Grid recording. |
| Surgery 5 | 2:24 h | 2.76 GiB | 10.4 GiB | Complete. |
| Surgery 6 | 3:27 h | 4.11 GiB | 15.0 GiB | Complete. |
| Surgery 7 | 5:45 h | 8.38 GiB | 25.1 GiB | Complete. |
| Surgery 8 | 3:00 h | 3.86 GiB | 13.0 GiB | Complete. |
| Surgery 9 | 3:11 h | 3.78 GiB | 13.8 GiB | Complete. Grid recording. |
Explanation of the notes:
- Incomplete: The surgery was not recorded to the end.
- Complete: The surgery was completely recorded.
- Grid recording: Contains a recording of the registration grid at the beginning.
Each file contains the recordings of three Kinects for one surgery. In addition, each file contains the intrinsic camera parameters in the header as well as the model-to-world matrices of the Kinects, which were derived via our grid registration procedure. Since they are binary files, they can easily be read in any programming language without the need for additional libraries.
Using the recordings:
- The binary file structure is briefly described [here].
- This data may only be used for scientific purposes. If you publish research that uses this material, please cite the following paper: Improved Lossless Depth Image Compression (2020)
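Once the depth images, intrinsics, and model-to-world matrices have been parsed from a file (the exact binary layout is given in the linked format description), combining them follows the standard pinhole unprojection. The following is a minimal numpy sketch; the function name and the assumption that depths are stored in meters with a pinhole model (fx, fy, cx, cy) are illustrative, not taken from the file format itself.

```python
import numpy as np

def depth_to_world(depth, fx, fy, cx, cy, model_to_world):
    """Unproject a depth image (assumed meters) into world-space points.

    fx, fy, cx, cy are pinhole intrinsics as stored in the file header;
    model_to_world is the 4x4 matrix from the grid registration.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    # Pinhole back-projection into the camera frame
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)
    # Transform homogeneous camera-space points into the world frame
    world = pts @ model_to_world.T
    valid = pts[:, 2] > 0  # drop pixels without a depth reading
    return world[valid, :3]
```

Applying this to the three Kinects of one surgery, each with its own model-to-world matrix, yields a merged point cloud in a common coordinate system, as shown in Figure 6.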
Source Code for Point Cloud Registration
Fast and Robust 3D Registration and Calibration of Depth-Only Sensors
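For readers who want the core idea before diving into the released code: once corresponding 3D points between two depth sensors are known (e.g. from the registration grid), the rigid transform between them can be recovered in closed form. The sketch below uses the classic Kabsch/SVD solution as an illustration; it is not the method published above, which implements its own grid-based procedure.

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate R, t minimizing ||R @ src_i + t - dst_i|| over
    corresponding 3D point sets (Kabsch algorithm, illustrative only)."""
    # Center both point sets
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # Cross-covariance and its SVD
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

The resulting R and t for each Kinect, composed into a 4x4 matrix, play the role of the model-to-world matrices stored in the recording headers.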
This original work is copyright by University of Bremen.
Any software of this work is covered by the European Union Public Licence v1.2. To view a copy of this license, visit eur-lex.europa.eu.
Any other assets (3D models, movies, documents, depth-image recordings etc.) are covered by the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. To view a copy of this license, visit creativecommons.org.
If you use any of the assets or software to produce a publication, you must give credit and include a reference in your publication. If you would like to use our software in proprietary software, you can obtain an exception from the above license (i.e. dual licensing). Please contact zach at cs.uni-bremen dot de.
- Further information: Project Website