VIVATOP

VIVATOP is a joint project of the Digital Media Lab and the Computer Graphics and Virtual Reality Lab (CGVR) at the University of Bremen, the Fraunhofer Institute for Digital Medicine MEVIS, apoQlar GmbH, cirp GmbH, szenaris GmbH, and the University Clinic for Visceral Surgery at the Pius-Hospital Oldenburg. The project is funded by the German Federal Ministry of Education and Research (BMBF - Bundesministerium für Bildung und Forschung).

The aim of VIVATOP is to provide extensive support for surgeons in the preoperative, intraoperative, and training phases using modern Augmented and Virtual Reality techniques in combination with 3D-printed organ models and RGB-D cameras. By combining these techniques, we create a virtual multi-user environment in which surgeons can collaborate with external experts, immersively inspect virtual patient data, and receive haptic feedback from the organ models.

Virtual Scene with a live-streamed Point Cloud.

3D-printed model of a liver.

Use Cases

Preoperative Planning

Every surgery has to be extensively planned beforehand to minimize risks. We assist the surgeons during this phase by providing the possibility to collaborate with (external) colleagues in a multi-user virtual environment, where they can view, inspect, and manipulate 3D patient data. A key feature for immersion and usability is the haptic feedback from the physical organ models. These models can be printed and used as controllers for the corresponding virtual models, as sketched below.
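
A minimal sketch of how such a tangible controller could be wired up in the Unreal Engine 4, assuming the printed model carries a tracked device exposed as motion source "Special_1" (all class names here are hypothetical, not the actual project code):

```cpp
// Minimal sketch: the virtual organ mesh is attached to the tracked pose of
// the physical 3D-printed model, so moving the printed organ moves its
// virtual counterpart 1:1. The motion source name is an assumption.
#include "GameFramework/Actor.h"
#include "MotionControllerComponent.h"
#include "Components/StaticMeshComponent.h"
#include "TrackedOrganActor.generated.h"

UCLASS()
class ATrackedOrganActor : public AActor
{
    GENERATED_BODY()

public:
    ATrackedOrganActor()
    {
        // Pose of the tracker fixed to the 3D-printed organ model.
        Tracker = CreateDefaultSubobject<UMotionControllerComponent>(TEXT("Tracker"));
        Tracker->MotionSource = FName(TEXT("Special_1")); // hypothetical tracker id
        RootComponent = Tracker;

        // The virtual organ mesh simply follows the tracked physical model.
        OrganMesh = CreateDefaultSubobject<UStaticMeshComponent>(TEXT("OrganMesh"));
        OrganMesh->SetupAttachment(Tracker);
    }

private:
    UPROPERTY() UMotionControllerComponent* Tracker;
    UPROPERTY() UStaticMeshComponent* OrganMesh;
};
```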

Intraoperative Support

During a live surgery, multiple depth cameras will be used to record the ongoing operation. This data will be streamed into the virtual environment, where a Point Cloud reconstruction is rendered so that external experts can view and support the surgery.

Training

The training of students and doctors is another vital use case that could greatly benefit from the physical and virtual organ models.

Our Contributions

Within the project, our department is mainly concerned with providing the general multi-user VR OR environment and with the research and development of a live-streaming solution for the RGB-D data, which includes compression and filtering as well as the rendering of Point Clouds. The VR environment is based on a client-server architecture and the Unreal Engine 4.
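
A minimal sketch of the client-server idea in Unreal Engine 4 terms (class and property names are illustrative, not the actual project code): the server owns a replicated actor, and the engine synchronizes its state to all connected clients.

```cpp
// Illustrative sketch of UE4 server-authoritative replication: a shared
// scene object whose transform and state the server replicates to clients.
#include "GameFramework/Actor.h"
#include "Net/UnrealNetwork.h"
#include "SharedPatientDataActor.generated.h"

UCLASS()
class ASharedPatientDataActor : public AActor
{
    GENERATED_BODY()

public:
    ASharedPatientDataActor()
    {
        bReplicates = true;          // server owns the actor, clients get copies
        SetReplicateMovement(true);  // transform changes are synchronized
    }

    // Example of replicated state: which patient dataset is currently loaded.
    UPROPERTY(Replicated)
    int32 ActiveDatasetId;

    void GetLifetimeReplicatedProps(TArray<FLifetimeProperty>& OutLifetimeProps) const override
    {
        Super::GetLifetimeReplicatedProps(OutLifetimeProps);
        DOREPLIFETIME(ASharedPatientDataActor, ActiveDatasetId);
    }
};
```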

Multi-user VR environment.

Point Cloud Streaming

The RGB-D data from the depth cameras mounted in the operating room has to be streamed to the external experts in real time; fast data processing and minimal latencies are therefore key requirements. To accommodate them, we developed a streaming solution based on our library DynCam (more information can be found on the DynCam page). Additionally, efficient compression is needed to keep the required bandwidth low. To achieve the best results, we compress the color and depth images individually and developed novel, more effective lossless depth compression algorithms.
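
Our actual compression schemes are more involved; as a simple illustration of the underlying idea, exploiting the spatial coherence of depth images by delta-encoding the 16-bit depth values before a general-purpose coder such as zlib typically already improves the compression ratio:

```cpp
// Illustrative sketch only, NOT the project's novel algorithm: neighboring
// depth values are similar, so their deltas cluster around zero and compress
// well with a general-purpose entropy coder such as zlib.
#include <cstdint>
#include <vector>
#include <zlib.h>

std::vector<uint8_t> compressDepth(const std::vector<uint16_t>& depth)
{
    // Delta-encode: residuals are small on smooth surfaces.
    std::vector<uint16_t> residuals(depth.size());
    uint16_t prev = 0;
    for (size_t i = 0; i < depth.size(); ++i) {
        residuals[i] = static_cast<uint16_t>(depth[i] - prev);
        prev = depth[i];
    }

    // Entropy-code the residual buffer losslessly with zlib.
    uLongf dstLen = compressBound(residuals.size() * sizeof(uint16_t));
    std::vector<uint8_t> out(dstLen);
    compress(out.data(), &dstLen,
             reinterpret_cast<const Bytef*>(residuals.data()),
             residuals.size() * sizeof(uint16_t));
    out.resize(dstLen);
    return out;
}
```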

Point Cloud streaming pipeline.

Other current tasks we address with our pipeline are the denoising, filtering, and merging of the data and, lastly, the computation of the resulting Point Cloud.
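
The last step can be sketched as follows, assuming a standard pinhole camera model: each valid depth pixel (u, v) is back-projected into a 3D point using the camera intrinsics (fx, fy: focal lengths; cx, cy: principal point).

```cpp
// Minimal sketch of depth-image deprojection with pinhole intrinsics.
#include <cstdint>
#include <vector>

struct Point3f { float x, y, z; };

std::vector<Point3f> depthToPointCloud(const std::vector<uint16_t>& depth,
                                       int width, int height,
                                       float fx, float fy, float cx, float cy,
                                       float depthScale /* e.g. 0.001f: mm -> m */)
{
    std::vector<Point3f> cloud;
    cloud.reserve(depth.size());
    for (int v = 0; v < height; ++v) {
        for (int u = 0; u < width; ++u) {
            const uint16_t d = depth[v * width + u];
            if (d == 0) continue;            // 0 marks invalid measurements
            const float z = d * depthScale;  // metric depth
            cloud.push_back({ (u - cx) * z / fx,   // X
                              (v - cy) * z / fy,   // Y
                              z });                // Z
        }
    }
    return cloud;
}
```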

Point Cloud Visualization

For accurate and low-latency visualization, the Point Cloud is rendered directly, without costly surface reconstruction. For this purpose, a custom Point Cloud renderer is used that can dynamically render huge Point Cloud sets directly in the Unreal Engine 4. More information can be found on the corresponding master thesis page: Efficient rendering of massive and dynamic point cloud data in state-of-the-art graphics engines.
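
As a rough conceptual sketch (not the actual implementation from the thesis), dynamic rendering of streamed points can avoid per-frame buffer reallocation by writing incoming frames into a fixed-capacity ring buffer that is mirrored in a GPU vertex buffer:

```cpp
// Conceptual sketch: a fixed-capacity point pool that new frames overwrite,
// so the GPU-side vertex buffer never has to grow or be recreated.
#include <cstddef>
#include <cstdint>
#include <vector>

struct PointVertex { float x, y, z; uint8_t r, g, b, a; };

class PointRingBuffer {
public:
    explicit PointRingBuffer(size_t capacity) : points_(capacity), head_(0) {}

    // Append one streamed frame, wrapping around when the buffer is full.
    void push(const std::vector<PointVertex>& frame)
    {
        for (const PointVertex& p : frame) {
            points_[head_] = p;
            head_ = (head_ + 1) % points_.size();
        }
        // A real renderer would re-upload only the touched range to the GPU here.
    }

    const std::vector<PointVertex>& data() const { return points_; }

private:
    std::vector<PointVertex> points_;
    size_t head_;
};
```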

Point Cloud streamed and rendered into the virtual OR environment.

Huge Point Cloud visualized using our renderer.

Medical Volume Rendering

The visualization of volumetric medical data such as CT scans is another important topic we are working on. 3D graphics engines, which we use for the VR environment, usually do not provide appropriate solutions out of the box. As part of a master thesis, a direct volume renderer for CT data was developed and integrated into the Unreal Engine 4, which allows high-quality real-time visualization in VR. For more information, we refer to the related page: Ray-Marching-Based Volume Rendering of Computed Tomography Data in a Game Engine.
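
The core of such a renderer is the ray-marching loop itself. The following CPU-side sketch illustrates the principle only; the actual renderer runs as a shader inside the engine, and the sampler and transfer function here are trivial placeholders:

```cpp
// Sketch of direct volume rendering by ray marching: step a ray through the
// CT volume, map each sample to color/opacity via a transfer function, and
// composite front to back with early ray termination.
struct Vec3 { float x, y, z; };
struct RGBA { float r, g, b, a; };

// Placeholder sampler: a real renderer would trilinearly sample a 3D CT texture.
static float sampleVolume(const Vec3& p) { (void)p; return 0.5f; }

// Placeholder transfer function: maps CT density to grayscale color + opacity.
static RGBA transferFunction(float d) { return { d, d, d, d * 0.05f }; }

RGBA rayMarch(Vec3 pos, const Vec3& step, int numSteps)
{
    RGBA acc = { 0, 0, 0, 0 };
    for (int i = 0; i < numSteps; ++i) {
        const RGBA s = transferFunction(sampleVolume(pos));
        const float w = (1.0f - acc.a) * s.a;   // front-to-back compositing
        acc.r += w * s.r;
        acc.g += w * s.g;
        acc.b += w * s.b;
        acc.a += w;
        if (acc.a > 0.99f) break;               // early ray termination
        pos.x += step.x; pos.y += step.y; pos.z += step.z;
    }
    return acc;
}
```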
