Publications


[Complete list of all entries in BibTeX format]

The copyright for most of the above papers has been transferred to a publisher. The ACM and IEEE grant permission to make a digital copy of the works on the authors' institutional repository. See also any copyright notices on the individual papers for details.


An autonomous, module-based surgical lighting system in an operating room

A Novel, Autonomous, Module-Based Surgical Lighting System

Andre Mühlenbrock, Hendrik Huscher, Verena Uslar, Timur Cetin, Rene Weller, Dirk Weyhe, Gabriel Zachmann

Optimal illumination of the surgical site is crucial for successful surgeries. Current lighting systems, however, suffer from significant drawbacks, particularly shadows cast by surgeons and operating room personnel. We introduce an innovative, module-based lighting system that actively prevents shadows using an array of swiveling, ceiling-mounted light modules. The intensity and orientation of these modules are autonomously controlled by novel algorithms utilizing multiple depth sensors mounted above the operating table. This paper presents our complete system, detailing the algorithms for autonomous control and the initial optimization of the light module setup. Unlike prior work that was largely conceptual and based on simulations, this study introduces a real prototype featuring 56 light modules and three depth sensors. We evaluate this prototype through measurements, semi-structured interviews (n=4), and an extensive quantitative user study (n=11). The evaluation focuses on illumination quality, shadow elimination, and suitability for open surgeries compared to conventional OR lights. Our results demonstrate that the novel lighting system and optimization algorithms outperform conventional OR lights for abdominal surgeries, according to both objective measures and subjective ratings by surgeons.

Published in:

ACM Transactions on Computing for Healthcare (Just Accepted), October 1, 2024

Files:

     Preprint



Uncertain Physics for Robot Simulation in a Game Engine

Hermann Meißenhelter, Rene Weller, Gabriel Zachmann

Physics simulations are crucial for domains like animation and robotics, yet they are typically limited to deterministic simulations that assume precise knowledge of the initial conditions. We introduce a surrogate model for simulating rigid bodies with (Gaussian) positional uncertainty, using a non-uniform sphere hierarchy for object approximation. Our model outperforms traditional sampling-based methods by several orders of magnitude in efficiency while achieving similar outcomes.
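Since the abstract contrasts the surrogate with traditional sampling-based methods, here is a minimal sketch (not the paper's algorithm; all names and the toy dynamics are invented) of such a Monte Carlo baseline: Gaussian positional uncertainty is propagated through a deterministic simulation by brute force, which is exactly the cost the surrogate model avoids.

```python
# Illustrative Monte Carlo baseline (NOT the paper's surrogate model):
# propagate Gaussian positional uncertainty through a toy deterministic
# "simulation" by sampling, then estimate the outcome distribution.
import random

def simulate(x0):
    """Toy deterministic rigid-body outcome: a ball dropped at x0 rolls
    to one of two rest positions depending on which side of a ridge at
    x = 0 it lands on."""
    return -1.0 if x0 < 0.0 else 1.0

def monte_carlo_outcome(mu, sigma, n=10_000, seed=42):
    """Estimate P(ball ends in the right-hand rest position) when the
    initial position is N(mu, sigma^2)."""
    rng = random.Random(seed)
    results = [simulate(rng.gauss(mu, sigma)) for _ in range(n)]
    return results.count(1.0) / n

p_right = monte_carlo_outcome(mu=0.1, sigma=0.5)
print(round(p_right, 2))  # roughly P(X >= 0) for X ~ N(0.1, 0.5)
```

The point of the sketch: every query needs thousands of full simulation runs, whereas a surrogate model answers the same distributional question in a single evaluation.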

Published in:

40th Anniversary of the IEEE Conference on Robotics and Automation (ICRA@40), Rotterdam, Netherlands, September 23-26, 2024

Files:

     Extended Abstract
     Poster
     Video


Enhancing Anatomy Learning

Enhancing Anatomy Learning Through Collaborative VR? An Advanced Investigation

Haya Almaree, Roland Fischer, Rene Weller, Verena Uslar, Dirk Weyhe, Gabriel Zachmann

Common techniques for anatomy education in medicine include lectures and cadaver dissection, as well as the use of replicas. However, recent advances in virtual reality (VR) technology have led to the development of specialized VR tools for teaching, training, and other purposes. The use of VR technology has the potential to greatly enhance the learning experience for students. These tools offer highly interactive and engaging learning environments that allow students to inspect and interact with virtual 3D anatomical structures repeatedly, intuitively, and immersively. Additionally, multi-user VR environments can facilitate collaborative learning, which has the potential to enhance the learning experience even further. However, the effectiveness of collaborative learning in VR has not been adequately explored. Therefore, we conducted two user studies, each with n = 33 participants, to evaluate the effectiveness of virtual collaboration in the context of anatomy learning, and compared it to individual learning. For our two studies, we developed a multi-user VR anatomy learning application using UE4. Our results demonstrate that our VR Anatomy Atlas offers an engaging and effective learning experience for anatomy, both individually and collaboratively. However, we did not find any significant advantages of collaborative learning in terms of learning effectiveness or motivation, despite the multi-user group spending more time in the learning environment. In fact, motivation tended to be slightly lower. Although the usability was rather high for the single-user condition, it tended to be lower for the multi-user group in one of the two studies, which may have had a slightly negative effect. However, in the second study, the usability scores were similarly high for both groups. The absence of advantages for collaborative learning may be due to the more complex environment and higher cognitive load. In consequence, more research into collaborative VR learning is needed to determine the relevant factors promoting collaborative learning in VR and the settings in which individual or collaborative learning in VR is more effective, respectively.

Published in:

Computers & Graphics, 2024
Journal article

Files:

     Paper


Immersive Medical VR Training Simulators with Haptic Feedback

Maximilian Kaluschke

Virtual reality and haptic feedback technologies are revolutionizing medical training, especially in orthopedic and dental surgery. These technologies create virtual simulators that offer a risk-free environment for skill development, addressing ethical concerns of traditional patient-based training. The challenge is to make simulators immersive, realistic, and effective in skill transfer to real-world scenarios. This dissertation presents a modular VR-based, haptic-enabled physics simulation system designed to meet these challenges. It features continuous, realistic 6 degrees-of-freedom force feedback with material removal capabilities, enhancing interaction with virtual anatomical structures and tools. Novel algorithms for collision detection, force rendering, and volumetric representation improve the realism and performance of VR haptic simulators. These algorithms were implemented in a versatile library compatible with various game engines, haptic devices, and virtual tools. Two advanced medical training simulators demonstrate this library: one for total hip arthroplasty and another for dental procedures like root canal treatment and caries removal. Enhanced with features like automated VR registration, sound synthesis, VR zoom, and eye tracking, these simulators significantly impact learning and skill transfer to real-life procedures. Expert evaluations and studies with dental students show substantial improvements in real-world skills after using the simulators. The research highlights the importance of hand-tool alignment and stereopsis in learning outcomes and provides new insights into dental training behaviors and the use of indirect vision. This work advances VR and haptic technology in medical training, offering tools that improve training efficiency and effectiveness, ultimately enhancing patient care and treatment outcomes.

Published in:

Staats- und Universitätsbibliothek Bremen, July 2024.

Files:

     Dissertation
     Slides
     Talk (YouTube)


Geometric Computing for Simulation-Based Robot Planning

Toni Tan

Simulation-based robot planning is a popular approach in robotics that involves using computer simulations to plan and optimize robot motions by envisioning the outcome of generated plans before their execution in the real world. This approach offers several benefits, including the ability to evaluate multiple motion plans, reduce trial-and-error in physical experimentation, and enhance safety by identifying potential collisions and other hazards before executing a motion. Although this approach can significantly benefit robotic manipulation tasks, such simulations are still computationally expensive and may require more computing power than the robotic agents can provide. In addition, uncertainties arising from, e.g., perception or simulation models must be taken into account. Current approaches often require running simulations multiple times with varying parameters to account for these uncertainties, making real-time action planning and execution difficult. This thesis presents accelerated geometric computations, i.e., collision detection (CD) methods, for such simulations, specifically an algorithm based on bounding volume hierarchies (BVHs) and SIMD instruction sets. The main idea is to increase the branching factor of the BVH according to the available SIMD width and test the BV nodes for intersection simultaneously, in parallel. In addition, this thesis presents compression strategies for BVH-based CD, implemented on two existing CD algorithms, namely Doptree and Boxtree. The idea is to remove redundant information from the BVHs and to compress the 32-bit floating-point numbers used to represent them. This greatly increases the number of simultaneous simulations that robotic agents can run in parallel, benefiting remote robots the most, as their computing power is often limited. Furthermore, this thesis presents the idea of benchmarking as an online service. In the literature, the results of proposed algorithms are often difficult to replicate due to missing hardware/software and differing computing configurations. Combining the online service with a virtual machine that safely executes user-uploaded algorithms makes it possible to run benchmarks safely as a service. Not only are the results reproducible, but they are also comparable, as they are obtained on the same hardware/software configuration. Finally, this thesis investigates an approach to address uncertainties by incorporating them into the simulations. The main concept is to integrate uncertainty as a probability distribution into the CD algorithms. In this sense, the CD algorithms will not only report collisions but also the probability that a collision occurs. The outcome is then not a single final state of the simulation but rather a probability map reflecting a continuous distribution of final states.
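The SIMD-widening idea from the abstract can be sketched as follows (illustrative names; pure Python loops stand in for the SIMD lanes): a BVH node with branching factor K stores its children's AABBs in structure-of-arrays layout, so that in the real implementation one SIMD comparison per axis tests all K child boxes against a query box at once.

```python
# Hedged sketch of a K-wide BVH node test (names invented, Python loops
# stand in for SIMD compares). SoA layout: lo[axis][i] / hi[axis][i] are
# the bounds of child i on that axis, so each inner loop over i maps to
# a single SIMD instruction in native code.

K = 4  # branching factor, chosen to match the SIMD width (e.g. SSE: 4 floats)

class WideNode:
    def __init__(self, lo, hi, children):
        self.lo, self.hi, self.children = lo, hi, children

def overlap_mask(node, qlo, qhi):
    """Return a K-bit mask of children whose AABB overlaps the query
    box [qlo, qhi]."""
    mask = (1 << K) - 1
    for axis in range(3):
        axis_mask = 0
        for i in range(K):  # <- one SIMD compare per axis in real code
            if node.lo[axis][i] <= qhi[axis] and qlo[axis] <= node.hi[axis][i]:
                axis_mask |= 1 << i
        mask &= axis_mask
    return mask

# Four unit boxes spaced along the x-axis; the query overlaps the first two.
lo = [[0, 2, 4, 6], [0, 0, 0, 0], [0, 0, 0, 0]]
hi = [[1, 3, 5, 7], [1, 1, 1, 1], [1, 1, 1, 1]]
node = WideNode(lo, hi, children=[None] * K)
print(bin(overlap_mask(node, (0.5, 0, 0), (2.5, 1, 1))))  # 0b11
```

The design point: a wider branching factor yields shallower trees and fewer node fetches, and the SoA layout keeps all K comparisons in one vector register.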

Files:

     Dissertation


Point cloud streaming using GMMs

Temporal Hierarchical Gaussian Mixture Models for Real-Time Point Cloud Streaming

Roland Fischer, Tobias Gels, Rene Weller, Gabriel Zachmann

Point clouds play an important role in robotics, autonomous driving, and telepresence applications with typical tasks such as SLAM and scene/avatar reconstruction. However, noisy sensor data, huge data loads, and inhomogeneous densities make efficient processing and accurate representation challenging, especially for real-time and streaming-based applications. We present a novel approach for compact point cloud representation and real-time streaming using a temporal hierarchical GMM-based generative model. Our level-based construction scheme allows us to dynamically adjust the maximum LOD and progressively transmit and render more detailed levels. We minimize the construction cost by exploiting the temporal coherence between consecutive frames. Combined with our highly parallelized and optimized CUDA implementation, we achieve real-time speeds with high-fidelity reconstructions. Our results show that we achieve significantly higher compression factors than previous work with similar accuracy, and that the temporal approach saves 20-36% construction time in our test scene.
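The core representation idea can be illustrated with a deliberately simplified sketch (one Gaussian per grid cell instead of a hierarchical EM fit, 1D instead of 3D; all names invented): the dense points are replaced by a handful of mixture components, and "rendering" samples the mixture.

```python
# Toy 1D stand-in for a GMM-based point cloud representation (NOT the
# paper's hierarchical, CUDA-based construction): fit one Gaussian per
# grid cell, then reconstruct points by sampling the mixture.
import random
import statistics

def build_gmm(points, cell=1.0):
    """Group points into grid cells; each cell becomes one mixture
    component (weight, mean, stdev)."""
    cells = {}
    for x in points:
        cells.setdefault(int(x // cell), []).append(x)
    n = len(points)
    return [(len(p) / n, statistics.fmean(p), statistics.pstdev(p) or 1e-6)
            for p in cells.values()]

def sample_gmm(gmm, k, seed=0):
    """Draw k points from the mixture: pick components by weight, then
    sample each component's Gaussian."""
    rng = random.Random(seed)
    weights = [w for w, _, _ in gmm]
    comps = rng.choices(gmm, weights=weights, k=k)
    return [rng.gauss(mu, sd) for _, mu, sd in comps]

points = [0.1, 0.2, 0.3, 5.0, 5.1]   # two clusters of "points"
gmm = build_gmm(points)              # 2 components instead of 5 points
recon = sample_gmm(gmm, k=5)
print(len(gmm), len(recon))  # 2 5
```

The compression effect shown here (components instead of raw points) is what makes the representation attractive for streaming; the paper's hierarchy additionally gives progressive levels of detail.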

Published in:

SIGGRAPH Posters, Denver, CO, USA, July 28 - August 01, 2024

Files:

     Extended Abstract (preprint)
     Poster
     Movie


Rendering extraterrestrial atmospheres

Physically Based Real-Time Rendering of Atmospheres using Mie Theory

Simon Schneegans, Tim Meyran, Ingo Ginkel, Gabriel Zachmann, Andreas Gerndt

Most real-time rendering models for atmospheric effects have been designed and optimized for Earth's atmosphere. Some authors have proposed approaches for rendering other atmospheres, but these methods still use approximations that are only valid on Earth. For instance, the iconic blue glow of Martian sunsets cannot be represented properly, as the complex interference effects of light scattered by dust particles cannot be captured by these approximations. In this paper, we present an approach for generalizing an existing model to make it capable of rendering extraterrestrial atmospheres. This is done by replacing the approximations with a physical model based on Mie theory. We use the particle-size distribution, the particle-density distribution, as well as the wavelength-dependent refractive index of atmospheric particles as input. To demonstrate the feasibility of this idea, we extend the model by Bruneton et al. [BN08] and implement it in CosmoScout VR, an open-source visualization of our Solar System. In a first step, we use Mie theory to precompute the scattering behaviour of a particle mixture. Then, multi-scattering is simulated, and finally the precomputation results are used for real-time rendering. We demonstrate that this not only improves the visualization of the Martian atmosphere, but also creates more realistic results for our own atmosphere.
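For context on the approximations being replaced: Earth-centric models typically describe aerosol scattering with analytic phase functions such as Henyey-Greenstein (or Cornette-Shanks), whereas Mie theory yields a tabulated, wavelength- and particle-size-dependent phase function that can capture effects like the blue Martian sunset. A minimal sketch of the analytic side:

```python
# The Henyey-Greenstein phase function, a common analytic approximation
# for aerosol scattering in Earth-atmosphere renderers. Mie theory
# replaces this single-parameter curve with precomputed tables.
import math

def henyey_greenstein(cos_theta, g):
    """Analytic phase function; g in (-1, 1) controls the strength of
    forward scattering (g = 0 is isotropic). Normalized over the sphere."""
    denom = (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5
    return (1.0 - g * g) / (4.0 * math.pi * denom)

# Strongly forward-scattering dust (g = 0.8): the forward peak dominates
# backscattering by orders of magnitude.
forward = henyey_greenstein(1.0, 0.8)
backward = henyey_greenstein(-1.0, 0.8)
print(forward > 100 * backward)  # True
```

A single shape parameter g cannot encode the wavelength-dependent interference structure of real dust, which is precisely the limitation the paper's Mie-based precomputation removes.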

Published in:

Computer Graphics Forum (Eurographics), 2024
Journal article

Files:

     Paper
     Movie
     Slides


A man is using the VR simulator. He holds two haptics devices, one in each hand, and is viewing the virtual scene through an HMD. The virtual scene shows him inspecting the inner anatomy of a tooth he just worked on, by looking at a mirror reflection.

Reflecting on Excellence: VR Simulation for Learning Indirect Vision in Complex Bi-Manual Tasks

Maximilian Kaluschke, Rene Weller, Myat Su Yin, Benedikt W. Hosp, Farin Kulapichitr, Peter Haddawy, Siriwan Suebnukarn, Gabriel Zachmann

Indirect vision through a mirror, while bi-manually manipulating both the mirror and another tool is a relatively common way to perform operations in various types of surgery. However, learning such psychomotor skills requires extensive training; they are difficult to teach; and they can be quite costly, for instance, for dentistry schools. In order to study the effectiveness of VR simulators for learning these kinds of skills, we developed a simulator for training dental surgery procedures, which supports tracking of eye gaze and tool trajectories (mirror and drill), as well as automated outcome scoring. We carried out a pre-/post-test study in which 30 fifth-year dental students received six training sessions in the access opening stage of the root canal procedure using the simulator. In addition, six experts performed three trials using the simulator. The outcomes of drilling performed on realistic plastic teeth showed a significant learning effect due to the training sessions. Also, students with larger improvements in the simulator tended to improve more in the real-world tests. Analysis of the tracking data revealed novel relationships between several metrics w.r.t. eye gaze and mirror use, and performance and learning effectiveness: high rates of correct mirror placement during active drilling and high continuity of fixation on the tooth are associated with increased skills and increased learning effectiveness. Larger time allocation for tooth inspections using the mirror, i.e., indirect vision, and frequency of inspection are associated with increased learning effectiveness. Our findings suggest that eye tracking can provide valuable insights into student learning gains of bi-manual psychomotor skills, particularly in indirect vision environments.

Published in:

2024 IEEE Conference Virtual Reality and 3D User Interfaces (VR), Orlando, Florida, USA, March 16 - 21, 2024.

Files:

     Paper
     Movie
     Slides

Links:

     Project Page


VR Research at Fraunhofer IGD, Darmstadt, Germany

Wolfgang Felger, Martin Göbel, Dirk Reiners, Gabriel Zachmann

In this article, we present and describe some of the developments in Virtual Reality (VR) at the Fraunhofer Institute for Computer Graphics in Darmstadt, Germany, from 1991 until 2000, more precisely in its department for visualization and simulation (A4), which was later renamed the department for visualization and virtual reality.

Published in:

IEEE VR conference workshops, 2024.

Files:

     Technical Report
     Video
     Slides


Effects of Markers in Training Datasets on the Accuracy of 6D Pose Estimation

Janis Rosskamp, Rene Weller, and Gabriel Zachmann

Collecting training data for pose estimation methods on images is a time-consuming task and usually involves some kind of manual labeling of the 6D pose of objects. This time could be reduced considerably by using marker-based tracking, which would allow for automatic labeling of training images. However, images containing markers may reduce the accuracy of pose estimation due to a bias introduced by the markers. In this paper, we analyze the influence of markers in training images on pose estimation accuracy. We investigate the accuracy of estimated poses for three different cases: i) training on images with markers, ii) removing markers by inpainting, and iii) augmenting the dataset with randomly generated markers to reduce spatial learning of marker features. Our results demonstrate that utilizing marker-based techniques is an effective strategy for collecting large amounts of ground truth data for pose prediction. Moreover, our findings suggest that the usage of inpainting techniques does not reduce prediction accuracy. Additionally, we investigate the effect of inaccurate labeling in training data on prediction accuracy. We show that the precise ground truth data obtained through marker tracking proves to be superior to markerless datasets when labeling errors exist in the 6D ground truth.
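Case (iii) above, augmenting the dataset with randomly generated markers, can be sketched as follows (toy nested-list images and an invented helper name; real pipelines operate on image tensors): pasting fake markers at random positions prevents the network from associating marker features with a fixed spatial relationship to the object.

```python
# Illustrative marker augmentation (function name and toy image format
# invented for this sketch): stamp a square "marker" at a random
# position so the pose network cannot exploit real markers' placement.
import random

def add_random_marker(img, size=2, value=255, rng=None):
    """Overwrite a size x size patch of the grayscale image (list of
    rows) with a constant marker value at a random in-bounds position."""
    rng = rng or random.Random()
    h, w = len(img), len(img[0])
    y = rng.randrange(h - size + 1)
    x = rng.randrange(w - size + 1)
    for dy in range(size):
        for dx in range(size):
            img[y + dy][x + dx] = value
    return img

img = [[0] * 8 for _ in range(8)]
add_random_marker(img, size=2, rng=random.Random(7))
print(sum(v == 255 for row in img for v in row))  # 4
```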

Published in:

Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024, 4457-4466.

Files:

     Paper
     Poster
     Recording

Links:

     Project Page
     Code


A Clinical User Study Investigating the Benefits of Adaptive Volumetric Illumination Sampling

Valentin Kraft, Christian Schumann, Daniela Salzmann, Dirk Weyhe, Gabriel Zachmann, and Andrea Schenk

Accurate and fast understanding of the patient's anatomy is crucial in surgical decision making and particularly important in visceral surgery. Sophisticated visualization techniques such as 3D volume rendering can aid the surgeon and potentially lead to a benefit for the patient. Recently, we proposed a novel volume rendering technique called Adaptive Volumetric Illumination Sampling (AVIS) that can generate realistic lighting in real time, even for high-resolution images and volumes, without introducing additional image noise. In order to evaluate this new technique, we conducted a randomized, three-period crossover study comparing AVIS to conventional Direct Volume Rendering (DVR) and Path Tracing (PT). CT datasets from 12 patients were evaluated by 10 visceral surgeons who were either senior physicians or experienced specialists. The time needed for answering clinically relevant questions as well as the correctness of the answers were analyzed for each visualization technique. In addition, the perceived workload during these tasks was assessed for each technique. The results of the study indicate that AVIS has an advantage in terms of both time efficiency and most aspects of the perceived workload, while the average correctness of the given answers was very similar for all three methods. In contrast, Path Tracing seems to show particularly high values for mental demand and frustration. We plan to repeat a similar study with a larger participant group to consolidate the results.

Published in:

IEEE Transactions on Visualization and Computer Graphics, 2024, 1-8.

Files:

     Paper


Optimizing the Illumination of a Surgical Site in New Autonomous Module-based Surgical Lighting Systems

Andre Mühlenbrock, René Weller, Gabriel Zachmann

Good illumination of the surgical site is crucial for the success of a surgery—yet current, typical surgical lighting systems have significant shortcomings, e.g. with regard to shadowing and ease of handling. To address these shortcomings, new lighting systems for operating rooms have recently been developed, consisting of a variety of swiveling light modules that are mounted on the ceiling and controlled automatically. For such a new type of lighting system, we present a new optimization pipeline that maintains the brightness at the surgical site as constant as possible over time and minimizes shadows by using depth sensors. Furthermore, by performing simulations on point cloud recordings of nine real abdominal surgeries, we demonstrate that our optimization pipeline is capable of effectively preventing shadows cast by bodies and heads of the OR personnel.
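A toy version of the control problem described above (geometry, numbers, and names invented; the actual pipeline additionally optimizes module orientations using depth-sensor data): given which ceiling modules currently have a free line of sight to the surgical site, rescale the intensities of the unoccluded modules so that the illuminance at the site stays constant over time.

```python
# Hypothetical intensity-control step for a module-based OR light
# (all values illustrative): keep the target illuminance at the
# surgical site constant as modules get occluded by OR personnel.

TARGET_LUX = 1000.0

def control_intensities(contrib_per_module, occluded):
    """contrib_per_module[i]: illuminance module i adds at full power.
    occluded[i]: True if a person blocks module i this frame.
    Returns per-module power levels in [0, 1]."""
    free_total = sum(c for c, occ in zip(contrib_per_module, occluded) if not occ)
    if free_total <= 0.0:
        return [0.0] * len(contrib_per_module)
    # Cap at full power; if too many modules are blocked, the target
    # brightness cannot be reached this frame.
    scale = min(1.0, TARGET_LUX / free_total)
    return [0.0 if occ else scale for occ in occluded]

contrib = [300.0] * 5   # 5 modules, each contributing 300 lx at full power
powers = control_intensities(contrib, occluded=[False, True, False, False, False])
site_lux = sum(c * p for c, p in zip(contrib, powers))
print(round(site_lux))  # 1000
```

Boosting the remaining modules when one is shadowed is the simplest form of the "constant brightness" objective; the paper's optimization additionally chooses which modules to swivel toward the site.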

Published in:

Medical Imaging and Computer-Aided Diagnosis (MICAD 2022), edited by Ruidan Su, Yudong Zhang, Han Liu, and Alejandro F Frangi, 293–303. Singapore: Springer Nature Singapore, 2023.

Files:

     Paper
     Slides


Two VR users learning anatomy together.

Collaborative VR Anatomy Atlas - Investigating Multi-User Anatomy Learning

Haya Almaree, Roland Fischer, Rene Weller, Verena Uslar, Dirk Weyhe, Gabriel Zachmann

In medical education, anatomy is typically taught through lectures, cadaver dissection, and the use of replicas. Advances in VR technology have facilitated the development of specialized VR tools for teaching, training, and other tasks. They can provide highly interactive and engaging learning environments where students can immersively and repeatedly inspect and interact with virtual 3D anatomical structures. Moreover, multi-user VR environments can be employed for collaborative learning, which may enhance the learning experience. Concrete applications are still rare, though, and the effect of collaborative learning in VR has not been adequately explored yet. Therefore, we conducted a user study with n = 33 participants to evaluate the effectiveness of virtual collaboration, using anatomy learning as an example (and compared it to individual learning). For our study, we developed a UE4-based multi-user VR anatomy learning application. Our results show that our VR Anatomy Atlas provides an engaging learning experience and is very effective for anatomy learning, individually as well as collaboratively. However, interestingly, we could not find significant advantages for collaborative learning regarding learning effectiveness or motivation, even though the multi-user group spent more time in the learning environment. Although rather high for the single-user condition, the usability tended to be lower for the multi-user group. This may be due to the more complex environment and a higher cognitive load. Thus, more research in collaborative VR for anatomy education is needed to investigate if and how it can be employed more effectively.

Published in:

Virtual Reality and Mixed Reality (EuroXR 2023), Rotterdam, The Netherlands, November 29 - December 1, 2023 (Best Paper Award).

Files:

     Paper
     Talk


Depth images with holes/missing data.

Inpainting of Depth Images using Deep Neural Networks for Real-Time Applications

Roland Fischer, Janis Roßkamp, Thomas Hudcovic, Anton Schlegel, Gabriel Zachmann

Depth sensors enjoy increased popularity throughout many application domains, such as robotics (SLAM) and telepresence. However, independent of the technology, the depth images inevitably suffer from defects such as holes (invalid areas) and noise. In recent years, deep learning-based color image inpainting algorithms have become very powerful. Therefore, with this work, we propose to adopt existing deep learning models to reconstruct missing areas in depth images, with the possibility of real-time applications in mind. After empirical tests with various models, we chose two promising ones to build upon: a U-Net architecture with partial convolution layers that conditions the output solely on valid pixels, and a GAN architecture that takes advantage of a patch-based discriminator. For comparison, we took a standard U-Net and LaMa. All models were trained on the publicly available NYUV2 dataset, which we augmented with synthetically generated noise/holes. Our quantitative and qualitative evaluations with two public datasets and our own show that LaMa most often produced the best results; however, it is also significantly slower than the others and the only one that is not real-time capable. The GAN and partial convolution-based models also produced reasonably good results. Which one was superior varied from case to case, but, generally, the former performed better with small holes and the latter with bigger ones. The standard U-Net model that we used as a baseline was the worst and most blurry.

Published in:

International Symposium on Visual Computing (ISVC) 2023, Lake Tahoe, NV, USA, October 16 - 18, 2023.

Files:

     Paper
     Talk


Various visualizations for the teleport locomotion metaphor.

How Observers Perceive Teleport Visualizations in Virtual Environments

Roland Fischer, Marc Jochens, Rene Weller, Gabriel Zachmann

Multi-user VR applications have great potential to foster remote collaboration and improve or replace classical training and education. An important aspect of such applications is how participants move through the virtual environments. One of the most popular VR locomotion methods is the standard teleportation metaphor, as it is quick, easy to use and implement, and safe regarding cybersickness. However, it can be confusing to the other, observing, participants in a multi-user session and, therefore, reduce their presence. The reason for this is the discontinuity of the process, and, therefore, the lack of motion cues. As of yet, the question of how this teleport metaphor could be suitably visualized for observers has not received very much attention. Therefore, we implemented several continuous and discontinuous 3D visualizations for the teleport metaphor and conducted a user study for evaluation. Specifically, we investigated them regarding confusion, spatial awareness, and spatial and social presence. Regarding presence, we did find significant advantages for one of the visualizations. Moreover, some visualizations significantly reduced confusion. Furthermore, multiple continuous visualizations ranked significantly higher regarding spatial awareness than the discontinuous ones. This finding is also backed up by the users' tracking data we collected during the experiments. Lastly, the classic teleport metaphor was perceived as less clear and rather unpopular compared with our visualizations.

Published in:

ACM Symposium on Spatial User Interaction (SUI) 2023, Sydney, Australia, October 13 - 15, 2023.

Files:

     Paper
     Talk


Selection techniques in VR

Novel Algorithms and Methods for Immersive Telepresence and Collaborative VR

Roland Fischer

This thesis is concerned with collaborative VR and tackles many of its challenges. A core contribution is the design, development, and evaluation of a low-latency, real-time point cloud streaming and rendering pipeline for VR-based telepresence that enables high-quality, live-captured 3D scenes and avatars in shared virtual environments. Additionally, we combined a custom direct volume renderer with an Unreal Engine 4-based collaborative VR application for immersive medical CT data visualization/inspection. We also propose novel methods for RGB-D/depth image enhancement and compression and investigate the observer's perception of new 3D visualizations for the potentially confusing teleport locomotion metaphor in multi-user environments. Moreover, we designed and developed multiple methods to efficiently and procedurally generate realistic-looking terrains for VR applications, capable of creating plausible biome distributions as well as natural water bodies. Lastly, we conducted a study and investigated the effects of collaborative anatomy learning in VR. Ultimately, with the ensemble of contributions presented within this thesis, including not only novel algorithms and methods but also studies and comprehensive evaluations, we were able to improve collaborative VR on many fronts and provide critical insights into various research topics.

Files:

     Dissertation


Comparison between three teeth states that could be the outcomes of drilling during root-canal-access opening.

The effect of 3D stereopsis and hand-tool alignment on learning effectiveness and skill transfer of a VR-based simulator for dental training

Maximilian Kaluschke, Myat Su Yin, Peter Haddawy, Siriwan Suebnukarn, Gabriel Zachmann

Recent years have seen the proliferation of VR-based dental simulators using a wide variety of different VR configurations with varying degrees of realism. Important aspects distinguishing VR hardware configurations are 3D stereoscopic rendering and visual alignment of the user's hands with the virtual tools. New dental simulators are often evaluated without analysing the impact of these simulation aspects. In this paper, we seek to determine the impact of 3D stereoscopic rendering and of hand-tool alignment on the teaching effectiveness and skill assessment accuracy of a VR dental simulator. We developed a bimanual simulator using an HMD and two haptic devices that provides an immersive environment with both 3D stereoscopic rendering and hand-tool alignment. We then independently controlled for each of the two aspects of the simulation. We trained four groups of students in root canal access opening using the simulator and measured the virtual and real learning gains. We quantified the real learning gains by pre- and post-testing using realistic plastic teeth and the virtual learning gains by scoring the training outcomes inside the simulator. We developed a scoring metric to automatically score the training outcomes that strongly correlates with experts' scoring of those outcomes. We found that hand-tool alignment has a positive impact on virtual and real learning gains, and improves the accuracy of skill assessment. We found that stereoscopic 3D had a negative impact on virtual and real learning gains; however, it improves the accuracy of skill assessment. This finding is counter-intuitive, and we found eye-tooth distance to be a confounding variable of stereoscopic 3D, as it was significantly lower for the monoscopic 3D condition and negatively correlates with real learning gain. The results of our study provide valuable information for the future design of dental simulators, as well as simulators for other high-precision psycho-motor tasks.

Published in:

PLoS ONE 18(10): e0291389, October 4, 2023

Files:

     Paper

Links:

     Project Page


VaMEx3: Autonomously exploring Mars with a heterogeneous robot swarm

VaMEx3: Autonomously Exploring Mars with a Heterogeneous Robot Swarm

Leon Danter, Joachim Clemens, Andreas Serov, Anne Schattel, Michael Schleiss, Cedric Liman, Mario Gäbel, Andre Mühlenbrock, and Gabriel Zachmann

In the search for past or present extraterrestrial life or a potential habitat for terrestrial life forms, our neighboring planet Mars is the main focus. The research initiative "VaMEx - Valles Marineris Explorer", initiated by the German Space Agency at DLR, has the main objective to explore the Valles Marineris on Mars. This rift valley system is particularly exciting because the past and current environmental conditions within the canyon, topographically about 10 km beneath the global Martian surface, show an atmospheric pressure above the triple point of water, thus physically allowing the presence of liquid water at temperatures above the melting point.
The VaMEx3 project phase within this initiative aims to establish a robust and field-tested concept for a potential future space mission to the Valles Marineris, performed by a heterogeneous autonomous robot swarm consisting of moving, running, and flying systems. These systems, their sensor suites, and their software stack will be enhanced and validated to enable the swarm to autonomously explore regions of interest. This task includes multi-robot multi-sensor SLAM, autonomous task distribution, and robust and fault-tolerant navigation with sensors that enable a redundant pose solution on the Martian surface.

Published in:

ASTRA 2023, Leiden, Netherlands, October 18 - 20, 2023

Files:

     Paper


Photograph of the three tasks that were done physically and via haptic rendering

Perceived Realism of Haptic Rendering Methods for Bimanual High Force Tasks: Original and Replication Study

Mario Lorenz, Andrea Hoffmann, Maximilian Kaluschke, Taha Ziadeh, Nina Pillen, Magdalena Kusserow, Jérôme Perret, Sebastian Knopp, André Dettmann, Philipp Klimant, Gabriel Zachmann, Angelika C. Bullinger

Realistic haptic feedback is key for virtual reality applications in order to transition from solely procedural training to motor-skill training. Currently, haptic feedback is mostly used in low-force medical procedures in dentistry, laparoscopy, arthroscopy, and the like. However, joint replacement procedures at the hip, knee, or shoulder require the simulation of high forces in order to enable motor-skill training. In this work, a prototype of a haptic device capable of delivering double the force of state-of-the-art devices (70 N instead of 35 N) is used to examine the four most common haptic rendering methods (penalty-, impulse-, constraint-, and rigid-body-based haptic rendering) in three bimanual tasks (contact, rotation, uniaxial translation with forces increasing from 30 to 60 N) regarding their capability to provide realistic haptic feedback. In order to provide baseline data, a worst-case scenario of a steel/steel interaction was chosen. The participants needed to compare a real steel/steel interaction with a simulated one. In order to substantiate our results, we replicated the study using the same study protocol and experimental setup at another laboratory. The original study and the replication study deliver almost identical results. We found that certain investigated haptic rendering methods are likely able to deliver a realistic sensation for bone-cartilage/steel contact, but not for steel/steel contact. Whilst no clear best haptic rendering method emerged, penalty-based haptic rendering performed worst. For simulating high-force bimanual tasks, we recommend a mixed implementation approach, using impulse-based haptic rendering for simulating contacts and combining it with constraint- or rigid-body-based haptic rendering for rotational and translational movements.

Published in:

Nature Scientific Reports, volume 13, July 11, 2023

Files:

     Paper

Links:

     Project Page


Versatile Immersive Virtual and Augmented Tangible OR – Using VR, AR and Tangibles to Support Surgical Practice

Anke Verena Reinschluessel, Thomas Muender, Roland Fischer, Valentin Kraft, Verena Nicole Uslar, Dirk Weyhe, Andrea Schenk, Gabriel Zachmann, Tanja Döring, Rainer Malaka

Immersive technologies such as virtual reality (VR) and augmented reality (AR), in combination with advanced image segmentation and visualization, have considerable potential to improve and support a surgeon's work. We demonstrate a solution to help surgeons plan and perform surgeries and educate future medical staff using VR, AR, and tangibles. A VR planning tool improves spatial understanding of an individual's anatomy, a tangible organ model allows for intuitive interaction, and AR gives contactless access to medical images in the operating room. Additionally, we present improvements regarding point cloud representations to provide detailed visual information both to and about a remote expert. Overall, we present an exemplary setup showing how recent interaction techniques and modalities can benefit an area in which they can positively change the lives of patients.

Published in:

CHI EA '23, Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, 1 - 5. Hamburg, Germany, April 23 - 28, 2023.

Files:

     Paper
     Movie
     Movie


Patent: Illuminating device for illuminating a surgical wound

Andre Mühlenbrock, Peter Kohrs, Adria Fox, Gabriel Zachmann

The invention is a system for the illumination of surgical wounds, which consists of a plurality of small swiveling light units and a control system that controls the light units such that the surgical wound is optimally illuminated. The patent essentially emerged from the SmartOT project.

Published in:

Deutsches Patent- und Markenamt (German Patent and Trade Mark Office), Germany, April 13, 2023.


Aeroconf 2023 Adaptive Packing Gravity

Adaptive Polydisperse Sphere Packings for High Accuracy Computations of the Gravitational Field

Hermann Meißenhelter, René Weller, Matthias Noeker, Tom Andert, Gabriel Zachmann

We present a new method to model the mass of celestial bodies based on adaptive polydisperse sphere packings. Using polydisperse spheres in the mascon model has been shown to deliver a very good approximation of the mass distribution of celestial bodies while allowing fast computations of the gravitational field. However, small voids between the spheres reduce the accuracy, especially close to the surface. Hence, the idea of our adaptive sphere packing is to place more spheres close to the surface instead of filling negligibly small gaps deeper inside the body. Although this reduces the packing density, we achieve greater accuracy close to the surface. For the adaptive sphere packing, we propose a mass assignment algorithm that uniformly samples the volume of the body. Additionally, we present a method to further optimize the mass distribution of the spheres based on least squares optimization. The sphere packing and the gravitational acceleration remain computable entirely on the GPU (Graphics Processing Unit).
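The mass-assignment idea ("uniformly samples the volume of the body") can be pictured as Monte Carlo sampling: draw uniform samples inside the body and give each sphere a mass proportional to the samples it absorbs. This is our own minimal illustration, not the paper's algorithm; the shapes, sample counts, and containment rule are assumptions.

```python
# Hedged sketch of mass assignment by uniform volume sampling: each sphere's
# mass is proportional to the number of uniform samples that fall inside it.
# Shapes, counts, and the first-match containment rule are illustrative only.
import random

def assign_masses(spheres, total_mass, sample_volume, n_samples=100_000, seed=1):
    """spheres: list of ((cx, cy, cz), radius); sample_volume: rng -> (x, y, z)."""
    rng = random.Random(seed)
    counts = [0] * len(spheres)
    hits = 0
    for _ in range(n_samples):
        x, y, z = sample_volume(rng)
        for i, ((cx, cy, cz), radius) in enumerate(spheres):
            if (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= radius * radius:
                counts[i] += 1
                hits += 1
                break                      # each sample is assigned at most once
    return [total_mass * c / hits for c in counts]

# Two non-overlapping spheres inside a unit cube; radii 2:1, so volumes ~8:1.
spheres = [((0.25, 0.25, 0.25), 0.2), ((0.7, 0.7, 0.7), 0.1)]
masses = assign_masses(spheres, 1.0,
                       lambda rng: (rng.random(), rng.random(), rng.random()))
```

With enough samples, the recovered mass ratio approaches the volume ratio of the spheres.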

Published in:

IEEE Aerospace Conference (AeroConf) 2023, Big Sky, Montana, USA, March 4 - 11, 2023.

Files:

     Paper


A virtual reality simulation of a novel way to illuminate the surgical field – A feasibility study on the use of automated lighting systems in the operating theatre

Timur Cetin, Andre Mühlenbrock, Gabriel Zachmann, Verena Weber, Dirk Weyhe and Verena Uslar

Introduction: Surgical lighting systems have to be re-adjusted manually during surgery by the medical personnel. While some authors suggest that interaction with a surgical lighting system in the operating room might be a distractor, others support the idea that manual interaction with the surgical lighting system is a hygiene problem as pathogens might be present on the handle. In any case, it seems desirable to develop a novel approach to surgical lighting that minimizes the need for manual interaction during a surgical procedure.

Methods: We investigated the effect of manual interaction with a classical surgical lighting system and simulated a proposed novel design of a surgical lighting system in a virtual reality environment, with respect to performance accuracy as well as cognitive load (measured by electroencephalographic recordings).

Results: We found that manual interaction with the surgical lights has no effect on the quality of performance, yet comes at the price of higher mental effort, possibly leading to faster fatigue of the medical personnel in the long run.

Discussion: Our proposed novel surgical lighting system negates the need for manual interaction and leads to a performance quality comparable to the classical lighting system, yet with less mental load for the surgical personnel.

Published in:

Frontiers in Surgery, March 02, 2023.

Files:

     Paper


NaivPhys4RP - Towards Human-like Robot Perception "Physical Reasoning based on Embodied Probabilistic Simulation"

Franklin Kenghagho K., Michael Neumann, Patrick Mania, Toni Tan, Feroz Siddiky A., René Weller, Gabriel Zachmann and Michael Beetz

Perception in complex environments, especially dynamic and human-centered ones, goes beyond classical tasks such as classification (the what- and where-object questions answered from sensor data) and poses at least three challenges that are missed by most, and not properly addressed by some, current robot perception systems. Note that sensors are extrinsically (e.g., clutter, embodiment-induced noise, delayed processing) and intrinsically (e.g., depth of transparent objects) very limited, resulting in missing or high-entropy data that can only be compressed with difficulty during learning, and explained with difficulty or processed intensively during interpretation. (a) Therefore, the perception system should rather reason about the causes that produce such effects (how/why-happen questions). (b) It should reason about the consequences (effects) of agent-object and object-object interactions in order to anticipate (what-happen questions) the (e.g., undesired) world state and then enable successful action on time. (c) Finally, it should explain its outputs for safety (meta why/how-happen questions). This paper introduces a novel white-box and causal generative model of robot perception (NaivPhys4RP) that emulates human perception by capturing the recently established Big Five aspects (FPCIU) of human commonsense, which invisibly (dark) drive our observational data and allow us to overcome the above problems. However, NaivPhys4RP particularly focuses on the aspect of physics, which ultimately and constructively determines the world state.

Published in:

2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids), Ginowan, Okinawa, Japan, November 28 - 30, 2022.

Files:

     Paper


The OPA3L System and Testconcept for Urban Autonomous Driving

Andreas Folkers, Constantin Wellhausen, Matthias Rick, Xibo Li, Lennart Evers, Verena Schwarting, Joachim Clemens, Philipp Dittmann, Mahmood Shubbak, Tom Bustert, Gabriel Zachmann, Kerstin Schill, Christof Büskens

The development of autonomous vehicles for urban driving is widely considered a challenging task, as it requires intensive interdisciplinary expertise. This article presents an overview of the research project OPA3L (Optimally Assisted, Highly Automated, Autonomous and Cooperative Vehicle Navigation and Localization). It highlights the hardware and software architecture as well as the developed methods. These comprise algorithms for localization, perception, high- and low-level decision making, and path planning, as well as model predictive control. The research project contributes a cross-platform holistic approach applicable to a wide range of real-world scenarios. The developed framework is implemented and tested on a real research vehicle, miniature vehicles, and a simulation system.

Published in:

2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC), Macau, China, October 8 - 12, 2022.

Files:

     Paper


Simulation of the detectability of different surface properties with bistatic radar observations

Jonas Krumme, Thomas P. Andert, René Weller, Graciela González Peytaví, Gabriel Zachmann, Dennis Scholl, Adrian Schulz

Bistatic radar (BSR) is a well-established technology to probe surfaces of planets and also small bodies like asteroids and comets. In the bistatic radar configuration, the radio subsystem onboard the spacecraft serves as the transmitter and the ground station on Earth as the receiver of the radio signal. A part of the reflected signal is scattered towards the receiver, which records both the right-hand circular polarized (RHCP) and left-hand circular polarized (LHCP) echo components. From the measurement of those, geophysical properties like surface roughness and dielectric constant can be derived. Such observations aim at extracting the radar reflectivity coefficient of the surface, also called the radar cross-section, which depends on the physical properties of the surface. We developed a bistatic radar simulation tool that utilizes hardware acceleration and massively-parallel programming paradigms available on modern GPUs. It is based on the Shooting and Bouncing Rays (SBR) method (sometimes also called Ray-Launching Geometrical Optics), which we have adapted for the GPU and implemented using hardware-accelerated raytracing. This provides high-performance estimation of the scattering of electromagnetic waves from surfaces, which is highly desirable since surfaces can become very large relative to the surface features that need to be resolved by the simulation method. Our method can, for example, deal with the asteroids 1 Ceres and 4 Vesta, which have mean diameters of around 974 km and 529 km, respectively, and thus very large surfaces relative to the sizes of their surface features. But even smaller objects can require a large number of rays to sample the surface densely enough for accurate results. In this paper, we present our new, very efficient simulation method and its application to several examples with various shapes and surface properties, and examine the limits of the detectability of water ice on small bodies.
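At the core of the SBR method is the specular bounce of each launched ray about the local surface normal. The following is a minimal sketch of just that vector operation; the GPU ray tracing, polarization handling, and radar-cross-section estimation of the actual tool are not reproduced here.

```python
# Minimal sketch of the specular bounce used by Shooting and Bouncing Rays
# (SBR): an incident ray direction d reflects about the unit surface normal n
# as d - 2(d.n)n. Pure vector math; everything else in the pipeline is omitted.

def reflect(d, n):
    """Reflect incident direction d about unit surface normal n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

# A ray travelling straight down hits a horizontal surface (normal +z)
# and bounces straight back up.
bounced = reflect((0.0, 0.0, -1.0), (0.0, 0.0, 1.0))
```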

Published in:

International Astronautical Congress (IAC) 2022, Paris, France, September 18 - 22, 2022.

Files:

     Abstract
     Paper
     Slides


A Framework for Safe Execution of User-Uploaded Algorithms

Toni Tan, René Weller, Gabriel Zachmann

In recent years, there has been a trend toward open benchmarks aiming at reproducible and comparable benchmarking results. The best reproducibility can be achieved when performing the benchmarks in the same hardware and software environment, which can be offered as a web service. One challenge of such a web service is the integration of new algorithms into the existing benchmarking tool due to security concerns. In this paper, we present a framework that allows the safe execution of user-uploaded algorithms in such a benchmark-as-a-service web tool. To guarantee security as well as reproducibility and comparability of the service, we extend an existing system architecture to allow the execution of user-uploaded algorithms in a virtualization environment. Our results show that although benchmarks in the virtualization environment run around 3.7% to 4.7% slower than in the native environment, the results are consistent across all scenarios with different algorithms, object shapes, and object complexities. Moreover, we have automated the entire process, from turning a virtual machine on and off and starting benchmarks with the intended parameters to communicating with the backend server when a benchmark has finished. Our implementation is based on Microsoft Hyper-V, which allows us to benchmark algorithms that use Single Instruction, Multiple Data (SIMD) instruction sets as well as access the Graphics Processing Unit (GPU).

Published in:

ACM Web3D 2022: The 27th International Conference on 3D Web Technology, Evry-Courcouronnes, France, November 2 - 4, 2022.

Files:

     Paper
     Slides
     Supplemental Material
     Demo

Links:

     Project Page


Comparing Methods for Gravitational Computation: Studying the Effect of Inhomogeneities

Matthias Noeker, Hermann Meißenhelter, Tom Andert, René Weller, Özgür Karatekin, Benjamin Haser

Current and future small body missions, such as the ESA Hera mission or the JAXA MMX mission, demand good knowledge of the gravitational field of the targeted celestial bodies. This is motivated not only by the need for precise spacecraft operations around the body, but is likewise important for landing manoeuvres, surface (rover) operations, and science, including surface gravimetry. To model the gravitation of irregularly-shaped, non-spherical bodies, different methods exist. Previous work performed a comparison between three different methods, considering a homogeneous density distribution inside the body. In this work, the comparison is continued by introducing a first inhomogeneity inside the body. For this, the same three methods, namely the polyhedral method and two different mascon methods, are compared.

Published in:

Europlanet Science Congress 2022, Granada, Spain, September 18-23, 2022, EPSC2022-562, doi.org/10.5194/epsc2022-562.

Files:

     Abstract
     Slides


Procedural Generation of Landscapes with Water Bodies Using Artificial Drainage Basins

Roland Fischer, Judith Boeckers, Gabriel Zachmann

We propose a method for procedural terrain generation that focuses on creating huge landscapes with realistic-looking river networks and lakes. A natural-looking integration into the landscape is achieved by an approach inverse to the usual way: after authoring the initial landmass, we first generate rivers and lakes and then create the actual terrain by "growing" it, starting at the water bodies. The river networks are formed based on computed artificial drainage basins. Our pipeline approach not only enables quick iterations and direct visualization of intermediate results but also balances user control and automation. The first stages provide great control over the layout of the landscape, while the later stages take care of the details with a high degree of automation. Our evaluation shows that vast landscapes can be created in under half a minute. Also, with our system, it is quite easy to create landscapes closely resembling real-world examples, highlighting its capability to create realistic-looking landscapes. Moreover, our implementation is easy to extend and can be integrated smoothly into existing workflows.
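The "growing" of terrain outward from the water bodies can be pictured as a distance field seeded at water cells, from which elevation is then derived. The toy sketch below is our own illustration of that idea only; the grid, names, and elevation rule are assumptions, and the paper's drainage-basin and river-network stages are not reproduced.

```python
# Toy sketch of "grow the terrain, starting at the water bodies": a
# breadth-first distance field is seeded at water cells (elevation 0), and
# elevation increases with distance from water. Illustration only.
from collections import deque

def grow_terrain(water, width, height):
    """Return a grid whose values grow with BFS distance from water cells."""
    dist = [[None] * width for _ in range(height)]
    queue = deque()
    for (x, y) in water:                   # water cells sit at elevation 0
        dist[y][x] = 0
        queue.append((x, y))
    while queue:
        x, y = queue.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < width and 0 <= ny < height and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                queue.append((nx, ny))
    return dist

terrain = grow_terrain({(0, 0)}, 4, 3)     # single lake cell in the corner
```

In a real pipeline the raw distance would be fed through noise and shaping functions rather than used directly as elevation.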

Published in:

Computer Graphics International (CGI), Geneva, Switzerland, September 12-16, 2022. LNCS vol. 13443.

Files:

     Paper
     Technical Report
     Talk


Future Directions for XR 2021-2030: International Delphi Consensus Study

Jolanda G. Tromp, Gabriel Zachmann, Jerome Perret, Beatrice Palacco

Chapter Abstract:
XR has been put forward as one of the “Essential Eight” key enabling technologies of the 21st century. Together, they are expected to drive the digital transformation that has started only recently in many areas of business, daily life, and leisure. Importantly, XR has the potential to play a major role in supporting the achievement of several if not all 17 Sustainable Development Goals set forth by the UN. The path towards realizing the full potential of XR technologies needs to be clarified in order to make informed decisions about research and development agendas, investment, funding, and regulations. In order to provide insights into the best approach to further develop XR towards its full potential, the EuroXR Association initiated a study using the well-established Delphi consensus method, drawing on the expertise of independent senior XR experts to formulate future directions for XR R&D. The results are presented in terms of a roadmap for the future of XR, identifying the prerequisites to clear the path for this, and clarifying the roles and responsibilities for the XR research community, the XR business community, and the government and regulation bodies. The main findings of our XR roadmap are summarized into a number of specific areas for the stakeholders to act upon, in order to push the cutting edge of XR and be part of the early-adopters who have this key enabling technology at their disposal throughout industry, education and society.

About the book:
This book offers a comprehensive overview of the technological aspects of Extended Realities (XR) and discusses the main challenges and future directions in the field.

Published in:

Chapter 34 in: Roadmapping Extended Reality: Fundamentals and Applications, John Wiley & Sons, August 5, 2022.


Dynparity: Dynamic disparity adjustment to avoid stereo window violations on stationary stereoscopic displays

Christoph Schröder-Dering, René Weller, Gabriel Zachmann

We propose a novel method to avoid stereo window violations at screen borders. These occur for objects in front of the zero parallax plane, which appear in front of the (physical) screen, and that are clipped for one eye while still being visible for the other eye. This contradicts other stereo cues, particularly disparity, potentially resulting in eye strain and simulator sickness. In interactive and dynamic virtual environments, where the user controls the camera, e.g., via head tracking, it is impossible to avoid stereo window violations completely. We propose Dynparity, a novel rendering method to eliminate the conflict between clipping and negative disparity, by introducing a non-uniform stereoscopic projection. For each vertex in front of the zero parallax plane, we compute the stereoscopic projection such that the parallax approaches zero towards the edge of the screen. Our approach works entirely on the GPU in real-time and can be easily included in modern game engines. We conducted a user study comparing our method to the standard stereo projection on a large-screen stereo wall with head tracking. Our results show significantly reduced simulator sickness when using Dynparity compared to the standard stereo rendering.
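The core idea, as we read it, can be sketched as a per-vertex disparity falloff toward the screen border. The falloff function, thresholds, and names below are our own illustration, not the paper's shader code:

```python
# Hedged sketch of the Dynparity idea: for points in front of the zero-parallax
# plane (negative disparity), fade the disparity to zero as the point nears the
# screen edge, so clipping no longer contradicts the disparity cue.
# The linear falloff and edge_start threshold are illustrative assumptions.

def adjusted_disparity(disparity, x_ndc, edge_start=0.8):
    """Fade negative (in-front) disparity to zero between |x| = edge_start and 1."""
    ax = abs(x_ndc)                        # normalized device coordinate in [-1, 1]
    if disparity >= 0.0 or ax <= edge_start:
        return disparity                   # behind the screen, or away from the edge
    t = min(1.0, (ax - edge_start) / (1.0 - edge_start))
    return disparity * (1.0 - t)           # reaches exactly 0 at the screen border
```

In the real method this adjustment is applied per vertex in the GPU projection; here it is a plain function for clarity.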

Published in:

Computer Animation and Virtual Worlds, Wiley & Sons Ltd, August 16, 2022.

Files:

     Paper
     Slides


Example of redirected walking

Redirected walking in virtual reality with auditory step feedback

René Weller, Benjamin Brennecke, Gabriel Zachmann

We present a novel approach to redirected walking (RDW) that uses step feedback sounds to redirect users in virtual reality. The main idea is to achieve path manipulation by changing step sounds to deviate users, who still believe that they are walking in a straight line. Our approach can be combined with traditional visual approaches to RDW based on eye-blinking. Moreover, we conducted a user study in a large area (10 × 20 m) using a within-subject design. We achieved a translational redirection of 1.7 m on average with pure audio feedback. Our results also show that visual methods can amplify the deviation of our new auditory approach by 80 cm on average at a distance of 20 m.

Published in:

The Visual Computer, Springer, July 1, 2022. Selected paper from the 39th International Conference on Computer Graphics, Geneva, Switzerland, September 12-16, 2022. Best Paper Award

Files:

     Paper


Effects of Immersion and Navigation Agency in Virtual Environments on Emotions and Behavioral Intentions

René Weller, Joscha Cepok, Roman Arzaroli, Kevin Marnholz, Cornelia S. Große, Hauke Reuter, Gabriel Zachmann

We present a study investigating the question whether and how people’s intention to change their environmental behavior depends on the degrees of immersion and freedom of navigation when they experience a deteriorating virtual coral reef. We built the virtual reef on top of a biologically sound model of the ecology of coral reefs, which allowed us to simulate the realistic decay of reefs under adverse environmental factors. During their experience, participants witnessed those changes while they also explored the virtual environment. In a two-factorial experiment (N = 224), we investigated the effects of different degrees of immersion and different levels of navigation freedom on emotions, the feeling of presence, and participants’ intention to change their environmental behavior. The results of our analyses show that immersion and navigation have a significant effect on the participants’ emotions of sadness and the feeling of helplessness. In addition, we found a significant effect, mediated by the participants’ emotions, on the intention to change their behavior. The most striking result is, perhaps, that the highest level of immersion combined with the highest level of navigation did not lead to the highest intentions to change behavior. Overall, our results show that it is possible to raise awareness of environmental threats using virtual reality; it also seems possible to change people’s behavior regarding these threats. However, it seems that the VR experience must be carefully designed to achieve these effects: a simple combination of all affordances offered by VR technology might potentially decrease the desired effects.

Published in:

Frontiers in Virtual Reality, Section Virtual Reality and Human Behavior, September 05, 2022.

Files:

     Paper (submitted version, for the final version click the link above)
     Video


Evaluation of Point Cloud Streaming and Rendering for VR-based Telepresence in the OR

Roland Fischer, Andre Mühlenbrock, Farin Kulapichitr, Verena Nicole Uslar, Dirk Weyhe, Gabriel Zachmann

Immersive and high-quality VR-based telepresence systems could be of great benefit in the medical field and the operating room (OR) specifically, as they allow distant experts to interact with each other and to assist local doctors as if they were physically present. Despite recent advances in VR technology, and more telepresence systems making use of it, most of the current solutions in use in health care (if any) are just video-based and do not provide the feeling of presence or spatial awareness, which are highly important for tasks such as remote consultation, -supervision, and -teaching. Reasons still holding back VR telepresence systems are high demands regarding bandwidth and computational power, subpar visualization quality, and complicated setups. We propose an easy-to-set-up telepresence system that enables remote experts to meet in a multi-user virtual operating room, view live-streamed and 3D-visualized operations, interact with each other, and collaboratively explore medical data. Our system is based on Azure Kinect RGB-D cameras, a point cloud streaming pipeline, and fast point cloud rendering methods integrated into a state-of-the-art 3D game engine. Remote experts are visualized via personalized real-time 3D point cloud avatars. For this, we have developed a high-speed/low-latency multi-camera point cloud streaming pipeline including efficient filtering and compression. Furthermore, we have developed splatting-based and mesh-based point cloud rendering solutions and integrated them into the Unreal Engine 4. We conducted two user studies with doctors and medical students to evaluate our proposed system, compare the rendering solutions, and highlight our system's capabilities.
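As a generic illustration of the pipeline's "efficient filtering" stage (not the authors' code), voxel-grid downsampling is a common first filter for dense RGB-D point clouds: it keeps at most one point per voxel, cutting bandwidth before compression.

```python
# Generic sketch of voxel-grid downsampling, a standard point cloud filter:
# keep only the first point that falls into each voxel of a regular grid.
# The voxel size and first-point policy are illustrative choices.

def voxel_downsample(points, voxel=0.05):
    """Keep the first point falling into each voxel of edge length `voxel` (meters)."""
    seen, kept = set(), []
    for (x, y, z) in points:
        key = (int(x // voxel), int(y // voxel), int(z // voxel))
        if key not in seen:
            seen.add(key)
            kept.append((x, y, z))
    return kept
```

Production pipelines typically average the points per voxel instead of keeping the first, and follow this with entropy or octree compression.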

Published in:

Virtual Reality and Mixed Reality (EuroXR 2022), LNCS vol 13484, Stuttgart, Germany, September 14 - September 16, 2022. Best Paper Award

Files:

     Paper (preprint)
     Book PDF (pp. 89-110)
     Slides


The Impact of 3D Stereopsis and Hand-Tool Alignment on Effectiveness of a VR-based Simulator for Dental Training

Maximilian Kaluschke, Myat Su Yin, Peter Haddawy, Siriwan Suebnukarn, Gabriel Zachmann

Recent years have seen the proliferation of VR-based dental simulators using a wide variety of different VR configurations. Differences in the technologies and setups used result in important differences in the degree of realism. These include 3D stereoscopic rendering and visual alignment of the user's hands with the virtual tools. While each new dental simulator is typically associated with some form of evaluation study, only a few comparative studies have been carried out to determine the benefits of various simulation aspects. In this paper, we seek to determine the impact of 3D stereoscopic rendering and of hand-tool alignment on the teaching effectiveness of a VR dental simulator. We developed a bimanual simulator using an HMD and two haptic devices that provides an immersive environment with both 3D stereoscopic rendering and hand-tool alignment. We then systematically and independently controlled for each of the two aspects of the simulation. We trained four groups of students in root canal access opening using the simulator and measured the learning gains by pre- and post-testing using realistic plastic teeth. We found that hand-tool alignment has a positive impact on learning gains, while stereoscopic 3D does not. The effect of stereoscopic 3D is surprising and demands further research in settings with small target objects. The results of our study provide valuable information for the future design of dental simulators, as well as simulators for other high-precision psycho-motor tasks.

Published in:

10th IEEE International Conference on Healthcare Informatics (ICHI 2022), June 11 - June 14, 2022.

Files:

     Paper
     Slides

Links:

     Project Page


Fast, Accurate and Robust Registration of Multiple Depth Sensors without need for RGB and IR Images

Andre Mühlenbrock, Roland Fischer, Christoph Schröder-Dering, René Weller and Gabriel Zachmann

Registration is an essential prerequisite for many applications when a multiple-camera setup is used. Due to the noise in depth images, registration procedures for depth sensors frequently rely on the detection of a target object in color or infrared images. However, this prohibits use cases where color and infrared images are not available or where there is no mapping between the pixels of different image types, e.g., due to separate sensors or different projections. We present a novel registration method that requires only the point cloud resulting from the depth image of each camera. For feature detection, we propose a combination of a custom-designed 3D registration target and an algorithm that is able to reliably detect that target and its features in noisy point clouds. Our evaluation indicates that our lattice detection is very robust (with a precision of more than 0.99) and very fast (on average about 20 ms on a single core). We have also compared our registration method with known methods: our method achieves an accuracy of 1.6 mm at a distance of 2 m using only the noisy depth image, while the most accurate existing method achieves an accuracy of 0.7 mm but requires both the infrared and depth images.
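A standard building block for registering cameras from matched 3D target features is rigid alignment of corresponding point sets via the Kabsch algorithm. The sketch below is a textbook version under the assumption of known correspondences; the paper's actual contribution, robust lattice detection in noisy point clouds, is not reproduced here.

```python
# Textbook Kabsch sketch: find the rigid transform (R, t) that best maps a set
# of source points onto matched destination points. Generic illustration, not
# the authors' registration implementation.
import numpy as np

def kabsch(src, dst):
    """Rigid transform (R, t) minimizing sum ||R @ src_i + t - dst_i||^2."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)               # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Destination points are the source rotated 90 degrees about z, then shifted.
src = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
dst = [(1, 2, 3), (1, 3, 3), (0, 2, 3), (1, 2, 4)]
R, t = kabsch(src, dst)                         # recovers rotation and shift
```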

Published in:

The Visual Computer, Springer, May 17, 2022. Selected paper from the 2021 International Conference on Cyberworlds (CW), Caen, France, September 28 - 30, 2021.

Files:

     Paper


Optimizing the Arrangement of Fixed Light Modules in New Autonomous Surgical Lighting Systems

Andre Mühlenbrock, Timur Cetin, Dirk Weyhe and Gabriel Zachmann

Several novel autonomous lighting systems for illuminating the surgical site (e.g., SmartOT, Optimus ISE) consist of a large number of swiveling light modules placed on the ceiling instead of two or three movable surgical lamps. For such a new type of lighting system for operating rooms, the initial placement of the light modules is of great importance, since the light modules cannot be moved during surgery. Therefore, we present a novel approach for optimizing the arrangement of light modules in such a system; it exploits the special characteristics of an operating room and of the surgeries that take place there by taking occluding geometry (e.g., operators and medical staff) into account via point cloud recordings.

Published in:

SPIE Medical Imaging, San Diego, CA, USA, February 20 - 24, 2022.

Files:

     Paper
     Poster
     Teaser Video


Numerical approach to synthesizing realistic asteroid surfaces from morphological parameters

Xizhi Li, Jean-Baptiste Vincent, Rene Weller and Gabriel Zachmann

The complex shape of asteroids and comets is a critical parameter in many scientific and operational studies. From the global irregular shape down to the local surface details, these topographies reflect the formation and evolutionary processes that remould the celestial body. Furthermore, these processes control how the surface will continue to evolve: from mass wasting on high slopes to spin-up due to anisotropic re-emission of thermal radiation. In addition, for space missions, the irregular coarse shape and complex landscape are a hazard to navigation, which must be accounted for in the planning phase.

Published in:

Astronomy & Astrophysics, A&A, 25 March 2022.

Files:

     Paper


Aeroconf 2022 Gravity Modelling

Efficient and Accurate Methods for Computing the Gravitational Field of Irregular-Shaped Bodies

Hermann Meißenhelter, Matthias Noeker, Tom Andert, René Weller, Benjamin Haser, Özgür Karatekin, Birgit Ritter, Max Hofacker, Larissa Balestrero Machado, Gabriel Zachmann

In this study, we present and compare three different methods to model the gravitational field of small bodies and apply them to three test cases that we describe in detail. Our first method is based on the polyhedral method, which provides a closed-form analytical solution of the gravity field for (assumed) homogeneous density. The idea behind the second method is to represent the small body's mass by a polydisperse sphere packing. This allows an easy and efficient computation through parallelization on the GPU (Graphics Processing Unit). The third method models the internal mass distribution of the body as a set of solid elements in spherical coordinates: the body is divided into longitudes and latitudes, and the radius is divided into subsections. The size of the volume elements is chosen to ensure high accuracy in representing the shape of the body. All three methods are also applicable on the surface of the body, making them interesting in the context of surface gravimetry. We evaluate the three methods using two ideal shapes (sphere and cube) and one real shape model (the Martian moon Phobos). We compare the gravitational acceleration at their surfaces and measure the relative error of the models with respect to the analytical solutions. We also examine the computational cost of each method. Our results indicate that each method is suitable for modeling asteroids with different characteristics. We provide reliable gravitation data for purposes such as spacecraft orbit analysis and evaluation of the small body's surface domain.
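
The sphere-packing method lends itself to a very compact implementation, since outside a homogeneous sphere its gravitational field is identical to that of a point mass at its center. The following is an illustrative sketch (our own, not the authors' GPU code), assuming homogeneous density and evaluation points lying outside every sphere:

```python
import numpy as np

def gravity_from_sphere_packing(centers, radii, query,
                                density=2000.0, G=6.674e-11):
    """Acceleration at `query` due to a polydisperse sphere packing.

    Outside a homogeneous sphere, its field equals that of a point mass
    at its center, so the total field is a sum over all spheres:
        a(p) = G * sum_i m_i * (c_i - p) / |c_i - p|^3
    centers: (N, 3) sphere centers [m]; radii: (N,) radii [m].
    """
    masses = density * (4.0 / 3.0) * np.pi * radii**3   # per-sphere mass [kg]
    diff = centers - query                              # vectors from p to c_i
    dist = np.linalg.norm(diff, axis=1)
    return G * np.sum((masses / dist**3)[:, None] * diff, axis=0)

# Sanity check: a single sphere must reproduce the point-mass field.
centers = np.array([[0.0, 0.0, 0.0]])
radii = np.array([100.0])
a = gravity_from_sphere_packing(centers, radii, np.array([0.0, 0.0, 500.0]))
```

The per-sphere terms are independent, so the sum parallelizes trivially, which is the efficiency argument behind the GPU implementation mentioned in the abstract.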

Published in:

IEEE Aerospace Conference (AeroConf) 2022, Big Sky, Montana, USA, March 5 - 12, 2022. Best Paper in Track Award.

Files:

     Paper
     Video


Virtual Reality and Augmented Reality

Virtual Reality and Augmented Reality - Proceedings of the 17th EuroVR Conference 2020

Patrick Bourdot, Victoria Interrante, Regis Kopper, Anne-Hélène Olivier, Hideo Saito, Gabriel Zachmann

17th EuroVR International Conference, EuroVR 2020, Valencia, Spain, November 25–27, 2020, Proceedings

This book constitutes the refereed proceedings of the 17th International Conference on Virtual Reality and Augmented Reality, EuroVR 2020, held in Valencia, Spain, in November 2020.
The 12 full papers were carefully reviewed and selected from 35 submissions. The papers are organized in topical sections named: Perception, Cognition and Behaviour; Training, Teaching and Learning; Tracking and Rendering; and Scientific Posters.

Published in:

17th EuroVR International Conference, EuroVR 2020, Valencia, Spain, November 25–27, 2020, Proceedings.


Lattice Based Registration Cover Video

Fast and Robust Registration of Multiple Depth-Sensors and Virtual Worlds

Andre Mühlenbrock, Roland Fischer, René Weller, Gabriel Zachmann

The precise registration between multiple depth sensors is a crucial prerequisite for many applications. Previous techniques frequently rely on RGB or IR images and checkerboard targets for feature detection. However, this prohibits usage in use cases where neither is available or where IR and depth images have different projections. Therefore, we present a novel registration approach that uses depth data exclusively for feature detection, making it more universally applicable while still achieving robust and precise results. We propose a combination of a custom 3D registration target, a lattice with regularly spaced holes, and a feature detection algorithm that is able to reliably extract the lattice and its features from noisy depth images. In addition, we have integrated the registration procedure into a publicly available Unreal Engine 4 plugin that allows multiple point clouds captured by several depth cameras to be registered in a virtual environment. Despite the rather noisy depth images, we quickly obtain a robust registration that yields an average deviation of 3.8 mm to 4.4 mm in our test scenarios.
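
The final step of any target-based registration, estimating a rigid transform from matched 3D feature points, is commonly solved in closed form with the Kabsch (SVD) method. The sketch below shows only this standard step, assuming the lattice features have already been detected and matched; it is not the paper's implementation:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) with dst ≈ src @ R.T + t (Kabsch).

    src, dst: (N, 3) corresponding 3D feature points, e.g. lattice-hole
    centers seen by two different depth sensors.
    """
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Recover a known rotation about z plus a translation from 8 points.
rng = np.random.default_rng(0)
src = rng.normal(size=(8, 3))
c, s = np.cos(0.3), np.sin(0.3)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
dst = src @ Rz.T + np.array([0.1, -0.2, 0.5])
R, t = rigid_align(src, dst)
```

In practice the interesting part is upstream, robustly extracting the matched features from noisy depth images, which is exactly what the paper contributes.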

Published in:

2021 International Conference on Cyberworlds (CW), Caen, France, September 28 - 30, 2021.

Files:

     Paper
     Slides
     Technical Video (2:59)

Links:

     Lattice Registration C++ Library: A small C++ library which implements our registration procedure to be used with arbitrary depth sensors.
     QT Demo Application (Windows x64 Build): Simple demo application to test our registration procedure with multiple Microsoft Azure Kinects [Screenshot]
     UE4 Project + Plugin (Source Code): An Unreal Engine 4.26 project with plugin implementing our registration procedure including the registration of depth sensors with the virtual world.


IEEE Aeroconf 2021

VR-Interactions for Planning Planetary Swarm Exploration Missions in VaMEx-VTB

Rene Weller, Christoph Schröder, Jörn Teuber, Philipp Dittmann, Gabriel Zachmann

Virtual testbeds (VTBs) are essential for researchers and engineers during the planning, decision-making, and testing phases of space missions because they are much faster and more cost-effective than physical models or tests. Moreover, they make it possible to simulate target conditions that are not available on Earth for real-world tests, and mission parameters or target conditions can be changed or adjusted on the fly. However, such highly specialized and flexible tools are often only available as desktop tools with limited visual feedback and a lack of usability. On the other hand, VR is predestined for easy, natural interaction even in complex decision-making and training scenarios, while simultaneously offering high-fidelity visual feedback and immersion.
We present a novel tool that combines the flexibility of virtual testbeds with an easy-to-use VR interface. To do so, we have extended a VTB for planetary exploration missions, the VaMEx-VTB (Valles Marineris Exploration-VTB), to support sophisticated virtual reality (VR) interactions. The VTB is based on the modern game engine "Unreal Engine 4", which qualifies it for state-of-the-art rendering. Additionally, our system supports a wide variety of different hardware devices, including head-mounted displays (HMDs) and large projection powerwalls with different tracking and input methods. Our VR-VTB enables the users to investigate simulated sensor output and other mission parameters like lines-of-sight or ground formations for a swarm of different spacecraft, including autonomous ground vehicles, flying drones, a humanoid robot, and supporting orbiters. Moreover, the users can directly interact with the virtual environment to distract the swarm units or change environment parameters, like adding boulders or invoking sand storms. Until now, we have used our system for three different scenarios: a swarm-based exploration of the Valles Marineris on planet Mars, a test scenario of the same swarm units on the Canary Islands, and the autonomous building of a moon base. An expert review shows the general usability of our VR-VTB.

Published in:

IEEE Aerospace Conference (AeroConf) 2021, held online.

Files:

     Paper


Selection techniques in VR

New Methodologies for Automotive PLM by Integrating 3D CAD and Virtual Reality into Function-oriented Development

Moritz Cohrs

The approach of function-oriented development extends traditional component-oriented development by focusing on the interdisciplinary development of vehicle functions as mechatronic and cyber-physical systems, and it is an important measure for mastering the increasing product and development complexity. So far, the promising potential of 3D virtual reality methods has not been evaluated in the particular context of automotive function-oriented development. This research focuses on the question of whether and how 3D virtual reality methods can improve relevant workflows and generally streamline function-oriented development. Applying them to different automotive use cases shows that these novel 3D methods provide a significant benefit for function-oriented development and comparable systems engineering approaches.

Published in:

Original Version: Staats- und Universitätsbibliothek Bremen, July 2021.

Files:

     Dissertation


Selection techniques in VR

LenSelect: Object Selection in Virtual Environments by Dynamic Object Scaling

René Weller, Waldemar Wegele, Christoph Schröder, Gabriel Zachmann

We present a novel selection technique for VR called LenSelect. The main idea is to decrease the Index of Difficulty (ID) according to Fitts' Law by dynamically increasing the size of the potentially selectable objects. This facilitates the selection process especially in cases of small, distant, or partly occluded objects, but also for moving targets. In order to evaluate our method, we have defined a set of test scenarios that covers a broad range of use cases, in contrast to the often-used simpler scenes. Our test scenarios include practically relevant scenarios with realistic objects but also synthetic scenes, all of which are available for download. We have evaluated our method in a user study and compared the results to two state-of-the-art selection techniques and the standard ray-based selection. Our results show that LenSelect performs similarly to the fastest method, ray-based selection, while significantly reducing the error rate by 44%.
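
The reference to Fitts' Law can be made concrete with the standard Shannon formulation of the Index of Difficulty. The snippet below is an illustrative sketch (not code or parameters from the paper) showing how enlarging a target lowers the ID:

```python
import math

def fitts_id(distance, width):
    """Index of Difficulty (bits), Shannon formulation: log2(D/W + 1)."""
    return math.log2(distance / width + 1.0)

# Dynamically scaling a target up (the LenSelect idea) lowers the ID and
# thus, per Fitts' Law, the expected selection time.
id_small  = fitts_id(distance=2.0, width=0.05)  # small, distant object
id_scaled = fitts_id(distance=2.0, width=0.25)  # same object scaled up 5x
```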

Published in:

Frontiers in Virtual Reality, Technologies for VR, 21 June 2021.

Files:

     Paper

Links:

     Project page


ETRA 21

Visualizing Prediction Correctness of Eye Tracking Classifiers

Martin H.U. Prinzler, Christoph Schröder, Sahar Mahdie Klim Al Zaidawi, Gabriel Zachmann, Sebastian Maneth

Eye tracking data is often used to train machine learning algorithms for classification tasks. The main indicator of performance for such classifiers is typically their prediction accuracy. However, this number does not reveal any information about the specific intrinsic workings of the classifier. In this paper we introduce novel visualization methods which are able to provide such information. We introduce the Prediction Correctness Value (PCV). It is the difference between the calculated probability for the correct class and the maximum calculated probability for any other class. Based on the PCV we present two visualizations: (1) coloring segments of eye tracking trajectories according to their PCV, thus indicating how beneficial certain parts are towards correct classification, and (2) overlaying similar information for all participants to produce a heatmap that indicates at which places fixations are particularly beneficial towards correct classification. Using these new visualizations we compare the performance of two classifiers (RF and RBFN).
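
The PCV as defined above is straightforward to compute from a classifier's per-class probabilities. A minimal sketch (variable names are ours, not from the paper):

```python
import numpy as np

def prediction_correctness_value(probs, true_class):
    """PCV per the abstract: the calculated probability for the correct
    class minus the maximum calculated probability for any other class.
    Positive means the sample is classified correctly; range is [-1, 1].
    """
    others = np.delete(probs, true_class)       # probabilities of all other classes
    return probs[true_class] - others.max()

# Correctly classified sample: 0.6 for the true class vs. 0.3 at best elsewhere.
pcv = prediction_correctness_value(np.array([0.1, 0.6, 0.3]), true_class=1)
```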

Published in:

ETRA '21 Short Papers: ACM Symposium on Eye Tracking Research and Applications, Virtual Event, May 24 - 27, 2021.

Files:

     Paper
     Poster
     Teaser Video


Lattice Based Registration Cover

Fast and Robust Registration and Calibration of Depth-Only Sensors

Andre Mühlenbrock, Roland Fischer, René Weller, Gabriel Zachmann

The precise registration between multiple depth cameras is a crucial prerequisite for many applications. Previous techniques frequently rely on RGB or IR images and checkerboard targets for feature detection, partly because depth data is inherently noisy. This prohibits usage in use cases where neither is available. We present a novel registration approach that solely uses depth data for feature detection, making it more universally applicable while still achieving robust and precise results. We propose a combination of a custom 3D registration target, a lattice with regularly spaced holes, and a feature detection algorithm that is able to reliably extract the lattice and its features from noisy depth images.

Published in:

Eurographics 2021 (EG 2021) Posters, Vienna, Austria, May 03 - 07, 2021. Public Voting Award for Best Poster.

Files:

     Paper
     Poster
     Teaser Video


Immersive Anatomy Atlas: Learning Factual Medical Knowledge in a Virtual Reality Environment
Top Cited Article

Immersive Anatomy Atlas: Learning Factual Medical Knowledge in a Virtual Reality Environment

Kilian Gloy, Paul Weyhe, Eric Nerenz, Maximilian Kaluschke, Verena Uslar, Gabriel Zachmann, Dirk Weyhe

In order to improve learning efficiency and memory retention in medical teaching, furthering active learning seems to be an effective alternative to classical teaching. One option to make active exploration of the subject matter possible is the use of virtual reality (VR) technology. The authors developed an immersive anatomy atlas which allows users to explore human anatomical structures interactively through virtual dissection. Thirty-two senior-class students from two German high schools with no prior formal medical training were separated into two groups and tasked with answering an anatomical questionnaire. One group used traditional anatomical textbooks and the other used the immersive virtual reality atlas. The time needed to answer the questions was measured. Several weeks later, the participants answered a similar questionnaire with different anatomical questions in order to test memory retention. The VR group took significantly less time to answer the questionnaire, and participants from the VR group had significantly better results over both tests. Based on the results of this study, VR learning seems to be more efficient and to have better long-term effects for the study of anatomy. The reason for that could lie in the VR environment’s high immersion, and the possibility to freely and interactively explore a realistic representation of human anatomy. Immersive VR technology offers many possibilities for medical teaching and training, especially as a support for cadaver dissection courses.

Published in:

Anatomical Sciences Education, American Association for Anatomy, April 24, 2021. doi:10.1002/ase.2095.

Files:

     Paper


UnrealHaptics: Plugins for Advanced VR Interactions in Modern Game Engines

UnrealHaptics: Plugins for Advanced VR Interactions in Modern Game Engines

Janis Rosskamp, Hermann Meißenhelter, Rene Weller, Marc O. Rüdel, Johannes Ganser, Gabriel Zachmann

UnrealHaptics is a plugin architecture that enables advanced virtual reality (VR) interactions, such as haptics or grasping, in modern game engines. The core is a combination of a state-of-the-art collision detection library with support for very fast and stable force and torque computations and a general device plugin for communication with different input/output hardware devices, such as haptic devices or Cybergloves. Our modular and lightweight architecture makes it easy for other researchers to adapt our plugins to their requirements. We prove the versatility of our plugin architecture by providing two use cases implemented in the Unreal Engine 4 (UE4). In the first use case, we have tested our plugin with a haptic device in different test scenes. For the second use case, we show a virtual hand grasping an object with precise collision detection and handling of multiple contacts. We have evaluated the performance in our use cases. The results show that our plugin easily meets the requirements of stable force rendering at 1 kHz for haptic rendering even in highly non-convex scenes, and it can handle the complex contact scenarios of virtual grasping.

Published in:

Frontiers in Virtual Reality, Technologies for VR, 16 April 2021.

Files:

     Paper


A Shared Virtual Environment for Dental Surgical Skill Training

A Shared Virtual Environment for Dental Surgical Skill Training

Maximilian Kaluschke, Myat Su Yin, Peter Haddawy, Natchalee Srimaneekarn, Pipop Saikaew, Gabriel Zachmann

Online learning has become an effective approach to reach students who may not be able to travel to university campuses for various reasons. Its use has also dramatically increased during the current COVID-19 pandemic with social distancing and lockdown requirements. But online education has thus far been primarily limited to the teaching of knowledge and cognitive skills; there is as yet almost no use of online education for teaching physical clinical skills. In this paper, we present a shared haptic virtual environment for dental surgical skill training. The system provides the teacher and student with a shared environment containing a virtual dental station with a patient, a dental drill controlled by a haptic device, and a drillable tooth. It also provides automated scoring of procedure outcomes. We discuss a number of optimizations used to provide the high-fidelity simulation and real-time performance needed for training high-precision clinical skills. Since the tactile, in particular kinaesthetic, sense is essential in carrying out many dental procedures, an important question is how to best teach it in a virtual environment. To support exploring this question, our system includes three modes for transmitting haptic sensations from the user performing the procedure to the observing user.

Published in:

IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR) 2021, March 27 - April 2, 2021.

Files:

     Paper
     Slides

Links:

     Project Page


Improved CNN-based Marker Labeling for Optical Hand Tracking

Improved CNN-based Marker Labeling for Optical Hand Tracking

Janis Rosskamp, Rene Weller, Thorsten Kluss, Jaime L. Maldonado C., Gabriel Zachmann

Hand tracking is essential in many applications, ranging from the creation of CGI movies to medical applications and even real-time, natural, physically-based grasping in VR. Optical marker-based tracking is often the method of choice because of its high accuracy, support for large workspaces, good performance, and because no wiring of the user is required. However, tracking algorithms may fail for hand poses in which some of the markers are occluded; these cases require a subsequent reassignment of labels to reappearing markers. Currently, convolutional neural networks (CNNs) show promising results for this re-labeling because they are relatively stable and real-time capable. In this paper, we present several methods to improve the accuracy of label predictions using CNNs. The main idea is to improve the input to the CNNs, which is derived from the output of the optical tracking system. To do so, we propose a method based on principal component analysis, a projection method that is perpendicular to the palm, and a multi-image approach. Our results show that our methods provide better label predictions than current state-of-the-art algorithms, and they can even be extended to other tracking applications.
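
The palm-perpendicular projection idea can be illustrated with a standard PCA step: the direction of least variance of the marker cloud approximates the palm normal, and dropping it yields orientation-normalized 2D input. A minimal sketch, not the authors' implementation:

```python
import numpy as np

def palm_projection(markers):
    """Project 3D hand-marker positions onto the (approximate) palm plane.

    PCA via SVD on the centered marker cloud: the two dominant right
    singular vectors span the palm plane, the least-variance direction
    approximates the palm normal and is discarded.
    """
    centered = markers - markers.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt[:2].T   # 2D coordinates in the palm plane

# Four coplanar markers: the projection is distance-preserving.
markers = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0], [1.0, 1.0, 0.0]])
proj = palm_projection(markers)
```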

Published in:

EuroVR 2020, Valencia, Spain, November 27 - 29, 2020.

Files:

     Paper
     Slides
     Talk (Video), see EuroVR


Volumetric Medical Data Visualization for Collaborative VR Environments

Volumetric Medical Data Visualization for Collaborative VR Environments

Roland Fischer, Kai-Ching Chang, René Weller, Gabriel Zachmann

In clinical practice, medical imaging technologies like computed tomography have become an important and routinely used technique for diagnosis. Advanced 3D visualization techniques for this data, e.g. volume rendering, give doctors a better spatial understanding when reviewing complex anatomy. There already exist sophisticated programs for the visualization of medical imaging data; however, they are usually limited to exactly this task and can hardly be extended with new functionality. For instance, multi-user support, especially when combined with immersive VR interfaces such as tracked HMDs and natural user interfaces, can give doctors easier, more immersive access to the information and support collaborative discussions with remote colleagues. We present an easy-to-use and expandable system for volumetric medical image visualization with support for multi-user VR interactions. The main idea is to combine a state-of-the-art open-source game engine, the Unreal Engine 4, with a new volume renderer. The underlying game engine guarantees extensibility and allows our system to be easily adapted to new hardware and software developments. In our example application, remote users can meet in a shared virtual environment and view, manipulate, and discuss the volume-rendered data in real time. Our new volume renderer for the Unreal Engine achieves real-time performance as well as high-quality visualization.

Published in:

EuroVR 2020, Valencia, Spain, November 27 - 29, 2020.

Files:

     Paper
     Slides


Review of Haptic Rendering Techniques for Hip Surgery Training

Review of Haptic Rendering Techniques for Hip Surgery Training

Taha Ziadeh, Jerome Perret, Maximilian Kaluschke, Sebastian Knopp and Mario Lorenz

In this review paper, we discuss haptic rendering techniques that can be used for hip surgery training. In the context of surgery, the simulation requires high-quality feedback forces, and the interaction with the virtual environment must be synchronized in real time. Numerous studies have been presented since the 1990s to solve the collision detection problem and force feedback computation. In this review, we classify haptic rendering techniques under two categories: methods of direct force-feedback computation, and proxy-based methods. In the first category, the force is calculated and sent directly to the haptic device once the penetration measure is found. In contrast, proxy-based techniques follow the haptic device using a proxy or "god-object", which is constrained to the surface of rigid objects in the virtual environment, and then compute the feedback force based on the behavior of this proxy. Under each category, we present the different techniques and discuss their benefits and disadvantages in light of surgery training.

Published in:

EuroVR 2020 Application, Exhibition & Demo Track, Virtual Event, Finland, November 27 - 29, 2020. Best Application Paper Award.

Files:

     Paper

Links:

     Project Page


Procedural 3D Asteroid Model Synthesis

Procedural 3D Asteroid Model Synthesis - A general approach to automatically generate arbitrary 3D asteroids

Xizhi Li

In this thesis we propose new methods to automatically generate an implicit representation of 3D asteroid models, inspired not only by sphere packing but also by noise models. They enable: (1) a novel invariant shape descriptor, evaluated on the GPU with CUDA, whose statistical histogram represents the highly detailed 3D asteroid model; (2) an automatic method (AstroGen) to approximate a given constraint shape with sphere-packing-based metaballs; (3) an optimization method that uses the distance between different asteroids' histograms as the target function and a particle swarm optimization (PSO) algorithm to optimize the parameters of each asteroid's implicit representation, turning implicit modeling into a machine learning task; and (4) a new procedural noise model to generate surface details on the implicit surface that behave coherently with the underlying surface. Ever since the rise of general-purpose GPUs, the computation speed of computers has increased notably faster than their memory bandwidth. The direct consequence of this trend is that compute-intensive algorithms (especially parallelizable ones) become increasingly attractive, which is the main reason for the recent popularity of procedural methods. We believe that the latest trends in hardware (i.e., GPUs, cloud computing) justify a reconsideration of procedural methods. Our procedural algorithm fits these trends quite well and has great potential in nearly all areas of computer graphics.

Published in:

Original Version: Staats- und Universitätsbibliothek Bremen, 2020.

Files:

     Dissertation


OpenCollBench - Benchmarking of Collision Detection & Proximity Queries as a Web-Service

OpenCollBench - Benchmarking of Collision Detection & Proximity Queries as a Web-Service

Toni Tan, René Weller, Gabriel Zachmann

We present a server-based benchmark that enables a fair analysis of different collision detection & proximity query algorithms. A simple yet interactive web interface allows both expert and non-expert users to easily evaluate different collision detection algorithms’ performance in standardized or optionally user-definable scenarios and identify possible bottlenecks. In contrast to typically used simple charts or histograms to show the results, we additionally propose a heatmap visualization directly on the benchmarked objects that allows the identification of critical regions on a sub-object level. An anonymous login system, in combination with a server-side scheduling algorithm, guarantees security as well as the reproducibility and comparability of the results. This makes our benchmark useful for end-users who want to choose the optimal collision detection method or optimize their objects with respect to collision detection but also for researchers who want to compare their new algorithms with existing solutions.

Published in:

ACM Web3D 2020: The 25th International Conference on 3D Web Technology, Virtual Event, Republic of Korea, November 9 - 13, 2020.

Files:

     Paper
     Talk (Video)
     Talk (Slides)

Links:

     Project Page


Using Large-Scale Augmented Floor Surfaces for Industrial Applications and Evaluation on Perceived Sizes

Using Large-Scale Augmented Floor Surfaces for Industrial Applications and Evaluation on Perceived Sizes

Michael Otto, Eva Lampen, Philipp Agethen, Gabriel Zachmann, Enrico Rukzio

Large high-resolution displays (LHRDs) provide an enabling technology to achieve immersive, isometrically registered virtual environments. It has been shown that LHRDs allow better size judgments, higher collaboration performance, and shorter task completion times. This paper presents novel insights into human size perception using large-scale floor displays, in particular in-depth evaluations of size judgment accuracy, precision, and task completion time. These investigations have been performed in the context of six novel applications in the domain of automotive production planning. In our studies, we used a 54-sqm LED floor and a standard tablet visualizing relatively scaled and true-to-scale 2D content, which users had to estimate using different aids. The study involved 22 participants and three different conditions. Results indicate that true-to-scale floor visualizations reduce the mean absolute percentage error of spatial estimations. In all three conditions, we did not find the typical overestimation or underestimation of size judgments.

Published in:

Personal and Ubiquitous Computing, Springer, published August 2020, received August 2019, doi.org/10.1007/s00779-020-01433-z.

Files:

     Paper


A cadaver-based biomechanical model of acetabulum reaming for surgical virtual reality training simulators

A cadaver-based biomechanical model of acetabulum reaming for surgical virtual reality training simulators

Luigi Pelliccia, Mario Lorenz, Niels Hammer, Christoph-Eckhard Heyde, Maximilian Kaluschke, Philipp Klimant, Sebastian Knopp, Stefan Schleifenbaum, Christian Rotsch, René Weller, Michael Werner, Gabriel Zachmann, Dirk Zajonz

Total hip arthroplasty (THA) is a highly successful surgical procedure, but complications remain, including aseptic loosening, early dislocation, and misalignment. These may partly be related to a lack of training opportunities for novices or those performing THA less frequently. A standardized training setting with realistic haptic feedback for THA does not exist to date. Virtual Reality (VR) may help establish THA training scenarios under standardized settings, morphology, and material properties. This work summarizes the development and acquisition of mechanical properties of hip reaming, resulting in a tissue-based material model of the acetabulum for force-feedback VR hip reaming simulators. Given the forces and torques occurring during reaming, Cubic Hermite Spline interpolation proved more suitable than Cubic Splines for representing the nonlinear force-displacement behavior of the acetabular tissues. Further, Cubic Hermite Splines allowed for rapid force feedback computation below the 1 ms hallmark. The Cubic Hermite Spline material model was implemented using a three-dimensional sphere-packing model. The resulting forces were delivered via a human-machine-interaction-certified KUKA iiwa robotic arm used as a force feedback device. Consequently, this novel approach presents a concept for obtaining mechanical data from high-force surgical interventions as baseline data for material models and biomechanical considerations; this will allow THA surgeons to train with a variety of machining hardness levels of acetabula for haptic VR acetabulum reaming.
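
For context, evaluating one cubic Hermite segment requires only a handful of multiply-adds, which is consistent with the sub-millisecond force-feedback budget mentioned above. A generic sketch of the standard Hermite basis functions (not the paper's tissue material model):

```python
def hermite(p0, p1, m0, m1, t):
    """Evaluate a cubic Hermite segment at t in [0, 1].

    p0, p1: values (e.g. forces) at the segment endpoints;
    m0, m1: tangents at the endpoints. Standard Hermite basis.
    """
    h00 = 2*t**3 - 3*t**2 + 1     # weight of p0
    h10 = t**3 - 2*t**2 + t       # weight of m0
    h01 = -2*t**3 + 3*t**2        # weight of p1
    h11 = t**3 - t**2             # weight of m1
    return h00*p0 + h10*m0 + h01*p1 + h11*m1
```

Unlike natural cubic splines, Hermite segments are local: each segment depends only on its own endpoint values and tangents, so a lookup plus one evaluation suffices per haptic frame.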

Published in:

Nature Scientific Reports.

Files:

     Paper

Links:

     Project Page


AutoBiomes: procedural generation of multi-biome landscapes

AutoBiomes: procedural generation of multi-biome landscapes

Roland Fischer, Philipp Dittmann, René Weller, Gabriel Zachmann

Advances in computer technology and the increasing usage of computer graphics in a broad field of applications lead to rapidly rising demands regarding the size and detail of virtual landscapes. Manually creating huge, realistic-looking terrains and populating them densely with assets is an expensive and laborious task. In consequence, (semi-)automatic procedural terrain generation is a popular method to reduce the amount of manual work. However, such methods are usually highly specialized for certain terrain types, and especially the procedural generation of landscapes composed of different biomes is a scarcely explored topic. We present a novel system, called AutoBiomes, which is capable of efficiently creating vast terrains with plausible biome distributions and therefore different spatial characteristics. The main idea is to combine several synthetic procedural terrain generation techniques with digital elevation models (DEMs) and a simplified climate simulation. Moreover, we include an easy-to-use asset placement component which creates complex multi-object distributions. Our system relies on a pipeline approach with a major focus on usability. Our results show that our system allows the fast creation of realistic-looking terrains.

Published in:

The Visual Computer, Springer, July 24, 2020, doi 10.1007/s00371-020-01920-7; selected paper from the Computer Graphics International (CGI) 2020, Geneva, Switzerland, October 20 - 23, 2020.

Files:

     Paper
     Talk (Slides)
     Talk (Video)


Procedural 3D Asteroid Surface Detail Synthesis

Procedural 3D Asteroid Surface Detail Synthesis

Xizhi Li, René Weller, Gabriel Zachmann

We present a novel noise model to procedurally generate volumetric terrain on implicit surfaces. The main idea is to combine a novel Locally Controlled 3D Spot noise (LCSN) for authoring the macro structures and 3D Gabor noise to add micro details. More specifically, a spatially-defined kernel formulation in combination with an impulse distribution enables the LCSN to generate arbitrary size craters and boulders, while the Gabor noise generates stochastic Gaussian details. The corresponding metaball positions in the underlying implicit surface preserve locality to avoid the globality of traditional procedural noise textures, which yields an essential feature that is often missing in procedural texture based terrain generators. Furthermore, different noise-based primitives are integrated through operators, i.e. blending, replacing, or warping into the complex volumetric terrain. The result is a completely implicit representation and, as such, has the advantage of compactness as well as flexible user control. We applied our method to generating high quality asteroid meshes with fine surface details.

Published in:

Eurographics & Eurovis 2020 (EGEV 2020), Norrköping, Sweden, May 25 - 29, 2020.

Files:

     Paper
     Talk (Video)
     Talk (Slides)


Improved Lossless Depth Image Compression

Improved Lossless Depth Image Compression

Roland Fischer, Philipp Dittmann, Christoph Schröder, Gabriel Zachmann

Since RGB-D sensors became massively popular and are used in a wide range of applications, depth data compression became an important research topic. Live-streaming of depth data requires quick compression and decompression. Accurate preservation of information is crucial in order to prevent geometric distortions. Custom algorithms are needed considering the unique characteristics of depth images. We propose a real-time, lossless algorithm which can achieve significantly higher compression ratios than RVL. The core elements are an adaptive span-wise intra-image prediction, and parallelization. Additionally, we extend the algorithm by inter-frame difference computation and evaluate the performance regarding different conditions. Lastly, the compression ratio can be further increased by a second encoder, circumventing the lower limit of four-bit per valid pixel of the original RVL algorithm.
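The idea of intra-image prediction can be illustrated as follows: within a row, every valid depth pixel is predicted from the previous valid one, so smooth surfaces yield near-zero residuals that a variable-length coder compresses well. This is a simplified per-row sketch under assumed conventions (zero marks an invalid pixel), not the authors' actual algorithm:

```python
import numpy as np

def predict_residuals(row):
    """Span-wise intra-row prediction: every valid (non-zero) depth pixel is
    predicted from the previous valid pixel in the row; invalid pixels pass
    through as zero.  Smooth surfaces yield small residuals."""
    residuals = np.zeros(len(row), dtype=np.int32)
    prev = 0
    for i, d in enumerate(np.asarray(row, dtype=np.int64)):
        if d == 0:                  # invalid pixel: no depth measurement
            continue
        residuals[i] = d - prev     # delta to the previous valid pixel
        prev = d
    return residuals

def zigzag(v):
    """Interleave signed residuals into unsigned ints for variable-length
    encoding: 0, -1, 1, -2, 2, ... maps to 0, 1, 2, 3, 4, ..."""
    return (int(v) << 1) ^ (int(v) >> 63)
```

Feeding the zigzagged residuals to a second entropy coder is one way to get below the four bits per valid pixel that limit the original RVL scheme.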

Published in:

Journal of WSCG, Vol.28, No.1-2, ISSN 1213-6972; selected paper from the WSCG'20, Pilsen, Czech Republic, May 19 - 21, 2020.

Files:

     Paper
     Supplementary Data
     Talk (Video)
     Talk (Slides)

Links:

     Project Page


Robustness of Eye Movement Biometrics Against Varying Stimuli and Varying Trajectory Length

Robustness of Eye Movement Biometrics Against Varying Stimuli and Varying Trajectory Length

Christoph Schröder, Sahar Mahdie Klim Al Zaidawi, Martin H.U. Prinzler, Sebastian Maneth, Gabriel Zachmann

Recent results suggest that biometric identification based on a human's eye movement characteristics can be used for authentication. In this paper, we present three new methods and benchmark them against the state-of-the-art. The best of our new methods improves the state-of-the-art performance by 5.2 percentage points. Furthermore, we investigate some of the factors that affect the robustness of the recognition rate of different classifiers on gaze trajectories, such as the type of stimulus and the tracking trajectory length. We find that the state-of-the-art method only works well when using the same stimulus for testing that was used for training. By contrast, our novel method more than doubles the identification accuracy for these transfer cases. Furthermore, we find that with only 90 seconds of eye tracking data, 86.7% accuracy can be achieved.

Published in:

ACM CHI 2020, Honolulu, Hawaiʻi, April 25 - 30, 2020.

Files:

     Paper
     Slides
     Talk (Video)
     Short Talk (3 minutes video)
     Source code

Links:

     Project Page


Realistic Haptic Feedback for Material Removal in Medical Simulations

Realistic Haptic Feedback for Material Removal in Medical Simulations

Maximilian Kaluschke, Rene Weller, Niels Hammer, Luigi Pelliccia, Mario Lorenz, Gabriel Zachmann

We present a novel haptic rendering method to simulate material removal in medical simulations at haptic rates. The core of our method is a new massively-parallel continuous collision detection algorithm in combination with a stable and flexible 6-DOF collision response scheme that combines penalty- and constraint-based force computation. Moreover, a volumetric representation allows us to derive a realistic local material model from experimental human cadaveric data, as well as support real-time continuous material removal. We have applied our algorithm to a hip replacement simulator and two dentistry-related simulations for root-canal opening and caries removal. The results show realistic continuous forces and torques at haptic rates.

Published in:

2020 IEEE Haptics Symposium (HAPTICS), Washington, D.C., USA, March 28 - 31, 2020.

Files:

     Paper
     Poster
     Slides
     Talk (Video)


Towards Seamless User Experiences in Driving Simulation Studies

Towards Seamless User Experiences in Driving Simulation Studies

Victoria Ivleva, Sergej Holzmann, Joost Venrooij, Gabriel Zachmann

We present the results of a study where the physical transition into the driving simulator was masked by a virtual experience. Our main hypothesis was that participants should experience a higher sense of presence in the simulator when their entry into the physical environment of the driving simulator is masked by a virtual experience that shows a transition from the real starting room to the car, combined with storytelling, but conceals the driving simulator itself. To confirm this hypothesis, we performed a comparative, between-subjects user study, in which two groups were examined while they used a driving simulator: one group experienced a virtual transition while walking to the simulator; the other group only walked to the driving simulator before starting the driving simulation. The user study evaluation showed that participants who experienced the virtual transition tended to feel a higher sense of presence. In addition, we found evidence in the behavior and subjective responses that the virtual transition influenced the participants. However, there was no significant difference between the two groups in terms of their driving behavior. Ultimately, the results of this user study show that virtual transition technology has considerable potential for user studies implemented in driving simulators.

Published in:

Proc. of DSC 2019 EUROPE VR, Driving Simulation Conference & Exhibition, Strasbourg, France, September 4-6, 2019.

Files:

     Paper
     Slides


SIMD Optimized Bounding Volume Hierarchies for Collision Detection

SIMDop: SIMD Optimized Bounding Volume Hierarchies for Collision Detection

Toni Tan, René Weller, Gabriel Zachmann

We present a novel data structure for SIMD optimized simultaneous bounding volume hierarchy (BVH) traversals, as they appear, for instance, in collision detection tasks. In contrast to all previous approaches, we consider both the traversal algorithm and the construction of the BVH. The main idea is to increase the branching factor of the BVH according to the available SIMD registers and parallelize the simultaneous BVH traversal using SIMD operations. This requires a novel BVH construction method because traditional BVHs for collision detection are usually simple binary trees. To do that, we present a new BVH construction method based on a clustering algorithm, Batch Neural Gas, that is able to build efficient n-ary tree structures along with SIMD optimized simultaneous BVH traversal. Our results show that our new data structure outperforms binary trees significantly.
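The core SIMD idea, testing all child-pair combinations of two wide BVH nodes in one batch, can be sketched with vectorized AABB overlap tests. NumPy broadcasting stands in for SIMD lanes here; shapes and names are illustrative, not the paper's data layout:

```python
import numpy as np

def aabbs_overlap(lo_a, hi_a, lo_b, hi_b):
    """Pairwise AABB overlap between all children of two BVH nodes at once.
    lo_a/hi_a have shape (kA, 3), lo_b/hi_b shape (kB, 3); the result is a
    (kA, kB) boolean mask.  Broadcasting plays the role of SIMD lanes."""
    lo = np.maximum(lo_a[:, None, :], lo_b[None, :, :])  # pairwise max of box minima
    hi = np.minimum(hi_a[:, None, :], hi_b[None, :, :])  # pairwise min of box maxima
    return np.all(lo <= hi, axis=-1)                     # overlap iff non-empty in every axis
```

During simultaneous traversal, only the child pairs flagged True in the mask are pushed for further descent.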

Published in:

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2019, Macau, China, November 4 - 8, 2019.

Files:

     Paper
     Talk


Combining a Scientific Coral Reef Model With an Awareness Raising 3D Underwater World

Hauke Reuter, Andreas Kubicek, Gabriel Zachmann

Tropical coral reefs, which provide major services for over 500 million people, face severe challenges, such as temperature increase causing frequent deadly bleaching events, and local impacts resulting from intensive use and pollution. The elaboration of scientific tools that allow extrapolating reef dynamics in detail, assessing the impact of various drivers, and thus evaluating sustainable management schemes is necessary. However, the resulting scientific knowledge often remains inaccessible for management authorities and ecosystem users. Furthermore, it is highly desirable to transfer results to the public for a better understanding of underlying processes, management measures, decisions and awareness building.

Published in:

11th WIOMSA Symposium, Mauritius, July, 2019.

Files:

     Abstract for the poster


A Modular Virtual Testbed for Multimodal Autonomous Planetary Missions

VaMEx-VTB — A Modular Virtual Testbed for Multimodal Autonomous Planetary Missions

Jörn Teuber, René Weller, Luisa Buinhas, Daniel Kühn, Philipp Dittmann, Abhishek Srinivas, Frank Kirchner, Roger Förstner, Oliver Funke, Gabriel Zachmann

The "VaMEx - Valles Marineris Explorer" initiative is part of the DLR Explorer Initiatives. As such it is an interdisciplinary research program funded by the DLR Space Administration aimed at developing new concepts, algorithms and hardware for swarm-based exploration of the Valles Marineris on Mars. This includes a hominid robotic platform (project VaMEx-VIPe), autonomous swarm navigation including ground vehicles and UAVs (project VaMEx-CoSMiC) that rely on a local positioning and landing system (project VaMEx-LAOLa), and orbital support (VaMEx-NavComNet) serving as a science data, telemetry and telecommand relay between Earth and the in-situ elements and providing near real-time position updates to the other elements. Real validation and verification tests for such complex navigation and exploration systems are difficult, expensive and time-consuming because they require the availability of hardware, realistic environments and software-in-the-loop. In this paper, we present VaMEx-VTB, a virtual testbed (VTB) that enables the verification and validation of such large and complex interdisciplinary research projects during very early phases. The basic idea of VaMEx-VTB is to provide a common software platform for all modules in combination with a sophisticated, user-definable computer simulation, thereby helping to reduce expensive and time-consuming physical testing. Additionally, it can serve as an integration and discussion hub during the development process. The VTB allows users to configure various aspects of the test scenarios and the test environment, such as physical parameters, atmospheric conditions, or terrain features. This is especially essential for extraterrestrial planetary missions that are difficult to reconstruct on Earth. Finally, a sophisticated graphical feedback, based on a state-of-the-art game engine, allows an easy and direct interaction of the engineers with the test case in the VTB.
Our modular design based on ROS supports consistent data access for all components. So far, we have implemented a realistic simulation of the relevant environmental parameters and created an adjustable model of the Valles Marineris terrain, based on the HiRISE data. Additionally, the VTB synthesizes realistic sensor input for several algorithms running on the swarm elements. The modular design concept also qualifies the VTB to serve as a testing platform for other extraterrestrial missions in the future.

Published in:

70th International Astronautical Congress (IAC) 2019, Washington, DC, USA, October 21 - 25, 2019.

Files:

     Paper

Links:

     Project


Auto Packing for Arbitrary 3D Objects and Container

Auto Packing for Arbitrary 3D Objects and Container

Hermann Meißenhelter, René Weller, Gabriel Zachmann

Packing problems occur in different forms in many important real-world applications. In this paper we consider the scarcely researched challenge of packing a set of arbitrary 3D objects, following a pre-defined distribution, into a single arbitrary 3D container while optimizing the following two, partly contradictory, criteria simultaneously: maximization of the packed volume, obviously, but also the maximization of the distances between objects of the same type to avoid clustering. We present several algorithms to heuristically compute solutions to this challenge. Our algorithms are organized in a flexible two-tier pipeline that offers the possibility to compute an initial placement that can be improved in a second step. Our results show that our approach produces dense packings for a wide range of different objects and containers.

Published in:

GI VR/AR Workshop 2019, Fulda, Germany, September 17 - 18, 2019.

Links:

     Project


Introducing Virtual & 3D-Printed Models for Improved Collaboration in Surgery

Introducing Virtual & 3D-Printed Models for Improved Collaboration in Surgery

Anke Reinschlüssel, Roland Fischer, Christian Schumann, Verena Uslar, Thomas Münder, Uwe Katzky, Heike Kißner, Valentin Kraft, Marie Lampe, Thomas Lück, Kai Bock-Müller, Hans Nopper, Sirko Pelzl, Dirk Wenig, Andrea Schenk, Dirk Weyhe, Gabriel Zachmann, Rainer Malaka

Computer-assisted surgery and the use of virtual environments in surgery have recently been gaining popularity, as they provide numerous benefits, especially for the visualisation of data. Yet, these tools lack features for direct and interactive discussion with remote experts and intuitive means of control for 3D data. Therefore, we present a concept to create an immersive multi-user system, by using virtual reality, augmented reality and 3D-printed organ models, which enables a collaborative workflow to assist surgeries. The 3D models will be an interaction medium to provide haptic feedback as well as teaching material. Additionally, multiple depth cameras will be used to provide remote users in the virtual environment with a realistic live representation of the operating room. Our system can be used in the planning stage, intraoperatively as well as for training. First prototypes were rated as highly useful by visceral surgeons in a focus group.

Published in:

18th Annual Meeting of the German Society for Computer- and Robot-Assisted Surgery (CURAC 2019), Reutlingen, Germany, September 19 - 21, 2019.

Files:

     Paper (preprint)
     Conference proceedings (paper on pages 253-258)

Links:

     Project Page


Virtual Validation and Verification of the VaMEx Initiative

Virtual Validation and Verification of the VaMEx Initiative

Jörn Teuber, René Weller, Luisa Buinhas, Daniel Kühn, Philipp Dittmann, Abhishek Srinivas, Frank Kirchner, Roger Förstner, Oliver Funke, Gabriel Zachmann

We present an overview of the Valles Marineris Explorer (VaMEx) initiative, a DLR-funded project line for the development of required key technologies to enable a future swarm exploration of the Valles Marineris on Mars. The Valles Marineris is a wide canyon range, near the Martian equator. The so far still fictive VaMEx mission scenario comprises a swarm of different robots, including rovers, flying drones and a hominid robot. Here, we present VaMEx-VTB, a virtual testbed (VTB) with a digitalized map of the large and fragmented terrain of the Valles Marineris. The VaMEx-VTB allows an adjustable validation as well as verification of the complex mission design in virtual reality, due to its modular design. It shall also be used in preparation of field tests in the near future for validation of each swarm element’s ability for interactive swarm cooperation and collaboration.

Published in:

International Planetary Probe Workshop 2019, Oxford, England, July 8 - 12, 2019.

Files:

     Paper
     Poster


Application Scenarios for 3D-Printed Organ Models for Collaboration in VR & AR

Application Scenarios for 3D-Printed Organ Models for Collaboration in VR & AR

Muender, T., Reinschluessel, A., Zargham, N., Döring, T., Wenig, D., Malaka, R., Fischer, R., Zachmann, G., Schumann, C., Kraft, V., Schenk, A., Uslar, V., Weyhe, D., Nopper, H. & Lück, T.

Medical software for computer-assisted surgery often solely supports one phase of the surgical process, e.g., surgery planning. This paper describes a concept for a system, which can be seamlessly used in the preoperative planning phase, in the intraoperative phase for viewing the planning data, as well as for training and education. A combination of virtual and augmented reality with a multi-user functionality will support the three phases. 3D-printed organ models will be used as interaction devices for more intuitive interaction with the visual data and for educating future surgeons. We present the three application scenarios for this concept in detail and discuss the research opportunities.

Published in:

Mensch und Computer 2019 - Workshopband, Bonn: Gesellschaft für Informatik e.V., 2019.

Files:

     Paper


Fast and Easy Collision Detection for Rigid and Deformable Objects

Fast and Easy Collision Detection for Rigid and Deformable Objects

René Weller and Gabriel Zachmann

Chapter Abstract:
In this chapter, we present two methods for collision detection in virtual environments. The first method relies on a data structure called the Inner Sphere Tree (IST). ISTs are suitable for rigid objects and they are the first data structure that is able to compute the penetration volume between a pair of colliding objects at haptic rendering rates. This new contact information guarantees physically-plausible and continuous forces and torques for the collision responses that are essential for stable physically-based simulations and haptic rendering. ISTs do rely on a bounding volume hierarchy that requires a time-consuming pre-processing that becomes invalid in case of deformations. Consequently, for deformable objects, we propose another algorithm (we call it kDet) that does not need any pre-processing. kDet works completely on the GPU and has a constant running time for practically all relevant objects.
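The penetration volume computed by ISTs can be approximated at the leaf level by summing the pairwise overlap volumes of the two objects' inner spheres. The sketch below combines the closed-form sphere-sphere lens volume with a brute-force double loop; a real IST traversal prunes most pairs via the hierarchy:

```python
import math

def sphere_overlap_volume(r1, r2, d):
    """Closed-form volume of the lens-shaped intersection of two spheres
    with radii r1, r2 whose centers are a distance d apart."""
    if d >= r1 + r2:                       # disjoint spheres
        return 0.0
    if d <= abs(r1 - r2):                  # smaller sphere fully contained
        r = min(r1, r2)
        return 4.0 / 3.0 * math.pi * r ** 3
    return (math.pi * (r1 + r2 - d) ** 2 *
            (d * d + 2.0 * d * (r1 + r2) - 3.0 * (r1 - r2) ** 2)) / (12.0 * d)

def penetration_volume(spheres_a, spheres_b):
    """Approximate penetration volume as the summed overlap of inner-sphere
    pairs; each sphere is a (center, radius) pair."""
    return sum(sphere_overlap_volume(ra, rb, math.dist(ca, cb))
               for ca, ra in spheres_a for cb, rb in spheres_b)
```

Because the overlap volume changes continuously with the object poses, the forces and torques derived from it are continuous as well, which is what makes the measure attractive for haptic rendering.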

About the Book:

This book takes the practicality of other "Gems" series such as "Graphics Gems" and "Game Programming Gems" and provides a quick reference for novice and expert programmers alike to swiftly track down a solution to a task needed for their VR project. Reading the book from cover to cover is not the expected use case; rather, becoming familiar with the territory from the Introduction and then jumping to the needed explanations is how the book will mostly be used. Each chapter (other than the Introduction) contains between 5 and 10 "tips", each of which is a self-contained explanation with implementation detail generally demonstrated as pseudo code, or in cases where it makes sense, actual code.

Published in:

Chapter 34 in: William R. Sherman (ed.): VR Developer Gems, CRC Press Taylor & Francis Group, June, 2019, ISBN 978-1-138-03012-1.

Order from: Amazon
View online at Google Books


A Continuous Material Cutting Model with Haptic Feedback for Medical Simulations

A Continuous Material Cutting Model with Haptic Feedback for Medical Simulations

Maximilian Kaluschke, Rene Weller, Gabriel Zachmann, Mario Lorenz

We present a novel haptic rendering approach to simulate material removal in medical simulations at haptic rates. The core of our method is a new massively-parallel continuous collision detection algorithm in combination with a stable and flexible 6-DOF collision response scheme that combines penalty-based and constraint-based force computation.

Published in:

2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Osaka, Japan, March 23 - 27, 2019.

Files:

     Paper
     Poster
     Talk


A Virtual Reality Assembly Assessment Benchmark for Measuring VR Performance & Limitations

Michael Otto, Eva Lampen, Philipp Agethena, Mareike Langohr, Gabriel Zachmann, Enrico Rukzio

With increasing product complexity in the manufacturing industry, virtual reality (VR) offers the possibility to immersively assess assembly processes already in early product development stages. Within production validation phases, engineers visually assess product part assembly and interactively validate corresponding production processes. Nevertheless, research so far does not answer how a VR assembly system's performance can be measured with respect to its technical limitations. The proposed Virtual Reality Assembly Assessment (VR2A) benchmark is an open, standardized experiment design for evaluating the overall VR assembly assessment performance in terms of sizes and clearances instead of measuring single technical impact factors within the interaction cycle, such as tracking, rendering and visualization limitations. The VR2A benchmark focuses on the production engineer's overall assessment objective, generating quantifiable metrics. Using VR2A, users gain practical insights on their overall VR assessment system's performance and limitations. An in-depth evaluation with production engineers (N=32) revealed that negative clearances can be detected more easily than positive ones, and that part sizes directly correlate with the assessment performance. Additionally, the evaluation showed that VR2A is easy to use, universally usable and generates objective insights on the applied VR system.

Published in:

ScienceDirect, 52nd CIRP Conference on Manufacturing Systems (CMS), Ljubljana, Slovenia, June 12-14, 2019.


New Concepts for Virtual Testbeds: Data Mining Algorithms for Blackbox Optimization based on Wait-Free Concurrency and Generative Simulation

Patrick Draheim

In this thesis, I propose novel data mining algorithms for computing Pareto optimal simulation model configurations, based on an approximation of the feasible design space, for deterministic and stochastic blackbox simulations in virtual testbeds. These novel data mining algorithms lead to an automatic knowledge discovery process that does not need any supervision for its data analysis and assessment of multiobjective optimization problems of simulation model configurations. This achieves the goal of computing optimal configurations of simulation models for long-term simulations and assessments. Furthermore, I propose two complementary solutions for efficiently integrating massively-parallel virtual testbeds into engineering processes. First, I propose a novel multiversion wait-free data and concurrency management based on hash maps. These wait-free hash maps do not require any standard locking mechanisms and enable low-latency data generation, management and distribution for massively-parallel applications. Second, I propose novel concepts for efficiently code-generating the above wait-free data and concurrency management for arbitrary massively-parallel simulation applications of virtual testbeds. My generative simulation concept combines a state-of-the-art realtime interactive system design pattern for high maintainability with template code generation based on domain specific modelling. This concept is able to generate massively-parallel simulations and, at the same time, model-check its internal dataflow for possible interface errors. This generative concept overcomes the challenge of efficiently integrating virtual testbeds into engineering processes. These contributions enable, for the first time, a powerful collaboration between simulation, optimization, visualization and data analysis for novel virtual testbed applications, and they address the presented challenges and goals.

Published in:

Original Version: Staats- und Universitätsbibliothek Bremen, 2018.

Files:

     Dissertation


Immersive Anatomy Atlas—Empirical Study Investigating the Usability of a Virtual Reality Environment as a Learning Tool for Anatomy

Immersive Anatomy Atlas—Empirical Study Investigating the Usability of a Virtual Reality Environment as a Learning Tool for Anatomy

Dirk Weyhe, Verena Uslar, Felix Weyhe, Maximilian Kaluschke and Gabriel Zachmann

We developed a prototype of a virtual, immersive, and interactive anatomy atlas for surgical anatomical training. The aim of this study was to test the usability of the VR anatomy atlas and to measure differences in knowledge acquisition between an immersive content delivery medium and conventional learning (OB). Twenty-eight students in the 11th grade of two German high schools were randomly divided into two groups. One group used conventional anatomy books and charts whereas the other group used the VR anatomy atlas to answer nine anatomy questions. Error rate, duration for answering the individual questions, satisfaction with the teaching unit, and existence of a medical career wish were evaluated as a function of the learning method. The error rate was the same for both schools and between both teaching aids (VR: 34.2%; OB: 34.1%). The answering speed for correctly answered questions in the OB group was approx. twice as high as for the VR group (mean value OB: 98 s, range: 2–410 s; VR: 50 s, 1–290 s). There was a significant difference between the students of the two schools based on a longer processing time in the OB condition in School B (mean OB in School A: 158 s; OB in School B: 77 s). The subjective survey on the learning methods showed a significantly better satisfaction for VR (p = 0.012). Medical career aspirations were strengthened with VR, while interest of the OB group in such a career tended to decline. The immersive anatomy atlas helped to actively and intuitively perform targeted actions that led to correct answers in a shorter amount of time, even without prior knowledge of VR and anatomy. With the OB method, orientation difficulties and/or the technical effort in the handling of the topographical anatomy atlas seem to lead to a significantly longer response time, especially if the students are not specially trained in literature research in books or texts.
This seems to indicate that the VR environment in the sense of constructivist learning might be a more intuitive and effective way to acquire knowledge than from books.

Published in:

Frontiers in Surgery, Visceral Surgery 5:73 (30 November 2018). doi:10.3389/fsurg.2018.00073.

Files:

     Paper
     Video
     Video


Procedural Generation of Highly Detailed Asteroid Models

AstroGen – Procedural Generation of Highly Detailed Asteroid Models

Xi-Zhi Li, René Weller, Gabriel Zachmann

We present a novel algorithm, called AstroGen, to procedurally generate highly detailed and realistic 3D meshes of small celestial bodies automatically. AstroGen gains its realism from learning surface details from real-world asteroid data. We use a sphere packing-based metaball approach to represent the rough shape and a set of noise functions for the surface details. The main idea is to apply an optimization algorithm to adapt these representations to available highly detailed asteroid models with respect to a similarity measure. Our results show that our approach is able to generate a wide variety of different celestial bodies with very complex surface structures like caves and craters.

Published in:

The 15th International Conference on Control, Automation, Robotics and Vision (ICARCV 2018), Singapore, November 18 - 21, 2018.

Files:

     Paper
     Talk


HIPS - A Virtual Reality Hip Prosthesis Implantation Simulator

HIPS - A Virtual Reality Hip Prosthesis Implantation Simulator

Maximilian Kaluschke, Rene Weller, Gabriel Zachmann, Luigi Pelliccia, Mario Lorenz, Philipp Klimant, Sebastian Knopp, Johannes P. G. Atze, Falk Möckel

We present the first VR training simulator for hip replacement surgeries. We solved the main challenges of this task – high and stable forces during the milling process while simultaneously very sensitive feedback is required – by using an industrial robot for the force output and developing a novel massively parallel haptic rendering algorithm with support for material removal.

Published in:

2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Reutlingen, Germany, March 18 - 22, 2018.

Files:

     Paper
     Poster
     Demo Abstract


UnrealHaptics - A Plugin-System for High Fidelity Haptic Rendering in the Unreal Engine

UnrealHaptics - A Plugin-System for High Fidelity Haptic Rendering in the Unreal Engine

Marc O. Rüdel, Johannes Ganser, René Weller, Gabriel Zachmann

We present UnrealHaptics, a novel set of plugins that enable both 3-DOF and 6-DOF haptic rendering in the Unreal Engine 4. The core is the combination of the integration of a state-of-the-art collision detection library with support for very fast and stable force and torque computations and a general haptics library for the communication with different haptic hardware devices. Our modular and lightweight architecture makes it easy for other researchers to adapt our plugins to their own requirements. As a use case we have tested our plugin in a new asymmetric collaborative multiplayer game for blind and sighted people. The results show that our plugin easily meets the requirements for haptic rendering even in complex scenes.

Published in:

Springer Lecture Notes in Computer Science, LNCS, Volume 11162, 2018.

Files:

     Paper


DynCam: A Reactive Multithreaded Pipeline Library for 3D Telepresence in VR

DynCam: A Reactive Multithreaded Pipeline Library for 3D Telepresence in VR

Christoph Schröder, Mayank Sharma, Jörn Teuber, René Weller, Gabriel Zachmann

We contribute a new library, DynCam, for real-time, low-latency, streaming point cloud processing with a special focus on telepresence in VR. Our library combines several RGB-D images from multiple distributed sources into a single point cloud and transfers it through a network. This processing is organized as a pipeline that supports implicit multithreading. The pipeline uses functional reactive programming to describe transformations on the data in a declarative way. In contrast to previous libraries, DynCam is platform independent, modular and lightweight. This makes it easy to extend and allows easy integration into existing applications. We have prototypically implemented a telepresence application in the Unreal Engine. Our results show that DynCam outperforms competing libraries concerning latency as well as network traffic.
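A pipeline with implicit multithreading, in the spirit described above, can be sketched as stages connected by thread-safe queues, each stage running its transformation on its own thread. This is a generic illustration, not DynCam's API:

```python
import queue
import threading

def stage(fn, src, dst):
    """One pipeline stage: pull items from src, apply fn, push results to dst.
    A None sentinel shuts the stage down and is forwarded so downstream
    stages terminate too.  Order is preserved because each stage is a
    single thread reading from a FIFO queue."""
    def run():
        while True:
            item = src.get()
            if item is None:        # shutdown sentinel
                dst.put(None)
                return
            dst.put(fn(item))
    t = threading.Thread(target=run, daemon=True)
    t.start()
    return t
```

Chaining, for example, a filtering stage, a point-cloud fusion stage, and a network-sending stage over such queues would mirror the declarative transformation chain described above.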

Published in:

VRIC 2018, Laval, France, April 4-6, 2018.

Files:

     Paper preprint
     Talk


Fast and Accurate Simulation of Gravitational Field of Irregular-shaped Bodies using Polydisperse Sphere Packings

Fast and Accurate Simulation of Gravitational Field of Irregular-shaped Bodies using Polydisperse Sphere Packings

Abhishek Srinivas, René Weller, Gabriel Zachmann

Currently, interest in space missions to small bodies (e.g., asteroids) is increasing, both scientifically and commercially. One of the important aspects of these missions is to test the navigation, guidance, and control algorithms. The most cost and time efficient way to do this is to simulate the missions in virtual testbeds. To do so, a physically-based simulation of the small bodies' physical properties is essential. One of the most important physical properties, especially for landing operations, is the gravitational field, which can be quite irregular, depending on the shape and mass distribution of the body. In this paper, we present a novel algorithm to simulate gravitational fields for small bodies like asteroids. The main idea is to represent the small body's mass by a polydisperse sphere packing. This allows for an easy and efficient parallelization. Our GPU-based implementation outperforms traditional methods by more than two orders of magnitude while achieving a similar accuracy.
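The sphere-packing idea exploits that, outside a homogeneous sphere, its gravitational field is identical to that of a point mass at its center, so the field of the whole body is simply the sum over all spheres. A minimal sketch follows; the density value and input layout are illustrative assumptions, not the paper's interface:

```python
import numpy as np

def gravity_from_sphere_packing(p, centers, radii, rho=2000.0, G=6.674e-11):
    """Gravitational acceleration at point p (outside all spheres) from a
    body represented as a polydisperse sphere packing with uniform
    density rho.  Each sphere contributes like a point mass at its center."""
    masses = rho * (4.0 / 3.0) * np.pi * radii ** 3    # mass of each sphere
    r = centers - p                                    # vectors from p to sphere centers
    d = np.linalg.norm(r, axis=1)                      # distances to the centers
    return ((G * masses / d ** 3)[:, None] * r).sum(axis=0)  # sum of G m_i r_i / |r_i|^3
```

Because every sphere's contribution is independent, the summation parallelizes trivially, which is what the GPU implementation exploits.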

Published in:

ICAT-EGVE 2017, Adelaide, Australia, November 22-24, 2017.

Files:

     Paper
     Talk


Augmented Invaders: A Mixed Reality Multiplayer Outdoor Game

Augmented Invaders: A Mixed Reality Multiplayer Outdoor Game

Michael Bonfert, Inga Lehne, Ralf Morawe, Melina Cahnbley, Gabriel Zachmann, Johannes Schöning

Many virtual and mixed reality games focus on single-player experiences. In this paper, we describe the concept and prototype implementation of a mixed reality multiplayer game that can be played with a smartphone and an HMD in outdoor environments. Players can team up to fight against attacking alien drones. The relative positions between the players are tracked using GPS, and the rear camera of the smartphone is used to augment the environment and teammates with virtual objects. The combination of multiplayer, mixed reality, the use of geographical location and outdoor action together with affordable, mobile equipment enables a novel strategic and social game experience.

Published in:

VRST 2017, Gothenburg, Sweden, November 8-10, 2017.

Files:

     Paper


Invariant Local Shape Descriptors: Classification of Large-Scale Shapes with Local Dissimilarities

Invariant Local Shape Descriptors: Classification of Large-Scale Shapes with Local Dissimilarities

Xi-Zhi Li, Patrick Lange, René Weller, Gabriel Zachmann

We present a novel statistical shape descriptor for arbitrary three-dimensional shapes as a six-dimensional feature for generic classification purposes. Our feature parameterizes the complete geometrical relation of the global shape and additionally considers local dissimilarities while being invariant to the shape appearance. Our approach allows the classification of large-scale shapes with only small local dissimilarities. Our feature can be easily quantized and mapped into a histogram, which can be used for efficient and effective classification. We take advantage of GPU processing in order to efficiently compute our invariant local shape descriptor feature even for large-scale shapes. Our synthetic benchmarks show that our approach outperforms state-of-the-art methods for local shape dissimilarity classification. In general, it yields robust and promising recognition rates even for noisy data.
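The quantize-and-histogram step described above can be sketched as follows. This is an illustrative stand-in, not the authors' code: the bin count, value ranges, and the histogram-intersection similarity are our assumptions.

```python
def feature_histogram(features, bins, lo, hi):
    """Quantize d-dimensional feature vectors into one flattened
    d-dimensional histogram, normalized so histograms of differently
    sized shapes are comparable."""
    d = len(lo)
    hist = [0] * (bins ** d)
    for f in features:
        idx = 0
        for k in range(d):
            b = int((f[k] - lo[k]) / (hi[k] - lo[k]) * bins)
            b = min(max(b, 0), bins - 1)  # clamp out-of-range values
            idx = idx * bins + b          # flatten the d-dim bin index
        hist[idx] += 1
    total = max(1, len(features))
    return [h / total for h in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 means identical normalized histograms."""
    return sum(min(a, b) for a, b in zip(h1, h2))
```

A classifier would then operate on these fixed-length histograms instead of the raw per-point features.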

Published in:

Computer Graphics International 2017, Yokohama, Japan, June 27 - 30, 2017.

Files:

     Paper
     Presentation
     Talk


A Volumetric Penetration Measure for 6-DOF Haptic Rendering of Streaming Point Clouds

Maximilian Kaluschke, René Weller and Gabriel Zachmann

We present a novel method to define the penetration volume between a surface point cloud and arbitrary 3D CAD objects. Moreover, we have developed a massively-parallel algorithm to compute this penetration measure efficiently on the GPU. The main idea is to represent the CAD object's volume by an inner bounding volume hierarchy while the point cloud does not require any additional data structures. Consequently, our algorithm is perfectly suited for streaming point clouds that can be gathered online via depth sensors like the Kinect. We have tested our algorithm in several demanding scenarios and our results show that our algorithm is fast enough to be applied to 6-DOF haptic rendering while computing continuous forces and torques.
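A heavily simplified, sequential sketch of the measure: the CAD object's volume is represented by inner spheres, and each cloud point contributes its deepest penetration into any sphere, which also yields a force direction. This only illustrates the idea; the paper uses an inner bounding volume hierarchy and GPU parallelism, and all names here are ours.

```python
import math

def penetration_measure(points, inner_spheres):
    """For each point, find its deepest containing inner sphere
    (depth = radius - distance to center); accumulate depths as a
    volumetric-style penetration proxy plus a summed force direction."""
    total = 0.0
    fx = fy = fz = 0.0
    for (px, py, pz) in points:
        best = 0.0
        nx = ny = nz = 0.0
        for (cx, cy, cz, r) in inner_spheres:
            dx, dy, dz = px - cx, py - cy, pz - cz
            d = math.sqrt(dx * dx + dy * dy + dz * dz)
            depth = r - d  # positive only if the point is inside
            if depth > best:
                best = depth
                if d > 0.0:  # push the point outward, away from the center
                    nx, ny, nz = dx / d, dy / d, dz / d
        total += best
        fx += best * nx
        fy += best * ny
        fz += best * nz
    return total, (fx, fy, fz)
```

Because each point is handled independently and the point cloud needs no extra data structure, streaming sensor data maps naturally onto one GPU thread per point.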

Published in:

IEEE World Haptics Conference 2017, Fürstenfeldbruck, Germany, June 6 - 9, 2017.

Files:

     Paper
     Poster
     Teaser


GDS: Gradient based Density Spline Surfaces for Multiobjective Optimization in Arbitrary Simulations

Patrick Lange, René Weller and Gabriel Zachmann

We present a novel approach for approximating objective functions in arbitrary deterministic and stochastic multi-objective blackbox simulations. Usually, simulation-based optimization approaches require pre-defined objective functions for optimization techniques in order to find a local or global minimum of the specified simulation objectives and multi-objective constraints. Due to the increasing complexity of state-of-the-art simulations, such objective functions are not always available, leading to so-called blackbox simulations. In contrast to existing approaches, we approximate the objective functions and design space for deterministic and stochastic blackbox simulations, even for convex and concave Pareto fronts, thus enabling optimization for arbitrary simulations. Additionally, Pareto gradient information can be obtained from our design space approximation. Our approach gains its efficiency from a novel gradient-based sampling of the parameter space in combination with a density-based clustering of sampled objective function values, resulting in a B-spline surface approximation of the feasible design space.

Published in:

ACM SIGSIM PADS Conference 2017, Singapore, May 24 - 26, 2017.

Files:

     Paper
     Slides


kDet: Parallel Constant Time Collision Detection for Polygonal Objects

René Weller, Nicole Debowski and Gabriel Zachmann

We define a novel geometric predicate and a class of objects that enables us to prove a linear bound on the number of intersecting polygon pairs for colliding 3D objects in that class. Our predicate is relevant both in theory and in practice: it is easy to check and it needs to consider only the geometric properties of the individual objects – it does not depend on the configuration of a given pair of objects. In addition, it characterizes a practically relevant class of objects: we checked our predicate on a large database of real-world 3D objects and the results show that it holds for all but the most pathological ones. Our proof is constructive in that it is the basis for a novel collision detection algorithm that realizes this linear complexity also in practice. Additionally, we present a parallelization of this algorithm with a worst-case running time that is independent of the number of polygons. Our algorithm is very well suited not only for rigid but also for deformable and even topology-changing objects, because it does not require any complex data structures or pre-processing. We have implemented our algorithm on the GPU and the results show that it is able to find in real-time all colliding polygons for pairs of deformable objects consisting of more than 200k triangles, including self-collisions.

Published in:

Eurographics 2017, Lyon, France, April 24 - 28, 2017.

Files:

     Paper
     Talk


Virtual Reality for User-Centered Design and Evaluation of Touch-free Interaction Techniques for Navigating Medical Images in the Operating Room

Anke Reinschlüssel, Jörn Teuber, Marc Herrlich, Jeffrey Bissel, Melanie van Eikeren, Johannes Ganser, Felicia Köller, Fenja Kollasch, Thomas Mildner, Luca Raimondo, Lars Reisig, Marc Rüdel, Danny Thieme, Tobias Vahl, Gabriel Zachmann, Rainer Malaka

Computer-assisted surgery has pervaded the operating room (OR). While display and imaging technologies advance rapidly, keyboard and mouse are still the dominant input devices, even though they cause sterility problems. We present an interactive virtual operating room (IVOR), intended as a tool to develop and study interaction methods for the OR, and two novel touch-free interaction techniques using hand and foot gestures. Both were developed and evaluated with 20 surgeons. The results show that our techniques can be used with minimal learning time and no significant differences regarding completion time and usability compared to the control condition relying on verbal instruction of an assistant. Furthermore, IVOR as a tool was well received by the surgeons, although they had no prior experience with virtual reality. This confirms that IVOR is an effective tool for user-centered design and evaluation, providing a portable, yet realistic substitute for a real OR for early evaluations.

Published in:

CHI 2017 - Late-Breaking Work, Colorado Convention Center, Denver, CO, May 6 - 11, 2017.

Files:

     Paper


Optimized Positioning of Autonomous Surgical Lamps

Jörn Teuber, René Weller, Ron Kikinis, Karl-Jürgen Oldhafer, Michael J. Lipp, Gabriel Zachmann

We consider the problem of finding automatically optimal positions of surgical lamps throughout the whole surgical procedure, where we assume that future lamps could be robotized. We propose a two-tiered optimization technique for the real-time autonomous positioning of those robotized surgical lamps. Typically, finding optimal positions for surgical lamps is a multi-dimensional problem with several, in part conflicting, objectives, such as optimal lighting conditions at every point in time while minimizing the movement of the lamps in order to avoid distractions of the surgeon. Consequently, we use multi-objective optimization (MOO) to find optimal positions in real-time during the entire surgery.
Due to the conflicting objectives, there is usually not a single optimal solution for such kinds of problems, but a set of solutions that realizes a Pareto front. When our algorithm selects a solution from this set, it additionally has to consider the individual preferences of the surgeon. This is a highly non-trivial task because the relationship between the solution and the parameters is not obvious. We have developed a novel meta-optimization that addresses exactly this challenge. It delivers an easy-to-understand set of presets for the parameters and allows a balance between lamp movement and lamp obstruction. This meta-optimization can be pre-computed for different kinds of operations and is then used by our online optimization for the selection of the appropriate Pareto solution.
Both optimization approaches use data obtained by a depth camera that captures the surgical site as well as the environment around the operating table. We have evaluated our algorithms with data recorded during a real open abdominal surgery; this data is available for scientific purposes. The results show that our meta-optimization produces viable parameter sets for different parts of an intervention even when trained on a small portion of it.
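The final selection step, picking one solution from the Pareto set according to a preset, could look roughly like the scalarization sketch below. The cost names and the weighted-sum form are our illustrative assumptions, not the paper's exact formulation:

```python
def select_pareto_solution(pareto_set, w_movement):
    """Pick one solution from a Pareto front of
    (illumination_cost, movement_cost) pairs. The preset weight
    w_movement in [0, 1] trades lamp movement against poor lighting:
    higher values prefer keeping the lamps still."""
    def score(sol):
        illum_cost, move_cost = sol
        return (1.0 - w_movement) * illum_cost + w_movement * move_cost
    return min(pareto_set, key=score)
```

A preset computed by the meta-optimization would supply the weight; the online optimization then only has to evaluate this cheap selection per frame.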

Published in:

SPIE Medical Imaging Orlando, FL, USA, February 11 - 16, 2017.

Copyright 2017 Society of Photo Optical Instrumentation Engineers. One print or electronic copy may be made for personal use only. Systematic electronic or print reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited.

http://dx.doi.org/10.1117/12.2256029

Files:

     Paper
     Talk


Novel Morphological Features for Non-mass-like Breast Lesion Classification on DCE-MRI

M. Razavi, L. Wang, T. Tan, N. Karssemeijer, L. Linsen, U. Frese, H. K. Hahn, & G. Zachmann

For both visual analysis and computer assisted diagnosis systems in breast MRI reading, the delineation and diagnosis of ductal carcinoma in situ (DCIS) is among the most challenging tasks. Recent studies show that kinetic features derived from dynamic contrast enhanced MRI (DCE-MRI) are less effective in discriminating malignant non-masses against benign ones due to their similar kinetic characteristics. Adding shape descriptors can improve the differentiation accuracy. In this work, we propose a set of novel morphological features using the sphere packing technique, aiming to discriminate non-masses based on their shapes. The feature extraction, selection and the classification modules are integrated into a computer-aided diagnosis (CAD) system. The evaluation was performed on a data set of 106 non-masses extracted from 86 patients, which achieved an accuracy of 90.56%, precision of 90.3%, and area under the receiver operating characteristic (ROC) curve (AUC) of 0.94 for the differentiation of benign and malignant types.

Published in:

MLMI 2016 (Machine Learning in Medical Imaging), in Conjunction with MICCAI 2016, Athens, Greece, Springer LNCS, volume 10019.

Files:

     Paper
     Poster


Intelligent Realtime 3D Simulations

Patrick Lange, Gabriel Zachmann

This thesis focuses on techniques to improve performance, scalability as well as multi-objective optimization of interactive 3D simulation-based optimization applications. The approaches developed in this work contribute to the area of simulation-based optimization, high performance computing, simulation and modelling, multi-objective optimization, and realtime interactive systems.

Published in:

ACM SIGSIM PADS PhD Colloquium, Banff, AB, Canada, May 15-18, 2016, Best PhD Award

Files:

     Poster
     Talk


Knowledge Discovery for Pareto based Multiobjective Optimization in Simulation

Patrick Lange, René Weller, Gabriel Zachmann

We present a novel knowledge discovery approach for automatic feasible design space approximation and parameter optimization in arbitrary multiobjective blackbox simulations. Our approach does not need any supervision by simulation experts. Usually, simulation experts conduct simulation experiments for a predetermined system specification by manually reducing the complexity and number of simulation runs, varying input parameters through educated assumptions and according to prior defined goals. This leads to an error-prone trial-and-error approach for determining suitable parameters for successful simulations. In contrast, our approach autonomously discovers unknown relationships in model behavior and approximates the feasible design space. Furthermore, we show how Pareto gradient information can be obtained from this design space approximation for state-of-the-art optimization algorithms. Our approach gains its efficiency from a novel spline-based sampling of the parameter space in combination with a novel forest-based simulation dataflow analysis. We have applied our new method to several artificial and real-world scenarios and the results show that our approach is able to discover relationships between parameters and simulation goals. Additionally, the computed multiobjective solutions are close to the Pareto front.

Published in:

ACM SIGSIM PADS, Banff, AB, Canada, May 15 - 18, 2016.

Files:

     Paper
     Talk


GraphPool: A High Performance Data Management for 3D Simulations

Patrick Lange, René Weller, Gabriel Zachmann

We present a new graph-based approach called GraphPool for the generation, management and distribution of simulation states for 3D simulation applications. Currently, relational databases are often used for this task in simulation applications. In contrast, our approach combines novel wait-free nested hash map techniques with traditional graphs, which results in a schema-less, in-memory, highly efficient data management. Our GraphPool stores static and dynamic parts of a simulation model, distributes changes caused by the simulation and logs the simulation run. Moreover, the GraphPool supports the sophisticated query types of traditional relational databases. As a consequence, our GraphPool overcomes the associated drawbacks of relational database technology for sophisticated 3D simulation applications. Our GraphPool has several advantages compared to other state-of-the-art decentralized methods, such as persistence of the simulation state over time, object identification, standardized interfaces for software components as well as a consistent world model for the overall simulation system. We tested our approach in a synthetic benchmark scenario but also in real-world use cases. The results show that it outperforms state-of-the-art relational databases by several orders of magnitude.

Published in:

ACM SIGSIM PADS, Banff, AB, Canada, May 15 - 18, 2016.

Files:

     Paper
     Talk


Wait-Free Hash Maps in the Entity-Component-System Pattern for Realtime Interactive Systems

Patrick Lange, René Weller, Gabriel Zachmann

In the past, the Entity-Component-System (ECS) pattern has become a major design pattern used in modern architectures for Realtime Interactive Systems (RIS). In this paper, we introduce high-performance wait-free hash maps for the System access of Components within the ECS pattern. This allows non-locking read and write operations, leading to highly responsive low-latency data access while maintaining a consistent data state. Furthermore, we present centralized as well as decentralized approaches for reducing the memory demand of these memory-intensive wait-free hash maps for diverse RIS applications. Our approaches gain their efficiency from Component-wise queues which use atomic markup operations for fast memory deletion. We have implemented our new method in a current RIS and the results show that our approach reduces the memory usage of wait-free hash maps by more than a factor of ten while still maintaining their high performance. Furthermore, we derive best practices from our numerical results for different use cases of wait-free hash map memory management in diverse RIS applications.
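The access pattern in question, Systems reaching Components through per-type hash maps, can be illustrated with plain dictionaries. This stand-in is of course not wait-free, and all names are ours; it only shows the ECS lookup structure that the paper's hash maps accelerate:

```python
class ComponentStore:
    """Minimal ECS-style component storage: one hash map per component
    type, mapping entity ids to component data. Systems query it by
    component type; a plain-dict stand-in for the wait-free hash maps."""
    def __init__(self):
        self._components = {}  # component type -> {entity id -> data}

    def set(self, ctype, entity, data):
        self._components.setdefault(ctype, {})[entity] = data

    def get(self, ctype, entity):
        return self._components.get(ctype, {}).get(entity)

    def entities_with(self, ctype):
        """Systems iterate over all entities owning a component type."""
        return list(self._components.get(ctype, {}))
```

In the published system, the per-type maps would be the wait-free hash maps, so many Systems can read and write concurrently without locks.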

Published in:

IEEE VR: 9th Workshop on Software Engineering and Architectures for Realtime Interactive Systems SEARIS 2016, Greenville, SC, USA, March 19 - 23, 2016.

Files:

     Paper
     Talk


Kinaptic — Techniques and Insights for Creating Competitive Accessible 3D Games for Sighted and Visually Impaired Users

Andreas Grabski, Toni Toni, Tom Zigrand, René Weller, Gabriel Zachmann

We present the first accessible game that allows a fair competition between sighted and blind people in a shared virtual 3D environment. We use an asymmetric setup that allows touchless interaction via Kinect for the sighted player, and haptic, wind, and surround audio feedback for the blind player. We evaluated our game in an in-the-wild study. The results show that our setup is able to provide a mutually fun game experience while maintaining a fair winning chance for both players. Based on our study, we also suggest guidelines for future developments of games for visually impaired people that could help to further include blind people in society.

Published in:

IEEE Haptics Symposium 2016, Philadelphia, PA, USA, April 8 - 11, 2016

Files:

     Paper
     Talk


PhD thesis: New Geometric Algorithms and Data Structures for collision detection of dynamically deforming objects

David Mainzer

This thesis presents a collision detection approach which works entirely without an acceleration data structure and supports rigid and soft bodies. Furthermore, we can compute inter-object and intra-object collisions of rigid and deformable objects consisting of many tens of thousands of triangles in a few milliseconds. To realize this, a subdivision of the scene into parts using a fuzzy clustering approach is applied. Based on that, all further steps for each cluster can be performed in parallel and, if desired, distributed to different GPUs. Tests have been performed to judge the performance of our approach against other state-of-the-art collision detection algorithms. Additionally, we integrated our approach into Bullet, a commonly used physics engine, to evaluate our algorithm.

Published in:

Universitätsbibliothek Clausthal, Clausthal, 2015.

Files:

     Dissertation


Time-efficient and Accurate Spatial Localization of Automotive Function Architectures with Function-oriented 3D Visualization

Moritz Cohrs, Valeri Kremer, Stefan Klimke, Gabriel Zachmann

A primary challenge in the automotive industry is the continually increasing complexity of modern cars, caused by the ever-growing number of complex vehicle functions. These functions are implemented as mechatronic systems consisting of multiple individual components. A promising, relatively new approach to managing the increasing complexity in the development process is the function-oriented design that focuses on the interdisciplinary, holistic development of such functions. A frequent and important task in function-oriented design is the identification of the spatial distribution of the components and connections of a specific function. In this paper, we present a very time-efficient and accurate solution to this task. Our solution uses virtual reality 3D visualization methods, based on a consistent integration of function-oriented data with CAD data. We evaluated our method in several user studies and the results show that it is capable of fulfilling the task in a much more time-efficient and accurate way than the traditional method.

Published in:

Computer-Aided Design and Applications 2015, Taylor & Francis CAD'15 Journal

Files:

     Paper


A Framework for Transparent Execution of Massively-Parallel Applications on CUDA and OpenCL

Jörn Teuber, René Weller, Gabriel Zachmann

We present a novel framework for simultaneous development for different massively parallel platforms. Currently, our framework supports CUDA and OpenCL, but it can be easily adapted to other programming languages. The main idea is to provide an easy-to-use abstraction layer that encapsulates calls to the user's own parallel device code as well as library functions. With our framework, the code has to be written only once and can then be used transparently with CUDA and OpenCL. The output is a single binary file and the application can decide during run-time which particular GPU method it will use. This enables us to support new features of specific platforms while maintaining compatibility. We have applied our framework to a typical project using CUDA and ported it easily to OpenCL. Furthermore, we present a comparison of the running times of the ported library on the different supported platforms.
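The run-time backend selection described above follows a common dispatcher pattern, sketched here in Python purely for illustration; the framework itself targets CUDA/OpenCL, and the class, backend probing, and kernel names below are hypothetical:

```python
class ParallelBackend:
    """Sketch of an abstraction layer that picks a compute backend at
    run time from a preference list. Only a CPU fallback is registered
    here; a real implementation would probe for CUDA and OpenCL devices
    and register the corresponding kernel wrappers."""
    def __init__(self, preferred=("cuda", "opencl", "cpu")):
        available = {"cpu": self._cpu_saxpy}  # CPU fallback always exists
        self.name = next(n for n in preferred if n in available)
        self._saxpy = available[self.name]

    def saxpy(self, a, xs, ys):
        # Same call site regardless of which backend was selected.
        return self._saxpy(a, xs, ys)

    @staticmethod
    def _cpu_saxpy(a, xs, ys):
        return [a * x + y for x, y in zip(xs, ys)]
```

The key property mirrored here is that application code calls one stable interface, while the concrete device implementation is chosen once at startup.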

Published in:

EuroVR Conference 2015, Lecco, Italy, October 15 - 16, 2015.

Files:

     Paper


Kanaria: Identifying the Challenges for Cognitive Autonomous Navigation and Guidance for Missions to Small Planetary Bodies

Alena Probst, Graciela Gonzales Peytavi, David Nakath, Anne Schattel, Carsten Rachuy, Patrick Lange, Joachim Clemens, Mitja Echim, Verena Schwarting, Abhishek Srinivas, Konrad Gadzicki, Roger Förster, Bernd Eissfeller, Kerstin Schill, Christof Büskens, Gabriel Zachmann

With the rapid evolution of space technologies and increasing thirst for knowledge about the origin of life and the universe, the need for deep space missions as well as for autonomous solutions for complex, time-critical mission operations becomes urgent. Within this context, the project KaNaRiA aims at technology development tailored to the ambitious task of space resource mining on small planetary bodies using increased autonomy for on-board mission planning, navigation and guidance. With the aim to validate and test our methods, we create a virtual environment in which humans can interact with the simulation of the mission. In order to achieve real-time performance, we propose a massively-parallel software system architecture, which enables very efficient and easily adaptable communication between concurrent software modules within KaNaRiA.

Published in:

International Astronautical Congress (IAC) 2015, Jerusalem, Israel, October 12 - 16, 2015.

Files:

     Paper


Autonomous Surgical Lamps

Jörn Teuber, René Weller, Ron Kikinis, Karl-Jürgen Oldhafer, Michael J. Lipp, Gabriel Zachmann

We present a novel method for the autonomous positioning of surgical lamps in open surgeries. The basic idea is to use an inexpensive depth camera to track all objects and the surgical staff and also generate a dynamic online model of the operation situs. Based on this information, our algorithms continuously compute the optimal positions for all surgical lamps. These positions can then be communicated to robotic arms so that the lamps mounted on their end effectors will move autonomously. This will ensure optimal lighting of the operation situs at all times, while avoiding occlusions and shadows from obstacles. We tested our algorithm in a VR simulation using real-world depth camera data that was recorded during a real abdominal operation. Our results show that our method is robust and can ensure close-to-optimal lighting conditions in real-world surgeries with an update rate of 20 Hz.

Published in:

Jahrestagung der Deutschen Gesellschaft für Computer- und Roboterassistierte Chirurgie (CURAC) 2015, Bremen, Germany, September 17 - 19, 2015.

Files:

     Paper
     Video


Multi Agent System Optimization in Virtual Vehicle Testbeds

Patrick Lange, René Weller, Gabriel Zachmann

Modelling, simulation, and optimization play a crucial role in the development and testing of autonomous vehicles. The ability to compute, test, assess, and debug suitable configurations reduces the time and cost of vehicle development. Until now, engineers have been forced to manually change vehicle configurations in virtual testbeds in order to react to inappropriate simulated vehicle performance. Such manual adjustments are very time consuming and are also often made ad hoc, which decreases the overall quality of the vehicle engineering process. In order to avoid this manual adjustment as well as to improve the overall quality of these adjustments, we present a novel comprehensive approach to modelling, simulation, and optimization of such vehicles. Instead of manually adjusting vehicle configurations, engineers can specify simulation goals in a domain-specific modelling language. The simulated vehicle performance is then mapped to these simulation goals and our multi-agent system computes optimized vehicle configuration parameters in order to satisfy these goals. Consequently, our approach does not need any supervision and gives engineers visual feedback on their vehicle configuration expectations. Our evaluation shows that we are able to optimize vehicle configuration sets to meet simulation goals while maintaining real-time performance of the overall simulation.

Published in:

EAI SIMUtools 2015, Athens, Greece, August 24 - 26, 2015.

Files:

     Paper
     Slides


Virtuelle und Erweiterte Realität / 11. Workshop der GI-Fachgruppe VR/AR

Gabriel Zachmann, René Weller, André Hinkenjann (Hg.)

As an established platform for the exchange of information and ideas within the German-speaking VR/AR community, the workshop offers an ideal setting for presenting and discussing current results and ongoing projects from research and development in front of an expert audience.

Published in:

GI VRAR Workshop 2014, Bremen, Germany

Files:

     Proceedings


Innovative and Contact-free Natural User Interaction with Cars

Mohammad Razavi, Saber Adavi, Muhammed Zaid Alam, Daniel Mohr and Gabriel Zachmann

Within the last two decades, the vehicle industry has fundamentally changed the way humans interact with cars and their embedded systems that provide aid and convenience for the passengers. Today, instead of using an ordinary physical button for each function, cars have multifunctional control devices with hierarchical menus, which demand the visual attention of the driver and are becoming progressively more complex. In our approach, we introduce a contact-free, multimodal interaction system for automobiles to make interactions more natural, attractive, and intuitive. We designed an interactive car driving simulation in which various car functions such as radio, windows, mirrors, and cabin lights were integrated. They are controlled by a combination of speech, natural gestures, and the visibility of objects in the car. This yields a substantial decrease in visual demand and improves robustness and user experience.

Published in:

GI VRAR Workshop 2014, Bremen, Germany

Files:

     Paper

Links:

     Project Homepage


Hand Pose Recognition — Overview and Current Research

Daniel Mohr and Gabriel Zachmann

Vision-based markerless hand tracking has many applications, for instance in virtual prototyping, navigation in virtual environments, tele- and robot-surgery, and video games. It is a very challenging task, due to the real-time requirements, 26 degrees of freedom, high appearance variability, and frequent self-occlusions. Because of that, and because of the many desirable applications, it has received increasing attention in the computer vision community over the past years. A lot of approaches have been proposed to (partially) solve the problem, but no system has been presented yet that can solve the full-DOF hand pose estimation problem robustly in real-time.
The purpose of this article is to present an overview of the approaches that have been presented so far and where future research on hand tracking will probably go.
First, we will explain the challenges in more detail. Second, we will classify the approaches; third, we will describe the most important approaches; and finally, we will show the future directions and give a short overview of our current work.

Published in:

In Brunnett, G., Coquillart, S., van Liere, R., Welch, G., Váša, L. (Eds.): Virtual Realities, Springer, ISBN 978-3-319-17042-8; Revised Selected Papers of the International Dagstuhl Seminar 13241, Germany, June 9-14, 2013.

Files:

     Paper


Scalable Concurrency Control for Massively Collaborative Virtual Environments

Patrick Lange, René Weller, Gabriel Zachmann

We present a novel concurrency control mechanism for collaborative massively parallel virtual environments that allows an arbitrary number of components to exchange data with very little synchronisation overhead. The approach taken here is to maintain the shared world state of the complete virtual environment in a global key-value pool. Our novel method does not use any locking mechanism. Instead, it allows wait-free data access for all concurrent components for both reading and writing operations. This guarantees highly responsive low-latency data access while keeping a consistent system state for all users and system components. Furthermore, our approach is perfectly scalable even for massive multi-user scenarios. We provide a number of benchmarks in this paper, and the results show an almost constant running time, independent of the number of concurrent users. Moreover, our approach outperforms previous concurrency control systems significantly, by more than an order of magnitude.
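The interface of such a global key-value pool can be mimicked with a snapshot-swap stand-in. This Python sketch is not wait-free and is not the paper's implementation; it only shows the access pattern, where readers always see a consistent state without taking any lock:

```python
class KeyValuePool:
    """Interface sketch of a shared world-state pool. A writer builds a
    new immutable snapshot and publishes it with a single reference
    swap; readers dereference whatever snapshot is current. (The
    published system instead uses wait-free hash maps.)"""
    def __init__(self):
        self._snapshot = {}

    def write(self, updates):
        # Copy-on-write: never mutate the snapshot readers may hold.
        nxt = dict(self._snapshot)
        nxt.update(updates)
        self._snapshot = nxt  # single-reference publish

    def read(self, key, default=None):
        # No lock: a reader sees one consistent snapshot throughout.
        return self._snapshot.get(key, default)
```

The essential property shared with the paper's design is that reads never block on writes and always observe a consistent world state.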

Published in:

ACM Multimedia Systems, Massively Multiuser Virtual Environments (MMVE) 2015, Portland, United States, March 18 - 20, 2015.

Files:

     Paper
     Slides


Virtual Reality for Simulating Autonomous Deep-Space Navigation and Mining

P. Lange, A. Probst, A. Srinivas, G. Gonzalez Peytavi, C. Rachuy, A. Schattel, V. Schwarting, J. Clemens, D. Nakath, M. Echim, and G. Zachmann

In accordance with the space exploration goals declared by the National Aeronautics and Space Administration (NASA) in 2010 and 2013, the investigation of the deeper solar system becomes a central objective for upcoming space missions. Within this scheme, technologies and capabilities are developed that enable manned missions beyond low-Earth orbit - to lunar orbit, the lunar surface, or even Mars and beyond. Particularly interesting targets are asteroids. They can serve as test beds for hardware and technology demonstration, which is needed prior to those aspired long-term missions. Asteroids can frequently be reached with smaller energy demands than those required for a mission to the Moon or Mars. Furthermore, they are assumed to contain significant amounts of water and valuable metallic volatiles, which could serve as in-situ supplies for life support systems or spacecraft maintenance. Beyond these technical aspects, asteroids are also very interesting targets from a scientific point of view: they are remainders of the early formation phase of the solar system and are held responsible for bringing life to Earth [DFJ90]. As the trend in future space exploration tends to focus on objects in deep space, the importance of autonomy on board spacecraft increases. With increasing signal travel times due to the great distance to Earth, it is difficult or even impossible to react from the ground to unexpected events for which time is a crucial factor. Up to this date, spacecraft in orbit follow specific timeline procedures during time-critical mission phases or pre-designed protocols in case unknown failures occur. The most common reaction to faults is the safe mode, during which the spacecraft shuts down every on-board module except the vital systems and awaits further (recovery) instructions from Earth ground stations. Hence, there is a demand for closed-loop decision-making processes that are independent of tele-commanding from the ground.
This includes not only the handling of errors but also navigation, guidance, and attitude/orbit control tasks. The focus of this project is therefore to make the spacecraft as independent from the ground station as possible. This shall be achieved by autonomous navigation and autonomous decision making, so that the spacecraft can determine optimal trajectories during flight and select potential target asteroids for mining autonomously. The autonomy of the spacecraft is based on cognitive, biology-inspired algorithms. These algorithms must be assessed before they are applied in real scenarios; therefore, they have to be tested in a virtual environment with different virtual scenarios. This virtual environment should simulate, in real time, the motion of planets and asteroids, gravity, solar pressure, the sensors of the spacecraft, features of the asteroid, collision detection between asteroid and spacecraft for landing, etc. In order to interact with this virtual environment, different 3D interaction metaphors have to be defined so that the user can change physical parameters, visualize different data, create different mission scenarios, change the spacecraft parameters, and even create new asteroid clusters and shapes (generated via 3D procedural modelling), which is necessary as the spacecraft might encounter new, unknown asteroids.
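
The kind of real-time orbital dynamics such a virtual test bed must provide can be illustrated with a minimal sketch (the semi-implicit Euler integrator and all numerical values below are our own illustrative choices, not taken from the project):

```python
import math

G = 6.674e-11  # gravitational constant (m^3 kg^-1 s^-2)

def step(pos, vel, center_mass, dt):
    """One semi-implicit (symplectic) Euler step for a body orbiting a
    point mass at the origin - the kind of per-frame update a real-time
    mission simulator performs."""
    r = math.sqrt(pos[0] ** 2 + pos[1] ** 2 + pos[2] ** 2)
    a_over_r = -G * center_mass / r ** 3      # acceleration vector = a_over_r * pos
    vel = [v + a_over_r * p * dt for v, p in zip(vel, pos)]
    pos = [p + v * dt for p, v in zip(pos, vel)]
    return pos, vel

# Circular orbit setup: v = sqrt(G M / r) balances gravity and inertia.
M_earth = 5.972e24
r0 = 7.0e6
v0 = math.sqrt(G * M_earth / r0)
pos, vel = [r0, 0.0, 0.0], [0.0, v0, 0.0]
for _ in range(1000):
    pos, vel = step(pos, vel, M_earth, 1.0)  # 1 s time step
```

Using a symplectic rather than an explicit integrator keeps the orbital energy (and thus the radius) stable over long interactive sessions, which is why game and VR physics loops typically prefer it.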

Published in:

24th International Conference on Artificial Reality and Telexistence (ICAT-EGVE 2014), Bremen, Germany, December 8 - 10, 2014.

Files:

     Paper
     Poster


Massively-Parallel Proximity Queries for Point Clouds

Massively-Parallel Proximity Queries for Point Clouds

Max Kaluschke, Uwe Zimmermann, Marinus Danzer, Gabriel Zachmann and René Weller

We present a novel massively-parallel algorithm that allows real-time distance computations between arbitrary 3D objects and unstructured point cloud data. Our main application scenario is collision avoidance for robots in highly dynamic environments that are recorded via a Kinect, but our algorithm can easily be generalized to other applications such as virtual reality. Basically, we represent the 3D object by a bounding volume hierarchy (we adopted the Inner Sphere Trees data structure) and process all points of the point cloud in parallel using GPU-optimized traversal algorithms. Additionally, all parallel threads share a common upper bound on the minimum distance, which leads to a very high culling efficiency. We implemented our algorithm using CUDA, and the results show real-time performance for online captured point clouds. Our algorithm outperforms previous CPU-based approaches by more than an order of magnitude.
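
The effect of a shared upper bound can be sketched sequentially on the CPU (a stand-in for the paper's GPU traversal; `min_distance`, the inner-sphere object representation, and the cheap axis-distance bound are illustrative assumptions):

```python
import math

def min_distance(points, spheres):
    """Minimum distance between a point cloud and an object represented by
    inner spheres (center, radius).  All query points share one upper bound
    on the minimum distance, mirroring the paper's culling idea: once the
    bound is small, most point/sphere pairs are rejected early (on the GPU
    the bound would live in shared or global memory)."""
    best = float('inf')                      # shared upper bound
    for px, py, pz in points:
        for (cx, cy, cz), r in spheres:
            # cheap lower bound first: the axis distance cannot beat 'best'
            if abs(px - cx) - r >= best:
                continue                     # culled by the shared bound
            d = math.dist((px, py, pz), (cx, cy, cz)) - r
            if d < best:
                best = d
    return best
```

A negative return value would indicate a point inside a sphere, i.e. penetration.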

Published in:

11th Workshop on Virtual Reality Interaction and Physical Simulation VRIPHYS (2014), Bremen, Germany, September 24 - 25, 2014.

Files:

     Paper
     Slides


Massively Parallel Batch Neural Gas for Bounding Volume Hierarchy Construction

Massively Parallel Batch Neural Gas for Bounding Volume Hierarchy Construction

René Weller, David Mainzer, Abhishek Srinivas, Matthias Teschner and Gabriel Zachmann

Ordinary bounding volume hierarchy (BVH) construction algorithms create BVHs that approximate the boundary of the objects. In this paper, we present a BVH construction that instead approximates the volume of the objects with successively finer levels. It is based on Batch Neural Gas (BNG), a clustering algorithm known from machine learning. Additionally, we present a novel massively parallel version of this BNG-based hierarchy construction that runs completely on the GPU. It reduces the theoretical complexity of the sequential algorithm from O(n log n) to O(log² n), and our CUDA implementation also outperforms the CPU version significantly in practice.
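
The clustering step at the heart of this construction can be sketched sequentially (rank-weighted batch update of all prototypes; the deterministic initialization and the annealing schedule are our own choices, and the GPU hierarchy construction itself is not shown):

```python
import numpy as np

def batch_neural_gas(X, k, iters=30):
    """Minimal Batch Neural Gas: every prototype is updated as a
    rank-weighted average of ALL data points, with the neighborhood
    range lambda annealed towards hard clustering."""
    # deterministic "spread" initialization along the diagonal of the data
    order = np.argsort(X.sum(axis=1))
    W = X[order[np.linspace(0, len(X) - 1, k).astype(int)]].astype(float)
    for t in range(iters):
        lam = (k / 2.0) * 0.01 ** (t / max(iters - 1, 1))  # annealing schedule
        d = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)   # (n, k)
        ranks = np.argsort(np.argsort(d, axis=1), axis=1)  # rank 0 = closest
        h = np.exp(-ranks / lam)                           # neighborhood weights
        W = (h.T @ X) / h.sum(axis=0)[:, None]             # batch update
    return W
```

Because each iteration is a dense rank/weight computation over all point-prototype pairs, the update parallelizes naturally over the GPU, which is what the paper exploits.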

Published in:

11th Workshop on Virtual Reality Interaction and Physical Simulation VRIPHYS (2014), Bremen, Germany, September 24 - 25, 2014.

Files:

     Paper
     Slides


A Framework for Wait-Free Data Exchange in Massively Threaded VR Systems

A Framework for Wait-Free Data Exchange in Massively Threaded VR Systems

Patrick Lange, René Weller, Gabriel Zachmann

A central part of virtual reality systems and game engines is the generation, management, and distribution of all relevant world states. In modern interactive graphics software systems, many independent software components usually need to communicate and exchange data. Standard approaches suffer from the n² problem, because the number of interfaces grows quadratically with the number of component functionalities. Such many-to-many architectures quickly become unmaintainable, not to mention the latencies of standard concurrency control mechanisms. We present a novel method to manage concurrent multithreaded access to shared data in virtual environments. Our highly efficient, low-latency, and lightweight architecture is based on a new wait-free hash map using key-value pairs. This allows us to reduce the traditional many-to-many problem to a simple many-to-one approach. Our results show that our framework outperforms both standard lock-based and modern lock-free methods by more than two orders of magnitude.
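
The architectural reduction can be sketched as a central key-value blackboard (the paper achieves this with a wait-free hash map; the plain lock below is used purely to keep the sketch self-contained and is emphatically not wait-free):

```python
import threading

class Blackboard:
    """Central key-value store through which all components exchange state.
    Each of n components talks only to this one object instead of to n - 1
    peers, turning the quadratic many-to-many interface problem into a
    linear many-to-one one."""
    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def put(self, key, value):
        with self._lock:
            self._data[key] = value

    def get(self, key, default=None):
        with self._lock:
            return self._data.get(key, default)

# Two "components": a physics thread writes a pose, a renderer reads it.
bb = Blackboard()
bb.put("avatar/pose", (1.0, 2.0, 3.0))
pose = bb.get("avatar/pose")
```

Adding a new component then costs one connection to the blackboard rather than one interface per existing component.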

Published in:

International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG), Plzen, Czech Republic, June 2 - 5, 2014. ISBN 978-80-86943-71-8

Files:

     Paper
     Slides


Collision Detection Based on Fuzzy Scene Subdivision

Collision Detection Based on Fuzzy Scene Subdivision

David Mainzer, Gabriel Zachmann

We present a novel approach to perform collision detection queries between rigid and/or deformable models. Our method can handle arbitrary deformations and even discontinuous ones. To this end, we subdivide the whole scene, with all objects, into connected but totally independent parts using a fuzzy clustering algorithm. Subsequently, for every part, our algorithm performs a Principal Component Analysis to obtain the best sweep direction for the sweep-plane step, which greatly reduces the number of false positives. Our collision detection algorithm performs all computations without the need for a bounding volume hierarchy or any other acceleration data structure. One great advantage of this is that our method can handle the broad phase as well as the narrow phase within one single framework. Our collision detection algorithm works directly on all primitives of the whole scene, which results in a simpler implementation and allows much easier integration into other applications. We can compute inter-object and intra-object collisions of rigid and deformable objects consisting of many tens of thousands of triangles in a few milliseconds on a modern computer. We have evaluated its performance using common benchmarks.

Published in:

GPU Computing and Applications, Singapore, 9 Oct 2013, ISBN-13 978-981-287-133-6

Files:

     Paper
     Slides


Poster: Collision Detection Based on Fuzzy Clustering for Deformable Objects on GPUs (CDFC)

David Mainzer, Gabriel Zachmann

We present a novel Collision Detection Based on Fuzzy Clustering for Deformable Objects on GPUs (CDFC) technique to perform collision queries between rigid and/or deformable models. Our method can handle arbitrary deformations and even discontinuous ones. With our approach, we subdivide the scene into connected but totally independent parts by fuzzy clustering, and therefore the algorithm is especially well-suited to GPUs. Our collision detection algorithm performs all computations without the need for a bounding volume hierarchy or any other acceleration data structure. One great advantage of this is that our method can handle the broad phase as well as the narrow phase within one single framework. We can compute inter-object and intra-object collisions of rigid and deformable objects consisting of many tens of thousands of triangles in a few milliseconds on a modern computer. We have evaluated its performance using common benchmarks. In practice, our approach is faster than earlier CPU- and/or GPU-based approaches and as fast as state-of-the-art techniques, while being even more scalable.

Published in:

International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG) - POSTER Proceedings, Plzen, Czech Republic, June 24 - 27, 2013. ISBN 978-80-86943-76-3

Files:

     Paper
     Poster


Fast Sphere Packings with Adaptive Grids on the GPU

Fast Sphere Packings with Adaptive Grids on the GPU

Jörn Teuber, René Weller, Gabriel Zachmann, Stefan Guthe

Polydisperse sphere packings are a new and very promising data representation for several fundamental problems in computer graphics and VR such as collision detection and deformable object simulation. In this paper we present acceleration techniques to compute such sphere packings for arbitrary 3D objects efficiently on the GPU. To do that, we apply different refinement methods for adaptive grids. Our results show a significant speed-up compared to existing approaches.

Published in:

GI VR/AR 2013 (X. Workshop der GI-Fachgruppe VR/AR) Second Place for Best Paper Award

Files:

     Paper
     Slides
     Video


A Methodology for Interactive Spatial Visualization of Automotive Function Architectures for Development and Maintenance

A Methodology for Interactive Spatial Visualization of Automotive Function Architectures for Development and Maintenance

Moritz Cohrs, Stefan Klimke, Gabriel Zachmann

In this paper, we utilize spatial visualization of automotive function architectures to enable novel, improved methodologies and workflows for the development, validation, and service of vehicle functions. We build upon our prior approach for consistent data integration of automotive function architectures with CAD models. We show the benefits of the proposed novel methodologies by applying them to the scenario of developing an automotive signal light system. This demonstrates the capabilities of our new methodology in making function-oriented development much more efficient, as well as in supporting the testing and servicing of vehicle functions.

Published in:

9th International Symposium, ISVC 2013 (International Symposium on Visual Computing), Rethymnon, Crete, Greece, July 29-31, 2013. Proceedings, Part II, George Bebis et al. in: Advances in Visual Computing, Springer, ISBN 978-3-642-41938-6

Files:

     Paper
     Slides


Streamlining Function-oriented Development by Consistent Integration of Automotive Function Architectures with CAD Models

Streamlining Function-oriented Development by Consistent Integration of Automotive Function Architectures with CAD Models

Moritz Cohrs, Stefan Klimke, Gabriel Zachmann

A primary challenge in the automotive industry is the increasing complexity of modern cars, caused by the large amount of vehicle electronics and vehicle functions, which are implemented as mechatronic systems. A promising solution is the relatively new function-oriented development approach, which focuses on the interdisciplinary development of such functions and helps to handle the high complexity of automotive development. At this stage, however, function-oriented development does not fully exploit the capabilities of virtual technologies, which are fairly well-established in automotive product development. One reason in particular is that function-oriented data is not yet integrated with geometric CAD data. Our main contributions begin with an analysis of the data structures of function architecture data and CAD data, and a definition of the requirements for a consistent mapping between these data structures. Moreover, we develop a meta-format that enables a system-independent description and exchange of function architectures. In addition, we carry out a prototypical implementation that shows the applicability of the proposed data integration approach, and we derive new methods that can assist function-oriented development. Finally, we evaluate these methods by means of actual use cases. In summary, our research focuses on the interdisciplinary integration of function architectures with CAD models to create synergies and to enable new, beneficial methods for the spatial visualization and utilization of such data.

Published in:

Computer-Aided Design and Applications, CAD 2013, 2014

Files:

     Paper


Model-Based High-Dimensional Pose Estimation with Application to Hand Tracking

PhD thesis: Model-Based High-Dimensional Pose Estimation with Application to Hand Tracking

Daniel Mohr

This thesis presents several novel techniques for computer vision based full-DOF human hand motion estimation. The most important contributions are a novel resolution-independent and memory efficient representation of hand pose silhouettes that allows to match a hypothesis in near-constant time, a new class of similarity measures that work for nearly arbitrary input modalities, and a novel matching approach that naturally combines a novel template hierarchy with a new image space search method.

Published in:

Staats- und Universitätsbibliothek Bremen, Bremen, 2012.

Files:

     Dissertation


New Geometric Data Structures for Collision Detection

New Geometric Data Structures for Collision Detection

René Weller

We present new geometric data structures for collision detection and more, including: Inner Sphere Trees - the first data structure to compute the penetration volume efficiently; ProtoSphere - a new algorithm to compute space-filling sphere packings for arbitrary objects; Kinetic AABBs - a bounding volume hierarchy that is optimal in the number of updates when the objects deform; and the Kinetic Separation-List - an algorithm that is able to perform continuous collision detection for complex deformable objects in real time. Moreover, we present applications of these new approaches to hand animation, real-time collision avoidance in dynamic environments for robots, and haptic rendering, including a user study that explores the influence of the degrees of freedom in complex haptic interactions. Last but not least, we present a new benchmarking suite for both performance and quality benchmarks, and a theoretical analysis of the running time of bounding-volume-based collision detection algorithms.

Published in:

Extended Version: Springer Series on Touch and Haptic Systems, 2013, ISBN 978-3-319-01020-5.
Original Version: Staats- und Universitätsbibliothek Bremen, 2012.

Files:

     Flyer
     Dissertation


User Performance in Complex Bi-manual Haptic Manipulation with 3 DOFs vs. 6 DOFs

User Performance in Complex Bi-manual Haptic Manipulation with 3 DOFs vs. 6 DOFs

René Weller, Gabriel Zachmann

We present the results of a comprehensive user study that evaluates the influence of the degrees of freedom on the users' performance in complex bi-manual haptic interaction tasks. To do that, we have developed a novel multi-player game that allows the qualitative as well as the quantitative evaluation of different force-feedback devices simultaneously. The game closely resembles typical tasks arising in tele-operation scenarios or virtual assembly simulations; thus, the results of our user study apply directly to real-world industrial applications. The game is based on our new haptic workspace that supports high-fidelity, two-handed multi-user interactions in scenarios containing a large number of dynamically simulated rigid objects; moreover, it works independently of the objects' polygon count. The results of our user study show that 6-DOF force-feedback devices outperform 3-DOF devices significantly, both in user perception and in user performance.

Published in:

IEEE Haptics Symposium 2012, Vancouver, Canada, March 2012.

Files:

     Paper
     Poster
     Eyecatcher
     Teaser Best Teaser Award

Links:

     Project Homepage


A Comparative Evaluation of Three Skin Color Detection Approaches

A Comparative Evaluation of Three Skin Color Detection Approaches

Dennis Jensch, Daniel Mohr and Gabriel Zachmann

Skin segmentation is a challenging task due to several influences such as unknown lighting conditions, skin-colored backgrounds, and camera limitations. Many skin segmentation approaches have been proposed in the past, including adaptive (in the sense of updating the skin color online) and non-adaptive approaches. In this paper, we compare three different skin segmentation approaches. The first is a well-known non-adaptive approach based on a simple, pre-computed skin color distribution. Methods two and three adaptively estimate the skin color in each frame utilizing clustering algorithms. The second approach uses hierarchical clustering for a simultaneous image and color space segmentation, while the third approach is a pure color space clustering, but with a more sophisticated clustering method.

For evaluation, we compared the segmentation results of the approaches against a ground truth dataset. To obtain the ground truth dataset, we labeled about 500 images captured under various conditions.

Published in:

GI VR/AR Workshop 2012, Düsseldorf, Germany.

Extended version in Journal of Virtual Reality and Broadcasting, vol. 12, no. 1, 2015.

Files:

     Paper
     Slides [pptx] Slides [pdf]

Links:

     Project Homepage


Segmentation-Free, Area-Based Articulated Object Tracking

Segmentation-Free, Area-Based Articulated Object Tracking

Daniel Mohr, Gabriel Zachmann

We propose a novel, model-based approach for articulated object detection and pose estimation that does not need any low-level feature extraction or foreground segmentation and thus eliminates this error-prone step. Our approach works directly on the input color image and is based on a new kind of divergence of the color distribution between an object hypothesis and its background. Consequently, we get a color distribution of the target object for free.

We further propose a coarse-to-fine and hierarchical algorithm for fast object localization and pose estimation. Our approach works significantly better than segmentation-based approaches in cases where the segmentation is noisy or fails, e.g. scenes with skin-colored backgrounds or bad illumination that distorts the skin color.

We also present results by applying our novel approach to markerless hand tracking.
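
The core idea of scoring a hypothesis by the divergence between its inside and outside color distributions can be sketched as follows (the chi-square histogram distance and the box-shaped hypothesis are our own stand-ins for the paper's actual measure and hand model):

```python
import numpy as np

def color_divergence(img, box, bins=8):
    """Score an object hypothesis (an axis-aligned box (x0, y0, x1, y1))
    by how strongly the color distribution inside it diverges from the
    background distribution outside it - no segmentation required."""
    x0, y0, x1, y1 = box
    mask = np.zeros(img.shape[:2], dtype=bool)
    mask[y0:y1, x0:x1] = True
    edges = np.linspace(0, 256, bins + 1)
    def hist(pixels):
        # normalized 3D color histogram over the selected pixels
        h, _ = np.histogramdd(pixels.astype(float), bins=(edges, edges, edges))
        return h.ravel() / max(h.sum(), 1)
    p, b = hist(img[mask]), hist(img[~mask])
    return 0.5 * np.sum((p - b) ** 2 / (p + b + 1e-12))
```

A correct hypothesis separates the two distributions sharply and scores high; a hypothesis lying entirely on background scores near zero, and the inside histogram of the best hypothesis directly yields the object's color distribution "for free", as the abstract notes.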

Published in:

7th International Symposium on Visual Computing (ISVC) 2011, Las Vegas, NV, USA.

Files:

     Paper
     Slides [pptx] Slides [pdf]
     Example Video: avi, mov

Links:

     Project Homepage


Adaptive Bitonic Sorting

Adaptive Bitonic Sorting

Gabriel Zachmann

Adaptive bitonic sorting is a sorting algorithm suitable for implementation on EREW parallel architectures. Similar to bitonic sorting, it is based on merging, which is recursively applied to obtain a sorted sequence. In contrast to bitonic sorting, it is data-dependent. Adaptive bitonic merging can be performed in O(n/p) parallel time, p being the number of processors, and executes only O(n) operations in total. Consequently, adaptive bitonic sorting can be performed in O(n log n / p) time, which is optimal. So, one of its advantages is that it executes a factor of O(log n) less operations than bitonic sorting. Another advantage is that it can be implemented efficiently on modern GPUs.
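
For reference, the classic (non-adaptive) bitonic scheme that adaptive bitonic sorting improves upon can be sketched as follows; the data-dependent bitonic-tree merge of the adaptive variant is not shown:

```python
def bitonic_sort(a, ascending=True):
    """Classic bitonic sort on a power-of-two-length list: recursively
    build an ascending and a descending half (together a bitonic
    sequence), then bitonic-merge.  Adaptive bitonic sorting replaces
    this O(n log n) merge network with a data-dependent O(n) merge."""
    n = len(a)
    if n <= 1:
        return a
    first = bitonic_sort(a[:n // 2], True)
    second = bitonic_sort(a[n // 2:], False)
    return _bitonic_merge(first + second, ascending)

def _bitonic_merge(a, ascending):
    n = len(a)
    if n == 1:
        return a
    half = n // 2
    for i in range(half):  # one compare-exchange stage of the merge network
        if (a[i] > a[i + half]) == ascending:
            a[i], a[i + half] = a[i + half], a[i]
    return (_bitonic_merge(a[:half], ascending)
            + _bitonic_merge(a[half:], ascending))
```

Every compare-exchange stage touches independent element pairs, which is exactly what makes the scheme (and its adaptive refinement) map well onto EREW machines and GPUs.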

Published in:

Encyclopedia of Parallel Computing, Springer, 2011, pages 146-157; Padua, David (ed.), ISBN 978-0-387-09765-7

Files:

     First version
     Third version (proof)


3-DOF vs. 6-DOF - Playful Evaluation of Complex Haptic Interactions

3-DOF vs. 6-DOF - Playful Evaluation of Complex Haptic Interactions

René Weller, Gabriel Zachmann

We present a haptic workspace that allows high-fidelity two-handed multi-user interactions in scenarios containing a large number of dynamically simulated rigid objects and a polygon count that is only limited by the capabilities of the graphics card. Based on this workspace, we present a novel multiplayer game that supports qualitative as well as quantitative evaluation of different haptic devices in demanding haptic interaction tasks.

Published in:

IEEE International Conference on Consumer Electronics (ICCE) 2011, Las Vegas, NV, USA.

Files:

     Paper
     Slides [pptx]
     Video from Talk: wmv, mov

Links:

     Project Homepage


Inner Sphere Trees and Their Application to Collision Detection

Inner Sphere Trees and Their Application to Collision Detection

René Weller, Gabriel Zachmann

Collision detection between rigid objects plays an important role in many fields of robotics and computer graphics, e.g. for path-planning, haptics, physically-based simulations, and medical applications.

This chapter contributes the following novel ideas to the area of collision detection:

Published in:

Virtual Realities, Springer, 2011, pages 181-202, Sabine Coquillart and Guido Brunnett and Greg Welch, (ed.) ISBN 978-3-211-99177-0 (Dagstuhl Seminar)

Links:

     Project Homepage


ProtoSphere: A GPU-Assisted Prototype Guided Sphere Packing Algorithm for Arbitrary Objects

ProtoSphere: A GPU-Assisted Prototype Guided Sphere Packing Algorithm for Arbitrary Objects

René Weller, Gabriel Zachmann

We present a new algorithm that is able to efficiently compute a space filling sphere packing for arbitrary objects. It is independent of the object's representation (polygonal, NURBS, CSG,...); the only precondition is that it must be possible to compute the distance from any point to the surface of the object. Moreover, our algorithm is not restricted to 3D but can be easily extended to higher dimensions.

The basic idea is very simple and related to prototype based approaches known from machine learning. This approach directly leads to a parallel algorithm that we have implemented using CUDA. As a byproduct, our algorithm yields an approximation of the object's medial axis that has applications ranging from path-planning to surface reconstruction.
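
A greedy sequential sketch of this idea (the real algorithm moves many prototypes in parallel on the GPU; the candidate-sampling loop, the 2D unit-square "object", and all parameters below are our own illustrative choices):

```python
import math
import random

def pack_spheres(dist_to_surface, samples, n_spheres, seed=0):
    """Greedily insert spheres (circles here, for brevity) into an object
    given only a point-to-surface distance function: each new sphere is
    centered at the sampled point with the largest clearance to the
    surface and to all spheres placed so far."""
    rng = random.Random(seed)
    spheres = []
    for _ in range(n_spheres):
        best_p, best_r = None, -1.0
        for _ in range(samples):
            p = (rng.random(), rng.random())
            r = dist_to_surface(p)                 # clearance to the surface
            for (cx, cy), cr in spheres:           # clearance to placed spheres
                r = min(r, math.hypot(p[0] - cx, p[1] - cy) - cr)
            if r > best_r:
                best_p, best_r = p, r
        spheres.append((best_p, best_r))
    return spheres

# The "object" is the unit square; its surface distance is the border distance.
dist = lambda p: min(p[0], p[1], 1.0 - p[0], 1.0 - p[1])
packing = pack_spheres(dist, samples=500, n_spheres=10)
```

Note that only `dist_to_surface` depends on the object, which is the paper's sole precondition; the sphere centers also trace out an approximation of the medial axis.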

Published in:

Siggraph Asia, Technical Sketches, Seoul, Republic of Korea, December, 2010

Files:

     Paper
     Slides [pptx]
     Video 1 from Talk: wmv, mov
     Video 2 from Talk: wmv, mov
     Video 3 from Talk: wmv, mov
     Video 4 from Talk: wmv, mov
     Video 5 from Talk: wmv, mov
     Video 6 from Talk: wmv, mov

Links:

     Project Homepage


A Benchmarking Suite for 6-DOF Real Time Collision Response Algorithms

A Benchmarking Suite for 6-DOF Real Time Collision Response Algorithms

René Weller, David Mainzer, Gabriel Zachmann, Mikel Sagardia, Thomas Hulin, Carsten Preusche

We propose a benchmarking suite for rigid object collision detection and collision response schemes. It can evaluate both the performance and the quality of the collision response. The former is achieved by densely sampling the configuration space of a large number of highly detailed objects; the latter is achieved by a novel methodology that comprises a number of models for certain collision scenarios. With these models, we compare the force and torque signals both in direction and magnitude.

Our device-independent approach allows objective predictions for physically-based simulations as well as 6-DOF haptic rendering scenarios. In the results, we show a comprehensive example application of our benchmarks, comparing two quite different algorithms. This demonstrates empirically that our methodology can become a standard evaluation framework.

Published in:

Proceedings of the 17th ACM Symposium on Virtual Reality Software and Technology 2010 (VRST' 2010), Hong Kong, November, 2010, 63-70

Files:

     Paper
     Slides

Links:

     Project Homepage


FAST: Fast Adaptive Silhouette Area based Template Matching

FAST: Fast Adaptive Silhouette Area based Template Matching

Daniel Mohr, Gabriel Zachmann

Template matching is a well-proven approach in the area of articulated object tracking. Matching accuracy and computation time of template matching are essential and yet often conflicting goals.

In this paper, we present a novel, adaptive template matching approach based on the silhouette area of the articulated object. With our approach, the ratio between accuracy and speed simply is a modifiable parameter, and, even at high accuracy, it is still faster than a state-of-the-art approach. We approximate the silhouette area by a small set of axis-aligned rectangles. Utilizing the integral image, we can thus compare a silhouette with an input image at an arbitrary position independently of the resolution of the input image. In addition, our rectangle covering yields a very memory efficient representation of templates.

Furthermore, we present a new method to build a template hierarchy optimized for our rectangular representation of template silhouettes.

With the template hierarchy, the complexity of our matching method for n templates is O(log n) and independent of the input resolution. For example, a set of 3000 templates can be matched in 2.3 ms.

Overall, our novel methods are an important contribution to a complete system for tracking articulated objects.
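
The resolution-independent rectangle evaluation rests on the integral image (summed-area table), which can be sketched as follows:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[y, x] = sum of img[:y, :x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum over img[y0:y1, x0:x1] with four lookups - this O(1) evaluation
    per rectangle is what makes matching a rectangle-covered silhouette
    independent of the input image resolution."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
```

A silhouette approximated by k rectangles can thus be compared against any image position with 4k lookups, regardless of how many pixels the silhouette covers.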

Published in:

British Machine Vision Conference, Aberystwyth, United Kingdom, September, 2010, 39.1-39.12

Files:

     Paper
     Poster
     Example Video avi, mov

Links:

     Project Homepage


Silhouette Area Based Similarity Measure for Template Matching in Constant Time

Silhouette Area Based Similarity Measure for Template Matching in Constant Time

Daniel Mohr, Gabriel Zachmann

We present a novel, fast, resolution-independent silhouette area-based matching approach. We approximate the silhouette area by a small set of axis-aligned rectangles. This yields a very memory efficient representation of templates. In addition, utilizing the integral image, we can thus compare a silhouette with an input image at an arbitrary position in constant time.

Furthermore, we present a new method to build a template hierarchy optimized for our rectangular representation of template silhouettes. With the template hierarchy, the complexity of our matching method for n templates is O(log n). For example, we can match a hierarchy consisting of 1000 templates in 1.5 ms. Overall, our contribution constitutes an important piece in the initialization stage of any tracker of (articulated) objects.

Published in:

6th International Conference on Articulated Motion and Deformable Objects, Port d'Andratx, Mallorca, Spain, 2010, 43-54

The original publication is available at Springer Verlag

Files:

     Paper Erratum: Eq. 5 does not take into account all rectangle configurations, i.e. we do not obtain the minimum number of rectangles for all areas.
     Slides
     Example Video WebM

Links:

     Project Homepage


Collision Detection: A Fundamental Technology for Virtual Prototyping

in: Virtual Technologies for Business and Industrial Applications, by N. Raghavendra Rao (ed.); IGI Global, 2010, ch. 3, pp. 36-67.

Published in:

IGI Global, 2010, 36-67


Stable 6-DOF Haptic Rendering with Inner Sphere Trees

Stable 6-DOF Haptic Rendering with Inner Sphere Trees

René Weller, Gabriel Zachmann

Based on our new geometric data structure, the inner sphere trees, we present a fast and stable uniform algorithm for proximity and penetration volume queries between watertight objects at haptic rates.

Moreover, we present a multi-threaded version of the penetration volume computation for time-critical haptic rendering that is based on separation lists and the novel notion of expected overlapping volumes. Finally, we show how to use the penetration volume to compute continuous contact forces and torques that enable a stable rendering of 6-DOF penalty-based distributed contacts.

Published in:

Proceedings of the International Design Engineering Technical Conferences & Computers and Information in Engineering Conference (IDETC/CIE), San Diego, USA, 30 August - 02 September 2009. Virtual Environments and Systems - 2009 Best Paper Award.


Files:

     Slides
     Simulation Video WebM
     Interaction Video WebM
     Bones Video WebM
     758 Video WebM
     Pin in Hole Benchmark Video WebM

Links:

     Project Homepage


Visual Computing for Medical Diagnosis and Treatment

Visual Computing for Medical Diagnosis and Treatment

Jan Klein, Ola Friman, Markus Hadwiger, Bernhard Preim, Felix Ritter, Anna Vilanova, Gabriel Zachmann, Dirk Bartz

Diagnostic algorithms and efficient visualization techniques are of major importance for pre-operative decisions, intra-operative imaging and image-guided surgery. Complex diagnostic decisions are characterized by a high information flow and fast decisions, requiring efficient and intuitive presentation of complex medical data and precision in the visualization. For intra-operative medical treatment, the pre-operative visualization results of the diagnostic systems have to be transferred to the patient on the operation room table. Via augmented reality, additional information of the hidden regions can be displayed virtually. This state-of-the-art report summarizes visual computing algorithms for medical diagnosis and treatment. After starting with direct volume rendering and tagged volume rendering as general techniques for visualizing anatomical structures, we go into more detail by focusing on the visualization of tissue and vessel structures. Afterwards, algorithms and techniques that are used for medical treatment in the context of image-guided surgery, intra-operative imaging and augmented reality, are discussed and reviewed.

Published in:

Computers & Graphics, Vol. 33, Issue 4, August 2009, pp. 554-565.

Files:

      Preliminary version of the paper


A Unified Approach for Physically-Based Simulations and Haptic Rendering

A Unified Approach for Physically-Based Simulations and Haptic Rendering

René Weller, Gabriel Zachmann

Since the visual feedback and effects of today's games have become extremely mature, it will be more and more important for games to provide realistic feedback to other senses, such as our haptic sense. On the hardware side, this has become possible in recent years through the advent of the first inexpensive haptic devices on the consumer market, such as the Falcon from Novint. Research on force-feedback devices and algorithms has been conducted for over 10 years, but has only fairly recently been introduced to games.

However, while there is a large body of research on how to render forces given a collision and its contact information, the computation of the latter for massive models is still a challenge. First of all, this is due to the much higher effort to compute contact information. Second, this is due to the update rates that are necessary for haptic rendering, which need to be much higher than for visual rendering, i.e., 250-1000 Hz. And third, defining the contact information such that continuous contact forces can be derived is not always obvious.

Therefore, one of the major challenges in haptic rendering for games is the computation of continuous forces at haptic rates. A solution to this challenge can also be utilized to do physically-based simulation of rigid bodies, which has become increasingly popular in games over the past few years.

In this paper, we take advantage of the fact that in rendering haptic forces, as well as in most real-time applications that involve physically-based simulation, an absolutely correct determination of the forces acting on the virtual objects is not necessary.

Published in:

ACM SIGGRAPH Video Game Proceedings, New Orleans, USA, August 2009.

Files:

     Paper
     Slides
     Simulation Video wmv, mov
     Interaction Video wmv, mov
     Armadillo Video wmv, mov
     Screwdriver Video wmv, mov
     Bozzle Video wmv, mov

Links:

     Project Homepage


Inner Sphere Trees for Proximity and Penetration Queries

Inner Sphere Trees for Proximity and Penetration Queries

René Weller, Gabriel Zachmann

We present a novel geometric data structure for approximate collision detection at haptic rates between rigid objects. Our data structure, which we call inner sphere trees, supports different kinds of queries, namely proximity queries and a new method for interpenetration computation, the penetration volume, which is related to the water displacement of the overlapping region and thus corresponds to a physically motivated force. The main idea is to bound objects from the inside with a set of non-overlapping spheres. Based on such sphere packings, an "inner bounding volume hierarchy" can be constructed. In order to do so, we propose to use an AI clustering algorithm, which we extend and adapt here. The results show performance at haptic rates for both proximity and penetration volume queries, for models consisting of hundreds of thousands of polygons.
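
Because both objects are filled with non-overlapping spheres, the penetration volume can be approximated by summing the exact overlap (lens) volumes of all intersecting sphere pairs; the closed-form pairwise volume is:

```python
import math

def sphere_overlap_volume(r1, r2, d):
    """Exact overlap (lens) volume of two spheres with radii r1, r2 whose
    centers are a distance d apart.  Summed over all overlapping
    inner-sphere pairs of two objects, this approximates the penetration
    volume, i.e. the displaced water of the overlap region."""
    if d >= r1 + r2:
        return 0.0                            # disjoint spheres
    if d <= abs(r1 - r2):                     # one sphere inside the other
        r = min(r1, r2)
        return 4.0 / 3.0 * math.pi * r ** 3
    return (math.pi * (r1 + r2 - d) ** 2
            * (d * d + 2 * d * (r1 + r2) - 3 * (r1 - r2) ** 2)
            / (12 * d))
```

For example, two unit spheres whose centers are one radius apart overlap in a lens of volume 5π/12.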

Published in:

2009 Robotics: Science and Systems Conference (RSS) , Seattle, USA, June 28 - July 01 2009.

Files:

     Paper
     Poster
     Technical Report

Links:

     Project Homepage


Continuous Edge Gradient-Based Template Matching for Articulated Objects

Continuous Edge Gradient-Based Template Matching for Articulated Objects

Gabriel Zachmann, Daniel Mohr

In this paper, we propose a novel edge gradient based template matching method for object detection. In contrast to other methods, ours does not perform any binarization or discretization during the online matching. This is facilitated by a new continuous edge gradient similarity measure. Its main components are a novel edge gradient operator, which is applied to query and template images, and the formulation as a convolution, which can be computed very efficiently in Fourier space.
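The key efficiency argument, formulating the similarity as a convolution evaluated in Fourier space, can be sketched as follows (a NumPy toy on raw intensities; the paper's operator works on continuous edge-gradient images, and all names here are our own):

```python
import numpy as np

def fft_correlate(query, template):
    """Dense sliding dot-product of template against query via FFT:
    equivalent to evaluating an (un-normalized) similarity at every
    template position, but in O(N log N) instead of O(N * M)."""
    H, W = query.shape
    h, w = template.shape
    # cross-correlation theorem: corr = IFFT( FFT(query) * conj(FFT(template)) )
    F_q = np.fft.rfft2(query, s=(H, W))
    F_t = np.fft.rfft2(template, s=(H, W))
    score = np.fft.irfft2(F_q * np.conj(F_t), s=(H, W))
    # keep only positions where the template fits without wrap-around
    return score[:H - h + 1, :W - w + 1]

# demo: locate a small patch inside a larger image
rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
tmpl = img[20:28, 33:41].copy()
score = fft_correlate(img, tmpl)
y, x = np.unravel_index(np.argmax(score), score.shape)
```

The maximum of the correlation map lands at the patch's true position, illustrating why the convolution formulation makes dense matching cheap.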

Published in:

International Conference on Computer Vision Theory and Applications (VISAPP) , Lisbon, Portugal, February 05-09, 2009.

Files:

     Paper
     Slides
     Technical Report
     Video 1 (divx), Video 2 (divx), Video 3 (divx)

Links:

     Project Homepage


Segmentation of Distinct Homogeneous Color Regions in Images

Gabriel Zachmann, Daniel Mohr

In this paper, we present a novel algorithm to detect homogeneous color regions in images. We show its performance by applying it to skin detection. In contrast to previously presented methods, we use only a rough skin direction vector instead of a static skin model as a priori knowledge. Thus, higher robustness is achieved in images captured under unconstrained conditions. We formulate the segmentation as a clustering problem in color space. A homogeneous color region in image space is modeled using a 3D Gaussian distribution. Parameters of the Gaussians are estimated using the EM algorithm with spatial constraints. We transform the image by a whitening transform and then apply a fuzzy k-means algorithm to the hue value in order to obtain initialization parameters for the EM algorithm. A divisive hierarchical approach is used to determine the number of clusters. The stopping criterion for further subdivision is based on the edge image. For evaluation, the proposed method is applied to skin segmentation and compared with a well-known method.

Published in:

The 12th International Conference on Computer Analysis of Images and Patterns (CAIP), Vienna, Austria, August 27-29, 2007

Files:

     Paper
     Slides
     Video (divx)
     Video (mov)

Links:

     Project Homepage


IEEE VR2007 Workshop on Trends and Issues in Tracking for Virtual Environments

IEEE VR2007 Workshop on "Trends and Issues in Tracking for Virtual Environments"

Gabriel Zachmann (ed.)

The goal of this half-day workshop is to bring together researchers and practitioners from industry working in the area of tracking, and to talk about making tracking actually work. To that end, the workshop provides a broad picture of the current state of the art, the various technologies available, and the open issues for further research and development.

Published in:

February 2007, Shaker Verlag, Aachen, Germany, ISBN 978-3-8322-5967-9

Files:

     Home page of the Workshop
     Buy a copy of the workshop proceedings online (hard-copy for 25€; electronic version for 3€) from the publisher
     You can also ask me at zach at cs.uni-bremen.de -- I've still got some copies for sale, left over from the conference ;-)


A Benchmarking Suite for Static Collision Detection Algorithms

Sven Trenkel, René Weller, Gabriel Zachmann

In this paper, we present a benchmarking suite that allows a systematic comparison of pairwise static collision detection algorithms for rigid objects. The benchmark generates a number of positions and orientations for a predefined distance. We implemented the benchmarking procedure and compared a large number of freely available collision detection algorithms.

Published in:

International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG), Plzen, Czech Republic, January 29 - February 1, 2007.

Files:

     Paper
     Slides
     Video of configuration generation avi, mov

For further information please visit our project homepage.


Kinetic Separation Lists for Continuous Collision Detection of Deformable Objects

Kinetic Separation Lists for Continuous Collision Detection of Deformable Objects

Gabriel Zachmann, René Weller

We present a new acceleration scheme for continuous collision detection of objects under arbitrary deformations. Both pairwise and self collision detection are presented. This scheme is facilitated by a new acceleration data structure, the kinetic separation list. The event-based approach of our kinetic separation list enables us to transform the continuous problem into a discrete one. Thus, the number of updates of the bounding volume hierarchies as well as the number of bounding volume checks can be reduced significantly. We performed a comparison of our kinetic approaches with the classical swept volume algorithm. The results show that our algorithm performs up to fifty times faster in practically relevant scenarios.

Published in:

Third Workshop in Virtual Reality Interactions and Physical Simulation (Vriphys), Madrid, Spain, November 6 - 7, 2006.

Files:

     Paper
     Slides


Kinetic Bounding Volume Hierarchies for Collision Detection of Deformable Objects

Gabriel Zachmann, René Weller

We present novel algorithms for updating bounding volume hierarchies of objects undergoing arbitrary deformations. To this end, we introduce two new data structures, the kinetic AABB tree and the kinetic BoxTree. The event-based approach of the kinetic data structures framework enables us to show that our algorithms are optimal in the number of updates. Moreover, we show a lower bound for the total number of BV updates, which is independent of the number of frames. We used our kinetic bounding volume hierarchies for collision detection and performed a comparison with the classical bottom-up update method. The results show that our algorithms perform up to ten times faster in practically relevant scenarios.

Published in:

ACM Int'l Conf. on Virtual Reality Continuum and Its Applications (VRCIA), Hong Kong, China, June 14-17, 2006.

Files:

     Paper
     Slides
     Technical Report


A Model for the Expected Running Time of Collision Detection using AABB Trees

A Model for the Expected Running Time of Collision Detection using AABB Trees

René Weller, Jan Klein, Gabriel Zachmann

In this paper, we propose a model to estimate the expected running time of hierarchical collision detection that utilizes AABB trees, which are a frequently used type of bounding volume (BV). We show that the average running time for the simultaneous traversal of two binary AABB trees depends on two characteristic parameters: the overlap of the root BVs and the BV diminishing factor within the hierarchies. With this model, we show that the average running time is in O(n) or even in O(log n) for realistic cases. Finally, we present some experiments that confirm our theoretical considerations. We believe that our results are interesting not only from a theoretical point of view, but also for practical applications, e.g., in time-critical collision detection scenarios where our running time prediction could help to make the best use of CPU time available.
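As a toy illustration of the model (not the paper's derivation; the parameters `p_root` and `alpha` are our stand-ins for the root-BV overlap and the BV diminishing factor), one can tabulate the expected number of visited node pairs level by level: whenever the expected branching 4p stays below 1, the sum stays bounded, matching the sub-linear regimes described above.

```python
def expected_visited_pairs(depth, p_root, alpha):
    """Toy running-time model for simultaneous traversal of two binary
    AABB trees: every visited pair spawns 4 child pairs, each of which
    overlaps with a probability that shrinks by the 'diminishing factor'
    alpha per level. Illustrative only -- the paper derives the actual
    per-level probabilities from the hierarchy geometry."""
    total, pairs, p = 1.0, 1.0, p_root
    for _ in range(depth):
        pairs *= 4.0 * p          # expected number of overlapping child pairs
        total += pairs
        p *= alpha                # overlap probability diminishes per level
    return total
```

With `4 * p_root < 1` and `alpha <= 1`, the geometric sum converges to a constant independent of the depth, which is the intuition behind the logarithmic average case.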

Published in:

12th Eurographics Symposium on Virtual Environments (EGVE), Lisbon, Portugal, May 8-10, 2006.

Files:

     Paper (on-screen version)
     Paper (print version)
     Slides


Book: Geometric Data Structures for Computer Graphics

Book: Geometric Data Structures for Computer Graphics

Elmar Langetepe, Gabriel Zachmann

Links:

Buy from the publisher (formerly published by AK Peters), or from Amazon

Published in:

A K Peters, 2006

Here is chapter 7, and the table of contents, of the book.


GPU-ABiSort: Optimal Parallel Sorting on Stream Architectures

GPU-ABiSort: Optimal Parallel Sorting on Stream Architectures

Alexander Greß, Gabriel Zachmann

In this paper, we present a novel approach for parallel sorting on stream processing architectures. It is based on adaptive bitonic sorting. For sorting n values utilizing p stream processor units, this approach achieves the optimal time complexity O ((n log n)/p). While this makes our approach competitive with common sequential sorting algorithms not only from a theoretical viewpoint, it is also very fast from a practical viewpoint. This is achieved by using efficient linear stream memory accesses (and by combining the optimal time approach with algorithms optimized for small input sequences). We present an implementation on modern programmable graphics hardware (GPUs). On recent GPUs, our optimal parallel sorting approach has shown to be remarkably faster than sequential sorting on the CPU, and it is also faster than previous non-optimal sorting approaches on the GPU for sufficiently large input sequences. Because of the excellent scalability of our algorithm with the number of stream processor units p (up to n / log2 n or even n / log n units, depending on the stream architecture), our approach profits heavily from the trend of increasing number of fragment processor units on GPUs, so that we can expect further speed improvement with upcoming GPU generations.
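For reference, the classic bitonic sorting network, the O(n log² n) ancestor of the adaptive bitonic sort used here, can be emulated sequentially in a few lines of Python; adaptive bitonic sorting improves on it by restructuring the merges to reach O(n log n) total work:

```python
def bitonic_sort_network(a):
    """Sequential emulation of the classic bitonic sorting network.
    Each (k, j) round is a set of independent compare-exchanges, which
    is what makes the network parallelizable on stream hardware.
    The input length must be a power of two."""
    n = len(a)
    assert n & (n - 1) == 0, "length must be a power of two"
    k = 2
    while k <= n:                 # size of the bitonic sequences being merged
        j = k // 2
        while j >= 1:             # compare-exchange distance within a merge
            for i in range(n):
                l = i ^ j
                if l > i:
                    ascending = (i & k) == 0
                    if (a[i] > a[l]) == ascending:
                        a[i], a[l] = a[l], a[i]
            j //= 2
        k *= 2
    return a
```

Every inner `for` loop over `i` is data-parallel, which is exactly the structure a GPU stream processor exploits.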

Published in:

Proc. 20th IEEE International Parallel and Distributed Processing Symposium (IPDPS), Rhodes Island, Greece, April 25 - 29, 2006.

Files:

     Paper
     Technical Report


Space-Efficient FPGA-Accelerated Collision Detection for Virtual Prototyping

Space-Efficient FPGA-Accelerated Collision Detection for Virtual Prototyping

Andreas Raabe, Stefan Hochgürtel, Gabriel Zachmann, Joachim K. Anlauf

We present a space-efficient, FPGA-optimized architecture to detect collisions among virtual objects. The design consists of two main modules, one for traversing a hierarchical acceleration data structure, and one for intersecting triangles. This paper focuses on the former. The design is based on a novel algorithm for testing discretely oriented polytopes for overlap in 3D space. In addition, we derive a new overlap test algorithm that can be implemented using fixed-point arithmetic without producing false negatives and with bounded error. SystemC simulation results on different levels of abstraction show that real-time collision detection of complex objects at rates required by force-feedback and physically-based simulations can be obtained. In addition, synthesis results show that the design can still be fitted into a six-million-gate FPGA. Furthermore, we compare our FPGA-based design with a fully parallelized ASIC-targeted architecture and a software implementation.

Published in:

Design Automation and Test in Europe (DATE), Munich, Germany, March 6 - 10, 2006.

Files:

     Paper
     Slides


Hardware-Accelerated Collision Detection using Bounded-Error Fixed-Point Arithmetic

Hardware-Accelerated Collision Detection using Bounded-Error Fixed-Point Arithmetic

Andreas Raabe, Stefan Hochgürtel, Gabriel Zachmann, Joachim K. Anlauf

A novel approach for highly space-efficient hardware-accelerated collision detection is presented. This paper focuses on the architecture to traverse bounding volume hierarchies in hardware. It is based on a novel algorithm for testing discretely oriented polytopes (DOPs) for overlap, utilizing only fixed-point (i.e., integer) arithmetic. We derive a bound on the deviation from the mathematically correct result and give formal proof that no false negatives are produced. Simulation results show that real-time collision detection of complex objects at rates required by force-feedback and physically-based simulations can be obtained. In addition, synthesis results prove the architecture to be highly space efficient. We compare our FPGA-optimized design with a fully parallelized ASIC-targeted architecture and a software implementation.

Published in:

Proceedings of WSCG 2006, Plzen, Czech Republic, January 30 - February 3, 2006, 17-24

Files:

     Paper
     Slides


Patent on Collision Detection

Patent on Collision Detection

Gabriel Zachmann

The present invention relates to a process and a device for the collision detection of objects by traversal of hierarchical binary bounding BoxTrees, in which each bounding box pair of a hierarchically lower level is derived from a bounding box of the immediately higher level by cutting off two sub-volumes with two parallel cut-planes. For the collision detection of a first and a second object, each second bounding box of the bounding BoxTree of the second object is to be checked for overlap with a first bounding box of the bounding BoxTree of the first object. To this end, an auxiliary bounding box is computed which is axis-aligned in the object coordinate system of the first object and encloses the second bounding box with minimal volume; the overlap check is conducted with this auxiliary box instead of with the second bounding box, and the computation results from the level immediately above are reused for computing the auxiliary bounding boxes of lower levels. The process makes quick collision detection possible with low memory requirements.
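The auxiliary-bounding-box step can be illustrated with the standard "absolute rotation matrix" construction for enclosing a transformed axis-aligned box (a generic sketch of the idea, not the patent's exact incremental procedure):

```python
import numpy as np

def enclosing_aabb(center, half_extents, R, t):
    """Smallest axis-aligned box (in the frame of object A) enclosing an
    axis-aligned box of object B after rotation R and translation t.
    The classic trick: the new half-extents are abs(R) @ half_extents,
    because each output axis picks up the absolute contributions of all
    input axes."""
    c = R @ np.asarray(center, float) + np.asarray(t, float)
    h = np.abs(R) @ np.asarray(half_extents, float)
    return c - h, c + h
```

A box with half-extents (1, 2, 3) rotated 90° about z gets an enclosing axis-aligned box with half-extents (2, 1, 3), as expected.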

Published in:

Patent for "Process and Device for Collision Detection of Objects", 2005

Links:

     first page
     US Patent


The Expected Running Time of Hierarchical Collision Detection

The Expected Running Time of Hierarchical Collision Detection

Jan Klein, Gabriel Zachmann

We propose a theoretical approach to analyze the average-case running time of hierarchical collision detection that utilizes bounding volume hierarchies.

Published in:

SIGGRAPH 2005, Poster, Los Angeles, CA, USA, August 2005.

Files:

     Poster
     One-Page Summary.
     Supplemental Material
     Slides


Hardware Accelerated Collision Detection --- An Architecture and Simulation Results

Hardware Accelerated Collision Detection --- An Architecture and Simulation Results

Andreas Raabe and Blazej Bartyzel and Gabriel Zachmann and Joachim K. Anlauf

We present a hardware architecture for a single-chip acceleration of an efficient hierarchical collision detection algorithm, as well as simulation results for collision queries using this architecture. The architecture consists of two main stages, one for simultaneously traversing a hierarchy of discretely oriented polytopes, and one for intersecting triangles. Within each stage, the architecture is deeply pipelined and parallelized. For the first stage, we compare and evaluate different traversal schemes for bounding volume hierarchies. A simulation in VHDL shows that a hardware implementation can offer a speed-up over a software implementation by orders of magnitude. Thus, real-time collision detection of complex objects at rates required by force-feedback and physically-based simulations can be achieved.

Published in:

IEEE Xplore, Plzen, Czech Republic, 2006, 17-24.

Files:

     Paper
     Slides


Interpolation Search for Point Cloud Intersection

Interpolation Search for Point Cloud Intersection

Jan Klein, Gabriel Zachmann

We present a novel algorithm to compute intersections of two point clouds. It can be used to detect collisions between implicit surfaces defined by two point sets, or to construct their intersection curves. Our approach utilizes a proximity graph that allows for quick interpolation search of a common zero of the two implicit functions. First, pairs of points from one point set are constructed, bracketing the intersection with the other surface. Second, an interpolation search along shortest paths in the graph is performed. Third, the solutions are refined. For the first and third step, randomized sampling is utilized.
We show that the number of evaluations of the implicit function and the overall runtime is in O(loglogN) in the average case, where N is the point cloud size. The storage is bounded by O(N).
Our measurements show that we achieve a speedup by an order of magnitude compared to a recently proposed randomized sampling technique for point cloud collision detection.
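The 1D core of the interpolation search, probing the linearly interpolated zero crossing of a bracketed sign change, can be sketched as follows (our own simplification; the paper performs this search along shortest paths in the proximity graph, where the function is the other surface's implicit function):

```python
def interpolation_root_search(f, lo, hi, eps=1e-9, max_iter=100):
    """Find a zero of f on [lo, hi], given f(lo) and f(hi) of opposite
    sign, by repeatedly probing the linearly interpolated crossing
    (regula falsi). Interpolating instead of bisecting is what buys
    the fast average-case convergence."""
    f_lo, f_hi = f(lo), f(hi)
    assert f_lo * f_hi <= 0, "the interval must bracket a sign change"
    mid = lo
    for _ in range(max_iter):
        # probe where the chord through the bracketing values crosses zero
        mid = lo - f_lo * (hi - lo) / (f_hi - f_lo)
        f_mid = f(mid)
        if abs(f_mid) < eps:
            return mid
        if f_lo * f_mid < 0:      # zero lies in the left sub-interval
            hi, f_hi = mid, f_mid
        else:                     # zero lies in the right sub-interval
            lo, f_lo = mid, f_mid
    return mid
```

On a smooth function such as x² - 2 over [0, 2] the probe converges to √2 in a handful of evaluations.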

Published in:

Proceedings of WSCG 2005, Plzen, Czech Republic, January 31 - February 7, 2005, 163-170

Files:

     Paper (print-version)
     Paper (onscreen-version)
     Slides


Shader Maker

Shader Maker

Markus Kramer, René Weller, Gabriel Zachmann

Actually, this is not a regular publication, but a software release.

Shader Maker is a simple, cross-platform GLSL editor. It works on Windows, Linux, and Mac OS X.

It provides the basics of a shader editor, such that students can get started with writing their own shaders as quickly as possible. This includes: syntax highlighting in the GLSL editors; geometry shader editor (as well as vertex and fragment shader editors, of course); interactive editing of the uniform variables; light source parameters; pre-defined simple shapes (e.g., torus et al.) and a simple OBJ loader; and a few more.

For download and further information, please visit our project website.


Hardware-Accelerated Ambient Occlusion Computation

Hardware-Accelerated Ambient Occlusion Computation

M. Sattler and R. Sarlette and G. Zachmann and R. Klein

In this paper, we present a novel, hardware-accelerated approach to compute the visibility between surface points and directional light sources.
Thus, our method provides a first-order approximation of the rendering equation in graphics hardware. This is done by accumulating depth tests of vertex fragments as seen from a number of light directions. Our method does not need any preprocessing of the scene elements and introduces no memory overhead.
Besides handling large polygonal models, the method is also suitable for deformable or animated objects under time-varying high-dynamic-range illumination at interactive frame rates.
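The accumulation idea can be mimicked on the CPU with a tiny point-based depth test per light direction (a rough sketch with our own names and a crude grid in place of a real depth buffer; the paper performs the equivalent depth tests in graphics hardware):

```python
import numpy as np

def directional_visibility(points, light_dir, res=64):
    """For one directional light, 'render' the points into a depth grid
    along the light direction and mark a point lit if nothing in its
    grid cell is closer to the light (smaller projected depth).
    Accumulating this over many directions approximates ambient
    occlusion / the visibility term of the rendering equation."""
    d = np.asarray(light_dir, float)
    d /= np.linalg.norm(d)
    # build any orthonormal frame (u, v, d) for the light's image plane
    u = np.cross(d, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-6:
        u = np.cross(d, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(d, u)
    P = np.asarray(points, float)
    xy = np.stack([P @ u, P @ v], axis=1)
    depth = P @ d
    # quantize image-plane coordinates to grid cells
    lo = xy.min(axis=0)
    span = np.maximum(xy.max(axis=0) - lo, 1e-12)
    cells = np.minimum(((xy - lo) / span * res).astype(int), res - 1)
    zbuf = np.full((res, res), np.inf)
    for (cx, cy), z in zip(cells, depth):
        zbuf[cx, cy] = min(zbuf[cx, cy], z)
    return depth <= zbuf[cells[:, 0], cells[:, 1]] + 1e-9
```

Two points stacked along the light direction land in the same cell, and only the nearer one passes the depth test.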

Published in:

Proceedings of VMV 2004, November 16 - 18, 2004, 119-135

Files:

     Paper
     Video


Collision Detection for Deformable Objects

Collision Detection for Deformable Objects

M. Teschner and S. Kimmerle and B. Heidelberger and G. Zachmann and L. Raghupathi and A. Fuhrmann and M.-P. Cani and F. Faure and N. Magnenat-Thalmann and W. Strasser and P. Volino

Interactive environments for dynamically deforming objects play an important role in surgery simulation and entertainment technology. These environments require fast deformable models and very efficient collision handling techniques. While collision detection for rigid bodies is well-investigated, collision detection for deformable objects introduces additional challenging problems. This paper focuses on these aspects and summarizes recent research in the area of deformable collision detection. Various approaches based on bounding volume hierarchies, distance fields, and spatial partitioning are discussed. Further, image-space techniques and stochastic methods are considered. Applications in cloth modeling and surgical simulation are presented.

Published in:

Computer Graphics Forum (Eurographics Proc.), 2005, 24(1), 61-81.

Files:

     Paper
     Slides


Point Cloud Surfaces using Geometric Proximity Graphs

Point Cloud Surfaces using Geometric Proximity Graphs

Jan Klein and Gabriel Zachmann

We present a new definition of an implicit surface over a noisy point cloud, based on the weighted least squares approach. It can be evaluated very fast, yet artifacts are significantly reduced.
We propose to use a different kernel function that approximates geodesic distances on the surface by utilizing a geometric proximity graph. From a variety of possibilities, we have examined the Delaunay graph and the sphere-of-influence graph (SIG), for which we propose several extensions.
The proximity graph also allows us to estimate the local sampling density, which we utilize to automatically adapt the bandwidth of the kernel and to detect boundaries. Consequently, our method is able to handle point clouds of varying sampling density without manual tuning.
Our method can be integrated into other surface definitions, such as moving least squares, so that these benefits carry over.

Published in:

Computers & Graphics, December, 2004, 28(6), 839-850

Files:

     Paper


Point Cloud Collision Detection

Point Cloud Collision Detection

Jan Klein and Gabriel Zachmann

In the past few years, many efficient rendering and surface reconstruction algorithms for point clouds have been developed. However, collision detection of point clouds has not been considered until now, although this is a prerequisite to use them for interactive or animated 3D graphics.
We present a novel approach for time-critical collision detection of point clouds. Based solely on the point representation, it can detect intersections of the underlying implicit surfaces. The surfaces do not need to be closed.
We construct a point hierarchy where each node stores a sufficient sample of the points plus a sphere covering of a part of the surface. These are used to derive criteria that guide our hierarchy traversal so as to increase convergence. One of them can be used to prune pairs of nodes, the other one is used to prioritize still to be visited pairs of nodes. At the leaves we efficiently determine an intersection by estimating the smallest distance.
We have tested our implementation for several large point cloud models. The results show that a very fast and precise answer to collision detection queries can always be given.

Published in:

Eurographics 2004, Grenoble, September, 2004

Files:

     Paper
     Slides


Nice and Fast Implicit Surfaces over Noisy Point Clouds

Nice and Fast Implicit Surfaces over Noisy Point Clouds

Jan Klein and Gabriel Zachmann

We propose a new definition of the implicit surface for a noisy point cloud that allows for high-quality reconstruction of the surface in all cases. It is based on proximity graphs that provide a more topology-based measure for proximity of points. The new definition can be evaluated very fast, but, unlike other definitions based on the weighted least squares approach, it does not suffer from artifacts.

Published in:

SIGGRAPH 2004, Sketches and Applications, Los Angeles, CA, USA, August, 2004

Files:

     Paper
     Slides


Object-Space Interference Detection on Programmable Graphics Hardware

Object-Space Interference Detection on Programmable Graphics Hardware

Alexander Gress and Gabriel Zachmann

We present a novel method for checking the intersection of polygonal models on graphics hardware utilizing its SIMD, occlusion query, and floating point texture capabilities. It consists of two stages: traversal of bounding volume hierarchies, thus quickly determining potentially intersecting sets of polygons, and the actual intersection tests, resulting in lists of intersecting polygons. Unlike previous methods, our method does all computations in object space and does not make any requirements on connectivity or topology.

Published in:

Informatik II, University Bonn, Germany, 2004

Files:

     Paper

Link:

     Paper and Slides from the GD'03 conference.


Proximity Graphs for Defining Surfaces over Point Clouds

Proximity Graphs for Defining Surfaces over Point Clouds

Jan Klein and Gabriel Zachmann

We present a new definition of an implicit surface over a noisy point cloud. It can be evaluated very fast, but, unlike other definitions based on the moving least squares approach, it does not suffer from artifacts. In order to achieve robustness, we propose to use a different kernel function that approximates geodesic distances on the surface by utilizing a geometric proximity graph. The starting point in the graph is determined by approximate nearest neighbor search. From a variety of possibilities, we have examined the Delaunay graph and the sphere-of-influence graph (SIG). For both, we propose to use modifications, the r-SIG and the pruned Delaunay graph. We have implemented our new surface definition as well as a test environment which allows us to visualize and evaluate the quality of the surfaces. We have evaluated the different surfaces induced by different proximity graphs. The results show that artifacts and the root mean square error are significantly reduced.

Published in:

Symposium on Point-Based Graphics, ETHZ, Zürich, Switzerland, June 2 - 4, 2004

Files:

     Paper
     Slides


Visual-fidelity dataglove calibration

Visual-fidelity dataglove calibration

Ferenc Kahlesz and Gabriel Zachmann and Reinhard Klein

This paper presents a novel calibration method for datagloves with many degrees of freedom (Gloves that measure at least 2 flexes per finger plus abduction/adduction, eg. Immersion's Cyberglove). The goal of our method is to establish a mapping from the sensor values of the glove to the joint angles of an articulated hand that is of "high visual" fidelity. This is in contrast to previous methods that aim at determining the absolute values of the real joint angles with high accuracy. The advantage of our method is that it can be simply carried through without the need for auxiliary calibration hardware (such as cameras), while still producing visually correct mappings. To achieve this, we developed a method that explicitly models the cross-couplings of the abduction sensors with the neighboring flex sensors. The results show that our method performs superior to linear calibration in most cases.

Published in:

Computer Graphics International (CGI), Crete, Greece, June 16-19, 2004

Files:

     Paper
     Slides


Consistent Normal Orientation for Polygonal Meshes

Consistent Normal Orientation for Polygonal Meshes

Pavel Borodin and Gabriel Zachmann and Reinhard Klein

In this paper, we propose a new method that can consistently orient all normals of any mesh (if at all possible), while ensuring that most polygons are seen with their front-faces from most viewpoints. Our algorithm combines the proximity-based with a new visibility-based approach. Thus, it virtually eliminates the problems of proximity-based approaches, while avoiding the limitations of previous solid-based approaches. Our new method builds a connectivity graph of the patches of the model, which encodes the "proximity" of neighboring patches. In addition, it augments this graph with two visibility coefficients for each patch. Based on this graph, a global consistent orientation of all patches is quickly found by a greedy optimization. We have tested our new method with a large suite of models, many of which from the automotive industry. The results show that almost all models can be oriented consistently and sensibly using our new algorithm.

Published in:

Computer Graphics International (CGI), Crete, Greece, June 16-19, 2004

Files:

     Paper
     Slides


ADB-Trees: Controlling the Error of Time-Critical Collision Detection

ADB-Trees: Controlling the Error of Time-Critical Collision Detection

Jan Klein and Gabriel Zachmann

We present a novel framework for hierarchical collision detection that can be applied to virtually all bounding volume (BV) hierarchies. It allows an application to trade quality for speed. Our algorithm yields an estimation of the quality, so that applications can specify the desired quality. In a time-critical system, applications can specify the maximum time budget instead, and quantitatively assess the quality of the results returned by the collision detection afterwards.
Our framework stores various characteristics about the average distribution of the set of polygons with each node in a BV hierarchy, requiring only a minimal additional memory footprint and construction time. We call such augmented BV hierarchies average-distribution trees, or ADB-trees.
We have implemented our new approach by augmenting AABB trees and present performance measurements and comparisons with a very fast previous algorithm, namely the DOP-tree. The results show a speedup of about a factor 3 to 6 with only approximately 4% error.

Published in:

8th International Fall Workshop on Vision, Modeling, and Visualization (VMV) München, Germany, November 19-21, 2003

Files:

     Paper
     Slides
     Movie
     Movie (another version)


Object-Space Interference Detection on Programmable Graphics Hardware

Alexander Gress and Gabriel Zachmann

We present a novel method for checking the intersection of polygonal models on current graphics accelerator boards (GPU). All SIMD computations are performed on the GPU using vertex and fragment programs. The result is either the list of intersecting polygons, or just its length; the former can be read back to the CPU by a texture, while the latter is fed back using the occlusion counter.
Our approach consists of two stages: simultaneous traversal of bounding volume hierarchies in order to quickly determine potentially intersecting sets of polygons, and the actual polygon intersection tests in object space. Both stages are mapped on the graphics hardware using floating point textures extensively.
Unlike previous methods, our method does all computations in object space and does not make any requirements on connectivity or topology.

Published in:

SIAM Conf. on Geometric Design and Computing, Seattle, Washington, November 13 - 17, 2003

Files:

     Paper
     Slides


Time-Critical Collision Detection Using an Average-Case Approach

Time-Critical Collision Detection Using an Average-Case Approach

Jan Klein and Gabriel Zachmann

We present a novel, generic framework and algorithm for hierarchical collision detection, which allows an application to balance speed and quality of the collision detection.
We pursue an average-case approach that yields a numerical measure of the quality. This can either be specified by the simulation or interaction, or it can help to assess the result of the collision detection in a time-critical system.
Conceptually, we consider sets of polygons during traversal and estimate probabilities that there is an intersection among these sets. This can be done efficiently by storing characteristics about the average distribution of the set of polygons with each node in a bounding volume hierarchy (BVH). Consequently, we neither need any polygon intersection tests nor access to any polygons during the collision detection process.
Our approach can be applied to virtually any BVH. Therefore, we call a BVH that has been augmented in this way an average-distribution tree or ADB-tree.
We have implemented our new approach with two basic BVHs and present performance measurements and comparisons with a very fast previous algorithm, namely the DOP-tree. The results show a speedup of about a factor 3 to 6 with only approximately 4% error.
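The time-critical aspect amounts to a priority-driven traversal with a deadline. A schematic Python sketch (with hypothetical `prob` and `children` callbacks; in an ADB-tree, `prob` would be derived from the stored average polygon distributions):

```python
import heapq
import time

def time_critical_traverse(root_pair, prob, children, budget_s):
    """Priority-driven BV-pair traversal: always expand the pair with
    the highest estimated intersection probability, and stop when the
    time budget is exhausted, returning the best estimate found so far.
    'prob' maps a BV pair to its estimated intersection probability;
    'children' maps a BV pair to its child pairs (empty at the leaves)."""
    heap = [(-prob(root_pair), root_pair)]
    best = 0.0
    deadline = time.monotonic() + budget_s
    while heap and time.monotonic() < deadline:
        neg_p, pair = heapq.heappop(heap)
        best = max(best, -neg_p)
        for child in children(pair):
            heapq.heappush(heap, (-prob(child), child))
    return best
```

Because the heap always surfaces the most promising pair first, cutting the loop short at the deadline still yields a quantifiable answer, which is the essence of the time-critical guarantee.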

Published in:

ACM Symposium on Virtual Reality Software and Technology (VRST), Osaka, Japan, October 1-3, 2003

Files:

     Paper
     Slides
     Movie
The movie shows those bounding volume pairs that have at least one collision cell with probability larger than the predefined threshold. The different colors of the BVs just denote which object they belong to.


High-Performance Collision Detection Hardware

High-Performance Collision Detection Hardware

Gabriel Zachmann and Günter Knittel

We present a novel hardware architecture for a single-chip collision detection accelerator, together with algorithms for efficient hierarchical collision detection. We use a hierarchy of k-DOPs for maximum performance. A new hierarchy traversal algorithm and an optimized triangle-triangle intersection test reduce bandwidth and computational costs. The resulting hardware architecture can process two object hierarchies and identify intersecting triangles autonomously at high speed. Real-time collision detection of complex objects at rates required by force-feedback and physically-based simulations can be achieved even in worst-case configurations.

Published in:

Informatik II, University Bonn, Germany, August, 2003

Files:

     Paper


An Architecture for Hierarchical Collision Detection

An Architecture for Hierarchical Collision Detection

Gabriel Zachmann and Günter Knittel

We present novel algorithms for efficient hierarchical collision detection and propose a hardware architecture for a single-chip accelerator. We use a hierarchy of bounding volumes defined by k-DOPs for maximum performance. A new hierarchy traversal algorithm and an optimized triangle-triangle intersection test reduce bandwidth and computation costs. The resulting hardware architecture can process two object hierarchies and identify intersecting triangles autonomously at high speed. Real-time collision detection of complex objects at rates required by force-feedback and physically-based simulations can be achieved.
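The traversal loop that such an accelerator pipelines can be sketched in software as the classic simultaneous descent of two bounding-volume hierarchies. This is a minimal sketch with AABBs standing in for k-DOPs; all names are illustrative, not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class BVNode:
    volume: tuple                  # AABB ((lx,ly,lz),(hx,hy,hz)), stand-in for a k-DOP
    children: list = field(default_factory=list)

    @property
    def is_leaf(self):
        return not self.children

def aabb_overlap(u, v):
    (ul, uh), (vl, vh) = u, v
    return all(ul[i] <= vh[i] and vl[i] <= uh[i] for i in range(3))

def extent(node):
    lo, hi = node.volume
    return sum(h - l for l, h in zip(lo, hi))

def intersecting_leaf_pairs(a, b, pairs=None):
    """Simultaneous traversal: recurse only into overlapping volume pairs;
    surviving leaf pairs would be handed to the exact triangle-triangle
    test (omitted here)."""
    if pairs is None:
        pairs = []
    if not aabb_overlap(a.volume, b.volume):
        return pairs
    if a.is_leaf and b.is_leaf:
        pairs.append((a, b))       # candidate pair for the exact test
    elif b.is_leaf or (not a.is_leaf and extent(a) >= extent(b)):
        for c in a.children:       # descend into the larger node
            intersecting_leaf_pairs(c, b, pairs)
    else:
        for c in b.children:
            intersecting_leaf_pairs(a, c, pairs)
    return pairs
```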

Published in:

11th WSCG International Conference, Plzen-Bory, Czech Republic, February 3-7, 2003

Files:

     Paper
     Paper (Extended)
     Slides
     Additional Material


Minimal Hierarchical Collision Detection

Minimal Hierarchical Collision Detection

Gabriel Zachmann

We present a novel bounding volume hierarchy that allows for extremely small data structure sizes while still performing collision detection as fast as other classical hierarchical algorithms in most cases. The hierarchical data structure is a variation of axis-aligned bounding box trees. In addition to being very memory efficient, it can also be constructed very quickly. We also propose a criterion to be used during the construction of BV hierarchies that is more formally established than previous heuristics. The idea of the argument is general and can be applied to other bounding volume hierarchies as well. Furthermore, we describe a general optimization technique that can be applied to most hierarchical collision detection algorithms. Finally, we describe several box overlap tests that exploit the special features of our new BV hierarchy. These are compared experimentally with each other and with the DOP-tree using a benchmark suite of CAD data.

Published in:

ACM Symposium on Virtual Reality Software and Technology (VRST), Hong Kong, China, November 11-13, 2002

Files:

     Paper
     Paper (Extended)
     Slides


Natural Interaction in Virtual Environments

Natural Interaction in Virtual Environments

Gabriel Zachmann

This paper presents a number of algorithms necessary to achieve natural interaction in virtual environments; by "natural" we mean using a virtual hand as naturally as we manipulate our real environment with our real hand. We present algorithms for very fast collision detection, which is a necessary prerequisite for natural interaction. In addition, we describe a framework for preventing object penetrations while still allowing the object's motion in a physically plausible way. Finally, we explain a model for naturally grasping virtual objects without resorting to gesture recognition.

Published in:

Workshop über Trends und Höhepunkte der Graphischen Datenverarbeitung, University of Tübingen, November, 2001

Files:

     Paper


Natural and Robust Interaction in Virtual Assembly Simulation

Natural and Robust Interaction in Virtual Assembly Simulation

Gabriel Zachmann and Alexander Rettig

Virtual assembly simulation is one of the most challenging applications of virtual reality.
Robust and natural interaction techniques for performing the assembly tasks under investigation are essential, as are efficient methods for choosing from a large number of functionalities from inside the virtual environment.
In this paper we present such techniques and methods, in particular multimodal input techniques including speech input and gesture recognition for controlling the system. We address precise positioning by novel approaches for constraining interactive motion of parts and tools, while a new natural grasping algorithm provides intuitive interaction. Finally, sliding contact simulation allows the user to create collision-free assembly paths efficiently.
Preliminary results show that the array of functionality and techniques described in this paper is sufficiently mature so that virtual assembly simulation can be applied in the field.

Published in:

Eighth ISPE International Conference on Concurrent Engineering: Research and Applications (ISPE/CE2001), California, USA, July, 2001.

Files:

     Paper


Optimizing the Collision Detection Pipeline

Optimizing the Collision Detection Pipeline

Gabriel Zachmann

A general framework for collision detection is presented. Then, we look at each stage and compare different approaches through extensive benchmarks. The results suggest a way to optimize the performance of the overall framework. A benchmarking procedure for comparing algorithms that check a pair of objects is presented and applied to three different hierarchical algorithms. A new algorithm for convex objects is evaluated and compared with other approaches to the neighbor-finding problem.

Published in:

First International Game Technology Conference, Hong Kong, China, January 18 - 21, 2001.

Files:

     Paper


Virtual Reality in Assembly Simulation

Virtual Reality in Assembly Simulation - Collision Detection, Simulation Algorithms, and Interaction Techniques

Gabriel Zachmann (PhD thesis)

This thesis presents frameworks, algorithms, and techniques to make the application of virtual reality for virtual prototyping feasible, with a particular focus on virtual assembly simulation.
The contributions are in the following areas: high-level specification of virtual environments, efficient interaction metaphors and frameworks, real-time collision detection and response, physically-based simulation, tracking, and virtual prototyping application development.
A framework for authoring virtual environments is proposed. The main premise of the proposed framework is that it should be easy for non-computer scientists to author virtual environments. Therefore, the concept of actions, events, inputs, and objects is introduced. These entities can be combined into virtual environments by the event-based approach.
Collision detection is one of the enabling technologies for all kinds of physically-based simulation and for virtual prototyping. This book proposes a collision detection pipeline as a framework for collision detection modules. Subsequently, several algorithms for all stages of the collision detection pipeline are developed and evaluated.
Interaction in virtual environments comprises many different aspects: device handling, processing input data, navigation, interaction paradigms, and physically-based object behavior. For all of them, techniques, frameworks, or algorithms are presented in this book, with a particular emphasis on their application to virtual prototyping.
Finally, virtual prototyping is discussed in general, while the virtual assembly simulation application is described in more detail.
All frameworks and algorithms have been implemented in Fraunhofer-IGD's VR system Virtual Design II, now available from VRCom.

Published in:

Darmstadt University of Technology, Germany, May, 2000.

Files:

     PhD thesis
     Printed book


Virtual Reality as a Tool for Verification of Assembly and Maintenance Processes

Virtual Reality as a Tool for Verification of Assembly and Maintenance Processes

Antonino Gomes de Sá and Gabriel Zachmann

Business process re-engineering is becoming a main focus in today's efforts to overcome problems and deficits in the automotive and aerospace industries (e.g., integration in international markets, product complexity, increasing number of product variants, reduction in product development time and cost). In this paper, we investigate the steps needed to apply virtual reality (VR) for virtual prototyping (VP) to verify assembly and maintenance processes. After a review of today's business process in vehicle prototyping, we discuss CAD-VR data integration and identify new requirements for design quality. We present several new interaction paradigms so that engineers and designers can experiment naturally with the prototype. Finally, a user survey evaluates some of the paradigms and the acceptance and feasibility of virtual prototyping for our key process. The results show that VR will play an important role for VP in the near future.

Published in:

Computers & Graphics, 1999

Files:

     Paper


Integrating Virtual Reality for Virtual Prototyping

Integrating Virtual Reality for Virtual Prototyping

Antonino Gomes de Sá and Gabriel Zachmann

In order to stay competitive, companies must deliver new products with higher quality in a shorter time. Business process re-engineering is becoming a main focus in today's efforts to overcome problems and deficits in the automotive and aerospace industries (e.g., integration in international markets, product complexity, increasing number of product variants, reduction in product development time and cost). There is some evidence indicating that the assembly process often drives the majority of the cost of a product, and that up to 70% of the total life cycle costs of a product are committed by decisions made in the early stages of the design process. The use of virtual reality for virtual prototyping is still in its infancy. In this paper, we investigate the steps needed to apply virtual reality (VR) for virtual prototyping (VP) to verify assembly and maintenance processes. The final goal of assembly/disassembly verification is the assertion that a part or component can be assembled by a human worker, and that it can be disassembled later on for service and maintenance. Other questions need to be addressed, too: is it "difficult" to assemble/disassemble a part? How long does it take? How stressful is it in terms of ergonomics? Is there enough room for tools? After a review of today's business process in vehicle prototyping, we discuss CAD-VR data integration and identify new requirements for design data quality. We present several new interaction paradigms so that engineers, designers, and skilled mechanical workers can experiment naturally with the virtual prototype. Finally, some results of a user survey performed at BMW are presented, showing the acceptance and potential of VP and the paradigms implemented for this key process. The results show that VR will play an important role for VP in the near future.

Published in:

ASME Design Engineering Technical Conferences, Atlanta, Georgia, September, 1998.

Files:

     Paper


Rapid Collision Detection by Dynamically Aligned DOP-Trees

Rapid Collision Detection by Dynamically Aligned DOP-Trees

Gabriel Zachmann

Based on a general hierarchical data structure, we present a fast algorithm for exact collision detection of arbitrary polygonal rigid objects. Objects consisting of hundreds of thousands of polygons can be checked for collision at interactive rates. The pre-computed hierarchy is a tree of discrete oriented polytopes (DOPs). An efficient way of re-aligning DOPs during traversal of such trees allows the use of simple interval tests for determining overlap between DOPs. The data structure is very efficient in terms of memory and construction time. Extensive experiments with synthetic and real-world CAD data have been carried out to analyze the performance and memory usage of the data structure. A comparison with OBB-trees indicates that DOP-trees are as efficient in terms of collision query time, and more efficient in memory usage and construction time.
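The core of a DOP overlap test reduces to a handful of interval comparisons, one per fixed direction, as this small sketch illustrates. The direction set and function names are invented for illustration, and the paper's dynamic re-alignment step is omitted.

```python
# Four fixed directions -> an 8-DOP (k = 8); the set is illustrative only.
DIRECTIONS = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]

def dop(points):
    """Compute the DOP of a point set: one (min, max) slab per direction."""
    slabs = []
    for d in DIRECTIONS:
        proj = [sum(p[i] * d[i] for i in range(3)) for p in points]
        slabs.append((min(proj), max(proj)))
    return slabs

def dops_overlap(a, b):
    """Conservative test: two DOPs are reported disjoint as soon as one
    pair of slabs is disjoint -- just a few simple interval tests."""
    return all(lo_a <= hi_b and lo_b <= hi_a
               for (lo_a, hi_a), (lo_b, hi_b) in zip(a, b))
```

Note the test is conservative: overlapping in every slab does not prove the enclosed geometry intersects, which is exactly why the tree traversal descends further.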

Published in:

Virtual Reality Annual International Symposium (VRAIS), Atlanta, Georgia, March, 1998.

Files:

     Paper


VR-Techniques for Industrial Applications

Gabriel Zachmann

This chapter provides some classifications and characterizations of virtual environments, followed by a description of basic and advanced interaction techniques. A framework for describing and authoring virtual environments is proposed, as well as a brief description of the architecture of VR systems. Finally, a solution to magnetic field distortion is given, which is needed for precise interaction and positioning in virtual environments.

Files:

     Paper


Real-time and Exact Collision Detection for Interactive Virtual Prototyping

Real-time and Exact Collision Detection for Interactive Virtual Prototyping

Gabriel Zachmann

Many companies have started to investigate Virtual Reality as a tool for evaluating digital mock-ups. One of the key functions needed for interactive evaluation is real-time collision detection. An algorithm for exact collision detection is presented which can handle arbitrary non-convex polyhedra efficiently. The approach attains its speed by a hierarchical adaptive space subdivision scheme, the BoxTree, and an associated divide-and-conquer traversal algorithm, which exploits the very special geometry of boxes. The traversal algorithm is generic, so it can be endowed with other semantics operating on polyhedra, e.g., distance computations. The algorithm is fairly simple to implement and it is described in great detail in an "ftp-able" appendix to facilitate easy implementation. Pre-computation of auxiliary data structures is very simple and fast. The efficiency of the approach is shown by timing results and two real-world digital mock-up scenarios.

Published in:

ASME Design Engineering Technical Conferences, Sacramento, California, September 1997.

Files:

     Paper


Distortion Correction of Magnetic Fields for Position Tracking

Distortion Correction of Magnetic Fields for Position Tracking

Gabriel Zachmann

Electro-magnetic tracking systems are in wide-spread use for measuring 6D positions. However, their accuracy is impaired seriously by distortions of the magnetic fields caused by many types of metal which are omnipresent at real sites. We present a fast and robust method for "equalizing" those distortions in order to yield accurate tracking. The algorithm is based on global scattered data interpolation using a "snap-shot" of the magnetic field's distortion measured once in advance. The algorithm is fast (it does not introduce any further lag in the data flow), robust, the samples of the field's "snap-shot" can be arranged in any way, and it is easy to implement. The distortion is visualized in an intuitive way to provide insight into its nature, and the correction algorithm is evaluated in terms of accuracy and performance. Finally, a qualitative comparison of the susceptibility of a Polhemus and an Ascension tracking system is carried out.
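A minimal sketch of the correction idea, using inverse-distance weighting as a simple stand-in for the paper's scattered-data interpolation scheme; all function and parameter names are invented for illustration.

```python
def correct(measured, samples, power=2.0, eps=1e-12):
    """Correct a distorted tracker reading using a 'snapshot' of the field.

    `samples` is a list of (measured_pos, true_pos) pairs acquired once in
    advance. The correction offset (true - measured) is interpolated at the
    query point by inverse-distance weighting (illustrative only; not
    necessarily the interpolant used in the paper)."""
    num = [0.0, 0.0, 0.0]
    den = 0.0
    for m, t in samples:
        d2 = sum((measured[i] - m[i]) ** 2 for i in range(3))
        if d2 < eps:
            return t               # reading coincides with a calibration sample
        w = d2 ** (-power / 2.0)   # closer samples dominate the estimate
        den += w
        for i in range(3):
            num[i] += w * (t[i] - m[i])
    return tuple(measured[i] + num[i] / den for i in range(3))
```

Because the interpolation is a fixed-cost lookup over the pre-measured samples, it adds no further lag to the tracking data flow.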

Published in:

Computer Graphics International Conference (CGI) , Belgium, June 23 - 27, 1997.

Files:

     Paper


A Language for Describing Behavior of and Interaction with Virtual Worlds

A Language for Describing Behavior of and Interaction with Virtual Worlds

Gabriel Zachmann

Virtual environments are created by specifying their content, which comprises geometry, interaction, properties, and behavior of the objects. Interaction and behavior can be cumbersome to specify and create, if they have to be implemented through an API.
In this paper, we take a script-based approach to describing virtual environments. We try to identify a generic and complete, yet simple set of functionality, so that non-programmers can readily build their own virtual worlds.

We extend the common object behavior paradigm by the notion of an Action-Event-Object triad.
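The Action-Event-Object triad can be pictured as a declarative wiring of events to actions on objects. The toy sketch below conveys the flavor; all names are invented, and the actual scripting language is far richer.

```python
class World:
    """Toy event-to-action dispatcher; names invented for illustration."""
    def __init__(self):
        self.bindings = {}            # event name -> [(action, object), ...]

    def bind(self, event, action, obj):
        self.bindings.setdefault(event, []).append((action, obj))

    def emit(self, event):
        for action, obj in self.bindings.get(event, []):
            action(obj)

# In the script-based approach, this wiring would come from the world
# description, not from program code.
door = {"open": False}
world = World()
world.bind("button_pressed", lambda d: d.update(open=True), door)
world.emit("button_pressed")          # the bound action opens the door
```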

Published in:

ACM Symposium on Virtual Reality Software and Technology (VRST), Hong Kong, July 1-4, 1996.

Files:

     Paper
     Color Plates


Virtual Prototyping Examples for Automotive Industries

Virtual Prototyping Examples for Automotive Industries

Fan Dai, Wolfgang Felger, Thomas Frühauf, Martin Göbel, Dirk Reiners, Gabriel Zachmann

The vision of virtual prototyping is to use virtual reality techniques for design evaluations and presentations based on a digital model instead of physical prototypes. In the automotive industries, CAD and CAE systems are widely used, which provides a good basis for virtual prototyping. This vision is therefore extremely interesting for the automotive industries. Many companies have started to evaluate existing tools and technologies, and to think about, or begin to develop, virtual prototyping systems for their own needs.

In this paper, we present some examples from our recent projects with automotive companies. Based on these examples, we discuss problems, solutions, and future directions of R&D to achieve the vision of virtual prototyping.

Published in:

Virtual Reality World, February, 1996, Stuttgart.

Files:

     Paper


The BoxTree: Exact and Fast Collision Detection of Arbitrary Polyhedra

The BoxTree: Exact and Fast Collision Detection of Arbitrary Polyhedra, SIVE95 (First Workshop on Simulation and Interaction in Virtual Environments), University of Iowa, July 1995.

Gabriel Zachmann

An algorithm for exact collision detection is presented which can handle arbitrary non-convex polyhedra efficiently. Despite the wealth of literature, there are not many fast algorithms for this class of objects.

The approach attains its speed by a hierarchical data structure, the BoxTree, and an associated divide-and-conquer traversal algorithm, which exploits the very special geometry of boxes. Boxes were chosen because they offer much tighter space partitioning than spheres.

The method is generic, so it can be endowed with other semantics operating on polyhedra.

The algorithm is fairly simple to implement and it is described in great detail in an appendix to facilitate easy implementation. The construction of the data structure is very simple and very fast. Timing results show the efficiency of this approach.

Files:

     Paper
     Appendix


Exact and Fast Collision Detection

Exact and Fast Collision Detection

Gabriel Zachmann (Diploma Thesis)

Collision detection has many different applications; for example, in physically based simulation, where moving objects are simulated. In order to determine their behavior over time, the most basic information needed is the time and position of collision, together with the exact point of collision. Only if this information is known exactly can the collision response determine how objects will react, according to their mass, mass distribution, velocities, etc.

Files:

     Paper