Theses

On this page, you can find a number of topics for which we are looking for students interested in working on them as part of their thesis (bachelor or master). The list is sorted in reverse chronological order; that means the further a topic is towards the bottom, the more likely it is already taken (or no longer relevant to us).
However, this list is by no means exhaustive! In fact, we always have many more topics available. So, please also check out our research projects; in all projects, there are lots of opportunities for doing a thesis.
In addition, there are quite a few "free floating" topics, which are not listed here nor are they connected with research projects; those are ideas we would like to try out or get familiar with.

If you are interested in one of the topics, please send me (or the respective contact person) an email with your transcript of records and 1-2 sentences of motivation.

If you would like to talk to us about thesis topics, just make an appointment with one of the project members or researchers of my group.
You can also come to my office hours (Mondays, 6-8 pm, no appointment needed).
Please make sure to send me or the researchers your transcript of records.

Ethics

Unlike 20 years ago, a lot of computer science research can and will have a huge impact on our society and the way we live. That impact can be good, but today, our research could also have a considerable negative impact.

I encourage you to consider the potential impact, both good and bad, of your work. If there is a negative impact, I also encourage you to try to think about ways to mitigate that.

As a matter of course, I expect you to follow ACM's Code of Ethics and Professional Conduct. I think we should all go a step further and change the scientific peer-reviewing system, not only for paper submissions but also for grant proposal submissions, before we start a thesis, a new product development, etc. Here is an interview with Brent Hecht, whose radical proposal has a point, I think.
This article (in German) explains quite well, I think, how agile software development can include ethical considerations ("Ethik in der agilen Software-Entwicklung", August 2021, Informatik Spektrum der Gesellschaft für Informatik).

Doing Your Thesis Abroad

If you are interested in doing your thesis abroad, please talk to us; we might be able to help with establishing a contact.
You also might want to look for financial aid, such as this DAAD stipend.

Doing Your Thesis with a Company

If you are interested in doing your thesis at a company, we might be able to help establish a contact, for instance, with Kizmo, Kuka (robot developer), Icido (VR software), Volkswagen (VR), Dassault Systèmes 3DEXCITE (rendering and visualization), ARRI (camera systems), Maxon (maker of Cinema4D), etc.

Doing Your Thesis in the Context of a Research Project

We always have a number of research projects going on, and in the context of those, there are always a number of topics for potential master's or bachelor's theses. If you are interested in such an "embedded" thesis topic, please pick one of those research projects, then talk to the contact given there or talk to me.

Formalities

If you feel comfortable with writing in English (or if you want to become more fluent in English writing), I encourage you to write your thesis in English.
I recommend writing your thesis using LaTeX! There are no typographic requirements regarding your thesis: just make it comfortable to read; I suggest you put some effort into making it typographically pleasing.
A good starting point is the Classic Thesis Template by André Miede. (Archived Version 4.6) But feel free to use some other style.

Regarding the structure of your thesis, just look at some of the examples in our collection of finished theses.

Referencing / citation: with the natbib LaTeX package, this should be relatively straightforward; just pick one of the predefined citation/referencing styles.
If you are interested in variants, here is the Ultimate Citation Cheat Sheet, which contains examples of the three most prevalent styles. I suggest following the MLA style. (Source)
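For illustration, a minimal natbib skeleton might look like this (the citation key mueller2023 and the file references.bib are made-up placeholders; swap the options and bibliography style for whichever of the predefined styles you prefer):

\documentclass{article}
\usepackage[numbers,sort&compress]{natbib}          % or, e.g., [authoryear]
\begin{document}
\citet{mueller2023} showed that ...                 % textual citation: "Mueller et al. [12]"
... as has been shown before~\citep{mueller2023}.   % parenthetical citation: "[12]"
\bibliographystyle{plainnat}
\bibliography{references}                           % uses references.bib
\end{document}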

Recommendations While Doing the Actual Work

Recommendations for Writing Up

Guidelines for Type(s) of Chart to use in your Thesis

Table comparing chart types

At some point in your work, you will probably generate some charts to present your results. Some chart types are better at showing specific facets of the data than others. In the following table, you can find an overview of which chart is useful for communicating which properties of the data [B. Saket, A. Endert, and Ç. Demiralp: "Task-Based Effectiveness of Basic Visualizations", IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 7, pp. 2505–2512, July 2019].

How to use the table: first, pick the purpose of your visualization of your data; for example, let's assume you want to find correlations. So, you go to the "Correlations" row. Next, pick your top criterion; in our example, let's assume you strive to maximize user preference. So, you go to the cell under the "User preference" column. Finally, pick one of the chart types on the left-hand side in that cell (they are ranked by score regarding the criterion you picked). In our example, you should probably use the line chart; if that does not fit your purposes (for whatever reason), then you probably want to pick the bar chart instead. The arrows symbolize "performs better than" relationships between chart types (inside that cell).

Criteria we Use When Grading Your Thesis

When grading a master's or bachelor's thesis, we use the following criteria:

Recommendations for Your Presentation During Your Defense (Colloquium)

Links

For printing your thesis, you might want to consider Druck-Deine-Diplomarbeit. We have heard from other students that they have had good experiences with them (and I have seen nice examples of their print products).
Also, there is a friendly copy shop, Haus der Dokumente, on Wiener Str. 7, right on the campus.

The List


Bachelor Thesis: Influence of Hand Models on Hand Pose Estimation

Subject

representation of hands

Pose estimation of hands (in combination with objects) is an important foundation for both VR and robotics applications. Methods based on deep learning usually outperform classical methods by a wide margin; however, they require annotated training images. The annotation process for real images of hands is cumbersome and prone to errors. Therefore, synthetic data is an important tool for training pose estimators.

In this thesis, you will measure the influence of hand models of different quality (see image) on hand pose estimation.

Your Tasks/Challenges:

Requirements:

Contact:

Janis Roßkamp, j.rosskamp at cs.uni-bremen.de
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de


Thesis: A Hybrid Approach to Pose Estimation of Hands using Deep Learning

Subject

rgb vs. mocap

Hand pose estimation is important for both virtual reality applications and motion capturing (mocap) for games and movies. Current methods often use RGB images (top image) or marker-based strategies. However, RGB images typically fail to provide high-precision pose estimation, whereas marker-based motion capturing requires numerous markers to accurately track the hand, which entails wearing an intrusive glove (bottom image). Moreover, this marker-based approach results in the loss of all hand information, such as shape and color.

In this thesis, you will develop a novel hybrid method that combines the strengths of mocap and RGB images to enhance the accuracy of hand pose estimation. Using only a few markers, e.g., on the fingertips, we can improve current RGB-based methods.

Your Tasks/Challenges:

Requirements:

Contact:

Janis Roßkamp, j.rosskamp at cs.uni-bremen.de
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de


Master Thesis: Real-Time Rendering of Dynamic Point Clouds

Subject

point cloud rendering techniques 1
point cloud rendering techniques 2

Rendering 3D point clouds, captured using depth or LiDAR sensors, in real time is a fundamental and challenging area in computer graphics. A point cloud consists of numerous 3D points that hold spatial position and color information. The primary goal of point cloud rendering is to display this collection of points (which might be a simple array of 3D vectors) in such a way that it is perceived as an opaque surface on the screen.

The current methods in point cloud rendering are diverse. Some approaches visualize the points directly as circular "splats" on the screen. Other techniques transform the points into a 3D grid and reconstruct a mesh, which is then rendered. Recent techniques utilize different types of deep neural networks.

Your Task:

Depending on your preferences, your task may vary and may be one of the following:

Highly advanced techniques such as Fusion4D (Microsoft) or Function4D (among others, by Google) achieve high-quality results, but their source code is typically not released. Therefore, re-implementing parts of these techniques in the context of a master's thesis would be very interesting to a large research community.

Working Environment:

Regardless of your chosen path (whether developing a new technique, improving an existing one, or partially re-implementing a very advanced unpublished technique), your technique should finally be integrated into our Point Cloud Rendering Framework (PCRFramework). Our PCRFramework is a stable, lightweight, and easy-to-extend framework implemented in modern C++; it offers an ImGui frontend and already supports several basic kinds of point cloud rendering techniques (Splat Rendering, Mesh Rendering, TSDF with Real-time Marching Cubes). It already provides numerous functions to load point clouds from an Azure Kinect, a Microsoft Kinect v2, or a recorded file. The framework integrates CUDA and offers some useful functions to access the loaded point cloud from CUDA as well as from the CPU. Should you wish to use neural networks, the PCRFramework is also capable of loading and running neural networks trained in PyTorch via LibTorch; in this case, your implementation would primarily be in Python and PyTorch.
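If you go the neural-network route, the usual workflow is to export your trained PyTorch model as a TorchScript module so that a C++ application can load it via LibTorch. A minimal, hypothetical sketch (the toy network and the file name are placeholders, not part of the PCRFramework):

# Export a PyTorch model as TorchScript so it can be loaded from C++ (LibTorch).
import torch
import torch.nn as nn

class DepthDenoiser(nn.Module):              # toy stand-in for your real network
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

model = DepthDenoiser().eval()
example = torch.rand(1, 1, 288, 320)         # dummy single-channel depth image batch
traced = torch.jit.trace(model, example)     # record the forward pass as TorchScript
traced.save("denoiser.pt")                   # loadable in C++ via torch::jit::load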

Requirements:

Contact:

Andre Mühlenbrock, muehlenb at uni-bremen.de
Prof. Dr. Gabriel Zachmann, email: zach at informatik.uni-bremen.de


Master Thesis: Redirected Walking in Shared Real and Virtual Spaces

Subject

redirected walking example

Redirected walking (RDW) enables users to walk in a larger VR space than the real space. This works by shifting and rotating the virtual space ever so slightly, ideally below the user's noticeable threshold. There has been a lot of research on RDW techniques and the just noticeable thresholds.
However, how do you redirect multiple users in a shared virtual environment in the case the users also share the same real space, e.g., a big lab or a huge indoor court?
The setup is a number of users wearing untethered HMDs moving around in a large, common, tracked space (for instance, using optical tracking and WiFi HMDs). This setup is not quite consumer grade (yet), but we can imagine a future where such kinds of arcades are possible.
An approach to solving the RDW problem could be a kind of trajectory optimization, where users' trajectories in real space are predicted, and the objective is to minimize the total deviation from the predicted trajectories over all users.
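Just to illustrate how such a formulation could be set up in code, here is a deliberately tiny toy (the per-user offsets, their bounds, and the 0.8 m separation are made-up placeholders; a real formulation would optimize rotation/translation gains subject to detection thresholds from the literature):

# Toy sketch: choose small per-user redirection offsets that keep two users apart
# in real space while deviating as little as possible from their intended positions.
import numpy as np
from scipy.optimize import minimize

pred = np.array([[1.0, 0.0], [1.2, 0.3]])    # hypothetical predicted real-space positions (m)

def cost(x):
    offsets = x.reshape(-1, 2)               # toy redirection offset per user
    steered = pred + offsets
    deviation = np.sum(offsets ** 2)         # deviate as little as possible
    dist = np.linalg.norm(steered[0] - steered[1])
    separation = max(0.0, 0.8 - dist) ** 2   # penalty if users come closer than 0.8 m
    return deviation + 10.0 * separation

res = minimize(cost, x0=np.zeros(4), bounds=[(-0.1, 0.1)] * 4)  # offsets limited by thresholds
print(res.x)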

Your Tasks/Challenges:

Research the literature on multi-user RDW. Formalize the optimization problem as a mathematical non-linear optimization problem. Identify a suitable math library for solving the problem in real-time. Implement the system. Test and evaluate it with a number of users.

Requirements:

Contact:

Prof. Dr. Gabriel Zachmann, email: zach at informatik.uni-bremen.de


Master Thesis: Inverse Reinforcement Learning and Affordances

Subject

point clouds in vr 1

People could program powerful chess computers before they could program a robot to walk on two legs, and many of the tasks we find easy as human beings, such as daily activities involved in preparing meals or cleaning up, turn out to be difficult to specify in detail. Thus, if we want robots to be competent helpers in the home, it would be better if we could teach them by showing what needs to be done, and for them to learn from watching us. Several techniques are being researched to enable such learning. One of these techniques is IRL—inverse reinforcement learning [1]—where the goal is to discover, by watching an "expert," the reward function that this expert is maximizing. This is more effective than simple imitation of the expert's actions. Consider the proverbial monkey shown how to wash dishes. The monkey may go through the motions of wiping, but if it did not understand that the dishes should be clean afterwards, then it won't do a good job.
However, IRL is an ill-posed problem: there can be an infinity of reward functions that the expert may be demonstrating. To even make an educated guess would often require considering enormous search spaces—there are many parameters that go into characterizing even the simplest manipulation action! Additionally, the environments in which human beings perform tasks, and the tasks themselves, are in principle of unbounded complexity: if a human knows how to stack three plates on top of each other, they also know how to stack four or ten.
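To give a feel for the core mechanics, here is a tiny sketch of maximum-entropy-style IRL with a linear reward over trajectory features; the feature function and the trajectory sets are toy placeholders and by no means the system to be developed in this thesis:

# Adjust reward weights w so that the expert's feature counts match the feature
# counts expected under the learner's (soft-max) trajectory distribution.
import numpy as np

def phi(traj):                               # toy feature counts of a trajectory
    return np.array([traj.count("stack"), traj.count("drop")], dtype=float)

expert_trajs = [["stack", "stack", "stack"]]                    # demonstrations
candidates   = [["stack", "stack", "stack"],
                ["stack", "drop"],
                ["drop", "drop", "drop"]]                       # everything the agent could do

mu_expert = np.mean([phi(t) for t in expert_trajs], axis=0)
w = np.zeros(2)
for _ in range(200):
    scores = np.array([w @ phi(t) for t in candidates])
    p = np.exp(scores - scores.max()); p /= p.sum()             # soft-max over trajectories
    mu_learner = sum(pi * phi(t) for pi, t in zip(p, candidates))
    w += 0.1 * (mu_expert - mu_learner)                         # max-ent gradient step

print(w)                                     # ends up favoring "stack" over "drop"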

Your Tasks/Challenges:

The subject of this thesis is to develop an IRL system that combines existing research into relational IRL[2], modular IRL[3], and explicitly represented knowledge to enable a simulated agent to learn, from demonstrations performed in a simulated environment, how to perform tasks such as stacking various items, putting objects in and taking them out of containers, and how to cover containers.
While the project can start with published techniques, it also raises research questions to investigate. Relational IRL is a technique to learn rewards that generalize and describe tasks for environments of, in principle, arbitrary complexity. However, the choice of logical formulas in the relational descriptions has a significant influence on the quality of the learned rewards—how can the logical language of these descriptions be well-chosen for the tasks we have in mind, such as stacking and container use?
Furthermore, because IRL is mathematically ill-posed, many reward functions are learnable. [2], cited below, shows an example of an unstacking task, where both a reward for "there are 4, 5, or 6 blocks on the floor" and a reward for "there are no stacked blocks" are learnable from the same data, but it is only the second one that captures the intended level of generality. How can the learning process be influenced to prefer the more generalizable rewards? How can we encode which parameters of the demonstration count "as-is" and which are allowed to vary arbitrarily? The manipulations involved in stacking or container use are complex. Can these be split into several phases, allowing for independent learning for each phase and thus simplifying the search space for the IRL problem?

Requirements:

[1] [2] [3]

Contact:

Prof. Dr. Gabriel Zachmann, email: zach at informatik.uni-bremen.de


Master Theses at DLR/CGVR: Point Clouds in VR

Subject

point clouds in vr 1
point clouds in vr 2
point clouds in vr 3

The Department of Maritime Security Technologies at the Institute for the Protection of Maritime Infrastructures is dedicated to solving a variety of technological issues necessary for the implementation and testing of innovative system concepts to protect maritime infrastructures. This includes the development of visualization methods for maritime infrastructures, including vast point cloud data sets.
The Computer Graphics and Virtual Reality Research Lab (CGVR) at University of Bremen carries out fundamental and applied research in visual computing, which comprises computer graphics as well as computer vision. In addition, we have a long history in research in virtual reality, which draws on methods from computer graphics, HCI, and computer vision.
These two research groups offer the opportunity for joint master's theses, allowing students to get the best of both worlds: academic and applied science.

Potential Topics:

Note that you do not need to work on all of the topics; they are meant as potential ideas for what you could work on and what is of interest to us. The specific details of your topic will be discussed once you decide you want to work in this area.

Also, there is the option of getting some funding while working on one of these topics.

Requirements:

Contact:

Prof. Dr. Gabriel Zachmann, email: zach at informatik.uni-bremen.de


Master thesis: Identifying the Re-Use of Printing Matrices

Subject

book illustration print matrices re-use

Even before Gutenberg invented printing texts, images were printed using matrices, either carved woodblocks or engraved copperplates. Because they were expensive to produce, these matrices were often re-used, even after many years, or sold to other printers. Since there was no copyright, some printers simply had successful illustrations copied (with greater or lesser accuracy) for their own use. In recent years, several million book illustrations have been digitised, naturally including many re-uses of printing matrices. However, these photographs do not look exactly the same: matrices may become worn or damaged over time, the printing process may have been handled slightly differently, pages can become dirty or torn, and, lastly, photos were taken by different camera systems and from different angles. This thesis aims to investigate possible methods to match images to the printing matrices used, in order to track possible re-use, with the intention of incorporating the developed methods into real-world usage. One idea could be to utilize geometric hashing on either extracted feature points (see our Massively Parallel Algorithms lecture) or on features extracted from a trained classifier network.
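As a rough baseline illustration of the feature-matching direction (file names are placeholders; geometric hashing or learned features, as mentioned above, would go beyond this):

# Count consistent local-feature matches between two digitised illustrations as a
# first, naive indicator that they might stem from the same printing matrix.
import cv2

img1 = cv2.imread("print_a.png", cv2.IMREAD_GRAYSCALE)   # placeholder file names
img2 = cv2.imread("print_b.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe's ratio test
print(len(good), "plausible correspondences")   # many consistent matches -> likely re-use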

Your Tasks/Challenges:

Requirements:

Contact:

Thomas Hudcovic, hudo at uni-bremen dot de
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de


Master Thesis: Gravity Modeling and Stable Diffusion

Subject

stable diffusion gravity modeling

Current and future small-body missions, such as the ESA Hera mission or the JAXA MMX mission, demand good knowledge of the gravitational field of the targeted celestial bodies. This is motivated not only by the need to ensure precise spacecraft operations around the body, but is likewise important for landing maneuvers, surface (rover) operations, and science, including surface gravimetry. Different methods exist to model the gravitation of irregularly shaped bodies. Recently, (latent) stable diffusion has gained popularity as a deep learning approach. Usually, such systems work in image space; however, this thesis should investigate how the method can be used to model a gravity field (in 3D space). With the polyhedral method, we can compute the gravity field of 3D shape files as ground truth data.
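For intuition, here is a toy "mascon" approximation that simply sums point-mass contributions; note that this is not the polyhedral method mentioned above, and all numbers are made up:

# Approximate an irregular body by point masses and sum their Newtonian accelerations.
import numpy as np

G = 6.674e-11                                # gravitational constant [m^3 kg^-1 s^-2]
mascons = np.random.rand(100, 3) * 500.0     # hypothetical point-mass positions [m]
masses  = np.full(100, 1e9)                  # hypothetical masses [kg]

def acceleration(p):
    d = mascons - p                          # vectors from the field point to each mascon
    r = np.linalg.norm(d, axis=1, keepdims=True)
    return (G * masses[:, None] * d / r**3).sum(axis=0)

print(acceleration(np.array([1000.0, 0.0, 0.0])))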

Your Tasks/Challenges:

Requirements:

Contact:

Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de


Master Thesis: Sphere Packing Problems

Subject

sphere packing examples

Sphere packings offer a way to approximate a shape's volume. They can be used in many applications; the most common one is collision detection, since it is fast and trivial to test spheres for intersection. Other applications include modeling gravitational fields or medical environments with force feedback. An important quality criterion is the packing density, which is closely related to the fractal dimension. An exact determination of the fractal dimension is still an open problem. The practical side is well understood: we use the Protosphere algorithm to generate sphere packings for triangular meshes, which approximate Apollonian diagrams. Yet, the theoretical side needs more exploration. We are considering multiple areas, of which you can study one or several in a thesis.
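To give a flavor of the greedy idea underlying Protosphere-like packings (heavily simplified: sampling-based, and a unit cube instead of an arbitrary triangular mesh):

# Repeatedly place a sphere at the candidate point farthest from the container
# boundary and from all spheres placed so far.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.random((20000, 3))             # candidate centers inside the unit cube
spheres = []                                 # list of (center, radius)

for _ in range(50):
    clearance = np.minimum(samples, 1.0 - samples).min(axis=1)      # distance to cube faces
    for c, r in spheres:
        clearance = np.minimum(clearance, np.linalg.norm(samples - c, axis=1) - r)
    i = int(np.argmax(clearance))
    if clearance[i] <= 0.0:
        break
    spheres.append((samples[i].copy(), float(clearance[i])))

print("placed", len(spheres), "spheres; largest radius", round(spheres[0][1], 3))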

Your Tasks/Challenges:

Requirements:

Contact:

Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de


Master thesis: Natural hand-object manipulations in VR using Optimization

Subject

natural object manipulation

One of the long-standing research challenges in VR is to allow users to manipulate virtual objects the same way they would in the real world, i.e., grasp them, twiddle and twirl them, etc.
One approach could be physically-based simulation, calculating the forces acting on object and fingers, and then integrating both hand and object positions.
Another approach, to be explored in this thesis, is to use optimization. The idea is to calculate hand-object penetrations, or minimal distances in case there are no penetrations, then determine a new pose for both hand (and fingers) and the object such that these penetrations are minimized (or distances are maximized).
Software for computing penetrations has been developed in the CGVR lab and is readily available. Also, many software packages for fast non-linear optimization are available in the public domain (e.g., pagmo).
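A deliberately tiny toy version of the optimization idea (one spherical fingertip against one spherical object; the real system would plug in the CGVR penetration computation and a full articulated hand model):

# Resolve a fingertip-object penetration while staying close to the tracked pose.
import numpy as np
from scipy.optimize import minimize

finger_r, object_r = 0.01, 0.05              # toy radii [m]
tracked_finger = np.array([0.0, 0.0, 0.04])  # tracked fingertip, currently inside the object
object_center  = np.array([0.0, 0.0, 0.0])

def penetration(p):                          # depth by which the fingertip sphere sinks in
    return max(0.0, finger_r + object_r - np.linalg.norm(p - object_center))

def cost(p):
    return 100.0 * penetration(p) ** 2 + np.sum((p - tracked_finger) ** 2)

res = minimize(cost, tracked_finger, method="Nelder-Mead")
print(res.x)                                 # fingertip pushed (roughly) onto the object surface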

Task / Challenges:

Requirements:

Contact:

Prof. Dr. Gabriel Zachmann: zach at informatik.uni-bremen.de


Master thesis: Mixed Reality Telepresence: Extending a Collaborative VR Telepresence System by Augmented Reality

Subject

AR Telepresence

Shared virtual reality (VR) and augmented reality (AR) systems with personalized avatars have great potential for collaborative work between remote users. Studies indicate that these technologies provide great benefits for telepresence applications, as they tend to increase the overall immersion, social presence, and spatial communication in virtual collaborative tasks. In our current project, remote doctors can meet and interact with each other in a shared virtual environment using VR headsets and are able to view live-streamed and 3D-visualized operations (based on RGB-D data) to assist the local doctor in the operating room. The local doctor is also able to join using VR.

The goal of this thesis is to extend the existing UE4 VR telepresence project to allow the local doctor to use AR glasses such as the HoloLens instead of the VR headset. This enables the doctor to interact, hands-free, with the remote experts while continuing the operation, and prevents interruptions. Your tasks are to adapt the current code such that it also works with the HoloLens (general detection, tracking, registration, interaction gestures). Additionally, the relevant data has to be streamed as fast as possible onto the HoloLens to be viewed. Lastly, and optionally, it would be great to use the built-in depth sensor of the HoloLens for 3D visualizations of the patient. This could be done by continuously registering the sensor and streaming the data back into the shared virtual world.

Task / Challenges:

Requirements:

Contact:

Roland Fischer, s_8ix2ba at uni-bremen.de
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de


Master thesis: High Fidelity Point Clouds: Artificially Increasing the Sensor's Depth Resolution

Subject

Depth Supersampling

RGB-D cameras like Microsoft's Azure Kinect and the corresponding point cloud visualizations of the captured scenes are getting increasingly popular and find usage in a wide range of applications. However, the low depth sensor resolution is a limiting factor resulting in very coarse 3D visualizations.

The goal of this thesis is to find and implement methods to artificially increase the depth sensor's resolution and, thus, the fidelity of the generated point clouds. The methods have to be fast enough for real-time usage. One approach is to develop, or adapt and employ, supersampling algorithms (possibly based on deep learning) on the depth images. Another approach would be to experiment with attaching a convex lens in front of the sensor to increase the local pixel density for a distinct area, although this limits the field of view. Using a lens would entail a custom calibration/registration procedure between the depth and color sensors. Your task is to explore these and possibly other methods and implement the most convincing one(s).
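One classical baseline for guided depth upsampling is joint bilateral upsampling, where the high-resolution color image decides which low-resolution depth values get averaged. A naive (and very slow) sketch with made-up parameters:

# Upsample a low-res depth image to the color resolution, guided by the color image.
import numpy as np

def joint_bilateral_upsample(depth_lo, color_hi, radius=2, sigma_s=1.0, sigma_r=10.0):
    scale = color_hi.shape[0] // depth_lo.shape[0]      # assume an integer upsampling factor
    out = np.zeros(color_hi.shape[:2])
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            yl, xl = y // scale, x // scale             # corresponding low-res pixel
            acc, wsum = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny = int(np.clip(yl + dy, 0, depth_lo.shape[0] - 1))
                    nx = int(np.clip(xl + dx, 0, depth_lo.shape[1] - 1))
                    ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))   # spatial weight
                    dc = color_hi[y, x].astype(float) - color_hi[ny * scale, nx * scale].astype(float)
                    wr = np.exp(-np.sum(dc * dc) / (2 * sigma_r ** 2))       # color (range) weight
                    acc += ws * wr * depth_lo[ny, nx]
                    wsum += ws * wr
            out[y, x] = acc / wsum
    return out

depth_lo = np.random.uniform(0.5, 3.0, (30, 40))        # placeholder low-res depth [m]
color_hi = np.random.randint(0, 255, (120, 160, 3), dtype=np.uint8)
print(joint_bilateral_upsample(depth_lo, color_hi).shape)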

Task / Challenges:

Requirements:

Contact:

Roland Fischer, s_8ix2ba at uni-bremen.de
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de


Bachelor/Master thesis: Non-Uniform Stereo Projection for 3D Monitors

Subject

Tunnel with red-cyan stereo rendering

Virtual worlds on 3D monitors are limited by the display area. For example, at the edges of the screen, conflicting stereo cues cause stereo violations. We developed a new method for real-time stereo rendering that probably reduces these adverse effects. Our approach works in the vertex shader and was tested on the Powerwall.

In this work, your task is to port the stereo rendering algorithm to the Z-Space, a 3D monitor with head tracking. The stereo rendering currently uses side-by-side rendering and has to be ported to quad-buffer stereo rendering. The method is implemented in the Godot game engine. Furthermore, you will conduct a user study to evaluate the influence of the new technique on a small screen and compare it to the large Powerwall.

Task / Challenges:

Requirements:

Contact:

Christoph Schröder-Dering, schroeder.c at informatik.uni-bremen.de
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de


Master thesis: Comparing Surgical Lighting Systems in VR

Subject

Lego in VR Simulation

In the SmartOT project, a new autonomous lighting system for operating rooms is being developed, which consists of many light modules placed on the ceiling that automatically change their intensity to minimize the shadows cast by surgeons and medical staff on the surgical wound. The aim of the master's thesis is to design and perform a suitable study (and a task) that allows comparing the automatic lighting system with a manual surgical lamp in a VR simulation.

The autonomous lighting system as well as a conventional manual surgical lamp are already implemented in a VR simulation using Unreal Engine 4.26. In addition, an implementation for working with Lego bricks already exists (Video), which can be extended by a task (e.g., having a participant explicitly rebuild a given Lego figure).

Tasks/Challenges

Requirements:

It is fine if not all requirements are met, as long as there is a willingness to delve into these topics as part of the master's thesis.

Contact:

Andre Mühlenbrock, muehlenb at uni-bremen.de
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de



Master thesis: Creation of an RGB-D Dataset with Ground Truth for Supervised Learning and Depth Image Enhancement

Subject

avatar

RGB-D cameras (color + depth) are hugely popular for 3D reconstruction and telepresence scenarios. An open problem is the inherent sensor noise, which limits the achievable quality. Deep learning techniques have proven very promising for image denoising, completion, and enhancement tasks; however, supervised learning requires ground-truth data. Acquiring suitable, realistic ground-truth data for RGB-D images is a huge challenge, which is why there is nearly none available yet.

With this thesis, we want to create a universally usable RGB-D dataset with ground truth data. To achieve this, the idea is to arrange a real physical test scene consisting of a wide variety of objects and materials. To precisely specify and change the position and rotation of the RGB-D camera within the scene, we rely on a highly accurate robot/robot arm. The corresponding ground truth images will be acquired by creating a virtual version of the scene and its contained objects, e.g. using the Unreal Engine 4 and Blender. A virtual camera can eventually be placed in the virtual scene, be exactly aligned with the physical one, and record corresponding synthetic ground truth images.

Task / Challenges:

Requirements:

Contact:

Roland Fischer, s_8ix2ba at uni-bremen.de
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de


Master thesis: Determination and optimization of the accuracy of measured values of the "Virtuose™ 6D" haptic arm of the company Haption GmbH.

Subject

Virtuose™ 6D haptic arm

The "Virtuose™ 6D" haptic arm from the company Haption is used in a training system for drilling and milling in human bones. With the haptic arm, the training participant is given a very realistic feeling of the drilling or milling process. During the training, measured values for the forces and position of the drilling or milling tip in space are to be recorded, the accuracy of which is currently limited.

Your task

The task is to optimize the processing of the sensory information from the haptic arm so that these measured values achieve the required accuracy, i.e. about 0.5 mm in position and 0.2 N in force. The measured values are to be transmitted to a system that can store these data with high frequency and precise time stamps. The work includes the identification of the sources of error, in particular the deformation of the haptic arm structure under the action of the applied forces, their modeling and measurement, and their compensation.

Contact:

Prof. Dr. G. Zachmann, zach at informatik.uni-bremen.de








Bachelor/Master thesis: Teaching Anatomy through Augmented Reality

Subject

Marching cubes rendering of a hip socket that is represented by an implicit surface.

The human anatomy is hard to convey through textbooks. Studies suggest that teaching anatomy through mixed reality can improve the learning effect as well as the memory retention of the learned material. We developed an immersive virtual reality (VR) anatomy atlas that allows medical students to explore the human anatomy in a virtual 3D space. We want to experiment with other forms of mixed reality, such as augmented reality (AR).

Your task

In this thesis, you will build upon an existing application, a VR project in the Unreal Engine. Your task is to adapt the anatomy atlas to work in AR; to do this, the project needs to be ported to the OpenXR framework. A further part concerns the hip surgery simulation, which should work for both the Unreal and Unity game engines: there, your task is to improve the visualization of the implicit surface, which currently relies on marching cubes; this should be solved by ray tracing / ray marching.
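As a toy illustration of the ray-marching idea (sphere tracing the signed distance function of a simple sphere, standing in for the actual implicit surface; the engine integration is a separate matter):

# March a ray until it hits the zero level set of a signed distance function.
import numpy as np

def sdf(p):                                  # toy implicit surface: sphere of radius 0.5
    return np.linalg.norm(p) - 0.5

def sphere_trace(origin, direction, max_steps=128, eps=1e-4):
    t = 0.0
    for _ in range(max_steps):
        d = sdf(origin + t * direction)      # safe step size: distance to the surface
        if d < eps:
            return t                         # hit
        t += d
        if t > 100.0:
            break
    return None                              # miss

print(sphere_trace(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0])))   # ~2.5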

Prerequisites

This thesis requires some knowledge of game engines, ideally with Unreal Engine. It would be helpful if you have already worked with VR in Unreal Engine (e.g. you have visited the course "Virtual Reality and Physically-Based Simulation").

Contact:

Maximilian Kaluschke, mxkl at uni-bremen dot de
Prof. Dr. G. Zachmann, zach at informatik.uni-bremen.de


Master thesis: Factors influencing correct perception of spatial relationships in VR

Subject

heatmap1

Learning the human anatomy plays an important role in any surgeon's education. A patient's well-being depends to a significant degree on the surgeon's good understanding of the spatial relationships between all the structures in the human body, such as organs, blood vessels, nerves, etc. The research question in this thesis is: how much better do people (e.g., medical students) learn those spatial relationships between different structures of the human body when they learn them using virtual reality, as opposed to learning them from 2D books?

Your task

In this thesis, you will build upon an existing application that was implemented on top of the Unreal game engine. This application already contains a lot of anatomy and several features to interact with the 3D geometry.

During this thesis, you will work with surgeons in Oldenburg to design and conduct the user study.

Prerequisites

This thesis does not require excellent programming skills. You will need considerable knowledge of statistics (which you can learn, of course, during your thesis). In any case, it would be helpful if you had some experience with the Unreal Engine. Like any other thesis, you will need to do a lot of literature research. Participation in our VR course will provide a good basis for understanding virtual reality as a whole.

Contact:

Prof. Dr. G. Zachmann, zach at informatik.uni-bremen.de


Master thesis: Influence of Self-Shadows on Presence in immersive virtual worlds

Subject

Creating a sense of presence in immersive virtual worlds has been a research topic for a long time. Fast hardware and software response times, exact tracking of your own position, and high resolutions are some of the many known requirements for a basic sense of presence when using head-mounted displays (HMDs).

In this master's thesis, another, possibly more subtle, factor influencing the feeling of presence is to be examined: shadows of oneself rendered in virtual worlds using HMDs. Questions would be whether such self-shadows increase the sense of presence, and how detailed and realistic such shadows have to be in order to have an effect.

In order to render shadows of oneself in a virtual world that is perceived via an HMD, one could proceed as follows: one or more Kinects take a depth image in real time, from which a point cloud is generated. The generated point cloud is then used to cast the shadow from the virtual light sources in the virtual world.
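The depth-image-to-point-cloud step is straightforward with a pinhole camera model; a small sketch (the intrinsics and the random depth image are placeholder values, not those of a specific Kinect):

# Back-project every depth pixel into a 3D point using pinhole intrinsics.
import numpy as np

fx, fy, cx, cy = 504.0, 504.0, 320.0, 288.0       # placeholder intrinsics [px]
depth = np.random.uniform(0.5, 3.0, (576, 640))   # placeholder depth image [m]

v, u = np.indices(depth.shape)                    # pixel row/column coordinates
z = depth
x = (u - cx) * z / fx
y = (v - cy) * z / fy
points = np.stack([x, y, z], axis=-1).reshape(-1, 3)   # N x 3 point cloud
print(points.shape)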

Task / Challenges:

Requirements:

Contact:

Andre Mühlenbrock, muehlenb at uni-bremen.de
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de


Master (Bachelor) thesis: Depth Perception in VR

Subject

heatmap1

The goal of this thesis is to investigate the (distorted) depth perception that is usually observed in VR. There is a variety of so-called depth cues, i.e., sources of information about the spatial relations of the objects in the environment, which are used by the human visual system to deduce depth. This includes visual monocular depth cues (e.g., occlusion, relative size), oculomotor depth cues (e.g., convergence, accommodation), and binocular depth cues (in particular, disparity). Unfortunately, there are frequent reports of underestimation of distances in virtual environments. There are many potential reasons for this effect, including hardware errors, software errors, and errors of human perception. The difference between the images in the left and right eye is called binocular disparity, and it is considered to be the strongest depth cue in personal space. Using random-dot patterns, it was observed that it is possible to perceive depth with no other depth cue than disparity. However, the actual influence of disparity on depth perception in VR is still unknown, as is an algorithm to adjust the disparity in software in order to correct depth perception. Such an automatic correction algorithm could be a game changer for many applications using VR.
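As a rough reminder of the underlying geometry (textbook small-angle approximations, not something specific to this thesis): with interocular distance $I$ and viewing distance $D$, the vergence angle and the relative angular disparity of a depth difference $\Delta D$ are approximately

\theta \approx \frac{I}{D} , \qquad \delta \approx \frac{I \, \Delta D}{D^{2}} ,

which already hints at why manipulating the (virtual) interocular distance directly scales the disparity signal.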

Your task

The goal of this thesis is to take at least the first step towards investigating the influence of disparity on depth perception in VR. Our idea is to design a user study where the distance between the eyes is changed in VR (which is pretty straightforward, by simply adjusting the virtual cameras) and to compare the results to depth perception in the real world, but also with a changed disparity. To do that, we have a set of hyperscopes; these are glasses that use mirrors to change the disparity. A real challenge is the design of an experiment that avoids other depth cues that could influence the results.

Prerequisites

This thesis does not require exceptional programming skills. Nevertheless, it would be helpful if you have a little experience with the Unreal Engine to set up a scene and change the disparity of the virtual cameras. This thesis mainly requires a lot of literature research about human depth perception, but also about the design of good user studies. Moreover, some knowledge of statistics could be helpful for the analysis of the results. Participating in our VR course can be a good starting point.

Contact:

Rene Weller, weller at informatik.uni-bremen.de
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de


Master thesis: Semantic Segmentation of Art

Subject

The aim of this project is to develop software for searching and analyzing Rocaille forms, such that it facilitates the comprehension of motives, composition, form-related transfers, and attributions.

The Rocaille can be described as a freely constructed, mostly asymmetrical base frame consisting of volute-like shapes (i.e., C- and S-shaped scrolls), which is marked by mostly independent and dominant extensions. All objects from the field of architecture, the applied arts, and all picture frames can be designed in this way: by employing a subtle form of the Rocaille to accentuate individual aspects, by using ornamental exaggeration, or by constructing objects by means of the Rocaille itself.

Methods developed in this thesis could, for instance, be used in art historical databases such as Prometheus or applied to the ever-growing offers on the internet. They will help to answer questions critical to art history such as the relationship between the analysis of form and meaning.

Tasks/Challenges

The goal is to develop algorithms that can learn the appearance, forms, and composition of the different parts of Rocaille art.

Prerequisites

Contact:

Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de


Master thesis: Radiotherapy optimization

Subject

In radiotherapy, tumors or other unhealthy tissue is irradiated by a beam (or several beams) of high-energy electromagnetic waves. If the irradiated energy is large enough, the unhealthy tissue is "killed". Of course, there is a challenge: the beams should hit all the unhealthy tissue, but only that; they should leave the healthy tissue intact.

The beams are usually generated by linear accelerators, and the cross section of the beams can be shaped by multi-leaf collimators (think "frustum through arbitrarily shaped window").

In addition, it is possible to overlap several beams coming in from different angles, where the goal is to make the shape of the intersection volume as close to the treatment volume as possible. Then, it is easier to adjust the energy of the beams such that the sum of the energies in the intersection volume reaches the level where it can kill the unhealthy tissue, while the energy elsewhere stays below the threshold where it would harm the healthy tissue.

A further challenge arises from the characteristics of proton beams (which are usually used in this kind of therapy): they lose energy as they enter the tissue, but the energy loss does not depend linearly on the penetration depth ("Bragg peak"), and they spread out as they go deeper.

Tasks/Challenges

The goal is to develop algorithms that can compute the optimal positions and energy levels of the proton beams, given a specific target volume (tumor), healthy tissue, and bones in the form of a CT or MRI volume.
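A common starting point in the literature is the "fluence map" formulation: given a dose-influence matrix D (dose delivered to each voxel per unit beam weight), choose non-negative beam weights w so that D·w approximates the prescribed dose. A toy sketch with made-up numbers (ignoring Bragg-peak physics and beam geometry):

# Fit non-negative beam weights to a prescribed per-voxel dose (non-negative least squares).
import numpy as np
from scipy.optimize import nnls

n_voxels, n_beams = 200, 12
D = np.random.rand(n_voxels, n_beams)        # would come from a physical dose model
prescribed = np.zeros(n_voxels)
prescribed[:50] = 60.0                       # tumor voxels should receive the target dose;
                                             # the remaining (healthy) voxels ideally receive nothing
weights, residual = nnls(D, prescribed)
print(weights)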

Prerequisites

Contact:

Thomas Hudcovic, hudo at uni-bremen dot de
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de


Thesis on Virtual Reality Methods for Driving Simulation

Subject

The BMW Group offers you a thesis at its center for driving simulation, research, new technologies, and innovation. As part of your work, you will accompany us in the further development of virtual reality (VR) methods in driving simulation, which, among other things, ensure a more immersive driving and interior experience. Furthermore, you will be involved in developing a technology that enables the evaluation of vehicle interior concepts by means of VR in a real vehicle. Collaboration with the internal expert groups rounds off your tasks.

Requirements:

Contact:

Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de
Here you have the opportunity to apply directly online:
www.karriere.bmwgroup.de


Internship: VR Method Development

Your Tasks

Volkswagen is one of the largest automobile manufacturers in the world - become part of it!
We bring innovative ideas to production readiness so that everyone can benefit from them. Efficient and sustainable technologies characterize not only our products but also their development process. And because every Volkswagen is only as good as the people behind it, we offer each individual employee optimal development prospects. If you want to shape the automotive future together with us, get on board.

For our young, dedicated team in the Visual Communication department, we are looking for student support for a period of at least 5 months.
Our work focuses on the production of images and films for communicating technical subject matter.

Do you enjoy film and know your way around game engines? Join us and work out, based on a concrete example, how to implement an animation with the Unity game engine. In addition to the technical challenges, you will address questions such as the following:

Your Qualifications

Have we sparked your interest? Then we look forward to receiving your application.

Further Information

This position is to be filled at Volkswagen AG in Wolfsburg.
Possible start date: as soon as possible.

Reference code: E-1676/2018

Your questions will be answered by Ms. Jana Juenemann
at +49-5361-9-14185.


Virtual 3D Simulation of Coral Reefs

Subject

The Leibniz-Zentrum für Marine Tropenökologie (ZMT) conducts research on coastal ecosystems in the tropics and their response to changes in their environment. Based on real data on the interaction of corals and their response to environmental changes, an abstract simulation model of a coral reef has been developed at the ZMT, showing how the reef evolves under the influence of various stress factors. For better illustration, a three-dimensional virtual environment (possibly also immersive) is to be created, based on the existing knowledge about the processes in coral reefs, with which the development of the reef can be influenced interactively and followed in a close-to-nature fashion.

Possible Tasks

Requirements:

Contact:

Depending on your focus and degree program, supervision lies primarily with the Computer Graphics and Virtual Reality group or with the ZMT.
Prof. G. Zachmann, zach at informatik.uni-bremen.de, Tel. 63991, Bibliothekstraße 5, 3.OG MZH 3460

PD Dr. Hauke Reuter
Leibniz-Zentrum für Marine Tropenökologie GmbH Bremen
E-Mail: Hauke.reuter at zmt-bremen.de


Direct Animation for Robot Path Planning

Subject

In many production processes, work steps are taken over by robots. In doing so, the machines sometimes carry out complex motion sequences that are not unlike the way humans work with their hands. After all, humans possess a highly developed cognitive-motor system for manipulating their environment. This motivates an approach in robotics that takes human activities as a template for planning robot control, i.e., robots "learn" motion sequences from humans.

An approach rarely pursued so far is to use innovative techniques for animation creation (direct animation) for robot path planning. For example, people could use touch or 3D input devices to interactively control the motion of real or virtual robot arms like a jointed puppet (digital puppetry). If this is recorded, the motion can be played back and then extended and refined through renewed control in further layers or passes (layered animation). Using intuitive methods such as Dragimation, the timing of these motions can be adjusted easily and quickly.

The Digital Media group and the Computer Graphics and Virtual Reality group are looking for graduating students interested in a thesis in this area. The goal is to develop and evaluate interaction techniques for controlling robot simulations. Concepts and techniques from direct animation will serve as a model. Specifically, various approaches are to be developed, integrated into existing systems, and examined for their suitability for robot path planning. Topics such as motion capture play a role here, as do aspects of computer simulation (such as collision detection).

Requirements:

Contact:

Prof. G. Zachmann, zach at informatik.uni-bremen.de, Tel. 63991, Bibliothekstraße 5, 3.OG MZH 3460

B. Walther-Franks
Digital Media Group (AG Digitale Medien)
Room 5320, MZH, Bibliothekstr. 1
Office hours by appointment
Email: bw at tzi.de


Thesis or Internship at KUKA

KUKA

Subject

KUKA AG sets standards in robot technology worldwide. We are a globally expanding company in the field of robotics and automation technology.

For research work in the areas of

KUKA Laboratories GmbH in Augsburg is looking for dedicated STEM students with

In cooperation with Prof. Dr. Zachmann, chair of Computer Graphics and Virtual Reality, paid internships and theses in exciting fields of work await you. More information is available from the chair (cgvr.cs.uni-bremen.de) and online at:
www.kuka.jobs

Contact:

Prof. G. Zachmann, zach at informatik.uni-bremen.de, Tel. 63991, Bibliothekstraße 5, 3.OG MZH 3460


Collision Detection

Subject

tori_collision

Collision detection is generally about determining whether two graphical objects touch or even penetrate each other. Usually, these objects are given in the form of polygons (so-called "polygon soups"), but other representations are at least as interesting.

Collision detection is an important base technology for many areas of computer graphics, e.g., animation, physically-based simulation, interaction in virtual environments, robotics, virtual prototyping, etc.

The hierarchical algorithms that are common today seem to be reaching a limit. Therefore, we are looking for novel algorithms in other directions.

Another large and still largely unexplored area is collision detection algorithms for deformable objects. These are, of course, a prerequisite for the simulation of cloth, components made of plastic or rubber, etc.
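Just as a small illustration of the most elementary building block used by the hierarchical algorithms mentioned above, an axis-aligned bounding-box (AABB) overlap test:

# Two AABBs overlap iff they overlap on every coordinate axis.
def aabb_overlap(min_a, max_a, min_b, max_b):
    return all(max_a[i] >= min_b[i] and max_b[i] >= min_a[i] for i in range(3))

print(aabb_overlap((0, 0, 0), (1, 1, 1), (0.5, 0.5, 0.5), (2, 2, 2)))   # True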

Requirements:

Basic knowledge of computer graphics and linear algebra, C/C++.
Unix/Linux would be nice to have, but can be picked up quickly.

Contact:

Prof. G. Zachmann, zach at informatik.uni-bremen.de, Tel. 63991, Bibliothekstraße 5, 3.OG MZH 3460


Natural Interaction in VR

Subject

bild1
bild2
bild0

In the field of virtual reality, we are particularly interested in intuitive and natural interaction. The long-term goal is to make interaction with virtual environments as natural as our everyday interaction with the real world.

The hand in particular (more precisely, the virtual hand) has been neglected so far, even though it is actually our most important "tool". We therefore offer various topics in this area.

One goal, for example, is "natural grasping". For this, on the one hand, a realistic, deformable hand must be modeled; on the other hand, the grasping of an object itself must be simulated.

Requirements:

Basic knowledge of computer graphics and linear algebra, C/C++.
Unix/Linux would be nice to have, but can be picked up quickly.

Contact:

Prof. G. Zachmann, zach at informatik.uni-bremen.de, Tel. 63991, Bibliothekstraße 5, 3.OG MZH 3460