Theses
On this page, you can find a number of
topics
for which we are looking for students interested
in working on them as part of their thesis
(bachelor's or master's).
The
list
is sorted in reverse chronological order;
that means the further a topic is towards the bottom,
the more likely it is that it has already been taken (or is no longer relevant to us).
However, this list is by no means exhaustive!
In fact, we always have many more topics available.
So, please also check out our research projects; in all projects,
there are lots of opportunities for doing a thesis.
In addition, there are quite a few "free floating"
topics, which are neither listed here
nor connected with research projects;
those are ideas we would like to try out or get familiar with.
If you are interested in one of the topics, please send me (or the respective contact person) an email with your transcript of records and 1-2 sentences of motivation.
If you would like to talk to us about thesis topics,
just make an appointment with one of the project members
or researchers of my group.
You can also come to my office hours (Mondays, 6 pm - 8 pm,
no appointment needed).
Please make sure to send me or the researchers your
transcript of records.
Ethics
Unlike 20 years ago, a lot of computer science research can and will have a huge impact on our society and the way we live. That impact can be good, but today, our research could also have a considerable negative impact.
I encourage you to consider the potential impact, both good and bad, of your work. If there is a negative impact, I also encourage you to try to think about ways to mitigate that.
As a matter of course, I expect you to follow ACM's
Code of Ethics and Professional Conduct.
I think we all should go a step further
and change the scientific peer-reviewing system,
not only for paper submissions but also for grant proposal submissions,
before we start a thesis, a new product development, etc.
Here is an interview with Brent Hecht,
who has a point with his radical proposal, I think.
This article (in German) explains quite well, I think,
how
agile software development can include ethical considerations
("Ethik in der agilen Software-Entwicklung", August 2021, Informatik Spektrum der Gesellschaft für Informatik).
Doing Your Thesis Abroad
If you are interested in doing your thesis abroad, please
talk to us; we might be able to help with establishing a contact.
You also might want to look for financial aid,
such as this DAAD stipend.
Doing Your Thesis with a Company
If you are interested in doing your thesis at a company, we might be able to help establish a contact, for instance, with Kizmo, Kuka (robot developer), Icido (VR software), Volkswagen (VR), Dassault Systèmes 3DEXCITE (rendering and visualization), ARRI (camera systems), Maxon (maker of Cinema4D), etc.
Doing Your Thesis in the Context of a Research Project
We always have a number of research projects going on, and in the context of those, there are always a number of topics for potential master's or bachelor's theses. If you are interested in such an "embedded" thesis topic, please pick one of those research projects, then talk to the contact given there or talk to me.
Formalities
If you feel comfortable with writing in English
(or if you want to become more fluent in English writing),
I encourage you to write your thesis in English.
I recommend writing your thesis using LaTeX!
There are no typographic requirements regarding your thesis:
just make it comfortable to read; I suggest you put some effort
into making it typographically pleasing.
A good starting point is the
Classic Thesis Template
by André Miede.
(Archived Version 4.6)
But feel free to use some other style.
Regarding the structure of your thesis, just look at some of the examples in our collection of finished theses.
Referencing / citation: with the natbib LaTeX package, this
should be relatively straightforward; just pick one of the predefined
citation/referencing styles.
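For example, a minimal natbib setup might look like the following sketch; the bibliography file name ("references") and the citation key are placeholders for your own:

    % Minimal natbib sketch; "references" and "zachmann1998" are placeholder names.
    \documentclass{article}
    \usepackage[numbers]{natbib}   % or [authoryear] for author-year citations

    \begin{document}
    \citet{zachmann1998} introduced one such approach.                % textual: Author [1]
    Collision detection is a core problem in VR~\citep{zachmann1998}. % parenthetical: [1]

    \bibliographystyle{plainnat}   % one of natbib's predefined styles
    \bibliography{references}      % expects references.bib
    \end{document}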
If you are interested in variants,
here is the Ultimate Citation Cheat Sheet
that contains examples of the three most prevalent styles.
I suggest following the MLA style.
(Source)
Recommendations While Doing the Actual Work
- When you start doing your thesis, keep a kind of diary or log book
where you record your ideas and keep track of what you have done.
Examples:
- Max's notes on his tablet (thanks, Max!)
- My notes as a stack of papers in a folder when I did my master's thesis (called "Diplomarbeit" at the time)
- Lab notebooks by Hahn and Bell
- Lab notebooks by other famous people: Leonardo da Vinci, Graham Bell, Thomas Edison 1, 2, 3. (Source)
- Good Laboratory Notebook Practices (Source)
- Have your laptop/computer make a backup every day automatically! (I just narrowly escaped a total disaster: one week after I had left the research institution where I did my thesis, the hard disk of the big machine (SGI Onyx) that contained all my data crashed completely! And that was one week before my deadline!)
Recommendations for Writing Up
- Write in active voice, not passive voice, whenever you describe what you have done or when you have made choices. This is also recommended in the APA style and you can read more about when to use active voice and when to use passive voice.
- Whenever you describe methods, algorithms, or software that others have developed, say so, i.e., "give credit where credit is due". (There is nothing wrong with using ideas, software, etc., from others, so long as you give credit.)
- Motivate your decision and choices. You can do so by reasoning, by citing previous work, by making experiments, etc.
- Evaluate your algorithms and methods. If you have developed an algorithm, the evaluation consists of experiments about its performance (quantitative and/or qualitative); ideally, you can also make a theoretical analysis using big-O calculus. If you have developed a user interaction method, the evaluation consists of user studies.
- When you describe the prior work in Section 2 of your thesis (a.k.a. state-of-the-art), also try to assess their good features and their limitations. (Usually, one sentence is enough.)
- In your chapter "Conclusions", try to summarize what you have done, describe for which cases your new method performs well and by what factor it performs better than the state-of-the-art; also, describe the limitations of your new method.
- When you describe your algorithms, please use pseudo-code (and equations, if there are any). Never use Blueprints or flow graphs. Real code (and Blueprints) goes into an appendix. If you want to look at some good examples, you can look at the following theses: Hermann Meißenhelter's, Roland Fischer's, or my own. (This list is, of course, by no means exhaustive!)
- Look at some of the examples on our Finished Theses page.
- Before you turn in your thesis, ask your advisor to have a quick look at it.
- When you turn in your thesis, please send me a PDF via email.
Guidelines for Type(s) of Chart to use in your Thesis
At some point in your work, you probably will generate some charts to present your results. Some charts are better at showing specific facets of the data than others. In the following table, you can find an overview of which chart is useful for communicating which properties of the data [B. Saket, A. Endert, and Ç. Demiralp: "Task-Based Effectiveness of Basic Visualizations", IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 7, pp. 2505–2512, July 2019].
How to use the table: first, pick the purpose of your visualization of your data; for example, let's assume you want to find correlations. So, you go to the "Correlations" row. Next, pick your top criterion; in our example, let's assume you strive to maximize user preference. So, you go to the cell under the "User preference" column. Finally, pick one of the chart types on the left-hand side in that cell (they are ranked by score regarding the respective criterion you picked). In our example, you should probably use the line chart; if that does not fit your purposes (for whatever other reasons), then you probably want to pick the bar chart instead. The arrows symbolize "performs better than" relationships between chart types (inside that cell).
Criteria we Use When Grading Your Thesis
When assessing a master's or bachelor's thesis, we use the following criteria:
- Knowledge and skills (what does the student bring along?)
- Systematic and scientific approach (is the student able to work scientifically?)
- Initiative, commitment, perseverance (how does the student go about the work? what is their frustration tolerance? do they have their own ideas and pursue them with energy? or do they merely do the bare minimum, "Dienst nach Vorschrift"?)
- Quality of the results (what actually came out of the thesis?)
- Presentation of the results (can the student report on their work precisely and comprehensibly? this concerns both the written thesis and the talk)
Recommendations for Your Presentation During Your Defense (Colloquium)
- Length: for master's theses, your talk should not exceed 20 minutes; for bachelor's theses, you should err towards 15 minutes.
- You can omit the slide "Structure of the talk" (contrary to what you probably learnt). Reason: the structure is always the same, i.e., motivation, problem/task, related/prior work, concept/architecture/algorithms, implementation, evaluation, conclusions, future work.
- In your introduction, try to motivate your work; to do so, try to answer three questions: 1) what is the "big picture" into which your work fits? 2) what is the exact problem you are trying to solve? 3) in which way do existing solutions / scientific works fall short?
- Focus on the "meat", i.e., your algorithms, your user study, or your software architecture; basically, anything and everything that is hard computer science.
- Towards the end, show plots, show pictures, show videos.
- Draw conclusions: what is now possible with your novel method? Also, point out limitations that still exist.
- Also good practice: show a video at the end.
- Bad practice: too much text, no diagrams.
- Practice your talk! You can practice in front of friends or your partner, or record yourself. (I know it might hurt, but it is helpful.)
- Don't forget to invite your advisor/supervisor(s) to your defense!
Links
For printing your thesis, you might want to consider
Druck-Deine-Diplomarbeit.
We have heard from other students that they have
had good experiences with them (and I have seen nice examples of their print products).
Also, there is a friendly copy shop,
Haus der Dokumente,
at Wiener Str. 7, right on campus.
The List
Master Thesis: Inverse Reinforcement Learning and Affordances
Subject

People could program powerful chess computers before they could program a robot to walk on two legs, and many of the tasks we find easy as human beings, such as daily activities involved in preparing meals or cleaning up, turn out to be difficult to specify in detail. Thus, if we want robots to be competent helpers in the home, it would be better if we could teach them by showing what needs to be done, and for them to learn from watching us. Several techniques are being researched to enable such learning. One of these techniques is IRL—inverse reinforcement learning [1]—where the goal is to discover, by watching an "expert," the reward function that this expert is maximizing. This is more effective than simple imitation of the expert's actions. Consider the proverbial monkey shown how to wash dishes. The monkey may go through the motions of wiping, but if it did not understand that the dishes should be clean afterwards, then it won't do a good job. However, IRL is an ill-posed problem: there can be an infinity of reward functions that the expert may be demonstrating. To even make an educated guess would often require considering enormous search spaces—there are many parameters that go into characterizing even the simplest manipulation action! Additionally, the environments in which human beings perform tasks, and the tasks themselves, are in principle of unbounded complexity: if a human knows how to stack three plates on top of each other, they also know how to stack four or ten.
Your Tasks/Challenges:
The subject of this thesis is to develop an IRL system that combines existing research into relational IRL[2], modular IRL[3], and explicitly represented knowledge to enable a simulated agent to learn, from demonstrations performed in a simulated environment, how to perform tasks such as stacking various items, putting objects in and taking them out of containers, and how to cover containers. While the project can start with published techniques, it also raises research questions to investigate. Relational IRL is a technique to learn rewards that generalize and describe tasks for environments of, in principle, arbitrary complexity. However, the choice of logical formulas in the relational descriptions has a significant influence on the quality of the learned rewards—how can the logical language of these descriptions be well-chosen for the tasks we have in mind, such as stacking and container use? Furthermore, because IRL is mathematically ill-posed, many reward functions are learnable. [2], cited below, shows an example of an unstacking task, where both a reward for "there are 4, 5, or 6 blocks on the floor" and a reward for "there are no stacked blocks" are learnable from the same data, but it is only the second one that captures the intended level of generality. How can the learning process be influenced to prefer the more generalizable rewards? How can we encode which parameters of the demonstration count "as-is" and which are allowed to vary arbitrarily? The manipulations involved in stacking or container use are complex. Can these be split into several phases, allowing for independent learning for each phase and thus simplifying the search space for the IRL problem?
Requirements:
- Motivation
- Programming skills (Python, PyTorch, OpenAI Gym).
- Unreal Engine (Virtual Reality or OptiTrack)
Contact:
Prof. Dr. Gabriel Zachmann,
email: zach at informatik.uni-bremen.de
Master Theses at DLR/CGVR: Point Clouds in VR
Subject



The Department of Maritime Security Technologies at the Institute for the Protection of Maritime Infrastructures is dedicated to solving a variety of technological issues necessary for the implementation and testing of innovative system concepts to protect maritime infrastructures. This includes the development of visualization methods for maritime infrastructures, including vast point cloud data sets. The Computer Graphics and Virtual Reality Research Lab (CGVR) at the University of Bremen carries out fundamental and applied research in visual computing, which comprises computer graphics as well as computer vision. In addition, we have a long history of research in virtual reality, which draws on methods from computer graphics, HCI, and computer vision. These two research groups offer the opportunity for joint master's theses, allowing students to get the best of both worlds of academic and applied science.
Potential Topics:
- Point Cloud Labeling: Your mission will be to create an intuitive and interactive VR application, revolutionizing the way we annotate and process vast amounts of point cloud data. Point cloud data lies at the core of modern technologies such as self-driving cars, augmented reality, and 3D mapping. Your challenge will be to bridge the gap between traditional 2D labeling methods and the immense potential of 3D point cloud data. Through your expertise and creativity, you will unlock the next level of precision and efficiency in data annotation.
- Point Cloud Rendering: With the advent of cutting-edge scanning technologies and 3D data capture, point cloud datasets have grown to unprecedented sizes, containing billions of data points. The conventional rendering approaches simply fall short in handling such colossal volumes, leading to reduced performance, compromised visual fidelity, and frustrating user experiences. Your task will be to innovate and engineer a novel technique that cleverly optimizes memory usage and computational efficiency, while preserving the intricate details and accuracy of the original point cloud data.
- Point Cloud Segmentation: Traditional segmentation approaches often require vast amounts of annotated data, making them cumbersome and time-consuming. Your task will be to explore innovative learning techniques, including few-shot-approaches, designing algorithms that can leverage prior knowledge from a small set of labeled point clouds to accurately segment new, previously unseen data.
Note that you do not need to work on all of the topics; they are meant as potential ideas for what you could work on and what is of interest to us. The specific details of your topic will be discussed once you decide you want to work in this area.
Requirements:
- Proficiency in computer graphics and 3D rendering techniques.
- Strong programming skills (C++, Python, or related languages).
- A plus, but not strictly necessary: familiarity with GPU programming (CUDA) and a good understanding of hashing, image matching, and feature detection in images
- A passion for pushing the boundaries of what’s possible in the realm of virtual environments and visualization.
Contact:
Prof. Dr. Gabriel Zachmann,
email: zach at informatik.uni-bremen.de
Master thesis: Identifying the Re-Use of Printing Matrices
Subject

Even before Gutenberg invented the printing of texts, images were printed using matrices, either carved woodblocks or engraved copperplates. Because they were expensive to produce, these matrices were often re-used, even after many years, or sold to other printers. Since there was no copyright, some printers simply had successful illustrations copied (with greater or lesser accuracy) for their own use.
In recent years, several million book illustrations have been digitised, naturally including many re-uses of printing matrices. However, these photographs do not look exactly the same: matrices may become worn or damaged over time, the printing process may have been handled slightly differently, pages can become dirty or torn, and, lastly, the photos were taken with different camera systems and from different angles.
This thesis aims to investigate possible methods to match images to the printing matrices used, in order to track possible re-use, with the intention of incorporating the developed methods into real-world usage.
One idea could be to utilize geometric hashing on either extracted feature points (see our Massively Parallel Algorithms lecture) or on features extracted from a trained classifier network.
Your Tasks/Challenges:
- There are already systems that analyse a closed corpus of such images through direct comparison between them (https://www.robots.ox.ac.uk/~vgg/software/vise/). However, here, a procedure is sought that can work with an image database, to which new material is added constantly.
- Two aspects of the material may differ from many other tasks of analysing images:
- Firstly, many examples contain large numbers of lines, not least because light and shade are normally shown by hatching. Hence, finding feature points could be somewhat challenging.
- Secondly, one will not be able to take new photographs of the images under standardised conditions but will have to use the images that are publicly available in repositories such as this: https://www.digitale-sammlungen.de/de/.
- Familiarize yourself with the concepts of spatial hashing (geometric hashing) and implement it so it can take advantage of the parallelization capabilities of the GPU.
- Just throwing ORB or another feature detector at the images may not be enough to prevent false negative and false positive matches; you might need to incorporate deep-learning features, and maybe even other attributes of the images, and think of suitable data structures for that (a plain feature-matching baseline is sketched below).
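To illustrate the kind of baseline referred to above, here is a hedged sketch of plain ORB matching with a geometric verification step in OpenCV (Python); the synthetic line images merely stand in for two digitised illustrations, and the thresholds are arbitrary:

    import cv2
    import numpy as np

    # Placeholder input: in practice you would load two scans, e.g.
    # img1 = cv2.imread("print_a.png", cv2.IMREAD_GRAYSCALE).
    # Here we synthesise a line drawing and a slightly rotated copy of it.
    rng = np.random.default_rng(0)
    img1 = np.zeros((512, 512), np.uint8)
    for _ in range(200):
        p1, p2 = rng.integers(0, 512, 2), rng.integers(0, 512, 2)
        cv2.line(img1, (int(p1[0]), int(p1[1])), (int(p2[0]), int(p2[1])), 255, 2)
    M = cv2.getRotationMatrix2D((256, 256), 5.0, 1.0)
    img2 = cv2.warpAffine(img1, M, (512, 512))

    # ORB keypoints and binary descriptors.
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Brute-force Hamming matching with Lowe's ratio test to discard ambiguous matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])

    # Geometric verification: keep only matches consistent with one global homography.
    # In the thesis, geometric hashing and/or learned features would replace or extend this step.
    if len(good) >= 4:
        src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        print(len(good), "tentative matches,", int(mask.sum()), "geometrically consistent")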
Requirements:
- Solid machine learning and deep learning skills, familiarity with basic classifier neural networks
- Familiarity with the concepts of features and feature extraction w.r.t (convolutional) neural networks
- Familiarity with GPU programming (CUDA) and a good understanding of hashing and image matching and feature detection on images
- Openness for working with images from other time-periods
Contact:
Thomas Hudcovic,
hudo at uni-bremen dot de
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de
Master Thesis: Gravity Modeling and Stable Diffusion
Subject

Current and future small-body missions, such as the ESA Hera mission or the JAXA MMX mission, demand good knowledge of the gravitational field of the targeted celestial bodies. This is not only motivated by the need to ensure precise spacecraft operations around the body, but is likewise important for landing maneuvers, surface (rover) operations, and science, including surface gravimetry. To model the gravitation of irregularly shaped bodies, different methods exist.
Recently, (latent) stable diffusion has gained popularity as a deep learning approach. Usually, such systems work in image space. However, this thesis should investigate how the method can be used to model a gravity field (3D space). With the polyhedral method, we can compute the gravity field of 3D shape files as ground truth data.
Your Tasks/Challenges:
- Generate a lot of ground truth data with the polyhedral method
- Find a more compact way to represent the gravity field (latent space, gravitational potential)
- Predict the gravity field of new objects
- Another output could be the density distribution inside a body (inverse problem)
Requirements:
- Excellent machine learning skills
- Basic knowledge of stable diffusion (AutoEncoder, U-Net)
- Motivation
Contact:
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de
Master Thesis: Sphere Packing Problems
Subject

Sphere packings offer a way to approximate the volume of a shape. They can be used in many applications. The most common usage is collision detection, since testing spheres for intersection is fast and trivial. Other applications include modeling gravitational fields or applications in medical environments with force feedback.
Also, an important quality criterion is the packing density, which is closely related to the fractal dimension. An exact determination of the fractal dimension is still an open problem.
The practical side is well understood. We use the Protosphere algorithm for triangular meshes to generate sphere packings, which approximate Apollonian diagrams. Yet, the theoretical side needs more exploration.
We are considering multiple areas where you can study single or multiple topics in a thesis.
Your Tasks/Challenges:
- Determining the fractal dimension (a small estimation sketch follows this list)
- Packing density (theoretical limit for the approximation error)
- The precision or effect of prototypes (a symmetric object does not lead to a completely symmetric sphere packing)
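Regarding the first task, here is a hedged sketch of one possible empirical approach (assuming a precomputed packing; the Pareto-distributed radii below are only a stand-in for real Protosphere output): for Apollonian-like packings, the number N(r) of spheres with radius at least r is expected to follow a power law N(r) ~ r^(-d), where d is the fractal dimension, so a log-log fit over the radii yields an estimate.

    import numpy as np

    # Stand-in data: replace with the radii of an actual sphere packing.
    rng = np.random.default_rng(0)
    radii = rng.pareto(2.5, size=100_000)

    # Sort radii in descending order; then N(r_i) = i, the number of spheres
    # with radius >= r_i.  For Apollonian-like packings, N(r) ~ r^(-d).
    r_sorted = np.sort(radii)[::-1]
    N = np.arange(1, len(r_sorted) + 1)

    # Fit only over an intermediate range to avoid cutoff effects at the
    # largest and smallest radii.
    lo, hi = int(0.01 * len(N)), int(0.9 * len(N))
    slope, _ = np.polyfit(np.log(r_sorted[lo:hi]), np.log(N[lo:hi]), 1)
    print("estimated fractal dimension d ~", -slope)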
Requirements:
- Joy in math and geometry (Computational Geometry)
- Motivation
Contact:
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de
Deep Learning for Deformation Simulations during Hand Motion
Subject

Simulating deformations of objects is an important part of computer graphics and is known as soft-body dynamics.
Examples of soft-body dynamics are skin deformation on hands or the deformation of jelly-like materials.
These deformations are computed using the finite element method, position-based dynamics, spring/mass models, and others.
In this thesis, we focus on one prominent example, the deformation of skin during hand motion.
Depending on the complexity of the hand model, this simulation may take a while.
So, for real-time applications in computer graphics, speeding up this simulation is crucial.
Mesh-based deep learning is a promising candidate to achieve this speedup.
Here, the deformed mesh of the hand is the output of the neural network, given a set of parameters (joint angles of the hand) as input.
The training data for this mesh-based approach is provided via soft body dynamics using the finite element method.
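As an illustrative sketch only (the actual architecture will come from the mesh-based deep learning literature, and all dimensions and dummy data below are placeholders), the basic surrogate-model setup could look like this in PyTorch:

    import torch
    import torch.nn as nn

    N_JOINTS = 20      # placeholder: number of hand joint angles (network input)
    N_VERTICES = 5000  # placeholder: number of mesh vertices (3 coordinates each)

    # Simple MLP mapping joint angles to per-vertex displacements of a rest-pose mesh;
    # a real mesh-based network (e.g., with graph convolutions) would replace this.
    model = nn.Sequential(
        nn.Linear(N_JOINTS, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, N_VERTICES * 3),
    )

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Dummy batch standing in for FEM results: joint angles -> vertex displacements.
    angles = torch.randn(32, N_JOINTS)
    displacements = torch.randn(32, N_VERTICES * 3)

    for step in range(100):
        pred = model(angles)
        loss = loss_fn(pred, displacements)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()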
Your Tasks/Challenges:
- Reimplement a mesh-based neural network from the literature
- Train neural networks with deformation data
Requirements:
- Programming skills in Python
- Interest/basic knowledge in deep learning
Contact:
Janis Roßkamp, j.rosskamp at cs.uni-bremen.de
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de
Master thesis: Natural hand-object manipulations in VR using Optimization
Subject

One of the long-standing research challenges in VR is to allow users to manipulate
virtual objects the same way they would in the real world, i.e., grasp them,
twiddle and twirl them, etc.
One approach could be physically-based simulation: calculating the forces acting on the object
and the fingers, and then integrating both hand and object positions.
Another approach, to be explored in this thesis, is to use optimization.
The idea is to calculate hand-object penetrations, or minimal distances in case
there are no penetrations, then determine a new pose for both hand (and fingers)
and the object such that these penetrations are minimized (or distances are maximized).
Software for computing penetrations has been developed in the CGVR lab and is readily available.
Also, many software packages for fast non-linear optimization are available in the public domain
(e.g., pagmo).
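To make the idea a bit more concrete, here is a hedged sketch (in Python rather than C/C++, and with a toy penetration_depth() standing in for the CGVR penetration software; the pose parameterization and penalty weights are placeholders):

    import numpy as np
    from scipy.optimize import minimize

    def penetration_depth(hand_pose, object_pose):
        # Toy placeholder for the actual penetration computation:
        # returns 0 if the shapes are separated, > 0 if they interpenetrate.
        return max(0.0, 1.0 - float(np.linalg.norm(hand_pose[:3] - object_pose[:3])))

    def objective(x, hand_pose_0, object_pose_0):
        # x stacks corrections to the 6-DOF hand pose and the 6-DOF object pose.
        d_hand, d_obj = x[:6], x[6:]
        pen = penetration_depth(hand_pose_0 + d_hand, object_pose_0 + d_obj)
        # Penalize penetration, but also penalize drifting away from the tracked pose,
        # so the virtual hand stays close to the user's real hand.
        return pen + 0.1 * np.dot(d_hand, d_hand) + 0.1 * np.dot(d_obj, d_obj)

    hand_pose_0 = np.zeros(6)                                 # placeholder tracked hand pose
    object_pose_0 = np.array([0.5, 0.0, 0.0, 0.0, 0.0, 0.0])  # placeholder object pose

    result = minimize(objective, x0=np.zeros(12),
                      args=(hand_pose_0, object_pose_0), method="Nelder-Mead")
    print("pose corrections:", result.x)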
Task / Challenges:
- Work out the details of the method, for instance, what exactly could be the best objective function for the optimization?
- Determine the best optimization software package, get familiar with our penetration computation software.
- Implement the method in C/C++.
- Perform a small user study.
Requirements:
- Programming skills in C/C++ (at least basic knowledge)
- Mathematical thinking (no theorem proving will be needed)
Contact:
Prof. Dr. Gabriel Zachmann: zach at informatik.uni-bremen.de
Master thesis: Mixed Reality Telepresence: Extending a Collaborative VR Telepresence System by Augmented Reality
Subject

Shared virtual reality (VR) and augmented reality (AR) systems with personalized avatars have great potential for collaborative work between remote users. Studies indicate that these technologies provide great benefits for telepresence applications, as they tend to increase the overall immersion, social presence, and spatial communication in virtual collaborative tasks. In our current project, remote doctors can meet and interact with each other in a shared virtual environment using VR headsets and are able to view live-streamed and 3D-visualized operations (based on RGB-D data) to assist the local doctor in the operating room. The local doctor is also able to join using VR.
The goal of this thesis is to extend the existing UE4 VR telepresence project to allow the local doctor to use AR glasses like the Hololens instead of the VR headset. This enables the doctor to interact, hands-free, with the remote experts while continuing the operation, and prevents interruptions. Your tasks are to adapt the current code such that it also works with the Hololens (general detection, tracking, registration, interaction gestures). Additionally, the relevant data has to be streamed as fast as possible onto the Hololens to be viewed. Lastly, and optionally, it would be great to use the built-in depth sensor of the Hololens for 3D visualizations of the patient. This could be done by continuously registering the sensor and streaming the data back into the shared virtual world.
Task / Challenges:
- Extending the current UE4 project to allow the usage of AR goggles (Hololens) instead of only VR headsets.
- Designing and implementing interaction gestures for the AR user.
- Implementing low-latency compression and streaming of point cloud/video data to the Hololens.
- (Implementing the continuous registration and streaming of the Hololens's depth camera data for shared point cloud avatars.)
Requirements:
- Some experience with a game engine, ideally UE4, but Unity is fine too
- Basic programming skills, ideally C++.
Helpful:
- Experience in computer graphics.
- Experience with AR/VR.
Contact:
Roland Fischer, s_8ix2ba at uni-bremen.de
Prof. Dr. Gabriel Zachmann,
zach at informatik.uni-bremen.de
Master thesis: High Fidelity Point Clouds: Artificially Increasing the Sensor's Depth Resolution
Subject

RGB-D cameras like Microsoft's Azure Kinect and the corresponding point cloud visualizations of the captured scenes are getting increasingly popular and find usage in a wide range of applications. However, the low depth sensor resolution is a limiting factor resulting in very coarse 3D visualizations.
The goal of this thesis is to find and implement methods to artificially increase the depth sensor's resolution, and, thus, the fidelity of the generated point clouds. The methods have to be fast enough for real-time usage. One approach is to develop or adapt and employ super sampling algorithms (possibly based on deep learning) on the depth images. Another approach would be to experiment with attaching a convex lens in front of the sensor to increase the local pixel density for a distinct area, although this limits the field of view. Using a lens would entail a custom calibration/registration procedure between depth and color sensor. Your task is to explore these and possibly other methods and implement the most convincing one(s).
Task / Challenges:
- Conducting experiments with convex lenses in front of the sensor to achieve a higher density at range.
- Conducting experiments with (deep learning?) super sampling of the depth images.
- Investigation of other methods to artificially increase the depth sensor's resolution.
- Implementation of the most convincing method(s).
Requirements:
- Basic programming skills, ideally C++.
- Experience with image processing.
Helpful:
- Some experience with a game engine, ideally UE4, but Unity is fine too
- Experience with RGB-D cameras.
- Experience with deep learning.
Contact:
Roland Fischer, s_8ix2ba at uni-bremen.de
Prof. Dr. Gabriel Zachmann,
zach at informatik.uni-bremen.de
Bachelor/Master thesis: Non-Uniform Stereo Projection for 3D Monitors
Subject

Virtual worlds on 3D monitors are limited by the display area. For example, at the edges of the screen, conflicting stereo cues cause stereo violations. We developed a new method for real-time stereo rendering that probably reduces these adverse effects. Our approach works in the vertex shader and was tested on our Powerwall.
In this work, your task is to port the stereo rendering algorithm to the Z-Space, a 3D monitor with head tracking. The stereo rendering currently uses side-by-side rendering and has to be ported to quad-buffer stereo rendering. The method is implemented in the Godot game engine. Further, you will conduct a user study to evaluate the influence of the new technique on a small screen and compare it to the large Powerwall.
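For orientation only, quad-buffer stereo boils down to rendering the scene once per eye into dedicated left/right back buffers; the sketch below uses PyOpenGL rather than Godot, and render_scene, the camera objects, and swap_buffers are placeholders for whatever the engine provides:

    from OpenGL.GL import GL_BACK_LEFT, GL_BACK_RIGHT, glDrawBuffer

    def render_stereo_frame(render_scene, camera_left, camera_right, swap_buffers):
        # Quad-buffer stereo: one render pass per eye into its own back buffer.
        # Requires a GL context created with a stereo-capable pixel format
        # and a GPU/driver that exposes quad-buffer stereo.
        glDrawBuffer(GL_BACK_LEFT)
        render_scene(camera_left)

        glDrawBuffer(GL_BACK_RIGHT)
        render_scene(camera_right)

        swap_buffers()  # presents both eye buffers to the 3D display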
Task / Challenges:
- Port the side-by-side stereo rendering to Quad-Buffer rendering in the Godot game engine
- Create a simple but effective scene to test the effect
- Evaluate the influence of the new method, comparing small and large screens
Requirements:
- C/C++ or shader programming
Helpful:
- Experience in designing and conducting studies
- Knowledge of statistics
- Experience in OpenGL programming
Contact:
Christoph Schröder-Dering, schroeder.c at informatik.uni-bremen.de
Prof. Dr. Gabriel Zachmann,
zach at informatik.uni-bremen.de
Master thesis: Comparing Surgical Lighting Systems in VR

Subject
In the SmartOT project, a new autonomous lighting system for operating rooms is being developed, which consists of many light modules placed on the ceiling that automatically change their intensity to minimize the shadows cast by surgeons and medical staff onto the surgical wound. The aim of the master's thesis is to design and perform a suitable study (and a task) that allows comparing the automatic lighting system with a manual surgical lamp in a VR simulation.
The autonomous lighting system as well as a conventional manual surgical lamp are already implemented in a VR simulation using Unreal Engine 4.26. In addition, an implementation of working with Lego bricks already exists (video), which can be extended by a task (e.g., having a participant explicitly rebuild a given Lego figure).
Tasks/Challenges
- Implementation of a task (e.g., the rebuilding of a given Lego figure) in Unreal Engine 4 which is suitable to compare both lighting systems.
- Planning and conducting a study in VR.
Requirements:
- Experience in designing and conducting studies.
- Experience in a Game Engine (ideally Unreal Engine 4).
- Experience in working with a VR-headset (HMD).
- Basic programming skills (C++, Blueprints).
- Knowledge of statistics.
It is fine if not all requirements are met, as long as there is a willingness to delve into these topics as part of the master's thesis.
Contact:
Andre Mühlenbrock, muehlenb at uni-bremen.de
Prof. Dr. Gabriel Zachmann,
zach at informatik.uni-bremen.de
Bachelor/Master thesis: Comparing Virtual and Real Grasps
Subject
Advances in grasping algorithms and hand tracking methods enable interaction in virtual environments through natural grasping. This allows experiments in VR where participants can interact naturally with virtual objects, leading to possible new insights into how humans manipulate objects. But these experiments are only possible if real and virtual grasps are comparable to some extent. The goal of this thesis is to investigate this assumption by comparing real with virtual grasps. The investigation should answer (but is not limited to) the following research questions:
- Will physically-based grasping improve the results compared to more simplistic methods?
- Will a limited time for grasping operations influence the results?

Tasks/Challenges
We provide a framework for tracking hand movements and manipulating objects in VR. Your task is to design a virtual environment in Unreal Engine 4 and conduct a study to investigate differences in grasping between the real and the virtual world.
Requirements:
- Some experience with a game engine
Helpful:
- Experience in designing and conducting studies.
- Basic programming skills.
- Experience in VR/working with HMDs.
- Knowledge of statistics.
Contact:
Janis Roßkamp, j.rosskamp at cs.uni-bremen.de
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de
Master thesis: Creation of an RGB-D Dataset with Ground Truth for Supervised Learning and Depth Image Enhancement
Subject

RGB-D cameras (color + depth) are hugely popular for 3D reconstruction and telepresence scenarios. An open problem is the inherent sensor noise, which limits the achievable quality. Deep learning techniques have proven to be very promising for image denoising, completion, and enhancement tasks; however, for supervised learning, ground truth data is needed. Acquiring suitable, realistic ground truth data for RGB-D images is a huge challenge, which is why there is hardly any available yet.
With this thesis, we want to create a universally usable RGB-D dataset with ground truth data. To achieve this, the idea is to arrange a real physical test scene consisting of a wide variety of objects and materials. To precisely specify and change the position and rotation of the RGB-D camera within the scene, we rely on a highly accurate robot/robot arm. The corresponding ground truth images will be acquired by creating a virtual version of the scene and its contained objects, e.g. using the Unreal Engine 4 and Blender. A virtual camera can eventually be placed in the virtual scene, be exactly aligned with the physical one, and record corresponding synthetic ground truth images.
Task / Challenges:
- Creating and arranging a suitable, varied physical test scene with everyday objects
- Exactly recreating the scene via 3D modeling or other appropriate techniques (photogrammetry)
- Recording of test images and trajectories using the robot and an RGB-D camera
- Taking the corresponding color and depth images in the virtual scene
Requirements:
- Experience in 3D modeling or 3D reconstruction
Helpful:
- Experience with robots and RGB-D cameras
- Basic programming skills, ideally C++.
- Experience with game engines like the Unreal Engine 4
Contact:
Roland Fischer, s_8ix2ba at uni-bremen.de
Prof. Dr. Gabriel Zachmann,
zach at informatik.uni-bremen.de
Master thesis: Determination and optimization of the accuracy of measured values of the "Virtuose™ 6D" haptic arm of the company Haption GmbH.
Subject

The "Virtuose™ 6D" haptic arm from the company Haption is used in a training system for drilling and milling in human bones. With the haptic arm, the training participant is given a very realistic feeling of the drilling or milling process. During the training, measured values for the forces and position of the drilling or milling tip in space are to be recorded, the accuracy of which is currently limited.
Your task
The task is to optimize the processing of the sensory information from the haptic arm so that these measured values achieve the required accuracy, i.e. about 0.5 mm in position and 0.2 N in force. The measured values are to be transmitted to a system that can store these data with high frequency and precise time stamps. The work includes the identification of the sources of error, in particular the deformation of the haptic arm structure under the action of the applied forces, their modeling and measurement, and their compensation.
Contact:
Prof. Dr. G. Zachmann, zach at informatik.uni-bremen.de
Bachelor/Master thesis: Teaching Anatomy through Augmented Reality
Subject

The human anatomy is hard to convey through textbooks. Studies suggest that teaching anatomy through mixed reality can improve the learning effect as well as the retention of the learned material. We developed an immersive virtual reality (VR) anatomy atlas that allows medical students to explore the human anatomy in a virtual 3D space. We want to experiment with other forms of mixed reality, such as augmented reality (AR).
Your task
In this thesis, you will build upon an existing application, a VR Unreal Engine project. Your task is to adapt the anatomy atlas to work in AR. To do this, the project needs to be ported to the OpenXR framework.
Prerequisites
This thesis requires some knowledge of game engines, ideally Unreal Engine. It would be helpful if you have already worked with VR in Unreal Engine (e.g., you have taken the course "Virtual Reality and Physically-Based Simulation").
Contact:
Maximilian Kaluschke,
mxkl at uni-bremen dot de
Prof. Dr. G. Zachmann,
zach at informatik.uni-bremen.de
Master thesis: Factors influencing correct perception of spatial relationships in VR
Subject

Learning the human anatomy plays an important role in any surgeon's education. Patients' well-being depends to a significant degree on the surgeon's good understanding of the spatial relationships between all the structures in the human body, such as organs, blood vessels, nerves, etc. The research question in this thesis is: how much better do people (e.g., medical students) learn those spatial relationships between different structures of the human body when they learn them using virtual reality, as opposed to learning them from 2D books?
Your task
In this thesis, you will build upon an existing application that was implemented on top of the Unreal game engine. This application already contains a lot of anatomy and several features to interact with the 3D geometry.
- Design an experiment, based on the virtual anatomy atlas, for investigating the research question stated above (which organs are best suited? what are good evaluation criteria?).
- Investigate the accuracy of users' spatial perception in the anatomy atlas with a user study.
- Perform a statistical analysis of the gathered data.
Prerequisites
This thesis does not require excellent programming skills. You will need considerable knowledge of statistics (which you can, of course, learn during your thesis). In any case, it would be helpful if you had some experience with the Unreal Engine. Like any other thesis, it will require a lot of literature research. Participation in our VR course will provide a good basis for understanding virtual reality as a whole.
Contact:
Prof. Dr. G. Zachmann, zach at informatik.uni-bremen.de
Master thesis: Influence of Self-Shadows on Presence in immersive virtual worlds
Subject
Creating a sense of presence in immersive virtual worlds has been a research topic for a long time. Fast hardware and software response times, exact tracking of your own position, and high resolutions are some of the many known requirements for a basic sense of presence when using head mounted displays (HMD).
In this master's thesis, another, possibly more subtle, factor influencing the feeling of presence is to be examined: shadows of oneself rendered in virtual worlds when using HMDs. The questions are whether such self-shadows increase the sense of presence, and how detailed and realistic such shadows have to be in order to have an effect.
In order to render shadows of oneself in a virtual world that is perceived via an HMD, one could proceed as follows: one or more Kinects capture a depth image in real time, from which a point cloud is generated. The generated point cloud is then used to render the user's shadow cast by virtual light sources in the virtual world.
Task / Challenges:
- Implementation of one or more methods to render shadows of the user in a virtual world.
- Planning and carrying out a suitable study to answer the questions as to whether and how much shadows can increase the sense of presence.
Requirements:
- Algorithmic thinking
- Experience in C++
Helpful:
- Knowledge/experience in shadow rendering techniques (e.g. shadow maps, shadow volumes)
- Experience in a game engine that enables the implementation of such a shadow and supports the use of HMDs.
- Experience in working with HMDs and Kinects.
Contact:
Andre Mühlenbrock, muehlenb at uni-bremen.de
Prof. Dr. Gabriel Zachmann,
zach at informatik.uni-bremen.de
Master (Bachelor) thesis: Depth Perception in VR
Subject

The goal of this thesis is to investigate the (distorted) depth perception that is commonly observed in VR. There are a variety of so-called depth cues, i.e., sources of information about the spatial relations of the objects in the environment, which are used by the human visual system to deduce depth. This includes visual monocular depth cues (e.g., occlusion, relative size), oculomotor depth cues (e.g., convergence, accommodation), and binocular depth cues (in particular, disparity). Unfortunately, there are frequent reports of underestimation of distances in virtual environments. There are many potential reasons for this effect, including hardware errors, software errors, and errors of human perception. The difference between the images in the left and the right eye is called binocular disparity, and it is considered to be the strongest depth cue in personal space. Using random-dot patterns, it was observed that it is possible to perceive depth with no other depth cue than disparity. However, the actual influence of disparity on depth perception in VR is still unknown, and so is a possible algorithm to adjust the disparity in software in order to correct the depth perception. Such an automatic correction algorithm could be a game changer for many applications using VR.
Your task
The goal of this thesis is to take at least a first step towards investigating the influence of disparity on depth perception in VR. Our idea is to design a user study where the distance between the eyes is changed in VR (which is pretty straightforward by simply adjusting the virtual cameras) and to compare the results to depth perception in the real world, also with a changed disparity. To do that, we have a set of hyperscopes; these are glasses that use mirrors to change the disparity. A real challenge is the design of an experiment that avoids other depth cues that could influence the results.
Prerequisites
This thesis does not require exceptional programming skills. Nevertheless, it would be helpful if you have a little experience with the Unreal Engine, to set up a scene and change the disparity of the virtual cameras. This thesis mainly requires a lot of literature research on human depth perception, but also on the design of good user studies. Moreover, some knowledge of statistics could be helpful for the analysis of the results. Participating in our VR course can be a good starting point.
Contact:
Rene Weller, weller at informatik.uni-bremen.de
Prof. Dr. Gabriel Zachmann,
zach at informatik.uni-bremen.de
Master thesis: Semantic Segmentation of Art
Subject
The aim of this project is to develop software for searching and analyzing Rocaille forms, such that it facilitates the comprehension of motives, composition, form-related transfers, and attributions.

The Rocaille can be described as a freely-constructed, mostly asymmetrical base frame consisting of volute-like shapes— i.e. C- and S-shaped scrolls—which is marked by mostly independent and dominant extensions. All objects from the field of architecture, the applied arts, and all picture frames can be designed in this way: by employing a subtle form of the Rocaille to accentuate individual aspects, using ornamental exaggeration or constructing objects by means of the Rocaille itself.
Methods developed in this thesis could, for instance, be used in art historical databases such as Prometheus or applied to the ever-growing offers on the internet. They will help to answer questions critical to art history such as the relationship between the analysis of form and meaning.
Tasks/Challenges
The goal is to develop algorithms that can learn the appearance, forms, and composition of the different parts of Rocaille art.
- Detect and segment major forms of the Rocaille
- Establish requirements to extend the available training dataset for efficient training
- Apply state-of-the-art machine learning methods, such as convolutional neural networks, in the field of art history (a minimal fine-tuning sketch follows below)
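As a hedged illustration of the last point (the class count, image size, and dummy data below are placeholders, and the particular torchvision model is only one possible choice), a pretrained semantic-segmentation network could be fine-tuned on annotated Rocaille images roughly like this:

    import torch
    import torch.nn as nn
    from torchvision.models.segmentation import deeplabv3_resnet50

    NUM_CLASSES = 4  # placeholder: e.g. background plus a few Rocaille form categories

    # Pretrained backbone (downloads weights), new segmentation head for our classes.
    model = deeplabv3_resnet50(weights="DEFAULT")
    model.classifier[4] = nn.Conv2d(256, NUM_CLASSES, kernel_size=1)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    # Dummy batch standing in for (image, per-pixel label) pairs from the dataset.
    images = torch.randn(2, 3, 256, 256)
    masks = torch.randint(0, NUM_CLASSES, (2, 256, 256))

    model.train()
    out = model(images)["out"]   # shape: (batch, NUM_CLASSES, H, W)
    loss = loss_fn(out, masks)
    loss.backward()
    optimizer.step()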
Prerequisites
- Experience in Python
- Nice-to-have: knowledge in machine learning, interest in art history
Contact:
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de
Master thesis: Radiotherapy optimization
Subject
In radiotherapy, tumors or other unhealthy tissue are irradiated by a beam (or several beams) of high-energy radiation. If the irradiated energy is large enough, then the unhealthy tissue is "killed". Of course, there is a challenge: the beams should hit all of the unhealthy tissue, but only that; they should leave the healthy tissue intact.

The beams are usually generated by linear accelerators, and the cross section of the beams can be shaped by multi-leaf collimators (think "frustum through arbitrarily shaped window").
In addition, it is possible to overlap several beams coming in from different angles, where the goal is to make the shape of the intersection volume as close to the treatment volume as possible. Then, it is easier to adjust the energy of the beams such that the sum of the energies in the intersection volume reaches the level where it can kill the unhealthy tissue, while the energy elsewhere stays below the threshold where it would harm the healthy tissue.

A further challenge arises from the characteristics of proton beams (which are usually used in this kind of therapy): they lose energy as they enter the tissue, but the energy loss does not depend linearly on the penetration depth ("Bragg peak"), and they spread out as they go deeper.

Tasks/Challenges
The goal is to develop algorithms that can compute the optimal positions and energy levels of the proton beams, given a specific target volume (tumor), healthy tissue, and bones in the form of a CT or MRI volume.
- Understand the essential characteristics of the beams, the collimators, and the tissue
- Obtain and understand suitable volume data for later testing from the TCAI
- Probably investigate first inside-out (polygonal) rendering similar to the approach we have followed here
- Investigate a ray-tracing approach (inside-out); the idea is to shoot rays from the target volume outwards, taking scattering and dissipation effects into account (a toy ray-marching sketch is shown after this list)
- If the ray-tracing approach is feasible, then you should investigate the potential of the new RTX graphics cards, which provide hardware support for ray tracing
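As a toy illustration of the inside-out idea mentioned in the list above (real dose calculation would additionally model scattering and the Bragg peak; the volume here is just dummy data), marching a ray from a point inside the target volume outwards through a CT voxel grid could look like this:

    import numpy as np

    def march_ray(volume, start, direction, step=0.5, max_steps=1000):
        # Accumulate voxel values along a ray that starts inside the target volume
        # and leaves the grid; "volume" is a 3D numpy array standing in for CT data.
        direction = np.asarray(direction, dtype=float)
        direction /= np.linalg.norm(direction)
        pos = np.asarray(start, dtype=float)
        accumulated = 0.0
        for _ in range(max_steps):
            i, j, k = np.floor(pos).astype(int)
            if not (0 <= i < volume.shape[0] and
                    0 <= j < volume.shape[1] and
                    0 <= k < volume.shape[2]):
                break  # the ray has left the volume
            accumulated += volume[i, j, k] * step
            pos += step * direction
        return accumulated

    # Toy example: a 64^3 volume of ones, one ray from the center along +x.
    ct = np.ones((64, 64, 64))
    print(march_ray(ct, start=(32.0, 32.0, 32.0), direction=(1.0, 0.0, 0.0)))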
Prerequisites
- Algorithmic thinking
- Experience in C++
- Nice-to-have: knowledge in computer graphics, medical imaging, or geometric computing
Contact:
Thomas Hudcovic,
hudo at uni-bremen dot de
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de
Thesis Project in the Area of Virtual Reality Methods for Driving Simulation
Subject
The BMW Group offers you a thesis project at its center for driving simulation, research, new technologies, and innovation. As part of your work, you will join us in the further development of virtual reality (VR) methods in driving simulation, which, among other things, ensure a more immersive driving and interior experience. Furthermore, you will be involved in developing a technology that enables the evaluation of vehicle interior concepts by means of VR in a real vehicle. Collaboration with the internal specialist groups rounds off your tasks.
Requirements
- Studies in media informatics, digital media, computer science, or a comparable degree program.
- Sound knowledge in the area of virtual reality.
- Advanced knowledge of graphics and multimedia applications.
- Experience with game engines, visualization software, motion tracking, and head-mounted displays.
- Advanced programming skills in C, C#, and Python.
- Confident use of MS Office.
- Business-fluent German and English skills.
- Team and communication skills.
Contact
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de
Here you have the opportunity to apply directly online:
www.karriere.bmwgroup.de
Internship: VR Method Development
Your Tasks
Volkswagen is one of the largest automobile manufacturers in the world - become part of it!
We bring innovative ideas to production readiness so that everyone can benefit from them. Efficient and sustainable technologies characterize not only our products but also the process by which they are created. And because every Volkswagen is only as good as the people behind it, we offer every single employee optimal development prospects. If you want to shape the automotive future together with us, come on board.
For our young, dedicated team in the Visual Communication department, we are looking for student support for a period of at least 5 months.
Our work focuses on the production of images and films for communicating technical subject matter.
Do you enjoy film and know your way around game engines? Join us and work out, based on a concrete example, how to realize an animation using the game engine Unity. In addition to the technical challenges, you will address questions such as:
- How much interaction, and what kind, makes sense in an animation?
- Does interaction in a film lead to a better understanding, and when is it perceived as disruptive?
- Which degrees of freedom can be granted to a viewer of the animation, and how exactly do these have to be restricted?
- Which advantages/disadvantages do viewer and producer experience?
Your Qualifications
- Studies in a design-oriented degree program
- Completed basic studies (Grundstudium)
- Very good design and communication skills
- Good knowledge of Photoshop, Maya, Unity, Fusion, AfterEffects, or comparable tools
- Ability to work in a team, commitment, quick comprehension, initiative
Further Information
This position is to be filled at Volkswagen AG in Wolfsburg.
Possible starting date: as soon as possible
Reference code: E-1676/2018
Your questions will be answered by Ms. Jana Juenemann,
phone +49-5361-9-14185

Virtual 3D Simulation of Coral Reefs
Subject
The Leibniz-Zentrum für Marine Tropenökologie (ZMT) conducts research on coastal ecosystems in the tropics
and their response to changes in their environment. Based on real data on the interaction of
corals and their response to environmental changes, an abstract simulation model
of a coral reef has been developed at the ZMT, which shows how the reef develops
under the influence of various stress factors. For better illustration, a three-dimensional virtual
environment (possibly also immersive) is to be created, based on the existing knowledge of the processes
in coral reefs, with which the development of the reef can be influenced interactively and followed in a way that is close to nature.
Possible Tasks
- Programming 3D simulations of selected reef organisms, such as different coral species and reef fish. In particular, the rules and algorithms of the ZMT describing how these organisms change under the influence of environmental stressors are to be implemented.
- Development of algorithms for displaying changes of selected 3D-simulated reef organisms.
- Integration of various selected reef organisms into a common virtual environment.
- Development and implementation of interaction metaphors that allow, e.g., exhibition visitors to interact with the virtual environment very easily and intuitively (e.g., navigation) and to change environmental factors (e.g., temperature).
Requirements
- Knowledge of modeling 3D objects with Maya or 3DS Max.
- Experience with game engines (e.g., Ogre3D, Unity, CryEngine) and programming in these APIs or frameworks.
- Programming experience in C++ or a game engine scripting language
- Knowledge of computer graphics
Contact
Depending on the focus and degree program, supervision will mainly be provided
by the Computer Graphics and Virtual Reality group or by the ZMT.
Prof. G. Zachmann, zach at informatik.uni-bremen.de, phone 63991, Bibliothekstraße 5, 3rd floor, MZH 3460
PD Dr. Hauke Reuter
Leibniz-Zentrum für Marine Tropenökologie GmbH Bremen
Email: Hauke.reuter at zmt-bremen.de
Direct Animation for Robot Path Planning
Subject
In many production processes, work steps are carried out by robots.
The machines sometimes perform complex motion sequences that are not unlike
the way humans work with their hands. After all, humans possess
a highly developed cognitive-motor system for manipulating
their environment. This motivates an approach in robotics of taking human activities
as a template for planning robot control, i.e., having robots
"learn" motion sequences from humans.
A rarely pursued approach so far is to use innovative techniques for creating animations
(direct animation) for robot path planning. For example, humans could
use touch or 3D input devices to control the motion of real or virtual robot arms
interactively, like an articulated puppet (digital puppetry). If this is recorded,
the motion can be played back and can be extended and refined by further control
in additional layers or passes (layered animation). Using intuitive
methods such as Dragimation, the timing of these motions can be adjusted
easily and quickly.
The Digital Media group and the Computer Graphics and Virtual Reality group
are looking for thesis candidates who are interested in working in this area.
The goal is to develop and evaluate interaction techniques for controlling robot simulations.
Concepts and techniques of direct animation are to serve as a model. Specifically,
several approaches are to be developed, integrated into existing systems, and examined
for their suitability for robot path planning. Topics such as motion capture play a role here,
as do aspects of computer simulation (e.g., collision detection).
Requirements
- Basic knowledge of computer graphics
- Interest in human-computer interaction and evaluation
- Programming knowledge and experience (preferably in C/C++)
Contact
Prof. G. Zachmann, zach at informatik.uni-bremen.de, phone 63991, Bibliothekstraße 5, 3rd floor, MZH 3460
B. Walther-Franks
Digital Media group (AG Digitale Medien)
Room 5320, MZH, Bibliothekstr. 1
Office hours by appointment
Email: bw at tzi.de
Thesis or Internship at KUKA

Subject
KUKA AG sets standards in robot technology worldwide. We are a globally expanding company in the field of robotics and automation technology.
For research work in the areas of
- environment modeling with sensors
- distance computation
- collision-free path planning
KUKA Laboratories GmbH in Augsburg is looking for committed STEM students with
- good programming skills (C++, Java)
- knowledge of 3D computer graphics and/or virtual reality
- the ability to familiarize themselves quickly with new topics
- conceptual thinking
- an independent working style and initiative
- communication and team skills
- creativity and an interest in robotics
In cooperation with Prof. Dr. Zachmann, chair of Computer Graphics and Virtual Reality,
paid internships and thesis projects in exciting fields of work await you.
More information is available from the chair (cgvr.cs.uni-bremen.de) and online at:
www.kuka.jobs
Contact
Prof. G. Zachmann, zach at informatik.uni-bremen.de, phone 63991, Bibliothekstraße 5, 3rd floor, MZH 3460
Collision Detection
Subject

Collision detection is generally about determining whether two graphical objects touch or even interpenetrate each other. Usually, these objects are given in the form of polygons (so-called "polygon soups"), but other representations are at least as interesting.
Collision detection is an important basic technology for many areas of computer graphics, e.g., animation, physically-based simulation, interaction in virtual environments, robotics, virtual prototyping, etc.
The hierarchical algorithms that are common today seem to be reaching a limit. Therefore, we are looking for novel algorithms in other directions.
Another large and still largely unexplored area is algorithms for collision detection between deformable objects. These are, of course, a prerequisite for the simulation of clothes, parts made of plastic or rubber, etc.
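For orientation, here is a minimal sketch (not one of the novel algorithms we are looking for): the classic building block of hierarchical collision detection is a cheap overlap test between bounding volumes, for example axis-aligned bounding boxes; a bounding volume hierarchy applies such a test recursively and skips all polygons below two non-overlapping nodes.

    import numpy as np

    def aabbs_overlap(min_a, max_a, min_b, max_b):
        # Two axis-aligned boxes overlap iff their intervals overlap on every axis.
        return bool(np.all(min_a <= max_b) and np.all(min_b <= max_a))

    # Toy example: two unit boxes, the second shifted by 0.5 along x -> they overlap.
    print(aabbs_overlap(np.zeros(3), np.ones(3),
                        np.array([0.5, 0.0, 0.0]), np.array([1.5, 1.0, 1.0])))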
Requirements
Basic knowledge of computer graphics and linear algebra, C/C++.
"Nice-to-have" would be Unix/Linux, but that can be picked up quickly.
Contact
Prof. G. Zachmann, zach at informatik.uni-bremen.de, phone 63991, Bibliothekstraße 5, 3rd floor, MZH 3460
Natural Interaction in VR
Subject



In the area of virtual reality, we are primarily interested in intuitive and natural interaction. The long-term goal is to make interaction with virtual environments as natural as our everyday interaction with the real world.
The hand in particular (more precisely: the virtual hand) has been neglected so far, even though it is actually our most important "tool". Therefore, we offer various topics around this theme.
One goal is, for example, "natural grasping". For this, on the one hand, a realistic, deformable hand has to be modeled; on the other hand, the grasping of an object itself has to be simulated.
Requirements
Basic knowledge of computer graphics and linear algebra, C/C++.
"Nice-to-have" would be Unix/Linux, but that can be picked up quickly.
Contact
Prof. G. Zachmann, zach at informatik.uni-bremen.de, phone 63991, Bibliothekstraße 5, 3rd floor, MZH 3460