Theses
On this page, you can find a number of
topics
for which we are looking for students who are interested
in working on them as part of their thesis
(bachelor's or master's).
The
list
is sorted in reverse chronological order;
that is, the further down a topic is in the list,
the more likely it is already taken (or no longer relevant to us).
However, this list is by no means exhaustive!
In fact, we always have many more topics available.
So, please also check out our research projects; in all projects,
there are lots of opportunities for doing a thesis.
In addition, there are quite a few "free-floating"
topics, which are neither listed here
nor connected with research projects;
those are ideas we would like to try out or get familiar with.
If you are interested in one of the topics, please send me (or the respective contact person) an email with your transcript of records and 1-2 sentences of motivation.
If you would like to talk to us about thesis topics,
just make an appointment with one of the project members
or researchers of my group.
You can also come to my office hours (Mondays, 6 pm - 8 pm,
no appointment needed).
Please make sure to send me or the researchers your
transcript of records.
Ethics
Unlike 20 years ago, a lot of computer science research can and will have a huge impact on our society and the way we live. That impact can be good, but today, our research could also have a considerable negative impact.
I encourage you to consider the potential impact, both good and bad, of your work. If there is a negative impact, I also encourage you to try to think about ways to mitigate that.
As a matter of course, I expect you to follow ACM's
Code of Ethics and Professional Conduct.
I think we all should go a step further
and change the scientific peer-reviewing system,
not only for paper submissions but also for grant proposal submissions,
before we start a thesis, a new product development, etc.
Here is an interview with Brent Hecht,
who has a point with his radical proposal, I think.
This article (in German) explains quite well, I think,
how
agile software development can include ethical considerations
("Ethik in der agilen Software-Entwicklung", August 2021, Informatik Spektrum der Gesellschaft für Informatik).
Doing Your Thesis Abroad
If you are interested in doing your thesis abroad, please
talk to us; we might be able to help with establishing a contact.
You also might want to look for financial aid,
such as this DAAD stipend.
Doing Your Thesis with a Company
If you are interested in doing your thesis at a company, we might be able to help establish a contact, for instance, with Kizmo, Kuka (robot developer), Icido (VR software), Volkswagen (VR), Dassault Systèmes 3DEXCITE (rendering and visualization), ARRI (camera systems), Maxon (maker of Cinema4D), etc.
Doing Your Thesis in the Context of a Research Project
We always have a number of research projects going on, and in the context of those, there are always a number of topics for potential master's or bachelor's theses. If you are interested in such an "embedded" thesis topic, please pick one of those research projects, then talk to the contact given there or talk to me.
Formalities
If you feel comfortable with writing in English, I encourage
you to write your thesis in English.
(Or, if you want to become more fluent in English writing.)
I recommend writing your thesis using LaTeX!
There are no typographic requirements regarding your thesis:
just make it comfortable to read; I suggest you put some effort
into making it typographically pleasing.
A good starting point is the
Classic Thesis Template
by André Miede.
(Archived Version 4.6)
But feel free to use some other style.
Regarding the structure of your thesis, just look at some of the examples in our collection of finished theses.
Referencing / citation: with the natbib LaTeX package, this
should be relatively straightforward; just pick one of the predefined
citation/referencing styles.
If you are interested in variants,
here is the Ultimate Citation Cheat Sheet
that contains examples of the three most prevalent styles.
I suggest following the MLA style.
(Source)
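To give you an idea, a minimal natbib setup could look like the sketch below; the citation style (plainnat) and the key "example2024" are placeholders only, pick whatever predefined style you prefer.

```latex
% Minimal natbib example; 'plainnat' and the key 'example2024' are placeholders.
\documentclass{article}
\usepackage[round]{natbib}   % use [numbers] instead for numeric citations
\begin{document}
\citet{example2024} introduced the method; it was later extended \citep{example2024}.
\bibliographystyle{plainnat}
\bibliography{references}    % expects a references.bib containing the entry 'example2024'
\end{document}
```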
Recommendations While Doing the Actual Work
- When you start doing your thesis, keep kind of a diary or log book,
where you record your ideas, and keep track of what you have done.
Examples:
- Max's notes on his tablet (thanks Max!)
- My notes as a stack of papers in a folder when I did my master's thesis (called "Diplomarbeit" at the time)
- Lab notebooks by Hahn and Bell
- Lab notebooks by other famous people: Leonardo da Vinci, Graham Bell, Thomas Edison 1, 2, 3. (Source)
- Good Laboratory Notebook Practices (Source)
- Have your laptop/computer make a backup every day automatically! (I just narrowly escaped a total disaster! One week after I had left the research institution where I did my thesis, the hard disk of the big machine (SGI Onyx) that contained all my data crashed completely! And that was one week before my deadline!)
Recommendations for Writing Up
- Write in active voice, not passive voice, whenever you describe what you have done or when you have made choices. This is also recommended in the APA style and you can read more about when to use active voice and when to use passive voice.
- Whenever you describe methods, algorithms, or software that others have developed, say so, i.e., "give credit where credit is due". (There is nothing wrong with using ideas, software, etc., from others, so long as you give credit.)
- Motivate your decision and choices. You can do so by reasoning, by citing previous work, by making experiments, etc.
- Evaluate your algorithms and methods. If you have developed an algorithm, the evaluation consists of experiments about its performance (quantitative and/or qualitative); ideally, you can also make a theoretical analysis using big-O calculus. If you have developed a user interaction method, the evaluation consists of user studies.
- When you describe the prior work in Section 2 of your thesis (a.k.a. state-of-the-art), also try to assess their good features and their limitations. (Usually, one sentence is enough.)
- In your chapter "Conclusions", try to summarize what you have done, describe for which cases your new method performs well and by what factor it performs better than the state-of-the-art; also, describe the limitations of your new method.
- When you describe your algorithms, please use pseudo-code (and equations, if there are any). Never use Blueprints or flow graphs. Real code (and blueprints) goes into an appendix. If you want to look at some good examples, you can look at the following theses: Hermann Meißenhelter's, Roland Fischer's, or my own. (This list is, of course, by no means exhaustive!)
- Look at some of the examples on our Finished Theses page.
- Before you turn in your thesis, ask your advisor to have a quick look at it.
- When you turn in your thesis, please send me a PDF via email.
Guidelines for the Type(s) of Chart to Use in Your Thesis
At some point in your work, you will probably generate some charts to present your results. Some chart types are better at showing specific facets of the data than others. In the following table, you can find an overview of which chart is useful for communicating which properties of the data [B. Saket, A. Endert, and Ç. Demiralp: "Task-Based Effectiveness of Basic Visualizations", IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 7, pp. 2505–2512, July 2019].
How to use the table: first, pick the purpose of your visualization of your data; for example, let's assume you want to find correlations. So, you go to the "Correlations" row. Next, pick your top criterion; in our example, let's assume you strive to maximize user preference. So, you go to the cell under the "User preference" column. Finally, pick one of the chart types on the left-hand side in that cell (they are ranked by score regarding the respective criterion you picked). In our example, you should probably use the line chart; if that does not fit your purposes (for whatever other reasons), then you probably want to pick the bar chart instead. The arrows symbolize "performs better than" relationships between chart types (inside that cell).
Criteria we Use When Grading Your Thesis
When grading a master's or bachelor's thesis, we use the following criteria:
- Knowledge and skills (what does the student bring to the table?)
- Systematic and scientific approach (is the student able to work scientifically?)
- Initiative, commitment, perseverance (how does the student go about the work? what is their frustration tolerance? do they have their own ideas and pursue them with energy? or do they merely "work to rule"?)
- Quality of the results (what actually came out of the thesis?)
- Presentation of the results (can the student report on their work precisely and comprehensibly? this concerns both the written thesis and the talk)
Recommendations for Your Presentation During Your Defense (Colloquium)
- Length: for master's theses, your talk should not exceed 20 minutes; for bachelor's theses, you should err towards 15 minutes.
- You can omit the slide "Structure of the talk" (contrary to what you probably learnt). Reason: the structure is always the same, i.e., motivation, problem/task, related/prior work, concept/architecture/algorithms, implementation, evaluation, conclusions, future work.
- In your introduction, try to motivate your work; to do so, try to answer three questions: 1) what is the "big picture" your work fits into? 2) what is the exact problem you are trying to solve? 3) in which way do existing solutions / scientific works fall short?
- Focus on the "meat", i.e., your algorithms, your user study, or your software architecture; basically, any- and everything that is hard computer science.
- Towards the end, show plots, show pictures, show videos.
- Draw conclusions: what is now possible with your novel stuff? Point out limitations that still exist.
- Also good practice: show a video at the end.
- Bad practice: too much text, no diagrams.
- Practice your talk! You can ask friends or your partner to listen, or record yourself. (I know it might hurt, but it is helpful.)
- Don't forget to invite your advisor/supervisor(s) to your defense!
Links
For printing your thesis, you might want to consider
Druck-Deine-Diplomarbeit.
We have heard from other students that they have
had good experiences with them (and I have seen nice examples of their print products).
Also, there is a friendly copy shop,
Haus der Dokumente,
on Wiener Str. 7, right on the campus.
The List
Master Thesis: Investigating Methods for a Semantic Abstraction Schedule of Data to Aid Neural Networks During Training
Subject
Neural networks have proven to be a valuable tool for solving certain problems in a wide variety of application domains. The beginning of each neural network project is the same, however: the acquisition of suitable data and the subsequent training of the model. The training itself is a huge area of research on its own, including optimization techniques for training time and gradient stability. The goal of this thesis is to investigate whether training can be accelerated and/or stabilized with a "semantic abstraction schedule" for data samples in training batches. More specifically, the task will be to investigate what effects the (epoch-dependent) scheduled deblurring of sample images might have on the training of encoder-decoder networks, and whether another form of parameterized semantic abstraction of the sample information exists or is even more suitable.
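To make the idea of a "deblurring schedule" a bit more concrete, here is a minimal PyTorch sketch; the linear schedule, the kernel size, and the placeholder batch are illustrative assumptions, not a prescribed design.

```python
# Sketch of an epoch-dependent deblurring schedule: early epochs see strongly
# blurred (semantically abstracted) images, later epochs the original ones.
import torch
from torchvision.transforms import functional as TF

def blur_sigma(epoch, num_epochs, sigma_max=4.0):
    """Linearly anneal the blur strength from sigma_max down to zero."""
    return sigma_max * max(0.0, 1.0 - epoch / (0.75 * num_epochs))

def abstract_batch(images, epoch, num_epochs):
    """Apply the scheduled Gaussian blur to a batch of images (N, C, H, W)."""
    sigma = blur_sigma(epoch, num_epochs)
    if sigma < 1e-3:
        return images
    return TF.gaussian_blur(images, kernel_size=9, sigma=sigma)

# Stand-in for a real training batch from a DataLoader:
fake_batch = torch.rand(8, 3, 128, 128)
for epoch in (0, 10, 40):
    out = abstract_batch(fake_batch, epoch, num_epochs=50)
    print(epoch, out.std().item())  # the blur fades out, so detail (variance) returns
```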
Your Tasks/Challenges:
- Develop and investigate methods to get visual abstractions of images, without damaging information essential for the network, using PyTorch
- Investigate how and if such methods can act as a tool for regularization
- Investigate if there is another way of achieving the same effect by "acting" on the network itself, rather than on the data. Maybe there is a way of adapting EMA (Exponential Moving Average) models to achieve a similar effect during training?
- Test and evaluate the methods on different encoder-decoder networks or classifiers
Requirements:
- Good understanding of machine learning and deep learning concepts, familiarity with basic classifier and encoder-decoder neural networks
- Intermediate proficiency in programming and familiarity with one of the popular deep learning frameworks (PyTorch is a plus)
- Understanding of optimization problems (the training of neural networks is an optimization problem) and their rudimentary structure
- Familiarity with GPGPU-programming and basics of CUDA are a plus
Contact:
Thomas Hudcovic,
hudo at uni-bremen dot de
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de
Thesis: Development of a Website for Full-Text, Fuzzy and VLM-aided Semantic Search for Art History Data
Subject
Even though the World Wide Web has become ubiquitous, and websites and the development thereof have been around for more than 30 years, it is a surprisingly non-trivial problem to get websites "right". Web stacks have become so bloated that even high-level front-end frameworks, like React, have even more frameworks built on top of them. Is this really how it is supposed to be? Develop a simple full-stack website/webapp for searching through a curated corpus of art-historical data, allowing for full-text, fuzzy, and semantic search (via image or text queries), while keeping it maintainable and easy to understand.
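To illustrate the full-text and fuzzy search part, here is a rough sketch that queries Postgres from Python; the table and column names, the connection string, and the choice of psycopg2 are assumptions for illustration only, and the pg_trgm extension must be enabled in the database.

```python
# Sketch: Postgres full-text (tsvector/tsquery) and fuzzy (pg_trgm) search.
# The table 'artworks' with columns 'id', 'title', 'description' is hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=art user=curator")   # placeholder connection string
cur = conn.cursor()
query = "woodcut portrait"

# Full-text search with relevance ranking
cur.execute("""
    SELECT id, ts_rank(to_tsvector('english', description),
                       plainto_tsquery('english', %s)) AS rank
    FROM artworks
    WHERE to_tsvector('english', description) @@ plainto_tsquery('english', %s)
    ORDER BY rank DESC LIMIT 20;
""", (query, query))
print(cur.fetchall())

# Fuzzy (typo-tolerant) search via trigram similarity; '%%' escapes the
# pg_trgm '%' operator inside a psycopg2 query string
cur.execute("""
    SELECT id, similarity(title, %s) AS sim
    FROM artworks
    WHERE title %% %s
    ORDER BY sim DESC LIMIT 20;
""", (query, query))
print(cur.fetchall())
```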
Your Tasks/Challenges:
- Develop a website with which a user can do full-text and fuzzy search on images and the corresponding archival and metadata stored in a Postgres database, and investigate whether semantic search can be achieved using a local vision language model (VLM), aided by multi-modal retrieval-augmented generation (RAG)
- Work together with art historians, who curate the data, and conduct a user test for the website
- Investigate whether it is feasible to keep the whole development stack lean and forgo "bloated" frameworks like React or Vue altogether, by using HTMX and a dedicated CSS library (e.g., Pico CSS, Tailwind CSS) for the front-end, with a back-end language of your choice
Requirements:
- Strong knowledge of full-stack web development
- Solid understanding (or the willingness to learn) of (vision) language models and (multi-modal) retrieval augmented generation
- Familiarity with Go and HTMX (or the willingness to learn) is a huge plus
- Familiarity with concepts used in full-text and fuzzy search (e.g. stemming, lexeme-matching, n-grams, ...)
Contact:
Thomas Hudcovic,
hudo at uni-bremen dot de
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de
Master Thesis: Comparative Study of Dynamic Point Cloud Rendering in VR
Objective:
This thesis aims to compare modern rendering techniques for dynamic point clouds in virtual reality (VR) and on traditional displays. Dynamic point clouds, often utilized in telepresence applications, are captured using multiple depth cameras like the Azure Kinect. These applications require high-quality and efficient methods for rendering continuous surfaces from captured data. While various advanced rendering techniques have been developed for real-time high-quality visualization, there is a need to evaluate and compare these methods within a VR context.
Research Tasks:
The candidate will focus on the development and application of a rendering bridge for Unreal Engine, designed to facilitate the use of various rendering techniques—irrespective of their native implementation in OpenGL, Vulkan, or DirectX. The tasks will include:
- Further development of our Unreal Engine rendering bridge.
- Designing and conducting a comparative study of different rendering techniques, including Splat Rendering, Separate Meshes, Truncated Signed Distance Fields (TSDF) combined with Marching Cubes, Pointersect, P2ENet, and possibly Fusion4D.
Technical Approach:
The research will involve integrating and optimizing rendering techniques in VR environments, leveraging the capabilities of Unreal Engine and the newly developed rendering bridge. This bridge will allow for flexible experimentation with different graphics APIs and rendering methods to determine which provides the best performance and visual fidelity for VR applications. Note that the rendering techniques mentioned above are already almost all implemented in OpenGL / CUDA.
Required Skills:
- Strong proficiency in C++ and basic knowledge of OpenGL.
- Extensive experience with Unreal Engine.
- Experience in designing and conducting research studies.
Contact:
Andre Mühlenbrock, muehlenb at uni-bremen.de
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de
Master Thesis: Exploring the Impact of Self-Shadows in Multi-User VR Applications
Objective:
This thesis will investigate the influence of shadows in both single-user and multi-user virtual reality (VR) settings. Current VR systems, constrained by limited tracking to only the user's head and hands, often fail to render self-shadows accurately. The goal is to determine how the presence of realistic shadows affects user interaction and perception in VR, examining both beneficial outcomes (like enhanced depth perception and interaction accuracy) and potential drawbacks.
Research Tasks:
The candidate will design experiments to test the effects of self-shadows on user performance in various tasks within VR environments and implement them in a game engine. Additionally, the research will explore the psychological impact of shadows in multi-user scenarios (perhaps a little in the style of "Peter Schlemihl's Wondrous Tale").
Technical Approach:
To facilitate this research, a VR application will be developed where an invisible 3D avatar, replicating the participant's movements including hands and feet, will cast a shadow. This will involve: employing a game engine (such as Unreal Engine) to handle shadow rendering; utilizing Vive trackers for precise tracking of the body's joints, animating the avatar through inverse kinematics; and optionally using motion capture systems like OptiTrack for detailed movement replication, or depth sensors (e.g., Microsoft Azure Kinect) to reconstruct physical interactions more accurately.
Required Skills:
- Design studies / structured experiments.
- Proficiency with game engines like Unreal Engine.
- Basic knowledge in animation techniques or point cloud processing.
- Strong programming skills.
Contact:
Andre Mühlenbrock, muehlenb at uni-bremen.de
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de
Master Thesis: Immersive Visualization in VR
Subject
Immersive visualization tries to combine scientific visualization (e.g., visualization of weather simulations) with immersive technologies (e.g., VR headsets and 3D interaction). In this thesis, you are to implement visualization techniques on top of the game engine Unreal Engine 5, such that standard Excel or CSV tables can be read by your system and visualized in VR. Also, some standard 3D interaction techniques are to be implemented, such as navigating around the data, selecting regions of the data, etc. Depending on availability, we would like to visualize data on the spreading of the recent pandemic and data from specific ocean measurements.
Your Tasks/Challenges:
- Get comfortable with Unreal Engine 5, immersive visualization techniques, and data sources
- Implement existing immersive visualization techniques in Unreal
- Develop techniques to deal with temporal data in VR (e.g., define time slices)
Requirements:
- C++
- Good ability to work self-driven.
Contact:
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de
Thesis: Massive 3D Container Packing
Subject
Packing problems occur in many different forms and in many real-world applications. In this project, we consider a variable set of objects and a single arbitrary container in 3D space. AutoPacking, a software package that packs arbitrary 3D objects into a 3D container, is already available. This thesis aims to run and evaluate packings with many objects. While working on this project, it's important to be prepared for potential challenges; these could include issues such as high memory consumption or rendering problems, which might require solutions like a better data format or a converter.
Your Tasks/Challenges:
- Get comfortable with AutoPacking
- Pack a large number of objects
- Visualize the results nicely
- Master's thesis: we can discuss further tasks
Requirements:
- C++
- Good ability to work self-driven.
Contact:
Hermann Meißenhelter, meissenhelter at uni-bremen.de
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de
Thesis: Machine Learning for Uncertainty Analysis in Rigid Body Dynamics
Subject
In scenarios involving nonlinear functions that adjust uncertain parameters, whether correlated or independent, the predominant approach for propagating uncertainty involves sampling techniques derived from the Monte Carlo method. However, when dealing with large-scale data or costly functions, this error propagation can become prohibitively expensive. Under such circumstances, implementing a surrogate model or leveraging parallel computing strategies may prove essential.
The rigid body is approximated with a set of spheres. Intersecting spheres are used to compute a force. The task would be to learn the resulting force uncertainty (the covariance or its eigendecomposition) for a set of colliding spheres (the input).
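To sketch what the learning part could look like, here is a minimal PyTorch model that maps one pair of colliding spheres to a predicted 3x3 force covariance; the input layout, the layer sizes, and the Cholesky parameterization (used here to guarantee a valid, positive semi-definite covariance) are assumptions for illustration, not the required design.

```python
# Sketch: MLP that predicts a force covariance via a Cholesky factor L (output = L L^T).
import torch
import torch.nn as nn

class CovarianceMLP(nn.Module):
    def __init__(self, in_dim=8, hidden=128):
        # in_dim: e.g. two sphere centers (2*3 values) plus two radii (2*1 values)
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 6),   # 6 entries of a 3x3 lower-triangular factor
        )

    def forward(self, x):
        entries = self.net(x)
        L = torch.zeros(x.shape[0], 3, 3, device=x.device)
        tril = torch.tril_indices(3, 3)
        L[:, tril[0], tril[1]] = entries
        diag = torch.arange(3)
        # softplus keeps the diagonal positive, so L is a valid Cholesky factor
        L[:, diag, diag] = torch.nn.functional.softplus(L[:, diag, diag])
        return L @ L.transpose(1, 2)   # predicted covariance, guaranteed PSD

model = CovarianceMLP()
fake_batch = torch.randn(16, 8)    # placeholder for sampled sphere configurations
print(model(fake_batch).shape)     # torch.Size([16, 3, 3])
```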
Your Tasks/Challenges:
- Generate a large enough dataset
- Train a network: MLP, KAN (Kolmogorov Arnold Network), BNN (Bayesian Neural Network) or Gaussian Process?
- Comparison to sampling (ground truth)
Requirements:
- C++, Math, Machine Learning (PyTorch)
- Good ability to work self-driven.
Contact:
Hermann Meißenhelter, meissenhelter at uni-bremen.de
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de
Thesis: Probabilistic Collision Detection
Subject
In many applications, such as robotics, virtual environments, and dynamic simulation, obtaining exact representations of objects is often difficult. Instead, the representations of objects are described using probability distribution functions. This is because the data from the environment is captured using sensors, and only partial observations are available. Additionally, the primitives captured or extracted using sensors tend to be noisy. In this scenario, the objective is to calculate the probability of collision between two or more objects when the representations of one or more objects (such as positions, orientations, etc.) are expressed in terms of probability distributions.
A significant part has already been done: we use sphere packings to approximate the objects and compute the collision probability between two spheres (isotropic Gaussians) analytically with a tight upper bound (also for rigid bodies). However, we are still missing a comparison/benchmark between different methods.
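To give an idea of the sampling-based ground truth, here is a minimal Monte Carlo sketch for two spheres whose centers are drawn from isotropic Gaussians; all radii, means, and standard deviations are made-up values.

```python
# Sketch: Monte Carlo estimate of the collision probability of two uncertain spheres.
import numpy as np

rng = np.random.default_rng(42)

def collision_probability(mu_a, mu_b, sigma_a, sigma_b, r_a, r_b, n=1_000_000):
    """Fraction of sampled center pairs that are closer than the sum of the radii."""
    centers_a = rng.normal(mu_a, sigma_a, size=(n, 3))
    centers_b = rng.normal(mu_b, sigma_b, size=(n, 3))
    dist = np.linalg.norm(centers_a - centers_b, axis=1)
    return np.mean(dist <= r_a + r_b)

p = collision_probability(mu_a=[0.0, 0.0, 0.0], mu_b=[0.9, 0.0, 0.0],
                          sigma_a=0.1, sigma_b=0.1, r_a=0.4, r_b=0.4)
print(f"estimated collision probability: {p:.4f}")
```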
Your Tasks/Challenges:
- Compute ground truth collision probability by sampling
- Set up objects at given absolute distances apart and measure collision probability and computation time
- Nice to have: Implement some other method(s) for comparison
- Optional: Include rotational uncertainty in collision probability computation
Requirements:
- C++ and some math
- Good ability to work self-driven.
Contact:
Hermann Meißenhelter, meissenhelter at uni-bremen.de
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de
Thesis: Treatment of Reflex Syncopes Using VR
Subject
There has been a lot of research on the treatment of phobias using VR, and the effectiveness has been shown numerous times for a number of concrete phobias (e.g., fear of heights). Reflex syncopes (for instance, a brief loss of consciousness when seeing blood) are different in that they directly affect blood pressure and heart rate.
The question of this thesis is whether such conditions can be treated using VR or AR, perhaps similar to the exposure therapy for phobias.
Your Tasks/Challenges:
This thesis is special because it faces a number of challenges:
- You need to find a clinical expert for syncopes for collaboration;
- You need to find enough participants that exhibit syncopes when exposed to the same type of trigger (e.g., blood);
- Determine whether VR or AR are better suited for a potential treatment;
- You need to implement a high-quality rendition of the trigger that can robustly trigger the syncope.
- Conduct the user study.
On the other hand, this thesis treads on totally uncharted ground, so your reward could be immense (even a scientific paper at a conference, with our help, could be possible).
Requirements:
- Some experience with game engines (either Unreal or Unity should work);
- Good ability to work self-driven.
Contact:
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de
Thesis: Original or fake? WOLS (Alfred Otto Wolfgang Schulze, 1913–1951)
Subject
The artist Alfred Otto Wolfgang Schulze, alias Wols (Berlin 1913 - Paris 1951) is considered one of the most important pioneers of informal art in Europe. He left Germany in 1932 and moved to Paris, where he initially worked mainly as an advertising and portrait photographer. When he was interned in 1939 due to France's entry into the war, he shifted his artistic focus and created numerous watercolours and pen and ink drawings inspired by Surrealism. After his release from the Les Milles camp, he increasingly devoted himself to the graphic depiction of dissolution processes as well as sculptural and linear structures. He created his first completely abstract pictures. When he then created around forty paintings for the René Drouin Gallery in Paris in the mid-1940s, his pictures came to symbolise what was henceforth known in Europe as Art Informel or Tachisme and developed in the USA at the same time as Jackson Pollock under the name Abstract Expressionism.
The Wols Archive, run by the Karin and Uwe Hollweg Foundation and founded by Ewald and Sylvia Rathke, has recorded the majority of WOLS' works to date and will publish a catalogue raisonné of his drawings and watercolours in a few years' time.
Wols' posthumous success had and still has a very negative downside: Wols' works have been forged very frequently since the end of the 1950s. Thanks to the connoisseurship of Wols expert Ewald Rathke, it has been possible to identify the majority of forgeries since the 1970s; however, new genuine and fake drawings and watercolours continue to emerge.
Your Tasks/Challenges:
A program is to be developed which, on the basis of hundreds of genuine and confirmed fake works (all of which are digitally available at the Karin and Uwe Hollweg Foundation), will provide automated assistance in the future identification of genuine and fake works.
Requirements:
Some experience with machine learning, in particular neural networks, and/or random forests, etc. Also, a bit of programming experience with Python. Ideally, a bit of experience with PyTorch or TensorFlow.
Contact:
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de
Bachelor Thesis: Influence of Hand Models on Hand Pose Estimation
Subject
Pose estimation of hands (in combination with objects) is an important foundation for both VR and robotics applications. Methods based on deep learning usually outperform classical methods by a wide margin. However, they require annotated training images. The annotation process for real images of hands is cumbersome and prone to errors. Therefore, synthetic data is an important tool to train pose estimators.
In this thesis, you will measure the influence of hand models of different quality (see image) on hand pose estimation.
Your Tasks/Challenges:
- Use our data generation tools and create training data for different existing hand models
- Use an existing hand pose estimator to investigate the influence of these models
Requirements:
- Experience in Python
Contact:
Janis Roßkamp, j.rosskamp at cs.uni-bremen.de
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de
Thesis: A Hybrid Approach to Pose Estimation of Hands using Deep Learning
Subject
Hand pose estimation is important for both virtual reality applications and motion capture (mocap) for games and movies. Current methods often use RGB images (top image) or marker-based strategies. However, RGB images typically fail to provide high-precision pose estimation, whereas marker-based motion capture requires numerous markers to accurately track the hand, which in turn requires the use of an intrusive glove (bottom image). Moreover, this marker-based approach results in the loss of all hand information, such as shape and color.
In this thesis, you will develop a novel hybrid method that combines the strengths of mocap and RGB images to enhance the accuracy of hand pose estimation. Using only a few markers, e.g., on the fingertips, we can improve current methods based on RGB images.
Your Tasks/Challenges:
- Generate synthetic training data suitable for the hybrid hand pose estimation method. You can use our existing tools for the synthetic data generation
- Various projects with code exist for Hand Pose Estimation. You can choose a project which you find most suitable and modify its underlying neural network so that it can incorporate precise motion capturing data.
Requirements:
- Experience in Python
- Basic knowledge in Deep Learning
- Optional: Experience in PyTorch
Contact:
Janis Roßkamp, j.rosskamp at cs.uni-bremen.de
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de
Master Thesis: Real-Time Rendering of Dynamic Point Clouds
Subject
Rendering 3D point clouds, captured using depth or LiDAR sensors, in real time is a fundamental and challenging area in computer graphics. A point cloud consists of numerous 3D points that hold spatial position and color information. The primary goal of point cloud rendering is to display this collection of points (which might be a simple array of 3D vectors) in such a way that it is perceived as an opaque surface on the screen.
The current methods in point cloud rendering are diverse. Some approaches visualize the points directly as circular "splats" on the screen. Other techniques transform the points into a 3D grid and reconstruct a mesh, which is then rendered. Recent techniques utilize different types of deep neural networks.
Your Task:
Depending on your preferences, your task may vary and may be:
- to develop/conceive your own novel rendering technique,
- to make improvements to existing rendering techniques, or
- to reimplement a part of a highly advanced rendering technique from the literature (see next paragraph).
Highly advanced techniques such as Fusion4D (Microsoft) or Function4D (among others, by Google), which achieve high-quality results, typically do not release their source code. Therefore, re-implementing parts of these techniques would be very interesting for a large research community in the context of a master thesis.
Working Environment:
Regardless of your chosen path (whether developing a new technique, improving an existing one, or partially re-implementing a very advanced unpublished technique), your technique should finally be integrated into our Point Cloud Rendering Framework (PCRFramework). Our PCRFramework is a stable, lightweight and easy-to-expand framework implemented in modern C++ and offers an ImGUI frontend, that already supports different basic kinds of point cloud rendering techniques (Splat Rendering, Mesh Rendering, TSDF with Real-time Marching Cubes). It already possesses numerous functions to load point clouds from an Azure Kinect, Microsoft Kinect v2, or from a recorded file. The framework integrates CUDA and already offers some useful functions to access the loaded point cloud in CUDA as well as from the CPU. Should you wish to use neural networks, the PCRFramework is also capable of loading and inferring neural networks trained in PyTorch via LibTorch. In this case, your implementation would primarily be in Python and PyTorch.
Requirements:
- Solid knowledge and experience in 3D rendering techniques (CG1 / CG2).
- Programming skills (C++ and either OpenGL / GLSL or CUDA).
- Optional: Python and PyTorch (when you want to utilize neural networks).
Contact:
Andre Mühlenbrock, muehlenb at uni-bremen.de
Prof. Dr. Gabriel Zachmann,
email: zach at informatik.uni-bremen.de
Master Thesis: Redirected Walking in Shared Real and Virtual Spaces
Subject
Redirected walking (RDW) enables users to walk in a larger VR space than
the real space.
This works by shifting and rotating the virtual space ever so
slightly, ideally below the user's noticeable threshold.
There has been a lot of research on RDW techniques and
the just noticeable thresholds.
However, how do you redirect multiple users in a shared
virtual environment in the case the users also share
the same real space, e.g., a big lab or a huge indoor court?
The setup is a number of users wearing untethered HMDs
moving around in a large, common, tracked space
(for instance, using optical tracking and WiFi HMDs).
This setup is not quite consumer grade (yet), but we can imagine a
future where such kinds of arcades are possible.
An approach to solving the RDW problem could be a kind of trajectory
optimization, where the users' trajectories in real space
are predicted, and the optimization goal is to minimize the
total deviation over all users' trajectories.
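To give a flavor of such a formulation, here is a toy sketch using SciPy; the redirection model (a single per-user gain towards the room center), the room size, and the random waypoints are purely illustrative assumptions, not the model you would actually derive in the thesis.

```python
# Toy sketch: choose per-user gains that keep predicted paths inside the tracked
# space while minimizing the total deviation from the intended trajectories.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_users, horizon = 3, 20
intended = rng.uniform(-5, 5, size=(n_users, horizon, 2))  # predicted virtual waypoints

def redirected(gains):
    # toy redirection model: each user's path is scaled towards the room center
    return intended * gains.reshape(n_users, 1, 1)

def objective(gains):
    # total squared deviation between redirected and intended trajectories
    return np.sum((redirected(gains) - intended) ** 2)

def room_constraint(gains):
    # keep all redirected positions inside a 4x4 m tracked space (|x|, |y| <= 2)
    return 2.0 - np.abs(redirected(gains)).max()

res = minimize(objective, x0=np.full(n_users, 0.3),
               bounds=[(0.1, 1.0)] * n_users,
               constraints=[{"type": "ineq", "fun": room_constraint}])
print("gains per user:", res.x, " total deviation:", res.fun)
```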
Your Tasks/Challenges:
Research the literature on multi-user RDW. Formalize the optimization problem as a mathematical non-linear optimization problem. Identify a suitable math library for solving the problem in real-time. Implement the system. Test and evaluate it with a number of users.
Requirements:
- Passion for the topic
- Programming skills
- Ideally, a bit of experience with the Unreal Engine (can be learned rather quickly)
- Ideally, some experience with mathematical optimization methods; you will not need to develop optimization codes or algorithms, but you should feel comfortable with applying existing optimization methods (and use respective libraries)
Contact:
Prof. Dr. Gabriel Zachmann,
email: zach at informatik.uni-bremen.de
Master Thesis: Inverse Reinforcement Learning and Affordances
Subject
People could program powerful chess computers before they could program a robot to walk on two legs, and many of the tasks we find easy as human beings, such as daily activities involved in preparing meals or cleaning up, turn out to be difficult to specify in detail. Thus, if we want robots to be competent helpers in the home, it would be better if we could teach them by showing what needs to be done, and for them to learn from watching us. Several techniques are being researched to enable such learning. One of these techniques is IRL—inverse reinforcement learning [1]—where the goal is to discover, by watching an "expert," the reward function that this expert is maximizing. This is more effective than simple imitation of the expert's actions. Consider the proverbial monkey shown how to wash dishes. The monkey may go through the motions of wiping, but if it did not understand that the dishes should be clean afterwards, then it won't do a good job. However, IRL is an ill-posed problem: there can be an infinity of reward functions that the expert may be demonstrating. To even make an educated guess would often require considering enormous search spaces—there are many parameters that go into characterizing even the simplest manipulation action! Additionally, the environments in which human beings perform tasks, and the tasks themselves, are in principle of unbounded complexity: if a human knows how to stack three plates on top of each other, they also know how to stack four or ten.
Your Tasks/Challenges:
The subject of this thesis is to develop an IRL system that combines existing research into relational IRL[2], modular IRL[3], and explicitly represented knowledge to enable a simulated agent to learn, from demonstrations performed in a simulated environment, how to perform tasks such as stacking various items, putting objects in and taking them out of containers, and how to cover containers. While the project can start with published techniques, it also raises research questions to investigate. Relational IRL is a technique to learn rewards that generalize and describe tasks for environments of, in principle, arbitrary complexity. However, the choice of logical formulas in the relational descriptions has a significant influence on the quality of the learned rewards—how can the logical language of these descriptions be well-chosen for the tasks we have in mind, such as stacking and container use? Furthermore, because IRL is mathematically ill-posed, many reward functions are learnable. [2], cited below, shows an example of an unstacking task, where both a reward for "there are 4, 5, or 6 blocks on the floor" and a reward for "there are no stacked blocks" are learnable from the same data, but it is only the second one that captures the intended level of generality. How can the learning process be influenced to prefer the more generalizable rewards? How can we encode which parameters of the demonstration count "as-is" and which are allowed to vary arbitrarily? The manipulations involved in stacking or container use are complex. Can these be split into several phases, allowing for independent learning for each phase and thus simplifying the search space for the IRL problem?
Requirements:
- Motivation
- Programming skills (Python, PyTorch, OpenAI Gym).
- Unreal Engine (Virtual Reality or OptiTrack)
Contact:
Prof. Dr. Gabriel Zachmann,
email: zach at informatik.uni-bremen.de
Master Theses at DLR/CGVR: Point Clouds in VR
Subject
The Department of Maritime Security Technologies at the Institute for the Protection of Maritime Infrastructures is dedicated to solving a variety of technological issues necessary for the implementation and testing of innovative system concepts to protect maritime infrastructures. This includes the development of visualization methods for maritime infrastructures, including vast point cloud data sets. The Computer Graphics and Virtual Reality Research Lab (CGVR) at the University of Bremen carries out fundamental and applied research in visual computing, which comprises computer graphics as well as computer vision. In addition, we have a long history of research in virtual reality, which draws on methods from computer graphics, HCI, and computer vision. These two research groups offer the opportunity for joint master's theses, allowing students to get the best of both worlds of academic and applied science.
Potential Topics:
- Point Cloud Labeling: Your mission will be to create an intuitive and interactive VR application, revolutionizing the way we annotate and process vast amounts of point cloud data. Point cloud data lies at the core of modern technologies such as self-driving cars, augmented reality, and 3D mapping. Your challenge will be to bridge the gap between traditional 2D labeling methods and the immense potential of 3D point cloud data. Through your expertise and creativity, you will unlock the next level of precision and efficiency in data annotation.
- Point Cloud Rendering: With the advent of cutting-edge scanning technologies and 3D data capture, point cloud datasets have grown to unprecedented sizes, containing billions of data points. The conventional rendering approaches simply fall short in handling such colossal volumes, leading to reduced performance, compromised visual fidelity, and frustrating user experiences. Your task will be to innovate and engineer a novel technique that cleverly optimizes memory usage and computational efficiency, while preserving the intricate details and accuracy of the original point cloud data.
- Point Cloud Segmentation: Traditional segmentation approaches often require vast amounts of annotated data, making them cumbersome and time-consuming. Your task will be to explore innovative learning techniques, including few-shot-approaches, designing algorithms that can leverage prior knowledge from a small set of labeled point clouds to accurately segment new, previously unseen data.
Note that you do not need to work on all of the topics; they are meant as potential ideas of what you could work on and what is of interest to us. The specific details of your topic will be discussed once you decide you want to work in this area.
Also, there is the option of getting some funding while working on one of these topics.
Requirements:
- Proficiency in computer graphics and 3D rendering techniques.
- Strong programming skills (C++, Python, or related languages).
- A plus, but not really necessary, is familiarity with GPU programming (CUDA) and a good understanding of hashing and image matching and feature detection on images
- A passion for pushing the boundaries of what’s possible in the realm of virtual environments and visualization.
Contact:
Prof. Dr. Gabriel Zachmann,
email: zach at informatik.uni-bremen.de
Master thesis: Identifying the Re-Use of Printing Matrices
Subject
Even before Gutenberg invented the printing of texts, images were printed using matrices, either carved woodblocks or engraved copperplates. Because they were expensive to produce, these matrices were often re-used, even after many years, or sold to other printers. Since there was no copyright, some printers simply had successful illustrations copied (with greater or lesser accuracy) for their own use.
In recent years, several million book illustrations have been digitised, naturally including many re-uses of printing matrices. However, these photographs do not look exactly the same: matrices may become worn or damaged over time, the printing process may have been handled slightly differently, pages can become dirty or torn, and, lastly, photos were taken by different camera systems and from different angles.
This thesis aims to investigate possible methods to match images to the printing matrices used, in order to track possible re-use, with the intention of incorporating the developed methods into real-world usage.
One idea could be to utilize geometric hashing on either extracted feature points (see our Massively Parallel Algorithms lecture) or on features extracted from a trained classifier network.
Your Tasks/Challenges:
- There are already systems that analyse a closed corpus of such images through direct comparison between them (https://www.robots.ox.ac.uk/~vgg/software/vise/). However, here, a procedure is sought that can work with an image database, to which new material is added constantly.
- Two aspects of the material may differ from many other tasks of analysing images:
- Firstly, many examples contain large numbers of lines, not least because light and shade are normally shown by hatching. Hence, finding feature points could be somewhat challenging.
- Secondly, one will not be able to make new photographs of the images under standardised conditions but use the images that are publicly available in repositories such as this: https://www.digitale-sammlungen.de/de/.
- Familiarize yourself with the concepts of spatial hashing (geometric hashing) and implement it so that it can take advantage of the parallelization capabilities of the GPU.
- Just throwing ORB or another feature detector at the images may not be enough to prevent false negative and false positive matches; you might need to incorporate deep learning features and maybe even other attributes of the images, and think of suitable data structures for that.
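As a baseline for the last point above, plain ORB matching with OpenCV could look like the sketch below; the file names are placeholders and, as noted, this alone will most likely not be robust enough on heavily hatched engravings.

```python
# Sketch: baseline ORB feature matching between two scanned illustrations.
import cv2

img_a = cv2.imread("print_a.png", cv2.IMREAD_GRAYSCALE)   # placeholder scans
img_b = cv2.imread("print_b.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)

# Hamming distance is the right metric for ORB's binary descriptors
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

print(len(matches), "matches; best distance:", matches[0].distance if matches else None)
```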
Requirements:
- Solid machine learning and deep learning skills, familiarity with basic classifier neural networks
- Familiarity with the concepts of features and feature extraction w.r.t (convolutional) neural networks
- Familiarity with GPU programming (CUDA) and a good understanding of hashing and image matching and feature detection on images
- Openness for working with images from other time-periods
Contact:
Thomas Hudcovic,
hudo at uni-bremen dot de
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de
Master Thesis: Gravity Modeling and Stable Diffusion
Subject
Current and future small-body missions, such as the ESA Hera mission or the JAXA MMX mission, demand good knowledge of the gravitational field of the targeted celestial bodies. This is motivated not only by the need for precise spacecraft operations around the body, but is likewise important for landing maneuvers, surface (rover) operations, and science, including surface gravimetry. To model the gravitation of irregularly shaped bodies, different methods exist.
Recently, (latent) stable diffusion has gained popularity as a deep learning approach. Usually, such systems work in image space. However, this thesis should investigate how the method can be used to model a gravity field (in 3D space). With the polyhedral method, we can compute the gravity field of 3D shape files as ground truth data.
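Purely to illustrate the kind of ground-truth data involved, here is a sketch that uses a simple mascon (point-mass) approximation as a stand-in for the actual polyhedral method; the toy body, the masses, and the sampling shell are made-up values.

```python
# Sketch: gravity samples from a mascon (point-mass) model of a toy body.
import numpy as np

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def mascon_acceleration(query, mascon_pos, mascon_mass):
    """Gravitational acceleration at the query points (N,3) from M point masses."""
    diff = mascon_pos[None, :, :] - query[:, None, :]           # (N, M, 3)
    dist3 = np.linalg.norm(diff, axis=-1, keepdims=True) ** 3   # (N, M, 1)
    return G * np.sum(mascon_mass[None, :, None] * diff / dist3, axis=1)

rng = np.random.default_rng(1)
# toy "asteroid": 500 mascons distributed uniformly inside a unit sphere
directions = rng.normal(size=(500, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
mascons = directions * rng.uniform(0, 1, size=(500, 1)) ** (1 / 3)
masses = np.full(500, 1e10 / 500)                               # total mass 1e10 kg

# query points on a shell of radius 2 around the body (the training targets)
queries = rng.normal(size=(1000, 3))
queries = 2.0 * queries / np.linalg.norm(queries, axis=1, keepdims=True)
g = mascon_acceleration(queries, mascons, masses)
print(g.shape)   # (1000, 3)
```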
Your Tasks/Challenges:
- Generate a lot of ground truth data with the polyhedral method
- Find a more compact way to represent the gravity field (latent space, gravitational potential)
- Predict the gravity field of new objects
- Another output could be the density distribution inside a body (inverse problem)
Requirements:
- Excellent machine learning skills
- Basic knowledge of stable diffusion (AutoEncoder, U-Net)
- Motivation
Contact:
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de
Master Thesis: Sphere Packing Problems
Subject
Sphere packings offer a way to approximate a shape volume. They can be used in many applications. The most common usage is collision detection since it is fast and trivial to test spheres for an intersection. Another application is modeling gravitational fields or applications in medical environments with force feedback.
Also, an important quality criterion is packing density, which is closely related to the fractal dimension. An exact determination of the fractal
dimension is still an open problem.
The practical side is well understood. We use the Protosphere algorithm for triangular meshes to generate sphere packings, which approximate Apollonian diagrams. Yet, the theoretical side needs more exploration.
We are considering multiple areas where you can study single or multiple topics in a thesis.
Your Tasks/Challenges:
- Determining the fractal dimension
- Packing density (theoretical limit for approximation error)
- The precision or effect of prototypes (a symmetric object does not lead to a completely symmetric sphere packing)
Requirements:
- Joy in math and geometry (Computational Geometry)
- Motivation
Contact:
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de
Master thesis: Natural hand-object manipulations in VR using Optimization
Subject
One of the long-standing research challenges in VR is to allow users to manipulate
virtual objects the same way they would in the real world, i.e., grasp them,
twiddle and twirl them, etc.
One approach could be physically-based simulation: calculating the forces acting on the object
and fingers, and then integrating both hand and object positions.
Another approach, to be explored in this thesis, is to use optimization.
The idea is to calculate hand-object penetrations, or minimal distances in case
there are no penetrations, then determine a new pose for both hand (and fingers)
and the object such that these penetrations are minimized (or distances are maximized).
Software for computing penetrations has been developed in the CGVR lab and is readily available.
Also, many software packages for fast non-linear optimization are available in the public domain
(e.g., pagmo).
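Here is a toy sketch of the optimization idea; the "hand" is just a few fixed contact points, the "object" a sphere, and the penalty terms are illustrative assumptions. In the thesis, you would plug in our penetration computation software and a full hand/object pose instead of this toy objective.

```python
# Toy sketch: adjust an object pose (here: only a sphere center) so that
# penetrations of hand points vanish while staying close to the current pose.
import numpy as np
from scipy.optimize import minimize

hand_points = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0]])
radius, current_center = 0.3, np.array([0.05, 0.05, 0.0])

def objective(center):
    # penalize penetration of the hand points into the sphere ...
    d = np.linalg.norm(hand_points - center, axis=1)
    penetration = np.maximum(0.0, radius - d)
    # ... and penalize drifting away from the current object pose
    drift = np.sum((center - current_center) ** 2)
    return np.sum(penetration ** 2) + 0.1 * drift

res = minimize(objective, x0=current_center)
print("corrected center:", res.x, " residual objective:", res.fun)
```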
Task / Challenges:
- Work out the details of the method, for instance, what exactly could be the best objective function for the optimization?
- Determine the best optimization software package, get familiar with our penetration computation software.
- Implement the method in C/C++.
- Perform a small user study.
Requirements:
- Programming skills in C/C++ (at least basic knowledge)
- Mathematical thinking (no theorem proving will be needed)
Contact:
Prof. Dr. Gabriel Zachmann: zach at informatik.uni-bremen.de
Master thesis: Mixed Reality Telepresence: Extending a Collaborative VR Telepresence System by Augmented Reality
Subject
Shared virtual reality (VR) and augmented reality (AR) systems with personalized avatars have great potential for collaborative work between remote users. Studies indicate that these technologies provide great benefits for telepresence applications, as they tend to increase the overall immersion, social presence, and spatial communication in virtual collaborative tasks. In our current project, remote doctors can meet and interact with each other in a shared virtual environment using VR headsets and are able to view live-streamed and 3D-visualized operations (based on RGB-D data) to assist the local doctor in the operating room. The local doctor is also able to join using VR.
The goal of this thesis is to extend the existing UE4 VR telepresence project to allow the local doctor to use AR glasses like the Hololens instead of the VR headset. This enables the doctor to interact, hands-free, with the remote experts while continuing the operation, and prevents interruptions. Your tasks are to adapt the current code such that it also works with the Hololens (general detection, tracking, registration, interaction gestures). Additionally, the relevant data has to be streamed as fast as possible onto the Hololens to be viewed. Lastly, and optionally, it would be great to use the built-in depth sensor of the Hololens for 3D visualizations of the patient. This could be done by continuously registering the sensor and streaming the data back into the shared virtual world.
Task / Challenges:
- Extending the current UE4 project to allow the usage of AR goggles (Hololens) instead of only VR headsets.
- Designing and implementing interaction gestures for the AR user.
- Implementing low-latency compression and streaming of point cloud/video data to the Hololens.
- (Implementing the continuous registration and streaming of the Hololens's depth camera data for shared point cloud avatars.)
Requirements:
- Some experience in a game engine, ideally the UE4 but Unity is fine too
- Basic programming skills, ideally C++.
Helpful:
- Experience in computer graphics.
- Experience with AR/VR.
Contact:
Roland Fischer, s_8ix2ba at uni-bremen.de
Prof. Dr. Gabriel Zachmann,
zach at informatik.uni-bremen.de
Master thesis: High Fidelity Point Clouds: Artificially Increasing the Sensor's Depth Resolution
Subject
RGB-D cameras like Microsoft's Azure Kinect and the corresponding point cloud visualizations of the captured scenes are getting increasingly popular and find usage in a wide range of applications. However, the low depth sensor resolution is a limiting factor resulting in very coarse 3D visualizations.
The goal of this thesis is to find and implement methods to artificially increase the depth sensor's resolution, and, thus, the fidelity of the generated point clouds. The methods have to be fast enough for real-time usage. One approach is to develop or adapt and employ super sampling algorithms (possibly based on deep learning) on the depth images. Another approach would be to experiment with attaching a convex lens in front of the sensor to increase the local pixel density for a distinct area, although this limits the field of view. Using a lens would entail a custom calibration/registration procedure between depth and color sensor. Your task is to explore these and possibly other methods and implement the most convincing one(s).
Task / Challenges:
- Conducting experiments with convex lenses in front of the sensor to achieve a higher density at range.
- Conducting experiments with (deep learning?) super sampling of the depth images.
- Investigation of other methods to artificially increase the depth sensor's resolution.
- Implementation of the most convincing method(s).
Requirements:
- Basic programming skills, ideally C++.
- Experience with image processing.
Helpful:
- Some experience in a game engine, ideally the UE4 but Unity is fine too
- Experience with RGB-D cameras.
- Experience with deep learning.
Contact:
Roland Fischer, s_8ix2ba at uni-bremen.de
Prof. Dr. Gabriel Zachmann,
zach at informatik.uni-bremen.de
Master thesis: Creation of an RGB-D Dataset with Ground Truth for Supervised Learning and Depth Image Enhancement
Subject
RGB-D cameras (color + depth) are hugely popular for 3D reconstruction and telepresence scenarios. An open problem is the inherent sensor noise, which limits the achievable quality. Deep learning techniques have proven to be very promising for image denoising, completion, and enhancement tasks; however, for supervised learning, ground truth data is needed. Acquiring suitable, realistic ground truth data for RGB-D images is a huge challenge, which is why there is hardly any available yet.
With this thesis, we want to create a universally usable RGB-D dataset with ground truth data. To achieve this, the idea is to arrange a real physical test scene consisting of a wide variety of objects and materials. To precisely specify and change the position and rotation of the RGB-D camera within the scene, we rely on a highly accurate robot/robot arm. The corresponding ground truth images will be acquired by creating a virtual version of the scene and its contained objects, e.g. using the Unreal Engine 4 and Blender. A virtual camera can eventually be placed in the virtual scene, be exactly aligned with the physical one, and record corresponding synthetic ground truth images.
Task / Challenges:
- Creating and arranging a suitable, varied physical test scene with everyday objects
- Exactly recreating the scene via 3D modeling or other appropriate techniques (photogrammetry)
- Recording of test images and trajectories using the robot and an RGB-D camera
- Taking the corresponding color and depth images in the virtual scene
Requirements:
- Experience in 3D modeling or 3D reconstruction
Helpful:
- Experience with robots and RGB-D cameras
- Basic programming skills, ideally C++.
- Experience with game engines like the Unreal Engine 4
Contact:
Roland Fischer, s_8ix2ba at uni-bremen.de
Prof. Dr. Gabriel Zachmann,
zach at informatik.uni-bremen.de
Master thesis: Factors Influencing Correct Perception of Spatial Relationships in VR
Subject
Learning human anatomy plays an important role in any surgeon's education. Patients' well-being depends to a significant degree on the surgeon's good understanding of the spatial relationships between all the structures in the human body, such as organs, blood vessels, nerves, etc. The research question in this thesis is: how much better do people (e.g., medical students) learn those spatial relationships between different structures of the human body when they learn them using virtual reality, as opposed to learning them from 2D books?
Your task
In this thesis, you will build upon an existing application that was implemented on top of the Unreal game engine. This application already contains a lot of anatomy and several features to interact with the 3D geometry.
- Design an experiment, based on the virtual anatomy atlas, for investigating the research question stated above (which organs are best suited? what are good evaluation criteria?)
- Investigate the accuracy of users' spatial perception in the anatomy atlas with a user study.
- Perform a statistical analysis of the gathered data.
Prerequisites
This thesis does not require excellent programming skills. You will need considerable knowledge of statistics (which you can, of course, learn during your thesis). In any case, it would be helpful if you had some experience with the Unreal Engine. Like any other thesis, you will need to do a lot of literature research. Participation in our VR course will provide a good basis for understanding virtual reality as a whole.
Contact:
Prof. Dr. G. Zachmann, zach at informatik.uni-bremen.de
Master (Bachelor) thesis: Depth Perception in VR
Subject
The goal of this thesis is to investigate the (distorted) depth perception that is usually experienced in VR. There is a variety of so-called depth cues, i.e., sources of information about the spatial relations of the objects in the environment, which are used by the human visual system to deduce depth. These include visual monocular depth cues (e.g., occlusion, relative size), oculomotor depth cues (e.g., convergence, accommodation), and binocular depth cues (in particular, disparity). Unfortunately, there are frequent reports of underestimation of distances in virtual environments. There are many potential reasons for this effect, including hardware errors, software errors, and errors of human perception. The difference between the images in the left and right eye is called binocular disparity, and it is considered to be the strongest depth cue in personal space. Using random-dot patterns, it was observed that it is possible to perceive depth with no other depth cue than disparity. However, the actual influence of disparity on depth perception in VR, and possibly an algorithm to adjust the disparity in software in order to correct depth perception, is still unknown. Such an automatic correction algorithm could be a game changer for many applications using VR.
Your task
The goal of this thesis is to take at least the first step towards investigating the influence of disparity on depth perception in VR. Our idea is to design a user study in which the distance between the eyes is changed in VR (which is pretty straightforward, by simply adjusting the virtual cameras), and to compare the results to depth perception in the real world, but also with a changed disparity. To do that, we have a set of hyperscopes, i.e., glasses that use mirrors to change the disparity. Really challenging is the design of an experiment that avoids other depth cues which could influence the results.
Prerequisites
This thesis does not require exceptional programming skills. Nevertheless, it would be helpful if you have a little experience with the Unreal Engine, to set up a scene and change the disparity of the virtual cameras. This thesis mainly requires a lot of literature research about human depth perception, but also about the design of good user studies. Moreover, some knowledge of statistics could be helpful for the analysis of the results. Participating in our VR course can be a good starting point.
Contact:
Rene Weller, weller at informatik.uni-bremen.de
Prof. Dr. Gabriel Zachmann,
zach at informatik.uni-bremen.de
Master thesis: Radiotherapy optimization
Subject
In radiotherapy, tumors or other unhealthy tissue is irradiated by a beam (or several beams) of high-energy electromagnetic (radio) waves. If the irradiated energy is large enough, then the unhealthy tissue is "killed". Of course, there is a challenge: the radio beams should hit all the unhealthy tissue, but only that; they should leave the healthy tissue intact.
The beams are usually generated by linear accelerators, and the cross section of the beams can be shaped by multi-leaf collimators (think "frustum through arbitrarily shaped window").
In addition, it is possible to overlap several beams coming in from different angles, where the goal is to make the shape of the intersection volume as close to the treatment volume as possible. Then, it is easier to adjust the energy of the beams such that the sum of the energies in the intersection volume reaches the level where it can kill the unhealthy tissue, while the energy elsewhere stays below the threshold where it would harm the healthy tissue.
A further challenge arises from the characteristics of proton beams (which are usually used in this kind of therapy): they lose energy as they enter the tissue, but the energy loss does not depend linearly on the penetration depth ("Bragg peak"), and they spread out as they go deeper.
Tasks/Challenges
The goal is to develop algorithms that can compute the optimal positions and energy levels of the proton beams, given a specific target volume (tumor), healthy tissue, and bones in the form of a CT or MRI volume.
- Understand the essential characteristics of the beams, the collimators, and the tissue
- Obtain and understand suitable volume data for later testing from the TCAI
- Probably investigate first inside-out (polygonal) rendering similar to the approach we have followed here
- Investigate a ray-tracing approach (inside out); the idea is to shoot rays from the target volume outwards, taking scattering and dissipating effects into account.
- If the ray-tracing approach is feasible, then you should investigate the potential of the new RTX graphics cards, which provide support for ray tracing
Prerequisites
- Algorithmic thinking
- Experience in C++
- Nice-to-have: knowledge in computer graphics, medical imaging, or geometric computing
Contact:
Thomas Hudcovic,
hudo at uni-bremen dot de
Prof. Dr. Gabriel Zachmann, zach at informatik.uni-bremen.de
Virtual 3D Simulation of Coral Reefs
Subject
The Leibniz-Zentrum für Marine Tropenökologie (ZMT) conducts research on coastal ecosystems in the tropics
and their response to changes in their environment. Based on real data on the interaction of
corals and their response to environmental changes, an abstract simulation model of a coral reef
has been created at the ZMT, which shows how the reef develops under the influence of various stress factors.
For better illustration, a three-dimensional virtual environment (possibly also immersive) is to be created,
based on the existing knowledge about the processes in coral reefs,
with which the development of the reef can be influenced interactively and followed in a lifelike way.
Possible Tasks
- Programming of 3D simulations of selected reef organisms, such as different coral species and reef fish. In particular, the rules and algorithms of the ZMT that describe the changes of these organisms under the influence of environmental stressors are to be implemented.
- Development of algorithms for depicting changes of selected 3D-simulated reef organisms.
- Integration of various selected reef organisms into a common virtual environment.
- Development and implementation of interaction metaphors that allow, e.g., exhibition visitors to interact with the virtual environment very easily and intuitively (e.g., navigation) and to change environmental factors (e.g., temperature).
Requirements
- Knowledge of modeling 3D objects with Maya or 3DS Max.
- Experience with game engines (e.g., Ogre3D, Unity, CryEngine) and programming in these APIs or frameworks.
- Programming experience in C++ or a game-engine scripting language
- Knowledge of computer graphics
Contact
Depending on your focus and degree program, supervision lies mainly
with the Computer Graphics and Virtual Reality group or with the ZMT.
Prof. G. Zachmann, zach at informatik.uni-bremen.de, Tel. 63991, Bibliothekstraße 5, 3rd floor, MZH 3460
PD Dr. Hauke Reuter
Leibniz-Zentrum für Marine Tropenökologie GmbH Bremen
E-Mail: Hauke.reuter at zmt-bremen.de