Foot Motion

We developed our foot control as a replacement for the common method, so that surgeons can navigate the image data themselves without leaving the operating table, thereby improving efficiency. To do so, the surgeon simply interacts with the system by performing directed foot movements.


Like the common method, our foot control can switch between single images as well as navigate quickly through large image stacks, which corresponds to the commands "next image" or "go forward until I tell you to stop" that the surgeon usually gives. The foot control combines the following discrete and continuous gestures (a sketch of the resulting gesture-to-command mapping follows the list):


A foot tap switches to exactly one next or previous image.

A foot swipe, i.e. a rotation around the heel, triggers continuous image navigation. As long as the foot remains rotated, the images are scrolled at a speed that depends on the degree of rotation. Navigation continues until the foot is returned to its initial position.
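The following Python sketch illustrates the gesture-to-command mapping described above. The function names, thresholds, and the linear speed mapping are our own illustrative assumptions; the text only states that the scrolling speed depends on the degree of rotation.

```python
# Sketch of the gesture-to-navigation mapping (names and thresholds are
# assumptions). The tracker is assumed to deliver discrete tap events and
# the current heel rotation angle in degrees relative to the neutral pose.

TAP_STEP = 1              # a tap moves exactly one image
DEAD_ZONE_DEG = 5.0       # small rotations are ignored (assumed threshold)
MAX_ROTATION_DEG = 45.0
MAX_SPEED_IMG_PER_S = 20.0

def on_foot_tap(direction):
    """direction: +1 for 'next image', -1 for 'previous image'."""
    return TAP_STEP * direction

def scroll_speed(rotation_deg):
    """Map the current heel rotation to a scrolling speed.

    Returns images per second; the sign encodes the scrolling direction.
    Navigation stops once the foot is back inside the dead zone.
    """
    if abs(rotation_deg) < DEAD_ZONE_DEG:
        return 0.0
    # Linear mapping between dead zone and maximum rotation (assumption).
    fraction = min(abs(rotation_deg), MAX_ROTATION_DEG) / MAX_ROTATION_DEG
    return MAX_SPEED_IMG_PER_S * fraction * (1 if rotation_deg > 0 else -1)
```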


To prevent unintended interaction with the system, it has to be activated via speech recognition before use. After activation, the system recognizes foot gestures until it is deactivated via speech recognition again.
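Conceptually, this activation scheme acts as a gate in front of the gesture handling. The sketch below illustrates the idea with assumed command strings and event names, which are not taken from the actual system.

```python
# Minimal sketch of the speech-controlled activation gate, assuming the
# speech recognizer emits "activate"/"deactivate" commands and the tracker
# emits foot-gesture events. All names are illustrative assumptions.

class FootControlGate:
    def __init__(self):
        self.active = False

    def on_speech_command(self, command):
        if command == "activate":
            self.active = True
        elif command == "deactivate":
            self.active = False

    def on_foot_gesture(self, gesture, handler):
        # Foot gestures are only forwarded while the gate is active,
        # preventing accidental interaction with the image viewer.
        if self.active:
            handler(gesture)
```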


We use optical tracking for our foot control. Compared to body-worn sensors, environment sensors have the advantage that only one device is required per operation and surgeons do not need their own specially fitted shoes with integrated hardware, which would cause additional expenses. By mounting the cameras underneath the operating table, the surgeons' range of movement is preserved.


To recognize the feet, only a few particular markers (e.g. stickers) have to be attached to the shoes. These markers ensure reliable operation without constraining the surgeon, even under poor lighting conditions.
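As an illustration only, such markers could be located by thresholding the camera image for bright blobs. The following Python/OpenCV sketch is our own assumption about one possible detection step, not the authors' implementation.

```python
# Sketch of marker detection via intensity thresholding (assumption:
# retroreflective or brightly colored stickers appear as high-intensity
# blobs even under poor lighting). Requires OpenCV >= 4.
import cv2

def find_marker_centers(frame_bgr, min_area=20, threshold=200):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue  # discard small noise blobs
        m = cv2.moments(c)
        if m["m00"] > 0:
            centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centers  # marker positions in image coordinates
```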