The human senses of sight, hearing and touch are mirrored in gaze and gesture control, voice processing, and audio and haptic feedback. Interfaces recognise these human abilities, interpret them and respond to them. Thanks to machine learning, this intuitive interaction can now be continuously refined.
Visitors can experience this intuitive interaction beyond the mouse and keyboard. Avatars, for example, can be controlled purely through gaze and gestures. Data can literally be made tangible on a malleable display and rearranged at will by pressing or pulling. And with sounds and clapping, a teddy bear can be steered through an intergalactic game.
Visitors can discover and see for themselves how machines carry out their assigned tasks. Sensory gloves and VR headsets serve as interfaces and are on display.