Novel Multimodal Interaction Techniques on Multi-Touch Tabletops

Translating the complexity of interfacing with a machine into a true smart HMI requires available technologies and Natural Computer Interaction (NCI). New, innovative HMI solutions need to place the operator back at the center of the process by applying a true UX design-thinking approach.


TUITO enhances the reach of new man-machine experiences by supplying high-quality hardware and software solutions, developing design services with a user-centric approach, and integrating innovative interactive technologies in order to improve productivity and product quality.
TUITO researches, innovates, architects, designs, develops and validates innovative multimodal face and body analytics and Natural Computer Interaction (NCI) solutions.
TUITO delivers these capabilities in a market-ready reference architecture fitting a variety of applications, and provides validation through user-experience evaluation and benchmarking.

2. Context for Multi-Modal Interaction techniques

New technologies such as interactive tabletops, eye-trackers and head-mounted displays (HMDs) offer new perspectives for Human-Computer Interaction (HCI) in collaborative environments such as planning or crisis rooms, and for piloting complex machines or systems such as vehicle cockpits. While these technologies provide clear benefits, it remains unclear how to use them efficiently. Designers should precisely understand the perceptive, motor and cognitive abilities of humans, the task, and the environment (e.g. safety requirements) in order to favor individual and collective performance, develop users' expertise, and improve cognition and decision making.

3. Development of Multi-Modal Interaction techniques

The primary objective of this thesis is to design, implement and evaluate novel multimodal interaction techniques that favor performance, develop users' expertise, and improve the cognition and decision making of the operator in a collaborative environment. The second objective is to develop a platform that facilitates the prototyping, elaboration, parameterization and validation of these interaction techniques. In particular, it should help designers (1) characterize and formalize multimodal interaction, (2) select relevant criteria and parameters, and (3) define the design space.
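As a minimal illustration of step (3), a design space can be formalized as the cross product of candidate input and output modality sets. The sketch below is an assumption of how such a platform might represent techniques (the names `InteractionTechnique` and `design_space` are hypothetical, not part of the TUITO platform):

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class InteractionTechnique:
    """One point in the design space: a named combination of modalities."""
    name: str
    input_modalities: tuple   # e.g. ("touch", "gaze")
    output_modalities: tuple  # e.g. ("visual", "haptic")

def design_space(input_options, output_options):
    """Enumerate candidate techniques as the cross product of modality sets."""
    return [
        InteractionTechnique(f"tech-{i}", ins, outs)
        for i, (ins, outs) in enumerate(product(input_options, output_options))
    ]

techniques = design_space(
    input_options=[("touch",), ("touch", "gaze"), ("voice",)],
    output_options=[("visual",), ("visual", "haptic")],
)
print(len(techniques))  # 3 input sets x 2 output sets = 6 candidate techniques
```

Designers could then attach evaluation criteria (e.g. task completion time, error rate) to each enumerated point and prune the space empirically.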

4. Method for Multi-Modal Interaction techniques Integration

The TUITO project will focus on intelligent user-sensing methods, which can include face analysis, eye tracking, emotion, intention and authentication detection, hand-gesture recognition, and voice analysis. These methods will use models of the users' behavior (intention, distraction) and environment (task, context) to adapt (1) the interpretation of the users' actions and (2) the presentation of the information. In particular, the system should choose which information to show and through which channel (visual, audio, haptic) in order to provide natural and intuitive interactions.

Reference article:
WHY and HOW to study multimodal interaction in cockpit design