Dr. Sharon Oviatt

Lecture Abstract

Recent Progress in the Design of Advanced Multimodal Interfaces

The advent of multimodal interfaces based on recognition of human speech, touch, pen input, gesture, gaze, and other natural behavior represents just the beginning of a progression toward pervasive computational interfaces capable of human-like sensory perception. Such interfaces eventually will interpret continuous, simultaneous input from many different modes, recognized as users engage in everyday activities. They also will track and incorporate information from multiple sensors on the user's interface and in the surrounding physical environment in order to support intelligent multimodal-multisensor adaptation to the user, task, and usage environment. In this talk, I will describe state-of-the-art research on multimodal interaction and interface design, focusing on two topics that are generating considerable activity at the moment, both within our own lab and around the world. The first concerns the major robustness gains that have been demonstrated for different types of multimodal systems compared with unimodal ones. The second involves a recent surge of research on human multisensory processing and users' multimodal integration patterns during human-computer interaction, along with the implications for the design of adaptive multimodal interfaces. The long-term goal of research in these and related areas is the development of advanced multimodal interfaces that support new functionality, unparalleled robustness, and flexible adaptation to individual users and real-world mobile usage contexts.