Combining environmental cues & head gestures to interact with wearable devices

Hanheide, Marc and Bauckhage, Christian and Sagerer, Gerhard (2005) Combining environmental cues & head gestures to interact with wearable devices. In: 7th International Conference on Multimodal Interfaces, October 4–6, 2005, Trento, Italy.

Documents
Hanheide2005-Combining_environmental_cues__head_gestures_to_interact_with_wearable_devices.pdf (PDF, 3MB) - Whole Document
Restricted to Repository staff only

Full text URL: http://dx.doi.org/10.1145/1088463.1088471

Abstract

As wearable sensors and computing hardware are becoming a reality,
new and unorthodox approaches to seamless human-computer
interaction can be explored. This paper presents the prototype of a
wearable, head-mounted device for advanced human-machine interaction
that integrates speech recognition and computer vision
with head gesture analysis based on inertial sensor data. We will
focus on the innovative idea of integrating visual and inertial data
processing for interaction. Fusing head gestures with results from
visual analysis of the environment provides rich vocabularies for
human-machine communication because it renders the environment
into an interface: once objects or items in the surroundings are
associated with system activities, a head gesture can trigger the
corresponding command whenever the user looks at that object. We will
explain the algorithmic approaches applied in our prototype and
present experiments that highlight its potential for assistive technology.
Apart from pointing out a new direction for seamless interaction
in general, our approach provides a new and easy-to-use
interface for disabled and paralyzed users in particular.
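The interaction principle sketched in the abstract, binding objects in the user's surroundings to system activities and triggering the bound command when a head gesture is detected while that object is being looked at, can be illustrated with a small lookup-and-dispatch sketch. The following Python snippet is not the authors' implementation; the class and function names (EnvironmentInterface, FusionEvent, bind, on_event) and the "door"/"nod" example binding are hypothetical and only illustrate the object-gesture fusion idea.

    # Hypothetical sketch (not the paper's code): fuse the object currently
    # looked at (from visual analysis) with a detected head gesture (from
    # inertial data) and trigger the system activity bound to that pair.
    from dataclasses import dataclass
    from typing import Callable, Dict, Optional, Tuple

    @dataclass
    class FusionEvent:
        object_id: str    # object recognised in the head-mounted camera view
        gesture: str      # e.g. "nod" or "shake", classified from inertial data
        timestamp: float

    class EnvironmentInterface:
        """Maps (object, gesture) pairs to system activities, so that e.g.
        nodding while looking at an object triggers the command bound to it."""

        def __init__(self) -> None:
            self._bindings: Dict[Tuple[str, str], Callable[[], None]] = {}

        def bind(self, object_id: str, gesture: str,
                 action: Callable[[], None]) -> None:
            # Associate an object in the environment with a system activity.
            self._bindings[(object_id, gesture)] = action

        def on_event(self, event: FusionEvent) -> Optional[str]:
            # Look up the activity bound to this object-gesture pair, if any.
            action = self._bindings.get((event.object_id, event.gesture))
            if action is None:
                return None
            action()
            return f"triggered action for {event.object_id} via {event.gesture}"

    # Example: looking at a (hypothetical) "door" object and nodding opens it.
    if __name__ == "__main__":
        ui = EnvironmentInterface()
        ui.bind("door", "nod", lambda: print("opening door"))
        print(ui.on_event(FusionEvent("door", "nod", timestamp=0.0)))

In such a design, the vocabulary of the interface grows with the set of recognisable objects rather than with the set of gestures, which is what the abstract means by rendering the environment itself into an interface.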

Item Type: Conference or Workshop Item (Paper)
Keywords: Robotics, Human-robot interaction
Subjects: H Engineering > H670 Robotics and Cybernetics
Divisions: College of Science > School of Computer Science
ID Code: 6944
Deposited By: Marc Hanheide
Deposited On: 30 Nov 2012 10:53
Last Modified: 13 Mar 2013 09:19