Coordinating interactive vision behaviors for cognitive assistance

Wachsmuth, Sven and Wrede, Sebastian and Hanheide, Marc (2007) Coordinating interactive vision behaviors for cognitive assistance. Computer Vision and Image Understanding, 108 (1-2). pp. 135-149. ISSN 1077-3142

Full text not available from this repository.

Full text URL: http://dx.doi.org/10.1016/j.cviu.2006.10.018

Abstract

Most of the research conducted in human-computer interaction (HCI) focuses on a seamless interface between a user and an application that is separated from the user in terms of working space and/or control, such as navigation in image databases, instruction of robots, or information retrieval systems. The interaction paradigm of cognitive assistance goes one step further in that the application consists of assisting the user in performing everyday tasks in his or her own environment, and in that the user and the system share control of such tasks. This kind of tight bidirectional interaction in realistic environments demands cognitive system skills like context awareness, attention, learning, and reasoning about the external environment. The system therefore needs to integrate a wide variety of visual functions, such as localization, object tracking and recognition, action recognition, and interactive object learning. In this paper we show how different kinds of system behaviors are realized using the Active Memory Infrastructure, which provides the technical basis for distributed computation and a data- and event-driven integration approach. A running augmented reality system for cognitive assistance is presented that supports users in mixing beverages. The flexibility and generality of the system framework provide an ideal testbed for studying visual cues in human-computer interaction. We report on results from first user studies.
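The data- and event-driven integration approach mentioned in the abstract can be illustrated with a minimal sketch: components publish observations into a shared memory and other components subscribe to memory events, so that, for example, a recognition module and a coordination module interact without direct coupling. All class and event names below are illustrative assumptions, not the Active Memory Infrastructure's actual API.

```python
from collections import defaultdict

class ActiveMemory:
    """Minimal sketch of an event-driven shared memory (illustrative only):
    components subscribe to event types and are notified whenever a
    matching item is inserted."""

    def __init__(self):
        self.items = []                       # all inserted (type, data) pairs
        self.subscribers = defaultdict(list)  # event type -> list of callbacks

    def subscribe(self, event_type, callback):
        """Register a callback to run when an item of event_type is inserted."""
        self.subscribers[event_type].append(callback)

    def insert(self, event_type, data):
        """Store an item and notify all subscribers for its event type."""
        self.items.append((event_type, data))
        for callback in self.subscribers[event_type]:
            callback(data)

# A vision component publishes an observation; a coordination component
# reacts to it. Neither component references the other directly.
memory = ActiveMemory()
seen = []
memory.subscribe("object_recognized", lambda d: seen.append(d["label"]))
memory.insert("object_recognized", {"label": "bottle"})
```

This decoupling is what lets behaviors such as object tracking, action recognition, and interactive learning be added or removed without rewiring the other components.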

Item Type: Article
Keywords: Robotics, Human-robot interaction
Subjects: H Engineering > H670 Robotics and Cybernetics
Divisions: College of Science > School of Computer Science
ID Code: 6701
Deposited By: Marc Hanheide
Deposited On: 26 Oct 2012 12:41
Last Modified: 26 Feb 2013 10:05
