An active memory model for cognitive computer vision systems

Wachsmuth, Sven, Wrede, Sebastian, Hanheide, Marc and Bauckhage, Christian (2005) An active memory model for cognitive computer vision systems. KI - Künstliche Intelligenz, 19 (2). pp. 25-31. ISSN 0933-1875

Full content URL: http://www.kuenstliche-intelligenz.de/index.php?id...

Documents
Wachsmuth2005-An_Active_Memory_Model_for_Cognitive_Computer_Vision_Systems.pdf - Whole Document (PDF, 758kB)
Restricted to Repository staff only
Item Type: Article
Item Status: Live Archive

Abstract

Computer vision is becoming an integral part of human-machine interfaces as research increasingly aims at seamless
and natural interaction between a user and an application system. Gesture recognition, context awareness, and the
grounding of concepts in the commonly perceived environment as well as in the interaction history are key abilities of
such systems. At the same time, recent computer vision research indicates that integrated systems which are embedded
in the world and interact with their environment seem to be a prerequisite for solving more general vision tasks.
Cognitive computer vision systems, which generate knowledge on the basis of perception, reasoning, and the extension
of prior models, are a major step towards this goal. For such systems, the integration, interaction, and organization of
memory become key issues in system design. In this article we present a computational framework for integrated vision
systems that is centered around an active memory component. It supports fast integration and substitution of system
components, offers a variety of interaction patterns, and enables a system to reason about its own memory content. The
framework is exemplified by a cognitive human-machine interface in an Augmented Reality scenario. The system is able
to acquire new concepts from interaction and provides context-aware scene augmentation for the user.
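
The abstract describes an active memory that system components write to, query, and are notified by, so that the system can react to and reason about its own memory content. The following minimal Python sketch illustrates that general idea only; the class name, API, and example data are assumptions for illustration and are not the authors' actual framework.

    # Hypothetical sketch of an "active" memory: a shared store that pushes
    # change events to subscribed components instead of being polled.
    from dataclasses import dataclass, field
    from typing import Any, Callable, Dict, List

    @dataclass
    class ActiveMemory:
        _items: Dict[str, Any] = field(default_factory=dict)
        _listeners: List[Callable[[str, str, Any], None]] = field(default_factory=list)

        def subscribe(self, listener: Callable[[str, str, Any], None]) -> None:
            # Register a component callback invoked on every memory change.
            self._listeners.append(listener)

        def insert(self, key: str, value: Any) -> None:
            # Store or update an item and actively notify all listeners.
            event = "update" if key in self._items else "insert"
            self._items[key] = value
            for listener in self._listeners:
                listener(event, key, value)

        def query(self, predicate: Callable[[str, Any], bool]) -> Dict[str, Any]:
            # Let components reason over the stored content.
            return {k: v for k, v in self._items.items() if predicate(k, v)}

    # Usage: a vision component writes a detected object; another component
    # (e.g. scene augmentation) reacts to the change event.
    memory = ActiveMemory()
    memory.subscribe(lambda ev, k, v: print(f"{ev}: {k} -> {v}"))
    memory.insert("object:cup", {"label": "cup", "confidence": 0.87})
    confident = memory.query(lambda k, v: v.get("confidence", 0) > 0.5)

The event-driven notification is what makes such a memory "active" in this sketch: components can be added or swapped as long as they speak the memory's insert/query/subscribe interface, rather than being wired to each other directly.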

Keywords: Robotics, Human-robot interaction, Cameras, Humanoid robots, Real time systems, Robot sensing systems, Robot vision systems
Subjects: H Engineering > H670 Robotics and Cybernetics
Divisions: College of Science > School of Computer Science
ID Code: 6742
Deposited On: 02 Nov 2012 10:21
