From images via symbols to contexts: using augmented reality for interactive model acquisition

Wachsmuth, Sven and Hanheide, Marc and Wrede, Sebastian and Bauckhage, Christian (2005) From images via symbols to contexts: using augmented reality for interactive model acquisition. In: KI 2005 Workshop on Mixed-reality as a Challenge to Image Understanding and Artificial Intelligence, September 11 - 14, 2005, Koblenz, Germany.

Full content URL: aiweb.techfak.uni-bielefeld.de/files/papers/Wachsm...

Documents
Wachsmuth2005-From_Images_via_Symbols_to_Contexts_Using_Augmented_Reality_for_Interactive_Model_Acquisition.pdf - Whole Document (PDF, 429kB)

Abstract

Systems that perform in real environments need to bind their internal state to externally perceived objects, events, or complete scenes. How to learn this correspondence has been a long-standing problem in computer vision as well as artificial intelligence. Augmented reality provides an interesting perspective on this problem because a human user can directly relate displayed system results to the real environment. In the following, we present a system that is able to bootstrap internal models from user-system interactions. Starting from pictorial representations, it learns symbolic object labels that provide the basis for storing observed episodes. In a second step, more complex relational information is extracted from the stored episodes, enabling the system to react to specific scene contexts.
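The abstract outlines a three-stage pipeline: pictorial representations are bound to symbolic object labels through user interaction, labeled observations are stored as episodes, and relational regularities are then extracted from those episodes. Purely as an illustrative reading of that pipeline, the following is a minimal Python sketch. All names (InteractiveModelAcquisition, learn_label, observe, context_rules) are hypothetical, and the nearest-prototype classifier and label co-occurrence counts merely stand in for the paper's actual recognition and relational-learning components, which are not shown here.

    # Hypothetical sketch of the image -> symbol -> episode -> context pipeline;
    # not the authors' implementation.
    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class Episode:
        """One observed scene: the symbolic labels assigned to its objects."""
        labels: frozenset

    class InteractiveModelAcquisition:
        def __init__(self):
            self.object_models = {}   # symbolic label -> pictorial feature prototype
            self.episodes = []        # stored episodes (symbolic scene descriptions)

        def learn_label(self, features, label):
            """Step 1: bind a user-given symbolic label to a pictorial representation."""
            self.object_models[label] = features

        def observe(self, scene_features):
            """Label each object via its nearest stored prototype; store the episode."""
            labels = frozenset(self._classify(f) for f in scene_features)
            self.episodes.append(Episode(labels))
            return labels

        def _classify(self, features):
            # Nearest-prototype matching (squared Euclidean distance) stands in
            # for the paper's object recognizer.
            return min(self.object_models,
                       key=lambda lbl: sum((a - b) ** 2
                                           for a, b in zip(self.object_models[lbl],
                                                           features)))

        def context_rules(self, min_support=2):
            """Step 2: extract relational regularities (here, simple label
            co-occurrences) from stored episodes, so the system can react
            to familiar scene contexts."""
            pairs = Counter()
            for ep in self.episodes:
                for a in ep.labels:
                    for b in ep.labels:
                        if a < b:                  # count each unordered pair once
                            pairs[(a, b)] += 1
            return {pair: n for pair, n in pairs.items() if n >= min_support}

    if __name__ == "__main__":
        system = InteractiveModelAcquisition()
        # The user labels two objects by their (toy) feature vectors.
        system.learn_label((0.1, 0.9), "cup")
        system.learn_label((0.8, 0.2), "plate")
        system.observe([(0.15, 0.85), (0.75, 0.25)])   # scene with both objects
        system.observe([(0.12, 0.88), (0.79, 0.21)])   # seen together again
        print(system.context_rules())                  # {('cup', 'plate'): 2}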

Item Type: Conference or Workshop Item (Paper)
Keywords: Robotics, Human-robot interaction
Subjects: H Engineering > H670 Robotics and Cybernetics
Divisions: College of Science > School of Computer Science
ID Code: 6945
Deposited By: Marc Hanheide
Deposited On: 30 Nov 2012 11:02
Last Modified: 13 Mar 2013 09:19
