Wachsmuth, Sven, Hanheide, Marc, Wrede, Sebastian and Bauckhage, Christian
(2005)
From images via symbols to contexts: using augmented reality for interactive model acquisition.
In: KI 2005 Workshop on Mixed-reality as a Challenge to Image Understanding and Artificial Intelligence, September 11 - 14, 2005, Koblenz, Germany.
Full content URL: aiweb.techfak.uni-bielefeld.de/files/papers/Wachsm...
Full text: Wachsmuth2005-From_Images_via_Symbols_to_Contexts_Using_Augmented_Reality_for_Interactive_Model_Acquisition.pdf (PDF, Whole Document, 429kB)
Item Type: Conference or Workshop contribution (Paper)
Item Status: Live Archive
Abstract
Systems that perform in real environments need to bind their internal state to externally perceived objects, events, or complete scenes. How to learn this correspondence has been a long-standing problem in computer vision as well as artificial intelligence. Augmented Reality offers an interesting perspective on this problem because a human user can directly relate displayed system results to the real environment. In the following, we present a system that is able to bootstrap internal models from user-system interactions. Starting from pictorial representations, it learns symbolic object labels that provide the basis for storing observed episodes. In a second step, more complex relational information is extracted from the stored episodes, enabling the system to react to specific scene contexts.
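The pipeline the abstract describes (pictorial input, then symbolic labels, then stored episodes, then relational context) can be illustrated with a minimal sketch. The names below (`EpisodicMemory`, `store_episode`, `context_for`) are hypothetical and not taken from the paper; the sketch only shows the idea of turning user-confirmed labels into episodes and then mining simple co-occurrence relations from them.

```python
from collections import Counter
from itertools import combinations

# Hypothetical sketch of the images -> symbols -> contexts pipeline
# described in the abstract; names and structure are illustrative,
# not the authors' implementation.

class EpisodicMemory:
    def __init__(self):
        self.episodes = []            # each episode: set of symbolic labels seen together
        self.pair_counts = Counter()  # relational info: label co-occurrence counts

    def store_episode(self, labels):
        """Store one observed scene as a set of object labels.

        In the described system, labels would be bootstrapped from
        user-system interaction: the AR display shows a detected region
        and the user's answer binds it to a symbolic label.
        """
        episode = frozenset(labels)
        self.episodes.append(episode)
        # Second step: extract relational information (here, co-occurrence).
        for a, b in combinations(sorted(episode), 2):
            self.pair_counts[(a, b)] += 1

    def context_for(self, label, min_support=2):
        """Labels that frequently co-occur with `label`, i.e. its expected scene context."""
        related = Counter()
        for (a, b), n in self.pair_counts.items():
            if label in (a, b) and n >= min_support:
                related[b if a == label else a] += n
        return related

memory = EpisodicMemory()
memory.store_episode(["cup", "saucer", "spoon"])
memory.store_episode(["cup", "saucer"])
memory.store_episode(["cup", "plate"])
print(memory.context_for("cup"))  # Counter({'saucer': 2})
```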
Keywords: Robotics, Human-robot interaction
Subjects: H Engineering > H670 Robotics and Cybernetics
Divisions: College of Science > School of Computer Science
ID Code: 6945
Deposited On: 30 Nov 2012 11:02