Wrede, Sebastian, Hanheide, Marc, Wachsmuth, Sven and Sagerer, Gerhard
(2006)
Integration and coordination in a cognitive vision system.
In: IEEE International Conference on Computer Vision Systems (ICVS '06), 4-7 January 2006, New York.
Full content URL: http://dx.doi.org/10.1109/ICVS.2006.36
PDF: Wrede2006-Integration_and_Coordination_in_a_Cognitive_Vision_System_(1).pdf (Whole Document, 611kB)
Item Type: Conference or Workshop contribution (Paper)
Item Status: Live Archive
Abstract
In this paper, we present a case study that exemplifies general ideas of system integration and coordination. The application field of assistant technology provides an ideal test bed for complex computer vision systems including real-time components, human-computer interaction, dynamic 3-d environments, and information retrieval aspects. In our scenario the user is wearing an augmented reality device that supports her/him in everyday tasks by presenting information that is triggered by perceptual and contextual cues. The system integrates a wide variety of visual functions like localization, object tracking and recognition, action recognition, interactive object learning, etc. We show how different kinds of system behavior are realized using the Active Memory Infrastructure that provides the technical basis for distributed computation and a data- and event-driven integration approach.
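
The data- and event-driven integration style mentioned in the abstract can be pictured as a publish/subscribe pattern around a shared memory. The sketch below is not the paper's Active Memory Infrastructure API; it is a minimal, hypothetical illustration in which components insert results into a common store and other components react to the resulting events.

```python
# Minimal sketch (hypothetical, not the authors' Active Memory API) of a
# data- and event-driven integration style: components insert results into
# a shared memory, and subscribed components react to insert events.

from collections import defaultdict
from typing import Any, Callable, Dict, List


class ActiveMemorySketch:
    """Toy shared memory that notifies subscribers about data events."""

    def __init__(self) -> None:
        self._store: Dict[str, Any] = {}
        self._subscribers: Dict[str, List[Callable[[str, Any], None]]] = defaultdict(list)

    def subscribe(self, category: str, callback: Callable[[str, Any], None]) -> None:
        # A component registers interest in a data category (e.g. "object").
        self._subscribers[category].append(callback)

    def insert(self, category: str, key: str, value: Any) -> None:
        # Inserting data triggers event callbacks in the subscribing components.
        self._store[key] = value
        for callback in self._subscribers[category]:
            callback(key, value)


# Hypothetical usage: an object recognizer publishes a detection, and an
# AR display component reacts to the resulting memory event.
memory = ActiveMemorySketch()
memory.subscribe("object", lambda key, value: print(f"AR overlay: {value} at {key}"))
memory.insert("object", "table-1", {"label": "cup", "confidence": 0.87})
```

In this style, the vision components (localization, tracking, action recognition) need not know about each other directly; coordination emerges from who writes to and who listens on the shared memory.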
Keywords: Robotics, Human-robot interaction
Subjects: H Engineering > H670 Robotics and Cybernetics
Divisions: College of Science > School of Computer Science
ID Code: 6943
Deposited On: 30 Nov 2012 12:04