Active vision-based localization for robots in a home-tour scenario

Schubert, Falk and Spexard, Thorsten P. and Hanheide, Marc and Wachsmuth, Sven (2007) Active vision-based localization for robots in a home-tour scenario. In: 5th International Conference on Computer Vision Systems (ICVS 2007), 21st-24th March 2007, Bielefeld University, Germany.

Documents
Schubert2007-Active_Vision-based_Localization_For_Robots_In_A_Home-Tour_Scenario.pdf - Whole Document (PDF, 995kB)

Full text URL: http://dx.doi.org/10.2390/biecoll-icvs2007-101

Abstract

Self-localization is a crucial task for mobile robots. It is not only a requirement for autonomous navigation but also provides contextual information to support human-robot interaction (HRI). In this paper we present an active vision-based localization method for integration into a complex robot system operating in human interaction scenarios (e.g. a home-tour) in a real-world apartment. The holistic features used are robust to illumination and structural changes in the scene. The system uses only a single pan-tilt camera, shared between different vision applications running in parallel, to reduce the number of sensors. Additional information from other modalities (such as laser scanners) can be used, profiting from the integration into an existing system. The camera view can be actively adapted, and the evaluation showed that different rooms can be discerned.
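
As a rough illustration of the general idea of appearance-based room recognition with holistic image features, the Python sketch below computes a coarse grid of gradient-orientation histograms as a holistic descriptor and assigns a query camera view to the nearest labelled reference view. This is an assumed, minimal stand-in for the concept described in the abstract, not the descriptor or classifier used in the paper; the function names, grid size, and distance measure are all illustrative choices.

    # Minimal sketch (not the authors' implementation): a holistic image
    # descriptor for appearance-based room classification, approximated here
    # as a coarse grid of gradient-orientation histograms plus
    # nearest-neighbour matching over labelled reference views.
    import numpy as np

    def holistic_descriptor(gray, grid=(4, 4), bins=8):
        """Coarse gradient-orientation histogram over a spatial grid.

        gray : 2-D array (H x W) of intensity values.
        Returns a 1-D descriptor of length grid[0] * grid[1] * bins.
        """
        gy, gx = np.gradient(gray.astype(float))
        mag = np.hypot(gx, gy)
        ang = np.mod(np.arctan2(gy, gx), np.pi)      # orientation in [0, pi)
        h, w = gray.shape
        desc = []
        for i in range(grid[0]):
            for j in range(grid[1]):
                ys = slice(i * h // grid[0], (i + 1) * h // grid[0])
                xs = slice(j * w // grid[1], (j + 1) * w // grid[1])
                hist, _ = np.histogram(ang[ys, xs], bins=bins,
                                       range=(0, np.pi),
                                       weights=mag[ys, xs])
                desc.append(hist / (hist.sum() + 1e-9))  # normalise per cell
        return np.concatenate(desc)

    def classify_room(query_gray, references):
        """Nearest-neighbour room label by descriptor distance.

        references : list of (label, descriptor) pairs from training views.
        """
        d_query = holistic_descriptor(query_gray)
        label, _ = min(references,
                       key=lambda lr: np.linalg.norm(lr[1] - d_query))
        return label

    # Usage with hypothetical data: build references from labelled camera
    # views, then classify a new pan-tilt view.
    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        refs = [("kitchen", holistic_descriptor(rng.random((120, 160)))),
                ("living room", holistic_descriptor(rng.random((120, 160))))]
        print(classify_room(rng.random((120, 160)), refs))
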

Item Type: Conference or Workshop Item (Paper)
Keywords: Robotics, Human-robot interaction
Subjects: H Engineering > H670 Robotics and Cybernetics
Divisions: College of Science > School of Computer Science
ID Code: 6939
Deposited By: Marc Hanheide
Deposited On: 30 Nov 2012 15:08
Last Modified: 13 Mar 2013 09:19
