Rabie, Ahmad, Lang, Christian, Hanheide, Marc, Castrillon-Santana, Modesto and Sagerer, Gerhard
(2008)
Automatic initialization for facial analysis in interactive robotics.
In: 6th International Conference on Computer Vision Systems (ICVS 2008), May 12-15, 2008, Santorini, Greece.
Full content URL: http://dx.doi.org/10.1007/978-3-540-79547-6_50
Full text: Rabie2008-Automatic_Initialization_for_Facial_Analysis_in_Interactive_Robotics.pdf (PDF, whole document, 1MB)
| Item Type: | Conference or Workshop contribution (Paper) |
|---|---|
| Item Status: | Live Archive |
Abstract
The human face plays an important role in communication, as it allows us to discern different interaction partners and provides non-verbal feedback. In this paper, we present a soft real-time vision system that enables an interactive robot to analyze the faces of interaction partners, not only to identify them but also to recognize their respective facial expressions as a dialog-controlling non-verbal cue. To ensure applicability in real-world environments, a robust detection scheme is presented which detects faces and basic facial features such as the position of the mouth, nose, and eyes. Based on these detected features, facial parameters are extracted using active appearance models (AAMs) and conveyed to support vector machine (SVM) classifiers to identify both persons and facial expressions. This paper focuses on four different initialization methods for determining the initial shape for the AAM algorithm and on their respective performance in two different classification tasks, evaluated both on the DaFEx facial expression database and on real-world data obtained from a robot's point of view.
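The abstract outlines a detect-then-fit pipeline: a face/feature detector provides the positions used to place the initial shape for AAM fitting, and the fitted parameters are classified by SVMs. The sketch below illustrates one common way such an initialization can be done, by aligning a mean shape to detected landmarks with a 2-D similarity (Procrustes) transform; it uses NumPy and scikit-learn, and every point set, parameter vector, and label in it is a hypothetical placeholder rather than the authors' implementation.

```python
# Minimal sketch of the pipeline described in the abstract: a detector supplies
# face/feature positions, a similarity transform places the AAM mean shape as
# the initial shape, and SVMs classify the extracted parameters. All names,
# coordinates, and feature vectors below are hypothetical placeholders.
import numpy as np
from sklearn.svm import SVC

def similarity_init(mean_shape, detected_pts):
    """Align the AAM mean shape to detector output (eyes, nose, mouth) with the
    closed-form 2-D similarity (Procrustes) solution: scale, rotation, translation."""
    mu_m = mean_shape.mean(axis=0)
    mu_d = detected_pts.mean(axis=0)
    M = mean_shape - mu_m                     # centred mean-shape points
    D = detected_pts - mu_d                   # centred detected points
    denom = (M ** 2).sum()
    a = (M * D).sum() / denom                                  # s * cos(theta)
    b = (M[:, 0] * D[:, 1] - M[:, 1] * D[:, 0]).sum() / denom  # s * sin(theta)
    R = np.array([[a, -b], [b, a]])           # scaled rotation matrix
    return (mean_shape - mu_m) @ R.T + mu_d   # initial shape in image coordinates

# Hypothetical anchor points of a mean shape (left eye, right eye, nose, mouth)
# and the matching positions reported by a face/feature detector.
mean_anchor = np.array([[-1.0, -1.0], [1.0, -1.0], [0.0, 0.0], [0.0, 1.2]])
detected = np.array([[120.0, 80.0], [160.0, 82.0], [140.0, 105.0], [141.0, 128.0]])
initial_shape = similarity_init(mean_anchor, detected)

# After AAM fitting (not shown), the shape/appearance parameters would be fed
# to SVM classifiers -- one for identity, one for facial expression.
X_train = np.random.rand(20, 10)        # placeholder AAM parameter vectors
y_train = np.arange(20) % 3             # placeholder expression labels (3 classes)
expression_svm = SVC(kernel="rbf").fit(X_train, y_train)
print(expression_svm.predict(np.random.rand(1, 10)))
```

The paper itself compares four such initialization strategies; the sketch is only a generic stand-in for the step they all serve.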
| Keywords: | Robotics, Human-robot interaction |
|---|---|
| Subjects: | H Engineering > H670 Robotics and Cybernetics |
| Divisions: | College of Science > School of Computer Science |
| ID Code: | 6929 |
| Deposited On: | 04 Jan 2013 20:31 |