Facial communicative signal interpretation in human-robot interaction by discriminative video subsequence selection

Lang, Christian, Wachsmuth, Sven, Hanheide, Marc and Wersing, Heiko (2013) Facial communicative signal interpretation in human-robot interaction by discriminative video subsequence selection. In: International Conference on Robotics and Automation (ICRA), May 6-10, 2013, Karlsruhe.

Full content URL: http://dx.doi.org/10.1109/ICRA.2013.6630572

Documents
paper.pdf - Whole Document (632kB)
Item Type: Conference or Workshop contribution (Paper)
Item Status: Live Archive

Abstract

Facial communicative signals (FCSs) such as head gestures, eye gaze, and facial expressions can provide useful feedback in conversations between people and also in human-robot interaction. This paper presents a pattern recognition approach for the interpretation of FCSs in terms of valence, based on the selection of discriminative subsequences in video data. These subsequences capture important temporal dynamics and are used as prototypical reference subsequences in a classification procedure based on dynamic time warping and feature extraction with active appearance models. Using this valence classification, the robot can discriminate positive from negative interaction situations and react accordingly. The approach is evaluated on a database containing videos of people interacting with a robot by teaching it the names of several objects. The robot's verbal answer is expected to elicit spontaneous FCSs from the human tutor, and these are the signals classified in this work. The achieved classification accuracies are comparable to the average human recognition performance and outperform our previous results on this task.
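To make the classification scheme concrete, the following is a minimal, hypothetical Python sketch of the nearest-prototype decision described in the abstract: a dynamic time warping (DTW) distance compares an input feature sequence against labelled reference subsequences, and the label of the closest reference is returned. The function names (dtw_distance, classify_valence), the plain Euclidean frame cost, and the use of raw NumPy arrays in place of the paper's active appearance model features and learned discriminative subsequences are all illustrative assumptions, not the authors' implementation.

import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two feature sequences
    a (m x d) and b (n x d), using Euclidean per-frame costs."""
    m, n = len(a), len(b)
    D = np.full((m + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping steps.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[m, n]

def classify_valence(sequence, prototypes):
    """Return the valence label ('positive'/'negative') of the reference
    subsequence with the smallest DTW distance to the input sequence.
    'prototypes' is a hypothetical list of (label, subsequence) pairs."""
    best_label, best_dist = None, np.inf
    for label, proto in prototypes:
        d = dtw_distance(sequence, proto)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

In this sketch, prototypes would stand in for the discriminative reference subsequences selected during training; the paper's actual selection procedure and feature pipeline are described in the full text linked above.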

Keywords: Facial Expressions, HRI
Subjects: H Engineering > H671 Robotics
G Mathematical and Computer Sciences > G740 Computer Vision
Divisions: College of Science > School of Computer Science
ID Code: 7880
Deposited On: 07 Mar 2013 15:07
