Facial expressions as feedback cue in human-robot interaction - a comparison between human and automatic recognition performances

Lang, Christian, Wachsmuth, Sven, Wersing, Heiko and Hanheide, Marc (2010) Facial expressions as feedback cue in human-robot interaction - a comparison between human and automatic recognition performances. In: Proc. IEEE Computer Society Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 13-18 June 2010, San Francisco, CA.

Full content URL: http://dx.doi.org/10.1109/CVPRW.2010.5543264

Documents
Lang2010-Facial_expressions_as_feedback_cue_in_human-robot_interaction---a_comparison_between_human_and_automatic_recogn[1].pdf - Whole Document (PDF, 379kB)
Item Type:Conference or Workshop contribution (Paper)
Item Status:Live Archive

Abstract

Facial expressions are an important nonverbal communication cue, as they can provide feedback in conversations between people and also in human-robot interaction. This paper presents an evaluation of three standard pattern recognition techniques (active appearance models, Gabor energy filters, and raw images) for facial feedback interpretation in terms of valence (success and failure) and compares the results to human performance. The database used contains videos of people interacting with a robot by teaching it the names of several objects. After teaching, the robot was asked to name the objects correctly. The subjects reacted to its answers with spontaneous facial expressions, which were classified in this work. One main result is that automatic classification of facial expressions in terms of valence using simple standard pattern recognition techniques is possible with an accuracy comparable to the average human classification rate, but, as with human performance, with high variance between subjects.
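
For illustration only, the sketch below shows how one of the evaluated feature types (Gabor energy filters) could feed a standard classifier for binary valence recognition. The filter bank, pooling strategy, classifier choice, and the synthetic stand-in data are assumptions for this sketch, not the authors' actual pipeline.

```python
# Minimal sketch: Gabor-energy features + SVM for binary valence classification.
# Not the paper's pipeline; filter bank, pooling, and classifier are illustrative choices.
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score


def gabor_energy_features(face, frequencies=(0.1, 0.2, 0.3), n_orientations=4):
    """Mean Gabor energy per (frequency, orientation) for a grayscale face crop."""
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, imag = gabor(face, frequency=f, theta=theta)
            energy = np.sqrt(real ** 2 + imag ** 2)  # Gabor energy response
            feats.append(energy.mean())              # simple global pooling
    return np.array(feats)


if __name__ == "__main__":
    # Synthetic stand-in for aligned 64x64 face crops labelled with valence
    # (1 = reaction to a correct robot answer, 0 = reaction to a wrong one).
    rng = np.random.default_rng(0)
    faces = rng.random((40, 64, 64))
    labels = np.array([0, 1] * 20)

    X = np.stack([gabor_energy_features(face) for face in faces])
    clf = SVC(kernel="rbf", C=1.0)
    # A leave-one-subject-out protocol would match the paper's subject-dependent
    # variance question more closely; plain 5-fold cross-validation is shown here.
    print("mean CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```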

Keywords:Robotics, Human-robot interaction
Subjects:H Engineering > H670 Robotics and Cybernetics
Divisions:College of Science > School of Computer Science
ID Code:6913
Deposited On:07 Jan 2013 10:35
