Facial communicative signals: valence recognition in task-oriented human-robot interaction

Lang, Christian and Wachsmuth, Sven and Hanheide, Marc and Wersing, Heiko (2012) Facial communicative signals: valence recognition in task-oriented human-robot interaction. International Journal of Social Robotics, 4 (3). pp. 249-262. ISSN 1875-4791

Documents
Lang2012-Facial_Communicative_Signals_-_Valence_Recognition_in_Task-Oriented_Human-Robot_Interaction.pdf - Whole Document (PDF, 840kB)
Restricted to Repository staff only
Full text URL: http://dx.doi.org/10.1007/s12369-012-0145-z

Abstract

From the issue entitled "Measuring Human-Robots Interactions"
This paper investigates facial communicative signals (head gestures, eye gaze, and facial expressions) as nonverbal feedback in human-robot interaction. Motivated by a discussion of the literature, we suggest scenario-specific investigations due to the complex nature of these signals and present an object-teaching scenario in which subjects teach the names of objects to a robot, which in turn should name these objects correctly afterwards. The robot’s verbal answers are intended to elicit facial communicative signals from its interaction partners. We investigated the human ability to recognize this spontaneous facial feedback as well as the performance of two automatic recognition approaches. The first is a static approach yielding baseline results, whereas the second considers the temporal dynamics and achieved classification rates

Item Type:Article
Additional Information:From the issue entitled "Measuring Human-Robots Interactions"
Keywords:Robotics, Human-robot interaction
Subjects:H Engineering > H670 Robotics and Cybernetics
Divisions:College of Science > School of Computer Science
ID Code:6561
Deposited By: Marc Hanheide
Deposited On:12 Oct 2012 11:53
Last Modified:13 Mar 2013 09:16
