Feedback interpretation based on facial expressions in human-robot interaction

Lang, Christian and Hanheide, Marc and Lohse, Manja and Wersing, Heiko and Sagerer, Gerhard (2009) Feedback interpretation based on facial expressions in human-robot interaction. In: The 18th IEEE International Symposium on Robot and Human Interactive Communication, 27 September - 2 October 2009, Toyama, Japan.

Full content URL: http://dx.doi.org/10.1109/ROMAN.2009.5326199

Documents
Lang2009-Feedback_interpretation_based_on_facial_expressions_in_human-robot_interaction.pdf - Whole Document (PDF, 372kB; restricted to repository staff only)
Item Type: Conference or Workshop contribution (Paper)
Item Status: Live Archive

Abstract

In everyday conversation, people communicate not only through speech but also by means of nonverbal cues. Facial expressions are one important cue, as they can provide useful information about the conversation, for instance, whether the interlocutor seems to understand or appears to be puzzled. Similarly, in human-robot interaction, facial expressions give feedback about the interaction situation. We present a Wizard of Oz user study in an object-teaching scenario in which subjects showed several objects to a robot and taught it the objects' names. Afterward, the robot was expected to name the objects correctly. In a first evaluation, we let other people watch short video sequences from this study. By looking at the face of the human, they decided whether the robot's answer was correct (unproblematic situation) or incorrect (problematic situation). We conducted the experiments under varying conditions, manipulating the amount of temporal and visual context information, and compared the results with related experiments described in the literature.

Keywords: Robotics, Human-robot interaction, image sequences, robot vision, Context, Face, Feedback, Human robot interaction, Robot control, Robot sensing systems, Robotics and automation, Speech, Video sequences, Watches, Wizard of Oz, facial expressions, feedback interpretation, object-teaching scenario
Subjects: H Engineering > H670 Robotics and Cybernetics
Divisions: College of Science > School of Computer Science
ID Code: 6919
Deposited On: 08 Jan 2013 16:03