Training an interactive humanoid robot using multimodal deep reinforcement learning

Cuayahuitl, Heriberto, Couly, Guillaume and Olalainty, Clement (2016) Training an interactive humanoid robot using multimodal deep reinforcement learning. In: NIPS Workshop on Deep Reinforcement Learning, 9 December 2016, Barcelona, Spain.

Full content URL: 1611.08666v1.pdf - Whole Document

Item Type: Conference or Workshop contribution (Paper)
Item Status: Live Archive


Training robots to perceive, act and communicate using multiple modalities remains a challenging problem, particularly if robots are expected to learn efficiently from small sets of example interactions. We describe a learning approach as a step in this direction, in which we teach a humanoid robot to play the game of noughts and crosses. Since multiple multimodal skills can be trained for this game, we focus our attention on training the robot to perceive the game and to interact within it. Our multimodal deep reinforcement learning agent perceives multimodal features and exhibits verbal and non-verbal actions while playing. Experimental results using simulations show that the robot can learn to win or draw up to 98% of the games. A pilot test of the proposed multimodal system for the targeted game---integrating speech, vision and gestures---suggests that reasonable and fluent interactions can be achieved with the proposed approach.
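To give a concrete sense of the reinforcement-learning side of the task (learning to win or draw at noughts and crosses), here is an illustrative sketch in Python. Note this is *not* the authors' system: the paper trains a multimodal *deep* RL agent over speech, vision and gesture features, whereas this toy uses plain tabular Q-learning on raw board states against a random opponent, purely to show the kind of reward-driven game learning involved. All function names and hyperparameters below are assumptions for the sketch.

```python
import random
from collections import defaultdict

# Winning lines on a 3x3 board indexed 0..8.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WINS:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    """Indices of empty cells."""
    return [i for i, cell in enumerate(board) if cell == ' ']

def train(episodes=20000, alpha=0.3, gamma=0.9, epsilon=0.2, seed=0):
    """Train X by epsilon-greedy Q-learning against a random O player."""
    rng = random.Random(seed)
    Q = defaultdict(float)  # (state string, action index) -> value
    for _ in range(episodes):
        board = [' '] * 9
        while True:
            state = ''.join(board)
            legal = moves(board)
            if rng.random() < epsilon:
                action = rng.choice(legal)
            else:
                action = max(legal, key=lambda m: Q[(state, m)])
            board[action] = 'X'
            done = winner(board) is not None or not moves(board)
            if not done:
                board[rng.choice(moves(board))] = 'O'  # random opponent
                done = winner(board) is not None or not moves(board)
            if done:
                # Terminal reward from X's perspective: win +1, loss -1, draw 0.
                w = winner(board)
                reward = 1.0 if w == 'X' else (-1.0 if w == 'O' else 0.0)
                Q[(state, action)] += alpha * (reward - Q[(state, action)])
                break
            # Non-terminal step: zero reward, bootstrap from best next action.
            nxt = ''.join(board)
            best_next = max(Q[(nxt, m)] for m in moves(board))
            Q[(state, action)] += alpha * (gamma * best_next - Q[(state, action)])
    return Q

def evaluate(Q, games=1000, seed=1):
    """Fraction of games the greedy policy wins or draws versus a random O."""
    rng = random.Random(seed)
    not_lost = 0
    for _ in range(games):
        board = [' '] * 9
        while winner(board) is None and moves(board):
            state = ''.join(board)
            board[max(moves(board), key=lambda m: Q[(state, m)])] = 'X'
            if winner(board) is None and moves(board):
                board[rng.choice(moves(board))] = 'O'
        if winner(board) != 'O':
            not_lost += 1
    return not_lost / games
```

Against a random opponent this tabular agent reaches a high win-or-draw rate after a few thousand episodes; the paper's deep agent must additionally learn the game state from multimodal perception rather than reading the board directly.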

Keywords: Deep Reinforcement Learning, Multimodal Human-Robot Interaction, JCOpen
Subjects: G Mathematical and Computer Sciences > G700 Artificial Intelligence
Divisions: College of Science > School of Computer Science
Related URLs:
ID Code: 25937
Deposited On: 02 Feb 2017 14:58
