Cuayahuitl, Heriberto, Couly, Guillaume and Olalainty, Clement (2016) Training an interactive humanoid robot using multimodal deep reinforcement learning. In: NIPS Workshop on Deep Reinforcement Learning, 9 December 2016, Barcelona, Spain.

Full content URL: http://arxiv.org/abs/1611.08666
PDF: 1611.08666v1.pdf (Whole Document, 1MB)
Item Type: Conference or Workshop contribution (Paper)
Item Status: Live Archive
Abstract
Training robots to perceive, act and communicate using multiple modalities still represents a challenging problem, particularly if robots are expected to learn efficiently from small sets of example interactions. We describe a learning approach as a step in this direction, where we teach a humanoid robot how to play the game of noughts and crosses. Given that multiple multimodal skills can be trained to play this game, we focus our attention on training the robot to perceive the game and to interact in it. Our multimodal deep reinforcement learning agent perceives multimodal features and exhibits verbal and non-verbal actions while playing. Experimental results using simulations show that the robot can learn to win or draw up to 98% of the games. A pilot test of the proposed multimodal system for the targeted game, integrating speech, vision and gestures, reports that reasonable and fluent interactions can be achieved using the proposed approach.
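The paper's agent is a multimodal deep Q-network operating over visual and verbal features; the abstract describes only the high-level learning loop. As a much-simplified illustration of that loop, the sketch below trains a tabular Q-learning agent (not the authors' deep network) to play noughts and crosses against a random opponent, and reports its win-or-draw rate. All function names, the reward scheme, and the hyperparameters here are illustrative assumptions, not details from the paper.

```python
# Minimal sketch: tabular Q-learning for noughts and crosses against a
# random opponent. A stand-in for the paper's multimodal deep Q-network;
# reward values and hyperparameters below are assumed, not from the paper.
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == ' ']

Q = defaultdict(float)                   # Q[(state, action)] -> estimated return
ALPHA, GAMMA, EPSILON = 0.3, 0.9, 0.1    # assumed hyperparameters

def choose(board):
    """Epsilon-greedy action selection over the legal moves."""
    moves = legal_moves(board)
    if random.random() < EPSILON:
        return random.choice(moves)
    state = ''.join(board)
    return max(moves, key=lambda m: Q[(state, m)])

def update(state, action, reward, next_board, done):
    """One-step Q-learning backup; no bootstrapping past terminal states."""
    best_next = 0.0 if done else max(
        (Q[(''.join(next_board), m)] for m in legal_moves(next_board)),
        default=0.0)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def play_episode():
    """Agent plays X; the random opponent is folded into the environment."""
    board = [' '] * 9
    while True:
        state = ''.join(board)
        action = choose(board)
        board[action] = 'X'
        if winner(board) == 'X':
            update(state, action, 1.0, board, done=True); return 1
        if not legal_moves(board):                 # draw: assumed reward 0.5
            update(state, action, 0.5, board, done=True); return 0
        board[random.choice(legal_moves(board))] = 'O'
        if winner(board) == 'O':
            update(state, action, -1.0, board, done=True); return -1
        update(state, action, 0.0, board, done=False)

if __name__ == '__main__':
    results = [play_episode() for _ in range(50000)]
    recent = results[-1000:]
    print(f'win-or-draw rate, last 1000 games: {sum(r >= 0 for r in recent) / 1000:.1%}')
```

Against a uniformly random opponent this simplified agent typically converges to a high win-or-draw rate; the paper's reported figure of up to 98% refers to its own deep multimodal agent under its own evaluation protocol, not to this sketch.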