Cuayahuitl, Heriberto, Yu, Seunghak, Williamson, Ashley and Carse, Jacob (2017) Scaling up deep reinforcement learning for multi-domain dialogue systems. In: International Joint Conference on Neural Networks (IJCNN), 14 - 19 May 2017, Anchorage, Alaska.
Full content URL: https://doi.org/10.1109/IJCNN.2017.7966275
Documents: | PID4664349.pdf (PDF, Whole Document, 1MB) |
Item Type: | Conference or Workshop contribution (Paper) |
Item Status: | Live Archive |
Abstract
Standard deep reinforcement learning methods such as Deep Q-Networks (DQN) face scalability problems when applied to multiple tasks (domains) due to large search spaces. This paper proposes a three-stage method for multi-domain dialogue policy learning, termed NDQN, and applies it to an information-seeking spoken dialogue system in the restaurant and hotel domains. In this method, the first stage performs multi-policy learning via a network of DQN agents; the second makes use of compact state representations by compressing raw inputs; and the third applies a pre-training phase for bootstrapping the behaviour of the agents in the network. Simulation-based experimental results comparing DQN (baseline) with NDQN (proposed) show that the proposed method exhibits better scalability and is promising for optimising the behaviour of multi-domain dialogue systems. An additional evaluation reports that the NDQN agents outperformed a K-Nearest Neighbour baseline in task success and dialogue length, yielding more efficient and more successful dialogues.
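The first stage described above, multi-policy learning via a network of per-domain agents, might be organised as in the following minimal sketch. This is an illustrative assumption, not the authors' implementation: the class names, the tabular Q-function (standing in for the paper's neural Q-networks), and the explicit domain routing are all hypothetical.

```python
import random
from collections import defaultdict

class DomainAgent:
    """Toy stand-in for a per-domain DQN agent (hypothetical API).

    A real NDQN agent would learn a neural Q-network; a tabular
    Q-function keeps this sketch self-contained and runnable."""

    def __init__(self, actions, epsilon=0.1):
        self.q = defaultdict(float)  # (state, action) -> Q estimate
        self.actions = actions
        self.epsilon = epsilon

    def act(self, state):
        # Epsilon-greedy action selection over this domain's actions.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state,
               alpha=0.5, gamma=0.9):
        # One-step Q-learning update; a DQN would instead take a
        # gradient step on a sampled minibatch of transitions.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + gamma * best_next
        self.q[(state, action)] += alpha * (td_target - self.q[(state, action)])

class NetworkOfAgents:
    """Network of per-domain agents: each dialogue turn is routed to
    the agent responsible for the currently active domain."""

    def __init__(self, domains, actions):
        self.agents = {d: DomainAgent(actions) for d in domains}

    def act(self, domain, state):
        return self.agents[domain].act(state)

# Two domains as in the paper (restaurants and hotels); the dialogue
# actions here are invented placeholders.
net = NetworkOfAgents(["restaurants", "hotels"],
                      ["ask_area", "ask_price", "inform"])
action = net.act("restaurants", "greeting")
```

Splitting the policy per domain keeps each agent's search space small, which is the scalability argument the abstract makes against a single monolithic DQN.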
Keywords: | Deep Reinforcement Learning, Multi-Domain Dialogue Systems |
Subjects: | G Mathematical and Computer Sciences > G700 Artificial Intelligence; G Mathematical and Computer Sciences > G710 Speech and Natural Language Processing; G Mathematical and Computer Sciences > G730 Neural Computing |
Divisions: | College of Science > School of Computer Science |
ID Code: | 26622 |
Deposited On: | 06 Mar 2017 16:03 |