Huttenrauch, Max, Sosic, Adrian and Neumann, Gerhard
(2018)
Exploiting Local Communication Protocols for Learning Complex Swarm Behaviors with Deep Reinforcement Learning.
In: International Conference on Swarm Intelligence (ANTS).
Item Type: Conference or Workshop contribution (Paper)
Item Status: Live Archive
Abstract
Swarm systems constitute a challenging problem for reinforcement learning (RL), as the algorithm needs to learn decentralized control policies that can cope with the limited local sensing and communication abilities of the agents. While it is often difficult to directly define the behavior of the agents, simple communication protocols can be defined more easily using prior knowledge about the given task. In this paper, we propose a number of simple communication protocols that can be exploited by deep reinforcement learning to find decentralized control policies in a multi-robot swarm environment. The protocols are based on histograms that encode the local neighborhood relations of the agents and can also transmit task-specific information, such as the shortest distance and direction to a desired target. In our framework, we use an adaptation of Trust Region Policy Optimization to learn complex collaborative tasks, such as formation building and establishing a communication link. We evaluate our findings in a simulated 2D-physics environment and compare the implications of the different communication protocols.
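The histogram idea from the abstract can be illustrated with a minimal sketch: distances to neighbors within a sensing radius are binned into a fixed-length histogram, yielding a permutation-invariant observation whose size does not depend on the (variable) number of neighbors. This is an illustrative reconstruction, not the authors' implementation; the function name, radius, and bin count are assumptions.

```python
import numpy as np

def neighborhood_histogram(positions, agent_idx, sensing_radius=1.0, n_bins=8):
    """Encode one agent's local neighborhood as a distance histogram.

    Hypothetical sketch of a histogram-based communication protocol:
    distances to all other agents within `sensing_radius` are binned
    into `n_bins` buckets, producing a fixed-size, permutation-invariant
    feature vector regardless of how many neighbors are in range.
    """
    deltas = positions - positions[agent_idx]        # offsets to all agents
    dists = np.linalg.norm(deltas, axis=1)           # Euclidean distances
    in_range = (dists > 0) & (dists <= sensing_radius)  # drop self and far agents
    hist, _ = np.histogram(dists[in_range], bins=n_bins,
                           range=(0.0, sensing_radius))
    return hist.astype(np.float32)

# Example: five agents in the plane; the last one is outside sensing range.
positions = np.array([[0.0, 0.0], [0.2, 0.0], [0.5, 0.5],
                      [0.9, 0.0], [2.0, 2.0]])
obs = neighborhood_histogram(positions, agent_idx=0)
```

Because the histogram discards neighbor identities and ordering, the same policy network can be applied to swarms of any size, which is what makes this observation model attractive for decentralized deep RL.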