Harman, Helen and Simoens, Pieter (2020) Learning Symbolic Action Definitions from Unlabelled Image Pairs. In: The 4th International Conference on Advances in Artificial Intelligence.
Full content URL: https://doi.org/10.1145/3441417.3441419
Full text not available from this repository.
Item Type: Conference or Workshop contribution (Paper)
Item Status: Live Archive
Abstract
Task planners and goal recognisers often require symbolic models of an agent's behaviour. These models are usually developed manually, which can be a time-consuming and error-prone process. Our work therefore transforms unlabelled pairs of images, showing the state before and after an action has been executed, into reusable action definitions. Each action definition consists of a set of parameters, effects and preconditions. To evaluate these action definitions, states were generated and a task planner invoked. Problems with large state spaces were solved using the action definitions learnt from smaller state spaces. On average, the task plans contained 5.46 actions and planning took 0.06 seconds. Moreover, when 20% of transitions were missing, our approach generated the correct number of objects, action definitions and plans 70% of the time.
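The abstract describes each learned action definition as a set of parameters, preconditions and effects, i.e. a STRIPS-style action schema usable by a symbolic task planner. The sketch below is purely illustrative and not taken from the paper; the class name, field names and the example "move" action are assumptions meant only to show what such a structure might look like.

```python
from __future__ import annotations
from dataclasses import dataclass, field

# Illustrative sketch only: the paper's exact representation is not specified
# here, so these names and fields are assumptions based on the abstract's
# description (parameters, preconditions, effects).

@dataclass
class ActionDefinition:
    name: str                                                # e.g. "move"
    parameters: list[str] = field(default_factory=list)      # object variables, e.g. ["?obj", "?from", "?to"]
    preconditions: set[tuple] = field(default_factory=set)   # predicates that must hold before execution
    add_effects: set[tuple] = field(default_factory=set)     # predicates made true by the action
    del_effects: set[tuple] = field(default_factory=set)     # predicates made false by the action


# A hypothetical learned action, in the form a symbolic planner could reuse:
move = ActionDefinition(
    name="move",
    parameters=["?obj", "?from", "?to"],
    preconditions={("at", "?obj", "?from")},
    add_effects={("at", "?obj", "?to")},
    del_effects={("at", "?obj", "?from")},
)
```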
Keywords: task planning, modelling agents, representation learning
Subjects: G Mathematical and Computer Sciences > G700 Artificial Intelligence
Divisions: College of Science > Lincoln Institute for Agri-Food Technology
ID Code: 44709
Deposited On: 09 Jun 2021 12:52