Modelling deception using theory of mind in multi-agent systems

Sarkadi, Ş., Panisson, A.R., Bordini, R.H., McBurney, P., Parsons, S. and Chapman, M. (2019) Modelling deception using theory of mind in multi-agent systems. AI Communications, 32 (4). pp. 287-302. ISSN 0921-7126

Full content URL: http://doi.org/10.3233/AIC-190615

Full text not available from this repository.

Item Type: Article
Item Status: Live Archive

Abstract

Agreement, cooperation and trust would be straightforward if deception never occurred in communicative interactions. Humans have deceived one another since the species began. Do machines deceive one another, or indeed humans? If they do, how can we detect this? Detecting machine deception arguably requires a model of how machines may deceive and of how such deception may be identified. Theory of Mind (ToM) offers the opportunity to create intelligent machines that are able to model the minds of other agents. The future implications of a machine that can understand other minds (human or artificial) and that also has the reasons and intentions to deceive others are dark from an ethical perspective. Being able to understand the dishonest and unethical behaviour of such machines is therefore crucial to current research in AI. In this paper, we present a high-level approach for modelling machine deception using ToM under factors of uncertainty, and we propose an implementation of this model in an Agent-Oriented Programming Language (AOPL). We show that the Multi-Agent Systems (MAS) paradigm can be used to integrate concepts from two major theories of deception, namely Information Manipulation Theory 2 (IMT2) and Interpersonal Deception Theory (IDT), and how these concepts can be applied to build a model of computational deception that takes ToM into account. To show how agents use ToM to deceive, we define an epistemic agent mechanism based on BDI-like architectures to analyse deceptive interactions between deceivers and their potential targets, and we explain the steps by which the model can be implemented in an AOPL. To the best of our knowledge, this work is one of the first attempts in AI that (i) uses ToM along with components of IMT2 and IDT to analyse deceptive interactions and (ii) implements such a model.
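The full text is not available here, but the mechanism the abstract describes can be illustrated in outline. The following is a hypothetical sketch, not the authors' AOPL implementation: a BDI-like deceiver keeps a first-order ToM model of its target's beliefs and trust, simulates a false assertion on that model, and only performs the deceptive act if the simulation predicts the target will adopt the false belief. All class and attribute names here are illustrative assumptions.

```python
# Hypothetical sketch of ToM-based deception in a BDI-like agent.
# Not the paper's implementation; names and adoption rules are assumed.

class Agent:
    def __init__(self, name, beliefs=None, trust=None):
        self.name = name
        self.beliefs = dict(beliefs or {})  # proposition -> truth value
        self.trust = dict(trust or {})      # speaker name -> degree in [0, 1]

    def receive(self, speaker, proposition, value):
        # Simplified belief-adoption rule: adopt the asserted belief only if
        # the speaker is trusted and no contradicting belief is already held.
        if self.trust.get(speaker.name, 0.0) > 0.5 and \
                self.beliefs.get(proposition, value) == value:
            self.beliefs[proposition] = value


class Deceiver(Agent):
    def __init__(self, name, beliefs=None, trust=None):
        super().__init__(name, beliefs, trust)
        self.tom = {}  # ToM: target name -> modelled Agent

    def model_target(self, target):
        # Store an (approximate) model of the target's mind.
        self.tom[target.name] = Agent(target.name,
                                      dict(target.beliefs),
                                      dict(target.trust))

    def try_deceive(self, target, proposition):
        # Desire: make the target believe the negation of what we believe.
        false_value = not self.beliefs[proposition]
        model = self.tom[target.name]
        model.receive(self, proposition, false_value)  # simulate on the model
        if model.beliefs.get(proposition) == false_value:
            target.receive(self, proposition, false_value)  # act for real
            return True
        return False  # predicted failure: withhold the deceptive assertion


alice = Deceiver("alice", beliefs={"door_locked": True})
bob = Agent("bob", trust={"alice": 0.9})
alice.model_target(bob)
print(alice.try_deceive(bob, "door_locked"))  # deception predicted to succeed
```

The key design point, in the spirit of the abstract, is that the deceptive act is selected by reasoning over a model of the target's mind rather than over the deceiver's own beliefs alone; a distrusting target would cause `try_deceive` to predict failure and abstain.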

Additional Information: Cited by 0
Divisions: College of Science
ID Code: 38401
Deposited On: 31 Oct 2019 15:38
