King's College London Research Portal

Argumentation-based Dialogue Games for Modelling Deception

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

Original language: English
Title of host publication: Online Handbook of Argumentation for AI
Editors: Federico Castagna, Francesca Mosca, Jack Mumford, Stefan Sarkadi, Andreas Xydis
Pages: 38-42
Number of pages: 4
Volume: 1
Published: 2020


Abstract

Machines of the future might either be endowed with or might develop mechanisms to argue with other agents. We consider the contexts in which these types of machines also develop reasons to act dishonestly by attempting to deceive their interlocutors. Using the argumentation dialogue games approach, this work aims to explore how deceptive machines might be engineered in order to mitigate or neutralise their malicious behaviour. Argumentation dialogue games can be a powerful approach for the modelling of deception, given that they offer an explainable way of representing the components necessary for deception, such as the knowledge of the agents, their ability to perform actions (to communicate arguments), their ability to reason defeasibly about the world, and, most importantly, their ability to reason defeasibly about each other's minds. This paper presents three different hybrid agent-based models derived from argumentation that (i) have been successfully used and that (ii) can be used in future work to model machine deception.
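To make the abstract's core idea concrete, here is a minimal, illustrative sketch of a dialogue-game setting in which deception appears as a mismatch between an agent's private beliefs and its public commitment store. All names here (`Agent`, `Move`, `is_deceptive`) are assumptions for illustration only, not the models presented in the chapter.

```python
# Toy dialogue-game sketch: agents publicly commit to claims,
# while privately holding beliefs. A deceptive agent commits to
# a claim it does not believe. (Illustrative only.)

from dataclasses import dataclass, field


@dataclass
class Move:
    speaker: str
    kind: str      # e.g. "claim", "why", "concede"
    content: str


@dataclass
class Agent:
    name: str
    beliefs: set                                   # private knowledge
    commitments: set = field(default_factory=set)  # public commitment store

    def assert_claim(self, claim, honest=True):
        # An honest agent only commits to claims it believes;
        # a deceptive agent may bypass this check.
        if honest and claim not in self.beliefs:
            raise ValueError(f"{self.name} does not believe {claim!r}")
        self.commitments.add(claim)
        return Move(self.name, "claim", claim)


def is_deceptive(agent):
    # Deception surfaces as any public commitment that is not
    # among the agent's private beliefs.
    return any(c not in agent.beliefs for c in agent.commitments)


# Usage: agent B claims p while privately believing not_p.
honest_agent = Agent("A", beliefs={"p"})
deceiver = Agent("B", beliefs={"not_p"})
honest_agent.assert_claim("p")
deceiver.assert_claim("p", honest=False)
print(is_deceptive(honest_agent), is_deceptive(deceiver))  # False True
```

The design choice of separating a private belief set from a public commitment store mirrors the abstract's point that modelling deception requires representing both what agents know and what they communicate.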

