Student thesis: Doctoral Thesis, Doctor of Philosophy


This thesis is about machine deception. It is the first full computational treatment in Artificial Intelligence (AI) of how to create machines that are able to deceive. The dissertation also discusses the limited related research on deception that exists in AI, Philosophy, and Psychology.
This thesis tackles the problem of machine deception from two different directions. The main direction is the cognitive modelling perspective of agents in Multi-Agent Systems (MAS). Working from this perspective has enabled the engineering and formal modelling of reasoning mechanisms that artificial agents could use to deceive and to reason about other minds, in a similar fashion to how humans perform these tasks. The other direction is an evolutionary perspective on agent behaviour in MAS. Working from this second direction shows how deception can destabilise cooperation in hybrid societies, where humans and machines interact socially through the exchange of knowledge; it also shows how cooperation can be re-established when the right mechanisms for social interaction are in place.
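The destabilising effect described above, and its reversal under a sanctioning mechanism, can be illustrated with a toy replicator-dynamics sketch of a public goods game of knowledge sharing. This is an illustrative simplification, not the thesis's actual model: the multiplier `m`, sharing cost, and the `detect`/`fine` sanction parameters are assumptions chosen for the example.

```python
def evolve(x, rounds, m=3.0, cost=1.0, detect=0.0, fine=0.0, step=0.1):
    """Replicator-style dynamics for a public goods game of knowledge sharing.

    x       -- initial fraction of honest knowledge sharers
    m       -- per-capita return on the shared pool of knowledge
    cost    -- private cost of contributing honest knowledge
    detect, fine -- a sanctioning mechanism: deceivers pay an expected
                    penalty of detect * fine (hypothetical parameters)
    """
    for _ in range(rounds):
        payoff_honest = m * x - cost              # pool share minus sharing cost
        payoff_deceiver = m * x - detect * fine   # free-rides on the pool
        # honest strategy spreads in proportion to its payoff advantage
        x += step * x * (1 - x) * (payoff_honest - payoff_deceiver)
        x = min(max(x, 0.0), 1.0)
    return x

no_sanction = evolve(0.6, 200)                          # deception takes over
with_sanction = evolve(0.6, 200, detect=0.8, fine=2.0)  # cooperation recovers
```

Without sanctions, deceivers always earn more than honest sharers, so the honest fraction decays towards zero; once the expected penalty exceeds the sharing cost, the sign flips and cooperation is re-established.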
This thesis presents six contributions to the field of AI:
  1. A conceptual grounding of computational deception;
  2. A novel approach to model and implement practical-reasoning artificial agents with the capability to model and reason about the minds of other agents in communication;
  3. A novel, formal approach to model and engineer deceptive artificial agents in MAS, grounded in three major theories of deceptive communication;
  4. A detailed, step-by-step description of the implementation of the models described by this formal approach in the Jason agent-oriented programming language;
  5. A novel approach to model and evaluate deception in evolutionary public goods games of knowledge sharing between agents of hybrid societies;
  6. The proposal of an MAS framework for deception to be used in Intelligence Analysis.
This thesis leads to three main future research directions: the refinement of the models presented in the thesis, the creation of MAS tools for deception analysis, and, finally, the creation of a machine worth talking to.
Date of Award: 1 May 2021
Original language: English
Awarding Institution: King's College London
Supervisors: Peter McBurney (Supervisor) & Simon Parsons (Supervisor)