King's College London

Research portal

Lies, Bullshit, and Deception in Agent-Oriented Programming Languages

Research output: Chapter in Book/Report/Conference proceeding › Conference paper

Alison R. Panisson, Stefan Sarkadi, Peter John McBurney, Simon Dominic Parsons, Rafael H. Bordini

Original language: English
Title of host publication: Proceedings of the 20th International Trust Workshop
Subtitle of host publication: co-located with AAMAS/IJCAI/ECAI/ICML (AAMAS/IJCAI/ECAI/ICML 2018)
Place of publication: Stockholm, Sweden, July 14, 2018
Publisher: CEUR-WS
Pages: 50-61
Number of pages: 12
Volume: 2154
Publication status: Published - 2018


Abstract

It is reasonable to assume that in the next few decades, intelligent
machines will become much more proficient at socialising. This implies
that the AI community will face the challenges of identifying,
understanding, and dealing with the different types of social behaviour
these intelligent machines could exhibit. Given these potential challenges,
in this paper we aim to model three of the most studied strategic social
behaviours that autonomous and malicious software agents could adopt:
the dishonest behaviours of lying, bullshitting, and deceiving, which
agents might exhibit by taking advantage of their own reasoning and
communicative capabilities. In contrast to other studies of dishonest
behaviour in autonomous agents, we use an agent-oriented programming
language to model dishonest agents' attitudes and to simulate social
interactions between agents. Through simulation, we are able to study
and propose mechanisms for identifying, and later dealing with, such
dishonest behaviours in software agents.
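The paper itself models these attitudes in an agent-oriented programming language; that code is not reproduced on this page. As a rough, language-agnostic illustration of how the three attitudes are usually distinguished in the literature (a liar asserts the opposite of what it believes; a bullshitter asserts without regard to its own beliefs; a deceiver chooses its assertion to induce a false belief in the hearer), the following Python sketch may help. All class and attribute names here are hypothetical, invented for illustration, and are not taken from the paper.

```python
import random

class Agent:
    """Minimal agent: a belief base plus an honest speech act."""
    def __init__(self, name, beliefs):
        self.name = name
        self.beliefs = beliefs  # dict: proposition -> believed truth value

    def assert_honestly(self, prop):
        # An honest agent communicates exactly what it believes.
        return (prop, self.beliefs[prop])

class Liar(Agent):
    def assert_about(self, prop):
        # Lying: asserting the opposite of what the agent itself believes.
        return (prop, not self.beliefs[prop])

class Bullshitter(Agent):
    def assert_about(self, prop):
        # Bullshitting: asserting with indifference to truth; the agent's
        # own beliefs play no role in what it says.
        return (prop, random.choice([True, False]))

class Deceiver(Agent):
    def assert_about(self, prop, hearer_beliefs):
        # Deceiving: choosing the assertion intended to leave the hearer
        # with a false belief about the proposition.
        target = not self.beliefs[prop]      # the false belief to induce
        if hearer_beliefs.get(prop) == target:
            return None                      # hearer already misled: stay silent
        return (prop, target)
```

The separation matters for detection: a liar's utterances correlate (negatively) with its belief base, a bullshitter's do not correlate at all, and a deceiver's depend on its model of the hearer, which is the kind of distinction the paper's simulations are designed to surface.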

