Choosing appropriate arguments from trustworthy sources

Alison R. Panisson, Simon Parsons, Peter McBurney, Rafael H. Bordini

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

8 Citations (Scopus)


Recently, argumentation frameworks have been extended in order to consider trust when defining preferences between arguments, given that arguments (or information that supports the arguments) from more trustworthy sources may be preferred to arguments from less trustworthy sources. Although such literature presents interesting results on argumentation-based reasoning and how agents define preferences between arguments, there is little work taking into account agent strategies for argumentation-based dialogues using such information. In this work, we propose an argumentation framework in which agents consider how much the recipient of an argument trusts others in order to choose the most suitable argument for that particular recipient, i.e., arguments constructed using information from those sources that the recipient trusts. Our approach aims to allow agents to construct more effective arguments, depending on the recipients and on their views on the trustworthiness of potential sources.
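The core idea of the abstract — selecting, for a given recipient, the argument built from the sources that recipient trusts most — can be sketched as follows. This is an illustrative example, not the authors' formalism: the argument representation, the trust values, and the "weakest-link" strength function (an argument is only as strong as its least trusted supporting source) are all assumptions made for this sketch.

```python
# Hypothetical sketch of recipient-aware argument selection.
# An argument is represented as a dict with an id and the set of
# information sources supporting it; trust is a map from source
# name to the recipient's trust value in [0, 1].

def argument_strength(sources, trust):
    """Strength of an argument for a recipient: the minimum trust the
    recipient assigns to any supporting source (weakest-link assumption)."""
    return min(trust.get(s, 0.0) for s in sources)

def choose_argument(arguments, trust):
    """Pick the argument with the most trusted support for this recipient."""
    return max(arguments, key=lambda a: argument_strength(a["sources"], trust))

# Two candidate arguments for the same conclusion, from different sources.
arguments = [
    {"id": "A1", "sources": {"alice", "bob"}},
    {"id": "A2", "sources": {"carol"}},
]

# The recipient's (hypothetical) trust in the potential sources.
recipient_trust = {"alice": 0.9, "bob": 0.4, "carol": 0.7}

best = choose_argument(arguments, recipient_trust)
print(best["id"])  # A2: carol (0.7) beats A1's weakest source, bob (0.4)
```

Under this weakest-link choice, argument A2 is selected even though A1 includes a highly trusted source, because A1 also depends on a source the recipient distrusts; other aggregation functions (e.g. averaging trust over sources) would be equally plausible choices for such a sketch.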

Original language: English
Title of host publication: Computational Models of Argument - Proceedings of COMMA 2018
Publisher: IOS Press
Number of pages: 8
ISBN (Print): 9781614999058
Publication status: Published - 1 Jan 2018
Event: 7th International Conference on Computational Models of Argument, COMMA 2018 - Warsaw, Poland
Duration: 12 Sept 2018 - 14 Sept 2018

Publication series

Name: Frontiers in Artificial Intelligence and Applications
ISSN (Print): 0922-6389


Conference: 7th International Conference on Computational Models of Argument, COMMA 2018


Keywords

  • Argumentation
  • Multi-Agent Systems
  • Reputation
  • Trust


