Argumentative explanations for interactive recommendations

Antonio Rago*, Oana Cocarascu, Christos Bechlivanidis, David Lagnado, Francesca Toni

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

33 Citations (Scopus)

Abstract

A significant challenge for recommender systems (RSs), and in fact for AI systems in general, is the systematic definition of explanations for outputs in such a way that both the explanations and the systems themselves are able to adapt to their human users' needs. In this paper we propose an RS hosting a vast repertoire of explanations, which are customisable to users in their content and format, and thus able to adapt to users' explanatory requirements, while being reasonably effective (proven empirically). Our RS is built on a graphical chassis, allowing the extraction of argumentation scaffolding, from which diverse and varied argumentative explanations for recommendations can be obtained. These recommendations are interactive because they can be questioned by users and they support adaptive feedback mechanisms designed to allow the RS to self-improve (proven theoretically). Finally, we undertake user studies in which we vary the characteristics of the argumentative explanations, showing users' general preferences for more information, but also that their tastes are diverse, thus highlighting the need for our adaptable RS.

Original language: English
Article number: 103506
Journal: Artificial Intelligence
Volume: 296
Publication status: Published - Jul 2021

Keywords

  • Argumentation
  • Explanation
  • Recommender systems
  • User evaluation
  • User interaction
