TY - JOUR
T1 - Argumentative explanations for interactive recommendations
AU - Rago, Antonio
AU - Cocarascu, Oana
AU - Bechlivanidis, Christos
AU - Lagnado, David
AU - Toni, Francesca
N1 - Funding Information:
This research was partially funded by the UK Human-Like Computing EPSRC Network of Excellence EP/R022291/1 . Rago, Cocarascu and Toni were also partially funded by the UK EPSRC project EP/P029558/1 ROAD2H , and Rago by an EPSRC Doctoral Prize Fellowship at Imperial College London, UK. The authors are grateful to all members of the Computational Logic and Argumentation group at Imperial College London for useful feedback on the set up for the user studies in Section 7 , and to Ben Wilkins and Jinfeng Zhong for pointing out some typographical errors in [54] , rectified here. Finally, we would like to thank the AI Journal's reviewers and the editor for their suggestions, which we believe led us to significantly improve this paper.
Publisher Copyright:
© 2021 Elsevier B.V.
PY - 2021/7
Y1 - 2021/7
N2 - A significant challenge for recommender systems (RSs), and indeed for AI systems in general, is the systematic definition of explanations for outputs such that both the explanations and the systems themselves can adapt to their human users' needs. In this paper we propose an RS hosting a vast repertoire of explanations, customisable to users in both content and format, and thus able to adapt to users' explanatory requirements, while being reasonably effective (proven empirically). Our RS is built on a graphical chassis, allowing the extraction of argumentation scaffolding from which diverse argumentative explanations for recommendations can be obtained. These recommendations are interactive in that users can question them, and they support adaptive feedback mechanisms designed to allow the RS to self-improve (proven theoretically). Finally, we undertake user studies in which we vary the characteristics of the argumentative explanations, showing users' general preference for more information, but also that their tastes are diverse, thus highlighting the need for our adaptable RS.
KW - Argumentation
KW - Explanation
KW - Recommender systems
KW - User evaluation
KW - User interaction
UR - http://www.scopus.com/inward/record.url?scp=85105699242&partnerID=8YFLogxK
U2 - 10.1016/j.artint.2021.103506
DO - 10.1016/j.artint.2021.103506
M3 - Article
AN - SCOPUS:85105699242
SN - 0004-3702
VL - 296
JO - Artificial Intelligence
JF - Artificial Intelligence
M1 - 103506
ER -