TY - CHAP
T1 - Respect as a Lens for the Design of AI Systems
AU - Seymour, William
AU - Van Kleek, Max
AU - Binns, Reuben
AU - Murray-Rust, Dave
N1 - Funding Information:
This work was part-funded by grant EP/T026723/1 from the Engineering and Physical Sciences Research Council.
Publisher Copyright:
© 2022 ACM.
PY - 2022/7/26
Y1 - 2022/7/26
N2 - Critical examinations of AI systems often apply principles such as fairness, justice, accountability, and safety, as reflected in AI regulations such as the EU AI Act. Are such principles sufficient to promote the design of systems that support human flourishing? Even if a system is in some sense fair, just, or 'safe', it can nonetheless be exploitative, coercive, inconvenient, or otherwise conflict with cultural, individual, or social values. This paper proposes a dimension of interactional ethics thus far overlooked: the ways AI systems should treat human beings. For this purpose, we explore the philosophical concept of respect: if respect is something everyone needs and deserves, shouldn't technology aim to be respectful? Despite its intuitive simplicity, respect in philosophy is a complex concept with many disparate senses. Like fairness or justice, respect can characterise how people deserve to be treated; but rather than relating primarily to the distribution of benefits or punishments, respect relates to how people regard one another, and how this translates to perception, treatment, and behaviour. We explore respect broadly across several literatures, synthesising Kantian, post-Kantian, dramaturgical, and agential realist design perspectives on respect, with the goal of drawing together a view of what respect could mean for AI. In so doing, we identify ways that respect may guide us towards more sociable artefacts that ethically and inclusively honour and recognise humans using the rich social language that we have evolved to interact with one another every day.
AB - Critical examinations of AI systems often apply principles such as fairness, justice, accountability, and safety, as reflected in AI regulations such as the EU AI Act. Are such principles sufficient to promote the design of systems that support human flourishing? Even if a system is in some sense fair, just, or 'safe', it can nonetheless be exploitative, coercive, inconvenient, or otherwise conflict with cultural, individual, or social values. This paper proposes a dimension of interactional ethics thus far overlooked: the ways AI systems should treat human beings. For this purpose, we explore the philosophical concept of respect: if respect is something everyone needs and deserves, shouldn't technology aim to be respectful? Despite its intuitive simplicity, respect in philosophy is a complex concept with many disparate senses. Like fairness or justice, respect can characterise how people deserve to be treated; but rather than relating primarily to the distribution of benefits or punishments, respect relates to how people regard one another, and how this translates to perception, treatment, and behaviour. We explore respect broadly across several literatures, synthesising Kantian, post-Kantian, dramaturgical, and agential realist design perspectives on respect, with the goal of drawing together a view of what respect could mean for AI. In so doing, we identify ways that respect may guide us towards more sociable artefacts that ethically and inclusively honour and recognise humans using the rich social language that we have evolved to interact with one another every day.
UR - http://www.scopus.com/inward/record.url?scp=85137156909&partnerID=8YFLogxK
U2 - 10.1145/3514094.3534186
DO - 10.1145/3514094.3534186
M3 - Conference paper
T3 - AIES 2022 - Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society
SP - 641
EP - 652
BT - AIES 2022 - Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society
PB - ACM
ER -