Moral sensitivity and the limits of artificial moral agents

Joris Graff*

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Machine ethics is the field that strives to develop ‘artificial moral agents’ (AMAs), artificial systems that can autonomously make moral decisions. Some authors have questioned the feasibility of machine ethics by asking whether artificial systems can possess moral competence, that is, the capacity to reach morally right decisions across a variety of situations. This paper explores this question by drawing on the work of several moral philosophers (McDowell, Wiggins, Hampshire, and Nussbaum) who have characterised moral competence in a manner inspired by Aristotle. Although disparate in many ways, these philosophers all emphasise what may be called ‘moral sensitivity’ as a precondition for moral competence. Moral sensitivity is the uncodified, practical skill to recognise, across a range of situations, which features of a situation are morally relevant and how they are relevant. The paper argues that the main types of AMAs currently proposed are incapable of full moral sensitivity. First, top-down AMAs that proceed from fixed rule-sets are too rigid to respond appropriately to the wide range of qualitatively unique factors that moral sensitivity gives access to. Second, bottom-up AMAs that learn moral behaviour from examples risk generalising from these examples in undesirable ways, as they lack embedding in what Wittgenstein calls a ‘form of life’, which is what allows humans to learn appropriately from moral examples. The paper concludes that AMAs are unlikely to possess full moral competence, but closes by suggesting that they may still be feasible in restricted domains of public morality, where moral sensitivity plays a smaller role.
Original language: English
Article number: 13
Pages (from-to): 1-12
Number of pages: 12
Journal: Ethics and Information Technology
Volume: 26
Issue number: 1
DOIs
Publication status: Published - 24 Feb 2024

Keywords

  • Artificial moral agents
  • Machine ethics
  • Moral sensitivity
  • Uncodifiability
