
Decision-makers without Reasons: On the Moral and Normative Capacities of Artificial Agents

  • Joris Graff

Research output: Thesis, Doctoral thesis 1 (Research UU / Graduation UU)

Abstract

AI systems are taking over increasingly many tasks with morally relevant consequences across societal domains, and are used to assist human decision-makers in such tasks. However, it is not clear whether AI systems can be moral decision-makers - that is, respond properly to moral reasons. This dissertation offers a rigorous, philosophically informed analysis of the relation between AI systems - both in general and more specific types - and moral, and more generally practical, reasons.

Part I uses philosophical analysis to argue that AI systems are only to a very limited extent capable of responding to moral reasons. Chapter 1 asks whether AI systems can have moral reasons, and suggests that none of the main accounts of reasons can answer this question in a convincing way. It then suggests a more plausible, Wittgensteinian account which implies that AI systems that currently exist, or may exist in the foreseeable future, do not have moral reasons because they do not share in our 'form of life'. Chapter 2 asks whether AI systems, even if they do not have moral reasons, may still be 'functionally moral', i.e. respond to morally salient features of situations. It argues that we cannot trust AI systems to behave in a functionally moral way across a wide range of situations, since AI systems lack the uncodified skill to 'see' morally relevant features of situations. The chapter nevertheless concludes that functionally moral systems may be feasible in limited domains.

Part II develops 'numeric default logic' (NDL), a formal model of moral reasoning that may contribute to the development of such limited functionally moral systems. The model, presented in Chapter 3, is based on Horty's insight that (moral) reasons can be modelled as the premises of defaults, or defeasible inference rules. Unlike Horty's model, NDL assigns numeric values to both propositions and default rules, making it possible to aggregate reasons. Additionally, Chapter 3 shows how the numeric values in NDL can be used to model higher-order reasons. Chapter 4 exploits the similarity between NDL and artificial neural networks to sketch a procedure by which the numeric values of default rules could be learnt through backpropagation-based gradient descent.

Part III discusses the impact of AI systems on human reason-responsiveness. Chapter 5 engages with recent suggestions that AI-assisted decision-making can be improved when the outputs of AI systems are explained in terms of the systems' (motivating) reasons for those outputs. The chapter argues that on neither of the main accounts of motivating reasons is it possible to attribute reasons to AI systems, and that explaining such systems in terms of reasons is therefore misleading. Chapter 6 focuses on a specific type of algorithm, namely those used for curating the timelines of social media users. It argues against recent accounts which conceptualise the harms of such algorithms in terms of manipulation. An alternative account is offered on which social media algorithms harm users' personal autonomy, by making it harder for users to act for their own reasons.
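To give a rough feel for the two ideas described above - aggregating reasons by attaching numeric values to default rules, and tuning those values by gradient-style updates - the following is a minimal, hypothetical sketch. The rule format, the signed-sum aggregation scheme, and the crude weight update are illustrative assumptions on my part, not the thesis's actual NDL formalism or its backpropagation procedure.

```python
from dataclasses import dataclass

@dataclass
class Default:
    premise: str      # triggering proposition
    conclusion: str   # supported proposition, e.g. "help_friend"
    weight: float     # numeric strength of the reason (assumed scheme)

def net_support(defaults, facts, conclusion):
    """Sum the weights of triggered defaults for and against a conclusion.

    A default is 'triggered' when its premise is among the known facts.
    This signed-sum aggregation is an illustrative stand-in for NDL's
    actual mechanism.
    """
    total = 0.0
    for d in defaults:
        if d.premise in facts:
            if d.conclusion == conclusion:
                total += d.weight
            elif d.conclusion == "not_" + conclusion:
                total -= d.weight
    return total

# Two conflicting reasons bearing on whether to help a friend.
defaults = [
    Default("promised_help", "help_friend", 0.8),        # reason for
    Default("urgent_deadline", "not_help_friend", 0.5),  # reason against
]
facts = {"promised_help", "urgent_deadline"}

support = net_support(defaults, facts, "help_friend")
# The reason for helping outweighs the reason against (net support ~ 0.3).

# A toy, single-step weight update loosely echoing the idea of learning
# rule strengths from labelled verdicts (a crude error-driven nudge, not
# actual backpropagation):
target, lr = 1.0, 0.1   # labelled verdict: helping is the right call
error = target - support
for d in defaults:
    if d.premise in facts:
        sign = 1.0 if d.conclusion == "help_friend" else -1.0
        d.weight += lr * error * sign
```

After the update, the weight of the triggered reason for helping has grown and the weight of the reason against has shrunk, so the net support for "help_friend" moves toward the labelled verdict - the basic dynamic that a gradient-based learning procedure would repeat over many examples.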
Original language: English
Qualification: Doctor of Philosophy
Awarding Institution
  • Utrecht University
Supervisors/Advisors
  • Broersen, Jan, Supervisor
  • Klein, Dominik, Co-supervisor
Award date: 6 Mar 2026
Place of Publication: Utrecht
Publisher
Print ISBNs: 978-90-393-8036-9
DOIs
Publication status: Published - 6 Mar 2026

Keywords

  • artificial intelligence
  • moral reasons
  • practical reasoning
  • moral sensitivity
  • machine ethics
  • artificial moral agents
  • default logic
  • explainable AI
  • social media
  • personal autonomy
