The Ethics of Accident-Algorithms for Self-Driving Cars: an Applied Trolley Problem?

Sven Nyholm, Jilles Smids

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Self-driving cars hold out the promise of being safer than manually driven cars. Yet they cannot be 100 % safe. Collisions are sometimes unavoidable. So self-driving cars need to be programmed for how they should respond to scenarios where collisions are highly likely or unavoidable. The accident scenarios self-driving cars might face have recently been likened to the key examples and dilemmas associated with the trolley problem. In this article, we critically examine this tempting analogy. We identify three important ways in which the ethics of accident-algorithms for self-driving cars and the philosophy of the trolley problem differ from each other. These concern: (i) the basic decision-making situation faced by those who decide how self-driving cars should be programmed to deal with accidents; (ii) moral and legal responsibility; and (iii) decision-making in the face of risks and uncertainty. In discussing these three areas of disanalogy, we isolate and identify a number of basic issues and complexities that arise within the ethics of the programming of self-driving cars.
Original language: English
Pages (from-to): 1275-1289
Number of pages: 15
Journal: Ethical Theory and Moral Practice
Volume: 19
Issue number: 5
DOIs
Publication status: Published - 1 Nov 2016

Keywords

  • Decision-making
  • Moral and legal responsibility
  • Risks and uncertainty
  • Self-driving cars
  • The trolley problem
