Abstract
Crashes involving self-driving cars at least superficially resemble trolley dilemmas. This article discusses what lessons machine ethicists working on the ethics of self-driving cars can
learn from trolleyology. The article proceeds by providing an account of the trolley problem
as a paradox and by distinguishing two types of solutions to the trolley problem. According
to an optimistic solution, our case intuitions about trolley dilemmas are responding to morally relevant differences. The pessimistic solution denies that this is the case. An optimistic
solution would yield first-order moral insights for the ethics of self-driving cars, but such a
solution is difficult to come by. More plausible is the pessimistic solution, and it teaches us a
methodological lesson. The lesson is that machine ethicists should discount case intuitions
and instead rely on intuitions and judgments at a higher level of generality.
Original language | English |
---|---|
Pages (from-to) | 70-87 |
Number of pages | 18 |
Journal | Utilitas |
Volume | 35 |
Issue number | 1 |
Early online date | 2022 |
DOIs | |
Publication status | Published - Mar 2023 |