Abstract
Self-driving cars hold out the promise of being much safer than regular cars. Yet they cannot be 100% safe. Accordingly, they need to be programmed for how to deal with crash scenarios. Should cars be programmed to always prioritize their owners, to minimize harm, or to respond to crashes on the basis of some other type of principle? The article first discusses whether everyone should have the same "ethics settings." Next, the oft-made analogy with the trolley problem is examined. Then follows an assessment of recent empirical work on laypeople's attitudes about crash algorithms relevant to the ethical issue of crash optimization. Finally, the article discusses what traditional ethical theories such as utilitarianism, Kantianism, virtue ethics, and contractualism imply about how cars should handle crash scenarios. The aim of the article is to provide an overview of the existing literature on these topics and to assess how far the discussion has progressed.
| Original language | English |
|---|---|
| Pages (from-to) | e12507 |
| Journal | Philosophy Compass |
| Volume | 13 |
| Issue number | 7 |
| DOIs | |
| Publication status | Published - Jul 2018 |