Explaining Why the Computer Says No: Algorithmic Transparency Affects the Perceived Trustworthiness of Automated Decision-Making

Research output: Contribution to journal › Article › Academic › Peer-review

Abstract

Algorithms based on Artificial Intelligence technologies are slowly transforming street-level bureaucracies, yet a lack of algorithmic transparency may jeopardize citizen trust. Based on procedural fairness theory, this article hypothesizes that two core elements of algorithmic transparency (accessibility and explainability) are crucial to strengthening the perceived trustworthiness of street-level decision-making. This is tested in one experimental scenario with low discretion (a denied visa application) and one scenario with high discretion (a suspicion of welfare fraud). The results show that: (1) explainability has a more pronounced effect on trust than the accessibility of the algorithm; (2) the effect of algorithmic transparency not only pertains to trust in the algorithm itself but also—partially—to trust in the human decision-maker; (3) the effects of algorithmic transparency are not robust across decision contexts. These findings imply that transparency-as-accessibility is insufficient to foster citizen trust. Algorithmic explainability must be addressed to maintain and foster the trustworthiness of algorithmic decision-making.

Original language: English
Pages (from-to): 241-262
Number of pages: 22
Journal: Public Administration Review
Volume: 83
Issue number: 2
DOIs
Publication status: Published - 1 Mar 2023

Bibliographical note

Funding Information:
Netherlands Organization for Scientific Research (NWO), Grant/Award Number: VENI‐451‐15‐024

Funding Information:
I thank the following people for their comments on a previous version of the paper and/or feedback on the experimental design: Mark Bovens, Albert Meijer, Floris Bex, Lars Tummers, Robin Bouwman and Marij Swinkels. I also thank three anonymous reviewers for their constructive feedback on the manuscript. Data collection was funded by a research grant from the Netherlands Organization for Scientific Research (NWO) under grant number VENI‐451‐15‐024.

Publisher Copyright:
© 2022 The Author. Public Administration Review published by Wiley Periodicals LLC on behalf of American Society for Public Administration.

Keywords

  • Artificial intelligence
  • Procedural justice
  • Black box
  • Government
  • Discretion
  • Big
