Abstract
Police officers increasingly rely on AI recommendations in their decision-making processes. This chapter delves into the potential and challenges of AI in law enforcement, focusing on the role of Explainable AI (XAI) in fostering appropriate reliance, where "appropriate" means neither stubbornly rejecting nor complacently accepting AI recommendations. We discuss recent XAI research findings and their implications for law enforcement, contrasting the limitations of both generalist human-centred lab studies and application-grounded studies. We explore how XAI can impact the law enforcement environment, drawing on organisational and sociological literature and on empirical examples from our research at the Netherlands Police Lab for Artificial Intelligence. Our insights show that XAI is no silver bullet for appropriate reliance but, when designed well, it is one of the factors in the context of a decision that pushes or pulls a person towards appropriate reliance. Ideally, XAI provides considerations for deciding whether to follow, reject, or cross-check an AI's recommendation. We propose a contextual model to understand when and how XAI, along with other contextual factors, can facilitate appropriate reliance on AI recommendations. This model can help identify individual and task characteristics to assess which factors foster appropriate reliance on AI recommendations in specific organisational settings.
| Original language | English |
|---|---|
| Title of host publication | Public Governance and Emerging Technologies |
| Subtitle of host publication | Values, Trust, and Regulatory Compliance |
| Editors | Jurgen Goossens, Esther Keymolen, Antonia Stanojević |
| Place of Publication | Cham |
| Publisher | Springer |
| Pages | 61-82 |
| Number of pages | 22 |
| Edition | 1 |
| ISBN (Electronic) | 9783031847486 |
| ISBN (Print) | 9783031847479 |
| DOIs | |
| Publication status | Published - 27 Mar 2025 |
Bibliographical note
Publisher Copyright: © The Editor(s) (if applicable) and The Author(s) 2025.