Abstract
Decision-making aided by Artificial Intelligence in high-stakes domains such as law enforcement must be informed and accountable. Thus, designing explainable artificial intelligence (XAI) for such settings is a key social concern. Yet, explanations are often misunderstood by end-users because they are overly technical or abstract. To address this, our study engaged with police employees in the Netherlands who are users of a text classifier. We found that, for them, usability and usefulness are of great importance in explanation design, whereas interpretability and understandability are less valued. Further, our work reports on how design elements included in machine learning model explanations are interpreted. Drawing from these insights, we contribute recommendations that guide XAI system designers to cater to the specific needs of specialized users in high-stakes domains, and we suggest design considerations for machine learning model explanations aimed at domain experts.
Original language | English |
---|---|
Title of host publication | Proceedings of the 2024 ACM Designing Interactive Systems Conference, DIS 2024 |
Editors | Anna Vallgårda, Li Jönsson, Jonas Fritsch, Sarah Fdili Alaoui, Christopher A. Le Dantec |
Publisher | Association for Computing Machinery |
Pages | 995-1009 |
Number of pages | 15 |
ISBN (Electronic) | 9798400705830 |
DOIs | |
Publication status | Published - Jul 2024 |
Publication series
Name | Proceedings of the 2024 ACM Designing Interactive Systems Conference, DIS 2024 |
---|---|
Bibliographical note
Publisher Copyright:© 2024 Copyright held by the owner/author(s).
Funding
This research was supported by multiple funding bodies, including the Netherlands Police and the Swedish Research Council (2022-03196). Paweł W. Woźniak is supported by an endowment from TU Wien.
Funders | Funder number |
---|---|
Netherlands Police | |
TU Wien | |
Swedish Research Council | 2022-03196 |