Computational Accountability

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Automated decision-making systems make decisions that matter. Some human or legal person remains responsible: looking back, that person is accountable for the decisions the system has made, and may even be liable in case of damages. This puts constraints on the way decision-making systems are designed and on how they are deployed in organizations. In this paper, we analyze computational accountability in three steps. First, being accountable is analyzed as a relationship between the actor deploying the system and a critical forum of subjects, users, experts and developers. Second, we discuss system design. In principle, evidence must be collected about the decision rule and the case data that were applied. However, many AI algorithms are not interpretable by humans. Alternatively, internal controls must ensure that a system uses valid algorithms and reliable training data sets that are appropriate for the application domain. Third, we discuss the governance model: the roles, responsibilities, procedures and infrastructure needed to ensure effective operation of these controls. The paper ends with a case study in the IT audit domain to illustrate practical feasibility.
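To make the evidence-collection step concrete, the sketch below shows one way a deploying actor might log each automated decision together with the decision rule and the case data that were applied, so the record can later be produced to the accountability forum. This is a minimal illustration in Python under assumed names (AuditRecord, decide_and_log, the toy credit rule), none of which come from the paper itself.

```python
# Sketch of evidence collection for computational accountability:
# every decision is recorded with the rule identifier, the input case
# data, the outcome, a timestamp, and a hash for tamper-evidence.
# All identifiers here are illustrative, not from the paper.
import hashlib
import json
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    rule_id: str     # identifier and version of the decision rule applied
    case_data: dict  # the input case data the rule was applied to
    decision: str    # the outcome the system produced
    timestamp: str   # when the decision was made (UTC)
    digest: str      # hash binding rule, data and outcome together

def decide_and_log(rule_id: str, rule, case_data: dict, log: list) -> str:
    """Apply a decision rule and append an audit record as evidence."""
    decision = rule(case_data)
    body = json.dumps(
        {"rule_id": rule_id, "case_data": case_data, "decision": decision},
        sort_keys=True,
    )
    log.append(AuditRecord(
        rule_id=rule_id,
        case_data=case_data,
        decision=decision,
        timestamp=datetime.now(timezone.utc).isoformat(),
        digest=hashlib.sha256(body.encode()).hexdigest(),
    ))
    return decision

# Example: a toy credit rule; the log can later be handed to auditors.
log: list = []
outcome = decide_and_log(
    "credit_rule_v1",
    lambda case: "approve" if case["income"] > 3 * case["requested"] else "reject",
    {"income": 60000, "requested": 15000},
    log,
)
print(outcome, log[0].digest[:12])
```

For non-interpretable models, the same record structure could instead reference the model version and training data set used, in line with the internal-controls alternative discussed in the abstract.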
Original language: English
Title of host publication: ICAIL '23: Proceedings of the Nineteenth International Conference on Artificial Intelligence and Law
Publisher: Association for Computing Machinery
Pages: 121–130
ISBN (Print): 979-8-4007-0197-9
DOIs
Publication status: Published - 19 Jun 2023
