Discovering the rationale of decisions: towards a method for aligning learning and reasoning

Cor Steging, Silja Renooij, Bart Verheij

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

In AI and law, systems designed for decision support should be explainable when used in the pursuit of justice. In order for these systems to be fair and responsible, they should make correct decisions and make them using a sound and transparent rationale. In this paper, we introduce a knowledge-driven method for model-agnostic rationale evaluation using dedicated test cases, similar to unit testing in professional software development. We apply this new quantitative human-in-the-loop method in a machine learning experiment aimed at extracting known knowledge structures from artificial datasets based on a real-life legal setting. We show that our method allows us to analyze the rationale of black-box machine learning systems by assessing which rationale elements are learned and which are not. Furthermore, we show that the rationale can be adjusted using tailor-made training data based on the results of the rationale evaluation.
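
To make the unit-testing analogy concrete, the sketch below illustrates the general idea of rationale evaluation with dedicated test cases: a black-box model is trained on data generated from a known decision rule, and each rationale element is then probed with cases in which all other conditions are held satisfied, so the outcome hinges on that single element. The toy domain (an age condition and an income condition), the model choice, and all function names are illustrative assumptions, not the paper's actual datasets or code.

```python
# Hedged sketch of unit-test-style rationale evaluation for a black-box
# classifier. The two-condition "benefit" rule below is a hypothetical
# stand-in for a known knowledge structure, not the paper's legal domain.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def ground_truth(age, income):
    # Known rationale: the decision is positive iff BOTH conditions hold.
    return (age >= 65) & (income < 5000)

# Train a black-box model on randomly sampled cases labeled by the rule.
X = np.column_stack([rng.integers(18, 100, 20000),    # age
                     rng.integers(0, 10000, 20000)])  # income
y = ground_truth(X[:, 0], X[:, 1]).astype(int)
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(24, 24), max_iter=300, random_state=0),
).fit(X, y)

def test_condition(vary, n=2000):
    # Dedicated test cases: fix every other condition to 'satisfied' so
    # the outcome depends only on the rationale element under test.
    age = rng.integers(18, 100, n) if vary == "age" else np.full(n, 70)
    income = rng.integers(0, 10000, n) if vary == "income" else np.full(n, 1000)
    X_test = np.column_stack([age, income])
    y_test = ground_truth(age, income).astype(int)
    # Agreement with the known rationale on this element's test cases.
    return (model.predict(X_test) == y_test).mean()

for cond in ("age", "income"):
    print(f"rationale element '{cond}': agreement {test_condition(cond):.2%}")
```

Low agreement on one element's dedicated cases would indicate that this part of the rationale was not learned; in the spirit of the abstract's final claim, such cases could then be added to the training data to adjust the model's rationale.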

Original language: English
Title of host publication: Proceedings of the 18th International Conference on Artificial Intelligence and Law, ICAIL 2021
Editors: A.Z. Wyner
Publisher: ACM Press
Pages: 235-239
Number of pages: 5
ISBN (Electronic): 9781450385268
DOIs:
Publication status: Published - 21 Jun 2021

Publication series

Name: Proceedings of the 18th International Conference on Artificial Intelligence and Law, ICAIL 2021

Keywords

  • explainable AI
  • learning knowledge from data
  • machine learning
  • responsible AI
