ORLA: Learning Explainable Argumentation Models

Cándido Otero Moreira, Dennis Craandijk, Floris Bex

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

This paper presents ORLA (Online Reinforcement Learning Argumentation), a new approach for learning explainable symbolic argumentation models through direct exploration of the world. ORLA takes a set of expert arguments that promote some action in the world, and uses reinforcement learning to determine which of those arguments are the most effective for performing a task by maximizing a performance score. Thus, ORLA learns a preference ranking over the expert arguments such that the resulting value-based argumentation framework (VAF) can be used as a reasoning engine to select actions for performing the task. Although model-extraction methods exist that extract a VAF by mimicking the behavior of some non-symbolic model (e.g., a neural network), these extracted models are only approximations to their non-symbolic counterparts, which may result in both a performance loss and non-faithful explanations. Conversely, ORLA learns a VAF through direct interaction with the world (online learning), thus producing faithful explanations without sacrificing performance. This paper uses the Keepaway world as a case study and shows that models trained using ORLA not only perform better than those extracted from non-symbolic models but are also more robust. Moreover, ORLA is evaluated as a strategy discovery tool, finding a better solution than the expert strategy proposed by a related study.
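To make the abstract's core idea concrete, the following is a minimal, self-contained sketch (not the authors' implementation) of how a learned preference ranking over expert arguments can drive a value-based argumentation framework used as a reasoning engine. The arguments, attacks, and payoffs are hypothetical toy stand-ins for the Keepaway setting; the exhaustive search over rankings stands in for the reinforcement-learning loop that maximizes a performance score.

```python
import itertools

# Hypothetical expert arguments (each promotes an action) and the
# attack relation between them -- illustrative only.
ARGS = ["hold", "pass_near", "pass_far"]
ATTACKS = [("hold", "pass_near"), ("pass_near", "hold"),
           ("pass_far", "hold"), ("hold", "pass_far")]

def defeats(ranking):
    # In a VAF, an attack succeeds (defeats) unless the attacked
    # argument is strictly preferred to the attacker. Here a lower
    # index in `ranking` means more preferred.
    rank = {a: i for i, a in enumerate(ranking)}
    return [(a, b) for a, b in ATTACKS if rank[a] <= rank[b]]

def grounded(args, defeat):
    # Grounded semantics: iteratively accept arguments all of whose
    # attackers are rejected, and reject arguments attacked by an
    # accepted argument, until a fixed point is reached.
    accepted, rejected = set(), set()
    changed = True
    while changed:
        changed = False
        for x in args:
            if x in accepted or x in rejected:
                continue
            attackers = [a for a, b in defeat if b == x]
            if all(a in rejected for a in attackers):
                accepted.add(x); changed = True
            elif any(a in accepted for a in attackers):
                rejected.add(x); changed = True
    return accepted

def score(ranking):
    # Stand-in for the environment return (e.g. episode length in
    # Keepaway): a fixed toy payoff per accepted argument.
    payoff = {"hold": 1.0, "pass_near": 3.0, "pass_far": 2.0}
    return sum(payoff[a] for a in grounded(ARGS, defeats(ranking)))

# "Learning" stand-in: search preference rankings for the best score
# (ORLA would instead estimate this online via reinforcement learning).
best = max(itertools.permutations(ARGS), key=score)
```

Because the learned artifact is just a ranking over human-authored arguments, the accepted arguments under the resulting VAF directly explain each selected action, which is the faithfulness property the abstract contrasts with post-hoc model extraction.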
Original language: English
Title of host publication: Proceedings of the 20th International Conference on Principles of Knowledge Representation and Reasoning
Publisher: IJCAI Organization
Pages: 542-551
ISBN (Print): 978-1-956792-02-7
DOIs
Publication status: Published - Sept 2023

Keywords

  • Argumentation
  • Symbolic reinforcement learning
  • Explainable AI
  • Applications that combine KR with machine learning

