Contrastive Explanations with Local Foil Trees

  • Jasper van der Waa*
  • Marcel Robeer*
  • Jurriaan van Diggelen
  • Matthieu Brinkhuis
  • Mark Neerincx

*Corresponding author for this work

Research output: Working paper › Preprint › Academic

Abstract

Recent advances in interpretable Machine Learning (iML) and eXplainable AI (XAI) construct explanations based on the importance of features in classification tasks. However, in a high-dimensional feature space this approach may become infeasible without restricting the set of important features. We propose to utilize the human tendency to ask questions like "Why this output (the fact) instead of that output (the foil)?" to reduce the number of features to those that play a main role in the requested contrast. Our proposed method utilizes locally trained one-versus-all decision trees to identify the disjoint set of rules that causes the tree to classify data points as the foil and not as the fact. In this study we illustrate this approach on three benchmark classification tasks.
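The core idea in the abstract can be sketched in a few steps: sample points around the instance to explain, label them one-versus-all (foil vs. not-foil) using the black-box model, fit a shallow local decision tree on those labels, and read off the threshold rules that separate the fact from the foil. The sketch below is an illustrative simplification, not the authors' implementation: it reports the rules along the explained point's own decision path rather than the full fact-leaf/foil-leaf contrast, and the dataset, sampling scale, and foil choice are assumptions for the example.

```python
# Illustrative sketch of a local foil tree (not the authors' code).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x = X[80]                          # data point to explain
fact = black_box.predict([x])[0]   # the model's actual output
foil = 2                           # assumed foil class chosen by the user

# Sample points around x and label them one-versus-all: foil vs. not-foil.
rng = np.random.default_rng(0)
local_X = x + rng.normal(scale=X.std(axis=0) * 0.5, size=(500, X.shape[1]))
local_y = (black_box.predict(local_X) == foil).astype(int)

# Fit a shallow, interpretable decision tree on the local foil-vs-rest labels.
foil_tree = DecisionTreeClassifier(max_depth=3, random_state=0)
foil_tree.fit(local_X, local_y)

# Walk x's decision path and collect the threshold rules it satisfies;
# these are the local conditions that keep x on the fact side of the foil.
tree, node, rules = foil_tree.tree_, 0, []
while tree.children_left[node] != -1:  # -1 marks a leaf in sklearn trees
    f, t = tree.feature[node], tree.threshold[node]
    if x[f] <= t:
        rules.append(f"feature {f} <= {t:.2f}")
        node = tree.children_left[node]
    else:
        rules.append(f"feature {f} > {t:.2f}")
        node = tree.children_right[node]

print(f"Fact: class {fact}; foil: class {foil}")
print("Local rules along x's path:", rules)
```

Because the surrogate tree is trained only on samples near x, its rules are a local explanation; the contrast between the fact and foil leaves then yields the small rule set the paper targets.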
Original language: English
Publisher: arXiv
Number of pages: 7
DOIs
Publication status: Published - 19 Jun 2018

Bibliographical note

Presented at the 2018 ICML Workshop on Human Interpretability in Machine Learning (WHI 2018), Stockholm, Sweden.

Keywords

  • stat.ML
  • cs.AI
  • cs.LG
