Abstract
Recent advances in interpretable Machine Learning (iML) and eXplainable AI (XAI) construct explanations based on the importance of features in classification tasks. However, in a high-dimensional feature space this approach may become infeasible without restricting the set of important features. We propose to exploit the human tendency to ask questions like "Why this output (the fact) instead of that output (the foil)?" to reduce the number of features to those that play a main role in the contrast in question. Our proposed method uses locally trained one-versus-all decision trees to identify the disjoint set of rules that causes the tree to classify data points as the foil and not as the fact. In this study we illustrate this approach on three benchmark classification tasks.
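To make the idea concrete, the following minimal Python sketch illustrates one way a local foil tree could be realized. It is not the authors' implementation: the function name `foil_tree_explanation`, the Gaussian local-sampling scheme, and the use of a scikit-learn decision tree are all assumptions layered on the abstract's description.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def foil_tree_explanation(predict_fn, x, foil_class, feature_std,
                          n_samples=2000, max_depth=4, seed=0):
    """Contrastive explanation sketch: train a one-versus-all decision
    tree ('foil' vs. 'not foil') on points sampled around x, then return
    the rules on a foil leaf's path that x currently violates."""
    rng = np.random.default_rng(seed)
    # Sample locally around x; per-feature Gaussian noise is an assumed
    # sampling strategy, not one prescribed by the abstract.
    Z = x + rng.normal(size=(n_samples, x.size)) * feature_std
    y = (predict_fn(Z) == foil_class).astype(int)   # 1 = foil, 0 = rest
    tree = DecisionTreeClassifier(max_depth=max_depth).fit(Z, y)

    t = tree.tree_
    foil_paths = []

    def walk(node, path):
        # Collect the split conditions leading to every leaf that
        # predicts the foil (class 1 in the binarized labels).
        if t.children_left[node] == -1:             # reached a leaf
            if tree.classes_[np.argmax(t.value[node])] == 1:
                foil_paths.append(path)
            return
        f, thr = t.feature[node], t.threshold[node]
        walk(t.children_left[node],  path + [(f, "<=", thr)])
        walk(t.children_right[node], path + [(f, ">",  thr)])

    walk(0, [])
    if not foil_paths:
        return []                                   # no local foil region found

    def violated(path):
        # Rules along the path that x does NOT satisfy: flipping exactly
        # these would move x into the foil leaf.
        return [(f, op, thr) for f, op, thr in path
                if (x[f] > thr) == (op == "<=")]

    # The smallest set of violated rules serves as the contrastive
    # explanation: "you'd get the foil if these conditions held".
    return min((violated(p) for p in foil_paths), key=len)
```

Here `x` is the instance whose fact class the model predicted, `predict_fn` wraps the black-box classifier, and `feature_std` sets the local sampling scale; the returned (feature index, operator, threshold) triples read as "had these conditions held, the model would have output the foil instead".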
| Field | Value |
|---|---|
| Original language | English |
| Publisher | arXiv |
| Number of pages | 7 |
| DOIs | |
| Publication status | Published - 19 Jun 2018 |
Bibliographical note
Presented at the 2018 ICML Workshop on Human Interpretability in Machine Learning (WHI 2018), Stockholm, Sweden.
Keywords
- stat.ML
- cs.AI
- cs.LG