Abstract
Knowledge bases are employed in a variety of applications, from natural language processing to semantic web search; alas, in practice their usefulness is hurt by their incompleteness. Embedding models attain state-of-the-art accuracy in knowledge base completion, but their predictions are notoriously hard to interpret. In this paper, we adapt "pedagogical approaches" (from the literature on neural networks) so as to interpret embedding models by extracting weighted Horn rules from them. We show how pedagogical approaches must be adapted to handle the large-scale, relational aspects of knowledge bases, and we demonstrate their strengths and weaknesses experimentally.
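To make the pedagogical idea in the abstract concrete, here is a minimal illustrative sketch: treat a trained embedding model as a black-box teacher, label candidate triples with it, and weight candidate Horn rules by their confidence on those labels. Everything in it is an assumption for illustration only (the toy TransE-style scorer, the tiny entity set, the median cutoff, the confidence weighting); it is not the algorithm proposed in the paper.

```python
# Illustrative sketch of pedagogical rule extraction from a KB embedding
# model. All names and choices here are hypothetical, not the paper's method.
import itertools
import numpy as np

rng = np.random.default_rng(0)
entities = ["alice", "bob", "carol", "dave"]
relations = ["parentOf", "ancestorOf"]

# Toy TransE-style embeddings standing in for a trained model.
ent_vecs = {e: rng.normal(size=8) for e in entities}
rel_vecs = {r: rng.normal(size=8) for r in relations}

def score_triple(h, r, t):
    """Black-box oracle: higher score = more plausible triple."""
    return -np.linalg.norm(ent_vecs[h] + rel_vecs[r] - ent_vecs[t])

def predicted_facts(r):
    """Label every candidate entity pair with the embedding model (the
    'teacher'); here a pair counts as positive if it scores above the
    median for that relation (an arbitrary cutoff for the sketch)."""
    pairs = list(itertools.permutations(entities, 2))
    scores = {p: score_triple(p[0], r, p[1]) for p in pairs}
    cutoff = float(np.median(list(scores.values())))
    return {p for p, s in scores.items() if s > cutoff}

# Mine weighted rules of the form body(X, Y) => head(X, Y), weighting each
# rule by its confidence on the model's own predictions.
for body, head in itertools.permutations(relations, 2):
    b, h = predicted_facts(body), predicted_facts(head)
    if b:
        weight = len(b & h) / len(b)
        print(f"{body}(X,Y) => {head}(X,Y)  [weight={weight:.2f}]")
```

The point of the sketch is only the pedagogical pattern: the rules are learned from the model's outputs rather than from the knowledge base itself, so they describe what the embedding model believes.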
Original language | English |
---|---|
Publication status | Published - 14 Jul 2018 |
Event | 2018 ICML Workshop on Human Interpretability in Machine Learning - Stockholm, Sweden. Duration: 14 Jul 2018 → 14 Jul 2018. https://sites.google.com/view/whi2018/home |
Workshop
Workshop | 2018 ICML Workshop on Human Interpretability in Machine Learning |
---|---|
Abbreviated title | WHI |
Country/Territory | Sweden |
City | Stockholm |
Period | 14/07/18 → 14/07/18 |
Internet address | https://sites.google.com/view/whi2018/home |
Keywords
- Artificial Intelligence
- Machine Learning