Metrics for Evaluating Explainable Recommender Systems

Joris Hulstijn, Igor Tchappi, Amro Najjar, Reyhan Aydoğan

Research output: Chapter in Book/Report/Conference proceeding (Chapter, Academic, peer-reviewed)

Abstract

Recommender systems aim to support their users by reducing information overload so that they can make better decisions. Recommender systems must be transparent, so users can form mental models about the system’s goals, internal state, and capabilities that are in line with their actual design. Explanations and transparent behaviour of the system should inspire trust and, ultimately, lead to more persuasive recommendations. Here, explanations convey reasons why a recommendation is given or how the system forms its recommendations. This paper focuses on the question of how such claims about the effectiveness of explanations can be evaluated. Accordingly, we investigate various models that are used to assess the effects of explanations and recommendations. We discuss objective and subjective measurement and argue that both are needed. We define a set of metrics for measuring the effectiveness of explanations and recommendations. The feasibility of using these metrics is discussed in the context of a specific explainable recommender system in the food and health domain.
Original language: English
Title of host publication: Explainable and Transparent AI and Multi-Agent Systems
Subtitle of host publication: 5th International Workshop, EXTRAAMAS 2023, London, UK, May 29, 2023, Revised Selected Papers
Place of publication: Cham
Publisher: Springer
Pages: 212–230
Edition: 1
ISBN (Electronic): 978-3-031-40878-6
ISBN (Print): 978-3-031-40877-9
DOIs
Publication status: Published - 5 Sept 2023

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer
Volume: 14127
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349
