Towards Explainable Recommender Systems for Illiterate Users

Igor Tchappi, Joris Hulstijn, Ephraim Sinyabe Pagou, Sukriti Bhattacharya, Amro Najjar

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Explainable AI (XAI) has emerged in recent years as a set of techniques for building systems that enable humans to understand the outcomes produced by artificial intelligence. Although the field has advanced over the past few years, most approaches focus on explanations meant for literate or even skilled end users, such as engineers and researchers. Few works in the literature address the needs of illiterate end users in XAI (illiterate-centered design). This paper proposes a generic model to extract the contents of explanations from a given explainable AI system and translate them into a representation format that illiterate end users can understand. The usefulness of the model is demonstrated with an application to a food recommender system.
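The record contains no code, and the paper is a two-page extended abstract. Purely as an illustration of the extract-and-translate pipeline the abstract describes, the sketch below maps feature attributions produced by some XAI method for a food recommender onto non-textual cues (pictograms and spoken audio clips) that an illiterate user could interpret. All names here (Explanation, PICTOGRAMS, the feature labels, the file names) are hypothetical and invented for illustration; they are not taken from the paper.

```python
from dataclasses import dataclass

# Hypothetical explanation payload: feature contributions from some
# XAI method (e.g., feature attribution) applied to a food recommender.
@dataclass
class Explanation:
    item: str
    contributions: dict[str, float]  # feature -> attribution score

# Hypothetical lookup tables mapping abstract features to non-textual
# cues (pictogram files and spoken audio clips); in a real system these
# mappings would be designed and validated with the target users.
PICTOGRAMS = {
    "low_price": "coin.png",
    "local_produce": "farm.png",
    "high_protein": "beans.png",
}
AUDIO_CLIPS = {
    "low_price": "low_price.mp3",
    "local_produce": "local_produce.mp3",
    "high_protein": "high_protein.mp3",
}

def translate(expl: Explanation, top_k: int = 2) -> list[dict]:
    """Keep the top-k contributing features and map each one to a
    pictogram plus a spoken-language audio clip."""
    ranked = sorted(expl.contributions.items(), key=lambda kv: -kv[1])
    cues = []
    for feature, score in ranked[:top_k]:
        cues.append({
            "feature": feature,
            "pictogram": PICTOGRAMS.get(feature, "generic.png"),
            "audio": AUDIO_CLIPS.get(feature, "generic.mp3"),
            "weight": round(score, 2),
        })
    return cues

# Example: why was millet porridge recommended?
expl = Explanation(
    item="millet porridge",
    contributions={"low_price": 0.6, "local_produce": 0.3, "high_protein": 0.1},
)
for cue in translate(expl):
    print(cue)
```

The design choice sketched here, ranking attributions and rendering only the top few as icons plus audio, is one plausible reading of "translate them into a representation format that illiterate end users can understand"; the paper's actual model may differ.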
Original language: English
Title of host publication: HAI '23: Proceedings of the 11th International Conference on Human-Agent Interaction
Publisher: Association for Computing Machinery
Pages: 415–416
ISBN (Print): 979-8-4007-0824-4
DOIs
Publication status: Published - 4 Dec 2023
