Comparing Approaches for Explaining DNN-Based Facial Expression Classifications

Kaya ter Burg, Heysem Kaya

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Classifying facial expressions is a vital part of developing systems capable of aptly interacting with users. In this field, the use of deep-learning models has become the standard. However, the inner workings of these models are unintelligible, which is an important issue when deploying them in high-stakes environments. Recent efforts to generate explanations for emotion classification systems have focused on this type of model. In this work, we present an alternative way of explaining the decisions of a more conventional model based on geometric features. We develop a geometric-features-based deep neural network (DNN) and a convolutional neural network (CNN). Ensuring a sufficient level of predictive accuracy, we analyze explainability using both objective quantitative criteria and a user study. Results indicate that the fidelity and accuracy scores of the explanations approximate the DNN well. The user study makes clear that the explanations increase users' understanding of the DNN and that they are preferred over the more commonly used explanations for the CNN. All scripts used in the study are publicly available.
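The fidelity and accuracy criteria mentioned above can be illustrated with a minimal sketch. The function names and toy predictions below are hypothetical, assuming fidelity is measured as the agreement rate between the explanation (surrogate) model and the DNN it explains, and accuracy as the surrogate's agreement with the ground-truth expression labels:

```python
def fidelity(surrogate_preds, dnn_preds):
    """Fraction of samples where the explanation (surrogate) model
    predicts the same class as the DNN it explains."""
    assert len(surrogate_preds) == len(dnn_preds)
    return sum(s == d for s, d in zip(surrogate_preds, dnn_preds)) / len(dnn_preds)

def accuracy(surrogate_preds, true_labels):
    """Fraction of samples where the explanation model matches
    the ground-truth expression labels."""
    assert len(surrogate_preds) == len(true_labels)
    return sum(s == t for s, t in zip(surrogate_preds, true_labels)) / len(true_labels)

# Hypothetical class predictions over five samples
# (e.g. 0 = neutral, 1 = happy, 2 = surprised)
dnn_preds = [1, 0, 2, 1, 1]
surrogate_preds = [1, 0, 2, 0, 1]
true_labels = [1, 0, 2, 0, 0]

print(fidelity(surrogate_preds, dnn_preds))   # 0.8
print(accuracy(surrogate_preds, true_labels)) # 0.8
```

High values on both metrics indicate that the surrogate's explanations are faithful to the DNN's behavior while remaining predictive of the true expressions.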
Original language: English
Article number: 367
Pages (from-to): 1-22
Journal: Algorithms
Volume: 15
Issue number: 10
DOIs
Publication status: Published - Oct 2022

Bibliographical note

Publisher Copyright:
© 2022 by the authors.

Keywords

  • CNN explainability
  • DNN explainability
  • FER
  • emotion recognition
  • facial expression recognition
