Zero-Shot Audio-Visual Compound Expression Recognition Method based on Emotion Probability Fusion

  • Elena Ryumina
  • Maxim Markitantov
  • Dmitry Ryumin
  • Heysem Kaya
  • Alexey Karpov

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Compound Expression Recognition (CER), a subfield of affective computing, is a novel task in intelligent human-computer interaction and multimodal user interfaces. We propose a novel audio-visual method for CER. Our method relies on emotion recognition models that fuse modalities at the emotion probability level, while decisions regarding the prediction of compound expressions are based on the pair-wise sum of weighted emotion probability distributions. Notably, our method does not use any training data specific to the target task; thus, the problem is a zero-shot classification task. The method is evaluated in multi-corpus training and cross-corpus validation setups. We achieved F1 scores of 32.15% and 25.56% for the AffWild2 and C-EXPR-DB test subsets without training on the target corpus and target task, respectively. Therefore, our method is on par with methods trained on the target corpus or target task. The source code is publicly available at https://elenaryumina.github.io/AVCER.
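The fusion rule described above can be sketched as follows: per-modality emotion probability distributions are combined as a weighted sum, and each compound expression is then scored by the pair-wise sum of the probabilities of its two constituent emotions. This is a minimal illustrative sketch; the emotion set, compound pairs, and modality weights below are assumptions for demonstration, not the exact configuration from the paper.

```python
import numpy as np

# Hypothetical basic emotion set and compound pairs (illustrative only)
EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]
COMPOUNDS = {
    "fearfully_surprised": ("fear", "surprise"),
    "happily_surprised": ("happiness", "surprise"),
    "sadly_angry": ("sadness", "anger"),
}

def fuse_modalities(p_audio, p_video, w_audio=0.5, w_video=0.5):
    """Fuse audio and visual emotion probabilities as a weighted sum."""
    p = w_audio * np.asarray(p_audio) + w_video * np.asarray(p_video)
    return p / p.sum()  # renormalise to a valid distribution

def predict_compound(p_fused):
    """Score each compound expression as the pair-wise sum of the
    probabilities of its two constituent basic emotions."""
    idx = {e: i for i, e in enumerate(EMOTIONS)}
    scores = {c: p_fused[idx[a]] + p_fused[idx[b]]
              for c, (a, b) in COMPOUNDS.items()}
    return max(scores, key=scores.get), scores

# Example: fear and surprise dominate in both modalities
p_audio = [0.05, 0.05, 0.40, 0.10, 0.10, 0.30]
p_video = [0.05, 0.05, 0.30, 0.10, 0.20, 0.30]
label, scores = predict_compound(fuse_modalities(p_audio, p_video))
```

In this example the fused distribution assigns the largest mass to "fear" and "surprise", so the pair-wise sum selects the "fearfully_surprised" compound; the modality weights could in practice be tuned per model or corpus.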
Original language: English
Title of host publication: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops
Pages: 4752-4760
Number of pages: 9
Publication status: Published - 16 Jun 2024

Keywords

  • compound emotion recognition
  • Affective Computing
