Abstract
As emotions play a central role in human communication, automatic emotion recognition has attracted increasing attention over the past two decades. While multimodal systems achieve high performance on lab-controlled data, they are still far from providing ecological validity on non-lab-controlled, so-called “in-the-wild”, data. This work investigates audiovisual deep learning approaches to emotion recognition in the wild. Inspired by the outstanding performance of end-to-end and transfer learning techniques, we explored the effectiveness of architectures in which a modality-specific Convolutional Neural Network (CNN) is followed by a Long Short-Term Memory Recurrent Neural Network (LSTM-RNN), using the Aff-Wild2 dataset under the Affective Behavior Analysis in-the-Wild (ABAW) challenge protocol. We deployed unimodal end-to-end and transfer learning approaches within a multimodal fusion system, which generated final predictions using a weighted score fusion scheme. With the proposed deep-learning-based multimodal system, we reached a test set challenge performance measure of 48.1% on the ABAW 2020 Facial Expressions challenge, surpassing the first runner-up's performance.
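To make the architecture family concrete, the following is a minimal PyTorch sketch, not the authors' released code, of a modality-specific CNN followed by an LSTM-RNN, with final predictions obtained by weighted score fusion of the unimodal outputs. All layer sizes, the class names `CnnLstmEmotionNet` and `weighted_score_fusion`, the class count, and the fusion weights are illustrative assumptions.

```python
# Sketch only: a modality-specific CNN -> LSTM branch and score-level
# fusion, assuming the general scheme described in the abstract.
import torch
import torch.nn as nn


class CnnLstmEmotionNet(nn.Module):
    """One modality branch: per-frame CNN features -> LSTM -> class logits."""

    def __init__(self, in_channels: int, num_classes: int = 7, hidden: int = 128):
        super().__init__()
        # Small stand-in CNN; in practice a pretrained backbone
        # (transfer learning) would replace this block.
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (B*T, 32, 1, 1)
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels, height, width)
        b, t, c, h, w = x.shape
        feats = self.cnn(x.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.lstm(feats)      # temporal modeling over the clip
        return self.head(out[:, -1])   # logits from the last time step


def weighted_score_fusion(scores, weights):
    """Combine per-modality class scores with scalar weights (assumed scheme)."""
    probs = [w * torch.softmax(s, dim=-1) for s, w in zip(scores, weights)]
    return torch.stack(probs).sum(dim=0)


# Toy usage: a video branch (3-channel face crops) and an audio branch
# (1-channel spectrogram patches), fused with hypothetical 0.6/0.4 weights.
video_net = CnnLstmEmotionNet(in_channels=3)
audio_net = CnnLstmEmotionNet(in_channels=1)
video = torch.randn(2, 8, 3, 64, 64)  # (batch, time, C, H, W)
audio = torch.randn(2, 8, 1, 64, 64)
fused = weighted_score_fusion(
    [video_net(video), audio_net(audio)], weights=[0.6, 0.4]
)
pred = fused.argmax(dim=-1)  # predicted emotion class per sample
```

One appeal of score-level fusion is that each unimodal branch can be trained independently and the fusion weights tuned afterwards on validation data, rather than retraining a joint model.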
| Original language | English |
| --- | --- |
| Article number | 11 |
| Pages (from-to) | 1-23 |
| Number of pages | 23 |
| Journal | Multimodal Technologies and Interaction |
| Volume | 6 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - Feb 2022 |
Bibliographical note
Funding Information: This research was partially supported by the Russian Foundation for Basic Research (Project No. 19-29-09081), by the Council for Grants of the President of Russia (Grant No. NSH-17.2022.1.6), and by Russian state research (No. 0073-2019-0005).
Publisher Copyright:
© 2022 by the authors. Licensee MDPI, Basel, Switzerland.
Keywords
- Affective computing
- Deep learning architectures
- Emotion recognition
- Face processing
- Multimodal fusion
- Multimodal representations