Abstract
Audio-driven 3D facial animation synthesis has been an active field of research with attention from both academia and industry. While there are promising results in this area, recent approaches largely focus on lip-sync and identity control, neglecting the role of emotions and emotion control in the generative process. This is mainly due to the lack of emotionally rich facial animation data and of algorithms that can synthesize speech animations with emotional expressions at the same time. In addition, the majority of the models are deterministic, meaning that given the same audio input, they produce the same output motion. We argue that emotions and non-determinism are crucial to generate diverse and emotionally rich facial animations. In this paper, we propose ProbTalk3D, a non-deterministic neural network approach for emotion-controllable speech-driven 3D facial animation synthesis using a two-stage VQ-VAE model and an emotionally rich facial animation dataset, 3DMEAD. We provide an extensive comparative analysis of our model against recent 3D facial animation synthesis approaches, evaluating the results objectively, qualitatively, and with a perceptual user study. We highlight several objective metrics that are more suitable for evaluating stochastic outputs and use both in-the-wild and ground truth data for subjective evaluation. To our knowledge, this is the first non-deterministic 3D facial animation synthesis method that incorporates a rich emotion dataset and emotion control with emotion labels and intensity levels. Our evaluation demonstrates that the proposed model achieves superior performance compared to state-of-the-art emotion-controlled, deterministic, and non-deterministic models. We recommend watching the supplementary video for quality judgement. The entire codebase is publicly available.
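The abstract names a two-stage VQ-VAE as the core of the model without spelling it out. The sketch below illustrates only the generic vector-quantization step that any VQ-VAE shares (nearest-codebook lookup, codebook/commitment losses, straight-through gradients), written in PyTorch for illustration. The class name, layer sizes, and loss weighting are assumptions for this example and are not taken from the ProbTalk3D codebase.

```python
import torch
import torch.nn as nn


class VectorQuantizer(nn.Module):
    """Nearest-neighbour codebook lookup with a straight-through estimator."""

    def __init__(self, num_codes: int = 256, code_dim: int = 128, beta: float = 0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)   # learnable discrete codes
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta                                     # commitment-loss weight

    def forward(self, z_e: torch.Tensor):
        # z_e: continuous encoder output, shape (batch, frames, code_dim)
        flat = z_e.reshape(-1, z_e.shape[-1])
        # squared Euclidean distance from every frame latent to every code
        dist = (flat.pow(2).sum(1, keepdim=True)
                - 2.0 * flat @ self.codebook.weight.t()
                + self.codebook.weight.pow(2).sum(1))
        indices = dist.argmin(dim=1)                         # nearest code per frame
        z_q = self.codebook(indices).view_as(z_e)            # quantized latents
        # standard VQ-VAE codebook + commitment losses
        vq_loss = ((z_q - z_e.detach()).pow(2).mean()
                   + self.beta * (z_e - z_q.detach()).pow(2).mean())
        # straight-through estimator: gradients flow from z_q back to z_e
        z_q = z_e + (z_q - z_e).detach()
        return z_q, indices.view(z_e.shape[:-1]), vq_loss


# Example: quantize a batch of 2 motion clips, 30 frames each, 128-dim latents.
if __name__ == "__main__":
    vq = VectorQuantizer()
    z_e = torch.randn(2, 30, 128)
    z_q, codes, loss = vq(z_e)
    print(z_q.shape, codes.shape, loss.item())
```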
Original language | English |
---|---|
Title of host publication | Proceedings, MIG 2024 - 17th ACM SIGGRAPH Conference on Motion, Interaction, and Games |
Editors | Stephen N. Spencer |
Publisher | Association for Computing Machinery |
ISBN (Electronic) | 9798400710902 |
DOIs | |
Publication status | Published - 21 Nov 2024 |
Event | 17th ACM SIGGRAPH Conference on Motion, Interaction, and Games, MIG 2024 - Arlington, United States. Duration: 21 Nov 2024 → 23 Nov 2024 |
Publication series
Name | Proceedings, MIG 2024 - 17th ACM SIGGRAPH Conference on Motion, Interaction, and Games |
---|---|
Conference
Conference | 17th ACM SIGGRAPH Conference on Motion, Interaction, and Games, MIG 2024 |
---|---|
Country/Territory | United States |
City | Arlington |
Period | 21/11/24 → 23/11/24 |
Bibliographical note
Publisher Copyright: © 2024 ACM.
Keywords
- deep learning
- emotion-controlled facial animation
- facial animation synthesis
- non-deterministic models
- virtual humans