ProbTalk3D: Non-Deterministic Emotion Controllable Speech-Driven 3D Facial Animation Synthesis Using VQ-VAE

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Audio-driven 3D facial animation synthesis has been an active field of research with attention from both academia and industry. While there are promising results in this area, recent approaches largely focus on lip-sync and identity control, neglecting the role of emotions and emotion control in the generative process. This is mainly due to the lack of emotionally rich facial animation data and of algorithms that can synthesize speech animations with emotional expressions at the same time. In addition, the majority of models are deterministic, meaning that given the same audio input, they produce the same output motion. We argue that emotions and non-determinism are crucial to generating diverse and emotionally rich facial animations. In this paper, we propose ProbTalk3D, a non-deterministic neural network approach for emotion-controllable speech-driven 3D facial animation synthesis using a two-stage VQ-VAE model and an emotionally rich facial animation dataset, 3DMEAD. We provide an extensive comparative analysis of our model against recent 3D facial animation synthesis approaches, evaluating the results objectively, qualitatively, and with a perceptual user study. We highlight several objective metrics that are more suitable for evaluating stochastic outputs and use both in-the-wild and ground truth data for subjective evaluation. To our knowledge, this is the first non-deterministic 3D facial animation synthesis method incorporating a rich emotion dataset and emotion control with emotion labels and intensity levels. Our evaluation demonstrates that the proposed model achieves superior performance compared to state-of-the-art emotion-controlled, deterministic, and non-deterministic models. We recommend watching the supplementary video for quality judgement. The entire codebase is publicly available.
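The discrete bottleneck at the heart of a VQ-VAE is what makes a non-deterministic second stage possible: the encoder's continuous latents are snapped to their nearest entries in a learned codebook, and a generative model can later sample over those discrete indices rather than regress a single deterministic output. The sketch below is not the authors' implementation; it is a minimal, generic illustration of the vector-quantization step with toy codebook sizes, using NumPy.

```python
import numpy as np

def quantize(latents, codebook):
    """Snap each latent vector to its nearest codebook entry.

    latents:  (N, D) continuous encoder outputs
    codebook: (K, D) learned discrete embeddings
    Returns the quantized vectors and their codebook indices.
    """
    # Pairwise squared Euclidean distances, shape (N, K)
    d = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = d.argmin(axis=1)          # nearest code per latent
    return codebook[indices], indices

# Toy example (sizes are illustrative, not from the paper)
rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))      # K = 8 codes, D = 4 dims
latents = rng.normal(size=(3, 4))       # N = 3 encoder outputs
quantized, idx = quantize(latents, codebook)
```

Because the mapping from latent to index is a hard argmin, the same input always yields the same code; stochasticity in such pipelines typically comes from a second-stage model that samples sequences of these indices.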

Original language: English
Title of host publication: Proceedings, MIG 2024 - 17th ACM SIGGRAPH Conference on Motion, Interaction, and Games
Editors: Stephen N. Spencer
Publisher: Association for Computing Machinery
ISBN (Electronic): 9798400710902
Publication status: Published - 21 Nov 2024
Event: 17th ACM SIGGRAPH Conference on Motion, Interaction, and Games, MIG 2024 - Arlington, United States
Duration: 21 Nov 2024 - 23 Nov 2024

Publication series

Name: Proceedings, MIG 2024 - 17th ACM SIGGRAPH Conference on Motion, Interaction, and Games

Conference

Conference: 17th ACM SIGGRAPH Conference on Motion, Interaction, and Games, MIG 2024
Country/Territory: United States
City: Arlington
Period: 21/11/24 - 23/11/24

Bibliographical note

Publisher Copyright:
© 2024 ACM.

Keywords

  • deep learning
  • emotion-controlled facial animation
  • facial animation synthesis
  • non-deterministic models
  • virtual humans
