Utrecht University
FaceDiffuser: Speech-Driven 3D Facial Animation Synthesis Using Diffusion
Stefan Stan, Kazi Injamamul Haque, Zerrin Yumak
Human-Centered Computing
Research output: Working paper › Preprint › Academic
Fingerprint
Dive into the research topics of 'FaceDiffuser: Speech-Driven 3D Facial Animation Synthesis Using Diffusion'. Together they form a unique fingerprint.
Keyphrases
Facial Expression Synthesis: 100%
Speech-driven: 100%
3D Facial Animation: 100%
Non-deterministic: 66%
Blendshapes: 66%
Diffusion Method: 66%
Publicly Available: 33%
Challenging Tasks: 33%
Nonverbal: 33%
State-of-the-art Techniques: 33%
Vertex-based: 33%
Facial Cues: 33%
Deep Learning Methods: 33%
Representation Model: 33%
Deep Learning Model: 33%
Speech Input: 33%
Facial Animation: 33%
HuBERT: 33%
Audio Input: 33%
Subjective Analysis: 33%
Speech-driven Facial Animation: 33%
Speech Representation: 33%
Computer Science
Facial Animation: 100%
Representation Model: 20%
Deep Learning Model: 20%
Deep Learning Method: 20%