Abstract
Automatic distinction between genuine (spontaneous) and posed expressions is important for the visual analysis of social signals. In this paper, we describe an informative set of features for the analysis of face dynamics, and propose a fully automatic system to distinguish between genuine and posed enjoyment smiles. Our system incorporates facial landmarking and tracking, through which features are extracted to describe the dynamics of eyelid, cheek, and lip corner movements. By fusing features over different regions, as well as over different temporal phases of a smile, we obtain a highly accurate smile classifier. We systematically investigate age and gender effects, and establish that age-specific classification significantly improves the results, even when the age is automatically estimated. We evaluate our system on the 400-subject UvA-NEMO database we have recently collected, as well as on three other smile databases from the literature. Through an extensive experimental evaluation, we show that our system improves the state of the art in smile classification and provides useful insights into smile psychophysics.
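To make the fusion idea concrete, the sketch below illustrates feature-level fusion of per-region, per-phase smile descriptors followed by a standard classifier. It is a minimal illustration only, not the authors' implementation: the region/phase names, feature dimensionality, synthetic data, and the SVM configuration are all assumptions introduced here for demonstration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical setup: for each smile video we assume one descriptor per
# facial region (eyelid, cheek, lip corner) and per temporal phase
# (onset, apex, offset), e.g. amplitude/speed/duration statistics.
n_videos = 400
regions = ["eyelid", "cheek", "lip_corner"]
phases = ["onset", "apex", "offset"]
feat_dim = 25  # assumed per-region, per-phase descriptor length

# Synthetic stand-in data; labels: 1 = genuine, 0 = posed.
features = {
    (r, p): rng.normal(size=(n_videos, feat_dim)) for r in regions for p in phases
}
labels = rng.integers(0, 2, size=n_videos)

# Feature-level fusion: concatenate all region/phase descriptors per video.
fused = np.hstack([features[(r, p)] for r in regions for p in phases])

# A simple classifier on the fused representation (an RBF-kernel SVM here;
# the choice is illustrative, not the paper's exact configuration).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, fused, labels, cv=5)
print(f"Mean CV accuracy on synthetic data: {scores.mean():.3f}")
```

In the same spirit, age-specific classification could be sketched by training one such fused classifier per estimated age group rather than a single model for all subjects.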
| Original language | English |
| --- | --- |
| Article number | 7018058 |
| Pages (from-to) | 279-294 |
| Number of pages | 16 |
| Journal | IEEE Transactions on Multimedia |
| Volume | 17 |
| Issue number | 3 |
| DOIs | |
| Publication status | Published - 1 Mar 2015 |
Keywords
- Affective computing
- expression dynamics
- expression spontaneity
- face analysis
- genuine smile
- human-computer interaction
- social signals