Abstract
Building on the notion that the processing of emotional stimuli is sensitive to context, we explored in two experimental tasks whether the detection of emotion in emotional words (task 1) and facial expressions (task 2) is facilitated by social verbal context. Three different levels of contextual supporting information were compared, namely (1) no information, (2) the verbal expression of an emotionally matched word pronounced with a neutral intonation, and (3) the verbal expression of an emotionally matched word pronounced with an emotionally matched intonation. We found that increasing levels of supporting contextual information enhanced emotion detection for words, but not for facial expressions. We also measured activity of the corrugator and zygomaticus muscles to assess facial simulation, as the processing of emotional stimuli can be facilitated by facial simulation. While facial simulation emerged for facial expressions, the level of contextual supporting information did not qualify this effect. All in all, our findings suggest that adding emotion-relevant voice elements positively influences emotion detection.
| Field | Value |
| --- | --- |
| Original language | English |
| Pages (from-to) | 413–423 |
| Journal | Experimental Brain Research |
| Volume | 239 |
| Early online date | 1 Jan 2020 |
| DOIs | |
| Publication status | Published - 2021 |
Funding
This work was supported by the Netherlands Organization for Scientific Research, Social Sciences, under Grant number 464-10-010 (ORA Reference No. ORA-10-108), awarded to the last author. We thank Eva Beeftink for her help in programming the experiment and collecting the data.
Keywords
- Auditory context
- EMG
- Emotion detection
- Emotion processing
- Words and faces