Abstract
Emotion labels are usually obtained via either manual annotation, which is tedious and time-consuming, or questionnaires, which neglect the time-varying nature of emotions and depend on unreliable human introspection. To overcome these limitations, we developed a continuous, real-time, joystick-based emotion annotation framework. To assess it, 30 subjects each watched 8 emotion-inducing videos and used a joystick to indicate their instantaneous emotional state in a valence-arousal (V-A) space. Subsequently, five analyses were undertaken: (i) a System Usability Scale (SUS) questionnaire indicated the framework's excellent usability; (ii) a MANOVA on the mean V-A ratings and (iii) trajectory-similarity analyses of the annotations confirmed the successful elicitation of emotions; (iv) change-point analysis of the annotations revealed a direct mapping between emotional events and annotations, thereby enabling automatic detection of emotionally salient points in the videos; and (v) Support Vector Machines (SVMs) were trained to classify 5-second chunks of annotations as well as their change points. The classification results confirmed that rating patterns were cohesive across participants. Together, these analyses confirm the value, validity, and usability of our annotation framework, and they showcase novel tools for gaining greater insight into participants' emotional experience.
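Analyses (iv) and (v) translate naturally into code. The following is a minimal, hypothetical Python sketch of change-point detection and SVM classification on synthetic V-A traces, using the `ruptures` and `scikit-learn` libraries; the sampling rate, window length, penalty value, and all other parameters are illustrative assumptions, not values taken from the paper.

```python
# Hypothetical sketch of analyses (iv) and (v) on continuous V-A annotations.
# All parameter values (sampling rate, window length, penalty) are assumed.
import numpy as np
import ruptures as rpt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
FS = 10  # assumed joystick sampling rate (Hz), not stated here

# Synthetic stand-in for a subject's V-A trace over two "videos":
# piecewise-constant targets plus noise emulate shifts at emotional events.
def synth_trace(levels, seg_len=20 * FS):
    return np.concatenate(
        [lvl + 0.1 * rng.standard_normal((seg_len, 2)) for lvl in levels]
    )

video_a = synth_trace([np.array([0.6, 0.4]), np.array([-0.5, 0.7])])
video_b = synth_trace([np.array([-0.3, -0.4]), np.array([0.2, -0.6])])

# (iv) Change-point analysis: PELT with an RBF cost recovers the sample
# indices where the annotated emotional state shifts.
algo = rpt.Pelt(model="rbf", min_size=2 * FS).fit(video_a)
change_points = algo.predict(pen=10)
print("Detected change points (sample indices):", change_points)

# (v) SVM on 5-second chunks: label each chunk by its source video and
# check whether annotation patterns are separable across stimuli.
def chunks(trace, label, win=5 * FS):
    n = len(trace) // win
    X = trace[: n * win].reshape(n, -1)  # flatten each 5 s V-A window
    return X, np.full(n, label)

Xa, ya = chunks(video_a, 0)
Xb, yb = chunks(video_b, 1)
X, y = np.vstack([Xa, Xb]), np.concatenate([ya, yb])

scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
print(f"5-fold SVM accuracy: {scores.mean():.2f}")
```

In the paper's setting, the synthetic traces would be replaced by real joystick annotations, and the chunk labels by the stimulus videos or change-point classes; high cross-validated accuracy would then indicate that rating patterns are cohesive across participants.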
| Original language | English |
| --- | --- |
| Pages (from-to) | 78-84 |
| Number of pages | 7 |
| Journal | IEEE Transactions on Affective Computing |
| Volume | 11 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 2020 |
Keywords
- Videos
- Usability
- Tools
- Standards
- Two-dimensional displays
- Real-time systems
- Support Vector Machines
- Affective computing
- emotion
- human-computer interaction
- annotation
- time-series analysis
- change-point analysis
- pattern recognition
- machine learning