Abstract
In this paper, both categorical and dimensional annotations were collected for neutral and emotional synthetic speech (anger, fear, sadness, happiness and relaxation). Across various prosodic emotion manipulation techniques, we found emotion classification rates of 40%, which is significantly above chance level (17%). Classification rates were higher for sentences whose semantics matched the synthesized emotion. Manipulating pitch and duration produced perceptible differences in arousal, whereas differences in valence were hardly perceived. Of the investigated emotion manipulation methods, EmoFilt and EmoSpeak performed very similarly, except for the emotion fear. Copy synthesis did not perform well, probably because of suboptimal alignments and the use of multiple speakers.
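The abstract refers to rule-based prosodic manipulation of pitch and duration to move neutral speech along the arousal dimension. The sketch below is a minimal illustration of that general idea, not the EmoFilt, EmoSpeak or copy-synthesis implementations used in the paper; the file names, library choice (librosa) and parameter values are assumptions for illustration only.

```python
# Minimal sketch of pitch/duration manipulation for emotion conversion.
# NOT the paper's method: just illustrates shifting pitch and scaling
# duration of a neutral utterance. Paths and values are hypothetical.
import librosa
import soundfile as sf

def manipulate_prosody(in_path, out_path, pitch_semitones, duration_factor):
    """Shift pitch (in semitones) and scale duration of an utterance."""
    y, sr = librosa.load(in_path, sr=None)  # keep the original sample rate
    y = librosa.effects.pitch_shift(y, sr=sr, n_steps=pitch_semitones)
    # time_stretch with rate > 1 shortens the signal, so invert the factor:
    # duration_factor > 1 means a longer, slower utterance (lower arousal).
    y = librosa.effects.time_stretch(y, rate=1.0 / duration_factor)
    sf.write(out_path, y, sr)

# Example: raise pitch and speed up speech to suggest higher arousal.
manipulate_prosody("neutral.wav", "high_arousal.wav",
                   pitch_semitones=3.0, duration_factor=0.85)
```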
| Original language | English |
|---|---|
| Title of host publication | 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops (ACII 2009) |
| Publisher | IEEE |
| Pages | 303-307 |
| Number of pages | 5 |
| ISBN (Electronic) | 978-1-4244-4799-2 |
| ISBN (Print) | 978-1-4244-4800-5 |
| DOIs | |
| Publication status | Published - 2009 |