Perception of synthetic emotion expressions in speech: Categorical and dimensional annotations

G. Bloothooft, J.M. Kessens, M.A. Neerincx, M. Kroes, R. Looije

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    Abstract

    In this paper, both categorical and dimensional annotations were made of neutral and emotional speech synthesis (anger, fear, sad, happy, and relaxed). With various prosodic emotion manipulation techniques, we found emotion classification rates of 40%, which is significantly above chance level (17%). The classification rates are higher for sentences whose semantics match the synthetic emotion. By manipulating pitch and duration, differences in arousal were perceived, whereas differences in valence were hardly perceived. Of the investigated emotion manipulation methods, EmoFilt and EmoSpeak performed very similarly, except for the emotion fear. Copy synthesis did not perform well, probably due to suboptimal alignments and the use of multiple speakers.
    Original language: English
    Title of host publication: 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops (ACII 2009)
    Publisher: IEEE
    Pages: 303-307
    Number of pages: 5
    ISBN (Electronic): 978-1-4244-4799-2
    ISBN (Print): 978-1-4244-4800-5
    DOIs
    Publication status: Published - 2009

    Bibliographical note

    Proceedings of a meeting held 10-12 September 2009, Amsterdam, Netherlands.
