Abstract
While both speech emotion recognition and music emotion recognition have been studied extensively in different communities, little research has gone into recognizing emotion from mixed audio sources, i.e., audio in which both speech and music are present. However, many application scenarios require models that are able to extract emotions from mixed audio sources such as television content. This paper studies how mixed audio affects both speech and music emotion recognition using a random forest and a deep neural network model, and investigates whether blind source separation of the mixed signal beforehand is beneficial. We created a mixed audio dataset with 25% speech-music overlap and no contextual relationship between the two. We show that specialized models for speech-only or music-only audio achieved merely chance-level performance on mixed audio. For speech, above-chance performance was achieved when training on raw mixed audio, but optimal performance was achieved with audio that had been blind-source separated beforehand. Music emotion recognition models on mixed audio achieved performance approaching or even surpassing that on music-only audio, both with and without blind source separation. Our results are important for estimating emotion from real-world data, where individual speech and music tracks are often not available.
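As a rough illustration of the two-stage pipeline described above (blind source separation of the mixture first, then emotion recognition on the resulting stream), the sketch below uses mean MFCC features from librosa and a scikit-learn random forest. The separation step is left as a placeholder, and the feature set, classifier settings, and function names are assumptions for illustration only, not the paper's exact setup.

```python
# Illustrative sketch, not the paper's implementation: separate a mixed
# signal, then classify emotion on one of the separated streams with a
# random forest over hand-crafted features.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def extract_features(y, sr):
    """Summarise a clip with mean MFCCs (a common hand-crafted feature set)."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

def separate(y_mixed, sr):
    """Placeholder for blind source separation: any model that splits the
    mixture into a speech-like and a music-like stream could be plugged in."""
    raise NotImplementedError("plug in a blind source separation model here")

def train_emotion_model(clips, labels, sr=16000):
    """Train a random forest emotion classifier on (already separated) clips.
    `clips` is a list of waveforms, `labels` the corresponding emotion labels."""
    X = np.stack([extract_features(y, sr) for y in clips])
    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    clf.fit(X, labels)
    return clf
```

In this sketch, training specialized speech-only or music-only models versus models trained on (separated or raw) mixed audio only differs in which clips are passed to `train_emotion_model`.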
Original language | English |
---|---|
Pages | 67-71 |
Number of pages | 5 |
DOIs | |
Publication status | Published - 25 Oct 2020 |
Event | ICMI 2020 Late Breaking Results, Virtual Event, Utrecht, Netherlands. Duration: 25 Oct 2020 → 29 Oct 2020. https://icmi.acm.org/2021/index.php?id=cflbr |
Workshop
Workshop | ICMI 2020 Late Breaking Results |
---|---|
Abbreviated title | ICMI20LBR |
Country/Territory | Netherlands |
City | Utrecht |
Period | 25/10/20 → 29/10/20 |
Internet address | https://icmi.acm.org/2021/index.php?id=cflbr |
Keywords
- speech emotion recognition
- music emotion recognition
- blind source separation