Abstract
Oral History Archives (OHA) are a rich source of emotional narratives, encapsulating the personal stories of people across different demographics, historical periods, and cultures. Computational technologies have transformed the oral history archival field by facilitating the transcription and verbal content analysis of interview collections where manual inspection is too time-consuming. However, these methods overlook the subjective dimension of the archives. In this project, we explore the potential of automatic analysis of breathing patterns and non-verbal cues in OHA interviews to gain new insights into individual and collective emotional responses across different demographics. The proposed framework will investigate whether automatic breathing signal prediction enhances the performance of speech emotion recognition models and whether a cross-dataset learning approach for breathing signal prediction and paralinguistic analysis transfers to OHA. We will then use the gathered emotional information to study cultural differences in narrating traumatic experiences, focusing on different OHA collections. Lastly, to enhance our research and the literature, we will also design emotion elicitation experiments to create new emotional speech breathing datasets.
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 2022 International Conference on Multimodal Interaction |
| Publisher | Association for Computing Machinery |
| Pages | 668–672 |
| ISBN (Print) | 978-1-4503-9390-4 |
| DOIs | |
| Publication status | Published - 7 Nov 2022 |
Keywords
- Affective Computing
- Oral History
- Paralinguistics
- Breathing Analysis
- Interpretability
- Data Collection