Multimodal Emotion Recognition for Visualizing Storyline in a TV Series

Tanja Crijns, Metehan Doyran, Maurits van der Goes, Cecilia Herrera, Heysem Kaya, Osman Semih Kayhan, Rana Klein, Vincent Koops, Cas Laugs, Daan Odijk

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Automatic analysis of video archives is a long-researched topic
in multimedia. In this work, conducted with RTL Netherlands, we
investigated methods for developing an integrated tool for the
analysis and visualization of the storyline in a TV series by
combining a range of technologies from affective computing and
multimedia analysis. The input to the proposed system is a set of
episodes from a TV series, in proper temporal order, including
subtitles. We analyze the input in the audio, video, and text
modalities, and identify the characters in each scene. We
accumulate information about the characters' interactions and
create an interactive visualization that helps visualize the
episodes of the series and access specific information. Our
results are potentially useful for building a tool that helps
directors create promotional material, for multimedia
summarization, and for creating visual interfaces into multimodal
archival material. We also analyze the language of soap operas,
how music and sound are used, and how the different modalities are
combined to create particular affective effects.
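
To make the pipeline described above concrete, here is a minimal Python sketch, not the authors' implementation; all names (Scene, Storyline, fuse, "Anna", "Bart") and the simple late-fusion averaging are illustrative assumptions. It shows how per-scene emotion scores from the audio, video, and text modalities might be fused and accumulated into character-pair interaction statistics for later visualization.

    # Hypothetical sketch of accumulating per-scene multimodal emotion scores
    # into character interaction statistics (not the paper's actual system).
    from dataclasses import dataclass, field
    from collections import defaultdict
    from itertools import combinations
    from typing import Dict, List, Tuple


    @dataclass
    class Scene:
        episode: int
        characters: List[str]                 # characters identified in the scene
        emotion_scores: Dict[str, float]      # e.g. {"valence": 0.3}


    @dataclass
    class Storyline:
        # (character A, character B) -> list of per-scene emotion scores
        interactions: Dict[Tuple[str, str], List[Dict[str, float]]] = field(
            default_factory=lambda: defaultdict(list)
        )

        def add_scene(self, scene: Scene) -> None:
            # Accumulate affective information for every pair of
            # characters that co-occur in the scene.
            for a, b in combinations(sorted(scene.characters), 2):
                self.interactions[(a, b)].append(scene.emotion_scores)

        def mean_valence(self, a: str, b: str) -> float:
            key = (a, b) if a < b else (b, a)
            scores = self.interactions.get(key, [])
            if not scores:
                return 0.0
            return sum(s.get("valence", 0.0) for s in scores) / len(scores)


    def fuse(audio: float, video: float, text: float) -> float:
        # Simple late fusion of per-modality scores by averaging
        # (one of several possible fusion strategies).
        return (audio + video + text) / 3.0


    storyline = Storyline()
    storyline.add_scene(Scene(episode=1,
                              characters=["Anna", "Bart"],
                              emotion_scores={"valence": fuse(0.2, 0.4, 0.3)}))
    print(storyline.mean_valence("Anna", "Bart"))

Aggregates such as the mean valence per character pair could then drive an interactive storyline visualization across episodes.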
Original language: English
Title of host publication: Proc. ICT with Industry
Number of pages: 7
Publication status: Published - 2020
