Context-aware Visual Storytelling with Visual Prefix Tuning and Contrastive Learning

Research output: Contribution to conference › Paper › Academic

Abstract

Visual storytelling systems generate multi-sentence stories from image sequences. In this task, capturing contextual information and bridging visual variation bring additional challenges. We propose a simple yet effective framework that leverages the generalization capabilities of pretrained foundation models, only training a lightweight vision-language mapping network to connect modalities, while incorporating context to enhance coherence. We introduce a multimodal contrastive objective that also improves visual relevance and story informativeness. Extensive experimental results, across both automatic metrics and human evaluations, demonstrate that the stories generated by our framework are diverse, coherent, informative, and interesting.
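The record itself gives no implementation details, but the "multimodal contrastive objective" mentioned in the abstract is commonly realised as a symmetric InfoNCE loss over paired image and story embeddings. The sketch below is an illustrative NumPy version under that assumption; the function name, batch shapes, and temperature value are hypothetical, not taken from the paper.

```python
import numpy as np

def info_nce(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    image_emb, text_emb: arrays of shape (B, D) where row i of each
    array is a matched image/story pair (an assumed setup).
    """
    # L2-normalise so dot products are cosine similarities.
    img = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature          # (B, B) similarity matrix
    labels = np.arange(logits.shape[0])         # matched pairs lie on the diagonal

    def xent(l):
        # Cross-entropy with the diagonal as the target class.
        l = l - l.max(axis=1, keepdims=True)    # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimising this loss pulls each image embedding toward its own story embedding and pushes it away from the other stories in the batch, which is one plausible way the objective could improve visual relevance as the abstract claims.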
Original language: English
Pages: 384-401
Publication status: Published - 2024
Event: 17th International Natural Language Generation Conference - Tokyo, Japan
Duration: 23 Sept 2024 - 27 Sept 2024
Conference number: 17
https://2024.inlgmeeting.org/

Conference

Conference: 17th International Natural Language Generation Conference
Abbreviated title: INLG
Country/Territory: Japan
City: Tokyo
Period: 23/09/24 - 27/09/24
Internet address: https://2024.inlgmeeting.org/

