MMChat: Multi-Modal Chat Dataset on Social Media

Yinhe Zheng, Guanyi Chen, Xin Liu, Ke Lin

Research output: Working paper › Preprint › Academic

Abstract

Incorporating multi-modal contexts in conversation is an important step towards developing more engaging dialogue systems. In this work, we explore this direction by introducing MMChat: a large-scale multi-modal dialogue corpus (32.4M raw dialogues and 120.84K filtered dialogues). Unlike previous corpora that are crowd-sourced or collected from fictitious movies, MMChat contains image-grounded dialogues collected from real conversations on social media, in which a sparsity issue is observed: dialogues initiated by an image in everyday communication often drift to topics that are no longer grounded in the image as the conversation proceeds. We develop a benchmark model that addresses this issue in dialogue generation by adapting the attention routing mechanism to image features. Experiments demonstrate the usefulness of incorporating image features and the effectiveness of the model in handling their sparsity.
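
The abstract only names the mechanism, so the sketch below is a minimal illustration of one way a gated "routing" over image features could be wired; it assumes PyTorch, and the module name (ImageAttentionRouter), layer names, and tensor shapes are hypothetical rather than the authors' actual implementation. The idea illustrated: a learned gate decides, per decoder step, how much to attend to the image versus the dialogue history, which is one way to cope with turns that are no longer image-grounded.

```python
import torch
import torch.nn as nn

class ImageAttentionRouter(nn.Module):
    """Mix a context-only attention read with an image-grounded one via a
    learned gate, so image features can be down-weighted when the dialogue
    has drifted away from the image (hypothetical sketch, not the paper's code)."""

    def __init__(self, hidden_dim: int, image_dim: int):
        super().__init__()
        self.img_proj = nn.Linear(image_dim, hidden_dim)   # map image features into the hidden space
        self.ctx_attn = nn.MultiheadAttention(hidden_dim, num_heads=4, batch_first=True)
        self.img_attn = nn.MultiheadAttention(hidden_dim, num_heads=4, batch_first=True)
        self.gate = nn.Linear(hidden_dim, 1)                # predicts how image-grounded each step is

    def forward(self, decoder_states, context_states, image_feats):
        # decoder_states: (B, T, H) current decoder hidden states
        # context_states: (B, S, H) encoded dialogue history
        # image_feats:    (B, R, D) regional image features (e.g. from a CNN/ViT encoder)
        img = self.img_proj(image_feats)                                        # (B, R, H)
        ctx_out, _ = self.ctx_attn(decoder_states, context_states, context_states)
        img_out, _ = self.img_attn(decoder_states, img, img)
        # Gate in [0, 1]: close to 0 when the current turn is no longer about the image.
        g = torch.sigmoid(self.gate(decoder_states))                            # (B, T, 1)
        return (1.0 - g) * ctx_out + g * img_out

# Tiny usage example with random tensors.
router = ImageAttentionRouter(hidden_dim=256, image_dim=512)
dec = torch.randn(2, 10, 256)
ctx = torch.randn(2, 20, 256)
img = torch.randn(2, 36, 512)
print(router(dec, ctx, img).shape)  # torch.Size([2, 10, 256])
```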
Original language: English
Publisher: arXiv
Pages: 1-8
DOIs
Publication status: Published - 16 Aug 2021

Keywords

  • cs.CL
  • cs.CV
