Rethinking symbolic and visual context in referring expression generation

S Schüz*, A Gatt, S Zarrieß

*Corresponding author for this work

Research output: Contribution to journal › Review article › peer-review

Abstract

Situational context is crucial for linguistic reference to visible objects, since the same description can refer unambiguously to an object in one context but be ambiguous or misleading in others. This also applies to Referring Expression Generation (REG), where the production of identifying descriptions always depends on a given context. Research in REG has long represented visual domains through symbolic information about objects and their properties, in order to determine identifying sets of target features during content determination. In recent years, research in visual REG has turned to neural modeling and recast the REG task as an inherently multimodal problem, addressing more natural settings such as generating descriptions for objects in photographs. Characterizing the precise ways in which context influences generation is challenging in both paradigms, since context notoriously resists precise definition and categorization. In multimodal settings, however, these problems are further exacerbated by the increased complexity and low-level representation of perceptual inputs. The main goal of this article is to provide a systematic review of the types and functions of visual context across approaches to REG to date, and to argue for integrating and extending the different perspectives on visual context that currently co-exist in REG research. By analyzing how symbolic, rule-based approaches to REG integrate context, we derive a set of categories of contextual integration, including the distinction between positive and negative semantic forces exerted by context during reference generation. Using this framework, we show that existing work in visual REG has so far considered only some of the ways in which visual context can facilitate end-to-end reference generation.
Connecting with preceding research in related areas, we highlight additional ways in which contextual integration can be incorporated into REG and other multimodal generation tasks, as possible directions for future research.
Original language: English
Article number: 1067125
Number of pages: 18
Journal: Frontiers in Artificial Intelligence
Volume: 6
DOIs
Publication status: Published - Mar 2023

Keywords

  • Natural Language Generation
  • Referring Expression Generation (REG)
  • Vision and Language
  • language grounding
  • scene context
  • visual context

