Generating quantified descriptions of abstract visual scenes

G. Chen, C.J. van Deemter, Chenghua Lin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Quantified expressions have always taken up a central position in formal theories of meaning and language use. Yet quantified expressions have so far attracted far less attention from the Natural Language Generation community than, for example, referring expressions. In an attempt to start redressing the balance, we investigate a recently developed corpus in which quantified expressions play a crucial role; the corpus is the result of a carefully controlled elicitation experiment, in which human participants were asked to describe visually presented scenes. Informed by an analysis of this corpus, we propose algorithms that produce computer-generated descriptions of a wider class of visual scenes, and we evaluate the descriptions generated by these algorithms in terms of their correctness, completeness, and human-likeness. We discuss what this exercise can teach us about the nature of quantification and about the challenges posed by the generation of quantified expressions.
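
The abstract describes the proposed algorithms only at a high level. As a purely illustrative sketch (not the authors' method), the following Python snippet shows the flavour of the task: mapping the proportion of scene objects satisfying a property to a quantifier such as "all", "most", "some", or "none". The toy scene encoding, thresholds, and function names are all assumptions made for this example.

# Illustrative sketch only; not the algorithm from the paper. The scene
# encoding (shape, colour pairs) and the proportion thresholds below are
# assumptions made for this example.

def quantifier_for(count: int, total: int) -> str:
    """Map a proportion to a quantifier; thresholds are illustrative."""
    if count == total:
        return "all"
    if count == 0:
        return "none"
    if count / total > 0.5:
        return "most"
    return "some"

def describe(scene: list[tuple[str, str]]) -> list[str]:
    """Emit one quantified statement per shape/colour combination in the scene."""
    statements = []
    shapes = sorted({shape for shape, _ in scene})
    colours = sorted({colour for _, colour in scene})
    for shape in shapes:
        # Colours of every object of this shape.
        of_shape = [c for s, c in scene if s == shape]
        for colour in colours:
            q = quantifier_for(of_shape.count(colour), len(of_shape))
            statements.append(f"{q} of the {shape}s are {colour}".capitalize())
    return statements

if __name__ == "__main__":
    scene = [("circle", "blue"), ("circle", "blue"),
             ("square", "red"), ("square", "blue")]
    print("\n".join(describe(scene)))

On this four-object scene the sketch prints statements such as "All of the circles are blue" and "Some of the squares are red"; the paper evaluates richer, human-like descriptions against an elicitation corpus for correctness, completeness, and human-likeness.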
Original language: English
Title of host publication: Proceedings of the 12th International Conference on Natural Language Generation
Place of publication: Tokyo, Japan
Publisher: Association for Computational Linguistics
Pages: 529–539
Number of pages: 11
ISBN (Electronic): 978-1-950737-94-9
Publication status: Published - 28 Oct 2019
Event: 12th International Conference on Natural Language Generation - National Museum of Emerging Science and Innovation, Tokyo, Japan
Duration: 28 Oct 2019 – 1 Nov 2019
https://www.inlg2019.com/

Conference

Conference: 12th International Conference on Natural Language Generation
Country/Territory: Japan
City: Tokyo
Period: 28/10/19 – 1/11/19
Internet address: https://www.inlg2019.com/
