
Neural referential form selection: Generalisability and interpretability

  • Guanyi Chen*
  • Fahime Same*
  • Kees van Deemter*

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

In recent years, a range of neural Referring Expression Generation (REG) systems have been built, often achieving encouraging results. However, these models are widely thought to lack transparency and generality. Firstly, it is hard to understand what neural REG models learn and to compare their behaviour with existing linguistic theories. Secondly, it is unclear whether they generalise to data from different text genres and different languages. To address these questions, we focus on a sub-task of REG: Referential Form Selection (RFS). We introduce the RFS task and a series of neural RFS models built on state-of-the-art neural REG models. To address the issue of interpretability, we probe these RFS models using probing classifiers that consider information known to influence the human choice of referential forms. To address the issue of generalisability, we assess the performance of the RFS models on multiple datasets spanning multiple genres and two languages, namely English and Chinese.
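The probing approach described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the data, feature, and model below are all hypothetical. We fabricate "hidden states" of an imagined RFS model in which one dimension weakly encodes a linguistic feature (say, whether the referent was recently mentioned), then train a simple linear classifier to recover that feature from the states. High probe accuracy would suggest the representations encode the feature.

```python
# Hedged sketch of a probing classifier on (synthetic) model representations.
# All names and data are illustrative, not taken from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n, dim = 1000, 32
labels = rng.integers(0, 2, size=n)      # e.g. "recently mentioned": yes/no
states = rng.normal(size=(n, dim))       # stand-in for RFS encoder states
states[:, 0] += 2.0 * labels             # plant the feature in one dimension

X_train, X_test, y_train, y_test = train_test_split(
    states, labels, test_size=0.2, random_state=0)

# A deliberately simple (linear) probe: if even this recovers the feature,
# the information is linearly decodable from the representations.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = probe.score(X_test, y_test)
print(f"probe accuracy: {accuracy:.2f}")
```

A control probe trained on shuffled labels (which should stay near chance) is the usual sanity check that the probe is reading information from the states rather than memorising.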
Original language: English
Article number: 101466
Number of pages: 23
Journal: Computer Speech and Language
Volume: 79
Publication status: Published - Apr 2023

Bibliographical note

Funding Information:
Fahime Same is funded by the German Research Foundation (DFG), Project-ID 281511265, SFB 1252 "Prominence in Language".

Publisher Copyright:
© 2022 The Author(s)

Keywords

  • Deep learning
  • Multilinguality
  • Natural Language Generation
  • Probing classifier
  • Referring Expression Generation
