What can Neural Referential Form Selectors Learn?

Guanyi Chen, Fahime Same, Kees van Deemter

    Research output: Conference contribution (chapter in conference proceedings) › Academic › peer-reviewed

    Abstract

    Despite achieving encouraging results, neural Referring Expression Generation models are often thought to lack transparency. We probed state-of-the-art neural Referential Form Selection (RFS) models to find out to what extent they learn and capture the linguistic features that influence the choice of referring expression (RE) form. The results of 8 probing tasks show that all the defined features were learned to some extent. The probing tasks pertaining to referential status and syntactic position exhibited the highest performance. The lowest performance was achieved by the probing models designed to predict discourse structure properties beyond the sentence level.
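
    The paper itself does not include code here; the following is a minimal sketch of the general probing-classifier setup the abstract describes, assuming frozen hidden representations from a trained RFS model and a simple diagnostic classifier. All names, dimensions, and the synthetic data are illustrative assumptions, not the authors' implementation.

    ```python
    # Minimal probing-classifier sketch (illustrative only; not the authors'
    # code). A probe is a low-capacity classifier trained to predict a
    # linguistic feature (e.g. referential status) from the frozen hidden
    # representations of an RFS model. High probe accuracy suggests the
    # feature is encoded in those representations.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    # Stand-in for hidden states extracted from a trained RFS model:
    # one vector per referent mention (here: random vectors, purely synthetic).
    n_mentions, hidden_dim = 1000, 128
    hidden_states = rng.normal(size=(n_mentions, hidden_dim))

    # Stand-in for a gold linguistic feature, e.g. referential status
    # (0 = discourse-new, 1 = discourse-old). Purely synthetic labels.
    labels = rng.integers(0, 2, size=n_mentions)

    X_train, X_test, y_train, y_test = train_test_split(
        hidden_states, labels, test_size=0.2, random_state=0
    )

    # A linear probe is deliberately weak, so that its success reflects what
    # the representations encode rather than what the probe can compute.
    probe = LogisticRegression(max_iter=1000)
    probe.fit(X_train, y_train)

    print("probe accuracy:", accuracy_score(y_test, probe.predict(X_test)))
    ```

    With real (rather than synthetic) hidden states, the same script would be run once per feature, giving one accuracy score per probing task, which is how feature-wise results like those reported in the abstract can be compared.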
    Original language: English
    Title of host publication: Proceedings of the 14th International Conference on Natural Language Generation
    Place of publication: Aberdeen, Scotland, UK
    Publisher: Association for Computational Linguistics
    Pages: 154-166
    Number of pages: 13
    Publication status: Published - 1 Aug 2021
