Using BERT for choosing classifiers in Mandarin

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    Abstract

    Choosing the most suitable classifier in a linguistic context is a well-known problem in the production of Mandarin and many other languages. The present paper proposes a solution based on BERT, compares this solution to previous neural and rule-based models, and argues that the BERT model performs particularly well on those difficult cases where the classifier adds information to the text.
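    For readers unfamiliar with the task, classifier choice can be framed as selecting, for a noun in context, the best element of a fixed classifier inventory. The sketch below illustrates only this framing: the candidate set and the noun-to-classifier lexicon are toy assumptions for illustration, not the paper's model, which instead scores candidates with a fine-tuned BERT in full sentence context.

    ```python
    # Toy framing of Mandarin classifier choice as candidate selection.
    # The lexicon below encodes well-known lexical defaults (本 for bound
    # volumes, 只 for many animals, 张 for flat objects, 条 for long thin
    # objects); it is a stand-in for a learned, context-sensitive scorer.

    LEXICAL_DEFAULT = {
        "书": "本",   # book
        "狗": "只",   # dog
        "纸": "张",   # (sheet of) paper
        "鱼": "条",   # fish
    }

    def choose_classifier(noun: str) -> str:
        """Pick a classifier for `noun`, falling back to the general 个."""
        return LEXICAL_DEFAULT.get(noun, "个")

    print(choose_classifier("书"))    # 本
    print(choose_classifier("苹果"))  # no entry: falls back to 个
    ```

    The hard cases the paper targets are exactly those this lookup cannot handle: contexts where the classifier itself adds information, so the choice depends on the surrounding sentence rather than the noun alone.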
    Original language: English
    Title of host publication: Proceedings of the 14th International Conference on Natural Language Generation
    Place of publication: Aberdeen, Scotland, UK
    Publisher: Association for Computational Linguistics
    Pages: 172-176
    Number of pages: 5
    Publication status: Published - 1 Aug 2021
