Abstract
Choosing the most suitable classifier in a linguistic context is a well-known problem in the production of Mandarin and many other languages. The present paper proposes a solution based on BERT, compares this solution to previous neural and rule-based models, and argues that the BERT model performs particularly well on those difficult cases where the classifier adds information to the text.
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 14th International Conference on Natural Language Generation |
| Place of publication | Aberdeen, Scotland, UK |
| Publisher | Association for Computational Linguistics |
| Pages | 172-176 |
| Number of pages | 5 |
| Publication status | Published - 1 Aug 2021 |