Abstract
We develop a multilingual version of the Wug Test, an artificial word completion experiment that is typically used to test the morphological knowledge of children, and apply it to the GPT family of large language models (LLMs). LLMs’ performance on this test was evaluated by native speakers of six different languages, who judged whether the inflected and derived forms generated by the models conform to the morphological rules of their language. Our results show that LLMs can generalize their morphological knowledge to new, unfamiliar words, but that their success in generating the “correct” generalization (as judged by native human speakers) is predicted by a language’s morphological complexity (specifically, integrative complexity). We further find that the amount of training data has surprisingly little effect on LLMs’ morphological generalization abilities within the scope of the analyzed languages. These findings highlight that “morphology matters”, and have important implications for improving low-resource language modeling.
Original language | English |
---|---|
Title of host publication | CMCL 2024 - 13th Edition of the Workshop on Cognitive Modeling and Computational Linguistics, Proceedings of the Workshop |
Editors | Tatsuki Kuribayashi, Giulia Rambelli, Ece Takmaz, Philipp Wicke, Yohei Oseki |
Publisher | Association for Computational Linguistics (ACL) |
Pages | 177-188 |
Number of pages | 12 |
ISBN (Electronic) | 9798891761438 |
Publication status | Published - 2024 |
Event | 13th Edition of the Workshop on Cognitive Modeling and Computational Linguistics, CMCL 2024 - Bangkok, Thailand. Duration: 15 Aug 2024 → … |
Publication series
Name | CMCL 2024 - 13th Edition of the Workshop on Cognitive Modeling and Computational Linguistics, Proceedings of the Workshop |
---|---|
Conference
Conference | 13th Edition of the Workshop on Cognitive Modeling and Computational Linguistics, CMCL 2024 |
---|---|
Country/Territory | Thailand |
City | Bangkok |
Period | 15/08/24 → … |
Bibliographical note
Publisher Copyright: © 2024 Association for Computational Linguistics.