ChatGPT as an informant

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

While previous machine learning protocols have failed to achieve even observational adequacy in acquiring natural language, generative large language models (LLMs) now produce large amounts of free text with few grammatical errors. This is surprising in view of what is known as “the logical problem of language acquisition”. Given the likely absence of negative evidence in the training process, how would the LLM acquire the information that certain strings are to be avoided as ill-formed? We attempt to employ Dutch-speaking ChatGPT as a linguistic informant by capitalizing on the documented “few-shot learning” ability of LLMs. We then investigate whether ChatGPT has acquired familiar island constraints, in particular the Complex Noun Phrase Constraint (CNPC), and compare its performance to that of native speakers. Although descriptive and explanatory adequacy may remain out of reach, initial results indicate that ChatGPT performs well above chance in detecting island violations.
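The article itself contains no code, but the elicitation procedure it describes can be sketched as a few-shot prompting loop. The snippet below is an illustrative sketch only, not the authors' actual protocol: the model name, prompt wording, and Dutch example sentences (one relative clause, one candidate CNPC violation) are assumptions made for the example.

```python
# Minimal sketch (assumed, not the authors' protocol): eliciting few-shot
# acceptability judgments from an OpenAI chat model for Dutch sentences.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Few-shot examples pairing sentences with acceptability labels (illustrative).
FEW_SHOT = [
    ("De man die ik gisteren zag, is mijn buurman.", "acceptable"),
    ("Wie geloof je de bewering dat Jan ___ ontslagen heeft?", "unacceptable"),
]

def judge(sentence: str) -> str:
    """Ask the model for a binary acceptability judgment on a Dutch sentence."""
    messages = [{
        "role": "system",
        "content": ("You are a native speaker of Dutch. Label each sentence "
                    "as 'acceptable' or 'unacceptable'."),
    }]
    # Present the few-shot pairs as prior user/assistant turns.
    for s, label in FEW_SHOT:
        messages.append({"role": "user", "content": s})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": sentence})

    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content.strip()

# Example: a candidate CNPC (complex noun phrase) island violation.
print(judge("Wat geloof je het verhaal dat Marie ___ gekocht heeft?"))
```

Responses collected this way could then be scored against native-speaker judgments to estimate how far above chance the model performs.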
Original language: English
Pages (from-to): 242-260
Journal: Nota Bene
Volume: 1
Issue number: 2
DOIs
Publication status: Published - Dec 2024
