Dimensions of Explanatory Value in NLP models

Kees van Deemter*

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Performance on a dataset is often regarded as the key criterion for assessing NLP models. I argue for a broader perspective, which emphasizes scientific explanation. I draw on a long tradition in the philosophy of science, and on the Bayesian approach to assessing scientific theories, to argue for a plurality of criteria for assessing NLP models. To illustrate these ideas, I compare some recent models of language production with each other. I conclude by asking what it would mean for institutional policies if the NLP community took these ideas on board.

Original language: English
Pages (from-to): 749-761
Number of pages: 13
Journal: Computational Linguistics
Volume: 49
Issue number: 3
Early online date: 6 Jun 2023
DOIs
Publication status: Published - Sept 2023
