fl-IRT-ing with Psychometrics to Improve NLP Bias Measurement

Dominik Bachmann*, Oskar van der Wal, Edita Chvojka, Willem H. Zuidema, Leendert van Maanen, Katrin Schulz

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

To prevent ordinary people from being harmed by natural language processing (NLP) technology, finding ways to measure the extent to which a language model is biased (e.g., regarding gender) has become an active area of research. One popular class of NLP bias measures are bias benchmark datasets—collections of test items that are meant to assess a language model’s preference for stereotypical versus non-stereotypical language. In this paper, we argue that such bias benchmarks should be assessed with models from the psychometric framework of item response theory (IRT). Specifically, we combine an introduction to basic IRT concepts and models with a discussion of how they could be relevant to the evaluation, interpretation and improvement of bias benchmark datasets. Regarding evaluation, IRT provides us with methodological tools for assessing the quality of both individual test items (e.g., the extent to which an item can differentiate highly biased from less biased language models) as well as benchmarks as a whole (e.g., the extent to which the benchmark allows us to assess not only severe but also subtle levels of model bias). Through such diagnostic tools, the quality of benchmark datasets could be improved, for example by deleting or reworking poorly performing items. Finally, regarding interpretation, we argue that IRT models’ estimates for language model bias are conceptually superior to traditional accuracy-based evaluation metrics, as the former take into account more information than just whether or not a language model provided a biased response.

Original language: English
Article number: 37
Number of pages: 34
Journal: Minds and Machines
Volume: 34
Issue number: 4
DOIs
Publication status: Published - 4 Sept 2024

Bibliographical note

Publisher Copyright:
© The Author(s) 2024.

Funding

The authors wish to thank Petr Palíšek, Alina Leidinger, and the two anonymous peer reviewers for their thoughtful and insightful feedback! This publication is part of the project "The biased reality of online media - Using stereotypes to make media manipulation visible" (Project number 406.DI.19.059) of the research programme Open Competition Digitalisation-SSH, which is financed by the Dutch Research Council (NWO).

Funder: Nederlandse Organisatie voor Wetenschappelijk Onderzoek
Funder number: 406.DI.19.059

Keywords

• Bias benchmark datasets
• Item response theory
• Language models
• NLP
• Psychometrics
