Assessing the Reliability of LLMs Annotations in the Context of Demographic Bias and Model Explanation

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Understanding the sources of variability in annotations is crucial for developing fair NLP systems, especially for tasks like sexism detection where demographic bias is a concern. This study investigates the extent to which annotator demographic features influence labeling decisions compared to text content. Using a Generalized Linear Mixed Model, we quantify this influence, finding that while statistically detectable, demographic factors account for a minor fraction (~8%) of the observed variance, with tweet content being the dominant factor. We then assess the reliability of Generative AI (GenAI) models as annotators, specifically evaluating whether guiding them with demographic personas improves alignment with human judgments. Our results indicate that simplistic persona prompting often fails to enhance, and sometimes degrades, performance compared to baseline models. Furthermore, explainable AI (XAI) techniques reveal that model predictions rely heavily on content-specific tokens related to sexism rather than on correlates of demographic characteristics. We argue that focusing on content-driven explanations and robust annotation protocols offers a more reliable path towards fairness than potentially unreliable persona simulation.
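The abstract does not include the model specification, but as an illustrative sketch only, the variance partitioning it describes could be approximated with a logistic GLMM along the following lines. The column names (label, tweet_id, annotator_id, gender, age_band), the file annotations.csv, and the latent-scale decomposition using the logistic residual variance pi^2/3 are assumptions for illustration, not the paper's actual code or data.

    import numpy as np
    import pandas as pd
    from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

    # Hypothetical long-format annotation table: one row per (tweet, annotator)
    # pair, with a binary sexism label and annotator demographics.
    df = pd.read_csv("annotations.csv")

    # Logistic GLMM: demographic fixed effects, plus crossed random
    # intercepts for tweets and annotators.
    model = BinomialBayesMixedGLM.from_formula(
        "label ~ gender + age_band",
        vc_formulas={
            "tweet": "0 + C(tweet_id)",
            "annotator": "0 + C(annotator_id)",
        },
        data=df,
    )
    result = model.fit_vb()  # variational Bayes approximation

    # vcp_mean holds posterior means of the *log* standard deviations of
    # each variance component; convert them to variances.
    vc_var = {name: np.exp(log_sd) ** 2
              for name, log_sd in zip(model.vcp_names, result.vcp_mean)}

    # Latent-scale decomposition: add the logistic residual variance
    # pi^2 / 3, then report each component's share of the total.
    total = sum(vc_var.values()) + np.pi ** 2 / 3
    for name, var in vc_var.items():
        print(f"{name}: {var / total:.1%} of latent variance")

Under this kind of decomposition, a dominant tweet component and a small annotator component would correspond to the pattern the abstract reports, with the demographic fixed effects capturing the modest systematic influence of annotator background.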
Original language: English
Title of host publication: Proceedings of the 6th Workshop on Gender Bias in Natural Language Processing (GeBNLP)
Place of publication: Vienna, Austria
Publisher: Association for Computational Linguistics
Pages: 92-104
Publication status: Published - 1 Aug 2025
