Algorithmic profiling as a source of hermeneutical injustice

Silvia Milano, Carina Prunkl

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

It is well-established that algorithms can be instruments of injustice. It is less frequently discussed, however, how current modes of AI deployment often make the very discovery of injustice difficult, if not impossible. In this article, we focus on the effects of algorithmic profiling on epistemic agency. We show how algorithmic profiling can give rise to epistemic injustice through the depletion of epistemic resources that are needed to interpret and evaluate certain experiences. In doing so, we not only demonstrate how the conceptual framework of epistemic injustice can help pinpoint potential systematic harms from algorithmic profiling, but we also identify a novel source of hermeneutical injustice that has to date received little attention in the relevant literature, which we call epistemic fragmentation. As we detail in this paper, epistemic fragmentation is a structural characteristic of algorithmically mediated environments that isolates individuals, making it more difficult to develop, take up, and apply new epistemic resources, and thus more difficult to identify and conceptualise emerging harms in these environments. We thus trace the occurrence of hermeneutical injustice back to the fragmentation of the epistemic experiences of individuals, who are left more vulnerable by their inability to share, compare, and learn from shared experiences.
Original language: English
Pages (from-to): 185-203
Number of pages: 19
Journal: Philosophical Studies
Volume: 182
Issue number: 1
Early online date: 5 Feb 2024
DOIs
Publication status: Published - 2025

Bibliographical note

Publisher Copyright:
© 2024, The Author(s).

Funding

The authors would like to thank audiences at Egenis Research Exchange, the LSE Choice Group, the Oxford Epistemology Group, and the editors of the S.I. on AI and Normative Theory for helpful discussions on earlier versions of this article. Special thanks go to Adrian Currie, Tom Roberts, Rose Trappes, and two anonymous reviewers for this journal for their detailed and generous comments. This work has been partly supported through research funding provided by the Wellcome Trust (grant no. 223765/Z/21/Z), the Alfred P. Sloan Foundation (grant no. G-2021-16779), the Department of Health and Social Care, and the Luminate Group. Their funding supports the Trustworthiness Auditing for AI project and the Governance of Emerging Technologies research programme at the Oxford Internet Institute, University of Oxford. For the purpose of open access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission. This study did not generate any new data.

Funders and funder numbers:
Alfred P. Sloan Foundation: G-2021-16779
Wellcome Trust: 223765/Z/21/Z
Department of Health and Social Care

Keywords

• Algorithmic profiling
• Epistemic fragmentation
• Epistemic injustice
• Ethics of AI
• Hermeneutical injustice
