Connecting ethics and epistemology of AI

F. Russo, E.S. Schliesser, J.H.M. Wagemans

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

The need for fair and just AI is often related to the possibility of understanding AI itself, in other words, of turning an opaque box into a glass box, as inspectable as possible. Transparency and explainability, however, pertain to the technical domain and to philosophy of science, thus leaving the ethics and epistemology of AI largely disconnected. To remedy this, we propose an integrated approach premised on the idea that a glass-box epistemology should explicitly consider how to incorporate values and other normative considerations, such as intersectoral vulnerabilities, at critical stages of the whole process from design and implementation to use and assessment. To connect ethics and epistemology of AI, we perform a double shift of focus. First, we move from trusting the output of an AI system to trusting the process that leads to the outcome. Second, we move from expert assessment to more inclusive assessment strategies, aiming to facilitate expert and non-expert assessment. Together, these two moves yield a framework usable for experts and non-experts when they inquire into relevant epistemological and ethical aspects of AI systems. We dub our framework ‘epistemology-cum-ethics’ to signal the equal importance of both aspects. We develop it from the vantage point of the designers: how to create the conditions to internalize values into the whole process of design, implementation, use, and assessment of an AI system, in which values (epistemic and non-epistemic) are explicitly considered at each stage and inspectable by every salient actor involved at any moment.
Original language: English
Pages (from-to): 1585–1603
Number of pages: 19
Journal: AI & SOCIETY
Volume: 39
Issue number: 4
Early online date: 17 Jan 2023
DOIs
Publication status: Published - 2024
Externally published: Yes

Bibliographical note

Publisher Copyright:
© The Author(s) 2023.

Funding

We received in 2020 a seed-money grant from the University of Amsterdam, as part of the Research Priority Area ‘Human(e) AI’, to conduct research on the topics of the paper and to organize some events related to it. We are very grateful to Gregory Wheeler, Juan Durán, and Katie Creel for joining some of our online meetings and for giving us plenty of useful comments. Aybüke Özgün and Emanuele Ratti read an earlier version of this paper, and their suggestions have been likewise invaluable. We also thank all participants of the events where we presented this work (Issues in XAI: Between Ethics and Epistemology, Delft University of Technology, May 2022; PEPTalk, University of Amsterdam, February 2022; Workshop ‘Bias and discrimination in algorithmic decision making’, University of Hannover, October 2021) and two anonymous reviewers for useful and constructive feedback. Any errors or inaccuracies remain of course ours. Eric Schliesser gratefully acknowledges funding from the Netherlands Organisation for Scientific Research (NWO), grant #406.18.FT.014, A New Normative Framework for Financial Debt.

Funders: Funder number
Universiteit van Amsterdam
Technische Universiteit Delft
University of Hannover
Nederlandse Organisatie voor Wetenschappelijk Onderzoek: 406.18

Keywords

• Glass-box AI
• Ethics of AI
• Epistemology of AI
• Critical Questions
• Holistic Model Validation
• Ethics-cum-Epistemology
• Argumentation from expert opinion
