Privacy constrained fairness estimation for decision trees

Florian van der Steen*, Fré Vink, Heysem Kaya*

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

The protection of sensitive data becomes more vital as data increases in value and potency. At the same time, pressure from regulators and society on model developers to make their Artificial Intelligence (AI) models non-discriminatory is increasing. Moreover, high-stakes tasks call for interpretable, transparent AI models. In general, measuring the fairness of any AI model requires the sensitive attributes of the individuals in the dataset, which raises privacy concerns. In this work, the trade-offs between fairness (in terms of Statistical Parity (SP)), privacy (quantified with a budget), and interpretability are further explored in the context of Decision Trees (DTs) as intrinsically interpretable models. We propose a novel method, dubbed Privacy-Aware Fairness Estimation of Rules (PAFER), that can estimate SP in a Differential Privacy (DP)-aware manner for DTs. Our method is the first to assess algorithmic fairness at the rule level, providing insight into sources of discrimination for policy makers. DP guarantees privacy by adding noise to the sensitive data, which is securely held by a third-party legal entity. We experimentally compare several DP mechanisms and show that, using the Laplacian mechanism, the method estimates SP with low error while guaranteeing the privacy of the individuals in the dataset with high certainty. We further show, both experimentally and theoretically, that the method performs better for those DTs that humans generally find easier to interpret.
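
This page carries only the abstract, but the central mechanism, releasing the group counts needed for an SP estimate under a privacy budget via the Laplacian mechanism, can be sketched in a few lines. The snippet below is a minimal illustration and not the authors' PAFER implementation: the function names (laplace_count, estimate_sp), the dataset-level (rather than per-rule) SP query, and the even split of the budget over the four count queries are assumptions made for the example.

```python
import numpy as np

def laplace_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count under epsilon-DP using the Laplace mechanism."""
    return true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

def estimate_sp(sensitive, predictions, epsilon: float) -> float:
    """Estimate the Statistical Parity difference |P(yhat=1|A=0) - P(yhat=1|A=1)|
    from noisy counts.

    sensitive   : binary array held by the trusted curator (assumption)
    predictions : binary DT outputs, e.g. membership of positive-predicting leaves
    epsilon     : total privacy budget, split evenly over the four count queries
                  (simple sequential composition)
    """
    eps_per_query = epsilon / 4
    rate = {}
    for group in (0, 1):
        mask = sensitive == group
        total = max(laplace_count(mask.sum(), eps_per_query), 1.0)  # avoid division by zero
        positive = laplace_count((mask & (predictions == 1)).sum(), eps_per_query)
        rate[group] = positive / total
    return abs(rate[0] - rate[1])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.integers(0, 2, size=1_000)      # sensitive attribute (curator side)
    yhat = rng.integers(0, 2, size=1_000)   # decision-tree predictions (analyst side)
    print(f"Noisy SP estimate (epsilon=1.0): {estimate_sp(a, yhat, 1.0):.3f}")
```

The same count queries could in principle be issued per decision rule (leaf) to localize sources of discrimination, at the cost of spending part of the budget on each rule.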

Original language: English
Article number: 308
Number of pages: 27
Journal: Applied Intelligence
Volume: 55
Issue number: 5
DOIs
Publication status: Published - 13 Jan 2025

Bibliographical note

Publisher Copyright:
© The Author(s) 2024.

Funding

The research leading to this article was conducted during an internship at the Dutch Central Government Audit Service (ADR) as part of the Utrecht University MSc thesis study of the first author.

Funders: Dutch Central Government Audit Service (ADR)

Keywords

• Differential privacy
• Fairness
• Interpretability
• Responsible AI
