Fair inference on error-prone outcomes

Research output: Working paper › Preprint › Academic

Abstract

Fair inference in supervised learning is an important and active area of research, yielding a range of useful methods to assess and account for fairness criteria when predicting ground-truth targets. As shown in recent work, however, when target labels are error-prone, potential prediction unfairness can arise from measurement error. In this paper, we show that, when an error-prone proxy target is used, existing methods to assess and calibrate fairness criteria do not extend to the true target variable of interest. To remedy this problem, we suggest a framework that combines two existing literatures: fair ML methods, such as those in the counterfactual fairness literature, on the one hand, and measurement models from the statistical literature on the other. We discuss these approaches and how their connection yields our framework. In a healthcare decision problem, we find that using a latent variable model to account for measurement error removes the unfairness detected previously.
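
The core point of the abstract can be illustrated with a small simulation. The sketch below is a hypothetical, simplified illustration only, not the paper's actual model: instead of a full latent variable model, it assumes known group-specific sensitivity and specificity for the proxy label and inverts that misclassification model directly. All names (A, Y, Y_star, error_model) are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000

    # Protected attribute A and true (unobserved) outcome Y, generated so
    # the true outcome rate is identical (30%) in both groups.
    A = rng.integers(0, 2, size=n)
    Y = rng.binomial(1, 0.30, size=n)

    # Group-dependent measurement error in the proxy label Y* (differential
    # item functioning): group 1's outcome is recorded with lower sensitivity.
    error_model = {0: (0.95, 0.90), 1: (0.75, 0.90)}  # (sensitivity, specificity)
    sens = np.where(A == 1, error_model[1][0], error_model[0][0])
    spec = np.where(A == 1, error_model[1][1], error_model[0][1])

    p_obs = np.where(Y == 1, sens, 1 - spec)   # P(Y* = 1 | Y, A)
    Y_star = rng.binomial(1, p_obs)

    # A fairness audit against the error-prone proxy suggests a disparity ...
    print("proxy outcome rate by group:",
          {a: round(Y_star[A == a].mean(), 3) for a in (0, 1)})

    # ... which vanishes after inverting the misclassification model:
    #   P(Y* = 1) = sens * P(Y = 1) + (1 - spec) * (1 - P(Y = 1))
    #   =>  P(Y = 1) = (P(Y* = 1) - (1 - spec)) / (sens + spec - 1)
    def corrected_rate(y_star_rate, sens, spec):
        return (y_star_rate - (1 - spec)) / (sens + spec - 1)

    print("measurement-corrected rate by group:",
          {a: round(corrected_rate(Y_star[A == a].mean(), *error_model[a]), 3)
           for a in (0, 1)})

With these settings the proxy-based group rates differ (roughly 0.36 vs. 0.30) while the corrected rates agree at 0.30, mirroring the finding that accounting for measurement error in the target can remove apparent unfairness.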
Original language: English
Publisher: arXiv
Pages: 1-14
Publication status: Published - 17 Mar 2020

Keywords

  • Fairness
  • Fair machine learning
  • Measurement error
  • Algorithmic bias
  • Measurement invariance
  • Differential item functioning
  • Item bias
  • Latent variable model

