TY - JOUR
T1 - Achieving Fair Inference Using Error-Prone Outcomes
AU - Boeschoten, Laura
AU - van Kesteren, Erik-Jan
AU - Bagheri, Ayoub
AU - Oberski, Daniel L.
N1 - Publisher Copyright:
© 2021, Universidad Internacional de La Rioja. All rights reserved.
PY - 2021/3
Y1 - 2021/3
N2 - Recently, a growing body of research has focused on methods to assess and account for fairness criteria when predicting ground truth targets in supervised learning. However, the literature has shown that prediction unfairness can arise from measurement error when target labels are error-prone. In this study, we demonstrate that existing methods to assess and calibrate fairness criteria do not extend to the true target variable of interest when an error-prone proxy target is used. As a solution to this problem, we suggest a framework that combines two existing fields of research: fair ML methods, such as those found in the counterfactual fairness literature, and measurement models found in the statistical literature. First, we discuss these approaches and how they can be combined to form our framework. We then show that, in a healthcare decision problem, a latent variable model that accounts for measurement error removes the unfairness detected previously.
AB - Recently, a growing body of research has focused on methods to assess and account for fairness criteria when predicting ground truth targets in supervised learning. However, the literature has shown that prediction unfairness can arise from measurement error when target labels are error-prone. In this study, we demonstrate that existing methods to assess and calibrate fairness criteria do not extend to the true target variable of interest when an error-prone proxy target is used. As a solution to this problem, we suggest a framework that combines two existing fields of research: fair ML methods, such as those found in the counterfactual fairness literature, and measurement models found in the statistical literature. First, we discuss these approaches and how they can be combined to form our framework. We then show that, in a healthcare decision problem, a latent variable model that accounts for measurement error removes the unfairness detected previously.
KW - Algorithmic Bias
KW - Fair Machine Learning
KW - Latent Variable Model
KW - Measurement Error
KW - Measurement Invariance
UR - http://www.scopus.com/inward/record.url?scp=85108316914&partnerID=8YFLogxK
U2 - 10.9781/ijimai.2021.02.007
DO - 10.9781/ijimai.2021.02.007
M3 - Article
SN - 1989-1660
VL - 6
SP - 9
EP - 15
JO - International Journal of Interactive Multimedia and Artificial Intelligence
JF - International Journal of Interactive Multimedia and Artificial Intelligence
IS - 5
ER -