Using Explainability for Bias Mitigation: A Case Study for Fair Recruitment Assessment

Gizem Sogancioglu, Heysem Kaya, Albert Ali Salah

Research output: Contribution to conference › Paper › Academic

Abstract

In this study, we propose a bias-mitigation algorithm, dubbed ProxyMute, that uses an explainability method to detect proxy features of a given sensitive attribute (e.g., gender) and reduces their effect on decisions by disabling them at prediction time. We evaluate our method for a job recruitment use case on two multimodal datasets, FairCVdb and ChaLearn LAP-FI. An exhaustive set of experiments shows that the information about proxy features provided by explainability methods is beneficial and can be successfully used for bias mitigation. Furthermore, when combined with a target label normalization method, the proposed approach yields one of the fairest results on both experimental datasets without significantly deteriorating predictive performance compared to previous works. The scripts to reproduce the results are available at: https://github.com/gizemsogancioglu/expl-bias-mitigation.
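For illustration, below is a minimal Python sketch of the idea described in the abstract: use SHAP (listed in the keywords) to rank features by how strongly they predict the sensitive attribute, treat the top-ranked ones as proxies, and disable them at prediction time. This is not the authors' implementation (see the linked repository for that); the helper names find_proxy_features and mute_features, the random-forest attribute predictor, and the mean-fill muting strategy are all assumptions made for the example.

import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier


def find_proxy_features(X, s, top_k=5):
    """Rank features of X (numpy array) by how strongly they predict the
    sensitive attribute s, using mean |SHAP| value as the importance score."""
    # Train an auxiliary classifier that predicts the sensitive attribute.
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, s)
    explainer = shap.TreeExplainer(clf)
    shap_values = explainer.shap_values(X)
    # Older SHAP versions return a list of per-class arrays; newer ones return
    # a single (samples, features, classes) array for binary classifiers.
    if isinstance(shap_values, list):
        values = np.asarray(shap_values[1])
    else:
        values = np.asarray(shap_values)
        if values.ndim == 3:
            values = values[:, :, 1]
    importance = np.abs(values).mean(axis=0)
    # Features that best explain the sensitive attribute are treated as proxies.
    return np.argsort(importance)[::-1][:top_k]


def mute_features(X, proxy_idx, fill_values=None):
    """Disable proxy features at prediction time by replacing them with a
    neutral value (here the column mean) so they cannot drive the decision."""
    X_muted = X.copy()
    fill = X.mean(axis=0) if fill_values is None else fill_values
    X_muted[:, proxy_idx] = fill[proxy_idx]
    return X_muted


# Hypothetical usage with a pre-trained recruitment scoring model:
# proxy_idx = find_proxy_features(X_train, gender_train, top_k=5)
# scores = score_model.predict(mute_features(X_test, proxy_idx))

In this sketch the muted columns are replaced with their column mean rather than dropped, so a downstream scoring model can be applied unchanged; whether the paper mutes by zeroing, mean-filling, or another neutral value is a detail left to the linked repository.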
Original language: English
Pages: 631-639
Number of pages: 9
Publication status: Published - 9 Oct 2023
Event: 25th International Conference on Multimodal Interaction - Paris, France
Duration: 9 Oct 2023 – 13 Oct 2023
https://icmi.acm.org/2023/

Conference

Conference: 25th International Conference on Multimodal Interaction
Abbreviated title: ICMI'23
Country/Territory: France
City: Paris
Period: 9/10/23 – 13/10/23
Internet address: https://icmi.acm.org/2023/

Keywords

  • SHAP
  • automatic recruitment
  • behavior analysis
  • bias mitigation
  • explainability
  • fairness
