Using Explainability for Bias Mitigation: A Case Study for Fair Recruitment Assessment

Research output: Contribution to conference › Paper › Academic


In this study, we propose a bias-mitigation algorithm, dubbed ProxyMute, that uses an explainability method to detect proxy features of a given sensitive attribute (e.g., gender) and reduces their effect on decisions by disabling them at prediction time. We evaluate our method on a job-recruitment use case, using two multimodal datasets, namely FairCVdb and ChaLearn LAP-FI. An exhaustive set of experiments shows that the proxy-feature information provided by explainability methods is beneficial and can be successfully used for bias mitigation. Furthermore, when combined with a target-label normalization method, the proposed approach yields among the fairest results on both experimental datasets without significantly degrading predictive performance compared to previous work. The scripts to reproduce the results are available at:
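The core idea can be illustrated on a toy linear model. The sketch below is an illustrative simplification, not the authors' implementation: all names are hypothetical, the per-feature contributions `w_j * x_j` stand in for SHAP values (with which they coincide for an independent linear model), and the correlation threshold is arbitrary. A feature whose contribution correlates strongly with the sensitive attribute is flagged as a proxy and "muted" at prediction time by replacing it with its mean.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, n)              # sensitive attribute (not a model input)
proxy = gender + 0.1 * rng.normal(size=n)   # proxy feature: strongly encodes gender
skill = rng.normal(size=n)                  # legitimate, gender-independent feature
X = np.column_stack([proxy, skill])
w = np.array([1.0, 1.0])                    # toy linear scorer

# Step 1: attribute predictions to features. For this linear model the
# contribution of feature j to a score is simply w_j * x_j (a crude
# stand-in for SHAP values).
contrib = X * w

# Step 2: flag proxies -- features whose contributions correlate with the
# sensitive attribute beyond a (hypothetical) threshold.
corr = np.array(
    [abs(np.corrcoef(contrib[:, j], gender)[0, 1]) for j in range(X.shape[1])]
)
proxies = corr > 0.5

# Step 3: "mute" proxy features at prediction time by replacing them with
# their training mean, so they no longer move individual scores.
X_masked = X.copy()
X_masked[:, proxies] = X[:, proxies].mean(axis=0)
scores = X_masked @ w
```

After muting, the scores are driven only by the legitimate feature, so their correlation with the sensitive attribute collapses toward zero while non-proxy information is preserved.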
Original language: English
Number of pages: 9
Publication status: Published - 9 Oct 2023
Event: 25th International Conference on Multimodal Interaction - Paris, France
Duration: 9 Oct 2023 – 13 Oct 2023


Conference: 25th International Conference on Multimodal Interaction
Abbreviated title: ICMI'23


  • SHAP
  • automatic recruitment
  • behavior analysis
  • bias mitigation
  • explainability
  • fairness


