Abstract
In this study, we propose a bias-mitigation algorithm, dubbed ProxyMute, that uses an explainability method to detect proxy features of a given sensitive attribute (e.g., gender) and reduces their effect on decisions by disabling them at prediction time. We evaluate our method on a job-recruitment use case with two multimodal datasets, FairCVdb and ChaLearn LAP-FI. An exhaustive set of experiments shows that the information about proxy features provided by explainability methods is beneficial and can be used successfully for bias mitigation. Furthermore, when combined with a target-label normalization method, the proposed approach yields some of the fairest results on both experimental datasets without significantly degrading predictive performance compared to previous work. The scripts to reproduce the results are available at: https://github.com/gizemsogancioglu/expl-bias-mitigation.
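The sketch below is a minimal, hypothetical illustration of the pipeline the abstract describes, not the authors' implementation (see the linked repository for that). It assumes numpy, scikit-learn, and the shap package; the proxy-detection model, the `top_k` cutoff, and mean-value masking are illustrative assumptions.

```python
# Illustrative sketch only: rank features by how strongly they predict the
# sensitive attribute (via SHAP attributions), then "mute" the top proxies
# at prediction time. Model choice, cutoff, and fill value are assumptions.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

def find_proxy_features(X, sensitive, top_k=5):
    """Return indices of the top_k features whose SHAP importance is
    highest for a model trained to predict the sensitive attribute."""
    clf = RandomForestClassifier(random_state=0).fit(X, sensitive)
    explainer = shap.TreeExplainer(clf)
    values = explainer.shap_values(X)
    if isinstance(values, list):       # older shap: one array per class
        values = values[1]
    elif values.ndim == 3:             # newer shap: (n, features, classes)
        values = values[:, :, 1]
    importance = np.abs(values).mean(axis=0)   # mean |SHAP| per feature
    return np.argsort(importance)[::-1][:top_k]

def mute_features(X, proxy_idx):
    """Disable proxy features at prediction time by replacing them with
    a neutral value (here: the per-column mean)."""
    X_muted = X.copy()
    X_muted[:, proxy_idx] = X.mean(axis=0)[proxy_idx]
    return X_muted
```

At prediction time, the muted feature matrix would be fed to the downstream recruitment model in place of the original features, so the detected proxies can no longer influence its decisions.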
| Original language | English |
| --- | --- |
| Pages | 631-639 |
| Number of pages | 9 |
| DOIs | |
| Publication status | Published - 9 Oct 2023 |
| Event | 25th International Conference on Multimodal Interaction, Paris, France. Duration: 9 Oct 2023 → 13 Oct 2023. https://icmi.acm.org/2023/ |
Conference
| Conference | 25th International Conference on Multimodal Interaction |
| --- | --- |
| Abbreviated title | ICMI'23 |
| Country/Territory | France |
| City | Paris |
| Period | 9/10/23 → 13/10/23 |
| Internet address | https://icmi.acm.org/2023/ |
Keywords
- SHAP
- automatic recruitment
- behavior analysis
- bias mitigation
- explainability
- fairness