Understanding Gender Biases in Knowledge Base Embeddings

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Knowledge base (KB) embeddings have been shown to contain gender biases. In this paper, we study two questions regarding these biases: how to quantify them, and how to trace their origins in the KB. Specifically, we first develop two novel bias measures, one for a group of person entities and one for an individual person entity. Evidence of their validity is obtained by comparison with real-world census data. Second, we use the influence function to inspect the contribution of each triple in the KB to the overall group bias. To exemplify the potential applications of our study, we also present two strategies (adding and removing KB triples) to mitigate gender biases in KB embeddings.
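To give a loose sense of what a group-level bias measure over embeddings can look like, the sketch below scores a set of person-entity vectors by their mean difference in cosine similarity to two gender direction vectors. This is an illustration under assumed inputs only, not the measure defined in the paper, which is formulated over KB link predictions; all names here (`group_bias_score`, the direction vectors) are hypothetical.

```python
import numpy as np

def group_bias_score(person_embs, male_dir, female_dir):
    """Toy group-level bias score: average over persons of
    cos(person, male_dir) - cos(person, female_dir).
    Illustration only -- NOT the paper's actual measure."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    diffs = [cos(p, male_dir) - cos(p, female_dir) for p in person_embs]
    return sum(diffs) / len(diffs)

# Toy usage with random 8-dimensional embeddings (hypothetical data).
rng = np.random.default_rng(0)
people = [rng.normal(size=8) for _ in range(5)]
score = group_bias_score(people, rng.normal(size=8), rng.normal(size=8))
print(score)
```

A score near zero would indicate that, on average, the group's embeddings sit no closer to one gender direction than the other; its sign indicates the direction of the skew.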
Original language: English
Title of host publication: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Editors: S. Muresan, P. Nakov, A. Villavicencio
Publisher: Association for Computational Linguistics
Pages: 1381–1395
DOIs
Publication status: Published - 2022
Event: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) - Dublin, Ireland
Duration: 22 May 2022 – 27 May 2022

Conference

Conference: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Country/Territory: Ireland
City: Dublin
Period: 22/05/22 – 27/05/22
