Explaining Contextualization in Language Models using Visual Analytics

Rita Sevastjanova, Aikaterini-Lida Kalouli, Christin Beck, Hanna Schäfer, Mennatallah El-Assady

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Despite the success of contextualized language models on various NLP tasks, it is still unclear what these models really learn. In this paper, we contribute to the current efforts of explaining such models by exploring the continuum between function and content words with respect to contextualization in BERT, based on linguistically-informed insights. In particular, we utilize scoring and visual analytics techniques: we use an existing similarity-based score to measure contextualization and integrate it into a novel visual analytics technique, presenting the model's layers simultaneously and highlighting intra-layer properties and inter-layer differences. We show that contextualization is neither driven by polysemy nor by pure context variation. We also provide insights on why BERT fails to model words in the middle of the functionality continuum.
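
The similarity-based contextualization score mentioned in the abstract can be made concrete. One existing measure of this kind is self-similarity (Ethayarajh, 2019): the average pairwise cosine similarity of a word's contextualized representations across different sentences, computed per layer, where low self-similarity indicates strong contextualization. The sketch below is a minimal illustration of such a score under that assumption, not the authors' implementation; the checkpoint (bert-base-uncased), the layer index, the example sentences, and the single-sub-token word lookup are all simplifications chosen for brevity.

import torch
from itertools import combinations
from transformers import BertModel, BertTokenizer

# Illustrative setup; bert-base-uncased is an assumption, not necessarily the paper's checkpoint.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

def word_vectors(word, sentences, layer):
    """Collect the hidden state of `word` at the given layer, one vector per sentence.
    Simplification: only handles words that map to a single BERT sub-token."""
    word_id = tokenizer.convert_tokens_to_ids(word)
    vectors = []
    for sentence in sentences:
        inputs = tokenizer(sentence, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**inputs).hidden_states[layer][0]  # (seq_len, dim)
        token_ids = inputs["input_ids"][0].tolist()
        if word_id in token_ids:
            vectors.append(hidden[token_ids.index(word_id)])
    return vectors

def self_similarity(word, sentences, layer):
    """Average pairwise cosine similarity of a word's representations across contexts;
    lower values mean the layer contextualizes the word more strongly."""
    vecs = word_vectors(word, sentences, layer)
    pairs = list(combinations(vecs, 2))  # requires the word in at least two sentences
    sims = [torch.cosine_similarity(a, b, dim=0).item() for a, b in pairs]
    return sum(sims) / len(sims)

# Hypothetical usage: score a polysemous content word across contexts at one layer.
contexts = ["The bank approved the loan.",
            "We sat on the river bank.",
            "The bank closes at noon."]
print(self_similarity("bank", contexts, layer=8))

Repeating this for every layer (0 through 12 for bert-base) yields the kind of per-layer profile that, per the abstract, is presented simultaneously across layers by the paper's visual analytics technique.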

Original language: English
Title of host publication: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
Editors: Chengqing Zong, Fei Xia, Wenjie Li, Roberto Navigli
Publisher: Association for Computational Linguistics
Pages: 464-476
Number of pages: 13
ISBN (Electronic): 9781954085527
ISBN (Print): 9781954085527
DOIs
Publication status: Published - 2021
