TY - GEN
T1 - Explaining Contextualization in Language Models using Visual Analytics
AU - Sevastjanova, Rita
AU - Kalouli, Aikaterini-Lida
AU - Beck, Christin
AU - Schäfer, Hanna
AU - El-Assady, Mennatallah
N1 - Funding Information:
We thank the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) for funding within project BU 1806/10-2 “Questions Visualized” of the FOR2111 and project D02 “Evaluation Metrics for Visual Analytics in Linguistics” (Project ID: 251654672 – TRR 161).
Publisher Copyright:
© 2021 Association for Computational Linguistics
PY - 2021
AB - Despite the success of contextualized language models on various NLP tasks, it is still unclear what these models really learn. In this paper, we contribute to the current efforts of explaining such models by exploring the continuum between function and content words with respect to contextualization in BERT, based on linguistically-informed insights. In particular, we utilize scoring and visual analytics techniques: we use an existing similarity-based score to measure contextualization and integrate it into a novel visual analytics technique, presenting the model's layers simultaneously and highlighting intra-layer properties and inter-layer differences. We show that contextualization is neither driven by polysemy nor by pure context variation. We also provide insights on why BERT fails to model words in the middle of the functionality continuum.
UR - http://www.scopus.com/inward/record.url?scp=85118923186&partnerID=8YFLogxK
DO - 10.18653/v1/2021.acl-long.39
M3 - Conference contribution
SN - 9781954085527
SP - 464
EP - 476
BT - Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
A2 - Zong, Chengqing
A2 - Xia, Fei
A2 - Li, Wenjie
A2 - Navigli, Roberto
PB - Association for Computational Linguistics
ER -