Abstract
Grounding has been argued to be a crucial component in developing more complete and truly semantically competent artificial intelligence systems. The literature is divided into two camps: while some argue that grounding allows for qualitatively different generalizations, others believe it can be compensated for by mono-modal data quantity. Limited empirical evidence has emerged for or against either position, which we argue is due to the methodological challenges of studying grounding and its effects on NLP systems. In this paper, we establish a methodological framework for studying what effects, if any, arise from providing models with richer input sources than text alone. Its crux lies in constructing comparable samples of populations of models trained on different input modalities, so that we can tease apart the qualitative effects of different input sources from quantifiable model performance. Experiments using this framework reveal qualitative differences in model behavior between cross-modally grounded, cross-lingually grounded, and ungrounded models, which we measure both at a global dataset level and for specific word representations, depending on how concrete their semantics is.
Original language | English
---|---
Pages | 11031-11042
Publication status | Published - 2023
Event | The 2023 Conference on Empirical Methods in Natural Language Processing (Findings), 6 Dec 2023 → 10 Dec 2023