Abstract
Word embeddings have advanced the state of the art in NLP across numerous tasks. Understanding the contents of dense neural representations is of utmost interest to the computational semantics community. We propose to focus on relating these opaque word vectors with human-readable definitions, as found in dictionaries. This problem naturally divides into two subtasks: converting definitions into embeddings (reverse dictionary) and converting embeddings into definitions (definition modeling). The task was conducted in a multilingual setting, using comparable sets of embeddings trained homogeneously.
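The two subtasks above can be illustrated with a minimal sketch. The toy embedding table, the word-averaging baseline for the definition-to-embedding direction, and the nearest-neighbour lookup standing in for definition generation are all illustrative assumptions, not the task's actual data or systems:

```python
# Toy embedding table for illustration only -- the shared task uses
# much larger embeddings trained homogeneously across languages.
toy_embeddings = {
    "small": [0.9, 0.1],
    "feline": [0.2, 0.8],
    "cat": [0.5, 0.5],
}

def definition_to_embedding(definition):
    """Reverse-dictionary direction: map a gloss to a vector.
    Here: average the vectors of its in-vocabulary words,
    a common simple baseline (an assumption, not the task baseline)."""
    vecs = [toy_embeddings[w] for w in definition.split() if w in toy_embeddings]
    if not vecs:
        return None
    dim = len(vecs[0])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

def nearest_word(vector):
    """Stand-in for the embedding-to-definition direction: real
    definition modeling generates a gloss; here we only retrieve
    the nearest word by cosine similarity to keep the sketch tiny."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb)
    return max(toy_embeddings, key=lambda w: cos(toy_embeddings[w], vector))

vec = definition_to_embedding("small feline")
print(nearest_word(vec))  # -> cat (in this toy table)
```

The sketch only shows the direction of each mapping; actual systems train neural encoders and decoders on aligned (gloss, embedding) pairs.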
Original language | English
---|---
Title of host publication | Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
Publisher | Association for Computational Linguistics
Number of pages | 14
DOIs |
Publication status | Published - Jul 2022