Constructive Type-Logical Supertagging with Self-Attention Networks

Research output: Contribution to conference › Paper › Academic

Abstract

We propose a novel application of self-attention networks towards grammar induction. We present an attention-based supertagger for a refined type-logical grammar, trained to construct types inductively. In addition to achieving a high overall type accuracy, our model is able to learn the syntax of the grammar's type system along with its denotational semantics. This lifts the closed world assumption commonly made by lexicalized grammar supertaggers, greatly enhancing its generalization potential. This is evidenced both by its adequate accuracy over sparse word types and its ability to correctly construct complex types never seen during training, which, to the best of our knowledge, had not been accomplished before.
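
The key idea in the abstract is that the tagger spells out each supertag from a small, closed alphabet of type-forming symbols rather than choosing from a fixed inventory of whole tags, which is what allows it to produce types it never saw in training. The sketch below is a rough, hypothetical illustration of that constructive setup, not the paper's actual model or code: the names ConstructiveSupertagger, TYPE_SYMBOLS, and greedy_decode, the symbol alphabet, and all architecture details are assumptions made for the example.

```python
# Minimal sketch (not the authors' code): a supertagger that emits each word's
# type as a sequence of type-forming symbols. The symbol alphabet is closed,
# but the set of full types it can spell out is unbounded.
import torch
import torch.nn as nn

# Illustrative alphabet: a few primitive types plus binary connectives.
TYPE_SYMBOLS = ["<pad>", "<sos>", "<sep>", "np", "s", "pp", "/", "\\"]
sym2idx = {s: i for i, s in enumerate(TYPE_SYMBOLS)}

class ConstructiveSupertagger(nn.Module):
    def __init__(self, word_vocab, d_model=64, nhead=4, nlayers=2):
        super().__init__()
        self.word_emb = nn.Embedding(word_vocab, d_model)
        self.sym_emb = nn.Embedding(len(TYPE_SYMBOLS), d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True), nlayers)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True), nlayers)
        self.out = nn.Linear(d_model, len(TYPE_SYMBOLS))

    def forward(self, words, sym_prefix):
        # words: (batch, n_words); sym_prefix: (batch, n_syms) generated so far
        memory = self.encoder(self.word_emb(words))
        tgt = self.sym_emb(sym_prefix)
        n = tgt.size(1)
        # Causal mask so each position only attends to earlier symbols.
        mask = torch.triu(torch.full((n, n), float("-inf")), diagonal=1)
        hidden = self.decoder(tgt, memory, tgt_mask=mask)
        return self.out(hidden)  # next-symbol logits at every position

def greedy_decode(model, words, max_len=20):
    # Spell out the sentence's type sequence symbol by symbol.
    prefix = torch.full((words.size(0), 1), sym2idx["<sos>"], dtype=torch.long)
    for _ in range(max_len):
        logits = model(words, prefix)
        nxt = logits[:, -1].argmax(-1, keepdim=True)
        prefix = torch.cat([prefix, nxt], dim=1)
    return prefix

if __name__ == "__main__":
    model = ConstructiveSupertagger(word_vocab=1000)
    words = torch.randint(0, 1000, (1, 5))       # a dummy 5-word sentence
    print(greedy_decode(model, words).tolist())  # untrained: arbitrary symbols
```

Because the output alphabet is fixed while the types it spells out are not, the decoder can assemble a composite type it never produced during training, which is the generalization property the abstract emphasizes.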
Original language: English
Pages: 113-123
Publication status: Published - 24 May 2019
Event: Representation Learning For NLP: ACL Workshop - Florence, Italy
Duration: 2 Aug 2019 - 2 Aug 2019
Conference number: 4
https://sites.google.com/view/repl4nlp2019

Workshop

Workshop: Representation Learning For NLP
Abbreviated title: REPL4NLP
Country/Territory: Italy
City: Florence
Period: 2/08/19 - 2/08/19
Internet address: https://sites.google.com/view/repl4nlp2019

Bibliographical note

Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019)
