Learning compositional structures for semantic graph parsing

Meaghan Fowlie, Jonas Groschwitz, Alexander Koller

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

AM dependency parsing is a method for neural semantic graph parsing that exploits the principle of compositionality. While AM dependency parsers have been shown to be fast and accurate across several graphbanks, they require explicit annotations of the compositional tree structures for training. In the past, these were obtained using complex graphbank-specific heuristics written by experts. Here we show how they can instead be trained directly on the graphs with a neural latent-variable model, drastically reducing the amount and complexity of manual heuristics. We demonstrate that our model picks up on several linguistic phenomena on its own and achieves comparable accuracy to supervised training, greatly facilitating the use of AM dependency parsing for new sembanks.
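The abstract describes training the compositional tree structures as latent variables rather than relying on hand-written heuristics. As a rough illustration of that general idea (not the authors' actual model or code), the minimal sketch below treats each candidate structure for a graph as a latent choice, scores candidates with a small neural network, and minimizes the negative marginal log-likelihood over the candidates consistent with the gold graph. All names, dimensions, and the toy candidate set are illustrative assumptions.

```python
# Minimal latent-variable sketch (illustrative only, not the paper's implementation):
# the structure t for a graph g is unobserved, so training maximizes the marginal
# log-likelihood  log p(g) = log sum over gold-consistent t of p(t) under the model.

import torch
import torch.nn as nn


class LatentStructureScorer(nn.Module):
    def __init__(self, feat_dim: int, hidden_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, candidate_feats: torch.Tensor) -> torch.Tensor:
        # candidate_feats: (num_candidates, feat_dim) -> one log-score per candidate structure
        return self.net(candidate_feats).squeeze(-1)


def marginal_nll(scores: torch.Tensor, consistent: torch.Tensor) -> torch.Tensor:
    # scores: unnormalized log-scores for all enumerated candidate structures
    # consistent: boolean mask over candidates that actually derive the gold graph
    log_z = torch.logsumexp(scores, dim=0)                 # normalizer over all candidates
    log_gold = torch.logsumexp(scores[consistent], dim=0)  # mass on gold-consistent candidates
    return log_z - log_gold                                # negative marginal log-likelihood


if __name__ == "__main__":
    torch.manual_seed(0)
    feat_dim, num_candidates = 8, 20
    model = LatentStructureScorer(feat_dim)
    optim = torch.optim.Adam(model.parameters(), lr=1e-2)

    # Toy data: random features for candidate structures; 3 of them derive the gold graph.
    feats = torch.randn(num_candidates, feat_dim)
    consistent = torch.zeros(num_candidates, dtype=torch.bool)
    consistent[:3] = True

    for step in range(200):
        loss = marginal_nll(model(feats), consistent)
        optim.zero_grad()
        loss.backward()
        optim.step()
    print(f"final marginal NLL: {loss.item():.4f}")
```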
Original language: English
Title of host publication: Proceedings of the 5th Workshop on Structured Prediction for NLP
Pages: 22-36
Number of pages: 15
DOIs
Publication status: Published - 6 Aug 2021
Event: Structured Prediction for NLP - Online
Duration: 6 Aug 2021 – 6 Aug 2021
Conference number: 5
http://structuredprediction.github.io/SPNLP21

Workshop

Workshop: Structured Prediction for NLP
Abbreviated title: SPNLP
Period: 6/08/21 – 6/08/21
Internet address: http://structuredprediction.github.io/SPNLP21
