Neural Proof Nets

Research output: Contribution to conference › Paper › Academic

Abstract

Linear logic and the linear λ-calculus have a long-standing tradition in the study of natural language form and meaning. Among the proof calculi of linear logic, proof nets are of particular interest, offering an attractive geometric representation of derivations that is unburdened by the bureaucratic complications of conventional proof-theoretic formats. Building on recent advances in set-theoretic learning, we propose a neural variant of proof nets based on Sinkhorn networks, which allows us to translate parsing as the problem of extracting syntactic primitives and permuting them into alignment. Our methodology induces a batch-efficient, end-to-end differentiable architecture that actualizes a formally grounded yet highly efficient neuro-symbolic parser. We test our approach on ÆThel, a dataset of type-logical derivations for written Dutch, where it manages to correctly transcribe raw text sentences into proofs and terms of the linear λ-calculus with an accuracy as high as 70%.
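The Sinkhorn networks the abstract refers to build on Sinkhorn normalisation: alternately normalising the rows and columns of a positive score matrix drives it toward a doubly stochastic matrix, whose largest entries suggest a (soft) permutation aligning primitives with their targets. The sketch below is a minimal, illustrative implementation of that iteration in plain Python; the function name, score matrix, and iteration count are assumptions for illustration, not the authors' code.

```python
import math

def sinkhorn(logits, n_iters=20):
    """Sinkhorn normalisation: alternately rescale rows and columns of
    exp(logits) so the matrix approaches a doubly stochastic one."""
    m = [[math.exp(x) for x in row] for row in logits]
    rows, cols = len(m), len(m[0])
    for _ in range(n_iters):
        # Row normalisation: each row sums to 1.
        m = [[x / sum(row) for x in row] for row in m]
        # Column normalisation: each column sums to 1.
        col_sums = [sum(m[i][j] for i in range(rows)) for j in range(cols)]
        m = [[m[i][j] / col_sums[j] for j in range(cols)] for i in range(rows)]
    return m

# Hypothetical pairwise scores between 3 extracted primitives and 3 slots.
scores = [[2.0, 0.1, 0.1],
          [0.1, 0.1, 2.0],
          [0.1, 2.0, 0.1]]
p = sinkhorn(scores)
alignment = [max(range(3), key=lambda j: p[i][j]) for i in range(3)]
```

Reading off the largest entry per row of `p` recovers the hard permutation (here, primitive 0 → slot 0, 1 → slot 2, 2 → slot 1); in a differentiable parser the soft matrix itself is used during training.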
Original language: English
Pages: 26–40
DOIs
Publication status: Published - 2020
Event: The SIGNLL Conference on Computational Natural Language Learning
Duration: 19 Nov 2020 – 20 Nov 2020
https://www.conll.org/

Conference

Conference: The SIGNLL Conference on Computational Natural Language Learning
Abbreviated title: CoNLL
Period: 19/11/20 – 20/11/20
Internet address: https://www.conll.org/

Keywords

  • Categorial Grammar
  • Linear Logic
  • Parsing
