On Learning Interpreted Languages with Recurrent Models

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Can recurrent neural nets, inspired by human sequential data processing, learn to understand language? We construct simplified data sets reflecting core properties of natural language as modeled in formal syntax and semantics: recursive syntactic structure and compositionality. We find that LSTM and GRU networks generalize well to compositional interpretation, but only in the most favorable learning settings: a well-paced curriculum, extensive training data, and left-to-right (but not right-to-left) composition.
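
The abstract describes training recurrent networks to map strings of a simplified recursive language to their compositional interpretation. As a rough illustration of that kind of setup, and not the paper's actual data, architecture, or hyperparameters, the sketch below trains a small PyTorch GRU that reads a made-up language of iterated negation ("not not ... 0" / "... 1") left to right and predicts the expression's truth value. The vocabulary, the make_example generator, the GRUInterpreter model, and all sizes are hypothetical choices introduced only for this sketch.

# Minimal sketch (assumptions noted above): a GRU composes a toy
# recursive "not ... bit" language left-to-right and is trained to
# output the expression's interpretation (0 or 1).
import random
import torch
import torch.nn as nn

VOCAB = {"<pad>": 0, "not": 1, "0": 2, "1": 3}

def make_example(max_depth=6):
    """Sample a recursive expression and its compositional value."""
    depth = random.randint(0, max_depth)
    bit = random.randint(0, 1)
    tokens = ["not"] * depth + [str(bit)]
    value = bit if depth % 2 == 0 else 1 - bit
    return tokens, value

def encode_batch(batch):
    """Pad token sequences and return (ids, lengths, labels) tensors."""
    seqs, labels = zip(*batch)
    lengths = torch.tensor([len(s) for s in seqs])
    ids = torch.zeros(len(seqs), int(lengths.max()), dtype=torch.long)
    for i, s in enumerate(seqs):
        ids[i, : len(s)] = torch.tensor([VOCAB[t] for t in s])
    return ids, lengths, torch.tensor(labels)

class GRUInterpreter(nn.Module):
    """Embeds tokens, composes them left-to-right with a GRU, and
    classifies the final hidden state as the expression's value."""
    def __init__(self, vocab_size=len(VOCAB), dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim, padding_idx=0)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, 2)

    def forward(self, ids, lengths):
        packed = nn.utils.rnn.pack_padded_sequence(
            self.embed(ids), lengths, batch_first=True, enforce_sorted=False)
        _, h = self.gru(packed)          # h: (1, batch, dim)
        return self.out(h.squeeze(0))    # logits over {0, 1}

model = GRUInterpreter()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(500):
    ids, lengths, labels = encode_batch([make_example() for _ in range(32)])
    loss = loss_fn(model(ids, lengths), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

A curriculum in the spirit of the abstract could be mimicked by starting with a small max_depth and increasing it over training, and a right-to-left variant by reversing each token sequence before encoding; the paper's own curriculum and composition settings are described in the article itself.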
Original language: English
Pages (from-to): 471-483
Number of pages: 13
Journal: Computational Linguistics
Volume: 48
Issue number: 2
DOIs
Publication status: Published - Jun 2022
