TY - UNPB
T1 - Learning optionality and repetition
AU - Fowlie, Meaghan
PY - 2014
Y1 - 2014
N2 - Adjuncts are syntactic elements that are optional, transparent to selection, and often, though not always, repeatable. Classic examples are adjectives and adverbs. How do learners learn optionality? How do they learn repeatability? I explore a variety of learners’ approaches to optionality and repeatability, including both human learners and learning algorithms. Mathematically, a learner is a function from an input text to a grammar. The kinds of patterns in the input that the learner is sensitive to depend on the assumptions that the particular learner makes about the nature of the language. In learnability theory, learners of Regular languages are much better understood than those for languages higher on the Chomsky hierarchy (Chomsky 1959). Since human languages are known to be Mildly Context-Sensitive (Joshi 1985), learning algorithms for such languages are clearly more relevant to actual human language learning; however, as research into such learners is still in its infancy, and since our understanding of Regular learners has driven higher-level learners (see for example Clark, Eyraud, & Habrard (2008)’s substitutable CF learner and Yoshinaka (2008)’s k,l-substitutable CF learner, which are extensions of Angluin (1982)’s 0- and k-reversible learners to the context-free level), I will look at learners low on the Chomsky hierarchy as well. I provide here three examples.
M3 - Working paper
VL - 14
T3 - UCLA Working Papers in Linguistics
BT - Learning optionality and repetition
PB - UCLA Dept of Linguistics
ER -