Abstract
Words in natural languages are organized into grammatical categories. Mintz (2002) suggested that frames (frequently occurring word pairs that span an intermediate target word) facilitate the categorization of that word. Artificial-language studies have demonstrated that dense overlap of distributional cues (frames and adjacent dependencies) across target words enhances category learning (Reeder et al., 2013). However, category learning was tested only with trained intermediate target words, whereas in natural languages category learning requires abstraction away from individual items.
We use the entropy model (Radulescu et al., 2020) to investigate generalization in frame-based categorization. This model provides a quantitative measure of input complexity (entropy) and argues that abstract generalizations are gradually attained as entropy exceeds the learner's processing capacity. We suggest that abstract category learning (generalization) requires high-entropy input.
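As a minimal illustration (a sketch, not the model's actual implementation), input complexity can be quantified as the Shannon entropy of the empirical distribution of frame/target-word co-occurrences; denser overlap of target words across frames then yields higher entropy. The corpora and labels below are hypothetical.

```python
from collections import Counter
from math import log2

def shannon_entropy(items):
    """Shannon entropy (bits) of the empirical distribution over items."""
    counts = Counter(items)
    total = sum(counts.values())
    return -sum((n / total) * log2(n / total) for n in counts.values())

# Hypothetical exposures, coded as (frame, target-word) pairs.
# Low-entropy condition: each target word occurs in few frames (sparse overlap).
low_entropy_input = [("A_B", "x"), ("A_B", "x"), ("C_D", "y"), ("C_D", "y")]
# High-entropy condition: target words overlap densely across frames.
high_entropy_input = [("A_B", "x"), ("A_B", "y"), ("C_D", "x"), ("C_D", "y")]

print(shannon_entropy(low_entropy_input))   # 1.0 bit
print(shannon_entropy(high_entropy_input))  # 2.0 bits
```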
Adults are exposed to an artificial language in either a low-entropy or a high-entropy condition (sparse vs. dense frame/target-word overlap) and are tested with grammaticality judgments. Familiar (trained) intervening target words in novel category-conforming vs. non-conforming combinations with frames test item-specific category learning. New (untrained) intervening items in category-conforming vs. non-conforming combinations with frames test abstract category learning.
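The test-item logic can be sketched as follows (hypothetical frames, target words, and category assignments; not the actual stimuli):

```python
# Hypothetical two-category design: frames of each category license
# the target words of that category.
frames = {"cat1": ["A _ B", "C _ D"], "cat2": ["E _ F", "G _ H"]}
trained = {"cat1": ["x1", "x2"], "cat2": ["y1", "y2"]}
untrained = {"cat1": ["x_new"], "cat2": ["y_new"]}

def test_items(targets):
    """Pair each target word with frames of its own category (conforming)
    and with frames of the other category (non-conforming)."""
    items = []
    for cat, words in targets.items():
        other = "cat2" if cat == "cat1" else "cat1"
        for word in words:
            items += [(frame, word, "conforming") for frame in frames[cat]]
            items += [(frame, word, "non-conforming") for frame in frames[other]]
    return items

item_specific_test = test_items(trained)   # familiar targets, novel combinations
abstract_test = test_items(untrained)      # new targets: requires generalization
```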
In line with our predictions, preliminary results suggest that both item-specific and abstract category learning are higher in the high-entropy condition. Furthermore, item-specific category learning is higher than abstract category learning in both conditions, and this difference is larger in the low-entropy condition.
| Original language | English |
|---|---|
| Number of pages | 1 |
| Publication status | Published - 5 Jun 2024 |
Funding
This work was supported by the Netherlands Organization for Scientific Research (NWO PGW.20.001).