Abstract
Reinforcement learning approaches to cognitive modeling represent task acquisition as learning to choose the sequence of steps that accomplishes the task while maximizing a reward. However, an apparently unrecognized problem for modelers is choosing when, what, and how much to reward: when (the moment: end of trial, end of subtask, or some other interval of task performance), what (the objective function: e.g., performance time or performance accuracy), and how much (the magnitude: binary, categorical, or continuous values). In this article, we explore the problem space of these three parameters in the context of a task whose completion entails some combination of 36 state-action pairs, where all intermediate states (i.e., states after the initial state and before the end state) represent progressive but partial completion of the task. Different choices produce profoundly different learning paths and outcomes, with the strongest effect for moment. Unfortunately, the literature offers little discussion of the effects of these choices. This absence is disappointing, as the choice of when, what, and how much to reward must be made by a modeler for every learning model.
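To make the three parameters concrete, the sketch below (not taken from the article) shows one way a modeler's reward function might expose them explicitly. The `Step` class, its field names, and the thresholds are hypothetical illustrations of the design space, not the authors' implementation.

```python
from dataclasses import dataclass


@dataclass
class Step:
    """Hypothetical record of one completed step of task performance."""
    is_trial_end: bool      # did this step finish the trial?
    is_subtask_end: bool    # did this step finish a subtask?
    elapsed_time: float     # performance time so far, in seconds
    accuracy: float         # proportion correct, in [0, 1]


def reward(step, moment="trial_end", objective="accuracy", magnitude="binary"):
    """Return a reward governed by the three parameters the abstract names."""
    # When (the moment): emit a nonzero reward only at the chosen point.
    at_moment = step.is_trial_end if moment == "trial_end" else step.is_subtask_end
    if not at_moment:
        return 0.0

    # What (the objective function): score time or accuracy.
    if objective == "time":
        score = 1.0 / (1.0 + step.elapsed_time)  # faster -> higher score
    else:
        score = step.accuracy

    # How much (the magnitude): binary, categorical, or continuous values.
    if magnitude == "binary":
        return 1.0 if score >= 0.5 else 0.0
    if magnitude == "categorical":
        return round(score * 4) / 4  # one of 0, 0.25, 0.5, 0.75, 1
    return score  # continuous


# Example: reward only at trial end, scored on accuracy, as a binary value.
print(reward(Step(True, True, 12.0, 0.8)))  # -> 1.0
```

Varying these three keyword arguments independently is one way to enumerate the problem space the abstract describes; the article's point is that each combination can yield a different learning path and outcome.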
Original language | English |
---|---|
Pages (from-to) | 333-358 |
Number of pages | 26 |
Journal | Cognitive Science |
Volume | 36 |
Issue number | 2 |
Publication status | Published - 1 Mar 2012 |
Keywords
- Adaptive behavior
- Choice
- Cognitive architecture
- Expected utility
- Expected value
- Reinforcement learning
- Skill acquisition and learning
- Strategy selection