When, What, and How Much to Reward in Reinforcement Learning-Based Models of Cognition

Christian P. Janssen*, Wayne D. Gray

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Reinforcement learning approaches to cognitive modeling represent task acquisition as learning to choose the sequence of steps that accomplishes the task while maximizing a reward. However, an apparently unrecognized problem for modelers is choosing when, what, and how much to reward; that is, when (the moment: end of trial, subtask, or some other interval of task performance), what (the objective function: e.g., performance time or performance accuracy), and how much (the magnitude: with binary, categorical, or continuous values). In this article, we explore the problem space of these three parameters in the context of a task whose completion entails some combination of 36 state-action pairs, where all intermediate states (i.e., after the initial state and prior to the end state) represent progressive but partial completion of the task. Different choices produce profoundly different learning paths and outcomes, with the strongest effect for moment. Unfortunately, there is little discussion in the literature of the effect of such choices. This absence is disappointing, as the choice of when, what, and how much needs to be made by a modeler for every learning model.
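To make the abstract's three parameters concrete, here is a minimal illustrative sketch of tabular Q-learning on a hypothetical six-step chain task. It is not the 36-state-action task or the models evaluated in the article; the task, the "fast"/"slow" actions, and all names (run_trial, STEP_TIME, and so on) are invented for illustration. The sketch simply exposes the moment (when), objective function (what), and magnitude (how much) as explicit arguments.

```python
import random
from collections import defaultdict

# Hypothetical toy task: a chain of states 0..N_STATES. Each step the
# agent picks a "fast" or "slow" action; both advance one state but
# differ in time cost. Invented for illustration only.
N_STATES = 6
ACTIONS = ["fast", "slow"]
STEP_TIME = {"fast": 1.0, "slow": 2.0}

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1  # assumed learning parameters


def run_trial(Q, moment, objective, magnitude):
    """One episode of tabular learning with an explicit reward scheme.

    moment:    'step' rewards every transition; 'end' rewards only at
               trial completion (the 'when' parameter).
    objective: 'time' penalizes elapsed time; 'unit' gives a flat
               completion reward (the 'what' parameter).
    magnitude: 'binary' clips rewards to +/-1; 'continuous' keeps
               graded values (the 'how much' parameter).
    """
    state, elapsed, trajectory = 0, 0.0, []
    while state < N_STATES:
        # Epsilon-greedy action selection over the tabular Q-values.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        elapsed += STEP_TIME[action]
        next_state = state + 1
        trajectory.append((state, action))

        if moment == "step":
            # Immediate reward after every state-action pair.
            r = -STEP_TIME[action] if objective == "time" else 0.0
            if magnitude == "binary" and r < 0:
                r = -1.0  # clipping discards the time gradient
            best_next = (max(Q[(next_state, a)] for a in ACTIONS)
                         if next_state < N_STATES else 0.0)
            Q[(state, action)] += ALPHA * (
                r + GAMMA * best_next - Q[(state, action)])
        state = next_state

    if moment == "end":
        # Single terminal reward; a Monte-Carlo-style update assigns
        # each visited pair its discounted share of the return.
        r = -elapsed if objective == "time" else 1.0
        if magnitude == "binary":
            r = 1.0 if r >= 0 else -1.0
        for i, (s, a) in enumerate(reversed(trajectory)):
            target = (GAMMA ** i) * r
            Q[(s, a)] += ALPHA * (target - Q[(s, a)])
    return elapsed


# Example usage: train under one of the eight reward schemes.
Q = defaultdict(float)
for _ in range(500):
    run_trial(Q, moment="step", objective="time", magnitude="continuous")
```

Even in this toy version, the interactions the abstract describes are visible: with a continuous time-based reward the agent can learn to prefer the fast action, whereas clipping to binary magnitudes makes fast and slow steps indistinguishable, and deferring all reward to the end of the trial changes how credit reaches early steps.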

Original language: English
Pages (from-to): 333-358
Number of pages: 26
Journal: Cognitive Science
Volume: 36
Issue number: 2
DOIs
Publication status: Published - 1 Mar 2012

Keywords

  • Adaptive behavior
  • Choice
  • Cognitive architecture
  • Expected utility
  • Expected value
  • Reinforcement learning
  • Skill acquisition and learning
  • Strategy selection
