Using Continuous Action Spaces to Solve Discrete Problems

H.P. van Hasselt, M.A. Wiering

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Real-world control problems are often modeled as Markov decision processes (MDPs) with discrete action spaces, to facilitate the use of the many reinforcement learning algorithms that exist to find solutions for such MDPs. For many of these problems an underlying continuous action space can be assumed. We investigate the performance of the Cacla algorithm, which uses a continuous actor, on two such MDPs: the mountain car and the cart pole. We show that Cacla has clear advantages over discrete algorithms such as Q-learning and Sarsa, even though its continuous actions are rounded to actions in the same finite action space, which may contain only a small number of actions. In particular, we show that Cacla retains much better performance when the action space is changed by removing some actions after a period of learning.
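As a rough illustration of the mechanism the abstract describes, below is a minimal Python sketch of a Cacla-style actor-critic step in which the continuous action is rounded to a finite action set before execution. The feature representation, hyperparameters, and helper names (N_FEATURES, ALPHA, round_to_discrete, cacla_update, etc.) are illustrative assumptions, not the paper's exact setup; in particular, whether the actor target should be the pre-rounding or post-rounding action is left here as a design choice.

```python
# Sketch of a Cacla-style update with actions rounded to a finite set.
# Assumptions (not from the paper): linear function approximation with
# random features, a three-action discrete set, and these hyperparameters.
import numpy as np

N_FEATURES = 8                                  # assumed feature dimension
DISCRETE_ACTIONS = np.array([-1.0, 0.0, 1.0])   # assumed finite action set

rng = np.random.default_rng(0)
theta = np.zeros(N_FEATURES)   # actor weights: continuous action = theta . phi(s)
w = np.zeros(N_FEATURES)       # critic weights: V(s) = w . phi(s)
ALPHA, BETA, GAMMA, SIGMA = 0.1, 0.1, 0.99, 0.3

def round_to_discrete(a):
    """Map a continuous action onto the nearest available discrete action."""
    return DISCRETE_ACTIONS[np.argmin(np.abs(DISCRETE_ACTIONS - a))]

def cacla_update(phi_s, phi_s_next, reward, explored_action):
    """One Cacla step: TD update for the critic; the actor is moved toward
    the explored action only when the TD error is positive."""
    global theta, w
    delta = reward + GAMMA * w @ phi_s_next - w @ phi_s   # TD error
    w += BETA * delta * phi_s                             # critic update
    if delta > 0:                                         # Cacla's sign rule
        a_mean = theta @ phi_s
        theta += ALPHA * (explored_action - a_mean) * phi_s

# Acting: Gaussian exploration around the actor's output, then rounding,
# so the environment only ever sees actions from the finite set.
phi = rng.random(N_FEATURES)
a_explored = theta @ phi + rng.normal(0.0, SIGMA)
a_env = round_to_discrete(a_explored)

# One illustrative update after observing a transition (dummy values).
phi_next = rng.random(N_FEATURES)
cacla_update(phi, phi_next, reward=-1.0, explored_action=a_explored)
```

Because the actor's output stays continuous while only the executed action is rounded, shrinking or changing DISCRETE_ACTIONS leaves the learned actor intact, which is consistent with the robustness to action removal reported in the abstract.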
Original language: Undefined/Unknown
Title of host publication: Proceedings of the 2009 International Joint Conference on Neural Networks (IJCNN 2009)
Place of publication: Atlanta, GA
Publisher: IEEE
Pages: 1149-1156
Number of pages: 8
Publication status: Published - 14 Jun 2009