Abstract
Real-world control problems are often modeled as Markov decision processes (MDPs) with discrete action spaces, to facilitate the use of the many reinforcement learning algorithms that exist for such MDPs. For many of these problems, however, an underlying continuous action space can be assumed. We investigate the performance of the Cacla algorithm, which uses a continuous actor, on two such MDPs: the mountain car and the cart pole. We show that Cacla has clear advantages over discrete algorithms such as Q-learning and Sarsa, even though its continuous actions are rounded to the same finite action space, which may contain only a small number of actions. In particular, we show that Cacla retains much better performance when the action space is changed by removing some of the actions partway through learning.
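As a rough illustration of the mechanism described in the abstract, the sketch below shows one step of Cacla (Continuous Actor Critic Learning Automaton) in which the actor's continuous output is rounded to the nearest action in a finite action set before execution. This is a minimal sketch, not the paper's implementation: the linear function approximators, Gaussian exploration, function names (`select_action`, `cacla_update`), and learning rates are illustrative assumptions.

```python
import numpy as np

def select_action(actor_w, phi_s, actions, sigma=1.0):
    """Propose a continuous action, then round it to the nearest discrete action.

    actor_w : weight vector of a linear actor (illustrative assumption)
    phi_s   : feature vector of the current state
    actions : 1-D array holding the finite action set the environment accepts
    """
    a_cont = float(actor_w @ phi_s) + np.random.normal(0.0, sigma)
    # The environment only accepts discrete actions, so the continuous
    # proposal is rounded to the closest element of the finite action set.
    a_exec = float(actions[int(np.argmin(np.abs(actions - a_cont)))])
    return a_cont, a_exec

def cacla_update(actor_w, critic_w, phi_s, a_cont, reward, phi_s_next,
                 alpha=0.1, beta=0.1, gamma=0.99):
    """Update a linear critic and actor after observing a transition (s, a, r, s')."""
    # Temporal-difference error of the state-value critic V(s) = critic_w . phi(s).
    delta = reward + gamma * float(critic_w @ phi_s_next) - float(critic_w @ phi_s)

    # The critic is always moved toward the TD target.
    critic_w += alpha * delta * phi_s

    # Cacla updates the actor only when the TD error is positive, moving its
    # output toward the continuous action that was actually explored.
    if delta > 0:
        actor_w += beta * (a_cont - float(actor_w @ phi_s)) * phi_s

    return actor_w, critic_w
```

In a learning loop, `select_action` would produce both the continuous proposal (used for the actor update) and the rounded action (sent to the environment), after which `cacla_update` is applied to the observed transition.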
Original language | Undefined/Unknown
---|---
Title of host publication | Proceedings of the 2009 International Joint Conference on Neural Networks (IJCNN 2009)
Place of Publication | Atlanta, GA
Publisher | IEEE
Pages | 1149-1156
Number of pages | 8
Publication status | Published - 14 Jun 2009