Abstract
In this paper we compare the policy function obtained by Beck and Wieland (2002)
with the one obtained with adaptive control methods. Obtaining a policy function
that provides the optimal control as a feedback function of the state of the
system is an integral part of the optimal learning method used by Beck and
Wieland. However, computing this function is not necessary when doing Monte
Carlo experiments with adaptive control methods. We have therefore modified our
software to obtain the policy function for comparison with the Beck and Wieland
(BW) results.
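To illustrate the idea of a policy function as a feedback rule, here is a minimal sketch assuming a scalar model y = α + βu + ε with a target value y*; the actual Beck and Wieland setup is richer (β is time-varying and the optimal-learning policy also depends on the current beliefs about β), so the function name and simplification below are illustrative only.

```python
# Sketch: a policy function maps the state of the system to a control.
# Assumed model (simplified): y = alpha + beta * u + eps, target y*.
# This certainty-equivalent rule ignores parameter uncertainty, i.e. it
# does no active learning, unlike the Beck-Wieland optimal policy.

def certainty_equivalent_policy(alpha, beta_hat, y_target):
    """Feedback rule: choose u so that the expected outcome hits the
    target, treating the estimate beta_hat as if it were the true beta."""
    if beta_hat == 0:
        raise ValueError("beta_hat must be nonzero")
    return (y_target - alpha) / beta_hat

# Example: intercept 1.0, estimated slope 2.0, target 0.0
u = certainty_equivalent_policy(1.0, 2.0, 0.0)  # -> -0.5
```

A Monte Carlo experiment with adaptive control only needs the control values generated along simulated paths, which is why the full policy function (the rule evaluated over a grid of states) is not normally computed there.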
Original language | English |
---|---|
Place of Publication | Utrecht |
Publisher | UU USE Tjalling C. Koopmans Research Institute |
Number of pages | 19 |
Publication status | Published - Aug 2008 |
Publication series
Name | Discussion Paper Series / Tjalling C. Koopmans Research Institute |
---|---|
No. | 19 |
Volume | 08 |
ISSN (Electronic) | 2666-8238 |
Keywords
- active learning
- dual control
- optimal experimentation
- stochastic optimization
- time-varying parameters
- numerical experiments