Abstract
Finding the ground state of a quantum mechanical system can be formulated as an optimal control problem. In this formulation, the drift of the optimally controlled process is chosen to match the distribution of paths in the Feynman–Kac (FK) representation of the solution of the imaginary time Schrödinger equation. This provides a variational principle that can be used for reinforcement learning of a neural representation of the drift. Our approach is a drop-in replacement for path integral Monte Carlo, learning an optimal importance sampler for the FK trajectories. We demonstrate the applicability of our approach to several problems of one-, two-, and many-particle physics.
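The variational principle sketched in the abstract can be illustrated on a toy problem. The following is a minimal sketch, not the paper's implementation: for the 1D harmonic oscillator V(x) = x²/2 (with ħ = m = 1), the drift of the controlled process is restricted to an assumed one-parameter family u_a(x) = -a·x, and we minimize the long-run average of V(X_t) + |u_a(X_t)|²/2 along the controlled diffusion dX = u_a(X) dt + dW. The minimizer recovers the optimal drift u*(x) = (log ψ₀)'(x) = -x, and the minimal cost equals the ground-state energy E₀ = 1/2. The grid search below is an illustrative stand-in for the paper's reinforcement learning of a neural drift.

```python
import numpy as np

rng = np.random.default_rng(0)

def V(x):
    """Harmonic-oscillator potential, V(x) = x^2 / 2."""
    return 0.5 * x**2

def cost(a, n_walkers=1000, n_steps=3000, dt=1e-2, burn_in=500):
    """Monte Carlo estimate of the stationary average of V(X) + u_a(X)^2 / 2
    along the controlled diffusion dX = -a*X dt + dW (Euler-Maruyama)."""
    x = np.zeros(n_walkers)
    acc, count = 0.0, 0
    for k in range(n_steps):
        u = -a * x
        if k >= burn_in:  # discard the transient before averaging
            acc += np.mean(V(x) + 0.5 * u**2)
            count += 1
        # Euler-Maruyama step of the controlled diffusion
        x = x + u * dt + rng.normal(size=n_walkers) * np.sqrt(dt)
    return acc / count

# Crude stand-in for learning: grid search over the single drift parameter a.
grid = np.linspace(0.5, 2.0, 16)
costs = [cost(a) for a in grid]
a_best = grid[int(np.argmin(costs))]
print(f"best drift parameter a = {a_best:.2f} (exact: 1.0)")
print(f"estimated ground-state energy = {min(costs):.3f} (exact: 0.5)")
```

In the paper the linear ansatz is replaced by a neural representation of the drift and the grid search by reinforcement learning; this toy version only shows that minimizing the control cost recovers the ground-state drift and energy.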
Original language | English |
---|---|
Pages | 635-653 |
Publication status | Published - 2020 |
Event | Mathematical and Scientific Machine Learning (Proceedings of Machine Learning Research, conference number 107), Princeton University, Princeton, NJ, United States, 20 Jul 2020 → 24 Jul 2020, http://proceedings.mlr.press/v107/ |
Conference
Conference | PMLR Proceedings of Machine Learning Research |
---|---|
Country/Territory | United States |
City | Princeton, NJ |
Period | 20/07/20 → 24/07/20 |
Internet address | http://proceedings.mlr.press/v107/ |
Keywords
- Quantum Mechanics
- Feynman–Kac Formula
- Optimal Control
- Reinforcement Learning