Quantum Ground States from Reinforcement Learning

Ariel Barr, Willem Gispen, Austen Lamacraft

Research output: Contribution to conference › Paper › Academic

Abstract

Finding the ground state of a quantum mechanical system can be formulated as an optimal control problem. In this formulation, the drift of the optimally controlled process is chosen to match the distribution of paths in the Feynman–Kac (FK) representation of the solution of the imaginary time Schrödinger equation. This provides a variational principle that can be used for reinforcement learning of a neural representation of the drift. Our approach is a drop-in replacement for path integral Monte Carlo, learning an optimal importance sampler for the FK trajectories. We demonstrate the applicability of our approach to several problems of one-, two-, and many-particle physics.
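As an illustration only (not the authors' implementation), the toy sketch below assumes the standard stochastic-control bound E_0 ≤ E[½ v(X)² + V(X)] along controlled paths dX = v(X) dt + dB (units with ħ = m = 1), and trains a small neural drift for a one-dimensional harmonic oscillator V(x) = x²/2, whose exact ground-state energy is 1/2; the network, potential, and hyperparameters are all hypothetical choices made for this sketch:

    # Illustrative sketch: learn a drift v_theta(x) by minimizing the time-averaged
    # running cost (1/2) v(X)^2 + V(X) along Euler-Maruyama paths of dX = v(X) dt + dB.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    def potential(x):
        return 0.5 * x ** 2          # harmonic oscillator; exact E_0 = 0.5, optimal drift v(x) = -x

    drift = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))   # neural drift v_theta
    opt = torch.optim.Adam(drift.parameters(), lr=1e-2)

    n_walkers, n_steps, dt = 256, 100, 0.05

    for step in range(200):
        x = torch.randn(n_walkers, 1)                        # initial walker positions
        cost = 0.0
        for _ in range(n_steps):
            v = drift(x)
            cost = cost + (0.5 * v ** 2 + potential(x)).mean() * dt
            x = x + v * dt + torch.randn_like(x) * dt ** 0.5  # Euler-Maruyama step
        energy = cost / (n_steps * dt)                        # variational energy estimate
        opt.zero_grad()
        energy.backward()                                     # pathwise gradient through the simulation
        opt.step()

    print(f"estimated ground-state energy ~ {energy.item():.3f} (exact 0.5)")

The gradient here is a plain pathwise (reparameterized) derivative through the simulated trajectories; the paper's reinforcement-learning formulation may differ in how the objective and its gradient are estimated.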
Original language: English
Pages: 635-653
Publication status: Published - 2020
Event: PMLR Proceedings of Machine Learning Research: Mathematical and Scientific Machine Learning, 20-24 July 2020, Princeton University, Princeton, NJ, USA
Duration: 20 Jul 2020 - 24 Jul 2020
Conference number: 107
http://proceedings.mlr.press/v107/

Conference

Conference: PMLR Proceedings of Machine Learning Research
Country/Territory: United States
City: Princeton, NJ
Period: 20/07/20 - 24/07/20
Internet address: http://proceedings.mlr.press/v107/

Keywords

  • Quantum Mechanics
  • Feynman–Kac Formula
  • Optimal Control
  • Reinforcement Learning
