Optimally weighted loss functions for solving PDEs with Neural Networks

Remco van der Meer, Cornelis W. Oosterlee, Anastasia Borovykh*

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Recent works have shown that deep neural networks can be employed to solve partial differential equations, giving rise to the framework of physics-informed neural networks (Raissi et al., 2019). We introduce a generalization of these methods that manifests as a scaling parameter which balances the relative importance of the different constraints imposed by partial differential equations. A mathematical motivation for these generalized methods is provided, which shows that for linear and well-posed partial differential equations the functional form is convex. We then derive a choice for the scaling parameter that is optimal with respect to a measure of relative error. Because this optimal choice relies on full knowledge of the analytical solution, we also propose a heuristic method to approximate it. The proposed methods are compared numerically to the original methods on a variety of model partial differential equations, with the number of data points being updated adaptively. For several problems, including high-dimensional PDEs, the proposed methods are shown to significantly enhance accuracy.
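To illustrate the idea of a scaling parameter in the loss, the sketch below shows a weighted physics-informed loss for a 1D Poisson model problem. It is not the authors' implementation: the network size, the specific test problem u''(x) = f(x) on (0, 1) with homogeneous Dirichlet boundary conditions, and the hand-chosen weight `lam` are illustrative assumptions; the paper's contribution is precisely how to choose this weight optimally or heuristically.

```python
# Minimal sketch (not the authors' code) of a weighted physics-informed loss.
# The scalar `lam` balances the PDE-residual term against the boundary term;
# here it is simply a hand-picked hyperparameter.
import torch

torch.manual_seed(0)

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def f(x):
    # Source term for which u(x) = sin(pi x) is the exact solution of u'' = f.
    return -(torch.pi ** 2) * torch.sin(torch.pi * x)

def weighted_loss(lam: float, n_interior: int = 128) -> torch.Tensor:
    # Interior residual ||u_xx - f||^2 on randomly sampled collocation points.
    x = torch.rand(n_interior, 1, requires_grad=True)
    u = net(x)
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    interior = ((u_xx - f(x)) ** 2).mean()

    # Boundary residual ||u(0)||^2 + ||u(1)||^2 for the Dirichlet conditions.
    xb = torch.tensor([[0.0], [1.0]])
    boundary = (net(xb) ** 2).mean()

    # Convex combination of the two constraints, with lam in (0, 1).
    return lam * interior + (1.0 - lam) * boundary

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = weighted_loss(lam=0.5)
    loss.backward()
    opt.step()
```

Setting `lam` close to 1 emphasizes the interior residual, while values close to 0 emphasize the boundary constraint; the paper studies how this balance affects the relative error and how to select it without access to the analytical solution.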

Original language: English
Article number: 113887
Pages (from-to): 1-18
Journal: Journal of Computational and Applied Mathematics
Volume: 405
Publication status: Published - 15 May 2022

Keywords

  • Convection–diffusion equation
  • High-dimensional problems
  • Loss functional
  • Neural network
  • Partial differential equation
  • Poisson equation
