Training Neural Networks is ∃R-complete

Mikkel Abrahamsen, Linda Kleist, Till Miltzow

Research output: Working paper › Preprint › Academic

Abstract

Given a neural network, training data, and a threshold, it was already known to be NP-hard to find weights for the network such that the total error stays below the threshold. We determine the algorithmic complexity of this fundamental problem precisely by showing that it is ∃R-complete. This means that the problem is equivalent, up to polynomial-time reductions, to deciding whether a system of polynomial equations and inequalities with integer coefficients and real unknowns has a solution. If, as widely expected, ∃R is strictly larger than NP, our work implies that the problem of training neural networks is not even in NP.
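As a rough point of reference, the two decision problems involved can be stated as follows; the names Train-NN and ETR, the weight vector $w$, and the error function $\operatorname{err}$ are generic illustrative notation, while the paper fixes the precise architecture, activation functions, and error measure.

\[
\textsc{Train-NN}:\ \text{given a network architecture } N \text{ with } k \text{ real weights, data } (x_1,y_1),\dots,(x_m,y_m), \text{ and a threshold } \delta, \text{ decide whether } \exists\, w \in \mathbb{R}^{k}\ \text{with}\ \sum_{i=1}^{m} \operatorname{err}\big(N_w(x_i),\, y_i\big) \le \delta.
\]

\[
\textsc{ETR}:\ \text{given polynomials } p_1,\dots,p_s \in \mathbb{Z}[x_1,\dots,x_n] \text{ and relations } \triangleright_j \in \{<,\le,=\}, \text{ decide whether } \exists\, x \in \mathbb{R}^{n}\ \text{with}\ \bigwedge_{j=1}^{s} p_j(x) \triangleright_j 0.
\]

∃R-completeness of Train-NN means it belongs to the class ∃R and that ETR reduces to it in polynomial time.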
Neural networks are usually trained using some variation of backpropagation. The result of this paper offers an explanation of why techniques commonly used to solve large instances of NP-complete problems seem not to be useful for this task; examples of such general techniques include SAT solvers, IP solvers, local search, and dynamic programming.
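For concreteness, here is a minimal, self-contained sketch of such gradient-based training (plain backpropagation on a tiny two-layer ReLU network); the architecture, data, learning rate, and iteration count are illustrative placeholders and are not taken from the paper.

import numpy as np

# Illustrative sketch only: a tiny two-layer ReLU network trained by
# gradient descent (a simple form of backpropagation) on toy data.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 2))                               # toy inputs
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)   # toy targets

W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)
lr = 0.1

for step in range(500):
    # forward pass
    h_pre = X @ W1 + b1
    h = np.maximum(h_pre, 0.0)          # ReLU activation
    out = h @ W2 + b2
    err = out - y
    loss = np.mean(err ** 2)            # total (mean squared) error

    # backward pass: gradients of the loss w.r.t. each parameter
    g_out = 2 * err / len(X)
    gW2 = h.T @ g_out; gb2 = g_out.sum(axis=0)
    g_h = (g_out @ W2.T) * (h_pre > 0)
    gW1 = X.T @ g_h;   gb1 = g_h.sum(axis=0)

    # gradient-descent update of the weights
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print("final training loss:", loss)

Such gradient-based local search often finds low-error weights in practice, but it decides nothing: it provides no certificate that the error threshold can or cannot be met, which is exactly the question the ∃R-completeness result concerns.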
Original language: English
Publisher: arXiv
Pages: 1-12
DOIs
Publication status: Published - 19 Feb 2021

