The Separation Capacity of Random Neural Networks

Sjoerd Dirksen, Martin Genzel, Laurent Jacques, Alexander Stollenwerk

Research output: Working paper › Preprint › Academic

Abstract

Neural networks with random weights appear in a variety of machine learning applications, most prominently as the initialization of many deep learning algorithms and as a computationally cheap alternative to fully learned neural networks. In the present article we enhance the theoretical understanding of random neural nets by addressing the following data separation problem: under what conditions can a random neural network make two classes X⁻, X⁺ ⊂ ℝᵈ (with positive distance) linearly separable? We show that a sufficiently large two-layer ReLU-network with standard Gaussian weights and uniformly distributed biases can solve this problem with high probability. Crucially, the number of required neurons is explicitly linked to geometric properties of the underlying sets X⁻, X⁺ and their mutual arrangement. This instance-specific viewpoint allows us to overcome the usual curse of dimensionality (exponential width of the layers) in non-pathological situations where the data carries low-complexity structure. We quantify the relevant structure of the data in terms of a novel notion of mutual complexity (based on a localized version of Gaussian mean width), which leads to sound and informative separation guarantees. We connect our result with related lines of work on approximation, memorization, and generalization.
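As a rough illustration of the setup described in the abstract (not a construction or result from the paper), the following Python sketch draws a random two-layer ReLU feature map with i.i.d. standard Gaussian weights and uniformly distributed biases, applies it to two toy classes with positive distance, and tests linear separability of the lifted data with a plain perceptron. The dimension d, sample sizes, width n, bias range R, and the shifted-Gaussian classes are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Toy instance: two classes X-, X+ in R^d with positive mutual distance.
# (d, m, n, R are illustrative choices, not values from the paper.)
d, m, n, R = 20, 200, 500, 5.0
X_minus = rng.normal(size=(m, d))
X_plus = rng.normal(size=(m, d)) + 4.0   # shift guarantees positive distance

# Random two-layer ReLU feature map: weights i.i.d. standard Gaussian,
# biases uniform on [-R, R]; nothing here is trained.
W = rng.normal(size=(d, n))
b = rng.uniform(-R, R, size=n)
features = lambda X: np.maximum(X @ W + b, 0.0)

# Test linear separability of the lifted data with a plain perceptron:
# it makes zero mistakes in a full pass iff a separating hyperplane was found.
Z = np.vstack([features(X_minus), features(X_plus)])
y = np.concatenate([-np.ones(m), np.ones(m)])

w, c = np.zeros(n), 0.0
for epoch in range(1000):
    mistakes = 0
    for z, label in zip(Z, y):
        if label * (z @ w + c) <= 0:
            w, c, mistakes = w + label * z, c + label, mistakes + 1
    if mistakes == 0:
        print(f"linearly separable after random ReLU lift (epoch {epoch})")
        break
else:
    print("no separating hyperplane found within the epoch budget")

The perceptron here is only a convenient certificate of linear separability; the paper's guarantees concern the existence of a separating hyperplane for the randomly lifted classes, not any particular training procedure.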
Original language: English
Publisher: arXiv
Pages: 1-34
Publication status: Published - 31 Jul 2021

Keywords

  • cs.LG
  • math.ST
  • stat.TH
  • Random neural networks
  • classification
  • hyperplane separation
  • high-dimensional geometry
  • Gaussian mean width
