TY - UNPB
T1 - The Separation Capacity of Random Neural Networks
AU - Dirksen, Sjoerd
AU - Genzel, Martin
AU - Jacques, Laurent
AU - Stollenwerk, Alexander
PY - 2021/7/31
Y1 - 2021/7/31
N2 - Neural networks with random weights appear in a variety of machine learning applications, most prominently as the initialization of many deep learning algorithms and as a computationally cheap alternative to fully learned neural networks. In the present article we enhance the theoretical understanding of random neural nets by addressing the following data separation problem: under what conditions can a random neural network make two classes X^−, X^+ ⊂ ℝ^d (with positive distance) linearly separable? We show that a sufficiently large two-layer ReLU-network with standard Gaussian weights and uniformly distributed biases can solve this problem with high probability. Crucially, the number of required neurons is explicitly linked to geometric properties of the underlying sets X^−, X^+ and their mutual arrangement. This instance-specific viewpoint allows us to overcome the usual curse of dimensionality (exponential width of the layers) in non-pathological situations where the data carries low-complexity structure. We quantify the relevant structure of the data in terms of a novel notion of mutual complexity (based on a localized version of Gaussian mean width), which leads to sound and informative separation guarantees. We connect our result with related lines of work on approximation, memorization, and generalization.
KW - cs.LG
KW - math.ST
KW - stat.TH
KW - Random neural networks
KW - classification
KW - hyperplane separation
KW - high-dimensional geometry
KW - Gaussian mean width
U2 - 10.48550/arXiv.2108.00207
DO - 10.48550/arXiv.2108.00207
M3 - Preprint
SP - 1
EP - 34
BT - The Separation Capacity of Random Neural Networks
PB - arXiv
ER -