Criticality versus uniformity in deep neural networks

Aleksandar Bukva, Kevin T. Grosvenor, Ro Jefferson, Koenraad Schalm

Research output: Working paper › Preprint › Academic

Abstract

Deep feedforward networks initialized along the edge of chaos exhibit exponentially superior training ability as quantified by maximum trainable depth. In this work, we explore the effect of saturation of the tanh activation function along the edge of chaos. In particular, we determine the line of uniformity in phase space along which the post-activation distribution has maximum entropy. This line intersects the edge of chaos, and indicates the regime beyond which saturation of the activation function begins to impede training efficiency. Our results suggest that initialization along the edge of chaos is a necessary but not sufficient condition for optimal trainability.
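The abstract's central claim, that the entropy of the tanh post-activation distribution peaks at an intermediate pre-activation variance and falls off once the activation saturates, can be illustrated numerically. The sketch below is not the authors' method; it simply estimates the entropy of `tanh(z)` for Gaussian pre-activations `z ~ N(0, q)` with a histogram estimator, where the function name and the choice of variances are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def post_activation_entropy(q, n=200_000, bins=100):
    """Estimate the differential entropy of tanh(z) for z ~ N(0, q),
    i.e. of the post-activation distribution at pre-activation variance q,
    using a simple histogram estimator. (Illustrative helper, not from
    the paper.)"""
    z = rng.normal(0.0, np.sqrt(q), size=n)
    h = np.tanh(z)
    density, edges = np.histogram(h, bins=bins, range=(-1, 1), density=True)
    widths = np.diff(edges)
    p = density * widths          # probability mass per bin
    nz = p > 0
    # Differential entropy ~ -sum over bins of p * log(density)
    return -np.sum(p[nz] * np.log(density[nz]))

# Small q pins outputs near 0; large q saturates tanh at +-1; both give
# low-entropy post-activations. An intermediate q maximizes the entropy,
# which is the "uniformity" regime the abstract refers to.
qs = [0.01, 1.0, 100.0]
ents = [post_activation_entropy(q) for q in qs]
```

Scanning a fine grid of variances (and, in a full network, of weight/bias variances) would trace out the maximum-entropy line in phase space and show where it crosses the edge of chaos.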
Original language: English
Publisher: arXiv
Number of pages: 12
DOIs
Publication status: Published - 10 Apr 2023

Bibliographical note

12 pages, 8 figures

Keywords

  • cs.LG
