Title: Disentangling Trainability and Generalization in Deep Neural Networks
A longstanding goal in the theory of deep learning is to characterize the conditions under which a given neural network architecture will be trainable, and if so, how well it might generalize to unseen data. In this work, we provide such a characterization in the limit of very wide and very deep networks, for which the analysis simplifies considerably. For wide networks, the trajectory under gradient descent is governed by the Neural Tangent Kernel (NTK), and for deep networks, the NTK itself maintains only weak data dependence. By analyzing the spectrum of the NTK, we formulate necessary conditions for trainability and generalization across a range of architectures, including Fully Connected Networks (FCNs) and Convolutional Neural Networks (CNNs). We identify large regions of hyperparameter space for which networks can memorize the training set but completely fail to generalize. We find that CNNs without global average pooling behave almost identically to FCNs, but that CNNs with pooling have markedly different and often better generalization performance. A thorough empirical investigation of these theoretical results shows excellent agreement on real datasets.
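The spectral analysis described above can be illustrated concretely. Below is a minimal sketch (not the paper's code) that computes the empirical NTK of a toy one-hidden-layer ReLU network on a small batch: the NTK entry for a pair of inputs is the inner product of the per-example parameter gradients, so stacking those gradients into a Jacobian J gives the kernel as J Jᵀ, whose eigenvalues can then be inspected. All names, dimensions, and the 1/√(fan-in) "NTK parameterization" scaling here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, width = 8, 5, 256               # batch size, input dim, hidden width (toy values)

X = rng.standard_normal((n, d))
W = rng.standard_normal((width, d))   # O(1) weights; layers rescaled by 1/sqrt(fan_in),
v = rng.standard_normal(width)        # the standard NTK parameterization

def param_grads(x):
    # Network: f(x) = v . relu(W x / sqrt(d)) / sqrt(width)
    pre = W @ x / np.sqrt(d)
    act = np.maximum(pre, 0.0)
    # Gradients of the scalar output w.r.t. v and W
    dv = act / np.sqrt(width)
    dW = np.outer((v * (pre > 0)) / np.sqrt(width), x / np.sqrt(d))
    return np.concatenate([dW.ravel(), dv])

# Jacobian of outputs w.r.t. all parameters, one row per example
J = np.stack([param_grads(x) for x in X])
Theta = J @ J.T                        # empirical NTK on the batch, shape (n, n)

# The spectrum of Theta is what the analysis in the paper studies:
# e.g. the gap between the top eigenvalue and the bulk controls how
# differently the "mean" mode and the remaining modes are learned.
eigvals = np.linalg.eigvalsh(Theta)
```

Since Theta is a Gram matrix it is symmetric positive semi-definite by construction; in the infinite-width limit the paper replaces this empirical kernel with its deterministic limit and tracks how its spectrum evolves with depth.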
Lechao Xiao is a research scientist on the Brain Team at Google Research. He works on the theory of deep learning, including optimization, Gaussian processes, and the generalization of neural networks. Before joining Google, he was a postdoc at the University of Pennsylvania working on harmonic analysis. He received his Ph.D. in mathematics from the University of Illinois at Urbana-Champaign in 2014 and his B.S. in mathematics from Zhejiang University in 2009.