Typical Interpolating Neural Nets Generalize Well and Exhibit Tempered Overfitting with Narrow Teachers

Sun 02.02 12:30 - 13:00

Abstract: Background. A central theoretical puzzle is why over-parameterized Neural Networks (NNs) generalize well when trained to zero loss (i.e., so that they interpolate the data). Usually, the NN is trained with Stochastic Gradient Descent (SGD) or one of its variants. However, recent empirical work examined the generalization of a random NN that interpolates the data: the NN was sampled from a seemingly uniform prior over the parameters, conditioned on the NN perfectly classifying the training set. Interestingly, such an NN sample typically generalized well. Contributions. We prove that such a random NN interpolator typically generalizes well if there exists an underlying narrow "teacher NN" that agrees with the labels. Specifically, we show that such a 'flat' prior over the NN parameterization induces a rich prior over the NN functions, due to the redundancy in the NN structure. In particular, this creates a bias towards simpler functions, which require fewer relevant parameters to represent -- enabling learning with a sample complexity approximately proportional to the complexity of the teacher (roughly, the number of non-redundant parameters) rather than that of the student. We then study the overfitting behavior of such interpolating models in the presence of label noise, and prove that it is tempered (as was observed empirically) for a class of models with binary weights. Finally, we discuss the relation of these results to practical optimization methods.
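To make the sampling setup in the abstract concrete, here is a minimal illustrative sketch (not the paper's experiments): a narrow "teacher" network with binary weights labels random inputs, and a wider "student" is drawn from a uniform prior over binary weights, conditioned on interpolating the training set via rejection sampling. The network widths, sample sizes, and the rejection-sampling implementation below are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def net(x, W1, w2):
    """One-hidden-layer sign network with +/-1 outputs."""
    return np.sign(np.sign(x @ W1.T) @ w2)

d, n_train, n_test = 8, 10, 500            # toy sizes, chosen for illustration
X_train = rng.standard_normal((n_train, d))
X_test = rng.standard_normal((n_test, d))

# Narrow teacher: 3 hidden units with +/-1 weights (odd width avoids sign ties).
teacher_W1 = rng.choice([-1.0, 1.0], size=(3, d))
teacher_w2 = rng.choice([-1.0, 1.0], size=3)
y_train = net(X_train, teacher_W1, teacher_w2)
y_test = net(X_test, teacher_W1, teacher_w2)

# Wide student: 51 hidden units, weights drawn i.i.d. uniform over {-1, +1},
# accepted only if the sample perfectly classifies the training set.
width = 51
for attempt in range(1, 500_001):
    W1 = rng.choice([-1.0, 1.0], size=(width, d))
    w2 = rng.choice([-1.0, 1.0], size=width)
    if np.array_equal(net(X_train, W1, w2), y_train):
        test_err = np.mean(net(X_test, W1, w2) != y_test)
        print(f"interpolator found after {attempt} draws, test error {test_err:.2f}")
        break
else:
    print("no interpolator found within the attempt budget")
```

With these toy sizes the rejection step typically succeeds within a few thousand draws, and the accepted interpolator's error on held-out teacher-labeled points gives a qualitative, small-scale illustration of the "random interpolator generalizes well" phenomenon the abstract discusses.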

Speaker

Itamar Harel

Technion

  • Advisor: Daniel Soudry

  • Academic Degree: M.Sc.