Generalization Error Lower Bounds for Nonlinear Learning Models

Seminar: 
Applied Mathematics
Event time: 
Wednesday, September 22, 2021 - 2:30pm
Location: 
https://yale.zoom.us/j/2188028533
Speaker: 
Inbar Seroussi
Speaker affiliation: 
Weizmann
Event description: 

Abstract: Deep learning algorithms operate in regimes that defy classical learning theory. Neural network architectures often contain more parameters than training samples, yet despite this enormous complexity, the generalization error they achieve on real data is small. In this talk, we study the generalization properties of learning algorithms in high dimensions. Interestingly, we show that high-dimensional algorithms require a small bias for good generalization, and that this is indeed the case for deep neural networks in the overparameterized regime. In addition, we provide lower bounds on the generalization error, in various settings, that hold for any algorithm. We calculate these bounds using random matrix theory (RMT). We will review the connection between deep neural networks and RMT, as well as existing results. These bounds are particularly useful when an analytic evaluation of standard performance bounds is not possible due to the complexity and nonlinearity of the model; they can serve as a benchmark for testing performance and for optimizing the design of actual learning algorithms.
(Joint work with Prof. Ofer Zeitouni)