Wednesday, September 22, 2021
Time | Items |
---|---|
2pm | 09/22/2021 - 2:30pm Abstract: Deep learning algorithms operate in regimes that defy classical learning theory. Neural network architectures often contain more parameters than training samples, yet despite this huge complexity, the generalization error achieved on real data is small. In this talk, we study the generalization properties of algorithms in high dimensions. Interestingly, we show that algorithms in high dimensions require a small bias for good generalization, and that this is indeed the case for deep neural networks in the overparameterized regime. In addition, we provide lower bounds on the generalization error that hold for any algorithm in various settings, calculated using random matrix theory (RMT). We will review the connection between deep neural networks and RMT, along with existing results. These bounds are particularly useful when analytic evaluation of standard performance bounds is not possible due to the complexity and nonlinearity of the model; they can serve as a benchmark for testing performance and for optimizing the design of actual learning algorithms. Location: https://yale.zoom.us/j/2188028533 |
4pm | 09/22/2021 - 4:00pm The Putnam seminar meets every Wednesday from 4 to 5:30pm in LOM 214. As always, everyone is warmly welcome to come hang out, learn more cool math, and meet folks. The seminar is casual, and folks can come and go as they like. See Pat Devlin's webpage (and/or contact him) for more information. Folks can sign up for the mailing list here: https://forms.gle/nYPx72KVJxJcgLha8 Location: LOM 214 |
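The overparameterized regime mentioned in the 2:30pm abstract (more parameters than training samples, yet the model fits the data exactly) can be sketched with a minimal toy example. This is an illustrative assumption on our part, not the speaker's actual setup: a linear model with random Gaussian features, solved via the minimum-norm least-squares solution.

```python
import numpy as np

# Toy sketch of the overparameterized regime: p parameters, n samples, p > n.
# (Illustrative only; the talk concerns deep networks, not this linear model.)
rng = np.random.default_rng(0)
n, p = 20, 100
X = rng.standard_normal((n, p))  # n training samples with p features
y = rng.standard_normal(n)       # arbitrary training labels

# Minimum-norm interpolating solution w = X^+ y via the pseudoinverse;
# with p > n and X of full row rank, it fits the training data exactly.
w = np.linalg.pinv(X) @ y

train_error = np.linalg.norm(X @ w - y)
print(f"training error: {train_error:.2e}")  # essentially zero: the model interpolates
```

Even though such a model has five times more parameters than samples and drives the training error to zero, the talk's question is why this kind of interpolation can still generalize well, and how RMT-based lower bounds constrain any such algorithm.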