ISL Colloquium


Asymptotic Learning in Overparameterized Models

Anant Sahai – Professor, UC Berkeley

Thu, 25-Jan-2024 / 4:00pm / Packard 202

Abstract

Why are modern machine learning systems able to achieve zero training error and yet generalize well, even when the training data is noisy and there are many more parameters than data points? We can begin to understand these questions by leveraging stylized linear models (inspired by information theory) in which we can explore the fundamental limits of overparameterized learning. Interestingly, this creates a strong intellectual bridge to signal processing, and from it emerges a heuristic perspective that allows us to quantitatively conjecture asymptotic behavior as well as predict new phenomena, such as the fact that classification and regression can behave qualitatively differently. Setting out to prove those conjectures forces us to learn even more, including new, sharper concentration inequalities. Reflecting on the asymptotic behavior in these regimes also allows us to engage in counterexample-style thinking that sheds light on the (dis)connection between test loss and training loss in overparameterized settings.
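The following is a minimal numerical sketch (an illustration of the setting described above, not material from the talk) of the core phenomenon: a linear model with many more parameters than data points fits noisy labels exactly via the minimum-norm interpolator, yet its test error can stay close to the noise floor. The particular feature scaling below, a few strong "signal" directions plus many weak directions that absorb the noise, is one stylized setup; all the specific numbers are illustrative assumptions.

```python
# Sketch: harmless interpolation of noisy data by a minimum-norm linear model.
# All parameter choices (n, d, k, sigma) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 100, 6000, 5          # n samples, d >> n features, k signal directions
sigma = 0.5                     # label noise standard deviation

w_true = np.zeros(d)
w_true[:k] = 1.0 / np.sqrt(k)   # unit-norm signal supported on the strong directions

def sample(m):
    X = np.hstack([
        rng.standard_normal((m, k)),                      # strong features, variance 1
        rng.standard_normal((m, d - k)) / np.sqrt(d - k), # many weak features
    ])
    y = X @ w_true + sigma * rng.standard_normal(m)       # noisy labels
    return X, y

X_train, y_train = sample(n)
X_test, y_test = sample(2000)

# Minimum-norm solution of the underdetermined system X_train w = y_train
w_hat = np.linalg.lstsq(X_train, y_train, rcond=None)[0]

train_mse = np.mean((X_train @ w_hat - y_train) ** 2)
test_mse = np.mean((X_test @ w_hat - y_test) ** 2)
print(f"train MSE ~ {train_mse:.1e} (interpolation), "
      f"test MSE ~ {test_mse:.2f} vs noise floor {sigma**2:.2f}")
```

With other feature scalings (for example, fully isotropic features with d much larger than n), the same minimum-norm interpolator generalizes poorly, which is the kind of contrast the stylized-model analysis is meant to expose.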

Bio

Anant Sahai is a Professor of EECS at UC Berkeley. His research interests span machine learning, wireless communication, information theory, signal processing, and decentralized control, with a particular interest in the intersections of these fields. Within wireless communication, he focuses on spectrum sharing and cognitive radio, and serves as the lead for Data and Machine Learning in the relatively new NSF Center for Spectrum Innovation, SpectrumX.