ISL Colloquium


Approximating cross-validation: guarantees for model assessment and selection

Ashia Wilson – Assistant Professor, MIT

Thu, 18-Mar-2021 / 4:30pm / Zoom: https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ


Abstract

Cross-validation (CV) is the de facto standard for selecting accurate predictive models and assessing model performance. However, CV requires repeatedly refitting a learning procedure on a large number of training datasets. To reduce this computational burden, a number of works have introduced approximate CV procedures that simultaneously reduce runtime and provide model assessments comparable to CV when the prediction problem is sufficiently smooth. An open question, however, is whether these procedures are suitable for model selection. In this talk, I’ll describe (i) broad conditions under which the model selection performance of approximate CV nearly matches that of CV, (ii) examples of prediction problems where approximate CV selection fails to mimic CV selection, and (iii) an extension of these results, and of the approximate CV framework more broadly, to non-smooth prediction problems like L1-regularized empirical risk minimization.
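To make the setup concrete, below is a minimal sketch, not the speaker's implementation, of one common flavor of approximate CV: approximate leave-one-out (ALO) for a smooth, L2-regularized logistic regression, where each held-out fit is replaced by a single Newton-style correction to the full-data fit. All function and variable names are illustrative assumptions.

```python
# Sketch of approximate leave-one-out CV (ALO) for L2-regularized logistic regression.
# Idea: fit once on all n points, then approximate each leave-one-out estimate with a
# single Newton-style correction instead of n full refits.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lam, iters=50):
    """Fit L2-regularized logistic regression by Newton's method."""
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(iters):
        p = sigmoid(X @ theta)
        grad = X.T @ (p - y) / n + lam * theta
        W = p * (1 - p)
        H = (X.T * W) @ X / n + lam * np.eye(d)
        theta -= np.linalg.solve(H, grad)
    return theta

def approx_loo_predictions(X, y, lam):
    """ALO via a one-step correction: theta_{-i} ~= theta + H^{-1} grad_i(theta)."""
    n, d = X.shape
    theta = fit_logistic(X, y, lam)
    p = sigmoid(X @ theta)
    W = p * (1 - p)
    H = (X.T * W) @ X / n + lam * np.eye(d)
    H_inv = np.linalg.inv(H)
    # Per-point gradient contributions; removing point i corresponds to a small step
    # that "undoes" its contribution to the full-data optimum.
    grads = (X * (p - y)[:, None]) / n              # shape (n, d)
    theta_loo = theta[None, :] + grads @ H_inv      # one corrected fit per held-out point
    return sigmoid(np.sum(X * theta_loo, axis=1))   # ALO predicted probabilities

# Usage: compare approximate-CV log-loss across regularization strengths (model selection).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (sigmoid(X @ np.array([1.0, -2.0, 0.5, 0.0, 0.0])) > rng.uniform(size=200)).astype(float)
for lam in [0.01, 0.1, 1.0]:
    p_loo = approx_loo_predictions(X, y, lam)
    loss = -np.mean(y * np.log(p_loo + 1e-12) + (1 - y) * np.log(1 - p_loo + 1e-12))
    print(f"lambda={lam:<5} approximate-CV log-loss={loss:.4f}")
```

This sketch relies on the smoothness of the logistic loss; the non-smooth problems mentioned in point (iii), such as L1-regularized empirical risk minimization, are exactly where such corrections need the extensions discussed in the talk.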

Bio

Ashia is an Assistant Professor in EECS at MIT. Her research focuses on the methodological foundations and theory of various topics in machine learning. She is interested in developing frameworks for algorithmic assessment and providing rigorous guarantees for algorithmic performance. She received her BA from Harvard University with a concentration in applied mathematics and a minor in philosophy, and her PhD in statistics from UC Berkeley. Most recently, she held a postdoctoral position in the machine learning group at Microsoft Research New England.