ISL Colloquium


The Robustness Problem

Justin Gilmer – Research Scientist, Google Brain

Thu, 9-Jan-2020 / 4:30pm / Packard 101


Abstract

Despite impressive performance on many benchmarks, state-of-the-art machine learning algorithms have been shown to be extremely brittle on out-of-distribution inputs. While recent years have focused on robustness to small ℓp perturbations, this talk will discuss robustness to more general types of corruptions. We will investigate several questions related to robustness: Why are current models so brittle? Is recent work on ℓp robustness making progress towards robustness to distribution shift? How should we best measure model robustness to ensure that models can be safely deployed in complex, dynamic environments? Additionally, we will present experiments showing how models latch onto spurious correlations in image data, and how data augmentation shifts model bias towards different features in the data, resulting in trade-offs in the robustness properties of the model.
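For context, the two notions of distribution shift the abstract contrasts can be made concrete in a few lines. The sketch below (illustrative, not from the talk) computes a worst-case ℓ∞ perturbation with the fast gradient sign method alongside a benign corruption (additive Gaussian noise) of the kind studied in common-corruption benchmarks; the toy linear model, the budget eps, and the image shapes are all assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=8 / 255):
    """Worst-case shift: an l_inf perturbation of size at most eps
    chosen to increase the model's loss (fast gradient sign method)."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def gaussian_corrupt(x, sigma=0.1):
    """Benign shift: additive Gaussian noise, one example of the more
    general corruptions a deployed model may encounter."""
    return (x + sigma * torch.randn_like(x)).clamp(0.0, 1.0)

# Toy demo (hypothetical setup): a linear "classifier" on random
# 32x32 RGB images in [0, 1] with arbitrary labels.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)
y = torch.randint(0, 10, (4,))

x_adv = fgsm_perturb(model, x, y)
x_noisy = gaussian_corrupt(x)
print("max pixel change, adversarial:", (x_adv - x).abs().max().item())
print("max pixel change, gaussian:   ", (x_noisy - x).abs().max().item())
```

A model can be hardened against the first kind of shift while remaining brittle to the second; whether progress on ℓp robustness transfers to broader distribution shift is one of the questions the talk poses.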

Bio

Justin is a Research Scientist at Google Brain. He has a broad set of research interests, from graph neural networks to model interpretability. Much of his current focus is on building robust statistical classifiers that generalize well in dynamic, real-world environments. He holds a PhD in Theoretical Mathematics from Rutgers University.