ISL Colloquium


Adversarial machine learning and instrumental variables for flexible causal modeling

Vasilis Syrgkanis – Assistant Professor, Stanford

Thu, 2-Feb-2023 / 4:00pm / Packard 202

Abstract

Machine learning models are increasingly being used to automate decision-making in a multitude of domains. Making good decisions requires uncovering causal relationships from data, and many causal estimation problems reduce to estimating a model that satisfies a set of conditional moment restrictions. We develop an approach for estimating flexible models defined via conditional moment restrictions, with a prototypical application being non-parametric instrumental variable regression. We introduce a min-max criterion function, under which the estimation problem can be thought of as solving a zero-sum game between a modeler, who optimizes over the hypothesis space of the target causal model, and an adversary, who identifies violated moments over a test function space. We analyze the statistical estimation rate of the resulting estimator for arbitrary hypothesis spaces, with respect to an appropriate analogue of the mean-squared-error metric for ill-posed inverse problems. We show that when the minimax criterion is regularized with a second-moment penalty on the test function and the test function space is sufficiently rich, the estimation rate scales with the critical radius of the hypothesis and test function spaces, a quantity which typically gives tight fast rates. Our main result follows from a novel localized Rademacher analysis of statistical learning problems defined via minimax objectives. We provide applications of our main results for several hypothesis spaces used in practice, such as reproducing kernel Hilbert spaces, high-dimensional sparse linear functions, spaces defined via shape constraints, ensemble estimators such as random forests, and neural networks. For each of these applications we provide computationally efficient optimization methods for solving the corresponding minimax problem, and stochastic first-order heuristics for neural networks. Based on joint work with Nishanth Dikkala, Greg Lewis and Lester Mackey.
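To make the min-max criterion concrete, here is a minimal sketch (not the speaker's implementation) for the simplest instance: linear instrumental variable regression, where both the hypothesis space h(x) = θᵀφ(x) and the adversary's test space f(z) = βᵀψ(z) are finite-dimensional linear classes. The regularized objective is minθ maxβ (1/n)βᵀΨᵀ(y − Φθ) − λ·(1/n)βᵀΨᵀΨβ; the second-moment penalty makes the inner maximization quadratic with a closed-form adversary, so the outer minimization reduces to a GMM-style least-squares solve. The data-generating process and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
z = rng.normal(size=n)                   # instrument
u = rng.normal(size=n)                   # unobserved confounder
x = z + u + 0.1 * rng.normal(size=n)     # endogenous regressor (correlated with u)
y = 2.0 * x + u                          # structural model: true causal slope = 2

# Hypothesis space: h(x) = theta^T phi(x); test space: f(z) = beta^T psi(z).
Phi = np.column_stack([np.ones(n), x])
Psi = np.column_stack([np.ones(n), z])

# Regularized minimax criterion:
#   min_theta max_beta (1/n) beta^T Psi^T (y - Phi theta) - lam * (1/n) beta^T Psi^T Psi beta
# The inner max over beta is concave quadratic, with closed form
#   beta*(theta) = (1/(2*lam)) M^{-1} m(theta),
#   where M = Psi^T Psi / n and m(theta) = Psi^T (y - Phi theta) / n.
# Substituting back gives the criterion m(theta)^T M^{-1} m(theta) / (4*lam),
# whose minimizer over theta does not depend on lam in this linear case.
M = Psi.T @ Psi / n
A = Phi.T @ Psi @ np.linalg.solve(M, Psi.T @ Phi)
b = Phi.T @ Psi @ np.linalg.solve(M, Psi.T @ y)
theta = np.linalg.solve(A, b)

print(theta)  # close to [0, 2]: the adversary forces E[(y - h(x)) psi(z)] ≈ 0
```

With these linear spaces the estimator coincides with two-stage least squares; an ordinary regression of y on x would instead be biased upward by the confounder u. The talk's results concern the much richer case where the two spaces are nonparametric (RKHS, random forests, neural networks) and the inner maximization has no closed form.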

Bio

Vasilis Syrgkanis is an Assistant Professor of Management Science and Engineering at Stanford University. Prior to joining Stanford, he was a Principal Researcher at Microsoft Research, New England, where he co-led the project on Automated Learning and Intelligence for Causation and Economics (ALICE). He received his Ph.D. in Computer Science from Cornell and spent two years at Microsoft Research, New York as a postdoctoral researcher. His research addresses problems at the intersection of machine learning, causal inference, economics and theoretical computer science. His work has received best paper awards at the 2015 ACM Conference on Economics and Computation (EC'15), the 2015 Annual Conference on Neural Information Processing Systems (NeurIPS'15) and the 2019 Conference on Learning Theory (COLT'19).