ISL Colloquium


Adaptivity and Confounding in Multi-Armed Bandit Experiments

Daniel Russo – Associate Professor, Columbia Business School

Thu, 3-Mar-2022 / 4:00pm / Zoom: https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ

(Hosted jointly with Stanford RL Forum.)


Abstract

Multi-armed bandit algorithms minimize experimentation costs required to converge on optimal behavior. They do so by rapidly adapting experimentation effort away from poorly performing actions as feedback is observed. But this desirable feature makes them sensitive to confounding, which is the primary concern underlying classical randomized controlled trials. We highlight, for instance, that popular bandit algorithms cannot address the problem of identifying the best action when day-of-week effects may influence reward observations. In response, this paper proposes deconfounded Thompson sampling, which makes simple, but critical, modifications to the way Thompson sampling is usually applied. Theoretical guarantees suggest the algorithm strikes a delicate balance between adaptivity and robustness to confounding. It attains asymptotic lower bounds on the number of samples required to confidently identify the best action – suggesting optimal adaptivity – but also satisfies strong performance guarantees in the presence of day-of-week effects and delayed observations – suggesting unusual robustness. At the core of the paper is a new model of contextual bandit experiments in which issues of delayed learning and distribution shift arise organically.
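To make the confounding concern concrete, here is a minimal sketch of standard (not deconfounded) Beta-Bernoulli Thompson sampling run on a two-arm bandit whose reward probabilities dip on weekends. The reward model, the 0.3 weekend dip, and all parameter values are illustrative assumptions, not details from the paper. Because the algorithm adapts its allocation over time, each arm's observed sample average mixes a different distribution of weekdays, which is exactly the kind of confounding the talk addresses.

```python
import random

# Illustrative sketch only -- NOT the paper's deconfounded algorithm.
# Vanilla Beta-Bernoulli Thompson sampling on a 2-arm bandit whose
# reward means depend on the day of week.

def thompson_sampling(reward_fn, n_days=70, pulls_per_day=10, seed=0):
    rng = random.Random(seed)
    # Beta(1, 1) priors per arm, tracked as (alpha, beta) parameters
    alpha, beta = [1, 1], [1, 1]
    counts = [0, 0]  # how often each arm was pulled
    for day in range(n_days):
        for _ in range(pulls_per_day):
            # Sample a mean from each arm's posterior; play the argmax
            samples = [rng.betavariate(alpha[a], beta[a]) for a in range(2)]
            arm = max(range(2), key=lambda a: samples[a])
            # Bernoulli reward whose mean depends on arm AND weekday
            r = 1 if rng.random() < reward_fn(arm, day % 7) else 0
            alpha[arm] += r
            beta[arm] += 1 - r
            counts[arm] += 1
    return counts

# Hypothetical reward model: arm 1 is slightly better on average
# (0.55 vs 0.50), but weekends (weekdays 5 and 6) depress both arms.
def reward_with_weekend_dip(arm, weekday):
    base = 0.50 if arm == 0 else 0.55
    return base - (0.3 if weekday >= 5 else 0.0)

counts = thompson_sampling(reward_with_weekend_dip)
print(counts)  # adaptive allocation across the two arms after 70 days
```

Note how each arm's posterior pools rewards from whichever days that arm happened to be played: if one arm receives most of its pulls on weekends, its estimated mean is biased downward, and the algorithm has no mechanism to correct for this.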

Bio

Daniel Russo is an Associate Professor in the Decision, Risk, and Operations division of Columbia Business School. His research focuses on problems at the intersection of sequential decision-making and statistical machine learning. He completed his PhD at Stanford under the supervision of Ben Van Roy.