ISL Colloquium

Experimentation and Decision-Making in Two-Sided Marketplaces: The Impact of Interference / Simple Agent, Complex Environment: Efficient Reinforcement Learning with Agent States

Hannah Li / Shi Dong – PhD Students, Stanford

Thu, 14-Oct-2021 / 4:00pm / Packard 101

Senior graduate students Hannah Li and Shi Dong will each give a 25-minute talk.

Experimentation and Decision-Making in Two-Sided Marketplaces: The Impact of Interference

Abstract

Marketplace platforms use experiments (also known as “A/B tests”) as a method for making data-driven decisions. When platforms consider introducing a new feature, they often first run an experiment to test the feature on a subset of users and then use this data to decide whether to launch the feature platform-wide. However, it is well documented that the treatment effect estimates arising from these experiments may be biased due to the presence of interference. In this talk, we survey a collection of recent results and insights we have developed on experimentation and decision-making in two-sided marketplaces. In particular, we study the bias that interference creates in both the treatment effect estimates and the standard error estimates, and we show how both types of bias affect the platform’s ability to make decisions. We show that for a large class of interventions (“positive interventions”), these biases cause the platform to launch too often. Through simulations calibrated to real-world data, we show that in many settings the treatment effect bias impacts decision-making more than the standard error bias. This talk is based on joint work with Ramesh Johari, Inessa Liskovich, Gabriel Weintraub, and Geng Zhao.
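To make the interference mechanism concrete, here is a minimal, hypothetical simulation sketch (not the speakers' calibrated model): customers compete for a shared, finite pool of listings, so in a customer-side A/B test the treated customers take inventory away from the control customers, and the naive difference in booking rates overstates the global treatment effect.

```python
# Hypothetical toy simulation of interference in a customer-side A/B test
# on a two-sided marketplace. Illustrative only; numbers and model are
# invented, not taken from the talk.
import numpy as np

rng = np.random.default_rng(0)

def run_market(n_customers, n_listings, p_book, treated_mask):
    """Customers arrive in random order; a booking succeeds only while
    a listing remains available (shared, finite supply -> interference)."""
    available = n_listings
    booked = np.zeros(n_customers, dtype=bool)
    for i in rng.permutation(n_customers):
        p = p_book * (1.25 if treated_mask[i] else 1.0)  # treatment lifts demand 25%
        if available > 0 and rng.random() < p:
            booked[i] = True
            available -= 1
    return booked

n_cust, n_list, p = 2000, 800, 0.5

# Global treatment effect: everyone treated vs. everyone in control.
gte = (run_market(n_cust, n_list, p, np.ones(n_cust, bool)).mean()
       - run_market(n_cust, n_list, p, np.zeros(n_cust, bool)).mean())

# Naive customer-side experiment: 50/50 split, difference in booking rates.
mask = rng.random(n_cust) < 0.5
booked = run_market(n_cust, n_list, p, mask)
naive = booked[mask].mean() - booked[~mask].mean()

print(f"global treatment effect: {gte:.3f}")
print(f"naive A/B estimate:      {naive:.3f}")
```

In this toy setting supply binds under both counterfactuals, so the global effect is near zero while the naive estimate stays positive: a "positive intervention" that would lead the platform to launch too often, as in the abstract above.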

Bio

Hannah is a PhD candidate at Stanford University, where she is advised by Ramesh Johari and Gabriel Weintraub. She is part of the Operations Research group in MS&E and the Research in Algorithms and Incentives in Networks (RAIN) group. Her research uses techniques from mathematical modeling, optimization, and causal inference to analyze and design data science methodology for marketplace platforms. Before coming to Stanford, she graduated from Pomona College with a degree in mathematics.

Simple Agent, Complex Environment: Efficient Reinforcement Learning with Agent States

Abstract

In this work, we design a simple reinforcement learning (RL) agent that implements an optimistic version of Q-learning and establish through regret analysis that this agent can operate with some level of competence in an arbitrarily complex environment. While we leverage concepts from the literature on provably efficient RL, we consider a general agent-environment interface and provide a novel agent design and analysis. This level of generality positions our results to inform the design of future agents for operation in complex real environments. We establish that, as time progresses, our agent performs competitively relative to policies that require longer times to evaluate. The time it takes to approach asymptotic performance is polynomial in the complexity of the agent’s state representation and the time required to evaluate the best policy that the agent can represent. Notably, there is no dependence on the complexity of the environment. The ultimate per-period performance loss of the agent is bounded by a constant multiple of a measure of distortion introduced by the agent’s state representation. This work is the first to establish that an algorithm approaches this asymptotic condition within a tractable time frame.
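For readers unfamiliar with optimistic Q-learning, the sketch below shows the generic tabular idea on a toy chain MDP: optimistic value initialization makes under-explored actions look attractive until they are tried, so greedy action selection explores on its own. It is an illustrative stand-in under simplified assumptions, not the agent design, agent-state construction, or regret analysis from the paper.

```python
# Generic tabular optimistic Q-learning on a toy chain MDP.
# Illustrative sketch only; the environment and constants are invented.
import numpy as np

n_states, n_actions, horizon = 6, 2, 10_000
gamma, alpha, r_max = 0.95, 0.1, 1.0

def step(s, a):
    """Toy chain: action 1 moves right (reward only at the far end);
    action 0 resets to state 0 for a small immediate reward."""
    if a == 1:
        s2 = min(s + 1, n_states - 1)
        return s2, (1.0 if s2 == n_states - 1 else 0.0)
    return 0, 0.1

# Optimistic initialization: start every value at the best possible
# return, so unvisited (state, action) pairs dominate until tried.
Q = np.full((n_states, n_actions), r_max / (1 - gamma))

s, total = 0, 0.0
for t in range(horizon):
    a = int(np.argmax(Q[s]))                 # greedy w.r.t. optimistic values
    s2, r = step(s, a)
    total += r
    # Standard Q-learning update; optimism decays only where visited.
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    s = s2

print(f"average reward: {total / horizon:.3f}")
print("greedy policy:", Q.argmax(axis=1))
```

Here the agent state is simply the raw environment state; the setting in the talk replaces it with a compressed agent-state representation and, per the abstract, bounds the learning time by the complexity of that representation rather than the complexity of the environment.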

Bio

Shi Dong is a sixth-year PhD student in Electrical Engineering at Stanford University, where he is advised by Prof. Benjamin Van Roy. Prior to Stanford, he received his undergraduate degree from Tsinghua University. He is interested in using mathematical tools to understand how successful reinforcement learning agents are designed. His recent work has been selected as a finalist in the INFORMS George Nicholson Student Paper Competition. He is on the 2021-2022 academic job market.