ISL Colloquium

On the Convergence of Langevin Monte Carlo: The Interplay between Tail Growth and Smoothness

Murat Erdogdu
Assistant Professor, University of Toronto
Thursday, October 28, 2021 at 4:00 PM • Packard 101

Abstract

We study sampling from a target distribution e^{-f} using the unadjusted Langevin Monte Carlo (LMC) algorithm. For any potential function f whose tails behave like ‖x‖^α for α ∈ [1, 2], and whose gradient is β-Hölder continuous, we derive the number of steps sufficient to reach an ε-neighborhood of a d-dimensional target distribution as a function of α and β. In terms of its ε dependency, our rate estimate is not directly influenced by the tail growth rate α of the potential, as long as the growth is at least linear; it relies only on the order of smoothness β.
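For readers unfamiliar with the algorithm in the abstract, unadjusted LMC iterates a discretized Langevin diffusion: x_{k+1} = x_k − η∇f(x_k) + √(2η)·ξ_k with ξ_k standard Gaussian noise. Below is a minimal sketch (not code from the talk); the potential f(x) = ‖x‖²/2, the step size η, and the iteration count are illustrative choices, so the long-run samples should approximate a standard Gaussian up to discretization bias.

```python
import numpy as np

def lmc(grad_f, x0, eta=0.05, n_steps=20000, rng=None):
    """Unadjusted Langevin Monte Carlo targeting e^{-f}.

    x_{k+1} = x_k - eta * grad_f(x_k) + sqrt(2 * eta) * N(0, I)
    """
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    samples = np.empty((n_steps, x.size))
    for k in range(n_steps):
        x = x - eta * grad_f(x) + np.sqrt(2.0 * eta) * rng.standard_normal(x.size)
        samples[k] = x
    return samples

# Illustrative target: f(x) = ||x||^2 / 2, so e^{-f} is the standard Gaussian
grad_f = lambda x: x
samples = lmc(grad_f, x0=np.zeros(2))

# Discard burn-in, then check the empirical moments against N(0, I)
mean = samples[5000:].mean(axis=0)
var = samples[5000:].var(axis=0)
print(mean, var)
```

The empirical mean should be near 0 and the variance near 1; the small residual bias in the variance is the discretization error that the step-size analysis in the talk controls.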

Our rate recovers, in terms of ε dependency, the best known rate, which was established for strongly convex potentials with Lipschitz gradients; moreover, we show that the same rate is achievable for a wider class of potentials that are degenerately convex at infinity.

Bio

Murat is currently an assistant professor at the University of Toronto in the Departments of Computer Science and Statistical Sciences. He is also a faculty member of the Vector Institute and a CIFAR AI Chair. Before that, he was a postdoctoral researcher at the Microsoft Research New England lab. His research interests include optimization, machine learning, statistics, applied probability, and the connections among these fields. He obtained his Ph.D. from the Department of Statistics at Stanford University. He also holds an M.S. degree in Computer Science from Stanford, and B.S. degrees in Electrical Engineering and Mathematics, both from Boğaziçi University.