On the Convergence of Langevin Monte Carlo: The Interplay between Tail Growth and Smoothness
Abstract
We study sampling from a target distribution $\nu_* = e^{-f}$ using the unadjusted Langevin Monte Carlo (LMC) algorithm. For any potential function $f$ whose tails behave like $\|x\|^{\alpha}$ for $\alpha \in [1, 2]$, and has $\beta$-Hölder continuous gradient, we derive the sufficient number of steps to reach the $\epsilon$-neighborhood of a $d$-dimensional target distribution as a function of $\alpha$ and $\beta$. Our rate estimate, in terms of $\epsilon$ dependency, is not directly influenced by the tail growth rate $\alpha$ of the potential function as long as its growth is at least linear, and it only relies on the order of smoothness $\beta$.
For potentials with Lipschitz gradient, i.e. $\beta = 1$, our rate recovers the best known rate $\tilde{O}(d\epsilon^{-1})$, which was established for strongly convex potentials in terms of $\epsilon$ dependency, but we show that the same rate is achievable for a wider class of potentials that are degenerately convex at infinity.
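For readers unfamiliar with the algorithm, below is a minimal NumPy sketch of the unadjusted LMC iteration $x_{k+1} = x_k - \eta \nabla f(x_k) + \sqrt{2\eta}\,\xi_k$ with $\xi_k \sim \mathcal{N}(0, I_d)$; the step size and the Gaussian example target are illustrative choices, not parameters taken from the paper.

```python
import numpy as np

def lmc(grad_f, x0, step_size, n_steps, rng=None):
    """Unadjusted Langevin Monte Carlo:
    x_{k+1} = x_k - step_size * grad_f(x_k) + sqrt(2 * step_size) * xi_k,
    with xi_k ~ N(0, I). Returns the final iterate (illustrative sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        xi = rng.standard_normal(x.shape)  # fresh Gaussian noise each step
        x = x - step_size * grad_f(x) + np.sqrt(2.0 * step_size) * xi
    return x

# Example target (assumed for illustration): f(x) = ||x||^2 / 2, a standard
# Gaussian, corresponding to alpha = 2 tails and beta = 1 smoothness.
sample = lmc(grad_f=lambda x: x, x0=np.zeros(5), step_size=0.01, n_steps=5000)
```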
Bio
Murat is currently an assistant professor at the University of Toronto in the Departments of Computer Science and Statistical Sciences. He is also a faculty member of the Vector Institute and a Canada CIFAR AI Chair. Before that, he was a postdoctoral researcher at the Microsoft Research New England lab. His research interests include optimization, machine learning, statistics, applied probability, and the connections among these fields. He obtained his Ph.D. from the Department of Statistics at Stanford University. He has an M.S. degree in Computer Science from Stanford, and B.S. degrees in Electrical Engineering and Mathematics, both from Bogazici University.