ISL Colloquium

On the Convergence of Langevin Monte Carlo: The Interplay between Tail Growth and Smoothness

Murat Erdogdu – Assistant Professor, University of Toronto

Thu, 28-Oct-2021 / 4:00pm / Packard 101

Abstract

We study sampling from a target distribution $e^{-f}$ using the unadjusted Langevin Monte Carlo (LMC) algorithm. For any potential function $f$ whose tails behave like $|x|^\alpha$ for $\alpha \in [1,2]$ and whose gradient is $\beta$-Hölder continuous, we derive the number of steps sufficient to reach the $\epsilon$-neighborhood of a $d$-dimensional target distribution as a function of $\alpha$ and $\beta$. Our rate estimate, in terms of its $\epsilon$ dependency, is not directly influenced by the tail growth rate $\alpha$ of the potential as long as the growth is at least linear; it relies only on the order of smoothness $\beta$.
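For readers unfamiliar with the algorithm, the unadjusted LMC iteration is the standard discretization $x_{k+1} = x_k - \eta \nabla f(x_k) + \sqrt{2\eta}\,\xi_k$ with $\xi_k \sim \mathcal{N}(0, I_d)$. Below is a minimal sketch in Python; the update rule is standard, but the function names, step size, and Gaussian example target are illustrative assumptions, not details from the talk.

```python
import numpy as np

def lmc_sample(grad_f, x0, step_size, n_steps, rng=None):
    """Unadjusted Langevin Monte Carlo.

    Iterates x_{k+1} = x_k - eta * grad_f(x_k) + sqrt(2 * eta) * xi_k,
    where xi_k ~ N(0, I_d). Returns all iterates as an array.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    iterates = [x.copy()]
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x - step_size * grad_f(x) + np.sqrt(2.0 * step_size) * noise
        iterates.append(x.copy())
    return np.array(iterates)

# Illustrative example (not from the talk): sample from a standard Gaussian
# target e^{-f} with f(x) = |x|^2 / 2, so grad_f(x) = x. This corresponds to
# alpha = 2: a strongly convex potential with Lipschitz gradient.
samples = lmc_sample(grad_f=lambda x: x, x0=np.zeros(2),
                     step_size=0.05, n_steps=5000)
```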

Our rate recovers, in terms of $\epsilon$ dependency, the best known rate, which was established for strongly convex potentials with Lipschitz gradient; we show that the same rate is achievable for a wider class of potentials that are only degenerately convex at infinity.

Bio

Murat is currently an assistant professor at the University of Toronto in the Departments of Computer Science and Statistical Sciences. He is also a faculty member of the Vector Institute and a Canada CIFAR AI Chair. Previously, he was a postdoctoral researcher at the Microsoft Research New England lab. His research interests include optimization, machine learning, statistics, applied probability, and the connections among these fields. He obtained his Ph.D. from the Department of Statistics at Stanford University. He holds an M.S. in Computer Science from Stanford and B.S. degrees in Electrical Engineering and Mathematics, both from Bogazici University.