Randomized Algorithms for Solving Linear Systems with Low-Rank Structure
Abstract
Iterative methods for solving large systems of linear equations are among the most widely used tools in scientific computing, machine learning, and optimization. Yet these methods can converge extremely slowly on systems arising from poorly conditioned inputs. Fortunately, this phenomenon can be addressed for a broad class of problems whose spectrum exhibits low-rank structure, which arises for instance in kernel methods, in second-order optimization, or from underlying properties of the data. A key paradigm for tackling low-rank structured problems is randomization, i.e., introducing stochasticity into the algorithmic procedure via techniques such as matrix sketching and sampling. In this talk, I will formalize the task of solving a linear system with low-rank structure, and discuss our recent work on designing and analyzing the complexity of randomized algorithms for preconditioning and solving such systems. In particular, I will show how these methods can outperform state-of-the-art deterministic solvers, such as those based on Krylov subspace methods, both in theory and in practice.
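To make the sketching-and-preconditioning idea from the abstract concrete, here is a minimal illustration (not the speaker's specific method): a randomized Nyström-style preconditioner, built from a single sketch of a positive-definite matrix whose spectrum has a few large eigenvalues and a flat tail, combined with preconditioned conjugate gradient. All problem sizes and parameters (n, k, the sketch size, the stability shift) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 20

# Synthetic PSD matrix with low-rank spectral structure:
# k large eigenvalues followed by a flat tail, so kappa(A) ~ 1e6.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.concatenate([np.logspace(6, 2, k), np.ones(n - k)])
A = (Q * eigs) @ Q.T
b = rng.standard_normal(n)

# Randomized Nystrom sketch: A ~ U diag(lam) U^T, built from Y = A @ Omega.
s = 2 * k                                 # sketch size (illustrative choice)
Omega, _ = np.linalg.qr(rng.standard_normal((n, s)))
Y = A @ Omega
nu = 1e-8 * np.linalg.norm(Y)             # small shift for numerical stability
Y_nu = Y + nu * Omega
L = np.linalg.cholesky(Omega.T @ Y_nu)
B = np.linalg.solve(L, Y_nu.T).T          # B = Y_nu @ inv(L).T, so B B^T is the Nystrom approx
U, sv, _ = np.linalg.svd(B, full_matrices=False)
lam = np.maximum(sv**2 - nu, nu)          # estimated top eigenvalues of A

# Preconditioner: damp the captured top directions so the preconditioned
# spectrum clusters near lam[-1]; act as the identity on the complement.
def apply_Pinv(r):
    Ur = U.T @ r
    return U @ ((lam[-1] / lam) * Ur) + (r - U @ Ur)

# Plain preconditioned conjugate gradient on A x = b.
x = np.zeros(n)
r = b.copy()
z = apply_Pinv(r)
p = z.copy()
rz = r @ z
for iters in range(1, n + 1):
    Ap = A @ p
    alpha = rz / (p @ Ap)
    x += alpha * p
    r -= alpha * Ap
    if np.linalg.norm(r) <= 1e-10 * np.linalg.norm(b):
        break
    z = apply_Pinv(r)
    rz_new = r @ z
    p = z + (rz_new / rz) * p
    rz = rz_new

print(iters, np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

With the top of the spectrum neutralized by the preconditioner, PCG converges in a few dozen iterations rather than the hundreds that the raw condition number of roughly 1e6 would suggest; the only extra cost is one sketch of size n-by-2k and a small factorization.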
Bio
Michał Dereziński is an Assistant Professor of Computer Science and Engineering at the University of Michigan. Previously, he was a postdoctoral fellow in the Department of Statistics at the University of California, Berkeley, and a research fellow at the Simons Institute for the Theory of Computing. Michał obtained his Ph.D. in Computer Science at the University of California, Santa Cruz, where he received the Best Dissertation Award for his work on sampling methods in statistical learning. Michał's current research focuses on the theoretical foundations of randomized algorithms for machine learning, optimization, and applied mathematics. In particular, his work in the area of Randomized Numerical Linear Algebra was funded by an NSF CAREER Award. He also received the Google ML and Systems Junior Faculty Award for contributions to the theoretical foundations of machine learning, and the Best Paper Award at the 34th Conference on Neural Information Processing Systems.