
# Compressed Sensing using Generative Models: Theory and Applications

### Ajil Jalal – Postdoc, UC Berkeley

Thu, 9-Nov-2023 / 4:00pm / Packard 202

### Abstract

The goal of compressed sensing is to make use of image structure to estimate an image from a small number of linear measurements. This has applications in many areas, such as in MRI, where it can allow for smaller scan times while preserving scan quality.

In classical compressed sensing, image structure is typically represented by sparsity in a well-chosen basis. We show how to achieve guarantees similar to standard compressed sensing without employing sparsity at all: instead, we suppose that vectors lie near the range of a generative model G: R^k -> R^n. Our main theorem is that, if G is L-Lipschitz, then roughly O(k log L) random Gaussian measurements suffice for an L_2/L_2 recovery guarantee; this is O(k d log n) for typical d-layer neural networks. We demonstrate our results using published variational autoencoder and generative adversarial network models. Our method can use 5-10x fewer measurements than Lasso for the same accuracy.
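To make the estimator concrete, here is a minimal NumPy sketch of the recovery step: given measurements y = A x of an image x near the range of a generative model G, minimize ||A G(z) - y||^2 over the latent code z by gradient descent. All names and dimensions here are illustrative, and a random linear map stands in for G (in the talk, G is a trained VAE or GAN decoder, and the objective is minimized with a deep-learning framework's autodiff):

```python
import numpy as np

rng = np.random.default_rng(0)

n, k, m = 100, 5, 25  # ambient dim, latent dim, number of measurements (m << n)

# Stand-in generative model: a random linear map G(z) = W z.
# Its range is a k-dimensional subspace, the simplest Lipschitz "generator".
W = rng.normal(size=(n, k)) / np.sqrt(k)
G = lambda z: W @ z

# Ground-truth image lies exactly in the range of G.
z_true = rng.normal(size=k)
x_true = G(z_true)

# Random Gaussian measurement matrix; we observe y = A x.
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true

# Recovery: minimize ||A G(z) - y||^2 over z by gradient descent.
M = A @ W                                      # composed map (possible only because G is linear here)
step = 1.0 / (2 * np.linalg.norm(M, 2) ** 2)   # safe step size for this quadratic objective
z = np.zeros(k)
for _ in range(2000):
    z -= step * 2 * M.T @ (M @ z - y)          # gradient of ||M z - y||^2

x_hat = G(z)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error: {rel_err:.2e}")
```

Note that m = 25 measurements recover a signal in R^100: the k-dimensional latent structure, not sparsity, is what makes the problem well-posed.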

The second part of the talk generalizes the results of the first: for any prior distribution on the image, we show that the Posterior Sampling estimator using a generative model achieves near-optimal recovery guarantees. Moreover, this result is robust to model mismatch, as long as the generative model's distribution is close to the true distribution in Wasserstein distance. Time permitting, we will also discuss fairness benefits of our algorithm and its applications to MRI.
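The idea behind Posterior Sampling can be illustrated in the one case where it is exactly computable: a Gaussian prior, for which the posterior p(x | y) is Gaussian in closed form. The prior below (a random low-rank covariance plus small isotropic noise) is a hypothetical stand-in for a generative model's distribution; the talk's guarantees concern general priors, where sampling requires approximate methods such as Langevin dynamics:

```python
import numpy as np

rng = np.random.default_rng(1)

n, k, m = 100, 5, 25     # ambient dim, effective latent dim, measurements
sigma = 0.01             # measurement noise std (illustrative)
eps = 1e-2               # small off-manifold prior variance (illustrative)

# Hypothetical Gaussian prior: x = W z + off-manifold noise, z ~ N(0, I),
# so x ~ N(0, Sigma) with Sigma = W W^T + eps I.
W = rng.normal(size=(n, k)) / np.sqrt(k)
Sigma = W @ W.T + eps * np.eye(n)

x_true = W @ rng.normal(size=k) + np.sqrt(eps) * rng.normal(size=n)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true + sigma * rng.normal(size=m)

# Closed-form Gaussian posterior p(x | y) = N(mu_post, Sigma_post).
S = A @ Sigma @ A.T + sigma**2 * np.eye(m)
K = Sigma @ A.T @ np.linalg.inv(S)             # gain matrix
mu_post = K @ y
Sigma_post = Sigma - K @ A @ Sigma

# Posterior Sampling estimator: draw one sample from the posterior.
# (check_valid='ignore' tolerates tiny negative eigenvalues from round-off.)
x_hat = rng.multivariate_normal(mu_post, Sigma_post, check_valid='ignore')
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative error of posterior sample: {rel_err:.2f}")
```

Because the prior concentrates near a k-dimensional subspace, a single posterior sample already lands close to the truth despite observing only m = 25 of n = 100 dimensions.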

### Bio

Ajil Jalal is a postdoctoral research associate under Prof. Kannan Ramchandran at UC Berkeley. He completed his PhD under Prof. Alexandros G. Dimakis at UT Austin in 2022. His research focuses on designing algorithms that solve signal processing problems using generative models. In particular, his work shows that generative models generalize sparsity, achieving provable guarantees similar to those of Lasso while retaining the empirical benefits of large datasets and compute.