On Solving Inverse Problems Using Latent Diffusion-based Generative Models
Sanjay Shakkottai
IFML Seminar, UT Austin

Diffusion models have emerged as a powerful new approach to generative modeling. In this talk, we present the first framework that uses pre-trained latent diffusion models to solve linear inverse problems such as image denoising, inpainting, and super-resolution. Previously proposed algorithms (such as DPS and DDRM) apply only to pixel-space diffusion models. We theoretically analyze our algorithm, proving sample recovery in a linear model setting, and the algorithmic insight obtained from this analysis extends to the more general settings often considered in practice. Experimentally, we outperform previously proposed posterior sampling algorithms on a wide variety of problems, including random inpainting, block inpainting, denoising, deblurring, destriping, and super-resolution.

Next, we present an efficient second-order approximation based on Tweedie's formula that mitigates the bias incurred by the widely used first-order samplers. With this approximation, we devise a surrogate loss function that refines the reverse process at every diffusion step, both to solve inverse problems and to perform high-fidelity text-guided image editing.

Based on joint work with Litu Rout, Negin Raoof, Giannis Daras, Constantine Caramanis, Alex Dimakis, Yujia Chen, Abhishek Kumar, and Wen-Sheng Chu. Papers: https://arxiv.org/abs/2307.00619, https://arxiv.org/abs/2312.00852
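Background (not part of the talk abstract): a minimal sketch of the standard setup behind the methods mentioned above, written in LaTeX; the notation ($\mathcal{A}$, $\eta$, $\bar\alpha_t$) is assumed here for illustration rather than taken from the papers. A linear inverse problem observes

\[
  y = \mathcal{A}\,x_0 + \eta, \qquad \eta \sim \mathcal{N}(0, \sigma_y^2 I),
\]

where $\mathcal{A}$ is a known linear operator (a mask for inpainting, a blur kernel for deblurring, a downsampling operator for super-resolution), and the goal is to sample from the posterior $p(x_0 \mid y)$ using a pre-trained diffusion model as the prior. For a diffusion iterate $x_t = \sqrt{\bar\alpha_t}\,x_0 + \sqrt{1-\bar\alpha_t}\,\epsilon$ with $\epsilon \sim \mathcal{N}(0, I)$, Tweedie's formula gives the first-order posterior-mean estimate

\[
  \mathbb{E}[x_0 \mid x_t] \;=\; \frac{x_t + (1-\bar\alpha_t)\,\nabla_{x_t}\log p_t(x_t)}{\sqrt{\bar\alpha_t}},
\]

which first-order samplers such as DPS substitute for $x_0$ in the measurement-consistency term. The second-order approximation described in the talk is aimed at correcting the bias introduced by treating this conditional mean as if it were a sample of $x_0$.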