By generalizing DIP, we design an adaptation algorithm that corrects the PF-ODE trajectory of posterior sampling with diffusion models, so that one can reconstruct from out-of-distribution (OOD) measurements.
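A minimal PyTorch sketch of the underlying idea, under the assumption that the adaptation briefly fine-tunes the denoiser so that its Tweedie estimate becomes consistent with the OOD measurement before each ODE step; `model`, `A`, and `alpha_bar_t` are illustrative placeholders rather than the paper's exact interface.

```python
import torch

def adapt_then_denoise(x_t, t, y, A, model, alpha_bar_t, n_adapt=3, lr=1e-5):
    """DIP-style adaptation sketch: fine-tune the denoiser for a few steps so that
    its Tweedie estimate x0_hat agrees with the (OOD) measurement y, thereby
    correcting the PF-ODE trajectory before the next step is taken."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(n_adapt):
        eps = model(x_t, t)
        # Tweedie / posterior-mean estimate of the clean image from x_t
        x0_hat = (x_t - (1 - alpha_bar_t) ** 0.5 * eps) / alpha_bar_t ** 0.5
        loss = torch.sum((A(x0_hat) - y) ** 2)  # measurement-consistency loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return model(x_t, t)  # adapted noise estimate used for the next ODE step
```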
Prompt tuning of the text embedding leads to better reconstruction quality when solving inverse problems with latent diffusion models.
DDS enables fast sampling from the posterior without the heavy gradient computation required by diffusion model-based inverse problem solvers (DIS).
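A minimal sketch of the kind of data-consistency step that removes the network gradient: a few conjugate-gradient iterations on the normal equations, warm-started at the Tweedie denoised estimate. Here `A`/`AT` are placeholder linear forward/adjoint operators, and the re-noising step that follows is omitted.

```python
import torch

def cg_data_consistency(x0_hat, y, A, AT, n_iter=5):
    """Few-step conjugate gradient on A^T A x = A^T y, warm-started at the
    denoised estimate x0_hat. This replaces back-propagation through the
    score network used by earlier diffusion-based inverse problem solvers."""
    x = x0_hat.clone()
    r = AT(y) - AT(A(x))      # initial residual
    p = r.clone()
    rs_old = torch.sum(r * r)
    for _ in range(n_iter):
        Ap = AT(A(p))
        alpha = rs_old / (torch.sum(p * Ap) + 1e-12)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = torch.sum(r * r)
        p = r + (rs_new / (rs_old + 1e-12)) * p
        rs_old = rs_new
    return x
```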
We show that seemingly different direct diffusion bridges are equivalent, and that we can push the Pareto frontier of the perception-distortion tradeoff with data consistency gradient guidance.
TPDM improves 3D voxel generative modeling with 2D diffusion models. We show that a 3D generative prior can be accurately represented as the product of two independent 2D diffusion priors, and that this factorization scales to both unconditional sampling and solving inverse problems.
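A rough sketch of how two slice-wise 2D noise predictors could be combined over perpendicular axes of a (D, H, W) volume. The mixing scheme and model interfaces are illustrative placeholders; in practice one may alternate between the two priors across timesteps rather than averaging them.

```python
import torch

def tpdm_eps(volume_t, t, eps_axial, eps_coronal, p=0.5):
    """Product-of-2D-priors noise estimate for a 3D volume of shape (D, H, W).

    eps_axial   : 2D noise predictor applied to axial slices   (batches of (H, W))
    eps_coronal : 2D noise predictor applied to coronal slices (batches of (D, W))
    """
    D, H, W = volume_t.shape

    # First 2D prior, applied slice-by-slice along the depth axis
    ax = eps_axial(volume_t.reshape(D, 1, H, W), t).reshape(D, H, W)

    # Second 2D prior, applied along a perpendicular axis
    cor_in = volume_t.permute(1, 0, 2).reshape(H, 1, D, W)
    cor = eps_coronal(cor_in, t).reshape(H, D, W).permute(1, 0, 2)

    # Mix the two slice-wise predictions (alternating is an equally valid choice)
    return p * ax + (1 - p) * cor
```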
We propose a method to perform posterior sampling with diffusion models on blind inverse problems.
We propose a method that can solve 3D inverse problems in the medical imaging domain using only a pre-trained 2D diffusion model augmented with a conventional model-based prior.
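A short sketch of how such a model-based prior could couple independently denoised slices: after each slice-wise 2D diffusion step, enforce data consistency together with a total-variation penalty along the slice direction. The original method uses ADMM with conjugate gradient; plain gradient descent and the names `A`, `lam`, `lr` are placeholders to keep the sketch short.

```python
import torch

def tv_z(x):
    """Total variation along the slice (z) axis of a (D, H, W) volume --
    the conventional model-based prior that couples the 2D-denoised slices."""
    return (x[1:] - x[:-1]).abs().sum()

def data_consistency_update(x, y, A, lam=0.01, lr=1e-3, n_iter=5):
    """A few gradient steps on ||A(x) - y||^2 + lam * TV_z(x), applied after each
    slice-wise 2D diffusion denoising step."""
    x = x.detach().requires_grad_(True)
    for _ in range(n_iter):
        loss = torch.sum((A(x) - y) ** 2) + lam * tv_z(x)
        g, = torch.autograd.grad(loss, x)
        x = (x - lr * g).detach().requires_grad_(True)
    return x.detach()
```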
Diffusion posterior sampling enables solving arbitrary noisy (e.g. Gaussian, Poisson) inverse problems, whether linear or non-linear.
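A minimal sketch of the measurement-guided reverse step at the core of this approach, assuming a DDPM-style noise-prediction network; `score_model`, `forward_op`, and the schedule values are placeholders.

```python
import torch

def dps_step(x_t, t, y, forward_op, score_model, alpha_t, alpha_bar_t, sigma_t, zeta=1.0):
    """One measurement-guided reverse diffusion step.

    x_t        : current noisy sample
    y          : (noisy) measurement
    forward_op : possibly non-linear measurement operator A(.)
    score_model: predicts the noise eps_theta(x_t, t)
    """
    x_t = x_t.detach().requires_grad_(True)
    eps = score_model(x_t, t)

    # Tweedie / posterior-mean estimate of the clean image from x_t
    x0_hat = (x_t - (1 - alpha_bar_t) ** 0.5 * eps) / alpha_bar_t ** 0.5

    # Unconditional ancestral (DDPM) update
    x_prev = (x_t - (1 - alpha_t) / (1 - alpha_bar_t) ** 0.5 * eps) / alpha_t ** 0.5
    x_prev = x_prev + sigma_t * torch.randn_like(x_t)

    # Data-consistency guidance: gradient of the measurement residual w.r.t. x_t,
    # back-propagated through the denoiser
    residual = torch.linalg.vector_norm(y - forward_op(x0_hat))
    grad = torch.autograd.grad(residual, x_t)[0]

    return (x_prev - zeta * grad).detach()
```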
Come closer to diffuse faster when solving inverse problems with diffusion models. We establish state-of-the-art results with only 20 diffusion steps across various tasks, including SR, inpainting, and CS-MRI.
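A minimal sketch of the initialization behind the step-count savings, assuming a DDPM-style cumulative schedule `alpha_bar`: instead of starting reverse diffusion from pure noise at t = T, forward-diffuse a cheap initial estimate of the solution to an intermediate time t_start and run only the remaining conditional reverse steps from there.

```python
import torch

def ccdf_init(x_init, t_start, alpha_bar):
    """Forward-diffuse an initial estimate (e.g. a simple reconstruction of the
    measurement) to time t_start. Conditional reverse sampling then starts at
    t_start instead of T, requiring far fewer diffusion steps."""
    a = alpha_bar[t_start]
    noise = torch.randn_like(x_init)
    return a ** 0.5 * x_init + (1.0 - a) ** 0.5 * noise
```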