Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session
MS24 1: Learned Regularization for Solving Inverse Problems
Time:
Thursday, 07/Sept/2023:
1:30pm - 3:30pm

Session Chair: Johannes Hertrich
Session Chair: Sebastian Neumayer
Location: VG1.101


Presentations

The Power of Patches for Training Normalizing Flows

Fabian Altekrüger1,2, Alexander Denker3, Paul Hagemann2, Johannes Hertrich2, Peter Maass3, Gabriele Steidl2

1Humboldt-University Berlin, Germany; 2Technical University Berlin, Germany; 3University Bremen, Germany

We introduce two kinds of data-driven patch priors learned from very few images. First, the Wasserstein patch prior penalizes the Wasserstein-2 distance between the patch distribution of the reconstruction and that of a (possibly small) reference image. Such a reference image is available, for instance, when working with material microstructures or textures. The second regularizer learns the patch distribution using a normalizing flow. Since even a small image contains a large number of patches, this enables us to train the regularizer on very few training images. For both regularizers, we show that they indeed induce a probability distribution, so that they can be used within a Bayesian setting. We demonstrate the performance of patch priors for MAP estimation and posterior sampling within Bayesian inverse problems. For both approaches, we observe numerically that only very few clean reference images are required to achieve high-quality results and to obtain stability with respect to small perturbations of the problem.
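The patch-comparison idea behind the Wasserstein patch prior can be sketched numerically. The snippet below is an illustration, not the authors' implementation: it extracts overlapping patches from two images and compares their empirical distributions with a sliced approximation of the Wasserstein-2 distance (a common Monte-Carlo substitute for the exact optimal-transport computation); all names and parameters are made up for the example.

```python
import numpy as np

def extract_patches(img, p=4):
    """Collect all overlapping p x p patches of a 2-D image as flat vectors."""
    h, w = img.shape
    return np.array([img[i:i + p, j:j + p].ravel()
                     for i in range(h - p + 1)
                     for j in range(w - p + 1)])

def sliced_wasserstein2(a, b, n_proj=50, seed=0):
    """Monte-Carlo sliced approximation of the Wasserstein-2 distance
    between two point clouds a, b of shape (n_points, dim)."""
    rng = np.random.default_rng(seed)
    dim = a.shape[1]
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=dim)
        theta /= np.linalg.norm(theta)           # random direction on the sphere
        qa = np.sort(a @ theta)                  # 1-D projections of both clouds
        qb = np.sort(b @ theta)
        m = min(len(qa), len(qb))
        qa = np.quantile(qa, np.linspace(0, 1, m))
        qb = np.quantile(qb, np.linspace(0, 1, m))
        total += np.mean((qa - qb) ** 2)         # 1-D W2^2 via sorted samples
    return np.sqrt(total / n_proj)
```

In a reconstruction method, a term of this form would be evaluated between the patches of the current iterate and the patches of the reference image and added to the data discrepancy as a penalty.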



Trust your source: quantifying source condition elements for variational regularisation methods

Martin Benning1,4, Tatiana Bubba2, Luca Ratti3, Danilo Riccio1

1Queen Mary University of London, United Kingdom; 2University of Bath, United Kingdom; 3University of Genoa, Italy; 4The Alan Turing Institute, United Kingdom

Source conditions are a key tool in variational regularisation to derive error estimates and convergence rates for ill-posed inverse problems. In this paper, we provide a recipe for practically computing source condition elements as the solutions of convex minimisation problems that can be solved with first-order algorithms. We demonstrate the validity of our approach by testing it on two inverse problem case studies in machine learning and image processing: sparse coefficient estimation of a polynomial via LASSO regression, and recovery of an image from a subset of the coefficients of its Fourier transform. We further demonstrate that the proposed approach can easily be modified to solve the learning task of identifying the optimal sampling pattern in the Fourier domain for a given image and variational regularisation method, which has applications in the context of sparsity-promoting reconstruction from magnetic resonance imaging data. We conclude by presenting a methodology with which data-driven regularisations with quantitative error estimates can be designed and trained.
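The first case study, sparse polynomial coefficient estimation via LASSO, can be sketched with a plain first-order method of the kind the abstract refers to. The following is an illustrative ISTA (proximal gradient) implementation, not the authors' code; the polynomial, sampling grid, and regularisation weight are invented for the example.

```python
import numpy as np

def ista_lasso(A, y, lam, n_iter=10000):
    """ISTA (proximal gradient) for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - y))       # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-thresholding
    return x

# illustrative setup: recover the sparse coefficients of y(t) = 2*t^3 - t
t = np.linspace(-1.0, 1.0, 40)
A = np.vander(t, 6, increasing=True)             # monomial basis 1, t, ..., t^5
coef_true = np.array([0.0, -1.0, 0.0, 2.0, 0.0, 0.0])
y = A @ coef_true
x_hat = ista_lasso(A, y, lam=1e-3)
```

The soft-thresholding step is the proximal map of the l1 penalty; it drives the coefficients of the unused monomials to (near) zero while keeping the two active ones.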


Plug-and-Play image reconstruction is a convergent regularization method

Andrea Ebner, Markus Haltmeier

University of Innsbruck, Austria

Non-uniqueness and instability are characteristic features of image reconstruction processes. As a result, it is necessary to develop regularization methods that can be used to compute reliable approximate solutions. A regularization method provides a family of stable reconstructions that converge to an exact solution of the noise-free problem as the noise level tends to zero. The standard regularization technique is variational image reconstruction, which minimizes a data discrepancy augmented by a regularizer. The actual numerical implementation makes use of iterative methods, often involving proximal mappings of the regularizer. In recent years, plug-and-play image reconstruction (PnP) has been developed as a powerful generalization of variational methods, based on replacing proximal mappings by more general image denoisers. While PnP iterations yield excellent results, neither stability nor convergence in the sense of regularization has been studied so far. In this work, we extend the idea of PnP by considering families of PnP iterations, each accompanied by its own denoiser. As our main theoretical result, we show that such PnP reconstructions lead to stable and convergent regularization methods. This shows for the first time that PnP is as mathematically well justified for robust image reconstruction as variational methods are.
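The basic PnP iteration x ← D(x − τ∇f(x)) that this line of work builds on can be sketched on a toy 1-D inpainting problem. In the sketch below, the "denoiser" D is just a moving average standing in for a learned denoiser, and the forward operator is a sampling mask; everything here is an illustrative assumption, not the paper's construction.

```python
import numpy as np

def box_denoiser(x, k=3):
    """Toy denoiser: k-tap moving average (stand-in for a learned denoiser)."""
    return np.convolve(x, np.ones(k) / k, mode="same")

def pnp_forward_backward(y, mask, denoiser, tau=1.0, n_iter=200):
    """PnP iteration x <- D(x - tau * A^T(A x - y)) for inpainting, A = diag(mask)."""
    x = y.copy()
    for _ in range(n_iter):
        grad = mask * (mask * x - y)             # gradient of 0.5*||Ax - y||^2
        x = denoiser(x - tau * grad)
    return x

rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0.0, 2.0 * np.pi, 100))   # smooth ground truth
mask = (rng.random(100) < 0.7).astype(float)          # observe ~70% of the samples
y = mask * truth                                      # missing entries read as 0
x_rec = pnp_forward_backward(y, mask, box_denoiser)
```

The gradient step enforces consistency with the observed samples, while the denoising step fills in and smooths the missing ones; replacing the moving average with a proximal map would recover the classical variational iteration.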


Provably Convergent Plug-and-Play Quasi-Newton Methods

Hong Ye Tan1, Subhadip Mukherjee2, Junqi Tang3, Carola-Bibiane Schönlieb1

1University of Cambridge, United Kingdom; 2University of Bath, United Kingdom; 3University of Birmingham, United Kingdom

Plug-and-Play (PnP) methods are a class of efficient data-driven methods for solving imaging inverse problems, wherein one incorporates an off-the-shelf denoiser inside iterative optimization schemes such as proximal gradient descent and ADMM. PnP methods have been shown to yield excellent reconstruction performance and are also supported by convergence guarantees. However, existing provable PnP methods impose heavy restrictions on the denoiser (such as nonexpansiveness) or the fidelity term (such as strict convexity). In this work, we propose a provable PnP method that imposes relatively light conditions based on proximal denoisers and introduce a quasi-Newton step to greatly accelerate convergence. By specially parameterizing the deep denoiser as a gradient step, we further characterize the fixed points of the resulting quasi-Newton PnP algorithm.
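The gradient-step parameterization D = Id − ∇g mentioned at the end of the abstract can be illustrated on a toy problem. In the sketch below, g is a hand-crafted quadratic smoothness potential rather than a trained network, the fidelity is a simple least-squares term, and we check numerically that the PnP proximal-gradient iteration settles at a fixed point; all names and constants are illustrative assumptions.

```python
import numpy as np

n, lam, tau = 50, 0.5, 0.5
L = np.diff(np.eye(n), axis=0)               # forward-difference operator, (n-1) x n

def grad_g(x):
    """Gradient of the quadratic potential g(x) = 0.5*lam*||L x||^2."""
    return lam * (L.T @ (L @ x))

def denoiser(x):
    """Gradient-step denoiser D = Id - grad g."""
    return x - grad_g(x)

rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0.0, np.pi, n))
y = truth + 0.1 * rng.normal(size=n)         # noisy observation, f(x) = 0.5*||x - y||^2

x = y.copy()
for _ in range(2000):
    v = x - tau * (x - y)                    # gradient step on the fidelity term
    x = denoiser(v)                          # denoising step

# at a fixed point, x = D(x - tau * grad f(x)) holds up to numerical precision
v = x - tau * (x - y)
residual = np.linalg.norm(x - denoiser(v))
```

Because D is an explicit gradient step, the fixed-point equation can be analyzed in terms of ∇f and ∇g, which is the kind of characterization the abstract refers to; a quasi-Newton scheme would additionally precondition the gradient step to accelerate convergence.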


 
Conference: AIP 2023