Conference Agenda

Overview and details of the sessions of this conference.

 
 
Session Overview
Session
MS34 1: Learned reconstructions for nonlinear inverse problems
Time:
Monday, 04/Sept/2023:
1:30pm - 3:30pm

Session Chair: Simon Robert Arridge
Session Chair: Andreas Selmar Hauptmann
Location: VG3.103


Presentations

Continuous generative models for nonlinear inverse problems

Matteo Santacesaria1, Giovanni S. Alberti1, Johannes Hertrich2, Silvia Sciutto1

1University of Genoa, Italy; 2Technische Universität Berlin, Germany

Generative models are a large class of deep learning architectures, trained to describe a subset of a high-dimensional space with a small number of parameters. Popular models include variational autoencoders, generative adversarial networks, normalizing flows and, more recently, score-based diffusion models. In the context of inverse problems, generative models can be used to model prior information on the unknown with a higher level of accuracy than classical regularization methods.

In this talk we will present a new data-driven approach to solving inverse problems based on generative models. Taking inspiration from well-known convolutional architectures, we construct and explicitly characterize a class of injective generative models defined on infinite-dimensional function spaces. The construction is based on wavelet multiresolution analysis: one of the key theoretical novelties is the generalization of the strided convolution between discrete signals to an infinite-dimensional setting. After an offline training of the generative model, the proposed reconstruction method consists of an iterative scheme in the low-dimensional latent space. The main advantages are the faster iterations and the reduced ill-posedness, which we show with new Lipschitz stability estimates. We also present numerical simulations validating the theoretical findings for linear and nonlinear inverse problems such as electrical impedance tomography.
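The latent-space iterative scheme described in the abstract can be illustrated with a toy example. This is a minimal numpy sketch, not the authors' method: a fixed random linear map G stands in for the trained injective generative model, A is a toy linear forward operator, and the reconstruction runs gradient descent on the data-fit functional over the low-dimensional latent variable z.

```python
import numpy as np

# Toy stand-ins: in the talk, G is a trained injective generative model
# on a function space and A may be nonlinear; here both are linear maps.
rng = np.random.default_rng(0)
n_latent, n_image, n_meas = 4, 32, 16
G = rng.standard_normal((n_image, n_latent))   # "generator": latent -> image
A = rng.standard_normal((n_meas, n_image))     # forward operator

z_true = rng.standard_normal(n_latent)
y = A @ G @ z_true                             # noiseless measurement

# Gradient descent on J(z) = 0.5 * ||A G z - y||^2, iterated in the
# low-dimensional latent space rather than in image space.
M = A @ G
step = 1.0 / np.linalg.norm(M, 2) ** 2         # ensures convergence
z = np.zeros(n_latent)
for _ in range(2000):
    z -= step * M.T @ (M @ z - y)

x_rec = G @ z                                  # reconstructed image
print(np.allclose(z, z_true, atol=1e-6))
```

Because the iteration runs over 4 latent variables instead of 32 image pixels, each step is cheap, and the composition with an injective G mitigates the ill-posedness of inverting A alone.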


Data-driven quantitative photoacoustic imaging

Janek Gröhl

University of Cambridge, United Kingdom

Photoacoustic imaging faces the challenge of accurately quantifying measurements in order to reconstruct chromophore concentrations and thus improve patient outcomes in clinical applications. Proposed approaches to solve the quantification problem are often limited in scope or only applicable to simulated data. We use a collection of well-characterised imaging targets (phantoms), as well as simulated data, to enable supervised training and validation of quantification methods, and train a U-Net on the resulting dataset. Our experiments demonstrate that phantoms can serve as reliable calibration objects and that deep learning methods can generalize to estimate the optical properties of previously unseen test images. Application of the trained model to a blood-flow phantom and a mouse model highlights the strengths and weaknesses of the proposed approach.
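The supervised calibration idea can be sketched in miniature. This is a hedged illustration, not the talk's pipeline: a toy scalar forward model and a polynomial regression stand in for the photoacoustic physics and the U-Net, but the workflow is the same — train on phantoms with known absorption, then predict absorption for unseen measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_pa_signal(mu_a):
    # Toy forward model (an assumption, not the real physics):
    # photoacoustic amplitude ~ fluence * absorption, with the fluence
    # decaying as absorption increases.
    fluence = np.exp(-0.2 * mu_a)
    return fluence * mu_a

# "Phantoms": training samples with known absorption coefficients mu_a.
mu_train = rng.uniform(0.1, 2.0, size=200)
p_train = simulate_pa_signal(mu_train)

# Fit signal -> absorption; a low-degree polynomial stands in for the U-Net.
coeffs = np.polyfit(p_train, mu_train, deg=5)

# Apply the learned map to a previously unseen test "phantom".
mu_test = rng.uniform(0.2, 1.8, size=50)
p_test = simulate_pa_signal(mu_test)
mu_hat = np.polyval(coeffs, p_test)
print(float(np.max(np.abs(mu_hat - mu_test))))
```

The key point mirrored here is that quantification is a learned inversion of the signal-generation process, calibrated on targets whose optical properties are known, then evaluated on held-out data.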


Mapping properties of neural networks and inverse problems

Matti Lassas1, Michael Puthawala2, Ivan Dokmanić3, Maarten de Hoop4

1University of Helsinki, Finland; 2South Dakota State University, USA; 3University of Basel, Switzerland; 4Rice University, USA

We will consider mapping properties of neural networks, in particular the injectivity of neural networks, their universal approximation properties, and the properties that the ranges of neural networks need to have. We also study the approximation of probability measures using neural networks composed of invertible flows and injective layers, and applications of these results to inverse problems.


Data-driven regularization theory of invertible ResNets for solving inverse problems

Judith Nickel, Clemens Arndt, Tobias Kluth, Sören Dittmer, Alexander Denker, Meira Iske, Nick Heilenkötter, Peter Maass

University of Bremen, Germany

Data-driven solution techniques for inverse problems, typically based on specific learning strategies, exhibit remarkable performance in image reconstruction tasks. These learning-based reconstruction strategies often follow a two-step scheme. First, one uses a given dataset to train the reconstruction scheme, which one often parametrizes via a neural network. Second, the reconstruction scheme is applied to a new measurement to obtain a reconstruction. We follow these steps but specifically parametrize the reconstruction scheme with invertible residual networks (iResNets). We demonstrate that the invertibility opens the door to new investigations into the influence of the training and the architecture on the resulting reconstruction scheme. To be more precise, we analyze the effect of different iResNet architectures, loss functions, and prior distributions on the trained network. The investigations reveal a formal link to the regularization theory of linear inverse problems for shallow network architectures and connections to MAP estimation with Gaussian noise models. Moreover, we analytically optimize the parameters of specific classes of architectures in the context of Bayesian inversion, revealing the influence of the prior and noise distribution on the solution.
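The invertibility that the abstract builds on can be demonstrated in a few lines. This is a minimal numpy sketch of the standard iResNet mechanism (an assumption about the architecture class, not the authors' trained networks): a residual layer x + f(x) with Lip(f) < 1 is invertible, and its inverse can be computed by Banach fixed-point iteration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
W = rng.standard_normal((n, n))
W *= 0.5 / np.linalg.norm(W, 2)    # rescale so that Lip(f) <= 0.5 < 1

def f(x):
    # tanh is 1-Lipschitz, so the composition has Lip(f) <= ||W||_2 = 0.5
    return W @ np.tanh(x)

def forward(x):
    return x + f(x)                # one invertible residual block

def inverse(y, iters=60):
    # Fixed-point iteration x_{k+1} = y - f(x_k); it converges because
    # f is a contraction, so the block is provably invertible.
    x = y.copy()
    for _ in range(iters):
        x = y - f(x)
    return x

x = rng.standard_normal(n)
y = forward(x)
x_rec = inverse(y)
print(np.allclose(x, x_rec, atol=1e-8))
```

It is this explicit, analyzable inverse that makes it possible to relate the trained network to classical regularization theory, since the reconstruction map can be studied directly rather than only empirically.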


 
Conference: AIP 2023