Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session
MS33 2: Quantifying uncertainty for learned Bayesian models
Time:
Thursday, 07/Sept/2023:
1:30pm - 3:30pm

Session Chair: Marta Malgorzata Betcke
Session Chair: Martin Holler
Location: VG1.105


Presentations

Calibration-Based Probabilistic Error Bounds for Inverse Problems in Imaging

Martin Zach1, Andreas Habring2, Martin Holler2, Dominik Narnhofer1, Thomas Pock1

1Graz University of Technology, Austria; 2Universität Graz, Austria

Traditional hand-crafted regularizers, such as the total variation, have a long history in the context of inverse problems. Typically, they come with a geometrical interpretation, and experts are familiar with (artifacts in) the resulting reconstructions. Modern, learned regularizers can hardly be interpreted in this way; it is therefore important to supply uncertainty maps or error bounds alongside any reconstruction. In this talk, we give an overview of calibration-based methods that provide 1) pixel-wise probabilistic error bounds or 2) probabilistic confidence with respect to entire structures in the reconstruction. We show results on the clinically highly relevant problem of undersampled magnetic resonance reconstruction.


Posterior-Variance-Based Error Quantification for Inverse Problems in Imaging

Dominik Narnhofer1, Andreas Habring2, Martin Holler2, Thomas Pock1

1Graz University of Technology; 2University of Graz

We present a method for obtaining pixel-wise error bounds in Bayesian regularization of inverse imaging problems. The proposed approach combines estimates of the posterior variance with techniques from conformal prediction to obtain error bounds with coverage guarantees, without making any assumption on the underlying data distribution. It is generally applicable to Bayesian regularization approaches, independently of, e.g., the concrete choice of the prior. Furthermore, the coverage guarantees can also be obtained when only approximate sampling from the posterior is possible. In particular, this allows the proposed framework to incorporate any learned prior in a black-box manner.

Such guaranteed coverage without assumptions on the underlying distributions is only achievable because the magnitude of the error bounds is, in general, unknown in advance. Nevertheless, as we confirm in experiments with multiple regularization approaches, the obtained error bounds are rather tight.

A preprint of this work is available at https://arxiv.org/abs/2212.12499
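The calibration idea behind such posterior-variance-based bounds can be illustrated with a minimal split-conformal sketch. Everything below is an illustrative toy (synthetic Gaussian "posteriors", pooled per-pixel scores), not the paper's actual implementation:

```python
# Hedged sketch: split-conformal error bounds with the absolute error
# normalised by an estimated posterior standard deviation. All data and
# names are synthetic stand-ins for a real calibration set.
import numpy as np

rng = np.random.default_rng(0)

# Toy calibration set: ground truth, per-pixel posterior std estimates,
# and reconstructions (posterior means perturbed by posterior-scale noise).
n, pixels = 500, 64
truth = rng.normal(size=(n, pixels))
post_std = 0.5 + rng.random((n, pixels))
recon = truth + post_std * rng.normal(size=(n, pixels))

# Conformity score: absolute error normalised by the posterior std.
scores = np.abs(recon - truth) / post_std

alpha = 0.1  # target miscoverage
# Split-conformal quantile with finite-sample correction
# (pixels pooled here purely for simplicity).
k = int(np.ceil((scores.size + 1) * (1 - alpha)))
q = np.sort(scores.ravel())[min(k, scores.size) - 1]

# Resulting bound: |truth - recon| <= q * post_std with prob >= 1 - alpha,
# checked on a fresh test image.
new_truth = rng.normal(size=pixels)
new_std = 0.5 + rng.random(pixels)
new_recon = new_truth + new_std * rng.normal(size=pixels)
covered = np.abs(new_recon - new_truth) <= q * new_std
print(f"quantile q = {q:.3f}, empirical coverage = {covered.mean():.2f}")
```

Note that the bound's magnitude (via `q`) is only known after calibration, which mirrors the point made above: coverage is guaranteed, but tightness is an empirical matter.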


How to sample from a posterior like you sample from a prior

Babak Maboudi Afkham1, Matthias Chung2, Julianne Chung2

1DTU, Denmark; 2Emory University, USA

The importance of quantifying uncertainties is rising in many applications of inverse problems. One way to estimate uncertainties is to explore the posterior distribution, e.g. in the context of Bayesian inverse problems. Standard approaches to exploring the posterior, such as Markov chain Monte Carlo (MCMC) methods, are often inefficient for large-scale and non-linear inverse problems.

In this work, we propose a method that exploits data to accelerate sampling from posterior distributions for goal-oriented inverse problems. We use a variational encoder-decoder (VED) network to approximate the mapping from a measurement vector to the posterior distribution. The output of the VED network approximates the true distribution, and its moments can be estimated, e.g., via Monte Carlo methods. This enables real-time uncertainty quantification. The proposed method is a promising approach for large-scale inverse problems.
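The core idea, sampling a posterior as cheaply as a prior, can be sketched as follows. The affine "network" stands in for a trained VED; all weights and dimensions are illustrative assumptions, not taken from the talk:

```python
# Hedged sketch: a stand-in "VED head" maps a measurement y directly to
# the parameters (mean, std) of an approximate Gaussian posterior, from
# which samples are drawn with a single forward pass plus cheap Gaussian
# noise. Moments are then estimated by Monte Carlo.
import numpy as np

rng = np.random.default_rng(1)
d_y, d_x = 8, 4  # measurement and unknown dimensions (toy sizes)

# Illustrative stand-in for the trained decoder head: y -> (mu, log_var).
W_mu = rng.normal(size=(d_x, d_y)) * 0.1
W_lv = rng.normal(size=(d_x, d_y)) * 0.01

def approx_posterior(y):
    """Map a measurement to mean and std of the approximate posterior."""
    mu = W_mu @ y
    std = np.exp(0.5 * (W_lv @ y))
    return mu, std

# Real-time uncertainty quantification: one forward pass, then sampling
# via the reparameterisation trick, then Monte Carlo moment estimates.
y = rng.normal(size=d_y)
mu, std = approx_posterior(y)
samples = mu + std * rng.normal(size=(10_000, d_x))

mc_mean = samples.mean(axis=0)
mc_std = samples.std(axis=0)
print("MC mean:", np.round(mc_mean, 3))
print("MC std: ", np.round(mc_std, 3))
```

The expensive part (training the network that produces `W_mu`, `W_lv`) happens offline; at measurement time, sampling costs no more than drawing from a prior.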


Uncertainty Quantification for Computed Tomography via the Linearised Deep Image Prior

Riccardo Barbano

University College London, United Kingdom

Existing deep-learning-based tomographic image reconstruction methods do not provide accurate estimates of reconstruction uncertainty, hindering their real-world deployment. In this talk we present a method, termed the linearised deep image prior (DIP), that estimates the uncertainty associated with reconstructions produced by the DIP with total variation (TV) regularisation. We discuss how to endow the DIP with conjugate Gaussian-linear model type error bars computed from a local linearisation of the neural network around its optimised parameters. This approach provides pixel-wise uncertainty estimates and a marginal likelihood objective for hyperparameter optimisation. Throughout the talk we demonstrate the method on synthetic data and real-measured high-resolution 2D $\mu$CT data, and show that it provides superior calibration of uncertainty estimates relative to previous probabilistic formulations of the DIP.
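The linearisation step can be sketched in miniature: fit parameters, linearise the network output around them, and push a Gaussian over the parameters through the Jacobian to get pixel-wise error bars. The tiny `tanh` "network" and the isotropic parameter covariance below are illustrative placeholders, not the talk's actual model:

```python
# Hedged sketch: pixel-wise error bars from a local linearisation of a
# network around its optimised parameters theta*. With f linearised as
# f(theta) ~ f(theta*) + J (theta - theta*), a Gaussian over theta gives
# an output covariance J Sigma J^T.
import numpy as np

rng = np.random.default_rng(2)
n_params, n_pix = 6, 5
theta_star = rng.normal(size=n_params)  # stand-in for optimised parameters
A = rng.normal(size=(n_pix, n_params))

def f(theta):
    """Toy network output (the 'reconstruction') as a function of theta."""
    return np.tanh(A @ theta)

# Jacobian of f at theta* by central finite differences.
eps = 1e-5
J = np.zeros((n_pix, n_params))
for j in range(n_params):
    e = np.zeros(n_params)
    e[j] = eps
    J[:, j] = (f(theta_star + e) - f(theta_star - e)) / (2 * eps)

# Placeholder parameter covariance theta ~ N(theta*, sigma2 * I); in the
# talk's setting this would come from the Gaussian-linear model instead.
sigma2 = 0.1
cov_pix = sigma2 * J @ J.T             # output covariance under linearisation
pixel_std = np.sqrt(np.diag(cov_pix))  # pixel-wise error bars
print("pixel-wise std:", np.round(pixel_std, 3))
```

In the full method the same linearisation also yields a tractable marginal likelihood, which is what enables the hyperparameter optimisation mentioned above.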


 
Conference: AIP 2023
Conference Software: ConfTool Pro 2.8.101+TC
© 2001–2024 by Dr. H. Weinreich, Hamburg, Germany