Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session
MS17: Machine Learning Techniques for Bayesian Inverse Problems
Time:
Tuesday, 05/Sept/2023:
4:00pm - 6:00pm

Session Chair: Angelina Senchukova
Location: VG1.104


Presentations

Stochastic Normalizing Flows for Inverse Problems via Markov Chains

Paul Hagemann, Johannes Hertrich, Gabriele Steidl

TU Berlin, Germany

Normalizing flows aim to learn the underlying probability distribution of given samples. To this end, we train a diffeomorphism which pushes forward a simple latent distribution to the data distribution. However, recent results show that normalizing flows suffer from topological constraints and limited expressiveness. Stochastic normalizing flows can overcome these topological constraints and improve the expressiveness of normalizing flow architectures by combining deterministic, learnable flow transformations with stochastic sampling methods. We consider stochastic normalizing flows from a Markov chain point of view. In particular, we replace transition densities by general Markov kernels and establish proofs via Radon-Nikodym derivatives, which allows us to incorporate distributions without densities in a sound way. Furthermore, we generalize the results to sampling from posterior distributions, as required in inverse problems. The performance of the proposed conditional stochastic normalizing flow is demonstrated by numerical examples.
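A minimal numerical sketch of the idea, not the authors' architecture: a fixed affine map stands in for the learned deterministic layer, and a few unadjusted Langevin steps serve as the stochastic layer (a Markov kernel leaving the target invariant). The bimodal target, step size, and layer choices below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Bimodal target: equal-weight mixture of N(-3, 1) and N(3, 1). Its two
# well-separated modes are the kind of topological mismatch with a
# unimodal Gaussian latent that purely deterministic flows struggle with.
def log_target(x):
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

def grad_log_target(x, h=1e-5):
    # numerical gradient, to keep the sketch short
    return (log_target(x + h) - log_target(x - h)) / (2.0 * h)

# Deterministic layer: a hand-picked affine map pushing the latent forward.
# (In a trained stochastic normalizing flow this would be a learned
# diffeomorphism.)
z = rng.normal(size=5000)
x = 2.0 * z

# Stochastic layer: unadjusted Langevin steps, i.e. a Markov kernel whose
# invariant distribution is the target. Interleaving such kernels with
# deterministic layers is the Markov-chain view of the talk.
tau = 0.05
for _ in range(200):
    x = x + tau * grad_log_target(x) + np.sqrt(2.0 * tau) * rng.normal(size=x.size)
```

After the stochastic layer, the samples populate both modes near ±3, which the deterministic affine layer alone could not achieve from a single Gaussian.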


Bayesian computation with Plug & Play priors for inverse problems in imaging

Remi Laumont1, Valentin De Bortoli2,6, Andres Almansa3, Julie Delon3,4, Alain Durmus5,7, Marcelo Pereyra8,9

1DTU, Denmark; 2Center for Science of Data, ENS Ulm; 3Universite de Paris, MAP5 UMR 8145; 4Institut Universitaire de France; 5CMAP, Ecole Polytechnique; 6CNRS; 7Institut Polytechnique de Paris; 8Maxwell Institute for Mathematical Sciences; 9School of Mathematical and Computer Sciences, Heriot-Watt University

This presentation is devoted to the study of Plug & Play (PnP) methods applied to inverse problems encountered in image restoration. Since the work of Venkatakrishnan et al. in 2013 [1], PnP methods have often been applied to image restoration in a Bayesian context. These methods aim at computing Minimum Mean Square Error (MMSE) or Maximum A Posteriori (MAP) estimators for inverse problems in imaging by combining an explicit likelihood with an implicit prior defined by a denoising algorithm. PnP methods in the literature differ mainly in the iterative scheme used for optimization or sampling. For the optimization algorithms, recent works guarantee convergence to a fixed point of a certain operator, but this fixed point is not necessarily the MAP estimate. For the sampling algorithms, there has been no proof of convergence. Moreover, important open questions remain concerning the correct definition of the underlying Bayesian models and of the computed estimators, as well as their regularity properties, which are necessary to ensure the stability of the numerical schemes. The aim of this work is to develop simple but efficient restoration methods while answering some of these questions. The existence and nature of MAP and MMSE estimators for PnP priors is therefore a first line of study. Three methods with convergence guarantees are then presented: PnP-SGD for MAP estimation, and PnP-ULA and PPnP-ULA for sampling. Particular attention is given to denoisers encoded by deep neural networks. The efficiency of these methods is demonstrated on classical image restoration problems such as denoising, deblurring and interpolation. Beyond estimating the MMSE, sampling makes it possible to quantify uncertainties, which is crucial in domains such as biomedical imaging. [2] and [3] are the papers related to this talk.

[1] S. V. Venkatakrishnan, C. A. Bouman, B. Wohlberg. Plug-and-play priors for model based reconstruction, IEEE Global Conference on Signal and Information Processing, 2013. DOI: 10.1109/GlobalSIP.2013.6737048.

[2] R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra. Bayesian imaging using Plug & Play priors: when Langevin meets Tweedie, SIAM Journal on Imaging Sciences 15(2): 701-737, 2022.

[3] R. Laumont, V. De Bortoli, A. Almansa, J. Delon, A. Durmus, M. Pereyra. On Maximum a Posteriori Estimation with Plug & Play Priors and Stochastic Gradient Descent, Journal of Mathematical Imaging and Vision 65: 140–163, 2023.
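As a rough illustration of a PnP-ULA-style sampler on a toy 1-D denoising problem, the sketch below runs an unadjusted Langevin chain whose drift combines an explicit Gaussian likelihood gradient with an implicit prior score obtained from a denoiser. A simple moving-average filter stands in for the deep denoiser, and all parameter values are illustrative assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: a piecewise-constant signal observed in Gaussian noise.
n = 64
x_true = np.where(np.arange(n) < n // 2, 0.0, 1.0)
sigma = 0.3
y = x_true + sigma * rng.normal(size=n)

# Stand-in "denoiser": a moving average. In PnP-ULA this would be a deep
# denoiser D_eps; Tweedie's formula relates (D_eps(x) - x) / eps^2 to the
# score of a smoothed prior.
def denoise(x, width=5):
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="same")

# PnP-ULA Markov chain:
#   x_{k+1} = x_k + delta * [grad log p(y|x_k) + (D(x_k) - x_k) / eps^2]
#             + sqrt(2 * delta) * xi_k,   xi_k ~ N(0, I)
delta, eps2 = 0.005, sigma ** 2
x = y.copy()
samples = []
for k in range(2000):
    grad_lik = (y - x) / sigma ** 2          # explicit likelihood score
    prior_score = (denoise(x) - x) / eps2    # implicit prior via denoiser
    x = x + delta * (grad_lik + prior_score) + np.sqrt(2 * delta) * rng.normal(size=n)
    if k >= 500:                             # discard burn-in
        samples.append(x.copy())

x_mmse = np.mean(samples, axis=0)            # MMSE estimate = posterior mean
```

Averaging the chain gives the MMSE estimate, and the spread of the retained samples gives pointwise uncertainty, the quantification the abstract highlights.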


Edge-preserving inversion with heavy-tailed Bayesian neural network priors

Angelina Senchukova1, Felipe Uribe1, Jana de Wiljes2, Lassi Roininen1

1LUT University, Finland; 2University of Potsdam, Germany

We study Bayesian inverse problems where the unknown target function is piecewise constant. Priors based on neural networks with heavy-tailed weights and biases have been employed because they are discretization-invariant and able to capture discontinuities. We develop neural network priors whose parameters are drawn from Student's t distributions. The idea is to parameterize the unknown function with a neural network, which yields a finite-dimensional inference problem: finding the posterior distribution of the weights and biases of the network representation. The resulting posterior is, however, high-dimensional and multimodal, which makes it difficult to characterize with traditional sampling algorithms. We therefore explore data assimilation techniques to sample the posterior distribution more effectively. As a numerical example, we consider a simple signal deconvolution problem to illustrate the properties of the prior.
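A minimal sketch of such a prior, under illustrative assumptions rather than the authors' exact parameterization: weights and biases of a one-hidden-layer network are drawn from a Student's t distribution, and each draw defines a random function on a grid. With few degrees of freedom the heavy tails produce occasional very large weights, giving near-step activations suited to piecewise-constant targets.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_nn_prior(t_grid, width=100, df=1.0):
    """Draw one random function from a one-hidden-layer network whose
    weights and biases follow a Student's t distribution with `df`
    degrees of freedom (df=1 is the Cauchy case, the heaviest tail).
    Illustrative sketch of a heavy-tailed neural network prior."""
    w = rng.standard_t(df, size=width)          # input weights
    b = rng.standard_t(df, size=width)          # input biases
    v = rng.standard_t(df, size=width) / width  # output weights, scaled
    hidden = np.tanh(np.outer(t_grid, w) + b)   # shape (n_points, width)
    return hidden @ v

# Evaluate a few independent prior draws on a grid over [0, 1].
t = np.linspace(0.0, 1.0, 200)
draws = np.stack([sample_nn_prior(t) for _ in range(5)])
```

Because a Cauchy-distributed weight is occasionally huge, the corresponding tanh unit saturates almost everywhere and contributes a near-discontinuous jump, which is the edge-preserving behavior the prior is designed for.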


Conference: AIP 2023