Conference Agenda

Overview and details of the sessions of this conference.

 
 
Session Overview
Session
CT07: Contributed talks
Time: Thursday, 07/Sept/2023, 4:00pm - 6:00pm

Session Chair: Christian Aarset
Location: VG1.105


Presentations

Accelerating MCMC for imaging science by using an implicit Langevin algorithm

Teresa Klatzer1,3, Konstantinos Zygalakis1,3, Paul Dobson1, Marcelo Pereyra2,3, Yoann Altmann2,3

1University of Edinburgh, United Kingdom; 2Heriot-Watt University, United Kingdom; 3Maxwell Institute for Mathematical Sciences, United Kingdom

In this work, we present a highly efficient gradient-based Markov chain Monte Carlo methodology to perform Bayesian computation in imaging problems. Like previous Monte Carlo approaches, the proposed method is derived from a discretisation of a Langevin diffusion. However, instead of a conventional explicit discretisation such as Euler-Maruyama, here we use an implicit discretisation based on the theta-method. In particular, the proposed sampling algorithm requires solving an optimisation problem in each step. In the case of a log-concave posterior, this optimisation problem is strongly convex and can thus be solved efficiently, leading to effective step sizes that are significantly larger than traditional methods permit. We show that even for these large step sizes the corresponding Markov chain has low bias while also preserving the posterior variance. We demonstrate the proposed methodology on a range of problems including non-blind image deconvolution and denoising. Comparisons with state-of-the-art MCMC methods confirm that the Markov chains generated with our method exhibit significantly faster convergence and produce lower mean square estimation errors for the same computational budget.
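
As a rough illustration of the implicit step described above (not the authors' implementation), the sketch below performs one theta-method Langevin update for a generic log-concave target pi(x) proportional to exp(-U(x)): the new state solves the strongly convex problem argmin_u { theta*delta*U(u) + ||u - v||^2 / 2 }, where v collects the explicit part of the update. The potential, the inner gradient-descent solver, and all step sizes are illustrative assumptions.

```python
import numpy as np

def theta_langevin_step(x, grad_U, delta, theta, rng, inner_iters=50, lr=0.5):
    """One theta-method step for dX = -grad U(X) dt + sqrt(2) dW:
    X_{k+1} = X_k - delta*[theta*grad U(X_{k+1}) + (1-theta)*grad U(X_k)]
              + sqrt(2*delta)*xi, solved as a strongly convex minimisation."""
    xi = rng.standard_normal(x.shape)
    # Explicit part of the update (everything not involving X_{k+1}).
    v = x - (1.0 - theta) * delta * grad_U(x) + np.sqrt(2.0 * delta) * xi
    # Implicit part: X_{k+1} = argmin_u theta*delta*U(u) + 0.5*||u - v||^2,
    # i.e. a proximal step on U, solved here by a few gradient-descent iterations.
    u = v.copy()
    for _ in range(inner_iters):
        u = u - lr * (theta * delta * grad_U(u) + (u - v))
    return u

# Toy usage: sample N(0, I) in 2D, i.e. U(x) = 0.5*||x||^2, with a large step size.
rng = np.random.default_rng(0)
grad_U = lambda x: x
x, samples = np.zeros(2), []
for _ in range(5000):
    x = theta_langevin_step(x, grad_U, delta=1.0, theta=0.5, rng=rng)
    samples.append(x.copy())
print(np.cov(np.array(samples).T))  # close to the identity despite delta = 1
```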


Quasi-Monte Carlo methods for Bayesian optimal experimental design problems governed by PDEs

Vesa Kaarnioja

Free University of Berlin, Germany

The goal in Bayesian optimal experimental design is to maximize the expected information gain for the reconstruction of unknown quantities when there is a limited budget for collecting measurement data. We consider Bayesian inverse problems governed by partial differential equations. This leads to an optimization problem whose objective functional involves nested high-dimensional integrals, which we approximate using tailored rank-1 lattice quasi-Monte Carlo (QMC) rules. We show that these QMC rules achieve faster-than-Monte Carlo convergence rates. Numerical experiments are presented to assess the theoretical results.
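
As a small, generic illustration of the QMC ingredient (not the tailored construction analysed in the talk), the sketch below evaluates a randomly shifted rank-1 lattice rule on the unit cube. The generating vector is an arbitrary illustrative choice, not one constructed for the weighted function spaces arising in the expected-information-gain objective.

```python
import numpy as np

def shifted_lattice_points(N, z, rng):
    """Return the N points {(k*z/N + shift) mod 1 : k = 0..N-1} in [0,1]^d."""
    k = np.arange(N).reshape(-1, 1)
    shift = rng.random(len(z))
    return np.mod(k * np.asarray(z) / N + shift, 1.0)

def qmc_estimate(f, N, z, rng, n_shifts=16):
    """Average the lattice-rule estimate over independent random shifts;
    the spread across shifts gives a practical error indicator."""
    estimates = [f(shifted_lattice_points(N, z, rng)).mean() for _ in range(n_shifts)]
    return np.mean(estimates), np.std(estimates) / np.sqrt(n_shifts)

# Toy usage: integrate f(u) = prod_j (1 + (u_j - 0.5)/j) over [0,1]^4 (exact value 1).
rng = np.random.default_rng(1)
z = [1, 182667, 469891, 498753]   # illustrative generating vector, not problem-tailored
f = lambda U: np.prod(1.0 + (U - 0.5) / np.arange(1, U.shape[1] + 1), axis=1)
mean, err = qmc_estimate(f, N=2**13, z=z, rng=rng)
print(mean, err)
```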


Maximum marginal likelihood estimation of regularisation parameters in Plug & Play Bayesian estimation: Application to non-blind and semi-blind image deconvolution

Charlesquin Kemajou Mbakam1, Marcelo Pereyra2, Jean-Francois Giovannelli3

1Heriot-Watt University, United Kingdom; 2Heriot-Watt University, United Kingdom; 3Université de Bordeaux, France

Bayesian Plug & Play (PnP) priors are widely acknowledged as a powerful framework for solving a variety of inverse problems in imaging. This Bayesian PnP framework has made tremendous advances in recent years, resulting in state-of-the-art methods. Although PnP methods have been distinguished by their ability to regularize Bayesian inverse problems through a denoising algorithm, setting the amount of regularity enforced by the prior, determined by the noise level parameter of the denoiser, has been an issue for several reasons. This talk aims to present an empirical Bayesian extension of an existing Plug & Play (PnP) Bayesian inference method. The main novelty of this work is that we estimate the regularisation parameter directly from the observed data by maximum marginal likelihood estimation (MMLE). However, noticing that the MMLE problem is computationally and analytically intractable, we incorporate a Markov kernel within a stochastic approximation proximal gradient scheme to address this difficulty. The resulting method calibrates a regularisation parameter by MMLE while generating samples asymptotically distributed according to the empirical Bayes (pseudo-) posterior distribution of interest. Additionally, the proposed method can estimate other unknown parameters of the model using MMLE; such as the noise level of the observation model, and the parameters of the forward operator simultaneously. The proposed method has been demonstrated with a range of non-blind and semi-blind image deconvolution problems, as well as compared to state-of-the-art methods.



Choosing observations to mitigate model error in Bayesian inverse problems

Nada Cvetkovic1, Han Cheng Lie2, Harshit Bansal1, Karen Veroy-Grepl1

1Eindhoven University of Technology, The Netherlands; 2Universität Potsdam, Germany

In inverse problems, one often assumes a model for how the data is generated from the underlying parameter of interest. In experimental design, the goal is to choose observations to reduce uncertainty in the parameter. When the true model is unknown or expensive, an approximate model is used that has nonzero 'model error' with respect to the true data-generating model. Model error can lead to biased parameter estimates. If the bias is large, uncertainty reduction around the estimate is undesirable. This raises the need for experimental design that takes model error into account. We present a framework for model-error-aware experimental design in Bayesian inverse problems. Our framework is based on Lipschitz stability results for the posterior with respect to model perturbations. We use our framework to show how one can combine experimental design with models of the model error in order to improve the results of inference.
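
The motivating phenomenon, uncertainty shrinking around a biased estimate, is easy to see in a linear-Gaussian toy problem. The sketch below only illustrates that effect under assumed models; it is not the stability-based design framework of the talk.

```python
import numpy as np

rng = np.random.default_rng(3)
d, m, noise = 5, 40, 0.05
theta_true = rng.standard_normal(d)

G_true = rng.standard_normal((m, d))                  # "true" forward model
G_approx = G_true + 0.3 * rng.standard_normal((m, d)) # approximate model with model error

y = G_true @ theta_true + noise * rng.standard_normal(m)

def gaussian_posterior(G, y, noise, prior_var=1.0):
    """Posterior mean/covariance for y = G theta + e, theta ~ N(0, prior_var I)."""
    P = G.T @ G / noise**2 + np.eye(G.shape[1]) / prior_var
    cov = np.linalg.inv(P)
    return cov @ G.T @ y / noise**2, cov

for n_obs in (5, 10, 40):   # designs: use the first n_obs observations
    mean, cov = gaussian_posterior(G_approx[:n_obs], y[:n_obs], noise)
    print(n_obs, "obs | bias:", np.linalg.norm(mean - theta_true).round(3),
          "| posterior std:", np.sqrt(np.trace(cov)).round(3))
```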


 
Conference: AIP 2023