MS05 2: Numerical meet statistical methods in inverse problems
1:30pm - 3:30pm
Session Chairs: Martin Hanke, Markus Reiß, Frank Werner
Utilising Monte Carlo method for light transport in the inverse problem of quantitative photoacoustic tomography
Tanja Tarvainen1, Niko Hänninen1, Aki Pulkkinen1, Simon Arridge2
1University of Eastern Finland, Finland; 2University College London, United Kingdom
We study the inverse problem of quantitative photoacoustic tomography in a situation where the forward operator is stochastic. In this approach, the Monte Carlo method for light transport is used to simulate light propagation in the imaged target: the method is based on random sampling of photon paths as the photons propagate through the medium. In the inverse problem, maximum a posteriori (MAP) estimates for absorption and scattering are computed, and the reliability of the estimates is evaluated. Because the forward operator is stochastic, the search direction of the optimization algorithm used to compute the MAP estimates is also stochastic. We study an adaptive approach for controlling the number of simulated photons during the iteration.
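The random sampling of photon paths described above can be illustrated with a minimal sketch. This is not the authors' code: it is a toy homogeneous-medium random walk in 2D, with the function name, parameters, and the weight-based absorption bookkeeping chosen here for illustration only.

```python
import numpy as np

def mc_absorbed_fraction(mu_a, mu_s, n_photons=200, rng=None):
    """Toy 2D Monte Carlo light transport (illustrative, not the authors' code).
    Photons start at the origin in a homogeneous, infinite medium with
    absorption coefficient mu_a and scattering coefficient mu_s.
    Returns the average fraction of each photon's energy that is absorbed."""
    rng = np.random.default_rng(0) if rng is None else rng
    mu_t = mu_a + mu_s                          # total attenuation
    absorbed = 0.0
    for _ in range(n_photons):
        pos = np.zeros(2)
        theta = rng.uniform(0.0, 2.0 * np.pi)   # isotropic source
        direction = np.array([np.cos(theta), np.sin(theta)])
        weight = 1.0
        while weight > 1e-4:                    # terminate low-weight photons
            step = -np.log(rng.uniform()) / mu_t    # sample free path length
            pos = pos + step * direction            # move to interaction site
            absorbed += weight * mu_a / mu_t        # deposit absorbed fraction
            weight *= mu_s / mu_t                   # surviving (scattered) part
            theta = rng.uniform(0.0, 2.0 * np.pi)   # isotropic scattering
            direction = np.array([np.cos(theta), np.sin(theta)])
    return absorbed / n_photons
```

In an infinite medium essentially all photon weight is eventually absorbed, so the returned fraction is close to one; in a real tomography setting one would instead tally absorbed energy per voxel, which is what forms the photoacoustic source term.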
Discretisation-adaptive regularisation via frame decompositions
University of Bonn, Germany
We consider linear inverse problems under white (non-Gaussian) noise. We introduce a discretisation scheme that makes the discrepancy principle and the heuristic discrepancy principle applicable, as both require a bounded data norm, which white noise does not provide. Choosing the discretisation dimension adaptively yields convergence without further restrictions on the operator, the distribution of the white noise, or the unknown solution. We discuss connections to Lepskii's method, apply the technique to ill-posed integral equations with noisy point evaluations, and show that here discretisation-adaptive regularisation can be used to reduce the numerical complexity. Finally, we apply the technique to methods based on frame decompositions, tailored to applications in atmospheric tomography.
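The interplay between discretisation dimension and the discrepancy principle can be sketched in a simplified form. The following is a hypothetical illustration, not the method of the talk: it works in the singular basis of a compact operator and grows the cut-off dimension until the residual falls below a multiple of the noise level; the function name and parameters are assumptions made here.

```python
import numpy as np

def discrepancy_cutoff(y, s, delta, tau=1.5):
    """Spectral cut-off tuned by the discrepancy principle (illustrative sketch).
    y: data coefficients in the singular basis, s: singular values (decreasing),
    delta: noise level, tau > 1: discrepancy-principle safety factor.
    Increases the discretisation dimension m until the residual norm
    || y - A x_m || (here: the coefficients beyond the cut-off) is <= tau*delta."""
    n = len(s)
    for m in range(1, n + 1):
        residual = np.sqrt(np.sum(y[m:] ** 2))  # residual of the rank-m solution
        if residual <= tau * delta:
            x = np.zeros(n)
            x[:m] = y[:m] / s[:m]               # invert only the kept components
            return x, m
    return y / s, n                             # fallback: full inversion
```

The point of the discretisation is visible here: the residual is computed from finitely many coefficients, so it stays finite even when the infinite-dimensional white-noise data norm does not.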
Operator Learning Meets Inverse Problems
Nicholas H Nelsen1, Maarten V de Hoop2, Nikola B Kovachki3, Andrew M Stuart1
1Caltech, USA; 2Rice University, USA; 3NVIDIA, USA
This talk introduces two connections between operator learning and inverse problems. The first involves framing the supervised learning of a linear operator between function spaces as a Bayesian inverse problem. The resulting analysis of this inverse problem establishes posterior contraction rates and generalization error bounds in the large-data limit. These results provide practical insights into how to reduce sample complexity. The second connection concerns solving inverse problems with operator learning. This work focuses on the inverse problem of electrical impedance tomography (EIT). Classical methods for EIT tend to be iterative (hence slow) or lack sufficient accuracy. Instead, a new type of neural operator is trained to directly map the data (the Neumann-to-Dirichlet boundary map, a linear operator) to the unknown parameter of the inverse problem (the conductivity, a function). Theory based on emulating the D-bar method for direct EIT shows that the EIT solution map is well approximated by the proposed architecture. Numerical evidence supports the findings in both settings.
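The first connection, learning a linear operator from input-output pairs as a Bayesian inverse problem, can be sketched in finite dimensions. This is a toy stand-in for the function-space setting of the talk: with a Gaussian prior on the operator's matrix entries and Gaussian observation noise, the posterior mean reduces to ridge-regularised least squares. The function name and parameter choices are assumptions made for illustration.

```python
import numpy as np

def posterior_mean_operator(X, Y, noise_var=0.01, prior_var=1.0):
    """Toy Bayesian estimate of a linear operator A from pairs y_i = A x_i + noise.
    X: (n, d) array of inputs, Y: (n, p) array of outputs; returns (p, d) matrix.
    With i.i.d. Gaussian prior on entries of A and Gaussian noise, the posterior
    mean is the ridge solution with ratio lam = noise_var / prior_var."""
    d = X.shape[1]
    lam = noise_var / prior_var                 # regularisation from the prior
    return Y.T @ X @ np.linalg.inv(X.T @ X + lam * np.eye(d))
```

Even this toy version shows the mechanism behind sample-complexity statements: the quality of the estimate is governed by how well the inputs excite the operator, i.e. by the spectrum of `X.T @ X` relative to the noise-to-prior ratio.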
UNLIMITED: The UNiversal Lepskii-Inspired MInimax Tuning mEthoD
In this talk we consider statistical linear inverse problems in separable Hilbert spaces, which arise in applications ranging from astronomy and medical imaging to engineering. We study (ordered) filter-based regularization methods, including spectral cut-off, Tikhonov, iterated Tikhonov, Landweber, and Showalter regularization. The proper choice of the regularization parameter is always crucial and often relies on (unknown) structural assumptions on the true signal. Aiming at a fully automatic procedure, we investigate a specific a posteriori parameter choice rule, which we call the UNiversal Lepskii-Inspired MInimax Tuning method (UNLIMITED). We show that the UNLIMITED rule leads to adaptively minimax-optimal rates over various smoothness classes in mildly and severely ill-posed problems. In particular, our results reveal that the common belief that one typically loses a log-factor with Lepskii-type methods is actually wrong! In addition, the empirical performance of UNLIMITED is examined in simulations.
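A classical Lepskii-type balancing step, of the kind that inspires the rule above, can be sketched as follows. This is not the UNLIMITED rule itself but the generic textbook comparison scheme it builds on; the function name, the constant `kappa`, and the input conventions are assumptions made here.

```python
import numpy as np

def lepskii_balance(estimates, noise_bounds, kappa=4.0):
    """Generic Lepskii-type balancing principle (illustrative sketch).
    estimates: candidate reconstructions x_0..x_M ordered so that the (unknown)
    bias increases and the stochastic error bound noise_bounds[m] decreases
    (e.g. increasing regularization parameter along the index).
    Selects the largest index m such that x_m stays within kappa*noise_bounds[k]
    of every less-regularized x_k, k < m: as long as only noise separates the
    estimates the test passes; once bias dominates, it fails."""
    m_star = 0
    for m in range(1, len(estimates)):
        ok = all(
            np.linalg.norm(estimates[m] - estimates[k]) <= kappa * noise_bounds[k]
            for k in range(m)
        )
        if not ok:
            break
        m_star = m
    return m_star
```

The rule needs only the computable distances between estimates and the known noise bounds, never the unknown bias, which is what makes such parameter choices fully automatic.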