Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Date: Thursday, 07/Sept/2023
9:00am - 9:50am  Pl 6: Plenary lecture
Location: ZHG 011
Session Chair: Fioralba Cakoni
 

Correlation-based imaging and inverse problems in helioseismology

Laurent Gizon

MPI for Solar System Research, Germany

The outer 30% of the solar interior comprises the Sun’s convection zone. There, under the influence of rotation, convective motions drive the large-scale flows that power the global dynamo. The convection is also a source of stochastic excitation for the acoustic waves that permeate the solar interior.

Measurements of the frequencies of the modes of oscillation have been used very successfully to infer, for example, internal rotation as a function of radius and unsigned latitude. Current research focuses on developing improved methods to recover the 3D sound speed and vector flows in the interior from the correlations of the acoustic wavefield measured at the surface.

In this presentation, I intend to present recent uniqueness results for the passive inverse problem [1,2], as well as linear inversions of seismic data for the meridional flow [3], and advances in helioseismic holography – an imaging technique that enables us to see active regions on the Sun’s far side [4]. I will then discuss a new and promising iterative method, which combines the computational efficiency of helioseismic holography and the quantitative nature of helioseismic tomography [5]. If time permits, I will mention the possibility of extending helioseismology to the interpretation of the recently discovered inertial modes of oscillation [6].

[1] A. D. Agaltsov, T. Hohage, R. G. Novikov. Monochromatic identities for the Green function and uniqueness results for passive imaging. SIAM J. Appl. Math. 78:2865, 2018. doi:10.1137/18M1182218

[2] A. D. Agaltsov, T. Hohage, R. G. Novikov. Global uniqueness in a passive inverse problem of helioseismology. Inverse Problems 36:055004, 2020. doi:10.1088/1361-6420/ab77d9

[3] L. Gizon et al. Meridional flow in the Sun’s convection zone is a single cell in each hemisphere. Science 368:1469, 2020. doi:10.1126/science.aaz7119

[4] D. Yang, L. Gizon, H. Barucq. Imaging individual active regions on the Sun’s far side with improved helioseismic holography. Astron. Astrophys. 669:A89, 2023. doi:10.1051/0004-6361/202244923

[5] B. Mueller et al. Quantitative passive imaging by iterated back propagation: The example of helioseismic holography. In preparation, 2023.

[6] L. Gizon et al. Solar inertial modes: Observations, identification, and diagnostic promise. Astron. Astrophys. 652:L6, 2021. doi:10.1051/0004-6361/202141462

 
9:50am - 10:40am  Pl 7: Plenary lecture
Location: ZHG 011
Session Chair: Elena Beretta
 

Always-Scattering, Non-Scattering, and Inverse Scattering

Jingni Xiao

Drexel University, United States of America

In this talk, I will present some recent progress on always-scattering, non-scattering, and their connections to inverse scattering.

We consider scattering problems when a medium is probed by incident waves and as a result scattered waves are induced. The aim of inverse scattering is to deduce information about an unknown medium by measuring the corresponding scattered waves outside the medium. Inverse scattering has applications in many fields of science and technology, of which radar is one of the most prevalent.

Non-scattering is a particular phenomenon that arises when a medium is probed but no scattered waves can be measured externally. Non-scattering impacts inverse scattering, and it has applications in invisibility where one tries to avoid detection of an object. Moreover, non-scattering is related to resonance, injectivity of the relative scattering operator, and free boundary problems. There can be situations when non-scattering never occurs for a given medium; this phenomenon is called always-scattering. The always-scattering feature has applications in inverse problems for uniquely determining the shape or other properties of a medium from scattering measurements.
 
10:40am - 11:10am  C6: Coffee Break
Location: ZHG Foyer
11:10am - 12:00pm  Pl 8: Plenary lecture
Location: ZHG 011
Session Chair: Axel Munk
 

High-dimensional non-linear Bayesian inverse problems

Richard Nickl

University of Cambridge, United Kingdom

We will review recent progress and open problems in our understanding of the statistical and computational performance of Bayesian inference algorithms in non-linear inverse regression problems arising with partial differential equation models.
 
12:00pm - 1:30pm  LB3: Lunch Break
Location: Mensa
1:30pm - 3:30pm  CT04: Contributed talks
Location: VG2.104
Session Chair: Christian Aarset
 

Weighted sparsity regularization for estimating the source term in the potential equation

Ole Løseth Elvetun, Bjørn Fredrik Nielsen

Norwegian University of Life Sciences, Norway

We investigate the possibility of using boundary measurements to recover a sparse source term $f(x)$ in the potential equation. This work is motivated by the observation that standard methods typically suggest that internal sinks and sources are located close to the boundary and hence fail to produce adequate results. That is, the large null space of the associated forward operator is not “correctly handled” by the classical regularization techniques.

Provided that weighted sparsity regularization is used, we derive criteria which ensure that several sinks ($f(x)<0$) and sources ($f(x)>0$) can be identified. Furthermore, we present two cases for which these criteria are always fulfilled: a) well-separated sources and sinks, and b) many sources or sinks located at the boundary plus one interior source/sink. Our approach is such that the linearity of the associated forward operator is preserved in the discrete formulation. The theory is therefore conveniently developed in terms of Euclidean spaces, and it can be applied to a wide range of problems. In particular, it applies to both isotropic and anisotropic cases. We present a series of numerical experiments.

This work extends the results presented at the "Symposium on Inverse Problems" in Potsdam in September 2022: the theory for the single source case is generalized to the situation with several sources and sinks, we do not employ any box constraints, and the analysis is carried out for the potential equation instead of focusing on the screened Poisson equation or the Helmholtz equation.
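As a toy illustration of the kind of weighted sparsity regularization discussed above, the following sketch minimizes $\frac{1}{2}\|Af-b\|^2+\alpha\sum_i w_i|f_i|$ by proximal gradient iteration with componentwise soft-thresholding; the random matrix, the weights and all sizes are placeholders, not the authors' discretization of the potential equation.

    import numpy as np

    def weighted_ista(A, b, w, alpha, n_iter=500):
        """Minimize 0.5*||A f - b||^2 + alpha * sum_i w_i*|f_i|
        by proximal gradient descent with weighted soft-thresholding."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
        f = np.zeros(A.shape[1])
        for _ in range(n_iter):
            z = f - step * (A.T @ (A @ f - b))   # gradient step on the data fidelity
            thr = step * alpha * w               # componentwise threshold
            f = np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)
        return f

    # Toy usage: a random matrix stands in for the discretized forward operator.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 200))
    f_true = np.zeros(200)
    f_true[[20, 120]] = [1.0, -1.0]              # one source and one sink
    f_rec = weighted_ista(A, A @ f_true, w=np.ones(200), alpha=0.05)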



Lipschitz stability for inverse source problems of waves on Lorentzian manifolds

Hiroshi Takase

Kyushu University, Japan

We consider an inverse problem of the wave equation on a Lorentzian manifold, a type of semi-Riemannian manifold. This kind of equation is obtained by linearizing the Einstein equation and is known as the equation satisfied by gravitational waves. In this talk, we prove a global Lipschitz stability for the inverse source problem of determining a source term in the equation. Sobolev spaces on manifolds, semigeodesic coordinates, and Carleman estimates, which are important tools in geometric analysis, will also be discussed.


Logarithmic stability and instability estimates for random inverse source problems

Philipp Ronald Mickan1, Thorsten Hohage1,2

1Georg-August Universität Göttingen, Germany; 2Max Planck Institute for Solar System Research, Göttingen, Germany

We study the inverse source problem of determining the strength of a random acoustic source from correlation data. More precisely, the data of the inverse problem are correlations of random time-harmonic acoustic waves measured on a surface surrounding a region of random, uncorrelated sources. Such a model is used in experimental aeroacoustics to determine the strength of sound sources [1]. Uniqueness has been established previously [1,2]. In this talk we report on logarithmic stability results and logarithmic convergence rates for Tikhonov regularisation applied to the inverse source problem, obtained by establishing a variational source condition under a Sobolev-type smoothness assumption. We also present logarithmic instability estimates based on an entropy argument. Furthermore, we show numerical experiments supporting our theoretical results.

[1] T. Hohage, H.-G. Raumer, C. Spehr. Uniqueness of an inverse source problem in experimental aeroacoustics. Inverse Problems, 36(7):075012, 2020.

[2] A. J. Devaney. The inverse problem for random sources. Journal of Mathematical Physics, 20(8):1687–1691, 1979.


Combined EEG/MEG source analysis for reconstructing the epileptogenic zone in focal epilepsy

Carsten H. Wolters1, Frank Neugebauer1, Sampsa Pursiainen2, Martin Burger3, Jörg Wellmer4, Stefan Rampp5

1Institute for Biomagnetism and Biosignalanalysis, University of Münster, Germany; 2Tampere University, Finland; 3DESY and University of Hamburg, Germany; 4Ruhr-Epileptology, Dept. of Neurology, University Hospital Knappschaftskrankenhaus Bochum, Germany; 5Department of Neurosurgery, University Hospital Erlangen, Germany

MEG and EEG source analysis is frequently used in presurgical evaluation of pharmacoresistant epilepsy patients. The localization quality depends, among other aspects, on the selected inverse and forward approaches and their respective parameter choices. In my talk, I will present new forward and inverse approaches and their application for the identification of the epileptogenic zone in focal epilepsy. The forward approaches are based on the finite element method (FEM). The inverse approaches include beamforming, hierarchical Bayesian modeling (HBM) and standard dipole scanning techniques. I will discuss advantages and disadvantages of those approaches and compare their performance in a retrospective evaluation study with focal epilepsy patients.

[1] Neugebauer, F., Antonakakis, M., Unnwongse, K., Parpaley, Y., Wellmer, J., Rampp, S., Wolters, C.H., Validating EEG, MEG and Combined MEG and EEG Beamforming for an Estimation of the Epileptogenic Zone in Focal Cortical Dysplasia. Brain Sci. 12(1):114, 2022. https://doi.org/10.3390/brainsci12010114.

[2] Aydin, Ü., Rampp, S., Wollbrink, A., Kugel, H., Cho, J.-H., Knösche, T.R.,Grova, C., Wellmer, J., Wolters, C.H., Zoomed MRI guided by combined EEG/MEG source analysis: A multimodal approach for optimizing presurgical epilepsy work-up and its application in a multi-focal epilepsy patient case study, Brain Topography, 30(4):417-433, 2017. https://doi.org/10.1007/s10548-017-0568-9.
 
1:30pm - 3:30pm  CT05: Contributed talks
Location: VG2.105
Session Chair: Tram Nguyen
 

Quaternary image decomposition with cross-correlation-based multi-parameter selection

Laura Girometti, Martin Huska, Alessandro Lanza, Serena Morigi

University of Bologna, Italy

Separating different features in images is a challenging problem, especially the separation of the textural component when the image is noisy. In the last two decades many papers have been published on image decomposition, addressing modeling and algorithmic aspects and presenting the use of image decomposition in cartooning, texture separation, denoising, soft shadow/spot light removal and structure retrieval. Given the desired properties of the image components, all the valuable contributions to this problem rely on a variational formulation which minimizes a sum of different energy norms: the total variation semi-norm, the $L^1$-norm, and, to model the oscillatory component of an image, the G-norm, its approximations by the $\operatorname{div}(L^p)$-norm and the $H^{-1}$-norm, and homogeneous Besov norms. The intrinsic difficulty with these minimization problems comes from the numerical intractability of the considered norms, from the tuning of the numerous model parameters and, above all, from the complexity of extracting noise from a textured image, given the strong similarity between these two components.

In this talk, I will present a two-stage variational model for the additive decomposition of images into piecewise constant, smooth, textured and white noise components. Then, I will discuss how the challenging separation of noise from textured images can be successfully overcome by integrating a whiteness constraint in the model, and how the selection of the regularization parameters can be performed based on a novel multi-parameter cross-correlation principle. Finally, I will present numerical results that show the potential of the proposed model for the decomposition of textured images corrupted by several kinds of additive white noise.
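To give a flavour of whiteness-based parameter selection, the following sketch computes a simple residual-whiteness measure, namely the sum of squared normalized sample autocorrelations over all nonzero lags; the FFT-based circular autocorrelation and the normalization are generic choices, not necessarily the exact cross-correlation functional of the proposed model.

    import numpy as np

    def whiteness_measure(r):
        """Sum of squared normalized autocorrelations of the residual image r
        over all nonzero lags; roughly 1 for white noise, much larger for
        correlated residuals."""
        r = r - r.mean()
        acf = np.real(np.fft.ifft2(np.abs(np.fft.fft2(r)) ** 2))
        acf /= r.size * r.var()       # normalize so that the zero lag equals 1
        acf.flat[0] = 0.0             # discard the trivial zero-lag term
        return np.sum(acf ** 2)

    rng = np.random.default_rng(1)
    print(whiteness_measure(rng.standard_normal((64, 64))))                 # ~1: white
    print(whiteness_measure(np.cumsum(rng.standard_normal((64, 64)), 1)))   # much larger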


Heuristic parameter choice from local minimum points of the quasioptimality function for the class of regularization methods

Uno Hämarik, Toomas Raus

University of Tartu, Estonia

We consider an operator equation \begin{equation*} Au=f, \quad f\in R(A),\tag{1} \end{equation*} where $A\in L(H, F)$ is a linear continuous operator between real Hilbert spaces $H$ and $F$. In general this problem is ill-posed: the range $R(A)$ may be non-closed and the kernel $N(A)$ may be non-trivial. Instead of an exact right-hand side $f_*$ we have only an approximation $f \in F$. For the regularization of problem (1) we consider the following class of regularization methods: \begin{equation*} u_r = (I- A^* A g_r (A^* A )) u_0 + g_r ( A^* A) A^* f. \end{equation*} Here $u_0$ is the initial approximation, $r$ is the regularization parameter, $I$ is the identity operator and the generating function $g_r(\lambda)$ satisfies the conditions \begin{equation*} \sup_{0\leq \lambda \leq \|A^*A\|} \left| g_r (\lambda)\right| \leq \gamma r, \quad r\geq 0, \quad \gamma>0, \end{equation*} \begin{equation*} \sup_{0\leq \lambda \leq \|A^*A\|} \lambda^p \left| 1-\lambda g_r (\lambda)\right| \leq \gamma_p r^{-p}, \quad r\geq 0, \quad 0\leq p \leq p_0, \quad \gamma_p>0. \end{equation*} Examples of methods in this class are the (iterated) Tikhonov method, the Landweber iteration, the implicit iteration method, the method of asymptotic regularization and the truncated singular value decomposition.

If the noise level of the data is unknown, a heuristic rule is needed for the choice of the regularization parameter $r$. We propose to choose $r$ from the set $L_{min}$ of the local minimum points of the quasioptimality criterion function \begin{equation*} \psi_{Q}(r)=r \|A^*(I-AA^*g_r(AA^*))^{\frac{2}{p_0}}(Au_r-f)\| \end{equation*} on the set of parameters $\Omega=\left\{r_{j}: \,r_{j}=q r_{j-1},\, j=1,2,...,M, \, q>1 \right\} $. Then the following error estimates hold:

a) \begin{equation*} \min_{r \in L_{min}}\left\|u_r-u_{*}\right\| \leq C \min_{r_0 \leq r \leq r_M} \left\{ \left\| u_r^{+}-u_{*}\right\|+\left\| u_r-u_r^{+}\right\| \right\}. \end{equation*} Here $u_{*}$ and $u_r^{+}$ are the exact and regularized solutions of equation $Au=f_*$ and the constant $C \leq c_q \ln(r_M / r_0) $ can be computed for each individual problem $Au=f$.

b) Let $u_{*}=(A^{*}A)^{p/2}v, \, \left\|v\right\| \leq \rho $. If $r_0=1, \, r_M=c \left\|f-f_{*}\right\|^{-2}, \, c=(2 \left\|u_{*}\right\|)^{2}$, then \begin{equation*} \min_{r \in L_{min}}\left\|u_r-u_{*}\right\| \leq c_{p,q} \rho^{1/(p+1)} \left| \ln \left\| f-f_{*} \right\| \right| \left\| f-f_{*} \right\|^{p/(p+1)}, 0<p \leq 2 p_0 . \end{equation*} We consider some algorithms for parameter choice from the set $L_{min}$.
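For concreteness, the following sketch evaluates a discrete variant of the quasioptimality function for Tikhonov regularization on a geometric grid and collects its local minimum points; the classical difference form $\psi_j=\|u_{\alpha_j}-u_{\alpha_{j-1}}\|$ is used as a simple stand-in for $\psi_Q$, and the grid constants are placeholders.

    import numpy as np

    def tikhonov(A, f, alpha):
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ f)

    def quasiopt_local_minima(A, f, alphas):
        """Local minimum points of a discrete quasioptimality function
        psi_j = ||u_{alpha_j} - u_{alpha_{j-1}}|| on a geometric grid."""
        u = [tikhonov(A, f, a) for a in alphas]
        psi = np.array([np.linalg.norm(u[j] - u[j - 1]) for j in range(1, len(u))])
        idx = [j for j in range(1, len(psi) - 1)
               if psi[j] <= psi[j - 1] and psi[j] <= psi[j + 1]]
        return [alphas[j + 1] for j in idx]

    # geometric grid alpha_j = q^j * alpha_0, mirroring the set Omega above
    alphas = 1e-8 * 1.5 ** np.arange(40)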


Choice of the regularization parameter in case of over- or underestimated noise level of data

Toomas Raus

University of Tartu, Estonia

We consider an operator equation \begin{equation*} Au=f_{*}, \quad f_{*}\in R(A), \end{equation*} where $A\in L(H,F)$ is a linear continuous operator between real Hilbert spaces $H$ and $F$. We assume that instead of the exact right-hand side $f_{*}$ we have only an approximation $f\in F$ with a presumed noise level $\delta$. To obtain the regularized solution we consider the Tikhonov method $u_\alpha=(\alpha I+A^{*}A)^{-1}A^{*}f$, where $\alpha>0$ is the regularization parameter.

In [1] it is shown that at least one local minimum point $m_k$ of the quasioptimality criterion function \begin{equation*} \psi_{Q}(\alpha)=\alpha \left\|du_{\alpha}/d\alpha\right\|=\alpha^{-1}\left\|A^{*}(Au_{2,\alpha}-f)\right\|, \quad u_{2,\alpha}=(\alpha I+A^{*}A)^{-1}(\alpha u_{\alpha}+A^{*}f), \end{equation*} is always a good regularization parameter. We use this fact to choose a proper regularization parameter in case of a possible over- or underestimation of the noise level.

If the actual noise level $ \left\| f - f_*\right\|$ may be smaller than $\delta$, we propose the following rule.

Rule 1. Let $c>1$ and let the parameter $\alpha(\delta)$ be chosen according to the modified discrepancy principle or the monotone error rule (see [2]). For the regularization parameter choose the smallest local minimum point $m_k \leq \alpha(\delta) $ of the quasioptimality criterion function for which \begin{equation*} \max_{\alpha,\alpha', m_k \leq \alpha' < \alpha \leq \alpha(\delta)} \frac{\psi_{Q}(\alpha')}{\psi_{Q}(\alpha)} \leq c \qquad \qquad \qquad(1) \end{equation*} holds. If no such local minimum point exists, choose $\alpha(\delta)$.

If the actual noise level can be either larger or smaller than $\delta$, we propose the following rule.

Rule 2. Let $c>1$ and let the parameter $\alpha(\delta)$ be chosen according to the balancing principle (see [2]). If there exists a local minimum point $m_{k_0} > \alpha(\delta)$ for which $\psi_{Q}(\alpha(\delta)) > c \psi_{Q}(m_{k_0})$, then choose $m_{k_0}$ as the regularization parameter. Otherwise, choose the smallest local minimum point $m_k \leq \alpha(\delta) $ for which inequality (1) holds. If no such local minimum point exists, choose $\alpha(\delta)$.
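A minimal sketch of Rule 1, assuming the values of $\psi_{Q}$ have been precomputed on an increasing grid that contains $\alpha(\delta)$ (the constant $c$ and the grid are placeholders):

    def rule1(alphas, psi, i_delta, c=1.5):
        """Rule 1 (sketch): among the local minimum points m_k <= alpha(delta)
        of psi_Q, return the smallest one satisfying condition (1);
        fall back to alpha(delta).  Here alphas is increasing,
        psi[j] = psi_Q(alphas[j]) and i_delta is the index of alpha(delta)."""
        minima = [j for j in range(1, i_delta)
                  if psi[j] <= psi[j - 1] and psi[j] <= psi[j + 1]]
        for k in minima:                        # smallest parameter first
            win = psi[k:i_delta + 1]
            # condition (1): psi(alpha')/psi(alpha) <= c whenever alpha' < alpha
            if all(win[i] <= c * min(win[i + 1:]) for i in range(len(win) - 1)):
                return alphas[k]
        return alphas[i_delta]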

[1] T. Raus, U. Hämarik. Heuristic parameter choice in Tikhonov method from minimizers of the quasi-optimality function. In: B. Hofmann, A. Leitão, J. P. Zubelli (Eds.), New Trends in Parameter Identification for Mathematical Models. Birkhäuser, 227-244, 2018.

[2] T. Raus, U. Hämarik. About the Balancing Principle for Choice of the Regularization Parameter. Numerical Functional Analysis and Optimization, 30(9-10):951-970, 2008.


Multi-Penalty TV Regularisation for Image Denoising: A Study

Kemal Raik

University of Vienna, Austria

A common method for image denoising is TV regularisation, i.e., $$ \frac{1}{2}\|k\ast x-y^\delta\|^2+\alpha\operatorname{TV}(x)\to\min_x, $$ with $k=\operatorname{id}$ and $\alpha>0$ the parameter determining the trade-off between the accuracy and computational stability of the solution. The noise level $\|y-y^\delta\|\le\delta$ is usually unknown; therefore, in this talk, I present a numerical study of the performance of several heuristic (i.e., noise-level free) parameter choice rules for total variation regularisation, both isotropic and anisotropic, with a focus on image denoising.

This is a prelude, however, to the more ominous multi-parameter choice problem [2], approached through the example of semiblind deconvolution [1], in which one only has an approximation $k^\eta$ of a blurring kernel $k$, with $\|k-k^\eta\|\le\eta$ (and $\eta$ is known, hence the expression "semi"-blind). The functional we would like to minimise is then

$$ \frac{1}{2}\|k\ast x-y^\delta\|^2+\alpha\operatorname{TV}(x)+\beta\|k-k^\eta\|^2\to\min_{x,k}. $$ To quote a famous science-fiction film: "now there are two of them" ($\alpha$ and $\beta$, that is).
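To make the structure concrete, here is a rough sketch that attacks such a two-parameter functional by alternating gradient steps on $x$ and $k$, with a smoothed TV standing in for $\operatorname{TV}$ and circular convolution via the FFT; the kernel is assumed zero-padded to the image size, and all step sizes and parameters are placeholders rather than the algorithm of [1].

    import numpy as np

    def grad_smoothed_tv(x, eps=1e-3):
        # gradient of sum sqrt(|grad x|^2 + eps^2), periodic forward differences
        dx = np.roll(x, -1, 0) - x
        dy = np.roll(x, -1, 1) - x
        mag = np.sqrt(dx ** 2 + dy ** 2 + eps ** 2)
        px, py = dx / mag, dy / mag
        div = (px - np.roll(px, 1, 0)) + (py - np.roll(py, 1, 1))
        return -div

    def conv2(a, b):   # circular 2D convolution via the FFT
        return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

    def corr2(a, b):   # adjoint of conv2(a, .): circular correlation
        return np.real(np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)))

    def semiblind_tv(y, k_eta, alpha, beta, n_outer=100, step=0.2):
        """Alternating gradient steps on x and k for
        0.5*||k*x - y||^2 + alpha*TV_eps(x) + beta*||k - k_eta||^2."""
        x, k = y.copy(), k_eta.copy()
        for _ in range(n_outer):
            r = conv2(k, x) - y
            x = x - step * (corr2(k, r) + alpha * grad_smoothed_tv(x))
            r = conv2(k, x) - y
            k = k - step * (corr2(x, r) + 2 * beta * (k - k_eta))
        return x, k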

[1] A. Buccini, M. Donatelli, R. Ramlau, A Semiblind Regularization Algorithm for Inverse Problems with Application to Image Deblurring, SIAM Journal on Scientific Computing, 2018. https://epubs.siam.org/doi/10.1137/16M1101830.

[2] M. Fornasier, V. Naumova, S. V. Pereverzyev, Parameter Choice Strategies for Multipenalty Regularization, SIAM Journal on Numerical Analysis, 2014. https://epubs.siam.org/doi/10.1137/130930248.
 
1:30pm - 3:30pm  CT06: Contributed talks
Location: VG2.107
Session Chair: Milad Karimi
 

$L^1$-data fitting for Inverse Problems with subexponentially-tailed data

Kristina Meth, Frank Werner

Julius-Maximilians-Universität Würzburg, Germany

Starting from [1] and [2], we analyze variational regularization with $L^1$ data fidelity. We investigate discrete models with regular data in the sense that the tails decay subexponentially. Error bounds are provided and numerical simulations of convergence rates are presented.

[1] T. Hohage, F. Werner, Convergence rates for inverse problems with impulsive noise, SIAM J. Numer. Anal., 52: 1203-1221, 2014.

[2] C. König, F. Werner, T. Hohage, Convergence rates for exponentially ill-posed inverse problems with impulsive noise, SIAM J. Numer. Anal., 54: 341-360, 2016.


Globally Convergent Convexification Method for Coefficient Inverse Problems for Three Equations

Mikhail Klibanov

University of North Carolina at Charlotte, United States of America

Three Coefficient Inverse Problems (CIPs) will be considered, and, respectively, three versions of the globally convergent convexification numerical method will be presented. Global convergence theorems will be outlined and numerical results will be presented. The results outlined below are the first ones for each considered CIP. These CIPs are:

1. CIP for the radiative transport equation with Euclidean propagation of particles [1].

2. CIP for the Riemannian radiative transport equation. In this case, particles propagate along geodesic lines between their scattering events [2].

3. The travel time tomography problem in the 3-d case [3], which is a CIP for the eikonal equation. The first 3-d numerical results for this CIP will be presented.

[1] M.V. Klibanov, J. Li, L.H. Nguyen, Z. Yang, Convexification numerical method for a coefficient inverse problem for the radiative transport equation, SIAM J. Imaging Sciences, 16:35-63, 2023.

[2] M.V. Klibanov, J. Li, L.H. Nguyen, V.G. Romanov, Z. Yang, Convexification numerical method for a coefficient inverse problem for the Riemannian radiative transfer equation, arxiv: 2212.12593, 2022.

[3] M.V. Klibanov, J. Li, W. Zhang, Numerical solution of the 3-D travel time tomography problem, Journal of Computational Physics, 476:111910, 2023. https://doi.org/10.1016/j.jcp.2023.111910.


On modeling and regularization of piezoelectric inverse problems using all-at-once and reduced approaches

Raphael Kuess

Humboldt-Universität zu Berlin, Germany

Piezoelectric materials are an essential component for a wide range of electrical devices. Consequently, the range of possible applications for piezoelectric materials is expansive, encompassing, for example, electronic toothbrushes and microphones, as well as ultrasound imaging and sonar devices.

In simplified form, the behaviour in the small signal range can be described by a linearly coupled PDE system for the mechanical displacement and the electrical potential, which can then be extended to a non-linear PDE system to describe piezoelectric materials in the high signal range. Since many applications require high precision, and also due to the transition to lead-free piezoceramics, a consistent and reproducible characterization of the material parameter set is very important for properly determining the material properties, as the material data provided by the manufacturers often deviate significantly from the real data and are difficult and expensive to measure.

Therefore, this talk will focus on the parameter identification problem for the piezoelectric partial differential equation based on a measured and a simulated quantity of the sample. We first derive the forward operator of this inverse problem in a general setting. Then we consider the inverse problem using regularization techniques based on all-at-once and reduced iterative methods, and further discuss the connection between the adjoint operators of the all-at-once approach and the adjoint differential equations of the reduced approach. Since several applications exhibit nonlinear material behaviour, the all-at-once approach is of particular interest, especially with respect to computational aspects. Thus, the modeling, analysis and solution of these inverse problems in these different settings by fitting simulated data is the main focus. Finally, numerical examples are provided.



Solution of the fractional order differential equation for Laplace transform of a boundary functional of a semi-Markov process using inverse Laplace transform

Elshan Ibayev

Institute of Control Systems of the Ministry of Science and Education of the Republic of Azerbaijan, Azerbaijan

Let $\left\{\xi _{k} \right\}_{k=1}^{\infty }$ and $\left\{\zeta _{k} \right\}_{k=1}^{\infty } $ be two independent sequences of random variables defined on a probability space $(\Omega ,\, F,P)$, such that the random variables in each sequence are independent, positive and identically distributed. We construct the stochastic process $X_{1} \left(t\right)$ as follows:

$X_{1} \left(t\right)=z-t+\sum _{i=0}^{k-1}\zeta _{i} $, if $\sum _{i=0}^{k-1}\xi _{i} \le t<\sum _{i=0}^{k}\xi _{i} $ ,

where $\xi _{0} =\zeta _{0} =0$. The process $X_{1} \left(t\right)$ is called the semi-Markov random walk process with negative drift and positive jumps. Let this process be delayed by a barrier at zero: \[X(t)=X_{1} \left(t\right)-\min \left\{0,\ \mathop{\inf }\limits_{0\le s\le t} X_{1} (s)\right\}.\]

Now, we introduce the random variable $\tau _{0} =\inf \left\{t:\, \, X(t)=0\right\}$, and we set $\tau _{0} =\infty $ if $X(t)>0$ for all $t$. The random variable $\tau _{0} $ is the time of the first crossing of the process $X(t)$ into the delaying barrier at zero level; $\tau _{0} $ is called the boundary functional of the semi-Markov random walk process with negative drift and positive jumps.

The aim of the present work is to determine the Laplace transform of the conditional distribution of the random variable $\tau _{0} $, given by \[L(\theta \left|z\right. )=E\left[e^{-\theta \tau _{0} } \left|X(0)=z\right. \right],\, \, \, \, \theta >0,\, \, z\ge 0.\]

Let us denote the conditional distribution of the random variable $\tau _{0} $ and the Laplace transform of this conditional distribution by \[N(t\left|z\right. )=P\left[\tau _{0} >t\left|X(0)=z\right. \right]\] and \[\tilde{N}(\theta \left|z\right. )=\int _{t=0}^{\infty }e^{-\theta t} N(t\left|z\right. )dt ,\] respectively.

Thus, we can easily obtain that \[\tilde{N}(\theta \left|z\right. )=\frac{1-L(\theta \left|z\right. )}{\theta } \] or, equivalently, \[L(\theta \left|z\right. )=1-\theta \tilde{N}(\theta \left|z\right. ).\]
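For completeness, this identity follows by writing $N(t\left|z\right. )$ as an expectation and applying Fubini's theorem (with the convention $e^{-\theta \tau _{0} }=0$ on $\{\tau _{0} =\infty \}$): \[\tilde{N}(\theta \left|z\right. )=\int _{0}^{\infty }e^{-\theta t} P\left[\tau _{0} >t\left|X(0)=z\right. \right]dt =E\left[\int _{0}^{\tau _{0} }e^{-\theta t} dt\left|X(0)=z\right. \right]=E\left[\frac{1-e^{-\theta \tau _{0} } }{\theta } \left|X(0)=z\right. \right]=\frac{1-L(\theta \left|z\right. )}{\theta } .\]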

We construct an integral equation for $\tilde{N}(\theta \left|z\right. )$. In particular, the constructed integral equation reduces to a fractional order differential equation in the class of gamma distributions. The fractional derivatives are understood in the Riemann-Liouville sense. Finally, the inverse Laplace transform of $\tilde{N}(\theta \left|z\right. )$ is obtained in the form of a threefold sum.
 
1:30pm - 3:30pm  MS05 2: Numerical meet statistical methods in inverse problems
Location: VG2.102
Session Chair: Martin Hanke
Session Chair: Markus Reiß
Session Chair: Frank Werner
 

Utilising Monte Carlo method for light transport in the inverse problem of quantitative photoacoustic tomography

Tanja Tarvainen1, Niko Hänninen1, Aki Pulkkinen1, Simon Arridge2

1University of Eastern Finland, Finland; 2University College London, United Kingdom

We study the inverse problem of quantitative photoacoustic tomography in a situation where the forward operator is stochastic. In this approach, a Monte Carlo method for light transport is used to simulate light propagation in the imaged target. The Monte Carlo method is based on random sampling of photon paths as they propagate in the medium. In the inverse problem, MAP estimates for absorption and scattering are computed, and the reliability of the estimates is evaluated. Due to the stochastic nature of the forward operator, the search direction of the optimization algorithm for computing the MAP estimates is also stochastic. An adaptive approach for controlling the number of simulated photons during the iteration is studied.


Discretisation-adaptive regularisation via frame decompositions

Tim Jahn

University of Bonn, Germany

We consider linear inverse problems under white (non-Gaussian) noise. We introduce a discretisation scheme to apply the discrepancy principle and the heuristic discrepancy principle, which require a bounded data norm. Choosing the discretisation dimension in an adaptive fashion yields convergence, without further restrictions on the operator, the distribution of the white noise or the unknown exact solution. We discuss connections to Lepski's method, apply the technique to ill-posed integral equations with noisy point evaluations, and show that discretisation-adaptive regularisation can be used here to reduce the numerical complexity. Finally, we apply the technique to methods based on the frame decomposition, tailored for applications in atmospheric tomography.


Operator Learning Meets Inverse Problems

Nicholas H Nelsen1, Maarten V de Hoop2, Nikola B Kovachki3, Andrew M Stuart1

1Caltech, USA; 2Rice University, USA; 3NVIDIA, USA

This talk introduces two connections between operator learning and inverse problems. The first involves framing the supervised learning of a linear operator between function spaces as a Bayesian inverse problem. The resulting analysis of this inverse problem establishes posterior contraction rates and generalization error bounds in the large data limit. These results provide practical insights on how to reduce sample complexity. The second connection is about solving inverse problems with operator learning. This work focuses on the inverse problem of electrical impedance tomography (EIT). Classical methods for EIT tend to be iterative (hence slow) or lack sufficient accuracy. Instead, a new type of neural operator is trained to directly map the data (the Neumann-to-Dirichlet boundary map, a linear operator) to the unknown parameter of the inverse problem (the conductivity, a function). Theory based on emulating the D-bar method for direct EIT shows that the EIT solution map is well-approximated by the proposed architecture. Numerical evidence supports the findings in both settings.


UNLIMITED: The UNiversal Lepskii-Inspired MInimax Tuning mEthoD

Housen Li1, Frank Werner2

1Georg-August-Universität Göttingen, Germany; 2Universität Würzburg

In this talk we consider statistical linear inverse problems in separable Hilbert spaces, which arise in applications ranging from astronomy through medical imaging to engineering. We study (ordered) filter-based regularization methods, including e.g. spectral cutoff, Tikhonov, iterated Tikhonov, Landweber, and Showalter. The proper choice of the regularization parameter is always crucial, and often relies on (unknown) structural assumptions on the true signal. Aiming at a fully automatic procedure, we investigate a specific a posteriori parameter choice rule, which we call the UNiversal Lepskii-Inspired MInimax Tuning method (UNLIMITED). We show that the UNLIMITED rule leads to adaptively minimax optimal rates over various smoothness function classes in mildly and severely ill-posed problems. In particular, our results reveal that the “common sense” that one typically loses a log-factor for Lepskii-type methods is actually wrong! In addition, the empirical performance of UNLIMITED is examined in simulations.
 
1:30pm - 3:30pm  MS06 2: Inverse Acoustic and Electromagnetic Scattering Theory - 30 years later
Location: VG3.103
Session Chair: Fioralba Cakoni
Session Chair: Houssem Haddar
 

Nonlinearity parameter imaging in the frequency domain

Barbara Kaltenbacher, William Rundell

Texas A&M University, United States of America

Nonlinearity parameter tomography is a technique for enhancing ultrasound imaging; it amounts to identifying the spatially varying coefficient $\eta=\eta(x)$ in the Westervelt equation $ p_{tt}-c^2\triangle p - b\triangle p_t = \eta(p^2)_{tt} + h$ in a domain $(0,T)\times\Omega$. Here $p$ is the acoustic pressure, $c$ the speed of sound, $b$ the diffusivity of sound, and $h$ the excitation. Observations consist of pressure measurements on some manifold $\Sigma$ immersed in the acoustic domain $\Omega$.

Our imaging goal is to show unique recovery when $\eta(x)$ is a finite set $\{a_i\chi(D_i)\}_i$ and where each $D_i$ is starlike with respect to its centroid.

Assuming periodic excitations of the form $h(x,t) = A e^{i\omega t}$ for some fixed frequency $\omega$, one can convert this to an infinite system of coupled linear Helmholtz equations. We will give both uniqueness and reconstruction results, and note that this work was inspired by a previous paper of one author and Rainer Kress.
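To indicate how the time-periodic problem turns into a coupled Helmholtz system, here is a sketch of the standard multiharmonic ansatz (the precise system used in the talk may differ in detail): inserting $p(x,t)=\sum_m \hat p_m(x) e^{im\omega t}$ into the Westervelt equation and matching the coefficients of $e^{im\omega t}$ gives, for each $m$, \[ -m^2\omega^2 \hat p_m - c^2\triangle \hat p_m - i b\, m\omega\, \triangle \hat p_m = -m^2\omega^2\, \eta(x) \sum_{k} \hat p_k\, \hat p_{m-k} + \hat h_m, \] an infinite system of Helmholtz-type equations coupled through the convolution of the harmonics.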


The Lippmann-Schwinger Lanczos algorithm for inverse scattering

Justin Baker4, Elena Cherkaev4, Vladimir Druskin1, Shari Moskow2, Mikhail Zaslavsky3

1WPI, United States of America; 2Drexel University, United States of America; 3Southern Methodist University, United States of America; 4University of Utah, United States of America

We combine data-driven reduced order models with the Lippmann-Schwinger integral equation to produce a direct nonlinear inversion method. The ROM is viewed as a Galerkin projection and is sparse due to Lanczos orthogonalization. Embedding into the continuous problem, a data-driven internal solution is produced. This internal solution is then used in the Lippmann-Schwinger equation, in a direct or iterative framework. The approach also allows us to process more general transfer functions, i.e., to remove the main limitation of the earlier versions of the ROM-based inversion algorithms. We describe how the generation of internal solutions simplifies in the time domain, and show how Lanczos orthogonalization in the spectral domain relates to time stepping. We give examples of its use on monostatic data, targeting synthetic aperture radar.


Analysis of topological derivative for qualitative defect imaging using elastic waves

Marc Bonnet

ENSTA Paris, France

The concept of topological derivative has proved effective as a qualitative inversion tool for wave-based identification of finite-sized objects. This approach is often based on a heuristic interpretation of the topological derivative. Its mathematical justification has however also been studied, in particular in cases where the true obstacle is small enough for asymptotic approximations of wave scattering to be applicable, and also for finite-sized objects in the scalar wave framework. This work extends our previous efforts in the latter direction to the identification of elastic inhomogeneities embedded in elastic media interrogated by elastic waves. The data used for identification, assumed to be of near-field nature (i.e. no far-field approximation is introduced), is introduced through a misfit functional $J$. The imaging functional that reveals embedded inhomogeneities then consists of the topological derivative $\mathcal{T}_J$ of $J$ (in particular, the actual minimization of $J$ is not performed, making the procedure significantly faster than standard inversion based on PDE-constrained minimization). Our main contribution consists in an analysis of $\mathcal{T}_J$ using a suitable factorization of the near fields, achievable thanks to a convenient reformulation of the volume integral equation formulation of the forward elastodynamic scattering problem established earlier. Our results include justification of both the sign heuristics for $\mathbf{z}\mapsto\mathcal{T}_J(\mathbf{z})$ (which is expected to be most negative at points $\mathbf{z}$ inside, or close to, the support of the sought flaw) and the spatial decay of $\mathcal{T}_J(\mathbf{z})$ as $\mathbf{z}$ moves away from the flaw support. This result, subject to a limitation on the strength of the inhomogeneity to be identified, provides a theoretical conditional validation of the usual heuristic interpretation of $\mathcal{T}_J$ as an imaging functional. Our findings are demonstrated on 3D computational experiments.
 
1:30pm - 3:30pm  MS10 1: Optimization in Inverse Scattering: from Acoustics to X-rays
Location: VG1.103
Session Chair: Radu Ioan Bot
Session Chair: Russell Luke
 

Numerical Linear Algebra Networks for Solving Linear Inverse Problems

Otmar Scherzer

University Vienna, Austria

We consider the solution of a possibly ill-conditioned linear operator equation, where the operator is not modeled, but given indirectly via training samples. Decoding-encoding methods are very useful for such applications and will be analyzed from the point of view of regularization theory. This analysis in particular shows that preprocessing of the image data is required, which is implemented by linear algebra networks.


Regularization by Randomization: The Case of Partially Separable Optimization

Russell Luke

University of Göttingen, Germany

We present a Markov-chain analysis of blockwise-stochastic algorithms for solving partially block-separable optimization problems. Our main contributions to the extensive literature on these methods are statements about the Markov operators and distributions behind the iterates of stochastic algorithms, and in particular the regularity of the Markov operators and the rates of convergence of the distributions of the corresponding Markov chains. This provides a detailed characterization of the moments of the sequences beyond just the expected behavior. It also serves as a case study of how randomization restores favorable properties that iterating on only partial information destroys. We demonstrate this on stochastic blockwise implementations of the forward-backward and Douglas-Rachford algorithms for nonconvex (and, as a special case, convex), nonsmooth optimization.



Fast convex optimization via closed-loop time scaling of gradient dynamics

Hedy Attouch1, Radu Ioan Bot2, Dang-Khoa Nguyen2

1Univ. Montpellier, France; 2University of Vienna, Austria

In a Hilbert setting, for convex differentiable optimization, we develop a general framework for adaptive accelerated gradient methods. They are based on damped inertial dynamics where the coefficients are designed in a closed-loop way. Specifically, the damping is a feedback control of the velocity, or of the gradient of the objective function. For this, we develop a closed-loop version of the time scaling and averaging technique introduced by the authors. We thus obtain autonomous inertial dynamics which involve vanishing viscous damping and implicit Hessian driven damping. By simply using the convergence rates for the continuous steepest descent and Jensen's inequality, without the need for further Lyapunov analysis, we show that the trajectories have several remarkable properties at once: they ensure fast convergence of values, fast convergence of the gradients towards zero, and they converge to optimal solutions. Our approach leads to parallel algorithmic results, that we study in the case of proximal algorithms. These are among the very first general results of this type obtained using autonomous dynamics.

[1] H. Attouch, R.I. Bot, D.-K. Nguyen. Fast convex optimization via time scale and averaging of the steepest descent, arXiv:2208.08260, 2022

[2] H. Attouch, R.I. Bot, D.-K. Nguyen. Fast convex optimization via closed-loop time scaling of gradient dynamics, arXiv:2301.00701, 2023


Fast Optimistic Gradient Descent Ascent method in continuous and discrete time

Radu Ioan Bot, Ernö Robert Csetnek, Dang-Khoa Nguyen

University of Vienna, Austria

In this talk we address continuous-time dynamics as well as numerical algorithms for the problem of approaching the set of zeros of a single-valued monotone and continuous operator $V$ ([1,2]). The starting point of our investigations is a second order dynamical system that combines a vanishing damping term with the time derivative of $V$ along the trajectory, which can be seen as an analogue of the Hessian-driven damping in case the operator originates from a potential. Our method exhibits fast convergence rates for the norm of the operator along the trajectory and also for the restricted gap function, which is a measure of optimality for variational inequalities. We also prove the weak convergence of the trajectory to a zero of $V$.

Temporal discretizations of the dynamical system generate implicit and explicit numerical algorithms, which can both be seen as accelerated versions of the Optimistic Gradient Descent Ascent (OGDA) method for monotone operators, and for which we prove that they share the asymptotic features of the continuous dynamics. All convergence rate statements are last-iterate convergence results. Numerical experiments indicate that the explicit numerical algorithm outperforms other methods designed to solve equations governed by monotone operators.
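For orientation, the classical OGDA update for a monotone operator $V$ with step size $\gamma>0$ reads \[ x_{k+1} = x_k - \gamma V(x_k) - \gamma\big(V(x_k)-V(x_{k-1})\big); \] the implicit and explicit schemes above can be viewed as accelerated variants of this iteration.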

[1] R. I. Bot, E. R. Csetnek, D.-K. Nguyen. Fast OGDA in continuous and discrete time, arXiv:2203.10947, 2022.

[2] R. I. Bot, D.-K. Nguyen. Fast Krasnosel'skii-Mann algorithm with a convergence rate of the fixed point iteration of $o(1/k)$, arXiv:2206.09462, 2022

 
1:30pm - 3:30pm  MS16 1: Wave propagation and quantitative tomography
Location: VG0.111
Session Chair: Leonidas Mindrinos
Session Chair: Leopold Veselka
 

Phase-contrast THz-CT for non-destructive testing

Simon Hubmer1, Ronny Ramlau1,2

1Johann Radon Institute Linz, Austria; 2Johannes Kepler University Linz, Austria

In this talk, we consider the imaging problem of THz computed tomography (THz-CT), in particular for the non-destructive testing of extruded plastic profiles. We derive a general nonlinear mathematical model describing a full THz tomography experiment, and consider several approximations connecting THz tomography with standard computerized tomography and the Radon transform. The employed models are based on geometrical optics, and contain both the THz signal amplitude and the phase. We consider several reconstruction approaches using the corresponding phase-contrast sinograms, and compare them both qualitatively and quantitatively on experimental data obtained from 3D printed plastic profiles which were scanned with a THz time-domain spectrometer in transmission geometry.


Diffraction tomography for a generalized incident beam wave

Noemi Naujoks

University of Vienna, Austria

Diffraction tomography is an inverse scattering technique used to find the material properties of an object. Here, the object is exposed to a certain form of radiation and the scattered wave is recorded. In conventional diffraction tomography, the incident wave is assumed to be a monochromatic plane wave arriving from a fixed direction of propagation. However, this plane wave excitation does not necessarily correspond to measurement setups used in practice: there, the size of the emitting device is limited, so it cannot produce plane waves. Besides, it is common to emit focused beams to achieve a better resolution in the far field. In this talk, I will present our recent results that allow diffraction tomography to be applied to these realistic illumination scenarios. We use a new forward model that incorporates individually generated incident fields. Based on this, a new reconstruction algorithm is developed.



Bias-free localizations in cryo-single molecule localization microscopy

Fabian Hinterer

Johannes Kepler University Linz, Austria

Single molecule localization microscopy (SMLM) has the potential to resolve structural details of biological samples at the nanometer length scale. Compared to room temperature experiments, SMLM performed under cryogenic temperature achieves higher photon yields and, hence, higher localization precision. However, to fully exploit the resolution it is crucial to account for the anisotropic emission characteristics of fluorescence dipole emitters with fixed orientation. In this talk, I will present recent advances along this avenue.


Uncertainty-aware blob detection in astronomical imaging

Fabian Parzer1, Prashin Jethwa1, Alina Boecker2,3, Mayte Alfaro-Cuello4,5, Otmar Scherzer1,6,7, Glenn van de Ven1

1University of Vienna, Austria; 2Max-Planck Institut für Astronomie, Germany; 3Instituto de Astrofisica de Canarias, Spain; 4Universidad Central de Chile, Chile; 5Space Telescope Science Institute, USA; 6Johann Radon Institute for Computational and Applied Mathematics, Linz, Austria; 7Christian Doppler Laboratory for Mathematical Modeling and Simulation of Next Generations of Ultrasound Devices, Vienna, Austria

Blob detection, i.e. detection of blob-like shapes in an image, is a common problem in astronomy. A difficulty arises when the image of interest has to be recovered from noisy measurements, and thus comes with uncertainties. Formulating the reconstruction of the image as a Bayesian inverse problem, we propose an uncertainty-aware version of the classic Laplacian-of-Gaussians method for blob detection. It combines ideas from scale-space theory, statistics and variational regularization to identify salient blobs in uncertain images. The proposed method is illustrated on a problem from stellar dynamics: the identification of components in a stellar distribution recovered from integrated-light spectra. This talk is based on our recent preprint [1].

[1] F. Parzer, P. Jethwa, A. Boecker, M. Alfaro-Cuello, O. Scherzer, G. van de Ven. Uncertainty-Aware Blob Detection with an Application to Integrated-Light Stellar Population Recoveries, arXiv:2208.05881, 2022.
 
1:30pm - 3:30pm  MS20 1: Recent advances in inverse problems for elliptic and hyperbolic equations
Location: VG3.104
Session Chair: Ru-Yu Lai
 

Determining a nonlinear hyperbolic system with unknown sources and nonlinearity

Yi-Hsuan Lin

National Yang Ming Chiao Tung University, Taiwan

This talk is devoted to some inverse boundary problems associated with a time-dependent semilinear hyperbolic equation, where both the nonlinearity and the sources (including initial displacement and initial velocity) are unknown. It is shown in several generic scenarios that one can uniquely determine the nonlinearity and/or the sources by using passive or active boundary observations. In order to exploit the nonlinearity and the sources simultaneously, we develop a new technique, which combines the observability of linear wave equations and an approximation property with higher order linearization for the semilinear hyperbolic equation.


Uniqueness in an inverse problem of fractional elasticity

Giovanni Covi

University of Bonn, Germany

We study an inverse problem for fractional elasticity. In analogy to the classical problem of linear elasticity, we consider the unique recovery of the Lamé parameters associated to a linear, isotropic fractional elasticity operator from fractional Dirichlet-to-Neumann data. In our analysis we make use of a fractional matrix Schrödinger equation via a generalization of the so-called Liouville reduction, a technique classically used in the study of the scalar conductivity equation. We conclude that unique recovery is possible if the Lamé parameters agree and are constant in the exterior, and their Poisson ratios agree everywhere. Our study is motivated by the significant recent activity in the field of nonlocal elasticity.

This is joint work with Prof. Maarten de Hoop and Prof. Mikko Salo.


Calderón problem for elliptic systems via complex ray transform

Mihajlo Cekic

University of Zurich, Switzerland

Let $(M, g)$ be a Riemannian manifold embedded (up to a conformal factor) into the product $\mathbb{R}^2 \times (M_0, g_0)$, let $A$ be a skew-Hermitian matrix of $1$-forms and let $Q$ be a matrix potential. In this talk, I will explain how to simultaneously recover the pair $(A, Q)$, up to gauge-equivalence, from the associated Dirichlet-to-Neumann map of the Schroedinger operator $d_A^*d_A + Q := (d + A)^* (d + A) + Q$. Techniques involve constructing complex geometric optics (CGO) solutions and analysing a complex ray transform that arises. This improves on the previously known results.


Asymptotics Applied to Small Volume Inverse Shape Problems

Isaac Harris

Purdue University, United States of America

We consider two inverse shape problems coming from diffuse optical tomography and inverse scattering. For both problems, we assume that there are small volume subregions that we wish to recover using the measured Cauchy data. We will derive an asymptotic expansion involving their respective fields. Using the asymptotic expansion, we derive a MUSIC-type algorithm for the Reciprocity Gap Functional, which we prove can recover the subregion(s) with a finite amount of Cauchy data. Numerical examples will be presented for both problems in two dimensions in the unit circle.
 
1:30pm - 3:30pm  MS24 1: Learned Regularization for Solving Inverse Problems
Location: VG1.101
Session Chair: Johannes Hertrich
Session Chair: Sebastian Neumayer
 

The Power of Patches for Training Normalizing Flows

Fabian Altekrüger1,2, Alexander Denker3, Paul Hagemann2, Johannes Hertrich2, Peter Maass3, Gabriele Steidl2

1Humboldt-University Berlin, Germany; 2Technical University Berlin, Germany; 3University Bremen, Germany

We introduce two kinds of data-driven patch priors learned from very few images: First, the Wasserstein patch prior penalizes the Wasserstein-2 distance between the patch distribution of the reconstruction and that of a possibly small reference image. Such a reference image is available for instance when working with materials’ microstructures or textures. The second regularizer learns the patch distribution using a normalizing flow. Since even a small image contains a large number of patches, this enables us to train the regularizer based on very few training images. For both regularizers, we show that they indeed induce a probability distribution, so that they can be used within a Bayesian setting. We demonstrate the performance of patch priors for MAP estimation and posterior sampling within Bayesian inverse problems. For both approaches, we observe numerically that only very few clean reference images are required to achieve high-quality results and to obtain stability with respect to small perturbations of the problem.
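To give a flavour of patch-based priors, the following sketch compares the patch distributions of two images with the sliced Wasserstein distance (random 1D projections plus sorting), a cheap stand-in for the Wasserstein-2 metric of the Wasserstein patch prior; patch size, stride and the number of projections are illustrative choices.

    import numpy as np

    def patches(img, p=6, stride=2):
        """All p-by-p patches of img with the given stride, as row vectors."""
        H, W = img.shape
        return np.array([img[i:i + p, j:j + p].ravel()
                         for i in range(0, H - p + 1, stride)
                         for j in range(0, W - p + 1, stride)])

    def sliced_w2(P, Q, n_proj=50, seed=0):
        """Sliced Wasserstein-2 distance between two equally sized patch clouds:
        average squared 1D Wasserstein distance over random directions."""
        rng = np.random.default_rng(seed)
        total = 0.0
        for _ in range(n_proj):
            theta = rng.standard_normal(P.shape[1])
            theta /= np.linalg.norm(theta)
            a, b = np.sort(P @ theta), np.sort(Q @ theta)
            total += np.mean((a - b) ** 2)       # 1D W2^2 via sorted samples
        return total / n_proj

    rng = np.random.default_rng(1)
    ref, rec = rng.random((64, 64)), rng.random((64, 64))
    print(sliced_w2(patches(ref), patches(rec)))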



Trust your source: quantifying source condition elements for variational regularisation methods

Martin Benning1,4, Tatiana Bubba2, Luca Ratti3, Danilo Riccio1

1Queen Mary University of London, United Kingdom; 2University of Bath, United Kingdom; 3University of Genoa, Italy; 4The Alan Turing Institute, United Kingdom

Source conditions are a key tool in variational regularisation to derive error estimates and convergence rates for ill-posed inverse problems. In this paper, we provide a recipe to practically compute source condition elements as the solution of convex minimisation problems that can be solved with first-order algorithms. We demonstrate the validity of our approach by testing it for two inverse problem case studies in machine learning and image processing: sparse coefficient estimation of a polynomial via LASSO regression and recovery of an image from a subset of the coefficients of its Fourier transform. We further demonstrate that the proposed approach can easily be modified to solve the learning task of identifying the optimal sampling pattern in the Fourier domain for given image and variational regularisation method, which has applications in the context of sparsity promoting reconstruction from magnetic resonance imaging data. We conclude by presenting a methodology with which data-driven regularisations with quantitative error estimates can be designed and trained.


Plug-and-Play image reconstruction is a convergent regularization method

Andrea Ebner, Markus Haltmeier

University of Innsbruck, Austria

Non-uniqueness and instability are characteristic features of image reconstruction processes. As a result, it is necessary to develop regularization methods that can be used to compute reliable approximate solutions. A regularization method provides a family of stable reconstructions that converge to an exact solution of the noise-free problem as the noise level tends to zero. The standard regularization technique is defined by variational image reconstruction, which minimizes a data discrepancy augmented by a regularizer. The actual numerical implementation makes use of iterative methods, often involving proximal mappings of the regularizer. In recent years, plug-and-play image reconstruction (PnP) has been developed as a new powerful generalization of variational methods based on replacing proximal mappings by more general image denoisers. While PnP iterations yield excellent results, neither stability nor convergence in the sense of regularization has been studied so far. In this work, we extend the idea of PnP by considering families of PnP iterations, each being accompanied by its own denoiser. As our main theoretical result, we show that such PnP reconstructions lead to stable and convergent regularization methods. This shows for the first time that PnP is mathematically equally justified for robust image reconstruction as variational methods.
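The basic PnP idea is easy to state in code: in a proximal gradient iteration for the data fidelity, the proximal map of the regularizer is replaced by an off-the-shelf denoiser. The sketch below uses a Gaussian filter as a stand-in for a learned denoiser and a self-adjoint Gaussian blur as forward operator; all operators and parameters are illustrative, not the families of denoisers studied in the talk.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def pnp_forward_backward(A, At, y, denoise, step, n_iter=100):
        """Plug-and-play proximal gradient: gradient step on the data fidelity
        0.5*||A x - y||^2, then a denoising step instead of a proximal map."""
        x = At(y)
        for _ in range(n_iter):
            x = denoise(x - step * At(A(x) - y))
        return x

    blur = lambda v: gaussian_filter(v, sigma=2.0)     # symmetric kernel, so At = A
    denoiser = lambda v: gaussian_filter(v, sigma=0.7) # stand-in for a learned denoiser
    rng = np.random.default_rng(0)
    x_true = np.zeros((64, 64)); x_true[24:40, 24:40] = 1.0
    y = blur(x_true) + 0.01 * rng.standard_normal((64, 64))
    x_rec = pnp_forward_backward(blur, blur, y, denoiser, step=1.0)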


Provably Convergent Plug-and-Play Quasi-Newton Methods

Hong Ye Tan1, Subhadip Mukherjee2, Junqi Tang3, Carola-Bibiane Schönlieb1

1University of Cambridge, United Kingdom; 2University of Bath, United Kingdom; 3University of Birmingham, United Kingdom

Plug-and-Play (PnP) methods are a class of efficient data-driven methods for solving imaging inverse problems, wherein one incorporates an off-the-shelf denoiser inside iterative optimization schemes such as proximal gradient descent and ADMM. PnP methods have been shown to yield excellent reconstruction performance and are also supported by convergence guarantees. However, existing provable PnP methods impose heavy restrictions on the denoiser (such as nonexpansiveness) or the fidelity term (such as strict convexity). In this work, we propose a provable PnP method that imposes relatively light conditions based on proximal denoisers and introduce a quasi-Newton step to greatly accelerate convergence. By specially parameterizing the deep denoiser as a gradient step, we further characterize the fixed points of the resulting quasi-Newton PnP algorithm.
 
1:30pm - 3:30pm  MS26 2: Trends and open problems in cryo electron microscopy
Location: VG3.102
Session Chair: Carlos Esteve-Yague
Session Chair: Johannes Schwab
 

High Dimensional Covariance Estimation in Cryo-EM

Marc Aurèle Gilles, Amit Singer

Princeton University, United States of America

Cryogenic electron microscopy (cryo-EM) is an imaging technique able to recover the 3D structures of proteins at near-atomic resolution. A unique characteristic of cryo-EM is the possibility of recovering the structure of flexible proteins in different conformations from a single electron microscopy image dataset. One way to estimate these conformations relies on estimating the covariance matrix of the scattering potential directly from the electron microscopy data. From that matrix, one can perform principal component analysis to recover the distribution of conformations of a protein. While theoretically attractive, this method has been constrained to low resolutions because of its high storage and computational complexity; indeed, the covariance matrix contains $N^6$ entries when images are of size $N\times N$. In this talk, we present a new estimator for the covariance matrix and show that a rank-$k$ approximation of the covariance can be computed in $O(kN^3)$ operations. Finally, we demonstrate on simulated and real datasets that we can recover the conformations of structures at high resolution.
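A back-of-the-envelope computation (with illustrative sizes and float32 storage) shows why the dense covariance is out of reach while a rank-$k$ factor is cheap:

    N, k = 128, 10
    dense_tb = (N ** 3) ** 2 * 4 / 1e12     # full N^3-by-N^3 matrix: ~17.6 TB
    factor_mb = k * N ** 3 * 4 / 1e6        # rank-k factor: ~84 MB
    print(f"dense: {dense_tb:.1f} TB, rank-{k} factor: {factor_mb:.0f} MB")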


Bayesian random tomography

Michael Habeck

Jena University Hospital, Germany

The reconstruction problem in random tomography is to reconstruct a 3D volume from 2D projection images acquired in unknown random directions. Random tomography is a common problem in imaging science and highly relevant to cryo-electron microscopy. This talk outlines a Bayesian approach to random tomography [1, 2]. At the core of the approach is a meshless representation of the 3D volume based on a Gaussian radial basis function kernel. Each Gaussian can be interpreted as a particle such that the unknown volume is represented by a cloud of particles. The particle representation allows us to speed up the computation of projection images and to represent a large variety of molecular structures accurately and efficiently. Another innovation is the use of Markov chain Monte Carlo algorithms to infer the particle positions as well as the unknown orientations. Posterior sampling is challenging due to the high dimensionality and multimodality of the posterior distribution. We tackle these challenges by using Hamiltonian Monte Carlo and a recently developed Geodesic slice sampler [3]. We demonstrate the strengths of the approach on various simulated and real datasets.

[1] P. Joubert, M. Habeck. Bayesian inference of initial models in cryo-electron microscopy using pseudo-atoms, Biophysical journal 108(5): 1165-1175, 2015.

[2] N. Vakili, M. Habeck. Bayesian Random Tomography of Particle Systems, Frontiers in Molecular Biosciences 8:658269, 2021.

[3] M. Habeck, M. Hasenpflug, S. Kodgirwar, D. Rudolf. Geodesic slice sampling on the sphere, arXiv preprint arXiv:2301.08056, 2023.



Advancements and New Questions in Analysing the Geometry of Molecular Conformations in Cryo-EM

Roy R. Lederman

Yale University, United States of America

Cryo-Electron Microscopy (cryo-EM) is an imaging technology revolutionizing structural biology. One of the great promises of cryo-EM is to study mixtures of conformations of molecules. We will discuss recent advancements in the analysis of continuous heterogeneity, i.e., the continuum of conformations in a flexible macromolecule, and some of the mathematical and technical questions arising from these recent algorithms.


Optimal transport: a promising tool for cryo-electron microscopy

Amit Moscovich

Tel Aviv University, Israel

Optimal transport is a branch of mathematics whose central problem is minimizing the cost of transporting a given source distribution to a target distribution. The Wasserstein metric is defined to be the cost of a minimizing transport plan. For mass distributions in Euclidean space, the Wasserstein metric is closely related to physical motion, making it a natural choice for many of the core problems in cryo-electron microscopy.

Historically, computational bottlenecks have limited the applicability of optimal transport for image processing and volumetric processing. However, recent advances in computational optimal transport have yielded fast approximation schemes that can be readily used for the analysis of high-resolution images and volumetric arrays.
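
As an illustration of such fast approximation schemes, here is a short sketch of Sinkhorn's algorithm for entropy-regularized optimal transport between two histograms; the regularization parameter eps and the grid below are arbitrary choices, not values from the talk.

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.05, n_iter=200):
    """Entropic optimal transport between histograms mu and nu with
    cost matrix C; returns the (approximate) optimal transport plan."""
    K = np.exp(-C / eps)
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    return u[:, None] * K * v[None, :]

# Two 1D mass distributions on a grid with squared-distance cost.
x = np.linspace(0, 1, 100)
mu = np.exp(-(x - 0.3)**2 / 0.01); mu /= mu.sum()
nu = np.exp(-(x - 0.7)**2 / 0.02); nu /= nu.sum()
C = (x[:, None] - x[None, :])**2
P = sinkhorn(mu, nu, C)
W2_eps = np.sum(P * C)   # entropic proxy for the squared Wasserstein cost
```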

In this talk, we will present the optimal transportation problem and some of its key properties. Then we will discuss several promising applications to cryo-electron microscopy, including particle picking, class averaging and continuous heterogeneity analysis.

 
1:30pm - 3:30pmMS30 2: Inverse Problems on Graphs and Machine Learning
Location: VG2.103
Session Chair: Emilia Lavie Kyllikki Blåsten
Session Chair: Matti Lassas
Session Chair: Jinpeng Lu
 

Deep Invertible Approximation of Topologically Rich Maps between Manifolds

Michael Puthawala1, Matti Lassas2, Ivan Dokmanic3, Pekka Pankka2, Maarten de Hoop4

1South Dakota State University, United States of America; 2University of Helsinki; 3University of Basel; 4Rice University

How can we design neural networks that allow for stable universal approximation of maps between topologically interesting manifolds? In this talk, we will provide the surprisingly simple answer. By exploiting the topological parallels between locally bilipschitz maps, covering spaces, and local homeomorphisms, as well as universal approximation arguments from machine learning, we find that novel networks of the form $p \circ \mathcal{E}$, where $\mathcal{E}$ is a smooth embedding and $p$ a fixed coordinate projection, are universal approximators of local diffeomorphisms between compact smooth submanifolds embedded in $\mathbb{R}^n$. We emphasize the case when the map to be learned changes topology. Further, we find that by constraining the projection $p$, multivalued inversions of our networks can be computed without sacrificing universality. As an application, we show that the question of learning a group invariant function where the group action is unknown can be naturally reduced to the question of learning local diffeomorphisms when the group action is continuous, finite, and has constant-sized orbits. In this context the novel inversion result permits us to recover orbits of the group action.


Some inverse problems on graphs with internal functionals

Fernando Guevara Vasquez, Guang Yang

University of Utah, United States of America

We consider the problem of finding the resistors in a network from knowing the power that they dissipate under loads imposed at a few terminal nodes. This data could be obtained, e.g., from thermal imaging of the network. We use a method inspired by Bal [1] to give sufficient conditions under which the linearized problem admits a unique solution. Similar results are shown for a discrete analogue of the Schrödinger equation and for the case of impedances or complex-valued conductivities.

[1] Bal, Guillaume. Hybrid inverse problems and redundant systems of partial differential equations, Inverse problems and applications 619: 15-48, 2014.
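
To make the data concrete, the following sketch computes the internal functional described above for a toy resistor network: given conductances and imposed terminal potentials, it solves Kirchhoff's equations and returns the power dissipated in each resistor. The network and values are illustrative only.

```python
import numpy as np

def edge_powers(n_nodes, edges, g, terminals, f):
    """Solve the network Kirchhoff equations with prescribed terminal
    potentials f, then return the power dissipated in each resistor,
    p_e = g_e (u_i - u_j)^2, which is the measured internal functional."""
    L = np.zeros((n_nodes, n_nodes))          # weighted graph Laplacian
    for (i, j), ge in zip(edges, g):
        L[i, i] += ge; L[j, j] += ge
        L[i, j] -= ge; L[j, i] -= ge
    interior = [i for i in range(n_nodes) if i not in terminals]
    u = np.zeros(n_nodes)
    u[terminals] = f
    # Dirichlet problem for the interior potentials: L_II u_I = -L_IT f
    A = L[np.ix_(interior, interior)]
    b = -L[np.ix_(interior, terminals)] @ f
    u[interior] = np.linalg.solve(A, b)
    return np.array([ge * (u[i] - u[j])**2 for (i, j), ge in zip(edges, g)])

# A 4-node example with terminal nodes 0 and 3.
edges = [(0, 1), (1, 2), (2, 3), (1, 3)]
p = edge_powers(4, edges, g=np.array([1.0, 2.0, 1.5, 0.5]),
                terminals=[0, 3], f=np.array([1.0, 0.0]))
```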


Imaging water supply pipes using pressure waves

Emilia Lavie Kyllikki Blåsten1, Fedi Zouari2, Moez Louati2, Mohamed S. Ghidaoui2

1LUT University, Finland; 2Hong Kong University of Science and Technology, Hong Kong

I will present a collaboration with applied mathematicians and civil engineers from the mathematical point of view. We worked on the problem of imaging water supply pipes for problem detection (is there a problem? where is the problem? how severe is the problem?). I will talk about the one-dimensional setting and also present a reconstruction algorithm for tree networks. The problem is modeled mathematically by a quantum tree graph with fluid pressure and flow, and the pipe's internal cross-sectional area as an unknown. The method is based on a simple time-reversal boundary control method originally presented by Sondhi and Gopinath for one-dimensional problems and later extended by Oksanen to higher-dimensional manifolds.



Reconstructing Interactions from Dynamics

Ivan Dokmanic, Liming Pan, Cheng Shi

University of Basel, Switzerland

Simple interactions between particles, people, or neurons give rise to astoundingly complex dynamics on the underlying interaction graphs. I will describe a class of models for dynamical systems on graphs which seems to provide an accurate description for a variety of phenomena from diverse domains. I will then show how this "deep graph dynamics prior" leads to an algorithm to reconstruct the unknown interaction graph when only the dynamics are observed. Potential applications in physics, public health, Earth science, and neuroscience are important and numerous.

 
1:30pm - 3:30pmMS32 2: Parameter identification in time dependent partial differential equations
Location: VG1.104
Session Chair: Barbara Kaltenbacher
Session Chair: William Rundell
 

On the identification of cavities in a nonlinear diffusion-reaction model arising from cardiac electrophysiology

Elena Beretta1, Andrea Aspri2, Elisa Francini3, Dario Pierotti4, Sergio Vessella3

1New York University Abu Dhabi, United Arab Emirates; 2Università degli Studi di Milano, Italy; 3Università di Firenze, Italy; 4Politecnico di Milano, Italy

Detecting ischemic regions from noninvasive (or minimally invasive) measurements is of primary importance to prevent lethal ventricular ischemic tachycardia. This is usually performed by recording the electrical activity of the heart, by means of either body surface or intracardiac measurements. Mathematical and numerical models of cardiac electrophysiology can be used to shed light on the potentialities of electrical measurements in detecting ischemia. More specifically, the goal is to combine boundary measurements of (body-surface or intracavitary) potentials and a mathematical description of the electrical activity of the heart in order to possibly identify the position, shape, and size of heart ischemias and/or infarctions. The ischemic region is a non-excitable tissue that can be modeled as an electrical insulator (cavity) and the cardiac electrical activity can be comprehensively described in terms of the monodomain model, consisting of a boundary value problem for the nonlinear reaction-diffusion monodomain system. In my talk, I will illustrate some recent results concerning the inverse problem of detecting the cavity from boundary measurements.



Identification of the electric potential of the time-fractional Schrödinger equation by boundary measurement

Éric Soccorsi

Aix-Marseille Université, France

This talk deals with the inverse problem of identifying the real valued electric potential of the time-fractional Schrödinger equation, by boundary observation of its solution. Its main purpose is to establish that the Dirichlet-to-Neumann map computed at one fixed arbitrary time uniquely determines the time-independent potential.


The Recovery of Coefficients in Wave Equations from Time-trace Data

Barbara Kaltenbacher, William Rundell

Texas A&M University, United States of America

The Westervelt equation is a common formulation used in nonlinear acoustics, and several of its coefficients are meaningful as imaging parameters of physical consequence. We look at the recovery of some of these coefficients from both an analytic and a reconstruction perspective.
 
1:30pm - 3:30pmMS33 2: Quantifying uncertainty for learned Bayesian models
Location: VG1.105
Session Chair: Marta Malgorzata Betcke
Session Chair: Martin Holler
 

Calibration-Based Probabilistic Error Bounds for Inverse Problems in Imaging

Martin Zach1, Andreas Habring2, Martin Holler2, Dominik Narnhofer1, Thomas Pock1

1Graz University of Technology, Austria; 2Universität Graz, Austria

Traditional hand-crafted regularizers, such as the total variation, have a profound history in the context of inverse problems. Typically, they are accompanied by a geometrical interpretation and experts are familiar with (artifacts in) the resulting reconstructions. Modern, learned regularizers can hardly be interpreted in this way, thus it is important to supply uncertainty maps or error bounds in addition to any reconstruction. In this talk, we give an overview of calibration-based methods that provide 1) pixel-wise probabilistic error bounds or 2) probabilistic confidence with respect to entire structures in the reconstruction. We show results on the clinically highly relevant problem of undersampled magnetic resonance reconstruction.


Posterior-Variance-Based Error Quantification for Inverse Problems in Imaging

Dominik Narnhofer1, Andreas Habring2, Martin Holler2, Thomas Pock1

1Graz University of Technology; 2University of Graz

We present a method for obtaining pixel-wise error bounds in Bayesian regularization of inverse imaging problems. The proposed approach employs estimates of the posterior variance together with techniques from conformal prediction in order to obtain error bounds with coverage guarantees, without making any assumption on the underlying data distribution. It is generally applicable to Bayesian regularization approaches, independent, e.g., of the concrete choice of the prior. Furthermore, the coverage guarantees can also be obtained in case only approximate sampling from the posterior is possible. With this in particular, the proposed framework is able to incorporate any learned prior in a black-box manner.

Such a guaranteed coverage without assumptions on the underlying distributions is only achievable since the magnitude of the error bounds is, in general, unknown in advance. Nevertheless, as we confirm with experiments with multiple regularization approaches, the obtained error bounds are rather tight.

A preprint of this work is available at https://arxiv.org/abs/2212.12499
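
For readers unfamiliar with the conformal step, here is a schematic sketch of how posterior standard deviations can be calibrated into pixel-wise error bars on held-out data. Pooling the scores over all pixels and the synthetic data are simplifications for illustration, not the procedure of the paper.

```python
import numpy as np

def conformal_scale(x_hat, x_true, sigma, alpha=0.1):
    """Split conformal calibration: scale the reconstruction error by the
    posterior standard deviation and return the adjusted (1-alpha)-quantile
    q, so that x_hat +/- q * sigma covers the truth with probability
    >= 1 - alpha on new data from the same distribution."""
    scores = (np.abs(x_true - x_hat) / sigma).ravel()
    n = scores.size
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, level)

# Synthetic calibration set: reconstructions, posterior std maps, and
# ground truth for 50 held-out images.
rng = np.random.default_rng(0)
sigma = 0.1 + 0.2 * rng.random((50, 32, 32))   # posterior std estimates
x_hat = rng.standard_normal((50, 32, 32))      # reconstructions
x_true = x_hat + sigma * rng.standard_normal((50, 32, 32))
q = conformal_scale(x_hat, x_true, sigma)
bounds = (x_hat - q * sigma, x_hat + q * sigma)   # pixel-wise error bars
```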


How to sample from a posterior like you sample from a prior

Babak Maboudi Afkham1, Matthias Chung2, Julianne Chung2

1DTU, Denmark; 2Emory University, USA

The importance of quantifying uncertainties is rising in many applications of inverse problems. One way to estimate uncertainties is to explore the posterior distribution, e.g. in the context of Bayesian inverse problems. Standard approaches to exploring the posterior, e.g. Markov chain Monte Carlo (MCMC) methods, are often inefficient for large-scale and non-linear inverse problems.

In this work, we propose a method that exploits data to construct accelerated sampling from the posterior distributions for goal-oriented inverse problems. We use a variational encoder-decoder (VED) network to approximate the mapping that relates a measurement vector to the posterior distribution. The output of the VED network is an approximation of the true distribution, whose moments can be estimated, e.g., using Monte Carlo methods. This enables real-time uncertainty quantification. The proposed method showcases a promising approach for large-scale inverse problems.


Uncertainty Quantification for Computed Tomography via the Linearised Deep Image Prior

Riccardo Barbano

University College London, United Kingdom

Existing deep-learning-based tomographic image reconstruction methods do not provide accurate estimates of reconstruction uncertainty, hindering their real-world deployment. In this talk we present a method, termed the linearised deep image prior (DIP), that estimates the uncertainty associated with reconstructions produced by the DIP with total variation (TV) regularisation. We discuss how to endow the DIP with conjugate Gaussian-linear model type error bars computed from a local linearisation of the neural network around its optimised parameters. This approach provides pixel-wise uncertainty estimates and a marginal likelihood objective for hyperparameter optimisation. Throughout the talk we demonstrate the method on synthetic data and real-measured high-resolution 2D $\mu$CT data, and show that it provides superior calibration of uncertainty estimates relative to previous probabilistic formulations of the DIP.
 
1:30pm - 3:30pmMS36 1: Advances in limited-data X-ray tomography
Location: VG3.101
Session Chair: Jakob Sauer Jørgensen
Session Chair: Samuli Siltanen
 

Learned proximal operators meet unrolling: a deeply learned regularisation for limited angle tomography

Tatiana Alessandra Bubba

University of Bath, United Kingdom

In recent years, limited angle CT has become a challenging testing ground for several theoretical and numerical studies, where both variational regularisation and data-driven techniques have been investigated extensively. In this talk, I will present a hybrid reconstruction framework where the proximal operator of an accelerated unrolled scheme is learned to ensure suitable theoretical guarantees. The recipe relies on the interplay between sparse regularisation theory, harmonic analysis, microlocal analysis and Plug-and-Play methods. The numerical results show that these approaches significantly surpass both purely model-based and more data-driven reconstruction methods.


A new variational approach for limited data reconstructions in X-ray tomography

Jürgen Frikel

OTH Regensburg, Germany

It is well known that image reconstructions from limited tomographic data often suffer from significant artifacts and missing features. To remove these artifacts and to compensate for the missing information, reconstruction methods have to incorporate additional information about the objects of interest. An important example of such methods is TV reconstruction. It is well known that this technique can efficiently compensate for missing information and reduce reconstruction artifacts. At the same time, however, tomographic data are also contaminated by noise, which poses an additional challenge. The use of a single penalty term (regularizer) within a variational regularization framework must therefore account for both the missing data and the noise. However, it is known that a single regularizer does not work perfectly for both tasks. In this talk, we introduce a new variational formulation that combines the advantages of two different regularizers, one aimed at accurate reconstruction in the presence of noise and the other aimed at selecting a solution with reduced artifacts. Both reconstructions are linked by a data consistency condition that makes them close to each other in the data domain. We demonstrate the proposed method for the limited angle CT problem using a combined curvelet and TV approach.



Material Decomposition Techniques for Spectral Computed Tomography

Francesca Bevilacqua1, Yiqiu Dong2, Jakob Sauer Jørgensen2, Alessandro Lanza1, Monica Pragiola3

1University of Bologna, Italy; 2Technical University of Denmark, Denmark; 3University of Naples Federico II, Italy

Spectral computed tomography is an evolving technique which exploits the property of materials to attenuate X-rays in different ways depending on the specific energy. Compared to conventional CT, spectral CT employs a photon-counting detector that records the energy of individual photons and produces a fine grid of discrete energy-dependent data. In this way it is easier to distinguish materials that have similar attenuation coefficients in one energy range but different coefficients in others. The material decomposition process allows one not only to reconstruct the object, but also to estimate the concentration of the materials that compose it.

Different strategies to reconstruct material-specific images have been developed in recent years, but many improvements have yet to be made, especially for low-dose cases and few projections. This setup is justified by the slowness and flux limit of high-energy-resolution photon-counting detectors, but it leads to noisier data, especially across the energy channels, and less spatial information. The study of the noise distribution, together with the use of suitable regularizers and the selection of their parameters, becomes crucial to obtain a good quality reconstruction and material decomposition. The talk will address all these issues by focusing on the case study of materials that have high atomic number with similar attenuation coefficients and K-edges in the considered energy range.



Bayesian approach to limited-data CT reconstruction for inspection of subsea pipes

Jakob Sauer Jørgensen

Technical University of Denmark

In subsea pipe inspection using X-ray computed tomography (CT), obtaining data is time-consuming and costly due to the challenging underwater conditions. We propose an efficient Bayesian CT reconstruction method with a new class of structural Gaussian priors incorporating known material properties to enhance quality from limited data. Experiments with real and synthetic data demonstrate artifact reduction, increased contrast, and enhanced reconstruction certainty compared to conventional reconstruction methods.

Authors: Silja L. Christensen, Nicolai A. B. Riis, Felipe Uribe and Jakob S. Jørgensen
 
1:30pm - 3:30pmMS37 1: Passive imaging in terrestrial and extra-terrestrial seismology
Location: VG1.102
Session Chair: Florian Faucher
Session Chair: Damien Fournier
 

Source-free seismic imaging with reciprocity-gap misfit criterion

Florian Faucher

Inria Bordeaux, France

We consider the quantitative inverse problem of recovering the sub-surface Earth's parameters from measurements obtained near the surface. The reconstruction procedure uses the iterative minimization of a misfit criterion that evaluates the discrepancy between the observed and simulated signals, following the principles of Full Waveform Inversion. In the context of passive imaging, the position and characterization of the source signature are unknown, which increases the difficulty of inversion. In this work, we propose a new misfit criterion based upon reciprocity formulas that allows for source-free inversion, such that no information regarding the probing sources is required, making it an interesting candidate for ambient noise imaging. Our misfit criterion relies on the deployment of new sensing devices, such as dual sensors and distributed acoustic sensing technology, that offer the perspective of measuring different wave fields. It is the combination of these wave fields that forms the essence of our Full Reciprocity-gap Waveform Inversion method [1, 2], which we illustrate with two- and three-dimensional reconstructions of acoustic and elastic media.

[1] F. Faucher, G. Alessandrini, H. Barucq, M. V. de Hoop, R. Gaburro, E. Sincich. Full Reciprocity-Gap Waveform Inversion, enabling sparse-source acquisition, Geophysics, 85 (6), 2020. https://dx.doi.org/10.1190/geo2019-0527.1

[2] F. Faucher, M. V. de Hoop, O. Scherzer. Reciprocity-gap misfit functional for Distributed Acoustic Sensing, combining data from active and passive sources , Geophysics, 86 (2), 2021. https://doi.org/10.1190/geo2020-0305.1


Improving our Understanding of Jupiter’s and Saturn’s Interior Structure

Burkhard Militzer

University of California, Berkeley, United States of America

Traditionally, models for the interior structure of giant planets are constrained by spacecraft measurements that fly by a planet at close range and measure its gravitational field with high precision. Still, with increasing depth it becomes more and more difficult with such measurements to uniquely determine what type of layers exist in a giant planet. This is especially true for the cores of giant planets, which harbor valuable information on how the planet formed and what the early solar system looked like. Measurements of normal modes, on the other hand, offer an alternative, potentially powerful approach to probing much deeper into a giant planet. While such dynamic measurements are very challenging, a number of such observations have already been reported. Here we review ring seismological measurements of spiral density waves in Saturn's rings, radial velocity measurements of Jupiter's atmosphere, as well as a recent analysis of time-dependent variations in Jupiter's gravity field. We then compare results from these measurements with predictions from models for the interiors of Jupiter and Saturn that were constrained by gravity measurements alone. We conclude by discussing Jupiter's dilute core and a recent study that explains how Saturn's rings formed.


Full-Waveform Inversion and Reverse-Time Migration in Earthquake and Exploration Seismology

Frederik J Simons1, Qiancheng Liu1,4, Zhendong Zhang1,3, Zhaolun Liu1,2, Etienne Bachmann1, Alex L. Burky1, Congyue Cui1, Jessica C.E. Irving5, Jeroen Tromp1

1Princeton University, United States of America; 2Ocean University of China; 3Massachusetts Institute of Technology, United States of America; 4Chinese Academy of Sciences; 5The University of Bristol

In this presentation I will give an overview of various inverse problems that have arisen in the context of (passive) terrestrial imaging, including but not limited to earthquake seismology. At the smallest scale, I will discuss a source-encoded crosstalk-free Laplace-domain elastic Full Waveform Inversion (FWI) method that uses time-domain solvers, which cuts down drastically on computation time even for very data-rich environments. This technique has been used in medical ultrasound, but also at the scale of the globe, and is now actively being developed for applications in the oil industry. At the regional scale, I will discuss full-waveform centroid moment tensor (CMT) inversion of passive seismic data acquired at the reservoir scale, for a field application in Tajikistan. At the largest scale, I will show how receiver function techniques are being supplemented by new technology to image mantle transition zone (MTZ) discontinuities in three-dimensional (3-D) heterogeneous background Earth models, and I will show new seismic evidence for a 1000 km mid-mantle discontinuity under the Pacific obtained by imaging via full-waveform reverse-time migration of precursors to surface-reflected seismic body waves, and its interpretation.


Passive seismic body-wave imaging of the deep Earth

Michel Campillo

Universite Grenoble Alpes, France

The ambient seismic noise has been widely used for surface wave tomography. We present examples of imaging of geological structures of interest at different depths and different scales: the region of the core-mantle boundary and an active fault in the crust. In both cases, we use continuous data from large arrays of sensors. We discuss the global spatial correlation properties of seismic ambient vibrations and their relations with body waves [1,2]. We show the signature of the heterogeneity of the lowermost mantle, in contrast to the almost transparent upper core [3]. For the case of fault systems, a major issue is the strong lateral variation of seismic velocity in the first kilometers, which degrades the quality of the imaging. In this case an aberration correction is applied to the data of a dense array through the reflection matrix framework [4].

[1] P. Boué, P. Poli, M. Campillo, P. Roux. Reverberations, coda waves and ambient noise: correlations at the global scale and retrieval of the deep phases, Earth and Planetary Science Lett. 391:137-145, 2014.

[2] L. Li, P. Boué, M. Campillo. Observation and explanation of spurious seismic signals emerging in teleseismic noise correlations, Solid Earth 11:173-184, 2020.

[3] L. Retailleau, P. Boué, L. Li, M. Campillo. Ambient seismic noise imaging of the lowermost mantle beneath the North Atlantic Ocean, Geophysical J. Int. 222(2):1339-1351, 2020.

[4] R. Touma, T. Blondel, A. Derode, M. Campillo, A. Aubry. A Distortion Matrix Framework for High-Resolution Passive Seismic 3D Imaging: Application to the San Jacinto Fault Zone, California, Geophysical J. Int. 226:780-794, 2021.
 
1:30pm - 3:30pmMS51 1: Analysis, numerical computation, and uncertainty quantification for stochastic PDE-based inverse problems
Location: VG1.108
Session Chair: Mirza Karamehmedovic
Session Chair: Faouzi Triki
 

Deep Learning Methods for Partial Differential Equations and Related Parameter Identification Problems

Derick Nganyu Tanyu1, Jianfeng Ning2, Tom Freudenberg1, Nick Heilenkötter1, Andreas Rademacher1, Uwe Iben3, Peter Maass1

1Centre for Industrial Mathematics, University of Bremen, Germany; 2School of Mathematics and Statistics, Wuhan University, China; 3Robert Bosch GmbH, Germany

Recent years have witnessed a growth in mathematics for deep learning (which seeks a deeper understanding of the concepts of deep learning with mathematics and explores how to make it more robust) and deep learning for mathematics (where deep learning algorithms are used to solve problems in mathematics). The latter has popularised the field of scientific machine learning, where deep learning is applied to problems in scientific computing. Specifically, more and more neural network architectures have been developed to solve specific classes of partial differential equations (PDEs). Such methods exploit properties that are inherent to PDEs and thus solve the PDEs better than classical feed-forward neural networks, recurrent neural networks, and convolutional neural networks. This has had a great impact in the area of mathematical modeling, where parametric PDEs are widely used to model most natural and physical processes arising in science and engineering. In this work, we review such methods and extend them for parametric studies as well as for solving the related inverse problems. We also show their relevance in some industrial applications.


Fourier method for inverse source problem using correlation of passive measurements

Kristoffer Linder-Steinlein1, Mirza Karamehmedović1, Faouzi Triki2

1Technical University of Denmark, Denmark; 2Laboratoire Jean Kuntzmann, Université Grenoble-Alpes, Grenoble, France

We consider the inverse source problem for a Cauchy wave equation with passive cross-correlation data. We propose to consider the cross-correlation as a wave equation itself and reconstruct the cross-correlation in the support of the source for the original Cauchy wave equation. Having access to the cross-correlation in the support of the source, we use a Fourier method to reconstruct the source of the original Cauchy wave equation. We show that the inverse source problem is ill-posed and suffers from non-uniqueness when the mean of the source is zero, and we provide a uniqueness result and a stability estimate in the case of non-zero mean.


Feynman's inverse problem - an inverse problem for water waves

Adrian Kirkeby1, Mirza Karamehmedović2

1Simula Research Laboratory, Norway; 2Technical University of Denmark

We analyse an inverse problem for water waves proposed by Richard Feynman in the BBC documentary "Fun to imagine". We show how the presence of both gravity and capillary waves makes water an excellent medium for the propagation of information.


Inference in Stochastic Differential Equations using the Laplace Approximation

Uffe Høgsbro Thygesen

Technical University of Denmark, Denmark

We consider the problem of estimating solutions to systems of coupled stochastic differential equations, as well as underlying system parameters, based on discrete-time measurements. We concentrate on the case where transition densities are not available in closed form, and focus on the technique of the Laplace approximation for integrating out unobserved state variables in a Bayesian setting. We demonstrate that the direct approach of inserting sufficiently many time points with unobserved states performs well when the noise is additive. A pitfall arises when the noise intensity in the state equation depends on the state variables, i.e., when the noise is not additive: in this case, maximizing the posterior density over unobserved states does not lead to useful state estimates (i.e., they are not consistent in the fine-time limit). This problem can be overcome by focusing instead on the minimum-energy realization of the noise process which is consistent with the data, which provides a connection to the calculus of variations. We demonstrate the theory with numerical examples.

 
3:30pm - 4:00pmC7: Coffee Break
Location: ZHG Foyer
4:00pm - 6:00pmCT07: Contributed talks
Location: VG1.105
Session Chair: Christian Aarset
 

Accelerating MCMC for imaging science by using an implicit Langevin algorithm

Teresa Klatzer1,3, Konstantinos Zygalakis1,3, Paul Dobson1, Marcelo Pereyra2,3, Yoann Altmann2,3

1University of Edinburgh, United Kingdom; 2Heriot-Watt University, United Kingdom; 3Maxwell Institute for Mathematical Sciences, United Kingdom

In this work, we present a highly efficient gradient-based Markov chain Monte Carlo methodology to perform Bayesian computation in imaging problems. Like previous Monte Carlo approaches, the proposed method is derived from a discretisation of a Langevin diffusion. However, instead of a conventional explicit discretisation like Euler-Maruyama, here we use an implicit discretisation based on the theta-method. In particular, the proposed sampling algorithm requires solving an optimization problem in each step. In the case of a log-concave posterior, this optimisation problem is strongly convex and thus can be solved efficiently, leading to effective step sizes that are significantly larger than traditional methods permit. We show that even for these large step sizes the corresponding Markov chain has low bias while also preserving the posterior variance. We demonstrate the proposed methodology on a range of problems including non-blind image deconvolution and denoising. Comparisons with state-of-the-art MCMC methods confirm that the Markov chains generated with our method exhibit significantly faster convergence speeds, and produce lower mean square estimation errors at equal computational budget.
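
A schematic sketch of one theta-method step is given below: the implicit update is computed by solving a strongly convex problem. The toy Gaussian target and the generic L-BFGS solver are placeholders for the efficient solvers discussed in the talk.

```python
import numpy as np
from scipy.optimize import minimize

def theta_langevin(grad_U, U, x0, h, theta=1.0, n_steps=1000, rng=None):
    """Implicit (theta-method) discretisation of the Langevin diffusion:
    x_{k+1} = x_k - h[(1-theta) grad U(x_k) + theta grad U(x_{k+1})]
              + sqrt(2h) xi_k.
    Each step solves min_z theta*h*U(z) + 0.5*||z - c||^2, which is
    strongly convex whenever U is convex."""
    rng = rng or np.random.default_rng()
    x = x0.copy()
    samples = []
    for _ in range(n_steps):
        c = (x - h * (1 - theta) * grad_U(x)
             + np.sqrt(2 * h) * rng.standard_normal(x.shape))
        obj = lambda z: theta * h * U(z) + 0.5 * np.sum((z - c)**2)
        jac = lambda z: theta * h * grad_U(z) + (z - c)
        x = minimize(obj, x, jac=jac, method='L-BFGS-B').x
        samples.append(x.copy())
    return np.array(samples)

# Toy log-concave target: U(x) = 0.5*||x||^2 (standard Gaussian), with a
# step size h far larger than explicit schemes would tolerate.
U = lambda x: 0.5 * np.sum(x**2)
grad_U = lambda x: x
chain = theta_langevin(grad_U, U, x0=np.zeros(2), h=1.0, theta=1.0)
```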


Quasi-Monte Carlo methods for Bayesian optimal experimental design problems governed by PDEs

Vesa Kaarnioja

Free University of Berlin, Germany

The goal in Bayesian optimal experimental design is to maximize the expected information gain for the reconstruction of unknown quantities when there is a limited budget for collecting measurement data. We consider Bayesian inverse problems governed by partial differential equations. This leads us to consider an optimization problem, where the objective functional involves nested high-dimensional integrals which we approximate by using tailored rank-1 lattice quasi-Monte Carlo (QMC) rules. We show that these QMC rules achieve faster-than-Monte Carlo convergence rates. Numerical experiments are presented to assess the theoretical results.
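
As a minimal illustration, here is a randomly shifted rank-1 lattice rule; the generating vector below is an arbitrary placeholder, whereas the talk uses vectors tailored to the nested high-dimensional integrals at hand.

```python
import numpy as np

def rank1_lattice(n, z, rng=None):
    """Randomly shifted rank-1 lattice rule: points {i*z/n + shift} mod 1.
    The generating vector z is problem-dependent; the one used below is a
    placeholder, not an optimised vector."""
    rng = rng or np.random.default_rng()
    i = np.arange(n)[:, None]
    shift = rng.random(len(z))
    return np.mod(i * np.asarray(z)[None, :] / n + shift, 1.0)

# Approximate an s-dimensional integral of f over the unit cube [0,1]^s.
f = lambda x: np.exp(np.sum(x, axis=1) / x.shape[1])
pts = rank1_lattice(n=4093, z=[1, 1487, 1003, 241])
estimate = f(pts).mean()
```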


Maximum marginal likelihood estimation of regularisation parameters in Plug & Play Bayesian estimation: Application to non-blind and semi-blind image deconvolution

Charlesquin Kemajou Mbakam1, Marcelo Pereyra2, Jean-Francois Giovannelli3

1Heriot-Watt University, United Kingdom; 2Heriot-Watt University, United Kingdom; 3Université de Bordeaux

Bayesian Plug & Play (PnP) priors are widely acknowledged as a powerful framework for solving a variety of inverse problems in imaging. The Bayesian PnP framework has made tremendous advances in recent years, resulting in state-of-the-art methods. Although PnP methods are distinguished by their ability to regularize Bayesian inverse problems through a denoising algorithm, setting the amount of regularity enforced by the prior, determined by the noise level parameter of the denoiser, has been an issue for several reasons. This talk presents an empirical Bayesian extension of an existing Plug & Play (PnP) Bayesian inference method. The main novelty of this work is that we estimate the regularisation parameter directly from the observed data by maximum marginal likelihood estimation (MMLE). However, since the MMLE problem is computationally and analytically intractable, we incorporate a Markov kernel within a stochastic approximation proximal gradient scheme to address this difficulty. The resulting method calibrates a regularisation parameter by MMLE while generating samples asymptotically distributed according to the empirical Bayes (pseudo-) posterior distribution of interest. Additionally, the proposed method can simultaneously estimate other unknown parameters of the model by MMLE, such as the noise level of the observation model and the parameters of the forward operator. The proposed method is demonstrated on a range of non-blind and semi-blind image deconvolution problems and compared to state-of-the-art methods.



Choosing observations to mitigate model error in Bayesian inverse problems

Nada Cvetkovic1, Han Cheng Lie2, Harshit Bansal1, Karen Veroy--Grepl1

1Eindhoven University of Technology, The Netherlands; 2Universität Potsdam, Germany

In inverse problems, one often assumes a model for how the data is generated from the underlying parameter of interest. In experimental design, the goal is to choose observations to reduce uncertainty in the parameter. When the true model is unknown or expensive, an approximate model is used that has nonzero `model error' with respect to the true data-generating model. Model error can lead to biased parameter estimates. If the bias is large, uncertainty reduction around the estimate is undesirable. This raises the need for experimental design that takes model error into account. We present a framework for model error-aware experimental design in Bayesian inverse problems. Our framework is based on Lipschitz stability results for the posterior with respect to model perturbations. We use our framework to show how one can combine experimental design with models of the model error in order to improve the results of inference.
 
4:00pm - 6:00pmCT08: Contributed talks
Location: VG2.105
Session Chair: Stephan F Huckemann
 

On Adaptive confidence Ellipsoids for sparse high dimensional linear models

Xiaoyang Xie

Cambridge University, United Kingdom

In high-dimensional linear models the problem of constructing adaptive confidence sets for the full parameter is known to be generally impossible. We propose re-weighted loss functions under which constructing fully adaptive confidence sets for the parameter is shown to be possible. We give necessary and sufficient conditions on the loss functions for which adaptive confidence sets exist, and exhibit a concrete rate-optimal procedure for construction of such confidence sets.


Sparsity-promoting hierarchical Bayesian inverse problems and uncertainty quantification

Jan Glaubitz

Massachusetts Institute of Technology, United States of America

Recovering sparse generative models from limited and noisy measurements presents a significant and complex challenge. Given that the available data is frequently inadequate and affected by noise, it is crucial to assess the resulting uncertainty in the relevant parameters. Notably, this uncertainty in the parameters directly impacts the reliability of predictions and decision-making processes.

In this talk, we explore the Bayesian framework, which facilitates the quantification of uncertainty in parameter estimates by treating involved quantities as random variables and leveraging the posterior distribution. Within the Bayesian framework, sparsity promotion and computational efficiency can be attained with hierarchical models with conditionally Gaussian priors and gamma hyper-priors. However, most of the existing literature focuses on the numerical approximation of maximum a posteriori (MAP) estimates, and less attention has been given to sampling methods or other means for uncertainty quantification. To address this gap, our talk will delve into recent advancements and developments in uncertainty quantification and sampling techniques for sparsity-promoting hierarchical Bayesian inverse problems.
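
One widely used instance of such hierarchical models is the iterative alternating scheme (IAS) of Calvetti and Somersalo, sketched below for a linear forward model; the parameter conventions follow one common variant of the scheme, and the sampling methods of the talk are not shown.

```python
import numpy as np

def ias(A, y, sigma=0.01, beta=1.6, vartheta=1e-3, n_iter=50):
    """Iterative alternating scheme (IAS) for the hierarchical model
    x_i | theta_i ~ N(0, theta_i), theta_i ~ Gamma(beta, vartheta):
    alternate a Tikhonov-type x-update with a closed-form theta-update."""
    m, d = A.shape
    theta = np.full(d, vartheta)
    eta = beta - 1.5                 # effective hyperprior exponent
    for _ in range(n_iter):
        # x-update: min ||Ax - y||^2/(2 sigma^2) + sum_i x_i^2/(2 theta_i)
        D = np.diag(1.0 / theta)
        x = np.linalg.solve(A.T @ A / sigma**2 + D, A.T @ y / sigma**2)
        # theta-update: minimiser of the hypermodel objective in theta_i
        theta = vartheta * (eta / 2 + np.sqrt(eta**2 / 4 + x**2 / (2 * vartheta)))
    return x, theta

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[[10, 50, 80]] = [1.0, -2.0, 1.5]
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat, theta_hat = ias(A, y)
```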

Parts of this talk are joint work with Anne Gelb (Dartmouth), Youssef Marzouk (MIT), and Jonathan Lindbloom (Dartmouth).


Recursive Update of Linearization Model Error for Conductivity Reconstruction from ICDI

Puyuan Mi1, Yiqiu Dong1, Bangti Jin2

1Technical University of Denmark, Denmark; 2The Chinese University of Hong Kong, China

Conductivity reconstruction is one of the most critical tasks in medical imaging, and approaches based on interior current density information (ICDI) have drawn a lot of attention recently. However, they face challenges due to the nonlinearity of the map between the conductivity and the interior current density and the high contrast of the conductivity. In this work, we propose a novel Bayesian framework to tackle these difficulties. We incorporate and iteratively update the model error introduced by linearization in the framework, and we also re-form the linearization operator recursively to obtain a better approximation. Numerical experiments show that our method outperforms other approaches in terms of both relative errors of the estimates and the Kullback-Leibler divergence between distributions.


Fractional graph Laplacian for image reconstruction

Stefano Aleotti1, Alessandro Buccini2, Marco Donatelli1

1University of Study of Insubria, Italy; 2University of Cagliari, Italy

Image reconstruction problems, such as image deblurring and computed tomography, are usually ill-posed and require regularization. A popular approach to regularization is to substitute the original problem with an optimization problem that minimizes the sum of two terms, an $\ell^2$ term and an $\ell^q$ term with $0<q\leq 1$. The former penalizes the distance between the measured data and the reconstructed data; the latter imposes sparsity on some features of the computed solution.

In this work, we propose to use the fractional Laplacian of a properly constructed graph in the $\ell^q$ term to compute extremely accurate reconstructions of the desired images. A simple model with a fully plug-and-play method is used to construct the graph, and enhanced diffusion on the graph is achieved with the use of a fractional exponent in the Laplacian operator. Since this is a global operator, we propose to replace it with an approximation in an appropriate Krylov subspace. We show that the algorithm is a regularization method under some reasonable assumptions. Selected numerical examples in image deblurring and computed tomography show the performance of our proposal.
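
For concreteness, here is a small sketch of the two ingredients named above: an intensity-weighted graph Laplacian built from an image, and its fractional power. A dense eigendecomposition is used for a toy image, whereas the talk replaces this global operator by a Krylov-subspace approximation.

```python
import numpy as np

def image_graph_laplacian(img, sigma=0.1):
    """Weighted graph Laplacian on the 4-neighbour pixel graph with
    intensity-based weights w_ij = exp(-(I_i - I_j)^2 / sigma^2)."""
    h, w = img.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    W = np.zeros((n, n))
    for a, b in [(idx[:-1, :], idx[1:, :]), (idx[:, :-1], idx[:, 1:])]:
        for i, j in zip(a.ravel(), b.ravel()):
            wij = np.exp(-(img.flat[i] - img.flat[j])**2 / sigma**2)
            W[i, j] = W[j, i] = wij
    return np.diag(W.sum(axis=1)) - W

def fractional_power(L, alpha):
    """L^alpha via eigendecomposition; fine for toy sizes only."""
    lam, V = np.linalg.eigh(L)
    lam = np.clip(lam, 0.0, None)    # guard against tiny negative round-off
    return (V * lam**alpha) @ V.T

img = np.zeros((8, 8)); img[2:6, 2:6] = 1.0
L = image_graph_laplacian(img)
L_alpha = fractional_power(L, alpha=0.5)
```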

[1] D. Bianchi, A. Buccini, M. Donatelli, E. Randazzo. Graph Laplacian for image deblurring. Electronic Transactions on Numerical Analysis, 55:169-186, 2021.

[2] A. Buccini, M. Donatelli. Graph Laplacian in $\ell^2-\ell^q$ regularization for image reconstruction. Proceedings - 2021 21st International Conference on Computational Science and Its Applications, ICCSA 2021:29-38, 2021.

[3] S. Aleotti, A. Buccini, M. Donatelli. Fractional graph Laplacian for image reconstruction. Applied Numerical Mathematics, in progress, 2023.
 
4:00pm - 6:00pmCT09: Contributed talks
Location: VG2.107
Session Chair: Tram Nguyen
 

Diffraction Tomography: Elastic parameters reconstructions

Bochra Mejri1, Otmar Scherzer1,2,3

1Johann Radon Institute for Computational and Applied Mathematics (RICAM), Austria; 2Faculty of Mathematics, University of Vienna, Austria; 3Christian Doppler Laboratory for Mathematical Modeling and Simulation of Next Generations of Ultrasound Devices (MaMSi), Institute of Mathematics, Austria

In this talk, we introduce an elastic imaging method where the elastic properties (i.e. mass density and Lamé parameters) of a weak scatterer are reconstructed from full-field data of scattered waves. We linearise the inverse scattering problem under consideration using Born's, Rytov's or Kirchhoff's approximation. First, we appeal to the Fourier diffraction theorem developed in our previous work [1] for the pressure-pressure mode (i.e. generating pressure incident plane waves and measuring the pressure part of the scattered data). Then, we reconstruct the inverse Fourier transform of the pressure-pressure scattering potential using the inverse $\textit{nonequispaced discrete Fourier transform}$ for 2D transmission acquisition experiments. Finally, we quantify the elastic parameter distributions for different plane wave excitations.

[1] B. Mejri, O. Scherzer. A new inversion scheme for elastic diffraction tomography. arXiv:2212.02798, 2022.


Photoacoustic and Ultrasonic Tomography for Breast Imaging

Felix Lucka

Centrum Wiskunde & Informatica, Computational Imaging, Netherlands

New high-resolution, three-dimensional imaging techniques are being developed that probe the breast without delivering harmful radiation. In particular, photoacoustic tomography (PAT) and ultrasound tomography (UST) promise to give access to high-quality images of tissue parameters with important diagnostic value. However, the involved inverse problems are very challenging from an experimental, mathematical and computational perspective. In this talk, we want to give an overview of these challenges and illustrate them with data from an ongoing clinical feasibility study that uses a prototype scanner for combined PAT and UST.


One-step estimation of spectral optical parameters in quantitative photoacoustic tomography

Miika Suhonen, Aki Pulkkinen, Tanja Tarvainen

University of Eastern Finland, Finland

In quantitative photoacoustic tomography, information about a target tissue is obtained by estimating its optical parameters. In this work, we propose a one-step methodology for estimating spectral optical parameters directly from photoacoustic time-series data. This is carried out by representing the optical parameters with their spectral models and by combining the models of light and ultrasound propagation. The inverse problem is approached in the framework of Bayesian inverse problems. Concentrations of four chromophores, two scattering related parameters, and the Grüneisen parameter are estimated simultaneously. The methodology is evaluated using numerical simulations.


Stable reconstruction of anisotropic conductivity in magneto-acoustic tomography with magnetic induction

Niall Donlon

University of Limerick, Ireland

We study the issues of stability and reconstruction of the anisotropic conductivity $\sigma$ of a biological medium $\Omega\subset\mathbb{R}^3$ by the hybrid inverse problem of Magneto-Acoustic Tomography with Magnetic Induction (MAT-MI). More specifically, we consider a class of anisotropic conductivities given by the symmetric and uniformly positive definite matrix-valued functions $A(x,\gamma(x))$, $x\in\Omega$, where the one-parameter family $t\mapsto A(x, t)$, $t\in[\lambda^{-1}, \lambda]$, is assumed to be $\textit{a priori}$ known. Under suitable conditions that include $A(\cdot, \gamma(\cdot))\in C^{1,\beta}(\Omega)$, with $0<\beta\leq 1$, we obtain a Lipschitz-type stability estimate of the scalar function $\gamma$ in the $L^2(\Omega)$ norm in terms of an internal functional that can be physically measured in the MAT-MI experiment. We demonstrate the effectiveness of our theoretical framework in several numerical experiments, where $\gamma$ is reconstructed in terms of the internal functional. Our result extends previous results in MAT-MI where the conductivity $\sigma$ was either isotropic or of the simpler anisotropic form $\gamma D$, with $D$ an $\textit{a priori}$ known matrix-valued function in $\Omega$. In particular, the more general type of anisotropic conductivity considered here allows for the anisotropic structure to depend non-linearly on the unknown scalar parameter $\gamma$ to be reconstructed. This is joint work with Romina Gaburro, Shari Moskow and Isaac Woods.
 
4:00pm - 6:00pmMS05 3: Numerical meet statistical methods in inverse problems
Location: VG2.102
Session Chair: Martin Hanke
Session Chair: Markus Reiß
Session Chair: Frank Werner
 

Bayesian hypothesis testing in statistical inverse problems

Remo Kretschmann, Frank Werner

Institute of Mathematics, University of Würzburg, Germany

In many inverse problems, one is not primarily interested in the whole solution $u^\dagger$, but in specific features of it that can be described by a family of linear functionals of $u^\dagger$. We perform statistical inference for such features by means of hypothesis testing.

This problem has previously been treated by multiscale methods based upon unbiased estimates of those functionals [1]. Constructing hypothesis tests using unbiased estimators, however, has two severe drawbacks: Firstly, unbiased estimators only exist for sufficiently smooth linear functionals, and secondly, they suffer from a huge variance due to the ill-posedness of the problem, so that the corresponding tests have bad detection properties. We overcome both of these issues by considering the problem from a Bayesian point of view, assigning a prior distribution to $u^\dagger$, and using the resulting posterior distribution to define Bayesian maximum a posteriori (MAP) tests.

The existence of a hypothesis test with maximal power among a class of tests with prescribed level has recently been shown for all linear functionals of interest under certain a priori assumptions on $u^\dagger$ [2]. We study Bayesian MAP tests based upon Gaussian priors both analytically and numerically for linear inverse problems and compare them with unregularized as well as optimal regularized hypothesis tests.

[1] K. Proksch, F. Werner, A. Munk. Multiscale scanning in inverse problems. Ann. Statist. 46(6B): 3569--3602. 2018. https://doi.org/10.1214/17-AOS1669

[2] R. Kretschmann, D. Wachsmuth, F. Werner. Optimal regularized hypothesis testing in statistical inverse problems. Preprint, 2022. https://doi.org/10.48550/arXiv.2212.12897



Predictive risk regularization for Gaussian and Poisson inverse problems

Federico Benvenuto

Università degli Studi di Genova, Italy

In this talk, we present two methods for the choice of the regularization parameter in statistical inverse problems based on predictive risk estimation, in the case of Gaussian and Poisson noise. In the first case, the criterion for choosing the regularization parameter in Tikhonov regularization is motivated by stability issues in the case of small sample sizes, and it minimizes a lower bound of the predictive risk. It is applicable when both the data norm and the noise variance are known, minimizing a function which depends on the signal-to-noise ratio, and also when they are unknown, using an iterative algorithm which alternates between a minimization step of finding the regularization parameter and an estimation step of estimating the signal-to-noise ratio. In the second case, we introduce a novel estimator of the predictive risk with Poisson data, when the loss function is the Kullback-Leibler divergence, in order to define a choice rule for the regularization parameter in the expectation maximization (EM) algorithm. We present a Poisson counterpart of Stein's lemma for Gaussian variables, and from this result we derive the proposed estimator, which is asymptotically unbiased as the number of measured counts increases, when the EM algorithm for Poisson data is considered. In both cases we present some numerical tests with synthetic data.
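
For background, here is a sketch of the classical unbiased predictive risk estimator (UPRE) for Tikhonov regularization in the Gaussian case, computed via the SVD. The talk's criterion minimizes a lower bound of the predictive risk instead, which this sketch does not implement.

```python
import numpy as np

def upre(A, y, sigma, lambdas):
    """Classical unbiased predictive risk estimate for Tikhonov
    regularisation, ||A x_lam - y||^2 + 2 sigma^2 tr(A A_lam^+) - m sigma^2,
    evaluated on a grid of regularisation parameters via the SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    c = U.T @ y
    m = len(y)
    risks = []
    for lam in lambdas:
        f = s**2 / (s**2 + lam)                    # Tikhonov filter factors
        resid = np.sum(((1 - f) * c)**2) + (y @ y - c @ c)
        risks.append(resid + 2 * sigma**2 * f.sum() - m * sigma**2)
    return np.array(risks)

rng = np.random.default_rng(4)
A = rng.standard_normal((60, 40)) / np.sqrt(60)
x = rng.standard_normal(40)
sigma = 0.1
y = A @ x + sigma * rng.standard_normal(60)
lambdas = np.logspace(-4, 1, 50)
lam_opt = lambdas[np.argmin(upre(A, y, sigma, lambdas))]
```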


Reconstruction of active forces in actomyosin droplets

Anne Wald, Emily Klass

Georg-August-University Göttingen, Germany

Many processes in cells are driven by the interaction of multiple proteins, for example cell contraction, division or migration. Two important types of proteins are actin filaments and myosin motors. Myosin is able to bind to and move along actin filaments with its two ends, leading to the formation of a dynamic actomyosin network, in which stresses are generated and patterns may form. Droplets containing an actomyosin network serve as a strongly simplified model for a cell, which are used to study elemental mechanisms. We are interested in determining the parameters that characterize this active matter, i.e., active forces that cause the dynamics of an actomyosin network, represented by the flow inside the actomyosin droplet, as well as the local viscosity. This leads to a (deterministic) parameter identification problem for the Stokes equation, where the viscosity inside the droplet can be estimated by means of statistical approaches.


Learning Linear Operators

Nicole Mücke

TU Braunschweig, Germany

We consider the problem of learning a linear operator $\theta$ between two Hilbert spaces from empirical observations, which we interpret as least squares regression in infinite dimensions. We show that this goal can be reformulated as an inverse problem for $\theta$ with the undesirable feature that its forward operator is generally non-compact (even if $\theta$ is assumed to be compact or of $p$-Schatten class). However, we prove that, in terms of spectral properties and regularisation theory, this inverse problem is equivalent to the known compact inverse problem associated with scalar response regression. Our framework allows for the elegant derivation of dimension-free rates for generic learning algorithms under Hölder-type source conditions. The proofs rely on the combination of techniques from kernel regression with recent results on concentration of measure for sub-exponential Hilbertian random variables. The obtained rates hold for a variety of practically relevant scenarios in functional regression as well as nonlinear regression with operator-valued kernels and match those of classical kernel regression with scalar response.

 
4:00pm - 6:00pmMS06 3: Inverse Acoustic and Electromagnetic Scattering Theory - 30 years later
Location: VG3.103
Session Chair: Fioralba Cakoni
Session Chair: Houssem Haddar
 

The direct and inverse scattering problem of obliquely incident electromagnetic waves by an inhomogeneous infinitely long cylinder

Drossos Gintides1, Leonidas Mindrinos2, Sotirios Giogiakas1

1National Technical University of Athens, Greece; 2Agricultural University of Athens, Greece

We consider the scattering problem of electromagnetic waves by an infinitely long cylinder in three dimensions. The cylinder is dielectric, isotropic and inhomogeneous (with respect to the lateral directions). The incoming wave is time-harmonic and obliquely incident on the scatterer. We examine the well-posedness of the direct problem (uniqueness and existence of the solution) using a Lippmann-Schwinger integral equation formulation. We prove uniqueness of the inverse problem of reconstructing the refractive index of an isotropic circular cross-section using the discreteness of the corresponding transmission eigenvalue problem and solutions based on separation of variables. We solve the inverse problem numerically for media with radially symmetric parameters using a Newton-type scheme. The direct problem is also solved numerically to provide us with the necessary far-field patterns of the scattered fields. We present numerical reconstructions justifying the applicability of the proposed method.


Transmission Eigenvalues for a Conductive Boundary

Isaac Harris

Purdue University, United States of America

In this talk, we will investigate the acoustic transmission eigenvalue problem associated with an inhomogeneous medium with a conductive boundary. This is a new class of eigenvalue problems that are not elliptic, not self-adjoint, and non-linear, which gives the possibility of complex eigenvalues. We will discuss the existence of the eigenvalues as well as their dependence on the material parameters. Since this is a non-standard eigenvalue problem, the numerical calculations will also be highlighted. This is joint work with: R.-C. Ayala, O. Bondarenko, A. Kleefeld, and N. Pallikarakis.


Generalized Sampling Method

Lorenzo Audibert

EDF R&D, France

The Generalized Sampling Method was introduced to justify the so-called Linear Sampling Method of Colton and Kirsch (1996). It offers a framework that allows more flexibility than the Factorization Method of Kirsch, which made it possible to extend somewhat the theoretical analysis of sampling methods. In this contribution we will point out the remaining difficulties of the Generalized Linear Sampling Method, namely the form of the regularization term, the treatment of noisy measurements, and some configurations of the sources and receivers that break the symmetry of the near-field operator. We will propose solutions to address some of these challenges. Numerical illustrations will be provided on various types of measurements from electrical impedance tomography, acoustic and elastic scattering.
 
4:00pm - 6:00pmMS10 2: Optimization in Inverse Scattering: from Acoustics to X-rays
Location: VG1.103
Session Chair: Radu Ioan Bot
Session Chair: Russell Luke
 

PAC-Bayesian Learning of Optimization Algorithms

Peter Ochs

Saarland University, Germany

The change of paradigm from purely model-driven to data-driven (learning-based) approaches has tremendously altered the picture in many applications in Machine Learning, Computer Vision, Signal Processing, Inverse Problems, Statistics, and so on. There is no need to mention the significant boost in performance for many specific applications, thanks to the advent of large-scale Deep Learning. In this talk, we open the area of optimization algorithms to this data-driven paradigm, for which theoretical guarantees are indispensable. The expectations about an optimization algorithm go clearly beyond empirical evidence, as a whole processing pipeline may depend on a reliable output of the optimization algorithm, and application domains of algorithms can vary significantly. While there is already a vast literature on "learning to optimize", there are no theoretical guarantees associated with these algorithms that meet these expectations from an optimization point of view. We develop the first framework to learn optimization algorithms with provable generalization guarantees for certain classes of optimization problems, while the learning-based backbone enables the algorithms to function far beyond the limitations of classical (deterministic) worst-case bounds. Our results rely on PAC-Bayes bounds for general, unbounded loss functions based on exponential families. We learn optimization algorithms with provable generalization guarantees (PAC bounds) and an explicit trade-off between a high probability of convergence and a high convergence speed.


Accelerated Griffin-Lim algorithm: A fast and provably convergent numerical method for phase retrieval

Rossen Nenov1, Dang-Khoa Nguyen2, Radu Ioan Bot2, Peter Balazs1

1Austrian Academy of Sciences, Austria; 2University of Vienna

The recovery of a signal from the magnitudes of its transformation, such as the Fourier transform, is known as the phase retrieval problem and is of great relevance in various fields of engineering and applied physics. The Griffin-Lim algorithm is a staple method commonly used for the phase retrieval problem, based on alternating projections. In this talk, we introduce and motivate a fast inertial/momentum modification of the Griffin-Lim algorithm for the phase retrieval problem, and we present a convergence guarantee for the new algorithm.
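
A rough numpy/scipy sketch of a Griffin-Lim iteration with an inertial term is given below, in the spirit of fast Griffin-Lim variants; the provably convergent scheme of the talk may differ in the details of the extrapolation and the projections.

```python
import numpy as np
from scipy.signal import stft, istft

def fast_griffin_lim(S, alpha=0.99, n_iter=100, nperseg=256):
    """Griffin-Lim with inertial extrapolation: alternate the magnitude
    projection with the projection onto consistent spectrograms (STFT of
    the inverse STFT), then extrapolate with momentum alpha."""
    rng = np.random.default_rng(0)
    c = S * np.exp(2j * np.pi * rng.random(S.shape))   # random initial phase
    t_prev = c
    for _ in range(n_iter):
        _, x = istft(c, nperseg=nperseg)        # back to the time domain
        _, _, C = stft(x, nperseg=nperseg)      # consistency projection
        t = S * np.exp(1j * np.angle(C))        # magnitude projection
        c = t + alpha * (t - t_prev)            # inertial extrapolation
        t_prev = t
    _, x = istft(c, nperseg=nperseg)
    return x

# Magnitude-only measurements of a test chirp, then phase retrieval.
fs = 8000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * (200 + 300 * t) * t)
_, _, Z = stft(sig, nperseg=256)
x_rec = fast_griffin_lim(np.abs(Z))
```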


Audio Inpainting

Peter Balazs1, Georg Tauböck2, Shristi Rajbamshi2, Nicki Holighaus1

1Austrian Academy of Sciences, Austria; 2Technical University of Vienna

The goal of audio inpainting is to fill missing data, i.e., gaps, in an acoustical signal. Depending on the length of the gap this procedure should either recreate the original signal, or at least provide a perceptually pleasant and meaningful solution.

We give an overview of existing methods for different gap lengths and discuss details of our own method [1] for gaps of medium duration. This approach promotes sparsity in the time-frequency domain, combined with a convexification using ADMM, and employs a dictionary learning technique that perturbs the time-frequency atoms around the gap, building on an optimization technique originally developed in the context of channel estimation.

[1] G. Tauböck, S. Rajbamshi, P. Balazs. Dictionary learning for sparse audio inpainting, IEEE Journal of Selected Topics in Signal Processing 15 no. 1: 104–119, 2021.
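As a toy illustration of sparsity-promoting inpainting, and not the dictionary-learning method of [1], the sketch below fills a gap by $l^1$-synthesis in an orthonormal DCT dictionary via ISTA; dictionary, threshold and iteration count are illustrative choices.

```python
import numpy as np
from scipy.fft import dct, idct

def inpaint_l1(y, mask, lam=0.05, n_iter=500):
    # y: signal with zeros in the gap; mask: 1.0 on observed samples, 0.0 in the gap.
    # Solves min_c 0.5*||M*idct(c) - y||^2 + lam*||c||_1 by ISTA.
    c = dct(y, norm='ortho')
    for _ in range(n_iter):
        resid = mask * idct(c, norm='ortho') - y           # residual on observed samples
        c = c - dct(mask * resid, norm='ortho')            # gradient step; step size 1 is
                                                           # safe since ||M Phi||_2 <= 1 here
        c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)  # soft-thresholding
    return idct(c, norm='ortho')
```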


Damage detection by guided ultrasonic waves and uncertainty quantification

Dirk Lorenz, Nanda Kishore Bellam Muralidhar, Carmen Gräßle, Natalie Rauter, Andrey Mikhaylenko, Rolf Lammering

TU Braunschweig, Germany

New materials like fibre metal laminates (FML) call for new methods when it comes to structural health monitoring (SHM). In this talk we describe an approach to SHM in FML based on guided ultrasonic waves that travel through plates as lamb waves. By the controlled emission of such waves and the measurement of the displacement at a few position, we aim to detect if a damage in the material is present. We approach this inverse problem by an analytical model of the forward propoagation and a simple damage model that is (nonlinearly) parameterized by a small number of parameters. To identify the damage parameters we employ Bayesian methods (namely a Markov Chain Monte-Carlo Metropolis-Hastings method and the ensemble Kalman filter). To make these computationally tractable, we use parametric model reduction to speed up the forward evaluations of the model.
 
4:00pm - 6:00pmMS16 2: Wave propagation and quantitative tomography
Location: VG0.111
Session Chair: Leonidas Mindrinos
Session Chair: Leopold Veselka
 

Source Reconstruction from Partial Boundary Data in Radiative Transport

Kamran Sadiq

Johann Radon Institute (RICAM), Austria

This talk concerns the source reconstruction problem in a transport problem through an absorbing and scattering medium from boundary measurement data on an arc of the boundary. The method, specific to two-dimensional domains, relies on Bukhgeim's theory of A-analytic maps, and it is joint work with A. Tamasan (UCF) and H. Fujiwara (Kyoto U).



Solving Cauchy problems using semi-discretization techniques and BIE

Leonidas Mindrinos

Agricultural University of Athens, Greece

In this work we present a two-step method for the numerical solution of parabolic and hyperbolic Cauchy problems. Both problems are formulated in 2D, and the proposed method is considered for the direct and the corresponding inverse problem. The main idea is to combine a semi-discretization with respect to the time variable with the boundary integral equation method for the spatial variables. The time discretization reduces the problem to a sequence of elliptic stationary problems. The solution is represented using a single-layer ansatz, and we then solve iteratively for the unknown boundary density functions. We solve the discretized problem on the boundary of the medium with the collocation method. Classical quadrature rules are applied for handling the kernel singularities. We present numerical results for different linear PDEs.

This is a joint work with R. Chapko (Ivan Franko University of Lviv, Ukraine) and B. T. Johansson (Linköping University, Sweden).
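As a concrete instance of this reduction, stated here for orientation with a simple time-stepping choice: a backward Euler step of size $\tau$ applied to the heat equation $\partial_t u = \Delta u$ yields, at each time level, the stationary problem $\Delta u^n - \tau^{-1} u^n = -\tau^{-1} u^{n-1}$, a modified Helmholtz equation whose solution admits the single-layer representation $u^n(x) = \int_\Gamma \Phi_\tau(x,y)\, \varphi^n(y)\, ds(y)$, so that only the boundary densities $\varphi^n$ remain to be determined by collocation. The scheme used in the talk may differ in the time discretization.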


Quantitative Parameter Reconstruction from Optical Coherence Tomographic Data

Leopold Veselka1, Wolfgang Drexler2, Peter Elbau1, Lisa Krainz2

1University of Vienna, Austria; 2Medical University of Vienna, Austria

Optical Coherence Tomography (OCT), an imaging modality based on the interferometric measurement of back-scattered light, is known for its high-resolution images of biological tissues and its versatility in medical imaging. Especially in its main field of application, ophthalmology, the continuously increasing interest in OCT has, aside from improving image quality, driven the need for quantitative information, such as optical properties, for better medical diagnosis. In this talk, we discuss the quantification of the refractive index, an optical property which describes the change of the wavelength of light between different materials, from OCT data. The presented method is based on a Gaussian beam forward model, which models the strongly focused laser light typically used within an OCT setup. We consider samples with a layered structure, meaning that the refractive index as a function of depth is well approximated by a piecewise constant function. For the reconstruction, a layer-by-layer method is presented, where in every step the refractive index is obtained via a discretized $L^2$-minimization. The applicability of the proposed method is verified by reconstructing the refractive indices of layered media from both simulated and experimental OCT data.


Augmented total variation regularization in imaging inverse problems

Nicholas E. Protonotarios1,2,3, Carola-Bibiane Schönlieb2, Nikolaos Dikaios1, Antonios Charalambopoulos4

1Mathematics Research Center, Academy of Athens, Athens, Greece; 2Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, UK; 3Institute of Communication and Computer Systems, National Technical University of Athens, Athens, Greece; 4Department of Mathematics, National Technical University of Athens, Athens, Greece

Total variation ($TV$) regularization has been extensively employed in imaging inverse problems. In this talk, we propose a new $TV$ regularization method for medical image reconstruction which extends standard regularization approaches; it may be conceived as an augmented version of typical $TV$ regularization. Within this approach, a new monitoring variable, $\omega(x)$, is introduced via an additional term in the minimization functional, in which the integration is performed with respect to the $TV$ measure of the image $u(x)$. The dual variable $\omega(x)$ is the integrand of this additional term, and its smoothing nature compensates, where necessary, for the abruptness of the $TV$ measure of the image; its regularity is imposed via the minimization process itself. The main purpose of the dual variable is to control the behavior of $u(x)$, especially its discontinuity properties. Our preliminary results indicate a fast convergence rate, highlighting the method's promising potential. This research is partially supported by the Horizon Europe project SEPTON, under grant agreement 101094901.
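One plausible form of such an augmented functional, written here only to fix ideas (the precise formulation is given in the talk), is $J(u,\omega)=\tfrac12\|Au-f\|^2+\alpha\int_\Omega \omega(x)\,d|Du|(x)+\beta\int_\Omega|\nabla\omega(x)|^2\,dx$: the middle term integrates the dual variable $\omega$ against the $TV$ measure $|Du|$ of the image, while the last term is where the regularity of $\omega$ is imposed through the minimization itself.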
 
4:00pm - 6:00pmMS20 2: Recent advances in inverse problems for elliptic and hyperbolic equations
Location: VG3.104
Session Chair: Ru-Yu Lai
 

Fixed angle inverse scattering for velocity

Rakesh Rakesh

University of Delaware, United States of America

An inhomogeneous acoustic medium is probed by a plane wave and the resultant time dependent wave is measured on the boundary of a ball enclosing the inhomogeneous part of the medium. We describe our partial results about the recovery of the velocity of the medium from the boundary measurement. This is a formally determined inverse problem for the wave equation, consisting of the recovery of a non-constant coefficient of the principal part of the operator from the boundary measurement.


Inverse Problems for Some Nonlinear Schrodinger Equations

Ting Zhou

Zhejiang University, China, People's Republic of

In this talk, I will demonstrate the higher order linearization approach to solving several inverse boundary value problems for nonlinear PDEs modeling, for example, nonlinear optics, including the nonlinear magnetic Schrodinger equation and its fractional version. Considering partial data problems, the problem will be reduced to solving for the coefficient functions from their integrals against multiple linear solutions that vanish on part of the boundary. We will focus our discussion on choices of linear solutions and on some microlocal analysis tools and ideas used in proving that the coefficient function is determined by its integral transforms, such as the FBI transform.
 
4:00pm - 6:00pmMS24 2: Learned Regularization for Solving Inverse Problems
Location: VG1.101
Session Chair: Johannes Hertrich
Session Chair: Sebastian Neumayer
 

Learning Sparsifying Regularisers

Sebastian Neumayer

EPFL Lausanne, Switzerland

Inverse problems can be solved, for example, using variational models. First, we discuss a convex regularizer based on a one-hidden-layer neural network with (almost) free-form activation functions. Our numerical experiments have shown that this simple architecture already achieves state-of-the-art performance in the convex regime. This is very different from the non-convex case, where more complex models usually result in better performance. Inspired by this observation, we discuss an extension of our approach within the convex-non-convex framework. Here, the regularizer can be non-convex, but the overall objective has to remain convex. This maintains the nice optimization properties while allowing the performance to be boosted significantly. Our numerical results show that this convex-energy-based approach is indeed able to outperform the popular BM3D denoiser on the BSD68 test set for various noise scales.
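A minimal sketch of denoising with a one-hidden-layer convex ridge regularizer $R(u)=\sum_i \psi_i(w_i^T u)$ of the kind described above; here a fixed Huber profile stands in for the learned free-form activations, and all names and parameter values are illustrative assumptions.

```python
import numpy as np

def psi_grad(t, delta=0.1):
    # derivative of a Huber profile psi (convex, smooth at 0); stands in for
    # the learned free-form activations of the one-hidden-layer regularizer
    return np.clip(t / delta, -1.0, 1.0)

def ridge_denoise(y, W, lam=0.1, delta=0.1, n_iter=300):
    # Solves min_u 0.5*||u - y||^2 + lam * sum_i psi(w_i^T u) by gradient descent.
    # Each psi is convex, so the objective is convex; the step 1/L is safe with
    # L the Lipschitz bound of the gradient computed below.
    L = 1.0 + lam * np.linalg.norm(W, 2) ** 2 / delta
    u = y.copy()
    for _ in range(n_iter):
        u = u - (1.0 / L) * ((u - y) + lam * W.T @ psi_grad(W @ u, delta))
    return u
```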


Shared Prior Learning of Energy-Based Models for Image Reconstruction

Thomas Pinetz1, Erich Kobler1, Thomas Pock2, Alexander Effland1

1University of Bonn, Germany; 2Technical University of Graz, Austria

In this talk, we propose a novel learning-based framework for image reconstruction particularly designed for training without ground truth data, which has three major building blocks: energy-based learning, a patch-based Wasserstein loss functional, and shared prior learning. In energy-based learning, the parameters of an energy functional composed of a learned data fidelity term and a data-driven regularizer are computed in a mean-field optimal control problem. In the absence of ground truth data, we change the loss functional to a patch-based Wasserstein functional, in which local statistics of the output images are compared to uncorrupted reference patches. Finally, in shared prior learning, both aforementioned optimal control problems are optimized simultaneously with shared learned parameters of the regularizer to further enhance unsupervised image reconstruction. We derive several time discretization schemes of the gradient flow and verify their consistency in terms of Mosco convergence. In numerous numerical experiments, we demonstrate that the proposed method generates state-of-the-art results for various image reconstruction applications - even if no ground truth images are available for training.



Gradient Step and Proximal denoisers for convergent Plug-and-Play image restoration

Samuel Hurault, Arthur Leclaire, Nicolas Papadakis

University of Bordeaux, France

Plug-and-Play (PnP) methods constitute a class of iterative algorithms for imaging problems in which regularization is performed by an off-the-shelf denoiser. Specifically, given an image dataset, optimizing a function (e.g. a neural network) to remove Gaussian noise is equivalent to approximating the gradient or the proximal operator of the log-prior of the training dataset. Therefore, any off-the-shelf denoiser can be used as an implicit prior and inserted into an optimization scheme to restore images. The PnP and Regularization by Denoising (RED) frameworks provide a basis for this approach, for which various convergence analyses have been proposed in the literature. However, most existing results require either unverifiable or suboptimal hypotheses on the denoiser, or assume restrictive conditions on the parameters of the inverse problem. We will introduce the Gradient Step and Proximal denoisers, and their variants, recently proposed so that RED and PnP algorithms become genuine (nonconvex) proximal splitting algorithms. These new algorithms are shown to converge towards stationary points of an explicit functional and to deliver state-of-the-art image restoration, both quantitatively and qualitatively.
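Schematically, the PnP proximal gradient iteration discussed here takes the following form, where grad_f is the gradient of the data-fidelity term and denoiser is an off-the-shelf denoiser; the sketch and its names are illustrative, not the authors' code.

```python
def pnp_pgd(x0, grad_f, denoiser, tau=1.0, n_iter=100):
    # Plug-and-Play proximal gradient: explicit step on the data-fidelity term f,
    # then the denoiser replaces the proximal operator of the implicit regularizer.
    # For a gradient-step denoiser D = Id - grad g, the iterates target a
    # stationary point of the explicit functional f + g.
    x = x0
    for _ in range(n_iter):
        x = denoiser(x - tau * grad_f(x))
    return x
```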
 
4:00pm - 6:00pmMS26 3: Trends and open problems in cryo electron microscopy
Location: VG3.102
Session Chair: Carlos Esteve-Yague
Session Chair: Johannes Schwab
 

Stochastic optimization for high-resolution refinement in cryo-EM

Bogdan Toader1, Marcus A. Brubaker2, Roy R. Lederman1

1Yale University, United States of America; 2York University, Canada

Cryo-EM reconstruction with traditional iterative algorithms is often split into two separate stages: ab initio reconstruction, where initial estimates of the volume and the pose variables are obtained, and high-resolution refinement, where a high-resolution volume is obtained using local optimization. While state-of-the-art results for high-resolution refinement are obtained using the Expectation-Maximization algorithm, this requires marginalization over the pose variables of all the 2D particle images at each iteration. In contrast, ab initio reconstruction is often performed using a variant of the stochastic gradient descent algorithm, which uses only a subset of the data at each iteration. In this talk, we present an approach that has the potential to enable the use of stochastic optimization algorithms for high-resolution refinement with improved running time. We present an analysis related to the conditioning of the problem that motivates our approach, and we show preliminary numerical results.


Fast Principal Component Analysis for Cryo-EM Images

Nicholas Marshall1, Oscar Mickelin2, Yunpeng Shi2, Amit Singer2

1Oregon State University, United States of America; 2Princeton University, United States of America

Principal component analysis (PCA) plays an important role in the analysis of cryo-EM images for various tasks such as classification, denoising, compression, and ab-initio modeling. We introduce a fast method for estimating a compressed representation of the 2-D covariance matrix of noisy cryo-electron microscopy projection images that enables fast PCA computation. Our method is based on a new algorithm for expanding images in the Fourier-Bessel basis (the harmonics on the disk), which provides a convenient way to handle the effect of the contrast transfer functions. For $N$ images of size $L$ by $L$, our method has much lower time and space complexities compared to the previous work. We demonstrate our approach on synthetic and experimental data and show acceleration by factors of up to two orders of magnitude.


Reconstructing Molecular Flexibility in Cryogenic Electron Microscopy

Johannes Schwab, Dari Kimanius, Sjors Scheres

MRC-Laboratory of Molecular Biology, United Kingdom

Cryogenic electron microscopy (cryo-EM) is a powerful technique for obtaining the 3D structure of macromolecules from thousands of noisy projection images. Since these macromolecules are flexible by nature, the areas where a protein moves exhibit a local drop in resolution in the reconstruction. We propose a method, named DynaMight, that represents the molecule with Gaussian basis functions and estimates a deformation field for every experimental image. We then use the estimated deformations to better resolve the flexible regions in the reconstruction, using a filtered backprojection algorithm along curved lines. We present results on real data showing that we obtain improved 3D reconstructions.
 
4:00pm - 6:00pmMS30 3: Inverse Problems on Graphs and Machine Learning
Location: VG2.103
Session Chair: Emilia Lavie Kyllikki Blåsten
Session Chair: Matti Lassas
Session Chair: Jinpeng Lu
 

Learned Solvers for Forward and Backward Image Flow Schemes

Simon Robert Arridge1, Andreas Selmar Hauptmann2, Giuseppe di Sciacca1, Wiryawan Mehanda3

1University College London, United Kingdom; 2University of Oulu, Finland; 3Improbable, United Kingdom

It is increasingly recognised that there is a close relationship between some network architectures and iterative solvers for partial differential equations. In this talk we present a network architecture for forward and inverse problems in non-linear diffusion. By design the architecture is non-linear, learning an anisotropic diffusivity function for each layer from the output of the previous layer. The updates are explicit, which gives better interpretability and generalisability than classical architectures. Since backward diffusion is unstable, a regularisation is implicitly learned to stabilise this process. We test the approach on synthetic image data sets that have undergone edge-preserving diffusion and on experimental data of images viewed through variable-density scattering media.
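To illustrate the layer/solver correspondence, here is a single explicit nonlinear-diffusion update of the kind a layer computes; the learned architecture would replace the fixed Perona-Malik diffusivity used below with a per-layer learned diffusivity (the sketch is illustrative, not the authors' network).

```python
import numpy as np

def diffusion_layer(u, k=0.1, tau=0.2):
    # one explicit update u <- u + tau * div(g(|grad u|) * grad u), with the
    # classical Perona-Malik diffusivity g(s) = 1 / (1 + s^2 / k^2)
    ux = np.diff(u, axis=1, append=u[:, -1:])   # forward differences,
    uy = np.diff(u, axis=0, append=u[-1:, :])   # Neumann boundary
    g = 1.0 / (1.0 + (ux**2 + uy**2) / k**2)
    fx, fy = g * ux, g * uy
    # divergence via backward differences (zero flux outside the domain)
    div = np.diff(fx, axis=1, prepend=0.0) + np.diff(fy, axis=0, prepend=0.0)
    return u + tau * div
```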
 
4:00pm - 6:00pmMS36 2: Advances in limited-data X-ray tomography
Location: VG3.101
Session Chair: Jakob Sauer Jørgensen
Session Chair: Samuli Siltanen
 

Approaches to the Helsinki Tomography Challenge 2022

Clemens Arndt, Alexander Denker, Sören Dittmer, Johannes Leuschner, Judith Nickel

ZeTeM (Universität Bremen), Germany

In 2022, the Finnish Inverse Problems Society organized the Helsinki Tomography Challenge (HTC 2022), with the aim of reconstructing an image from limited-angle measurements only. For this challenge, we implemented two methods: an Edge Inpainting method and a Learned Primal-Dual (LPD) reconstruction. The Edge Inpainting method consists of several successive steps: a classical reconstruction using Perona-Malik diffusion, extraction of the visible edges, inpainting of the invisible edges using a U-Net, and a final segmentation using another U-Net. The LPD approach adapts the classical method in two ways, replacing the adjoint with a generalized inverse (FBP) and using large U-Nets in the primal update. To train the networks, we generated a synthetic dataset, since only five samples were provided in the challenge. The challenge results showed that the Edge Inpainting method was competitive for viewing ranges down to 70 degrees, whereas the LPD approach performed well on all viewing ranges of the challenge and scored second best.
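A structural sketch of the modified Learned Primal-Dual loop described above, with the adjoint replaced by an FBP-type generalized inverse; primal_net and dual_net stand for the trained U-Nets, and none of this is the submitted code.

```python
def learned_primal_dual(y, A, fbp, primal_net, dual_net, n_iter=10):
    # y: limited-angle sinogram; A: forward projector; fbp: generalized inverse.
    x = fbp(y)                      # initial image estimate
    h = 0.0 * y                     # dual variable in measurement space
    for _ in range(n_iter):
        h = dual_net(h, A(x), y)    # learned dual update
        x = primal_net(x, fbp(h))   # learned primal update (FBP, not adjoint)
    return x
```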


Directional regularization with the Core Imaging Library for limited-angle CT in the Helsinki Tomography Challenge 2022

Edoardo Pasca1, Jakob Jørgensen2, Evangelos Papoutsellis1,3, Laura Murgatroyd1, Gemma Fardell1

1Science and Technology Facilities Council, United Kingdom; 2Technical University of Denmark; 3Finden Ltd

The Core Imaging Library (CIL) is an open-source software package for Computed Tomography (CT) and other inverse problems. It provides processing algorithms for CT data and tools to express optimisation problems in near-math syntax. Last year the Finnish Inverse Problems Society organized the "Helsinki Tomography Challenge 2022" (HTC2022), an open competition for researchers to submit reconstruction algorithms for a challenging series of real-data limited-angle computed tomography problems. HTC2022 provided the perfect grounds to test the capabilities of CIL in limited-angle CT.

The algorithm we submitted consists of multiple stages: first, pre-processing, including beam-hardening correction and data normalization; second, a purpose-built directional regularization method exploiting prior knowledge of the scanned object; and finally, a multi-Otsu segmentation method. The algorithm was fully implemented using the optimization prototyping capabilities of CIL, and its performance was assessed and tuned on the provided training data ahead of submission. The algorithm performed well on limited-angle data down to an angular range of 50 degrees, and in the competition it was beaten only by two machine-learning-based strategies involving the generation of very large sets of synthetic training data.

In the spirit of open science, all the data sets are available from the challenge website, https://fips.fi/HTC2022.php, and the submitted algorithm code from https://github.com/TomographicImaging/CIL-HTC2022-Algo2.


VAEs with structured image covariance as priors to inverse imaging problems

Margaret Duff

STFC - UKRI, Scientific Computing, UK

This talk explores how generative models, trained on ground-truth images, can be used as priors for inverse problems, penalizing reconstructions far from images the generator can produce. We utilize variational autoencoders that generate not only an image but also a covariance uncertainty matrix for each image. The covariance can model changing uncertainty dependencies caused by structure in the image, such as edges or objects, and provides a new distance metric from the manifold of learned images. We evaluate these novel generative regularizers on retrospectively sub-sampled real-valued MRI measurements from the fastMRI dataset.

Authors: Margaret A G Duff (Science and Technology Facilities Council (UKRI)) Ivor J A Simpson (University of Sussex) Matthias J Ehrhardt (University of Bath) Neill D F Campbell (University of Bath)
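Schematically (with notation chosen here for exposition, not taken from the talk): the decoder returns a mean image $\mu_\theta(z)$ together with a structured covariance $\Sigma_\theta(z)$, and the learned regularizer penalizes a reconstruction $u$ via a Mahalanobis-type distance $\min_z (u-\mu_\theta(z))^\top \Sigma_\theta(z)^{-1}(u-\mu_\theta(z))$, so that deviations along directions of high modeled uncertainty (e.g. at edges) are penalized less than deviations the generator deems unlikely.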



Limited-Angle Tomography Reconstruction via Deep Learning on Synthetic Data

Thomas Germer1, Jan Robine2, Sebastian Konietzny2, Stefan Harmeling2, Tobias Uelwer2

1Heinrich Heine University Düsseldorf, Germany; 2Technical University of Dortmund, Germany

Computed tomography (CT) has become an essential part of modern science. A CT scanner consists of an X-ray source that is spun around an object of interest; opposite the source, a detector captures the X-rays that are not absorbed by the object. The reconstruction of an image is a linear inverse problem, usually solved by the filtered back projection algorithm. However, when the number of measurements is too small, the reconstruction problem is highly ill-posed. This is the case, for example, when the X-ray source is not spun completely around the object but irradiates it only from a limited angle. To tackle this problem, we present a deep neural network that performs limited-angle tomography reconstruction. The model is trained on a large amount of carefully crafted synthetic data. Our approach won first place in the Helsinki Tomography Challenge 2022.
 
4:00pm - 6:00pmMS37 2: Passive imaging in terrestrial and extra-terrestrial seismology
Location: VG1.102
Session Chair: Florian Faucher
Session Chair: Damien Fournier
 

Reduced order model approach for active and passive imaging with waves

Liliana Borcea1, Josselin Garnier2, Alexander Mamonov3, Jorn Zimmerling4

1University of Michigan, USA; 2Ecole polytechnique, France; 3University of Houston, USA; 4Uppsala University, Sweden

We consider the velocity estimation problem for the scalar wave equation using the array response matrix of a set of sensors. In the active configuration, the sensors probe the unknown medium to be imaged with a pulse and measure the backscattered waves, which directly gives the array response matrix. In the passive configuration, the sensors are passive receivers that record the signals transmitted by unknown ambient-noise or opportunistic sources, and the array response matrix can be obtained by cross-correlating the recorded signals. Under such circumstances, conventional Full Waveform Inversion (FWI) is carried out by nonlinear least-squares fitting of the array response matrix. However, the FWI misfit function is non-convex over a high-dimensional search space and has many local minima. A novel approach to FWI based on a data-driven reduced order model (ROM) of the wave equation operator is introduced, and it is shown that minimizing the ROM misfit function performs much better.


Three-dimensional random wave coupling along a boundary with scaling representative of Mars' crust, and an associated inverse problem

Maarten Valentijn de Hoop, Josselin Garnier, Knut Solna

Rice University, United States of America

We consider random wave coupling along a flat boundary in dimension three, where the coupling is between surface and body modes and is induced by scattering by a randomly heterogeneous medium. In an appropriate anisotropic scaling regime we obtain a system of radiative transfer equations satisfied by the mean Wigner transform of the mode amplitudes. Interestingly, seismograms recently acquired with SEIS on Mars (InSight mission) show a behavior that fits the hypotheses of our analysis regarding the properties of the Martian crust. We provide a rigorous probabilistic framework for describing solutions to this system, using the fact that it has the form of a Kolmogorov equation for a Markov process. We then prove statistical stability of the smoothed Wigner transform under the Gaussian approximation. We conclude by analyzing the nonlinear inverse problem for the radiative transfer equations and establish the unique recovery of phase and group velocities, as well as power spectral information for the medium fluctuations, from the observed smoothed Wigner transform.


Frequency-Difference Backprojection of Earthquakes

Jing Ci Neo1, Wenyuan Fan2, Yihe Huang1, David R. Dowling1

1University of Michigan, Ann Arbor, USA; 2Scripps Institution of Oceanography, USA

Backprojection has proven useful in imaging large earthquake rupture processes. The method is generally robust and requires relatively simple assumptions about the fault geometry or the Earth velocity model. It can be applied in both the time and the frequency domain. Backprojection images are often obtained from records filtered in a narrow frequency band, limiting their ability to uncover the whole rupture process. Here, we develop and apply a novel frequency-difference backprojection (FDBP) technique to image large earthquakes, which exploits effective frequencies below the band of the recorded signal. The new approach originates from frequency-difference beamforming, which was initially designed to locate acoustic sources. Our method stacks the phase differences of frequency pairs, given by the autoproduct, and is less affected by scattering and travel-time errors from 3-D Earth structures. It can potentially locate sources more accurately, albeit with lower resolution. We validated the FDBP algorithm with synthetic tests and benchmarked it against conventional backprojection. We successfully applied the method to the 2015 M7.8 Gorkha earthquake and tested two stacking approaches, the Band-Width Averaged Autoproduct and its counterpart (BWAP and non-BWAP). The FDBP method shows promise in resolving complex earthquake rupture processes in tectonically complex regions.
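The quadratic data combination at the heart of the method can be stated schematically: from a recorded spectrum $P(\omega)$, the frequency-difference autoproduct $AP(\omega_1,\omega_2)=P(\omega_1)\,P^*(\omega_2)$ carries phase at the difference frequency $\Delta\omega=\omega_1-\omega_2$, which may lie below the recorded band; FDBP backprojects and stacks these phase differences over many in-band frequency pairs, and the BWAP/non-BWAP variants differ in how the pairs are averaged over the available bandwidth.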


Quantitative passive imaging in helioseismology

Björn Müller

MPI für Sonnensystemforschung, Germany

In helioseismology one studies cross-correlations of line-of-sight velocities at the solar surface in order to invert for parameters in the solar interior. In the frequency domain the cross-correlation data takes the form $C(\pmb{r_1}, \pmb{r_2}, \omega)=\psi(\pmb{r_1}, \omega)^* \psi(\pmb{r_2}, \omega)$ with frequency $\omega$ and two points $\pmb{r_1}, \pmb{r_2}$ on the solar surface. This data set is of immense size and unfeasible to store, so that an a priori averaging in space and frequency is needed. Helioseismic holography is a physically motivated averaging scheme based on the backpropagation of surface fluctuations [1]. In this talk we show that the traditional holograms can be understood as the first step of an iterative inversion procedure [2]. In this way we extend traditional helioseismic holography to a fully quantitative regularization method, which has two main advantages over traditional helioseismic inversions: by changing the order of backpropagation and local correlation, we use the whole cross-correlation data set implicitly, avoiding its explicit computation; furthermore, the iterative setup allows us to tackle nonlinear problems, which have only rarely been studied in helioseismology. We validate iterative helioseismic holography on synthetics of large-scale axisymmetric flows such as solar differential rotation and meridional flows. Finally, we show some interesting future applications of iterative helioseismic holography that are so far out of reach for traditional helioseismology.

[1] C. Lindsey, D. Braun. Helioseismic Holography, Astrophysical Journal 485(2), p.895-903, 1997. doi:10.1086/304445

[2] T. Hohage, H. Raumer, C. Spehr. Uniqueness of an inverse source problem in experimental aeroacoustics, Inverse Problems 36(7), 2020. doi:10.1088/1361-6420/ab8484
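Schematically, in Lindsey-Braun holography the observed surface field is back-propagated to a target point $\pmb{x}$ in the interior through a pupil $A$, $\phi_\pm(\pmb{x},\omega)=\int_A G_\pm(\pmb{x},\pmb{r},\omega)\,\psi(\pmb{r},\omega)\,d^2\pmb{r}$ (ingression/egression), and holograms are formed by locally correlating such back-propagated fields [1]; the iterative method described above repeats this backpropagation step within a regularized inversion, so that each iteration consumes the cross-correlations only implicitly [2]. The notation here is schematic.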
 
4:00pm - 6:00pmMS46 1: Inverse problems for nonlinear equations
Location: VG1.104
Session Chair: Lauri Oksanen
Session Chair: Teemu Kristian Tyni
 

Weakly nonlinear geometric optics and inverse problems for hyperbolic nonlinear PDEs

Plamen Stefanov

Purdue University, United States of America

We review recent results by the presenter, Antônio Sá Barreto, and Nikolas Eptaminitakis on inverse problems for the semilinear wave equation and the quasilinear Westervelt wave equation modeling nonlinear acoustics. We study them in a regime in which the solutions are not "small" enough to allow linearization; the nonlinear effects are strong and correspond to the observed physical effects. We show that a propagating high-frequency pulse recovers the nonlinearity uniquely by recovering its X-ray transform, and we will show numerical simulations.


Identification of nonlinear effects in X-ray tomography

Yiran Wang

Emory University, United States of America

Due to beam-hardening effects, metal objects in X-ray CT often produce streaking artefacts which degrade the image reconstruction. It is known that the underlying phenomenon is nonlinear. An outstanding inverse problem is to identify the nonlinearity, which is crucial for the reduction of the artefacts. In this talk, we show how to use microlocal techniques to extract information about the nonlinearity from the artefacts. An interesting aspect of our analysis is the connection between the artefacts and the geometry of the metal objects.


Inverse problems for non-linear hyperbolic equations and many-to-one scattering relations

Matti Lassas

University of Helsinki, Finland

In the talk we give an overview of inverse problems for Lorentzian manifolds. We also discuss how inverse problems for partial differential equations can be solved using the non-linear interaction of solutions. We concentrate on the geometric tools used to solve these problems, for instance the $k$-to-1 scattering relation associated with the $k$-th order interactions, and the observation time functions on Lorentzian manifolds.



Inverse problems for nonlinear elliptic PDE

Katya Krupchyk

University of California, Irvine, United States of America

We shall discuss some recent progress on inverse boundary problems for nonlinear elliptic PDEs. Our focus will be on inverse problems for isotropic quasilinear conductivity equations, as well as nonlinear Schrodinger and magnetic Schrodinger equations. In particular, we shall see that the presence of a nonlinearity may actually help, allowing one to solve inverse problems in situations where the corresponding linear counterpart is open. This talk is based on joint works with Catalin Carstea, Ali Feizmohammadi, Yavar Kian, and Gunther Uhlmann.
 
4:00pm - 6:00pmMS48: Robustness and reliability of Deep Learning for noisy medical imaging
Location: VG2.104
Session Chair: Alessandro Benfenati
Session Chair: Elena Morotti
 

The graphLa+ method: a dynamic regularization based on the graph Laplacian

Davide Bianchi

Harbin Institute of Technology (Shenzhen), China, People's Republic of

We investigate a Tikhonov method that embeds a graph Laplacian operator in the penalty term (graphLa+). The novelty lies in building the graph Laplacian from a first approximation of the solution obtained by any other reconstruction method. Consequently, the penalty term becomes dynamic, depending on and adapting to the observed data and noise. We demonstrate that graphLa+ is a regularization method and rigorously establish both its convergence and stability properties. Moreover, we present selected numerical experiments in 2D computerized tomography, where we combine the graphLa+ method with several reconstructors: Filtered Back Projection (graphLa+FBP), standard Tikhonov (graphLa+Tik), Total Variation (graphLa+TV) and a trained deep neural network (graphLa+Net). The increase in quality of the approximate solutions granted by the graphLa+ approach is outstanding for each given method. In particular, graphLa+Net outperforms any other method, presenting a robust and stable way of using deep neural networks in applications involving inverse problems.
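A minimal sketch of the pipeline for a linear system $Ax = b$ with a sparse forward operator: build a graph Laplacian from a first reconstruction, then solve the graph-Tikhonov normal equations. Weights, parameters and names are illustrative choices, not the authors' implementation.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def graph_laplacian(u, sigma=0.05):
    # pixel-grid graph of a first reconstruction u (2-D array), Gaussian weights
    # on 4-neighbour intensity differences: smooth regions couple, edges decouple
    m, n = u.shape
    idx = np.arange(m * n).reshape(m, n)
    rows, cols, wts = [], [], []
    for di, dj in ((0, 1), (1, 0)):          # right and down neighbours
        a = idx[:m - di, :n - dj].ravel()
        b = idx[di:, dj:].ravel()
        w = np.exp(-((u[:m - di, :n - dj] - u[di:, dj:]).ravel() / sigma) ** 2)
        rows += [a, b]                       # symmetric adjacency
        cols += [b, a]
        wts += [w, w]
    W = sp.csr_matrix((np.concatenate(wts), (np.concatenate(rows), np.concatenate(cols))),
                      shape=(m * n, m * n))
    return sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W

def graphla_plus(A, b, u1, lam=1e-2):
    # graphLa+ sketch: min_x ||A x - b||^2 + lam * x^T L x, with L built from a
    # first reconstruction u1 obtained by any method (FBP, TV, a network, ...)
    L = graph_laplacian(u1)
    x, _ = cg(A.T @ A + lam * L, A.T @ b)
    return x.reshape(u1.shape)
```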



Investigating the human body by light: the challenge of problem inversion

Paola Causin1, Alessandro Benfenati2

1Department of Mathematics, University of Milano, Italy; 2Department of Environmental Science and Policy, University of Milano, Italy

In the past decades, the use of Computerized Tomography (CT) has increased dramatically owing to its excellent diagnostic performance, easy accessibility, short scanning time, and cost-effectiveness. Enabling CT technologies with a reduced or null radiation dose while preserving or enhancing the diagnostic quality is a key challenge in modern medical imaging. Increased noise levels are, however, an expected downside of all these new technologies.

In this series of two successive talks we will report on our research on Diffuse Optical Tomography (DOT), a CT technology that uses near-infrared (NIR) light as the probing signal [1]. Strong light scattering in biological tissues makes the DOT reconstruction problem severely ill-conditioned, so that denoising is a crucial step. In the present talk, after a brief description of the DOT modality, we will first present our results on variational approaches, based on partial differential equation models endowed with different regularizers, to compute a stable DOT-CT reconstruction [2,3]. Then, we will discuss our recent research on the use of DL-based generative models to produce more effective soft priors which, used in combination with standard or DL-based forward problem solvers, improve spatial resolution in high-contrast zones and reduce noise in low-contrast zones, as is typical of medical imaging.

[1] S.R. Arridge. Optical tomography in medical imaging, Inverse problems 15(2): R41, 1999.

[2] P. Causin, M.G. Lupieri, G. Naldi, R.M. Weishaeupl. Mathematical and numerical challenges in optical screening of female breast, Int. J. Num. Meth. Biomed. Eng. 36(2): e3286, 2020.

[3] A. Benfenati, P. Causin, M.G. Lupieri, G. Naldi. Regularization techniques for inverse problem in DOT applications. In Journal of Physics: Conference Series (IOP Publishing) 1476(1): 012007, 2020.



Investigating the Human Body by Light: Neural Networks for Data-Driven and Physics-Driven Approaches

Alessandro Benfenati1, Paola Causin2

1Department of Environmental Science and Policy, Università degli Studi di Milano La Statale; 2Department of Mathematics, Università degli Studi di Milano La Statale

Diffuse Optical Tomography is a medical imaging technique for the functional monitoring of body tissues. Unlike other CT technologies (e.g., X-ray CT), DOT employs a non-ionizing light signal and can therefore be used for repeated screenings [1]. DOT reconstruction in the continuous-wave (CW) modality leads to an inverse problem for the unknown distribution of the optical absorption coefficient inside the tissue, which has diagnostic relevance.

The classic approach consists in solving an optimization problem involving a fit-to-data functional (usually the least-squares functional) coupled with a regularization term (e.g., $l^1$, Tikhonov, elastic net [2]). In this talk, we report on our research adopting a deep learning approach that exploits both data-driven and hybrid physics-driven techniques. In the first case, we employ neural networks to construct a Learned Singular Value Decomposition [3], whilst in the second case the network architecture is built upon a priori knowledge of the physical phenomena. We will present numerical results obtained on synthetic datasets which show robustness even on noisy data.

[1] S. R. Arridge, J. C. Schotland. Optical tomography: forward and inverse problems, Inverse problems 25(12): 123010, 2009.

[2] A. Benfenati, P. Causin, M. G. Lupieri, G. Naldi. Regularization techniques for inverse problem in DOT applications, Journal of Physics: Conference Series (IOP Publishing) 1476(1): 012007, 2020.

[3] A. Benfenati, G. Bisazza, P. Causin. A Learned SVD approach for Inverse Problem Regularization in Diffuse Optical Tomography, 2021. [arXiv preprint arXiv:2111.13401]


Medical image reconstruction in realistic scenarios: what to do if the ground-truth is missing?

Davide Evangelista, Elena Morotti, Elena Loli Piccolomini

University of Bologna, Italy

Deep learning algorithms have recently emerged as the state of the art in solving inverse problems, overcoming classical variational methods in terms of both accuracy and efficiency. However, most deep learning algorithms require supervised training, which necessitates a collection of matched low-quality and ground-truth data. This poses a significant challenge in medical imaging, as obtaining such a dataset would require subjecting the patient to approximately double the amount of radiation. As a result, it is common to simulate the degradation process mathematically, which can introduce biases that degrade model performance when tested on real data. To address this issue, we propose a general self-supervised procedure for training neural networks in a setting where the ground truth is missing but the mathematical model is approximately known. We demonstrate that the proposed method produces results of a quality comparable to supervised techniques while being more robust to perturbations, and we will provide a formal proof of its effectiveness.
 
4:00pm - 6:00pmMS51 2: Analysis, numerical computation, and uncertainty quantification for stochastic PDE-based inverse problems
Location: VG1.108
Session Chair: Mirza Karamehmedovic
Session Chair: Faouzi Triki
 

Spectral properties of radiation for the Helmholtz equation with a random coefficient

Mirza Karamehmedovic, Kristoffer Linder-Steinlein

Technical University of Denmark, Denmark

For the Helmholtz equation with a Gaussian random field coefficient, we approximate the source-to-measurement map and characterize it spectrally. To this end, we first analyze the case of a deterministic coefficient, where we discover and quantify a 'spectral leakage' effect. We compare the theoretically predicted forward operator spectrum with a Finite Element Method computation. Our results are applicable to the analysis of the robustness of solutions of inverse source problems in the presence of deterministic and random media.


Optimization under uncertainty for the Helmholtz equation with application to photonic nanojets configuration

Amal Alghamdi1, Peng Chen2, Mirza Karamehmedovic1

1Technical University of Denmark (DTU), Denmark; 2Georgia Institute of Technology, USA

Photonic nanojets (PNJs) have promising applications as optical probes in super-resolution optical microscopy, Raman microscopy, and fluorescence microscopy. In this work, we consider the optimal design of PNJs using a heterogeneous lens refractive index, with fixed lens geometry and uniform plane-wave illumination. In particular, we account for the manufacturing error of the heterogeneous lens and propose a computational framework of Optimization Under Uncertainty (OUU) for robust optimal design of PNJs. We formulate a risk-averse stochastic optimization problem with the objective of minimizing both the mean and the variance of a target function, constrained by the Helmholtz equation that governs the 2D transverse electric (2D TE) electromagnetic field in a neighborhood of the lens. The design variable is a spatially-varying field, which we discretize with a finite element method; we impose a total variation penalty to promote sparsity and employ an adjoint-based BFGS method to solve the resulting high-dimensional optimization problem. We demonstrate that our proposed OUU computational framework can achieve a more robust optimal design than a deterministic optimization scheme, significantly mitigating the impact of manufacturing uncertainty.
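The risk-averse formulation can be stated schematically (with notation introduced here) as $\min_{z}\, \mathbb{E}_{\xi}[Q(z,\xi)] + \beta\, \mathrm{Var}_{\xi}[Q(z,\xi)] + \alpha\, \mathrm{TV}(z)$ subject to the Helmholtz equation constraint, where $z$ is the spatially-varying refractive-index design, $\xi$ models the manufacturing error, $Q$ is the focusing target functional, and $\beta$ sets the risk aversion; a deterministic design corresponds to $\beta = 0$ and a point mass for $\xi$.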


Posterior consistency for Bayesian inverse Problems with piecewise constant inclusions

Babak Maboudi Afkham1, Kim Knudsen1, Aksel Rasmussen1, Tanja Tarvainen2

1Technical University of Denmark, Denmark; 2University of Eastern Finland

In Bayesian inverse problems the aim is to recover the posterior distribution for the quantity of interest. This distribution is given in terms of the prior distribution, modeling a priori knowledge, and the likelihood distribution, modeling the noise. In many applications a single estimator, e.g., the posterior mean, is desired and reported; however, it is crucial for the fundamental understanding that this estimator is consistent, meaning that it converges in probability to the ground truth as the noise level tends to zero.

In this talk we will explore these fundamental questions and see how consistency is indeed possible in the case of PDE-driven problems such as Photo-Acoustic Tomography with parametrized inclusions.



On uncertainty quantification for nonlinear inverse problems

Kui Ren

Columbia University, United States of America

We study some uncertainty quantification problems in nonlinear inverse coefficient problems for PDEs. We are interested in characterizing the impact of unknown parameters in the PDE models on the reconstructed coefficients. We argue that, unlike in forward problems, uncertainty propagation in inverse problems is influenced by both the forward model and the inversion method used in the reconstruction. For ill-conditioned problems, errors in the reconstruction can sometimes dominate the uncertainty caused by the unknown model parameters. Based on these observations, we propose methods that quantify uncertainties more accurately than a generic method by compensating for the errors due to the reconstruction algorithms.
 

 