Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

 
 
Session Overview
Date: Wednesday, 06/Sept/2023
9:00am - 11:00am  CT01: Contributed talks
Location: VG2.104
Session Chair: Philipp Ronald Mickan
 

Some Inverse Problems for Parabolic Equations

Mikhail Klibanov

University of North Carolina at Charlotte, United States of America

Two types of new results of the presenter will be discussed:

1. Hölder and Lipschitz stability estimates for a coefficient inverse problem and an inverse source problem with final overdetermination [1]. The solution of the parabolic equation is known at $t=0$ and $t=T$. Both Dirichlet and Neumann boundary conditions are known either on a part of the boundary or on the entire boundary. A new Carleman estimate for the parabolic operator is the key here. Unlike in standard Carleman estimates, the Carleman Weight Function in this one is independent of $t$. The Hölder stability estimate holds in the case of incomplete boundary data, the Lipschitz stability estimate in the case of complete boundary data. Both the results and the methodology differ significantly from previous ones.

2. Stability estimates and uniqueness theorems for some inverse problems for the Mean Field Games system [2]. These results are also new. The Mean Field Games system is a system of two parabolic equations, which was originally proposed by J.-M. Lasry and P.-L. Lions in 2006-2007 and has become quite popular due to a number of very exciting applications. The main challenge here is that the time $t$ runs in two opposite directions in these equations. Therefore, the Volterra-like property of conventional systems of parabolic PDEs does not hold here.
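
For orientation, the structure can be sketched as follows (our notation, not necessarily that of [2]): the value function $u$ solves a backward Hamilton-Jacobi-Bellman equation, coupled to a forward Fokker-Planck equation for the density $m$ of agents, \begin{equation*} \left\lbrace \begin{array}{rcl} -\partial_t u - \nu \Delta u + H(x,\nabla u) & = & F(x,m), \\ \partial_t m - \nu \Delta m - \nabla \cdot \left( m\, \nabla_p H(x,\nabla u) \right) & = & 0, \end{array} \right. \end{equation*} with $u$ prescribed at the terminal time $t=T$ and $m$ at the initial time $t=0$. The opposite time orientations of the two equations are precisely what destroys the Volterra-like structure mentioned above.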

[1] M. V. Klibanov. Stability estimates for some parabolic inverse problems with the final overdetermination via a new Carleman estimate. arXiv:2301.09735, 2023.

[2] M. V. Klibanov, Yu. V. Averboukh. Stability and uniqueness of two inverse problems for the Mean Field Games system, in preparation.


Inverse problems for hyperbolic conservation laws

Duc-Lam Duong

LUT University, Finland

Hyperbolic conservation laws are central to the theory of PDEs. One of their typical features is the development of shock waves. This poses many challenges to the mathematical theory of both forward and inverse problems. It is well known that two different initial data may evolve into the same solution. In this talk, we will present a number of ways to overcome this difficulty, with emphasis on the Bayesian approach, and survey some recent results.


X-ray holographic imaging using intensity correlations

Milad Karimi, Thorsten Hohage

Georg-August-Universität Göttingen, Germany

Holographic coherent x-ray imaging enables nanoscale imaging of biological cells and tissue, rendering both phase and absorption contrast, i.e. the real and imaginary parts of the refractive index. A main challenge of this imaging technique is radiation damage. We present a different modality of this imaging technique using a partially incoherent incident beam and time-resolved intensity measurements based on new measurement technologies. This enables the acquisition of intensity correlations in addition to the commonly used expectations of intensities. In this talk we explore the information content of holographic intensity correlation data, showing analytically that in the linearized model both phase and absorption contrast can be uniquely determined from the intensity correlation data. The uniqueness theorem is derived using multi-dimensional Kramers-Kronig relations. We also deduce a uniqueness theorem for ghost holography as an unconventional X-ray imaging scheme.

For regularized reconstruction it is important to take into account the statistical distribution of the correlation data. The measured intensity data are described by a so-called Cox process, roughly speaking a Poisson process with random intensity. For medium-sized data sets, we use adaptations of the iteratively regularized Gauss-Newton method and of the FISTA method as reconstruction methods. Our numerical results, even in the fully nonlinear model, confirm that phase and absorption contrast can be reconstructed jointly from intensity correlations alone, without the use of average intensities. Although these results are encouraging concerning the information content of the new intensity correlation data, the increased dimensionality of these data causes severe computational challenges.
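
To fix ideas, a minimal generic FISTA sketch for a linear model with an $\ell^1$ penalty is given below (illustration only: the actual reconstruction adapts this scheme to the Cox-process statistics of the correlation data and to the nonlinear forward model, neither of which is reproduced here).

```python
import numpy as np

def soft_threshold(z, t):
    # proximal map of t * ||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fista(A, y, lam, n_iter=200):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by accelerated proximal gradient."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(z - A.T @ (A @ z - y) / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov extrapolation
        x, t = x_new, t_new
    return x
```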
 
9:00am - 11:00am  CT02: Contributed talks
Location: VG2.105
Session Chair: Roman Novikov
 

Lipschitz Stability of Recovering the Conductivity from Internal Current Densities

Lingyun Qiu1,2, Siqin Zheng1

1Yau Mathematical Sciences Center, Tsinghua University, Beijing, People's Republic of China; 2Yanqi Lake Beijing Institute of Mathematical Sciences and Applications, Beijing, People's Republic of China

We investigate the inverse problem of reconstructing the electrical conductivity of an object in hybrid imaging methods. These techniques have been developed in recent years to produce clearer images than those produced by electrical impedance tomography. We focus on the inverse problem arising in the quantitative step of many hybrid imaging techniques. The problem is formulated as recovering the isotropic conductivity of an object given internal current densities generated by applying different boundary conditions to the electrostatic equation. We present two specific examples of these techniques, current density impedance imaging and magneto-acousto-electric tomography, to illustrate the different boundary conditions that can be used. We provide a local Lipschitz stability result for the general inverse problem in both the full and partial data cases.


Geometric regularization in three-dimensional inverse obstacle scattering

Jannik Rönsch1, Henrik Schumacher2, Max Wardetzky1, Thorsten Hohage1

1Georg-August-Universität Göttingen, Germany; 2Technische Universität Chemnitz, Germany

We study the classical inverse problem of determining the shape of a three-dimensional scattering obstacle from measurements of scattered waves or their far-field patterns. Previous research on this subject has mostly assumed the object to be star-shaped and imposed a Sobolev penalty on the radial function, or has defined the penalty term in some other ad hoc manner which is not invariant under coordinate transformations.

For the case of curves in $\mathbb{R}^2$, reference [1] suggests using the bending energy as regularisation functional and proposes Tikhonov regularisation and regularised Newton methods on a shape manifold. The case of surfaces in $\mathbb{R}^3$ is considerably more demanding. First, a suitable space (manifold) of shapes is not obvious. The second problem is to find a stabilising functional for generalised Tikhonov regularisation, which on the one hand should be bending-sensitive and on the other hand should prevent the surface from self-intersecting during the reconstruction.

The tangent-point energy is a parametrization-invariant and repulsive surface energy, constructed as a double integral over a power of the tangent-point radius with respect to two points on the surface, i.e. the smallest radius of a sphere tangent to the first point and passing through the other. Finiteness of this energy also provides $C^{1,\alpha}$ Hölder regularity of the surfaces [2]. Using this energy as the stabilising functional, we work with general surfaces of Sobolev-Slobodeckij regularity, which are naturally connected to this energy.

The proposed approach works for surfaces of arbitrary (known) topology. In numerical examples we demonstrate the flexibility of our approach in handling rather general shapes.

[1] J Eckhardt, R Hiptmair, T Hohage, H Schumacher, M Wardetzky. Elastic energy regularization for inverse obstacle scattering problems. 2019

[2] P. Strzelecki, H. von der Mosel. Tangent-point repulsive potentials for a class of smooth $m$-dimensional sets in $\mathbb{R}^n$. Part 1: Smoothing and self-avoidance effects. 2011


Phase retrieval and phaseless inverse scattering with background information

Thorsten Hohage1, Roman Novikov2, Vladimir Sivkin2

1Univ. Göttingen, Germany; 2CMAP, École Polytechnique, France

We consider the problem of finding a compactly supported potential in the multidimensional Schrödinger equation from its differential scattering cross section (squared modulus of the scattering amplitude) at fixed energy. In the Born approximation this problem simplifies to the phase retrieval problem of reconstructing the potential from the absolute value of its Fourier transform on a ball. To compensate for the missing phase information we use the method of a priori known background scatterers. In particular, we propose an iterative scheme for finding the potential from measurements of a single differential scattering cross section corresponding to the sum of the unknown potential and a known background potential whose support is sufficiently disjoint from that of the unknown potential. If this condition is relaxed, we give similar results for finding the potential from additional monochromatic measurements of the differential scattering cross section of the unknown potential without the background potential. The performance of the proposed algorithms is demonstrated in numerical examples. This talk is based on the following work:

T. Hohage, R. Novikov, V. Sivkin, preprint, 2022, hal-03806616.


Convergence analysis of optimization-by-continuation algorithms

Ignace Loris

Université libre de Bruxelles, Belgium

We discuss several iterative optimization algorithms for the minimization of a cost function consisting of a linear combination of up to three convex terms, with at least one term differentiable and a second one prox-simple. Such optimization problems frequently occur in the numerical solution of inverse problems (a data misfit term plus a penalty or constraint term).

We present several new results on the convergence of proximal-gradient-like algorithms in the context of an optimization-by-continuation strategy. The algorithms' special feature lies in their ability to approximate, in a single iteration run, the minimizers of the cost function for many different values of the parameters determining the relative weight of the three terms in the cost function (penalty parameters). As a special case, one recovers a generalization of the primal-dual algorithm of Chambolle and Pock.
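
As a purely structural sketch of the continuation idea (our simplification: two terms instead of three, and a nested loop rather than the single iteration run described above), a warm-started proximal-gradient sweep over a decreasing path of penalty parameters reads:

```python
import numpy as np

def prox_grad_continuation(grad_f, prox_g, x0, step, alphas, inner_iters=50):
    """Approximate minimizers of f(x) + alpha*g(x) for a whole path of penalty
    parameters alpha, warm-starting each solve at the previous minimizer."""
    x, path = x0.copy(), []
    for alpha in alphas:                      # continuation in the penalty parameter
        for _ in range(inner_iters):
            x = prox_g(x - step * grad_f(x), step * alpha)
        path.append(x.copy())
    return path

# example: f(x) = 0.5*||Ax - y||^2 (differentiable), g = ||.||_1 (prox-simple)
A, y = np.random.randn(20, 50), np.random.randn(20)
grad_f = lambda x: A.T @ (A @ x - y)
prox_g = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
path = prox_grad_continuation(grad_f, prox_g, np.zeros(50),
                              step=1.0 / np.linalg.norm(A, 2) ** 2,
                              alphas=np.geomspace(1.0, 1e-3, 20))
```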
 
9:00am - 11:00am  CT03: Contributed talks
Location: VG2.107
Session Chair: Martin Halla
 

The Foldy-Lax approximation of the scattered field by many small inclusions near the resonating frequencies for the Lamé system

Divya Gangadaraiah1, Durga Prasad Challa1, Mourad Sini2

1IIT Tirupati, India; 2Radon Institute (RICAM), Austria

We are concerned with time-harmonic elastic scattering in the presence of multiple small-scaled inclusions. The main property we use in this work is the local enhancement of scattering, which occurs at a specific incident frequency when the medium is perturbed by highly contrasted small inhomogeneities; for instance, one can consider a contrast in the mass density. Such highly contrasting inclusions generate a few local spots at their locations. These spots are generated as possible body waves related to elastic resonances. A family of these resonances is related to the eigenvalues of the elastic Newtonian operator.

Our goal is to derive an approximation of the elastic scattered field for incident frequencies near the elastic resonances, under suitable sufficient conditions. The dominating field generated by the multiple interactions between a cluster of small inhomogeneities of sub-wavelength size is the Foldy-Lax field. The derived result has several applications: firstly, in the theory of effective media, to design materials with desired properties; secondly, in elastic imaging, to solve the inverse problem of recovering the properties of the background medium.


Microlocal Analysis of Multistatic Synthetic Aperture Radar Imaging

David McMahon, Clifford Nolan

University of Limerick, Ireland

We consider Synthetic Aperture Radar (SAR) in which scattered waves, simultaneously emitted from a pair of stationary emitters, are measured along a flight track traversed by an aircraft. A linearized mathematical model of scattering is obtained using a Fourier integral operator. This model can then be used to form an image of the ground terrain using backprojection together with a carefully designed data acquisition geometry.

The data is composed of two parts, corresponding to the received signals from each emitter. A backprojection operator can easily be chosen that correctly reconstructs the singularities in the wave speed using just one emitter. One would expect this to lead to a reasonable image of the terrain. However, we expect that application of this backprojection operator to the data from the other emitter will lead to unwanted artifacts in the image. We analyse the operators associated with this situation and use microlocal analysis to determine configurations of the flight path and emitter locations so that we may mitigate the artifacts associated with such “cross talk” between the two emitters.



An inverse problem for the Riemannian minimal surface equation

Janne Nurminen

University of Jyväskylä, Finland

In this work we study an inverse problem for the Riemannian minimal surface equation, which is a quasilinear elliptic PDE. Consider a Riemannian manifold $(M,g)$, where $M=\mathbb{R}^n$ and the metric is a so-called conformally transversally anisotropic metric, i.e. $g=c(\hat{g}\oplus e)$, where $\hat{g}$ is a metric on $\mathbb{R}^{n-1}$. Let $u\colon\Omega\subset\mathbb{R}^{n-1}\to \mathbb{R}$ be a smooth function that satisfies the minimal surface equation. Assume that we can make boundary measurements on the graph of $u$, that is, we know the Dirichlet-to-Neumann (DN) map which maps the boundary value $u|_{\partial\Omega}=f$ to the normal derivative $\partial_{\nu}u|_{\partial\Omega}=\hat{g}^{ij}\partial_{x_i}u\,\nu_j|_{\partial\Omega}$. The Dirichlet data $f$ is the height of the minimal surface on the boundary. The normal derivative $\partial_{\nu}u|_{\partial\Omega}$ can be thought of as the tension on the boundary caused by the minimal surface. In this talk we show that if we have knowledge of two DN maps corresponding to two different metrics in the same conformal class, then we can deduce that the metrics have the same Taylor series, up to a constant multiplier.

This work connects aspects of the two previous articles [1] and [2]. We use the technique of higher order linearization (see for example [3]), which has received increasing attention lately.

[1] J. Nurminen. An inverse problem for the minimal surface equation. Nonlinear Anal. 227: 113163, 2023.

[2] C. I. Cârstea, M. Lassas, T. Liimatainen, L. Oksanen. An inverse problem for the Riemannian minimal surface equation. arXiv:2203.09262, 2022.

[3] M. Lassas, T. Liimatainen, Y.-H. Lin, M. Salo. Inverse problems for elliptic equations with power type nonlinearities. J. Math. Pures Appl. (9) 145: 44–82, 2021.
 
9:00am - 11:00am  MS01: Machine Learning for Inverse Problems in Medical Imaging
Location: VG1.102
Session Chair: Christian Fiedler
Session Chair: Jens Flemming
 

Chances and limitations of machine learning approaches to inverse problems

Jens Flemming

Zwickau University of Applied Sciences, Germany

Machine learning techniques, especially artificial neural networks, share many ideas and features with classical (that is, non-ML) methods for solving inverse problems. Examples are underlying Tikhonov-type optimization problems and the interpretation of deep neural networks as iterative methods structured like typical forward-backward splitting. In the talk we discuss those similarities and draw conclusions on possible directions for future research. Chances and limitations of ML techniques are discussed in the context of inverse problems from medical imaging. Of particular interest will be susceptibility weighted MR imaging (SWI).



From Manual to Automatic: Streamlining MRI Marker Detection and Localization for Surgical Planning

Christian Fiedler, Silke Kolbig

Zwickau University of Applied Sciences, Germany

The accurate detection and localization of natural or artificial structures in medical images is essential for effective diagnostics and surgical planning. In particular, determining the pose of artificial markers in MRI images is a foundational step for subsequent spatial adjustments, such as the registration between imaging modalities and with surgical devices. Manual detection and localization of these markers can be tedious and time-consuming, which has prompted the exploration of reliable and highly automated approaches that can significantly reduce the need for human interaction. In recent years, automatic approaches based on neural networks have shown remarkable success in the detection and semantic segmentation of natural, anatomical structures. In contrast to these structures, the geometry of artificial markers is typically known, which enables the development of relatively simple algorithms that can perform well without the need for complex neural network architectures. The challenge often lies in the incomplete and inhomogeneous representation of the markers within the MRI images, due to noise, distortions, artifacts and other image defects. In this talk, we will explore different automatic approaches to MRI marker localization from a practical perspective, including conventional image processing pipelines utilizing basic methods such as convolution filters or connected component analysis and labeling, as well as approaches based on neural networks. By addressing the benefits and challenges of these methods, we gain a better understanding of their potential applications and impact on clinical image processing workflows.


Approaches on Feature and Model Selection for high-dimensional data in Medical Research and Analysis

Paul-Philipp Jacobs, Timm Denecke, Harald Busse

Leipzig University Medicine, Germany

In recent years the availability of multi-omics data has had a great impact on medical research. Such high-dimensional datasets contain molecular as well as radiological variables from genomics, epigenomics, transcriptomics, proteomics, metabolomics, microbiomics and radiomics. The challenge when working with this kind of data is to find a subset of meaningful variables in order to deduce disease-specific characteristics or make predictions on clinical endpoint variables. The process of eliminating non-informative and redundant features is called feature selection. Feature selection can be considered a necessary pre-processing step for the actual modeling step, in which statistical or machine-learning models are built in order to conduct classification tasks or time-to-event analysis. Given the pre-selected subset of features and a variety of candidate models, finding the most accurate as well as informative model is the remaining challenge, also referred to as model selection. The goal of model selection is to elicit a parsimonious model which uses only a small set of explanatory variables; these can then be considered as clinical covariates or biomarkers and in turn provide information on how the treatment of the disease can be improved. Further, model selection is a step to prevent misleading conclusions due to possible over-fitting of the noise inherent in the data. In this talk, we present recent methodologies in feature and model selection. An introduction to rather simple feature selection techniques, like statistical filters and classifier-performance-focused methods, is followed by a description of more sophisticated regularization and shrinkage methods as well as the utilization of decision tree algorithms. Finally we discuss how statistical as well as machine-learning models can benefit from the application of various information criteria for model selection.
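
A minimal sketch of two of these ingredients, a statistical filter followed by a shrinkage method, on hypothetical data (all names and parameters below are illustrative):

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import LassoCV

# hypothetical omics-style data: far fewer samples than features
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5000))
y = X[:, :5] @ np.array([2.0, -1.0, 1.0, 0.5, -0.5]) + 0.1 * rng.standard_normal(100)

# 1) statistical filter: keep the 500 features most associated with y
X_filt = SelectKBest(f_regression, k=500).fit_transform(X, y)

# 2) shrinkage: cross-validated Lasso drives most remaining coefficients to zero
lasso = LassoCV(cv=5).fit(X_filt, y)
kept = np.flatnonzero(lasso.coef_)   # surviving features = candidate biomarkers
print(f"{kept.size} features kept out of {X_filt.shape[1]}")
```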


Deceptive performance of artificial neural networks in semantic segmentation tasks on the example of lung delineation

Marcus Wittig

Westsächsische Hochschule Zwickau, Germany

The presentation focuses on a critical analysis of the performance of artificial neural networks (ANNs) in the context of semantic segmentation of organs as reported in the scientific literature. In recent years, extensive research has been performed on combining loss functions and developing non-trainable layers for ANNs in order to optimize the boundary regions of semantic segmentations. These boundaries are particularly crucial for various segmentation tasks such as the detection of water retention in the lungs for COVID-19 diagnosis, the localization of organs at risk in radiotherapy treatment planning or the identification of white matter hyperintensities in the brain. With the U-Net, Ronneberger et al. designed a powerful network architecture for any type of segmentation task. With reported IOU (intersection over union) values of $92\%$ and $77.5\%$ on the two datasets used in the original work, it was far superior to the second-best network. On this basis, the U-Net was used for many segmentation tasks in the following years. In the field of semantic organ segmentation, the U-Net and U-Net-like artificial neural networks achieved very high accuracies. Since then, the reported accuracy has hardly improved. However, the underlying calculation of accuracy is misleading: the improvements of recent years have been aimed at boundary regions, and these are usually unfavourably small in proportion to the inner area. Hence, improvements in boundary segmentation accuracy have only a marginal impact on the overall accuracy. To illustrate this, we will look at the reported performance and improvements of artificial neural networks in both single-task and multi-task applications, using lung segmentation as an example. Typical evaluation methods, specifically the Dice coefficient and the Hausdorff distance, will be presented with current values and improvements. An overview of new evaluation methods and a discussion of the current way of reporting will also be addressed.
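
The proportionality argument is easy to make concrete with a toy example (ours, not from the talk): for a disk-shaped "organ", a segmentation that is wrong in a systematic 2-pixel boundary band still scores near-perfect region overlap.

```python
import numpy as np

def iou(a, b):
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def dice(a, b):
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# ground truth: a disk of radius 100 px; prediction misses a 2-px boundary band
yy, xx = np.mgrid[:512, :512]
r = np.hypot(yy - 256, xx - 256)
gt, pred = r <= 100, r <= 98

print(iou(gt, pred), dice(gt, pred))  # approx. 0.96 and 0.98, despite the
                                      # systematic boundary error
```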
 
9:00am - 11:00am  MS05 1: Numerical meet statistical methods in inverse problems
Location: VG2.102
Session Chair: Martin Hanke
Session Chair: Markus Reiß
Session Chair: Frank Werner
 

Aggregation by the Linear Functional Strategy in Regularized Domain Adaptation

Sergei Pereverzyev

The Johann Radon Institute for Computational and Applied Mathematics (RICAM), Austria

In this talk we are going to discuss the problem of hyperparameters tuning in the context of learning from different domains known also as domain adaptation. The domain adaptation scenario arises when one studies two input-output relationships governed by probabilistic laws with respect to different probability measures, and uses the data drawn from one of them to minimize the expected prediction risk over the other measure.

The problem of domain adaptation has been tackled by many approaches, and most domain adaptation algorithms depend on the so-called hyperparameters that change the performance of the algorithm and need to be tuned. Usually, algorithm performance variation can be attributed to just a few hyperparameters, such as a regularization parameter in kernel ridge regression, or batch size and number of iterations in stochastic gradient descent training.

In spite of its importance, the question of selecting these parameters has not been much studied in the context of domain adaptation. In this talk, we are going to shed light on this issue. In particular, we discuss how a regularization of the Radon-Nikodym differentiation can be employed in hyperparameters tuning. Theoretical results will be illustrated by application to stenosis detection in different types of arteries.

The talk is based on the recent joint work [1] performed within COMET-Module project S3AI funded by the Austrian Research Promotion Agency (FFG).

[1] E.R. Gizewski, L. Mayer, B.A. Moser, D.H. Nguyen, S. Pereverzyev Jr., S.V. Pereverzyev, N. Shepeleva, W. Zellinger. On a regularization of unsupervised domain adaptation in RKHS. Appl. Comput. Harmon. Anal. 57: 201-227, 2022.


The Henderson problem and the relative entropy functional

Fabio Marc Frommer, Martin Hanke

Johannes Gutenberg Universität Mainz, Germany

The inverse Henderson problem of statistical mechanics is the theoretical foundation for many bottom-up coarse-graining techniques for the numerical simulation of complex soft matter physics. This inverse problem concerns classical particles in continuous space which interact according to a pair potential depending on the distance of the particles. Roughly stated, it asks for the interaction potential given the equilibrium pair correlation function of the system. In 1974 Henderson proved that this potential is uniquely determined in a canonical ensemble, and recently it has been argued by Rosenberger et al. that this potential minimises a relative entropy. Here we provide a rigorous extension of these results to the thermodynamic limit and define a corresponding relative entropy density. We investigate further properties of this functional for suitable classes of pair potentials.


Early stopping for $L^{2}$-boosting in high-dimensional linear models

Bernhard Stankewitz

Bocconi University Milano, Italy

We consider $ L^{2} $-boosting in a sparse high-dimensional linear model via orthogonal matching pursuit (OMP). For this greedy, nonlinear subspace selection procedure, we analyze a data-driven early stopping time $ \tau $, which is sequential in the sense that its computation is based on the first $ \tau $ iterations only. Our approach is substantially less costly than established model selection criteria, which require the computation of the full boosting path.

We prove that sequential early stopping preserves statistical optimality in this setting, in terms of a general oracle inequality for the empirical risk and recently established optimal convergence rates for the population risk. The proofs include a subtle $\omega$-pointwise analysis of a stochastic bias-variance trade-off, which is induced by the greedy optimization procedure at the core of OMP. Simulation studies show that, at a significantly reduced computational cost, these types of methods match or even exceed the performance of other state-of-the-art algorithms such as the cross-validated Lasso or model selection via a high-dimensional Akaike criterion based on the full boosting path.
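
A small numpy sketch of the underlying procedure (ours, with a generic residual threshold kappa standing in for the data-driven stopping rule analyzed in the talk):

```python
import numpy as np

def omp_early_stopped(X, y, kappa):
    """L2-boosting via orthogonal matching pursuit, halted sequentially as
    soon as the squared residual norm drops below the threshold kappa."""
    n, p = X.shape
    active, beta = [], np.zeros(0)
    r = y.copy()
    while r @ r > kappa and len(active) < min(n, p):
        j = int(np.argmax(np.abs(X.T @ r)))        # most correlated column
        if j in active:
            break
        active.append(j)
        beta, *_ = np.linalg.lstsq(X[:, active], y, rcond=None)  # refit on active set
        r = y - X[:, active] @ beta                # orthogonalized residual
    out = np.zeros(p)
    out[active] = beta
    return out, active
```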



Early stopping for conjugate gradients in statistical inverse problems

Laura Hucker, Markus Reiß

Humboldt-Universität zu Berlin, Germany

We consider estimators obtained by applying the conjugate gradient algorithm to the normal equation of a prototypical statistical inverse problem. For such iterative procedures, it is necessary to choose a suitable iteration index to avoid under- and overfitting. Unfortunately, classical model selection criteria can be prohibitively expensive in high dimensions. In contrast, it has been shown for several methods that sequential early stopping can achieve statistical and computational efficiency by halting at a data-driven index depending on previous iterates only. Residual-based stopping rules, similar to the discrepancy principle for deterministic problems, are well understood for linear regularization methods. However, in the case of conjugate gradients, the estimator depends nonlinearly on the observations, allowing for greater flexibility. This significantly complicates the error analysis. We establish adaptation results in this setting.
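
A numpy sketch of the basic procedure (ours: a fixed discrepancy-type threshold tau_delta stands in for the data-driven rules analyzed in the talk; the rule is sequential, since it only uses previous iterates):

```python
import numpy as np

def cg_normal_discrepancy(A, y, tau_delta, max_iter=500):
    """CG applied to A^T A x = A^T y (CGNR/CGLS), stopped at the first iterate
    whose data residual ||A x - y|| falls below the threshold tau_delta."""
    x = np.zeros(A.shape[1])
    s = A.T @ y                  # negative gradient of 0.5*||Ax - y||^2 at x = 0
    p = s.copy()
    for _ in range(max_iter):
        if np.linalg.norm(A @ x - y) <= tau_delta:
            break                # sequential early stopping
        Ap = A @ p
        alpha = (s @ s) / (Ap @ Ap)
        x = x + alpha * p
        s_new = s - alpha * (A.T @ Ap)
        p = s_new + ((s_new @ s_new) / (s @ s)) * p
        s = s_new
    return x
```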
 
9:00am - 11:00am  MS06 1: Inverse Acoustic and Electromagnetic Scattering Theory - 30 years later
Location: VG0.110
Session Chair: Fioralba Cakoni
Session Chair: Houssem Haddar
 

Celebrating Colton and Kress Contributions

Fioralba Cakoni2, Houssem Haddar1

1INRIA, France; 2Rutgers University

The first edition of the book "Inverse Acoustic and Electromagnetic Scattering Theory" by D. Colton and R. Kress appeared in 1992. It was a comprehensive exposition of fundamental mathematical background as well as exciting developments happening at the time in inverse scattering theory, from uniqueness results to reconstruction algorithms. The book became a classic in the field. The fourth edition of this book was published in 2019, about 30 years later, in a much extended version. The added 200 pages represent a part of the myriad directions that the research in inverse scattering has taken. This includes development of novel non-iterative reconstruction approaches, such as factorization, generalized linear sampling and other direct imaging methods, the design and analysis of more advanced and efficient optimization algorithms, the investigation of special sets of frequencies, namely transmission eigenvalues, non-scattering wave numbers and scattering poles, and their applications in solving inverse scattering problems. We shall review some of the key moments, places and anecdotes that contributed to this achievement.


Passive inverse obstacle scattering problems

Thorsten Hohage, Meng Liu

Universität Göttingen, Germany

We report on the determination of the shape and location of scattering obstacles by passive imaging techniques. More precisely, we assume that the available data are correlations of randomly excited waves with zero mean. Passive imaging techniques are employed in seismology, ocean acoustics, experimental aeroacoustics, ultrasonics, and local helioseismology. They have also been thoroughly investigated mathematically, typically as a qualitative imaging modality, but the study of inverse obstacle problems seems to be new in this context.

We assume that wave propagation is described by the Helmholtz equation in two or three space dimensions. Furthermore, the random source is assumed to be uncorrelated and either compactly supported or at infinite distance. The source strength is considered as an additional unknown of the inverse problem.

As a main theoretical result, we show that both the shape of a smooth obstacle without holes and the source strength are uniquely determined by correlation data, both in the near-field and in the far-field. We also show numerical simulations supporting our theoretical results.



Target Signatures for Thin Surfaces

Peter Monk

University of Delaware, United States of America

In 1994, just within the 30 years mentioned in the title of this minisymposium, Colton and Kirsch proposed a set of target signatures for imperfectly conducting obstacles at fixed frequency [1]. These are characterized by using the far field equation. Today there are many families of target signatures including transmission eigenvalues, Steklov eigenvalues and modified transmission eigenvalues. All of these relate to scattering by a target of non-zero volume, and they can all be determined from scattering data using appropriate modifications of the far field equation [2].

In this presentation I will continue by describing recent developments concerning target signatures for screens. Screens are open surfaces, and hence have no volume. A typical example is a resistive screen modeled using transmission conditions across the screen. The goal is to design target signatures that are computable from scattering data in order to detect changes in the material properties of the screen. This target signature is characterized by a mixed Steklov eigenvalue problem for a domain whose boundary contains the screen.

Following [3], I shall show that the corresponding eigenvalues can be determined from an appropriately modified far field equation. Numerical experiments using the classical linear sampling method are presented to support our theoretical results.

[1] D.L. Colton, A. Kirsch. Target signatures for imperfectly conducting obstacles at fixed frequency. Quart. J. Mech. Appl. Math. 47: 1–15, 1994.

[2] D.L. Colton, F. Cakoni, H. Haddar. Inverse Scattering Theory and Transmission Eigenvalues, 2nd edition, CBMS-NSF, Regional Conference Series in Applied Mathematics, SIAM Publications, 98, 2022.

[3] F. Cakoni, P. Monk, Y. Zhang. Target signatures for thin surfaces. Inverse Probl. 38, 025011, 28 pp, 2021. doi: 10.1088/1361-6420/ac4154


Learning Dynamical Models and Model Components from Observations

Roland Potthast

Deutscher Wetterdienst, Germany

Dynamical models are the basis for forecasting in important application regimes such as weather or climate forecasting. Numerical models are based on a combination of PDEs for fluid flow, the simulation of electromagnetic radiation, and microphysics. For the synchronization of such systems with reality, data assimilation methods are used. These methods combine observations with short-range forecasts into so-called analyses of components of the earth system, e.g. the atmosphere, the land or the ocean. This is repeated every three hours for global atmospheric models, every hour for high-resolution atmospheric models, and once per day for ocean forecasting. In climate science, monthly means are assimilated for seasonal or decadal forecasting. The cycled run of short-range forecasts and assimilation steps is known as the data assimilation cycle. Observations include microwave or infrared measurements modeled by radiative transfer codes, leading to integral-equation-type observation operators as the basis of high-resolution global or regional data assimilation.

Here, we will address the task of learning dynamical models or model components iteratively while running such a data assimilation cycle. To this end we will employ either iterated Tikhonov regularization or its more elaborate version, the Kalman filter. We will demonstrate that model learning can be carried out very efficiently, based on a particular representation of the model and on a sufficiently large variety of observations to be exploited in each step of the assimilation cycle. Examples from popular academic models, such as the Lorenz 63 or 96 systems, and from more realistic real-world systems will be demonstrated.
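
A toy sketch of this kind of online model learning on the Lorenz-63 system (all details hypothetical: an ensemble Kalman filter with the unknown parameter rho appended to the state; operational systems add inflation, localization and far richer observation operators):

```python
import numpy as np

rng = np.random.default_rng(1)

def lorenz63_step(v, rho, dt=0.01, sigma=10.0, beta=8.0 / 3.0):
    # one explicit Euler step of Lorenz-63 with uncertain parameter rho
    x, y, z = v
    return v + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

truth, rho_true, R = np.array([1.0, 1.0, 1.0]), 28.0, 0.5 ** 2
H = np.eye(3, 4)                            # we observe the state, not the parameter

# ensemble of augmented states [x, y, z, rho]; rho initialized far from the truth
E = np.concatenate([truth + rng.normal(0, 1, (50, 3)),
                    rng.normal(20.0, 4.0, (50, 1))], axis=1)

for _ in range(500):                        # assimilation cycle: forecast, then update
    truth = lorenz63_step(truth, rho_true)
    obs = truth + rng.normal(0, np.sqrt(R), 3)
    E[:, :3] = np.array([lorenz63_step(e[:3], e[3]) for e in E])
    X = E - E.mean(axis=0)                  # ensemble anomalies
    P = X.T @ X / (len(E) - 1)              # sample covariance of [state, parameter]
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R * np.eye(3))   # Kalman gain
    E += (obs + rng.normal(0, np.sqrt(R), (50, 3)) - E[:, :3]) @ K.T

print("estimated rho:", E[:, 3].mean())     # pulled toward the true value 28
```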

 
9:00am - 11:00am  MS11: Defying the Curse of Dimensionality – Theory and Algorithms for Large Dimensional Bayesian Inversion
Location: VG1.108
Session Chair: Rafael Flock
Session Chair: Yiqiu Dong
 

Efficient high-dimensional Bayesian multi-fidelity inverse analysis for expensive legacy solvers

Jonas Nitzler1,2, Wolfgang A. Wall1, Phaedon-Stelios Koutsourelakis2

1Institute for Computational Mechanics, Technical University of Munich, Germany; 2Professorship of Data-driven Materials Modeling, Technical University of Munich, Germany

Bayesian inverse analysis can be computationally burdensome when dealing with large-scale numerical models that depend on high-dimensional stochastic input, especially when model derivatives are unavailable, as is the case for many high-fidelity legacy codes. To overcome these limitations, we introduce a novel approach called Bayesian multi-fidelity inverse analysis (BMFIA), which utilizes computationally inexpensive lower-fidelity models to construct a multi-fidelity likelihood function. This function can be learned robustly, and potentially adaptively, from a few high-fidelity simulations (100-300). Our approach incorporates into the resulting multi-fidelity posterior the epistemic uncertainty stemming from the limited high-fidelity data and the information loss caused by the multi-fidelity approximation. BMFIA can handle non-linear dependencies between low- and high-fidelity models. In particular, when the former are differentiable, high-dimensional problems can be solved while the multi-fidelity likelihood maintains high-fidelity accuracy. Bayesian inference is performed using state-of-the-art sampling-based or variational methods which require solely evaluations of the lower-fidelity model. We demonstrate the applicability of BMFIA to large-scale biomechanical and coupled multi-physics problems.


Goal-oriented Uncertainty Quantification for Inverse Problems via Variational Encoder-Decoder Networks

Julianne Chung1, Matthias Chung1, Babak Afkham2

1Emory University, United States of America; 2Technical University of Denmark, Denmark

In this work, we describe a new approach that uses variational encoder-decoder (VED) networks for efficient goal-oriented uncertainty quantification for inverse problems. Contrary to standard inverse problems, these approaches are goal-oriented in that the goal is to estimate some quantities of interest (QoI) that are functions of the solution of an inverse problem, rather than the solution itself. Moreover, we are interested in computing uncertainty metrics associated with the QoI, thus utilizing a Bayesian approach for inverse problems that incorporates the prediction operator and techniques for exploring the posterior. By harnessing recent advancements in the field of machine learning for large-scale inverse problems, in particular, by exploiting VED networks, we describe a data-driven approach for real-time goal-oriented uncertainty quantification for inverse problems.


Coupled Parameter and Data Dimension Reduction for Bayesian Inference

Qiao Chen1,2, Elise Arnaud1,2, Ricardo Baptista3, Olivier Zahm1,2

1Inria, France; 2Université Grenoble Alpes, France; 3California Institute of Technology, USA

We introduce a new method to reduce the dimension of the parameter and data space of high-dimensional Bayesian inverse problems. Commonly, different dimension reduction methods are applied separately to the two spaces. However, choosing a low-dimensional informed parameter subspace influences which data subspace is informative and vice versa. We thus propose a coupled method that, in addition, naturally reveals optimal experimental designs. Our method is based on the gradient of the forward operator of a Gaussian likelihood. It computes two projectors with an efficient and simple alternating singular value decomposition. Moreover, we control the approximation error through a certified $L^2$-error bound on the forward operator. We demonstrate the method on a large-scale Bayesian inverse problem in ocean modelling and use it to derive optimal sensor placements.
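
A schematic of the alternating-SVD step in a stripped-down linear-Gaussian setting (our simplification: J denotes the whitened Jacobian of the forward operator, with prior and noise covariances suppressed):

```python
import numpy as np

def coupled_projectors(J, r, s, n_sweeps=10):
    """Alternate between the best data subspace given the current parameter
    subspace and vice versa (an orthogonal-iteration-type scheme)."""
    _, _, Vt = np.linalg.svd(J, full_matrices=False)
    V = Vt[:r].T                                   # initial parameter basis
    for _ in range(n_sweeps):
        U, _, _ = np.linalg.svd(J @ V, full_matrices=False)
        U = U[:, :s]                               # informed data subspace given V
        W, _, _ = np.linalg.svd(J.T @ U, full_matrices=False)
        V = W[:, :r]                               # informed parameter subspace given U
    return U, V                                    # orthonormal bases of the two subspaces

U, V = coupled_projectors(np.random.randn(200, 1000), r=20, s=20)
```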


Certified Coordinate Selection for large-dimensional Bayesian Inversion

Rafael Flock1, Yiqiu Dong1, Olivier Zahm2, Felipe Uribe3

1Technical University of Denmark, Denmark; 2Univ. Grenoble Alpes, Inria; 3Lappeenranta-Lahti University of Technology

We present a method to solve large-dimensional Bayesian inverse problems where the parameter vector is assumed to be sparse. To this end, we use a Laplace prior and show how the posterior density can be approximated based on a suitable coordinate splitting. Using an upper bound on the Hellinger distance between the exact and approximated posterior densities, we show how the coordinate splitting can be performed.

Sampling from the approximated posterior is then straightforward and very efficient. Our theoretical framework also allows for sampling the exact posterior using a pseudo-marginal MCMC. However, this algorithm relies heavily on a good coordinate splitting, which might not be feasible in practice. Therefore, we propose a modified random-scan MCMC algorithm to sample from the exact posterior, which is more robust and flexible.

In the end, we first illustrate the methodology on a simple example and then demonstrate its practical applicability on a large-dimensional 2D deblurring problem.
 
9:00am - 11:00am  MS19 3: Theory and algorithms of super-resolution in imaging and inverse problems
Location: VG3.103
Session Chair: Habib Ammari
Session Chair: Ping Liu
 

On Beurling-Selberg Approximations and the Stability of Super-Resolution

Maxime Ferreira Da Costa

CentraleSupélec, Université Paris-Saclay, France

Of particular interest for super-resolution is the line spectrum estimation problem, which consists in recovering a stream of spikes (point sources) from the noisy observation of a small number of its first trigonometric moments, weighted by those of the point-spread function (PSF). The empirical feasibility of this problem has been known since the work of Rayleigh on diffraction to be essentially driven by the separation between the spikes to be recovered.

We present a novel statistical framework based on the spectrum of the Fisher information matrix (FIM) to quantify the stability limit of super-resolution as a function of the PSF. In the regime where the minimal separation is inversely proportional to the number of acquired moments, we show the existence of a separation constant above which the minimal eigenvalue of the FIM does not vanish asymptotically, thereby defining a statistical resolution limit. Notably, a relationship between the total variation of the autocorrelation function of the PSF and its associated resolution limit is highlighted. These novel bounds are derived by relating the extremal eigenvalues of the FIM to a higher-order Beurling-Selberg-type extremal approximation problem over the functions of bounded variation, for which we provide solutions.


A Parameter Identification Algorithm for Gaussian Mixture Models

Xinyu Liu, Hai Zhang

Hong Kong University of Science and Technology, Hong Kong S.A.R. (China)

In this talk, we consider the problem of learning the parameters of one-dimensional Gaussian mixture models (GMM) from Fourier measurements. Unlike most algorithms, which require the number of Gaussians a priori, our method only needs the number of distinct variances as prior information. We also illustrate that, for stably recovering all the components under a certain noise level, a separation condition on the variances is necessary. Our method can be generalized to high-dimensional cases.
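
For concreteness (our notation): the Fourier data of a one-dimensional mixture $f=\sum_{j=1}^{K} w_j\, \mathcal{N}(\mu_j,\sigma_j^2)$ are samples of \begin{equation*} \hat{f}(\xi) \,=\, \sum_{j=1}^{K} w_j\, e^{\,i\mu_j \xi - \sigma_j^2 \xi^2/2}, \end{equation*} so components with distinct variances decay at distinct Gaussian rates in $\xi$. This is what makes grouping the components by variance, and hence a separation condition on the variances, natural.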


Super-localisation of a point-like emitter in a resonant environment: correction of the mirage effect

Pierre Millien

CNRS, France

In this paper, we show that it is possible to overcome one of the fundamental limitations of super-resolution microscopy techniques: the necessity of an optically homogeneous environment. Using recent modal approximation results, we show as a proof of concept that it is possible to recover the position of a single point-like emitter in a known resonant environment from far-field measurements with a precision two orders of magnitude below the classical Rayleigh limit. The procedure does not involve solving any partial differential equation, is computationally light (optimisation in $\mathbb{R}^d$ with $d$ of the order of 10) and is therefore suited for the recovery of a very large number of single emitters.


Optimal super-resolution of close point sources and stability of Prony's method

Rami Katz, Nuha Diab, Dmitry Batenkov

Tel Aviv University, Israel

We consider the problem of recovering a linear combination of Dirac masses from noisy Fourier samples, also known as the problem of super-resolution. Following the recent derivation of min-max bounds for this problem when some of the sources collide, we develop an optimal algorithm which provably achieves these bounds in this challenging scenario. Our method is based on the well-known Prony's method for exponential fitting and on a novel analysis of its stability in the near-colliding regime, combined with the decimation technique for improving the conditioning of the problem.

Based on joint works with N. Diab and R. Katz:

[1] R. Katz, N. Diab, D. Batenkov. Decimated Prony's Method for Stable Super-resolution. 2022. http://arxiv.org/abs/2210.13329

[2] R. Katz, N. Diab, D. Batenkov. On the accuracy of Prony's method for recovery of exponential sums with closely spaced exponents. 2023. http://arxiv.org/abs/2302.05883
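
For readers unfamiliar with the baseline, here is a numpy sketch of plain (undecimated, noise-free) Prony's method for $f(n)=\sum_j a_j z_j^n$ from $2k$ samples; the decimation and stability analysis of [1, 2] target precisely the near-colliding regime in which this vanilla version becomes ill-conditioned:

```python
import numpy as np

def prony(m, k):
    """Recover nodes z_j and amplitudes a_j of f(n) = sum_j a_j z_j^n
    from the 2k samples m[0], ..., m[2k-1] (classical Prony's method)."""
    m = np.asarray(m, dtype=complex)
    H = np.array([m[i:i + k] for i in range(k)])        # k x k Hankel matrix
    q = np.linalg.solve(H, -m[k:2 * k])                 # Prony polynomial coefficients
    nodes = np.roots(np.concatenate(([1.0], q[::-1])))  # roots of z^k + q_{k-1} z^{k-1} + ... + q_0
    V = np.vander(nodes, 2 * k, increasing=True).T      # Vandermonde system for the amplitudes
    amps = np.linalg.lstsq(V, m, rcond=None)[0]
    return nodes, amps

# two closely spaced unit-amplitude sources on the torus
z = np.exp(2j * np.pi * np.array([0.30, 0.32]))
nodes, amps = prony(z[0] ** np.arange(4) + z[1] ** np.arange(4), k=2)
```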
 
9:00am - 11:00am  MS22 3: Imaging with Non-Linear Measurements: Tomography and Reconstruction from Phaseless or Folded Data
Location: VG1.101
Session Chair: Matthias Beckmann
Session Chair: Robert Beinert
Session Chair: Michael Quellmalz
 

Phase retrieval from time-frequency structured data

Rima Alaifari

ETH Zurich, Switzerland

Certain imaging and audio processing applications require the reconstruction of an image or signal from its phaseless time-frequency or time-scale representation, such as the magnitude of its Gabor or wavelet transform.

Such problems are inherently unstable; however, we formulate a relaxed notion of solution, meaningful for audio processing applications, under which stability can be restored.

The question of uniqueness becomes particularly delicate in the sampled setting. There, we show the first result evidencing the fundamental non-uniqueness of phase retrieval from sampled Gabor transform measurements. By restricting to appropriate function classes, positive uniqueness results can be obtained.

Furthermore, we present our most recent result which establishes uniqueness of phase retrieval from sampled wavelet transform measurements, without restricting the function class, when 3 wavelets are employed.


Computational Imaging from Structured Noise

Ayush Bhandari

Imperial College London, United Kingdom

Almost all modern-day imaging systems rely on digital capture of information. To this end, hardware and consumer technologies strive for high-resolution, quantization-based acquisition. Contrary to folk wisdom, we show that sampling quantization noise yields unconventional advantages in computational sensing and imaging. In particular, this leads to a novel, single-shot, high-dynamic-range imaging approach. Application areas include consumer and scientific imaging, computed tomography, sensor array imaging and time-resolved 3D imaging. In each case, we present a mathematically guaranteed recovery algorithm and also demonstrate a first hardware prototype for basic digital acquisition of quantization noise.
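
A toy sketch of the folding principle behind such modulo ("self-reset") acquisition and its inversion (parameters ours; practical recovery algorithms handle noise and employ higher-order differences):

```python
import numpy as np

lam = 1.0                                    # folding threshold of the modulo ADC

def fold(v):
    # centered modulo: maps v into [-lam, lam)
    return (v + lam) % (2 * lam) - lam

t = np.linspace(0.0, 1.0, 2000)
x = 6.0 * np.sin(2 * np.pi * 3 * t)          # signal far beyond the ADC range
y = fold(x)                                  # what the sensor actually records

# if oversampling keeps |x[n+1] - x[n]| < lam, folding commutes with the first
# difference, so the true increments can be read off and summed back up
dx = fold(np.diff(y))                        # equals np.diff(x) here
x_rec = y[0] + np.concatenate(([0.0], np.cumsum(dx)))   # assumes |x[0]| < lam

print(np.max(np.abs(x - x_rec)))             # ~1e-15: exact up to rounding
```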


Phase retrieval framework for direct reconstruction of the projected refractive index applied to ptychography and holography

Johannes Hagemann1, Felix Wittwer1,2, Christian G. Schroer1,3

1CXNS — Center for X-ray and Nano Science CXNS, Deutsches Elektronen-Synchrotron DESY, Notkestraße 85, 22607 Hamburg, Germany; 2Current address: NERSC, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA; 3Department Physik, Universität Hamburg, Luruper Chaussee 149, 22761 Hamburg, Germany

The interaction of an object with a coherent (x-ray) probe often encodes its properties in a complex-valued function, which is then detected in an intensity-only measurement. Phase retrieval methods commonly infer this complex-valued function from the intensity. However, the decoding of the object from the complex-valued function often involves some ambiguity in the phase, e.g., when the phase shift in the object exceeds $2\pi$. Here, we present a phase retrieval framework to directly recover the amplitude and phase of the object. This refractive framework is straightforward to integrate into existing algorithms. As examples, we introduce refractive algorithms for ptychography and near-field holography and demonstrate this method using measured data.


Zero-optics X-ray dark-field imaging using dual energies

Jannis N. Ahlers1, Konstantin M. Pavlov2,1,3, Marcus J. Kitchen1, Kaye S. Morgan1

1Monash University, Australia; 2University of Canterbury, New Zealand; 3University of New England, Australia

Traditional X-ray imaging achieves contrast using the attenuation of photons, making differentiation of materials of similar density difficult. Improvements in the coherence of X-ray sources opened the way for phase changes in a material to be measured in an intensity image. In addition, the scattered component of the X-ray beam has been probed in X-ray dark-field imaging. Novel dark-field imaging techniques show promise in the detection and assessment of samples with significant micro-scale porosity, such as human lungs. Advanced dark-field imaging techniques rely on measuring sample-induced deviations of a patterned and interferometrically probed beam, requiring a highly stable set-up and multiple exposures. Propagation-based imaging (PBI) is an experimentally simple phase-contrast imaging technique which relies on the downstream interference of refracted and diffracted coherent X-rays to reconstruct the sample phase. Recently, PBI has been extended to dark-field reconstruction by modelling the downstream intensity using an X-ray imaging version of the Fokker-Planck diffusion equation [1, 2]. Separating the effects of refraction and diffusion on the beam requires multiple measurements, which was first achieved by imaging the sample at multiple propagation distances [3]. A multi-energy beam creates another possibility; the recent proliferation of energy-discriminating photon-counting detectors has led to increased interest in spectral methods of coherent X-ray imaging [4]. In this talk we present the first results of inverting and solving the Fokker-Planck equation using spectral information, under the assumption of a single-material sample. A linearised model is used to reconstruct the sample's projected thickness and dark-field in simulated and measured images. The strong energy dependence of attenuation presents challenges in reconstruction when deviating from strictly single-material samples. We discuss Fokker-Planck dark-field reconstruction and present a hybrid approach to the inverse problem, based on treating the post-sample wavefront as pseudo-patterned intensity, which improves stability for multi-material samples. Exploiting spectral dependence to reconstruct phase and dark-field would allow imaging without having to move any part of the set-up, and would enable single-exposure imaging when combining a polychromatic source with an energy-discriminating detector. This would avoid registration issues, reduce the required dose, and open the door for time-resolved propagation-based dark-field imaging and fast CT.

[1] K. S. Morgan, D. M. Paganin. Applying the Fokker-Planck equation to grating-based x-ray phase and dark-field imaging, Scientific Reports 9(1): 17465, 2019.

[2] D. M. Paganin, K. S. Morgan. X-ray Fokker-Planck equation for paraxial imaging, Scientific Reports 9(1): 17537, 2019.

[3] T. A. Leatham, D. M. Paganin, K. S. Morgan. X-ray dark-field and phase retrieval without optics, via the Fokker-Planck equation, IEEE Transactions on Medical Imaging, 2023.

[4] F. Schaff, K. S. Morgan, J. A. Pollock, L. C. P. Croton, S. B. Hooper, & M. J. Kitchen. Material Decomposition Using Spectral Propagation-Based Phase-Contrast X-Ray Imaging, IEEE Transactions on Medical Imaging 39(12): 3891–3899, 2020.

 
9:00am - 11:00am  MS23 2: Recent developments in reconstruction methods for inverse scattering and electrical impedance tomography
Location: VG1.103
Session Chair: Roland Griesmaier
Session Chair: Nuutti Hyvönen
 

Inverse medium scattering for a nonlinear Helmholtz equation

Roland Griesmaier

Karlsruher Institut für Technologie, Germany

The linear Helmholtz equation is used to model the propagation of sound waves or electromagnetic waves of small amplitude in inhomogeneous isotropic media in the time-harmonic regime. However, if the amplitudes are large then intensity-dependent material laws are required and nonlinear Helmholtz equations are more appropriate. A prominent example are Kerr-type nonlinear media. In this talk we discuss an inverse medium scattering problem for a class of nonlinear Helmholtz equations \begin{equation*} \Delta u + k^2 u \,=\, - k^2 q(x,|u|)u \,, \qquad x\in\mathbb{R}^d \,, \;d=2,3 \,, \end{equation*} that covers generalized Kerr-type nonlinear media of the form \begin{equation*} q(x,|z|) \,=\, q_0(x) + \sum_{l=1}^L q_l(x)|z|^{\alpha_l} \,, \qquad x\in\mathbb{R}^d \,,\; z\in\mathbb{C} \,, \end{equation*} where $q_0,\ldots,q_L\in L^\infty(\mathbb{R}^d)$ with support in some bounded open set $D\subset\mathbb{R}^d$, the lowest order term satisfies $\mathrm{essinf} q_0>-1$ in $\mathbb{R}^d$, and the exponents fulfill $0<\alpha_1<\cdots<\alpha_L<\infty$.

Assuming the knowledge of a nonlinear far field operator, which maps Herglotz incident waves to the far field patterns of the corresponding unique small solutions of the nonlinear scattering problem, we show that the nonlinear index of refraction is uniquely determined.

This is joint work with Marvin Knöller and Rainer Mandel (KIT).


Linearised inverse conductivity problem: reconstruction and Lipschitz stability for infinite-dimensional spaces of perturbations

Henrik Garde1, Nuutti Hyvönen2

1Aarhus University, Denmark; 2Aalto University, Finland

The linearised inverse conductivity problem is investigated in a two-dimensional bounded simply connected domain with a smooth enough boundary. After extending the linearised problem for square integrable perturbations, the space of perturbations is orthogonally decomposed and Lipschitz stability, with explicit Lipschitz constants, is proven for each of the infinite-dimensional subspaces. The stability estimates are based on using the Hilbert-Schmidt norm for the Neumann-to-Dirichlet boundary map and its Fréchet derivative with respect to the conductivity coefficient. A direct reconstruction method that inductively yields the orthogonal projections of a conductivity coefficient onto the aforementioned subspaces is devised and numerically tested with data simulated by solving the original nonlinear forward problem.


Optimizing electrode positions in electrical impedance tomography for head imaging

Ruma Rani Maity, Nuutti Hyvönen, Antti Hannukainen, Anton Vavilov

Aalto University, Finland

Electrical impedance tomography is an imaging modality for deducing information about the conductivity inside a physical body from boundary measurements of current and voltage by a finite number of contact electrodes. This work applies techniques of Bayesian experimental design to the linearized forward model of impedance tomography in order to select optimal positions for the available electrodes. The aim is to place the electrodes so that the conditional probability distribution of the (discretized) conductivity given the electrode measurements is as localized as possible in the sense of the A- or D-optimality criterion of Bayesian experimental design. The focus is on difference imaging of a human head under the assumption that an MRI or CT image of the patient in question is available. The algorithm is developed in the computational framework introduced in [1].

[1] V. Candiani, A. Hannukainen, N. Hyvönen. Computational framework for applying electrical impedance tomography to head imaging. SIAM J. Sci. Comput. 41 (2019), no. 5, B1034–B1060. https://doi.org/10.1137/19M1245098


Immersed boundary method for electrical impedance tomography in the frame of electrocardiography

Jérémi Dardé1, Niami Nasr1,2, Lisl Weynans2

1Université de Toulouse, Institut de Mathématiques de Toulouse, UMR 5219; 2Univ. Bordeaux, CNRS, INRIA, Bordeaux INP, IHU-LIRYC, IMB

EIT is a non-invasive imaging technique that aims to reconstruct the electrical conductivity distribution inside a domain by imposing electrical currents on the boundary of this domain and measuring the resulting voltages on the same boundary. It has several applications in the medical field, in particular in lung monitoring and stroke detection. Mathematically, the problem, known as Calderón's problem or the inverse conductivity problem, is a severely ill-posed inverse problem. In practical experiments, the currents are driven inside the body of interest through a collection of surface electrodes, no current being driven between the electrodes. For each current pattern, the potential differences between the electrodes are measured. This practical setting is accurately modeled by the Complete Electrode Model (CEM). It takes into account the shape of the electrodes as well as the shunting effect, that is, the thin resistive layer that appears at the interface between the electrodes and the object during the measurements. The CEM is known to correctly predict experimental data, and is therefore widely used in the numerical resolution of both direct and inverse problems related to EIT. The CEM reads as follows:

Find the potentials $u\in H^1(\Omega)$ and $U \in \mathbb{R}^M_\diamond$ such that \begin{equation*} \left\lbrace \begin{array}{rcl} \nabla \cdot (\sigma \nabla u) & = & 0 \text{ in } \Omega, \\ u + z_m \sigma \partial_\nu u & = & U_m \text{ on } E_m, \\ \sigma \partial_\nu u & = & 0 \text{ on } \partial \Omega \setminus \overline{E}, \\ \displaystyle \int_{E_m} \sigma \partial_\nu u \, ds & = & I_m \text{ for all } m \in \left\lbrace 1, \ldots, M \right\rbrace, \end{array} \right. \end{equation*} with $E_m$ the $m$th electrode, $z_m$ the associated contact impedance, and $I \in \mathbb{R}^M_\diamond$ the current pattern, where $$ \mathbb{R}^M_\diamond = \left\lbrace I \in \mathbb{R}^M : \sum_{k=1}^M I_k = 0 \right\rbrace. $$

We propose an immersed boundary method (IBM) for the numerical resolution of the CEM in electrical impedance tomography, which we use as a main ingredient in the resolution of inverse problems in medical imaging. Such a method allows the use of a Cartesian mesh without an accurate discretization of the boundary, which is useful in situations where the boundary is complicated and/or changing. We prove the convergence of our method and illustrate its efficiency on two-dimensional direct and inverse problems.

[1] J. Dardé, N. Nasr, L. Weynans. Immersed boundary method for the complete electrode model in electrical impedance tomography. 2022.

[2] M. Cisternino, L. Weynans. A parallel second order Cartesian method for elliptic interface problems. Communications in Computational Physics 12: 1562–1587, 2012.

 
9:00am - 11:00amMS26 1: Trends and open problems in cryo electron microscopy
Location: VG3.102
Session Chair: Carlos Esteve-Yague
Session Chair: Johannes Schwab
 

Joint Cryo-ET Alignment and Reconstruction with Neural Deformation Fields

Valentin Debarnot1, Sidharth Gupta1,2, Konik Kothari1,2, Ivan Dokmanić1,2

1University of Basel, Switzerland; 2University of Illinois at Urbana-Champaign

We propose a framework to jointly determine the deformation parameters and reconstruct the unknown volume in electron cryotomography (CryoET). CryoET aims to reconstruct three-dimensional biological samples from two-dimensional projections. A major challenge is that we can only acquire projections for a limited range of tilts, and that each projection undergoes an unknown deformation during acquisition. Not accounting for these deformations results in poor reconstruction. The existing CryoET software packages attempt to align the projections, often in a workflow which uses manual feedback. Our proposed method sidesteps this inconvenience by automatically computing a set of undeformed projections while simultaneously reconstructing the unknown volume. We achieve this by learning a continuous representation of the undeformed measurements and deformation parameters. We show that our approach enables the recovery of high-frequency details that are destroyed without accounting for deformations.


Manifold-based Point Cloud Deformations: Theory and Applications to Protein Conformation Processing

Willem Diepeveen1, Carlos Esteve-Yagüe1, Jan Lellmann2, Ozan Öktem3, Carola-Bibiane Schönlieb1

1University of Cambridge, United Kingdom; 2University of Lübeck, Germany; 3KTH–Royal Institute of Technology, Sweden

Motivated by data analysis for protein conformations, we construct a smooth quotient manifold of point clouds and equip it with a non-trivial metric tensor field that models which point clouds are close together and which are far apart. We analyse properties of the Riemannian manifold and obtain cheap-to-compute expressions for important manifold mappings. Furthermore, we investigate potential numerical advantages of using the Riemannian manifold structure in several data processing tasks such as interpolation, computing means and principal component analysis of simulated molecular dynamics (MD) data sets. For the latter, we observe that MD trajectories live on a low-dimensional submanifold in the proposed metric.


Spectral decomposition of atomic structures in heterogeneous cryo-EM

Carlos Esteve-Yague1, Willem Diepeveen1, Ozan Oktem2, Carola-Bibiane Schönlieb1

1University of Cambridge, United Kingdom; 2KTH Stockholm, Sweden

We consider the problem of recovering the three-dimensional atomic structure of a flexible macromolecule from a heterogeneous single-particle cryo-EM dataset. Our method combines prior biological knowledge about the macromolecule of interest with the cryo-EM images. The goal is to determine the deformation of the atomic structure in each image with respect to a specific conformation, which is assumed to be known. The prior biological knowledge is used to parametrize the space of possible atomic structures. The parameters corresponding to each conformation are then estimated as a linear combination of the leading eigenvectors of a graph Laplacian, constructed by means of the cryo-EM dataset, which approximates the spectral properties of the manifold of conformations of the underlying macromolecule.
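
A schematic sketch (our illustration, with assumed data and sizes) of the last step: build a graph Laplacian on the dataset, take its leading eigenvectors, and fit per-conformation parameters as a linear combination of them.

```python
# Hedged sketch: leading graph-Laplacian eigenvectors as a smooth basis
# for per-conformation parameters; data, bandwidth and sizes are illustrative.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 10))            # stand-in image features
d2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
W = np.exp(-d2 / np.median(d2))               # Gaussian affinity matrix
L = np.diag(W.sum(1)) - W                     # unnormalized graph Laplacian

vals, vecs = np.linalg.eigh(L)                # eigenvalues in ascending order
basis = vecs[:, :8]                           # leading (smoothest) eigenvectors

theta = rng.standard_normal(500)              # stand-in deformation parameter
coeffs, *_ = np.linalg.lstsq(basis, theta, rcond=None)  # linear-combination fit
```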
 
9:00am - 11:00amMS30 1: Inverse Problems on Graphs and Machine Learning
Location: VG2.103
Session Chair: Emilia Lavie Kyllikki Blåsten
Session Chair: Matti Lassas
Session Chair: Jinpeng Lu
 

Continuum limit for lattice Hamiltonians

Hiroshi Isozaki

University of Tsukuba, Japan

We consider a Hamiltonian perturbed by a potential on an infinite periodic lattice. We are interested in the convergence of the solution of the lattice model to that of the continuous model as the mesh size tends to 0. For some lattices such as the square, triangular and hexagonal lattices, we show that the scattering solutions (i.e. the solutions associated with the continuous spectrum) converge to the solution of the Schrödinger equation in the continuous model. For the case of the hexagonal lattice, we can also derive the convergence to the massless Dirac equation. The proof relies on the micro-local calculus for lattice Schrödinger operators and the classical method of the limiting absorption principle.


Quantum computing algorithms for inverse problems on graphs

Joonas Ilmavirta1, Matti Lassas2, Jinpeng Lu2, Lauri Oksanen2, Lauri Ylinen2

1University of Jyväskylä, Finland; 2University of Helsinki, Finland

Quantum computing is a technology that utilizes quantum mechanical phenomena to do computation faster than is believed to be possible with classical computers. It is a rapidly developing interdisciplinary field comprising physics, computer science, and mathematics. It is predicted that in the future quantum computers will enable scientists to solve problems outside the capabilities of classical computers in many fields, such as molecular simulations in drug discovery and complex combinatorics problems.

In this talk, we consider a quantum algorithm for an inverse travel time problem on a graph. This problem is a discrete version of the inverse travel time problem encountered in seismic and medical imaging and the boundary rigidity problem studied in Riemannian geometry. We also consider the computational complexity of the inverse problem, and show that the quantum algorithm has a quadratic improvement in computational cost when compared to the standard classical algorithm.



Inverse problems for the graph Laplacian

Emilia Blåsten1, Hiroshi Isozaki2, Matti Lassas3, Jinpeng Lu3

1LUT University, Finland; 2University of Tsukuba, Japan; 3University of Helsinki, Finland

We study the discrete version of Gel'fand's inverse spectral problem, formulated as follows for a finite weighted graph and the graph Laplacian on it. Suppose we are given a subset $B$ of vertices and the spectral data $(\lambda_j,\phi_j|_B)$, where $\lambda_j$ are the eigenvalues of the graph Laplacian and $\phi_j|_B$ are the values of the corresponding eigenfunctions on $B$. We ask whether these data uniquely determine the graph structure and the weights. In general, this problem is not uniquely solvable without assumptions on the graph or the set $B$, due to counterexamples. We introduce a so-called Two-Points Condition on graphs (with respect to $B$), and prove that the inverse spectral problem is uniquely solvable under this condition. We also consider inverse problems for random walks on finite graphs. We show that under the Two-Points Condition, the graph structure and the transition matrix of the random walk can be uniquely recovered from the distributions of the first passage times on $B$.
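
For concreteness, a tiny sketch (ours, on a toy graph) of the spectral data $(\lambda_j, \phi_j|_B)$ for a weighted graph Laplacian:

```python
# Hedged sketch: spectral data (eigenvalues, eigenvectors restricted to B)
# of a weighted graph Laplacian. Graph and vertex set B are illustrative.
import numpy as np

W = np.array([[0, 2, 0, 1],
              [2, 0, 3, 0],
              [0, 3, 0, 1],
              [1, 0, 1, 0]], dtype=float)    # symmetric edge weights
L = np.diag(W.sum(1)) - W                    # graph Laplacian

lam, phi = np.linalg.eigh(L)                 # eigenpairs, ascending order
B = [0, 3]                                   # observed vertex subset
spectral_data = [(lam[j], phi[B, j]) for j in range(len(lam))]
```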


Inverse problems on manifolds via graph-based semi-supervised learning

Daniel Sanz-Alonso1, Ruiyi Yang2

1University of Chicago, United States of America; 2Princeton University, United States of America

In this talk I will introduce graphical representations of stochastic partial differential equations with the goals of approximating Matérn Gaussian fields on manifolds and generalizing the Matérn model to abstract point clouds. I will show that these graph-based prior models can give optimal posterior contraction in semi-supervised learning, and illustrate their use in various inverse problems on manifolds.
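
A rough sketch (our illustration, with assumed parameters) of the graph-based Matérn construction: a sample is obtained by applying a fractional power of $(\kappa^2 I + L)$ to white noise on the point cloud.

```python
# Hedged sketch: Matérn-type Gaussian field on a point cloud via the graph
# Laplacian; kappa, s, and the affinity kernel are illustrative choices.
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(size=(300, 2))                   # abstract point cloud
d2 = ((X[:, None] - X[None, :])**2).sum(-1)
W = np.exp(-d2 / 0.01)
L = np.diag(W.sum(1)) - W

kappa, s = 5.0, 2.0                              # inverse length-scale, smoothness
vals, vecs = np.linalg.eigh(L)
op = vecs @ np.diag((kappa**2 + vals)**(-s / 2)) @ vecs.T  # (kappa^2 I + L)^(-s/2)
sample = op @ rng.standard_normal(300)           # one Matérn-like field sample
```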
 
9:00am - 11:00amMS32 1: Parameter identification in time dependent partial differential equations
Location: VG1.104
Session Chair: Barbara Kaltenbacher
Session Chair: William Rundell
 

Spacetime finite element methods for inverse and control problems subject to the wave equation

Lauri Oksanen1, Spyros Alexakis2, Ali Feizmohammadi3

1University of Helsinki, Finland; 2University of Toronto, Canada; 3Fields Institute, Canada

There is a well-known duality between inverse initial source problems and control problems for the wave equation, and the analysis of both boils down to so-called observability estimates. I will present recent results on the numerical analysis of these problems. The inverse initial source problem gives a model for the acoustic step of photoacoustic tomography.



Mathematical challenges in full waveform inversion

Lukas Pieronek

Karlsruhe Institute of Technology, Germany

Full waveform inversion (FWI) is a state-of-the-art geophysical imaging method that exploits seismic measurements to reconstruct shallow earth parameters. Mathematically, this translates into a non-linear inverse problem where the seismic measurements are modeled as solutions to a time-dependent wave-type system and the searched-for parameters are (some of) the coefficients. In order for the numerical solution to be successful, both the parameter and measurement spaces need to be selected carefully: for instance, the reconstruction of sharp material interfaces requires non-smooth parameter spaces, which are numerically difficult to cope with. Further, to minimize artifacts and spurious reconstructions, the resulting non-linear objective functional should be as convex as possible, which constrains the choice of compatible metrics for the seismic measurements. In this talk, we present novel ideas and solutions regarding these challenges in FWI.


Optimality of pulse energy for photoacoustic tomography

Barbara Kaltenbacher, Phuoc Truong Huynh

University of Klagenfurt, Austria

Photoacoustic tomography (PAT) is a rapidly evolving imaging technique that combines the high contrast of optical imaging with the high resolution of ultrasound imaging. Using typically noisy measurement data, one is interested in identifying some parameters in the governing PDEs of the photoacoustic tomography system. Hence, an essential ingredient in estimating these parameters is the design of the system, which typically involves multiple factors that can impact the accuracy of reconstruction. In this work, employing a Bayesian approach to a PAT inverse problem, we are interested in optimizing the laser pulse of the PAT system in order to minimize the uncertainty of the reconstructed parameter. Additionally, we take into account wave propagation attenuation for the inverse problem of PAT, which is governed by a fractionally damped wave equation. Finally, we illustrate the effectiveness of our proposed method using a numerical simulation.


Bi-level iterative regularization for inverse problems in nonlinear PDEs

Tram Nguyen

Max Planck Institute for Solar System Research, Germany

We investigate the ill-posed inverse problem of recovering spatially dependent parameters in nonlinear evolution PDEs from linear measurements. We propose a bi-level Landweber scheme, where the upper-level parameter reconstruction embeds a lower-level state approximation. This can be seen as combining the classical reduced setting and the newer all-at-once setting, allowing us, respectively, to utilize the well-definedness of the parameter-to-state map and to bypass having to solve nonlinear PDEs. Using this, we derive stopping rules for the lower- and upper-level iterations and establish convergence of the bi-level method.
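
For orientation, here is a plain (single-level, reduced-setting) Landweber iteration with a discrepancy-principle stopping rule on a linear toy problem; the bi-level scheme of the talk additionally nests a state approximation inside each parameter update (not shown, our sketch only).

```python
# Hedged sketch: classical Landweber iteration with discrepancy stopping
# on a linear toy problem A x = y. Sizes and noise level are illustrative.
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((80, 40)) / 10
x_true = np.zeros(40); x_true[5:10] = 1.0
noise = 0.01 * rng.standard_normal(80)
y = A @ x_true + noise

omega = 1.0 / np.linalg.norm(A, 2)**2     # step size <= 1/||A||^2
tau, delta = 1.5, np.linalg.norm(noise)   # discrepancy parameters

x = np.zeros(40)
for _ in range(10000):
    residual = A @ x - y
    if np.linalg.norm(residual) <= tau * delta:   # Morozov stopping rule
        break
    x = x - omega * A.T @ residual
```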
 
9:00am - 11:00amMS33 1: Quantifying uncertainty for learned Bayesian models
Location: VG1.105
Session Chair: Marta Malgorzata Betcke
Session Chair: Martin Holler
 

Equivariant Neural Networks for Indirect Measurements

Nick Heilenkötter, Matthias Beckmann

University of Bremen, Germany

In recent years, deep learning techniques have shown great success in various tasks related to inverse problems, where a target quantity of interest can only be observed through indirect measurements by a forward operator. Common approaches apply deep neural networks in a post-processing step to the reconstructions obtained by classical reconstruction methods. However, the latter can be computationally expensive and may introduce artifacts that are not present in the measured data, which, in turn, can deteriorate the performance on the given task.

To overcome these limitations, we propose a class of equivariant neural networks that can be directly applied to the measurements to solve the desired task. To this end, we build appropriate network structures by developing layers that are equivariant with respect to data transformations induced by symmetries in the domain of the forward operator. We rigorously analyze the relation between the measurement operator and the resulting group representations and prove a representer theorem that characterizes the class of linear operators that translate between a given pair of group actions.

Based on this theory, we extend the existing concepts of Lie group equivariant deep learning to inverse problems and introduce new representations that result from the involved measurement operations. This allows us to efficiently solve classification, regression or even reconstruction tasks based on indirect measurements, also for very sparse data problems where a classical reconstruction-based approach may be hard or even impossible. To illustrate the effectiveness of our approach, we perform numerical experiments on selected inverse problems and compare our results to existing methods.


Bayesian MRI reconstruction with joint uncertainty estimation using diffusion priors

Guanxiong Luo1, Moritz Blumenthal1,2, Martin Heide1, Martin Uecker1,2,3,4

1University Medical Center Göttingen, Germany; 2Institute of Biomedical Imaging, Graz University of Technology, Graz, Austria; 3German Centre for Cardiovascular Research (DZHK), Partner Site Göttingen, Germany; 4Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, Germany

The application of generative models in MRI reconstruction is shifting researchers' attention from unrolled reconstruction networks to probabilistic methods, which can be used for unsupervised medical image reconstruction [1-4]. We formulate the image reconstruction problem from the perspective of Bayesian inference, which enables efficient sampling from the learned posterior probability distributions [1-2]. Differently from conventional deep learning-based MRI reconstruction techniques, samples are drawn from the posterior distribution given the measured k-space using the Markov chain Monte Carlo (MCMC) method. Because the generative model can be learned from an image database independently of the forward operator, the same pre-trained models can be applied to k-space data acquired with different sampling schemes or receive coils. Here, we present additional results in terms of the uncertainty of the reconstruction, the transferability of the learned information, and a comparison using data from the fastMRI challenge.
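
Schematically (our sketch, not the authors' implementation), such posterior sampling combines the data-fidelity gradient with a learned prior score inside an MCMC step; an unadjusted Langevin update with a stand-in Gaussian prior score illustrates the structure:

```python
# Hedged sketch: unadjusted Langevin posterior sampling with a generic
# learned prior score. `prior_score` is an assumed stand-in for a
# pre-trained generative model; A is a toy "k-space" forward operator.
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((60, 30)) / 8
x_true = rng.standard_normal(30)
y = A @ x_true + 0.05 * rng.standard_normal(60)
sigma2 = 0.05**2

def prior_score(x):              # stand-in for the learned score
    return -x                    # = grad log N(0, I)

tau, x = 1e-4, np.zeros(30)
samples = []
for k in range(20000):
    grad = A.T @ (y - A @ x) / sigma2 + prior_score(x)  # grad log posterior
    x = x + tau * grad + np.sqrt(2 * tau) * rng.standard_normal(30)
    if k > 2000:
        samples.append(x)
post_mean, post_std = np.mean(samples, 0), np.std(samples, 0)  # uncertainty map
```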

[1] G. Luo, M. Heide, M. Uecker. Using data-driven Markov chains for MRI reconstruction with Joint Uncertainty Estimation. Proc. Intl. Soc. Mag. Reson. Med. 30: 0298.

[2] G. Luo, M. Blumenthal, M. Heide, M. Uecker. Bayesian MRI reconstruction with joint uncertainty estimation using diffusion models. Magn. Reson. Med. 90: 295-311, 2023.

[3] A. Jalal, M. Arvinte, G. Daras, E. Price, A. Dimakis, J. Tamir. Robust Compressed Sensing MRI with Deep Generative Priors. Neural Information Processing Systems 34: 14938–14954, 2021.

[4] H. Chung, C. Ye. Score-based diffusion models for accelerated MRI, Medical Image Analysis 80: 102479, 2022.


Utilizing variational autoencoders in the Bayesian inverse problem of photoacoustic tomography

Teemu Sahlström, Tanja Tarvainen

University of Eastern Finland, Finland

Photoacoustic tomography (PAT) is an imaging modality based on the photoacoustic effect. In the inverse problem of PAT, an initial pressure distribution induced by absorption of externally introduced light is estimated from measured photoacoustic data. In recent years, the use of machine learning in the inverse problem of PAT has gained significant interest. However, many of these machine learning-based methods do not provide information on the uncertainty of the reconstructed image.

In this work, we propose a machine learning-based framework for the Bayesian inverse problem of PAT. The approach is based on the variational autoencoder (VAE) and the recently proposed uncertainty quantification variational autoencoder (UQ-VAE). In the VAE and UQ-VAE, an approximation of the true underlying posterior distribution is estimated by minimizing a divergence between the true and estimated posterior distributions using a neural network. The approach is evaluated using numerical simulations in both full and limited view measurement geometries with multiple levels of measurement noise.



Scalable Bayesian uncertainty quantification with learned convex regularisers

Tobías Ignacio Liaudat1, Marta Betcke1, Jason D. McEwen1, Marcelo Pereyra2

1University College London, United Kingdom; 2Heriot Watt University, United Kingdom

The last decade brought substantial progress in computational imaging techniques for current and next-generation interferometric telescopes, such as the SKA. Imaging methods have exploited sparsity and, more recently, deep learning architectures with remarkable results. Despite good reconstruction quality, obtaining reliable uncertainty quantification (UQ) remains a common pitfall of most imaging methods. The UQ problem can be addressed by reformulating the inverse problem in the Bayesian framework. The posterior probability density function provides a comprehensive understanding of the uncertainties. However, computing the posterior in high-dimensional settings is an extremely challenging task. Posterior probabilities are often computed with sampling techniques, but these cannot yet cope with the high-dimensional settings arising in radio imaging.

This work proposes a method to address uncertainty quantification in radio-interferometric imaging with data-driven (learned) priors for very high-dimensional settings. Our model uses an analytic, physically motivated model for the likelihood and exploits a prior learned from data. The proposed prior can encode complex information learned implicitly from training data and improves on results obtained with handcrafted priors (e.g., wavelet-based sparsity-promoting priors). We exploit recent advances in neural-network-based convex regularisers for the prior, which allow us to ensure the log-concavity of the posterior while still being expressive. We leverage probability concentration phenomena of log-concave posterior functions that let us obtain information about the posterior while avoiding the use of sampling techniques. Our method only requires the maximum-a-posteriori (MAP) estimate and evaluations of the likelihood and prior potentials. We rely on convex optimisation methods to compute the MAP estimate, which are known to be much faster and to scale better with dimension than sampling strategies. The proposed method allows us to compute local credible intervals, i.e., Bayesian error bars, and to perform hypothesis testing of structure on the reconstructed image. We demonstrate our method by reconstructing simulated radio-interferometric images and carrying out fast and scalable uncertainty quantification.
 
9:00am - 11:00amMS45 2: Optimal Transport meets Inverse Problems
Location: VG0.111
Session Chair: Marcello Carioni
Session Chair: Jan-F. Pietschmann
Session Chair: Matthias Schlottbom
 

Inverse problems in imaging and information fusion via structured multimarginal optimal transport

Johan Karlsson1, Yongxin Chen2, Filip Elvander3, Isabel Haasler4, Axel Ringh5

1KTH Royal Institute of Technology; 2Georgia Institute of Technology; 3Aalto University; 4École polytechnique fédérale de Lausanne; 5Chalmers University of Technology and the University of Gothenburg

The optimal mass transport problem is a classical problem in mathematics, dating back to 1781 and the work of G. Monge, who formulated an optimization problem for minimizing the cost of transporting soil for the construction of forts and roads. Historically, the optimal mass transport problem has been widely used in economics, e.g., in planning and logistics, and was at the heart of the 1975 Nobel Memorial Prize in Economic Sciences. In the last two decades there has been a rapid development of theory and methods for optimal mass transport, and the ideas have attracted considerable attention in several economic and engineering fields. These developments have led to a mature framework for optimal mass transport with computationally efficient algorithms that can be used to address problems in many areas.

In this talk, we will consider optimization problems consisting of optimal transport costs together with other functionals to address inverse problems in many domains, e.g., in medical imaging, radar imaging, and spectral estimation. This is a flexible framework that allows for incorporating forward models and specifying dynamics of the object and other dependencies. These problems can often be formulated as multi-marginal optimal transport problems, and we show how common problems, such as barycenter and tracking problems, can be seen as special cases. This naturally leads to structured optimal transport problems, which can be solved efficiently using customized methods inspired by the Sinkhorn iterations.
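
As background, the core Sinkhorn iteration for entropically regularized optimal transport between two discrete marginals, which the customized multi-marginal methods generalize (our sketch; all variables are illustrative):

```python
# Hedged sketch: Sinkhorn iterations for entropic optimal transport
# between two discrete marginals mu and nu with ground cost C.
import numpy as np

n = 50
x = np.linspace(0, 1, n)
C = (x[:, None] - x[None, :])**2                   # quadratic ground cost
mu = np.ones(n) / n                                # source marginal
nu = np.exp(-(x - 0.7)**2 / 0.01); nu /= nu.sum()  # target marginal

eps = 1e-2
K = np.exp(-C / eps)                               # Gibbs kernel
u, v = np.ones(n), np.ones(n)
for _ in range(500):                               # Sinkhorn fixed-point steps
    u = mu / (K @ v)
    v = nu / (K.T @ u)
plan = u[:, None] * K * v[None, :]                 # entropic transport plan
```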


Wasserstein PDE G-CNN

Olga Mula, Daan Bon

TU Eindhoven, The Netherlands

PDE G-CNNs are neural networks in which each layer is seen as a set of PDE solvers, where geometrically meaningful PDE coefficients become the layer's trainable weights. In this talk, we present a contribution on building new layers that are based either on Wasserstein gradient flows or on normalizing measures inspired by optimal transport maps. The tunable parameters are connected either to parameters of the gradient flow or to the transport maps, so the whole procedure can be interpreted as an inverse problem.


An Optimal Transport-based approach to Total-Variation regularization for the Diffusion MRI problem

Rodolfo Assereto1, Kristian Bredies1, Marion I. Menzel2, Emanuele Naldi3, Claudio Verdun4

1Karl-Franzens-Universität Graz, Austria; 2GE Global Research, Munich, Germany; 3Technische Universität Braunschweig, Germany; 4Technische Universität München, Germany

Diffusion Magnetic Resonance Imaging (dMRI) is a non-invasive imaging technique that draws structural information from the interaction between water molecules and biological tissues. Common ways of tackling the derived inverse problem include, among others, Diffusion Tensor Imaging (DTI), High Angular Resolution Diffusion Imaging (HARDI) and Diffusion Spectrum Imaging (DSI). However, these methods are structurally unable to recover the full diffusion distribution, providing only partial information about particle displacement. In our work, we introduce a Total-Variation (TV) regularization defined from an optimal transport perspective using 1-Wasserstein distances. Such a formulation produces a variational problem that can be handled by well-known algorithms enjoying good convergence properties, such as the primal-dual proximal method of Chambolle and Pock. It allows for the reconstruction of the complete diffusion spectrum from measured undersampled k/q-space data.


A game-based approach to learn interaction rules for systems of rational agents

Mauro Bonafini1, Massimo Fornasier2, Bernhard Schmitzer3

1University of Verona, Italy; 2Technical University of Munich, Germany; 3University of Göttingen, Germany

The modelling of the dynamics of a system of rational agents may take inspiration from various sources, depending on the particular application one has in mind. One may, for example, model interactions via a Newtonian-like system, taking inspiration from physics, or via a game-based approach stemming from classical game theory or mean field games. In both cases, once the well-posedness of the proposed model is ensured, the model itself can be used as a tool to learn from real-world observations, by learning (some) unknown components of it.

In [1], the authors study a class of spatially inhomogeneous evolutionary games to model the interactions between a finite number of agents: each agent evolves in space with a velocity which depends on a certain underlying mixed strategy, in turn evolving according to a replicator dynamic. In this talk we start from such a formulation and introduce an entropic limiting version of it, which boils down to a purely spatial ODE. For a bounded set of pure strategies $U \subset \mathbb{R}^u$, a probability measure $0 < \eta \in P(U)$ on $U$, an "entropic" parameter $\varepsilon>0$, and maps $e \colon \mathbb{R}^d \times U \to \mathbb{R}$ and $J \colon \mathbb{R}^d \times U \times \mathbb{R}^d \to \mathbb{R}$, the $N$-agent system we consider is the following: $$ \begin{aligned} \partial_t x_i(t) &= v_i^J(x_1(t),\dots,x_N(t)) \quad \text{for } i = 1,\dots,N,\\ v_i^J(x_1,\dots,x_N) &= \int_U e(x_i,u)\, \sigma_{i}^J(x_1,\ldots,x_N)(u)\,\mathrm{d}\eta(u),\\ \sigma_{i}^J(x_1,\ldots,x_N) &= \frac{\exp\left(\tfrac{1}{\varepsilon N}\sum_{j=1}^N J(x_i,\cdot,x_j)\right)}{\int_U \exp\left(\tfrac{1}{\varepsilon N}\sum_{j=1}^N J(x_i,v,x_j)\right)\,\mathrm{d}\eta(v)}. \end{aligned} $$ We study the well-posedness and the mean field limit of such a system, and use it as the backbone of a learning procedure. In particular, we focus on the learnability of the interaction kernel $J$, all the rest being given. Building on ideas of [3, 4, 5], we infer $J$ by penalizing the empirical mean squared error between observed and predicted velocities, and also consider penalizing the discrepancy between observed and predicted mixed strategies. We study the quality of the inferred kernel both as $N$ increases (i.e., as we observe an increasingly high number of agents) and in the limit of repeated observations with fixed $N$ (i.e., as we have repeated observations of the same number of agents). We show the effectiveness of the proposed inference on many different examples, from classical Newtonian systems to systems modelling pedestrian dynamics.
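
To make the displayed dynamics concrete, here is a small forward-simulation sketch (ours) with the illustrative choices $d=1$, $e(x,u)=u$, $J(x,u,y)=-(u-(y-x))^2$, and $\eta$ uniform on a discretized strategy set $U$:

```python
# Hedged sketch: forward simulation of the entropic N-agent system with
# illustrative choices d = 1, e(x,u) = u, J(x,u,y) = -(u - (y - x))^2,
# and eta uniform on a discretized strategy set U.
import numpy as np

rng = np.random.default_rng(5)
N, eps, dt, steps = 20, 0.5, 0.05, 200
U = np.linspace(-2.0, 2.0, 41)               # discretized pure strategies
x = rng.uniform(-1, 1, N)                    # initial positions

def J(xi, u, xj):                            # illustrative interaction kernel
    return -(u - (xj - xi))**2               # prefer moving towards xj

for _ in range(steps):
    v = np.empty(N)
    for i in range(N):
        score = np.mean([J(x[i], U, x[j]) for j in range(N)], axis=0) / eps
        sigma = np.exp(score - score.max())
        sigma /= sigma.sum()                 # entropic mixed strategy on U
        v[i] = (U * sigma).sum()             # velocity = E_sigma[e(x_i, u)]
    x = x + dt * v                           # explicit Euler step
```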

[1] L. Ambrosio, M. Fornasier, M. Morandotti, G. Savaré. Spatially inhomogeneous evolutionary games, Communications on Pure and Applied Mathematics 74.7: 1353-1402, 2021.

[2] M. Bonafini, M. Fornasier, B. Schmitzer. Data-driven entropic spatially inhomogeneous evolutionary games, European Journal of Applied Mathematics 34.1: 106-159, 2023.

[3] M. Bongini, M. Fornasier, M. Hansen, M. Maggioni. Inferring interaction rules from observations of evolutive systems I: The variational approach, Mathematical Models and Methods in Applied Sciences 27.05: 909-951, 2016.

[4] F. Cucker, S. Smale. On the mathematical foundations of learning, Bulletin of the American mathematical society 39.1: 1-49, 2002.

[5] F. Lu, M. Maggioni, S. Tang. Learning interaction kernels in heterogeneous systems of agents from multiple trajectories, The Journal of Machine Learning Research 22.1: 1518-1584, 2021.
 
9:00am - 11:00amMS47 3: Scattering and spectral imaging: inverse problems and algorithms
Location: VG3.101
Session Chair: Eric Todd Quinto
Session Chair: Gael Rigaud
 

Analytic and Deep learning-based Inversions in Circular Compton Scattering Tomography

Mai K. Nguyen1, Cécilia Tarpau2, Javier Cebeiro3, Ishak Ayad1

1CY Cergy Paris University, France; 2Maxwell Institute for Mathematical Sciences, Bayes Center, University of Edinburgh, Edinburgh, EH8 9BT, United Kingdom and School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh EH14 4AS, United Kingdom; 3Centro de Matemática Aplicada, Universidad Nacional de San Martín, Buenos Aires, Argentina

Circular Compton scattering tomography (CCST), in which a fixed radiation source and a number of regularly spaced detectors are positioned on a fixed circular frame, was recently proposed [1]. It has multiple advantages: a compact and motion-free system, the possibility of combination with classic fan-beam CT (computed tomography) as a bi-imaging system, and the capacity to scan both small and large objects.

In the case where the detectors are collimated so as to separate scattered photons coming from the two opposite sides of the source-detector segment, the modelling of CCST's data acquisition leads to a Radon transform on a family of arcs of circles passing through a fixed point (the point source). The analytical inversion of this Radon transform is derived from Cormack's earlier works.

In the case of non-collimated detectors, the corresponding Radon transform is defined on a specific family of double circular arcs and is named the double circular arc Radon transform (DCART). An exact inverse formula for this new Radon transform on pairs of circular arcs is presently not available. Recently, deep learning-based techniques have appeared as promising alternatives for solving the ill-posed inverse problems of CT reconstruction from limited-angle and sparse-view projection data. In our work we propose a neural network architecture acting on both the image and data domains. The particularity of this architecture lies in its capability to map the projections (Radon domain) to the image domain at different scales of the data while extracting important image features used in the reconstruction. The obtained results suggest that removing the collimators at the detectors in CCST is feasible thanks to deep learning-based techniques.

Another way to bypass the collimators at the detectors is to design a CST with a single detector rotating around a fixed source. The corresponding Radon transform and its inverse are established in [2,3], but this CST is no longer a motion-free system.

[1] C. Tarpau, J. Cebeiro, M. A. Morvidone, M. K. Nguyen. A new concept of Compton scattering tomography and the development of the corresponding circular Radon transform, IEEE Transactions on Radiation and Plasma Medical Sciences (IEEE-TRPMS) 4.4: 433-440, 2020. https://doi.org/10.1109/TRPMS.2019.2943555

[2] C. Tarpau, J. Cebeiro, M. K. Nguyen, G. Rollet, M. A. Morvidone. Analytic inversion of a Radon transform on double circular arcs with applications in Compton Scattering Tomography, IEEE Transactions on Computational Imaging (IEEE-TCI) 6: 958-967, 2020. https://doi.org/10.1109/TCI.2020.2999672

[3] J. Cebeiro, C. Tarpau, M. A. Morvidone, D. Rubio, M. K. Nguyen, On a three-dimensional Compton scattering tomography system with fixed source, Inverse Problems, Special issue on Modern Challenges in Imaging 37: 054001, 2021. https://doi.org/10.1088/1361-6420/abf0f0.


On a cylindrical scanning modality in three-dimensional Compton scatter tomography

James Webber

Brigham and Women's Hospital, United States of America

We present injectivity and microlocal analyses of a new generalized Radon transform, R, which has applications to a novel scanner design in 3-D Compton Scattering Tomography (CST), which we also introduce here. Using Fourier decomposition and Volterra equation theory, we prove that R is injective and show that the image solution is unique. Using microlocal analysis, we prove that R satisfies the Bolker condition (sometimes called the “Bolker assumption”), and we investigate the edge detection capabilities of R. This has important implications regarding the stability of inversion and the amplification of measurement noise. This paper provides the theoretical groundwork for 3-D CST using the proposed scanner design.
 
9:00am - 11:00amMS53: Uniqueness and stability in inverse problems for partial differential equations
Location: VG3.104
Session Chair: Sonia Foschiatti
Session Chair: Elisa Francini
Session Chair: Eva Sincich
 

Stability for the inverse problem of the determination of an inclusion in a Schrödinger-type equation using Cauchy data

Sonia Foschiatti

Università degli Studi di Trieste, Italy

We consider the stability issue for a broad class of inverse problems described by second-order elliptic equations with anisotropic and scalar coefficients having a finite-dimensional structure. This class of problems encompasses the well-studied conductivity equation, the Helmholtz equation and the Schrödinger equation. The applications of this study range from medicine (for example EIT, where the coefficient to be reconstructed is the conductivity) to the reconstruction of the wave speed in a medium. It is well known that these inverse problems are ill-posed.

In this talk we prove a logarithmic stability estimate for the inverse problem of determining an inclusion in terms of local Cauchy data, since the Dirichlet-to-Neumann map that encodes the data at the boundary is not always available. This talk is based on joint work with Eva Sincich.


Refined instability estimates for two inverse problems

Jenn-Nan Wang

National Taiwan University, Taiwan

Many inverse problems are known to be ill-posed. The ill-posedness can be manifested by an instability estimate of exponential type, first derived by Mandache. Inspired by Mandache's idea, in this talk I would like to present refinements of the instability estimates for two inverse problems: the inverse inclusion problem and the inverse scattering problem. The aim is to derive explicitly the dependence of the instability estimates on key parameters.

The first topic of this talk is to show how the instability depends on the depth of the hidden inclusion and the conductivity of the background medium. The second topic is to justify the optimality of increasing stability in determining the near-field of a radiating solution of the Helmholtz equation from the far-field pattern.


Stability estimates for the inverse fractional conductivity problem

Jesse Railo

University of Cambridge, United Kingdom

We study the stability of an inverse problem for the fractional conductivity equation on bounded smooth domains. We obtain a logarithmic stability estimate for the inverse problem under suitable a priori bounds on the globally defined conductivities. The argument has three main ingredients: 1. the logarithmic stability of the related inverse problem for the fractional Schrödinger equation by Rüland and Salo; 2. the Lipschitz stability of the exterior determination problem; 3. utilizing and identifying nonlocal analogies of Alessandrini's work on the stability of the classical Calderón problem. The main contribution of the article is the resolution of the technical difficulties related to the last mentioned step. Furthermore, we show the optimality of the logarithmic stability estimates, following the earlier works by Mandache on the instability of the inverse conductivity problem, and by Rüland and Salo on the analogous problem for the fractional Schrödinger equation.


Uniqueness and stability for anisotropic inverse problems

Romina Gaburro

University of Limerick, Ireland

In this talk we investigate the issue of uniqueness and stability for certain inverse problems whose forward problem is modelled by a second-order elliptic partial differential equation. As is well known, there is a fundamental obstruction to uniquely determining physical properties of anisotropic materials from boundary maps/measurements. In fact, any diffeomorphism of the domain under investigation that keeps the domain's boundary fixed changes the material's properties without changing the boundary measurements. In this talk we will provide some positive answers to the issues of uniqueness and stability for certain types of anisotropy in terms of the corresponding boundary measurements.
 
11:00am - 11:20amC5: Coffee Break
Location: ZHG Foyer
11:20am - 12:10pmPl 5: Plenary lecture
Location: ZHG 011
Session Chair: Gunther Uhlmann
 

Geometric inverse problems in 2D: a transport twistor perspective

Gabriel Paternain

University of Cambridge, United Kingdom

I will discuss some landmark results in geometric inverse problems in 2D from the point of view of the transport twistor space, a natural complex surface designed to be sensitive to the transport operator (geodesic vector field). Towards the end of the lecture I will present some recent developments and open questions motivated by this point of view.
 
12:15pm - 1:30pmP: Poster Session
Location: ZHG Foyer
 

Adaptive Method for Bayesian EEG/MEG Source Localization to Support Treatment of Focal Epilepsy

Joonas Lahtinen1, Alexandra Koulouri1, Tim Erdbrügger2, Carsten H. Wolters2, Sampsa Pursiainen1

1Computing Sciences, Tampere University, Korkeakoulunkatu 3, Tampere 33072, Finland; 2Institute for Biomagnetism and Biosignalanalysis, University of Münster, Malmedyweg 15, D-48149 Münster, Germany

Non-invasive electrophysiological brain stimulation techniques such as tES and TMS can provide a potential alternative treatment for drug-resistant focal epileptic patients when a surgical operation to remove the pathological tissue is not feasible. Choosing an appropriate stimulation montage is possible only if the location of the epileptogenic zone (EZ) is known sufficiently well. It is easiest for the patient if the EZ is localized non-invasively based on EEG/MEG measurements. Non-invasive EEG/MEG source localization nevertheless poses a challenging inverse problem, the solution of which can be highly sensitive to selected model parameters [1,2].

We introduce a new standardized and adaptive Bayesian method which we show to (1) reconstruct focal sources accurately and (2) perform robustly with respect to inherent model uncertainties. Our approach follows a hierarchical posterior distribution in which the model-related free parameters are automatically tuned as described in [3,4]. As we have shown previously, the present scheme allows us to obtain sparse vectors to represent the neural activity distribution. In particular, the solution is a pair $(x, \gamma)$ which is obtained via an iterative algorithm that alternatingly maximizes the posterior with respect to $x$ and the hyperparameter $\gamma$ while applying the standardization at each step.

We demonstrate through simulation that our approach localizes a focal epileptic zone for synthetic interictal EEG data. These simulation results are complemented with results obtained with experimental data comparing the source localization outcome to a reference zone designated by specialists. As comparison techniques we use Standardized Shrinking LORETA-FOCUSS (SSLOFO) and Standardized low-resolution brain electromagnetic tomography (sLORETA) which have been used successfully in localization of EZ both with ictal and interictal presurgical data [5,6,7]. Our results suggest that the proposed approach localizes EZ within 1 cm accuracy. We suggest that the reconstructions obtained are more focal compared to those obtained with sLORETA, consequently, making the localization less open to interpretation.

[1] M. B. H. Hall, et al. An evaluation of kurtosis beamforming in magnetoencephalography to localize the epileptogenic zone in drug resistant epilepsy patients. Clinical Neurophysiology. 129: 1221-1229, 2018.

[2] F. Neugebauer, et al. Validating EEG, MEG and combined MEG and EEG beamforming for an estimation of the epileptogenic zone in focal cortical dysplasia. Brain Sciences. 12: 114, 2022.

[3] A. Rezaei, et al. Parametrizing the conditionally Gaussian prior model for source localization with reference to the P20/N20 component of median nerve SEP/SEF. Brain Sciences. 10: 934, 2020.

[4] J. Lahtinen, et al. Conditionally Exponential Prior in Focal Near-and Far-Field EEG Source Localization via Randomized Multiresolution Scanning (RAMUS). Journal of Mathematical Imaging and Vision. 64: 587-608, 2022.

[5] A. J. R. Leal, et al. Analysis of the dynamics and origin of epileptic activity in patients with tuberous sclerosis evaluated for surgery of epilepsy. Clinical Neurophysiology. 119: 853-861, 2008.

[6] K. L. de Gooijer‐van de Groep, et al. Inverse modeling in magnetic source imaging: comparison of MUSIC, SAM (g2), and sLORETA to interictal intracranial EEG. Human brain mapping. 34: 2032-2044, 2013.

[7] A. Coito, et al. Interictal epileptogenic zone localization in patients with focal epilepsy using electric source imaging and directed functional connectivity from low‐density EEG. Epilepsia open. 4: 281-292, 2019.


Edge-Preserving Tomographic Reconstruction with Uncertain View Angles

Per Christian Hansen1, Johnathan M. Bardsley2, Yiqiu Dong1, Nicolai A. B. Riis1, Felipe Uribe3

1Technical University of Denmark, Denmark; 2University of Montana; 3Lappeenranta-Lahti University of Technology

In computed tomography, data consist of measurements of the attenuation of X-rays passing through an object. The goal is to reconstruct an image of the linear attenuation coefficient of the object's interior. For each position of the X-ray source, characterized by its angle with respect to a fixed coordinate system, one measures a set of data referred to as a view. A common assumption is that these view angles are known, but in some applications they are known only with imprecision.

We present a Bayesian inference approach to solving the joint inverse problem for the image and the view angles, while also providing uncertainty estimates. For the image, we impose a Laplace difference prior enabling the representation of sharp edges in the image; this prior has connections to total variation regularization. For the view angles, we use a von Mises prior which is a $2\pi$-periodic continuous probability distribution.
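
For illustration (our sketch, with assumed parameter values), both log-priors are cheap to evaluate: a Laplace distribution on pixel differences and a von Mises density for each view angle.

```python
# Hedged sketch: unnormalized log-priors of the joint model -- a Laplace
# difference prior on the image and a von Mises prior per view angle.
import numpy as np

def log_laplace_difference_prior(img, delta=1.0):
    """Laplace prior on horizontal/vertical pixel differences (edge-preserving)."""
    dx = np.abs(np.diff(img, axis=0)).sum()
    dy = np.abs(np.diff(img, axis=1)).sum()
    return -delta * (dx + dy)                # log-density up to a constant

def log_von_mises_prior(angles, mean_angles, kappa=50.0):
    """2*pi-periodic von Mises prior centred at the nominal view angles."""
    return kappa * np.cos(angles - mean_angles).sum()  # up to a constant

img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0
angles = np.linspace(0, np.pi, 90)
lp = log_laplace_difference_prior(img) + log_von_mises_prior(angles, angles)
```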

Numerical results show that our algorithm can jointly identify the image and the view angles, while also providing uncertainty estimates of both. We demonstrate our method with simulations of 2D X-ray computed tomography problems using fan-beam configurations.

[1] N. A. B. Riis, Y. Dong, P. C. Hansen. Computed tomography reconstruction with uncertain view angles by iteratively updated model discrepancy. J. Math. Imaging Vis. 63: 133–143, 2021. doi 10.1007/s10851-020-00972-7.

[2] N. A. B. Riis, Y. Dong, P. C. Hansen, Computed tomography with view angle estimation using uncertainty quantification, Inverse Problems, 37: 065007, 2021. doi 10.1088/1361-6420/abf5ba.

[3] F. Uribe, J. M. Bardsley, Y. Dong, P. C. Hansen, N. A. B. Riis, A hybrid Gibbs sampler for edge-preserving tomographic reconstruction with uncertain angles, SIAM/ASA J. Uncertain. Quantif., 10:1293–1320, 2022. doi 10.1137/21M1412268.


EIT reconstruction using virtual X-rays and machine learning

Siiri Inkeri Rautio

University of Helsinki, Finland

The mathematical model of electrical impedance tomography (EIT) is the inverse conductivity problem introduced by Calderón. The aim is to recover the conductivity $\sigma$ from the knowledge of the Dirichlet-to-Neumann map $\Lambda_\sigma$. It is a nonlinear and ill-posed inverse problem.

We introduce a new reconstruction algorithm for EIT, which provides a connection between EIT and traditional X-ray tomography, based on the idea of "virtual X-rays". We divide the exponentially ill-posed and nonlinear inverse problem of EIT into separate steps. We start by mathematically calculating so-called virtual X-ray projection data from the DN map. Then, we perform explicit algebraic operations and one-dimensional integration, ending up with a blurry and nonlinearly transformed Radon sinogram. We use a neural network to learn the nonlinear deconvolution-like operation. Finally, we can compute a reconstruction of the conductivity using the inverse Radon transform. We demonstrate the method with simulated data examples.



Frequentist Ensemble Kalman Filter

Maia Tienstra, Sebastian Reich

SFB 1294 / Universität Potsdam, Germany

We are interested in Tikhonov-type regularization of statistical inverse problems. The main challenge is the choice of the regularization parameter. Hierarchical Bayesian methods and Bayesian model selection give us a theoretical understanding of how the regularization parameters depend on the data. One popular way to solve statistical inverse problems is the Ensemble Kalman filter (EnKF). We formulate a frequentist version of the continuous-time EnKF, which brings us to the well-known bias-variance tradeoff of our estimator as a function of the regularization parameter. From here we can reformulate the choice of the regularization parameter as the choice of a stopping time, dependent on an estimate of the residuals. We are not only interested in recovering a point estimator, as in the case of optimization, but also in the ability to correctly estimate the spread of the posterior. We explore this numerically and theoretically through an infinite-dimensional linear inverse problem and a non-linear inverse problem arising from the Schrödinger equation. This is joint work with Prof. Dr. Sebastian Reich. This research has been partially funded by the Deutsche Forschungsgemeinschaft (DFG) - Project-ID 318763901 - SFB1294.
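
For reference, a compact sketch (ours, not the poster's code) of one stochastic EnKF analysis step for a linear observation model; the continuous-time, frequentist variant discussed here builds on iterating such updates and monitoring residuals for stopping.

```python
# Hedged sketch: one stochastic EnKF analysis step for a linear model
# y = H x + noise. Ensemble statistics replace exact covariances.
import numpy as np

rng = np.random.default_rng(6)
n, m, J = 40, 15, 100                          # state dim, obs dim, ensemble size
H = rng.standard_normal((m, n)) / 5
x_true = np.sin(np.linspace(0, 3, n))
R = 0.02 * np.eye(m)                           # observation noise covariance
y = H @ x_true + rng.multivariate_normal(np.zeros(m), R)

X = rng.standard_normal((n, J))                # prior ensemble
A = X - X.mean(1, keepdims=True)
C = A @ A.T / (J - 1)                          # ensemble covariance

K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)   # Kalman gain
Y_pert = y[:, None] + rng.multivariate_normal(np.zeros(m), R, size=J).T
X_post = X + K @ (Y_pert - H @ X)              # perturbed-observation update
```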



Investigation of the effects of Cowling approximation on adiabatic wave propagation in helioseismology

Hélène Barucq1, Lola Chabat1, Florian Faucher1, Damien Fournier2, Ha Pham1

1Team Makutu, Inria University of Pau and Pays de l'Adour, France; 2Max Planck Institute for Solar System Research, Göttingen, Germany

Helioseismology investigates the interior structure and dynamics of the Sun from oscillations observed on its visible surface. Ignoring flow and rotation, time-harmonic adiabatic waves in a self-gravitating Sun in Eulerian-Lagrangian description are described by the Lagrangian displacement $\mathbf{\xi}$ and the gravitational potential perturbation $\delta_\phi$, which satisfy Galbrun's equation [1] coupled with a Poisson equation. In most works, the perturbation to the gravitational potential $\delta_\phi$ is neglected under Cowling's approximation [3]. However, this approximation is known to shift the eigenvalues of the forward operator for low-order harmonic modes [4]. Here, we study the effects of this approximation on numerical solutions and discuss its implications for the inverse problem. Removing Cowling's approximation allows us to accurately simulate waves for low-degree modes and helps us better characterize the deep interior of the Sun.

The investigation is carried out for a Sun with minimum activity, called the quiet Sun, whose background coefficients are given by the radially symmetric standard solar model Model S in the interior, with a choice of extension beyond the surface to include the presence of an atmosphere, cf. [5]. Radial symmetry is exploited to decouple the problem on each spherical harmonic mode $\ell$, giving a system of ordinary differential equations in the radial variable. This extends previous work [1], which employed the Cowling approximation. The modal system is solved using the Hybridizable Discontinuous Galerkin (HDG) method. For validation purposes, the equations are coupled with a free-surface boundary condition, which is adapted to low-frequency modes and commonly employed in helioseismology, cf. [6]. Since eigenvalues are poles of the Green's tensor, the magnitude of the latter as a function of frequency peaks around an eigenvalue. As preliminary results, we compare the locations where the Green's tensor peaks with the eigenvalues computed by the GYRE code [2], finding good agreement.

[1] H. Barucq, F. Faucher, D. Fournier, L. Gizon, H. Pham. Efficient computation of modal Green's kernels for vectorial equations in helioseismology under spherical symmetry, 2021.

[2] R. H. D. Townsend, S. A. Teitler. GYRE: an open-source stellar oscillation code based on a new Magnus Multiple Shooting scheme. Monthly Notices of the Royal Astronomical Society: 3406-3418, 2013.

[3] T. G. Cowling. The non-radial oscillations of polytropic stars. Monthly Notices of the Royal Astronomical Society: 367, 1941.

[4] J. Christensen-Dalsgaard. Lecture notes on stellar oscillations, 2014.

[5] J. Christensen-Dalsgaard. Introductory report: Solar oscillations. Liège International Astrophysical Colloquia: 155-207, 1984.

[6] W. Unno,Y. Osaki, H. Ando, H. Shibahashi. Nonradial oscillations of stars, Tokyo: University of Tokyo Press, 1979.



Dual-grid parameter choice method for total variation regularised image deblurring

Yiqiu Dong1, Markus Juvonen2, Matti Lassas2, Ilmari Pohjola2, Samuli Siltanen2

1Technical University of Denmark, Denmark; 2University of Helsinki, Finland

We present a new parameter choice method for total variation (TV) deblurring of images. The method is based on a dual-grid computation of the solution.

Instead of a single grid, we use two grids with different discretisations. The first grid is the one where the measurement is given. The origin of the second grid is shifted by half a pixel width both horizontally and vertically. Note that the underlying true image is the same for both grids. Assume that the pixel size is much smaller than a typical constant-valued area in the image. The premise of the study is that, when solving the TV-regularised noisy deblurring problem with a large enough parameter, the solutions on both grids will converge to the same image. The proposed algorithm looks for the smallest parameter with which convergence can be numerically detected.

The method has been tested on both simulated and real image data. Preliminary computational experiments suggest that an optimal parameter can be chosen by monitoring the difference of the TV seminorms of the dual-grid solutions while varying the regularisation parameter.
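
A rough sketch (ours, with TV denoising standing in for the full deblurring solver) of the dual-grid criterion: solve on the original grid and on a half-pixel-shifted grid, then pick the smallest parameter at which the two TV seminorms agree to tolerance.

```python
# Hedged sketch of the dual-grid criterion; denoise_tv_chambolle stands in
# for the TV-regularised deblurring solver, and all numbers are illustrative.
import numpy as np
from scipy.ndimage import shift
from skimage.restoration import denoise_tv_chambolle

def tv_seminorm(u):
    return np.abs(np.diff(u, axis=0)).sum() + np.abs(np.diff(u, axis=1)).sum()

rng = np.random.default_rng(7)
truth = np.zeros((64, 64)); truth[16:48, 16:48] = 1.0
data = truth + 0.1 * rng.standard_normal(truth.shape)
data_shifted = shift(data, (0.5, 0.5), order=1)      # half-pixel dual grid

for weight in [0.01, 0.05, 0.1, 0.2, 0.5]:           # increasing regularisation
    u1 = denoise_tv_chambolle(data, weight=weight)
    u2 = denoise_tv_chambolle(data_shifted, weight=weight)
    if abs(tv_seminorm(u1) - tv_seminorm(u2)) < 0.05 * tv_seminorm(u1):
        break                                        # smallest convergent weight
```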



Geodesic slice sampling on the sphere

Mareike Hasenpflug1, Michael Habeck2, Shantanu Kodgirwar2, Daniel Rudolf1

1Universität Passau, Germany; 2Universitätsklinikum Jena, Germany

We introduce a geodesic slice sampler on the Euclidean sphere (in arbitrary but fixed dimension) that can be used for approximate sampling from distributions that have a density with respect to the corresponding surface measure. Such distributions occur e.g. in the modelling of directional data or shapes. We provide some mild conditions which ensure that the geodesic slice sampler is reversible with respect to the distribution of interest. Moreover, if the density is bounded, then we obtain a uniform ergodicity convergence result. Finally, we illustrate the performance of the geodesic slice sampler on the sphere with numerical experiments.


Gibbsian Polar Slice Sampling

Philip Schär1, Michael Habeck1, Daniel Rudolf2

1Friedrich Schiller University Jena, Germany; 2University of Passau, Germany

Polar slice sampling [2] is a Markov chain approach for approximate sampling from distributions; it is difficult, if not impossible, to implement efficiently, yet behaves provably well with respect to the dimension. By updating the directional and radial components of the chain iterates separately, we obtain a family of samplers that mimic polar slice sampling and yet can be implemented efficiently [1]. Numerical experiments for a variety of settings indicate that our proposed algorithm significantly outperforms the two most closely related approaches, elliptical slice sampling [3] and hit-and-run uniform slice sampling [4]. We prove the well-definedness and convergence of our methods under suitable assumptions on the target distribution.

[1] P. Schär, M. Habeck, D. Rudolf. Gibbsian Polar Slice Sampling. arXiv preprint arXiv:2302.03945, 2023.

[2] G. O. Roberts, J. S. Rosenthal. The Polar Slice Sampler. Stochastic Models 18(2):257-280, 2002.

[3] I. Murray, R. Adams, D. MacKay. Elliptical Slice Sampling. Journal of Machine Learning Research 9:541-548, 2010.

[4] D. MacKay. Information Theory, Inference and Learning Algorithms, Cambridge University Press, 2003.


Hybrid knowledge and data-driven approaches for DOT reconstruction in medical imaging

Alessandra Serianni

Università degli Studi di Milano, Italy

Diffuse Optical Tomography (DOT) is an emerging medical imaging technique which employs near-infrared (NIR) light to estimate the spatial distribution of optical coefficients in biological tissues for diagnostic purposes, in a non-invasive and non-ionizing manner. NIR light undergoes multiple scattering throughout the tissue, making DOT reconstruction a severely ill-conditioned problem [1]. In this contribution, we present our research on hybrid knowledge-driven/data-driven approaches which combine well-known physical models with deep learning techniques integrating the collected data. Our main idea is to leverage neural networks to solve PDE-constrained inverse problems of the form \begin{equation*} \theta^*=\arg\min_\theta \mathcal{L}(y,\tilde{y}) \tag{1} \end{equation*} where $\mathcal{L}$ is a loss function which typically contains a discrepancy measure (or data fidelity) term as well as prior information on the solution. In the context of inverse problems like $(1)$, one seeks the optimal set of physical parameters $\theta$ given the set of observations $y$. Moreover, $\tilde{y}$ is the computable approximation of $y$, which may be obtained either from a neural network or classically via the solution of a PDE with given input coefficients. The idea underlying our approach is to exploit Graph Neural Networks (GNNs) as a fast forward model that solves PDEs: after an appropriate construction of the graph on the spatial domain of the PDE, the message-passing framework allows one to directly learn the kernel of the network which approximates the PDE solution [2]. Due to the severe ill-conditioning of the reconstruction problem, we also learn a prior over the space of solutions using an auto-decoder-type network which maps the latent code to the estimated physics parameter that is passed to the GNN to finally obtain the prediction.

[1] A. Benfenati, G. Bisazza, P. Causin. A Learned SVD approach for Inverse Problem Regularization in Diffuse Optical Tomography, arXiv preprint arXiv:2111.13401, 2021.

[2] Q. Zhao, D.B. Lindell, G. Wetzstein. Learning to Solve PDE-constrained Inverse Problems with Graph Networks, arXiv preprint arXiv:2206.00711, 2022.


Inverse Level-Set Problems for Capturing Calving Fronts

Daniel Abele1,2,4, Angelika Humbert1,3

1Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research, Section Glaciology; Bremerhaven, Germany; 2German Aerospace Center, Institute for Software Technology; Oberpfaffenhofen, Germany; 3University of Bremen, Department of Geosciences; Bremen, Germany; 4Technical University of Munich, School for Computation, Information and Technology; Munich, Germany

Capturing the calving front motion is critical for simulations of ice sheets and ice shelves. Multiple physical processes, besides calving also melting and the forward movement of the ice, need to be understood to accurately model the front. Calving is particularly challenging due to its discontinuous nature, and modelers require more tools to examine it.

A common technique for capturing the front in ice simulations is the level-set method. The front is represented implicitly by the zero isoline of a function. The movement of the front is described by an advection equation, whose velocity field is a combination of the ice velocity and the frontal ablation rate.

To improve the understanding of these processes, we are developing methods to estimate parameters of calving laws based on inverse level-set problems. The regularization is chosen so that it can handle discontinuous parameters or calving laws, in order to fit discontinuous front positions due to large calving events. The input for the inverse problem is observational data from satellite images, which is often sparse. The methods will be applied to large-scale models of the Antarctic Ice Sheet.
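
For the forward model, a minimal 1-D sketch (ours, with a toy constant velocity in place of a calving law) of level-set advection with a first-order upwind scheme; the inverse problem then tunes parameters of the frontal-ablation velocity.

```python
# Hedged sketch: 1-D level-set advection with first-order upwinding.
# The front is the zero crossing of phi; v combines ice velocity and
# frontal ablation (an illustrative constant here).
import numpy as np

n, dx = 200, 1.0 / 200
x = np.linspace(0, 1, n)
phi = x - 0.3                                # signed distance; front at x = 0.3
v = 0.5 * np.ones(n)                         # advance velocity (toy calving law)
dt = 0.5 * dx / np.abs(v).max()              # CFL-stable time step

for _ in range(100):
    dphi = np.zeros(n)
    dphi[1:] = (phi[1:] - phi[:-1]) / dx     # backward difference (v > 0 upwind)
    phi = phi - dt * v * dphi

front = x[np.argmin(np.abs(phi))]            # approximate front position
```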



Microlocal analysis of inverse scattering problems

Gregory Samelsohn

Shamoon College of Engineering, Israel

Microlocal analysis has recently been shown to provide deep insight into the transformation of singularities and the origin of certain artifacts for a variety of tomographic imaging problems with limited data (X-ray CT, electron microscopy, SAR imaging, etc.). In this work, we report on an even closer relation between microlocal analysis and some inverse scattering problems. In particular, a new algorithm [1] proposed for tomographic imaging of impenetrable (e.g., perfectly conducting) scatterers is considered. The boundary value problem is converted into a volume integral equation with a singular double-layer potential. Fourier-Radon inversion of the resulting far-field pattern is then applied to compute an indicator function. No approximations are made in the construction of the forward model and the derivation of the inversion algorithm. Instead, some elementary facts of microlocal analysis, in particular the pseudo-locality of the corresponding operator, are used to recover the support of the scattering potential and, therefore, the shape of the obstacle. Generalizations of this approach to tomographic imaging of impedance-type and penetrable objects are also discussed.

[1] G. Samelsohn. Tomographic imaging of perfectly conducting objects. J. Opt. Soc. Am. A 40, 229-236: 2023.


Microlocal Analysis of Multistatic Synthetic Aperture Radar Imaging

David McMahon, Clifford Nolan

University of Limerick, Ireland

We consider Synthetic Aperture Radar (SAR) in which waves, simultaneously emitted from a pair of stationary emitters, are scattered by the ground and measured along a flight track traversed by an aircraft. A linearized mathematical model of scattering is obtained using a Fourier integral operator. This model can then be used to form an image of the ground terrain using backprojection together with a carefully designed data acquisition geometry.

The data are composed of two parts, corresponding to the received signals from each emitter. A backprojection operator can easily be chosen that correctly reconstructs the singularities in the wave speed using just one emitter. One would expect this to lead to a reasonable image of the terrain. However, we expect that applying this backprojection operator to the data from the other emitter will lead to unwanted artifacts in the image. We analyse the operators associated with this situation and use microlocal analysis to determine configurations of flight path and emitter locations that mitigate the artifacts associated with such “cross talk” between the two emitters.
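A schematic NumPy sketch of backprojection for a single emitter in our simplified notation (one bistatic travel time per image point, no antenna patterns or weights; not the authors' operator):

```python
# Bistatic travel-time backprojection: for each image point x, sum the
# received signal at the delay from emitter e via x to platform position.
import numpy as np

def backproject(data, times, track, emitter, grid, c=3e8):
    # data[s, t]: signal at slow-time index s and fast-time sample t
    image = np.zeros(len(grid))
    for s, gamma in enumerate(track):          # platform positions
        tau = (np.linalg.norm(grid - emitter, axis=1)
               + np.linalg.norm(grid - gamma, axis=1)) / c
        idx = np.searchsorted(times, tau)      # nearest fast-time sample
        idx = np.clip(idx, 0, data.shape[1] - 1)
        image += data[s, idx]
    return image
```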



Revealing Functional Substructure of Retinal Ganglion Cell Receptive Fields Using Tomography-Based Stimulation

Steffen Krüppel1,2,3, Tim Gollisch1,2,3

1Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany; 2Bernstein Center for Computational Neuroscience, Göttingen, Germany; 3Cluster of Excellence “Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells” (MBExC), University of Göttingen, Göttingen, Germany

Retinal ganglion cells (RGCs) are the output cells of the retina and perform various computations on the visual signals detected by the retina’s photoreceptors. Here, nonlinearities in an RGC’s receptive field, i.e. the subset of all photoreceptors that (indirectly) connect to a given RGC, play an essential role. Many RGC types are spatially nonlinear, that is, they integrate signals from different areas of their receptive field nonlinearly. This spatial nonlinearity is mediated by so-called subunits, which in turn are considered to be linear and are thought to correspond to the cells that provide direct excitatory input to RGCs, the retinal bipolar cells. Knowledge of the subunits is critical for understanding RGC responses to the finely structured natural images animals encounter. In addition, large-scale electrophysiological studies of RGCs are relatively simple, but the same cannot be said of bipolar cells. Efforts have therefore been made to infer the subunits of a given RGC from recordings of its activity in response to visual stimuli presented to the retina. Yet methods to quickly and consistently infer how many subunits compose an RGC’s receptive field, and where they are located, are rare. The problem is made more difficult by additional nonlinearities in the system, the unknown shapes of the nonlinearities, and potential interactions between subunits.

Our approach is to flash a bar with a preferred-contrast center and sidebands of non-preferred contrast in the receptive field of an RGC at various positions and angles. If the bar width is similar to the expected subunit size, the responses of the RGC should, for a given angle and varying position, roughly correspond to a projection of the subunit layout along the bar’s orientation. Borrowing from tomography, we can thus compose a sinogram out of all responses of an RGC and reconstruct the subunit layout using, e.g., filtered back-projection.

In simulations of RGCs with various subunit layouts, we find that RGC responses generated by excitation of a specific subunit are well confined to a small region in the sinogram. This often allows successful reconstruction of the subunit layout, but the reconstruction quality of realistic layouts is limited by nonlinearities not accounted for by filtered back-projection. We also performed multi-electrode array recordings from isolated primate retinas, where our approach revealed substructure in many RGC receptive fields as well. Altogether, our tomographic subunit detection method is a promising candidate to quickly and reliably infer substructure in the receptive field of an RGC, thereby laying the foundations for better predictions of responses to natural images and for indirect large-scale studies of bipolar cells.
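A minimal sketch of the reconstruction step, assuming the responses have already been assembled into a sinogram as described; it uses scikit-image's `iradon` as a stand-in for the filtered back-projection:

```python
# Reconstruct a subunit layout from an RGC response sinogram by filtered
# back-projection (illustrative; preprocessing of responses is omitted).
import numpy as np
from skimage.transform import iradon

def subunit_layout(sinogram, angles_deg):
    # sinogram: (n_positions, n_angles), one column per bar orientation
    return iradon(sinogram, theta=angles_deg, filter_name="ramp")
```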

Acknowledgements: This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)—project IDs 154113120 (SFB 889, project C01); 432680300 (SFB 1456, project B05)—and by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement number 724822).


Non-stationary hyperspectral unmixing with learnt regularization

Julia Marie Lascar

Université Paris Saclay, CEA Irfu, France

In astrophysics or remote sensing, spectro-imagers can record cubes of data called hyperspectral images, with two spatial dimensions and a third dimension of energy. Often, the observed data are a mixture of several emitting sources. Thus, the task of source separation is key to perform detailed studies of the underlying physical components.

Most source separation algorithms assume a stationary mixing model, i.e. a sum of spectra, one per component, each multiplied by an amplitude map. But in many cases, this assumption is erroneous, since the spectral shape of each component varies spatially due to physical properties. Our algorithm’s goal is to achieve non-stationary source separation, obtaining for each component a cube with varying spectral shape. This is an ill-posed inverse problem, thus in need of regularization.
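In symbols (our notation, for illustration), with pixel $p$ and energy $e$, the stationary versus the non-stationary mixing model reads \begin{equation*} X(p,e) \approx \sum_{c} a_c(p)\, s_c(e) \qquad \text{versus} \qquad X(p,e) \approx \sum_{c} a_c(p)\, s_c(p,e), \end{equation*} where $a_c$ is the amplitude map and $s_c$ the spectrum of component $c$.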

For spectral regularization, we use a generative model learned with auto-encoders, which constrains the spectra to interpretable shapes in a semi-supervised scheme. This is combined with spatial regularization via a sparse modelling of the generative model’s latent parameters. The optimization is carried out by alternating proximal gradient descent. The method was tested on the case study of X-ray astrophysical spectro-imaging, for which results will be shown on realistic simulated data. To our knowledge, this is the first method to extend sparse blind source separation to the non-stationary case.


Reduced Order Methods for Linear Gaussian Bayesian Inverse Problems on separable Hilbert Spaces

Giuseppe Carere, Han Cheng Lie

University of Potsdam, Germany

In Bayesian inverse problems, the computation of the posterior distribution can be computationally demanding, especially in many-query settings such as filtering, where a new posterior distribution must be computed many times. In this work we consider computationally efficient approximations of the posterior distribution for linear Gaussian inverse problems defined on separable Hilbert spaces. We measure the quality of these approximations using the Kullback-Leibler divergence of the approximate posterior with respect to the true posterior and investigate their optimality properties. The approximation method exploits the low-dimensional behaviour of the update from prior to posterior, originating from a combination of prior smoothing, forward smoothing, measurement error and a limited number of observations, analogous to the results of Spantini et al. [1] for finite-dimensional parameter spaces. Since the data are informative only on a low-dimensional subspace of the parameter space, the approximation class we consider for the posterior covariance consists of suitable low-rank updates of the prior. In the Hilbert space setting, care must be taken when inverting covariance operators; we address this challenge by using the Feldman-Hajek theorem for Gaussian measures.
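For intuition, a finite-dimensional NumPy sketch of the low-rank update of [1], which the talk generalizes to the Hilbert space setting; here `H` denotes the data-misfit Hessian $G^{T}\Gamma_{\mathrm{obs}}^{-1}G$, and all variable names are ours:

```python
# Rank-r approximation of the posterior covariance for a linear Gaussian
# problem: Gamma_pos ~ Gamma_pr - sum_i lam_i/(1+lam_i) w_i w_i^T, built
# from the leading eigenpairs of the prior-preconditioned Hessian.
import numpy as np

def lowrank_posterior_cov(prior_cov, H, r):
    L = np.linalg.cholesky(prior_cov)          # Gamma_pr = L L^T
    S = L.T @ H @ L                            # prior-preconditioned Hessian
    lam, U = np.linalg.eigh(S)
    idx = np.argsort(lam)[::-1][:r]            # r most data-informed modes
    lam, U = lam[idx], U[:, idx]
    W = L @ U                                  # directions in parameter space
    return prior_cov - W @ np.diag(lam / (1 + lam)) @ W.T
```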

[1] A. Spantini, A. Solonen, T. Cui, J. Martin, L. Tenorio, Y. Marzouk. Optimal Low-Rank Approximations of Bayesian Linear Inverse Problems. SIAM J. Sci. Comput. 37, no. 6: A2451–A2487, 2015.


GAN-based motion correction in MRI

Mathias Simon Feinler, Bernadette Hahn

University of Stuttgart, Germany

Magnetic Resonance Imaging allows high-resolution data acquisition, with the downside of motion sensitivity due to relatively long acquisition times. Even during the acquisition of a single 2D slice, motion can severely corrupt the image. Retrospective motion correction strategies do not interfere with the acquisition but operate on the motion-affected data. In most applications the trajectories are Cartesian, as in the HASTE sequence. These classical sampling schemes offer no, or only marginal, temporal redundancy beyond the sensitivity encoding (SENSE) provided by multiple receiver coils. Hence, in practice, residual-based optimizations will fail to produce motion-artifact-free images. In recent years, Generative Adversarial Networks (GANs) have gained interest for motion compensation. Although the results are visually appealing, it cannot be guaranteed that small details of diagnostic relevance are predicted correctly, even if large parts of the image are in fact free of motion artifacts. To this end, we propose a learned iterative procedure to substantiate the reconstructions and achieve data consistency. We show that, depending on the complexity of the deformations, even small details which have initially been erased by GANs can be recovered.
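As a minimal illustration of data consistency (not the proposed learned iterative procedure), a NumPy sketch that re-inserts the measured k-space samples into the GAN output:

```python
# k-space data-consistency step: measured samples overwrite the GAN
# output's samples on the acquired lines, so the result agrees with
# the data wherever data exist.
import numpy as np

def data_consistency(x_gan, y_measured, mask):
    # x_gan: image from the GAN; y_measured: acquired k-space; mask: sampled lines
    k = np.fft.fft2(x_gan)
    k = np.where(mask, y_measured, k)   # enforce consistency on sampled lines
    return np.fft.ifft2(k)
```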


Dynamic Computerized Tomography using Inexact Models and Data-driven Motion Detection

Gesa Sarnighausen, Anne Wald

Georg-August-Universität Göttingen, Germany

Reconstructing a dynamic object with affine motion in computerized tomography leads to motion artefacts if the motion is not taken into account. The iterative RESESOP-Kaczmarz method can, under certain conditions, reconstruct dynamic objects at different time points even if the exact motion is unknown. However, the method is very time-consuming. To speed up the reconstruction process and obtain better results, the following three steps are used (a schematic sketch of the RESESOP-type update in step 1 is given after the list):

1. RESESOP-Kaczmarz with only a few iterations is used to reconstruct the object at different time points.

2. The motion is estimated via deep learning.

3. The estimated motion is integrated into the reconstruction process, allowing the use of dynamic filtered back-projection.
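A schematic NumPy sketch of one RESESOP-type sweep for linear blocks, where the iterate is only corrected when the residual exceeds the combined noise and model-inexactness bound (our simplification of the method):

```python
# One sweep of a RESESOP-type update: for each data block, project the
# iterate onto the "stripe" of solutions compatible with the block's
# residual up to the bound delta[i] (noise + model inexactness).
import numpy as np

def resesop_sweep(x, A_blocks, y_blocks, delta):
    for A, y, d in zip(A_blocks, y_blocks, delta):
        r = A @ x - y
        nr = np.linalg.norm(r)
        if nr > d:                          # outside the admissible stripe
            u = A.T @ r                     # search direction
            step = (nr - d) * nr / (np.linalg.norm(u) ** 2)
            x = x - step * u                # metric projection onto the stripe
    return x
```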


Phase retrieval beyond the homogeneous object approximation in X-ray holographic imaging

Jens Lucht1, Leon Merten Lohse1,2, Simon Huhn1, Tim Salditt1

1Georg-August-Universität Göttingen, Germany; 2Deutsches Elektronen-Synchrotron DESY

X-ray near-field in-line holographic imaging using highly coherent synchrotron radiation offers spatial resolution down to the nanometer scale. Combined with tomographic methods, it allows high-resolution three-dimensional imaging with wide applicability in the life, natural and material sciences. X-ray phase contrast enables the study of samples that show little to no conventional absorption-based contrast, for example soft tissue. Since the phase cannot be measured directly, it has to be retrieved from the measured diffraction patterns by solving an ill-posed inverse problem.

To solve this phase problem, one common approximation for X-ray Fresnel holography is the so-called homogeneous or single-material object approximation. It requires the phase-shifting and absorbing parts of the object's refractive index to be proportional. Hence, the number of unknowns of the inverse problem can be reduced from two (phase shift and absorption) to one, at the price of restricting the class of admissible samples. Multi-material samples naturally violate this assumption, and reconstructions under the homogeneous object assumption therefore show artifacts. To resolve this incompatibility, we present a reconstruction method that relaxes the homogeneous object assumption, based on the contrast transfer function (CTF), a linearization of Fresnel diffraction for weak objects that is also popular in electron microscopy. We demonstrate that the reconstruction quality can be significantly improved if physical priors are imposed on the reconstruction with tools of constrained optimization. Furthermore, we discuss the stability and experimental design for the proposed method.
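For orientation, a NumPy sketch of the standard single-distance CTF phase retrieval under the homogeneous object assumption, i.e. the baseline that the proposed method relaxes; `bd` is the assumed $\beta/\delta$ ratio, and sign conventions for the phase vary across references:

```python
# Single-distance homogeneous-object CTF inversion with Tikhonov-type
# regularization. I: flat-field-corrected hologram, lam: wavelength,
# z: propagation distance, px: pixel size, bd: assumed beta/delta ratio.
import numpy as np

def ctf_homogeneous(I, lam, z, px, bd=0.01, alpha=1e-3):
    ny, nx = I.shape
    fy = np.fft.fftfreq(ny, d=px)
    fx = np.fft.fftfreq(nx, d=px)
    chi = np.pi * lam * z * (fx[None, :]**2 + fy[:, None]**2)
    s = np.sin(chi) + bd * np.cos(chi)       # combined CTF (sign conv. varies)
    G = np.fft.fft2(I - 1.0)                 # weak-object intensity contrast
    phase = np.fft.ifft2(G * s / (2.0 * s**2 + alpha)).real
    return phase                             # retrieved phase (up to sign)
```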


Reconstruction of active forces generated by actomyosin networks

Emily Klass, Anne Wald

Georg-August-University Göttingen, Germany

Biological cells rely on the interaction of multiple proteins to perform various forms of movement such as cell contraction, division, and migration. In particular, the protein actin is able to create long branching filament structures, which the protein myosin can bind to and slide along, thereby creating so-called actomyosin networks. These networks produce mechanical stress, resulting in movement of the cell itself and of its interior.

We model the physical process of the flow inside cells generated by actomyosin networks using a two-dimensional droplet model based on the Stokes equation for incompressible Newtonian fluids with non-constant viscosity. We impose a boundary condition under which the normal component of the velocity field vanishes on the boundary of the droplet, representing that no fluid can flow in or out of the domain. Further, we add a Robin-type (slip) boundary condition to model the interaction with surrounding fluids. We choose a non-constant viscosity to portray the actomyosin network.
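Schematically, in our notation, the forward model reads \begin{equation*} -\nabla\cdot\bigl(\eta(x)\,(\nabla u+\nabla u^{\top})\bigr)+\nabla p=f,\qquad \nabla\cdot u=0\quad\text{in }\Omega, \end{equation*} with $u\cdot n=0$ on $\partial\Omega$ and a Robin-type slip condition on the tangential stress, where $u$ is the velocity field, $p$ the pressure, $\eta$ the non-constant viscosity, and $f$ the active force to be reconstructed.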

We aim to reconstruct the active forces inside the droplet from noisy measurements of the velocity field. This results in a (deterministic) parameter identification problem for the Stokes equation.


Reconstruction of the potential in a hyperbolic equation in dimension 3

Faguèye Ndiaye Sylla1, Mouhamadou Ngom2, Mariama Ndiaye3, Diaraf Seck1

1Université Cheikh Anta Diop, Sénégal; 2Université Alioune Diop de Bambey, Sénégal; 3Université Gaston Berger de Saint-Louis, Sénégal

In this work, we consider the wave equation $\partial_{tt}v(x,t)-\Delta v(x,t)+p(x)v(x,t)=0$ in $B\times(0,T)$, where $B$ is the unit ball in ${\mathbb R^{3}}$ and $T>0$. We are interested in the inverse problem of identifying the potential $p(x)$ from the Cauchy data $(f, \partial_n v)$, where $f$ ranges over all possible functions on the boundary $\partial B \times(0,T)$ and $\partial_n v$ denotes the measurement of the normal derivative of the corresponding solution of the wave equation on $\partial B \times(0,T)$.

Using spherical harmonics tools and an explicit formula for the Dirichlet-to-Neumann map $\Lambda_{p}$, which associates to each $f$ the measurement $\partial_n v$ in the unit ball in dimension $3$, we determine an explicit expression for the potential $p(x)$ on the boundary of the domain. We present an example both theoretically and numerically.


Compensating motion and model inexactness in nano-CT

Björn Ehlers, Anne Wald

Universität Göttingen, Germany

In nano-CT imaging, the scale is so small that there is unwanted and unknown movement of the scanned object relative to the tomograph, for example due to vibrations of the measuring apparatus. Not incorporating these movements into the Radon operator leads to artefacts due to the model inexactness. Reconstructing the rigid body motion of the object is possible thanks to the structure of the range of the Radon operator: the range of a Radon operator which includes the movement is different from that of one which does not. This can be used to extract the motion, correct the Radon operator, correct the data, or estimate the operator error for use in a scheme called sequential subspace optimisation. We will focus on the error estimation for this regularisation method.
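As an illustration of how the range structure can be exploited, a NumPy sketch based on the Helgason-Ludwig consistency conditions for the parallel-beam Radon transform (our example; the talk's setting and error estimator may differ):

```python
# Helgason-Ludwig check: the k-th moment of a parallel-beam sinogram in the
# detector variable s must be a trigonometric polynomial of degree <= k in
# the view angle. The residual beyond that degree indicates motion or
# other model inexactness and can feed an operator-error estimate.
import numpy as np

def hlcc_residual(sino, angles, s, order):
    # sino[i, j]: line integral at angle angles[i] and detector offset s[j]
    res = 0.0
    for k in range(order + 1):
        m = sino @ (s ** k)                       # k-th moment per angle
        B = np.column_stack(                      # trig basis up to degree k
            [np.cos(n * angles) for n in range(k + 1)]
            + [np.sin(n * angles) for n in range(1, k + 1)]
        )
        coef, *_ = np.linalg.lstsq(B, m, rcond=None)
        res += np.linalg.norm(m - B @ coef) ** 2  # inconsistent part
    return res
```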


Combining Non-Data-Adaptive Transforms For OCT Image Denoising By Iterative Basis Pursuit

Raha Razavi1, Hossein Rabbani2, Gerlind Plonka1

1Georg-August-University of Goettingen, Goettingen, Germany; 2Isfahan University of Medical Sciences, Isfahan, Iran

Optical Coherence Tomography (OCT) images, like the majority of medical images, are subject to speckle noise during acquisition. Since the quality of these images is crucial for detecting abnormalities, we develop an improved denoising algorithm that is particularly suited to OCT images. The essential idea is to combine two non-data-adaptive transform-based denoising methods that are capable of preserving different important structures appearing in OCT images while providing very good denoising performance. Based on our numerical experiments, the most appropriate non-data-adaptive transforms for denoising and feature extraction are the Discrete Cosine Transform (DCT), capturing local patterns, and the Dual-Tree Complex Wavelet Transform (DTCWT), capturing piecewise smooth image features. These two transforms are combined using the Dual Basis Pursuit Denoising (DBPD) algorithm. Further improvement of the denoising procedure is achieved by total variation (TV) regularization and by employing an iterative algorithm based on DBPD.
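A common formulation of dual basis pursuit denoising, in our notation with synthesis operators $\Phi_1$ (inverse DCT) and $\Phi_2$ (inverse DTCWT), is \begin{equation*} \min_{c_1,c_2}\ \tfrac{1}{2}\lVert y-\Phi_1 c_1-\Phi_2 c_2\rVert_2^2+\lambda_1\lVert c_1\rVert_1+\lambda_2\lVert c_2\rVert_1, \end{equation*} to which a TV penalty on the reconstruction $\Phi_1 c_1+\Phi_2 c_2$ can be added, as described above.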


Iterated Arnoldi-Tikhonov method

Davide Bianchi, Marco Donatelli, Davide Furchì, Lothar Reichel

Università degli Studi dell'Insubria, Italy

When solving an ill-posed linear operator equation, most analyses do not take the discretization error into account. This work contributes to closing this gap. Building upon the analysis presented in [1], we extend the study to the iterated framework. First, we demonstrate a saturation result for the Arnoldi-Tikhonov solution method outlined in [2]. Subsequently, we extend the analysis to the iterated Arnoldi-Tikhonov method and provide a parameter choice rule that produces higher-quality computed solutions than the standard Arnoldi-Tikhonov method. Theoretical results are supported by relevant computed examples.
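A minimal NumPy sketch of the iterated variant (our illustration, with a fixed regularization parameter rather than the proposed parameter choice rule):

```python
# Iterated Arnoldi-Tikhonov: project onto a Krylov subspace via Arnoldi,
# then apply iterated Tikhonov refinement to the small projected problem.
import numpy as np

def arnoldi(A, b, m):
    n = len(b)
    V = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def iterated_arnoldi_tikhonov(A, b, m, lam, iters):
    V, H = arnoldi(A, b, m)
    c = np.zeros(m + 1); c[0] = np.linalg.norm(b)   # projected data
    z = np.zeros(m)
    for _ in range(iters):                   # iterated Tikhonov refinement
        r = c - H @ z
        z += np.linalg.solve(H.T @ H + lam * np.eye(m), H.T @ r)
    return V[:, :m] @ z
```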

[1] A. Neubauer. An a posteriori parameter choice for Tikhonov regularization in the presence of modeling error. Appl. Numer. Math. 4, 1988.

[2] R. Ramlau, L. Reichel. Error estimates for Arnoldi-Tikhonov regularization for ill-posed operator equations. Inverse Problems 35, 2019.



Measurement and analysis strategies for EUV pump-probe spectroscopic imaging

Gijsbert Simon Matthijs Jansen1, Hannah Strauch1, Thorsten Hohage2, Stefan Mathias1

1I. Physical Institute, University of Goettingen, Germany; 2Institute of Numerical and Applied Mathematics, University of Goettingen, Germany

Interference-based measurement methods allow the extraction of phase differences of electromagnetic waves, thus adding phase information to an intensity-based measurement. In holography, the encoded phase information allows the reconstruction of the complete wavefront, which has powerful applications in imaging. Similarly, Fourier-transform spectroscopy allows spectral information to be extracted from pulse delay-dependent interference measurements. Recently, it was demonstrated that the combination of these interferometric measurements might enable hyperspectral imaging in the extreme ultraviolet (EUV). However, obtaining full spectral information normally requires long reference-probe delay scans, resulting in long measurement times and large amounts of data.

We aim to address this challenge by implementing a combination of Fourier transform spectroscopy and Fourier transform holography: two phase-locked EUV pulses are imaged to separate reference and probe positions on the sample plane, which leads to both spectral and spatial information being encoded in the far-field diffraction pattern. This interferometric approach provides an opportunity to reduce sampling requirements, as a suitable reconstruction algorithm can implement prior knowledge on the spatial domain to constrain the spectral domain (and vice versa). As is typical for diffraction microscopy, the measured data are proportional to the amplitude of the Fourier transform; hence, although the forward model is nonlinear, it can be implemented efficiently. Based on simulations and analysis of the forward model, we will discuss ways to adapt both measurement and analysis to facilitate efficient acquisition of multidimensional pump-probe spectroscopy data.
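A toy NumPy forward model in our notation: per-frequency exit waves `ref` and `probe` interfere with a delay-dependent phase, and the camera records the spectrally integrated far-field intensity:

```python
# Toy forward model: for delay tau, the far-field intensity is the
# incoherent sum over frequencies of |FFT(ref + delayed probe)|^2.
import numpy as np

def farfield_intensity(ref, probe, omegas, tau):
    # ref, probe: (n_omega, ny, nx) complex exit waves per frequency omega
    I = 0.0
    for w, r, p in zip(omegas, ref, probe):
        field = np.fft.fft2(r + p * np.exp(-1j * w * tau))  # delayed probe
        I = I + np.abs(field) ** 2       # frequencies add incoherently
    return I
```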

 
3:00pm - 5:30pmExc: Excursions
Location: Forum des Wissens
6:00pm - 10:00pmCD: Evening Talk with Dinner
Location: Alte Mensa
Session Chair: Thorsten Hohage
 

Shaken, not Stirred! – James Bond in the Spotlight of Physics

Metin Tolan

President of the University of Göttingen, Germany

--
 

 