Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only the sessions on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

 
 
Session Overview
Date: Friday, 08/Sept/2023
9:00am - 9:50am
Pl 9: Plenary lecture
Location: ZHG 011
Session Chair: Matti Lassas
 

Inverse problems for wave equations

Ali Feizmohammadi

University of Toronto, Canada

The main topic will be inverse problems for linear and nonlinear wave equations. I will describe results in both stationary and non-stationary spacetimes. An example of an inverse problem in stationary spacetimes is the imaging of the internal structure of the earth from surface measurements of seismic waves arising from earthquakes or artificial explosions. Here, the material properties of the internal layers of the earth are generally assumed to be independent of time. On the other hand, inverse problems for non-stationary spacetimes are inspired by the theory of general relativity as well as gravitational waves, where waves follow paths that curve not only in space but also in time. We introduce a method of solving such inverse boundary value problems, and show that lower order coefficients can be recovered under certain curvature bounds. The talk is based on joint works with Spyros Alexakis and Lauri Oksanen.
 
9:50am - 10:40am
Pl 10: Plenary lecture
Location: ZHG 011
Session Chair: William Rundell
 

On inverse problems for piezoelectric equations

Xiang Xu

Zhejiang University, China, People's Republic of

During this talk, we will discuss recent advancements in inverse problems for piezoelectric equations. Specifically, we will present a uniqueness result that pertains to recovering coefficients of piecewise homogeneous piezoelectric equations from a localized Dirichlet-to-Neumann map on partial boundaries. Additionally, we will present a first-order perturbation formula for the phase velocity of Bleustein-Gulyaev (BG) waves in a specific hexagonal piezoelectric equation. This formula expresses the shift in velocity from its comparative value caused by perturbations of the elasticity tensor, piezoelectric tensor, and dielectric tensor. This work has been done in collaboration with G. Nakamura, K. Tanuma, and J. Xu.

 
10:40am - 11:10am
C8: Coffee Break
Location: ZHG Foyer
11:10am - 12:00pm
Pl 11: Plenary lecture
Location: ZHG 011
Session Chair: Lauri Oksanen
 

Reconstruction of spacetime structures in general relativity and Lorentzian geometry

Yiran Wang

Emory University, United States of America

The field of relativistic astrophysics has witnessed a major revolution with the development of increasingly sensitive telescopes and gravitational wave detectors on Earth and in space. An outstanding question is what can be learned from the observed data. In this talk, we report recent progress on two inverse problems of reconstructing spacetime structures. The first problem is the recovery of the initial state of the universe from the Cosmic Microwave Background. Mathematically, the heart of the problem is an integral transform in Lorentzian geometry, called the light ray transform. We discuss its injectivity, stability, and connections to wave equations and kinetic theory. The second problem is the recovery of black hole spacetimes from gravitational wave signals observed by LIGO. In particular, we show how to "hear" the shape of black holes by using the characteristic frequencies (or quasi-normal modes) extracted from the black hole ring-down.
 
12:00pm - 1:30pm
LB4: Lunch Break
Location: Mensa
1:30pm - 3:30pm
CT10: Contributed talks
Location: VG3.102
Session Chair: Gerlind Plonka
 

Exact Parameter Identification in PET Pharmacokinetic Modeling Using the Irreversible Two Tissue Compartment Model

Erion Morina1, Martin Holler1, Georg Schramm2

1University of Graz, Austria; 2Stanford University, USA

In this talk we consider the identifiability of metabolic parameters from multi-compartment measurement data in quantitative positron emission tomography (PET) imaging, a non-invasive clinical technique that images the distribution of a radio tracer in-vivo.

We discuss how, for the frequently used two-tissue compartment model and under reasonable assumptions, it is possible to uniquely identify metabolic tissue parameters from standard PET measurements, without the need for additional concentration measurements from blood samples. The core assumptions for this result are that PET measurements are available at sufficiently many time points, and that the arterial tracer concentration is parametrized by a polyexponential, an approach that is commonly used in practice. Our analytic identifiability result, which holds in the idealized, noiseless scenario, indicates that costly concentration measurements from blood samples in quantitative PET imaging can in principle be avoided. The connection to noisy measurement data is made via a consistency result in Tikhonov regularization theory, showing that exact reconstruction is maintained in the vanishing-noise limit.

We further present numerical experiments with a regularization approach based on the Iteratively Regularized Gauss-Newton Method (IRGNM) supporting these analytic results in an application example.
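As a rough illustration of the forward model behind this identifiability question (not the authors' code), the following sketch integrates an irreversible two-tissue compartment model driven by a polyexponential arterial input with a simple Euler scheme; all rate constants, amplitudes, and decay rates are made-up illustrative values, and the blood-volume fraction is ignored.

```python
import numpy as np

def arterial_input(t, amps, lams):
    # polyexponential parametrization of the arterial tracer concentration
    return sum(A * np.exp(-l * t) for A, l in zip(amps, lams))

def two_tissue_irreversible(t, K1, k2, k3, amps, lams):
    # explicit Euler integration of the irreversible 2TCM (k4 = 0):
    #   C1' = K1*Ca - (k2 + k3)*C1,   C2' = k3*C1
    C1 = np.zeros_like(t)
    C2 = np.zeros_like(t)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        Ca = arterial_input(t[i - 1], amps, lams)
        C1[i] = C1[i - 1] + dt * (K1 * Ca - (k2 + k3) * C1[i - 1])
        C2[i] = C2[i - 1] + dt * (k3 * C1[i - 1])
    return C1 + C2  # total tissue concentration seen by PET

t = np.linspace(0.0, 60.0, 6001)  # minutes
tac = two_tissue_irreversible(t, K1=0.1, k2=0.4, k3=0.05,
                              amps=[5.0, 1.0], lams=[1.0, 0.1])
```

Fitting `K1`, `k2`, `k3`, and the polyexponential parameters to measured time-activity curves is then the inverse problem whose unique solvability the talk addresses.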


Regularized Maximum Likelihood Estimation for the Random Coefficients Model

Fabian Dunker, Emil Mendoza, Marco Reale

University of Canterbury

The random coefficients regression model $Y_i={\beta_0}_i+{\beta_1}_i {X_1}_i+{\beta_2}_i {X_2}_i+\ldots+{\beta_d}_i {X_d}_i$, with $\boldsymbol{X}_i$, $Y_i$, $\boldsymbol{\beta}_i$ i.i.d. random variables and $\boldsymbol{\beta}_i$ independent of $\boldsymbol{X}_i$, is often used to capture unobserved heterogeneity in a population. Reconstructing the joint density of the random coefficients $\boldsymbol{\beta}_i=({\beta_0}_i,\ldots, {\beta_d}_i)$ implicitly involves the inversion of a Radon transform. We propose a regularized maximum likelihood method with a non-negativity and $\|\cdot\|_{L^1}=1$ constraint to estimate the density. We analyse the convergence of the method under general assumptions and illustrate its performance in a real data application and in simulations, comparing it to the method of approximate inverse.
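To make the data-generating process concrete, here is a minimal simulation of the model with $d=2$; all distributions and parameter values are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 2

# i.i.d. random coefficients beta_i = (beta_0i, ..., beta_di),
# drawn independently of the regressors X_i
beta = rng.normal(loc=[1.0, 0.5, -0.3], scale=0.2, size=(n, d + 1))
X = rng.normal(size=(n, d))

# Y_i = beta_0i + beta_1i X_1i + ... + beta_di X_di
Y = beta[:, 0] + np.sum(beta[:, 1:] * X, axis=1)
```

Each observation $(\boldsymbol{X}_i, Y_i)$ constrains the coefficient density only through the hyperplane $\{b : Y_i = b_0 + b_1 {X_1}_i + \ldots + b_d {X_d}_i\}$, which is why recovering the density amounts to inverting a Radon-type transform.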


Adaptive estimation of $\alpha$-generalized random fields for statistical linear inverse problems with repeated measurements

Mihaela Pricop-Jeckstadt

University POLITEHNICA of Bucharest, Romania

In this talk we study an adaptive two-step estimation method for statistical linear inverse problems with repeated measurements, for smoothness classes expressed as $\alpha$-generalized random fields [1]. In the first step, the minimum fractional singularity order $\alpha$ is estimated, and in the second step the penalized least squares estimator with the smoothness estimated in the first step is studied [2]. Rates of convergence for both the process smoothness and the penalized estimator are proven and illustrated through numerical simulations.

[1] M. D. Ruiz-Medina, J. M. Angulo, V. V. Anh. Fractional generalized random fields on bounded domains. Stochastic Anal. Appl. 21: 465--492, 2005.

[2] S. Golovkine, N. Klutchnikoff, V. Patilea. Learning the smoothness of noisy curves with application to online curve estimation. Electron. J. Stat. 16: 1485--1560, 2022.


The Range of Projection Pair Operators

Richard Huber, Rolf Clackdoyle, Laurent Desbat

Univ. Grenoble Alpes, CNRS, Grenoble INP, TIMC, 38000 Grenoble, France.

Tomographic techniques have become a vital tool in medicine, allowing doctors to observe patients’ interior features. The measurement process (and the underlying physics) is modeled by projection operators, the best known being the classical Radon transform. Identifying the range of projection operators has proven useful in various tomography-related applications [1-3], such as geometric calibration, motion detection, or more general projection model parameter identification. Projection operators involve the integration of density functions along certain curves (typically straight lines representing paths of radiation), and are subdivided into individual projections -- data obtained during a single step of the measurement process.

Mathematically, given a bounded open set $\Omega\subset \mathbb{R}^2$ and bounded open sets $R,T\subset \mathbb{R}$, a function $\gamma\colon R\times T \to \mathbb{R}^2$ that diffeomorphically covers $\Omega$ and a function $\rho \colon R\times T \to \mathbb{R}^+$, an individual projection is an operator $p\colon L^2(\Omega) \to L^2(R)$ with $$ [p{f}](r) = \int_{T} f\big(\gamma(r,t)\big) \rho\big(r,t\big) \,\mathrm{d}{t} \qquad \text{for all } r\in R $$ for $f \in \mathcal{C}^\infty_c(\Omega)$ (the unknown density). In other words, $r$ determines an integration curve $\gamma(r,\cdot)$, and $[pf](r)$ is the associated line integral weighted by $\rho$ (representing physical effects such as attenuation). Note that we do not allow projection truncation as $\Omega$ is covered by $\gamma$. More general projection operators are $P\colon L^2(\Omega)\to L^2(R_{1})\times \cdots \times L^2(R_{N})$ with $Pf=(p_{1}f,\dots,p_{N}f)$ with $N$ projections (with associated $\gamma_n,\rho_n,R_n,T_n$ for $n\in \{1,\dots,N\}$). In this work, we are concerned with characterizing the range of what we call projection pair operators, i.e., projection operators $P=(p_1,p_2)$ consisting of only two projections $(N=2)$. Conditions on the range of projection pair operators naturally impose properties on larger projection operators' ranges. These pairwise range conditions are particularly convenient for applications.

A natural approach for identifying the range is determining the range's orthogonal complement. The orthogonal complement being small would facilitate determining whether a projection pair is inside the range. We find that such normal vectors naturally consist of two functions $G_1$ and $G_2$ -- one per projection -- that need to satisfy $$ -\frac{\rho_{1}\big(\gamma_{1}^{-1 }(x)\big) \left |\det\left( \frac{\,\mathrm{d}{ { \gamma_{1}^{-1 }}}}{\,\mathrm{d}{ x} }(x)\right)\right|}{\rho_{2}\big(\gamma_{2}^{-1 }(x)\big) \left|\det\left( \frac{ \mathrm{d}{\gamma_{2}^{-1 }}}{\mathrm{d}{ x}}(x)\right)\right|} = \frac{G_2(r_2(x))}{G_1(r_1(x))} \qquad \text{for a.e. }x\in \Omega, $$ where $r_1(x)$ is such that $x\in \gamma_{1}(r_1(x),\cdot)$ and analogously for $r_2(x)$. This uniquely determines the orthogonal direction; therefore, the orthogonal complement's dimension is at most one. Thus, two projections' information can only overlap in a single way. Due to this equation's specific structure -- the right-hand side is a ratio of functions depending only on $r_1$ and $r_2$, respectively -- it is easy to imagine that this equation is not always solvable. While it is solvable for some standard examples like the conventional and exponential Radon transforms (whose ranges were already characterized [4,5]), we find that no solution exists for the exponential fanbeam transform and for the Radon transform with specific depth-effects. The fact that no solution exists implies that the operator's range is dense. Range conditions of this type can only precisely characterize the range when it is closed (otherwise, only the closure is characterized). In this regard, we find that the question of the range's closedness is equivalent for all projection pair operators whose $\gamma$ and $\rho$ functions are suitably related.

Acknowledgment: This work was supported by the ANR grant ANR-21-CE45-0026 `SPECT-Motion-eDCC'.

[1] F. Natterer. Computerized Tomography with Unknown Sources. SIAM Journal on Applied Mathematics 43(5): 1201--1212, 1983. DOI: 10.1137/0143079.

[2] J. Xu, K. Taguchi, B. Tsui. Statistical Projection Completion in X-ray CT Using Consistency Conditions. IEEE Trans. Med. Imaging 29: 1528--1540, 2010. DOI: 10.1109/TMI.2010.2048335.

[3] R. Clackdoyle, L. Desbat. Data consistency conditions for truncated fanbeam and parallel projections. Medical Physics 42(2): 831--845, 2015.

[4] V. Aguilar, P. Kuchment. Range conditions for the multidimensional exponential X-ray transform. Inverse Problems 11(5): 977, 1995. DOI: 10.1088/0266-5611/11/5/002.

[5] F. Natterer. The Mathematics of Computerized Tomography. Society for Industrial and Applied Mathematics, Philadelphia, Chap. II.4, 2001.

 
1:30pm - 3:30pm
CT11: Contributed talks
Location: VG1.108
Session Chair: Housen Li
 

Extension and convergence of sixth order Jarratt-type method

Suma Panathale Bheemaiah

Manipal Institute of Technology, Manipal Academy of Higher Education, India

A sixth-order convergent Jarratt-type method for solving nonlinear equations is considered. Weaker assumptions on the derivative of the involved operator are made than in earlier studies. The convergence analysis does not depend on Taylor series expansions, which increases the applicability of the proposed method. Numerical examples and basins of attraction of the method are provided in this study.

[1] I. K. Argyros, S. Hilout. On the local convergence of fast two-step Newton-like methods for solving nonlinear equations. Journal of Computational and Applied Mathematics 245: 1--9, 2013.

[2] A. Cordero, M. A. Hernández-Verón, N. Romero, J. R. Torregrosa. Semilocal convergence by using recurrence relations for a fifth-order method in Banach spaces. Journal of Computational and Applied Mathematics 273: 205--213, 2015.

[3] S. George, I. K. Argyros, P. Jidesh, M. Mahapatra, M. Saeed. Convergence Analysis of a Fifth-Order Iterative Method Using Recurrence Relations and Conditions on the First Derivative. Mediterranean Journal of Mathematics 18: 1--12, 2021.

[4] P. Jarratt. Some fourth order multipoint iterative methods for solving equations. Mathematics of Computation 20: 434--437, 1966.

[5] H. Ren. On the local convergence of a deformed Newton's method under Argyros-type condition. Journal of Mathematical Analysis and Applications 321(1): 396--404, 2006.

[6] S. Singh, D. K. Gupta, E. Martínez, J. L. Hueso. Semilocal convergence analysis of an iteration of order five using recurrence relations in Banach spaces. Mediterranean Journal of Mathematics 13: 4219--4235, 2016.
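For orientation, the classical fourth-order Jarratt scheme [4], on which sixth-order Jarratt-type methods build by appending a further correction step, can be sketched for a scalar equation as follows; the test equation, starting point, and tolerance are illustrative choices, not taken from the talk.

```python
def jarratt(f, fp, x, tol=1e-12, max_iter=20):
    # Classical fourth-order Jarratt iteration for f(x) = 0:
    #   y = x - (2/3) f(x)/f'(x)
    #   x <- x - (1/2) [(3 f'(y) + f'(x)) / (3 f'(y) - f'(x))] f(x)/f'(x)
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        u = fx / fp(x)
        y = x - (2.0 / 3.0) * u
        x = x - 0.5 * ((3 * fp(y) + fp(x)) / (3 * fp(y) - fp(x))) * u
    return x

# cube root of 2 as a toy example
root = jarratt(lambda x: x**3 - 2, lambda x: 3 * x**2, x=1.5)
```

Note that each step uses one evaluation of $f$ and two of $f'$, which is what makes Jarratt-type methods attractive compared with repeated Newton steps of the same order.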


Optimal design for aeroacoustics with correlation data

Christian Aarset, Thorsten Hohage

University of Göttingen, Germany

A key problem in aeroacoustics is the inverse problem of estimating an unknown random source from correlation data sampled by surrounding sensors. We study optimal design for this and related problems; that is, we identify the sensor placement minimising the covariance of the solution to the inverse random source problem while remaining sparse. To achieve this, we discuss the assumption of Gaussianity and how to adapt it to our setting of correlation data, and demonstrate how this model can lead to sparse designs for aeroacoustic experiments.


Source separation for Electron Paramagnetic Resonance Imaging

Mehdi Boussâa, Rémy Abergel, Sylvain Durand, Yves Frapart

Université Paris Cité, France

Electron Paramagnetic Resonance Imaging (EPRI) is a versatile imaging modality that enables the study of free radical molecules or atoms, from materials $\textit{in vitro}$ to $\textit{in vivo}$ applications in biomedical research. Clinical applications are currently under investigation. While recent advancements in EPRI techniques have made it possible to study a single free radical, or source, inside the imaging device [1], the reconstruction of multiple sources, or source separation, remains a challenging task. The state-of-the-art technique relies heavily on time-consuming acquisitions and voxel-wise direct inverse methods, which are prone to artifacts and do not leverage the spatial consistency of the source images to be reconstructed. To address this issue, we propose a variational formulation of the source separation problem with a Total Variation prior, which emphasizes the spatial consistency of the sources. This approach drastically reduces the number of acquisitions needed without sacrificing the quality of the source separation. An EPRI experimental study has been conducted, and we will present some of the results obtained.

[1] S. Durand, Y.-M. Frapart, M. Kerebel. Electron paramagnetic resonance image reconstruction with total variation and curvelets regularization. Inverse Problems, 33(11):114002, 2017.

 
1:30pm - 3:30pm
CT12: Contributed talks
Location: VG2.104
Session Chair: Frank Werner
 

Designing an algorithm for low-dose Poisson phase retrieval

Benedikt Diederichs1, Frank Filbir1, Patricia Römer1,2

1Helmholtz Center Munich; 2Technical University of Munich

Many experiments in the field of optical imaging are modelled as phase retrieval problems. Motivated by imaging experiments with biological specimens that need to be measured with a preferably low dose of illumination particles, we consider phase retrieval problems with small, Poisson-noisy measurements. In this talk, we discuss how to formulate a suitable optimization problem. We study reasonable loss functions adapted to the Poisson distribution and optimized for low-dose data. As a solver, we apply gradient descent algorithms with Wirtinger derivatives. For the proposed loss functions, we analyze the convergence of the respective Wirtinger flow type algorithms to stationary points. We present numerical reconstructions from phase retrieval measurements in a low-dose regime to corroborate our theoretical observations.
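As a toy sketch of the kind of algorithm discussed, the following runs a plain fixed-step Wirtinger flow on the Poisson negative log-likelihood; the measurement model, problem sizes, step size, and noiseless intensities are all illustrative assumptions, and the talk's specialized low-dose loss functions are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 20, 120                                     # made-up problem sizes

# complex Gaussian measurement vectors and a ground-truth signal
A = (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2)
x_true = rng.normal(size=n) + 1j * rng.normal(size=n)
y = np.abs(A @ x_true) ** 2                        # noiseless intensities for illustration

def poisson_loss(z, eps=1e-12):
    # negative Poisson log-likelihood up to constants: sum(I - y*log I), I = |Az|^2
    I = np.abs(A @ z) ** 2 + eps
    return np.sum(I - y * np.log(I))

def wirtinger_grad(z, eps=1e-12):
    # Wirtinger derivative of the loss with respect to conj(z)
    Az = A @ z
    I = np.abs(Az) ** 2 + eps
    return A.conj().T @ ((1.0 - y / I) * Az)

z = rng.normal(size=n) + 1j * rng.normal(size=n)   # random initialization
step = 0.05 / m                                    # small fixed step size
losses = [poisson_loss(z)]
for _ in range(300):
    z = z - step * wirtinger_grad(z)
    losses.append(poisson_loss(z))
```

In practice, careful initialization and step-size choice matter, and for genuinely low-count Poisson data the loss is modified near zero intensities; this sketch only shows the basic Wirtinger flow mechanics.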


ADMM methods for Phase Retrieval and Ptychography

Albert Fannjiang

UC Davis, United States of America

We present a systematic derivation and local convergence analysis for various ADMM algorithms in phase retrieval and ptychography.

We also discuss the extension of these algorithms to blind ptychography where the probe is unknown and compare their numerical performance.



Phase retrieval in the wild: In situ optimized reconstruction for X-ray in-line holography

Johannes Dora1,2, Johannes Hagemann2, Silja Flenner3, Christian Schroer2, Tobias Knopp1,4

1Hamburg University of Technology (TUHH), Germany; 2Deutsches Elektronen Synchrotron (DESY), Germany; 3Helmholtz-Zentrum Geesthacht (HEREON), Germany; 4University Medical Center Hamburg-Eppendorf (UKE), Germany

The phase problem is a well-known challenge in propagation-based phase-contrast X-ray imaging: whenever a detector measures a complex X-ray wavefield, the phase information is lost, i.e. only the magnitude of the measured wavefield remains as usable data. The resulting inverse problem is ill-posed and non-convex, requiring twice as many variables to be reconstructed in order to obtain the complex-valued image of the object under study.

In a recent development we have changed the representation of the reconstructed image [1]. The classical representation as amplitude and phase suffers from phase wrapping ambiguities. The representation as the projected refractive index of the object avoids these problems. However, this algorithm still suffers from slow convergence and convergence to local minima.

In our work, we have investigated the main causes of slow convergence and local minima for the Nesterov-accelerated projected gradient descent type of algorithm that is currently used in practice. We propose a framework of different techniques to address these problems and show that, by combining the proposed methods, the reconstruction result can be dramatically improved in terms of reconstruction speed and quality. We apply our proposed methods to several datasets obtained from the nano-imaging experiment at the Hereon-operated beamline P05 at DESY (Hamburg, Germany). We demonstrate that our proposed framework can cope with single-distance measurements, which is a requirement for in-situ/operando experiments, and without a compact support constraint, while maintaining robustness across a wide range of samples.

[1] F. Wittwer, J. Hagemann, et al. Phase retrieval framework for direct reconstruction of the projected refractive index applied to ptychography and holography. Optica 9: 295--302, 2022.
 
1:30pm - 3:30pm
MS06 4: Inverse Acoustic and Electromagnetic Scattering Theory - 30 years later
Location: VG3.103
Session Chair: Fioralba Cakoni
Session Chair: Houssem Haddar
 

Nonlinear integral equations for 3D inverse acoustic and electromagnetic scattering

Olha Ivanyshyn Yaman

Hartree Centre, Science and Technology Facilities Council, UK

We present two extensions of the method originally developed by Kress and Rundell in 2005 for a 2D inverse boundary value problem for the Laplace equation. In particular, we consider the reconstruction of a 3D perfectly electric conductor obstacle, and the reconstruction of generalized surface impedance functions for acoustic scattering, from far-field measurements of the scattered wave associated with a few incident plane waves. The inverse scattering problems are solved numerically by an approach based on reformulating the problem as a system of nonlinear and ill-posed integral equations for the unknown boundary (or boundary condition) and the measurements. The iteratively regularized Gauss-Newton method is applied to the resulting system.

[1] R. Kress, W. Rundell. Nonlinear integral equations and the iterative solution for an inverse boundary value problem. Inverse Probl. 21(4): 1207--1223, 2005.

[2] O. Ivanyshyn Yaman, F. Le Louër. Material derivatives of boundary integral operators in electromagnetism and application to inverse scattering problems. Inverse Probl. 32(9): 095003, 2016.

[3] O. Ivanyshyn Yaman. Reconstruction of generalized impedance functions for 3D acoustic scattering. J. Comput. Phys, 392(1): 444--455, 2019.



Inverse scattering in a partially embedded waveguide

Laurent Bourgeois1, Jean-François Fritsch2, Arnaud Recoquillay2

1ENSTA Paris/POEMS, France; 2CEA LIST, France

This talk concerns the identification of defects in a closed waveguide which is partially embedded in a surrounding medium, from scattering measurements on the free part of the waveguide. We wish to model, for example, an NDT experiment on a steel cable embedded in concrete. There are two main issues: the back-scattering situation and the leakage of waves from the closed waveguide into the surrounding medium. We will first introduce Perfectly Matched Layers in the transverse direction in order to transform the structure into a junction of two closed half-waveguides, one of them being a complex stratified medium. Then, after discussing the well-posedness of the forward problem and its numerical resolution, we will show how the inverse problem can be solved with the help of a modal formulation of the Linear Sampling Method. Some 2D numerical experiments will be shown.


Revisiting the Hybrid method for the inverse scattering transmission problem

Pedro Serranho1,2,3, João Paixão1,3

1Universidade Aberta, Portugal; 2CIBIT, University of Coimbra, Portugal; 3CEMAT, University of Lisbon, Portugal

In this talk we will address the numerical solution of the time-harmonic inverse scattering problem for an obstacle with transmission conditions and with given far-field data. To this end we will revisit the ideas of the hybrid method [1,2,3,4,5] that combines the framework of the Kirsch-Kress decomposition method and the iterative Newton-type method.

Instead of linearizing all the equations at once as in [6,7], we will explore the possibility of, in a first ill-posed step, reconstructing the scattered exterior field and the interior field by imposing the far-field condition and one of the boundary conditions, and then, in a second step, linearizing the second boundary condition in order to update the approximation of the boundary of the obstacle. The two steps are then iterated until a stopping criterion is met.

[1] R. Kress, P. Serranho. A hybrid method for two-dimensional crack reconstruction, Inverse Probl. 21 (2): 773--784, 2005.

[2] P. Serranho. A hybrid method for inverse scattering for shape and impedance, Inverse Probl. 22 (2): 663--680, 2006.

[3] R. Kress, P. Serranho. A hybrid method for sound-hard obstacle reconstruction, J. Comput. Appl. Math. 204 (2): 418--427, 2007.

[4] P. Serranho. A hybrid method for inverse scattering for Sound-soft obstacles in $\mathbb R^{3}$. Inverse Problems and Imaging. 1(4): 691--712, 2007.

[5] O. Ivanyshyn, R. Kress, P. Serranho. Huygens’ principle and iterative methods in inverse obstacle scattering. Adv. Comput. Math. 33 (4): 413--429, 2010.

[6] A. Altundag, R. Kress. An iterative method for a two-dimensional inverse scattering problem for a dielectric. J. Inverse Ill-Posed Probl. 20 (4): 575--590, 2012.

[7] A. Altundag. Inverse obstacle scattering with conductive boundary condition for a coated dielectric cylinder. J. Concr. Appl. Math. 13 ,(1--2): 11--22, 2015.


Single Mode Multi-frequency Factorization Method for the Inverse Source Problem in Acoustic Waveguides

Shixu Meng

Academy of Mathematics and Systems Science, Chinese Academy of Sciences, China, People's Republic of

This talk discusses the inverse source problem with a single propagating mode at multiple frequencies in an acoustic waveguide. The goal is to provide both theoretical justifications and efficient algorithms for imaging extended sources using sampling methods. In contrast to the existing far/near field operators based on integrals over the space variable, a multi-frequency far-field operator is introduced based on an integral over the frequency variable. This far-field operator is defined so as to incorporate the possibly non-linear dispersion relation, a unique feature of waveguides. The factorization method is deployed to establish a rigorous characterization of the range support, which is the support of the source in the direction of wave propagation. A related factorization-based sampling method is also discussed. These sampling methods are shown to be capable of imaging the range support of the source. Numerical examples are provided to illustrate the performance of the sampling methods, including an example imaging a complete sound-soft block.
 
1:30pm - 3:30pm
MS07 1: Regularization for Learning from Limited Data: From Theory to Medical Applications
Location: VG1.101
Session Chair: Markus Holzleitner
Session Chair: Sergei Pereverzyev
Session Chair: Werner Zellinger
 

Regularized Radon - Nikodym differentiation and some of its applications

Duc Hoan Nguyen, Sergei Pereverzyev, Werner Zellinger

Johann Radon Institute for Computational and Applied Mathematics, Austria

We discuss the problem of estimating Radon-Nikodym derivatives. This problem appears in various applications, such as covariate shift adaptation, likelihood-ratio testing, mutual information estimation, and conditional probability estimation. To address this problem, we employ the general regularization scheme in reproducing kernel Hilbert spaces. The convergence rate of the corresponding regularized learning algorithm is established by taking into account both the smoothness of the derivative and the capacity of the space in which it is estimated. This is done in terms of general source conditions and the regularized Christoffel functions. The theoretical results are illustrated by numerical simulations.
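A minimal finite-dimensional sketch of regularized density-ratio (Radon-Nikodym derivative) estimation, here via a least-squares criterion with Gaussian kernels rather than the general spectral regularization scheme of the talk; the sample distributions, bandwidth, number of centers, and regularization parameter are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x_mu = rng.normal(0.0, 1.0, size=500)    # samples from the reference measure mu
x_nu = rng.normal(1.0, 1.0, size=500)    # samples from nu; we estimate d(nu)/d(mu)

centers = x_nu[:50]                      # kernel centers taken from the nu-samples
sigma, lam = 1.0, 0.1                    # bandwidth and regularization parameter

def K(a, b):
    # Gaussian kernel matrix between 1D sample sets a and b
    return np.exp(-(np.atleast_1d(a)[:, None] - b[None, :]) ** 2 / (2 * sigma**2))

Phi_mu, Phi_nu = K(x_mu, centers), K(x_nu, centers)
H = Phi_mu.T @ Phi_mu / len(x_mu)        # empirical second moment under mu
h = Phi_nu.mean(axis=0)                  # empirical kernel mean under nu
theta = np.linalg.solve(H + lam * np.eye(len(centers)), h)

def ratio(x):
    # estimated Radon-Nikodym derivative evaluated at x
    return K(x, centers) @ theta
```

For these two Gaussians the true derivative is $\exp(x - 1/2)$, so the estimate should increase from left to right; the choice of regularization parameter plays the role that the source conditions and Christoffel functions make precise in the convergence analysis.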


Explicit error rate results in the context of domain generalization

Markus Holzleitner

JKU Linz, Austria

Given labeled data from different source distributions, the problem of domain generalization is to learn a model that is expected to generalize well on new target distributions for which only unlabeled samples are available. We frame domain generalization as a problem of functional regression. This concept leads to a new algorithm for learning a linear operator from marginal distributions of inputs to the corresponding conditional distributions of outputs given inputs. Our algorithm allows a source-distribution-dependent construction of reproducing kernel Hilbert spaces for prediction and satisfies non-asymptotic error bounds for the idealized risk. We intend to give a short overview of the required mathematical concepts and proof techniques, and illustrate our approach with a numerical example. The talk is based on [1].

[1] M. Holzleitner, S. V. Pereverzyev, W. Zellinger. Domain Generalization by Functional Regression. arXiv:2302.04724, 2023.



Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation

Marius-Constantin Dinu

JKU Linz, Austria

We study the problem of choosing algorithm hyper-parameters in unsupervised domain adaptation, i.e., with labeled data in a source domain and unlabeled data in a target domain drawn from a different input distribution. We follow the strategy of computing several models using different hyper-parameters and subsequently computing a linear aggregation of the models. While several heuristics exist that follow this strategy, methods are still missing that rely on thorough theories for bounding the target error. To this end, we propose a method that extends weighted least squares to vector-valued functions, e.g., deep neural networks. We show that the target error of the proposed algorithm is asymptotically not worse than twice the error of the unknown optimal aggregation. We also perform a large-scale empirical comparative study on several datasets, including text, images, electroencephalogram, body sensor signals and signals from mobile phones. Our method outperforms deep embedded validation (DEV) and importance weighted validation (IWV) on all datasets, setting a new state-of-the-art performance for solving parameter choice issues in unsupervised domain adaptation with theoretical error guarantees. We further study several competitive heuristics, all outperforming IWV and DEV on at least five datasets. However, our method outperforms each heuristic on at least five of seven datasets. This talk is based on [1].

[1] M.-C. Dinu, M. Holzleitner, M. Beck, H. D. Nguyen, A. Huber, H. Eghbal-zadeh, B. A. Moser, S. Pereverzyev, S. Hochreiter, W. Zellinger. Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation. The Eleventh International Conference on Learning Representations (ICLR), 2023.


Convex regularization in statistical inverse learning problems

Luca Ratti1, Tatiana A. Bubba2, Martin Burger3, Tapio Helin4

1Università degli Studi di Bologna, Italy; 2University of Bath, UK; 3Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany; 4Lappeenranta-Lahti University of Technology, Finland

We consider a problem at the crossroads of inverse problems and statistical learning: namely, the estimation of an unknown function from noisy and indirect measurements, which are only evaluated at randomly distributed design points. This occurs in many contexts in modern science and engineering, where massive data sets arise in large-scale problems from poorly controllable experimental conditions. When tackling this task, a common ground between inverse problems and statistical learning is represented by regularization theory, although with slightly different perspectives. In this talk, I will present a unified approach, leading to convergence estimates of the regularized solution to the ground truth, both as the noise on the data reduces and as the number of evaluation points increases. I will mainly focus on a class of convex, $p$-homogeneous regularization functionals ($p$ being between $1$ and $2$), which allow moving from classical Tikhonov regularization towards sparsity-promoting techniques. Particular attention is given to the case of Besov norm regularization, which represents a case of interest for wavelet-based regularization. The most prominent application I will discuss is X-ray tomography with randomly sampled angular views. I will finally sketch some connections with recent extensions of our approach, including a more general family of sparsifying transforms and dynamical inverse problems.
 
1:30pm - 3:30pmMS08 1: Integral Operators in Potential Theory and Applications
Location: VG2.102
Session Chair: Doosung Choi
Session Chair: Mikyoung Lim
Session Chair: Stephen Shipman
 

On the identification of small anomaly via MUSIC algorithm without background information

Won-Kwang Park

Kookmin University, Republic of Korea

MUltiple SIgnal Classification (MUSIC) is a promising non-iterative technique for identifying small anomalies in microwave imaging. For a successful application, accurate values of the permittivity, permeability, and conductivity of the background must be known. If one of these values is unknown, an inaccurate anomaly location will inevitably be retrieved by MUSIC. To explain this phenomenon, we investigate the structure of the MUSIC imaging function by establishing a relationship with an infinite series of Bessel functions of integer order, the antenna arrangement, and the applied values of permittivity, permeability, and conductivity. The revealed structure explains the theoretical reason why an inaccurate anomaly location is retrieved. Simulation results with synthetic data are presented to support the theoretical result.

[1] W.-K. Park. Application of MUSIC algorithm in real-world microwave imaging of unknown anomalies from scattering matrix, Mech. Syst. Signal Proc. 153: Article No. 107501, 2021.

[2] R. Solimene, G. Ruvio, A. Dell'Aversano, A. Cuccaro, Max J. Ammann, R. Pierri. Detecting point-like sources of unknown frequency spectra, Prog. Electromagn. Res. B 50: 347-364, 2013.
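As a toy illustration of the imaging-function structure discussed in the abstract, the following sketch locates a single point anomaly from a multistatic response matrix via the standard MUSIC recipe (SVD, noise-subspace projection). The 1D antenna layout, the steering vector `g`, and all parameter names are illustrative assumptions, not the setting of the talk.

```python
import numpy as np

def music_image(msr, steering, n_signal):
    """MUSIC imaging function: SVD of the multistatic response (MSR) matrix,
    then the reciprocal norm of the projection of each steering vector onto
    the noise subspace; it peaks where the steering vector leaves the noise
    subspace, i.e. at the anomaly location."""
    U, _, _ = np.linalg.svd(msr)
    noise = U[:, n_signal:]                      # noise-subspace basis
    proj = noise.conj().T @ steering.T           # project steering vectors
    return 1.0 / (np.linalg.norm(proj, axis=0) + 1e-12)

# Toy 1D example: one point anomaly, 8 antennas on [0, 1], wavenumber 2*pi.
N, k, true_pos = 8, 2.0 * np.pi, 0.4
antennas = np.linspace(0.0, 1.0, N)
grid = np.linspace(0.0, 1.0, 101)

def g(x):                                        # illustrative steering vector
    return np.exp(1j * k * np.abs(antennas - x)) / np.sqrt(N)

msr = np.outer(g(true_pos), g(true_pos))         # rank-one MSR for one anomaly
img = music_image(msr, np.stack([g(x) for x in grid]), n_signal=1)
print(grid[np.argmax(img)])                      # peaks at the anomaly
```

With an accurate steering model the peak sits at the true position; a wrong background (hence a wrong `g`) shifts the peak, which is the phenomenon analyzed in the talk.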


Construction of inclusions with vanishing generalized polarization tensors by imperfect interfaces

Doosung Choi1, Mikyoung Lim2

1Louisiana State University, United States of America; 2Korea Advanced Institute of Science and Technology, Republic of Korea

We address the question of constructing inclusions with vanishing generalized polarization tensors (GPTs) and provide a new construction scheme for GPT-vanishing structures based on imperfect interfaces. In particular, we construct GPT-vanishing structures of general shape with imperfect interfaces, where the inclusions have arbitrary finite conductivity.


Spectral theory of surface plasmons in the nonlocal hydrodynamic Drude model

Hyundae Lee1, Matias Ruiz3, Sanhyeon Yu2

1Inha University, South Korea; 2Korea University, South Korea; 3University of Edinburgh, Scotland

We study surface plasmons, which are collective oscillations of electrons at metal-dielectric interfaces that can be excited by light. The local Drude model, which is the standard way to describe surface plasmons, ignores the spatial and quantum variations of the electron gas. These variations matter at the nanoscale and can change how metallic nanostructures interact with light. We use integral operator methods to investigate how the nonlocal hydrodynamic Drude model (HDM), which accounts for these variations, affects the spectral properties of surface plasmons in general shapes with smooth boundaries.


Numerical computation of Laplacian eigenvalues based on the layer potential formulation

Mikyoung Lim, Jiho Hong

Korea Advanced Institute of Science and Technology, Republic of Korea

In this talk, we will present a numerical method for computing the Laplacian eigenvalues of a planar, simply connected domain using only the coefficients of the conformal mapping of the domain. We formulate the eigenvalue problem via the layer potential characterization and geometric density basis functions, which are associated with the conformal mapping of the domain, resulting in an infinite-dimensional matrix. We will discuss how to compute the eigenvalues using this infinite-dimensional matrix. Additionally, we will provide a convergence analysis for this approach based on the Gohberg-Sigal theory for operator-valued functions.

 
1:30pm - 3:30pmMS09: Forward and inverse domain uncertainty quantification
Location: VG1.102
Session Chair: Vesa Kaarnioja
Session Chair: Claudia Schillings
 

Isogeometric multilevel quadrature for forward and inverse random acoustic scattering

Jürgen Dölz1, Helmut Harbrecht2, Carlos Jerez-Hanckes3, Michael Multerer4

1University of Bonn, Germany; 2University of Basel, Switzerland; 3USI Lugano, Switzerland; 4Universidad Adolfo Ibáñez, Santiago, Chile

We study the numerical solution of forward and inverse time-harmonic acoustic scattering problems by randomly shaped obstacles in three-dimensional space using a fast isogeometric boundary element method. Within the isogeometric framework, realizations of the random scatterer can efficiently be computed by simply updating the NURBS mappings which represent the scatterer. This way, we end up with a random deformation field. In particular, we show that it suffices to know the deformation field's expectation and covariance at the scatterer's boundary to model the surface's Karhunen–Loève expansion. Leveraging the isogeometric framework, we employ multilevel quadrature methods to approximate quantities of interest such as the scattered wave's expectation and variance. By computing the wave's Cauchy data at an artificial, fixed interface enclosing the random obstacle, we can also directly infer quantities of interest in free space. Adopting the Bayesian paradigm, we finally compute the expected shape and variance of the scatterer from noisy measurements of the scattered wave at the artificial interface. Numerical results for the forward and inverse problems validate the proposed approach.


Evolving surfaces driven by stochastic PDEs

Annika Lang

Chalmers & University of Gothenburg, Sweden

Motivated by evolving shapes such as moving cells, we construct examples of evolving stochastic surfaces by transformation of solutions to stochastic PDEs on spheres. We focus on the stochastic heat equation and its approximation to understand the transformation and simulation methods for the surfaces.


Multilevel domain UQ in computational electromagnetics

Jakob Zech1, Ruben Aylwin2, Carlos Jerez-Hanckes2, Christoph Schwab3

1Universität Heidelberg; 2Universidad Adolfo Ibáñez; 3ETH Zürich

In this talk, we focus on the numerical approximation of time-harmonic electromagnetic fields for the Maxwell lossy cavity problem on uncertain domains. To deal with the different problem geometries, a shape parametrization framework that maps physical domains to a fixed polyhedral nominal domain is adopted. We discuss multilevel Monte Carlo sampling and multilevel sparse-grid quadrature for computing the expectation of the solutions with respect to uncertain domain ensembles. In addition, we analyze sparse-grid interpolation to compute surrogates of the domain-to-solution mappings. A rigorous fully discrete error analysis is provided, and we prove that dimension-independent algebraic convergence is achieved.


Advantages of locality in random field representations for shape uncertainty quantification

Laura Scarabosio, Wouter van Harten

Radboud University, The Netherlands

We consider the solution to an elliptic partial differential equation on a domain which is subject to uncertain changes in its shape.

When representing uncertain shape variations, using localized basis functions can be appealing from a modeling point of view, as they offer more geometrical flexibility compared to globally supported basis functions. In this talk, we will see that locality of basis functions can also be convenient in terms of approximation properties with respect to the uncertain parameter. Extending ideas from [1,2], it is indeed possible to prove, using pointwise summability bounds, that sparse polynomial approximations to the parameter-to-solution map may converge faster if localized functions are used in the shape representation. We will also see that this approximability result goes beyond shape uncertainty, and it applies in fact to many other parameter-to-solution maps, as long as they are smooth and have some given sparsity properties.

[1] M. Bachmayr, A. Cohen, G. Migliorati. Sparse polynomial approximation of parametric elliptic PDEs. Part I: affine coefficients, ESAIM: Mathematical Modelling and Numerical Analysis 51(1): 321-339, 2017.

[2] M. Bachmayr, A. Cohen, R. DeVore, G. Migliorati. Sparse polynomial approximation of parametric elliptic PDEs. Part II: lognormal coefficients, ESAIM: Mathematical Modelling and Numerical Analysis 51(1): 341-363, 2017.
 
1:30pm - 3:30pmMS10 3: Optimization in Inverse Scattering: from Acoustics to X-rays
Location: VG1.103
Session Chair: Radu Ioan Bot
Session Chair: Russell Luke
 

Automated tight Lyapunov analysis for first-order methods

Manu Upadhyaya1, Sebastian Banert1, Adrien Taylor2, Pontus Giselsson1

1Lund University, Sweden; 2INRIA Paris, France

We present a methodology for establishing the existence of quadratic Lyapunov inequalities for a wide range of first-order methods used to solve convex optimization problems. In particular, we consider

i) classes of optimization problems of finite-sum form with (possibly strongly) convex and possibly smooth functional components,

ii) first-order methods that can be written as a linear system on state-space form in feedback interconnection with the subdifferentials of the functional components of the objective function, and

iii) quadratic Lyapunov inequalities that can be used to draw convergence conclusions.

We provide a necessary and sufficient condition for the existence of a quadratic Lyapunov inequality that amounts to solving a small-sized semidefinite program. We showcase our methodology on several first-order methods that fit the framework. Most notably, our methodology allows us to significantly extend the region of parameter choices that allow for duality gap convergence in the Chambolle–Pock method when the linear operator is the identity mapping.


Learned SVD for limited data inversion in PAT and X-ray CT

Markus Haltmeier, Johannes Schwab, Stephan Antholzer

Universität Innsbruck, Austria

We present a data-driven regularization method for inverse problems introduced in [1,2]. Our approach consists of two steps. In the first step, an intermediate reconstruction is performed by applying the truncated singular value decomposition (SVD). To prevent noise amplification, only coefficients corresponding to sufficiently large singular values are used, while the remaining coefficients are set to zero. In the second step, a trained deep neural network is used to recover the truncated SVD coefficients. We show that the proposed scheme yields a convergent regularization method. Numerical results are presented for limited data problems in PAT (photoacoustic tomography) and X-ray CT (computed tomography), showing that learned SVD regularization significantly improves pure truncated SVD regularization.

[1] J. Schwab, S. Antholzer, M. Haltmeier. Big in Japan: Regularizing networks for solving inverse problems, Journal of Mathematical Imaging and Vision, 62(3): 445-455, 2020.

[2] J. Schwab, S. Antholzer, R. Nuster, G. Paltauf, M. Haltmeier. Deep learning of truncated singular values for limited view photoacoustic tomography, Photons Plus Ultrasound: Imaging and Sensing 10878: 254-262, 2019.
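The first, deterministic step of the two-step scheme can be sketched as follows; the second step (the trained network that restores the truncated coefficients) is omitted, and the toy operator, threshold `tau`, and noise level are illustrative assumptions.

```python
import numpy as np

def truncated_svd_reconstruct(A, y, tau):
    """Step one of the scheme: truncated SVD pseudoinverse. Coefficients whose
    singular values fall below tau are set to zero to prevent noise
    amplification; a trained network (omitted here) would restore them."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    c = U.T @ y
    c = np.where(s >= tau, c / s, 0.0)           # keep only stable coefficients
    return Vt.T @ c

# Toy ill-conditioned problem with polynomially decaying singular values.
rng = np.random.default_rng(0)
n = 50
Q1, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q1 @ np.diag(1.0 / (1.0 + np.arange(n)) ** 2) @ Q2.T
x_true = np.sin(np.linspace(0.0, np.pi, n))
y = A @ x_true + 1e-2 * rng.standard_normal(n)   # noisy data

err_tsvd = np.linalg.norm(truncated_svd_reconstruct(A, y, tau=0.1) - x_true)
err_naive = np.linalg.norm(np.linalg.solve(A, y) - x_true)
print(err_tsvd < err_naive)  # truncation suppresses noise amplification
```

The truncation trades a large bias for stability; the learned second step of the talk is precisely what reduces that bias.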


Multiscale hierarchical decomposition methods for ill-posed problems

Tobias Wolf1, Elena Resmerita1, Stefan Kindermann2

1University of Klagenfurt, Austria; 2Johannes Kepler University Linz, Austria

The Multiscale Hierarchical Decomposition Method (MHDM) is a popular method originating from mathematical imaging. In its original context, it is very well suited to recover fine details of solutions to denoising and deblurring problems. The main idea is to iteratively solve a ROF minimization problem: in every iteration, the data for the ROF functional is the residual from the previous step, and the approximation to the true data is the sum of all minimizers up to that step. Thus, one obtains iterates that represent a decomposition of the ground truth into multiple levels of detail at different scales. We consider the method in a more general framework, replacing the total variation seminorm in the ROF functional by more general penalty terms in appropriate settings. We extend existing convergence results for the residuals of the iterates to classes of convex and nonconvex penalties. Moreover, we propose a necessary and sufficient condition under which the iterates of the MHDM agree with Tikhonov regularizers corresponding to suitable regularization parameters. We discuss the results on several examples, including 1- and 2-dimensional TV-denoising.
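The hierarchical iteration can be sketched as follows, with the TV seminorm replaced by a quadratic penalty so that each subproblem has a closed form; the parameter names and the geometric decay of the regularization parameter are illustrative choices, not the exact setting of the talk.

```python
import numpy as np

def mhdm(A, y, lam0=1.0, q=0.25, n_iter=8):
    """Multiscale hierarchical decomposition sketch: at scale k, minimize
        ||A u - r_k||^2 + lam_k ||u||^2
    against the current residual r_k, add the minimizer to the running sum,
    and shrink lam_k geometrically so later steps resolve finer scales."""
    n = A.shape[1]
    x_sum, residual, lam = np.zeros(n), y.copy(), lam0
    for _ in range(n_iter):
        # Tikhonov step: closed-form minimizer of the quadratic subproblem.
        u = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ residual)
        x_sum += u                       # hierarchical sum of minimizers
        residual = residual - A @ u      # data for the next, finer scale
        lam *= q
    return x_sum, residual

# Toy consistent problem: the residual decays across scales.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 20))
x_true = rng.standard_normal(20)
y = A @ x_true
x_rec, res = mhdm(A, y)
print(np.linalg.norm(res))
```

Each partial sum is a coarse-to-fine approximation of the solution; the talk's condition characterizes when these sums coincide with single Tikhonov regularizers.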



Scalable moment relaxations for graph-structured problems with values in a manifold: An optimal transport approach

Robin Kenis, Emanuel Laude, Panagiotis Patrinos

KU Leuven, Belgium

In this paper we consider a moment relaxation for large-scale nonsmooth optimization problems with graphical structure and manifold constraints. In the context of probabilistic inference this can be interpreted as MAP-inference in a continuous graphical model. In contrast to classical moment relaxations for global polynomial optimization we exploit the partially separable structure of the optimization problem and leverage Kantorovich–Rubinstein duality for optimal transport to decouple the problem. The proposed formulation is obtained via a dual subspace approximation which allows us to tackle possibly nonpolynomial optimization problems with manifold constraints and geodesic coupling terms. We show that the duality gap vanishes in the limit by proving that a Lipschitz continuous dual multiplier on a unit sphere can be approximated as closely as desired in terms of a Lipschitz continuous polynomial. This is closely related to spherical harmonics and the eigenfunctions of the Laplace–Beltrami operator. The formulation is applied to manifold-valued imaging problems with total variation regularization and graph-based SLAM. In imaging tasks our approach achieves small duality gaps for a moderate degree. In graph-based SLAM our approach often finds solutions which after refinement with a local method are near the ground truth solution.
 
1:30pm - 3:30pmMS13 1: Stochastic iterative methods for inverse problems
Location: VG0.111
Session Chair: Tim Jahn
 

Beating the Saturation of the Stochastic Gradient Descent for Linear Inverse Problems

Bangti Jin1, Zehui Zhou2, Jun Zou1

1The Chinese University of Hong Kong; 2Rutgers University, United States of America

Stochastic gradient descent (SGD) is a promising method for solving large-scale inverse problems, due to its excellent scalability with respect to data size. The current mathematical theory, viewed through the lens of regularization theory, predicts that SGD with a polynomially decaying stepsize schedule may suffer from an undesirable saturation phenomenon, i.e., the convergence rate does not further improve with the solution regularity index once it is beyond a certain range. In this talk, I will present our recent results on beating this saturation phenomenon:

(i) By using a small initial stepsize. We derive a refined convergence rate analysis of SGD, which shows that saturation does not occur if the initial stepsize of the schedule is sufficiently small.

(ii) By using stochastic variance reduced gradient (SVRG), a popular variance reduction technique for SGD. We prove that, with a suitable constant stepsize schedule, SVRG can achieve an optimal convergence rate in terms of the noise level (under suitable regularity conditions), meaning that saturation does not occur.
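For illustration, a minimal SGD loop with a polynomially decaying stepsize schedule on a consistent toy system; the schedule constants and the toy problem are assumptions for this sketch and are not tuned to exhibit the saturation phenomenon analyzed in the talk.

```python
import numpy as np

def sgd(A, y, n_iter=20000, c0=0.5, alpha=0.5, seed=0):
    """SGD on (1/2) * sum_i (A_i x - y_i)^2 with the polynomially decaying
    stepsize schedule c0 * k**(-alpha)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for k in range(1, n_iter + 1):
        i = rng.integers(m)                           # one randomly sampled row
        x -= c0 / k ** alpha * (A[i] @ x - y[i]) * A[i]
    return x

# Toy consistent system with rows of roughly unit norm.
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 10)) / np.sqrt(10)
x_true = rng.standard_normal(10)
x_rec = sgd(A, A @ x_true)
print(np.linalg.norm(x_rec - x_true))
```

In the noiseless, consistent case the iterates converge to the exact solution; the saturation discussed above concerns the rate of this convergence in the noisy, ill-posed setting.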


Early stopping of untrained convolutional networks

Tim Jahn1, Bangti Jin2

1University of Bonn, Germany; 2Chinese University of Hong Kong

In recent years, new regularisation methods based on neural networks have shown promising performance for the solution of ill-posed problems, e.g., in imaging science. Due to the non-linearity of the networks, these methods often lack profound theoretical justification. In this talk we rigorously discuss convergence for an untrained convolutional network. Untrained networks are particularly attractive for applications, since they do not require any training data: their regularising property is based solely on the architecture of the network. Because of this, appropriate early stopping is essential for the success of the method. We show that the discrepancy principle is an adequate method for early stopping here, as it yields minimax optimal convergence rates.
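The discrepancy principle itself is easy to state in code: iterate until the residual first falls below $\tau\delta$, where $\delta$ is the noise level. The following Landweber sketch illustrates the stopping rule on a toy linear problem (not the untrained-network setting of the talk); all parameters are illustrative.

```python
import numpy as np

def landweber_early_stop(A, y, delta, tau=1.2, max_iter=10000):
    """Gradient descent on ||A x - y||^2, stopped by the discrepancy
    principle: return as soon as the residual norm drops below tau * delta."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for k in range(max_iter):
        r = A @ x - y
        if np.linalg.norm(r) <= tau * delta:
            return x, k                  # stopped early by the principle
        x -= step * A.T @ r
    return x, max_iter

# Toy mildly ill-posed problem with known noise level delta.
rng = np.random.default_rng(0)
n = 20
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(1.0 / (1.0 + np.arange(n))) @ Q.T
x_true = np.sin(np.linspace(0.0, np.pi, n))
delta = 1e-2
noise = rng.standard_normal(n)
noise *= delta / np.linalg.norm(noise)
y = A @ x_true + noise
x_rec, k_stop = landweber_early_stop(A, y, delta)
print(k_stop)
```

Stopping once the residual matches the noise level avoids fitting the noise; in the talk the same rule controls the iterations of an untrained convolutional network.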


Stochastic mirror descent method for linear ill-posed problems in Banach spaces

Qinian Jin

The Australian National University, Australia

Consider linear ill-posed problems governed by the system $A_i x = y_i$ for $i= 1,\cdots, p$, where each $A_i$ is a bounded linear operator from a Banach space $X$ to a Hilbert space $Y_i$. When $p$ is huge, solving the problem by an iterative regularization method using the whole information at each iteration step can be very expensive, due to the huge amount of memory and excessive computational load per iteration. To solve such large-scale ill-posed systems efficiently, we develop a stochastic mirror descent method which uses only a small portion of the equations, randomly selected at each iteration step, and incorporates convex regularization terms into the algorithm design. Therefore, our method scales very well with the problem size and has the capability of capturing features of the sought solutions. The convergence property of the method depends crucially on the choice of step-sizes. We consider various rules for choosing step-sizes and obtain convergence results under a priori stopping rules. Furthermore, we establish an order-optimal convergence rate result when the sought solution satisfies a benchmark source condition. Various numerical simulations are reported to test the performance of the method. This is joint work with Xiliang Lu and Liuying Zhang.
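A minimal sketch in the spirit of the method: each step uses one randomly selected equation for a Kaczmarz-type update of a dual variable, and the mirror map (soft-thresholding, i.e. the gradient of the conjugate of $\lambda\|x\|_1 + \tfrac{1}{2}\|x\|^2$) incorporates a sparsity-promoting regularizer. The step-size rule and parameters here are simplified assumptions, not the schedules analyzed in the talk.

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding, the mirror map for lam*||.||_1 + (1/2)||.||^2."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def stochastic_mirror_descent(A, y, lam=1.0, n_iter=50000, seed=0):
    """Each iteration samples one equation, updates the dual variable xi by a
    normalized Kaczmarz step, and maps back to the primal via soft(xi, lam),
    which promotes sparse solutions."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    xi, x = np.zeros(n), np.zeros(n)
    row_sq = np.einsum("ij,ij->i", A, A)         # squared row norms
    for _ in range(n_iter):
        i = rng.integers(m)
        xi -= (A[i] @ x - y[i]) / row_sq[i] * A[i]
        x = soft(xi, lam)
    return x

# Toy consistent system with a sparse ground truth.
rng = np.random.default_rng(2)
A = rng.standard_normal((80, 40))
x_true = np.zeros(40)
x_true[[3, 11, 27]] = [2.0, -1.5, 1.0]
x_rec = stochastic_mirror_descent(A, A @ x_true)
print(np.linalg.norm(x_rec - x_true))
```

Only one row of $A$ is touched per iteration, which is what makes the method scale to huge $p$.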


Early stopping for spectral filter estimators regularized by projection

Alain Celisse, Samy Clementz

Paris 1 Panthéon-Sorbonne University, France

When using iterative algorithms such as gradient descent, a classical problem is the choice of the number of iterations to perform in practice. This is a crucial question since the number of iterations determines the final statistical performance of the resulting estimator.

The main purpose of the present talk is to design data-driven stopping rules, called "early stopping rules" (ESR), that answer the above question not only for gradient descent but also for the broader class of spectral filter estimators, which includes for instance ridge regression.

Compared to previous works in this direction, the present contribution focuses on the computational issue raised by the use of spectral filter estimators in the context of a huge amount of data. In particular this requires the additional use of regularization by projection techniques for efficiently computing (approximations to) the spectral filter estimators.

In this talk we develop a theoretical analysis of the behavior of these projection-based spectral filter (PSF) estimators. Oracle inequalities also quantify the performance of the data-driven early stopping rule applied to these PSF estimators.
 
1:30pm - 3:30pmMS31 1: Inverse Problems in Elastic Media
Location: VG3.104
Session Chair: Andrea Aspri
Session Chair: Ekaterina Sherina
 

Hybrid Inverse Problems for Nonlinear Elasticity

Alden Waters1, Hugo Carrillo-Lincopi2

1Leibniz University of Hannover, Germany; 2Inria, Chile

We consider the Saint-Venant model in 2 dimensions for nonlinear elasticity. Under the hypothesis that the fluid is incompressible, we recover the displacement field and the Lamé parameter $\mu$ from power density measurements. A stability estimate is shown to hold for small displacement fields, under some natural hypotheses on the direction of the displacement. The techniques introduced show the difficulties of using hybrid imaging techniques for nonlinear inverse problems.


Quantitative reconstruction of viscoelastic media with attenuation model uncertainty

Florian Faucher1, Otmar Scherzer2

1Makutu, Inria Bordeaux, France; 2Faculty of Mathematics, University of Vienna, Austria

We consider the inverse wave problem of reconstructing the properties of a viscoelastic medium. The data acquisition corresponds to probing waves that are sent and measured outside of the sample of interest, in the configuration of non-intrusive inversion. In media with attenuation, waves lose energy when propagating through the domain. Attenuation is a frequency-dependent phenomenon with several existing models [1], each leading to a different wave equation. Therefore, in addition to adding unknown coefficients to the inverse problem, the attenuation law characterizing a medium is typically unknown prior to the reconstruction, further increasing the ill-posedness. In this work, we consider time-harmonic waves, which conveniently unify the different attenuation models via complex-valued parameters. We illustrate the difference in wave propagation depending on the attenuation law and carry out the reconstruction under attenuation model uncertainty [2]. That is, we perform the reconstruction procedure with different attenuation models used for the (synthetic) data generation and for the reconstruction, thereby showing the robustness of the reconstruction method. Furthermore, we investigate a configuration with a reflecting boundary surrounding the sample. To handle the resulting multiple reflections, we introduce a reconstruction strategy with a progression of complex frequencies. We illustrate with experiments of ultrasound imaging.

[1] J. M. Carcione. Wave Fields in Real Media: Wave Propagation in Anisotropic, Anelastic, Porous and Electromagnetic Media, third ed., Elsevier, 2015.

[2] F. Faucher, O. Scherzer. Quantitative inverse problem in visco-acoustic media under attenuation model uncertainty, Journal of Computational Physics 472: 111685, 2023. https://doi.org/10.1016/j.jcp.2022.111685


An Intensity-based Inversion Method for Quasi-Static Optical Coherence Elastography

Ekaterina Sherina1, Lisa Krainz2, Simon Hubmer3, Wolfgang Drexler2, Otmar Scherzer1,3

1University of Vienna, Austria; 2Medical University of Vienna, Austria; 3Johann Radon Institute Linz, Austria

We consider optical coherence elastography, an emerging research field still lacking precision and reproducibility. Elastography as an imaging modality aims at mapping the biomechanical properties of a given sample. It is widely used in medicine, in particular for the non-invasive identification of malignant formations inside the human skin or in tissue biopsies during surgeries. In terms of diagnostic accuracy, one is interested in quantitative values mapped on top of the visualisation of the sample rather than only qualitative images.

In this work, we discuss a general intensity-based approach to the inverse problem of quasi-static elastography under any deformation model. From a pair of tomographic scans obtained by an imaging modality of choice, e.g. X-ray, ultrasound, magnetic resonance, or optical imaging, we aim to recover one or a set of unknown material parameters describing the sample. This approach was briefly introduced in [1], under the name of the intensity-based inversion method, and applied to the recovery of the Young's modulus of a set of samples imaged with Optical Coherence Tomography. Here, we mainly focus on investigating the intensity-based inversion approach in the inverse problems framework. Furthermore, we illustrate the performance of the inversion method on twelve silicone elastomer phantoms with inclusions of varying size and stiffness.

[1] L. Krainz, E. Sherina, S. Hubmer, M. Liu, W. Drexler, O. Scherzer. Quantitative Optical Coherence Elastography: A Novel Intensity-Based Inversion Method Versus Strain-Based Reconstructions. IEEE J. Sel. Topics Quantum Electron. 29(4): 1-16, 2023. DOI: 10.1109/JSTQE.2022.3225108.


On the identification of cavities and inclusions in linear elasticity with a phase-field approach

Elena Beretta1, Andrea Aspri2, Marco Verani3, Elisabetta Rocca4, Cecilia Cavaterra1

1New York University Abu Dhabi, United Arab Emirates; 2Università degli Studi di Milano, Italy; 3Polytechnic University of Milan, Italy; 4University of Pavia, Italy

I analyze the geometric inverse problem of recovering cavities and inclusions embedded in a linear elastic isotropic medium from boundary displacement measurements. Starting from a constrained minimization problem involving a boundary quadratic misfit functional with a regularization term penalizing the perimeter of the cavity or inclusion, we consider a family of relaxed functionals using a phase-field approach and derive a robust algorithm for the reconstruction of elastic inclusions and of cavities modeled as inclusions with a very small elasticity tensor.
 
1:30pm - 3:30pmMS41 1: Geomathematics
Location: VG3.101
Session Chair: Joonas Ilmavirta
 

Geodesic X-ray tomography on manifolds of low regularity

Antti Kaleva Kykkänen

University of Jyväskylä, Finland

Geodesic X-ray tomography arises in geomathematics as the linearized travel time problem for planets. Planets have non-smooth geometry, as the sound speed is generally non-smooth and can have jump discontinuities and other extreme behavior. In this talk we consider the question: how non-smooth can Riemannian geometry be for the X-ray transform of scalar functions (and tensor fields) to remain injective? We prove that the X-ray transform is (solenoidally) injective on Lipschitz functions (tensor fields) when the Riemannian geometry is simple $C^{1,1}$. The class $C^{1,1}$ is the natural lower bound on regularity for a well-defined X-ray transform. Our proofs are based on energy estimates derived from a Pestov identity, which lives on the non-smooth unit sphere bundle of the manifold. The talk is based on joint work with Joonas Ilmavirta.


Invariance of the elastic wave equation in the context of Finsler geometry

Hjørdis Amanda Schlüter

University of Jyväskylä, Finland

In this talk we address the Euclidean elastic wave equation under a change of variables and extend this to Riemannian geometry. This is inspired by previous research on the principal behavior of the Euclidean elastic wave equation under coordinate transformations. Further research has shown how the density-normalized stiffness tensor gives rise to a Finsler metric. With this in mind, we will touch upon what one can say about the density and stiffness tensor fields that give rise to the same Finsler metric, and how this affects the full elastic wave equation and not only its principal behavior.


Geometrization of inverse problems in seismology

Joonas Ilmavirta

University of Jyväskylä, Finland

Seismic waves can be modeled by the elastic wave equation, which has two material parameters: the stiffness tensor and the density. The inverse problem is to reconstruct these two fields from boundary data, and the stiffness tensor can be anisotropic. I will discuss how this problem can be tackled by geometric methods and how that leads to geometric inverse problems in Finsler geometry. This talk is related to several other talks in the same minisymposium.


Reconstruction of anisotropic stiffness tensors using algebraic geometry

Maarten de Hoop1, Joonas Ilmavirta2, Matti Lassas3, Anthony Varilly-Alvarado1

1Rice University, United States of America; 2University of Jyväskylä, Finland; 3University of Helsinki

Stiffness tensors serve as a fingerprint of a material. We describe how to harness anisotropy, using standard tools from algebraic geometry (e.g., generic geometric integrality, upper-semicontinuity of some standard functions, and Gröbner bases) to uniquely reconstruct the stiffness tensor of a general anisotropic material from an analytically small neighborhood of its corresponding slowness surface.
 
1:30pm - 3:30pmMS46 2: Inverse problems for nonlinear equations
Location: VG1.104
Session Chair: Lauri Oksanen
Session Chair: Teemu Kristian Tyni
 

Inverse source problems for nonlinear equations

Yi-Hsuan Lin

National Yang Ming Chiao Tung University, Taiwan

In this talk, we discuss inverse source problems for nonlinear equations. Unlike linear differential equations, which always exhibit gauge invariance, nonlinear equations may not. We investigate how the gauge symmetry can be broken for several nonlinear and nonlocal equations, which leads to unique determination results for certain equations.


Inverse problem for the minimal surface equation and nonlinear CGO calculus in dimension 2

Tony Liimatainen

University of Helsinki

We present our recent results regarding inverse problems for the minimal surface equation. Applications of the result include generalized boundary rigidity problem and AdS/CFT correspondence in physics. Minimal surfaces are solutions to a quasilinear elliptic equation and we determine the minimal surface up to an isometry from the corresponding Dirichlet-to-Neumann map in dimension 2. For this purpose we develop a nonlinear calculus for complex geometric optics solutions (CGOs) to handle numerous correction terms that appear in our analysis. We expect the calculus to be applicable to inverse problems for other nonlinear elliptic equations in dimension 2. The talk is based on joint works with Catalin Carstea, Matti Lassas and Leo Tzou.


Inverse scattering problems for semi-linear wave equations on manifolds

Teemu Tyni1, Spyros Alexakis1, Hiroshi Isozaki2, Matti Lassas3

1University of Toronto, Canada; 2University of Tsukuba, Japan; 3University of Helsinki, Finland

We discuss some recent results on inverse scattering problems for semi-linear wave equations. The inverse scattering problem is formulated on a Lorentzian manifold equipped with a Minkowski-type infinity. We show that a scattering functional, which roughly speaking maps measurements of solutions of a semi-linear wave equation at the past infinity to the future infinity, determines the manifold, the conformal class of the metric, and the nonlinear potential function up to a gauge. The main tools we employ are a Penrose-type conformal compactification of the Lorentzian manifold, reduction of the scattering problem to the study of the source-to-solution operator, and the use of the higher order linearization method to exploit the nonlinearity of the wave equation.

This is a joint work with S. Alexakis, H. Isozaki, and M. Lassas.


Determining a Lorentzian metric from the source-to-solution map for the relativistic Boltzmann equation

Tracey Balehowsky1, Antti Kujanpaa2, Matti Lassas3, Tony Liimatainen3

1University of Calgary, Canada; 2The Hong Kong University of Science and Technology; 3University of Helsinki

In this talk, we consider the following inverse problem: Given the source-to-solution map for a relativistic Boltzmann equation on a neighbourhood $V$ of an observer in a Lorentzian spacetime $(M,g)$ and knowledge of $g|_V$, can we determine (up to diffeomorphism) the spacetime metric $g$ on the domain of causal influence for the set $V$?

We will show that the answer is yes for certain cases. We will introduce the relativistic Boltzmann equation and the concept of an inverse problem. We then will highlight the key ideas of the proof of our main result. One such key point is that the nonlinear term in the relativistic Boltzmann equation which describes the behaviour of particle collisions captures information about a source-to-solution map for a related linearized problem. We use this relationship together with an analysis of the behaviour of particle collisions by classical microlocal techniques to determine the set of locations in $V$ where we first receive light particle signals from collisions in the unknown domain. From this data we are able to parametrize the unknown region and determine the metric.


Determining Lorentzian manifold from non-linear wave observation at a single point

Medet Nursultanov

University of Helsinki, Finland

Our research demonstrates that it is possible to determine the Lorentzian manifold by measuring the source-to-solution map for the semilinear wave equation at a single point. (Joint work with Lauri Oksanen and Leo Tzou).
 
1:30pm - 3:30pmMS50 1: Mathematics and Magnetic Resonance Imaging
Location: VG1.105
Session Chair: Kristian Bredies
Session Chair: Christian Clason
Session Chair: Martin Uecker
 

Deep learning MR image reconstruction and task-based evaluation

Florian Knoll, Jinho Kim, Marc Vornehm, Vanya Saksena, Zhengguo Tan, Bernhard Kainz

Department Artificial Intelligence in Biomedical Engineering, FAU Erlangen-Nuremberg, Germany

The inverse problem of reconstructing MR images $u$ from Fourier ($k$-) space data $f$ takes the form of the optimization problem:

$$\min_u \| Au - f \|_2^2 + \lambda \mathcal{R}(u).$$ $A=\mathcal{F}_\Omega C$ is the forward operator that describes the MR encoding process. It consists of a Fourier transform $\mathcal{F}_\Omega$ that maps from image space to Fourier ($k$-) space coefficients at the coordinates $\Omega$ and a diagonal matrix $C$ that contains the sensitivity profiles of the receiver coils of the MR system. $\mathcal{R}$ is a regularizer that separates between true image content and artifacts introduced by an accelerated acquisition. It has been demonstrated that deep learning methods that map the image reconstruction optimization problem onto unrolled neural networks and learn a regularizer from training data [1] achieve state-of-the-art performance in public research challenges [2].
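
As an illustration, the forward operator and objective above can be sketched in a few lines of NumPy. This is a minimal sketch, not the authors' implementation: a TV term stands in for the learned regularizer $\mathcal{R}$, and all array shapes and names are illustrative assumptions.

```python
import numpy as np

def forward(u, coils, mask):
    """A = F_Omega C: apply coil sensitivities, Fourier transform, sample at Omega."""
    return mask * np.fft.fft2(coils * u, axes=(-2, -1))

def objective(u, f, coils, mask, lam):
    """||Au - f||_2^2 + lam * R(u), with an (assumed) TV-like regularizer R."""
    resid = forward(u, coils, mask) - f
    fidelity = np.sum(np.abs(resid) ** 2)
    tv = np.sum(np.abs(np.diff(u, axis=0))) + np.sum(np.abs(np.diff(u, axis=1)))
    return fidelity + lam * tv

# toy example: 2 coils, 16x16 image, random k-space sampling mask Omega
rng = np.random.default_rng(0)
u = rng.standard_normal((16, 16))
coils = rng.standard_normal((2, 16, 16))
mask = rng.random((16, 16)) < 0.3          # broadcasts over the coil axis
f = forward(u, coils, mask)                # noiseless data
assert np.isclose(objective(u, f, coils, mask, lam=0.0), 0.0)
```

The unrolled networks referenced in [1] replace $\mathcal{R}$ (and the minimization itself) by learned components; the data-fidelity term keeps the form shown here.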

In this work, we will present an update on the performance of learned image reconstruction for a range of clinically relevant applications and discuss the issue of missing, as well as artificially hallucinated, fine-detail image features [3]. We will present results for cardiac, oncological and neuroimaging applications, and will also introduce a novel task-based evaluation for the quality of the reconstructed images using the fastMRI+ dataset [4].

[1] K. Hammernik, T. Klatzer, E. Kobler, M. P. Recht, D. K. Sodickson, T. Pock, F. Knoll. Learning a Variational Network for Reconstruction of Accelerated MRI Data, Magnetic Resonance in Medicine 79: 3055–3071, 2018. https://doi.org/10.1002/mrm.26977

[2] M. J. Muckley, B. Riemenschneider, A. Radmanesh, S. Kim, G. Jeong, J. Ko, Y. Jun, H. Shin, D. Hwang, M. Mostapha, S. Arberet, D. Nickel, Z. Ramzi, P. Ciuciu, J.-L. Starck, J. Teuwen, D. Karkalousos, C. Zhang, A. Sriram, Z. Huang, N. Yakubova, Y. W. Lui, F. Knoll. Results of the 2020 fastMRI Challenge for Machine Learning MR Image Reconstruction, IEEE Transactions on Medical Imaging 40: 2306–2317, 2021. https://doi.org/10.1109/TMI.2021.3075856

[3] A. Radmanesh, M. J. Muckley, T. Murrell, E. Lindsey, A. Sriram, F. Knoll, D. K. Sodickson, Y.W. Lui. Exploring the Acceleration Limits of Deep Learning Variational Network–based Two-dimensional Brain MRI, Radiology: Artificial Intelligence 4, 2022. https://doi.org/10.1148/ryai.210313

[4] R. Zhao, B. Yaman, Y. Zhang, R. Stewart, A. Dixon, F. Knoll, Z. Huang, Y. W. Lui, M. S. Hansen, M. P. Lungren. fastMRI+: Clinical Pathology Annotations for Knee and Brain Fully Sampled Multi-Coil MRI Data, Scientific Data 2022 9: 1–6, 2022. https://doi.org/10.1038/s41597-022-01255-z



Learning Fourier sampling schemes for MRI by density optimization

Alban Gossard1,2, Frédéric de Gournay1,2,3, Pierre Weiss1,2,4

1Institut de Mathématiques de Toulouse, France; 2University of Toulouse; 3INSA Toulouse; 4Centre de Biologie Intégrative (CBI), Laboratoire MCD

An MRI scanner roughly allows measuring the Fourier transform of the image representing a volume at user-specified locations. Finding an optimal sampling pattern and reconstruction algorithm is a longstanding issue. While Shannon and compressed sensing theories dominated the field over the last decade, a recent trend is to optimize the sampling scheme for a specific dataset. Early works investigated algorithms that find the best subset among a set of feasible trajectories. More recently, some works proposed to optimize the positions of the sampling locations continuously [3].

In this talk, we will first show that this optimization problem usually possesses a combinatorial number of spurious minimizers [1]. This effect can however be mitigated by using large datasets of signals and specific preconditioning techniques. Unfortunately, the dataset size, the costly reconstruction processes and the computation of the non-uniform Fourier transform make the problem computationally challenging. By optimizing the sampling density rather than the point locations, we show that the problem can be solved significantly faster while preserving competitive results [2].

[1] A. Gossard, F. de Gournay, P. Weiss. Spurious minimizers in non uniform Fourier sampling optimization, Inverse Problems 38: 105003, 2022.

[2] A. Gossard, F. de Gournay, P. Weiss. Bayesian Optimization of Sampling Densities in MRI, arXiv: 2209.07170, 2022.

[3] G. Wang, T. Luo, J.-F. Nielsen, D. C Noll, J. A Fessler. B-spline parameterized joint optimization of reconstruction and k-space trajectories (BJORK) for accelerated 2d MRI, IEEE Transactions on Medical Imaging 41: 2318–2330, 2022.


Acceleration strategies for Magnetic Resonance Spin Tomography in Time-Domain (MR‐STAT) reconstructions

Hongyan Liu, Oscar van der Heide, Stefano Mandija, Edwin Versteeg, Miha Fuderer, Cornelis A.T. van den Berg, Alessandro Sbrizzi

Computational Imaging Group for MRI Therapy & Diagnostics, Department of Radiotherapy, University Medical Center Utrecht, Utrecht, Netherlands

Magnetic Resonance Spin Tomography in Time‐Domain (MR-STAT) is an emerging quantitative magnetic resonance imaging technique which aims at obtaining multi-parametric tissue parameter maps (T1, T2, proton density, etc.) from short scans. It describes the relationship between the spatial-domain tissue parameters and the time-domain measured signal by using a comprehensive, volumetric forward model. The MR-STAT reconstruction is cast as a large-scale, ODE-constrained, nonlinear inversion problem, which is very challenging in terms of both computing time and memory.

In this presentation, I will talk about recent progress on acceleration strategies for MR-STAT reconstructions, for example using a neural network model for the solution of the underlying differential equation and applying the alternating direction method of multipliers (ADMM).


Learning Spatio-Temporal Regularization Parameter-Maps for Total Variation-Minimization Reconstruction in Dynamic Cardiac MRI

Andreas Kofler1, Fabian Altekrüger2, Fatima Antarou Ba3, Christoph Kolbitsch1, Evangelos Papoutsellis4,5, David Schote1, Clemens Sirotenko6, Felix Frederik Zimmermann1, Kostas Papafitsoros7

1Physikalisch-Technische Bundesanstalt, Braunschweig and Berlin, Germany; 2Humboldt-Universität zu Berlin, Department of Mathematics, Berlin, Germany; 3Technische Universität Berlin, Institute of Mathematics, Berlin, Germany; 4Finden Ltd, Rutherford Appleton Laboratory, Harwell Campus, Didcot, United Kingdom; 5Science and Technology Facilities Council, Harwell Campus, Didcot, United Kingdom; 6Weierstrass Institute for Applied Analysis and Stochastics, Berlin, Germany; 7School of Mathematical Sciences, Queen Mary University of London, United Kingdom

In dynamic cardiac Magnetic Resonance Imaging (MRI), one is interested in the assessment of the cardiac function based on a series of images which show the beating heart. Because the measurements typically take place during a breath-hold of the patient, it is desirable to accelerate the scan by undersampling the data, which yields an ill-posed inverse problem and requires the use of regularization methods. A prominent and successful example of a regularization method is the well-known total variation (TV)-minimization approach, which imposes sparsity of the image in its gradient domain. Thereby, the choice of the regularization parameter, which balances the data-fidelity term and the TV term, plays a crucial role. Moreover, having only a scalar regularization parameter which globally dictates the strength of the regularization seems to be sub-optimal for various reasons. Intuitively speaking, the strength of the TV term should be locally dependent on the content of the image. However, obtaining entire regularization parameter-maps for dynamic problems can be a challenging task. In this work, we propose a simple yet efficient approach for estimating patient-dependent spatio-temporal regularization parameter-maps for dynamic MRI based on TV-minimization. The overall approach is based on recent developments on algorithm unrolling using deep Neural Networks (NNs). A first NN estimates a spatio-temporal regularization parameter-map from an input image, which is then fixed and used to formulate a reconstruction problem which a second network – an unrolled scheme using the primal dual hybrid gradient method – approximately solves. The approach combines NNs with a well-established model-based variational method and yields an entirely interpretable and convergent reconstruction scheme which can be used to improve over TV with merely scalar regularization parameters.
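
A minimal NumPy sketch of a spatially weighted TV term with a per-pixel regularization parameter-map, assuming an anisotropic finite-difference discretization; the particular image and parameter-map values are illustrative and are not those produced by the networks described above.

```python
import numpy as np

def weighted_tv(u, lam_map):
    """Anisotropic TV with a per-pixel regularization parameter map.

    A scalar regularization parameter lam is the special case
    lam_map = lam * np.ones_like(u)."""
    dx = np.abs(np.diff(u, axis=0, append=u[-1:, :]))   # forward differences,
    dy = np.abs(np.diff(u, axis=1, append=u[:, -1:]))   # replicated boundary
    return np.sum(lam_map * (dx + dy))

# toy image: smooth background with one sharp feature
x = np.linspace(0, 1, 32)
u = np.outer(x, x)
u[12:20, 12:20] += 1.0

scalar = weighted_tv(u, 0.1 * np.ones_like(u))
# hypothetical parameter map: regularize less near the feature to preserve its edges
lam_map = 0.1 * np.ones_like(u)
lam_map[10:22, 10:22] = 0.01
assert weighted_tv(u, lam_map) < scalar
```

Lowering the map near edges reduces the penalty paid for sharp structures, which is the intuition behind content-dependent regularization strength.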

 
1:30pm - 3:30pmMS59 1: Advanced Reconstruction and Phase Retrieval in Nano X-ray Tomography
Location: VG2.103
Session Chair: Tim Salditt
Session Chair: Anne Wald
 

Resolution of reconstruction from discrete Radon transform data

Alexander Katsevich

University of Central Florida, United States of America

In this talk we give an overview of recent results on the analysis of resolution of reconstruction from discrete Radon transform data. We call our approach Local Resolution Analysis, or LRA. LRA yields simple formulas describing the reconstruction from discrete data in a neighborhood of the singularities of $f$ in a variety of settings. We call these formulas the Discrete Transition Behavior (DTB). The DTB function provides the most direct, fully quantitative link between the data sampling rate and resolution. This link is now established for a wide range of integral transforms, conormal distributions $f$, and reconstruction operators. Recently the LRA was generalized to the reconstruction of objects with rough edges. Numerical experiments demonstrate that the DTB functions are highly accurate even for objects with fractal boundaries.


Deep Learning for Reconstruction in Nano CT

Alice Oberacker1, Anne Wald2, Bernadette Hahn-Rigaud3, Tobias Kluth4, Johannes Leuschner4, Maximilian Schmidt4, Thomas Schuster1

1Saarland University, Germany; 2University of Göttingen, Germany; 3University of Stuttgart, Germany; 4University of Bremen, Germany

Tomographic X-ray imaging on the nano-scale is an important tool to visualise the structure of materials such as alloys or biological tissue. Due to the small scale on which the data acquisition takes place, small perturbances caused by the environment become significant and cause a motion of the object relative to the scanner during the scan.

An iterative reconstruction method called RESESOP-Kaczmarz was introduced in [1], which requires the motion to be estimated. However, since the motion is hard to estimate and its incorporation into the reconstruction process strongly increases the numerical effort, we investigate a learned version of RESESOP-Kaczmarz. Imaging data were programmatically simulated to train a deep network which unrolls the iterative image reconstruction of the original algorithm. The network therefore learns the back-projected image after a fixed number of iterations.

[1] S. E. Blanke et al. Inverse problems with inexact forward operator: iterative regularization and application in dynamic imaging, Inverse Problems 36: 124001, 2020.


Learned post-processing approaches for nano-CT reconstruction

Tom Lütjen1, Fabian Schönfeld1, Alice Oberacker2, Maximilian Schmidt1, Johannes Leuschner1, Tobias Kluth1, Anne Wald3

1University of Bremen, Germany; 2Saarland University, Germany; 3Institute for Numerical and Applied Mathematics, University of Göttingen, Germany

X-ray computed tomography on the nano-meter scale is a challenging imaging task. Tiny perturbations, such as environmental vibrations, technology imprecision or a thermal drift over time, lead to considerable deviations in the measured projections for nano-CT. Reconstruction algorithms must take into account the presence of these deviations in order to avoid strong artifacts. We study different learned post-processing approaches for nano-CT reconstruction on simulated datasets featuring relative object shifts and rotations. The initial reconstruction is provided by a classical method (FBP, Kaczmarz) or a deviation-aware method (Dremel method, RESESOP-Kaczmarz). Neural networks are then trained in a supervised fashion to post-process such initial reconstructions. We consider (i) a directly trained U-Net post-processing, and (ii) conditional normalizing flows, which learn an invertible mapping between a simple random distribution and the image reconstruction space, conditioned on the initial reconstruction. Normalizing flows yield not only a reconstruction, but also an estimate of the posterior density. As a simple indicator of reconstruction uncertainty one may evaluate the pixel-wise standard deviation over samples from the estimated posterior.
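
The uncertainty indicator mentioned last can be sketched as follows; here the posterior samples are random stand-ins for samples one would draw from a trained conditional normalizing flow, so the numbers are purely illustrative.

```python
import numpy as np

# hypothetical posterior samples, shape (n_samples, H, W); random stand-ins
# for draws from a conditional normalizing flow given one initial reconstruction
rng = np.random.default_rng(0)
samples = rng.normal(loc=1.0, scale=0.05, size=(64, 8, 8))
samples[:, 4, 4] += rng.normal(scale=0.5, size=64)   # one highly uncertain pixel

reconstruction = samples.mean(axis=0)   # point estimate: posterior mean
uncertainty = samples.std(axis=0)       # pixel-wise standard deviation
assert uncertainty[4, 4] > uncertainty[0, 0]
```

High values in the `uncertainty` map flag pixels where the estimated posterior disagrees across samples.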


X-ray phase and dark-field retrieval from propagation-based images, via the Fokker-Planck Equation

Kaye Susannah Morgan1, Thomas Leatham1, Mario Beltran1, Jannis Ahlers1, Samantha Alloo2, Marcus Kitchen1, Konstantin Pavlov2, David Paganin1

1School of Physics and Astronomy, Monash University, Australia; 2School of Physical and Chemical Sciences, University of Canterbury, New Zealand

Conventional x-ray imaging, which measures the intensity of the transmitted x-ray wavefield, is extremely useful when imaging strongly-attenuating samples like bone, but of limited use when imaging weakly-attenuating samples like the lungs or brain. In recent years, it has been seen that the phase of the transmitted x-ray wavefield contains useful information about these weakly-attenuating samples; however, it is not possible to directly measure x-ray phase. This necessitates the use of mathematical models that relate the observed x-ray intensity, which is measurable, to the x-ray phase. These models can then be solved to retrieve how the sample has changed the x-ray phase; the inverse problem of phase retrieval. A widely adopted example is the use of the Transport of Intensity Equation (TIE) to retrieve x-ray phase from an intensity image collected some distance downstream of the sample, a distance at which sample-induced phase variations have resulted in self-interference of the wave and changed the local observed intensity [1]. The use of a single-exposure ‘propagation-based’ set-up like this, where no optics (gratings, crystals etc.) are required, makes for a robust and simple x-ray imaging set-up, which is also compatible with time-sequence imaging.

In this talk, we present an extension to the TIE, the X-ray Fokker-Planck Equation [2, 3], and associated novel retrieval algorithms for extracting x-ray phase and dark-field [5-7] from propagation-based images.

The TIE describes how x-ray intensity evolves with propagation from the sample to a downstream camera, for a wavefield with given phase and intensity. The X-ray Fokker-Planck Equation adds an additional term that incorporates how dark-field effects from the sample will be seen in the observed intensity [2,3]. X-ray dark-field effects are present when the sample contains microstructures that are not directly resolved, but which scatter the wavefield in such a way as to locally reduce image contrast. Examples of dark-field-inducing microstructure include powders, carbon fibres and the air sacs in the lungs. Until very recently [4], it was considered that crystals or gratings were required optical elements in the experimental set-up in order to capture a dark-field image.

Using the X-ray Fokker-Planck Equation, we have derived several novel algorithms that allow dark-field retrieval from propagation-based images. Because phase and dark-field effects evolve differently with propagation, images captured at two different sample-to-detector distances allow the separation and retrieval of dark-field images and phase images [5]. Alternatively, dark-field and phase images can be retrieved by looking at sample-induced changes in a patterned illumination via a Fokker-Planck approach, either using a single short exposure [6], or by scanning the pattern across the sample to access the full spatial resolution of the detector [7]. Incorporating dark-field effects in the TIE not only allows a dark-field image to be extracted from propagation-based images, but also increases the potential spatial resolution of the retrieved phase image. These propagation-based Fokker-Planck approaches are best suited to small samples (e.g. under 10 cm), so may provide avenues for fast and simple phase and dark-field micro/nano-tomography.

[1] D. Paganin, S. C. Mayo, T. E. Gureyev, P. R. Miller, S. W. Wilkins. Simultaneous phase and amplitude extraction from a single defocused image of a homogeneous object, Journal of Microscopy 206(1): 33-40, 2002.

[2] K. S. Morgan, D. M. Paganin. Applying the Fokker–Planck equation to grating-based x-ray phase and dark-field imaging, Scientific Reports 9(1): 17465, 2019.

[3] D.M. Paganin, K. S. Morgan. X-ray Fokker–Planck equation for paraxial imaging, Scientific Reports, 9(1): 17537, 2019.

[4] T.E. Gureyev, D.M. Paganin, B. Arhatari, S. T. Taba, S. Lewis, P. C. Brennan, H. M. Quiney. Dark-field signal extraction in propagation-based phase-contrast imaging, Physics in Medicine & Biology, 65(21): 215029, 2020.

[5] T. A. Leatham, D. M. Paganin, K. S. Morgan. X-ray dark-field and phase retrieval without optics, via the Fokker–Planck equation, IEEE Transactions on Medical Imaging (in press), 2023.

[6] M. A. Beltran, D. M. Paganin, M. K. Croughan, K. S. Morgan. Fast implicit diffusive dark-field retrieval for single-exposure, single-mask x-ray imaging, Optica, 10(4): 422-429, 2023.

[7] S. J. Alloo, K. S. Morgan, D. M. Paganin, K. M. Pavlov. Multimodal intrinsic speckle-tracking (MIST) to extract images of rapidly-varying diffuse X-ray dark-field, Scientific Reports, 13(1): 5424, 2023.
 
3:30pm - 4:00pmC9: Coffee Break
Location: ZHG Foyer
4:00pm - 6:00pmCT13: Contributed talks
Location: VG2.105
Session Chair: Martin Halla
 

Joint Born Inversion of Acoustic and Electromagnetic Wave fields

Anne V. de Wit2, Tristan van Leeuwen1, Felix Lucka1, Dirk J. Verschuur2, Koen W.A. van Dongen2

1Centrum Wiskunde & Informatica, Netherlands, The; 2Delft University of Technology, Delft, The Netherlands

Imaging by inversion of acoustic or electromagnetic wave fields has applications in a wide variety of areas, such as non-destructive testing, biomedical applications, and geophysical exploration. However, each modality suffers from its own application-specific limitations with respect to resolution and sensitivity. To exploit the advantages of both imaging modalities, methods to combine them include image fusion, the use of spatial priors, and the application of joint or multi-physics inversion methods. In this work, a joint inversion algorithm based on structural similarity is presented. In particular, a joint Born inversion (BI) algorithm has been developed and tested successfully. With standard BI, an error functional based on the L2-norm of the mismatch between the measured and modeled wave field is minimized iteratively. To accomplish joint BI, we extend the standard error functional with an additional penalty term based on the L2-norm of the difference between the gradients of the acoustic and electromagnetic contrasts.
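
A hedged sketch of the extended error functional: the linearized Born operators are replaced by generic matrices `A_ac` and `A_em` (hypothetical names, not the authors' code), and the contrast gradients are taken by finite differences.

```python
import numpy as np

def joint_objective(c_ac, c_em, d_ac, d_em, A_ac, A_em, gamma):
    """Joint Born inversion functional (sketch): two L2 data-misfit terms plus
    a structural coupling penalty ||grad(c_ac) - grad(c_em)||_2^2."""
    mis_ac = np.sum(np.abs(A_ac @ c_ac.ravel() - d_ac) ** 2)
    mis_em = np.sum(np.abs(A_em @ c_em.ravel() - d_em) ** 2)
    g_ac = np.stack(np.gradient(c_ac))     # finite-difference gradients
    g_em = np.stack(np.gradient(c_em))
    return mis_ac + mis_em + gamma * np.sum((g_ac - g_em) ** 2)

# toy check: identical contrasts and consistent data make the functional vanish
rng = np.random.default_rng(1)
c = rng.standard_normal((8, 8))
A = rng.standard_normal((20, 64))
d = A @ c.ravel()
val = joint_objective(c, c, d, d, A, A, gamma=1.0)
assert np.isclose(val, 0.0)
```

The penalty term is zero whenever the two contrasts share the same spatial structure, which is what drives the structural similarity between the reconstructions.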


Imaging of Gravity Dam-Foundation contact by a shape optimization method using non-destructive seismic waves

Mohamed Aziz Boukraa1,2, Lorenzo Audibert1,2, Marcella Bonazzoli1, Houssem Haddar1, Denis Vautrin2

1INRIA, France; 2EDF R&D, France

The knowledge of concrete-rock foundation interface is a key factor to evaluate the stability of gravity dams as well as understanding their mechanical behavior under water pressure. Being an inaccessible part of the structure, the exploration of this region is a complex procedure. Coring techniques can be used, but they only give limited information about a specific location and can be damaging in some situations. Hence the usefulness of non-destructive seismic waves.

We model several non-destructive seismic waves and we propose an inversion scheme for obtaining the shape of the interface. Our approach consists in solving an inverse problem using “full-wave inversion” type techniques from wave measurements simulated by the finite element method. The inverse problem is modeled as an optimization of a least-squares cost functional with perimeter regularization, associated with sparse data collected on the dam wall. We model different types of measurements, such as elastic waves when the source is on the dam wall or acoustic waves when the source is in the water. Moreover, in order to numerically model the radiation conditions in the rock and in the water we employ PML techniques.

We present some validating results on realistic experiments. We demonstrate in particular how our proposed methodology is capable of accurately reconstructing the interface while classical reverse time migration techniques fail. We then discuss sensitivity with respect to the position and the number of sensors, the wave number, as well as the propagation medium (for example, the shape of the dam) and the properties of the materials.


Structure inversions for sound speed differences in solar-like stars

Lynn Buchele1,2, Earl Bellinger3, Sarbani Basu4, Saskia Hekker1,2

1Heidelberg Institute for Theoretical Studies, Heidelberg, Germany; 2University of Heidelberg, Heidelberg, Germany; 3Max Planck Institute for Astrophysics, Garching, Germany; 4Yale University, New Haven, CT, USA

Data from the Kepler Space telescope have allowed stellar astrophysicists to measure the frequencies of oscillation modes in many stars. These frequencies carry information about the internal structure of the stars, providing ways to test stellar theory. One method, called structure inversions, seeks to infer differences in internal sound speed between a star and its model using the differences in oscillation frequencies. While this method was used extensively to study the structure of the Sun, the number of other stars studied with structure inversions remains low. In the case of main-sequence stars without a convective core, sound speed inversion results are currently only available for two stars other than the Sun. I will present the results of structure inversions for about 10 solar-like stars and discuss what these results imply about our current understanding of stellar structure.



Detection of geophysical structures using optical flow methodologies for potential data

Jose Antonio Ramoz León, Emilia Fregoso Becerra, Abel Palafox González

University of Guadalajara, Mexico

Subsurface exploration, as part of the development of the human environment, focuses on the location of water and mineral deposits, oil, gas, and geological structures, among others. Geophysical methods provide information about natural resources, as well as about man-made structures, namely archaeological structures, from the analysis of their physical properties, for instance the density of a source body.

Euler’s homogeneity equation for geophysical potential data is given by:

\begin{equation*} (x-x_0)\frac{\partial T}{\partial x} + (y-y_0) \frac{\partial T}{\partial y} + (z - z_0)\frac{\partial T}{\partial z} = n(B - T), \end{equation*}

where $(x_0,y_0,z_0 )$ refers to the top of a source object, $(x,y,z)$ refers to the position of the observed potential field $T$, $n$ is the structural index, which depends on the source geometry, and $B$ is the regional value of the total field [1].

The inverse problem we are interested in consists in locating a set of points $(x_0,y_0,z_0 )$ on the top of the source from observed potential field data. In the classical Euler deconvolution strategy, this is achieved by solving Euler's homogeneity equation shown above. However, this strategy struggles to estimate the vertical component $z_0$ of the points composing the top of the source, a limitation that is amplified when multiple source objects are present.

In the area of image processing, in particular in optical flow, the movement of pixels between two frames is analyzed. The spatial and temporal displacements are assumed to satisfy the Lambertian (brightness constancy) assumption: pixel intensity is preserved after displacement. This assumption results in a differential equation very similar to Euler's homogeneity equation, the optical flow equation:

$$T_x u + T_y v + T_z w = 0, $$

where the subscripts indicate spatial-temporal derivatives and $u$, $v$, $w$ are the components of the displacement field. There exist methodological similarities between the standard Euler deconvolution method and standard optical flow methods such as Lucas-Kanade. Thus, our hypothesis is that the improving methods applicable to optical flow, such as the Horn and Schunck method [2], will benefit the Euler deconvolution method analogously. By reformulating the Euler deconvolution strategy to be similar to the Horn and Schunck method, the position of the top of the source is estimated by minimizing the energy functional:

\begin{equation*} \begin{aligned} E_{HSED}(u,v,w)=\int \int \int_{}^{}&(T_x u +T_y v +T_z w - n(B - T))^2 + \Big(\lambda_u(u_x^2+u_y^2+u_z^2)\\ & + \lambda_v(v_x^2+v_y^2+v_z^2) + \lambda_w(w_x^2+w_y^2+w_z^2)\Big) \,\mathrm{d}x \,\mathrm{d}y \,\mathrm{d}z , \end{aligned} \end{equation*}

where $u$, $v$ and $w$ are the unknowns, $\lambda_u$, $\lambda_v$ and $\lambda_w$ are regularization parameters, and the subscripts indicate partial differentiation. Note that the first term in the integral corresponds to Euler’s equation, while the regularization terms impose smoothness on the source position reconstruction.

In this work we will show the results obtained after applying the methodology to synthetic 3D subsurface models. We present evidence that the horizontal location of the sources provided by the Horn and Schunck based formulation is comparable to results obtained by the standard Euler deconvolution strategy, with the advantage that the depth of the top of the subsurface source is properly estimated.
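
For reference, the classical Euler deconvolution strategy that the proposed formulation is compared against can be sketched as a least-squares solve over a data window; the synthetic monopole source (structural index $n = 1$, $B = 0$) used to check it is an illustrative assumption.

```python
import numpy as np

def euler_window_solve(x, y, z, T, Tx, Ty, Tz, n):
    """Classical Euler deconvolution in one data window (least squares).

    Rearranging Euler's homogeneity equation gives, at each sample,
        x0*Tx + y0*Ty + z0*Tz + n*B = x*Tx + y*Ty + z*Tz + n*T,
    a linear system in the unknowns (x0, y0, z0, B)."""
    A = np.column_stack([Tx, Ty, Tz, np.full_like(T, n)])
    b = x * Tx + y * Ty + z * Tz + n * T
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol  # (x0, y0, z0, B)

# synthetic monopole T = 1/r at (2, -1, 3), observed on the surface z = 0
x0, y0, z0 = 2.0, -1.0, 3.0
rng = np.random.default_rng(2)
x, y = rng.uniform(-5, 5, (2, 50))
z = np.zeros(50)
r = np.sqrt((x - x0) ** 2 + (y - y0) ** 2 + (z - z0) ** 2)
T = 1.0 / r
Tx = -(x - x0) / r**3   # analytic derivatives of 1/r
Ty = -(y - y0) / r**3
Tz = -(z - z0) / r**3
est = euler_window_solve(x, y, z, T, Tx, Ty, Tz, n=1)
assert np.allclose(est[:3], [x0, y0, z0], atol=1e-6)
```

The Horn and Schunck reformulation above replaces this pointwise solve by a global minimization with smoothness regularization, which is what improves the depth estimate.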

[1] D. T. Thompson. EULDPH: A new technique for making computer-assisted depth estimates from magnetic data, Geophysics 47: 31–37, 1982.

[2] B. K. P. Horn, B. G. Schunck. Determining Optical Flow, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, 1981.
 
4:00pm - 6:00pmCT14: Contributed talks
Location: VG2.106
Session Chair: Housen Li
 

On accuracy and existence of approximate decoders for ill-posed inverse problems

Nina Maria Gottschling1, Paolo Campodonico3, Vegard Antun2, Anders C. Hansen3

1MF-DAS OP - EO Data Science, DLR, Oberpfaffenhofen, Germany; 2Department of Mathematics, University of Oslo; 3Department of Applied Mathematics and Theoretical Physics, University of Cambridge

Based on work by Cohen, Dahmen and DeVore [1] and Bourrier et al. [2], we propose a framework that highlights the importance of knowing the measurement model $F$ and the model class $\mathcal{M}_1$ when solving ill-posed (non-)linear inverse problems, by introducing the concept of kernel size. Previous work has assumed that the problem is injective on the model class $\mathcal{M}_1$; we obviate the need for this assumption. Thus, the framework is applicable in Deep Learning (DL) based settings where $\mathcal{M}_1$ can be an arbitrary data set.

$\textbf{Setting and initial main result}$ Let $\mathcal{X}$, $\mathcal{Y}$ and $\mathcal{Z}$ be non-empty sets, $\mathcal{M}_1 \subset \mathcal{X}$ be the $\textit{model class}$, $\mathcal{E}\subset \mathcal{Z}$ be the $\textit{noise class}$ and $F \colon \mathcal{M}_1\times \mathcal{E} \to \mathcal{Y}$ be the $\textit{forward map}$. An inverse problem has the form: $$ \text{Given noisy measurements } y = F(x,e) \text{ of } x \in \mathcal{M}_1\text{ and } e \in \mathcal{E}, \text{ recover } x.\quad (1) $$ Here $e$ represents the model noise and $x$ the signal (function or vector) we wish to recover or approximate. This also includes linear cases with additive noise, where $\mathcal{Y}=\mathcal{Z}$, $A \colon \mathcal{X} \to \mathcal{Y}$ is a linear map between vector spaces and the forward map is $$ F(x,e) = Ax+e. \quad (2) $$

To measure accuracy we equip $\mathcal{X}$, $\mathcal{Y}$ and $\mathcal{Z}$ with metrics $d_{\mathcal{X}}$, $d_{\mathcal{Y}}$ and $d_{\mathcal{Z}}$, such that the induced topology is second countable (it admits a countable base). We assume that the metrics $d_\mathcal{X}$ and $d_\mathcal{Z}$ satisfy the Heine-Borel property, i.e., all closed and bounded sets are compact, and that for every $y \in \mathcal{M}_2^{\mathcal{E}}$ the $\textit{feasible set}$ $$ \pi_1(F^{-1}(y)) = \{x \in \mathcal{M}_1: \exists e\in\mathcal{E} \text{ s.t. } F(x,e)=y\} $$ is compact. Under these assumptions we define the optimality constant as the smallest error any reconstruction map can achieve, and study both the case of worst-case noise (critical when there is risk of adversarial attacks) and average random noise (more typical in practical applications). We prove that the optimality constant is bounded from below and from above (sharply) by the $\textit{kernel size}$, which is a quantity intrinsic to the inverse problem. For simplicity, we restrict to the best worst-case reconstruction error here. As we consider set-valued reconstruction maps $\varphi\colon \mathcal{M}_2^{\mathcal{E}} \rightrightarrows \mathcal{X}$, we use the Hausdorff distance $d_{\mathcal{X}}^H(\cdot, \cdot)$.

$\textbf{Definition: Optimality constant under worst-case noise}$ The $\textit{optimality constant}$ of $(1)$ is $$ c_{\mathrm{opt}}(F,\mathcal{M}_1, \mathcal{E}) = \inf_{\varphi\colon \mathcal{M}_2^{\mathcal{E}} \rightrightarrows \mathcal{X}} \sup_{x\in \mathcal{M}_1} \sup_{e \in \mathcal{E}} \ d_{\mathcal{X}}^H(x, \varphi(F(x,e))) $$ A mapping $\varphi\colon \mathcal{M}_2^{\mathcal{E}} \rightrightarrows \mathcal{X}$ that attains such an infimum is called an $\textit{optimal map}$.

$\textbf{Definition: Kernel size with worst-case noise}$ The kernel size of the problem $(1)$ is $$ \operatorname{kersize}(F,\mathcal{M}_1, \mathcal{E}) = \sup_{\substack{ (x,e),(x',e')\in\mathcal{M}_1\times\mathcal{E} \text{ s.t. }\\ F(x,e) = F(x',e')} } d_{\mathcal{X}}(x,x'). $$

$\textbf{Theorem: Worst case optimality bounds}$ Under the stated assumptions, the following holds.

$(i)$ We have that $$ \operatorname{kersize}(F,\mathcal{M}_1, \mathcal{E})/2 \leq c_{\mathrm{opt}}(F,\mathcal{M}_1, \mathcal{E}) \leq \operatorname{kersize}(F,\mathcal{M}_1, \mathcal{E}). \quad (3) $$

$(ii)$ Moreover, the map

$$ \Psi(y) = \operatorname*{argmin}_{z \in \mathcal{X}}\sup_{(x,e) \in F^{-1}(y)} d_{\mathcal{X}}(x,z) = \operatorname*{argmin}_{z \in \mathcal{X}} d_{\mathcal{X}}^H(z, \pi_1(F^{-1}(y))), $$ has non-empty, compact values and it is an optimal map.

This illustrates a fundamental limit for the inverse problem $(1)$. Indeed, one would hope to find a solution for $(1)$ whose error is as small as possible. The lower bound in $(3)$ shows that there is a constant intrinsic to the problem -- the kernel size -- such that no reconstruction error can be made smaller than this constant for all possible choices of $x \in \mathcal{M}_1$. Note that the above theorem is extended to the average reconstruction error in our work.
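
For finite model and noise classes, the kernel size defined above can be computed by brute force directly from its definition; the toy linear problem below, with a forward map whose matrix has a nontrivial kernel, is purely illustrative.

```python
import itertools
import numpy as np

def kernel_size(F, M1, E, d):
    """Brute-force kernel size for finite model class M1 and noise class E:
    sup of d(x, x') over pairs (x, e), (x', e') with F(x, e) == F(x', e')."""
    best = 0.0
    for (x, e), (xp, ep) in itertools.product(itertools.product(M1, E), repeat=2):
        if np.allclose(F(x, e), F(xp, ep)):
            best = max(best, d(x, xp))
    return best

# toy linear problem F(x, e) = A x + e, where A kills the second coordinate
A = np.array([[1.0, 0.0]])
F = lambda x, e: A @ x + e
M1 = [np.array([0.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 0.0])]
E = [np.array([0.0])]                   # noiseless case
d = lambda x, xp: np.linalg.norm(x - xp)
# (0,0) and (0,1) produce the same measurement, so the kernel size is 1
assert kernel_size(F, M1, E, d) == 1.0
```

By the theorem, no reconstruction map for this toy problem can achieve a worst-case error below $1/2$, and an optimal map achieves at most $1$.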

$\textbf{Background and related work}$

Linear inverse problems $(2)$ arise in image reconstruction for scientific, industrial and medical applications [3-7]. Traditional image reconstruction methods are model-based, and they have also been studied in a Bayesian setting [8]. Less studied are non-linear inverse problems, which appear in geophysics [9,10] and in inverse scattering problems [11,12]. Accuracy and error bounds have been studied in [13]. In many cases, today, DL-based methods obtain higher accuracy than traditional methods, and an overview is given in [14-17]. The key point of DL based methods for solving inverse problems in imaging is that given enough data a neural network can be trained to approximate a decoder to solve $(2)$.

[1] A. Cohen, W. Dahmen, R. DeVore. Compressed sensing and best k-term approximation. Journal of the American Mathematical Society, 22(1):211–231, 2009.

[2] A. Bourrier, M. E. Davies, T. Peleg, P. Perez, R. Gribonval. Fundamental performance limits for ideal decoders in high-dimensional linear inverse problems. IEEE Transactions on Information Theory, 60(12):7928–7946, 2014.

[3] C. A. Bouman. Foundations of Computational Imaging: A Model-Based Approach. SIAM, Philadelphia, PA, 2022.

[4] H. H. Barrett, K. J. Myers. Foundations of image science. John Wiley & Sons, 2013.

[5] C. L. Epstein. Introduction to the mathematics of medical imaging. SIAM, 2007.

[6] P. Kuchment. The Radon transform and medical imaging. SIAM, 2013.

[7] F. Natterer, F. Wubbeling. Mathematical methods in image reconstruction. SIAM, 2001.

[8] A. M. Stuart. Inverse problems: A Bayesian perspective. Acta numerica, 19:451–559, 2010.

[9] C. G. Farquharson, D. W. Oldenburg. Non-linear inversion using general measures of data misfit and model structure. Geophysical Journal International, 134(1):213–227, 1998.

[10] C. G. Farquharson, D. W. Oldenburg. A comparison of automatic techniques for estimating the regularization parameter in non-linear inverse problems. Geophysical Journal International, 156(3):411–425, 2004.

[11] J. L. Mueller, S. Siltanen. Linear and nonlinear inverse problems with practical applications. SIAM, 2012.

[12] M. T. Bevacqua, L. Crocco, L. Di Donato, T. Isernia. An algebraic solution method for nonlinear inverse scattering. IEEE Transactions on Antennas and Propagation, 63(2):601–610, 2014.

[13] N. Keriven, R. Gribonval. Instance optimal decoding and the restricted isometry property. In Journal of Physics: Conference Series, IOP Publishing:012002. 2018.

[14] M. T. McCann, M. Unser. Biomedical image reconstruction: From the foundations to deep neural networks. Foundations and Trends in Signal Processing, 13(3):283–359, 2019.

[15] S. Arridge, P. Maass, O. Öktem, C.-B. Schönlieb. Solving inverse problems using data-driven models. Acta Numer., 28:1–174, 2019.

[16] G. Wang, J. C. Ye, K. Mueller, J. A. Fessler. Image reconstruction is a new frontier of machine learning. IEEE Trans. Med. Imaging, 37(6):1289–1296, 2018.

[17] H. Ben Yedder, B. Cardoen, G. Hamarneh. Deep learning for biomedical image reconstruction: A survey. Artificial intelligence review, 54(1):215–251, 2021.


Inverse problems and deep learning in epidemiology and social processes

Olga Krivorotko1,2, Sergey Kabanikhin1, Viktoriya Petrakova3, Nikolay Zyatkov1

1Sobolev Institute of Mathematics of SB RAS, Russian Federation; 2Novosibirsk State University, Novosibirsk, Russian Federation; 3Institute of Computational Modeling SB RAS, Krasnoyarsk, Russian Federation

Mathematical modeling of infectious disease propagation is strongly connected with social and economic processes. Different types of mathematical models (time-series, differential, agent-based, and mean-field-game models) are investigated to formulate adequate descriptions. The model parameters (contagiousness, probability of progression to severe disease, mortality, fraction of asymptomatic carriers, probability of testing, and others) are, as a rule, unknown and should be estimated by solving inverse problems. These inverse problems are ill-posed, i.e. their solutions are unstable and/or non-unique due to incomplete and noisy input data (epidemiological statistics, socio-economic characteristics, etc.). Therefore we use regularization ideas and special techniques to achieve appropriate estimates of the model parameters.

The first step in constructing a mathematical model of epidemiological and social propagation consists in data processing using machine learning methods, which helps to identify key characteristics of both processes. For detailed inverse-problem data, time-series, differential (ordinary or partial differential equations, the latter to account for migration) and agent-based models are combined to describe the epidemiological situation in a given region under the influence of social processes [1, 2]. Alternatively, a mean-field approach is used for optimal control of epidemiological and social processes; it combines a Kolmogorov-Fokker-Planck (KFP) equation for the propagation of the density of a representative agent of a given epidemiological status with a Hamilton-Jacobi-Bellman (HJB) equation for the optimal control strategy [3]. In [3] it was shown that an incorrect assessment of current social processes in the population leads to significant errors in predicting morbidity. The search for a correct mathematical description of the socio-epidemiological situation leads to the formulation of an inverse mean-field problem, in which factors affecting the development of the incidence (for example, antiviral restrictions) can be estimated from epidemiological statistics and the rate of increase in the number of infected [5].

The second step is based on sensitivity-based identifiability analysis using a Bayesian approach, the Monte Carlo method, and singular value decomposition [4]. It orders the parameters from the most to the least sensitive and reduces the bounds of variation of the unknown parameters for the subsequent development of the regularization algorithm.

The inverse problems consist in (1) identification of the less sensitive epidemic parameters for models based on differential equations and, after that, (2) identification of the more sensitive epidemic parameters for agent-based models using the known approximations of the other parameters and process data. The inverse problems are formulated as least-squares minimization problems solved by a combination of global (Tree-structured Parzen Estimator, tensor optimization, differential evolution) and gradient-type optimization methods with a priori information about the inverse problem solution.

Deep neural networks and generative adversarial networks for data processing and forecasting of time-series data, as well as an alternative approach to the direct and inverse problem solutions, are applied and investigated. The key features of epidemiological and social processes and Wasserstein metrics are used to construct the neural networks.

The numerical results demonstrate the effectiveness of combining models and approaches for COVID-19 propagation in different regions, taking socio-economic processes into account, and for optimal control programs of epidemic propagation in different economies.
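As a minimal illustration of least-squares parameter identification of the kind described above (a toy sketch, not the authors' models: the SIR system, the synthetic data, and the exhaustive grid search standing in for the Tree-structured Parzen Estimator or differential evolution are all our own assumptions), one can recover transmission and recovery rates from a noiseless incidence curve:

```python
def sir(beta, gamma, days, s0=0.99, i0=0.01):
    # forward Euler for the SIR model (daily step); returns the infected fraction
    s, i, out = s0, i0, []
    for _ in range(days):
        s, i = s - beta * s * i, i + beta * s * i - gamma * i
        out.append(i)
    return out

def misfit(params, data):
    # least-squares data misfit, the functional minimized in the inverse problem
    return sum((m - d) ** 2 for m, d in zip(sir(*params, days=len(data)), data))

# synthetic "observed" incidence generated with known parameters
true_params = (0.4, 0.1)
data = sir(*true_params, days=60)

# naive exhaustive search on a parameter grid (a stand-in for the global
# optimizers named above); the noiseless data are matched exactly
best = min(((b / 50, g / 50) for b in range(1, 51) for g in range(1, 26)),
           key=lambda p: misfit(p, data))
print(best)  # (0.4, 0.1)
```

With noisy data the zero-misfit minimizer disappears and regularization, as discussed in the abstract, becomes essential.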

The work is supported by the Mathematical Center in Akademgorodok under the agreement No. 075-15-2022-281 with the Ministry of Science and Higher Education of the Russian Federation.

[1] O. Krivorotko, M. Sosnovskaia, I. Vashchenko, C. Kerr, D. Lesnic. Agent-based modeling of COVID-19 outbreaks for New York state and UK: parameter identification algorithm. Infectious Disease Modelling. 7: 30-44, 2022.

[2] O.I. Krivorotko, N.Y. Zyatkov. Data-driven regularization of inverse problem for SEIR-HCD model of COVID-19 propagation in Novosibirsk region. Eurasian Journal of Mathematical and Computer Applications. 10(1): 51-68, 2022.

[3] V. Petrakova, O. Krivorotko. Mean field game for modeling of COVID-19 spread. Journal of Mathematical Analysis and Applications. 514: 126271, 2022.

[4] O.I. Krivorotko, S.I. Kabanikhin, M.I. Sosnovskaya, D.V. Andornaya. Sensitivity and identifiability analysis of COVID-19 pandemic models. Vavilovskii Zhurnal Genetiki i Selektsii. 25(1): 82-91, 2021.

[5] L. Ding, W. Li, S. Osher et al. A Mean Field Game Inverse Problem. Journal of Scientific Computing. 92:7, 2021.


Approximation with neural networks in Sobolev setting

Ahmed Abdeljawad

RICAM, Austria

Solutions of evolution equations generally lie in certain Bochner-Sobolev spaces, in which the solutions may have regularity and integrability properties in the time variable that differ from those in the space variables. In our paper, we therefore developed a framework showing that deep neural networks can approximate Sobolev-regular functions with respect to Bochner-Sobolev norms. In this talk we will present the power of using the so-called Rectified Cubic Unit (ReCU) as an activation function in the networks. This activation function allows us to deduce approximation results for the networks while avoiding issues caused by the non-regularity of the most commonly used activation function, the Rectified Linear Unit (ReLU). This is joint work with Philipp Grohs.

[1] A. Abdeljawad, P. Grohs. Approximations with deep neural networks in Sobolev time-space. Analysis and Applications 20.03 (2022): 499-541.
 
4:00pm - 6:00pmMS07 2: Regularization for Learning from Limited Data: From Theory to Medical Applications
Location: VG1.101
Session Chair: Markus Holzleitner
Session Chair: Sergei Pereverzyev
Session Chair: Werner Zellinger
 

Imbalanced data sets in a magnetic resonance imaging case study of preterm neonates: a strategy for identifying informative variables

Sergiy Pereverzyev Jr.

Medical University of Innsbruck, Austria

Background and objective: Variable selection is the process of identifying relevant data characteristics (features, biomarkers) that are predictive of future outcomes. There is an arsenal of methods addressing the variable selection problem, but the available techniques may not work on so-called imbalanced data sets, which contain mainly examples of the same outcome. Retrospective clinical data often exhibit such imbalance. This is the case for neuroimaging data derived from magnetic resonance images of prematurely born infants, used in an attempt to identify prognostic biomarkers of possible neurodevelopmental delays, which is the main objective of the present study.

Methods: The variable selection algorithm used in our study scores combinations of variables according to the performance of prediction functions involving these variables. The considered functions are constructed by kernel ridge regression with various input variables as regressors. As regression kernels we used universal Gaussian kernels and kernels adjusted for the underlying data manifolds. The prediction functions were trained using data randomly extracted from the available clinical data sets. Prediction performance was measured in terms of the area under the Receiver Operating Characteristic curve, and the maximum performance exhibited by the prediction functions was averaged over simulations. The resulting average value is assigned as the performance index of the considered combination of input variables, and the variables attaining the largest index value are selected as the informative ones.

Results: The proposed variable selection strategy was applied to two retrospective clinical datasets containing data of preterm infants who received magnetic resonance imaging of the brain at term-equivalent age and a developmental evaluation at around 12 months corrected age. The first dataset contains data of 94 infants, 13 of whom were later classified as delayed in motor skills. The second set contains data of 95 infants, 7 of whom were later classified as cognitively delayed. The application of the proposed strategy clearly indicates 2 metabolite ratios and 6 diffusion tensor imaging parameters as predictive of motor outcome, as well as 2 metabolite ratios and 2 diffusion tensor imaging parameters as predictive of cognitive outcome.

Conclusion: The proposed strategy demonstrates its ability to extract meaningful variables from imbalanced clinical datasets. Its application provides independent evidence supporting several previous studies that separately suggested different biomarkers. The application also shows that a predictor involving several informative variables can exhibit better performance than single-variable predictors.


On Approximation for Multi-Source Domain Adaptation in the Space of Copulas

Priyanka Roy1,2, Bernhard Moser2, Werner Zellinger3, Susanne Saminger-Platz1

1Institute for Mathematical Methods in Medicine and Data Based Modeling, Johannes Kepler University Linz, Linz, Austria; 2Software Competence Center Hagenberg, Hagenberg, Austria; 3Johann Radon Institute for Computational and Applied Mathematics, Austrian Academy of Sciences, Linz, Austria

The set of $d$-copulas $(d \geq 2)$, denoted by $\mathcal{C}_d$, is a compact subspace of $(\Xi(\mathbb{I}^d), d_{\infty})$, the space of all continuous functions with domain $\mathbb{I}^d$, where $\mathbb{I}$ is the unit interval and $d_{\infty}(f_1,f_2)=\sup_{\mathbf{u} \in \mathbb{I}^d}|f_1(\mathbf{u})-f_2(\mathbf{u})|$. A function $C:\mathbb{I}^d\to \mathbb{I}$ is a $d$-copula if, and only if, the following conditions hold:

(i) $C(u_1,\dots,u_d)=0$ whenever $u_j=0$ for at least one index $j\in\{1,\dots,d\}$,

(ii) when all the arguments of $C$ are equal to $1$, except possibly the $j$-th one, then $$C(1,\dots,1,u_j,1,\dots,1)=u_j,$$

(iii) $C$ is $d$-increasing, i.e., $\forall~ ]\mathbf{a}, \mathbf{b}] \subseteq \mathbb{I}^d$, $V_C(]\mathbf{a},\mathbf{b}]):=\sum_{\mathbf{v} \in \operatorname{ver}(]\mathbf{a},\mathbf{b}])}\operatorname{sign}(\mathbf{v})\,C(\mathbf{v}) \geq 0$, where $\operatorname{ver}(]\mathbf{a},\mathbf{b}])$ denotes the set of vertices of the rectangle, $\operatorname{sign}(\mathbf{v})=1$ if $v_j=a_j$ for an even number of indices, and $\operatorname{sign}(\mathbf{v})=-1$ if $v_j=a_j$ for an odd number of indices.

Note that every copula $C\in \mathcal{C}_d$ induces a $d$-fold stochastic measure $\mu_{C}$ on $(\mathbb{I}^d, \mathcal{B}(\mathbb{I})^d)$ defined on the rectangles $R = ]\mathbf{a}, \mathbf{b}]$ contained in $\mathbb{I}^d$, by $$\mu_{C}(R):=V_{C}(]\mathbf{a}, \mathbf{b}]).$$
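For a concrete check of condition (iii) and of the induced measure (an illustrative sketch; the independence copula and the chosen rectangle are our own toy assumptions, not from the talk), the $C$-volume $V_C$ can be evaluated as the signed inclusion-exclusion sum over the $2^d$ vertices:

```python
import itertools

def pi_copula(*u):
    # independence copula: the product of its arguments
    p = 1.0
    for x in u:
        p *= x
    return p

def volume(C, a, b):
    """C-volume V_C(]a, b]): signed sum of C over the 2^d vertices of the
    rectangle, with sign +1/-1 for an even/odd number of a-coordinates."""
    dim = len(a)
    total = 0.0
    for bits in itertools.product((0, 1), repeat=dim):
        vertex = [b[j] if bits[j] else a[j] for j in range(dim)]
        sign = (-1) ** (dim - sum(bits))  # dim - sum(bits) coordinates come from a
        total += sign * C(*vertex)
    return total

# for the independence copula the C-volume is the ordinary area of the rectangle
v = volume(pi_copula, (0.2, 0.3), (0.7, 0.9))
print(round(v, 12))  # 0.3 = (0.7 - 0.2) * (0.9 - 0.3)
```

The same `volume` routine evaluates $\mu_C$ on rectangles for any copula satisfying (i)-(iii), which is exactly the $d$-fold stochastic measure defined above.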

We will focus on specific copulas whose support is possibly a fractal set and discuss the uniform convergence of empirical copulas induced by orbits of the so-called chaos game (a Markov process induced by transformation matrices $\mathcal{T}$, compare [4]). We aim at learning, i.e., approximating, an unknown function $f$ (see also [5]) from random samples generated by such a chaos game. Further details on copulas can be found in the monographs [1,2,3].

In this talk, we will first investigate the problem of learning in a relevant function space for an individual domain with the chaos game representation. Within this framework, we further formulate the problem of domain adaptation with multiple sources [6], where we discuss the method of aggregating the already obtained approximated functions in each domain to derive a function with a small error with respect to the target domain.

Acknowledgement:

This research was carried out under the Austrian COMET program (project S3AI with FFG no. 872172, www.S3AI.at, at SCCH, www.scch.at), which is funded by the Austrian ministries BMK, BMDW, and the province of Upper Austria.

[1] F. Durante, C. Sempi. Principles of copula theory. CRC Press, 2016.

[2] R. B. Nelsen. An introduction to copulas. Springer Series in Statistics. Springer, second edition, 2006.

[3] C. Alsina, M. J. Frank, B. Schweizer. Associative functions. Triangular norms and copulas. World Scientific Publishing Co. Pte. Ltd., 2006.

[4] W. Trutschnig, J.F. Sanchez. Copulas with continuous, strictly increasing singular conditional distribution functions. J. Math. Anal. Appl. 410(2): 1014–1027, 2014.

[5] F. Cucker, S. Smale. On the mathematical foundations of learning. Bull. Amer. Math. Soc. (N.S.) 39(1): 1–49, 2002.

[6] Y. Mansour, M. Mohri, A. Rostamizadeh. Domain adaptation with multiple sources. Advances in neural information processing systems 21, 2008.


Learning segmentation on unlabeled MRI data using labeled CT data

Leon Frischauf

University of Vienna, Austria

The goal of supervised learning is that of deducing a classifier from a given labeled data set. In several concrete applications, such as medical imagery, one however often operates in the setup of domain adaptation. Here, a classifier is learnt from a source labeled data set and generalised to a target unlabeled data set, with the two data sets moreover belonging to different domains (e.g. different patients, different machine setups etc.).

In our work, we use the SIFA framework [1] as a basis for medical image segmentation for a cross-modality adaptation between MRI and CT images. We have combined the SIFA algorithm with linear aggregation as well as importance-weighted validation of those trained models to remove the arbitrariness in the choice of parameters.

This presentation shall give an overview of domain adaptation and show the latest version of our experiments.

[1] C. Chen, Q. Dou, H. Chen, J. Qin, P. Heng. Unsupervised Bidirectional Cross-Modality Adaptation via Deeply Synergistic Image and Feature Alignment for Medical Image Segmentation. IEEE Transactions on Medical Imaging 39: 2494-2505, 2020.


Parameter choice in distance-regularized domain adaptation

Werner Zellinger, Sergei V. Pereverzyev

Austrian Academy of Sciences, Austria

We address the unsolved algorithm design problem of choosing a justified regularization parameter in unsupervised domain adaptation, the problem of learning from unlabeled data using labeled data from a different distribution. Our approach starts with the observation that the widely used method of minimizing the source error, penalized by a distance measure between source and target feature representations, shares characteristics with penalized regularization methods. This observation allows us to extend Lepskii’s balancing principle, and its related error bound, to unsupervised domain adaptation. This talk is partially based on [1].

[1] W. Zellinger, N. Shepeleva, M.-C. Dinu, H. Eghbal-zadeh, H. D. Nguyen, B. Nessler, S. V. Pereverzyev, B. Moser. The balancing principle for parameter choice in distance-regularized domain adaptation. Advances in Neural Information Processing Systems (NeurIPS) 34, 2021.

 
4:00pm - 6:00pmMS08 2: Integral Operators in Potential Theory and Applications
Location: VG2.102
Session Chair: Doosung Choi
Session Chair: Mikyoung Lim
Session Chair: Stephen Shipman
 

Recovering an elastic inclusion using the shape derivative of the elastic moment tensors

Daehee Cho, Mikyoung Lim

Korea Advanced Institute of Science & Technology, Korea, Republic of (South Korea)

An elastic inclusion embedded in a homogeneous background induces a perturbation of a given far-field loading. This perturbation admits a multipole expansion whose coefficients are known as Elastic Moment Tensors (EMTs), which contain information on the material and geometric properties of the inclusion. Iterative optimization approaches for recovering the shape of the inclusion via the EMTs have been reported. In this talk, we focus on the shape derivative of the EMTs for planar inclusions. In particular, we derive asymptotic expressions for shape deformations of an inclusion from a disk, based on the complex formulation of the solution to the plane elastostatic problem.


Some aspects of the spectrum of the Neumann-Poincaré operator

Stephen Shipman

Louisiana State University, United States of America

I will discuss some applications of the spectrum of the Neumann-Poincaré operator.


Spectrum of the Neumann-Poincaré operator on thin domains

Kazunori Ando, Hyeonbae Kang, Miyanishi Yoshihisa

Ehime University, Japan

We consider the spectral structure of the Neumann–Poincaré operators defined on the boundaries of thin domains in two and three dimensions. In two dimensions, we consider rectangle-shaped domains. We prove that as the aspect ratio of the domains tends to $\infty$, or equivalently, as the domains get thinner, the spectra of the Neumann–Poincaré operators are densely distributed in $[-\frac{1}{2}, \frac{1}{2}]$. In three dimensions, we consider two different kinds of thin domains: thin oblate domains and thin cylinders. We show that in the first case the spectra are distributed densely in the interval $[-\frac{1}{2}, \frac{1}{2}]$ as the domains get thinner. In the second case, as a partial result, we show that the spectra are distributed densely in the half interval as the domains get thinner.
 
4:00pm - 6:00pmMS10 4: Optimization in Inverse Scattering: from Acoustics to X-rays
Location: VG1.103
Session Chair: Radu Ioan Bot
Session Chair: Russell Luke
 

Phase retrieval from overexposed PSFs: theory and practice

Oleg Alexandrovich Soloviev1,2

1TU Delft, the Netherlands; 2Flexible Optical B.V., the Netherlands

In industrial applications, phase retrieval algorithms can be used to obtain information on optical system misalignment. Because of the specific wavelengths used, the input data for such algorithms are often affected by a high level of noise and quantized with a low bit resolution, and the traditional methods fail to restore the phase accurately. The restoration accuracy can be increased with the presented method of phase retrieval from a (single) overexposed measurement of a point-spread function (PSF). We demonstrate that under certain conditions, any projection-based phase retrieval method can be adjusted to accept input data with saturated pixels. The modification uses the concept of a clipped set, which is able to correctly represent and restore the information lost due to overexposure. With moderate levels of overexposure, the phase restoration accuracy is increased due to the improved signal-to-noise ratio of the PSF. The presentation describes the concept of a clipped set and the procedure for computing the projection onto it, and demonstrates the application on simulated and experimental data.


Tensor-free algorithms for lifted quadratic and bilinear inverse problems

Robert Beinert2, Kristian Bredies1

1University of Graz, Austria; 2Technische Universität Berlin, Germany

We present a class of novel algorithms that aim at solving bilinear and quadratic inverse problems. It is based on first-order proximal algorithms for minimizing a Tikhonov functional associated with the respective tensorial lifted problem with nuclear norm regularization [1]. It is well known, however, that a direct application of such algorithms involves computations in the tensor-product space, in particular, singular-value thresholding. Due to the prohibitively high dimension of the latter space, such algorithms are infeasible without appropriate modification. To overcome this limitation, we show that all computational steps can be adapted to operate on low-rank representations of the iterates, yielding feasible, memory- and computationally efficient tensor-free algorithms [2]. We present and discuss the numerical performance of these methods for the two-dimensional Fourier phase retrieval problem. In particular, we show that the incorporation of smoothness constraints within the framework greatly improves image recovery results.

[1] R. Beinert, K. Bredies. Non-convex regularization of bilinear and quadratic inverse problems by tensorial lifting, Inverse Problems 35(1): 015002, 2019.

[2] R. Beinert, K. Bredies. Tensor-free proximal methods for lifted bilinear/quadratic inverse problems with applications to phase retrieval, Foundations of Computational Mathematics 21(5): 1181-1232, 2021.


Implicit regularization via re-parametrization

Cesare Molinari

UniGe, Italy

Much of the recent success of optimization is related to re- and over-parametrization, which are widely used, for instance, in neural network applications. However, it is still an open question how to systematically find the inductive bias hidden behind a model for a particular optimization scheme. The goal of this talk is to take a step in this direction by studying many reparametrizations used in the state of the art and providing a common structure to analyze the problem in a unified way. We show that, for many reparametrizations, gradient descent on the objective function is equivalent to mirror descent on the original problem. The mirror function depends on the reparametrization and introduces an inductive bias, which plays the role of the regularizer. Our theoretical results provide asymptotic behavior and convergence guarantees.
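A small numerical illustration of this phenomenon (our own toy example in the spirit of known results on quadratic reparametrizations, not code from the talk): for an underdetermined least-squares problem, plain gradient descent from zero finds the minimum-Euclidean-norm solution, while gradient descent on the reparametrization $w_i = u_i^2$ from a small initialization carries a hidden bias towards sparse solutions.

```python
# fit a single equation 2*w1 + w2 = 1 (underdetermined: many exact solutions)
a, y = (2.0, 1.0), 1.0
lr, steps = 0.1, 2000

# (a) gradient descent directly on w, started at 0:
#     converges to the minimum-Euclidean-norm solution (0.4, 0.2)
w = [0.0, 0.0]
for _ in range(steps):
    r = a[0] * w[0] + a[1] * w[1] - y        # residual
    w = [w[j] - lr * r * a[j] for j in range(2)]

# (b) gradient descent on u with w = u**2, started near 0:
#     the implicit (mirror-descent) bias concentrates weight on one coordinate
u = [0.01, 0.01]
for _ in range(steps):
    r = a[0] * u[0] ** 2 + a[1] * u[1] ** 2 - y
    u = [u[j] - lr * r * 2.0 * a[j] * u[j] for j in range(2)]
w_reparam = [x * x for x in u]

print([round(x, 3) for x in w])          # [0.4, 0.2]  (minimum norm)
print(w_reparam[0] > 10 * w_reparam[1])  # True        (nearly sparse)
```

Both runs fit the data exactly; only the reparametrization, acting as an implicit regularizer, decides which of the infinitely many solutions is selected.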
 
4:00pm - 6:00pmMS13 2: Stochastic iterative methods for inverse problems
Location: VG0.111
Session Chair: Tim Jahn
 

From inexact optimization to learning via gradient concentration

Bernhard Stankewitz, Nicole Mücke, Lorenzo Rosasco

Bocconi University Milano, Italy

Optimization in machine learning typically deals with the minimization of empirical objectives defined by training data. The ultimate goal of learning, however, is to minimize the error on future data (test error), for which the training data provides only partial information. In this view, the optimization problems that are practically feasible are based on inexact quantities that are stochastic in nature. In this paper, we show how probabilistic results, specifically gradient concentration, can be combined with results from inexact optimization to derive sharp test error guarantees. By considering unconstrained objectives, we highlight the implicit regularization properties of optimization for learning.


Principal component analysis in infinite dimensions

Martin Wahl

Universität Bielefeld, Germany

In high-dimensional settings, principal component analysis (PCA) reveals some unexpected phenomena, ranging from eigenvector inconsistency to eigenvalue (upward) bias. While such high-dimensional phenomena are now well understood in the spiked covariance model, the goal of this talk is to present some extensions for the case of PCA in infinite dimensions. As an application, we present bounds for the prediction error of spectral regularization estimators in the overparametrized regime.


Learning Linear Operators

Nicole Mücke

TU Braunschweig, Germany

We consider the problem of learning a linear operator $\theta$ between two Hilbert spaces from empirical observations, which we interpret as least squares regression in infinite dimensions. We show that this goal can be reformulated as an inverse problem for $\theta$ with the undesirable feature that its forward operator is generally non-compact (even if $\theta$ is assumed to be compact or of p-Schatten class). However, we prove that, in terms of spectral properties and regularisation theory, this inverse problem is equivalent to the known compact inverse problem associated with scalar response regression. Our framework allows for the elegant derivation of dimension-free rates for generic learning algorithms under Hölder-type source conditions. The proofs rely on the combination of techniques from kernel regression with recent results on concentration of measure for sub-exponential Hilbertian random variables. The obtained rates hold for a variety of practically-relevant scenarios in functional regression as well as nonlinear regression with operator-valued kernels and match those of classical kernel regression with scalar response.


SGD for select inverse problems in Banach spaces

Zeljko Kereta1, Bangti Jin1,2

1University College London; 2The Chinese University of Hong Kong

In this work we present a mathematical framework and analysis for SGD in Banach spaces for select linear and non-linear inverse problems. Analysis in the Banach space setting presents unique challenges, requiring novel mathematical tools. This is achieved by combining insights from Hilbert space theory with approaches from modern optimisation. The developed theory and algorithms open doors for a wide range of applications, and we present some future challenges and directions.
 
4:00pm - 6:00pmMS31 2: Inverse Problems in Elastic Media
Location: VG3.104
Session Chair: Andrea Aspri
Session Chair: Ekaterina Sherina
 

An inverse problem for the porous medium equation

Catalin Ion Carstea1, Tuhin Ghosh2, Gen Nakamura3

1National Yang Ming Chiao Tung University, Taiwan; 2National Institute of Science Education and Research, India; 3Inha University, Korea

The porous medium equation is a degenerate parabolic type quasilinear equation that models, for example, the flow of a gas through a porous medium. In this talk I will present recent results on uniqueness in the inverse boundary value problem for this equation. These are the first such results to be obtained for a degenerate parabolic equation.


Comparison of variational formulations for the direct solution of an inverse problem in linear elasticity

Paul E. Barbone1, Olalekan Babaniyi2

1Boston University, United States of America; 2Rochester Institute of Technology, United States of America

Given one or more observations of a displacement field within a linear elastic, isotropic, incompressible object, we seek to identify the material property distribution within that object. This is a mildly ill-posed inverse problem in linear elasticity. While most common approaches to solving this inverse problem use forward iteration, several variational formulations have been proposed that allow its direct solution. We review five such direct variational formulations for this inverse problem: Least Squares, Adjoint Weighted Equation, Virtual Fields, Inverse Least Squares, and Direct Error in Constitutive Equation [1-5]. We briefly review their derivations, their mathematical properties, and their compatibility with Galerkin discretization and numerical solution. We demonstrate these properties through numerical examples.

[1] P. B. Bochev, M. D. Gunzburger. Finite element methods of least-squares type, SIAM Review, 40(4): 789--837, 1998.

[2] P. E. Barbone, C. E. Rivas, I. Harari, U. Albocher, A. A. Oberai, Y. Zhang. Adjoint-weighted variational formulation for the direct solution of inverse problems of general linear elasticity with full interior data, International Journal for Numerical Methods in Engineering 81(13): 1713--1736, 2010.

[3] F. Pierron, M. Grédiac. The Virtual Fields Method: Extracting Constitutive Mechanical Parameters from Full-field Deformation Measurements, Springer Science & Business Media, 2012.

[4] G. Bal, C. Bellis, S. Imperiale, F. Monard. Reconstruction of constitutive parameters in isotropic linear elasticity from noisy full-field measurements, Inverse Problems 30(12): 125004, 2014.

[5] O. A. Babaniyi, A. A. Oberai, P. E. Barbone. Direct error in constitutive equation formulation for plane stress inverse elasticity problem, Computer Methods in Applied Mechanics and Engineering 314: 3--18, 2017.
 
4:00pm - 6:00pmMS41 2: Geomathematics
Location: VG3.101
Session Chair: Joonas Ilmavirta
 

Inverse scattering: Regularized Lanczos method for the Lippmann-Schwinger equation

Justin Baker1, Elena Cherkaev1, Vladimir Druskin2, Shari Moskow3, Mikhail Zaslavsky4

1University of Utah, U.S.A.; 2Worcester Polytechnic Institute, U.S.A.; 3Drexel University, U.S.A.; 4Southern Methodist University, U.S.A.

Inverse scattering techniques have broad applicability in geophysics, medical imaging, and remote sensing. This talk presents a robust direct reduced-order model method for solving inverse scattering problems. The approach is based on a Lippmann-Schwinger-Lanczos (LSL) algorithm in the frequency domain with two levels of regularization. Numerical experiments for Helmholtz and Schrödinger problems show that the proposed regularization scheme significantly improves the performance of the LSL algorithm, allowing for good reconstructions with noisy data.



Travel Time Tomography in Transversely Isotropic Elasticity via Microlocal Analysis

Yuzhou Zou

Northwestern University

We will discuss recent results of the author regarding the travel time tomography problem in the context of transversely isotropic elasticity. The works build on previous works regarding X-ray and (elastic) travel time tomography and boundary rigidity problems studied by de Hoop, Stefanov, Uhlmann, Vasy, et al., which reduce the inverse problems to the microlocal analysis of certain operators obtained from a pseudolinearization argument. We will discuss the additional analytic complications in this situation, due to the degenerating ellipticity of certain key operators obtained in the pseudolinearization argument, as well as the machinery developed to tackle these additional complications.


An inverse source problem for the elasto-gravitational equations

Lorenzo Baldassari2, Maarten V. de Hoop2, Elisa Francini1, Sergio Vessella1

1Università di Firenze, Italy; 2Rice University, USA

We study an inverse source problem for a system of elastic-gravitational equations, describing the oscillations of the Earth due to an earthquake.

The aim is to determine the seismic-moment tensor and the position of the point source by using only measurements of the disturbance in the gravity field induced by the earthquake, for an arbitrarily small time window.

The problem is inspired by the recently discovered speed-of-light prompt elasto-gravity signals (PEGS), which can prove beneficial for earthquake early warning systems (EEWS).
 
4:00pm - 6:00pmMS46 3: Inverse problems for nonlinear equations
Location: VG1.104
Session Chair: Lauri Oksanen
Session Chair: Teemu Kristian Tyni
4:00pm - 6:00pmMS50 2: Mathematics and Magnetic Resonance Imaging
Location: VG1.105
Session Chair: Kristian Bredies
Session Chair: Christian Clason
Session Chair: Martin Uecker
 

MRI Pulse Design via discrete-valued optimal control

Christian Clason

University of Graz, Austria

Magnetic Resonance Imaging (MRI) is an active imaging methodology that uses radio frequency excitation and response of magnetic spin ensembles under a strong static external magnetic field to measure the distribution of hydrogen atoms in a sample. This distribution correlates with different tissues in a human body, allowing non-invasive medical imaging without ionizing radiation. The mathematical model for the behavior of magnetic spin ensembles under magnetic fields is the so-called Bloch equation, which is a bilinear differential equation. The problem of generating optimal excitation pulses for imaging purposes can thus be formulated and solved as an optimal control problem. We present the basic setup and methods, show practical examples, and discuss how to incorporate structural constraints on the optimal pulses.
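The Bloch equation that underlies this control problem is easy to simulate directly. Below is a minimal sketch (not the speaker's code): a fixed-step RK4 integration of the rotating-frame Bloch equation for a constant "hard" 90-degree pulse; the pulse amplitude, relaxation times, and function names are illustrative assumptions.

```python
import numpy as np

GAMMA = 2 * np.pi * 42.58e6  # gyromagnetic ratio of 1H, rad/s/T

def bloch_rhs(M, b1, dw, T1, T2, M0=1.0):
    """dM/dt = gamma * M x B_eff + relaxation, in the rotating frame.
    b1 is the complex RF amplitude (real part along x, imaginary along y, in T),
    dw the off-resonance in rad/s."""
    w_eff = np.array([GAMMA * b1.real, GAMMA * b1.imag, dw])
    relax = np.array([-M[0] / T2, -M[1] / T2, -(M[2] - M0) / T1])
    return np.cross(M, w_eff) + relax

def simulate(b1, tau, dw=0.0, T1=1.0, T2=0.1, n_steps=2000):
    """Integrate the Bloch equation with classical RK4 for a constant pulse b1."""
    M = np.array([0.0, 0.0, 1.0])  # equilibrium magnetization along z
    h = tau / n_steps
    for _ in range(n_steps):
        k1 = bloch_rhs(M, b1, dw, T1, T2)
        k2 = bloch_rhs(M + h / 2 * k1, b1, dw, T1, T2)
        k3 = bloch_rhs(M + h / 2 * k2, b1, dw, T1, T2)
        k4 = bloch_rhs(M + h * k3, b1, dw, T1, T2)
        M = M + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return M

# A constant RF pulse whose flip angle gamma * |b1| * tau is 90 degrees.
B1 = 1e-6                          # 1 uT amplitude (illustrative value)
tau = (np.pi / 2) / (GAMMA * B1)   # ~5.9 ms pulse duration
Mx, My, Mz = simulate(B1 + 0j, tau)
```

In the optimal control formulation, the pulse $b_1(t)$ becomes the control variable and the terminal magnetization enters the cost functional; the bilinearity mentioned in the abstract is visible above, since $b_1$ multiplies the state $M$.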


Null Space Networks for undersampled Fourier data

Markus Haltmeier

Universität Innsbruck, Austria

Preserving data consistency is a key property of learned image reconstruction. This can be achieved either by a specific network architecture or by a subsequent projection of the network reconstruction. In this talk, we analyze null-space networks for undersampled image reconstruction. We numerically compare image reconstruction from undersampled Fourier data and investigate the effect of integrating data consistency into the network architecture.
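The projection variant of this idea can be sketched in a few lines: the network is free to fill in unmeasured frequencies, while the measured Fourier data are enforced exactly. The following is a generic illustration (the placeholder `network` stands in for any trained model; names are hypothetical):

```python
import numpy as np

def data_consistent_reconstruction(y, mask, network):
    """Null-space-style data consistency for undersampled Fourier data.
    y:    undersampled k-space, zeros outside the sampling pattern
    mask: boolean sampling pattern
    network: any image-to-image map (placeholder for a trained network)."""
    x0 = np.fft.ifft2(y)             # zero-filled reconstruction
    x_net = network(x0)              # learned refinement
    k_net = np.fft.fft2(x_net)
    k_dc = np.where(mask, y, k_net)  # keep measured data, network fills the rest
    return np.fft.ifft2(k_dc)
```

By construction the output reproduces the measured frequencies exactly, so the learned part only acts on the null space of the sampling operator.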


Deep Learning Approaches for Non-Linear Inverse Problems in MRI Reconstruction

Moritz Blumenthal1,2, Guanxiong Luo2, Martin Schilling2, Martin Uecker1,2

1Institute of Biomedical Imaging, Graz University of Technology, Graz, Austria; 2Institute for Diagnostic and Interventional Radiology of the University Medical Center Göttingen, Germany

MRI is an important tool for clinical diagnosis. Although MRI is recognized for being non-invasive and for producing images of high quality with excellent soft tissue contrast, its long acquisition times and high cost are problematic. Recently, deep learning techniques have been developed to help solve these issues by improving acquisition speed and image quality.

The multi-coil measurement process is modeled by a linear operator, the SENSE encoding model $$ \begin{aligned} A:\mathbb{C}^{N_x\times N_y}&\to \mathbb{C}^{N_S \times N_C}\\ x &\mapsto y=\mathcal{PFC}x. \end{aligned} $$ The discretized image $x$ corresponds to the complex-valued transversal magnetization in the tissue. In the encoding process, it is first weighted with the coil-sensitivity maps of the $N_C$ receive $\mathcal{C}$oils, then $\mathcal{F}$ourier transformed and finally projected to the $N_S$ sample points of the acquired sampling $\mathcal{P}$attern. Unrolled model-based deep learning approaches are motivated by classical optimization algorithms for the linear inverse problem and integrate prior knowledge through learned regularization terms. Typical examples of end-to-end trained networks in the field of MRI are the Variational Network [1] and MoDL [2].
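The encoding model $y=\mathcal{PFC}x$ and its adjoint (which any iterative reconstruction needs) can be sketched with a mask-based FFT implementation. This is a generic illustration, not the code behind the cited networks; shapes and the FFT normalization are assumptions made for the sketch:

```python
import numpy as np

def sense_forward(x, coils, mask):
    """SENSE encoding A x = P F C x.
    x:     (Nx, Ny) image
    coils: (Nc, Nx, Ny) coil-sensitivity maps
    mask:  (Nx, Ny) boolean sampling pattern P."""
    k = np.fft.fft2(coils * x, axes=(-2, -1))  # C then F, per coil
    return k[:, mask]                          # P: keep sampled points, (Nc, Ns)

def sense_adjoint(y, coils, mask):
    """Adjoint A^H y = C^H F^H P^T y for the unnormalized fft2 convention."""
    Nc = coils.shape[0]
    Nx, Ny = mask.shape
    k = np.zeros((Nc, Nx, Ny), dtype=complex)
    k[:, mask] = y                                     # P^T: zero-fill
    imgs = np.fft.ifft2(k, axes=(-2, -1)) * (Nx * Ny)  # F^H = N * ifft2
    return np.sum(np.conj(coils) * imgs, axis=0)       # C^H: coil combination
```

The adjoint relation $\langle Ax, y\rangle = \langle x, A^H y\rangle$ can be verified numerically, which is a standard sanity check before plugging such operators into an unrolled scheme.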

Although MRI reconstruction is often treated as a linear problem, many applications require non-linear approaches. For instance, the estimation of coil-sensitivity maps can be challenging. An alternative to the use of calibration measurements or pre-estimation of the sensitivity maps from fully-sampled auto-calibration regions is to integrate the estimation into the reconstruction problem. This results in a non-linear (in fact, bilinear) forward model of the form $$ \begin{aligned} F:\mathbb{C}^{N_x\times N_y}\times \mathbb{C}^{N_x\times N_y\times N_c}&\to \mathbb{C}^{N_S \times N_C}\\ x = \begin{pmatrix} x_{\mathrm{img}}\\x_{\mathrm{col}} \end{pmatrix} &\mapsto y=\mathcal{PF}\left(x_{\mathrm{img}}\odot x_{\mathrm{col}}\right)\,. \end{aligned} $$ A possible approach to solving the corresponding inverse problem is the iteratively regularized Gauss-Newton method (IRGNM) [3], which can in turn be combined with deep-learning based regularization [4], similarly to MoDL.
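Because the forward model is bilinear, the Fréchet derivative that a Gauss-Newton method linearizes around the current iterate follows the product rule. A minimal single-coil sketch (a simplification for illustration; the actual model stacks $N_C$ coil channels, and the function names are hypothetical):

```python
import numpy as np

def bilinear_forward(x_img, x_col, mask):
    """F(x_img, x_col) = P F (x_img * x_col), shown for a single coil."""
    return np.fft.fft2(x_img * x_col)[mask]

def bilinear_derivative(x_img, x_col, dx_img, dx_col, mask):
    """Frechet derivative of the bilinear model (product rule):
    DF (dx_img, dx_col) = P F (dx_img * x_col + x_img * dx_col)."""
    return np.fft.fft2(dx_img * x_col + x_img * dx_col)[mask]
```

A finite-difference check confirms the linearization: the mismatch between $F(x+\varepsilon\,dx)-F(x)$ and $\varepsilon\,DF\,dx$ is exactly the second-order term $\mathcal{PF}(dx_{\mathrm{img}}\odot dx_{\mathrm{col}})$, which scales quadratically in $\varepsilon$.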

Another source of non-linearity in the reconstruction is the temporal evolution of transverse magnetization. The magnetization follows the Bloch equations, which are parametrized by tissue-specific relaxation parameters $T_1$ and $T_2$. In quantitative (q)MRI, parameter maps $x_{\mathrm{par}}$ are estimated instead of qualitative images $x_{\mathrm{img}}$ of transverse magnetization. In model-based qMRI, physical models that map the parameter maps $x_{\mathrm{par}}$ to the transverse magnetization are combined with encoding models to create non-linear forward models of the form [5]: $$ \begin{aligned} F:\mathbb{C}^{N_x\times N_y\times N_p}\times \mathbb{C}^{N_x\times N_y\times N_c}&\to \mathbb{C}^{N_S \times N_C}\\ x = \begin{pmatrix} x_{\mathrm{par}}\\x_{\mathrm{col}} \end{pmatrix} &\mapsto y=\mathcal{PF}\left(\mathcal{M}(x_{\mathrm{par}})\odot x_{\mathrm{col}}\right) \end{aligned} $$ An efficient way to solve a particular class of such non-linear inverse problems is the approximation of the non-linear signal model in linear subspaces, which in turn can be well combined with deep-learning based regularization [6]. This talk will cover deep-learning based approaches to solve the non-linear inverse problems defined above.

[1] K. Hammernik, T. Klatzer, E. Kobler, M. P. Recht, D. K. Sodickson, T. Pock, F. Knoll. Learning a variational network for reconstruction of accelerated MRI data, Magn. Reson. Med. 79: 3055-3071, 2018.

[2] H. K. Aggarwal, M. P. Mani, M. Jacob. MoDL: Model-Based Deep Learning Architecture for Inverse Problems, IEEE Trans. Med. Imaging. 38: 394-405, 2019.

[3] M. Uecker, T. Hohage, K. T. Block, J. Frahm. Image reconstruction by regularized nonlinear inversion—Joint estimation of coil sensitivities and image content, Magn. Reson. Med. 60: 674-682, 2008.

[4] M. Blumenthal, G. Luo, M. Schilling, M. Haltmeier, M. Uecker. NLINV-Net: Self-Supervised End-2-End Learning for Reconstructing Undersampled Radial Cardiac Real-Time Data, Proc. Intl. Soc. Mag. Reson. Med. 28: 0499, 2022.

[5] X. Wang et al. Physics-based reconstruction methods for magnetic resonance imaging, Phil. Trans. R. Soc. A. 379: 20200196, 2021.

[6] M. Blumenthal et al. Deep Subspace Learning for Improved T1 Mapping using Single-shot Inversion-Recovery Radial FLASH, Proc. Intl. Soc. Mag. Reson. Med. 28: 0241, 2022.


Mathematical Methods in Parallel MRI

Benjamin Kocurov

University of Göttingen, Germany

Magnetic Resonance Imaging (MRI) is an important technique in medical imaging. In the subfield of Parallel MRI, multiple receive coils are used to reconstruct tomographic images with fewer data acquisition steps than in ordinary MRI. In this talk, we will take a deeper look at the mathematical background of some of the prominent reconstruction methods. We will show that, in the course of these methods, implicit assumptions are made on the structure of the signals and of the sensitivity profiles associated with the receive coils. In order to better understand the methods at hand and possible improvements, we aim to make these assumptions explicit.
 
4:00pm - 6:00pmMS59 2: Advanced Reconstruction and Phase Retrieval in Nano X-ray Tomography
Location: VG2.103
Session Chair: Tim Salditt
Session Chair: Anne Wald
 

Multi-stage Deep Learning Artifact Reduction for Computed Tomography

Jiayang Shi, Daan Pelt, Joost Batenburg

Leiden University, the Netherlands

Computed Tomography (CT) is a challenging inverse problem that involves reconstructing images from projection data. The CT pipeline typically comprises three stages: 1) acquisition of projection images, 2) transposition of projection images into sinogram images, and 3) computation of reconstruction images. In practice, the projection images are often corrupted, resulting in various imaging artifacts such as noise, zinger artifacts, and ring artifacts in the reconstructed images. Although recent deep learning-based methods have shown promise in reducing noise through post-processing of CT images, they struggle to effectively address globally distributed artifacts along with noise.

Classical artifact reduction methods, on the other hand, have demonstrated success in reducing globally distributed artifacts by targeting individual types of artifacts before the reconstruction stage. These methods operate in the natural domain where the artifacts are most prominent. Inspired by that, we propose to reduce artifacts in all projection, sinogram, and reconstruction stages with deep learning. This approach enables accurate reduction of globally distributed artifacts along with noise, leading to improved CT image quality. Experiments on both simulated and real-world datasets validate the effectiveness of our proposed approach.


Deep learning for phase retrieval from Fresnel diffraction patterns

Max Langer1, Kannara Mom2, Bruno Sixou2

1Univ. Grenoble Alpes, CNRS, UMR 5525, VetAgro Sup, Grenoble INP, TIMC, 38000 Grenoble, France; 2Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, F-69621 Villeurbanne, France

We present our recent developments in phase retrieval from propagation-based X-ray phase contrast images using deep learning-based approaches. Previously, deep convolutional neural networks had been used as a post-processing step to a linear phase retrieval algorithm [1]. In a first approach, we investigated the use of deep convolutional neural networks to directly retrieve phase and amplitude from a propagation distance series of phase contrast images [2]. We chose the mixed-scale dense network (MS-DN) [3] as the network structure, since its architecture, which takes features at several scales into account and connects the corresponding feature maps, seems well adapted to the properties of phase contrast images. We developed a transfer learning approach in which the network is trained on simulated phase contrast images generated from projections of random objects with simple geometric shapes.

We showed that the use of a simple pre-processing to transform the input to the image domain improved results, providing some support to the hypothesis that including knowledge of the image formation process in the network improves reconstruction quality. Some work has been done in this direction using generative adversarial networks by introducing a model of the image formation in a CycleGAN network [4].

Going one step further, information on how to solve the phase retrieval problem can be introduced into the neural network, algorithm unrolling being one such approach [5]. In algorithm unrolling, parts of an iterative algorithm, usually the regularization part, are replaced by neural networks. The networks learn the steps of the chosen iterative algorithm. These networks can then be applied in a sequential fashion, making the run-time application very efficient, moving the calculation load from the iterative reconstruction to the off-line training of the networks.
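A generic unrolled scheme of the kind described above can be sketched as follows. This is not the DGN discussed next but a plain unrolled proximal-gradient iteration, with a placeholder standing in for the learned step (all names are illustrative):

```python
import numpy as np

def unrolled_reconstruction(y, A, AH, prox_net, n_iter=8, step=0.5):
    """Unrolled proximal-gradient scheme: alternate a data-consistency
    gradient step with a learned regularization step.
    A / AH:   forward operator and its adjoint (callables)
    prox_net: placeholder for the trained network, here shared across
              iterations as in weight-tied unrolling."""
    x = AH(y)  # simple initialization from the adjoint
    for _ in range(n_iter):
        x = x - step * AH(A(x) - y)  # gradient step on ||Ax - y||^2 / 2
        x = prox_net(x)              # learned proximal / denoising step
    return x
```

With `prox_net` set to the identity this reduces to plain Landweber iteration, which makes the structure easy to verify; in the unrolled setting the same loop is differentiated through during training so that the networks learn the steps of the chosen algorithm.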

Based on this idea, we proposed the Deep Gauss-Newton network (DGN) [6]. Gauss-Newton type algorithms have been successfully used for phase retrieval from Fresnel diffraction patterns [7]. Inspired by this, we developed an unrolling-type algorithm based on a Gauss-Newton iteration, in which both the regularization and the inverse Hessian are replaced by neural networks. The same network is used for each iteration, making the method very economical in terms of network weights. An initial reconstruction is not required; the algorithm can be initialized at zero. It can simultaneously retrieve the phase and absorption from a single diffraction pattern. We applied the DGN to both simulated and experimental data, for which it substantially reduced the reconstruction error and improved the resolution compared to both the standard iterative algorithm and the MS-DN-based method.

Future work includes extension of the algorithms to tomographic and time-resolved imaging, as well as to other imaging problems. Code for both algorithms will be made available through the PyPhase package [8] in a future release.

[1] C. Bai, M. Zhou, J. Min, S. Dand, X. Yu, P. Zhang, T. Peng, B. Yao. Robust contrast-transfer-function phase retrieval via flexible deep learning networks, Opt. Lett. 44 (21), 5141–5144, 2019.

[2] K. Mom, B. Sixou, M. Langer. Mixed scale dense convolutional networks for x-ray phase contrast imaging, Appl. Opt. 61, 2497–2505, 2022.

[3] D. M. Pelt, J. A. Sethian. A mixed-scale dense convolutional neural network for image analysis, Proc. Natl. Acad. Sci. USA 115, 254–259, 2018.

[4] Y. Zhang, M. A. Noack, P. Vagovic, K. Fezzaa, F. Garcia-Moreno, T. Ritschel, P. Villanueva-Perez. PhaseGAN: a deep-learning phase-retrieval approach for unpaired datasets, Opt. Express 29, 19593–19604, 2021.

[5] V. Monga, Y. Li, Y. C. Eldar. Algorithm unrolling: Interpretable, efficient deep learning for signal and image processing, IEEE Signal Proc. Mag. 38, 18–44, 2021.

[6] K. Mom, M. Langer, B. Sixou. Deep Gauss-Newton for phase retrieval, Opt. Lett. 48, 1136–1139, 2023.

[7] S. Maretzke, M. Bartels, M. Krenkel, T. Salditt, T. Hohage. Regularized newton methods for X-ray phase contrast and general imaging problems, Opt. Express 24, 6490–6506, 2016.

[8] M. Langer, Y. Zhang, D. Figueirinhas, J.-B. Forien, K. Mom, C. Mouton, R. Mokso, P. Villanueva-Perez. PyPhase – a Python package for X-ray phase imaging, J. Synch. Radiat. 28, 1261–1266, 2021.



Time resolved and multi-resolution tomographic reconstruction strategies in practice.

Rajmund Mokso1, Viktor Nikitin2

1DTU Physics, Technical University of Denmark, Lyngby, Denmark; 2Advanced Photon Source, Argonne National Laboratory, Lemont, IL, USA

A collimated X-ray beam is the trademark of synchrotron X-ray sources and comes with certain benefits for tomography, namely the simplicity of parallel beam tomographic reconstruction. Building on this, a number of new approaches have emerged to reconstruct a 3D volume from truncated X-ray projections. Here I will mainly consider truncation in the time domain. One specificity of imaging at synchrotron instruments is that individual angular projections are acquired on a sub-ms time frame and the entire tomographic dataset in a fraction of a second [1,2]. This enables time-resolved studies of dynamic processes at micrometer spatial and sub-second temporal resolution. Despite this fast acquisition, the sample often evolves at a faster rate, giving rise to motion artefacts in the reconstructed volume. One possible approach to reconstructing an artifact-free 3D volume from (in the traditional sense) inconsistent projections is compressed sensing, in which the data in the temporal direction are represented by a linear combination of appropriate basis functions [3]. In our approach, we perform L1-norm minimization of the gradient in both the spatial and temporal variables. The optimal choice of basis functions is case-specific and is a matter of further investigation.
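The temporal-basis idea can be illustrated in isolation: each voxel's time curve is compressed into a few coefficients in a chosen basis, which is what makes reconstruction from temporally inconsistent projections tractable. A minimal sketch (the exponential basis and all numbers are illustrative assumptions, not the basis used in [3]):

```python
import numpy as np

# Represent a per-voxel time curve u(t) as u(t) ~ sum_k c_k * phi_k(t),
# here with a small exponential basis phi_k(t) = exp(-r_k * t).
t = np.linspace(0.0, 1.0, 100)
Phi = np.stack([np.exp(-r * t) for r in (0.5, 2.0, 8.0)], axis=1)  # (Nt, K)

# Synthetic time curve lying in the span of the basis.
u = 0.3 * np.exp(-0.5 * t) + 0.7 * np.exp(-8.0 * t)

# Least-squares fit of the K temporal coefficients.
c, *_ = np.linalg.lstsq(Phi, u, rcond=None)
u_hat = Phi @ c  # reconstructed time curve
```

In a 4D reconstruction, the unknowns become the coefficient volumes $c_k$ rather than one volume per time frame, so the dimensionality of the inverse problem drops from $N_t$ frames to $K \ll N_t$ basis coefficients.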

Multiresolution acquisition is an attractive tomographic approach, but comes with its own challenges. I will discuss an approach to merging high- and low-resolution datasets of the same sample [4] to extend the reconstructed volume.

[1] R. Mokso, D.A. Schwyn, S.M. Walker et al. Four-dimensional in vivo X-ray microscopy with projection guided gating, Scientific Reports 5 (1), 8727, 2015.

[2] F. Garcia-Moreno et al. Using X-ray tomoscopy to explore the dynamics of foaming metal, Nature Communications. 10(1), 3762, 2019.

[3] V. Nikitin, M. Carlsson, F. Andersson, R. Mokso. Four-dimensional tomographic reconstruction by time domain decomposition, IEEE Transaction on Computational Imaging 5(3), 409, 2019.

[4] L. Varga, R. Mokso. Iterative High Resolution Tomography from Combined High-Low Resolution Sinogram Pairs, Proceedings of International Workshop on Combinatorial Image Analysis, 150–163, 2018.



Tomographic Reconstruction in X-ray Near-field Diffractive Imaging: from Laboratory $\mu$CT to Synchrotron Nano-Imaging

Tim Salditt

Georg-August-Universität Göttingen, Germany

X-rays can provide information about the structure of matter on multiple length scales, from bulk materials to nanoscale devices, from organs to organelles, from the organism to the macromolecule. Due to the widespread lack of suitable lenses, the majority of investigations are rather indirect, apart from classical shadow radiography perhaps. While diffraction problems have long been solved, the modern era has brought about lensless coherent imaging with X-rays, down to the nanoscale. How can we address and implement optimized tomography solutions for phase contrast in-house and synchrotron data, taking into account partial coherence, propagation and cone beam geometry? We show how solutions and algorithms from the mathematics of inverse problems [1-3] help us to meet the challenges of phase retrieval, tomographic reconstruction, and more generally image processing of bulky data. We also include illustrative bioimaging projects such as mapping the human brain [4] and fighting infectious diseases [5,6]. References:

[1] T. Salditt, A. Egner, R. D. Luke (Eds.). Nanoscale Photonic Imaging, Springer Nature, TAP 134, Open Access Book, 2020.

[2] L. M. Lohse, A.-L. Robisch, M. Töpperwien, S. Maretzke, M. Krenkel, J. Hagemann, T. Salditt. A phase-retrieval toolbox for X-ray holography and tomography, Journal of Synchrotron Radiation, 27, 3, 2020.

[3] S. Huhn, L. M. Lohse, J. Lucht, T. Salditt. Fast algorithms for nonlinear and constrained phase retrieval in near-field X-ray holography based on Tikhonov regularization, arXiv preprint arXiv:2205.01099, 2022.

[4] M. Eckermann, B. Schmitzer, F. van der Meer, J. Franz, O. Hansen, C. Stadelmann, T. Salditt. Three-dimensional virtual histology of the human hippocampus based on phase-contrast computed tomography, Proc. Natl. Acad. Sci., 118, 48, e2113835118, 2021.

[5] M. Eckermann, J. Frohn, M. Reichardt, M. Osterhoff, M. Sprung, F. Westermeier, A.Tzankov, C. Werlein, M. Kuehnel, D. Jonigk, T. Salditt. 3d Virtual Patho-Histology of Lung Tissue from Covid-19 Patients based on Phase Contrast X-ray Tomography, eLife, 9:e60408, 2020.

[6] M. Reichardt, P.M. Jensen, V.A. Dahl, A.B. Dahl, M. Ackermann, H. Shah, F. Länger, C. Werlein, M.P. Kuehnel, D. Jonigk, T. Salditt. 3D virtual histopathology of cardiac tissue from Covid-19 patients based on phase-contrast X-ray tomography, eLife, 10:e71359, 2021.

 

 
Contact and Legal Notice · Contact Address:
Privacy Statement · Conference: AIP 2023
Conference Software: ConfTool Pro 2.8.101+TC
© 2001–2024 by Dr. H. Weinreich, Hamburg, Germany