Conference Agenda

Overview and details of the sessions of this conference.

 
Session Overview
Location: VG2.104
Date: Monday, 04/Sept/2023
1:30pm - 3:30pm MS44 1: Modelling in Earth and planetary sciences by data inversion at various scales
Location: VG2.104
Session Chair: Christian Gerhards
Session Chair: Volker Michel
Session Chair: Frederik J Simons
 

Inverse magnetization problems in geoscience at various scales

Christian Gerhards

TU Bergakademie Freiberg, Germany

The inversion of magnetic field data for the underlying magnetization is a frequent problem in geoscience. It occurs at planetary scales, inverting satellite magnetic field information for lithospheric sources, as well as at microscopic scales, inverting for the sources in thin slices of rock samples. All scales have in common that the inverse problem is nonunique and highly unstable. Here, we provide an overview of this topic and indicate various scenarios in which additional assumptions may ameliorate some of the issues of ill-posedness. These range from the assumption of an (infinitely) thin lithosphere (where the Hardy-Hodge decomposition can be used to characterize uniqueness) to a priori knowledge about the location or shape of magnetic inclusions within a rock sample (where the Helmholtz decomposition plays a role in the uniqueness aspect).


Slepian concentration problem for polynomials on the Ball

Xinpeng Huang

TU Bergakademie Freiberg, Germany

The sources of geophysical signals are often spatially localized. Thus, adequate basis functions are required to model such properties. Slepian functions have proven to be a very successful tool.

Here, we consider theoretical properties of the Slepian spatial-spectral concentration problem for the space of multivariate polynomials on the unit ball in $\mathbb{R}^d$, normalized under Jacobi weights. In particular, we show the phenomenon of the step-like shape of the eigenvalue distribution of the concentration operators, and we characterize the transition via the Jacobi weight $W_{0}$, which serves as an analogue of the $2\Omega T$ rule in classical Slepian theory. A numerical demonstration is performed for the 3-D ball with Lebesgue weights.
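
The step-like eigenvalue behaviour has a classical one-dimensional analogue that can be reproduced in a few lines. The sketch below is a hedged illustration only (the grid size and the values of $T$ and $\Omega$ are our assumptions, not taken from the talk): it discretizes the sinc-kernel concentration operator on $[0,T]$ and shows that roughly $2\Omega T$ eigenvalues are close to 1 before the step down to 0.

```python
# Classical 1-D Slepian concentration operator: the eigenvalues of the sinc
# kernel on [0, T] for bandlimit Omega cluster near 1 and 0; about 2*Omega*T
# of them (the Shannon number) are close to 1.
import numpy as np

T, Omega, n = 1.0, 20.0, 400
t = (np.arange(n) + 0.5) * T / n                  # midpoint quadrature nodes
dt = T / n
K = 2 * Omega * np.sinc(2 * Omega * (t[:, None] - t[None, :])) * dt
eigvals = np.linalg.eigvalsh(K)[::-1]             # sort descending
print("Shannon number 2*Omega*T =", 2 * Omega * T)
print("eigenvalues around the step:", eigvals[35:45])
```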



Regularized matching pursuits with a learning add-on for geoscientific inverse problems

Naomi Schneider

University of Siegen, Geomathematics Group Siegen, Germany

We consider challenging inverse problems from the geosciences: the downward continuation of satellite data for the approximation of the gravitational potential, as well as travel-time tomography using earthquake data to model the interior of the Earth. In this way, we are able to monitor certain influences on the Earth system, in particular its mass transport and interior anomalies.

For the approximation of these linear(ized) inverse problems, different basis systems can be utilized. Traditionally, a basis is chosen a priori: either a global one, e.g. spherical harmonics on the sphere or polynomials on the ball, or a local one, e.g. radial basis functions, wavelets, or finite elements.

The Learning Inverse Problem Matching Pursuits (LIPMPs), however, have the unique characteristic of enabling the combination of global and local trial functions in the approximation of inverse problems. The approximation is built iteratively from an intentionally overcomplete set of trial functions, the dictionary, such that the Tikhonov functional is reduced in each step. Moreover, the learning add-on allows the dictionary to be infinite, so that an a-priori choice of a finite number of trial functions becomes unnecessary. Further, it increases the efficiency of the methods.

In this talk, we give details on the LIPMPs and show some current numerical results.
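
To make the greedy reduction of the Tikhonov functional concrete, the following minimal sketch runs matching-pursuit-style iterations over a finite toy dictionary. It is an illustrative assumption of the general principle only: the learning add-on, infinite dictionaries, and the actual geoscientific operators of the LIPMPs are all omitted.

```python
# Greedy reduction of J(f) = ||y - F f||^2 + lam ||f||^2: in each iteration
# pick the dictionary atom d and coefficient a that decrease J the most.
import numpy as np

rng = np.random.default_rng(0)
m, n, n_atoms = 50, 200, 300
F = rng.standard_normal((m, n)) / np.sqrt(n)      # toy discretized forward operator
D = rng.standard_normal((n, n_atoms))             # columns = candidate trial functions
D /= np.linalg.norm(D, axis=0)
y = F @ (D[:, 0] + 0.5 * D[:, 1]) + 0.01 * rng.standard_normal(m)
lam = 1e-2

FD = F @ D                                        # images of all atoms, precomputed
f = np.zeros(n)
for k in range(20):
    r = y - F @ f
    # optimal coefficient per atom d:
    # a* = (<r, Fd> - lam <f, d>) / (||Fd||^2 + lam ||d||^2)
    num = FD.T @ r - lam * (D.T @ f)
    den = (FD**2).sum(axis=0) + lam * (D**2).sum(axis=0)
    a = num / den
    j = np.argmax(num * a)                        # num*a = decrease of J per atom
    f = f + a[j] * D[:, j]
```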


Non-unique Inversions in Earth Sciences - an Underestimated Pitfall?

Volker Michel

University of Siegen, Germany

Earth exploration is in many cases connected to inverse problems, since often regions of interest cannot be accessed sufficiently. This is the case for the recovery of structures in the Earth's interior. However, it is also present in the investigation of processes at the Earth's surface, e.g. if a sufficient global or regional coverage is required or if remote areas are of interest.

Many of these problems are associated with an instability of the inverse problem, which is why a variety of regularization methods for their stabilization has been developed. However, a notable number of these problems are also ill-posed because of a non-unique solution. Phantom anomalies and other artefacts are possible consequences. In some cases, the mathematical structure of the underlying null spaces is entirely understood (e.g. for a certain class of Fredholm integral equations of the first kind). In other cases, such a theory is still missing. Nevertheless, even for mathematically well-described cases, numerical methods often ignore what is visible and what is invisible in the available data.

The purpose of this talk is to create some more sensitivity regarding the challenges of inverse problems with non-unique solutions.

[1] S. Leweke, V. Michel, R. Telschow. On the non-uniqueness of gravitational and magnetic field data inversion (survey article), in: Handbook of Mathematical Geodesy (W. Freeden, M.Z. Nashed, eds.), Birkhäuser, Basel, 883-919, 2018.

[2] V. Michel. Geomathematics - Modelling and Solving Mathematical Problems in Geodesy and Geophysics. Cambridge University Press, Cambridge, 2022.

[3] V. Michel, A.S. Fokas. A unified approach to various techniques for the non-uniqueness of the inverse gravimetric problem and wavelet-based methods, Inverse Problems 24: 25pp, 2008.

[4] V. Michel, S. Orzlowski. On the null space of a class of Fredholm integral equations of the first kind, Journal of Inverse and Ill-Posed Problems 24: 687-710, 2016.
 
4:00pm - 6:00pm MS44 2: Modelling in Earth and planetary sciences by data inversion at various scales
Location: VG2.104
Session Chair: Christian Gerhards
Session Chair: Volker Michel
Session Chair: Frederik J Simons
 

Efficient Parameter Estimation of Sampled Random Fields

Frederik J Simons1, Arthur P. Guillaumin2, Adam M. Sykulski3, Sofia C. Olhede4

1Princeton University, United States of America; 2Queen Mary, University of London, UK; 3Imperial College, London, UK; 4École Polytechnique Fédérale de Lausanne, Switzerland

Describing and classifying the statistical structure of topography and bathymetry is of much interest across the geophysical sciences. Oceanographers are interested in the roughness of seafloor bathymetry as a parameter that can be linked to internal-wave generation and mixing of ocean currents. Tectonicists are searching for ways to link the shape and fracturing of the ocean floor to build detailed models of the evolution of the ocean basins in a plate-tectonic context. Geomorphologists are building time-dependent models of the surface that benefit from sparsely parameterized representations whose evolution can be described by differential equations. Geophysicists seek access to parameterized forms for the spectral shape of topographic or bathymetric loading at various (sub)surface interfaces in order to use the joint structure of topography and gravity for inversions for the effective elastic thickness of the lithosphere. A unified geostatistical framework involves the Matérn process, a theoretically well-justified parameterized form for the spectral-domain covariance of Gaussian processes. We provide a computationally and statistically efficient method for estimating the parameters of a stochastic covariance model observed on a regular spatial grid in any number of dimensions. Our proposed method makes important corrections to the well-known Whittle likelihood to account for large sources of bias caused by boundary effects and aliasing. We generalise the approach to flexibly allow for significant volumes of missing data, including those with lower-dimensional substructure, and for irregular sampling boundaries. We provide detailed implementation guidelines which maintain the computational scalability of Fourier and Whittle-based methods for large data sets.
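
For readers unfamiliar with Whittle-type estimation, the one-dimensional sketch below conveys the basic idea: fit the parameters of a Matérn-type spectral density by minimizing the Whittle negative log-likelihood of the periodogram. It deliberately omits the debiasing, aliasing, and missing-data corrections that are the substance of the talk; the spectral form and all parameter values are illustrative assumptions.

```python
# Whittle estimation in 1-D: match the periodogram to a parameterized
# Matérn-type spectral density.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 512
freqs = np.fft.rfftfreq(n)                       # cycles per sample

def matern_spec(f, sigma2, rho, nu):
    # Matérn-type spectral density (1-D, unnormalized)
    return sigma2 * (1.0 + (2 * np.pi * rho * f) ** 2) ** (-(nu + 0.5))

# synthesize data with a known spectrum via spectral shaping (approximate)
S_true = matern_spec(freqs, 1.0, 10.0, 1.0)
noise = rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size)
x = np.fft.irfft(np.sqrt(S_true * n / 2) * noise, n)

I = np.abs(np.fft.rfft(x)) ** 2 / n              # periodogram

def neg_whittle(theta):
    S = matern_spec(freqs[1:], *np.exp(theta))   # log-parameters keep values positive
    return np.sum(np.log(S) + I[1:] / S)         # Whittle negative log-likelihood

fit = minimize(neg_whittle, np.log([0.5, 5.0, 0.5]), method="Nelder-Mead")
sigma2_hat, rho_hat, nu_hat = np.exp(fit.x)
```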



Co-estimation of core and lithospheric signals in satellite magnetic data

Mikkel Otzen, Chris Finlay

Technical University of Denmark, Denmark

Satellite observations of the geomagnetic field contain signals originating both from electrical currents in the core and from magnetized rocks in the lithosphere. At short wavelengths the lithospheric signal dominates, obscuring the signal from the core. Here we present details of a method to co-estimate separate models for the core and lithospheric fields, which are allowed to overlap in spherical harmonic degree, making use of prior information regarding the sources. Using a maximum entropy method, we estimate a time-dependent model of the core field together with a static model of the lithospheric field that satisfy the constraints provided by satellite observations as well as statistical prior information, but are otherwise maximally non-committal with regard to the distribution of the radial magnetic field at the source surfaces. Tests based on synthetic data are encouraging, demonstrating that it is possible to retrieve parts of the core field beyond degree 13 and parts of the lithospheric field below degree 13. Results will be presented from our new model of the time-dependent core surface field up to spherical harmonic degree 30, and implications for our understanding of the core dynamo will be discussed.



Transdimensional joint inversion of gravity and surface wave phase velocities

Wolfgang Szwillus

Kiel University, Germany

A fundamental choice for any geophysical inversion is the parametrization of the subsurface. Voxels and coefficients of basis functions (e.g., spherical harmonics) are often a natural choice, especially since they can simplify forward calculations. An alternative approach is to use a finite collection of discrete anomalies, which leads to transdimensional (TD) techniques when considered through a Bayesian lens. The most popular form of TD inversion uses a variable number of Voronoi cells as the spatial representation. The TD approach addresses the issues of non-uniqueness and lack of resolution in a special way: instead of smoothing or damping the solutions, the spatial structure of the model is controlled by weighing the number of elements against the achieved data fit. This gives it an intrinsically adaptive behaviour, useful for heterogeneous data coverage. Furthermore, geophysical sensitivity often changes with depth, which TD approaches can also adapt to. In a joint inversion context for several properties (like seismic velocities and densities), spatial coupling between the different sought parameters is automatically guaranteed.

In this contribution I will present some examples of using TD inversions on global gravity and surface-wave data to simultaneously determine the velocity and density structure of the Earth's mantle.
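
A toy one-dimensional version of the transdimensional machinery can be written compactly: a variable number of Voronoi nuclei, one value per cell, explored with birth/death/perturb moves. The acceptance rule below uses a crude complexity penalty in place of the full reversible-jump proposal ratios and Jacobian terms, so it should be read as a hedged illustration of the adaptive parametrization, not as the method used in the talk.

```python
# Toy 1-D transdimensional sampler over Voronoi-cell models.
import numpy as np

rng = np.random.default_rng(5)
xg = np.linspace(0, 1, 200)
data = np.where(xg < 0.4, 1.0, -0.5) + 0.1 * rng.standard_normal(xg.size)

def predict(nuclei, vals):
    cell = np.abs(xg[:, None] - nuclei[None, :]).argmin(axis=1)
    return vals[cell]

def log_post(nuclei, vals):
    misfit = np.sum((predict(nuclei, vals) - data) ** 2) / (2 * 0.1**2)
    return -misfit - 2.0 * len(nuclei)           # crude penalty on cell count

nuclei, vals = np.array([0.5]), np.array([0.0])
lp = log_post(nuclei, vals)
for it in range(5000):
    move = rng.integers(3)
    if move == 0 and len(nuclei) < 30:           # birth of a new cell
        nn = np.append(nuclei, rng.uniform()); vv = np.append(vals, rng.normal())
    elif move == 1 and len(nuclei) > 1:          # death of a random cell
        k = rng.integers(len(nuclei)); nn = np.delete(nuclei, k); vv = np.delete(vals, k)
    else:                                        # perturb one cell value
        vv = vals.copy(); vv[rng.integers(len(vals))] += 0.1 * rng.normal(); nn = nuclei
    lp_new = log_post(nn, vv)
    if np.log(rng.uniform()) < lp_new - lp:      # Metropolis acceptance
        nuclei, vals, lp = nn, vv, lp_new
```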



The inverse problem of micromagnetic tomography in rock and paleomagnetism

Karl Fabian

Norwegian University of Science and Technology, Norway

The intrinsic non-uniqueness of potential-field inversion of surface scanning data can be circumvented by solving for the potential field of known individual source regions. A uniqueness theorem characterizes the mathematical background of the corresponding inversion problem and determines when a potential-field measurement on a surface uniquely defines the magnetic potentials of the individual source regions. For scanning magnetometers in rock magnetism, this result implies that the dipole magnetization vectors of many individual magnetic particles can be reconstructed from surface scans of the magnetic field. It is shown that a finite sensor size still retains this conceptual uniqueness. The technique of micromagnetic tomography (MMT) combines X-ray micro-computed tomography and scanning magnetometry to invert for the magnetic potential of individual magnetic grains within natural and synthetic samples. This provides a new pathway to study the remanent magnetization that carries information about the ancient geomagnetic field and is the basis of all paleomagnetic studies. MMT infers the magnetic potential of individual grains by numerical inversion of surface magnetic measurements using spherical harmonic expansions. Because the full magnetic potential of the individual particles is in principle uniquely determined by MMT, not only the dipole but also more complex, higher-order multipole moments can be recovered. Even though a full reconstruction of complex magnetization structures inside the source minerals is mathematically impossible, these additional constraints from far-field multipole terms can substantially reduce the number of possible micromagnetic energy minima. For complex particles with many micromagnetic energy minima, it is possible to include the far-field constraints in the micromagnetic minimization algorithm.
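
The linear-algebraic core of the dipole-level reconstruction can be illustrated in a few lines: once the grain positions are known (from micro-CT), the vertical field component on the scan plane is linear in the dipole moments, and the moments follow from a single least-squares solve. Geometry, units, and noise level below are illustrative assumptions, not the MMT setup.

```python
# Least-squares recovery of dipole moments from a surface B_z scan, with
# source positions assumed known.
import numpy as np

rng = np.random.default_rng(6)
n_src, n_obs = 5, 400
src = rng.uniform([-1, -1, -1], [1, 1, -0.5], size=(n_src, 3))  # grains below z = 0
obs = np.column_stack([rng.uniform(-2, 2, n_obs), rng.uniform(-2, 2, n_obs),
                       np.zeros(n_obs)])                         # scan plane z = 0
m_true = rng.standard_normal((n_src, 3))

def bz_dipole_matrix(obs, src):
    """Rows map the 3 moment components of each source to B_z at each sensor."""
    G = np.zeros((len(obs), 3 * len(src)))
    for j, s in enumerate(src):
        r = obs - s
        rn = np.linalg.norm(r, axis=1)
        # B_z of dipole m at offset r: 3 r_z (m . r)/|r|^5 - m_z/|r|^3  (units dropped)
        G[:, 3*j:3*j+3] = 3 * r[:, 2:3] * r / rn[:, None] ** 5
        G[:, 3*j + 2] -= 1.0 / rn**3
    return G

G = bz_dipole_matrix(obs, src)
bz = G @ m_true.ravel() + 1e-3 * rng.standard_normal(n_obs)
m_hat = np.linalg.lstsq(G, bz, rcond=None)[0].reshape(n_src, 3)
```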
 
Date: Tuesday, 05/Sept/2023
1:30pm - 3:30pm MS58 1: Shape Optimization and Inverse Problems
Location: VG2.104
Session Chair: Lekbir Afraites
Session Chair: Antoine Laurain
Session Chair: Julius Fergy Tiongson Rabago
 

Isogeometric Shape Optimization of Periodic Structures in Three Dimensions

Helmut Harbrecht1, Michael Multerer2, Remo von Rickenbach1

1Universität Basel, Switzerland; 2Università della Svizzera italiana, Switzerland

The optimal design of medical and dental implants, or of lightweight structures in aeronautics, can be modelled by a periodic structure with an empty, but a priori unknown, inclusion. Homogenisation of this periodic scaffold structure, i.e., a material containing periodically arranged, identical copies of a cavity, leads to a macroscopic equation involving an effective material tensor $\mathbf{A}_0(\Omega) \in \mathbb{R}^{d \times d}_{\rm sym}$.

This effective tensor is determined by a microscopic problem, defined on the $d$-dimensional, periodic unit cell $Y := [- \frac{1}{2}, \frac{1}{2} ]^d$, containing the void $\Omega \subset Y$. The solutions of the respective cell problems \[ \begin{cases} \Delta w_i = 0 &\text{in } Y \setminus \overline{\Omega}, \\ \partial_{\boldsymbol{n}} w_i = - \langle \boldsymbol{n}, \, \boldsymbol{e}_i \rangle &\text{on } \partial \Omega, \end{cases} \qquad i = 1, \ldots, d, \] define the coefficients of the effective tensor by \[ a_{i, j}(\Omega) = \int_{Y \setminus \overline{\Omega}} \big\langle \boldsymbol{e}_i + \nabla w_i, \ \boldsymbol{e}_j + \nabla w_j \big\rangle \operatorname{d}\!\boldsymbol{y}. \] Therefore, the effective material tensor on the macroscopic scale is given by the solution of a problem on the microscopic scale.

Considering a sought material tensor $\mathbf{B} \in \mathbb{R}^{d \times d}_{\rm sym}$, which expresses desired material properties, we may ask the following question: Can we find a cavity $\Omega$ such that the effective tensor is as close to $\mathbf{B}$ as possible? In other terms, we want to minimise the tracking type functional \[ J(\Omega) := \frac{1}{2} \big\| \mathbf{A}_0(\Omega) - \mathbf{B} \big\|_F^2. \]

In [2], formulae for the shape gradient of the functional $J(\Omega)$ have been derived and numerical examples in two dimensions were presented, whereas in [3], integral equations were used to obtain numerical results in three dimensions. These examples include simply connected cavities and also more complex cavities of genus greater than zero. The calculations were performed with the isogeometric C++ library BEMBEL [1].

[1] J. Dölz, H. Harbrecht, S. Kurz, M. Multerer, S. Schöps, F. Wolf. Bembel: The fast isogeometric boundary element C++ library for Laplace, Helmholtz, and electric wave equation, SoftwareX, 11: 100476, 2020.

[2] M. Dambrine, H. Harbrecht. Shape optimization for composite materials and scaffold structures, Multiscale Modeling & Simulation, 18: 1136--1152, 2020.

[3] H. Harbrecht, M. Multerer, R. von Rickenbach. Isogeometric shape optimization of periodic structures in three dimensions, Computer Methods in Applied Mechanics and Engineering, 391: 114552, 2022.


Stokes Traction Method: A Numerical Approach to Volume Constrained Shape Optimization Problems

John Sebastian Hoseña Simon

Institute of Mathematics, Czech Academy of Sciences, Czech Republic

Numerically solving shape optimization problems usually takes advantage of the Zolesio-Hadamard form, which writes the shape derivative of the objective function as a boundary integral of the product of the shape gradient and the deformation field. Intuitively, one can choose the deformation field to be the negative of the shape gradient, evaluated on the free boundary, as a gradient descent direction. However, such a choice may cause instabilities and oscillations on the free boundary. This issue motivates extending the deformation field to the computational domain in a smooth manner; this method is known as the traction method [1]. In this talk, solenoidal extensions for solving shape optimization problems with volume constraints will be considered. In particular, the deformation field will be extended to the computational domain by solving incompressible Stokes equations with Robin data defined as the negative of the shape gradient and a viscosity constant assumed to be sufficiently small. We apply this method to a vorticity maximization problem for the Navier--Stokes equations and compare it with the augmented Lagrangian method used by C. Dapogny et al. [2].

[1] H. Azegami, K. Takeuchi. A smoothing method for shape optimization: traction method using the Robin condition, Int J Comput Methods. 3(1): 21--33, 2006.

[2] C. Dapogny, P. Frey, F. Omnès, Y. Privat. Geometrical shape optimization in fluid mechanics using FreeFem++, Struct Multidisciplinary Opt 58(6):2761–2788, 2018.


Non-conventional shape optimization methods for solving shape inverse problems

Julius Fergy Tiongson Rabago1, Lekbir Afraites2, Aissam Hadri3

1Kanazawa University, Japan; 2Université Sultan Moulay Slimane, Morocco; 3Université Ibn Zohr, Morocco

We propose non-conventional shape optimization approaches for the resolution of shape inverse problems inspired by non-destructive testing and evaluation. Our main objective is to improve the detection of the concave parts or regions of the unknown inclusion/obstacle/boundary through two different strategies in shape optimization settings. First, we will introduce the so-called alternating direction method of multipliers (ADMM) in a shape optimization framework to solve a boundary inverse problem for the Laplacian with Dirichlet condition using a single boundary measurement. Second, we will consider a similar problem, but with the Robin condition, and demonstrate how we can effectively detect a void with concavities using several pairs of Cauchy data. We will illustrate the effectiveness of the proposed schemes by testing them on shape detection problems with pronounced concavities and under noisy data. Examples are given in two and three dimensions.
 
4:00pm - 6:00pm MS58 2: Shape Optimization and Inverse Problems
Location: VG2.104
Session Chair: Lekbir Afraites
Session Chair: Antoine Laurain
Session Chair: Julius Fergy Tiongson Rabago
 

Minimization of blood damage induced by non-newtonian fluid flows in moving domains

Valentin Calisti, Sarka Necasova

Institute of Mathematics of the Czech Academy of Sciences, Czech Republic

The use of blood pumps may be necessary for people with heart problems, but there are potential risks of complications associated with this type of device, in particular hemolysis (the destruction of red blood cells). Many engineering works address the parametric optimization of these pumps to minimize hemolysis. In order to generalize this approach, in the present work we study the shape continuity of a coupled system of PDEs modeling blood flow and hemolysis evolution in moving domains, governed respectively by the non-Newtonian Navier-Stokes equations and by transport equations.

First, the shape continuity of the blood fluid velocity $u$ is shown. This development, which extends the one carried out in [1], is based on the recent progress made in [2]. Indeed, the non-Newtonian stress for blood flows can be described by the following rheological law: $$ S(Du) := (1 + |Du|)^{q-2} Du, $$ where $S(Du)$ is the stress tensor, the symmetric gradient is given by $Du := \frac{1}{2} (\nabla u + \nabla u^{\top})$, and where $q < 2$. Such fluids are called shear-thinning fluids. In [2], an existence result is provided for the case $q > 6/5$ in moving domains, by means of the study of generalized Bochner spaces and the Lipschitz truncation method. These techniques are extended here to the present framework of a sequence of converging moving domains.

After calculating the blood flow solutions, the velocity and stress fields of the fluid are used as the coefficients of the transport equation governing the evolution of the hemolysis rate $h$: $$ \partial_t h + u \cdot \nabla h = | S(Du) |^{\gamma} (1 - h), $$ where the right-hand side plays the role of a source term with saturation, for some $\gamma > 1$. From this, the shape continuity of the hemolysis rate is also proved.
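
As a minimal numerical illustration of this transport equation (not of the analysis in the talk), the sketch below advances $h$ with a first-order upwind scheme for a prescribed toy velocity field; in the actual model, the velocity and stress come from the non-Newtonian fluid solution.

```python
# 1-D upwind scheme for  dh/dt + u dh/dx = |S|^gamma (1 - h)  with h(0) = 0
# inflow; u and S are toy stand-ins for the fluid solution.
import numpy as np

nx, dx = 200, 1.0 / 200
dt, nt = 0.002, 500                      # CFL: dt <= dx / max|u|
x = (np.arange(nx) + 0.5) * dx
u = 0.5 + 0.5 * np.sin(np.pi * x)        # toy positive velocity field
gamma = 1.5
S = np.abs(np.gradient(u, dx))           # stand-in for |S(Du)|
src = S**gamma

h = np.zeros(nx)                         # no damaged cells initially
for n in range(nt):
    dhdx = np.empty(nx)
    dhdx[1:] = (h[1:] - h[:-1]) / dx     # upwind differences (u > 0 everywhere)
    dhdx[0] = h[0] / dx                  # inflow boundary: h = 0 at x = 0
    h = h + dt * (-u * dhdx + src * (1.0 - h))
```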

Finally, these results allow us to show the existence of a minimum for a class of shape optimization problems based on the minimization of the hemolysis rate in the framework of moving domains. The lack of uniqueness of solutions for shear-thinning fluids prevents the study of shape sensitivity from being pursued, so that an extension of this work towards computing a shape gradient must consider a regularization of the present model.

[1] J. Sokolowski, J. Stebel. Shape optimization for non-Newtonian fluids in time-dependent domains, Evol. Equ. Control Theory, 3(2):331–348, 2014.

[2] P. Nagele, M. Ruzicka. Generalized Newtonian fluids in moving domains, J. Differential Equations, 264(2):835–866, 2018.


On the new coupled complex boundary method for shape inverse problem with the Robin homogeneous condition

Lekbir Afraites1, Julius Fergy T. Rabago2

1Sultan Moulay Slimane University, Béni Mellal, Morocco; 2Kanazawa University, Kanazawa, Japan

We consider the problem of identifying an unknown portion $\Gamma$ of the boundary, carrying a Robin condition, of a $d$-dimensional $(d = 2, 3)$ body $\Omega$ from a pair of Cauchy data $(f, g)$ on the accessible part $\Sigma$ of the boundary of a harmonic function $u$. For a fixed constant impedance $\alpha$, it is known [1] that a single measurement of $(f, g)$ on $\Sigma$ can give rise to infinitely many different domains. Nevertheless, a well-known approach to numerically solve the problem and obtain a fair detection of the unknown boundary is to apply shape optimization methods. First, the inverse problem is recast into three different shape optimization formulations, and the shape derivative of the cost function associated with each formulation is obtained [3]. Second, in this investigation, a new application of the so-called coupled complex boundary method, first put forward by Cheng et al. [2] to deal with inverse source problems, is presented to resolve the problem. The over-specified problem is transformed into a complex boundary value problem with a complex Robin boundary condition coupling the Cauchy pair on the accessible exterior boundary. Then, the cost function constructed from the imaginary part of the solution in the whole domain is minimized in order to identify the interior unknown boundary. The shape derivative of the complex state as well as the shape gradient of the cost functional with respect to the domain are computed. In addition, the shape Hessian at the critical point is characterized to study the ill-posedness of the problem. Specifically, the Riesz operator corresponding to the quadratic shape Hessian is shown to be compact. Also, with the shape gradient information, we devise an iterative algorithm based on a Sobolev gradient to solve the minimization problem. The numerical realization of the scheme is carried out via the finite element method and is tested on various concrete examples of the problem, in both two and three spatial dimensions.

[1] F. Cakoni, R. Kress. Integral equations for inverse problems in corrosion detection from partial Cauchy data, Inverse Prob. Imaging, 1:229–245, 2007.

[2] X. L. Cheng, R. F. Gong, W. Han, X. Zheng. A novel coupled complex boundary method for solving inverse source problems, Inverse Problems, 30, 055002, 2014.

[3] L. Afraites, J. F. T. Rabago. Shape optimization methods for detecting an unknown boundary with the Robin condition by a single measurement, Discrete Contin. Dyn. Syst. - S, 2022. [10.3934/dcdss.2022196]


Shape optimization approach for sharp-interface reconstructions in time-domain full waveform inversion

Antoine Laurain

University of Duisburg-Essen, Germany

Velocity models presenting sharp interfaces are frequently encountered in seismic imaging, for instance for imaging the subsurface of the Earth in the presence of salt bodies. In order to mitigate the oversmoothing of classical regularization strategies such as the Tikhonov regularization, we propose a shape optimization approach for sharp-interface reconstructions in time-domain full waveform inversion. Using regularity results for the wave equation with discontinuous coefficients, we show the shape differentiability of the cost functional measuring the misfit between observed and predicted data, for shapes with low regularity. We propose a numerical approach based on the obtained distributed shape derivative and present numerical tests supporting our methodology.
 
Date: Wednesday, 06/Sept/2023
9:00am - 11:00am CT01: Contributed talks
Location: VG2.104
Session Chair: Philipp Ronald Mickan
 

Some Inverse Problems for Parabolic Equations

Mikhail Klibanov

University of North Carolina at Charlotte, United States of America

Two types of new results of the presenter will be discussed:

1. Hölder and Lipschitz stability estimates for a coefficient inverse problem and an inverse source problem with final overdetermination [1]. The solution of the parabolic equation is known at $t=0$ and $t=T$. Both Dirichlet and Neumann boundary conditions are known either on part of the boundary or on the entire boundary. A new Carleman estimate for the parabolic operator is the key here. Unlike standard Carleman estimates, in this one the Carleman weight function is independent of $t$. The Hölder stability estimate holds in the case of incomplete boundary data and the Lipschitz stability estimate in the case of complete boundary data. Both the results and the methodology are significantly different from previous ones.

2. Stability estimates and uniqueness theorems for some inverse problems for the Mean Field Games system [2]. These results are also new. The Mean Field Games system is a system of two parabolic equations, originally proposed by J.-M. Lasry and P.-L. Lions in 2006-2007, which has become quite popular nowadays due to a number of very exciting applications. The main challenge here is that the time $t$ runs in two opposite directions in these equations. Therefore, the Volterra-like property of conventional systems of parabolic PDEs does not hold here.

[1] M. V. Klibanov. Stability estimates for some parabolic inverse problems with the final overdetermination via a new Carleman estimate, arXiv:2301.09735, 2023.

[2] M. V. Klibanov, Yu. V. Averboukh. Stability and uniqueness of two inverse problems for the Mean Field Games system, in preparation.


Inverse problems for hyperbolic conservation laws

Duc-Lam Duong

LUT University, Finland

Hyperbolic conservation laws are central to the theory of PDEs. One of their typical features is the development of shock waves. This poses many challenges to the mathematical theory of both forward and inverse problems. It is well known that two different initial data may evolve into the same solution. In this talk, we will present a number of ways to overcome this difficulty, with emphasis on the Bayesian approach, and survey some recent results.


X-ray holographic imaging using intensity correlations

Milad Karimi, Thorsten Hohage

Georg-August-Universität Göttingen, Germany

Holographic coherent X-ray imaging enables nanoscale imaging of biological cells and tissue, rendering both phase and absorption contrast, i.e. the real and imaginary parts of the refractive index. A main challenge of this imaging technique is radiation damage. We present a different modality of this technique using a partially incoherent incident beam and time-resolved intensity measurements based on new measurement technologies. This enables the acquisition of intensity correlations in addition to the commonly used expectations of intensities. In this talk we explore the information content of holographic intensity correlation data, showing analytically that in the linearized model both phase and absorption contrast are uniquely determined by the intensity correlation data. The uniqueness theorem is derived using multi-dimensional Kramers-Kronig relations. We also deduce a uniqueness theorem for ghost holography as an unconventional X-ray imaging scheme.

For regularized reconstruction it is important to take into account the statistical distribution of the correlation data. The measured intensity data are described by so-called Cox processes, roughly speaking Poisson processes with random intensity. For medium-size data sets, we use adaptations of the iteratively regularized Gauss-Newton method and the FISTA method as reconstruction methods. Our numerical results, even in the full nonlinear model, confirm that both phase and absorption contrast can be jointly reconstructed from intensity correlations alone, without the use of average intensities. Although these results are encouraging concerning the information content of the new intensity correlation data, the increased dimensionality of these data causes severe computational challenges.
 
Date: Thursday, 07/Sept/2023
1:30pm - 3:30pm CT04: Contributed talks
Location: VG2.104
Session Chair: Christian Aarset
 

Weighted sparsity regularization for estimating the source term in the potential equation

Ole Løseth Elvetun, Bjørn Fredrik Nielsen

Norwegian University of Life Sciences, Norway

We investigate the possibility of using boundary measurements to recover a sparse source term $f(x)$ in the potential equation. This work is motivated by the observation that standard methods typically suggest that internal sinks and sources are located close to the boundary and hence fail to produce adequate results. That is, the large null space of the associated forward operator is not “correctly handled” by classical regularization techniques.

Provided that weighted sparsity regularization is used, we derive criteria which assure that several sinks ($f(x)<0$) and sources ($f(x)>0$) can be identified. Furthermore, we present two cases for which these criteria always are fulfilled: a) well-separated sources and sinks, and b) many sources or sinks located at the boundary plus one interior source/sink. Our approach is such that the linearity of the associated forward operator is preserved in the discrete formulation. The theory is therefore conveniently developed in terms of Euclidean spaces, and it can be applied to a wide range of problems. In particular, it can be applied to both isotropic and anisotropic cases. We present a series of numerical experiments.
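
To fix ideas, a minimal sketch of the weighted sparsity approach in the discrete, Euclidean setting described above could look as follows: ISTA with a weighted soft-thresholding step for $\min_f \frac{1}{2}\|Af-b\|^2 + \lambda \sum_i w_i |f_i|$. The operator, the column-norm weights, and the data are toy assumptions standing in for the paper's discretized potential-equation problem.

```python
# ISTA with weighted soft-thresholding for sparse source recovery.
import numpy as np

rng = np.random.default_rng(7)
m, n = 60, 200
A = rng.standard_normal((m, n)) / np.sqrt(m)
f_true = np.zeros(n); f_true[[30, 110]] = [1.5, -1.0]   # one source, one sink
b = A @ f_true + 0.01 * rng.standard_normal(m)

w = np.linalg.norm(A, axis=0)            # e.g. column-norm weights (an assumption)
lam = 0.05
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth part

f = np.zeros(n)
for k in range(500):
    g = A.T @ (A @ f - b)                # gradient of (1/2)||Af - b||^2
    z = f - g / L
    thr = lam * w / L
    f = np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)   # weighted soft-threshold
```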

This work extends the results presented at the "Symposium on Inverse Problems" in Potsdam in September 2022: The theory for the single source case is generalized to the several sources and sinks situation, we do not employ any box constraints and the analysis is carried out for the potential equation instead of focusing on the screened Poisson equation or the Helmholtz equation.



Lipschitz stability for inverse source problems of waves on Lorentzian manifolds

Hiroshi Takase

Kyushu University, Japan

We consider an inverse problem of the wave equation on a Lorentzian manifold, a type of semi-Riemannian manifold. This kind of equation is obtained by linearizing the Einstein equation and is known as the equation satisfied by gravitational waves. In this talk, we prove a global Lipschitz stability for the inverse source problem of determining a source term in the equation. Sobolev spaces on manifolds, semigeodesic coordinates, and Carleman estimates, which are important tools in geometric analysis, will also be discussed.


Logarithmic stability and instability estimates for random inverse source problems

Philipp Ronald Mickan1, Thorsten Hohage1,2

1Georg-August-Universität Göttingen, Germany; 2Max Planck Institute for Solar System Research, Göttingen, Germany

We study the inverse source problem of determining the strength of a random acoustic source from correlation data. More precisely, the data of the inverse problem are correlations of random time-harmonic acoustic waves measured on a surface surrounding a region of random, uncorrelated sources. Such a model is used in experimental aeroacoustics to determine the strength of sound sources [1]. Uniqueness has been previously established [1,2]. In this talk we report on logarithmic stability results and logarithmic convergence rates for Tikhonov regularisation applied to the inverse source problem, obtained by establishing a variational source condition under a Sobolev-type smoothness assumption. We also present logarithmic instability estimates using an entropy argument. Furthermore, we will show numerical experiments supporting our theoretical results.

[1] T. Hohage, H.-G. Raumer, C. Spehr. Uniqueness of an inverse source problem in experimental aeroacoustics. Inverse Problems, 36(7):075012, 2020.

[2] A. J. Devaney. The inverse problem for random sources. Journal of Mathematical Physics, 20(8):1687–1691, 1979.


Combined EEG/MEG source analysis for reconstructing the epileptogenic zone in focal epilepsy

Carsten H. Wolters1, Frank Neugebauer1, Sampsa Pursiainen2, Martin Burger3, Jörg Wellmer4, Stefan Rampp5

1Institute for Biomagnetism and Biosignalanalysis, University of Münster, Germany; 2Tampere University, Finland; 3DESY and University of Hamburg, Germany; 4Ruhr-Epileptology, Dpt. Of Neurology, University Hospital Knappschaftskrankenhaus Bochum, Germany; 5Department of Neurosurgery, University Hospital Erlangen, Germany

MEG and EEG source analysis is frequently used in the presurgical evaluation of pharmacoresistant epilepsy patients. The localization quality depends, among other aspects, on the selected inverse and forward approaches and their respective parameter choices. In my talk, I will present new forward and inverse approaches and their application to the identification of the epileptogenic zone in focal epilepsy. The forward approaches are based on the finite element method (FEM). The inverse approaches include beamforming, hierarchical Bayesian modeling (HBM) and standard dipole scanning techniques. I will discuss advantages and disadvantages of these approaches and compare their performance in a retrospective evaluation study with patients with focal epilepsy.

[1] Neugebauer, F., Antonakakis, M., Unnwongse, K., Parpaley, Y., Wellmer, J., Rampp, S., Wolters, C.H., Validating EEG, MEG and Combined MEG and EEG Beamforming for an Estimation of the Epileptogenic Zone in Focal Cortical Dysplasia. Brain Sci. 12(1):114, 2022. https://doi.org/10.3390/brainsci12010114.

[2] Aydin, Ü., Rampp, S., Wollbrink, A., Kugel, H., Cho, J.-H., Knösche, T.R.,Grova, C., Wellmer, J., Wolters, C.H., Zoomed MRI guided by combined EEG/MEG source analysis: A multimodal approach for optimizing presurgical epilepsy work-up and its application in a multi-focal epilepsy patient case study, Brain Topography, 30(4):417-433, 2017. https://doi.org/10.1007/s10548-017-0568-9.
 
4:00pm - 6:00pm MS48: Robustness and reliability of Deep Learning for noisy medical imaging
Location: VG2.104
Session Chair: Alessandro Benfenati
Session Chair: Elena Morotti
 

The graphLa+ method: a dynamic regularization based on the graph Laplacian

Davide Bianchi

Harbin Institute of Technology (Shenzhen), People's Republic of China

We investigate a Tikhonov method that embeds a graph Laplacian operator in the penalty term (graphLa+). The novelty lies in building the graph Laplacian from a first approximation of the solution obtained by any other reconstruction method. Consequently, the penalty term becomes dynamic, depending on and adapting to the observed data and noise. We demonstrate that graphLa+ is a regularization method and rigorously establish both its convergence and stability properties. Moreover, we present selected numerical experiments in 2D computerized tomography, where we combine the graphLa+ method with several reconstructors: Filtered Back Projection (graphLa+FBP), standard Tikhonov (graphLa+Tik), Total Variation (graphLa+TV) and a trained deep neural network (graphLa+Net). The increase in quality of the approximated solutions granted by the graphLa+ approach is remarkable for each given method. In particular, graphLa+Net outperforms all the other methods, providing a robust and stable implementation of deep neural networks for applications involving inverse problems.
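
A conceptual sketch of the graphLa+ pipeline may be helpful; all choices below, from the 1-D setting to the chain graph and Gaussian edge weights, are our illustrative assumptions rather than the paper's construction: (i) compute any first reconstruction $x_0$, (ii) build a graph Laplacian whose edge weights adapt to $x_0$, (iii) solve the resulting Tikhonov problem.

```python
# graphLa+-style pipeline: dynamic graph-Laplacian Tikhonov regularization.
import numpy as np

rng = np.random.default_rng(8)
n = 120
x_true = np.where(np.arange(n) < 60, 1.0, 0.0)          # piecewise-constant signal
A = rng.standard_normal((80, n)) / np.sqrt(n)
b = A @ x_true + 0.01 * rng.standard_normal(80)

x0 = np.linalg.lstsq(A, b, rcond=None)[0]               # any first reconstruction

# chain graph over neighbouring samples, edge weights adapted to x0
W = np.zeros((n, n))
for i in range(n - 1):
    w = np.exp(-(x0[i + 1] - x0[i]) ** 2 / 0.01)        # weak edges across jumps
    W[i, i + 1] = W[i + 1, i] = w
Lgraph = np.diag(W.sum(axis=1)) - W

mu = 1.0
# Tikhonov problem  min ||Ax - b||^2 + mu x^T L x  via its normal equations
x = np.linalg.solve(A.T @ A + mu * Lgraph, A.T @ b)
```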



Investigating the human body by light: the challenge of problem inversion

Paola Causin1, Alessandro Benfenati2

1Department of Mathematics, University of Milano, Italy; 2Department of Environmental Science and Policy, University of Milano, Italy

In the past decades, the use of Computerized Tomography (CT) has increased dramatically owing to its excellent diagnostic performance, easy accessibility, short scanning time, and cost-effectiveness. Enabling CT technologies with a reduced/null radiation dose while preserving/enhancing the diagnostic quality is a key challenge in modern medical imaging. Increased noise levels are, however, an expected downside of all these new technologies.

In this series of two successive talks we will report on our research focused on Diffuse Optical Tomography (DOT), a CT technology that uses near-infrared (NIR) light as the investigating signal [1]. Strong light scattering in biological tissues makes the DOT reconstruction problem severely ill-conditioned, so that denoising is a crucial step. In the present talk, after a brief description of the DOT modality, we will first present our results in exploring variational approaches based on partial differential equation models endowed with different regularizers to compute a stable DOT-CT reconstruction [2,3]. Then, we will discuss our recent research on the use of DL-based generative models to produce more effective soft priors which, used in combination with standard or DL-based forward problem solvers, improve spatial resolution in high-contrast zones and reduce noise in low-contrast zones, typical of medical imaging.

[1] S.R. Arridge. Optical tomography in medical imaging, Inverse problems 15(2): R41, 1999.

[2] P. Causin, M.G. Lupieri, G. Naldi, R.M. Weishaeupl. Mathematical and numerical challenges in optical screening of female breast, Int. J. Num. Meth. Biomed. Eng. 36(2): e3286, 2020.

[3] A. Benfenati, P. Causin, M.G. Lupieri, G. Naldi. Regularization techniques for inverse problem in DOT applications. In Journal of Physics: Conference Series (IOP Publishing) 1476(1): 012007, 2020.



Investigating the Human Body by Light: Neural Networks for Data-Driven and Physics-Driven Approaches

Alessandro Benfenati1, Paola Causin2

1Department of Environmental Science and Policy, Università degli Studi di Milano La Statale; 2Department of Mathematics, Università degli Studi di Milano La Statale

Diffuse Optical Tomography (DOT) is a medical imaging technique for functional monitoring of body tissues. Unlike other CT technologies (e.g. X-ray CT), DOT employs a non-ionizing light signal and thus can be used for multiple screenings [1]. DOT reconstruction in the continuous-wave (CW) modality leads to an inverse problem for the unknown distribution of the optical absorption coefficient inside the tissue, which has diagnostic relevance.

The classic approach consists in solving an optimization problem that couples a fit-to-data functional (usually the least-squares functional) with a regularization term (e.g., $l^1$, Tikhonov, Elastic Net [2]). In this talk, we report on our research adopting a deep learning approach, which exploits both data-driven and hybrid physics-driven techniques. In the first case, we employ neural networks to construct a Learned Singular Value Decomposition [3], whilst in the second case the network architecture is built upon a priori knowledge of the physical phenomena. We will present numerical results obtained on synthetic datasets which show robustness even on noisy data.
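
As a rough sketch of the data-driven direction, the snippet below replaces the hand-tuned spectral filter of an SVD-based reconstruction with coefficients fitted from training pairs, one least-squares fit per singular mode. This is a hedged toy stand-in for the Learned SVD of [3], which uses neural networks rather than per-mode scalars.

```python
# "Learned" spectral filtering: x_hat = V diag(phi) U^T y with phi fitted
# from training data instead of chosen by hand.
import numpy as np

rng = np.random.default_rng(0)
m, n = 40, 60
A = rng.standard_normal((m, n)) / np.sqrt(n)
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# training data: signals X and noisy measurements Y = A X + noise
X = rng.standard_normal((n, 200))
Y = A @ X + 0.05 * rng.standard_normal((m, 200))

C = Vt @ X          # target coefficients  <v_k, x>  per mode and sample
D = U.T @ Y         # data coefficients    <u_k, y>  per mode and sample
phi = (C * D).sum(axis=1) / (D * D).sum(axis=1)   # per-mode 1-D least squares

def reconstruct(y):
    """Apply the learned spectral filter to new data y."""
    return Vt.T @ (phi * (U.T @ y))
```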

[1] S. R. Arridge, J. C. Schotland. Optical tomography: forward and inverse problems, Inverse problems 25(12): 123010, 2009.

[2] A. Benfenati, P. Causin, M. G. Lupieri, G. Naldi. Regularization techniques for inverse problem in DOT applications, Journal of Physics: Conference Series (IOP Publishing) 1476(1): 012007, 2020.

[3] A. Benfenati, G. Bisazza, P. Causin. A Learned SVD approach for Inverse Problem Regularization in Diffuse Optical Tomography, 2021. [arXiv preprint arXiv:2111.13401]


Medical image reconstruction in realistic scenarios: what to do if the ground-truth is missing?

Davide Evangelista, Elena Morotti, Elena Loli Piccolomini

University of Bologna, Italy

Deep learning algorithms have recently emerged as the state of the art in solving inverse problems, outperforming classical variational methods in terms of both accuracy and efficiency. However, most deep learning algorithms require supervised training, which necessitates a collection of matched low-quality and ground-truth data. This poses a significant challenge in medical imaging, as obtaining such a dataset would require subjecting the patient to approximately double the amount of radiation. As a result, it is common to mathematically simulate the degradation process, which can introduce biases that degrade model performance when tested on real data. To address this issue, we propose a general self-supervised procedure for training neural networks in a setting where the ground truth is missing but the mathematical model is approximately known. We demonstrate that our proposed method produces results of comparable quality to supervised techniques while being more robust to perturbations, and we provide a formal proof of its effectiveness.
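
The following PyTorch sketch captures the self-supervised principle described above in its simplest form: the network is trained purely on consistency with the (approximately) known forward operator, never touching ground-truth images. The architecture, operator, and noise level are illustrative assumptions, not the authors' setup.

```python
# Self-supervised training without ground truth: loss = ||A x_hat - y||^2.
import torch

torch.manual_seed(0)
n_meas, n_sig = 32, 64
A = torch.randn(n_meas, n_sig) / n_sig**0.5      # assumed (approximately) known operator

model = torch.nn.Sequential(
    torch.nn.Linear(n_meas, 128), torch.nn.ReLU(), torch.nn.Linear(128, n_sig))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x_unseen = torch.randn(500, n_sig)               # stands in for never-observed ground truth
y = x_unseen @ A.T + 0.01 * torch.randn(500, n_meas)  # only measurements are available

for epoch in range(200):
    opt.zero_grad()
    x_hat = model(y)
    # measurement consistency with the known physics; no ground truth used
    loss = ((x_hat @ A.T - y) ** 2).mean()
    loss.backward()
    opt.step()
```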
 
Date: Friday, 08/Sept/2023
1:30pm - 3:30pm CT12: Contributed talks
Location: VG2.104
Session Chair: Frank Werner
 

Designing an algorithm for low-dose Poisson phase retrieval

Benedikt Diederichs1, Frank Filbir1, Patricia Römer1,2

1Helmholtz Center Munich; 2Technical University of Munich

Many experiments in the field of optical imaging are modelled as phase retrieval problems. Motivated by imaging experiments with biological specimens that need to be measured using a preferably low dose of illumination particles, we consider phase retrieval systems with low-count measurements corrupted by Poisson noise. In this talk, we discuss how to formulate a suitable optimization problem. We study reasonable loss functions adapted to the Poisson distribution and optimized for low-dose data. As a solver, we apply gradient descent algorithms with Wirtinger derivatives. For the proposed loss functions, we analyze the convergence of the respective Wirtinger flow type algorithms to stationary points. We present numerical reconstructions from phase retrieval measurements in a low-dose regime to corroborate our theoretical observations.
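
A bare-bones sketch of a Wirtinger-flow-type iteration for the Poisson negative log-likelihood may help fix notation; the Gaussian measurement model, the random initialization (a spectral initialization is more common), and the step size are illustrative assumptions rather than the algorithm analyzed in the talk.

```python
# Wirtinger-flow-type gradient descent for Poisson phase retrieval,
# y_j ~ Poisson(|<a_j, x>|^2).
import numpy as np

rng = np.random.default_rng(1)
n, m = 32, 256
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
x_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.poisson(np.abs(A @ x_true) ** 2)       # low photon counts per detector

# loss L(x) = sum_j (|a_j^* x|^2 - y_j log|a_j^* x|^2); its Wirtinger gradient
# with respect to conj(x) is  A^* ((1 - y / |Ax|^2) * Ax)
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
eps, step = 1e-9, 5e-4
for it in range(1000):
    z = A @ x
    grad = A.conj().T @ ((1.0 - y / (np.abs(z) ** 2 + eps)) * z)
    x = x - step * grad
```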


ADMM methods for Phase Retrieval and Ptychography

Albert Fannjiang

UC Davis, United States of America

We present a systematic derivation and local convergence analysis for various ADMM algorithms in phase retrieval and ptychography.

We also discuss the extension of these algorithms to blind ptychography where the probe is unknown and compare their numerical performance.
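
For concreteness, here is a schematic numpy version of one common ADMM splitting for phase retrieval, with variables $x$ and $z = Ax$ and the magnitude data entering through a projection in the $z$-update. The penalty handling, initialization, and iteration count are simplified illustrative assumptions, not the algorithms analyzed in the talk.

```python
# ADMM for phase retrieval: split as z = Ax with magnitude constraint |z| = b.
import numpy as np

rng = np.random.default_rng(2)
n, m = 32, 128
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
x_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b = np.abs(A @ x_true)              # noiseless magnitude data

Apinv = np.linalg.pinv(A)           # precomputed for the x-update
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
z = A @ x
u = np.zeros(m, dtype=complex)      # scaled dual variable

for it in range(200):
    # x-update: least-squares solve of  min_x ||Ax - (z - u)||^2
    x = Apinv @ (z - u)
    # z-update: projection of Ax + u onto the magnitude set {|z| = b}
    w = A @ x + u
    z = b * w / np.maximum(np.abs(w), 1e-12)
    # dual update
    u = u + A @ x - z
```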



Phase retrieval in the wild: In situ optimized reconstruction for X-ray in-line holography

Johannes Dora1,2, Johannes Hagemann2, Silja Flenner3, Christian Schroer2, Tobias Knopp1,4

1Hamburg University of Technology (TUHH), Germany; 2Deutsches Elektronen Synchrotron (DESY), Germany; 3Helmholtz-Zentrum Geesthacht (HEREON), Germany; 4University Medical Center Hamburg-Eppendorf (UKE), Germany

The phase problem is a well-known challenge in propagation-based phase-contrast X-ray imaging: whenever a detector measures a complex X-ray wavefield, the phase information is lost, i.e. only the magnitude of the measured wavefield remains as usable data. The resulting inverse problem is ill-posed and non-convex, and twice as many variables must be reconstructed to obtain the complex-valued image of the object under study.

In a recent development we have changed the representation of the reconstructed image [1]. The classical representation as amplitude and phase suffers from phase wrapping ambiguities. The representation as the projected refractive index of the object avoids these problems. However, this algorithm still suffers from slow convergence and convergence to local minima.

In our work, we have investigated the main causes of slow convergence and local minima for the Nesterov-accelerated projected gradient descent type of algorithm currently used in practice. We propose a framework of different techniques to address these problems and show that, by combining the proposed methods, the reconstruction result can be dramatically improved in terms of reconstruction speed and quality. We apply our proposed methods to several datasets obtained from the nano-imaging experiment at the Hereon-operated beamline P05 at DESY (Hamburg, Germany). We demonstrate that our proposed framework can cope with single-distance measurements, which is a requirement for in-situ/operando experiments, and without a compact support constraint, while maintaining robustness across a wide range of samples.

[1] F. Wittwer, J. Hagemann et al. Phase retrieval framework for direct reconstruction of the projected refractive index applied to ptychography and holography, Optica 9: 295-302, 2022.
 

 