Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session
P: Poster Session
Time:
Wednesday, 06/Sept/2023:
12:15pm - 1:30pm

Location: ZHG Foyer


Presentations

Adaptive Method for Bayesian EEG/MEG Source Localization to Support Treatment of Focal Epilepsy

Joonas Lahtinen1, Alexandra Koulouri1, Tim Erdbrügger2, Carsten H. Wolters2, Sampsa Pursiainen1

1Computing Sciences, Tampere University, Korkeakoulunkatu 3, Tampere 33072, Finland; 2Institute for Biomagnetism and Biosignalanalysis, University of Münster, Malmedyweg 15, D-48149 Münster, Germany

Non-invasive electrophysiological brain stimulation techniques such as tES and TMS can provide a potential alternative treatment for patients with drug-resistant focal epilepsy when surgical removal of the pathological tissue is not feasible. Choosing an appropriate stimulation montage is possible only if the location of the epileptogenic zone (EZ) is known with sufficient accuracy. The least burdensome option for the patient is to localize the EZ non-invasively from EEG/MEG measurements. Non-invasive EEG/MEG source localization, however, poses a challenging inverse problem whose solution can be highly sensitive to the selected model parameters [1,2].

We introduce a new standardized and adaptive Bayesian method which we show to (1) reconstruct focal sources accurately and (2) perform robustly with respect to inherent model uncertainties. Our approach follows a hierarchical posterior distribution in which the model-related free parameters are tuned automatically as described in [3,4]. As we have shown previously, this scheme yields sparse vectors representing the neural activity distribution. In particular, the solution is a pair $(x, \gamma)$ obtained via an iterative algorithm that alternately maximizes the posterior with respect to $x$ and the hyperparameter $\gamma$, applying the standardization at each step.
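The alternation can be sketched as follows. The concrete update rules, in particular the toy sparsity-promoting fixed point for $\gamma$, are illustrative assumptions for a generic hierarchical model, not the exact rules of [3,4]:

```python
import numpy as np

def alternating_map(L, y, sigma2=1e-2, n_iter=50, eps=1e-8):
    """Alternating MAP sketch for a pair (x, gamma): the x-step is a
    gamma-weighted Tikhonov solve, the gamma-step a toy fixed point
    that promotes sparsity. Illustrative only."""
    _, n = L.shape
    gamma = np.ones(n)
    for _ in range(n_iter):
        # x-step: maximize the posterior in x for fixed gamma
        A = L.T @ L / sigma2 + np.diag(1.0 / gamma)
        x = np.linalg.solve(A, L.T @ y / sigma2)
        # gamma-step: toy hyperparameter update (assumption, not [3,4])
        gamma = x**2 + eps
    return x, gamma
```

On an underdetermined toy problem with a single active source, the iteration concentrates the reconstruction on the true support, which is the qualitative behaviour the abstract describes.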

We demonstrate through simulations that our approach localizes a focal epileptic zone from synthetic interictal EEG data. These simulation results are complemented with results obtained from experimental data, comparing the source localization outcome to a reference zone designated by specialists. As comparison techniques we use Standardized Shrinking LORETA-FOCUSS (SSLOFO) and standardized low-resolution brain electromagnetic tomography (sLORETA), which have been used successfully to localize the EZ with both ictal and interictal presurgical data [5,6,7]. Our results suggest that the proposed approach localizes the EZ to within 1 cm. The reconstructions obtained are also more focal than those of sLORETA, making the localization less open to interpretation.

[1] M. B. H. Hall, et al. An evaluation of kurtosis beamforming in magnetoencephalography to localize the epileptogenic zone in drug resistant epilepsy patients. Clinical Neurophysiology. 129: 1221-1229, 2018.

[2] F. Neugebauer, et al. Validating EEG, MEG and combined MEG and EEG beamforming for an estimation of the epileptogenic zone in focal cortical dysplasia. Brain Sciences. 12: 114, 2022.

[3] A. Rezaei, et al. Parametrizing the conditionally Gaussian prior model for source localization with reference to the P20/N20 component of median nerve SEP/SEF. Brain Sciences. 10: 934, 2020.

[4] J. Lahtinen, et al. Conditionally Exponential Prior in Focal Near-and Far-Field EEG Source Localization via Randomized Multiresolution Scanning (RAMUS). Journal of Mathematical Imaging and Vision. 64: 587-608, 2022.

[5] A. J. R. Leal, et al. Analysis of the dynamics and origin of epileptic activity in patients with tuberous sclerosis evaluated for surgery of epilepsy. Clinical Neurophysiology. 119: 853-861, 2008.

[6] K. L. de Gooijer‐van de Groep, et al. Inverse modeling in magnetic source imaging: comparison of MUSIC, SAM (g2), and sLORETA to interictal intracranial EEG. Human Brain Mapping. 34: 2032-2044, 2013.

[7] A. Coito, et al. Interictal epileptogenic zone localization in patients with focal epilepsy using electric source imaging and directed functional connectivity from low‐density EEG. Epilepsia Open. 4: 281-292, 2019.


Edge-Preserving Tomographic Reconstruction with Uncertain View Angles

Per Christian Hansen1, Johnathan M. Bardsley2, Yiqiu Dong1, Nicolai A. B. Riis1, Felipe Uribe3

1Technical University of Denmark, Denmark; 2University of Montana; 3Lappeenranta-Lahti University of Technology

In computed tomography, data consist of measurements of the attenuation of X-rays passing through an object. The goal is to reconstruct an image of the linear attenuation coefficient of the object's interior. For each position of the X-ray source, characterized by its angle with respect to a fixed coordinate system, one measures a set of data referred to as a view. A common assumption is that these view angles are known, but in some applications they are known only imprecisely.

We present a Bayesian inference approach to solving the joint inverse problem for the image and the view angles, while also providing uncertainty estimates. For the image, we impose a Laplace difference prior enabling the representation of sharp edges in the image; this prior has connections to total variation regularization. For the view angles, we use a von Mises prior which is a $2\pi$-periodic continuous probability distribution.

Numerical results show that our algorithm can jointly identify the image and the view angles, while also providing uncertainty estimates of both. We demonstrate our method with simulations of 2D X-ray computed tomography problems using fan-beam configurations.

[1] N. A. B. Riis, Y. Dong, P. C. Hansen, Computed tomography reconstruction with uncertain view angles by iteratively updated model discrepancy, J. Math. Imag., 63:133–143, 2021. doi 10.1007/s10851-020-00972-7.

[2] N. A. B. Riis, Y. Dong, P. C. Hansen, Computed tomography with view angle estimation using uncertainty quantification, Inverse Problems, 37: 065007, 2021. doi 10.1088/1361-6420/abf5ba.

[3] F. Uribe, J. M. Bardsley, Y. Dong, P. C. Hansen, N. A. B. Riis, A hybrid Gibbs sampler for edge-preserving tomographic reconstruction with uncertain angles, SIAM/ASA J. Uncertain. Quantif., 10:1293–1320, 2022. doi 10.1137/21M1412268.


EIT reconstruction using virtual X-rays and machine learning

Siiri Inkeri Rautio

University of Helsinki, Finland

The mathematical model of electrical impedance tomography (EIT) is the inverse conductivity problem introduced by Calderón. The aim is to recover the conductivity $\sigma$ from the knowledge of the Dirichlet-to-Neumann map $\Lambda_\sigma$. It is a nonlinear and ill-posed inverse problem.

We introduce a new reconstruction algorithm for EIT, which provides a connection between EIT and traditional X-ray tomography, based on the idea of "virtual X-rays". We divide the exponentially ill-posed and nonlinear inverse problem of EIT into separate steps. We start by mathematically calculating so-called virtual X-ray projection data from the DN map. Then, we perform explicit algebraic operations and one-dimensional integration, ending up with a blurry and nonlinearly transformed Radon sinogram. We use a neural network to learn the nonlinear deconvolution-like operation. Finally, we can compute a reconstruction of the conductivity using the inverse Radon transform. We demonstrate the method with simulated data examples.
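The final step applies the inverse Radon transform. For illustration, here is a minimal filtered back-projection in plain numpy (nearest-neighbour interpolation, toy normalization; a sketch, not the paper's implementation):

```python
import numpy as np

def ramp_filter(sino):
    """Apply a ramp filter to each row (projection) in Fourier space."""
    n = sino.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))
    return np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * ramp, axis=1))

def fbp(sino, angles):
    """Filtered back-projection of a sinogram (rows = projections)."""
    n_ang, n = sino.shape
    filt = ramp_filter(sino)
    xs = np.arange(n) - n // 2
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((n, n))
    for proj, th in zip(filt, angles):
        # detector coordinate of each pixel for this view
        t = X * np.cos(th) + Y * np.sin(th) + n // 2
        idx = np.clip(np.round(t).astype(int), 0, n - 1)
        recon += proj[idx]
    return recon * np.pi / n_ang
```

Back-projecting the sinogram of a point source recovers a peak at the corresponding pixel, which is the behaviour the reconstruction step relies on.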



Frequentist Ensemble Kalman Filter

Maia Tienstra, Sebastian Reich

SFB 1294 / Universität Potsdam, Germany

We are interested in Tikhonov-type regularization of statistical inverse problems. The main challenge is the choice of the regularization parameter. Hierarchical Bayesian methods and Bayesian model selection give us a theoretical understanding of how the regularization parameters depend on the data. One popular way to solve statistical inverse problems is the Ensemble Kalman filter (EnKF). We formulate a frequentist version of the continuous-time EnKF, which brings us to the well-known bias-variance tradeoff of our estimator as a function of the regularization parameter. From here we can reformulate the choice of regularization parameter as a choice of stopping time, dependent on an estimate of the residuals. We are interested not only in recovering a point estimator, as in the case of optimization, but also in correctly estimating the spread of the posterior. We explore this numerically and theoretically through an infinite-dimensional linear inverse problem and a nonlinear inverse problem arising from the Schrödinger equation. This is joint work with Prof. Dr. Sebastian Reich. This research has been partially funded by the Deutsche Forschungsgemeinschaft (DFG), Project-ID 318763901, SFB 1294.
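The residual-based stopping idea can be sketched with a discretized ensemble Kalman iteration for a linear model $y = Gx + \varepsilon$; the discretization step `dt` and the discrepancy-principle threshold are generic assumptions, not the authors' continuous-time formulation:

```python
import numpy as np

def enkf_inversion(G, y, ensemble, noise_std, dt=0.1, max_steps=500):
    """Discretized EnKF flow for a linear inverse problem, stopped when
    the ensemble-mean residual reaches the noise level (stopping time
    as regularization). Sketch only."""
    X = ensemble.copy()                       # parameters, shape (n, J)
    J = X.shape[1]
    tau = noise_std * np.sqrt(len(y))         # expected residual norm
    for _ in range(max_steps):
        P = G @ X                             # predicted data, (m, J)
        if np.linalg.norm(y - P.mean(axis=1)) <= tau:
            break                             # discrepancy principle
        Xc = X - X.mean(axis=1, keepdims=True)
        Pc = P - P.mean(axis=1, keepdims=True)
        Cxp = Xc @ Pc.T / (J - 1)             # cross-covariance
        Cpp = Pc @ Pc.T / (J - 1)             # data covariance
        K = Cxp @ np.linalg.inv(Cpp + (noise_std**2 / dt) * np.eye(len(y)))
        X = X + K @ (y[:, None] - P)
    return X
```

The spread of the final ensemble is the quantity whose frequentist calibration the abstract investigates.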



Investigation of the effects of Cowling approximation on adiabatic wave propagation in helioseismology

Hélène Barucq1, Lola Chabat1, Florian Faucher1, Damien Fournier2, Ha Pham1

1Team Makutu, Inria University of Pau and Pays de l'Adour, France; 2Max Planck Institute for Solar System Research, Göttingen, Germany

Helioseismology investigates the interior structure and dynamics of the Sun from oscillations observed on its visible surface. Ignoring flow and rotation, time-harmonic adiabatic waves in a self-gravitating Sun in the Eulerian-Lagrangian description are governed by the Lagrangian displacement $\mathbf{\xi}$ and the gravitational potential perturbation $\delta_\phi$, which satisfy Galbrun's equation [1] coupled with a Poisson equation. In most works, the perturbation to the gravitational potential $\delta_\phi$ is neglected under Cowling's approximation [3]. However, this approximation is known to shift the eigenvalues of the forward operator for low-order harmonic modes [4]. Here, we study the effects of this approximation on numerical solutions and discuss its implications for the inverse problem. Removing Cowling's approximation allows us to accurately simulate waves for low-degree modes and helps us better characterize the deep interior of the Sun.

The investigation is carried out for a Sun with minimum activity, called the quiet Sun, whose background coefficients are given by the radially symmetric standard solar model, Model S, in the interior, with a choice of extension beyond the surface to include the presence of the atmosphere, cf. [5]. Radial symmetry is exploited to decouple the problem on each spherical harmonic mode $\ell$, giving a system of ordinary differential equations in the radial variable. This extends previous work [1], which employed the Cowling approximation. The modal system is solved with the Hybridizable Discontinuous Galerkin (HDG) method. For the purpose of validation, the equations are coupled with a free-surface boundary condition, which is adapted to low-frequency modes and commonly employed in helioseismology, cf. [6]. Since eigenvalues are poles of the Green's tensor, the magnitude of the latter as a function of frequency peaks around an eigenvalue. As preliminary results, we compare the locations where the Green's tensor peaks to the eigenvalues computed by the GYRE code [2], and find good agreement between them.

[1] H. Barucq, F. Faucher, D. Fournier, L. Gizon, H. Pham. Efficient computation of modal Green's kernels for vectorial equations in helioseismology under spherical symmetry, 2021.

[2] R. H. D. Townsend, S. A. Teitler. GYRE: an open-source stellar oscillation code based on a new Magnus Multiple Shooting scheme, Monthly Notices of the Royal Astronomical Society:3406-3418, 2013.

[3] T. G. Cowling. The non-radial oscillations of polytropic stars, Monthly Notices of the Royal Astronomical Society:367, 1941.

[4] J. Christensen-Dalsgaard. Lecture notes on stellar oscillations, 2014.

[5] J. Christensen-Dalsgaard. Introductory report : Solar Oscillations, Liege International Astrophysical Colloquia:155-207, 1984.

[6] W. Unno, Y. Osaki, H. Ando, H. Shibahashi. Nonradial oscillations of stars, Tokyo: University of Tokyo Press, 1979.



Dual-grid parameter choice method for total variation regularised image deblurring

Yiqiu Dong1, Markus Juvonen2, Matti Lassas2, Ilmari Pohjola2, Samuli Siltanen2

1Technical University of Denmark, Denmark; 2University of Helsinki, Finland

We present a new parameter choice method for total variation (TV) deblurring of images. The method is based on a dual-grid computation of the solution.

Instead of a single grid we use two grids with different discretisations. The first grid is the one on which the measurement is given. The origin of the second grid is shifted half a pixel width both horizontally and vertically. Note that the underlying true image is the same for both grids. Assume that the pixel size is much smaller than a typical constant-valued area in the image. The premise of the study is that, when solving the TV-regularised noisy deblurring problem with a large enough parameter, the solutions on both grids converge to the same image. The proposed algorithm looks for the smallest parameter at which convergence can be numerically detected.

The method has been tested on both simulated and real image data. Preliminary computational experiments suggest that an optimal parameter can be chosen by monitoring the difference of the TV seminorms of the dual-grid solutions while changing the regularisation parameter size.
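A 1D analogue of this parameter search can be sketched as follows. Here `tv_denoise_1d` is a toy smoothed-TV denoiser standing in for the TV deblurring solver, and a half-sample linear resampling plays the role of the shifted grid; all names and tolerances are illustrative:

```python
import numpy as np

def tv_seminorm(u):
    return np.sum(np.abs(np.diff(u)))

def tv_denoise_1d(y, lam, n_iter=300, eps=0.1, step=0.02):
    """Gradient descent on 0.5||u-y||^2 + lam * smoothed-TV(u).
    Toy solver, not the paper's deblurring solver."""
    u = y.copy()
    for _ in range(n_iter):
        d = np.diff(u)
        g = d / np.sqrt(d**2 + eps**2)          # smoothed sign of jumps
        div = np.concatenate(([g[0]], np.diff(g), [-g[-1]]))
        u -= step * ((u - y) - lam * div)
    return u

def dual_grid_lambda(y, lambdas, tol=1e-2):
    """Smallest lambda at which the TV seminorms on the two 'grids'
    numerically agree; falls back to the largest candidate."""
    y2 = 0.5 * (y[:-1] + y[1:])                 # half-sample shifted grid
    for lam in sorted(lambdas):
        u1 = tv_denoise_1d(y, lam)
        u2 = tv_denoise_1d(y2, lam)
        if abs(tv_seminorm(u1) - tv_seminorm(u2)) < tol:
            return lam
    return max(lambdas)
```

The monitored quantity is exactly the difference of TV seminorms between the two grids, as in the abstract's selection criterion.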



Geodesic slice sampling on the sphere

Mareike Hasenpflug1, Michael Habeck2, Shantanu Kodgirwar2, Daniel Rudolf1

1Universität Passau, Germany; 2Universitätsklinikum Jena, Germany

We introduce a geodesic slice sampler on the Euclidean sphere (in arbitrary but fixed dimension) that can be used for approximate sampling from distributions that have a density with respect to the corresponding surface measure. Such distributions occur e.g. in the modelling of directional data or shapes. We provide some mild conditions which ensure that the geodesic slice sampler is reversible with respect to the distribution of interest. Moreover, if the density is bounded, then we obtain a uniform ergodicity convergence result. Finally, we illustrate the performance of the geodesic slice sampler on the sphere with numerical experiments.
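A single transition of such a sampler can be sketched as a shrinkage slice-sampling step along a random great circle through the current point, in the spirit of elliptical slice sampling; the authors' sampler may differ in details:

```python
import numpy as np

def geodesic_slice_step(x, logf, rng):
    """One transition: slice-sample along a random great circle through
    the unit vector x, using a shrinking bracket. Sketch only."""
    v = rng.standard_normal(x.shape)
    v -= (v @ x) * x                  # project onto tangent space at x
    v /= np.linalg.norm(v)
    log_level = logf(x) + np.log(rng.uniform())   # slice level
    t = rng.uniform(0.0, 2.0 * np.pi)
    t_min, t_max = t - 2.0 * np.pi, t
    while True:
        xt = np.cos(t) * x + np.sin(t) * v        # point on the geodesic
        if logf(xt) > log_level:
            return xt
        if t < 0.0:
            t_min = t                 # shrink the bracket towards t = 0
        else:
            t_max = t
        t = rng.uniform(t_min, t_max)
```

A chain driven by this step targets, for example, a von Mises-Fisher density on the 2-sphere, whose samples concentrate around the mean direction.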


Gibbsian Polar Slice Sampling

Philip Schär1, Michael Habeck1, Daniel Rudolf2

1Friedrich Schiller University Jena, Germany; 2University of Passau, Germany

Polar slice sampling [2] is a Markov chain approach for approximate sampling from distributions; it is difficult, if not impossible, to implement efficiently, yet it provably behaves well with respect to the dimension. By updating the directional and radial components of the chain iterates separately, we obtain a family of samplers that mimic polar slice sampling and yet can be implemented efficiently [1]. Numerical experiments for a variety of settings indicate that our proposed algorithm significantly outperforms the two most closely related approaches, elliptical slice sampling [3] and hit-and-run uniform slice sampling [4]. We prove the well-definedness and convergence of our methods under suitable assumptions on the target distribution.

[1] P. Schär, M. Habeck, D. Rudolf. Gibbsian Polar Slice Sampling. arXiv preprint arXiv:2302.03945, 2023.

[2] G. O. Roberts, J. S. Rosenthal. The Polar Slice Sampler. Stochastic Models 18(2):257-280, 2002.

[3] I. Murray, R. Adams, D. MacKay. Elliptical Slice Sampling. Journal of Machine Learning Research 9:541-548, 2010.

[4] D. MacKay. Information Theory, Inference and Learning Algorithms, Cambridge University Press, 2003.


Hybrid knowledge and data-driven approaches for DOT reconstruction in medical imaging

Alessandra Serianni

Università degli Studi di Milano, Italy

Diffuse Optical Tomography (DOT) is an emerging medical imaging technique which employs NIR light to estimate the spatial distribution of optical coefficients in biological tissues for diagnostic purposes, in a non-invasive and non-ionizing manner. NIR light undergoes multiple scattering throughout the tissue, making DOT reconstruction a severely ill-conditioned problem [1]. In this contribution, we present our research on hybrid knowledge-driven/data-driven approaches which combine well-known physical models with deep learning techniques that integrate the collected data. Our main idea is to leverage neural networks to solve PDE-constrained inverse problems of the form \begin{equation*} \theta^*=\arg\min_\theta \mathcal{L}(y,\tilde{y}) \tag{1} \end{equation*} where $\mathcal{L}$ is a loss function which typically contains a discrepancy measure (or data fidelity) term as well as prior information on the solution. In the context of inverse problems like $(1)$, one seeks the optimal set of physical parameters $\theta$, given the set of observations $y$. Moreover, $\tilde{y}$ is the computable approximation of $y$, which may be obtained either from a neural network or in the classical way via the solution of a PDE with given input coefficients. The idea underlying our approach is to exploit Graph Neural Networks (GNNs) as a fast forward model that solves PDEs: after an appropriate construction of the graph on the spatial domain of the PDE, the message passing framework allows us to directly learn the kernel of the network which approximates the PDE solution [2]. Due to the severe ill-conditioning of the reconstruction problem, we also learn a prior over the space of solutions using an auto-decoder type network which maps a latent code to the estimated physics parameter that is passed to the GNN to finally obtain the prediction.

[1] A. Benfenati, G. Bisazza, P. Causin. A Learned SVD approach for Inverse Problem Regularization in Diffuse Optical Tomography, arXiv preprint arXiv:2111.13401, 2021.

[2] Q. Zhao, D.B. Lindell, G. Wetzstein. Learning to Solve PDE-constrained Inverse Problems with Graph Networks, arXiv preprint arXiv:2206.00711, 2022.


Inverse Level-Set Problems for Capturing Calving Fronts

Daniel Abele1,2,4, Angelika Humbert1,3

1Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research, Section Glaciology; Bremerhaven, Germany; 2German Aerospace Center, Institute for Software Technology; Oberpfaffenhofen, Germany; 3University of Bremen, Department of Geosciences; Bremen, Germany; 4Technical University of Munich, School for Computation, Information and Technology; Munich, Germany

Capturing the calving front motion is critical for simulations of ice sheets and ice shelves. Multiple physical processes, besides calving also melting and the forward movement of the ice, need to be understood to accurately model the front. Calving is particularly challenging due to its discontinuous nature, and modelers require more tools to examine it.

A common technique for capturing the front in ice simulations is the level-set method. The front is represented implicitly by the zero isoline of a function. The motion of the front is described by an advection equation, where the velocity field is a combination of the ice velocity and the frontal ablation rate.

To improve understanding of these processes, we are developing methods to estimate parameters of calving laws based on inverse level-set problems. The regularization is chosen so that it can handle discontinuous parameters or calving laws, in order to fit discontinuous front positions due to large calving events. The input for the inverse problem is observational data from satellite images, which are often sparse. The methods will be applied to large-scale models of the Antarctic Ice Sheet.
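The front-capturing advection in the forward model can be illustrated in 1D with a first-order upwind scheme (a toy sketch with periodic boundaries, not the project's solver):

```python
import numpy as np

def advect_levelset(phi, v, dx, dt, n_steps):
    """Advance phi_t + v phi_x = 0 with first-order upwind differences.
    The front is the zero crossing of phi. Periodic toy boundaries."""
    for _ in range(n_steps):
        d_minus = (phi - np.roll(phi, 1)) / dx     # backward difference
        d_plus = (np.roll(phi, -1) - phi) / dx     # forward difference
        # pick the upwind side according to the sign of the velocity
        phi = phi - dt * np.where(v > 0, v * d_minus, v * d_plus)
    return phi
```

For a constant velocity the zero isoline simply translates, which gives an easy correctness check before estimating the velocity (i.e. the calving-law parameters) from observed front positions.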



Microlocal analysis of inverse scattering problems

Gregory Samelsohn

Shamoon College of Engineering, Israel

Microlocal analysis has recently been shown to provide deep insight into the transformation of singularities and the origin of certain artifacts for a variety of tomographic imaging problems with limited data (X-ray CT, electron microscopy, SAR imaging, etc.). In this work, we report on an even closer relation between microlocal analysis and some inverse scattering problems. In particular, a new algorithm [1] proposed for tomographic imaging of impenetrable (e.g., perfectly conducting) scatterers is considered. The boundary value problem is converted into a volume integral equation with a singular double-layer potential. Fourier-Radon inversion of the resulting far-field pattern is then applied to compute an indicator function. No approximations are made in the construction of the forward model and the derivation of the inversion algorithm. Instead, some elementary facts of microlocal analysis, in particular the pseudo-locality of the corresponding operator, are used to recover the support of the scattering potential and, therefore, the shape of the obstacle. Generalizations of this approach to tomographic imaging of impedance-type and penetrable objects are also discussed.

[1] G. Samelsohn. Tomographic imaging of perfectly conducting objects. J. Opt. Soc. Am. A, 40:229-236, 2023.


Microlocal Analysis of Multistatic Synthetic Aperture Radar Imaging

David McMahon, Clifford Nolan

University of Limerick, Ireland

We consider Synthetic Aperture Radar (SAR) in which scattered waves, simultaneously emitted from a pair of stationary emitters, are measured along a flight track traversed by an aircraft. A linearized mathematical model of scattering is obtained using a Fourier integral operator. This model can then be used to form an image of the ground terrain using backprojection together with a carefully designed data acquisition geometry.

The data is composed of two parts, corresponding to the received signals from each emitter. A backprojection operator can easily be chosen that correctly reconstructs the singularities in the wave speed using just one emitter. One would expect this to lead to a reasonable image of the terrain. However, we expect that applying this backprojection operator to the data from the other emitter will lead to unwanted artifacts in the image. We analyse the operators associated with this situation, and use microlocal analysis to determine configurations of flight path and emitter locations that mitigate the artifacts associated with such “cross talk” between the two emitters.



Revealing Functional Substructure of Retinal Ganglion Cell Receptive Fields Using Tomography-Based Stimulation

Steffen Krüppel1,2,3, Tim Gollisch1,2,3

1Department of Ophthalmology, University Medical Center Göttingen, Göttingen, Germany; 2Bernstein Center for Computational Neuroscience, Göttingen, Germany; 3Cluster of Excellence “Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells” (MBExC), University of Göttingen, Göttingen, Germany

Retinal ganglion cells (RGCs) are the output cells of the retina and perform various computations on the visual signals that are detected by the retina's photoreceptors. Here, nonlinearities in an RGC's receptive field, the subset of all photoreceptors that (indirectly) connect to a given RGC, play an essential role. Many RGC types are spatially nonlinear, that is, they integrate signals from different areas of their receptive field nonlinearly. This spatial nonlinearity is mediated via so-called subunits, which in turn are considered to be linear and are thought to correspond to the cells that provide direct excitatory input to RGCs, the retinal bipolar cells. In order to understand RGC responses to the finely structured natural images animals encounter, knowledge of the subunits is critical. In addition, large-scale electrophysiological studies of RGCs are relatively simple, but the same cannot be said of bipolar cells. Efforts have therefore been made to infer the subunits of a given RGC from recordings of the RGC's responses to visual stimuli presented to the retina. Yet methods to quickly and consistently infer how many subunits compose an RGC's receptive field, and where they are located, are rare. The problem is made more difficult by additional nonlinearities in the system, the unknown shapes of the nonlinearities, and potential interactions between subunits. Our approach is to flash a bar with a preferred-contrast center and sidebands of non-preferred contrast in the receptive field of an RGC at various positions and angles. If the bar width is similar to the expected subunit size, the responses of the RGC should, for a given angle and varying position, roughly correspond to a projection of the subunit layout along the bar's orientation. Borrowing from tomography, we can thus compose a sinogram out of all responses of an RGC and reconstruct the subunit layout using, e.g., filtered back-projection.
In simulations of RGCs with various subunit layouts, we find that those RGC responses that are generated by excitation of a specific subunit are well confined to a small region in the sinogram. This often allows successful reconstruction of the subunit layout, but the reconstruction quality for realistic layouts is limited by nonlinearities not accounted for by filtered back-projection. We also performed multi-electrode array recordings from isolated primate retinas, where our approach revealed substructure in many RGC receptive fields as well. Altogether, our tomographic subunit detection method is a promising candidate to quickly and reliably infer substructure in the receptive field of an RGC, thereby laying the foundations for better prediction of responses to natural images and for indirect large-scale studies of bipolar cells.

Acknowledgements: This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)—project IDs 154113120 (SFB 889, project C01); 432680300 (SFB 1456, project B05)—and by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement number 724822).


Non-stationary hyperspectral unmixing with learnt regularization

Julia Marie Lascar

Université Paris Saclay, CEA Irfu, France

In astrophysics or remote sensing, spectro-imagers can record cubes of data called hyperspectral images, with two spatial dimensions and a third dimension of energy. Often, the observed data are a mixture of several emitting sources. Thus, the task of source separation is key to perform detailed studies of the underlying physical components.

Most source separation algorithms assume a stationary mixing model, i.e. a sum of spectra, one per component, each multiplied by an amplitude map. But in many cases, this assumption is erroneous, since the spectral shape of each component varies spatially due to physical properties. Our algorithm’s goal is to achieve non-stationary source separation, obtaining for each component a cube with varying spectral shape. This is an ill-posed inverse problem, thus in need of regularization.

For spectral regularization, we use a generative model learned with auto-encoders, which constrains the spectra to interpretable shapes in a semi-supervised scheme. This is combined with a spatial regularization scheme via a sparse modelling of the generative model's latent parameters. The optimization is carried out with an alternating proximal gradient descent algorithm. The method was tested on the case study of X-ray astrophysics spectro-imagery, for which results will be shown on realistic simulated data. To our knowledge, this is the first method to extend sparse blind source separation to the non-stationary case.


Reduced Order Methods for Linear Gaussian Bayesian Inverse Problems on separable Hilbert Spaces

Giuseppe Carere, Han Cheng Lie

University of Potsdam, Germany

In Bayesian inverse problems, the computation of the posterior distribution can be computationally demanding, especially in many-query settings such as filtering, where a new posterior distribution must be computed many times. In this work we consider computationally efficient approximations of the posterior distribution for linear Gaussian inverse problems defined on separable Hilbert spaces. We measure the quality of these approximations using the Kullback-Leibler divergence of the approximate posterior with respect to the true posterior and investigate their optimality properties. The approximation method exploits the low-dimensional behaviour of the update from prior to posterior, originating from a combination of prior smoothing, forward smoothing, measurement error, and a limited number of observations, analogous to the results of Spantini et al. [1] for finite-dimensional parameter spaces. Since the data are only informative on a low-dimensional subspace of the parameter space, the approximation class we consider for the posterior covariance consists of suitable low-rank updates of the prior. In the Hilbert space setting, care must be taken when inverting covariance operators. We address this challenge by using the Feldman-Hajek theorem for Gaussian measures.
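In finite dimensions, the low-rank update of the prior covariance can be sketched as follows; the symbols (forward matrix `G`, prior and noise covariances) and the rank choice are illustrative, and the Hilbert-space version requires the operator-theoretic care the abstract describes:

```python
import numpy as np

def lowrank_posterior_cov(G, C_pr, C_obs, r):
    """Rank-r update of the prior covariance for a linear Gaussian model
    y = G x + e, built from the prior-preconditioned data-misfit Hessian
    (finite-dimensional sketch in the spirit of Spantini et al.)."""
    w, V = np.linalg.eigh(C_pr)
    S = (V * np.sqrt(w)) @ V.T                       # symmetric sqrt of prior
    H = S @ G.T @ np.linalg.inv(C_obs) @ G @ S       # preconditioned Hessian
    lam, W = np.linalg.eigh(H)
    idx = np.argsort(lam)[::-1]                      # largest eigenvalues first
    lam, W = lam[idx], W[:, idx]
    U = S @ W[:, :r]
    # C_pos = S (I + H)^{-1} S = C_pr - sum_i lam_i/(1+lam_i) u_i u_i^T
    return C_pr - U @ np.diag(lam[:r] / (1.0 + lam[:r])) @ U.T
```

When the rank equals the number of observations, the update captures the whole data-informed subspace and the approximation coincides with the exact posterior covariance.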

[1] A. Spantini, A. Solonen, T. Cui, J. Martin, L. Tenorio, Y. Marzouk. Optimal Low-Rank Approximations of Bayesian Linear Inverse Problems. SIAM J. on Sci. Comp. 37, no. 6: A2451-87, 2015.


GAN-based motion correction in MRI

Mathias Simon Feinler, Bernadette Hahn

University of Stuttgart, Germany

Magnetic Resonance Imaging allows high-resolution data acquisition, with the downside of motion sensitivity due to relatively long acquisition times. Even during the acquisition of a single 2D slice, motion can severely corrupt the image. Retrospective motion correction strategies do not interfere with the acquisition but operate on the motion-affected data. In most applications the trajectories are Cartesian, as in the HASTE sequence. These classical sampling schemes show no or only marginal temporal redundancy from the sensitivity encoding (SENSE) that multiple receiver coils provide. Hence, in practice, residual-based optimizations will fail to produce motion-artifact-free images. In recent years, Generative Adversarial Networks (GANs) have gained interest for motion compensation. Although the performance is visually appealing, it cannot be guaranteed that small details of diagnostic relevance are predicted correctly, even if large parts of the image are in fact free of motion artifacts. To this end we propose a learned iterative procedure to substantiate the reconstructions and achieve data consistency. We show that, depending on the complexity of the deformations, even small details which have initially been erased by GANs can be recovered.
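One common form of data-consistency step can be illustrated with a simple k-space projection. The single-coil FFT model, the sampling mask, and the function name are simplifying assumptions for illustration, not the authors' procedure:

```python
import numpy as np

def enforce_data_consistency(image, kspace_measured, mask):
    """Project a (e.g. GAN-predicted) image onto the set of images that
    agree with the measured k-space samples: wherever data were acquired
    (mask True), the predicted k-space value is replaced by the measurement."""
    k = np.fft.fft2(image)
    k = np.where(mask, kspace_measured, k)
    return np.fft.ifft2(k)
```

In the fully sampled limit this projection returns exactly the image encoded by the measurements, regardless of the network's prediction, which is the guarantee such a step is meant to provide for diagnostically relevant detail.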


Dynamic Computerized Tomography using Inexact Models and Data-driven Motion Detection

Gesa Sarnighausen, Anne Wald

Georg-August-Universität Göttingen, Germany

Reconstructing a dynamic object with affine motion in computerized tomography leads to motion artefacts if the motion is not taken into account. The iterative RESESOP-Kaczmarz method can, under certain conditions, reconstruct dynamic objects at different time points even if the exact motion is unknown. However, the method is very time-consuming. To speed up the reconstruction process and obtain better results, the following three steps are used:

1. RESESOP-Kaczmarz with only a few iterations is used to reconstruct the object at different time points.

2. The motion is estimated via deep learning.

3. The estimated motion is integrated into the reconstruction process, which allows the use of dynamic filtered backprojection.


Phase retrieval beyond the homogeneous object approximation in X-ray holographic imaging

Jens Lucht1, Leon Merten Lohse1,2, Simon Huhn1, Tim Salditt1

1Georg-August-Universität Göttingen, Germany; 2Deutsches Elektronen-Synchrotron DESY

X-ray near-field in-line holographic imaging using highly coherent synchrotron radiation offers spatial resolution down to the nanometer scale. Combined with tomographic methods, it allows high-resolution three-dimensional imaging with wide applicability in the life, natural, and material sciences. X-ray phase contrast enables the study of samples that show little to no conventional absorption-based contrast, for example soft tissue. Since the phase cannot be measured directly, it has to be retrieved from the measured diffraction patterns by solving an ill-posed inverse problem.

To solve this phase problem, one common approximation for X-ray Fresnel holography is the so-called homogeneous or single-material object approximation. It constrains the phase-shifting and absorbing parts of the object's refractive index to be proportional. Hence the number of unknowns of the inverse problem is reduced from two (phase shift and absorption) to one, at the price of a restriction that the sample must satisfy. Multi-material samples naturally violate this assumption, and reconstructions under the homogeneous object assumption therefore show artifacts. To resolve this incompatibility, we present a reconstruction method that relaxes the homogeneous object assumption, based on a linearization of Fresnel diffraction for weak objects known as the contrast transfer function (CTF), which is also popular in electron microscopy. We demonstrate that the reconstruction quality can be significantly improved if physical priors are imposed on the reconstruction with the tools of constrained optimization. Furthermore, we discuss stability and experimental design for the proposed method.
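As background, the CTF baseline that the abstract generalizes can be sketched in a few lines for the pure-phase special case of the homogeneous object (absorption set to zero). The sign and scaling conventions below, $\hat{I}(k) = \delta(k) + 2\sin\chi(k)\,\hat\varphi(k)$ with $\chi \propto |k|^2$ divided by an illustrative Fresnel-number parameter, are one common choice and an assumption for this sketch, not the authors' formulation:

```python
import numpy as np

def ctf_phase_reconstruct(hologram, fresnel_number, alpha=1e-3):
    # Weak pure-phase object under the linearized (CTF) Fresnel model:
    #   I_hat(k) = delta(k) + 2*sin(chi(k)) * phi_hat(k),
    # with chi(k) = pi*(kx^2 + ky^2)/fresnel_number (one common convention).
    n = hologram.shape[0]
    k = np.fft.fftfreq(n) * n
    kx, ky = np.meshgrid(k, k, indexing="ij")
    ctf = 2.0 * np.sin(np.pi * (kx**2 + ky**2) / fresnel_number)
    I_hat = np.fft.fft2(hologram - 1.0)          # remove the flat-field term
    # Tikhonov-regularized division stabilizes the zero crossings of the CTF.
    phi_hat = ctf * I_hat / (ctf**2 + alpha)
    return np.real(np.fft.ifft2(phi_hat))

# Round-trip check on a synthetic zero-mean phase (the CTF vanishes at
# k = 0, so a constant phase offset is fundamentally not recoverable).
n, F = 8, 1000.0
rng = np.random.default_rng(0)
phi = rng.standard_normal((n, n))
phi -= phi.mean()
k = np.fft.fftfreq(n) * n
kx, ky = np.meshgrid(k, k, indexing="ij")
ctf = 2.0 * np.sin(np.pi * (kx**2 + ky**2) / F)
holo = 1.0 + np.real(np.fft.ifft2(ctf * np.fft.fft2(phi)))
phi_rec = ctf_phase_reconstruct(holo, F, alpha=1e-8)
```

The multi-material method of the abstract replaces the single proportionality assumption by separate phase and absorption unknowns with physical constraints; the sketch above only shows the homogeneous baseline being relaxed.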


Reconstruction of active forces generated by actomyosin networks

Emily Klass, Anne Wald

Georg-August-University Göttingen, Germany

Biological cells rely on the interaction of multiple proteins to perform various forms of movement such as cell contraction, division, and migration. In particular, the protein actin is able to form long, branching filament structures that the protein myosin can bind to and slide along, thereby creating so-called actomyosin networks. These networks produce mechanical stress, resulting in movement of the cell itself and of its interior.

We describe the flow inside cells generated by actomyosin networks using a two-dimensional droplet model based on the Stokes equation for incompressible Newtonian fluids with non-constant viscosity. We impose a Neumann boundary condition in which the normal component of the velocity field vanishes on the boundary of the droplet, representing that no fluid can flow into or out of the domain. Further, we add a Robin-type (slip) boundary condition to model the interaction with surrounding fluids. We choose a non-constant viscosity to portray the actomyosin network.

We aim to reconstruct the active forces inside the droplet from noisy measurements of the velocity field. This results in a (deterministic) parameter identification problem for the Stokes equation.


Reconstruction of the potential in a hyperbolic equation in dimension 3

Faguèye Ndiaye Sylla1, Mouhamadou Ngom2, Mariama Ndiaye3, Diaraf Seck1

1Université Cheikh Anta Diop, Sénégal; 2Université Alioune Diop de Bambey, Sénégal; 3Université Gaston Berger de Saint-Louis, Sénégal

In this paper, we consider the wave equation $\partial_{tt}v(x,t)-\Delta v(x,t)+p(x)v(x,t)=0$ in $B\times(0,T)$, where $B$ is the unit ball in ${\mathbb R^{3}}$ and $T>0$. We are interested in the inverse problem of identifying the potential $p(x)$ from the Cauchy data $(f, \partial_n v)$, where $f$ ranges over all possible functions on the boundary $\partial B \times(0,T)$ and $\partial_n v$ denotes the measurement, on $\partial B \times(0,T)$, of the normal derivative of the solution of the wave equation associated with $f$.

Using spherical harmonics and an explicit formula for the Dirichlet-to-Neumann map $\Lambda_{p}$, which associates to each $f$ the measurement $\partial_n v$ in the unit ball in dimension $3$, we determine an explicit expression for the potential $p(x)$ on the boundary of the domain. We present an example both theoretically and numerically.


Compensating motion and model inexactness in nano-CT

Björn Ehlers, Anne Wald

Universität Göttingen, Germany

In nano-CT imaging the scale is so small that there is unwanted and unknown movement of the scanned object relative to the tomograph, for example due to vibrations of the measuring apparatus. Not incorporating these movements into the Radon operator leads to artefacts due to the model inexactness. Reconstructing the rigid body motion of the object is possible thanks to the structure of the range of the Radon operator: the range of a Radon operator that includes the movement differs from that of one that does not. This can be used to extract the motion, to correct the Radon operator, to correct the data, or to estimate the operator error for use in a scheme called sequential subspace optimisation. We will focus on the error estimation for this regularisation method.
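The range structure alluded to here is captured by the Helgason-Ludwig consistency conditions: the $k$-th moment $\int g(\theta,s)\,s^k\,ds$ of a consistent sinogram is a trigonometric polynomial of degree at most $k$ in $\theta$. A minimal numerical illustration (a synthetic disk phantom, not nano-CT data, with illustrative drift parameters) checks the $k=1$ condition and sees it violated once the object drifts during the scan:

```python
import numpy as np

def disk_projection(theta, s, center, radius=0.3):
    # Analytic Radon transform of a unit-density disk: the chord length.
    t = s - (center[0] * np.cos(theta) + center[1] * np.sin(theta))
    return 2.0 * np.sqrt(np.maximum(radius**2 - t**2, 0.0))

def first_moment_residual(thetas, s, sino):
    # Helgason-Ludwig condition for k = 1: the first moment
    # m1(theta) = int g(theta, s) s ds must have the form
    # a*cos(theta) + b*sin(theta) for data in the range of the Radon operator.
    ds = s[1] - s[0]
    m1 = (sino * s).sum(axis=1) * ds
    basis = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
    coef, *_ = np.linalg.lstsq(basis, m1, rcond=None)
    return np.linalg.norm(m1 - basis @ coef)

thetas = np.linspace(0.0, np.pi, 90, endpoint=False)
s = np.linspace(-1.0, 1.0, 801)
static = np.stack([disk_projection(th, s, (0.2, 0.1)) for th in thetas])
# The object drifts during the scan: its center moves with the view index.
moving = np.stack([disk_projection(th, s, (0.2 + 0.003 * i, 0.1))
                   for i, th in enumerate(thetas)])
r_static = first_moment_residual(thetas, s, static)
r_moving = first_moment_residual(thetas, s, moving)
```

The size of such range violations is one conceivable source for the operator-error estimate that a sequential subspace optimisation scheme requires; the abstract's actual estimation strategy is the subject of the poster.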


Combining Non-Data-Adaptive Transforms For OCT Image Denoising By Iterative Basis Pursuit

Raha Razavi1, Hossein Rabbani2, Gerlind Plonka1

1Georg-August-University of Goettingen, Goettingen, Germany; 2Isfahan University of Medical Sciences, Isfahan, Iran

Optical Coherence Tomography (OCT) images, like the majority of medical images, are subject to speckle noise during acquisition. Since the quality of these images is crucial for detecting abnormalities, we develop an improved denoising algorithm that is particularly appropriate for OCT images. The essential idea is to combine two non-data-adaptive transform-based denoising methods that are capable of preserving different important structures appearing in OCT images while providing very good denoising performance. Based on our numerical experiments, the most appropriate non-data-adaptive transforms for denoising and feature extraction are the Discrete Cosine Transform (DCT), capturing local patterns, and the Dual-Tree Complex Wavelet Transform (DTCWT), capturing piecewise smooth image features. These two transforms are combined using the Dual Basis Pursuit Denoising (DBPD) algorithm. Further improvement of the denoising procedure is achieved by total variation (TV) regularization and by employing an iterative algorithm based on DBPD.
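The idea of splitting a signal over two transforms can be illustrated with a small 1-D analogue. The sketch below uses an orthonormal DCT and a single-level Haar transform as stand-ins for the DCT/DTCWT pair (no TV term), and solves the two-dictionary basis-pursuit denoising problem by plain ISTA; all sizes and parameters are illustrative, not those of the DBPD algorithm referenced above:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis (columns are basis vectors).
    k = np.arange(n)
    M = np.cos(np.pi * (k[:, None] + 0.5) * k[None, :] / n)
    M[:, 0] *= 1.0 / np.sqrt(2.0)
    return M * np.sqrt(2.0 / n)

def haar_matrix(n):
    # Orthonormal single-level Haar transform (n must be even).
    H = np.zeros((n, n))
    for i in range(n // 2):
        H[2 * i, i] = H[2 * i + 1, i] = 1.0 / np.sqrt(2.0)
        H[2 * i, n // 2 + i] = 1.0 / np.sqrt(2.0)
        H[2 * i + 1, n // 2 + i] = -1.0 / np.sqrt(2.0)
    return H

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def dual_basis_pursuit_denoise(y, D1, D2, lam=0.5, iters=1000):
    # ISTA for  min_{a,b} 0.5*||y - D1 a - D2 b||^2 + lam*(||a||_1 + ||b||_1);
    # the denoised signal is D1 a + D2 b.  Step size 0.5, since the stacked
    # dictionary [D1 D2] with orthonormal blocks has squared norm 2.
    a = np.zeros(D1.shape[1])
    b = np.zeros(D2.shape[1])
    for _ in range(iters):
        r = y - D1 @ a - D2 @ b
        a = soft(a + 0.5 * (D1.T @ r), 0.5 * lam)
        b = soft(b + 0.5 * (D2.T @ r), 0.5 * lam)
    return D1 @ a + D2 @ b

# Toy signal: a smooth cosine (DCT-sparse) plus a step (Haar-friendly).
n = 64
t = np.arange(n)
clean = np.cos(2 * np.pi * 3 * t / n) + (t >= n // 2)
rng = np.random.default_rng(1)
noisy = clean + 0.5 * rng.standard_normal(n)
denoised = dual_basis_pursuit_denoise(noisy, dct_matrix(n), haar_matrix(n))
```

Each transform "claims" the features it represents sparsely, which is the rationale for pairing transforms with complementary strengths.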


Iterated Arnoldi-Tikhonov method

Davide Bianchi, Marco Donatelli, Davide Furchì, Lothar Reichel

Università degli Studi dell'Insubria, Italy

When solving an ill-posed linear operator equation, most analyses do not take the discretization error into account. This paper contributes to closing this gap. Building upon the analysis presented in [1], we extend the study to the iterated framework. First, we demonstrate a saturation result for the Arnoldi-Tikhonov solution method outlined in [2]. Subsequently, we extend the analysis to the iterated Arnoldi-Tikhonov method, providing a parameter choice rule which produces higher-quality computed solutions than the standard Arnoldi-Tikhonov method. Theoretical results are supported by relevant computed examples.
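As a plain-vanilla illustration of the iterated refinement (without the Arnoldi/Krylov projection that is the actual subject of the work), stationary iterated Tikhonov repeatedly solves a damped normal equation for the current residual. The dense toy example below is only a sketch of this classical scheme:

```python
import numpy as np

def iterated_tikhonov(A, b, alpha, iters=1):
    # x_{k+1} = x_k + (A^T A + alpha*I)^{-1} A^T (b - A x_k);
    # iterating reduces the regularization bias of a single Tikhonov step.
    n = A.shape[1]
    M = A.T @ A + alpha * np.eye(n)
    x = np.zeros(n)
    for _ in range(iters):
        x += np.linalg.solve(M, A.T @ (b - A @ x))
    return x

# Ill-conditioned toy problem: a Hilbert matrix with exact data.
n = 8
i = np.arange(n)
A = 1.0 / (i[:, None] + i[None, :] + 1.0)
x_true = np.ones(n)
b = A @ x_true
err1 = np.linalg.norm(iterated_tikhonov(A, b, 1e-3, iters=1) - x_true)
err3 = np.linalg.norm(iterated_tikhonov(A, b, 1e-3, iters=3) - x_true)
```

With noise-free data the error filter factors are $(\alpha/(\sigma_j^2+\alpha))^k$, so additional iterations monotonically reduce the bias; with noisy data the iteration count itself acts as a regularization parameter, which is where a parameter choice rule becomes essential.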

[1] A. Neubauer. An a posteriori parameter choice for Tikhonov regularization in the presence of modeling error. Appl. Numer. Math., 4, 1986.

[2] R. Ramlau, L. Reichel. Error estimates for Arnoldi-Tikhonov regularization for ill-posed operator equations. Inverse Problems, 35, 2019.



Measurement and analysis strategies for EUV pump-probe spectroscopic imaging

Gijsbert Simon Matthijs Jansen1, Hannah Strauch1, Thorsten Hohage2, Stefan Mathias1

1I. Physical Institute, University of Goettingen, Germany; 2Institute of Numerical and Applied Mathematics, University of Goettingen, Germany

Interference-based measurement methods allow the extraction of phase differences between electromagnetic waves, thus adding phase information to an intensity-based measurement. In holography, the encoded phase information allows the reconstruction of the complete wavefront, which has powerful applications in imaging. Similarly, Fourier-transform spectroscopy allows spectral information to be extracted from pulse-delay-dependent interference measurements. Recently, it was demonstrated that the combination of these interferometric measurements might enable hyperspectral imaging in the extreme ultraviolet (EUV). To obtain full spectral information, however, long reference-probe delay scans are normally required, resulting in long measurement times and large amounts of data.

We aim to address this challenge by combining Fourier-transform spectroscopy with Fourier-transform holography: two phase-locked EUV pulses are imaged to separate reference and probe positions in the sample plane, so that both spectral and spatial information are encoded in the far-field diffraction pattern. This interferometric approach provides an opportunity to reduce the sampling requirements, as a suitable reconstruction algorithm can use prior knowledge in the spatial domain to constrain the spectral domain (and vice versa). As is typical for diffraction microscopy, the measured data are proportional to the amplitude of the Fourier transform, meaning that although the forward model is nonlinear, it can be implemented efficiently. Based on simulations and analysis of the forward model, we discuss ways to adapt both the measurement and the analysis to facilitate efficient acquisition of multidimensional pump-probe spectroscopy data.
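As background for the holographic part, plain Fourier-transform holography already admits a one-line reconstruction: the inverse Fourier transform of the far-field intensity is the autocorrelation of the sample plane, and with a point reference sufficiently separated from the object, the cross-correlation terms are shifted copies of the object. A toy sketch (monochromatic, with arbitrary illustrative geometry, and without the spectroscopic dimension of the abstract):

```python
import numpy as np

# Sample plane: a small extended "object" plus a point reference,
# separated far enough that the correlation terms do not overlap.
n = 64
plane = np.zeros((n, n))
plane[30:34, 30:34] = 1.0       # extended object
plane[10, 10] = 1.0             # point reference

# Far-field intensity (squared Fourier magnitude): the phase is lost.
intensity = np.abs(np.fft.fft2(plane)) ** 2

# Inverse FFT of the intensity = circular autocorrelation of the plane;
# a copy of the object appears at the object-reference offset (20, 20).
autocorr = np.fft.ifft2(intensity)
```

In the combined scheme described above, the delay between the two phase-locked pulses adds the spectral dimension, and the reconstruction becomes a nonlinear inverse problem constrained jointly in space and frequency.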



Conference: AIP 2023