Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only the sessions held on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

 
 
 
Session Overview
Location: VG1.108
Date: Monday, 04/Sept/2023
1:30pm - 3:30pm
MS43 1: Inverse Problems in radiation protection and nuclear safety
Location: VG1.108
Session Chair: Lorenz Kuger
Session Chair: Samuli Siltanen
 

Passive Gamma Emission Tomography (PGET) of spent nuclear fuel

Riina Virta1,2, Tatiana A. Bubba3, Mikael Moring1, Samuli Siltanen4, Tapani Honkamaa1, Peter Dendooven2

1Radiation and Nuclear Safety Authority, Finland; 2Helsinki Institute of Physics, University of Helsinki, Finland; 3Department of Mathematical Sciences of the University of Bath, United Kingdom; 4Department of Mathematics and Statistics of the University of Helsinki, Finland

The world’s first deep underground repository for spent nuclear fuel will soon start operations in Eurajoki, Finland. Disposal tunnels have been excavated 430 meters below the ground surface in bedrock, and the spent nuclear fuel will be placed in deposition holes in copper canisters. After the fuel is disposed of, it will be practically unreachable [1]. For safeguarding nuclear material, all fuel items need to be reliably verified prior to disposal in the geological repository. Fuel assembly integrity is investigated to make sure all nuclear material stays in peaceful use.

Passive Gamma Emission Tomography (PGET) is a non-destructive assay method that allows accurate 2D slice images of the fuel assembly to be reconstructed. The fuel assembly types we have studied are rectangular or hexagonal, about 4 meters long and about 15 cm across, consisting of 63-126 individual fuel rods in a fixed geometric arrangement. Spent nuclear fuel emits gamma-rays at a very high rate and with specific energies, providing a means to verify the presence of fuel rods in the assemblies. Gamma emission data is collected with the torus-shaped PGET device, which has two highly collimated CdZnTe gamma detector banks that rotate a full 360 degrees around the fuel assembly placed in the central hole of the device. Gamma-rays are significantly attenuated in the fuel material, and thus the attenuation map of the object is reconstructed simultaneously with the activity map. The mathematical approach to this unique inverse problem is described in another presentation in this minisymposium, while the context of the method and the measurements are presented in more detail in this contribution [2,3].

During 2017-2022, over 100 spent nuclear fuel assemblies were measured at the Finnish nuclear power plants with the PGET method [3,4]. The imaged fuel has had a range of characteristics and covers 10 different geometrical types. The measurement campaigns have concentrated on refining the measurement parameters to improve the detection of possible empty rod positions. Data acquisition gamma energy windows have been fine-tuned, different subsets of angles out of the full 360-angle data have been used in the reconstructions, and different methods for quantitatively assessing image quality have been developed. Even the use of less abundant but higher-energy, more penetrating gamma-rays was investigated to improve the detection of missing rods in the central parts of the fuel.

The PGET method has been shown to detect individual missing rods with high confidence and has even demonstrated the ability to reproduce intra-rod activity differences. We have also shown that the method is able to distinguish activity differences in the axial direction of the fuel, demonstrated with a set of axial measurements conducted over a fuel assembly with partial-length fuel rods.

A variety of results from the measurement campaigns will be presented, illustrating the usability of the method for safeguards purposes in the Finnish context.

[1] www.posiva.fi

[2] R. Backholm, T. A. Bubba, C. Bélanger-Champagne, T. Helin, P. Dendooven, S. Siltanen. Simultaneous reconstruction of emission and attenuation in passive gamma emission tomography of spent nuclear fuel, Inv. Probl. Imag. 14: 317-337, 2020.

[3] P. Dendooven, T.A. Bubba. Gamma ray emission imaging in the medical and nuclear safeguards fields, Lecture Notes in Physics 1005: 245-295, 2022.

[4] R. Virta, R. Backholm, T. A. Bubba, T. Helin, M. Moring, S. Siltanen, P. Dendooven, T. Honkamaa. Fuel rod classification from Passive Gamma Emission Tomography (PGET) of spent nuclear fuel assemblies, ESARDA Bulletin 61: 10-21, 2020.

[5] R. Virta, T. A. Bubba, M. Moring, S. Siltanen, T. Honkamaa, P. Dendooven. Improved Passive Gamma Emission Tomography image quality in the central region of spent nuclear fuel, Scientific Reports 12: 12473, 2022.


Bayesian modelling and inference for radiation source localisation

Cécilia Tarpau1,2, Ming Fang3, Yoann Altmann4, Angela Di Fulvio3, Marcelo Pereyra1,2, Konstantinos Zygalakis2,5

1School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, United Kingdom; 2Maxwell Institute for Mathematical Sciences, University of Edinburgh, Edinburgh, United Kingdom; 3Department of Nuclear, Plasma and Radiological Engineering, University of Illinois Urbana Champaign, Champaign, United States; 4School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh, United Kingdom; 5School of Mathematics, University of Edinburgh, Edinburgh, United Kingdom

In this work, we study a Compton imager made of an array of scintillation crystals. This imaging system differs from a more classical Compton camera in that the sensor array acts simultaneously as a set of scatterers and absorbers. From the recorded data, the objective is to localise the positions of the point-like sources responsible for the emission of the measured radiation. The inverse problem is formulated within a Bayesian framework, and a Markov chain Monte Carlo method is investigated to infer the source locations.


Image reconstruction for Passive Gamma Emission Tomography of spent nuclear fuel

Peter Dendooven1, Riina Virta1,3, Tatiana A. Bubba2, Mikael Moring3, Samuli Siltanen4, Tapani Honkamaa3

1Helsinki Institute of Physics, University of Helsinki, Finland; 2Department of Mathematical Sciences, University of Bath, UK; 3Radiation and Nuclear Safety Authority (STUK), Vantaa, Finland; 4Department of Mathematics and Statistics, University of Helsinki, Finland

A Passive Gamma Emission Tomography (PGET) device is part of the IAEA-approved tools for safeguards inspections of spent nuclear fuel assemblies. In Finland, PGET has been selected to be part of the nuclear safeguards procedures at the geological repository for spent nuclear fuel (SNF), ONKALO [1]. In recent years, we have developed the PGET method for this purpose. This contribution will focus on the data analysis and image reconstruction methods. It will show how the methods chosen are dictated and influenced by the requirements and the physics of the application, as well as the characteristics of the tomographic device that is being used. The characteristics and performance of the reconstruction algorithm will be illustrated with examples from PGET measurements at the SNF storage pools at the Finnish nuclear power plants. The design and operation of the PGET device and the most important results will be discussed in a separate contribution to this minisymposium.

A safeguards inspection aims to verify that all nuclear material is present as declared, to assure that none has been diverted for non-declared use, most notably the development of nuclear weapons. Because of this requirement, a PGET measurement should not assume any prior information on the object under tomographic investigation. An SNF assembly consists essentially of rods of highly radioactive uranium dioxide, highly attenuating for gamma rays, immersed in non-radioactive water with much lower gamma ray attenuation. Good images thus require some form of attenuation correction. It is in practice very challenging to independently measure an attenuation map of SNF, e.g. by transmission tomography. Also, given the binary nature of the object (fuel rods and water), a good attenuation map needs knowledge of the geometry of the SNF. We have dealt with this conflict between the need for a good attenuation map and the requirement not to use prior information by developing an image reconstruction algorithm that reconstructs a gamma ray emission image and an attenuation image simultaneously, mathematically treating both in the same way.

The image reconstruction involves two steps. The first step is a filtered back-projection (FBP), which uses no prior information at all. Experimentally, we observe that the quality of the FBP image is good enough to deduce the SNF assembly geometry. The second step, which produces the final image, is an iterative image reconstruction algorithm that uses the knowledge of the assembly geometry as a regularization term, thus favouring images that resemble the SNF assembly type identified from the FBP image in step 1. The reconstruction problem is formulated as a constrained minimization problem with a least-squares data mismatch term (i.e., it implicitly assumes a Gaussian distribution for the noise) and several regularization terms. Next to the geometry regularization term, there are two terms related to corrections for the variation of the sensitivity among the detectors. Physics knowledge is used to establish upper and lower bounds on the image of attenuation coefficients: the attenuation image is constrained to values between the attenuation coefficient of water (lower bound) and of uranium dioxide (upper bound) at the relevant gamma ray energy. Most often, imaging is focused on the 662 keV gamma rays emitted by 137Cs, the dominant gamma ray emitter in SNF. The PGET image reconstruction method and its practical implementation will be discussed in some detail. Full details are given in [2-4].
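As a purely illustrative sketch of the constrained-minimization structure just described (not the authors' implementation: the operator A, the sinogram y and the bounds are placeholders, and the true PGET forward model couples emission and attenuation nonlinearly), one box-constrained least-squares fit can be written as a projected gradient iteration:

    import numpy as np

    def box_constrained_lsq(A, y, mu_water, mu_uo2, n_iter=200):
        """Sketch: minimize 0.5*||A x - y||^2 over x = [emission, attenuation]
        with non-negative emission and attenuation clipped between the water
        and UO2 attenuation coefficients at the imaging energy (e.g. 662 keV)."""
        n = A.shape[1]
        half = n // 2                                  # first half: emission, second half: attenuation
        x = np.zeros(n)
        step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1/L, L = Lipschitz constant of the gradient
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)                   # gradient of the data-mismatch term
            x -= step * grad
            x[:half] = np.maximum(x[:half], 0.0)       # non-negative emission
            x[half:] = np.clip(x[half:], mu_water, mu_uo2)  # physics-based box constraint
        return x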

Since 2017, over 100 different SNF assemblies have been measured at the Finnish nuclear power plants. Some representative examples from this vast data set will be used to highlight the performance of the image reconstruction method, especially in identifying missing fuel rods [3,5].

Points for improvement that have been identified over the past few years will be discussed. These include, e.g., careful selection of the set of viewing angles, careful selection of the gamma ray energy windows, and combining sinograms from different gamma ray energy windows. Improving imaging of the centre of spent fuel assemblies is a major development goal.

[1] www.posiva.fi

[2] P. Dendooven, T. A. Bubba. Gamma ray emission imaging in the medical and nuclear safeguards fields, Lecture Notes in Physics 1005: 245-295, 2022.

[3] R. Virta, R. Backholm, T. A. Bubba, T. Helin, M. Moring, S. Siltanen, P. Dendooven, T. Honkamaa. Fuel rod classification from Passive Gamma Emission Tomography (PGET) of spent nuclear fuel assemblies, ESARDA Bulletin 61: 10-21, 2020.

[4] R. Backholm, T. A. Bubba, C. Bélanger-Champagne, T. Helin, P. Dendooven, S. Siltanen. Simultaneous reconstruction of emission and attenuation in passive gamma emission tomography of spent nuclear fuel, Inv. Probl. Imag. 14: 317-337, 2020.

[5] R. Virta, T. A. Bubba, M. Moring, S. Siltanen, T. Honkamaa, P. Dendooven. Improved Passive Gamma Emission Tomography image quality in the central region of spent nuclear fuel, Scientific Reports 12: 12473, 2022.


Exact inversion of an integral transform arising in passive detection of gamma-ray sources with a Compton camera

Fatma Terzioglu

NC State University, United States of America

This talk addresses the overdetermined problem of inverting the n-dimensional cone (or Compton) transform that integrates a function over conical surfaces in $\mathbb{R}^n$. The study of the cone transform originates from Compton camera imaging, a nuclear imaging method for the passive detection of gamma-ray sources. We present a new identity relating the n-dimensional cone and Radon transforms through spherical convolutions with arbitrary weight functions. This relationship leads to various inversion formulas in n-dimensions under a mild assumption on the geometry of detectors. We present two such formulas along with the results of their numerical implementation using synthetic phantoms.
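For orientation, one common unweighted parametrization of the cone transform, with vertex $u \in \mathbb{R}^n$, central axis $\beta \in S^{n-1}$ and half-opening angle $\psi \in (0,\pi)$, is the surface integral

$$ Cf(u,\beta,\psi) \;=\; \int_{\{x \,:\, (x-u)\cdot\beta \,=\, |x-u|\cos\psi\}} f(x)\, \mathrm{d}S(x); $$

the weighted versions and the convolution identity with the Radon transform discussed in the talk generalize this basic form.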
 
4:00pm - 6:00pm
MS43 2: Inverse Problems in radiation protection and nuclear safety
Location: VG1.108
Session Chair: Lorenz Kuger
Session Chair: Samuli Siltanen
 

Gamma spectrum analysis in nuclear decommissioning

Michelle Bruch1, Lorenz Kuger1,2, Martin Burger2,3

1Friedrich-Alexander-Universität Erlangen-Nürnberg; 2Deutsches Elektronen-Synchrotron DESY; 3Universität Hamburg

In the radiological characterisation of nuclear power stations, gamma spectroscopy forms the basis for many further investigation methods. The measurements of scintillation detectors in, e.g., a Compton camera can be used to identify a priori which radioactive nuclides are present. We formulate this gamma spectrum analysis problem as a Bayesian inverse problem with Poisson-distributed data. Techniques from convex analysis are used to compute the resulting maximum likelihood estimator, given by a list of present nuclides and their corresponding intensities. The approach is tested on coincidence data measured with a Compton camera in potential use cases.
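As a purely illustrative sketch of such a Poisson maximum-likelihood fit (not the authors' convex-analysis solver; the template matrix S, holding one reference spectrum per candidate nuclide, and the measured spectrum y are placeholders), non-negative nuclide intensities can be estimated with multiplicative EM-type updates:

    import numpy as np

    def poisson_mle_intensities(S, y, n_iter=500):
        """Sketch: fit y ~ Poisson(S @ a), with S >= 0 containing one reference
        spectrum per column (candidate nuclide) and a >= 0 the unknown
        intensities; multiplicative updates preserve non-negativity."""
        a = np.ones(S.shape[1])                       # flat initial guess
        col_sums = np.maximum(S.sum(axis=0), 1e-12)   # per-nuclide sensitivity
        for _ in range(n_iter):
            ratio = y / np.maximum(S @ a, 1e-12)      # measured / expected counts
            a *= (S.T @ ratio) / col_sums             # EM (MLEM-type) update
        return a                                      # near-zero entries: nuclide absent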



Practical gamma ray imaging with monolithic scintillation detector Compton cameras

Lorenz Kuger1,2, Martin Burger2,3

1FAU Erlangen-Nürnberg, Germany; 2Deutsches Elektronen-Synchrotron DESY, Germany; 3Universität Hamburg, Germany

Compton cameras are stationary, uncollimated gamma ray imaging devices that use the Compton effect to reconstruct a spatially resolved activity distribution. Owing to the absence of collimation, the cameras exhibit high sensitivity, particularly for setups with large detectors placed close to each other. This allows reconstruction in relatively low-count regimes and hence flexible application in areas of low activity. For cameras with spatially non-resolved detectors, however, the size of the detectors results in large angular uncertainty and in measurements distorted by multiple-scattering contributions in the data. In this talk, we address the design and corresponding modeling approaches of Compton cameras with such monolithic, spatially non-resolved scintillation detectors. Numerical results on measured data support the theoretical considerations.


Machine Learning Techniques applied to Compton Cameras

Sibylle Petrak, Karsten Hölzer

Hellma Materials GmbH, Germany

Compton cameras have a long tradition in $\gamma$-ray astronomy and increasingly find new applications in radiation protection and nuclear safety. We have built three prototypes of Compton cameras to assist the decommissioning process of safely removing a nuclear facility from service and reducing residual radioactivity to permissible levels. In this talk, the inverse problem of Compton cameras is addressed with two techniques, one based on a Bayesian framework, and another graph-based approach that makes full use of the discrete nature of ionizing radiation interactions with matter. We have implemented a new physics concept in our Compton cameras whereby we no longer label radiation detectors according to their function as either scattering or absorbing detectors but rather characterize them by their materials, most importantly their effective atomic number $Z_\text{eff}$. As essentially no detector exists that would exclusively absorb radiation, we propose to record coincidence events between all pairs of detectors in which at least one detector material has $Z_\text{eff} > 30$. This new trigger condition includes coincidences of detector pairs where both materials have $Z_\text{eff} > 30$ which would traditionally be labeled absorbing detectors and would normally not be recorded by a Compton camera. These changes in the electronics setup of our Compton cameras yield an enlarged data sample available for subsequent inversion treatment.
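As a toy illustration of the relaxed trigger condition described above (names invented for the sketch, not taken from the camera electronics), a coincidence between two detectors is kept whenever at least one effective atomic number exceeds the threshold:

    def record_coincidence(z_eff_1, z_eff_2, threshold=30.0):
        """Sketch: keep the event if at least one detector material has Z_eff
        above the threshold, so absorber-absorber pairs are recorded as well."""
        return max(z_eff_1, z_eff_2) > threshold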

We will present experimental results obtained with the relevance vector machine and a graph heuristic used for assigning coincidence events to emission points. The measurements were carried out at the radiation laboratory of the University of Applied Sciences Zittau/Görlitz. We gratefully acknowledge financial support by the Federal Ministry of Education and Research (BMBF) through the FORKA program under Grant No. 15S9431A-D.

 
Date: Tuesday, 05/Sept/2023
1:30pm - 3:30pm
MS28 1: Modelling and optimisation in non-Euclidean settings for inverse problems
Location: VG1.108
Session Chair: Luca Calatroni
Session Chair: Claudio Estatico
Session Chair: Dirk Lorenz
 

A lifted Bregman formulation for the inversion of deep neural networks

Xiaoyu Wang1, Martin Benning2,3

1University of Cambridge, United Kingdom; 2Queen Mary University of London, United Kingdom; 3The Alan Turing Institute, United Kingdom

We propose a novel framework for the regularised inversion of deep neural networks. The framework is based on the authors' recent work on training feed-forward neural networks without the differentiation of activation functions. The framework lifts the parameter space into a higher dimensional space by introducing auxiliary variables and penalises these variables with tailored Bregman distances. We propose a family of variational regularisations based on these Bregman distances, present theoretical results and support their practical application with numerical examples. In particular, we present the first convergence result (to the best of our knowledge) for the regularised inversion of a single-layer perceptron that only assumes that the solution of the inverse problem is in the range of the regularisation operator.


A Bregman-Kaczmarz method for nonlinear systems of equations

Maximilian Winkler

TU Braunschweig, Germany

We propose a new randomized method for solving systems of nonlinear equations, which can find sparse solutions or solutions under certain simple constraints. The scheme only takes gradients of component functions and uses exact or relaxed Bregman projections onto the solution space of a Newton equation. As such, it generalizes the sparse Kaczmarz method, which finds sparse solutions to linear equations, as well as the nonlinear Kaczmarz method, which performs Euclidean projections. The relaxed Bregman projection is achieved by using the step size from the nonlinear Kaczmarz method. Local convergence is established for systems with full-rank Jacobian under the local tangential cone condition. We show examples in which the proposed method outperforms similar methods with the same memory requirements.
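As a rough sketch of the iteration structure (assuming the l1-plus-quadratic potential underlying the sparse Kaczmarz method; the exact Bregman projection, relaxation and step-size rule of the talk are not reproduced), one step projects onto the Newton hyperplane of a randomly chosen component and then soft-thresholds:

    import numpy as np

    def soft_threshold(z, lam):
        return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

    def nonlinear_sparse_kaczmarz(fs, grads, x0, lam=0.1, n_iter=1000, seed=0):
        """Sketch: seek a sparse solution of f_i(x) = 0, i = 1..m, where fs[i](x)
        evaluates the i-th component and grads[i](x) its gradient. The dual
        iterate is updated along the Newton hyperplane, the primal by shrinkage."""
        rng = np.random.default_rng(seed)
        x_star = x0.copy()                              # dual iterate
        x = soft_threshold(x_star, lam)                 # primal iterate
        for _ in range(n_iter):
            i = rng.integers(len(fs))
            g = grads[i](x)
            denom = float(g @ g)
            if denom == 0.0:
                continue
            x_star = x_star - (fs[i](x) / denom) * g    # step towards the Newton hyperplane
            x = soft_threshold(x_star, lam)             # back to the primal via shrinkage
        return x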


Regularization in non-Euclidean spaces meets numerical linear algebra

Claudio Estatico1, Brigida Bonino2, Luca Calatroni3, Fabio Di Benedetto1, Marta Lazzaretti1, Flavia Lenti4

1University of Genoa, Italy; 2Istituto di Matematica Applicata e Tecnologie Informatiche, Italy; 3Laboratory of Computer Science, Signals and Systems of Sophia Antipolis, France; 4Eumetsat, Germany

We consider inverse problems modeled by a functional equation $A(x)=y$ with an ill-posed operator $A:X \longrightarrow Y$ between two non-Euclidean normed spaces $X$ and $Y$. The iterative minimization of a functional based on the residual $\|A(x)-y\|_Y$ is a common approach in this setting, where generally no (closed form of the) inverse of $A$ exists. In particular, one-step gradient methods act as implicit regularization algorithms when combined with an early-stopping criterion to prevent over-fitting of the noise on the data $y$.

In this talk, we review iterative methods involving the dual spaces $X^*$ and $Y^*$, showing that they can be fully understood in the context of proximal operator theory, with suitable Bregman distances as proximity measures [1]. Moreover, many relationships of such iterative methods with classical projection algorithms, such as the Cimmino and ART (Algebraic Reconstruction Technique) algorithms, are discussed, as well as with classical preconditioning theory for structured linear systems arising in numerical linear algebra. Applications to deblurring and inverse scattering problems will be shown.

[1] B. Bonino, C. Estatico, M. Lazzaretti. Dual descent regularization algorithms in variable exponent Lebesgue spaces for imaging, Numer. Algorithms 92: 149-182, 2023.

[2] M. Lazzaretti, L. Calatroni, C. Estatico. Modular-proximal gradient algorithms in variable exponent Lebesgue spaces, SIAM J. Sci. Comput. 44: 1-23, 2022.
 
4:00pm - 6:00pm
MS28 2: Modelling and optimisation in non-Euclidean settings for inverse problems
Location: VG1.108
Session Chair: Luca Calatroni
Session Chair: Claudio Estatico
Session Chair: Dirk Lorenz
 

Gradient descent-based algorithms for inverse problems in variable exponent Lebesgue spaces

Marta Lazzaretti1,3, Zeljko Kereta2, Luca Calatroni3, Claudio Estatico1

1Dip. di Matematica (DIMA), Università di Genova, Italy; 2Dept. of Computer Science, University College London, UK; 3CNRS, UCA, Inria, Laboratoire I3S, Sophia-Antipolis, France

Variable exponent Lebesgue spaces $\ell^{(p_n)}$ have recently been shown to be an appropriate functional framework for enforcing pixel-adaptive regularisation in signal and image processing applications (see [1]), in combination with gradient descent (GD) or proximal GD strategies. Compared to standard Hilbert or Euclidean settings, however, the application of these algorithms in the Banach setting of $\ell^{(p_n)}$ is not straightforward, due to the lack of a closed-form expression for the underlying norm and its non-separability. We propose to use the associated separable modular function [2, 3], instead of the norm, to define GD-based algorithms in $\ell^{(p_n)}$, and we consider a stochastic GD [3, 4] to reduce the per-iteration cost of the iterative schemes, which are used to solve real-world linear inverse image reconstruction problems.
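For context, in the standard convention the modular referred to here and the (non-separable) Luxemburg norm it induces are

$$ \rho_{(p_n)}(x) = \sum_{n} |x_n|^{p_n}, \qquad \|x\|_{(p_n)} = \inf\big\{ \lambda > 0 : \rho_{(p_n)}(x/\lambda) \le 1 \big\}, $$

so the modular is separable across coordinates while the norm is defined only implicitly; this is precisely the closed-form gap that the modular-based algorithms above exploit.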

[1] B. Bonino, C. Estatico, and M. Lazzaretti. Dual descent regularization algorithms in variable exponent Lebesgue spaces for imaging, Numer. Algorithms 92(6), 2023.

[2] M. Lazzaretti, L. Calatroni, and C. Estatico. Modular-proximal gradient algorithms in variable exponent Lebesgue spaces, SIAM J. Sci. Comput. 44(6), 2022.

[3] M. Lazzaretti, Z. Kereta, L. Calatroni, and C. Estatico. Stochastic gradient descent for linear inverse problems in variable exponent Lebesgue spaces, 2023. [https://arxiv.org/abs/2303.09182]

[4] Z. Kereta, and B. Jin. On the convergence of stochastic gradient descent for linear inverse problems in Banach spaces, SIAM J. Imaging Sci. (in press), 2023.



Multiscale hierarchical decomposition methods for images corrupted by multiplicative noise

Joel Barnett, Wen Li, Elena Resmerita, Luminita Vese

University of Klagenfurt, Austria

Recovering images corrupted by multiplicative noise is a well-known, challenging task. Motivated by the success of multiscale hierarchical decomposition methods (MHDM) in image processing, we adapt a variety of both classical and new multiplicative-noise-removal models to the MHDM form. We discuss well-definedness and convergence of the proposed methods. Through comprehensive numerical experiments and comparisons, we qualitatively and quantitatively evaluate the validity of the considered models. By construction, these multiplicative multiscale hierarchical decomposition methods have the added benefit of recovering many scales of an image, which can provide features of interest beyond image restoration.
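For orientation, the classical additive MHDM of Tadmor, Nezzar and Vese, which the models above adapt to the multiplicative setting, iterates with an increasing sequence of fidelity weights (typically $\lambda_k = 2^k \lambda_0$):

$$ u_k = \arg\min_u \Big\{ \mathrm{TV}(u) + \lambda_k \| v_{k-1} - u \|_{L^2}^2 \Big\}, \qquad v_k = v_{k-1} - u_k, \qquad v_{-1} = f, $$

so that $f \approx u_0 + u_1 + \dots + u_K$; the multiplicative variants of the talk modify the decomposition and the fidelity term accordingly.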


Proximal point algorithm in spaces with semidefinite inner product

Emanuele Naldi1, Enis Chenchene2, Dirk A. Lorenz1, Jannis Marquardt1

1TU Braunschweig, Germany; 2University of Graz, Austria

We introduce proximal point algorithms in spaces with semidefinite inner products, focusing in particular on products induced by self-adjoint positive semidefinite linear operators defined on Hilbert spaces. We show convergence of the algorithm under suitable conditions and investigate applications to splitting methods. As a first application, we devise new schemes for finding minimizers of the sum of many convex lower semicontinuous functions and show applications of these schemes to congested transport and to distributed optimization in the context of Support Vector Machines, investigating their behavior. Finally, we analyze the convergence of the proximal point algorithm when the (semidefinite) metric is allowed to vary at each iteration, and we discuss applications of this analysis to the primal-dual Douglas-Rachford scheme, investigating adaptive step sizes for the method.
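In the notation of a self-adjoint positive semidefinite operator $M$ inducing the semi-inner product $\langle x, y \rangle_M = \langle M x, y \rangle$, the basic step under discussion is of the familiar preconditioned form (a sketch without the varying-metric and splitting refinements of the talk):

$$ x_{k+1} \in \arg\min_x \Big\{ f(x) + \tfrac{1}{2} \langle M(x - x_k), x - x_k \rangle \Big\}, \qquad \text{i.e.} \qquad 0 \in \partial f(x_{k+1}) + M (x_{k+1} - x_k). $$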


Asymptotic linear convergence of fully-corrective generalized conditional gradient methods

Kristian Bredies1, Marcello Carioni2, Silvio Fanzon1, Daniel Walter3

1University of Graz, Austria; 2University of Twente, The Netherlands; 3Humboldt-Universität zu Berlin, Germany

We discuss a fully-corrective generalized conditional gradient (FC-GCG) method [1] for the minimization of Tikhonov functionals associated with a linear inverse problem, a convex discrepancy and a convex one-homogeneous regularizer over a Banach space. The algorithm alternates between updating a finite set of extremal points of the unit ball of the regularizer [2] and optimizing on the conical hull of these extremal points, where each iteration requires the solution of one linear problem and one finite-dimensional convex minimization problem. We show that the method converges sublinearly to a solution and that imposing additional assumptions on the associated dual variables accelerates the method to a linear rate of convergence. The proofs rely on lifting the considered problem, via Choquet's theorem, to a particular space of Radon measures, as well as on the equivalence of the FC-GCG method to a primal-dual active point (PDAP) method for which linear convergence was recently established. Finally, we present application scenarios where the stated assumptions for accelerated convergence can be satisfied [3].

[1] Kristian Bredies, Marcello Carioni, Silvio Fanzon and Daniel Walter. Asymptotic linear convergence of fully-corrective generalized conditional gradient methods, 2021. [ArXiv preprint 2110.06756]

[2] Kristian Bredies and Marcello Carioni. Sparsity of solutions for variational inverse problems with finite-dimensional data, Calculus of Variations and Partial Differential Equations 59(14), 2020.

[3] Kristian Bredies, Marcello Carioni, Silvio Fanzon and Francisco Romero. A generalized conditional gradient method for dynamic inverse problems with optimal transport regularization, Foundations of Computational Mathematics, 2022.
 
Date: Wednesday, 06/Sept/2023
9:00am - 11:00am
MS11: Defying the Curse of Dimensionality – Theory and Algorithms for Large Dimensional Bayesian Inversion
Location: VG1.108
Session Chair: Rafael Flock
Session Chair: Yiqiu Dong
 

Efficient high-dimensional Bayesian multi-fidelity inverse analysis for expensive legacy solvers

Jonas Nitzler1,2, Wolfgang A. Wall1, Phaedon-Stelios Koutsourelakis2

1Institute for Computational Mechanics, Technical University of Munich, Germany; 2Professorship of Data-driven Materials Modeling, Technical University of Munich, Germany

Bayesian inverse analysis can be computationally burdensome when dealing with large-scale numerical models that depend on high-dimensional stochastic input, especially when model derivatives are unavailable, as is the case for many high-fidelity legacy codes. To overcome these limitations, we introduce a novel approach called Bayesian multi-fidelity inverse analysis (BMFIA), which utilizes computationally inexpensive lower-fidelity models to construct a multi-fidelity likelihood function. This function can be learned robustly, and potentially adaptively, from a few high-fidelity simulations (100-300). Our approach incorporates into the resulting multi-fidelity posterior the epistemic uncertainty stemming from the limited high-fidelity data and the information loss caused by the multi-fidelity approximation. BMFIA can handle non-linear dependencies between low- and high-fidelity models. In particular, when the former are differentiable, high-dimensional problems can be solved while the multi-fidelity likelihood maintains the high-fidelity accuracy. Bayesian inference is performed using state-of-the-art sampling-based or variational methods, which require only evaluations of the lower-fidelity model. We demonstrate the applicability of BMFIA to large-scale biomechanical and coupled multi-physics problems.


Goal-oriented Uncertainty Quantification for Inverse Problems via Variational Encoder-Decoder Networks

Julianne Chung1, Matthias Chung1, Babak Afkham2

1Emory University, United States of America; 2Technical University of Denmark, Denmark

In this work, we describe a new approach that uses variational encoder-decoder (VED) networks for efficient goal-oriented uncertainty quantification for inverse problems. Contrary to standard inverse problems, these approaches are goal-oriented in that the goal is to estimate some quantities of interest (QoI) that are functions of the solution of an inverse problem, rather than the solution itself. Moreover, we are interested in computing uncertainty metrics associated with the QoI, thus utilizing a Bayesian approach for inverse problems that incorporates the prediction operator and techniques for exploring the posterior. By harnessing recent advancements in the field of machine learning for large-scale inverse problems, in particular, by exploiting VED networks, we describe a data-driven approach for real-time goal-oriented uncertainty quantification for inverse problems.


Coupled Parameter and Data Dimension Reduction for Bayesian Inference

Qiao Chen1,2, Elise Arnaud1,2, Ricardo Baptista3, Olivier Zahm1,2

1Inria, France; 2Université Grenoble Alpes, France; 3California Institute of Technology, USA

We introduce a new method to reduce the dimension of the parameter and data space of high-dimensional Bayesian inverse problems. Commonly, different dimension reduction methods are applied separately to the two spaces. However, choosing a low-dimensional informed parameter subspace influences which data subspace is informative and vice versa. We thus propose a coupled method that, in addition, naturally reveals optimal experimental designs. Our method is based on the gradient of the forward operator of a Gaussian likelihood. It computes two projectors with an efficient and simple alternating singular value decomposition. Moreover, we control the approximation error through a certified $L^2$-error bound on the forward operator. We demonstrate the method on a large-scale Bayesian inverse problem in ocean modelling and use it to derive optimal sensor placements.


Certified Coordinate Selection for large-dimensional Bayesian Inversion

Rafael Flock1, Yiqiu Dong1, Olivier Zahm2, Felipe Uribe3

1Technical University of Denmark, Denmark; 2Univ. Grenoble Alpes, Inria; 3Lappeenranta-Lahti University of Technology

We present a method to solve large-dimensional Bayesian inverse problems in which the parameter vector is assumed to be sparse. To this end, we use a Laplace prior and show how the posterior density can be approximated based on a suitable coordinate splitting. Using an upper bound on the Hellinger distance between the exact and the approximated posterior density, we show how the coordinate splitting can be performed.

Sampling from the approximated posterior is then straightforward and very efficient. Our theoretical framework also allows sampling the exact posterior using a pseudo-marginal MCMC. However, this algorithm relies heavily on a good coordinate splitting, which might not be feasible in practice. We therefore propose a modified random-scan MCMC algorithm to sample from the exact posterior, which is more robust and flexible.

We first illustrate the methodology on a simple example and then demonstrate its practical applicability on a large-dimensional 2D deblurring problem.
 
Date: Thursday, 07/Sept/2023
1:30pm - 3:30pm
MS51 1: Analysis, numerical computation, and uncertainty quantification for stochastic PDE-based inverse problems
Location: VG1.108
Session Chair: Mirza Karamehmedovic
Session Chair: Faouzi Triki
 

Deep Learning Methods for Partial Differential Equations and Related Parameter Identification Problems

Derick Nganyu Tanyu1, Jianfeng Ning2, Tom Freudenberg1, Nick Heilenkötter1, Andreas Rademacher1, Uwe Iben3, Peter Maass1

1Centre for Industrial Mathematics, University of Bremen, Germany; 2School of Mathematics and Statistics, Wuhan University, China; 3Robert Bosch GmbH, Germany

Recent years have witnessed a growth in mathematics for deep learning, which seeks a deeper understanding of the concepts of deep learning with mathematics and explores how to make it more robust, and in deep learning for mathematics, where deep learning algorithms are used to solve problems in mathematics. The latter has popularised the field of scientific machine learning, where deep learning is applied to problems in scientific computing. Specifically, more and more neural network architectures have been developed to solve specific classes of partial differential equations (PDEs). Such methods exploit properties that are inherent to PDEs and thus solve the PDEs better than classical feed-forward neural networks, recurrent neural networks, and convolutional neural networks. This has had a great impact in the area of mathematical modeling, where parametric PDEs are widely used to model most natural and physical processes arising in science and engineering. In this work, we review such methods and extend them for parametric studies as well as for solving the related inverse problems. We also show their relevance in selected industrial applications.


Fourier method for inverse source problem using correlation of passive measurements

Kristoffer Linder-Steinlein1, Mirza Karamehmedović1, Faouzi Triki2

1Technical University of Denmark, Denmark; 2Laboratoire Jean Kuntzmann, Université Grenoble-Alpes, Grenoble, France

We consider the inverse source problem for a Cauchy wave equation with passive cross-correlation data. We propose to consider the cross-correlation as a wave equation itself and to reconstruct the cross-correlation in the support of the source for the original Cauchy wave equation. Having access to the cross-correlation in the support of the source, we use a Fourier method to reconstruct the source of the original Cauchy wave equation. We show that the inverse source problem is ill-posed and suffers from non-uniqueness when the mean of the source is zero, and we provide a uniqueness result and a stability estimate in the case of non-zero mean.


Feynman's inverse problem - an inverse problem for water waves

Adrian Kirkeby1, Mirza Karamehmedović2

1Simula Research Laboratory, Norway; 2Technical University of Denmark

We analyse an inverse problem for water waves proposed by Richard Feynman in the BBC documentary "Fun to imagine". We show how the presence of both gravity and capillary waves makes water an excellent medium for the propagation of information.


Inference in Stochastic Differential Equations using the Laplace Approximation

Uffe Høgsbro Thygesen

Technical University of Denmark, Denmark

We consider the problem of estimating solutions to systems of coupled stochastic differential equations, as well as the underlying system parameters, based on discrete-time measurements. We concentrate on the case where transition densities are not available in closed form, and focus on the technique of the Laplace approximation for integrating out unobserved state variables in a Bayesian setting. We demonstrate that the direct approach of inserting sufficiently many time points with unobserved states performs well when the noise is additive. A pitfall arises when the noise intensity in the state equation depends on the state variables, i.e., when the noise is not additive: in this case, maximizing the posterior density over unobserved states does not lead to useful state estimates (i.e., they are not consistent in the fine-time limit). This problem can be overcome by focusing instead on the minimum-energy realization of the noise process which is consistent with the data, which provides a connection to the calculus of variations. We demonstrate the theory with numerical examples.

 
4:00pm - 6:00pm
MS51 2: Analysis, numerical computation, and uncertainty quantification for stochastic PDE-based inverse problems
Location: VG1.108
Session Chair: Mirza Karamehmedovic
Session Chair: Faouzi Triki
 

Spectral properties of radiation for the Helmholtz equation with a random coefficient

Mirza Karamehmedovic, Kristoffer Linder-Steinlein

Technical University of Denmark, Denmark

For the Helmholtz equation with a Gaussian random field coefficient, we approximate and characterize spectrally the source-to-measurement map. To this end, we first analyze the case with a deterministic coefficient, and here discover and quantify a 'spectral leakage' effect. We compare the theoretically predicted forward operator spectrum with a Finite Element Method computation. Our results are applicable in the analysis of the robustness of the solution of inverse source problems in the presence of deterministic and random media.


Optimization under uncertainty for the Helmholtz equation with application to photonic nanojets configuration

Amal Alghamdi1, Peng Chen2, Mirza Karamehmedovic1

1Technical University of Denmark (DTU), Denmark; 2Georgia Institute of Technology, USA

Photonic nanojets (PNJs) have promising applications as optical probes in super-resolution optical microscopy, Raman microscopy, as well as fluorescence microscopy. In this work, we consider the optimal design of PNJs using a heterogeneous lens refractive index with a fixed lens geometry and uniform plane wave illumination. In particular, we account for manufacturing errors in the heterogeneous lens and propose a computational framework of Optimization Under Uncertainty (OUU) for robust optimal design of PNJs. We formulate a risk-averse stochastic optimization problem with the objective of minimizing both the mean and the variance of a target function, constrained by the Helmholtz equation that governs the 2D transverse electric (2D TE) electromagnetic field in a neighborhood of the lens. The design variable is taken as a spatially varying field variable, for which we use a finite element discretization, impose a total variation penalty to promote sparsity, and employ an adjoint-based BFGS method to solve the resulting high-dimensional optimization problem. We demonstrate that our proposed OUU computational framework achieves a more robust optimal design than a deterministic optimization scheme, significantly mitigating the impact of manufacturing uncertainty.
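Schematically, with $m$ the refractive-index design field, $\xi$ the manufacturing-error random variable, $Q$ the target functional and $\alpha, \beta > 0$ weights (symbols chosen here only for illustration), the risk-averse design problem has the form

$$ \min_m \; \mathbb{E}_\xi\big[ Q(m;\xi) \big] + \beta\, \mathrm{Var}_\xi\big[ Q(m;\xi) \big] + \alpha\, \mathrm{TV}(m), $$

subject to the 2D TE Helmholtz state equation for each realization of $\xi$.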


Posterior consistency for Bayesian inverse problems with piecewise constant inclusions

Babak Maboudi Afkham1, Kim Knudsen1, Aksel Rasmussen1, Tanja Tarvainen2

1Technical University of Denmark, Denmark; 2University of Eastern Finland

In Bayesian inverse problems, the aim is to recover the posterior distribution for the quantity of interest. This distribution is given in terms of the prior distribution, modeling a priori knowledge, and the likelihood distribution, modeling the noise. In many applications, one single estimator, e.g. the posterior mean, is desired and reported; however, it is crucial for the fundamental understanding that this estimator is consistent, meaning that it converges in probability to the ground truth when the noise level tends to zero.

In this talk we will explore these fundamental questions and see how consistency is indeed possible in the case of PDE-driven problems such as Photo-Acoustic Tomography with parametrized inclusions.



On uncertainty quantification for nonlinear inverse problems

Kui Ren

Columbia University, United States of America

We study some uncertainty quantification problems in nonlinear inverse coefficient problems for PDEs. We are interested in characterizing the impact of unknown parameters in the PDE models on the reconstructed coefficients. We argue that, unlike the situation in forward problems, uncertainty propagation in inverse problems is influenced by both the forward model and the inversion method used in the reconstructions. For ill-conditioned problems, errors in reconstructions can sometimes dominate the uncertainty caused by the unknown parameters in the model. Based on such observations, we will propose methods that quantify uncertainties more accurately than a generic method by compensating for the errors due to the reconstruction algorithms.
 
Date: Friday, 08/Sept/2023
1:30pm - 3:30pm
CT11: Contributed talks
Location: VG1.108
Session Chair: Housen Li
 

Extension and convergence of sixth order Jarratt-type method

Suma Panathale Bheemaiah

Manipal Institute of Technology, Manipal Academy of Higher Education, India

A Jarratt-type method of sixth-order convergence for solving nonlinear equations is considered. Weaker assumptions are made on the derivative of the involved operator than in earlier studies. The convergence analysis does not depend on Taylor series expansions, which increases the applicability of the proposed method. Numerical examples and basins of attraction of the method are provided in this study.
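For reference, a minimal sketch of the classical fourth-order Jarratt step [4], on which sixth-order Jarratt-type schemes build by appending a further correction sub-step (the specific sixth-order update of this study is not reproduced here):

    def jarratt_step(f, df, x):
        """One classical fourth-order Jarratt iteration for a scalar equation
        f(x) = 0, with derivative df; sixth-order Jarratt-type methods add a
        third sub-step to this update."""
        fx, dfx = f(x), df(x)
        y = x - (2.0 / 3.0) * fx / dfx                        # auxiliary point
        dfy = df(y)
        factor = (3.0 * dfy + dfx) / (6.0 * dfy - 2.0 * dfx)
        return x - factor * fx / dfx

    # usage sketch on f(x) = x**3 - 2 (root 2**(1/3))
    x = 1.0
    for _ in range(5):
        x = jarratt_step(lambda t: t**3 - 2.0, lambda t: 3.0 * t**2, x)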

[1] I. K. Argyros, S. Hilout. On the local convergence of fast two-step Newton-like methods for solving nonlinear equations, Journal of Computational and Applied Mathematics 245: 1-9, 2013.

[2] A. Cordero, M. A. Hernández-Verón, N. Romero, J. R. Torregrosa. Semilocal convergence by using recurrence relations for a fifth-order method in Banach spaces, Journal of Computational and Applied Mathematics 273: 205-213, 2015.

[3] S. George, I. K. Argyros, P. Jidesh, M. Mahapatra, M. Saeed. Convergence analysis of a fifth-order iterative method using recurrence relations and conditions on the first derivative, Mediterranean Journal of Mathematics 18: 1-12, 2021.

[4] P. Jarratt. Some fourth order multipoint iterative methods for solving equations, Mathematics of Computation 20: 434-437, 1966.

[5] H. Ren. On the local convergence of a deformed Newton's method under Argyros-type condition, Journal of Mathematical Analysis and Applications 321(1): 396-404, 2006.

[6] S. Singh, D. K. Gupta, E. Martínez, J. L. Hueso. Semilocal convergence analysis of an iteration of order five using recurrence relations in Banach spaces, Mediterranean Journal of Mathematics 13: 4219-4235, 2016.


Optimal design for aeroacoustics with correlation data

Christian Aarset, Thorsten Hohage

University of Göttingen, Germany

A key problem in aeroacoustics is the inverse problem of estimating an unknown random source from correlation data sampled by surrounding sensors. We study optimal design for this and related problems; that is, we identify sensor placements that minimise the covariance of the solution of the inverse random source problem while remaining sparse. To achieve this, we discuss the assumption of Gaussianity and how to adapt it to our setting of correlation data, and we demonstrate how this model can lead to sparse designs for aeroacoustic experiments.


Source separation for Electron Paramagnetic Resonance Imaging

Mehdi Boussâa, Rémy Abergel, Sylvain Durand, Yves Frapart

Université Paris Cité, France

Electron Paramagnetic Resonance Imaging (EPRI) is a versatile imaging modality that enables the study of free radical molecules or atoms, from materials $\textit{in vitro}$ to $\textit{in vivo}$ applications in biomedical research. Clinical applications are currently under investigation. While recent advancements in EPRI techniques have made it possible to study a single free radical, or source, inside the imaging device [1], the reconstruction of multiple sources, or source separation, remains a challenging task. The state-of-the-art technique relies heavily on time-consuming acquisitions and voxel-wise direct inverse methods, which are prone to artifacts and do not exploit the spatial consistency of the source images to be reconstructed. To address this issue, we propose a variational formulation of the source separation problem with a total variation prior, which emphasizes the spatial consistency of the sources. This approach drastically reduces the number of acquisitions needed without sacrificing the quality of the source separation. An EPRI experimental study has been conducted, and we will present some of the results obtained.
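In generic form (operator and weights named here only for illustration), the variational problem is of the type

$$ \min_{u_1,\dots,u_S} \; \tfrac{1}{2} \big\| \mathcal{A}(u_1,\dots,u_S) - y \big\|_2^2 + \lambda \sum_{s=1}^S \mathrm{TV}(u_s), $$

where the $u_s$ are the individual source images, $\mathcal{A}$ the EPRI acquisition model and $y$ the measured data; the TV terms encode the spatial consistency prior described above.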

[1] S. Durand, Y.-M. Frapart, M. Kerebel. Electron paramagnetic resonance image reconstruction with total variation and curvelets regularization. Inverse Problems, 33(11):114002, 2017.

 

 