Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only the sessions held on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

 
 
 
Session Overview
Location: VG3.102
Date: Monday, 04/Sept/2023
1:30pm - 3:30pm MS57 1: Inverse Problems in Time-Domain Imaging at the Small Scales
Location: VG3.102
Session Chair: Eric Bonnetier
Session Chair: Xinlin Cao
Session Chair: Mourad Sini
 

Inverse wave scattering in the time domain

Andrea Mantile2, Andrea Posilicano1

1DiSAT, Università dell'Insubria, Como, Italy; 2UMR9008 CNRS et Université de Reims Champagne-Ardenne, Reims, France

Let $\Delta_{\Lambda}\le \lambda_{\Lambda}$ be a semi-bounded self-adjoint realization of the Laplace operator with boundary conditions (Dirichlet, Neumann, semi-transparent) assigned on the Lipschitz boundary of a bounded obstacle $\Omega$. Let $u^{\Lambda}_{f}$ and $u^{0}_{f}$ denote the solutions of the wave equations corresponding to $\Delta_{\Lambda}$ and to the free Laplacian $\Delta$ respectively, with a source term $f$ concentrated at time $t=0$ (a pulse). We show that for any fixed $\lambda>\lambda_{\Lambda}\ge 0$ and any fixed $B\subset\subset{\mathbb R}^{n}\backslash\overline\Omega$, the obstacle $\Omega$ can be reconstructed from the scattering data operator $$ F^{\Lambda}_{\lambda}f(x):=\int_{0}^{\infty}e^{-\sqrt\lambda\,t}\big(u^{\Lambda}_{f}(t,x)-u^{0}_{f}(t,x)\big)\,dt\,,\qquad x\in B\,,\ f\in L^{2}({\mathbb R}^{n})\,,\ \mbox{supp}(f)\subset B\,. $$ A similar result holds for point scatterers; in this case, the locations of the scatterers are determined by an analog of $F^{\Lambda}_{\lambda}$ acting in a finite-dimensional space.
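A minimal numerical sketch of how $F^{\Lambda}_{\lambda}f$ could be approximated by quadrature in time, assuming sampled wave fields $u^{\Lambda}_{f}$ and $u^{0}_{f}$ on the observation set $B$ are supplied by a wave solver (random placeholder arrays stand in for them here):

```python
import numpy as np

# Approximate F^Lambda_lambda f(x) = int_0^infty e^{-sqrt(lambda) t} (u^Lambda_f - u^0_f) dt
# by a simple quadrature rule; the wave fields are placeholders for solver output.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 20.0, 2001)          # time grid, long enough for the exponential weight to decay
dt = t[1] - t[0]
n_obs = 50                                # number of observation points x in B
u_scat = rng.standard_normal((len(t), n_obs))   # placeholder for u^Lambda_f(t, x)
u_free = rng.standard_normal((len(t), n_obs))   # placeholder for u^0_f(t, x)

lam = 2.0                                 # fixed lambda > lambda_Lambda
weight = np.exp(-np.sqrt(lam) * t)[:, None]
F_data = dt * np.sum(weight * (u_scat - u_free), axis=0)   # values of F^Lambda_lambda f on B
```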


A new approach to an inverse source problem for the wave equation

Mourad Sini1, Haibing Wang2

1RICAM, Austrian Academy of Sciences, Austria; 2School of Mathematics, Southeast University, P.R. China

Consider an inverse problem of reconstructing a source term from boundary measurements for the wave equation. We propose a novel approach to recover the unknown source by measuring the wave fields after injecting small particles of high contrast into the medium. For this purpose, we first derive the asymptotic expansion of the wave field, based on the time-domain Lippmann-Schwinger equation. The dominant term in the asymptotic expansion is expressed as an infinite series in terms of the eigenvalues of the Newtonian operator (for the pure Laplacian). Such expansions are valid under a certain scaling between the size of the particles and their contrast. Second, we observe that the relevant eigenvalues appearing in the expansion have non-zero averaged eigenfunctions. By introducing a Riesz basis, we reconstruct the wave field, generated before injecting the particles, at the centers of the particles. Finally, from these fields, we reconstruct the source term. A significant advantage of our approach is that we only need measurements at a single point away from the support of the source.


Simultaneous Reconstruction Of Optical And Acoustical Properties In PA-Imaging Using Plasmonics.

Ahcene Ghandriche1, Mourad Sini2

1NCAM, China, People's Republic of; 2RICAM, Austrian Academy of Sciences.

We propose an approach for the simultaneous reconstruction of the electromagnetic and acoustic material parameters in the medium $\Omega$ to be imaged, using the photoacoustic pressure, measured at a single point of the boundary of $\Omega$, generated by plasmonic nano-particles. We prove that the generated pressure, which we denote by $p^{\star}(x, s, \omega)$ and which depends on a single fixed point $x \in \partial \Omega$, the time variable $s$ in a large enough interval, and the incidence frequency $\omega$ in a large enough band, is enough to reconstruct the sound speed, the mass density and the permittivity inside $\Omega$. Indeed, from the behaviour of the measured pressure in time, we can estimate the travel time of the pressure to points inside $\Omega$; using the eikonal equation, we then reconstruct the acoustic speed of propagation inside $\Omega$. In addition, we reconstruct the internal values of the acoustic Green’s function. From the singularity analysis of this Green’s function, we extract the integrals, along the geodesics to internal points, of the logarithmic gradient of the mass density. Solving this integral geometric problem provides us with the values of the mass density function inside $\Omega$. Finally, from the behaviour of $p^{\star}(x, s, \omega)$ with respect to the frequency $\omega$, we detect the generated plasmonic resonances, from which we reconstruct the permittivity inside $\Omega$.
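A minimal sketch of the sound-speed step described above, assuming a travel-time map $\tau$ has already been extracted from the arrival times of $p^{\star}(x, s, \omega)$; the eikonal equation $|\nabla\tau| = 1/c$ then gives the speed pointwise (synthetic placeholder data):

```python
import numpy as np

# Recover the acoustic speed from a travel-time map tau via |grad tau| = 1/c.
n, h = 128, 0.01
X, Y = np.meshgrid(np.arange(1, n + 1) * h, np.arange(1, n + 1) * h, indexing="ij")
tau = np.sqrt(X**2 + Y**2) / 1.5          # placeholder: point source in a medium with c = 1.5

gx, gy = np.gradient(tau, h, h)
c_rec = 1.0 / np.maximum(np.hypot(gx, gy), 1e-12)
print(c_rec.mean())                       # approximately 1.5
```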


Time domain analysis of body resonant-modes for classical waves

Andrea Mantile1, Andrea Posilicano2

1Université de Reims, France; 2Università dell'Insubria, Como-Varese, Italy

We consider the wave propagation in the time domain in the presence of small inhomogeneities having high contrast with respect to a homogeneous background. This can be interpreted as a reduced scalar model for the interaction of an electromagnetic wave with dielectric nanoparticles with high refractive indices. Such composite systems are known to exhibit a transition towards a resonant regime where an enhancement of the scattered wave can be observed at specific incoming frequencies, commonly referred to as body resonances. The asymptotic analysis of the stationary scattering problem, in the high-index nanoparticle regime, recently provided accurate estimates of the resonant frequencies and useful point-scatterer expansions for the solution in the far-field approximation. A key role in this analysis is played by the Newton operator related to the inhomogeneity support, whose eigenvalues identify the inverse resonant energies. We point out that a characterization of such singular frequencies in a proper spectral sense requires the spectral analysis of the Hamiltonian associated to the time-dependent problem. Here we focus on this problem by introducing the scale-dependent Hamiltonian of the time-evolution equation. In this framework, we consider the spectral profile with a particular focus on the generalized spectrum close to the branch-cut. We show that, in this region, the resonances are located in small neighbourhoods of the eigenvalues of the inverse Newton operator and provide accurate estimates for their imaginary parts. In particular, this allows a complete computation of the time-propagator in the asymptotic regime, providing in this way the full asymptotic expansion of the time-domain solution.
 
4:00pm - 6:00pm MS57 2: Inverse Problems in Time-Domain Imaging at the Small Scales
Location: VG3.102
Session Chair: Eric Bonnetier
Session Chair: Xinlin Cao
Session Chair: Mourad Sini
 

A mathematical theory of resolution limits for dynamic super-resolution in particle tracking problems

Ping Liu, Habib Ammari

ETH Zurich, Switzerland

Particle tracking in a live cell environment is concerned with reconstructing the trajectories, locations, or velocities of the target particles, which holds the promise of revealing important new biological insights. The standard approach of particle tracking consists of two steps: first reconstructing statically the source locations in each time step, and second applying tracking techniques to obtain the trajectories and velocities. In contrast to the standard approach, the dynamic reconstruction seeks to simultaneously recover the source locations and velocities from all frames, which enjoys certain advantages. In this talk, we will present a rigorous mathematical analysis for the resolution limit of reconstructing source number, locations, and velocities by general dynamical reconstruction in particle tracking problems, by which we demonstrate the possibility of achieving super-resolution for dynamic reconstruction. We show that when the location-velocity pairs of the particles are separated beyond certain distances (the resolution limits), the number of particles and the location-velocity pairs can be stably recovered. The resolution limits are related to the cut-off frequency of the imaging system, signal-to-noise ratio, and the sparsity of the source. Based on these estimates, we also derive a stability result for a sparsity-promoting dynamic reconstruction. In addition, we further show that the reconstruction of velocities has a better resolution limit which improves constantly as the particles move. The result is derived from a crucial observation that the inherent cut-off frequency for the velocity recovery can be viewed as the cut-off frequency of the imaging system multiplied by the total observation time, which may lead to a better resolution limit than the one for each diffraction-limited frame. In addition, we propose super-resolution algorithms for recovering the number and values of the velocities in the tracking problem and demonstrate theoretically or numerically their super-resolution capability.
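A toy data model illustrating the dynamic setting (illustrative only; the sampling scheme and names are not those of the talk): point sources with locations $x_j$ and velocities $v_j$ are observed through band-limited Fourier samples in each frame, and a velocity perturbation enters the data through the phase $\omega t\, v_j$, so the effective cut-off for the velocities grows with the total observation time:

```python
import numpy as np

# Frames of band-limited Fourier measurements of linearly moving point sources.
x = np.array([0.10, 0.13, 0.40])          # locations
v = np.array([0.02, -0.05, 0.01])         # velocities
a = np.array([1.0, 0.8, 1.2])             # amplitudes
Omega = 20.0                              # cut-off frequency of the imaging system
omegas = np.linspace(-Omega, Omega, 64)
times = np.linspace(0.0, 1.0, 10)         # total observation time T = 1

# frame at time t: y_t(omega) = sum_j a_j exp(-i omega (x_j + v_j t))
frames = np.array([(a * np.exp(-1j * np.outer(omegas, x + v * t))).sum(axis=1)
                   for t in times])
# Across frames a velocity offset dv changes the phase by omega * t * dv,
# i.e. the velocities are probed up to an effective cut-off Omega * T.
```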


Heat Generation Using Lorentzian Nanoparticles. The Full Maxwell System

Arpan Mukherjee1,2, Mourad Sini1

1Radon Institute (RICAM), Austrian Academy of Sciences, Austria; 2Johannes Kepler Universität Linz, Austria

We analyze and quantify the amount of heat generated by a nanoparticle, injected in a background medium, while excited by incident electromagnetic waves. These nanoparticles are dispersive, with electric permittivity following the Lorentz model. The purpose is to determine the quantity of heat generated extremely close to the nanoparticle (at a distance proportional to the radius of the nanoparticle). We show that by exciting the medium with incident frequencies close to the Plasmonic or Dielectric resonant frequencies, we can generate any desired amount of heat close to the injected nanoparticle while the amount of heat decreases away from it. These results offer a wide range of potential applications in the areas of photo-thermal therapy, drug delivery, and material science, to name a few. To do so, we employ time-domain integral equations and asymptotic analysis techniques to study the corresponding mathematical model for heat generation. This model is given by the heat equation where the body source term comes from the modulus of the electric field generated by the incident electromagnetic field. Therefore, we first analyze the dominant term of this electric field by studying the full Maxwell scattering problem in the presence of Plasmonic or All-dielectric nanoparticles. As a second step, we analyze the propagation of this dominant electric field in the estimation of the heat potential. For both the electromagnetic and parabolic models, the presence of the nanoparticles is translated into the appearance of large scales in the contrasts for the heat-conductivity (for the parabolic model) and the permittivity (for the full Maxwell system) between the nanoparticle and its surrounding.


Lipschitz stability for some inverse problems for a hyperbolic PDE with space and time dependent coefficients

Venkateswaran P. Krishnan1, Soumen Senapati2, Rakesh Rakesh3

1TIFR CAM, Bangalore, India; 2RICAM, Austria; 3University of Delaware, USA

We study stability aspects for the determination of space- and time-dependent lower order perturbations of the wave operator in three space dimensions with point sources. The problems under consideration here are formally determined, and we establish Lipschitz stability results for these problems. The main tool in our analysis is a modified version of the Bukhgeĭm-Klibanov method based on Carleman estimates.

[1] V. P. Krishnan, S. Senapati, Rakesh. Stability for a formally determined inverse problem for a hyperbolic PDE with space and time dependent coefficients, SIAM J. Math. Anal. 53, no. 6, 6822–6846, 2021.

[2] V. P. Krishnan, S. Senapati, Rakesh. Point sources and stability for an inverse problem for a hyperbolic PDE with space and time dependent coefficients, J. Differential Equations 342, 622–665, 2023.


Scattering of electromagnetic waves by small obstacles

Sébastien Tordeux

EPI Makutu, Pau University, Inria, LMAP UMR CNRS 5142

We develop fast, accurate and efficient numerical methods for solving the time-harmonic scattering problem of electromagnetic waves in 3D by a multitude of obstacles at low and medium frequencies. Taking into account a large number of heterogeneities can be costly in terms of computation time and memory usage, particularly in the construction of the matrix. We consider a multi-scale diffraction problem in the low-frequency regime, in which the characteristic length of the obstacles is small compared to the incident wavelength. We use the matched asymptotic expansion method, which allows for model reduction. Two types of approximations are distinguished: near-field or quasi-static approximations, which describe the phenomenon at the microscopic scale, and far-field approximations, which describe the phenomenon at a long distance. In the latter, small obstacles are no longer considered as geometric constraints and can be modelled by equivalent point sources, which are interpreted in terms of electromagnetic multipoles.

[1] J. Labat, V. Péron, S. Tordeux. Equivalent multipolar point-source modeling of small spheres for fast and accurate electromagnetic wave scattering computations, Wave Motion 92: 102409, 2020.
 
Date: Tuesday, 05/Sept/2023
1:30pm - 3:30pm MS49 1: Applied parameter identification in physics
Location: VG3.102
Session Chair: Tram Nguyen
Session Chair: Anne Wald
 

Photoacoustic imaging in acoustic attenuating media

Otmar Scherzer, Peter Elbau, Cong Shi

University Vienna, Austria

Acoustic attenuation describes the loss of energy of propagating waves. This effect is inherently frequency dependent. Typical attenuation models are derived phenomenologically and experimentally, without the use of conservation principles. Because of this general strategy, a zoo of models has been developed over the decades.

Photoacoustic imaging is a hybrid imaging technique where the object of interest is excited by a laser and the acoustic response of the medium is measured outside of the object. From this, the ability of the medium to convert laser excitation into acoustic waves is computationally reconstructed. For photoacoustics, which is a linear inverse problem, we will determine its spectral values, and we shall see that there are two kinds of attenuation models, resulting in mildly and severely ill-posed inverse photoacoustic problems.



Fully Stochastic Reconstruction Methods in Coupled Physics Imaging

Simon Robert Arridge

University College London, United Kingdom

Coupled Physics Imaging methods combine image contrast from one physical process with observations using a secondary process; several modalities in acousto-optical imaging follow this concept wherein optical contrast is observed with acoustic measurements. For the inverse problem, both an optical and an acoustic model need to be inverted. Classical methods that involve a non-linear optimisation approach can be combined with advances in stochastic subsampling strategies that are in part inspired by machine learning applications. In such approaches the forward problem is considered deterministic and the stochasticity involves splitting an objective function into sub-functions that approach the fully sampled problem in an expectation sense.

In this work we consider the case where the forward problem is also solved stochastically, by a Monte Carlo simulation of photon propagation. By adjusting the batch sizes in the forward and inverse problems together, we can achieve better performance than if the subsampling is performed separately.

Joint work with: S. Powell, C. Macdonald, N. Hänninen, A. Pulkkinen, T. Tarvainen
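A toy sketch of treating both problems stochastically (not the photon-transport model of the talk): the forward values are Monte Carlo expectations over random path lengths, and the data misfit is minimised over mini-batches, with the two sample sizes grown together:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical forward model F_i(mu) = E[exp(-mu * L_i)] with exponentially
# distributed path lengths L_i of mean l_i, estimated by Monte Carlo.
l = np.linspace(0.5, 2.0, 40)             # per-detector mean path lengths
mu_true = 0.8
data = 1.0 / (1.0 + mu_true * l)          # exact E[exp(-mu L)] for this toy model

mu, step0 = 0.1, 0.5
for k in range(300):
    batch = rng.choice(l.size, size=min(5 + k // 10, l.size), replace=False)
    n_photons = 100 + 20 * k              # grow the MC sample with the data batch
    L = rng.exponential(l[batch], size=(n_photons, batch.size))
    E = np.exp(-mu * L)
    F = E.mean(axis=0)                    # stochastic forward values
    dF = (-L * E).mean(axis=0)            # pathwise derivative dF/dmu
    grad = np.sum((F - data[batch]) * dF) # gradient of 0.5 * sum of squared residuals
    mu -= step0 / (1 + 0.05 * k) * grad
print(mu)                                 # approaches mu_true
```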


Some coefficient identification problems from boundary data satisfying range invariance for Newton type methods

Barbara Kaltenbacher

University of Klagenfurt, Austria

Range invariance is a property that - like the tangential cone condition - enables a proof of convergence of iterative methods for inverse problems. In contrast to the tangential cone condition, it can also be verified for some parameter identification problems in partial differential equations (PDEs) from boundary measurements, as relevant, e.g., in tomographic applications. The goal of this talk is to highlight some of these examples of coefficient identification from boundary observations in elliptic and parabolic PDEs, among them: combined diffusion and absorption identification (e.g., in steady-state diffuse optical tomography), reconstruction of a boundary coefficient (e.g., in corrosion detection), and reconstruction of a coefficient in a quasilinear wave equation (for nonlinearity coefficient imaging).
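As a reminder of the kind of iteration whose convergence such conditions guarantee, here is a generic sketch of the iteratively regularized Gauss-Newton method on a small hypothetical forward map (not one of the PDE problems above):

```python
import numpy as np

# Hypothetical smooth forward map F: R^2 -> R^3 and its Jacobian.
def F(x):
    return np.array([x[0]**2 + x[1], np.exp(x[1]), x[0] * x[1]])

def dF(x):
    return np.array([[2 * x[0], 1.0],
                     [0.0, np.exp(x[1])],
                     [x[1], x[0]]])

def irgnm(y, x0, n_iter=25, alpha0=1.0, q=0.7):
    """Each step solves a Tikhonov-regularized linearization about the iterate:
    (J^T J + alpha I) h = J^T (y - F(x)) + alpha (x0 - x)."""
    x, alpha = x0.copy(), alpha0
    for _ in range(n_iter):
        J, r = dF(x), y - F(x)
        h = np.linalg.solve(J.T @ J + alpha * np.eye(x.size),
                            J.T @ r + alpha * (x0 - x))
        x, alpha = x + h, q * alpha
    return x

x_true = np.array([1.2, 0.5])
y_noisy = F(x_true) + 1e-3 * np.random.default_rng(0).standard_normal(3)
print(irgnm(y_noisy, x0=np.zeros(2)))
```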



Traction force microscopy – a testbed for solving the inverse problem of elasticity

Ulrich Schwarz

Heidelberg University, Germany

During the last three decades, the new field of mechanobiology has demonstrated that mechanical forces play a key role for the decision making of biological cells. The standard way to estimate cell forces is traction force microscopy on soft elastic substrates, whose deformations can be tracked with fiducial marker beads. To infer the corresponding cell forces, one can either solve the inverse problem of elasticity, which usually is done in Fourier space, or calculate strain and stress tensors directly from the deformation data. In both cases, some type of regularization is required to deal with experimental noise. Here we discuss recent developments in this field, including microparticle traction force microscopy and machine learning approaches.
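A minimal sketch of the Fourier-space route, assuming the substrate response is modelled as a convolution $u = g * t$ of the traction with a Green's kernel (a scalar Gaussian placeholder here, not the elastic half-space solution used in practice), inverted with Tikhonov regularization mode by mode:

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 256, 1e-3
x = np.arange(n) - n // 2
g = np.exp(-0.5 * (x[:, None]**2 + x[None, :]**2) / 5.0**2)   # placeholder kernel
t_true = np.zeros((n, n)); t_true[100:110, 120:140] = 1.0      # synthetic traction patch

# Forward model: displacement = kernel * traction, plus measurement noise.
G = np.fft.fft2(np.fft.ifftshift(g))
u = np.real(np.fft.ifft2(G * np.fft.fft2(t_true)))
u += 0.01 * rng.standard_normal(u.shape)

# Tikhonov-regularized deconvolution, one Fourier mode at a time.
t_rec = np.real(np.fft.ifft2(np.conj(G) * np.fft.fft2(u) / (np.abs(G)**2 + lam)))
```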
 
4:00pm - 6:00pm MS49 2: Applied parameter identification in physics
Location: VG3.102
Session Chair: Tram Nguyen
Session Chair: Anne Wald
 

A phase-field approach to shape optimization of acoustic waves in dissipative media

Vanja Nikolic

Radboud University, The Netherlands

In this talk, we will discuss the problem of finding the optimal shape of a system of acoustic lenses in a dissipative medium. The problem is tackled by introducing a phase-field formulation through diffuse interfaces between the lenses and the surrounding fluid. The resulting formulation is shown to be well-posed and we rigorously derive first-order optimality conditions for this problem. Additionally, a relation between the diffuse interface problem and a perimeter-regularized sharp interface shape optimization problem can be established via the $\Gamma$-limit of the reduced objective.


Parameter identification in helioseismology

Damien Fournier

Max Planck Institute for Solar System Research, Germany

Helioseismology aims at recovering the properties of the solar interior from the observations of line-of-sight Doppler velocities at the surface. Interpreting these observations requires first to solve a forward problem describing the propagation of waves in a highly-stratified medium representing the interior of the Sun. Considering only acoustic waves, the forward problem can be written as $$\mathcal{L}\psi := -\frac{1}{\rho c^2} (\omega^2 + 2 i \omega \gamma + 2 i \omega \mathbf{u} \cdot \nabla) \psi - \nabla \left( \frac{1}{\rho} \nabla \psi \right) = s,$$ where $\rho$ is the density, $c$ the sound speed, $\mathbf{u}$ the flow and $\psi$ the Lagrangian pressure perturbation. The source term $s$ is stochastic and caused by the excitation of waves by convection. As the signal is incoherent, we cannot study directly the wavefield $\psi$ but only its cross-covariance $C(r_1,r_2,\omega) = \psi(r_1,\omega)^\ast \psi(r_2,\omega)$. Under the hypothesis of energy equipartition, the expectation value of the cross-covariance is proportional to the imaginary part of the Green's function associated to $\mathcal{L}$. The inverse problem is then to reconstruct the parameters $q \in \{\rho, c, \mathbf{u} \}$ from the observations of Im[$G(r_1,r_2,\omega)$] for any two points $r_1$, $r_2$ at the solar surface. To increase the signal-to-noise ratio and reduce the size of the input data, wave travel times are usually extracted from the cross-covariances and serve as input data for the inversions. We will present inversions of large-scale flows (differential rotation and meridional circulation) from travel-time measurements using synthetic and observed data.
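A minimal sketch of how the cross-covariance is formed in practice, averaging over independent realizations of the stochastically excited wavefield (random placeholders here instead of observed Doppler data):

```python
import numpy as np

# Estimate C(r1, r2, omega) = E[psi(r1, omega)^* psi(r2, omega)] at one frequency
# by averaging over realizations; psi is a placeholder for the observed wavefield.
rng = np.random.default_rng(0)
n_real, n_obs = 500, 64
psi = rng.standard_normal((n_real, n_obs)) + 1j * rng.standard_normal((n_real, n_obs))

C = (np.conj(psi)[:, :, None] * psi[:, None, :]).mean(axis=0)   # n_obs x n_obs matrix
# Under energy equipartition, the expectation of C is proportional to
# Im[G(r1, r2, omega)], from which travel times are then extracted.
```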


Parameter identification in magnetization models for large ensembles of magnetic nanoparticles

Hannes Albers, Tobias Kluth

University of Bremen, Germany

Magnetic particle imaging (MPI) is a tracer-based imaging modality which exploits the magnetization behavior of magnetic nanoparticles (MNPs) to obtain spatially distributed concentration images from voltage measurements. Proper modeling, which is still an unsolved problem in MPI, relies on magnetization dynamics of individual nanoparticles which typically include Néel and Brownian magnetic moment rotation dynamics. In the context of MPI, large ensembles of MNPs and their magnetization behavior need to be considered. Taking into account Néel/Brownian rotation, the ensemble's magnetization behavior can be described by a Fokker-Planck equation, i.e., a linear parabolic PDE which models the temporal evolution of the probability that the magnetic moment of a nanoparticle has a certain orientation. The resulting behavior is strongly influenced by time-dependent parameters in the PDE. In this talk we discuss the physical modeling as well as time-dependent parameter identification problems related to the magnetization dynamics based on a Fokker-Planck equation.


Lipschitz stable determination of polyhedral conductivity inclusions from local boundary measurements

Andrea Aspri

Università degli Studi di Milano, Italy

In this talk, we consider the problem of determining a polyhedral conductivity inclusion embedded in a homogeneous isotropic medium from boundary measurements. Specifically, we prove global Lipschitz stability for the polyhedral inclusion from the local Dirichlet-to-Neumann map.
 
Date: Wednesday, 06/Sept/2023
9:00am - 11:00am MS26 1: Trends and open problems in cryo electron microscopy
Location: VG3.102
Session Chair: Carlos Esteve-Yague
Session Chair: Johannes Schwab
 

Joint Cryo-ET Alignment and Reconstruction with Neural Deformation Fields

Valentin Debarnot1, Sidharth Gupta1,2, Konik Kothari1,2, Ivan Dokmanić1,2

1University of Basel, Switzerland; 2University of Illinois at Urbana-Champaign

We propose a framework to jointly determine the deformation parameters and reconstruct the unknown volume in electron cryotomography (CryoET). CryoET aims to reconstruct three-dimensional biological samples from two-dimensional projections. A major challenge is that we can only acquire projections for a limited range of tilts, and that each projection undergoes an unknown deformation during acquisition. Not accounting for these deformations results in poor reconstruction. The existing CryoET software packages attempt to align the projections, often in a workflow which uses manual feedback. Our proposed method sidesteps this inconvenience by automatically computing a set of undeformed projections while simultaneously reconstructing the unknown volume. We achieve this by learning a continuous representation of the undeformed measurements and deformation parameters. We show that our approach enables the recovery of high-frequency details that are destroyed without accounting for deformations.


Manifold-based Point Cloud Deformations: Theory and Applications to Protein Conformation Processing

Willem Diepeveen1, Carlos Esteve-Yagüe1, Jan Lellmann2, Ozan Öktem3, Carola-Bibiane Schönlieb1

1University of Cambridge, United Kingdom; 2University of Lübeck, Germany; 3KTH–Royal Institute of Technology, Sweden

Motivated by data analysis for protein conformations, we construct a smooth quotient manifold of point clouds and equip it with a non-trivial metric tensor field, that models which point clouds are close together and which are far apart. We analyse properties of the Riemannian manifold and obtain cheap to compute expressions for important manifold mappings. Furthermore, we investigate potential numerical advantages of using the Riemannian manifold structure in several data processing tasks such as interpolation, computing means and principal component analysis of simulated molecular dynamics (MD) data sets. For the latter, we observe that MD trajectories live in a low-dimensional sub-manifold in the proposed metric.


Spectral decomposition of atomic structures in heterogeneous cryo-EM

Carlos Esteve-Yague1, Willem Diepeveen1, Ozan Oktem2, Carola-Bibiane Schönlieb1

1University of Cambridge, United Kingdom; 2KTH Stockholm, Sweden

We consider the problem of recovering the three-dimensional atomic structure of a flexible macromolecule from a heterogeneous single-particle cryo-EM dataset. Our method combines prior biological knowledge about the macromolecule of interest with the cryo-EM images. The goal is to determine the deformation of the atomic structure in each image with respect to a specific conformation, which is assumed to be known. The prior biological knowledge is used to parametrize the space of possible atomic structures. The parameters corresponding to each conformation are then estimated as a linear combination of the leading eigenvectors of a graph Laplacian, constructed by means of the cryo-EM dataset, which approximates the spectral properties of the manifold of conformations of the underlying macromolecule.
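A minimal sketch of the spectral ingredient, assuming a per-image embedding is available (synthetic placeholders): build a Gaussian-affinity graph Laplacian over the dataset, take its leading (smoothest) eigenvectors, and fit per-image parameters as a linear combination of them:

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.standard_normal((500, 10))                  # placeholder per-image embeddings
d2 = ((Z[:, None, :] - Z[None, :, :])**2).sum(-1)   # pairwise squared distances
W = np.exp(-d2 / np.median(d2))                     # Gaussian affinity
L = np.diag(W.sum(axis=1)) - W                      # unnormalized graph Laplacian

evals, evecs = np.linalg.eigh(L)
B = evecs[:, :8]                                    # leading eigenvectors (smallest eigenvalues)

theta = rng.standard_normal(500)                    # placeholder per-image parameters
coef, *_ = np.linalg.lstsq(B, theta, rcond=None)    # coefficients of the linear combination
theta_smooth = B @ coef
```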
 
Date: Thursday, 07/Sept/2023
1:30pm - 3:30pm MS26 2: Trends and open problems in cryo electron microscopy
Location: VG3.102
Session Chair: Carlos Esteve-Yague
Session Chair: Johannes Schwab
 

High Dimensional Covariance Estimation in Cryo-EM

Marc Aurèle Gilles, Amit Singer

Princeton University, United States of America

Cryogenic electron microscopy (cryo-EM) is an imaging technique able to recover the 3D structures of proteins at near-atomic resolution. A unique characteristic of cryo-EM is the possibility of recovering the structure of flexible proteins in different conformations from a single electron microscopy image dataset. One way to estimate these conformations relies on estimating the covariance matrix of the scattering potential directly from the electron data. From that matrix, one can perform principal component analysis to recover the distribution of conformations of a protein. While theoretically attractive, this method has been constrained to low resolutions because of high storage and computational complexity; indeed, the covariance matrix contains $N^6$ entries where images are of size $N\times N$. In this talk, we present a new estimator for the covariance matrix and show that a rank-$k$ approximation of the covariance can be computed in $O(kN^3)$ operations. Finally, we demonstrate on simulated and real datasets that we can recover the conformations of structures at high resolution.


Bayesian random tomography

Michael Habeck

Jena University Hospital, Germany

The reconstruction problem in random tomography is to reconstruct a 3D volume from 2D projection images acquired in unknown random directions. Random tomography is a common problem in imaging science and highly relevant to cryo-electron microscopy. This talk outlines a Bayesian approach to random tomography [1, 2]. At the core of the approach is a meshless representation of the 3D volume based on a Gaussian radial basis function kernel. Each Gaussian can be interpreted as a particle such that the unknown volume is represented by a cloud of particles. The particle representation allows us to speed up the computation of projection images and to represent a large variety of molecular structures accurately and efficiently. Another innovation is the use of Markov chain Monte Carlo algorithms to infer the particle positions as well as the unknown orientations. Posterior sampling is challenging due to the high dimensionality and multimodality of the posterior distribution. We tackle these challenges by using Hamiltonian Monte Carlo and a recently developed Geodesic slice sampler [3]. We demonstrate the strengths of the approach on various simulated and real datasets.

[1] P. Joubert, M. Habeck. Bayesian inference of initial models in cryo-electron microscopy using pseudo-atoms, Biophysical journal 108(5): 1165-1175, 2015.

[2] N. Vakili, M. Habeck. Bayesian Random Tomography of Particle Systems, Frontiers in Molecular Biosciences 8, 2021. [658269]

[3] M. Habeck, M. Hasenpflug, S. Kodgirwar, D. Rudolf. Geodesic slice sampling on the sphere, arXiv preprint arXiv:2301.08056, 2023.
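A minimal sketch of the particle (Gaussian radial basis function) representation described above: since the projection of an isotropic 3D Gaussian along any direction is an isotropic 2D Gaussian at the projected center, a projection image of the particle cloud is a sum of 2D Gaussians (illustrative parameters; ideal projection without noise or CTF):

```python
import numpy as np

rng = np.random.default_rng(0)
centers = rng.uniform(-1.0, 1.0, size=(200, 3))   # particle positions (placeholders)
weights = rng.uniform(0.5, 1.0, size=200)
sigma = 0.05                                      # kernel width

def project(centers, weights, R, n=128, extent=1.5):
    """Ideal projection along the z-axis after rotating the particle cloud by R."""
    xy = (centers @ R.T)[:, :2]                   # rotate, then drop the z-coordinate
    grid = np.linspace(-extent, extent, n)
    X, Y = np.meshgrid(grid, grid, indexing="ij")
    img = np.zeros((n, n))
    for (cx, cy), w in zip(xy, weights):
        img += w * np.exp(-((X - cx)**2 + (Y - cy)**2) / (2 * sigma**2))
    return img

theta = 0.3                                       # rotation about the y-axis
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
image = project(centers, weights, R)
```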



Advancements and New Questions in Analysing the Geometry of Molecular Conformations in Cryo-EM

Roy R. Lederman

Yale University, United States of America

Cryo-Electron Microscopy (cryo-EM) is an imaging technology revolutionizing structural biology. One of the great promises of cryo-EM is to study mixtures of conformations of molecules. We will discuss recent advancements in the analysis of continuous heterogeneity - the continuum of conformations in flexible macromolecules. We will discuss some of the mathematical and technical questions arising from these recent algorithms.


Optimal transport: a promising tool for cryo-electron microscopy

Amit Moscovich

Tel Aviv University, Israel

Optimal transport is a branch of mathematics whose central problem is minimizing the cost of transporting a given source distribution to a target distribution. The Wasserstein metric is defined to be the cost of a minimizing transport plan. For mass distributions in Euclidean space, the Wasserstein metric is closely related to physical motion, making it a natural choice for many of the core problems in cryo-electron microscopy.

Historically, computational bottlenecks have limited the applicability of optimal transport for image processing and volumetric processing. However, recent advances in computational optimal transport have yielded fast approximation schemes that can be readily used for the analysis of high-resolution images and volumetric arrays.

In this talk, we will present the optimal transportation problem and some of its key properties. Then we will discuss several promising applications to cryo-electron microscopy, including particle picking, class averaging and continuous heterogeneity analysis.
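A minimal illustration of the transport cost in the simplest setting: in one dimension, for two equal-size samples with uniform weights, the monotone (sorted) matching is optimal, so the Wasserstein-1 distance reduces to an average of sorted differences (higher-dimensional images and volumes require the fast approximation schemes mentioned above):

```python
import numpy as np

# Wasserstein-1 distance between two 1D empirical measures with equal weights.
rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=1000)   # samples from the source distribution
b = rng.normal(0.5, 1.2, size=1000)   # samples from the target distribution

W1 = np.abs(np.sort(a) - np.sort(b)).mean()   # optimal plan = sorted matching in 1D
print(W1)
```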

 
4:00pm - 6:00pm MS26 3: Trends and open problems in cryo electron microscopy
Location: VG3.102
Session Chair: Carlos Esteve-Yague
Session Chair: Johannes Schwab
 

Stochastic optimization for high-resolution refinement in cryo-EM

Bogdan Toader1, Marcus A. Brubaker2, Roy R. Lederman1

1Yale University, United States of America; 2York University, Canada

Cryo-EM reconstruction with traditional iterative algorithms is often split into two separate stages: ab initio, where initial estimates of the volume and the pose variables are computed, and high-resolution refinement, where a high-resolution volume is obtained using local optimization. While state-of-the-art results for high-resolution refinement are obtained using the Expectation-Maximization algorithm, this requires marginalization over the pose variables for all the 2D particle images at each iteration. In contrast, ab initio reconstruction is often performed using a variation of the stochastic gradient descent algorithm, which only uses a subset of the data at each iteration. In this talk, we present an approach that has the potential to enable the use of stochastic optimization algorithms for high-resolution refinement with improved running time. We present an analysis related to the conditioning of the problem that motivates our approach and show preliminary numerical results.


Fast Principal Component Analysis for Cryo-EM Images

Nicholas Marshall1, Oscar Mickelin2, Yunpeng Shi2, Amit Singer2

1Oregon State University, United States of America; 2Princeton University, United States of America

Principal component analysis (PCA) plays an important role in the analysis of cryo-EM images for various tasks such as classification, denoising, compression, and ab-initio modeling. We introduce a fast method for estimating a compressed representation of the 2-D covariance matrix of noisy cryo-electron microscopy projection images that enables fast PCA computation. Our method is based on a new algorithm for expanding images in the Fourier-Bessel basis (the harmonics on the disk), which provides a convenient way to handle the effect of the contrast transfer functions. For $N$ images of size $L$ by $L$, our method has much lower time and space complexities compared to the previous work. We demonstrate our approach on synthetic and experimental data and show acceleration by factors of up to two orders of magnitude.


Reconstructing Molecular Flexibility in Cryogenic Electron Microscopy

Johannes Schwab, Dari Kimanius, Sjors Scheres

MRC-Laboratory of Molecular Biology, United Kingdom

Cryogenic electron microscopy (cryo-EM) is a powerful technique to obtain the 3D structure of macromolecules from thousands of noisy projection images. Since these macromolecules are flexible by nature, the areas where a protein moves result in a local drop of resolution in the reconstruction. We propose a method named DynaMight that represents the molecule with Gaussian basis functions and estimates deformation fields for every experimental image. We further use the estimated deformations to better resolve the flexible regions in the reconstruction, using a filtered backprojection algorithm along curved lines. We present results on real data showing that we obtain improved 3D reconstructions.
 
Date: Friday, 08/Sept/2023
1:30pm - 3:30pm CT10: Contributed talks
Location: VG3.102
Session Chair: Gerlind Plonka
 

Exact Parameter Identification in PET Pharmacokinetic Modeling Using the Irreversible Two Tissue Compartment Model

Erion Morina1, Martin Holler1, Georg Schramm2

1University of Graz, Austria; 2Stanford University, USA

In this talk we consider the identifiability of metabolic parameters from multi-compartment measurement data in quantitative positron emission tomography (PET) imaging, a non-invasive clinical technique that images the distribution of a radiotracer in vivo.

We discuss how, for the frequently used two-tissue compartment model and under reasonable assumptions, it is possible to uniquely identify metabolic tissue parameters from standard PET measurements, without the need for additional concentration measurements from blood samples. The core assumptions for this result are that PET measurements are available at sufficiently many time points, and that the arterial tracer concentration is parametrized by a polyexponential, an approach that is commonly used in practice. Our analytic identifiability result, which holds in the idealized, noiseless scenario, indicates that costly concentration measurements from blood samples in quantitative PET imaging can be avoided in principle. The connection to noisy measurement data is made via a consistency result in Tikhonov regularization theory, showing that exact reconstruction is maintained in the vanishing noise limit.

We further present numerical experiments with a regularization approach based on the Iteratively Regularized Gauss-Newton Method (IRGNM) supporting these analytic results in an application example.
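A minimal sketch of the forward model in question, the irreversible two-tissue compartment model ($k_4 = 0$) driven by a polyexponential arterial input; the parameter values are illustrative only:

```python
import numpy as np
from scipy.integrate import solve_ivp

K1, k2, k3 = 0.5, 0.3, 0.1                             # metabolic tissue parameters
a, lam = np.array([0.8, 0.3]), np.array([0.7, 0.08])   # polyexponential input coefficients

def Ca(t):                                             # arterial tracer concentration
    return np.sum(a * np.exp(-lam * t))

def rhs(t, C):                                         # C = (C1, C2), irreversible: k4 = 0
    dC1 = K1 * Ca(t) - (k2 + k3) * C[0]
    dC2 = k3 * C[0]
    return [dC1, dC2]

t_eval = np.linspace(0.0, 60.0, 200)
sol = solve_ivp(rhs, (0.0, 60.0), [0.0, 0.0], t_eval=t_eval, rtol=1e-8)
C_tissue = sol.y.sum(axis=0)                           # total tissue concentration seen by PET
```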


Regularized Maximum Likelihood Estimation for the Random Coefficients Model

Fabian Dunker, Emil Mendoza, Marco Reale

University of Canterbury

The random coefficients regression model $Y_i={\beta_0}_i+{\beta_1}_i {X_1}_i+{\beta_2}_i {X_2}_i+\ldots+{\beta_d}_i {X_d}_i$, with $\boldsymbol{X}_i$, $Y_i$, $\boldsymbol{\beta}_i$ i.i.d. random variables and $\boldsymbol{\beta}_i$ independent of $\boldsymbol{X}_i$, is often used to capture unobserved heterogeneity in a population. Reconstructing the joint density of the random coefficients $\boldsymbol{\beta}_i=({\beta_0}_i,\ldots, {\beta_d}_i)$ implicitly involves the inversion of a Radon transformation. We propose a regularized maximum likelihood method with non-negativity and $\|\cdot\|_{L^1}=1$ constraints to estimate the density. We analyse the convergence of the method under general assumptions and illustrate its performance in a real data application and in simulations, comparing it to the method of approximate inverse.
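A simple discretized stand-in for the constrained maximum-likelihood problem (one regressor, no penalty term, and a narrow Gaussian kernel in place of the Radon-type delta constraint; an EM-type fixed-point update is used here rather than the regularized estimator of the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=n)
b0 = rng.choice([-1.0, 1.0], size=n)              # synthetic random coefficients
b1 = rng.normal(1.0, 0.3, size=n)
Y = b0 + b1 * X                                   # toy model Y = b0 + b1 * X

g0, g1 = np.meshgrid(np.linspace(-2, 2, 41), np.linspace(0, 2, 41), indexing="ij")
grid = np.column_stack([g0.ravel(), g1.ravel()])  # candidate (b0, b1) grid points
h = 0.1                                           # kernel width replacing the delta
A = np.exp(-0.5 * ((Y[:, None] - grid[:, 0] - grid[:, 1] * X[:, None]) / h)**2)

w = np.full(grid.shape[0], 1.0 / grid.shape[0])   # density weights: w >= 0, sum(w) = 1
for _ in range(200):
    denom = A @ w                                 # per-observation mixture likelihoods
    w *= (A / denom[:, None]).mean(axis=0)        # EM update preserves both constraints
```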


Adaptive estimation of $\alpha$-generalized random fields for statistical linear inverse problems with repeated measurements

Mihaela Pricop-Jeckstadt

University POLITEHNICA of Bucharest, Romania

In this talk we study an adaptive two-step estimation method for statistical linear inverse problems with repeated measurements for smoothness classes expressed as $\alpha$-generalized random fields [1]. In a first step, the minimum fractional singularity order $\alpha$ is estimated, and in the second step the penalized least squares estimator, with the smoothness estimated in the first step, is studied [2]. Rates of convergence for both the process smoothness and the penalized estimator are proven and illustrated through numerical simulations.

[1] M. D. Ruiz-Medina, J. M. Angulo, V. V. Anh. Fractional generalized random fields on bounded domains. Stochastic Anal. Appl. 21: 465--492, 2005.

[2] S. Golovkine, N. Klutchnikoff, V. Patilea. Learning the smoothness of noisy curves with application to online curve estimation. Electron. J. Stat. 16: 1485--1560, 2022.


The Range of Projection Pair Operators

Richard Huber, Rolf Clackdoyle, Laurent Desbat

Univ. Grenoble Alpes, CNRS, Grenoble INP, TIMC, 38000 Grenoble, France.

Tomographic techniques have become a vital tool in medicine, allowing doctors to observe patients’ interior features. The measurement process (and the underlying physics) is modeled by projection operators, the most well-known being the classical Radon transform. Identifying the range of projection operators has proven useful in various tomography-related applications [1-3], such as geometric calibration, motion detection, or more general projection model parameter identification. Projection operators feature the integration of density functions along certain curves (typically straight lines representing paths of radiation), and are subdivided into individual projections -- data obtained during a single step of the measurement process.

Mathematically, given a bounded open set $\Omega\subset \mathbb{R}^2$ and bounded open sets $R,T\subset \mathbb{R}$, a function $\gamma\colon R\times T \to \mathbb{R}^2$ that diffeomorphically covers $\Omega$ and a function $\rho \colon R\times T \to \mathbb{R}^+$, an individual projection is an operator $p\colon L^2(\Omega) \to L^2(R)$ with $$ [p{f}](r) = \int_{T} f\big(\gamma(r,t)\big) \rho\big(r,t\big) \,\mathrm{d}{t} \qquad \text{for all } r\in R $$ for $f \in \mathcal{C}^\infty_c(\Omega)$ (the unknown density). In other words, $r$ determines an integration curve $\gamma(r,\cdot)$, and $[pf](r)$ is the associated line integral weighted by $\rho$ (representing physical effects such as attenuation). Note that we do not allow projection truncation as $\Omega$ is covered by $\gamma$. More general projection operators are $P\colon L^2(\Omega)\to L^2(R_{1})\times \cdots \times L^2(R_{N})$, $Pf=(p_{1}f,\dots,p_{N}f)$, consisting of $N$ projections (with associated $\gamma_n,\rho_n,R_n,T_n$ for $n\in \{1,\dots,N\}$). In this work, we are concerned with characterizing the range of what we call projection pair operators, i.e., projection operators $P=(p_1,p_2)$ consisting of only two projections $(N=2)$. Conditions on the range of projection pair operators naturally impose properties on larger projection operators' ranges. These pairwise range conditions are particularly convenient for applications.

A natural approach for identifying the range is determining the range's orthogonal complement. The orthogonal complement being small would facilitate determining whether a projection pair is inside the range. We find that such normal vectors naturally consist of two functions $G_1$ and $G_2$ -- one per projection -- that need to satisfy $$ -\frac{\rho_{1}\big(\gamma_{1}^{-1 }(x)\big) \left |\det\left( \frac{\,\mathrm{d}{ { \gamma_{1}^{-1 }}}}{\,\mathrm{d}{ x} }(x)\right)\right|}{\rho_{2}\big(\gamma_{2}^{-1 }(x)\big) \left|\det\left( \frac{ \mathrm{d}{\gamma_{2}^{-1 }}}{\mathrm{d}{ x}}(x)\right)\right|} = \frac{G_2(r_2(x))}{G_1(r_1(x))} \qquad \text{for a.e. }x\in \Omega, $$ where $r_1(x)$ is such that $x\in \gamma_{1}(r_1(x),\cdot)$ and analogously for $r_2(x)$. This uniquely determines the orthogonal direction; therefore, the orthogonal complement's dimension is at most one. Thus, two projections' information can only overlap in a single way. Due to this equation's specific structure -- the right-hand side is a ratio of functions depending only on $r_1$ and $r_2$, respectively -- it is easy to imagine that this equation is not always solvable. While it is solvable for some standard examples like the conventional and exponential Radon transforms (whose ranges were already characterized [4,5]), we find that no solution exists for the exponential fanbeam transform and for the Radon transform with specific depth-effects. The fact that no solution exists implies that the operator's range is dense. Range conditions of this type can only precisely characterize the range when it is closed (otherwise, only the closure is characterized). In this regard, we find that the question of the range's closedness is equivalent for all projection pair operators whose $\gamma$ and $\rho$ functions are suitably related.
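As a worked illustration of the solvable case mentioned above, consider the conventional parallel-beam Radon transform with two projection directions $\theta_{1},\theta_{2}$, weights $\rho_{1}=\rho_{2}\equiv 1$ and $\gamma_{n}(r,t)=r\,\theta_{n}+t\,\theta_{n}^{\perp}$ (our normalization, not taken from the abstract). Each $\gamma_{n}$ is a rotation of the plane, so both Jacobian factors equal one and the left-hand side of the equation above is the constant $-1$; the constant choice $G_{1}\equiv 1$, $G_{2}\equiv -1$ solves it, and the resulting normal vector expresses the classical zeroth-moment consistency condition $$ \int_{R_{1}} p_{1}f(r)\,dr \;=\; \int_{R_{2}} p_{2}f(r)\,dr \;=\; \int_{\Omega} f(x)\,dx\,. $$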

Acknowledgment: This work was supported by the ANR grant ANR-21-CE45-0026 `SPECT-Motion-eDCC'.

[1] F. Natterer. Computerized Tomography with Unknown Sources, SIAM Journal on Applied Mathematics 43(5): 1201–1212, 1983. DOI: 10.1137/0143079.

[2] J. Xu, K. Taguchi, B. Tsui. Statistical Projection Completion in X-ray CT Using Consistency Conditions, IEEE Trans. Med. Imaging 29: 1528–1540, 2010. DOI: 10.1109/TMI.2010.2048335.

[3] R. Clackdoyle, L. Desbat. Data consistency conditions for truncated fanbeam and parallel projections, Medical Physics 42(2): 831–845, 2015.

[4] V. Aguilar, P. Kuchment. Range conditions for the multidimensional exponential X-ray transform, Inverse Problems 11(5): 977, 1995. DOI: 10.1088/0266-5611/11/5/002.

[5] F. Natterer. The Mathematics of Computerized Tomography, Society for Industrial and Applied Mathematics, Philadelphia, Chap. II.4, 2001.

 

 