Conference Agenda


Session Overview
Session
MS28 1: Modelling and optimisation in non-Euclidean settings for inverse problems
Time:
Tuesday, 05/Sept/2023:
1:30pm - 3:30pm

Session Chair: Luca Calatroni
Session Chair: Claudio Estatico
Session Chair: Dirk Lorenz
Location: VG1.108


Presentations

A lifted Bregman formulation for the inversion of deep neural networks

Xiaoyu Wang (1), Martin Benning (2,3)

(1) University of Cambridge, United Kingdom; (2) Queen Mary University of London, United Kingdom; (3) The Alan Turing Institute, United Kingdom

We propose a novel framework for the regularised inversion of deep neural networks. The framework is based on the authors' recent work on training feed-forward neural networks without the differentiation of activation functions. The framework lifts the parameter space into a higher dimensional space by introducing auxiliary variables and penalises these variables with tailored Bregman distances. We propose a family of variational regularisations based on these Bregman distances, present theoretical results and support their practical application with numerical examples. In particular, we present the first convergence result (to the best of our knowledge) for the regularised inversion of a single-layer perceptron that only assumes that the solution of the inverse problem is in the range of the regularisation operator.
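
To give a concrete picture of the key mechanism: when the activation is a proximal map, $\sigma = \mathrm{prox}_\Psi$, one instance of such a tailored Bregman penalty is $B_\Psi(y,z) = \Psi(y) + \frac{1}{2}\|y-z\|^2 - \min_u \{\Psi(u) + \frac{1}{2}\|u-z\|^2\}$, whose gradient in $z$ is $\sigma(z) - y$, so the derivative of $\sigma$ is never formed. Below is a minimal numpy sketch of this idea for the single-layer case with ReLU (the proximal map of the nonnegativity indicator); the Tikhonov regulariser, step size and all names are illustrative choices made here, not taken from the paper.

import numpy as np

def lifted_bregman_loss(y, z):
    # B_Psi(y, z) for Psi = indicator of the nonnegative orthant, so that
    # prox_Psi = ReLU; equals 1/2||y - z||^2 minus the Moreau-envelope term.
    return 0.5 * np.sum((y - z) ** 2) - 0.5 * np.sum(np.minimum(z, 0.0) ** 2)

def invert_perceptron(W, b, y, alpha=1e-2, iters=500):
    # Recover x from y ~ relu(W x + b) by gradient descent on
    #   E(x) = lifted_bregman_loss(y, W x + b) + alpha/2 ||x||^2.
    # The gradient is W^T (relu(W x + b) - y) + alpha x: the activation
    # itself appears, but its derivative does not.
    step = 1.0 / (np.linalg.norm(W, 2) ** 2 + alpha)  # 1/Lipschitz constant
    x = np.zeros(W.shape[1])
    for _ in range(iters):
        z = W @ x + b
        x -= step * (W.T @ (np.maximum(z, 0.0) - y) + alpha * x)
    return x

Since $B_\Psi(y,\cdot)$ is convex with a Lipschitz gradient, plain gradient descent with the step size above converges for this one-layer toy problem; the paper's multi-layer framework additionally lifts the parameter space with auxiliary variables, which this sketch omits.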


A Bregman-Kaczmarz method for nonlinear systems of equations

Maximilian Winkler

TU Braunschweig, Germany

We propose a new randomized method for solving systems of nonlinear equations which can find sparse solutions, or solutions under certain simple constraints. The scheme only requires gradients of the component functions and uses exact or relaxed Bregman projections onto the solution space of a Newton equation. As such, it generalizes the sparse Kaczmarz method, which finds sparse solutions of linear systems, as well as the nonlinear Kaczmarz method, which performs Euclidean projections. The relaxed Bregman projection is achieved by using the step size from the nonlinear Kaczmarz method. Local convergence is established for systems with a full-rank Jacobian under the local tangential cone condition. We show examples in which the proposed method outperforms similar methods with the same memory requirements.
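
As an illustration, here is a minimal numpy sketch of one such iteration, assuming the Bregman potential $\varphi(x) = \lambda\|x\|_1 + \frac{1}{2}\|x\|_2^2$ (so that $\nabla\varphi^*$ is soft-thresholding) and using the nonlinear Kaczmarz step size for the relaxed Bregman projection; function names and the uniform random index choice are assumptions made here for concreteness.

import numpy as np

def soft(z, lam):
    # soft-thresholding = gradient of the conjugate of lam*||.||_1 + 1/2||.||_2^2
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def bregman_kaczmarz(f, grad_f, m, n, lam=0.5, iters=5000, seed=0):
    # f(i, x): residual of the i-th equation; grad_f(i, x): its gradient.
    rng = np.random.default_rng(seed)
    xs = np.zeros(n)              # dual variable x*
    x = soft(xs, lam)             # primal iterate x = grad phi*(x*)
    for _ in range(iters):
        i = rng.integers(m)
        g = grad_f(i, x)
        ng2 = g @ g
        if ng2 == 0.0:
            continue
        # relaxed Bregman projection using the nonlinear Kaczmarz step size
        xs -= (f(i, x) / ng2) * g
        x = soft(xs, lam)
    return x

For affine components $f_i(x) = \langle a_i, x\rangle - y_i$ this reduces to the randomized sparse Kaczmarz method, and for $\lambda = 0$ (where soft-thresholding is the identity) to the nonlinear Kaczmarz method.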


Regularization in non-Euclidean spaces meets numerical linear algebra

Claudio Estatico (1), Brigida Bonino (2), Luca Calatroni (3), Fabio Di Benedetto (1), Marta Lazzaretti (1), Flavia Lenti (4)

(1) University of Genoa, Italy; (2) Istituto di Matematica Applicata e Tecnologie Informatiche, Italy; (3) Laboratory of Computer Science, Signals and Systems of Sophia Antipolis, France; (4) Eumetsat, Germany

We consider inverse problems modeled by a functional equation $A(x)=y$ with an ill-posed operator $A:X \longrightarrow Y$ between two non-Euclidean normed spaces $X$ and $Y$. The iterative minimization of a functional based on the residual $\|A(x)-y\|_Y$ is a common approach in this setting, where in general no (closed form of the) inverse of $A$ exists. In particular, one-step gradient methods act as implicit regularization algorithms when combined with an early-stopping criterion that prevents over-fitting the noise in the data $y$.

In this talk, we review iterative methods involving the dual spaces $X^*$ and $Y^*$, showing that they can be fully understood in the context of proximal operator theory, with suitable Bregman distances as proximity measures [1]. Moreover, we discuss the relationships of such iterative methods with classical projection algorithms, such as Cimmino and ART (Algebraic Reconstruction Technique), as well as with classical preconditioning theory for structured linear systems arising in numerical linear algebra. Applications to deblurring and inverse scattering problems will be shown.
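
To make the dual-space iteration concrete, the sketch below implements one such one-step gradient method for a linear $A$ acting between $\ell^p$ spaces: the descent step is taken on the dual variable $x^* = J_p(x)$ and mapped back to the primal space with the inverse duality map $J_{p^*}$, $p^* = p/(p-1)$, with Morozov's discrepancy principle as the early-stopping rule. The constant step size and the Euclidean data space are simplifications made here for brevity, not the choices of the cited works.

import numpy as np

def J(x, p):
    # duality map of ell^p with gauge t^(p-1): (J_p x)_i = |x_i|^(p-1) sign(x_i)
    return np.sign(x) * np.abs(x) ** (p - 1.0)

def dual_gradient_descent(A, y, p=1.3, iters=200, delta=None, tau=None):
    # Landweber-type iteration carried out in the dual space X*:
    #   x*_{k+1} = x*_k - tau A^T (A x_k - y),  x_{k+1} = J_{p*}(x*_{k+1}),
    # stopped early by the discrepancy principle when the noise level delta
    # is known (implicit regularization via early stopping).
    pstar = p / (p - 1.0)
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    xs = J(x, p)                          # dual iterate x* = J_p(x)
    for _ in range(iters):
        r = A @ x - y
        if delta is not None and np.linalg.norm(r) <= 1.01 * delta:
            break
        xs -= tau * (A.T @ r)             # J_2(r) = r for a Euclidean Y
        x = J(xs, pstar)                  # back to the primal space
    return x

Choosing $p$ close to 1 promotes sparse reconstructions, which is one practical motivation for leaving the Euclidean setting.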

[1] B. Bonino, C. Estatico, M. Lazzaretti. Dual descent regularization algorithms in variable exponent Lebesgue spaces for imaging, Numer. Algorithms 92: 149-182, 2023.

[2] M. Lazzaretti, L. Calatroni, C. Estatico. Modular-proximal gradient algorithms in variable exponent Lebesgue spaces, SIAM J. Sci. Comput. 44: 1-23, 2022.

