Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions held on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
Session: MS28 2: Modelling and optimisation in non-Euclidean settings for inverse problems
Time: Tuesday, 05/Sept/2023, 4:00pm - 6:00pm

Session Chair: Luca Calatroni
Session Chair: Claudio Estatico
Session Chair: Dirk Lorenz
Location: VG1.108


Presentations

Gradient descent-based algorithms for inverse problems in variable exponent Lebesgue spaces

Marta Lazzaretti1,3, Zeljko Kereta2, Luca Calatroni3, Claudio Estatico1

1Dip. di Matematica (DIMA), Università di Genova, Italy; 2Dept. of Computer Science, University College London, UK; 3CNRS, UCA, Inria, Laboratoire I3S, Sophia-Antipolis, France

Variable exponent Lebesgue spaces $\ell^{(p_n)}$ have recently been shown to provide an appropriate functional framework for enforcing pixel-adaptive regularisation in signal and image processing applications (see [1]), in combination with gradient descent (GD) or proximal GD strategies. Compared to standard Hilbert or Euclidean settings, however, applying these algorithms in the Banach setting of $\ell^{(p_n)}$ is not straightforward, since the underlying norm is non-separable and admits no closed-form expression. We propose to use the associated separable modular function [2, 3], instead of the norm, to define GD-based algorithms in $\ell^{(p_n)}$, and we consider a stochastic GD variant [3, 4] to reduce the per-iteration cost of the resulting iterative schemes, which we apply to linear inverse problems arising in real-world image reconstruction. A schematic sketch of the modular-based iteration is given after the references.

[1] B. Bonino, C. Estatico, and M. Lazzaretti. Dual descent regularization algorithms in variable exponent Lebesgue spaces for imaging, Numer. Algorithms 92(6), 2023.

[2] M. Lazzaretti, L. Calatroni, and C. Estatico. Modular-proximal gradient algorithms in variable exponent Lebesgue spaces, SIAM J. Sci. Comput. 44(6), 2022.

[3] M. Lazzaretti, Z. Kereta, L. Calatroni, and C. Estatico. Stochastic gradient descent for linear inverse problems in variable exponent Lebesgue spaces, 2023. [https://arxiv.org/abs/2303.09182]

[4] Z. Kereta and B. Jin. On the convergence of stochastic gradient descent for linear inverse problems in Banach spaces, SIAM J. Imaging Sci. (in press), 2023.
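
The following Python sketch illustrates the core idea under simplifying assumptions: a dual gradient descent for a toy linear problem $Ax = y$, in which the separable map induced by the modular replaces the unavailable norm duality map. The exponent map, operator, step size and iteration count are illustrative choices, not the authors' implementation; a stochastic variant would replace $A^T(Ax - y)$ by a sampled-row estimate as in [3, 4].

```python
# Hedged sketch: dual gradient descent in variable exponent Lebesgue
# spaces, using the separable modular in place of the non-separable
# norm, loosely following the construction in [2, 3]. All parameters
# below (exponents, operator, step size) are illustrative assumptions.
import numpy as np

def modular_map(x, p):
    """Separable duality-type map induced by the modular
    rho(x) = sum_n |x_n|^{p_n} / p_n (the gradient of the modular)."""
    return np.sign(x) * np.abs(x) ** (p - 1.0)

def modular_map_inv(xs, p):
    """Inverse map, via the conjugate exponents q_n = p_n / (p_n - 1)."""
    q = p / (p - 1.0)
    return np.sign(xs) * np.abs(xs) ** (q - 1.0)

def dual_gd(A, y, p, tau=1e-3, iters=500):
    """Iterate in the dual variable:
    x*_{k+1} = J(x_k) - tau * A^T (A x_k - y),  x_{k+1} = J^{-1}(x*_{k+1})."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        xs = modular_map(x, p) - tau * A.T @ (A @ x - y)
        x = modular_map_inv(xs, p)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 50))
x_true = np.zeros(50); x_true[:5] = 1.0
y = A @ x_true
p = np.full(50, 1.3)   # pixel-adaptive exponents; smaller p promotes sparsity
x_rec = dual_gd(A, y, p)
```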



Multiscale hierarchical decomposition methods for images corrupted by multiplicative noise

Joel Barnett, Wen Li, Elena Resmerita, Luminita Vese

University of Klagenfurt, Austria

Recovering images corrupted by multiplicative noise is a well-known, challenging task. Motivated by the success of multiscale hierarchical decomposition methods (MHDM) in image processing, we adapt a variety of both classical and new models for multiplicative noise removal to the MHDM form. We discuss well-definedness and convergence of the proposed methods. Through comprehensive numerical experiments and comparisons, we evaluate the validity of the considered models both qualitatively and quantitatively. By construction, these multiplicative multiscale hierarchical decomposition methods have the added benefit of recovering many scales of an image, which can provide features of interest beyond image restoration.
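
As a hedged illustration (not the presenters' code), one way a multiplicative MHDM can be realised is in the log domain, where multiplicative structure becomes additive: each level extracts one scale with a TV denoiser under a dyadic parameter schedule. The per-scale solver and the schedule below are assumptions made for the sake of a runnable example.

```python
# Hedged sketch: a multiplicative multiscale hierarchical decomposition,
# realised here in the log domain with a TV denoiser per scale. The
# solver (denoise_tv_chambolle) and the dyadic schedule lambda_k = 2^k
# * lambda_0 are illustrative choices; the talk covers several
# multiplicative models adapted to the MHDM form.
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def multiplicative_mhdm(f, lam0=0.5, levels=5):
    """Decompose a positive image f into scales u_0 * u_1 * ...,
    refining the residual v_k = v_{k-1} / u_k at each level."""
    w = np.log(np.maximum(f, 1e-8))     # log-domain residual
    scales, recon = [], np.zeros_like(w)
    lam = lam0
    for _ in range(levels):
        # one additive MHDM step: extract the next scale from the residual
        u = denoise_tv_chambolle(w, weight=1.0 / lam)
        scales.append(np.exp(u))        # multiplicative scale
        w = w - u                       # residual for the next, finer scale
        recon = recon + u
        lam *= 2.0                      # finer scales get weaker smoothing
    return scales, np.exp(recon)
```

The recovered scales multiply back, coarse structure first, to an approximation of the clean image, mirroring the many-scale recovery mentioned in the abstract.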


Proximal point algorithm in spaces with semidefinite inner product

Emanuele Naldi1, Enis Chenchene2, Dirk A. Lorenz1, Jannis Marquardt1

1TU Braunschweig, Germany; 2University of Graz, Austria

We introduce proximal point algorithms in spaces with semidefinite inner products, focusing in particular on products induced by self-adjoint, positive semidefinite linear operators defined on Hilbert spaces. We show convergence of the algorithm under suitable conditions and investigate applications to splitting methods. As a first application, we devise new schemes for finding minimizers of sums of many convex lower semicontinuous functions, and we apply these schemes to congested transport and to distributed optimization in the context of Support Vector Machines, investigating their behavior. Finally, we analyze the convergence of the proximal point algorithm when the (semidefinite) metric is allowed to vary at each iteration, and we discuss applications of this analysis to the primal-dual Douglas-Rachford scheme, investigating adaptive stepsizes for the method.
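
A minimal sketch of the basic step, under assumptions of our own choosing (a quadratic objective and a rank-deficient metric): the proximal point iteration with metric $M$ solves $0 \in \partial f(x_{k+1}) + M(x_{k+1} - x_k)$, which for quadratic $f$ reduces to a linear system per step.

```python
# Hedged sketch: a proximal point step with a (possibly only positive
# semidefinite) metric M, on the toy problem min_x 0.5 x^T Q x - b^T x.
# Each step solves 0 = Q x_{k+1} - b + M (x_{k+1} - x_k). Q, b and M
# below are illustrative; the degenerate (semidefinite) case is the
# regime the talk analyzes in far greater generality.
import numpy as np

def ppa_semidefinite(Q, b, M, x0, iters=200):
    """x_{k+1} = argmin f(x) + 0.5*||x - x_k||_M^2
    for the quadratic f(x) = 0.5 x^T Q x - b^T x."""
    x = x0.copy()
    for _ in range(iters):
        x = np.linalg.solve(Q + M, M @ x + b)
    return x

rng = np.random.default_rng(1)
n = 20
B = rng.standard_normal((n, n))
Q = B.T @ B + np.eye(n)              # positive definite objective
b = rng.standard_normal(n)
P = rng.standard_normal((n, 5))
M = P @ P.T                          # rank-deficient: semidefinite metric
x = ppa_semidefinite(Q, b, M, np.zeros(n))
```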


Asymptotic linear convergence of fully-corrective generalized conditional gradient methods

Kristian Bredies1, Marcello Carioni2, Silvio Fanzon1, Daniel Walter3

1University of Graz, Austria; 2University of Twente, The Netherlands; 3Humboldt-Universität zu Berlin, Germany

We discuss a fully-corrective generalized conditional gradient (FC-GCG) method [1] for the minimization of Tikhonov functionals associated with a linear inverse problem, a convex discrepancy and a convex, one-homogeneous regularizer over a Banach space. The algorithm alternates between updating a finite set of extremal points of the unit ball of the regularizer [2] and optimizing over the conical hull of these extremal points, so that each iteration requires the solution of one linear problem and one finite-dimensional convex minimization problem. We show that the method converges sublinearly to a solution, and that imposing additional assumptions on the associated dual variables accelerates the method to a linear rate of convergence. The proofs rely on lifting the considered problem, via Choquet's theorem, to a particular space of Radon measures, as well as on the equivalence of the FC-GCG method to a primal-dual active point (PDAP) method for which linear convergence was recently established. Finally, we present application scenarios in which the stated assumptions for accelerated convergence are satisfied [3]. A schematic sketch of the two alternating steps is given after the references.

[1] Kristian Bredies, Marcello Carioni, Silvio Fanzon and Daniel Walter. Asymptotic linear convergence of fully-corrective generalized conditional gradient methods, 2021. [arXiv preprint arXiv:2110.06756]

[2] Kristian Bredies and Marcello Carioni. Sparsity of solutions for variational inverse problems with finite-dimensional data, Calculus of Variations and Partial Differential Equations 59(14), 2020.

[3] Kristian Bredies, Marcello Carioni, Silvio Fanzon and Francisco Romero. A generalized conditional gradient method for dynamic inverse problems with optimal transport regularization, Foundations of Computational Mathematics, 2022.
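
To make the two alternating steps concrete, here is a hedged Python sketch on the finite-dimensional toy problem $\min_x \frac{1}{2}\|Ax - y\|^2 + \alpha\|x\|_1$, where the extremal points of the regularizer's unit ball are the signed canonical basis vectors $\pm e_i$. The greedy insertion rule and the projected-gradient solver for the conical-hull subproblem are illustrative simplifications, not the method of [1] in its Banach-space generality.

```python
# Hedged sketch: fully-corrective generalized conditional gradient on
# min_x 0.5*||A x - y||^2 + alpha*||x||_1. Step 1 (linear problem):
# pick the extremal point +-e_i best aligned with the negative gradient.
# Step 2 (finite-dimensional convex problem): fully optimize nonnegative
# weights over the conical hull of the active extremal points.
import numpy as np

def fc_gcg_l1(A, y, alpha, outer=30, inner=200):
    m, n = A.shape
    atoms = []                       # active signed extremal points (i, s)
    c = np.zeros(0)                  # conical (nonnegative) coefficients
    for _ in range(outer):
        x = np.zeros(n)
        for (i, s), cj in zip(atoms, c):
            x[i] += s * cj
        g = A.T @ (A @ x - y)        # gradient of the discrepancy
        i = int(np.argmax(np.abs(g)))
        if np.abs(g[i]) <= alpha:    # no extremal point improves: optimal
            break
        atom = (i, -np.sign(g[i]))
        if atom not in atoms:
            atoms.append(atom)
            c = np.append(c, 0.0)
        # fully-corrective step over the conical hull of the active atoms
        B = np.stack([s * A[:, j] for (j, s) in atoms], axis=1)
        step = 1.0 / (np.linalg.norm(B, 2) ** 2 + 1e-12)
        for _ in range(inner):
            grad = B.T @ (B @ c - y) + alpha
            c = np.maximum(c - step * grad, 0.0)   # project onto c >= 0
    x = np.zeros(n)
    for (i, s), cj in zip(atoms, c):
        x[i] += s * cj
    return x
```

In this toy setting the stopping test $\max_i |(A^T(Ax - y))_i| \le \alpha$ is exactly the dual-variable condition under which the abstract's accelerated regime is discussed.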

