Conference Agenda

Overview and details of the sessions of this conference.

 
 
Session Overview

Session: MS10 1: Optimization in Inverse Scattering: from Acoustics to X-rays
Time: Thursday, 07/Sept/2023, 1:30pm - 3:30pm
Session Chairs: Radu Ioan Bot, Russell Luke
Location: VG1.103


Presentations

Numerical Linear Algebra Networks for Solving Linear Inverse Problems

Otmar Scherzer

University of Vienna, Austria

We consider solving a possibly ill-conditioned linear operator equation, where the operator is not modeled but given indirectly via training samples. The method of de- and encoding is very useful for such applications and will be analyzed from the point of view of regularization theory. In particular, this analysis shows that a preprocessing of the image data is required, which is implemented by linear algebra networks.
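As a rough, generic illustration (our notation, not taken from the talk), one may think of an operator equation whose forward map is known only through data,

$$ A x = y, \qquad A \ \text{not modeled, accessible only via training samples, e.g. pairs } (x_i,\, y_i \approx A x_i), $$

together with a reconstruction of de-/encoding form $x^\delta := D\big(E(y^\delta)\big)$ from noisy data $y^\delta$, where the encoder $E$ and decoder $D$ are linear (linear algebra) networks learned from the samples. Regularization theory then asks in which sense $x^\delta$ converges to a solution of $A x = y$ as the noise level $\delta \to 0$; the analysis announced in the abstract shows that a suitable preprocessing of the data is needed for this.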


Regularization by Randomization: The Case of Partially Separable Optimization

Russell Luke

University of Göttingen, Germany

We present a Markov-chain analysis of blockwise-stochastic algorithms for solving partially block-separable optimization problems. Our main contributions to the extensive literature on these methods are statements about the Markov operators and distributions behind the iterates of stochastic algorithms, in particular about the regularity of the Markov operators and the rates of convergence of the distributions of the corresponding Markov chains. This provides a detailed characterization of the moments of the sequences beyond just their expected behavior. It also serves as a case study of how randomization restores favorable properties that are destroyed when an algorithm iterates on only partial information. We demonstrate this on stochastic blockwise implementations of the forward-backward and Douglas-Rachford algorithms for nonconvex (and, as a special case, convex) nonsmooth optimization.
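To fix ideas, here is a minimal sketch (ours, not the speaker's code) of a randomized blockwise forward-backward iteration for a partially separable problem of the form $\min_x f(x) + \sum_i g_i(x_i)$; the least-squares smooth part, the $\ell_1$ block terms, and the names `prox_l1` and `blockwise_forward_backward` are illustrative assumptions only.

```python
import numpy as np

def prox_l1(v, t):
    """Proximal map of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def blockwise_forward_backward(A, b, blocks, lam=0.1, step=None, iters=5000, seed=0):
    """Randomized blockwise forward-backward for min 0.5*||Ax-b||^2 + lam*||x||_1.

    Each iteration draws one coordinate block uniformly at random and updates it
    by a partial gradient step on the smooth part followed by the prox of its
    separable nonsmooth part; all other blocks stay unchanged, so the iterates
    form a Markov chain driven by the random block selection.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative 1/L step size
    for _ in range(iters):
        idx = blocks[rng.integers(len(blocks))]          # random block of indices
        grad_block = A[:, idx].T @ (A @ x - b)           # partial gradient
        x[idx] = prox_l1(x[idx] - step * grad_block, step * lam)
    return x

# small synthetic example
rng = np.random.default_rng(1)
A, b = rng.standard_normal((40, 60)), rng.standard_normal(40)
blocks = np.array_split(np.arange(60), 6)                # six coordinate blocks
x_hat = blockwise_forward_backward(A, b, blocks)
```

The uniform block sampling and the single global step size keep the sketch simple; blockwise step sizes and other sampling schemes are common refinements.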



Fast convex optimization via closed-loop time scaling of gradient dynamics

Hedy Attouch (1), Radu Ioan Bot (2), Dang-Khoa Nguyen (2)

(1) Univ. Montpellier, France; (2) University of Vienna, Austria

In a Hilbert setting, for convex differentiable optimization, we develop a general framework for adaptive accelerated gradient methods. They are based on damped inertial dynamics where the coefficients are designed in a closed-loop way. Specifically, the damping is a feedback control of the velocity, or of the gradient of the objective function. For this, we develop a closed-loop version of the time scaling and averaging technique introduced by the authors. We thus obtain autonomous inertial dynamics which involve vanishing viscous damping and implicit Hessian-driven damping. By simply using the convergence rates for the continuous steepest descent and Jensen's inequality, without the need for further Lyapunov analysis, we show that the trajectories have several remarkable properties at once: they ensure fast convergence of the function values, fast convergence of the gradients towards zero, and they converge to optimal solutions. Our approach leads to parallel algorithmic results, which we study in the case of proximal algorithms. These are among the very first general results of this type obtained using autonomous dynamics. A schematic form of such closed-loop damped dynamics is sketched after the references below.

[1] H. Attouch, R. I. Bot, D.-K. Nguyen. Fast convex optimization via time scale and averaging of the steepest descent, arXiv:2208.08260, 2022.

[2] H. Attouch, R. I. Bot, D.-K. Nguyen. Fast convex optimization via closed-loop time scaling of gradient dynamics, arXiv:2301.00701, 2023.
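As a rough orientation (our schematic; the precise closed-loop laws are developed in [2]), inertial gradient systems with viscous and Hessian-driven damping for minimizing a convex differentiable $f$ take the form

$$ \ddot{x}(t) + \gamma(t)\,\dot{x}(t) + \beta(t)\,\nabla^2 f\big(x(t)\big)\,\dot{x}(t) + \nabla f\big(x(t)\big) = 0, $$

where in a closed-loop design the damping coefficient $\gamma(t)$ is not prescribed in advance but chosen as a feedback of the current state, for instance as a function of $\|\dot{x}(t)\|$ or of $\|\nabla f(x(t))\|$; the abstract above obtains such dynamics via time scaling and averaging of the continuous steepest descent.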


Fast Optimistic Gradient Descent Ascent method in continuous and discrete time

Radu Ioan Bot, Ernö Robert Csetnek, Dang-Khoa Nguyen

University of Vienna, Austria

In this talk we address continuous-time dynamics as well as numerical algorithms for the problem of approaching the set of zeros of a single-valued monotone and continuous operator $V$ ([1,2]). The starting point of our investigations is a second order dynamical system that combines a vanishing damping term with the time derivative of $V$ along the trajectory, which can be seen as an analogue of the Hessian-driven damping in case the operator originates from a potential. Our method exhibits fast convergence rates for the norm of the operator along the trajectory and also for the restricted gap function, which is a measure of optimality for variational inequalities. We also prove the weak convergence of the trajectory to a zero of $V$.

Temporal discretizations of the dynamical system generate implicit and explicit numerical algorithms, which can both be seen as accelerated versions of the Optimistic Gradient Descent Ascent (OGDA) method for monotone operators, and which we prove to share the asymptotic features of the continuous dynamics. All convergence rate statements are last-iterate convergence results. Numerical experiments indicate that the explicit numerical algorithm outperforms other methods designed to solve equations governed by monotone operators. The classical OGDA update, of which these schemes are accelerated variants, is recalled after the references below.

[1] R. I. Bot, E. R. Csetnek, D.-K. Nguyen. Fast OGDA in continuous and discrete time, arXiv:2203.10947, 2022.

[2] R. I. Bot, D.-K. Nguyen. Fast Krasnosel'skii-Mann algorithm with a convergence rate of the fixed point iteration of $o(1/k)$, arXiv:2206.09462, 2022.
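For orientation (written by us, not taken from [1]), the classical, non-accelerated OGDA update for a monotone operator $V$ with step size $s > 0$ reads

$$ z_{k+1} = z_k - s\,\big( 2\,V(z_k) - V(z_{k-1}) \big) = z_k - s\,V(z_k) - s\,\big( V(z_k) - V(z_{k-1}) \big), $$

i.e. a gradient-type step corrected by the most recent change of the operator. The continuous-time system of the abstract additionally carries the vanishing damping acting on $\dot{z}(t)$ and the term involving $\frac{d}{dt} V(z(t))$, and its temporal discretizations yield accelerated variants of this update.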



 