Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only the sessions held on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

 
 
Session Overview
Session
MS13 2: Stochastic iterative methods for inverse problems
Time:
Friday, 08/Sept/2023:
4:00pm - 6:00pm

Session Chair: Tim Jahn
Location: VG0.111


Presentations

From inexact optimization to learning via gradient concentration

Bernhard Stankewitz, Nicole Mücke, Lorenzo Rosasco

Bocconi University, Milan, Italy

Optimization in machine learning typically deals with the minimization of empirical objectives defined by training data. The ultimate goal of learning, however, is to minimize the error on future data (test error), for which the training data provides only partial information. In this view, the optimization problems that are practically feasible are based on inexact quantities that are stochastic in nature. In this paper, we show how probabilistic results, specifically gradient concentration, can be combined with results from inexact optimization to derive sharp test error guarantees. By considering unconstrained objectives, we highlight the implicit regularization properties of optimization for learning.
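
As a toy illustration of implicit regularization through early stopping (a sketch only, not the authors' analysis; the model, dimensions, and step size below are hypothetical assumptions), gradient descent on an unconstrained least-squares objective can be monitored on held-out data:

    # Illustrative sketch: the iteration count of plain gradient descent acts
    # as an implicit regularization parameter for the test error.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 100, 200                                   # overparametrized: d > n
    theta_star = rng.normal(size=d) / np.sqrt(d)
    X_train = rng.normal(size=(n, d))
    y_train = X_train @ theta_star + 0.5 * rng.normal(size=n)
    X_test = rng.normal(size=(1000, d))
    y_test = X_test @ theta_star

    theta = np.zeros(d)
    lr = n / np.linalg.norm(X_train, 2) ** 2          # safe step size: 1 / Lipschitz constant
    for t in range(1, 501):
        grad = X_train.T @ (X_train @ theta - y_train) / n   # empirical (inexact) gradient
        theta -= lr * grad
        if t in (10, 50, 100, 500):
            test_mse = np.mean((X_test @ theta - y_test) ** 2)
            print(f"iteration {t:4d}: test MSE = {test_mse:.3f}")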


Principal component analysis in infinite dimensions

Martin Wahl

Universität Bielefeld, Germany

In high-dimensional settings, principal component analysis (PCA) reveals some unexpected phenomena, ranging from eigenvector inconsistency to eigenvalue (upward) bias. While such high-dimensional phenomena are now well understood in the spiked covariance model, the goal of this talk is to present some extensions for the case of PCA in infinite dimensions. As an application, we present bounds for the prediction error of spectral regularization estimators in the overparametrized regime.
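
The upward eigenvalue bias mentioned above can be reproduced in a small finite-dimensional simulation of the spiked covariance model (a sketch with hypothetical dimensions, not the infinite-dimensional analysis of the talk):

    # Illustrative sketch: when d is comparable to n, the top sample eigenvalue
    # overshoots the top population eigenvalue of a spiked covariance matrix.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, spike = 200, 200, 5.0                       # population top eigenvalue = 1 + spike
    cov_sqrt = np.eye(d)
    cov_sqrt[0, 0] = np.sqrt(1.0 + spike)             # square root of Sigma = I + spike * e1 e1^T
    X = rng.normal(size=(n, d)) @ cov_sqrt            # rows ~ N(0, Sigma)
    S = X.T @ X / n                                   # sample covariance
    print(f"population top eigenvalue: {1.0 + spike:.2f}")
    print(f"sample top eigenvalue:     {np.linalg.eigvalsh(S)[-1]:.2f}  (upward bias)")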


Learning Linear Operators

Nicole Mücke

TU Braunschweig, Germany

We consider the problem of learning a linear operator $\theta$ between two Hilbert spaces from empirical observations, which we interpret as least squares regression in infinite dimensions. We show that this goal can be reformulated as an inverse problem for $\theta$ with the undesirable feature that its forward operator is generally non-compact (even if $\theta$ is assumed to be compact or of $p$-Schatten class). However, we prove that, in terms of spectral properties and regularisation theory, this inverse problem is equivalent to the known compact inverse problem associated with scalar response regression. Our framework allows for the elegant derivation of dimension-free rates for generic learning algorithms under Hölder-type source conditions. The proofs rely on the combination of techniques from kernel regression with recent results on concentration of measure for sub-exponential Hilbertian random variables. The obtained rates hold for a variety of practically relevant scenarios in functional regression as well as nonlinear regression with operator-valued kernels and match those of classical kernel regression with scalar response.
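
As a finite-dimensional analogue (a sketch under assumed dimensions and noise level, not the Hilbert-space setting of the talk), the operator can be estimated by Tikhonov-regularised least squares from input-output pairs:

    # Illustrative sketch: ridge regression estimate of a linear operator
    # theta from noisy pairs (x_i, theta x_i + noise) in Euclidean spaces.
    import numpy as np

    rng = np.random.default_rng(0)
    n, p, q = 500, 30, 20                             # samples, input dim, output dim
    theta_star = rng.normal(size=(q, p)) / np.sqrt(p) # "true" operator (hypothetical)
    X = rng.normal(size=(n, p))
    Y = X @ theta_star.T + 0.1 * rng.normal(size=(n, q))

    lam = 1e-2                                        # Tikhonov regularisation parameter
    theta_hat = np.linalg.solve(X.T @ X + n * lam * np.eye(p), X.T @ Y).T
    rel_err = np.linalg.norm(theta_hat - theta_star) / np.linalg.norm(theta_star)
    print(f"relative estimation error: {rel_err:.3f}")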


SGD for select inverse problems in Banach spaces

Zeljko Kereta1, Bangti Jin1,2

1University College London; 2The Chinese University of Hong Kong

In this work we present a mathematical framework and analysis for SGD in Banach spaces for select linear and non-linear inverse problems. Analysis in the Banach space setting presents unique challenges, requiring novel mathematical tools; we address these by combining insights from Hilbert space theory with approaches from modern optimisation. The developed theory and algorithms open the door to a wide range of applications, and we present some future challenges and directions.
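
For orientation, the familiar Hilbert-space (here Euclidean) version of SGD for a linear inverse problem Ax = y is sketched below; the talk concerns the Banach-space setting, which this simplified example does not capture. Dimensions, noise level, and the decaying step size are illustrative assumptions.

    # Illustrative sketch: plain SGD for a linear inverse problem, sampling one
    # measurement (row of A) per iteration with a decaying step size.
    import numpy as np

    rng = np.random.default_rng(0)
    m, d = 400, 100
    A = rng.normal(size=(m, d)) / np.sqrt(m)
    x_true = rng.normal(size=d)
    y = A @ x_true + 0.01 * rng.normal(size=m)

    x = np.zeros(d)
    for k in range(1, 20001):
        i = rng.integers(m)                           # pick one measurement at random
        step = 1.0 / np.sqrt(k)                       # decaying step size
        x -= step * (A[i] @ x - y[i]) * A[i]          # gradient of 0.5 * (a_i^T x - y_i)^2
    rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
    print(f"relative reconstruction error: {rel_err:.3f}")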


 