Conference Agenda

Overview and details of the sessions of this conference.
Session Overview
Session: MS05 1: Numerical methods meet statistical methods in inverse problems
Time: Wednesday, 6 September 2023, 9:00am–11:00am

Session Chairs: Martin Hanke, Markus Reiß, Frank Werner
Location: VG2.102


Presentations

Aggregation by the Linear Functional Strategy in Regularized Domain Adaptation

Sergei Pereverzyev

The Johann Radon Institute for Computational and Applied Mathematics (RICAM), Austria

In this talk we discuss the problem of hyperparameter tuning in the context of learning from different domains, also known as domain adaptation. The domain adaptation scenario arises when one studies two input-output relationships governed by probabilistic laws with respect to different probability measures, and uses the data drawn from one of them to minimize the expected prediction risk over the other measure.
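
In symbols, a standard way to sketch this setting (the notation here is ours, for orientation only): with a source measure $P_S$ generating the data, a target measure $P_T$ over which the prediction risk is to be minimized, and assuming $P_T \ll P_S$,

```latex
\mathcal{R}_T(f)
  = \int \big(f(x)-y\big)^2 \, dP_T(x,y)
  = \int \big(f(x)-y\big)^2 \, \frac{dP_T}{dP_S}(x,y) \, dP_S(x,y),
```

so the Radon-Nikodym derivative $dP_T/dP_S$ reweights the source data towards the target risk; its estimation is taken up below.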

The problem of domain adaptation has been tackled by many approaches, and most domain adaptation algorithms depend on so-called hyperparameters, which change the performance of the algorithm and need to be tuned. Usually, the variation in an algorithm's performance can be attributed to just a few hyperparameters, such as the regularization parameter in kernel ridge regression, or the batch size and number of iterations in stochastic gradient descent training.
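
For concreteness, a minimal kernel ridge regression sketch in which the regularization parameter is exactly such a hyperparameter (the Gaussian kernel and the names `lam` and `width` are illustrative choices, not taken from the talk):

```python
import numpy as np

def gaussian_kernel(X, Y, width=1.0):
    # Gram matrix with entries k(x, y) = exp(-|x - y|^2 / (2 * width^2)).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def krr_fit(X, y, lam, width=1.0):
    # Kernel ridge regression: alpha solves (K + lam * n * I) alpha = y.
    # The performance depends strongly on the hyperparameter lam.
    n = len(X)
    K = gaussian_kernel(X, X, width)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def krr_predict(X_train, alpha, X_new, width=1.0):
    # Evaluate the fitted function at new inputs.
    return gaussian_kernel(X_new, X_train, width) @ alpha
```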

Despite its importance, the question of selecting these parameters has received little attention in the context of domain adaptation. In this talk, we shed light on this issue. In particular, we discuss how a regularization of the Radon-Nikodym differentiation can be employed in hyperparameter tuning. The theoretical results will be illustrated by an application to stenosis detection in different types of arteries.
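
A schematic sketch of how a regularized Radon-Nikodym derivative estimate could enter the tuning step (it reuses the kernel ridge helpers above; the actual scheme of [1] is more refined than this, and `eps` is an illustrative regularization parameter of the differentiation step):

```python
import numpy as np

def estimate_rn_derivative(Xs, Xt, eps, width=1.0):
    # Regularized least-squares estimate of beta = dP_T/dP_S at the
    # source points, obtained by matching kernel mean embeddings of
    # the source sample Xs and the target sample Xt.
    n = len(Xs)
    K_ss = gaussian_kernel(Xs, Xs, width)
    k_st = gaussian_kernel(Xs, Xt, width).mean(axis=1)
    return np.linalg.solve(K_ss + eps * n * np.eye(n), n * k_st)

def select_lam(lams, Xs_tr, ys_tr, Xs_val, ys_val, beta_val):
    # Pick the candidate whose importance-weighted validation risk is
    # smallest; the weights beta_val make source data estimate the
    # *target* risk.
    scores = []
    for lam in lams:
        alpha = krr_fit(Xs_tr, ys_tr, lam)
        pred = krr_predict(Xs_tr, alpha, Xs_val)
        scores.append(float(np.mean(beta_val * (pred - ys_val) ** 2)))
    return lams[int(np.argmin(scores))]
```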

The talk is based on the recent joint work [1], performed within the COMET-Module project S3AI funded by the Austrian Research Promotion Agency (FFG).

[1] E. R. Gizewski, L. Mayer, B. A. Moser, D. H. Nguyen, S. Pereverzyev Jr., S. V. Pereverzyev, N. Shepeleva, W. Zellinger. On a regularization of unsupervised domain adaptation in RKHS. Appl. Comput. Harmon. Anal., 57:201–227, 2022.


The Henderson problem and the relative entropy functional

Fabio Marc Frommer, Martin Hanke

Johannes Gutenberg Universität Mainz, Germany

The inverse Henderson problem of statistical mechanics is the theoretical foundation for many bottom-up coarse-graining techniques for the numerical simulation of complex soft matter physics. This inverse problem concerns classical particles in continuous space which interact according to a pair potential depending on the distance between the particles. Roughly stated, it asks for the interaction potential given the equilibrium pair correlation function of the system. In 1974 Henderson proved that this potential is uniquely determined in a canonical ensemble, and recently Rosenberger et al. have argued that this potential minimises a relative entropy. Here we provide a rigorous extension of these results to the thermodynamic limit and define a corresponding relative entropy density. We investigate further properties of this functional for suitable classes of pair potentials.
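
Schematically, in finite-volume notation (a sketch for orientation; the thermodynamic-limit construction in the talk is considerably more delicate): writing $\mu_u$ for the canonical Gibbs measure with pair potential $u$ and $u^*$ for the true potential,

```latex
\mathcal{S}\big(\mu_{u^*} \,\|\, \mu_u\big)
  \;=\; \int \log \frac{d\mu_{u^*}}{d\mu_u} \, d\mu_{u^*} \;\ge\; 0,
```

with equality exactly when $\mu_u = \mu_{u^*}$; in this sense the true potential is a minimizer of the relative entropy, and Henderson's uniqueness result identifies it as the only one within the relevant class of pair potentials.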


Early stopping for $L^{2}$-boosting in high-dimensional linear models

Bernhard Stankewitz

Bocconi University Milano, Italy

We consider $ L^{2} $-boosting in a sparse high-dimensional linear model via orthogonal matching pursuit (OMP). For this greedy, nonlinear subspace selection procedure, we analyze a data-driven early stopping time $ \tau $, which is sequential in the sense that its computation is based on the first $ \tau $ iterations only. Our approach is substantially less costly than established model selection criteria, which require the computation of the full boosting path.
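
A minimal sketch of OMP with a residual-based sequential stopping rule (the discrepancy-type threshold, the constant `kappa`, and the assumed known noise level `sigma2` are illustrative; the stopping time analyzed in the talk is defined differently in its details):

```python
import numpy as np

def omp_early_stopping(X, y, sigma2, kappa=1.0, max_iter=None):
    # L2-boosting via orthogonal matching pursuit: greedily add the
    # column most correlated with the residual, refit by least squares
    # on the selected support, and stop as soon as the squared residual
    # norm drops to the noise level -- a sequential rule that only ever
    # looks at the iterations computed so far.
    n, p = X.shape
    max_iter = max_iter or min(n, p)
    support, coef = [], np.zeros(0)
    residual = y.astype(float).copy()
    threshold = kappa * n * sigma2          # discrepancy-type level
    for _ in range(max_iter):
        if residual @ residual <= threshold:
            break                            # stopping time tau reached
        j = int(np.argmax(np.abs(X.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ coef
    beta = np.zeros(p)
    beta[support] = coef
    return beta, support
```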

We prove that sequential early stopping preserves statistical optimality in this setting, in terms of a general oracle inequality for the empirical risk and the recently established optimal convergence rates for the population risk. The proofs include a subtle $ \omega $-pointwise analysis of a stochastic bias-variance trade-off, which is induced by the greedy optimization procedure at the core of OMP. Simulation studies show that, at a significantly reduced computational cost, these methods match or even exceed the performance of other state-of-the-art algorithms such as the cross-validated Lasso or model selection via a high-dimensional Akaike criterion based on the full boosting path.



Early stopping for conjugate gradients in statistical inverse problems

Laura Hucker, Markus Reiß

Humboldt-Universität zu Berlin, Germany

We consider estimators obtained by applying the conjugate gradient algorithm to the normal equation of a prototypical statistical inverse problem. For such iterative procedures, it is necessary to choose a suitable iteration index to avoid under- and overfitting. Unfortunately, classical model selection criteria can be prohibitively expensive in high dimensions. In contrast, it has been shown for several methods that sequential early stopping can achieve statistical and computational efficiency by halting at a data-driven index depending on previous iterates only. Residual-based stopping rules, similar to the discrepancy principle for deterministic problems, are well understood for linear regularization methods. However, in the case of conjugate gradients, the estimator depends nonlinearly on the observations, allowing for greater flexibility. This significantly complicates the error analysis. We establish adaptation results in this setting.
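
For concreteness, a sketch of conjugate gradients applied to the normal equation $A^\top A x = A^\top y$ with a residual-based stopping index in the spirit of the discrepancy principle (the calibration via `noise_level` and `kappa` is illustrative, not the rule analyzed in the talk):

```python
import numpy as np

def cg_normal_equation(A, y, noise_level, kappa=1.0, max_iter=100):
    # Conjugate gradients for A^T A x = A^T y.  The iteration halts at
    # the first index where the data residual |A x_k - y| falls below
    # kappa * noise_level -- a sequential, data-driven stopping rule
    # that depends on previous iterates only.
    n = A.shape[1]
    x = np.zeros(n)
    r = A.T @ y                  # normal-equation residual at x = 0
    p = r.copy()
    for _ in range(max_iter):
        if np.linalg.norm(A @ x - y) <= kappa * noise_level:
            break                # data-driven early stopping index
        Ap = A @ p
        alpha = (r @ r) / (Ap @ Ap)
        x = x + alpha * p
        r_new = r - alpha * (A.T @ Ap)
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x
```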


 