Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
MS07 1: Regularization for Learning from Limited Data: From Theory to Medical Applications
Time: Friday, 08/Sept/2023, 1:30pm - 3:30pm

Session Chair: Markus Holzleitner
Session Chair: Sergei Pereverzyev
Session Chair: Werner Zellinger
Location: VG1.101


Presentations

Regularized Radon-Nikodym differentiation and some of its applications

Duc Hoan Nguyen, Sergei Pereverzyev, Werner Zellinger

Johann Radon Institute for Computational and Applied Mathematics, Austria

We discuss the problem of estimating Radon-Nikodym derivatives. This problem appears in various applications, such as covariate shift adaptation, likelihood-ratio testing, mutual information estimation, and conditional probability estimation. To address it, we employ the general regularization scheme in reproducing kernel Hilbert spaces. The convergence rate of the corresponding regularized learning algorithm is established by taking into account both the smoothness of the derivative and the capacity of the space in which it is estimated, expressed in terms of general source conditions and regularized Christoffel functions. The theoretical results are illustrated by numerical simulations.
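To make the flavor of such a scheme concrete, below is a minimal sketch of a regularized least-squares density-ratio estimator in an RKHS (in the spirit of kernel estimators such as KuLSIF); it is not the authors' exact algorithm, and the Gaussian kernel, its bandwidth, and the regularization parameter are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def estimate_density_ratio(x_p, x_q, lam=1e-2, sigma=1.0):
    """Regularized least-squares estimate of beta = dQ/dP in an RKHS.

    Minimizes (1/2n) sum_i beta(x_i^P)^2 - (1/m) sum_j beta(x_j^Q)
              + (lam/2) ||beta||_H^2
    over beta in the span of k(., u_l) for all sample points u_l.
    """
    n, m = len(x_p), len(x_q)
    U = np.vstack([x_p, x_q])           # all N = n + m expansion points
    K = gaussian_kernel(U, U, sigma)    # full Gram matrix
    Kp = K[:n]                          # rows indexed by P-samples
    Kq = K[n:]                          # rows indexed by Q-samples
    # First-order optimality condition of the empirical objective:
    A = Kp.T @ Kp / n + lam * K + 1e-10 * np.eye(n + m)
    b = Kq.T @ np.ones(m) / m
    coef = np.linalg.solve(A, b)
    return lambda x: gaussian_kernel(np.atleast_2d(x), U, sigma) @ coef

rng = np.random.default_rng(0)
x_p = rng.normal(0.0, 1.0, size=(200, 1))   # samples from P = N(0, 1)
x_q = rng.normal(0.5, 1.0, size=(200, 1))   # samples from Q = N(0.5, 1)
beta = estimate_density_ratio(x_p, x_q)
print(beta(np.array([[0.0], [1.0]])))       # estimated dQ/dP at two points
```

The regularization parameter here plays the role studied in the talk's convergence analysis: it trades off fit against the RKHS norm of the estimated derivative.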


Explicit error rate results in the context of domain generalization

Markus Holzleitner

JKU Linz, Austria

Given labeled data from different source distributions, the problem of domain generalization is to learn a model that generalizes well to new target distributions for which only unlabeled samples are available. We frame domain generalization as a problem of functional regression. This concept leads to a new algorithm for learning a linear operator from marginal distributions of inputs to the corresponding conditional distributions of outputs given inputs. Our algorithm allows a source-distribution-dependent construction of reproducing kernel Hilbert spaces for prediction and satisfies non-asymptotic error bounds for the idealized risk. We intend to give a short overview of the required mathematical concepts and proof techniques, and illustrate our approach with a numerical example. The talk is based on [1].

[1] M. Holzleitner, S. V. Pereverzyev, W. Zellinger. Domain Generalization by Functional Regression. arXiv:2302.04724, 2023.
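As a toy illustration of distribution-dependent prediction, the sketch below represents each source domain by its empirical input distribution, compares domains through the maximum mean discrepancy (MMD), and runs kernel ridge regression with a product kernel on (input, domain) pairs. This is a simplified stand-in for the operator-learning construction of [1]; all kernels, bandwidths, and the synthetic data are assumptions made for the example.

```python
import numpy as np

def gauss(X, Y, s=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s ** 2))

def mmd2(X, Y, s=1.0):
    # (Biased) squared maximum mean discrepancy between two samples.
    return gauss(X, X, s).mean() - 2 * gauss(X, Y, s).mean() + gauss(Y, Y, s).mean()

# Source domains: inputs shifted per domain, shared regression function.
rng = np.random.default_rng(1)
domains = []
for shift in [-1.0, 0.0, 1.0]:
    X = rng.normal(shift, 1.0, size=(60, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=60)
    domains.append((X, y))

# Stack training data, remembering which domain each point came from.
Xtr = np.vstack([X for X, _ in domains])
ytr = np.concatenate([y for _, y in domains])
dom_id = np.concatenate([np.full(len(X), j) for j, (X, _) in enumerate(domains)])

# Domain-level kernel built from pairwise MMDs between source marginals.
D = np.array([[mmd2(Xi, Xj) for Xj, _ in domains] for Xi, _ in domains])
K_dom = np.exp(-D / 0.5)

# Product kernel on (input, domain) pairs; kernel ridge regression.
K = gauss(Xtr, Xtr) * K_dom[np.ix_(dom_id, dom_id)]
alpha = np.linalg.solve(K + 1e-2 * np.eye(len(ytr)), ytr)

# New target domain: only unlabeled inputs are available.
Xte = rng.normal(0.5, 1.0, size=(60, 1))
d_te = np.array([mmd2(Xte, Xj) for Xj, _ in domains])
k_dom_te = np.exp(-d_te / 0.5)
K_cross = gauss(Xte, Xtr) * k_dom_te[dom_id][None, :]
y_pred = K_cross @ alpha
print(np.mean((y_pred - np.sin(Xte[:, 0])) ** 2))  # target MSE
```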



Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation

Marius-Constantin Dinu

JKU Linz, Austria

We study the problem of choosing algorithm hyper-parameters in unsupervised domain adaptation, i.e., with labeled data in a source domain and unlabeled data in a target domain drawn from a different input distribution. We follow the strategy of computing several models using different hyper-parameters and subsequently computing a linear aggregation of these models. While several heuristics following this strategy exist, methods backed by a thorough theory for bounding the target error are still missing. To this end, we propose a method that extends weighted least squares to vector-valued functions, e.g., deep neural networks. We show that the target error of the proposed algorithm is asymptotically not worse than twice the error of the unknown optimal aggregation. We also perform a large-scale empirical comparative study on several datasets, including text, images, electroencephalogram, body sensor signals, and signals from mobile phones. Our method outperforms deep embedded validation (DEV) and importance weighted validation (IWV) on all datasets, setting a new state of the art for solving parameter choice issues in unsupervised domain adaptation with theoretical error guarantees. We further study several competitive heuristics, all of which outperform IWV and DEV on at least five datasets; however, our method outperforms each heuristic on at least five of seven datasets. This talk is based on [1].

[1] M.-C. Dinu, M. Holzleitner, M. Beck, H. D. Nguyen, A. Huber, H. Eghbal-zadeh, B. A. Moser, S. Pereverzyev, S. Hochreiter, W. Zellinger. Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation. The Eleventh International Conference on Learning Representations (ICLR), 2023.
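A simplified, scalar-valued sketch of the aggregation step is given below: candidate models trained with different hyper-parameters are combined by importance-weighted least squares on the labeled source data. The method of [1] extends this to vector-valued functions such as deep networks and carries the oracle guarantee stated above; here the importance weights are an assumed toy density ratio rather than an estimated one, and the candidate models are simple polynomial fits chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Labeled source data and candidate models trained with different
# hyper-parameters (here: polynomial ridge fits of varying degree).
Xs = rng.uniform(-2, 2, size=200)
ys = np.sin(Xs) + 0.1 * rng.normal(size=200)

def fit_poly(deg, lam=1e-3):
    V = np.vander(Xs, deg + 1)
    w = np.linalg.solve(V.T @ V + lam * np.eye(deg + 1), V.T @ ys)
    return lambda x: np.vander(x, deg + 1) @ w

models = [fit_poly(d) for d in (1, 3, 5, 9)]

# Importance weights w(x) ~ dP_target/dP_source(x); assumed known here
# (in practice they would themselves be estimated from the data).
def imp_weights(x):
    # target inputs concentrated around x = 1, source uniform on [-2, 2]
    return np.exp(-0.5 * (x - 1.0) ** 2) / 0.6

# Weighted least squares over aggregation coefficients c:
#   minimize sum_i w(x_i) * (sum_k c_k f_k(x_i) - y_i)^2.
F = np.column_stack([f(Xs) for f in models])     # n x K model predictions
W = imp_weights(Xs)
A = F.T @ (W[:, None] * F)
b = F.T @ (W * ys)
c = np.linalg.solve(A + 1e-8 * np.eye(len(models)), b)

def aggregate(x):
    return np.column_stack([f(x) for f in models]) @ c

Xt = rng.normal(1.0, 0.3, size=200)                # target-like draw
print(np.mean((aggregate(Xt) - np.sin(Xt)) ** 2))  # target MSE of aggregate
```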


Convex regularization in statistical inverse learning problems

Luca Ratti (1), Tatiana A. Bubba (2), Martin Burger (3), Tapio Helin (4)

(1) Università degli Studi di Bologna, Italy; (2) University of Bath, UK; (3) Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany; (4) Lappeenranta-Lahti University of Technology, Finland

We consider a problem at the crossroads of inverse problems and statistical learning: namely, the estimation of an unknown function from noisy and indirect measurements, which are only evaluated at randomly distributed design points. This setting occurs in many contexts in modern science and engineering, where massive data sets arise from large-scale problems with poorly controllable experimental conditions. When tackling this task, a common ground between inverse problems and statistical learning is regularization theory, although viewed from slightly different perspectives. In this talk, I will present a unified approach, leading to convergence estimates of the regularized solution to the ground truth, both as the noise on the data reduces and as the number of evaluation points increases. I will mainly focus on a class of convex, $p$-homogeneous regularization functionals (with $p$ between $1$ and $2$), which allow moving from classical Tikhonov regularization towards sparsity-promoting techniques. Particular attention is given to Besov norm regularization, a case of interest for wavelet-based regularization. The most prominent application I will discuss is X-ray tomography with randomly sampled angular views. I will finally sketch some connections with recent extensions of our approach, including a more general family of sparsifying transforms and dynamical inverse problems.
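The random-design setting can be illustrated with a minimal numerical sketch: a smoothing operator plays the role of the indirect measurement, noisy evaluations are taken at randomly drawn design points, and a Tikhonov ($p = 2$) estimator is computed in closed form. The convolution operator, noise level, and regularization parameter below are illustrative assumptions; the $p$-homogeneous and Besov-norm penalties discussed in the talk require iterative minimization instead.

```python
import numpy as np

rng = np.random.default_rng(3)

# Discretize the unknown function on a grid and let A be a smoothing
# (convolution) operator, a stand-in for the indirect measurement model.
t = np.linspace(0, 1, 200)
f_true = np.sin(2 * np.pi * t) + (t > 0.5)          # ground truth
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 0.03) ** 2)
A /= A.sum(axis=1, keepdims=True)                   # row-normalized blur

# Statistical inverse learning: noisy values of (A f)(x_i) at n randomly
# drawn design points, rather than a full deterministic measurement.
n = 80
idx = rng.integers(0, len(t), size=n)
y = (A @ f_true)[idx] + 0.01 * rng.normal(size=n)
S = A[idx]                                          # sampled rows of A

# Tikhonov (p = 2) regularized estimator:
#   f_alpha = argmin_f (1/n) ||S f - y||^2 + alpha ||f||^2.
# For 1 <= p < 2 the penalty ||f||_p^p promotes sparsity and the
# minimizer must be computed iteratively instead.
alpha = 1e-3
f_hat = np.linalg.solve(S.T @ S / n + alpha * np.eye(len(t)), S.T @ y / n)
print(np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))
```

Increasing the number of design points $n$ and decreasing the noise level both tighten the recovery, mirroring the two limits in the convergence estimates described above.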

