Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only the sessions held on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

 
 
Session Overview
Session
MS03 2: Compressed Sensing meets Statistical Inverse Learning
Time:
Monday, 04/Sept/2023:
4:00pm - 6:00pm

Session Chair: Tatiana Alessandra Bubba
Session Chair: Luca Ratti
Session Chair: Matteo Santacesaria
Location: VG2.103


Presentations

SGD for statistical inverse problems

Abhishake Abhishake

LUT University, Lappeenranta, Finland

We study a statistical inverse learning problem in which we observe the noisy image of a quantity under an operator at random design points. We consider SGD schemes to construct an estimator of the quantity for this ill-posed inverse problem. We develop a theoretical analysis of the minimizer of the regularization scheme using the framework of reproducing kernel Hilbert spaces, and we discuss convergence rates for the proposed scheme, uniformly over classes of admissible solutions defined through appropriate source conditions.
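For orientation, a minimal sketch of this setting in formulas (our notation, assuming a Tikhonov-type penalty; not necessarily the exact scheme of the talk): one observes

\[
y_i = (A f^{\dagger})(x_i) + \varepsilon_i, \qquad i = 1, \dots, n,
\]

for an ill-posed operator $A$ and random design points $x_i$, and iterates

\[
f_{t+1} = f_t - \gamma_t \Big( \big( (A f_t)(x_{i_t}) - y_{i_t} \big) \, A^* K_{x_{i_t}} + \lambda f_t \Big),
\]

where $\gamma_t$ is the step size, $i_t$ the data index sampled at step $t$, $\lambda \geq 0$ a regularization parameter, and $A^* K_x \in \mathcal{H}$ the representer of the functional $f \mapsto (A f)(x)$ in the reproducing kernel Hilbert space $\mathcal{H}$. Source conditions then express the smoothness of $f^{\dagger}$ relative to the associated covariance operator, and the uniform convergence rates are measured against them.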


Convex regularization in statistical inverse learning problems

Tapio Helin

LUT University, Finland

Statistical inverse learning aims at recovering an unknown function $f$ from randomly scattered and possibly noisy point evaluations of another function $g$, connected to $f$ via an ill-posed mathematical model. In this talk, I blend statistical inverse learning theory with convex regularization strategies and derive convergence rates for the corresponding estimators.
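In formulas (a sketch in our notation, with $g = A f$ for an ill-posed forward operator $A$): given data $y_i = g(x_i) + \varepsilon_i$ at random design points $x_1, \dots, x_n$, the estimators in question are of the form

\[
\hat{f}_{\lambda} \in \operatorname*{arg\,min}_{f} \; \frac{1}{n} \sum_{i=1}^{n} \big( (A f)(x_i) - y_i \big)^2 + \lambda R(f),
\]

where $R$ is a convex penalty, for instance a sparsity-promoting norm or total variation. The convergence rates quantify how fast $\hat{f}_{\lambda(n)}$ approaches $f$ as the number of design points $n$ grows, under source conditions on $f$.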


An off-the-grid approach to multi-compartment magnetic fingerprinting

Clarice Poon

University of Bath, United Kingdom

We propose an off-the-grid numerical approach to separating multiple tissue compartments within image voxels and quantitatively estimating their nuclear magnetic resonance (NMR) properties and mixture fractions from magnetic resonance fingerprinting (MRF) measurements. One of the challenges is that fine-grid discretisation of the multi-dimensional NMR properties creates large and highly coherent MRF dictionaries, which strain the scalability and precision of numerical methods for sparse approximation. To overcome these issues, we propose an off-the-grid approach equipped with an extended notion of sparse group lasso regularisation for sparse approximation using continuous Bloch response models. Through numerical experiments on simulated and in-vivo healthy brain MRF data, we demonstrate the effectiveness of the proposed scheme compared to baseline multi-compartment MRF methods.

This is joint work with Mohammad Golbabaee.
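A rough caricature of the off-the-grid formulation (our notation, not the authors' exact functional): rather than expanding the signal in a dictionary built on a fine grid of NMR parameters, one optimizes over sparse nonnegative measures $\mu$ on the continuous parameter domain $\Theta$,

\[
\min_{\mu} \; \frac{1}{2} \Big\| y - \int_{\Theta} \Phi(\theta) \, d\mu(\theta) \Big\|_2^2 + \lambda \, \mathcal{R}(\mu),
\]

where $y$ are the MRF measurements of a voxel, $\Phi(\theta)$ is the continuous Bloch response for NMR parameters $\theta$, and $\mathcal{R}$ is a sparse-group-lasso-type regularizer extended to measures (reducing to the total-variation norm of $\mu$ in the ungrouped case). The tissue compartments are the recovered spikes of $\mu$, and the mixture fractions are their amplitudes.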



How many neurons do we need? A refined analysis

Mike Nguyen, Nicole Mücke

Technische Universität Braunschweig, Germany

We present new results on random feature approximation in kernel methods and discuss their connection to the generalization properties of two-layer neural networks in the NTK regime. We improve the results of Nitanda and Suzuki [1] in several directions; in particular, we overcome the saturation effect appearing there by providing fast rates of convergence for smooth objectives. Along the way, we precisely track the number of hidden neurons required for generalization.

[1] A. Nitanda and T. Suzuki. Functional Gradient Boosting for Learning Residual-like Networks with Statistical Guarantees. In Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), PMLR 108:2981-2991, 2020.
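To make the object of study concrete, here is a minimal, self-contained sketch of random feature approximation (our illustration; the ReLU feature map, Gaussian weights, and all names are assumptions, not the talk's setup). The M rows of W play the role of the M hidden neurons of a two-layer network whose first layer is frozen at its random initialization, as in the NTK-style regime; the question in the title asks how M must scale with the sample size n for the resulting estimator to generalize as well as the exact kernel method.

import numpy as np

def random_features(X, W, b):
    # ReLU random-feature map: phi(x) = sqrt(2/M) * max(W x + b, 0),
    # so that phi(x) . phi(x') approximates a kernel k(x, x') in expectation.
    return np.sqrt(2.0 / W.shape[0]) * np.maximum(X @ W.T + b, 0.0)

rng = np.random.default_rng(0)
n, d, M, lam = 200, 5, 500, 1e-3                    # samples, input dim, hidden neurons, ridge parameter
X = rng.standard_normal((n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)  # noisy scalar target

W = rng.standard_normal((M, d))                     # frozen random first-layer weights
b = rng.standard_normal(M)                          # frozen random biases
Z = random_features(X, W, b)                        # n x M feature matrix; Z @ Z.T approximates the kernel matrix

# Training only the output layer = ridge regression in feature space,
# i.e. kernel ridge regression with the M-feature approximate kernel.
theta = np.linalg.solve(Z.T @ Z + lam * np.eye(M), Z.T @ y)
y_hat = Z @ theta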


 