Conference Agenda

Session Overview
Session: MS03 1: Compressed Sensing meets Statistical Inverse Learning
Time: Monday, 04/Sept/2023, 1:30pm - 3:30pm
Session Chairs: Tatiana Alessandra Bubba, Luca Ratti, Matteo Santacesaria
Location: VG2.103


Presentations

Compressed sensing for the sparse Radon transform

Giovanni S. Alberti (1), Alessandro Felisi (1), Matteo Santacesaria (1), S. Ivan Trapasso (2)

(1) University of Genoa, Italy; (2) Politecnico di Torino, Italy

Compressed sensing allows for the recovery of sparse signals from a number of measurements proportional to the sparsity of the unknown signal, up to logarithmic factors. The classical theory typically considers either random linear measurements or subsampled isometries and has found many applications, including accelerated magnetic resonance imaging, which is modeled by the subsampled Fourier transform. In our work, we develop a general theory of infinite-dimensional compressed sensing for abstract inverse problems, possibly ill-posed, involving an arbitrary forward operator. This is achieved by considering a generalized restricted isometry property and a quasi-diagonalization property of the forward map. As a notable application, we obtain rigorous recovery estimates for the sparse Radon transform (i.e., with a finite number of angles $\theta_1,\dots,\theta_m$), which models computed tomography. When the unknown signal is $s$-sparse with respect to an orthonormal basis of compactly supported wavelets, we prove exact recovery under the condition $m \gtrsim s$, up to logarithmic factors.

[1] G. S. Alberti, A. Felisi, M. Santacesaria, S. I. Trapasso. Compressed sensing for inverse problems and the sample complexity of the sparse Radon transform. arXiv preprint arXiv:2302.03577, 2023.
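
As a rough illustration of the recovery regime in the abstract, the following Python sketch recovers an $s$-sparse vector from roughly $s \log n$ linear measurements via iterative soft-thresholding (ISTA). A random Gaussian matrix stands in for the (quasi-diagonalized) Radon operator, and the dimensions, iteration count, and regularization weight are illustrative assumptions, not values from the paper.

    import numpy as np

    # Sparse recovery from few linear measurements via iterative
    # soft-thresholding (ISTA) for the l1-regularized least-squares
    # problem min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    rng = np.random.default_rng(0)
    n, s = 512, 10                       # ambient dimension, sparsity level
    m = 4 * s * int(np.log(n))           # m ~ s log n measurements (assumed constant 4)
    A = rng.standard_normal((m, n)) / np.sqrt(m)

    x_true = np.zeros(n)
    support = rng.choice(n, s, replace=False)
    x_true[support] = rng.standard_normal(s)
    y = A @ x_true                       # noiseless measurements

    lam = 0.01                           # l1 weight (illustrative)
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(n)
    for _ in range(2000):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

In this noiseless setting the estimate is accurate up to the small bias induced by the l1 weight; shrinking lam, or debiasing by least squares on the recovered support, tightens the recovery.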


Regularization for learning from unlabeled data using related labeled data

Werner Zellinger, Sergei V. Pereverzyev

Austrian Academy of Sciences, Austria

We consider the problem of learning from unlabeled target datasets using related labeled source datasets, e.g. learning from an image dataset of a target medical patient using expert-annotated datasets from related source patients. This problem is complicated by (a) missing target labels, e.g. no expert annotations of a tumor in the target data, and (b) possible differences between the source and target data-generating distributions, e.g. caused by human variation between patients. The three major methods for this problem are special cases of multiple or cascade regularization methods, i.e., methods involving more than one regularization simultaneously. This talk is based on [1-3] and reviews non-asymptotic (w.r.t. dataset size) error bounds for these three methods.

[1] W. Zellinger, N. Shepeleva, M.-C. Dinu, H. Eghbal-zadeh, H. D. Nguyen, B. Nessler, S. V. Pereverzyev, B. Moser. The balancing principle for parameter choice in distance-regularized domain adaptation. Advances in Neural Information Processing Systems (NeurIPS) 34: 20798--20811, 2021.

[2] E. R. Gizewski, L. Mayer, B. Moser, D. H. Nguyen, S. Pereverzyev Jr., S. V. Pereverzyev, N. Shepeleva, W. Zellinger. On a regularization of unsupervised domain adaptation in RKHS. Appl. Comput. Harmon. Anal. 57: 201--227, 2022. https://doi.org/10.1016/j.acha.2021.12.002

[3] M. Holzleitner, S. V. Pereverzyev, W. Zellinger. Domain generalization by functional regression. arXiv preprint arXiv:2302.04724, 2023. https://doi.org/10.48550/arXiv.2302.04724
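
The following Python sketch shows, in the simplest linear setting, what a method with two simultaneous regularizers can look like: a least-squares fit on the labeled source data, a standard ridge penalty, and a second penalty that pushes the predictor to behave similarly on source and target inputs. This is only a schematic stand-in for the distance-regularized methods analyzed in [1-3]; the mean-matching penalty and all weights are assumptions for illustration.

    import numpy as np

    # Schematic two-regularizer objective for labeled source data (Xs, ys)
    # and unlabeled target inputs Xt:
    #   min_w ||Xs w - ys||^2 + lam1 ||w||^2
    #                         + lam2 (mean(Xt) @ w - mean(Xs) @ w)^2
    # The mean-matching penalty and the weights lam1, lam2 are illustrative
    # assumptions. The objective is quadratic in w, so the minimizer
    # solves a linear system.
    rng = np.random.default_rng(0)
    n_s, n_t, d = 200, 300, 5
    w_true = rng.standard_normal(d)

    Xs = rng.standard_normal((n_s, d))
    ys = Xs @ w_true + 0.1 * rng.standard_normal(n_s)
    Xt = rng.standard_normal((n_t, d)) + 0.5       # shifted target distribution

    lam1, lam2 = 1e-2, 1e-1                        # assumed regularization weights
    delta = Xt.mean(axis=0) - Xs.mean(axis=0)      # domain mean difference

    A = Xs.T @ Xs + lam1 * np.eye(d) + lam2 * np.outer(delta, delta)
    b = Xs.T @ ys
    w = np.linalg.solve(A, b)

    print("estimation error:", np.linalg.norm(w - w_true))

Choosing lam1 and lam2 without target labels is exactly the parameter-choice problem that the balancing principle of [1] addresses.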



Random tree Besov priors for detail detection

Hanne Kekkonen (1), Matti Lassas (2), Samuli Siltanen (2)

(1) Delft University of Technology, The Netherlands; (2) University of Helsinki, Finland

Besov priors are well suited for imaging, since smooth functions with few local irregularities have a sparse expansion in the wavelet basis, and the prior encourages such sparsity. The edge preservation of Besov priors can be enhanced by introducing a new random variable T that takes values in the space of ‘trees’ and is chosen so that the realizations have jumps only on a small set. The density of the tree, and hence the size of the set of jumps, is controlled by a hyperparameter. In this talk I will show how this hyperparameter can be optimized for the data and what the optimal values tell us about the behaviour of the signal or image.
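
A minimal Python sketch of one way such a prior can be sampled, using Haar wavelets on [0, 1]: a wavelet coefficient at level j is kept only if its parent in the binary coefficient tree is kept and an independent Bernoulli(p) coin succeeds, so p plays the role of the density hyperparameter mentioned above. The Haar basis, the decay exponent beta, and all numerical values are illustrative assumptions, not the construction from the talk.

    import numpy as np

    # Draw from a random-tree Besov-type prior with Haar wavelets on [0, 1].
    # A node at level j is active only if its parent is active and a
    # Bernoulli(p) coin succeeds; active coefficients are Gaussian with an
    # assumed smoothness scaling 2^(-j*beta).
    rng = np.random.default_rng(1)
    J, p, beta = 10, 0.6, 1.0            # max level, tree density, smoothness
    N = 2 ** J                           # grid size
    f = np.zeros(N)

    active = np.array([True])            # root of the coefficient tree
    for j in range(J):
        n_nodes = 2 ** j
        coeffs = np.where(active,
                          rng.standard_normal(n_nodes) * 2.0 ** (-j * beta),
                          0.0)
        width = N // n_nodes
        half = width // 2
        for k in range(n_nodes):
            if coeffs[k] != 0.0:
                # Haar wavelet: +1 on the left half, -1 on the right half
                f[k * width : k * width + half] += coeffs[k]
                f[k * width + half : (k + 1) * width] -= coeffs[k]
        # each active node spawns two children, each surviving with prob. p
        active = np.repeat(active, 2) & (rng.random(2 * n_nodes) < p)

    print("fraction of grid points where the draw is nonzero:",
          np.mean(f != 0.0))

Smaller values of p prune the tree earlier, so the draw has fewer jump locations; this is the knob whose data-driven choice the talk discusses.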



 