Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session
MS21 2: Prior Information in Inverse Problems
Time:
Tuesday, 05/Sept/2023:
4:00pm - 6:00pm

Session Chair: Andreas Horst
Session Chair: Jakob Lemvig
Location: VG2.103


Presentations

Regularized, pretrained and subspace-restricted Deep Image Prior for CT reconstruction

Riccardo Barbano1, Javier Antorán2, Johannes Leuschner3, Bangti Jin4, José Miguel Hernández-Lobato2, Zeljko Kereta1, Daniel Otero Baguer3, Maximilian Schmidt3, Alexander Denker3, Andreas Hauptmann5,1, Peter Maaß3

1Department of Computer Science, University College London, United Kingdom; 2Department of Engineering, University of Cambridge, United Kingdom; 3Center for Industrial Mathematics, University of Bremen, Germany; 4Department of Mathematics, The Chinese University of Hong Kong, P. R. China; 5Research Unit of Mathematical Sciences, University of Oulu, Finland

Computed tomography (CT) is an important tool in both medicine and industry. By now, a great variety of deep learning (DL) techniques has been developed for inverse imaging tasks, including CT reconstruction. In contrast to most DL approaches, the deep image prior (DIP) is an unsupervised framework that does not rely on a large training dataset, but only on the single degraded observation. The central observation behind DIP is that early-stopped optimization of an untrained network can lead to favorable solutions, with the network architecture thus acting as an implicit prior.

We extend the DIP in several ways. First, we add an explicit prior in the form of a total variation regularization term, which can stabilize and improve the reconstruction. Second, we pretrain on a post-processing task with easy-to-generate synthetic data; this injects prior information, learned from the synthetic image class and the operator-specific degradation, into the subsequent unsupervised DIP optimization. This two-stage procedure of supervised pretraining and unsupervised fine-tuning is called the educated DIP (EDIP) and often requires a significantly shorter optimization time in the fine-tuning stage than the untrained DIP. Finally, we experiment with restricting the parameter space in the fine-tuning stage of EDIP to an affine linear subspace, expanded around the pretraining parameters with a sparsified basis obtained from checkpoints saved during pretraining. This both reduces overfitting and makes second-order optimization methods feasible, enabling more stable and faster reconstruction.
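
The following minimal sketch illustrates the TV-regularized DIP objective described above; it is not the authors' implementation, and `forward_op`, `net`, `z`, and all hyperparameters are illustrative placeholders (PyTorch):

```python
import torch

def tv(x):
    # Anisotropic total variation of an image batch of shape (B, 1, H, W).
    return (x[..., 1:, :] - x[..., :-1, :]).abs().sum() + \
           (x[..., :, 1:] - x[..., :, :-1]).abs().sum()

def dip_tv_reconstruct(forward_op, y_noisy, net, z,
                       lam=1e-4, iters=5000, lr=1e-3):
    # forward_op: differentiable CT forward operator (e.g. a ray transform)
    # y_noisy:    the single degraded observation (sinogram)
    # net:        image-generating network; untrained for plain DIP
    # z:          fixed random input tensor
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(iters):  # the iteration budget realizes early stopping
        opt.zero_grad()
        x = net(z)  # current reconstruction
        loss = ((forward_op(x) - y_noisy) ** 2).sum() + lam * tv(x)
        loss.backward()
        opt.step()
    return net(z).detach()
```

In the EDIP variant, `net` would be initialized from the pretraining checkpoint; in the subspace-restricted variant, only the coefficients of the affine linear subspace would be optimized instead of all network parameters.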


Monitoring of hemorrhagic stroke using Electrical Impedance Tomography

Ville Kolehmainen

University of Eastern Finland, Finland

In this talk, we present recent progress in the development of electrical impedance tomography (EIT) based bedside monitoring of hemorrhagic stroke. We present the practical setup and pipeline for this novel application of EIT, as well as the CT-prior-informed image reconstruction method we have developed for it. The feasibility of the approach is studied with simulated data from anatomically highly accurate simulation models and with experimental phantom data from a laboratory setup.


Edge-preserving inversion with $\alpha$-stable priors

Jarkko Suuronen1, Tomás Soto1, Neil Chada2, Lassi Roininen1

1LUT University, Finland; 2Heriot-Watt University, United Kingdom

The $\alpha$-stable distributions are a family of heavy-tailed and infinitely divisible distributions that are well suited to edge-preserving inversion in the context of discretizing infinite-dimensional, continuous-time statistical inverse problems. In this talk we discuss some of the technical issues arising from the application of such priors.
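
As a rough numerical illustration (an assumption for this abstract, not the speakers' construction), i.i.d. symmetric $\alpha$-stable increments can be sampled with `scipy.stats.levy_stable` to form a discretized heavy-tailed "random walk" prior, whose occasional large jumps are what make such priors attractive for edge preservation:

```python
import numpy as np
from scipy.stats import levy_stable

def sample_alpha_stable_prior(n=256, alpha=1.2, scale=0.05, seed=0):
    # i.i.d. symmetric alpha-stable increments (beta=0); alpha=2 would
    # recover a Gaussian (Brownian-motion-like) prior without jumps,
    # while smaller alpha gives heavier tails and occasional large jumps.
    rng = np.random.default_rng(seed)
    increments = levy_stable.rvs(alpha, 0.0, loc=0.0, scale=scale,
                                 size=n, random_state=rng)
    return np.cumsum(increments)  # a rough path with edge-like jumps

draw = sample_alpha_stable_prior()
```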


Optimal learning of high-dimensional classification problems using deep neural networks

Felix Voigtlaender

Katholische Universität Eichstätt-Ingolstadt, Germany

We study the problem of learning classification functions from noiseless training samples, under the assumption that the decision boundary is of a certain regularity. We establish universal lower bounds for this estimation problem, for general classes of continuous decision boundaries. For the class of locally Barron-regular decision boundaries, we find that the optimal estimation rates are essentially independent of the underlying dimension and can be realized by empirical risk minimization methods over a suitable class of deep neural networks. These results are based on novel estimates of the $L^1$ and $L^\infty$ entropies of the class of Barron-regular functions.

This is joint work with Philipp Petersen (University of Vienna).
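
A schematic of the empirical risk minimization procedure the abstract refers to, on a synthetic problem; the decision boundary, network size, and training setup below are illustrative assumptions (PyTorch):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Noiseless labels from a smooth decision boundary (illustrative choice).
def label(x):                       # x: (n, 2)
    return (x[:, 1] > torch.sin(3 * x[:, 0])).float()

x_train = torch.rand(2000, 2) * 2 - 1   # samples in [-1, 1]^2
y_train = label(x_train)

# Empirical risk minimization over a small ReLU network.
net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(2000):
    opt.zero_grad()
    loss = loss_fn(net(x_train).squeeze(1), y_train)
    loss.backward()
    opt.step()
```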

