Conference Agenda

Overview and details of the sessions of this conference.

 
 
Session Overview
Session: CT05: Contributed talks
Time: Thursday, 07/Sept/2023, 1:30pm - 3:30pm
Session Chair: Tram Nguyen
Location: VG2.105


Presentations

Quaternary image decomposition with cross-correlation-based multi-parameter selection

Laura Girometti, Martin Huska, Alessandro Lanza, Serena Morigi

University of Bologna, Italy

Separating different features in images is a challenging problem, especially the separation of the textural component when the image is noisy. In the last two decades, many papers have been published on image decomposition, addressing modeling and algorithmic aspects and presenting applications of image decomposition to cartooning, texture separation, denoising, soft shadow/spot light removal and structure retrieval. Given the desired properties of the image components, the valuable contributions to this problem rely on variational formulations which minimize a sum of different energy terms: the total variation semi-norm, the $L^1$-norm, and, to model the oscillatory component of an image, the G-norm, its approximations by the $\mathrm{div}(L^p)$-norm and the $H^{-1}$-norm, and homogeneous Besov norms. The intrinsic difficulty of these minimization problems comes from the numerical intractability of the considered norms, from the tuning of the numerous model parameters and, above all, from the complexity of extracting noise from a textured image, given the strong similarity between these two components.

In this talk, I will present a two-stage variational model for the additive decomposition of images into piecewise constant, smooth, textured and white noise components. Then, I will discuss how the challenging separation of noise from texture can be successfully overcome by integrating a whiteness constraint in the model, and how the selection of the regularization parameters can be performed based on a novel multi-parameter cross-correlation principle. Finally, I will present numerical results that show the potential of the proposed model for the decomposition of textured images corrupted by several kinds of additive white noise.
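As a rough illustration of the two ingredients just mentioned (a hedged sketch only; the functions below are illustrative assumptions, not the authors' model or code), the following Python snippet computes a normalised residual auto-correlation, which is close to a delta at lag zero when the extracted noise component is white, and a normalised cross-correlation between two components, which selection principles of this kind typically drive towards zero:

```python
import numpy as np

# Illustrative sketch only, not the model of the talk: whiteness and
# cross-correlation measures that such parameter selection principles rely on.

def normalized_autocorrelation(res):
    """Circular sample auto-correlation of a residual image, scaled so the
    lag-zero value equals 1; for white noise the off-zero lags are near 0."""
    r = res - res.mean()
    F = np.fft.fft2(r)
    ac = np.fft.ifft2(F * np.conj(F)).real
    return ac / ac.flat[0]

def normalized_crosscorrelation(u, v):
    """Lag-zero normalised cross-correlation between two image components;
    values near 0 indicate the components are well separated."""
    a, b = u - u.mean(), v - v.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# A multi-parameter scan would, e.g., keep the parameter combinations for which
# the off-zero auto-correlation of the noise component is smallest and the
# cross-correlation between the texture and noise components is closest to zero.
```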


Heuristic parameter choice from local minimum points of the quasioptimality function for the class of regularization methods

Uno Hämarik, Toomas Raus

University of Tartu, Estonia

We consider an operator equation \begin{equation*} Au=f, \quad f\in R(A),\tag{1} \end{equation*} where $A\in L(H, F)$ is a linear continuous operator between real Hilbert spaces $H$ and $F$. In general this problem is ill-posed: the range $R(A)$ may be non-closed and the kernel $N(A)$ may be non-trivial. Instead of the exact right-hand side $f_*$ we have only an approximation $f \in F$. For the regularization of problem (1) we consider the following class of regularization methods: \begin{equation*} u_r = (I- A^* A g_r (A^* A )) u_0 + g_r ( A^* A) A^* f. \end{equation*} Here $u_0$ is the initial approximation, $r$ is the regularization parameter, $I$ is the identity operator and the generating function $g_r(\lambda)$ satisfies the conditions \begin{equation*} \sup_{0\leq \lambda \leq \|A^*A\|} \left| g_r (\lambda)\right| \leq \gamma r, \quad r\geq 0, \quad \gamma>0, \end{equation*} \begin{equation*} \sup_{0\leq \lambda \leq \|A^*A\|} \lambda^p \left| 1-\lambda g_r (\lambda)\right| \leq \gamma_p r^{-p}, \quad r\geq 0, \quad 0\leq p \leq p_0, \quad \gamma_p>0. \end{equation*} Examples of methods in this class are the (iterated) Tikhonov method, the Landweber iteration, the implicit iteration method, the method of asymptotic regularization, the truncated singular value decomposition, etc.
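For concreteness, two standard members of this class (recalled here for illustration, not taken from the abstract) are the Tikhonov method and the Landweber iteration, with generating functions \begin{equation*} g_r(\lambda)=\frac{1}{\lambda+1/r} \quad (\text{Tikhonov},\ \alpha=1/r,\ p_0=1), \qquad g_n(\lambda)=\mu\sum_{k=0}^{n-1}(1-\mu\lambda)^{k}=\frac{1-(1-\mu\lambda)^{n}}{\lambda} \quad (\text{Landweber},\ r=n,\ 0<\mu\leq\|A^*A\|^{-1}), \end{equation*} which satisfy the first condition above with $\gamma=1$ and $\gamma=\mu$, respectively.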

If the noise level of the data is unknown, a heuristic rule is needed for the choice of the regularization parameter $r$. We propose to choose $r$ from the set $L_{min}$ of the local minimum points of the quasioptimality criterion function \begin{equation*} \psi_{Q}(r)=r \|A^*(I-AA^*g_r(AA^*))^{\frac{2}{p_0}}(Au_r-f)\| \end{equation*} on the set of parameters $\Omega=\left\{r_{j}: \,r_{j}=q r_{j-1},\, j=1,2,...,M, \, q>1 \right\}$. Then the following error estimates hold:

a) \begin{equation*} \min_{r \in L_{min}}\left\|u_r-u_{*}\right\| \leq C \min_{r_0 \leq r \leq r_M} \left\{ \left\| u_r^{+}-u_{*}\right\|+\left\| u_r-u_r^{+}\right\| \right\}. \end{equation*} Here $u_{*}$ and $u_r^{+}$ are the exact solution and the regularized solution of the equation $Au=f_*$, respectively, and the constant $C \leq c_q \ln(r_M / r_0)$ can be computed for each individual problem $Au=f$.

b) Let $u_{*}=(A^{*}A)^{p/2}v, \, \left\|v\right\| \leq \rho$. If $r_0=1, \, r_M=c \left\|f-f_{*}\right\|^{-2}, \, c=(2 \left\|u_{*}\right\|)^{2}$, then \begin{equation*} \min_{r \in L_{min}}\left\|u_r-u_{*}\right\| \leq c_{p,q} \rho^{1/(p+1)} \left| \ln \left\| f-f_{*} \right\| \right| \left\| f-f_{*} \right\|^{p/(p+1)}, \quad 0<p \leq 2 p_0. \end{equation*} We consider some algorithms for the parameter choice from the set $L_{min}$.
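As a hedged numerical illustration (a minimal sketch, not the authors' implementation), the following Python code evaluates $\psi_Q$ for the Tikhonov method ($p_0=1$, $r=1/\alpha$, $u_0=0$) on a small synthetic problem via its SVD, collects the local minimum points on a geometric grid $\Omega$ and reports the best attainable error over $L_{min}$; the toy problem and all constants are illustrative assumptions:

```python
import numpy as np

# Minimal sketch, not the authors' code: parameter choice from local minima of
# the quasioptimality function psi_Q for Tikhonov regularization (p_0 = 1,
# r = 1/alpha, initial approximation u_0 = 0) on a synthetic test problem.

rng = np.random.default_rng(0)
n = 200
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 0.9 ** np.arange(n)                          # decaying singular values of A
u_star = V @ (s * rng.standard_normal(n))        # "smooth" exact solution
f = U @ (s * (V.T @ u_star)) + 1e-4 * rng.standard_normal(n)   # noisy data
b = U.T @ f                                      # data in SVD coordinates

def psi_Q(r):
    # psi_Q(r) = r ||A^*(I - A A^* g_r(A A^*))^{2/p_0} (A u_r - f)|| for Tikhonov,
    # written coefficient-wise in the SVD basis.
    return r * np.linalg.norm(s * b / (r * s**2 + 1.0) ** 3)

def tikhonov(r):
    return V @ (s * b / (s**2 + 1.0 / r))

q, M = 1.3, 80
rs = 1.0 * q ** np.arange(M)                     # geometric grid Omega
vals = np.array([psi_Q(r) for r in rs])
L_min = [rs[j] for j in range(1, M - 1)
         if vals[j] <= vals[j - 1] and vals[j] <= vals[j + 1]]
if not L_min:                                    # fall back if no interior minimum
    L_min = [rs[int(np.argmin(vals))]]

print("local minimum points:", [f"{r:.3g}" for r in L_min])
print("best error over L_min:",
      min(np.linalg.norm(tikhonov(r) - u_star) for r in L_min))
```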


Choice of the regularization parameter in case of over- or underestimated noise level of data

Toomas Raus

University of Tartu, Estonia

We consider an operator equation \begin{equation*} Au=f_{*}, \quad f_{*}\in R(A), \end{equation*} where $A\in L(H,F)$ is a linear continuous operator between real Hilbert spaces $H$ and $F$. We assume that instead of the exact right-hand side $f_{*}$ we have only an approximation $f\in F$ with a supposed noise level $\delta$. To get the regularized solution we consider the Tikhonov method $u_\alpha=(\alpha I+A^{*}A)^{-1}A^{*}f,$ where $\alpha>0$ is the regularization parameter.

In [1] it is shown that at least one local minimum point $m_k$ of the quasioptimality criterion function \begin{equation*} \psi_{Q}(\alpha)=\alpha \left\|du_{\alpha}/d\alpha\right\|=\alpha^{-1}\left\|A^{*}(Au_{2,\alpha}-f)\right\|, \quad u_{2,\alpha}=(\alpha I+A^{*}A)^{-1}(\alpha u_{\alpha}+A^{*}f), \end{equation*} is always a good regularization parameter. We use this fact to choose a proper regularization parameter in the case of a possible over- or underestimation of the noise level.
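The stated identity between the two expressions for $\psi_Q$ can be checked numerically; the following short Python snippet (an illustration of my own, not from the talk) compares a finite-difference approximation of $\alpha\|du_\alpha/d\alpha\|$ with $\alpha^{-1}\|A^*(Au_{2,\alpha}-f)\|$ on a random small matrix:

```python
import numpy as np

# Quick numerical check (illustrative, not from the talk) of the identity
# alpha * ||d u_alpha / d alpha|| = alpha^{-1} * ||A^T (A u_{2,alpha} - f)||,
# using a finite difference for the derivative.

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 20))
f = rng.standard_normal(30)
alpha, h = 0.1, 1e-6
I = np.eye(20)

def tikhonov(a):
    return np.linalg.solve(a * I + A.T @ A, A.T @ f)

u_a = tikhonov(alpha)
lhs = alpha * np.linalg.norm((tikhonov(alpha + h) - u_a) / h)
u_2a = np.linalg.solve(alpha * I + A.T @ A, alpha * u_a + A.T @ f)
rhs = np.linalg.norm(A.T @ (A @ u_2a - f)) / alpha
print(lhs, rhs)   # the two values agree up to the finite-difference error
```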

If the actual noise level $\left\| f - f_*\right\|$ may be smaller than $\delta$, we propose the following rule.

Rule 1. Let $c>1$ and let the parameter $\alpha(\delta)$ be chosen according to the modified discrepancy principle or the monotone error rule (see [2]). For the regularization parameter, choose the smallest local minimum point $m_k \leq \alpha(\delta)$ of the quasioptimality criterion function for which \begin{equation*} \max_{\alpha,\alpha', m_k \leq \alpha' < \alpha \leq \alpha(\delta)} \frac{\psi_{Q}(\alpha')}{\psi_{Q}(\alpha)} \leq c \qquad \qquad \qquad(1) \end{equation*} holds. If no such local minimum point exists, choose $\alpha(\delta)$.

If the actual noise level may be either larger or smaller than $\delta$, we propose the following rule.

Rule 2. Let $c>1$ and let the parameter $\alpha(\delta)$ be chosen according to the balancing principle (see [2]). If there exists a local minimum point $m_{k_0} > \alpha(\delta)$ for which $\psi_{Q}(\alpha(\delta)) > c \psi_{Q}(m_{k_0})$, then choose $m_{k_0}$ as the regularization parameter. Otherwise, choose the smallest local minimum point $m_k \leq \alpha(\delta)$ for which inequality (1) holds. If no such local minimum point exists, choose $\alpha(\delta)$.
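A minimal sketch of Rule 1 in Python (an illustration only, not the authors' implementation; the grid, the values of $\psi_Q$ and $\alpha(\delta)$ are assumed to be precomputed, and the default $c$ is an illustrative choice):

```python
import numpy as np

# Hedged sketch of Rule 1, not the authors' code: alphas is an increasing grid,
# psi holds the values psi_Q(alpha) on that grid, and alpha_delta is the parameter
# given by the (modified) discrepancy principle or the monotone error rule.

def rule1(alphas, psi, alpha_delta, c=2.0):      # c > 1, illustrative default
    alphas, psi = np.asarray(alphas), np.asarray(psi)
    # interior local minimum points of psi_Q on the grid
    mins = [j for j in range(1, len(psi) - 1)
            if psi[j] <= psi[j - 1] and psi[j] <= psi[j + 1]]
    window = alphas <= alpha_delta
    for j in sorted(mins, key=lambda j: alphas[j]):        # smallest m_k first
        if not window[j]:
            continue
        idx = np.where(window & (alphas >= alphas[j]))[0]  # grid points in [m_k, alpha_delta]
        # condition (1): psi_Q(alpha') / psi_Q(alpha) <= c for all alpha' < alpha in the window
        if all(psi[i_small] / psi[i_large] <= c
               for a, i_large in enumerate(idx) for i_small in idx[:a]):
            return alphas[j]
    return alpha_delta        # no admissible local minimum point: take alpha(delta)
```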

[1] T. Raus, U. Hämarik. Heuristic parameter choice in Tikhonov method from minimizers of the quasi-optimality function. In: B. Hofmann, A. Leitão, J. P. Zubelli (Eds.), New Trends in Parameter Identification for Mathematical Models, Birkhäuser, pp. 227–244, 2018.

[2] T. Raus, U. Hämarik. About the Balancing Principle for Choice of the Regularization Parameter. Numerical Functional Analysis and Optimization, 30(9–10):951–970, 2008.


Multi-Penalty TV Regularisation for Image Denoising: A Study

Kemal Raik

University of Vienna, Austria

A common approach to image denoising is TV regularisation, i.e., $$ \frac{1}{2}\|k\ast x-y^\delta\|^2+\alpha\operatorname{TV}(x)\to\min_x, $$ with $k=\operatorname{id}$ and $\alpha>0$ the parameter determining the trade-off between accuracy and computational stability of the solution. The noise level $\|y-y^\delta\|\le\delta$ is usually unknown, and therefore in this talk I present a numerical study of the performance of several heuristic (i.e., noise-level-free) parameter choice rules for total variation regularisation, both isotropic and anisotropic, with a focus on image denoising.
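By way of illustration (a minimal sketch under assumptions of my own, not necessarily one of the rules from the talk), the following Python snippet applies one classical noise-level-free rule, a quasi-optimality-type criterion, to the TV weight in scikit-image's Chambolle denoiser; the test image, noise level and grid are illustrative:

```python
import numpy as np
from skimage import data, restoration, util

# Illustrative sketch: a noise-level-free ("heuristic") quasi-optimality-style
# choice of the TV weight, picking the weight on a geometric grid that minimises
# the distance between consecutive reconstructions.

rng = np.random.default_rng(0)
clean = util.img_as_float(data.camera())
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

alphas = 0.01 * 1.5 ** np.arange(12)              # geometric grid of TV weights
denoised = [restoration.denoise_tv_chambolle(noisy, weight=a) for a in alphas]

# quasi-optimality-type indicator: distance between consecutive reconstructions
psi = [np.linalg.norm(denoised[j + 1] - denoised[j]) for j in range(len(alphas) - 1)]
j_star = int(np.argmin(psi))
print(f"chosen weight: {alphas[j_star]:.4f}")
print(f"PSNR at chosen weight: "
      f"{10 * np.log10(1.0 / np.mean((denoised[j_star] - clean) ** 2)):.2f} dB")
```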

This is a prelude, however, to the more ominous multi-parameter choice problem [2], illustrated through the example of semiblind deconvolution [1], in which one only has an approximation $k^\eta$ of the blurring kernel $k$, with $\|k-k^\eta\|\le\eta$ (and $\eta$ is known, hence the expression "semi"-blind). The functional we would like to minimise is then

$$ \frac{1}{2}\|k\ast x-y^\delta\|^2+\alpha\operatorname{TV}(x)+\beta\|k-k^\eta\|^2\to\min_{x,k}. $$ To quote a famous science-fiction film: "now there are two of them" ($\alpha$ and $\beta$, that is).
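With $x$ held fixed, the functional is quadratic in $k$, which gives one possible building block for an alternating scheme. Here is a hedged sketch of that single step under a periodic-convolution assumption (this is not claimed to be the algorithm of [1]):

```python
import numpy as np

# Hedged sketch, not the algorithm of [1]: with x fixed, minimising
# (1/2)||k * x - y||^2 + beta ||k - k_eta||^2 over k is a quadratic problem and,
# assuming periodic convolution, has a closed form in the Fourier domain:
#   K = (conj(X) Y + 2 beta K_eta) / (|X|^2 + 2 beta).
def kernel_update(x, y_delta, k_eta, beta):
    X, Y, K_eta = np.fft.fft2(x), np.fft.fft2(y_delta), np.fft.fft2(k_eta)
    K = (np.conj(X) * Y + 2.0 * beta * K_eta) / (np.abs(X) ** 2 + 2.0 * beta)
    return np.fft.ifft2(K).real

# The x-update (TV-regularised deconvolution with k fixed) would be delegated to
# any TV solver, and the two steps alternated.
```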

[1] A. Buccini, M. Donatelli, R. Ramlau, A Semiblind Regularization Algorithm for Inverse Problems with Application to Image Deblurring, SIAM Journal on Scientific Computing, 2018. https://epubs.siam.org/doi/10.1137/16M1101830.

[2] M. Fornasier, V. Naumova, S. V. Pereverzyev, Parameter Choice Strategies for Multipenalty Regularization, SIAM Journal on Numerical Analysis, 2014. https://epubs.siam.org/doi/10.1137/130930248.


 