Conference Agenda
Overview and details of the sessions of this conference.
Session Overview
S13 (5): Nonparametric and asymptotic statistics
Session Topics: 13. Nonparametric and asymptotic statistics
Presentations
1:40 pm - 2:05 pm
Convergence Rates for the Maximum A Posteriori Estimator in PDE-Regression Models with Random Design
Ruprecht-Karls-Universität Heidelberg, Germany
We consider the statistical inverse problem of recovering a parameter $\theta \in H^\alpha$ from data arising from a Gaussian regression model given by
$Y = \mathscr{G}(\theta)(Z) + \varepsilon$,
where $\mathscr{G}: \mathbb{L}^2 \to \mathbb{L}^2$ is a nonlinear forward map, $Z$ represents random design points, and $\varepsilon$ denotes Gaussian noise. Our estimation strategy is based on a least squares approach with $\|\cdot\|_{H^\alpha}$-constraints. We establish the existence of a least squares estimator $\hat{\theta}$ as a maximizer of a specified functional under Lipschitz-type assumptions on the forward map $\mathscr{G}$. A general concentration result is shown, which is used to prove consistency of $\hat{\theta}$ and establish upper bounds for the prediction error. The corresponding rates of convergence reflect not only the smoothness of the parameter of interest but also the ill-posedness of the underlying inverse problem. We apply this general model to the Darcy problem, where the recovery of an unknown coefficient function $f$ is the primary focus. For this example, we also provide specific rates of convergence for both the prediction and estimation errors. Additionally, we briefly discuss the applicability of the general model to other problems.
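As a schematic reading of the constrained least squares approach described above (the constraint radius $R$, the sample size $n$, and the empirical objective are our notation and are not taken from the abstract; the talk works with a maximized functional that is equivalent up to sign), the estimator can be written as
$\hat{\theta} \in \arg\min_{\theta \colon \|\theta\|_{H^\alpha} \le R} \, \frac{1}{n}\sum_{i=1}^{n} \bigl(Y_i - \mathscr{G}(\theta)(Z_i)\bigr)^2$,
where $(Y_i, Z_i)$, $i = 1, \dots, n$, denote i.i.d. copies of $(Y, Z)$.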
2:05 pm - 2:30 pm
Shift-Dispersion Decompositions of Wasserstein and Cramér Distances
1Technical University of Munich, Germany; 2Karlsruhe Institute of Technology, Germany; 3Heidelberg University
Divergence functions are measures of distance or dissimilarity between probability distributions that serve various purposes in statistics and applications. We propose decompositions of Wasserstein and Cramér distances, which compare two distributions by integrating over their differences in distribution or quantile functions, into directed shift and dispersion components. These components are obtained by dividing the differences between the quantile functions into contributions arising from shift and dispersion, respectively. Our decompositions add information on how the distributions differ in a condensed form and consequently enhance the interpretability of the underlying divergences. We show that our decompositions satisfy a number of natural properties and are unique in doing so in location-scale families. The decompositions allow us to derive sensitivities of the divergence measures to changes in location and dispersion, and they give rise to weak stochastic order relations that are linked to the usual stochastic and the dispersive order. Our theoretical developments are illustrated in two applications, where we focus on forecast evaluation of temperature extremes and on the design of probabilistic surveys in economics.
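As a purely numerical illustration of the shift-dispersion idea, here is a minimal Python sketch, assuming equally spaced quantile levels, the 1-Wasserstein distance, and a signed average quantile difference as the shift term; the function name and this particular split are illustrative assumptions, not the decomposition defined in the talk.

```python
import numpy as np

def shift_dispersion_sketch(x, y, n_grid=1000):
    """Hypothetical split of the empirical 1-Wasserstein distance between two
    samples into a directed shift term and a nonnegative remainder ("dispersion").
    The talk's formal decomposition of Wasserstein/Cramér distances may differ."""
    p = (np.arange(n_grid) + 0.5) / n_grid        # quantile levels on a regular grid
    diff = np.quantile(x, p) - np.quantile(y, p)  # difference of empirical quantile functions
    wasserstein1 = np.mean(np.abs(diff))          # approximates the integral of |F^-1 - G^-1|
    shift = np.mean(diff)                         # signed average quantile difference
    dispersion = wasserstein1 - np.abs(shift)     # remainder, >= 0 by the triangle inequality
    return wasserstein1, shift, dispersion

# usage: a location-and-scale difference between two normal samples
rng = np.random.default_rng(0)
print(shift_dispersion_sketch(rng.normal(0.0, 1.0, 5000), rng.normal(0.5, 1.5, 5000)))
```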
2:30 pm - 2:55 pm
Uncovering Intrinsic Decompositions: A Tool to Interpret Statistical Distances
Karlsruhe Institute of Technology (KIT), Germany
There is an increasing trend in the field of applied statistics away from only considering summary statistics towards considering entire distributions, especially in prediction tasks. While this allows for a more nuanced treatment of the given distribution or sample, e.g., by calculating some statistical distance measure between two distributions, its lack of interpretability is a considerable downside. In this talk, a decomposition of statistical distances is proposed, dividing them into easily interpretable components of location and dispersion as well as asymmetric and symmetric shape. The decomposition algorithm sequentially minimizes the distance that can be attained by changing only one of these characteristics in the considered distributions. For that, we use transformations that are invariant with respect to all characteristics other than the one we are interested in. These transformations follow directly from stochastic orders that are commonly used to define measures of location, dispersion, etc. This approach can be applied to all statistical distances and lets the chosen distance measure induce the measurement of the individual components. The decomposition is empirically illustrated using the comparison of historical and recent temperature data.
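The sequential minimization over one characteristic at a time can be mimicked numerically. Below is a hedged Python sketch, assuming the squared 2-Wasserstein distance on a quantile grid and only two transformation steps (a translation, then a rescaling around the mean); the function names, the chosen distance, and the scipy-based optimization are assumptions for illustration, not the talk's transformation classes derived from stochastic orders.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def w2_squared(x, y, n_grid=512):
    """Squared 2-Wasserstein distance between two samples via empirical quantiles."""
    p = (np.arange(n_grid) + 0.5) / n_grid
    return np.mean((np.quantile(x, p) - np.quantile(y, p)) ** 2)

def sequential_decomposition_sketch(x, y):
    """Hypothetical sequential scheme: remove as much distance as possible by a
    pure location shift of x, then by a pure rescaling around its mean; attribute
    the successive reductions to location and dispersion, the rest to shape."""
    total = w2_squared(x, y)
    # step 1: best translation of x towards y
    loc = minimize_scalar(lambda c: w2_squared(x + c, y))
    x_shift = x + loc.x
    # step 2: best rescaling of the shifted sample around its own mean
    m = x_shift.mean()
    disp = minimize_scalar(lambda s: w2_squared(m + s * (x_shift - m), y),
                           bounds=(1e-3, 1e3), method="bounded")
    return {"total": total,
            "location": total - loc.fun,
            "dispersion": loc.fun - disp.fun,
            "shape": disp.fun}

# usage: samples differing in mean, scale and skewness
rng = np.random.default_rng(0)
x, y = rng.normal(0.0, 1.0, 4000), 0.3 + 1.4 * rng.gamma(2.0, 1.0, 4000)
print(sequential_decomposition_sketch(x, y))
```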
2:55 pm - 3:20 pm
Unlinked regression under vanishing variance
1University of Heidelberg; 2Catholic University of Eichstätt-Ingolstadt
A standard problem in shape-constrained curve estimation is isotonic regression, where the regression function is non-decreasing and is estimated from observed data pairs $(x_i,Y_i)$, $i=1,\dots,n$. We remove the assumption of linked regression data, i.e., we do not know to which design point $x_j$ the response $Y_i$ belongs.
In this model, we study an estimator of the regression function that essentially relies on inverting the estimated distribution function. We derive convergence rates, under the assumption that the variance in the noise terms decays to zero at a suitable rate. Here, we distinguish both a kernel smoothed and an unsmoothed version of our estimator and argue when the smoothed version is superior. We also provide a local functional central limit theorem for the unsmoothed estimator. Finally, we present a numerical illustration supporting our results.
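A minimal Python sketch of the quantile-matching idea behind inverting the estimated distribution function, assuming the empirical design distribution and no smoothing; the function name, interface, and toy example are assumptions and do not reproduce the talk's estimator or its vanishing-variance analysis.

```python
import numpy as np

def unlinked_monotone_sketch(x_design, y_obs, x_eval):
    """Illustrative unsmoothed estimator for unlinked monotone regression:
    m_hat(x) = (generalized inverse of the empirical cdf of Y) evaluated at the
    empirical design cdf F_n(x). The pairing of x's and Y's is never used."""
    x_sorted = np.sort(np.asarray(x_design, dtype=float))
    y_sorted = np.sort(np.asarray(y_obs, dtype=float))
    n, m = len(x_sorted), len(y_sorted)
    # empirical design distribution function F_n at the query points
    u = np.searchsorted(x_sorted, np.asarray(x_eval, dtype=float), side="right") / n
    # generalized inverse of the empirical distribution function of Y at level u
    idx = np.clip(np.ceil(u * m).astype(int) - 1, 0, m - 1)
    return y_sorted[idx]

# usage: monotone truth m(x) = x^2 on [0, 1] with small (vanishing) noise
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 2000)
y = x ** 2 + 0.01 * rng.normal(size=2000)     # only the multiset of responses is used
print(unlinked_monotone_sketch(x, y, [0.25, 0.5, 0.75]))   # roughly [0.0625, 0.25, 0.5625]
```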