Conference Agenda
Overview and details of the sessions of this conference. Select a date or location to show only the sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).
Location indicates the building first, then the room number.
Click on "Floor plan" for orientation in the buildings and on the campus.
Session Overview

Date: Monday, 10/Mar/2025
4:00 pm - 7:00 pm | Conference Office | Location: Foyer Potthoff Bau | Floor plan

Date: Tuesday, 11/Mar/2025
8:00 am - 9:00 am | Conference Office | Location: Foyer Potthoff Bau | Floor plan
9:00 am - 9:15 am | Opening Ceremony | Location: POT 81 | Floor plan
9:15 am - 10:15 am | Plenary I | Location: POT 81 | Floor plan | Session Chair: Claudia Kirch
9:15 am - 10:15 am
Causality in Dynamical Systems
Causal models can help us with the following two tasks: (1) they can predict how a real-world system reacts under an active perturbation; (2) they suggest ways to robustly predict a response variable under a distribution shift, that is, in a scenario where training and test distributions differ. Many causal methods and theoretical results have been developed for settings where data follow an i.i.d. structure.
Often, however, data come from a dynamical system whose temporal structure cannot be ignored. In this talk, we argue that considering time-dependence comes not only with technical difficulties but also with benefits: we develop causal methods that have no direct correspondence in the i.i.d. world and show how they can be used, for example, for separating the effects of internal variability and external forcing in Earth system science.
10:15 am - 10:45 am | Coffee Break | Location: Foyer Potthoff Bau | Floor plan
10:15 am - 10:45 am | Coffee Break | Location: POT 168 | Floor plan

10:45 am - 12:25 pm | S 1 (1): Machine Learning | Location: POT 06 | Floor plan | Session Chairs: Merle Behr, Alexandra Carpentier
10:45 am - 11:10 am
Deep Learning of Multivariate Extremes via a Geometric Representation 1TU Dresden, Germany; 2ScaDS.AI; 3University of Arkansas; 4University of Edinburgh
Geometric representations for multivariate extremes, derived from the shapes of scaled sample clouds and their so-called limit sets, are becoming an increasingly popular modelling tool. Recent work has shown that limit sets connect several existing extremal dependence concepts and offer a high degree of practical utility for inference of multivariate extremes. However, existing geometric approaches are limited to low-dimensional settings, and some of these techniques make strong assumptions about the form of the limit set.
In this talk, we introduce DeepGauge - the first deep learning approach for limit set estimation. By leveraging the predictive power and computational scalability of neural networks, we construct asymptotically-justified yet highly flexible semi-parametric models for extremal dependence. Unlike existing techniques, DeepGauge can be applied in high-dimensional settings and does not impose any assumptions on the resulting limit set estimates. Moreover, we also introduce a range of novel theoretical results pertaining to the geometric framework and our limit set estimator. We showcase the efficacy of our deep approach by modelling the complex extremal dependence between metocean variables sampled from the North Sea.
11:10 am - 11:35 am
Affine Invariance in Continuous-Domain Convolutional Neural Networks University of Hamburg
The notion of group invariance helps neural networks recognize patterns and features under geometric transformations. Indeed, it has been shown that group invariance can substantially improve deep-learning performance in practice, where such transformations are very common. This research studies affine invariance of continuous-domain convolutional neural networks. While existing research has so far considered only isometric or similarity invariance, we focus on the full structure of affine transforms generated by the general linear group $\mathrm{GL}_2(\mathbb{R})$. We introduce a criterion to assess the similarity of two input signals under affine transformations. Then, we investigate the convolution of lifted signals and compute the corresponding integration over $G_2$ (the affine Lie group $\mathbb{R}^2 \ltimes \mathrm{GL}_2(\mathbb{R})$). Our research could eventually extend the scope of geometric transformations that practical deep-learning pipelines can handle.
11:35 am - 12:00 pm
PAC-Bayesian optimization for deep stochastic neural networks using spatio-temporal data TU Chemnitz, Germany
Raster data cubes collect measurements of a spatio-temporal random field at regularly spaced points and equidistant times. We design an ensemble forecasting methodology for cubes generated by an influenced mixed moving average field with finite second-order moments. The latter does not have, in general, a known predictive distribution. We then use the setting and the causal embedding discussed in [1] and employ a deep (stochastic) neural network to determine ensemble forecasts. The distribution of the network parameters is assumed to be Gaussian and is determined by minimizing the PAC-Bayesian bound for $\theta$-lex weakly dependent data proven in [1].
[1] I.V. Curato, O. Furat, L. Proietti and B. Ströh (2024): Mixed moving average field guided learning for spatio-temporal data. arXiv:2301.00736.
10:45 am - 12:25 pm | S 2 (1): Spatial stochastics, disordered media, and complex networks | Location: POT 251 | Floor plan | Session Chairs: Chinmoy Bhattacharjee, Benedikt Jahnel
10:45 am - 11:10 am
Survival of an infection under dilutions in space and time 1TU Braunschweig; 2WIAS Berlin
We study survival and extinction of a long-range infection process on a diluted one-dimensional lattice in discrete time. The infection can spread to distant vertices according to a Pareto distribution; however, spreading is also prohibited at random times. We prove a phase transition in the recovery parameter via block arguments. This contributes to a line of research on directed percolation with long-range correlations in nonstabilizing random environments.
11:10 am - 11:35 am
On the contact process on dynamical random graphs with degree dependent dynamics University of Göttingen, Germany
Recently, there has been increasing interest in interacting particle systems on evolving random graphs or, more generally, in time-evolving random environments. In this talk we present results on the contact process in an evolving edge random environment on infinite (random) graphs; in particular, we consider (infinite) Galton-Watson trees as the underlying random graph. We focus on an edge random environment given by a dynamical percolation whose opening and closing rates and probabilities are degree dependent. Our results concern the dependence of the critical infection rate for weak and strong survival on the random environment.
11:35 am - 12:00 pm
Meeting times via singular value decomposition 1Julius-Maximilians-Universität Würzburg, Germany; 2Universität Duisburg-Essen
We suggest a non-asymptotic matrix perturbation-theoretic approach to get sharp bounds on the expected meeting time of random walks on large (possibly random) graphs. We provide a formula for the expected meeting time in terms of the singular value decomposition of the diagonally killed generator of a pair of independent random walks, which we view as a perturbation of the generator. Employing a rank-one approximation of the diagonally killed generator as the proof of concept, we work out sharp bounds on the expected meeting time of simple random walks on sufficiently dense Erdős-Rényi random graphs.
12:00 pm - 12:25 pm
A Random Walk Approach to Broadcasting on Random Recursive Trees Johannes Gutenberg-University Mainz, Germany
In the broadcasting problem on trees, a $\{-1,1\}$-message originating in an unknown node is passed along the tree with a certain error probability $q$. The goal is to estimate the original message without knowing the order in which the nodes were informed. We show a connection to random walks with memory effects and use this to develop a novel approach to analysing the majority estimator on random recursive trees. With this powerful approach, we study the entire group of very simple increasing trees as well as shape-exchangeable trees in a unified way. This also extends Addario-Berry et al. (2022), who investigated this estimator for uniform and linear preferential attachment random recursive trees.
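The broadcasting setup and the majority estimator can be sketched with a small Monte Carlo simulation (illustrative only; the uniform-attachment rule and all parameter values are assumptions, and this is not the authors' random-walk approach):

```python
import numpy as np

def majority_estimate_success(n=200, q=0.1, trials=2000, seed=0):
    """Monte Carlo estimate of the success probability of the majority
    estimator for broadcasting on a uniform random recursive tree:
    each new node copies its parent's bit, flipped with probability q."""
    rng = np.random.default_rng(seed)
    correct = 0
    for _ in range(trials):
        root = rng.choice([-1, 1])             # unknown original message
        bits = [root]
        for k in range(1, n):
            parent = rng.integers(0, k)        # uniform attachment
            flip = -1 if rng.random() < q else 1
            bits.append(bits[parent] * flip)
        guess = 1 if sum(bits) > 0 else -1     # majority vote, ties to +1
        correct += (guess == root)
    return correct / trials

p_hat = majority_estimate_success()
```

For small error probability $q$, the majority vote recovers the root message well above chance level.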
10:45 am - 12:25 pm | S 5 (1): Stochastic modelling in life sciences | Location: POT 13 | Floor plan | Session Chair: Matthias Birkner
10:45 am - 11:10 am
Fitness valleys, pit stops and changing environment 1Universität Bonn - Institute for Applied Mathematics, Germany; 2St. Olaf College, Minnesota
We consider a stochastic individual-based model of adaptive dynamics for an asexually reproducing population with mutation. To depict repeated changes of the environment, all model parameters vary over time as piecewise constant and periodic functions, on an intermediate time scale between those of stabilization of the resident population (fast) and exponential growth of mutants (slow). This can be interpreted biologically as the influence of seasons or the variation of drug concentration during medical treatment. The typical evolutionary behaviour can be studied by looking at limits of large populations and rare mutations.
Analysing the crossing of fitness valleys in a changing environment leads to various interesting phenomena on different time scales, which depend on the length of the valley. By carefully examining the influence of the changing environment on each time scale, we are able to determine the effective growth rates of emergent mutants and their ability to invade the resident population.
Eventually, we investigate the special situation of pit stops, where single intermediate mutants within the valley have phases of positive fitness and can thus grow to a diverging size before dying out again. This significantly accelerates the traversal of the valley and leads to an interesting new time scale.
This is joint work with Anna Kraut.
11:10 am - 11:35 am
The two-size Wright--Fisher model: an analysis via (uniform) renewal theory 1University of Münster; 2BOKU University; 3Bielefeld University
Consider a population with two types of (one-dimensional) individuals, where type is interpreted as size (length): large individuals are of size $1$ and small individuals are of fixed size $\vartheta$, $\vartheta \in (0,1)$. Each generation has an available space of length $R$.
To form a new generation, individuals from the current generation are sampled one by one, and if there is at least some available space, they reproduce and their offspring are added to the new generation. The probability of sampling an individual whose offspring is small is given by $\mu^R(x)$, where $x$ is the proportion of small individuals in the current generation. We call this stochastic model in discrete time the two-size Wright--Fisher model. The function $\mu^R$ can be used to model mutation and/or various forms of frequency-dependent selection.
Denoting by $(X_t^R)_{t \geq 0}$ the frequency process of small individuals, we show convergence on the evolutionary time scale $Rt$ to the solution of the SDE
$$\mathrm{d} X_t = \big(-(1-\vartheta) X_t(1-X_t)+\mu(X_t)\big)\mathrm{d} t + \sqrt{X_t(1-X_t)(1-(1-\vartheta) X_t)}\, \mathrm{d} B_t,$$
where $\mu(x)=\lim_{R \to \infty} R(\mu^R(x)-x)$, and $B$ is a standard Brownian motion.
To prove this statement, the dynamics inside one generation of the model are considered as a renewal process, with the population size as the first-passage time $\tau(R)$ above level $R$. Methods from (uniform) renewal theory are applied and in particular a uniform version of Blackwell's renewal theorem (for binary, non-arithmetic random variables) is established.
In order to understand the underlying genealogical picture of the model, different concepts of duality are used.
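The limiting SDE above can be simulated with a standard Euler-Maruyama scheme. The sketch below is illustrative only; the choice of $\mu$, $\vartheta$ and all discretisation parameters are assumptions, not values from the talk:

```python
import numpy as np

def simulate_limit_sde(x0=0.5, theta=0.5, mu=lambda x: 0.1 * (1 - x),
                       T=10.0, n_steps=10_000, seed=0):
    """Euler-Maruyama sketch of
    dX = (-(1-theta) X (1-X) + mu(X)) dt
         + sqrt(X (1-X) (1 - (1-theta) X)) dB
    for the frequency X of small individuals.
    The default mu is a hypothetical mutation/selection term."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = x0
    path = [x]
    for _ in range(n_steps):
        drift = -(1 - theta) * x * (1 - x) + mu(x)
        var = max(x * (1 - x) * (1 - (1 - theta) * x), 0.0)  # clip rounding noise
        x += drift * dt + np.sqrt(var * dt) * rng.standard_normal()
        x = min(max(x, 0.0), 1.0)  # keep the frequency in [0, 1]
        path.append(x)
    return np.array(path)

path = simulate_limit_sde()
```

Note that the diffusion coefficient vanishes at $x = 0$ and $x = 1$, so the frequency stays in $[0, 1]$ up to discretisation error; the explicit clipping handles that error.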
11:35 am - 12:00 pm
Coalescents with migration in the moderate regime 1Bielefeld University; 2BOKU University; 3University of Vienna
Multi-type models have recently experienced renewed interest in the stochastic modeling of evolution. This is partially due to their mathematical analysis often being more challenging than their single-type counterparts; an example of this is the site-frequency spectrum of a colony-based population with moderate migration.
In this talk, we model the genealogy of such a population via a multi-type coalescent starting with $N(K)$ colored singletons with $d \geq 2$ possible colors (colonies). The process is described by a continuous-time Markov chain with values on the colored partitions of the colored integers in $\{1, \ldots, N(K)\}$; blocks of the same color coalesce at rate $1$, while they are also allowed to change color at a rate proportional to $K$ (migration).
Given this setting, we study the asymptotic behavior, as $K\to\infty$ at small times, of the vector of empirical measures, whose $i$-th component keeps track of the blocks of color $i$ at time $t$ and of the initial coloring of the integers composing each of these blocks. We show that, in the proper time-space scaling, it converges to a multi-type branching process (case $N(K) \sim K$) or a multi-type Feller diffusion (case $N(K) \gg K$). Using this result, we derive an applicable representation of the site-frequency spectrum.
This is joint work with Fernando Cordero and Emmanuel Schertzer.
12:00 pm - 12:25 pm
Consistency and Central Limit Results in the (Recent) Admixture Model University of Freiburg, Germany
The Admixture Model describes the probability that an individual $i$ possesses zero, one, or two copies of allele $j \in \{1, \ldots, J\}$ at marker $m$, based on the frequencies $p_{k,j,m}$ of allele $j$ in population $k$ at marker $m$ and the ancestry proportions $q_{i,k}$, which represent the fraction of individual $i$'s genome inherited from population $k$. A key extension of this model is the Recent Admixture Model, which generalizes the framework by accounting for the ancestry of the individual’s parents, rather than just the individual. This allows for a more refined representation of genetic inheritance in recently admixed populations. We denote the data of individual $i$ at marker $m$ and allele $j$ by $X_{i,j,m}$ and assume (in the Admixture Model)
$$X_{i, \cdot, m} \sim \mathrm{Multi}\big(2, (\langle q_{i, \cdot}, p_{\cdot, j, m}\rangle)_{j = 1, \ldots, J}\big).$$
In the Recent Admixture Model, we make the same distributional assumption, but for the parents rather than for the individual.
In both the Admixture and Recent Admixture Models, two settings are typically considered: the supervised setting, where the allele frequencies $p_{k,j,m}$ are known a priori, and the unsupervised setting, where these frequencies must be estimated from the data. This study focuses on the theoretical properties of the maximum likelihood estimators (MLEs) for both models, in both contexts. Specifically, it examines the consistency and central limit behavior of these estimators, which are important for understanding the reliability and accuracy of ancestry inference. The MLEs in these models can be efficiently computed using popular algorithms such as STRUCTURE or ADMIXTURE, which are widely used in population genetics.
Since the standard theory from Hoadley concerning the consistency of MLEs for non-identically distributed random variables is not directly applicable to our case, we adapted his proof to show consistency in the supervised setting under weak constraints. In the unsupervised setting, the MLEs are usually non-unique (see Heinzel, Baumdicker, Pfaffelhuber, "Revealing the range of maximum likelihood estimates in the admixture model", bioRxiv). Hence, we give constraints that ensure, even in this setting, the consistency of the MLEs. In addition, we prove that the constraints imposed are indeed necessary in all cases. Our results on the consistency of the estimators form the basis for establishing central limit theorems in both the supervised and unsupervised settings. A key aspect of our analysis is the comparison of the asymptotic behavior of the estimators when the true parameter lies on the boundary of the parameter space versus when it is located in the interior. From a mathematical standpoint, the boundary case is particularly intriguing. In this case, we demonstrate that the asymptotic variance of the estimators is much smaller compared to the case where the parameter space is open, highlighting the impact of boundary constraints on the efficiency of the estimators.
We apply our theoretical results to simulated data and to data from the 1000 Genomes Project; in particular, we see that the central limit results in the (Recent) Admixture Model can be used to estimate the variance of the estimator well, even for a small number of markers and individuals. Finally, we demonstrate the usefulness of our results in application settings, e.g. in forensic genetics, to select genetic markers and to find an optimal marker set.
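The sampling assumption of the Admixture Model can be illustrated with a direct simulation; the example values of $q$ and $p$ below are assumptions for illustration, not data from the study:

```python
import numpy as np

def sample_genotypes(q, p, seed=0):
    """Draw allele counts X[i, j, m] ~ Multinomial(2, <q_{i,.}, p_{.,j,m}>)
    in the (supervised) Admixture Model.
    q: (n, K) ancestry proportions, each row sums to 1.
    p: (K, J, M) allele frequencies, p[k, :, m] sums to 1 for every k, m."""
    rng = np.random.default_rng(seed)
    n, K = q.shape
    _, J, M = p.shape
    X = np.empty((n, J, M), dtype=int)
    for i in range(n):
        for m in range(M):
            probs = q[i] @ p[:, :, m]       # length-J mixture of allele freqs
            X[i, :, m] = rng.multinomial(2, probs)
    return X

# hypothetical example: n=2 individuals, K=2 populations, J=2 alleles, M=2 markers
q = np.array([[0.7, 0.3],
              [0.2, 0.8]])
p = np.array([[[0.9, 0.5], [0.1, 0.5]],
              [[0.2, 0.4], [0.8, 0.6]]])
X = sample_genotypes(q, p)
```

Each individual carries two allele copies per marker, so the counts over $j$ always sum to 2, matching the Multinomial(2, ...) assumption.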
10:45 am - 12:25 pm | S 6 (1): Stochastic modelling in natural sciences | Location: POT 112 | Floor plan | Session Chair: Michael Kupper
10:45 am - 11:10 am
Reconstruction of inhomogeneous turbulence based on stochastic Fourier-type integrals 1Universität Kassel, Germany; 2Universität Trier, Germany; 3Fraunhofer ITWM, Kaiserslautern, Germany
We develop and analyze a random field model for the reconstruction of inhomogeneous turbulence from characteristic flow quantities provided by $k$-$\varepsilon$ simulations. The model is based on stochastic integrals that combine moving average and Fourier-type representations in time and space, respectively, where both the time integration kernel and the spatial energy spectrum depend on the macroscopically varying characteristic quantities.
The structure of the model is derived from standard spectral representations of homogeneous fields by means of a two-scale approach in combination with specific stochastic integral transformations. Our approach allows for a rigorous analytical verification of the desired statistical properties and is accessible to numerical simulation.
11:10 am - 11:35 am
Stability of travelling wave solutions to reaction-diffusion equations driven by additive noise with Hölder continuous paths 1TU Berlin; 2Berlin Mathematical School
We investigate stability of travelling wave solutions to a class of reaction-diffusion equations perturbed by infinite-dimensional additive noise with Hölder continuous paths, covering in particular fractional Wiener processes with general Hurst parameter. In the latter example, we obtain explicit error bounds on the maximal distance from the solution of the stochastic reaction-diffusion equation to the set of travelling wave fronts in terms of the Hurst parameter and the spatial regularity for small noise amplitude. Our bounds can be optimised for short times in terms of the Hurst parameter and for large times in terms of the spatial regularity of the noise covariance of the driving fractional Wiener process.
11:35 am - 12:00 pm
Analysis of anomalous diffusion processes with random parameters Wrocław University of Science and Technology, Poland
In this talk we discuss several results about anomalous diffusion processes with random parameters, inspired by recent single-particle-tracking biological experiments. We focus on three processes: fractional Brownian motion with random Hurst exponent (FBMRE), Riemann-Liouville fractional Brownian motion with random Hurst exponent (RLFBMRE), and scaled Brownian motion with random anomalous diffusion exponent (SBMRE). In all cases, we present basic probabilistic properties such as the transition density, the $q$-th moment of the absolute value of the process, the autocovariance function, and the expectation of the time-averaged mean squared displacement. Moreover, we analyze the ergodic properties of all three processes. Additionally, for SBMRE we analyze its martingale properties and a law of large numbers. Together with the theoretical analysis, we provide a numerical analysis of the obtained results. The talk is based on [1] and [2].
[1] H. Woszczek, A. Chechkin, A. Wyłomańska (2025): Scaled Brownian motion with random anomalous diffusion exponent. Communications in Nonlinear Science and Numerical Simulation 140(1), art. 108388, pp. 1-27.
[2] H. Woszczek, A. Wyłomańska, A. Chechkin, Riemann-Liouville fractional Brownian motion with random Hurst exponent, preprint, arXiv:2410.11546
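For illustration, fractional Brownian motion with a random Hurst exponent can be sampled with the exact Cholesky method, drawing $H$ once per path; the distribution of $H$ used below is an assumption, not the one studied in [1] or [2]:

```python
import numpy as np

def fbm_random_hurst(n=256, T=1.0, hurst_sampler=None, seed=0):
    """Sample one path of fractional Brownian motion whose Hurst exponent
    is itself random (drawn once per path), via exact Cholesky sampling
    of the fBm covariance C(t, s) = (t^{2H} + s^{2H} - |t - s|^{2H}) / 2.
    hurst_sampler is a hypothetical stand-in for the law of H."""
    rng = np.random.default_rng(seed)
    if hurst_sampler is None:
        hurst_sampler = lambda: rng.uniform(0.2, 0.8)  # assumed law of H
    H = hurst_sampler()
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # jitter for stability
    increments = L @ rng.standard_normal(n)
    return H, np.concatenate(([0.0], increments))    # path starts at 0

H, path = fbm_random_hurst()
```

Averaging moments over many such paths (with fresh draws of $H$) reproduces the mixture behaviour that distinguishes FBMRE from ordinary fBm with a fixed exponent.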
12:00 pm - 12:25 pm
Some stochastic aspects of stochastic elliptic inverse problems TU Bergakademie Freiberg, Germany
Stochastic elliptic problems arise mainly by replacing deterministic parameters in elliptic problems with certain random parameters. One issue in the consideration of random equations is then the measurability of the desired solutions. Since different measurability concepts exist, it is important to use the appropriate concept for each problem; mainly the Borel, weak and strong measurability concepts are of interest.
In the talk these measurability concepts are presented and some of the relations between them are discussed. This is important because non-separable Banach spaces also play a certain role in elliptic problems, and in these spaces the measurability concepts mentioned above do not necessarily coincide. Based on these findings, measurability properties of solutions of elliptic problems are investigated.
Furthermore, it will be shown by example which stochastic elliptic inverse problems can be treated as abstract elliptic inverse problems and which such stochastic inverse problems require a specific stochastic investigation.
10:45 am - 12:25 pm | S 7 (1): Stochastic processes: theory, statistics and numerics | Location: POT 51 | Floor plan | Session Chairs: Andreas Neuenkirch, Jakob Söhl
10:45 am - 11:10 am
Uniform ergodicity for the geodesic slice sampler on compact Riemannian manifolds Universität Passau, Germany
Distributions with non-Euclidean domains make it possible to incorporate more knowledge into mathematical models. At the same time, analysing such models, e.g. with Bayesian inference, brings the need to (approximately) sample from distributions on Riemannian manifolds. To this end, we can use the geodesic slice sampler, a slice-sampling-based Markov chain Monte Carlo method that employs geodesics. Establishing performance guarantees in a compact setting, we show that the geodesic slice sampler is uniformly ergodic for target distributions on compact Riemannian manifolds that have a bounded density with respect to the Riemannian measure.
11:10 am - 11:35 am
Multilevel Picard approximations for high-dimensional semilinear second-order PDEs with Lipschitz nonlinearities Bielefeld University, Germany
The recently introduced full-history recursive multilevel Picard (MLP) approximation methods have turned out to be quite successful in the numerical approximation of solutions of high-dimensional nonlinear PDEs. In particular, there are mathematical convergence results in the literature which prove that MLP approximation methods overcome the curse of dimensionality in the numerical approximation of nonlinear second-order PDEs in the sense that the number of computational operations of the proposed MLP approximation method grows at most polynomially in both the reciprocal of the prescribed approximation accuracy and the PDE dimension. However, in each of the convergence results for MLP approximation methods in the literature it is assumed that the coefficient functions in front of the second-order differential operator are affine-linear. In particular, until today there is no result in the scientific literature which proves that any semilinear second-order PDE with a general time horizon and a non-affine-linear coefficient function in front of the second-order differential operator can be approximated without the curse of dimensionality. The key contribution of this work is to overcome this obstacle and to propose and analyze a new type of MLP approximation method for semilinear second-order PDEs with possibly nonlinear coefficient functions in front of the second-order differential operators. In particular, the main result proves that this new MLP approximation method does indeed overcome the curse of dimensionality in the numerical approximation of semilinear second-order PDEs.
arXiv:2009.02484 and arXiv:2204.08511
11:35 am - 12:00 pm
Bounding adapted Wasserstein metrics 1Carnegie Mellon University, United States of America; 2Stanford University, United States of America
The Wasserstein distance $\mathcal{W}_p$ is an important instance of an optimal transport cost. Its numerous mathematical properties as well as applications to various fields such as mathematical finance and statistics have been well studied in recent years. The adapted Wasserstein distance $\mathcal{A}\mathcal{W}_p$ extends this theory to laws of discrete time stochastic processes in their natural filtrations, making it particularly well suited for analyzing time-dependent stochastic optimization problems.
While the topological differences between $\mathcal{A}\mathcal{W}_p$ and $\mathcal{W}_p$ are well understood, their differences as metrics remain largely unexplored beyond the trivial bound $\mathcal{W}_p\lesssim \mathcal{A}\mathcal{W}_p$. This paper closes this gap by providing upper bounds of $\mathcal{A}\mathcal{W}_p$ in terms of $\mathcal{W}_p$ through investigation of the smooth adapted Wasserstein distance. Our upper bounds are explicit and are given by a sum of $\mathcal{W}_p$, Eder's modulus of continuity and a term characterizing the tail behavior of measures. As a consequence, upper bounds on $\mathcal{W}_p$ automatically hold for $\mathcal{AW}_p$ under mild regularity assumptions on the measures considered. A particular instance of our findings is the inequality $\mathcal{A}\mathcal{W}_1\le C\sqrt{\mathcal{W}_1}$ on the set of measures that have Lipschitz kernels.
Our work also reveals how smoothing of measures affects the adapted weak topology. In fact, we find that the topology induced by the smooth adapted Wasserstein distance exhibits a non-trivial interpolation property, which we characterize explicitly: it lies in between the adapted weak topology and the weak topology, and the inclusion is governed by the decay of the smoothing parameter.
This talk is based on joint work with Jose Blanchet, Martin Larsson and Jonghwa Park.
10:45 am - 12:25 pm | S 7 (2): Stochastic processes: theory, statistics and numerics | Location: POT 151 | Floor plan | Session Chairs: Andreas Neuenkirch, Jakob Söhl
10:45 am - 11:10 am
Bayesian inference in semi-linear SPDEs using spatial information 1Humboldt Universität zu Berlin; 2Imperial College London
We consider Bayesian non-parametric estimation of the reaction term in a semi-linear parabolic SPDE. Consistency is achieved by making use of the spatial ergodicity of the SPDE while the time horizon is fixed. The analysis of the estimation error requires new concentration results for spatial averages of transformations of the SPDE, which are based on the combination of the Clark-Ocone formula with bounds on the marginal densities. The general methodology is exemplified in the asymptotic regime where the diffusivity level and the noise level of the SPDE tend to zero in a realistic coupling.
11:10 am - 11:35 am
Parameter estimation for the stochastic Burgers equation Università di Pavia, Italy
We estimate the diffusivity (or drift) in the stochastic Burgers equation driven by additive space-time white noise. Our estimator is based on local measurements, i.e., we assume that the solution is measured locally in space and over a finite time interval. Such an estimator was introduced in [2] for linear SPDEs with additive noise; [3] also considered the multiplicative-noise case, and [1] applied it to a large class of semilinear SPDEs, namely to the stochastic Burgers equation with "smooth" noise. Our work contributes to this topic and extends these results.
In our talk, we first establish the regularity of both the linear part of the solution (i.e., the stochastic convolution) and the nonlinear part. Then we show that our proposed estimator is strongly consistent and asymptotically normal.
This is a joint work with Professor Enrico Priola.
References:
[1] Altmeyer, R., Cialenco, I., Pasemann, G., (2023): Parameter estimation for semilinear SPDEs from local measurements. Bernoulli 29(3),
2035–2061.
[2] Altmeyer, R., Reiss, M., (2021): Nonparametric estimation for linear SPDEs from local measurements. Annals of Applied Probability 31(1),
1–38.
[3] Janak, J., Reiss, M., (2024): Parameter estimation for the stochastic heat equation with multiplicative noise from local measurements. Stochastic Processes and their Applications 175.
11:35 am - 12:00 pm
Parameter estimation in hyperbolic linear SPDEs from multiple measurements HU-Berlin, Germany
The coefficients of elastic and dissipative operators in a linear hyperbolic SPDE are jointly estimated using multiple spatially localised measurements. As the resolution level of the observations tends to zero, we establish the asymptotic normality of an augmented maximum likelihood estimator. The rate of convergence for the dissipative coefficients matches rates in related parabolic problems, whereas the rate for the elastic parameters also depends on the magnitude of the damping. The analysis of the observed Fisher information matrix relies upon the asymptotic behaviour of rescaled M,N-functions generalising the operator sine and cosine families appearing in the undamped wave equation. In the undamped case, the observed Fisher information is intrinsically related to the kinetic energy within a deterministic wave equation and the notion of Riemann-Lebesgue operators.
10:45 am - 12:25 pm | S 8 (1): Finance, insurance and risk: Modelling | Location: POT 361 | Floor plan | Session Chairs: Peter Hieber, Frank Seifried
10:45 am - 11:10 am
Should I invest in the market portfolio? - A parametric approach Universität zu Kiel, Germany
This study suggests a parsimonious stationary diffusion model for the dynamics of stock prices relative to the entire market. Its aim is to contribute to the question of how to choose the relative weights in a diversified portfolio and, in particular, whether the market portfolio behaves close to optimally in terms of the long-term growth rate. Specifically, we introduce the elasticity bias as a measure of the market portfolio's suboptimality. We rely heavily on the observed long-term stability of the capital distribution curve, which also served as a starting point for Stochastic Portfolio Theory in the sense of Fernholz.
11:10 am - 11:35 am
Pathwise stability of log-optimal portfolios 1Durham University, United Kingdom; 2Universität Mannheim, Germany; 3ShanghaiTech University, China
Classical approaches to optimal portfolio selection problems are based on probabilistic models for the asset returns or prices. However, it is by now well observed that the performance of optimal portfolios is highly sensitive to model misspecification. To account for various types of model risk, robust and model-free approaches have gained increasing importance in portfolio theory. In this talk, we develop a pathwise framework and methodology to analyze the stability of well-known 'optimal' portfolios in local volatility models under model uncertainty. In particular, we study the pathwise stability of the classical log-optimal portfolio with respect to the model parameters and investigate the pathwise error created by trading with respect to a time-discretized version of the log-optimal portfolio.
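For context (a textbook fact, not the speakers' pathwise analysis): in a multivariate Black-Scholes model with constant drift vector $\mu$, volatility matrix $\sigma$ and zero interest rate, the log-optimal fractions invested in the risky assets are $(\sigma\sigma^\top)^{-1}\mu$. A minimal sketch with hypothetical parameter values:

```python
import numpy as np

def log_optimal_weights(mu, sigma):
    """Log-optimal (growth-optimal) portfolio weights in a multivariate
    Black-Scholes model: pi* = (sigma sigma^T)^{-1} mu, zero interest rate."""
    cov = sigma @ sigma.T                 # instantaneous covariance matrix
    return np.linalg.solve(cov, mu)       # solve cov @ pi = mu

# hypothetical two-asset market
mu = np.array([0.08, 0.05])
sigma = np.array([[0.20, 0.00],
                  [0.05, 0.15]])
pi = log_optimal_weights(mu, sigma)
```

The sensitivity of $\pi^\ast$ to the inputs $\mu$ and $\sigma$ is exactly the kind of model-misspecification risk the talk addresses in a pathwise setting.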
11:35 am - 12:00 pm
Sufficient Conditions for Utility Functions in Robust Utility Optimization RPTU Kaiserslautern-Landau, Germany
Traditional portfolio optimization methods often rely on precise probabilistic models, which may be inadequate for reflecting the full extent of uncertainty present in financial markets. In response, robust optimization approaches have emerged, focusing on worst-case scenarios by considering a set of plausible probability measures instead of relying on a single model. The objective of this research is to develop a general framework for robust utility maximization. We introduce a novel assumption on utility functions that guarantees certain desirable properties, which allow us to derive a minimax result in general settings. This assumption is satisfied for log and power utility functions. The minimax result guarantees the existence of optimal strategies, particularly in continuous-time financial markets with uncertainty in both drift and volatility, without the need for a dominating reference measure. We compare our setting with similar approaches from literature.
12:00 pm - 12:25 pm
Robust Utility Maximization in Continuous Time: Convergence and Updating the Uncertainty Sets RPTU Kaiserslautern-Landau, Germany
In financial markets, simple portfolio strategies often outperform more sophisticated optimized ones. For example, in a one-period setting the equal-weight or $1/N$-strategy often provides more stable results than mean-variance-optimal strategies. This is due to the estimation error for the mean and can be rigorously explained by showing that for increasing uncertainty on the means the equal-weight strategy becomes optimal, which is due to its robustness. In earlier work, we extended this result to continuous-time strategies in a multivariate Black-Scholes type market. To this end we derived optimal trading strategies for maximizing expected utility of terminal wealth under CRRA utility when we have Knightian uncertainty on the drift, meaning that the only information is that the drift parameter lies in an uncertainty set. The investor takes this into account by considering the worst possible drift within this set. We showed that indeed a uniform strategy is asymptotically optimal when uncertainty increases. After presenting new results on the convergence, we then focus on a financial market with a stochastic drift process in view of uncertainty. We combine the worst-case approach with filtering techniques by defining an ellipsoidal uncertainty set based on the filters. We demonstrate that investors need to choose a robust strategy to profit from additional information.
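As a toy illustration of why growing drift uncertainty pushes the investor toward more conservative positions, the sketch below computes the classical Merton fraction and a worst-case version over an interval of drifts. This is a minimal single-asset simplification of the multivariate, ellipsoidal setting in the talk; the function names and the interval form of the uncertainty set are our own assumptions.

```python
def merton_fraction(mu, r, sigma, gamma):
    # Classical Merton fraction for CRRA utility in a Black-Scholes market:
    # invest the proportion (mu - r) / (gamma * sigma^2) in the risky asset.
    return (mu - r) / (gamma * sigma ** 2)

def robust_fraction(mu, r, sigma, gamma, kappa):
    # Worst-case version: the drift is only known to lie in
    # [mu - kappa, mu + kappa], and the investor plans against the
    # least favourable drift in that interval.
    excess = mu - r
    worst = max(abs(excess) - kappa, 0.0)
    sign = 1.0 if excess >= 0 else -1.0
    return sign * worst / (gamma * sigma ** 2)
```

Once the uncertainty radius exceeds the excess return, the robust position collapses to zero; in the multivariate analogue, this shrinkage is the mechanism behind the asymptotic optimality of uniform strategies.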
|
10:45 am - 12:25 pm | S13 (1): Nonparametric and asymptotic statistics Location: ZEU 250 Floor plan Session Chair: Alexander Kreiss Session Chair: Leonie Selk |
|
10:45 am - 11:10 am
A Test of Independence over Periods of Time for Locally Stationary Processes Helmut Schmidt University, Germany
Testing for independence can be done either time-point-wise or, more thoroughly, over periods of time up to the whole observed time horizon. We therefore extend the testing procedure for locally stationary processes proposed in Beering (2021), which uses a characteristic-function-based weighted distance inspired by the distance covariance defined by Székely et al. (2007) and its use by Jentsch et al. (2020). The refined testing procedure allows time spans to be taken into consideration. As this test is supported by a bootstrap procedure, we present theoretical results for both the real-world and the bootstrap statistics. Additionally, we show the practical performance of the testing procedure via a simulation study covering both independent and dependent settings, and a real-data example from the field of structural health monitoring of infrastructure buildings.
References:
Beering, C. (2021). A functional central limit theorem and its bootstrap analogue for locally stationary processes with application to independence testing. Dissertation. Technische Universität Braunschweig.
Jentsch, C., Leucht, A., Meyer, M. and Beering, C. (2020). Empirical characteristic functions-based estimation and distance correlation for locally stationary processes. Journal of Time Series Analysis 41, 110-133.
Székely, G.J., Rizzo, M.L. and Bakirov, N.K. (2007). Measuring and testing dependence by correlation of distances. The Annals of Statistics 35, 2769-2794.
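For readers unfamiliar with the distance covariance of Székely et al. (2007) that underlies the weighted distance above, here is a plain-Python sketch of the sample (V-statistic) version for univariate data. This is only the classical i.i.d. building block, not the locally stationary test of the talk; helper names are ours.

```python
import math

def _dist_matrix(x):
    # Pairwise absolute-distance matrix of a univariate sample
    n = len(x)
    return [[abs(x[i] - x[j]) for j in range(n)] for i in range(n)]

def _double_center(d):
    # Subtract row means, column means, and add back the grand mean
    n = len(d)
    row = [sum(r) / n for r in d]
    col = [sum(d[i][j] for i in range(n)) / n for j in range(n)]
    tot = sum(row) / n
    return [[d[i][j] - row[i] - col[j] + tot for j in range(n)] for i in range(n)]

def distance_covariance(x, y):
    # Sample distance covariance: sqrt of the mean entrywise product
    # of the two double-centered distance matrices
    a = _double_center(_dist_matrix(x))
    b = _double_center(_dist_matrix(y))
    n = len(x)
    s = sum(a[i][j] * b[i][j] for i in range(n) for j in range(n)) / n ** 2
    return math.sqrt(max(s, 0.0))
```

The statistic is zero (in the population) exactly when the two variables are independent, which is what makes it attractive as a building block for independence tests.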
11:10 am - 11:35 am
Estimation for Markov Chains with periodically missing observations Texas A&M University / Universität Rostock
When we observe a stationary time series with observations missing at periodic time points, we can still estimate its marginal distribution well, but the dependence structure of the time series may not be recoverable at all, or the usual estimators may have much larger variance than in the fully observed case. We show how nonparametric estimators can often be improved by adding unbiased estimators. We consider a simple setting, first-order Markov chains on a finite state space, and an observation pattern in which a fixed number of consecutive observations is followed by an observation gap of fixed length, say workdays and weekends.
In this talk I will focus on the simplest reasonable scenario, namely when every third observation is missing. The new estimators perform astonishingly well, as illustrated with simulations for this scenario.
This talk is based on joint work with Anton Schick and Wolfgang Wefelmeyer.
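A minimal simulation of the observation pattern discussed in the talk (every third observation missing) illustrates that the marginal distribution remains easy to estimate from the observed time points alone. The two-state chain and all function names below are our own illustration, not the authors' estimators.

```python
import random

def simulate_chain(p, n, seed=1):
    # Two-state Markov chain with transition matrix
    # [[1 - p01, p01], [p10, 1 - p10]], started in state 0
    random.seed(seed)
    p01, p10 = p
    x = [0]
    for _ in range(n - 1):
        u = random.random()
        if x[-1] == 0:
            x.append(1 if u < p01 else 0)
        else:
            x.append(0 if u < p10 else 1)
    return x

def marginal_from_partial(x):
    # Observation pattern: every third time point (t = 0, 3, 6, ...) is missing;
    # estimate P(state = 1) from the remaining observations
    obs = [v for t, v in enumerate(x) if t % 3 != 0]
    return sum(obs) / len(obs)
```

For p01 = 0.2 and p10 = 0.3 the stationary probability of state 1 is 0.2 / (0.2 + 0.3) = 0.4, and the estimator above recovers it despite the periodic gaps; recovering the transition probabilities from such data is the harder problem the talk addresses.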
11:35 am - 12:00 pm
Tapered covariance matrix estimation for lattice processes TU Dortmund, Germany
For stationary $\mathbb{R}^d$-valued lattice processes on $\mathbb{Z}^g$, for $d,g\in \mathbb{N}$, we consider the estimation of the whole covariance function of the data. In this sense, we generalize the estimation of large covariance matrices via tapering as proposed by McMurry and Politis (2010) and Jentsch and Politis (2015) for univariate and multivariate time series, respectively, to more general lattice processes. Considering the vectorization $vec(\mathbf{X})$ of (multivariate) lattice data $\mathbf{X}$, we construct suitable (tapering) estimators for the covariance matrix $Cov(vec(\mathbf{X}))$ of the whole data set. Note that the dimension of $Cov(vec(\mathbf{X}))$ is growing with increasing sample size. We prove estimation consistency with respect to the spectral norm and discuss computational challenges caused by the high dimensionality of this task. To achieve efficiency gains, we discuss various forms of separability imposed on the covariance function and examine lattice processes in both non-separable and separable covariance setups. For this purpose, we propose an alternative tapered estimator tailored for separable covariance functions and establish its consistency under separability. To assess their performance, we conduct simulation studies exploring how these estimators behave with different separability and spatial dependence scenarios.
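The univariate time-series version of tapered covariance estimation in the spirit of McMurry and Politis (2010) can be sketched as follows. The talk's lattice generalization replaces the single lag index by a multi-index; the trapezoidal flat-top taper below is one common choice and, like the function names, an assumption of this sketch.

```python
def sample_acov(x, h):
    # Sample autocovariance at lag h (divisor n, as is standard)
    n = len(x)
    m = sum(x) / n
    return sum((x[t] - m) * (x[t + h] - m) for t in range(n - h)) / n

def flat_top(u):
    # Trapezoidal flat-top taper: 1 on [0, 1], linear down to 0 on [1, 2]
    au = abs(u)
    if au <= 1.0:
        return 1.0
    if au <= 2.0:
        return 2.0 - au
    return 0.0

def tapered_cov_matrix(x, l):
    # Tapered estimate of the n x n covariance matrix of the sample path:
    # autocovariances beyond lag 2*l are set to zero, giving a banded matrix
    n = len(x)
    g = [flat_top(h / l) * sample_acov(x, h) if h <= 2 * l else 0.0
         for h in range(n)]
    return [[g[abs(i - j)] for j in range(n)] for i in range(n)]
```

The banding is what makes the estimator consistent in spectral norm even though the matrix dimension grows with the sample size.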
|
12:50 pm - 2:00 pm | Lunch |
2:00 pm - 2:50 pm | S13 Keynote: Nonparametric and asymptotic statistics - presented by MDPI Location: POT 81 Floor plan Session Chair: Alexander Kreiss Session Chair: Leonie Selk |
|
2:00 pm - 2:50 pm
Modeling multiplex networks
Multiple networks are complex and highly non-Euclidean objects. This talk will focus on what are known as multiplex networks, representing multiple interactions between the same set of objects, where the interactions may be of several different types. We shall discuss how a graph limit framework can be used for such objects, and how to naturally characterise the complexity of the whole system of interactions. This characterisation builds on complexity measures introduced in information theory and on the notion of entropy. This gives us a tool for thinking generally about systems of relationships. |
2:00 pm - 3:40 pm | S 1 (2): Machine Learning Location: POT 06 Floor plan Session Chair: Merle Behr Session Chair: Alexandra Carpentier |
|
2:00 pm - 2:25 pm
Clustering Experts with Bandit Feedback of their Performance in Multiple Tasks 1INRAE, Mistea, Institut Agro, Univ Montpellier, Montpellier, France; 2Institut für Mathematik, Universität Potsdam, Potsdam, Germany
We study the problem of clustering a set of experts from their performances in many tasks. We assume that a set of $N$ experts can be partitioned into two groups, where experts within the same group exhibit identical performance on any given task, over a possibly large number of tasks $d$. We consider a sequential and adaptive setting: at each time step $t$, the learner selects an expert-task pair and receives a noisy observation of the expert’s performance, which depends on both the task and the expert's group. The learner’s objective is to recover the correct partition of the experts with as few observations as possible.
We propose an efficient $\delta$-PAC algorithm that, with probability at least $1-\delta$, accurately recovers the partition. The algorithm leverages the sequential halving method and optimally balances exploration across tasks — to estimate performance gaps between groups — and across experts — to infer the correct partition. We establish an instance-dependent upper bound on the number of observations required for partition recovery, which holds with probability at least $1-\delta$, and provide a matching lower bound, up to poly-logarithmic factors.
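The sequential halving routine leveraged by the algorithm can be sketched in its standard best-arm identification form: split the budget over log2(K) rounds, sample all surviving arms equally, and discard the worse half each round. Our sketch assumes unit-variance Gaussian rewards and is only the basic subroutine, not the clustering algorithm of the talk.

```python
import math
import random

def sequential_halving(means, budget, seed=0):
    # Sequential halving: in each of ~log2(K) rounds, spend an equal share
    # of the budget on every surviving arm, then keep the better half.
    random.seed(seed)
    arms = list(range(len(means)))
    rounds = max(1, math.ceil(math.log2(len(means))))
    for _ in range(rounds):
        if len(arms) == 1:
            break
        per_arm = max(1, budget // (len(arms) * rounds))
        est = {}
        for a in arms:
            est[a] = sum(random.gauss(means[a], 1.0)
                         for _ in range(per_arm)) / per_arm
        arms.sort(key=lambda a: est[a], reverse=True)
        arms = arms[: max(1, len(arms) // 2)]
    return arms[0]
```

The halving schedule concentrates samples on the hardest remaining comparisons, which is the exploration balancing the abstract refers to.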
2:25 pm - 2:50 pm
Permutation Estimation for Crowdsourcing 1Universität Potsdam, Germany; 2INRAE, Univ. Montpellier, France
We consider a ranking problem where a set of experts answers a set of questions. The aim is to rank experts by competence based on their answers. We assume that, for every pair of experts, one of the two has, for every question, at least the same probability of answering correctly as the other. Moreover, we suppose that the questions can be ordered by difficulty in the same sense. Storing the probabilities of a correct answer for each expert and every question yields a matrix, and our assumption means that this matrix is bi-isotonic up to permutations of its rows and columns.
In the general setting of ranking over this class of permuted bi-isotonic matrices, no algorithm is known that is at the same time computationally efficient and optimal in the minimax-sense. In our work, we focus on the special case of bi-isotonic matrices taking only two values, and present a polynomial time method that matches the minimax lower bound up to poly-logarithmic factors.
2:50 pm - 3:15 pm
A random measure approach to reinforcement learning in continuous time 1Department of Mathematics, Saarland University, Germany; 2Department of Mathematics, Vinh University, Vietnam
We propose a random measure framework for modeling exploration, i.e. the execution of measure-valued controls, in continuous-time reinforcement learning (RL) with controlled diffusion and jumps. We first address the situation where sampling the randomized control in continuous time takes place on a discrete-time grid, and we reformulate the resulting stochastic differential equation (SDE) as an equation driven by appropriate random measures. These random measures are constructed using the Brownian motion and Poisson random measure, which represent the sources of randomness in the original model dynamics, along with additional random variables sampled on the grid for control execution. We then establish the vague convergence for these random measures as the grid’s mesh-size tends to zero. This limit theorem suggests a grid-sampling limit SDE driven by both white noise random measures and a Poisson random measure, which models the control problem with randomized controls in continuous time. Moreover, we discuss the grid-sampling limit SDE in comparison with the exploratory SDE (e.g., [4]) and the sample state process (e.g., [2, 3]) used in recent continuous-time RL literature.
References
[1] C. Bender and N.T. Thuan. A random measure approach to reinforcement learning in continuous time. arXiv:2409.17200, preprint (2024).
[2] Y. Jia and X.Y. Zhou. Policy gradient and actor-critic learning in continuous time and space: Theory and algorithms. J. Mach. Learn. Res. 23 (2022) 1--50.
[3] Y. Jia and X.Y. Zhou. $q$-Learning in continuous time. J. Mach. Learn. Res. 24 (2023) 1--61.
[4] H. Wang, T. Zariphopoulou and X.Y. Zhou. Reinforcement learning in continuous time and space: A stochastic control approach. J. Mach. Learn. Res. 21 (2020) 1--34.
|
2:00 pm - 3:40 pm | S 2 (2): Spatial stochastics, disordered media, and complex networks Location: POT 251 Floor plan Session Chair: Chinmoy Bhattacharjee Session Chair: Benedikt Jahnel |
|
2:00 pm - 2:25 pm
Preferential attachment trees with vertex death University of Augsburg, Germany
Preferential attachment models are a well-known class of random graphs that model the evolution of real-world complex networks over time. We study a more general model that incorporates vertex death and thus, more realistically, models evolving networks that not only increase but also decrease in size. An important result in the study of preferential attachment models is the occurrence of persistence of the maximum degree, where a fixed vertex attains the maximum degree for all but finitely many steps. A clear phase transition is known to exist for the occurrence of persistence. We present recent findings for persistence in preferential attachment trees with vertex death, uncovering regimes in which a similar phase transition exists and regimes where persistence never occurs. This is joint work with Markus Heydenreich.
2:25 pm - 2:50 pm
Cluster sizes in subcritical soft Boolean models 1TU Braunschweig; 2WIAS Berlin; 3University of Bath
We consider the soft Boolean model, a model that interpolates between the Boolean model and long-range percolation, where vertices are given by a stationary Poisson point process. Each vertex carries an independent heavy-tailed radius and each pair of vertices is assigned another independent heavy-tailed edge-weight with a potentially different tail exponent. Two vertices are connected if they are within a distance given by the larger of their radii multiplied by the edge weight. We determine the tail behaviour of the Euclidean diameter and of the number of points of a typical maximally connected component in a subcritical percolation phase.
For this, we present a sharp criterion in terms of the tail exponents of the edge-weight and radius distributions that distinguishes a regime where the tail behaviour is controlled only by the edge exponent from a regime in which both exponents are relevant. We explain the principal mechanisms in both regimes and how they lead to the observed behaviour. If time allows, we sketch the main steps of the proof.
2:50 pm - 3:15 pm
Inhomogeneous random graphs of preferential attachment type: Supercritical behaviour University of Cologne, Germany
We consider the preferential attachment graph with vertices $\{1,\dots,n\}$ where we connect two vertices $i$ and $j$ independently with probability $\beta\, (i \vee j)^{\gamma-1} \, (i \wedge j)^{-\gamma}$. In the regime $\gamma\in(\tfrac{1}{2},1)$ it can be shown that the largest connected component always makes up a large fraction of the vertices; that is, this model is robust under percolation with critical parameter $\beta_c =0$. Moreover, the degrees of the vertices have infinite variance, which makes it mathematically interesting. We show that the size of the largest component behaves like $\exp\left(-\tfrac{c}{\beta}\right)$ as $\beta\to0$, by a constructive proof based on path-counting techniques. There we rely heavily on the fact that these networks are ultra-small, i.e. typical distances between vertices are only of order $\log\log n$.
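The model can be simulated directly from the stated connection probability; the following sketch, which uses a small union-find to extract the largest component, is our illustration and has nothing to do with the authors' path-counting proof.

```python
import random
from collections import Counter

def find(parent, x):
    # Union-find root lookup with path halving
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def largest_component(n, beta, gamma, seed=0):
    # Connect i < j independently with probability
    # min(1, beta * j**(gamma - 1) * i**(-gamma)), vertices 1..n,
    # matching beta * (i v j)^(gamma-1) * (i ^ j)^(-gamma) from the abstract
    random.seed(seed)
    parent = list(range(n + 1))
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            p = min(1.0, beta * j ** (gamma - 1.0) * i ** (-gamma))
            if random.random() < p:
                ri, rj = find(parent, i), find(parent, j)
                if ri != rj:
                    parent[ri] = rj
    sizes = Counter(find(parent, v) for v in range(1, n + 1))
    return max(sizes.values())
```

Even at moderate sizes the early vertices act as hubs, so for $\gamma \in (\tfrac12, 1)$ a large connected component appears already for modest $\beta$, consistent with $\beta_c = 0$.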
3:15 pm - 3:40 pm
Loops vs percolation TU Darmstadt
In recent years, many models in mathematical physics have been encoded into graphical models, which are more accessible through the lens of probability theory. These graphical models often exhibit a natural percolation structure, which makes questions about the criticality of the model easier to investigate. One of the main questions is whether the critical values for loops and for percolation differ, i.e. whether $\beta_c^{\text{link}}<\beta_c^{\text{loop}}$.
In my talk I want to give an introduction to the topic and an overview of the results which are known so far.
|
2:00 pm - 3:40 pm | S 4 (1): Limit theorems, large deviations and extremes Location: ZEU 160 Floor plan Session Chair: Jan Nagel Session Chair: Marco Oesting |
|
2:00 pm - 2:25 pm
Information criteria for the number of directions of extremes in high-dimensional data Karlsruhe Institute of Technology, Germany
In multivariate extreme value analysis, estimating the extremal dependence structure is a challenging task, especially in the context of high-dimensional data. Therefore, a common approach is to reduce the dimensionality by considering only the directions in which extreme values occur. Typically, the underlying models are assumed to be multivariate regularly varying, which under mild assumptions is equivalent to sparse regular variation, recently introduced by Meyer and Wintenberger (2021). Sparse regular variation has the advantage of capturing the sparsity structure of extreme events better than multivariate regular variation. Therefore, in this talk, we use the concept of sparse regular variation to present different information criteria for the number of directions in which extreme events occur, such as a Bayesian information criterion (BIC), a mean-squared error information criterion (MSEIC) and a quasi-Akaike information criterion (QAIC) based on the Gaussian likelihood function. One result is that the AIC of Meyer and Wintenberger (2023) and the MSEIC are inconsistent, whereas the BIC and the QAIC are consistent information criteria. Finally, the performance of the different information criteria is compared in a simulation study.
2:25 pm - 2:50 pm
Principal component analysis for max-stable distributions Otto-von-Guericke University Magdeburg
Principal component analysis (PCA) is one of the most popular dimension reduction techniques in statistics and is especially powerful when a multivariate distribution is concentrated near a lower-dimensional subspace. Multivariate extreme value distributions have turned out to provide challenges for the application of PCA, since their constrained support impedes the detection of lower-dimensional structures and heavy tails can imply that second moments do not exist, thereby preventing the application of classical variance-based techniques for PCA. We adapt PCA to max-stable distributions using a regression setting and employ max-linear maps to project the random vector to a lower-dimensional space while preserving max-stability. We also provide a characterization of those distributions which allow for a perfect reconstruction from the lower-dimensional representation. Finally, we demonstrate how an optimal projection matrix can be consistently estimated and show viability in practice with a simulation study and an application to a benchmark dataset.
2:50 pm - 3:15 pm
An Alternative Approach to Power Law Dynamics in Preferential Attachment Models Otto-von-Guericke-Universität Magdeburg, Germany
A common feature observed in large real-world networks is a degree distribution that resembles a power law. Since this has serious practical implications, many models have been proposed over time that aim to reflect this property. One of these is the class of preferential attachment models, which gained popularity soon after their introduction by Barabási and Albert in 1999. They describe a discrete-time growing graph process in which, at each time step, a newly added vertex randomly establishes a certain number of edges to existing vertices with a probability that is an affine function of their degrees. This 'rich-get-richer' dynamic provides an intuitive explanation for power-law distributions and has furthermore been proven to lead to these distributions asymptotically.
In this talk, we provide a complementary approach aimed at analysing individual vertices and their interactions in large networks. To this end, we select a fixed number of the oldest vertices and let them evolve for a heavy-tailed random number of time steps. Utilising tools from extreme value theory, such as the tail coefficient and spectral measure, we can then make predictions for the chosen degree vector in large networks, which we interpret as just the extremal realisations of our model. We discuss several model specifications, such as finite versus infinite dimensions, and fixed versus random numbers of outgoing edges per vertex.
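The classical preferential attachment tree underlying this discussion can be simulated in a few lines, using the standard trick of sampling uniformly from a list of edge endpoints so that vertices are chosen proportionally to their degree. This is a generic illustration of the 'rich-get-richer' dynamic, not the extreme-value machinery of the talk.

```python
import random

def pa_tree(n, seed=0):
    # Barabási-Albert tree: each new vertex attaches one edge to an
    # existing vertex chosen proportionally to its degree. Sampling
    # uniformly from the endpoint list achieves this, since every
    # vertex appears in it once per incident edge.
    random.seed(seed)
    targets = [0, 1]           # start with the single edge 0-1
    deg = {0: 1, 1: 1}
    for v in range(2, n):
        u = random.choice(targets)
        targets += [u, v]
        deg[u] += 1
        deg[v] = 1
    return deg
```

In such a tree the oldest vertices accumulate degrees of order $\sqrt{n}$, which is the kind of extremal behaviour of fixed old vertices that the talk analyses through tail coefficients and spectral measures.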
3:15 pm - 3:40 pm
Gaussian Approximation and Moderate Deviations of Poisson Shot Noises with Application to Compound Generalized Hawkes Processes 1Imperial College London; 2Consiglio Nazionale delle Ricerche
In this presentation, we give explicit bounds on the Wasserstein and the Kolmogorov distances between random variables lying in the first chaos of the Poisson space and the standard Normal distribution, using the results proved by Last, Peccati and Schulte. Relying on the theory developed in the work of Saulis and Statulevicius and on a fine control of the cumulants of the first chaoses, we also derive moderate deviation principles, Bernstein-type concentration inequalities and Normal approximation bounds with Cramér correction terms for the same variables. The aforementioned results are then applied to Poisson shot-noise processes and, in particular, to the generalized compound Hawkes point processes (a class of stochastic models which generalizes classical Hawkes processes). This extends the recent results available in the literature regarding the Normal approximation and moderate deviations.
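A Poisson shot-noise process with an exponential response kernel can be simulated directly; by Campbell's formula its stationary mean is $\lambda \int_0^\infty e^{-u}\,\mathrm{d}u = \lambda$, which the sketch below checks empirically. The kernel choice and all names are our own illustration, not part of the talk's results.

```python
import math
import random

def poisson_points(lam, T):
    # Homogeneous Poisson process on [0, T] via exponential interarrival times
    pts, t = [], 0.0
    while True:
        t += random.expovariate(lam)
        if t > T:
            return pts
        pts.append(t)

def shot_noise_at(t, pts):
    # Shot noise S(t) = sum over points tau <= t of exp(-(t - tau))
    return sum(math.exp(-(t - s)) for s in pts if s <= t)
```

Averaging S(t) over a time grid well past the initial transient should return roughly $\lambda$; normal approximation results of the kind described above quantify the fluctuations of such functionals around this mean.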
|
2:00 pm - 3:40 pm | S 5 (2): Stochastic modelling in life sciences Location: POT 13 Floor plan Session Chair: Matthias Birkner |
|
2:00 pm - 2:25 pm
3D-Analysis of tumor spheroids HTWD - University of Applied Sciences Dresden, Germany
Tumor spheroids are pre-clinical cell culture systems for assessing the impact of combinatorial radio(chemo)therapy. In contrast to 2D in-vitro experiments, they exhibit therapeutically relevant in-vivo-like characteristics, from three-dimensional (3D) cell-cell and cell-matrix interactions to radial pathophysiological gradients related to proliferative activity and nutrient/oxygen supply, all altering cellular radioresponse. The analysis of 3D tumor spheroid assays comprises reading out the growth kinetics from brightfield microscopy images taken every second day and classifying the therapeutic outcome into the cases "controlled" (no growth recurrence) and "relapse" (growth recurrence). To assist in the required evaluation of up to several thousand microscopy images per treatment arm and to support the evaluation of the effect mechanisms, we develop a (semi-)automated spheroid analysis pipeline. It integrates automated spheroid segmentation as well as classification of the therapeutic outcome based on statistical and machine learning algorithms with knowledge-driven mechanistic modelling.
We present an efficient mathematical model for three-dimensional multicellular tumor spheroids, capable of explaining experimental tumor spheroid growth data of several cell lines with and without radiotherapy, and facilitating efficient parameter calibration. The latter is accomplished by effectively reducing computational cost through exploiting radial symmetry. We further demonstrate how this model can be integrated into a pipeline for automated 3D tumor spheroid analysis.
[1] Franke F, Michlikova S, Aland S, Kunz-Schughart LA, Voss-Böhme A, Lange S. Efficient Radial-Shell Model for 3D Tumor Spheroid Dynamics with Radiotherapy. Cancers 2023; 15(23):5645 doi:10.3390/cancers15235645
[2] Streller M, Michlikova S, Ciecior W, Lönnecke K, Kunz-Schughart LA, Lange S, Voss-Böhme A. Image segmentation of treated and untreated tumor spheroids by Fully Convolutional Networks. arXiv:2405.01105
2:25 pm - 2:50 pm
A maximum likelihood estimator for composite models 1Bielefeld University, Germany; 2University of Twente, the Netherlands; 3Ghent University, Belgium; 4Universidade Nova de Lisboa, Portugal
Structural equation modeling (SEM) is a popular and widely applied method that predominantly models latent variables by means of common factor models. Yet, in recent years, the composite model has gained increasing research attention. In contrast to common factor models, approaches to estimating composite models are limited. We contribute a full-information maximum likelihood (ML) estimator for composite models. We present the general composite model, including its model-implied variance-covariance matrix, and derive a full-information ML approach to estimate the parameters of composite models. Moreover, a test is provided to assess the overall fit of composite models. To demonstrate the performance of the ML estimator and to compare it to its closest contender, partial least squares path modeling (PLS-PM), in finite samples, a Monte Carlo simulation is conducted. The Monte Carlo simulation reveals that, overall, the ML estimator performs well and is similar to PLS-PM in finite samples. Hence, under the considered conditions, the proposed estimator is a valid alternative with known superior statistical properties.
2:50 pm - 3:15 pm
Robust multivariate linear models for multivariate longitudinal data Sungkyunkwan University, Korea, Republic of (South Korea)
Linear models commonly employed in longitudinal data analysis often assume a multivariate normal distribution. However, the presence of outliers can introduce bias in estimating the mean parameter within these models. In response to this challenge, alternative linear models have been proposed, assuming either a multivariate t distribution or a multivariate Laplace distribution, known for their robustness in mean parameter estimation. This paper conducts a comprehensive review of existing multivariate linear models and introduces multivariate Laplace linear models tailored for analyzing multivariate longitudinal data in the presence of outliers. Additionally, we present methodologies for addressing the covariance matrix or scale matrix, utilizing modified Cholesky decomposition and hyperspace decomposition. The comparison of these models is facilitated through simulations and examples, aiming to provide insights into their appropriate utilization.
|
2:00 pm - 3:40 pm | S 6 (2): Stochastic modelling in natural sciences Location: POT 112 Floor plan Session Chair: Alexandra Blessing Session Chair: Anton Klimovsky |
|
2:00 pm - 2:25 pm
The interacting Bose gas, loops and interlacements 1Weierstrass Institute; 2Technische Universität Berlin
We prove a variational characterisation of the free energy of the interacting Bose gas in the thermodynamic limit. The formula minimises the sum of entropy and energy over point processes of loops and interlacements. The starting point is a Poisson-point process representation of the gas that is based on a by now classical path-integral representation via the Feynman--Kac formula in terms of the Brownian loop soup. The new contribution is a clear separation between short and long loops, the latter yield the interlacement part. On the way, we introduce two new notions of specific relative entropy densities.
2:25 pm - 2:50 pm
Fitting spatial 3D models from stochastic geometry to 2D image data using methods from generative AI Ulm University, Germany
This talk introduces a computational method for generating digital twins of the 3D morphology of (functional) materials through stochastic geometry models, calibrated by means of 2D image data. By systematically varying model parameters, a wide spectrum of structural scenarios can be investigated, such that the corresponding digital twins can be exploited as geometry input for numerical simulations of macroscopic effective properties [1,2]. For calibrating models that can generate virtual 3D microstructures by stochastic simulation, generative adversarial networks (GANs) have gained increasing popularity [3]. While classical (non-parametric) GANs offer a data-driven approach for modeling complex 3D morphologies, the systematic variation of their model parameters for generating diverse, not yet measured structural scenarios can be difficult.
In contrast, parametric models of stochastic geometry (e.g., Gaussian random fields) allow for the generation of realistic, yet unobserved, structures through systematic parameter variation. However, as model complexity increases---such as for excursion sets of more general random fields or random tessellations induced by marked point processes, which are necessary to capture more intricate microstructures---the required number of model parameters increases substantially. This makes classical model calibration by means of interpretable descriptors impractical. Combining GANs with advanced stochastic geometry models can overcome these limitations and, in addition, allows for the calibration of model parameters solely based on 2D image data of planar sections through the 3D structure [4]. These parametric hybrid models are flexible enough to stochastically model complex 3D morphologies, enabling the systematic exploration of different structures. Moreover, by combining stochastic and numerical simulations, the impact of morphological descriptors on macroscopic effective properties can be investigated and quantitative structure-property relationships can be established. Thus, the presented method allows for the generation of a wide spectrum of virtual 3D morphologies, that can be used for identifying structures (e.g., cathodes in Li-ion batteries) with optimized functional properties.
References
[1] B. Prifling, M. Röding, P. Townsend, M. Neumann and V. Schmidt, Large-scale statistical learning for mass transport prediction in porous materials using 90,000 artificially generated microstructures. Frontiers in Materials 8 (2021) 786502.
[2] O. Furat, L. Petrich, D. Finegan, D. Diercks, F. Usseglio-Viretta, K. Smith and V. Schmidt, Artificial generation of representative single Li-ion electrode particle architectures from microscopy data. npj Computational Materials 7 (2021) 105.
[3] S. Kench and S.J. Cooper, Generating three-dimensional structures from a two-dimensional slice with generative adversarial network-based dimensionality expansion. Nature Machine Intelligence 3 (2021) 299-305.
[4] L. Fuchs, O. Furat, D.P. Finegan, J. Allen, F.L.E. Usseglio-Viretta, B. Ozdogru, P.J. Weddle, K. Smith and V. Schmidt, Generating multi-scale NMC particles with radial grain architectures using spatial stochastics and GANs. arXiv preprint arXiv:2407.05333 (2024).
2:50 pm - 3:15 pm
Existence and Non-Existence of Ground States in the Spin-Boson Model 1Technical University of Darmstadt; 2Paderborn University; 3University of Geneva
The Spin-Boson model describes the interaction between a quantum mechanical two-level system and a bosonic field. Its Hamiltonian, a self-adjoint and lower-bounded operator, is said to have a ground state if the infimum of its spectrum is an eigenvalue. Using the Feynman-Kac formula, one can express matrix elements of the semigroup generated by the Hamiltonian in terms of a self-attractive jump process. Associated with this process is a continuum FK-percolation model which randomly partitions the half-axis into intervals, with distinct intervals possibly being connected by bonds. Applying this representation, we show that, in the infrared-critical case, a phase transition occurs: as the coupling strength increases, the system transitions from having a ground state to having none. Based on recent work with Volker Betz, Benjamin Hinrichs and Mino Nicola Kraft.
3:15 pm - 3:40 pm
Enhanced binding for a quantum particle coupled to a scalar quantized field 1TU Darmstadt, Germany; 2Harvard University, USA
A quantum particle coupled to a quantised field behaves as if it were effectively heavier than its actual mass. Enhanced binding refers to the phenomenon that due to this effective mass of the particle, the system admits a ground state, even if this is not the case for the uncoupled system. Feynman-Kac formulas allow a probabilistic interpretation of the problem: one studies Brownian motion perturbed by two attractive potentials,
$$\hat{\mathbb{P}}_{\delta,\alpha,T}(\mathrm{d}x) \propto \mathrm{e}^{ \delta \int_0 ^T \mathrm{d}s V(x_s) + \alpha \int_0 ^T \int_0 ^T \mathrm{d}s \mathrm{d}t W( x_t - x_s, t-s)} 1_{B(0,R)}(x_T) \mathbb{P}(\mathrm{d}x).$$
Here, $V$ can be thought of as rewarding the path for being close to the origin and $W$ as giving rewards to the path if it stays locally where it came from. $\alpha > 0$ is a parameter which determines how much Brownian motion is "slowed down" by the pair potential $W$. The main challenge is then showing that
$$\liminf\limits_{T \rightarrow \infty}\hat{\mathbb{P}}_{\delta,\alpha,T} \left( \Vert x_{T/2} \Vert \le R \right) > 0,$$
which can be interpreted as the particle localising. This can be seen to imply the existence of a ground state in the quantum system. This is joint work with Volker Betz and Mark Sellke.
|
2:00 pm - 3:40 pm | S 7 (3): Stochastic processes: theory, statistics and numerics Location: POT 51 Floor plan Session Chair: Andreas Neuenkirch Session Chair: Jakob Söhl |
|
2:00 pm - 2:25 pm
Numerical Methods for SDEs on Manifolds Georg-August-Universität Göttingen, Germany
In the past few years there has been a surge in interest in the estimation of solutions to stochastic differential equations on Riemannian manifolds. The need for such methods arose from a desire to be able to sample from probability measures on Riemannian manifolds. Taking inspiration from Euclidean space, the Euler discretisation of the over-damped Langevin diffusion has been used to tackle the problem of sampling. However, rates for weak convergence of the algorithm had not yet been proved without relying on an embedding of the manifold into a high-dimensional copy of Euclidean space.
Analysis of the Euler scheme in the weak sense also raises the question of convergence rates in the strong sense. By following closely the approach laid out in the seminal works of Milstein, we show how to generate high-order strong schemes on a Riemannian manifold with non-positive curvature. In particular, we present the Milstein correction to the Euler scheme, which yields a scheme of order 1.
The talk will give an overview of recent results obtained in joint work with Karthik Bharath, Akash Sharma, and Michael Tretyakov.
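As a point of reference for the Euclidean baseline mentioned above, the Euler discretisation of the over-damped Langevin diffusion $dX_t = -\nabla U(X_t)\,dt + \sqrt{2}\,dW_t$ can be sketched in a few lines. This is an illustrative flat-space sketch only, not the manifold scheme of the talk; the potential and step size are assumptions.

```python
import numpy as np

def euler_langevin(grad_U, x0, step, n_steps, rng):
    """Euler-Maruyama discretisation of the overdamped Langevin
    diffusion dX_t = -grad U(X_t) dt + sqrt(2) dW_t (Euclidean case)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * noise
    return x

# Sample from a standard Gaussian: U(x) = |x|^2 / 2, so grad U(x) = x.
rng = np.random.default_rng(0)
samples = np.array([euler_langevin(lambda x: x, np.zeros(2), 0.01, 2000, rng)
                    for _ in range(500)])
print(samples.mean(axis=0), samples.var(axis=0))
```

For small step sizes the empirical mean and variance approach those of the standard Gaussian target, illustrating the sampling use of the scheme.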
2:25 pm - 2:50 pm
Regularization by noise and approximations of singular kinetic SDEs University of Augsburg, Germany
Regularisation by noise in the context of stochastic differential equations (SDEs) with coefficients of low regularity, known as singular SDEs, refers to the beneficial effect of noise: the singularity of the coefficients is smoothed out, yielding well-behaved equations. Kinetic SDEs, sometimes also called second-order SDEs, a typical type of stochastic Hamiltonian system, describe the motion of a particle perturbed by some random external force. The difference from usual SDEs is that the noise in the space position vanishes and appears only in the velocity direction, so less noise enters the system. In this talk we will discuss the regularisation effect of this degenerate noise for singular kinetic SDEs from the viewpoints of numerical approximation and particle approximation.
2:50 pm - 3:15 pm
Adaptive approximation of jump-diffusion SDEs with discontinuous drift University of Klagenfurt, Austria
In this talk the approximation of jump-diffusion stochastic differential equations with discontinuous drift, possibly degenerate diffusion coefficient, and Lipschitz continuous jump coefficient is studied. These stochastic differential equations can be approximated with a jump-adapted approximation scheme with a convergence rate $3/4$ in $L^p$. This rate is optimal for jump-adapted approximation schemes. We present an advanced adaptive approximation scheme to improve this convergence rate. Our scheme obtains a strong convergence rate of at least $1$ in $L^p$ in terms of the average number of evaluations of the driving noises.
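The jump-adapted idea underlying the scheme above is that all jump times of the driving Poisson process are merged into the time grid, so that no jump ever falls inside an Euler step. A minimal illustrative sketch, with hypothetical coefficients and without the adaptive refinement of the talk:

```python
import numpy as np

def jump_adapted_euler(mu, sigma, jump, x0, T, h, lam, rng):
    """Euler scheme on a jump-adapted grid: the uniform grid of mesh h
    is merged with the jump times of a Poisson process of rate lam,
    so every jump happens exactly at a grid point."""
    n_jumps = rng.poisson(lam * T)
    jump_times = np.sort(rng.uniform(0.0, T, n_jumps))
    grid = np.union1d(np.arange(0.0, T, h), np.append(jump_times, T))
    x = x0
    for t0, t1 in zip(grid[:-1], grid[1:]):
        dt = t1 - t0
        x = x + mu(x) * dt + sigma(x) * np.sqrt(dt) * rng.standard_normal()
        if np.any(np.isclose(t1, jump_times)):  # apply the jump at a jump time
            x = x + jump(x)
    return x

rng = np.random.default_rng(1)
# Geometric-type dynamics with multiplicative jumps (illustrative coefficients).
xT = jump_adapted_euler(lambda x: 0.05 * x, lambda x: 0.2 * x,
                        lambda x: 0.1 * x, 1.0, 1.0, 1e-3, 2.0, rng)
print(xT)
```

The adaptive scheme of the talk additionally refines the grid near the discontinuity of the drift, which is what lifts the rate from $3/4$ to $1$.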
|
2:00 pm - 3:40 pm | S 7 (4): Stochastic processes: theory, statistics and numerics Location: POT 151 Floor plan Session Chair: Andreas Neuenkirch Session Chair: Jakob Söhl |
|
2:00 pm - 2:25 pm
Stability of geometrically recurrent time-inhomogeneous Markov chains Taras Shevchenko National University of Kyiv, Ukraine
This talk is devoted to establishing upper bounds for the difference of the $n$-step transition probabilities of two different time-inhomogeneous, discrete-time Markov chains with values in a locally compact space, when their one-step transition probabilities are close.
This stability result is applied to the functional autoregression in $\mathbb{R}^n$.
We study a pair of independent, discrete-time Markov chains
$\{X^{(i)}_n, n\ge 0\}$, $i\in \{1,2\}$ with values in a locally compact space $E$ equipped with a $\sigma$-field $\mathfrak{E}$.
Since $E$ is locally compact, there exists a sequence of compact sets $\{K_n, n\ge 0\}$, such that
$K_n \subset K_{n+1},\ n\ge 0$ and $\bigcup_{n\ge 0} K_n = E$.
Let us introduce Markov kernels
$$ P_{in}(x, A) = \mathbb{P}\left\{X^{(i)}_{n+1} \in A \middle|\ X^{(i)}_n = x\right\}, n\ge 1, x\in E, A\in \mathfrak{E}. $$
Since we are interested in stability, the kernels $P_{1n}$ and $P_{2n}$ should be close in some sense, which we will define next.
To this end, we introduce substochastic kernels
$$
Q_n(x, \cdot) = \left(P_{1n}\wedge P_{2n}\right)(x, \cdot),
$$
where $\wedge$ should be understood as a minimum of two measures and put
$$
\varepsilon := 1 - \inf_{n, x} Q_n(x, E) \le 1.
$$
From now on, we assume that $\varepsilon < 1$. We denote the residual substochastic kernels by
$$ R_{in}(x, A) = P_{in}(x, A) - Q_n(x, A), $$
so that
$$ R_{in}(x, E) \le \varepsilon. $$
Assume the following conditions hold.
\textbf{Condition M} (Minorization condition)
Assume that there exist a set $C\in \mathfrak{E}$, a sequence of real numbers $\{a_n, n\ge 1\}, a_n \in (0,1)$ and a sequence of probability measures $\nu_n$ on $(E, \mathfrak{E})$ such that:
$$ \inf_{x\in C} P_{in}(x, A) \ge a_n \nu_n(A),\ i\in \{1,2\},$$
$$\inf_{n} \nu_n(C) > 0, $$
$$ 0 < a_* := \inf_n a_n \le a_n \le a^* := \sup_n a_n < 1,$$
for all $A\in \mathfrak{E}$ and $n\ge 1$.
This condition can be understood as a local mixing condition, where mixing occurs on a set $C$.
Let us introduce the following condition of geometric recurrence of the pair of chains $\left(X^{(1)}, X^{(2)}\right)$.
\textbf{Condition GR} (Geometric Recurrence)
Assume the independent chains $X^{(1)}$ and $X^{(2)}$ satisfy \textbf{Condition M} with corresponding set $C$. Then there exists a constant $\psi > 1$ such that
$$ h(x,y) = \sup_n \mathbb{E}^n_{xy}\left[\psi^{\sigma_{C\times C}}\right] < \infty, $$
for all $x,y \in E$, where
$$\sigma_{C\times C} = \inf\left\{ k \ge 1\ :\ \left(X^{(1)}_k, X^{(2)}_k\right) \in C\times C\right\}. $$
When used in the context of $\mathbb{P}^n_{xy}$, by $\sigma_{C\times C}$ we mean
$$\sigma_{C\times C} = \inf\left\{ k \ge n+1\ :\ \left(X^{(1)}_{n+k}, X^{(2)}_{n+k}\right) \in C\times C\right\}. $$
\textbf{Condition T} (Tails condition).
Denote by $A_m=K_{m+1}\setminus K_m$. Assume that there exist sequences $\{\hat S_n, n\ge 1\}$ and $\{\hat r_n, n\ge 1\}$, such that
$$\hat m := \sum_{m\ge 1} \hat S_m < \infty,\qquad \Delta := \sum_{m\ge 1} \hat r_m \hat S_m < \infty, $$ and
\begin{equation}\label{cond_t_1}
\begin{array}{c}
\left(\prod\limits_{k=1}^{n} Q_{t+k}\right) (x, A_m) \le \left(\prod\limits_{k=1}^n Q_{t+k}\right)(x, E)\hat S_m,\ x\in C,\\
\nu_t \left(\prod\limits_{k=1}^{n} Q_{t+k}\right) (A_m) \le \nu_t \left(\prod\limits_{k=1}^n Q_{t+k}\right)(E)\hat S_m,
\end{array}
\end{equation}
and
$$
\sup_{x\in A_m} \int_{E^2\setminus C\times C} {R_{1t}(x, dy) R_{2t}(x, dz)\over 1-Q_t(x,E) } h(y,z) \le \hat r_m,
$$
for all $t\ge 0$.
\begin{theorem}\label{thm_stability}
Let $X^{(i)}$, $i\in \{1,2\}$, be two Markov chains defined above that satisfy \textbf{Condition M}, \textbf{Condition GR} and \textbf{Condition T}.
Assume also that $\varepsilon < 1$.
Then there exist constants $M_1, M_2\in \mathbb{R}$, such that for every $x \in C$
\begin{align}\label{main_estimate}
\left|\left| \mathbb{P}^t_x\left\{X^{(1)}_{n} \in \cdot \right\} - \mathbb{P}^t_x\left\{X^{(2)}_{n} \in \cdot \right\} \right|\right| &\le \varepsilon \hat m M_1 + \Delta M_2,
\end{align}
where $\hat m$ and $\Delta$ are defined in \textbf{Condition T}.
For every $x\notin C$, the following inequality holds true
\begin{align}\label{main_estimate2}
\left|\left| \mathbb{P}^t_x\left\{X^{(1)}_{n} \in \cdot \right\} - \mathbb{P}^t_x\left\{X^{(2)}_{n} \in \cdot \right\} \right|\right| &\le \varepsilon \left( 2 \hat m M_1 + \hat\mu(x)\right) + 2\Delta M_2,
\end{align}
where
$$\hat\mu(x) = \sup_t \sum\limits_{k \ge 1} \left(\prod\limits_{j=0}^{k-1}Q_{t+j}\mathbb{1}_{E \setminus C}\right)(x, E\setminus C) \le 1.$$
\end{theorem}
2:25 pm - 2:50 pm
Analyticity of the Capacity of the Range of Random Walks University of Passau, Germany
In this talk we study the asymptotic capacity of the range of random walks. First, I will give a quick introduction to the concept of the capacity of the range, which has mostly been investigated on $\mathbb{Z}^d$; few results go beyond this setting. We will focus on random walks on groups with infinitely many ends, where we sketch the basic idea of the proof that the asymptotic capacity varies real-analytically in terms of probability measures of constant support.
|
2:00 pm - 3:40 pm | S 8 (2): Finance, insurance and risk: Modelling Location: POT 361 Floor plan Session Chair: Peter Hieber Session Chair: Frank Seifried |
|
2:00 pm - 2:25 pm
XVA analysis in incomplete markets 1University of Munich (LMU), Germany; 2University of Verona, Italy
This paper presents a study of an XVA framework in which the underlying market is incomplete due to the absence of credit default swaps for the bank and the counterparty. Therefore, neither the BSDE-based replication nor the equivalent discounting approach for XVAs can be applied in this case. We address this issue by employing the local risk-minimisation approach for hedging the bank's position. As a result, we are able to describe the resulting strategy in a multi-curve setting via BSDEs.
2:25 pm - 2:50 pm
Estimation of dynamically recalibrated affine and polynomial models in finance Christian-Albrechts-Universität zu Kiel, Germany
Dynamic recalibration of risk-neutral parameters in stochastic models to align with observed prices of financial derivatives is a widely used industry practice that, however, often lacks a tractable underlying mathematical framework. We address this gap by proposing a novel methodology wherein recalibrated parameters are treated as unobservable components embedded within a larger-scale affine or polynomial model. The estimation of the dynamics of this background model then boils down to a two-step procedure, in which unobservable states are first calibrated to observed option prices using classic least-squares optimization techniques, and risk-neutral and physical model parameters are then jointly estimated to fit the trajectories of observed components alongside with the recalibrated states. We embed this joint estimation of both measures into the general framework of estimating functions and establish weak consistency and asymptotic normality of the resulting estimators. Moreover, we derive explicitly computable expressions of the asymptotic estimator covariance matrix.
2:50 pm - 3:15 pm
Weak Error Rates for Local Stochastic Volatility Models 1TU Berlin, Germany; 2WIAS Berlin, Germany; 3CERMICS, France; 4Qube Research and Technologies, Singapore
Local stochastic volatility refers to a popular model class in applied mathematical finance that allows for "calibration-on-the-fly", typically via a particle method, derived from a formal McKean-Vlasov equation. Well-posedness of this limit is a well-known problem in the field with the general case still being largely open, despite recent progress in Markovian situations.
Our approach is to start with a well-defined Euler approximation to the formal McKean-Vlasov equation, followed by a newly established "half-step"-scheme, allowing for good approximations of conditional expectations.
We show that this scheme converges with weak rate one regarding the step-size, plus error terms that account for the said approximation. Furthermore, the case of particle approximation is discussed in detail and the weak error rate, in dependence of all parameters used, is derived.
3:15 pm - 3:40 pm
Discrete approximation of risk-based pricing under volatility uncertainty University of Konstanz, Germany
We discuss the limit of risk-based prices of European contingent claims in discrete-time financial markets under volatility uncertainty when the number of intermediate trading periods goes to infinity. The limiting dynamics are obtained using recently developed results for the construction of strongly continuous convex monotone semigroups. We connect the resulting dynamics to the semigroups associated to G-Brownian motion, showing in particular that the worst-case bounds always give rise to a larger bid-ask spread than the risk-based bounds. Moreover, the worst-case bounds are achieved as limit of the risk-based bounds as the agent’s risk aversion tends to infinity. The talk is based on joint work with Jonas Blessing and Alessandro Sgarabottolo.
|
2:50 pm - 3:40 pm | S 3 Keynote: Stochastic Analysis and S(P)DEs Location: POT 81 Floor plan Session Chair: Vitalii Konarovskyi Session Chair: Aleksandra Zimmermann |
|
2:50 pm - 3:40 pm
Mean field control with infinite dimensional common noise Mean field control and its twin theory of mean field games have undergone many developments since the early works of Lasry and Lions, and Huang, Caines, and Malhamé. In a nutshell, both theories aim at the asymptotic analysis of equilibria within large populations of rational agents in weak interaction. In the case of control, the equilibria are asymptotically understood as solutions to an optimization problem posed on a McKean-Vlasov type dynamics. From an Eulerian point of view, the value function is the solution of a Hamilton-Jacobi equation on the space of probability measures. The underlying objective of this presentation is to construct stochastic versions of this problem in which the measure-valued dynamics is itself random, and in particular subject to a noise that is sufficiently regularizing to smooth the Hamilton-Jacobi equation. In this presentation, we give two examples of such noise. Based on joint works with W. Hammersley, M. Martini and G. Sodini. |
2:50 pm - 3:40 pm | S13 (2): is dropped Location: ZEU 250 Floor plan Session Chair: Alexander Kreiss Session Chair: Leonie Selk |
3:40 pm - 4:20 pm | Coffee Break Location: Foyer Potthoff Bau Floor plan |
3:40 pm - 4:20 pm | Coffee Break Location: POT 168 Floor plan |
4:20 pm - 5:10 pm | S 7 Keynote: Stochastic processes: theory, statistics and numerics Location: POT 81 Floor plan Session Chair: Andreas Neuenkirch Session Chair: Jakob Söhl |
|
4:20 pm - 5:10 pm
Kalikow decomposition for the study of neuronal networks: simulation and learning Université Côte d'Azur, CNRS, France Kalikow decomposition is a decomposition of stochastic processes (usually finite-state discrete-time processes, but more recently also point processes) that consists in picking at random a finite neighborhood in the past and then making a transition in a Markovian manner. This kind of approach has been used for many years to prove the existence of certain processes, in particular of their stationary distributions. It allows one to prove the existence of processes that model infinite neuronal networks, such as Hawkes-like processes or Galvès-Löcherbach processes. But beyond mere existence, this decomposition is a wonderful tool for simulating such networks as open physical systems, which from a computational point of view could be competitive with the most performant brain simulations. This decomposition is also a source of inspiration for understanding how local rules at each neuron can make the whole network learn. |
4:20 pm - 6:00 pm | S 1 (3): Machine Learning Location: POT 06 Floor plan Session Chair: Merle Behr Session Chair: Alexandra Carpentier |
|
4:20 pm - 4:45 pm
Effective fluctuating continuum models for SGD with small learning rate, or in overparameterized limits MPI MIS Leipzig & Universität Bielefeld, Germany
In this talk we present recent results on the derivation of effective models for the training dynamics of SGD in the limits of small learning rate or large, shallow networks. The focus lies on developing effective limiting models that also capture the fluctuations inherent in SGD. This leads to the novel concepts of stochastic modified flows and distribution-dependent modified flows. The advantage of these limiting models is that they match the SGD dynamics to higher order and recover the correct multi-point distributions.
This is joint work with Vitalii Konarovskyi and Sebastian Kassing.
4:45 pm - 5:10 pm
Learning of deep convolutional network image classifiers via stochastic gradient descent and over-parametrization 1TU Darmstadt, Germany; 2Concordia University, Canada
Image classification from independent and identically distributed random variables is considered. Image classifiers are defined which are based on a linear combination of deep convolutional networks with max-pooling layer. Here all the weights are learned by stochastic gradient descent.
A general result is presented which shows that the image classifiers are able to approximate the best possible deep convolutional network. In the case that the a posteriori probability satisfies a suitable hierarchical composition model, it is shown that the corresponding deep convolutional neural network image classifier achieves a rate of convergence which is independent of the dimension of the images.
5:10 pm - 5:35 pm
Optimal Rates for Forward Gradient Descent based on Multiple Queries University of Twente, The Netherlands
We investigate the prediction error of forward gradient descent based on multiple queries in the linear model. It is shown that if the number of queries is chosen suitably large, the minimax optimal rate of convergence can be achieved, matching the performance of stochastic gradient descent. The results also address the case of low-rank training data, which can be beneficial in high-dimensional problems. As forward gradient descent only requires forward passes through a network, which are feasible in the human brain, our results show that rate-optimal results are achievable by biologically plausible optimization methods.
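The forward gradient idea can be illustrated in the linear model: instead of the gradient itself, each update uses an average over several queries of directional derivatives $\langle \nabla\ell, v\rangle v$ along random Gaussian directions $v$, which is an unbiased gradient estimate since $\mathbb{E}[vv^\top]=I$. A toy sketch; all parameters are illustrative assumptions, not values from the paper:

```python
import numpy as np

def forward_gd(X, y, n_queries, lr, rng):
    """Forward gradient descent in the linear model y = X theta + noise.
    Each step averages n_queries forward (directional) derivatives
    <grad, v> v along random Gaussian directions v, which only
    requires forward passes (directional derivatives), not backprop."""
    n, d = X.shape
    theta = np.zeros(d)
    for i in range(n):
        grad = -(y[i] - X[i] @ theta) * X[i]      # gradient used only via <grad, v>
        V = rng.standard_normal((n_queries, d))   # query directions
        fwd = (V @ grad)[:, None] * V             # <grad, v> v for each query
        theta = theta - lr * fwd.mean(axis=0)     # averaged forward gradient step
    return theta

rng = np.random.default_rng(2)
d, n = 5, 4000
theta_star = np.ones(d)
X = rng.standard_normal((n, d))
y = X @ theta_star + 0.1 * rng.standard_normal(n)
theta_hat = forward_gd(X, y, n_queries=20, lr=0.02, rng=rng)
print(np.linalg.norm(theta_hat - theta_star))
```

Increasing `n_queries` reduces the variance of the gradient estimate, which is the mechanism behind matching the rate of stochastic gradient descent.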
5:35 pm - 6:00 pm
Stochastic Modified Flows for Riemannian Stochastic Gradient Descent 1Universität Bielefeld, Germany; 2Technische Universität Berlin, Germany; 3Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany; 4University of York, UK
We give quantitative estimates for the rate of convergence of Riemannian stochastic gradient descent (RSGD) to the Riemannian gradient flow and to a diffusion process, the so-called Riemannian stochastic modified flow (RSMF). Using tools from stochastic differential geometry we show that, in the small learning rate regime, RSGD can be approximated by the solution to the RSMF driven by an infinite-dimensional Wiener process. The RSMF accounts for the random fluctuations of RSGD and thereby increases the order of approximation compared to the deterministic Riemannian gradient flow. RSGD is built using the concept of a retraction map, that is, a cost-efficient approximation of the exponential map, and we prove quantitative bounds for the weak error of the diffusion approximation under assumptions on the retraction map, the geometry of the manifold, and the random estimators of the gradient.
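On the unit sphere, a standard retraction is the projective map $x \mapsto (x+v)/\Vert x+v\Vert$. A toy RSGD sketch with this retraction follows; the objective and noise model are illustrative assumptions, not taken from the talk:

```python
import numpy as np

def rsgd_sphere(grad_est, x0, lr, n_steps, rng):
    """Riemannian SGD on the unit sphere using the projective retraction
    x -> (x + v) / |x + v| in place of the exponential map."""
    x = np.asarray(x0, dtype=float)
    x = x / np.linalg.norm(x)
    for _ in range(n_steps):
        g = grad_est(x, rng)
        v = g - (g @ x) * x            # project onto the tangent space at x
        x = x - lr * v                 # take the step ...
        x = x / np.linalg.norm(x)      # ... and retract back to the sphere
    return x

# Minimise f(x) = -<a, x> on the sphere (optimum x* = a / |a|),
# with additive Gaussian noise as a toy stochastic gradient estimator.
a = np.array([3.0, 4.0, 0.0])
rng = np.random.default_rng(4)
noisy_grad = lambda x, r: -a + 0.1 * r.standard_normal(3)
x_hat = rsgd_sphere(noisy_grad, np.array([1.0, 0.0, 0.0]), 0.05, 500, rng)
print(x_hat)
```

The normalisation step agrees with the exponential map to second order, which is exactly the kind of assumption on the retraction under which the weak error bounds are proved.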
|
4:20 pm - 6:00 pm | S 2 (3): Spatial stochastics, disordered media, and complex networks Location: POT 251 Floor plan Session Chair: Chinmoy Bhattacharjee Session Chair: Benedikt Jahnel |
|
4:20 pm - 4:45 pm
$L^1$-compression and percolation on graphs University of Münster, Germany
The $L^1$-compression exponent of a graph quantifies the extent to which the graph fails to bi-Lipschitz embed into $L^1$-spaces. These spaces provide a natural target geometry and the compression exponent is known to capture useful geometric information, for instance about the speed of random walks. In this talk, we will describe a probabilistic characterization of this exponent in terms of connectivity decay of bond percolation models with large marginals.
In the special case of Cayley graphs of groups, the result makes progress on a program suggested by Russell Lyons. Notably, the general result does not require any symmetry assumption. The proof uses a generalization of a construction of bond percolations developed by the authors previously for the symmetric setting, which uses Poisson point processes on so-called spaces with measured walls and yields explicit bounds on the connectivity function.
This talk is based on joint work with Chiranjib Mukherjee.
4:45 pm - 5:10 pm
Edge Exchangeable Random Graphs: Connectedness, Completeness and Normality MPI Leipzig, Germany
A sequence of random variables is called exchangeable if its law is invariant under finite permutations. This is a weaker assumption than independence and often quite plausible in applications. It approximately corresponds to the intuition that the random variables are observed in no particular order. One thus expects edge exchangeable models to be appropriate when we observe \emph{interactions}, rather than \emph{agents}, in no particular order. We study certain graph-theoretic properties of graphs produced by edge exchangeable processes.
We restrict ourselves to the setting of undirected simple (multi-)graphs with a countable latent vertex set. Let $\mathbb{N}_2$ be the set of unordered pairs of distinct natural numbers. For a set $A$ denote by $\mathcal{P}(A)$ the set of probability measures on $A$. A de Finetti type result for edge exchangeable models is known and in our setting says that all such models can be obtained by a suitable choice of $\mathcal{M}$ in the following sampling procedure. Let $G_0=(\emptyset,\emptyset)$ be the empty undirected multigraph. Let $\mathcal{M} \in \mathcal{P}(\mathcal{P}(\mathbb{N}_2))$. Sample $\mathcal{W} \sim \mathcal{M}$. Then recursively set
$$G_{n+1} := G_n \cup e_{n+1}, e_{n+1} \sim \mathcal{W}.$$
Less tersely, start with an empty graph and, given $\mathcal{W} \in \mathcal{P}(\mathbb{N}_2)$ (which may have been random), repeatedly sample edges i.i.d. from $\mathcal{W}$ and add them, as well as any necessary vertices, to the graph. It is also natural, and technically useful, to consider the poissonized version in which for each $e \in \mathbb{N}_2$ we run a Poisson process, independent of everything else, with intensity $\mathcal{W}(\{e\})$, which counts the number of copies of edge $e$. We will be interested in properties (number of vertices, connectedness, completeness) which are invariant under identifying parallel edges, which we will henceforth do. We abuse notation and use $G_n$ also for this process. In the poissonized case, this reduces to studying a model in which each edge arrives at an independent exponential time with parameter $\mathcal{W}(\{e\})$. Our results also apply to, and are in fact proven for, this exponential time version and are recovered for the initial setting by de-poissonization.
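The sampling procedure above is straightforward to simulate. A minimal sketch with a hypothetical weight measure $\mathcal{W}$ supported on pairs from a finite vertex set: edges are drawn i.i.d. from $\mathcal{W}$, parallel edges are identified, and connectedness is checked on the vertices that appear.

```python
import itertools
import numpy as np

def sample_edge_exchangeable(weights, n_edges, rng):
    """Sample an edge exchangeable graph: given a probability measure W
    on unordered pairs of distinct vertices, draw n_edges edges i.i.d.
    from W and keep the simple graph obtained by identifying parallel edges."""
    pairs = list(weights)
    p = np.array([weights[e] for e in pairs])
    idx = rng.choice(len(pairs), size=n_edges, p=p / p.sum())
    return {pairs[i] for i in idx}

def is_connected(edges):
    """Connectedness of the graph on the vertices that actually appear
    (isolated vertices never occur in this model)."""
    vertices = {v for e in edges for v in e}
    if not vertices:
        return False
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        w = stack.pop()
        if w not in seen:
            seen.add(w)
            stack.extend(adj[w] - seen)
    return seen == vertices

rng = np.random.default_rng(3)
# Hypothetical W: geometric-type weights on pairs from {0, ..., 9}.
W = {(i, j): 2.0 ** -(i + j) for i, j in itertools.combinations(range(10), 2)}
G = sample_edge_exchangeable(W, n_edges=200, rng=rng)
print(len(G), is_connected(G))
```

Note that, as in the abstract, the vertex set of the sampled graph is itself random: a vertex is present only if some sampled edge contains it.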
Notice that graphs produced in this way have vertex sets with random size and random contents and that isolated vertices never occur. This is quite different from classical random graph settings. Moreover, we study the full trajectory of the graph-valued stochastic process rather than a sequence of independent graphs.
We define an apparently new notion for graph-valued stochastic processes. Namely, that of eventual forever connectedness, which as the name implies, is the event that there exists some (random) time after which the graph is always connected. We make three main contributions, which we do not state explicitly as to avoid introducing more notation and definitions. Conditioning on $\mathcal{W}$ we,
\begin{itemize}
\item give a necessary and sufficient condition on $\mathcal{W}$ for $G_n$ to be eventually forever connected and therein illustrate that it is not a very restrictive assumption.
\item show that if $G_n$ is eventually forever connected and the variance of the number of vertices goes to infinity then the number of vertices is asymptotically Gaussian. This proves a conjecture of Janson in the connected regime.
\item give a necessary and sufficient condition on $\mathcal{W}$ for $G_n$ to be eventually forever essentially the complete graph. Janson had previously found a sufficient condition.
\end{itemize}
A new component is created precisely when an edge brings two new vertices. For the first result the main difficulty is that these events are dependent. We proceed by bounding the amount of dependence and use a pseudo-converse of the first Borel-Cantelli lemma. For the second, the main difficulty is that whether a vertex is present is not independent of the other vertices. We notice that the phenomenon which is an obstacle to connectedness (edges which bring two new vertices) witnesses this dependence. Therefore it turns out that making the topological assumption of eventual forever connectedness helps prove our distributional result. We consider this surprising observation the highlight of the associated forthcoming paper. The proof proceeds by constructing a coupling to an urn scheme which displays the necessary independence. For the final result we proceed by proving a general result regarding the ordering of collections of exponential random variables and then realize the theorem as a corollary.
5:10 pm - 5:35 pm
Robustness in the Poisson Boolean model with convex grains 1University of Leeds, England; 2University of Cologne, Germany
Consider a homogeneous Poisson point process on $\mathbb{R}^d$, equipped with convex grains, i.e. i.i.d. copies of a random convex body whose distribution is rotationally invariant. We define for such a convex body a non-increasing sequence of diameters. The first diameter is the classical diameter of the convex body. The $i$-th diameter is then defined as the diameter of the orthogonal projection of the body from the previous step along the $(i-1)$-st diameter onto the $(d-i+1)$-dimensional hyperplane. This projected body is then projected further when determining the next diameter. We state several criteria on the diameter distribution, and moment conditions for the volume of the convex body, that result in either a dense process,
i.e. the whole space is covered by the grains, or robustness, i.e. the union of the grains has an unbounded connected component for any intensity of the underlying Poisson process. Importantly, we do not impose any conditions on the joint distribution of the diameters.
If the grains are chosen to be Euclidean balls, it is known that density and robustness are equivalent. We show that in our general model, in any dimension $d\geq 2$, there exist grain distributions for which robustness does not imply density.
5:35 pm - 6:00 pm
Dimensions of limsup sets on trees Johannes Gutenberg University Mainz, Germany
Let $T$ be a supercritical Galton-Watson tree with mean offspring number $m>1$ and finite variance.
Let $\alpha >0$ and assume that every node in the tree on level $n$ is "marked" with probability $e^{-n\alpha}$, independently of all other nodes. Let $\partial T$ be the set of infinite self-avoiding paths in the tree, starting at the root.
There is a standard metric on $\partial T$: the distance between two elements of $\partial T$ is $e^{-n}$, where $n$ is the level of the last node they have in common. We show that for GW-almost every tree, the subset of paths which are marked infinitely often almost surely has Hausdorff dimension $\log(m)-\alpha$ if $m>e^{\alpha}$. For $m<e^{\alpha}$, the set is almost surely empty. In the critical case $m=e^\alpha$, the set is nonempty with dimension $0$.
|
4:20 pm - 6:00 pm | S 3 (1): Stochastic Analysis and S(P)DEs Location: POT 151 Floor plan Session Chair: Vitalii Konarovskyi Session Chair: Aleksandra Zimmermann |
|
4:20 pm - 4:45 pm
Optimal Control of a fractional Noise-Perturbed Nonlinear Schroedinger Equation Martin-Luther-University Halle-Wittenberg
An optimal control problem for a class of stochastic Schroedinger equations with power-type nonlinearity, driven by a multiplicative fractional Brownian motion with Hurst index $H \in (0, 1)$, is discussed. The state equation is defined in the variational sense. A separation approach is used and the solution is given by the product of the solution of a controlled pathwise problem and the solution of an SDE. A general cost function for the optimal control problem is introduced. Finite dimensional Galerkin approximations and a linearization method are presented and used to derive $\varepsilon$-optimal solutions.
This talk is based on joint work with Hannelore Lisei.
4:45 pm - 5:10 pm
Finite Dimensional Projections of HJB Equations in the Wasserstein Space Georgia Institute of Technology, United States of America
In this talk, we consider the optimal control of particle systems with mean-field interaction and common noise, and their limit as the number of particles tends to infinity. First, we prove the convergence of the value functions $u_n$ corresponding to control problems of $n$ particles to the value function $V$ corresponding to an appropriately defined infinite dimensional control problem. Then, we prove, under certain additional assumptions, $C^{1,1}$ regularity of $V$ in the spatial variable.
In the second part, we discuss conditions under which the value function $V$ projects precisely onto the value functions $u_n$. Using this projection property, we show that optimal controls of the finite dimensional problem correspond to optimal controls of the infinite dimensional problem and vice versa.
This talk is based on [A. Swiech, L. Wessels: Finite Dimensional Projections of HJB Equations in the Wasserstein Space, https://arxiv.org/abs/2408.07688, 2024].
5:10 pm - 5:35 pm
On the maximal regularity of SPDEs on non-smooth domains Universität Kassel, Germany
Although there exists an almost fully-fledged $L_p$-theory for (semi-)linear second order stochastic partial differential equations (SPDEs, for short) on \textit{smooth} domains, very little is known about the regularity of these equations on \textit{non-smooth} domains. As it is already known from the deterministic theory, boundary singularities like corners, edges, or cusps may have a negative effect on the regularity of the solution. For stochastic equations, this effect comes on top of the already known incompatibility of noise and boundary condition. In this talk I will present some new results towards a better understanding of the impact of these effects on the behaviour of solutions to SPDEs on domains with corners and edges, as they naturally appear in applications.
This is joint work with Emiel Lorist (TU Delft), Mark Veraar (TU Delft), and Tobias Werner (Universität Kassel).
5:35 pm - 6:00 pm
Solution theory for the stochastic thin film equation with spatially colored noise 1TU Delft, Netherlands; 2University of Leeds, United Kingdom; 3Universität Bielefeld, Germany; 4MPI MiS Leipzig, Germany
We present recent results on the existence and uniqueness of solutions to stochastic thin-film equations with spatially colored Gaussian noise. We focus on difficulties related to closing the relevant energy and ($\alpha$-)entropy estimates for the equation when subjected to nonlinear gradient noise. Subsequently, we present the resulting theorems on the existence of weak solutions and the existence and uniqueness of strong solutions to the equation under various model assumptions.
(This talk summarizes the research works making up the PhD thesis of the 5th author, which is available at the repository of the TU Delft: https://repository.tudelft.nl)
|
4:20 pm - 6:00 pm | S 4 (2): is dropped Location: ZEU 160 Floor plan Session Chair: Jan Nagel Session Chair: Marco Oesting |
4:20 pm - 6:00 pm | S 5 (3): Stochastic modelling in life sciences Location: POT 13 Floor plan Session Chair: Matthias Birkner |
|
4:20 pm - 4:45 pm
On multi-type Cannings models and their multi-type limiting coalescents University of Tübingen
A multi-type neutral Cannings population model with mutation and fixed subpopulation sizes is analyzed. Under appropriate conditions, as all subpopulation sizes tend to infinity, the ancestral process, properly time-scaled, converges to a multi-type coalescent with mutation allowing for simultaneous multiple collisions of ancestral lineages. The limiting coalescent satisfies the exchangeability and consistency properties. The proof draws on coalescent theory for single-type Cannings models and on decompositions into reproductive and mutational parts.
4:45 pm - 5:10 pm
A conditional coalescent limit in fixed pedigrees 1Johannes Gutenberg University Mainz, Germany; 2Indiana University, USA; 3Harvard University, Cambridge, MA
We consider a general exchangeable diploid population model (Cannings model) as a model for a random pedigree. Embedded within this pedigree are the genealogies associated with a single, neutral autosomal locus of which each individual carries two copies. Mathematically, they form a system of coalescing random walks in a random environment, given by the pedigree.
Complementing previous work on an `annealed' limit theorem which states that, under mild conditions, the genealogies can asymptotically be described by a $\Xi$-coalescent after averaging over the pedigree, we establish the corresponding `quenched' limit. That is, we fix a realisation of the random pedigree and show that, in the limit of large populations, the genealogies can be described by an \emph{inhomogeneous} coalescent process.
5:10 pm - 5:35 pm
A central limit theorem for measure-valued Reed–Frost epidemics TU Ilmenau, Germany
To account for continuous features among individuals such as age or place of residence, we extend multitype Reed–Frost models, i.e., discrete-time SIR models, such that the distribution of certain features may be modeled as a probability distribution on a suitable space of possible types of individuals. The epidemic then runs on a population randomly drawn from this distribution while the infection and recovery probabilities may also depend on the individuals’ types. Our main results state that for every point in time, the empirical measures of susceptible, infective and recovered individuals converge pointwise in probability to a deterministic limit given by recursions (resembling those for Reed–Frost epidemics) and that those measures fulfill a central limit theorem. We discuss our model assumptions and illustrate the results with simulations.
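The classical single-type special case referenced above can be sketched in a few lines. The following minimal simulation (parameter values and function names are illustrative, not from the talk) shows the discrete-time SIR recursion that the measure-valued model generalizes:

```python
import numpy as np

def reed_frost(s0, i0, p, T, rng):
    """Simulate a classical single-type Reed-Frost (discrete-time SIR) epidemic.
    p is the per-pair infection probability; returns (S, I, R) trajectories."""
    S, I, R = [s0], [i0], [0]
    for _ in range(T):
        # Each susceptible escapes infection independently w.p. (1-p)^I_t
        new_inf = rng.binomial(S[-1], 1.0 - (1.0 - p) ** I[-1])
        S.append(S[-1] - new_inf)
        R.append(R[-1] + I[-1])  # all infectives recover after one time step
        I.append(new_inf)
    return S, I, R

rng = np.random.default_rng(0)
S, I, R = reed_frost(s0=990, i0=10, p=0.002, T=20, rng=rng)
```

In the multitype extension of the talk, the scalar counts above become empirical measures on the type space, and $p$ becomes a type-dependent kernel.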
|
4:20 pm - 6:00 pm | S 6 (3): Stochastic modelling in natural sciences Location: POT 112 Floor plan Session Chair: Alexandra Blessing Session Chair: Anton Klimovsky |
|
4:20 pm - 4:45 pm
Physical origin of the fractional Brownian motion and related Gaussian processes arising in the models of anomalous diffusion 1Kassel University, Germany; 2Saarland University, Germany; 3Sapienza Università di Roma, Italy; 4Basque Center of Applied Mathematics (BCAM), Spain
Anomalous diffusion (AD) is an experimentally well-established phenomenon observed in many natural systems across different research fields. In particular, AD has become foundational in the study of living systems following the widespread use of single-particle tracking techniques in recent years.
Generally speaking, AD labels all those diffusive processes governed by laws that differ from classical diffusion, namely, all cases in which particles' displacements do not conform to a Gaussian density function and/or the variance of the displacements does not grow linearly in time.
We propose an approach to establishing the physical origin of AD within the classical picture of a test-particle kicked by infinitely many surrounding particles. We consider a stochastic dynamical system where the microscopic thermal bath is the source of the mesoscopic Brownian motion of a bunch of $N$ particles that constitute the environment of a single test-particle. Physical conservation principles, namely the conservation of momentum and the conservation of energy, are met in the considered particle system in the form of the fluctuation-dissipation theorem for the motion of the surround-particles. The key feature of the considered particle system is the distribution of the masses of the particles that compose the surround of the test-particle. When the number of mesoscopic Brownian particles $N$ is large enough to provide a crowded environment, the test-particle displays AD characterised by the distribution of the masses of the surround-particles. More precisely, we prove that, in the limit $N\to\infty$,
the test-particle diffuses according to a quite general non-Markovian Gaussian process $(Z_t)_{t\geq0}$ characterised by a covariance function
\begin{align}\label{eq:Covariance}
\mathrm{Cov}(Z_t,Z_s)=C\bigl(v(t)+v(s)-v(|t-s|)\bigr),
\end{align}
where $v(\cdot)$ is determined by the distribution of the masses of the surround-particles. With a particular choice of the distribution of the surround-particles, we obtain fractional Brownian motion (fBm) with Hurst parameter $H\in(1/2,1)$ as a special case. In this respect, we recall that fBm has experimentally turned out to be the underlying stochastic motion in many living systems. We also present some distributions of the masses of the surround-particles that lead to a mixture of independent fractional Brownian motions with different Hurst parameters, or to classical Brownian motion, as the limiting process $(Z_t)_{t\geq0}$. Moreover, we present some distributions of masses of the surround-particles leading to limiting processes that perform a transition from ballistic diffusion to superdiffusion, or from ballistic diffusion to classical diffusion.
Furthermore, the constant $C$ in the above formula for the covariance of the process $(Z_t)_{t\geq0}$ depends on the coupling parameter between the test-particle and the surround. Therefore, if we consider several independent identical copies of the same Brownian surround and immerse into each copy a single test-particle of the same kind but with its own individual characteristics (assuming our test-particle is a complex macromolecule with its individual shape, radius, density, etc.), we may obtain different coupling parameters and hence different coefficients $C$ in the covariance of the limiting Gaussian process $(Z_t)_{t\geq0}$ in different copies of the experiment. This fact may serve as a physical basis for the formulation of AD within the framework of superstatistical fBm, where further randomness is provided by a distribution of the diffusion coefficients associated with each diffusing test-particle, and also within the framework of its generalisation, the diffusing-diffusivity approach, where the diffusion coefficient of each test-particle is no longer a random variable but a process.
The proof of our result is based on properties of the Ornstein--Uhlenbeck processes that describe the dynamics of the Brownian surround-particles in the exact fashion of the Langevin equation, and on central-limit-theorem-type arguments that, however, involve a worse scaling than the classical CLT. This worse scaling is compensated by the good properties of Ornstein--Uhlenbeck processes.
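As a quick numerical illustration of the covariance structure above (our addition, not part of the talk): taking $v(u)=u^{2H}$ and $C=1/2$ yields the familiar fBm covariance, and $H=1/2$ recovers classical Brownian motion with $\mathrm{Cov}(Z_t,Z_s)=\min(t,s)$.

```python
import numpy as np

def cov_matrix(times, v, C=0.5):
    """Covariance matrix of the form C*(v(t)+v(s)-v(|t-s|))."""
    t = np.asarray(times, dtype=float)
    return C * (v(t)[:, None] + v(t)[None, :] - v(np.abs(t[:, None] - t[None, :])))

times = np.linspace(0.1, 1.0, 10)
H = 0.75
K_fbm = cov_matrix(times, lambda u: u ** (2 * H))  # fBm with Hurst H = 0.75
K_bm = cov_matrix(times, lambda u: u)              # H = 1/2: Brownian motion
np.linalg.cholesky(K_fbm + 1e-12 * np.eye(10))     # PSD check: Cholesky succeeds
```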
4:45 pm - 5:10 pm
A probabilistic study of the set of stationary solutions to spatially kinetic-type equations Stiftung Universität Hildesheim, Germany
We study multivariate kinetic-type equations in the general case, which includes, among others, the spatially homogeneous Boltzmann equation with Maxwellian molecules, with both elastic and inelastic collisions. Assuming that the collision kernel is of the form derived by Bassetti et al. in \cite{1}, we prove the existence and uniqueness of time-dependent solutions with the help of continuous-time branching random walks, under the weakest possible assumptions. We further derive an exact representation of stationary solutions, i.e., equilibrium solutions of kinetic-type equations, using the central limit theorem for triangular null arrays.
\footnotesize
\begin{thebibliography}{1}
\bibitem{1} {F. Bassetti, L. Ladelli and D. Matthes} (2015). Infinite energy solutions to inelastic homogeneous Boltzmann equations. {\em Electronic Journal of Probability}. 20:1--34.
\end{thebibliography}
5:10 pm - 5:35 pm
Analysis of a strongly repulsive particle system chemically interacting with the environment: a stochastic model for the sulphation phenomenon. 1Università degli Studi di Milano, Italy; 2Karlstad University
We present a new stochastic model for the sulphation process of calcium carbonate at the microscale, focusing on the chemical reaction that leads to the formation of gypsum and to the consequent marble degradation, which is relevant in Cultural Heritage conservation.
The Langevin dynamics of the sulfuric acid particles is described via first-order stochastic differential equations (SDEs) of It\^o type, while calcium carbonate and gypsum are modelled as underlying random fields evolving according to random ODEs. Furthermore, particles interact pairwise via a strongly singular potential of Lennard-Jones type. The system is finally coupled with a marked compound Poisson point measure realizing the chemical reactions.
We discuss the well-posedness of the system for a broad class of singular potentials, including Lennard-Jones, by proving that, almost surely, particle collisions do not occur in finite time.
This is a joint work with Daniela Morale, Stefania Ugolini (University of Milano) and Adrian Muntean, Nicklas Javergard (University of Karlstad).
|
4:20 pm - 6:00 pm | S 8 (3): Finance, insurance and risk: Modelling Location: POT 361 Floor plan Session Chair: Peter Hieber Session Chair: Frank Seifried |
|
4:20 pm - 4:45 pm
Risk measures based on target risk profiles RPTU Kaiserslautern-Landau, Germany
We address the problem that classical risk measures may fail to detect tail risk adequately, for instance due to the averaging involved in computing Expected Shortfall. The current literature proposes a solution, the so-called adjusted Expected Shortfall. This risk measure is the supremum of Expected Shortfalls over all possible levels, adjusted with a function $g$, the so-called target risk profile. We generalize this idea by using other risk measures in place of Expected Shortfall and introduce the concept of general adjusted risk measures. For these, the realization of the adjusted risk measure quantifies the minimal amount of capital that has to be raised and injected into a financial position $X$ to ensure that the risk measure is at most the adjustment function $g(p)$ for all levels $p\in[0,1]$. We discuss a variety of assumptions under which desirable properties of risk measures are satisfied in this setup. From a theoretical point of view, our main contribution is the analysis of equivalent assumptions under which a general adjusted risk measure is positively homogeneous and subadditive. Furthermore, we show that these conditions hold for a range of new risk measures beyond the adjusted Expected Shortfall. For these risk measures, we derive their dual representations. Finally, we test the performance of these new risk measures in a case study based on the S\&P 500.
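For intuition, here is a rough empirical sketch (our illustration, not the authors' implementation; the loss sign convention, level grid, and supremum-over-grid approximation are our assumptions) of an adjusted Expected Shortfall of the form $\sup_p\{\mathrm{ES}_p(X) - g(p)\}$:

```python
import numpy as np

def expected_shortfall(losses, p):
    """Empirical ES at level p: average of the worst (1-p)-fraction of losses."""
    x = np.sort(losses)[::-1]                      # largest losses first
    k = max(1, int(np.ceil((1.0 - p) * len(x))))
    return x[:k].mean()

def adjusted_es(losses, g, levels):
    # supremum over a grid of levels of ES_p(X) - g(p)
    return max(expected_shortfall(losses, p) - g(p) for p in levels)

rng = np.random.default_rng(0)
losses = rng.standard_t(df=4, size=10_000)         # heavy-tailed loss sample
levels = [0.0, 0.5, 0.9, 0.99]
risk = adjusted_es(losses, g=lambda p: 0.0, levels=levels)
```

With the trivial target risk profile $g\equiv 0$, the supremum is attained at the highest level on the grid, since the empirical ES is nondecreasing in $p$.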
4:45 pm - 5:10 pm
Multi-asset return risk measures 1RPTU Kaiserslautern-Landau, Germany; 2University of Amsterdam, Netherlands
In this talk, we revisit the recently introduced concept of return risk measures (RRMs). We extend it by allowing risk management via multiple so-called eligible assets. The resulting new risk measures are called multi-asset return risk measures (MARRMs). We analyze properties of these risk measures. In particular, we prove that a positively homogeneous MARRM is quasi-convex if and only if it is convex. Furthermore, we state conditions to avoid inconsistent risk evaluations. Then, we point out the connection between MARRMs and the well-known concept of multi-asset risk measures (MARMs). This is used to obtain dual representations of MARRMs. Moreover, we compare RRMs, MARMs, and MARRMs in numerous case studies. Using typical continuous-time financial markets and different notions of acceptability of losses, we compare MARRMs and MARMs and draw conclusions about the cost of risk mitigation. In a real-world example, we compare the relative difference between RRMs and MARRMs in times of crisis.
5:10 pm - 5:35 pm
Some remarks on the effect of risk sharing and diversification for infinite mean risks Universität Siegen, Germany
The basic principle of any version of insurance is the paradigm that exchanging risk by sharing it in a pool is beneficial to all participants. For independent risks with a finite mean, this is typically the case for risk-averse decision makers due to the law of large numbers. The situation may be very different for infinite mean models, where risk sharing may have a negative effect. For the case of stable distributions this has been described by Ibragimov et al. (2009), who call this effect the nondiversification trap. In a series of recent papers this has been studied further by Chen, Wang and coauthors, who obtained similar results for infinite mean Pareto and Fr\'echet distributions. We further investigate this property by showing that many of these results can be obtained as special cases of a simple result demonstrating that this holds for any distribution that is more skewed than a Cauchy distribution. We also relate this to the situation of deadly catastrophic risks, where we assume a positive probability of infinite damage. That case gives a very simple intuition for why this phenomenon can occur for catastrophic risks.
5:35 pm - 6:00 pm
Perpetual American Options in a Two-Dimensional Black-Merton-Scholes Model 1LSE, United Kingdom; 2The University of Manchester
We study optimal stopping problems for two-dimensional geometric Brownian motions driven by constantly correlated standard Brownian motions on an infinite time interval. These problems are related to the pricing of perpetual American options such as basket options (with an additive payoff structure) and traffic-light options (with a multiplicative payoff structure) in a two-dimensional Black-Merton-Scholes model. We find closed formulas for the value functions expressed in terms of the optimal stopping boundaries, which in turn are shown to be unique solutions to the appropriate nonlinear Fredholm integral equations. A key role in the existence proof is played by a pointwise maximisation of the expressions obtained by change-of-measure arguments. This provides tight bounds on the optimal stopping boundaries describing their asymptotic behaviour for marginal coordinate values.
|
4:20 pm - 6:00 pm | S13 (3): Nonparametric and asymptotic statistics Location: ZEU 250 Floor plan Session Chair: Alexander Kreiss Session Chair: Leonie Selk |
|
4:20 pm - 4:45 pm
Asymptotics for large-dimensional projection matrices CERGE-EI, Czech Republic
We derive the joint asymptotic behavior of the diagonal and off-diagonal elements of projection matrices whose underlying dimension $m$ is asymptotically proportional to the sample size $n$, so that $m/n\to\mu$ as $n\to\infty$, where the aspect ratio $\mu\in (0,1)$. The rate of convergence turns out to be $\sqrt{n}$, and the limiting distribution is multivariate centered Gaussian with an intriguing pattern in the asymptotic variance-covariance matrix.
Technically, using the celebrated Sherman-Morrison-Woodbury identity for the inverse of rank-$k$-perturbed matrices, we work out formulas for quadratic and bilinear forms with respect to such an inverse. On the basis of these results, we figure out the connections between the elements of the projection matrix and their leave-$k$ analogues. This helps break the dependence between the projection matrix denominator, $Z'Z$, and the individual columns $z_i$ and $z_j$. The asymptotic Gaussianity of the elements of the leave-$k$ projection matrix is then translated to that of the elements of the original projection matrix using the delta method. In turn, the asymptotic Gaussianity of the elements of the leave-$k$ projection matrix is derived from central limit theorems for quadratic and bilinear forms of i.i.d. random elements.
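A quick Monte Carlo sanity check of the scale of these diagonal elements (our illustration, not from the paper): for an $n\times m$ design matrix $Z$, the projection matrix $P=Z(Z'Z)^{-1}Z'$ satisfies $\operatorname{tr}(P)=m$ exactly, so its diagonal entries average to the aspect ratio $\mu=m/n$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 400, 100                           # aspect ratio mu = m/n = 0.25
Z = rng.standard_normal((n, m))
P = Z @ np.linalg.solve(Z.T @ Z, Z.T)     # projection onto the column space of Z
diag = np.diag(P)
# tr(P) = rank(P) = m exactly, so the diagonal elements average to mu = m/n
```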
4:45 pm - 5:10 pm
Change point detection in the mean of functional data with covariance operator offsetting 1Charles University Prague, Czech Republic; 2Otto-von-Guericke Universität Magdeburg, Germany
In a multivariate setting, classical test statistics such as Hotelling's $T^2$ test or the multivariate CUSUM test are usually weighted with the inverse covariance operator. For functional data, i.e., random elements in infinite-dimensional Hilbert spaces, one approach for taking the inverse covariance operator into account is based on dimension reduction to a finite-dimensional subspace, possibly via principal components, followed by classical multivariate procedures. Such an approach has often been criticized as not being fully functional and losing too much information. As an alternative, tests have been proposed directly based on the functional CUSUM, but they fail to include the covariance structure.
In this talk, we propose an alternative that includes the covariance structure with an offset parameter as a middle ground to produce a scale-invariant test procedure and to increase power when the change is not aligned with the first components. Some asymptotic properties are provided under mild assumptions on the dependence structure. A simulation study investigates the behavior of the proposed methods, including detecting abrupt and gradual mean changes.
5:10 pm - 5:35 pm
Asymptotic Behavior of PCA Projections for Multivariate Extremes University of Hamburg, Germany
Drees and Sabourin (2021) examined the PCA projection of the angular part of a multivariate regularly varying random vector. In particular, they derived uniform bounds for the risk of the PCA approximation, that is, the expected squared norm of the difference between the angular part and its PCA approximation. In this talk we show that under mild conditions the true rate of convergence is much faster than suggested by these results. In addition, we establish limit distributions for the PCA projection matrix and the resulting risk in a setting with fixed dimensions. The asymptotic results are used to motivate a data-driven method to select the dimension of the PCA subspace.
5:35 pm - 6:00 pm
On uniqueness of the set of $k$-means 1Georg-August Universität Göttingen, Germany; 2Universidad Autónoma de Madrid, Spain; 3Universidad del País Vasco-Euskal Herriko Unibertsitatea, Spain
We provide necessary and sufficient conditions for the uniqueness of the $k$-means set of a probability distribution. This uniqueness problem is related to the choice of $k$: depending on the underlying distribution, some values of this parameter could lead to multiple sets of $k$-means, which hampers the interpretation of the results and/or the stability of the algorithms.
We give a general assessment of the consistency of the empirical $k$-means adapted to the setting of non-uniqueness and determine the asymptotic distribution of the within-cluster sum of squares (WCSS). We also provide a statistical characterization of $k$-means uniqueness in terms of the asymptotic Gaussianity of the empirical WCSS. As a consequence, we derive a bootstrap test for uniqueness of the set of $k$-means. The results are illustrated with examples of the different types of non-uniqueness that might arise. Finally, we check the performance of the proposed methodology by simulations.
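A minimal numerical illustration of non-uniqueness (our example, not from the paper): for the uniform distribution on the four corners of a square, $k=2$ admits two distinct optimal sets of $2$-means, the horizontal and the vertical split, with identical within-cluster sum of squares.

```python
import numpy as np

def wcss(points, centers):
    """Within-cluster sum of squares under nearest-center assignment."""
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return d2.min(axis=1).sum()

square = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
horiz = np.array([[0.0, 1.0], [0.0, -1.0]])   # split into top/bottom clusters
vert = np.array([[1.0, 0.0], [-1.0, 0.0]])    # split into left/right clusters
w_h, w_v = wcss(square, horiz), wcss(square, vert)
```

Both candidate sets of means achieve the same WCSS, so the set of $2$-means of this distribution is not unique.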
|
5:10 pm - 6:00 pm | S 7 (5): Stochastic processes: theory, statistics and numerics Location: POT 81 Floor plan Session Chair: Andreas Neuenkirch Session Chair: Jakob Söhl |
|
5:10 pm - 5:35 pm
Siegmund Duality and Time Reversal of Lévy-type Processes 1Technische Universität Dresden, Germany; 2Universität Ulm, Germany
In 1976 Siegmund introduced the notion of ``duality of Markov processes with respect to a function''. This concept has since served as a powerful tool for analyzing Markov processes in fields like population genetics, risk theory, and queuing models, as it allows one to connect long-term behavior and fluctuation theory.
In this talk, we will discuss Siegmund duality of Lévy-type processes, focusing on structural properties of (dual) generators and drawing relations to their adjoint operators. We will also investigate the connection of duality to the time reversal of solutions to Lévy-driven SDEs, with a focus on the example of generalized Ornstein-Uhlenbeck processes. Under certain conditions, we will show that the two concepts coincide.
5:35 pm - 6:00 pm
Parameter estimation for polynomial models Kiel University, Germany
Polynomial processes, which include affine processes as a subclass, are a class of Markov processes characterized by the property that their conditional polynomial moments can be computed in closed form. Due to their computational tractability, polynomial models are widely utilized in mathematical finance, with notable examples being the Heston model and Lévy-driven models. The aim of our research is to estimate the parameters of discretely observed polynomial models. In asymptotic statistics, the maximum likelihood estimator is highly desirable for its favorable properties, such as consistency, asymptotic normality, and minimal asymptotic error. However, in practice, the score function is often not available in closed form, motivating the use of alternative approaches.
We propose using martingale estimating functions in place of the score function, with a focus on the Heyde-optimal martingale estimating function, which minimizes the distance to the score function in an $L^2$ sense within a specified class of estimating functions. Our framework constructs a specialized class of polynomial martingale estimating functions for general polynomial processes, requiring only the calculation of polynomial conditional moments. Specifically, we consider:
$$G_n(\vartheta) = \sum_{m=1}^n \sum_{|\pmb{\alpha}| \leq k} \lambda_{\vartheta, \pmb{\alpha}}(m) \left( \Delta X(m)^{\pmb{\alpha}} - \mathbb{E}_\vartheta[\Delta X(m)^{\pmb{\alpha}} | \mathcal{F}_{m-1}]\right)$$
where the maximum degree $k$ is fixed, and the integrand $\lambda_{\vartheta, \pmb{\alpha}}(m)$, measurable with respect to $\mathcal{F}_{m-1}$, can be freely chosen. By applying ergodic theory for Markov processes, we establish both the consistency and asymptotic normality of these estimating functions. Additionally, we demonstrate how to explicitly compute the Heyde-optimal estimating function within this class.
|
6:30 pm - 9:00 pm | Poster Exhibition Location: Dülfer-Saal Floor plan |
|
Optimal Control of McKean-Vlasov Stochastic Partial Differential Equations Technische Universität Berlin, Germany
We consider the control of stochastic partial differential equations (SPDE) of McKean-Vlasov type via deterministic controls using the variational approach to SPDE. Based on a recent novel approach to the Lions derivative for Banach space valued functions by Stannat and Vogler (https://arxiv.org/abs/2407.14884), we prove the Gateaux differentiability of the control-to-state map and, using adjoint calculus, we derive explicit representations of the gradient of the cost functional and a Pontryagin maximum principle.
Further, we prove the existence of optimal controls using a martingale approach and compactness methods.
Our setting uses monotone coefficients and allows the drift and diffusion coefficients to depend on the state, the distribution of the state and the control.
Limit Theorem for Trace of the Squared Sample Correlation Matrices in High Dimensions Stockholm University, Sweden
From a sample of $n$ observations of a $p$-dimensional random vector, we construct the sample correlation matrix. In this work, we establish the asymptotic behavior of the trace of the squared correlation matrix as both $p$ and $n$ grow large.
Known results confirm that, for $p$ and $n$ growing at the same rate and finite fourth moments of the vector components, the trace of the squared correlation matrix satisfies a central limit theorem (CLT). Additionally, we identify more general conditions under which this asymptotic normality holds, revealing how the relationship between $p$, $n$, and the distribution of the random vector influences the limiting behavior. These findings extend existing results and provide a broader framework for independence testing in high dimensions.
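The statistic is easy to simulate (our illustration, not from the poster): for independent components, each off-diagonal squared sample correlation has expectation $1/(n-1)$, so $\operatorname{tr}(R^2)$ is roughly $p + p(p-1)/(n-1)$ in the proportional regime.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 500, 100
X = rng.standard_normal((n, p))
R = np.corrcoef(X, rowvar=False)   # p x p sample correlation matrix
stat = np.trace(R @ R)
# Under independence, E[r_ij^2] = 1/(n-1) off the diagonal, so roughly
# E[tr(R^2)] ~ p + p*(p-1)/(n-1) = 100 + 9900/499, about 119.8 here
```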
Multiple Contrast Test for Youden-Indices in Factorial Diagnostic Trials 1Institute for Mathematical Stochastics, Faculty of Mathematics, Otto-von-Guericke-University Magdeburg, Germany; 2Institute of Biometry and Clinical Epidemiology, Charité-Universitätsmedizin Berlin, Germany
Diagnostic tests are important for clinical decision-making. However, there are often many candidate biomarkers and other factors, and their examination in a diagnostic trial is needed to find the optimal method to answer the clinical question. A well-known method for a single biomarker is to use the cutoff value corresponding to the maximized Youden index. Bantis et al. have developed several methods to compare two biomarkers, providing statistical tests and confidence intervals for the difference of two maximized Youden indices [bantis2021].
As there are often more than two diagnostic methods or biomarkers, and possibly more than one rater or clinician involved in the trial, its design has a factorial structure with rather complex dependencies. To select in such cases the biomarker with the highest accuracy, multiple statistical tests need to be calculated. We propose to use as a multiple contrast test the max-T test based on ideas from Bretz et al. [bretz2001], which controls the family-wise error rate in the strong sense and uses the correlations between the test statistics.
In our work, we extend the theoretical background and implement R code to execute the multiple contrast tests for arbitrarily many biomarkers and general factorial designs and to calculate the corresponding simultaneous confidence intervals. Different methods for the calculation of the Youden indices have been implemented, based on a normal, a power-normal, and an arbitrary distributional assumption on the sample data.
We will discuss the methods' properties, such as their control of the type-1 error rate and their power, obtained from extensive simulation studies.
References
[bantis2021] Bantis LE, Nakas CT, Reiser B. Statistical inference for the difference between two maximized Youden indices obtained from correlated biomarkers. Biometrical Journal. 2021; 63:1241–1253.
[bretz2001] Bretz F, Genz A, Hothorn LA. On the numerical availability of multiple comparison procedures. Biom J. 2001;43(5):645-656.
Mixed moving average field guided learning with unbounded losses TU Chemnitz, Germany
Mixed moving average fields are versatile models for spatio-temporal data. However, apart from the Gaussian case, their predictive distribution is not generally known. To allow the use of such models in forecasting tasks in the non-Gaussian setup, and for data with short and long memory, ensemble forecasts for data with such an underlying model have been established in [1] using a generalized Bayesian algorithm and a bounded loss function. We extend this approach to the unbounded-loss setup by determining a novel PAC-Bayesian bound for $\theta$-lex weakly dependent data.
Stable convergence in law in approximation of stochastic integrals with respect to diffusions University of Zagreb, Faculty of Science, Croatia
We assume that the one-dimensional diffusion $X$ satisfies a stochastic differential equation of the form:
$dX_t=\mu(X_t)dt+\nu(X_t)dW_t$, $X_0=x_0$, $t\geq 0$. Let $(X_{i\Delta_n}, 0\leq i\leq n)$ be discrete observations along the fixed time interval $[0,T]$. We prove that the random vectors whose $j$-th component is $\frac{1}{\sqrt{\Delta_n}}\sum_{i=1}^n\int_{t_{i-1}}^{t_i}g_j(X_s)(f_j(X_s)-f_j(X_{t_{i-1}}))dW_s$, for $j=1,\dots,d$, converge stably in law, as $n\to\infty$, to a mixed normal random vector with a covariance matrix that depends on the path $(X_t, 0\leq t\leq T)$. We use this result to prove stable convergence in law of $\frac{1}{\sqrt{\Delta_n}}(\int_0^Tf(X_s)dX_s-\sum_{i=1}^nf(X_{t_{i-1}})(X_{t_i}-X_{t_{i-1}}))$.
Perpetual American standard and lookback options in models with progressively enlarged filtrations 1LSE, United Kingdom; 2University of New South Wales, Australia
We derive closed-form solutions to optimal stopping problems related to the pricing of perpetual American standard and lookback put and call options in extensions of the Black-Merton-Scholes model under progressively enlarged filtrations. It is assumed that the information available from the market is modelled by Brownian filtrations progressively enlarged with the random times at which the underlying process attains its global maximum or minimum, that is, the last hitting times for the underlying risky asset price of its running maximum or minimum over the infinite time interval, which are supposed to be progressively observed by the holders of the contracts. We show that the optimal exercise times are the first times at which the asset price process reaches certain lower or upper stochastic boundaries depending on the current values of its running maximum or minimum, and on whether the random times of the global maximum or minimum of the risky asset price process have occurred. The proof is based on the reduction of the original, necessarily three-dimensional, optimal stopping problems to the associated free-boundary problems and their solutions by means of the smooth-fit and either normal-reflection or normal-entrance conditions for the value functions at the optimal exercise boundaries and the edges of the state spaces of the processes, respectively.
A comparison of multiple imputation algorithms University hospital of Essen, Germany
Missing data is a major problem in medicine and other branches of science. There are a lot of competing algorithms that deal with missing data. In this poster we take the point of view that the decision to use multiple imputation by chained equations has already been made and it only remains to choose the exact algorithm within this class. We will see that predictive mean matching is superior to all other algorithms.
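For readers unfamiliar with the method, here is a bare-bones sketch of predictive mean matching for a single variable (our illustration with assumed names, not one of the implementations compared on the poster): regress $y$ on $x$ over the complete cases, then impute each missing $y$ by a random draw from the observed $y$ values of the $k$ donors whose predicted means are closest.

```python
import numpy as np

def pmm_impute(x_obs, y_obs, x_mis, k=5, rng=None):
    """Minimal predictive mean matching: linear fit on observed (x, y) pairs,
    then impute each missing y from the k donors with the closest predicted means."""
    rng = rng or np.random.default_rng()
    X = np.column_stack([np.ones_like(x_obs), x_obs])
    beta, *_ = np.linalg.lstsq(X, y_obs, rcond=None)
    pred_obs = X @ beta
    pred_mis = np.column_stack([np.ones_like(x_mis), x_mis]) @ beta
    imputed = np.empty(len(x_mis))
    for i, pm in enumerate(pred_mis):
        donors = np.argsort(np.abs(pred_obs - pm))[:k]
        imputed[i] = y_obs[donors[rng.integers(k)]]  # draw one donor's observed y
    return imputed

rng = np.random.default_rng(4)
x_obs = rng.uniform(0, 1, 200)
y_obs = 2.0 * x_obs + rng.normal(0, 0.1, 200)
x_mis = rng.uniform(0, 1, 50)
y_imp = pmm_impute(x_obs, y_obs, x_mis, rng=rng)
```

Because imputed values are always observed values, PMM preserves the marginal shape of the data, which is one reason it performs well in comparisons.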
Efficient Estimation of a Gaussian Mean with Local Differential Privacy Institute of Science and Technology Austria (ISTA), Austria
In this paper, we study the problem of estimating the unknown mean $\theta$ of a unit variance Gaussian distribution in a locally differentially private (LDP) way. In the high-privacy regime ($\epsilon\le 1$), we identify an optimal privacy mechanism that minimizes the variance of the estimator asymptotically. Our main technical contribution is the maximization of the Fisher-Information of the sanitized data with respect to the local privacy mechanism $Q$. We find that the exact solution $Q_{\theta,\epsilon}$ of this maximization is the sign mechanism that applies randomized response to the sign of $X_i-\theta$, where $X_1,\dots, X_n$ are the confidential iid original samples. However, since this optimal local mechanism depends on the unknown mean $\theta$, we employ a two-stage LDP parameter estimation procedure which requires splitting agents into two groups. The first $n_1$ observations are used to consistently but not necessarily efficiently estimate the parameter $\theta$ by $\tilde{\theta}_{n_1}$. Then this estimate is updated by applying the sign mechanism with $\tilde{\theta}_{n_1}$ instead of $\theta$ to the remaining $n-n_1$ observations, to obtain an LDP and efficient estimator of the unknown mean.
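A minimal sketch of the sign mechanism and its debiasing step (our illustration with assumed names and parameters; the pilot estimate is taken as given rather than computed from a first group): randomized response is applied to $\operatorname{sign}(X_i - t)$, and since $\mathbb{E}[S] = (2q-1)(1-2\Phi(t-\theta))$ for $X\sim\mathcal N(\theta,1)$ with truth-telling probability $q=e^\epsilon/(1+e^\epsilon)$, the mean can be recovered by inverting $\Phi$.

```python
import math
import numpy as np

def Phi(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def Phi_inv(p):
    lo, hi = -8.0, 8.0
    for _ in range(80):                 # bisection, ample for double precision
        mid = 0.5 * (lo + hi)
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sign_mechanism(x, t, eps, rng):
    """eps-LDP randomized response applied to sign(x - t)."""
    s = np.where(x > t, 1.0, -1.0)
    q = math.exp(eps) / (1.0 + math.exp(eps))   # prob. of reporting truthfully
    keep = rng.random(x.shape) < q
    return np.where(keep, s, -s), q

def estimate_mean(x, t, eps, rng):
    # Invert E[S] = (2q-1) * (1 - 2*Phi(t - theta)) for theta
    s, q = sign_mechanism(x, t, eps, rng)
    m = min(max(s.mean() / (2.0 * q - 1.0), -0.999), 0.999)
    return t - Phi_inv((1.0 - m) / 2.0)

rng = np.random.default_rng(5)
theta, eps = 1.3, 1.0
x = rng.normal(theta, 1.0, size=100_000)
theta_hat = estimate_mean(x, t=1.0, eps=eps, rng=rng)  # t plays the pilot role
```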
Statistical Guarantees for Approximate Stationary Points of Shallow Neural Networks 1UHH, Germany; 2RUB, Germany
Since statistical guarantees for neural networks are usually restricted to global optima of intricate objective functions, it is unclear whether these theories explain the performances of actual outputs of neural network pipelines.
The goal of this paper is, therefore, to bring statistical theory closer to practice.
We develop statistical guarantees for shallow linear neural networks that coincide up to logarithmic factors with the global optima but apply to stationary points and the points nearby.
These results support the common notion that neural networks do not necessarily need to be optimized globally from a mathematical perspective.
We then extend our statistical guarantees to shallow ReLU neural networks, assuming the first layer weight matrices are nearly identical for the stationary network and the target.
More generally, despite being limited to shallow neural networks for now, our theories make an important step forward in describing the practical properties of neural networks in mathematical terms.
Adaptive Kernel Density Estimation in L2-norm using artificial data 1Georg-August-Universität Göttingen; 2Universidad de Valparaiso; 3Université Paris Dauphine
This paper deals with kernel density estimation using artificial data. We investigate theoretical and practical properties of such estimators and propose a data-driven procedure such that the estimator is adaptive to the regularity of the underlying density of the data.
Estimating the density of a sample is one of the most useful steps in any data analysis. Among the nonparametric methods used for this goal, the kernel density estimator is perhaps the most widely used. These estimators depend on a kernel $K$ and a bandwidth $h$. We study an adaptive and data-driven method to select the bandwidth $h$, as introduced in \cite{goldenshluger2011bandwidth}, in the context of artificial data. More precisely, we are interested in estimating the density $f_Y$ of a random variable $Y$ that satisfies $Y = m(X)$, where $X$ is another random variable and $m : \mathbb{R} \mapsto \mathbb{R}$ is an unknown function to be estimated. We observe two independent and identically distributed samples generated from these variables. The first one is quite difficult to obtain and rather small: $\{(X_1, Y_1), \dots, (X_n, Y_n)\}$, which satisfies $Y_i = m(X_i)$. The second one (independent of the first one) is simpler to obtain, $\{X_{n+1}, \dots, X_N\}$, and can be as large as the statistician needs.
To estimate $f_Y$, we can use two approaches. In the classical approach, we use the sample $Y_1, \dots, Y_n$. In the artificial data approach, we estimate the unknown function $m$ by $\hat{m}$ using $(X_1, Y_1), \dots, (X_n, Y_n)$, then we construct the artificial data $\hat{Y}_{n+1} = \hat{m}(X_{n+1}), \dots, \hat{Y}_{N} = \hat{m}(X_{N})$, and finally we estimate $f_Y$ using these artificial data.
We show that the kernel estimator using artificial data achieves faster convergence rates than the same estimator in the classical approach. Moreover, we propose a Goldenshluger-Lepski method to select the bandwidth in the artificial data approach and prove that it achieves the optimal rate $\varphi_n(\beta)= n^{-4\beta/(2\beta+3)} (\log n)^{8\beta/(2\beta+3)}$. Finally, we perform a simulation study and compare the results via the Mean Integrated Squared Error criterion.
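The two-step artificial-data approach can be sketched as follows; the sine link function, the Nadaraya-Watson regression estimator, the bandwidths, and the sample sizes are illustrative choices of ours, and the adaptive Goldenshluger-Lepski bandwidth selection is omitted:

```python
import math
import random

def nw_regress(x, xs, ys, h):
    """Nadaraya-Watson estimate of m(x) with a Gaussian kernel."""
    w = [math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

def kde(t, data, h):
    """Gaussian kernel density estimator evaluated at t."""
    c = 1.0 / (len(data) * h * math.sqrt(2.0 * math.pi))
    return c * sum(math.exp(-0.5 * ((t - y) / h) ** 2) for y in data)

rng = random.Random(1)
m = math.sin                                          # hypothetical link Y = m(X)
xs = [rng.uniform(0.0, 3.0) for _ in range(100)]      # small paired sample
ys = [m(x) for x in xs]
big_x = [rng.uniform(0.0, 3.0) for _ in range(2000)]  # large X-only sample

m_hat = lambda x: nw_regress(x, xs, ys, 0.1)          # step 1: estimate m
artificial = [m_hat(x) for x in big_x]                # step 2: artificial Y-hats
f_hat = lambda t: kde(t, artificial, 0.05)            # step 3: estimate f_Y
```

The classical estimator would instead run `kde` directly on the $n$ observed responses; the gain here comes from the $N \gg n$ artificial responses, at the price of the regression error in $\hat m$.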
Measuring Dependence between Events 1Heidelberg Institute for Theoretical Studies, Germany; 2Ruprecht Karl University of Heidelberg, Germany; 3Goethe University Frankfurt, Germany
Measuring dependence between two events, or equivalently between two binary random variables, amounts to expressing the dependence structure inherent in a $2\times 2$ contingency table in a real number between $-1$ and 1. Countless such dependence measures exist but there is little theoretical guidance on how they compare and on their advantages and shortcomings. Thus, practitioners might be overwhelmed by the problem of choosing a suitable dependence measure. We provide a set of natural desirable properties that a \emph{proper} dependence measure should fulfill. We show that Yule's $\mathsf{Q}$ and the little-known Cole coefficient are proper, while the most widely-used measures, the phi coefficient and all contingency coefficients, are improper. They have a severe attainability problem, that is, even under perfect dependence they can be very far away from $-1$ and $1$, and often differ substantially from the proper measures in that they understate strength of dependence. The structural reason is that these are measures for equality of events rather than of dependence. We derive the (in some instances non-standard) limiting distributions of the measures and illustrate how asymptotically valid confidence intervals can be constructed. In a case study on drug consumption we demonstrate that misleading conclusions may arise from the use of improper dependence measures.
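The attainability problem is easy to see numerically. The two formulas below are the standard Yule's $\mathsf{Q}$ and phi coefficient for a $2\times 2$ table with cells $a,b,c,d$; the example table is our own:

```python
import math

def yule_q(a, b, c, d):
    """Yule's Q for the 2x2 table [[a, b], [c, d]]: (ad - bc) / (ad + bc)."""
    return (a * d - b * c) / (a * d + b * c)

def phi_coef(a, b, c, d):
    """Phi coefficient: (ad - bc) / sqrt of the product of the four margins."""
    return (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))

# perfect positive dependence with unequal margins: cell b is empty,
# i.e. the first event never occurs without the second
table = (90, 0, 5, 5)
```

On this table `yule_q(*table)` attains $1$, while `phi_coef(*table)` is about $0.69$ despite the dependence being perfect: the phi coefficient can only reach $\pm 1$ when the margins match, which illustrates the attainability problem discussed in the abstract.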
Testing Monotonicity of Regression in Sublinear Time 1Georg August University of Göttingen, Germany; 2Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells'' (MBExC), University of Göttingen, Germany
Modern data sets have grown in size and complexity, exposing the scalability limitations of classical statistical methods, where computational efficiency is as crucial as statistical accuracy. In the context of testing monotonicity in regression functions, we propose FOMT (Fast and Optimal Monotonicity Test), a novel methodology designed to overcome these challenges. FOMT employs a sparse collection of local tests, strategically generated at random, to detect violations of monotonicity scattered throughout the domain of the regression function. This sparsity enables significant computational efficiency, achieving sublinear runtime in most cases, and quasilinear runtime (i.e., linear up to a log factor) in the worst case. In contrast, existing statistically optimal tests typically require at least quadratic runtime. FOMT's statistical accuracy is achieved through the precise calibration of these local tests and their effective combination, ensuring both sensitivity to violations and control over false positives. More precisely, we show that FOMT separates the null and alternative hypotheses at minimax optimal rates over Hölder function classes of smoothness order in (0,2]. Further, for cases with unknown smoothness, we introduce an adaptive version of FOMT, based on the Lepskii principle, which attains statistical optimality and meanwhile maintains the same computational complexity as if the intrinsic smoothness were known. Extensive simulations confirm the competitiveness and effectiveness of both FOMT and its adaptive variant.
Some results on statistical classification University of Trier, Germany
We present
1. an explicit representation of the stepwise Bayes classification functions introduced by Wald and Wolfowitz (1951),
2. a simple formula for the minimax classification of two binomials,
3. a minimax risk upper bound in terms of the spectral radius of a Hellinger matrix,
4. a phonetic application of the minimax bound, showing that people can be distinguished by their hesitation behavior with small error probabilities.
|
Date: Wednesday, 12/Mar/2025 | |
9:00 am - 10:00 am | Plenary II Location: POT 81 Floor plan Session Chair: Martin Keller-Ressel |
|
9:00 am - 10:00 am
Statistics and calibration for rough volatility: misconceptions and optimal procedures École Polytechnique, France
Rough volatility models have attracted considerable interest in the financial engineering community in recent years. The goal of this talk is to provide an accurate statistical analysis of such models, with minimax rates of convergence, optimal procedures and central limit theorems. This enables us to study financial data properly in the rough volatility paradigm, with a rigorous statistician's perspective.
|
10:00 am - 10:30 am | Coffee Break Location: Foyer Potthoff Bau Floor plan |
10:00 am - 10:30 am | Coffee Break Location: POT 168 Floor plan |
10:30 am - 11:20 am | S 5 (4): Stochastic modelling in life sciences Location: POT 13 Floor plan Session Chair: Matthias Birkner |
|
10:30 am - 10:55 am
Conditioning the logistic continuous state branching process on non-extinction 1CMAP, Ecole Polytechnique, Palaiseau, France; 2Centro de Investigacion en Matematicas, Guanajuato Mexico; 3Fakultät für Mathematik, Universität Duisburg-Essen, Germany
We condition a continuous-state branching process with quadratic competition on non-extinction by requiring the total progeny to exceed arbitrarily large exponential random variables. This is related to a Doob h-transform with an explicit excessive function h. We show that the h-transformed process has a finite lifetime (it is either killed or it explodes continuously) almost surely. When starting from positive values, we can characterize the conditioned process up to its lifetime as the solution to a certain stochastic equation with jumps. The latter superposes the dynamics of the initial logistic CB process with an additional density-dependent immigration term. Last, we establish that the conditioned process can be started from zero. Key tools are Laplace and Siegmund duality relationships with auxiliary diffusion processes.
10:55 am - 11:20 am
Limit Theorems for Branching Processes with Thresholds 1Ulm University, Germany; 2George Mason University, Virginia, USA
Motivated by applications to COVID and other epidemics dynamics, we describe a branching process in random environments model whose characteristics change when crossing upper and lower thresholds. This introduces a cyclical path behavior involving periods of increase and decrease leading to supercritical and subcritical regimes. We establish law of large numbers and central limit theorems for the average length of these regimes and the proportion of time spent in each regime. We also derive the limiting joint distribution of offspring mean estimators in the supercritical and subcritical regimes and thereby show that they are asymptotically independent. We explicitly identify the limiting variances in terms of functionals of the offspring distribution, threshold distribution, and environmental sequences.
References
G. Francisci and A. N. Vidyashankar. Branching processes in random environments with thresholds. Advances in Applied Probability, 56(2):495–544, 2024.
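A toy version of the threshold mechanism can be simulated as follows; the Poisson offspring law and the log-normal environment factor are our own simplifying assumptions for illustration, not the model of the paper:

```python
import math
import random

def poisson(lam, rng):
    """Poisson sample via Knuth's product method (adequate for small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p < limit:
            return k
        k += 1

def simulate(steps, lower, upper, m_super, m_sub, rng):
    """Toy threshold branching process in a random environment: reproduce
    with offspring mean m_super > 1 until the population exceeds `upper`,
    then with m_sub < 1 until it drops below `lower`; the current regime is
    recorded at every step, giving the cyclical path behaviour."""
    z, regime, regimes = lower, "super", []
    for _ in range(steps):
        base = m_super if regime == "super" else m_sub
        mean = base * math.exp(rng.gauss(0.0, 0.1))   # environment of this generation
        z = max(sum(poisson(mean, rng) for _ in range(z)), 1)  # avoid extinction
        if regime == "super" and z > upper:
            regime = "sub"
        elif regime == "sub" and z < lower:
            regime = "super"
        regimes.append(regime)
    return regimes
```

The fraction of steps spent in each regime is exactly the quantity for which the paper establishes laws of large numbers and central limit theorems.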
|
10:30 am - 11:20 am | S10 Keynote: Stochastic optimization and operation research Location: POT 81 Floor plan Session Chair: Nikolaus Schweizer Session Chair: Ralf Werner |
|
10:30 am - 11:20 am
Bridging Data and Decisions: A Method for Optimization under Uncertainty using Regression Residuals We present data-driven approaches that integrate machine/statistical learning with stochastic optimization by using residuals from the learning models. Given a new covariate/contextual observation, the goal is to choose a decision that minimizes the expected objective function conditioned on this observation. We first review a Sample Average Approximation (SAA) approach for approximating this problem that is formed using regression residuals. We then present several extensions and discuss real-world applications. First, in the limited-data regime, we discuss Distributionally Robust Optimization (DRO) variants using Wasserstein-distance, sample-robust, or phi-divergence-based ambiguity sets. Then, we investigate extensions under decision-dependent uncertainty. Finally, we discuss applications of this method in energy and transportation systems using real-world data. |
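A minimal one-covariate sketch of the residual-based SAA idea: the conditional scenarios at a new covariate value are the point prediction shifted by the in-sample residuals. The linear model, the newsvendor-style cost, and the restriction of candidate decisions to scenario points (valid for piecewise-linear convex costs) are illustrative assumptions of ours:

```python
def fit_ols(xs, ys):
    """Least-squares fit y ~ a + b*x (one covariate, closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def saa_decision(x0, xs, ys, cost):
    """Residual-based SAA: build conditional scenarios at x0 from the point
    prediction plus each in-sample residual, then minimise the empirical
    mean cost over candidate decisions."""
    a, b = fit_ols(xs, ys)
    resid = [y - (a + b * x) for x, y in zip(xs, ys)]
    scenarios = [a + b * x0 + e for e in resid]
    # for a piecewise-linear convex cost an optimum lies at a scenario point
    return min(scenarios, key=lambda q: sum(cost(q, d) for d in scenarios))
```

With a newsvendor cost (underage penalty 2, overage penalty 1), the SAA decision is the empirical 2/3-quantile of the conditional scenarios, as classical newsvendor theory predicts.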
10:30 am - 12:10 pm | S 1 (4): Machine Learning Location: POT 06 Floor plan Session Chair: Merle Behr |
|
10:30 am - 10:55 am
Flow matching vs. kernel density estimation Karlsruher Institut für Technologie, Germany
Recently, Flow Matching, introduced by Lipman et al. (2023), has attracted increasing interest in generative modeling. Using the solution to an ODE leads to a generation process that is much simpler than diffusions, the current state-of-the-art generative models. This idea has been developed further, and several adaptations, especially regarding the choice of conditional probability paths, have been presented in the literature. Exploiting the connection to kernel density estimation, we analyze flow matching from a statistical perspective. We derive reasonable conditions for the choice of conditional probability paths. In addition, we study the rate of convergence and compare it to kernel density estimation.
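For intuition, generation with Gaussian (optimal-transport) conditional probability paths can be sketched in one dimension: for an empirical training set the marginal velocity field is available in closed form as a kernel-weighted average, which makes the kernel-density flavour of the method visible. The value of `sigma_min` and the Euler discretization are illustrative choices of ours:

```python
import math
import random

def velocity(x, t, data, sigma_min=0.05):
    """Marginal flow-matching velocity field for an empirical data
    distribution, using the Gaussian conditional paths
    p_t(x | x1) = N(t * x1, (1 - (1 - sigma_min) * t)^2) of Lipman et al."""
    s = 1.0 - (1.0 - sigma_min) * t
    logw = [-0.5 * ((x - t * x1) / s) ** 2 for x1 in data]
    mx = max(logw)                          # log-sum-exp guard
    w = [math.exp(l - mx) for l in logw]
    # conditional velocity u_t(x | x1) = (x1 - (1 - sigma_min) * x) / s
    return sum(wi * (x1 - (1.0 - sigma_min) * x) / s
               for wi, x1 in zip(w, data)) / sum(w)

def generate(data, n, steps=200, rng=None):
    """Push N(0,1) noise through dx/dt = v(x, t) with explicit Euler steps."""
    rng = rng or random.Random(0)
    out = []
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        for k in range(steps):
            x += velocity(x, k / steps, data) / steps
        out.append(x)
    return out
```

With the empirical distribution as target, generated points flow onto (small Gaussian neighbourhoods of) the training points, so this minimal version essentially reproduces a kernel density estimate of the data, which is exactly the connection the abstract exploits.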
10:55 am - 11:20 am
Detecting the memorizing effect in generative AI 1Carl von Ossietzky Universität Oldenburg; 2Deutsche Rück; 3Universität Augsburg
Generative AI such as generative adversarial networks or (variational) autoencoders produce new data from a training set. Sometimes the memorizing effect occurs, i.e. the generative AI only memorizes the training set and does not produce new, unseen samples. We introduce a new memorizing ratio to detect the memorizing effect and prove its convergence. Applications to economics are given.
This talk is based on https://arxiv.org/pdf/2301.12719
11:20 am - 11:45 am
Fixed-points of the distributional Bellman operator Goethe University Frankfurt a.M., Germany
In distributional reinforcement learning, complete return distributions of a policy are taken into account rather than only expected returns. The return distribution for a fixed policy is given as the fixed point of an associated distributional Bellman operator (DBO). Existence and uniqueness of fixed points of DBOs are discussed, as well as their tail properties. Further, distributional dynamic programming algorithms are presented to approximate the unknown return distributions together with error bounds, both in Wasserstein and Kolmogorov–Smirnov distances. For return distributions having probability density functions, the algorithms yield approximations for these densities; error bounds are given in supremum norm. The concept of quantile-spline discretizations is introduced for these algorithms, which shows promising results in simulation experiments, also in the presence of heavy tails.
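A minimal sketch of distributional dynamic programming on a toy Markov reward process, using a plain midpoint-quantile projection in place of the quantile-spline discretization from the talk:

```python
def quantile_project(atoms, m):
    """Project an equally weighted atom set onto m midpoint quantiles."""
    s = sorted(atoms)
    n = len(s)
    return [s[int((i + 0.5) * n / m)] for i in range(m)]

def dbo_step(Z, outcomes, gamma, m):
    """One application of the sample-based distributional Bellman operator:
    Z(s) receives the atoms r + gamma * z over the equally likely
    (reward, next-state) outcomes of state s, then is projected back."""
    return {s: quantile_project(
                [r + gamma * z for (r, s2) in outcomes[s] for z in Z[s2]], m)
            for s in Z}

# toy Markov reward process: one state, reward 1 or 0 with prob 1/2, self-loop;
# for gamma = 1/2 the fixed-point return distribution is uniform on [0, 2]
outcomes = {"s": [(1.0, "s"), (0.0, "s")]}
m, gamma = 64, 0.5
Z = {"s": [0.0] * m}
for _ in range(40):
    Z = dbo_step(Z, outcomes, gamma, m)
```

Since the DBO is a contraction in the Wasserstein distance, the iteration converges geometrically; the projection adds an approximation error of order $1/m$ per step.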
11:45 am - 12:10 pm
Transport Dependency: Optimal Transport Based Dependency Measures University Göttingen, Germany
In this talk, we present a framework for measuring statistical dependency, the transport dependency $\tau$, which relies on the notion of optimal transport and is applicable in general Polish spaces. It can be estimated via the corresponding empirical measure and is adaptable to various scenarios by proper choices of the cost function. Notably, statistical independence is characterized by $\tau = 0$, while large values of $\tau$ indicate highly regular relations between the variables. Based on sharp upper bounds, we exploit three distinct dependency coefficients with values in $[0,1]$, each of which emphasizes different functional relations. Monte Carlo results suggest that $\tau$ is a robust quantity that efficiently discerns dependency structure from noise for data sets with complex internal metric geometry. The use of the transport dependency for inferential tasks is illustrated for independence testing on a data set of trees from cancer genetics.
|
10:30 am - 12:10 pm | S 2 (4): Spatial stochastics, disordered media, and complex networks Location: POT 251 Floor plan Session Chair: Chinmoy Bhattacharjee Session Chair: Benedikt Jahnel |
|
10:30 am - 10:55 am
Random connection hypergraphs 1University of Bergen; 2Aarhus University; 3University of Leiden
In this talk, a novel random hypergraph model is introduced. In accordance with standard theory, the hypergraphs are represented as bipartite graphs, with both vertex sets derived from marked Poisson point processes. After establishing the model, we investigate the limit theory of a variety of characteristics, including higher-order degree distributions, Betti numbers, and simplex counts. The theoretical results are complemented by a simulation study analyzing finite-size effects, which aids understanding of the model's behavior in practical scenarios. Finally, to demonstrate the real-world applicability of the results, we examine how this model can be employed to understand scientific collaboration networks extracted from arXiv.
10:55 am - 11:20 am
Graph and Hypergraph limits MPI-CBG, Germany
The theory of graph limits considers the convergence of sequences of graphs with a divergent number of vertices. From an applied perspective, it aims at the convenient representation of large networks. In this talk, I will give a brief introduction to graph limits and report on recent extensions to weighted graphs and more general combinatorial objects such as hypergraphs. In particular, I will develop the theory of probability graphons, focusing on the right-convergence viewpoint, and the equivalent notion of P-variables convergence. The relation between probability graphons and P-variables is analogous to the relation between probability measures and random variables. I will also explain how these notions can be generalised to hypergraph limits and how they relate to many other areas of mathematics, statistics and applications.
11:20 am - 11:45 am
Multivariate normal approximation for stabilizing functionals with binomial input in the convex distance Hamburg University of Technology, Germany
The aim of this work is to establish quantitative multivariate central limit theorems with respect to the convex distance for a large class of geometric functionals of marked binomial point processes. More precisely, the underlying functionals are assumed to be sums of exponentially stabilizing score functions which additionally satisfy certain moment conditions. The results are illustrated by specific examples from stochastic geometry.
The whole framework and results, which profoundly make use of the findings in Kasprzak and Peccati (Ann. Appl. Probab. 33 (2023), 3449-3492) and Lachieze-Rey, Schulte and Yukich (Ann. Appl. Probab. 29 (2019), 931-993), can be regarded as binomial counterparts of those in Schulte and Yukich (Ann. Appl. Probab. 33 (2023), 507-548), which were concerned with Poisson point processes.
11:45 am - 12:10 pm
Maximal degree in a window for Beta-Delaunay and Beta-Prime-Delaunay Triangulations University of Groningen, Netherlands
The $\beta$-Delaunay and $\beta^\prime$-Delaunay triangulations are models generalizing the classical Poisson-Delaunay triangulation in Euclidean space. Both were introduced recently by Gusakova, Kabluchko, and Thäle in a series of papers. We investigate the distribution of the maximal vertex degree in a growing window for these models. For the $\beta$-Delaunay triangulation we show that it behaves similarly to the classical case, that is, it concentrates on a finite number of values (just two values in the plane). For the $\beta^\prime$-Delaunay triangulation, on the other hand, we show that the situation is completely different.
|
10:30 am - 12:10 pm | S 3 (2): Stochastic Analysis and S(P)DEs Location: POT 151 Floor plan Session Chair: Vitalii Konarovskyi Session Chair: Aleksandra Zimmermann |
|
10:30 am - 10:55 am
Probabilistic approach to semi-linear elliptic equations with measure data Nicolaus Copernicus University, Poland
Let $E$ be a locally compact separable metric space, $D$ be an open subset of $E$ and $m$
be a Radon measure on $E$ with full support.
Let $(L,\mathfrak D(L))$ be a self-adjoint operator that generates a
Markov semigroup $(T_t)_{t>0}$ on $L^2(E;m)$
and regular Dirichlet form $(\mathcal E,\mathfrak D(\mathcal E))$ (i.e. $L$ is a Dirichlet operator).
The goal is to study, within this general framework,
the Dirichlet problem for semilinear equations
\[
-Lu=f(\cdot,u)+\mu\quad\text{in }D.
\]
Here
$f:E\times\mathbb R\rightarrow\mathbb R$ is a
given function and $\mu\ll\mbox{Cap}$, where $\mbox{Cap}:2^E\to [0,\infty]$
is a Choquet capacity naturally associated with $L$ (note that $m\ll\mbox{Cap}$).
It is by now recognized that a well-posed Dirichlet problem for the above equation
must consist of two conditions: an exterior condition on $D^c:=E\setminus D$
and a description of the asymptotic behavior of a solution at the boundary $\partial D$.
We give probabilistic and analytical definitions of a solution to the problem and
show their equivalence. As a result, we prove a Feynman-Kac formula for solutions
of the studied problem. Next, based on the theory of Backward Stochastic Differential Equations,
we prove existence results in the case where $f$ is monotone with respect to $u$.
We also provide some regularity results.
The talk is based on the paper [1].
Bibliography
[1] Klimsiak, T., Rozkosz A.:
Dirichlet problem for semilinear partial integro-differential equations: the method of orthogonal projection.
arXiv:2304.00393
10:55 am - 11:20 am
A nonlinear stochastic convection-diffusion equation with reflection 1TU Clausthal, Germany; 2Scuola Normale Superiore di Pisa, Italy; 3Université de Pau et des Pays de l'Adour, France
We show the existence of solutions to a stochastic parabolic obstacle problem with obstacle $\psi = 0$ under homogeneous Dirichlet boundary conditions. In the penalized equation, the penalization term converges to a random Radon measure $\eta$ only. Since the solution $u$ of the obstacle problem is not continuous in space-time in general, this makes it difficult to give a proper definition of the minimality condition on $\eta$. We show that $\eta$ does not charge sets of zero capacity, that the solution is nonnegative almost everywhere with respect to the Lebesgue measure, and that $\eta(u)=0$ for some Borel-measurable representative. Uniqueness may be obtained for quasi-continuous solutions.
11:20 am - 11:45 am
Functional and Cheeger-type inequalities for Brownian motion with sticky-reflecting boundary diffusion 1Leipzig University, Germany; 2Max Planck Institute for Mathematics in the Sciences, Germany; 3Center for Applied Mathematics, Tianjin University, China
We consider Brownian motion on manifolds with sticky reflection from the boundary and with or without diffusion along the boundary. For the invariant measure consisting of a convex combination of the volume measure in the interior and the Hausdorff measure on the boundary we present Poincaré and logarithmic Sobolev inequalities under general curvature assumptions on the manifold and its boundary. Additionally we also present a Cheeger-type inequality for the spectral gap.
This is based on joint work with Max von Renesse and Feng-Yu Wang.
11:45 am - 12:10 pm
The quenched Edwards–Wilkinson equation with Gaussian disorder 1Independent; 2EPFL, Bâtiment MA, Switzerland; 3WIAS Berlin, Germany
We consider the quenched Edwards--Wilkinson equation with a Gaussian disorder, which is white in the spatial component and colored in the height component. We comment on the existence of a solution for this singular SPDE and possibly discuss the phenomenon of pinning versus depinning.
|
10:30 am - 12:10 pm | S 4 (3): Limit theorems, large deviations and extremes Location: ZEU 160 Floor plan Session Chair: Jan Nagel Session Chair: Marco Oesting |
|
10:30 am - 10:55 am
Extreme values of permutation statistics and triangular arrays 1Ruhr-Universität Bochum, Germany; 2Otto-von-Guericke-Universität Magdeburg, Germany; 3Stockholm University, Sweden
We investigate limit theorems for extreme values of the classical inversion and descent statistics on symmetric groups and related permutation groups. The central limit theorem (CLT) is known to hold for these permutation statistics. We can achieve comparable results for their maxima in a suitable triangular array $(X_{n1}, \ldots, X_{nk_n})$. The triangular array is required as these discretely distributed statistics have a degenerate extreme value behavior otherwise. It is important to discuss the regime of the row lengths $k_n$ since an overly large triangular array is still degenerate. We demonstrate the use of large deviations theory and high-dimensional Gaussian approximation in order to transfer asymptotic normality to the extreme values, and we discuss general findings for extreme values in triangular arrays.
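For concreteness, a row maximum of CLT-normalized inversion counts can be simulated as follows; the permutation size $n$ and row length $k_n$ are illustrative choices, and the centring and scaling use the standard mean $n(n-1)/4$ and variance $n(n-1)(2n+5)/72$ of the inversion statistic:

```python
import math
import random

def inversions(perm):
    """Count inversions of a permutation given as a list."""
    return sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
               if perm[i] > perm[j])

def normalized_inversions(n, rng):
    """Inversion count of a uniform permutation, centred and scaled."""
    p = list(range(n))
    rng.shuffle(p)
    mean = n * (n - 1) / 4
    var = n * (n - 1) * (2 * n + 5) / 72
    return (inversions(p) - mean) / math.sqrt(var)

rng = random.Random(0)
n, k_n = 50, 200                     # one row of the triangular array
row_max = max(normalized_inversions(n, rng) for _ in range(k_n))
```

For approximately normal entries one expects `row_max` near $\sqrt{2 \log k_n} \approx 3.3$ here; letting $k_n$ grow too fast relative to $n$ is exactly the degenerate regime discussed in the abstract.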
10:55 am - 11:20 am
Spatio-temporal statistical modeling of the occurrence of extreme events University of Stuttgart
In this work, we aim to model spatio-temporal heavy precipitation events, which enhances the understanding of their causes and their prediction. Similar to Koh et al. (2023), we propose to use spatio-temporal point processes with covariates. However, we model the intensity measure of the spatio-temporal point process differently, with an approach similar to that of Baddeley et al. (2012), who estimate the intensity measure nonparametrically. This point process model can be extended by adding marks, e.g. the spatial extent (see Oesting & Huser, 2022), and clustering, to obtain a clustered marked point process model.
References:
A. Baddeley, Y.-M. Chang, Y. Song, and R. Turner. Nonparametric estimation of the dependence of a spatial point process on spatial covariates. Statistics and its interface, 5(2):221-236, 2012.
J. Koh, F. Pimont, J.-L. Dupuy, and T. Opitz. Spatiotemporal wildfire modeling through point processes with moderate and extreme marks. The Annals of Applied Statistics, 17(1):560-582, 2023.
M. Oesting and R. Huser. Patterns in spatio-temporal extremes. arXiv preprint arXiv:2212.11001, 2022.
11:20 am - 11:45 am
Convergence of Extremal Processes in Spaces of Growing Dimension Universität Bern, Switzerland
Consider a random walk $S_k^{(d)}$, $k\geq 0$, in $d$-dimensional
Euclidean space with square integrable centred increments such that
the expected square norm of the increment is one. The values of this
random walk for $k=0,\dots,n$ normalised by $\sqrt{n}$ are considered
as a finite metric space $\mathcal{Z}_n^{(d)}$ which is embedded in
$\mathbb{R}^d$ with the induced metric. Under a uniform-smallness-type condition
on the $d$ components of the increment, Kabluchko and
Marynych (2022) proved that, as $n$ and $d$ go to infinity in any regime, the metric space $\mathcal{Z}_n^{(d)}$ converges in probability in the Gromov--Hausdorff metric to the Wiener spiral. The latter space is the space of all indicators $\mathbf{1}_{[0,t]}$, $t\in[0,1]$,
embedded in $L^2([0,1])$, equivalently, the interval $[0,1]$ with the
metric $r(t,s)=\sqrt{|t-s|}$.
In their subsequent paper, Kabluchko and Marynych (2023) showed that in the heavy-tailed case with $\alpha\in(0,1)$, the limiting metric space is random and is derived from an infinite-dimensional version of a subordinator (called a crinkled subordinator), assuming a certain condition on the joint growth regime of $n$ and $d$.
We study an extremal version of this setting, where the random walk is replaced by a sequence of successive maxima and the underlying metric in $\mathbb{R}^d$ is taken to be $\ell_\infty$. We show that the limiting metric space is the extremal version of the crinkled subordinator, which is derived from a Poisson process $\{(x_k, y_k)\}$ on $[0,1] \times \mathbb{R}_{+}$ with intensity measure given by the product of the Lebesgue measure and the tail measure derived from the distribution of the components. The limit is isometric to $[0, 1]$ with the metric $r(s, t)$ given by the maximum of $y_k$ over all $k$ with $x_k \in [s, t]$.
Joint work with Ilya Molchanov (Bern).
11:45 am - 12:10 pm
Bayesian Inference for Functional Extreme Events Defined via Partially Unobserved Processes University of Stuttgart, Germany
In order to describe the extremal behaviour of a stochastic process $X$, approaches from univariate extreme value theory are typically generalized to the spatial domain.
Besides max-stable processes, which can be used in analogy to the block maxima approach, a generalized peaks-over-threshold approach can be used, allowing us
to consider single extreme events. These can be flexibly defined as exceedances of a risk functional $\ell$, such as a spatial average, applied to $X$.
Inference for the resulting limit process, the so-called $\ell$-Pareto process, requires the evaluation of $\ell(X)$ and thus the knowledge of the whole process $X$.
In practical applications we face the challenge that observations of $X$ are only available at single sites. To overcome this issue, we propose a two-step MCMC algorithm in a Bayesian framework. In a first step, we sample from $X$ conditionally on the observations in order to evaluate which observations lead to $\ell$-exceedances. In a second step, we use these exceedances to sample from the posterior distribution of the parameters of the limiting $\ell$-Pareto process. Alternating these steps results in a full Bayesian model for the extremes of $X$.
We show that, under appropriate assumptions, the probability of classifying an observation as $\ell$-exceedance in the first step converges to the desired probability. Furthermore, given the first step, the distribution of the Markov chain constructed in the second step converges to the posterior distribution of interest. Our procedure is compared to the Bayesian version of the standard procedure in a simulation study.
|
10:30 am - 12:10 pm | S 6 (4): Stochastic modelling in natural sciences Location: POT 112 Floor plan Session Chair: Alexandra Blessing Session Chair: Anton Klimovsky |
|
10:30 am - 10:55 am
Freezing limits of Calogero-Moser-Sutherland particle models Technische Universität Dortmund, Germany
One-dimensional Calogero-Moser-Sutherland particle models with $N$ particles can be regarded as diffusions on Weyl chambers or alcoves in $\mathbb R^N$
with second order differential operators as generators,
which are singular on the boundaries of the state spaces.
The most relevant examples are multivariate Bessel and Heckman-Opdam processes
which are related to special functions associated with root systems.
These models include Dyson's Brownian motions, multivariate Laguerre and Jacobi processes and, for fixed time,
$\beta$-Hermite, Laguerre, and
Jacobi ensembles.
In some cases, they are related to Brownian motions on the classical symmetric spaces.
We review some freezing limits for fixed $N$ when some
parameter, an inverse temperature, tends to $\infty$.
The limits are normal distributions and, in the process case,
Gaussian processes. The parameters of the limits
are described in terms of solutions of
ordinary differential equations which are frozen versions of the particle diffusions. We discuss connections of
these ODEs with the zeros of the classical orthogonal polynomials.
The talk is partially based on joint work with Sergio Andraus, Kilian Herrmann, and Jeannette Woerner.
10:55 am - 11:20 am
Simulation of Surface Defects using Voronoi Tessellations 1Fraunhofer ITWM, Germany; 2RPTU Kaiserslautern-Landau, Germany
In industry, automated visual surface inspection is a common method for object inspection. Machine learning approaches show promising results for automated defect detection using the acquired images. The performance of such defect detection algorithms increases with the amount and the diversity of training data. However, real data of objects with defects are rarely available in the quantity required for robust machine learning algorithms. Moreover, it is impossible to include the large variety and variability of possible defects since manufacturers try to prevent defect formation.
The lack of data can be overcome by using synthetic training data. That is, we simulate images similar to those obtained by the real surface inspection system by virtually recreating the visual surface inspection environment and the imaging process. This requires a digital twin of the inspected object having the same geometry, material, surface texture and defects. Thus, defect models are necessary to generate synthetic defects and imprint them into the object geometry.
Here, we consider cast metal objects and we focus on defect modeling, more precisely, on two basic defect structures: linear structures such as cracks and flat structures such as surface delaminations. The stochastic models for both defect classes are based on random Voronoi tessellations to reflect the polycrystalline micro-structure of the metal. Models for different defect classes use different features of the tessellations, namely their vertices, edges or cells. Moreover, tessellations with different cell intensities can be used to model the defect structure at different scales and defect variation is guaranteed by using various parameter configurations.
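Not the authors' model, but a toy illustration of reading defect structures off a tessellation: a discrete Voronoi tessellation on a pixel grid whose cell boundaries serve as crack-like linear structures (grid size, seed count, and the boundary rule are illustrative assumptions):

```python
import random

def voronoi_labels(width, height, seeds):
    """Assign each pixel to its nearest seed (discrete Voronoi tessellation);
    ties are broken by seed index."""
    def nearest(px, py):
        return min(range(len(seeds)),
                   key=lambda i: (px - seeds[i][0]) ** 2 + (py - seeds[i][1]) ** 2)
    return [[nearest(x, y) for x in range(width)] for y in range(height)]

def crack_pixels(labels):
    """Crack-like linear defect: pixels on the boundary between two Voronoi
    cells (right or bottom neighbour carries a different label)."""
    h, w = len(labels), len(labels[0])
    return {(x, y) for y in range(h) for x in range(w)
            if (x + 1 < w and labels[y][x + 1] != labels[y][x]) or
               (y + 1 < h and labels[y + 1][x] != labels[y][x])}

rng = random.Random(2)
seeds = [(rng.randrange(40), rng.randrange(40)) for _ in range(10)]
labels = voronoi_labels(40, 40, seeds)
crack = crack_pixels(labels)
```

Varying the seed intensity changes the scale of the edge structure, mirroring the abstract's use of tessellations with different cell intensities; flat, delamination-like defects would instead use whole cells rather than edges.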
11:20 am - 11:45 am
A min-max random game on a graph that is not a tree 1Universidad Nacional de Colombia; 2University of Göttingen; 3The Czech Academy of Sciences, Institute of Information Theory and Automation, Czech Republic
In a classical game two players, Alice and Bob, take turns to play $n$ moves each. Alice starts. For each move each player has two options, 1 and 2. The outcome is determined by the exact sequences of moves played by each player. Prior to the game, a winner is assigned to each of the $2^{2n}$ possible outcomes in an i.i.d. fashion, where $p$ is the probability that Bob is the winner for a given outcome. Then it is known that there exists a value $0<p_{\rm c}<1$ such that the probability that Bob has a winning strategy for large $n$ tends to one if $p>p_{\rm c}$ and to zero if $p<p_{\rm c}$. We study a modification of this game for which the outcome is determined by the exact sequence of moves played by Alice as before, but in the case of Bob all that matters is how often he has played move 1. We show that also in this case, there exists a sharp threshold $p'_{\rm c}$ that determines which player has with large probability a winning strategy in the limit as $n$ tends to infinity.
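The classical game described above (not the modified game studied in the talk) can be sketched by brute-force backward induction over the full binary game tree, drawing the i.i.d. Bernoulli($p$) leaf winners lazily; this is only feasible for small $n$.

```python
import random

def bob_wins(depth, n, p, rng):
    """Backward induction: does Bob have a winning strategy in this subtree?"""
    if depth == 2 * n:                     # leaf: winner assigned i.i.d.
        return rng.random() < p
    # evaluate both options; each leaf is visited exactly once per tree
    children = [bob_wins(depth + 1, n, p, rng) for _ in range(2)]
    if depth % 2 == 0:                     # Alice moves at even depths:
        return all(children)               # Bob wins only if both options lose for Alice
    return any(children)                   # Bob picks a winning move if one exists

def prob_bob_wins(n, p, trials=300, seed=0):
    """Monte Carlo estimate over independently labelled game trees."""
    rng = random.Random(seed)
    return sum(bob_wins(0, n, p, rng) for _ in range(trials)) / trials
```

Plotting `prob_bob_wins(n, p)` against `p` for growing `n` makes the sharp threshold $p_{\rm c}$ visible empirically.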
11:45 am - 12:10 pm
Random eigenvalues of dual infinite $(p,q)$--nanotubes 1Ulm University - Institute of Stochastics, Germany; 2Steklov Mathematical Institute RAN, Russia; 3Université de Lausanne, Switzerland
The hexagonal lattice and its dual, the triangular lattice, are fundamental models for understanding atomic and ring connectivity in carbon structures such as \textit{graphene} and \textit{$(p,q)$--nanotubes}.
The chemical and physical properties of these carbon allotropes are closely tied to the average number of closed paths of different lengths $k \in \mathbb{N}_0$ on their respective graph representations, which can be described in terms of their spectra.
Since a carbon $(p,q)$--nanotube can be viewed as a graphene sheet rolled up in a manner determined by the \textit{chiral vector} $(p,q)$, our findings are based on the study of \textit{random eigenvalues} for both the hexagonal and triangular lattices presented in \cite{bille23}, as well as previous results on nanotubes \cite{Cotfas00}.
In this talk, we discuss results from \cite{bille24} on the spectral properties of dual infinite $(p,q)$--nanotubes, focusing on the counts of closed paths of length $k \in \mathbb{N}_0$ on these lattices.
In particular, for any \textit{chiral vector} $(p,q)$, we show that the sequence of closed path counts forms a moment sequence derived from a functional of two independent uniform distributions.
Explicit formulas for key distribution characteristics, including the probability density function and moment generating function, are presented for selected choices of the chiral vector.
Moreover, we demonstrate that as the \textit{circumference} of a $(p,q)$--nanotube becomes infinitely large, i.e., as $p+q\rightarrow \infty$, the $(p,q)$--nanotube converges to the hexagonal lattice in terms of the number of closed paths of any given length $k$, indicating weak convergence in the underlying distributions.
This approach offers new insights into the spectral behavior of nanotubes and graphene, with practical implications for modeling and analysis in materials science.
\begin{thebibliography}{9}
\bibitem{Cotfas00}
N. Cotfas.
\textit{Random walks on carbon nanotubes and quasicrystals.}
Journal of Physics A: Mathematical and Theoretical, 33:2917--2927, 2000.
\bibitem{bille23}
A. Bille, V. Buchstaber, S. Coste, S. Kuriki and E. Spodarev.
\textit{Random eigenvalues of graphenes and the triangulation of plane.}
ArXiv preprint No 2306.01462, submitted, 2023.
\url{https://arxiv.org/abs/2306.01462}
\bibitem{bille24}
A. Bille, V. Buchstaber, P. Ievlev, C. Redenbach, S. Novikov and E. Spodarev.
\textit{Random eigenvalues of nanotubes.}
ArXiv preprint No 2408.14313, submitted, 2024.
\url{https://arxiv.org/abs/2408.14313}.
|
10:30 am - 12:10 pm | S 7 (6): Stochastic processes: theory, statistics and numerics Location: POT 51 Floor plan Session Chair: Andreas Neuenkirch Session Chair: Jakob Söhl |
|
10:30 am - 10:55 am
Learning Stochastic Reduced Models from Data: A Nonintrusive Approach 1University of Potsdam, Germany; 2Martin-Luther-University of Halle-Wittenberg
A nonintrusive model order reduction method for bilinear stochastic differential equations with Gaussian noise is proposed. A reduced order model (ROM) is designed in order to approximate the statistical properties of high-dimensional systems. The drift and diffusion coefficients of the ROM are inferred from state observations by solving appropriate least-squares problems. The closeness of the ROM obtained by the presented approach to the intrusive ROM obtained by the proper orthogonal decomposition (POD) method is investigated. Two generalisations of the snapshot-based dominant subspace construction to the stochastic case are presented. Numerical experiments are provided to compare the developed approach to POD.
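The least-squares inference step can be illustrated on a deliberately simplified example (linear drift and additive noise rather than the bilinear systems of the talk; all parameters are made up): simulate snapshots with Euler-Maruyama, then recover the drift matrix purely from state observations.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy setting: dX = A X dt + sigma dW with an unknown stable drift matrix A
d, dt, n_steps, sigma = 4, 1e-2, 100_000, 0.5
A_true = -np.eye(d) + 0.1 * rng.standard_normal((d, d))

X = np.empty((n_steps + 1, d))
X[0] = rng.standard_normal(d)
for k in range(n_steps):                               # Euler-Maruyama snapshots
    X[k + 1] = X[k] + dt * A_true @ X[k] + sigma * np.sqrt(dt) * rng.standard_normal(d)

# nonintrusive drift inference by least squares:
# (X_{k+1} - X_k)/dt ≈ A X_k, so solve min_A || X[:-1] A^T - dX ||_F
dX = (X[1:] - X[:-1]) / dt
A_hat = np.linalg.lstsq(X[:-1], dX, rcond=None)[0].T
```

The same regression viewpoint extends to diffusion coefficients via quadratic variation, which is the flavour of estimator the abstract refers to.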
10:55 am - 11:20 am
On the mathematical theory of continuous time Ensemble Kalman Filters Technische Universität Berlin
Ensemble Kalman Filters (EnKFs) are a class of Monte Carlo algorithms developed in the 1990s for high-dimensional stochastic filtering problems. Despite their widespread popularity, especially for numerical weather prediction and data assimilation tasks in the geosciences, a mathematical theory investigating these algorithms has emerged only in the last decade. In particular, estimating the asymptotic bias of these algorithms in the case of nonlinear dynamics, and thereby mathematically justifying their usage in such settings, is a longstanding open problem.
This talk is therefore concerned with the elementary mathematical analysis of continuous time versions of EnKFs, which are a particular class of mean field interacting Stochastic Differential Equations. Besides elementary well posedness results, we show a quantitative convergence to the mean field limit with (almost) optimal rates and compare/relate this mean field limit to the optimal filter given by the Kushner-Stratonovich equation.
11:20 am - 11:45 am
Learning to steer with Brownian noise 1Christian-Albrechts-Universität, Germany; 2Friedrich-Schiller-Universität, Germany
In this talk we consider an ergodic version of the bounded velocity follower problem, assuming that the decision maker lacks knowledge of the underlying system parameters and must learn them while simultaneously controlling. We propose algorithms based on moving empirical averages and develop a framework for integrating statistical methods with stochastic control theory. Our primary result is a logarithmic expected regret rate. To achieve this, we conduct a rigorous analysis of the ergodic convergence rates of the underlying processes and the risks of the considered estimators.
|
10:30 am - 12:10 pm | S 8 (4): Finance, insurance and risk: Modelling Location: POT 361 Floor plan Session Chair: Peter Hieber Session Chair: Frank Seifried |
|
10:30 am - 10:55 am
The fundamental theorem of asset pricing with and without transaction costs Goethe University Frankfurt, Germany
We prove a version of the fundamental theorem of asset pricing~(FTAP) in continuous time that is based on the strict no-arbitrage condition and that is applicable to both frictionless markets and markets with proportional transaction costs. We consider a market with a single risky asset whose ask price process is higher than or equal to its bid price process. Neither the concatenation property of the set of wealth processes, which is used in the proof of the frictionless FTAP, nor the boundedness of the trading volume of admissible strategies, which is usually invoked in models with a nonvanishing bid-ask spread, needs to be satisfied in our model.
\url{https://arxiv.org/abs/2307.00571}
10:55 am - 11:20 am
On the absence of arbitrage in diffusion markets with reflection and skewness 1University of Freiburg, Germany; 2Université Grenoble-Alpes, France; 3University of Duisburg-Essen, Germany
We are interested in the absence of arbitrage for single asset financial market models whose asset price process is modeled by a one-dimensional general regular diffusion (captured via scale function and speed measure). In recent work, Criens and Urusov proved precise characterizations of NA, NUPBR and NFLVR in terms of scale and speed. In particular, it was shown that these notions are violated in the presence of skewness effects or reflecting boundaries (that reflection entails such arbitrage opportunities is rather intuitive). It remained open whether weaker notions of no arbitrage can hold in the presence of skewness or reflection. The literature suggests that this is not the case. Indeed, Rossello (Insur. Math. Econ., 2012) had observed that the weaker "no increasing profit" (NIP) condition fails for an exponential skew Brownian motion model, and Buckner, Dowd and Hulley (Finance & Stochastics, 2024) showed that increasing profits exist in a reflected geometric Brownian motion model. In this talk, we explain the surprising observation that there are diffusion markets that satisfy NIP in the presence of skewness effects and reflecting boundaries.
11:20 am - 11:45 am
Equilibrium Asset Pricing with Epstein-Zin Stochastic Differential Utility University of Warwick, Statistics Department, Coventry, CV4 7AL, UK
We revisit the classical problem of equilibrium asset pricing in a continuous-time complete-market setting, but in the case where investors' preferences are described by Epstein-Zin Stochastic Differential Utility. The market is comprised of a riskless bond and a risky asset, where the latter pays continuously a stochastic dividend stream. The equilibrium is characterised by a system of strongly coupled Forward-Backward Stochastic Differential Equations (FBSDEs). This is joint work in progress with Dr. Martin Herdegen.
11:45 am - 12:10 pm
Mean-variance equilibria in continuous time 1University of Warwick, Department of Statistics, Coventry, CV4 7AL, UK; 2London School of Economics and Political Science, Department of Mathematics, Columbia House, Houghton Street, London WC2A 2AE, UK; 3ETH Zürich, Department of Mathematics, Rämistrasse, 101, 8092 Zürich, Switzerland
We revisit the classical topic of mean-variance equilibria in the setting of continuous time, where asset prices are driven by continuous semimartingales. We show that under mild assumptions, a mean-variance equilibrium corresponds to a quadratic equilibrium for different preference parameters. We then use this connection to study a fixed-point problem that establishes existence of mean-variance equilibria. Our results rely on fine properties of mean-variance hedging as well as a novel stability result for quadratic BSDEs. The talk is based on joint work with Christoph Czichowsky, Martin Herdegen and David Martins.
|
10:30 am - 12:10 pm | S12 (1): Computational, functional and high-dimensional statistics Location: ZEU 260 Floor plan Session Chair: Martin Wahl |
|
10:30 am - 10:55 am
Sequential Monte Carlo depth computation with statistical guarantees 1Otto-von-Guericke University Magdeburg, Germany; 2Universidad de Cantabria, Spain
Statistical depth functions provide center-outward orderings in spaces of dimension larger than one, where a natural ordering does not exist. The computation of such depth functions can be prohibitively expensive, even for relatively low dimensions.
We present a novel sequential Monte Carlo methodology for the computation of depth functions and related quantities (seMCD), which outputs an interval, a so-called bucket, to which the quantity of interest belongs with a high probability prespecified by the user.
For specific classes of depth functions, we adapt algorithms from sequential testing, providing finite-sample guarantees. For depth functions dependent on unknown distributions, we offer asymptotic guarantees using nonparametric statistical methods.
In contrast to plain-vanilla Monte Carlo methodology, the number of samples required by the algorithm is random, but typically much smaller than standard choices suggested in the literature.
The seMCD method can be applied to various depth functions, including the simplicial depth and the integrated rank-weighted depth, and covers multivariate as well as functional spaces.
We demonstrate the efficiency and reliability of our approach through empirical studies, highlighting its applicability in outlier detection, classification, and depth region computation. In conclusion, the seMCD algorithm can achieve accurate depth approximations with fewer Monte Carlo samples while maintaining rigorous statistical guarantees.
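The bucket idea can be conveyed with a toy version for the simplicial depth, using a crude Hoeffding-based stopping rule in place of the sharper sequential tests of the actual seMCD algorithm (buckets, batch size and the Hoeffding rule below are illustrative simplifications, not the authors' method).

```python
import numpy as np

def in_triangle(z, a, b, c):
    # orientation test: z lies in triangle abc iff all cross products agree in sign
    cross = lambda o, p, q: (p[0]-o[0])*(q[1]-o[1]) - (p[1]-o[1])*(q[0]-o[0])
    d1, d2, d3 = cross(a, b, z), cross(b, c, z), cross(c, a, z)
    return not (min(d1, d2, d3) < 0 and max(d1, d2, d3) > 0)

def semcd_bucket(z, sampler, buckets, alpha=0.05, batch=200, max_n=50_000, seed=0):
    """Toy sketch: sample until a Hoeffding interval for the simplicial depth
    of z fits inside one of the prespecified buckets, then return that bucket."""
    rng = np.random.default_rng(seed)
    hits = n = 0
    while n < max_n:
        for _ in range(batch):
            hits += in_triangle(z, sampler(rng), sampler(rng), sampler(rng))
        n += batch
        p_hat = hits / n
        eps = np.sqrt(np.log(2 / alpha) / (2 * n))   # Hoeffding half-width
        for lo, hi in buckets:
            if lo <= p_hat - eps and p_hat + eps <= hi:
                return (lo, hi), p_hat, n
    return None, hits / n, n

# the simplicial depth of the centre of a standard normal in R^2 is 1/4
bucket, p_hat, n = semcd_bucket(
    np.zeros(2), lambda rng: rng.standard_normal(2),
    buckets=[(0.0, 0.1), (0.1, 0.4), (0.4, 1.0)])
```

Note that repeatedly checking the Hoeffding interval inflates the error probability; the sequential tests in the actual method are designed to control exactly this.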
10:55 am - 11:20 am
Lower Complexity Adaptation for Empirical Entropic Optimal Transport University of Göttingen, Germany
Entropic optimal transport (EOT) presents an effective and computationally viable alternative to unregularized optimal transport (OT), offering diverse applications for large-scale data analysis. We derive novel statistical bounds for empirical plug-in estimators of the EOT cost and show that their statistical performance in the entropy regularization parameter $\epsilon$ and the sample size $n$ only depends on the simpler of the two probability measures. For instance, under sufficiently smooth costs this yields the parametric rate $n^{-1/2}$ with factor $\epsilon^{-d/2}$, where $d$ is the minimum dimension of the two population measures. This confirms that empirical EOT also adheres to the lower complexity adaptation principle, a hallmark feature only recently identified for unregularized OT. In particular, this suggests that the estimation of the EOT cost is only affected by the curse of dimensionality when both measures have a high intrinsic dimension. Our technique employs empirical process theory and relies on a dual formulation of EOT over a single function class. Central to our analysis is the observation that the entropic cost-transformation of a function class does not increase its uniform metric entropy by much.
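The plug-in estimator studied here can be sketched with the standard Sinkhorn iteration (a generic textbook implementation, not the estimator analysis of the talk; the entropy term is omitted from the returned value, and `eps`, the sample sizes and the quadratic cost are illustrative choices).

```python
import numpy as np

def empirical_eot_cost(x, y, eps=0.1, iters=1000):
    """Plug-in estimate of the entropic OT cost between the empirical
    measures of samples x, y (quadratic cost, Sinkhorn fixed point).
    Only the transport-cost part <P, C> is returned; conventions for
    including the entropy term differ in the literature."""
    n, m = len(x), len(y)
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)   # pairwise squared costs
    K = np.exp(-C / eps)
    a, b = np.full(n, 1 / n), np.full(m, 1 / m)
    u, v = np.ones(n), np.ones(m)
    for _ in range(iters):                               # Sinkhorn scaling
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]                      # entropic optimal plan
    return float((P * C).sum())

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, (100, 1))
y = rng.uniform(0, 1, (100, 1))
cost = empirical_eot_cost(x, y)
```

By optimality of the entropic plan, the estimated cost never exceeds the cost of the independent coupling of the two empirical measures.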
11:20 am - 11:45 am
Simultaneous Estimation of Model Evidence and Posterior Predictive Distributions with Non-equilibrium Thermodynamic Integration Universität Augsburg, Germany
In frequentist approaches to statistical learning, training involves finding optimal parameter values through techniques such as maximum likelihood estimation. In an end-to-end Bayesian approach, however, the training step is replaced by a sampling problem. One advantage of such a Bayesian framework is that parameter uncertainty is fully incorporated into the posterior predictive distribution (PPD). Another advantage is that marginalising parameters entirely yields the marginal likelihood, or model evidence, which is arguably the best measure of model likelihood, offering strong protection against overfitting. Unfortunately, computational feasibility rapidly deteriorates as model complexity increases, limiting the expressivity of applicable Bayesian models.
As a step forward, we present a novel algorithm that simultaneously estimates both posterior averages (e.g. the PPD) and the model evidence. Furthermore, we propose leveraging this simultaneous estimation in the form of evidence-weighted model averages. This approach allows candidate models to remain relatively simple, with the complexity managed through the PPDs and the model ensemble.
The presented algorithm is a non-equilibrium version of the traditional thermodynamic integration method for evidence estimation, adapting free-energy estimators from non-equilibrium thermodynamics of microscopic systems for use in the Bayesian context. The resulting non-equilibrium integration (NEQI) is formulated as the exponential average of an observable from a customised stochastic process. The non-equilibrium flavour of thermodynamic integration offers several advantages: it allows for combining arbitrarily many short trajectories to thoroughly explore the parameter space, and it eliminates the need for burn-in, thinning, or stationarity. We present the details of the estimator and its implementation in TensorFlow Probability, demonstrating through relevant examples how NEQI competes with state-of-the-art methods such as annealed importance sampling for evidence estimation and Hamiltonian Monte Carlo for posterior inference.
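For orientation, the classical equilibrium thermodynamic-integration identity $\log Z = \int_0^1 \mathbb{E}_\beta[\log L]\,\mathrm{d}\beta$, which NEQI generalizes, can be checked on a conjugate Gaussian toy model where the tempered posteriors and the exact evidence are available in closed form (a sketch under these simplifying assumptions, not the NEQI algorithm itself).

```python
import numpy as np

rng = np.random.default_rng(0)

y = 1.0                                    # one observation, likelihood N(y | theta, 1)

# tempered posterior  ∝ N(theta | 0, 1) * N(y | theta, 1)^beta  is Gaussian:
def tempered_moments(beta):
    var = 1.0 / (1.0 + beta)
    return beta * y * var, var             # mean, variance

def mean_log_lik(beta, n_samples=20_000):
    m, var = tempered_moments(beta)
    theta = rng.normal(m, np.sqrt(var), n_samples)
    return np.mean(-0.5 * np.log(2 * np.pi) - 0.5 * (y - theta) ** 2)

# log Z = integral over beta in [0, 1] of E_beta[log L], via the trapezoidal rule
betas = np.linspace(0.0, 1.0, 21)
vals = np.array([mean_log_lik(b) for b in betas])
log_Z_ti = float(np.sum((vals[1:] + vals[:-1]) / 2 * np.diff(betas)))

# exact evidence: marginally, y ~ N(0, 2)
log_Z_exact = -0.5 * np.log(2 * np.pi * 2.0) - y**2 / 4.0
```

NEQI replaces the equilibrium sampling at each fixed `beta` by non-equilibrium trajectories combined through an exponential average, removing the need for burn-in at every temperature.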
|
10:30 am - 12:10 pm | S13 (4): Nonparametric and asymptotic statistics Location: ZEU 250 Floor plan Session Chair: Alexander Kreiss Session Chair: Leonie Selk |
|
10:30 am - 10:55 am
Axiomatic characterisation of generalized $\psi$-estimators 1University of Szeged, Szeged, Hungary; 2University of Debrecen, Debrecen, Hungary
We introduce the notion of generalized $\psi$-estimators as unique points of sign change of appropriate functions. This notion is a generalization of usual $\psi$-estimators (also called $Z$-estimators). We give necessary as well as sufficient conditions for the (unique) existence of generalized $\psi$-estimators. Our results are widely applicable in statistical estimation theory, for example, in the case of empirical quantiles, empirical expectiles, some (usual) $\psi$-estimators in robust statistics, and some maximum likelihood estimators as well.
Furthermore, we give axiomatic characterisations of generalized $\psi$-estimators and (usual) $\psi$-estimators, respectively. The key properties of estimators that come into play in the characterisation theorems are the symmetry, the (strong) internality and the asymptotic idempotency. In the proofs, a separation theorem for Abelian subsemigroups plays a crucial role.
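For intuition, the "unique point of sign change" definition can be sketched numerically: with $\psi(x,t)=x-t$ one recovers the mean, and with $\psi(x,t)=\mathrm{sign}(x-t)$ the median (a toy bisection under the assumption that $t \mapsto \sum_i \psi(x_i,t)$ is decreasing in $t$; not the authors' general framework).

```python
def psi_estimator(data, psi, lo=-1e6, hi=1e6, iters=100):
    """Locate the point of sign change of t -> sum_i psi(x_i, t) by bisection,
    assuming this map is decreasing in t (as for the usual monotone psi)."""
    h = lambda t: sum(psi(x, t) for x in data)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if h(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

data = [1, 2, 3, 4, 100]
mean_est = psi_estimator(data, lambda x, t: x - t)          # psi(x,t) = x - t
sgn = lambda s: (s > 0) - (s < 0)
median_est = psi_estimator(data, lambda x, t: sgn(x - t))   # sign psi -> median
```

The robustness of the median against the outlier 100 is visible here, illustrating why $\psi$-estimators are a staple of robust statistics.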
The talk is based on our papers [1], [2] and [3].
Mátyás Barczy has been supported by the project TKP2021-NVA-09. Project no. TKP2021-NVA-09 has been implemented with the support provided by the Ministry of Innovation and Technology of Hungary from the National Research, Development and Innovation Fund, financed under the TKP2021-NVA funding scheme.
References
[1] Barczy, M., Páles Zs.: Existence and uniqueness of weighted generalized $\psi$-estimators, ArXiv 2211.06026.
[2] Barczy, M., Páles Zs.: Basic properties of generalized $\psi$-estimators. To appear in Publicationes Mathematicae Debrecen. Available also at ArXiv 2401.16127.
[3] Barczy, M., Páles Zs.: Axiomatic characterisation of generalized $\psi$-estimators, ArXiv 2409.16240.
10:55 am - 11:20 am
An arginf continuous mapping theorem with application in regression analysis Technische Universität Dresden, Germany
Let $\theta_n = (\tau_n, \alpha_n) \in \mathbb{R}^{2}$, be an estimator consisting of two parts: $\tau_{n}$ is an infimising point of a process $X_n$ in the Skorokhod space and $\alpha_{n}$ is a real-valued random variable. For this estimator we give conditions under which weak convergence of $(X_n)_{n \in \mathbb{N}}$ and $(\alpha_n)_{n \in \mathbb{N}}$ leads to a convergence result for $(\theta_n)_{n \in \mathbb{N}}$, which may coincide with distributional convergence.
Usually, arginf continuous mapping theorems are used for distributional convergence of $(\tau_n)_{n \in \mathbb{N}}$. For this purpose, it is assumed that the processes $X_n$ converge weakly to a process $X$, that $(\tau_n)_{n \in \mathbb{N}}$ is stochastically bounded, and that the set of infimising points of $X$ is almost surely a singleton. However, we consider the case in which this uniqueness is not satisfied. In fact, in this case we obtain weak convergence of $(\tau_n)_{n \in \mathbb{N}}$ in the topological space $(\mathbb{R}, \mathcal{O})$, where $\mathcal{O}$ is the right- or left-order topology. Further, we have to deal with the fact that Slutsky's theorem is not applicable to connect the convergence of $(\tau_n)_{n \in \mathbb{N}}$ and $(\alpha_n)_{n \in \mathbb{N}}$.
The result is generalised for $\theta_n \in \mathbb{R}^{d+l}$ with $d \in \mathbb{N}$ and $l \in \mathbb{N}$. Then, we use this in regression when estimating the best least-squares approximation of an unknown regression function and constructing confidence regions.
11:20 am - 11:45 am
Sharp oracle inequalities and universality of the AIC and FPE University of Vienna, Austria
In two landmark papers, Akaike introduced the AIC and FPE, demonstrating their significant usefulness for prediction. In subsequent seminal works, Shibata developed a notion of asymptotic efficiency and showed that both AIC and FPE are optimal, setting the stage for decades-long developments and research in this area and beyond. Conceptually, the theory of efficiency is universal in the sense that it (formally) only relies on second-order properties of the underlying process $(X_t)_{t \in \mathbb{Z}}$, but, so far, almost all (efficiency) results require the much stronger assumption of a linear process with independent innovations. In this work, we establish sharp oracle inequalities subject only to a very general notion of weak dependence, establishing a universal property of the AIC and FPE. A direct corollary of our inequalities is asymptotic efficiency of these criteria. Our framework contains many prominent dynamical systems such as random walks on the regular group, functionals of iterated random systems, functionals of (augmented) Garch models of any order, functionals of (Banach space valued) linear processes, possibly infinite memory Markov chains, dynamical systems arising from SDEs, and many more.
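As a concrete reminder of the criterion being analysed, a least-squares AR fit with AIC order selection looks as follows (a toy AR(2) simulation with made-up coefficients, unrelated to the talk's weak-dependence framework).

```python
import numpy as np

rng = np.random.default_rng(3)

# simulate an AR(2) ground truth
n, phi1, phi2 = 3000, 0.5, -0.3
x = np.zeros(n)
for t in range(2, n):
    x[t] = phi1 * x[t-1] + phi2 * x[t-2] + rng.standard_normal()

pmax = 8
Y = x[pmax:]                                   # common sample for all candidate orders

def aic(p):
    # least-squares AR(p) fit on lagged design matrix
    Z = np.column_stack([x[pmax - j : n - j] for j in range(1, p + 1)])
    beta = np.linalg.lstsq(Z, Y, rcond=None)[0]
    sigma2 = np.mean((Y - Z @ beta) ** 2)      # residual variance
    return len(Y) * np.log(sigma2) + 2 * p     # Akaike's information criterion

best = min(range(1, pmax + 1), key=aic)
```

Shibata-style efficiency statements concern the prediction error of the order chosen this way, rather than consistency of `best` for the true order.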
11:45 am - 12:10 pm
What are the Clopper-Pearson bounds, simply if a bit roughly? Universität Trier, Germany
We provide close bounds for the Clopper-Pearson binomial confidence bounds, similar to, but necessarily a bit more complicated than, the well-known Lagrange (1776) expressions $\widehat{p}\pm\Phi^{-1}(\beta)\sqrt{\widehat{p}\widehat{q}/n}$. The goal here is to enable rough but valid mental or back-of-the-envelope calculations for a most elementary and important statistical procedure. Our result improves a bit on the one of Short (2023), and for example roughly reproduces the well-known upper bound $\frac{3}{n}$ in the case of $\beta=0.95$ and $\widehat{p}=0$. But further simplifications or sharpenings are desirable.
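For reference, the exact Clopper-Pearson bounds that the talk approximates can be computed by inverting the binomial CDF (a generic stdlib implementation via bisection; with $\widehat{p}=0$ and one-sided level $0.95$, i.e. `conf=0.90` two-sided below, the upper bound $1-0.05^{1/n}\approx 3/n$ from the abstract is reproduced).

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k, n, conf=0.95):
    """Two-sided Clopper-Pearson interval via bisection on the binomial CDF."""
    alpha = 1 - conf

    def solve(f):  # root of an increasing function f on (0, 1)
        lo, hi = 0.0, 1.0
        for _ in range(80):
            mid = (lo + hi) / 2
            if f(mid) > 0:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2

    # lower bound: P(X >= k; p) = alpha/2  (left side increasing in p)
    lower = 0.0 if k == 0 else solve(lambda p: (1 - binom_cdf(k - 1, n, p)) - alpha / 2)
    # upper bound: P(X <= k; p) = alpha/2  (CDF decreasing in p, so f increasing)
    upper = 1.0 if k == n else solve(lambda p: alpha / 2 - binom_cdf(k, n, p))
    return lower, upper

lo0, hi0 = clopper_pearson(0, 100, conf=0.90)   # one-sided 95% upper bound case
lo50, hi50 = clopper_pearson(50, 100)
```

For $k=0$ the upper bound is exactly $1-( \alpha/2)^{1/n}$, which is the quantity the $3/n$ rule of thumb approximates.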
|
11:20 am - 12:10 pm | S 5 Keynote: Stochastic modelling in life sciences Location: POT 81 Floor plan Session Chair: Matthias Birkner |
|
11:20 am - 12:10 pm
A spatial measure-valued model for chemical reaction networks in heterogeneous systems
In this talk, we shall introduce a measure-valued Markov process modelling a finite system of biochemical reactions taking place in a continuous compact space. In its general form, molecules need to be close to each other to react; some reactions or chemical species may be localised in space; some species are abundant (with a number O(N) of molecules) while others may be rare (with O(1) molecules). We shall show that, as we let N tend to infinity, a suitable normalisation of the counting measure describing the state of the system converges to a measure-valued piecewise deterministic Markov process. To illustrate the use of such a framework, we shall briefly discuss work in progress on the modelling of the dynamics of the population of actin polymers in the cellular cortex during cell division. This is joint work with Lea Popovic (Concordia University) and François Robin (Inserm & Sorbonne U.) |
11:20 am - 12:10 pm | S10 (1): Stochastic optimization and operation research Location: POT 13 Floor plan Session Chair: Nikolaus Schweizer Session Chair: Ralf Werner |
|
11:20 am - 11:45 am
Existence of equilibria in Dynkin games of war-of-attrition type Kiel University, Germany
One of the fundamental results in optimal stopping theory states that for Markovian problems, optimal stopping times exist in the class of (Markovian) pure first-entry times. For Markovian games of optimal stopping, on the other hand, equilibria in the class of (Markovian) pure first-entry times exist only under restrictive assumptions, while the most general existence results provide equilibria only in the class of randomized stopping times. Due to the vastness of the class of randomized stopping times, such equilibria can hardly be pinpointed, and their inherent path-dependency compromises subgame perfection. Therefore, it is natural to restrict the scope to the intermediate class of Markovian randomized stopping times. We outline a general scheme for establishing existence of equilibria in that class, based on the example of a war-of-attrition type Dynkin game.
11:45 am - 12:10 pm
Mokobodzki's intervals: an approach to Dynkin games when value process is not a semimartingale Nicolaus Copernicus University in Toruń, Poland
Consider a Brownian motion $(B_t)_{t \geq 0}$ on a given complete probability space
$(\Omega, \mathcal{F}, \mathbb P)$ with the filtration $\mathbb{F} := (\mathcal{F}_t)_{t \geq 0}$
being the standard augmentation of $(\mathcal F^B_t)_{t\ge 0}$. We fix $T > 0$ and let $\mathcal{T}$ denote
the set of all $[0,T]$ valued $\mathbb{F}$-stopping times. Suppose that $L$ and $U$ are $\text{càdlàg}$, $\mathbb{F}$-adapted processes of class (D), satisfying
\[
L_t \leq U_t, \quad t \in [0, T].
\]
We study Dynkin games governed by a nonlinear $\mathbb{E}^f$-expectation with payoff processes $L$ and $U$ which do not necessarily satisfy Mokobodzki's condition (the existence of a $\text{càdlàg}$ semimartingale between the barriers). To address this, we introduce the notion of Mokobodzki's stochastic intervals $\mathcal{M}(\tau)$, i.e.
maximal stochastic intervals on which Mokobodzki's condition is satisfied for the pair $(L,U)$ when starting from $\tau\in\mathcal T$.
Based on this concept, we prove that the Dynkin game on stopping times in $\mathcal T$ under the nonlinear expectation $\mathbb{E}^f$ has a value, i.e.
\begin{equation}\label{1}
\mathop{\mathrm{ess\,inf}}_{\sigma \ge\theta} \mathop{\mathrm{ess\,sup}}_{\tau \ge \theta} \mathbb{E}^f_{\theta, \tau \wedge \sigma}(L_{\tau} \mathbf{1}_{\{\tau \leq \sigma\}} + U_{\sigma} \mathbf{1}_{\{\sigma < \tau\}})
= \mathop{\mathrm{ess\,sup}}_{\tau \ge \theta} \mathop{\mathrm{ess\,inf}}_{\sigma \ge\theta} \mathbb{E}^f_{\theta, \tau \wedge \sigma}(L_{\tau} \mathbf{1}_{\{\tau \leq \sigma\}} + U_{\sigma} \mathbf{1}_{\{\sigma < \tau\}})
\end{equation}
for any stopping time $\theta \in \mathcal{T}$. Moreover, we show that the family $\{V^f(\theta), \, \theta \in \mathcal{T}\}$
of said values is aggregable to a process $Y$ that is a semimartingale on each Mokobodzki's interval $\mathcal{M}(\theta)$, $\theta \in \mathcal{T}$, and that there exist saddle points for this game in case $L_{t-}\le L_t$, $U_{t-}\ge U_t$,
$t\in [0,T]$. A complete novelty, even in the case of the standard expectation, is that we provide a minimal saddle point.
The method we propose is based on the theory of Reflected BSDEs, which we suitably extend
to the case when the barriers $L,U$ are not assumed to satisfy Mokobodzki's condition. As a by-product, we show
that $Y$ is a solution to some Reflected BSDEs and, moreover, $Y^n_t\to Y_t$, where
\[
Y^n_t=L_T+\int_t^Tf(s,Y^n_s,Z^n_s)\,ds+n\int_t^T(Y^n_s-L_s)^-\,ds-n\int_t^T(Y^n_s-U_s)^+\,ds-\int_t^TZ^n_s\,dB_s,
\]
i.e. $Y^n$ is the first component of a solution to a nonlinear BSDE with penalty terms.
The additional advantage of applying BSDEs is that \(f: \Omega \times [0, T] \times \mathbb{R} \times \mathbb{R}^d \to \mathbb{R}\) may be assumed to be merely monotone and continuous in \(y\) (with no restrictions on the growth), Lipschitz continuous in \(z\), and integrable at zero, i.e. $\mathbb E\int_0^T|f(s,0,0)|\, ds<\infty$. Furthermore,
as mentioned before, $L,U$ are merely of class (D), which means that the families $(L_\tau)_{\tau\in\mathcal T}$,
$(U_\tau)_{\tau\in\mathcal T}$ are uniformly integrable.
The presented results were obtained in cooperation with Tomasz Klimsiak.
Bibliography:
Klimsiak, T., Rzymowski, M.: Mokobodzki's intervals: an approach to Dynkin games when value process is not a semimartingale. arXiv:2407.15601
|
12:10 pm - 1:40 pm | Lunch |
1:40 pm - 2:30 pm | S 2 Keynote: Spatial stochastics, disordered media, and complex networks Location: POT 81 Floor plan Session Chair: Chinmoy Bhattacharjee Session Chair: Benedikt Jahnel |
|
1:40 pm - 2:30 pm
Multiple Poisson Integrals: from U-statistics to Poincaré inequalities
Multiple Wiener-Itô integrals with respect to Poisson measures are fundamental tools in modern stochastic analysis. Despite their widespread use, they remain less understood than their Gaussian counterparts, with key questions about their regularity, such as the existence of higher moments or hypercontractivity properties, still open for investigation. In this talk, I will first explore the applications of multiple integrals in a stochastic geometric context, particularly through the lens of U-statistics. I will then turn to product formulae, and show how a recent extension of the classical Poincaré inequality has enabled us to derive comprehensive necessary and sufficient conditions for the validity of product formulae of arbitrary length. This result substantially resolves a longstanding open question from one of Surgailis's seminal papers (1984). The second part of the talk is based on joint work with Lorenzo Cristofaro (Luxembourg). |
1:40 pm - 3:20 pm | S 1 (5): Machine Learning Location: POT 06 Floor plan Session Chair: Merle Behr |
|
1:40 pm - 2:05 pm
Optimization of high-dimensional random functions Universität Mannheim, Germany
Classical optimization theory cannot explain the success of optimization in machine learning. We motivate the assumption that the cost function is the realization of a random function, discuss machine learning phenomena and heuristics explained by the random-function framework, and outline the benefits and future of this Bayesian approach to optimization.
2:05 pm - 2:30 pm
Towards Applying Regression Techniques for Counterfactual Reasoning Eberhard Karls Universität Tübingen, Germany
The goal of this work is to justify the application of regression models for counterfactual reasoning in the sense of Judea Pearl. Let $X$ and $Y$ be two random variables with joint distribution $\pi$, and assume that $\pi$ can be described with an additive noise model
$$
Y := \tilde{f}(X) + \tilde{U},
$$
where $\tilde{f}$ is a continuous function and $\tilde{U}$ is a random variable with mean value $E[\tilde{U}] = 0$ that is independent of $X$. Generally, it is not feasible to reason about the effects of external interventions or counterfactuals solely from the joint distribution $\pi$. Previous work, however, shows that, in most cases, there exists no additive noise model of the form $X := \tilde{g}(Y) + \tilde{V}$ describing $\pi$, establishing an asymmetry that allows us to identify $X$ as the cause of $Y$ and to predict effects from external interventions, such as setting $X$ to a specific value $x$. In my work, I further demonstrate that the joint distribution $\pi$ uniquely determines the function $ \tilde{f}$. I then argue that this also uniquely determines all counterfactual probabilities.
For context, humans reason in terms of counterfactuals, meaning they consider how events might have unfolded under different circumstances without experiencing those alternative realities. For instance, an economist might judge, "If the government had reduced the tax rate by 5%, we would have seen at least a 2% increase in economic growth," without observing the scenario in which the government reduced the tax rate. Since this capability is fundamental to understanding the past and reasoning about ethical notions such as responsibility, blame, fairness, or harm, it is also desirable to augment current machine learning models with counterfactual reasoning. In this case, the economist might use this counterfactual statement to attribute responsibility for a current economic crisis to the government.
Sticking with the above example, $X$ could denote the tax rate and $Y$ could denote economic growth to illustrate the main ideas behind counterfactual reasoning and my work. Following Pearl's framework, I assume that the situation of interest can be captured by a functional causal model. In this case, the influence of $X$ on $Y$ can be represented by the equation:
$$
Y := f(X, U),
$$
where $f$ is the causal mechanism and $U$ is a random variable with a distribution $\pi_{U}$, representing hidden or unmodeled influences.
Suppose we observe an economic growth of 0% and a tax rate of 30%. The distribution of $U$ can be updated according to the evidence $Y = 0$ and $X = 30$. This updated distribution for the error term $U$ and the modified model
$$
Y := f(25, U)
$$
then allows us to derive a distribution for the economic growth that would have taken place if the government had reduced the tax rate by 5%. This approach lets us quantify our degree of belief that the government is responsible for the observed recession.
In practice, this approach to counterfactual reasoning requires fitting a regression model $Y := \tilde{f}(X, \tilde{U})$ and justifying why this regression model is the correct causal explanation for the observed distribution $\pi$. However, it is known that neither the causal direction nor the causal mechanism is generally uniquely determined by the joint distribution $\pi$. Therefore, in my work, I assume that the true functional causal model is itself an additive noise model, an assumption that might be justified by Occam's razor in future studies. In our example, this means I assume that
$$
Y := f(X) + U
$$
for a function $f$ and a real-valued error term $U$ with mean value $E[U] = 0$ that is independent of $X$.
Further, assume that the distributions of $X$ and $U$ are given by strictly positive densities and that $f$ is continuous and square integrable with respect to $\pi_X$, i.e., $f \in L^2(\pi_X)$. In this case, I am able to prove that the function $f$ is also uniquely determined by the joint distribution $\pi$ of $X$ and $Y$. From this fact, I further conclude that the functions $f$ and $\tilde{f}$ coincide. In particular, this means that the assumed regression model $Y := \tilde{f}(X) + \tilde{U}$ is the correct causal model, which can then be used to compute counterfactual probabilities. I therefore propose this result as a step towards using regression techniques from machine learning for data-based counterfactual reasoning.
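The abduction-action-prediction recipe described above can be sketched numerically. The observed values (tax rate 30%, growth 0%, counterfactual rate 25%) are taken from the example in the text; the linear mechanism, the noise level, and the sample size below are purely hypothetical choices made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example (mechanism and noise are hypothetical): X is the tax
# rate and Y = f(X) + U is economic growth with U independent of X.
n = 2000
X = rng.uniform(10, 50, n)                 # observed tax rates in percent
U = rng.normal(0, 0.5, n)                  # independent, zero-mean noise
Y = 4.0 - 0.1 * X + U                      # true (unknown) mechanism f(x) = 4 - 0.1 x

# Step 1 (regression): estimate f from the joint distribution by least squares.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)
f_hat = lambda x: coef[0] + coef[1] * x

# Step 2 (abduction): recover the noise realisation from the observed pair.
x_obs, y_obs = 30.0, 0.0                   # observed tax rate 30%, growth 0%
u_hat = y_obs - f_hat(x_obs)

# Step 3 (action + prediction): counterfactual growth had X been 25%.
y_cf = f_hat(25.0) + u_hat
print(y_cf)   # ≈ y_obs + f_hat(25) - f_hat(30) ≈ 0.5 for the mechanism above
```

The uniqueness result in the abstract is what licenses Step 1: under the stated assumptions, the fitted regression function is the causal mechanism, so the counterfactual in Step 3 is well defined.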
2:30 pm - 2:55 pm
Lévy Langevin Monte Carlo BRUNATA METRONA München, Germany
Analogously to the well-known Langevin Monte Carlo method, we provide a method to sample from a target distribution \(\pi\) by simulating a solution of a stochastic differential equation. Here, the stochastic differential equation is driven by a general Lévy process which, unlike in the case of Langevin Monte Carlo, allows for non-smooth targets. Our method is fully explored in the particular setting of target distributions supported on the half-line \((0,\infty)\) and a compound Poisson driving noise. Several illustrative examples demonstrate the usefulness of this method.
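For context, a minimal sketch of the classical Brownian-driven unadjusted Langevin algorithm that the Lévy-driven method generalises; the target, step size, and run length are illustrative choices, and the compound-Poisson scheme of the talk is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Classical unadjusted Langevin algorithm (ULA) for a smooth target, here a
# standard normal: x_{k+1} = x_k + h * grad_log_pi(x_k) + sqrt(2h) * xi_k.
# This is only the Brownian-driven baseline; the Levy-driven scheme of the
# talk replaces the Gaussian increments by compound Poisson noise to handle
# non-smooth targets on the half-line.
grad_log_pi = lambda x: -x                    # target pi = N(0, 1)

h, n_steps, burn_in = 0.01, 200_000, 10_000
noise = rng.normal(size=n_steps)
x, samples = 0.0, []
for k in range(n_steps):
    x = x + h * grad_log_pi(x) + np.sqrt(2 * h) * noise[k]
    if k >= burn_in:
        samples.append(x)

samples = np.array(samples)
print(samples.mean(), samples.std())          # approximately 0 and 1
```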
|
1:40 pm - 3:20 pm | S 3 (3): Stochastic Analysis and S(P)DEs Location: POT 151 Floor plan Session Chair: Vitalii Konarovskyi Session Chair: Aleksandra Zimmermann |
|
1:40 pm - 2:05 pm
Bayesian Filtering for SPDEs with Spatio-Temporal Point Process Observations 1Institute of Mathematics, Technical University Berlin, Germany; 2Institute of Physics and Astronomy, University of Potsdam, Germany
We introduce a novel mathematical framework for filtering problems in biophysical applications, focusing on data collected from confocal laser scanning microscopy that records the spatio-temporal evolution of intracellular wave dynamics. In these settings, the signals are modeled by stochastic partial differential equations (SPDEs), and the observations are represented as functionals of marked point processes whose intensities depend on the underlying signal. We derive both the unnormalized and normalized filtering equations for these systems and demonstrate the asymptotic consistency of our approach, providing approximations for finite-dimensional observation schemes and partial observations. Our theoretical findings are validated through extensive simulations using both synthetic and real-world data. This work enhances the understanding of filtering with point process observations, state-space models and Bayesian estimation, establishing a robust framework for future research in this area.
2:05 pm - 2:30 pm
On the existence of weak solutions to mean-field stochastic Volterra equations University of Mannheim, Germany
Mean-field SDEs, also known as McKean-Vlasov SDEs, provide mathematical descriptions of random systems of interacting particles whose time evolutions depend on the probability distribution of the entire system. They are frequently used in applied mathematics, particularly in mathematical finance. Stochastic Volterra equations, on the other hand, include memory effects and allow for generating non-Markovian stochastic processes.
In this talk, we unify these two concepts and consider the $d$-dimensional mean-field stochastic Volterra equation
$$ X_t = X_0 +\int_0^t K_{b}(s,t) b(s,X_s,\mathcal{L}(X_s))\,\mathrm{d} s+\int_0^t K_{\sigma}(s,t)\sigma(s,X_s,\mathcal{L}(X_s))\,\mathrm{d} B_s,\quad t\in [0,T], $$
where $B$ is an $m$-dimensional Brownian motion and $\mathcal{L}(X_s)$ denotes the law of $X_s$. We establish a suitable local martingale problem and prove the existence of weak solutions to the above mean-field stochastic Volterra equation under mild assumptions on the kernels and non-Lipschitz coefficients following the idea of Skorokhod's existence theorem.
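As a purely illustrative discretisation of the displayed equation (with hypothetical smooth exponential kernels and Lipschitz coefficients, not the non-Lipschitz setting of the talk), the law $\mathcal{L}(X_s)$ can be replaced by the empirical measure of $N$ interacting particles and the integrals by left-point Euler sums:

```python
import numpy as np

rng = np.random.default_rng(2)

# Particle approximation of the mean-field stochastic Volterra equation with
# illustrative choices: K_b(s,t) = K_sigma(s,t) = exp(-(t-s)), drift
# b(s,x,mu) = mean(mu) - x (attraction to the empirical mean) and constant
# sigma = 0.3. The Volterra structure means each step re-weights the whole
# history with the kernel, so the scheme is non-Markovian.
N, T, n_steps = 500, 1.0, 200
dt = T / n_steps
t = np.linspace(0.0, T, n_steps + 1)

K = lambda s, ti: np.exp(-(ti - s))
X = np.zeros((n_steps + 1, N))
X[0] = rng.normal(1.0, 0.5, N)                  # X_0
dB = rng.normal(0.0, np.sqrt(dt), (n_steps, N)) # Brownian increments

drift = np.zeros((n_steps, N))   # b(t_j, X_j, mu_j), stored per step
diffu = np.zeros((n_steps, N))   # sigma * dB_j, stored per step
for k in range(n_steps):
    drift[k] = X[k].mean() - X[k]               # interaction via empirical law
    diffu[k] = 0.3 * dB[k]
    w = K(t[: k + 1], t[k + 1])[:, None]        # kernel weights K(t_j, t_{k+1})
    X[k + 1] = X[0] + (w * drift[: k + 1]).sum(0) * dt + (w * diffu[: k + 1]).sum(0)

print(X[-1].mean())   # empirical mean stays near 1 for this mean-reverting drift
```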
2:30 pm - 2:55 pm
Ergodicity for stochastic Volterra processes 1Trento University, Italy; 2Dublin City University, Ireland
In this work, we investigate the long-time behaviour of Hilbert space-valued stochastic Volterra processes given as solutions of
\[
u(t) = G(t) + \int_0^t E_b(t-s)b(u(s))\,ds + \int_0^t E_\sigma(t-s)\sigma(u(s))\,dW_s
\]
on $H$, where $E_b,E_\sigma$ form a family of bounded linear operators on $H$ with additional integrability conditions, $(W_t)_{t\geq0}$ is a cylindrical Wiener process on a Hilbert space $U$, $G\in L^2(\Omega;L^2_{loc}(\mathbb{R_+};H))$ is $\mathcal{F}_0$-measurable and $b\colon H\longrightarrow H$ and $\sigma\colon H\longrightarrow L_2(U,H)$ are Lipschitz. For Markovian systems, such problems are typically analyzed using various methods that leverage the Markov property. However, solutions to stochastic Volterra equations are generally neither Markov processes nor semimartingales, making their asymptotic analysis both intriguing and challenging. To remedy the lack of the Markov property, we consider a Hilbert space-valued Markovian lift $X$ for $u$ and study its asymptotic behaviour. Finally, a projection argument allows us to provide a full characterization of corresponding invariant measures, derive a law-of-large numbers, and show that in certain cases a central limit theorem with the usual Gaussian domain of attraction holds.
2:55 pm - 3:20 pm
Limit theorems for general functionals of Brownian local times 1Technical University of Hamburg, Germany; 2University of Luxembourg, Luxembourg
We present the asymptotic theory for integrated functions of increments in space of Brownian local times. Specifically, we determine their first-order limit, along with the asymptotic distribution of the fluctuations. Our key result establishes that a standardized version of our statistic converges stably in law towards a mixed normal distribution. Our contribution builds upon a series of prior works by S. Campese, X. Chen, Y. Hu, W.V. Li, M.B. Markus, D. Nualart and J. Rosen, which delved into the special case of polynomials using the method of moments, Malliavin calculus and Ray-Knight theorems as well as Perkins’ semimartingale representation of local time and the Kailath-Segall formula. In contrast to those methodologies, our approach relies on infill limit theory for semimartingales, which allows us to establish a limit theorem for general functions that satisfy mild smoothness and growth conditions.
|
1:40 pm - 3:20 pm | S 4 (4): Limit theorems, large deviations and extremes Location: ZEU 160 Floor plan Session Chair: Jan Nagel Session Chair: Marco Oesting |
|
1:40 pm - 2:05 pm
Phase transitions for linear spectral statistics of sample correlation matrices in high dimension Stockholm University, Sweden
We investigate linear spectral statistics (LSS) of a sample correlation matrix $R$, constructed from $n$ observations of a $p$-dimensional random vector with iid components. If the entries have finite fourth moment and $p$ and $n$ grow proportionally, it is known that LSS satisfy a central limit theorem (CLT) and the centering and scaling sequences are universal in the sense that they do not depend on the entry distribution. Under a symmetry and a regular variation assumption with index $\alpha$ and any growth rate of the dimension, we prove that the universal CLT remains valid for $\alpha >3$. Moreover, for $\alpha\le 3$ we establish a non-universal CLT with norming sequences depending on the value of $\alpha$. Our findings are illustrated in a small simulation study.
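A linear spectral statistic of this kind is straightforward to compute; the sketch below (hypothetical parameters: standard normal entries, $p/n = 1/2$, test function $f(x) = x^2$) illustrates the object whose fluctuations the talk studies.

```python
import numpy as np

rng = np.random.default_rng(3)

# Linear spectral statistic of a sample correlation matrix R: for a test
# function f, the LSS is sum_i f(lambda_i) over the eigenvalues of R.
# Illustrative setup: iid standard normal entries, proportional growth
# p/n = 0.5, and f(x) = x^2.
n, p = 2000, 1000
Z = rng.normal(size=(n, p))
R = np.corrcoef(Z, rowvar=False)        # p x p sample correlation matrix
eig = np.linalg.eigvalsh(R)
lss = np.sum(eig ** 2)

# For f(x) = x^2 the LSS equals tr(R^2), i.e. the squared Frobenius norm of
# the symmetric matrix R, which gives a direct numerical check.
print(lss, (R * R).sum())               # the two agree up to rounding
```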
2:05 pm - 2:30 pm
A sharper Lyapunov-Katz central limit error bound for i.i.d. summands Zolotarev-close to normal Universität Trier
We prove a central limit error bound for convolution powers of laws with finite moments of order $ r \in \mathopen]2,3\mathclose] $, taking a closeness of the laws to normality into account:
Going beyond the Berry (1941)-Esseen (1942) theorem, the classical error bound for Lyapunov's (1901) limit theorem, with a constant independent of $r$, was first obtained explicitly by Katz (1963). Up to a universal constant, our result sharpens the i.i.d. case and generalises the special case of $r=3$ obtained by Mattner (2024).
2:30 pm - 2:55 pm
The free additive convolution of semicircular and uniform distribution TU Dortmund, Germany
The Hermitian Brownian motion is one of the most studied random matrix processes, and its eigenvalue process is the well-known Dyson Brownian motion. It turns out that deforming this matrix model by adding a certain deterministic diagonal matrix, which is linearly scaled by time $t$, gives rise to another interesting eigenvalue process. We will see that the latter is precisely the componentwise logarithm of the singular value process of the multiplicative Brownian motion on $\operatorname{GL}(N,\mathbb{C})$. In the large $N$ limit, we in turn get a formula relating the free additive convolution of the semicircle and uniform distributions to the distribution of the free positive multiplicative Brownian motion. Based on joint work with Michael Voit.
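The deformed matrix model can be illustrated at finite $N$; the normalisation and the uniform diagonal profile below are illustrative choices, not the talk's exact setup. Since the summands are free in the large $N$ limit, the variances of the semicircle law on $[-2,2]$ (variance 1) and the uniform law on $[-1,1]$ (variance $1/3$) add.

```python
import numpy as np

rng = np.random.default_rng(4)

# GUE-type Hermitian matrix (Hermitian Brownian motion at time t = 1, scaled
# so the spectrum is O(1)) plus a deterministic diagonal with a uniform
# profile. For large N the empirical spectrum approximates the free additive
# convolution of the semicircle law on [-2, 2] and the uniform law on
# [-1, 1]; in particular the free variances add: 1 + 1/3 = 4/3.
N = 1000
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = (A + A.conj().T) / (2 * np.sqrt(N))        # scaled Hermitian Brownian motion
D = np.diag(np.linspace(-1.0, 1.0, N))         # deterministic diagonal, uniform profile
eig = np.linalg.eigvalsh(H + D)

print(np.mean(eig), np.mean(eig ** 2))         # mean ≈ 0, second moment ≈ 4/3
```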
2:55 pm - 3:20 pm
Central limit theorem for convex expectations 1ETH Zurich; 2University of Konstanz
Based on a recently developed theory for strongly continuous convex monotone semigroups on spaces of continuous functions, we provide a central limit theorem for convex expectations, including explicit convergence rates. In the present context, the nonlinearity of the expectation arises due to uncertainty regarding the distribution of the samples, which transfers to the transition probabilities determining the corresponding semigroup. Our results are consistent with Peng's G-framework for sublinear expectations and are, to the best of our knowledge, the first extension to the convex case. This is of particular interest for applications in mathematical finance. Furthermore, the framework allows us to consider large deviations results as a law of large numbers for suitable convex expectations.
|
1:40 pm - 3:20 pm | S 6 (5): Stochastic modelling in natural sciences Location: POT 112 Floor plan Session Chair: Alexandra Blessing Session Chair: Anton Klimovsky |
|
1:40 pm - 2:05 pm
Multiple-merger coalescents {\it (I)} when the sample size is large, and {\it (II)} in a random environment Museum fuer Naturkunde
{\it (I)} Individual recruitment success, or the offspring number distribution of a given population, is a fundamental element in ecology and evolution. Randomly increased recruitment refers to an increased chance of producing many offspring without natural selection being involved, and may apply to individuals in broadcast spawning populations characterised by Type III survivorship. We consider an extension of the (Schweinsberg, 2003) model of randomly increased recruitment for a haploid panmictic population of constant size $N$; the extension also works as an alternative to the Wright-Fisher model. Our model incorporates a fixed upper bound on the random number of potential offspring produced by an arbitrary individual. Depending on how the bound behaves relative to $N$ as $N$ increases, we obtain the Kingman-coalescent, an incomplete Beta-coalescent, or the original (complete) Beta-coalescent of (Schweinsberg, 2003). We argue that applying such an upper bound is biologically reasonable. Moreover, we estimate the error of the coalescent approximation. The error estimates reveal that convergence can be slow, and small sample size can be sufficient to invalidate convergence, for example if the stated bound is of the form $N/\log N$. We use simulations to investigate the effect of increasing sample size on the site-frequency spectrum. When the model is in the domain of attraction of a Beta-coalescent, the site-frequency spectrum will be as predicted by the limiting tree even though the full coalescent tree may deviate from the limiting one. When the model is in the domain of attraction of the Kingman-coalescent, the effect of increasing sample size depends on the effective population size, as has been noted in the case of the Wright-Fisher model. This is joint work with JA Chetwyn-Diggle.
{\it (II)} When estimates of $\alpha$ of the Beta-coalescents derived in {\it (I)} are close to 1, recovering the mutations used to estimate $\alpha$ may require strong assumptions regarding the population size and/or the mutation rate. With this in mind, we consider population genetic models of randomly increased recruitment in haploid panmictic populations of constant size ($N$) evolving in a random environment. Our main results are {\it (i)} continuous-time coalescents that are either the Kingman-coalescent or specific families of Beta or Poisson-Dirichlet coalescents; when combining the results, the parameter $\alpha$ of the Beta-coalescent ranges from 0 to 2, and the Beta-coalescents may be incomplete due to an upper bound on the number of potential offspring an arbitrary individual may produce; {\it (ii)} in large populations we measure time in units proportional to at least $N/\log N$ generations; {\it (iii)} using simulations we show that in some cases estimates of the site-frequency spectra as predicted by a given coalescent do not match the estimates predicted by the corresponding pre-limiting model; {\it (iv)} estimates of the site-frequency spectra obtained by conditioning on the (random) complete sample gene genealogy are broadly similar (for the models considered here) to the estimates obtained without conditioning on the sample tree.
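One of the limiting objects above can be illustrated with a minimal simulation of the standard Kingman $n$-coalescent (a generic sketch, not the Schweinsberg extension with bounded offspring numbers): while $k$ lineages remain, wait an $\mathrm{Exp}(k(k-1)/2)$ time and merge a pair.

```python
import numpy as np

rng = np.random.default_rng(5)

# Kingman n-coalescent: with k lineages, the waiting time to the next merger
# is exponential with rate k(k-1)/2. This sketch records only the total
# branch length, whose expectation is 2 * sum_{j=1}^{n-1} 1/j; dropping
# mutations on the tree at rate theta/2 per unit length would then yield the
# site-frequency spectrum discussed in the talk.
def total_branch_length(n, rng):
    L = 0.0
    for k in range(n, 1, -1):
        L += k * rng.exponential(1.0 / (k * (k - 1) / 2))
    return L

n, reps = 20, 20_000
sims = np.array([total_branch_length(n, rng) for _ in range(reps)])
expected = 2 * sum(1.0 / j for j in range(1, n))
print(sims.mean(), expected)   # the two should be close
```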
2:05 pm - 2:30 pm
A probabilistic interpretation of a non-conservative and path-dependent nonlinear reaction-diffusion system for marble sulphation in Cultural Heritage 1University of Oslo, Norway; 2University of Milano, Italy
We discuss a probabilistic interpretation of a specific deterministic reaction-diffusion system [1,2] arising in literature for the modelling of the marble sulphation phenomenon. This phenomenon occurs when sulphur dioxide, which is present in the atmosphere due to human emissions, reacts with the calcium carbonate rock (like marble or limestone), causing a chemical reaction which facilitates the degradation of the material. Since this degradation phenomenon affects both modern and historical buildings, as well as artistic artifacts exposed to open air, a better understanding of it through mathematical modelling is needed.
We derive a single regularised non-conservative and path-dependent nonlinear partial differential equation and propose a probabilistic interpretation using a non-Markovian McKean-Vlasov-type stochastic differential equation, through a Feynman-Kac approach [3]. We discuss the well-posedness of such a stochastic model.
Then, we provide a deeper understanding of the deterministic reaction-diffusion system as a mean-field approximation of a system of interacting Brownian particles subject to a non-conservative evolution law. Propagation of chaos holds for such a particle system. Therefore, the proposed McKean-Vlasov-Feynman-Kac SDE can be interpreted as the dynamics of sulphur dioxide molecules during the reaction, at the microscale.
This is a joint work with Daniela Morale and Stefania Ugolini (Universita’ degli Studi di Milano) [4].
REFERENCES
[1] D. Aregba-Driollet, F. Diele, and R. Natalini. “A Mathematical Model for the Sulphur Dioxide Aggression to Calcium Carbonate Stones: Numerical Approximation and Asymptotic Analysis”. In: SIAM Journal on Applied Mathematics 64.5 (2004), pp. 1636–1667.
[2] F. R. Guarguaglini and R. Natalini. “Fast reaction limit and large time behavior of solutions to a nonlinear model of sulphation phenomena”. In: Comm. Partial Differential Equations 32.1-3 (2007), pp. 163–189.
[3] A. Lecavil, N. Oudjane, and F. G. Russo. “Probabilistic representation of a class of non conservative nonlinear Partial Differential Equations”. In: ALEA, Lat. Am. J. Probab. Math. Stat. (2016), pp. 1189–1233.
[4] D. Morale, L. Tarquini and S. Ugolini. “A probabilistic interpretation of a non-conservative and path-dependent nonlinear reaction-diffusion system for the marble sulphation in Cultural Heritage”. 2024. arXiv:2407.19301
2:30 pm - 2:55 pm
The high-temperature phases of the complex CREM: Beyond weak correlations 1Technion Haifa, Israel; 2JGU Mainz, Germany; 3Würzburg University, Germany
We identify the fluctuations of the partition function of the complex continuous random energy model (CREM) on a Galton-Watson tree in the full high-temperature regime. We show that in the strongly correlated regime a third high-temperature phase emerges as conjectured by Kabluchko and one of us. This phase is not present in the regime of weak correlations and the complex REM. A key ingredient is a Lindeberg-Feller type central limit theorem.
2:55 pm - 3:20 pm
Dimensionality Reduction in Filtering for Stochastic Reaction Networks 1King Abdullah University of Science and Technology (KAUST), Saudi Arabia; 2Utrecht University, Netherlands; 3RWTH Aachen University, Germany
Stochastic reaction networks (SRNs) model stochastic effects for various applications, including intracellular chemical/biological processes, economics, and epidemiology. These models represent the dynamics of systems with several interacting species with low copy numbers. Essentially, SRNs are continuous-time Markov chains, with the state being the copy number of each species (a multivariate vector) and transitions representing randomly occurring reactions.
In many applications, a challenge arises when only a few of the species can be observed. This leads to the stochastic filtering problem, which is to estimate the distribution of unobserved (hidden) species counts conditional on the given observed trajectories. Despite the non-linearity, various numerical methods for this problem have been developed in recent years. However, these methods have limited applicability due to the curse of dimensionality, which means that their computational complexity grows exponentially with respect to the number of species.
We propose a novel Markovian projection based approach that reduces the dimensionality of the filtering problem for SRNs. This significantly enhances the efficiency of numerical methods for solving the filtering problem without imposing additional assumptions on the shape of the underlying distribution. Our analysis and empirical results highlight the superior computational efficiency of our method compared to state-of-the-art methods in the high-dimensional setting.
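The continuous-time Markov chain structure of an SRN can be illustrated with Gillespie's stochastic simulation algorithm on a toy birth-death network with hypothetical rates; the talk's Markovian-projection filtering reduction is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)

# Gillespie's stochastic simulation algorithm for a toy SRN: a birth-death
# process with reactions 0 -> X at rate c1 and X -> 0 at rate c2 * x. At
# each jump, the waiting time is exponential in the total rate and the
# reaction is chosen proportionally to its rate.
def ssa_birth_death(c1, c2, x0, t_end, rng):
    t, x = 0.0, x0
    while True:
        birth, death = c1, c2 * x
        total = birth + death
        t += rng.exponential(1.0 / total)
        if t > t_end:
            return x
        if rng.random() * total < birth:
            x += 1          # birth reaction fires
        else:
            x -= 1          # death reaction fires

# The stationary distribution of this network is Poisson(c1 / c2); we check
# the empirical mean after a long horizon.
samples = [ssa_birth_death(10.0, 1.0, 0, 10.0, rng) for _ in range(1000)]
print(np.mean(samples))     # close to 10
```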
|
1:40 pm - 3:20 pm | S 7 (7): Stochastic processes: theory, statistics and numerics Location: POT 51 Floor plan Session Chair: Frank Aurzada |
|
1:40 pm - 2:05 pm
Some insight in jumps of Dunkl processes and connections to Gilat's theorem TU Dortmund, Germany
Dunkl processes are a special type of multidimensional jump diffusion whose radial part is a multidimensional Bessel process, well known in the framework of interacting particle systems and closely related to random matrix theory. In addition to the Bessel dynamics, Dunkl processes possess jumps, namely reflections, which produce sign changes and changes of coordinate processes and lead to a martingale structure of the process. In one dimension this property is closely related to Gilat's theorem, which states that every positive submartingale is the absolute value of a martingale. Namely, a one-dimensional Dunkl process is the unique martingale corresponding to the positive submartingale given by the classical Bessel process.
In this talk we will analyse the jump mechanism of the Dunkl process and its relation to the martingale property. Furthermore, we will show how this provides insight into constructions of martingales associated with Gilat's theorem.
2:05 pm - 2:30 pm
On nonlocal Neumann problem and corresponding stochastic process Wroclaw University of Science and Technology, Poland
We consider the Neumann problem for the fractional Laplacian introduced by S. Dipierro, X. Ros-Oton and E. Valdinoci. Following the probabilistic interpretation of the corresponding heat equation given by the authors, we construct a stochastic process $X_t$ such that $X_t$ starting in $D$ moves as the isotropic stable process until the first exit time from $D$. At the exit time it jumps out of $D$ according to the L\'evy measure $\nu$ of the stable process. It stays at the exit point $y$ for an exponential time with mean $1/\nu(y, D)$, then jumps back to $D$ and restarts. We investigate some fundamental properties of $X_t$, the corresponding semigroup and bilinear form. In particular, we prove that the lifetime of $X$ is infinite for $\alpha\in (0, 1]$ and finite for $\alpha\in (1, 2)$. In the latter case we have $\lim_{t\to\zeta} X_t = 0$, where $\zeta$ denotes the lifetime. We prove that for sufficiently regular functions $f$ the function $u = Gf$ is the solution of the Neumann problem, where $G$ is the 0-potential of $X$, i.e., $Gf(x) = E_x \int_0^\infty f(X_t) \, dt.$ This is a joint work with Krzysztof Bogdan and Damian Fafula from Wroclaw University of Science and Technology.
2:30 pm - 2:55 pm
Self-intersection local times of Volterra Gaussian processes University of Nottingham
Consider a Volterra Gaussian process of the form $X(t)=\int^t_0K(t,s)dW(s),$ where $W$ is a Wiener process and $K$ is a square integrable Volterra kernel. In this talk, we study weighted multiple self-intersection local times formally defined as
$$
T^{\rho}_k=\int_{\Delta_k}\rho(X(t_1))\prod^{k-1}_{i=1}\delta_0(X(t_{i+1})-X(t_i))dt_1\ldots dt_k,
$$
where $\Delta_k=\{0\leqslant t_1\leqslant\ldots \leqslant t_k\leqslant 1\}$ and $\rho$ is a weight function.
The variable $T^{\rho}_k$ measures how much time the process $X$ spends in small neighbourhoods of its self-intersection points. As geometric characteristics of nonsmooth stochastic processes [1], self-intersection local times are an important and interesting notion related to the stochastic process and have been studied in different directions.
To give a rigorous definition to the variable $T^{\rho}_k$, it is natural to approximate the delta-function and prove the existence of the limit. In this talk, we discuss conditions on the process $X$ and the weight function $\rho$ that guarantee the existence of the limit in mean square.
Next, we highlight a class of Volterra Gaussian processes generated by continuously differentiable kernels $K(t-s),\ t,s\in[0,1]$ such that $K(0)\neq 0.$ This class of Volterra Gaussian processes displays similarity to the Wiener process. It is not difficult to check that for a planar Wiener process $W$, even a double self-intersection local time does not exist. Indeed, one can check that
$$
E\int_{\Delta_2}\delta_0(W(t_2)-W(t_1))dt_1dt_2=\int_{\Delta_2}\frac{1}{2\pi(t_2-t_1)}dt_1dt_2=\infty.
$$
Nevertheless, the Rosen renormalized self-intersection local time
$$
\int_{\Delta_k}\prod^{k-1}_{i=1}(\delta_0(W(t_{i+1})-W(t_i))-E\delta_0(W(t_{i+1})-W(t_i)))dt_1\ldots dt_k
$$
exists [2]. In this talk, we construct the Rosen renormalized self-intersection local time of multiplicity $k$ for planar Volterra Gaussian processes [3].
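The delta-approximation definition can be illustrated numerically in the simplest case (assumptions for this sketch: kernel $K \equiv 1$, i.e. a one-dimensional Wiener process, weight $\rho \equiv 1$, and multiplicity $k = 2$, where the limit exists without Rosen's renormalisation).

```python
import numpy as np

rng = np.random.default_rng(7)

# Double self-intersection local time of a 1-d Brownian motion: the delta
# function is replaced by a Gaussian kernel p_eps and the integral over the
# simplex {0 <= t1 < t2 <= 1} by a Riemann sum on a grid. As eps -> 0 the
# mean tends to 4 / (3 sqrt(2 pi)) ≈ 0.53; the planar case in the talk
# requires renormalisation instead.
def double_silt(path, dt, eps):
    diffs = path[None, :] - path[:, None]          # W(t2) - W(t1)
    kern = np.exp(-diffs ** 2 / (2 * eps)) / np.sqrt(2 * np.pi * eps)
    return np.triu(kern, k=1).sum() * dt * dt      # sum over t1 < t2

n, eps, reps = 500, 1e-3, 200
dt = 1.0 / n
vals = []
for _ in range(reps):
    W = np.concatenate([[0.0], np.cumsum(rng.normal(0, np.sqrt(dt), n))])
    vals.append(double_silt(W, dt, eps))
print(np.mean(vals))   # roughly 0.5 for this eps and grid
```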
References
1. J.-F. Le Gall, Wiener sausage and self-intersection local times, Journal of Functional Analysis 88, 1990, 299-341.
2. J. Rosen, A renormalized local time for multiple intersections of planar Brownian motion, Séminaire de Probabilités, XX, 1984/85, Lecture Notes in Math., vol. 1204, Springer, Berlin, 1986, pp. 515-531.
3. O. Izyumtseva, W. R. KhudaBukhsh, Local time of self-intersection and sample path properties of Volterra Gaussian processes, arXiv:2409.04377.
2:55 pm - 3:20 pm
Vague and basic convergence of signed measures Ulm University
We study the relationship between different kinds of convergence of finite signed measures and discuss their metrizability. In particular, we study the concept of basic convergence recently introduced by A. A. Khartov and introduce the related concept of almost basic convergence. We discover that a sequence of finite signed measures converges vaguely if and only if it is locally uniformly bounded in variation and the corresponding sequence of distribution functions either converges in Lebesgue measure up to constants, converges basically, or converges almost basically.
|
1:40 pm - 3:20 pm | S 8 (5): Finance, insurance and risk: Modelling Location: POT 361 Floor plan Session Chair: Peter Hieber Session Chair: Frank Seifried |
|
1:40 pm - 2:05 pm
A Pareto tail plot and the principle of a single huge jump Karlsruher Institut für Technologie (KIT), Germany
We propose a mean functional that exists for any probability distribution and characterizes the Pareto distribution within the set of distributions with finite left endpoint. This is in sharp contrast to the mean excess plot, which is not meaningful for distributions without an existing mean and has a nonstandard behaviour if the mean is finite but the second moment does not exist.
The construction of the plot is based on the so-called principle of a single huge jump, which differentiates between distributions with moderately heavy and super heavy tails. Between these two groups lies the family of Pareto distributions. We present an estimator of the tail function based on U-statistics and study its large sample properties. Several loss datasets illustrate the use of the new plot.
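For comparison, the classical mean excess plot $e(u) = E[X - u \mid X > u]$ that the proposed plot is designed to improve upon can be sketched directly (the talk's new mean functional is not reproduced here; the tail index and thresholds below are illustrative).

```python
import numpy as np

rng = np.random.default_rng(8)

# Classical mean excess plot: e(u) = E[X - u | X > u]. For a Pareto
# distribution with tail index alpha > 1 the plot is linear in u with slope
# 1 / (alpha - 1), which is the standard graphical diagnostic for heavy tails.
def mean_excess(sample, thresholds):
    return np.array([(sample[sample > u] - u).mean() for u in thresholds])

alpha = 3.0
X = rng.pareto(alpha, 100_000) + 1.0             # Pareto with x_min = 1
u = np.quantile(X, np.linspace(0.5, 0.95, 10))
e = mean_excess(X, u)

slope = np.polyfit(u, e, 1)[0]
print(slope)   # close to 1 / (alpha - 1) = 0.5
```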
2:05 pm - 2:30 pm
Shrink-Swell Soils: Modelling and Pricing with Mean-Reverting Regime-Switching Lévy Processes EM Lyon, France
Shrink-swell soils create immense challenges for insurers and reinsurers. The claims that they produce are quickly increasing in frequency, while predictability remains a key concern. This paper constructs and compares several models to tackle and price this risk. These models mean-revert towards a seasonality function, present jumps with infinite arrival rates via Lévy processes, and display a regime-switching nature to allow for a variety of scenarios for the coming future years. We introduce structural and reduced-form frameworks, that is, frameworks that are more phenomenological or more efficiency-based. A comparison of these frameworks concludes this paper.
2:30 pm - 2:55 pm
On the Range Process of a L\'{e}vy Risk Process with Fair Valuation of Insurance Contract 1School of Mathematical Sciences,University of Southampton; 2Department of Finance,The Chinese University of Hong Kong
In this paper, we study the range process of L\'{e}vy risk processes through the characterization of some fluctuation results pertaining to inverse range time. In particular, we derive explicit expressions for Laplace transforms associated with occupation times and many related quantities. The range process under the Poissonian observation scheme will also be introduced. We further study extremum levels up to the inverse range. Explicit results under the Brownian risk process and the Cram\'{e}r-Lundberg risk model will be presented. As an application, we present an extensive numerical analysis on fair valuation of insurance contracts.
|
1:40 pm - 3:20 pm | S10 (2): Stochastic optimization and operation research Location: POT 13 Floor plan Session Chair: Nikolaus Schweizer Session Chair: Ralf Werner |
|
1:40 pm - 2:05 pm
Optimal control of stochastic delay differential equations and applications to portfolio optimization and optimal advertising Luiss University
Optimal control problems involving Markovian stochastic differential equations have been extensively studied in the research literature; however, many real-world applications necessitate the consideration of path-dependent non-Markovian dynamics. In this talk, we consider an optimal control problem of (path-dependent) stochastic differential equations with delays in the state. To use the dynamic programming approach, we regain Markovianity by lifting the problem on a suitable Hilbert space. We characterize the value function $V$ of the problem as the unique viscosity solution of the associated Hamilton-Jacobi-Bellman (HJB) equation, which is a fully non-linear second-order partial differential equation on a Hilbert space with an unbounded operator. Since no regularity results are available for viscosity solutions of these kinds of HJB equations, via a new finite-dimensional reduction procedure and using the regularity theory for finite-dimensional PDEs, we prove partial $C^{1,\alpha}$-regularity of $V$. When the diffusion is independent of the control, this regularity result allows us to define a candidate optimal feedback control. However, due to the lack of $C^2$-regularity of $V$, we cannot prove a verification theorem using standard techniques based on Ito’s formula. Thus, using a technical double approximation procedure, we construct functions approximating $V$, which are supersolutions of perturbed HJB equations and regular enough to satisfy a non-smooth Ito’s formula. This allows us to prove a verification theorem and construct optimal feedback controls. We provide applications to optimal advertising and portfolio optimization. We discuss how these results extend to the case of delays in the control variable (also) and discuss connections with new results of $C^{1,1}$-regularity of the value function and optimal synthesis for optimal control problems of stochastic differential equations on Hilbert spaces via viscosity solutions.
The talk is based on the following manuscripts:
F. de Feo, S. Federico, A. Święch, "Optimal control of stochastic delay differential equations and applications to path-dependent financial and economic models", SIAM J. Control Optim. 62 (2024), no. 3, 1490–1520.
F. de Feo, A. Święch, "Optimal control of stochastic delay differential equations: Optimal feedback controls", arXiv preprint arXiv:2309.05029 (2024).
F. de Feo, "Stochastic optimal control problems with delays in the state and in the control via viscosity solutions and applications to optimal advertising and optimal investment problems", Decis. Econ. Finance (2024) 31 pp.
F. de Feo, A. Święch, L. Wessels, "Stochastic optimal control in Hilbert spaces: $C^{1,1}$-regularity of the value function and optimal synthesis via viscosity solutions", arXiv preprint, arXiv:2310.03181 (2024).
2:05 pm - 2:30 pm
Time-consistent asset allocation for risk measures in a Lévy market Ulm University, Germany
Focusing on gains & losses relative to a risk-free benchmark instead of terminal wealth, we consider an asset allocation problem to maximize time-consistently a mean-risk reward function with a general risk measure which is i) law-invariant, ii) cash- or shift-invariant, and iii) positively homogeneous, and possibly plugged into a general function. Examples include (relative) Value at Risk, coherent risk measures, variance, and generalized deviation risk measures. We model the market via a generalized version of the multi-dimensional Black-Scholes model using $\alpha$-stable Lévy processes and give supplementary results for the classical Black-Scholes model. The optimal solution to this problem is a Nash subgame equilibrium given by the solution of an extended Hamilton-Jacobi-Bellman equation. Moreover, we show that the optimal solution is deterministic under appropriate assumptions.
2:30 pm - 2:55 pm
An investment problem with incomplete information Trier University
We consider an investment problem with incomplete information where an irreversible investment yields a flow of operating profits. These profits are determined by a geometric Brownian motion with unknown drift. Mathematically, this is an optimal stopping problem with incomplete information where the payoff function depends directly on the unknown parameter. We transform the problem into a two-dimensional optimal stopping problem with complete information. Thereby, we find that the optimal stopping time is a threshold time with respect to the underlying two-dimensional process. Then, we identify the optimal stopping set with a boundary function mapping the current posterior belief onto the minimal stopping trigger for the state of the geometric Brownian motion. Further, we use an elementary probabilistic argument to show regularity and monotonicity of the value function and the boundary function. Finally, we derive a nonlinear integral equation for the boundary function.
2:55 pm - 3:20 pm
A Hot Topic: Modeling Prosumer Heat Storage with a Markov Decision Process Karlsruhe Institute of Technology (KIT)
Heat energy management plays an important role in the ongoing efforts to reduce carbon emissions. Due to the widespread use of solar collectors, many households are no longer just consumers of thermal energy but also producers (short: prosumers). A major challenge such prosumers face is the efficient use of the collected heat energy. Help is provided by technologies like long-term geothermal storage solutions and district heating networks. However, these more advanced systems raise questions about their optimal operation, for example, what amount of excess production should be stored long-term/sold vs. what amount should be kept on hand? This presentation aims to answer some of these questions by modeling prosumer heat storage with a Markov decision process, by deriving the structure of the optimal policy and by computing it via learning algorithms.
The talk is based on joint work with Nicole Bäuerle and funded by the Bundesministerium für Bildung und Forschung (BMBF).
|
1:40 pm - 3:20 pm | S11 (1): Time series - Change-Point Analysis Location: POT 251 Floor plan Session Chair: Alexander Schnurr |
|
1:40 pm - 2:05 pm
Monitoring Time Series with Short Detection Delay Aarhus University
In this work, we develop sequential tests for a change in the mean of a dependent, Banach space-valued time series. For this purpose, we introduce a new class of weighted CUSUM statistics tailored to the detection of changes shortly after they occur. Unlike current alternatives, which either experience long detection delays or offer short delays only at the very beginning of the monitoring period, our approach provides consistently short detection delays anywhere in the monitoring period. This property is highly relevant for modern applications, such as epidemiology and finance, where short delays are crucial and the timing of the change is unpredictable. Our theoretical results are based on new Hölderian invariance principles that we prove under some high-level conditions for Banach space-valued data. We show that these conditions hold in many instances and discuss, as one particular example, m-approximable time series on Hilbert spaces. Such time series cover important classes of models for multivariate and functional data. The resulting Hölderian invariance principle for m-approximable time series is of independent interest. A simulation study and a data example underline the usefulness and relative advantages of the proposed approach.
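The monitoring principle can be sketched in a univariate toy setting. The weight function with exponent `gamma` and the critical value below are illustrative stand-ins, not the talk's statistic for dependent Banach-space-valued data:

```python
import numpy as np

def cusum_monitor(train, stream, gamma=0.45, crit=3.0):
    # Sequential weighted-CUSUM monitor for a change in the mean: estimate the
    # mean and scale on a training sample of size m, then raise an alarm at the
    # first monitoring step k where the weighted CUSUM exceeds `crit`.
    m = len(train)
    mu, sig = train.mean(), train.std()
    s = 0.0
    for k, xk in enumerate(stream, start=1):
        s += xk - mu                               # CUSUM of deviations
        weight = sig * np.sqrt(m) * (1 + k / m) * (k / (k + m)) ** gamma
        if abs(s) / weight > crit:
            return k                               # alarm (detection) time
    return None                                    # monitoring ends without alarm

rng = np.random.default_rng(11)
train = rng.normal(0.0, 1.0, 200)
# change of size 2 occurring 100 steps into the monitoring period
delay_mid = cusum_monitor(train, np.concatenate([rng.normal(0.0, 1.0, 100),
                                                 rng.normal(2.0, 1.0, 200)]))
# large change present from the very start of monitoring
delay_start = cusum_monitor(train, rng.normal(3.0, 1.0, 200))
```

The design tension the abstract describes is visible here: the exponent `gamma` trades off detection speed at the start of monitoring against speed later on.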
2:05 pm - 2:30 pm
Functional AR-Sieve Bootstrap for Change-Point Tests Otto-von-Guericke Universität Magdeburg, Germany
A generalization of the CUSUM statistic can be used to detect structural breaks in functional time series. However, determining the critical value without resorting to dimension reduction is challenging. We propose a sequential version of the functional autoregressive sieve bootstrap and show that this bootstrap method is asymptotically valid. In addition to the theoretical results, we demonstrate in a simulation study that it leads to well-calibrated tests for finite samples. We compare the AR-sieve bootstrap to other suggested resampling methods: the block bootstrap and the dependent wild bootstrap.
2:30 pm - 2:55 pm
Two change point tests for a gradual change in the Poisson INARCH(1)-process 1Fraunhofer ITWM, Germany; 2Hochschule für Technik und Wirtschaft Berlin; 3Rheinland-Pfälzische Technische Universität Kaiserslautern-Landau
Change point detection methods are a common tool to identify structural changes in the distribution of time series. In recent years, there has been progress in detecting changes within time series in countable spaces, e.g. the natural numbers. For a number of applications, such as outbreak detection of infectious diseases, modeling a gradual change can be valuable. Such count time series can be modeled by Poisson INARCH(1) processes. One possibility to model gradual changes is by introducing a non-linear time-dependent factor in the intensity function of a Poisson INARCH(1) process. This additional factor characterizes the gradual change after the change point. Two test statistics are applied to this model and it is shown that both still have a limiting distribution given by the Gumbel extreme value distribution under the null hypothesis. Under the alternative, consistency holds under certain assumptions. The difference between the two test statistics is elaborated in an experimental analysis.
|
1:40 pm - 3:20 pm | S12 (2): Computational, functional and high-dimensional statistics Location: ZEU 260 Floor plan Session Chair: Martin Wahl |
|
1:40 pm - 2:05 pm
Delayed Acceptance Slice Sampling: A Two-Level Method For Improved Efficiency In High-Dimensional Settings 1TU Bergakademie Freiberg, Germany; 2U Passau, Germany
Slice sampling is a Markov chain Monte Carlo (MCMC) method for drawing (approximately) random samples from a posterior distribution that is typically only known up to a normalizing constant. The method is based on sampling a new state on a slice, i.e., a level set of the target density function. Slice sampling is especially interesting because it is tuning-free and guarantees a move to a new state, which can result in a lower autocorrelation compared to other MCMC methods.
However, finding such a new state can be computationally expensive due to frequent evaluations of the target density, especially in high-dimensional settings. To mitigate these costs, we introduce a delayed acceptance mechanism that incorporates an approximate target density for finding potential new states. We will demonstrate the effectiveness of our method through various numerical experiments and outline a possible extension of our two-level method into a multilevel framework.
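To make the cost structure concrete, here is a minimal sketch of the basic shrinkage slice sampler with a counter for target-density evaluations, which is exactly the cost the delayed-acceptance mechanism reduces by screening candidates with the approximate density first (function names and tuning are illustrative, not from the talk):

```python
import numpy as np

def slice_sample(log_f, x0, n_samples, w=1.0, seed=0):
    # Shrinkage slice sampler: draw a level uniformly under the density, then
    # shrink a bracket of width w around the current state until a candidate on
    # the slice is found.  Every candidate costs one evaluation of log_f; the
    # delayed-acceptance variant screens candidates with a cheap surrogate first.
    rng = np.random.default_rng(seed)
    x, out, n_evals = x0, [], 0
    for _ in range(n_samples):
        log_y = log_f(x) + np.log(rng.random())    # slice level under f(x)
        n_evals += 1
        lo = x - w * rng.random()
        hi = lo + w
        while True:
            x_new = rng.uniform(lo, hi)
            n_evals += 1
            if log_f(x_new) >= log_y:              # candidate lies on the slice
                break
            if x_new < x:                          # otherwise shrink towards x
                lo = x_new
            else:
                hi = x_new
        x = x_new
        out.append(x)
    return np.array(out), n_evals

# standard normal target (unnormalized log-density)
samples, cost = slice_sample(lambda t: -0.5 * t * t, 0.0, 3000, w=2.0)
```

Since every shrinkage candidate triggers an evaluation of `log_f`, replacing that check with a cheap surrogate for most rejected candidates is where the two-level idea saves work.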
2:05 pm - 2:30 pm
Metropolis-adjusted interacting particle sampling 1TU Bergakademie Freiberg, Germany; 2University of Mannheim, Germany; 3Heidelberg University, Germany
In recent years, various interacting particle samplers have been developed to sample from complex target distributions, such as those found in Bayesian inference. These samplers are motivated by the mean-field limit perspective and implemented as ensembles of particles that move in the product state space according to coupled stochastic differential equations. The ensemble approximation and numerical time stepping used to simulate these systems can introduce bias and affect the invariance of the particle system with respect to the target distribution. To correct for this, we investigate the use of a Metropolization step, similar to the Metropolis-adjusted Langevin algorithm. We examine both ensemble- and particle-wise Metropolization and prove basic convergence of the resulting ensemble Markov chain to the target distribution. Our results demonstrate the benefits of this correction in numerical examples for popular interacting particle samplers such as affine invariant interacting Langevin dynamics, consensus-based sampling, and stochastic Stein variational gradient descent.
2:30 pm - 2:55 pm
A Unified Framework for Pattern Recovery in Penalized Estimation 1TU Wien, Austria; 2University of Burgundy, France; 3University of Wroclaw; 4University of Angers, France
We consider the framework of penalized estimation where the penalty term is given by a polyhedral norm, or more generally, a polyhedral gauge, which encompasses methods such as LASSO and generalized LASSO, SLOPE, OSCAR, PACS and others. Each of these estimators can uncover a different structure or “pattern” of the unknown parameter vector. We define a novel and general notion of patterns based on subdifferentials and formalize an approach to measure pattern complexity. For pattern recovery, we provide a minimal condition for a particular pattern to be detected with positive probability, the so-called accessibility condition. We make the connection to estimation uniqueness by showing that uniqueness holds if and only if no pattern with complexity exceeding the rank of the $X$-matrix is accessible. Subsequently, we introduce the noiseless recovery condition which is a stronger requirement than accessibility and which can be shown to play exactly the same role as the well-known irrepresentability condition for the LASSO – in that the probability of pattern recovery is bounded by 1/2 if the condition is not satisfied. Through this, we unify and extend the irrepresentability condition to a broad class of penalized estimators using an interpretable criterion. We also look at the “gap” between accessibility and the noiseless recovery condition and discuss that our criteria show that it is more pronounced for simple patterns. Finally, we prove that the noiseless recovery condition can indeed be relaxed when turning to so-called thresholded penalized estimation: in this setting, the accessibility condition is already sufficient (and necessary) for sure pattern recovery provided that the signal of the pattern is large enough. We demonstrate how our findings can be interpreted through a geometrical lens throughout the talk and illustrate our results for the Lasso as well as other estimation procedures. [See also arXiv:2307.10158]
2:55 pm - 3:20 pm
Robust posterior sampling using multiple Laplace approximations TU Bergakademie Freiberg, Germany
In Bayesian inference, approximating the posterior distribution accurately is essential for deriving meaningful probabilistic insights from data. The Laplace approximation is a widely used method to approximate posterior distributions by fitting a Gaussian distribution centered at the mode. However, as a Gaussian approximation, it often fails when the posterior distribution has multiple modes or exhibits non-Gaussian characteristics. To address these limitations, we introduce a method that utilizes a linear combination of multiple Laplace approximations. We demonstrate that this method remains robust as the number of observations increases or observation noise decreases. This improved approximation technique can be effectively integrated into sampling methods for integral calculations. We specifically implemented our approach in the context of importance sampling and analyzed the convergence of the effective sample size. The robustness and efficiency of our method are illustrated through two examples, showing its potential to enhance posterior approximation in practical scenarios.
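A hedged one-dimensional sketch of the idea: build a Gaussian at each mode via the Laplace approximation, mix them, and use the mixture as an importance sampling proposal, monitoring the effective sample size. The finite-difference curvature estimate and the equal mixture weights are simplifying assumptions, not the talk's exact construction:

```python
import numpy as np

def laplace_mixture_is(log_post, modes, n, seed=0):
    # Build a Gaussian at each supplied mode with variance 1 / (-d^2 log_post),
    # mix them with equal weights, and use the mixture as an importance
    # sampling proposal; return samples, normalized weights, and the ESS.
    rng = np.random.default_rng(seed)
    h = 1e-4
    sig = []
    for m in modes:
        d2 = (log_post(m + h) - 2 * log_post(m) + log_post(m - h)) / h ** 2
        sig.append(np.sqrt(-1.0 / d2))             # Laplace standard deviation
    sig = np.array(sig)
    modes = np.asarray(modes, dtype=float)
    comp = rng.integers(0, len(modes), size=n)     # mixture component labels
    x = rng.normal(modes[comp], sig[comp])
    q = np.mean([np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
                 for m, s in zip(modes, sig)], axis=0)
    w = np.exp(log_post(x)) / q                    # importance weights
    w /= w.sum()
    ess = 1.0 / np.sum(w ** 2)                     # effective sample size
    return x, w, ess

# bimodal target (unnormalized): equal mixture of N(-3, 1) and N(3, 1)
def log_post(t):
    return np.log(np.exp(-0.5 * (t - 3.0) ** 2) + np.exp(-0.5 * (t + 3.0) ** 2))

x, w, ess = laplace_mixture_is(log_post, modes=[-3.0, 3.0], n=4000)
```

A single Laplace approximation centered at either mode would miss half the mass; the mixture proposal keeps the importance weights nearly flat, so the ESS stays close to the sample size.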
|
1:40 pm - 3:20 pm | S13 (5): Nonparametric and asymptotic statistics Location: ZEU 250 Floor plan Session Chair: Alexander Kreiss Session Chair: Leonie Selk |
|
1:40 pm - 2:05 pm
Convergence Rates for the Maximum A Posteriori Estimator in PDE-Regression Models with Random Design Ruprecht-Karls-Universität Heidelberg, Germany
We consider the statistical inverse problem of recovering a parameter $\theta \in H^\alpha$ from data arising from a Gaussian regression model given by
$Y = \mathscr{G}(\theta)(Z) + \varepsilon$,
where $\mathscr{G}: \mathbb{L}^2 \to \mathbb{L}^2$ is a nonlinear forward map, $Z$ represents random design points, and $\varepsilon$ denotes Gaussian noise. Our estimation strategy is based on a least squares approach with $\|\cdot\|_{H^\alpha}$-constraints. We establish the existence of a least squares estimator $\hat{\theta}$ as a maximizer of a specified functional under Lipschitz-type assumptions on the forward map $\mathscr{G}$. A general concentration result is shown, which is used to prove consistency of $\hat{\theta}$ and establish upper bounds for the prediction error. The corresponding rates of convergence reflect not only the smoothness of the parameter of interest but also the ill-posedness of the underlying inverse problem. We apply this general model to the Darcy problem, where the recovery of an unknown coefficient function $f$ is the primary focus. For this example, we also provide specific rates of convergence for both the prediction and estimation errors. Additionally, we briefly discuss the applicability of the general model to other problems.
2:05 pm - 2:30 pm
Shift-Dispersion Decompositions of Wasserstein and Cramér Distances 1Technical University of Munich, Germany; 2Karlsruhe Institute for Technology, Germany; 3Heidelberg University
Divergence functions are measures of distance or dissimilarity between probability distributions that serve various purposes in statistics and applications. We propose decompositions of Wasserstein and Cramér distances, which compare two distributions by integrating over their differences in distribution or quantile functions, into directed shift and dispersion components. These components are obtained by dividing the differences between the quantile functions into contributions arising from shift and dispersion, respectively. Our decompositions add information on how the distributions differ in a condensed form and consequently enhance the interpretability of the underlying divergences. We show that our decompositions satisfy a number of natural properties and are unique in doing so in location-scale families. The decompositions allow us to derive sensitivities of the divergence measures to changes in location and dispersion, and they give rise to weak stochastic order relations that are linked to the usual stochastic and the dispersive order. Our theoretical developments are illustrated in two applications, where we focus on forecast evaluation of temperature extremes and on the design of probabilistic surveys in economics.
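A simplified empirical version of the quantile-based splitting can be sketched as follows: the 1-Wasserstein distance between two equal-size samples is the mean absolute quantile difference, and subtracting the mean quantile difference (the location contribution) leaves a dispersion/shape remainder. This is a coarse illustration, not the talk's directed decomposition:

```python
import numpy as np

def w1_shift_dispersion(x, y):
    # Decompose the empirical 1-Wasserstein distance between two samples of
    # equal size into a shift part (pure location difference) and the
    # remaining dispersion/shape part, by splitting the quantile differences
    # into their mean and the residual around it.
    qx, qy = np.sort(x), np.sort(y)                # empirical quantile functions
    diff = qy - qx                                 # quantile-function differences
    shift = diff.mean()                            # location contribution
    dispersion = np.abs(diff - shift).mean()       # what a pure shift cannot explain
    w1 = np.abs(diff).mean()                       # empirical W1 distance
    return w1, shift, dispersion

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 100000)
y = rng.normal(1.0, 1.0, 100000)                   # pure location shift of the same law
w1, shift, disp = w1_shift_dispersion(x, y)
```

For a pure location shift the dispersion component should be near zero and the shift component should carry essentially the whole distance, which is the interpretability gain the decomposition aims at.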
2:30 pm - 2:55 pm
Uncovering Intrinsic Decompositions: A Tool to Interpret Statistical Distances Karlsruhe Institute of Technology (KIT), Germany
There is an increasing trend in the field of applied statistics away from only considering summary statistics towards considering entire distributions, especially in prediction tasks. While this allows for a more nuanced treatment of the given distribution or sample, e.g. by calculating some statistical distance measure between two distributions, its lack of interpretability is a considerable downside. In this talk, a decomposition of statistical distances is proposed, dividing them into easily interpretable components of location and dispersion as well as asymmetric and symmetric shape. The decomposition algorithm sequentially minimizes the distance that can be attained by changing only one of these characteristics in the considered distributions. For that, we use transformations that are invariant with respect to all characteristics other than the one we are interested in. These transformations follow directly from stochastic orders that are commonly used to define measures of location, dispersion, etc. This approach can be applied to all statistical distances and lets the chosen distance measure induce the measurement of the individual components. The decomposition is empirically illustrated using the comparison of historical and recent temperature data.
2:55 pm - 3:20 pm
Unlinked regression under vanishing variance 1University of Heidelberg; 2Catholic University of Eichstätt-Ingolstadt
A standard problem in shape-constrained curve estimation is isotonic regression where the regression function is non-decreasing and is estimated by means of observed data pairs $(x_i,Y_i)$, $i=1,\dots,n$. We remove the assumption of linked regression data, i.e., we do not know to which design point $x_j$ the response $Y_i$ belongs.
In this model, we study an estimator of the regression function that essentially relies on inverting the estimated distribution function. We derive convergence rates, under the assumption that the variance in the noise terms decays to zero at a suitable rate. Here, we distinguish both a kernel smoothed and an unsmoothed version of our estimator and argue when the smoothed version is superior. We also provide a local functional central limit theorem for the unsmoothed estimator. Finally, we present a numerical illustration supporting our results.
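The inversion idea behind the unsmoothed estimator can be sketched directly: under monotonicity, sorting the unlinked responses and pairing them with the sorted design points inverts the estimated distribution function. The regression function, noise level, and sample size below are illustrative toy choices, not the talk's setting:

```python
import numpy as np

def unlinked_isotonic(x, y):
    # Quantile-matching estimator: since m is non-decreasing, the sorted
    # responses estimate m evaluated at the sorted design points, which
    # amounts to inverting the estimated distribution function of Y.
    order = np.argsort(x)
    return x[order], np.sort(y)

rng = np.random.default_rng(7)
x = rng.random(5000)
# unlinked data: responses from m(x) = x^2 with small noise, randomly permuted
y = rng.permutation(x ** 2 + 0.01 * rng.normal(size=5000))
xs, m_hat = unlinked_isotonic(x, y)
err = np.max(np.abs(m_hat - xs ** 2))
```

The vanishing-variance assumption is visible here: since sorting is non-expansive in the sup norm, the estimation error is controlled by the size of the noise, and it degrades as the noise level grows.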
|
2:30 pm - 3:20 pm | S 9 Keynote: Finance, insurance and risk: Quantitative methods Location: POT 81 Floor plan Session Chair: Nils-Christian Detering Session Chair: Peter Ruckdeschel |
|
2:30 pm - 3:20 pm
Modelling contagious bank runs We develop a modelling framework for contagion in financial networks arising from bank runs. We discuss different strategies that institutions which experience a bank run might use to satisfy their liquidity needs and analyse and compare implications of these different strategies for systemic risk. We find that the magnitude of contagion is highly sensitive to the choice of the mitigating strategies used by the market participants. We provide an application of our framework to financial stress testing and an empirical case study. |
3:20 pm - 3:50 pm | Coffee Break Location: Foyer Potthoff Bau Floor plan |
3:20 pm - 3:50 pm | Coffee Break Location: POT 168 Floor plan |
3:50 pm - 4:40 pm | S 1 (6): Machine Learning Location: POT 06 Floor plan Session Chair: Shayan Hundrieser |
|
3:50 pm - 4:15 pm
Statistical guarantees for stochastic Metropolis-Hastings 1Universität Hamburg, Germany; 2Karlsruhe Institute of Technology, Germany
Uncertainty quantification is a key issue when considering the application of deep neural network methods in science and engineering. To this end, numerous Bayesian neural network approaches have been introduced. The main challenge is to construct an algorithm which is applicable to the large sample sizes and parameter dimensions of modern applications on the one hand and which admits statistical guarantees on the other hand. A stochastic Metropolis-Hastings step saves computational costs by calculating the acceptance probabilities only on random (mini-)batches, but reduces the effective sample size leading to less accurate estimates. We demonstrate that this drawback can be fixed with a simple correction term. Focusing on deep neural network regression, we prove a PAC-Bayes oracle inequality which yields optimal contraction rates and we analyze the diameter and show high coverage probability of the resulting credible sets. The method is illustrated with a simulation example.
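As a toy illustration of where the mini-batching enters, the following sketches a naive stochastic Metropolis-Hastings step in which the log-likelihood ratio is estimated on a random mini-batch and scaled up by n/batch. The correction term proposed in the talk is deliberately not reproduced here, and all names and tuning choices are illustrative:

```python
import numpy as np

def stochastic_mh(log_lik_i, log_prior, data, x0, n_iter, batch=32, step=0.1, seed=0):
    # Naive stochastic Metropolis-Hastings: the acceptance ratio uses only a
    # random mini-batch, with the log-likelihood ratio rescaled by n / batch.
    rng = np.random.default_rng(seed)
    n = len(data)
    x = x0
    chain = []
    for _ in range(n_iter):
        prop = x + step * rng.normal()             # random-walk proposal
        idx = rng.choice(n, size=batch, replace=False)
        ll_ratio = (n / batch) * np.sum(log_lik_i(data[idx], prop)
                                        - log_lik_i(data[idx], x))
        log_alpha = ll_ratio + log_prior(prop) - log_prior(x)
        if np.log(rng.random()) < log_alpha:
            x = prop
        chain.append(x)
    return np.array(chain)

# toy example: posterior of a normal mean with a N(0, 10^2) prior
data = np.random.default_rng(42).normal(1.0, 1.0, size=500)
chain = stochastic_mh(lambda d, m: -0.5 * (d - m) ** 2,
                      lambda m: -0.5 * m ** 2 / 100.0,
                      data, x0=0.0, n_iter=4000)
```

The extra Monte Carlo noise in `ll_ratio` is the reduced-effective-sample-size effect the abstract refers to; the talk's contribution is a simple correction term that restores statistical guarantees despite it.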
4:15 pm - 4:40 pm
Lévy Langevin Monte Carlo for heavy-tailed target distributions TU Dresden, Germany
We extend the Monte Carlo method of Oechsler (2024) to the setting of a target distribution with heavy tails: we choose a regularly varying distribution and prove the convergence of a solution of a stochastic differential equation to this target. Hereby, the stochastic differential equation is driven by a general Lévy process, unlike in the case of a classical Langevin diffusion. This method is justified, apart from the possibility to sample from non-smooth targets, by the fact that exponential convergence to the invariant distribution holds, which in general cannot be guaranteed by the classical Langevin diffusion in the presence of heavy tails. An advantage over other Langevin Monte Carlo methods is that the method is easy to implement, using only a compound Poisson process as noise term and a numerically manageable drift term.
|
3:50 pm - 4:40 pm | S 6 Keynote: Stochastic modelling in natural sciences Location: POT 81 Floor plan Session Chair: Alexandra Blessing Session Chair: Anton Klimovsky |
|
3:50 pm - 4:40 pm
Stability and instability of almost-surely invariant structures in stochastic systems In this talk I will report on a toolkit for use in evaluating the asymptotic stability or instability of almost-surely invariant subsets of the phase space of a random dynamical system. Such invariant sets arise naturally in PDE as classes of solutions obeying some symmetry, e.g., the set of incompressible vector fields which are constant along some axis. An application to the stochastically driven Lorenz 96 system of coupled oscillators is given, as well as some outlook on proposed applications to systems of practical interest such as the Navier-Stokes equations with stochastic driving. |
3:50 pm - 5:30 pm | S 3 (4): Stochastic Analysis and S(P)DEs Location: POT 151 Floor plan Session Chair: Vitalii Konarovskyi Session Chair: Aleksandra Zimmermann |
|
3:50 pm - 4:15 pm
A regularized Kellerer theorem in arbitrary dimension 1Department of Mathematics, ETH Zürich; 2Department of Statistics, University of Klagenfurt; 3Faculty of Mathematics, University of Vienna
We present a multidimensional extension of Kellerer's theorem on the existence of mimicking Markov martingales for peacocks, a term derived from the French for stochastic processes increasing in convex order. For a continuous-time peacock in arbitrary dimension, after Gaussian regularization, we show that there exists a strongly Markovian mimicking martingale Itô diffusion. A novel compactness result for martingale diffusions is a key tool in our proof. Moreover, we provide counterexamples to show, in dimension $d \geq 2$, that uniqueness may not hold, and that some regularization is necessary to guarantee existence of a mimicking Markov martingale.
4:15 pm - 4:40 pm
On the weak representation property in progressively enlarged filtrations TU Dresden
In this talk we review some results about the propagation of the weak representation property to progressively enlarged filtrations. The enlargement of the reference filtration can be carried by a random time or by a whole semimartingale.
4:40 pm - 5:05 pm
Limit Laws for Critical Dispersion on Complete Graphs 1National University of Singapore; 2Ludwig Maximilian University of Munich
We consider a synchronous process of particles moving on the vertices of a graph $G$, introduced by Cooper, McDowell, Radzik, Rivera and Shiraga (2018). Initially, $M$ particles are placed on a vertex of $G$. In subsequent time steps, all particles that are located on a vertex inhabited by at least two particles jump independently to a neighbour chosen uniformly at random. The process ends at the first step when no vertex is inhabited by more than one particle; we call this (random) time step the dispersion time.
In this work we study the case where $G$ is the complete graph on $n$ vertices and the number of particles is $M=n/2+\alpha n^{1/2} + o(n^{1/2})$, $\alpha\in \mathbb{R}$. This choice of $M$ corresponds to the critical window of the process, with respect to the dispersion time.
We show that the dispersion time, if rescaled by $n^{-1/2}$, converges in $p$-th mean, as $n\rightarrow \infty$ and for any $p \in \mathbb{R}$, to a continuous and almost surely positive random variable $T_\alpha$.
We find that $T_\alpha$ is the absorption time of a standard logistic branching process, thoroughly investigated by Lambert (2005), and we determine its expectation. In particular, in the middle of the critical window we show that $\mathbb{E}[T_0] = \pi^{3/2}/\sqrt{7}$, and furthermore we formulate explicit asymptotics when $|\alpha|$ gets large that quantify the transition into and out of the critical window. We also study the (random) total number of jumps that are performed by the particles until the dispersion time is reached. In particular, we prove that it centers around $\frac27n\ln n$ and that it has variations linear in $n$, whose distribution we describe explicitly.
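The particle process is simple to simulate directly; the sketch below (graph size and seed are illustrative choices) realizes the synchronous dynamics and samples the rescaled dispersion time in the middle of the critical window:

```python
import numpy as np

def dispersion_time(n, m, rng):
    # Synchronous dispersion on the complete graph K_n: m particles start on
    # vertex 0; in each step, every particle sharing its vertex with another
    # jumps to a uniformly random other vertex.  Returns the number of steps
    # until no vertex holds more than one particle.
    counts = np.zeros(n, dtype=int)
    counts[0] = m
    t = 0
    while counts.max() > 1:
        movers = counts * (counts > 1)             # particles on crowded vertices
        new_counts = counts - movers               # particles that stay put
        for v in np.nonzero(movers)[0]:
            targets = rng.integers(0, n - 1, size=movers[v])
            targets = targets + (targets >= v)     # uniform over vertices != v
            np.add.at(new_counts, targets, 1)
        counts = new_counts
        t += 1
    return t

rng = np.random.default_rng(1)
n = 400
# critical window with alpha = 0: M = n / 2; rescale the dispersion time by n^{-1/2}
T = dispersion_time(n, n // 2, rng) / np.sqrt(n)
```

Repeating this for many seeds would give an empirical picture of the limiting random variable $T_0$ whose expectation the talk identifies as $\pi^{3/2}/\sqrt{7}$.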
5:05 pm - 5:30 pm
A Lévy-Itô decomposition for non-stationary processes on Lie groups 1TU Dresden, Germany; 2King's College London, United Kingdom
We investigate stochastic processes with independent, but not necessarily stationary increments on finite-dimensional Lie groups. Assuming stochastic continuity, these processes exhibit strong regularity properties, admitting a modification with càdlàg paths. We analyze the stochastic logarithm of such processes within the associated Lie algebra and its inverse map, the stochastic exponential. Drawing parallels to vector-valued processes, we explore appropriate versions of the Lévy-Khintchine formula and the Lévy-Itô decomposition.
|
3:50 pm - 5:30 pm | S 4 (5): Limit theorems, large deviations and extremes Location: ZEU 160 Floor plan Session Chair: Jan Nagel Session Chair: Marco Oesting |
|
3:50 pm - 4:15 pm
Poissonian pair correlations for dependent random variables Hochschule Ruhr West, Germany
A sequence of real numbers is said to have Poissonian pair correlations (PPC) if $\lim_{N\to\infty}\frac{1}{N}\#\left\{1\leq i,j\leq N\middle | i\neq j, \Vert x_i-x_j\Vert\leq \frac{s}{N}\right\}=2s$ for every $s>0$. Because PPC characterizes equidistribution on a local scale, it has been a topic of considerable interest within the community in recent years. However, the focus has so far been primarily on deterministic sequences and, whenever random variables are involved, on iid sequences. While it is known that iid sequences of uniformly distributed random variables generically have Poissonian pair correlations, the case of dependent random variables is more complicated. This talk takes a closer look at the case of dependent random variables, which to the best of the author's knowledge has not been discussed in the literature yet. More specifically, two types of sequences on the torus are investigated: for sequences of jittered samples, the PPC property depends on how the finite jittered sample is extended to an infinite sequence; for random walks on the torus, results by Schatte on the convergence of sums on the torus can be used to show generic PPC.
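The pair-correlation count in the displayed limit is straightforward to compute empirically; a quick sketch (sample size is an illustrative choice) checks that seeded iid uniforms land near the Poissonian value $2s$:

```python
import numpy as np

def ppc_statistic(x, s):
    # Empirical pair-correlation count (1/N) #{i != j : ||x_i - x_j|| <= s/N},
    # where ||.|| denotes the wrap-around distance on the torus [0, 1).
    n = len(x)
    diff = np.abs(x[:, None] - x[None, :])
    dist = np.minimum(diff, 1.0 - diff)            # torus distance
    close = dist <= s / n
    np.fill_diagonal(close, False)                 # exclude the diagonal i == j
    return close.sum() / n

rng = np.random.default_rng(0)
x = rng.random(2000)
val = ppc_statistic(x, s=1.0)                      # for iid uniforms, concentrates near 2s = 2
```

For the dependent sequences in the talk (jittered samples, random walks on the torus), the same statistic can be tracked along $N$ to probe whether the PPC limit still holds.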
4:15 pm - 4:40 pm
Small-scale asymptotic structure of ordered uniform k-spacings 1University of Bern; 2Igor Sikorsky Kyiv Polytechnic Institute
The construction of uniform spacings is well-established. Consider $n-1$ independent points uniformly distributed over the unit interval. Uniform $1$-spacings correspond to the distances between consecutive points, while $k$-spacings are defined as the distances between points separated by exactly $k-1$ other points.
There is an extensive body of work on the exact and asymptotic properties of uniform $1$-spacings, and to a lesser extent, $k$-spacings, as well as on their numerous statistical applications.
We propose an apparently novel approach, referred to as the local Poisson approximation for $k$-spacings. This method provides a detailed understanding of their asymptotic behavior on small scales across the entire range of possible values. This framework not only immediately yields existing limit theorems for the minimum and maximum $k$-spacings, which were previously proved using more complex techniques such as the Chen-Stein method, but also extends the analysis to encompass the asymptotic properties of all $k$-spacings, regardless of their length.
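The construction of overlapping $k$-spacings, and the classical small-scale limits for $k=1$ that the local Poisson approximation recovers, can be sketched as follows (sample size and seed are illustrative):

```python
import numpy as np

def k_spacings(u, k):
    # Overlapping uniform k-spacings: distances between order statistics that
    # are separated by exactly k - 1 other points, with 0 and 1 adjoined.
    pts = np.concatenate(([0.0], np.sort(u), [1.0]))
    return pts[k:] - pts[:-k]

rng = np.random.default_rng(5)
n = 100000
u = rng.random(n - 1)                              # n - 1 points give n 1-spacings
s1 = k_spacings(u, 1)
# classical small-scale limits: n^2 * min -> Exp(1),  n * max - log n -> Gumbel
scaled_min = n ** 2 * s1.min()
scaled_max = n * s1.max() - np.log(n)
```

The point of the local Poisson approximation is that these extreme-spacing limits, and the behaviour of all intermediate spacings, follow from one common framework rather than from separate arguments such as the Chen-Stein method.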
4:40 pm - 5:05 pm
An approximation for the quantiles of the maxima University of Vienna
Let $X_1,\ldots, X_n \in \mathbb{R}^d$ be a sequence of iid random vectors, where $n\ll d$. A fundamental problem in high-dimensional statistics concerns normal approximations and convergence properties of the maximum statistic $$M_n=\max_{1\leq k\leq d} \frac{1}{\sqrt{n}}\sum_{i=1}^n X_{i,k},$$ whose study was initiated in seminal works by Chernozhukov, Chetverikov and Kato. A next step in understanding the asymptotic properties of $M_n$ and accompanying quantile approximations is the development of Edgeworth-type expansions and corresponding bootstrap methods. A very recent result in this direction was established by Koike, developing an Edgeworth expansion for $\frac{1}{\sqrt{n}}\sum_{i=1}^n X_i$ based on Stein kernels, subject to some regularity conditions. In our project, we view the problem through the lens of Poisson approximations to directly construct an Edgeworth expansion for $M_n$. Our main assumptions are a Cram\'{e}r-type condition for all pairs of components of the $X_i$ and a notion of weak dependence across the dimension. Inverting the Edgeworth expansions, we obtain a Cornish-Fisher-type expansion for the quantiles of $M_n$, which is also second-order accurate. Furthermore, we extend our results to the studentized case, i.e. to the statistic $\max_{1\leq k\leq d} T_{n,k}$, where $T_{n,k}$ is the Student-t statistic of the $k$-th components of the $X_i$.
5:05 pm - 5:30 pm
Decay of correlations for the massless hierarchical Liouville model in infinite volume Weizmann Institute of Science
Let $(A_v)_{v\in \mathcal{T}}$ be the balanced Gaussian Branching Random Walk on a $d$-ary tree $\mathcal{T}$ and let $M^A$ be the multiplicative chaos with parameter $\gamma \in (0, \sqrt{2\log d})$ constructed from $A$.
In this work we establish the precise first-order asymptotics of the negative exponential moment of $M^A$, i.e.\ we prove that for $t_k = \lambda p_\gamma^k$ with $\lambda>0$ and $p_\gamma$ an explicit constant depending only on $\gamma$, we have as $k \to \infty$,
\begin{equation*}
-\frac{1}{d^k} \log \mathbb{E}[e^{-\lambda p_\gamma^k M^A } ] \to h(\lambda),
\end{equation*}
where $h\colon (0,\infty)\to \mathbb{R}$ is a non-explicit positive continuous function.
This result allows us to study the law of $A$ tilted by $e^{-t_k M^A}$ for particular values of $\lambda$, with $k\to \infty$. In this setting we prove that the normalized $L^1$ norm of $A$ in generation $k-a$ is bounded and converges to $0$ when first $k\to \infty$ and then $a\to 0$. As an application we prove that in this setting, under the tilt $e^{-t_k M^A}$ and with $k\to \infty$, the Branching Random Walk $A$ exhibits a weak decay of correlations, which is not present in the non-tilted model.
Our methods also apply to the usual Branching Random Walk $(S_v)_{v\in \mathcal{T}}$ and with $M^A$ replaced by $\frac{1}{2}(M^+ + M^- )$, where $M^+$ and $M^-$ are the multiplicative chaoses with parameter $\gamma \in (0, \sqrt{2\log d})$ constructed from $S$ and $-S$.
In that case we prove that, as $k\to \infty$,
\begin{equation*}
-\frac{1}{d^k} \log \mathbb{E}[e^{- \frac{\lambda p_\gamma^k}{2}( M^+ + M^-) }] \to \tilde h(\lambda),
\end{equation*}
where $\tilde h\colon (0,\infty)\to \mathbb{R}$ is again a non-explicit positive continuous function.
Our models are motivated by Euclidean field theory and can be seen as hierarchical versions of the massless Liouville and the sinh-Gordon field theory in infinite volume.
From this perspective our analysis sheds new light on the existence and the decay of correlations in these models, which are among the major open questions in this area.
|
3:50 pm - 5:30 pm | S 7 (8): Stochastic processes: theory, statistics and numerics Location: POT 51 Floor plan Session Chair: Markus Bibinger |
|
3:50 pm - 4:15 pm
Drift Parameter Estimation of Discretely Observed High-Dimensional Diffusion Processes University of Luxembourg, Luxembourg
In this talk, we explore the parametric statistical inference of the drift function in high-dimensional diffusion processes. Specifically, we examine the statistical performance of the Lasso estimator when the process is observed discretely and under a sparsity condition over the parameter.
In particular, we present error bounds for the estimator, and we provide an in-depth analysis of the contribution of the discretization error, clarifying the impact of the sampling effects in a high-dimensional regime.
This is based on joint work with Chiara Amorino and Mark Podolskij.
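The Lasso approach described in this abstract can be illustrated on a toy model. The sketch below (our own minimal illustration, not the authors' estimator or asymptotic setting; the name `lasso_drift` and the ISTA solver are our choices) fits a sparse drift matrix to Euler increments of an Ornstein-Uhlenbeck process by $\ell_1$-penalized least squares:

```python
import numpy as np

def lasso_drift(X, dt, lam, n_iter=5000):
    """Lasso estimate of the drift matrix A in dX_t = A X_t dt + dW_t
    from discrete observations: l1-penalized least squares on the Euler
    increments, solved by ISTA (proximal gradient descent)."""
    dX = np.diff(X, axis=0)      # (N-1, d) increments
    Z = X[:-1] * dt              # (N-1, d) scaled past states
    d = X.shape[1]
    A = np.zeros((d, d))
    step = 1.0 / (2 * np.linalg.norm(Z.T @ Z, 2))   # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = -2 * (dX - Z @ A.T).T @ Z            # gradient of squared loss
        A = A - step * grad
        A = np.sign(A) * np.maximum(np.abs(A) - step * lam, 0.0)  # soft-threshold
    return A

# simulate a 3-dimensional OU process with sparse drift A = -I
rng = np.random.default_rng(0)
d, N, dt = 3, 20000, 0.01
A_true = -np.eye(d)
X = np.zeros((N, d))
for i in range(1, N):
    X[i] = X[i - 1] + (A_true @ X[i - 1]) * dt + np.sqrt(dt) * rng.standard_normal(d)

A_hat = lasso_drift(X, dt, lam=0.5)
```

The $\ell_1$ penalty zeroes out the (truly absent) off-diagonal interactions while shrinking the diagonal entries toward zero, which mirrors the sparsity recovery discussed in the talk.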
4:15 pm - 4:40 pm
Testing the rank of the spot covariance matrix of a multidimensional semi-martingale Humboldt-Universität zu Berlin
Our work deals with the estimation of the instantaneous or spot covariance matrix of a continuous semi-martingale X(t) using high-frequency observations. It has been shown that estimating and testing the rank of a covariance matrix leads to fast rates that are affected by a potential drift. Therefore, a potential drift term cannot be neglected and a modified estimator has to be considered. For this reason, we propose a re-centered covariance estimator instead of a second moment estimator. Our aim is to infer the maximal rank of the spot covariance matrix of the multidimensional process X(t). Consequently, we consider the null hypothesis that the rank of the spot covariance matrix is at most r for all t, and compute a corresponding critical value.
This talk is based on joint work with Markus Reiß.
4:40 pm - 5:05 pm
Adaptive Elastic-Net Estimation for Ergodic Diffusion Processes: oracle properties and non-asymptotic bounds Hamburg University, Germany
Penalized estimation methods for diffusion processes have recently gained significant attention due to their effectiveness in handling high-dimensional stochastic systems. In this work, we introduce an adaptive Elastic-Net estimator for ergodic diffusion processes observed under high-frequency sampling schemes. Our method combines adaptive $\ell_1$ and $\ell_2$ regularization, enhancing prediction accuracy and interpretability while effectively recovering the sparse underlying structure of the model.
We provide finite-sample guarantees for the estimator's performance by deriving high-probability non-asymptotic bounds for the $\ell_2$ estimation error. These results complement the established oracle properties in an asymptotic regime with mixed convergence rates, ensuring consistent selection of the relevant interactions and achieving optimal rates of convergence. Furthermore, we utilize our results to analyze one-step-ahead predictions, offering non-asymptotic control over the prediction error.
The performance of our method is evaluated through simulations and real data applications, demonstrating its effectiveness, particularly in scenarios with strongly correlated variables.
5:05 pm - 5:30 pm
Non-ergodic statistics for stationary-increment harmonizable stable processes Universität Ulm, Germany
We consider the class of stationary-increment harmonizable stable processes $X=\{X(t):t\in\mathbb{R}\}$ defined by
\begin{align*}
X(t)=\mathrm{Re}\left(\int_\mathbb{R}\left(e^{itx}-1\right)\Psi(x)M_\alpha(dx)\right),\quad t\in\mathbb{R},
\end{align*}
where $M_\alpha$ is an isotropic complex symmetric $\alpha$-stable ($S\alpha S$) random measure with Lebesgue control measure.
This class contains real harmonizable fractional stable motions, which are a generalization of the harmonizable representation of fractional Brownian motions to the stable regime, when $\Psi(x) = \vert x\vert^{-H-1/\alpha}$, $x\in\mathbb{R}$.
We give conditions for the integrability of the path of $X$ with respect to a finite, absolutely continuous measure, and show that the convolution with a suitable measure yields a real stationary harmonizable $S\alpha S$ process with finite control measure.
Such a process admits a LePage type series representation consisting of sine and cosine functions with random amplitudes and frequencies, which can be estimated consistently using the periodogram.
Combined with kernel density estimation, this allows us to construct consistent estimators for the index of stability $\alpha$ as well as the kernel function $\Psi$ in the integral representation of $X$ (up to a constant factor).
For real harmonizable fractional stable motions consistent estimators for the index of stability $\alpha$ and its Hurst parameter $H$ are given, which are computed directly from the periodogram frequency estimates.
|
3:50 pm - 5:30 pm | S 8 (6): Finance, insurance and risk: Modelling Location: POT 361 Floor plan Session Chair: Peter Hieber Session Chair: Frank Seifried |
|
3:50 pm - 4:15 pm
Competitive portfolio optimization via a value-at-risk based constraint Karlsruhe Institute of Technology
Motivated by the competitive investment behavior of hedge fund managers, competitive portfolio optimization problems are a widely studied topic in the (continuous-time) portfolio optimization literature. In this talk, we propose a new way to incorporate competition into the classical expected utility maximization problem by using a value-at-risk-based constraint. Instead of just maximizing the expected utility of their terminal wealth, agents also aim to outperform a weighted average of their competitors’ terminal wealth with some fixed probability. In the special case of logarithmic utility, we determine and discuss optimal solutions in the form of Nash equilibria. If time permits, we also discuss the influence of several model parameters on the Nash equilibria in the special case of a Black-Scholes market.
4:15 pm - 4:40 pm
Multi-Agent and Mean Field Games for Optimal Investment under Relative Performance Concerns with Jump Signals Technische Universität Berlin, Germany
In contrast to the existing literature on mean field equilibrium models for financial markets, which focuses on differences in agents' trading needs, this research project aims to extend these models by also considering differences in agents' information flows.
We bring together the models of [Bank, Körber, 2022] and [Lacker, Zariphopoulou, 2019] and investigate equilibrium problems in continuous-time stochastic control for optimal investment. Specifically, we model a multi-agent game, where a finite number of investors receives signals about impending price shocks and interacts through relative performance concerns.
For this purpose, we introduce novel signal-driven strategies utilizing Meyer-$\sigma$-measurable controls and a utility function with interactions as in [Lacker, Zariphopoulou, 2019], where investors assess their wealth based on their individual relative risk aversion and the average wealth of their peers, mediated by a concern parameter. A key aspect is the role of a single Poisson random measure, which drives jumps in both the market and signal processes, capturing both common and idiosyncratic noise. This requires understanding of the information shared among investors to accurately define the common noise filtration and the concept of mean wealth when transitioning to the mean field setting, via randomization over type vectors representing individual investor characteristics such as signal quality or quantity.
In both the multi-agent and the mean field case, using the dynamic programming techniques from [Bank, Körber, 2022], we derive and explicitly solve the corresponding HJB-equation and prove a verification theorem for best-response controls of a single (resp. representative) investor interacting with a fixed environment of other investors.
The existence of equilibria in both the multi-agent and mean field games is established using Schauder's Fixed Point Theorem under appropriate assumptions on the investors' characteristics, particularly their signal processes.
As a final step, we provide a numerical example to illustrate equilibria from a financial-economic perspective, addressing questions such as how much investors should care about information known by their peers.
4:40 pm - 5:05 pm
Multi-asset optimal trade execution in an Obizhaeva-Wang-type model University of Wuppertal, Germany; University of Duisburg-Essen, Germany
We analyze a continuous-time optimal trade execution problem in multiple assets where the price impact and the resilience can be matrix-valued stochastic processes. Our starting point is a stochastic control problem where the control process is of finite variation, possibly with jumps, and acts as an integrator both in the state dynamics and in the cost functional. We discuss how this problem can be continuously extended from finite-variation controls to progressively measurable controls and how the extended problem is linked to a linear-quadratic (LQ) stochastic control problem. We obtain a solution of the LQ stochastic control problem by using results from the theory on LQ stochastic optimal control. From this we recover a solution of the extended problem. Finally, we present an example where it is optimal to start trading also in an asset where the initial position is already zero.
This is based on joint work with Thomas Kruse and Mikhail Urusov.
|
3:50 pm - 5:30 pm | S 9 (1): Finance, insurance and risk: Quantitative methods Location: POT 112 Floor plan Session Chair: Nils-Christian Detering Session Chair: Peter Ruckdeschel |
|
3:50 pm - 4:15 pm
Pricing of geometric Asian options in the Volterra-Heston model Johannes Kepler University Linz, Austria; RICAM Linz, Austria
Geometric Asian options are a type of option where the payoff depends on the geometric mean of the underlying asset over a certain period of time. This paper is concerned with the pricing of such options for the class of Volterra-Heston models, covering the rough Heston model. We are able to derive semi-closed formulas for the prices of geometric Asian options with fixed and floating strikes for this class of stochastic volatility models. These formulas require the explicit calculation of the conditional joint Fourier transform of the logarithm of the stock price and the logarithm of the geometric mean of the stock price over time. Linking our problem to the theory of affine Volterra processes, we find a representation of this Fourier transform as a suitably constructed stochastic exponential, which depends on the solution of a Riccati-Volterra equation. Finally we provide a numerical study for our results in the rough Heston model.
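For orientation, the payoff structure described above can be illustrated by Monte Carlo pricing under plain Black-Scholes dynamics (an illustration only, under assumed parameters; the talk itself derives semi-closed formulas in the Volterra-Heston class, which this sketch does not implement):

```python
import numpy as np

def geometric_asian_call_mc(s0, K, r, sigma, T, n_steps, n_paths, seed=0):
    """Monte Carlo price of a fixed-strike geometric Asian call under
    Black-Scholes dynamics: the payoff depends on the geometric mean
    of the asset along each simulated path."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    log_s = np.log(s0) + np.cumsum(
        (r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z, axis=1)
    geo_mean = np.exp(log_s.mean(axis=1))      # geometric average per path
    payoff = np.maximum(geo_mean - K, 0.0)
    return np.exp(-r * T) * payoff.mean()

price = geometric_asian_call_mc(100.0, 100.0, 0.02, 0.3, 1.0, 50, 20000)
```

Because averaging reduces the effective volatility (roughly by a factor $1/\sqrt{3}$), the geometric Asian call is cheaper than the corresponding European call.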
4:15 pm - 4:40 pm
A comparison principle based on couplings of partial integro-differential operators Bielefeld University, Germany; TU Delft, The Netherlands
In this talk, we present a new perspective on the comparison principle for viscosity solutions of Hamilton-Jacobi (HJ), HJ-Bellman, and HJ-Isaacs equations.
Our approach innovates in three ways: (1) We reinterpret the classical doubling-of-variables method in the context of second-order equations by casting the Ishii-Crandall Lemma into a test function framework. This adaptation allows us to effectively handle non-local integral operators, such as those associated with L\'{e}vy processes. (2) We translate the key estimate on the difference of Hamiltonians in terms of an adaptation of the probabilistic notion of couplings, providing a unified approach that applies to both continuous and discrete operators. (3) We strengthen the sup-norm contractivity resulting from the comparison principle to one that encodes continuity in the strict topology.
We apply our theory to derive well-posedness results for partial integro-differential operators. In the context of spatially dependent L\'{e}vy operators, we show that the comparison principle is implied by a Wasserstein-contractivity property on the L\'{e}vy jump measures.
4:40 pm - 5:05 pm
Semi-static variance-optimal hedging with self-exciting jumps University of Padova; TU Dresden; University of Bari
The aim of this work is to study a hedging problem in an incomplete market model in which the underlying log-asset price is driven by a diffusion process with self-exciting jumps of Hawkes type.
We aim at hedging a variance swap (target claim) at time $T > 0$, using a basket of European options (contingent claims). We investigate a semi-static variance-optimal hedging strategy, combining dynamic (i.e., continuously rebalanced) and static (i.e., buy-and-hold) positions to minimize the residual error variance at time $T$.
The semi-static strategy has already been computed in the literature for different models of the asset price $S$. The purpose of our work is to solve the hedging problem for a previously unexplored model featuring self-exciting jumps of Hawkes type. A key aspect of our work is the generality of the framework, both in the hedging problem and in the model investigated. Moreover, research into models with self-exciting jumps is significant, as prices in financial markets (e.g. commodity markets) have been observed to exhibit spikes with clustered behavior.
In our work, we establish and analyze our model, studying its properties as an affine semimartingale. We characterize its Laplace transform to rewrite contingent claims using a Fourier transform representation. We finally obtain a semi-explicit expression for the hedging strategy. A possible further development might regard the problem of optimal selection of static hedging assets and potential applications in energy markets.
5:05 pm - 5:30 pm
Optimal Execution Strategies in Short-Term Energy Markets under (Marked) Hawkes Processes University of Amsterdam, The Netherlands; Statkraft Trading GmbH
This research develops theoretical tools for the risk management and optimization of intermittent renewable energy in short-term electricity markets. The first part introduces a stochastic model based on a mutually exciting marked Hawkes process to capture key empirical characteristics of Germany's intraday energy market prices. The model effectively reflects the increasing market activity, volatility patterns, and the Samuelson effect observed in the realized mid-price process as time to delivery approaches. By fitting the empirical signature plot of the mid-price process, the model provides a robust calibration method through a closed-form solution, using least squares to match the empirical data.
Building on this foundation, the second part of this research explores optimal execution strategies for energy companies managing large trading volumes, whether from outages, renewable generation, or trading decisions. The study employs a linear transient price impact model combined with a bivariate Hawkes process, which models the flow of market orders, to solve a meta-order execution problem. The optimal execution problem is solved explicitly in this context, due to the affine structure of the state space dynamics. The model determines an optimal liquidation strategy which minimizes the expected costs, allowing traders to react to the actions of other market participants. The research concludes with a back-testing transaction cost analysis for the German intraday energy market, comparing the proposed optimal strategy against benchmark execution strategies like Time Weighted Average Price (TWAP) and Volume Weighted Average Price (VWAP). The results confirmed that the optimal strategy is cost-efficient, significantly reducing transaction costs compared to the benchmark strategies. Further analysis of individual hourly products revealed that cost reductions were particularly substantial for early trading hourly products and stabilized thereafter, with a slight decrease in the mid-day products. This indicates that cost savings are negatively correlated with the average traded volume, suggesting that less volatile, more liquid products might benefit less from the optimal strategy compared to less liquid ones, where improvements are more pronounced.
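The benchmark strategies mentioned above (TWAP and VWAP) are simple scheduling rules; a minimal sketch of both (our own illustration of the standard definitions, not the talk's optimal Hawkes-based strategy):

```python
import numpy as np

def twap_schedule(total_volume, n_slices):
    """Time-weighted average price schedule: trade equal slices
    in each of the n_slices time intervals."""
    return np.full(n_slices, total_volume / n_slices)

def vwap_schedule(total_volume, market_volumes):
    """Volume-weighted average price schedule: trade proportionally
    to the (forecast or historical) market volume in each interval."""
    w = np.asarray(market_volumes, dtype=float)
    return total_volume * w / w.sum()

twap = twap_schedule(1000.0, 8)
vwap = vwap_schedule(1000.0, [10.0, 30.0, 60.0])
```

The optimal strategy of the talk improves on these static schedules by reacting to the order flow modeled by the bivariate Hawkes process.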
|
3:50 pm - 5:30 pm | S10 (3): Stochastic optimization and operation research Location: POT 13 Floor plan Session Chair: Nikolaus Schweizer Session Chair: Ralf Werner |
|
3:50 pm - 4:15 pm
Probabilistic discrepancy bounds for different drawing strategies RWTH Aachen, Germany
In this presentation we investigate the discrepancy of a sample. The discrepancy measures how well a distribution is represented by a sample (for example in Monte Carlo simulation) and how evenly distributed the realizations are. We focus on calculating a priori probabilistic bounds for the discrepancy: before sampling, we want to know how probable it is that the discrepancy falls below a specific value. We do this for pseudo-random draws, but also for variance reduction techniques such as systematic sampling or antithetic variates. We will see that the runtime to calculate probabilistic bounds decreases significantly if we apply variance reduction techniques.
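As a concrete instance of the quantity discussed above, the one-dimensional star discrepancy admits a closed-form expression from the sorted sample; the sketch below (our own illustration, using the classical formula $D^*_n = \max_i \max(i/n - u_{(i)},\, u_{(i)} - (i-1)/n)$) also shows how antithetic draws are constructed:

```python
import numpy as np

def star_discrepancy_1d(u):
    """Exact star discrepancy of a 1-D sample on [0, 1], computed from
    the sorted points u_(1) <= ... <= u_(n)."""
    u = np.sort(np.asarray(u, dtype=float))
    n = len(u)
    i = np.arange(1, n + 1)
    return max(np.max(i / n - u), np.max(u - (i - 1) / n))

rng = np.random.default_rng(0)
n = 512
plain = rng.uniform(size=n)                      # plain pseudo-random draws
half = rng.uniform(size=n // 2)
antithetic = np.concatenate([half, 1.0 - half])  # antithetic pairs (u, 1 - u)

d_plain = star_discrepancy_1d(plain)
d_anti = star_discrepancy_1d(antithetic)
```

The equidistant midpoint grid $\{(2i-1)/(2n)\}$ attains the minimal value $1/(2n)$, which is a convenient sanity check for the implementation.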
4:15 pm - 4:40 pm
Exponential convergence of general iterative proportional fitting procedures Eberhard Karls Universität Tübingen, Germany
The information projection is a frequently occurring optimization problem, wherein the goal is to compute the projection of a probability measure onto a set of measures with respect to the relative entropy. A famous instance of this problem arises from entropic regularization of optimal transport, which has recently seen a surge in applications because it allows for the use of the iterative proportional fitting procedure (IPFP, also called Sinkhorn's algorithm). In this work, we study convergence properties of generalized IPFPs for more general sets of probability measures than those arising from optimal transport. In particular, we establish exponential convergence guarantees for general information projection problems whenever the set which is projected onto is defined through constraints arising from linear function spaces. This unifies and generalizes recent results from multi-marginal, adapted and martingale optimal transport. The proofs are based on duality and establishing a Polyak-Lojasiewicz inequality. A key contribution is to illuminate the role of the geometric interplay between the linear function spaces determining the constraints.
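The IPFP/Sinkhorn iteration referenced above is short enough to state in full; a minimal sketch for the standard entropic optimal transport case (discrete marginals; our own toy example, not the generalized information projections studied in the talk):

```python
import numpy as np

def sinkhorn(C, mu, nu, eps=0.2, n_iter=2000):
    """Entropic optimal transport via the iterative proportional fitting
    procedure (Sinkhorn's algorithm): alternately rescale the rows and
    columns of the Gibbs kernel K = exp(-C/eps) to fit the marginals."""
    K = np.exp(-C / eps)
    u = np.ones_like(mu)
    v = np.ones_like(nu)
    for _ in range(n_iter):
        u = mu / (K @ v)      # fit the row marginal mu
        v = nu / (K.T @ u)    # fit the column marginal nu
    return u[:, None] * K * v[None, :]

# toy example: transport between two discrete Gaussians on a grid
x = np.linspace(0, 1, 30)
C = (x[:, None] - x[None, :]) ** 2        # squared-distance cost
mu = np.exp(-((x - 0.3) ** 2) / 0.02); mu /= mu.sum()
nu = np.exp(-((x - 0.7) ** 2) / 0.02); nu /= nu.sum()
P = sinkhorn(C, mu, nu)
```

The exponential convergence guarantees established in the talk concern far more general projection constraints than the two fixed marginals fitted here.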
|
3:50 pm - 5:30 pm | S11 (2): Time series - Spectral Analysis and Limit Theorems Location: POT 251 Floor plan Session Chair: Marie Düker |
|
3:50 pm - 4:15 pm
Evaluating Multivariate Singular Spectrum Analysis via Multiple Testing Error Rates Data Science Center- University of Bremen
Appropriate preprocessing is a fundamental prerequisite for analyzing a noisy dataset. The aim of this paper is to utilize the nonparametric preprocessing method known as (Multivariate) Singular Spectrum Analysis ((M)SSA) on various (multivariate) datasets. These datasets are then subjected to multiple statistical hypothesis tests.
In this study, we compare (M)SSA with three other state-of-the-art preprocessing methods in terms of denoising quality and the statistical power of subsequent multiple tests. The competing methods include both parametric and nonparametric approaches. Furthermore, we investigate the effectiveness of these preprocessing methods in controlling type I errors, which play a critical role in ensuring the reliability of statistical inferences.
Our evaluation primarily focuses on the (empirical) Family-Wise Error Rate (FWER) and on empirical power. We utilize these metrics to assess the ability of (M)SSA, particularly, to maintain the desired level of error control and to demonstrate its superiority in terms of empirical power over other methods. This analysis provides valuable insights into the robustness and reliability of the preprocessing methods, particularly in terms of noise reduction, and their ability to control the empirical type I error rate effectively across simulated and real-world datasets. Our findings demonstrate that (M)SSA can be considered a promising method to reduce noise, extract the main signal from noisy data, and detect statistically significant signal components.
4:15 pm - 4:40 pm
Trend estimation for time series with polynomial-tailed noise Universität Bamberg, Institut für Statistik; Friedrich-Schiller-Universität Jena, Institut für Mathematik
For time series data observed at non-random and possibly non-equidistant time points, we estimate the trend function nonparametrically. Under the assumption of bounded total variation of the trend function, we propose a nonlinear wavelet estimator which uses a Haar-type basis adapted to a possibly non-dyadic sample size. An appropriate thresholding scheme for sparse signals with additive polynomial-tailed noise is first derived in an abstract framework and then applied to the problem of trend estimation.
4:40 pm - 5:05 pm
Asymptotics of peaks-over-threshold estimators in long memory linear time series University of Stuttgart, Germany; University of Angers, France
In this talk, we consider peaks-over-threshold (POT) estimators for extremes of long memory linear time series. As these time series are not beta-mixing, classical asymptotic results on POT estimators are not applicable. We adapt a reduction principle for subordinated long memory linear time series to our setting. Thus we prove a central limit theorem for POT estimators, including the Hill estimator. We obtain convergence to stable limit distributions with different rates for light and heavy tails.
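The Hill estimator mentioned above has a one-line definition in terms of the upper order statistics; a minimal sketch on an exact Pareto sample (our own i.i.d. illustration, which does not exhibit the long memory structure studied in the talk):

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator of the extreme value index gamma = 1/alpha, based
    on the k largest order statistics: the mean log-excess over the
    (k+1)-st largest observation."""
    xs = np.sort(np.asarray(x, dtype=float))
    return np.mean(np.log(xs[-k:] / xs[-k - 1]))

rng = np.random.default_rng(42)
u = rng.uniform(size=100_000)
x = u ** (-0.5)            # Pareto tail with alpha = 2, i.e. gamma = 0.5
gamma_hat = hill_estimator(x, k=2000)
```

For dependent long memory data, as the abstract notes, the classical i.i.d. (or beta-mixing) limit theory for this estimator no longer applies, which is precisely what the reduction principle addresses.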
5:05 pm - 5:30 pm
Time-varying Lévy-driven state space models, locally stationary approximations and asymptotic normality Ulm University, Germany
We first introduce time-varying Lévy-driven state space models, as a class of time series models in continuous time encompassing continuous-time autoregressive moving average processes with parameters changing over time.
In order to allow for their statistical analysis we define a notion of locally stationary approximations for sequences of continuous time processes and establish laws of large numbers and central limit type results under θ-weak dependence assumptions. Finally, we consider the asymptotic behaviour of the empirical mean and autocovariance function of time-varying Lévy-driven state space models under appropriate conditions.
|
3:50 pm - 5:30 pm | S12 (3): Computational, functional and high-dimensional statistics Location: ZEU 260 Floor plan Session Chair: Martin Wahl |
|
3:50 pm - 4:15 pm
Tracy-Widom, Gaussian, and Bootstrap: Approximations for Leading Eigenvalues in High-Dimensional PCA Aarhus University; University of California, Davis
Under certain conditions, the largest eigenvalue of a sample covariance matrix undergoes a well-known phase transition when the sample size $n$ and data dimension $p$ diverge proportionally.
In the subcritical regime, this eigenvalue has fluctuations of order $n^{-2/3}$ that can be approximated by a Tracy-Widom distribution, while in the supercritical regime, it has fluctuations of order $n^{-1/2}$ that can be approximated with a Gaussian distribution. However, the statistical problem of determining which regime underlies a given dataset has remained largely unresolved. We develop a new testing framework and procedure to address this problem. In particular, we demonstrate that the procedure has an asymptotically controlled level and that it is power consistent for certain alternatives. This testing procedure also enables the design of a new bootstrap method for approximating the distributions of functionals of the leading sample eigenvalues within the subcritical regime, which is the first such method supported by theoretical guarantees.
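The phase transition itself is easy to observe numerically; a short simulation sketch (our own illustration of the spiked covariance model, with parameter values chosen for the demo, not the authors' testing procedure):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 2000, 400                       # aspect ratio gamma = p/n = 0.2
gamma = p / n
edge = (1 + np.sqrt(gamma)) ** 2       # Marchenko-Pastur bulk edge

def top_eigenvalue(spike):
    """Largest eigenvalue of a sample covariance matrix whose population
    covariance is the identity with a single spiked variance."""
    var = np.ones(p)
    var[0] = spike
    X = rng.standard_normal((n, p)) * np.sqrt(var)
    return np.linalg.eigvalsh(X.T @ X / n)[-1]

lam_sub = top_eigenvalue(1.0)   # subcritical: sticks near the bulk edge
lam_sup = top_eigenvalue(4.0)   # supercritical: detaches from the bulk
```

In the subcritical run the largest eigenvalue sits at the Marchenko-Pastur edge $(1+\sqrt{\gamma})^2$ with Tracy-Widom fluctuations, while a sufficiently strong spike pushes it well outside the bulk, where Gaussian fluctuations take over.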
4:15 pm - 4:40 pm
AIC for many-regressor heteroskedastic regressions CERGE-EI, Czech Republic
The original Akaike information criterion (AIC) and its corrected version (AICc) have been routinely used for model selection for decades. The penalty terms in these criteria are tied to the classical normal linear regression, characterized by conditional homoskedasticity and a small number of regressors relative to the sample size, which leads to very simple and computationally attractive penalty forms.
We derive, from the same principles, a general version that takes account of conditional heteroskedasticity and regressor numerosity. The new AICm penalty takes the form of a ratio of certain weighted average error variances, and encompasses the classical ones: it is approximately equal to the AIC penalty when the regression is conditionally homoskedastic and regressors are few, and to the AICc penalty when the regression is conditionally homoskedastic but the number of regressors is not negligible. In contrast to those of AIC and AICc, the AICm penalty is stochastic and thus not immediately implementable, as it additionally depends on the pattern of conditional heteroskedasticity in the sample.
The infeasible AICm criterion, however, can be operationalized via unbiased estimation of individual variances. The feasible AICm criterion still minimizes the expected Kullback-Leibler divergence up to an asymptotically negligible term that does not relate to regressor numerosity. In simulations, the feasible AICm does select models that deliver systematically better out-of-sample predictions than the classical criteria.
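For reference, the classical criteria that AICm generalizes take a simple closed form in a Gaussian linear regression; a sketch (one common parameterization with k counting the regression coefficients; the AICm criterion of the talk is not implemented here):

```python
import numpy as np

def aic_aicc(y, X):
    """Classical AIC and corrected AICc for a Gaussian linear regression.

    Up to additive constants: AIC  = n*log(RSS/n) + 2k and
    AICc = n*log(RSS/n) + 2k*n/(n - k - 1), with n observations and
    k regressors (including the intercept)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    aic = n * np.log(rss / n) + 2 * k
    aicc = n * np.log(rss / n) + 2 * k * n / (n - k - 1)
    return aic, aicc

rng = np.random.default_rng(3)
n = 50
X = np.column_stack([np.ones(n), rng.standard_normal((n, 4))])
y = X @ np.array([1.0, 2.0, 0.0, 0.0, 0.0]) + rng.standard_normal(n)
aic, aicc = aic_aicc(y, X)
```

The AICc penalty always exceeds the AIC penalty, with the gap $2k(k+1)/(n-k-1)$ growing as the regressor count approaches the sample size, which is the "regressor numerosity" effect the abstract refers to.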
4:40 pm - 5:05 pm
Identification in ill-posed linear regression: estimation rates, prediction risk, asymptotic distributions University of Vienna, Austria
In applications from biology, chemistry, genomics and finance practitioners face a plethora of high-dimensional data sets affected by highly-correlated features. They often exploit regression models to predict unobserved responses or to identify combinations of features driving the underlying generating process.
The prediction problem can be tackled by nonlinear regression algorithms. Model complexity is not necessarily a burden, since overparametrised models in the regime of benign overfitting achieve small prediction error despite interpolating the observed response. Deep learning, random forests, kernel estimators, L2-penalised linear regression, latent factor regression, all can exhibit the double-descent pattern typical of benign overfitting: the prediction error as a function of the total number of parameters has a local minimum in the underparametrised regime and a global minimum in the overparametrised regime.
Most overparametrised methods are hard to interpret and thus unsuitable for identifying any combination of features that might be relevant for the response. Linear regression is the simplest alternative since it provides a vector of coefficients highlighting the contribution of the features and, despite its simplicity, the regularisation of ill-posed linear models is still relevant in modern data science.
An established strategy for identification relies on the sparsity principle. One assumes that only a few features actually carry any information on the response. Despite the recent development of diagnostic measures of influence, it has become apparent that the necessary regularity conditions fail dramatically when dealing with ill-posed data sets from genomics where model selection becomes essentially random. Furthermore, one of the major drawbacks of the sparsity principle in general is its lack of invariance under orthogonal transformations. This means that any sparse method will overestimate the degrees of freedom of the problem when only a few linear combinations of the features are important, rather than the features themselves.
Another strategy for identification relies on the principal components principle, which assumes that the response only depends on the main directions of variations of the features. The classical theory of latent factor models hinges on regularity conditions allowing consistent estimation of the true number of latent factors via sample eigenvalue ratio. This makes principal components regression (PCR), or unsupervised dimensionality reduction in general, the most natural approach to such problems. However, the sentiment that these assumptions are too restrictive is quite old and many authors have suggested that there is no logical reason for the principal components to contain any information at all on the response.
Extensive reviews are available for genome-wide association studies (GWAS) aiming at identifying the association of genotypes with phenotypes of many diseases such as coronary artery disease, atrial fibrillation, type 2 diabetes, inflammatory bowel disease and breast cancer. The association is estimated by fitting linear models with the addition of possibly random effects. The problem is ill-posed because genotypes of genetic variants that are physically close together are not independent and, more importantly, complex traits may be highly polygenic in the sense that many genetic variants with small effects contribute to the phenotype. The interpretation of such complex models is a big open challenge that is beyond the capabilities of sparse regression and principal components regression.
Motivated by the above, we revisit the theory for identification in ill-posed linear models and propose a novel framework. The classical latent factor model for linear regression is extended by assuming that, up to an unknown orthogonal transformation, the features consist of subsets that are relevant and irrelevant for the response. Furthermore, a joint low-dimensionality is imposed only on the relevant features vector and the response variable. The proposed framework allows us to: i) characterise the identifiable parameters of interest that are crucial for interpretation; ii) characterise the intrinsic geometrical properties of any regularisation algorithm; iii) comprehensively study the partial least squares (PLS) algorithm under random design with heavy tails. In particular, a novel perturbation bound for PLS solutions is proven and high-probability L2-rates for estimation and prediction with the PLS estimator are obtained. As a corollary, necessary and sufficient conditions for the asymptotic normality of PLS estimators are derived. This framework sheds light on the identification performance of regularisation methods for ill-posed linear regression that exploit sparsity or unsupervised projection. The theoretical findings are confirmed by numerical studies on both real and simulated data.
|
3:50 pm - 5:30 pm | S13 (6): Nonparametric and asymptotic statistics Location: ZEU 250 Floor plan Session Chair: Alexander Kreiss Session Chair: Leonie Selk |
|
3:50 pm - 4:15 pm
Statistical Inference for Rank Correlations Heidelberg Institute for Theoretical Studies, Germany; Helmut Schmidt University Hamburg, Germany; Goethe University Frankfurt, Germany
Kendall's Tau and Spearman's Rho are key tools for measuring dependence between two variables. Surprisingly, when it comes to statistical inference for these rank correlations, some fundamental results and methods have not been developed, in particular regarding asymptotic variances for discrete random variables or in the time series case, and variance estimation in general, so that, for example, asymptotic confidence intervals are not available. We provide a comprehensive treatment of asymptotic inference for classical rank correlations, including Kendall's Tau, Spearman's Rho, Goodman and Kruskal's Gamma, Kendall's Tau-b and the Grade Correlation. We derive asymptotic distributions and variances for both independent and time-series data, resorting to asymptotic results for U-statistics, and introduce consistent variance estimators. This enables the construction of confidence intervals and tests. We also obtain limiting variances under independence between the two variables or processes of interest, which generalize classical results limited to continuous random variables and lead to corrected versions of widely-used tests of independence based on rank correlations. We analyze the finite-sample performance of our variance estimators, confidence intervals and tests in simulations and illustrate their use in applications.
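For reference, the two point estimators at the center of the talk have elementary definitions; a sketch (our own tie-free illustration using the tau-a form, not the tie-corrected tau-b or the variance estimators developed in the talk):

```python
import numpy as np

def kendall_tau(x, y):
    """Kendall's tau-a (no tie correction): average sign-concordance
    over all ordered pairs; O(n^2), fine for illustration."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    sx = np.sign(x[:, None] - x[None, :])
    sy = np.sign(y[:, None] - y[None, :])
    return np.sum(sx * sy) / (n * (n - 1))

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the ranks."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]
```

Both statistics are U-statistics (tau exactly, rho approximately), which is why the asymptotic theory in the talk can lean on U-statistic limit theorems.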
4:15 pm - 4:40 pm
Quantifying and estimating dependence via sensitivity of conditional distributions University of Salzburg, Austria; Paracelsus Medical University, Salzburg, Austria
Recently established, directed dependence measures for pairs $(X,Y)$ of random variables build upon the natural idea of comparing the conditional distributions of $Y$ given $X=x$ with the unconditional distribution of $Y$. They assign pairs $(X,Y)$ values in $[0,1]$, where the value is $0$ if and only if $X,Y$ are independent, and it is $1$ exclusively for $Y$ being a measurable function of $X$. Here we show that comparing randomly drawn conditional distributions with each other instead or, equivalently, analyzing how sensitive the conditional distribution of $Y$ given $X=x$ is on $x$, opens the door to constructing novel families of dependence measures $\Lambda_\varphi$ induced by general convex functions $\varphi: \mathbb{R} \rightarrow \mathbb{R}$, containing, e.g., Chatterjee's coefficient of correlation as special case. After establishing additional useful properties of $\Lambda_\varphi$ we focus on continuous $(X,Y)$, translate $\Lambda_\varphi$ to the copula setting, consider the $L^p$-version and establish an estimator which is strongly consistent in full generality. A real data example and a simulation study illustrate the chosen approach and the performance of the estimator. Complementing the afore-mentioned results, we show how a slight modification of the sensitivity idea underlying $\Lambda_\varphi$ can be used to define new measures of explainability generalizing the fraction of explained variance.
Ansari, J., P. B. Langthaler, S. Fuchs, and W. Trutschnig. Quantifying and estimating dependence via sensitivity of conditional distributions. Available at https://arxiv.org/abs/2308.06168.
4:40 pm - 5:05 pm
Bootstrap Consistency and Normality of Chatterjee's Rank Correlation Ruhr-Universität Bochum, Germany
It is known that the usual n out of n bootstrap fails for Chatterjee’s rank correlation. To remedy this, we present an m out of n bootstrap which is consistent for Chatterjee’s rank correlation whenever asymptotic normality of Chatterjee’s rank correlation can be established. Our bootstrap outperforms alternative estimation methods and approximates the limiting distribution in the Kolmogorov distance as well as the Wasserstein distance. We also present some results on the asymptotic normality of Chatterjee’s rank correlation, which is non-trivial to establish.
H. Dette and M. Kroll (2024): A Simple Bootstrap for Chatterjee’s Rank Correlation. Biometrika. DOI: 10.1093/biomet/asae045
M. Kroll (2024+): Asymptotic Normality of Chatterjee’s Rank Correlation. Preprint. arXiv: 2408.11547
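For orientation, a hedged sketch of Chatterjee's rank correlation (no-ties formula) together with a subsampling variant of the m out of n idea; the function names are ours, and the exact resampling scheme of Dette and Kroll may differ (here we subsample without replacement to avoid ties):

```python
import numpy as np

def chatterjee_xi(x, y):
    """Chatterjee's rank correlation (no-ties formula)."""
    n = len(x)
    order = np.argsort(x)
    # ranks of y, taken in the order of increasing x
    r = np.argsort(np.argsort(y[order])) + 1
    return 1.0 - 3.0 * np.abs(np.diff(r)).sum() / (n ** 2 - 1)

def m_out_of_n_bootstrap(x, y, m, B=500, rng=None):
    """B resampled values of xi on subsamples of size m << n.

    Sketch of the m out of n idea: the naive n out of n bootstrap
    fails for xi; resampling fewer than n points restores consistency."""
    rng = np.random.default_rng(rng)
    n = len(x)
    stats = np.empty(B)
    for b in range(B):
        idx = rng.choice(n, size=m, replace=False)
        stats[b] = chatterjee_xi(x[idx], y[idx])
    return stats

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.sin(x) + 0.1 * rng.normal(size=500)
print(chatterjee_xi(x, y))                        # strong dependence
print(np.std(m_out_of_n_bootstrap(x, y, m=100)))  # bootstrap spread
```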
5:05 pm - 5:30 pm
A new dependence order for Chatterjee's rank correlation and related dependence measures University of Salzburg, Austria
Motivated by Chatterjee's rank correlation, we propose a novel rearrangement-invariant dependence order for conditional distributions that reflects many desirable properties of dependence measures.
Our dependence order is based on the Schur order for functions and is capable of characterizing independence of a random variable \(Y\) and a random vector \(\mathbf{X}\) as well as perfect dependence of \(Y\) on \(\mathbf{X}\). Further, it satisfies an information-gain inequality and also characterizes conditional independence. As we show, our dependence order transfers all its fundamental properties to a large class of dependence measures. Moreover, it yields new supermodular comparison results for multi-factor models and thus various applications to robust risk models.
|
4:40 pm - 5:30 pm | S 1 Keynote: Machine Learning Location: POT 81 Floor plan Session Chair: Merle Behr |
|
4:40 pm - 5:30 pm
A primer on physics-informed machine learning Physics-informed machine learning typically integrates physical priors into the learning process by minimizing a loss function that combines a data-driven term with partial differential equation regularization. Practitioners often rely on physics-informed neural networks (PINNs) to address this type of problem. After discussing the strengths and limitations of PINNs, I will show that for linear differential priors, the problem can be directly formulated as a kernel regression task, providing a rigorous framework for analyzing physics-informed machine learning. In particular, incorporating the physical prior can significantly enhance the estimator's convergence. Building on this kernel regression formulation, I will explain how Fourier methods can be used to approximate the associated kernel and propose a tractable estimator that minimizes the physics-informed risk function. This approach, which we refer to as physics-informed kernel learning (PIKL), provides theoretical guarantees on performance. We will demonstrate the numerical performance of the PIKL estimator through simulations in both hybrid modeling and PDE-solving contexts. Joint work with Francis Bach (Inria), Claire Boyer (University Paris-Saclay), and Nathan Doumèche (Sorbonne University). |
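A toy finite-difference analogue of the combined physics-informed risk (data misfit plus PDE residual penalty), not the PIKL estimator itself; the grid size, penalty weight and PDE $u''=f$ with $f=-\sin$ are all illustrative choices of ours:

```python
import numpy as np

# Fit u on a grid from noisy point observations while penalizing the
# residual of the prior u'' = f (here f = -sin, so the truth u = sin
# satisfies the PDE).  The combined objective is solved as one linear
# least-squares problem, mirroring the kernel-regression viewpoint.
grid = np.linspace(0, np.pi, 101)
h = grid[1] - grid[0]
truth = np.sin(grid)

rng = np.random.default_rng(0)
obs_idx = rng.choice(len(grid), size=15, replace=False)
y_obs = truth[obs_idx] + 0.1 * rng.normal(size=15)

# Data term: select observed grid nodes.
A_data = np.zeros((15, len(grid)))
A_data[np.arange(15), obs_idx] = 1.0

# PDE term: second-difference operator approximating u''.
m = len(grid) - 2
A_pde = np.zeros((m, len(grid)))
for i in range(m):
    A_pde[i, i:i + 3] = np.array([1.0, -2.0, 1.0]) / h ** 2
b_pde = -np.sin(grid[1:-1])      # right-hand side f

lam = 10.0                        # weight of the physics penalty
A = np.vstack([A_data, np.sqrt(lam) * A_pde])
b = np.concatenate([y_obs, np.sqrt(lam) * b_pde])
u_hat = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.max(np.abs(u_hat - truth)))
```

The PDE penalty pins the estimate to the two-dimensional solution family of $u''=-\sin$, so 15 noisy observations suffice to recover $u$ accurately, illustrating how the physical prior enhances convergence.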
5:35 pm - 6:15 pm | Förderpreise der FG Stochastik: Award Ceremony and Talks by the Prize Winners Location: POT 81 Floor plan |
6:15 pm - 7:45 pm | General Assembly: Members' Meeting of the FG Stochastik Location: POT 81 Floor plan |
Date: Thursday, 13/Mar/2025 | |
9:00 am - 10:00 am | Plenary III Location: POT 81 Floor plan Session Chair: René Schilling |
|
9:00 am - 10:00 am
Homogenization of jump processes in random media Waseda University, Japan
Homogenization theory aims to derive equations (or processes) for averages of solutions of equations (or scaling limits of processes) with inhomogeneous coefficients. It has been significantly developed in both the fields of PDE and probability theory. In this talk, we will discuss recent progress in homogenization theory for jump processes (non-local operators) in periodic and random media.
|
10:00 am - 10:30 am | Coffee Break Location: Foyer Potthoff Bau Floor plan |
10:00 am - 10:30 am | Coffee Break Location: POT 168 Floor plan |
10:30 am - 11:20 am | S12 Keynote: Computational, functional and high-dimensional statistics Location: POT 81 Floor plan Session Chair: Jan Gertheiss |
|
10:30 am - 11:20 am
Physics-Informed Statistical Learning Department of Mathematics, Politecnico di Milano, Italy Recent years have seen an explosive growth in the recording of increasingly complex and high-dimensional data. Classical statistical methods are often unfit to handle such data, whose analysis calls for the definition of new methods, merging ideas and approaches from statistics, applied mathematics and engineering. My talk will in particular focus on Physics-Informed statistical learning methods, which feature regularizing terms involving Partial Differential Equations (PDEs). Such PDEs encode the available problem-specific information about the phenomenon under study. The methods can handle spatial and functional data observed over non-Euclidean domains, such as linear networks, two-dimensional manifolds and non-convex volumes. The methods will be illustrated through applications from life and environmental sciences. |
10:30 am - 12:10 pm | S 2 (5): Spatial stochastics, disordered media, and complex networks Location: POT 251 Floor plan Session Chair: Chinmoy Bhattacharjee Session Chair: Benedikt Jahnel |
|
10:30 am - 10:55 am
Strong limit theorems for empirical halfspace depth trimmed regions 1University of Bern; 2Swiss Re Management
We study empirical variants of the halfspace (Tukey) depth of a probability measure $\mu$, which are obtained by replacing $\mu$ with the corresponding weighted empirical measure. We prove analogues of the Marcinkiewicz--Zygmund strong law of large numbers and of the law of the iterated logarithm in terms of set inclusions and for the Hausdorff distance between the theoretical and empirical variants of depth trimmed regions, which are random convex bodies. In the special case of $\mu$ being the uniform distribution on a convex body $K$, the depth trimmed regions are convex floating bodies of $K$, and we obtain strong limit theorems for their empirical estimators.
10:55 am - 11:20 am
Random Laguerre tessellations: Convergence of the $\beta$-Voronoi to the Poisson-Voronoi tessellation University Muenster, Germany
In this talk we present a family of random Laguerre tessellations $\mathcal{L}_d(f)$ in $\mathbb{R}^d$, as well as their duals, generated by inhomogeneous Poisson point processes in $\mathbb{R}^d\times\mathbb{R}$ whose intensity measures have densities of the form $(v,h)\mapsto \gamma f(h)$ under some natural restrictions on the function $f$. We show that the construction provides a random tessellation and establish a connection to fractional calculus. This family includes the models introduced in [1] and [2], namely the $\beta$-Voronoi, $\beta'$-Voronoi and the Gaussian-Voronoi tessellations. We give the distribution of the typical cell of the corresponding dual tessellation explicitly in terms of $f$. Further, we include the classical Poisson-Voronoi tessellation in this family as a limit case of the $\beta$-Voronoi tessellation.
[1] Gusakova, A., Kabluchko, Z., and Thäle, C. The $\beta$-Delaunay tessellation: Description of the model and geometry of typical cells. Adv. in Appl. Probab. 54, 4 (2022), 1252-1290.
[2] Gusakova, A., Kabluchko, Z., and Thäle, C. The $\beta$-Delaunay tessellation II: The Gaussian limit tessellation. Electron. J. Probab., 27 (2022), 1-33.
11:20 am - 11:45 am
Stein's method for spatial random graphs University of Goettingen, Germany
Spatial random graphs provide an important framework for the analysis of relations and interactions in networks. In particular, the random geometric graph has been intensively studied and applied in various frameworks like network modeling or percolation theory.
In this talk we focus on approximation results for a generalization of the random geometric graph that consists of vertices given by a Gibbs process and (conditionally) independent edges generated from a connection probability function. We introduce a new graph metric between finite spatial graphs of possibly different sizes that is built on the OSPA metric for point patterns, but penalizes both vertex and edge structures. We develop Stein's method for general integral probability metrics that compare the distributions of spatial random graphs. We then focus on the Wasserstein distance with respect to the new graph metric to obtain improved rates of convergence for a suitable type of convergence in distribution of spatial random graphs.
Finally, we present an application of our approximation results to the percolation graph of large balls in a Boolean model.
11:45 am - 12:10 pm
Functional central limit theorems for stabilising functionals 1Hamburg University of Technology, Germany; 2Lehigh University, United States of America
Many functionals of point processes arising in stochastic geometry can be written as sums of scores, where each score represents the contribution of a point. If the score of a point depends only on the local point configuration in a random neighbourhood, the functional is called stabilising. One is often interested in the asymptotic behaviour of stabilising functionals as an underlying observation window is increased. For this situation several central limit theorems were shown in recent years. In this talk these results are complemented by functional central limit theorems.
|
10:30 am - 12:10 pm | S 3 (5): Stochastic Analysis and S(P)DEs Location: POT 151 Floor plan Session Chair: Vitalii Konarovskyi Session Chair: Aleksandra Zimmermann |
|
10:30 am - 10:55 am
Quantitative relative entropy estimates for interacting particle systems with common noise University of Mannheim, Germany
We derive quantitative estimates proving the conditional propagation of chaos for large stochastic systems of interacting particles subject to both idiosyncratic and common noise. We obtain explicit bounds on the relative entropy between the conditional Liouville equation and the stochastic Fokker--Planck equation for a bounded and square-integrable interaction kernel, extending far beyond the Lipschitz case. Our method relies on reducing the problem to the idiosyncratic setting, which allows us to utilize the exponential law of large numbers.
10:55 am - 11:20 am
Weak well-posedness of energy solutions to singular SDEs with supercritical distributional drift 1University of Warwick; 2Freie Universität Berlin
We study stochastic differential equations with additive noise and distributional drift on $\mathbb{T}^d$ or $\mathbb{R}^d$ with $d \geqslant 2$. We work in a scaling-supercritical regime using energy solutions and recent ideas for generators of singular stochastic partial differential equations. We mainly focus on divergence-free drift, but allow for scaling-critical non-divergence-free perturbations. Roughly speaking, we prove weak well-posedness of energy solutions $X$ with initial law $\mu \ll \text{Leb}$ for drift $b \in L^p_T B^{- \gamma}_{p, 1}$ with $p \in (2, \infty]$ and $p \geqslant \frac{2}{1 - \gamma}$. We can extend this by allowing a blow-up of $\| b \|_{L^p_T B^{- \gamma}_{p, 1}}$ around some space-time singularity set, but have to assume $X$ to be of a certain Hölder regularity to make the equation well-posed. In this way we can find, for any $p > 2$, a dimension $d$ and $b \notin B^{- 1}_{p, 2}$ such that weak well-posedness holds for (Hölder-regular) energy solutions with drift $b$.
This talk is based on joint work with Nicolas Perkowski.
11:20 am - 11:45 am
Reduced Inertial PDE models for Cucker-Smale flocking dynamics 1Zuse Institute Berlin, Germany; 2Freie Universität Berlin, Germany; 3University of Bath, UK
In particle systems, flocking refers to the phenomenon where particles' individual velocities eventually align. The Cucker-Smale model is a well-known mathematical framework that describes this behaviour. Many continuous descriptions of the Cucker-Smale model use PDEs with both particle position and velocity as independent variables, thus providing a full description of the particles' mean-field limit (MFL) dynamics. The simulation of the MFL equation requires solving a high-dimensional PDE, motivating the derivation of reduced PDEs. In this talk, we present a novel reduced inertial PDE model consisting of two equations that depend solely on particle position. In contrast to other reduced models, ours is not derived from the MFL, but directly includes the model reduction at the level of the empirical densities, thus allowing for a straightforward connection to the underlying particle dynamics. We present a thorough analytical investigation of our reduced model, showing that: firstly, our reduced PDE satisfies a natural and interpretable continuous definition of flocking; secondly, in specific cases, we can fully quantify the discrepancy between the PDE solution and the particle system. Our theoretical results are supported by numerical simulations. This is joint work with Federico Cornalba, Natasa Djurdjevac Conrad and Ana Djurdjevac.
11:45 am - 12:10 pm
Time-Changed White Noise Calculus and its Malliavin-Watanabe Regularity Theory 1Linnaeus University, Sweden; 2Tunis El Manar, Tunisia
The absence of a Wiener-Ito chaos decomposition in general non-Gaussian analysis results in the absence of a Hida-Malliavin derivative and hence of a general Malliavin-Watanabe regularity theory.
In this project we will develop a white noise calculus for processes with conditionally independent increments and apply it to the class of non-Gaussian processes represented by randomly time-changed Brownian motion. In this setting, we obtain a conditioned Wiener-Ito chaos decomposition, which allows us to introduce spaces of regular test and generalized functions w.r.t. time-changed white noise. On these spaces we define a non-Gaussian Malliavin-Watanabe regularity theory, using a characterisation result similar to the Gaussian case from Grothaus et al.
We then apply the results to a stochastic transport equation with Skorokhod type noise.
|
10:30 am - 12:10 pm | S 4 (6): Limit theorems, large deviations and extremes Location: ZEU 160 Floor plan Session Chair: Jan Nagel Session Chair: Marco Oesting |
|
10:30 am - 10:55 am
Extremal Process of Last Progeny Modified Branching Random Walks 1Technische Universität Braunschweig; 2Université Toulouse III Paul Sabatier
We consider a last progeny modified branching random walk, in which the position of each particle at the last generation $n$ is modified by an i.i.d. copy of a random variable $Y$. Depending on the asymptotic properties of the tail of $Y$, we describe the asymptotic behaviour of the extremal process of this model as $n\to\infty$.
10:55 am - 11:20 am
Central limit theorem for a random walk on Galton-Watson trees with random conductances TU Dortmund, Germany
We consider random walks on supercritical Galton-Watson trees with random conductances. That is, given a Galton-Watson tree, we assign to each edge a positive random weight (conductance) and the random walk traverses an edge with a probability proportional to its conductance. On these trees, the random walk is transient and the distance of the walker to the root satisfies a law of large numbers with limit the speed of the walk. We show that the distance of the walker to the root satisfies a functional central limit theorem under the annealed law. When a positive fraction of edges is assigned a small conductance $\varepsilon$, we study the behavior of the limiting variance as $\varepsilon\to 0$. Provided that the tree formed by larger conductances is supercritical, the variance is nonvanishing as $\varepsilon\to 0$, which implies that the slowdown induced by the $\varepsilon$-edges is not too strong. The proof utilizes a specific regeneration structure, which leads to escape estimates uniform in $\varepsilon$.
11:20 am - 11:45 am
Sums of i.i.d. random variables with exponential weights Ulm University, Germany
It is well known that a random walk $S_n = \sum_{k=1}^n X_k$, with $(X_k)$ i.i.d. and of finite expectation, diverges almost surely to $\infty$ if and only if $E (X_1) > 0$, while for $E (X_1) = 0$ it oscillates. Less well known is the study of random walks when $E |X_1| = \infty$. In 1973, Erickson [1] obtained an integral criterion characterising when the corresponding random walk diverges to $\infty$, diverges to $-\infty$, or oscillates. In this talk we are interested in the divergence behaviour of $W_n=\sum_{k=1}^n c^k X_k$, where $(X_k)$ is i.i.d. and $0<c<1$. It is well known that this sum converges almost surely if and only if $E \log^+ |X_1| < \infty$; we are interested in the divergence behaviour when $E \log^+ |X_1| = \infty$. We give sufficient analytic conditions for $W_n$ to exhibit an almost sure oscillating behaviour (i.e. $-\infty = \liminf\limits_{n\to\infty}W_n<\limsup\limits_{n\to\infty}W_n=\infty$), as well as a sufficient criterion for the almost sure limit to exist in the sense that $\lim\limits_{n\to\infty}W_n=\infty$. The talk is based on joint work in progress with A. Lindner and R. Maller.
[1] K. Bruce Erickson. “The strong law of large numbers when the mean is undefined”. In: Transactions of the American Mathematical Society 185 (1973), pp. 371–381.
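A quick numerical look at the dichotomy $E \log^+ |X_1| < \infty$ versus $E \log^+ |X_1| = \infty$ (a diagnostic of our own, not taken from the talk). The $k$-th term of $W_n$ has log-magnitude $k\log c + \log|X_k|$, so convergence hinges on whether terms $c^k|X_k| > 1$ occur for arbitrarily large $k$. Taking $X_k = e^{C_k}$ with $C_k$ standard Cauchy gives $E\log^+ X_1 = E\max(C_1,0) = \infty$:

```python
import numpy as np

# Compare log-term sizes k*log(c) + log|X_k| for light-tailed X
# (standard normal, E log^+|X| finite) and X = exp(C) with C standard
# Cauchy (E log^+ X infinite).  In the first case large terms stop
# occurring quickly; in the second they recur at arbitrarily late k.
rng = np.random.default_rng(42)
c, n = 0.5, 100_000
k = np.arange(1, n + 1)
drift = k * np.log(c)                              # log of weight c^k

log_light = drift + np.log(np.abs(rng.normal(size=n)))
log_heavy = drift + rng.standard_cauchy(size=n)    # log X_k = C_k

# Index of the last term exceeding 1 in absolute value.
last_big_light = k[log_light > 0].max() if np.any(log_light > 0) else 0
last_big_heavy = k[log_heavy > 0].max() if np.any(log_heavy > 0) else 0
print(last_big_light, last_big_heavy)
```

In the light-tailed case a term of size $>1$ at index $k$ would require $|X_k| > 2^k$, which becomes impossible almost immediately; in the Cauchy case $P(C_k > k\log 2) \sim \text{const}/k$ is not summable, so such terms recur indefinitely by Borel-Cantelli.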
11:45 am - 12:10 pm
Law of Large Numbers and Central Limit Theorem for Ewens-Pitman Model 1University of Pavia, Italy; 2University of Torino and Collegio Carlo Alberto, Italy
The Ewens-Pitman model is a distribution for random partitions of $\{1,\ldots,n\}$, with $n\in\mathbb{N}$, indexed by a pair of parameters $\alpha \in [0,1)$ and $\theta>-\alpha$, such that $\alpha=0$ corresponds to the Ewens model in population genetics. The large $n$ asymptotic behaviour of the number $K_{n}$ of blocks in the Ewens-Pitman random partition has been extensively investigated in terms of almost-sure and Gaussian fluctuations, which show that $K_{n}$ scales as $\log n$ or $n^{\alpha}$ depending on whether $\alpha=0$ or $\alpha\in(0,1)$, providing non-random and random almost-sure limits, respectively.
We study the large $n$ asymptotic behaviour of $K_{n}$ when the parameter $\theta$ is allowed to depend linearly on $n\in\mathbb{N}$. Precisely, for $\alpha\in[0,1)$ and $\theta=\lambda n$, with $\lambda>0$, we establish a law of large numbers (LLN) and a central limit theorem (CLT) for $K_{n}$, which show that $K_{n}$ scales as $n$ for $\alpha\in[0,1)$, providing a non-random almost sure (a.s.) limit. In particular, for $\alpha\in(0,1)$, the CLT relies on the compound Poisson construction of $K_{n}$, which leads to introduce novel LLNs, CLTs and corresponding Berry-Esseen theorems for the negative-Binomial compound Poisson random partition and for the Mittag-Leffler distribution function, which are of independent interest.
In conclusion, we show an application of our results to the problem of uncertainty quantification in the Bayesian nonparametric (BNP) approach to the estimation of the number of unseen species. Given $n\geq1$ observed individuals, modeled as a random sample $(X_{1},\ldots,X_{n})$ from the two-parameter Poisson-Dirichlet distribution, the unseen-species problem calls for estimating the number $K_{m}^{(n)}$ of hitherto unseen distinct species that would be observed if $m\geq1$ additional samples were collected from the same distribution. The posterior expectation of $K_{m}^{(n)}$ (i.e. its expectation conditional on the Ewens-Pitman random partition induced by $(X_{1},\ldots,X_{n})$) is the natural BNP estimator of $K_{m}^{(n)}$.
For $\alpha\in[0,1)$ and in the regime $m = \lambda \theta$, we develop for $K_{m}^{(n)}$ posterior counterparts of the LLN and CLT for $K_n$. This allows us to introduce large-$m$ Gaussian credible intervals for the Bayesian estimator of $K_{m}^{(n)}$, whose construction is purely analytical and which outperform existing methods.
|
10:30 am - 12:10 pm | S 7 (9): Stochastic processes: theory, statistics and numerics Location: POT 51 Floor plan Session Chair: Andreas Neuenkirch Session Chair: Jakob Söhl |
|
10:30 am - 10:55 am
Collisions in one-dimensional particle systems 1Japan International University, Tsukuba, Japan; 2Heinrich Heine University Düsseldorf, Germany; 3Wrocław University of Science and Technology, Poland
In this joint work we consider a general one-dimensional particle system. These processes are mainly characterized by multiplicity parameters controlling the strength of the particles' interaction. It is well-known that collisions between particles never take place when all of these multiplicities are large, but occur almost surely otherwise.
It was recently shown by the authors that, in the special case of multivariate Bessel processes of rational type, the Hausdorff dimension of the set of collision times is a piecewise-linear function of the minimum of the multiplicities, but is independent of the dimension of the process's domain. This implies that the Hausdorff dimension does not depend on the particle number.
In this talk, we present an approach to extend this result to a general particle system.
10:55 am - 11:20 am
The level of self-organized criticality in oscillating Brownian motion: stable limiting distribution theory for the MLE Albert-Ludwigs-Universität Freiburg, Germany
For some discretely observed path of oscillating Brownian motion with level of self-organized criticality $\rho_0$, we prove in the infill asymptotics that the MLE is $n$-consistent, where $n$ denotes the sample size, and derive its limit distribution with respect to stable convergence. As the transition density of this homogeneous Markov process is not even continuous in $\rho_0$, interesting and somewhat unexpected phenomena occur: The likelihood function splits into several components, each of them contributing very differently depending on how close the argument $\rho$ is to $\rho_0$. Correspondingly, the MLE is successively excluded from lying outside a compact set, a $1/\sqrt{n}$-neighborhood and finally a $1/n$-neighborhood of $\rho_0$ asymptotically. Sequentially and as a process in $\rho$, the martingale part of the suitably rescaled local log-likelihood function exhibits a bivariate Poissonian behavior in the stable limit, with its intensity being a function of the local time at $\rho_0$.
11:20 am - 11:45 am
Brownian motion conditioned to have restricted $L_2$-norm 1Technical University of Darmstadt; 2Paderborn University; 3St. Petersburg State University
We condition a Brownian motion on having an atypically small $L_2$-norm on a long time interval. The obtained limiting process is an Ornstein-Uhlenbeck process. As a main ingredient, we prove a result on the small ball probabilities of non-centered Brownian motion.
11:45 am - 12:10 pm
Non-parametric estimation for linear SPDEs on arbitrary bounded domains based on discrete observations Karlsruhe Institute of Technology, Germany
Most statistical methods for stochastic partial differential equations (SPDEs) based on discrete observations are limited to one space dimension or to quite restrictive settings. In order to study SPDEs on a bounded domain driven by a stochastic noise process which is white in time and possibly colored in space, we aim to bridge the gap between two popular observation schemes studied in statistics for SPDEs, namely, discrete observations and local measurements. To this end, we have to extend the local measurements approach to kernels of distribution type. This link allows us to construct a non-parametric estimator for the diffusivity based on discrete high-frequency observations.
The talk is based on joint work with Randolf Altmeyer and Florian Hildebrandt.
|
10:30 am - 12:10 pm | S 8 (7): Finance, insurance and risk: Modelling Location: POT 361 Floor plan Session Chair: Peter Hieber Session Chair: Frank Seifried |
|
10:30 am - 10:55 am
Representation theorems for convex expectations on path space 1University of Freiburg, Germany; 2University of Konstanz, Germany
Random outcomes are typically modeled using probabilities and expectations. In many situations, these probabilities are only partially known, resulting in an expectation that is imprecise and lies in an interval defined by the lower and upper expectations.
In this talk, we consider convex upper expectations on a path space of càdlàg functions. We show that they are in one-to-one relation with penalty functions. Based on this relation, we establish that upper convex expectations (without so-called fixed times of discontinuity) are uniquely determined by their finite-dimensional distributions. In certain Markovian cases, it even suffices to know their one-dimensional distributions. As canonical examples, we discuss upper convex expectations that arise from relaxed control rules and nonlinear Lévy processes, for which we also derive infinitesimal descriptions.
10:55 am - 11:20 am
Global approximation theorem on the Wiener space via signatures University of Mannheim, Germany
The signature is a transformation of a path, stochastic process or time series. In recent years, signatures have been very successfully applied in mathematical finance, in particular, to develop data driven methods and models for financial markets. At the very heart of these signature-based approaches are the universal approximation theorems for signatures, establishing that continuous functionals can be approximated arbitrarily well on compact sets by linear maps acting on signatures. However, in the context of mathematical finance the limitation to compact sets seriously restricts the scope of these signature-based approaches.
In this talk, we extend the theoretical foundation of signature-based approaches by providing various global universal approximation theorems in the $L^p$-sense with respect to the Wiener measure. Specifically, we demonstrate that $L^p$-functionals on rough path space can be approximated globally in the $L^p$-sense under the Wiener measure. This allows us to approximate solutions to stochastic differential equations driven by Brownian motions by signature-based models, i.e., by linear combinations of signature elements of Brownian motion.
11:20 am - 11:45 am
Representation property for 1d general diffusion semimartingales 1Universität Freiburg, Germany; 2Universität Duisburg-Essen, Germany
A general diffusion semimartingale is a 1d continuous semimartingale that is also a regular strong Markov process. The class of general diffusion semimartingales is a natural generalization of the class of (weak) solutions to SDEs. A continuous semimartingale has the representation property if all local martingales w.r.t. its canonical filtration have an integral representation w.r.t. its continuous local martingale part. The representation property is closely related to market completeness. We show that the representation property holds for a general diffusion semimartingale if and only if its scale function is (locally) absolutely continuous in the interior of the state space. Surprisingly, this means that not all general diffusion semimartingales possess the representation property, which is in contrast to the SDE case. Furthermore, it follows that the laws of general diffusion semimartingales with absolutely continuous scale functions are extreme points of their semimartingale problems. We construct a general diffusion semimartingale whose law is not an extreme point of its semimartingale problem. This contributes to the solution of the problems posed by Jacod and Yor and by Stroock and Yor on the extremality of strong Markov solutions (to martingale problems).
11:45 am - 12:10 pm
The fundamental theorem of weak optimal transport University of Vienna
Weak optimal transport is a generalization of optimal transport that allows for costs that cover many optimization problems outside the realm of classic optimal transport, while still permitting the same results concerning primal existence and weak duality as in the classical case.
However, the question of dual attainment has remained open so far. Our main contribution is to establish the existence of dual optimizers, thus extending the fundamental theorem of optimal transport to the weak transport case.
This is based on joint work with Mathias Beiglböck, Gudmund Pammer and Lorenz Riess.
|
10:30 am - 12:10 pm | S 9 (2): is dropped Location: POT 112 Floor plan Session Chair: Nils-Christian Detering Session Chair: Peter Ruckdeschel |
10:30 am - 12:10 pm | S10 (4): Stochastic optimization and operation research Location: POT 13 Floor plan Session Chair: Nikolaus Schweizer Session Chair: Ralf Werner |
|
10:30 am - 10:55 am
The oriented derivative in stochastic control LMU Munich, Germany
We show that the derivatives in the sense of Fréchet and Gâteaux can be viewed as derivatives oriented towards a star convex set with the origin as center. The resulting oriented differential calculus extends the mean value theorem, the chain rule and the Taylor formula in Banach spaces. As applications in stochastic control, we consider functionals and operators of stochastic processes.
10:55 am - 11:20 am
The role of correlation in diffusion control ranking games 1Friedrich Schiller University Jena, Germany; 2Institut Élie Cartan de Lorraine
In this talk we consider Nash equilibria in two player continuous time stochastic differential games with diffusion control, and where the Brownian motions driving the state processes are correlated. We consider zero-sum ranking games, in the sense that the criteria to optimize only depends on the difference of the two players' state processes. We explicitly compute the players' equilibrium strategies, depending on the correlation of the Brownian motions driving the two state equations: in particular, if the correlation coefficient is smaller than some explicit threshold, then the equilibrium strategies consist of strong controls, whereas if the correlation exceeds the threshold, then the optimal controls are mixed strategies. To do so, we rely on a relaxed formulation of the game based on solutions to martingale problems, allowing the players to randomize their actions.
11:20 am - 11:45 am
Cost-Optimal Management of a Standalone Micro-Grid Equipped With Renewable Production and Battery BTU Cottbus-Senftenberg, Germany
In this talk, we consider a domestic micro-grid equipped with a local renewable energy production unit such as photovoltaic panels, consumption units, and a battery storage to balance supply and demand and investigate the stochastic optimal control problem for its cost-optimal management. Such systems are complex to control because of uncertainties in the weather and environmental conditions which affect the production and demand of energy.
As a special feature, the manager has no access to the grid but has access to a local generator, which makes it possible to produce energy using fuel when needed. Further, we assume that the battery and the fuel tank have limited capacities and that the fuel tank can only be filled once, at the beginning of the planning period; this leads to the so-called finite-fuel problem. In addition, we allow the energy demand to remain partially unsatisfied and impose penalties on unsatisfied demand, the so-called inconvenience cost.
The main goal is to minimize the expected aggregated cost for generating power using the generator and operating the system. This leads to a challenging mathematical optimization problem.
The optimization problem is formulated first as a continuous-time stochastic optimal control problem for a controlled multi-dimensional diffusion process. Then, we transform the continuous-time optimal control problem into a discrete-time control problem and solve it numerically using methods from the theory of Markov decision processes.
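The backward recursion from the theory of Markov decision processes mentioned above can be sketched for a generic finite-horizon MDP with running costs; all sizes and numbers below are illustrative and not the micro-grid model itself:

```python
import numpy as np

# Generic finite-horizon MDP solved by backward induction (dynamic
# programming): states S, actions A, transition kernel P[a, s, :] and
# running cost c[a, s].  V is the optimal cost-to-go.
nS, nA, T = 3, 2, 10
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(nS), size=(nA, nS))  # P[a, s] = prob. row
c = rng.uniform(size=(nA, nS))                 # running cost c(a, s)

V = np.zeros(nS)                               # terminal value
policy = np.zeros((T, nS), dtype=int)
for t in reversed(range(T)):
    Q = c + P @ V                              # Q[a, s] = cost-to-go
    policy[t] = np.argmin(Q, axis=0)           # greedy action per state
    V = Q.min(axis=0)                          # Bellman update
print(V)
```

In the micro-grid application the state would additionally carry battery charge and fuel level, and the transition kernel would come from discretizing the controlled diffusion, but the recursion has this same shape.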
11:45 am - 12:10 pm
Stochastic Optimal Control of Epidemics Under Partial Information BTU Cottbus-Senftenberg, Germany
We consider stochastic optimal control problems arising in the mathematical modeling of decision-making processes for the cost-optimal management and containment of epidemics. We focus on the impact of uncertainties such as dark figures and study the resulting optimal control problems which are under partial information since some components of the state process are hidden.
Working with diffusion approximations for the population dynamics and the associated Kalman filter estimates of non-observable state variables leads to control problems for controlled diffusion processes. Applying time-discretization, the latter is transformed into a Markov Decision Process that we solve numerically using a backward recursion algorithm. We use the results of optimal quantization and model reduction techniques to overcome the curse of dimensionality. Numerical results are presented.
|
10:30 am - 12:10 pm | S11 (3): Time series - Functional and High-Dimensional Time Series Location: POT 06 Floor plan Session Chair: Martin Wendler |
|
10:30 am - 10:55 am
An operator-level GARCH Model 1Department of Statistics, University of California, Davis (US); 2Chair of Stochastics, Ruhr University of Bochum (DE); 3Department of Statistics and Actuarial Science, University of Waterloo (CA)
The GARCH model is a commonly used statistical tool for describing conditionally heteroskedastic processes. It has been extensively studied in the univariate and multivariate cases and, more recently, in function spaces. Building upon the functional GARCH model, which has so far been defined exclusively in function spaces with point-wise GARCH equations, this paper defines the GARCH model in general separable Hilbert spaces and treats entire functions rather than point-wise recursions. The paper derives sufficient conditions for strictly stationary solutions, finite moments, and weak dependence, and discusses sufficient and necessary conditions for weak stationarity. In addition, it establishes consistent Yule-Walker estimates with explicit convergence rates for the finite-dimensional projections of the GARCH parameters and for their entire representation. Finally, the usefulness of the proposed model is demonstrated through a simulation study and a real data example.
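As a point of reference for the operator-level construction, a minimal scalar GARCH(1,1) sketch (purely illustrative parameters) shows the stationarity condition and unconditional variance that the Hilbert-space model generalizes:

```python
import numpy as np

# Scalar GARCH(1,1) sketch (the talk lifts these recursions to a separable
# Hilbert space; the parameters here are purely illustrative):
#   sigma_t^2 = omega + alpha * x_{t-1}^2 + beta * sigma_{t-1}^2
#   x_t       = sigma_t * eps_t,   eps_t iid N(0,1).
# Weak stationarity requires alpha + beta < 1, with unconditional
# variance omega / (1 - alpha - beta).

rng = np.random.default_rng(0)
omega, alpha, beta = 0.1, 0.1, 0.8
n, burn = 200_000, 1_000

sig2, xs = omega / (1 - alpha - beta), []
for _ in range(n + burn):
    x = np.sqrt(sig2) * rng.standard_normal()
    xs.append(x)
    sig2 = omega + alpha * x**2 + beta * sig2
xs = np.array(xs[burn:])

print(xs.var(), omega / (1 - alpha - beta))  # empirical vs theoretical
```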
10:55 am - 11:20 am
Towards a bootstrap uniform functional central limit theorem for nonstationary time series 1Otto-Friedrich-Universität Bamberg; 2RWTH Aachen University
Sequential empirical processes (SEPs) are an important tool in nonparametric statistics, where they are applied, for example, to change detection problems and goodness-of-fit testing. Due to this connection to applied statistics, it is relevant not only to study the weak convergence of SEPs in different settings, but also to develop methods with which the distribution of possible weak limits can be approximated. To the best of my knowledge, such methods are currently lacking for function-indexed SEPs constructed from nonstationary time series. Addressing this problem, this talk presents work in progress on the weak convergence of multiplier SEPs for weakly dependent nonstationary arrays. A general result on the asymptotic equicontinuity of multiplier processes is established, starting from which multiplier SEPs are studied under dependence and bracketing conditions. Regarding the asymptotic equicontinuity, the only assumptions that need be imposed on the multiplier sequence are its independence of the data and the (uniform) existence of moments of any order. Possible extensions and statistical applications are discussed.
11:20 am - 11:45 am
High-dimensional Gaussian linear processes: Marchenko-Pastur beyond simultaneous diagonalizability University of Freiburg, Germany
The eigenvectors of the spectral density matrix $\mathcal{F}(\theta)$ of a stationary Gaussian process $(X_t)_{t \in \mathbb{Z}}$ depend explicitly on the frequency $\theta \in [0,2\pi]$. The most commonly used estimator of $\mathcal{F}(\theta)$ is the smoothed periodogram, which takes the form $YY^T$ for random matrices $Y$ with independent columns, each with a different underlying covariance structure. When the covariance matrices of the columns are not simultaneously diagonalizable, such matrices $YY^T$ are out of reach for the current state of random matrix theory. In this paper, we derive a Marchenko-Pastur law in this non-simultaneously diagonalizable case. On the technical level, we make the following two contributions:
1) We introduce a generalization of graph-theoretical methods specific to Gaussian random matrices, which allow for the exploitation of independent columns without needing independent rows.
2) By means of the Lagrange inversion formula, we draw a direct connection between trace moment expansions and the Marchenko-Pastur equation.
The Marchenko-Pastur law emerges when the dimension $d$ of the process and the smoothing span $m$ of the smoothed periodogram grow at the same rate, which is slower than the number of observations $n$.
|
10:30 am - 12:10 pm | S13 (7): Nonparametric and asymptotic statistics Location: ZEU 250 Floor plan Session Chair: Alexander Kreiss Session Chair: Leonie Selk |
|
10:30 am - 10:55 am
Efficient Estimation of a Gaussian Mean with Local Differential Privacy Institute of Science and Technology Austria (ISTA), Austria
In this paper, we study the problem of estimating the unknown mean $\theta$ of a unit variance Gaussian distribution in a locally differentially private (LDP) way. In the high-privacy regime ($\epsilon\le 1$), we identify an optimal privacy mechanism that minimizes the variance of the estimator asymptotically. Our main technical contribution is the maximization of the Fisher-Information of the sanitized data with respect to the local privacy mechanism $Q$. We find that the exact solution $Q_{\theta,\epsilon}$ of this maximization is the sign mechanism that applies randomized response to the sign of $X_i-\theta$, where $X_1,\dots, X_n$ are the confidential iid original samples. However, since this optimal local mechanism depends on the unknown mean $\theta$, we employ a two-stage LDP parameter estimation procedure which requires splitting agents into two groups. The first $n_1$ observations are used to consistently but not necessarily efficiently estimate the parameter $\theta$ by $\tilde{\theta}_{n_1}$. Then this estimate is updated by applying the sign mechanism with $\tilde{\theta}_{n_1}$ instead of $\theta$ to the remaining $n-n_1$ observations, to obtain an LDP and efficient estimator of the unknown mean.
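A minimal simulation sketch of such a two-stage procedure follows. Only the randomized-response-on-the-sign step mirrors the mechanism identified in the abstract; the Laplace-based pilot estimator, the clipping bound and all constants are illustrative assumptions:

```python
import numpy as np
from statistics import NormalDist

# Two-stage locally private Gaussian mean estimation, sketched.
# Stage 1: a crude Laplace mechanism gives a consistent pilot estimate
#          (one of many possible choices; not prescribed by the paper).
# Stage 2: randomized response applied to sign(X_i - pilot), then the
#          normal CDF is inverted to update the estimate.

rng = np.random.default_rng(1)
theta, n, eps = 1.3, 200_000, 1.0
X = theta + rng.standard_normal(n)
X1, X2 = X[: n // 2], X[n // 2:]

# Stage 1: clip to [-B, B], add Laplace(2B/eps) noise, average.
B = 10.0
pilot = np.mean(np.clip(X1, -B, B) + rng.laplace(0, 2 * B / eps, X1.size))

# Stage 2: randomized response on the sign of X_i - pilot.
q = 1.0 / (1.0 + np.exp(eps))                 # flip probability
S = (X2 > pilot).astype(float)
flip = rng.random(X2.size) < q
S[flip] = 1.0 - S[flip]
p_hat = (S.mean() - q) / (1.0 - 2.0 * q)      # debiased P(X > pilot)
p_hat = min(max(p_hat, 1e-3), 1 - 1e-3)

# Since P(X > pilot) = Phi(theta - pilot), invert the normal CDF:
theta_hat = pilot + NormalDist().inv_cdf(p_hat)
print(theta_hat)
```

Note how the second stage corrects the (noisy) pilot: the randomized-response frequency estimates $\Phi(\theta-\tilde\theta)$, so inverting $\Phi$ recovers the offset.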
10:55 am - 11:20 am
Quantum statistical inference under locally gentle measurements 1Universität Heidelberg, Germany; 2ENSAE, France
We study the task of state estimation and quantum hypothesis testing under the constraint of gentle measurements acting independently on $n$ independent identical states. Gentle measurements are quantum instruments for which the post-measurement state $\rho(\theta)_{M \to y}$ of a state $\rho(\theta)$ differs only by a small amount $\alpha$ from the pre-measurement state, i.e. $|| \rho(\theta)_{M \to y} - \rho(\theta)||_1 \leq \alpha$. We show that both the testing and estimation errors scale with a factor of $\frac{1}{\alpha \sqrt{n}}$, which is consistent with the relation established between gentleness and local differential privacy.
11:20 am - 11:45 am
Estimation for the convolution of several multidimensional densities 1Université Paris Cité, France; 2Universität Heidelberg, Germany
This work is concerned with the problem of estimating the $m$-fold convolution of the densities of $m$ independent variables in the first step and vectors in the second step. A nonparametric estimator is proposed and the point-wise and integrated quadratic risk is studied. We use Fourier analysis to bound the variance and kernel methods allow us to consider Nikolski and Hölder classes in addition to the standard Sobolev classes for deconvolution estimators. For this, smoothness properties of $m$-fold convoluted densities are studied. We discuss rates of convergence. In addition, we look at bandwidth selection methods.
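The object being estimated can be illustrated numerically. This sketch (not the paper's estimator) computes a 2-fold convolution on a grid and checks it against the known triangular density:

```python
import numpy as np

# Numerical sketch of density convolution on a grid. The convolution of two
# Uniform(0,1) densities is the triangular density on [0,2], which gives an
# exact benchmark for the Riemann-sum approximation below.

h = 1e-3
x = np.arange(0, 1, h)                    # grid on [0,1)
f = np.ones_like(x)                       # Uniform(0,1) density
conv = np.convolve(f, f) * h              # Riemann-sum convolution
grid = np.arange(conv.size) * h           # grid on [0,2)

tri = np.where(grid < 1, grid, 2 - grid)  # triangular density on [0,2]
err = np.max(np.abs(conv - tri))
print(err)
```

Iterating `np.convolve` gives the $m$-fold convolution; the smoothing visible here (uniform to triangular) is the gain in regularity that the paper exploits.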
11:45 am - 12:10 pm
Minimax-optimal data-driven estimation in multiplicative inverse problems Ruprecht-Karls-Universität Heidelberg, Germany
We consider the nonparametric estimation of a function of interest $\theta$
based on empirical versions of both an observable signal $g=s\,\theta$ and a multiplicative function $s$. The general framework covers, for example, circular convolution, additive convolution on the real line, and multiplicative convolution on the positive real line. Typical questions in this context are the nonparametric estimation of the function $\theta$ as a whole or of the value of a linear functional evaluated at $\theta$, referred to as global or local estimation, respectively. The proposed estimation procedures necessitate the choice of a tuning parameter, which, in turn, crucially influences the attainable accuracy of the constructed estimator. Its optimal choice follows from a classical squared-bias-variance trade-off and relies on a priori knowledge about $\theta$ and $s$, which is usually inaccessible in practice. We propose a fully data-driven choice of the tuning parameter by model selection for global estimation and by the Goldenshluger-Lepski method for local estimation. We derive global and local oracle inequalities and discuss the attainable minimax rates of convergence under usual behaviours of $\theta$ and $s$.
|
12:10 pm - 1:40 pm | Lunch |
1:40 pm - 2:30 pm | S11 (4): Time series - New Developments in Time Series Analysis Location: POT 81 Floor plan Session Chair: Tim Manfred Kutta |
|
1:40 pm - 2:05 pm
Artificial Neural Network small-sample-bias-corrections of the AR(1) parameter close to unit root 1Technische Universität Dresden, Germany; 2Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI) Dresden/Leipzig, Germany; 3University of Lausanne, HEC, Switzerland
This paper introduces an ANN approach to estimate the autoregressive process AR(1) when the autocorrelation parameter is near one. Traditional OLS estimators suffer from biases in small samples, necessitating various correction methods proposed in the literature. The ANN, trained on simulated data, outperforms these methods due to its nonlinear structure. Unlike competitors requiring simulations for bias corrections based on specific sample sizes, the ANN directly incorporates sample size as input, eliminating the need for repeated simulations. Stability tests involve exploring different ANN architectures and activation functions and robustness to varying distributions of the process innovations. Empirical applications on financial and industrial data highlight significant differences among methods, with ANN estimates suggesting lower persistence than other approaches.
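The small-sample bias that such corrections target is easy to reproduce. This Monte Carlo sketch (illustrative sample size and parameter; it does not reproduce the ANN correction itself) shows the downward bias of the demeaned OLS estimator near the unit root:

```python
import numpy as np

# Monte Carlo illustration of the small-sample bias of the OLS estimator of
# the AR(1) parameter near the unit root -- the bias that correction methods
# (including the ANN approach in the talk) aim to remove. The classical
# approximation for the with-intercept bias is roughly -(1 + 3*rho)/T.

rng = np.random.default_rng(2)
rho, T, reps = 0.95, 50, 5_000
est = np.empty(reps)
for r in range(reps):
    x = np.empty(T)
    x[0] = rng.standard_normal() / np.sqrt(1 - rho**2)  # stationary start
    for t in range(1, T):
        x[t] = rho * x[t - 1] + rng.standard_normal()
    y0, y1 = x[:-1], x[1:]
    y0c, y1c = y0 - y0.mean(), y1 - y1.mean()           # demeaned OLS
    est[r] = np.sum(y0c * y1c) / np.sum(y0c**2)

print(est.mean())   # clearly below the true value rho = 0.95
```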
2:05 pm - 2:30 pm
Nonparametric spectral density estimation under local differential privacy 1University of Kassel, Germany; 2CREST, ENSAE, France; 3University of Vienna, Austria
We propose a new interactive locally differentially private mechanism for estimating Sobolev smooth spectral density functions of stationary Gaussian processes. Anonymization is achieved through two-stage truncation and subsequent Laplace perturbation. In particular, we show that our method achieves a pointwise L2-rate with a dependency of only $\alpha^2$ on the privacy parameter $\alpha$. This rate stands in contrast to the results of (Kroll, 2024), who proposed a non-interactive mechanism for spectral density estimation and showed a dependency of $\alpha^4$ on the privacy parameter for the uniform L2-rate.
|
1:40 pm - 3:20 pm | S 2 (6): Spatial stochastics, disordered media, and complex networks Location: POT 251 Floor plan Session Chair: Chinmoy Bhattacharjee Session Chair: Benedikt Jahnel |
|
1:40 pm - 2:05 pm
Multidimensional compound Poisson approximation for the Gilbert graph Universität Osnabrück, Germany
We have proved the multidimensional version of a Poisson process approximation theorem for specific point processes. This result can be used to derive multidimensional compound Poisson limit theorems in the total variation distance for vectors whose components are all $U$-statistics of the same Poisson point process. In general, the components have to be asymptotically independent, but we will also show how one can proceed if this is not the case. Our examples are based on the Gilbert graph.
2:05 pm - 2:30 pm
Poisson approximation for cycles in the generalised random graph Hamburg University of Technology, Germany
The generalised random graph contains $n$ vertices with positive i.i.d. weights. The probability of adding an edge between two vertices is increasing in their weights. We require certain moment assumptions on the weights, ranging from finite second to finite fourth moments. The object of interest is the point process $\mathcal{C}_n$ on $\{3,4,\dots\}$, which counts how many cycles of the respective length are present in the graph. We establish convergence of $\mathcal{C}_n$ to a Poisson process. When $\mathcal{C}_n$ is evaluated on a bounded set $A$, we provide a rate of convergence in the total variation distance. If the graph is subcritical, $A$ is allowed to be unbounded, which comes at the cost of a slower rate of convergence. From this we deduce the limiting distribution of the length of the shortest and of the longest cycle when the graph is subcritical, including rates of convergence. All mentioned results also apply to the Chung-Lu model and the Norros-Reittu model.
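A small simulation sketch, using one standard parametrisation of the generalised random graph (an assumption, since the abstract does not fix one), counts 3-cycles in two equivalent ways:

```python
import numpy as np

# Sketch: simulate a generalised random graph with i.i.d. positive weights,
# here with p_ij = w_i w_j / (L + w_i w_j), L = sum of weights (one standard
# parametrisation; assumed for illustration), and count 3-cycles both via
# trace(A^3)/6 and by brute force over vertex triples.

rng = np.random.default_rng(3)
n = 30
w = rng.exponential(1.0, n) + 0.5          # positive i.i.d. weights
L = w.sum()

P = np.outer(w, w) / (L + np.outer(w, w))  # connection probabilities
A = (rng.random((n, n)) < P).astype(int)
A = np.triu(A, 1)                          # upper triangle, no self-loops
A = A + A.T                                # symmetric adjacency matrix

tri_trace = int(round(np.trace(A @ A @ A) / 6))   # triangles via trace

tri_brute = sum(
    A[i, j] * A[j, k] * A[i, k]
    for i in range(n) for j in range(i + 1, n) for k in range(j + 1, n)
)
print(tri_trace, tri_brute)
```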
2:30 pm - 2:55 pm
Rectangular Gilbert Tessellation 1RPTU Kaiserslautern-Landau, Germany; 2Lund University, Sweden
A process of a random planar quadrangulation is introduced as an approximation for certain cellular automata describing neuronal dynamics. This model turns out to be a particular (rectangular) case of the Gilbert tessellation. The initial state of the model is a Poisson point process in the plane with intensity $\lambda$. Each point is independently assigned a direction, horizontal or vertical, with equal probability. Over time, rays grow from each point at constant speed in both directions along the assigned line. As soon as a ray crosses the pathway of another ray, it stops growing.
The central and still open question is the distribution of the line segments. We derive exponential bounds for the tail of this distribution. The correlations in the model are derived for some instructive examples; in particular, the exponential decay of correlations is established. For initial conditions confined to a box $[0,N]^2\subset\mathbb{R}^2$, it is proved that the average number of rays escaping the box grows linearly in $N$.
2:55 pm - 3:20 pm
Intersection processes of $k$-flats in hyperbolic space: New limits and convergence rates for observations in spherical windows KIT, Germany
Let $\eta$ be an isometry invariant Poisson process of $k$-flats, $0\leq k\leq d-1$, in $d$-dimensional hyperbolic space.
For $d-m(d-k)\geq 0$, the $m$-th order intersection process of $\eta$ consists of all nonempty intersections of distinct flats $E_1,\ldots,E_m\in\eta$.
Of particular interest is the total volume $F^{(m)}_r$ of this intersection process in a ball of radius $r$, since it has been observed that this functional obeys a nonstandard CLT for certain values of $d,k$.
For $2k>d+1$, we determine the asymptotic distribution of $F^{(m)}_r$, as $r\to\infty$, previously known only for $m=1$, and derive rates of convergence in the Kolmogorov distance.
We further discuss properties of the non-Gaussian limit distribution.
|
1:40 pm - 3:20 pm | S 3 (6): Stochastic Analysis and S(P)DEs Location: POT 151 Floor plan Session Chair: Vitalii Konarovskyi Session Chair: Aleksandra Zimmermann |
|
1:40 pm - 2:05 pm
A stochastic approach to time-dependent BEC University of Milan (La Statale), Italy
We propose a stochastic description of the time dependent quantum Bose-Einstein condensate at zero temperature, within the context of Nelson stochastic mechanics. We describe an infinite particle limit of interacting diffusions which corresponds to the mean field limit in the related quantum system. We are able to extend the framework of Nelson stochastic mechanics to nonlinear systems in particular to the case of the nonlinear Schrödinger equation. We also propose how to extend to this nonlinear case the Guerra-Morato variational approach. Our work can also be seen in the context of a mean field limit of McKean–Vlasov processes in a general situation where the drift is a very singular function depending non-trivially on all the particles.
2:05 pm - 2:30 pm
McKean—Vlasov SDEs: New results on existence of weak solutions and on propagation of chaos ETH Zürich, Switzerland
We consider the existence of weak solutions of McKean-Vlasov SDEs with common noise and the propagation of chaos for the associated weakly interacting finite particle systems. Our strategy consists of two main components allowing us to analyze settings with general nonlinear but uniformly elliptic coefficients possessing only low spatial regularity through a marriage of probabilistic and analytic techniques.
First, we explore the emergence of regularity in limit points of McKean-Vlasov particle systems, leading to a priori regularity estimates for large-system limits of the empirical measure flows from finite particle systems. Second, we leverage this regularity to establish the existence of weak solutions for McKean-Vlasov SDEs and to identify more nuanced conditions under which chaos propagates, i.e. under which an asymptotic decoupling of the particles takes place and the dynamics in large systems become conditionally independent in law.
Next to its applicability to low-regularity regimes, the approach we take to obtain weak solutions and the propagation of chaos may also be useful in future applications to mean-field games and controlled problems.
2:30 pm - 2:55 pm
Mean-field stochastic differential equations with local interactions Humboldt University Berlin, Department of Mathematics, Germany
We introduce a novel class of particle processes governed by stochastic differential equations, featuring both local and mean-field interactions. The impact of the (countably infinite) particle population on an individual is felt through the state of designated neighbors and the average state across the entire population. Under a suitable homogeneity condition on the coefficients of the SDEs, we prove that the dynamics of the infinite system is well-defined, and that the average dynamics can be characterized as the unique solution to a non-linear Fokker-Planck-Kolmogorov equation. In view of the latter, the average dynamics can be decoupled from the local one, rendering the interaction within the population purely local. We further prove that the infinite system can be viewed as the limit of an increasing sequence of finite systems, in which both the individual and aggregate quantities converge to their respective counterparts.
This is joint work with Ulrich Horst.
2:55 pm - 3:20 pm
Brownian Motion with Occupation Time Restrictions Outside a Compact Interval: Extreme Entropic Repulsion 1Technical University of Darmstadt, Germany; 2Paderborn University, Germany
We condition a Brownian motion on spending an atypically small amount of time outside a compact interval and characterize the resulting process in terms of an SDE. In particular, we encounter situations where the process almost surely does not leave the interval at all, discovering a very rare extreme example of entropic repulsion. Moreover, we explicitly determine the exact asymptotic behavior of associated conditioning probabilities on $[0,T]$, as $T\to\infty$.
|
1:40 pm - 3:20 pm | S 4 (7): Limit theorems, large deviations and extremes Location: ZEU 160 Floor plan Session Chair: Jan Nagel Session Chair: Marco Oesting |
|
1:40 pm - 2:05 pm
Limit Theorems for Open Quantum Dynamics In A Random Environment 1University of Copenhagen, Denmark; 2Michigan State University
Markovian dynamics of an open quantum system is determined by a family of super operators $(\phi_{s,t})_{s\leq t}$ that satisfy the \emph{composition law}
\[
\phi_{r,t} = \phi_{s,t}\circ \phi_{r,s} \quad \forall \quad s \leq r \leq t\, .
\]
These super-operators, called \emph{dynamical maps} or \emph{dynamic propagators}, are generated by a time-dependent Lindbladian $\mathcal L$ (also known as a GKLS generator):
\[
\phi_{s,t} = \mathcal T\left\{\exp \left( \int_s^t \mathcal L(\tau) \, d\tau\right)\right\}\, .
\]
Here $\mathcal T$ denotes the time-ordering.
We now consider the case where $\mathcal L$ is time-dependent but statistically stationary: we are interested in asymptotics of time-inhomogeneous Markovian dynamics obtained from a strictly stationary random Lindbladian process $(\mathcal L_t)_{t\in\mathbb R}$.
Under certain irreducibility criteria we obtain stationary processes of random full-rank matrices $(Z_t)_{t\in\mathbb R}$, $(Z'_t)_{t\in\mathbb R}$ and a family of rank-one super-operators $\left( ^s{\Xi}{_t}\right)_{s,t}$ that approximate the dynamics of the open quantum system exponentially fast almost surely and super-polynomially fast in mean. In discrete time, such propagators describe the quantum dynamics of a random repeated interaction (random collision) model, and we obtain a Law of Large Numbers (LLN) and a Central Limit Theorem (CLT) involving the top Lyapunov exponent of the product
\[
\widetilde\Phi^{(n)} = \phi_{n-1, n}\circ \ldots\circ \phi_{0,1}
\, .\]
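As a toy analogy (plain 2x2 matrices rather than super-operators, and a diagonal case chosen so that the exponent is known in closed form), the LLN for the top Lyapunov exponent of such a product can be sketched as:

```python
import numpy as np

# Toy illustration of the LLN for the top Lyapunov exponent of a product of
# i.i.d. random matrices. For random diagonal factors diag(a, b) with i.i.d.
# entries, the exponent equals max(E[log a], E[log b]), giving an analytic
# benchmark for the renormalised-vector estimate below.

rng = np.random.default_rng(4)
n = 20_000
v = np.array([1.0, 1.0])
log_norm = 0.0
for _ in range(n):
    M = np.diag(rng.choice([2.0, 3.0], size=2))  # random diagonal factor
    v = M @ v
    s = np.linalg.norm(v)
    log_norm += np.log(s)                        # accumulate log growth
    v /= s                                       # renormalise to avoid overflow

lyap_est = log_norm / n
lyap_true = 0.5 * (np.log(2.0) + np.log(3.0))    # E[log entry]
print(lyap_est, lyap_true)
```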
2:05 pm - 2:30 pm
On fluctuations of complexity measures for the QuickSelect algorithm Goethe-Universität Frankfurt, Germany
The Quickselect algorithm (also called FIND) is a fundamental algorithm to select ranks or quantiles within a set of data.
It was shown by Grübel and Rösler that the number of key comparisons required by Quickselect as a process of the quantiles $\alpha\in[0,1]$ in a natural probabilistic model converges after normalization in distribution within the càdlàg space $D[0,1]$ endowed with the Skorokhod metric.
We show that the process of the residuals in the latter convergence after normalization converges in distribution to a mixture of Gaussian processes in $D[0,1]$ and identify the limit's conditional covariance functions.
A similar result holds for the related algorithm QuickVal.
Our method extends to other cost measures such as the number of swaps (key exchanges) required by QuickSelect or cost measures which are based on key comparisons but take into account that the cost of a comparison between two keys may depend on their values, an example being the number of bit comparisons needed to compare keys given by their bit expansions.
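A minimal QuickSelect implementation that counts key comparisons (illustrative only; it assumes distinct keys) sketches the cost measure under discussion:

```python
import random

# QuickSelect (FIND) with a counter for key comparisons -- the cost measure
# whose fluctuations the talk analyses. Randomised pivots, distinct keys.

def quickselect(data, k, rng):
    """Return the k-th smallest element (0-indexed) and #key comparisons."""
    data = list(data)
    comparisons = 0
    while True:
        if len(data) == 1:
            return data[0], comparisons
        pivot = data[rng.randrange(len(data))]
        smaller, larger = [], []
        for x in data:
            if x == pivot:          # the pivot itself: no comparison counted
                continue
            comparisons += 1        # one key comparison against the pivot
            if x < pivot:
                smaller.append(x)
            else:
                larger.append(x)
        if k < len(smaller):
            data = smaller
        elif k == len(smaller):
            return pivot, comparisons
        else:
            k -= len(smaller) + 1
            data = larger

rng = random.Random(5)
data = rng.sample(range(1000), 200)
val, cost = quickselect(data, 50, rng)
print(val, cost)
```

Counting swaps or bit comparisons instead only changes the counter update, which is how the method extends to the other cost measures mentioned above.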
2:30 pm - 2:55 pm
Randomized Geodesic Flow on Hyperbolic Groups 1University of Münster, Germany; 2School of Mathematics, TIFR, Colaba Mumbai, India
It is well-established that Patterson-Sullivan measures on the boundary of a hyperbolic space, along with the associated Bowen-Margulis-Sullivan measure, provide valuable insights into the action of a group of isometries on the space's boundary through analysis of the geodesic flow. Given that paths of a random walk on a hyperbolic group lie close to the group's quasi-geodesics, it is natural to ask whether similar behaviour can also be seen in the flow along bi-infinite random walk paths.
In this talk, I will show how studying bi-infinite random walks on a discrete hyperbolic group $G$ leads to an analogue of the Patterson-Sullivan measure on $\partial^2G$. This measure can be constructed in multiple measure-equivalent ways, each giving a distinct perspective on its intrinsic structure. Moreover, as in the classical case, the action $G \curvearrowright \partial^2 G$ is ergodic with respect to this measure. Central to the construction of this measure are the almost sure convergence of the random walk to the boundary $\partial G$ and the study of the distribution of the hitting points. This talk is based on joint work with Mahan Mj and Chiranjib Mukherjee.
2:55 pm - 3:20 pm
Limit Theorems for Multiscale Ergodic Diffusion Processes 1Karlsruhe Institute of Technology; 2Imperial College London
There exists a continuing interest in establishing consistency and asymptotic normality of estimators for homogenized SDE models, which emerge from the respective multiscale SDE model when the scale parameter $\epsilon$ goes to zero. However, most of these asymptotic results are proved when the limits are taken sequentially, that is, the time horizon $T$ goes to infinity and then the scale parameter $\epsilon$ goes to zero, or vice versa. In this talk, we present a first attempt at answering the question of taking the simultaneous limit, i.e. $T=T_\epsilon$ depends explicitly on $\epsilon$ such that $T_\epsilon\to\infty$ as $\epsilon\to 0$.
We present two limit theorems, a mean ergodic theorem and a central limit theorem, for a specific class of one-dimensional ergodic diffusion processes that depend on the small scale parameter $\epsilon$. In these results, we allow not only the time horizon $T_\epsilon$ but also the test function $\phi_\epsilon$ to depend explicitly on $\epsilon$. The novelty of the results arises from the fact that many quantities are unbounded as $\epsilon \rightarrow 0$, so that previously established theory is not directly applicable and a careful investigation of all relevant $\epsilon$-dependent terms is required.
As a mathematical application, we then use these limit theorems to prove robustness and asymptotic normality of a minimum distance estimator for parameters in homogenized Langevin equations subject to multiscale observations.
|
1:40 pm - 3:20 pm | S 7 (10): Stochastic processes: theory, statistics and numerics Location: POT 51 Floor plan Session Chair: Mathias Trabs |
|
1:40 pm - 2:05 pm
Persistence for Spherical Fractional Brownian Motion TU Darmstadt, Germany
We consider spherical fractional Brownian motion $(S_H(\eta))_{\eta\in\mathbb{S}_{d-1}}$, which is obtained by taking fractional Brownian motion indexed by the (multi-dimensional) sphere $\mathbb{S}_{d-1}$, and calculate its persistence exponent. Persistence in this context is the study of the decay of the probability
$$
\mathbb{P}\left( \sup_{\eta \in \mathbb{S}_{d-1}} S_H(\eta) \leq \varepsilon \right)
$$
when the barrier $\varepsilon \searrow 0$ becomes more and more restrictive. Our main result shows that the persistence probability of spherical fractional Brownian motion has the same order of polynomial decay as its Euclidean counterpart.
2:05 pm - 2:30 pm
Multivariate Fractional Brownian Motion: Correlation structure, statistics and applications to improve forecasting Universität Würzburg, Germany
We reconsider a multivariate fractional Brownian motion with component-wise different Hurst exponents. We investigate how strongly processes with different autocorrelation patterns over time can be correlated. We highlight that the multivariate model facilitates efficiency gains in forecasting compared to using univariate fractional Brownian motions only. This is demonstrated to be practically relevant for multivariate time series of realized volatilities; thereby, we can improve the prediction of rough volatility. We advance the statistical theory for the model to provide parameter estimates, asymptotic confidence intervals and hypothesis tests, which are of interest for the application. We show that correlations not only help to reduce prediction uncertainty, but can moreover be exploited to minimize statistical risk.
2:30 pm - 2:55 pm
Sampling inverse subordinators and subdiffusions 1University of Turin, Italy; 2University of Zagreb, Croatia
In this presentation, a method to exactly sample the trajectories of
inverse subordinators, jointly with the undershooting and overshooting processes, will be provided. The method is applicable to general subordinators. To deal with such non-Markovian processes, we use the theory of semi-Markov processes and recent techniques for the exact simulation of first passage events (i.e. passage time, undershooting, and overshooting) at a single time, developed by Cázares, Lin, and Mijatović (2023). To the best of our knowledge, the presented method is the first one that exactly simulates the whole trajectory of a general inverse subordinator.
Additionally, the Monte Carlo approximation of a functional
of subdiffusive processes (in the form of time-changed Feller processes) will be considered, where a central limit theorem and the Berry-Esseen bounds will be presented.
The approximation of time-changed Itô diffusions is also studied where the strong error is explicitly evaluated as a function of the time step, demonstrating the strong convergence.
2:55 pm - 3:20 pm
On a finite-velocity random motion related to a modified Euler-Poisson-Darboux equation Università degli Studi di Salerno, Italy
The standard telegraph process $X_t$ describes the random motion of a particle on the real line with velocity changing alternately between a positive and a negative value ($\pm c$, $c>0$). Classically,
the sequence of switching epochs is governed by a homogeneous Poisson process $N_t$ with rate $\lambda>0$. This means that the random times between consecutive reversals of direction are independent and identically exponentially distributed.
Due to the important role of the telegraph process in various applied contexts, several generalizations of such a process have been studied.
Since the memoryless property of the times between consecutive velocity changes is rarely met in real phenomena, in some papers the sequence of switching epochs is assumed to be governed by a non-homogeneous Poisson process with rate function $\lambda(t)>0$. In the latter case, the PDE satisfied by the probability density function of the process is
\begin{equation}
\frac{\partial^2 p}{\partial t^2}+2 \lambda(t)\frac{\partial p}{\partial t}=c^2\frac{\partial^2 p}{\partial x^2},\quad x\in {\mathbb R},\quad t>0. \qquad (1)
\end{equation}
When $\lambda(t)=t^{-1}$, Eq. (1) identifies with the well-known
one-dimensional Euler-Poisson-Darboux (EPD) equation, which, under suitable initial conditions, admits an explicit solution (see also [3]).
We study a modification of the Euler-Poisson-Darboux equation.
Specifically, we consider a time-decreasing intensity function that tends to $0$ more slowly than $\lambda(t)=\frac{\alpha}{t}$, in order to describe a random motion characterized by a long-tail behaviour, typical of several physical, biological, economic and financial phenomena (see, for example, [1]).
Therefore, we assume that $N_t$ is a non-homogeneous Poisson process
with rate function $\lambda(t)=\frac{\alpha}{\sqrt{t}}$, $\alpha>0$, $t>0$.
Following the lines of [2], we consider the Fourier transform of the related pde (1)
and solve it by means of a suitable transformation which leads to a one-dimensional Schrödinger-type ordinary differential equation. The solution of this equation, which, to the best of our knowledge, is not available in the literature, is obtained by the Frobenius method and provides a closed-form expression of the characteristic function of $X_t$.
Differently from the EPD equation, in such modified Euler-Poisson-Darboux equation, we have that $\int_{0}^{t} \lambda(s) {\rm d}s<+\infty$, so that the probability law admits also a discrete component.
Starting from the characteristic function, we obtain the expression of the $n$-th moment of $X_t$, $n\in {\mathbb N}$, which is expressed as a finite sum of modified spherical Bessel functions.
We determine Kac-type scaling conditions under which the characteristic function of the non-homogeneous telegraph process $X_t$
converges to that of fractional Brownian motion with Hurst index $H=3/4$.
Moreover, we study the probability law of $X_t$, which consists of an absolutely continuous component
on $(-c t, c t)$ and a discrete one concentrated at $c t$. Finally, we study the behavior of the density at the endpoints of the interval $(-c t, c t)$.
[1] F. den Hollander, Long Time Tails in Physics and Mathematics. In: Grimmett, G. (ed.) Probability and Phase Transition. NATO ASI Series, vol. 420. Springer, Dordrecht (1994).
[2] S.K. Foong, U. van Kolck, Poisson Random Walk for Solving Wave Equations, Prog. Theor. Phys. 87 (2) (1992) 285-292.
[3] R. Garra, E. Orsingher, Random flights related to the Euler-Poisson-Darboux equation, Markov Process. Relat. Fields 22 (1) (2016) 87-110.
[4] B. Martinucci, S. Spina, On a finite-velocity random motion governed by a modified Euler-Poisson-Darboux equation, submitted.
|
1:40 pm - 3:20 pm | S 8 (8): Finance, insurance and risk: Modelling Location: POT 361 Floor plan Session Chair: Peter Hieber Session Chair: Frank Seifried |
|
1:40 pm - 2:05 pm
Control of Drawdowns with Random Inspection University of Cologne, Germany
In order to appear reliable, insurance companies are interested in stabilising their surplus process and avoiding large losses. An object of interest is therefore the size of the drawdown, that is, the distance of the surplus to the last historical maximum. L. Brinker and H. Schmidli introduced the idea of dividing the size of the drawdown into a critical and a non-critical area. The aim is then to stabilise the drawdown in the non-critical area through a suitable reinsurance strategy. In reality, one can observe the process and adapt the strategy at discrete time points only. In this talk, we implement this idea by inspecting the drawdown process at the arrival times $\{T_k\}_{k\in\mathbb{N}}$ of a renewal process and by considering strategies of the form $$B_t = \sum_{k=0}^{\infty} b_k \mathbb{1}_{[T_k,T_{k+1})}(t)\;.$$ We then want to minimise the expected number of times at which a critical drawdown level is observed. In order to characterise the optimal strategy, we state a dynamic programming equation and show that the value function is the unique bounded solution. Moreover, the distribution of the drawdown under constant strategies can be calculated explicitly. This allows us to rewrite the dynamic programming equation into an integral form, which may be solved numerically. This talk is based on joint work with Hanspeter Schmidli.
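As a rough illustration of the objective being minimised (our own toy sketch, not the authors' model: the diffusion surplus, Euler discretisation and all parameter names are placeholder assumptions), the expected number of inspections at which the drawdown is critical can be estimated by simulation:

```python
import random

def count_critical_inspections(mu, sigma, d_crit, horizon, mean_gap,
                               dt=0.01, rng=None):
    """Euler-scheme toy simulation: surplus X with drift mu and volatility
    sigma, drawdown D_t = max_{s<=t} X_s - X_t, inspections at the arrival
    times of a Poisson (renewal) process with mean gap mean_gap.
    Returns the number of inspections at which D exceeds d_crit."""
    rng = rng or random.Random()
    x, running_max, t, critical = 0.0, 0.0, 0.0, 0
    next_inspection = rng.expovariate(1.0 / mean_gap)
    while t < horizon:
        x += mu * dt + sigma * dt ** 0.5 * rng.gauss(0.0, 1.0)
        running_max = max(running_max, x)
        t += dt
        while next_inspection <= t:         # inspect (possibly several times)
            if running_max - x > d_crit:    # drawdown observed in critical area
                critical += 1
            next_inspection += rng.expovariate(1.0 / mean_gap)
    return critical
```

Averaging this count over many paths, for each candidate constant strategy $(\mu,\sigma)$, gives a Monte-Carlo counterpart of the value that the dynamic programming equation characterises exactly.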
2:05 pm - 2:30 pm
On the Bailout Dividend Problem with Periodic Dividend Payments and Fixed Transaction Costs 1HSE University, Russian Federation; 2Centro de Investigación en Matemáticas
We study the optimal bailout dividend problem with transaction costs for an insurance company, where shareholder payouts align with the arrival times of an independent Poisson process. In this scenario, the underlying risk model follows a spectrally negative L\'evy process. Our analysis confirms the optimality of a periodic $(b_{1},b_{2})$-barrier policy with classical reflection at zero. This strategy reduces the surplus to $b_1$ whenever it exceeds $b_{2}$ at the Poisson arrival times and pushes the surplus back to $0$ whenever it becomes negative.
2:30 pm - 2:55 pm
Framework for asset-liability management with liquid and fixed-term assets 1University of Ulm; 2University of Copenhagen
Insurance companies and pension funds have asset-allocation processes that may involve multiple risk management constraints due to liabilities. Furthermore, the investment universe of such institutional investors often contains assets with different levels of liquidity, e.g., liquid stocks and illiquid investments in infrastructure projects or private equity. Therefore, we propose an analytically tractable framework for economic agents who maximize their expected utilities of terminal portfolio value by choosing investment-consumption strategies subject to lower bound constraints on both intermediate consumption and the terminal value of assets, some of which are liquid, while others are fixed-term. For institutional investors such as insurance companies, consumption can be interpreted as payments to policyholders and/or dividends to shareholders.
In our talk, we present the key building blocks of our framework and demonstrate how to derive optimal investment-consumption strategies. At the end of the talk, we consider a numerical study, where we analyze optimal strategies from the economic perspective.
|
1:40 pm - 3:20 pm | S 9 (3): Finance, insurance and risk: Quantitative methods Location: POT 112 Floor plan Session Chair: Nils-Christian Detering Session Chair: Peter Ruckdeschel |
|
1:40 pm - 2:05 pm
Practical Challenges of Interest Rate Model Calibration Fraunhofer-Institut für Techno- und Wirtschaftsmathematik ITWM
This presentation addresses the calibration of financial mathematical capital market models, highlighting the practical challenges involved and focusing on improving the accuracy and consistency of these models.
Using the so-called PIA basic model, a market model for interest rate and share price development that is recognized as the industry standard for capital market simulations in Germany, we examine each step of the standard calibration procedure in detail.
We particularly focus on the calibration of the interest rate model, specifically a two-factor Hull-White model with a perfect fit to the initial market yield curve. We demonstrate how the quality of calibration can be improved by selecting appropriate underlying yield curves and introduce a framework for measuring calibration quality, including metrics such as fit errors and stability over time.
Furthermore, we study the influence of the choice of interest rate derivatives, namely interest rate caps and swaptions, on the calibration process. Given the tendency of the two stochastic factors within the interest rate model to be highly negatively correlated, the necessity and advantages of a two-factor model are critically discussed by comparing the calibration results to those of a one-factor model.
Finally, the market data used for calibration, including yield curves, interest rate derivatives, and forecasts, are thoroughly examined regarding their construction and consistency. Overall, we aim to provide practical insights for improving the robustness of capital market models.
2:05 pm - 2:30 pm
Term structure shapes and their consistent dynamics in the Svensson family TU Dresden, Germany
The Nelson-Siegel and the Svensson family are parametric interpolation families for yield curves and forward curves, which are widely used by national banks and other financial institutions. The Nelson-Siegel family expresses the forward curve as a linear combination of three basis functions, commonly associated to level, slope and curvature in the form
$$\beta_0 + \beta_1\exp\left(-\frac{x}{\tau}\right) + \frac{\beta_2}{\tau}x\exp\left(-\frac{x}{\tau}\right)$$
Svensson argues that this family is not flexible enough to reproduce more complex shapes with multiple humps and dips, as they are frequently encountered in the market, and adds another curvature term with a different time-scale $\tau_2 \neq \tau_1$, resulting in
$$\beta_0 + \beta_1\exp\left(-\frac{x}{\tau_1}\right) + \frac{\beta_2}{\tau_1}x\exp\left(-\frac{x}{\tau_1}\right)+ \frac{\beta_3}{\tau_2}x\exp\left(-\frac{x}{\tau_2}\right)$$
A further interpolation family, due to Bliss, is obtained by setting $\beta_2 = 0$ in the Svensson parametrization. We are interested in the term structure shapes that can be represented by the Svensson family of curves and by its subfamilies (Nelson-Siegel, Bliss). The shape of the term structure is a fundamental economic indicator and it encodes important information on market preferences for short-term vs. long-term investments, on expectations of central bank decisions and on the general economic outlook. Decreasing (inverse) shapes of the term structure, for example, have been identified as signals of economic recessions. On the other hand, complex shapes with multiple local extrema have frequently been observed in both US and Euro area markets.
We provide a complete classification of all attainable shapes and partition the parameter space of each family according to these shapes. Building upon these results, we then examine the consistent dynamic evolution of the Svensson family under absence of arbitrage. Our analysis shows that consistent dynamics further restrict the set of attainable shapes, and we demonstrate that certain complex shapes can no longer appear after a deterministic time horizon. Moreover, a single shape (either inverse or normal) must dominate in the long run.
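To make the parametrizations above concrete, here is a minimal sketch (our own illustration, not from the talk; the function names are ours) evaluating the Svensson forward curve and its Nelson-Siegel sub-family obtained by setting $\beta_3 = 0$:

```python
import math

def svensson_forward(x, b0, b1, b2, b3, tau1, tau2):
    """Svensson instantaneous forward rate at time-to-maturity x:
    level b0, slope b1, and two curvature humps with time scales tau1, tau2."""
    return (b0
            + b1 * math.exp(-x / tau1)
            + b2 * (x / tau1) * math.exp(-x / tau1)
            + b3 * (x / tau2) * math.exp(-x / tau2))

def nelson_siegel_forward(x, b0, b1, b2, tau):
    # Nelson-Siegel is the Svensson sub-family with b3 = 0 (a single hump)
    return svensson_forward(x, b0, b1, b2, 0.0, tau, tau)
```

At $x=0$ the forward rate equals $\beta_0+\beta_1$ (short rate), and for large $x$ both curvature terms decay so the curve converges to the level $\beta_0$; the extra $\beta_3$-hump is what lets Svensson curves show the multiple humps and dips discussed above.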
2:30 pm - 2:55 pm
Efficient simulation and valuation of equity-indexed annuities under a two-factor G2++ model Université de Lausanne, Switzerland
Equity-indexed annuities (EIAs) with investment guarantees are pension products sensitive to changes in the interest rate environment. A flexible and common choice for modelling this risk factor is a Hull-White model in its G2++ variant. We investigate the valuation of EIAs in this model setting and extend the literature by introducing a more efficient framework for Monte-Carlo simulation. In addition, we build on previous work by adapting an approach based on scenario matrices to a two-factor G2++ model. This method does not rely on simulations or on Fourier transformations. In numerical studies, we demonstrate its fast convergence and the limitations of techniques relying on the independence of annual returns and the central limit theorem.
2:55 pm - 3:20 pm
Fast Bayesian calibration of option pricing models based on sequential Monte Carlo methods and deep learning 1University of Pavia; 2University of Vienna; 3University of Freiburg
Model calibration is a complicated yet fundamental task in financial engineering. By exploiting sequential Monte Carlo methods, we turn the non-convex optimization problem into a Bayesian estimation task based on the construction of a sequence of distributions from the prior to the posterior. This allows us to compute any statistic of the estimated parameters, to overcome the strong dependence on the starting point, and to avoid troublesome local minima, all of which are typical plagues of the standard calibration. To highlight the strength of our approach, we consider the calibration of an affine stochastic volatility model with price-volatility co-jumps on both simulated and real implied volatility surfaces and find that our Bayesian approach largely outperforms the standard calibration approach in terms of run-time/accuracy, option pricing errors, and statistical fit. We further accelerate the computations by using Markov Chain Monte Carlo methods with delayed acceptance and a neural network approach to option pricing that exploits the risk-neutral cumulants of the log returns as additional highly informative features.
|
1:40 pm - 3:20 pm | S10 (5): Stochastic optimization and operation research Location: POT 13 Floor plan Session Chair: Nikolaus Schweizer Session Chair: Ralf Werner |
|
1:40 pm - 2:05 pm
Common Noise by Random Measures: Mean-Field Equilibria for Competitive Investment and Hedging Humboldt Universität zu Berlin, Germany, Institut für Mathematik
(paper: https://arxiv.org/abs/2408.01175v1)
We study mean-field games where common noise dynamics are described by integer-valued random measures, for instance Poisson random measures, in addition to Brownian motions. In such a framework, we describe Nash equilibria for mean-field portfolio games of both optimal investment and hedging under relative performance concerns with respect to exponential (CARA) utility preferences. Agents have independent individual risk aversions, competition weights and initial capital endowments, whereas their liabilities are described by contingent claims which can depend on both common and idiosyncratic risk factors. Liabilities may incorporate, e.g., compound Poisson-like jump risks and can only be hedged partially by trading in a common but incomplete financial market, in which prices of risky assets evolve as Itô processes. Mean-field equilibria are fully characterized by solutions to suitable McKean-Vlasov forward-backward SDEs with jumps, for which we prove existence and uniqueness of solutions, without restricting competition weights to be small.
2:05 pm - 2:30 pm
Continuous-time Mean Field Markov Decision Models Karlsruher Institut für Technologie, Germany
For many Markovian decision problems, it is reasonable to consider several statistically identical decision makers operating simultaneously on the same state space and interacting with each other (e.g. maintenance of identical machines in a production site, a population of potentially infected persons). Depending on the model, the state transition and the profit of the individual may depend on the empirical distribution of the decision makers across the states. In the limiting case, as the number $N$ of decision makers tends to infinity, we show that the resulting mean-field model describes a classical deterministic control problem, for which the limit state process is characterized by a controlled ordinary differential equation. We show that an optimal control of the mean-field model yields an asymptotically optimal control for the model with $N$ decision makers. Finally, we discuss some applications.
The corresponding paper is joint work with N. Bäuerle and appeared in Applied Mathematics & Optimization 90, 12 (2024), https://doi.org/10.1007/s00245-024-10154-1.
2:30 pm - 2:55 pm
A mean field search game 1Friedrich-Schiller-Universität Jena, Germany; 2Christian-Albrechts-Universität zu Kiel, Germany
We consider a symmetric search game with the following features: each player chooses a searching area, any player's search can be successful at only one location and successively the reward (yield) at any location is assigned at random to one of the players searching at this location. We derive the mean-field version of the game by letting the number of players converge to infinity. Within the mean-field version of the game we obtain a concise characterization of equilibrium strategies. Based on this we show sufficient conditions on the reward function for the existence and uniqueness of equilibria. We illustrate with an example that the equilibrium may not be Pareto optimal, suggesting that the intervention of a central planner is useful for every player.
2:55 pm - 3:20 pm
Mean-field analysis of a bipartite queueing model for threshold-based mobile edge computing University of Tsukuba, Japan
In this study, we consider a large-scale bipartite queueing model for mobile edge computing (MEC). MEC is a network architecture designed to bring computing power and data storage closer to the data's origin, typically at the network edge or on mobile devices, to enhance processing speed and reduce latency.
Despite the potential of MEC to enable real-time communication, minimize latency, and improve data processing, task offloading is one of the challenging issues in MEC. End users decide whether to offload to edge servers or to process a job locally based on factors such as the interaction among a large number of users, network conditions, and task requirements. It is therefore crucial to find the optimal offloading decision policy for the users. Many related works address these concerns using game-theoretic approaches, reinforcement learning, Lyapunov optimization, and stochastic optimization. Besides, to achieve effective task offloading, load balancing aims to utilize computational resources across the network efficiently. Under certain assumptions, distributing tasks to shorter queues has been shown to result in optimal load balancing. Although both the join-the-shortest-queue (JSQ) policy and the power-of-$d$-choices policy (also referred to as JSQ($d$)) are considered optimal strategies, many studies have overlooked the impact of delays when monitoring queue lengths. Specifically, implementing the JSQ policy requires the dispatcher to have real-time information on the length of each server's queue, which can cause significant communication overhead and hinder scalability in environments with a large number of servers.
To the best of our knowledge, no existing work in the MEC context simultaneously deals with the large number of user interactions, the queueing dynamics and the resource allocation, including the delay in observing queue lengths. We minimize the response time (sojourn time in queueing theory) and identify the optimal offloading decisions by applying mean-field theory, a powerful methodology that simplifies the analysis of complex queueing networks by approximating their behavior when the number of entities (servers or queues) becomes very large. We adopt threshold-based offloading and explore the efficiency of this offloading policy by applying the mean-field technique to a large-scale bipartite queueing model, which is an abstraction of the resource allocation under specific conditions related to graph connectivity between users and edge servers. Numerical experiments show the sojourn time of the tasks and the optimal threshold of the users for several task offloading algorithms, including random allocation, JSQ, and JSQ($d$). The experiments also demonstrate the consistency between the numerical analysis by mean-field theory and the simulation data.
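The JSQ($d$) dispatch rule mentioned above can be sketched as follows (a generic toy illustration, not the authors' bipartite model; names and parameters are ours):

```python
import random

def jsq_d_dispatch(queues, d, rng):
    """Power-of-d-choices (JSQ(d)): probe d queues sampled uniformly at
    random and join the shortest probed queue, avoiding the full
    queue-length poll that plain JSQ would require."""
    candidates = rng.sample(range(len(queues)), d)
    return min(candidates, key=lambda i: queues[i])

def simulate(n_servers=50, d=2, n_steps=5000, p_serve=0.5, rng=None):
    """Toy synchronous dynamics: one JSQ(d) arrival per step, and each
    nonempty queue completes a task with probability p_serve per step.
    Returns the maximum queue length at the end."""
    rng = rng or random.Random()
    queues = [0] * n_servers
    for _ in range(n_steps):
        queues[jsq_d_dispatch(queues, d, rng)] += 1
        for i in range(n_servers):
            if queues[i] > 0 and rng.random() < p_serve:
                queues[i] -= 1
    return max(queues)
```

With $d$ equal to the number of servers, the rule reduces to plain JSQ; with small $d$ only $d$ queue-length probes per arrival are needed, which is exactly the communication-overhead trade-off discussed in the abstract.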
|
1:40 pm - 3:20 pm | S12 (4): Computational, functional and high-dimensional statistics Location: ZEU 260 Floor plan Session Chair: Jan Gertheiss |
|
1:40 pm - 2:05 pm
MULTIPLE CHANGE POINT DETECTION IN FUNCTIONAL DATA WITH APPLICATIONS TO BIOMECHANICAL FATIGUE DATA 1Ruhr Universität Bochum, Germany; 2Universität zu Köln, Germany
Injuries to the lower extremity joints are often debilitating, particularly for professional athletes. Understanding the onset of stressful conditions on these joints is therefore important in order to ensure prevention of injuries as well as individualised training for enhanced athletic performance. We study the biomechanical joint angles from the hip, knee and ankle for runners who are experiencing fatigue. The data is cyclic in nature and densely collected by body worn sensors, which makes it ideal to work with in the functional data analysis (FDA) framework.
We develop a new method for multiple change point detection for functional data, which improves on the state of the art
in at least two respects. First, the curves are compared with respect to their maximum absolute deviation, which leads to a better interpretation of local changes in the functional data compared to classical $L^2$-approaches. Second, as slight aberrations are often to be expected in human movement data, our method does not detect arbitrarily small changes but hunts for relevant changes, where the maximum absolute deviation between the curves exceeds a specified threshold, say $\Delta >0$.
We recover multiple changes in a long functional time series of biomechanical knee angle data, which are larger than the desired threshold $\Delta$, allowing us to identify changes purely due to fatigue. In this work, we analyse data from both controlled indoor as well as from an uncontrolled outdoor (marathon) setting.
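The relevance criterion described above can be illustrated with a minimal sketch (our own illustration, not the authors' implementation; function names are ours) for curves sampled on a common grid:

```python
def max_abs_deviation(curve_a, curve_b):
    """Sup-norm distance between two discretised curves: the largest
    pointwise absolute difference. Unlike an L2 distance, it directly
    localizes where the curves deviate most."""
    return max(abs(a - b) for a, b in zip(curve_a, curve_b))

def relevant_change(curve_a, curve_b, delta):
    # flag a change only when the deviation exceeds the relevance threshold
    return max_abs_deviation(curve_a, curve_b) > delta
```

Small aberrations below the threshold $\Delta$ (here `delta`) are deliberately ignored, so only changes large enough to be attributable to fatigue are flagged.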
2:05 pm - 2:30 pm
Statistical inference for the error distribution in functional linear models Universität Hamburg, Germany
Some recent results on functional linear models with scalar response and functional covariate are presented. In those models it is more challenging to deal with residual-based procedures than in regression models with vector-valued covariates. We present procedures for testing for changes in the error distribution, goodness-of-fit testing, and testing for independence of covariates and errors. We also consider models with vector-valued responses and functional covariates. Here the dependence between the components of the response, given the covariate, can be modeled by the copula of vector-valued errors, and we present the asymptotics of the residual-based empirical copula.
2:30 pm - 2:55 pm
Testing for white noise in multivariate locally stationary functional time series 1Ruhr Universität Bochum, Germany; 2Tsinghua University, Beijing, China
Multivariate locally stationary functional time series provide a flexible framework for modeling complex data structures exhibiting both temporal and spatial dependencies while allowing for a time-varying data generating mechanism. In this study, we introduce a specialized Portmanteau test tailored for assessing white noise assumptions for multivariate locally stationary functional time series without dimension reduction. Our approach is based on a new Gaussian approximation result for the kernel-weighted second-order functional time series, which is of independent interest. A simple bootstrap procedure is proposed to implement the test because the limiting distribution can be non-standard or may even fail to exist. Through theoretical analysis and simulation studies, we demonstrate the efficacy and adaptability of the proposed Portmanteau test in detecting departures from white noise assumptions in multivariate locally stationary functional time series.
2:55 pm - 3:20 pm
A goodness-of-fit test for geometric Brownian motion FH Aachen, Germany
In the functional data setting, we study a new goodness-of-fit test for the composite null hypothesis that the data come from a geometric Brownian motion, or equivalently from a scaled Brownian motion with linear drift. Critical values are easily obtained and ensure that the test keeps the significance level in the finite sample case. New theoretical results investigate the limits of the test statistic as the sample size tends to infinity, under the null hypothesis and under alternatives, and show the consistency of the test. Further advantages of the new approach are a fast and simple implementation and a reduced computational effort. Moreover, in a comprehensive simulation study, the novel test compares favourably against competitors. An obvious application is testing whether the Black-Scholes model applies to a financial time series. For illustration, we provide data examples for different stock and interest rate time series.
|
1:40 pm - 3:20 pm | S13 (8): Nonparametric and asymptotic statistics Location: ZEU 250 Floor plan Session Chair: Alexander Kreiss Session Chair: Leonie Selk |
|
1:40 pm - 2:05 pm
Bootstrap-based Goodness-of-Fit Test for Parametric Families of Conditional Distributions Fachhochschule Aachen
In various scientific fields, researchers are interested in exploring the relationship between some response variable $Y$ and a vector of covariates $X$. In order to make use of a specific model for the dependence structure, it first has to be checked whether the conditional density function of $Y$ given $X$ fits into a given parametric family. We propose a consistent bootstrap-based goodness-of-fit test for this purpose. The test statistic traces the difference between a nonparametric and a semi-parametric estimate of the marginal distribution function of $Y$. As its asymptotic null distribution is not distribution-free, a parametric bootstrap method is used to determine the critical value. A simulation study shows that, in some cases, the new method is more sensitive to deviations from the parametric model than other tests found in the literature. We also apply our method to real-world datasets.
2:05 pm - 2:30 pm
Tests of independence based on correlations TU Dresden, Department of Forest Sciences, Germany
Most statistical tests customarily taught in introductory mathematical courses on probability and statistics can be equivalently expressed, perhaps after a rank transformation, in terms of the test statistic of a classical multivariate test for correlation, which is itself an application of Rao’s score test. This talk draws attention to that apparently little-known fact, discusses existing knowledge and presents proofs in a slightly generalized framework to close some gaps. Tests covered include all F-tests (and t-tests) and their multivariate generalizations based on Pillai’s statistic, Wilcoxon rank-sum and signed rank tests, Kruskal-Wallis tests, chi-squared tests of independence and also particular score tests in binomial and multinomial logit models, among others. Recognizing all of those as variations of the same test of independence of two sets of variables given a third one can be of considerable help in teaching. Indeed, the only differences are in the potential prior elementary transformation of the variables, in the level of conditioning on which the statistical model is presented and in the choice of the approximation of the null distribution of the test statistic.
2:30 pm - 2:55 pm
Goodness-of-fit testing based on graph functionals for homogeneous Erdös-Rényi graphs 1TU Wien, Vienna, Austria; 2TU Dortmund University, Germany
The Erdös‐Rényi graph is a popular choice to model network data as it is parsimoniously parameterized, straightforward to interpret and easy to estimate. However, it has limited suitability in practice, since it often fails to capture crucial characteristics of real-world networks. To check its adequacy, we propose a novel class of goodness-of-fit tests for homogeneous Erdös‐Rényi models against heterogeneous alternatives that permit non-constant edge probabilities. We allow for both asymptotically dense and sparse networks. The tests are based on graph functionals that cover a broad class of network statistics for which we derive limiting distributions in a unified manner. The resulting class of asymptotic tests includes several existing tests as special cases. Further, we propose a parametric bootstrap and prove its consistency, which avoids the often tedious variance estimation for asymptotic tests and enables performance improvements for small network sizes. Moreover, under certain fixed and local alternatives, we provide a power analysis for some popular choices of subgraph counts as goodness-of-fit test statistics. We evaluate the proposed class of tests and illustrate our theoretical findings by simulations.
2:55 pm - 3:20 pm
Bootstrap-based inference for pseudo-value regression models 1Department of Mathematics, Otto von Guericke University Magdeburg, Germany; 2Department of Statistics, TU Dortmund University, Germany; 3Research Center Trustworthy Data Science and Security, Germany; 4Department of Public Health - Department of Biostatistics, Aarhus University, Denmark
Generalized estimating equations (GEE) are a popular method to model the effects of covariates on various estimands, which only rely on the
specification of a functional relationship without the need of restrictive distributional assumptions.
However, if the response variable is not fully observable, e.g. in the case of time-to-event data, the GEE approach is not directly applicable.
Andersen et al. (2003) proposed to replace the partially unobservable response variables by jackknife pseudo-observations, and Overgaard et al. (2017)
showed that the resulting parameter estimates are consistent and asymptotically normal under very general conditions. For further inference about the
parameter vector an estimator of the asymptotic covariance matrix is necessary.
But due to the dependence of the pseudo-observations, the limiting covariance matrix is highly complicated and the usual sandwich estimator seems to be inconsistent (Jacobsen and Martinussen (2016), Overgaard et al. (2018)).
Overgaard et al. (2017) proposed an alternative estimator which incorporates the dependence of the pseudo-observations and performs well in medium to large samples.
These results would in principle allow for the construction of tests of general linear hypotheses about the parameters. So far, however, mainly confidence intervals for individual parameters or simple contrasts, e.g. risk differences, have been considered.

In this talk we aim to bridge this gap by introducing different test statistics for general linear hypotheses in pseudo-value regression models. To improve the small-sample performance of these tests, we discuss different bootstrap methods for pseudo-observations as well as possible extensions to multiple testing problems and simultaneous confidence intervals for contrasts.
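The jackknife pseudo-observations of Andersen et al. (2003) underlying this approach can be sketched as follows (an illustrative reconstruction; the function name and signature are ours):

```python
def pseudo_observations(values, estimator):
    """Jackknife pseudo-observations for a statistic theta_hat = estimator(sample):
    the i-th pseudo-observation is n * theta_hat - (n - 1) * theta_hat_{-i},
    where theta_hat_{-i} is the estimate with observation i left out."""
    n = len(values)
    full = estimator(values)
    return [n * full - (n - 1) * estimator(values[:i] + values[i + 1:])
            for i in range(n)]
```

For a fully observed response and the sample mean, the pseudo-observations recover the observations themselves; with partially observable time-to-event data the estimator would instead be, e.g., a Kaplan-Meier-type functional, and the resulting pseudo-observations replace the unobservable responses in the GEE.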
Acknowledgements: We would like to thank Marc Ditzhaus for his invaluable collaboration and guidance in the early phase of this work. Sadly, he passed away and could not complete this work together with us.
References:
P.K. Andersen, J.P. Klein, S. Rosthøj, Generalised linear models for correlated pseudo-observations, with applications to multi-state models, Biometrika 90 (1) (2003) 15-27.
M. Overgaard, E.T. Parner, J. Pedersen, Asymptotic theory of generalized estimating equations based on jack-knife pseudo-observations, Ann. Statist. 45 (5) (2017) 1988-2015.
M. Jacobsen, T. Martinussen, A note on the large sample properties of estimators based on generalized linear models for correlated pseudo-observations, Scand. J. Stat. 43 (3) (2016) 845-862.
M. Overgaard, E.T. Parner, J. Pedersen, Estimating the variance in a pseudo-observation scheme with competing risks, Scand. J. Stat. 45 (4) (2018) 923-940.
|
2:30 pm - 3:20 pm | S11 Keynote: Time series Location: POT 81 Floor plan Session Chair: Annika Betken Session Chair: Marie Düker |
|
2:30 pm - 3:20 pm
High-Dimensional Dynamic Pricing under Non-Stationarity: Learning and Earning with Change-Point Detection We consider a high-dimensional dynamic pricing problem under non-stationarity, where a firm sells products to $T$ sequentially arriving consumers that behave according to an unknown demand model with potential changes at unknown times. The demand model is assumed to be a high-dimensional generalized linear model (GLM), allowing for a feature vector in $\mathbb{R}^d$ that encodes product and consumer information. To achieve optimal revenue (i.e., least regret), the firm needs to learn and exploit the unknown GLMs while monitoring for potential change-points. To tackle this problem, we first design a novel penalized likelihood-based online change-point detection algorithm for high-dimensional GLMs, which is the first algorithm in the change-point literature to achieve the optimal minimax localization error rate for high-dimensional GLMs. A change-point detection assisted dynamic pricing (CPDP) policy is further proposed and achieves a near-optimal regret of order $O(s\sqrt{\Upsilon_T T}\log(Td))$, where $s$ is the sparsity level and $\Upsilon_T$ is the number of change-points. This regret is accompanied by a minimax lower bound, demonstrating the optimality of CPDP (up to logarithmic factors). In particular, the optimality with respect to $\Upsilon_T$ is seen for the first time in the dynamic pricing literature and is achieved via a novel accelerated exploration mechanism. Extensive simulation experiments and a real data application on online lending illustrate the efficiency of the proposed policy and the importance and practical value of handling non-stationarity in dynamic pricing. |
3:20 pm - 3:50 pm | Coffee Break Location: Foyer Potthoff Bau Floor plan |
3:20 pm - 3:50 pm | Coffee Break Location: POT 168 Floor plan |
3:50 pm - 4:40 pm | S 2 (7): Spatial stochastics, disordered media, and complex networks Location: POT 251 Floor plan Session Chair: Chinmoy Bhattacharjee Session Chair: Benedikt Jahnel |
|
3:50 pm - 4:15 pm
A Gaussian approximation result for weakly dependent random fields using dependency graphs RWTH Aachen University, Institute of Statistics, Germany
Non-stationary random fields under the physical dependence measure are investigated. In particular, the objective is to study the maximum of local averages given an increasing bandwidth under expanding-domain asymptotics. By defining suitable vectors based on the studied random field it becomes possible to use the concept of dependency graphs known from time series analysis. This leads to an approximation result for the maximum of local averages through a Gaussian random field which preserves the covariance structure.
4:15 pm - 4:40 pm
Diffusion Means and their Relation to Intrinsic and Extrinsic Means 1University of Göttingen, Germany; 2University of Copenhagen, Denmark
On manifold data spaces, we introduce a new family of location statistics describing centers of isotropic diffusion for different diffusion times. In contrast to the situation in Euclidean data, these diffusion means on manifolds do not generally coincide for different diffusion times. In the limit of vanishing diffusion time, diffusion means can be shown to converge to the intrinsic mean in general. For diverging diffusion time, we show for the circle and spheres of arbitrary dimension that diffusion means converge to the extrinsic mean in the canonical embedding into Euclidean space. The generalization of this result to real projective spaces leads to a conjecture for all compact symmetric spaces.
|
3:50 pm - 4:40 pm | S 3 (7): Stochastic Analysis and S(P)DEs Location: POT 151 Floor plan Session Chair: Vitalii Konarovskyi Session Chair: Aleksandra Zimmermann |
|
3:50 pm - 4:15 pm
Landau-Lifschitz-Navier-Stokes Equations: Large Deviations and Relationship to the Energy Equality MPI MIS Leipzig & Universität Bielefeld, Germany
The dynamical large deviations principle for the three-dimensional incompressible Landau-Lifschitz-Navier-Stokes equations is shown, in the joint scaling regime of vanishing noise intensity and correlation length. This proves the consistency of the large deviations in lattice gas models [QY98] with Landau-Lifschitz fluctuating hydrodynamics [LL87]. Secondly, we unveil a novel relation between the validity of the energy equality for the deterministic forced Navier-Stokes equations and matching large deviations upper and lower bounds.
Joint work with Daniel Heydecker and Zhengyan Wu.
4:15 pm - 4:40 pm
Asymptotic Exit Problems for a Singular Stochastic Reaction-Diffusion Equation 1Technische Universität Berlin, Germany; 2University of Oxford, United Kingdom
We consider a singular stochastic reaction-diffusion equation with a cubic non-linearity on the 3D torus and study its behaviour as it exits a domain of attraction of an asymptotically stable point. Mirroring the results of Freidlin and Wentzell in the finite-dimensional case, we relate the logarithmic asymptotics of its mean exit time and exit place to the minima of the corresponding (quasi-)potential on the boundary of the domain. The challenge, in our setting, is that the stochastic equation is singular such that its solution only lives in a Hölder–Besov space of distributions. The proof accordingly combines a classical strategy with novel controllability statements as well as continuity and locally uniform large deviation results obtained via the theory of regularity structures.
|
3:50 pm - 4:40 pm | S 4 (8): Limit theorems, large deviations and extremes Location: ZEU 160 Floor plan Session Chair: Jan Nagel Session Chair: Marco Oesting |
|
3:50 pm - 4:15 pm
Percolation and geometry of Cayley graphs University of Münster
We use invariant percolation to understand the geometry of finitely generated groups. More concretely, our aim is to characterize geometric properties of groups by constructing suitably dependent percolation models on their Cayley graphs and examining two competing properties, namely large marginals and decaying connectivity. A quantified relation between these two properties will also characterize large-scale geometric properties of connected, locally finite graphs. Joint work with Konstantin Recke (Münster).
4:15 pm - 4:40 pm
Stein's Method for Networks University of Oxford
A network, or graph, can be viewed through its adjacency matrix, which is a random matrix. We use Stein's density method to derive univariate Stein operators for random graphs. We then give an explicit solution to the resulting Stein equation and use it to derive distributional limit results for certain types of graphs.
|
3:50 pm - 4:40 pm | S 7 (11): Stochastic processes: theory, statistics and numerics Location: POT 51 Floor plan Session Chair: Andreas Neuenkirch Session Chair: Jakob Söhl |
|
3:50 pm - 4:15 pm
Approximating nonlinear dynamics by low-dimensional linear SDEs 1WIAS Berlin, Germany; 2MLU Halle-Wittenberg, Germany
The goal of this presentation is to identify a low-dimensional linear SDE that fits data from a nonlinear stochastic system and can, therefore, reproduce key features of the underlying nonlinear dynamics. Here, we exploit a rough paths perspective on SDEs driven by Brownian motion. The solutions of these equations are continuous functions of the rough path lift corresponding to the driving process. These continuous mappings can be approximated by the (truncated) Stratonovich signature of a Brownian motion using the Universal Approximation Theorem. The truncated Stratonovich signature solves a high-dimensional linear SDE. To reduce the complexity of this problem, we apply dimension reduction techniques, resulting in a reduced-order stochastic linear system.
We introduce the concept of signatures and their application in modeling. Furthermore, we explain the theory behind dimension reduction and present numerical experiments demonstrating the effectiveness of our approach.
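As a hedged aside (a minimal numpy sketch, not the speakers' implementation): for a piecewise-linear data path, the first two levels of the truncated signature reduce to sums of increments and iterated sums, which can be computed directly.

```python
import numpy as np

def truncated_signature(path):
    """First two levels of the signature of a piecewise-linear path.

    path: array of shape (n_points, d).
    Returns (level1, level2), where level1[i] is the total increment in
    coordinate i and level2[i, j] is the iterated integral
    int_{s<t} dx^i_s dx^j_t (Stratonovich convention for linear segments).
    """
    inc = np.diff(path, axis=0)        # segment increments, shape (n-1, d)
    level1 = inc.sum(axis=0)
    d = path.shape[1]
    level2 = np.zeros((d, d))
    running = np.zeros(d)              # increment accumulated before current segment
    for dx in inc:
        # cross terms with all earlier segments, plus the within-segment integral
        level2 += np.outer(running, dx) + 0.5 * np.outer(dx, dx)
        running += dx
    return level1, level2
```

For piecewise-linear lifts this matches the Stratonovich convention; as stated in the abstract, such truncated Stratonovich signatures of Brownian motion solve a high-dimensional linear SDE, which is the starting point for the dimension reduction discussed in the talk.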
4:15 pm - 4:40 pm
Deep Operator BSDE: a Numerical Scheme to Approximate the Solution Operators University of Oslo, Norway
Motivated by dynamic risk measures and conditional $g$-expectations, in this work we propose a numerical method to approximate the solution operator given by a Backward Stochastic Differential Equation (BSDE). The main ingredients for this are the Wiener chaos expansion and the classical Euler scheme for BSDEs. We show convergence of this scheme under very mild assumptions, and provide a rate of convergence in more restrictive cases. We then implement it in practice using neural networks, and provide several numerical examples where we can check the accuracy of the method.
|
3:50 pm - 4:40 pm | S 8 Keynote: Finance, insurance and risk: Modelling Location: POT 81 Floor plan Session Chair: Peter Hieber Session Chair: Frank Seifried |
|
3:50 pm - 4:40 pm
On optimality criteria for dynamic investment and their connections In this presentation, we will explore various optimization criteria for portfolio investment problems and discuss their properties. A pivotal role is played by the risk-sensitive objective, which is linked to Mean-Variance problems and risk measures. We show how to solve multi-stage risk-sensitive problems and discuss the impact of the risk-sensitivity parameter on the optimal policy. These kinds of criteria typically lead to time-inconsistent optimal investment strategies. Using a Mean-Variance example, we explain how this issue can be resolved by using a new interpretation. Finally, we consider problems with risk measures and, if time allows, discuss parameter uncertainty in these problems and propose methods to manage it. The talk is based on joint works with Anna Jaskiewicz, Antje Mahayni, Marcin Pitera, Ulrich Rieder and Lukasz Stettner. |
3:50 pm - 4:40 pm | S 9 (4): Finance, insurance and risk: Quantitative methods Location: POT 112 Floor plan Session Chair: Nils-Christian Detering Session Chair: Peter Ruckdeschel |
|
3:50 pm - 4:15 pm
A network approach to macroprudential buffers London School of Economics and Political Science, United Kingdom
I use network modelling of systemic risk to set macroprudential buffers from an operational perspective. I focus on the countercyclical capital buffer, an instrument designed to protect the banking sector from periods of excessive growth associated with a build-up of system-wide risk. I construct an indicator of financial vulnerability with a model of fire sales, which captures the spillover losses in the system caused by deleveraging and joint liquidation of illiquid assets. Using data on U.S. bank holding companies, I show that the indicator is informative about the build-up of vulnerability and can be useful for setting the countercyclical capital buffer.
4:15 pm - 4:40 pm
Computing Systemic Risk Measures with Graph Neural Networks 1LMU Munich, Germany; 2Imperial College London, United Kingdom
This paper investigates systemic risk measures for stochastic financial networks of explicitly modelled bilateral liabilities. We extend the notion of systemic risk measures from Biagini, Fouque, Frittelli and Meyer-Brandis (2019) to graph structured data. In particular, we focus on an aggregation function that is derived from a market clearing algorithm proposed by Eisenberg and Noe (2001). In this setting, we show the existence of an optimal random allocation that distributes the overall minimal bailout capital and secures the network. We study numerical methods for the approximation of systemic risk and optimal random allocations. We propose to use permutation equivariant architectures of neural networks like graph neural networks (GNNs) and a class that we name (extended) permutation equivariant neural networks ((X)PENNs). We compare their performance to several benchmark allocations. The main feature of GNNs and (X)PENNs is that they are permutation equivariant with respect to the underlying graph data. In numerical experiments we find evidence that these permutation equivariant methods are superior to other approaches.
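Since the aggregation function above is derived from the Eisenberg-Noe market clearing algorithm, a minimal sketch of that clearing fixed-point iteration may help fix ideas (an illustration only, not the paper's GNN method; the liability matrix `L` and asset vector `e` below are hypothetical):

```python
import numpy as np

def clearing_vector(L, e, tol=1e-10, max_iter=10_000):
    """Eisenberg-Noe (2001) clearing payments via fixed-point iteration.

    L[i, j]: nominal liability of bank i to bank j; e[i]: external assets.
    Iterating p -> min(p_bar, e + Pi^T p) from p = p_bar decreases
    monotonically to the greatest clearing payment vector.
    """
    p_bar = L.sum(axis=1)                       # total nominal obligations
    Pi = np.divide(L, p_bar[:, None],
                   out=np.zeros_like(L, dtype=float),
                   where=p_bar[:, None] > 0)    # relative liabilities
    p = p_bar.astype(float).copy()
    for _ in range(max_iter):
        p_new = np.minimum(p_bar, e + Pi.T @ p)
        if np.max(np.abs(p_new - p)) < tol:
            break
        p = p_new
    return p_new
```

In a two-bank example with `L = [[0, 10], [5, 0]]` and external assets `e = [2, 0]`, bank 1 defaults and the iteration settles at the clearing payments `[7, 5]`.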
|
3:50 pm - 4:40 pm | S10 (6): Stochastic optimization and operation research Location: POT 13 Floor plan Session Chair: Nikolaus Schweizer Session Chair: Ralf Werner |
|
3:50 pm - 4:15 pm
Stationary Mean-Field Games of Singular Control under Knightian Uncertainty Bielefeld University, Germany
In this work we study a class of stationary mean-field games (MFGs) of singular stochastic control under Knightian uncertainty. The representative agent adjusts the dynamics of an Itô-diffusion via a one-sided singular stochastic control and faces a long-time-average expected profit criterion. The mean-field interaction is of scalar type and is given through the stationary distribution of the population. Due to the presence of ambiguity, the problem of the representative agent is constructed as a stochastic (zero-sum) game: the decision maker chooses the 'best' policy and the adverse player the 'worst' probability measure. Via a constructive approach, we prove the existence and uniqueness of the stationary mean-field equilibrium. Finally, we provide a case study to illustrate the impact of uncertainty on the mean-field equilibria. This is joint work with Giorgio Ferrari.
4:15 pm - 4:40 pm
Existence of Bayesian Equilibria in Incomplete Information Games Without Common Priors 1Tilburg University, The Netherlands; 2University of British Columbia
This paper examines finite-player incomplete information games where players may hold mutually inconsistent beliefs without requiring a common prior. Within a framework where the action space is compact metrizable and the type space is Polish, players' beliefs are assumed to be absolutely continuous with respect to a product measure. We consider an auxiliary complete information game with a common prior and show that a Bayesian equilibrium exists in the original game if and only if a Nash equilibrium exists in the auxiliary game. This equivalence provides a way to establish the existence of Bayesian equilibria across a broad class of games, including those with action-discontinuous payoffs. Additionally, through this equivalence, established existence results such as Balder (1988) and Carbonell-Nicolau and McLean (2018) can be leveraged to demonstrate equilibrium existence in particular cases. The framework is applied to analyze games with large type spaces that accommodate infinite belief hierarchies, demonstrating the existence of equilibria in such settings.
|
3:50 pm - 4:40 pm | S13 (9): Nonparametric and asymptotic statistics Location: ZEU 250 Floor plan Session Chair: Alexander Kreiss Session Chair: Leonie Selk |
|
3:50 pm - 4:15 pm
The Weak Feature Impact Scenario and its Effects on Monotone Binary Regression Albert-Ludwigs-Universität Freiburg
Nonparametric maximum likelihood estimation in monotone binary regression models is studied when the impact of the features on the labels is weak. To define a notion of weak feature impact and to investigate the statistical behaviour of the nonparametric maximum likelihood estimator (NPMLE) in this context, we introduce a novel mathematical model. We prove consistency of the NPMLE in Hellinger distance, as well as its pointwise, $L^{1}$- and uniform consistency in the introduced weak feature impact model. Moreover, we derive the corresponding consistency rates and limiting distributions. While consistency is shown to be independent of the level of feature impact, we observe a phase transition affecting both convergence rates and limiting distributions, as a result of the level of feature impact.
4:15 pm - 4:40 pm
Approximation by totally positive distributions University of Bern, Switzerland
Maximum likelihood estimation of a totally positive ($TP_2$) density is ill-defined due to the unboundedness of the likelihood function.
An alternative is the consideration of a nonparametric maximum empirical likelihood estimator (NPMeLE) of $TP_2$ distributions. In terms of arbitrary distributions this refers to approximating a distribution with finite support optimally by a $TP_2$ distribution with respect to the Kullback-Leibler divergence.
We study $d$-dimensional $TP_2$ distributions without assuming the existence of (Lebesgue) densities and show for the 2-dimensional case that it is possible to characterise the NPMeLE solely by the bivariate distribution function.
This leads naturally to some conjectures about projections of arbitrary bivariate distributions onto the space of $TP_2$ distributions. To further explore this, we investigate the continuity of the projection with respect to the Kolmogorov or Kuiper distance and the preservation of the $TP_2$ dependence structure for certain probability integral transforms.
Finally, we connect the approach to nonparametric estimation of the conditional distributions of a real response given a real covariate under the assumption that these conditional distributions are non-decreasing with respect to the likelihood ratio order (isotonic distributional regression under likelihood ratio order).
|
3:50 pm - 4:40 pm | S14 (1): History of Probability and Statistics Location: POT 361 Floor plan Session Chair: Hans Fischer Session Chair: Tilman Sauer Session Chair: René Schilling |
|
3:50 pm - 4:15 pm
The first 20 years of Brownian Motion (1905 - 1925) TU Dresden
We review the theoretical progress and the history of the reception of Brownian motion in the first quarter of the 20th century.
4:15 pm - 4:40 pm
The "Bernstein-von-Mises Theorem": historical aspects Independent Scholar
The so-called "Bernstein-von-Mises Theorem" is a statement on the asymptotic normality of the posterior of a distribution parameter under relatively general conditions. It provides a pivotal link between "frequentist" and "Bayesian" statistics.
The history of this theorem starts with Laplace, who published on the binomial case under the assumption of a uniform prior in 1774.
Later on, Laplace generalized his discussion to multinomial distributions and even with respect to means of continuous distributions.
We find several attempts to generalize Laplace's considerations by introducing non-uniform priors during the 19th century.
The first more or less rigorous proof (from the point of view of modern analysis) in the multinomial case is due to von Mises (1919).
Interestingly, Neyman, one of the leading proponents of "frequentist" versus "Bayesian" statistics, proved a very similar assertion in 1929, initially without knowledge of von Mises's work, as he admitted.
The name "Bernstein-von-Mises Theorem" seems to be essentially due to LeCam, who gave a fairly general version of this theorem in 1953, and who repeatedly cited Bernstein's and von Mises's contributions in that paper and in later works. With respect to Bernstein, the reason for this naming remains unclear, however. According to all we know, Bernstein only treated the binomial case, and did not go much beyond Laplace in this respect.
|
4:50 pm - 5:50 pm | Plenary IV Location: POT 81 Floor plan Session Chair: Mathias Trabs |
|
4:50 pm - 5:50 pm
Bayesian estimation in high dimensional Hawkes processes Universite Paris Dauphine & Oxford Univ, France
Multivariate Hawkes processes form a class of point processes describing self- and inter-exciting/inhibiting processes. There is now a renewed interest in such processes in applied domains and in machine learning, but there exists only limited theory about inference in such models, in particular in high dimensions.
To be more precise, the intensity function of a linear Hawkes process has the following form: for each dimension $k \leq K$ and $t \in [0,T]$,
$$\lambda_k(t) = \nu_k + \sum_{l=1}^K\int_{t-A}^{t^-} h_{lk}(t-s) dN^l_s ,$$
where $(N^l , l \leq K)$ is the multivariate Hawkes process and $\nu_k>0$.
There have been some recent theoretical results on Bayesian estimation in the context of linear and nonlinear multivariate Hawkes processes, but these results assumed that the dimension K was fixed. Convergence rates were studied assuming that the observation window T goes to infinity.
In this work we consider the case where K is allowed to go to infinity with T. We consider generic conditions to obtain posterior convergence rates and we derive, under sparsity assumptions, convergence rates in L1 norm and consistent estimation of the graph of interactions.
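As a hedged complement (not part of the talk), the linear Hawkes intensity can be illustrated in one dimension with an exponential kernel h(t) = alpha * exp(-beta * t), simulated by Ogata-style thinning; all parameter values below are hypothetical:

```python
import numpy as np

def simulate_hawkes(nu, alpha, beta, T, rng):
    """Univariate linear Hawkes process with intensity
    lambda(t) = nu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)),
    simulated by Ogata-style thinning; requires alpha / beta < 1 for stability.
    """
    events, t = [], 0.0
    while True:
        # With an exponential kernel the intensity decays between events,
        # so its current value bounds it until the next accepted point.
        lam_bar = nu + alpha * sum(np.exp(-beta * (t - ti)) for ti in events)
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            return np.array(events)
        lam_t = nu + alpha * sum(np.exp(-beta * (t - ti)) for ti in events)
        if rng.uniform() <= lam_t / lam_bar:   # thinning (acceptance) step
            events.append(t)
```

With branching ratio alpha/beta below one the process is stable and the stationary mean intensity is nu / (1 - alpha/beta).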
|
7:00 pm - 10:00 pm | Conference Dinner Location: Carolaschlösschen Floor plan |
Date: Friday, 14/Mar/2025 | |
9:00 am - 10:00 am | Plenary LT: Plenary Lehrkräftetag Location: POT 81 Floor plan Session Chair: Andrea Hoffkamp |
|
9:00 am - 10:00 am
(Angewandte) Statistik – was ist das? HS Bochum, Germany
--
|
10:00 am - 10:30 am | Coffee Break Location: Foyer Potthoff Bau Floor plan |
10:00 am - 10:30 am | Coffee Break Location: POT 168 Floor plan |
10:30 am - 11:20 am | S14 Keynote: History of Probability and Statistics Location: POT 81 Floor plan Session Chair: Hans Fischer Session Chair: Tilman Sauer Session Chair: René Schilling |
|
10:30 am - 11:20 am
Richard von Mises, the forgotten Bayesian The theory of probability as frequency that von Mises devised over a century ago included a construction corresponding to Bayes's rule, which von Mises considered the key tool for statistical inference. In the late 1940s and early 1950s, von Mises was the only "Bayesian" on the editorial board of the Institute of Mathematical Statistics. But today the interpretation of probability as frequency has been conflated with an opposition to reliance on Bayes's rule. Can today's statisticians still learn from von Mises? |
10:30 am - 12:10 pm | S 2 (8): Spatial stochastics, disordered media, and complex networks Location: POT 251 Floor plan Session Chair: Chinmoy Bhattacharjee Session Chair: Benedikt Jahnel |
|
10:30 am - 10:55 am
On the chaos expansion for a Dirichlet process Karlsruhe Institute of Technology
We consider a Dirichlet process $\zeta$ on a general measurable space equipped with a finite measure $\rho$. This is a random probability measure whose finite dimensional distributions are Dirichlet with parameters determined by $\rho$.
It was proven by Peccati (2008) that every square-integrable function of $\zeta$ can be written as an orthogonal series of multiple integrals w.r.t. $\zeta$, where the kernels (i.e. the integrands) are degenerate in a certain sense.
In this talk, we revisit this fundamental chaos expansion by providing an alternative proof via a Mecke-type equation for a Dirichlet process along with explicit formulas for the kernels.
As an application we give a direct proof of the Poincaré inequality, which was derived by Stannat (2000) from the corresponding inequality for the Dirichlet distribution by a suitable approximation.
The talk is based on joint work with Günter Last.
10:55 am - 11:20 am
Transports of Stationary Random Measures: Asymptotic Variance, Hyperuniformity, and Examples 1DLR (German Aerospace Center), Germany; 2KIT (Karlsruhe Institute of Technology), Germany; 3ISI (Indian Statistical Institute), India
In this talk, we explore transports of stationary random measures on $\mathbb{R}^d$. Under a suitable mixing assumption on two different transports of a single random measure we prove that the resulting random measures have the same asymptotic variance. An important consequence is a mixing criterion that ensures the persistence of the asymptotic variance under a transport. We pay special attention to the case of a vanishing asymptotic variance, known as hyperuniformity, which implies a suppression of long-range density fluctuations. Our approach enables us to rigorously establish hyperuniformity for many point processes and random measures that are relevant, among others, to random self-organization and material design. In particular, we construct a perturbation that turns any ergodic point process of finite intensity into a hyperuniform one.
11:20 am - 11:45 am
Hierarchical cubes: Gibbs measures and decay of correlations 1Ludwig-Maximilians-Universität (LMU) Munich, Germany; 2Munich Center for Quantum Science and Technology (MCQST), Germany
We study a hierarchical model of non-overlapping cubes of sidelengths $2^j$, $j \in \mathbb{Z}$. The model allows for cubes of arbitrarily small size and the activities need not be translationally invariant. It can also be recast as a spin system on a tree with a long-range hard-core interaction. We prove necessary and sufficient conditions for the existence and uniqueness of Gibbs measures, discuss fragmentation and condensation, and prove bounds on the decay of two-point correlation functions. (Preprint: arXiv:2406.06249)
11:45 am - 12:10 pm
Lifschitz tail for long-range alloy-type models with Lévy operators 1University of Warsaw; 2Wroclaw Technical University
We work with a class of random Hamiltonians $H^\omega$ on $\mathbb R^d$ whose kinetic part is a Lévy operator and whose independent potential part comes from an alloy-type potential. We study the asymptotic behavior of the integrated density of states (IDS) at the bottom of the spectrum of $H^\omega$.
When the profile function W of the potential has compact support, in [K. Kaleta, K. Pietruska-Paluba, Lifshitz tail for continuous Anderson models driven by Lévy operators, Comm. Contemp. Math. 2020, 2050065] we thoroughly examined the asymptotic behaviour of the IDS at the bottom of the spectrum: we gave a precise rate of its decay, of Lifschitz-tail type. However, the approach relied on the compactness of the support of W.
In the present work we show how to proceed without the compact support assumption. In this case (long-range alloy-type Hamiltonians) we also give precise rates of decay for the IDS at the bottom of the spectrum.
The class of Lévy operators considered here contains fractional Laplacians and relativistic stable Laplacians, and the lattice random variables (those that `amplify' the profile function) are quite general.
|
10:30 am - 12:10 pm | S 3 (8): Stochastic Analysis and S(P)DEs Location: POT 151 Floor plan Session Chair: Vitalii Konarovskyi Session Chair: Aleksandra Zimmermann |
|
10:30 am - 10:55 am
Invariant submanifolds for solutions to rough differential equations Albert Ludwig University of Freiburg
In this talk we provide necessary and sufficient conditions for invariance of finite dimensional submanifolds for rough differential equations (RDEs) with values in a Banach space. Furthermore, we apply our findings to the particular situation of random RDEs driven by Q-Wiener processes and random RDEs driven by Q-fractional Brownian motion.
10:55 am - 11:20 am
Rough backward SDEs of Marcus-type with discontinuous Young drivers Humboldt-Universität zu Berlin, Germany
We show existence and uniqueness of backward differential equations that are jointly driven by Brownian martingales $B$ and a deterministic discontinuous rough path $W$ of $q$-variation for $q \in [1,2)$. Integration of jumps is in the geometric sense, in the spirit of Marcus-type stochastic differential equations. Well-posedness is shown through a direct fixed-point argument. By developing a comparison theorem, we derive an a priori bound on the solution, which helps us obtain a unique global solution of the differential equation. Furthermore, a connection to backward doubly SDEs is established. If time permits, we will further discuss the continuity of the rough backward SDE solution with respect to the terminal condition and the driving rough noise in a Skorokhod-type norm.
This is a joint work with Dirk Becherer (HU Berlin).
11:20 am - 11:45 am
Pathwise convergence of the Euler scheme for rough and stochastic differential equations 1Durham University, United Kingdom; 2University of Mannheim, Germany; 3ShanghaiTech University, China
First and higher order Euler schemes play a central role in the numerical approximation of stochastic differential equations. While the pathwise convergence of higher order Euler schemes can be adequately explained by rough path theory, the first order Euler scheme seems to be outside of its scope, at least at first glance.
In this talk, we show the convergence of the first order Euler scheme for differential equations driven by càdlàg rough paths satisfying a suitable criterion, namely the so-called Property (RIE), along time discretizations with mesh size going to zero. This property is verified for almost all sample paths of various stochastic processes and time discretizations. Consequently, we obtain the pathwise convergence of the first order Euler scheme for stochastic differential equations driven by these stochastic processes.
The talk is based on joint work with A. L. Allan, C. Liu, and D. J. Prömel.
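A minimal sketch of the first order (Euler-Maruyama) scheme for an Itô SDE $dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t$ may help fix ideas (a generic illustration under assumed coefficient functions `b` and `sigma`, not the speakers' code):

```python
import numpy as np

def euler_maruyama(b, sigma, x0, T, n, rng):
    """First order Euler scheme on [0, T] with n uniform steps:
    X_{k+1} = X_k + b(X_k) * dt + sigma(X_k) * dW_k."""
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    dW = rng.normal(0.0, np.sqrt(dt), size=n)   # Brownian increments
    for k in range(n):
        x[k + 1] = x[k] + b(x[k]) * dt + sigma(x[k]) * dW[k]
    return x
```

With `sigma = 0` the scheme reduces to the explicit Euler method for the ODE x' = b(x), which is a quick sanity check of the implementation.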
11:45 am - 12:10 pm
Rough Functional Itô Formula TU Berlin, Germany
We prove a rough Itô formula for path-dependent functionals of $\alpha$-Hölder continuous paths for $\alpha\in(0,1)$. Our approach combines the sewing lemma and a Taylor approximation in terms of path-dependent derivatives.
|
10:30 am - 12:10 pm | S 4 (9): Limit theorems, large deviations and extremes Location: ZEU 160 Floor plan Session Chair: Jan Nagel Session Chair: Marco Oesting |
|
10:30 am - 10:55 am
Asymptotic theory of Schatten classes 1Universität Ulm, Germany; 2Universität Münster, Germany; 3Universität Passau, Germany
The finite-dimensional Schatten-$p$ classes $S_p^{m \times n}$, consisting of real or complex $m \times n$-matrices endowed with the $\ell_p$-norm of their singular values, are a classical object of study in functional analysis and also have proved their usefulness in applications (e.g. compressed sensing). In addition to their functional analytic qualities, the geometric properties of the corresponding unit balls are subject to intense ongoing research, both in the exact and the asymptotic regimes. Yet most results so far have been obtained for the square case, i.e. $m = n$.
In this talk we will present recent findings about the unit balls of Schatten-$p$ classes of not necessarily square matrices. Among those are the exact volume of the $S_\infty^{m \times n}$-unit ball, a Poincaré-Maxwell-Borel principle for the uniform distribution on the $S_\infty^{m \times n}$-unit ball, and a Sanov-type large deviations principle for the empirical measure of the singular values of a random matrix sampled uniformly from the $S_p^{m \times n}$-unit ball, for any $0 < p \leq \infty$.
10:55 am - 11:20 am
Limit theorems for the volume of random projections and sections of $\ell_p^N$-balls 1Ruhr-Universität Bochum, Germany; 2Universität Passau, Germany
Let $\mathbb{B}_p^N$ be the $N$-dimensional unit ball corresponding to the $\ell_p$-norm. For each $N\in \mathbb{N}$ we sample a uniform random subspace $E_N$ of fixed dimension $m\in\mathbb{N}$ and consider the volume of $\mathbb{B}_p^N$ projected onto $E_N$ or intersected with $E_N$; we also consider geometric quantities other than the volume. In this setting we prove central limit theorems, moderate deviation principles, and large deviation principles as $N\to\infty$. Our results provide a complete asymptotic picture, in particular they generalize and complement a result of Paouris, Pivovarov, and Zinn [A central limit theorem for projections of the cube, Probab. Theory Related Fields. 159 (2014), 701-719].
11:20 am - 11:45 am
Concentration inequalities for Poisson $U$-statistics 1University of Groningen; 2University of Münster
I will present the paper with the same title as this talk. In this article we obtain concentration inequalities for Poisson $U$-statistics $F_m(f,\eta)$ of order $m\ge 1$ with kernels $f$ under general assumptions on $f$ and the intensity measure $\gamma \Lambda$ of the underlying Poisson point process $\eta$. The main results are new concentration bounds of the form
$$\mathbb{P}(|F_m ( f , \eta) -\mathbb{E} F_m ( f , \eta)| \ge t)\leq 2\exp(-I(\gamma,t)),$$
where $I(\gamma,t)$ is of optimal order in $t$, namely it satisfies $I(\gamma,t)=\Theta(t^{1\over m}\log t)$ as $t\to\infty$ and $\gamma$ is fixed. The function $I(\gamma,t)$ is given explicitly in terms of parameters of the assumptions satisfied by $f$ and $\Lambda$. One of the key ingredients of the proof is bounding the centred moments of $F_m(f,\eta)$. We discuss the optimality of obtained concentration bounds and consider a number of applications related to Gilbert graphs and Poisson hyperplane processes in constant curvature spaces.
This is a joint work with Anna Gusakova.
11:45 am - 12:10 pm
Large deviation principle for binomial Gibbs processes 1Department of Probability and Mathematical Statistics, Faculty of Mathematics and Physics, Charles University; 2Department of Mathematics, Aarhus University
Gibbs processes in the continuum are one of the most fundamental models in spatial stochastics. They are typically defined using a density with respect to the Poisson point process. In the language of statistical mechanics, this corresponds to the grand-canonical ensemble, where the number of particles is random. Of the same importance is the canonical ensemble, where the number of particles is fixed. In the language of point processes, this corresponds to studying binomial Gibbs processes which are defined using a density with respect to the binomial point process.
In this talk, we present a large deviation theory developed for functionals of binomial Gibbs processes with fixed intensity in increasing windows. Our method relies on the traditional large deviation result from [1] noting that the binomial point process is obtained from the Poisson point process by conditioning on the point number. Our main methodological contribution is the development of coupling constructions allowing us to handle delicate and unlikely pathological events. The presented results cover a broad class of both the interaction function (possibly unbounded) and the functionals (given as a sum of possibly unbounded local score functions).
[1] Georgii, H.-O. and Zessin, H. (1993): Large deviations and the maximum entropy principle for marked point random fields, Probab. Theory Related Fields 96, 177-204.
|
10:30 am - 12:10 pm | S 7 (12): Stochastic processes: theory, statistics and numerics Location: POT 51 Floor plan Session Chair: Vitalii Golomoziy |
|
10:30 am - 10:55 am
Estimates of kernels and ground states for Schrödinger semigroups Wroclaw University of Science and Technology, Poland
We consider the Schrödinger operator of the form $H=-\Delta+V$ acting in $L^2(R^d,dx)$, $d \geq 1$, where the potential $V:R^d \to [0,\infty)$ is a locally bounded function. The corresponding Schrödinger semigroup $\big\{e^{-tH}: t \geq 0\big\}$ consists of integral operators, i.e.
$$
e^{-tH} f(x) = \int_{R^d} u_t(x,y) f(y) dy, \quad f \in L^2(R^d,dx), \ t>0.
$$
I will present new estimates for the heat kernel $u_t(x,y)$. Our results show that the contribution of the potential is described separately for each spatial variable, and the interplay between the spatial variables is seen only through the Gaussian kernel.
These estimates will be illustrated for two common classes of potentials: for confining potentials we obtain two-sided estimates, and for decaying potentials we obtain a new upper estimate.
The methods we use to estimate the semigroup kernel also allow us to easily obtain sharp estimates of the ground state for slowly varying potentials.
The talk is based on joint work with Kamil Kaleta [BK] and my work [B].
[BK] M. Baraniewicz, K. Kaleta, Integral kernels of Schrödinger semigroups with nonnegative locally bounded potentials, Studia Mathematica 275, 2024.
[B] M. Baraniewicz, Estimates of the ground state for the classical Schrödinger operator. To appear, 2024+.
10:55 am - 11:20 am
Progressive intrinsic ultracontractivity and ergodicity properties of discrete Feynman-Kac semigroups and related operators Wrocław University of Science and Technology, Poland
We present results of our investigation of a particular discrete-time counterpart of the Feynman--Kac semigroup with a confining potential on a countably infinite space. We focus on Markov chains with the direct step property, which is satisfied by a wide range of typically considered kernels. In our joint work with Wojciech Cygan, Ren\'e Schilling and Kamil Kaleta, we introduce the concept of progressive intrinsic ultracontractivity (pIUC) and investigate links between the pIUC of Feynman--Kac semigroups, their uniform quasi-ergodicity, and the uniform ergodicity of their intrinsic semigroups. In particular, we study discrete analogues of examples common in the literature and estimate the corresponding rates of convergence.
11:20 am - 11:45 am
Intrinsic ultracontractivity of Feynman-Kac semigroups for cylindrical stable processes Wrocław University of Science and Technology, Poland
The following Schrödinger operator
$$
K = K_0 + V,
$$
where
$$
K_0 = \sqrt{-\frac{\partial^2}{\partial x_1^2}} + \sqrt{-\frac{\partial^2}{\partial x_2^2}}
$$
is an example of a nonlocal, anisotropic, singular Lévy operator. We consider potentials $V : \mathbb{R}^2 \to \mathbb{R}$ such that $V(x)$ goes to infinity as $|x| \to \infty$. The operator $-K_0$ is the generator of a process $X_t = (X_t^{(1)}, X_t^{(2)})$, sometimes called cylindrical, such that $X_t^{(1)}$, $X_t^{(2)}$ are independent symmetric Cauchy processes in $\mathbb{R}$.
We define the Feynman-Kac semigroup
$$
T_t f(x) = E^x \left( \exp \left( -\int_0^t V(X_s) \, ds \right) f(X_t) \right).
$$
Operators $T_t$ are compact for every $t > 0$. There exists an orthonormal basis $\{ \phi_n \}_{n=1}^{\infty}$ in $L^2 (\mathbb{R}^2)$ and a corresponding sequence of eigenvalues $\{\lambda_n \}_{n=1}^{\infty}$, $0<\lambda_1 \leq \lambda_2 \leq \lambda_3 \leq \dots$, $\lim_{n \to \infty} \lambda_n = \infty$ such that $T_t \phi_n = e^{-\lambda_n t} \phi_n$. We can assume that $\phi_1$ is positive and continuous on $\mathbb{R}^2$. The main result I would like to present concerns estimates for $\phi_1$ and intrinsic ultracontractivity of the semigroup $T_t$ under certain conditions on the potential $V$.
11:45 am - 12:15 pm
Kato bounded Harnack inequality for Schrödinger operators on manifolds TU Dresden
On a smooth, geodesically complete, connected (possibly non-compact) Riemannian manifold, we investigate a Harnack inequality for the time-independent Schrödinger operator, provided its potential is in the Kato class and the Ricci curvature is bounded from below by a function in the Kato class. The proof is based on probabilistic Li-Yau type inequalities established for the corresponding diffusion semigroup by martingale methods.
|
10:30 am - 12:10 pm | S 9 (5): Finance, insurance and risk: Quantitative methods Location: POT 112 Floor plan Session Chair: Nils-Christian Detering Session Chair: Peter Ruckdeschel |
|
10:30 am - 10:55 am
Shrinking the Covariance Matrix: A Portfolio Perspective LFIN/LIDAM UCLouvain, Belgium
Estimating the covariance matrix is a central problem in portfolio selection. The foundational shrinkage methodologies developed by Ledoit and Wolf (2004, 2017) suffer from two drawbacks: they are not designed to optimize out-of-sample portfolio performance and do not account for estimation errors in the means. In this paper, we propose a novel shrinkage covariance matrix estimator that addresses these two drawbacks. Specifically, we calibrate the shrinkage intensities in linear and nonlinear shrinkage estimators so that they maximize the expected out-of-sample portfolio performance. We find that this alternative calibration results in higher shrinkage intensities relative to the traditional approach and delivers a superior out-of-sample portfolio performance. Overall, our methodology is a one-step approach that estimates the covariance matrix and the optimal portfolio at the same time, which delivers large economic gains relative to the conventional two-step scheme.
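As an illustration of the shrinkage idea discussed above, the following minimal sketch implements linear shrinkage of the sample covariance toward a scaled-identity target with a user-chosen intensity; the function name and the fixed `delta` are hypothetical, whereas the paper calibrates the intensity to maximize expected out-of-sample portfolio performance.

```python
import numpy as np

def linear_shrinkage(returns, delta):
    """Linearly shrink the sample covariance toward a scaled-identity target.

    Illustrative sketch of a Ledoit-Wolf (2004) style estimator; `delta` is
    a user-supplied shrinkage intensity here, not the calibrated one.
    """
    S = np.cov(returns, rowvar=False)          # sample covariance (p x p)
    p = S.shape[0]
    target = (np.trace(S) / p) * np.eye(p)     # scaled-identity target
    return delta * target + (1.0 - delta) * S  # convex combination

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 10))              # 60 periods, 10 assets
Sigma = linear_shrinkage(X, delta=0.3)
# For delta in (0, 1], the shrunk matrix stays symmetric positive definite.
```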
10:55 am - 11:20 am
Over-confidence and subjective mortality beliefs in pooled annuity funds 1Université de Lausanne, Switzerland; 2University of Ulm, Germany; 3University of St. Gallen, Switzerland
Over-confidence and subjective mortality beliefs in pooled annuity funds 1Université de Lausanne, Switzerland; 2University of Ulm, Germany; 3University of St. Gallen, Switzerland
People have difficulty judging their own life expectancy, both relative to their peers and in absolute terms. Reasons for this include, for example, over-confidence or phenomena like the "grandfather clock" effect. This leads to a subjective judgement of the (relative) attractiveness of different retirement products, as subjective mortality beliefs differ from the life tables used by the insurance provider to set up the retirement contract. We show how to model and calibrate such beliefs and demonstrate their effect on retirement product demand. Among other things, the results partially explain the annuity puzzle, indicating that the level of annuitization differs from the one proposed by economic models like that of Yaari (1965).
11:20 am - 11:45 am
Thin-thick approach to martingale representations on progressively enlarged filtrations Dipartimento di Matematica- Università di Roma “Tor Vergata”, via della Ricerca Scientifica 1, I 00133 Roma, Italy
We study the predictable representation property of the filtration obtained by progressively enlarging a reference filtration $\mathbb{F}$ with a random time $\tau$. Our approach is based on decomposing $\tau$ into thin and thick parts. We prove a representation theorem along the entire time axis without assuming the avoidance condition for $\tau$ and discuss some examples of application in the context of Lévy processes. In particular, we obtain that if $\mathbb{F}$ is the natural filtration of a Brownian motion, the latter and the compensated occurrence process of $\tau$ constitute a basis of the representation when only the immersion condition is assumed.
11:45 am - 12:10 pm
Forecasting Agricultural Financial Risk Using Singular Spectrum Analysis: Case Study of Rainfall and Paddy Rice Crops in Indonesia Department of Mathematics, RPTU Kaiserslautern-Landau
The READI Actuaries Science Applied Research Program in Indonesia is looking for potential research related to risk estimation using climate information along with actuarial assumptions and methods. Monthly accumulated precipitation is one of the climate variables recorded by the Meteorology, Climatology, and Geophysical Agency of Indonesia (BMKG in Bahasa Indonesia). This weather factor can be used to calculate the financial risks of paddy crops in all provinces of Indonesia. The first stage of this calculation is forecasting the precipitation values in 2016-2017 from the data up to 2015 using Singular Spectrum Analysis (SSA), both univariate and multivariate as appropriate, considering the three areas of calculation: province, region, and country. The second step is to convert these rainfall values into payouts based on several linear index insurance models, with 6 million IDR per hectare per planting season as the maximum indemnity. Additional analyses, such as comparisons between areas of calculation, between method combinations within SSA itself, and between various methods for handling missing data, are also included in this research. The major findings so far are the better performance of some SSA method combinations and the identification of the optimal linear index insurance model to reproduce the payouts. Many factors need to be considered when discussing good policies and the benefits of such insurance against agricultural risk.
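A linear index insurance payout of the kind described above can be sketched as follows; the 6 million IDR cap comes from the abstract, while the trigger and exit rainfall thresholds are hypothetical placeholders.

```python
def linear_index_payout(rainfall_mm, trigger=100.0, exit_=50.0,
                        max_indemnity=6_000_000):
    """Payout (IDR per hectare) of a linear rainfall index contract.

    Illustrative sketch only: full payout at or below `exit_`, zero payout
    at or above `trigger`, linear interpolation in between.  The threshold
    values are hypothetical, not taken from the study.
    """
    if rainfall_mm >= trigger:
        return 0.0
    if rainfall_mm <= exit_:
        return float(max_indemnity)
    return max_indemnity * (trigger - rainfall_mm) / (trigger - exit_)
```

For example, with these placeholder thresholds, rainfall halfway between exit and trigger yields half the maximum indemnity.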
|
10:30 am - 12:10 pm | S10 (7): Stochastic optimization and operation research Location: POT 13 Floor plan Session Chair: Nikolaus Schweizer Session Chair: Ralf Werner |
|
10:30 am - 10:55 am
An optimal multibarrier strategy for a singular stochastic control problem with a state-dependent reward 1Universidad de los Andes; 2HSE University; 3Centro de Investigación en Matemáticas
We consider a singular control problem that aims to maximize the expected cumulative reward, where the instantaneous returns depend on the state of a controlled process. The contributions of this paper are twofold. Firstly, we establish sufficient conditions for the optimality of the one-barrier strategy when the uncontrolled process $X$ follows a spectrally negative L\'evy process whose L\'evy measure has a completely monotone density. Secondly, we verify the optimality of the $(2n+1)$-barrier strategy when $X$ is a Brownian motion with drift. Additionally, we provide an algorithm to compute the barrier values in the latter case.
10:55 am - 11:20 am
An Analytic Characterisation of Markov Perfect Equilibria in Stochastic Timing Games 1TU Berlin, Germany; 2TU Munich, Germany
We characterise Markov perfect equilibria in continuous time stochastic timing games in terms of an abstract coupled system of variational inequalities for the corresponding value functions. We provide conditions concerning regularity and construction of equilibria in the case of diffusions. We further investigate a non-trivial one-dimensional case where under relatively mild monotonicity conditions on the data it is possible to give a characterisation of a natural class of Markov perfect equilibria, and compute them numerically. The classical problem of pre-emption and rent equalisation is revisited from this perspective and we show that pre-emptive and war-of-attrition-like behaviour translate into different degrees of regularity of the value function at the stopping boundary.
11:20 am - 11:45 am
Stochastic Modeling and Optimal Control of an Industrial Energy System 1Brandenburg University of Technology Cottbus-Senftenberg; 2German Aerospace Center (DLR), Institute of Low-Carbon Industrial Processes
We consider a power-to-heat energy system providing superheated steam for industrial processes. It consists of a high-temperature heat pump for heat supply, a wind turbine for power generation, a thermal energy storage to store excess heat and a steam generator. If the system's energy demand cannot be covered by electricity from the wind turbine, additional electricity must be purchased from the power grid.
For this system we investigate cost-optimal management, aiming to minimize the cost of electricity from the grid by a suitable combination of wind power and the system's thermal storage. This is a decision-making problem under uncertainty about future prices for electricity from the grid and future wind power generation. The resulting stochastic optimal control problem is treated as a finite-horizon Markov Decision Process (MDP) for a multi-dimensional controlled state process.
We first consider classical backward recursion techniques for solving the associated dynamic programming equation for the value function and computing the optimal decision rule. Since that approach suffers from the curse of dimensionality, we also apply Q-learning techniques that are able to provide a good approximate solution to the MDP within reasonable computational time.
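The backward recursion mentioned above can be sketched on a toy MDP; the state space, rewards, and transitions below are hypothetical, and the paper's energy-system MDP has a multi-dimensional controlled state and is far larger.

```python
import numpy as np

def backward_recursion(T, rewards, transitions):
    """Finite-horizon dynamic programming for a small discrete MDP.

    rewards[s][a] is the stage reward; transitions[s][a] is a probability
    vector over next states.  Returns the value function at time 0 and the
    optimal decision rule for each period.  Minimal illustrative sketch.
    """
    n_states, n_actions = len(rewards), len(rewards[0])
    V = np.zeros(n_states)                      # terminal value function
    policy = []
    for _ in range(T):                          # iterate backwards in time
        Q = np.array([[rewards[s][a] + transitions[s][a] @ V
                       for a in range(n_actions)] for s in range(n_states)])
        policy.append(Q.argmax(axis=1))         # optimal decision rule
        V = Q.max(axis=1)
    return V, policy[::-1]

# Toy example: 2 states, 2 actions, deterministic transitions.
rewards = np.array([[0.0, 1.0], [2.0, 0.0]])
transitions = np.array([[[1.0, 0.0], [0.0, 1.0]],
                        [[1.0, 0.0], [0.0, 1.0]]])
V0, policy = backward_recursion(2, rewards, transitions)
```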
|
10:30 am - 12:10 pm | S11 (5): Time series - Ordinal Pattern and Discrete Time Series Location: POT 06 Floor plan Session Chair: Annika Betken |
|
10:30 am - 10:55 am
Classes of multivariate motion patterns and applications to environmental data 1Siegen University, Germany; 2Wageningen University and Research, The Netherlands; 3Stuttgart University, Germany
The classification of movement in space is one of the key tasks in environmental science. Various geospatial data, such as rainfall and other weather data, data on animal movement, or landslide data, require a quantitative analysis of probable movement in space to obtain information on potential risks, ecological developments, or future changes. Usually, machine-learning tools are applied to this task. Yet machine-learning approaches also have some drawbacks, e.g. the often required large training sets and the fact that the algorithms are often hard to interpret. We propose a classification approach for spatial data based on ordinal patterns. Ordinal patterns have the advantage that they are easily applicable, even to small data sets, are robust in the presence of certain changes in the time series, and deliver interpretable results. They therefore not only offer an alternative to machine learning in the case of small data sets but might also be used in pre-processing for a meaningful feature selection. In this talk, we introduce the basic concept of multivariate ordinal patterns, classify them, and provide the corresponding limit theorem. We focus on the discrete case, that is, on movements on a two-dimensional grid. The approach is applied to rainfall radar data.
10:55 am - 11:20 am
The Symbolic Correlation Integral: Measuring Complexity in Short-range Dependent Time Series. 1University of Siegen, Germany; 2Universidad Politécnica de Cartagena, Spain
Since their introduction by Bandt and Pompe (2002), ordinal patterns have become a popular tool for dynamical systems, statistics and data analysis. As the name may already suggest, ordinal patterns capture the ordinal structure of the underlying data within a moving window. They have many desirable properties like invariance under monotone transformations, robustness with respect to small noise and simplicity in application. In particular, ordinal patterns are able to capture possibly non-linear dependence.
These properties are directly transferred to ordinal pattern based measures, e.g., permutation entropy. Since permutation entropy is defined as the Shannon entropy of the ordinal pattern distribution, it is a natural idea to also consider other variants of complexity measures based on ordinal patterns. Here, we particularly consider a variant based on Renyi-2 entropy, which is strongly related to the symbolic correlation integral recently proposed by Caballero-Pintado et al. (2019).
We derive the limit distribution of the symbolic correlation integral (and hence also of the Renyi-2 entropy) for a broad class of short-range dependent processes, namely 1-approximating functionals, thereby complementing the results of Caballero-Pintado et al. (2019), who only considered the i.i.d. case. Our contributions prove useful in a variety of classification tasks: they allow us to distinguish time series, or data stemming from time series, based on the degree of complexity present in each of them. For example, to a certain extent we are able to distinguish ARMA models that differ only in the choice of their parameters. Furthermore, our approach allows testing whether two time series follow the same underlying model, in the sense of a distinction between, e.g., an AR and an MA model. We present a selection of our simulation study and illustrate the applicability of our test with a real-world data example from the Bonn EEG database.
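As a minimal sketch of the quantities involved (not the authors' exact estimator), the symbolic correlation integral can be computed as the empirical probability that two windows share the same ordinal pattern, with the Renyi-2 entropy obtained as its negative logarithm:

```python
import math
from collections import Counter

def ordinal_patterns(x, d=3):
    """Encode each length-d window of x by its ordinal pattern."""
    return [tuple(sorted(range(d), key=lambda k: x[i + k]))
            for i in range(len(x) - d + 1)]

def symbolic_correlation_integral(x, d=3):
    """Empirical probability that two windows share the same ordinal pattern.

    Equals the sum of squared pattern frequencies, so -log of it is the
    empirical Renyi-2 permutation entropy.  Illustrative sketch only.
    """
    pats = ordinal_patterns(x, d)
    n = len(pats)
    counts = Counter(pats)
    return sum(c * c for c in counts.values()) / (n * n)

x = [4, 7, 9, 10, 6, 11, 3, 5, 8, 2]
sci = symbolic_correlation_integral(x, d=3)
renyi2 = -math.log(sci)        # empirical Renyi-2 permutation entropy
```

Note the invariance under monotone transformations mentioned above: applying, say, $v \mapsto 2v + 1$ to the data leaves every ordinal pattern, and hence the statistic, unchanged.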
11:20 am - 11:45 am
Weighted Discrete ARMA Models for Categorical Time Series Helmut Schmidt University Hamburg, Germany
For real-valued time series, the autoregressive moving-average (ARMA) models are of utmost relevance in practice, not only because of their own modeling abilities, but also because they constitute the core of innumerable further time series models. While ARMA models are not directly applicable to discrete-valued time series, they served as an inspiration for defining several ``ARMA-like'' models for such data. An example is the so-called NDARMA model (new discrete ARMA), which constitutes a universally applicable ARMA-like model for any type of time series data. However, its generated sample paths are characterized by the repeated occurrence of one and the same value (``runs'') being interrupted by sudden jumps. Therefore, the NDARMA model is mainly relevant for nominal time series in practice, as the aforementioned features of the sample paths are hardly observed for real-world ordinal or even quantitative time series.
In this talk, a new and flexible class of ARMA-like models for nominal or ordinal time series is presented, which are characterized by using so-called ``weighting operators'' and are, thus, referred to as weighted discrete ARMA (WDARMA) models. By choosing an appropriate type of weighting operator, one can model, for example, nominal time series with negative serial dependencies, or ordinal time series where transitions to neighbouring states are more likely than sudden large jumps. Essential stochastic properties of WDARMA models are derived, such as the existence of a stationary, ergodic, and $\varphi$-mixing solution as well as closed-form formulae for marginal and bivariate probabilities. Numerical illustrations as well as simulation experiments regarding the finite-sample performance of maximum likelihood estimation are presented. The possible benefits of using an appropriate weighting scheme within the WDARMA class are demonstrated by an application to an ordinal time series on the air quality in Beijing.
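The "runs interrupted by sudden jumps" behaviour of the NDARMA model described above can be seen in its simplest special case, the DAR(1) model; the following sketch (with hypothetical parameters) simulates such a path:

```python
import numpy as np

def simulate_dar1(n, phi, marginal_p, rng):
    """Simulate a DAR(1) path, the simplest NDARMA special case.

    With probability phi the previous value is repeated, otherwise a fresh
    draw from the marginal distribution is taken -- producing the runs
    interrupted by sudden jumps described in the abstract.  Parameters are
    hypothetical; the WDARMA class generalizes this mechanism via
    weighting operators.
    """
    values = np.arange(len(marginal_p))
    x = np.empty(n, dtype=int)
    x[0] = rng.choice(values, p=marginal_p)
    for t in range(1, n):
        if rng.random() < phi:
            x[t] = x[t - 1]                              # extend the run
        else:
            x[t] = rng.choice(values, p=marginal_p)      # sudden jump
    return x

rng = np.random.default_rng(3)
path = simulate_dar1(1000, 0.8, np.array([0.3, 0.3, 0.4]), rng)
# With phi = 0.8, about 87% of consecutive pairs coincide, i.e. long runs.
```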
Reference:
Weiß, C.H. and Swidan, O. (2024). Weighted discrete ARMA models for categorical time series. Journal of Time Series Analysis, in press. DOI: https://doi.org/10.1111/jtsa.12773
11:45 am - 12:10 pm
Predictive inference for discrete-valued time series 1TU Dortmund University, Germany; 2Cyprus Academy of Sciences, Letters and Arts
For discrete-valued time series, predictive inference cannot be implemented through the construction of prediction intervals at some pre-determined coverage level, as is the case for real-valued time series. Although prediction sets, rather than intervals, better respect the discrete nature of the data and appear more natural, they are generally not able to attain a desired coverage level, either in finite samples or asymptotically. To address this general problem of predictive inference for discrete-valued time series, we propose to reverse the construction principle by considering pre-defined sets of interest and estimating the corresponding predictive probability, that is, the probability that a future observation falls in these sets given the observed time series. The accuracy of the corresponding prediction procedure is then evaluated by quantifying the uncertainty associated with the estimation of this predictive probability. For this purpose, parametric and non-parametric approaches are considered, and asymptotic theory for the estimators involved is derived. Since the established limiting distributions are typically cumbersome to apply in practice, we propose suitable bootstrap approaches to evaluate the distributions of the estimators used. These bootstrap approaches also have the advantage of imitating the distributions of interest under different possible settings, including the important case where a misspecified model is used for prediction. Considering such different settings leads to confidence intervals for the predictive probability which properly take into account all sources of uncertainty that affect prediction. We elaborate on bootstrap implementations under different scenarios and focus on the case of INAR and INARCH models. Simulations investigate the finite-sample performance of the methods developed, considering different parametric and non-parametric bootstrap implementations to account for the various sources of randomness and variability.
Applications to real life data sets are also presented.
|
10:30 am - 12:10 pm | S12 (5): Computational, functional and high-dimensional statistics Location: ZEU 260 Floor plan Session Chair: Jan Gertheiss |
|
10:30 am - 10:55 am
A Statistical Method for Anomaly Detection in Multivariate EEG Time Series Ulm University, Germany
We propose a statistical methodology for anomaly detection in nonstationary EEG time series using adaptive sequential hypothesis testing. Wavelet-based time-frequency decompositions capture localized spectral variations, while abrupt deviations are identified using Cumulative Sum (CUSUM) statistics. To control false discoveries in high-dimensional comparisons, we employ an adaptive multiple testing correction that adjusts to data-driven null distributions. This approach ensures statistically rigorous detection of shifts in time-dependent signals while accommodating dynamic changes in EEG data.
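The CUSUM component of the methodology can be sketched as follows; the wavelet decomposition and the adaptive multiple-testing correction are omitted, and the threshold and drift parameters are hypothetical tuning choices:

```python
import numpy as np

def cusum_detect(x, target_mean, threshold, drift=0.0):
    """One-sided CUSUM statistic for detecting an upward mean shift.

    Minimal sketch of the CUSUM idea only: the statistic accumulates
    exceedances of the in-control mean (minus an allowance `drift`) and
    raises an alarm once it crosses `threshold`.  Returns the index of the
    first alarm, or None if no alarm is raised.
    """
    s = 0.0
    for i, xi in enumerate(x):
        s = max(0.0, s + (xi - target_mean - drift))
        if s > threshold:
            return i
    return None

rng = np.random.default_rng(1)
# Simulated channel: mean shift from 0 to 2 at t = 200.
signal = np.concatenate([rng.normal(0, 1, 200), rng.normal(2, 1, 200)])
alarm = cusum_detect(signal, target_mean=0.0, threshold=10.0, drift=0.5)
# With these settings the alarm typically fires shortly after the change.
```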
10:55 am - 11:20 am
Exact Representation for Product of Two Normals via Non-Central Chi-Square Distribution 1European Investment Bank, Luxembourg; 2Kahramanmaras Sutcu Imam University, K.Maras, Turkiye; 3Bakircay University, Izmir, Turkiye
This study presents a novel exact distributional representation for the product of two normal random variables, utilizing the non-central chi-square distribution. It demonstrates that the product of two normal variables exhibits the same distributional properties as the difference of two non-central chi-square random variables, under specific parameterizations. Through rigorous simulations across a variety of scenarios, the accuracy and robustness of this method are validated. The results show that the proposed approach surpasses traditional methods, which often rely on approximations or computationally expensive numerical integration. By offering a more precise and computationally efficient solution, this method is particularly valuable for applications in fields such as finance, econometrics, and risk management, where multiplicative effects are critical. The approach eliminates the need for complex infinite series or special functions, significantly reducing computation time while enhancing accuracy. This breakthrough opens new avenues for both theoretical developments and practical applications, providing a powerful tool for analyzing the product of normal random variables. The paper concludes by exploring potential extensions and implications for future research in various scientific and engineering disciplines.
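The representation can be checked numerically in the simplest unit-variance case, using the algebraic identity $XY = ((X+Y)^2 - (X-Y)^2)/4$, under which $(X+Y)^2/2$ and $(X-Y)^2/2$ are independent non-central chi-square variables with one degree of freedom; the parameter values below are hypothetical, and the paper covers general parameterizations.

```python
import numpy as np

rng = np.random.default_rng(42)
mu1, mu2, n = 1.5, -0.7, 200_000

# Direct samples of the product of two independent unit-variance normals.
prod = rng.normal(mu1, 1.0, n) * rng.normal(mu2, 1.0, n)

# Equivalent representation as half the difference of two independent
# noncentral chi-square(1) variables with noncentralities (mu1+mu2)^2/2
# and (mu1-mu2)^2/2.  Illustrative check for sigma1 = sigma2 = 1 only.
w1 = rng.noncentral_chisquare(1, (mu1 + mu2) ** 2 / 2, n)
w2 = rng.noncentral_chisquare(1, (mu1 - mu2) ** 2 / 2, n)
equiv = (w1 - w2) / 2

# Both samples estimate the same distribution; e.g. both means
# approximate mu1 * mu2.
```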
11:20 am - 11:45 am
On the Stress-Strength Models for the Dagum Distribution under Adaptive Type-II Progressively Hybrid Censoring Qatar University
In this talk, we examine Stress-Strength models where the random variables follow a Dagum distribution, with data subject to an adaptive Type-II progressively hybrid censoring scheme. This flexible censoring approach presents unique challenges for statistical analysis. We derive the maximum likelihood estimators and construct both asymptotic and bootstrap confidence intervals. The proposed procedure is evaluated through extensive Monte Carlo simulations, and a real dataset is analyzed for illustration.
|
10:30 am - 12:10 pm | S13 (10): Nonparametric and asymptotic statistics Location: ZEU 250 Floor plan Session Chair: Alexander Kreiss Session Chair: Leonie Selk |
|
10:30 am - 10:55 am
Bayesian nonparametric estimation and inference using optimal transport divergences TU Dortmund, Germany
By characterizing the posterior distribution as the minimizer of a divergence functional, variational inference allows one to identify the path of steepest descent toward the posterior as the gradient flow with respect to the divergence, and to calculate rates of convergence determined by the convexity of the functionals. While this approach has been successful for parametric families of distributions, we present an extension to the nonparametric setting, based on the construction of a divergence that generalizes the Mahalanobis divergence; an instance of this divergence has been constructed by Hallin using multivariate quantiles based on optimal transport. We use an extension of this idea, the Sinkhorn divergence, which is based on a regularized version of optimal transport and provides better estimates of the rates of convergence, given its stronger convexity as a functional. We present estimation and inference results for this procedure as well as various approximation schemes for different types of distributions.
10:55 am - 11:20 am
Decompounding under general mixing distributions 1University of Duisburg-Essen; 2HSE University, Russian Federation
The present talk is based on joint research with Denis Belomestny and Vladimir Panov, available as a preprint [1], which is devoted to the problem of statistical inference for random sums
\begin{equation}\label{main}
X=\sum\limits_{k=1} ^N \xi_k,
\end{equation}
where $\xi_1,\xi_2,\dots$ is a sequence of i.i.d. random variables, and $N$ is a positive integer-valued random variable independent of $\xi_1,\xi_2,\dots$. A natural problem arising in this setting is that of recovering the distribution of either $N$ or $\xi_1$ based on a sample from $X$, assuming that the law of the other variable ($\xi_1$ or $N$, respectively) is known. The current research considers the second task. In the most popular case of Poisson random sums, which appears when $N$ has the Poisson distribution, this problem is well-known and is typically referred to as decompounding [3]. Nowadays, there exists a great variety of methods for recovering the distribution of summands in the compound Poisson model, including, but not limited to, kernel-type estimators [7], spectral methods [4] and the Bayesian approach [5]. Some results are also available for the case when the law of $N$ is geometric [6]. However, very little research has been devoted to the general case, where $N$ has an arbitrary distribution supported on the non-negative integers.
In this research, we propose an estimator for the distribution of $\xi_1$ without imposing any parametric assumptions on the law of $N$. While the problem of nonparametric inference for the distribution of $\xi_1$ in the random sum model has already been considered by Bøgsted and Pitts [2], it should be noted that their estimation scheme requires the summands to be positive, which is not necessary for our estimation procedure. In addition, the rates of convergence are out of the scope of paper [2], while we put a special emphasis on proving the bounds for the error of the proposed estimator. We demonstrate that for several large classes of distributions the rates of convergence are of polynomial order, and moreover, we show that in certain cases the upper and lower bounds coincide, implying that the proposed estimator is minimax optimal.
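The random-sum model above can be sampled as follows; the uniform law for $N$ and the standard-normal summands are hypothetical choices for illustration (note that the summands may be negative, which the proposed procedure allows):

```python
import numpy as np

def sample_random_sums(rng, n, p_n, xi_sampler):
    """Draw n observations of X = xi_1 + ... + xi_N.

    `p_n` is the (known) probability mass function of N on {0, 1, 2, ...},
    truncated to a finite support for simplicity; `xi_sampler(rng, k)` draws
    k i.i.d. summands.  Purely illustrative of the sampling model, not of
    the estimator proposed in the talk.
    """
    support = np.arange(len(p_n))
    N = rng.choice(support, size=n, p=p_n)
    return np.array([xi_sampler(rng, k).sum() for k in N])

rng = np.random.default_rng(7)
# N uniform on {0,...,4}; summands standard normal, so E[X] = E[N]E[xi] = 0.
x = sample_random_sums(rng, 10_000, np.full(5, 0.2),
                       lambda rng, k: rng.standard_normal(k))
```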
References:
1. Belomestny, D., Morozova, E. and Panov, V. (2024). Decompounding under general mixing distributions. Preprint, arXiv:2405.05419, series "math.ST".
2. Bøgsted, M. and Pitts, S.M. (2010). Decompounding random sums: a nonparametric approach. Annals of the Institute of Statistical Mathematics, 62:855–872.
3. Buchmann, B. and Grübel, R. (2003). Decompounding: an estimation problem for Poisson random sums. The Annals of Statistics, 31(4):1054–1074.
4. Coca, A.J. (2018). Efficient nonparametric inference for discretely observed compound Poisson processes. Probability Theory and Related Fields, 170(1-2):475–523.
5. Gugushvili, S., Mariucci, E. and van der Meulen, F. (2020). Decompounding discrete distributions: A nonparametric Bayesian approach. Scandinavian journal of statistics, 47(2):464–492.
6. Hansen, M. B. and Pitts, S. M. (2006). Nonparametric inference from the M/G/1 workload. Bernoulli, 12(4):737–759.
7. Van Es, B., Gugushvili, S. and Spreij, P. (2007). A kernel type nonparametric density estimator for decompounding. Bernoulli, 13(3):672–694.
11:20 am - 11:45 am
Statistical Optimal Transport and its Entropic Regularization: Compared and Contrasted Georg August University Göttingen, Germany
In recent years, statistical methodology based on optimal transport (OT) witnessed a considerable increase in practical and theoretical interest. A central reason for this trend is the ability of optimal transport to efficiently compare data in a geometrically meaningful way. This development was further amplified by computational advances spurred by the introduction of entropy regularized optimal transport (EOT). In applications, the OT or EOT cost are often estimated through an empirical plug-in approach, raising statistical questions about the performance and uncertainty of these estimators.
The convergence behavior of the empirical OT cost as the sample size increases is dictated by various aspects. Remarkably, for distinct population measures with different intrinsic dimensions, we show that the convergence rate of the empirical OT cost is determined by the population measure with the lower intrinsic dimension -- a novel phenomenon we term "lower complexity adaptation". For the empirical EOT cost, we establish a similar phenomenon and show that the dependency of the convergence rate on the entropy regularization parameter is determined by the minimum intrinsic dimension of the population measures. Concerning the fluctuation of the empirical OT cost around its population counterpart, we show that in settings where one measure is sufficiently low dimensional, the asymptotic fluctuation is given by the supremum of a Gaussian process. This is in strict contrast to the entropy regularized setting, where we establish a central limit theorem with a centered normal asymptotic law. Altogether, this talk will highlight key similarities and differences between empirical OT and EOT, offering comprehensive insights into the strengths and limitations of transport-based methodologies in statistical contexts.
This talk is based on joint work with Thomas Staudt, Marcel Klatt, Michel Groppe, Alberto-Gonzáles-Sanz, and Axel Munk.
11:45 am - 12:10 pm
Distributional convergence of empirical entropic optimal transport and applications University of Göttingen
The statistical properties of entropic optimal transport have recently attracted great interest, as this quantity has been shown to be useful for complex data analysis, among others owing to its fast computability. Prominent applications meanwhile include tasks in machine learning, computer vision, and various subject matters such as econometrics or particle physics. In cell biology, colocalization analysis based on entropic optimal transport (EOT) has been used as a measure for quantifying the spatial proximity of different protein assemblies. Using properties of entropic optimal transport plans, we derive asymptotic weak convergence results for a large class of functionals of the EOT plan, including the colocalization process. The proof is based on Hadamard differentiability and the extended delta method. As applications, we obtain uniform confidence bands for colocalization curves, bootstrap consistency, and a notion of conditional colocalization. Our theory is supported by simulation studies and illustrated by a real-world data analysis of mitochondrial protein colocalization.
|
11:20 am - 12:10 pm | S14 (2): History of Probability and Statistics Location: POT 81 Floor plan Session Chair: Hans Fischer Session Chair: Tilman Sauer Session Chair: René Schilling |
|
11:20 am - 11:45 am
Leibniz on the Problem of Points (1676–78) Leibniz-Forschungsstelle der Akademie der Wissenschaften zu Göttingen und der GWLB Hannover
The famous problem of points deals with a game of chance that is played out in several rounds. In it, the players pay a stake before the game begins and agree that the first player to win $p$ rounds or points will receive the entire stake. The problem now is to divide the stakes ‘fairly’ between the players in case the game, due to force majeure, has to be stopped before one of the players has reached the number of points required for overall victory, with the player in the lead having won $g$ rounds and the other player having won $f$ rounds. For this reason, it is also known as the problem of division of the stakes.
The discussion of the problem by Pierre de Fermat, Blaise Pascal, and Christiaan Huygens in the middle of the 17th century is often apostrophized as the birth of a new mathematical discipline, namely probability theory.
A few years after the aforementioned, Gottfried Wilhelm Leibniz (1646–1716) also dealt with the problem and developed his very own thoughts on it. He dedicated two manuscripts to this question, one in French in 1676 and the other in Latin in 1678. The results of an analysis of these two texts can be summarized in the following six theses:
(1) When Leibniz was dealing with the problem of points, there were already insightful contributions to its discussion he could have known about. But he obviously did not take note of these — especially not of Pascal’s probabilistically correct solution — although it would have been possible for him to do so. Instead, he started his very own reflections.
(2) In 1676 Leibniz formulated — in words, not as a formula — a first partition rule that can be considered ‘fair’ in a broader sense. It proposes a division of the stakes in the ratio of $(p+g-2f) : (p-g)$. As Struve and Struve prove, this rule has a unique feature: Every point a player is ahead is worth the same.
(3) A variant of the rule, formulated differently and less clearly, but with the same meaning, is found in his 1678 manuscript. Most historians of mathematics are only familiar with this later version, though, not with the more lucid earlier one.
(4) Again in 1676, Leibniz formulated a second ‘fair’ rule, based on a different idea. Translated into a formula, it stipulates a division of the stakes in the ratio of $(p-f)^2 : (p-g)^2$. This rule has received almost no attention in the historiography of probability theory — possibly since Leibniz hid it well in subordinate clauses, describing it in opaque words instead of giving a formula for it. More importantly, he dropped the second rule shortly after he had established it, as not even he himself attached further importance to it.
(5) This rule deserves attention nevertheless, as it regularly provides a better approximation of the ratio of winning probabilities than the first rule or any other common, simple rule, and its results are much easier to calculate than the exact solutions, which can be quite helpful.
(6) The assumption that Leibniz worked with a different notion of ‘fair partition’ than Pascal or Fermat cannot be substantiated. It is far more plausible to assume that he actually defined ‘fair’ as ‘according to probabilities’ in the modern sense. This, however, turns the problem of points from a normative to a probabilistic question, which means that Leibniz’s results are not to be characterised as creative normative solutions, but rather as probabilistic solutions that are not entirely correct.
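The two division rules in theses (2) and (4) are easy to compare numerically with the exact Pascal–Fermat solution. The sketch below uses the abstract's notation (first to $p$ points wins; the leader has $g$ points, the other player $f$); the instance $p=5$, $g=3$, $f=1$ is an arbitrary example.

```python
from math import comb

def exact_ratio(p, g, f):
    """Pascal-Fermat division: chances of the leader vs. the trailing player."""
    a, b = p - g, p - f          # points still needed by leader / trailer
    n = a + b - 1                # at most n further (fair) rounds decide the game
    win = sum(comb(n, k) for k in range(a, n + 1))  # leader gets at least a wins
    return win, 2**n - win

def leibniz_rule1(p, g, f):
    """Leibniz's first rule: divide in the ratio (p+g-2f) : (p-g)."""
    return p + g - 2 * f, p - g

def leibniz_rule2(p, g, f):
    """Leibniz's second rule: divide in the ratio (p-f)^2 : (p-g)^2."""
    return (p - f) ** 2, (p - g) ** 2

p, g, f = 5, 3, 1
print(exact_ratio(p, g, f), leibniz_rule1(p, g, f), leibniz_rule2(p, g, f))
```

For this instance the exact division is $26:6 \approx 4.33:1$, the first rule gives $6:2 = 3:1$ and the second gives $16:4 = 4:1$, so the second rule is indeed the better approximation here, consistent with thesis (5).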
11:45 am - 12:10 pm
Some elements for a (pre)history of martingales SORBONNE UNIVERSITY - LPSM
In 2022, Glenn Shafer and I published a collective work on the history of the martingale concept. In the present contribution, I focus on the key moment when the concept gradually stabilized as a mathematical theory, through the work of three mathematicians: Paul Lévy, Joseph Doob and Jean-André Ville. Although the last is obviously the least well known, it was he who, in his thesis defended in 1939, gave the name martingale to a construction inspired by game theory, which Doob would later take up and develop, while Paul Lévy's work on the subject went relatively unnoticed until the 1950s. I give some insight into the astonishing emergence of a concept which, in the second half of the twentieth century, became one of the central concepts of modern probability theory.
|
12:10 pm - 1:40 pm | Lunch |
1:40 pm - 2:30 pm | S 2 (9): Spatial stochastics, disordered media, and complex networks Location: POT 251 Floor plan Session Chair: Chinmoy Bhattacharjee Session Chair: Benedikt Jahnel |
|
1:40 pm - 2:05 pm
Power-like divergence of the diameter of the transition front of the solution to the F-KPP equation in heterogeneous medium 1Johannes Gutenberg-University Mainz, Germany; 2University of Cologne, Germany
There is a duality between the extremum of space-heterogeneous Branching Brownian Motions, in which the branching rates depend on space, and a certain semilinear partial differential equation, the so-called F-KPP equation. Given a space-dependent medium $\xi(x), x\in\mathbb{R}$, the associated PDE reads:
$$u_t = \frac{1}{2} u_{xx} + \xi(x)u(1-u).$$
The solution to this equation exhibits much richer behavior than in the homogeneous case, i.e. $\xi \equiv 1$, depending of course on the choice of the medium $\xi$.
In this talk, we look at the diameter of the so-called transition front, which is the length the solution needs to transition between the stable states $1$ and $0$. In the homogeneous case the diameter is uniformly bounded in time, while in the heterogeneous case we will see that the diameter is generally unbounded and, more specifically, for every $\alpha \in (0,1)$ one can construct a $\xi$ where the diameter grows like $t^\alpha$.
Our proofs are entirely probabilistic and revolve around the duality with Branching Brownian Motions. We will see that for our choice of $\xi$ the key step is to analyze the minimal position of a heterogeneous BBM starting close to zero, where the branching rates are equal to $1$ to the left of $0$ and slightly larger than $2$ to the right of $0$.
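To get a feeling for the transition front, the F-KPP equation above can be integrated with a crude explicit finite-difference scheme. The step medium $\xi$, the grid and the time horizon below are illustrative choices only, not those analyzed in the talk.

```python
import numpy as np

# explicit Euler for u_t = 0.5 u_xx + xi(x) u (1 - u) on [-20, 20];
# stability: D*dt/dx^2 = 0.5*0.004/0.01 = 0.2 < 0.5
dx, dt, T = 0.1, 0.004, 2.0
x = np.arange(-20, 20 + dx, dx)
xi = np.where(x < 0, 1.0, 2.2)          # heterogeneous medium: jump at 0
u = (x < 0).astype(float)               # front initially at the origin
for _ in range(int(round(T / dt))):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    u = u + dt * (0.5 * lap + xi * u * (1 - u))
    u[0], u[-1] = 1.0, 0.0              # pin the stable states at the boundary
# transition front: region where u sits between the two stable states
front = x[(u < 0.95) & (u > 0.05)]
print(front.min(), front.max())
```

Tracking `front.max() - front.min()` over time for different media $\xi$ gives a first numerical impression of the diameter growth discussed in the abstract.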
2:05 pm - 2:30 pm
On Random Simplex Picking Charles University, Czech Republic
New selected values of odd random simplex volumetric moments (moments of the volume of a random simplex picked from a given body) are derived in an exact form in various bodies in dimensions three, four, five and six. In three dimensions, the well known Efron’s formula was used by Buchta & Reitzner and Zinani to deduce the mean volume of a random tetrahedron in a tetrahedron and a cube. However, for higher moments and/or in higher dimensions, the method fails. As it turned out, the same problem is also solvable using Blashke-Petkanchin formula in Cartesian parametrisation in the form of the Canonical Section Integral (Base-height splitting). In our presentation, we show how to derive the older results mentioned above using our base-height splitting method and also touch the essential steps how the method translates to higher dimensions and for higher moments.
|
1:40 pm - 2:30 pm | S 3 (9): is dropped Location: POT 151 Floor plan Session Chair: Vitalii Konarovskyi Session Chair: Aleksandra Zimmermann |
1:40 pm - 2:30 pm | S 4 Keynote: Limit theorems, large deviations and extremes Location: POT 81 Floor plan Session Chair: Jan Nagel Session Chair: Marco Oesting |
|
1:40 pm - 2:30 pm
Normal and α-stable convergence in weight-dependent random connection models Recently, the class of weight-dependent random connection models has received substantial research interest, as such networks are promising candidates for modeling complex networks: they are capable of reproducing heavy-tailed degree distributions while avoiding the tree-like structure of combinatorial random graph models. This motivates the need to derive limit results for standard network functionals such as edge counts, triangle counts or more general subgraph counts. While the theory of normal approximation in spatial random networks and stochastic geometry has seen several breakthroughs in recent years, only some isolated results regarding α-stable convergence have been established. Making use of tools from extreme value theory, in this talk, I will illustrate how such $\alpha$-stable results can be established for different variants of weight-dependent random connection models. If time permits, I will also comment on results in the dynamic case, when the network changes over time. I will conclude by illustrating some of the results through collaboration data from the arXiv network. This talk is based on a series of (partly ongoing) joint works with M. Brun, B. Jahnel, P. Juhász, M. Otto, and T. Owada. |
1:40 pm - 2:30 pm | S 7 (13): Stochastic processes: theory, statistics and numerics Location: POT 51 Floor plan Session Chair: Andreas Neuenkirch Session Chair: Jakob Söhl |
|
1:40 pm - 2:05 pm
Quasi-infinitely divisible distributions Universität Ulm, Germany
A quasi-infinitely divisible distribution is a probability distribution whose characteristic function can be written as the quotient of the characteristic functions of two infinitely divisible distributions. Equivalently, a probability distribution is quasi-infinitely divisible if and only if its characteristic function admits a Lévy–Khintchine representation with a signed Lévy measure. In this talk we give some examples of quasi-infinitely divisible distributions and study some of their properties. The talk is based on joint works with David Berger, Merve Kutlu, Lei Pan and Ken-iti Sato.
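The defining quotient structure is easy to illustrate numerically for compound Poisson laws, where the Lévy measures are finite and the signed Lévy measure of the quotient is simply their difference. The jump sizes and rates below are made up for illustration.

```python
import numpy as np

t = np.linspace(-5, 5, 201)

def cp_charfn(t, jumps, rates):
    """Characteristic function of a compound Poisson law with finite Levy measure."""
    return np.exp(sum(r * (np.exp(1j * t * x) - 1) for x, r in zip(jumps, rates)))

# two infinitely divisible (compound Poisson) characteristic functions ...
phi1 = cp_charfn(t, jumps=[1, 2], rates=[2.0, 0.5])
phi2 = cp_charfn(t, jumps=[2], rates=[0.3])
# ... whose quotient has a Levy-Khintchine form with the SIGNED measure nu1 - nu2
quot = phi1 / phi2
signed = cp_charfn(t, jumps=[1, 2], rates=[2.0, 0.5 - 0.3])
print(np.max(np.abs(quot - signed)))
```

Here the quotient happens to be infinitely divisible itself since all net rates stay nonnegative; genuinely quasi-infinitely divisible examples arise when some net rate is negative, which is exactly where the signed Lévy measure becomes essential.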
2:05 pm - 2:30 pm
Divisibility of probability measures TU Dresden, Germany
The set of infinitely divisible distributions in the space of all probability measures is a well-studied object in stochastics. In this talk we discuss the divisibility property of quasi-infinitely divisible distributions and prove that not every quasi-infinitely divisible distribution is divisible, even in the set of (regular) bounded complex-valued measures. Furthermore, we extend the notion of quasi-infinitely divisible distributions by extending the abelian monoid of probability measures to the Grothendieck group and show that the constructed group is divisible.
|
1:40 pm - 2:30 pm | S10 (8): Stochastic optimization and operation research Location: POT 13 Floor plan Session Chair: Nikolaus Schweizer Session Chair: Ralf Werner |
|
1:40 pm - 2:05 pm
Alternative Transient Solutions for M/G/1 System with Multiple Server Vacations 1Shota Rustaveli Batumi State University, Batumi, Georgia; 2Georgian Technical University, Tbilisi, Georgia
In this paper, we investigate M/G/1 systems with server vacations using the well-established supplementary variable method, combined with purely probabilistic reasoning.
The novel probabilistic approach simultaneously considers the system at two distinct time points: one at the moment of observation during service, and the other at the start of that service. This dual-time analysis provides new insights into the system's behavior, yielding solutions to the problem.
This innovative approach bypasses the need to solve partial differential equations (Kolmogorov forward equations) for the non-classical boundary value problem with non-local boundary conditions. Instead, it directly derives the system's solution using operational calculus.
Acknowledgements
The designated project has been fulfilled with the financial support of the Shota Rustaveli National Science Foundation of Georgia (Grant STEM-22-340).
2:05 pm - 2:30 pm
Optimal sequential sampling for attributive tests at consecutive times 1Physikalisch-Technische Bundesanstalt (PTB), Germany; 2Friedrich Schiller University Jena
Motivated by the surveillance of utility meters in Germany, we assume that every few years, a population of devices must be replaced if reliability for the next few years cannot be demonstrated in a test. To demonstrate reliability, the producer (or the operator) of the devices shall initiate acceptance sampling by attributes at every testing time. Sampling can be sequential. The resulting stopping problem for the producer may depend on previous results. If the current sample does not demonstrate reliability, the producer may or may not continue sampling. If the producer stops sampling, the population must be replaced. With the continuation, the producer hopes to demonstrate reliability with a larger sample. If the current sample demonstrates reliability, the population continues to operate until the next testing time, when a further test may be conducted.
To find an optimal strategy one needs to know the consequences of the decisions. We apply Bayesian analysis to predict the next sample. The analysis utilizes a lifetime model for the devices of the population in order to take samples from all previous testing times into account. Having the probability for the next sample we apply Bellman’s principle to calculate a cost minimizing strategy. We consider simple and easily adaptable cost functions.
Considering the surveillance of water meters, it is reasonable to assume a small number of possible test results and a small maximal number of testing times. This enables the calculation of cost minimizing sampling plans using the Bellman principle. Compared to single sampling, the resulting optimal strategy can decrease the costs for water meter surveillance by almost 30%.
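The backward induction underlying such sampling plans can be sketched on a heavily simplified toy model. All numbers, the constant defect probability and the pass/fail acceptance rule below are invented; the authors' lifetime model and Bayesian prediction of the next sample are not reproduced here.

```python
from math import comb

# toy model: at each of T testing times the producer either replaces the
# population at cost R (ending all future testing costs), or tests n devices
# at cost c_test each; the test "passes" if at most one defective is found,
# and a failed test forces replacement
T, n, R, c_test = 3, 10, 100.0, 1.0
p_defect = 0.05                 # assumed constant defect probability
p_pass = sum(comb(n, k) * p_defect**k * (1 - p_defect)**(n - k) for k in range(2))

V = 0.0                         # no cost remains after the final testing time
for _ in range(T):              # Bellman recursion, backwards in time
    cost_replace = R
    cost_test = c_test * n + (1 - p_pass) * R + p_pass * V
    V = min(cost_replace, cost_test)   # optimal action at this testing time
print(round(V, 2))
```

Even this toy version shows the mechanism: whenever testing is cheap relative to replacement and the pass probability is high, the optimal strategy tests at every stage, and the expected cost stays well below immediate replacement.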
|
1:40 pm - 2:30 pm | S13 (11): Nonparametric and asymptotic statistics Location: ZEU 250 Floor plan Session Chair: Alexander Kreiss Session Chair: Leonie Selk |
|
1:40 pm - 2:05 pm
Nonparametric isotropy test for spatial point processes using random rotations RPTU Kaiserslautern-Landau, Germany
In spatial statistics, point processes are often assumed to be isotropic, meaning that their distribution is invariant under rotations around the origin. Most statistical tests for the null hypothesis of isotropy found in the literature are based either on asymptotics or on Monte Carlo simulation of a parametric null model. However, fitting a parametric null model is challenging in the case of anisotropy, and asymptotic distributions are only available to a limited extent.
In this talk, we present a novel nonparametric and computationally cheap test for the hypothesis of isotropy for stationary point processes. Our test is based on resampling the Fry points, which are the pairwise difference vectors of the observed point pattern. The resampling itself is achieved by using random rotation techniques. We investigate empirical levels and powers of the proposed test in a simulation study for a diverse set of point process models. Anisotropic point process models are hereby obtained either via the geometric anisotropy mechanism or through oriented clusters where the orientation distribution is given by a von Mises–Fisher distribution. Finally, a real data set of amacrine cells is tested for isotropy.
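One simple way to mimic the rotation-resampling idea is to rotate each Fry point by an independent uniform angle, which isotropizes the observed cloud of difference vectors. The pattern, test statistic and rotation scheme below are illustrative simplifications, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def fry_points(pts):
    """Pairwise difference vectors (Fry points) of a planar point pattern."""
    d = pts[:, None, :] - pts[None, :, :]
    return d[~np.eye(len(pts), dtype=bool)]   # drop the zero diagonal

def sector_stat(v, halfwidth=np.pi / 8):
    """Fraction of Fry points whose direction lies near the x-axis."""
    ang = np.abs(np.arctan2(v[:, 1], v[:, 0]))
    return np.mean((ang < halfwidth) | (ang > np.pi - halfwidth))

def isotropize(v, rng):
    """Rotate every Fry point by its own uniform random angle."""
    theta = rng.uniform(0, 2 * np.pi, size=len(v))
    c, s = np.cos(theta), np.sin(theta)
    return np.column_stack([c * v[:, 0] - s * v[:, 1], s * v[:, 0] + c * v[:, 1]])

# anisotropic toy pattern: uniform points stretched along the x-axis
pts = rng.random((100, 2)) * np.array([4.0, 1.0])
fry = fry_points(pts)
obs = sector_stat(fry)
# Monte Carlo null distribution from isotropized resamples
null = [sector_stat(isotropize(fry, rng)) for _ in range(199)]
p_val = (1 + sum(s >= obs for s in null)) / 200
print(obs, p_val)
```

For this strongly stretched pattern the observed sector statistic lies far above the isotropized null distribution, so the Monte Carlo p-value is small and the (artificial) anisotropy is detected.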
|
1:40 pm - 2:30 pm | S14 (3): History of Probability and Statistics Location: POT 361 Floor plan Session Chair: Hans Fischer Session Chair: Tilman Sauer Session Chair: René Schilling |
|
1:40 pm - 2:05 pm
Early Visualizations of Electron Probability Densities Institute of Mathematics, Johannes Gutenberg University Mainz
The Schrödinger equation, introduced in 1926, together with Born's statistical interpretation allows us to calculate the probability of finding an electron in the hydrogen atom or, more generally, the probability distribution of the possible outcomes of a measurement made upon a quantum system in a particular state.
It took a while before graphic representations of these electron densities were produced, which eventually established the notion of an orbital as the new pictorial representation of matter. In 1931, Harvey Elliot White constructed a mechanical apparatus to visualize the wave function of an electron in the hydrogen atom. In 1941, Don DeVault introduced a computational method that allows planar electron densities to be represented by point intensities as cross sections in specific planes. Three years later, William J. Wiswesser published a three-part paper on the periodic table and the structure of atoms, in which he presented a mechanical, two-dimensional model for visualizing electron densities.
In our paper, we replicate and reassess these three early ways of visualizing the probability density of electrons with a view to teaching the atomic model of modern quantum mechanics to students.
|
2:40 pm - 3:40 pm | Plenary V Location: POT 81 Floor plan Session Chair: Anita Behme |
|
2:40 pm - 3:40 pm
Random walks in dynamical random environments
We give an introduction to random walks in static and dynamical random environments. In particular, we treat the example where the dynamical random environment is given by dynamical percolation. We explain some recent results about biased random walk on dynamical percolation. They address the existence and monotonicity of a linear speed as well as the validity of the central limit theorem.
If time permits, we describe some of the tools in the proofs.
The results were obtained in collaborations with Sebastian Andres, Noam Berger, Xiaoqin Guo, Jan Nagel, Dominik Schmid and Perla Sousi.
|
3:40 pm - 4:00 pm | Closing Location: POT 81 Floor plan |
Contact and Legal Notice · Contact Address: Conference: GPSD 2025 |