Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only the sessions held on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

Locations list the building first, followed by the room number.

Click on "Floor plan" for orientation in the buildings and on the campus.

 
Session Overview
Session: S 1 (4): Machine Learning
Time: Wednesday, 12/Mar/2025, 10:30 am - 12:10 pm

Session Chair: Merle Behr
Location: POT 06 (Potthoff Bau) · Floor plan
Session Topics: 1. Machine Learning

Presentations
10:30 am - 10:55 am

Flow matching vs. kernel density estimation

Lea Kunkel, Mathias Trabs

Karlsruher Institut für Technologie, Germany

Recently, Flow Matching, introduced by Lipman et al. (2023), has attracted increasing interest in generative modeling. Generating samples by solving an ODE yields a process that is much simpler than diffusions, the current state-of-the-art generative models. This idea has been developed further, and several adaptations, especially regarding the choice of conditional probability paths, have been presented in the literature. Exploiting the connection to kernel density estimation, we analyze flow matching from a statistical perspective. We derive reasonable conditions for the choice of conditional probability paths. In addition, we study the rate of convergence and compare it to kernel density estimation.
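For orientation, a minimal numerical sketch (not the authors' code) of the KDE connection mentioned in the abstract, assuming the Gaussian conditional probability paths of Lipman et al. (2023); the training data and the value of sigma_min are illustrative:

```python
# Illustrative only: with the Gaussian conditional probability paths
#     p_t(x | x1) = N(x; t * x1, (1 - (1 - sigma_min) * t)^2),
# the marginal path over a training sample {x_i} is exactly a Gaussian
# kernel density estimate of the scaled data with bandwidth
# sigma_t = 1 - (1 - sigma_min) * t -- the link to KDE the abstract exploits.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
train = rng.normal(loc=2.0, scale=1.0, size=500)  # toy 1-d training set
sigma_min = 0.01                                  # illustrative value

def marginal_path_density(x, t):
    """Flow-matching marginal p_t(x) = (1/n) sum_i N(x; t*x_i, sigma_t^2)."""
    sigma_t = 1.0 - (1.0 - sigma_min) * t
    return norm.pdf(x[:, None], loc=t * train[None, :], scale=sigma_t).mean(axis=1)

grid = np.linspace(-2.0, 6.0, 200)
# At t = 1 the marginal path equals a Gaussian KDE of the training data
# with bandwidth sigma_min.
p1 = marginal_path_density(grid, t=1.0)
kde = norm.pdf(grid[:, None], loc=train[None, :], scale=sigma_min).mean(axis=1)
assert np.allclose(p1, kde)
```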


10:55 am - 11:20 am

Detecting the memorizing effect in generative AI

Gero Junike1, Solveig Flaig2, Ralf Werner3

1Carl von Ossietzky Universität Oldenburg; 2Deutsche Rück; 3Universität Augsburg

Generative AI models such as generative adversarial networks or (variational) autoencoders produce new data from a training set. Sometimes the memorizing effect occurs, i.e., the generative AI merely memorizes the training set and does not produce new, unseen samples. We introduce a new memorizing ratio to detect the memorizing effect and prove its convergence. Applications to economics are given. This talk is based on https://arxiv.org/pdf/2301.12719
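The memorizing ratio itself is defined in arXiv:2301.12719; the following is only a hypothetical nearest-neighbour heuristic in the same spirit, illustrating how a generator that replays its training set can be told apart from one producing unseen samples (the threshold and all names are invented for illustration):

```python
# Hypothetical heuristic, NOT the memorizing ratio of arXiv:2301.12719:
# flag a generated sample as "memorized" when it is a near-duplicate of a
# training point, relative to the typical spacing inside the training set.
import numpy as np
from scipy.spatial import cKDTree

def memorizing_score(train, generated):
    """Fraction of generated points lying suspiciously close to the training set."""
    tree = cKDTree(train)
    # distance of each training point to its nearest *other* training point
    d_train, _ = tree.query(train, k=2)
    threshold = 0.1 * np.median(d_train[:, 1])  # heuristic near-duplicate cutoff
    d_gen, _ = tree.query(generated, k=1)
    return float(np.mean(d_gen < threshold))

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 2))
fresh = rng.normal(size=(1000, 2))         # genuinely new samples
replayed = X[rng.integers(0, 1000, 1000)]  # generator replaying its training set
print(memorizing_score(X, fresh))     # close to 0
print(memorizing_score(X, replayed))  # equals 1.0
```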


11:20 am - 11:45 am

Fixed-points of the distributional Bellman operator

Julian Gerstenberg, Ralph Neininger, Denis Spiegel

Goethe University Frankfurt a.M., Germany

In distributional reinforcement learning, complete return distributions of a policy are taken into account, rather than only its expected returns. The return distribution for a fixed policy is given as the fixed point of an associated distributional Bellman operator (DBO). Existence and uniqueness of fixed points of DBOs are discussed, as well as their tail properties. Further, distributional dynamic programming algorithms are presented to approximate the unknown return distributions, together with error bounds in both the Wasserstein and Kolmogorov–Smirnov distances. For return distributions with probability density functions, the algorithms yield approximations of these densities; error bounds are given in the supremum norm. The concept of quantile-spline discretizations is introduced for these algorithms and shows promising results in simulation experiments, also in the presence of heavy tails.
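A minimal sketch (not the paper's quantile-spline algorithm) of iterating the DBO on a toy two-state Markov reward process, with each state's return distribution represented by particles; the transition matrix, rewards, and particle count are illustrative:

```python
# Iterate the distributional Bellman operator (DBO) on a toy two-state
# Markov reward process, representing each return distribution by m
# particles. The DBO contracts in Wasserstein distance, so the iterates
# approach the fixed-point return distributions.
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.9
P = np.array([[0.8, 0.2], [0.3, 0.7]])  # hypothetical transition matrix
R = np.array([1.0, -0.5])               # deterministic reward per state
m = 2000                                # particles per state

eta = np.zeros((2, m))                  # initial return distributions
for _ in range(200):
    new = np.empty_like(eta)
    for s in range(2):
        s_next = rng.choice(2, size=m, p=P[s])
        # G(s) =d R(s) + gamma * G(S'), with G(S') resampled from eta
        new[s] = R[s] + gamma * eta[s_next, rng.integers(0, m, size=m)]
    eta = new

# Sanity check: the means approximately match the classical fixed point
# (I - gamma * P)^{-1} R, up to Monte Carlo error.
v = np.linalg.solve(np.eye(2) - gamma * P, R)
print(eta.mean(axis=1), v)
```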


11:45 am - 12:10 pm

Transport Dependency: Optimal Transport Based Dependency Measures

Thomas Staudt, Thomas Giacomo Nies, Axel Munk

University of Göttingen, Germany

In this talk, we present a framework for measuring statistical dependency, the transport dependency $\tau$, which relies on the notion of optimal transport and is applicable in general Polish spaces. It can be estimated via the corresponding empirical measure and is adaptable to various scenarios by a proper choice of the cost function. Notably, statistical independence is characterized by $\tau = 0$, while large values of $\tau$ indicate highly regular relations between the variables. Based on sharp upper bounds, we derive three distinct dependency coefficients with values in $[0,1]$, each of which emphasizes different functional relations. Monte Carlo results suggest that $\tau$ is a robust quantity that efficiently discerns dependency structure from noise for data sets with complex internal metric geometry. The use of the transport dependency for inferential tasks is illustrated for independence testing on a data set of trees from cancer genetics.
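A minimal sketch of the plug-in estimator via the empirical measure, assuming the third-party POT library (pip install pot); the additive cost $|x - x'| + |y - y'|$ and the sample sizes are illustrative choices, not the talk's:

```python
# tau is the optimal-transport cost between the empirical joint law of
# (X, Y) and the product of its empirical marginals.
import numpy as np
import ot  # Python Optimal Transport (POT)

def transport_dependency(x, y):
    n = len(x)
    # joint support: the n observed pairs; product support: all n*n recombinations
    cost = (np.abs(x[:, None] - x[None, :])[:, :, None]    # d(x_i, x_j)
            + np.abs(y[:, None] - y[None, :])[:, None, :]) # d(y_i, y_k)
    M = cost.reshape(n, n * n)
    a = np.full(n, 1.0 / n)           # weights of the empirical joint
    b = np.full(n * n, 1.0 / n ** 2)  # weights of the empirical product measure
    return ot.emd2(a, b, M)

rng = np.random.default_rng(0)
x = rng.normal(size=100)
print(transport_dependency(x, rng.normal(size=100)))  # independent: small
print(transport_dependency(x, x ** 2))                # functional relation: large
```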


 
Conference: GPSD 2025