Conference Agenda

Overview and details of the sessions of this conference.

Please note that all times are shown in the time zone of the conference.

Session Overview
Session: MS 16b: High order, tensor-structured methods and low rank approximation
Time: Friday, 16/July/2021, 12:00pm - 2:00pm
Session Chairs: Christoph Schwab, Maxim Rakhuba, Carlo Marcati
Virtual location: Zoom 2


Session Abstract

Tensor compression based numerical methods have, in recent years, been successfully employed for the approximation and solution of PDEs in a wide range of scientific domains, providing at least the high order performance of hp and spectral methods. Unlike the latter methods, though, their deployment does not require explicit coding of higher-order discretizations. Instead, tensor compression methods realize a form of high order adaptivity by means of singular value decomposition at runtime in tensor-formatted numerics.

In addition, tensor formatted methods have proven useful both in tackling high dimensional problems (by overcoming the curse of dimensionality) and in reducing complexity while providing highly accurate approximations in low dimensional settings. Indeed, tensor compression techniques work by identifying and exploiting approximate low-rank structures in data. Like spectral and high order methods, they adaptively exploit the regularity of the underlying functions to obtain compact and rapidly converging representations.

The goal of this minisymposium is to bring together specialists in the field of tensor compression methods and to discuss recent advances in their mathematical analysis and computational implementation, with a particular focus on their application to model order reduction at runtime for approximating the solutions of physical problems and of partial differential equations. The minisymposium covers, but is not restricted to, topics such as: analysis and implementation of tensor train and hierarchical tensor decompositions, quantized tensor approximation, space-time tensor decomposition and dynamical low-rank approximation, and the tensor-formatted solution of partial differential equations.
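
As a minimal illustration of the runtime rank adaptivity described above (a NumPy sketch, not code from any of the talks), the following snippet compresses samples of a smooth bivariate function by truncating the SVD at a prescribed tolerance; the retained rank plays the role of the adaptively chosen number of degrees of freedom and stays small for analytic data.

    import numpy as np

    def svd_truncate(A, tol):
        # Keep the smallest rank r such that the discarded singular values
        # carry at most tol * ||A||_F of the Frobenius norm.
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        tail = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1]  # tail[k] = ||s[k:]||_2
        r = max(1, int(np.sum(tail > tol * tail[0])))
        return U[:, :r] * s[:r], Vt[:r], r

    n = 512
    x = np.linspace(0.0, 1.0, n)
    A = np.exp(np.outer(x, x))          # samples of the analytic function e^{xy}
    B, Vt, r = svd_truncate(A, tol=1e-8)
    err = np.linalg.norm(B @ Vt - A) / np.linalg.norm(A)
    print(f"adaptive rank {r} out of {n}; relative error {err:.1e}")

In tensor-formatted numerics the same truncation is applied to unfoldings of multiway arrays after each arithmetic operation, which is what keeps ranks, and hence storage and cost, under control.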


Presentations
12:00pm - 12:30pm

A spectral tensor decomposition algorithm for time-dependent PDEs in Bayesian filtering

Colin Fox2, Sergey Dolgov1, Malcolm E. K. Morrison2, Timothy C. A. Molteno2

1University of Bath, United Kingdom; 2University of Otago, Dunedin, New Zealand

Dynamical data assimilation is complicated by random noise in both the measurements and the underlying dynamical system. An accurate model is given by the Bayesian posterior probability distribution of the state variables of the system. Optimal filtering for a nonlinear dynamical system requires solving the Fokker-Planck equation to propagate the probability density function between measurements, in alternation with Bayes' conditional updating to assimilate each new measurement. A high-dimensional state vector yields a high-dimensional density function, a straightforward discretization of which leads to an exponential increase in computational cost and storage. We circumvent this ‘curse of dimensionality’ by using a tensor train (TT) approximation of density functions and an adaptive ODE solver operating on the TT representation with a pseudospectral discretization. We present numerical examples of tracking a system of weakly coupled pendulums in continuous time to demonstrate filtering with complex density functions in up to 80 dimensions.

https://doi.org/10.1080/17415977.2020.1862109
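
For readers unfamiliar with the TT format, the following self-contained sketch (an illustration, not the authors' implementation) shows the TT-SVD that underlies such representations: a d-dimensional array is factored into a chain of three-way cores by sequential truncated SVDs, here applied to a weakly coupled Gaussian density on a five-dimensional grid.

    import numpy as np

    def tt_svd(T, tol=1e-8):
        # TT-SVD: factor the array T into tensor-train cores of shape
        # (r_{k-1}, n_k, r_k) by sequential truncated SVDs.
        dims, d = T.shape, T.ndim
        delta = tol * np.linalg.norm(T) / np.sqrt(max(d - 1, 1))
        cores, r = [], 1
        M = T.reshape(dims[0], -1)
        for k in range(d - 1):
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            tail = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1]
            rk = max(1, int(np.sum(tail > delta)))
            cores.append(U[:, :rk].reshape(r, dims[k], rk))
            M = (s[:rk, None] * Vt[:rk]).reshape(rk * dims[k + 1], -1)
            r = rk
        cores.append(M.reshape(r, dims[-1], 1))
        return cores

    # Weakly coupled Gaussian density on a 5-D tensor grid, built in full
    # here only for illustration (a practical TT filter never forms it).
    x = np.linspace(-4.0, 4.0, 16)
    G = np.meshgrid(*([x] * 5), indexing="ij")
    P = np.exp(-0.5 * sum(g ** 2 for g in G)
               - 0.3 * sum(G[i] * G[i + 1] for i in range(4)))
    cores = tt_svd(P, tol=1e-6)
    print([c.shape for c in cores])     # TT ranks stay small in every mode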



12:30pm - 1:00pm

Approximation of Functions with Tensor Networks

Mazen Ali

Centrale Nantes, France

We study the approximation of multivariate functions with tensor networks (TNs). The main conclusion of this work is an answer to the following two questions: "What are the approximation capabilities of TNs?" and "What is an appropriate model class of functions that can be approximated with TNs?"

To answer the former, we show that TNs can near-optimally replicate h-uniform and h-adaptive approximation for any smoothness order of the target function. Tensor networks thus exhibit universal expressivity with respect to isotropic, anisotropic and mixed smoothness spaces, comparable with more general neural network families such as deep rectified linear unit (ReLU) networks. Put differently, TNs have the capacity to near-optimally approximate many function classes without being adapted to the particular class in question.

To answer the latter, we consider approximation classes of TNs as a candidate model class and show that these are (quasi-)Banach spaces, that many classical smoothness spaces are continuously embedded into these approximation classes, and that TN approximation classes are themselves not embedded in any classical smoothness space.



1:00pm - 1:30pm

Simplicial tensor refinement in two and three dimensions

Vladimir Kazeev

University of Vienna, Austria

Low-rank tensor refinement based on tensor-product finite-element functions has been rigorously analyzed for certain classes of functions solving elliptic second-order PDEs. These include, in particular, analytic functions, solutions with algebraic corner singularities and highly-oscillatory solutions to multiscale diffusion problems. In these settings, low-rank tensor approximation has been shown to admit exponentially convergent approximations and hence to be comparable in efficiency to such techniques as spectral and hp approximations.

This approximation power of low-rank tensor refinement is due to successive, multilevel approximation in suitable low-dimensional subspaces, which amounts to the construction of low-parametric effective discretizations within extravagantly large «virtual» generic discretizations, the latter of which can even be refined to the level of machine precision if necessary. Computational algorithms exploiting such approximations are data-driven in the sense that the effective discretizations are adapted to the data and are constructed in the course of computation, not being confined by design to any specific bases (such as polynomial or piecewise-polynomial ones). However, in three dimensions, the complexity of solvers based on low-rank tensor refinement becomes prohibitively large when one pursues stability and convergence in the energy norm (for example, by employing rigorous preconditioning techniques).

In this talk, we analyze approximation by low-rank tensor refinement subordinate to simplicial partitions for elliptic second-order linear PDEs in two and three dimensions. Using the corresponding space of piecewise-linear functions as the «virtual» finite-element space, we present a tensor-structured generalized mixed finite-element method, with an optimal preconditioner and an iterative solver for the corresponding discrete problem, whose efficiency is demonstrated in numerical experiments.
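
The role of the extravagantly large «virtual» discretizations can be probed directly in the quantized tensor train (QTT) setting mentioned in the session abstract: a vector of 2^L nodal values is reshaped into an L-way array with binary modes, whose TT ranks are the ranks of the binary unfoldings. A small hypothetical check in NumPy, unrelated to the speaker's software, shows that these ranks remain small for an analytic function even on a grid with 2^20 nodes:

    import numpy as np

    L = 20                                      # virtual uniform grid, 2^20 nodes
    x = np.linspace(0.0, 1.0, 2 ** L, endpoint=False)
    v = np.exp(np.sin(2.0 * np.pi * x))         # analytic grid function
    # The QTT rank at level k is the numerical rank of the binary
    # unfolding of v into a 2^k x 2^(L-k) matrix.
    ranks = []
    for k in range(1, L):
        s = np.linalg.svd(v.reshape(2 ** k, -1), compute_uv=False)
        ranks.append(int(np.sum(s > 1e-10 * s[0])))
    print(max(ranks))   # small maximal rank despite the million-node grid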



1:30pm - 2:00pm

Convergence bounds for empirical nonlinear least-squares and applications to tensor recovery

Martin Eigel1, Reinhold Schneider2, Philipp Trunschke2

1Weierstrass Institute for Applied Analysis and Stochastics; 2Berlin Institute of Technology

We consider best approximation problems in a nonlinear subset of a Banach space of functions. The norm is assumed to be a generalization of the L2-norm, for which only a weighted Monte Carlo estimate can be computed. We establish error bounds for the empirical best approximation error in this general setting and use these bounds to derive a new, sample-efficient algorithm for the model set of low-rank tensors. The viability of this algorithm is demonstrated by recovering quantities of interest for a classical random partial differential equation.
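
A toy version of the weighted empirical estimate follows (a sketch under simplifying assumptions, with a linear polynomial model standing in for the nonlinear set of low-rank tensors treated in the talk): the L2 norm with respect to the uniform density rho on [-1, 1] is estimated from samples drawn from the Chebyshev density mu, reweighted by w = rho/mu, and the empirical best approximation is obtained from a weighted least-squares solve.

    import numpy as np

    rng = np.random.default_rng(0)

    def f(x):                                   # target function on [-1, 1]
        return np.exp(np.sin(np.pi * x))

    # Draw samples from the Chebyshev (arcsine) density mu and reweight by
    # w = rho / mu, where rho = 1/2 is the uniform density on [-1, 1].
    x = np.cos(np.pi * rng.random(200))
    w = 0.5 * np.pi * np.sqrt(1.0 - x ** 2)

    deg = 10
    V = np.polynomial.legendre.legvander(x, deg)        # linear model set
    sw = np.sqrt(w)                                     # weighted least squares
    c, *_ = np.linalg.lstsq(sw[:, None] * V, sw * f(x), rcond=None)

    xg = np.linspace(-1.0, 1.0, 1001)                   # validate on a dense grid
    err = np.abs(np.polynomial.legendre.legvander(xg, deg) @ c - f(xg)).max()
    print(f"max error of the weighted empirical fit: {err:.1e}")

Sampling from a weight-adapted density rather than from rho itself is what keeps the sample count low for a given model set; the talk's results extend this kind of sample-efficiency analysis to nonlinear model sets such as low-rank tensors.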



 