HCM Workshop: Analysis and Computation in High Dimensions
Please note: October 3 is a public national holiday in Germany. Nevertheless there will be talks in the morning, followed by a free afternoon.
Monday, October 1
09:00 - 09:30 | Self-registration |
09:30 - 09:45 | Opening remarks |
09:45 - 10:30 | Endre Süli: High-dimensional McKean-Vlasov diffusion and the well-posedness of the Hookean bead-spring chain model |
10:30 - 11:00 | Coffee break |
11:00 - 11:30 | Harry Yserentant: On the expansion of solutions of Laplace-like equations into traces of separable higher dimensional functions |
11:30 - 12:00 | Peter Binev: Near-Best Piecewise Linear Approximation on Conforming Simplicial Partitions |
12:00 - 14:00 | Lunch break |
14:00 - 14:45 | Ben Adcock: Structured sparsity in high-dimensional approximation: progress and challenges |
14:45 - 15:15 | Aretha Teckentrup: Quasi- and multilevel Monte Carlo methods for Bayesian inverse problems |
15:15 - 15:45 | Coffee break |
15:45 - 16:30 | Wolfgang Dahmen: Reduced Models in High Dimensions |
16:30 - 17:00 | Alireza Doostan: A Multi-Fidelity Low-Rank Approximation Technique for Uncertainty Quantification |
Tuesday, October 2
Wednesday, October 3
09:00 - 09:45 | Ian Hugh Sloan: Unshifted lattice rules - Can lattice rules for high dimensional integration be effective without a shift? |
09:45 - 10:15 | Robert Scheichl: Efficient Sampling from High-dimensional Distributions using Low-rank Tensor Surrogates |
10:15 - 10:45 | Coffee break |
10:45 - 11:30 | Anders Hansen: On computational barriers in mathematics of information and instabilities in deep learning for inverse problems |
11:30 - 12:00 | Reinhold Schneider: Variational Monte Carlo and Hierarchical Tensor Representation |
12:00 - 12:30 | Clayton Webster: Energy norm regularization for sparse simultaneous reconstruction of solutions to parameterized PDEs |
12:30 - 19:00 | Free afternoon |
19:00 | Dinner at Gasthaus Nolden |
Thursday, October 4
09:00 - 09:45 | Christoph Schwab: GPC and DNN Approximations of solution manifolds of parametric and stochastic PDEs |
09:45 - 10:15 | Dinh Dung: ε-dimension in infinite dimensional hyperbolic cross approximation and application to parametric elliptic PDEs |
10:15 - 10:45 | Coffee break |
10:45 - 11:30 | Vladimir Temlyakov: Smooth fixed volume discrepancy, dispersion, and related problems |
11:30 - 12:00 | Tino Ullrich: Numerical performance of optimized Frolov lattices in tensor product reproducing kernel Sobolev spaces |
12:00 - 14:00 | Lunch break |
14:00 - 14:30 | Felix Krahmer: Sparse Harmonic Transforms: A New Class of Sublinear-time Algorithms for Learning Functions of Many Variables |
14:30 - 15:00 | Giovanni Migliorati: Adaptive approximation by optimal weighted least-squares methods |
15:00 - 16:00 | Coffee break / Poster Session / Group Picture |
16:00 - 16:45 | Henryk Wozniakowski: Absolute Value Information |
16:45 - 17:15 | Erich Novak: How good is random information for L2-approximation? |
Friday, October 5
Abstracts
Ben Adcock: Structured sparsity in high-dimensional approximation: progress and challenges
The application of compressed sensing techniques to the approximation of high-dimensional functions has been an area of significant interest over the last five years. While standard compressed sensing is based on the concept of sparsity, problems in high-dimensional approximation are known to possess additional structure. In this talk, I will present an overview of a number of different problems in multivariate approximation in which structured sparsity arises naturally or can be exploited for further benefits. These include: overcoming the curse of dimensionality with lower set structure, joint sparsity for parametric PDEs, dealing with corrupted data, and adaptive sampling. I intend to not only highlight recent results and work in progress, but also pose a series of open problems and challenges.
Peter Binev: Near-Best Piecewise Linear Approximation on Conforming Simplicial Partitions
We develop a coarse-to-fine adaptive approximation scheme using continuous piecewise linear approximation on conforming partitions and prove that it has guaranteed near-best performance. The application of this scheme to the approximation of high-dimensional point clouds utilizes sparse occupancy trees to organize the data in an efficient way and to calculate the functionals that define the local quasi-interpolation operators used in the hierarchy of approximations.
Tucker Carrington: Using iterative eigensolvers with matrices whose elements are obtained from Smolyak quadrature or collocation
I shall present techniques we have developed for solving the vibrational Schroedinger equation. Several of the ideas are of general interest:
1) Writing a Smolyak quadrature without a sum over levels, so that matrix-vector products for a D-dimensional problem can be computed by evaluating only D sequential sums;
2) A general scheme for transforming from values of a function on a sparse grid to coefficients in a pruned basis (polynomial chaos expansion);
3) A general method for pruning bases and grids when using sparse-grid methods;
4) Nested functions and grids in which the difference between the number of points in level $i+1$ and level $i$ is very small (e.g. one);
5) Collocation without a "mass matrix";
6) "Hierarchical" functions built from non-polynomial general bases;
7) Using hierarchical functions to accelerate the calculation of matrix-vector products;
8) Leja points for interpolation or collocation with general basis functions.
Moulay Abdellah Chkifa: On the stability of the Newton formula in multivariate hierarchical polynomial interpolation
The numerical approximation of parametric partial differential equations $\mathcal D(u, y) = 0$ is a computational challenge, in particular when the dimension $d$ of the parameter vector $y = (y_1, \dots, y_d)$ is very large, due to the so-called curse of dimensionality. However, under mild assumptions on the dependence of the differential operator $\mathcal D$ on the parameter $y$, see e.g. [1], the entire family of solutions $\{u(y), y \in U\}$ can be simultaneously approximated using multivariate sparse polynomials in $y$, with convergence rates that are not affected by the dimensionality $d$ of $y$.
We have shown in [2,1] that concrete constructions of such polynomials can be built using a Lagrange interpolation process that follows from the standard Smolyak tensorization of one-dimensional Lagrange operators $I_k$, $k\geq1$, which are hierarchical and associated with the $k$-sections of a given infinite sequence $(r_j)_{j\geq0}$ of mutually distinct numbers. Through an analysis of the Lebesgue constant, we have shown that the stability of the interpolation process is strongly tied to the growth of the Lebesgue constants $\mathbb L_k$ of the operators $I_k$. For instance, cubic growth is ensured when the sequence $(r_j)_{j\geq0}$ is of the $\Re$-Leja type, see [3]. Moreover, we have shown in [2,1] that such growth does not affect the decay rates of the polynomial approximations of parametric PDEs investigated in [1].
The interpolation polynomials are in practice built using the Newton interpolation formula. We present a follow-up work, see [4], that investigates practical aspects of the Newton formulation, for instance:
- Computer precision (overflow/underflow).
- Number of arithmetic operations.
- Condition number of Newton formulation.
The superiority of $\Re$-Leja sequences for this process is reaffirmed. Numerical experiments in large parameter dimensions are presented.
[1] A. Chkifa, A. Cohen and C. Schwab. Breaking the curse of dimensionality in sparse polynomial approximation of parametric PDEs. Journal de Mathématiques Pures et Appliquées 103 (2), 400-428 (2015)
[2] A. Chkifa, A. Cohen, and C. Schwab. High-dimensional adaptive sparse polynomial interpolation and applications to parametric PDEs. Foundations of Computational Mathematics, 2013.
[3] A. Chkifa, A. Cohen. On the Stability of Polynomial Interpolation Using Hierarchical Sampling. In: Pfander G. (eds) Sampling Theory, a Renaissance. Applied and Numerical Harmonic Analysis. Birkhäuser, Cham (2015)
[4] A. Chkifa, On the stability of the Newton formula in multivariate hierarchical polynomial interpolation. In preparation.
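As an editorial illustration (not part of the abstract), a minimal one-dimensional sketch of greedily computed Leja points combined with the Newton formula might look as follows; the grid resolution, number of points and test function are arbitrary choices of ours:

```python
import numpy as np

def leja_points(n, grid=None):
    """Greedy (discrete) Leja points on [-1, 1]: each new point
    maximizes the product of distances to the points chosen so far."""
    if grid is None:
        grid = np.linspace(-1.0, 1.0, 10001)
    pts = [1.0]                      # a common choice of starting point
    for _ in range(n - 1):
        dist = np.ones_like(grid)
        for p in pts:
            dist *= np.abs(grid - p)
        pts.append(float(grid[np.argmax(dist)]))
    return np.array(pts)

def newton_coeffs(x, f):
    """Divided-difference coefficients for the Newton formula."""
    c = np.array(f, dtype=float)
    for j in range(1, len(x)):
        c[j:] = (c[j:] - c[j - 1:-1]) / (x[j:] - x[:-j])
    return c

def newton_eval(x, c, t):
    """Evaluate the Newton form by a Horner-like scheme."""
    val = np.full_like(t, c[-1], dtype=float)
    for k in range(len(c) - 2, -1, -1):
        val = val * (t - x[k]) + c[k]
    return val

# Usage: interpolate exp on 12 Leja points
xs = leja_points(12)
c = newton_coeffs(xs, np.exp(xs))
t = np.linspace(-1.0, 1.0, 257)
err = np.max(np.abs(newton_eval(xs, c, t) - np.exp(t)))
```

The Newton form is hierarchical: appending a 13th point leaves the first 12 coefficients unchanged, which is what makes it attractive for the sequential, hierarchical interpolation processes discussed above.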
Wolfgang Dahmen: Reduced Models in High Dimensions
This talk is concerned with the construction and analysis of reduced models for high dimensional parametric problems for inverse tasks like state or parameter estimation. Specifically, intrinsic obstructions, performance criteria, and possible remedies are discussed.
(Collaborators: M. Bachmayr, P. Binev, A. Cohen, R. DeVore)
Dinh Dung: $\varepsilon$-dimension in infinite dimensional hyperbolic cross approximation and application to parametric elliptic PDEs
We present a cost-benefit analysis of the approximation in tensor products of Hilbert spaces of Sobolev-analytic type. The Sobolev part is defined on a finite-dimensional domain, whereas the analytic part is defined on an infinite-dimensional domain. As the main mathematical tool, we use the $\varepsilon$-dimension in Hilbert spaces, which gives the minimal number of linear functionals needed to approximate any element of the unit ball $W$ of a Hilbert space $Y$ up to an accuracy $\varepsilon>0$ with respect to the norm of a Hilbert space $X$. From a practical point of view this means that we fix an accuracy a priori and ask for the amount of information needed to achieve it. Such an analysis usually requires sharp estimates on the cardinality of certain index sets, which in our case are infinite-dimensional hyperbolic crosses. As the main result, we obtain sharp bounds on the $\varepsilon$-dimension of the Sobolev-analytic-type function classes which depend only on the smoothness differences in the Sobolev spaces and the dimension of the finite-dimensional domain where these spaces are defined. This implies in particular that, up to constants, the cost of the infinite-dimensional (analytic) approximation problem is dominated by the cost of the finite-variate Sobolev approximation problem. We demonstrate this procedure with examples of function spaces stemming from the regularity theory of parametric elliptic PDEs.
Joint work with Michael Griebel, Vu Nhat Huy and Christian Rieger
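For readers unfamiliar with the notion, the $\varepsilon$-dimension can be written as follows (in notation of our own choosing, not necessarily the talk's):

```latex
n_\varepsilon(W)_X := \min\Bigl\{ n \in \mathbb{N} :
  \exists\, L_n \subset X \text{ linear subspace},\ \dim L_n \le n,\
  \sup_{f \in W} \inf_{g \in L_n} \| f - g \|_X \le \varepsilon \Bigr\},
```

i.e., the smallest dimension of a linear space that approximates every element of $W$ to accuracy $\varepsilon$ in the norm of $X$. Fixing $\varepsilon$ and asking for $n_\varepsilon$ is the inverse point of view to fixing $n$ and asking for the best achievable accuracy, which is the Kolmogorov width $d_n(W)_X$.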
Alireza Doostan: A Multi-Fidelity Low-Rank Approximation Technique for Uncertainty Quantification
The use of model reduction has become widespread as a means to reduce computational cost for uncertainty quantification of PDE systems. In this work we present a model reduction technique that exploits the low-rank structure of the solution of interest, when it exists, for fast propagation of high-dimensional uncertainties. To construct this low-rank approximation, the proposed method utilizes models with lower fidelities (hence cheaper to simulate) than the intended high-fidelity model. After obtaining realizations of the lower-fidelity models, a reduced basis and an associated approximation rule are identified and applied to a small set of high-fidelity realizations to obtain this low-rank, bi-fidelity approximation. In addition to the construction of this bi-fidelity approximation, we present convergence analysis and numerical results showcasing its efficacy on high-dimensional problems.
This is a joint work with Hillary Fairbanks (CU Boulder), Jerrad Hampton (CU Boulder), and Akil Narayan (U of Utah).
Lars Grasedyck: Stable low-rank and off-grid approximate interpolation in tree tensor formats
Wolfgang Hackbusch: On (Non-)Closedness of Tensor Formats
Tensors can be represented in various ways by other parameters. Using a limited number of parameters in a certain format, one describes a subset of tensors. In the case of the classical CP format it is well known that this subset is not closed, which leads to unfavourable numerical properties; we discuss various consequences. On the other hand, all tree-based formats are closed. This includes the Tucker representation, the hierarchical format and the TT scheme. The generalisation of tree-based representations is tensor networks based on graphs containing cycles. In this case one must again expect non-closedness. Prototypes of such formats are cyclic TT representations, called cyclic matrix-product states in physics. A subset are the site-independent cyclic matrix-product states. We prove explicitly that, in most cases, these formats are not closed.
We show that the results depend on the dimensions and the underlying field.
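The non-closedness of the low-rank CP set can be seen in a few lines. The following sketch, an editorial illustration using the textbook rank-3 example rather than anything from the talk, produces a sequence of rank-2 tensors converging to a rank-3 tensor, so that tensor has no best rank-2 approximation:

```python
import numpy as np

def outer3(a, b, c):
    """Rank-one 3-tensor a (x) b (x) c."""
    return np.einsum('i,j,k->ijk', a, b, c)

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])

# T has CP rank 3 ...
T = outer3(x, x, y) + outer3(x, y, x) + outer3(y, x, x)

# ... but each T_eps below is a sum of only TWO rank-one terms,
# and T_eps -> T as eps -> 0 (the error is O(eps)).
errors = []
for eps in [1e-1, 1e-3, 1e-5]:
    z = x + eps * y
    T_eps = (outer3(z, z, z) - outer3(x, x, x)) / eps
    errors.append(np.linalg.norm(T - T_eps))
```

Note how the two rank-one terms in `T_eps` individually blow up like $1/\varepsilon$ while their sum stays bounded; this cancellation is precisely the source of the unfavourable numerical properties mentioned above.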
Anders Hansen: On computational barriers in mathematics of information and instabilities in deep learning for inverse problems
Modern mathematics of information relies heavily on computing minimisers of optimisation problems such as linear programming, constrained and unconstrained Lasso, Basis Pursuit etc. We will discuss the following potentially surprising issue. When given irrational inputs or using floating-point arithmetic, the following phenomenon occurs. For any of the mentioned optimisation problems, and given any integer $K > 2$, there exists a class of well-conditioned inputs such that: (1) there does not exist any algorithm that can produce a minimiser with $K$ correct digits; (2) there does exist an algorithm that can produce $K-1$ correct digits, but any such algorithm will use arbitrarily long time; (3) moreover, the problem of computing $K-2$ digits is in $P$, that is, there exists an algorithm providing a solution with $K-2$ correct digits in runtime polynomial in the number of variables.
A seemingly unrelated problem is the phenomenon of instabilities in deep learning. It is a well-documented issue that deep learning for classification problems becomes unstable. We will discuss how deep learning for inverse problems in general also becomes unstable. Paradoxically, despite the mentioned instabilities, it is possible to produce neural networks that are completely stable for Magnetic Resonance Imaging (MRI); however, they are adaptive, meaning that the number of layers depends on the input. We will show how such neural networks require no training (no deep learning), have recovery guarantees, and that there exist efficient algorithms to compute them. The existence of such networks is intimately related to the $(K, K-1, K-2)$-phenomenon described above regarding the existence of algorithms for optimisation problems in modern mathematics of information.
Arnulf Jentzen: Overcoming the curse of dimensionality in the numerical approximation of semilinear parabolic partial differential equations
Partial differential equations (PDEs) are among the most universal tools used in modelling problems in nature and man-made complex systems. For example, stochastic PDEs are a fundamental ingredient in models for nonlinear filtering problems in chemical engineering and weather forecasting, deterministic Schroedinger PDEs describe the wave function in a quantum physical system, deterministic Hamilton-Jacobi-Bellman PDEs are employed in operations research to describe optimal control problems where companies aim to minimise their costs, and deterministic Black-Scholes-type PDEs are widely employed in portfolio optimization models as well as in state-of-the-art pricing and hedging models for financial derivatives. The PDEs appearing in such models are often high-dimensional, as the number of dimensions, roughly speaking, corresponds to the number of all involved interacting substances, particles, resources, agents, or assets in the model. For instance, in the case of the above-mentioned financial engineering models the dimensionality of the PDE often corresponds to the number of financial assets in the involved hedging portfolio. Such PDEs can typically not be solved explicitly, and it is one of the most challenging tasks in applied mathematics to develop approximation algorithms able to compute approximate solutions of high-dimensional PDEs. Nearly all approximation algorithms for PDEs in the literature suffer from the so-called "curse of dimensionality", in the sense that the number of computational operations required to achieve a given approximation accuracy grows exponentially in the dimension of the considered PDE. With such algorithms it is impossible to compute approximate solutions of high-dimensional PDEs even on the fastest currently available computers. In this talk we introduce and analyze a new stochastic approximation algorithm for high-dimensional PDEs.
We prove that this algorithm does indeed overcome the curse of dimensionality for a general class of semilinear parabolic PDEs, and we thereby prove, for the first time, that a general semilinear parabolic PDE with a nonlinearity depending on the PDE solution can be solved approximately without the curse of dimensionality.
Vladimir Kazeev: Low-rank tensor approximation for multiscale problems
As a model problem, we consider a boundary-value problem for a linear second-order diffusion equation with high-frequency oscillations in the diffusion coefficient. At small scales, the multiscale structure of the problem renders it intractable for classical finite-element methods. Specifically, they require mesh refinement beyond the finest scale of the problem to achieve convergence, which means that the dimension of the discrete problem has to grow algebraically with respect to the scale parameter. One way to mitigate this is to retain a problem-nonspecific discretization but recast it in a high-dimensional space and apply a low-rank tensor representation to achieve adaptive, data-driven compression of the solution in the course of its computation. This approach has been successfully applied to various types of problems and contrasts with special finite-element methods for multiscale problems, such as gFEM, MsFEM and HMM, which rely on constructing adaptive basis functions or quadrature rules by solving auxiliary problems.
As in the previous works by Khoromskij and Repin (RJNAMM 2015) and Kazeev, Oseledets, Rakhuba and Schwab (ACOM 2017), we consider the low-rank tensor decomposition known as matrix product states (MPS) or tensor train (TT). For the aforementioned multiscale problem, with several scales and posed in two or three dimensions, we use the TT-MPS decomposition as a low-parametric representation for the discretizations recast in high-dimensional spaces. Building on the recent result of Bachmayr and Kazeev (arXiv 1802.09062) on preconditioning in the TT-MPS format, we develop a numerical multiscale solver that computes approximate solutions in low-rank decompositions whose number of parameters is polylogarithmic with respect to the scale parameter. We investigate, as an alternative to the direct tensor-structured approach, the approximation of the multiscale solution using a high-dimensional single-scale limit problem, developed by Hoang and Schwab (MMS 2005). Under additional assumptions, we theoretically analyze the approximability of the solution by low-rank approximations in the TT-MPS format and prove exponential convergence of such approximations.
This is joint work with Christoph Schwab, Maxim Rakhuba and Ivan Oseledets.
Felix Krahmer: Sparse Harmonic Transforms: A New Class of Sublinear-time Algorithms for Learning Functions of Many Variables
In this talk we will discuss fast and memory-efficient numerical methods for learning functions of many variables that admit sparse representations in terms of general bounded orthonormal tensor product bases. Such functions appear in many applications including, e.g., various Uncertainty Quantification (UQ) problems involving solutions of parametric PDEs that are approximately sparse in Chebyshev or Legendre product bases. More concretely, let $B$ be a finite Bounded Orthonormal Product Basis (BOPB) of cardinality $|B| = N$. Herein we develop methods that rapidly approximate any function $f$ that is sparse in the BOPB, that is, $f$ of the form $f(x) = \sum_{b \in S} c_b \cdot b(x)$ with $S \subset B$ of cardinality $|S| = s \ll N$. Our method has a runtime of just $(s \log N)^{O(1)}$, uses only $(s \log N)^{O(1)}$ function evaluations on a fixed and nonadaptive grid, and requires no more than $(s \log N)^{O(1)}$ bits of memory. We emphasize that nothing about $S$ or any of the coefficients $c_b$ is assumed in advance other than that $S \subset B$ has $|S| = s$. Both $S$ and its related coefficients $c_b$ will be learned from the given function evaluations by the developed method. For $s \ll N$, the runtime $(s \log N)^{O(1)}$ will be less than what is required to simply enumerate the elements of the basis $B$; thus our method is the first approach applicable in a general BOPB framework that falls into the class referred to as sublinear-time. This and the similarly reduced sample and memory requirements set our algorithm apart from previous works based on standard compressive sensing algorithms such as basis pursuit, which typically store and utilize full intermediate basis representations of size $\Omega(N)$ during the solution process. Besides these improved theoretical guarantees, the empirical performance of our method also improves over previous approaches.
In particular, the enhanced memory efficiency allows us to tackle much larger problem sizes than in previous works.
This is joint work with Bosu Choi (Michigan State University/UT Austin) and Mark Iwen (Michigan State University)
Christian Lubich: A low-rank projector-splitting integrator for the Vlasov-Poisson equation
Many problems encountered in plasma physics require a description by kinetic equations, which are posed in an up to six-dimensional phase space. A direct discretization of this phase space, often called the Eulerian approach, has many advantages but is extremely expensive from a computational point of view. Here we propose a dynamical low-rank approximation to the Vlasov–Poisson equation, with time integration by a particular splitting method. This approximation is derived by constraining the dynamics to a manifold of low-rank functions via a tangent space projection and by splitting this projection into the subprojections from which it is built. This reduces a time step for the six- (or four-) dimensional Vlasov–Poisson equation to solving two systems of three- (or two-) dimensional advection equations over the time step, once in the position variables and once in the velocity variables, where the size of each system of advection equations is equal to the chosen rank. By a hierarchical dynamical low-rank approximation, a time step for the Vlasov–Poisson equation can be further reduced to a set of six (or four) systems of one-dimensional advection equations, where the size of each system of advection equations is still equal to the rank. The resulting systems of advection equations can then be solved by standard techniques such as semi-Lagrangian or spectral methods. Numerical simulations in two and four dimensions for linear Landau damping, for a two-stream instability and for a plasma echo problem highlight the favorable behavior of this numerical method and show that the proposed algorithm is able to drastically reduce the required computational effort. The talk is based on joint work with Lukas Einkemmer.
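For the matrix case, the structure of the projector-splitting integrator can be sketched as follows; this is an editorial toy version with forward-Euler substeps (the actual integrator solves each substep flow exactly, and the talk applies the idea to the Vlasov–Poisson setting):

```python
import numpy as np

def ksl_step(U, S, V, F, h):
    """One first-order projector-splitting (KSL) step for the
    dynamical low-rank approximation A(t) ~= U S V^T of dA/dt = F(A).
    U (m x r) and V (n x r) have orthonormal columns; S is r x r.
    Each substep here uses a single forward-Euler increment."""
    # K-step: evolve K = U*S forward with V frozen, re-orthonormalize
    K = U @ S + h * F(U @ S @ V.T) @ V
    U, S = np.linalg.qr(K)
    # S-step: evolve S *backwards* (the characteristic minus sign)
    S = S - h * U.T @ F(U @ S @ V.T) @ V
    # L-step: evolve L = V*S^T forward with U frozen
    L = V @ S.T + h * F(U @ S @ V.T).T @ U
    V, St = np.linalg.qr(L)
    return U, St.T, V

# Usage: contractive flow F(A) = -A, which preserves the rank
rng = np.random.default_rng(0)
U0, _ = np.linalg.qr(rng.standard_normal((6, 2)))
V0, _ = np.linalg.qr(rng.standard_normal((5, 2)))
S0 = np.diag([1.0, 0.5])
h = 1e-3
U1, S1, V1 = ksl_step(U0, S0, V0, lambda A: -A, h)
err = np.linalg.norm(U1 @ S1 @ V1.T - np.exp(-h) * (U0 @ S0 @ V0.T))
```

The three substeps only ever touch the low-rank factors, never a full-size array, which is what makes the scheme attractive when `A` would otherwise be a huge phase-space grid.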
Yvon Maday: PBDW method for state estimation: error analysis for noisy data and a nonlinear formulation
We present an error analysis and further numerical investigations of the Parameterized-Background Data-Weak (PBDW) formulation to variational Data Assimilation (state estimation), proposed in [Y Maday, AT Patera, JD Penn, M Yano, Int J Numer Meth Eng, 102(5), 933-965]. Given M measurements of the state, PBDW aims at approximating the state $u^{true}$ as $\hat u = \hat z + \hat \eta$, where the background $\hat z$ belongs to (a subset of) an N-dimensional linear space informed by a parameterized mathematical model and the update $\hat \eta$ belongs to an M-dimensional space informed by the experimental observations. The contributions of this work are twofold. First, we present a mathematical analysis of the formulation for noisy measurements: we derive a general actionable a priori error analysis for linear reconstruction algorithms, and we apply it to the case in which the background belongs to the whole N-dimensional space (linear PBDW); furthermore, we present a stability result for the case in which the background belongs to a proper subset of the space (nonlinear PBDW). Second, we present several numerical examples to compare the linear formulation with the nonlinear formulation.
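A hypothetical finite-dimensional (Euclidean) caricature of the linear PBDW reconstruction may help fix ideas; all variable names are ours, and the real formulation is posed in a Hilbert space with Riesz representers of the observation functionals:

```python
import numpy as np

def pbdw_linear(Z, W, y):
    """Linear PBDW reconstruction in R^D (a toy Euclidean setting):
    columns of Z span the N-dim background space, columns of W
    (orthonormal) span the M-dim observation space, and y = W^T u_true
    are the M measurements. Returns u_hat = Z b + W a, where the
    update a is the minimum-norm correction consistent with the data."""
    A = W.T @ Z                                 # M x N (needs M >= N)
    b, *_ = np.linalg.lstsq(A, y, rcond=None)   # background coefficients
    a = y - A @ b                               # data residual -> update
    return Z @ b + W @ a

# Usage: if u_true lies in the background space, PBDW recovers it
rng = np.random.default_rng(1)
D, N, M = 50, 4, 10
Z = rng.standard_normal((D, N))
W, _ = np.linalg.qr(rng.standard_normal((D, M)))
u_true = Z @ rng.standard_normal(N)
u_hat = pbdw_linear(Z, W, W.T @ u_true)
err = np.linalg.norm(u_hat - u_true)
```

When the model is imperfect, `a` is nonzero and corrects the background estimate exactly in the directions actually observed, which is the "data-weak" part of the name.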
Mauro Maggioni: Statistical Learning and Dynamical Systems: two examples
We discuss two problems at the intersection of statistical learning and dynamical systems. In the first scenario, we consider high-dimensional stochastic systems that are well-approximated by diffusions on a low-dimensional manifold. Neither the process nor the manifold is known: we assume we only have access to an (expensive) simulator that can return short paths of the stochastic system, given an initial condition. We introduce a statistical learning framework for estimating local approximations to the system, which are then pieced together to form a fast global reduced model for the system, called ATLAS. ATLAS is guaranteed to be accurate (in the sense of producing stochastic paths whose distribution is close to that of paths generated by the original system) not only at small time scales, but also at large time scales, under suitable assumptions on the dynamics. We discuss applications to homogenization of rough diffusions in low and high dimensions, as well as relatively simple systems with separations of time scales, and deterministic chaotic systems in high dimensions that are well-approximated by stochastic diffusion-like equations.
In the second scenario we consider a system of interacting agents: given only observed trajectories of the system, we are interested in estimating the interaction laws between the agents. We consider both the mean-field limit (i.e. the number of agents going to infinity) and the case of a finite number of agents, with the number of observations going to infinity. We show that at least in particular cases the high-dimensionality of the state space of the system does not affect the learning rates. We exhibit efficient algorithms for estimating the interaction kernels, with statistical guarantees, and demonstrate them on various examples.
Giovanni Migliorati: Adaptive approximation by optimal weighted least-squares methods
We present a novel analysis of optimal weighted least-squares methods for the approximation of functions in high dimension, using an a priori given approximation space. Building on such analysis, we next consider approximation by weighted least squares in the adaptive setting, using a nested sequence of spaces. Finally we discuss numerical algorithms based on weighted least squares for adaptive approximation in high dimension with polynomial or wavelet spaces.
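As an editorial sketch of the non-adaptive building block (weighted least squares with samples drawn from the optimal measure for a fixed polynomial space; parameters and the rejection-sampling strategy are our own choices):

```python
import numpy as np
from numpy.polynomial import legendre

def orthonormal_legendre(x, n):
    """L_0,...,L_{n-1} at points x, orthonormal w.r.t. dx/2 on [-1,1]."""
    P = np.stack([legendre.legval(x, np.eye(n)[k]) for k in range(n)])
    return P * np.sqrt(2.0 * np.arange(n) + 1.0)[:, None]

def sample_optimal(n, m, rng):
    """Rejection sampling from the optimal density
    rho(x) = (1/n) sum_k L_k(x)^2 (w.r.t. dx/2); rho <= n on [-1,1]."""
    out = []
    while len(out) < m:
        x = rng.uniform(-1.0, 1.0)
        rho = np.mean(orthonormal_legendre(np.array([x]), n) ** 2)
        if rng.uniform(0.0, n) < rho:
            out.append(x)
    return np.array(out)

def weighted_ls(f, n, m, rng):
    """Fit f in span{L_0..L_{n-1}} by least squares with weights
    w_i = 1/rho(x_i), which restore orthonormality in expectation."""
    x = sample_optimal(n, m, rng)
    P = orthonormal_legendre(x, n)                 # n x m
    sw = np.sqrt(1.0 / np.mean(P ** 2, axis=0))    # sqrt of weights
    c, *_ = np.linalg.lstsq((P * sw).T, sw * f(x), rcond=None)
    return c

# Usage: x^3 lies in the degree-3 space, so it is recovered exactly
rng = np.random.default_rng(2)
c = weighted_ls(lambda x: x ** 3, n=4, m=60, rng=rng)
t = np.linspace(-1.0, 1.0, 101)
err = np.max(np.abs(c @ orthonormal_legendre(t, 4) - t ** 3))
```

The weights undo the sampling bias so that the weighted Gramian concentrates around the identity; the adaptive methods of the talk additionally update the space (and hence the sampling measure) across a nested sequence of spaces.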
Olga Mula: An Adaptive Nested Source Term Iteration for Radiative Transfer Equations
In this talk, we present a new approach to the numerical solution of radiative transfer equations with certified a posteriori error bounds. We formulate a fixed-point iteration in a suitable, infinite dimensional function space that is guaranteed to converge with a fixed error reduction per step. The numerical scheme is then based on approximately realizing this outer iteration within dynamically updated accuracy tolerances that still ensure convergence to the exact solution. To ensure that these error tolerances are met, we employ rigorous a posteriori error bounds based on a Discontinuous Petrov--Galerkin (DPG) scheme. These a posteriori bounds are also used to generate adapted, angle-dependent spatial meshes that significantly reduce the overall computational complexity. The scheme also requires the evaluation of the global scattering operator at increasing accuracy at every iteration, and its computation is accelerated through low-rank approximation and matrix compression techniques. We will illustrate the theoretical findings with numerical experiments involving non-trivial scattering kernels.
Fabio Nobile: Least-Squares Padé approximation of Helmholtz problems with parametric/stochastic wavenumber
The present work concerns the approximation of the solution map associated to the parametric Helmholtz boundary value problem, i.e., the map which associates to each (real) wavenumber belonging to a given interval of interest the corresponding solution of the Helmholtz equation. We introduce a single-point Least Squares (LS) rational Padé-type approximation technique applicable to any meromorphic Hilbert space-valued univariate map, and we prove the uniform convergence of the Padé approximation error on any compact subset of the interval of interest that excludes any pole. We also present a simplified and more efficient version, named Fast LS-Padé, applicable to Helmholtz-type parametric equations with normal operators.
The LS-Padé techniques are then employed to approximate the frequency response map associated to various parametric time-harmonic wave problems, namely, a transmission/reflection problem, a scattering problem and a problem in high-frequency regime. In all cases we establish the meromorphy of the frequency response map. The Helmholtz equation with stochastic wavenumber is also considered. In particular, for Lipschitz functionals of the solution, and their corresponding probability measures, we establish weak convergence of the measure derived from the LS-Padé approximant to the true one. Two-dimensional numerical tests are performed, which confirm the effectiveness of the approximation method.
Co-authors: Francesca Bonizzoni, Ilaria Perugia, Davide Pradovera
[1] F. Bonizzoni, F. Nobile, and I. Perugia, "Convergence analysis of Padé approximations for Helmholtz frequency response problems", ESAIM: Math. Mod. Numer. Anal. (M2AN), 2017 (published online).
[2] F. Bonizzoni, F. Nobile, I. Perugia, and D. Pradovera, "Fast Least-Squares Padé approximation of problems with normal operators and meromorphic structure." arXiv:1808.03112.
[3] F. Bonizzoni, F. Nobile, I. Perugia, and D. Pradovera, "Least-Squares Padé approximation of parametric and stochastic Helmholtz maps." arXiv:1805.05031.
Anthony Nouy: Learning with tree-based tensor formats
Erich Novak: How good is random information for $L_2$ approximation?
For optimal algorithms based on $n$ pieces of information, one has to select the $n$ information functionals in an optimal way. This might be difficult in practice, so one often has to work with information that is not optimal. In this work we study the case where the information functionals are taken as i.i.d. random variables. Such random information might be almost as good as optimal information, or might be much worse. We analyze this problem in a Hilbert space setting and give some partial answers about the power of random information.
Joint work with Aicke Hinrichs and David Krieg
Holger Rauhut: Multilevel compressive sensing methods for high-dimensional operator equations
Robert Scheichl: Efficient Sampling from High-dimensional Distributions using Low-rank Tensor Surrogates
High-dimensional distributions are notoriously difficult to sample from, particularly in the context of PDE-constrained Bayesian inverse problems. In this talk, we will present general purpose samplers based on low-rank tensor surrogates in the tensor-train (TT) format, a methodology that has been exploited already for many years for scalable, high-dimensional function approximations in quantum chemistry. In the Bayesian context, the TT surrogate is built in a two-stage process. First, we build a surrogate of the entire PDE solution in the TT format, using a novel combination of alternating least squares and the TT cross algorithm. It exploits and preserves the block diagonal structure of the discretised operator in stochastic collocation schemes, requiring only independent PDE solutions at a few parameter values, thus allowing the use of existing high performance PDE solvers. In a second stage, we approximate the high-dimensional likelihood function also in TT format. Due to the particular structure of the TT surrogate, we can build an efficient inverse Rosenblatt (or cumulative) transform that only requires a sampling algorithm for one-dimensional conditionals. The overall computational cost of the sampler grows only linearly with the dimension. For sufficiently smooth prior distributions of the input random fields, the ranks required for accurate TT approximations are moderate, leading to significant computational gains.
This is joint work with Karim Anaya-Izquierdo, Sergey Dolgov (both University of Bath, UK) and Colin Fox (University of Otago, NZ).
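The inverse Rosenblatt construction can be illustrated in miniature without any tensor machinery: on a two-dimensional grid, sampling reduces to inverting the marginal CDF of the first variable and then the conditional CDF of the second. The following sketch uses a toy density and grid of our own choosing; in the TT setting of the talk, the required marginalisations become cheap rank-sized contractions rather than the full sums used here.

```python
import numpy as np

# Toy 2D inverse Rosenblatt (conditional-CDF) sampler on a tensor grid.
n = 200
grid = np.linspace(-4.0, 4.0, n)

def density(x1, x2):
    # unnormalised correlated Gaussian, standing in for a posterior surrogate
    return np.exp(-0.5 * (x1**2 + x2**2 - x1 * x2))

P = density(grid[:, None], grid[None, :])   # density values on the grid

def inv_cdf(pdf_vals, u):
    # invert the piecewise CDF built from nonnegative grid values
    cdf = np.cumsum(pdf_vals)
    return np.interp(u, cdf / cdf[-1], grid)

rng = np.random.default_rng(1)

def sample():
    u1, u2 = rng.random(2)
    x1 = inv_cdf(P.sum(axis=1), u1)                # marginal of x1
    i = min(np.searchsorted(grid, x1), n - 1)      # nearest grid slice
    x2 = inv_cdf(P[i, :], u2)                      # conditional of x2 given x1
    return x1, x2

samples = np.array([sample() for _ in range(20_000)])
```

The cost per sample is one one-dimensional CDF inversion per coordinate, which is what makes the overall cost grow only linearly with the dimension once the marginalisations are available in TT form.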
Reinhold Schneider: Variational Monte Carlo and Hierarchical Tensor Representation
In Variational Monte Carlo we replace our objective functional by an empirical functional, much as in risk minimization or loss functions in statistical learning. For the optimization we only need computable gradients at sample points. This also works for deep neural networks using back-propagation. The approach can be carried out easily for parametric problems in uncertainty quantification. The price is that we can show only convergence in probability, i.e. the error estimates hold with high probability. The analysis is carried out in the spirit of Cucker & Smale, and first estimates of covering numbers for hierarchical tensors are available. We compare with recent results of Cohen et al. and provide an outlook on approximating the meta-stable eigenfunctions of the corresponding backward Kolmogorov operator by numerical approximation of the transfer operator (also known as the Koopman operator), and likewise the Fokker-Planck operator.
The hierarchical Tucker tensor format, introduced by Hackbusch and coworkers, and its particular case, the tensor train (TT) format (Tyrtyshnikov), were introduced for high-dimensional problems. The parametrization had already been known in quantum physics as tensor network states. The multi-linear parametrization can be applied to solve high-dimensional PDEs within a variational framework restricted to low-rank tensor classes.
For non-linear or more complex problems this direct approach becomes more difficult and is replaced by an empirical functional for variational Monte Carlo.
Joint work with M. Eigel, P. Trunschke and S. Wolf
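The core idea, replacing the exact objective by a sample average and optimising a low-rank parametrization only at the sample points, fits in a few lines. The sketch below rests on our own toy assumptions: a rank-one separable ansatz with cubic polynomial factors, fitted by alternating least squares on the empirical functional; the talk's hierarchical tensor setting generalises this to higher ranks and dimensions.

```python
import numpy as np

rng = np.random.default_rng(2)

def f(y1, y2):
    # target parametric quantity, itself separable
    return np.sin(y1) * np.cos(y2)

# Monte Carlo samples of the parameters: the exact objective
# J(v) = E |f - v|^2 is replaced by its empirical counterpart.
N = 2_000
y1, y2 = rng.uniform(0, 1, N), rng.uniform(0, 1, N)
target = f(y1, y2)

def basis(y):
    # cubic polynomial factors parametrize the rank-one ansatz a(y1) * b(y2)
    return np.stack([np.ones_like(y), y, y**2, y**3], axis=1)

B1, B2 = basis(y1), basis(y2)
ca = np.full(4, 0.5)                      # initial guess for factor a

for _ in range(10):                       # alternating least squares sweeps
    a = B1 @ ca
    cb = np.linalg.lstsq(B2 * a[:, None], target, rcond=None)[0]
    b = B2 @ cb
    ca = np.linalg.lstsq(B1 * b[:, None], target, rcond=None)[0]

empirical_rmse = np.sqrt(np.mean(((B1 @ ca) * (B2 @ cb) - target) ** 2))
```

Every quantity in the loop is evaluated only at the sample points, so the same scheme applies whenever pointwise evaluations (or gradients) of the target are computable, which is the situation exploited in the talk.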
Christoph Schwab: GPC and DNN Approximations of solution manifolds of parametric and stochastic PDEs
We consider parametric operator equations with uncertain input from Banach spaces. We assume the data-to-solution map to be holomorphic between (complexified versions of) corresponding function spaces. Equipping the (sub)space of admissible inputs with an unconditional basis renders the data-to-solution map countably parametric. The size of the domain of analyticity determines the sparsity of generalized polynomial chaos (gpc) expansions of parametric solutions.
We present algorithms with dimension-independent convergence rates of sparse grid collocation and of Smolyak quadratures for the interpolation and the numerical integration of the corresponding solution maps, with stable (Petrov-Galerkin) discretizations of the parametric operator equation. We investigate the expressive power of deep ReLU Neural Networks (DNNs) of countably-parametric maps u : U → R on the parameter domain U. Admissible analytic maps arise for example as response surfaces of parametric PDEs with distributed uncertain inputs, i.e., input data from function spaces, but also as densities of posterior measures in Bayesian PDE inversion.
Joint work with Jakob Zech. Supported in part by the Swiss National Science Foundation
References:
Ch. Schwab and J. Zech, Deep Learning in High Dimension, Analysis and Applications, Singapore, accepted (2018), SAM Report 2017-57, https://www.math.ethz.ch/sam/research/reports.html?id=753
J. Zech and Ch. Schwab, Convergence rates of high dimensional Smolyak quadrature, in review, SAM Report 2017-27, https://www.math.ethz.ch/sam/research/reports.html?id=2017-27
Ian Hugh Sloan: Unshifted lattice rules - Can lattice rules for high dimensional integration be effective without a shift?
In this joint work with Yoshihito Kazashi and Frances Kuo we seek an existence result for the worst-case error of lattice rules for high dimensional integration over the unit cube, in an unanchored weighted space of functions with square-integrable mixed first derivatives. Existing studies rely on random shifting of the lattice to simplify the analysis, whereas here we work with unshifted lattice rules. If a certain number-theoretic conjecture holds, then we are able to prove that there exists an $N$-point rank-one lattice rule which gives a worst-case error of order $1/\sqrt{N}$ up to a (dimension-independent) logarithmic factor. The constant is independent of dimension under the usual condition on the weights.
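For intuition, a rank-one lattice rule in two dimensions uses a single generating vector and equal weights; the classical unshifted Fibonacci lattice takes $N = F_m$ points with generating vector $(1, F_{m-1})$. A minimal sketch (the smooth periodic test integrand is our own choice, not from the talk):

```python
import numpy as np

def fibonacci_lattice_rule(f, m):
    # unshifted rank-one lattice rule with N = F_m points and
    # generating vector z = (1, F_{m-1}) (the Fibonacci lattice)
    F = [1, 1]
    while len(F) <= m:
        F.append(F[-1] + F[-2])
    N, z = F[m], F[m - 1]
    j = np.arange(N)
    x1, x2 = (j / N) % 1.0, (j * z / N) % 1.0
    return np.mean(f(x1, x2))

# smooth periodic test integrand with exact integral 1 over [0,1]^2
g = lambda x, y: (1 + 0.5 * np.cos(2 * np.pi * x)) * (1 + 0.5 * np.cos(2 * np.pi * y))
approx = fibonacci_lattice_rule(g, 20)   # N = F_20 = 10946 points
```

Since none of the integrand's frequencies lie in the dual lattice, this particular rule integrates the test function exactly up to rounding; the talk's question is how the worst-case error over a whole unanchored Sobolev class behaves for such unshifted rules.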
Endre Süli: High-dimensional McKean-Vlasov diffusion and the well-posedness of the Hookean bead-spring chain model
We report new PDE-analytic results, aimed at dispelling certain misconceptions in the polymer physics literature associated with high-dimensional Hookean bead-spring chain models. By performing a rigorous derivation of the model from first principles, we show in particular that when the flow domain is bounded the configuration space for the Hookean bead-spring chain model is also bounded (rather than unbounded, as is commonly stated in the literature), and that the Fokker--Planck equation featuring in the model is uniformly parabolic, containing a centre-of-mass diffusion term (rather than mixed hyperbolic-parabolic with no centre-of-mass diffusion term). These observations have significant impact on the construction and mathematical analysis of numerical methods for the approximate solution of high-dimensional Hookean bead-spring chain models. We also provide a rigorous proof of a formal asymptotic argument by Schieber and Öttinger (J. Schieber and H. C. Öttinger, The effects of bead inertia on the Rouse model, J. Chem. Phys. 89 (1988), no. 11), asserting that in the small-mass limit the model results in equilibration in momentum space. Our proofs rely on entropy/entropy dissipation estimates combined with various weak compactness and compensated compactness techniques.
The talk is based on joint work with Ghozlane Yahiaoui (Oxford), and arose from a series of discussions with Andrew Stuart (Caltech) following the workshop "Challenges in High-Dimensional Analysis and Computation'', held at San Servolo, Venice, May 2--6, 2016.
E. Süli and G. Yahiaoui. McKean--Vlasov diffusion and the well-posedness of the Hookean bead-spring-chain model for dilute polymeric fluids: small-mass limit and equilibration in momentum space. [85 pages]. arXiv:1802.06268 [math.AP]. https://arxiv.org/abs/1802.06268
Aretha Teckentrup: Quasi- and multilevel Monte Carlo methods for Bayesian inverse problems
The parameters in mathematical models for many physical processes are often impossible to determine fully or accurately, and are hence subject to uncertainty. By modelling the input parameters as stochastic processes, it is possible to quantify the induced uncertainty in the model outputs. Based on available information, a prior distribution is assigned to the input parameters. If in addition some dynamic data (or observations) related to the model outputs are available, a better representation of the parameters can be obtained by conditioning the prior distribution on these data, leading to the posterior distribution in the Bayesian framework.
In most situations, the posterior distribution is intractable in the sense that the normalising constant is unknown and exact sampling is unavailable. Using Bayes’ Theorem, we show that posterior expectations of quantities of interest can be written as the ratio of two prior expectations involving the quantity of interest and the likelihood of the observed data. These prior expectations can then be computed using Quasi-Monte Carlo and multilevel Monte Carlo methods.
In this talk, we give a convergence and complexity analysis of the resulting ratio estimators, and demonstrate their effectiveness on a typical model problem in uncertainty quantification.
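The ratio identity is easy to state concretely: for a quantity of interest $Q$, $\mathbb{E}[Q \mid \text{data}] = \mathbb{E}_{\rm prior}[Q\,L] / \mathbb{E}_{\rm prior}[L]$, where $L$ is the likelihood, and both prior expectations can be estimated by sampling. A plain Monte Carlo sketch on a toy conjugate-Gaussian problem of our own choosing (so the exact posterior mean is known in closed form):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (our illustration, not the talk's PDE model): prior theta ~ N(0,1),
# observation y = theta + N(0, sigma^2) noise, quantity of interest Q(theta) = theta.
sigma, y_obs = 0.5, 1.0

def likelihood(theta):
    return np.exp(-0.5 * ((y_obs - theta) / sigma) ** 2)

theta = rng.standard_normal(200_000)          # prior samples
L = likelihood(theta)

# Ratio estimator: E[Q | y] ~= mean(Q * L) / mean(L)
posterior_mean = np.mean(theta * L) / np.mean(L)

# conjugate-Gaussian exact value for comparison: y / (1 + sigma^2)
exact = y_obs / (1.0 + sigma ** 2)
```

In the quasi- and multilevel Monte Carlo variants analysed in the talk, the two prior expectations are computed with those more efficient estimators in place of the plain sample means used here.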
Vladimir Temlyakov: Smooth fixed volume discrepancy, dispersion, and related problems
We discuss the smooth fixed volume discrepancy in the periodic and non-periodic case. We prove that the Frolov point sets, adjusted to the periodic case, have an order of decay of the smooth periodic discrepancy that is optimal in a certain sense. Upper bounds for the $r$-smooth fixed volume periodic discrepancy of these sets are established. We further prove that the Fibonacci and the Frolov point sets, which are known to be very good for numerical integration, have an optimal rate of decay of dispersion with respect to the cardinality of the sets. This implies that the Fibonacci and the Frolov point sets provide universal discretization of the uniform norm for natural collections of subspaces of the multivariate trigonometric polynomials. It is shown how the optimal upper bounds for dispersion can be derived from the upper bounds for a new characteristic -- the smooth fixed volume discrepancy.
Tino Ullrich: Numerical performance of optimized Frolov lattices in tensor product reproducing kernel Sobolev spaces
In this talk, we deal with several aspects of the family of Frolov cubature formulas, which are known to achieve optimal asymptotic convergence rates in a broad range of multivariate function spaces. Even though every admissible lattice has this favorable asymptotic behavior, there are significant differences in the numerical behavior of the worst-case error in higher dimensions. To this end, we propose new generating polynomials that promise a significant reduction of this worst-case integration error compared to the classical ones given by Frolov. We further present a new algorithm to efficiently enumerate the Frolov points from non-orthogonal lattices in the $d$-dimensional unit cube $[0, 1]^d$. Since these point sets are computed once and then reused, we have made them available for download. Finally, we compare the numerical performance to other contemporary numerical integration methods in the framework of (unweighted) Sobolev spaces with (anisotropic) mixed smoothness and homogeneous boundary conditions. In order to do this we derive explicit formulas for the reproducing kernels of the respective zero-boundary spaces.
This is recent joint work with C. Kacwin (Bonn), J. Oettershagen (Bonn) and Mario Ullrich (Linz).
André Uschmajew: Low-rank approximation of nearest neighbor interaction systems
Low-rank tensor methods are an important tool for the numerical treatment of equations with a high-dimensional state space. Nearest neighbor interaction systems such as the Ising model or some chemical master equations are examples of this, and the low-rank tensor train format has been shown to be efficient for their computation in some cases. A challenging task, however, is to provide theoretical justification for this. For ground states of 1D quantum spin systems such arguments have been found in the theoretical physics community, but they can be applied more generally. The idea is to study the rank-increasing properties of Krylov subspace methods based on partial commutativity of local operators in nearest neighbor Hamiltonians. In this talk, we will explain this idea and present numerical examples.
Clayton Webster: Energy norm regularization for sparse simultaneous reconstruction of solutions to parameterized PDEs
This talk will focus on a novel sparse polynomial approximation method for the solution of PDEs with stochastic and parametric inputs. Our approach treats the parameterized problem as a problem of sparse signal reconstruction through a reformulation of the standard basis pursuit denoising problem. To achieve global reconstruction of solutions to the parameterized elliptic PDE, we employ a novel mixed $\ell_1$ and energy norm-based method of regularization. Combined with the standard measurement scheme developed for compressed sensing-based polynomial approximation, this approach allows for simultaneous global approximations of the solution over both physical and parametric domains. In addition, we are able to show that, with minimal sample complexity, error estimates comparable to the best $s$-term approximation are achievable, while requiring only a priori bounds on polynomial truncation error in energy norms.
Henryk Wozniakowski: Absolute Value Information
Two classes of information have been mainly considered in IBC, Information-Based Complexity, for approximate solutions of continuous problems. The first class is $\Lambda^{\rm all}$ and consists of all linear functionals, whereas the second one is $\Lambda^{\rm std}$ and consists of only function values.
Recently, due to Daubechies, Mallat and others, a new class of information has been introduced in the context of phase retrieval, where it is assumed that only absolute values of linear functionals are available. We call this class $\Lambda^{\rm avi}$, the absolute value information class. In this way we have $\Lambda^{\rm avi-all}$ when we can compute the absolute values of arbitrary linear functionals, and $\Lambda^{\rm avi-std}$ when only the absolute value of function values can be computed. For $\Lambda^{\rm avi}$ we need to modify the algorithm error to recover the missing phase in information values.
The purpose of this study is to establish the power of $\Lambda^{\rm avi}$ in comparison to $\Lambda^{\rm all}$ and $\Lambda^{\rm std}$ for various IBC problems in the worst case setting. Our main result is that $\Lambda^{\rm avi-all}$ is roughly of the same power as $\Lambda^{\rm all}$ for linear IBC problems. In fact, this holds for all subsets $\Lambda$ of $\Lambda^{\rm all}$ with the property that $L_1, L_2 \in \Lambda$ implies that $L_1 + L_2$ and $L_1 + i L_2$, with $i = \sqrt{-1}$, are also in $\Lambda$. This property does not hold for $\Lambda^{\rm std}$. We prove that the class $\Lambda^{\rm avi-std}$ is usually too weak to solve linear IBC problems.
This is joint work with Leszek Plaskota and Pawel Siedlecki of the University of Warsaw.
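The algebraic mechanism behind the closure property can be demonstrated numerically: from the four absolute values $|L_1(f)|$, $|L_2(f)|$, $|L_1(f)+L_2(f)|$, $|L_1(f)+iL_2(f)|$ one recovers $\overline{L_1(f)}\,L_2(f)$, and hence both values up to one common global phase. A small self-contained illustration (the concrete numbers are our own toy choice):

```python
import numpy as np

def recover_relative(abs_a, abs_b, abs_apb, abs_apib):
    # |a+b|^2  = |a|^2 + |b|^2 + 2 Re(conj(a) b)
    # |a+ib|^2 = |a|^2 + |b|^2 - 2 Im(conj(a) b)
    re = (abs_apb**2 - abs_a**2 - abs_b**2) / 2.0
    im = -(abs_apib**2 - abs_a**2 - abs_b**2) / 2.0
    conj_a_b = re + 1j * im
    # fix the global phase by taking a = |a| (real, positive)
    a = abs_a
    b = conj_a_b / a
    return a, b

# hidden true functional values
a_true = 2.0 * np.exp(1j * 0.7)
b_true = 1.5 * np.exp(-1j * 0.4)
data = (abs(a_true), abs(b_true), abs(a_true + b_true), abs(a_true + 1j * b_true))
a_rec, b_rec = recover_relative(*data)
# (a_rec, b_rec) equals (a_true, b_true) up to one common phase factor
```

The recovered pair differs from the true pair only by the global phase $e^{-i \arg L_1(f)}$, which is exactly the information that absolute value information cannot, and for linear problems need not, supply.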
Harry Yserentant: On the expansion of solutions of Laplace-like equations into traces of separable higher dimensional functions
The talk deals with the equation $-\Delta u+\mu u=f$, $\mu$ a positive constant, on high-dimensional spaces $\mathbb{R}^m$. If the right-hand side $f$ is a rapidly converging series of separable functions, the solution $u$ can be represented in the same way. These constructions are based on approximations of the function $1/r$ by sums of exponential functions. We will present results of a related kind for more general right-hand sides $f(x)=F(Tx)$ that are composed of a separable function on a space of dimension $n$ greater than $m$ and a linear mapping given by a matrix $T$ of full rank. Our results are based on the observation that in the high-dimensional case the Euclidean norm of $T^t\omega$ behaves, for $\omega$ in most of $\mathbb{R}^n$, like the Euclidean norm of $\omega$.
H. Yserentant, On the expansion of solutions of Laplace-like equations into traces of separable higher dimensional functions, arXiv:1807.05340 [math.NA]
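An approximation of $1/r$ by a sum of exponential (Gaussian) terms can be reproduced from the integral representation $1/r = (2/\sqrt{\pi}) \int_0^\infty e^{-r^2 t^2}\,dt$ via the substitution $t = e^s$ and the trapezoidal rule; the node count and step size below are our own choices, not taken from the talk.

```python
import numpy as np

def inv_r_as_gaussian_sum(r, n=80, h=0.25):
    # 1/r = (2/sqrt(pi)) * int_0^inf exp(-r^2 t^2) dt; substituting t = exp(s)
    # and applying the trapezoidal rule yields a sum of Gaussians in r whose
    # accuracy improves rapidly with the number of terms.
    s = h * (np.arange(n) - n // 2)                  # quadrature nodes in s
    weights = (2.0 / np.sqrt(np.pi)) * h * np.exp(s)
    exponents = np.exp(2.0 * s)
    return np.sum(weights * np.exp(-exponents * r**2))
```

Each term $e^{-a_k r^2}$ is separable across coordinates, since $r^2 = x_1^2 + \cdots + x_m^2$ turns it into a product of one-dimensional Gaussians; this is precisely what produces the separable representations of the solution discussed in the talk.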