Hausdorff School: Low-rank Tensor Techniques in Numerical Analysis and Optimization

Monday, April 18

08:30 - 09:00 (Self-) Registration
09:00 - 10:30 Lecture A. Uschmajew: Tensor product of Hilbert spaces
10:30 - 11:00 Coffee break
11:00 - 12:30 Lecture A. Uschmajew: Low-rank approximability
12:30 - 14:30 Lunch break
14:30 - 16:00 Problem session
16:00 - 16:30 Coffee break
16:30 - 17:30 Discussion

Tuesday, April 19

09:00 - 10:30 Lecture B. Vandereycken: Tensor trains and matrix product states
10:30 - 11:00 Coffee break & Group Photo
11:00 - 12:30 Lecture B. Vandereycken: Riemannian optimization for tensors
12:30 - 14:30 Lunch break
14:30 - 16:00 Problem session
16:00 - 16:30 Coffee break
16:30 - 17:30 Discussion

Wednesday, April 20

09:00 - 10:30 Participants' session
10:30 - 11:00 Coffee break
11:00 - 12:30 Participants' session
12:30 - 14:00 Lunch break
14:00 - 18:00 Excursion to Max-Ernst-Museum in Brühl (Exhibition 'M. C. Escher') - We meet in front of the building where the lectures take place.
19:30 Restaurant 'Ente', Martinsplatz 2A, 53113 Bonn (see www.ente-bonn.de)

Thursday, April 21

09:00 - 10:30 Lecture L. Grasedyck: Hierarchical Tucker format
10:30 - 11:00 Coffee break
11:00 - 12:30 Lecture L. Grasedyck: Blackbox / Cross approximation
12:30 - 14:30 Lunch break
14:30 - 16:00 Problem session
16:00 - 16:30 Coffee break
16:30 - 17:30 Discussion

Friday, April 22

09:00 - 10:30 Lecture B. Vandereycken: Quantized tensor trains
10:30 - 11:00 Coffee break
11:00 - 12:30 Lecture L. Grasedyck: Outlook, applications and discussion

André Uschmajew: Tensor product of Hilbert spaces

The theoretical background of tensors and tensor rank will be reviewed in a simplified Hilbert space setting. Understanding the topological structure of such spaces, and of bounded operators on them, is important for studying the low-rank approximability of certain problems in scientific computing, at both the analytical and the discretized level.
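
To make the rank notion concrete, here is a minimal sketch in Python/NumPy (my own illustration, assuming the finite-dimensional Hilbert space R^n with the Euclidean inner product): an elementary tensor u ⊗ v of two vectors is their outer product, and for order-2 tensors the tensor rank coincides with the matrix rank, which the SVD computes exactly.

    import numpy as np

    rng = np.random.default_rng(0)
    u, v = rng.standard_normal(4), rng.standard_normal(5)
    x, y = rng.standard_normal(4), rng.standard_normal(5)

    A = np.outer(u, v)       # elementary tensor u (x) v: rank 1
    B = A + np.outer(x, y)   # sum of two elementary tensors: rank <= 2

    # For order 2, tensor rank equals matrix rank (the number of nonzero
    # singular values), so the SVD decides it exactly; for order >= 3 there
    # is no such tool, and computing the rank is NP-hard in general.
    print(np.linalg.matrix_rank(A))   # 1
    print(np.linalg.matrix_rank(B))   # 2 (generically)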

André Uschmajew: Low-rank approximability

The motivating examples for the usefulness of low-rank tensor approximations in high-dimensional problems are the Poisson equation with zero boundary conditions / the Lyapunov equation, and functions of mixed regularity. In the first case, the inverse operator admits a good approximation by sums of elementary tensor products of operators. In the second case, results from approximation theory yield almost dimension-independent approximation rates by sums of separable functions, e.g., via hyperbolic cross approximation. At the same time, one is faced with the fact that the approximation problem is not well posed, since sets of bounded canonical tensor rank are typically not closed. This motivates the approximation in tree tensor networks such as the TT or HT formats.
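
To illustrate the first case, the following NumPy sketch (my own; the quadrature parameters h and K are ad hoc choices) uses the sinc-quadrature exponential sum 1/x ≈ Σ_k ω_k exp(-α_k x), obtained by discretizing 1/x = ∫ exp(-x e^s) e^s ds, to approximate the inverse of the two-dimensional Laplacian A ⊗ I + I ⊗ A by a short sum of Kronecker (elementary tensor) products.

    import numpy as np

    n = 20
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D Laplacian, zero BC
    lam, Q = np.linalg.eigh(A)                            # A = Q diag(lam) Q^T

    # Sinc (trapezoidal) quadrature for 1/x = int_R exp(-x e^s) e^s ds
    h, K = 0.4, 25
    s = h * np.arange(-K, K + 1)
    alpha, omega = np.exp(s), h * np.exp(s)

    def expmA(t):
        # exp(-t A) via the eigendecomposition of the symmetric matrix A
        return (Q * np.exp(-t * lam)) @ Q.T

    # Sum of 2K+1 Kronecker products approximating inv(A (x) I + I (x) A),
    # using that exp(-t (A (+) A)) = exp(-t A) (x) exp(-t A)
    approx = sum(w * np.kron(expmA(a), expmA(a)) for a, w in zip(alpha, omega))

    L2 = np.kron(A, np.eye(n)) + np.kron(np.eye(n), A)    # 2D Laplacian
    exact = np.linalg.inv(L2)
    print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))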

Bart Vandereycken: Tensor trains and matrix product states

Low-rank tensor formats that are based on minimal subspaces have the benefit of allowing efficient and stable numerical calculations. We will explain two formats in detail: the Tucker format and the tensor train format (also known as matrix product states). Apart from their definition based on minimal subspaces, we will also give some ideas of how they can be used in numerical algorithms, for example, the explicit construction of quasi-best approximations based on standard SVD calculations for matrices.
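
The SVD-based construction mentioned at the end can be sketched in a few lines of NumPy (a simplified TT-SVD; the function names and the tolerance handling below are my own, kept minimal): successive truncated SVDs of unfoldings deliver the cores of a quasi-best TT approximation.

    import numpy as np

    def tt_svd(X, eps=1e-10):
        # TT cores of X via successive truncated SVDs of unfoldings
        dims, d = X.shape, X.ndim
        delta = eps / np.sqrt(d - 1) * np.linalg.norm(X)  # per-step tolerance
        cores, r = [], 1
        C = X.reshape(dims[0], -1)
        for k in range(d - 1):
            U, sv, Vt = np.linalg.svd(C, full_matrices=False)
            tails = np.cumsum(sv[::-1] ** 2)[::-1]        # sum_{i>=j} sv_i^2
            rk = max(1, int(np.sum(tails > delta ** 2)))  # smallest safe rank
            cores.append(U[:, :rk].reshape(r, dims[k], rk))
            C = (sv[:rk, None] * Vt[:rk]).reshape(rk * dims[k + 1], -1)
            r = rk
        cores.append(C.reshape(r, dims[-1], 1))
        return cores

    def tt_full(cores):
        # Contract the cores back into a full tensor
        X = cores[0]
        for G in cores[1:]:
            X = np.tensordot(X, G, axes=1)
        return X.reshape(X.shape[1:-1])

    i, j, k = np.meshgrid(*(np.arange(10),) * 3, indexing="ij")
    X = np.sin(0.1 * (i + j + k))            # has exact TT ranks (2, 2)
    cores = tt_svd(X, eps=1e-12)
    print([G.shape for G in cores])          # [(1,10,2), (2,10,2), (2,10,1)]
    print(np.linalg.norm(tt_full(cores) - X))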

Bart Vandereycken: Riemannian optimization for tensors

The sets of fixed-rank Tucker, TT, and HT tensors are each smooth submanifolds embedded in the space of general tensors. This smooth structure can be exploited when solving optimization problems on these sets. Particular examples are tensor completion and the solution of linear systems and eigenvalue problems. We will explain the basic elements of differential geometry needed to apply Riemannian optimization on the TT manifold. As prototypical examples, we study the convergence of Riemannian steepest descent and show how alternating linear schemes (ALS) can be seen as a Riemannian optimization method.
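
As a minimal sketch (my own matrix analogue, not the lecture's code): the two ingredients, tangent-space projection and retraction, are easiest to show on the manifold of fixed-rank matrices, where the completion problem reads as follows; the TT case replaces both by their TT counterparts.

    import numpy as np

    def riemannian_descent(A, mask, r, steps=300, lr=1.0):
        # Riemannian steepest descent for min 0.5*||mask * (X - A)||_F^2
        # over the manifold of rank-r matrices X = U S V^T
        m, n = A.shape
        rng = np.random.default_rng(0)
        U, _, Vt = np.linalg.svd(rng.standard_normal((m, n)), full_matrices=False)
        U, S, Vt = U[:, :r], np.eye(r), Vt[:r]
        for _ in range(steps):
            X = U @ S @ Vt
            G = mask * (X - A)                         # Euclidean gradient
            # Riemannian gradient: project G onto the tangent space at X
            xi = U @ (U.T @ G) + (G @ Vt.T) @ Vt - U @ (U.T @ G @ Vt.T) @ Vt
            # Retraction: map the step back to the manifold by truncated SVD
            U, sv, Vt = np.linalg.svd(X - lr * xi, full_matrices=False)
            U, S, Vt = U[:, :r], np.diag(sv[:r]), Vt[:r]
        return U @ S @ Vt

    rng = np.random.default_rng(1)
    A = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))  # rank 2
    mask = rng.random(A.shape) < 0.5                  # observe ~50% of entries
    X = riemannian_descent(A, mask, r=2)
    print(np.linalg.norm(X - A) / np.linalg.norm(A))  # completion error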

Lars Grasedyck: Hierarchical Tucker format

We again consider sets of tensors whose matricizations have low rank, in particular those defining the hierarchical Tucker format based on a dimension tree. This tree can be of maximal depth (leading to the TT or MPS format) or of some other structure. Properties and the algorithmic treatment of tensors in such tree-based formats will be discussed. Finally, we put the different formats into context.
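
A small NumPy sketch (my own illustration; the trees are hard-coded) makes the role of the dimension tree visible: each tree node t ⊆ {0,...,d-1} carries the rank of the matricization that groups the modes in t as rows, and a balanced tree and the linear (TT/MPS) tree simply query different families of subsets.

    import numpy as np

    def matricization_rank(X, t):
        # Rank of the unfolding of X with the modes in t as row indices
        rest = [m for m in range(X.ndim) if m not in t]
        rows = int(np.prod([X.shape[m] for m in t]))
        return np.linalg.matrix_rank(
            np.transpose(X, list(t) + rest).reshape(rows, -1))

    i, j, k, l = np.meshgrid(*(np.arange(6),) * 4, indexing="ij")
    X = np.cos(0.1 * (i + j + k + l))

    balanced = [(0, 1), (2, 3), (0,), (1,), (2,), (3,)]  # balanced tree nodes
    linear = [(0,), (0, 1), (0, 1, 2)]                   # TT/MPS tree (max depth)
    print([matricization_rank(X, t) for t in balanced])  # all ranks are 2 here
    print([matricization_rank(X, t) for t in linear])    # all ranks are 2 here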

Lars Grasedyck: Blackbox / Cross approximation

An interesting method to approximate, interpolate, or complete a low-rank tensor from a few known entries is the so-called black box approximation. It is based on the cross approximation of matrices. We discuss properties, theoretical results, and open problems.
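
For the matrix case, the following NumPy sketch (a simplified adaptive cross approximation with partial pivoting; the pivoting strategy is deliberately basic) builds a rank-k skeleton from k row/column crosses, touching only O(k(m+n)) entries of the matrix.

    import numpy as np

    def aca(get_row, get_col, m, n, k):
        # Cross approximation C @ R from k rows and k columns of an
        # implicitly given m x n matrix (accessed only via get_row/get_col)
        C, R, i = np.zeros((m, 0)), np.zeros((0, n)), 0
        for _ in range(k):
            row = get_row(i) - C[i] @ R          # residual of row i
            j = int(np.argmax(np.abs(row)))      # pivot column
            if abs(row[j]) < 1e-14:
                break                            # residual is numerically zero
            col = (get_col(j) - C @ R[:, j]) / row[j]
            C, R = np.column_stack([C, col]), np.vstack([R, row])
            col[i] = 0                           # do not revisit this row
            i = int(np.argmax(np.abs(col)))      # next pivot row
        return C, R

    m = n = 200
    A = 1.0 / (np.arange(m)[:, None] + np.arange(n) + 1.0)  # Hilbert matrix
    C, R = aca(lambda i: A[i], lambda j: A[:, j], m, n, k=10)
    print(np.linalg.norm(C @ R - A) / np.linalg.norm(A))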

Bart Vandereycken: Quantized tensor trains

Quantizing a vector (or matrix) means folding its entries into the many short modes of a high-dimensional tensor, thereby representing different levels, or scales, as new dimensions. By applying the TT format to these quantized vectors, one arrives at the quantized tensor-train (QTT) format. Many signals, functions, and solutions of PDEs have low QTT ranks. We will explain a few examples based on polynomial interpolation and other standard approximation techniques.
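
A small NumPy experiment (my own illustration; the grid size and tolerance are ad hoc) shows the effect: folding 2^d samples of a function into a 2 x 2 x ... x 2 tensor and computing the ranks of its successive unfoldings (these equal the minimal QTT ranks) reveals rank 1 for the exponential and rank 2 for the sine, independent of d.

    import numpy as np

    d = 12
    x = np.linspace(0, 1, 2 ** d, endpoint=False)   # 2^d = 4096 grid points

    def qtt_ranks(v, tol=1e-10):
        # Fold v into a 2 x 2 x ... x 2 tensor; the ranks of the successive
        # unfoldings are exactly the minimal TT ranks of the quantized vector
        T = v.reshape((2,) * d)
        return [np.linalg.matrix_rank(T.reshape(2 ** k, -1), tol=tol)
                for k in range(1, d)]

    print(qtt_ranks(np.exp(x)))          # all 1: exp factorizes over scales
    print(qtt_ranks(np.sin(np.pi * x)))  # all 2: sin(a+b) splits into 2 terms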

Lars Grasedyck: Outlook, applications and discussion

In this final lecture we discuss the application and applicability of low-rank tensor techniques by reviewing theoretical approximation results (for the CP / TT / MPS / HT case), e.g., based on exponential sums, as well as practical problems where tensors arise or can be used.