Date: August 24 - 26, 2021
Venue: Neues Hörsaalzentrum, Friedrich-Hirzebruch Allee 5, Bonn

General schedule

09:15 Opening
09:30 Lecture 1
10:20 Coffee break
11:00 Lecture 2
12:10 Lecture 3
13:00 - 15:00 Lunch break
15:00 Lecture 4
15:50 Coffee break
16:30 Lecture 5
17:40 Reception (Tuesday) or Informal Discussion
18:30 End

Tuesday, August 24

09:15 Opening, Lecture Room 2
09:30 - 17:20 Parallel sessions
17:40 Reception
18:30 End

Wednesday, August 25

09:30 - 17:20 Parallel sessions
17:40 Informal Discussion
18:30 End

Thursday, August 26

09:30 - 17:20 Parallel sessions

Session 9: Stochastics in discrete, singular, and infinite dimensional structures
Chair: Anton Bovier & Karl-Theodor Sturm, Lecture Room 2
09:30 Lisa Hartung (Universität Mainz): Extremes of Continuous Random Energy Models
10:20 Coffee break
11:00 Jan Maas (IST Austria): Characterisation of gradient flows
12:10 Ngoc Tran (UT Austin): Does your problem have a tropical solution? [cancelled]
13:00 - 15:00 Lunch break
15:00 Eva Kopfer (Universität Bonn): Optimal transport and homogenization
15:50 Coffee break
16:30 Eveliina Peltola (Universität Bonn): On large deviations of SLEs, real rational functions, and zeta-regularized determinants of Laplacians
17:40 Informal Discussion
18:30 End


Francesca Arici (Universiteit Leiden): The role of Toeplitz extensions in mathematical physics and operator algebras

Motivated by the role played by Toeplitz-type extensions in the study of solid state physics, we review the theory of Toeplitz algebras and their role in operator $K$-theory and operator algebras. We will then describe recent results on the topology of a class of Toeplitz algebras (and quotients thereof) constructed from $SU(2)$-representations, and showcase results about their K-theoretic invariants.

Based on joint work with Jens Kaad (SDU Odense).

Arthur Bartels (Universität Münster): K-theory of Hecke algebras

For complex group rings $C[G]$ of discrete groups, the Farrell-Jones conjecture predicts that their $K$-theory can be computed in terms of the $K$-theory of group rings $C[F]$ for finite subgroups $F$ of $G$. This talk will discuss a similar conjecture for Hecke algebras of totally disconnected groups. We will emphasize similarities and differences between the discrete and the totally disconnected case. Our main result is that this conjecture for Hecke algebras is satisfied for reductive p-adic groups, implying in particular a conjecture of Dat. This is joint work with Wolfgang Lück.

Jannis Blauth (Universität Bonn): Improving the Approximation Ratio for Capacitated Vehicle Routing

We devise a new approximation algorithm for capacitated vehicle routing. Our algorithm yields a better approximation ratio for general capacitated vehicle routing as well as for the unit-demand case and the splittable variant. Our results hold in arbitrary metric spaces. This is the first improvement on the classical tour partitioning algorithm by Haimovich and Rinnooy Kan and by Altinkemer and Gavish.

Joint work with Vera Traub and Jens Vygen.
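
For context, the classical tour partitioning scheme of Haimovich and Rinnooy Kan, which this result improves on, is easy to state: compute a traveling salesman tour, cut it into consecutive segments of at most k customers, and serve each segment by a separate round trip from the depot. A minimal sketch for the unit-demand Euclidean case (the helper names and the fixed segmentation offset are illustrative assumptions; this is the classical baseline, not the new algorithm):

```python
import math

def dist(a, b):
    # Euclidean distance between two points in the plane
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_partition(depot, tour, k):
    """Cut a TSP tour (customers in visiting order) into consecutive
    segments of at most k customers; each segment becomes one vehicle
    route that starts and ends at the depot."""
    return [[depot] + tour[i:i + k] + [depot] for i in range(0, len(tour), k)]

def total_cost(routes):
    # Sum of edge lengths over all routes
    return sum(dist(r[j], r[j + 1]) for r in routes for j in range(len(r) - 1))
```

In the iterated variant, one tries all k possible offsets for the first cut and keeps the cheapest partition, which is what the classical analysis bounds against the optimum.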

Bastian Bohn (Fraunhofer SCAI): Least Squares and Sparse Grids - Error Bounds, Adaptivity and Beyond

We first recapitulate the framework of least squares regression in certain sparse grid spaces. The underlying numerical problem can be solved quite efficiently with state-of-the-art algorithms. Analyzing its stability and convergence properties, we can derive the optimal coupling between the number of necessary data samples and the degrees of freedom in the ansatz space. Our analysis is based on the assumption that the least-squares solution possesses some kind of Sobolev regularity of dominating mixed smoothness, which is seldom encountered in real-world applications. Therefore, we present possible extensions of the basic sparse grid least squares algorithm by introducing suitable a-priori data transformations in the second part of the talk. These are tailored such that the resulting transformed problem suits the sparse grid structure.

Florian Brandl (Universität Bonn): A Natural Adaptive Process for Collective Decision-Making

Consider an urn filled with balls, each labeled with one of several possible collective decisions. Now, draw two balls from the urn, let a random voter pick the one she prefers as the collective decision, relabel the losing ball with the winning decision, put both balls back into the urn, and repeat. In order to prevent the permanent disappearance of some types of balls, once in a while a randomly drawn ball is relabeled with a random collective decision. We prove that the empirical distribution of collective decisions converges to the outcome of a celebrated probabilistic voting rule proposed by Peter C. Fishburn (Rev. Econ. Stud., 51(4), 1984). The proposed procedure has analogues in nature studied in biology, physics, and chemistry. It is more flexible than traditional voting rules because it does not require a central authority, elicits very little information, and allows voters to arrive, leave, and change their preferences over time.
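
The urn dynamics described above are straightforward to simulate. A toy sketch (the ball count, mutation rate, and the encoding of voters as fixed rankings are illustrative assumptions, not parameters from the paper):

```python
import random

def urn_process(alternatives, voters, n_balls=50, mutation=0.01,
                steps=2000, seed=0):
    """Simulate the urn process: draw two balls, let a random voter's
    ranking decide the winner, relabel the losing ball with the winning
    decision, and occasionally relabel a random ball with a random label.
    `voters` is a list of rankings, most preferred alternative first.
    Returns time-averaged label counts."""
    rng = random.Random(seed)
    urn = [rng.choice(alternatives) for _ in range(n_balls)]
    counts = {a: 0 for a in alternatives}
    for _ in range(steps):
        i, j = rng.sample(range(n_balls), 2)
        ranking = rng.choice(voters)
        winner = min(urn[i], urn[j], key=ranking.index)  # voter's pick
        urn[i] = urn[j] = winner                         # loser is relabeled
        if rng.random() < mutation:                      # rare random relabeling
            urn[rng.randrange(n_balls)] = rng.choice(alternatives)
        for ball in urn:
            counts[ball] += 1
    return counts
```

With unanimous voters, the time-averaged counts concentrate on the unanimously preferred alternative, as the convergence result predicts.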

Inga Deimen (University of Arizona): Communication in the shadow of catastrophe

We study the role of risk for strategic information transmission in organizations. An increased likelihood of extreme outcomes – heavier tails – decreases the amount of information that is communicated. A change of the optimal mode of decision-making from communication to simple delegation can occur as risk increases, but not vice versa. We complement the usual ex ante comparison of modes of decision-making with a worst-case analysis and find a systematic discrepancy between ex ante and ex post. For intermediate levels of conflict, even though communication is optimal ex ante, in a worst-case event delegation would perform better.

Joint work with Dezsoe Szalay.

Anne Driemel (Universität Bonn): Data structures for nearest neighbor searching under the Fréchet distance

We review different techniques for building a data structure that preprocesses a set of polygonal curves and answers proximity queries under the Fréchet distance. In particular, we consider the near-neighbor problem, where the query should return one curve that lies within a Fréchet distance r of the query curve. In the approximate setting, the query radius r can be relaxed by the query. We are interested in the preprocessing time, space, and query time complexity that can be achieved when the data structure takes n polygonal curves, each with m vertices, as input and allows querying with polygonal curves of k vertices. We first discuss data structures based on multi-level partition trees, which can be used in the exact setting but are very expensive, with polylogarithmic factors exponential in k. For the discrete Fréchet distance it is possible—by enumerating a discrete subset of the query space—to build a ($1+\epsilon$)-approximate data structure that uses space linear in n and exponential in k, with query time depending only on k. We discuss some challenges that arise when translating this technique to the setting of the continuous Fréchet distance and discuss some recent advances in this direction.
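
For background, the discrete Fréchet distance that the approximate data structure relies on can be computed in O(mk) time by the classical Eiter-Mannila dynamic program (shown here as a sketch of the distance itself, not of the data structures discussed in the talk):

```python
import math
from functools import lru_cache

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between two polygonal curves given as
    lists of 2D points: the smallest possible maximum distance between
    matched vertices over all monotone matchings."""
    def d(i, j):
        return math.hypot(P[i][0] - Q[j][0], P[i][1] - Q[j][1])

    @lru_cache(maxsize=None)
    def c(i, j):
        # c(i, j): best achievable bottleneck for the prefixes P[:i+1], Q[:j+1]
        if i == 0 and j == 0:
            return d(0, 0)
        if i == 0:
            return max(c(0, j - 1), d(0, j))
        if j == 0:
            return max(c(i - 1, 0), d(i, 0))
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d(i, j))

    return c(len(P) - 1, len(Q) - 1)
```

A proximity data structure aims to answer many queries without re-running this quadratic computation against every stored curve.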

Tobias Dyckerhoff (Universität Hamburg): Topological Fukaya categories of symmetric powers

Various categorical descriptions of Fukaya categories can be interpreted as categorifications of corresponding computations of (co)homology. This suggests the possibility of defining and studying suitable classes of Fukaya categories within combinatorial-topological categorical frameworks. Based on an ongoing joint project with Gustavo Jasso and Yanki Lekili, we discuss progress towards such a framework for symmetric powers of Riemann surfaces.

Alexander Effland (Universität Bonn): Convergence of the Time Discrete Metamorphosis Model on Hadamard Manifolds

Continuous image morphing is a classical task in image processing. The metamorphosis model proposed by Trouvé, Younes and coworkers casts this problem in the frame of Riemannian geometry and geodesic paths between images. In many applications, images are maps from the image domain into a manifold (e.g. in diffusion tensor imaging (DTI) the manifold of symmetric positive definite matrices with a suitable Riemannian metric). In this talk, I present a generalized metamorphosis model for manifold-valued images, where the range space is a finite-dimensional Hadamard manifold. A corresponding time discrete version was presented by Neumayer et al. based on the general variational time discretization proposed by Berkels et al. Moreover, I briefly sketch the proof of the Mosco-convergence of the time discrete metamorphosis functional to the proposed manifold-valued metamorphosis model, which implies the convergence of time discrete geodesic paths to a geodesic path in the (time continuous) metamorphosis model.

Jochen Garcke (Universität Bonn / Fraunhofer SCAI): Machine Learning meets Numerical Simulation

We present a conceptual framework that helps bridge the knowledge gap between the machine learning and numerical simulation communities to promote the development of hybrid systems. We describe combined approaches and employ the framework to give a structured overview of different types of combinations, using exemplary approaches of simulation-assisted machine learning and machine-learning-assisted simulation.

Nicola Gigli (SISSA, Trieste): Synthetic and distributional approach to lower curvature bounds

I will discuss the "classical" approach to lower curvature bounds by means of appropriate convexity inequalities and its relation with a more recent approach based on tensor calculus in the non-smooth setting. Both lower Ricci and lower Sectional curvature bounds will be discussed.

Shaoming Guo (University of Wisconsin-Madison): A short proof of decoupling inequalities for moment curves

I will talk about a short proof of Vinogradov's mean value estimates and decoupling inequalities for moment curves. The proof is based on a bilinear method of Wooley, and simplifies the proof of Bourgain, Demeter and Guth, which is based on multilinear restriction estimates.

Joint work with Zane Li, Po-Lam Yung and Pavel Zorin-Kranich.
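
For orientation, the mean value estimate in question bounds the number $J_{s,k}(N)$ of integral solutions of the Vinogradov system; the sharp form, proved by Bourgain, Demeter and Guth, is recalled here as background:

```latex
\[
  J_{s,k}(N) \,\ll_{\varepsilon}\, N^{\varepsilon}\bigl(N^{s} + N^{2s - \frac{k(k+1)}{2}}\bigr)
  \qquad \text{for every } \varepsilon > 0,
\]
where $J_{s,k}(N)$ counts the solutions of
\[
  x_1^{j} + \dots + x_s^{j} = y_1^{j} + \dots + y_s^{j},
  \qquad 1 \le j \le k, \quad 1 \le x_i, y_i \le N.
\]
```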

Lisa Hartung (Universität Mainz): Extremes of Continuous Random Energy Models

Continuous random energy models were introduced by Derrida as toy models for spin glasses. In this talk we give an overview of what is known (and what is not!) about the extremes of continuous random energy models. We focus on two particular cases: the continuous random energy model on a supercritical Galton-Watson tree, also known as variable speed branching Brownian motion, and its two-dimensional analogue, the 2d scale-inhomogeneous Gaussian free field.

This talk is based on joint work with A. Bovier and M. Fels.

Stephan Held (Universität Bonn): Mathematical Innovations in Computer Chip Design

Due to its high complexity, the design of computer chips is traditionally subdivided into multiple steps that are performed largely independently, e.g. placement, routing, or signal delay optimization. In recent years, we have achieved significant progress in developing unifying global optimization models that integrate different sub-problems, e.g. routing and signal delay optimization. They also apply to central steps such as transistor sizing and threshold-voltage optimization. These new global optimization methods have quickly become state-of-the-art in real-life design flows. The specific applications also triggered new approximation algorithms for well-known combinatorial optimization problems, e.g. for cost-distance Steiner trees or the discrete time-cost tradeoff problem. We will give a survey of these new developments with details on the new combinatorial approximation algorithms.

Alexander Ivanov (Universität Bonn): Deligne-Lusztig theory: From finite to p-adic case

Classical Deligne-Lusztig theory allows a cohomological realization of irreducible representations of all finite groups of Lie type. In 1979 Lusztig conjectured a similar theory for p-adic groups. I will explain a definition of p-adic Deligne-Lusztig spaces and give some justification that this indeed is a good definition. In some particular cases, I will describe which representations are realized by their cohomology (the last part is joint work with Charlotte Chan).

Gustavo Jasso (Universität Bonn): Homotopical algebra in exact infinity-categories

Exact categories were introduced by Quillen in 1973 as part of his foundational work on higher algebraic K-theory. Exact categories are ubiquitous in the representation theory of rings and algebras: they arise as module categories, categories of Gorenstein projective modules, categories of cochain complexes, etc. Much more recently, in 2015, Barwick introduced the class of exact infinity-categories in order to prove an infinity-categorical version of Neeman's Theorem of the Heart. Roughly speaking, Barwick's exact infinity-categories are a simultaneous generalisation of Quillen's exact categories and of Lurie's stable infinity-categories (the latter can be thought of as enhancements of Verdier's triangulated categories). In this talk, I will explain the role of Barwick's exact infinity-categories in the representation theory of rings and algebras. I will also explain how Cisinski's results on infinity-categorical homotopical algebra can be leveraged to study localisations of exact infinity-categories and their derived infinity-categories.

This is a report on joint work in progress with Sondre Kvamme (Uppsala), Yann Palu (Amiens) and Tashi Walde (TU Munich).

Marina Khismatullina (Universität Bonn): Multiscale inference for nonparametric time trends

We develop multiscale methods to test qualitative hypotheses about nonparametric time trends in the presence of covariates. In many applications, practitioners are interested in whether the observed time series all have the same time trend. Moreover, when there is evidence that this is not the case, one of the major related statistical problems is to determine which of the trends are different and whether we can group the time series with similar trends together. In addition, when two trends are not the same, it may also be relevant to know in which time regions they differ from each other. We design multiscale tests to formally approach these questions. We derive asymptotic theory for the proposed tests and show that they have (asymptotically) the correct size and asymptotic power of one against a certain class of local alternatives.

This presentation is based on joint work with Michael Vogt (Universität Ulm).

Bettina Klaus (Lausanne): Minimal-Access Rights in School Choice and the Deferred Acceptance Mechanism

A classical school choice problem consists of a set of schools with priorities over students and a set of students with preferences over schools. Schools' priorities are often based on multiple criteria, e.g., merit-based test scores as well as minimal-access rights (siblings attending the school, students' proximity to the school, etc.). Traditionally, minimal-access rights are incorporated into priorities by always giving minimal-access students higher priority over non-minimal-access students. However, stability based on such adjusted priorities can be considered unfair because a minimal-access student may be admitted to a popular school while another student with a higher merit score but without a minimal-access right is rejected, even though the former minimal-access student could easily attend another of her minimal-access schools. We therefore weaken stability to minimal-access stability: minimal-access rights only promote access to at most one minimal-access school. Apart from minimal-access stability, we would also want a school choice mechanism to satisfy strategy-proofness and minimal-access monotonicity, i.e., additional minimal-access rights for a student do not harm her. Our main result is that the student-proposing deferred acceptance mechanism is the only mechanism that satisfies minimal-access stability, strategy-proofness, and minimal-access monotonicity. Since this mechanism is in fact stable, our result can be interpreted as an impossibility result: fairer outcomes that are made possible by the weaker property of minimal-access stability are incompatible with strategy-proofness and minimal-access monotonicity.

Joint work with Flip Klijn.

Alois Kneip (Universität Bonn): Superconsistent estimation of points of impact in non-parametric regression with functional predictors

Predicting scalar outcomes using functional predictors is a classical problem in functional data analysis. In many applications, however, only specific locations or time-points of the functional predictors have an impact on the outcome. Such points of impact are typically unknown and have to be estimated in addition to the usual model components. We consider the case of nonparametric models and the practically relevant case of generalized linear models. We show that our points-of-impact estimator enjoys a super-consistent convergence rate and does not require knowledge or pre-estimates of the unknown model components. This remarkable result facilitates the subsequent estimation of the remaining model components, as shown in the theoretical part. The finite sample properties of our estimators are assessed by means of a simulation study. Our methodology is motivated by data from a psychological experiment in which the participants were asked to continuously rate their emotional state while watching an affective video eliciting varying intensity of emotional reactions.

Eva Kopfer (Universität Bonn): Optimal transport and homogenization

We consider discrete dynamical transport costs on periodic network graphs and compute the limit cost as the mesh size of the graphs is getting finer and finer. A prominent example is given by the Benamou-Brenier formulation of the Wasserstein distance.
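
The Benamou-Brenier formulation mentioned above expresses the squared Wasserstein distance as a dynamical transport cost (standard statement, recalled for background):

```latex
\[
  W_2^2(\mu_0, \mu_1)
  \;=\; \min_{(\mu_t, v_t)} \int_0^1 \!\!\int |v_t(x)|^2 \, d\mu_t(x)\, dt,
\]
where the minimum runs over curves of measures connecting $\mu_0$ to $\mu_1$
subject to the continuity equation
\[
  \partial_t \mu_t + \nabla \cdot (\mu_t v_t) = 0 .
\]
```

On a periodic network graph, the spatial integral is replaced by a sum over edges, and the homogenization question asks which continuum cost of this form arises in the fine-mesh limit.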

Gitta Kutyniok (LMU München): The Impact of Artificial Intelligence on Parametric Partial Differential Equations

High-dimensional parametric partial differential equations (PDEs) appear in various contexts including control and optimization problems, inverse problems, risk assessment, and uncertainty quantification. In most such scenarios the set of all admissible solutions associated with the parameter space is inherently low dimensional. This fact forms the foundation for the so-called reduced basis method. Recently, numerical experiments demonstrated the remarkable efficiency of using deep neural networks to solve parametric problems. In this talk, after an introduction into deep learning, we will present a theoretical justification for this class of approaches. More precisely, we will derive upper bounds on the complexity of ReLU neural networks approximating the solution maps of parametric PDEs. In fact, without any knowledge of its concrete shape, we use the inherent low-dimensionality of the solution manifold to obtain approximation rates which are significantly superior to those provided by classical approximation results. We use this low-dimensionality to guarantee the existence of a reduced basis. Then, for a large variety of parametric PDEs, we construct neural networks that yield approximations of the parametric maps not suffering from a curse of dimensionality and essentially only depending on the size of the reduced basis. Finally, we present a comprehensive numerical study of the effect of approximation-theoretical results for neural networks on practical learning problems in the context of parametric partial differential equations. These experiments strongly support the hypothesis that approximation-theoretical effects heavily influence the practical behavior of learning problems in numerical analysis.

Stefan Lauermann (Universität Bonn): Auctions with Frictions

Many markets are characterized by some form of price competition. Auctions provide a convenient model of such informal price competition. However, existing models need to be extended to capture certain frictions that are more salient with informal price competition. In particular, the participating bidders may be the outcome of recruitment efforts by the seller, and not all potential bidders may be willing to participate or submit serious bids. In addition, the private information of the seller may be more consequential. Finally, the seller's commitment abilities may be limited. We develop a model of auctions with bidder recruitment and limited seller commitment that captures such scenarios and derive some novel predictions. In particular, outcomes are often inefficient and the market sometimes "unravels".

Joint work with Asher Wolinsky.

Tim Laux (Universität Bonn): De Giorgi varifold solutions for mean curvature flow: Convergence of the Allen-Cahn equation and weak-strong uniqueness

Weak solution concepts for mean curvature flow have been studied since Brakke's seminal work in the 1970s. However, all concepts that are general enough to study multiphase systems lack either satisfactory existence or uniqueness properties. In this talk, I will propose a new solution concept and prove (long-time) existence and (weak-strong) uniqueness. Our solutions are evolving varifolds which are coupled to the phase volumes by a transport equation. The solution concept is based on the gradient-flow structure of mean curvature flow and arises naturally as the analog of De Giorgi's framework for abstract gradient flows. First, we show that, in the generality of Ilmanen's work [J. Differential Geom. 38, 417–461, (1993)], solutions to the Allen–Cahn equation converge to such a De Giorgi varifold solution. Second, we prove that any calibrated flow in the sense of Fischer et al. [arXiv:2003.05478]—and hence any classical solution—is unique in the class of De Giorgi varifold solutions. This is in sharp contrast to the case of Brakke flows, which a priori may disappear at any given time and are therefore fatally non-unique.

This is joint work with Sebastian Hensel (IST Austria).
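
For background, De Giorgi's characterization of an abstract gradient flow of an energy $E$ replaces the equation $\dot x = -\nabla E(x)$ by a single inequality (standard form, recalled here):

```latex
\[
  E(x(T)) + \frac{1}{2}\int_0^T \Bigl( |\dot x(t)|^2 + |\nabla E(x(t))|^2 \Bigr)\, dt
  \;\le\; E(x(0)) \qquad \text{for all } T > 0.
\]
```

By the chain rule and Young's inequality the left-hand side always dominates the right-hand side for smooth $E$, so the inequality forces equality and hence $\dot x = -\nabla E(x)$; the varifold solution concept in the talk is the analogue of this inequality for mean curvature flow.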

Dominik Liebl (Universität Bonn): Fast and Fair Simultaneous Confidence Bands for Functional Parameters

Quantifying uncertainty using confidence regions is a central goal of statistical inference. Despite this, methodologies for confidence bands in Functional Data Analysis are underdeveloped compared to estimation and hypothesis testing. This work represents a major leap forward in this area by presenting a new methodology for constructing simultaneous confidence bands for functional parameter estimates. These bands possess a number of striking qualities: (1) they have a nearly closed-form expression, (2) they give nearly exact coverage, (3) they have a finite sample correction, (4) they do not require an estimate of the full covariance of the parameter estimate, and (5) they can be constructed adaptively according to a desired criterion. One option for choosing bands that we find especially interesting is the concept of fair bands, which allows us to do fair (or equitable) inference over subintervals and could be especially useful in longitudinal studies over long time scales. Our bands are constructed by integrating and extending tools from Random Field Theory, an area that has yet to overlap with Functional Data Analysis.

Clara Löh (Universität Regensburg): Computing bounded cohomology?

Bounded cohomology is the cohomology of the bounded dual of the real singular/bar chain complex. Bounded cohomology has various applications to geometry, topology, and group theory. However, bounded cohomology cannot be computed through the usual divide and conquer approach. In this talk, I will survey some classical results, some new vanishing results, and explain why bounded cohomology is difficult to compute.

This is based on joint work with Francesco Fournier-Facio and Marco Moraschini.

Jan Maas (IST Austria): Characterisation of gradient flows

Consider a vector field $v$ and a functional $F$ on a manifold $M$. Does there exist a Riemannian metric on $M$ that turns the ODE $\dot x = v(x)$ into a gradient flow for $F$? In this talk we present conditions on $v$ and $F$ that are necessary and sufficient. As an application we characterise the class of quantum Markov semigroups that arise as gradient flow of the von Neumann entropy. This answers a question that arose in joint work with E. Carlen.

Joint work with Morris Brooks (IST Austria).
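
To fix notation for the question above: $\dot x = v(x)$ is the gradient flow of $F$ with respect to a Riemannian metric $g$ on $M$ precisely when (standard definition, recalled as background)

```latex
\[
  g_x\bigl(v(x), w\bigr) \;=\; -\,\mathrm{d}F_x(w)
  \qquad \text{for all } x \in M,\ w \in T_x M .
\]
```

An evident necessary condition is that $F$ is a strict Lyapunov function: along the flow, $\frac{d}{dt}F(x(t)) = \mathrm{d}F_{x(t)}(v) = -g(v, v) \le 0$, with equality only where $v$ vanishes; the talk's result makes precise which further conditions on $v$ and $F$ are needed for sufficiency.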

Alexander Mayer (Universität zu Köln): Estimation and inference in factor copula models with exogenous covariates

A factor copula model is proposed in which factors are either simulable or estimable from exogenous information. Point estimation and inference are based on a simulated method of moments (SMM) approach with non-overlapping simulation draws. Consistency and limiting normality of the estimator are established and the validity of bootstrap standard errors is shown. In doing so, previous results from the literature are verified under low-level conditions imposed on the individual components of the factor structure. Monte Carlo evidence confirms the accuracy of the asymptotic theory in finite samples, and an empirical application illustrates the usefulness of the model to explain the cross-sectional dependence between stock returns.

(Joint work with Dominik Wied)

Marina Meila (University of Washington): Manifold coordinates with physical meaning via Riemannian geometry

One of the aims of dimension reduction is to find intrinsic coordinates that describe the data manifold. Manifold learning algorithms developed in machine learning return abstract coordinates; finding their physical or domain-related meaning is not formalized and is left to domain experts. In this talk, I propose a method to explain the embedding coordinates of a manifold as non-linear compositions of functions from a user-defined dictionary. This generic algorithm can reveal the non-linear relationships between the data-driven, learned coordinates and the domain-specific variables, essentially providing a new set of domain-specific coordinates for the data. We show that this problem can be set up as a sparse linear Group Lasso recovery problem, find sufficient recovery conditions, and demonstrate its effectiveness on data. With this class of new methods, called ManifoldLasso, a scientist can specify a (large) set of functions of interest and obtain from them intrinsic coordinates for her data in a semi-automatic, principled fashion. In the more general case, when functions with physical meaning are not available, I will present a statistically founded methodology to estimate and then cancel out the distortions introduced by a manifold learning algorithm, thus effectively preserving the Riemannian geometry of the original data. All the methods described are implemented in the Python package megaman and can be applied to data sets of up to a million points.

This work is part of Marina Meila's current research program "Unsupervised Validation for Unsupervised Learning", which aims to design broad-ranging, mathematically and statistically grounded methods to interpret, verify and validate the output of Unsupervised Machine Learning algorithms with a minimum of assumptions and of human intervention.

Joint work with Dominique Perrault-Joncas, James McQueen, Jacob VanderPlas, Zhongyue Zhang, Yu-Chia Chen, Samson Koelle, and Hanyu Zhang.

Hans-Georg Müller (UC Davis): Regression Models for Distributional Data and Time Series

We present two regression models for the rapidly evolving field of distributional data analysis (DDA) for one-dimensional probability distributions and density data, working in the Wasserstein space of continuous distributions. These models feature distributional predictors and responses and include the case of i.i.d. predictor/response distributions as well as autoregressive models for distributional time series. The Wasserstein regression model [1] utilizes the Wasserstein manifold and parallel transport, mapping predictor and response distributions to a suitable tangent space, whereupon any preferred functional regression model is applied in the tangent space, followed by projecting to a convex invertibility set and mapping back to the distribution space. The optimal transport regression model and its autoregressive version (ATM) [2] utilizes a transport algebra and relates optimal transports that serve as predictors to each other. This model does not require a projection step, operates on transport geodesics, and is thus intrinsic. The models will be illustrated with age-at-death distributions and financial distributional data. This presentation is based on joint work with Yaqing Chen (Davis), Zhenhua Lin (Singapore) and Changbo Zhu (Davis).

Key words: Autoregressive Distributional Model, Distributional Data Analysis, Optimal Transport, Tangent Bundle, Wasserstein Space


[1] Chen, Y, Lin, Z, Müller, HG (2020). Wasserstein Regression. arXiv:2006.09660.

[2] Zhu, C, Müller, HG (2021). Autoregressive Optimal Transport Models. arXiv:2105.05439.
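
In one dimension, the Wasserstein geometry underlying these models is explicit: the optimal coupling between two distributions matches quantiles. A minimal sketch for empirical distributions with equally many atoms (illustrative only; the models in [1] and [2] operate on full distributions and tangent-space representations):

```python
def wasserstein2_1d(xs, ys):
    """Squared 2-Wasserstein distance between two empirical distributions
    on the real line with the same number of atoms: sorting both samples
    and matching them in order realizes the optimal (quantile) coupling."""
    xs, ys = sorted(xs), sorted(ys)
    return sum((a - b) ** 2 for a, b in zip(xs, ys)) / len(xs)
```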

Meike Neuwohner (Universität Bonn): An Improved Approximation Algorithm for the Maximum Weight Independent Set Problem in d-Claw Free Graphs

We consider the task of computing an independent set of maximum weight in a given $d$-claw free graph $G=(V,E)$ equipped with a positive weight function $w:V\rightarrow\mathbb{R}_{>0}$. In doing so, $d\geq 2$ is considered a constant. The previously best known approximation algorithm for this problem is the local improvement algorithm SquareImp proposed by Berman. It achieves a performance ratio of $\frac{d}{2}+\epsilon$ for any fixed $\epsilon>0$, which has remained unimproved for the last twenty years.

We investigate the structure of instances where the analysis provided by Berman is "almost tight", and show that they are "locally unweighted" in a certain sense. For the unit weight case, a result by Hurkens and Schrijver implies that searching for local improvements of constant size produces approximation guarantees arbitrarily close to $\frac{d-1}{2}$. Using this observation as a starting point, we study an algorithm that takes into account a broader class of local improvements and show that it yields a (slightly) improved approximation ratio of $\frac{d}{2}-\frac{1}{63,700,993}$. Furthermore, the well-known reduction from the weighted $k$-Set Packing Problem to the Maximum Weight Independent Set Problem in $(k+1)$-claw free graphs provides a $\frac{k+1}{2}-\frac{1}{63,700,993}$-approximation algorithm for the weighted $k$-Set Packing Problem. This improves on the previously best known approximation guarantee of $\frac{k+1}{2}+\epsilon$ originating from the result of Berman.

Thomas Nikolaus (Universität Münster): Cohomology of orthogonal and symplectic groups over the integers

The talk is based on joint work with Fabian Hebestreit and Markus Land. Based on recent work connecting Grothendieck–Witt theory to L-theory and using Ranicki’s work, we explain how to compute the homotopy type of the Grothendieck–Witt spaces of the integers. This leads to a computation of the stable cohomology of arithmetic groups like O(n, n, Z) and Sp(n, Z).

Alessia Nota (University of L'Aquila): Homoenergetic solutions of the Boltzmann equation

In this talk I will consider a particular class of solutions of the Boltzmann equation, known as homoenergetic solutions, which were introduced by Galkin and Truesdell in the 1960s. These are a particular type of non-equilibrium solutions of the Boltzmann equation and they are useful to describe the dynamics of Boltzmann gases under shear, expansion or compression. Due to the fact that these solutions describe far-from-equilibrium phenomena their long-time asymptotics cannot always be described by Maxwellian distributions. I will discuss different possible long-time asymptotics of homoenergetic solutions of the Boltzmann equation, as well as some conjectures and open problems in this direction.

These are joint works with A. V. Bobylev, R. D. James and J. J. L. Velázquez.

Georg Oberdieck (Universität Bonn): Motivic decompositions of hyper-Kähler varieties

By results of Deninger-Murre, Künnemann and Moonen the Chow motive of an abelian variety can be decomposed into isotypical components under a natural action of the Looijenga-Lunts-Verbitsky algebra. When passing to Chow groups this refines the classical Beauville decomposition. In this talk I will explain joint work with A. Negut and Q. Yin in which we construct a parallel decomposition for the Hilbert scheme of points of a K3 surface, one of the most prominent examples of hyper-Kähler varieties.

Eveliina Peltola (Universität Bonn): On large deviations of SLEs, real rational functions, and zeta-regularized determinants of Laplacians

When studying the large deviations (LDP) of Schramm-Loewner evolution (SLE) curves, we recently introduced a "Loewner potential" that describes the rate function for the LDP. This object turned out to have several intrinsic, and perhaps surprising, connections to various fields. For instance, it has a simple expression in terms of zeta-regularized determinants of Laplace-Beltrami operators. On the other hand, minima of the Loewner potential solve a nonlinear first-order PDE that arises in a semiclassical limit of certain correlation functions in conformal field theory, arguably also related to isomonodromic systems. Finally, and perhaps most interestingly, the Loewner potential minimizers classify rational functions with real critical points, thereby providing a novel proof for a version of the now well-known Shapiro-Shapiro conjecture in real enumerative geometry.

This talk is based on joint work with Yilin Wang (MIT).


Thomas Pock (TU Graz): Learning with energy-based models

In this talk, I will show how to use learning techniques to significantly improve energy-based models. I will start by showing that even for the simplest models such as total variation, one can greatly improve the accuracy of the numerical approximation by learning the "best" discretization within a class of consistent discretizations. Then I will move on to more expressive models and show how they can be learned in order to give state-of-the-art performance for image reconstruction problems such as denoising, super-resolution, MRI and CT. Finally, I will show how energy-based models for image labeling, such as Markov random fields, can be used in the framework of deep learning.

Yang Qi (INRIA Saclay / CMAP, École polytechnique): On best approximations of high-dimensional semialgebraic data sets

In practice, given a high-dimensional data set, due to the computational complexity, we need to find and study certain best approximations of this data set. This forces us to investigate the existence and uniqueness of such best approximations. In particular, when this data set possesses some intrinsic nonlinear structure, the problem of existence and uniqueness can be subtle. In this talk, we will study the case where the data set is semialgebraic, or more generally, subanalytic. As applications, we apply our techniques and results to two examples, namely, tensors and neural networks. More concretely, we will study the existence and uniqueness of best low rank tensor approximations and best fixed layer neural network approximations.

Michael Rapoport (Universität Bonn): On the singular nature of p-adic uniformization

Ever since Cherednik and Drinfeld stated in 1976 their result on the p-adic uniformization of certain Shimura curves, the question of understanding this result and finding its natural generalization to higher dimension has been at the center of my mathematical interests. I will report on some of the mathematics that has been developed in this context in the last 45 years.

Arunima Ray (Max Planck Institute for Mathematics, Bonn): A surface embedding theorem

When is a given map of a surface to a 4-manifold homotopic to an embedding? I’ll motivate this question and give a survey of related results, including the work of Freedman and Quinn, and culminating in a general surface embedding theorem. The talk will be based on joint work with Daniel Kasprowski, Mark Powell, and Peter Teichner.

Adam Rennie (University of Wollongong): The topology of the bulk-boundary correspondence

Pioneering work by Bellissard and colleagues described the quantum Hall effect using noncommutative index theory.

Further work showed that other topological insulators could also be described using real and Real variations of K-theory. Exact sequences in K-theory give a first method for relating invariants of bulk systems to invariants for the boundary.

I will describe how the Kasparov product can be used to relate topological invariants for bulk systems to invariants for boundary systems. Advances in the constructive approach to the Kasparov product allow us to carry out these topological computations using the physical observables of interest.

This is joint work with Chris Bourne, Alan Carey and Johannes Kellendonk.

Heiko Röglin (Universität Bonn): Beyond Worst-Case Analysis

The complexity of many optimization problems and algorithms seems well understood. However, theoretical results often contradict observations made in practice. Some NP-hard optimization problems can be solved efficiently in practice, and for many problems algorithms with exponential worst-case running time outperform polynomial-time algorithms. The reason for this discrepancy is the pessimistic worst-case perspective, in which the performance of an algorithm is solely measured by its behavior on the worst possible input. In order to provide a more realistic measure that can explain the practical performance of algorithms, various alternatives to worst-case analysis have been suggested. In this talk, I will survey some results in this area. I will focus mostly on results in the framework of smoothed analysis, in which the performance of an algorithm is measured on inputs that are subject to random noise.
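
In smoothed analysis, an adversarially chosen instance is perturbed by random noise before the algorithm runs, and one bounds the maximum over instances of the expected cost on the perturbed input. A minimal sketch of estimating that inner expectation; the cost function and instance below are illustrative toys, not from the talk:

```python
import random

def lr_maxima(xs):
    """Toy cost: number of left-to-right maxima of a sequence. Its worst
    case (an increasing sequence, cost n) is fragile under perturbation."""
    best, count = float("-inf"), 0
    for x in xs:
        if x > best:
            best, count = x, count + 1
    return count

def smoothed_cost(cost, instance, sigma, trials=200, seed=0):
    """Monte Carlo estimate of E[cost(instance + N(0, sigma^2) noise)].
    Smoothed analysis then takes the maximum of this expectation over
    all adversarial instances."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        total += cost([x + rng.gauss(0.0, sigma) for x in instance])
    return total / trials

adversarial = list(range(10))                                 # worst case: cost 10
worst = smoothed_cost(lr_maxima, adversarial, sigma=0.0)      # no noise
smoothed = smoothed_cost(lr_maxima, adversarial, sigma=5.0)   # perturbed
```

With sigma = 0 the estimate equals the worst-case cost; already moderate noise pushes the expected cost below it, mirroring how smoothed bounds interpolate between worst-case and average-case analysis.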

Lennart Ronge (Universität Bonn): Hadamard coefficients for noncommutative geometry

Heat coefficients play an important role in noncommutative geometry, featuring, for example, in the asymptotic expansion of Connes' spectral action. In commutative geometry, they have a Lorentzian analogue called Hadamard coefficients. We develop a formula for the Hadamard coefficients in terms of functional-analytic quantities that might be used to define them in noncommutative Lorentzian geometry as well, and to obtain an analogue of the heat coefficients there.

Angkana Rüland (Universität Heidelberg): On some instability mechanisms in inverse problems

It is well-known that many PDE-driven inverse problems are notoriously ill-posed. In this talk I will discuss three robust instability mechanisms based on strong global, weak global and only microlocal smoothing properties of the forward operator. These are applicable to a variety of inverse problems including elliptic, parabolic and hyperbolic ones.

The talk is based on joint work with Herbert Koch (U. Bonn) and Mikko Salo (U. Jyväskylä).

Philipp Strack (Yale University): Overconfidence and Prejudice

We explore conclusions a person draws from observing society when he allows for the possibility that individuals' outcomes are affected by group-level discrimination. Injecting a single non-classical assumption, that the agent is overconfident about himself, we explain key observed patterns in social beliefs, and make a number of additional predictions. First, the agent believes in discrimination against any group he is in more than an outsider does, capturing widely observed self-centered views of discrimination. Second, the more group memberships the agent shares with an individual, the more positively he evaluates the individual. This explains one of the most basic facts about social judgments, in-group bias, as well as "legitimizing myths" that justify an arbitrary social hierarchy through the perceived superiority of the privileged group. Third, biases are sensitive to how the agent divides society into groups when evaluating outcomes. This provides a reason why some ethnically charged questions should not be asked, as well as a potential channel for why nation-building policies might be effective. Fourth, giving the agent more accurate information about himself increases all his biases. Fifth, the agent is prone to substitute biases, implying that the introduction of a new outsider group to focus on creates biases against the new group but lowers biases vis-à-vis other groups. Sixth, there is a tendency for the agent to agree more with those in the same groups. As a microfoundation for our model, we provide an explanation for why an overconfident agent might allow for potential discrimination in evaluating outcomes, even when he initially did not conceive of this possibility.

(joint work with Paul Heidhues and Botond Kőszegi)

Walter van Suijlekom (Radboud University Nijmegen): Noncommutative geometry, operator systems and state spaces

We extend the traditional framework of noncommutative geometry in order to deal with two types of approximation of metric spaces. On the one hand, we consider spectral truncations of geometric spaces, while on the other hand, we consider metric spaces up to a finite resolution. In our approach the traditional role played by $C^*$-algebras is taken over by so-called operator systems. Essentially, this is the minimal structure required on a space of operators to be able to speak of positive elements, states, pure states, etc. We consider $C^*$-envelopes and introduce a propagation number for operator systems, which we show to be an invariant under stable equivalence and use it to compare approximations of the same space. We illustrate our methods for concrete examples obtained by spectral truncations of the circle, and of metric spaces up to finite resolution. The first are operator systems of finite-dimensional Toeplitz matrices, the second are suitable subspaces of the compact operators. We also analyze the cones of positive elements and the pure state spaces for these operator systems, which turn out to possess a very rich structure.
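
The spectral truncation of the circle mentioned above compresses multiplication operators on $L^2(S^1)$ to finitely many Fourier modes, which is what produces the operator systems of finite-dimensional Toeplitz matrices. A minimal numerical sketch (the discretization parameters are illustrative):

```python
import numpy as np

def toeplitz_truncation(f, n, m=1024):
    """Compress the multiplication operator by f on L^2(S^1) to the span of
    e^{ik theta}, k = 0, ..., n-1. The resulting matrix T[j, k] = fhat(j - k)
    is an n x n Toeplitz matrix, as in the spectral truncations of the circle
    discussed in the abstract. Fourier coefficients are computed via an
    m-point FFT, so f should be smooth relative to m."""
    theta = 2 * np.pi * np.arange(m) / m
    # fhat(p) = (1/2pi) * integral of f(t) e^{-ipt} dt, approximated by FFT
    fhat = np.fft.fft(f(theta)) / m
    T = np.empty((n, n), dtype=complex)
    for j in range(n):
        for k in range(n):
            T[j, k] = fhat[(j - k) % m]  # negative indices wrap around
    return T

# Truncating multiplication by cos(theta) gives 1/2 on the off-diagonals.
T = toeplitz_truncation(np.cos, 4)
```
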

(based on joint work with Alain Connes)

Leonardo Tolomeo (Universität Bonn): Quasi-invariance and growth of Sobolev norms for Hamiltonian PDEs

In this talk, we consider a Hamiltonian PDE with suitable Gaussian initial data, with the goal of studying the growth in time of the solution to this equation. The study of this kind of problem was (arguably) started by Bourgain in '96, who considered the Schrödinger equation with cubic nonlinearity posed on the 2-dimensional torus (2d-NLS). In his work, Bourgain exploited the formal invariance of the Gibbs measure in order to construct solutions to 2d-NLS in negative Sobolev regularity. These solutions satisfy a logarithmic bound on the growth of their norm. A crucial observation is that Bourgain's argument relies only on quasi-invariance, i.e. the existence of a density for the law of the solution with respect to the initial Gaussian measure, together with some assumptions on the density. In 2015, N. Tzvetkov developed a strategy to prove quasi-invariance that depends only on the structure of the transport equation associated with the flow. This approach was expanded in 2018 by Planchon, Tzvetkov and Visciglia, who managed to incorporate some information about the solution that comes from the deterministic study of the PDE. This allows one to obtain quasi-invariance as a consequence of deterministic global well-posedness in a plethora of situations. In this talk, we suggest a further improvement to the previous techniques that relies on finer space-time properties of the density of the transported measure. As an application, we obtain quasi-invariance and polynomial growth of solutions for the fourth-order Schrödinger equation with initial data in negative Sobolev regularity.

This is a joint work with J. Forlano (UCLA).

Koen van den Dungen (Universität Bonn): Advances in unbounded KK-theory

KK-theory was introduced by Kasparov in the early 1980s and provides a vast generalisation of both K-theory and K-homology of C*-algebras. While KK-theory in essence provides topological invariants, there are many examples in which the geometry can be captured by an 'unbounded' representative of the class in KK-theory. In this talk, I will give an overview of the recent advances (roughly from the past decade) on the unbounded picture of KK-theory and the constructive approach to the Kasparov product. I will in particular highlight some of my own contributions on the appropriate equivalence relation in unbounded KK-theory and on the Kasparov product for non-selfadjoint operators.

Paul Wedrich (Universität Bonn): A skein relation for singular Soergel bimodules

Soergel bimodules categorify Hecke algebras and lead to invariants of braids that take values in monoidal triangulated categories. In this process, the quadratic 'skein relation' on standard generators is promoted to a distinguished triangle. I will talk about an analog of this relation in the setting of singular Soergel bimodules and the Rickard complexes of Chuang-Rouquier, in which the distinguished triangle gets replaced by a longer one-sided twisted complex.

Joint work with M. Hogancamp and D.E.V. Rose.

Benedikt Wirth (Universität Münster): On optimal transport on networks and network optimization

The branched transport problem, a popular recent variant of optimal transport, is a non-convex and non-smooth variational problem on Radon measures. The so-called urban planning problem, by contrast, is a shape optimization problem that seeks the optimal geometry of a street or pipe network. We show that the branched transport problem is equivalent to a generalized version of the urban planning problem. Apart from unifying these two different models used in the literature, another advantage of the urban planning formulation for branched transport is that it provides a more transparent interpretation of the overall cost by separation into a transport (Wasserstein-1-distance) term and a network maintenance term, and it splits the problem into the actual transportation task and a geometry optimization.
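
On the real line, the Wasserstein-1 distance between two empirical measures with the same number of unit-mass atoms reduces to matching order statistics. A minimal sketch of this transport term (an illustrative one-dimensional special case, not code from the paper):

```python
def wasserstein1(xs, ys):
    """Wasserstein-1 distance between two empirical measures on the line,
    each consisting of len(xs) atoms of equal mass. In 1-D the optimal
    coupling is monotone, so it suffices to match sorted samples."""
    assert len(xs) == len(ys), "equal numbers of atoms required"
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

# Shifting both atoms by 1 costs exactly 1 per unit of mass.
d = wasserstein1([0.0, 1.0], [1.0, 2.0])
```
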

Pavel Zorin-Kranich (Universität Bonn): Fourier decoupling

Decoupling inequalities are results in harmonic analysis motivated by the Vinogradov mean value problem in number theory. I will survey the progress that has been made on estimating multidimensional Weyl sums by this approach in the last few years.