Goals of Research Area KL

Efficient Algorithms for Combinatorial and Geometric Optimisation.

Our research will continue to focus on the design and analysis of efficient algorithms for fundamental combinatorial and geometric optimisation problems. In particular, we plan to develop new methods and methodologies for problems for which no good approximation algorithms are known. One such problem is the travelling salesman problem (TSP), for which no progress on better approximation algorithms has been made since 1976. Another class of problems concerns geometric embeddings of graphs and hypergraphs. For example, the best approximation ratio known for the linear arrangement problem is O(\sqrt{\log n}\log \log n), and the situation is even worse for higher-dimensional embeddings. These algorithms use sparsest cut approximation as a subroutine, which in turn relies on a (high-dimensional) geometric embedding. This relation is not well understood, and we hope to improve the approximation ratios and to obtain generalisations to more practically relevant problems.
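For concreteness, the linear arrangement problem referred to above asks, for a graph G = (V, E) with |V| = n, for a bijection \sigma : V \to \{1, \dots, n\} minimising
\[
  \sum_{\{u,v\} \in E} \bigl|\sigma(u) - \sigma(v)\bigr| ,
\]
and the cited O(\sqrt{\log n}\log\log n) ratio bounds the cost of the computed arrangement against this optimum.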
We will also study Markov chain Monte Carlo (MCMC) based algorithms for various hypergraph counting problems and investigate geometric path coupling methods for those problems. In particular, we plan to study mixing-time bounds for selected Metropolis algorithms.
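As a minimal sketch of the type of chain whose mixing time we intend to bound (the hypergraph, the fugacity value, and all identifiers below are illustrative assumptions, not existing project code), a standard Metropolis chain for the hard-core (weighted independent set) model on a hypergraph can be written as follows:

  import random

  # Illustrative hypergraph: vertices 0..5, hyperedges given as frozensets (assumed example data).
  VERTICES = list(range(6))
  HYPEREDGES = [frozenset({0, 1, 2}), frozenset({2, 3, 4}), frozenset({4, 5, 0})]
  LAMBDA = 0.7  # fugacity: an independent set s carries weight LAMBDA ** len(s)

  def is_independent(s):
      """s is independent if it contains no hyperedge entirely."""
      return not any(e <= s for e in HYPEREDGES)

  def metropolis_step(s):
      """Pick a uniformly random vertex, propose flipping its membership in s,
      and accept with the Metropolis probability min(1, LAMBDA ** (|s'| - |s|))."""
      v = random.choice(VERTICES)
      proposal = s ^ {v}
      if not is_independent(proposal):
          return s                      # reject proposals that leave the state space
      if random.random() < min(1.0, LAMBDA ** (len(proposal) - len(s))):
          return proposal
      return s

  def sample(steps=10_000):
      s = set()                         # the empty set is always independent
      for _ in range(steps):
          s = metropolis_step(s)
      return s

  print(sorted(sample()))

Path coupling arguments would then be used to bound how many such steps are needed before the distribution of the state is close to the target hard-core distribution.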

Efficient Algorithms in Pure Mathematics.

We will continue studying the classical question of polynomial factorisation over finite fields and its connection to the generalised Riemann hypothesis (GRH) and certain combinatorial objects called m-schemes.
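As a small illustration of the computational task itself (the polynomial, the modulus, and the use of SymPy are arbitrary expository choices, not our toolchain), a univariate polynomial can be factored over a prime field as follows:

  from sympy import symbols, factor

  x = symbols('x')
  # Factor x^15 - 1 over GF(2). Randomised methods handle this efficiently;
  # no deterministic polynomial-time algorithm is known unconditionally, which is
  # what motivates the GRH- and m-scheme-based approaches mentioned above.
  print(factor(x**15 - 1, modulus=2))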
We shall extend our results on calculating the prime counting function \pi(x), combining our analytic method with FFT-based methods to investigate certain problems concerning the behaviour of \pi(x) in large intervals, for which the only available approach so far has been to sieve the entire interval with the sieve of Eratosthenes. As a byproduct of these calculations, classical conjectures about the zeros of the Riemann zeta function can be checked up to a certain bound on the imaginary part. We will make our code, which will also be applicable to other L-functions, publicly available. We also plan applications of related analytic techniques in connection with recent developments in primality proving for general numbers, a field in which our group has twice participated in obtaining a record.
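The elementary baseline mentioned above, sieving an interval and counting the surviving integers, can be sketched as follows (a routine segmented sieve of Eratosthenes; the interval bounds and all names are illustrative):

  from math import isqrt

  def primes_up_to(n):
      """Plain sieve of Eratosthenes returning all primes <= n."""
      sieve = bytearray([1]) * (n + 1)
      sieve[0:2] = b"\x00\x00"
      for p in range(2, isqrt(n) + 1):
          if sieve[p]:
              sieve[p * p::p] = bytearray(len(range(p * p, n + 1, p)))
      return [p for p in range(2, n + 1) if sieve[p]]

  def count_primes_in_interval(a, b):
      """Count the primes in [a, b] by sieving only that segment with the primes <= sqrt(b)."""
      is_prime = bytearray([1]) * (b - a + 1)
      for p in primes_up_to(isqrt(b)):
          start = max(p * p, ((a + p - 1) // p) * p)   # first multiple of p in [a, b] to cross off
          for m in range(start, b + 1, p):
              is_prime[m - a] = 0
      return sum(is_prime[n - a] for n in range(max(a, 2), b + 1))

  # Primes in (10^6, 2 * 10^6], i.e. pi(2 * 10^6) - pi(10^6).
  print(count_primes_in_interval(10**6 + 1, 2 * 10**6))

The analytic method is intended to avoid exactly this linear sweep over the interval.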
The Naproche project has established contacts with leading groups in formal mathematics. Recently, proofs of important theorems like the prime number theorem, the Jordan curve theorem, or the four-color theorem have been formalised and verified. To make the formal proofs generally accessible, we shall combine Naproche techniques with Coq, HOL, Mizar, and other formal mathematics systems.
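To indicate the flavour of a machine-checked proof in such systems (a toy statement, written here in Lean purely for illustration; Naproche itself works with controlled natural language input):

  -- A trivial machine-checked statement: commutativity of addition on the natural numbers.
  theorem add_comm_example (a b : Nat) : a + b = b + a :=
    Nat.add_comm a b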

Structural and Computational Complexity.

We will continue to study the complexity of polynomial identity testing (PIT) for depth-3 and depth-4 circuits, and the corresponding problem for symbolic determinants.
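The randomised baseline that such derandomisation efforts compete against is the Schwartz-Zippel test: substitute random values for the variables and evaluate numerically. The sketch below applies it to a symbolic determinant; the matrix, the sampling range, and all names are illustrative assumptions.

  import random
  from fractions import Fraction

  def det(matrix):
      """Exact determinant over the rationals via Gaussian elimination."""
      m = [row[:] for row in matrix]
      n, sign, result = len(m), 1, Fraction(1)
      for i in range(n):
          pivot = next((r for r in range(i, n) if m[r][i] != 0), None)
          if pivot is None:
              return Fraction(0)
          if pivot != i:
              m[i], m[pivot] = m[pivot], m[i]
              sign = -sign
          result *= m[i][i]
          for r in range(i + 1, n):
              f = m[r][i] / m[i][i]
              for c in range(i, n):
                  m[r][c] -= f * m[i][c]
      return sign * result

  def symbolic_det_is_probably_zero(entries, num_vars, trials=20, bound=10**6):
      """Schwartz-Zippel: if the symbolic determinant is a nonzero polynomial of degree d,
      a uniform random evaluation from {0, ..., bound} is zero with probability <= d / (bound + 1)."""
      for _ in range(trials):
          point = [Fraction(random.randint(0, bound)) for _ in range(num_vars)]
          numeric = [[Fraction(entry(point)) for entry in row] for row in entries]
          if det(numeric) != 0:
              return False            # witness: the determinant is not identically zero
      return True                     # no witness found in any trial

  # Illustrative 2x2 matrix of linear forms in x0, x1 with identically zero determinant.
  entries = [[lambda x: x[0], lambda x: x[1]],
             [lambda x: 2 * x[0], lambda x: 2 * x[1]]]
  print(symbolic_det_is_probably_zero(entries, num_vars=2))   # expected: True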
We also aim at developing new methods for proving better approximation lower bounds for classes of sparse CSP problems. Lower bounds for such problems are used to obtain approximation lower bounds for other well-known problems, such as the TSP or the Steiner tree problem. We have had some success in proving such bounds, but they are still far from satisfactory. Here, we plan to develop more direct PCP techniques for proving better bounds for those problems.
In order to better understand the discrepancy between the excellent practical behaviour and the poor worst-case guarantees of certain algorithms, in particular local search heuristics, we will study their running time and the quality of their results in the framework of smoothed analysis.
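A minimal computational experiment in this spirit (purely illustrative: the instance size, the Gaussian perturbation model, and the choice of the 2-opt heuristic are assumptions for the sketch) perturbs a structured TSP instance and records how many improving local search steps are performed:

  import math
  import random

  def tour_length(points, tour):
      return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
                 for i in range(len(tour)))

  def two_opt_steps(points):
      """Run 2-opt to a local optimum and return the number of improving exchanges."""
      n, tour, steps = len(points), list(range(len(points))), 0
      improved = True
      while improved:
          improved = False
          for i in range(n - 1):
              for j in range(i + 2, n):
                  candidate = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                  if tour_length(points, candidate) < tour_length(points, tour) - 1e-12:
                      tour, steps, improved = candidate, steps + 1, True
      return steps

  def smoothed_trial(n=36, sigma=0.05):
      """Structured base instance (a grid) plus Gaussian noise of standard deviation sigma;
      smoothed analysis studies the expected number of steps as a function of n and sigma."""
      side = math.isqrt(n)
      base = [(i % side, i // side) for i in range(n)]
      points = [(x + random.gauss(0, sigma), y + random.gauss(0, sigma)) for x, y in base]
      return two_opt_steps(points)

  print([smoothed_trial() for _ in range(3)])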
In set theory, the generalised continuum hypothesis, first formulated by Hausdorff, has been pivotal. After the independence results on the continuum function at regular cardinals, the question for singular cardinals, known as the singular cardinal hypothesis, turned out to be much more involved and technically fruitful. Following our initial results on negations of this hypothesis in models without the axiom of choice [L:GKb], we propose a wider project to explore the possible behaviour of cardinal exponentiation for all cardinals in the choiceless setting. We conjecture that exponentiation can be arranged very freely, up to some trivial monotonicity laws.
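For reference, in the presence of the axiom of choice these hypotheses take the following standard form; the project asks how far such constraints can fail once choice is dropped:
\[
  2^{\aleph_\alpha} = \aleph_{\alpha+1} \quad \text{(GCH, for all ordinals } \alpha\text{)}, \qquad
  2^{\operatorname{cf}(\kappa)} < \kappa \;\Longrightarrow\; \kappa^{\operatorname{cf}(\kappa)} = \kappa^{+} \quad \text{(SCH, for singular } \kappa\text{)}.
\]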

New Models of Computation.

We shall continue to explore the paradigm of infinite computations within set theory. We expect to be able to characterise further classes from descriptive set theory and constructible set theory by computations. There are also other processes indexed by natural numbers which may be generalised to ordinal indexing, such as abstract dynamical systems or Blum-Shub-Smale machines. These characterisations should allow some aspects of computability, such as complexity questions, to be transferred to new domains. In constructibility theory, generalised computability may allow the definition of fine structural hierarchies adequate for inner models with large cardinals.

Chip Design.

We plan to extend our expertise in chip design to transistor-level layout. This task is still largely carried out manually by human experts, because efficient algorithms producing competitive or superior results are lacking. We already have first successes in leaf cell design for 22 nm microprocessors, where we place the transistors in a provably optimal way. This requires exact algorithms that combine branch-and-bound, constraint programming, and polyhedral techniques.
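To convey the flavour of the exact methods involved (a toy model only: a one-dimensional ordering problem with pairwise net weights and total wire length as the objective; the data and all identifiers are invented for illustration, and real leaf cell design carries far more constraints), a bare branch-and-bound could look as follows:

  # Toy instance: pairwise net weights between five transistors (invented example data).
  WEIGHTS = {(0, 1): 3, (0, 2): 1, (1, 3): 2, (2, 4): 4, (3, 4): 1}

  def cost(order):
      """Total weighted wire length when the transistors occupy positions 0, 1, 2, ... in this order."""
      pos = {t: i for i, t in enumerate(order)}
      return sum(w * abs(pos[a] - pos[b]) for (a, b), w in WEIGHTS.items())

  def lower_bound(prefix):
      """Valid bound: exact cost of nets with both endpoints placed, plus weight * 1 for every
      net with an unplaced endpoint (any completion gives such a net length at least 1)."""
      pos = {t: i for i, t in enumerate(prefix)}
      placed = sum(w * abs(pos[a] - pos[b]) for (a, b), w in WEIGHTS.items()
                   if a in pos and b in pos)
      pending = sum(w for (a, b), w in WEIGHTS.items() if a not in pos or b not in pos)
      return placed + pending

  def branch_and_bound(items):
      best_cost, best_order = float("inf"), None

      def recurse(prefix, remaining):
          nonlocal best_cost, best_order
          if not remaining:
              c = cost(prefix)
              if c < best_cost:
                  best_cost, best_order = c, prefix
              return
          if lower_bound(prefix) >= best_cost:
              return                     # prune: this subtree cannot beat the incumbent
          for t in sorted(remaining):
              recurse(prefix + [t], remaining - {t})

      recurse([], set(items))
      return best_cost, best_order

  print(branch_and_bound(range(5)))

In the actual leaf cell application, the branching, bounding, and constraint handling are of course far more sophisticated than this crude enumeration.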
By now, routing is quite well understood from a theoretical point of view. Placement is not (see the above comments on hypergraph embedding), nor is timing optimisation. An area that is wide open is logic synthesis with provably good performance. So far we can only optimise a single fan-in tree subject to timing constraints [K:WRS07], but we know almost nothing about general logic optimisation.
A major part of our research will focus on the interplay of placement, routing, and timing optimisation, which are today still largely treated as independent tasks. However, the enormous number of buffers and inverters on today’s chips requires that we treat these problems in a more unified way: their demand, mainly for timing reasons, depends on placement and routing, but they in turn strongly impact routability and the required placement area. Our theoretical research on multiflows and hypergraph embedding should lead to a better understanding of how to combine placement and routability. Our new resource sharing algorithm [K:MRV11] allows us to treat routing and buffering subject to timing constraints as a single optimisation problem and to approximate it very well. This very recent theoretical result needs to be verified in practice. Combining all three – placement, timing, and routing – remains a grand challenge.