Report on former Research Area K

Efficient Algorithms for Combinatorial Optimisation in Networks.

The most basic optimisation problem in networks asks for a shortest path in an undirected graph with nonnegative weights. In chip routing, for example, we need to find approximately 10⁸ such paths in graphs with often more than 10¹¹ vertices. Our new generalised version of Dijkstra’s algorithm [K:PRV09] exploits the graph structure to solve this fundamental problem much faster than any previous algorithm.
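For orientation, the following minimal Python sketch shows the textbook form of Dijkstra's algorithm with a binary heap; the generalised variant of [K:PRV09] builds on this scheme but additionally exploits the special structure of routing graphs. The graph data below is purely illustrative.

```python
import heapq

def dijkstra(adj, source):
    """Textbook Dijkstra with a binary heap.

    adj: dict mapping vertex -> list of (neighbour, nonnegative weight).
    Returns a dict of shortest-path distances from source.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float("inf")):
            continue  # stale heap entry
        for w, length in adj.get(v, []):
            nd = d + length
            if nd < dist.get(w, float("inf")):
                dist[w] = nd
                heapq.heappush(heap, (nd, w))
    return dist

# Tiny illustrative instance (not taken from the report)
graph = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
print(dijkstra(graph, "a"))  # {'a': 0, 'b': 2, 'c': 3}
```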
If we want to connect more than two terminals, we have the Steiner tree problem in weighted graphs. Vygen found an exact algorithm that is faster than all previous ones in most cases. Karpinski and co-authors improved the best known approximation factors for the Steiner tree problem and the generalised Steiner tree problem with distances 1 and 2 [K:BKZ10]. These results advance the state of the art of this classical NP-hard problem significantly.
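To illustrate the problem itself, here is a hedged sketch of the classical Dreyfus-Wagner dynamic program for the minimum Steiner tree cost. It is not the faster exact algorithm mentioned above; its running time is exponential in the number of terminals and it is meant only for small instances.

```python
from itertools import combinations

def steiner_tree_cost(n, edges, terminals):
    """Dreyfus-Wagner dynamic program for the minimum Steiner tree cost.

    n: number of vertices 0..n-1; edges: list of (u, v, weight);
    terminals: list of terminal vertices. Exponential in len(terminals).
    """
    INF = float("inf")
    # all-pairs shortest paths (Floyd-Warshall)
    d = [[INF] * n for _ in range(n)]
    for v in range(n):
        d[v][v] = 0
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]

    t0, rest = terminals[0], list(terminals[1:])
    if not rest:
        return 0
    # S[T][v] = cost of a cheapest tree connecting terminal subset T and vertex v
    S = {frozenset([t]): [d[t][v] for v in range(n)] for t in rest}
    for size in range(2, len(rest) + 1):
        for T in map(frozenset, combinations(rest, size)):
            first = min(T)
            others = sorted(T - {first})
            # cheapest way to join two subtrees at a common vertex u
            merge = [INF] * n
            for u in range(n):
                for k in range(len(others)):
                    for extra in combinations(others, k):
                        T1 = frozenset((first,) + extra)
                        c = S[T1][u] + S[T - T1][u]
                        if c < merge[u]:
                            merge[u] = c
            S[T] = [min(d[v][u] + merge[u] for u in range(n)) for v in range(n)]
    return S[frozenset(rest)][t0]

# Path 0-1-2-3 with unit weights, terminals {0, 2, 3}: optimum cost 3.
print(steiner_tree_cost(4, [(0, 1, 1), (1, 2, 1), (2, 3, 1)], [0, 2, 3]))
```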
Often, paths or Steiner trees must be packed subject to various constraints. Many such problems can be formulated as instances of the (block-angular) min-max resource sharing problem. Vygen and co-authors proposed an algorithm [K:MRV11] for this general nonlinear problem that works even with weak block solvers as oracles and is at the same time faster and more general than any previous algorithm, even for the linear special case (which includes the well-known multi-commodity flow problem).
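The following is a minimal sketch of the generic price-driven (multiplicative-weights) framework for min-max resource sharing, in which each customer repeatedly calls its block solver with the current resource prices and the returned solutions are averaged. It only illustrates the general setting, not the specific algorithm of [K:MRV11]; the oracles and data in the toy example are hypothetical.

```python
import math

def resource_sharing(oracles, num_resources, iterations=200, eps=0.1):
    """Price-driven scheme for min-max resource sharing (sketch).

    Each customer is an oracle that, given resource prices, returns a
    nonnegative usage vector that is (approximately) cheapest within its
    own block. The convex combination of the solutions returned over all
    iterations approximately minimises the maximum resource congestion.
    """
    total = [0.0] * num_resources          # accumulated usage per resource
    chosen = [[] for _ in oracles]         # solutions picked per customer
    for _ in range(iterations):
        prices = [math.exp(eps * u) for u in total]
        for c, oracle in enumerate(oracles):
            usage = oracle(prices)         # block solver call
            chosen[c].append(usage)
            for r, g in enumerate(usage):
                total[r] += g
    # average (fractional) solution of each customer
    return [[sum(col) / iterations for col in zip(*sols)] for sols in chosen]

# Toy example (illustrative only): two customers, each choosing between two
# usage vectors; the oracle returns whichever is cheaper under the prices.
def make_oracle(options):
    return lambda prices: min(options, key=lambda g: sum(p * x for p, x in zip(prices, g)))

oracles = [make_oracle([[1.0, 0.0], [0.0, 1.0]]), make_oracle([[1.0, 0.0], [0.0, 1.0]])]
print(resource_sharing(oracles, num_resources=2))
```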
The algorithm is also very efficient in practice. We are now able to solve problems with millions of resources and customers almost optimally in a few minutes [K:MRV11]. In the special case of multi-commodity flows this corresponds to linear programs with approximately 10¹³ variables and 10¹³ constraints. Our new algorithm will have many applications in different areas, which we are currently exploring.
We also improved the best known approximation ratios for various other classical problems, including universal facility location [K:Vyg07], sink clustering [K:MV08], the betweenness problem in tournaments [L:KS10], transitive reductions in directed networks, and several optimisation problems in wireless networks. Moreover, Hougardy devised (almost) linear-time approximation algorithms for degree-constrained subgraph problems, including weighted matching, and a linear-time 1.4-approximation algorithm for square packing [K:Hou11].
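As a simple point of reference for the weighted matching results, the sketch below shows the classical greedy 1/2-approximation (its running time is dominated by sorting); the (almost) linear-time algorithms mentioned above are considerably more refined.

```python
def greedy_matching(edges):
    """Greedy 1/2-approximation for maximum-weight matching.

    Scan edges in order of decreasing weight and take an edge whenever
    both endpoints are still unmatched.

    edges: list of (weight, u, v) tuples.
    """
    matched = set()
    matching = []
    for w, u, v in sorted(edges, reverse=True):
        if u not in matched and v not in matched:
            matching.append((u, v, w))
            matched.update((u, v))
    return matching

print(greedy_matching([(5, "a", "b"), (4, "b", "c"), (3, "c", "d")]))
# [('a', 'b', 5), ('c', 'd', 3)]
```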
Another interesting research area is solving very hard problems optimally. Among the hardest fundamental combinatorial problems (both from a theoretical and practical standpoint) are graph coloring [K:HCS11], rectangle packing, and placement. In each of these cases, Held and Hougardy (with co-authors) were able to solve much larger instances than was possible before. Our new techniques include combinations of branch-and-bound with constraint programming or with numerically safe linear programming [K:HCS11].
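The toy colouring routine below illustrates only the basic branch-and-bound search scheme with simple symmetry breaking; the solvers of [K:HCS11] combine branch-and-bound with much stronger ingredients such as constraint propagation and numerically safe linear programming bounds. The instance is hypothetical.

```python
def chromatic_number(adj):
    """Exact graph colouring by branch and bound: try k = 1, 2, ... colours
    and backtrack over vertex assignments, pruning branches that conflict
    with an already coloured neighbour.

    adj: dict vertex -> set of neighbours.
    """
    vertices = sorted(adj, key=lambda v: -len(adj[v]))  # high degree first

    def colourable(k):
        colour = {}

        def extend(i):
            if i == len(vertices):
                return True
            v = vertices[i]
            used = {colour[w] for w in adj[v] if w in colour}
            for c in range(k):
                if c not in used:
                    colour[v] = c
                    if extend(i + 1):
                        return True
                    del colour[v]
                if c > max(colour.values(), default=-1):
                    break  # symmetry breaking: only one "new" colour needs trying
            return False

        return extend(0)

    k = 1
    while not colourable(k):
        k += 1
    return k

triangle = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
print(chromatic_number(triangle))  # 3
```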

Chip Design.

We continued our successful work on chip design, one of the most fruitful and important application areas of mathematics. By advancing the mathematical foundations, developing novel algorithms based on this theory, and implementing these algorithms in efficient software, we have established a direct link between theory and practice. Mathematics helps design better chips, and practical challenges motivate interesting fundamental research questions.
Our most important recent results are substantially improved algorithms for placement, routing, and timing optimisation. Some of these are the first algorithms that guarantee near-optimality. BonnPlace [K:BSV08] is based on analytical placement and our new algorithm for the Hitchcock transportation problem [K:Bre08], recently combined with a novel flow-based partitioning approach. BonnRoute contains an extremely fast global router based on our new resource sharing algorithm [K:MRV11] and a detailed router combining algorithms for on-track (long-distance) connections [K:PRV09] and off-grid pin access. In BonnOpt, our timing optimisation toolkit, our new algorithms for repeater tree topologies [K:BHM+10], buffering, gate sizing, and logic restructuring [K:WRS07] are most notable.
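As an illustration of the Hitchcock transportation problem underlying BonnPlace, the tiny instance below (all data hypothetical) is solved with the general-purpose min-cost flow code of the networkx library; the algorithm of [K:Bre08] is a specialised and much faster method for the structured instances arising in placement.

```python
import networkx as nx

# Tiny Hitchcock transportation instance (illustrative data only):
# two supply nodes, three demand nodes, a cost per unit on every pair.
supplies = {"s1": 4, "s2": 6}
demands = {"d1": 3, "d2": 3, "d3": 4}
cost = {("s1", "d1"): 2, ("s1", "d2"): 4, ("s1", "d3"): 5,
        ("s2", "d1"): 3, ("s2", "d2"): 1, ("s2", "d3"): 7}

G = nx.DiGraph()
for s, a in supplies.items():
    G.add_node(s, demand=-a)          # negative demand = supply
for d, b in demands.items():
    G.add_node(d, demand=b)
for (s, d), c in cost.items():
    G.add_edge(s, d, weight=c)

flow = nx.min_cost_flow(G)
print(flow)
print(nx.cost_of_flow(G, flow))       # total transportation cost
```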
These BonnTools [K:KRV07] are widely used in industry for the design of highly complex chips. For example, IBM relies on them for all their advanced chips. They are continuously enhanced to meet new technological challenges and to incorporate better and faster algorithms.
We recently widened our application area to microprocessor design, which is traditionally done hierarchically and with a huge amount of manual effort. Besides adapting our existing BonnTools, we developed completely new algorithms for hierarchical problems such as macro placement. With these, many instances can now be placed optimally for the first time.
Currently, we are developing and implementing the first algorithms for routing in technologies with double patterning lithography, which will be predominant at the 14nm technology node, and for fully automated transistor-level layout.

Activities.

The most notable event of Research Area K was the Bonn Workshop on Combinatorial Optimisation in November 2008, organised by Cook, Korte, Lovász, and Vygen. Most of the top researchers in the field, coming from all over the world, attended; see www.or.uni-bonn.de/workshop. The proceedings [K:CLV09] contain many surveys by prominent researchers and are a valuable foundation for current and future research. We also organised an innovative spring school on the mathematics of chip design in Hangzhou (March/April 2009), sponsored by the DFG and the National Natural Science Foundation of China (NSFC); see www.cs.zju.edu.cn/people/yedeshi/chipdesign09.

The 5th edition of our combinatorial optimisation book [K:KV08] will appear in 2012. This book has been translated into six languages. In our new master’s program we established a regular lecture course on the mathematics of chip design, probably the first such course worldwide. We are currently writing the first mathematical textbook in this area.
We also set up a grant program for outstanding students in Discrete Mathematics. The selected students mentor younger students and conduct many extracurricular activities.

Report on former Research Area L

Efficient Algorithms for Combinatorial and Geometric Optimisation.

Karpinski and Schudy [L:KS09] have resolved the computational status of the well-known Gale-Berlekamp game optimisation problem, answering a long-standing open question. As an application of the method, the first linear-time approximation schemes have been designed for a general class of fragile minimisation problems, including the nearest codeword problem, Unique Games, a constrained form of matrix rigidity, and maximum likelihood decoding of error-correcting codes [L:KS09]. Using similar methods, we have also constructed the currently best exact algorithms for feedback arc set and Kemeny rank aggregation [L:KS10].
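For concreteness, the sketch below solves small instances of the Gale-Berlekamp switching game exactly by enumerating all row-switch patterns and choosing the column switches optimally; it is exponential in the number of rows and is meant only to illustrate the problem, not the methods of [L:KS09].

```python
from itertools import product

def gale_berlekamp_min_lit(A):
    """Minimum number of lit lamps achievable in the Gale-Berlekamp game.

    A: n x m matrix with entries +1 (lit) / -1 (unlit). Enumerates the 2^n
    row-switch patterns; for fixed rows, each column switch is chosen
    independently and optimally.
    """
    n, m = len(A), len(A[0])
    best = n * m
    for rows in product((1, -1), repeat=n):
        lit = 0
        for j in range(m):
            lit_if_kept = sum(1 for i in range(n) if rows[i] * A[i][j] == 1)
            lit += min(lit_if_kept, n - lit_if_kept)   # flip column j or not
        best = min(best, lit)
    return best

A = [[1, -1, 1],
     [-1, 1, 1],
     [1, 1, -1]]
print(gale_berlekamp_min_lit(A))
```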
Karpinski and co-authors designed the first polynomial-time algorithms for the perfect matching problem [L:KRS10] and the Hamiltonian cycle problem in dense k-uniform hypergraphs.
In [L:GJK10], we have considered Metropolis Glauber dynamics for sampling proper colorings of regular trees and proved almost tight (upper and lower) bounds on the mixing time. We obtained new results on sequential Markov Chain Monte Carlo methods and the bounds on their approximation errors. New geometric path coupling methods were introduced for counting independent sets and colorings in hypergraphs.
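The sketch below implements one step of the standard Metropolis variant of Glauber dynamics for proper colourings and runs it on a small tree; it only illustrates the chain whose mixing time is analysed in [L:GJK10], and the instance is purely illustrative.

```python
import random

def metropolis_glauber_step(adj, colouring, q, rng=random):
    """One Metropolis-Glauber step for proper q-colourings: pick a uniformly
    random vertex and a uniformly random colour, and accept the recolouring
    only if it remains proper."""
    v = rng.choice(list(adj))
    c = rng.randrange(q)
    if all(colouring[w] != c for w in adj[v]):
        colouring[v] = c

# Complete binary tree of depth 2, greedily pre-coloured, then run the chain.
adj = {0: [1, 2], 1: [0, 3, 4], 2: [0, 5, 6], 3: [1], 4: [1], 5: [2], 6: [2]}
colouring = {v: 0 for v in adj}
for v in adj:                       # make the start colouring proper
    used = {colouring[w] for w in adj[v]}
    colouring[v] = next(c for c in range(3) if c not in used)
for _ in range(1000):
    metropolis_glauber_step(adj, colouring, q=3)
print(colouring)
```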

Efficient Algorithms in Pure Mathematics.

Karpinski and Saxena [L:IKRS, L:IKS09] have intensively studied the deterministic computational complexity of the well-known problem of factoring polynomials over finite fields and its intimate connection to the generalised Riemann hypothesis (GRH). Factoring occupies a central place in computational complexity and is connected to the famous question whether BPP = P, i.e. whether the randomised and deterministic polynomial-time complexity classes are identical. In [L:IKRS] we developed a general technique to eliminate the GRH from various deterministic polynomial factoring algorithms over finite fields. These results represent the first progress in this direction in more than 25 years.
Implementations of Number Field Sieve methods were used by Franke and collaborators to break records in integer factorisation, such as the factorisation of a 768-bit RSA number [L:KAF+10]. Recently, the first practical implementation of an analytic method (assuming the Riemann hypothesis) for calculating the prime counting function π(x) has been achieved, and the largest known values of π(x) have been determined.
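For illustration only, the prime counting function π(x) can be evaluated for small x by a simple sieve, as in the sketch below; the record values mentioned above are far beyond such elementary methods and require the analytic techniques referred to in the text.

```python
def prime_pi(x):
    """pi(x) via a sieve of Eratosthenes; fine for small x only."""
    if x < 2:
        return 0
    sieve = bytearray([1]) * (x + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(x ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, x + 1, p)))
    return sum(sieve)

print(prime_pi(10 ** 6))  # 78498
```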
In the Naproche project (Natural language proof checking), Koepke and collaborators from computational linguistics connect ordinary mathematical texts with fully formal mathematics, using automatic provers to supply implicit arguments. Efficient automatic checking of proof steps requires the selection of small sets of possible premises. We have studied selection algorithms based on metrics within proof graphs and on clues provided by the natural language input [L:CKKS10]. Parts of Landau’s Grundlagen der Analysis and Euclid’s Elements have been reformulated into human-readable and computer-checked formats.

Structural and Computational Complexity.

Saxena joined ongoing research by Karpinski on circuit complexity and the problems of derandomisation. In particular, Saxena obtained new, interesting results related to the algebraic version of the P = NP question.
Several new results were obtained by Saxena and co-authors on the polynomial identity testing (PIT) problem for depth-3 circuits, e.g. [L:SS11]. In [L:BKS07] Karpinski and co-authors answered an open problem of Iwama and Takaki by designing a polynomial-time algorithm for restricted instances of 3SAT which are syntactically very close to instances known to be NP-hard. Very recently, a number of improved upper and lower approximation bounds have been established for subdense instances of problems like Vertex Cover, Set Cover, and Steiner Tree.
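As background for the PIT results, the sketch below shows the standard randomised identity test (Schwartz-Zippel): a black-box polynomial is evaluated at random points modulo a large prime. This is the randomised baseline that deterministic PIT seeks to match; it is not the depth-3 method of [L:SS11], and the example polynomial is hypothetical.

```python
import random

def probably_identically_zero(poly, num_vars, degree_bound, trials=20, field_size=10 ** 9 + 7):
    """Schwartz-Zippel identity test for a black-box polynomial.

    If the polynomial is nonzero and has total degree <= degree_bound, a
    single random evaluation is zero with probability at most
    degree_bound / field_size.

    poly: callable taking a list of field elements and a modulus.
    """
    for _ in range(trials):
        point = [random.randrange(field_size) for _ in range(num_vars)]
        if poly(point, field_size) % field_size != 0:
            return False        # certainly not the zero polynomial
    return True                 # identically zero with high probability

# Example: (x+y)^2 - x^2 - 2xy - y^2 is identically zero.
f = lambda v, p: ((v[0] + v[1]) ** 2 - v[0] ** 2 - 2 * v[0] * v[1] - v[1] ** 2) % p
print(probably_identically_zero(f, num_vars=2, degree_bound=2))
```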

New Models of Computation.

The concept of computability on ordinals has led to a unifying spectrum of computabilities parametrised by bounds on ordinal time and space [L:Koe09]. Classical classes of hyperarithmetic, Δ¹₂-, and Gödel-constructible sets, among others, could be characterised by Ordinal Register Machines and Ordinal Turing Machines. Typical properties of these classes could be recast and proved in terms of transfinite computations [L:KS09]. The ‘Bonn International Workshop on Ordinal Computability’ (January 2007) brought together researchers in ordinal computability with eminent logicians from other fields, exploring new ways to ordinalise classical computation mechanisms.

Further Activities.

Jointly with Research Area G, we hosted the workshop on ‘Stochastic Processes and Algorithms’ at HIM in September 2007, organised by Chayes, Eberle and Karpinski, and a workshop partially supported by HCM and BIGS on ‘Design and Analysis of Randomised and Approximation Algorithms’ at Dagstuhl in May 2008, organised by Dyer, Jerrum, and Karpinski. Several of the world’s leading researchers contributed to the success of these workshops.