Prof. Dr. Holger Rauhut

Former Bonn Junior Fellow
Current position: Professor (W3), RWTH Aachen

E-mail: rauhut(at)
Institute: Institute for Numerical Simulation

Academic Career


Dr. rer. nat., TU Munich

Postdoc, University of Wroclaw, Poland

2005 - 2008: Postdoc, University of Vienna, Austria

2008 - 2013: Professor (W2, Bonn Junior Fellow), University of Bonn

Since 2013: Professor (W3) for Mathematics, Head of Chair C for Mathematics (Analysis), RWTH Aachen

Research Profile

I work in the areas of applied and computational harmonic analysis, compressive sensing, nonasymptotic random matrix theory, recovery problems for functions in high dimensions, and convex optimization.

Research Projects and Activities

“Sparse Signals and Operators: Theory, Methods and Applications” (SPORTS), funded by WWTF
Principal Investigator, jointly with Dr. G. Tauböck, Prof. Dr. F. Hlawatsch (both TU Vienna), and Dr. C. Haisch (TU Munich), 2008 – 2012

ERC Starting Grant “Sparse and Low Rank Recovery”
2011 - 2015

Contribution to Research Areas

Research Area G
My research within Research Area G focuses on the non-asymptotic analysis of certain random matrices arising in compressive sensing [1,2], as well as on related deviation and moment inequalities.
Recovery of sparse expansions in redundant dictionaries from random measurements was studied in [3]. I showed the first (near-)optimal nonuniform recovery result for partial random circulant matrices in [2]. As a crucial tool, I extended noncommutative Khintchine inequalities on the moments of matrix-valued Rademacher sums in Schatten-class norms to noncommutative Khintchine inequalities for Rademacher chaos expansions of order two [2]. The best estimates available so far on the so-called restricted isometry constants of partial random circulant matrices were shown in [4]. The first probabilistic analysis of multichannel sparse recovery via convex relaxation was derived in [5].
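To illustrate the objects studied here, the following sketch builds a partial random circulant matrix generated by a Rademacher vector and checks empirically that, after normalization, it acts as an approximate isometry on sparse vectors. All parameters (dimension, number of measurements, sparsity) are illustrative choices, not values from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, s = 256, 64, 8          # ambient dimension, measurements, sparsity

# Circulant matrix generated by a random Rademacher (+-1) vector
c = rng.choice([-1.0, 1.0], size=N)
C = np.stack([np.roll(c, k) for k in range(N)])

# Partial random circulant matrix: m rows chosen at random, normalized
rows = rng.choice(N, size=m, replace=False)
A = C[rows] / np.sqrt(m)

# Empirically check near-isometry on random s-sparse vectors
ratios = []
for _ in range(200):
    x = np.zeros(N)
    support = rng.choice(N, size=s, replace=False)
    x[support] = rng.standard_normal(s)
    ratios.append(np.linalg.norm(A @ x) / np.linalg.norm(x))
print(min(ratios), max(ratios))   # both should be close to 1
```

In practice one never forms the dense matrix: products with circulant matrices are applied via the FFT, which is part of what makes such structured measurement matrices computationally attractive.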
Research Area J
My research within Research Area J focuses on compressed sensing [1] and its potential for attacking high-dimensional problems, as well as on function spaces suitable for the analysis of high dimensional problems.
Gelfand widths are closely related to the best possible performance of recovery algorithms and measurement matrices. Good models for compressible signals are ℓ_p-balls with 0 < p ≤ 1, so it is important to understand the Gelfand widths of such ℓ_p-balls.
The case p = 1 was already settled in the 1980s by Kashin and by Gluskin and Garnaev. In [6,7], we were able to provide lower bounds, which are new for the case p < 1.
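The link between ℓ_p-balls and compressibility can be made concrete via the standard Stechkin-type bound σ_s(x)_2 ≤ s^(1/2 − 1/p) ‖x‖_p for 0 < p ≤ 1: membership in a small ℓ_p-ball forces fast decay of the best s-term approximation error. A minimal numerical check, with dimension and p chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N, p = 1000, 0.5

# A vector on the unit ell_p sphere (p = 1/2): normalize so ||x||_p = 1
x = rng.standard_normal(N)
x /= np.sum(np.abs(x) ** p) ** (1.0 / p)

# Best s-term approximation error in ell_2: keep only the s largest entries
mags = np.sort(np.abs(x))[::-1]
for s in (10, 50, 100):
    sigma_s = np.sqrt(np.sum(mags[s:] ** 2))
    bound = s ** (0.5 - 1.0 / p)   # Stechkin: sigma_s <= s^(1/2 - 1/p) ||x||_p
    assert sigma_s <= bound
    print(s, sigma_s, bound)
```

The bound holds deterministically for every vector in the ℓ_p-ball; the random vector here only serves as a concrete instance.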
Based on random sampling of sparse multivariate trigonometric polynomials via compressive sensing [2], we introduced a model of functions in high dimensions that promotes “sparsity with respect to dimensions”; that is, the function to be recovered from samples is allowed to be rough only in a small number of a priori unknown variables and smooth with respect to most variables. It is shown in [8] that such functions can be reconstructed via compressive sensing techniques with small error from a number of samples that scales only logarithmically in the spatial dimension, in contrast to many models that suffer the curse of dimensionality, i.e., an exponential scaling.
Optimization algorithms for inverse problems regularized by joint sparsity constraints were developed and analyzed in [9,10].
The analysis of sparse grid methods for high dimensional problems often uses function spaces of dominating mixed smoothness as a model class. New characterizations using orthonormal and biorthogonal wavelets and local means were derived in [11].
Former Research Area L
My research within Research Area L focuses on compressive sensing [1], low-rank matrix recovery and matrix completion, and, in particular, on efficient algorithms for these tasks.
The goal of compressed sensing is to recover a sparse vector x from incomplete linear information y = Ax, where A is an m × N matrix with m ≪ N. This leads to the problem of finding the sparsest vector consistent with a linear system of equations (the so-called ℓ_0-minimization problem). Unfortunately, this problem is NP-hard in general. It came as a surprise that, under certain conditions on the matrix A, a sufficiently sparse vector x can nevertheless be recovered exactly using efficient algorithms such as ℓ_1-minimization or certain greedy algorithms.
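The ℓ_1-minimization approach (basis pursuit) can be recast as a linear program by splitting x into its positive and negative parts. The sketch below, with illustrative dimensions and a Gaussian measurement matrix, solves it with SciPy's generic LP solver; this is only a didactic setup, not the specialized solvers used in practice.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
N, m, s = 80, 40, 5

# Gaussian measurement matrix and an s-sparse ground truth
A = rng.standard_normal((m, N)) / np.sqrt(m)
x_true = np.zeros(N)
support = rng.choice(N, size=s, replace=False)
x_true[support] = rng.standard_normal(s)
y = A @ x_true

# Basis pursuit as a linear program: write x = u - v with u, v >= 0,
# then min ||x||_1 s.t. Ax = y becomes min sum(u)+sum(v) s.t. A(u-v) = y
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:N] - res.x[N:]
print(np.max(np.abs(x_hat - x_true)))   # should be tiny: exact recovery
```

With m well above the sparsity threshold for Gaussian matrices, the LP returns the true sparse vector up to solver tolerance.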
Low-rank matrix recovery extends the ideas of compressed sensing to the reconstruction of low-rank matrices from a small number of linear measurements. A particular instance is matrix completion, where one seeks to fill in missing entries of a low-rank matrix. While this problem is NP-hard as well, it could nevertheless be shown that under certain assumptions efficient algorithms such as nuclear norm minimization recover low-rank matrices exactly. Together with M. Fornasier and R. Ward (preprint arXiv:1010.2471), I developed and analyzed an efficient algorithm based on iteratively reweighted least squares minimization, which substantially outperforms standard semidefinite programming techniques.
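The iteratively reweighted least squares algorithm itself is developed in the cited preprint; as a much simpler illustration of the matrix completion task (not the method of the paper), the sketch below alternates between enforcing the observed entries and truncating to rank r. Dimensions, rank, and sampling rate are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n, r = 40, 2                     # matrix size and rank (illustrative)

# Rank-r ground truth and a random set of observed entries
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
mask = rng.random((n, n)) < 0.5  # observe roughly 50% of the entries

# Simple baseline: alternate data consistency with rank-r truncation
X = np.zeros((n, n))
for _ in range(500):
    X[mask] = M[mask]                        # enforce observed entries
    U, sv, Vt = np.linalg.svd(X, full_matrices=False)
    X = (U[:, :r] * sv[:r]) @ Vt[:r]         # project back to rank r
print(np.linalg.norm(X - M) / np.linalg.norm(M))   # relative error
```

At this oversampling rate (about 800 observations for roughly 160 degrees of freedom) the alternating projections typically converge; the point is only to make the problem setup concrete.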

Selected Publications

[1] M. Fornasier, H. Rauhut
Compressive Sensing
Handbook of Mathematical Methods in Imaging
Publisher: Springer
DOI: 10.1007/978-0-387-92920-0_6
[3] H. Rauhut, K. Schnass, P. Vandergheynst
Compressed sensing and redundant dictionaries
IEEE Trans. Inform. Theory, 54(5): 2210–2219
DOI: 10.1109/TIT.2008.920190
[5] Y. C. Eldar, H. Rauhut
Average case analysis of multichannel sparse recovery using convex relaxation
IEEE Trans. Inform. Theory, 56(1): 505–519
DOI: 10.1109/TIT.2009.2034789
[8] A. Cohen, R. DeVore, S. Foucart, H. Rauhut
Recovery of functions of many variables via compressive sensing
Proc. SampTA 2011, Singapore

Publication List

MathSciNet Publication List


Editorships

• Acta Applicandae Mathematicae (since 2010)



Awards

Youth Prize of the Eduard Rhein Foundation


Marie Curie Fellowship


Best Paper Award, Journal of Complexity


ERC Starting Grant

Selected Invited Lectures


Probability and Geometry in High Dimensions, Marne-la-Vallée, France


New Trends in Harmonic and Complex Analysis, Bremen


Workshop on Sparse Dictionary Learning, London, England, UK


From Abstract to Computational Harmonic Analysis, Strobl, Austria


International Conference on Multivariate Approximation, Hagen

Selected PhD students

Ulas Ayaz (2014): “Time-Frequency and Wavelet Analysis of Functions with Symmetry Properties”,
now Postdoctoral Associate, MIT Laboratory for Information and Decision Systems, MA, USA

Zeljka Stojanac (2016): “Low-rank Tensor Recovery”

Max Hügel (since 2011)

Supervised Theses

  • Diplom theses: 3 completed, 2 in progress
  • PhD theses: 3 in progress