

Curriculum Vitae

2004  Dr. rer. nat., TU Munich
2005  Postdoc, University of Wroclaw, Poland
2005 – 2008  Postdoc, University of Vienna, Austria
2008 – 2013  Professor (W2, Bonn Junior Fellow), University of Bonn
Since 2013  Professor (W3) for Mathematics, Head of Chair C for Mathematics (Analysis), RWTH Aachen


Research Focus

I work in the areas of applied and computational harmonic analysis, compressive sensing, non-asymptotic random matrix theory, recovery problems for functions in high dimensions, and convex optimization.


Research Projects

“Sparse Signals and Operators: Theory, Methods and Applications” (SPORTS), funded by WWTF
Principal Investigator, jointly with Dr. G. Tauböck, Prof. Dr. F. Hlawatsch (both TU Vienna), and Dr. C. Haisch (TU Munich), 2008 – 2012
ERC Starting Grant “Sparse and Low Rank Recovery”
2011 – 2015


Research Area G

My research within Research Area G focuses on the non-asymptotic analysis of certain random matrices arising in compressive sensing [1, 2], as well as on related deviation and moment inequalities. Recovery of sparse expansions in redundant dictionaries from random measurements was studied in [3]. I showed the first (near-)optimal nonuniform recovery result for partial random circulant matrices in [2]. As a crucial tool, I extended the noncommutative Khintchine inequalities on the moments of matrix-valued Rademacher sums in Schatten-class norms to derive noncommutative Khintchine inequalities for Rademacher chaos expansions of order two [2]. The best estimates available so far on the so-called restricted isometry constants of partial random circulant matrices were shown in [4]. The first probabilistic analysis of multichannel sparse recovery via convex relaxation was derived in [5].
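To make the measurement model concrete, the following numpy sketch applies a partial random circulant matrix: circular convolution with a random Rademacher vector, of which only a random subset of output samples is retained. The sizes, the Rademacher generator, and the FFT-based implementation are illustrative choices of mine, not the precise setup of [2, 4].

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, s = 256, 64, 6        # signal length, number of measurements, sparsity

eps = rng.choice([-1.0, 1.0], size=N)            # Rademacher generator
omega = rng.choice(N, size=m, replace=False)     # random subset of rows kept

def partial_circulant(x):
    """Apply the m x N partial random circulant matrix to x.

    Circular convolution with eps is multiplication by the circulant
    matrix generated by eps; the FFT realizes it in O(N log N) time,
    and omega selects the m retained output samples."""
    conv = np.fft.ifft(np.fft.fft(eps) * np.fft.fft(x)).real
    return conv[omega] / np.sqrt(m)

x = np.zeros(N)
x[rng.choice(N, size=s, replace=False)] = rng.standard_normal(s)
y = partial_circulant(x)    # structured random measurements of a 6-sparse x
print(y.shape)              # (64,); recovery then proceeds, e.g., by l1-minimization
```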
Research Area J

My research within Research Area J focuses on compressed sensing [1] and its potential for attacking high-dimensional problems, as well as on function spaces suitable for the analysis of high-dimensional problems.

Gelfand widths are closely related to the best possible performance of recovery algorithms and measurement matrices. Good models for compressible signals are $\ell_p$-balls with $0 < p \leq 1$, so it is important to understand the Gelfand widths of such balls. The case $p = 1$ was already settled in the 1980s by Kashin and by Gluskin–Garnaev. In [6] and [7], we were able to provide upper and lower bounds, which are new for the case $p < 1$.
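For orientation, recall the definition of the Gelfand width of order $m$ of the unit ball $B_p^N$ of $\ell_p^N$, measured in $\ell_2^N$, together with the shape of the resulting two-sided estimate for $0 < p \leq 1$; this is my summary of the standard form of the result, stated up to constants depending only on $p$, not a verbatim quotation from [6, 7]:

```latex
d^m(B_p^N, \ell_2^N) = \inf_{A \in \mathbb{R}^{m \times N}} \; \sup_{x \in B_p^N \cap \ker A} \|x\|_2,
\qquad
d^m(B_p^N, \ell_2^N) \asymp \min\left\{ 1, \frac{\ln(eN/m)}{m} \right\}^{1/p - 1/2}.
```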
Based on random sampling of sparse multivariate trigonometric polynomials via compressive sensing [2], we introduced a model of functions in high dimensions that promotes “sparsity with respect to dimensions”; that is, the function to be recovered from samples is allowed to be rough only in a small number of a priori unknown variables, and must be smooth with respect to the remaining ones. It is shown in [8] that such functions can be reconstructed via compressive sensing techniques with small error from a number of samples that scales only logarithmically in the spatial dimension $d$, in contrast to many models which suffer from the curse of dimensionality, i.e., an exponential scaling.
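A toy numpy illustration of this model (the test function, the first-order cosine ansatz, and all parameters below are my own choices for demonstration, not the construction analyzed in [8]): a function of $d = 200$ variables that depends on only two unknown coordinates, and is hence trivially smooth in all others, is identified from random samples by $\ell_1$-minimization, solved here with plain iterative soft thresholding.

```python
import numpy as np

rng = np.random.default_rng(2)
d, m = 200, 80               # ambient dimension vs. number of random samples
active = [3, 117]            # the (a priori unknown) influential coordinates

def f(X):
    # depends on two coordinates only, hence smooth in all remaining ones
    return np.cos(np.pi * X[:, active[0]]) + 0.5 * np.cos(np.pi * X[:, active[1]])

X = rng.uniform(-1.0, 1.0, size=(m, d))    # random sampling points
y = f(X)

# first-order ansatz f(x) ~ c_0 + sum_l c_l cos(pi x_l), sparse in l
A = np.concatenate([np.ones((m, 1)), np.cos(np.pi * X)], axis=1)

# iterative soft thresholding for min_c 0.5*||A c - y||_2^2 + lam*||c||_1
lam = 0.05
L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
c = np.zeros(d + 1)
for _ in range(3000):
    g = c - A.T @ (A @ c - y) / L          # gradient step
    c = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold

print("detected coordinates:", np.nonzero(np.abs(c[1:]) > 0.1)[0])  # expect [3 117]
```

Note that m = 80 samples suffice here although the ambient dimension is 200; in the regime of [8] the sample count grows only logarithmically in $d$.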
Optimization algorithms for inverse problems regularized by joint sparsity constraints were developed and analyzed in [9,10].
The analysis of sparse grid methods for high-dimensional problems often uses function spaces of dominating mixed smoothness as a model class. New characterizations of such spaces using orthonormal and biorthogonal wavelets and local means were derived in [11].
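As background, wavelet characterizations of this type typically express an equivalent norm on a Besov space of dominating mixed smoothness $S^r_{p,q}B$ through tensor-product wavelet coefficients. The display below shows the generic shape of such a characterization with $L_2$-normalized univariate wavelets $\psi_{j,k}$; this is my paraphrase of the standard form of such results, not a quotation of the precise conditions and normalizations of [11]:

```latex
\|f\|_{S^r_{p,q}B} \asymp
\Biggl( \sum_{j \in \mathbb{N}_0^d}
  2^{|j|_1 \left(r + \frac{1}{2} - \frac{1}{p}\right) q}
  \Bigl( \sum_{k} \bigl| \langle f, \psi_{j_1,k_1} \otimes \cdots \otimes \psi_{j_d,k_d} \rangle \bigr|^p \Bigr)^{q/p}
\Biggr)^{1/q}.
```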
Former Research Area L

My research within Research Area L focuses on compressive sensing [1], low-rank matrix recovery and matrix completion, and, in particular, on efficient algorithms for these tasks.

The goal of compressed sensing is to recover a sparse vector $x \in \mathbb{R}^N$ from incomplete linear information $y = Ax$, where $A$ is an $m \times N$ matrix with $m \ll N$. This leads to the problem of finding the sparsest vector consistent with a linear system of equations (the so-called $\ell_0$-minimization problem). Unfortunately, this problem is NP-hard in general. It came as a surprise that, under certain conditions on the matrix $A$, a sufficiently sparse vector can nevertheless be recovered exactly using efficient algorithms such as $\ell_1$-minimization or certain greedy algorithms.
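As a concrete instance of such a greedy algorithm, here is a minimal orthogonal matching pursuit in numpy; the Gaussian measurement matrix and all dimensions are generic textbook choices, not tied to a specific result above.

```python
import numpy as np

def omp(A, y, s):
    """Orthogonal matching pursuit: greedily build an s-element support."""
    m, N = A.shape
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(s):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least squares re-fit on the enlarged support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(N)
    x[support] = coef
    return x

rng = np.random.default_rng(0)
m, N, s = 80, 400, 8
A = rng.standard_normal((m, N)) / np.sqrt(m)   # Gaussian measurement matrix
x = np.zeros(N)
x[rng.choice(N, size=s, replace=False)] = rng.standard_normal(s)
x_hat = omp(A, A @ x, s)
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```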
Low-rank matrix recovery extends the ideas of compressed sensing to the reconstruction of low-rank matrices from a small number of linear measurements. A particular instance is matrix completion, where one seeks to fill in the missing entries of a low-rank matrix. While this problem is NP-hard as well, it could nevertheless be shown that, under certain assumptions, efficient algorithms such as nuclear norm minimization recover low-rank matrices exactly. Together with M. Fornasier and R. Ward, I developed and analyzed an efficient algorithm based on iteratively reweighted least squares minimization (preprint arXiv:1010.2471); it outperforms standard semidefinite programming techniques by far.
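The following is a deliberately simplified numpy sketch of the iteratively reweighted least squares idea for matrix completion: each iteration forms weights from the current iterate and solves a weighted least squares problem that decouples over the columns. The dimensions, the regularization schedule, and the iteration count are my own simplifications; the actual algorithm and its guarantees are in the preprint arXiv:1010.2471.

```python
import numpy as np

rng = np.random.default_rng(3)
n, r = 30, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-2 target
mask = rng.random((n, n)) < 0.5                                # observed entries

X = np.where(mask, M, 0.0)   # start from the observed entries, zeros elsewhere
eps = 1.0
for _ in range(50):
    # weights W = (X X^T + eps^2 I)^{-1/2}; then trace(X^T W X) ~ nuclear norm
    lam, U = np.linalg.eigh(X @ X.T)
    lam = np.clip(lam, 0.0, None)          # guard against roundoff negatives
    W = (U * (lam + eps**2) ** -0.5) @ U.T
    # min trace(X^T W X) subject to the observed entries decouples by column:
    # for each column, minimize x^T W x with the observed coordinates fixed
    for j in range(n):
        K = np.nonzero(mask[:, j])[0]      # known (observed) row indices
        F = np.nonzero(~mask[:, j])[0]     # free row indices
        if F.size == 0:
            continue
        X[F, j] = np.linalg.solve(W[np.ix_(F, F)], -W[np.ix_(F, K)] @ M[K, j])
        X[K, j] = M[K, j]                  # keep the data-fit constraint exact
    eps = max(eps * 0.7, 1e-6)             # simple decreasing regularization

print("relative error:", np.linalg.norm(X - M) / np.linalg.norm(M))
```

On easy random instances like this one, the iteration typically drives the relative error down rapidly, while avoiding the cost of a full semidefinite program.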


Selected Publications

[1] M. Fornasier, H. Rauhut. Compressive sensing. In: Handbook of Mathematical Methods in Imaging, Springer, 2011. DOI: 10.1007/978-0-387-92920-0_6
[3] H. Rauhut, K. Schnass, P. Vandergheynst. Compressed sensing and redundant dictionaries. IEEE Trans. Inform. Theory, 54(5):2210–2219, 2008.
[5] Y. C. Eldar, H. Rauhut. Average case analysis of multichannel sparse recovery using convex relaxation. IEEE Trans. Inform. Theory, 56(1):505–519, 2010.
[8] A. Cohen, R. DeVore, S. Foucart, H. Rauhut. Recovery of functions of many variables via compressive sensing. In: Proc. SampTA 2011, Singapore, 2011.




Editorships

• Acta Applicandae Mathematicae (since 2010)


Awards and Honors

1996  Youth Prize of the Eduard Rhein Foundation
2006  Marie Curie Fellowship
2010  Best Paper Award, Journal of Complexity
2010  ERC Starting Grant


Selected Invited Lectures

2010  Probability and Geometry in High Dimensions, Marne-la-Vallée, France
2010  New Trends in Harmonic and Complex Analysis, Bremen
2011  Workshop on Sparse Dictionary Learning, London, UK
2011  From Abstract to Computational Harmonic Analysis, Strobl, Austria
2011  International Conference on Multivariate Approximation, Hagen


PhD Students

Ulas Ayaz (2014): “Time-Frequency and Wavelet Analysis of Functions with Symmetry Properties”,
now Postdoctoral Associate, MIT Laboratory for Information and Decision Systems, MA, USA
Zeljka Stojanac (2016): “Low-Rank Tensor Recovery”
Max Hügel (since 2011)


Supervised Theses

• Diplom theses: 3 completed, currently 2 in progress
• PhD theses: currently 3 in progress

