While problems in science and technology depend increasingly on the acquisition of vast amounts of data, the data often possesses a simple underlying structure (sparse signals/images, low-rank matrices/tensors, low-dimensional models in statistics and machine learning): the actual information content is orders of magnitude smaller than the data dimension. Compressive sensing rests on the idea that in many such cases the data can be acquired with a complexity proportional to the information content rather than the data dimension, using ideas from optimization. A popular approach is convex regularization, a notable special case being the problem of identifying a sparse signal from an underdetermined system of linear equations by minimizing the l_1 norm subject to the linear constraints.
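As a minimal sketch of the special case mentioned above (not taken from the talk; all parameters are illustrative), the l_1-minimization problem min ||x||_1 subject to Ax = b can be cast as a linear program by splitting x into its positive and negative parts, and solved with an off-the-shelf LP solver:

```python
# Sketch: recover a sparse signal from an underdetermined linear system
# by l_1 minimization, min ||x||_1 s.t. A x = b, written as an LP over
# the split x = x_plus - x_minus with x_plus, x_minus >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 60, 30, 3                      # ambient dim, measurements, sparsity

# Ground-truth k-sparse signal and a Gaussian measurement matrix.
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
b = A @ x0                               # underdetermined: m < n

# LP: minimize sum(x_plus + x_minus) subject to A (x_plus - x_minus) = b.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]

print("l1 norm of solution:", np.abs(x_hat).sum())
print("max recovery error: ", np.max(np.abs(x_hat - x0)))
```

With the number of measurements m well above the problem's threshold, the LP solution typically coincides with the true sparse signal x0.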
In this talk I give a short overview of the main ideas in the field, and present a rigorous mathematical explanation of when and why convex optimization works in solving underdetermined systems of equations with structural constraints. In particular, I will explain the phase transition phenomenon observed in convex optimization: the probability of successfully recovering a signal jumps abruptly from zero to one as the number of constraints increases past a certain threshold, the statistical dimension of the problem.
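The phase transition can be observed empirically: for a fixed sparsity level, the success rate of l_1 recovery is near zero when the number of measurements m is below the threshold and near one above it. The following sketch (my own illustration, not material from the talk; the sample sizes and tolerances are arbitrary) estimates the success rate at two values of m straddling the transition:

```python
# Sketch: empirical phase transition for l_1 recovery -- success
# probability as a function of the number of measurements m.
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, b):
    """min ||x||_1 s.t. A x = b, via the standard LP reformulation."""
    m, n = A.shape
    res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=(0, None), method="highs")
    return res.x[:n] - res.x[n:]

def success_rate(n, k, m, trials, rng):
    """Fraction of random trials in which the k-sparse signal is recovered."""
    hits = 0
    for _ in range(trials):
        x0 = np.zeros(n)
        x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
        A = rng.standard_normal((m, n))
        x_hat = l1_recover(A, A @ x0)
        hits += np.max(np.abs(x_hat - x0)) < 1e-4
    return hits / trials

rng = np.random.default_rng(1)
n, k = 40, 2
rate_low = success_rate(n, k, m=4, trials=20, rng=rng)    # below the threshold
rate_high = success_rate(n, k, m=35, trials=20, rng=rng)  # above the threshold
print("success rate at m=4: ", rate_low)
print("success rate at m=35:", rate_high)
```

Sweeping m over its full range and plotting the success rate would trace out the sharp zero-to-one transition that the talk explains via the statistical dimension.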