Compressive Sensing and Sparsity: Theory and Applications in Tomography

The University of Manchester, 12th & 13th November 2015

Organisers: Oliver Dorn and Martin Lotz


To request a place at this event, please complete the online registration form.


Compressive sensing and sparsity have attracted intensive research interest over the last ten years in a wide variety of applications, including imaging and tomography. The idea is to exploit the fact that many images or signals are sparse in specific problem-dependent bases or frames. Exploiting sparsity directly is usually a hard combinatorial problem. A breakthrough came when it was shown that, under certain conditions (usually including linearity of the underlying problem), l1 minimization and other convex and non-convex optimization methods can produce sparse solutions. This opened a wide area of research ranging from theory to numerical schemes and on to applications. For nonlinear problems, by contrast, sparse reconstruction is far less well understood. This workshop aims to connect the latest results on compressive sensing and sparsity in linear tomography problems with the comparatively new area of sparse reconstruction in nonlinear inverse problems. Participants range from a more mathematical background (theory and algorithms) to engineering scientists working on practical reconstruction problems, making the workshop highly interdisciplinary (and also intra-disciplinary).
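
As a minimal illustration of the l1 idea (a toy example with illustrative sizes and parameters, not part of any talk), the following Python sketch recovers a sparse vector from underdetermined Gaussian measurements by iterative soft thresholding (ISTA) applied to min_x 0.5*||Ax - y||^2 + lam*||x||_1:

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 200, 80, 10                    # signal length, measurements, sparsity
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    y = A @ x_true

    def soft(z, t):
        """Soft thresholding: the proximal operator of t*||.||_1."""
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    lam = 0.01
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(n)
    for _ in range(1000):
        x = soft(x - (A.T @ (A @ x - y)) / L, lam / L)

    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))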


Preliminary list of invited speakers:

Giovanni Alberti, ETH Zurich, Switzerland.
Jonathan Fell, RWTH Aachen, Germany.
Bangti Jin, University College London, UK.
Jakob Jorgensen, Technical University of Denmark.
Tobias Kluth, University of Bremen, Germany.
Felix Lucka, University College London, UK.
Andrea Massa, University of Trento, Italy and L2S/CEA-LIST, Paris, France.
Alessandro Perelli, University of Edinburgh, UK.
Holger Rauhut, RWTH Aachen, Germany.
Carola-Bibiane Schoenlieb, University of Cambridge, UK.
Jared Tanner, University of Oxford, UK.


Thursday Nov 12th, 2015, Morning Session:  (Alan Turing Building, Frank Adams Room 1)

9:30-10:00 Registration and Welcome
10:00-10:45 Bangti Jin - title and abstract
10:45-11:15 Coffee/Tea Break
11:15-12:00 Holger Rauhut - title and abstract
12:00-13:00 Lunch at the Alan Turing Building (Posters)

Thursday Nov 12th, 2015, Afternoon Session:  (Alan Turing Building, Frank Adams Room 1)

13:00-13:30 Giovanni Alberti - title and abstract
13:30-14:00 Tobias Kluth - title and abstract
14:00-14:30 Coffee/Tea break (Posters)
14:30-15:00 Felix Lucka - title and abstract
15:00-15:30 Alessandro Perelli - title and abstract
15:30-16:00 Coffee/Tea break (Posters)
16:00-16:30 Jakob Jorgensen - title and abstract
16:30-17:00 Jonathan Fell - title and abstract
18:30 Workshop Dinner at Sam's Chop House

Friday Nov 13th, 2015, Morning Session:  (Alan Turing Building, Frank Adams Room 1)

9:00-9:15 Coffee and Registration
9:15-10:00 Andrea Massa - title and abstract
10:00-10:15 Coffee/Tea Break
10:15-11:00 Carola-Bibiane Schoenlieb - title and abstract
11:00-11:30 Coffee/Tea Break
11:30-12:15 Jared Tanner - title and abstract
12:15-13:00 Closing of Workshop and lunch at the Alan Turing Building

Friday Nov 13th, 2015, Afternoon:

14:00-15:00 School of Maths NASC Seminar (Jean-Luc Bouchot) - title and abstract

Titles and Abstracts

Giovanni Alberti:


Disjoint sparsity for signal separation and applications to quantitative photoacoustic tomography


This is joint work with H. Ammari. The main focus of this talk is the reconstruction of the signals $f$ and $g_{i}$, $i=1,\dots,N$, from the knowledge of their sums $h_{i}=f+g_{i}$, under the assumption that $f$ and the $g_{i}$s can be sparsely represented with respect to two different dictionaries $A_{f}$ and $A_{g}$. This generalises the well-known "morphological component analysis" to a multi-measurement setting. The main result states that $f$ and the $g_{i}$s can be uniquely and stably reconstructed by finding sparse representations of $h_{i}$ for every $i$ with respect to the concatenated dictionary $[A_{f},A_{g}]$, provided that enough incoherent measurements $g_{i}$ are available. The incoherence is measured in terms of their mutual disjoint sparsity.

This method finds applications in the reconstruction procedures of several hybrid imaging inverse problems, where internal data are measured. These measurements usually consist of the main unknown multiplied by other unknown quantities, and so the disjoint sparsity approach can be directly applied. As an example, I will show how to apply the method to the reconstruction in quantitative photoacoustic tomography, also in the case when the Grüneisen parameter, the optical absorption and the diffusion coefficient are all unknown.
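
A toy Python sketch of the single-measurement special case (classical morphological component analysis; the multi-measurement disjoint-sparsity result is the subject of the talk) separates a spike train, sparse in the canonical basis, from an oscillatory component, sparse in a DCT basis, by l1 sparse coding against the concatenated dictionary:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 128
    A_f = np.eye(n)                                    # spikes: canonical basis
    jj = np.arange(n)
    A_g = np.sqrt(2 / n) * np.cos(np.pi * np.outer(jj + 0.5, jj) / n)
    A_g[:, 0] /= np.sqrt(2)                            # orthonormal DCT-II atoms

    f = np.zeros(n); f[rng.choice(n, 5, replace=False)] = 3.0
    c_g = np.zeros(n); c_g[rng.choice(n, 5, replace=False)] = 3.0
    h = f + A_g @ c_g                                  # observed sum h = f + g

    A = np.hstack([A_f, A_g])                          # concatenated dictionary
    lam, L = 0.05, np.linalg.norm(A, 2) ** 2
    c = np.zeros(2 * n)
    for _ in range(3000):                              # ISTA on the l1 problem
        z = c - (A.T @ (A @ c - h)) / L
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

    f_hat, g_hat = c[:n], A_g @ c[n:]
    print("relative error in f:", np.linalg.norm(f_hat - f) / np.linalg.norm(f))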

Tobias Kluth:


3D electrical impedance tomography with sparsity constraints: algorithm and reconstructions from real data


Electrical impedance tomography (EIT) is the inverse problem of determining the distribution of conductivity in the interior of an object from simultaneous measurements of currents and voltages on its boundary. An algorithm for 3D reconstruction with sparsity constraints is presented and evaluated on real data. Using the complete electrode model, the nonlinear inverse problem is solved by minimizing a Tikhonov-type functional with a sparsity-promoting l1-penalty term. The algorithm is then evaluated on datasets from two different measurement devices.
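
The complete electrode model is far beyond snippet scope, but the structure of such a sparsity-constrained iteration, a linearized (Landweber-type) step followed by soft thresholding, can be sketched for a generic smooth nonlinear forward operator F; the quadratic toy map below is purely a placeholder:

    import numpy as np

    rng = np.random.default_rng(2)
    n, m = 100, 60
    B = rng.standard_normal((m, n)) / np.sqrt(m)

    def F(s):                            # toy smooth nonlinear forward operator
        return B @ s + 0.1 * (B @ s) ** 2

    def jac_T(s, r):                     # adjoint Jacobian J_F(s)^T applied to r
        return B.T @ (r * (1.0 + 0.2 * (B @ s)))

    s_true = np.zeros(n); s_true[rng.choice(n, 6, replace=False)] = 1.0
    y = F(s_true)

    alpha = 1e-3                         # weight of the sparsity penalty
    step = 0.5 / np.linalg.norm(B, 2) ** 2
    s = np.zeros(n)
    for _ in range(2000):                # proximal gradient iteration
        z = s - step * jac_T(s, F(s) - y)
        s = np.sign(z) * np.maximum(np.abs(z) - step * alpha, 0.0)

    print("relative error:", np.linalg.norm(s - s_true) / np.linalg.norm(s_true))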

Jakob Jorgensen:


Phase-transition behavior in X-ray CT matches that of Gaussian sensing matrices


Sparsity regularization in X-ray computed tomography (CT) has shown great promise for accurate reconstruction from reduced data. Compressed sensing (CS) has motivated these developments by offering guarantees of accurate reconstruction of sparse images from reduced data under suitable assumptions on the sampling matrix. However, existing CS guarantees, e.g., those based on the restricted isometry property, do not apply to the structured and sparse sampling matrices of X-ray CT. Using empirical phase diagrams, we study image recoverability from X-ray CT data as a function of image sparsity and undersampling level. We consider sparsity in the image and gradient domains and reconstruction by 1-norm and TV regularization. We demonstrate empirically that X-ray CT exhibits a pronounced relation between image sparsity and the possible undersampling level, as well as sharp phase transitions for certain image classes. We demonstrate that the observed phase-transition behavior of X-ray CT is comparable to theoretical results for Gaussian sensing matrices. However, we also highlight an important difference between X-ray CT and Gaussian sensing, namely that the structure of the nonzero locations, and not just the sparsity level, affects recoverability in X-ray CT. Finally, we present preliminary results of ongoing theoretical work on phase-transition behavior for a simple class of images consisting of sparse superpositions of radial basis functions. Our analysis exploits non-negativity of both sampling matrix and signal and utilizes new mathematical tools from CS theory in the form of expander graphs, which have recently been employed in the context of discrete tomography.
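
An empirical phase diagram of the kind described can be generated along the following lines (shown here for Gaussian matrices and equality-constrained l1 minimization cast as a linear program; substituting a discretized X-ray transform for A would give the CT analogue, and the grid and trial counts are kept deliberately small):

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(3)
    n, trials = 50, 5

    def basis_pursuit(A, y):
        """min ||x||_1 s.t. Ax = y, via the split x = u - v with u, v >= 0."""
        m, nn = A.shape
        res = linprog(np.ones(2 * nn), A_eq=np.hstack([A, -A]), b_eq=y)
        return res.x[:nn] - res.x[nn:]

    for delta in np.linspace(0.2, 0.9, 5):        # undersampling level m/n
        row = []
        for rho in np.linspace(0.1, 0.9, 5):      # sparsity level k/m
            m = int(delta * n); k = max(1, int(rho * m))
            success = 0
            for _ in range(trials):
                A = rng.standard_normal((m, n))
                x = np.zeros(n)
                x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
                x_hat = basis_pursuit(A, A @ x)
                success += np.linalg.norm(x_hat - x) < 1e-4 * np.linalg.norm(x)
            row.append(success / trials)
        print(f"delta={delta:.2f}:", " ".join(f"{p:.1f}" for p in row))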

Andrea Massa:


Compressive Sensing as a Tool for Exploiting Sparsity and Incoherence in Electromagnetic Tomography


The widely known Shannon/Nyquist theorem relates the number of samples required to reliably retrieve a signal to its (spatial and temporal) bandwidth. This fundamental criterion imposes both theoretical and experimental constraints in several electromagnetic engineering applications. Indeed, there is a relation between the number of measurements/data (the complexity of the acquisition/processing), the degrees of freedom of the field/signal (its temporal/spatial bandwidth), and the retrievable information about the phenomena at hand (e.g., the dielectric features of an unknown object, the presence/position of damage in an array, the location of an unknown incoming signal). The new paradigm of Compressive Sensing (CS) makes it possible to completely revisit these concepts by distinguishing the "informative content" of signals from their bandwidth. Indeed, CS theory asserts that certain signals/phenomena can be recovered exactly from far fewer measurements than the Nyquist sampling rate indicates. To achieve this, CS relies on the fact that many natural phenomena are sparse (i.e., they can be represented by a few non-zero coefficients in suitable expansion bases) and on aperiodic sampling strategies, which, under suitable conditions, guarantee perfect recovery of the information content of the signal. In this framework, the aim of this talk is to discuss the CS paradigm starting from its fundamentals and to illustrate its features and potential in inverse scattering and imaging methods, also envisaging possible future trends in CS.

Joint work with G. Oliveri, L. Poli and N. Anselmi.
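
A toy numerical companion to the sub-Nyquist idea (illustrative only): a signal that is sparse in a Fourier-type basis, here an orthonormal DCT, is recovered from far fewer random, aperiodic time samples than uniform Nyquist-rate acquisition would use, via orthogonal matching pursuit:

    import numpy as np

    rng = np.random.default_rng(4)
    n, m, k = 256, 40, 5
    jj = np.arange(n)
    Psi = np.sqrt(2 / n) * np.cos(np.pi * np.outer(jj + 0.5, jj) / n)
    Psi[:, 0] /= np.sqrt(2)                     # orthonormal DCT-II basis

    c = np.zeros(n); c[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    signal = Psi @ c
    rows = rng.choice(n, m, replace=False)      # aperiodic (random) sampling
    A, y = Psi[rows, :], signal[rows]

    def omp(A, y, k):
        """Orthogonal matching pursuit: greedily grow the support."""
        support, r = [], y.copy()
        for _ in range(k):
            support.append(int(np.argmax(np.abs(A.T @ r))))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            r = y - A[:, support] @ coef
        c_hat = np.zeros(A.shape[1]); c_hat[support] = coef
        return c_hat

    c_hat = omp(A, y, k)
    print("coefficient error:", np.linalg.norm(c_hat - c))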

School of Maths NASC Seminar (Jean-Luc Bouchot)


A multi-level compressed sensing Petrov-Galerkin method for the approximation of parametric PDEs


In this talk, we review the use of compressed sensing and its weighted version in the context of high-dimensional parametric and stochastic PDEs. We see that, under rather weak summability and ellipticity assumptions, the polynomial chaos expansion of the solution map has a compressibility property. We further derive a multi-level scheme to speed up the calculations, leading to a method whose computational cost is of the order of a single PDE solve at the finest level of approximation while retaining reliable guarantees. This is based on joint work with Benjamin Bykowski, Holger Rauhut, and Christoph Schwab.
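
The weighted-l1 ingredient can be sketched in isolation (a toy fragment; the actual method couples this with a Petrov-Galerkin discretization and a multi-level hierarchy): growing weights w_j penalize high-order coefficients more strongly, encoding the prior that polynomial chaos coefficients decay.

    import numpy as np

    rng = np.random.default_rng(5)
    n, m = 200, 60
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    c_true = np.zeros(n); c_true[[1, 3, 7, 12, 20]] = [2.0, -1.5, 1.0, 0.7, 0.5]
    y = A @ c_true

    w = 1.0 + 0.1 * np.arange(n)       # heavier penalty on high-order terms
    lam, L = 0.01, np.linalg.norm(A, 2) ** 2
    c = np.zeros(n)
    for _ in range(2000):              # ISTA with per-coefficient thresholds
        z = c - (A.T @ (A @ c - y)) / L
        c = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)

    print("error:", np.linalg.norm(c - c_true))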

Carola-Bibiane Schoenlieb:


What is the right sparsity in imaging?


When assigned the task of reconstructing an image from imperfect data, the first challenge one faces is the derivation of a truthful image and data model. In the context of sparse reconstructions, much of this task amounts to selecting the right basis in which the image has a sparse representation. This can be determined by a-priori knowledge about the image, the data and their relation to each other. The source of this knowledge is either our understanding of the type of images we want to reconstruct and of the physics behind the acquisition of the data, or we can strive to learn parametric models from the data itself. A common question arises: how can we optimise our model choice? Starting from the first modelling strategy, this talk will lead us from the total variation, one of the most successful sparse regularisation models for images today, to non-smooth second- and third-order regularisers, combined with data models for Gaussian and Poisson distributed data as well as impulse noise. Applications to image denoising, inpainting and surface reconstruction are given. After a critical discussion of these different image and data models, we will turn towards the second modelling strategy and propose to combine it with the first one using a bilevel optimisation method. In particular, we will consider optimal parameter derivation for total variation denoising with multiple noise distributions and the optimisation of total generalised variation regularisation for its application in photography.
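
For orientation, a minimal sketch of the first of these models, total variation denoising, in its smoothed form solved by plain gradient descent (the higher-order regularisers and bilevel schemes of the talk go far beyond this):

    import numpy as np

    rng = np.random.default_rng(6)
    u_true = np.zeros((64, 64)); u_true[16:48, 16:48] = 1.0   # piecewise constant
    f = u_true + 0.2 * rng.standard_normal(u_true.shape)      # Gaussian noise

    def tv_grad(u, eps=0.1):
        """Gradient of the smoothed isotropic TV term sum sqrt(|grad u|^2 + eps^2)."""
        ux = np.diff(u, axis=0, append=u[-1:, :])
        uy = np.diff(u, axis=1, append=u[:, -1:])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
        return -div

    lam, step = 0.15, 0.05
    u = f.copy()
    for _ in range(500):      # gradient descent on 0.5*||u-f||^2 + lam*TV_eps(u)
        u -= step * ((u - f) + lam * tv_grad(u))

    print("error before:", np.linalg.norm(f - u_true),
          "after:", np.linalg.norm(u - u_true))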



Felix Lucka:


Variational Image Reconstruction in 4D Photoacoustic Tomography


The acquisition time of current high-resolution 3D photoacoustic tomography (PAT) devices limits their ability to image dynamic processes in living tissue (4D PAT). In our work, we try to overcome this limitation by combining recent advances in spatio-temporal sub-sampling schemes, variational image reconstruction and convex optimization with the development of tailored data acquisition systems.  We first show that images with acceptable spatial resolution can be obtained from suitably sub-sampled PAT data if sparsity-constrained image reconstruction techniques such as total variation (TV) regularization enhanced by Bregman iterations are used. A further increase of the dynamic frame rate can be achieved by exploiting the temporal redundancy of the data through the use of sparsity-constrained dynamic models.  While simulated data from numerical phantoms will be used to illustrate the potential of the developed methods, we will also discuss the results of their application to different measured data sets. Furthermore, we will outline how to combine GPU computing and state-of-the-art optimization approaches to cope with the immense computational challenges imposed by 4D PAT.

Joint work with Marta Betcke, Simon Arridge, Ben Cox, Nam Huynh, Edward Zhang and Paul Beard.
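
The Bregman-enhancement idea admits a compact sketch (shown here with an l1 penalty and a random matrix standing in for the photoacoustic forward operator): each outer step solves an over-regularized problem and adds the data residual back, progressively restoring the contrast that strong regularization removes.

    import numpy as np

    rng = np.random.default_rng(7)
    n, m = 200, 80
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n); x_true[rng.choice(n, 8, replace=False)] = 1.0
    y = A @ x_true

    def ista(A, y, lam, iters=500):
        """Solve min 0.5*||Ax-y||^2 + lam*||x||_1 by soft-thresholded descent."""
        L = np.linalg.norm(A, 2) ** 2
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            z = x - (A.T @ (A @ x - y)) / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
        return x

    lam, y_k = 0.5, y.copy()             # deliberately over-regularized
    for k in range(8):                   # Bregman outer loop: add residual back
        x = ista(A, y_k, lam)
        y_k = y_k + (y - A @ x)
        print(f"outer {k}: error = {np.linalg.norm(x - x_true):.3f}")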


Bangti Jin:

Sparsity regularization for inverse problems


In this talk, I will discuss the role of sparsity regularization in inverse problems. I will begin with a brief overview of the mathematical theory and efficient algorithms for sparsity regularization. Then I will discuss a primal dual active set algorithm for a class of nonconvex sparsity promoting penalties.
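
Not the primal dual active set method itself, but as a hedged illustration of nonconvex sparsity promotion: iterative hard thresholding for the l0-penalized least-squares problem, whose proximal map simply zeroes all entries below a threshold.

    import numpy as np

    rng = np.random.default_rng(8)
    n, m = 200, 80
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8) + 2.0
    y = A @ x_true

    lam, L = 0.02, np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    for _ in range(500):                 # prox-gradient for the l0 penalty
        z = x - (A.T @ (A @ x - y)) / L
        x = np.where(np.abs(z) > np.sqrt(2 * lam / L), z, 0.0)  # hard threshold

    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))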

Alessandro Perelli:


Compressive Computed Tomography Image Reconstruction with Denoising Message Passing Algorithms


The compressive reconstruction of images from a limited number of projections is becoming increasingly appealing, both to reduce the X-ray radiation dose in medical Computed Tomography (CT) and to detect explosives and related materials using dual-energy CT scanners; reconstruction algorithms, however, still fall short of high diagnostic performance. In this talk we present a preconditioned variant of the denoising approximate message passing (DAMP) algorithm that aims to extend its applicability from the theoretical domain of i.i.d. random sensing matrices to deterministic ones via preconditioning, and from sparse signal models to generically structured ones.
DAMP is an iterative algorithm that at each iteration forms a linear estimate of the image from the measurements and then applies a non-linear denoising function, which implicitly imposes an image prior and can thus incorporate many different structured signal models. The ability to plug in different denoisers (e.g. wavelets, total variation, BM3D) gives DAMP a flexibility not exploited in other reconstruction algorithms.
The proposed DAMP approach has been tested on simulated CT baggage data, and the reconstruction results with a reduced number of views are promising compared to traditional filtered back projection and to a state-of-the-art model-based iterative reconstruction method.
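
A sketch of the AMP core that such algorithms build on, using the simplest denoiser (soft thresholding, i.e. a sparse prior) and an i.i.d. Gaussian matrix; the preconditioning and structured denoisers of the talk are not reproduced here. The Onsager correction term in the residual update is what distinguishes AMP from plain iterative thresholding.

    import numpy as np

    rng = np.random.default_rng(9)
    n, m, k = 400, 160, 20
    A = rng.standard_normal((m, n)) / np.sqrt(m)     # i.i.d. assumption matters
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    y = A @ x_true

    def denoise(v, tau):
        """Soft thresholding; returns the estimate and its mean derivative."""
        out = np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)
        return out, np.mean(out != 0.0)

    x, z = np.zeros(n), y.copy()
    for _ in range(30):
        tau = 2.0 * np.linalg.norm(z) / np.sqrt(m)   # noise-level proxy
        x, div = denoise(x + A.T @ z, tau)
        z = y - A @ x + (n / m) * div * z            # Onsager correction

    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))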

Jared Tanner:


Matrix completion and two efficient algorithms for different settings


Matrix completion and low-rank approximation from undersampled data rely on a model of simplicity in the data that is determined by the algorithm rather than imposed by the user. We present two algorithms, CGIHT and ASD, for these two related questions, and show how the complexity of each algorithm can be expressed in terms of the cost of the measurement and adjoint operators.

Joint work with Jeffrey D. Blanchard and Ke Wei.
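
Neither CGIHT nor ASD is reproduced here, but a simple baseline conveys the setting: iterative hard thresholding for matrix completion alternates a gradient step on the observed entries with projection onto rank-r matrices by a truncated SVD (sizes and the step size 1/p, with p the sampling rate, are illustrative).

    import numpy as np

    rng = np.random.default_rng(10)
    n1, n2, r, p = 60, 50, 3, 0.4
    M = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))  # rank r
    mask = rng.random((n1, n2)) < p                                  # observed set

    def project_rank(X, r):
        """Nearest rank-r matrix via truncated SVD."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return (U[:, :r] * s[:r]) @ Vt[:r, :]

    X = np.zeros((n1, n2))
    for _ in range(300):
        X = project_rank(X + (mask * (M - X)) / p, r)  # gradient step + projection

    print("relative error:", np.linalg.norm(X - M) / np.linalg.norm(M))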

Jonathan Fell:


Compressive sensing based methods for computed tomography


The talk discusses theoretical and numerical results for compressive sensing (CS) based recovery in computed tomography (CT). The first part is concerned with classical and weighted l1-minimization of wavelet coefficients, as well as with analysis l1-minimization of frame coefficients (with respect to shearlet frames). The second part presents a wide range of examples in which both algorithms have been applied to artificially constructed test data as well as to actual measurements taken from a CT scanner. We will also discuss the performance of different frames for use in analysis l1-minimization.
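
As a pocket-sized stand-in for the analysis formulation (a 1D finite-difference operator D replaces the shearlet frames of the talk), the problem min 0.5*||Ax - b||^2 + lam*||Dx||_1 can be solved by ADMM:

    import numpy as np

    rng = np.random.default_rng(11)
    n, m = 100, 50
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]          # forward differences
    x_true = np.zeros(n); x_true[30:70] = 1.0         # piecewise constant: Dx sparse
    b = A @ x_true

    lam, rho = 0.01, 1.0
    x, z, u = np.zeros(n), np.zeros(n - 1), np.zeros(n - 1)
    Q = np.linalg.inv(A.T @ A + rho * D.T @ D)        # fine for a small toy problem
    for _ in range(500):
        x = Q @ (A.T @ b + rho * D.T @ (z - u))       # quadratic x-update
        w = D @ x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # shrinkage
        u += D @ x - z                                # dual update

    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))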

Holger Rauhut:

Compressive Sensing: An Overview


I will give an overview of the field of compressive sensing. Both theoretical developments and potential applications will be presented.
