LMS Workshop: Variational Methods Meet Machine Learning

Talks

Speaker: Thomas Pock

Title: Learning better models for inverse problems in imaging.

Abstract: In this talk, I will present our recent activities in learning better models for inverse problems in imaging. We consider classical variational models used for inverse problems, but generalize them by introducing a large number of free model parameters. We learn the free model parameters by minimizing a loss function that compares the reconstructed images obtained from the variational models with ground-truth solutions from a training database. I will also show very recent results on learning "deeper" regularizers that are already able to capture semantic information of images. We show applications to different inverse problems in imaging, with a particular focus on image reconstruction from undersampled MRI data.
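
For illustration only, here is a minimal sketch of the parameter-learning idea, not the speaker's actual models: a quadratic denoising model with a single free regularisation parameter lam, learned by minimising the mean squared error of its closed-form reconstructions against ground-truth training signals. The filter matrix D, the synthetic data, and the grid search are all placeholder choices.

```python
# A minimal sketch, not the talk's models: learn the single free parameter
# lam of the quadratic denoising model
#     x(lam) = argmin_x 0.5*||x - y||^2 + lam*||D x||^2
# by minimising the mean squared error against ground-truth training signals.
import numpy as np

def finite_difference_matrix(n):
    """1D forward-difference operator (a placeholder choice of filter)."""
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    return D

def denoise(y, lam, D):
    """Closed-form minimiser: solve (I + 2*lam*D^T D) x = y."""
    return np.linalg.solve(np.eye(y.size) + 2.0 * lam * D.T @ D, y)

rng = np.random.default_rng(0)
n, n_train = 64, 20
D = finite_difference_matrix(n)

# Synthetic training pairs: smooth signals plus Gaussian noise.
clean = np.cumsum(0.1 * rng.standard_normal((n_train, n)), axis=1)
noisy = clean + 0.3 * rng.standard_normal((n_train, n))

def training_loss(lam):
    recon = np.stack([denoise(y, lam, D) for y in noisy])
    return np.mean((recon - clean) ** 2)

# Grid search stands in for gradient-based (bilevel) parameter learning.
lams = np.logspace(-3, 2, 50)
best = min(lams, key=training_loss)
print(f"learned lam = {best:.3f}, training loss = {training_loss(best):.5f}")
```

In the setting of the talk, the models carry many parameters (e.g. filters and potential functions) and nonsmooth terms, so the learning typically relies on gradient-based bilevel or unrolled optimisation rather than a grid search.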

Speaker: Andreas Hauptmann

Title: Model-based learning for accelerated, limited-view 3D photoacoustic tomography.

Abstract: In photoacoustic tomography we aim to obtain high-resolution 3D images of optical absorption by sensing laser-generated ultrasound (US). In many practical applications, the spatial sampling of the US signal is not optimal for obtaining high-quality reconstructions with fast, filtered-back-projection-like image reconstruction methods: limited-view artefacts arise from geometric restrictions and from spatial undersampling, which is performed to accelerate the data acquisition. Iterative image reconstruction methods that employ an explicit model of the US propagation in combination with spatial sparsity constraints can provide significantly better results in these situations. However, a crucial drawback of these methods is their considerably higher computational complexity and the difficulty of handcrafting sparsity constraints that capture the spatial structure of the target. Recent advances in deep learning for tomographic reconstruction have shown great potential to create such realistic high-quality images with considerable speed-up. In this work we present a deep neural network that is specifically designed to provide high-resolution 3D images from restricted photoacoustic measurements. The network is designed to represent an iterative scheme and incorporates gradient information of the data fit to compensate for limited-view artefacts. Due to the high complexity of the photoacoustic forward operator, we separate training and computation of the gradient information. A suitable prior for the desired image structures is learned as part of the training. The resulting network is trained and tested on a set of segmented vessels from lung CT scans and then applied to real measurement data.
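
A minimal sketch, under assumptions rather than the authors' actual architecture, of what such a network can look like: an unrolled scheme x_{k+1} = x_k + G_k(x_k, g_k), where each G_k is a small CNN and g_k is a gradient of the data fit, supplied to the network rather than computed inside it, echoing the separation of training and gradient computation described above.

```python
# A minimal sketch under assumptions (not the authors' architecture): an
# unrolled scheme x_{k+1} = x_k + G_k(x_k, g_k), where each G_k is a small
# CNN and g_k is a precomputed gradient of the data fit, mirroring the
# separation of training and gradient computation described in the abstract.
import torch
import torch.nn as nn

class IterativeBlock(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x, grad):
        # Current estimate and data-fit gradient enter as two channels.
        return x + self.net(torch.cat([x, grad], dim=1))

class LearnedIterative(nn.Module):
    def __init__(self, n_iter=5):
        super().__init__()
        self.blocks = nn.ModuleList(IterativeBlock() for _ in range(n_iter))

    def forward(self, x0, grads):
        x = x0
        for block, g in zip(self.blocks, grads):
            x = block(x, g)
        return x

# Toy usage; in photoacoustics g_k = A^T(A x_k - y) with A the acoustic
# forward operator, here replaced by random stand-in tensors.
net = LearnedIterative(n_iter=3)
x0 = torch.zeros(1, 1, 32, 32)
grads = [torch.randn(1, 1, 32, 32) for _ in range(3)]
print(net(x0, grads).shape)
```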

Speaker: Jonas Adler

Title: Learned iterative reconstruction schemes, theory and practice.

Abstract: We discuss recent developments in learned iterative reconstruction methods such as Learned Gradient and Learned Primal-Dual schemes. These schemes represent a middle way between classical variational regularization and fully learned reconstruction schemes, and thus allow high-quality reconstructions with reasonable training times and data requirements.

One of the most pressing issues when evaluating learned reconstruction schemes is to derive computationally feasible upper bounds on the reconstruction quality given some training data. We derive such an upper bound using the theory of Bayesian inverse problems and realize it computationally using MCMC. We then compare the proposed methods to this upper bound and discuss possible future improvements.

Related preprints:

"Solving ill-posed inverse problems using iterative deep neural networks." arXiv

"Learning Primal-Dual Reconstruction." arXiv, website

Speaker: Christian Etmann

Title: Regularization of neural networks with input saliencies.

Abstract: Deep neural networks are able to reach high classification accuracies in a variety of challenging tasks. Despite this, they tend to overfit and are notorious for being black boxes. One simple approach to interpreting these models is to analyse their input-output relationship via gradients ('saliencies'), which highlight the aspects of the input that contribute most to the model's classification decision. By penalizing these gradients during training in a suitable way, one can influence the neural network's behaviour in a desired manner. One example is the classification of mass spectra for tumor subtyping, where the chemical differences are assumed to be based on just a few biomarkers. By imposing a sparsity-inducing penalty on the input gradients, one obtains a neural network that is able to classify tumor samples robustly while basing its decisions mostly on known histochemical tumor markers.
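
A minimal sketch of a sparsity-inducing penalty on input gradients; the model, data, and penalty weight below are hypothetical stand-ins, not the talk's actual setup.

```python
# A minimal sketch of a sparsity-inducing penalty on input gradients; the
# model, data, and penalty weight are hypothetical stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(200, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
penalty_weight = 1e-2  # assumed value

def training_step(x, labels):
    x = x.requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, labels)
    # Saliency: gradient of the summed class scores w.r.t. the input;
    # create_graph=True lets us backpropagate through the penalty.
    sal, = torch.autograd.grad(logits.sum(), x, create_graph=True)
    loss = ce + penalty_weight * sal.abs().mean()  # L1 -> sparse saliencies
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with random stand-ins for mass spectra:
x = torch.randn(8, 200)
labels = torch.randint(0, 2, (8,))
print(training_step(x, labels))
```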

Speaker: Joana Grah

Title: Learning filter functions in regularisers by minimising quotients.

Abstract: Learning approaches have recently become very popular in the field of inverse problems, and a large variety of methods have been established. However, most learning approaches only aim at fitting parametrised models to favourable training data whilst ignoring misfit training data completely. In contrast, we present a learning framework for parametrised regularisation functions based on quotient minimisation, where we allow for both fit and misfit training data in the numerator and denominator, respectively. We present results resembling the behaviour of well-established derivative-based sparse regularisers; this is accomplished by learning favourable scales and geometric properties while at the same time avoiding unfavourable ones. Finally, we apply and extend the learning framework to classification problems.
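
As an illustration, simplified to squared norms so the quotient becomes a Rayleigh quotient (the talk's framework uses nonsmooth norms and dedicated solvers): a convolution filter w is learned by minimising ||w * u_fit||^2 / ||w * u_misfit||^2, so the resulting regulariser is small on favourable data and large on unfavourable data. With piecewise-constant fit data, the learned filter tends toward a derivative-like stencil, matching the behaviour described above.

```python
# A sketch simplified to squared norms, so the quotient is a Rayleigh
# quotient solvable as a generalised eigenproblem; the talk's framework
# uses nonsmooth norms and dedicated solvers.
import numpy as np
from scipy.linalg import eigh

def conv_matrix(u, k):
    """Matrix whose product with a length-k filter w is the valid
    convolution u * w."""
    return np.stack([u[i:i + k][::-1] for i in range(u.size - k + 1)])

rng = np.random.default_rng(1)
k = 5
fit = np.repeat(rng.standard_normal(20), 10)  # piecewise constant: favourable
misfit = rng.standard_normal(200)             # pure noise: unfavourable

A_fit, A_mis = conv_matrix(fit, k), conv_matrix(misfit, k)
B = A_fit.T @ A_fit   # numerator quadratic form:   ||w * fit||^2
C = A_mis.T @ A_mis   # denominator quadratic form: ||w * misfit||^2

# The quotient minimiser is the eigenvector belonging to the smallest
# generalised eigenvalue of the pair (B, C).
vals, vecs = eigh(B, C)
w = vecs[:, 0]
print("learned filter:", np.round(w / np.abs(w).max(), 3))
```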

Speaker: Mila Nikolova

Title: Fast solvers for approximating inconsistent systems of linear inequalities.

Abstract: The need to solve a system of linear inequalities \(Ax \leq b\) arises in many applications. However, it may happen that the requirements of the system are inconsistent. In such a case it is often desired to find the least correction of \(b\) that recovers feasibility. The "least" is usually defined in the least-squares sense, and in some cases via the \(\ell_1\) or \(\ell_\infty\) norm. Existing algorithms apply Newton's method at each iteration. They have two main drawbacks: (i) each step requires exact minimisation, and (ii) the scheme is nested, so computational errors accumulate at each step, depending on the stopping rule of the inner iteration. In this work, we reformulate these problems as well-posed variational minimax problems and propose simple, fast, and convergent algorithms with explicit iterations.
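
For concreteness, the least-squares variant can be written as the smooth problem \(\min_x \tfrac{1}{2}\|\max(Ax - b, 0)\|^2\), whose residual at a minimiser is the least correction \(e\) with \(Ax \leq b + e\). Below is a minimal sketch using a plain gradient method, a stand-in for, not an implementation of, the proposed minimax algorithms.

```python
# A sketch with a plain gradient method, not the talk's minimax algorithms:
# minimise f(x) = 0.5 * ||max(A x - b, 0)||^2; the residual at the minimiser
# is the least (least-squares) correction e with A x <= b + e.
import numpy as np

def least_correction(A, b, n_iter=5000):
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz const. of grad f
    for _ in range(n_iter):
        r = np.maximum(A @ x - b, 0.0)      # only violated inequalities contribute
        x -= step * (A.T @ r)               # gradient step
    return x, np.maximum(A @ x - b, 0.0)

# Inconsistent toy system: x <= 1 and x >= 2 cannot both hold.
A = np.array([[1.0], [-1.0]])
b = np.array([1.0, -2.0])
x, e = least_correction(A, b)
print("x =", x, "correction e =", e)  # expect x ~ 1.5, e ~ [0.5, 0.5]
```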