Numerical Optimization and Inverse Problems
|Unit level:||Level 6|
|Teaching period(s):||Semester 2|
|Offered by:||School of Mathematics|
|Available as a free choice unit?:||N|
Students are not permitted to take, for credit, MATH46132 in an undergraduate programme and then MATH66132 in a postgraduate programme at the University of Manchester, as the courses are identical.
To provide a mathematical understanding of modern numerical optimization, and to discuss practical aspects of implementation and available software. To introduce inverse and ill-posed problems and their application to industrial, geophysical and medical imaging problems.
This course unit develops the theory and practice of numerical methods for nonlinear optimization with and without constraints, covering methods suitable for both small and large-scale problems and discussing available software. The subject is of considerable importance in scientific, industrial and economic problems, and the ideas and methods of solution have been stimulated by practical applications.
Optimization problems of the kind discussed in this course are encountered by many practitioners working in financial mathematics, industrial mathematics or mathematical economics. For example, one often needs to infer the material properties of an object from physical measurements. Often the measurements are taken exterior to the object and we wish to identify variable material properties on the inside: in industrial and medical imaging, one applies ultrasound, X-rays or an electromagnetic field to an object, makes measurements on the outside and then attempts to form an image of the inside. These problems are examples of so-called 'inverse problems' and are typically ill-posed, in the sense that the mapping taking data to images is discontinuous, so numerical reconstruction algorithms tend to be unstable unless one makes sufficient assumptions, such as smoothness, about the image. Because discretized inverse problems are ill-conditioned, the solution must be constrained using extra (a priori) information, and the usual compromise results in an optimization problem that trades off fitting the data against satisfying the constraints given by the a priori information. Numerical optimization techniques are often used to solve such problems.
The course will be illustrated by practical examples including visits to experimental groups in the University, and will include numerical examples illustrated by MATLAB programs.
On successful completion of this course unit students will be able to:
- Derive standard optimality conditions for unconstrained optimization problems from first principles.
- Identify and solve unconstrained optimization problems appearing in practical applications using standard gradient-based algorithms.
- Design and analyse efficient line search techniques for gradient-based algorithms.
- Derive standard optimization techniques for output least squares problems and apply them to simple test cases.
- Identify constrained optimization problems, formulate the corresponding first order optimality conditions and solve the resulting optimality system using efficient techniques.
- Identify and analyse ill-posed linear inverse problems in various practical applications and apply efficient regularization techniques for solving them using techniques from numerical linear algebra.
- Identify nonlinear parameter estimation problems in various practical applications and apply efficient techniques based on output least squares for their solution.
- Other - 25%
- Written exam - 75%
Assessment Further Information
- Mid-semester coursework: 25%
- End of semester examination: three hours, weighting 75%
1. Introduction. 
Motivating examples. Continuous vs discrete optimization. Local vs Global optimization.
2. Unconstrained Optimization. 
Optimality conditions. Convexity results. Local and Global solutions. Convergence of steepest descent algorithm. Search directions: nonlinear conjugate gradient method, Newton's method; Quasi-Newton methods. Sums of squares problems: Gauss-Newton and Levenberg-Marquardt methods.
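The course's numerical examples use MATLAB; purely as an illustration of the first method listed above, steepest descent with a backtracking (Armijo) line search can be sketched in Python/NumPy as follows. The function, the Armijo parameters and the quadratic test problem are illustrative choices, not taken from the course material.

```python
import numpy as np

def steepest_descent(f, grad, x0, tol=1e-8, max_iter=1000):
    """Steepest descent with a backtracking (Armijo) line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # first-order optimality reached
            break
        p = -g                        # steepest descent direction
        t, c, rho = 1.0, 1e-4, 0.5    # illustrative Armijo parameters
        # Shrink the step until the sufficient-decrease condition holds
        while f(x + t * p) > f(x) + c * t * (g @ p):
            t *= rho
        x = x + t * p
    return x

# Minimise a simple convex quadratic f(x) = x1^2 + 10*x2^2, minimiser at 0
f = lambda x: x[0]**2 + 10 * x[1]**2
grad = lambda x: np.array([2 * x[0], 20 * x[1]])
x_star = steepest_descent(f, grad, [3.0, -2.0])
```

The mild ill-conditioning of this quadratic (condition number 10) already shows the characteristic slow, zig-zagging linear convergence of steepest descent discussed in the lectures.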
3. Constrained Optimization. 
Optimality conditions. Linear algebra for solving KKT systems. Overview of methods for constrained optimization.
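For a quadratic objective with linear equality constraints, the first-order (KKT) optimality conditions form a symmetric indefinite linear system that can be assembled and solved directly. A minimal NumPy sketch (the matrices below are illustrative, not from the course notes):

```python
import numpy as np

# Equality-constrained QP:  min 0.5 x^T G x + c^T x   s.t.  A x = b
G = np.array([[6.0, 2.0], [2.0, 5.0]])   # symmetric positive definite Hessian
c = np.array([-8.0, -3.0])
A = np.array([[1.0, 1.0]])               # one constraint: x1 + x2 = 3
b = np.array([3.0])

m = A.shape[0]
# Assemble the symmetric indefinite KKT matrix [[G, A^T], [A, 0]]
K = np.block([[G, A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([-c, b])
sol = np.linalg.solve(K, rhs)
x, lam = sol[:2], sol[2:]                # primal solution and multiplier
```

A dense solve is used here for clarity; the course discusses structure-exploiting alternatives (e.g. null-space and Schur-complement approaches) for large KKT systems.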
4. Inverse Problems. 
Introduction to inverse problems. Tikhonov regularization of linear ill-posed problems. Total Variation regularization in image deblurring/denoising. Solving the Euler-Lagrange equations. Adjoint formulation and seismic imaging. The Radon Transform and X-ray Computerized Tomography.
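Tikhonov regularization of a linear ill-posed problem can be illustrated with a small Python/NumPy sketch (the course itself uses MATLAB; the Hilbert-matrix test problem, noise level and regularization parameter below are illustrative choices). It compares a naive solve of a severely ill-conditioned system, which amplifies the data noise catastrophically, with the regularized solution obtained from SVD filter factors:

```python
import numpy as np

def tikhonov(A, b, alpha):
    """Solve min_x ||Ax - b||^2 + alpha^2 ||x||^2 via SVD filter factors."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Filter factors s/(s^2 + alpha^2) damp components with small singular values
    filt = s / (s**2 + alpha**2)
    return Vt.T @ (filt * (U.T @ b))

# Classic ill-posed test problem: the Hilbert matrix, cond ~ 1e16 for n = 12
n = 12
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
rng = np.random.default_rng(0)
b = A @ x_true + 1e-6 * rng.standard_normal(n)   # data with small noise

x_naive = np.linalg.solve(A, b)    # noise amplified by ~1/s_min: useless
x_reg = tikhonov(A, b, alpha=1e-4) # stable, accurate reconstruction
```

This is exactly the trade-off described earlier in the unit description: the penalty term `alpha^2 ||x||^2` encodes the a priori information, and the choice of `alpha` balances data fit against stability.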
- J. T. Betts, Practical Methods for Optimal Control using Nonlinear Programming, SIAM 2001.
- R. Fletcher, Practical Methods of Optimization, Wiley, 1987.
- Philip E. Gill, Walter Murray, and Margaret H. Wright, Practical Optimization, Academic Press, 1981.
- Jorge Nocedal and Stephen J. Wright, Numerical Optimization, Springer-Verlag, 1999.
- N. Gould and S. Leyffer, An Introduction to Algorithms for Nonlinear Optimization; in Frontiers in Numerical Analysis (Durham, 2002), Springer, 2003.
- M. Bertero and P. Boccacci, Introduction to Inverse Problems in Imaging, Taylor & Francis, 1998.
- P. C. Hansen, Rank-Deficient and Discrete Ill-Posed Problems, SIAM, 1998.
- R. Aster, B. Borchers and C. Thurber, Parameter Estimation and Inverse Problems. Academic Press, 2012.
- A. Tarantola, Inverse Problem Theory, SIAM, 2005.
A more comprehensive list of textbooks will be distributed at the beginning of the course.
Feedback tutorials will provide an opportunity for students' work to be discussed and for feedback to be given on their understanding. Coursework or in-class tests (where applicable) also provide an opportunity for students to receive feedback. Students can also get feedback on their understanding directly from the lecturer, for example during the lecturer's office hour.
- Lectures - 22 hours
- Tutorials - 11 hours
- Independent study hours - 117 hours