The talk concerns the optimization of the \(j\)th largest eigenvalue of a
Hermitian, analytic matrix-valued function depending on several
parameters, for a prescribed \(j\). The underlying eigenvalue function is
typically nonconvex, excluding special cases such as the largest
eigenvalue of a Hermitian matrix-valued function whose entries are linear
functions of the parameters. The majority of interest in eigenvalue
optimization has originated from the latter convex problem, due to its
intimate connection with convex semidefinite programs. This interest
diminished to a degree after the invention of interior-point methods,
which made the numerical solution of small convex semidefinite programs
possible. Nonconvex eigenvalue optimization problems have been explored
especially since the late 1980s; the research in this direction has
concentrated on particular problems, for instance those motivated by
control theory, that depend on only a few optimization variables.

This talk starts with a brief review of a general algorithm for
nonconvex eigenvalue optimization problems over a few variables.
The algorithm is based on global piecewise-quadratic models for the
nonconvex eigenvalue function. It is especially well suited for the
maximization of an eigenvalue function whose left-hand derivative is
greater than or equal to its right-hand derivative at every point
along any line in the parameter domain.

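A minimal one-parameter sketch of such an algorithm follows. The matrices, the choice of the smallest eigenvalue of a linear Hermitian pencil (which, being concave, has the stated one-sided derivative property), and the curvature bound `gamma` are all illustrative assumptions, not the talk's actual implementation:

```python
import numpy as np

def eig_j(A, j):
    """Return the j-th largest eigenvalue of Hermitian A and a unit eigenvector."""
    w, V = np.linalg.eigh(A)          # eigenvalues in ascending order
    idx = A.shape[0] - j              # index of the j-th largest
    return w[idx], V[:, idx]

def maximize_eig(Afun, dAfun, j, a, b, gamma, tol=1e-8, maxit=200):
    """Maximize f(x) = lambda_j(A(x)) over [a, b] using piecewise-quadratic
    upper support functions q_k(x) = f(x_k) + f'(x_k)(x - x_k)
    + (gamma/2)(x - x_k)^2, where gamma is an assumed curvature bound."""
    def sample(x):
        lam, v = eig_j(Afun(x), j)
        return lam, (v.conj() @ dAfun(x) @ v).real   # analytic derivative
    xs = [a, b]
    fs, gs = [], []
    for x in xs:
        lam, g = sample(x)
        fs.append(lam); gs.append(g)
    for _ in range(maxit):
        order = sorted(range(len(xs)), key=lambda k: xs[k])
        best_x, best_q = xs[order[0]], fs[order[0]]
        # the model max on each segment lies where adjacent parabolas intersect
        for i, l in zip(order[:-1], order[1:]):
            xi, xl = xs[i], xs[l]
            denom = gs[i] - gs[l] + gamma * (xl - xi)
            if abs(denom) < 1e-14:
                continue
            x = (fs[l] - fs[i] + gs[i] * xi - gs[l] * xl
                 + 0.5 * gamma * (xl * xl - xi * xi)) / denom
            x = min(max(x, xi), xl)
            q = fs[i] + gs[i] * (x - xi) + 0.5 * gamma * (x - xi) ** 2
            if q > best_q:
                best_x, best_q = x, q
        if best_q - max(fs) <= tol:    # model max matches best sample: done
            break
        lam, g = sample(best_x)
        xs.append(best_x); fs.append(lam); gs.append(g)
    k = max(range(len(xs)), key=lambda i: fs[i])
    return xs[k], fs[k]

# Demo: maximize the smallest eigenvalue (j = n) of an illustrative pencil
A0 = np.array([[1., 2., 0.], [2., -1., 1.], [0., 1., 3.]])
A1 = np.array([[0., 1., 0.], [1., 0., 2.], [0., 2., -1.]])
x_star, f_star = maximize_eig(lambda x: A0 + x * A1, lambda x: A1,
                              j=3, a=-2.0, b=2.0, gamma=10.0)
```

Each iteration maximizes the current piecewise model (the minimum of the support parabolas) and samples the eigenvalue there; the gap between model maximum and best sample certifies global optimality.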
The second part features a greedy subspace framework for large-scale eigenvalue
optimization problems involving large matrix-valued functions. At every
iteration the subspace framework solves a reduced eigenvalue optimization
problem obtained by projecting the large matrix-valued function onto
a small subspace. It then expands the subspace with the addition of the
eigenvectors at an optimal point of the reduced problem. The proposed
framework converges superlinearly with respect to the subspace dimension both in
theory and in practice.
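A minimal sketch of such a greedy subspace iteration, under simplifying assumptions (a single parameter restricted to a finite grid, a real symmetric pencil, minimization of the largest eigenvalue, and a crude inner solver), might read:

```python
import numpy as np

def subspace_min_lmax(Afun, omegas, n, tol=1e-8, maxit=100):
    """One-parameter sketch of a greedy subspace framework: minimize
    lambda_max(A(omega)) over a finite grid by repeatedly solving a
    reduced problem with V^T A(omega) V and expanding V with an
    eigenvector of the full problem at the reduced minimizer.
    (Real symmetric case for brevity; use the conjugate transpose
    for Hermitian A.)"""
    rng = np.random.default_rng(0)
    V = np.linalg.qr(rng.standard_normal((n, 1)))[0]   # initial 1-dim subspace
    for _ in range(maxit):
        # reduced problem, solved crudely by enumerating the grid; a real
        # implementation would project the coefficient matrices only once
        vals = [np.linalg.eigvalsh(V.T @ Afun(w) @ V)[-1] for w in omegas]
        k = int(np.argmin(vals))
        w_hat = omegas[k]
        # full-problem eigenpair at the reduced minimizer
        lam_full, U = np.linalg.eigh(Afun(w_hat))
        if lam_full[-1] - vals[k] <= tol:   # reduced matches full: converged
            return w_hat, lam_full[-1]
        # greedy expansion with the full eigenvector, reorthonormalized
        V = np.linalg.qr(np.column_stack([V, U[:, -1]]))[0]
    return w_hat, lam_full[-1]

# Demo: an illustrative 60 x 60 symmetric pencil A(omega) = A0 + omega*A1
rng = np.random.default_rng(1)
n = 60
A0 = rng.standard_normal((n, n)); A0 = (A0 + A0.T) / 2
A1 = rng.standard_normal((n, n)); A1 = (A1 + A1.T) / 2
omegas = np.linspace(-1.0, 1.0, 81)
w_hat, val = subspace_min_lmax(lambda w: A0 + w * A1, omegas, n)
```

The termination test exploits the interpolation property: once the eigenvector at \(\hat\omega\) enters the subspace, the reduced eigenvalue function agrees with the full one there, which is also the mechanism behind the superlinear convergence.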

The third part concerns the adaptation of the subspace framework for
convex semidefinite programs with large matrix variables. Here we benefit
from the eigenvalue optimization characterization of the dual of a
semidefinite program. The reduced problems are expressed as
semidefinite programs, and solved by interior point methods.
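As a concrete instance of this characterization (assuming, for illustration, a primal trace normalization \(\operatorname{tr} X = 1\) and a standard constraint qualification so that strong duality holds, with \(\mathcal{A}\) a linear map and \(\mathcal{A}^*\) its adjoint), one has
\[
\min \{\, \langle C, X \rangle \;:\; \mathcal{A}(X) = b,\ \operatorname{tr} X = 1,\ X \succeq 0 \,\}
\;=\;
\max_{y \in \mathbb{R}^m}\; b^T y + \lambda_{\min}\!\bigl(C - \mathcal{A}^*(y)\bigr),
\]
so the dual becomes an unconstrained eigenvalue optimization problem, to which the subspace framework applies with reduced problems that are again small semidefinite programs.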

If time permits, the talk will conclude with a discussion of
how the ideas can be adapted to optimize robust stability.
Specifically, it will focus on the maximization of the stability
radius of the closed-loop system \(x' = (A + BKC)x\) for given
matrices \(A\), \(B\), \(C\) and over the variable \(K\).
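For the unstructured complex case the stability radius of a Hurwitz-stable matrix admits the well-known singular-value characterization \(r(A) = \min_{\omega \in \mathbb{R}} \sigma_{\min}(A - i\omega I)\). A minimal sketch, with the minimization over \(\omega\) replaced by a crude grid search purely for illustration (maximizing over \(K\) would nest this as the inner computation):

```python
import numpy as np

def stability_radius(A, omegas):
    """Complex (unstructured) stability radius of a Hurwitz-stable A:
    r(A) = min over real omega of sigma_min(A - i*omega*I).
    The minimization over omega is done here over a user-supplied
    frequency grid; a proper method would optimize omega globally."""
    n = A.shape[0]
    return min(np.linalg.svd(A - 1j * w * np.eye(n), compute_uv=False)[-1]
               for w in omegas)

# Demo: for a stable normal matrix, the radius equals the distance of
# the spectrum to the imaginary axis
A = np.diag([-1.0, -2.0])
r = stability_radius(A, np.linspace(-5.0, 5.0, 1001))
```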