Rational Krylov subspaces have proven useful in many applications, such as the approximation of matrix functions or the solution of matrix equations. These rational subspaces are built not only from matrix-vector products but also from inverses of (shifted) matrices applied to vectors. It is these inverses that can yield significantly faster convergence, but on the other hand they can also cause considerable difficulties.
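To make the contrast concrete, the following sketch (in NumPy, with hypothetical helper names) builds an orthonormal basis of a standard Krylov subspace using only matrix-vector products, and of a single-pole rational Krylov subspace, where each step requires a system solve with the shifted matrix; the choice of pole and the Gram-Schmidt variant are illustrative assumptions, not the method discussed in the lecture.

```python
import numpy as np

def standard_krylov_basis(A, v, k):
    # Orthonormal basis of span{v, A v, ..., A^(k-1) v}:
    # each new direction costs one matrix-vector product.
    n = len(v)
    V = np.zeros((n, k))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(1, k):
        w = A @ V[:, j - 1]
        for i in range(j):                   # modified Gram-Schmidt
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j] = w / np.linalg.norm(w)
    return V

def rational_krylov_basis(A, v, k, pole):
    # Same idea, but each new direction applies (A - pole*I)^(-1):
    # a system solve per step -- the costly operation the abstract refers to.
    n = len(v)
    V = np.zeros((n, k))
    V[:, 0] = v / np.linalg.norm(v)
    M = A - pole * np.eye(n)
    for j in range(1, k):
        w = np.linalg.solve(M, V[:, j - 1])  # explicit solve
        for i in range(j):
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j] = w / np.linalg.norm(w)
    return V
```

Both routines return matrices with orthonormal columns; the only structural difference is the solve in the rational variant, which is exactly what the implicit approach below avoids.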
It will be shown that extended and rational Krylov subspaces can, under certain assumptions, be retrieved without any explicit inversions or system solves. Instead, the necessary computations are carried out implicitly, using the information contained in an enlarged standard Krylov subspace.
In this way the difficulties disappear, but there is a price to be paid.
In this lecture the audience will be introduced to the generic building blocks underlying rational Krylov subspaces: rotations, twisted QR factorizations, turnovers, fusions, and so on. Building on these blocks, we will shrink a large standard Krylov subspace by unitary similarity transformations to a much smaller rational Krylov subspace, without (if all goes well) essential loss of information. This smaller space can then be used to solve the original application.
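As a small illustration of the most elementary of these building blocks, the sketch below (with assumed helper names, not the lecture's actual implementation) constructs a 2x2 rotation that introduces a zero, and demonstrates a fusion: the product of two rotations acting on the same rows is again a single rotation and can therefore be merged.

```python
import numpy as np

def rotation(a, b):
    # 2x2 rotation G with cosine c and sine s, c**2 + s**2 = 1, chosen so that
    # G @ [a, b] = [r, 0]: the elementary tool for creating zeros in QR-type
    # factorizations.
    r = np.hypot(a, b)
    c, s = a / r, b / r
    return np.array([[c, s], [-s, c]])

# Fusion: multiplying two rotations acting on the same two rows yields a
# single orthogonal matrix with determinant 1, i.e. again a rotation.
G1 = rotation(3.0, 4.0)
G2 = rotation(1.0, 2.0)
F = G2 @ G1
```

Turnovers and twisted QR factorizations manipulate patterns of such rotations acting on overlapping rows; the 2x2 case above is only the atomic ingredient.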
Numerical experiments support our claim that this approximation can be very accurate, yielding dimensionality reduction and thus time savings when approximating, e.g., matrix functions or solving ODEs.