Eigenvalues, Sylvester Equations, and Projectors

This page documents verified eigenvalue computation, Sylvester equation solvers, and spectral projectors.

Overview

We implement several algorithms for computing rigorous enclosures of eigenvalues, following Ref. [7]. See Eigenvalues in Arb for a deeper discussion of the topic.

Standard Eigenvalue Problems

Main Functions

Result Types

Generalized Eigenvalue Problems

For the generalized eigenvalue problem $Ax = \lambda Bx$:
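As a point of reference for the problem being verified (this is the non-rigorous stdlib solver, not this package's certified routine), the generalized problem can be posed with `LinearAlgebra.eigen(A, B)`:

```julia
using LinearAlgebra

# Non-verified reference point: eigen(A, B) solves A*x = λ*B*x
# with ordinary floating-point LAPACK.
A = [2.0 0.0; 0.0 3.0]
B = Matrix(1.0I, 2, 2)
F = eigen(A, B)
@assert sort(real.(F.values)) ≈ [2.0, 3.0]
# each eigenpair satisfies A*x ≈ λ*B*x:
for (λ, x) in zip(F.values, eachcol(F.vectors))
    @assert norm(A * x - λ * (B * x)) < 1e-10
end
```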

Result Types

Sylvester Equations

Fast verified computation for solutions of Sylvester equations $AX + XB = C$, following Ref. [3].
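The verified algorithm of Ref. [3] is not reproduced here, but the equation convention can be pinned down with the stdlib's dense solver; note that `LinearAlgebra.sylvester(A, B, C)` solves $AX + XB + C = 0$, so the right-hand side must be negated:

```julia
using LinearAlgebra

# sylvester(A, B, C) returns X with A*X + X*B + C = 0, so for the
# convention A*X + X*B = C used on this page, pass -C.
A = [2.0 1.0; 0.0 3.0]
B = [1.0 0.5; 0.0 2.0]
C = [1.0 2.0; 3.0 4.0]
X = sylvester(A, B, -C)
@assert norm(A * X + X * B - C) < 1e-10
```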

Spectral Projectors

Computation of spectral projectors for eigenvalue clustering, following Ref. [8].
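To fix ideas (an unverified illustration, not the algorithm of Ref. [8]), the spectral projector associated with a cluster of eigenvalues can be assembled from an eigendecomposition:

```julia
using LinearAlgebra

# Spectral projector onto the eigenspace of a cluster near λ = 1,
# built from eigenvectors: P = V[:, idx] * inv(V)[idx, :].
A = [1.0 1.0 0.0;
     0.0 1.1 0.0;
     0.0 0.0 5.0]
F = eigen(A)
idx = findall(λ -> abs(λ - 1.0) < 0.5, F.values)   # the cluster {1.0, 1.1}
V = F.vectors
P = V[:, idx] * inv(V)[idx, :]
@assert P * P ≈ P        # projectors are idempotent
@assert A * P ≈ P * A    # and commute with A
```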

Eigenspace Projection

Result Types

Block Schur Decomposition

Rigorous block Schur decomposition for eigenvalue clustering.
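For orientation (the rigorous routine is documented below; this is the stdlib's non-verified analogue), `LinearAlgebra.ordschur` reorders a Schur factorization so that a selected eigenvalue cluster occupies the leading block:

```julia
using LinearAlgebra

# Reorder a Schur factorization so the cluster with |λ| > 1 leads.
A = [0.2 1.0 0.0;
     0.0 3.0 1.0;
     0.0 0.0 0.3]
S = schur(A)
select = abs.(S.values) .> 1.0     # pick eigenvalues outside the unit circle
S2 = ordschur(S, select)
@assert abs(S2.values[1]) ≈ 3.0    # the selected eigenvalue now comes first
@assert S2.Z * S2.T * S2.Z' ≈ A    # still a valid factorization of A
```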

Auxiliary Functions

Result Types

Limitations of Mixed-Precision Schur Refinement

The iterative Schur refinement algorithm (Ref. [9]) achieves fast convergence by solving the triangular matrix equation $\operatorname{stril}(TL - LT) = -E$ in low precision (Float64). This works well when the eigenvalues of $T$ are well separated in Float64 arithmetic, but the approach can fail on matrices with extreme eigenvalue clustering.
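For intuition, with a diagonal $T$ the equation decouples entrywise, which makes the division by eigenvalue differences explicit (a minimal sketch, not the refinement code itself):

```julia
using LinearAlgebra

stril(M) = tril(M, -1)   # strictly lower-triangular part

# With diagonal T = diag(t), stril(T*L - L*T) = -E decouples entrywise:
#   (t[i] - t[j]) * L[i,j] = -E[i,j]   for i > j
t = [3.0, 2.0, 1.0]
T = diagm(t)
E = stril([0.0 0.0 0.0;
           0.4 0.0 0.0;
           0.7 0.2 0.0])
L = zeros(3, 3)
for j in 1:3, i in (j + 1):3
    L[i, j] = -E[i, j] / (t[i] - t[j])   # breaks down when t[i] ≈ t[j]
end
@assert norm(stril(T * L - L * T) + E) < 1e-14
```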

When does it fail?

The triangular solve requires dividing by eigenvalue differences $T_{ii} - T_{jj}$. When these differences fall below Float64 machine epsilon ($\approx 2.2 \times 10^{-16}$), the low-precision solver cannot distinguish them and zeroes out the corresponding entries of $L$. This prevents the correction $W = L - L^H$ from fixing the associated components of the residual.
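A two-line experiment shows the loss directly: a gap of $10^{-60}$ is representable in BigFloat but rounds to exactly zero in Float64, so the divisor in the triangular solve vanishes.

```julia
# Two eigenvalues separated by 1e-60: BigFloat resolves the gap,
# Float64 rounds both to the same value.
setprecision(BigFloat, 256) do
    λ1 = big"1.0"
    λ2 = big"1.0" + big"1e-60"
    @assert λ1 != λ2                          # BigFloat sees the gap
    @assert Float64(λ1) == Float64(λ2)        # Float64 does not
    @assert Float64(λ2) - Float64(λ1) == 0.0  # the divisor is exactly zero
end
```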

Symptom: the orthogonality defect $\|Q^H Q - I\|$ converges quadratically (the Newton-Schulz step keeps working), but the relative residual $\|\operatorname{stril}(Q^H A Q)\|_2 / \|A\|_F$ stalls at a fixed level, typically between $10^{-22}$ and $10^{-16}$, far above the target tolerance.
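The quadratic contraction of the orthogonality defect is easy to reproduce in isolation, since the Newton-Schulz step does not involve eigenvalue gaps at all (a standalone sketch, not this package's refinement loop):

```julia
using LinearAlgebra

# Newton-Schulz iteration Q <- Q*(3I - Q'Q)/2: the orthogonality defect
# ||Q'Q - I|| contracts quadratically from a nearly orthogonal start.
function newton_schulz(Q, iters)
    for _ in 1:iters
        Q = Q * (3I - Q'Q) / 2
    end
    return Q
end

Q0 = Matrix(qr(randn(6, 6)).Q) * (I + 1e-3 * randn(6, 6))  # nearly orthogonal
Q = newton_schulz(Q0, 5)
@assert opnorm(Q'Q - I) < 1e-12
```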

Solving the triangular equation in BigFloat instead does not help either: when eigenvalue gaps are tiny (e.g. $10^{-60}$), the entries of $L$ become enormous ($\|L\| \gg 1$), violating the small-perturbation assumption of the Newton-like iteration, which then diverges.
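The blow-up is immediate from the entrywise formula: dividing a modest residual entry by a gap of order $10^{-60}$ produces a correction entry of order $10^{50}$ or more, nowhere near a small perturbation.

```julia
# Even in exact-enough BigFloat arithmetic, a tiny eigenvalue gap makes
# the correction entry L[i,j] = -E[i,j] / (t[i] - t[j]) enormous.
setprecision(BigFloat, 512) do
    gap = big"3.5e-60"    # minimum eigenvalue separation from the GKW example
    e   = big"1.0e-8"     # a modest residual entry
    Lij = -e / gap
    @assert abs(Lij) > big"1e50"   # ||L|| >> 1: the iteration diverges
end
```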

Example: GKW transfer operator (257 × 257)

A concrete example arises from Gauss-Kuzmin-Wirsing transfer operators truncated at $K = 256$. The resulting matrix has:

| Property | Value |
|:--|:--|
| Condition number | $\approx 1.5 \times 10^{61}$ |
| Eigenvalues with $\lambda$ … | … |
| Eigenvalues with $\lambda$ … | … |
| Minimum eigenvalue separation | $\approx 3.5 \times 10^{-60}$ |
| Spectral radius | $\approx 1.0$ |

With a Float64 Schur seed, refinement stalls at residual $\approx 3.75 \times 10^{-22}$ (target: $\approx 8.9 \times 10^{-160}$), because Float64 cannot resolve 80% of the eigenvalue spectrum.

For matrices whose eigenvalue separation falls below Float64 resolution, two strategies are available:

  1. Direct BigFloat Schur via GenericSchur.jl: compute the Schur decomposition directly at the target precision, bypassing iterative refinement entirely. Pass the result via the schur_seed keyword argument of rigorous_schur_bigfloat to certify it.

  2. Block Schur refinement: group nearly-degenerate eigenvalue clusters into blocks and refine the block structure, avoiding division by small eigenvalue differences. See rigorous_block_schur.
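A sketch of the first strategy, assuming the `rigorous_schur_bigfloat`/`schur_seed` API described above (the certification call is shown commented out, since it belongs to this package rather than to GenericSchur.jl):

```julia
using LinearAlgebra
using GenericSchur   # extends LinearAlgebra.schur to BigFloat element types

# Compute the Schur factorization directly at the target precision,
# bypassing mixed-precision refinement, then certify it as a seed.
setprecision(BigFloat, 256) do
    n = 4
    A = Complex{BigFloat}.(randn(n, n) .+ im .* randn(n, n))
    S = schur(A)   # direct high-precision Schur via GenericSchur.jl
    @assert norm(S.Z * S.T * S.Z' - A) < sqrt(eps(BigFloat)) * norm(A)
    # result = rigorous_schur_bigfloat(A; schur_seed = S)
end
```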