# Eigenvalues, Sylvester Equations, and Projectors
This page documents verified eigenvalue computation, Sylvester equation solvers, and spectral projectors.
## Overview
We implement several algorithms to compute rigorous enclosures of eigenvalues, following Ref. [7]. The interested reader may refer to the treatment in Eigenvalues in Arb for a deeper discussion on the topic.
## Standard Eigenvalue Problems

### Main Functions

- `rigorous_eigenvalues` - General eigenvalue verification
- `rump_2022a_eigenvalue_bounds` - Individual eigenvector error bounds (Rump 2022a method)
- `rump_lange_2023_cluster_bounds` - Schur-based cluster bounds
- `refine_cluster_bounds` - Iterative refinement of cluster bounds
### Result Types

- `RigorousEigenvaluesResult` - Result from rigorous eigenvalue computation
- `Rump2022aResult` - Result with individual eigenvector bounds
- `RumpLange2023Result` - Schur-based result
## Generalized Eigenvalue Problems
For the generalized eigenvalue problem $Ax = \lambda Bx$:
- `verify_generalized_eigenpairs` - Verify all eigenpairs
- `rigorous_generalized_eigenvalues` - Full rigorous computation
- `compute_beta_bound` - Compute the β bound for the matrix $B$
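For orientation, the unverified floating-point residual that these routines then bound rigorously can be checked with the stdlib `eigen(A, B)`; the matrices below are made up for the example:

```julia
using LinearAlgebra

# Toy symmetric-definite pencil A x = λ B x (illustrative values)
A = [2.0 1.0; 1.0 3.0]
B = [1.0 0.0; 0.0 2.0]

# Unverified floating-point generalized eigendecomposition
F = eigen(A, B)

# A posteriori residual ‖A x − λ B x‖ for each approximate pair;
# verification methods bound this quantity with rigorous arithmetic
for (λ, x) in zip(F.values, eachcol(F.vectors))
    @assert norm(A * x - λ * (B * x)) < 1e-8
end
```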
### Result Types

- `GEVResult` - Generalized eigenvalue result
- `RigorousGeneralizedEigenvaluesResult` - Full result with bounds
## Sylvester Equations
Fast verified computation for solutions of Sylvester equations $AX + XB = C$, following Ref. [3].
- `sylvester_miyajima_enclosure` - General Sylvester solver
- `triangular_sylvester_miyajima_enclosure` - Triangular variant
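For comparison, the equation can be solved without verification by the stdlib `LinearAlgebra.sylvester`, which uses the sign convention $AX + XB + C = 0$; the verified routines above additionally enclose the true solution rigorously. The matrices are made up for the example:

```julia
using LinearAlgebra

A = [4.0 1.0; 0.0 3.0]
B = [2.0 0.0; 1.0 1.0]
C = [1.0 2.0; 3.0 4.0]

# LinearAlgebra.sylvester solves A X + X B + C = 0, so negate C
# to solve A X + X B = C (solvable since spec(A) ∩ spec(−B) = ∅)
X = sylvester(A, B, -C)

# Floating-point residual; a verified enclosure bounds the true
# solution around such an approximate X instead of just checking this
@assert norm(A * X + X * B - C) < 1e-12
```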
## Spectral Projectors
Computation of spectral projectors for eigenvalue clustering, following Ref. [8].
- `miyajima_spectral_projectors` - Main spectral projector computation
- `compute_spectral_projector_schur` - Schur-based projector
- `compute_spectral_projector_hermitian` - Hermitian case
### Eigenspace Projection
- `project_onto_eigenspace` - Project a vector onto an eigenspace
- `project_onto_schur_subspace` - Project onto a Schur subspace
- `compute_eigenspace_projector` - Compute the eigenspace projection matrix
- `compute_schur_projector` - Compute the Schur subspace projector
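A minimal unverified sketch of what a spectral projector is in the Hermitian case (the matrix is made up; the package routines return rigorous enclosures instead):

```julia
using LinearAlgebra

# Projector onto the invariant subspace of the k smallest eigenvalues
A = Symmetric([2.0 1.0 0.0; 1.0 2.0 1.0; 0.0 1.0 2.0])
F = eigen(A)                 # Hermitian eigenvalues come back sorted
k = 2
Vk = F.vectors[:, 1:k]
P = Vk * Vk'                 # spectral projector P = V_k V_kᴴ

# Defining properties: idempotent and commutes with A
@assert norm(P * P - P) < 1e-12
@assert norm(A * P - P * A) < 1e-12
```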
### Result Types

- `RigorousSpectralProjectorsResult` - Spectral projector result
- `SchurSpectralProjectorResult` - Schur-based result
## Block Schur Decomposition
Rigorous block Schur decomposition for eigenvalue clustering.
- `rigorous_block_schur` - Block Schur decomposition
- `extract_cluster_block` - Extract the diagonal block for a cluster
- `refine_schur_decomposition` - Newton-based iterative refinement
- `rigorous_schur_bigfloat` - BigFloat precision variant
- `certify_schur_decomposition` - Wrap a `SchurRefinementResult` in `BallMatrix` form
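The reordering that cluster-block extraction relies on can be sketched with the stdlib `schur`/`ordschur` (illustrative matrix and selection, not the package's implementation):

```julia
using LinearAlgebra

# A defective cluster at λ = 1 plus an isolated eigenvalue at 5
A = [1.0 2.0 0.0;
     0.0 1.0 1.0;
     0.0 0.0 5.0]

F = schur(A)

# Move the clustered eigenvalues to the leading positions
select = [abs(λ - 1.0) < 0.5 for λ in F.values]
Fo = ordschur(F, select)

# The leading 2×2 diagonal block of T now carries the cluster
T11 = Fo.T[1:2, 1:2]
@assert all(abs(λ - 1.0) < 0.5 for λ in eigvals(T11))
```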
### Auxiliary Functions

- `newton_schulz_orthogonalize!` - Newton-Schulz orthogonalization
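A minimal out-of-place sketch of the Newton-Schulz iteration $Q \leftarrow \tfrac{1}{2}\, Q\,(3I - Q^H Q)$, which converges quadratically when $\|Q^H Q - I\| < 1$ (the in-place package routine may differ in details such as the stopping criterion):

```julia
using LinearAlgebra

# Newton-Schulz orthogonalization: drives Q toward the nearest
# unitary factor as long as the start is close enough to orthogonal
function newton_schulz(Q; iters = 10)
    for _ in 1:iters
        Q = Q * (3I - Q' * Q) / 2
    end
    return Q
end

θ = 0.3
Q0 = [cos(θ) -sin(θ); sin(θ) cos(θ)] + 1e-4 * randn(2, 2)
Q = newton_schulz(Q0)
@assert norm(Q' * Q - I) < 1e-12
```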
### Result Types

- `RigorousBlockSchurResult` - Block Schur result
- `SchurRefinementResult` - Refinement result
## Limitations of Mixed-Precision Schur Refinement
The iterative Schur refinement algorithm ([9]) achieves fast convergence by solving the triangular matrix equation $\operatorname{stril}(TL - LT) = -E$ in low precision (Float64). This works well when the eigenvalues of $T$ are well-separated in Float64 arithmetic. However, the approach can fail on matrices with extreme eigenvalue clustering.
### When does it fail?
The triangular solve requires dividing by eigenvalue differences $T_{ii} - T_{jj}$. When these differences fall below Float64 machine epsilon ($\approx 2.2 \times 10^{-16}$), the low-precision solver cannot distinguish them and zeroes out the corresponding entries of $L$. This prevents the correction $W = L - L^H$ from fixing the associated components of the residual.
**Symptom:** the orthogonality defect $\|Q^H Q - I\|$ converges quadratically (Newton-Schulz keeps working), but the residual $\|\operatorname{stril}(Q^H A Q)\|_2 / \|A\|_F$ stalls at a fixed value, typically around $10^{-22}$ to $10^{-16}$, far above the target tolerance.
Solving the triangular equation in BigFloat instead does not help either: when eigenvalue gaps are tiny (e.g. $10^{-60}$), the entries of $L$ become enormous ($\|L\| \gg 1$), violating the small-perturbation assumption of the Newton-like iteration, which then diverges.
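Both failure mechanisms are easy to reproduce directly (the magnitudes below are illustrative, chosen to mirror the regimes described above):

```julia
# A gap below Float64 eps is resolved in BigFloat but collapses to
# zero in Float64, so the low-precision triangular solve divides by 0
λ1 = big"1.0"
λ2 = big"1.0" + big"1.0e-17"
@assert λ2 > λ1                           # BigFloat sees the gap
@assert Float64(λ2) - Float64(λ1) == 0.0  # Float64 does not

# Solving in BigFloat does not rescue the iteration either: entries
# of L scale like E_ij / (T_ii − T_jj), so a tiny gap makes ‖L‖ huge
gap = big"1.0e-60"
@assert abs(big"1.0e-30" / gap) > big"1.0e20"
```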
### Example: GKW transfer operator (257 × 257)
A concrete example arises from Gauss-Kuzmin-Wirsing transfer operators truncated at $K = 256$. The resulting matrix has:
| Property | Value |
|---|---|
| Condition number | $\approx 1.5 \times 10^{61}$ |
| Eigenvalues with $\lvert\lambda\rvert$ below Float64 resolution | $\approx 80\%$ of the spectrum |
| Minimum eigenvalue separation | $\approx 3.5 \times 10^{-60}$ |
| Spectral radius | $\approx 1.0$ |
With a Float64 Schur seed, refinement stalls at residual $\approx 3.75 \times 10^{-22}$ (target: $\approx 8.9 \times 10^{-160}$), because Float64 cannot resolve 80% of the eigenvalue spectrum.
### Recommended alternatives
For matrices where the eigenvalue separation is below Float64 resolution:
1. **Direct BigFloat Schur via GenericSchur.jl**: compute the Schur decomposition directly at the target precision, bypassing iterative refinement entirely. Pass the result via the `schur_seed` keyword argument of `rigorous_schur_bigfloat` to certify it.
2. **Block Schur refinement**: group nearly-degenerate eigenvalue clusters into blocks and refine the block structure, avoiding division by small eigenvalue differences. See `rigorous_block_schur`.