API - Core Types

Core ball arithmetic types and basic operations.

Ball Type

BallArithmetic.Ball — Type
Ball{T, CT}

Closed floating-point ball with midpoint type CT and radius type T. Each value represents the set { c + δ : |δ| ≤ r } where c::CT is the stored midpoint and r::T ≥ 0 is the radius. Both real and complex midpoints are supported as long as the radius is expressed in the underlying real field. The type behaves as a number and participates in arithmetic with rigorous outward rounding.

source
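
The set semantics above can be illustrated with a toy Float64 model. `ToyBall` and `in_ball` are hypothetical names used only for illustration, not part of the package, and the sketch omits the outward rounding the real type performs:

```julia
# Hypothetical minimal model of the midpoint-radius representation; the real
# Ball type additionally performs outward rounding, which this sketch omits.
struct ToyBall
    c::Float64   # midpoint
    r::Float64   # radius, r ≥ 0
end

# x belongs to the ball iff |x - c| ≤ r, i.e. x ∈ { c + δ : |δ| ≤ r }
in_ball(x::Real, b::ToyBall) = abs(x - b.c) <= b.r

b = ToyBall(1.0, 0.25)       # the closed interval [0.75, 1.25]
in_ball(1.2, b)              # true
in_ball(1.3, b)              # false
```

Note the ball is closed: both endpoints 0.75 and 1.25 belong to the set.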
BallArithmetic.BallF64 — Type
BallF64

Alias of Ball with a Float64 midpoint and Float64 radius; see Ball above for the full description.

source
BallArithmetic.BallComplexF64 — Type
BallComplexF64

Alias of Ball with a ComplexF64 midpoint and Float64 radius; see Ball above for the full description.

source
BallArithmetic.mid — Function
mid(x)

Return the midpoint of x. For plain numbers the midpoint is the value itself, while for balls the stored center is returned.

source
mid(v::AbstractVector{<:Ball})

Return a vector of midpoints for a collection of balls.

source
mid(M::AbstractMatrix{<:Ball})

Return a matrix of midpoints for a collection of balls.

source
mid(A::AbstractArray)

Fallback definition that treats ordinary arrays as their own midpoint representation. Specialisations for BallArray overload this method to return the stored midpoint data.

source
mid(A::BallArray)

Return the stored midpoint array.

source
mid(A::AbstractMatrix)

Return the midpoint matrix associated with A. For plain matrices the midpoint equals the matrix itself; for BallMatrix values this method is extended elsewhere to extract the stored midpoint data.

source
mid(v::AbstractVector)

Treat ordinary vectors as their own midpoints.

source
BallArithmetic.rad — Function
rad(x)

Return the radius associated with x. Numbers default to a zero radius, and balls return their stored uncertainty.

source
rad(v::AbstractVector{<:Ball})

Return a vector of radii for a collection of balls.

source
rad(M::AbstractMatrix{<:Ball})

Return a matrix of radii for a collection of balls.

source
rad(A::AbstractArray)

Return a zero array of matching size that serves as the default radius for non-ball arrays.

source
rad(A::BallArray)

Return the stored radius array.

source
rad(A::AbstractMatrix{T})

Return a matrix of radii matching the size of A. For non-ball matrices this defaults to a zero matrix, while for BallMatrix values the method is overloaded to provide the stored uncertainty information.

source
rad(A::AbstractMatrix{Complex{T}})

Return a matrix of real radii matching the size of the complex matrix A. Even for complex entries the radius is measured over the underlying real field, hence the resulting matrix has element type T.

source
rad(v::AbstractVector)

Default radius for non-ball vectors: a zero vector of the appropriate floating-point type.

source
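
The documented scalar and array fallbacks can be sketched with hypothetical stand-ins (`toy_mid`/`toy_rad` are illustrative names, not the package's methods):

```julia
# Plain numbers and vectors are their own midpoints and carry zero radius,
# mirroring the fallback behaviour documented above.
toy_mid(x::Number) = x
toy_rad(x::Number) = zero(x)
toy_mid(v::AbstractVector) = v
toy_rad(v::AbstractVector{T}) where {T} = zeros(T, length(v))

toy_mid(3.5), toy_rad(3.5)   # (3.5, 0.0)
toy_rad([1.0, 2.0])          # [0.0, 0.0]
```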
BallArithmetic.midtype — Function
midtype(::Ball)

Return the type used to store the midpoint component of a Ball. This is useful for allocating arrays that mirror the internal layout of a ball or a collection of balls.

source
BallArithmetic.radtype — Function
radtype(x)

Return the floating-point type used to store radii for x. The helper accepts either a ball instance or the associated type, mirroring the behaviour of midtype.

source
BallArithmetic.inf — Function
inf(x::Ball)

Return the infimum (lower endpoint) of the set represented by x by evaluating mid(x) - rad(x) with downward rounding.

source
BallArithmetic.sup — Function
sup(x::Ball)

Return the supremum (upper endpoint) of the set represented by x by evaluating mid(x) + rad(x) with upward rounding.

source
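
The effect of directed rounding on the endpoints can be emulated with prevfloat/nextfloat. `toy_inf`/`toy_sup` are hypothetical helpers: stepping one ulp outward over-approximates the rounding the package performs, so the enclosure stays safe even though it is slightly wider:

```julia
# Emulated directed rounding: move the computed endpoint one ulp outward so
# that the true endpoints c - r and c + r are always enclosed.
toy_inf(c, r) = prevfloat(c - r)
toy_sup(c, r) = nextfloat(c + r)

toy_inf(1.0, 0.1), toy_sup(1.0, 0.1)   # slightly wider than (0.9, 1.1)
```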
BallArithmetic.ball_hull — Function
ball_hull(a::Ball, b::Ball)

Return the smallest ball that contains both a and b. For real centres the function encloses the convex hull on the real line. When the midpoints are complex the result encloses both discs while keeping the centre as close as possible to one of the inputs so that subsequent operations remain stable.

source
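
For real centres the hull computation reduces to interval endpoints. The sketch below (`toy_hull` is a hypothetical helper working on plain (midpoint, radius) pairs) shows the idea; the package's ball_hull additionally rounds outward and supports complex centres:

```julia
# Real-line hull of two midpoint-radius pairs: take the extreme endpoints
# and re-centre, with a radius reaching both.
function toy_hull(c1, r1, c2, r2)
    lo = min(c1 - r1, c2 - r2)       # leftmost endpoint of either ball
    hi = max(c1 + r1, c2 + r2)       # rightmost endpoint
    c = (lo + hi) / 2
    (c, max(hi - c, c - lo))         # radius covering both endpoints
end

toy_hull(0.0, 1.0, 3.0, 0.5)         # hull of [-1, 1] and [2.5, 3.5]
```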
BallArithmetic.intersect_ball — Function
intersect_ball(a::Ball, b::Ball)

Return the intersection of the real balls a and b. When the balls do not overlap the function returns nothing to indicate that the intersection is empty.

source
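
The empty-versus-nonempty logic can be sketched on plain real intervals (`toy_intersect` is a hypothetical helper, without the outward rounding of the package version):

```julia
# Real-interval intersection in midpoint-radius form: overlapping balls yield
# a (midpoint, radius) pair, disjoint balls yield nothing.
function toy_intersect(c1, r1, c2, r2)
    lo = max(c1 - r1, c2 - r2)
    hi = min(c1 + r1, c2 + r2)
    lo > hi && return nothing            # empty intersection
    ((lo + hi) / 2, (hi - lo) / 2)
end

toy_intersect(0.0, 1.0, 1.5, 1.0)        # (0.75, 0.25)
toy_intersect(0.0, 1.0, 5.0, 1.0)        # nothing
```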

Array Types

BallArithmetic.BallArray — Type
BallArray{T, N, NT, BT, CA, RA}

Multi-dimensional array whose entries are Ball values. The type stores midpoint data c::CA and radius data r::RA separately while presenting an AbstractArray{BT, N} interface that behaves like an array of balls. The parameters mirror the element type and storage layout and are inferred automatically from the provided midpoint and radius containers.

source
BallArithmetic.BallMatrix — Type
BallMatrix{T, NT, BT, CM, RM}

Alias for the two-dimensional BallArray used to represent matrices with rigorous error control. The type parameters mirror those of BallArray and describe the base floating-point type T, the element type NT, the Ball container BT, and the concrete matrix types used for the midpoints (CM) and radii (RM).

Users typically construct instances through the BallMatrix constructors below rather than specifying these parameters explicitly.

source

Norm Bounds

BallArithmetic.upper_bound_norm — Function
upper_bound_norm(A::BallMatrix, p::Real = 2)

Compute a rigorous upper bound for the p-norm of a BallMatrix by evaluating the norms of the midpoint and radius arrays with upward rounding. The default p = 2 corresponds to the Frobenius norm.

source
upper_bound_norm(v::BallVector, p::Real = 2)

Compute a rigorous upper bound for the p-norm of a BallVector.

source
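
The bound rests on the triangle inequality, checked here in plain Float64 (`toy_ub_fro` is a hypothetical helper; the package evaluates the norms with upward rounding): every member C + E of the ball matrix, with |E| ≤ R entrywise, satisfies ‖C + E‖_F ≤ ‖C‖_F + ‖R‖_F.

```julia
using LinearAlgebra

# Triangle-inequality bound in the Frobenius norm: norm(⋅) on a matrix is the
# Frobenius norm, and |E| ≤ R entrywise implies norm(E) ≤ norm(R).
toy_ub_fro(C, R) = norm(C) + norm(R)

C = [1.0 2.0; 3.0 4.0]                   # midpoints
R = fill(0.1, 2, 2)                      # radii
toy_ub_fro(C, R) >= norm(C .+ R)         # a worst-case member stays below the bound
```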
BallArithmetic.upper_bound_L2_opnorm — Function
upper_bound_L2_opnorm(A::BallMatrix{T})

Returns a rigorous upper bound on the ℓ²-norm of the ball matrix A, using the smaller of the Collatz bound and the interpolation bound √(‖A‖₁·‖A‖∞).

source
BallArithmetic.collatz_upper_bound_L2_opnorm — Function
collatz_upper_bound_L2_opnorm(A::BallMatrix; iterates=10)

Give a rigorous upper bound on the ℓ²-norm of the matrix A by using Collatz's theorem.

We use Perron theory here: if |A| ≤ B entrywise with B nonnegative, then ρ(A) ≤ ρ(B) by Wielandt's theorem.

The keyword argument iterates sets how many times the vector of ones is iterated before Collatz's estimate is applied.

source
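
The underlying estimate can be sketched in plain Float64 (`toy_collatz_l2` is a hypothetical helper without the directed rounding the package uses): for B = |A|ᵀ|A| ≥ 0 and any positive vector x, Collatz's theorem gives ρ(B) ≤ max_i (Bx)_i / x_i, and ‖A‖₂² ≤ ρ(B).

```julia
using LinearAlgebra

# Power-iterate the vector of ones under B = |A|'|A|, then apply Collatz's
# bound ρ(B) ≤ max_i (Bx)_i / x_i. Assumes B * x stays entrywise positive.
function toy_collatz_l2(A; iterates = 10)
    B = abs.(A)' * abs.(A)
    x = ones(size(B, 1))
    for _ in 1:iterates
        x = B * x
        x ./= maximum(x)          # rescale to avoid overflow
    end
    sqrt(maximum((B * x) ./ x))   # upper bound on ‖A‖₂ (in exact arithmetic)
end

A = [1.0 2.0; 3.0 4.0]
toy_collatz_l2(A) >= opnorm(A)    # holds up to rounding error
```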
BallArithmetic.svd_bound_L2_opnorm — Function
svd_bound_L2_opnorm(A::BallMatrix{T})

Returns a rigorous upper bound on the ℓ²-norm (spectral norm) of the ball matrix A using the rigorous SVD enclosure from rigorous_svd.

This method computes the largest certified singular value, providing the tightest possible upper bound on ‖A‖₂. The bound is essentially exact (typically <0.01% overestimation), but requires O(n³) computation for the SVD.

Comparison with other L2 norm methods

Method                          Speed   Accuracy           Best use case
────────────────────────────────────────────────────────────────────────
svd_bound_L2_opnorm             O(n³)   ~0% overest.       When accuracy is critical
collatz_upper_bound_L2_opnorm   Fast    0-500% overest.*   Structured matrices
upper_bound_L2_opnorm           Fast    min of above       General use

*Collatz performs well on structured matrices (tridiagonal: ~0%, Hilbert: ~0%, diagonally dominant: ~26%) but poorly on random matrices (~200-500%).

Example

A = BallMatrix(randn(50, 50))
bound = svd_bound_L2_opnorm(A)  # Tight bound, slower
fast_bound = upper_bound_L2_opnorm(A)  # Looser bound, faster

See also: upper_bound_L2_opnorm, collatz_upper_bound_L2_opnorm, rigorous_svd

source
BallArithmetic.rump_oishi_2024_triangular_bound — Function
rump_oishi_2024_triangular_bound(T::BallMatrix, k::Int; method=:hybrid)

Compute rigorous upper bound on ‖T[1:k,:]⁻¹‖₂ for upper triangular T.

Implements the bounds from Rump & Oishi (2024) with improved handling of the Collatz spectral radius estimate for triangular matrices.

Arguments

  • T::BallMatrix: Upper triangular ball matrix
  • k::Int: Block size (compute bound for first k rows/columns)
  • method::Symbol: Method to use
    • :psi - ψ-bound method (original RumpOishi2024)
    • :backward - Backward substitution method
    • :hybrid - Use best of both methods (default)

Method Details

Ψ-bound method:

For block structure T = [A B; 0 D] where A is k×k:

  1. Compute E = A⁻¹B via backward substitution
  2. Compute F = D_d⁻¹ D_f where D = D_d + D_f (diagonal + off-diagonal)
  3. Estimate using ψ bounds: ‖T⁻¹‖ ≤ max(α, β) · ψ(‖E‖), where α = ‖A⁻¹‖ and β = ‖D_d⁻¹‖/(1-‖F‖)

Backward substitution method:

Recursively compute bounds for singular values using: σᵢ = (1/|dᵢᵢ|) · √(1 + ‖bᵢ‖² · σᵢ₊₁²) where bᵢ is the i-th row tail and dᵢᵢ is the i-th diagonal.

Returns

Rigorous upper bound on ‖T[1:k,:]⁻¹‖₂ as a floating-point number.

Reference

  • Rump & Oishi (2024), "A Note on Oishi's Lower Bound...", SIAM J. Matrix Anal. Appl.
source
BallArithmetic.backward_singular_value_bound — Function
backward_singular_value_bound(A::BallMatrix)

Compute rigorous upper bounds for singular values of upper triangular matrix using backward recursion.

For upper triangular A, computes bounds σᵢ such that σᵢ(A) ≤ σᵢ for i=1,...,n using the recursion:

σᵢ = (1/|aᵢᵢ|) · √(1 + ‖row_tail_i‖² · σᵢ₊₁²)

starting from σₙ₊₁ = 0.
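
A plain-Float64 transcription of this recursion is sketched below (`toy_backward_sv` is a hypothetical helper; the package version adds directed rounding to make the bounds rigorous). In the diagonal case the recursion reduces to the reciprocals of the diagonal entries:

```julia
using LinearAlgebra

# Backward recursion from the last row: σ[n+1] = 0, then
# σ[i] = (1/|a_ii|) · √(1 + ‖row tail‖² · σ[i+1]²).
function toy_backward_sv(A::UpperTriangular)
    n = size(A, 1)
    σ = zeros(n + 1)                     # σ[n+1] = 0 starts the recursion
    for i in n:-1:1
        tail = norm(A[i, i+1:n])         # ‖row tail‖₂ (empty for i = n)
        σ[i] = sqrt(1 + tail^2 * σ[i+1]^2) / abs(A[i, i])
    end
    σ[1:n]
end

toy_backward_sv(UpperTriangular([2.0 0.0; 0.0 4.0]))   # diagonal case: [0.5, 0.25]
```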

Reference

  • Rump & Oishi (2024), Theorem 3.2
source

L2 Operator Norm: Method Comparison

The package provides several methods for computing rigorous upper bounds on ‖A‖₂:

Method                          Complexity   Accuracy       Description
────────────────────────────────────────────────────────────────────────────────
svd_bound_L2_opnorm             O(n³)        Exact (~0%)    Computes rigorous SVD, returns largest σ
collatz_upper_bound_L2_opnorm   O(k·n²)      0-500%*        Iterative power method on |A|ᵀ|A|
sqrt(‖·‖₁·‖·‖∞)                 O(n²)        87-564%        Uses ‖A‖₂ ≤ √(‖A‖₁·‖A‖∞)
upper_bound_L2_opnorm           O(k·n²)      min of above   Takes minimum of Collatz and sqrt

*Collatz accuracy depends heavily on matrix structure:

  • Tridiagonal/banded: ~0% overestimation (essentially exact)
  • Hilbert-like matrices: ~0% overestimation
  • Diagonally dominant: ~26% overestimation
  • Random dense matrices: 200-500% overestimation

Benchmark Results (n×n matrices)

Matrix type               Collatz   sqrt(‖·‖₁·‖·‖∞)   SVD bound
─────────────────────────────────────────────────────────────────
Random (n=50)             186%      249%              0%
Diagonally dominant       26%       36%               0%
Tridiagonal               0.1%      0.1%              0%
Symmetric positive def.   140%      230%              0%
Low rank + noise          54%       143%              0%
Hilbert-like              0%        117%              0%

Recommendations

  • General use: upper_bound_L2_opnorm (fast, takes best of cheap bounds)
  • Accuracy critical: svd_bound_L2_opnorm (exact but O(n³))
  • Structured matrices: collatz_upper_bound_L2_opnorm often gives near-exact results

Oishi 2023 Schur Complement Bounds

Lower bounds on minimum singular values using the Schur complement method from Oishi (2023), "Lower bounds for the smallest singular values of generalized asymptotic diagonal dominant matrices".

BallArithmetic.Oishi2023Result — Type
Oishi2023Result{T}

Result from the Oishi 2023 Schur complement bound computation.

Fields

  • sigma_min_lower: Rigorous lower bound on the minimum singular value
  • G_inv_upper: Upper bound on ‖G⁻¹‖₂ (= 1/σ_min)
  • A_inv_norm: Upper bound on ‖A⁻¹‖₂
  • A_inv_B_norm: Upper bound on ‖A⁻¹B‖₂
  • C_A_inv_norm: Upper bound on ‖CA⁻¹‖₂
  • schur_contraction: Upper bound on ‖Dd⁻¹(Df - CA⁻¹B)‖₂
  • verified: Whether all conditions of Theorem 1 are satisfied
  • block_size: The block size m used
source
BallArithmetic.oishi_2023_sigma_min_bound — Function
oishi_2023_sigma_min_bound(G::BallMatrix{T}, m::Int) where {T}

Compute a rigorous lower bound on the minimum singular value of G using the Schur complement method from Oishi (2023).

Arguments

  • G::BallMatrix: The matrix to analyze
  • m::Int: Block size for the partition (A is m×m)

Method (Theorem 1 from Oishi 2023)

Partition G as:

G = [A  B]
    [C  D]

where A ∈ M_m, B ∈ M_{m,n-m}, C ∈ M_{n-m,m}, D ∈ M_{n-m,n-m}.

Let Dd and Df be the diagonal and off-diagonal parts of D.

If the following conditions hold:

  1. ‖A⁻¹B‖₂ < 1
  2. ‖CA⁻¹‖₂ < 1
  3. ‖Dd⁻¹(Df - CA⁻¹B)‖₂ < 1

Then G is invertible and:

‖G⁻¹‖₂ ≤ max{‖A⁻¹‖₂, ‖Dd⁻¹‖₂/(1 - ‖Dd⁻¹(Df - CA⁻¹B)‖₂)}
         / ((1 - ‖A⁻¹B‖₂)(1 - ‖CA⁻¹‖₂))

Since σ_min(G) = 1/‖G⁻¹‖₂, this gives a lower bound on σ_min.

Returns

Oishi2023Result containing the bounds and verification status.

Reference

Oishi, S. (2023), Japan J. Indust. Appl. Math. 40:1569-1585

source
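
Theorem 1's bound can be checked numerically with plain floating-point linear algebra. `toy_oishi_bound` below is a hypothetical, non-rigorous sketch (the package evaluates every norm with rigorous ball arithmetic):

```julia
using LinearAlgebra

# Partition G into blocks, verify the three conditions of Theorem 1, and
# evaluate the bound on ‖G⁻¹‖₂. Float64 only, no directed rounding.
function toy_oishi_bound(G, m)
    n = size(G, 1)
    A, B = G[1:m, 1:m], G[1:m, m+1:n]
    C, D = G[m+1:n, 1:m], G[m+1:n, m+1:n]
    Dd = Diagonal(D); Df = D - Dd                 # diagonal / off-diagonal split
    AinvB, CAinv = A \ B, C / A
    γ = opnorm(Dd \ (Df - C * AinvB))             # Schur contraction
    @assert opnorm(AinvB) < 1 && opnorm(CAinv) < 1 && γ < 1   # Theorem 1 conditions
    num = max(opnorm(inv(A)), opnorm(inv(Dd)) / (1 - γ))
    num / ((1 - opnorm(AinvB)) * (1 - opnorm(CAinv)))         # ≥ ‖G⁻¹‖₂
end

G = [4.0 1.0; 1.0 4.0]
toy_oishi_bound(G, 1)       # ≈ 0.474; actual ‖G⁻¹‖₂ = 1/3
```

For this diagonally dominant G the bound 0.474 encloses ‖G⁻¹‖₂ = 1/3, giving σ_min(G) ≥ 1/0.474 ≈ 2.11 against the true σ_min = 3.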

Rump-Oishi 2024 Improved Schur Complement Bounds

Improved bounds from Rump & Oishi (2024), "A Note on Oishi's Lower Bound for the Smallest Singular Value of Linearized Galerkin Equations".

Key Improvements over Oishi 2023

  1. Removes conditions 1 & 2: No longer requires ‖A⁻¹B‖ < 1 and ‖CA⁻¹‖ < 1. Only condition 3 (‖Dd⁻¹(Df - CA⁻¹B)‖ < 1) is needed.

  2. Uses exact ψ(N) formula: Instead of 1/(1-‖N‖), uses ψ(μ) = √(1 + 2αμ√(1-α²) + α²μ²) which is tighter and works for any μ ≥ 0.

  3. Fast γ bound (optional): Uses π(N) = √(‖N‖₁·‖N‖∞) to quickly check if the method will succeed, avoiding expensive matrix products and reducing complexity from O((n-m)²m) to O((n-m)m²).

Example

using BallArithmetic, LinearAlgebra

# Create a diagonally dominant matrix (Example 3 from paper)
k, n = 0.9, 50
G_mid = [i == j ? Float64(i) : k^abs(i-j) for i in 1:n, j in 1:n]
G = BallMatrix(G_mid, zeros(n, n))

# Compute bounds with optimal block size
best_m, result = rump_oishi_2024_optimal_block_size(G; max_m=30)

if result.verified
    println("σ_min(G) ≥ ", result.sigma_min_lower)
    println("‖G⁻¹‖ ≤ ", result.G_inv_upper)
    println("Optimal block size: m = ", best_m)
    println("Used fast γ bound: ", result.used_fast_gamma)
end

API Reference

BallArithmetic.RumpOishi2024Result — Type
RumpOishi2024Result{T}

Result from the Rump-Oishi 2024 improved Schur complement bound computation.

This improves upon Oishi 2023 by:

  1. Removing the conditions ‖A⁻¹B‖ < 1 and ‖CA⁻¹‖ < 1
  2. Using the exact ψ(N) formula instead of 1/(1-‖N‖)
  3. Optionally using a fast γ bound to avoid expensive matrix products

Fields

  • sigma_min_lower: Rigorous lower bound on the minimum singular value
  • G_inv_upper: Upper bound on ‖G⁻¹‖₂ (= 1/σ_min)
  • A_inv_norm: Upper bound on ‖A⁻¹‖₂
  • psi_A_inv_B: ψ(‖A⁻¹B‖₂) factor
  • psi_C_A_inv: ψ(‖CA⁻¹‖₂) factor
  • schur_contraction: Upper bound on ‖Dd⁻¹(Df - CA⁻¹B)‖₂
  • used_fast_gamma: Whether the fast γ bound was used
  • verified: Whether the method succeeded (only requires schur_contraction < 1)
  • block_size: The block size m used

Reference

Rump, S.M. & Oishi, S. (2024), "A Note on Oishi's Lower Bound for the Smallest Singular Value of Linearized Galerkin Equations"

source
BallArithmetic.rump_oishi_2024_sigma_min_bound — Function
rump_oishi_2024_sigma_min_bound(G::BallMatrix{T}, m::Int;
                                 try_fast_gamma::Bool=true) where {T}

Compute a rigorous lower bound on the minimum singular value of G using the improved Schur complement method from Rump-Oishi (2024).

Arguments

  • G::BallMatrix: The matrix to analyze
  • m::Int: Block size for the partition (A is m×m)
  • try_fast_gamma::Bool=true: Whether to try the fast γ bound first

Improvements over Oishi 2023

  1. Removes conditions 1 & 2: No longer requires ‖A⁻¹B‖ < 1 and ‖CA⁻¹‖ < 1. Only condition 3 (‖Dd⁻¹(Df - CA⁻¹B)‖ < 1) is needed.

  2. Uses exact ψ(N) formula: Instead of 1/(1-‖N‖), uses ψ(μ) = √(1 + 2αμ√(1-α²) + α²μ²) which is tighter and works for any μ ≥ 0.

  3. Fast γ bound (optional): Uses π(N) = √(‖N‖₁·‖N‖∞) to quickly check if the method will succeed, avoiding expensive matrix products.

Method (Theorem 1.3 from Rump-Oishi 2024)

Partition G as:

G = [A  B]
    [C  D]

If ‖Dd⁻¹(Df - CA⁻¹B)‖ < 1, then:

‖G⁻¹‖ ≤ max{‖A⁻¹‖, ‖Dd⁻¹‖/(1 - ‖Dd⁻¹(Df - CA⁻¹B)‖)} · ψ(‖A⁻¹B‖) · ψ(‖CA⁻¹‖)

Returns

RumpOishi2024Result containing the bounds and verification status.

Reference

Rump, S.M. & Oishi, S. (2024), "A Note on Oishi's Lower Bound for the Smallest Singular Value of Linearized Galerkin Equations"

source
BallArithmetic.rump_oishi_2024_optimal_block_size — Function
rump_oishi_2024_optimal_block_size(G::BallMatrix{T}; max_m::Int=100,
                                    try_fast_gamma::Bool=true) where {T}

Find the optimal block size m that gives the tightest bound on σ_min using the Rump-Oishi 2024 method.

Returns a tuple (best_m, best_result).

source
BallArithmetic.psi_schur_factor — Function
psi_schur_factor(μ::T) where {T}

Compute ψ(μ) from Lemma 1.2 of Rump-Oishi 2024.

For a block triangular matrix H = [I, -N; 0, I] with ‖N‖ = μ, this computes the exact value of ‖H‖ = ‖H⁻¹‖.

The formula is:

α = √(½ (1 + 1/√(1 + 4/μ²)))
ψ(μ) = √(1 + 2αμ√(1-α²) + α²μ²)

This replaces the weaker bound 1/(1-μ) used in Oishi 2023, and works for any μ ≥ 0 (no restriction μ < 1).

Returns

ψ(μ) as an upper bound (computed with directed rounding).

source
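
The formula can be checked directly in Float64 (`toy_psi` is a hypothetical transcription without the directed rounding the package applies). For the block matrix H = [I -N; 0 I] with ‖N‖ = μ, the norm is known to equal (μ + √(μ² + 4))/2, which ψ(μ) reproduces, and ψ(μ) < 1/(1-μ) on 0 < μ < 1:

```julia
# Direct transcription of the two-step formula from Lemma 1.2.
function toy_psi(μ)
    α = sqrt((1 + 1 / sqrt(1 + 4 / μ^2)) / 2)
    sqrt(1 + 2α * μ * sqrt(1 - α^2) + α^2 * μ^2)
end

toy_psi(0.5)   # ≈ 1.28, tighter than 1/(1 - 0.5) = 2
toy_psi(2.0)   # ≈ 2.41, finite even though μ ≥ 1
```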
BallArithmetic.pi_norm — Function
pi_norm(M::BallMatrix{T}) where {T}

Compute π(M) = √(‖M‖₁ · ‖M‖∞), an upper bound on ‖M‖₂.

This is a cheap bound useful for the fast γ estimate in Rump-Oishi 2024.

source
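
The underlying interpolation inequality ‖M‖₂ ≤ √(‖M‖₁·‖M‖∞) can be checked in plain Float64 (`toy_pi` is a hypothetical helper; the package evaluates it with upward rounding):

```julia
using LinearAlgebra

# opnorm(M, 1) is the maximum column sum, opnorm(M, Inf) the maximum row sum;
# their geometric mean bounds the spectral norm from above.
toy_pi(M) = sqrt(opnorm(M, 1) * opnorm(M, Inf))

M = [1.0 2.0; 3.0 4.0]
toy_pi(M) >= opnorm(M, 2)   # √(6 · 7) ≈ 6.48 vs ‖M‖₂ ≈ 5.46
```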