API - Linear Systems
Verified linear system solvers and related functions.
Gaussian Elimination
BallArithmetic.GaussianEliminationResult — Type
GaussianEliminationResult{T, VT}

Result from interval Gaussian elimination.
Fields
- solution::VT: Solution enclosure (or empty if singular)
- success::Bool: Whether elimination succeeded
- singular::Bool: Whether matrix is provably singular
- L::Matrix: Lower triangular factor (if computed)
- U::BallMatrix: Upper triangular factor
- p::Vector{Int}: Permutation vector (pivoting)
BallArithmetic.interval_gaussian_elimination — Function
interval_gaussian_elimination(A::BallMatrix{T}, b::BallVector{T};
pivoting::Symbol=:partial,
store_factors::Bool=false) where {T}

Solve Ax = b using interval Gaussian elimination with partial pivoting.
Algorithm
Forward elimination with partial pivoting:
- For each column k, find pivot with maximum magnitude
- Swap rows if needed
- Eliminate entries below pivot
- Check for singular matrix (zero on diagonal)
Backward substitution:
- Solve Ux = y from bottom to top
- x_i = (y_i - Σ_{j>i} u_{ij} x_j) / u_{ii}
Arguments
- A: Coefficient ball matrix (n×n)
- b: Right-hand side ball vector (n)
- pivoting: Pivoting strategy (:partial or :none)
- store_factors: Whether to store L and U factors
Returns
GaussianEliminationResult with solution and factorization information.
Example
A = BallMatrix([2.0 1.0 1.0; 4.0 3.0 3.0; 8.0 7.0 9.0], fill(1e-10, 3, 3))
b = BallVector([4.0, 10.0, 24.0], fill(1e-10, 3))
result = interval_gaussian_elimination(A, b)
if result.success
println("Solution: ", result.solution)
elseif result.singular
println("Matrix is singular")
end

Notes
- O(n³) complexity
- Detects singularity during elimination
- Partial pivoting improves numerical stability
- Can suffer from overestimation (wrapping effect)
- Preconditioning recommended for better accuracy
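The forward-elimination and back-substitution steps above can be sketched in plain floating point. The following Python translation works on midpoints only (no interval radii are tracked, and `solve_ge` is a hypothetical helper name, not part of BallArithmetic):

```python
# Illustrative midpoint-only sketch of Gaussian elimination with
# partial pivoting; the interval version flags pivots containing zero.
def solve_ge(A, b):
    n = len(A)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    for k in range(n):
        # partial pivoting: pick the row with largest magnitude in column k
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if A[p][k] == 0.0:
            raise ValueError("singular pivot")
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):   # eliminate entries below the pivot
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # back substitution: x_i = (y_i - sum_{j>i} u_ij * x_j) / u_ii
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

print(solve_ge([[2.0, 1.0, 1.0], [4.0, 3.0, 3.0], [8.0, 7.0, 9.0]],
               [4.0, 10.0, 24.0]))  # ≈ [1.0, 1.0, 1.0]
```

In the interval version each pivot is a ball, and a pivot whose ball contains zero triggers the singularity flag rather than an exception.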
BallArithmetic.interval_gaussian_elimination_det — Function
interval_gaussian_elimination_det(A::BallMatrix{T}) where {T}

Compute determinant enclosure using Gaussian elimination.
Algorithm
The determinant is the product of the diagonal pivots after elimination, corrected by the permutation sign: det(A) = sign(P) × ∏_{i=1}^n u_{ii}
Arguments
A: Square ball matrix (n×n)
Returns
Ball enclosure of the determinant.
Example
A = BallMatrix([2.0 1.0; 1.0 2.0], fill(1e-10, 2, 2))
det_enclosure = interval_gaussian_elimination_det(A)
println("det(A) ∈ ", det_enclosure)

Notes
- O(n³) complexity
- Can suffer from overestimation
- Zero in result indicates possible singularity
- Useful for singularity detection
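The pivot-product formula can be checked with a small floating-point sketch (Python, midpoint arithmetic only; `det_ge` is a hypothetical helper, not the package routine):

```python
# Midpoint-only sketch: det(A) = sign(P) * product of pivots after elimination.
def det_ge(A):
    n = len(A)
    A = [row[:] for row in A]
    sign = 1.0
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if A[p][k] == 0.0:
            return 0.0  # interval code would report 0 in the enclosure instead
        if p != k:
            A[k], A[p] = A[p], A[k]
            sign = -sign            # each row swap flips the permutation sign
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
    d = sign
    for k in range(n):
        d *= A[k][k]                # product of diagonal pivots
    return d

print(det_ge([[2.0, 1.0], [1.0, 2.0]]))  # ≈ 3.0
```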
BallArithmetic.is_regular_gaussian_elimination — Function
is_regular_gaussian_elimination(A::BallMatrix{T}) where {T}

Test if matrix [A] contains only regular (nonsingular) matrices using Gaussian elimination.
Algorithm
Perform interval Gaussian elimination and check whether any diagonal pivot contains zero. If no pivot contains zero, the matrix is proven regular.
Arguments
A: Square ball matrix (n×n)
Returns
- true if matrix is proven regular
- false if test is inconclusive
Example
A = BallMatrix([3.0 1.0; 1.0 2.0], fill(1e-10, 2, 2))
if is_regular_gaussian_elimination(A)
println("Matrix is regular")
else
println("Cannot determine regularity")
end

Notes
- Sufficient but not necessary test
- O(n³) complexity
- If returns true, regularity is guaranteed
- If returns false, matrix may still be regular
Iterative Methods
BallArithmetic.IterativeResult — Type
IterativeResult{T, VT}

Result from iterative interval linear system solver.
Fields
- solution::VT: Enclosure of the solution set
- converged::Bool: Whether the iteration converged
- iterations::Int: Number of iterations performed
- final_width::T: Maximum width of final solution components
- convergence_rate::T: Observed convergence rate (ratio of successive widths)
BallArithmetic.interval_gauss_seidel — Function
interval_gauss_seidel(A::BallMatrix{T}, b::BallVector{T};
x0::Union{Nothing, BallVector{T}}=nothing,
max_iterations::Int=100,
tol::T=T(1e-10),
use_epsilon_inflation::Bool=true,
ϵ::T=T(1e-15),
r::T=T(0.001)) where {T}

Solve Ax = b using interval Gauss-Seidel iteration.
Algorithm
The Gauss-Seidel method updates each component x_i using:
x_i^(k+1) = (b_i - Σ_{j<i} a_{ij} x_j^(k+1) - Σ_{j>i} a_{ij} x_j^(k)) / a_{ii}

where updated components x_j^(k+1) for j < i are used immediately.
Arguments
- A: Coefficient ball matrix (n×n)
- b: Right-hand side ball vector (n)
- x0: Initial enclosure (computed if not provided)
- max_iterations: Maximum number of iterations
- tol: Convergence tolerance (maximum component width)
- use_epsilon_inflation: Apply ε-inflation before intersection
- ϵ: Absolute inflation factor
- r: Relative inflation factor
Returns
IterativeResult with solution enclosure and convergence information.
Example
A = BallMatrix([3.0 1.0; 1.0 2.0], fill(1e-10, 2, 2))
b = BallVector([5.0, 4.0], fill(1e-10, 2))
result = interval_gauss_seidel(A, b)
if result.converged
println("Solution: ", result.solution)
println("Iterations: ", result.iterations)
end

Notes
- Generally converges faster than Jacobi method
- Converges for strictly diagonally dominant matrices
- Uses most recent updates immediately (sequential updates)
- ε-inflation helps ensure non-empty intersection
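A minimal point-arithmetic version of the update rule shows the sequential use of fresh components (illustrative Python only; the interval method additionally intersects each update with the previous enclosure, and `gauss_seidel` is a hypothetical name):

```python
# Point-arithmetic Gauss-Seidel sweep on a diagonally dominant 2x2 system.
def gauss_seidel(A, b, iters=50):
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # x_j for j < i already holds the (k+1)-st iterate here
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

print(gauss_seidel([[3.0, 1.0], [1.0, 2.0]], [5.0, 4.0]))  # ≈ [1.2, 1.4]
```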
BallArithmetic.interval_jacobi — Function
interval_jacobi(A::BallMatrix{T}, b::BallVector{T};
x0::Union{Nothing, BallVector{T}}=nothing,
max_iterations::Int=100,
tol::T=T(1e-10),
use_epsilon_inflation::Bool=true,
ϵ::T=T(1e-15),
r::T=T(0.001)) where {T}

Solve Ax = b using interval Jacobi iteration.
Algorithm
The Jacobi method updates all components simultaneously:
x_i^(k+1) = (b_i - Σ_{j≠i} a_{ij} x_j^(k)) / a_{ii}

Arguments
- A: Coefficient ball matrix (n×n)
- b: Right-hand side ball vector (n)
- x0: Initial enclosure (computed if not provided)
- max_iterations: Maximum number of iterations
- tol: Convergence tolerance (maximum component width)
- use_epsilon_inflation: Apply ε-inflation before intersection
- ϵ: Absolute inflation factor
- r: Relative inflation factor
Returns
IterativeResult with solution enclosure and convergence information.
Example
A = BallMatrix([4.0 1.0; 1.0 3.0], fill(1e-10, 2, 2))
b = BallVector([7.0, 6.0], fill(1e-10, 2))
result = interval_jacobi(A, b)
if result.converged
println("Solution: ", result.solution)
println("Convergence rate: ", result.convergence_rate)
end

Notes
- Easily parallelizable (all updates independent)
- Converges for strictly diagonally dominant matrices
- Generally slower than Gauss-Seidel but more parallelizable
- All components updated simultaneously (parallel updates)
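The simultaneous-update rule can be contrasted with Gauss-Seidel in a few lines of point-arithmetic Python (illustrative only; `jacobi` is a hypothetical name and no radii are propagated):

```python
# Point-arithmetic Jacobi sweep: every component is computed from the
# previous iterate only, so the inner loop is trivially parallelizable.
def jacobi(A, b, iters=100):
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

x = jacobi([[4.0, 1.0], [1.0, 3.0]], [7.0, 6.0])
print(x)  # ≈ [15/11, 17/11]
```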
HBR Method
BallArithmetic.HBRResult — Type
HBRResult{T, VT}

Result from Hansen-Bliek-Rohn method.
Fields
- solution::VT: Tight solution enclosure
- success::Bool: Whether all systems solved successfully
- num_systems_solved::Int: Number of real systems solved (≤ 2n)
- max_residual::T: Maximum residual from solved systems
BallArithmetic.hbr_method — Function
hbr_method(A::BallMatrix{T}, b::BallVector{T};
preconditioner::Union{Nothing, Matrix{T}}=nothing,
check_residuals::Bool=true,
residual_tol::T=T(1e-8)) where {T}

Compute tight enclosure of Ax = b using Hansen-Bliek-Rohn method.
Algorithm
The HBR method computes the hull of the solution set by solving 2n real systems:
For each component i = 1, ..., n:
- Compute lower bound: solve A_{σ^i} x = b_c, where σ^i selects the extremal matrix
- Compute upper bound: solve A_{τ^i} x = b_c, where τ^i selects the opposite extremal matrix
The extremal matrices are chosen to minimize/maximize the i-th component.
Mathematical Details
For minimizing x_i:
- Choose A_σ entries: a_{ij} = inf(A_{ij}) if sgn(Ĉ_{ji}) ≥ 0, else sup(A_{ij}), where Ĉ = C^(-1) is the inverse preconditioner
For maximizing x_i:
- Flip the selection rule
Arguments
- A: Coefficient ball matrix (n×n)
- b: Right-hand side ball vector (n)
- preconditioner: Approximate inverse C ≈ A^(-1) (computed if not provided)
- check_residuals: Verify solutions satisfy Oettli-Prager condition
- residual_tol: Tolerance for residual checking
Returns
HBRResult with tight solution enclosure.
Example
A = BallMatrix([2.0 1.0; 1.0 2.0], fill(0.1, 2, 2))
b = BallVector([3.0, 3.0], fill(0.1, 2))
result = hbr_method(A, b)
if result.success
println("Tight solution: ", result.solution)
println("Systems solved: ", result.num_systems_solved)
end

Notes
- O(n⁴) complexity - expensive but provides tightest enclosures
- Solves 2n real linear systems
- More accurate than Krawczyk or iterative methods
- Recommended when high accuracy is required and n is small (n ≤ 20)
- Requires good preconditioner for optimal vertex selection
BallArithmetic.hbr_method_simple — Function
hbr_method_simple(A::BallMatrix{T}, b::BallVector{T}) where {T}

Simplified HBR method without preconditioner optimization.
Uses identity matrix as preconditioner, which may give suboptimal but still valid bounds.
Arguments
- A: Coefficient ball matrix (n×n)
- b: Right-hand side ball vector (n)
Returns
HBRResult with solution enclosure.
Example
A = BallMatrix([4.0 1.0; 1.0 3.0], fill(0.05, 2, 2))
b = BallVector([5.0, 4.0], fill(0.05, 2))
result = hbr_method_simple(A, b)
println("Solution: ", result.solution)

Notes
- Simpler but may not give tightest possible bounds
- Uses identity matrix for vertex selection
- Faster preconditioner computation
- Still O(n⁴) due to 2n system solves
Shaving
BallArithmetic.ShavingResult — Type
ShavingResult{T, VT}

Result from interval shaving method.
Fields
- solution::VT: Shaved (refined) solution enclosure
- shaved_amount::T: Total amount shaved from all boundaries
- iterations::Int: Number of shaving passes performed
- components_shaved::Int: Number of component boundaries that were improved
BallArithmetic.interval_shaving — Function
interval_shaving(A::BallMatrix{T}, b::BallVector{T}, x0::BallVector{T};
max_iterations::Int=10,
min_improvement::T=T(1e-6),
R::Union{Nothing, Matrix{T}}=nothing) where {T}

Refine solution enclosure x0 by shaving infeasible boundaries using Sherman-Morrison updates.
Algorithm
For each component i and each boundary (lower/upper):
- Fix x_i at its boundary value
- Update inverse using Sherman-Morrison formula
- Compute bounds on x_i from remaining equations
- If bound excludes fixed value, shrink interval
- Repeat until no significant improvement
Mathematical Details
When fixing x_i = α, the system becomes: A * x = b with x_i = α
This is equivalent to: (A with i-th column replaced by e_i) * x' = b - α * A[:,i]
The inverse can be updated efficiently using Sherman-Morrison formula.
Arguments
- A: Coefficient ball matrix (n×n)
- b: Right-hand side ball vector (n)
- x0: Initial solution enclosure to be refined
- max_iterations: Maximum number of shaving passes
- min_improvement: Minimum relative improvement to continue shaving
- R: Preconditioner A^(-1) (computed if not provided)
Returns
ShavingResult with refined solution enclosure.
Example
A = BallMatrix([3.0 1.0; 1.0 2.0], fill(0.1, 2, 2))
b = BallVector([4.0, 3.0], fill(0.1, 2))
# Get initial enclosure (e.g., from Krawczyk)
result_krawczyk = krawczyk_linear_system(A, b)
x0 = result_krawczyk.solution
# Refine with shaving
result_shaved = interval_shaving(A, b, x0)
println("Original width: ", maximum(rad(x0)))
println("Shaved width: ", maximum(rad(result_shaved.solution)))
println("Improvement: ", result_shaved.shaved_amount)

Notes
- O(n²) per boundary test (efficient!)
- Typically applied after Krawczyk or other method
- Can significantly reduce interval widths
- Diminishing returns after few iterations
- Sherman-Morrison makes this practical
BallArithmetic.sherman_morrison_inverse_update — Function
sherman_morrison_inverse_update(A_inv::Matrix{T}, u::Vector{T}, v::Vector{T}) where {T}

Compute (A + uv^T)^(-1) using Sherman-Morrison formula.
Sherman-Morrison Formula
(A + uv^T)^(-1) = A^(-1) - (A^(-1) u v^T A^(-1)) / (1 + v^T A^(-1) u)

This allows O(n²) update of the inverse for a rank-1 perturbation instead of O(n³) recomputation.
Arguments
- A_inv: Inverse of base matrix A (n×n)
- u: First vector of rank-1 update (n)
- v: Second vector of rank-1 update (n)
Returns
(A + uv^T)^(-1)
Example
A = [3.0 1.0; 1.0 2.0]
A_inv = inv(A)
u = [1.0, 0.0]
v = [0.0, 1.0]
# Efficiently compute inv(A + u*v')
A_updated_inv = sherman_morrison_inverse_update(A_inv, u, v)

Notes
- O(n²) complexity vs O(n³) for full inverse
- Numerically stable if |1 + v^T A^(-1)u| is not too small
- Critical for efficient shaving implementation
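The formula is easy to verify numerically. Here is a self-contained Python sketch on a 2×2 example (pure lists, no external packages; `sherman_morrison` is a hypothetical name, not the package function):

```python
# Direct evaluation of (A + u v^T)^-1 = A^-1 - (A^-1 u)(v^T A^-1) / (1 + v^T A^-1 u)
def sherman_morrison(A_inv, u, v):
    n = len(A_inv)
    Au = [sum(A_inv[i][j] * u[j] for j in range(n)) for i in range(n)]  # A^-1 u
    vA = [sum(v[i] * A_inv[i][j] for i in range(n)) for j in range(n)]  # v^T A^-1
    denom = 1.0 + sum(v[i] * Au[i] for i in range(n))                   # 1 + v^T A^-1 u
    return [[A_inv[i][j] - Au[i] * vA[j] / denom for j in range(n)]
            for i in range(n)]

# A = [[3, 1], [1, 2]], so A^-1 = [[0.4, -0.2], [-0.2, 0.6]]
B_inv = sherman_morrison([[0.4, -0.2], [-0.2, 0.6]], [1.0, 0.0], [0.0, 1.0])
print(B_inv)  # inverse of A + u v^T = [[3, 2], [1, 2]], ≈ [[0.5, -0.5], [-0.25, 0.75]]
```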
Preconditioning
BallArithmetic.PreconditionerType — Type
PreconditionerType

Enumeration of available preconditioning strategies.
- :midpoint: C = A_c^(-1) (inverse of midpoint)
- :lu: C from LU factorization of midpoint
- :ldlt: C from LDLT factorization of midpoint (symmetric matrices)
- :identity: C = I (no preconditioning)
BallArithmetic.PreconditionerResult — Type
PreconditionerResult{T}

Result from preconditioner computation.
Fields
- preconditioner::Matrix{T}: Approximate inverse C
- method::PreconditionerType: Method used
- condition_number::T: Estimated condition number
- success::Bool: Whether computation succeeded
- factorization::Any: Stored factorization (for LU/LDLT)
BallArithmetic.compute_preconditioner — Function
compute_preconditioner(A::BallMatrix{T};
method::Symbol=:midpoint,
check_conditioning::Bool=true) where {T}

Compute preconditioner C ≈ A^(-1) for interval matrix A.
Methods
- :midpoint: Direct inverse of A_c (default, simplest)
- :lu: LU factorization of A_c (more stable)
- :ldlt: LDLT factorization of A_c (for symmetric A)
- :identity: Identity matrix (no preconditioning)
Arguments
- A: Interval matrix to precondition
- method: Preconditioning strategy
- check_conditioning: Compute and warn about condition number
Returns
PreconditionerResult with approximate inverse and diagnostic info.
Example
A = BallMatrix([4.0 1.0; 1.0 3.0], fill(0.1, 2, 2))
# Midpoint inverse (default)
prec = compute_preconditioner(A)
C = prec.preconditioner
# LU factorization (more stable)
prec_lu = compute_preconditioner(A, method=:lu)

Notes
- Midpoint inverse: O(n³), simplest but may be numerically unstable
- LU factorization: O(n³) once, O(n²) per solve, more stable
- LDLT factorization: O(n³/6) once, O(n²) per solve, only for symmetric
- Identity: O(1), use for well-conditioned or diagonally dominant matrices
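The midpoint-inverse strategy and the ‖I - CA‖ quality criterion (see is_well_preconditioned below) can be illustrated together in plain Python. This is a point-arithmetic sketch with an explicit radius term bounded via the triangle inequality; all helper names are hypothetical:

```python
# Midpoint-inverse preconditioner for a 2x2 interval matrix, with a crude
# bound on ||I - C*A||_inf over all A in [A]:
#   ||I - C*A|| <= ||I - C*A_c|| + || |C| * A_rad ||
def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def norm_inf(M):
    return max(sum(abs(x) for x in row) for row in M)

A_c = [[4.0, 1.0], [1.0, 3.0]]       # midpoint
A_rad = [[0.1, 0.1], [0.1, 0.1]]     # entrywise radii
C = inv2(A_c)                        # C = A_c^-1
CA = [[sum(C[i][k] * A_c[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
mid_resid = norm_inf([[CA[i][j] - (1.0 if i == j else 0.0)
                       for j in range(2)] for i in range(2)])
rad_part = norm_inf([[sum(abs(C[i][k]) * A_rad[k][j] for k in range(2))
                      for j in range(2)] for i in range(2)])
print(mid_resid + rad_part < 0.5)  # True: preconditioner is effective
```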
BallArithmetic.apply_preconditioner — Function
apply_preconditioner(prec::PreconditionerResult{T}, v::Vector{T}) where {T}

Apply preconditioner to vector: compute C * v.
If factorization is available, uses it for efficiency.
Arguments
- prec: Preconditioner result from compute_preconditioner
- v: Vector to multiply
Returns
C * v
Example
A = BallMatrix([3.0 1.0; 1.0 2.0], fill(0.1, 2, 2))
prec = compute_preconditioner(A, method=:lu)
b = [5.0, 4.0]
Cb = apply_preconditioner(prec, b)

apply_preconditioner(prec::PreconditionerResult{T}, M::Matrix{T}) where {T}

Apply preconditioner to matrix: compute C * M.
Arguments
prec: Preconditioner resultM: Matrix to multiply
Returns
C * M
BallArithmetic.is_well_preconditioned — Function
is_well_preconditioned(A::BallMatrix{T}, prec::PreconditionerResult{T};
threshold::T=T(0.5)) where {T}

Check if preconditioner is effective.
A good preconditioner should make ‖I - CA‖ small (ideally < 0.5).
Arguments
- A: Original interval matrix
- prec: Preconditioner result
- threshold: Threshold for ‖I - CA‖ (default 0.5)
Returns
true if ‖I - CA‖ < threshold
Example
A = BallMatrix([3.0 1.0; 1.0 2.0], fill(0.1, 2, 2))
prec = compute_preconditioner(A)
if is_well_preconditioned(A, prec)
println("Good preconditioner")
end

Overdetermined Systems
BallArithmetic.OverdeterminedResult — Type
OverdeterminedResult{T, VT}

Result from overdetermined interval linear system solver.
Fields
- solution::VT: Solution enclosure (or empty if unsolvable)
- solvable::Bool: Whether system is proven solvable
- method::Symbol: Method used
- subsystems_checked::Int: Number of subsystems examined
- residual::T: Maximum residual norm
BallArithmetic.subsquares_method — Function
subsquares_method(A::BallMatrix{T}, b::BallVector{T};
max_subsystems::Int=1000,
solver::Symbol=:gaussian_elimination) where {T}

Solve overdetermined system Ax = b using subsquares approach.
Algorithm
For m×n system with m > n:
- Consider all (m choose n) square subsystems
- Solve each n×n subsystem
- Check if solution satisfies all m equations (Oettli-Prager)
- Take hull of all valid solutions
Mathematical Background
A solution exists if at least one n×n subsystem has a solution that satisfies all m equations.
Arguments
- A: Coefficient ball matrix (m×n, m > n)
- b: Right-hand side ball vector (m)
- max_subsystems: Maximum number of subsystems to check
- solver: Solver for square subsystems (:gaussian_elimination, :krawczyk)
Returns
OverdeterminedResult with solution enclosure.
Example
A = BallMatrix([2.0 1.0; 1.0 2.0; 3.0 1.0], fill(0.1, 3, 2))
b = BallVector([3.0, 3.0, 4.0], fill(0.1, 3))
result = subsquares_method(A, b)
if result.solvable
println("Solution: ", result.solution)
println("Checked ", result.subsystems_checked, " subsystems")
end

Notes
- Combinatorial complexity: O(C(m,n) × n³)
- Only practical for small m and n
- Guaranteed to find solution if one exists (complete method)
- May be very slow for large m or n
- Recommended: m ≤ 20, n ≤ 10
BallArithmetic.multi_jacobi_method — Function
multi_jacobi_method(A::BallMatrix{T}, b::BallVector{T};
x0::Union{Nothing, BallVector{T}}=nothing,
max_iterations::Int=100,
tol::T=T(1e-10)) where {T}

Solve overdetermined system using Multi-Jacobi iteration.
Algorithm
For each variable x_j, compute intersection of bounds from all equations:
x_j^(k+1) = ⋂_{i=1}^m [(b_i - ∑_{l≠j} a_{il} x_l^(k)) / a_{ij}]

where the intersection is over all rows i where a_{ij} ≠ 0.
Arguments
- A: Coefficient ball matrix (m×n, m > n)
- b: Right-hand side ball vector (m)
- x0: Initial enclosure (computed if not provided)
- max_iterations: Maximum iterations
- tol: Convergence tolerance
Returns
OverdeterminedResult with solution enclosure.
Example
A = BallMatrix([3.0 1.0; 1.0 2.0; 2.0 3.0], fill(0.1, 3, 2))
b = BallVector([4.0, 3.0, 5.0], fill(0.1, 3))
result = multi_jacobi_method(A, b)
if result.solvable
println("Solution: ", result.solution)
println("Iterations: ", result.subsystems_checked)
end

Notes
- O(mn²) per iteration - faster than subsquares
- May not converge for all systems
- Empty intersection indicates unsolvability
- Works well when system is "nearly square" or well-conditioned
BallArithmetic.interval_least_squares — Function
interval_least_squares(A::BallMatrix{T}, b::BallVector{T};
method::Symbol=:normal_equations) where {T}

Compute interval least squares solution to overdetermined system.
Minimizes ||Ax - b||² over the interval matrix/vector.
Arguments
- A: Coefficient ball matrix (m×n, m ≥ n)
- b: Right-hand side ball vector (m)
- method: Solution method (:normal_equations or :qr)
Returns
OverdeterminedResult with least squares solution enclosure.
Example
A = BallMatrix([2.0 1.0; 1.0 2.0; 3.0 1.0], fill(0.1, 3, 2))
b = BallVector([3.1, 2.9, 4.2], fill(0.05, 3))
result = interval_least_squares(A, b)
println("LS solution: ", result.solution)

Notes
- Solves normal equations: A^T A x = A^T b
- O(mn² + n³) complexity
- May not satisfy all equations exactly
- Useful when exact solution doesn't exist
- Returns minimum norm solution in interval sense
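The normal-equations route reduces to forming AᵀA and Aᵀb and solving an n×n system. A Python sketch on the midpoints of the example above (hypothetical helper; the interval version would carry radii through the same operations):

```python
# Form A^T A and A^T b, then solve the 2x2 normal equations by Cramer's rule.
def least_squares_normal(A, b):
    m, n = len(A), len(A[0])
    AtA = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
           for i in range(n)]
    Atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
    return [(Atb[0] * AtA[1][1] - AtA[0][1] * Atb[1]) / det,
            (AtA[0][0] * Atb[1] - Atb[0] * AtA[1][0]) / det]

print(least_squares_normal([[2.0, 1.0], [1.0, 2.0], [3.0, 1.0]],
                           [3.1, 2.9, 4.2]))  # ≈ [1.1, 0.9]
```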
H-Matrix Systems
BallArithmetic.VerifiedLinearSystemResult — Type
VerifiedLinearSystemResult

Result structure for verified linear system solution.
BallArithmetic.verified_linear_solve_hmatrix — Function
verified_linear_solve_hmatrix(A::BallMatrix, b::BallVector;
method=:improved_method_a,
R=nothing,
x_approx=nothing,
max_iterations=1,
compute_perron_vector=true)

Compute verified error bounds for the solution of Ax = b using H-matrix properties.
Arguments
- A: Coefficient ball matrix
- b: Right-hand side ball vector
- method: Verification method to use:
  - :rump_original - Original Rump (2013) bound (Theorem 3.1)
  - :minamihata_2015 - Minamihata et al. (2015) bound (Theorem 3.2, m=1)
  - :improved_method_a - Improved bound for Method (a) (Theorem 3.3, default)
  - :improved_method_b - Improved bound for Method (b) (Corollary 3.2)
- R: Approximate inverse of A (computed if not provided)
- x_approx: Approximate solution (computed if not provided)
- max_iterations: Number of refinement iterations for improved bounds
- compute_perron_vector: Whether to compute Perron vector (more accurate)
Returns
VerifiedLinearSystemResult containing verified error bounds.
Method Comparison
- Method (a): Uses C = [▽(RA), △(RA)] - more expensive but tighter
- Method (b): Uses C = [▽(RA), ▽(RA) + 2nu|R||A|] - half cost, weaker bounds
The improved methods (Theorem 3.3 and Corollary 3.2) provide tighter bounds than conventional approaches, especially for ill-conditioned systems.
Example
A = BallMatrix([3.0 1.0; 1.0 2.0], fill(1e-10, 2, 2))
b = BallVector([5.0, 4.0], fill(1e-10, 2))
result = verified_linear_solve_hmatrix(A, b; method=:improved_method_a)
println("Verified: ", result.verified)
println("Solution: ", result.x_approx)
println("Error bound: ", result.error_bound)

Sylvester Equations
BallArithmetic.sylvester_miyajima_enclosure — Function
sylvester_miyajima_enclosure(A, B, C, X̃)Compute a Miyajima-style verified enclosure for the solution of the Sylvester problem A * X + X * B = C. The method follows the componentwise certificate from Ref. [3] and returns a BallMatrix whose midpoint is the supplied approximation X̃ and whose radii enclose the exact solution entrywise.
The routine raises an error when the spectral gaps λ_i(A) + λ_j(B) vanish or when the contraction bound is not satisfied.
BallArithmetic.triangular_sylvester_miyajima_enclosure — Function
triangular_sylvester_miyajima_enclosure(T, k)Construct a verified enclosure for the Sylvester system associated with the upper-triangular matrix T partitioned as
T = [T₁₁ T₁₂;
0   T₂₂],

where T₁₁ is k × k. The enclosure is computed for the solution Y₂ of the transformed Sylvester equation T₂₂' * Y₂ - Y₂ * T₁₁' = T₁₂'. Forming the standard Sylvester data A = T₂₂', B = -T₁₁', and C = T₁₂', the routine first attempts a Miyajima-style eigenvector-based enclosure, and falls back to a direct column-by-column triangular solve in ball arithmetic when the eigenvector approach fails (e.g. for large ill-conditioned triangular matrices where the eigenvector condition number is too large).
The matrix T must be square and upper triangular, and the block size k must satisfy 1 ≤ k < size(T, 1).
triangular_sylvester_miyajima_enclosure(T_ball::BallMatrix, k::Integer)

Miyajima enclosure for the Sylvester system when the triangular matrix T is given as a BallMatrix (midpoint + radii).
The midpoint mid(T_ball) is used to solve the Sylvester equation via the scalar method. The radii of T_ball produce a first-order perturbation bound on the solution Y, which inflates the returned enclosure.
Algorithm
- Solve on midpoint: Y_mid = triangular_sylvester_miyajima_enclosure(mid(T_ball), k)
- Compute separation: sep = min_{i,j} |T₁₁[i,i] - T₂₂[j,j]| (lower-bounded rigorously)
- First-order perturbation bound on Y from the T_ball radii: δY ≤ sep⁻¹ · (‖ΔT₂₂‖·‖Y‖ + ‖ΔT₁₁‖·‖Y‖ + ‖ΔT₁₂‖), where ΔT_ij are the radius sub-blocks
- Inflate: BallMatrix(mid(Y_mid), rad(Y_mid) .+ δY)
The matrix T_ball must be square, and mid(T_ball) must be upper triangular.
Matrix Regularity
BallArithmetic.RegularityResult — Type
RegularityResult{T}

Result from regularity testing.
Fields
- is_regular::Bool: True if proven regular, false if inconclusive
- method::Symbol: Method used for testing
- certificate::T: Numerical certificate (e.g., separation value)
- is_definitive::Bool: Whether result is definitive or just sufficient condition
BallArithmetic.is_regular — Function
is_regular(A::BallMatrix{T};
methods::Vector{Symbol}=[:sufficient_condition, :gershgorin, :diagonal_dominance],
verbose::Bool=false) where {T}

Test regularity using multiple methods.
Tries several sufficient conditions in order until one succeeds.
Arguments
- A: Interval matrix (n×n)
- methods: Vector of methods to try (in order)
- verbose: Print information about each method
Available Methods
- :sufficient_condition: Eigenvalue-based (Theorem 11.12)
- :gershgorin: Gershgorin circle theorem
- :diagonal_dominance: Strict diagonal dominance
Returns
RegularityResult from first successful method, or last result if all fail.
Example
A = BallMatrix([3.0 0.5; 0.5 2.0], fill(0.1, 2, 2))
result = is_regular(A, verbose=true)
if result.is_regular
println("Matrix is regular (proven by ", result.method, ")")
else
println("Regularity could not be proven")
end

Notes
- Tries multiple sufficient conditions
- Returns true only if regularity is proven
- False result is inconclusive (matrix may still be regular)
- Order methods by speed: diagonal_dominance → gershgorin → sufficient_condition
BallArithmetic.is_regular_sufficient_condition — Function
is_regular_sufficient_condition(A::BallMatrix{T}) where {T}

Test regularity using sufficient condition from Theorem 11.12 (Horáček, p. 183).
Sufficient Condition
Matrix [A] is regular if: λ_max(A_Δ^T A_Δ) < λ_min(A_c^T A_c)

where A_c is the center matrix and A_Δ is the radius matrix.
Arguments
A: Interval matrix (n×n)
Returns
RegularityResult with test outcome.
Example
A = BallMatrix([3.0 1.0; 1.0 2.0], fill(0.05, 2, 2))
result = is_regular_sufficient_condition(A)
if result.is_regular
println("Matrix is proven regular")
println("Separation: ", result.certificate)
else
println("Test inconclusive")
end

Notes
- O(n³) complexity (eigenvalue computation)
- Sufficient but not necessary
- If returns true, regularity is guaranteed
- If returns false, matrix may still be regular
- Works well for small radius matrices
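For a 2×2 symmetric example the two eigenvalue bounds in the criterion can be computed in closed form. The following Python sketch evaluates the test in plain floating point (indicative only; a rigorous check would use directed rounding, and all names are hypothetical):

```python
import math

# Evaluate lambda_max(A_rad^T A_rad) < lambda_min(A_c^T A_c) for 2x2 matrices.
def eig2_sym(M):
    """Eigenvalues of a symmetric 2x2 matrix via the quadratic formula."""
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
    return (tr - disc) / 2.0, (tr + disc) / 2.0

def gram(M):
    """M^T M for a 2x2 matrix."""
    return [[sum(M[k][i] * M[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A_c = [[3.0, 1.0], [1.0, 2.0]]          # center
A_rad = [[0.05, 0.05], [0.05, 0.05]]    # radii
lam_min_c, _ = eig2_sym(gram(A_c))
_, lam_max_r = eig2_sym(gram(A_rad))
print(lam_max_r < lam_min_c)  # True: every matrix in [A] is nonsingular
```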
BallArithmetic.is_regular_gershgorin — Function
is_regular_gershgorin(A::BallMatrix{T}) where {T}

Test regularity using Gershgorin circle theorem.
Gershgorin Criterion
Matrix [A] is regular if for all i, 0 is not in the Gershgorin disc: D_i = {z : |z - a_{ii}| ≤ ∑_{j≠i} |a_{ij}|}
Arguments
A: Interval matrix (n×n)
Returns
RegularityResult with test outcome.
Example
A = BallMatrix([4.0 1.0 0.5; 0.5 3.0 0.5; 0.5 0.5 2.0], fill(0.1, 3, 3))
result = is_regular_gershgorin(A)
if result.is_regular
println("Matrix is proven regular by Gershgorin")
end

Notes
- O(n²) complexity - very fast
- Sufficient but not necessary
- Works well for diagonally dominant matrices
- Conservative for general matrices
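One way to extend the disc test to interval entries is to shrink each diagonal by its radius and grow the off-diagonal sums by theirs, which excludes 0 from every disc for every matrix in [A]. A Python sketch of this worst-case check (hypothetical helper, not the package routine; a rigorous implementation would round outward):

```python
# Worst-case Gershgorin disc test over an interval matrix [A_c - A_rad, A_c + A_rad].
def regular_by_gershgorin(A_c, A_rad):
    n = len(A_c)
    for i in range(n):
        diag_low = abs(A_c[i][i]) - A_rad[i][i]  # smallest |a_ii| over [A]
        off = sum(abs(A_c[i][j]) + A_rad[i][j] for j in range(n) if j != i)
        if diag_low <= off:
            return False  # 0 may lie in disc i: inconclusive
    return True           # 0 outside every disc for every A in [A]

A_c = [[4.0, 1.0, 0.5], [0.5, 3.0, 0.5], [0.5, 0.5, 2.0]]
A_rad = [[0.1] * 3 for _ in range(3)]
print(regular_by_gershgorin(A_c, A_rad))  # True
```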
BallArithmetic.is_regular_diagonal_dominance — Function
is_regular_diagonal_dominance(A::BallMatrix{T};
strict::Bool=true) where {T}

Test regularity using diagonal dominance.
Criterion
Matrix [A] is regular if it is strictly diagonally dominant: |a_{ii}| > ∑_{j≠i} |a_{ij}| for all i
Arguments
- A: Interval matrix (n×n)
- strict: Require strict (>) or weak (≥) diagonal dominance
Returns
RegularityResult with test outcome.
Example
A = BallMatrix([5.0 1.0 1.0; 1.0 4.0 0.5; 0.5 1.0 3.0], fill(0.1, 3, 3))
result = is_regular_diagonal_dominance(A)
if result.is_regular
println("Matrix is strictly diagonally dominant")
end

Notes
- O(n²) complexity
- Very fast test
- Sufficient for regularity
- Many practical matrices satisfy this
BallArithmetic.is_singular_sufficient_condition — Function
is_singular_sufficient_condition(A::BallMatrix{T}) where {T}

Test singularity using sufficient condition from Theorem 11.13 (Horáček, p. 183).
Sufficient Condition for Singularity
Matrix [A] contains at least one singular matrix if: λ_min(A_c^T A_c) < λ_max(A_Δ^T A_Δ)
This is the dual of the regularity condition.
Arguments
A: Interval matrix (n×n)
Returns
true if proven singular, false if inconclusive.
Example
A = BallMatrix([1.0 1.0; 1.0 1.0], fill(0.05, 2, 2))
if is_singular_sufficient_condition(A)
println("Matrix contains singular matrices")
end

Notes
- O(n³) complexity
- Sufficient but not necessary
- Dual of regularity test
Determinant Bounds
BallArithmetic.DeterminantResult — Type
DeterminantResult{T}

Result from interval determinant computation.
Fields
- determinant::Ball{T}: Enclosure of determinant
- method::Symbol: Method used
- computation_time::T: Time spent (optional)
- tight::Bool: Whether enclosure is known to be tight
BallArithmetic.interval_det — Function
interval_det(A::BallMatrix{T};
method::Symbol=:auto,
check_regularity::Bool=true) where {T}

Compute interval determinant enclosure using specified or automatic method.
Methods
- :auto: Choose automatically based on matrix size and properties
- :hadamard: Hadamard inequality (fast, conservative)
- :gershgorin: Gershgorin-based bounds (fast, moderate)
- :gaussian_elimination: Gaussian elimination (moderate speed, good accuracy)
- :cramer: Cramer's rule (slow, exact for small n)
Arguments
- A: Interval matrix (n×n)
- method: Computation method
- check_regularity: First check if matrix is regular
Returns
DeterminantResult with determinant enclosure.
Example
A = BallMatrix([3.0 1.0; 1.0 2.0], fill(0.1, 2, 2))
# Automatic method selection
result = interval_det(A)
println("det(A) ∈ ", result.determinant)
# Specific method
result_hadamard = interval_det(A, method=:hadamard)

Notes
- Auto mode chooses:
- Cramer for n ≤ 3
- Gaussian elimination for n ≤ 20
- Hadamard for n > 20
- Check regularity first to avoid wasted computation
BallArithmetic.det_hadamard — Function
det_hadamard(A::BallMatrix{T}) where {T}

Compute determinant bound using Hadamard's inequality.
Hadamard's Inequality
For any n×n matrix A: |det(A)| ≤ ∏_{i=1}^n ‖a_i‖
where a_i is the i-th row of A.
Arguments
A: Interval matrix (n×n)
Returns
DeterminantResult with determinant bounds.
Example
A = BallMatrix([2.0 1.0; 1.0 2.0], fill(0.1, 2, 2))
result = det_hadamard(A)
println("det(A) ∈ ", result.determinant)

Notes
- O(n²) complexity - very fast
- Provides only upper bound on |det(A)|
- Very conservative for most matrices
- Useful for quick nonsingularity check
- Not tight except for special cases
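Hadamard's bound is just a product of row Euclidean norms, as this floating-point Python sketch shows (a rigorous version would round outward and apply the bound to the entrywise suprema |A_c| + A_Δ; `hadamard_bound` is a hypothetical name):

```python
import math

# |det(A)| <= product over rows of the row's Euclidean norm.
def hadamard_bound(A):
    return math.prod(math.sqrt(sum(x * x for x in row)) for row in A)

A = [[2.0, 1.0], [1.0, 2.0]]
print(hadamard_bound(A))  # ≈ 5.0, an upper bound on |det(A)| = 3
```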
BallArithmetic.det_gershgorin — Function
det_gershgorin(A::BallMatrix{T}) where {T}

Compute determinant bound using Gershgorin circle theorem.
Gershgorin Approach
Eigenvalues lie in the union of Gershgorin discs: λ_i ∈ {z : |z - a_{ii}| ≤ ∑_{j≠i} |a_{ij}|}
Determinant is product of eigenvalues.
Arguments
A: Interval matrix (n×n)
Returns
DeterminantResult with determinant bounds.
Example
A = BallMatrix([3.0 0.5; 0.5 2.0], fill(0.1, 2, 2))
result = det_gershgorin(A)
println("det(A) ∈ ", result.determinant)

Notes
- O(n²) complexity - fast
- Can provide tighter bounds than Hadamard for diagonally dominant matrices
- Conservative for general matrices
- Uses product of disc bounds
BallArithmetic.det_cramer — Function
det_cramer(A::BallMatrix{T}) where {T}

Compute determinant using Cramer's rule (cofactor expansion).
Warning
This method has O(n!) complexity and should only be used for n ≤ 4.
Arguments
A: Small interval matrix (n ≤ 4)
Returns
DeterminantResult with exact interval determinant.
Example
A = BallMatrix([2.0 1.0; 1.0 3.0], fill(0.05, 2, 2))
result = det_cramer(A)
println("det(A) ∈ ", result.determinant)  # Exact result

Notes
- O(n!) complexity - only for tiny matrices
- Provides exact interval arithmetic result
- Tight enclosure (no overestimation from method)
- Wrapping effect still present
BallArithmetic.contains_zero — Function
contains_zero(det_result::DeterminantResult{T}) where {T}

Check if determinant enclosure contains zero (possible singularity).
Arguments
det_result: Result from determinant computation
Returns
true if 0 ∈ det(A), false otherwise.
Example
A = BallMatrix([1.0 1.0; 1.0 1.0], fill(0.1, 2, 2))
result = interval_det(A)
if contains_zero(result)
println("Matrix may be singular")
end