
Solving Eigenvalue Problems in Single, Double, Quadruple, and Octuple Precision


This is Day 19 of the Numerical Analysis Advent Calendar 2022 🎄

In this note, we calculate the eigenvalues of Frank matrices and Hilbert matrices using single, double, pseudo-quadruple (as a bonus), quadruple, and octuple precision. For Frank matrices, I confirmed that at a size of around 1000×1000, single precision fails to get even a single digit of the minimum eigenvalue right. For Hilbert matrices, whose condition number is far larger, I found that even at small sizes of about 5×5 to 35×35, single, double, pseudo-quadruple, quadruple, and octuple precision each fail to compute even one digit of the minimum eigenvalue accurately.

Packages and Environment

The quadruple-precision Float128 from Quadmath.jl complies with IEEE 754 binary128. I also added the pseudo-quadruple-precision Double64 from DoubleFloats.jl.

Since I ran into errors installing and building QD.jl's octuple-precision Float256, I used BigFloat instead this time. The significand of BigFloat can be set freely, so I set it to 237 bits to match IEEE 754 binary256; the exponent, however, cannot be changed from its default, so the exponent range remains larger than that of IEEE 754 binary256. Note that the backend of BigFloat is MPFR, which ships with Julia, so no extra packages are required. Later, I change the significand precision with setprecision(237).

Eigenvalues up to Float32 and Float64 can be computed with the standard library LinearAlgebra.jl (the eigenvalue routines use LAPACK). Double64, Float128, and BigFloat are handled by GenericLinearAlgebra.jl.
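A quick check of my own (not from the original article) on the exponent-range caveat: IEEE 754 binary256 has a maximum exponent of 262143, so 2^300000 would overflow it, while MPFR's BigFloat stays finite even when its significand is limited to 237 bits.

```julia
# BigFloat keeps MPFR's huge exponent range even at 237-bit precision,
# so a value beyond IEEE 754 binary256's range (max exponent 262143)
# is still finite here.
setprecision(237) do
    x = BigFloat(2)^300000   # 2^300000 > 2^262144, beyond binary256's range
    println(isfinite(x))     # true: no overflow in BigFloat
end
```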

Input
# using Pkg
# Pkg.add("Quadmath")
# Pkg.add("DoubleFloats")
# Pkg.add("GenericLinearAlgebra")

using Printf               # Display
using DoubleFloats         # Support for Double64 type
using Quadmath             # Support for Float128 type
using LinearAlgebra        # Eigenvalue calculation up to Float32, Float64
using GenericLinearAlgebra # Eigenvalue calculation for Double64, Float128, BigFloat

versioninfo()
Output
Julia Version 1.8.3
Commit 0434deb161 (2022-11-14 20:14 UTC)
Platform Info:
  OS: Windows (x86_64-w64-mingw32)
  CPU: 8 × 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-13.0.1 (ORCJIT, tigerlake)
  Threads: 1 on 8 virtual cores

Using BigFloat instead of Octuple Precision

First, let's test if BigFloat can be used as a substitute for single, double, pseudo-quadruple, and quadruple precision. The number of bits in the significand for each type is as follows:

Input
for t in [Float32, Float64, Double64, Float128, BigFloat]
    println(t, " \t", precision(t))
end
Output
Float32 	24
Float64 	53
Double64 	106
Float128 	113
BigFloat 	256

For example, if we use setprecision(53), the number of bits in the significand of BigFloat matches that of Float64, so they are expected to behave similarly. Converting the rational number 1/3 to each type results in the following. Note that we represent a BigFloat with a 53-bit significand as BigFloat(53).

Input
for t in [Float32, Float64, Double64, Float128]
    p = precision(t)
    setprecision(p) do
        println("$t \t", big(convert(t,1//3)))
        println("BigFloat($(precision(BigFloat)))\t", BigFloat(1//3))
        # The above line uses BigFloat directly to check correspondence
        println()
    end
end

setprecision(237) do
    println("BigFloat($(precision(BigFloat)))\t", BigFloat(1//3))
    println()
end

println("BigFloat($(precision(BigFloat)))\t", BigFloat(1//3))
Output
Float32 	0.333333343
BigFloat(24)	0.333333343

Float64 	0.33333333333333331
BigFloat(53)	0.33333333333333331

Double64 	0.333333333333333333333333333333335
BigFloat(106)	0.333333333333333333333333333333335

Float128 	0.333333333333333333333333333333333317
BigFloat(113)	0.333333333333333333333333333333333317

BigFloat(237)	0.3333333333333333333333333333333333333333333333333333333333333333333333326

BigFloat(256)	0.3333333333333333333333333333333333333333333333333333333333333333333333333333348

It was confirmed that by matching the significand of BigFloat to other types, at least the digits align when converting from a rational number. While they may not be perfectly identical, BigFloat(237) should serve as a reasonable approximation for Float256. In fact, the later eigenvalue calculations also show that Float128 and BigFloat(113) can calculate up to the same matrix size.
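One more stdlib-only check of my own (not part of the original article): the machine epsilon should also line up. binary256 carries a 237-bit significand, so its eps is 2^(1−237) = 2^−236, and BigFloat at 237 bits reproduces exactly that value (and likewise 2^−52 at 53 bits, matching Float64).

```julia
# Machine epsilon check: with a 237-bit significand, BigFloat's eps
# matches the binary256 value 2^-236; at 53 bits it matches Float64.
setprecision(237) do
    println(eps(BigFloat) == big(2.0)^-236)   # true
end
setprecision(53) do
    println(eps(BigFloat) == eps(Float64))    # true
end
```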

Frank Matrix

Here is an example implementation of a Frank matrix. I first create elements as integers and then convert them to the type specified by the datatype keyword. Note that portability to Fortran and other languages was prioritized over speed.

Input
# Fast example:
# Frank(n; datatype=Int64) = convert.(datatype, [n - max(i,j) + 1 for i=1:n, j=1:n])

# Slow example:
function Frank(n; datatype=Int64)
    A = zeros(datatype, n, n)
    for j in 1:n
        for i in 1:n
            A[i,j] = convert(datatype, n - max(i,j) + 1)
        end
    end
    return A
end

A = Frank(5)
Output
5×5 Matrix{Int64}:
 5  4  3  2  1
 4  4  3  2  1
 3  3  3  2  1
 2  2  2  2  1
 1  1  1  1  1

Referring to Table 4.7 on p. 76 of Tomonori Kouya's "Introduction to LAPACK/BLAS" (Morikita Publishing, 2016), we calculate the minimum eigenvalue for n = 10, 100, 1000. As in Table 4.7, we can confirm that for n = 1000 the minimum eigenvalue computed in single precision does not match even a single digit.

Input
# using LinearAlgebra
# using GenericLinearAlgebra

for n in [10,100,1000]
    println("\nn = $n  \tλ[1]")
    for datatype in [Float32,Float64]#,Double64,Float128,BigFloat]
        A = Frank(n, datatype=datatype)
        λ = eigvals(A)
        println(datatype, " \t", λ[1])
    end
end
Output
n = 10  	λ[1]
Float32 	0.25567994
Float64 	0.2556795627964355

n = 100  	λ[1]
Float32 	0.2500508
Float64 	0.2500610827206968

n = 1000  	λ[1]
Float32 	0.12658979
Float64 	0.2500006162349238

This confirms that with Float32, not even one digit of the minimum eigenvalue of a 1000×1000 Frank matrix is computed correctly. In fact, the results start to oscillate and behave strangely from around 900×900. With Frank matrices, memory and computation-time limits are hit before the precision limits of the wider types, so we turn to the far more ill-conditioned Hilbert matrix to probe the limits of single, double, pseudo-quadruple, quadruple, and octuple precision.
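The minimum eigenvalue above can be cross-checked against a closed form: the eigenvalues of this Frank matrix are known to be λ_k = 1/(2(1 − cos((2k−1)π/(2n+1)))), k = 1, …, n, which also explains why the minimum eigenvalue tends to 1/4 as n grows. The sketch below is a verification of my own, not part of the original article.

```julia
# Reference values for the minimum eigenvalue of the n×n Frank matrix,
# using the closed form λ_min = 1 / (2(1 - cos((2n-1)π/(2n+1)))).
# As n → ∞ this tends to 1/4, matching the Float64 results above.
frank_min(n) = 1 / (2 * (1 - cos((2n - 1) * π / (2n + 1))))

for n in [10, 100, 1000]
    println("n = $n \t", frank_min(n))
end
```

For n = 10 this agrees with the Float64 value 0.2556795627964355 shown above, while the Float32 result at n = 1000 is clearly far off.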

Hilbert Matrix

Here is an example implementation of a Hilbert matrix. First, we create the elements as rational types and then convert them to the type specified by the datatype keyword. Since using 1 / (i + j - 1) causes it to be automatically interpreted as Float64 and results in rounding errors, using 1 // (i + j - 1) allows for direct conversion from Rational to other types, minimizing rounding errors. Note that portability to languages like Fortran was prioritized over speed.
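A minimal illustration of the difference (my own check): 1/3 is evaluated in Float64 before any conversion, while 1//3 is a Rational and converts exactly to the target type.

```julia
# 1/3 is evaluated in Float64, so converting it to BigFloat only pads
# the already-rounded value; 1//3 converts exactly from Rational.
setprecision(113) do
    println(BigFloat(1/3))                    # Float64 rounding carried over
    println(BigFloat(1//3))                   # correctly rounded to 113 bits
    println(BigFloat(1/3) == BigFloat(1//3))  # false
end
```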

Input
# Fast example:
# Hilbert(n; datatype=Rational) = convert.(datatype, [1 // (i + j - 1) for i=1:n, j=1:n])

# Slow example:
function Hilbert(n; datatype=Rational)
    A = zeros(datatype, n, n)
    for j in 1:n
        for i in 1:n
            A[i,j] = convert(datatype, 1 // (i + j - 1))
        end
    end
    return A
end

H = Hilbert(5)
Output
5×5 Matrix{Rational}:
 1//1  1//2  1//3  1//4  1//5
 1//2  1//3  1//4  1//5  1//6
 1//3  1//4  1//5  1//6  1//7
 1//4  1//5  1//6  1//7  1//8
 1//5  1//6  1//7  1//8  1//9

We calculate the eigenvalues for various sizes of Hilbert matrices using Float32, Float64, Double64, Float128, BigFloat(113), BigFloat(237), and BigFloat(256).

Input
# using Printf
# using DoubleFloats
# using Quadmath
# using LinearAlgebra
# using GenericLinearAlgebra

for n in 2:40
    println("\nn = $n   \tλ[1]")
    for datatype in [Float32, Float64, Double64, Float128]
        A = Hilbert(n, datatype=datatype)
        λ = eigvals(A)
        @printf("%s \t%.9e\n", datatype, λ[1])
    end

    for p in [113,237,256,1024]
        setprecision(p) do
            A = Hilbert(n, datatype=BigFloat)
            λ = eigvals(A)
            @printf("%s \t%.9e\n", "BigFloat($(precision(BigFloat)))", λ[1])
        end
    end

end

# setprecision(BigFloat, 256)
Output
n = 2   	λ[1]
Float32 	6.574144959e-02
Float64 	6.574145409e-02
Double64 	6.574145409e-02
Float128 	6.574145409e-02
BigFloat(113) 	6.574145409e-02
BigFloat(237) 	6.574145409e-02
BigFloat(256) 	6.574145409e-02
BigFloat(1024) 	6.574145409e-02

n = 3   	λ[1]
Float32 	2.687271917e-03
Float64 	2.687340356e-03
Double64 	2.687340356e-03
Float128 	2.687340356e-03
BigFloat(113) 	2.687340356e-03
BigFloat(237) 	2.687340356e-03
BigFloat(256) 	2.687340356e-03
BigFloat(1024) 	2.687340356e-03

n = 4   	λ[1]
Float32 	9.672460874e-05
Float64 	9.670230402e-05
Double64 	9.670230402e-05
Float128 	9.670230402e-05
BigFloat(113) 	9.670230402e-05
BigFloat(237) 	9.670230402e-05
BigFloat(256) 	9.670230402e-05
BigFloat(1024) 	9.670230402e-05

n = 5   	λ[1]
Float32 	3.361120889e-06
Float64 	3.287928772e-06
Double64 	3.287928772e-06
Float128 	3.287928772e-06
BigFloat(113) 	3.287928772e-06
BigFloat(237) 	3.287928772e-06
BigFloat(256) 	3.287928772e-06
BigFloat(1024) 	3.287928772e-06

n = 6   	λ[1]
Float32 	1.306005259e-07
Float64 	1.082799486e-07
Double64 	1.082799485e-07
Float128 	1.082799485e-07
BigFloat(113) 	1.082799485e-07
BigFloat(237) 	1.082799485e-07
BigFloat(256) 	1.082799485e-07
BigFloat(1024) 	1.082799485e-07

n = 7   	λ[1]
Float32 	7.071950847e-08
Float64 	3.493898716e-09
Double64 	3.493898606e-09
Float128 	3.493898606e-09
BigFloat(113) 	3.493898606e-09
BigFloat(237) 	3.493898606e-09
BigFloat(256) 	3.493898606e-09
BigFloat(1024) 	3.493898606e-09

n = 8   	λ[1]
Float32 	-4.826334887e-08
Float64 	1.111539635e-10
Double64 	1.111538966e-10
Float128 	1.111538966e-10
BigFloat(113) 	1.111538966e-10
BigFloat(237) 	1.111538966e-10
BigFloat(256) 	1.111538966e-10
BigFloat(1024) 	1.111538966e-10

n = 9   	λ[1]
Float32 	2.470161853e-09
Float64 	3.499736372e-12
Double64 	3.499676403e-12
Float128 	3.499676403e-12
BigFloat(113) 	3.499676403e-12
BigFloat(237) 	3.499676403e-12
BigFloat(256) 	3.499676403e-12
BigFloat(1024) 	3.499676403e-12

n = 10   	λ[1]
Float32 	-6.898610394e-08
Float64 	1.093880753e-13
Double64 	1.093153819e-13
Float128 	1.093153819e-13
BigFloat(113) 	1.093153819e-13
BigFloat(237) 	1.093153819e-13
BigFloat(256) 	1.093153819e-13
BigFloat(1024) 	1.093153819e-13

n = 11   	λ[1]
Float32 	-8.925326256e-08
Float64 	3.375236254e-15
Double64 	3.393218595e-15
Float128 	3.393218595e-15
BigFloat(113) 	3.393218595e-15
BigFloat(237) 	3.393218595e-15
BigFloat(256) 	3.393218595e-15
BigFloat(1024) 	3.393218595e-15

n = 12   	λ[1]
Float32 	-3.159367168e-08
Float64 	1.630874793e-16
Double64 	1.047946398e-16
Float128 	1.047946398e-16
BigFloat(113) 	1.047946398e-16
BigFloat(237) 	1.047946398e-16
BigFloat(256) 	1.047946398e-16
BigFloat(1024) 	1.047946398e-16

n = 13   	λ[1]
Float32 	-6.428467003e-08
Float64 	-3.723992712e-17
Double64 	3.222901015e-18
Float128 	3.222901015e-18
BigFloat(113) 	3.222901015e-18
BigFloat(237) 	3.222901015e-18
BigFloat(256) 	3.222901015e-18
BigFloat(1024) 	3.222901015e-18

n = 14   	λ[1]
Float32 	-2.169667468e-08
Float64 	6.961897267e-18
Double64 	9.877051735e-20
Float128 	9.877051735e-20
BigFloat(113) 	9.877051735e-20
BigFloat(237) 	9.877051735e-20
BigFloat(256) 	9.877051735e-20
BigFloat(1024) 	9.877051735e-20

n = 15   	λ[1]
Float32 	-5.260721281e-08
Float64 	-1.970813429e-17
Double64 	3.017915296e-21
Float128 	3.017915296e-21
BigFloat(113) 	3.017915296e-21
BigFloat(237) 	3.017915296e-21
BigFloat(256) 	3.017915296e-21
BigFloat(1024) 	3.017915296e-21

n = 16   	λ[1]
Float32 	-2.375249863e-08
Float64 	-7.991212304e-17
Double64 	1.971645939e-22
Float128 	9.197419821e-23
BigFloat(113) 	9.197419821e-23
BigFloat(237) 	9.197419821e-23
BigFloat(256) 	9.197419821e-23
BigFloat(1024) 	9.197419821e-23

n = 17   	λ[1]
Float32 	-7.643087940e-08
Float64 	-3.084917681e-17
Double64 	2.796721645e-24
Float128 	2.796721632e-24
BigFloat(113) 	2.796721632e-24
BigFloat(237) 	2.796721632e-24
BigFloat(256) 	2.796721632e-24
BigFloat(1024) 	2.796721632e-24

n = 18   	λ[1]
Float32 	-3.352907640e-08
Float64 	-2.922712756e-17
Double64 	1.804347127e-25
Float128 	8.487413205e-26
BigFloat(113) 	8.487413251e-26
BigFloat(237) 	8.487413230e-26
BigFloat(256) 	8.487413230e-26
BigFloat(1024) 	8.487413230e-26

n = 19   	λ[1]
Float32 	-2.656284792e-08
Float64 	-1.790409527e-17
Double64 	3.747605947e-27
Float128 	3.747605889e-27
BigFloat(113) 	3.747605889e-27
BigFloat(237) 	2.571239705e-27
BigFloat(256) 	2.571239705e-27
BigFloat(1024) 	2.571239705e-27

n = 20   	λ[1]
Float32 	-2.125183407e-08
Float64 	-2.797264572e-16
Double64 	-1.905490628e-25
Float128 	1.233279673e-28
BigFloat(113) 	1.233279672e-28
BigFloat(237) 	7.777377397e-29
BigFloat(256) 	7.777377397e-29
BigFloat(1024) 	7.777377397e-29

n = 21   	λ[1]
Float32 	-5.624558241e-08
Float64 	-1.348650289e-17
Double64 	-2.714301231e-23
Float128 	3.678871784e-30
BigFloat(113) 	3.678871738e-30
BigFloat(237) 	2.349182538e-30
BigFloat(256) 	2.349182538e-30
BigFloat(1024) 	2.349182538e-30

n = 22   	λ[1]
Float32 	-9.640318410e-08
Float64 	-1.951507152e-16
Double64 	-2.703080487e-30
Float128 	-2.703634429e-30
BigFloat(113) 	-2.703493685e-30
BigFloat(237) 	7.086841582e-32
BigFloat(256) 	7.086841582e-32
BigFloat(1024) 	7.086841582e-32

n = 23   	λ[1]
Float32 	-1.269348502e-07
Float64 	-1.840525074e-16
Double64 	-1.395136546e-26
Float128 	-1.395136676e-26
BigFloat(113) 	-1.395136676e-26
BigFloat(237) 	2.135463438e-33
BigFloat(256) 	2.135463438e-33
BigFloat(1024) 	2.135463438e-33

n = 24   	λ[1]
Float32 	-5.662219849e-08
Float64 	-1.536692857e-16
Double64 	-6.294692986e-25
Float128 	-6.294693199e-25
BigFloat(113) 	-6.294693199e-25
BigFloat(237) 	6.428050325e-35
BigFloat(256) 	6.428050325e-35
BigFloat(1024) 	6.428050325e-35

n = 25   	λ[1]
Float32 	-9.834495529e-08
Float64 	-1.158400702e-16
Double64 	-9.786027902e-24
Float128 	3.475441588e-36
BigFloat(113) 	7.644790285e-36
BigFloat(237) 	1.933092811e-36
BigFloat(256) 	1.933092811e-36
BigFloat(1024) 	1.933092811e-36

n = 26   	λ[1]
Float32 	-8.256382245e-08
Float64 	-1.745151220e-16
Double64 	-9.530727276e-23
Float128 	-3.164126445e-29
BigFloat(113) 	-3.164126445e-29
BigFloat(237) 	5.808263531e-38
BigFloat(256) 	5.808263531e-38
BigFloat(1024) 	5.808263531e-38

n = 27   	λ[1]
Float32 	-1.064803996e-07
Float64 	-3.982034472e-17
Double64 	-2.741452206e-34
Float128 	-1.690538599e-27
BigFloat(113) 	-1.690538566e-27
BigFloat(237) 	1.743773036e-39
BigFloat(256) 	1.743773036e-39
BigFloat(1024) 	1.743773036e-39

n = 28   	λ[1]
Float32 	-8.385376304e-08
Float64 	-3.548348719e-16
Double64 	-1.942851263e-34
Float128 	-2.793018499e-26
BigFloat(113) 	-2.793018543e-26
BigFloat(237) 	5.231305960e-41
BigFloat(256) 	5.231305960e-41
BigFloat(1024) 	5.231305960e-41

n = 29   	λ[1]
Float32 	-1.160353804e-07
Float64 	-6.199549616e-17
Double64 	2.031105710e-35
Float128 	-2.822973889e-25
BigFloat(113) 	-2.822973889e-25
BigFloat(237) 	1.568304495e-42
BigFloat(256) 	1.568304495e-42
BigFloat(1024) 	1.568304495e-42

n = 30   	λ[1]
Float32 	-5.065797382e-08
Float64 	-5.289389957e-17
Double64 	-1.102984167e-34
Float128 	-2.146399554e-24
BigFloat(113) 	-2.146399554e-24
BigFloat(237) 	4.698636565e-44
BigFloat(256) 	4.698636565e-44
BigFloat(1024) 	4.698636565e-44

n = 31   	λ[1]
Float32 	-7.601806118e-08
Float64 	-7.329944390e-17
Double64 	-2.916407324e-34
Float128 	-1.255399562e-26
BigFloat(113) 	-1.255399570e-26
BigFloat(237) 	1.406868380e-45
BigFloat(256) 	1.406868380e-45
BigFloat(1024) 	1.406868380e-45

n = 32   	λ[1]
Float32 	-6.675949749e-08
Float64 	-2.374236517e-17
Double64 	-1.025210430e-34
Float128 	-3.257260908e-36
BigFloat(113) 	-3.257273966e-36
BigFloat(237) 	4.210099032e-47
BigFloat(256) 	4.210099032e-47
BigFloat(1024) 	4.210099032e-47

n = 33   	λ[1]
Float32 	-3.973698597e-07
Float64 	-6.226694532e-16
Double64 	7.189674368e-35
Float128 	-2.193596836e-36
BigFloat(113) 	-2.196233777e-36
BigFloat(237) 	1.259226436e-48
BigFloat(256) 	1.259226436e-48
BigFloat(1024) 	1.259226436e-48

n = 34   	λ[1]
Float32 	-2.554655794e-07
Float64 	-3.335481520e-16
Double64 	-3.129737789e-34
Float128 	-1.976669371e-25
BigFloat(113) 	-1.976656135e-25
BigFloat(237) 	3.764454505e-50
BigFloat(256) 	3.764454505e-50
BigFloat(1024) 	3.764454505e-50

n = 35   	λ[1]
Float32 	-2.991216945e-07
Float64 	-5.137067885e-16
Double64 	-2.859278165e-34
Float128 	-2.330807894e-36
BigFloat(113) 	-4.558199889e-36
BigFloat(237) 	2.318729279e-51
BigFloat(256) 	1.124863268e-51
BigFloat(1024) 	1.124863268e-51

n = 36   	λ[1]
Float32 	-1.958887168e-07
Float64 	-4.311007920e-16
Double64 	-2.255124275e-34
Float128 	-3.238481422e-36
BigFloat(113) 	-3.284703834e-36
BigFloat(237) 	6.919587088e-53
BigFloat(256) 	3.359762135e-53
BigFloat(1024) 	3.359762135e-53

n = 37   	λ[1]
Float32 	-1.831468239e-07
Float64 	-6.535979384e-16
Double64 	-2.116415176e-34
Float128 	-4.635128397e-36
BigFloat(113) 	-4.608945882e-36
BigFloat(237) 	2.064207412e-54
BigFloat(256) 	1.003088162e-54
BigFloat(1024) 	1.003088162e-54

n = 38   	λ[1]
Float32 	-3.336613190e-07
Float64 	-4.512357590e-16
Double64 	-4.432253018e-34
Float128 	-7.250642462e-36
BigFloat(113) 	-4.329796608e-36
BigFloat(237) 	4.617382896e-56
BigFloat(256) 	6.155701539e-56
BigFloat(1024) 	2.993651511e-56

n = 39   	λ[1]
Float32 	-2.744432948e-07
Float64 	-5.247024262e-16
Double64 	-1.290923053e-34
Float128 	-3.133177866e-36
BigFloat(113) 	-3.137829276e-36
BigFloat(237) 	1.376487504e-57
BigFloat(256) 	1.835102354e-57
BigFloat(1024) 	8.931079340e-58

n = 40   	λ[1]
Float32 	-2.591677912e-07
Float64 	-4.441535386e-16
Double64 	-4.783821138e-34
Float128 	-1.174660257e-36
BigFloat(113) 	-1.174234562e-36
BigFloat(237) 	3.420000952e-59
BigFloat(256) 	4.102179409e-59
BigFloat(1024) 	2.663517273e-59

Float32 loses all correct digits going from n = 6 to 7, and Float64 going from n = 12 to 13. Since Float128 and BigFloat(113) break down at the same matrix size, BigFloat(237) appears to be a good stand-in for Float256.
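These breakdown sizes line up with the condition number of the Hilbert matrix. Its inverse has a well-known closed form with integer entries, so the ∞-norm condition number κ∞ = ‖H‖∞·‖H⁻¹‖∞ can be computed exactly with Rational{BigInt} and the standard library alone. This is a sketch of my own using that textbook formula, not code from the original article.

```julia
using Printf

# Exact ∞-norm condition number of the Hilbert matrix via the known
# closed form for its inverse (all entries are integers):
#   (H⁻¹)ᵢⱼ = (-1)^(i+j) (i+j-1) C(n+i-1, n-j) C(n+j-1, n-i) C(i+j-2, i-1)²
hilbert_inv(n) = [(-1)^(i + j) * (i + j - 1) *
                  binomial(big(n + i - 1), n - j) *
                  binomial(big(n + j - 1), n - i) *
                  binomial(big(i + j - 2), i - 1)^2 for i = 1:n, j = 1:n]

function hilbert_cond(n)
    H        = [1 // big(i + j - 1) for i = 1:n, j = 1:n]
    normH    = maximum(sum(abs.(H), dims = 2))            # ∞-norm = max row sum
    normHinv = maximum(sum(abs.(hilbert_inv(n)), dims = 2))
    return normH * normHinv                               # exact Rational{BigInt}
end

# Roughly when κ∞ exceeds 1/eps of a type, the smallest eigenvalue
# can lose all correct digits in that type:
for n in [7, 13, 19]
    @printf("n = %2d \tκ∞ ≈ %.2e\n", n, Float64(hilbert_cond(n)))
end
```

The sizes 7, 13, and 19 used here are the breakdown points observed above for Float32, Float64, and Float128 respectively.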

Summary

We calculated the eigenvalues of Frank matrices and Hilbert matrices in single, double, quadruple (and pseudo-quadruple), and octuple precision. In the case of Frank matrices, we confirmed that with single precision, not even a single digit of the eigenvalues could be calculated correctly for a size of about 1000×1000. For Hilbert matrices, which have a larger condition number, the results were as follows:

Type                    	Size at which not even one digit of the minimum eigenvalue is correct
Float32                 	n = 6 → 7
Float64                 	n = 12 → 13
Double64                	n = 17 → 18
Float128                	n = 18 → 19
BigFloat(113) ≈ Float128	n = 18 → 19
BigFloat(237) ≈ Float256	n = 34 → 35
BigFloat(256)           	n = 37 → 38

In numerical computation, it's easy to fall into a mindset where speed is everything, but there is no point in pursuing speed if correct results cannot be obtained. While quadruple and octuple precision are said to be significantly slower than double precision, writing this article helped me understand the motivation for developing high-precision arithmetic libraries. I hope you were also able to sense the potential of high-precision computation in Julia.

Appendix: Memory Consumption

This estimate simply multiplies the memory footprint of one element by the number of elements, so actual usage will be somewhat larger. Exact values can be checked with varinfo().

Input
# using Printf

function KMG(byte)
    if byte > 10^9
        return @sprintf("%5.1f GB", byte/10^9)
    elseif byte > 10^6
        return @sprintf("%5.0f MB", byte/10^6)
    elseif byte > 10^3
        return @sprintf("%5.0f KB", byte/10^3)
    else
        return @sprintf("%5.0f  B", byte)
    end
end


println("size\t\t", [@sprintf("%9s", "Float$j \t") for j in [32,64,128,256]]...)
for i in union(1,10,100,1000:1000:20000, 30000:10000:100000)
    println("$(i)²\t\t", [KMG(i^2*j/8)*"\t" for j in [32,64,128,256]]...)
end
Output
size		 Float32 	 Float64 	Float128 	Float256 	
1²		    4  B	    8  B	   16  B	   32  B	
10²		  400  B	  800  B	    2 KB	    3 KB	
100²		   40 KB	   80 KB	  160 KB	  320 KB	
1000²		    4 MB	    8 MB	   16 MB	   32 MB	
2000²		   16 MB	   32 MB	   64 MB	  128 MB	
3000²		   36 MB	   72 MB	  144 MB	  288 MB	
4000²		   64 MB	  128 MB	  256 MB	  512 MB	
5000²		  100 MB	  200 MB	  400 MB	  800 MB	
6000²		  144 MB	  288 MB	  576 MB	  1.2 GB	
7000²		  196 MB	  392 MB	  784 MB	  1.6 GB	
8000²		  256 MB	  512 MB	  1.0 GB	  2.0 GB	
9000²		  324 MB	  648 MB	  1.3 GB	  2.6 GB	
10000²		  400 MB	  800 MB	  1.6 GB	  3.2 GB	
11000²		  484 MB	  968 MB	  1.9 GB	  3.9 GB	
12000²		  576 MB	  1.2 GB	  2.3 GB	  4.6 GB	
13000²		  676 MB	  1.4 GB	  2.7 GB	  5.4 GB	
14000²		  784 MB	  1.6 GB	  3.1 GB	  6.3 GB	
15000²		  900 MB	  1.8 GB	  3.6 GB	  7.2 GB	
16000²		  1.0 GB	  2.0 GB	  4.1 GB	  8.2 GB	
17000²		  1.2 GB	  2.3 GB	  4.6 GB	  9.2 GB	
18000²		  1.3 GB	  2.6 GB	  5.2 GB	 10.4 GB	
19000²		  1.4 GB	  2.9 GB	  5.8 GB	 11.6 GB	
20000²		  1.6 GB	  3.2 GB	  6.4 GB	 12.8 GB	
30000²		  3.6 GB	  7.2 GB	 14.4 GB	 28.8 GB	
40000²		  6.4 GB	 12.8 GB	 25.6 GB	 51.2 GB	
50000²		 10.0 GB	 20.0 GB	 40.0 GB	 80.0 GB	
60000²		 14.4 GB	 28.8 GB	 57.6 GB	115.2 GB	
70000²		 19.6 GB	 39.2 GB	 78.4 GB	156.8 GB	
80000²		 25.6 GB	 51.2 GB	102.4 GB	204.8 GB	
90000²		 32.4 GB	 64.8 GB	129.6 GB	259.2 GB	
100000²		 40.0 GB	 80.0 GB	160.0 GB	320.0 GB

References

https://gist.github.com/ohno/239c6452a2633f24a403f2bad21081dc

P.S.

I have created implementation examples in Fortran/LAPACK.
https://github.com/ohno/ill-conditioned

https://zenn.dev/ohno/articles/4fff91b048e0d3

https://zenn.dev/ohno/articles/fd1c669be6fa38
