Can Julia Handle the Law of Error Propagation?
This is Day 4 of the Numerical Analysis Advent Calendar 2022. Since topics regarding errors have been continuing, I decided to join in.
I remember briefly learning the law of error propagation in a mandatory undergraduate experiment. Although I moved on to a theoretical research lab, I occasionally need it to compare calculated values with experimental ones, and I always find myself staring at formulas while performing calculations. It's quite a chore to have to calculate square roots for the error parts every single time. While searching for a good package, I found Measurements.jl, so I decided to give it a try while verifying if it calculates correctly.
In this note, we will perform calculations involving errors using Measurements.jl. First, we'll confirm the basic usage, then verify that the law of error propagation holds. Finally, we aim to evaluate the errors in the energy levels of a hydrogen atom.
Environment / Packages
Before first use, install Measurements.jl from the REPL or a Jupyter Notebook; on a Notebook, you can install it by uncommenting the lines below. Run using Measurements in each session before use. You can also check the Julia version and platform information with versioninfo().
# using Pkg
# Pkg.add("Measurements")
using Measurements
versioninfo()
Julia Version 1.8.3
Commit 0434deb161 (2022-11-14 20:14 UTC)
Platform Info:
OS: Windows (x86_64-w64-mingw32)
CPU: 8 × 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-13.0.1 (ORCJIT, tigerlake)
Threads: 1 on 8 virtual cores
Usage
We will try it out by referring to the Official Documentation.
Declaration
The following three declarations all mean the same thing. It is handled casually here, but it is actually quite impressive: since CODATA values are all given in the form 12.34(56), the fact that this notation can be parsed directly is incredibly helpful.
julia> measurement(12.34, 0.56)
12.34 ± 0.56
julia> measurement("12.34(56)")
12.34 ± 0.56
julia> 12.34 ± 0.56
12.34 ± 0.56
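As an aside, the parenthesis notation can be decoded by hand: the digits in parentheses are the uncertainty in the last decimal places of the value. Here is a minimal sketch in plain Julia (my own toy parser for illustration, not how Measurements.jl implements it):

```julia
# Toy parser for the concise uncertainty notation "12.34(56)".
# The digits in parentheses give the uncertainty in the last decimal
# places of the value, so "12.34(56)" means 12.34 ± 0.56.
# (A sketch for illustration; Measurements.jl's parser is more general.)
function parse_concise(s::AbstractString)
    m = match(r"^(\d+)\.(\d+)\((\d+)\)$", s)
    m === nothing && error("unsupported format: $s")
    intpart, fracpart, unc = m.captures
    val = parse(Float64, intpart * "." * fracpart)
    err = parse(Int, unc) * 10.0^(-length(fracpart))
    return val, err
end

parse_concise("12.34(56)")          # ≈ (12.34, 0.56)
parse_concise("1836.15267343(11)")  # ≈ (1836.15267343, 1.1e-7)
```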
Operations and Functions
In addition to basic arithmetic operations, you can also use functions. It reportedly supports Special Functions as well, so please read the Official Documentation if you are interested.
julia> a = 12.3 ± 0.4
julia> b = 23.4 ± 0.5
julia> a + b
35.7 ± 0.64
julia> a - b
-11.1 ± 0.64
julia> a * b
288.0 ± 11.0
julia> a / b
0.526 ± 0.02
julia> a % b
12.3 ± 0.4
julia> a ^ b
3.2e25 ± 4.7e25
julia> 2a
24.6 ± 0.8
julia> a ^ 2
151.3 ± 9.8
julia> 2 ^ a
5000.0 ± 1400.0
julia> sqrt(a)
3.507 ± 0.057
julia> sin(a)
-0.26 ± 0.39
julia> cos(a)
0.96 ± 0.11
julia> tan(a)
-0.27 ± 0.43
julia> log(a)
2.51 ± 0.033
julia> exp(a)
220000.0 ± 88000.0
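The function errors above all follow the standard first-order rule σ_f ≈ |f′(a)|·σ_a. A quick cross-check in plain Julia, reusing a = 12.3 ± 0.4 from the examples (this check is mine, not part of the package):

```julia
# First-order error propagation for a function of one variable:
# σ_f ≈ |f'(a)| * σ_a.  Checking a few of the outputs above by hand.
a, σ = 12.3, 0.4

σ_sin  = abs(cos(a)) * σ    # derivative of sin is cos   → ≈ 0.39
σ_sqrt = σ / (2 * sqrt(a))  # derivative of sqrt is 1/2√x → ≈ 0.057
σ_log  = σ / a              # derivative of log is 1/x   → ≈ 0.033
```

These match the uncertainties Measurements.jl printed for sin(a), sqrt(a), and log(a) above.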
Extracting Values and Errors
The data at this point is stored as follows.
julia> a = 12.3 ± 0.4
julia> dump(a)
Measurement{Float64}
val: Float64 12.3
err: Float64 0.4
tag: UInt64 0x0000000000000006
der: Measurements.Derivatives{Float64}
parent: Measurements.Derivatives{Float64}
parent: #undef
key: Tuple{Float64, Float64, UInt64}
1: Float64 2.61e-321
2: Float64 9.06030910405e-312
3: UInt64 0x0000000000000098
value: Float64 3.16e-322
key: Tuple{Float64, Float64, UInt64}
1: Float64 12.3
2: Float64 0.4
3: UInt64 0x0000000000000006
value: Float64 1.0
nothing
Therefore, values and errors can be extracted using .val and .err as shown below. For operations between variables, you can also extract them like (a + b).val. (The same can be achieved with Measurements.value() and Measurements.uncertainty().)
julia> a = 12.3 ± 0.4
julia> a.val
12.3
julia> a.err
0.4
julia> (a + b).val
35.7
julia> (a + b).err
0.6403124237432849
Law of Error Propagation
This package seems quite useful, but let's verify that it really calculates according to the law of error propagation. The propagation formulas themselves can be found in the two references listed at the end. While their notation for multiplication differs slightly, you will see that they match if you rearrange the equations.
Let's check if the calculations in Measurements.jl satisfy these laws of propagation.
Basic Arithmetic and Powers
First, I will show an example of verifying the law of propagation for addition.
julia> a = 12.3 ± 0.4
julia> b = 23.4 ± 0.5
julia> a + b # Calculation by Measurements.jl
35.7 ± 0.64
julia> a.val + b.val # Verification by implementing the formula directly
35.7
julia> sqrt(a.err ^ 2 + b.err ^ 2) # Verification by implementing the formula directly
0.6403124237432849
It seems the error part is also correct. Let's verify if other operations match as well.
julia> a = 12.3 ± 0.4
julia> b = 23.4 ± 0.5
julia> (a + b).val - (a.val + b.val)
0.0
julia> (a - b).val - (a.val - b.val)
0.0
julia> (a * b).val - a.val * b.val
0.0
julia> (a / b).val - a.val / b.val
0.0
julia> (a ^ b).val - a.val ^ b.val
0.0
julia> (a + b).err - sqrt(a.err ^ 2 + b.err ^ 2)
0.0
julia> (a - b).err - sqrt(a.err ^ 2 + b.err ^ 2)
0.0
julia> (a * b).err - a.val * b.val * sqrt((a.err / a.val) ^ 2 + (b.err / b.val) ^ 2)
0.0
julia> (a / b).err - (a.val / b.val) * sqrt((a.err / a.val) ^ 2 + (b.err / b.val) ^ 2)
0.0
julia> (a ^ b).err - sqrt((b.val * a.val ^ (b.val - 1) * a.err) ^ 2 + (log(a.val) * a.val ^ b.val * b.err) ^ 2)
0.0
Everything was fine. In addition, I conducted tests with tens of thousands of patterns using random numbers, and there were no instances where results clearly violated the law of propagation.
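Such random tests can also be done in a model-independent way. Here is a quick Monte Carlo sketch in plain Julia (no Measurements.jl), sampling the inputs as normally distributed values and comparing the empirical spread of a*b against the propagation formula; this is my own check, not part of the package:

```julia
using Random, Statistics

# Monte Carlo cross-check of the product rule: sample a and b as
# normally distributed values and compare the empirical standard
# deviation of a*b with the first-order propagation formula.
Random.seed!(42)   # fixed seed for reproducibility
N = 10^6
a, σa = 12.3, 0.4
b, σb = 23.4, 0.5

samples   = (a .+ σa .* randn(N)) .* (b .+ σb .* randn(N))
empirical = std(samples)
formula   = a * b * sqrt((σa / a)^2 + (σb / b)^2)

empirical, formula   # both ≈ 11.2, agreeing to a fraction of a percent
```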
Calculations with BigFloat
While the laws of error propagation appear to be satisfied, the difference between the calculation by Measurements.jl and a straightforward implementation of the formula may not be exactly zero due to rounding errors. Fortunately, Julia comes with built-in arbitrary-precision arithmetic (BigFloat, BigInt), so this can be checked reliably. For example, even in a calculation where the rounding error shows up in relatively high digits, using BigFloat shrinks it to a negligible level, as shown below.
julia> a = 100.0 ± 0.1
julia> b = 1.0e-5 ± 0.1
julia> (a / b).err - (a.val / b.val) * sqrt((a.err / a.val) ^ 2 + (b.err / b.val) ^ 2)
-1.52587890625e-5
julia> a = big"100.0" ± big"0.1"
julia> b = big"0.00001" ± big"0.1"
julia> (a / b).err - (a.val / b.val) * sqrt((a.err / a.val) ^ 2 + (b.err / b.val) ^ 2)
2.373891936439949686868310567390489288555241458404986241569944997707252057661203e-66
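The residual in the Float64 case is ordinary floating-point rounding, not a flaw in the propagation law. A minimal illustration of the effect, independent of Measurements.jl:

```julia
# Float64 carries about 16 significant digits, so adding a small number
# to a huge one can be lost entirely to rounding; BigFloat (256-bit
# precision by default) keeps the digits.
x = 1.0e16

(x + 1) - x            # 0.0 in Float64: the +1 is rounded away
(big(x) + 1) - big(x)  # 1.0 in BigFloat
```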
Functions
Since basic arithmetic and powers have already been verified, their combinations can also be considered correct. While I don't think most functions are implemented as simple Taylor expansions, I will try to verify the errors using Taylor expansions here. One thing to note when using a Taylor expansion is that factorial(21) already overflows the range of Int64. This might be a bit of a hassle in other languages, but it is easily solved in Julia because arbitrary-precision integers (BigInt) are supported by default.
First, let's try the exponential function. For reference, if you change factorial(big(i)) to factorial(i) below, it will overflow once i reaches 21.
using Printf
a = big"1.23" ± big"0.04"
for n in union(0:12, 100)
    b = sum([a^i / factorial(big(i)) for i in 0:n])
    @printf("n = %d\t%5f ± %5f\n", n, b.val, b.err)
end
b = exp(a) # ℯ^a
@printf("exp(a)\t%5f ± %5f\n", b.val, b.err)
n = 0 1.000000 ± 0.000000
n = 1 2.230000 ± 0.040000
n = 2 2.986450 ± 0.089200
n = 3 3.296594 ± 0.119458
n = 4 3.391964 ± 0.131864
n = 5 3.415425 ± 0.135679
n = 6 3.420234 ± 0.136617
n = 7 3.421079 ± 0.136809
n = 8 3.421209 ± 0.136843
n = 9 3.421227 ± 0.136848
n = 10 3.421229 ± 0.136849
n = 11 3.421230 ± 0.136849
n = 12 3.421230 ± 0.136849
n = 100 3.421230 ± 0.136849
exp(a) 3.421230 ± 0.136849
It converges properly. Since the exponential function is a special case of powers, the errors of other powers should likewise be obtainable through basic arithmetic operations. Next, let's test whether sin(x) converges as well. For reference, if you change factorial(big(2*i+1)) to factorial(2*i+1), it overflows once 2i+1 reaches 21, i.e. from n = 10 onward.
# using Printf
a = big"1.23" ± big"0.04"
for n in union(0:5, 10, 100)
    b = sum([(-1)^i * a^(2*i+1) / factorial(big(2*i+1)) for i in 0:n])
    @printf("n = %d\t%5f ± %5f\n", n, b.val, b.err)
end
b = sin(a)
@printf("sin(a)\t%5f ± %5f\n", b.val, b.err)
n = 0 1.230000 ± 0.040000
n = 1 0.919855 ± 0.009742
n = 2 0.943316 ± 0.013557
n = 3 0.942471 ± 0.013364
n = 4 0.942489 ± 0.013370
n = 5 0.942489 ± 0.013370
n = 10 0.942489 ± 0.013370
n = 100 0.942489 ± 0.013370
sin(a) 0.942489 ± 0.013370
This also converges properly. Note, however, that we are expanding around 0 and testing at a = big"1.23" ± big"0.04", which is close to 0; for values far from 0, convergence becomes slow. For sin and cos, you can speed up convergence for such values by first reducing the argument, e.g. a = mod(big"12.3" ± big"0.4", 2π).
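To see the effect of argument reduction concretely, here is a plain-Julia sketch comparing the truncated sin series at x = 12.3 with and without reducing mod 2π first (value errors only, no Measurements.jl):

```julia
# Truncated Taylor series of sin around 0; convergence is much faster
# when the argument is first reduced into [0, 2π) with mod.
taylor_sin(x, n) = sum((-1)^i * x^(2i + 1) / factorial(big(2i + 1)) for i in 0:n)

x = 12.3
err_raw     = abs(taylor_sin(x, 8) - sin(x))           # huge: not yet converged
err_reduced = abs(taylor_sin(mod(x, 2π), 8) - sin(x))  # already below 1e-2
```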
Energy Levels of the Hydrogen Atom
You might find the Schrödinger equation of relative motion introduced here somewhat unfamiliar. This is because, in fields like quantum chemistry and solid-state physics, it is standard to treat the motion of the nuclei and electrons separately using the Born-Oppenheimer approximation, and the coordinate systems considered are fundamentally different. Even in textbook explanations of the hydrogen atom in quantum mechanics, the mass of the proton is often taken to be infinite, so that the reduced mass is simply replaced by the electron mass.
Also, please note that this is a model of point masses and point charges. We treat the hydrogen atom as a two-body system consisting of a proton and an electron. Although a proton is actually a many-body system made of three quarks, we do not consider its internal structure or size, and instead treat both the proton and the electron equally as point masses. This is an implicit assumption in models that use point-charge potentials.
Schrödinger Equation
The time-independent Schrödinger equation of relative motion for a hydrogen-like atom is as follows:

$$\left( -\frac{\hbar^2}{2\mu}\nabla^2 - \frac{Ze^2}{4\pi\varepsilon_0 r} \right) \psi(\boldsymbol{r}) = E\,\psi(\boldsymbol{r})$$

Here, with the mass of the electron $m_\mathrm{e}$ and the mass of the proton $m_\mathrm{p}$, the reduced mass $\mu$ is defined as

$$\mu = \left( \frac{1}{m_\mathrm{e}} + \frac{1}{m_\mathrm{p}} \right)^{-1} = \frac{m_\mathrm{e} m_\mathrm{p}}{m_\mathrm{e} + m_\mathrm{p}}$$
Exact Solution
It is known that the eigenvalues of the above equation of relative motion are given by the following expression (partially modified based on Eq. (2.29) in Takayanagi (2000)):

$$E_n = -\frac{\mu}{m_\mathrm{e}} \cdot \frac{Z^2}{2n^2} E_\mathrm{h} \qquad (n = 1, 2, \dots)$$

where $E_\mathrm{h}$ is the Hartree energy, the atomic unit of energy. For hydrogen ($Z = 1$), this shows that we only need the experimental values of the proton-to-electron mass ratio $m_\mathrm{p}/m_\mathrm{e}$ and of the conversion factor from $E_\mathrm{h}$ to eV.
Atomic Units and Non-dimensionalization
Before proceeding with the specific calculations, we will non-dimensionalize the variables. Usually, this is done at the stage of setting up the equations, but this time I have kept them dimensional until just before the calculation. The clearest explanation of atomic units and related matters can be found in IUPAC GreenBook 3.9.2.
In our calculations, we will use these non-dimensional quantities marked with asterisks. However, since the asterisks are often omitted, leading to potential misunderstandings or mistakes, it is necessary to explain this carefully for those outside the field. It might even be so detailed as to become confusing instead.
Experimental Values
We will use the 2018 CODATA recommended values for all constants. The conversion factor from $E_\mathrm{h}$ to eV is $E_\mathrm{h} = 27.211386245988(53)\ \mathrm{eV}$ (hrev in the code below). Additionally, for the proton-to-electron mass ratio, we use $m_\mathrm{p}/m_\mathrm{e} = 1836.15267343(11)$.
Specific Calculations
The minimum eigenvalue (the ground state energy, $n = 1$) is as follows:

$$E_1 = -\frac{\mu}{m_\mathrm{e}} \cdot \frac{E_\mathrm{h}}{2}$$

When converting units from $E_\mathrm{h}$ to eV, we simply multiply by the conversion factor given above.
julia> mₑ = 1
1
julia> mₚ = measurement("1836.15267343(11)")
1836.15267343 ± 1.1e-7
julia> hrev = measurement("27.211386245988(53)")
27.211386245988 ± 5.3e-11
julia> μ = 1 / (1 / mₑ + 1 / mₚ)
0.999455679424763 ± 3.3e-14
julia> E = -μ / 2
-0.499727839712381 ± 1.6e-14
julia> E * hrev
-13.598287264287 ± 2.6e-11
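As a final sanity check, the same result can be reproduced in plain Julia by propagating the two uncertainties by hand. With mₑ = 1 we have μ = mₚ/(mₚ + 1), so dμ/dmₚ = 1/(mₚ + 1)²; this is my own cross-check, not part of the package:

```julia
# Hand-rolled propagation for E = -(μ/2)·Eₕ with μ = mₚ/(mₚ + 1),
# using the same CODATA 2018 inputs as above.
mp, σmp = 1836.15267343, 1.1e-7      # proton-to-electron mass ratio
Eh, σEh = 27.211386245988, 5.3e-11   # hartree → eV conversion factor

μ  = mp / (mp + 1)
σμ = σmp / (mp + 1)^2                # |dμ/dmₚ| · σ(mₚ)
E  = -μ / 2 * Eh
σE = sqrt((Eh / 2 * σμ)^2 + (μ / 2 * σEh)^2)

E, σE   # ≈ -13.598287264287 ± 2.6e-11, matching Measurements.jl above
```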
Summary
Julia was able to handle the law of error propagation, and impressively so. Since Julia comes with built-in arbitrary-precision arithmetic (BigFloat, BigInt), the effects of rounding errors that arise during calculations can be removed reliably. While writing this article, I ran tests with tens of thousands of random inputs and found no case that clearly violated the law of propagation. The error of division, (a / b).err, seems more susceptible to rounding error than the other operations, but since this is solved by BigFloat, you can simply use BigFloat as in the example above if you are concerned. From now on, I will use Measurements.jl with confidence.
References
Measurements.jl
Law of Error Propagation
- http://www.tagen.tohoku.ac.jp/labo/ishijima/gosa-03.html
- http://ax00.web.fc2.com/atomic/beta/s/s0070.html
Hydrogen Atom
- Yoshiya Harada, Quantum Chemistry, Vol. 2 (Shokabo, 2007) §A2.3
- Kazuo Takayanagi, Asakura Physics Series 11: Atomic and Molecular Physics (Asakura Shoten, 2000) pp. 11-22
- Mitsuji Kawai and Shiro Yoshida, Asakura Physics Series 19: Theory of Nuclear Reactions (Asakura Shoten, 2002) pp. 8-12, 25-38
- Frank Jensen, Introduction to Computational Chemistry, 2nd Edition (John Wiley & Sons, 2007) pp. 14-17
Atomic Units and CODATA Recommended Values
- IUPAC GreenBook 3.9.2
- Eite Tiesinga, Peter J. Mohr, David B. Newell, and Barry N. Taylor, CODATA recommended values of the fundamental physical constants: 2018, Rev. Mod. Phys. 93, 025010 (2021) https://doi.org/10.1103/RevModPhys.93.025010
Postscript
Measurements.jl was introduced on Day 15 of the JuliaLang Advent Calendar 2022! While this article treated error propagation formulas and the package as a black box, that article provides a detailed explanation of the theory of linear error propagation. It also explains combinations with other packages, which is very helpful. Please take a look.