Suppose that we have two differentiable real-valued functions $f$ and $g$ defined on some interval $[a,b]$. We wish to numerically compute their $L^2$ error $$\|f-g\|_{L^2([a,b])} = (\int_a^b|f(x)-g(x)|^2dx)^{1/2}$$ and their $L^{\infty}$ error $$\|f-g\|_{L^{\infty}([a,b])} = \max\{|f(x)-g(x)|:a \leq x \leq b\}.$$
We start with the $L^2$ error. To compute this, we use a Gaussian quadrature rule to approximate the integral. To learn how to implement Gaussian quadrature in Python, I refer you to the following set of notes: https://jordanhoffart.github.io/f22m610/gauss_quadrature.html. Borrowing the notation from those notes, we have that $$\int_a^b|f(x)-g(x)|^2dx \approx \frac{b-a}{2}\sum_{i=1}^nw_i|f(x_i)-g(x_i)|^2,$$ where each $$x_i = \frac{b-a}{2}s_i + \frac{b+a}{2}$$ and the pairs $(w_1,s_1),\dots,(w_n,s_n)$ are the weights and points of the $n$-th order Gaussian quadrature rule on the reference interval $[-1,1]$. A small table of these weights and points is given immediately below; larger tables can be found in the notes linked above and on the Wikipedia page for Gaussian quadrature at https://en.wikipedia.org/wiki/Gaussian_quadrature.
n | points | weights |
---|---|---|
1 | $s_1 = 0$ | $w_1 = 2$ |
2 | $s_1 = -1/\sqrt{3}$, $s_2 = 1/\sqrt{3}$ | $w_1 = 1$, $w_2 = 1$ |
3 | $s_1 = -\sqrt{3/5}$, $s_2 = 0$, $s_3 = \sqrt{3/5}$ | $w_1 = 5/9$, $w_2 = 8/9$, $w_3 = 5/9$ |
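If you would rather not hard-code these tables, NumPy can generate the points and weights for any $n$ with numpy.polynomial.legendre.leggauss. This function is not used in the notes above; the snippet below is just a quick sanity check against the $n = 3$ row of the table.

import numpy
# leggauss(n) returns the n quadrature points and weights on the reference
# interval [-1, 1], with the points listed in increasing order
s, w = numpy.polynomial.legendre.leggauss(3)
print(s)  # approximately [-0.7745967, 0.0, 0.7745967], i.e. -sqrt(3/5), 0, sqrt(3/5)
print(w)  # approximately [0.5555556, 0.8888889, 0.5555556], i.e. 5/9, 8/9, 5/9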
After approximating the integral with Gaussian quadrature, the $L^2$ error is approximated by $$\|f-g\|_{L^2([a,b])} \approx \left(\frac{b-a}{2}\sum_{i=1}^nw_i|f(x_i)-g(x_i)|^2\right)^{1/2}.$$ As we use a higher order quadrature rule (i.e. as $n$ grows larger), this approximation converges to the true $L^2$ error. We recommend using at least a second order Gaussian quadrature rule, and possibly higher. However, very high order rules give only minimal improvements in accuracy, so one must experiment to see which order works best for the given scenario.
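As a minimal sketch (not from the notes above), here is one way to package this approximation into a single function. The name l2_error is just for illustration, and we assume that f and g accept NumPy arrays.

import numpy
def l2_error(f, g, a, b, n=3):
    # approximate the L2 error of f and g on [a, b] with an n-point Gauss rule
    s, w = numpy.polynomial.legendre.leggauss(n)   # reference points and weights on [-1, 1]
    x = (b - a) / 2 * s + (b + a) / 2              # map the reference points into [a, b]
    return numpy.sqrt((b - a) / 2 * numpy.sum(w * (f(x) - g(x)) ** 2))

For example, l2_error(numpy.sin, lambda x: x, 0.0, 1.0) approximates $\|\sin(x) - x\|_{L^2([0,1])}$.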
Here we assumed that $f$ and $g$ are differentiable on the entire interval $[a,b]$. In practice, we may not have such a scenario. A common, more general scenario is that $f$ and $g$ are only piecewise differentiable, with breakpoints $a = t_1 < t_2 < \cdots < t_m = b$. To compute their $L^2$ error in this case, we follow a similar approach to the one in the notes referenced above: we apply the quadrature rule on each subinterval $[t_i,t_{i+1}]$ separately and sum the results, $$\int_a^b|f(x)-g(x)|^2dx \approx \sum_{i=1}^{m-1}\frac{t_{i+1}-t_i}{2}\sum_{j=1}^nw_j|f(x_{ij})-g(x_{ij})|^2, \qquad x_{ij} = \frac{t_{i+1}-t_i}{2}s_j + \frac{t_{i+1}+t_i}{2},$$ and then take the square root of the total to approximate the $L^2$ error.
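Here is a minimal sketch of this piecewise computation, again assuming that f and g accept NumPy arrays and using the illustrative name piecewise_l2_error; the worked example further below does the same thing with hand-coded first, second, and third order rules.

import numpy
def piecewise_l2_error(f, g, breakpoints, n=3):
    # sum an n-point Gauss rule over each subinterval [t_i, t_{i+1}], then take the square root
    s, w = numpy.polynomial.legendre.leggauss(n)   # reference points and weights on [-1, 1]
    total = 0.0
    for t0, t1 in zip(breakpoints[:-1], breakpoints[1:]):
        x = (t1 - t0) / 2 * s + (t1 + t0) / 2      # quadrature points in [t0, t1]
        total += (t1 - t0) / 2 * numpy.sum(w * (f(x) - g(x)) ** 2)
    return numpy.sqrt(total)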
Now we show how to compute the $L^{\infty}$ error. There are no integrals involved here, so we don't use any quadrature rules. Instead, to approximate this error, we pick distinct points $a \leq x_1 < x_2 < \cdots < x_n \leq b$ and compute the approximate $L^{\infty}$ error $$\|f-g\|_{L^{\infty}([a,b])} \approx \max\{|f(x_1)-g(x_1)|,\dots,|f(x_n)-g(x_n)|\}.$$ As we pick more points that fill out the interval (i.e. as $n$ grows larger and the sample points become dense in $[a,b]$), this approximation converges to the true $L^{\infty}$ error.
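For instance, here is a minimal sketch that samples $n$ equally spaced points; the name linf_error is just for illustration, and again we assume that f and g accept NumPy arrays.

import numpy
def linf_error(f, g, a, b, n=1000):
    x = numpy.linspace(a, b, n)                # n equally spaced points, including both endpoints
    return numpy.max(numpy.abs(f(x) - g(x)))   # largest sampled value of |f - g|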
As an example, let $f(t) = \cos(2\pi t)$, let $k$ be a positive integer, and let $g_k$ be the piecewise linear nodal interpolant to $f$ on $[0,1]$ at the nodes $0 = t_0 < t_1 < \cdots < t_{2^k} = 1$, where $t_i = i/2^k$ for each $i = 0,1,\dots, 2^k$. Explicitly, this means that $g_k(t_i) = f(t_i)$ for each $i$, and on each subinterval $[t_i,t_{i+1}]$, $g_k$ is the straight line connecting the point $(t_i,f(t_i))$ to $(t_{i+1},f(t_{i+1}))$. We will approximate the errors $\|f-g_k\|_{L^2([0,1])}$ and $\|f-g_k\|_{L^{\infty}([0,1])}$ for $k = 5,6,7$. For the $L^2$ error, we will use a first, second, and third order Gaussian quadrature rule ($n = 1,2,3$), and for the $L^{\infty}$ error, we will use $n = 10, 100, 1000$ sample points. We expect, for fixed $n$, the errors to decrease as $k$ increases. We also expect, for fixed $k$, the errors to converge as $n$ increases.
import numpy
def f(t): return numpy.cos(2*numpy.pi*t)
# we need to define the nodes t_0,...,t_{2**k} and the corresponding y-values
# f(t_0),...,f(t_{2**k}) to define our nodal interpolants gk(t)
ts5 = [i/2**5 for i in range(0,2**5+1)]
ts6 = [i/2**6 for i in range(0,2**6+1)]
ts7 = [i/2**7 for i in range(0,2**7+1)]
ys5 = [f(t) for t in ts5]
ys6 = [f(t) for t in ts6]
ys7 = [f(t) for t in ts7]
# we will use the numpy.interp function to define our interpolants
def g5(t): return numpy.interp(t, ts5, ys5)
def g6(t): return numpy.interp(t, ts6, ys6)
def g7(t): return numpy.interp(t, ts7, ys7)
# we need to implement the gaussian quadrature rules now
def gauss1(f,a,b):
    return (b-a)*f((b+a)/2)
def gauss2(f,a,b):
    x1 = (b-a)/2*(-1/numpy.sqrt(3)) + (b+a)/2
    x2 = (b-a)/2*(1/numpy.sqrt(3)) + (b+a)/2
    return (b-a)/2*(f(x1) + f(x2))
def gauss3(f,a,b):
    x1 = (b-a)/2*(-numpy.sqrt(3/5)) + (b+a)/2
    x2 = (b+a)/2
    x3 = (b-a)/2*(numpy.sqrt(3/5)) + (b+a)/2
    return (b-a)/2*(5/9*f(x1) + 8/9*f(x2) + 5/9*f(x3))
# now we compute the L2 errors
def integrand5(t): return (f(t)-g5(t))**2
def integrand6(t): return (f(t)-g6(t))**2
def integrand7(t): return (f(t)-g7(t))**2
# we have to compute the errors by looping over each subinterval. let's write a
# function that does this.
def compute_piecewise_error(quad_rule, integrand, ts):
    error = 0
    for i in range(len(ts)-1):
        error += quad_rule(integrand, ts[i], ts[i+1])
    return error
print("L2 errors")
print("k = 5")
print("n = 1, error =", numpy.sqrt(compute_piecewise_error(gauss1, integrand5, ts5)))
print("n = 2, error =", numpy.sqrt(compute_piecewise_error(gauss2, integrand5, ts5)))
print("n = 3, error =", numpy.sqrt(compute_piecewise_error(gauss3, integrand5, ts5)))
print()
print("k = 6")
print("n = 1, error =", numpy.sqrt(compute_piecewise_error(gauss1, integrand6, ts6)))
print("n = 2, error =", numpy.sqrt(compute_piecewise_error(gauss2, integrand6, ts6)))
print("n = 3, error =", numpy.sqrt(compute_piecewise_error(gauss3, integrand6, ts6)))
print()
print("k = 7")
print("n = 1, error =", numpy.sqrt(compute_piecewise_error(gauss1, integrand7, ts7)))
print("n = 2, error =", numpy.sqrt(compute_piecewise_error(gauss2, integrand7, ts7)))
print("n = 3, error =", numpy.sqrt(compute_piecewise_error(gauss3, integrand7, ts7)))
print()
# now we compute the Linfty errors
print("Linfty errors")
print("k = 5")
print("n = 10, error =", max([abs(f(i/9)-g5(i/9)) for i in range(0,10)]))
print("n = 100, error =", max([abs(f(i/99)-g5(i/99)) for i in range(0,100)]))
print("n = 1000, error =", max([abs(f(i/999)-g5(i/999)) for i in range(0,1000)]))
print()
print("k = 6")
print("n = 10, error =", max([abs(f(i/9)-g6(i/9)) for i in range(0,10)]))
print("n = 100, error =", max([abs(f(i/99)-g6(i/99)) for i in range(0,100)]))
print("n = 1000, error =", max([abs(f(i/999)-g6(i/999)) for i in range(0,1000)]))
print()
print("k = 7")
print("n = 10, error =", max([abs(f(i/9)-g7(i/9)) for i in range(0,10)]))
print("n = 100, error =", max([abs(f(i/99)-g7(i/99)) for i in range(0,100)]))
print("n = 1000, error =", max([abs(f(i/999)-g7(i/999)) for i in range(0,1000)]))
L2 errors
k = 5
n = 1, error = 0.003404912423356284
n = 2, error = 0.0022697390130072635
n = 3, error = 0.0024865298784972514

k = 6
n = 1, error = 0.0008517410855587699
n = 2, error = 0.0005678147212673178
n = 3, error = 0.0006220195775592062

k = 7
n = 1, error = 0.00021296734228060598
n = 2, error = 0.00014197743626727526
n = 3, error = 0.00015552909625550143

Linfty errors
k = 5
n = 10, error = 0.0036652925475426645
n = 100, error = 0.004788150256366808
n = 1000, error = 0.0047919592550578205

k = 6
n = 10, error = 0.0011193979542117871
n = 100, error = 0.0011331865579669032
n = 1000, error = 0.0012023150904674207

k = 7
n = 10, error = 0.00015828486908098238
n = 100, error = 0.0002930864301735614
n = 1000, error = 0.00030090051325226685
As we can see, whenever we fix $k$ and increase $n$, the errors appear to converge; the limiting value is the true error between $f$ and $g_k$, which is what we want to measure. Also, as we expected, whenever we fix $n$ and increase $k$, the errors decrease towards $0$. The key takeaway is that one should always use a large enough $n$ when computing errors. When testing the convergence rates of numerical methods, we never want errors introduced by using too small an $n$ to obscure the true approximation errors that we wish to observe.
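For instance, since the mesh size is halved every time $k$ increases by $1$, the observed convergence rate between consecutive values of $k$ is $\log_2$ of the ratio of successive errors. Here is a short sketch using the $n = 3$ $L^2$ errors printed above; the rates come out close to $2$, as expected for piecewise linear interpolation of a smooth function.

# errors for k = 5, 6, 7 copied from the n = 3 output above
errors = [0.0024865298784972514, 0.0006220195775592062, 0.00015552909625550143]
for e_coarse, e_fine in zip(errors[:-1], errors[1:]):
    print("observed rate =", numpy.log2(e_coarse / e_fine))  # uses the numpy import from above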
One last remark. Given two continuous, piecewise differentiable functions $f$ and $g$ with breakpoints $a = t_1 < t_2 < \cdots < t_N = b$, we may wish to approximate the $L^{\infty}$ error of $f$ and $g$ by first approximating it locally on each subinterval $[t_i, t_{i+1}]$ and then taking the maximum over all subintervals: for each $i$, we choose points $t_i = t_{i1} < t_{i2} < \cdots < t_{in} = t_{i+1}$ in $[t_i, t_{i+1}]$, and then we approximate the $L^{\infty}$ error of $f$ and $g$ by $$\|f-g\|_{L^{\infty}([a,b])} \approx \max\{\max\{|f(t_{ij})-g(t_{ij})| : j = 1,\dots, n\} : i = 1,\dots, N-1\}.$$ Some code illustrating how to do this is given below, using the functions f and g7 from above.
Linf_error = 0
for i in range(len(ts7)-1):
    local_ts = numpy.linspace(ts7[i],ts7[i+1],5) # we sample 5 points from t_i to t_{i+1}
    local_Linf_error = max([abs(g7(t) - f(t)) for t in local_ts])
    Linf_error = max(Linf_error, local_Linf_error)
print('Linf_error =', Linf_error)
Linf_error = 0.00030109059361804746