The previous post looked at computing recurrence relations. That post ended with a warning that recursive evaluations may or may not be numerically stable. This post gives examples that illustrate stability and instability.

There are two kinds of Bessel functions, denoted *J* and *Y*. These are called Bessel functions of the first and second kinds respectively. These functions carry a subscript *n* denoting their order. Both kinds of Bessel functions satisfy the same recurrence relation:

*f*_{n+1} – (2*n*/*x*) *f*_{n} + *f*_{n-1} = 0

where *f* is *J* or *Y*.
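As a quick sanity check, both kinds of Bessel function should satisfy this recurrence to within floating point error. A minimal sketch using SciPy (the order and argument here are chosen arbitrarily):

```python
from scipy.special import jv, yv

# f_{n+1} - (2n/x) f_n + f_{n-1} should be essentially zero for both kinds
x, n = 2.0, 3
for f in (jv, yv):
    print(f(n+1, x) - (2*n/x)*f(n, x) + f(n-1, x))
```

Both printed values are on the order of machine epsilon.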

If you apply the recurrence relation in the increasing direction, it is unstable for *J* but stable for *Y*.

If you apply the recurrence relation in the opposite direction, it is stable for *J* and unstable for *Y*.

We will illustrate the above claims using the following Python code. Since both kinds of Bessel function satisfy the same recurrence, we pass the Bessel function in as a function argument. SciPy implements Bessel functions of the first kind as `jv` and Bessel functions of the second kind as `yv`. [1]

```python
from numpy import pi, zeros
from scipy.special import jv, yv

def report(k, computed, exact):
    print(k, computed, exact, abs(computed - exact)/exact)

def bessel_up(x, n, f):
    # upward recurrence, starting from orders 0 and 1
    a, b = f(0, x), f(1, x)
    for k in range(2, n+1):
        a, b = b, 2*(k-1)*b/x - a
        report(k, b, f(k, x))

def bessel_down(x, n, f):
    # downward recurrence, starting from orders n and n-1
    a, b = f(n, x), f(n-1, x)
    for k in range(n-2, -1, -1):
        a, b = b, 2*(k+1)*b/x - a
        report(k, b, f(k, x))
```

We try this out as follows:

```python
bessel_up(1, 20, jv)
bessel_down(1, 20, jv)

bessel_up(1, 20, yv)
bessel_down(1, 20, yv)
```

When we compute *J*_{n}(1) using `bessel_up`, the relative error starts out small and grows to about 1% when *n* = 9. The relative error increases rapidly from there: when *n* = 10, the relative error is 356%.

For *n* = 20, the recurrence gives a value of 316894.36 while the true value is 3.87e-25, i.e. the computed value is 30 orders of magnitude larger than the correct value!

When we use `bessel_down`, the results are correct to full precision.

Next we compute *Y*_{n}(1) using `bessel_up`, and the results are correct to full precision.

When we compute *Y*_{n}(1) using `bessel_down`, the results are about as bad as computing *J*_{n}(1) using `bessel_up`. We compute *Y*_{0}(1) as 5.7e+27 while the correct value is roughly 0.088.

There are functions, such as Legendre polynomials, whose recurrence relations are stable in either direction, at least for some range of inputs. But it would be naive to assume that a recurrence is stable without some exploration.
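For example, the Legendre recurrence (*k* + 1) *P*_{k+1}(*x*) = (2*k* + 1) *x* *P*_{k}(*x*) − *k* *P*_{k−1}(*x*) can be run in the increasing direction and compared against SciPy's `eval_legendre`. A minimal sketch, using an arbitrarily chosen argument *x* = 0.5:

```python
from scipy.special import eval_legendre

def legendre_up(x, n):
    # upward recurrence: (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}
    a, b = 1.0, x  # P_0(x) and P_1(x)
    for k in range(1, n):
        a, b = b, ((2*k + 1)*x*b - k*a)/(k + 1)
    return b

x = 0.5
for n in range(2, 21):
    exact = eval_legendre(n, x)
    print(n, abs(legendre_up(x, n) - exact)/abs(exact))
```

For this range of inputs the relative errors stay near machine precision, in contrast to the Bessel examples above.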

## Miller’s algorithm

There is a trick for using the downward recurrence for Bessel functions known as **Miller’s algorithm**. It sounds crazy at first: Assume *J*_{N}(*x*) = 1 and *J*_{N+1}(*x*) = 0 for some large *N*, and run the recurrence downward.

Since we don’t know *J*_{N}(*x*), our results will be off by some constant proportion. But there’s a way to find that proportionality constant, using the normalization identity

*J*_{0}(*x*) + 2 *J*_{2}(*x*) + 2 *J*_{4}(*x*) + ⋯ = 1.

We add up our computed values for the terms on the left side, then divide by the sum to normalize our estimates. Miller’s recurrence algorithm applies more generally to other recurrences where the downward recurrence is stable and there exists a normalization identity analogous to the one for Bessel functions.
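The normalization identity *J*_{0}(*x*) + 2 *J*_{2}(*x*) + 2 *J*_{4}(*x*) + ⋯ = 1 can itself be checked numerically. A short sketch, truncating the series since the terms decay rapidly once the order exceeds the argument:

```python
import numpy as np
from scipy.special import jv

# J_0(x) + 2*J_2(x) + 2*J_4(x) + ... = 1; truncate once terms are negligible
x = np.pi
s = jv(0, x) + 2*sum(jv(k, x) for k in range(2, 40, 2))
print(s)  # very close to 1
```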

The following code lets us experiment with Miller’s algorithm.

```python
def miller(x, N):
    j = zeros(N)    # array to store values
    a, b = 0, 1     # assume J_{N+1}(x) = 0 and J_N(x) = 1
    for k in range(N-1, -1, -1):
        a, b = b, 2*(k+1)*b/x - a
        j[k] = b
    # normalize using J_0 + 2 J_2 + 2 J_4 + ... = 1
    norm = j[0] + sum(2*j[k] for k in range(2, N, 2))
    j /= norm
    for k in range(N-1, -1, -1):
        report(k, j[k], jv(k, x))
```

When we call `miller(pi, 20)` we see that Miller’s method computes *J*_{n}(π) accurately. The error starts out moderately small and decreases until the results are accurate to floating point precision.

| *k* | rel. error |
|-----|------------|
| 19  | 3.91e-07   |
| 17  | 2.35e-09   |
| 16  | 2.17e-11   |
| 15  | 2.23e-13   |
| 14  | 3.51e-15   |

For smaller *k* the relative error is also around 10^{-15}, i.e. essentially full precision.

[1] Why do the SciPy names end in “v”? The order of a Bessel function does not have to be an integer. It could be any real number, and the customary mathematical notation is to use a Greek letter ν (nu) as a subscript rather than *n* as a reminder that the subscript might not represent an integer. Since a Greek ν looks similar to an English v, SciPy uses v as a sort of approximation of ν.
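As a concrete example of non-integer order, *J*_{1/2}(*x*) has the closed form √(2/(π*x*)) sin *x*, which we can check against `jv`:

```python
import numpy as np
from scipy.special import jv

# J_{1/2}(x) = sqrt(2/(pi*x)) * sin(x)
x = 2.0
print(jv(0.5, x), np.sqrt(2/(np.pi*x)) * np.sin(x))
```

The two printed values agree to machine precision.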

I’d love to see a third post in the series that explores why the recurrences are unstable.

Thanks. I may do that.

These results are for real arguments of these functions. What happens for complex arguments?

The same thing happens.