Numerical differentiation

Today I needed to compute the derivative of the zeta function. SciPy implements the zeta function, but not its derivative, so I needed to write my own version.

The most obvious way to approximate a derivative would be to simply stick a small step size into the definition of the derivative:

f′(x) ≈ (f(x + h) − f(x)) / h

However, we can do much better using the two-sided difference

f′(x) ≈ (f(x + h) − f(x − h)) / 2h

To see why, expand f(x) in a power series:

f(x + h) = f(x) + h f′(x) + h² f″(x)/2 + O(h³)

A little rearrangement shows that the error in the one-sided difference, the first approximation above, is O(h). Now replace h with −h to get the expansion of f(x − h) and subtract: the even-order terms cancel, leaving f(x + h) − f(x − h) = 2h f′(x) + O(h³). Dividing by 2h shows the two-sided difference has error O(h²). When h is small, h² is very small, so the two-sided version will be more accurate for sufficiently small h.
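To see the difference in practice, here is a quick check of my own, not from the original post, using sin, whose derivative is cos. Dividing h by 10 should cut the one-sided error by roughly 10 and the two-sided error by roughly 100:

    import math

    def one_sided(f, x, h):
        return (f(x + h) - f(x)) / h

    def two_sided(f, x, h):
        return (f(x + h) - f(x - h)) / (2*h)

    for h in [1e-2, 1e-3, 1e-4]:
        e1 = abs(one_sided(math.sin, 1.0, h) - math.cos(1.0))
        e2 = abs(two_sided(math.sin, 1.0, h) - math.cos(1.0))
        print(f"h = {h:g}: one-sided error {e1:.1e}, two-sided error {e2:.1e}")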

So how small should h be? In theory, the smaller the better. But in computer arithmetic, you lose precision whenever you subtract two nearly equal numbers: the more bits two numbers share, the more bits of precision you may lose in the subtraction. In my application, h = 10⁻⁵ works well: the precision lost to the subtraction in the numerator is comparable to the precision of the (two-sided) finite difference approximation itself. The following code was adequate for my purposes.

    from scipy.special import zeta

    def zeta_prime(x):
        """Two-sided finite difference approximation to zeta'(x)."""
        h = 1e-5
        # The second argument 1 selects the Riemann zeta function
        # (SciPy's zeta is the Hurwitz zeta function; see below).
        return (zeta(x + h, 1) - zeta(x - h, 1)) / (2*h)

The zeta function in SciPy is the Hurwitz zeta function, a generalization of the Riemann zeta function. Setting the second argument to 1 gives the Riemann zeta function.
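As a sanity check (my addition, not part of the post), ζ′(2) is known to be about −0.9375482543, and the approximation above recovers several digits of it:

    print(zeta_prime(2))  # about -0.93755, close to the known value of zeta'(2)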

There’s a variation on the method above that works for real-valued functions that extend to complex analytic functions. In that case you can use the complex step differentiation trick, approximating the derivative by

f′(x) ≈ Im(f(x + ih)) / h

This amounts to the two-sided finite difference above, except you don’t need to have the computer carry out the subtraction, and so you save some precision. Why’s that? When x is real, x + ih and x − ih are complex conjugates, and f(x − ih) is the conjugate of f(x + ih), i.e. conjugation and function application commute in this setting. So f(x + ih) − f(x − ih) is twice the imaginary part of f(x + ih), and the two-sided difference (f(x + ih) − f(x − ih)) / (2ih) reduces to Im(f(x + ih)) / h.
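Here is a small illustration of the trick, again my own sketch rather than the post's, using cmath.sin since it accepts complex arguments. Because no subtraction of nearly equal numbers occurs, h can be taken absurdly small:

    import cmath, math

    def complex_step(f, x, h=1e-20):
        # Im f(x + ih) / h; no cancellation, so a tiny h is fine
        return f(complex(x, h)).imag / h

    print(complex_step(cmath.sin, 1.0))  # derivative of sin at 1
    print(math.cos(1.0))                 # exact answer for comparison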

SciPy implements complex versions of many special functions, but unfortunately not the zeta function.
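If you do need the zeta function for complex arguments, the mpmath library (a separate package; this suggestion is mine, not the post's) implements it and can also differentiate numerically:

    from mpmath import mp, zeta, diff

    mp.dps = 25           # working precision, in decimal digits
    print(diff(zeta, 2))  # zeta'(2), about -0.9375482543...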

6 thoughts on “Numerical differentiation”

  1. The symmetric version of the definition of the derivative you’re using for approximation purposes seemed like a more natural definition of the derivative to me, until I found out it’s not equivalent to the usual forward-difference definition. Example: |x| is non-differentiable at 0 with the ordinary definition, but has derivative 0 using the symmetric version (a quick check of this appears below the comments). Not that it matters for applications, just thought it was interesting. Broke my intuition anyway.

  2. The complex step method is only good as a workaround for libraries that don’t do what you want. Never use it if you have complete control of the code. Computing the derivative this way is almost always more work than automatic differentiation, and it’s less accurate and predictable.

  3. It also depends on what you’re using the derivative for. If you need to do (say) some kind of Newton-Raphson or Levenberg-Marquardt iteration, then a small amount of Richardson extrapolation goes a long way (see the sketch below).
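To make the first commenter’s example concrete (an added check, not part of the comment): the symmetric difference of |x| at 0 is exactly zero for every h, even though |x| is not differentiable there:

    for h in [1e-1, 1e-3, 1e-5]:
        print((abs(0 + h) - abs(0 - h)) / (2*h))  # always 0.0, yet |x| has no derivative at 0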
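And to sketch the Richardson extrapolation the last comment mentions (my sketch, under the usual smoothness assumptions): one extrapolation step combines central differences at h and h/2 to cancel the h² error term, leaving an O(h⁴) approximation:

    import math

    def central(f, x, h):
        return (f(x + h) - f(x - h)) / (2*h)

    def richardson(f, x, h):
        # (4 D(h/2) - D(h)) / 3 cancels the h^2 term in the error expansion
        return (4*central(f, x, h/2) - central(f, x, h)) / 3

    print(richardson(math.sin, 1.0, 1e-2))  # ~cos(1) = 0.5403023...
    print(math.cos(1.0))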
