Stand-alone error function erf(x)

The question came up on StackOverflow this morning of how to compute the error function erf(x) in Python. The standard answer for how to compute anything numerical in Python is “Look in SciPy.” However, this person didn’t want to take on a dependency on SciPy. I’ve seen variations on this question come up in several different contexts lately, including questions about computing the normal distribution function, so I thought I’d write up a solution.

Here’s a Python implementation of erf(x) based on formula 7.1.26 from Abramowitz and Stegun (A&S). The maximum error is below 1.5 × 10⁻⁷.

import math

def erf(x):
    # constants
    a1 =  0.254829592
    a2 = -0.284496736
    a3 =  1.421413741
    a4 = -1.453152027
    a5 =  1.061405429
    p  =  0.3275911

    # Save the sign of x
    sign = 1
    if x < 0:
        sign = -1
    x = abs(x)

    # A & S 7.1.26
    t = 1.0/(1.0 + p*x)
    y = 1.0 - (((((a5*t + a4)*t) + a3)*t + a2)*t + a1)*t*math.exp(-x*x)

    return sign*y

This problem is typical in two ways: A&S has a solution, and you’ve got to know a little background before you can use it.

The formula given in A&S is only good for x ≥ 0. That’s no problem if you know that the error function is an odd function, i.e. erf(-x) = -erf(x). But if you’re an engineer who has never heard of the error function but needs to use it, it may take a while to figure out how to handle negative inputs.
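As a sanity check, the function can be compared against math.erf, which has been in the standard library since Python 3.2. The definition is repeated below so the snippet stands alone:

```python
import math

def erf_as(x):
    # A&S 7.1.26 approximation, as defined above
    a1, a2, a3, a4, a5 = (0.254829592, -0.284496736, 1.421413741,
                          -1.453152027, 1.061405429)
    p = 0.3275911
    sign = -1 if x < 0 else 1
    x = abs(x)
    t = 1.0/(1.0 + p*x)
    y = 1.0 - (((((a5*t + a4)*t) + a3)*t + a2)*t + a1)*t*math.exp(-x*x)
    return sign*y

# Odd symmetry handles negative arguments exactly.
assert erf_as(-1.0) == -erf_as(1.0)

# Compare against math.erf on a grid of points in [-4, 4];
# the worst error over the grid is within the stated 1.5e-7 bound.
worst = max(abs(erf_as(i/100.0) - math.erf(i/100.0)) for i in range(-400, 401))
print(worst)
```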

One other thing that someone just picking up A&S might not know is the best way to evaluate polynomials. The formula appears as 1 − (a₁t + a₂t² + a₃t³ + a₄t⁴ + a₅t⁵)exp(−x²), which is absolutely correct. But directly evaluating an nth order polynomial takes O(n²) operations, while the factorization used in the code above uses O(n) operations. This technique is known as Horner’s method.
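As a small illustration of Horner’s method, with made-up coefficients (not the ones from A&S): both forms give the same value, but the factored form uses only one multiply and one add per coefficient.

```python
# Coefficients of 1 + 2t + 3t^2 + 4t^3 + 5t^4, chosen only for illustration
a = [1.0, 2.0, 3.0, 4.0, 5.0]

def naive(t):
    # Each power of t recomputed from scratch: O(n^2) multiplications overall
    return sum(ai * t**i for i, ai in enumerate(a))

def horner(t):
    # Factored (Horner) form: one multiply and one add per coefficient, O(n)
    p = 0.0
    for ai in reversed(a):
        p = p * t + ai
    return p

t = 0.37
print(naive(t), horner(t))
assert abs(naive(t) - horner(t)) < 1e-12
```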

14 thoughts on “Stand-alone error function erf(x)”

  1. erf is actually a standard function in C “math.h”

    For some reason, it is not included in Python’s math module but, if you have SWIG, it is easy to write a SWIG wrapper for it. An example on a Linux machine:

    bash$ cat erf.i

    %module erf
    %{
    #include <math.h>
    %}
    double erf(double);

    bash$ swig -o erf_wrap.c -python erf.i
    bash$ gcc -o erf_wrap.os -c -fPIC -I/usr/include/python2.4 erf_wrap.c
    bash$ gcc -o _erf.so -shared erf_wrap.os
    bash$ python
    >>> from erf import erf
    >>> erf(1)
    0.84270079294971489
    
  2. Thanks! That’s handy sample code for other problems too.

    On POSIX systems, erf is included in math.h. But it’s not part of the ISO standard requirement, and Microsoft doesn’t implement it with their compiler. For reasons I don’t understand, Microsoft does implement Bessel functions, but it doesn’t implement functions that I imagine are more commonly used, particularly the gamma function.
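    On systems where erf is in the C library, ctypes (in the standard library since Python 2.5) can call it directly, with no SWIG step. A sketch, assuming find_library can locate the math library:

```python
import ctypes
import ctypes.util

# Locate and load the C math library; on some platforms erf
# lives in libc instead, so this lookup is an assumption.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.erf.restype = ctypes.c_double
libm.erf.argtypes = [ctypes.c_double]

print(libm.erf(1.0))  # about 0.8427007929497149
```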

  3. I refactored a sixth order polynomial using Horner’s method. The evaluation is called many times. The refactoring led to a dramatic improvement in execution times.

    Definitely worth the effort, and unfortunately, easy to make an error until you’ve performed the operation a few times.

    So much for plowing through a polynomial using Math.Pow(x, y).

  4. I had never heard about “A&S” so I followed the Amazon link only to see the very familiar cover of my Abramowitz & Stegun!

  5. Just found this through a google search. Very helpful… especially now that I’ve tried it and it works. Thanks.

  6. Gene, I’m quoting below a couple of paragraphs, from “The Art of Scientific Computing” by Press et al…

    —–

    We assume that you know enough never to evaluate a polynomial this way:
    p=c[0]+c[1]*x+c[2]*x*x+c[3]*x*x*x+c[4]*x*x*x*x;
    or (even worse!),
    p=c[0]+c[1]*x+c[2]*pow(x,2.0)+c[3]*pow(x,3.0)+c[4]*pow(x,4.0);
    Come the (computer) revolution, all persons found guilty of such criminal behavior will be summarily executed, and their programs won’t be! It is a matter of taste, however, whether to write
    p=c[0]+x*(c[1]+x*(c[2]+x*(c[3]+x*c[4])));
    or
    p=(((c[4]*x+c[3])*x+c[2])*x+c[1])*x+c[0];
    If the number of coefficients c[0..n-1] is large, one writes
    p=c[n-1];
    for(j=n-2;j>=0;j--) p=p*x+c[j];
    or
    p=c[j=n-1];
    while (j>0) p=p*x+c[--j];

  7. Can I use the code snippet above to compute erf(x), or are there any issues with the code snippet? Thanks

  8. Thanks for this — I would like to distribute a modified version of this code — can you tell me what license you intend to distribute this under?

    (My version will be support code for Think Stats, a book I am working on. The current draft is at thinkstatsbook.com)

    Thanks so much for making a website for stand-alone code for people in need like me.
    I truly appreciate it.

  10. This is nice, thanks for writing this up.

    If you’re using this routine, one thing to watch out for is large relative error when |x| is small; e.g. erf(1e-9) calculated in this approximation has no correct decimal digits. The correct answer is small here, so the absolute error is not large, but the relative error is.

    You can fix this up if necessary by switching to using one term of the Taylor series for erf(x) when |x| < 1e-3 or so.

    E.g. add

    if abs(x) < 1e-3:
        return 1.1283791670955126*x

    The constant is a decimal expansion of 2/sqrt(π).
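    To see the problem concretely, here is a sketch comparing the approximation against the one-term Taylor value at x = 1e-9 (erf_as repeats the post’s function so the snippet runs on its own):

```python
import math

def erf_as(x):
    # A&S 7.1.26 approximation from the post, without the small-x fix
    a1, a2, a3, a4, a5 = (0.254829592, -0.284496736, 1.421413741,
                          -1.453152027, 1.061405429)
    p = 0.3275911
    sign = -1 if x < 0 else 1
    x = abs(x)
    t = 1.0/(1.0 + p*x)
    y = 1.0 - (((((a5*t + a4)*t) + a3)*t + a2)*t + a1)*t*math.exp(-x*x)
    return sign*y

x = 1e-9
exact = 1.1283791670955126 * x   # one-term Taylor series: (2/sqrt(pi)) * x
approx = erf_as(x)
rel_err = abs(approx - exact) / exact
print(approx, exact, rel_err)    # the relative error is enormous here
```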

  11. //cdf() for ES6
    var cdf = (xx) => (
    {abs, sqrt, exp} = Math,
    x = abs(xx) / sqrt(2.0),
    t = 1.0 / (1.0 + 0.3275911*x),
    y = 1.0 - (((((1.061405429*t + -1.453152027)*t) + 1.421413741)*t + -0.284496736)*t + 0.254829592)*t*exp(-x*x),
    0.5 * (1.0 + y * (xx < 0 ? -1 : 1))
    );
    // usage: values.forEach(el => console.log(el, cdf(el)));

Comments are closed.