# How to compute jinc(x)

The function jinc(x) that I wrote about yesterday is almost trivial to implement, but not quite. I’ll explain why it’s not quite as easy as it looks and how one might implement it in C and Python.

The function jinc(x) is defined as J1(x) / x, so if you have code to compute J1 then it ought to be a no-brainer. For example, why not use the following C code?

```c
#include <math.h>

double jinc(double x) {
    return j1(x) / x;
}
```

The problem is that if you pass in 0, the code will divide 0 by 0 and return a NaN. The function jinc(x) is defined to be 1/2 at x = 0 because that’s the limit of J1(x) / x as x goes to 0. So we try again:

```c
#include <math.h>

double jinc(double x) {
    return (x == 0.0) ? 0.5 : j1(x) / x;
}
```

Does that work? Technically, it could still fail — we’ll come back to that at the end — but we’ll assume for now that it’s OK.
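As a quick numerical sanity check (a sketch using SciPy’s j1, not part of the original post), the ratio J1(x)/x does settle toward 1/2 as x shrinks:

```python
from scipy.special import j1  # Bessel function of the first kind, order 1

# For small x, J1(x) ~ x/2 - x^3/16, so J1(x)/x ~ 1/2 - x^2/16.
for x in [1e-2, 1e-4, 1e-6]:
    print(x, j1(x) / x)  # approaches 0.5 from below
```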

We could write the analogous Python code, and it would be adequate as long as we’re only calling the function with scalars and not NumPy arrays.

```python
from scipy.special import j1

def jinc(x):
    if x == 0.0:
        return 0.5
    return j1(x) / x
```

Now suppose you want to plot this function. You create an array of points, say

```python
x = np.linspace(-1, 1, 25)
```

and plot jinc(x). You’ll get an error: “ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()”. Incidentally, if we had called linspace with an even integer as the last argument, our array of points would avoid zero and the naive implementation of jinc would work.

When Python tries to apply jinc to an array, it doesn’t know how to interpret the test x == 0. The error asks, in effect, “Do you mean if any component of x is 0? Or if all components of x are 0?” Neither option is what we want. We want to apply jinc as written to each element of x. We could do this by calling the vectorize function.

```python
jinc = np.vectorize(jinc)
```

This replaces our original jinc function with one that handles NumPy arrays correctly.
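For large arrays there is a faster alternative, since np.vectorize still loops in Python. The sketch below (not from the original post; the function name jinc_vec and the use of np.full_like and np.divide are my choices) handles the singular point with NumPy’s where machinery instead:

```python
import numpy as np
from scipy.special import j1

def jinc_vec(x):
    # Prefill every slot with the limit value 1/2, then divide
    # j1(x) by x only where x is nonzero; positions where x == 0
    # keep the prefilled 0.5 and no division by zero ever occurs.
    x = np.asarray(x, dtype=float)
    out = np.full_like(x, 0.5)
    np.divide(j1(x), x, out=out, where=(x != 0))
    return out
```

Because everything stays in compiled NumPy code, this avoids the per-element Python call overhead of np.vectorize.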

There is an extremely unlikely scenario in which the code above could fail. The value of J1(x) is approximately x/2 for small values of x. If the floating point value x is so small that 0.5*x returns 0, our function will return 0, even though it should return 0.5. The C code above works for values of x as small as DBL_MIN and even values much smaller. (DBL_MIN is not the smallest value of a double, only the smallest normalized double.) But if you set

```c
x = DBL_MIN / pow(2.0, 52);
```

then jinc(x) will return 0. If you want to be absolutely safe, you could change the implementation to

```c
#include <math.h>

double jinc(double x) {
    return (fabs(x) < 1e-8) ? 0.5 : j1(x) / x;
}
```

Why test whether the absolute value is less than 10^-8 rather than a much smaller number? For small x, the error in approximating jinc(x) with 1/2 is on the order of x^2/16. So for x as large as 10^-8, the approximation error is below the resolution of a double. As a bonus, the function jinc(x) will be more efficient for |x| < 10^-8 since it avoids a call to j1.
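The claim that the quadratic correction is invisible at this scale can be checked directly in double precision (a quick check of my own, not from the post):

```python
# Just below 0.5 the gap between consecutive doubles is 2**-54,
# about 5.55e-17, so a correction of (1e-8)**2 / 16 = 6.25e-18 is
# smaller than half a gap and rounds away entirely.
x = 1e-8
correction = x * x / 16          # 6.25e-18
assert 0.5 - correction == 0.5   # the quadratic term is lost
```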

###### 8 comments on “How to compute jinc(x)”
1. Juanlu001 says:

I don’t understand the second part: jinc(x) is meant to be 1/2 for small x and not x/2, so I cannot see the problem you are trying to address with the last snippet.

On the other hand, the vectorize() thing is very useful for piecewise defined functions, functions with singular values such as sinc(x) if one had to define it, etc.

2. John says:

Juanlu: Thanks for pointing that out. I had said “jinc” when I meant to say J1. For small x, J1(x) is approximately x/2. I corrected the post.

3. Juanlu001 says:

Ah right, I understand now, thanks.

4. Canageek says:

For those of us not familiar with higher level math could you explain what you would use a function like this for? I’m finishing a chemistry degree, so when I think odd function I think wavefunction of an electron, or of trying to model some physical process. I’m sure I’ve heard of sinc before (probably in my quantum mechanics class) but would have no idea what you would use either of those functions for.

5. John says:

Bessel functions often come up in physics problems stated in cylindrical coordinates, so often that they’re called “cylindrical functions.” They also come up in many other contexts.

I’m not as familiar with the jinc function in particular. One place it comes up is in the Fourier transform of a function that has value 1 inside a disk and 0 outside. In that sense it’s analogous to the sinc function, the Fourier transform of a function that’s 1 inside an interval and 0 elsewhere.

6. Alessandro says:

Why do you use 1e-8 in the last code snippet?
Because (1e-8)^2 (the size of the error in approximating jinc with 1/2) will be almost equal to std::numeric_limits<double>::epsilon() ?

7. John says:

Alessandro: The Taylor series for jinc(x) is 1/2 - x^2/16 + O(x^4). If x = 1e-8, then x^2/16 = 6.25e-18. Including the quadratic term would make no difference because 0.5 - 6.25e-18 = 0.5 in IEEE 754 double precision arithmetic.

8. Emery Carr says:

In Real Computing Made Real, Forman Acton gives an implementation of the sinc function which uses a truncated Taylor series expansion directly for small values of x, i.e. (fabs(x) < .1) ? (1.0 - x^2 * (1.0 - x^2/20.0)/6.0) : sin(x)/x. The idea is to preserve more significant figures for small values. Perhaps something similar could be useful for jinc(x)?
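For what it’s worth, the analogous idea for jinc could be sketched using its own series, jinc(x) = 1/2 - x^2/16 + x^4/384 - ... The threshold 0.1 and the factored form below are hypothetical choices in the spirit of Acton’s sinc, not code from the book or from the post:

```python
from scipy.special import j1

def jinc_series(x):
    # Truncated Taylor series, factored as 1/2 - (x^2/16)*(1 - x^2/24),
    # which expands to 1/2 - x^2/16 + x^4/384, matching jinc's series.
    if abs(x) < 0.1:
        x2 = x * x
        return 0.5 - (x2 / 16.0) * (1.0 - x2 / 24.0)
    return j1(x) / x
```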
