The previous post looked at an integral that is “impossible” in the sense that it cannot be computed in closed form in terms of elementary functions. It can be expressed in terms of special functions, and it can easily be computed numerically to as much accuracy as anyone would want.

In this post I’ll present a simple approximation that calculus students should find accessible.

The basic idea is to tweak a Taylor polynomial into something we can take the square root of. Specifically, we will replace the Taylor series

sin θ ≈ θ − θ³/6 + θ⁵/120

with

sin θ ≈ θ − θ³/6 + θ⁵/144 = θ (1 − θ²/12)².
If θ is small, θ⁵ is extremely small, so the approximation should work well for small *x*. And it doesn’t need to work for *x* very large, because the integrand is only real for *x* less than π.

Define

*f*(*x*) = ∫₀^*x* √(sin θ) dθ.

Then

*f*(*x*) ≈ ∫₀^*x* √θ (1 − θ²/12) dθ = (2/3) *x*^{3/2} − *x*^{7/2}/42.
Here’s a plot showing how good the approximation is.

So the approximation is good to about seven figures for *x* < 0.2, and even for *x* as large as 1.5 the approximation is still good to three figures.
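The accuracy claims are easy to check numerically. Here is a minimal sketch, assuming (as reconstructed above) that *f*(*x*) is the integral of √(sin θ) from 0 to *x* and that integrating the modified series gives (2/3)*x*^{3/2} − *x*^{7/2}/42; the helper names are mine, and composite Simpson’s rule stands in for a proper quadrature routine:

```python
from math import sin, sqrt

def f_numeric(x, n=20000):
    """Composite Simpson's rule for f(x) = integral of sqrt(sin(theta)), 0 to x."""
    h = x / n
    total = sqrt(sin(0.0)) + sqrt(sin(x))
    for i in range(1, n):
        total += (4 if i % 2 else 2) * sqrt(sin(i * h))
    return total * h / 3

def f_approx(x):
    """Approximation from integrating sqrt(theta) * (1 - theta**2 / 12)."""
    return (2 / 3) * x**1.5 - x**3.5 / 42

for x in (0.2, 1.0, 1.5):
    exact = f_numeric(x)
    approx = f_approx(x)
    print(f"x = {x}: numeric = {exact:.7f}, approx = {approx:.7f}, "
          f"rel. error = {abs(approx - exact) / exact:.2e}")
```

The relative error printed for *x* = 0.2 should be well below 10⁻⁵, and the error at *x* = 1.5 should be on the order of 10⁻³, consistent with the figures quoted above.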

Since sin θ is positive for 0 ≤ θ ≤ π, it’s natural to evaluate *f*(*x*) for 0 ≤ *x* ≤ π. Our approximation error would continue to increase if we directly used values of *x* greater than π/2. But because sine is symmetric about π/2, we can avoid this problem: if π/2 ≤ *x* ≤ π, then we can compute *f*(*x*) as 2*f*(π/2) − *f*(π − *x*).
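The reflection trick can be sketched in code. This snippet restates a numeric evaluation of *f* (via composite Simpson’s rule, with helper names of my own choosing) so that it stands alone, and checks that the reflected formula agrees with integrating directly past π/2:

```python
from math import pi, sin, sqrt

def f_numeric(x, n=20000):
    """Composite Simpson's rule for f(x) = integral of sqrt(sin(theta)), 0 to x."""
    h = x / n
    total = sqrt(sin(0.0)) + sqrt(sin(x))
    for i in range(1, n):
        total += (4 if i % 2 else 2) * sqrt(sin(i * h))
    return total * h / 3

def f_extended(x):
    """Evaluate f on [0, pi] using only values of f on [0, pi/2].

    For pi/2 <= x <= pi, the symmetry of sine about pi/2 gives
    f(x) = 2 f(pi/2) - f(pi - x).
    """
    if x <= pi / 2:
        return f_numeric(x)
    return 2 * f_numeric(pi / 2) - f_numeric(pi - x)

# The reflected formula should agree with integrating directly over [0, 2].
print(f_extended(2.0), f_numeric(2.0))
```

In particular, *f*(π) comes out to 2*f*(π/2), the value of the full integral over [0, π].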

That’s a neat post.

The last equation should be 2*f(π/2) – f(π – x), right?

As written, with f(π/2) + f(π − x), the term f(π − x) decreases with x, so at x = π the expression would be just f(π/2), when it should be the full integral 2f(π/2).

Since the integrand is non-negative on [0, π], as an extremely crude upper-bound calculation you might also try using the root-mean-square value as a proxy for the mean value. This gives a quick-and-dirty approximation of sqrt(x(1 − cos x)). It’s not great, good to within 6% or so, but not too bad for just a few seconds of work.
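The RMS idea in this comment is easy to sanity-check. By the Cauchy–Schwarz inequality, the mean of √(sin θ) over [0, x] is at most the square root of the mean of sin θ, which is sqrt((1 − cos x)/x), so x·sqrt((1 − cos x)/x) = sqrt(x(1 − cos x)) is indeed an upper bound on f(x). A quick sketch, restating the numeric integral (with hypothetical helper names) so it stands alone:

```python
from math import cos, pi, sin, sqrt

def f_numeric(x, n=20000):
    """Composite Simpson's rule for f(x) = integral of sqrt(sin(theta)), 0 to x."""
    h = x / n
    total = sqrt(sin(0.0)) + sqrt(sin(x))
    for i in range(1, n):
        total += (4 if i % 2 else 2) * sqrt(sin(i * h))
    return total * h / 3

def f_rms(x):
    """RMS-based upper bound on f(x): x times the RMS value of sqrt(sin(theta))."""
    return sqrt(x * (1 - cos(x)))

for x in (0.5, 1.5, 3.0):
    exact = f_numeric(x)
    crude = f_rms(x)
    print(f"x = {x:.1f}: relative overshoot = {(crude - exact) / exact:.3f}")
```

The overshoot is worst for small x, where it approaches about 6%, matching the comment’s figure.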

Why wouldn’t the antiderivative of the first few terms serve just as well? Just curious.