Carl Jacobi’s advice to mathematicians was “always invert.” See what you can find out by turning a problem around. This post came from following Jacobi’s advice.
A few days ago I wrote a note about how you can approximate a normal probability density by one period of a cosine. Specifically, the approximation has density
f(x) = (1 + cos x) / 2π
for x between -π and π and zero outside that interval. This distribution has variance σ² = π²/3 − 2, and so σ f(σx) is approximately a standard normal density.
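A quick numerical sanity check of the claims above (a sketch in Python rather than the Mathematica used for the post's graph; the function names are mine):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Cosine density f(x) = (1 + cos x)/(2 pi) on [-pi, pi], zero elsewhere
def f(x):
    return np.where(np.abs(x) <= np.pi, (1 + np.cos(x)) / (2 * np.pi), 0.0)

# Check that f integrates to 1 and has variance pi^2/3 - 2
total, _ = quad(f, -np.pi, np.pi)
var, _ = quad(lambda x: x**2 * f(x), -np.pi, np.pi)
print(total)                   # ~1.0
print(var, np.pi**2 / 3 - 2)   # both ~1.2899

# Rescaled density sigma * f(sigma x) has variance 1, approximating N(0, 1)
sigma = np.sqrt(var)
xs = np.linspace(-3, 3, 7)
err = np.max(np.abs(sigma * f(sigma * xs) - norm.pdf(xs)))
print(err)  # maximum error is a few percent
```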
Why approximate a normal density by a cosine? The cosine is more familiar than the normal density and can easily be integrated in closed form. The rule to always invert suggests it might be useful to approximate a cosine by a normal density.
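The closed-form integration is easy to make concrete: integrating f from −π to x gives the CDF F(x) = (x + π + sin x) / 2π. A minimal sketch (the function name is mine):

```python
import math

# CDF of the cosine density: F(x) = (x + pi + sin x)/(2 pi) on [-pi, pi]
def cosine_cdf(x):
    if x <= -math.pi:
        return 0.0
    if x >= math.pi:
        return 1.0
    return (x + math.pi + math.sin(x)) / (2 * math.pi)

print(cosine_cdf(0.0))      # 0.5 by symmetry
print(cosine_cdf(math.pi))  # 1.0
```

The normal CDF, by contrast, has no elementary closed form, which is what makes the cosine stand-in convenient for quick probability estimates.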
What is a nice property of the normal density? For one thing, it is its own Fourier transform. So it might be worthwhile to approximate a cosine by a normal density to get an idea of what its Fourier transform looks like. Maybe the function σ f(σx) above doesn't change much under the Fourier transform. Is that right? Let's look at a graph.
The solid blue line is σ f(σx) and the dashed orange line is its Fourier transform.
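One way to check this numerically (a Python sketch, not the post's Mathematica; I'm assuming the unitary transform convention, under which the standard normal density is its own Fourier transform, and the function names are mine):

```python
import numpy as np
from scipy.integrate import quad

sigma = np.sqrt(np.pi**2 / 3 - 2)

# g(x) = sigma * f(sigma x): rescaled cosine density, supported on |x| < pi/sigma
def g(x):
    return np.where(np.abs(sigma * x) <= np.pi,
                    sigma * (1 + np.cos(sigma * x)) / (2 * np.pi), 0.0)

# Unitary Fourier transform (1/sqrt(2 pi)) * integral of g(x) exp(-i w x) dx;
# g is even and real, so this reduces to a cosine integral
def g_hat(w):
    val, _ = quad(lambda x: g(x) * np.cos(w * x), -np.pi / sigma, np.pi / sigma)
    return val / np.sqrt(2 * np.pi)

# Compare g to its transform at a few points; they agree to within a few percent
for w in [0.0, 1.0, 2.0]:
    print(w, float(g(w)), g_hat(w))
```

The transform stays close to g itself, consistent with the graph: the rescaled cosine density is approximately its own Fourier transform, just as the normal density is exactly.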
The cosine approximation for the normal density is more interesting than practical if your goal is simply to compute normal probabilities; there are more accurate approximations. But there are other uses of the cosine approximation, such as the example above. How else might you exploit the approximate relationship between sine waves and the normal distribution? I have the sense that there’s some application out there where this approximation swoops in and greatly simplifies a problem.
Update (6 May 2010): Here's the Mathematica code, in PDF form, used to create the graph above. The code goes into more detail than the text of this post.