Sinc and Jinc integrals

The sinc function is defined by sinc(x) = sin(x)/x. Philip Woodward introduced the name of the function in 1952, saying it “occurs so often in Fourier analysis and its applications that it does seem to merit some notation of its own.”

Here’s an elegant equation involving the integrals of the sinc function:

\int_{-\infty}^\infty \mbox{sinc}(x) \, dx = \int_{-\infty}^\infty \mbox{sinc}^2(x) \, dx = \pi

When I ran across this recently I wondered two things: How hard is it to compute these two integrals? What are the corresponding results for the jinc function? The jinc function is analogous to sinc, but using a Bessel function in place of sine: jinc(x) = J₁(x)/x.

The Fourier transform of the box function, the function box(x) that is 1 on the interval [-1/2, 1/2] and zero everywhere else, is sinc(π ω). (That’s one of the reasons sinc comes up so often in Fourier analysis, as Woodward observed.) So the Fourier transform of sinc(x) is π box(π ω). The integral of a function is the value of its Fourier transform at zero, so sinc integrates to π. [1]

By Plancherel’s theorem, the integral of sinc²(x) equals the integral of the square of its Fourier transform. That square is π² on an interval of width 1/π, so the integral is π.

[There are several conventions for defining the Fourier transform. Here I’m using what I call the (−1, τ, 1) convention, described in these notes. See that page for other conventions and how to convert between them.]

Now for the jinc function. It also has a simple Fourier transform: f(ω) = 2 √(1 − (2πω)²) for |ω| < 1/(2π) and zero otherwise. As above, we can compute the integral of jinc over the real line by evaluating its Fourier transform at 0, which gives 2.

Also as above, the integral of jinc²(x) is the integral of the square of its Fourier transform, which works out to 8/(3π).
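As a sanity check, here’s a quick numerical verification of all four integrals. This is my own sketch, not part of the original argument; it assumes NumPy and SciPy are available, and since sinc is not absolutely integrable the truncated integrals converge to their limits rather slowly.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import j1

# sinc(x) = sin(x)/x and jinc(x) = J1(x)/x. Both are even functions,
# so integrate over (0, L] and double. Start just above 0 to avoid 0/0.
x = np.linspace(1e-8, 5000, 5_000_001)
sinc = np.sin(x) / x
jinc = j1(x) / x

print(2 * trapezoid(sinc,    x), np.pi)         # ~3.1416
print(2 * trapezoid(sinc**2, x), np.pi)         # ~3.1416
print(2 * trapezoid(jinc,    x), 2)             # ~2.0
print(2 * trapezoid(jinc**2, x), 8/(3*np.pi))   # ~0.8488
```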

Update: See the next post for the analogous relations for sums.

More on sinc and jinc functions

[1] You may have a couple objections to this calculation. I found the Fourier transform of the box function was sinc, then concluded that the transform of sinc is the box function. But applying the Fourier transform twice doesn’t give you the original function back, right? When you transform f(x) twice you get f(-x), but the functions involved here are even, so f(-x) = f(x).

OK, but you may still have another objection: the sinc function does not have finite L¹ norm, so you can’t just take its Fourier transform. True, but you can justify the transform in terms of L² theory or distribution theory.

Colors of noise

The term white noise is fairly common. People unfamiliar with its technical meaning will describe some sort of background noise, like a fan, as white noise. Less common are terms like pink noise, red noise, etc.

The colors of noise are defined various ways, but they’re all based on an analogy between the power spectrum of the noisy signal and the spectrum of visible light. This post gives the motivations and intuitive definitions. I may give rigorous definitions in some future post.

White noise has a flat power spectrum, analogous to white light containing all other colors (frequencies) of light.

Pink noise has a power spectrum inversely proportional to its frequency f (or in some definitions, inversely proportional to f^α for some exponent α near 1). Visible light with such a spectrum appears pink because there is more power toward the low (red) end of the spectrum, but a substantial amount of power remains at higher frequencies since the power drops off slowly.

The spectrum of red noise is more heavily weighted toward low frequencies, dropping off like 1/f², analogous to light with more red and less white. Confusingly, red noise is also called Brown noise, not after the color brown but after the person Robert Brown, discoverer of Brownian motion.

Blue noise is the opposite of red, with power increasing in proportion to frequency, analogous to light with more power toward the high (blue) frequencies.

Grey noise is a sort of psychologically white noise. Instead of all frequencies having equal power, all frequencies have equal perceived power, with lower actual power in the middle and higher actual power on the high and low end.
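Here’s a small sketch of how you might generate these colors in practice; it’s my own illustration rather than a definition from the post. Start with white noise and reshape its spectrum so that power falls off like 1/f^α. The function name and parameters are made up for the example.

```python
import numpy as np

def colored_noise(n, alpha, seed=None):
    """Noise with power spectrum proportional to 1/f^alpha.
    alpha = 0: white, 1: pink, 2: red/Brownian; alpha = -1 tilts toward blue."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n)              # flat spectrum on average
    spectrum = np.fft.rfft(white)
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                                 # avoid dividing by zero at DC
    spectrum *= f ** (-alpha / 2)               # power ~ amplitude^2 ~ 1/f^alpha
    return np.fft.irfft(spectrum, n)

pink = colored_noise(10_000, alpha=1)
red  = colored_noise(10_000, alpha=2)
blue = colored_noise(10_000, alpha=-1)
```

Grey noise doesn’t fit this simple power law; it would need an extra weighting by an inverse equal-loudness curve.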

The Fast Fourier Transform (FFT) and big data

The most direct way to compute a Fourier transform numerically takes O(n²) operations. The Fast Fourier Transform (FFT) can compute the same result in O(n log n) operations. If n is large, this can be a huge improvement.
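To make the comparison concrete, here’s a minimal sketch, not taken from the article, of the direct O(n²) transform next to a library FFT. The dft function below is purely illustrative.

```python
import numpy as np

def dft(x):
    """Direct discrete Fourier transform: n^2 complex multiplications."""
    n = len(x)
    k = np.arange(n)
    W = np.exp(-2j * np.pi * np.outer(k, k) / n)   # n-by-n matrix of roots of unity
    return W @ x

x = np.random.default_rng(0).standard_normal(1024)
print(np.allclose(dft(x), np.fft.fft(x)))          # True, but dft() scales as O(n^2)
```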

James Cooley and John Tukey (re)discovered the FFT in 1965. It was thought to be an original discovery at the time. Only later did someone find a sketch of the algorithm in the papers of Gauss.

Daniel Rockmore wrote the article on the Fast Fourier Transform in The Princeton Companion to Applied Mathematics. He says

In this new world of 1960s “big data,” a clever reduction in computational complexity (a term not yet widely in use) could make a tremendous difference.

Rockmore adds a very interesting footnote to the sentence above:

Many years later Cooley told me that he believed that the Fast Fourier transform could be thought of as one of the inspirations for asymptotic algorithmic analysis and the study of computational complexity, as previous to the publication of his paper with Tukey very few people had considered data sets large enough to suggest the utility of an asymptotic analysis.


Remove noise, remove signal

Whenever you remove noise, you also remove at least some signal. Ideally you can remove a large portion of the noise and a small portion of the signal, but there’s always a trade-off between the two. Averaging things makes them more average.

Statistics has the related idea of bias-variance trade-off. An unfiltered signal has low bias but high variance. Filtering reduces the variance but introduces bias.

If you have a crackly recording, you want to remove the crackling and leave the music. If you do it well, you can remove most of the crackling effect and reveal the music, but the music signal will be slightly diminished. If you filter too aggressively, you’ll get rid of more noise, but create a dull version of the music. In the extreme, you get a single hum that’s the average of the entire recording.
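As a toy illustration of the trade-off (my own sketch, with a sine wave standing in for the music and Gaussian noise for the crackle), compare moving averages of different widths. Wider windows cut more noise but also flatten the signal, and a window as wide as a full period of the signal nearly erases it.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 5 * t)                    # the "music"
noisy = signal + 0.5 * rng.standard_normal(t.size)    # add "crackle"

for width in [5, 25, 200]:                            # 200 samples = one full period
    window = np.ones(width) / width
    smoothed = np.convolve(noisy, window, mode="same")
    bias_sq = np.mean((np.convolve(signal, window, mode="same") - signal) ** 2)
    error = np.mean((smoothed - signal) ** 2)
    print(width, round(bias_sq, 4), round(error, 4))  # bias grows as variance shrinks
```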

This is a metaphor for life. If you only value your own opinion, you’re an idiot in the oldest sense of the word, someone in his or her own world. Your work may have a strong signal, but it also has a lot of noise. Getting even one outside opinion greatly cuts down on the noise. But it also cuts down on the signal to some extent. If you get too many opinions, the noise may be gone and the signal with it. Trying to please too many people leads to work that is offensively bland.

Related post: The cult of average

The opening chord of “A Hard Day’s Night”

The opening chord of the Beatles song “A Hard Day’s Night” has been something of a mystery. Guitarists have tried to reproduce the chord with limited success. Turns out there’s a good reason why they haven’t figured it out: the chord cannot be played on a guitar alone.

Jason Brown has digitally analyzed the chord using Fourier analysis and determined that there must have been a piano in the recording studio playing along with the guitars. Brown has determined what notes each member of the Beatles was playing.
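Brown’s actual analysis is far more involved, but the basic idea, picking out the strongest frequencies in a recording and mapping them to notes, can be sketched with a synthetic example. The chord below is made up; it is not the Beatles’ chord.

```python
import numpy as np

rate = 44100
t = np.arange(rate) / rate                    # one second of synthetic "audio"
notes_in = [196.0, 262.0, 330.0, 392.0]       # roughly G3, C4, E4, G4
chord = sum(np.sin(2 * np.pi * f * t) for f in notes_in)

# The strongest peaks of the Fourier transform recover the component notes.
spectrum = np.abs(np.fft.rfft(chord))
freqs = np.fft.rfftfreq(len(chord), d=1/rate)
top = np.sort(freqs[np.argsort(spectrum)[-4:]])
print(top)                                    # [196. 262. 330. 392.]
```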

I heard Jason Brown’s story on the Mathematical Moments podcast. In addition to the chord discussed above, Brown talks about other things he has discovered about the Beatles and about the relationship between music and math in general. Unfortunately, Mathematical Moments does not make it easy to link to individual episodes. Here is a link to a PDF file of show notes with the audio embedded. The file is slow to download, and your PDF viewer may not support it. Here’s a link directly to just the MP3 audio file.

The Mathematical Moments podcast also does not make it obvious that you can subscribe to the podcast; they only provide links to individual episodes with fat PDF files. However, you can subscribe by using the URL http://www.ams.org/rss/mathmoments.rss.

Update: Here’s a paper that goes into some details.

PNG vs JPEG

Bill the Lizard answered an image compression question on StackOverflow by pointing out the image below that shows the difference between PNG and JPEG compression when applied to line drawings.

The image comes from lbrandy.com. The left side of the image uses PNG compression, a lossless compression format. The right side uses JPEG, a lossy format that takes a Fourier-type transform of the image (a discrete cosine transform) and discards the highest-frequency components. JPEG can produce smaller files for natural photographic images, but for line drawings the artifacts of JPEG compression are noticeable.
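Here’s a minimal sketch of why that happens, using an 8×8 block and a discrete cosine transform the way JPEG does. It’s my own illustration, and real JPEG quantizes coefficients rather than simply zeroing them: smooth, photo-like content survives the loss of high frequencies, while a sharp edge, the essence of a line drawing, does not.

```python
import numpy as np
from scipy.fft import dctn, idctn

smooth = np.add.outer(np.arange(8.0), np.arange(8.0)) * 10   # gentle gradient
edge = np.zeros((8, 8))
edge[:, 4:] = 255                                            # sharp vertical edge

for name, block in [("smooth", smooth), ("edge", edge)]:
    coeffs = dctn(block, norm="ortho")
    i, j = np.indices((8, 8))
    coeffs[i + j > 4] = 0                      # discard the high-frequency terms
    approx = idctn(coeffs, norm="ortho")
    print(name, np.abs(approx - block).max())  # small error for smooth, large for edge
```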