The Fast Fourier Transform (FFT) and big data

The most direct way to compute a Fourier transform numerically takes O(n²) operations. The Fast Fourier Transform (FFT) can compute the same result in O(n log n) operations. If n is large, this can be a huge improvement.
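
To make the O(n²) versus O(n log n) contrast concrete, here is a minimal sketch of both approaches in Python. It assumes the input length is a power of 2, and the function names dft and fft are just illustrative, not taken from any particular library.

```python
import cmath

def dft(x):
    """Direct transform: roughly n^2 complex multiplications."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def fft(x):
    """Radix-2 Cooley-Tukey: split into even- and odd-indexed halves and
    recurse, giving roughly n log n operations. Assumes len(x) is a power of 2."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    # Combine the half-size transforms using the twiddle factors exp(-2*pi*i*k/n).
    t = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + t[k] for k in range(n // 2)] + \
           [even[k] - t[k] for k in range(n // 2)]

# The two should agree to floating point precision on a power-of-2-length input.
x = [0.0, 1.0, 2.0, 3.0]
print([round(abs(a - b), 12) for a, b in zip(dft(x), fft(x))])
```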

James Cooley and John Tukey (re)discovered the FFT in 1965. It was thought to be an original discovery at the time. Only later did someone find a sketch of the algorithm in the papers of Gauss.

Daniel Rockmore wrote the article on the Fast Fourier Transform in The Princeton Companion to Applied Mathematics. He says

In this new world of 1960s “big data,” a clever reduction in computational complexity (a term not yet widely in use) could make a tremendous difference.

Rockmore adds a very interesting footnote to the sentence above:

Many years later Cooley told me that he believed that the Fast Fourier transform could be thought of as one of the inspirations for asymptotic algorithmic analysis and the study of computational complexity, as previous to the publication of his paper with Tukey very few people had considered data sets large enough to suggest the utility of an asymptotic analysis.


2 thoughts on “The Fast Fourier Transform (FFT) and big data”

  1. There are several issues going on. For one, sometimes the person credited with a discovery really made the discovery. This is common and unremarkable.

    Another issue is that it’s a little fuzzy to tell when something was discovered. Gauss’ notes were unpublished, and as I understand it they didn’t exactly spell out the algorithm and its uses, though he had the core idea. Do you name something after the first person to express an idea in embryonic form (e.g. Bayes’ theorem) or the first person to formalize it (e.g. Riemann integration)?
