Computing Fourier series coefficients with the FFT

The Discrete Fourier Transform (DFT) is a mathematical function, and the Fast Fourier Transform (FFT) is an algorithm for computing that function. Since the DFT is almost always computed via the FFT, the distinction between the two is sometimes lost.

It is often not necessary to distinguish between the two. In my previous post, I used the two terms interchangeably because the distinction didn’t matter in that context.

Here I will make a distinction between the DFT and the FFT: I'll use DFT to refer to the DFT as it is defined in [1], and FFT for the DFT as computed by NumPy's FFT routine. The differences between the two are due to varying conventions, a recurring issue with Fourier transforms.

Suppose we have a function f on an interval [-A/2, A/2] and we sample f at N points where N is an even integer.

Define

 \begin{align*} \Delta x &= A/N \\ x_n &= n \Delta x \\ f_n &= f(x_n) \end{align*}

for n running from –N/2 + 1 to N/2.

The DFT of the sequence {fn} is defined in [1] as the sequence {Fk} where

F_k = \frac{1}{N}\sum_{n = -N/2 + 1}^{N/2} f_n \exp(-2 \pi i n k/N)

Now suppose the function f that we sampled has a Fourier series

f(x) = \sum_{k = -\infty}^\infty c_k \exp(2 \pi i k x/A)

The Fourier coefficients ck relate to the DFT output Fk according to the Discrete Poisson Summation Formula [1]:

F_k = c_k + \sum_{j=1}^\infty \left(c_{k+jN} + c_{k-jN}\right)

This means that the ck simply equal the Fk if f has no frequency components above the Nyquist frequency N/(2A), because then all the terms in the infinite sum above are zero. That is, if f is band-limited and we have sampled at more than twice its highest frequency, then we can simply read the Fourier coefficients off the DFT.

However, when f is not band-limited, or when it is band-limited but we have not sampled it finely enough, we will have aliasing. Our Fourier coefficients will differ from our DFT output by an error term involving high-frequency Fourier series coefficients.
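To see aliasing concretely, here is a small sketch (my own, using NumPy's FFT, which is discussed below, divided by N to match the normalization above). A sine wave of frequency 5 sampled at only N = 8 points is indistinguishable from a sine wave of frequency -3, so its energy shows up in the wrong coefficients.

    import numpy as np

    N = 8
    x = np.arange(N) / N                     # N samples of [0, 1)
    above_nyquist = np.sin(2*np.pi*5*x)      # frequency 5 > N/2 = 4
    alias         = np.sin(-2*np.pi*3*x)     # its alias, frequency -3

    print(np.allclose(above_nyquist, alias))     # True: the samples are identical
    print(np.round(np.fft.fft(above_nyquist)/N, 3))
    # the energy appears in bins 3 and 5 (i.e. k = 3 and k = -3), not at frequency 5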

In application it’s usually too much to hope that your function is exactly band-limited, but it may be approximately band-limited. The Fourier coefficients of smooth functions eventually decay rapidly (see details here) and so the error in approximating Fourier series coefficients by DFT terms is small.

Computing a DFT with the FFT

We defined the DFT of the sequence {fn} above to be the sequence {Fk} where

F_k = \frac{1}{N}\sum_{n = -N/2 + 1}^{N/2} f_n \exp(-2 \pi i n k/N)

and k runs from –N/2 + 1 to N/2.

NumPy, on the other hand, defines the DFT of the sequence {an} to be the sequence {Ak} where

A_k = \sum_{n = 0}^{N-1} a_n \exp(-2 \pi i n k/N)

and k runs from 0 to N-1.

Relative to the definition above, the NumPy definition differs in three ways:

  1. The normalization factor 1/N is missing.
  2. The input indices are numbered differently.
  3. The output is arranged differently.

The first difference is trivial to overcome: simply divide the FFT output by N.

The second difference is easy to deal with if we think of the inputs to the FFT as samples from a periodic function, which they usually are. The fn come from sampling a periodic function f over an interval [-A/2, A/2]. If we sample the same function over [0, A] we get the an. We have

\begin{align*} f(0) &= f_0 = a_0 \\ f(A/N) &= f_1 = a_1 \\ f(2A/N) &= f_2 = a_2 \\ \cdots \\ f(A/2) &= f_{N/2} = a_{N/2} \end{align*}

If we extend the fs and the as periodically past their original ranges of definition, then they all agree. But since we start our sampling in different places, our samples would be listed in different orders if we stored them by increasing index.

Something similar occurs with the output.

For 0 ≤ k ≤ N/2,

F_k = A_k.

But for N/2 < k < N,

F_{k-N} = A_k.
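Putting the three differences together, here is a small sketch (the helper function name is mine) of how one might read the Fk defined above off of NumPy's output.

    import numpy as np

    def Fk(samples, k):
        """Return F_k, for -N/2 < k <= N/2, given the samples a_0, ..., a_{N-1}."""
        N = len(samples)
        A = np.fft.fft(samples)
        # A_k is periodic in k with period N, so a negative k wraps around:
        # F_k = A_{k mod N} / N
        return A[k % N] / N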

Example

We’ll use a band-limited function in our example so that we find the Fourier coefficients exactly.

f(x) = 7 sin(6πx) – 5 cos(4πx)

We compute the FFT as follows.

    import numpy as np

    def f(x):
        return 7*np.sin(3*2*np.pi*x) - 5*np.cos(2*2*np.pi*x)

    N = 8
    delta = 1/N
    x = [f(n*delta) for n in range(N)]
    print( np.fft.fft(x)/N )

The output, cleaned up a bit, is

[0, 0, -2.5, -3.5i, 0, 3.5i, -2.5, 0]

This says F2 = F-2 = -5/2, and so the coefficient of cos(2·2πx) is -5.

Also F3 = -7i/2 and F-3 = 7i/2, so the coefficient of sin(3·2πx) is 7. These results follow from Euler’s equation

exp(iθ) = cos(θ) + i sin(θ)

[1] William L. Briggs and Van Emden Henson. The DFT: An Owner’s Manual for the Discrete Fourier Transform. SIAM 1995.

Support of a signal and its FFT

The previous post looked at the Fourier uncertainty principle. This post looks at an analogous result for the discrete Fourier transform.

The uncertainty principle for the (continuous) Fourier transform says a signal cannot be localized in both the time domain and the frequency domain. The more bunched up a function is, the more spread out its transform is.

The discrete analog of the Fourier uncertainty principle given in [1] says something similar, that a signal and its discrete Fourier transform cannot both be concentrated. The difference is in how concentration is measured.

In the previous post, concentration was measured in terms of variance, very similar to how variance is defined for random variables. But in the discrete case, we do something simpler: we count how many terms are non-zero.

The support of a function is the set of points where the function is not zero. A highly concentrated function is one with small support. The discrete Fourier uncertainty principle says that a non-zero signal and its Fourier transform cannot both have small support.

Specifically, let f be a sequence of length N and let F be its discrete Fourier transform. Then

| support of f | × | support of F | ≥ N.

where | S | is the number of elements in a set S.

As with the continuous case, we have a lower bound on the product of the uncertainty of something and its Fourier transform. The discrete form is simpler in that the lower bound does not depend on one’s convention for defining the Fourier transform. As with the continuous case, there are many conventions for defining the DFT. But these variations in convention don’t affect which entries are zero, and so they don’t affect the support sizes.
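Here is a minimal sketch of checking the inequality numerically with NumPy, using the two sequences discussed in the examples below. The helper names are mine, and entries smaller than a tolerance are treated as zero since the FFT introduces floating point noise.

    import numpy as np

    def support_size(v, tol=1e-9):
        return np.count_nonzero(np.abs(v) > tol)

    def uncertainty_product(x):
        x = np.asarray(x, dtype=complex)
        return support_size(x) * support_size(np.fft.fft(x))

    print(uncertainty_product([1, 0, 0, 1, 0, 0]))   # 6, equal to N: the bound is tight
    print(uncertainty_product([1, 0, 0, 0, 1, 0]))   # 12, a strict inequality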

Examples

As pointed out in [1], you can construct examples where the inequality above holds exactly. These are sequences with a constant value at evenly spaced positions and zeros elsewhere. If N = pq then the FFT of a sequence of 1s every p places will have non-zero elements every q places.

For example,

    >>> import numpy as np
    >>> np.fft.fft([1,0,0,1,0,0])
    array([2, 0, 2, 0, 2, 0])

In this instance, the original sequence has support of size 2, its transform has support of size 3, and the product of the support sizes is 6, the length of the sequence.

Note that multiplying a Python list (not a NumPy array) by an integer concatenates the list to itself that many times. The example above could be read as saying the FFT of [1, 0, 0]*2 is [2, 0]*3.

As another example, we can show that the FFT of [1, 0, 0, 0, 0]*3 is [3, 0, 0]*5.

The code below is complicated a bit by NumPy bookkeeping, but essentially it shows that the FFT of [1,0,0,0,0]*3 is [3,0,0]*5.

    >>> x = np.fft.fft([1, 0, 0, 0, 0]*3)
    >>> y = 3*np.array([1, 0, 0]*5, dtype=complex)
    >>> np.all(x == y)
    True

The lower bound on the product of support sizes is tight for multiples and rotations of sequences like those above.

For example,

    >>> np.fft.fft([0, 7, 0, 0, 7, 0])
    array([14. +0.j        ,  0. +0.j        , -7.-12.12435565j,
            0. +0.j        , -7.+12.12435565j,  0. +0.j        ])

Here we took the first example, changed the 1s to 7s, and rotated everything one position. The FFT is a little more complicated in this case, but it’s still the case that the input has support of size 2 and the output has support of size 3.

The only sequences for which the discrete uncertainty principle lower bound is tight are those like we’ve seen above: multiples and rotations of evenly spaced 1s. For any other sequences the inequality above is strict. For example, here’s what happens if our 1’s are not evenly spaced.

    >>> np.fft.fft([1, 0, 0, 0, 1, 0])
    array([2. +0.00000000e+00j, 0.5+8.66025404e-01j, 0.5-8.66025404e-01j,
           2. -5.55111512e-17j, 0.5+8.66025404e-01j, 0.5-8.66025404e-01j])

Now the input has support of size 2 and the output has support of size 6, and their product 12 is strictly greater than 6.


[1] David L. Donoho and Philip B. Stark. Uncertainty Principles and Signal Recovery. SIAM Journal on Applied Mathematics, June 1989, Vol. 49, No. 3, pp. 906-931

Fourier uncertainty principle

Heisenberg’s uncertainty principle says there is a limit to how well you can know both the position and momentum of anything at the same time.  The product of the uncertainties in the two quantities has a lower bound.

There is a closely related principle in Fourier analysis that says a function and its Fourier transform cannot both be localized. The more concentrated a signal is in the time domain, the more spread out it is in the frequency domain.

There are many ways to quantify how localized or spread out a function is, and corresponding uncertainty theorems. This post will look at the form closest to the physical uncertainty principle of Heisenberg, measuring the uncertainty of a function in terms of its variance. The Fourier uncertainty principle gives a lower bound on the product of the variance of a function and the variance of its Fourier transform.

Variance

When you read “variance” above you might immediately think of the variance of a random variable. Variance in Fourier analysis is related to variance in probability, but there’s a twist.

If f is a real-valued function of a real variable, its variance is defined to be

V(f) = \int_{-\infty}^\infty x^2 \, f(x)^2 \, dx

This is the variance of a random variable with mean 0 and probability density |f(x)|². The twist alluded to above is that f is not a probability density, but |f(x)|² is.

Since we said f is a real-valued function, we could leave out the absolute value signs and speak of f(x)² being a probability density. In quantum mechanics applications, however, f is complex-valued and |f(x)|² is a probability density. In other words, we multiply f by its complex conjugate, not by itself.

The Fourier variance defined above applies to any f for which the integral converges. It is not limited to the case of |f(x)|² being a probability density, i.e. when |f(x)|² integrates to 1.

Uncertainty principle

The Fourier uncertainty principle is the inequality

V(f) \, \, V(\hat{f}) \geq C\, ||f||_2^2 \,\,||\hat{f}||_2^2

where the constant C depends on your convention for defining the Fourier transform [1]. Here ||f||2² is the integral of f², the square of the L² norm.

Perhaps a better way to write the inequality is

\frac{V(f)}{\|f\|_2^2} \; \frac{V(\hat{f})}{\|\hat{f}\|_2^2} \geq C

for non-zero f. Rather than look at the variances per se, we look at the variances relative to the size of f. This form is scale invariant: if we multiply f by a constant, the numerators and denominators are multiplied by that constant squared.

The inequality is exact when f is proportional to a Gaussian probability density. And in this case the uncertainty is easy to interpret. If f is proportional to the density of a normal distribution with standard deviation σ, then its Fourier transform is proportional to the density of a normal distribution with standard deviation 1/σ, if you use the radian convention described in [1].

Example

We will evaluate both sides of the Fourier uncertainty principle with

    h[x_] := 1/(x^4 + 1)

and its Fourier transform

    g[w_] := FourierTransform[h[x], x, w]

We compute the variances and the squared norms with

    v0 = Integrate[x^2 h[x]^2, {x, -Infinity, Infinity}] 
    v1 = Integrate[w^2 g[w]^2, {w, -Infinity, Infinity}]
    n0 = Integrate[    h[x]^2, {x, -Infinity, Infinity}]
    n1 = Integrate[    g[w]^2, {w, -Infinity, Infinity}]

The results in mathematical notation are

\begin{align*} V(h) &= \frac{\pi}{4 \sqrt{2}} \\ V(\hat{h}) &= \frac{5\pi}{8\sqrt{2}} \\ || h ||_2^2 &= \frac{3\pi}{4 \sqrt{2}} \\ ||\hat{h}||_2^2 &= \frac{3\pi}{4 \sqrt{2}} \end{align*}

From here we can calculate that the ratio of V(h) V(ĥ) to ||h||2² ||ĥ||2² is 5/18, which is larger than the lower bound C = 1/4.
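As a numerical cross-check in Python rather than Mathematica (a sketch, assuming SciPy), we can avoid computing ĥ at all: with the convention used here, Parseval’s theorem gives ||ĥ||2² = ||h||2², and since differentiation corresponds to multiplication by iω, V(ĥ) equals the integral of h′(x)².

    import numpy as np
    from scipy.integrate import quad

    h  = lambda x: 1/(x**4 + 1)
    dh = lambda x: -4*x**3/(x**4 + 1)**2          # h'(x)

    V_h,   _ = quad(lambda x: x**2 * h(x)**2, -np.inf, np.inf)   # pi/(4 sqrt(2))
    n_h,   _ = quad(lambda x: h(x)**2,        -np.inf, np.inf)   # 3 pi/(4 sqrt(2))
    V_hat, _ = quad(lambda x: dh(x)**2,       -np.inf, np.inf)   # 5 pi/(8 sqrt(2))

    print(V_h * V_hat / n_h**2, 5/18)    # both are about 0.2778, above C = 1/4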

By the way, it’s not a coincidence that h and its Fourier transform have the same norm. That’s always the case. Or rather, that is always the case with the Fourier convention we are using here. In general, the L² norm of the Fourier transform is proportional to the L² norm of the function, where the proportionality constant depends on your definition of Fourier transform but not on the function.

Here is a page that lists the basic theorems of Fourier transforms under a variety of conventions.


[1] In the notation of the previous post C = 1/(4b²). That is, C = 1/4 if your definition has an exp(± i ω t) term and C = 1/(16π²) if your definition has an exp(±2 π i ω t) term.

Said another way, if you express frequency in radians, as is common in pure math, C = 1/4. But if you express frequency in Hertz, as is common in signal processing, C = 1/(16π²).

Fourier transform definitions also have varying constants outside the integral, e.g. possibly dividing by √2π, but this factor affects both sides of the Fourier uncertainty principle equally.

Fourier transforms in Mathematica

Unfortunately there are many slightly different ways to define the Fourier transform. So the first two questions when using Mathematica (or any other software) to compute Fourier transforms are what definition of Fourier transform does it use, and what to do if you want to use a different definition.

The answer to the first question is that Mathematica defines the Fourier transform of f as

\hat{f}(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^\infty \exp(i\omega x)\, f(x) \, dx

The answer to the second question is that Mathematica defines a parameterized Fourier transform by

\hat{f}(\omega) = \sqrt{\frac{|b|}{(2\pi)^{1-a}}} \, \int_{-\infty}^\infty \exp(ib\omega x)\, f(x) \, dx

where a defaults to 0 and b defaults to 1.

Examples

For example, if φ(x) = exp(-x²/2), then we can compute Mathematica’s default Fourier transform with

    φ[x_] := Exp[-x^2/2]
    FourierTransform[φ[x], x, ω]

This computes

 \hat{\varphi}(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^\infty \exp(i\omega x)\, \varphi(x) \, dx = \exp(-\omega^2/2)

But if we set (a, b) to (0, 2π) with

    FourierTransform[φ[x], x, ω, FourierParameters -> {0, 2 Pi}]

we compute

\hat{\varphi}(\omega) = \int_{-\infty}^\infty \exp(2\pi i\omega x)\, \varphi(x) \, dx = \sqrt{2\pi} \exp(-2\pi^2\omega^2)
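As a quick numerical sanity check of these two results, here is a sketch in Python with SciPy rather than Mathematica. Since φ is even, both transforms are real and we can integrate against a cosine.

    import numpy as np
    from scipy.integrate import quad

    phi = lambda x: np.exp(-x**2/2)
    w = 0.5   # an arbitrary test frequency

    default = quad(lambda x: np.cos(w*x)*phi(x), -np.inf, np.inf)[0] / np.sqrt(2*np.pi)
    scaled  = quad(lambda x: np.cos(2*np.pi*w*x)*phi(x), -np.inf, np.inf)[0]

    print(default, np.exp(-w**2/2))                            # the two should agree
    print(scaled,  np.sqrt(2*np.pi)*np.exp(-2*np.pi**2*w**2))  # and so should these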

Theorems under various conventions

Several years ago I got frustrated enough with the multitude of Fourier transform conventions that I made a sort of Rosetta Stone, giving several of the most basic theorems under each of eight definitions. I used the parameterization

\hat{f}(\omega) = \frac{1}{\sqrt{m}} \int_{-\infty}^\infty \exp(\sigma q i\omega x)\, f(x) \, dx

where m is either 1 or 2π, σ is either 1 or -1, and q is either 1 or 2π.

My σ and q are the sign and absolute value of Mathematica’s b. The relation between my m and Mathematica’s a is slightly more complicated and given in the table below.

\begin{tabular}{lllll}
\hline
\(\sigma\) & m & q & a & b\\
\hline
\textpm{} 1 & 1 & 1 & 1 & \textpm{} 1\\
\textpm{} 1 & 1 & 2\(\pi\) & 0 & \textpm{} 2\(\pi\)\\
\textpm{} 1 & 2\(\pi\) & 1 & 0 & \textpm{} 1\\
\textpm{} 1 & 2\(\pi\) & 2\(\pi\) & -1 & \textpm{} 2\(\pi\)\\
\hline
\end{tabular}

Zeros of trigonometric polynomials

A periodic function has at least as many real zeros as its lowest frequency Fourier component. In more detail, the Sturm-Hurwitz theorem says that

f(x) = \sum_{k=n}^N a_k \sin kx + b_k \cos kx

has at least 2n zeros in the interval [0, 2π) if an and bn are not both zero. You could take N to be infinity if you’d like.

Note that the lowest frequency term

a_n \sin nx + b_n \cos nx

can be written as

 c \sin(nx + \varphi)

for some amplitude c and phase φ as explained here. This function clearly has 2n zeros in each period. The remarkable thing about the Sturm-Hurwitz theorem is that adding higher frequency components can increase the number of zeros, but it cannot decrease the number of zeros.

To illustrate this theorem, we’ll look at a couple random trigonometric polynomials with n = 5 and N = 9 and see how many zeros they have. Theory says they should have at least 10 zeros.

The first has 16 zeros:

And the second has 12 zeros:

(It’s difficult to see just how many zeros there are in the plots above, but if we zoom in by limiting the vertical axis we can see the zeros more easily. For example, we can see that the second plot does not have a zero between 4 and 5; it almost reaches up to the x-axis but doesn’t quite make it.)

Here’s the code that made these plots.

    import matplotlib.pyplot as plt
    import numpy as np
    
    n = 5
    N = 9
    np.random.seed(20210114)
    
    for p in range(2):
        a = np.random.random(size=N+1)
        b = np.random.random(size=N+1)
    
        x = np.linspace(0, 2*np.pi, 200)
        y = np.zeros_like(x)
        for k in range(n, N+1):
            y += a[k]*np.sin(k*x) + b[k]*np.cos(k*x)
        plt.plot(x, y)
        plt.grid()
        plt.savefig(f"sturm{p}.png")
        plt.ylim([-0.1, 0.1])
        plt.savefig(f"sturm{p}zoom.png")
        plt.close()
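To count the zeros without squinting at a plot, here is a sketch (the function name is mine) that samples the polynomial densely and counts sign changes over one period. Zeros where the curve only touches the axis without crossing it would be missed, so this counts crossings.

    import numpy as np

    def count_sign_changes(a, b, n, N, num=100_000):
        x = np.linspace(0, 2*np.pi, num, endpoint=False)
        y = sum(a[k]*np.sin(k*x) + b[k]*np.cos(k*x) for k in range(n, N+1))
        y = np.append(y, y[0])      # wrap around, since the function has period 2 pi
        return np.count_nonzero(np.diff(np.sign(y)))

With the a and b arrays from the loop above, count_sign_changes(a, b, n, N) reports the number of zeros, which by the Sturm-Hurwitz theorem should always be at least 2n = 10.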

Cesàro summation

There’s more than one way to sum an infinite series. Cesàro summation lets you compute the sum of series that don’t have a sum in the classical sense.

Suppose we have an infinite series

a_0 + a_1 + a_2 + \cdots

The nth partial sum of the series is given by

S_n = \sum_{i=0}^n a_i

The classical sum of the series, if it exists, is defined to be the limit of its partial sums. That is,

\sum_{i=0}^\infty a_i\stackrel{\text{def}}{=} \lim_{n\to\infty} \sum_{i=0}^n a_i

Cesàro summation takes a different approach. Instead of taking the limit of the partial sums, it takes the limit of the averages of the partial sums. To be specific, define

C_n = \frac{S_0 + S_1 + S_2 + \cdots + S_n}{n+1}

and define the Cesàro sum of the series to be the limit of the Cn as n goes to infinity. If a series has a sum in the classical sense, it also has a sum in the Cesàro sense, and the two limits are the same. But some series that do not have a classical sum do have a Cesàro sum. Or maybe both limits exist but the intermediate steps of Cesàro summation are better behaved, as we’ll see in an example below.
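Here is a small sketch of the definition, using the standard example of Grandi’s series 1 − 1 + 1 − 1 + ⋯, which has no classical sum because its partial sums oscillate between 1 and 0, but has Cesàro sum 1/2.

    import numpy as np

    a = np.array([(-1)**i for i in range(1000)])    # a_i = (-1)^i
    S = np.cumsum(a)                                # partial sums S_n: 1, 0, 1, 0, ...
    C = np.cumsum(S) / np.arange(1, len(S) + 1)     # Cesaro means C_n

    print(S[-2:])    # still oscillating: [1 0]
    print(C[-1])     # approaching 1/2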

If you express the Cn in terms of the original an terms you get

C_n = \sum_{i=0}^n \frac{n-i+1}{n+1} a_i

In other words, the nth Cesàro partial sum is a reweighting of the classical partial sums, with the weights changing as a function of n. Note that for fixed i, the fraction multiplying ai goes to 1 as n increases.

Fejér summation and Gibbs phenomenon


Fejér summation is Cesàro summation applied to Fourier series. The (ordinary) partial sums of a Fourier series give the best approximation to a function as measured by least squares norm. But the Cesàro partial sums may be qualitatively more like the function being approximated. We demonstrate this below with a square wave.

The 30th ordinary partial sum shows the beginnings of Gibbs phenomenon, the “bat ears” at the top of the square wave and their mirror image at the bottom. The 30th Cesàro partial sum is smoother and eliminates Gibbs phenomena near the discontinuity in the square wave.
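Here is a sketch of my own code for the comparison, using the square wave that appears later in this collection (1 on [0, 1/2], −1 on [1/2, 1]) and its sine coefficients 4/(nπ) for odd n. The Fejér partial sum is taken to be the average of the first 31 ordinary partial sums.

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(0, 1, 1000)
    N = 30

    def partial_sum(n):
        # ordinary partial sum: sum of 4/(k pi) sin(2 pi k x) over odd k <= n
        s = np.zeros_like(x)
        for k in range(1, n + 1, 2):
            s += 4/(k*np.pi) * np.sin(2*np.pi*k*x)
        return s

    ordinary = partial_sum(N)
    fejer = np.mean([partial_sum(n) for n in range(N + 1)], axis=0)   # average of partial sums

    plt.plot(x, ordinary, label="ordinary partial sum")
    plt.plot(x, fejer, label="Fejér partial sum")
    plt.legend()
    plt.show()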


Gibbs phenomenon

I realized recently that I’ve written about generalized Gibbs phenomenon, but I haven’t written about its original context of Fourier series. This post will rectify that.

The image below comes from a previous post illustrating Gibbs phenomenon for a Chebyshev approximation to a step function.

Gibbs phenomena for Chebyshev interpolation

Although Gibbs phenomenon comes up in many different kinds of approximation, it was first observed in Fourier series, and not by Gibbs [1]. This post will concentrate on Fourier series, and will give an example to correct some wrong conclusions one might draw about Gibbs phenomenon from the most commonly given examples.

The uniform limit of continuous functions is continuous, and so the Fourier series of a function cannot converge uniformly where the function is discontinuous. But what does the Fourier series do near a discontinuity?

It’s easier to say what the Fourier series does exactly at a discontinuity. If a function is piecewise continuous, then the Fourier series at a jump discontinuity converges to the average of the limits from the left and from the right at that point.

What the Fourier series does on either side of the discontinuity is more interesting. You can see high-frequency oscillations on either side. The series will overshoot on the high side of the jump and undershoot on the low side of the jump.

The amount of overshoot and undershoot is proportional to the size of the gap, about 9% of the gap. The exact proportion, in the limit, is given by the Wilbraham-Gibbs constant

\frac{1}{\pi} \int_0^\pi \frac{\sin t}{t} \, dt - \frac{1}{2} = 0.0894898\ldots
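A quick numerical check of this constant, assuming SciPy’s sine integral:

    import numpy as np
    from scipy.special import sici

    si_pi, _ = sici(np.pi)          # Si(pi), the integral of sin(t)/t from 0 to pi
    print(si_pi/np.pi - 0.5)        # 0.0894898...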

Gibbs phenomenon is usually demonstrated with examples that have a single discontinuity at the end of their period, such as a square wave or a saw tooth wave. But Gibbs phenomenon occurs at every discontinuity, wherever located, no matter how many there are.

The following example illustrates everything we’ve talked about above. We start with the function f plotted below on [-π, π] and imagine it extended periodically.

Notice three things about f:

  1. It is continuous at the point where it repeats since it equals 0 at -π and π.
  2. It has two discontinuities inside [-π, π].
  3. One of the discontinuities is larger than the other.

The following plot shows the sum of the first 100 terms in the Fourier series for f plotted over [-2π, 2π].

Notice three things about this plot that correspond to the three observations about the function we started with:

  1. There is nothing remarkable about the series at -π and π.
  2. You can see Gibbs phenomenon at the discontinuities of f.
  3. The overshoot and undershoot are larger at the larger discontinuity.

Related to the first point above, note that the derivative of f is discontinuous at the period boundary. A discontinuity in the derivative does not cause Gibbs phenomenon.

Here’s a close-up plot that shows the wiggling near the discontinuities.


[1] Henry Wilbraham first described what we now call Gibbs phenomenon; Josiah Gibbs discovered it independently 50 years later. This is an example of Stigler’s law of eponymy.

Square wave, triangle wave, and rate of convergence

There are only a few examples of Fourier series that are relatively easy to compute by hand, and so these examples are used repeatedly in introductions to Fourier series. Any introduction is likely to include a square wave or a triangle wave [1].

By square wave we mean the function that is 1 on [0, 1/2] and -1 on [1/2, 1], extended to be periodic.

square wave

Its nth Fourier (sine) coefficient is 4/(nπ) for odd n and 0 otherwise.

By triangle wave we mean 1 – |2x – 1|, also extended to be periodic.

triangle wave

Its nth Fourier (cosine) coefficient is -4/(nπ)² for odd n and 0 otherwise.

This implies the Fourier coefficients for the square wave are O(1/n) and the Fourier coefficients for the triangle wave are O(1/n²). (If you’re unfamiliar with big-O notation, see these notes.)
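Here is a sketch of seeing these decay rates numerically, estimating the Fourier coefficients with the FFT as in the first post above. The magnitudes of the complex-exponential coefficients are half the sine and cosine coefficients, so we expect roughly 2/(nπ) for the square wave and 2/(nπ)² for the triangle wave at odd n.

    import numpy as np

    N = 2**12
    x = np.arange(N) / N
    square   = np.where(x < 0.5, 1.0, -1.0)
    triangle = 1 - np.abs(2*x - 1)

    sq = np.abs(np.fft.fft(square)/N)[:8]      # ~ 2/(n pi) at odd n, 0 at even n
    tr = np.abs(np.fft.fft(triangle)/N)[:8]    # ~ 2/(n pi)^2 at odd n; index 0 is the mean 1/2
    print(np.round(sq, 4))
    print(np.round(tr, 4))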

In general, the smoother a function is, the faster its Fourier coefficients approach zero. For example, we have the following two theorems.

  1. If a function f has k continuous derivatives, then the Fourier coefficients are O(1/n^k).
  2. If the Fourier coefficients of f are O(1/n^(k+1+ε)) for some ε > 0 then f has k continuous derivatives.

There is a gap between the two theorems so they’re not converses.

If a function is continuous but not differentiable, the Fourier coefficients cannot decay any faster than 1/n², so the Fourier coefficients for the triangle wave decay as fast as they can for a non-differentiable function.


[1] The third canonical example is the saw tooth wave, the function f(x) = x copied repeatedly to make a periodic function. The function rises then falls off a cliff to start over. This discontinuity makes it like the square wave: its Fourier coefficients are O(1/n).

Clipped sine waves

One source of distortion in electronic music is clipping. The highest and lowest portions of a wave form are truncated due to limitations of equipment. As the gain is increased, the sound doesn’t simply get louder but also becomes more distorted as more of the signal is clipped off.

Clipping 0.2

For example, here is what a sine wave looks like when clipped 20%, i.e. cut off to be between -0.8 and 0.8.

Sine clipped at 0.8

A simple sine wave has only one Fourier component, itself. But when we clip the sine wave, we move energy into higher frequency components. We can see that in the Fourier components below.

Fourier coefficients of sine clipped at 0.8

You can show by symmetry that the even-numbered coefficients are exactly zero.
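Here is a sketch of how such coefficients can be computed (the function name is mine): sample the clipped sine, take the FFT, and read off the sine coefficients.

    import numpy as np

    def clipped_sine_coefficients(clip, num_harmonics=10, N=1024):
        x = np.arange(N) / N
        signal = np.clip(np.sin(2*np.pi*x), -clip, clip)
        c = np.fft.fft(signal) / N
        # for this odd signal, the sine coefficient of harmonic k is -2 Im(c_k)
        return np.array([-2*c[k].imag for k in range(1, num_harmonics + 1)])

    print(np.round(clipped_sine_coefficients(0.8), 4))   # even-numbered harmonics are (numerically) zero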

Clipping 0.6

Here are the corresponding plots for 60% clipping, i.e. the absolute value of the signal is cut off to be 0.4. First the signal

Sine clipped at 0.4

and then its Fourier components.

Fourier coefficients of sine clipped at 0.4

Here are the first five sine waves with the amplitudes given by the Fourier coefficients.

Fourier components

And here we see how the sum of the sines above does a pretty good job of reconstructing the original clipped sine. We’d need an infinite number of Fourier components to exactly reconstruct the original signal, but the first five components do most of the work.

Adding up the first five Fourier components

Continuous range of clipping

Next let’s look at the ratio of the energy in the 3rd component to that of the 1st component as we continuously vary the amount of clipping.

Ratio of energy in 3rd harmonic to fundamental

Now for the 5th harmonic. This one is interesting because it’s not strictly increasing but rather has a little bump before it starts increasing.

Ratio of energy in 5th harmonic to fundamental

Finally, here’s the ratio of the energy in all higher frequencies to the energy in the fundamental.

Ratio of energy in all higher frequencies combined to fundamental


Building high frequencies out of low frequencies

If you have sines and cosines of some fundamental frequency, and you’re able to take products and sums, then you can construct sines and cosines at any multiple of the fundamental frequency.

Here’s a proof.

\begin{align*} \cos n\theta + i\sin n\theta &= \exp(in\theta) \\ &= \exp(i\theta)^n \\ &= (\cos\theta + i\sin\theta)^n \\ &= \sum_{j=0}^n {n\choose j} i^j(\cos\theta)^{n-j} (\sin\theta)^j \end{align*}

Taking real parts gives us cos nθ in the first equation and the even terms of the sum in the last equation.

Taking imaginary parts gives us sin nθ in the first equation and the odd terms of the sum in the last equation.
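Here is a quick numerical check of the identity for n = 3: expanding (cos θ + i sin θ)³ and collecting real and imaginary parts gives cos 3θ and sin 3θ.

    import numpy as np

    t = np.linspace(0, 2*np.pi, 100)
    c, s = np.cos(t), np.sin(t)

    cos3 = c**3 - 3*c*s**2        # even-j terms of the binomial sum
    sin3 = 3*c**2*s - s**3        # odd-j terms

    print(np.allclose(cos3, np.cos(3*t)), np.allclose(sin3, np.sin(3*t)))   # True True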

A radio may use this technique to create signals with frequencies higher than the frequency of its oscillator.
