The Central Limit Theorem says that if you average enough independent copies of a random variable, the result has a nearly normal (Gaussian) distribution. That is, of course, a very rough statement. What are the theorem's precise requirements? That question took two centuries to resolve. You can see the final answer here.

The first version of the Central Limit Theorem appeared in 1733, but necessary and sufficient conditions weren’t known until 1935. I won’t recap the entire history here. I just want to comment briefly on how the Central Limit Theorem began and how different the historical order of events was from the typical order of presentation.

A typical probability course might proceed as follows.

- Define the normal distribution.
- State and prove a special case of the Central Limit Theorem.
- Present the normal approximation to the binomial as a corollary.

This is the opposite of the historical order of events.

Abraham de Moivre discovered he could approximate binomial distribution probabilities using the integral of exp(-x^{2}) and proved an early version of the Central Limit Theorem in 1733. At the time, there was no name given to his integral. Only later did anyone think of exp(-x^{2}) as the density of a probability distribution. De Moivre certainly didn’t use the term “Gaussian” since Gauss was born 44 years after de Moivre’s initial discovery. De Moivre also didn’t call his result the “Central Limit Theorem.” George Pólya gave the theorem that name in 1920 as it was approaching its final form.
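De Moivre's idea can be sketched in a few lines of modern code: approximate a binomial tail probability with the standard normal CDF (itself a scaled integral of exp(-x²)). The function names below are illustrative, not from any particular library, and the continuity correction is a later refinement de Moivre did not use in this form.

```python
import math

def binom_cdf(n, p, k):
    """Exact P(X <= k) for X ~ Binomial(n, p), by direct summation."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def normal_cdf(x):
    """Standard normal CDF, written in terms of the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def normal_approx(n, p, k):
    """Normal approximation to P(X <= k), with a continuity correction."""
    mu = n * p
    sigma = math.sqrt(n * p * (1 - p))
    return normal_cdf((k + 0.5 - mu) / sigma)

# For n = 100 fair coin flips, the two probabilities agree closely.
print(binom_cdf(100, 0.5, 55))
print(normal_approx(100, 0.5, 55))
```

For n = 100 and p = 1/2 the exact and approximate values agree to about three decimal places, which is the kind of agreement that made the approximation useful before computers.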

For more details, see *The Life and Times of the Central Limit Theorem*.

**Related links**:

- Sums of uniform random variables
- Quantifying the error in the central limit theorem
- Three central limit theorems
