# Modes of convergence

Real analysis studies four basic modes of convergence for a sequence of
functions f_{n} on a measure space Ω with measure
μ.

**Almost everywhere** (AE). The sequence f_{n} converges almost everywhere to f if f_{n}(x) converges to f(x) for all x except possibly a set of measure zero.

**Almost uniform** (AU). The sequence f_{n} converges almost uniformly to f if for every ε > 0 there exists a set A_{ε} of measure less than ε such that f_{n} converges uniformly to f on the complement of A_{ε}.

**L^p** (LP). The sequence f_{n} converges to f in *L^p* norm if

∫ |f_{n} − f|^p dμ → 0 as n → ∞.

**In measure** (M). The sequence f_{n} converges to f in measure if for every ε > 0,

μ({x : |f_{n}(x) − f(x)| > ε}) → 0 as n → ∞.

In probability and statistics, convergence in measure is called**convergence in probability**.
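
The definition of convergence in measure can be checked numerically. As an illustration (the sequence f_{n}(x) = x^n on [0, 1] with Lebesgue measure is our own choice, not from the text above), the measure of the set where f_{n} exceeds ε shrinks to zero:

```python
import numpy as np

# Illustrative sketch: f_n(x) = x**n on Ω = [0, 1] with Lebesgue measure.
# f_n → 0 in measure because μ({x : |f_n(x)| > ε}) = 1 - ε**(1/n) → 0.
def measure_where_large(n, eps, grid_size=1_000_000):
    """Approximate μ({x in [0, 1] : x**n > eps}) by the fraction of grid points."""
    x = np.linspace(0, 1, grid_size, endpoint=False)
    return np.mean(x**n > eps)

for n in [1, 10, 100, 1000]:
    print(n, measure_where_large(n, 0.5))  # shrinks toward 0 as n grows
```

For this sequence the set in question is the interval (ε^(1/n), 1], so the grid estimate can be compared against the exact measure 1 − ε^(1/n).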

This page summarizes and diagrams the relations between these four modes of convergence in general, in a finite measure space, and under dominated convergence. Then counterexamples are given that show no more relationships exist in general.

## Diagrams

The relationships between the various modes of convergence can be summarized in the diagram below. A solid line means that convergence in the mode at the tail of the arrow implies convergence in the mode at the head. A dashed line means that convergence in the mode at the tail of the arrow implies the existence of a subsequence that converges in the mode at the head of the arrow. (The idea for this kind of diagram came from Elements of Integration by Robert Bartle, 1966.)

### General measure spaces

The first diagram shows the general relationships between the four modes of convergence.

### Finite measure spaces

Now suppose μ(Ω) < ∞. For finite measure spaces, almost everywhere and almost uniform convergence are equivalent. Convergence in measure is the weakest form of convergence since it is implied by the other forms. The following diagram summarizes the relationships between the four modes of convergence for finite measure spaces.
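
The equivalence of almost everywhere and almost uniform convergence on a finite measure space can be made concrete. In the sketch below (the example f_{n}(x) = x^n on [0, 1] is an assumption of ours), f_{n} converges to 0 everywhere except at x = 1, and removing a set of measure ε restores uniform convergence:

```python
# Sketch of almost uniform convergence on the finite measure space [0, 1],
# using the assumed example f_n(x) = x**n, which converges to 0 a.e.
# Removing A_eps = (1 - eps, 1], a set of measure eps, leaves uniform
# convergence: the sup of x**n over [0, 1 - eps] is (1 - eps)**n → 0.
def sup_off_exceptional_set(n, eps):
    return (1 - eps) ** n

for n in [10, 100, 1000]:
    print(n, sup_off_exceptional_set(n, 0.01))  # tends to 0
```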

### Dominated convergence

If Ω is a general measure space, but the sequence f_{n} is uniformly
dominated by an *L^p* function g, then more relationships exist, as
summarized in the diagram below.

Note that even though we do not require Ω to be a finite measure space, all the convergence relationships for finite measure spaces continue to hold, as well as two new ones.

In this setting, almost everywhere and almost uniform convergence are
equivalent. Also, *L^p* convergence and convergence in measure are equivalent.
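
As a small numerical check (the sequence is an illustrative choice of ours), f_{n}(x) = x^n on [0, 1] is dominated by the constant function 1, which lies in *L^p*, and its *L^1* norm goes to 0 along with almost everywhere convergence, as the extra arrows predict:

```python
import numpy as np

# Dominated convergence sketch: f_n(x) = x**n on [0, 1], dominated by g ≡ 1.
# f_n → 0 a.e., and the L^1 norm ∫ x**n dx = 1/(n + 1) → 0 as well.
def l1_norm(n, grid_size=1_000_000):
    x = np.linspace(0, 1, grid_size, endpoint=False)
    return np.mean(x**n)  # Riemann-sum estimate of ∫_0^1 x**n dx

for n in [1, 10, 100]:
    print(n, l1_norm(n), 1 / (n + 1))  # estimate vs. exact value
```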

## Counterexamples

Three counterexamples suffice to show that the diagrams above are complete. When an interval is referred to as a function, the function is the indicator function of that interval.

First, consider the functions [0, 1], [0, 1/2], [1/2, 1], [0, 1/3], [1/3, 2/3], etc. The sequence f_{n} converges to 0 in measure and in *L^p*. However, there is no x for which f_{n}(x) converges to 0, since f_{n}(x) = 1 infinitely often, and so f_{n} converges neither almost everywhere nor almost uniformly. Note that in this case Ω = [0, 1] is a finite measure space, and the constant function 1 is an *L^p* bound on the sequence.

Next, consider the functions f_{n} = n [1/n, 2/n]. The sequence f_{n} converges pointwise to 0 everywhere. It converges almost uniformly and converges in measure. However, the *L^1* norm of f_{n} is 1 for all n, and the *L^p* norm n^(1 − 1/p) is no smaller for p > 1, so no subsequence converges to 0 in *L^p* norm. Note again that Ω = [0, 1] is a finite measure space in this example.
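
The norm computation in the moving-spike example can be sketched numerically (the grid approximation below is ours):

```python
import numpy as np

# Moving spike f_n = n * indicator([1/n, 2/n]) on [0, 1]: pointwise limit 0,
# but ∫ f_n dμ = n * (1/n) = 1 for every n, so the L^1 norm never decays.
def spike_l1_norm(n, grid_size=1_000_000):
    x = np.linspace(0, 1, grid_size, endpoint=False)
    f = np.where((x >= 1 / n) & (x < 2 / n), n, 0.0)
    return f.mean()  # Riemann-sum estimate of ∫_0^1 f_n dμ

for n in [10, 100, 1000]:
    print(n, spike_l1_norm(n))  # stays near 1
```
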
Finally, let f_{n} be [n, n+1]. Then f_{n}
converges to 0 everywhere, but f_{n} does not converge in measure.
In this case Ω is not a finite measure space.
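
Since the sets involved are just unit intervals, the failure of convergence in measure here reduces to arithmetic, sketched below with exact interval lengths rather than a numerical grid:

```python
# f_n = indicator([n, n+1]) on Ω = ℝ with Lebesgue measure (infinite total mass).
# For 0 < eps < 1 the set {x : |f_n(x)| > eps} is exactly [n, n+1], whose
# measure is 1 for every n, so f_n does not converge to 0 in measure,
# even though f_n(x) → 0 for each fixed x (f_n(x) = 0 once n > x).
def measure_where_spike_large(n, eps):
    return 1.0 if 0 < eps < 1 else 0.0  # length of [n, n+1] when the set is nonempty

print(all(measure_where_spike_large(n, 0.5) == 1.0 for n in range(100)))
```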

## Probability

In probability theory, almost everywhere convergence is called **almost certain convergence** or **almost sure convergence**.
Convergence in measure is called **convergence in probability**.
The measure space in question is always finite because probability measures
assign probability 1 to the entire space.

In a finite measure space, almost everywhere convergence implies convergence in measure. Therefore almost sure convergence implies convergence in probability.

Convergence in measure is the weakest form of convergence generally studied
in analysis, but in probability theory there is an even weaker form of
convergence, convergence in distribution. A sequence of random variables
**converges in distribution** if their corresponding
distribution functions converge pointwise. Convergence in probability
implies convergence in distribution.

The weak law of large numbers is an example of convergence in probability. The strong law of large numbers is an example of almost certain convergence. The central limit theorem is an example of convergence in distribution.
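
Convergence in distribution can be illustrated with a Monte Carlo sketch of the central limit theorem (the sample sizes and trial counts below are arbitrary choices of ours): the CDF of the standardized mean of uniform random variables approaches the standard normal CDF pointwise.

```python
import math
import random

def standard_normal_cdf(t):
    return 0.5 * (1 + math.erf(t / math.sqrt(2)))

def empirical_cdf_at(t, n, trials=20_000, seed=0):
    """Estimate P(Z_n <= t) where Z_n is the standardized mean of n uniforms."""
    rng = random.Random(seed)
    count = 0
    for _ in range(trials):
        mean = sum(rng.random() for _ in range(n)) / n
        z = (mean - 0.5) * math.sqrt(12 * n)  # Var of uniform(0, 1) is 1/12
        count += z <= t
    return count / trials

for t in [-1.0, 0.0, 1.0]:
    print(t, empirical_cdf_at(t, n=30), standard_normal_cdf(t))
```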

## Other mathematical diagrams

See this page for more diagrams on this site including diagrams for probability and statistics, analysis, topology, and category theory.