Suppose you’re drawing random samples uniformly from some interval. How likely are you to see a new value outside the range of values you’ve already seen?

The problem is more interesting when the interval is unknown. You may be trying to estimate the endpoints of the interval by taking the max and min of the samples you’ve drawn. But in fact we might as well assume the interval is [0, 1], because the probability of a new sample falling within the previous sample range does not depend on the interval: the location and scale of the interval cancel out of the calculation.
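
A quick simulation makes the invariance concrete. This is a sketch of my own, not part of the original argument; it assumes NumPy is available, and the helper name `prob_inside` is purely illustrative:

```python
import numpy as np

def prob_inside(a, b, n, trials=100_000, seed=0):
    """Estimate the probability that one more uniform(a, b) sample
    falls inside the range of n previous samples."""
    rng = np.random.default_rng(seed)
    samples = rng.uniform(a, b, size=(trials, n))
    nxt = rng.uniform(a, b, size=trials)
    inside = (nxt > samples.min(axis=1)) & (nxt < samples.max(axis=1))
    return inside.mean()

print(prob_inside(0, 1, 10))   # ≈ 0.818
print(prob_inside(3, 10, 10))  # ≈ 0.818 as well: the interval doesn't matter
```

The two estimates agree, and both match the (*n*-1)/(*n*+1) formula derived next.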

Suppose we’ve taken *n* samples so far. The range of these samples is the difference between the *n*th and the 1st order statistics, and for a uniform distribution this difference has a beta(*n*-1, 2) distribution. Since a beta(*a*, *b*) distribution has mean *a*/(*a*+*b*), the expected value of the sample range from *n* samples is (*n*-1)/(*n*+1). This is also the probability that the next sample, or any particular future sample, will lie within the range of the samples seen so far: conditional on the observed range, a new uniform sample lands inside it with probability equal to the range’s length, so the unconditional probability is the expected range.
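
The distributional claim is easy to check numerically too. A minimal sketch, assuming NumPy and SciPy:

```python
import numpy as np
from scipy import stats

n = 10
rng = np.random.default_rng(1)
samples = rng.uniform(size=(100_000, n))
ranges = samples.max(axis=1) - samples.min(axis=1)

# Empirical mean of the range vs. the theoretical (n-1)/(n+1)
print(ranges.mean(), (n - 1) / (n + 1))  # both ≈ 0.8182

# Kolmogorov–Smirnov comparison with the beta(n-1, 2) distribution;
# the KS statistic should be near zero for 100,000 samples.
print(stats.kstest(ranges, stats.beta(n - 1, 2).cdf))
```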

If you’re trying to estimate the size of the total interval, this says that after *n* samples, the probability that the next sample gives you any new information is 2/(*n*+1), since a sample only tells you something new when it falls below the minimum or above the maximum seen so far. After 99 samples, for example, the probability that the next sample extends the known range is only 2/100 = 0.02.
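
The 2/(*n*+1) figure can be confirmed the same way, by counting how often sample *n*+1 falls outside the extremes of the first *n*. Again a sketch assuming NumPy:

```python
import numpy as np

n = 10
rng = np.random.default_rng(2)
samples = rng.uniform(size=(100_000, n + 1))
prior, nxt = samples[:, :n], samples[:, n]

# New information: the extra sample extends the observed range
new_info = (nxt < prior.min(axis=1)) | (nxt > prior.max(axis=1))
print(new_info.mean(), 2 / (n + 1))  # both ≈ 0.1818
```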