When do moments determine a function?


The use of the word “moment” in mathematics is related to its use in physics, as in moment arm or moment of inertia. For a non-negative integer n, the nth moment of a function f is the integral of xⁿ f(x) over the function’s domain.

Uniqueness

If two continuous functions f and g have all the same moments, are they the same function? The answer is yes for functions over a finite interval, but no for functions over an unbounded interval. The classic counterexample on (0, ∞) perturbs the lognormal density in a way that leaves every moment unchanged.
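Here’s a quick numerical check of that counterexample, a sketch assuming Python with SciPy: the perturbed density g is visibly different from the lognormal density f, yet their moments agree.

    import numpy as np
    from scipy.integrate import quad

    # Lognormal density on (0, inf) and a perturbation with the same moments
    f = lambda x: np.exp(-np.log(x)**2 / 2) / (x * np.sqrt(2 * np.pi))
    g = lambda x: f(x) * (1 + 0.5 * np.sin(2 * np.pi * np.log(x)))

    for n in range(4):
        mf, _ = quad(lambda x: x**n * f(x), 0, np.inf)
        mg, _ = quad(lambda x: x**n * g(x), 0, np.inf)
        print(n, mf, mg)  # the moments agree to integration accuracy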

Existence

Now let’s consider starting with a set of moments rather than starting with a function. Given a set of moments m₀, m₁, m₂, … is there a function that has these moments? Typically no.

A better question is what conditions on the moments are necessary for there to exist a function with these moments. This question breaks into three questions

  1. The Hausdorff moment problem
  2. The Stieltjes moment problem
  3. The Hamburger moment problem

depending on whether the function domain is a finite interval, a half-bounded interval, or the real line. For each problem there are known conditions that are necessary and sufficient, but the conditions are different for each problem.

Interestingly, each of the three names Hausdorff, Stieltjes, and Hamburger is well known. Felix Hausdorff is best known for his work in topology: Hausdorff spaces, etc. Thomas Stieltjes is best known for the Riemann-Stieltjes integral, and for his work on continued fractions. Hans Ludwig Hamburger is not as well known, though his last name is certainly familiar.

Finite moments

A practical question in probability is how well a finite number of moments determine a probability distribution. They cannot uniquely determine the distribution, but they do establish bounds on how different two distributions with the same moments can be. See this post.


RNG, PRNG, CSPRNG

Most random number generators are pseudorandom number generators (PRNGs). The distinction may be pedantic or crucial, depending on context. In the context of cryptography, it’s critical.

For this post, RNG will mean a physical, true random number generator.

A PRNG may be suitable for many uses—Monte Carlo simulation, numerical integration, game development, etc.—but not be suitable for cryptography. A PRNG that is suitable for cryptography is called a CSPRNG (cryptographically secure pseudorandom number generator).

A PRNG may have excellent statistical properties, and pass standard test suites for random number generators, and yet be insecure. The output of an insecure generator may show no regularities detectable by statistical tests, and yet have regularities that a sneaky cryptanalyst can exploit. For example, the popular Mersenne Twister is fine for simulations, but its output can be predicted after a relatively short run: 624 consecutive 32-bit outputs determine the entire internal state. The prediction depends on a clever set of calculations that would be unnatural from a statistical perspective, which is why its statistical performance is better than its cryptographic performance.
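To make this concrete, here is a sketch (my illustration, not from the post) that clones Python’s random module, which uses the Mersenne Twister. Inverting the tempering transform on 624 consecutive 32-bit outputs recovers the internal state; the reseed at the top keeps the observed outputs aligned with a fresh block of state, an assumption this sketch relies on.

    import random

    def undo_xor_rshift(y, shift):
        # Invert y = x ^ (x >> shift) for a 32-bit word
        x = y
        for _ in range(32 // shift):
            x = y ^ (x >> shift)
        return x

    def undo_xor_lshift_mask(y, shift, mask):
        # Invert y = x ^ ((x << shift) & mask) for a 32-bit word
        x = y
        for _ in range(32 // shift):
            x = y ^ ((x << shift) & mask)
        return x & 0xFFFFFFFF

    def untemper(y):
        # Invert MT19937's output tempering, recovering a raw state word
        y = undo_xor_rshift(y, 18)
        y = undo_xor_lshift_mask(y, 15, 0xEFC60000)
        y = undo_xor_lshift_mask(y, 7, 0x9D2C5680)
        y = undo_xor_rshift(y, 11)
        return y

    random.seed()  # fresh state, so the outputs below start a new block of 624

    # Observe 624 consecutive 32-bit outputs and reconstruct the state
    outputs = [random.getrandbits(32) for _ in range(624)]
    state = [untemper(y) for y in outputs]

    clone = random.Random()
    clone.setstate((3, tuple(state) + (624,), None))

    # The clone now predicts the generator's future output exactly
    assert [random.getrandbits(32) for _ in range(10)] == \
           [clone.getrandbits(32) for _ in range(10)]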

CSPRNGs tend to be much slower than PRNGs, so you pay for security. And for a non-cryptographic application this cost isn’t worth it.

In general, statistical tests give necessary but not sufficient conditions for a PRNG to be a CSPRNG. If a PRNG fails statistical tests, it has some sort of regularity that potentially could be exploited by cryptanalysis. I had to tell a client once that although his PRNG passed the standard statistical tests, I was pretty sure I could break the cryptographic system he wanted to use it in. This news was not well received.


I suspect that a physical RNG with good statistical properties will have good cryptographic properties as well, in contrast to the situation for PRNGs. Cloudflare famously uses lava lamps to generate random bits for TLS keys. Cryptanalysts have exploited minor flaws in PRNGs, and so the lava lamps give Cloudflare one less thing to worry about. (I’m sure they still have plenty else to worry about.)

A physical RNG might fail statistical tests. For example, maybe the physical process is truly random but biased. Or maybe the process of turning physical phenomena into numbers introduces some bias. But it’s hard to imagine that an RNG with a clean bill of statistical health could have a cryptographically exploitable weakness. It’s conceivable that a statistically impeccable physical RNG has some unforeseen exploitable regularity, but it seems highly doubtful.


Testing random number generators

Random number generators are subtle. Unless the generator is a physical device, a random number generator (RNG) is technically a pseudorandom number generator (PRNG), a deterministic algorithm designed to mimic randomness.

Suppose you have a PRNG that produces the digits 0 through 9. How might you test the output to see whether it (acts like it) is random? An obvious test would be to see how often each digit is produced. As the number of samples n increases, you’d expect the frequency of each digit to approach n/10.

Starting with χ²

If your “random” number generator simply cycles through the digits 0 to 9 in order, the frequencies will match expectations. In fact, they will match too well. A two-sided χ² test will catch this problem. The χ² will be too small, indicating a suspiciously even distribution.
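Here’s what that looks like in code, a sketch assuming SciPy, with the cycling generator just described:

    import numpy as np
    from scipy.stats import chi2

    n = 100_000
    digits = np.arange(n) % 10          # "generator" cycling through 0-9

    observed = np.bincount(digits, minlength=10)
    expected = n / 10
    chisq = ((observed - expected)**2 / expected).sum()

    # Under the hypothesis of uniform random digits, chisq ~ chi2(9).
    # A two-sided test also rejects when chisq is suspiciously small.
    print(chisq)                        # 0.0: far too even to be random
    print(chi2.cdf(chisq, df=9))        # probability of a value this small or smaller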

Nick Lord [1] gives a construction that has a much more subtle pattern. In his example, the frequencies also converge to the expected values, but the χ² statistic diverges to ∞ as n increases. So rather than producing too small a χ² value, his example produces too large a value. This shows that the χ² test is a stronger test than simply looking at frequencies, but it’s only a start.

RNG testing service

There are more sophisticated tests, and standard suites of tests: DIEHARDER, NIST STS, TestU01, etc. We’ve run these test suites for several clients. More on that here.

It’s curious how this work comes in spurts. We had a run of clients wanting RNG testing, then nothing for a long time, and now we have a new RNG testing project in the queue.


[1] Nick Lord. A Chi-Square Nightmare. The Mathematical Gazette, Vol. 76, No. 476 (July 1992), p. 274.

The Cauchy distribution’s counter-intuitive behavior

Someone with no exposure to probability or statistics likely has an intuitive sense that averaging random variables reduces variance, though they wouldn’t state it in those terms. They might, for example, agree that the average of several test grades gives a better assessment of a student than a single test grade. But data from a Cauchy distribution doesn’t behave this way.

Averages and scaling

If you have four independent random variables, each normally distributed with the same scale parameter σ, then their average is also normally distributed but with scale parameter σ/2.

If you have four independent random variables, each Cauchy distributed with the same scale parameter σ, then their average is also Cauchy distributed but with the exact same scale parameter σ.

So the normal distribution matches common intuition, but the Cauchy distribution does not.

In the case of random variables with a normal distribution, the scale parameter σ is also the standard deviation. In the case of random variables with a Cauchy distribution, the scale parameter σ is not the standard deviation because Cauchy random variables don’t have a variance, so they don’t have a standard deviation.
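A simulation makes the contrast vivid. The following sketch (NumPy assumed) compares interquartile ranges rather than variances, since Cauchy samples have no variance:

    import numpy as np

    rng = np.random.default_rng(20250101)

    def iqr(x):
        q75, q25 = np.percentile(x, [75, 25])
        return q75 - q25

    normal = rng.standard_normal((100_000, 4))
    cauchy = rng.standard_cauchy((100_000, 4))

    # Averaging 4 normals halves the spread; averaging 4 Cauchys doesn't help.
    print(iqr(normal[:, 0]), iqr(normal.mean(axis=1)))  # about 1.35 vs 0.67
    print(iqr(cauchy[:, 0]), iqr(cauchy.mean(axis=1)))  # about 2.0 vs 2.0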

Modeling

Some people object that nothing really follows a Cauchy distribution because the Cauchy distribution has no mean or variance. But nothing really follows a normal distribution either. All probability distributions are idealizations. The question of any probability distribution is whether it adequately captures the aspect of reality it is being used to model.

Mean

Suppose some phenomenon appears to behave like it has a Cauchy distribution, with no mean. Alternately, suppose the phenomenon has a mean, but this mean is so variable that it is impossible to estimate. There’s no practical difference between the two.

Variance

And in the alternate case, suppose there is a finite variance, but the variance is so large that it is impossible to estimate. If you take the average of four observations, the result is still so variable that the variance is impossible to estimate. You’ve cut the theoretical standard deviation in half, but that makes no difference. Again this is practically indistinguishable from a Cauchy distribution.

Truncating

Now suppose you want to tame the Cauchy distribution by throwing out samples with absolute value greater than M. Now you have a truncated Cauchy distribution, and it has finite mean and variance.

But how do you choose M? If you don’t have an objective reason to choose a particular value of M, you would hope that your choice doesn’t matter too much. And that would be the case for a thin-tailed probability distribution like the normal, but it’s not true of the Cauchy distribution.

The variance of the truncated distribution is roughly proportional to M, so by choosing M you choose the variance. If you double your cutoff for outliers that are to be discarded, you approximately double the variance of what’s left. Your choice of M matters a great deal.
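Here’s a quick numerical check of that claim, a sketch assuming SciPy:

    import numpy as np
    from scipy.integrate import quad

    def truncated_var(M):
        # Variance of a standard Cauchy truncated to [-M, M]
        density = lambda x: 1 / (np.pi * (1 + x**2))
        mass, _ = quad(density, -M, M)
        second, _ = quad(lambda x: x**2 * density(x), -M, M)
        return second / mass  # the mean is 0 by symmetry

    for M in [10, 20, 40, 80]:
        print(M, truncated_var(M))  # the variance roughly doubles with M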


The negative binomial distribution and Pascal’s triangle

The Poisson probability distribution gives a simple, elegant model for count data. You can even derive from certain assumptions that data must have a Poisson distribution. Unfortunately reality doesn’t often go along with those assumptions.

A Poisson random variable with mean λ also has variance λ. But it’s often the case that data that would seem to follow a Poisson distribution has a variance greater than its mean. This phenomenon is called over-dispersion: the dispersion (variance) is larger than a Poisson distribution assumption would allow.

One way to address over-dispersion is to use a negative binomial distribution. This distribution has two parameters, r and p, and has the following probability mass function (PMF).

P(X = x) = \binom{r + x - 1}{x} p^r(1-p)^x

As the parameter r goes to infinity, with the mean held fixed, the negative binomial distribution converges to a Poisson distribution. So you can think of the negative binomial distribution as a generalization of the Poisson distribution.
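Here’s a numerical check of the convergence, a sketch assuming SciPy, where setting p = r/(r + λ) holds the mean at λ as r grows:

    import numpy as np
    from scipy.stats import nbinom, poisson

    lam = 4.0
    x = np.arange(20)
    for r in [1, 10, 100, 1000]:
        p = r / (r + lam)   # mean r(1-p)/p = lam; variance lam*(1 + lam/r) > lam
        err = np.max(np.abs(nbinom.pmf(x, r, p) - poisson.pmf(x, lam)))
        print(r, err)       # the maximum PMF difference shrinks as r grows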

These notes go into the negative binomial distribution in some detail, including where its name comes from.

If the parameter r is a positive integer, then the binomial coefficients in the PMF for the negative binomial distribution lie along the rth diagonal of Pascal’s triangle.

Pascal's triangle

The case r = 1 corresponds to the first diagonal, the one consisting of all 1s. The case r = 2 corresponds to the second diagonal, consisting of consecutive integers. The case r = 3 corresponds to the third diagonal, the one consisting of triangular numbers. And so forth.
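A quick check of the correspondence in plain Python:

    from math import comb

    # Coefficients comb(x + r - 1, x) from the PMF, for x = 0, 1, 2, ...
    for r in range(1, 5):
        print(r, [comb(x + r - 1, x) for x in range(8)])
    # r=1: all 1s, r=2: 1 2 3 4 ..., r=3: 1 3 6 10 ... (triangular numbers)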


Variance matters more than mean in the extremes

Suppose you have two normal random variables, X and Y, and that the variance of X is less than the variance of Y.

Let M be an equal mixture of X and Y. That is, to sample from M, you first choose X or Y with equal probability, then draw a sample from the random variable you chose.

Now suppose you’ve observed an extreme value of M. Then it is more likely that the value came from Y. The means of X and Y don’t matter, other than determining the cutoff for what “extreme” means.

High-level math

To state things more precisely, there is some value t such that the posterior probability that a sample m from M came from Y, given that |m| > t, is greater than the posterior probability that m came from X.

Let’s just look at the right-hand tails, even though the principle applies to both tails. If X and Y have the same variance, but the mean of X is greater, then larger values of M are more likely to have come from X. Now suppose the variance of Y is larger. As you go further out in the right tail of M, the posterior probability of an extreme value having come from Y increases, and eventually it surpasses the posterior probability of the sample having come from X. If X has a larger mean than Y, that delays the point at which the posterior probability of Y passes the posterior probability of X, but eventually variance matters more than mean.

Detailed math

Let’s give a name to the random variable that determines whether we choose X or Y. Let’s call it C for coin flip, and assume C takes on 0 and 1 each with probability 1/2. If C = 0 we sample from X and if C = 1 we sample from Y. We want to compute the probability P(C = 1 | M ≥ t).

Without loss of generality we can assume X has mean 0 and variance 1. (Otherwise transform X and Y by subtracting off the mean of X then divide by the standard deviation of X.) Denote the mean of Y by μ and the standard deviation by σ.

From Bayes’ theorem we have

\text{P}(C = 1 \mid M \geq t) = \frac{ \text{P}(Y \geq t) }{ \text{P}(X \geq t) + \text{P}(Y \geq t) } = \frac{\Phi^c\left(\frac{t-\mu}{\sigma}\right)}{\Phi^c(t) + \Phi^c\left(\frac{t-\mu}{\sigma}\right)}

where Φᶜ(t) = P(Z ≥ t) for a standard normal random variable Z.

Similarly, to compute P(C = 1 | M ≤ t), just flip the direction of the inequality signs and replace Φᶜ(t) = P(Z ≥ t) with Φ(t) = P(Z ≤ t).

The calculation for P(C = 1 | |M| ≥ t) is similar.

Example

Suppose Y has mean −2 and variance 10. The blue curve shows that a large negative sample from M very likely came from Y, and the orange curve shows that a large positive sample very likely came from Y as well.

The dip in the orange curve shows the transition zone where X’s advantage from its larger mean gives way to the disadvantage of its smaller variance. This illustrates that the posterior probability of Y eventually increases, but not necessarily monotonically.

Here’s a plot showing the probability of a sample having come from Y depending on its absolute value.
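Here’s a sketch that computes these posterior probabilities directly (SciPy assumed, using the example’s parameters):

    import numpy as np
    from scipy.stats import norm

    mu, sigma = -2, np.sqrt(10)   # parameters of Y; X is standard normal

    def p_from_Y_upper(t):
        # P(C = 1 | M >= t): posterior that a large positive sample came from Y
        px, py = norm.sf(t), norm.sf(t, loc=mu, scale=sigma)
        return py / (px + py)

    def p_from_Y_abs(t):
        # P(C = 1 | |M| >= t), combining both tails
        px = norm.sf(t) + norm.cdf(-t)
        py = norm.sf(t, loc=mu, scale=sigma) + norm.cdf(-t, loc=mu, scale=sigma)
        return py / (px + py)

    for t in [0, 1, 2, 4, 8]:
        print(t, p_from_Y_upper(t), p_from_Y_abs(t))
    # p_from_Y_upper dips below 1/2 for moderate t, then climbs toward 1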


Why do medical tests always have error rates?

Most people implicitly assume medical tests are infallible. If they test positive for X, they assume they have X. Or if they test negative for X, they’re confident they don’t have X. Neither is necessarily true.

Someone recently asked me why medical tests always have an error rate. It’s a good question.

A test is necessarily a proxy, a substitute for something else. You don’t run a test to determine whether someone has a gunshot wound: you just look.

A test for a disease is based on a pattern someone discovered. People who have a certain condition usually have a certain chemical in their blood, and people who do not have the condition usually do not have that chemical in their blood. But it’s not a direct observation of the disease.

“Usually” is as good as you can do in biology. It’s difficult to make universal statements. When I first started working with genetic data, I said something to a colleague about genetic sequences being made of A, C, G, and T. I was shocked when he replied “Usually. This is biology. Nothing always holds.” It turns out some viruses have Zs (aminoadenine) rather than As (adenine).

Error rates may be very low and still be misleading. A positive test for a rare disease is usually a false positive, even if the test is highly accurate. This is a canonical example of the need to use Bayes’ theorem. The details are written up here.
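Here’s the arithmetic for hypothetical numbers: a disease with 0.1% prevalence and a test that is 99% sensitive and 99% specific.

    prevalence  = 0.001
    sensitivity = 0.99   # P(test positive | disease)
    specificity = 0.99   # P(test negative | no disease)

    # Bayes' theorem: P(disease | positive test)
    p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    print(sensitivity * prevalence / p_positive)  # about 0.09: most positives are false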

The more often you test for something, the more often you’ll find it, correctly or incorrectly. The more conditions you test for, the more conditions you’ll find, correctly or incorrectly.

Wouldn’t it be great if your toilet had a built-in lab that tested you for hundreds of diseases several times a day? No, it would not! The result would be frequent false positives, leading to unnecessary anxiety and expense.

Up to this point we’ve discussed medical tests, but the same applies to any kind of test. Surveillance is a kind of test, one that also has false positives and false negatives. The more often Big Brother reads your email, the more likely it is that he will find something justifying a knock on your door.


Rényi’s parking constant

Parallel parking photo by IFCAR, Public Domain

Imagine parallel parking is available along the shoulder of a road, but no parking spaces are marked.

The first person to park picks a spot on the shoulder at random. Then another car also chooses a spot along the shoulder at random, with the constraint that the second car can’t overlap the first. This process continues until no more cars can park. How many people can park this way?

Assume all cars have the same length, and we choose our units so that cars have length 1. The expected number of cars that can park is a function M(x) of the length of the parking strip x. Clearly if x < 1 then M(x) = 0. Alfréd Rényi [1] found that for x ≥ 1, M(x) satisfies the equation

M(x) = 1 + \frac{2}{x-1}\int_0^{x-1} M(t)\, dt

This is an unusual equation, difficult to work with because it defines M only implicitly. It’s not even clear that the equation has a solution. But it does, and the ratio of M(x) to x approaches a constant as x increases.

m = \lim_{x\to\infty} \frac{M(x)}{x} = 0.74759\ldots

The number m is known as Rényi’s parking constant.

This says for a long parking strip, parking sequentially at random will allow about 3/4 as many cars to park as if you were to pack the cars end-to-end.
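The result is easy to check by simulation. Here’s a sketch (a direct simulation, not Rényi’s integral equation): park a car uniformly at random, then recurse on the gaps to its left and right.

    import random

    def park(length):
        # Number of unit cars that fit when parking sequentially at random
        if length < 1:
            return 0
        x = random.uniform(0, length - 1)       # left end of the next car
        return 1 + park(x) + park(length - x - 1)

    n, trials = 100, 10_000
    avg = sum(park(n) for _ in range(trials)) / trials
    print(avg / n)   # about 0.75, close to Rényi's constant 0.7476...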


[1] Steven R. Finch. Mathematical Constants. Cambridge University Press, 2003.

Fitting a line without an intercept term

The other day I was looking at how many lumens LED lights put out per watt. I found some data on Wikipedia, and as you might expect, the relation between watts and lumens is basically linear, though not quite.

If you were to do a linear regression on the data you’d get a relation

lumens = a × watts + b

where the intercept term b is not zero. But this doesn’t make sense: a light bulb that is turned off doesn’t produce light, and it certainly doesn’t produce negative light. [1]

You may be able to fit the regression and ignore b; it’s probably small. But what if you wanted to require that b = 0? Some regression software will allow you to specify a zero intercept, and some will not. But it’s easy enough to compute the slope a without using any regression software.

Let x be the vector of input data, the wattage of the LED bulbs. And let y be the corresponding light output in lumens. The regression line uses the slope a that minimizes

‖ax − y‖² = a² x · x − 2a x · y + y · y.

Setting the derivative with respect to a to zero shows

a = x · y / x · x
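In code the zero-intercept fit is one line. The wattage and lumen values below are made up for illustration; only the slope formula matters.

    import numpy as np

    # Hypothetical LED data: wattage and light output in lumens
    watts  = np.array([ 5.0,  9.0, 14.0, 20.0, 30.0])
    lumens = np.array([450., 800., 1500., 2100., 3000.])

    a = watts @ lumens / (watts @ watts)   # a = x.y / x.x
    print(a)                               # about 100 lumens per watt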

Now there’s more to regression than just line fitting. A proper regression analysis would look at residuals, confidence intervals, etc. But the calculation above was good enough to conclude that LED lights put out about 100 lumens per watt.

It’s interesting that making the model more realistic, i.e. requiring b = 0, is either a complication or a simplification, depending on your perspective. It complicates using software, but it simplifies the math.


[1] The orange line in the image above is the least squares fit for the model y = ax, but it’s not quite the same line you’d get if you fit the model y = ax + b.

Small probabilities add, big ones don’t

A video has been making the rounds in which a well-known professor [1] says that if something has a 20% probability of happening in one attempt, then it has a 40% chance of happening in two attempts, a 60% chance of happening in three attempts, etc.

This is wrong, but it’s a common mistake. And one reason it’s common is that a variation on the mistake is approximately correct, which we will explain shortly.

It’s obvious the reasoning in the opening paragraph is wrong when you extend it to five, or especially six, attempts. Are you certain to succeed after five attempts? What does it even mean that you have a 120% chance of success after six attempts?!

But let’s reduce the probabilities in the opening paragraph. If there’s a 2% chance of success on your first attempt, is there a 4% chance of success in two attempts and a 6% chance of success in three attempts? Yes, approximately.

Two attempts

Here is the correct formula for the probability of an event happening in two tries.

P(A \cup B) = P(A) + P(B) - P(A\cap B)

In words, the probability of A or B happening equals the probability of A happening, plus the probability of B happening, minus the probability of A and B both happening. The last term is a correction term. Without it, you’re counting some possibilities twice.

So if the probability of success on each attempt is 0.02, the probability of success on two attempts is

0.02 + 0.02 − 0.0004 = 0.0396 ≈ 0.04.

When the probabilities of A and B are each small, the probability of A and B both happening is an order of magnitude smaller, assuming independence [2]. The smaller the probabilities of A and B, the less the correction term matters.

If the probability of success on each attempt is 0.2, now the probability of success after two attempts is 0.36. Simply adding probabilities and neglecting the correction term is incorrect, but not terribly far from correct in this case.

Three attempts

When you consider more attempts, things get more complicated. The probability of success after three attempts is given by

\begin{align*} P(A \cup B \cup C) &= P(A) + P(B) + P(C) \\ &- P(A\cap B) - P(B \cap C) - P(A \cap C) \\ &+ P(A \cap B \cap C) \end{align*}

as I discuss here. Adding the probabilities of success separately over-estimates the correct probability, so you correct by subtracting the probabilities of pairs of successes. But this over-corrects, and you need to add back in the probability of all three successes.

If A, B, and C all have a 20% probability, the probability of A or B or C happening is 48.8%, not 60%, again assuming independence.

The error from naively adding probabilities increases as the number of probabilities increases.

n attempts

Now let’s look at the general case. Suppose your probability of success on each attempt is p. Then your probability of failure on each independent attempt is 1 − p. The probability of at least one success out of n attempts is the complement of the probability of all failures, i.e.

1 - (1 - p)^n

When p is small, and when n is small, we can approximate this by np. That’s why naively adding probabilities works when the probabilities are small and there aren’t many of them. Here’s a way to say this precisely using the binomial theorem.

\begin{align*} 1 - (1 - p)^n &= 1 - \left(1 - np + {n \choose 2}p^2 - {n \choose 3}p^3 + \cdots\right ) \\ &= np + {\cal O}(p^2) \end{align*}

The exact probability is np plus (n − 1) terms that involve higher powers of p. When p and n are sufficiently small, these terms can be ignored.
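A few lines of Python show how quickly the naive estimate np departs from the exact probability when p = 0.2:

    p = 0.2
    for n in range(1, 7):
        naive, exact = n * p, 1 - (1 - p)**n
        print(n, round(naive, 3), round(exact, 3))
    # naive: 0.2 0.4 0.6 0.8 1.0 1.2 -- exact: 0.2 0.36 0.488 0.59 0.672 0.738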

 

[1] I’m deliberately not saying who. My point here is not to rub his nose in his mistake. This post will be online long after the particular video has been forgotten.

[2] Assuming A and B are independent. This is not always the case, and wrongly assuming independence can have disastrous consequences as I discuss here, but that’s a topic for another day.