Nassim Taleb, author of The Black Swan, was part of a panel discussion at a statistical conference in Denver yesterday. His book contains some provocative criticisms of statisticians, so I was eager to see what the discussion might be like. His rhetoric at the meeting was far more subdued than in his book, though his message was essentially the same. His main point was that there are severe limits on the ability of statistics to estimate the probabilities of rare events. Precise statements about very small probabilities are often nonsense.

Taleb argued that statisticians can make the problem of predicting rare events worse by reassuring non-statisticians that risks are under control when common sense would leave more room for doubt. (Anybody remember Long Term Capital Management?) He made an analogy to the former practice of suppressing all forest fires. The success in fighting small forest fires created a false sense of security while also creating the conditions for enormous forest fires by not clearing out underbrush. The success of statisticians in predicting the frequency of not-so-rare events lends confidence to predictions that are past the limits of their models.

The relative error in estimating the probability of rare events is only a problem when these rare events also have huge consequences. In a previous post I explained how normal distributions don’t do a good job of predicting the number of extremely tall people. When you’re predicting what proportion of the population meets the height requirements of the US Army, it makes no difference whether the probability of a woman being seven feet tall is one in a million (10^{-6}) or one in a billion (10^{-9}). But if you are insuring against a multi-billion dollar disaster, the difference between a one in a million and a one in a billion chance matters.
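To make the first half of that concrete, here is a small sketch of what a normal model says about seven-foot-tall women. The mean and standard deviation below are assumed round numbers for illustration, not figures from the post; the point is only that the model's tail probability comes out so small that, for a question like Army height requirements, its exact value is irrelevant.

```python
import math

# Assumed parameters for adult female height (illustrative only):
# mean 64.5 inches, standard deviation 2.5 inches.
mean, sd = 64.5, 2.5

def normal_tail(x, mean, sd):
    """P(X > x) under a normal model, via the complementary error function."""
    z = (x - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

seven_feet = 84  # inches
p = normal_tail(seven_feet, mean, sd)
# The model puts this event nearly 8 standard deviations out, so p is
# astronomically small -- far smaller than one in a billion, and far
# smaller than the frequency of seven-footers actually observed.
print(f"Normal model's P(height > 7 ft): {p:.1e}")
```

That mismatch between the model's tail and reality is exactly the point of the earlier post: the normal distribution underpredicts extreme heights, and the relative error out in the tail is enormous.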

Taleb’s advice is to admit ignorance in predicting rare events and “organically” clip the tails of probability distributions by setting loss limits. This is what insurance companies do when they set caps on payoffs. By setting an upper limit on the amount they will pay, companies no longer need accurate estimates for the probabilities of rare but extremely costly events. Seems like very sensible advice to me.
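A back-of-the-envelope sketch of why a cap helps, using made-up numbers (a \$10B disaster, a \$50M payout cap; neither figure is from the post). The expected loss on the uncapped policy swings by a factor of a thousand depending on which tail probability you believe, but the capped worst-case payout is the same either way, so the insurer no longer needs to resolve that question.

```python
# Hypothetical figures, for illustration only: a $10B disaster whose
# probability we can only bound somewhere between 1e-9 and 1e-6.
disaster_loss = 10e9   # uncapped loss if the event occurs
cap = 50e6             # payout cap written into the policy

def max_exposure(loss, cap=None):
    """Worst-case payout on a single event. With a cap in place,
    this bound does not depend on how rare the event is."""
    return loss if cap is None else min(loss, cap)

for p in (1e-6, 1e-9):
    expected = p * disaster_loss  # sensitive to the probability estimate
    print(f"p={p:.0e}: expected uncapped loss ${expected:,.0f}, "
          f"capped worst case ${max_exposure(disaster_loss, cap):,.0f}")
```

Clipping the tail this way trades an unanswerable estimation problem (how likely is the extreme event?) for a bounded, known liability.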

“But if you are insuring against a multi-billion dollar disaster, the difference between a one in a million and a one in a billion chance matters.”

This also points to the incentive problem, though: that difference matters collectively and to the organization, but both probabilities are close enough to zero that they may be effectively the same for an individual making the decision (who isn’t personally facing billions of dollars of losses).