The primary way to quantify uncertainty is probability. Subject to certain axioms meant to capture common-sense rules for quantifying uncertainty, probability theory is essentially the only way to do so. (This is Cox’s theorem.)
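For reference, here is a rough sketch of where Cox’s axioms lead (not a precise statement of the theorem): any plausibility calculus satisfying the axioms can be rescaled so that it obeys the familiar product and sum rules of probability.

```latex
% Rough sketch of the rules Cox's axioms force, up to rescaling.
% P(A \mid C) denotes the plausibility of A given background information C.
\begin{align*}
  \text{Product rule:} \quad & P(A \wedge B \mid C) = P(A \mid C)\, P(B \mid A \wedge C) \\
  \text{Sum rule:}     \quad & P(A \mid C) + P(\neg A \mid C) = 1
\end{align*}
```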
Other methods, such as fuzzy logic, may be useful, though under some circumstances they must violate common sense (at least as axiomatized by Cox’s theorem). They can still be worthwhile when they give approximately the results that probability would give, with less effort, and stay away from edge cases where they deviate too far from common sense.
There are various kinds of uncertainty, principally epistemic uncertainty (lack of knowledge) and aleatory uncertainty (randomness), and various philosophies for how to apply probability. One advantage of the Bayesian approach is that it handles epistemic and aleatory uncertainty in a unified way.
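To make that unified treatment concrete, here is a minimal sketch in Python of a beta-binomial coin model; the prior and the data are invented for illustration. Epistemic uncertainty lives in the distribution over the coin’s unknown bias, aleatory uncertainty in the randomness of the next flip, and the posterior predictive probability combines both in one number.

```python
# Minimal sketch: Bayesian treatment of epistemic + aleatory uncertainty.
# The beta(2, 2) prior and the 7-heads-in-10-flips data are made up.
from scipy import stats

# Epistemic uncertainty: we don't know the coin's bias p.
# Encode that ignorance as a beta prior over p.
prior_a, prior_b = 2, 2

# Observed data: 7 heads in 10 flips.
heads, flips = 7, 10

# Conjugate update: the posterior over p is again a beta distribution.
post_a, post_b = prior_a + heads, prior_b + (flips - heads)
posterior = stats.beta(post_a, post_b)

# Aleatory uncertainty: even if p were known exactly, the next flip is random.
# The posterior predictive P(next flip = heads) averages over both kinds of
# uncertainty; for the beta-binomial model it equals the posterior mean of p.
p_next_heads = post_a / (post_a + post_b)
print(f"P(next flip heads) = posterior mean of p = {p_next_heads:.3f}")
print(f"95% credible interval for p: {posterior.interval(0.95)}")
```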
Blog posts related to quantifying uncertainty:
- How loud is the evidence?
- The law of small numbers
- Example of the law of small numbers
- Laws of large numbers and small numbers
- Plausible reasoning
- What is a confidence interval?
- Learning is not the same as gaining information
- What a probability means
- Irrelevant uncertainty
- Probability and information
- False positives for medical papers
- False positives for medical tests
- Most published research results are false
- Determining distribution parameters from quantiles
- Fitting a triangular distribution
- Musicians, drunks, and Oliver Cromwell
- Information theory
Probability can often be far too restrictive to characterise uncertainty and quantify the unknown. The Allais and Ellsberg paradoxes show that many situations we can reason about cannot be handled by probability or by Cox’s axioms. So, far from being unreasonable, I would say imprecise probability (including fuzzy logic, but also Dempster-Shafer theory, interval probability, previsions, credal sets, etc.) is one of the only reasonable approaches to characterising the unknown when we cannot specify the probability of an event with precision.
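As a concrete illustration of one of these formalisms, here is a minimal sketch of Dempster-Shafer belief and plausibility functions in Python; the mass assignment is invented for illustration. The resulting interval [Bel, Pl] is exactly the kind of imprecise answer described above: a lower and upper probability rather than a single number.

```python
# Minimal sketch of Dempster-Shafer belief and plausibility.
# The mass assignment below is made up for illustration.
def belief(masses, event):
    """Bel(A): total mass committed to subsets of A (lower probability)."""
    return sum(m for focal, m in masses.items() if set(focal) <= set(event))

def plausibility(masses, event):
    """Pl(A): total mass not ruled out by A (upper probability)."""
    return sum(m for focal, m in masses.items() if set(focal) & set(event))

# Frame of discernment: {rain, snow, clear}. Mass placed on non-singleton
# sets expresses ignorance that no single precise probability can capture.
masses = {
    ("rain",): 0.3,
    ("snow",): 0.1,
    ("rain", "snow"): 0.2,           # "precipitation, type unknown"
    ("rain", "snow", "clear"): 0.4,  # complete ignorance
}

event = ("rain",)
print(f"Bel(rain) = {belief(masses, event):.2f}")        # 0.30
print(f"Pl(rain)  = {plausibility(masses, event):.2f}")  # 0.90
```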
Bayesian probability is logically sound, but how do you assign a precise subjective probability to an event? Why 29.3% and not 28.7%? Sometimes we can be quite sure, but other times the most we can do is label an event as likely, unlikely, very likely, 50-50, etc.
It is clear as well that we are averse to uncertainty, even more so than to risk. This aversion really underpins the economic crisis, and a new body of economic literature, by the likes of Sargent and Hansen, is being built to address the economics of uncertainty aversion. Robust statistics and robust control theory offer some guidance on how to approach these many compelling issues.
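As a small illustration of the robustness idea mentioned above, here is a sketch comparing the sample mean and the median under a single gross error; the data values are made up. The mean is dragged far off by one bad observation, while the median barely moves.

```python
# Minimal sketch of robustness: the sample mean is fragile to a single wild
# observation, while the median is not. Data values are made up.
import statistics

clean = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3]
contaminated = clean + [1000.0]  # one gross error, e.g. a recording mistake

print(f"clean:        mean={statistics.mean(clean):7.2f}  "
      f"median={statistics.median(clean):.2f}")
print(f"contaminated: mean={statistics.mean(contaminated):7.2f}  "
      f"median={statistics.median(contaminated):.2f}")
```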
What Nick said.
Probability theory is a model of uncertainty with certain (very) nice properties. It’s incredibly powerful, and it has proven incredibly useful as well. But it is not a good descriptive model of human preferences under uncertainty, and it’s often not even a good prescriptive one.
I wasn’t familiar with Cox’s Theorem, so I did some quick googling and read about it. It doesn’t seem to be as powerful as you make it out to be, John. In particular, it doesn’t hold for finite domains, and it requires more than just “common sense rules” (and more axioms than Cox realized) to hold in infinite domains.
Are you aware of the critiques of Cox’s “theorem”, say, by Halpern? They seem pretty damning, and I no longer cite the theorem for this reason.