Plausible reasoning

If Socrates is probably a man, he’s probably mortal.

How do you extend classical logic to reason with uncertain propositions, such as the statement above? Suppose we agree to represent degrees of plausibility with real numbers, larger numbers indicating greater plausibility. If we also agree to a few axioms to quantify what we mean by consistency and common sense, there is a unique system that satisfies the axioms. The derivation is tedious and not well suited to a blog posting, so I’ll cut to the chase: given certain axioms, the inevitable system for plausible reasoning is probability theory.
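To see the opening syllogism in these terms, here is a minimal sketch in Python. The numbers (a 0.9 plausibility that Socrates is a man) are my own illustrative assumptions, not part of the derivation; the point is that the law of total probability turns “probably a man” into “probably mortal.”

```python
# A minimal sketch of the opening syllogism, with illustrative numbers.
# Premise 1: Socrates is probably a man, say P(man) = 0.9.
# Premise 2 (the classical one): all men are mortal, P(mortal | man) = 1.
p_man = 0.9
p_mortal_given_man = 1.0

# Law of total probability:
#   P(mortal) = P(mortal|man) P(man) + P(mortal|not man) P(not man)
# We know nothing about non-men, so P(mortal | not man) could be 0 or 1.
lower = p_mortal_given_man * p_man  # worst case: no non-man is mortal
upper = lower + (1.0 - p_man)       # best case: every non-man is mortal

print(f"P(mortal) lies in [{lower:.2f}, {upper:.2f}]")  # [0.90, 1.00]
```

However uncertain the premise, P(mortal) is at least P(man): if Socrates is probably a man, he is probably mortal.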

There are two important implications of this result. First, it is possible to develop probability theory with no reference to sets. This renders much of the controversy about the interpretation of probability moot. Instead of arguing about what a probability can and cannot represent, one could concede the point. “We won’t use probabilities to represent uncertain information. We’ll use ‘plausibilities’ instead, derived from rules of common sense reasoning. And by the way, the resulting theory is identical to probability theory.”

The other important implication is that all other systems of plausible reasoning — fuzzy logic, neural networks, artificial intelligence, etc. — must either lead to the same conclusions as probability theory, or violate one of the axioms used to derive probability theory.
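To make the fuzzy logic case concrete, here is a small sketch (my own illustration, not from Jaynes or Cox). Fuzzy logic is truth-functional: the plausibility of “A and B” is computed from the plausibilities of A and B alone, typically as min(A, B). Probability is not truth-functional, so the two systems must part ways somewhere.

```python
# A sketch of why a truth-functional system must disagree with probability.
# Fuzzy logic scores "A and B" as min(A, B), a function of the marginal
# plausibilities alone. Probability allows the same marginals to produce
# different joint plausibilities, depending on how A and B are related.
p_a = p_b = 0.5

joint_if_same = 0.5       # scenario 1: B is the very same event as A
joint_if_exclusive = 0.0  # scenario 2: A and B are mutually exclusive

fuzzy_and = min(p_a, p_b)               # 0.5 in both scenarios
print(fuzzy_and == joint_if_same)       # True
print(fuzzy_and == joint_if_exclusive)  # False: one fixed function of the
                                        # marginals cannot match both
```

No rule that looks only at the marginal plausibilities can match both scenarios, which is one concrete way an alternative system ends up contradicting probability theory.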

See the first two chapters of Probability Theory: The Logic of Science by E. T. Jaynes (ISBN 0521592712) for a full development. It’s interesting to note that the seminal paper in this area came out over 60 years ago. (Richard Cox, “Probability, frequency, and reasonable expectation”, American Journal of Physics, 1946.)

6 thoughts on “Plausible reasoning”

  1. Sadly, courses in AI are teaching people that fuzzy logic, possibility theory, Dempster-Shafer theory, the Transferable Belief Model, and rough sets are all valid and equally useful alternatives to probability theory.

  2. It’s also interesting that Jaynes’ treatment posits a Machine that will make judgments based upon evidence and uses that as a descriptive and rhetorical device to construct his axioms.

  3. Interesting. Explain this… Suppose whether Socrates was a man is uncertain. Suppose whether he was mortal is also uncertain. Can we still be certain of the relationship “if he was a man, then he was mortal”? Can the logical connective have a higher certainty value than either its antecedent or consequent? (See the sketch after the comments.)

    I don’t believe in AI, assuming you define AI to include the notion of artificial sentience/sapience. I believe that original thought is an artifact of humanity and cannot be programmed. I foresee a very real dilemma within programming, a challenge I do not see anyone overcoming. I believe that sentience, real human original thought, actually creating a person in a machine, requires us to program emotion. I don’t see a machine developing the same quality of being without the capacity to perceive emotion. And emotion cannot be programmed: we don’t understand it well enough, and even if we did, it cannot be quantified. So we are forced, then, to allow emotion to mature and evolve on its own within the machine, independently of human involvement. But is that even possible? Why would machines change in such a way in the first place? How would they do so, for lack of a better word, “naturally”? Even if emotion could be programmed first, what would exist there to feel it? A machine without a sentient mind cannot feel the emotions being fed to it. So the dilemma is simply this: we cannot program a mind without the ability to feel emotion, and we cannot program emotion without a mind to feel it. It’s a paradox of programming, I think.

  4. @CognitoErgoCognitoSum
    What is creativity? Does an intelligence necessarily need emotions to have creativity or original thought?
    I believe the “paradox of programming,” as you call it, is a result of us being stuck in a model where we try to perceive every intelligence through our own, or as a reflection of our own. That is, we try to project our human intelligence onto machines, etc.
    Animals have intelligence. Some have very limited intelligence, but still it is there.
    Even a single cell in the human body has intelligence. Its membrane is its brain. It receives signals through the membrane, and responds to them. It knows how to move towards nutrients and away from threats. It is not a hard-coded program, and it is not a slave of its DNA. On the contrary: the DNA is only used as a template to produce many of the molecules the cell needs (a single DNA strand can be used by the cell to produce up to 2000 different molecules).
    So a cell is not a machine; it is a living, “thinking” (or responding) being.
    What is creativity?
    Take a look at the immune system in the human body. It is a collection of white blood cells, including T-cells. Whenever the human body encounters a threat (a foreign body such as a virus or bacterium), the immune system tries to find an antibody. That search is a creative process, because the immune system is trying to come up with a molecule that attaches to the virus’s receptors in a way that disables the virus, its ability to fold, or its ability to inject itself into cells (and thus keeps it from reproducing). The immune system does this by generating random mutations on strands until it finds one that fits the bill.

    The human body works as a community of cells to produce what we perceive as human intelligence. Similarly, in other animals we may perceive emotion, creativity, wants, and even strategic planning (if you have ever observed cats, for example).

    I argue that limiting ourselves to only one type of intelligence, artificial or otherwise, would indeed get us stuck in a seeming paradox.

    On a lighter note, an intelligence without emotion would sum it up as “I think therefore I don’t care.”

  5. Rather than get sidetracked on the issues of AI and relevance, I will point out that I like to keep my axioms enumerated and in front of me.

    Some problems require exposure to cases where an axiom doesn’t hold.

    I do not get into this as heavily as someone who has made a profession of numerical analysis, but I still find it a good thing to be conscious of.
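Regarding the question in comment 3: reading “if man then mortal” as the conditional probability P(mortal | man), the connective can indeed be more certain than both its antecedent and its consequent. A sketch with illustrative numbers (my own, not from the post):

```python
# Sketch for the question in comment 3 (illustrative numbers, my own).
# Read "if man then mortal" as the conditional probability P(mortal | man).
# It can be fully certain while both the antecedent and consequent are not.
p_man = 0.5                   # uncertain antecedent
p_mortal_given_man = 1.0      # certain connective
p_mortal_given_not_man = 0.0  # suppose no non-man is mortal

# Law of total probability gives the consequent:
p_mortal = (p_mortal_given_man * p_man
            + p_mortal_given_not_man * (1.0 - p_man))  # = 0.5

print(p_mortal_given_man > p_man)     # True: more certain than the antecedent
print(p_mortal_given_man > p_mortal)  # True: more certain than the consequent
```

So yes: the conditional can be certain even when both the antecedent and the consequent are uncertain.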
