Primitive recursive functions and enumerable sets

The set of primitive recursive (PR) functions is the smallest set of functions of several integer arguments satisfying five axioms:

  1. Constant functions are PR.
  2. The function that picks the ith element of a list of n arguments is PR.
  3. The successor function S(n) = n+1 is PR.
  4. PR functions are closed under composition.
  5. PR functions are closed under primitive recursion.

The last axiom obviously gives PR functions their name. So what is primitive recursion? Given a PR function f that takes k arguments, and another PR function g that takes k+2 arguments, the primitive recursion of f and g is a function h of k+1 arguments satisfying two properties:

  1. h(0, x1, …, xk) = f(x1, …, xk)
  2. h(S(y), x1, …, xk) = g(y, h(y, x1, …, xk), x1, …, xk)
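
To make the scheme concrete, here is a minimal Python sketch, not part of the formal definition, that builds addition by primitive recursion from the successor function.

    def S(n):
        """Successor function S(n) = n + 1."""
        return n + 1

    def primitive_recursion(f, g):
        """Return h with h(0, *xs) = f(*xs) and h(S(y), *xs) = g(y, h(y, *xs), *xs)."""
        def h(y, *xs):
            acc = f(*xs)
            for i in range(y):
                acc = g(i, acc, *xs)
            return acc
        return h

    # Addition: add(0, x) = x and add(S(y), x) = S(add(y, x)),
    # i.e. f(x) = x and g(y, r, x) = S(r).
    add = primitive_recursion(lambda x: x, lambda y, r, x: S(r))

    print(add(3, 4))  # 7

The loop is the whole point: h(y, …) is computed by applying g exactly y times, which is why a primitive recursive function always terminates.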

Not every computable function is primitive recursive. For example, Ackermann’s function is a general recursive function, but not a primitive recursive function. General recursive functions are Turing complete. Turing machines, lambda calculus, and general recursive functions are all equivalent models of computation, but the first two are better known.
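
As an illustration, here is a minimal Python sketch of Ackermann's function. The naive recursion is fine for tiny arguments, but the values grow explosively.

    def ackermann(m, n):
        # Ackermann's function: total and computable, but not primitive recursive.
        if m == 0:
            return n + 1
        if n == 0:
            return ackermann(m - 1, 1)
        return ackermann(m - 1, ackermann(m, n - 1))

    print(ackermann(2, 3))  # 9
    print(ackermann(3, 3))  # 61
    # ackermann(4, 2) already has 19,729 digits and would exhaust the call stack.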

For this post, the main thing about general recursive functions is that, as the name implies, they are more general than primitive recursive functions.

Now we switch from functions to sets. The characteristic function of a set A is the function that is 1 for elements of A and zero everywhere else. In other areas of math, there is a sort of duality between functions and sets via characteristic functions. For example, the indicator function of a measurable set is a measurable function. And the indicator function of a convex set is a convex function. But in recursion theory, there's an unexpected wrinkle in this analogy.

A set of integers is recursively enumerable if it is either empty or the image of a general recursive function. But there’s a theorem, due to Alonzo Church, that a non-empty recursively enumerable set is actually the image of a primitive recursive function. So although general recursive functions are more general, you can’t tell that from looking at function images. For example, although the Ackermann function is not primitive recursive, there is a primitive recursive function with the same image.

Intuitionistic logic in Gilbert and Sullivan

In a recent preprint, Philip Wadler introduces intuitionistic logic using the comic opera The Gondoliers.

In Gilbert and Sullivan’s The Gondoliers, Casilda is told that as an infant she was married to the heir of the King of Barataria, but that due to a mix-up no one knows which of two individuals, Marco or Giuseppe, is the heir. Alarmed, she wails “Then do you mean to say that I am married to one of two gondoliers, but it is impossible to say which?” To which the response is “Without any doubt of any kind whatever.”

… Intuitionists … insist that a proof of A ∨ B must show which of A or B holds, and hence they would reject the claim that Casilda is married to Marco or Giuseppe until one of the two was identified as her husband. Perhaps Gilbert and Sullivan anticipated intuitionism, for their story’s outcome is that the heir turns out to be a third individual, Luiz, with whom Casilda is, conveniently, already in love.
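
To make the intuitionist requirement concrete, here is a one-line Lean sketch (my example, not Wadler's): a constructive proof of a disjunction must commit to one of the two sides.

    -- A constructive proof of a disjunction names the disjunct it establishes.
    -- Here the proof commits to the left side, 2 = 2.
    example : 2 = 2 ∨ 2 = 3 := Or.inl rfl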

The smallest uninteresting number and fuzzy logic

I’ve tried to think of something interesting about the number 2013 and haven’t come up with anything. This reminds me of the interesting number paradox.

Theorem: All positive integers are interesting.

Proof: Let n be the smallest uninteresting positive integer. Then n is interesting by virtue of being the smallest such number, contradicting the assumption that n is uninteresting.

The interesting number paradox is semi-serious, and so is the resolution I propose below. Both are jokes, but they touch on some serious ideas.

“Interestingness” is not an all-or-nothing property. Some numbers are more interesting than others, so perhaps we should use fuzzy logic to quantify how interesting a number is, say on a scale from 0 to 1.

For a given ε > 0, define a number to be interesting if its interestingness is greater than ε. Suppose the interestingness of numbers trails off gradually after some point. (If instead it dropped off sharply, the first number after the drop would stand out, and so be interesting after all.) Then the largest interesting number is barely interesting, and the number one larger is only slightly less interesting. So the proof of the interesting number paradox doesn't go through in the continuous setting.
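
Here is a toy Python sketch of the idea. The interestingness function below is entirely made up; only the threshold ε comes from the discussion above.

    import math

    def interestingness(n):
        # Hypothetical interestingness score, slowly trailing off toward zero.
        return 1 / math.log(n + 2)

    eps = 0.1
    interesting = [n for n in range(1, 10**6) if interestingness(n) > eps]
    largest = max(interesting)

    # The largest interesting number is barely above the threshold, and the
    # next number is barely below it, so nothing special happens at the boundary.
    print(largest, interestingness(largest), interestingness(largest + 1))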

On a more serious note, many paradoxes in mathematics can be resolved by replacing a binary criterion with a continuous one.

For example, the sum of a trillion continuous functions is continuous, but the infinite sum of continuous functions may not be. How can that be? The problem is that we’re viewing continuity as an all-or-nothing property. If you have a series of continuous functions that converges to a discontinuous limit, the degree of continuity must be degrading. The partial sum after some large number of terms is continuous, but not very continuous. The modulus of continuity of each partial sum is finite, but is getting larger, and is infinite in the limit.
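
A standard example, not from the discussion above, makes this concrete. Each term of the telescoping series Σ (x^n − x^(n−1)) is continuous on [0, 1], and the Nth partial sum is x^N − 1, which is also continuous. But the limit is −1 on [0, 1) and 0 at x = 1, which is not. The sketch below measures how sharply each partial sum changes just below x = 1.

    # Partial sums of the telescoping series sum_{n>=1} (x^n - x^(n-1)) on [0, 1].
    def partial_sum(x, N):
        return x**N - 1

    delta = 0.01  # a fixed small window just below x = 1
    for N in [1, 10, 100, 1000]:
        # The change across the window creeps up toward 1 as N grows, i.e. the
        # continuous partial sums become "less continuous" on the way to a
        # discontinuous limit.
        jump = partial_sum(1.0, N) - partial_sum(1.0 - delta, N)
        print(N, jump)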

Classical statistics is filled with yes-no concepts that make more sense when replaced with continuous measures. For example, instead of asking whether an estimator is biased, it’s more practical to ask how biased it is.
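
As a quick illustration with simulated data (my example, not from the post): the maximum likelihood estimator of variance, which divides by n, is biased, but its bias is −σ²/n and shrinks as the sample size grows.

    import numpy as np

    rng = np.random.default_rng(42)
    sigma2 = 4.0  # true variance

    for n in [5, 50, 500]:
        samples = rng.normal(0.0, np.sqrt(sigma2), size=(20_000, n))
        mle_var = samples.var(axis=1, ddof=0)  # divide by n: the biased MLE
        # Observed bias versus the theoretical bias -sigma^2 / n.
        print(n, round(mle_var.mean() - sigma2, 3), -sigma2 / n)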

Computer science is often concerned with whether something can be computed at all, that is, computed exactly. But sometimes it's more important to ask how well something can be computed. Many things that cannot be computed in theory can be computed well enough in practice.

Related post: How to solve supposedly intractable problems

Bad logic, but good statistics

Ad hominem arguments are bad logic, but good (Bayesian) statistics. A statement isn’t necessarily false because it comes from an unreliable source, though it is more likely to be false.

Some people are much more likely to know what they’re talking about than others, depending on context. You’re more likely to get good medical advice from a doctor than from an accountant, though the former may be wrong and the latter may be right. (Actors are not likely to know what they’re talking about when giving advice regarding anything but acting, though that doesn’t stop them.)

Ad hominem guesses are a reasonable way to construct a prior, but the prior needs to be updated with data. Given no other data, the doctor is more likely to know medicine than the accountant is. Assuming a priori that both are equally likely to be correct may be “fair,” but it’s not reasonable. However, as you gather data on the accuracy of each, you could change your mind. The posterior distribution could persuade you that you’ve been talking to a quack doctor or an accountant who is unusually knowledgeable of medicine.
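
Here is a small Python sketch of that updating process, with entirely made-up numbers, using Beta-Binomial conjugacy: the prior favors the doctor, but the hypothetical track records flip the posterior.

    # Prior pseudo-counts of (correct, incorrect) advice, encoding the ad hominem
    # prior that a doctor knows medicine better than an accountant does.
    doctor_prior = (9, 1)
    accountant_prior = (3, 7)

    # Hypothetical observed track records on medical questions.
    doctor_data = (2, 8)       # this particular doctor turns out to be a quack
    accountant_data = (9, 1)   # this accountant knows medicine unusually well

    def posterior_mean(prior, data):
        # A Beta(a, b) prior with binomial data gives a Beta(a + correct, b + incorrect)
        # posterior, whose mean is the ratio below.
        a = prior[0] + data[0]
        b = prior[1] + data[1]
        return a / (a + b)

    print(posterior_mean(doctor_prior, doctor_data))          # 0.55
    print(posterior_mean(accountant_prior, accountant_data))  # 0.60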

Related post: Musicians, drunks, and Oliver Cromwell

Proofs of false statements

Mark Dominus brought up an interesting question last month: have there been major screw-ups in mathematics? He defines a “major screw-up” to be a flawed proof of an incorrect statement that was accepted for a significant period of time. He excludes the case of incorrect proofs of statements that were nevertheless true.

It’s remarkable that he can even ask the question. Can you imagine someone asking with a straight face whether there have ever been major screw-ups in, say, software development? And yet it takes some hard thought to come up with examples of really big blunders in math.

No doubt there are plenty of flawed proofs of false statements in areas too obscure for anyone to care about. But in mainstream areas of math, blunders are usually uncovered very quickly. And there are examples of theorems that were essentially correct but neglected some edge case. Proofs of statements that are just plain wrong are hard to think of. But Mark Dominus came up with a few.

Yesterday he gave an example of a statement by Kurt Gödel that was flat-out wrong but accepted for over 30 years. Warning: reader discretion advised. His post is not suitable for those who get queasy at the sight of symbolic logic.