Why? Because its optimal rational approximations (i.e. continued fraction approximations) are worse than those of any other number, in the sense of having a larger relative error.
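A quick numerical illustration of this claim (a sketch; π's well-known convergent 355/113 is my own choice of comparison):

```python
from fractions import Fraction

# Convergents of the golden ratio's continued fraction [1; 1, 1, 1, ...]
# are ratios of consecutive Fibonacci numbers: 1/1, 2/1, 3/2, 5/3, ...
def phi_convergent(n):
    a, b = 1, 1
    for _ in range(n):
        a, b = a + b, a
    return Fraction(a, b)

phi = (1 + 5 ** 0.5) / 2

p = phi_convergent(10)                      # 144/89
err_phi = abs(phi - p.numerator / p.denominator)

# pi's convergent 355/113 has a comparable denominator,
# yet is far more accurate.
err_pi = abs(3.141592653589793 - 355 / 113)

print(err_phi, err_pi)
```

Even with a larger denominator, φ's convergent is orders of magnitude less accurate than π's, which is what "worst approximable" means here.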

My interest in text analytics shows me no number or concept is an island. With enough data everything has some sort of connection. I like the idea that the degree to which something is interesting is scaled between 0 and 1, with nothing ever being exactly 0 or exactly 1.

The positive integers (like the natural numbers) have a well-order. This means “that every non-empty subset of S [positive integers] has a least element in this ordering”. Going the other way this is not true: for example, the subset of even numbers is non-empty but has no largest element.

The original paradox shows that the set of non-interesting numbers is empty: if it were non-empty it would have a least element Q, and this Q would therefore be interesting. Proof by contradiction.

Let me chase your reasoning to see where I go:

Generalising, let the interestingness of a positive integer “x” be a non-negative real number “F(x)”, and consider a threshold ε > 0. This lets you define a sequence of interesting/non-interesting subsets as ε gets smaller, I(ε) = {x | ε < F(x)}, and assume F(x) > 0 for all positive integers x.

“Suppose the interestingness of numbers trails off after some point” forces all such interesting subsets I(ε) to be finite (and for small enough ε the set I(ε) is non-empty), thus there must exist a largest interesting number L(ε). But this largest interesting number depends on ε, and as ε decreases you get a non-decreasing sequence of L(ε). Because the positive reals are not well-ordered and F(L(ε)) > 0, you can have infinitely long decreasing sequences of ε and infinitely long strictly increasing sequences of L(ε).
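To make this concrete, here is a toy sketch; the trailing-off interestingness F(x) = 1/x and the search bound N are my own assumptions, not anything from the discussion:

```python
def F(x):
    # Hypothetical interestingness that trails off toward zero.
    return 1 / x

N = 10_000  # search bound; with F(x) = 1/x every I(eps) below is finite

def interesting(eps):
    """I(eps) = {x : F(x) > eps}, restricted to 1..N."""
    return [x for x in range(1, N + 1) if F(x) > eps]

# As eps decreases, the largest interesting number L(eps) never decreases.
for eps in (0.1, 0.01, 0.001):
    print(eps, max(interesting(eps)))   # L(0.1)=9, L(0.01)=99, L(0.001)=999
```

Each I(ε) is finite, non-empty for small ε, and grows as the threshold drops, exactly the behaviour described above.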

You then consider ε > 0 and the non-interesting number 1+L(ε). These also form an ever-increasing sequence as ε decreases. I note that 1+L(ε) was never shown to be the smallest element not in the finite subset I(ε). Let me grant you that I(ε) is {1,2,…,L(ε)}, so that 1+L(ε) is this smallest element. Note that the interesting set I(ε) is finite and the non-interesting set is non-empty by definition, not by virtue of any proof. It is not that the original proof does not apply; it is that your continuous case has defined things to be this way.

Now one has the proof that the limit of the infinite sequence of finite sets {1,…,L(ε)}, as L(ε) increases, is the infinite set of natural numbers, and that there are no non-interesting numbers in the limit as ε goes to 0. This concept and proof is quite similar to the rest of the examples you discuss: a property that is true of each item in a sequence is not true of the new item which is the “infinite limit” of the sequence of the original items.

How does this proof in the continuous setting work? We are trying to prove that there are only interesting numbers in the limit as ε goes to zero. For contradiction, assume there remains a non-interesting number Q with F(Q) greater than zero. In the infinite decreasing sequence of ε to zero, ε must become less than F(Q) at some point: εQ < F(Q). So I(εQ) is in the sequence of finite sets and Q is in I(εQ), which makes Q interesting after all: a contradiction. Varying ε does not change F(x), so each I(ε) ends in some finite drop from F(L(ε)) to F(1+L(ε)). Perhaps you mean that this drop decreases as ε decreases?
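The contradiction step can be checked concretely (using a hypothetical trailing-off F(x) = 1/x, my own choice; the only property used is F(Q) > 0):

```python
def F(x):
    # Hypothetical interestingness; any strictly positive F behaves the same.
    return 1 / x

Q = 1000                 # candidate "non-interesting" number
eps_Q = F(Q) / 2         # a threshold from the decreasing sequence, below F(Q)

# Once eps drops below F(Q), Q is inside I(eps) and hence interesting,
# contradicting the assumption that Q stays non-interesting in the limit.
print(F(Q) > eps_Q)      # True
```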

I do like the base 13 representation, though.

@Harlan: Zeno’s paradoxes get a bad rap these days, IMHO. It’s mostly forgotten that he actually proposed a set of 6 paradoxes, which had to be considered all at once. Individually, they seemed to show that space must be discrete, space must be continuous, time must be discrete, time must be continuous, motion must be discrete, and motion must be continuous. Collectively, they showed that argumentation up to that time regarding the nature of space, time, and motion had been inconsistent and sloppy. People who dismiss Zeno by saying “Now we know that an infinite number of terms can add to a finite sum” have missed the point entirely.

You are just saying that between any two integers there exists a difference, which is kind of a trivial truth.

I like your post, just in case there are any doubts after reading my post ^^

Digital computers do not work better than analog ones, it is just that with a digital computer you get repeatable, and thus reproducible, trials.

“Working better” is a yes-no concept that makes more sense when replaced with a continuous measure.

It’s the first year we’ve had with all four digits unique since 1987 — every year since then (until now!) has included at least one duplicated digit.

It’s the first year where the digits can be rearranged to be sequential (0,1,2,3) since 1432 (whose digits rearrange to 1,2,3,4), and we haven’t seen this particular set of digits since 1320.

That’s all that comes to mind at the moment… but surely there are more.
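These digit claims can be brute-forced with a short script (a sketch; the helper names are mine):

```python
def digits(year):
    return sorted(str(year))

def all_unique(year):
    d = digits(year)
    return len(set(d)) == len(d)

def consecutive(year):
    # True when the four digits rearrange into a run such as 0,1,2,3.
    d = [int(c) for c in digits(year)]
    return all_unique(year) and d[-1] - d[0] == len(d) - 1

years = range(1000, 2013)
print(max(y for y in years if all_unique(y)))                  # 1987
print(max(y for y in years if consecutive(y)))                 # 1432
print(max(y for y in years if digits(y) == digits(2013)))      # 1320
```

The last line shows the most recent year before 2013 built from exactly the digits 0, 1, 2, 3 is 1320.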

Digital computers do not work better than analog ones, it is just that with a digital computer you get repeatable, and thus reproducible, trials.

2013 isn’t prime: 3*11*61

This seems to be an unjustified supposition. E.g., the property of being prime might make a relatively uninteresting number more interesting than one less than that number.

Furthermore, the interestingness could trail off to a non-zero number (so for some ε all numbers are sufficiently interesting).

There may well be obvious problems with this counterargument (and I recognize that the semi-serious example is not the point).

@Tomas, “discreteness” in Computer Science is ‘just’ a working hypothesis (aka the Church–Turing thesis) about nature; it can be violated at many different levels, both in the principles and in the applications end of Computer Science (which in my opinion is a Natural Science like any other).

But even then, the more you go into “advanced” topics, the more you hear about polynomial-time approximation schemes and soft computing, and continuous numbers very much pop up.

Naive mathematical analogy: the Mandelbrot set is defined by whether a given recurrence diverges or remains bounded — yes or no. But the really pretty pictures are based on how fast the divergent parts diverge.
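A minimal escape-time sketch of that idea (the iteration cap of 100 is an arbitrary choice): membership is the yes/no answer, while the iteration count at which an orbit escapes supplies the continuous shading.

```python
def escape_count(c, max_iter=100):
    """Iterate z -> z*z + c from z = 0; return the step at which |z|
    exceeds 2, or max_iter if the orbit stays bounded that long."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter

print(escape_count(0j))       # 100: bounded, so 0 is in the set
print(escape_count(-1 + 0j))  # 100: bounded (period-2 orbit)
print(escape_count(1 + 1j))   # 2: escapes quickly; this speed drives the colors
```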

In the natural sciences a similar problem is called “the tyranny of the discontinuous mind”.
