Numerators of harmonic numbers

Harmonic numbers

The nth harmonic number, H_n, is the sum of the reciprocals of the integers up to and including n. For example,

H_4 = 1 + 1/2 + 1/3 + 1/4 = 25/12.

Here’s a curious fact about harmonic numbers, known as Wolstenholme’s theorem:

For a prime p > 3, the numerator of H_{p-1} is divisible by p^2.

The example above shows this for p = 5. In that case, the numerator is not just divisible by p^2, it is p^2, though this doesn’t hold in general. For example, H_{10} = 7381/2520. The numerator 7381 is divisible by 11^2 = 121, but it’s not equal to 121.

Generalized harmonic numbers

The generalized harmonic numbers H_{n,m} are the sums of the reciprocals of the first n positive integers, each raised to the power m. Wolstenholme’s theorem says something about these numbers too:

For a prime p > 3, the numerator of H_{p-1, 2} is divisible by p.

For example, H_{4,2} = 205/144, and the numerator is clearly divisible by 5.

Computing with Python

You can play with harmonic numbers and generalized harmonic numbers in Python using SymPy. Start with the import statement

from sympy.functions.combinatorial.numbers import harmonic

Then you can get the nth harmonic number with harmonic(n) and generalized harmonic numbers with harmonic(n, m).

To extract the numerators, you can use the method as_numer_denom to turn the fractions into (numerator, denominator) pairs. For example, you can create a list of the numerators of the first 10 harmonic numbers with

[harmonic(n).as_numer_denom()[0] for n in range(10)]
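
For instance, here’s a quick numerical check of Wolstenholme’s theorem and its generalized counterpart. This is just a sketch; it assumes SymPy’s isprime is available alongside harmonic.

from sympy import isprime
from sympy.functions.combinatorial.numbers import harmonic

# For each prime p > 3, the numerator of H_{p-1} should be divisible by p^2
# and the numerator of H_{p-1, 2} should be divisible by p.
for p in range(5, 50):
    if isprime(p):
        num1 = harmonic(p - 1).as_numer_denom()[0]
        num2 = harmonic(p - 1, 2).as_numer_denom()[0]
        print(p, num1 % p**2 == 0, num2 % p == 0)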

What about 0?

You might notice that harmonic(0) returns 0, as it should. The sum defining the harmonic numbers is empty in this case, and empty sums are defined to be zero.

When the last digits of powers don’t change

If you raise any integer to the fifth power, its last digit doesn’t change. For example, 2^5 = 32.

It’s easy to prove this assertion by brute force. Since the last digit of b^n only depends on the last digit of b, it’s enough to verify that the statement above holds for 0, 1, 2, …, 9.
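
Here’s that brute force check in one line of Python, a quick sketch:

# The last digit of d^5 equals d for every possible last digit d.
print(all(pow(d, 5, 10) == d for d in range(10)))   # True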

Although that’s a valid proof, it’s not very satisfying. Why fifth powers? We’re implicitly working in base 10, as humans usually do, so what is the relation between 5 and 10 in this context? How might we generalize it to other bases?

The key is that 5 = φ(10) + 1 where φ(n) is the number of positive integers less than n and relatively prime to n.

Euler discovered the φ function and proved that if a and m are relatively prime, then

a^φ(m) = 1 (mod m)

This means that a^φ(m) − 1 is divisible by m. (Technically I should use the symbol ≡ (U+2261) rather than = above since the statement is a congruence, not an equation. But since Unicode font support on various devices is disappointing, not everyone could see the proper symbol.)

Euler’s theorem tells us that if a is relatively prime to 10 then a^4 ends in 1, so a^5 ends in the same digit as a. That proves our original statement for numbers ending in 1, 3, 7, and 9. But what about the rest, numbers that are divisible by 2 or 5? We’ll answer that question and a slightly more general one at the same time. Let m = αβ where α and β are distinct primes. In particular, we could choose α = 2 and β = 5. We will show that

a^{φ(m)+1} = a (mod m)

for all a, whether relatively prime to m or not. This would show in addition, for example, that in base 15, every number keeps the same last “digit” when raised to the 9th power since φ(15) = 8.
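
Here’s a quick numerical confirmation of that claim for bases 10 and 15, a sketch assuming SymPy’s totient function:

from sympy import totient

for m in [10, 15]:
    k = int(totient(m)) + 1
    # every residue keeps its last "digit" base m when raised to the kth power
    print(m, k, all(pow(a, k, m) == a % m for a in range(m)))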

We only need to be concerned with the case of a being a multiple of α or β since Euler’s theorem proves our theorem for the rest of the cases. If a = αβ our theorem obviously holds, so assume a is some multiple of α, a = kα with k less than β (and hence relatively prime to β).

We need to show that αβ divides (kα)^{φ(αβ)+1} − kα. This expression is clearly divisible by α, so the remaining task is to show that it is divisible by β. We’ll show that (kα)^{φ(αβ)} − 1 is divisible by β.

Since α and k are relatively prime to β, Euler’s theorem tells us that α^{φ(β)} and k^{φ(β)} are congruent to 1 (mod β). This implies that (kα)^{φ(β)} is congruent to 1, and so (kα)^{φ(α)φ(β)} is also congruent to 1 (mod β). One of the basic properties of φ is that for relatively prime arguments α and β, φ(αβ) = φ(α)φ(β), and so we’re done.

Exercise: How much more can you generalize this? Does the base have to be the product of two distinct primes?

Fibonacci number system

Every positive integer can be written as the sum of distinct Fibonacci numbers. For example, 10 = 8 + 2, the sum of the fifth Fibonacci number and the second.

This decomposition is unique if you impose the extra requirement that consecutive Fibonacci numbers are not allowed. [1] It’s easy to see that the rule against consecutive Fibonacci numbers is necessary for uniqueness. It’s not as easy to see that the rule is sufficient.

Every Fibonacci number is itself the sum of two consecutive Fibonacci numbers—that’s how they’re defined—so clearly there are at least two ways to write a Fibonacci number as the sum of Fibonacci numbers, either just itself or its two predecessors. In the example above, 8 = 5 + 3 and so you could write 10 as 5 + 3 + 2.

The nth Fibonacci number is approximately φ^n/√5 where φ = 1.618… is the golden ratio. So you could think of a Fibonacci sum representation for x as roughly a base φ representation for √5 x.

You can find the Fibonacci representation of a number x using a greedy algorithm: Subtract the largest Fibonacci number from x that you can, then subtract the largest Fibonacci number you can from the remainder, etc.

Programming exercise: How would you implement a function that finds the largest Fibonacci number less than or equal to its input? Once you have this it’s easy to write a program to find Fibonacci representations.
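
Here’s one possible sketch of a solution: grow the Fibonacci sequence until it passes the input, then apply the greedy step repeatedly. The function name and details are my own choices.

def fib_rep(x):
    # Greedy algorithm: repeatedly subtract the largest Fibonacci number <= x.
    rep = []
    while x > 0:
        a, b = 1, 2
        while b <= x:
            a, b = b, a + b
        rep.append(a)   # a is the largest Fibonacci number <= x
        x -= a
    return rep

print(fib_rep(10))   # [8, 2]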

* * *

[1] This is known as Zeckendorf’s theorem, published by E. Zeckendorf in 1972. However, C. G. Lekkerkerker had published the same result 20 years earlier.

Counting primitive bit strings

A string of bits is called primitive if it is not the repetition of several copies of a smaller string of bits. For example, the string 101101 is not primitive because it can be broken down into two copies of the string 101. In Python notation, you could produce 101101 by "101"*2. The string 11001101, on the other hand, is primitive. (It contains substrings that are not primitive, but the string as a whole cannot be factored into multiple copies of a single string.)

For a given n, let’s count how many primitive bit strings there are of length n. Call this f(n). There are 2^n bit strings of length n, and f(n) of these are primitive. For example, there are f(12) primitive bit strings of length 12. The strings that are not primitive are made of copies of primitive strings: two copies of a primitive string of length 6, three copies of a primitive string of length 4, etc. This says

 2^{12} = f(12) + f(6) + f(4) + f(3) + f(2) + f(1)

and in general

2^n = \sum_{d \mid n} f(d)

Here the sum is over all positive integers d that divide n.

Unfortunately this formula is backward. It gives us a formula for something well known, 2^n, as a sum of things we’re trying to calculate. The Möbius inversion formula is just what we need to turn this formula around so that the new thing is on the left and sums of old things are on the right. It tells us that

f(n) = \sum_{d \mid n} \mu\left(\frac{n}{d}\right) 2^d

where μ is the Möbius function.

We could compute f(n) with Python as follows:

from sympy.ntheory import mobius, divisors

def num_primitive(n):
    # Mobius inversion: f(n) = sum over divisors d of n of mu(n/d) * 2^d
    return sum( mobius(n//d)*2**d for d in divisors(n) )

The latest version of SymPy, version 0.7.6, comes with a function mobius for computing the Möbius function. If you’re using an earlier version of SymPy, you can roll your own mobius function:

from sympy.ntheory import factorint

def mobius(n):
    # exponents in the prime factorization of n
    exponents = factorint(n).values()
    # number of distinct prime factors
    lenexp = len(exponents)
    m = 0 if lenexp == 0 else max(exponents)
    # mu(n) is 0 if n has a repeated prime factor,
    # otherwise (-1)^(number of distinct prime factors)
    return 0 if m > 1 else (-1)**lenexp

The version of mobius that comes with SymPy 0.7.6 may be more efficient. It could, for example, stop the factorization process early if it discovers a square factor.
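
As a sanity check on the code above, the values f(d) summed over the divisors of 12 should add back up to 2^12:

print(num_primitive(12))                                      # 4020
print(sum(num_primitive(d) for d in divisors(12)) == 2**12)   # True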

The great reformulation of algebraic geometry

“Tate helped shape the great reformulation of arithmetic and geometry which has taken place since the 1950’s.” — Andrew Wiles

At the Heidelberg Laureate Forum I had a chance to interview John Tate. In his remarks below, Tate briefly comments on his early work on number theory and cohomology. Most of the post consists of his comments on the work of Alexander Grothendieck.

* * *

JT: My first significant work after my thesis was to determine the cohomology groups of class field theory. The creators of the theory, including my thesis advisor Emil Artin, didn’t think in terms of cohomology, but their work could be interpreted as finding the cohomology groups H^0, H^1, and H^2.

I was invited to give a series of three talks at MIT on class field theory. I’d been at a party, and I came home and thought about what I’d talk about. And I got this great idea: I realized I could say what all the higher groups are. In a sense it was a disappointing answer, though it didn’t occur to me then, that there’s nothing new in them; they were determined by the great work that had already been done. For that I got the Cole prize in number theory.

Later when I gave a talk on this work people would say “This is number theory?!” because it was all about cohomology groups.

JC: Can you explain what the great reformulation was that Andrew Wiles spoke of? Was it this greater emphasis on cohomology?

JT: Well, in the class field theory situation it would have been. And there I played a relatively minor part. The big reformulation of algebraic geometry was done by Grothendieck, the theory of schemes. That was really such a great thing, that unified number theory and algebraic geometry. Before Grothendieck, going between characteristic 0, finite characteristic 2, 3, etc. was a mess.

Grothendieck’s system just gave the right framework. We now speak of arithmetic algebraic geometry, which means studying problems in number theory by using your geometric intuition. The perfect background for that is the theory of schemes. ….

Grothendieck’s ideas [about sheaves] were so simple. People had looked at such things in particular cases: Dedekind rings, Noetherian rings, Krull rings, …. Grothendieck said take any ring. … He just had an instinct for the right degree of generality. Some people make things too general, and they’re not of any use. But he just had an instinct to put whatever theory he thought about in the most general setting that was still useful. Not generalization for generalization’s sake but the right generalization. He was unbelievable.

He started schemes about the time I got serious about algebraic geometry, as opposed to number theory. But the algebraic geometers classically had affine varieties, projective varieties, … It seemed kinda weird to me. But with schemes you had a category, and that immediately appealed to me. In the classical algebraic geometry there are all these birational maps, or rational maps, and they’re not defined everywhere because they have singularities. All of that was cleared up immediately from the outset with schemes. ….

There’s a classical algebraic geometer at Harvard, Joe Harris, who works mostly over the complex numbers. I asked him whether Grothendieck made much of a difference in the classical case — I knew for number theorists he had made a tremendous difference — and Joe Harris said yes indeed. It was a revolution for classical algebraic geometry too.

Multiple zeta

The Riemann zeta function, introduced by Leonhard Euler, is defined by

\zeta(k) = \sum n^{-k}

where the sum is over all positive integers n.

Euler also introduced a multivariate generalization of the zeta function

\zeta(k_1, \ldots, k_r) = \sum n_1^{-k_1}\cdots n_r^{-k_r}

where the sum is over all decreasing r-tuples of positive integers. This generalized zeta function satisfies the following beautiful identity:

 \zeta(a)\,\zeta(b) = \zeta(a, b) + \zeta(b, a) + \zeta(a+b)

The multivariate zeta function and identities such as the one above are important in number theory and are the subject of open conjectures.
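
You can see the identity numerically. If every sum is truncated at the same cutoff N, the product on the left splits exactly into the three pieces on the right: the terms with n_1 > n_2, the terms with n_1 < n_2, and the diagonal terms. That split is essentially why the identity holds. A small sketch:

N = 500
a, b = 3, 2
za  = sum(n**-a for n in range(1, N+1))
zb  = sum(n**-b for n in range(1, N+1))
zab = sum(n1**-a * n2**-b for n1 in range(1, N+1) for n2 in range(1, n1))
zba = sum(n1**-b * n2**-a for n1 in range(1, N+1) for n2 in range(1, n1))
zs  = sum(n**-(a + b) for n in range(1, N+1))
print(za*zb, zab + zba + zs)   # the two values agree to floating point accuracy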


Number theory determinant and SymPy

Let σ(n) be the sum of the positive divisors of n and let gcd(a, b) be the greatest common divisor of a and b.

Form an n by n matrix M whose (i, j) entry is σ(gcd(i, j)). Then the determinant of M is n!.

The following code shows that the theorem is true for a few values of n and shows how to do some common number theory calculations in SymPy.

from sympy import gcd, divisors, Matrix, factorial

def f(i, j):
    return sum( divisors( gcd(i, j) ) )

def test(n):
    r = range(1, n+1)
    M = Matrix( [ [f(i, j) for j in r] for i in r] )
    return M.det() - factorial(n)

for n in range(1, 11):
    print(test(n))

As expected, the test function returns zeros.

If we replace the function σ above by τ where τ(n) is the number of positive divisors of n, the corresponding determinant is 1. To test this, replace sum by len in the definition of f and replace factorial(n) by 1.
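
Concretely, the modified test looks like this. (The names f2 and test2 are mine, just to keep the two versions distinct.)

def f2(i, j):
    return len( divisors( gcd(i, j) ) )

def test2(n):
    r = range(1, n+1)
    M = Matrix( [ [f2(i, j) for j in r] for i in r] )
    return M.det() - 1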

In case you’re curious, both results are special cases of the following more general theorem. I don’t know whose theorem it is. I found it here.

For any arithmetic function f(m), let g(m) be defined for all positive integers m by

g(m) = \sum_{d \,\mid \,m} \mu(d) f\left(\frac{m}{d}\right)

Let M be the square matrix of order n with (i, j) element f(gcd(i, j)). Then

\det M = \prod_{j=1}^n g(j)

Here μ is the Möbius function. The two special cases above correspond to g(m) = m and g(m) = 1.
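
Here’s a quick numerical check of the general statement for an arbitrary arithmetic function, say f(m) = m^2 + 1. This is a sketch, and there’s nothing special about that choice of f; the helper names are my own.

from math import prod
from sympy import gcd, Matrix
from sympy.ntheory import mobius, divisors

def g(f, m):
    return sum(mobius(d) * f(m // d) for d in divisors(m))

def check(f, n):
    r = range(1, n + 1)
    M = Matrix([[f(gcd(i, j)) for j in r] for i in r])
    return M.det() == prod(g(f, j) for j in r)

# Try it for f(m) = m^2 + 1 and a few matrix sizes
print(all(check(lambda m: m**2 + 1, n) for n in range(1, 8)))   # True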

A prime-generating formula and SymPy

Mills’ constant is a number θ such that the integer part of θ^(3^n) is prime for every positive integer n. We’ll see if we can verify this computationally with SymPy.

from sympy import floor, isprime
from mpmath import mp, mpf  # older SymPy versions bundled mpmath: from sympy.mpmath import mp, mpf

# set working precision to 200 decimal places
mp.dps = 200

# Mills' constant http://oeis.org/A051021
theta = mpf("1.30637788386308069046861449260260571291678458515671364436805375996643405376682659882150140370119739570729")

for i in range(1, 7):
    n = floor(theta**3**i)
    print(i, n, isprime(n))

Note that the line of code defining theta wraps Mills’ constant as a string and passes it as an argument to mpf. If we just wrote theta = 1.30637788386308… then theta would be truncated to an ordinary floating point number and most of the digits would be lost. Not that I did that in the first version of the code. :)

This code says that the first five outputs are prime but that the sixth is not. That’s because we are not using Mills’ constant to enough precision. The integer part of θ^(3^6) is 85 digits long, and we don’t have enough precision to compute the last digit correctly.

If we use the value of Mills’ constant given here to 640 decimal places, and increase mp.dps to 1000, we can correctly compute the sixth term, but not the seventh.

Mills’ formula is just a curiosity with no practical use that I’m aware of, except possibly as an illustration of impractical formulas.

Fermat’s proof of his last theorem

Fermat famously claimed to have a proof of his last theorem that he didn’t have room to write down. Mathematicians have speculated ever since what this proof must have been, though everyone is convinced the proof must have been wrong.

The usual argument for Fermat being wrong is that since it took over 350 years, and some very sophisticated mathematics, to prove the theorem, it’s highly unlikely that Fermat had a simple proof. That’s a reasonable argument, but somewhat unsatisfying because it’s risky business to speculate on what a proof must require. Who knows how complex the proof of FLT in The Book is?

André Weil offers what I find to be a more satisfying argument that Fermat did not have a proof, based on our knowledge of Fermat himself. Dana Mackenzie summarizes Weil’s argument as follows.

Fermat repeatedly bragged about the n = 3 and n = 4 cases and posed them as challenges to other mathematicians … But he never mentioned the general case, n = 5 and higher, in any of his letters. Why such restraint? Most likely, Weil argues, because Fermat had realized that his “truly wonderful proof” did not work in those cases.

Dana comments:

Every mathematician has had days like this. You think you have a great insight, but then you go out for a walk, or you come back to the problem the next day, and you realize that your great idea has a flaw. Sometimes you can go back and fix it. And sometimes you can’t.

The quotes above come from The Universe in Zero Words. I met Dana Mackenzie in Heidelberg a few weeks ago, and when I came home I looked for this book and his book on the formation of the moon, The Big Splat.


Prime-generating fractions

I posted a couple prime-generating fractions on Google+ this weekend and said that I’d post an explanation later. Here it goes.

The decimal expansion of 18966017 / 997002999 is

.019 023 029 037 047 059 073 089 107 127 149 173 199 227 257 …

where each group of three digits is a prime number.

The fraction 4099200041 / 999700029999 produces primes in groups of four digits:

.0041 0043 0047 0053 0061 0071 0083 0097 0113 0131 0151 0173 0197 0223 0251 0281 0313 0347 0383 0421 0461 0503 0547 0593 0641 0691 0743 0797 0853 0911 0971 1033 1097 1163 1231 1301 1373 1447 1523 1601…

What’s going on here? Both fractions come from infinite sums involving prime-generating polynomials. No nonconstant polynomial in one variable will generate only primes, but several do produce a long sequence of primes.

Euler discovered in 1772 that n^2 − n + 41 is prime for values of n from -39 to 40. The second fraction above comes from Euler’s polynomial:

\frac{4099200041}{999700029999} = \sum_{n=1}^\infty \frac{n^2 - n +41}{10^{4n}}

The digits in the decimal expansion stop being prime after the 40th quartet because Euler’s polynomial isn’t prime for n = 41.

The first fraction above comes from a similar polynomial discovered by Legendre, n^2 + n + 17. It produces primes for n = -16 to 15.

\frac{18966017}{997002999} = \sum_{n=1}^\infty \frac{n^2 + n +17}{10^{3n}}

You can make your own prime-generating fraction by finding a prime-generating polynomial and using the process above. Euler’s polynomial produces 4-digit primes before becoming composite, so the denominator in the sum is 10^{4n}. Legendre’s polynomial produces 3-digit primes, so the corresponding denominator in the sum is 10^{3n}.
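
Here’s a short way to peel the four-digit groups off Euler’s fraction with integer arithmetic and confirm that they match the polynomial, a sketch using SymPy’s isprime:

from sympy import isprime

num, den = 4099200041, 999700029999
r = num
for n in range(1, 11):
    r *= 10**4
    group, r = divmod(r, den)
    # each group should equal n^2 - n + 41, which is prime for n up to 40
    print(n, group, group == n*n - n + 41, isprime(group))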


Gelfand’s question

Gelfand’s question asks whether there is a positive integer n such that the first digits of j^n base 10 are all the same for j = 2, 3, 4, …, 9. (Thanks to @republicofmath for pointing out this problem.) This post will explore Gelfand’s question via probability.

The MathWorld article on Gelfand’s question says that the answer is no for values of n less than 100,000. That range seemed small to me. My intuition was that you’d need to try larger values of n to have a reasonable chance of finding a solution.
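
That claim is easy to reproduce. The leading digit of j^n is determined by the fractional part of n log10 j, so a quick sketch looks like this. (Floating point is plenty accurate for leading digits at this scale, though in principle a value extremely close to a power of 10 could be misclassified; the function name is my own.)

from math import log10

def lead(j, n):
    # leading digit of j**n, from the fractional part of n*log10(j)
    return int(10 ** ((n * log10(j)) % 1))

hits = [n for n in range(1, 100000)
        if len({lead(j, n) for j in range(2, 10)}) == 1]
print(hits)   # [] -- no solutions below 100,000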

So how big a value of n should you explore? To answer that question, consider picking n at random. If the leading digits of j^n were randomly distributed, what would the chances be that n would satisfy Gelfand’s question? Leading digits are not random, but they do often have regular distributions.

Suppose the leading digits are uniformly distributed among 1 to 9, and that the leading digits for each base are independent. Then the probability of j^n beginning with 1 (or any other chosen digit) for all j from 2 to 9 would be (1/9)^8, and the probability of all leading digits matching any digit would be 9 times larger. That would say the probability of all leading digits matching would be on the order of 2 × 10^-7, so we should try on the order of 10^7 or 10^8 values of n to have a decent chance of finding one.

But there’s a problem with the above argument: leading digits are not uniformly distributed. Benford’s law says that leading digits are often distributed very unevenly. When a system follows Benford’s law, the probability of a leading digit being d is log10(1 + 1/d). An uneven distribution of leading digits raises the probability that all the digits will match. If the leading digits of j^n follow Benford’s law, and are independent, then the probability of all matching is 6.84 × 10^-5, two orders of magnitude higher than assuming a uniform distribution.

Do the leading digits of j^n follow Benford’s law? Yes, at least approximately, based on testing values of n from 1 to 1,000.

Now suppose the probability of a randomly selected value of n satisfying Gelfand’s question is p = 6.84 × 10^-5. If we tried 100,000 random values of n, the probability of having no successes would be about 0.00107. So my intuition was wrong. If the random heuristic given here were correct, we’d stand a good chance of seeing a solution if we tried values of n up to 100,000.

Where does the probabilistic heuristic go wrong? In the assumption that the distributions of the leading digits are independent.

Let x_j be the sequence of leading digits of j^n for n running from 1 to 1000. Some of the x_j are correlated and some are not.

Some of the correlations are easy to see. For example, the sequences for x_2 and x_4 have correlation 0.493. That’s because 4^n = (2^n)^2. So if you know the first digit of 2^n, you can often guess the first digit of 4^n. In light of this, it’s not surprising that x_2 is also correlated with x_8, and x_3 is correlated with x_9.

You might speculate that x_j and x_k are uncorrelated if and only if j and k are relatively prime. Some sequences, such as x_2 and x_3, support that guess. But x_4 and x_5 are negatively correlated, r = -0.433, for reasons that are not obvious. I suspect that the negative correlations are the key to understanding the question.
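
Here’s one way to compute such correlations, reusing the lead function from the sketch above. This assumes NumPy, and the exact values depend on how you encode the sequences, so they may differ slightly from the numbers quoted above.

import numpy as np

x = {j: np.array([lead(j, n) for n in range(1, 1001)]) for j in range(2, 10)}
print(np.corrcoef(x[2], x[4])[0, 1])   # correlation between leading digits of 2^n and 4^n
print(np.corrcoef(x[4], x[5])[0, 1])   # correlation between leading digits of 4^n and 5^n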

* * *

Update: I have verified that the answer to Gelfand’s question is no for n up to 10^10.

Synchronizing cicadas with Python

Suppose you want to know when your great-grandmother was born. You can’t find the year recorded anywhere. But you did discover an undated letter from her father that mentions her birth and one curious detail:  the 13-year and 17-year cicadas were swarming.

You do a little research and find that the 13-year cicadas are supposed to come out next year, and that the 17-year cicadas came out last year. When was your great-grandmother born?

Since 13 and 17 are relatively prime, the 13-year and 17-year cicadas will synchronize their schedules every 13 × 17 = 221 years. Suppose your great-grandmother was born n years ago. The remainder when n is divided by 13 must be 12, and the remainder when n is divided by 17 must be 1. We have to solve the pair of congruences n = 12 mod 13 and n = 1 mod 17. The Chinese Remainder Theorem says that this pair of congruences has a solution, and the proof of the theorem suggests an algorithm for computing the solution.

The Python library SymPy has a function for solving linear congruences.

>>> from sympy.ntheory.modular import solve_congruence
>>> solve_congruence( (12, 13), (1, 17) )
(103, 221)

This says we’re 103 years into the joint cycle of the 13-year and 17-year cicadas. So your great-grandmother was born 103 years ago. (The solution 324 = 103 + 221 is also mathematically possible, but not biologically possible.)

You can use the same SymPy function to solve systems of more congruences. For example, when is the next year in which there will be summer Olympic games and the 13-year and 17-year cicadas will swarm? Here are a couple ways to approach this. First, you could find the last time this happened, then find when it will happen next. You’d need to solve n = 1 mod 4 (since we had summer Olympics last year) and n = 12 mod 13 and n = 1 mod 17.

>>> solve_congruence( (1, 4), (12, 13), (1, 17) )
(545, 884)

So the cicadas and the summer Olympics were in sync 545 years ago. (Well, they would have been if the modern Olympics had existed in the middle ages.) This says they’ll be in sync again in 884 − 545 = 339 years.

Here’s a more direct approach. We want to know when the summer Olympics will be 3 years ahead of where they are now in the cycle, when the 13-year cicadas will be 1 year ahead, and the 17-year cicadas will be 16 years ahead.

>>> solve_congruence( (3, 4), (1, 13), (16, 17) )
(339, 884)

By the way, you can use negative integers with the congruences, so you could have used (-1, 17) to say the 17-year cicadas will be 1 year back instead of 16 years ahead in their cycle.

Searching for Perrin pseudoprimes

A week ago I wrote about Perrin numbers, numbers P_n defined by a recurrence relation similar to Fibonacci numbers. If n is prime, P_n mod n = 0, and the converse is nearly always true. That is, if P_n mod n = 0, n is usually prime. The exceptions are called Perrin pseudoprimes.

Matt McIrvin wrote an excellent post explaining how to compute Perrin pseudoprimes. Here I’m just going to elaborate on a couple points in his post.

Matt’s first point is that if you want to search for Perrin pseudoprimes, the most direct approach won’t get you very far. The obvious thing to do is compute P_n and then see whether it has remainder 0 when you divide by n. The problem is that P_n grows exponentially with n. In fact, P_n is approximately ρ^n where ρ = 1.3247… is the plastic number. This means that P_n has about n log10 ρ digits. So searching for pseudoprimes less than one billion would require working with numbers with over 100,000,000 digits. This can be done, but it’s slow and unnecessary.

Since the goal is to compute P_n mod n rather than P_n per se, we can carry out all calculations mod n and avoid extended precision arithmetic as long as n itself can fit in an ordinary precision integer. If we want to find pseudoprimes less than one billion, we calculate P_n mod n for each n up to N = 10^9. This only requires ordinary arithmetic.

However, this approach takes O(N^2) time unless we’re clever. We have to compute P_n mod n separately for each n, and the most direct approach takes n steps. This leads to Matt’s second point: use matrix multiplication (mod n) to calculate P_n mod n. This requires calculating the nth power of a 3×3 matrix, which can be done in O(log n) time using fast exponentiation. This makes the search for pseudoprimes less than N require O(N log N) rather than O(N^2) time. This is enough to make the search for pseudoprimes less than a billion practical.
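
Here’s a minimal sketch of that approach in pure Python: one step of the Perrin recurrence becomes a 3×3 matrix, and we raise it to the nth power mod n by repeated squaring. The helper names are my own; SymPy isn’t needed here.

def perrin_mod(n):
    """Compute P_n mod n with fast 3x3 matrix exponentiation."""
    def mul(A, B):
        # multiply two 3x3 matrices, reducing entries mod n
        return [[sum(A[i][k] * B[k][j] for k in range(3)) % n
                 for j in range(3)] for i in range(3)]
    # M maps (P_k, P_{k+1}, P_{k+2}) to (P_{k+1}, P_{k+2}, P_{k+3})
    M = [[0, 1, 0], [0, 0, 1], [1, 1, 0]]
    R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # identity
    e = n
    while e:
        if e & 1:
            R = mul(R, M)
        M = mul(M, M)
        e >>= 1
    # apply M^n to the initial values (P_0, P_1, P_2) = (3, 0, 2)
    return (3*R[0][0] + 0*R[0][1] + 2*R[0][2]) % n

# Below 30, the n with P_n mod n = 0 are exactly the primes
print([n for n in range(2, 30) if perrin_mod(n) == 0])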

Almost if and only if

The Perrin numbers have a definition analogous to Fibonacci numbers. Define P_0 = 3, P_1 = 0, and P_2 = 2. Then for n ≥ 0, define

P_{n+3} = P_{n+1} + P_n.

The Concrete Tetrahedron says

It appears that n is prime “almost if and only if” P_n mod n = 0.

The “only if” condition is true without qualification: if n is prime, P_n mod n = 0. It’s the “if” part that’s almost true. When P_n mod n = 0, n is usually prime. Composite numbers that satisfy the Perrin condition P_n mod n = 0 are called Perrin pseudoprimes. The smallest Perrin pseudoprime is 271,441. The next is 904,631.
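
You can check the small cases directly by running the recurrence mod n. Here’s a sketch; for large n you’d want a fast matrix-power approach rather than this linear-time loop, and the function name is my own.

from sympy import isprime

def perrin_condition(n):
    # run P_{k+3} = P_{k+1} + P_k for n steps, reducing mod n
    a, b, c = 3, 0, 2        # P_0, P_1, P_2
    for _ in range(n):
        a, b, c = b, c, (a + b) % n
    return a % n == 0

# Below 1000 the Perrin condition picks out exactly the primes,
# since the smallest pseudoprime is 271,441.
print(all(perrin_condition(n) == isprime(n) for n in range(2, 1000)))   # True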

There are only 17 Perrin pseudoprimes less than a billion. By comparison, there are 50,847,534 primes less than a billion.

So if you used the Perrin condition to test whether numbers less than a billion are prime, you would correctly identify all 50,847,534 primes as primes. But out of the 949,152,466 composite numbers, you would falsely report 17 of these as prime.  In other words, you would be 100% accurate in identifying primes as primes, but only 99.999998% accurate in identifying composite numbers as composite.


Finding 2013 in pi

My youngest daughter asked me this morning whether you can find the number 2013 in the digits of pi. I said it must be possible, then wrote the following Python code to find where 2013 first appears.

from mpmath import mp
mp.dps = 100000              # 100,000 significant figures
digits = str(mp.pi)[2:]      # drop the leading "3."
print(digits.find('2013'))

I use the multi-precision math package mpmath to get pi to 100,000 significant figures. I save this as a string and cut off the “3.” at the beginning to have just the string of digits after the decimal point.

The code returns 6275, so “2013” appears at position 6275 in the string of digits. However, we usually count decimal places starting from 1 but count positions in a string starting from 0, so 2013 starts in the 6276th decimal place of pi.

So π = 3.14159…2013… where the first “…” represents 6,270 digits not shown.

* * *

Now we jump off into deeper mathematical water.

For some purposes, the digits of pi are random. The digits are obviously not random — there are algorithms for calculating them — and yet they behave randomly, and random is as random does.

If the digits of pi were random, then we could almost certainly find any sequence we want if we look long enough. Can we find any finite sequence of digits in the decimal expansion of pi? I would assume so, but I don’t know whether that has been proven.

You might expect that not only can you find 2013 in pi, but that if you split the digits of pi into blocks of 4, then 2013 and every other particular block would occur with the same frequency in the limit. That is, one would expect that the expansion of pi is uniform in base 10,000.

More generally, you might conjecture that pi is a normal number, i.e. that the digits of pi are uniformly distributed in every base. This has not been proven. In fact, no naturally occurring number, such as pi or e, has been proved to be normal. However, we do know that almost all numbers are normal. That is, the set of non-normal numbers has Lebesgue measure zero.