Graphs and square roots modulo a prime

Imagine a clock with a prime number of hours. So instead of being divided into 12 hours, it might be divided into 11 or 13, for example. You can add numbers on this clock the way you’d add numbers on an ordinary clock: add the two numbers as ordinary integers and take the remainder by p, the number of hours on the clock. You can multiply them the same way: multiply as ordinary integers, then take the remainder by p. This is called arithmetic modulo p, or just mod p.

Which numbers have square roots mod p? That is, for which numbers x does there exist a number y such that y^2 = x mod p? It turns out that of the non-zero numbers, half have a square root and half don’t. The numbers that have square roots are called quadratic residues and the ones that do not have square roots are called quadratic nonresidues. Zero is usually left out of the conversation. For example, if you square the numbers 1 through 6 and take the remainder mod 7, you get 1, 4, 2, 2, 4, and 1. So mod 7 the quadratic residues are 1, 2, and 4, and the nonresidues are 3, 5, and 6.

A simple, brute force way to tell whether a number is a quadratic residue mod p is to square all the numbers from 1 to p-1 and see if any of the results equals your number. There are more efficient algorithms, but this is the easiest to understand.

At first it seems that the quadratic residues and nonresidues are scattered randomly. For example, here are the quadratic residues mod 23:

[1, 2, 3, 4, 6, 8, 9, 12, 13, 16, 18]
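This list is easy to reproduce by brute force, squaring each nonzero number mod 23 and collecting the distinct results. A quick sketch:

# The quadratic residues mod p are the distinct values of i*i mod p for i = 1, ..., p-1.
p = 23
print(sorted({i*i % p for i in range(1, p)}))
# [1, 2, 3, 4, 6, 8, 9, 12, 13, 16, 18]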

But there are patterns, such as the famous quadratic reciprocity theorem. We can see some pattern in the residues by visualizing them on a graph. We make the numbers mod p vertices of a graph, and join two vertices a and b with an edge if a – b is a quadratic residue mod p.

Here’s the resulting graph mod 13:

And here’s the corresponding graph for p = 17:

And here’s Python code to produce these graphs:

import networkx as nx
import matplotlib.pyplot as plt

def residue(x, p):
    # Brute force test: x is a quadratic residue mod p if i*i = x (mod p) for some i in 1..p-1
    for i in range(1, p):
        if (i*i - x) % p == 0:
            return True
    return False

p = 17
G = nx.Graph()
for i in range(p):
    for j in range(i+1, p):
        if residue(i-j, p):
            G.add_edge(i, j)

nx.draw_circular(G)
plt.show()

However, this isn’t the appropriate graph for all values of p. The graphs above are undirected graphs. We’ve implicitly assumed that if there’s an edge between a and b then there should be an edge between b and a. And that’s true for p = 13 or 17, but not, for example, for p = 11. Why’s that?

If p leaves a remainder of 1 when divided by 4, i.e. p = 1 mod 4, then x is a quadratic residue mod p if and only if –x is a quadratic residue mod p. So if a – b is a residue, so is b – a. This means an undirected graph is appropriate. But if p = 3 mod 4, x is a residue mod p if and only if –x is a non-residue mod p. This means we should use directed graphs if p = 3 mod 4. Here’s an example for p = 7.

The code for creating the directed graphs is a little different:

G = nx.DiGraph()
for i in range(p):
    for j in range(p):
        if residue(i-j, p):
            G.add_edge(i, j)

Unfortunately, the way NetworkX draws directed graphs is disappointing. A directed edge from a to b is drawn with a heavy line segment on the b end. Any suggestions for a Python package that draws more attractive directed graphs?

The idea of creating graphs this way came from Roger Cook’s chapter on number theory applications in Graph Connections. (I’m not related to Roger Cook as far as I know.)

You could look at quadratic residues modulo composite numbers too. And I imagine you could also make some interesting graphs using Legendre symbols.


Curious numbers

An n-digit number is said to be curious if the last n digits of its square are the same as the original number. For example, 25^2 = 625 and 76^2 = 5776. (Curious numbers are also known as automorphic numbers.)

There are bigger curious numbers, such as 212890625 and 787109376:

212890625^2 = 45322418212890625

and

787109376^2 = 619541169787109376.

And if the square of x has the same last n digits as x, so does the cube of x and all higher powers.

It turns out that for each n > 1, there are two curious numbers of length n.

Always two there are; no more, no less. — Yoda

There’s even a formula for the two solutions. The first is the remainder when 5^(2^n) is divided by 10^n, and the second is 10^n + 1 minus the first.

Here’s a little Python code to show that the first several solutions really are solutions.

for i in range(2, 20):
    a = 5**(2**i) % 10**i    # first solution from the formula
    b = 10**i - a + 1        # second solution: 10^i + 1 minus the first
    print((a**2 - a)%10**i, (b**2 - b)%10**i)   # both should print 0


New prime number record

Last year I wrote a couple posts about what was then the largest known prime, 2^57885161 – 1. Now there’s a new record: P = 2^74207281 – 1.

For most of the last 500 years, the largest known prime has been a Mersenne prime, a number of the form 2^p – 1 where p is itself prime. Such numbers are not always prime. For example, 2^11 − 1 = 2047 = 23 × 89.

Euclid proved circa 300 BC that if M is Mersenne prime, then M(M + 1)/2 is a perfect number, i.e. it is equal to the sum of the divisors less than itself. Euler proved two millennia later that all even perfect numbers must have this form. Since no odd perfect numbers have been discovered, the discovery of a new Mersenne prime implies the discovery of a new perfect number.
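As a quick sanity check, here’s a sketch verifying Euclid’s construction for the first few Mersenne prime exponents, using SymPy’s divisors function (the composite case 2^11 – 1 gets skipped by the primality test):

from sympy import isprime, divisors

# For each Mersenne prime M = 2^p - 1, Euclid's number M*(M+1)/2 should be perfect,
# i.e. equal to the sum of its divisors other than itself.
for p in [2, 3, 5, 7, 11, 13]:
    M = 2**p - 1
    if isprime(M):
        n = M * (M + 1) // 2
        print(p, n, sum(divisors(n)) - n == n)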

Using the same methods as in the posts from last year, we can say some things about how P appears in various bases.

In binary, P is written as a string of 74,207,281 ones.

In base four, P is a 1 followed by 37,103,640 threes.

In base eight, P is a 1 followed by 24,735,760 sevens.

In base 16, P is a 1 followed by 18,551,820 F’s.

In base 10, P has 22,338,618 digits, the first of which is 3 and the last of which is 1.
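These base 10 facts can be checked without ever writing P out, using logarithms for the digit count and leading digit, and modular exponentiation for the last digit. A sketch:

from math import log10, floor

e = 74207281
x = e * log10(2)                  # log10 of P+1; P has the same digit count and leading digit
print(floor(x) + 1)               # number of digits: 22338618
print(int(10 ** (x - floor(x))))  # leading digit: 3
print((pow(2, e, 10) - 1) % 10)   # last digit: 1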


Alternating sums of factorials

Richard Guy’s Strong Law of Small Numbers says

There aren’t enough small numbers to meet the many demands made of them.

In his article by the same name [1] Guy illustrates his law with several examples of patterns that hold for small numbers but eventually fail. One of these examples is

3! – 2! + 1! = 5

4! – 3! + 2! – 1! = 19

5! – 4! + 3! – 2! + 1! = 101

6! – 5! + 4! – 3! + 2! – 1! = 619

7! – 6! + 5! – 4! + 3! – 2! + 1! = 4421

8! – 7! + 6! – 5! + 4! – 3! + 2! – 1! = 35899

If we let f(n) be the alternating factorial sum starting with n!, then f(n) is prime for n = 3, 4, 5, 6, 7, 8, but not for n = 9. So the alternating sums aren’t all prime. Is f(n) usually prime? f(10) is, so maybe 9 is the odd one. Let’s write some code to find out.

    from sympy import factorial, isprime

    def altfact(n):
        # n! - (n-1)! + (n-2)! - ... ± 1!
        sign = 1
        total = 0
        while n > 0:
            total += sign*factorial(n)
            sign *= -1
            n -= 1
        return total

    numprimes = 0
    for i in range(3, 1000):
        if isprime( altfact(i) ):
            print(i)
            numprimes += 1
    print(numprimes)

You could speed up this code by noticing that

    altfact(n+1) = factorial(n+1) - altfact(n)

and tabulating the values of altfact. The code above corresponds directly to the math, though it takes a little while to run.
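Here’s a sketch of what that tabulated version might look like, building each value from the previous one with the recurrence above:

    from sympy import factorial, isprime

    # altfact(n+1) = (n+1)! - altfact(n), so build the values incrementally.
    count = 0
    a = 1                        # altfact(1) = 1!
    for n in range(2, 1000):
        a = factorial(n) - a     # now a = altfact(n)
        if n >= 3 and isprime(a):
            count += 1
    print(count)                 # 15, matching the count below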

So it turns out the alternating factorial sum is only prime for 15 values less than 1000. In addition to the values of n mentioned above, the other values are 15, 19, 41, 59, 61, 105, 160, and 601.

* * *

[1] The Strong Law of Small Numbers, Richard K. Guy, The American Mathematical Monthly, Vol. 95, No. 8 (Oct., 1988), pp. 697-712.


Fibonacci formula for pi

Here’s an unusual formula for pi based on the product and least common multiple of the first m Fibonacci numbers.

 

\pi = \lim_{m\to\infty} \sqrt{\frac{6 \log F_1 \cdots F_m}{\log \mbox{lcm}( F_1, \ldots, F_m )}}

Unlike the formula I wrote about a few days ago relating Fibonacci numbers and pi, this one is not as simple to prove. The numerator inside the root is easy enough to estimate asymptotically, but estimating the denominator depends on the distribution of primes.
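You can at least watch the convergence numerically. Here’s a rough sketch using SymPy’s fibonacci and Python’s math.lcm (Python 3.9 and later); the convergence is slow, so don’t expect many correct digits:

from math import log, lcm, sqrt
from sympy import fibonacci

m = 500
fibs = [int(fibonacci(k)) for k in range(1, m + 1)]

log_product = sum(log(f) for f in fibs)   # log(F_1 * ... * F_m)
log_lcm = log(lcm(*fibs))                 # log(lcm(F_1, ..., F_m))

print(sqrt(6 * log_product / log_lcm))    # approaches pi as m grows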

Source: Yuri V. Matiyasevich and Richard K. Guy, A new formula for π, American Mathematical Monthly, Vol 93, No. 8 (October 1986), pp. 631-635.

 

Last digit of largest known prime

In my previous post, we looked at the largest known prime, P = 2^57885161 – 1, and how many digits it has in various bases. This post looks at how to find the last digit of P in base b. We assume b < P. (If b = P then the last digit is 0, and if b > P the last digit is P.)

If b is a power of 2, we showed in the previous post that the last digit of P is b-1.

If b is odd, we can find the last digit using Euler’s totient theorem. Let φ(b) be the number of positive integers less than b and relatively prime to b. Then Euler’s theorem tells us that 2^φ(b) ≡ 1 mod b since b is odd. So if r is the remainder when 57885161 is divided by φ(b), then the last digit of Q = P+1 in base b is the same as the last digit of 2^r in base b.

For example, suppose we wanted to know the last digit of P in base 15. Since φ(15) = 8, and the remainder when 57885161 is divided by 8 is 1, the last digit of Q base 15 is 2. So the last digit of P is 1.

So we know how to compute the last digit in base b if b is a power of 2 or odd. Factor out all the powers of 2 from b so that b = 2^k d and d is odd. We can find the last digit base 2^k, and we can find the last digit base d, so can we combine these to find the last digit base b? Yes, that’s exactly what the Chinese Remainder Theorem is for.

To illustrate, suppose we want to find the last digit of P in base 12. P ≡ 3 mod 4 and P ≡ 1 mod 3, so P ≡ 7 mod 12. (The numbers are small enough here to guess. For a systematic approach, see the post mentioned above.) So the last digit of P is 7 in base 12.

If you’d like to write a program to play around with this, you need to be able to compute φ(n). You may have an implementation of this function in your favorite environment. For example, it’s sympy.ntheory.totient in Python. If not, it’s easy to compute φ(n) if you can factor n. You just need to know two things about φ. First, it’s a multiplicative function, meaning that if a and b are relatively prime, then φ(ab) = φ(a) φ(b). Second, if p is a prime, then φ(p^k) = p^k – p^(k-1).
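If you only want to check the answers above, you don’t even need φ: Python’s three-argument pow does modular exponentiation directly, so you can reduce 2^57885161 mod b in one call. A sketch:

def last_digit(e, b):
    # Last digit of 2^e - 1 in base b, computed by modular exponentiation.
    return (pow(2, e, b) - 1) % b

e = 57885161
print(last_digit(e, 15))   # 1, matching the calculation above
print(last_digit(e, 12))   # 7, matching the CRT example
print(last_digit(e, 16))   # 15, i.e. the digit F, consistent with the power-of-2 case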

Update: There’s a new record prime as of January 2016, and it’s also a Mersenne prime. Update of this post here.


Playing with the largest known prime

The largest known prime at the moment is P = 2^57885161 – 1. This means that in binary, the number is a string of 57,885,161 ones.

You can convert binary numbers to other bases that are powers of two, say 2^k, by starting at the right end and converting to the new base in blocks of size k. So in base 4, we’d start at the right end and convert pairs of bits to base 4. Until we get to the left end, all the pairs are 11, and so they all become 3’s in base four. So in base 4, P would be written as a 1 followed by 28,942,580 threes.

Similarly, we could write P in base 8 by starting at the right end and converting triples of bits to base 8. Since 57885161 = 3*19295053 + 2, we’ll have two bits left over when we get to the left end, so in base 8, P would be written as 3 followed by 19,295,053 sevens. And since 57885161 = 4*14471290 + 1, the hexadecimal (base 16) representation of P is a 1 followed by 14,471,290 F’s.

What about base 10, i.e. how many digits does P have? The number of digits in a positive integer n is ⌊log10 n⌋ + 1 where ⌊x⌋ is the greatest integer less than or equal to x. For positive numbers, this just means chop off the decimals and keep the integer part.

We’ll make things slightly easier for a moment and find how many digits P+1 has.

log10(P+1) = log10(2^57885161) = 57885161 log10(2) = 17425169.76…

and so P+1 has 17425170 digits, and so does P. How do we know P and P+1 have the same number of digits? There are several ways you could argue this. One is that P+1 is not a power of 10, so the number of digits couldn’t change between P and P+1.

Update: How many digits does P have in other bases?

We can take the formula above and simply substitute b for 10: The base b representation of a positive integer n has ⌊logb(n)⌋ + 1 digits.

As above, we’d rather work with Q = P+1 than P. The numbers P and Q will have the same number of digits unless Q is a power of the base b. But since Q is a power of 2, this could only happen if b is also a power of two, say b = 2^k, and k is a divisor of 57885161. But 57885161 is prime, so this could only happen if b = 2 or b = Q. We already know the answer in base 2, and if b > P, then P has one digit base b. So we can concentrate on the case 2 < b < Q, in which case P and Q have the same number of digits. The number of digits in Q is then ⌊57885161 logb(2)⌋ + 1.

If b = 2^k, we can generalize the discussion above about bases 4, 8, and 16. Let q and r be the quotient and remainder when 57885161 is divided by k. If r > 0, the first digit is 2^r – 1 and the remaining q digits are b-1. (If r = 0, as in base 2, the number is simply q copies of the digit b-1.)
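Here’s a sketch that evaluates the digit-count formula above for the bases discussed in this post:

from math import floor, log

e = 57885161

def digits(b):
    # number of digits of P = 2^e - 1 in base b, for 2 < b < P
    return floor(e * log(2, b)) + 1

for b in [4, 8, 10, 16]:
    print(b, digits(b))
# prints 28942581, 19295054, 17425170, and 14471291 digits respectively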

Next post: Last digit of largest known prime

Update: There’s a new record prime as of January 2016, and it’s also a Mersenne prime. Update of this post here.


Last digits of Fibonacci numbers

If you write out a sequence of Fibonacci numbers, you can see that the last digits repeat every 60 numbers.

The 61st Fibonacci number is 2504730781961. The 62nd is 4052739537881. Since these end in 1 and 1, the 63rd Fibonacci number must end in 2, etc. and so the pattern starts over.

It’s not obvious that the cycle should have length 60, but it is fairly easy to see that there must be a cycle. There are only 10*10 possibilities for two consecutive digits. Since the Fibonacci numbers are determined by a two-term recurrence, and since the last digit of a sum is determined by the sum of the last digits, the sequence of last digits must repeat eventually. Here “eventually” means after at most 10*10 terms.

Replace “10” by any other base in the paragraph above to show that the sequence of last digits must be cyclic in any base. In base 16, for example, the period is 24. In hexadecimal notation the 25th Fibonacci number is 12511 and the 26th is 1DA31, so the 27th must end in 2, etc.

Here’s a little Python code to find the period of the last digits of Fibonacci numbers working in any base b.

from sympy import fibonacci as f

def period(b):
    # Pisano period: smallest i with F(i) ≡ 0 and F(i+1) ≡ 1 (mod b)
    for i in range(1, b*b+1):
        if f(i)%b == 0 and f(i+1)%b == 1:
            return i

This shows that in base 100 the period is 300. So in base 10 the last two digits repeat every 300 terms.

The period seems to vary erratically with base as shown in the graph below.


Numerators of harmonic numbers

Harmonic numbers

The nth harmonic number, H_n, is the sum of the reciprocals of the integers up to and including n. For example,

H_4 = 1 + 1/2 + 1/3 + 1/4 = 25/12.

Here’s a curious fact about harmonic numbers, known as Wolstenholme’s theorem:

For a prime p > 3, the numerator of H_{p-1} is divisible by p^2.

The example above shows this for p = 5. In that case, the numerator is not just divisible by p^2, it is p^2, though this doesn’t hold in general. For example, H_10 = 7381/2520. The numerator 7381 is divisible by 11^2 = 121, but it’s not equal to 121.

Generalized harmonic numbers

The generalized harmonic numbers H_{n,m} are the sums of the reciprocals of the first n positive integers, each raised to the power m. Wolstenholme’s theorem says something about these numbers too:

For a prime p > 3, the numerator of H_{p-1,2} is divisible by p.

For example, H_{4,2} = 205/144, and the numerator is clearly divisible by 5.

Computing with Python

You can play with harmonic numbers and generalized harmonic numbers in Python using SymPy. Start with the import statement

from sympy.functions.combinatorial.numbers import harmonic

Then you can get the nth harmonic number with harmonic(n) and generalized harmonic numbers with harmonic(n, m).

To extract the numerators, you can use the method as_numer_denom to turn the fractions into (numerator, denominator) pairs. For example, you can create a list of the numerators of the first 10 harmonic numbers with

[harmonic(n).as_numer_denom()[0] for n in range(10)]
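The same tools make it easy to check Wolstenholme’s theorem numerically for small primes. A sketch:

from sympy import primerange
from sympy.functions.combinatorial.numbers import harmonic

for p in primerange(5, 40):
    num1 = harmonic(p - 1).as_numer_denom()[0]      # numerator of H_{p-1}
    num2 = harmonic(p - 1, 2).as_numer_denom()[0]   # numerator of H_{p-1,2}
    print(p, num1 % p**2 == 0, num2 % p == 0)       # both should be True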

What about 0?

You might notice that harmonic(0) returns 0, as it should. The sum defining the harmonic numbers is empty in this case, and empty sums are defined to be zero.

 

When the last digits of powers don’t change

If you raise any integer to the fifth power, its last digit doesn’t change. For example, 2^5 = 32.

It’s easy to prove this assertion by brute force. Since the last digit of b^n only depends on the last digit of b, it’s enough to verify that the statement above holds for 0, 1, 2, …, 9.

Although that’s a valid proof, it’s not very satisfying. Why fifth powers? We’re implicitly working in base 10, as humans usually do, so what is the relation between 5 and 10 in this context? How might we generalize it to other bases?

The key is that 5 = φ(10) + 1 where φ(n) is the number of positive integers less than n and relatively prime to n.

Euler discovered the φ function and proved that if a and m are relatively prime, then

a^φ(m) = 1 (mod m)

This means that a^φ(m) – 1 is divisible by m. (Technically I should use the symbol ≡ (U+2261) rather than = above since the statement is a congruence, not an equation. But since Unicode font support on various devices is disappointing, not everyone could see the proper symbol.)

Euler’s theorem tells us that if a is relatively prime to 10 then a^4 ends in 1, so a^5 ends in the same digit as a. That proves our original statement for numbers ending in 1, 3, 7, and 9. But what about the rest, numbers that are divisible by 2 or 5? We’ll answer that question and a little more general one at the same time. Let m = αβ where α and β are distinct primes. In particular, we could choose α = 2 and β = 5. We will show that

a^(φ(m)+1) = a (mod m)

for all a, whether relatively prime to m or not. This would show in addition, for example, that in base 15, every number keeps the same last “digit” when raised to the 9th power since φ(15) = 8.

We only need to be concerned with the case of a being a multiple of α or β, since Euler’s theorem proves our theorem for the rest of the cases. If a is a multiple of both α and β, then a ≡ 0 (mod m) and our theorem obviously holds, so assume a is some multiple of α, a = kα with k less than β (and hence relatively prime to β).

We need to show that αβ divides (kα)^(φ(αβ)+1) – kα. This expression is clearly divisible by α, so the remaining task is to show that it is divisible by β. We’ll show that (kα)^φ(αβ) – 1 is divisible by β.

Since α and k are relatively prime to β, Euler’s theorem tells us that α^φ(β) and k^φ(β) are congruent to 1 (mod β). This implies that (kα)^φ(β) is congruent to 1, and so (kα)^(φ(α)φ(β)) is also congruent to 1 (mod β). One of the basic properties of φ is that for relatively prime arguments α and β, φ(αβ) = φ(α)φ(β), and so we’re done.
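Here’s a brute-force check of the claim for a few moduli that are products of two distinct primes, using SymPy’s totient. A sketch:

from sympy import totient

# Check that a^(phi(m)+1) == a (mod m) for every residue a, for a few m
# that are products of two distinct primes.
for m in [10, 15, 21, 35]:
    k = int(totient(m)) + 1
    print(m, all(pow(a, k, m) == a for a in range(m)))   # all True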

Exercise: How much more can you generalize this? Does the base have to be the product of two distinct primes?


Fibonacci number system

Every positive integer can be written as the sum of distinct Fibonacci numbers. For example, 10 = 8 + 2, the sum of the fifth Fibonacci number and the second.

This decomposition is unique if you impose the extra requirement that consecutive Fibonacci numbers are not allowed. [1] It’s easy to see that the rule against consecutive Fibonacci numbers is necessary for uniqueness. It’s not as easy to see that the rule is sufficient.

Every Fibonacci number is itself the sum of two consecutive Fibonacci numbers—that’s how they’re defined—so clearly there are at least two ways to write a Fibonacci number as the sum of Fibonacci numbers, either just itself or its two predecessors. In the example above, 8 = 5 + 3 and so you could write 10 as 5 + 3 + 2.

The nth Fibonacci number is approximately φ^n/√5 where φ = 1.618… is the golden ratio. So you could think of a Fibonacci sum representation for x as roughly a base φ representation for √5x.

You can find the Fibonacci representation of a number x using a greedy algorithm: Subtract the largest Fibonacci number from x that you can, then subtract the largest Fibonacci number you can from the remainder, etc.

Programming exercise: How would you implement a function that finds the largest Fibonacci number less than or equal to its input? Once you have this it’s easy to write a program to find Fibonacci representations.
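Here’s one possible sketch along those lines; it finds the largest Fibonacci number not exceeding the input by simply walking up the Fibonacci sequence:

def zeckendorf(x):
    # Greedy algorithm: repeatedly subtract the largest Fibonacci number <= x.
    terms = []
    while x > 0:
        a, b = 1, 2
        while b <= x:          # walk up the Fibonacci sequence
            a, b = b, a + b
        terms.append(a)        # a is now the largest Fibonacci number <= x
        x -= a
    return terms

print(zeckendorf(10))   # [8, 2]
print(zeckendorf(100))  # [89, 8, 3]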

* * *

[1] This is known as Zeckendorf’s theorem, published by E. Zeckendorf in 1972. However, C. G. Lekkerkerker had published the same result 20 years earlier.

Counting primitive bit strings

A string of bits is called primitive if it is not the repetition of several copies of a smaller string of bits. For example, the string 101101 is not primitive because it can be broken down into two copies of the string 101. In Python notation, you could produce 101101 by "101"*2. The string 11001101, on the other hand, is primitive. (It contains substrings that are not primitive, but the string as a whole cannot be factored into multiple copies of a single string.)

For a given n, let’s count how many primitive bit strings there are of length n. Call this f(n). There are 2^n bit strings of length n, and f(n) of these are primitive. For example, there are f(12) primitive bit strings of length 12. The strings that are not primitive are made of copies of primitive strings: two copies of a primitive string of length 6, three copies of a primitive string of length 4, etc. This says

 2^{12} = f(12) + f(6) + f(4) + f(3) + f(2) + f(1)

and in general

2^n = \sum_{d \mid n} f(d)

Here the sum is over all positive integers d that divide n.

Unfortunately this formula is backward. It gives us a formula for something well known, 2^n, as a sum of things we’re trying to calculate. The Möbius inversion formula is just what we need to turn this formula around so that the new thing is on the left and sums of old things are on the right. It tells us that

f(n) = \sum_{d \mid n} \mu\left(\frac{n}{d}\right) 2^d

where μ is the Möbius function.

We could compute f(n) with Python as follows:

from sympy.ntheory import mobius, divisors

def num_primitive(n):
    # Möbius inversion: f(n) = sum of mu(n/d) * 2^d over the divisors d of n
    return sum( mobius(n//d)*2**d for d in divisors(n) )

The latest version of SymPy, version 0.7.6, comes with a function mobius for computing the Möbius function. If you’re using an earlier version of SymPy, you can roll your own mobius function:

from sympy.ntheory import factorint

def mobius(n):
    # mu(n) = 0 if n has a squared prime factor, otherwise (-1)^(number of prime factors)
    exponents = factorint(n).values()
    lenexp = len(exponents)
    m = 0 if lenexp == 0 else max(exponents)
    return 0 if m > 1 else (-1)**lenexp

The version of mobius that comes with SymPy 0.7.6 may be more efficient. It could, for example, stop the factorization process early if it discovers a square factor.


The great reformulation of algebraic geometry

“Tate helped shape the great reformulation of arithmetic and geometry which has taken place since the 1950’s.” — Andrew Wiles

At the Heidelberg Laureate Forum I had a chance to interview John Tate. In his remarks below, Tate briefly comments on his early work on number theory and cohomology. Most of the post consists of his comments on the work of Alexander Grothendieck.

* * *

JT: My first significant work after my thesis was to determine the cohomology groups of class field theory. The creators of the theory, including my thesis advisor Emil Artin, didn’t think in terms of cohomology, but their work could be interpreted as finding the cohomology groups H0, H1, and H2.

I was invited to give a series of three talks at MIT on class field theory. I’d been at a party, and I came home and thought about what I’d talk about. And I got this great idea: I realized I could say what all the higher groups are. In a sense it was a disappointing answer, though it didn’t occur to me then, that there’s nothing new in them; they were determined by the great work that had already been done. For that I got the Cole prize in number theory.

Later when I gave a talk on this work people would say “This is number theory?!” because it was all about cohomology groups.

JC: Can you explain what the great reformulation was that Andrew Wiles spoke of? Was it this greater emphasis on cohomology?

JT: Well, in the class field theory situation it would have been. And there I played a relatively minor part. The big reformulation of algebraic geometry was done by Grothendieck, the theory of schemes. That was really such a great thing, that unified number theory and algebraic geometry. Before Grothendieck, going between characteristic 0, finite characteristic 2, 3, etc. was a mess.

Grothendieck’s system just gave the right framework. We now speak of arithmetic algebraic geometry, which means studying problems in number theory by using your geometric intuition. The perfect background for that is the theory of schemes. ….

Grothendieck’s ideas [about sheaves] were so simple. People had looked at such things in particular cases: Dedekind rings, Noetherian rings, Krull rings, …. Grothendieck said take any ring. … He just had an instinct for the right degree of generality. Some people make things too general, and they’re not of any use. But he just had an instinct to put whatever theory he thought about in the most general setting that was still useful. Not generalization for generalization’s sake but the right generalization. He was unbelievable.

He started schemes about the time I got serious about algebraic geometry, as opposed to number theory. But the algebraic geometers classically had affine varieties, projective varieties, … It seemed kinda weird to me. But with schemes you had a category, and that immediately appealed to me. In the classical algebraic geometry there are all these birational maps, or rational maps, and they’re not defined everywhere because they have singularities. All of that was cleared up immediately from the outset with schemes. ….

There’s a classical algebraic geometer at Harvard, Joe Harris, who works mostly over the complex numbers. I asked him whether Grothendieck made much of a difference in the classical case — I knew for number theorists he had made a tremendous difference — and Joe Harris said yes indeed. It was a revolution for classical algebraic geometry too.


Multiple zeta

The Riemann zeta function, introduced by Leonhard Euler, is defined by

\zeta(k) = \sum n^{-k}

where the sum is over all positive integers n.

Euler also introduced a multivariate generalization of the zeta function

\zeta(k_1, \ldots, k_r) = \sum n_1^{-k_1}\cdots n_r^{-k_r}

where the sum is over all decreasing r-tuples of positive integers, that is, over all n_1 > n_2 > … > n_r ≥ 1. This generalized zeta function satisfies the following beautiful identity:

 \zeta(a)\,\zeta(b) = \zeta(a, b) + \zeta(b, a) + \zeta(a+b)

The multivariate zeta function and identities such as the one above are important in number theory and are the subject of open conjectures.
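You can check the identity numerically with truncated sums. Here’s a rough sketch for a = 3 and b = 4 (keeping both exponents at least 3 keeps the truncation error small):

N = 1000  # truncation point; larger N gives better agreement

def zeta1(a):
    return sum(n**-a for n in range(1, N))

def zeta2(a, b):
    # sum over decreasing pairs n1 > n2 >= 1
    return sum(n1**-a * n2**-b for n1 in range(2, N) for n2 in range(1, n1))

a, b = 3, 4
lhs = zeta1(a) * zeta1(b)
rhs = zeta2(a, b) + zeta2(b, a) + zeta1(a + b)
print(lhs, rhs)   # the two should agree to several decimal places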


Number theory determinant and SymPy

Let σ(n) be the sum of the positive divisors of n and let gcd(a, b) be the greatest common divisor of a and b.

Form an n by n matrix M whose (i, j) entry is σ(gcd(i, j)). Then the determinant of M is n!.

The following code shows that the theorem is true for a few values of n and shows how to do some common number theory calculations in SymPy.

from sympy import gcd, divisors, Matrix, factorial

def f(i, j):
    return sum( divisors( gcd(i, j) ) )

def test(n):
    r = range(1, n+1)
    M = Matrix( [ [f(i, j) for j in r] for i in r] )
    return M.det() - factorial(n)

for n in range(1, 11):
    print( test(n) )

As expected, the test function returns zeros.

If we replace the function σ above by τ where τ(n) is the number of positive divisors of n, the corresponding determinant is 1. To test this, replace sum by len in the definition of f and replace factorial(n) by 1.

In case you’re curious, both results are special cases of the following more general theorem. I don’t know whose theorem it is. I found it here.

For any arithmetic function f(m), let g(m) be defined for all positive integers m by

g(m) = \sum_{d \,\mid \,m} \mu(d) f\left(\frac{m}{d}\right)

Let M be the square matrix of order n with ij element f(gcd(i, j)). Then

\det M = \prod_{j=1}^n g(j)

Here μ is the Möbius function. The two special cases above correspond to g(m) = m and g(m) = 1.
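Here’s a sketch checking the more general theorem for another choice of f, namely f(m) = m^2, with g computed by Möbius inversion:

from math import prod
from sympy import gcd, Matrix
from sympy.ntheory import mobius, divisors

def f(m):
    return m**2              # an arbitrary arithmetic function to test with

def g(m):
    # Möbius inversion of f
    return sum(mobius(d) * f(m // d) for d in divisors(m))

def test_general(n):
    r = range(1, n + 1)
    M = Matrix([[f(gcd(i, j)) for j in r] for i in r])
    return M.det() - prod(g(j) for j in r)

for n in range(1, 8):
    print(test_general(n))   # expect all zeros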

* * *
