I’ve been looking back on some of my blog posts that included Mathematica code to see whether I could rewrite them using Python. For example, I rewrote my code for finding sonnet primes in Python a few days ago. Next I wanted to try testing the Narcissus prime.

Futility Closet describes the Narcissus prime as follows:

Repeat the string 1808010808 1560 times, and tack a 1 on the end. The resulting 15601-digit number is prime, and because it’s a palindrome made up of the digits 1, 8, and 0, it remains prime when read backward, upside down, or in a mirror.
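The non-primality parts of that claim are cheap to check before worrying about the primality test itself; a quick sketch:

```python
# Build the 15601-digit string and check everything except primality.
s = "1808010808" * 1560 + "1"

print(len(s))                      # 15601
print(s == s[::-1])                # True: it's a palindrome
print(set(s) <= {"0", "1", "8"})   # True: only digits that read as digits rotated
```

Since 0, 1, and 8 each read as themselves upside down and the string is a palindrome, the rotated and mirrored readings are the same number.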

My Mathematica code for verifying this claim is posted here. Here’s Python code to do the same thing:

```python
from sympy.ntheory import isprime
isprime(int("1808010808"*1560 + "1"))
```

This does indeed return `True`. However, the Mathematica code ran for about 2 minutes while the SymPy code took 17.5 hours, about 500 times longer.

Any idea why? Is Python’s `isprime` more expensive? Can it be improved?

Kyle: I don’t know. They’re probably using different algorithms, different big integer implementations, etc. I don’t know whether either algorithm is deterministic: probabilistic algorithms are much faster, but could possibly be wrong.

For reference: https://github.com/sympy/sympy/blob/master/sympy/ntheory/primetest.py

The last paragraph in this article describes the algorithm in Mathematica:

http://mathworld.wolfram.com/Rabin-MillerStrongPseudoprimeTest.html

It appears they use similar (though not identical) versions of one algorithm, while Mathematica also uses a second algorithm.
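For context, the core of the Miller–Rabin strong pseudoprime test is one big modular exponentiation per base; a minimal sketch (not SymPy’s or Mathematica’s actual code):

```python
def miller_rabin(n, base):
    """Return True if odd n > 2 is a strong probable prime to this base."""
    # Write n - 1 as d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(base, d, n)   # the expensive step for a 15601-digit n
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

print(miller_rabin(221, 174))   # True, although 221 = 13 * 17: a "strong liar"
print(miller_rabin(221, 137))   # False: base 137 witnesses compositeness
```

A probabilistic test just runs this for a handful of random bases; the running time is dominated by those `pow` calls on 15601-digit integers.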

In addition to the other issues, there are a couple of things I notice in the SymPy code:

1. It tests whether the target number is a pseudoprime after running the mathematical test. I would think this would be a relatively fast operation, so maybe it could be done first. This isn’t related to this issue, though.

2. It doesn’t use any threading. Tests for different bases could be run simultaneously.

3. The actual test function does some of the exact same calculations multiple times for each base, even though the calculations in question are independent of the base.

4. If I am reading it correctly, the test function could skip the tests as long as b**2 is lower than n-1, since in that case b**2 % n will never equal n-1.
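Point 2 could be sketched with `multiprocessing` (threads wouldn’t help here, since CPython holds the GIL during big-integer `pow`). The `sprp` helper below is a hypothetical stand-in for SymPy’s per-base strong-probable-prime test, not its actual code:

```python
from multiprocessing import Pool

def sprp(n, base):
    """Strong-probable-prime test for odd n > 2 to a single base."""
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(base, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

def parallel_mr(n, bases=(2, 3, 5, 7, 11, 13)):
    # Each base's modular exponentiation runs in its own process.
    with Pool(len(bases)) as pool:
        return all(pool.starmap(sprp, [(n, b) for b in bases]))
```

With six bases and six cores, the wall-clock time would be roughly that of a single base, at the cost of process start-up overhead that only pays off for large inputs.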

This one is deterministic, but when running it under PyPy on my rig, it finishes in under 18 minutes. Surely a probabilistic algorithm can finish much faster.

```python
def modpower(b, e, n):
    '''Iterative method for Fermat's Test, O(n^3) instead of exponential.
    From Computations in Number Theory Using Python: A Brief Introduction,
    Jim Carlson, March 2003'''
    result = 1
    s = b
    q = e
    while q > 0:
        # r = q % 2
        r = q & 1
        if r == 1:
            result = result * s % n
        # print q, r, s, result
        s = s * s % n
        # q = q / 2
        q = q >> 1
    return result

def is_prime(n):
    return modpower(2, n-1, n) == 1
```

Arguably, this is the cause of the problem. 😉
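One caveat about the code above: `modpower(2, n-1, n) == 1` is a single-base Fermat test rather than a deterministic primality test, so composite “pseudoprimes” can pass it. The smallest base-2 example is 341 = 11 × 31:

```python
def fermat_test(n, base=2):
    # Fermat's little theorem: for prime n, base**(n-1) is 1 (mod n).
    # The converse fails: some composites also satisfy this.
    return pow(base, n - 1, n) == 1

print(fermat_test(341))   # True, even though 341 = 11 * 31 is composite
print(fermat_test(91))    # False: 91 = 7 * 13 is caught by base 2
```

So the 18-minute figure is for a test weaker than what SymPy and Mathematica actually run.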

Perl6 version:

`say ("1808010808" x 1560 ~ "1").Int.is-prime;`

On Rakudo Perl 6 it runs in 4 hours.

This is a naive (and probably flawed) version for Julia:

I assume it is flawed because I get a segmentation fault. In fact, even with prime numbers as “small” as 953467954114363 I get segfaults often, though when it works, it takes over a minute.