Some mathematical art

This evening I ran across a paper on an unusual coordinate system that creates interesting graphs from simple functions. It’s called “circular coordinates,” but this doesn’t mean polar coordinates; it’s more complicated than that. [1]

Here’s a plot reproduced from [1], with some color added (the default colors matplotlib uses for multiple plots).

The plot above was based on the gamma function. Here are a few plots replacing the gamma function with other functions.

Here’s x/sin(x):

Here’s x⁵:

And here’s tan(x):

Here’s how the plots were created. For a given function f, plot the parametric curves given by

\begin{align*} x(t) &= \frac{2t \left(f(t) \right )^2}{t^2 + \left(f(t) \right )^2} \\ y(t) &= \frac{2t^2 f(t)}{t^2 + \left(f(t) \right )^2} \\ \end{align*}

See [1] for what this has to do with circles and coordinates.

The plots based on a function g(x) are given by setting f(x) = g(x) + c where c = -10, -9, -8, …, 10.
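Here’s a rough Python sketch of that recipe (my own code, not from [1]; `circular` is a name I made up, and I use sin(t) in place of the gamma function):

```python
import numpy as np

def circular(f, t, c=0.0):
    """Coordinates of the parametric curve for f(t) + c,
    following the x(t), y(t) formulas above."""
    g = f(t) + c
    denom = t**2 + g**2
    return 2*t*g**2/denom, 2*t**2*g/denom

# One curve per offset c = -10, ..., 10, as in the plots above;
# start t slightly above 0 to avoid the 0/0 point at the origin.
t = np.linspace(0.1, 20, 2000)
curves = [circular(np.sin, t, c) for c in range((-10), 11)]
```

Passing each (x, y) pair in curves to matplotlib’s plot function reproduces the layered designs shown here.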

Related posts

[1] Elliot Tanis and Lee Kuivinen, Circular Coordinates and Computer Drawn Designs. Mathematics Magazine. Vol 52 No 3. May, 1979.

Counting triangles with integer sides

Let T(N) be the number of distinct (non-congruent) triangles with integer sides and perimeter N.  For example, T(12) = 3 because there are three distinct triangles with integer sides and perimeter 12. There’s the equilateral triangle with sides 4 : 4 : 4, and the Pythagorean triangle 3 : 4 : 5. With a little more work we can find 2 : 5 : 5.

Triangles 4:4:4, 3:4:5, and 2:5:5

The authors in [1] developed an algorithm for finding T(N). The following Python code is a direct implementation of that algorithm.

    def T(N: int):
        if N < 3:
            return 0
        base_cases = {4:0, 6:1, 8:1, 10:2, 12:3, 14:4}
        if N in base_cases:
            return base_cases[N]
        if N % 2 == 0:
            R = N % 12
            if R < 4:
                R += 12
            return (N**2 - R**2)//48 + T(R)
        if N % 2 == 1:
            return T(N+3)

If you’re running a version of Python that doesn’t support type hinting, just delete the :int in the function signature.

Since this is a recursive algorithm, we should convince ourselves that it terminates. In the branch for even N, the number R is an even number between 4 and 14 inclusive, and so it’s in the base_cases dictionary.

In the odd branch, we recurse on N+3, which is a little unusual since typically recursive functions decrease their argument. But since N is odd, N+3 is even, and we’ve already shown that the even branch terminates.

The code (N**2 - R**2)//48 raises a couple of questions. Is the numerator divisible by 48? And if so, why use integer division (//) rather than ordinary division (/)?

First, the numerator is indeed divisible by 48. N is congruent to R mod 12 by construction, and so N − R is divisible by 12. Furthermore,

N² − R² = (N − R)(N + R).

The first term on the right is divisible by 12, so if the second term is divisible by 4, the product is divisible by 48. Since N and R are congruent mod 12, N + R is congruent to 2R mod 12, i.e. N + R = 2R + 12k for some integer k. Since R is even, 2R is a multiple of 4, and 12k is also a multiple of 4, so N + R is a multiple of 4.

So if (N² – R²)/48 is an integer, why did I write Python code that implies that I’m taking the integer part of the result? Because otherwise the code would sometimes return a floating point value. For example, T(13) would return 5.0 rather than 5.
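As a sanity check on T, we can count directly by brute force, enumerating sides a ≤ b ≤ c with perimeter N and the triangle inequality a + b > c (this helper is mine, not from [1]):

```python
def T_brute(N):
    """Count triangles with integer sides a <= b <= c and
    a + b + c = N, satisfying the triangle inequality a + b > c."""
    count = 0
    for a in range(1, N//3 + 1):
        # b <= c is equivalent to b <= (N - a)/2
        for b in range(a, (N - a)//2 + 1):
            c = N - a - b
            if a + b > c:
                count += 1
    return count
```

T_brute(12) returns 3, matching the 4:4:4, 3:4:5, and 2:5:5 triangles above, and the base cases in the dictionary check out as well.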

Here’s a plot of T(N).

[1] J. H. Jordan, Ray Walch and R. J. Wisner. Triangles with Integer Sides. The American Mathematical Monthly, Vol. 86, No. 8 (Oct., 1979), pp. 686-689

Ripples and hyperbolas

I ran across a paper [1] this morning on the differential equation

y′ = sin(xy).

The authors recommend having students explore numerical solutions to this equation and discover theorems about its solutions.

Their paper gives numerous theorems relating solutions and the hyperbolas xy = a: how many times a solution crosses a hyperbola, at what angle, under what conditions a solution can be tangent to a hyperbola, etc.
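Here’s a sketch of the sort of numerical experiment the authors recommend, using a hand-rolled fixed-step Runge-Kutta integrator (my own throwaway code; a library routine such as scipy.integrate.solve_ivp would do as well):

```python
import math

def rk4(f, x0, y0, x1, n=1000):
    """Integrate y' = f(x, y) from x0 to x1 with n fixed RK4 steps."""
    h = (x1 - x0)/n
    x, y = x0, y0
    xs, ys = [x], [y]
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h/2, y + h*k1/2)
        k3 = f(x + h/2, y + h*k2/2)
        k4 = f(x + h, y + h*k3)
        y += h*(k1 + 2*k2 + 2*k3 + k4)/6
        x += h
        xs.append(x)
        ys.append(y)
    return xs, ys

# Several solutions of y' = sin(xy) with different initial values
solutions = [rk4(lambda x, y: math.sin(x*y), 0.1, y0, 10.0)
             for y0 in (0.5, 1.0, 2.0, 3.0)]
```

Plotting each solution along with a few hyperbolas xy = a makes the crossing behavior described in the theorems visible.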

The plot above is based on a plot in the original paper, but easier to read. It wasn’t so easy to make nice plots 40 years ago. In the original plot the solutions and the asymptotes were plotted with the same thickness and color, making them hard to tell apart.

More differential equation posts

[1] Wendell Mills, Boris Weisfeiler and Allan M. Krall. Discovering Theorems with a Computer: The Case of y′ = sin(xy). The American Mathematical Monthly, Vol. 86, No. 9 (Nov., 1979), pp. 733-739

Informative stopping

When the rule for stopping an experiment depends on the data in the experiment, the results could be biased if the stopping rule isn’t taken into account in the analysis [1].

For example, suppose Alice wants to convince Bob that π has a greater proportion of even digits than odd digits.

Alice: I’ll show you that π has more even digits than odd digits by looking at the first N digits. How big would you like N to be?

Bob: At least 1,000. Of course more data is always better.

Alice: Right. And how many more even than odd digits would you find convincing?

Bob: If there are at least 10 more evens than odds, I’ll believe you.

Alice: OK. If you look at the first 2589 digits, there are 13 more even digits than odd digits.

Now if Alice wanted to convince Bob that there are more odd digits, she could do that too. If you look at the first 2077 digits, 13 more are odd than even.

No matter what two numbers Bob gives, Alice can find a sample size that will give the result she wants. Here’s Alice’s Python code.

    from mpmath import mp
    import numpy as np

    N = 3000 
    mp.dps = N+2
    digits = str(mp.pi)[2:]

    parity = np.ones(N, dtype=int)
    for i in range(N):
        if digits[i] in ['1', '3', '5', '7', '9']:
            parity[i] = -1
    excess = parity.cumsum()
    print(np.where(excess == 13))
    print(np.where(excess == -13))

The number N is a guess at how far out she might have to look. If it doesn’t work, she increases it and runs the code again.

The array parity contains a 1 in positions where the digits of π (after the decimal point) are even and a -1 where they are odd. The cumulative sum shows how many more even than odd digits there have been up to a given point, a negative number meaning there have been more odd digits.

Alice thought that stopping when there are exactly 10 more of the parity she wants would look suspicious, so she looked for places where the difference was 13.

Here are the results:

    [ 126,  128,  134,  …,  536, 2588, … 2726]
    [ 772,  778,  780,  …,  886, 2076, … 2994]

There’s one minor gotcha. The array excess is indexed from zero, so Alice reports 2589 rather than 2588 because the 2589th digit has index 2588.

Bob’s mistake was that he specified a minimum sample size. By saying “at least 1,000” he gave Alice the freedom to pick the sample size to get the result she wanted. If he specified an exact sample size, there probably would be either more even digits or more odd digits, but there couldn’t be both. And if he were more sophisticated, he could pick an excess value that would be unlikely given that sample size.

Related posts

[1] This does not contradict the likelihood principle; it says that informative stopping rules should be incorporated into the likelihood function.

Expert determination for CCPA

US and California flags

California’s CCPA regulation has been amended to say that data considered deidentified under HIPAA is considered deidentified under CCPA. The amendment was proposed last year and was finally signed into law on September 25, 2020.

This is good news because it’s relatively clear what deidentification means under HIPAA compared to CCPA. In particular, HIPAA has two well-established alternatives for determining that data have been adequately deidentified:

  1. Safe Harbor, or
  2. Expert determination.

The latter is especially important because most useful data doesn’t meet the requirements of Safe Harbor.

I provide companies with HIPAA expert determination. And now by extension I can provide expert determination under CCPA.

I’m not a lawyer, and so nothing I write should be considered legal advice. But I work closely with lawyers to provide expert determination. If you would like to discuss how I could help you, let’s talk.

Category theory for programmers made easier

I imagine most programmers who develop an interest in category theory do so after hearing about monads. They ask someone what a monad is, and they’re told that if they really want to know, they need to learn category theory.

Unfortunately, there are a couple of unnecessary difficulties anyone wanting to understand monads etc. is likely to face immediately. One is some deep set theory.

“A category is a collection of objects …”

“You mean like a set?”

“Ah, well, no. You see, Bertrand Russell showed that …”

There are reasons for such logical niceties, but they don’t matter to someone who wants to understand programming patterns.

Another complication is morphisms.

“As I was saying, a category is a collection of objects and morphisms between objects …”

“You mean like functions?”

“Well, they might be functions, but more generally …”

Yes, Virginia, morphisms are functions. It’s true that they might not always be functions, but they will be functions in every example you care about, at least for now.

Category theory is a framework for describing patterns in function composition, and so that’s why things like monads find their ultimate home in category theory. But doing category theory rigorously requires some setup that people eager to get into applications don’t have to be concerned with.

Patrick Honner posted on Twitter recently that his 8-year-old child asked him what area is. My first thought on seeing that was that a completely inappropriate answer would be that this is a deep question that wasn’t satisfactorily settled until the 20th century using measure theory. My joking response to Patrick was

Well, first we have to define σ-algebras. They’re kinda like topologies, but closed under countable union and intersection instead of arbitrary union and finite intersection. Anyway, a measure is a …

It would be ridiculous to answer a child this way, and it is nearly as ridiculous to burden a programmer with unnecessary logical nuance when they’re trying to find out why something is called a functor, or a monoid, or a monad, etc.

I saw an applied category theory presentation that began with “A category is a graph …” That sweeps a lot under the rug, but it’s not a bad conceptual approximation.

So my advice to programmers learning category theory is to focus on the arrows in the diagrams. Think of them as functions; they probably are in your application [1]. Think of category theory as a framework for describing patterns. The rigorous foundations can be postponed, perhaps indefinitely, just as an 8-year-old child doesn’t need to know measure theory to begin understanding area.
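To make “morphisms are functions” concrete, here’s a minimal Python sketch of the Maybe functor (the names are mine, echoing Haskell’s fmap): ordinary functions are the arrows, and fmap lifts them to act on optional values while preserving composition.

```python
from typing import Callable, Optional, TypeVar

A = TypeVar("A")
B = TypeVar("B")

def fmap(g: Callable[[A], B], x: Optional[A]) -> Optional[B]:
    """Lift an ordinary function g to act on Optional values:
    the Maybe functor applied to the morphism g."""
    return None if x is None else g(x)

# The functor laws are statements about function composition:
# fmap preserves identity and preserves composition.
inc = lambda n: n + 1
dbl = lambda n: 2 * n
```

fmap(inc, 3) gives 4 while fmap(inc, None) stays None, and fmap of a composition equals the composition of the fmaps: the kind of pattern category theory is built to describe.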

More category theory posts

[1] The term “contravariant functor” has unfortunately become deprecated. In more modern presentations, all functors are covariant, but some are covariant in an opposite category. That does make the presentation more slick, but at the cost of turning arrows around that used to represent functions and now don’t really. In my opinion, category theory would be more approachable if we got rid of all “opposite categories” and said that functors come in two flavors, covariant and contravariant, at least in introductory presentations.

Is every number a random Fibonacci number?

The previous post looked at random Fibonacci sequences. These are defined by

f₁ = f₂ = 1,

fₙ = fₙ₋₁ ± fₙ₋₂

for n > 2, where the sign is chosen randomly to be +1 or −1.

Conjecture: Every integer can appear in a random Fibonacci sequence.

Here’s why I believe this might be true. The values in a random Fibonacci sequence of length n are bounded between −Fₙ₋₃ and Fₙ [1]. This range grows like O(φⁿ), where φ is the golden ratio. But the number of ways to pick + and − signs in a random Fibonacci sequence equals 2ⁿ.

By the pigeonhole principle, some choices of signs must lead to the same numbers: if you put 2ⁿ balls in φⁿ boxes, some boxes get more than one ball since φ < 2. That’s not quite rigorous, since the range is O(φⁿ) rather than exactly φⁿ, but that’s the idea. The graph included in the previous post shows multiple examples where different random Fibonacci sequences overlap.
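This counting is easy to verify by brute force for small n (a sketch of mine, not code from the post):

```python
from itertools import product

def reachable(n):
    """All values appearing in any length-n random Fibonacci sequence."""
    vals = set()
    for signs in product((1, -1), repeat=n-2):
        f = [1, 1]
        for b in signs:
            f.append(f[-1] + b*f[-2])
        vals.update(f)
    return vals

# For n = 18 there are 2^16 = 65536 sign choices, but every value
# lies in [-F_15, F_18] = [-610, 2584], so many choices must collide.
vals = reachable(18)
```

The set can contain at most 3,195 distinct values, far fewer than the 65,536 sign choices, so collisions are abundant.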

Graph of random Fibonacci sequences

Now the pigeon hole principle doesn’t show that the conjecture is true, but it suggests that there could be enough different sequences that it might be true. The fact that the ratio of balls to boxes grows exponentially doesn’t hurt either.

Empirically, it appears that as you look at longer and longer random Fibonacci sequences, gaps in the range are filled in.

The following graphs consider all random Fibonacci sequences of length n, plotting the smallest positive integer and the largest negative integer not in the range. For the negative integers, we take the absolute value. Both plots are on a log scale.

First positive number missing:

Absolute value of first negative number missing:

The span between the largest and smallest possible random Fibonacci sequence values grows exponentially with n, and the stretch of consecutive integers within that range is apparently also growing exponentially with n.

The following Python code was used to explore the gaps.

    import numpy as np
    from itertools import product

    def random_fib_range(N):
        r = set()
        x = np.ones(N, dtype=int)
        for signs in product((-1,1), repeat=(N-2)):
            for i in range(2, N):
                b = signs[i-2]
                x[i] = x[i-1] + b*x[i-2]
            r.update(x)  # record every value appearing in this sequence
        return sorted(r)

    def stats(r):
        zero_location = r.index(0)

        # r is sorted, so r[0] and r[-1] are the min and max values;
        # if there are no gaps, the first missing values lie just beyond them
        neg_gap = r[0] - 1
        pos_gap = r[-1] + 1
        for i in range(zero_location-1, -1, -1):
            if r[i] != r[i+1] - 1:
                neg_gap = r[i+1] - 1
                break
        for i in range(zero_location+1, len(r)):
            if r[i] != r[i-1] + 1:
                pos_gap = r[i-1] + 1
                break
        return (neg_gap, pos_gap)

    for N in range(5,25):
        r = random_fib_range(N)
        print(N, stats(r))


Update: Nathan Hannon gives a simple proof of the conjecture by induction in the comments.

You can create the series (1, 2) and (1, 3). Now assume you can create (1, n). Then you can create (1, n+2) via (1, n, n+1, 1, n+2). So you can create any positive even number starting from (1, 2) and any odd positive number from (1, 3).

You can do something analogous for negative numbers via (1, n, n-1, -1, n-2, n-3, -1, 2-n, 3-n, 1, n-2).
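Both constructions are easy to check mechanically against the random Fibonacci rule (a quick verification of my own):

```python
def is_random_fib(seq):
    """Check that every term after the first two equals the previous
    term plus or minus the term before that."""
    return all(seq[i] in (seq[i-1] + seq[i-2], seq[i-1] - seq[i-2])
               for i in range(2, len(seq)))

# Nathan's two constructions, checked for a sample value of n
n = 7
step_up  = [1, n, n+1, 1, n+2]
step_neg = [1, n, n-1, -1, n-2, n-3, -1, 2-n, 3-n, 1, n-2]
```

Both lists satisfy the recurrence at every step, and both end with the pair needed to continue the induction.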

This proof can be used to create an upper bound on the time required to hit a given integer, and a lower bound on the probability of hitting a given integer during a random Fibonacci sequence.

Nathan’s construction requires more steps to produce new negative numbers, but that is consistent with the range of random Fibonacci sequences being wider on the positive side, [−Fₙ₋₃, Fₙ].


[1] To minimize the random Fibonacci sequence, you can choose the signs so that the values are 1, 1, 0, −1, −1, −2, −3, −5, … Note that the absolute values of this sequence form the ordinary Fibonacci sequence with three extra terms spliced in. That’s why the lower bound is −Fₙ₋₃.

Random Fibonacci numbers

The Fibonacci numbers are defined by F₁ = F₂ = 1, and for n > 2,

Fₙ = Fₙ₋₁ + Fₙ₋₂.

A random Fibonacci sequence f is defined similarly, except the addition above is replaced with a subtraction with probability 1/2. That is, f₁ = f₂ = 1, and for n > 2,

fₙ = fₙ₋₁ + b fₙ₋₂

where b is +1 or -1, each with equal probability.

Here’s a graph of three random Fibonacci sequences.

Graph of random Fibonacci sequences

Here’s the Python code that was used to produce the sequences above.

    import numpy as np

    def rand_fib(length):
        f = np.ones(length)
        for i in range(2, length):
            b = np.random.choice((-1,1))
            f[i] = f[i-1] + b*f[i-2]
        return f

It’s easy to see that the nth random Fibonacci number can be as large as the nth ordinary Fibonacci number if all the signs happen to be positive. But the numbers are typically much smaller.

The nth (ordinary) Fibonacci number asymptotically approaches φⁿ/√5 where φ is the golden ratio, φ = (1 + √5)/2 = 1.618…

Another way to say this is that

\lim_{n\to\infty}(F_n)^{1/n} = \varphi
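A quick numerical sanity check of this limit:

```python
# Check that F_n^(1/n) creeps toward the golden ratio.
phi = (1 + 5**0.5) / 2
F = [1, 1]
while len(F) < 100:
    F.append(F[-1] + F[-2])
print(F[-1]**(1/len(F)), phi)   # the convergence is slow
```

At n = 100 the nth root is about 1.605, still noticeably below φ; the 1/√5 factor in the asymptotic formula drags the nth root down by a factor of 5^(−1/2n).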

The nth random Fibonacci number does not have an asymptotic value—it wanders randomly between positive and negative values—but with probability 1, the absolute values satisfy

\lim_{n\to\infty}|f_n|^{1/n} = 1.1319882\ldots

This was proved in 1960 [1].

Here’s a little Python code showing that simulation gives results consistent with this theorem.

    N = 500
    x = [abs(rand_fib(N)[-1])**(1/N) for _ in range(10)]
    print(f"{np.mean(x)} ± {np.std(x)}")

This produced

1.1323 ± 0.0192

which includes the theoretical value 1.1320.

Update: The next post looks at whether every integer appears in a random Fibonacci sequence. Empirical evidence suggests the answer is yes.

Related posts

[1] Furstenberg and Kesten. Products of random matrices. Ann. Math. Stat. 31, 457-469.

Edsger Dijkstra, blogger

Edsger W. Dijkstra

I’ve been thinking about Edsger Dijkstra lately because I suspect some of the ideas he developed will be useful for a project I’m working on.

While searching for some of Dijkstra’s writings I ran across the article Edsger Dijkstra: The Man Who Carried Computer Science on His Shoulders. It occurred to me while reading this article that Dijkstra was essentially a blogger before there were blogs.

Here is a description of his writing from the article:

… Dijkstra’s research output appears respectable, but otherwise unremarkable by current standards. In this case, appearances are indeed deceptive. Judging his body of work in this manner misses the mark completely. Dijkstra was, in fact, a highly prolific writer, albeit in an unusual way.

In 1959, Dijkstra began writing a series of private reports. Consecutively numbered and with his initials as a prefix, they became known as EWDs. He continued writing these reports for more than forty years. The final EWD, number 1,318, is dated April 14, 2002. In total, the EWDs amount to over 7,700 pages. Each report was photocopied by Dijkstra himself and mailed to other computer scientists.

His large collection of small articles sounds a lot like a blog to me.

You can find Dijkstra’s “blog” here.

Gruntled vs disgruntled

My wife and I were talking this morning and the phrase “less disingenuous” came up. I thought about how sometimes a positive word fades into obscurity while the negative form lives on. The first example that came to mind is gruntled vs disgruntled. Yes, the former is an English word, but a rare one.

Here’s a comparison of the frequency of gruntled vs disgruntled from 1860 to 2000.

In 2000, disgruntled was about 200x more common than gruntled in the books in Google’s English corpus.

But if you look further back, gruntled was used a little more often.

But it turns out that the people who were gruntled in the 19th century were chiefly British. If we look at just the American English corpus, no one was gruntled.

There’s a rise in the frequency of disgruntled as you look backward from 1815, which prompted me to look further back. Looking at just the American English corpus, a lot of people were disgruntled between 1766 and 1776 for some reason.

More word frequency comparisons