The previous post was a riff on a tweet asking what you’d get if you extracted all the *i*’s, *j*’s, and *k*’s from *Finnegans Wake* and multiplied them as quaternions. This post is a probabilistic variation on the previous one.

If you randomly select a piece of English prose, extract the *i*’s, *j*’s, and *k*’s, and multiply them together as quaternions, what are you likely to get?

The probability that a letter in this sequence is an *i* is about 91.5%. There’s a 6.5% chance it’s a *k*, and a 2% chance it’s a *j*. (Derived from here.) We’ll assume successive letters in the sequence are independent.

You could think of the process of multiplying all the *i*’s, *j*’s, and *k*’s together as a random walk on the unit quaternions, an example of a Markov chain. Start at 1. At each step, multiply your current state by *i* with probability 0.915, by *j* with probability 0.02, and by *k* with probability 0.065.
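Here’s a minimal sketch of this walk in Python. The helper names `qmul` and `walk` are mine, and the quaternions are represented as plain `(w, x, y, z)` tuples with the Hamilton product written out by hand:

```python
import random

# Hamilton product of quaternions represented as (w, x, y, z) tuples
def qmul(p, q):
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

I, J, K = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)

def walk(n_steps, seed=None):
    """Run the random walk for n_steps, starting at 1.
    Each step multiplies by i, j, or k with the stated probabilities."""
    rng = random.Random(seed)
    state = (1, 0, 0, 0)  # the quaternion 1
    for _ in range(n_steps):
        u = rng.random()
        step = I if u < 0.915 else J if u < 0.935 else K
        state = qmul(state, step)
    return state
```

Every state the walk visits is one of the eight unit quaternions ±1, ±*i*, ±*j*, ±*k*, since those eight elements form a group under multiplication.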

After the first step, you’re most likely at *i*. You could be at *j* or *k*, but nowhere else. After the second step, you’re most likely at -1, though you could be anywhere except at 1. For the first several steps you’re much more likely to be at some places than others. But after 50 steps, you’re about equally likely to be at any of the eight possible values ±1, ±*i*, ±*j*, ±*k*.
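You don’t have to simulate to check this: the state space has only eight elements, so you can push the exact probability distribution forward one step at a time. A sketch, assuming a hand-rolled Hamilton product (the names `ELEMS`, `INDEX`, and `step` are mine):

```python
# Hamilton product of quaternions as (w, x, y, z) tuples
def qmul(p, q):
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

# The eight states: 1, -1, i, -i, j, -j, k, -k
ELEMS = [(1,0,0,0), (-1,0,0,0), (0,1,0,0), (0,-1,0,0),
         (0,0,1,0), (0,0,-1,0), (0,0,0,1), (0,0,0,-1)]
INDEX = {e: n for n, e in enumerate(ELEMS)}
STEPS = [((0,1,0,0), 0.915), ((0,0,1,0), 0.02), ((0,0,0,1), 0.065)]

def step(dist):
    """One step of the Markov chain: push the distribution forward."""
    out = [0.0] * 8
    for n, p in enumerate(dist):
        for s, w in STEPS:
            out[INDEX[qmul(ELEMS[n], s)]] += p * w
    return out

dist = [1.0] + [0.0] * 7  # start at 1 with certainty
for _ in range(50):
    dist = step(dist)
# After 50 steps, every entry of dist is close to 1/8.
```

After two steps this same code shows the probability of being back at 1 is exactly zero and -1 is the most likely state, matching the description above.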

So did the weights matter at all, as far as the limiting behavior of eight equiprobable states is concerned?

No. The limiting behavior will be the same for any set of positive weights: each step permutes the eight states, so the uniform distribution is stationary, and with all three weights positive the chain is irreducible and aperiodic. But the large differences in weights slow down the initial convergence.
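One way to see the slowdown concretely is to compare the total variation distance from the uniform distribution after the same number of steps under the skewed weights and under equal weights. A sketch (the helper names `evolve` and `tv_from_uniform` are mine):

```python
# Hamilton product of quaternions as (w, x, y, z) tuples
def qmul(p, q):
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

ELEMS = [(1,0,0,0), (-1,0,0,0), (0,1,0,0), (0,-1,0,0),
         (0,0,1,0), (0,0,-1,0), (0,0,0,1), (0,0,0,-1)]
INDEX = {e: n for n, e in enumerate(ELEMS)}

def evolve(dist, weights, n):
    """Run n steps of the walk with the given (i, j, k) weights."""
    steps = list(zip([(0,1,0,0), (0,0,1,0), (0,0,0,1)], weights))
    for _ in range(n):
        out = [0.0] * 8
        for m, p in enumerate(dist):
            for s, w in steps:
                out[INDEX[qmul(ELEMS[m], s)]] += p * w
        dist = out
    return dist

def tv_from_uniform(dist):
    """Total variation distance from uniform on the 8 states."""
    return 0.5 * sum(abs(p - 0.125) for p in dist)

start = [1.0] + [0.0] * 7
skewed = evolve(start, (0.915, 0.02, 0.065), 20)
equal  = evolve(start, (1/3, 1/3, 1/3), 20)
# Both walks converge to uniform, but the skewed walk lags well behind
# the equal-weight walk at 20 steps.
```

Run far enough, the skewed walk closes the gap too, which is the point: the weights change how fast you get to the uniform distribution, not where you end up.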