My previous post asked this question:

Start a knight at a corner square of an otherwise-empty chessboard. Move the knight at random by choosing uniformly from the legal knight-moves at each step. What is the mean number of moves until the knight returns to the starting square?

There is a mathematical solution that is a little arcane, but short and exact. You could also approach the problem using simulation, which is more accessible but not exact.

The mathematical solution is to view the problem as a random walk on a graph. The vertices of the graph are the squares of a chessboard, and the edges connect squares that are a legal knight's move apart. The general solution for the mean time to first return is simply 2*N*/*k*, where *N* is the number of edges in the graph and *k* is the number of edges meeting at the starting point. Amazingly, the solution hardly depends on the structure of the graph at all. It only requires that the graph is connected. In our case *N* = 168 and *k* = 2, so the mean return time is 168 moves.
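The edge count is easy to verify numerically. Here is a short sketch (mine, not from the original post) that counts the knight-move edges on an 8×8 board and evaluates 2*N*/*k* for the corner square:

```python
def neighbors(x, y):
    """Squares reachable by a legal knight move from (x, y)."""
    deltas = [(1, 2), (1, -2), (-1, 2), (-1, -2),
              (2, 1), (2, -1), (-2, 1), (-2, -1)]
    return [(x + dx, y + dy) for dx, dy in deltas
            if 0 <= x + dx < 8 and 0 <= y + dy < 8]

# Summing degrees counts each edge once from each endpoint, so halve it.
N = sum(len(neighbors(x, y)) for x in range(8) for y in range(8)) // 2
k = len(neighbors(0, 0))  # degree of the corner square

print(N, k, 2 * N / k)  # 168 2 168.0
```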

For a full explanation of the math, see this online book, chapter 3, page 9. Start there and work your way backward until you understand the solution.

And now for simulation. The problem says to pick a legal knight’s move at random. The most direct approach would be to find the legal moves at a given point first, then choose one of those at random. The code below achieves the same end with a different approach. It first chooses a random move, and if that move is illegal (i.e. off the board) it throws that move away and tries again. This will select a legal move with the right probability, though perhaps that’s not obvious. It’s what’s known as an accept-reject random generator.

```python
from random import randint

# Move a knight from (x, y) to a random new position
def new_position(x, y):
    while True:
        dx, dy = 1, 2
        # it takes three bits to determine a random knight move:
        # (1, 2) vs (2, 1), and the sign of each
        r = randint(0, 7)
        if r % 2:
            dx, dy = dy, dx
        if (r >> 1) % 2:
            dx = -dx
        if (r >> 2) % 2:
            dy = -dy
        newx, newy = x + dx, y + dy
        # If the new position is on the board, take it.
        # Otherwise try again.
        if (newx >= 0 and newx < 8 and newy >= 0 and newy < 8):
            return (newx, newy)

# Count the number of steps in one random tour
def random_tour():
    x, y = x0, y0 = 0, 0
    count = 0
    while True:
        x, y = new_position(x, y)
        count += 1
        if x == x0 and y == y0:
            return count

# Average the length of many random tours
sum = 0
num_reps = 100000
for i in xrange(num_reps):
    sum += random_tour()
print sum / float(num_reps)
```

A theorem is better than a simulation, but a simulation is a lot better than nothing. This problem illustrates how sometimes we think we need to simulate when we don’t. On the other hand, when you have a simulation *and* a theorem, you have more confidence in your solution because each validates the other.

A simulation can (at least for me) be an aid to understanding the problem in a way that a derived solution is not. I have to make all my assumptions and my understanding of the problem completely explicit, and have to at least briefly explore all the potential corner cases. In a way, 95% of the value of a simulation model is achieved without ever running the code.

I agree that writing the simulation makes you think about things in detail.

As someone said, if you want to understand something, explain it to another person, but if you *really* want to understand it, explain it to a computer.

The computer can do better than simulation in this case. It can easily calculate P(n), the probability of being home after n moves. All it can’t do is calculate sum_n n*P(n), because this sum has infinitely many terms. If T is a 64×64 transition probability matrix, this Octave code will do it up to a tolerance:

```matlab
n = 0;             % iteration count
s = zeros(64, 1);  % probability vector
s(1) = 1;          % s(1) = Prob(@home) at iteration n
w = 0;             % running sum of n*P(n)
while (1)
    n = n + 1;
    s = T * s;
    w = w + s(1) * n;
    s(1) = 0;
    if max(s) <= tol break; end
end
disp(w);           % display the answer
```
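For readers without Octave, here is a rough pure-Python equivalent of the same truncated sum (my translation, not SteveBrooklineMA's code; the tolerance default is my choice). It applies the transition step via neighbor lists rather than forming the 64×64 matrix T explicitly:

```python
DELTAS = [(1, 2), (1, -2), (-1, 2), (-1, -2),
          (2, 1), (2, -1), (-2, 1), (-2, -1)]

def neighbors(sq):
    """Knight-move neighbors of square sq, with squares indexed 8*x + y."""
    x, y = divmod(sq, 8)
    return [8 * (x + dx) + (y + dy) for dx, dy in DELTAS
            if 0 <= x + dx < 8 and 0 <= y + dy < 8]

NBRS = [neighbors(sq) for sq in range(64)]

def mean_return_time(tol=1e-12):
    s = [0.0] * 64      # probability vector over squares
    s[0] = 1.0          # the knight starts on the corner square
    w = 0.0             # running sum of n * P(n)
    n = 0
    while True:
        n += 1
        t = [0.0] * 64  # one step of the random walk: t = T*s
        for sq, p in enumerate(s):
            if p:
                share = p / len(NBRS[sq])
                for nb in NBRS[sq]:
                    t[nb] += share
        w += t[0] * n   # P(n): probability of first return at move n
        t[0] = 0.0      # kill walks that have returned home
        s = t
        if max(s) <= tol:
            return w

mrt = mean_return_time()
print(mrt)  # ≈ 168
```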

```python
r = randint(0, 7)
if r % 2:
    dx, dy = dy, dx
if (r >> 1) % 2:
    dx = -dx
if (r >> 2) % 2:
    dy = -dy
```

This block is damn cool.

Adding bits to the (original article) code to track the min and max trip lengths … then adding a frequency table capability … leads to interesting results.

Shortest trip – out and back: 2 steps (over 16% of the time).

Frequency drops from there, but the more runs you allow, the higher the longest trip grows, as you will run across a few truly ‘lost’ knights (using a GPS no doubt) 🙂 having 2000+ wandering steps.

The average hovers around 168 though.

Priezt – I agree, this is very cool code in that block!

SteveBrooklineMA – thanks for a great simulation / estimator.

Interesting that for n>30 or so, and even, the probability of returning after n moves looks to be very very close to k*exp( -0.0048536*n). So basically we have an exponential distribution with some modification for small n. Bonus points for an explanation of what -0.0048536 is. Note the probability of returning in an odd number of moves is zero.
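One plausible answer to the bonus question (my guess, not established in the thread): −0.0048536 is the log of the largest eigenvalue of the substochastic matrix you get by deleting the home square's row and column from T, i.e. the chain "killed" on its first return home, which is exactly the iteration the Octave code above performs. Here is a pure-Python power-iteration sketch of that guess:

```python
import math

DELTAS = [(1, 2), (1, -2), (-1, 2), (-1, -2),
          (2, 1), (2, -1), (-2, 1), (-2, -1)]

def neighbors(sq):
    x, y = divmod(sq, 8)
    return [8 * (x + dx) + (y + dy) for dx, dy in DELTAS
            if 0 <= x + dx < 8 and 0 <= y + dy < 8]

NBRS = [neighbors(sq) for sq in range(64)]

def step(s):
    """One move of the walk, killing any mass that lands on the home square."""
    t = [0.0] * 64
    for sq, p in enumerate(s):
        if p:
            share = p / len(NBRS[sq])
            for nb in NBRS[sq]:
                t[nb] += share
    t[0] = 0.0
    return t

s = [0.0] * 64
s[0] = 1.0
for _ in range(5000):          # let subdominant modes die out
    s = step(s)
    m = sum(s)
    s = [p / m for p in s]     # renormalize to avoid underflow

# The chain has period 2, so measure the decay over two moves.
decay = sum(step(step(s)))     # ≈ lambda^2 once converged
rate = -math.log(decay) / 2
print(rate)  # compare with the fitted 0.0048536
```

The small-n deviation the fit shows would then come from the subdominant eigenvalues, which decay away quickly.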

I was asked this question in a job interview yesterday morning. I could have read this post before it!

Dileep – I guess that just proves the interviewer reads this blog. Asking such things in interviews sadly does not achieve much, but people keep on doing it.

I have an interesting follow-up.

When I first looked at this problem, I saw that due to parity, the knight would return in an *even* number of moves. We can project the knight’s random walk (i.e. its Markov chain) onto just the even-indexed subsequence of states. Obviously the projection is still a Markov chain, and we can work out its transition probabilities.

Question 1: Will a Monte Carlo simulation of the projection — which has only 32 states — converge faster, slower, or at the same rate? (This, I do not know the answer to, and would be curious to find out.)

Question 2: Would the closed-form formula still work in that case, with the *non-uniform* transition probabilities between states? (This, I do know the answer to 🙂)

As a follow-up follow-up, consider the diagonal that passes through the knight’s starting square. For any (even-indexed) move that takes the knight *above* that diagonal, we can just (*wink-and-a-nudge*) immediately reflect that move about the diagonal so the knight stays *on or below* the diagonal. This leaves just 8+6+4+2 = 20 states to consider. (Of course we have to adjust the transition probabilities yet again.) Does simulation of this projected-reflected chain converge slower/faster/at-the-same-rate? Does the closed-form formula still work?

I think you gentlemen are lost in the details of these sorts of abstractions instead of looking at their true usefulness and meaning.

The simulation matched a theorem, okay fine. They are biased in similar fashion. Where is the value? Do a test, get creative. Sample a dripping water faucet, a failing neon bulb, or tie a tree branch to a piece of plastic with a strain gauge. Measure, sample, and store data, and then use it to move the knight on the virtual board. The key is you are trying to make informed decisions based on correct evidence and facts, not play childish guessing games.

>>> r = randint(0, 7)

The upper limit is exclusive, so this should be randint(0, 8) here. But since you got the right result, you must have had this correct in your actual code anyway.

That should of course be randint( 0, 8 ) in my previous comment. Damn emoticons 🙂

Ethan: The upper limit of `randint` is *inclusive*, so I believe the code is correct. Perhaps you were thinking of `random`, where the upper limit is exclusive.

@John: Sorry, I assumed you were using numpy.random.randint() rather than random.randint() – my mistake. I wonder why the two packages use different conventions… I see there is also random.randrange(), which sticks to the usual Python ways and excludes the upper endpoint, agreeing with the built-in function range().

Ethan: What a mess! Fortunately I usually work with the continuous case where you can be sloppier about the end points and get away with it. 🙂

Tips to notice from this post/comments for Python:

numpy.random.randint: [a,b)

scipy.random.randint: [a,b)

numpy.random.rand: [0,1)

scipy.random.rand: [0,1)

...

random.randint: [a,b]
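The standard-library half of that table is easy to demonstrate without installing anything (numpy is left out so the snippet stands alone):

```python
import random

random.seed(0)
ri = [random.randint(0, 7) for _ in range(10000)]   # [a, b]: both endpoints
rr = [random.randrange(0, 8) for _ in range(10000)] # [a, b): stop excluded

print(min(ri), max(ri))  # 0 7 -- randint(0, 7) can return 7
print(min(rr), max(rr))  # 0 7 -- randrange(0, 8) never returns 8, like range()
```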

The modulo code is cute, but slow; a table lookup is considerably faster (and IMHO clearer):

```python
from random import randint

moves = [
    (1, 2),
    (1, -2),
    (2, 1),
    (2, -1),
    (-1, 2),
    (-1, -2),
    (-2, 1),
    (-2, -1)
]

# Move a knight from (x, y) to a random new position
def new_position(x, y):
    while True:
        # 8 possible moves
        r = randint(0, 7)
        dx, dy = moves[r]
        newx, newy = x + dx, y + dy
        # If the new position is on the board, take it.
        # Otherwise try again.
        if (newx >= 0 and newx < 8 and newy >= 0 and newy < 8):
            return (newx, newy)
```

Interesting little problem though – like the post

Interesting. Thank you for sharing!

Just a few technical details about Python:

1. Using a variable named “sum” shadows the built-in “sum” function, which is a little dangerous (e.g. if the code is later modified, this breaks the assumption that “sum” is indeed the sum function).

2. “for i in xrange()” is customarily written “for _ in xrange()” so as to indicate that the value of the counter really does not matter and is not used.

3. The summation itself could more simply be written as “steps = sum(random_tour() for _ in xrange(num_reps))”.
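Putting those three suggestions together, and moving to Python 3 (where xrange and the print statement are gone), the program might look like this. This is a sketch of my own, not the author's code:

```python
import random

def new_position(x, y):
    """Move a knight from (x, y) to a random legal position (accept-reject)."""
    while True:
        dx, dy = 1, 2
        r = random.randint(0, 7)   # three random bits
        if r % 2:
            dx, dy = dy, dx
        if (r >> 1) % 2:
            dx = -dx
        if (r >> 2) % 2:
            dy = -dy
        newx, newy = x + dx, y + dy
        if 0 <= newx < 8 and 0 <= newy < 8:
            return newx, newy

def random_tour():
    """Number of moves until the knight first returns to the corner."""
    x, y = x0, y0 = 0, 0
    count = 0
    while True:
        x, y = new_position(x, y)
        count += 1
        if (x, y) == (x0, y0):
            return count

num_reps = 10000
steps = sum(random_tour() for _ in range(num_reps))  # no shadowed built-in
print(steps / num_reps)  # hovers around 168
```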