Last week I wrote about how memoization reduced a simulation’s execution time from 300 hours down to 38 minutes. This weekend I came up with a different approach that reduces the time down to 21 minutes by taking advantage of a recurrence relation.

The most expensive function in the simulation has four integer arguments. Each time the function is called, one of the arguments is incremented by 1. A random process determines which argument is incremented each time. This function satisfies a set of recurrence relations that make it faster to calculate a new value if you have a previously computed value at adjacent arguments.
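The post doesn't show the actual function or its recurrences, but a toy stand-in illustrates the idea. Suppose (hypothetically) f(a, b, c, d) = a!·b!·c!·d!. Evaluating it directly costs work proportional to the size of the arguments, but incrementing one argument just multiplies the previous value by a single factor, so each step costs one multiplication:

```python
from math import factorial

def f_direct(a, b, c, d):
    # Direct evaluation: cost grows with the size of the arguments.
    # (Hypothetical stand-in, not the simulation's actual function.)
    return factorial(a) * factorial(b) * factorial(c) * factorial(d)

def f_step(prev, a, b, c, d, which):
    # Recurrence: given the value `prev` at the adjacent point where
    # argument `which` was one smaller, the new value is prev times
    # the new (already incremented) argument, since (n+1)! = (n+1)·n!.
    new_arg = (a, b, c, d)[which]
    return prev * new_arg
```

For example, starting from `f_direct(2, 3, 4, 5)` and incrementing the second argument, `f_step(prev, 2, 4, 4, 5, which=1)` agrees with `f_direct(2, 4, 4, 5)` at the cost of one multiplication instead of four factorials.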

The recurrence relation approach is about 100x faster than the memoization approach for small runs. However, with memoization the time per run decreases as the simulation progresses: the longer it runs, the more values it caches, and so the greater the chance that a needed value has already been computed. By the end, the recurrence relation approach is only about 2x faster than memoization. With an even larger run, I suspect the times for the two approaches would be roughly equal.

The new approach is faster, but it was more work to implement, and it depends critically on the assumption that consecutive arguments differ only by one increment of one argument.

Or it’s possible the gains from memoization will tend to plateau before surpassing the recurrence relation because of issues with memory usage and, if you’re particularly unlucky, having to go back and forth between RAM and swap.

Dynamic programming, memoization, and recurrence relations almost always go hand in hand, especially in the algorithm design literature.

Not exactly sure about your problem, but it seems to me that you can do something like this:

1. Cache any computed result.

2. If the numerical integral that needs to be computed is already in the cache, great, just take it.

3. If it is not in the cache, see whether an adjacent result is available, and apply the recurrence to get the result.

4. Otherwise, when nothing works, just evaluate it the hard way and, of course, cache the result.
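The three-tier lookup described above might be sketched as follows. The expensive function here is a hypothetical stand-in (a product of factorials, chosen only because it has an obvious one-step recurrence); the real simulation's function and recurrences aren't given in the post:

```python
from math import factorial

cache = {}

def expensive(args):
    # Stand-in for the costly computation (hypothetical example).
    a, b, c, d = args
    return factorial(a) * factorial(b) * factorial(c) * factorial(d)

def recurrence(neighbor_value, args, i):
    # For this stand-in, incrementing argument i multiplies the
    # value by the new argument, since (n+1)! = (n+1)·n!.
    return neighbor_value * args[i]

def f(args):
    # 1. Already in the cache? Just take it.
    if args in cache:
        return cache[args]
    # 2. Adjacent result available? Apply the recurrence.
    for i in range(4):
        if args[i] > 0:
            neighbor = args[:i] + (args[i] - 1,) + args[i + 1:]
            if neighbor in cache:
                value = recurrence(cache[neighbor], args, i)
                cache[args] = value
                return value
    # 3. Otherwise, evaluate the hard way and cache the result.
    value = expensive(args)
    cache[args] = value
    return value
```

Since the simulation only ever increments one argument per call, step 2 should hit almost every time after the first call, so the expensive fallback in step 3 runs rarely.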