Understanding recursion II

People often joke that in order to understand recursion, you must first understand recursion. This is funny as far as it goes, but it illustrates a common misunderstanding. (I’m not saying the people who make the joke don’t understand what they’re doing. They probably do, and are having a little fun with the people who do not.)

Novices often think that recursion means solving a problem in terms of itself, which sounds like an infinite regress. And it might be. But an important qualifier is missing in this description. Recursion is about solving a problem in terms of smaller versions of itself. The process ends because the problems keep getting smaller until one is reached that is small enough to be solved directly.
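
To make the qualifier concrete, here is a minimal sketch in Python (factorial is my own choice of example, not one from the post): each call hands off a strictly smaller problem, and a base case stops the descent.

    def factorial(n):
        # Base case: a problem small enough to solve directly.
        if n <= 1:
            return 1
        # Recursive case: a strictly smaller version of the same problem.
        return n * factorial(n - 1)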

It would be more accurate, but less humorous, to say that in order to have a good understanding of recursion, you must first have a rough understanding of recursion.

Related post: The secret to understanding recursion

15 thoughts on “Understanding recursion II”

  1. Absolutely. When you understand that most recursive functions pass state to themselves, and that the result is obtained by exhausting that state, or by at least one piece of it reaching a threshold, then the concept is much easier to understand.

    As you said in your earlier post, many are concerned with visualizing the flow of the function when that’s precisely what they don’t have to do.
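
    A rough sketch of that view (my own toy example): the unprocessed input is the state being passed along, and the recursion stops once that state is exhausted.

        def total(remaining, acc=0):
            # State passed to each call: the unprocessed items and a running total.
            if not remaining:        # state exhausted: stop and report the result
                return acc
            return total(remaining[1:], acc + remaining[0])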

  2. I always thought of it as an extension of mathematical induction… running backwards?

  3. It’s also important to understand that recursion is for the developer, not the compiler or the computer. The compiler may unroll it into a loop or otherwise optimize based on tail recursion, etc., but the point of recursion is its expressiveness, not its efficiency.

    This is only a problem when loops are a familiar pattern to the eyes and recursion is not.

    I regularly get pushback on recursion, either due to memory concerns or because a loop would supposedly be simpler and easier to understand. As for memory, yes, you must understand the memory usage of your code. As for easier to understand, that’s really an expression of the tautology: if I understand A better than I understand B, then I don’t understand B as well as I understand A.
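
    For example (my own sketch; CPython happens not to perform this transformation, but many compilers do for tail calls), the same function can be written in its expressive recursive form and in the loop form a compiler might effectively produce:

        def gcd_recursive(a, b):
            # Expressive form: reads like the mathematical definition.
            return a if b == 0 else gcd_recursive(b, a % b)

        def gcd_loop(a, b):
            # What a tail-call-optimizing compiler could effectively emit.
            while b != 0:
                a, b = b, a % b
            return a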

  4. Dan S wrote: “As for easier to understand, that’s really an expression of the tautology: if I understand A better than I understand B, then I don’t understand B as well as I understand A.”

    I disagree. A simple loop is probably easier to understand than generic recursion, both in what is being done and in how the computer is likely to execute it. Non-tail recursion seems more attractive than the equivalent loop implementation in a C-like language (requiring explicit management of a secondary stack), e.g., to implement flood-fill. Vector-like operations are perhaps even easier to understand, along the lines of “For every A in B do Y”.
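
    To illustrate the flood-fill contrast (my own sketch, assuming a grid of color values): the recursive version leans on the call stack, while the loop version has to manage an explicit secondary stack of its own.

        def flood_fill(grid, x, y, old, new):
            # Recursive version: the call stack remembers where to resume.
            if old == new or not (0 <= y < len(grid) and 0 <= x < len(grid[0])):
                return
            if grid[y][x] != old:
                return
            grid[y][x] = new
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                flood_fill(grid, x + dx, y + dy, old, new)

        def flood_fill_loop(grid, x, y, old, new):
            # Loop version: an explicit secondary stack replaces the call stack.
            if old == new:
                return
            stack = [(x, y)]
            while stack:
                x, y = stack.pop()
                if 0 <= y < len(grid) and 0 <= x < len(grid[0]) and grid[y][x] == old:
                    grid[y][x] = new
                    stack.extend(((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)))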

    While the degree of difficulty in understanding may be strongly related to exposure to C-like programming languages, I suspect that “lather, rinse, repeat” is easier to understand for people in general than the equivalent tail recursive wash hair function. One might argue that it would be better to have one more powerful primitive (recursion) than providing loops (for the simple cases) as well as recursion, but human language trends seem to indicate that even that might not be appropriate (though human language trends, being descriptive, might simply indicate that the lazy approach creates new terms rather than applying modifiers to existing terms in a way that establishes a more coherent vocabulary).

    I am not a programmer and have had only limited exposure to C/C++ (with a little reading on other languages), so I might be quite wrong. (I am also a bit mentally odd, so it is not reasonable to assume people in general would think as I do–though programmers might be more similar than the general population.)

  5. There is another kind of recursion, in which the recursive reference is not to a simplified case but to the original case itself. In this case, there is an infinite regress, and the true solution is a fixed point of the infinite regression. This form of recursion does not necessarily give any algorithm for solving the recurrence; it only states a condition which the solution must satisfy.

    Here is a very simple example of this. Suppose we are summing an infinite geometric series S = 1 + x + x^2 + x^3 + …. In this case, you can say that S = 1 + x * S, a recursive definition of S in terms of itself.
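
    A quick numerical check of that fixed-point view (my own sketch, assuming |x| < 1 so the series converges): iterating S = 1 + x*S drives S toward 1/(1 - x), the sum of the series.

        def geometric_sum(x, iterations=100):
            # Iterate the recurrence S = 1 + x*S; for |x| < 1 this converges
            # to its fixed point 1/(1 - x), the sum of the series.
            s = 0.0
            for _ in range(iterations):
                s = 1 + x * s
            return s

        print(geometric_sum(0.5))    # approximately 2.0, i.e. 1/(1 - 0.5)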

    The Recursion Theorem is a much more general case of this phenomenon in computer science. Among other things, it states that a computer program can be defined with reference to its own source code — even though obviously the source code is of the same level of complexity as the program itself. More formally, if you define a computer program, but leave a blank space for where its source code reference will be, and you then recursively fill in the blank space with whatever you have defined so far, you will reach a (recursively-defined) fixed point.

  6. Another interesting point about understanding recursion is its applicability.

    It is not about understanding some code already written in recursive form, but about coming up with algorithms that leverage recursion (e.g. the divide and conquer paradigm) in situations where its use is non-obvious, such as finding the closest pair of points in a plane.

    http://en.wikipedia.org/wiki/Closest_pair_of_points_problem
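
    A rough Python sketch of the divide-and-conquer recursion for that problem (my own condensed version of the textbook algorithm, returning only the minimum distance and assuming at least two points):

        import math

        def closest_pair_distance(points):
            # Split on x, recurse on both halves, then check a narrow strip
            # around the dividing line for pairs that straddle it.
            def dist(p, q):
                return math.hypot(p[0] - q[0], p[1] - q[1])

            def brute(pts):
                return min(dist(pts[i], pts[j])
                           for i in range(len(pts))
                           for j in range(i + 1, len(pts)))

            def rec(pts):
                if len(pts) <= 3:
                    return brute(pts)
                mid = len(pts) // 2
                mid_x = pts[mid][0]
                d = min(rec(pts[:mid]), rec(pts[mid:]))
                # Points within d of the dividing line, ordered by y; each needs
                # to be compared with only a handful of following neighbors.
                strip = sorted((p for p in pts if abs(p[0] - mid_x) < d),
                               key=lambda p: p[1])
                for i in range(len(strip)):
                    for j in range(i + 1, min(i + 8, len(strip))):
                        d = min(d, dist(strip[i], strip[j]))
                return d

            return rec(sorted(points))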

  7. In order to fully understand recursion, you must know something trivial about recursion, and how to generalize from that.

  8. If you really want to understand recursion and appreciate its descriptive power, learn a language that doesn’t just support reasoning by recursion, but pretty much requires it — Prolog.

  9. I recently heard it stated, “Recursion is either trivial or slightly more complex than recursion.”

  10. Subhrajyoti Choudhury

    John, can you explain tail recursion and how it is different from normal recursion? I read about it in the context of quicksort, and I believe the writer was saying that the stack doesn’t grow with tail recursion. If you have recursion, how can the stack not grow? Isn’t that the whole point of recursion?

  11. Subhrajyoti: Conceptually, tail recursion is just a special case of recursion. It’s not special in theory. But in practice, its special form makes it possible for a compiler to transform the code to remove the recursion. Conceptually the stack is growing with each call, but in actual execution it is not.

  12. Further clarifying tail recursion: Tail recursion is when the recursive call is at the end (tail) of the recursive function. Because a recursively called function will return to the end of its caller, which will immediately return, all of the caller’s stack variables will be dead (the return address is still alive, but it is the same for all recursive calls). A tail recursive function can be treated as a loop with only the parameters (and any global values) living across the jump to the start of the function/loop. (I hope this helps.)
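
    A sketch of the difference (my own example; CPython does not eliminate tail calls, so in Python both versions really do grow the stack, but a tail-call-optimizing compiler could turn the second into a loop):

        def sum_to(n):
            # Not a tail call: after the recursive call returns we still have
            # to add n, so each caller's frame must be kept alive.
            if n == 0:
                return 0
            return n + sum_to(n - 1)

        def sum_to_tail(n, acc=0):
            # Tail call: the recursive call is the last thing done, its result
            # is returned unchanged, and the current frame is no longer needed.
            if n == 0:
                return acc
            return sum_to_tail(n - 1, acc + n)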

  13. David Harris: I believe what you are describing, building up from an original case rather than working down to a base case, is called co-recursion, the dual of recursion?

    Subhrajyoti: To add to what the others have written, tail recursion is also just a special case of a tail call. It may be helpful conceptually to consider the return addresses as living on their own simple stack, rather than thinking about data and return addresses being managed together in a stack frame structure on a single stack. To perform a call, the address of the next instruction is pushed on the return stack and a jump to the subroutine is performed. To perform a return, the return stack is popped and a jump is made to the popped address. When compiling a procedure, if a compiler sees a subroutine call followed by a return from the caller, then it may eliminate the push from the call and the pop from the return, as they would just cancel out, and compile only the jump. Note that if the subroutine has a return, then it will pop the address of the caller’s caller’s next instruction and execution will continue there as desired.
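
    On the co-recursion point above, a small sketch (my own example, not from the thread): instead of consuming a structure down to a base case, co-recursion produces a possibly infinite structure outward from a seed, which a Python generator expresses naturally.

        from itertools import islice

        def unfold(step, seed):
            # Co-recursive pattern: keep producing values from the current state,
            # never shrinking toward a base case.
            while True:
                value, seed = step(seed)
                yield value

        fib = unfold(lambda s: (s[0], (s[1], s[0] + s[1])), (0, 1))
        print(list(islice(fib, 10)))    # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]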

  14. You might say John did explain tail recursion. For non-tail recursion: to understand it, first you need a rough understanding, which you then must refine.
