Lisp – How Can Lisp Produce an Iterative Process from a Recursive Procedure?


I am starting to learn Lisp, using the SICP book. The authors mention that a procedure (i.e. function) can be recursive or iterative. They also point out that the process a procedure generates can itself be recursive or iterative, and that, surprisingly, a recursive procedure can sometimes generate an iterative process.

The example given is the factorial procedure, which is a recursive procedure, but which generates an iterative process:

(define (factorial n)
    (iter 1 1 n))

(define (iter product counter max-count)
    (if (> counter max-count)
        product
        (iter (* counter product)
              (+ counter 1)
              max-count)))
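
To see why this counts as an iterative process, here's the shape of the computation for (factorial 6), traced out in the style SICP uses. At every step, the three arguments carry the complete state of the computation, so nothing is left pending on the stack:

  (factorial 6)
  (iter 1 1 6)
  (iter 1 2 6)
  (iter 2 3 6)
  (iter 6 4 6)
  (iter 24 5 6)
  (iter 120 6 6)
  (iter 720 7 6)
  720

If you stopped the computation at any line and restarted it from just that line, you'd get the same answer; that's what makes the process iterative.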

And here's a quotation from the book:

One reason that the distinction between process and procedure may be
confusing is that most implementations of common languages (including
Ada, Pascal, and C) are designed in such a way that the interpretation
of any recursive procedure consumes an amount of memory that grows
with the number of procedure calls, even when the process described
is, in principle, iterative. The implementation of Scheme we shall
consider does not share this defect. It will execute an iterative
process in constant space, even if the iterative process is described
by a recursive procedure.

Question: I understand the principles involved (i.e. the what), but I still don't understand how the Lisp interpreter/compiler can generate an iterative process from a recursive procedure and evaluate it in constant space, and why most other languages can't do the same.

Best Answer

Essentially, tail recursion. The problem with recursion in general is that each function call gets its own stack frame. In this stack frame you store all your local variables and whatnot.

So if you run a function like

  int fact(int n){
      if(n == 0)
          return 1;
      return n * fact(n-1);
  }

fact is called n times, allocating n stack frames. If n is big, that's a lot of memory.
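
To see where that memory goes, here's the shape of the computation for fact(5): the chain of deferred multiplications is exactly the state those n stack frames have to remember until the recursion bottoms out:

  fact(5)
  5 * fact(4)
  5 * (4 * fact(3))
  5 * (4 * (3 * fact(2)))
  5 * (4 * (3 * (2 * fact(1))))
  5 * (4 * (3 * (2 * (1 * fact(0)))))
  5 * (4 * (3 * (2 * (1 * 1))))
  120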

There is hope, however, when our function is structured in the form

 int f(int x){
     ...
     return g(foo); // Function call is in the final position
 }

Then we can throw away f's stack frame before entering g. This means we won't pile up frames as long as we stick to simple tail calls. In fact, this can compile to each function having its own label and jmping to the tail-called function, which is very fast.

All Schemes are properly tail recursive (the standard requires it), as are most functional languages' implementations, like those of SML, OCaml, Haskell, Scala, and (kind of) Clojure. This means that whenever a function call is the absolute last thing in your function, it won't allocate a new stack frame. Using this, we can write a do-while loop in Scheme as follows:

  (define (do-while pred body)
     (body)                   ;; Execute the body
     (if (pred)               ;; Check the predicate
         (do-while pred body) ;; Tail call
         '()))                ;; Exit

And this runs in exactly the same amount of space as the equivalent imperative code would :) Pretty nifty.
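
As a quick sanity check, here's a hypothetical usage (the counter variable is my own) that prints the numbers 1 through 5. Since the recursive call in do-while is a tail call, the loop runs in constant stack space no matter how high the counter goes:

  (define count 0)
  (do-while (lambda () (< count 5))       ;; keep going while count < 5
            (lambda ()
              (set! count (+ count 1))
              (display count)
              (newline)))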

Note: Tail Call Optimization /= Tail Recursion

A common misconception is that TCO is strictly confined to the case where a function tail calls itself. That is a specific subset of TCO, and it's what most JVM languages provide. Languages like Scheme, which aren't restricted by the JVM's limitations, are properly TCO'd, making it possible to, say, write a DFA that runs in constant memory by tail calling between states.
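
For instance, here's a minimal sketch (the state names are my own) of a two-state DFA that checks whether a list of bits contains an even number of 1s. even-state and odd-state tail call each other, so under proper TCO the machine runs in constant space no matter how long the input is:

  (define (even-state bits)               ;; accepting state: even number of 1s seen
    (cond ((null? bits) 'accept)
          ((= (car bits) 1) (odd-state (cdr bits)))
          (else (even-state (cdr bits)))))

  (define (odd-state bits)                ;; rejecting state: odd number of 1s seen
    (cond ((null? bits) 'reject)
          ((= (car bits) 1) (even-state (cdr bits)))
          (else (odd-state (cdr bits)))))

  (even-state '(1 0 1 1))                 ;; => reject (three 1s)

On the JVM, each of those mutual calls would consume a frame; with real TCO they're just jumps between labels.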