Essentially, tail recursion addresses the problem with recursion in general: each function call gets its own stack frame, and in this stack frame you store all your local variables and whatnot.
So if you run a function like
int fact(int n) {
    if (n == 0)
        return 1;
    return n * fact(n - 1);
}
fact is called n times, allocating n stack frames. If n is big, that's a lot of memory.
There is hope, however, when our function is structured in the form
int f(int x){
...
return g(foo); // Function call is in the final position
}
Then we can throw away f's stack frame before entering g. This means that we won't allocate too many frames as long as we're using simple tail calls. It will actually compile to each function having its own label and jmping to the tail-called function, which is very fast.
All Schemes are tail recursive, as are most functional languages' implementations, like those of SML, OCaml, Haskell, Scala, and (kind of) Clojure. This means that whenever a function call is the absolute last thing in your function, it won't allocate a new stack frame. Using this, we can write a do-while loop in Scheme as follows:
(define (do-while pred body)
  (body)                   ;; Execute the body
  (if (pred)               ;; Check the predicate
      (do-while pred body) ;; Tail call
      '()))                ;; Exit
And this runs in exactly the same amount of space as the equivalent imperative code would :) Pretty nifty.
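The same shape can be sketched in C (with the caveat that C compilers only do this as an optimization, so constant stack space is not guaranteed the way it is in Scheme; the bump/below_5 pair is a hypothetical body and predicate for demonstration):

```c
#include <stdbool.h>

/* do_while executes body once, then keeps going while pred holds; the
 * recursive call is in tail position, so a TCO-capable compiler emits
 * the same loop the imperative version would. */
static void do_while(bool (*pred)(void), void (*body)(void)) {
    body();                   /* execute the body */
    if (pred())               /* check the predicate */
        do_while(pred, body); /* tail call */
}

/* Hypothetical body/predicate pair for demonstration. */
static int counter = 0;
static void bump(void)    { counter++; }
static bool below_5(void) { return counter < 5; }
```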
Note: Tail Call Optimization ≠ Tail Recursion
A common misconception is that TCO is strictly confined to the case where a function tail-calls itself. That is a specific subset of TCO, and it is what most JVM languages provide. Languages like Scheme, which aren't restricted by the JVM's limitations, are properly TCO'd, making it possible to, say, write a DFA that runs in constant memory by tail-calling between states.
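As a sketch of that DFA idea (again in C, so the constant-memory behavior depends on the compiler performing sibling-call optimization, e.g. gcc/clang at -O2; in Scheme it would be guaranteed): a two-state DFA that accepts binary strings containing an even number of 1s, where each state is a function that tail-calls the next state:

```c
#include <stdbool.h>

static bool odd_ones(const char *s);

/* State: we have seen an even number of '1's so far (accepting). */
static bool even_ones(const char *s) {
    if (*s == '\0')
        return true;
    return (*s == '1') ? odd_ones(s + 1)   /* tail call into the other state */
                       : even_ones(s + 1); /* tail call, staying in this state */
}

/* State: we have seen an odd number of '1's so far (rejecting). */
static bool odd_ones(const char *s) {
    if (*s == '\0')
        return false;
    return (*s == '1') ? even_ones(s + 1)
                       : odd_ones(s + 1);
}
```

With proper tail calls, these two functions jmp back and forth between each other, so even a very long input consumes constant stack; without TCO, every character of input would cost a stack frame.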
Don't try to descend the stack in your head. That way madness lies. By the magic of mathematical induction, every recursive invocation below you can be assumed to be debugged and already working, thanks to the programmer who is the future you. Treat it like a call to a library function that you don't have the source code to. If it helps you to work it out, comment it out temporarily and replace it with its results for a test case of reasonable size.
In other words, you don't need to be able to follow the complete algorithm. All you need to know is how to make the problem slightly smaller so you can call an algorithm that already works thanks to recursive you, and write a trivial base case. I can't stress that enough. Trust that the recursive call already works. Once you do that, recursive algorithms become way easier.
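To make that concrete (a small C sketch; the function name is illustrative): to sum n numbers, shrink the problem by one element, trust that the recursive call on the smaller array already works, and write the trivial base case:

```c
/* Sum of the first n elements: assume sum(xs + 1, n - 1) is already
 * debugged (the induction hypothesis), and handle the trivial base case. */
static int sum(const int *xs, int n) {
    if (n == 0)
        return 0;                      /* trivial base case */
    return xs[0] + sum(xs + 1, n - 1); /* trust the smaller problem */
}
```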
You keep on changing your function, but you keep picking ones that will run forever without converging.
Recursion gets complicated, fast. The first step to analyzing a proposed doubly recursive function is to try to trace it through on some sample values, to see what it does. If your calculation gets into an infinite loop, the function is not well defined. If your calculation gets into a spiral that keeps on getting bigger numbers (which happens very easily), it is probably not well defined.
If tracing it through gives an answer, you then try to come up with some pattern or recurrence relation between the answers. Once you have that, you can try to figure out its runtime. Figuring it out can be very, very complicated, but we have results such as the Master theorem that let us figure out the answer in many cases.
Beware that even with single recursion, it is easy to come up with functions whose runtime we do not know how to calculate. For instance consider the following:
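(The snippet is missing from this copy; the classic function fitting the description below is the Collatz recursion, reconstructed here as a best guess:)

```c
/* Collatz recursion: a singly recursive function, yet nobody knows
 * whether it terminates for every positive n (the Collatz conjecture),
 * so both its well-definedness and its runtime are open questions. */
static long collatz(long n) {
    if (n <= 1)
        return 1;
    if (n % 2 == 0)
        return collatz(n / 2);
    return collatz(3 * n + 1);
}
```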
It is currently unknown whether this function is always well-defined, let alone what its runtime is.