You're in luck! There (sort-of) is!
What you often want to do is identify the two (or three) cases:
- The base case
- The recursive case
- The exit case (sometimes optional)
That is:
- What you want to do
- Where you need to continue
- When you're done
Think of an example (DFS over a binary search tree):
bool DFS(Node currentNode, T searchValue)
{
    // base case
    if (currentNode.Value == searchValue)
        return true;

    // recursive case and exit case
    if (currentNode.Left != null && DFS(currentNode.Left, searchValue))
        return true;

    // recursive case and exit case
    if (currentNode.Right != null && DFS(currentNode.Right, searchValue))
        return true;

    return false;
}
So here we have:
- Base case: whether we have found our value
- Recursive case(s): run DFS in the child nodes
- Exit case: return true if DFS on the child nodes found the value
So now think of in-order traversal of the same tree:
- Base case: print out the node
- Recursive case(s):
- visit the left child
- visit the right child
- Exit case(s): does the node exist?
In the case of in-order traversal it looks like:
void InOrder(Node currentNode)
{
    // 3: exit case
    if (currentNode == null)
        return;

    // 2: recursive case
    InOrder(currentNode.Left);

    // 1: base case
    print(currentNode.Value);

    // 2: recursive case
    InOrder(currentNode.Right);
}
Almost all recursive functions will have these elements. Identifying them and putting them in the right order is key.
Yes, some languages and compilers will convert recursive logic to non-recursive logic. This is known as tail call optimization - note that not all recursive calls are tail call optimizable. In this situation, the compiler recognizes a function of the form:
int foo(int n) {
    ...
    return bar(n);
}
Here, the language is able to recognize that the result being returned is the result from another function and change a function call with a new stack frame into a jump.
Realize that the classic factorial method:
int factorial(int n) {
    if (n == 0) return 1;
    if (n == 1) return 1;
    return n * factorial(n - 1);
}
is not tail call optimizable, because the result of the recursive call must still be multiplied by n after it returns.
To make this tail call optimizable, move the multiplication into an accumulator argument:
int _fact(int n, int acc) {
    if (n == 1) return acc;
    return _fact(n - 1, acc * n);
}

int factorial(int n) {
    if (n == 0) return 1;
    return _fact(n, 1);
}
Compiling this code with gcc -O2 -S fact.c (the -O2 is necessary to enable the optimization in the compiler; with the more aggressive -O3 the output gets hard for a human to read) produces:
_fact:
.LFB0:
.cfi_startproc
cmpl $1, %edi
movl %esi, %eax
je .L2
.p2align 4,,10
.p2align 3
.L4:
imull %edi, %eax
subl $1, %edi
cmpl $1, %edi
jne .L4
.L2:
rep
ret
.cfi_endproc
One can see in segment .L4 a jne (a jump) rather than a call (which would do a subroutine call with a new stack frame): the recursion has been compiled into a loop.
Please note this was done with C. Tail call optimization in Java is hard and depends on the JVM implementation -- tail-recursion + java and tail-recursion + optimization are good tag sets to browse. You may find other JVM languages are able to optimize tail recursion better (try Clojure (which requires the explicit recur form to tail call optimize) or Scala).
It is good to at least understand how recursion works, because some algorithms are naturally recursive and thus much easier to express using recursion.
Also, recursive algorithms (or implementations) are not inherently slower than iterative ones. In fact, every recursive algorithm can be implemented by an equivalent iterative implementation, at the cost of having to keep track of some intermediate/temporary values yourself, where the recursive version automatically keeps track of those in the function call stack.
One example of a naturally recursive algorithm is if you want to apply an operation to all the nodes of a tree. The most natural implementation here is to have a function that performs the operation on one node and calls itself recursively for each of the children of the node.
For some algorithms, like calculating a Fibonacci sequence, recursion seems natural, but a naive implementation will be much slower than an iterative implementation, because the recursive version keeps re-doing the same work over and over again. This just means that for those particular algorithms, recursion may not be the best choice, or that you need to use some techniques to remember intermediate values that might be needed again elsewhere in the algorithm.
For other algorithms, in particular those that use divide-and-conquer tactics, you will find that you need a stack to keep track of some things in the iterative solution. In those cases, a recursive version is much cleaner, because the stack handling becomes implicit.