You misunderstood recursion: although it can be used to replace iteration, there is no requirement that a recursive function contain no loops of its own.
The only requirement for a function to be considered recursive is the existence of a code path through which it calls itself, directly or indirectly. Every correct recursive function also has a base case of some sort: a conditional that stops it from recursing forever.
Your recursive function is ideal to illustrate the structure of recursive search with backtracking. It starts by checking the exit condition row < n, and proceeds to making search decisions at its level of recursion (i.e. picking a possible position for queen number row). After each iteration, a recursive call is made to build upon the configuration that the function has found so far; eventually, it "bottoms out" when row reaches n in the recursive call that is n levels deep.
It is good to at least understand how recursion works, because some algorithms are naturally recursive and thus much easier to express using recursion.
Also, recursive algorithms (or implementations) are not inherently slower than iterative ones. In fact, every recursive algorithm can be implemented by an equivalent iterative implementation, at the cost of having to keep track of some intermediate/temporary values yourself, where the recursive version automatically keeps track of those in the function call stack.
One example of a naturally recursive algorithm is if you want to apply an operation to all the nodes of a tree. The most natural implementation here is to have a function that performs the operation on one node and calls itself recursively for each of the children of the node.
For some algorithms, like calculating a Fibonacci sequence, recursion seems natural, but a naive implementation will be much slower than an iterative implementation, because the recursive version keeps re-doing the same work over and over again. This just means that for those particular algorithms, recursion may not be the best choice, or that you need to use some techniques to remember intermediate values that might be needed again elsewhere in the algorithm.
For other algorithms, in particular those that use divide-and-conquer tactics, you will find that you need a stack to keep track of something in the iterative solution. In those cases, a recursive version is much cleaner, because the stack handling becomes implicit.
Best Answer
Yes, some languages and compilers will convert recursive logic to non-recursive logic. This is known as tail call optimization - note that not all recursive calls are tail call optimizable. In this situation, the compiler recognizes a function of the form:
Here, the language is able to recognize that the result being returned is the result from another function and change a function call with a new stack frame into a jump.
Realize that the classic factorial method:
is not tail call optimizable, because the value returned by the recursive call must still be multiplied by n before the function itself can return.
To make this tail call optimizable, the function can be rewritten so that the multiplication happens before the recursive call, with the partial product passed along as an accumulator argument.
Compiling this code with
gcc -O2 -S fact.c
(the -O2 is necessary to enable the optimization in the compiler, but with more optimizations at -O3 it gets hard for a human to read), one can see in segment .L4 a jne rather than a call (which does a subroutine call with a new stack frame). Please note this was done with C. Tail call optimization in Java is hard and depends on the JVM implementation -- tail-recursion + java and tail-recursion + optimization are good tag sets to browse. You may find other JVM languages are able to optimize tail recursion better (try Clojure, which requires recur to tail call optimize, or Scala).