What’s the big-O complexity of this recursive algorithm?

algorithm-analysis, algorithms, big-o, complexity

I am following a course of Algorithms and Data Structures.

Today, my professor said the complexity of the following algorithm is 2^n.

I waited until the lesson was over, approached him, and told him I actually believed it was an O(n) algorithm. I had done the computation to prove it and wanted to show it to him, but he kept insisting it was not, without giving me any convincing explanation.

The algorithm is recursive, and its recurrence relation is:

       { 1         if n=1
T(n) = {
       { 2T(n/2)   otherwise

I worked it out to be O(n), this way:

Let's expand T(n)

T(n) = 2 * T(n/2)
     = 2 * [2 * T(n/(2^2))]
     = 2^2 * T(n/(2^2))
     = 2^2 * [2 * T(n/(2^3))]
     = 2^3 * T(n/(2^3))
     = ...
     = 2^i * T(n/(2^i)).

We stop when the term inside the T is 1, that is:

n/(2^i) = 1 ==> n = 2^i ==> i = log n

After the substitution, we obtain

T(n) = 2^log n * T(1)
     = n * 1
     = O(n).
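The expansion above can also be sanity-checked by evaluating the recurrence directly. A minimal sketch (the function name `T` is just the recurrence itself, and n is assumed to be a power of 2):

```python
def T(n):
    # T(1) = 1; T(n) = 2 * T(n/2) otherwise.
    if n == 1:
        return 1
    return 2 * T(n // 2)

# For every power of 2, T(n) comes out to exactly n,
# consistent with T(n) = n = O(n).
for k in range(11):
    n = 2 ** k
    assert T(n) == n
```

If the recurrence really grew like 2^n, T(1024) would be astronomically large instead of 1024.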

Since this algorithm came up in a lesson on Merge Sort, I noted that Merge Sort, which is notoriously O(n log n), has the recurrence 2T(n/2) + Theta(n) (obviously greater than 2T(n/2) alone), and I asked him why an algorithm with the smaller recurrence should get the higher big-O, because at that point it was counter-intuitive to me. He replied, word for word, "If you think that is counter-intuitive, you have serious problems in your math."

My questions are:

  1. Is there any fallacy in my demonstration?
  2. Wouldn't the last situation be counter-intuitive?

Best Answer

First of all, to analyze the complexity of a simple, recursive algorithm, the first stop would be the Master Theorem, where we match your expression T(n) = 2·T(n/2) against the schema T(n) = a·T(n/b) + f(n). By comparing coefficients, we can see that a = 2, b = 2, f(n) = x for some constant x. The Master Theorem then gives us the complexity depending on whether certain relations between the coefficients hold.

Here, the case applies where there exists a c such that f(n) = O(n^c) and c < log_b a, with c = 0 (f(n) = O(1) = O(n^0) and 0 < log_2 2 <=> 0 < 1). Therefore, the Master Theorem tells us that T(n) = Theta(n^(log_b a)) = Theta(n), so that also T(n) = O(n) holds. In other words, you are right.
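The extra Theta(n) term is exactly what separates this recurrence from Merge Sort's, which resolves the "counter-intuitive" comparison in the question: the larger recurrence does give the larger bound, n log n versus n (not 2^n). A quick numeric sketch, with hypothetical helper names:

```python
import math

def T_no_merge(n):
    # T(n) = 2*T(n/2), T(1) = 1  ->  solves to exactly n
    return 1 if n == 1 else 2 * T_no_merge(n // 2)

def T_merge(n):
    # T(n) = 2*T(n/2) + n, T(1) = 1  ->  solves to n*(log2(n) + 1)
    return 1 if n == 1 else 2 * T_merge(n // 2) + n

for k in (2, 5, 10):
    n = 2 ** k
    assert T_no_merge(n) == n
    assert T_merge(n) == n * (int(math.log2(n)) + 1)
```

Both are far below 2^n; the Theta(n) of merging per level is what contributes the extra log factor.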

Your proof checks out, but to be really convincing you should phrase it as a proof by induction. You have essentially used induction but have not clearly labeled each step, which makes it a bit hard to follow (and you did it backwards: a clean induction starts with the base case T(1) and shows that the property carries from T(n) to T(2n), rather than starting with the general case and unrolling down to the base case).
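For instance, an induction over k with n = 2^k could look like this:

    Base case:  T(2^0) = T(1) = 1 = 2^0.
    Hypothesis: T(2^k) = 2^k for some k >= 0.
    Step:       T(2^(k+1)) = 2 * T(2^k) = 2 * 2^k = 2^(k+1).

Hence T(n) = n for every power of two, i.e. T(n) = O(n).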
