Algorithms – Complexity in Nested Loops

Tags: algorithms, complexity, loops

Foreword:

  • In this post, I will conflate O(n) and Θ(n), a common abuse of complexity notation.
  • I will write pseudo-code to talk about algorithms, using whatever notation I find to my liking.

I know, yet another question about nested loops and complexity.

I recently wrote a nested loop that is the bottleneck of a larger application. It looked roughly like this:

while(firstLoopRunning) {
    foreach(element in myArray) {
        doSomeAction(element);
    }
}

My keen eye led me to the conclusion that, since doSomeAction is O(1), the algorithm had to be O(n²).

I know, however, that nested loops can have a different complexity. For instance, the following is in O(n·log(n)):

for(i in 0..n)            // the outer loop runs n times
    for(j = 1; j < n; j *= 2) // j doubles each step, so the inner loop runs log(n) times
        print(j);
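A quick way to sanity-check a growth rate like this is to count iterations empirically. Here is a minimal Java sketch (class name and test sizes are mine) that counts the operations performed by a doubling inner loop; the total comes out to n·log₂(n):

```java
// Hypothetical sketch: count inner-loop iterations for an inner
// loop that doubles j, to observe ~n*log2(n) growth empirically.
public class LoopCount {
    static long count(int n) {
        long ops = 0;
        for (int i = 0; i < n; i++) {
            // j takes the values 1, 2, 4, ..., i.e. log2(n) values
            for (long j = 1; j < n; j *= 2) {
                ops++;
            }
        }
        return ops;
    }

    public static void main(String[] args) {
        System.out.println(count(1024)); // prints 10240 = 1024 * log2(1024)
    }
}
```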

Then I thought, hey, I know that myArray has a maximum length. In this particular case, I know by design that there will never be more than 8 elements in myArray, so the algorithm could look like this:

while(firstLoopRunning) {
    // Let's do this one in Java, myArray would actually be an ArrayList
    try {
        doSomeAction(myArray.get(0));
        doSomeAction(myArray.get(1));
        doSomeAction(myArray.get(2));
        doSomeAction(myArray.get(3));
        doSomeAction(myArray.get(4));
        doSomeAction(myArray.get(5));
        doSomeAction(myArray.get(6));
        doSomeAction(myArray.get(7));
    } catch(IndexOutOfBoundsException ex) {
       // Just a lazy way for me to avoid checking if the index exists
    }
}

And voilà! Here it is in O(n)!

Moreover, I know for a fact that compilers often unroll constant-bound loops such as this one:

for(i in 0..8) // not really a loop
    someCall(i);

into a sequence of calls.

Which brought me to a first conclusion:

An iteration over an array whose length has a known finite upper bound is in O(1).

I think I'm right about this one (aka: correct me if I'm wrong).
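The conclusion above can be checked by counting operations: if the inner loop is capped at 8 elements, the total work is at most 8·m for m outer iterations, which is linear in m. A minimal sketch, assuming a hypothetical work counter in place of doSomeAction:

```java
// Hypothetical sketch: m outer iterations over an array capped at
// 8 elements cost at most 8*m operations, i.e. O(m).
public class BoundedInner {
    static long work(int m, int arrayLen) {
        long ops = 0;
        for (int iter = 0; iter < m; iter++) {
            // inner loop is bounded by the constant 8 by design
            for (int k = 0; k < Math.min(arrayLen, 8); k++) {
                ops++; // stands in for doSomeAction(element)
            }
        }
        return ops;
    }

    public static void main(String[] args) {
        System.out.println(work(1000, 8)); // prints 8000
        System.out.println(work(2000, 8)); // prints 16000: doubling m doubles the work
    }
}
```

Doubling the number of outer iterations doubles the work, while growing the (capped) array does not change the asymptotic class.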

On the other hand, we are working with finite data, and the whole point of complexity theory is to reason about data that can grow without bound.

So let's use another classical example: we are iterating over the squares of a grid (roughly, a bidimensional array), so clearly an O(n²) algorithm. What if, however, we put all the squares in a single flat list and changed this:

// O(n²) algorithm over myGrid[][]
for(i in myGrid)
    for(j in myGrid[i])
        yieldAction(myGrid[i][j])

into this

// O(n) algorithm over myFlatGrid[]
for(i in myFlatGrid)
    yieldAction(myFlatGrid[i]) // same cell as myGrid[i / rowLength][i mod rowLength]

Basically the same thing but a different complexity, yet no loop has actually been removed. Granted, the grid can grow in both dimensions, so the amount of data really is quadratic in the side length, but in a way it can definitely be traversed linearly (even though that's not worth it).
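The "same number of operations" claim is easy to verify: both traversals touch every cell exactly once. A minimal Java sketch (class and method names are mine) counting visits for an r×c grid:

```java
// Hypothetical sketch: nested and flattened traversals of an
// r x c grid both perform exactly r*c visits.
public class GridTraversal {
    static long nestedVisits(int r, int c) {
        long visits = 0;
        for (int i = 0; i < r; i++)
            for (int j = 0; j < c; j++)
                visits++; // yieldAction(myGrid[i][j])
        return visits;
    }

    static long flatVisits(int r, int c) {
        long visits = 0;
        for (int i = 0; i < r * c; i++)
            visits++;     // yieldAction(myFlatGrid[i]), same cell via i / c and i % c
        return visits;
    }

    public static void main(String[] args) {
        System.out.println(nestedVisits(3, 4)); // prints 12
        System.out.println(flatVisits(3, 4));   // prints 12
    }
}
```

Only the bookkeeping of the indices differs; the operation count is identical, which is exactly the puzzle the question raises.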

But I must be missing something. What sense does it make that, if I twist the data a little, I can change the complexity of an algorithm from a theoretical point of view, although the exact same number of operations is performed?

Best Answer

You seem to think that the complexity of an algorithm is linked to the number of nested loops. It is not the case. The following piece of code is O(1):

for i in [1.. 10^15]:
    for j in [1.. 10^15]:
        for k in [1.. 10^15]:
            dosomethingO1()

Complexity is related to the rate of growth of the number of operations. Unrolling your loop does not change that. So:

foreach(element in myArray) {
    doSomeAction(element);
}

is O(1) if the number of elements in myArray is bounded. The overall complexity is then decided by the number of iterations of the while loop.

About your grid example: if n is the number of cells and your grid is represented as a nested array, looping over those arrays still yields O(n) complexity.
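The key is the choice of variable: once n counts cells rather than rows, the nested loops perform exactly n actions. A minimal sketch (names are mine) making this concrete, including jagged rows:

```java
// Hypothetical sketch: with n defined as the number of cells, the
// nested loop performs exactly n actions, even for jagged rows.
public class CellCount {
    static long visitAll(int[][] grid) {
        long actions = 0;
        for (int[] row : grid)
            for (int cell : row)
                actions++; // one action per cell
        return actions;
    }

    public static void main(String[] args) {
        int[][] jagged = { {1, 2, 3}, {4}, {5, 6} };
        System.out.println(visitAll(jagged)); // prints 6, the number of cells
    }
}
```

Measured against n = number of cells, the nested-loop traversal is O(n), in agreement with the flat-list version.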
