How to find time complexity of an algorithm

algorithm, complexity-theory, time-complexity

The Question

How do I find the time complexity of an algorithm?

What have I done before posting this question on SO?

I have gone through this, this, and many other links, but nowhere was I able to find a clear and straightforward explanation of how to calculate time complexity.

What do I know?

Take code as simple as the one below:

char h = 'y'; // This will be executed 1 time
int abc = 0; // This will be executed 1 time

And take a loop like the one below:

for (int i = 0; i < N; i++) {
    Console.Write("Hello World!");
}

int i = 0; This will be executed only once.
The time is counted for the assignment i = 0, not for the declaration itself.

i < N; This will be executed N+1 times.

i++; This will be executed N times.

So the number of operations required by this loop is

1 + (N + 1) + N = 2N + 2
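To sanity-check this count, here is a small sketch of my own in which each of the three pieces bumps a counter (the ops variable exists only for the illustration):

int N = 10;
long ops = 0;

int i = 0;                 // the initialization: executed once
ops++;
while (true) {
    ops++;                 // the test i < N: executed N + 1 times
    if (!(i < N)) break;
    Console.Write("Hello World!");
    i++;                   // the increment: executed N times
    ops++;
}

Console.WriteLine(ops == 2L * N + 2);   // prints True: 1 + (N + 1) + N = 2N + 2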

Note: This may still be wrong, as I am not confident about my understanding of how time complexity is calculated.

What do I want to know?

OK, so I think I know these small, basic calculations, but in most cases I have seen the time complexity given as

O(N), O(n²), O(log n), O(n!), and many others.
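(For context, here is roughly what loops in some of these classes look like; these are just sketches of my own with placeholder bodies, not real algorithms.)

for (int i = 0; i < N; i++) { }         // O(N): one pass over the input

for (int i = 0; i < N; i++)             // O(N^2): a loop nested in a loop,
    for (int j = 0; j < N; j++) { }     // both running N times

for (int i = N; i > 1; i /= 2) { }      // O(log N): the range halves each step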

Can anyone help me understand how one calculates the time complexity of an algorithm? I am sure there are plenty of newbies like me who want to know this.

Best Answer

How to find time complexity of an algorithm

You add up how many machine instructions it will execute as a function of the size of its input, then simplify the expression to the largest term (for very large N), dropping any constant factor.

For example, let's see how we simplify 2N + 2 machine instructions and describe this as just O(N).

Why do we remove the two 2s?

We are interested in the performance of the algorithm as N becomes large.

Consider the two terms 2N and 2.

What is the relative influence of these two terms as N becomes large? Suppose N is a million.

Then the first term is 2 million and the second term is only 2.

For this reason, we drop all but the largest terms for large N.

So, now we have gone from 2N + 2 to 2N.

Traditionally, we are only interested in performance up to constant factors.

This means that we don't really care if there is some constant multiple of difference in performance when N is large. The unit of 2N is not well-defined in the first place anyway. So we can multiply or divide by a constant factor to get to the simplest expression.

So 2N becomes just N.
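The same recipe works for other loop shapes. As a sketch (my own example, counted with the same scheme as in the question), take a loop that halves its counter on every pass:

for (int i = N; i > 1; i /= 2) {
    Console.Write("Hello World!");
}

// i = N runs once; the test i > 1 runs about log2(N) + 1 times;
// i /= 2 runs about log2(N) times: roughly 2*log2(N) + 2 instructions.

Exactly as with 2N + 2, we keep the largest term and drop the constant factor, leaving O(log N).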