With small n, Big O is just about useless; it's the hidden constants, or even the actual implementation, that will more likely be the deciding factor for which algorithm is better. This is why most sorting functions in standard libraries switch to a faster insertion sort once subarrays shrink to a handful of elements. The only way to figure out which one will be better is benchmarking with realistic data sets.
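For instance, a minimal benchmark sketch in Python (the size, the iteration count, and the textbook insertion_sort below are my own illustration, not any particular library's cutoff):

import random
import timeit

def insertion_sort(a):
    # O(n^2) in the worst case, but the constant factors are tiny for small n
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

data = [random.random() for _ in range(16)]  # "small n": swap in your real sizes
print(timeit.timeit(lambda: insertion_sort(data[:]), number=100_000))
print(timeit.timeit(lambda: sorted(data[:]), number=100_000))  # library O(n log n) sort

Note that in CPython sorted() is implemented in C, so the comparison also shows how much the actual implementation matters at these sizes.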
Big O is good for large data sets and for discussing how an algorithm will scale: it's better to have an O(n log n) algorithm than an O(n^2) one when you expect the data to grow. But if the O(n^2) version works fine the way it is and the input sizes will likely remain constant, just make a note that you can upgrade it and leave it as is; there are likely other things you need to worry about right now.
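To see how fast the gap opens, here is a tiny sketch (the sizes are picked arbitrarily) that just prints the raw growth of the two functions, ignoring constants exactly as Big O does:

import math

for n in [10, 1_000, 1_000_000]:
    print(n, int(n * math.log2(n)), n ** 2)
# n = 10:        33 vs 100        -- roughly comparable
# n = 1,000,000: ~2*10^7 vs 10^12 -- about 50,000x apart

If the input genuinely stays around the first row, the asymptotic difference is noise; if it drifts toward the last row, it dominates everything else.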
(Note that all the "large"s and "small"s in the previous paragraphs are meant to be taken relatively: small can be a few million and large can be a hundred; it all depends on the specific case.)
Often there will be a trade-off between time and space. For example, quicksort requires O(log n) extra memory while heapsort can use O(1) extra memory, but the hidden constants in heapsort make it less attractive (there's also the stability issue, which makes mergesort more attractive if you don't mind paying the extra memory cost).
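As a side note on where quicksort's O(log n) comes from: it is stack space, and the classic trick for guaranteeing it is to recurse into the smaller partition and loop over the larger one, which caps the recursion depth at log2(n). A sketch (Hoare-style partitioning, written from scratch for illustration):

def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        pivot = a[(lo + hi) // 2]
        i, j = lo, hi
        while i <= j:                 # Hoare-style partition around the pivot
            while a[i] < pivot: i += 1
            while a[j] > pivot: j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i += 1
                j -= 1
        if j - lo < hi - i:           # recurse into the smaller side only,
            quicksort(a, lo, j)       # so the stack depth stays <= log2(n)...
            lo = i                    # ...and loop over the larger side
        else:
            quicksort(a, i, hi)
            hi = j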
Another thing to consider is database indexes. These are additional structures that take O(log n) time to update when a record is added, removed or modified, but they let lookups happen much faster (O(log n) instead of O(n)). Deciding whether to add one is a constant headache for most database admins: will the lookups that hit the index make up for the time spent keeping it updated?
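That trade-off can be simulated in a few lines, using a sorted Python list as a stand-in for a real B-tree index (bisect.insort here actually costs O(n) because of the list shift, which only strengthens the point):

import bisect

records = []   # the table itself, unordered
index = []     # a sorted "index" over the same keys

def insert(key):
    records.append(key)        # O(1) append to the table...
    bisect.insort(index, key)  # ...plus the extra work to keep the index sorted

def lookup(key):
    i = bisect.bisect_left(index, key)         # O(log n) with the index,
    return i < len(index) and index[i] == key  # versus `key in records`,
                                               # an O(n) full scan without it

If your workload is mostly inserts and hardly any lookups, the index is pure overhead; flip the ratio and it pays for itself quickly.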
One last thing to keep in mind: the more efficient algorithms are nearly always more complicated than the naive, straightforward one (otherwise that's the one you would have used from the start). That means a larger surface area for bugs and code that is harder to follow, and both are non-trivial issues to deal with.
You seem to think that the complexity of an algorithm is linked to the number of nested loops. That is not the case. The following piece of code is O(1):
for i in [1.. 10^15]:
    for j in [1.. 10^15]:
        for k in [1.. 10^15]:
            dosomethingO1()
Complexity is related to the rate of growth of the number of operations. Unrolling your loop does not change that. So:
foreach(element in myArray) {
    doSomeAction(element);
}
has O(1) complexity if the number of elements in myArray is bounded by a constant, because the number of operations does not grow with the input. The overall complexity will instead be decided by the number of iterations in the while loop, since that is what grows with the input.
About your grid example: if n is the number of cells and the grid is represented as nested arrays, looping over those arrays still yields O(n) complexity, because each cell is visited exactly once no matter how the loops are nested.
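A quick sketch of that (the grid shape is arbitrary): two nested loops, yet still exactly one visit per cell, so the work is O(n) for n = rows * cols:

rows, cols = 1000, 500
grid = [[0] * cols for _ in range(rows)]

visits = 0
for row in grid:                    # nested loops...
    for cell in row:
        visits += 1                 # ...but each cell is touched exactly once

print(visits == rows * cols)        # True: O(n) in the number of cells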
Best Answer
You're really going about it the wrong way here. You're trying to memorize which big-O expression goes with a given algorithmic structure, but you should really just count up the number of operations that the algorithm requires and compare that to the size of the input. An algorithm that loops over its entire input has O(n) performance because it runs the loop n times, not because it has a single loop. Here's a single loop with O(log n) performance:
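# one possible example: throw away half of the remaining input each pass
n = 1_000_000
steps = 0
while n > 1:
    n = n // 2
    steps += 1
print(steps)   # 19 -- about log2(1,000,000), nowhere near a million iterations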
So, any algorithm where the number of required operations is on the order of the logarithm of the size of the input is O(log n). The important thing that big-O analysis tells you is how the execution time of an algorithm changes relative to the size of the input: if you double the size of the input, does the algorithm take one more step (O(log n)), twice as many steps (O(n)), four times as many steps (O(n^2)), and so on?
Does it help to know from experience that algorithms that repeatedly partition their input typically have 'log n' as a component of their performance? Sure. But don't look for the partitioning and jump to the conclusion that the algorithm's performance is O(log n) -- it might be something like O(n log n), which is quite different.