Algorithms – When Is Code Considered Tight and How Does It Increase Efficiency?

Tags: algorithms, efficiency

I have read many times that insertion sort has tight code, and hence that the hidden constant factor in its asymptotic complexity is small. Just yesterday I read that quicksort has tight code like insertion sort, so it too has a small hidden constant factor. This is one of the reasons it is considered the best of the O(n lg n) sorting algorithms.

I don't understand what it means when someone says that an algorithm's code is tight. What exactly does that mean, and how does it lead to better efficiency (a smaller hidden constant factor in the asymptotic complexity)?

Best Answer

When an algorithm is given a Big O designation such as O(n log n), it tells you how the algorithm will scale; that is, how well it will perform as n grows large.

Let's say I have an algorithm that is O(n) over a data set. That means that there's a loop in the code somewhere that executes n times, once for each element in the data set. How long will it take to execute? Assuming that each iteration of the loop takes about the same amount of time, it will take

m * n

Where m is the amount of time it takes to process one element of the data set.
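The m * n model above can be sketched in a few lines. This is a hypothetical illustration, not any particular library's API; the function name and the example costs are invented for the purpose of the demo.

```python
def linear_scan_time(m, n):
    """Estimated running time of an O(n) loop:
    m = cost to process one element, n = number of elements."""
    return m * n

# Doubling n doubles the total time (linear scaling),
# while halving m halves the total time at every n.
fast = linear_scan_time(1, 1000)   # tight loop: m = 1 unit per element
slow = linear_scan_time(4, 1000)   # loose loop: m = 4 units per element
```

Both versions are O(n), since the per-element cost m is a constant; the difference between `fast` and `slow` is exactly the hidden constant factor.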

It is clear that it will take longer to process the data set if m is higher. Big O doesn't address that time at all; it only states that (in this case), the algorithm will scale linearly as a function of the data set size.

So a "tight loop" is simply one that has a smaller m.

This is the "hidden constant factor" that the book talks about. Reducing m improves efficiency because any improvement in m will be multiplied over the whole data set.
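To make the idea concrete, here is a hedged sketch of two functions that do the same O(n) work with different per-element costs. The function names (`tight_sum`, `loose_sum`, `add`) are invented for this example; the point is that both loops execute n times, but the loose version pays extra overhead on every iteration, so its m is larger.

```python
def add(a, b):
    """A trivial helper, used only to add call overhead per element."""
    return a + b

def loose_sum(data):
    """O(n) sum with extra per-element work: a function call each iteration."""
    total = 0
    for x in data:
        total = add(total, x)
    return total

def tight_sum(data):
    """O(n) sum with a tight loop: one cheap operation per element."""
    total = 0
    for x in data:
        total += x
    return total
```

Both functions return the same result and both scale linearly, but `tight_sum` has a smaller m, and that saving is multiplied over every one of the n elements.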