I assume you're asking about how to measure an algorithm's time complexity with respect to its input (how the running time grows as N grows). One way is to model the algorithm as a recurrence equation and then solve it with one of a number of techniques. Common techniques are the master theorem, substitution, and recursion trees.
The binary search algorithm, for example, can be seen as a recurrence: each step does one comparison and then continues on half of N, so T(n) = T(n/2) + 1. Solving this with the master theorem shows the running time is O(log n).
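To make the recurrence concrete, here is a minimal recursive binary search sketch in Python: each call does a constant amount of comparison work and then recurses on half of the remaining range, which is exactly T(n) = T(n/2) + 1.

```python
def binary_search(arr, target, lo=0, hi=None):
    """Return the index of target in the sorted list arr, or -1 if absent."""
    if hi is None:
        hi = len(arr) - 1
    if lo > hi:                      # empty range: target not present
        return -1
    mid = (lo + hi) // 2
    if arr[mid] == target:           # constant work per call (the "+ 1")
        return mid
    elif arr[mid] < target:          # recurse on the right half -> T(n/2)
        return binary_search(arr, target, mid + 1, hi)
    else:                            # recurse on the left half -> T(n/2)
        return binary_search(arr, target, lo, mid - 1)

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # prints 3
```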
For a complete overview of this type of stuff I suggest working through these two classes:
http://aduni.org/courses/discrete
and
http://aduni.org/courses/algorithms
With small n, Big O is just about useless; the hidden constants, or even the actual implementation, are more likely to be the deciding factor for which algorithm is better. This is why most sorting functions in standard libraries switch to a faster insertion sort once a run is down to the last handful of elements. The only way to figure out which one will be better is benchmarking with realistic data sets.
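As an illustration of that cutoff trick, here is a rough Python sketch of a merge sort that hands small slices to insertion sort. The CUTOFF of 16 is an arbitrary value picked for the example; real libraries choose theirs by benchmarking.

```python
CUTOFF = 16  # arbitrary threshold for illustration; tuned by benchmarking in practice

def insertion_sort(a, lo, hi):
    """Sort a[lo:hi] in place; fast for tiny ranges despite being O(n^2)."""
    for i in range(lo + 1, hi):
        key = a[i]
        j = i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def hybrid_merge_sort(a, lo=0, hi=None):
    """O(n log n) merge sort that falls back to insertion sort on small slices."""
    if hi is None:
        hi = len(a)
    if hi - lo <= CUTOFF:
        insertion_sort(a, lo, hi)
        return
    mid = (lo + hi) // 2
    hybrid_merge_sort(a, lo, mid)
    hybrid_merge_sort(a, mid, hi)
    # Merge the two sorted halves a[lo:mid] and a[mid:hi].
    merged = []
    i, j = lo, mid
    while i < mid and j < hi:
        if a[i] <= a[j]:
            merged.append(a[i]); i += 1
        else:
            merged.append(a[j]); j += 1
    merged.extend(a[i:mid])
    merged.extend(a[j:hi])
    a[lo:hi] = merged
```

Tuning CUTOFF against your own data is exactly the kind of benchmarking mentioned above; the asymptotics alone don't tell you where the crossover is.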
Big O is good for large data sets and for discussing how an algorithm will scale: it's better to have an O(n log n) algorithm than an O(n^2) one when you expect the data to grow in the future. But if the O(n^2) version works fine the way it is and the input sizes will likely remain constant, just make a note that you can upgrade it later and leave it as is; there are likely other things you need to worry about right now.
(Note that all the "large"s and "small"s in the previous paragraphs are relative: small can be a few million and large can be a hundred; it all depends on the specific case.)
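One way to see where that crossover sits for your own workload is simply to time both versions at realistic sizes. A rough sketch (the duplicate-checking functions here are made up purely for illustration):

```python
import random
import timeit

def has_duplicate_quadratic(xs):
    """O(n^2): compare every pair."""
    return any(xs[i] == xs[j]
               for i in range(len(xs))
               for j in range(i + 1, len(xs)))

def has_duplicate_nlogn(xs):
    """O(n log n): sort, then compare neighbours."""
    s = sorted(xs)
    return any(a == b for a, b in zip(s, s[1:]))

for n in (100, 1_000, 5_000):
    data = [random.randrange(10 * n) for _ in range(n)]
    t_quad = timeit.timeit(lambda: has_duplicate_quadratic(data), number=5)
    t_sort = timeit.timeit(lambda: has_duplicate_nlogn(data), number=5)
    print(f"n={n:>5}  quadratic={t_quad:.4f}s  n log n={t_sort:.4f}s")
```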
Often there will be a trade-off between time and space: for example, quicksort requires O(log n) extra memory while heapsort can use O(1) extra memory, but the hidden constants in heapsort make it less attractive (there's also the stability issue, which makes mergesort more attractive if you don't mind paying the extra memory cost).
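For reference, here is a sketch of an in-place heapsort in Python, which illustrates the O(1)-extra-memory point (and note it is not stable, which is the issue that can push you towards mergesort):

```python
def heapsort(a):
    """In-place heapsort: O(n log n) time, O(1) extra memory, not stable."""
    n = len(a)

    def sift_down(root, end):
        # Push a[root] down until the max-heap property holds in a[:end].
        while True:
            child = 2 * root + 1
            if child >= end:
                return
            if child + 1 < end and a[child] < a[child + 1]:
                child += 1            # pick the larger child
            if a[root] >= a[child]:
                return
            a[root], a[child] = a[child], a[root]
            root = child

    # Build a max-heap, then repeatedly move the current max to the end.
    for i in range(n // 2 - 1, -1, -1):
        sift_down(i, n)
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)

data = [3, 1, 4, 1, 5, 9, 2, 6]
heapsort(data)
print(data)  # [1, 1, 2, 3, 4, 5, 6, 9]
```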
Another thing to consider is database indexes. These are additional data structures that take O(log n) time to update when a record is added, removed, or modified, but they let lookups happen much faster (O(log n) instead of O(n)). Deciding whether to add one is a constant headache for most database admins: will I get enough lookups on the index to make up for the time spent updating it?
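A small illustration of that trade-off using Python's built-in sqlite3 module (the users table and email column are made up for the example): the same lookup goes from a full table scan to an index search once the index exists, but every later insert, update, or delete on that column now also has to maintain the index.

```python
import sqlite3

# In-memory database with a made-up "users" table, just for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, name TEXT)")
conn.executemany("INSERT INTO users (email, name) VALUES (?, ?)",
                 [(f"user{i}@example.com", f"User {i}") for i in range(100_000)])

# Without an index the planner has to scan the whole table (O(n)).
print(conn.execute("EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?",
                   ("user5@example.com",)).fetchall())

# With the index, lookups become O(log n), but every future write to the
# email column also has to update idx_users_email.
conn.execute("CREATE INDEX idx_users_email ON users (email)")
print(conn.execute("EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?",
                   ("user5@example.com",)).fetchall())
```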
One last thing to keep in mind: the more efficient algorithm is nearly always more complicated than the naive, straightforward one (otherwise it would be the one you used from the start). That means a larger surface area for bugs and code that is harder to follow; both are non-trivial issues to deal with.
Best Answer
"N log N" is as good as you're going to get, and should be well understood by professional programmers. You can't expect there to be a single word to describe every complexity class that exists.