Why is Big O taught instead of Big Theta

Tags: big-o, big-theta, notation

Big O notation provides an upper bound to a function whereas Big Theta provides a tight bound. However I find that Big O notation is typically (and informally) taught and used when they really mean Big Theta.
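For reference, the usual formal definitions (as found in standard algorithms texts) can be stated as:

```latex
f(n) \in O(g(n)) \iff \exists\, c > 0,\ n_0 : 0 \le f(n) \le c\,g(n) \ \text{for all}\ n \ge n_0
f(n) \in \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : f(n) \ge c\,g(n) \ \text{for all}\ n \ge n_0
f(n) \in \Theta(g(n)) \iff f(n) \in O(g(n)) \ \text{and}\ f(n) \in \Omega(g(n))
```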

e.g. "Quicksort is O(N^2)" can be turned into the much stronger statement "Quicksort is Θ(N^2)".

While the usage of Big O is technically correct, wouldn't a more prevalent use of Big Theta be more expressive and lead to less confusion? Is there some historical reason why Big O is more commonly used?

Wikipedia notes:

Informally, especially in computer science, the Big O notation often is permitted to be somewhat abused to describe an asymptotic tight bound where using Big Theta Θ notation might be more factually appropriate in a given context.

Best Answer

Because when analyzing performance you are usually interested only in the worst case. Knowing the upper bound is therefore sufficient.

If an algorithm runs faster than expected for a given input, that is fine; it is not the critical point, and the information is mostly negligible.

Some algorithms, as @Peter Taylor noted, don't have a tight bound at all. Quicksort, for example, is O(n^2) and Ω(n log n), so no single Θ bound describes its running time over all inputs.
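As a quick illustration of that gap (a minimal sketch, not production code), a naive quicksort that always picks the first element as pivot hits its Θ(n^2) worst case on already-sorted input, while a randomly ordered input stays near n log n comparisons:

```python
import random

def quicksort(arr):
    """First-element-pivot quicksort that also counts comparisons."""
    comparisons = 0

    def sort(items):
        nonlocal comparisons
        if len(items) <= 1:
            return items
        pivot, rest = items[0], items[1:]
        comparisons += len(rest)  # one comparison against the pivot per element
        left = [x for x in rest if x < pivot]
        right = [x for x in rest if x >= pivot]
        return sort(left) + [pivot] + sort(right)

    return sort(arr), comparisons

n = 200
_, worst = quicksort(list(range(n)))                 # sorted input: worst case for this pivot choice
_, typical = quicksort(random.sample(range(n), n))   # random permutation

print(worst)    # exactly n*(n-1)/2 = 19900 comparisons, the quadratic worst case
print(typical)  # far fewer, on the order of n log n
```

The same function is Ω(n log n) on favorable inputs and Θ(n^2) on adversarial ones, which is exactly why no single tight bound covers all inputs.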

Moreover, tight bounds are often more difficult to compute.

See also:

Related Topic