I know with O(n), you usually have a single loop; O(n^2) is a double
loop; O(n^3) is a triple loop, etc. How about O(log n)?
You're really going about it the wrong way here. You're trying to memorize which big-O expression goes with a given algorithmic structure, but you should really just count up the number of operations that the algorithm requires and compare that to the size of the input. An algorithm that loops over its entire input has O(n) performance because it runs the loop n times, not because it has a single loop. Here's a single loop with O(log n) performance:
for (int i = 0; i < log2(input.count); i++) {
    doSomething(...);
}
So, any algorithm where the number of required operations is on the order of the logarithm of the size of the input is O(log n). The important thing that big-O analysis tells you is how the execution time of an algorithm changes relative to the size of the input: if you double the size of the input, does the algorithm take 1 more step (O(log n)), twice as many steps (O(n)), four times as many steps (O(n^2)), etc.
Does it help to know from experience that algorithms that repeatedly partition their input typically have 'log n' as a component of their performance? Sure. But don't look for the partitioning and jump to the conclusion that the algorithm's performance is O(log n) -- it might be something like O(n log n), which is quite different.
Big-O is, simply put, a measure of algorithm complexity. In a best-case scenario, even the worst algorithm may finish in one operation or iteration. However, different algorithms have different average- and worst-case complexities, and the worst-case complexity can be considered the upper bound.
So, how can we figure out the upper bound?
An algorithm will have different running durations based on the size of the data it is handling. As an example, let's consider a simple data structure, which could be implemented in a number of ways (let's use an array), and suppose we want to search the structure for a piece of data. The number of operations you have to do depends on the size of the collection; let's assume there are n elements in the structure.

A typical array will, in the worst case, require iterating through the entire collection, meaning you perform up to n operations, resulting in O(n) complexity: an upper bound of n operations.
Now assume the data is sorted: you could perform a binary search instead, requiring about log(n) operations and reducing the upper bound to O(log(n)). It could still finish in one operation, but as n approaches infinity, the execution time also grows without bound. This is probably what you were seeing in class. Different algorithm complexities approach this infinite execution level at different rates (n! > n^2 > n > log(n) > 1).
Edit: As per the comments, it should be noted that a change in the amount of data will also be reflected by a change in execution time based on the complexity. O(1) algorithms will not change, logarithmic ones will change in a logarithmic manner, linear algorithms linearly, etc. Essentially, doubling the amount of data may NOT double your execution time. You may quadruple it, or increase it by some smaller or larger factor.
For a more complex example, more work is required to figure out the complexity, of course, but this is the general idea:

The upper bound of an algorithm's complexity describes how the algorithm's execution time changes with a change in the amount of data being processed, meaning that a more complex algorithm will often take an increasingly (often non-linearly) longer time to execute as the amount of data increases.
Best Answer
You have to go through the whole list in any case, because otherwise you cannot know whether the current minimum is indeed the absolute minimum. So the algorithm always requires n iterations.
The only exception is if you know bounds on the possible values, for example that the integers are all positive, and you have already found a 1 (then you can stop). That does not change the complexity discussion, though.