Big-O notation's definition
I really want to know the real definition. I have tried to read a book, but couldn't understand it.
O: Big-O notation, worst case.
Θ: Theta notation, average case.
Ω: Omega notation, best case.
Why does Wikipedia describe the speed of algorithms only in Big-O, covering the average, best and worst cases? Why didn't they use the corresponding formal notation for each case?
O, Θ and Ω do not represent worst, average and best case, although they have related meanings.
Big-O notation: f(n) = O(g(n)) means f grows no faster than g for large values of n ("n > n0" means "for large values of n" in this context). This does not mean that g is the worst case: g could be worse than the worst case (quick sort is also O(n!), for instance). For more complicated algorithms, there is ongoing research to determine the smallest Big-O for their actual complexity: the original author usually only finds a Big-O upper bound.
Ω notation means the reverse (f grows at least as fast as g), which means it could be better than the best case (all algorithms are Ω(1), for instance).
There are many algorithms for which there is no single function g such that the complexity is both O(g) and Ω(g). For instance, insertion sort's smallest upper bound is O(n²) (you can't find an upper bound smaller than n²), while its largest lower bound is Ω(n).
Other algorithms do: merge sort is both O(n log n) and Ω(n log n). When this happens, it is written as Θ(n log n). This also means that not all algorithms have a Θ-notation complexity; notably, algorithms whose best and worst cases differ do not have one.
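To make the merge sort claim concrete, here is a small sketch (not part of the original answer; the helper names are made up) that counts the comparisons merge sort performs and prints them next to n log₂ n:

```python
import math
import random

def merge_sort(a, counter):
    """Sort a, incrementing counter[0] once per element comparison."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid], counter)
    right = merge_sort(a[mid:], counter)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        counter[0] += 1  # one comparison per merge step
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

random.seed(0)
n = 1024
counter = [0]
data = [random.random() for _ in range(n)]
result = merge_sort(data, counter)
# The count stays within a constant factor of n log2(n) = 10240
# regardless of input order, which is why merge sort is Θ(n log n).
print(counter[0], round(n * math.log2(n)))
```

Whatever the initial order of the input, the count lands in the same narrow band, unlike the sorts below whose best and worst cases diverge.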
To get rid of worst cases that carry a very low probability, it's fairly common to examine average-case complexity - replacing the standard "f" value on the left-hand side by a different function "favg" that only takes into account the most probable outcomes. So, for quick sort, f = O(n²) is the best you can get, but favg = O(n log n).
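The quick sort gap between f and favg can be observed by counting work. The following is a sketch (not from the original answer): a simplified quicksort with a first-element pivot, charging one comparison per element per partitioning pass, run on a sorted input (its worst case) and on a shuffled one:

```python
import random

def quicksort_comparisons(a):
    """Comparisons made by a simple quicksort that pivots on the first element.

    Each element of the sublist is counted once per partitioning pass,
    a simplification that still shows the right growth rates.
    """
    if len(a) <= 1:
        return 0
    pivot, rest = a[0], a[1:]
    less = [x for x in rest if x < pivot]
    greater_equal = [x for x in rest if x >= pivot]
    return (len(rest)
            + quicksort_comparisons(less)
            + quicksort_comparisons(greater_equal))

random.seed(0)
n = 300
worst = quicksort_comparisons(list(range(n)))             # sorted input: worst case
avg = quicksort_comparisons(random.sample(range(n), n))   # shuffled: typical case
print(worst, avg)  # worst is n(n-1)/2 = 44850; avg is roughly 2·n·ln n ≈ 3400
```

The sorted input degenerates to quadratic work because every pivot is the minimum, while the shuffled input tracks n log n: that is f = O(n²) versus favg = O(n log n).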
I'm not sure where you found those definitions, but I'd consider them to be incorrect.
The best, average and worst cases are all functions generally over the input size.
Big-O, Theta and Omega indicate, respectively, the upper, tight and lower bounds of any given function.
That is to say the best-case has a big-O, Theta and Omega bound. The same goes for the average and worst case.
See also: How do O and Ω relate to worst and best case?
Note: big-O is commonly (arguably incorrectly) used to indicate a tight bound (instead of Theta).
Let's consider the example of insertion sort.
The best case is when the input is already sorted, in which case it takes linear time, that is f(n) = k₁n time for some constant k₁. k₁n is O(n), Θ(n) and Ω(n). According to the definitions, we can also say it's O(n²), O(n³), ... or Ω(1), Ω(log n), Ω(log log n), ..., but it's commonly expected to present the tightest bound.
The worst and average cases are g(n) = k₂n² and h(n) = k₃n² respectively, which are both O(n²), Θ(n²) and Ω(n²).
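A short sketch (not part of the original answer) makes f and g observable by counting the comparisons insertion sort performs on a sorted input versus a reversed one:

```python
def insertion_sort_comparisons(values):
    """Return the number of comparisons insertion sort makes on values."""
    a = list(values)
    comparisons = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comparisons += 1
            if a[j - 1] <= a[j]:
                break               # already in place: the best case stops early
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return comparisons

print(insertion_sort_comparisons(range(100)))         # → 99, linear (k₁n)
print(insertion_sort_comparisons(range(99, -1, -1)))  # → 4950, quadratic (k₂n²)
```

The sorted input costs n − 1 comparisons, while the reversed input costs n(n − 1)/2, matching the best-case and worst-case functions above.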
Now you might be saying: that isn't very useful, why do we need three bounds if they're always the same? Why don't we just use Theta everywhere?
In general, you'd be absolutely right - algorithms are often described in terms of just one of the bounds (typically the tight bound).
However, for some algorithms it's difficult to figure out exactly what the tight bound is, while it's easy to get an upper and/or lower bound. The unoptimised recursive algorithm to calculate Fibonacci numbers is one such example: it's not too hard to find an upper bound of O(2ⁿ) and a lower bound of Ω(1.5ⁿ), but the tight bound of approximately Θ(1.6ⁿ) is a lot more difficult to calculate.
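That growth rate can be measured directly. As a sketch (not from the original answer), counting the calls made by the naive recursive Fibonacci shows the count multiplying by roughly 1.6 each time n increases by one, sitting between the 1.5 of the lower bound and the 2 of the upper bound:

```python
def fib_calls(n, counter):
    """Naive recursive Fibonacci; counter[0] accumulates the total call count."""
    counter[0] += 1
    if n < 2:
        return n
    return fib_calls(n - 1, counter) + fib_calls(n - 2, counter)

ratios = []
prev = None
for n in (18, 19, 20):
    counter = [0]
    fib_calls(n, counter)
    if prev is not None:
        ratios.append(counter[0] / prev)  # growth factor per step of n
    prev = counter[0]
print(ratios)  # each ratio is ≈ 1.618, between 1.5 and 2
```

The limiting ratio is the golden ratio φ ≈ 1.618, which is where the "approximately 1.6" in the tight bound comes from.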