Asymptotic complexity constant, why the constant?
Big-O notation says that g(n) is an element of O(f(n)) when g(n) <= c * f(n) for some constant c.
I have always wondered, and never really understood, why we need this arbitrary constant to multiply with the bounding function f(n) to get our bounds?
Also, how does one decide what number this constant should be?
The constant itself doesn't characterize the limiting behavior of f(n) compared to g(n).
It is used in the mathematical definition, which requires the existence of a constant M such that |f(x)| <= M * |g(x)| for all sufficiently large x.
If such a constant exists, then you can state that f(x) is O(g(x)). This is the usual notation when analyzing algorithms: you don't care which constant it is, only about the complexity of the operations themselves. The constant is what makes the inequality hold, by ensuring that M * |g(x)| is an upper bound of f(x).
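As a side note, the defining inequality can be probed numerically for a candidate M and threshold. The sketch below (Python; the helper name bounded_by and the sample range are my own, purely for illustration) checks only finitely many points, so it can refute a bad candidate but never prove the asymptotic bound.

    # Sketch: test whether |f(x)| <= M * |g(x)| holds on a finite sample of x >= x0.
    # A finite check can only falsify a candidate (M, x0); it is not a proof.
    def bounded_by(f, g, M, x0, xs):
        return all(abs(f(x)) <= M * abs(g(x)) for x in xs if x >= x0)

    # f(x) = 3x + 5 against g(x) = x: M = 4 works once x >= 5, M = 3 never does.
    print(bounded_by(lambda x: 3 * x + 5, lambda x: x, M=4, x0=5, xs=range(1, 10000)))  # True
    print(bounded_by(lambda x: 3 * x + 5, lambda x: x, M=3, x0=5, xs=range(1, 10000)))  # False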
How to find that constant depends on f(x) and g(x); proving that such a constant exists is exactly the mathematical point you must establish to show that f(x) is O(g(x)), so there is no general rule. Look at this example.
Consider the function f(n) = 4 * n. Doesn't it make sense to call this function O(n), since it grows "as fast" as g(n) = n? But without the constant in the definition of O you can't find an n0 such that f(n) <= n for all n > n0. That's why you need the constant, and indeed from the condition

4 * n <= c * n for all n > n0

you can get n0 == 0 and c == 4.
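For completeness, the same kind of finite check (again just a numeric sanity check in Python, not a proof) agrees with that choice of c and n0:

    # Verify 4 * n <= c * n with c = 4 for sampled n > n0 = 0; a smaller c fails.
    c, n0 = 4, 0
    print(all(4 * n <= c * n for n in range(n0 + 1, 100000)))  # True
    print(all(4 * n <= 3 * n for n in range(n0 + 1, 100000)))  # False: c = 3 is too small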