When analyzing the time complexity of an algorithm, why do we drop the constant of the term with the largest degree?
Suppose I have the following: T(n) = 5n^2 + 2n. The asymptotic tight bound of this is Theta(n^2). I want to understand the reason behind dropping the 5. I understand why we ignore the lower-order terms.
Consult the definition of big-O.
Keeping things simple[*], let's define that a function g is O(f) if there exist constants C and M such that for n > M, 0 <= g(n) < Cf(n).
The presence of a positive constant multiplier in f doesn't affect this; just choose C appropriately. Your example T is O(n^2): choose a value of C greater than 5, and a value of M big enough that the +2n is irrelevant. For example, for n > 2 it's a fact that 5n^2 + 2n < 6n^2 (because n^2 > 2n), so with C = 6 and M = 2 we see that T(n) is O(n^2).
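If it helps to see that inequality numerically, here is a minimal spot check (my addition, not part of the proof) in Python:

    # Spot-check the witnesses C = 6, M = 2: for n > 2, 5n^2 + 2n < 6n^2.
    def T(n):
        return 5 * n ** 2 + 2 * n

    for n in range(3, 1000):
        assert T(n) < 6 * n ** 2
    print("5n^2 + 2n < 6n^2 holds for all n in 3..999")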
So it's true that T(n) is O(n^2), and also true that it's O(5n^2), and O(5n^2 + 2n). The most interesting of those facts is that it's O(n^2), since it's the simplest expression and the other two are logically equivalent. If we want to compare the complexities of different functions, then we want to use simple expressions.
For big-Theta just note that we can play the same trick when f and g are the other way around. The relation "g is Theta(f)" is an equivalence relation, so what are we going to choose as the representative member of the equivalence class of T? The simplest one.
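To make the Theta claim concrete, one explicit (and by no means unique) pair of witnesses for T is: for n >= 1, 5n^2 <= 5n^2 + 2n <= 7n^2, so T(n) is Theta(n^2) with constants 5 and 7. A quick check, assuming the same T as above:

    # Spot-check the two-sided Theta witnesses: 5n^2 <= 5n^2 + 2n <= 7n^2 for n >= 1.
    for n in range(1, 1000):
        assert 5 * n ** 2 <= 5 * n ** 2 + 2 * n <= 7 * n ** 2
    print("T(n) is sandwiched between 5n^2 and 7n^2 for all n in 1..999")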
[*] Keeping things less simple, we cope with negative numbers by using a limsup rather than a plain limit. My definition above is actually sufficient but not necessary.
Everything comes back to the concept of "Order of Magnitude". Given something like
5n^2 + 2n
you might think that the 5 is significant; however, when you break it down and think about the numbers, the constant really doesn't matter (graph it and you will see why). For example, say n is 50.
5 * 50^2 + 2 * 50 --> 5 * 2500 + 2 * 50 --> 12,600
As you mentioned, the 2n is insignificant compared to n^2. The same concept applies to the constant as well: you might think that 2,500 vs 12,500 (n^2 vs 5n^2) is a big difference, but that is only ever a fixed factor of 5. Now consider what happens if the algorithm were n^3 instead: with the same constants, at n = 50 you are looking at 12,600 vs 625,100, and that gap keeps widening as n grows.
So the thing that makes the most significant difference to the cost of this algorithm is the n^2 itself, not the constant in front of it.
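A quick script (my own addition) makes the same point numerically: the constant contributes a fixed 5x factor, while raising the degree opens a gap that grows with n.

    # Compare the effect of a constant factor with the effect of the degree.
    for n in (50, 500, 5000):
        quad = n ** 2                 # n^2
        quad_c = 5 * n ** 2 + 2 * n   # 5n^2 + 2n
        cubic_c = 5 * n ** 3 + 2 * n  # 5n^3 + 2n
        print(f"n={n}: n^2={quad}, 5n^2+2n={quad_c} ({quad_c / quad:.1f}x), "
              f"5n^3+2n={cubic_c} ({cubic_c / quad_c:.1f}x)")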
The constant is dropped because it does not affect which complexity class the function belongs to.
If you have two functions f(n) = c1 * n^2 and g(n) = c2 * n^3, where c1 and c2 are positive constants, then no matter how large c1 is and how small c2 is, g(n) will ALWAYS overtake and outgrow f(n) at some value of n (specifically, once n > c1/c2).
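For instance (the constants below are picked arbitrarily by me), with c1 = 1000 and c2 = 0.001 the crossover happens at n = c1/c2 = 1,000,000:

    # g(n) = c2*n^3 overtakes f(n) = c1*n^2 once n > c1/c2, whatever the constants are.
    c1, c2 = 1000.0, 0.001
    crossover = c1 / c2  # 1,000,000
    for n in (int(crossover) - 1, int(crossover) + 1):
        f, g = c1 * n ** 2, c2 * n ** 3
        print(f"n={n}: f(n)={f:.0f}, g(n)={g:.0f}, g overtakes f: {g > f}")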