
What are the differences between O(1) and O(2) in algorithm-analysis?

According to the definition of big O, f(n) = O(g(n)) means f(n) <= C*g(n) for some constant C, from which it could be deduced that:

f(n) <= C
f(n) <= 2C

I think there are no big differences between these two. What I could come up with is:

f(n) = 1 - 1 / n
f(n) = 2 - 1 / n
C = 1

But what distinguishes these two complexities, since both are constant?

Could you show some real-world code to demonstrate the difference between O(1) and O(2)?


There is no difference between O(1) and O(2). Algorithms classified as O(1) are also O(2), and vice versa. In fact, O(c1) is O(c2) for any positive constants c1 and c2.

O(c), where c is a positive constant, simply means that the runtime is bounded independent of the input or problem size. From this it is clear (informally) that O(1) and O(2) are equal.

Formally, consider a function f in O(1). Then there is a constant c such that f(n) <= c * 1 for all n. Let d = c / 2. Then f(n) <= c = (c / 2) * 2 = d * 2, which shows that f is O(2). Similarly, if g is O(2), there is a constant c such that g(n) <= c * 2 for all n. Let d = 2 * c. Then g(n) <= c * 2 = d = d * 1, which shows that g is O(1). Therefore O(1) = O(2).
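Since the question asked for real-world code, here is a minimal sketch in Python (the function names and sample lists are made up for illustration): both functions below do a bounded amount of work no matter how long the input list is, so both are O(1), even though one does roughly twice the work of the other.

```python
def first_element(items):
    # One constant-time operation: a single index access.
    # Roughly "1 unit" of work, independent of len(items).
    return items[0]

def first_and_last(items):
    # Two constant-time operations: two index accesses.
    # Roughly "2 units" of work, still independent of len(items).
    return items[0], items[-1]

# Both functions take essentially the same time for a list of 10 elements
# as for a list of a million elements, so both are O(1) -- equivalently O(2).
small = list(range(10))
large = list(range(1_000_000))
print(first_element(small), first_and_last(small))
print(first_element(large), first_and_last(large))
```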


O(1) and O(2) are the same, as is any O(constant value).

The point being that neither depends on the input size N.


There is no difference.

In the graph below, the red line represents O(n) and the green curve represents O(n^2).

[graph comparing O(n) (red line) and O(n^2) (green curve)]

As you can see from the red line, the 2 and the 1 become insignificant as x increases (the green curve grows at a much faster rate). This is what Big-O notation is trying to capture; constants are relatively meaningless.


Maybe they meant that both algorithms execute in constant time regardless of input size (usually denoted as N), but one of them is twice as fast. That is an abuse of big-O notation, though.
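A rough sketch of that reading, in Python (the functions and repeat count are made up for illustration, and exact timings will vary by machine): the second function does about twice the fixed work and takes about twice as long per call, but neither running time depends on a problem size N, so both are O(1).

```python
import time

def constant_work(x):
    # A fixed amount of arithmetic, independent of any input size: O(1).
    return x * x + 1

def constant_work_twice(x):
    # Roughly double the fixed work -- "O(2)" informally, but still O(1).
    return (x * x + 1) + (x * x + 1)

def time_it(fn, arg, repeats=1_000_000):
    start = time.perf_counter()
    for _ in range(repeats):
        fn(arg)
    return time.perf_counter() - start

# The second function takes roughly twice as long per call, yet neither
# running time depends on a problem size N, so both are constant time.
print("constant_work:      ", time_it(constant_work, 12345))
print("constant_work_twice:", time_it(constant_work_twice, 12345))
```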


There is no difference between O(1) and O(2).

Big-O notation is only defined up to a constant factor. O(f(x)) means that there is some constant k such that the running time is less than k*f(x).

If something is O(2), then there is some constant k such that the running time is less than 2k. Therefore, there is another constant, k' = 2k, that works for O(1).


There is no difference between O(1) and O(2). In fact, you would not normally use the latter notation. It is more like O(N), O(N^2), O(log(N)), etc. This is just an indication of the order of magnitude of the algorithm. Put another way, O(1) is constant in time, O(N) is linear in the number of items N, O(N^2) is quadratic in N, etc.


Big-O notation is generally used for asymptotic analysis of algorithm complexity, i.e., analysis of how the algorithm performs as n increases toward infinity. The highest-order term in n of the function f(n) will be the dominant part of the function in this case.

As such, lower order terms in n are usually dropped from the function when expressed in big-O (for example, f(n)=2n^2+4 will result in O(n^2) asymptotic big-O complexity).

When the highest-order term is a constant that does not depend on n, all such constants are asymptotically equivalent and are usually simply reduced to O(1).

Hence, O(2) would be considered equivalent to O(1).
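A small numeric sketch of the f(n) = 2n^2 + 4 example above (plain Python; the bound C = 3 is just one constant that happens to work): the ratio f(n)/n^2 settles toward the leading constant 2, so a single constant multiple of n^2 bounds f(n) for all n >= 2, and the same reasoning collapses any purely constant term to O(1).

```python
def f(n):
    # The example function from the answer: 2n^2 + 4 "operations".
    return 2 * n * n + 4

# The ratio f(n) / n^2 approaches the leading constant 2 as n grows,
# so f(n) <= 3 * n^2 for every n >= 2 -- i.e. f is O(n^2).
for n in (2, 10, 100, 1000, 10_000):
    print(n, f(n) / (n * n))
```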


You typically do not write O(2) or O(2n) but O(1) and O(n) instead. The difference is in actual speed, of course: 5s vs 10s, for instance. I found your question a bit confusing.
