
Comparing algorithm complexity

Please help me compare the complexity of two algorithms.

  1. O(N+1000) + O(M*log(M))
  2. O(N*5) + O(2000)

N = 100000, M = 100

I don't understand what I should do with the O(...). Can I just drop it and compute the following?

(N+1000) + (M*log(M)) = 101200
(N*5) + 2000 = 502000

Is it right?

Thank you

UPDATED

I have a task and two probable solutions for it. The first solution's algorithm complexity is O(N) + O(M*log(M)), see http://code.google.com/p/redis/wiki/ZunionstoreCommand ; the second solution consists of two algorithms with complexities O(N) (http://code.google.com/p/redis/wiki/SunionCommand) and O(N*M) (http://code.google.com/p/redis/wiki/SinterCommand). I thought that I could replace N and M with real-world values to compare the speed of both solutions.
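
Since the data sizes are fixed, I could also just time both approaches directly. Here is a minimal sketch of what I mean, assuming the redis-py client, a locally running Redis server, and placeholder key names (the real keys in my task are different):

    import time
    import redis  # assumes the redis-py client and a locally running Redis server

    r = redis.Redis()

    def timed(label, fn):
        start = time.perf_counter()
        size = fn()
        print(f"{label}: {time.perf_counter() - start:.6f}s, result size {size}")

    # Solution 1: a single ZUNIONSTORE over two sorted sets, O(N) + O(M*log(M))
    timed("ZUNIONSTORE", lambda: r.zunionstore("tmp:zunion", ["zset:a", "zset:b"]))

    # Solution 2: SUNIONSTORE (O(N)) followed by SINTERSTORE (O(N*M));
    # the key names are placeholders only
    def solution2():
        r.sunionstore("tmp:union", ["set:a", "set:b"])
        return r.sinterstore("tmp:result", ["tmp:union", "set:filter"])

    timed("SUNION + SINTER", solution2)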


Big-O notation does not tell you how fast a specific algorithm is for a specific data set - it tells you about the asymptotic performance. In Big-O notation you can ignore constants and constant coefficients:

  1. O(N + M*log(M))
  2. O(N)

Now you can see that the second algorithm has better asymptotic performance.
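
If it helps to see this numerically, here is a tiny sketch that evaluates the two simplified growth terms for increasing sizes. It assumes a base-2 logarithm purely for illustration and ignores all constant factors, exactly as the simplification above does; the absolute numbers mean nothing, only the trend does.

    import math

    def algo1(n, m):
        return n + m * math.log2(m)   # O(N + M*log(M))

    def algo2(n, m):
        return n                      # O(N)

    # As M grows, the M*log(M) term makes the first expression pull away from the second.
    for n, m in [(10**5, 10**2), (10**6, 10**4), (10**7, 10**6)]:
        print(n, m, round(algo1(n, m)), algo2(n, m))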


When comparing complexity in the general case, you ignore constant values.

O(N+1000 + M*log(M))

becomes

O(N + M*log(M))

while

O(N*5 + 2000)

becomes

O(N)

Now, if you know FOR SURE those are the values you'll have for M and N, you can do the math and be more precise, but you'll also have to know how long each operation takes -- big-O notation is used for scaling, not so much for how an algorithm performs in a particular case. If your data is that static, you may as well just run both algorithms and see which returns faster.
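
If you do go down the plug-in-the-numbers route, the arithmetic from the question looks like this in code (the base of the logarithm isn't stated anywhere; the 101200 figure in the question implies base 10, so that is what's assumed here):

    import math

    N, M = 100_000, 100

    cost1 = (N + 1000) + M * math.log10(M)  # (N+1000) + M*log(M) -> 101200.0
    cost2 = N * 5 + 2000                    # N*5 + 2000          -> 502000

    print(cost1, cost2)

Keep in mind these are counts of abstract steps, not milliseconds; the only way to get real timings is to measure.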

Edit: as somebody else pointed out, proper notation isn't the sum of two big-Os, but rather the big-O of the sum.


Properly speaking, O(N+1000) + O(M*log(M)) doesn't actually make any sense, as O(f(n)) is the set of all functions g(n) such that blah blah look it up in the big white book blah... And you can't add sets of functions.

Yeah, it's a common abuse of notation, but, being pedantic (and having taught that class several times), I feel compelled to point out that the correct answer is "Mu".
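
For completeness, the non-abusive way to state it: each running time belongs to a big-O class, and it is their sum that gets a single class, e.g.

    f \in O(N + 1000) = O(N), \quad g \in O(M \log M)
    \implies f + g \in O(N + M \log M)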


I don't really like the question you're being asked to answer.

What Mark Bryers has said is correct - you can ignore constants with Big-O notation.

To expand on that: the reason is that Big-O notation is meant to describe asymptotic growth. In layman's terms, the Big-O notation describing the complexity of an algorithm shows how fast or slow it becomes for a very large input size. 'Asymptotic' here refers to the behaviour as the input size grows towards infinity (i.e. a very large input size).

Now, for that reason, I find it extremely odd that there would be addition or constant multiplication in the Big-O's - you're supposed to get rid of those with Big-O notation... that's the whole point. Is that exactly how the question was asked? Plus the algorithms should not be described with two Big-O's each (it should just be one, as stated and explained by Peter Leppert) .

The way I'd answer this is to get rid of the constants, since they shouldn't be there anyway, then evaluate both expressions for the given N and M and compare.
