
Why is a program's running time not a measure of its complexity?

I have learned that a program is measured by its complexity, i.e. by Big O notation. Why don't we measure it by its absolute running time? Thanks :)


You use the complexity of an algorithm instead of absolute running times to reason about algorithms, because the absolute running time of a program does not depend only on the algorithm used and the size of the input. It also depends on the machine it's running on, various implementation details, and what other programs are currently using system resources. Even if you run the same application twice with the same input on the same machine, you won't get exactly the same time.

Consequently, when given a program, you can't just make a statement like "this program will take 20*n seconds when run with an input of size n", because the program's running time depends on a lot more factors than the input size. You can, however, make a statement like "this program's running time is in O(n)", and that's a lot more useful.
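As a minimal sketch of that point (Python, using only the standard random and time modules; linear_scan is just a made-up example function): two runs of the same O(n) code, on the same input and the same machine, will usually report slightly different wall-clock times, while the statement "this is O(n)" is equally true for both runs.

    import random
    import time

    def linear_scan(data):
        # One pass over the input: the work grows linearly with len(data), i.e. O(n).
        best = data[0]
        for x in data[1:]:
            if x > best:
                best = x
        return best

    data = [random.random() for _ in range(1_000_000)]

    # Identical input, identical machine - the wall-clock times still differ
    # because of caching, scheduling, and whatever else the system is doing.
    for run in (1, 2):
        start = time.perf_counter()
        linear_scan(data)
        print(f"run {run}: {time.perf_counter() - start:.4f} s")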


Absolute running time is not an indicator of how an algorithm's cost grows with the size of the input. It's possible for an O(n*log(n)) algorithm to be far slower than an O(n^2) algorithm on every dataset you actually encounter in practice, because Big O hides constant factors.
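Here is a rough illustration (Python; insertion_sort and merge_sort are hand-written purely for the comparison): on a small input the O(n^2) insertion sort will typically beat the O(n log n) merge sort, precisely because Big O only describes growth, not the per-step overhead.

    import random
    import timeit

    def insertion_sort(a):
        # O(n^2) comparisons in the worst case, but very little overhead per step.
        a = list(a)
        for i in range(1, len(a)):
            key, j = a[i], i - 1
            while j >= 0 and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key
        return a

    def merge_sort(a):
        # O(n log n) comparisons, but recursion and list copies add constant overhead.
        if len(a) <= 1:
            return list(a)
        mid = len(a) // 2
        left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        return merged + left[i:] + right[j:]

    small = [random.random() for _ in range(30)]
    print("insertion sort:", timeit.timeit(lambda: insertion_sort(small), number=10_000))
    print("merge sort:    ", timeit.timeit(lambda: merge_sort(small), number=10_000))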


Running time does not measure complexity; it only measures performance, i.e. the time required to perform the task. An MP3 player will run for the length of time required to play the song. The elapsed CPU time may be more useful in this case.
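A small sketch of that distinction (Python, standard time module only): a task that mostly waits, like a player waiting between audio buffers, runs for a long elapsed time while using very little CPU.

    import time

    wall_start = time.perf_counter()
    cpu_start = time.process_time()

    time.sleep(2)        # mostly waiting, like a player between audio buffers
    sum(range(10**6))    # a small amount of real computation

    print("wall-clock time:", time.perf_counter() - wall_start, "s")  # roughly 2 s
    print("CPU time:       ", time.process_time() - cpu_start, "s")   # much less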

One measure of complexity is how the program scales to larger inputs. This is useful for planning the required hardware. All things being equal, something that scales roughly linearly is preferable to something that scales poorly. Things are rarely equal.

The other measure of complexity is how simple the code is. Code complexity is usually higher for programs with relatively linear performance complexity. Complex code can be costly to maintain, and changes are more likely to introduce errors.

All three (or four) measures are useful, but none of them is very useful by itself. Taken together they can be quite informative.


The question could use a little more context.

When writing a real program, we are likely to measure its running time. There are several potential issues with that, though:

1. What hardware is the program running on? Comparing two programs run on different hardware doesn't give a meaningful comparison.

2. What other software is running? Anything else running will steal CPU cycles (or whatever other resource your program depends on).

3. What is the input? As already said, a solution might look very fast on a small set, but scalability goes out the door. Also, some inputs are easier than others. If you hand me a dictionary and ask me to sort it, I'll hand it right back and say "done". Giving me a set of 50 cards (much smaller than a dictionary) in random order will take me a lot longer. (See the sketch after this list.)

4. What are the starting conditions? The first time your program runs, loading it from the hard disk will likely take the largest chunk of time on modern systems. Comparing two implementations with small inputs will likely have their differences masked by this.
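Point 3 can be seen directly with Python's built-in sorted (Timsort, which is adaptive): two inputs of the same size take very different amounts of time depending on whether the data is already ordered, which is the dictionary-versus-deck-of-cards effect in miniature.

    import random
    import timeit

    n = 1_000_000
    presorted = list(range(n))     # the "dictionary": already in order
    shuffled = presorted[:]
    random.shuffle(shuffled)       # the "deck of cards": random order

    # Same input size, very different running times, because Timsort detects
    # and exploits existing order in the data.
    print("presorted:", timeit.timeit(lambda: sorted(presorted), number=5), "s")
    print("shuffled: ", timeit.timeit(lambda: sorted(shuffled), number=5), "s")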

Big O notation addresses a lot of these issues:

1. Hardware doesn't matter, because everything is normalized to the cost of one basic O(1) operation rather than seconds on a particular machine (see the sketch below).

2. Big O describes the algorithm on its own, free of the other programs running around it.

3. Big O describes how the input size changes the running time, not how long one particular input takes. It tells you how the algorithm performs in the worst case, not on an average or easy input.

4. Again, Big O deals with algorithms, not programs running on a physical system.
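One way to make that machine-independence concrete is to count operations instead of measuring seconds. The sketch below (Python; binary_search_steps is a hypothetical helper written just for counting) shows that doubling the input adds roughly one probe in the worst case, which is exactly the O(log n) statement, no matter how fast the machine doing each probe is.

    def binary_search_steps(sorted_data, target):
        # Count how many probes (loop iterations) a binary search performs.
        lo, hi, steps = 0, len(sorted_data) - 1, 0
        while lo <= hi:
            steps += 1
            mid = (lo + hi) // 2
            if sorted_data[mid] == target:
                return steps
            if sorted_data[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return steps

    # Worst case (target absent): doubling n adds about one probe - O(log n),
    # independent of the hardware that executes each probe.
    for n in (1_000, 2_000, 4_000, 8_000, 16_000):
        print(n, binary_search_steps(list(range(n)), -1))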

