Is amortization ever really desirable?

For instance, suppose I have an algorithm that's O(n) and an algorithm that's an amortized O(n). Is it fair to say that in strictly big-O terms, the non-amortized algorithm will always be as fast or faster than the amortized algorithm? Or is there ever any reason to prefer the amortized version (ignoring things like code simplicity or ease of implementation)?


  1. Big O notation only tells you how your code scales, not how fast it will be for finite N.
  2. The difference between amortized and non-amortized is important if you care about worst-case performance or latency (such as real-time or interactive systems). If you only care about average throughput, though, they're for all practical purposes the same. Amortized analysis is good enough even in some very performance-critical situations like scientific computing and large-scale data mining.


Or is there ever any reason to prefer the amortized version

Smaller constants.


The main difference between an O(n) algorithm and an amortized O(n) algorithm is that you know nothing about the worst-case behavior of the amortized O(n) algorithm. In practice, that doesn't matter very often: if your algorithm is run many times, then you can count on the law of averages to balance out a few bad cases, and if it isn't run very many times, then it's unlikely that you'll ever hit the worst case at all.
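To make that concrete, here's a minimal sketch (mine, for illustration) that simulates a doubling array and counts how often the expensive case actually fires; the doubling growth policy is an assumption chosen for simplicity:

```cpp
#include <cstdio>

// Minimal sketch: count how often the "bad case" (a reallocation that
// copies every element) actually occurs over n append operations to a
// doubling array, and what it costs on average.
int main() {
    const long n = 1'000'000;
    long capacity = 1, size = 0;
    long copies = 0, reallocations = 0;

    for (long i = 0; i < n; ++i) {
        if (size == capacity) {   // the occasional worst case:
            copies += size;       // copy all existing elements
            capacity *= 2;
            ++reallocations;
        }
        ++size;                   // the common O(1) case
    }

    // Only ~log2(n) appends are expensive, and the total work stays
    // linear, so the average cost per append is a small constant.
    std::printf("reallocations: %ld\n", reallocations);                  // ~20
    std::printf("avg copies per append: %.3f\n", (double)copies / n);    // < 2.0
    return 0;
}
```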

The only cases where the word "amortized" should matter are those where you can't accept occasional bad performance. For example, in GUI applications, you would happily give up some average-case performance in return for a guarantee that you'll never bog down and stop responding while the user is sitting there getting bored. In such applications, you want to make sure that even the worst-case behavior is good for any code that could cause the GUI to stop responding.
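As a hedged sketch of one way to get that guarantee from an amortized-O(1) container: pay the reallocation cost up front, at a moment when a pause is acceptable (the function and its expected_count parameter are hypothetical, standing in for whatever bound your application knows):

```cpp
#include <vector>

// Sketch: in a latency-sensitive loop (e.g. a GUI frame handler), an
// amortized-O(1) push_back can still trigger an O(n) reallocation at
// an unpredictable moment. Reserving capacity up front moves that
// cost to initialization time instead.
void collect_events(std::vector<int>& events, int expected_count) {
    events.reserve(expected_count);   // pay the allocation cost now
    for (int i = 0; i < expected_count; ++i) {
        events.push_back(i);          // guaranteed no reallocation here
    }
}
```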

Still, most of the time, you don't need to worry about amortized O(n) versus O(n), and you can worry instead about what the constant factors are (as others have already said).


Big O notation tells you how your algorithm's cost changes with growing input. It is not a substitute for profiling your code.

It is possible that an O(n^2) algorithm is the better choice for every n that actually occurs in your program, because the O(n) alternative has a larger constant factor.

So your choice of algorithm really depends on which one is faster for your input size. I guess the answer to your question is to profile both algorithms in your program and then decide which is faster.
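As a rough sketch of what that profiling might look like (the two loop bodies below are placeholders for your real candidate algorithms):

```cpp
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <vector>

// Time a callable and return milliseconds; measure both candidates on
// your actual input size instead of trusting big-O alone.
template <typename F>
double time_ms(F f) {
    auto start = std::chrono::steady_clock::now();
    f();
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(stop - start).count();
}

int main() {
    std::vector<int> data(100000, 1);
    long sink = 0;  // prevent the work from being optimized away
    double a = time_ms([&] { for (int x : data) sink += x; });                               // candidate A
    double b = time_ms([&] { for (std::size_t i = 0; i < data.size(); ++i) sink += data[i]; }); // candidate B
    std::printf("A: %.3f ms, B: %.3f ms (sink=%ld)\n", a, b, sink);
    return 0;
}
```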


A classic example of the need for an amortized algorithm is std::vector, for which insertion at the end (push_back) is amortized O(1).
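A small sketch that makes the amortization visible (the exact growth factor is implementation-defined, often 1.5x or 2x, so the printed capacities will vary):

```cpp
#include <cstdio>
#include <vector>

// Watch std::vector's capacity as elements are appended. Most
// push_back calls are O(1); each capacity jump marks the O(n)
// reallocation whose cost amortizes to O(1) per append.
int main() {
    std::vector<int> v;
    std::size_t last_capacity = v.capacity();
    for (int i = 0; i < 1000; ++i) {
        v.push_back(i);
        if (v.capacity() != last_capacity) {
            std::printf("size %zu -> capacity %zu\n", v.size(), v.capacity());
            last_capacity = v.capacity();
        }
    }
    return 0;
}
```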

Some reasons to use amortized algorithms:

  • Much more efficient average case.
  • Easier implementation.
  • No algorithm with a worst-case guarantee exists.


Strictly speaking, big-O isn't a precise enough measure to let you say that an O(n) algorithm is faster than an amortized O(n) one. Think about it: if you drop down a level of fidelity in your complexity analysis, the constant factors may be significantly different and may well make the amortized version faster.

Also, the impact of amortization tends to depend on use. For example, if you're using a hash table, the impact of the amortization will largely depend on your ratio of get to add operations. So if you add 1000 entries and then do a billion lookups, the fact that you had to rehash a couple of times doesn't make much of a difference. But if you're constantly adding entries, the cost of rehashing might add up.
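A sketch of that workload dependence using std::unordered_map (the exact rehash points depend on the implementation's load factor policy, so treat the counts as illustrative):

```cpp
#include <cstdio>
#include <unordered_map>

// Rehashes only happen on the insert path, so a lookup-heavy workload
// never pays the amortized cost again after the table is built.
int main() {
    std::unordered_map<int, int> table;
    std::size_t buckets = table.bucket_count();
    int rehashes = 0;

    for (int i = 0; i < 1000; ++i) {          // add-heavy phase: rehashes happen here
        table[i] = i;
        if (table.bucket_count() != buckets) {
            ++rehashes;
            buckets = table.bucket_count();
        }
    }
    std::printf("rehashes during 1000 inserts: %d\n", rehashes);

    long sum = 0;
    for (int i = 0; i < 1'000'000; ++i)       // get-heavy phase: no rehashes at all
        sum += table.at(i % 1000);
    std::printf("lookups done, sum=%ld, buckets unchanged: %zu\n",
                sum, table.bucket_count());
    return 0;
}
```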

That's why amortization is different from worst case. Big-O reflects the worst case, while amortization lets you say "one in every x operations will take a hit, and x is large enough that it doesn't matter." Also, if you consider examples like insertion into a hash table, x itself grows by a constant factor: if you have a hash table that starts with 100 buckets and doubles on every rehash, then the rehashes are asymptotically going to get farther and farther apart. In addition, the absolute worst-case complexity of an algorithm with an amortized complexity depends on previous calls, while in measures like average case the calls are considered independent.
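A tiny sketch of that arithmetic, assuming for simplicity that the table rehashes whenever its size reaches the bucket count (i.e. a load factor of 1):

```cpp
#include <cstdio>

// A table starting at 100 buckets and doubling rehashes at sizes
// 100, 200, 400, ... so the gap between consecutive rehashes keeps
// doubling: the hits get rarer as the table grows.
int main() {
    long threshold = 100, previous = 0;
    for (int i = 0; i < 8; ++i) {
        std::printf("rehash #%d at size %ld (gap %ld)\n",
                    i + 1, threshold, threshold - previous);
        previous = threshold;
        threshold *= 2;
    }
    return 0;
}
```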
