Big O complexity of the basic arithmetic operations
What is the Big-O complexity for widespread algorithms of the basic arithmetic operations like multiplication, square root, logarithm, scalar and matrix product?
Are there exotic algorithms which are more efficient, in terms of Big-O complexity, but are not very widespread in practical solutions (e.g. not implemented in popular software libraries)?
See http://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations
Matrix product of square matrices:
- O(N^3) (naïve method; sketched below)
- O(N^2.81) (Strassen's algorithm).
There is also an O(N^2.38) Coppersmith–Winograd algorithm, but I don't think it's widespread due to the huge hidden constant.
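For concreteness, here is a minimal sketch of the naïve method in Python (the function name and list-of-lists representation are just for illustration, not any library's API); the triple loop is what gives the O(N^3) bound:

```python
def matmul_naive(A, B):
    """Naive O(N^3) product of two N x N matrices given as lists of lists."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):          # N choices of row
        for j in range(n):      # N choices of column
            s = 0
            for k in range(n):  # N multiply-adds per output entry
                s += A[i][k] * B[k][j]
            C[i][j] = s
    return C
```

Strassen instead splits each matrix into four blocks and gets by with 7 block multiplications instead of 8, which is where the 2.81 (= log2 7) comes from.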
Big-int multiplication:
- Naïve: O(n^2) (see the sketch below)
- Fast-Fourier-transform based: O(n log n log log n) (Schönhage–Strassen algorithm).
There is also an n log n · 2^(O(log* n)) algorithm published in 2008, but it was too new to be widespread.
Usually the naïve method is good enough for normal-sized input.
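As a hedged sketch of what the naïve O(n^2) method looks like (real big-int libraries work in base-2^32 or base-2^64 "limbs" rather than decimal digits; the function name and digit representation here are just for illustration):

```python
def schoolbook_mul(a, b):
    """O(n^2) schoolbook multiplication.

    a and b are little-endian lists of decimal digits (least significant first);
    every digit of a is multiplied by every digit of b, hence n * n steps.
    """
    result = [0] * (len(a) + len(b))
    for i, da in enumerate(a):
        carry = 0
        for j, db in enumerate(b):
            cur = result[i + j] + da * db + carry
            result[i + j] = cur % 10
            carry = cur // 10
        result[i + len(b)] += carry
    return result

# 12 * 34 = 408, digits stored least-significant-first
assert schoolbook_mul([2, 1], [4, 3]) == [8, 0, 4, 0]
```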
You'll consider most simple operations as O(1) because your input size is usually fixed (i.e. 32 or 64 bits).
Under normal conditions, your platform will be performing exactly the same operation for multiplication, square root, logarithm etc. regardless of the "size" of your input (i.e. int a = 0; and int b = Int32.MaxValue are both 32-bit integers).
It gets interesting once you start to look at matrices or representing arbitrary-precision numbers, but someone already linked the wikipedia summary, so I won't go into that.
Just don't use Schönhage–Strassen to multiply "normal" little numbers. It'll make me cry. Just because an algorithm is O(n^2) doesn't mean it's bad - particularly when n is almost always 2^5 or 2^6 (i.e. 32 or 64 bits).
Operations don't have complexity, algorithms do. For example, there are various square root algorithms, and they will have different complexity.
Square root and logarithm may be implemented in various ways, greatly affecting the complexity (judged by the required precision).
If they are implemented with lookup tables (and some kind of interpolation), the memory requirement really explodes as more precision is required but the complexity is that of looking up the value in the array and possibly applying the interpolation.
More popularly they seem to be implemented by their series definitions: recurse or iterate a statement for a number of rounds until you reach the required precision. Here the number of rounds can get very high as more precision is needed, and the calculations themselves also get more expensive at higher precision.
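To make the series approach concrete, here is a minimal sketch in Python (plain doubles, and not how any particular math library actually does it): it sums the series ln(x) = 2·atanh((x−1)/(x+1)) until the next term drops below a requested tolerance, so asking for more precision directly means more rounds:

```python
def ln_series(x, eps=1e-15):
    """Natural log via ln(x) = 2*(t + t^3/3 + t^5/5 + ...) with t = (x-1)/(x+1).

    Iterates until the next term falls below eps; the number of rounds grows
    as the requested precision (smaller eps) increases.
    """
    if x <= 0:
        raise ValueError("ln is only defined for x > 0")
    t = (x - 1) / (x + 1)
    term = t            # current odd power t^(2k+1)
    total = 0.0
    k = 0
    while abs(term) / (2 * k + 1) > eps:
        total += term / (2 * k + 1)
        term *= t * t   # advance to the next odd power
        k += 1
    return 2 * total

print(ln_series(2.0))   # ~0.6931471805599453
```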
Take a look at BigInteger, for arbitrary-length integers. Everything now has a cost in terms of the size of the input, which is the number of bits (typically O(log K) bits for a number K). I will use N for the number of bits below.
For example, addition and subtraction are now O(N). Multiplication is either O(N^2) (naive) or O(N (log N)^(2+epsilon)) with FFT.
Other algorithms include the "power" function, which takes O(N) multiplications (except now each multiplication has a cost!).
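To make the power-function claim concrete, here is a minimal square-and-multiply sketch in Python (the function name is just for illustration); it performs O(N) multiplications where N is the number of bits of the exponent, and each of those multiplications is itself a big-int operation with its own cost:

```python
def power(base, exp):
    """Exponentiation by squaring: O(bits of exp) multiplications.

    Each multiplication costs O(n^2) or O(n log n ...) in the size of its
    operands, so the total cost compounds as the intermediate results grow.
    """
    result = 1
    while exp > 0:
        if exp & 1:        # lowest bit set: fold the current square into the result
            result *= base
        base *= base       # square for the next bit
        exp >>= 1
    return result

assert power(3, 13) == 3 ** 13
```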
And there are additional complexities for BigDecimal, the arbitrary-precision decimal equivalent; beyond the more basic operations, some things get more interesting as well (especially if you need to decide how much precision you want). You can take a look at Java's implementation.
There's a Fourier-type algorithm that does integer multiplication as well (Schönhage–Strassen).
I thought there was a version of Strassen's algorithm that does slightly better than normal for integer multiplication, but now that I think of it, it ends up being the same as the straightforward approach...
Addition and subtraction are pretty much, just addition and subtraction. Division and square root are probably interesting though...
ALSO: Note that so far everyone has talked about INTEGER arithmetic. Once you get to floats/doubles all bets are off. Then you get into the world of numerical analysis, and that's a whole field of its own...
Division and square roots for huge numbers of bits are not much more complex than multiplication. For both operations, plain old Newton iteration can be arranged so that each step uses only multiplications. Since the number of correct digits doubles with each step, we can simply double the precision of the calculation at each step as well.
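A minimal sketch of that idea in Python (plain doubles rather than big numbers, and the function name is just for illustration): Newton's iteration for 1/d uses only multiplication and subtraction, and the number of correct digits roughly doubles every step:

```python
def reciprocal_newton(d, x0, steps=6):
    """Approximate 1/d using only multiplication and subtraction.

    Newton's iteration for f(x) = 1/x - d is x_{k+1} = x_k * (2 - d * x_k).
    It converges for any starting guess with 0 < x0 < 2/d, and the relative
    error is squared at every step, which is why arbitrary-precision
    libraries can double the working precision from one step to the next.
    """
    x = x0
    for _ in range(steps):
        x = x * (2 - d * x)
    return x

print(reciprocal_newton(3.0, 0.2))   # converges to ~0.3333333333333333
```

Division a/b is then one more multiplication, a * (1/b), and a similar multiplication-only iteration exists for 1/sqrt(d), from which the square root follows.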