The comparison of the computing time of multiplication
Let a, b be two integers with n digits. I am wondering whether the computing time of the square of a is shorter than that of a*b.
Thank you for your help.
I don't think there's a way to square a without using an IMUL on x86. I could be wrong.
To find out how long something takes, microbenchmark it!
Edit: oh wait, I've got it! a*b takes two memory reads and a*a takes one! So a*a is faster :-).
True answer: there's no reason a*b would be slower unless you have some outside factor influencing things.
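Following the microbenchmark suggestion above, here is a minimal Python sketch using the standard timeit module; the digit count n, the repetition count, and the random operands are arbitrary assumptions, and the result will depend on the interpreter and machine.

```python
import random
import timeit

# Arbitrary digit count; CPython's arbitrary-precision ints switch to
# Karatsuba multiplication once the operands are large enough.
n = 10_000
a = random.randrange(10**(n - 1), 10**n)
b = random.randrange(10**(n - 1), 10**n)

square_time = timeit.timeit(lambda: a * a, number=1_000)
product_time = timeit.timeit(lambda: a * b, number=1_000)

print(f"a*a: {square_time:.4f} s")
print(f"a*b: {product_time:.4f} s")
```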
I assume your question is:
*Let a, b be two integers with n digits. I am wondering if the computing time of calculating the square of a is shorter than the computing time of calculating a*b.*
If n is large enough that you cannot just use a single multiply instruction, then every algorithm I know of can take advantage of the fact that both factors are the same. That is true for the algorithm you learned at school, since the cross products a_i*a_j and a_j*a_i are equal, so almost half of the digit products need to be computed only once and then doubled. At the extreme end, for very large n, using convolution with FFTs, the FFT of the two factors is the same for a square and needs to be calculated only once.
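To illustrate the schoolbook observation, here is a minimal Python sketch; the representation (a little-endian list of base-10 digits), the function name, and the deferred carry pass are my own choices for illustration, not any particular library's method.

```python
def schoolbook_square(digits):
    """Square a number given as a little-endian list of base-10 digits.

    Exploits symmetry: each cross product digits[i]*digits[j] with i < j
    is computed once and doubled, so only about n*(n+1)/2 of the n*n
    digit products are actually multiplied.
    """
    n = len(digits)
    acc = [0] * (2 * n)                              # column sums, carried later
    for i in range(n):
        acc[2 * i] += digits[i] * digits[i]          # diagonal terms
        for j in range(i + 1, n):
            acc[i + j] += 2 * digits[i] * digits[j]  # doubled cross terms
    carry = 0
    for k in range(len(acc)):                        # single carry pass
        total = acc[k] + carry
        acc[k] = total % 10
        carry = total // 10
    while len(acc) > 1 and acc[-1] == 0:             # drop leading zeros
        acc.pop()
    return acc

# Example: 123**2 == 15129
print(schoolbook_square([3, 2, 1]))   # [9, 2, 1, 5, 1], i.e. 15129
```

Counting multiplications, this does about n*(n+1)/2 digit products instead of the n*n a full schoolbook product of two different factors would need.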
Take a look at the benchmarks in Bentley's "Programming Pearls"; you could hack up something from there to measure.