Do computers calculate .5*x faster than they do x/2? [duplicate]
Possible Duplicate:
Why is float division slow?
I heard that a computer does the operation .5*x faster than x/2. Is this true? Can you please tell me why or how this works?
That is not true, generally speaking. It depends on the instruction set of the microprocessor. Sometimes there is no native '/' operation, so the compiler must emit a longer sequence of instructions to compute x/2, while .5*x needs only two: one to load the constant .5 and one to multiply.
But there is no restriction against having '/': some microprocessors do have a native divide instruction, so it will depend.
Short answer: Yes, the multiplication is usually faster.
For particular cases it can depend on many things: the platform, the language, the compiler, the hardware, the presence or absence of lookup tables, and so on. For integer division by powers of 2, bit shifting is sometimes a little faster still, but compilers can usually apply that optimisation themselves.
wim@wim-acer:~/Desktop$ python -mtimeit '0.5*1234567890.'
100000000 loops, best of 3: 0.0168 usec per loop
wim@wim-acer:~/Desktop$ python -mtimeit '1234567890./2.'
10000000 loops, best of 3: 0.043 usec per loop
wim@wim-acer:~/Desktop$ python -mtimeit '1234567890 >> 1'
100000000 loops, best of 3: 0.0168 usec per loop
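The bit-shifting equivalence behind that last timing can be sanity-checked directly: for non-negative integers, a right shift by one computes the same value as floor division by 2, which is why a compiler is free to substitute one for the other (a minimal Python check; the helper names are just for illustration):

```python
def halve_shift(x):
    # Right shift by one bit: drops the lowest bit, halving the value.
    return x >> 1

def halve_div(x):
    # Floor division by 2, the operation the shift replaces.
    return x // 2

# Spot-check the equivalence over a range of non-negative values.
for x in range(1000):
    assert halve_shift(x) == halve_div(x)
```

One caveat: in C and C++, signed integer division truncates toward zero while an arithmetic right shift rounds toward negative infinity, so for possibly-negative values the compiler has to emit a small fix-up rather than a bare shift.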
When writing C++ code, if I have to divide by a constant k inside a loop, I have often been able to squeeze out some gains in performance-critical code by defining double ki = 1./k outside the loop and multiplying by ki inside the loop.
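The same hoisting idea can be sketched in Python (function names are illustrative; in C++ the win comes from replacing a per-iteration divide instruction with a cheaper multiply):

```python
def scale_by_division(values, k):
    # Naive version: one division per element inside the loop.
    return [v / k for v in values]

def scale_by_reciprocal(values, k):
    # Hoisted version: one division total, then one multiplication
    # per element, which is usually cheaper on most hardware.
    ki = 1.0 / k
    return [v * ki for v in values]
```

Note that v * (1./k) is not always bit-identical to v / k, because 1./k is itself rounded; the two agree exactly when k is a power of two, and otherwise may differ in the last bit, which is usually acceptable but worth knowing.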