Performance gains from fast multiplication with bit shifting
I have been reading a lot of articles lately about programming practice, design and so forth and was curious about the real performance gains from implementing multiplication as bit shifting.
The example I was reading about encouraged implementing x*320 as ((x << 8) + (x << 6)) for a commonly used routine.
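For concreteness, a minimal C sketch of the two forms (function names are just for illustration):

    /* Multiplying by 320 via shifts: 320 = 256 + 64 = (1 << 8) + (1 << 6).
       The parentheses matter: << binds more loosely than +, so
       "x << 8 + x << 6" would actually parse as x << (8 + x) << 6. */
    unsigned mul320_shift(unsigned x)
    {
        return (x << 8) + (x << 6);
    }

    unsigned mul320_plain(unsigned x)
    {
        return x * 320;
    }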
How relevant is this in modern compilers? If there are significant performance gains, can compilers not automatically convert these "easy multiplications" to bit-shifts as necessary?
Has anyone had to resort to bit-shifting in this way in their projects to achieve faster multiplication? What performance gains can you expect?
Yes, compilers will do most of these optimizations for you, and they're quite aggressive about it, so there's rarely a need to do it yourself (especially at the cost of readability).
However, on modern machines multiplication isn't that much slower than shifting, so any constant that needs more than about two shifts is usually better handled as a plain multiplication. Compilers know this and will choose accordingly.
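A rough illustration of that cutoff (the constant 100 here is arbitrary):

    /* 100 = 64 + 32 + 4, so the shift form needs three shifts and two adds:
       (x << 6) + (x << 5) + (x << 2).
       At that length a single multiply instruction is usually at least as
       fast, so writing the plain multiplication and letting the compiler
       decide is the sensible default. */
    unsigned mul100(unsigned x)
    {
        return x * 100;
    }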
EDIT:
In my experience, I've never been able to outdo the compiler in this area unless the code was vectorized via SSE intrinsics (which compilers don't really try to optimize).
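A minimal sketch of the kind of case I mean, assuming SSE2 (the helper name is just for illustration). SSE2 has no packed 32-bit multiply (_mm_mullo_epi32 only arrived with SSE4.1), so a shift-and-add form of x*320 is the natural way to write it with intrinsics:

    #include <emmintrin.h>  /* SSE2 */

    /* Multiply four packed 32-bit ints by 320 using shift-and-add:
       320 = 256 + 64. SSE2 lacks a packed 32-bit multiply
       (_mm_mullo_epi32 requires SSE4.1), so the shift form is what
       you would hand-write here. */
    static __m128i mul320_sse2(__m128i x)
    {
        __m128i hi = _mm_slli_epi32(x, 8);  /* x * 256 */
        __m128i lo = _mm_slli_epi32(x, 6);  /* x * 64  */
        return _mm_add_epi32(hi, lo);       /* x * 320 */
    }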