MSVC generates strange/slow binary for some multiplications and divisions
I use MSVC 2010 SP1 and I have the following line of C++ code:
int32_t c = (int64_t(a)*int64_t(b))>>2;
When a and b are not constants, MSVC correctly generates 32-bit imul and shrd instructions. But when a or b is a constant, it generates a call to _allmul instead of the imul instruction. Could there be any reason for this? How can I force/guide it to always generate good code? What bothers me is that it generates worse code when it has more compile-time information. I have found that the _allmul function performs a full 64-bit multiplication, but I think that is not needed in this case.
I have also noticed that for the line int32_t c = (int64_t(a)*int64_t(b))/4; it even generates a call to _alldiv for the division by 4.
Edit: It seems to be a compiler bug. I have filed a bug report.
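For illustration, a minimal repro might look like this (the function names and the constant are arbitrary; assuming MSVC 2010 SP1 with optimizations enabled):

#include <cstdint>

// Non-constant operands: compiles to a single imul plus shrd, as expected.
int32_t mul_shift(int32_t a, int32_t b) {
    return int32_t((int64_t(a) * int64_t(b)) >> 2);
}

// One constant operand: the same expression instead produces a call to _allmul.
int32_t mul_shift_const(int32_t a) {
    return int32_t((int64_t(a) * int64_t(1000)) >> 2);
}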
Partially related: if you want to be sure to exploit the imul capability of performing a 32x32 => 64-bit multiplication, you can use the Int32x32To64 "fake API" (actually a macro):
Multiplies two signed 32-bit integers, returning a signed 64-bit integer result. The function performs optimally on 32-bit Windows.
This function is implemented on all platforms by optimal inline code: a single multiply instruction that returns a 64-bit result.
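For example, applied to the expression in the question it could look something like this (a sketch, assuming windows.h is included to get the macro):

#include <windows.h>  // Int32x32To64
#include <stdint.h>

int32_t mul_shift(int32_t a, int32_t b) {
    // Int32x32To64 casts both operands so the compiler can emit a single
    // 32x32 => 64-bit imul; the result is then shifted as in the question.
    return (int32_t)(Int32x32To64(a, b) >> 2);
}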
By the way, did you enable the optimizations? I would be quite baffled if, with optimizations enabled, the compiler wasn't able to figure it out by itself.
Edit: interestingly enough, looking for Int32x32To64 in winnt.h, you find, for x86:
//
// The x86 C compiler understands inline assembler. Therefore, inline functions
// that employ inline assembler are used for shifts of 0..31. The multiplies
// rely on the compiler recognizing the cast of the multiplicand to int64 to
// generate the optimal code inline.
//
#define Int32x32To64( a, b ) (LONGLONG)((LONGLONG)(LONG)(a) * (LONG)(b))
#define UInt32x32To64( a, b ) (ULONGLONG)((ULONGLONG)(DWORD)(a) * (DWORD)(b))
So it should definitely produce imul, given that even the Platform SDK trusts the compiler to do the right thing.
Edit again: if you need to be sure to get an imul, you could use the __emul compiler intrinsic.
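A sketch of what that might look like (assuming intrin.h for the declaration):

#include <intrin.h>  // __emul
#include <stdint.h>

int32_t mul_shift(int32_t a, int32_t b) {
    // __emul performs a signed 32x32 => 64-bit multiply, which maps to a single imul.
    return (int32_t)(__emul(a, b) >> 2);
}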
I see the _allmul call if I run the compiler without optimisation, but with /Ox I see a combination of shifts and adds that depends on the value of the constant part.
I think you need to provide a specific bit of code, and the compiler options you've used.
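For example, you could compare the assembly listings with and without optimisation (the file name here is just a placeholder):

rem unoptimised: the listing shows a call to _allmul
cl /c /FAs test.cpp
rem optimised: the listing shows imul or a shift/add sequence
cl /c /Ox /FAs test.cpp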
Have you tried as a workaround:
int32_t c = (int64_t(int32_t(a))*int64_t(int32_t(b)))>>2;