
C# - Inconsistent math operation result on 32-bit and 64-bit

Consider the following code:

double v1 = double.MaxValue;
double r = Math.Sqrt(v1 * v1);

r = double.MaxValue on a 32-bit machine
r = Infinity on a 64-bit machine

We develop on 32-bit machines and so were unaware of the problem until a customer notified us. Why does this inconsistency happen? How can we prevent it?
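As a sanity check, the x64 result is what IEEE 754 arithmetic itself predicts. Python's float is the same IEEE 754 binary64 format as C#'s double, so the following reproduces the arithmetic (though not the C# JIT's code generation):

```python
import math
import sys

# sys.float_info.max is the same IEEE 754 binary64 value as C#'s double.MaxValue
v1 = sys.float_info.max

product = v1 * v1        # overflows the 64-bit double range -> +Infinity
r = math.sqrt(product)   # sqrt(+Infinity) is defined to be +Infinity

print(product)           # inf
print(r)                 # inf
```

So Infinity is the answer a strict 64-bit double computation must give; the double.MaxValue result is the anomaly.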


The x86 instruction set has tricky floating point consistency issues due to the way the FPU works. Internal calculations are performed with more significant bits than can be stored in a double, causing truncation when the number is flushed from the FPU stack to memory.

That got fixed in the x64 JIT compiler: it uses SSE instructions, and the SSE registers have the same size as a double.

This is going to bite you when your calculations test the boundaries of floating point accuracy and range. You never want to get close to needing more than 15 significant digits, and you never want to get close to 10E308 or 10E-308. You certainly never want to square the largest representable value. In practice this is rarely a real problem; numbers that represent physical quantities don't get anywhere close.
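One way to stay away from those boundaries is to sanity-check operands before squaring them. A minimal sketch of the idea, using Python's IEEE 754 binary64 floats (the name `safe_square` and the choice of limit are illustrative, not from the original answer):

```python
import math
import sys

# largest magnitude whose square still fits in a 64-bit double (~1.34e154)
SQUARE_LIMIT = math.sqrt(sys.float_info.max)

def safe_square(x: float) -> float:
    """Square x, raising instead of silently overflowing to infinity."""
    if abs(x) >= SQUARE_LIMIT:
        raise OverflowError(f"{x!r} is too large to square in a double")
    return x * x

print(safe_square(1e100))   # 1e+200, comfortably inside the representable range
```

The equivalent guard in C# would compare against `Math.Sqrt(double.MaxValue)` before multiplying.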

Use this opportunity to find out what is wrong with your calculations. It is very important that you run the same operating system and hardware that your customer is using; it is high time you got the machines needed to do so. Code that has only been tested on an x86 machine has not really been tested.

The quick-and-dirty fix is Project + Properties, Compile tab, Platform Target = x86.


Fwiw, the bad result on x86 is caused by a bug in the JIT compiler. It generates this code:

      double r = Math.Sqrt(v1 * v1);
00000006  fld         dword ptr ds:[009D1578h] 
0000000c  fsqrt            
0000000e  fstp        qword ptr [ebp-8] 

The fmul instruction is missing, removed by the code optimizer in Release mode, no doubt triggered by it seeing the constant operand double.MaxValue. That's a bug; you can report it at connect.microsoft.com. Pretty sure they're not going to fix it, though.


This is a near duplicate of

Why does this floating-point calculation give different results on different machines?

My answer to that question also answers this one. In short: different hardware is allowed to give more or less accurate results depending on the details of the hardware.

How to prevent it from happening? Since the problem is on the chip, you have two choices. (1) Don't do any math in floating point numbers. Do all your math in integers. Integer math is 100% consistent from chip to chip. Or (2) require all your customers to use the same hardware as you develop on.

Note that if you choose (2) then you might still have problems; small details like whether a program was compiled debug or retail can change whether floating point calculations are done in extra precision or not. This can cause inconsistent results between debug and retail builds, which is also unexpected and confusing. If your requirement of consistency is more important than your requirement of speed then you'll have to implement your own floating point library that does all its calculations in integers.
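Choice (1) can be sketched with scaled-integer (fixed-point) arithmetic, which is bit-for-bit reproducible on every chip. A minimal illustration using Python's arbitrary-precision integers (the `SCALE` value and helper names are illustrative choices, not a prescribed library):

```python
import math

SCALE = 10**6  # fixed-point: represent x as round(x * SCALE), 6 fractional digits

def to_fixed(x: float) -> int:
    return round(x * SCALE)

def from_fixed(a: int) -> float:
    return a / SCALE

def fixed_mul(a: int, b: int) -> int:
    return a * b // SCALE          # exact integer multiply, then rescale

def fixed_sqrt(a: int) -> int:
    return math.isqrt(a * SCALE)   # isqrt(a * SCALE) keeps the scale consistent

print(from_fixed(fixed_sqrt(to_fixed(2.25))))                 # 1.5
print(from_fixed(fixed_mul(to_fixed(1.5), to_fixed(2.0))))    # 3.0
```

Every operation here is integer arithmetic with deterministic truncation, so the results cannot vary between x86, x64, or any other platform. The same scheme can be written in C# with `long` or `System.Numerics.BigInteger`.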


I tried this in x86 and x64 in debug and release mode:

x86 debug:   Double.MaxValue
x64 debug:   Infinity
x86 release: Infinity
x64 release: Infinity

So it seems that you only get that result in x86 debug mode.

Not sure why there is a difference, though, the x86 code in debug mode:

            double r = Math.Sqrt(v1 * v1);
00025bda  fld         qword ptr [ebp-44h] 
00025bdd  fmul        st,st(0) 
00025bdf  fsqrt            
00025be1  fstp        qword ptr [ebp-5Ch] 
00025be4  fld         qword ptr [ebp-5Ch] 
00025be7  fstp        qword ptr [ebp-4Ch] 

is the same as the code in release mode:

            double r = Math.Sqrt(v1 * v1);
00000027  fld         qword ptr [ebp-8] 
0000002a  fmul        st,st(0) 
0000002c  fsqrt            
0000002e  fstp        qword ptr [ebp-18h] 
00000031  fld         qword ptr [ebp-18h] 
00000034  fstp        qword ptr [ebp-10h]


The problem is that Math.Sqrt expects a double argument. v1 * v1 cannot be represented as a double, so it overflows; in IEEE 754 arithmetic that is well-defined (not undefined behavior): the product becomes positive infinity, and Math.Sqrt(Infinity) returns Infinity. The surprising result is therefore the x86 one, not the x64 one.


double.MaxValue * double.MaxValue is an overflow.

You should avoid the calculation overflowing rather than relying on the 32-bit behaviour you reported (which, as commented upon, seems unlikely to be dependable).
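One concrete way to avoid the overflow is algebraic: sqrt(v1 * v1) is mathematically just |v1|, and library routines like hypot use internal scaling so the intermediate square never overflows. A sketch using Python's IEEE 754 binary64 floats (the same format as C#'s double; the C# analogue would be Math.Abs for this trivial case):

```python
import math
import sys

v1 = sys.float_info.max

# the naive form overflows in the intermediate multiply:
print(math.sqrt(v1 * v1))     # inf

# hypot scales internally, so the square of v1 is never materialized:
print(math.hypot(v1, 0.0))    # 1.7976931348623157e+308
```

When the operands really are two different values, hypot-style scaling (divide both by the larger magnitude, compute, multiply back) is the general fix.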

[Are the 32bit and 64bit builds the same configuration and settings?]
