Floating point comparison functions explanation
Anyone care to explain in detail (line by line) what this is doing? I have read Bruce Dawson's paper on comparing floats and have found C# code converted from it, but I don't quite understand it. What is maxDeltaBits and what is its purpose? Dawson's paper states that the technique can also be applied to double; if that is the case, would you need to convert the double value to Int64 instead of Int32?
public static int FloatToInt32Bits( float f )
{
    return BitConverter.ToInt32( BitConverter.GetBytes( f ), 0 );
}
public static bool AlmostEqual2sComplement( float a, float b, int maxDeltaBits )
{
    int aInt = FloatToInt32Bits( a );
    if ( aInt < 0 )                   // Why only if it is less than 0?
        aInt = Int32.MinValue - aInt; // What is the purpose of this?

    int bInt = FloatToInt32Bits( b );
    if ( bInt < 0 )                   // Why only if it is less than 0?
        bInt = Int32.MinValue - bInt; // What is the purpose of this?

    int intDiff = Math.Abs( aInt - bInt );
    return intDiff <= ( 1 << maxDeltaBits ); // Why ( 1 << maxDeltaBits )?
}
It's all there in Bruce Dawson's papers. A float (or double, for that matter) has finite precision, which means there is a fixed set of numbers a float can represent.
What Dawson's method does is count how many steps away within that set you are willing to accept as "equal", rather than use a fixed acceptable error value. The relative error of one such step varies by a factor of (almost) 2: it is smaller when the mantissa is high and larger when the mantissa is low. However, the relative error for a fixed number of steps will not vary more than that.
To Roland Illig:
Why is it "a fact that you can never test two FP numbers for equality directly"? You can, but you will not always get what you expect. Integers that are not too large, stored as floats, will compare fine. However, a fraction that can be written with a finite number of decimal digits generally cannot be stored with a finite number of binary digits; the number gets truncated. When you then do arithmetic, the error introduced by that truncation comes into play. Also, the FPU of your machine may keep intermediate values at a higher precision, so you can end up comparing values of unequal precision.
Test the following (C includes, change to cstdio and cmath for modern C++):
#include <stdio.h>
#include <math.h>

void Trig(float x, float y)
{
    if (cos(x) == cos(y)) printf("cos(%f) equal to cos(%f)\n", x, y);
    else printf("cos(%f) not equal to cos(%f)\n", x, y);
}

int main()
{
    float f = 0.1;
    f = f * 0.1 * 0.1 * 0.1;
    if (f == 0.0001f) printf("%f equals 0.0001\n", f);
    else printf("%f does not equal 0.0001\n", f);
    Trig(1.44, 1.44);
    return 0;
}
On my machine, I get the "not equal" branch in both cases:
0.000100 does not equal 0.0001
cos(1.440000) not equal to cos(1.440000)
What you get on your machine is implementation-dependent.
The < 0 stuff is due to two's complement representation (http://en.wikipedia.org/wiki/Two%27s_complement): reinterpreted as signed integers, negative floats sort in the reverse order, and subtracting from Int32.MinValue remaps them so that the integer ordering matches the float ordering. maxDeltaBits handles the fact that you can never reliably test two FP numbers for equality directly; you can only make sure they are "almost the same", i.e. within a certain number of representable steps of each other.