C# double precision problem
Imagine that a - b < c (a, b, c are C# doubles). Is it guaranteed that a < b + c?
Thanks!
EDIT
Let's say that arithmetical overflow doesn't occur, unlike in the following example:
double a = 1L << 53;
double b = 1;
double c = a;
Console.WriteLine(a - b < c); // Prints True
Console.WriteLine(a < b + c); // Prints False
Imagine that Math.Abs(a) < 1.0 && Math.Abs(b) < 1.0 && Math.Abs(c) < 1.0
No. Suppose a = c, a very large number, and b is a very small number. It's possible that a - b has a representation less than a, but a + b is so close to a (and bigger) that its nearest representable value is still a.
Here's an example:
double a = 1L << 53;
double b = 1;
double c = a;
Console.WriteLine(a - b < c); // Prints True
Console.WriteLine(a < b + c); // Prints False
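To see why 2^53 was chosen here: it is the point at which the spacing between adjacent doubles grows to 2, so a - 1 is still exactly representable while a + 1 is a tie that rounds back down to a. A quick sketch (assuming IEEE 754 round-to-nearest-even, which C# doubles use):
double a = 1L << 53;                        // 2^53: from here on, adjacent doubles are 2 apart
Console.WriteLine(a - 1 == (1L << 53) - 1); // True: 2^53 - 1 still fits in the 53-bit significand
Console.WriteLine(a + 1 == a);              // True: 2^53 + 1 is not representable and rounds back to 2^53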
EDIT:
Here's another example, which matches your edited question:
double a = 1.0;
double b = 1.0 / (1L << 53);
double c = a;
Console.WriteLine(a - b < c); // Prints True
Console.WriteLine(a < b + c); // Prints False
In other words, when we subtract a very small number from 1, we get a result less than 1. When we add the same number to 1, we just get 1 back due to the limitations of double precision.
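To make the asymmetry concrete: the gap between 1.0 and the next double below it is 2^-53, while the gap above 1.0 is 2^-52, so 1 - 2^-53 lands exactly on a representable value but 1 + 2^-53 is a tie that rounds back to 1.0. A quick sketch of the raw bit patterns (again assuming IEEE 754 round-to-nearest-even):
double one = 1.0;
double tiny = 1.0 / (1L << 53);                                // 2^-53
Console.WriteLine(BitConverter.DoubleToInt64Bits(one - tiny)); // one below the bit pattern of 1.0: the adjacent smaller double
Console.WriteLine(BitConverter.DoubleToInt64Bits(one));        // the bit pattern of 1.0 (0x3FF0000000000000)
Console.WriteLine(BitConverter.DoubleToInt64Bits(one + tiny)); // identical to 1.0: the addition rounded back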
No, not always:
double a = double.MaxValue;
double b = double.MaxValue;
double c = 0.1;
Console.WriteLine(a - b < c); // True
Console.WriteLine(a < b + c); // False
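The reason, as a quick sketch: a - b is exactly 0, which is less than 0.1, while the spacing between adjacent doubles near double.MaxValue is roughly 2E+292, so adding 0.1 cannot change the value and does not overflow to infinity:
Console.WriteLine(double.MaxValue - double.MaxValue);        // 0
Console.WriteLine(double.MaxValue + 0.1 == double.MaxValue); // True: 0.1 is far below one ulp (~2E+292) at this magnitude
Console.WriteLine(double.IsInfinity(double.MaxValue + 0.1)); // False: no overflow occurs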
This link discusses floating-point arithmetic properties and could be very interesting:
FLOATING-POINT FALLACIES
In particular, search for "Properties of Relations".