Problem with calculating floats
I ran into a strange situation when executing the following lines of code:
const float a = 47.848711;
const float b = 47.862952;
float result = b - a;
Logging it with NSLog(@"%.10f", result), I get result = 0.0142440796.
I expected to get 0.0142410000.
What's going on?
Classic!
What Every Computer Scientist Should Know About Floating-Point Arithmetic
(in short, binary floating point cannot store most decimal values exactly; Wikipedia covers the same ground).
What if I ask you the following:
const int a = 1.3;
const int b = 2.7;
int result = b - a;
Logging it with NSLog(@"%d", result), I get result = 1.
I expected to get 1.4. What's going on?
In this case, the answer is obvious, right? 1.3 isn't an integer, so the actual value that gets stored in a is 1, and the value that gets stored in b isn't 2.7, but rather 2. When I subtract 1 from 2 I get exactly 1, which is the observed answer. If you're with me so far, keep reading.
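To watch the truncation happen, here is a minimal, self-contained sketch; the main/@autoreleasepool wrapper is my own scaffolding around the fragments above, with NSLog as in the question:

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        const int a = 1.3;   // the initializer 1.3 is truncated to 1
        const int b = 2.7;   // the initializer 2.7 is truncated to 2
        int result = b - a;  // 2 - 1 == 1; the fractional parts were never stored
        NSLog(@"a = %d, b = %d, result = %d", a, b, result); // a = 1, b = 2, result = 1
    }
    return 0;
}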
The exact same thing is happening in your example. 47.848711 isn't representable as a single-precision float, so the closest single-precision value is stored in a instead, which is exactly:

a = 47.8487091064453125

Similarly, the value stored in b is the closest single-precision value to 47.862952, which is exactly:

b = 47.86295318603515625
When you subtract these numbers to get result, you get:
47.86295318603515625
- 47.8487091064453125
----------------------
0.01424407958984375
When you round that value to 10 digits to print it out, you get:
0.0142440796
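You can confirm the whole chain on your own machine with a sketch like the one below (again, the main wrapper is my scaffolding): it prints the exactly-stored values of a and b, then the 10-digit result from the question. Note that the subtraction itself is exact here; all of the surprise comes from the values stored at initialization.

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        const float a = 47.848711;   // actually stores 47.8487091064453125
        const float b = 47.862952;   // actually stores 47.86295318603515625
        float result = b - a;        // exact: the operands are close enough in magnitude

        NSLog(@"a      = %.20f", a);       // 47.84870910644531250000
        NSLog(@"b      = %.20f", b);       // 47.86295318603515625000
        NSLog(@"result = %.10f", result);  // 0.0142440796, as observed
    }
    return 0;
}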