Confusion with floating point numbers
#include <stdio.h>

int main()
{
    float x = 3.4e2;
    printf("%f", x);
    return 0;
}
Output:
340.000000 // It's ok.
But if I write x = 3.1234e2, the output is 312.339996, and if x = 3.12345678e2, the output is 312.345673.
Why are the outputs like these? I think that if I write x = 3.1234e2 the output should be 312.340000, but the actual output is 312.339996 using the GCC compiler.
Not all fractional numbers have an exact binary equivalent, so they are rounded to the nearest representable value.
As a simplified example, if you had only 3 bits for the fraction, you could represent:
0
0.125
0.25
0.375
...
0.5 has an exact representation, but 0.1 will be shown as 0.125.
Of course the real differences are much smaller.
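You can see this rounding directly by printing a decimal constant with more digits than the type actually stores. A minimal sketch (the exact digits shown in the comments assume a typical IEEE 754 system):

#include <stdio.h>

int main(void)
{
    float f = 0.1f;   /* nearest float to 0.1 */
    double d = 0.1;   /* nearest double to 0.1 */

    /* Asking for extra digits reveals the stored approximation. */
    printf("float  0.1 -> %.20f\n", f);  /* e.g. 0.10000000149011611938 */
    printf("double 0.1 -> %.20f\n", d);  /* e.g. 0.10000000000000000555 */
    return 0;
}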
Floating-point numbers are normally represented as binary fractions times a power of two, for efficiency. This is about as accurate as base-10 representation, except that there are decimal fractions that cannot be exactly represented as binary fractions. They are, instead, represented as approximations.
Moreover, a float is normally 32 bits long, which means it doesn't have all that many significant digits. You can see in your examples that they're accurate to about 8 significant digits.
You are, however, printing the numbers to slightly beyond their significance, and therefore you're seeing the difference. Look at your printf format string documentation to see how to print fewer digits.
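For example, with the value from the question, limiting the precision in the format string hides the digits that the float never held in the first place (a sketch, not the only way to do it):

#include <stdio.h>

int main(void)
{
    float x = 3.1234e2f;

    printf("%f\n", x);    /* default: 6 digits after the point -> 312.339996 */
    printf("%.2f\n", x);  /* 2 digits after the point          -> 312.34 */
    printf("%g\n", x);    /* 6 significant digits              -> 312.34 */
    return 0;
}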
You may need to represent decimal numbers exactly; this often happens in financial applications. In that case, you need to use a special library to represent numbers, or simply calculate everything as integers (such as representing amounts as cents rather than as dollars and fractions of a dollar).
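As a sketch of the integer approach (the names price_cents and total_cents are made up for this example), all arithmetic stays exact because only whole numbers of cents are ever stored:

#include <stdio.h>

int main(void)
{
    long price_cents = 1999;  /* $19.99 kept as an integer number of cents */
    long quantity = 3;
    long total_cents = price_cents * quantity;

    /* Format back into dollars and cents only when printing. */
    printf("total: $%ld.%02ld\n", total_cents / 100, total_cents % 100);
    return 0;
}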
The standard reference is What Every Computer Scientist Should Know About Floating-Point Arithmetic, but it looks like that would be very advanced for you. Alternatively, you could Google floating-point formats (particularly IEEE standard formats) or look them up on Wikipedia, if you wanted the details.