interview question printing a floating point number [duplicate]
What's the output of the following program and why?
#include <stdio.h>
int main()
{
float a = 12.5;
printf("%d\n", a);
printf("%d\n", *(int *)&a);
return 0;
}
My compiler prints 0 and 1095237632. Why?
In both cases you pass bits representing a floating-point value and print them as a decimal integer. The second case is the simpler one: the output is the underlying bit representation of the floating-point number. (This assumes that the calling convention passes the value of a float the same way it passes an int, which is not guaranteed.)
However, in the first case, when you pass a float to a variadic function like printf, it is promoted to a double. This means the value passed will be 64 bits, and printf will pick up one half of it (or perhaps garbage). In your case it has apparently picked up the 32 least significant bits, which will typically be all zero after a float-to-double conversion.
Just to make it absolutely clear: the code in the question is not valid C, as passing printf an argument that does not match its format specifier is undefined behavior.
The memory referred to by a holds a pattern of bits which the processor uses to represent 12.5. How does it represent it? IEEE 754. What does it look like? See this calculator to find out: 0x41480000. What is that when interpreted as an int? 1095237632.
Why do you get a different value when you don't do the casting? I'm not 100% sure, but I'd guess it's because compilers can use a calling convention that passes floating-point arguments in a different location than integer arguments, so when printf tries to find the first integer argument after the format string, there is nothing predictable there.
(Or more likely, as @Lindydancer points out, the float bits may be passed in the 'right' place for an int, but because they are first promoted to a double representation by extending with zeros, there are 0s where printf expects the first int to be.)
This is because of the floating-point representation (see the IEEE 754 standard).
In short, the set of bits that makes up the 12.5 floating-point value in IEEE 754 representation, when interpreted as an integer, gives you a strange value that has little in common with 12.5.
As for the 0 from printf("%d\n", a), that is simply undefined behavior resulting from the incorrect printf call.