How is conversion of float/double to int handled in printf?
Consider this program
int main()
{
float f = 11.22;
double d = 44.55;
int i,j;
i = f; //cast float to int
j = d; //cast double to int
printf("i = %d, j = %d, f = %d, d = %d", i,j,f,d);
//This prints the following:
// i = 11, j = 44, f = -536870912, d = 1076261027
return 0;
}
Can someone explain why the casting from double/float to int works correctly in the first case, and does not work when done in printf?
This program was compiled with gcc 4.1.2 on a 32-bit Linux machine.
EDIT: Zach's answer seems logical, i.e. printf uses the format specifiers to figure out what to pop off the stack. However, consider this follow-up question:
int main()
{
char c = 'd'; // sizeof c is 1, however sizeof character literal
// 'd' is equal to sizeof(int) in ANSI C
printf("lit = %c, lit = %d , c = %c, c = %d", 'd', 'd', c, c);
//this prints: lit = d, lit = 100 , c = d, c = 100
//how does printf here pop off the right number of bytes even when
//the size represented by format specifiers doesn't actually match
//the size of the passed arguments(char(1 byte) & char_literal(4 bytes))
return 0;
}
How does this work?
The printf function uses the format specifiers to figure out what to pop off the stack. So when it sees %d, it pops off 4 bytes and interprets them as an int, which is wrong (the binary representation of (float)3.0 is not the same as that of (int)3).
You'll need to either use the %f format specifier or cast the arguments to int. If you're using a new enough version of gcc, turning on stronger warnings catches this sort of error:
$ gcc -Wall -Werror test.c
cc1: warnings being treated as errors
test.c: In function ‘main’:
test.c:10: error: implicit declaration of function ‘printf’
test.c:10: error: incompatible implicit declaration of built-in function ‘printf’
test.c:10: error: format ‘%d’ expects type ‘int’, but argument 4 has type ‘double’
test.c:10: error: format ‘%d’ expects type ‘int’, but argument 5 has type ‘double’
Response to the edited part of the question:
C's integer promotion rules say that all types smaller than int get promoted to int when passed as a vararg. So in your case, the 'd' is getting promoted to an int, then printf is popping off an int and casting it to a char. The best reference I could find for this behavior was this blog entry.
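As an illustration of that promotion, here is a minimal sketch with a hand-rolled variadic function (print_promoted_char is a made-up helper, not a standard function): anything smaller than int has to be read back with va_arg(ap, int).
#include <stdarg.h>
#include <stdio.h>

/* Hypothetical helper: reads one promoted character argument.
 * The caller passes a char or a character constant, but because of the
 * default argument promotions the callee must read it as an int. */
static void print_promoted_char(const char *tag, ...)
{
    va_list ap;
    va_start(ap, tag);
    int promoted = va_arg(ap, int);   /* va_arg(ap, char) would be wrong here */
    printf("%s: %c (%d)\n", tag, promoted, promoted);
    va_end(ap);
}

int main(void)
{
    char c = 'd';
    print_promoted_char("literal", 'd');  /* 'd' already has type int in C */
    print_promoted_char("char   ", c);    /* c is promoted to int at the call */
    return 0;
}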
There's no such thing as "casting to int in printf". printf does not do and cannot do any casting. An inconsistent format specifier leads to undefined behavior.
In practice printf simply receives the raw data and reinterprets it as the type implied by the format specifier. If you pass it a double value and specify an int format specifier (like %d), printf will take that double value and blindly reinterpret it as an int. The results are completely unpredictable (which is why doing this formally causes undefined behavior in C).
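If the goal really is to look at a value's bit pattern, the well-defined route is to copy its bytes into an integer of the same size instead of lying to printf. A minimal sketch, assuming a target where double and uint64_t are both 8 bytes:
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    double d = 44.55;
    uint64_t bits;

    /* memcpy is the portable way to reinterpret an object representation;
     * this assumes sizeof(double) == sizeof(uint64_t) == 8. */
    memcpy(&bits, &d, sizeof bits);
    printf("d = %f, bits = 0x%016llx\n", d, (unsigned long long)bits);
    return 0;
}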
Jack's answer explains how to fix your problem. I'm going to explain why you're getting your unexpected results. Your code is equivalent to:
float f = 11.22;
double d = 44.55;
int i,j,k,l;
i = (int) f;
j = (int) d;
k = *(int *) &f; // reinterpret the bytes of f as an int
l = *(int *) &d; // reinterpret the first bytes of d as an int
printf("i = %d, j = %d, f = %d, d = %d", i,j,k,l);
The reason is that f and d are passed to printf as values, and then these values are interpreted as ints. This doesn't change the binary value, so the number displayed is the binary representation of a float or a double. (Strictly speaking, f is first promoted to double when passed to printf, so the value printed for f comes from the promoted double rather than the raw float bits; see the promotion discussion below.) The actual cast from float to int is much more complex in the generated assembly.
Because you are not using the float format specifier, try with:
printf("i = %d, j = %d, f = %f, d = %f", i,j,f,d);
Otherwise, if you want four ints you have to cast them before passing the arguments to printf:
printf("i = %d, j = %d, f = %d, d = %d", i,j,(int)f,(int)d);
The reason your follow-up code works is that the character constant is promoted to an int before it is pushed onto the stack. So printf pops off 4 bytes for %c and for %d alike. In fact, character constants are of type int, not type char. C is strange that way.
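A quick way to see that (a minimal sketch; note this holds for C, while in C++ 'd' has type char):
#include <stdio.h>

int main(void)
{
    char c = 'd';
    /* In C, a character constant has type int, so its size is sizeof(int). */
    printf("sizeof 'd' = %zu, sizeof c = %zu\n", sizeof 'd', sizeof c);
    return 0;
}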
printf uses variable length argument lists, which means you need to provide the type information. You're providing the wrong information, so it gets confused. Jack provides the practical solution.
It's worth noting that printf, being a function with a variable-length argument list, never receives a float; float arguments are "old school" promoted to doubles.
A recent standard draft introduces the "old school" default promotions first (n1570, 6.5.2.2/6):
If the expression that denotes the called function has a type that does not include a prototype, the integer promotions are performed on each argument, and arguments that have type float are promoted to double. These are called the default argument promotions.
Then it discusses variable argument lists (6.5.2.2/7):
The ellipsis notation in a function prototype declarator causes argument type conversion to stop after the last declared parameter. The default argument promotions are performed on trailing arguments.
The consequence for printf is that it is impossible to "print" a genuine float. A float expression is always promoted to double, which is an 8-byte value for IEEE 754 implementations. This promotion occurs on the calling side; printf will already have an 8-byte argument on the stack when its execution starts.
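A small sketch of that point (the sizes in the comment assume a typical IEEE 754 target): %f handles float and double arguments alike, because by the time printf runs it only ever sees doubles.
#include <stdio.h>

int main(void)
{
    float  f = 11.22f;
    double d = 44.55;

    /* The float argument is promoted to double before the call,
     * so a single %f conversion handles both. */
    printf("f = %f, d = %f\n", f, d);

    /* sizeof is evaluated without promotion, so it still reports
     * the genuine float size (typically 4 vs. 8 bytes). */
    printf("sizeof f = %zu, sizeof d = %zu\n", sizeof f, sizeof d);
    return 0;
}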
If we take the float value 11.22, promote it to double (as happens when it is passed to printf) and inspect its contents, with my x86_64-pc-cygwin gcc I see the byte sequence 000000e0a3702640.
That explains the int value printed by printf: ints on this target are still 4 bytes, so only the first four bytes 000000e0 are evaluated, again in little endian, i.e. as 0xe0000000. This is -536870912 in decimal.
If we reverse all 8 bytes, because the Intel processor stores doubles in little endian too, we get 402670a3e0000000. We can check the value this byte sequence represents in IEEE format on this web site; it's close to 1.122E1, i.e. 11.22, the expected result.
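To reproduce that kind of dump yourself, you can copy the promoted value into a byte array and print it in hex. A minimal sketch (the exact bytes depend on the target's endianness and floating-point format, but on little-endian x86 this should show the 000000e0a3702640 sequence quoted above):
#include <stdio.h>
#include <string.h>

int main(void)
{
    float  f = 11.22f;
    double promoted = f;   /* the same promotion printf's caller performs */
    unsigned char bytes[sizeof promoted];

    memcpy(bytes, &promoted, sizeof promoted);

    /* Print the object representation byte by byte, lowest address first
     * (on little-endian x86 the least significant byte comes first). */
    for (size_t i = 0; i < sizeof bytes; i++)
        printf("%02x", bytes[i]);
    printf("\n");
    return 0;
}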