Why does printf print wrong values?
Why do I get the wrong values when I print an int using printf("%f\n", myNumber)?
I don't understand why it prints fine with %d, but not with %f. Shouldn't it just add extra zeros?
int a = 1;
int b = 10;
int c = 100;
int d = 1000;
int e = 10000;
printf("%d %d %d %d %d\n", a, b, c, d, e); //prints fine
printf("%f %f %f %f %f\n", a, b, c, d, e); //prints weird stuff
Well, of course it prints "weird" stuff. You are passing in ints, but telling printf you passed in floats. Since these two types have different, incompatible internal representations, you get gibberish.

There is no automatic cast when you pass variables to a variadic function like printf; the values are passed to the function as the type they actually are (after the default argument promotions, which may upgrade them to a larger compatible type, e.g. float to double).
What you have done is somewhat similar to this:
union {
    int n;
    float f;
} x;

x.n = 10;
printf("%f\n", x.f); /* passes the bit pattern of the int 10,
                        but tells printf to treat that same bit
                        pattern as a float, even though the two
                        representations are incompatible */
If you want to print them as floating-point values, cast them to float (or double) before passing them to printf.
printf("%f %f %f %f %f\n", (float)a, (float)b, (float)c, (float)d, (float)e);
a, b, c, d and e aren't floats. printf() is interpreting them as floats, which is why it prints weird stuff to your screen.
Using an incorrect format specifier in printf() invokes undefined behaviour.
For example:
int n=1;
printf("%f", n); //UB
float x=1.2f;
printf("%d", x); //UB
double y=12.34;
printf("%lf",y); //UB in C89; well-defined since C99
Note: the format specifier for double in printf() is %f. Since C99, the l length modifier has no effect with f, so %lf is also accepted for double.
The problem is what happens inside printf. Conceptually, something like the following pseudocode occurs:

if (specifier is "%f") {
    float *p = (float *)&arg;  /* reinterpret the raw argument bytes */
    /* print *p -- garbage, because the binary representations
       of float and int are different */
}
The way printf and variable arguments work is that the format specifiers in the string (e.g. "%f %f") tell printf the type, and therefore the size, of each argument. Specifying the wrong type for an argument confuses it.

Look at stdarg.h for the macros used to handle variable arguments.
For "normal" functions (non-variadic, with all parameter types specified in the prototype), the compiler converts integer-valued types to floating-point types where needed.
That does not happen with variadic arguments, which are always passed "as is" (apart from the default argument promotions).