Type casting with printf statements under Mac OS X and Linux
I have a piece of code that behaves differently under Mac OS X and Linux (Ubuntu, Fedora, ...). It concerns type casting in arithmetic operations within printf statements. The code is compiled with gcc/g++.
The following
#include <stdio.h>
int main () {
    float days = (float) (153*86400) / 86400.0;
    printf ("%f\n", days);
    float foo = days / 30.6;
    printf ("%d\n", (int) foo);
    printf ("%d\n", (int) (days / 30.6));
    return 0;
}
generates on Linux
153.000000
5
4
and on Mac OS X
153.000000
5
5
Why?
To my surprise, this works on both Mac OS X and Linux:
printf ("%d\n", (int) (((float)(153 * 86400) / 86400.0) / 30.6));
printf ("%d\n", (int) (153 / 30.6));
printf ("%.16f\n", (153 / 30.6));
Why? I don't have a clue at all. Thanks.
try this:
#include <stdio.h>
int main () {
    float days = (float) (153*86400) / 86400.0;
    printf ("%f\n", days);
    float foo = days / 30.6;
    printf ("%d\n", (int) foo);
    printf ("%d\n", (int) (days / 30.6));        /* quotient computed in double */
    printf ("%d\n", (int) (float)(days / 30.6)); /* quotient narrowed to float first */
    return 0;
}
Notice what happens? The double-to-float conversion is the culprit. Remember that a float is always converted to double when passed to a varargs function. I'm not sure why Mac OS X would be different, though. Better (or worse) implementation of IEEE arithmetic?
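A minimal sketch of that promotion rule (my own illustration, not code from the question; %.17g prints enough digits to distinguish any two doubles):
#include <stdio.h>
int main () {
    float days = 153.0f;
    /* In a varargs call a float argument is promoted to double, so both
       lines below pass a double to printf; the difference is when the
       value was narrowed to float precision. */
    printf ("%.17g\n", days / 30.6);           /* quotient computed in double */
    printf ("%.17g\n", (float) (days / 30.6)); /* quotient rounded to float first */
    return 0;
}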
I expect the answer is somehow related to the 32-bit assignment to the float variable then being converted to a double before printing, leaving fewer significant bits than you'd expect if you passed a full double - as in the second expression. It gets tricky, dealing with floating-point arithmetic. I still like the quote from Kernighan and Plauger's classic book 'The Elements of Programming Style', where they say:
A wise programmer once said, "Floating point numbers are like little piles of sand; every time you move one, you lose a little sand and pick up a little dirt".
This is an illustration of the point. It also shows why floating point decimal arithmetic as in the revised IEEE 754 standard is a good idea.
Your value for days / 30.6 is a floating-point calculation which, with infinite precision, would produce exactly 5.0. If, with your limited precision, you get a result that's the slightest bit smaller than 5.0, your cast to an int will produce an int smaller than 5.
The problem here is incorrect use of floating point arithmetic. You should probably be rounding.
The reason you're seeing the discrepancy is either a difference in the CPUs' handling of floating point or (more likely) a difference in how your compilers' optimizers decide which calculations are done at run time and in what order.
For starters, read What Every Computer Scientist Should Know About Floating-Point Arithmetic.
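To make the "slightest bit smaller than 5.0" point concrete, here's a small sketch (mine, not the asker's code) that uses C99's nextafter to produce the largest double strictly below 5.0:
#include <stdio.h>
#include <math.h>
int main () {
    /* nextafter(5.0, 0.0) is the largest double strictly below 5.0 */
    double almost_five = nextafter (5.0, 0.0);
    printf ("%.17g\n", almost_five);    /* just below 5.0 */
    printf ("%d\n", (int) almost_five); /* the cast truncates: prints 4 */
    return 0;
}
Link with -lm on Linux.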
Because 30.6 is not exactly representable in IEEE 754 floating-point, the precise results you get will not be exactly correct. So you might get slightly less than 5.0 before casting to an integer, and then the integer cast rounds down to 4.
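You can inspect the stored values for yourself; a quick sketch (printing with %.17g so the inexactness is visible - neither line prints exactly 30.6):
#include <stdio.h>
int main () {
    printf ("%.17g\n", 30.6);           /* nearest double to 30.6 */
    printf ("%.17g\n", (double) 30.6f); /* nearest float, widened to double */
    return 0;
}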
The exact result you get depends on a lot of factors, such as how the intermediate results are being computed. Your compiler might be generating code to use the x87 floating-point stack, which uses 80-bit registers for intermediate calculations. Alternatively, you might be using SSE2 (the default on Mac OS X), which uses 128-bit vector registers divided into either 4x32-bit or 2x64-bit. Check your compiler's assembly output for the type of floating-point operations used. See this page for a list of GCC command-line options you can use to control the type of floating-point instructions used, particularly the -mfpmath option.
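One quick way to see which evaluation strategy your compiler picked is C99's FLT_EVAL_METHOD macro from <float.h> (a sketch, assuming a C99 compiler):
#include <stdio.h>
#include <float.h>
int main () {
    /* C99: 0 = operations evaluated in the declared type (typical for SSE2),
            2 = evaluated in long double precision (typical for x87),
           -1 = indeterminable */
    printf ("FLT_EVAL_METHOD = %d\n", (int) FLT_EVAL_METHOD);
    return 0;
}
Compare its output when compiling with -mfpmath=387 versus -mfpmath=sse.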
Hmmmmmmm... I suspect there's some difference due to the Linux machine being a 32-bit one (am I right?) and the Mac being a 64-bit machine. On my 64-bit Linux machine I get the second set of results.
Are both run on the same processor? My guess is that it has to do with the endianness of the platform, not the OS. Also try (int) ((float) (days / 30.6)) instead of (int) (days / 30.6).
Another thing to look at is whether the compiler versions are the same.
I doubt it has to do with printf; try this:
#include <iostream>
int main () {
    float days = (float) (153*86400) / 86400.0;
    std::cout << days << std::endl;
    float foo = days / 30.6;
    std::cout << (int) foo << std::endl;
    std::cout << (int) (days / 30.6) << std::endl;
    return 0;
}
And post the result in comments, please.
I'm starting to wonder about floating point representation issues. I don't believe that 30.6 has an exact IEEE representation. Maybe you're getting "lucky" with rounding issues.
Another possibility is different compiler handling of this line:
float days = (float) (153*86400) / 86400.0;
where as a human I can see that the seconds-per-day factor cancels out, but the compiler might miss the chance to do constant folding if it treats one as being in an integer context and the other as being in a subsequent floating-point context. Any standards gurus want to weigh in on the presence or absence of sequence points in there?
One calculation is as float and one as double, and they must be rounding differently on Linux. Why this is not the Mac OS X behavior I don't know, particularly since you don't bother to specify anything about the Mac OS X computer. Is it a real Mac? PPC or Intel?
Never, ever rely on floating-point computations to come out exactly the way you want. Always round to int, never truncate.
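For instance, a rounding variant of the original cast, using C99's lround from <math.h> (my sketch, not the poster's code; link with -lm):
#include <stdio.h>
#include <math.h>
int main () {
    float days = (float) (153*86400) / 86400.0;
    /* lround rounds to the nearest integer instead of truncating,
       so a quotient infinitesimally below 5.0 still comes out as 5 */
    printf ("%ld\n", lround (days / 30.6));
    return 0;
}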
Here's some code which should illustrate what's happening a bit better than the original code:
#include <stdio.h>
int main(void)
{
    volatile double d_30_6 = 30.6;
    volatile float f_30_6 = 30.6;
    printf("%.16f\n", 153 / 30.6);   // compile-time
    printf("%.16f\n", 153 / d_30_6); // double precision
    printf("%.16f\n", 153 / f_30_6); // single precision
    return 0;
}
The volatile variables force the compiler not to optimize the computation away, regardless of optimization level...
Floating point numbers prove the existence of God by negation, as they are most certainly the work of the devil...
One day after a long day of running the universe God and Satan got together for a beer and began reminiscing. "Hey, remember the stuff we pulled on that guy Job?" said God.
"Yeah," Satan replied, "those were the days, eh? Lots of smiting and damning unto eternal perdition..."
"Not to mention all the prophesying and plagues and everything", said God. "Yeah - those were they days". God sighed. "Say, I know it's been a while but, um, how'd you feel about doing something like that again?".
"Funny you should mention it," said Satan smoothly. "You know I've always been interested in technology...".
"Sure", said God, "plenty of opportunity to get folks to sell their souls for another round of funding, eh?" He chuckled. "So, whatcha got in mind?"
"Well", said Satan, "I've put my best daemons on it. They've been working like the very devil to come up with something new and I think they've got it. It's a little something we call 'floating point'..."