
Casting int as float

I am trying to add an int to a float. My code is:

#include <stdio.h>

int main() {
   char paus[2];
   int millit = 5085840;
   float dmillit = .000005;
   float dbuffer;

   printf("(float)millit + dmillit: %f\n", (float)millit + dmillit);
   dbuffer = (float)millit + dmillit;
   printf("dbuffer: %f\n", dbuffer);

   fgets(paus, 2, stdin);
   return 0;
}

The output looks like:

(float)millit + dmillit: 5085840.000005

dbuffer: 5085840.000000

Why is there a difference? I've also noticed that if I change dmillit = .5, then both outputs are the same (5085840.5), which is what I would expect. Why is this? Thanks!


(float)millit + dmillit evaluates to a double value. When you print that value directly, it displays correctly, but when you store it in a float variable first, the extra precision is lost.
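
A minimal sketch that shows the two cases side by side (how the intermediate addition is carried out depends on the compiler's floating-point evaluation mode, so treat this as an illustration rather than a guarantee):

#include <stdio.h>

int main(void) {
   int millit = 5085840;
   float dmillit = .000005f;

   /* Keep the sum in a double: nothing forces a rounding to float,
      so the tiny fraction survives. */
   double kept = (double)millit + dmillit;

   /* Store the sum in a float: the value is rounded to the nearest
      representable float, and .000005 is far below the spacing of
      floats near 5085840, so it disappears. */
   float stored = (float)millit + dmillit;

   printf("kept as double:  %f\n", kept);   /* 5085840.000005 */
   printf("stored in float: %f\n", stored); /* 5085840.000000 */
   return 0;
}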


The precision you are trying to use is finer than a float can hold. In the printf call, the result is being promoted to a double before it is printed.

See an IEEE 754 float calculator to better understand this.
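
To see how coarse a float is at this magnitude, you can ask for the next representable float above 5085840. This is a sketch assuming IEEE 754 single precision; nextafterf comes from <math.h>, so link with -lm if your toolchain needs it:

#include <stdio.h>
#include <math.h>

int main(void) {
   float x = 5085840.0f;
   /* The next float after x, going toward +infinity. The gap between
      the two is the smallest step a float can represent here. */
   float next = nextafterf(x, INFINITY);
   printf("next float above %.1f is %.6f (gap = %.6f)\n", x, next, next - x);
   return 0;
}

The gap comes out as 0.5, which is also why changing dmillit to .5 gives the result you expect: .5 lands exactly on a representable value, while .000005 is far below the float's resolution and rounds away.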


I believe the floating point addition in the printf statement may be silently casting the result to a double, so it has more precision.


 printf("(float)milit + dmillit: %f\n",(float)millit + dmillit); 

I believe that here, the addition is done as a double, and passed as a double to printf.

 dbuffer = (float)millit + dmillit;    
 printf("dbuffer: %f\n",dbuffer);  

Here, the addition is done as a double, then reduced to a float to be stored in dbuffer, then expanded back to a double to be passed to printf.
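
Spelling those conversions out with explicit casts may make the sequence easier to follow (a sketch; whether the addition really happens in double depends on the compiler and on FLT_EVAL_METHOD):

#include <stdio.h>

int main(void) {
   int millit = 5085840;
   float dmillit = .000005f;

   double sum      = (double)(float)millit + (double)dmillit; /* addition carried out as double      */
   float  dbuffer  = (float)sum;                              /* reduced to float for storage        */
   double promoted = (double)dbuffer;                         /* expanded back to double for printf  */

   printf("sum as double: %f\n", sum);      /* 5085840.000005 */
   printf("dbuffer:       %f\n", promoted); /* 5085840.000000 */
   return 0;
}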
