Best way to cast a float to an int for arithmetic?

In C#, I am doing something like this:

float a = 4.0f;
float b = 84.5f;
int ans = a * b;

However, the compiler states that a cast is required to go from float -> int in assignment. Of course I could probably do this:

int ans = (int)a * (int)b;

But this is ugly and redundant. Is there a better way? I know in C++ I could do this:

int ans = int(a * b);

At least that is a little easier on the eyes. But it seems I can't do this in C#.


You should consider the needs of your application before the look of the code. Converting the result of float math to an int is not something to be taken lightly. The real question is what you want out of your final answer.

If you cast first, a becomes 4 and b becomes 84, and the product is 336. However, if you cast to an int after doing the math, the result is 338.

If being off by 2 is good enough for you, then use

int ans = (int)a * (int)b;

// ans = 336

If you want 338, then use

int ans = (int)(a * b);

// ans = 338

I would really consider the side effects of what you are doing. Ideally you should have a policy for rounding the two floats before doing the math, as sketched below. Remember that casting to an int simply cuts the decimal part off, so 84.9 becomes 84. That can greatly change your final result. You need to consider what your application actually requires.
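As a minimal, self-contained sketch (not part of the original answer; Math.Round is just one possible rounding policy you might choose), this shows how truncation by casting differs from an explicit round-to-nearest step:

using System;

class RoundingPolicyDemo
{
    static void Main()
    {
        float a = 4.0f;
        float b = 84.5f;

        // Casting truncates: the decimal part is simply cut off.
        int truncated = (int)(a * b);      // 338 (a * b is exactly 338.0 here)
        int byOperand = (int)a * (int)b;   // 336 (each operand truncated first)

        // An explicit rounding policy: round to nearest before converting.
        float c = 84.9f;
        int cut = (int)c;                  // 84  (the .9 is discarded)
        int rounded = (int)Math.Round(c);  // 85  (closer to the true value)

        Console.WriteLine($"{truncated} {byOperand} {cut} {rounded}");
    }
}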


Put parentheses around the product and cast the result:

int ans = (int)(a * b);


 int ans = (int)a * (int)b; 

 int ans = (int)(a * b); 

These two statements are not equivalent and will produce different results. In one case, you give up precision before the multiplication; in the other, after the multiplication.
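A small console snippet (a sketch, not from the original answer) makes the difference concrete by printing both results:

using System;

class CastOrderDemo
{
    static void Main()
    {
        float a = 4.0f;
        float b = 84.5f;

        // Truncate each operand, then multiply.
        Console.WriteLine((int)a * (int)b);   // 336

        // Multiply first, then truncate the product.
        Console.WriteLine((int)(a * b));      // 338
    }
}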


Try int ans = (int)(a * b);


int ans = (int)(a * b);