Why does a float divided by a larger float result in zero, and how can I avoid this in C#?
I was trying to divide (float)200 / (float)500
but the result is 0.0. Why is this so, and how can we get 0.4 as the result? Thanks a lot.
It is a very common mistake; every programmer makes it at least once. There are two kinds of division, integral and floating point, and both use the same symbol. The compiler chooses which one to use based on the types of the operands. If both the left- and right-hand operands are integral, you get integral division (the idiv instruction in machine code), which truncates toward zero and produces an integral result.
As soon as at least one operand is floating point, you get the fdiv instruction in machine code and the result you are looking for. Simply casting one operand to (float) or (double) is enough, as you did in your question; you only have to cast one of them. Or use a floating-point literal like 200f or 200.0.
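A minimal sketch of the difference (assuming a console app):

```csharp
using System;

class DivisionDemo
{
    static void Main()
    {
        Console.WriteLine(200 / 500);        // both operands int   -> 0 (integer division)
        Console.WriteLine((float)200 / 500); // one float operand   -> 0.4
        Console.WriteLine(200f / 500);       // float literal       -> 0.4
        Console.WriteLine(200.0 / 500);      // double literal      -> 0.4
    }
}
```

Note that casting only one operand is enough: the other is implicitly converted before the division.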
That's impossible. I can think of the following scenarios:
- You are casting the result to an integer; in this case the fractional part is truncated and you get 0
- 200 and 500 aren't really floats but integers; the result will then be an integer by default
- The way you are printing the float either converts it to an integer or doesn't display the decimal places.
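To illustrate the scenarios above, a sketch of three ways the result can come out as 0 (class and variable names are just for illustration):

```csharp
using System;

class ZeroScenarios
{
    static void Main()
    {
        // Scenario 1: casting the result to an integer truncates 0.4 to 0.
        Console.WriteLine((int)((float)200 / (float)500)); // prints 0

        // Scenario 2: both operands are ints, so integer division runs first.
        int a = 200, b = 500;
        Console.WriteLine(a / b); // prints 0

        // Scenario 3: a format string that hides the decimal places.
        Console.WriteLine(((float)200 / 500).ToString("F0")); // prints 0
    }
}
```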
Nothing wrong here - I'm with Andreas.
Console.WriteLine((float)200 / (float)500);
Console.WriteLine("Press any key to continue");
Console.ReadKey(true);
Results in:
0.4
Press any key to continue
I experienced this problem in the MonoDevelop debugger with a Unity3D project. Somehow the Evaluate window showed 1.0f/10.0f as 0. This is just a bug in the MonoDevelop IDE.