
Instead of an error, why don't both operands get promoted to float or double?

1) If one operand is of type ulong while the other operand is of type sbyte/short/int/long, a compile-time error occurs. I fail to see the logic in this. Why would it be a bad idea for both operands to instead be promoted to type double or float?

        long L = 100;
        ulong UL = 1000;
        double d = L + UL; // error: operator '+' cannot be applied to operands of type 'long' and 'ulong'

b) The compiler implicitly converts an int literal to type byte and assigns the resulting value to b:

byte b = 1;

But if we try to assign a literal of type ulong to a variable of type long (or int, byte, etc.), the compiler reports an error:

long L = 1000UL;

I would have thought the compiler would be able to figure out whether the result of a constant expression fits into a variable of type long?!

thank you


To answer the question marked (1) -- adding signed and unsigned longs is probably a mistake. If the intention of the developer is to overflow into inexact arithmetic in this scenario then that's something they should do explicitly, by casting both arguments to double. Doing so implicitly is hiding mistakes more often than it is doing the right thing.

To answer the question marked (b) -- of course the compiler could figure that out. Obviously it can because it does so for integer literals. But again, this is almost certainly an error. If your intention was to make that a signed long then why did you mark it as unsigned? This looks like a mistake. C# has been carefully designed so that it looks for weird patterns like this and calls your attention to them, rather than making a guess that you meant to say this weird thing and blazing on ahead as if everything were normal. The compiler is trying to encourage you to write sensible code; sensible code does not mix signed and unsigned types.


Why should it?

Generally, the two types are incompatible because long is signed. You are only describing a special case.

For byte b = 1; the literal 1 is a constant of type int whose value fits in byte, so it can be implicitly converted to byte.

For long L = 1000UL; the suffix gives "1000UL" an explicit type, ulong, which is incompatible; see my general case above.

Example from "ulong" on MSDN:

When an integer literal has no suffix, its type is the first of these types in which its value can be represented: int, uint, long, ulong.

and then

There is no implicit conversion from ulong to any integral type

On "long" in MSDN (emphasis mine: note that long comes before ulong):

When an integer literal has no suffix, its type is the first of these types in which its value can be represented: int, uint, long, ulong.

It's quite common, logical, and utterly predictable.
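The two cases above can be seen side by side in a short program. This is a minimal sketch; the commented-out lines are the ones that fail to compile, and the error codes shown are what current C# compilers report, which may vary by version:

```csharp
using System;

class ConstantConversionDemo
{
    static void Main()
    {
        // A suffix-less integer literal is typed int, but a constant
        // whose value fits in the target type converts implicitly.
        byte b = 1;          // OK: the constant 1 fits in byte (0..255)
        // byte b2 = 300;    // error CS0031: constant value '300' cannot be converted to a 'byte'

        // A suffixed literal has an explicit type; there is no implicit
        // conversion from ulong to long, even for a constant that fits.
        // long L = 1000UL;  // error CS0266: cannot implicitly convert 'ulong' to 'long'
        long L = (long)1000UL;  // an explicit cast compiles

        Console.WriteLine(b + " " + L);
    }
}
```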


long l = 100;  
ulong ul = 1000;  
double d = l + ul;  // error 

Why would it be bad idea for both operands to instead be promoted to type double or float?

Which one? Floats? Or doubles? Or maybe decimals? Or longs? There's no way for the compiler to know what you are thinking. Also, type information generally flows out of expressions, not into them, so the compiler can't use the target of the assignment to choose either.

The fix is to simply specify which type you want by casting one or both of the arguments to that type.
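A sketch of that fix, casting the operands to the type you actually intend (double here; decimal would keep the arithmetic exact):

```csharp
using System;

class CastFixDemo
{
    static void Main()
    {
        long l = 100;
        ulong ul = 1000;

        // double d = l + ul;               // does not compile
        double d = (double)l + (double)ul;  // explicit: we chose double
        decimal m = (decimal)l + (decimal)ul;  // or decimal, for exact arithmetic

        Console.WriteLine(d);  // 1100
        Console.WriteLine(m);  // 1100
    }
}
```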


The compiler doesn't consider what you do with the result when it determines the result type of an expression. The rules for how types are promoted in an expression only consider the values in the expression itself, not what you do with the value later on.

In the case where you assign the result to a variable, it could be possible to use that information, but consider a statement like this:

Console.Write(L + UL);

The Write method has overloads that take several different data types, which would make it rather complicated to decide how to use that information.

For example, there is an overload that takes a string, so one possible way to promote the types (and a good candidate as it doesn't lose any precision) would be to first convert both values to strings and then concatenate them, which is probably not the result that you were after.
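To make that concrete, here is what the hypothetical string promotion would produce. This is only a sketch of the "bad candidate" described above; C# does not actually do this for L + UL:

```csharp
using System;

class StringPromotionDemo
{
    static void Main()
    {
        long l = 100;
        ulong ul = 1000;

        // If the compiler "promoted" both operands to string in order to
        // satisfy Console.Write(string), + would mean concatenation:
        string hypothetical = l.ToString() + ul.ToString();
        Console.WriteLine(hypothetical);  // 1001000, not 1100
    }
}
```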


Simple answer is that's just the way the language spec is written:

http://msdn.microsoft.com/en-us/library/y5b434w4(v=VS.80).aspx

You can argue over whether the rules of implicit conversions are logical in each case, but at the end of the day these are just the rules the design committee decided on.

Any implicit conversion has a downside in that it does something the programmer may not expect. The general principle in C# seems to be to raise an error in these cases rather than try to guess what the programmer meant.


Suppose one variable was equal to 9223372036854775807 and the other was equal to -9223372036854775806. What should the result of the addition be? Converting the two values to double would round them to 9223372036854775808 and -9223372036854775808, respectively; performing the addition would then yield 0.0 (exactly). By contrast, if both values were kept as signed integers, the result would be 1 (also exact). It would be possible to convert both operands to type Decimal and do the math exactly; conversion to Double after the fact would require an explicit cast, however.
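That rounding can be checked directly. A minimal sketch using the same two values, with both variables declared as long so the program compiles:

```csharp
using System;

class DoubleRoundingDemo
{
    static void Main()
    {
        long a = 9223372036854775807;   // long.MaxValue
        long b = -9223372036854775806;

        // Exact signed integer arithmetic:
        Console.WriteLine(a + b);                    // 1

        // Promoting to double rounds both operands to +/- 2^63,
        // so the information that they differ by 1 is lost:
        Console.WriteLine((double)a + (double)b);    // 0

        // decimal represents both values exactly:
        Console.WriteLine((decimal)a + (decimal)b);  // 1
    }
}
```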
