
Why does C++ allow implicit conversion from int to unsigned int?

Consider the following code:

void foo(unsigned int x)
{

}

int main()
{
  foo(-5);
  return 0;
}

This code compiles with no problems. Errors like this can cause lots of problems and are hard to find. Why does C++ allow such a conversion?


The short answer is that C supported such conversions originally, and the C++ designers didn't want to break existing C software.

Note that some compilers can warn about this; for example, g++ with -Wconversion will flag that construct.

In many cases the implicit conversion is useful, for example when the calculation is done in int but the end result is known never to be negative (guaranteed by the algorithm and optionally asserted), as in the sketch below.
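
For instance, here is a minimal sketch of that pattern (illustrative only; the function name remaining is hypothetical, not from the original answer):

#include <cassert>

// Intermediate arithmetic in int; the algorithm guarantees the
// result is non-negative, so the implicit conversion is harmless.
unsigned int remaining(int total, int used)
{
  int diff = total - used;
  assert(diff >= 0);  // known from the algorithm, asserted here
  return diff;        // implicit int -> unsigned int
}

int main()
{
  return remaining(10, 4) == 6u ? 0 : 1;
}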

EDIT: An additional probable explanation: remember that C was originally a much looser-typed language than C++ is now. With K&R-style function declarations there was no way for the compiler to detect such implicit conversions at the call site, so why restrict them in the language? For example, your code would have looked roughly like this:

int foo(x)
unsigned int x;
{

}

int main()
{
  foo(-5);
  return 0;
}

while the declaration alone would have been int foo(); with no parameter information at all.

The compiler actually relied on the programmer to pass the right types into each function call and did no conversions at the call site. Then when the function actually got called the data on the stack (etc) was interpreted in the way the function declaration indicated.

Once code was written that relied on that sort of implicit conversion, it would have become much harder to remove from ANSI C, even once function prototypes with actual type information were added. This is likely why it remains in C even now. C++ then came along and again decided not to break backward compatibility with C, continuing to allow such implicit conversions.


  • Just another quirk of a language that has lots of silly quirks.
  • The conversion is well-defined to wrap around, which may be useful in some cases (see the sketch after this list).
  • It's backward-compatible with C, which does it for the above reasons.

Take your pick.
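
To illustrate the wraparound point: conversion to an unsigned type is defined modulo 2^N, where N is the width of the destination type, so -5 becomes UINT_MAX - 4. A small sketch (the printed value assumes a 32-bit unsigned int):

#include <climits>
#include <iostream>

void foo(unsigned int x)
{
  // -5 converts modulo 2^N (N = width of unsigned int), so the
  // value received is 4294967291 when unsigned int is 32 bits.
  std::cout << x << '\n';
}

int main()
{
  foo(-5);
  std::cout << UINT_MAX - 4u << '\n';  // prints the same value
  return 0;
}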


@user168715 is right. C++ was initially designed to be a superset of C, aiming to be as backward-compatible as possible. The C philosophy is to leave most of the responsibility with the programmer instead of disallowing dangerous things. For C programmers it is heaven; for Java programmers it is hell... a matter of taste.

I will dig through the standards to find where exactly this is specified, but I have no time for that right now. I'll edit my answer as soon as I can.

I also agree that some of this inherited freedom can lead to errors that are really hard to debug, so, to add to what was said: in g++ you can turn on a warning to keep you from making this kind of mistake, the -Wconversion flag.

-Wconversion

Warn for implicit conversions that may alter a value. This includes conversions between real and integer, like abs (x) when x is double; conversions between signed and unsigned, like unsigned ui = -1; and conversions to smaller types, like sqrtf (M_PI). Do not warn for explicit casts like abs ((int) x) and ui = (unsigned) -1, or if the value is not changed by the conversion like in abs (2.0). Warnings about conversions between signed and unsigned integers can be disabled by using -Wno-sign-conversion.

For C++, also warn for confusing overload resolution for user-defined conversions; and conversions that will never use a type conversion operator: conversions to void, the same type, a base class or a reference to them. Warnings about conversions between signed and unsigned integers are disabled by default in C++ unless -Wsign-conversion is explicitly enabled.

Other compilers may have similar flags.
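
For instance, a minimal translation unit that should trigger the diagnostics (a sketch; note that, per the documentation quoted above, recent g++ in C++ mode needs -Wsign-conversion enabled for the signed/unsigned cases):

// Compile with: g++ -Wconversion -Wsign-conversion example.cpp
// Both conversions below are diagnosed; the exact wording varies
// between compiler versions.

void foo(unsigned int x)
{
  (void)x;  // suppress unused-parameter warnings
}

int main()
{
  unsigned int ui = -1;  // signed -> unsigned, value changes
  foo(-5);               // same conversion at the call site
  (void)ui;
  return 0;
}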


By the time of the original C standard, the conversion was already allowed by many (all?) compilers. Based on the C rationale, there appears to have been little (if any) discussion of whether such implicit conversions should be allowed. By the time C++ came along, such implicit conversions were sufficiently common that eliminating them would have rendered the language incompatible with a great deal of C code. It would probably have made C++ cleaner; it would certainly have made it much less used -- to the point that it would probably never have gotten beyond the "C with Classes" stage, and even that would just be a mostly-ignored footnote in the history of Bell Labs.

The only real question along this line was between "value preserving" and "unsigned preserving" rules when promoting unsigned values "smaller" than int. The difference between the two arises when you have (for example) an unsigned short being added to an unsigned char.

Unsigned preserving rules say that you promote both to unsigned int. Value preserving rules say that you promote both values to int, if it can represent all values of the original type (e.g., the common case of 8-bit char, 16-bit short, and 32-bit int). On the other hand, if int and short are both 16 bits, so int cannot represent all values of unsigned short, then you promote the unsigned short to unsigned int (note that it's still considered a promotion, even though it only happens when it's really not a promotion -- i.e., the two types are the same size).

For better or worse (and it's been argued both ways many times), the committee chose value-preserving rather than unsigned-preserving promotions. Note, however, that this deals with a conversion in the opposite direction: rather than signed to unsigned, it's about whether you convert unsigned to signed.
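
The difference is observable in ordinary comparisons. A sketch, assuming the common case of 16-bit short and 32-bit int:

#include <iostream>

int main()
{
  unsigned short us = 0;
  int i = -1;

  // Value preserving: int can hold every unsigned short value here,
  // so us promotes to int and the comparison is done in signed
  // arithmetic; this prints 1 (0 > -1).
  std::cout << (us > i) << '\n';

  // Under unsigned-preserving rules, us would have promoted to
  // unsigned int instead, i would then have converted to unsigned,
  // and 0 > 4294967295 would have printed 0.
  return 0;
}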


Because the standard allows implicit conversion from signed to unsigned types.

Also, (int)a + (unsigned)b yields an unsigned result; this is required by the C++ standard's usual arithmetic conversions.
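
A quick sketch of that rule (the printed value assumes a 32-bit unsigned int):

#include <iostream>
#include <type_traits>

int main()
{
  int a = -2;
  unsigned int b = 1;

  // Usual arithmetic conversions: the int operand converts to
  // unsigned int, so the sum is computed and typed as unsigned.
  static_assert(std::is_same<decltype(a + b), unsigned int>::value,
                "int + unsigned yields unsigned");

  std::cout << a + b << '\n';  // the mathematical -1 wraps to 4294967295
  return 0;
}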
