What type conversions are happening?

#include "stdio.h"

int main()
{
    int x = -13701;
    unsigned int y = 3;
    signed short z = x / y;

    printf("z = %d\n", z);

    return 0;
}

I would expect the answer to be -4567. I am getting "z = 17278". Why does a promotion of these numbers result in 17278?

I executed this in Code Pad.


The hidden type conversions are:

signed short z = (signed short) (((unsigned int) x) / y);

When you mix signed and unsigned types, the unsigned ones win: x is converted to unsigned int, divided by 3, and the result is then narrowed back to (signed) short by the assignment. With 32-bit integers:

(unsigned) -13701         == (unsigned) 0xFFFFCA7B // Bit pattern
(unsigned) 0xFFFFCA7B     == (unsigned) 4294953595 // Re-interpret as unsigned
(unsigned) 4294953595 / 3 == (unsigned) 1431651198 // Divide by 3
(unsigned) 1431651198     == (unsigned) 0x5555437E // Bit pattern of that result
(short) 0x5555437E        == (short) 0x437E        // Strip high 16 bits
(short) 0x437E            == (short) 17278         // Re-interpret as short
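
Here is a sketch that makes each hidden step explicit; the intermediate variables ux and q are mine, not part of the original program:

#include <stdio.h>

int main()
{
    int x = -13701;
    unsigned int y = 3;

    unsigned int ux = (unsigned int) x;  /* usual arithmetic conversions */
    unsigned int q  = ux / y;            /* division performed in unsigned */
    short z         = (short) q;         /* narrowing on assignment */

    printf("ux = %u (0x%X)\n", ux, ux);  /* 4294953595 (0xFFFFCA7B) */
    printf("q  = %u (0x%X)\n", q, q);    /* 1431651198 (0x5555437E) */
    printf("z  = %d (0x%hX)\n", z, (unsigned short) z); /* 17278 (0x437E) */

    return 0;
}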

By the way, the signed keyword is unnecessary: signed short is just a longer way of saying short. The only type that needs an explicit signed is char. Plain char can be signed or unsigned depending on the platform; all other integer types are signed by default.
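
A quick, portable way to check which flavor of plain char your platform uses (a sketch based on <limits.h>):

#include <stdio.h>
#include <limits.h>

int main()
{
    /* CHAR_MIN is 0 when plain char is unsigned, negative when signed. */
    printf("char is %s on this platform\n",
           CHAR_MIN < 0 ? "signed" : "unsigned");
    return 0;
}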


Short answer: the division first converts x to unsigned int; only then is the result converted back to a signed short.

Long answer: read this SO thread.


The problem comes from the unsigned int y: x / y is computed in unsigned arithmetic. It works with:

#include "stdio.h"

int main()
{
    int x = -13701;
    signed int y = 3;
    signed short z = x / y;

    printf("z = %d\n", z);

    return 0;
}
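
If y really has to stay unsigned, a variant of the same fix (a sketch; it assumes the value of y fits in int) is to cast at the division itself:

#include <stdio.h>

int main()
{
    int x = -13701;
    unsigned int y = 3;            /* y stays unsigned */
    signed short z = x / (int) y;  /* force the division into signed int */

    printf("z = %d\n", z);         /* prints z = -4567 */

    return 0;
}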


Every time you mix "large" signed and unsigned values in additive and multiplicative arithmetic operations, the unsigned type "wins" and the evaluation is performed in the domain of the unsigned type ("large" means int and larger). If your original signed value was negative, it will first be converted to a positive unsigned value in accordance with the rules of signed-to-unsigned conversion. In your case -13701 will turn into UINT_MAX + 1 - 13701, and that result will be used as the dividend.
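
That rule is easy to demonstrate (assuming 32-bit unsigned int; the terms are reordered below so the computation itself does not wrap):

#include <stdio.h>
#include <limits.h>

int main()
{
    int x = -13701;

    /* Conversion to unsigned adds UINT_MAX + 1, i.e. 2^32 here. */
    printf("%u\n", (unsigned int) x);        /* 4294953595 */
    printf("%u\n", UINT_MAX - 13701u + 1u);  /* same value, from the rule */

    return 0;
}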

Note that on a typical platform with 32-bit int, the signed-to-unsigned conversion yields the unsigned value 4294953595. After division by 3 you'll get 1431651198. This value is too large to be forced into a short object on a platform with a 16-bit short type, and an attempt to do so results in implementation-defined behavior. So, if the properties of your platform match these assumptions, your code produces implementation-defined behavior, and the "meaningless" 17278 you are getting is nothing more than a specific manifestation of it. It is possible that, had you compiled your code with overflow checking enabled (if your compiler supports it), it would trap on the assignment.
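
One portable way to avoid relying on that implementation-defined narrowing (a sketch; 1431651198 is the unsigned quotient computed above) is to range-check before assigning:

#include <stdio.h>
#include <limits.h>

int main()
{
    unsigned int q = 1431651198u;  /* result of the unsigned division */
    short z;

    /* Check the range explicitly instead of truncating silently. */
    if (q <= (unsigned int) SHRT_MAX) {
        z = (short) q;
        printf("z = %d\n", z);
    } else {
        printf("value %u does not fit in a short\n", q);
    }

    return 0;
}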
