
C: gcc implicitly converts signed char to unsigned char and vice versa?

I'm trying to learn C and got stuck on datatype sizes at the moment.

Have a look at this code snippet:

#include <stdio.h>
#include <limits.h>

int main() {
    char a = 255;
    char b = -128;
    a = -128;
    b = 255;
    printf("size: %lu\n", sizeof(char));
    printf("min: %d\n", CHAR_MIN);
    printf("max: %d\n", CHAR_MAX);
}

The printf-output is:

size: 1
min: -128
max: 127

How is that possible? The size of char is 1 Byte and the default char seems to be signed (-128...127). So how can I assign a value > 127 without getting an overflow warning (which I get when I try to assign -128 or 256)? Is gcc automatically converting to unsigned char? And then, when I assign a negative value, does it convert back? Why does it do so? I mean, all this implicitness wouldn't make it easier to understand.

EDIT:

Okay, it's not converting anything:

char a = 255;
char b = 128;
printf("%d\n", a);    /* -1 */
printf("%d\n", b);    /* -128 */

So it wraps around and starts counting again from the bottom. But why doesn't the compiler give me a warning? And why does it warn when I try to assign 256?


See 6.3.1.3/3 in the C99 Standard

... the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.

So, if you don't get a signal (if your program doesn't stop) read the documentation for your compiler to understand what it does.


gcc documents the behaviour (in http://gcc.gnu.org/onlinedocs/gcc/Integers-implementation.html#Integers-implementation) as:

  • The result of, or the signal raised by, converting an integer to a signed integer type when the value cannot be represented in an object of that type (C90 6.2.1.2, C99 6.3.1.3).

For conversion to a type of width N, the value is reduced modulo 2^N to be within range of the type; no signal is raised.
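
As a quick sketch of that modulo-2^N rule (assuming a typical target where char is signed and 8 bits wide, so N = 8 and values wrap modulo 256):

#include <stdio.h>

int main(void) {
    signed char a = 300;    /* 300 mod 256 = 44, which is in range, so a == 44   */
    signed char b = 255;    /* 255 is out of range; it wraps to 255 - 256 == -1  */
    signed char c = 129;    /* likewise 129 wraps to 129 - 256 == -127           */
    printf("%d %d %d\n", a, b, c);    /* prints "44 -1 -127" on such a target */
    return 0;
}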


how can I assign a value > 127

The result of converting an out-of-range integer value to a signed integer type is either an implementation-defined result or an implementation-defined signal (6.3.1.3/3). So your code is legal C, it just doesn't have the same behavior on all implementations.

without getting an overflow warning

It's entirely up to GCC to decide whether to warn or not about valid code. I'm not quite sure what its rules are, but I get a warning for initializing a signed char with 256, but not with 255. I guess that's because a warning for code like char a = 0xFF would normally not be wanted by the programmer, even when char is signed. There is a portability issue, in that the same code on another compiler might raise a signal or result in the value 0 or 23.

-pedantic enables a warning for this (thanks, pmg), which makes sense since -pedantic is intended to help write portable code. Or arguably doesn't make sense, since as R.. points out it's beyond the scope of merely putting the compiler into standard-conformance mode. However, the man page for gcc says that -pedantic enables diagnostics required by the standard. This one isn't, but the man page also says:

Some users try to use -pedantic to check programs for strict ISO C conformance. They soon find that it does not do quite what they want: it finds some non-ISO practices, but not all---only those for which ISO C requires a diagnostic, and some others for which diagnostics have been added.

This leaves me wondering what a "non-ISO practice" is, and suspecting that char a = 255 is one of the ones for which a diagnostic has been specifically added. Certainly "non-ISO" means more than just things for which the standard demands a diagnostic, but gcc obviously is not going so far as to diagnose all non-strictly-conforming code of this kind.

I also get a warning for initializing an int with ((long long)UINT_MAX) + 1, but not with UINT_MAX. Looks as if by default gcc consistently gives you the first power of 2 for free, but after that it thinks you've made a mistake.
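
In code, the two initializations compared above look like this (<limits.h> supplies UINT_MAX; the printed values assume the usual two's-complement, 32-bit int):

#include <stdio.h>
#include <limits.h>

int main(void) {
    int i = UINT_MAX;                   /* out of range, but within the first wrap of 2^32:
                                           no default warning, as noted above */
    int j = ((long long)UINT_MAX) + 1;  /* 2^32 itself: this is the one gcc warns about by default */
    printf("%d %d\n", i, j);            /* typically prints "-1 0" */
    return 0;
}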

Use -Wconversion to get a warning about all of those initializations, including char a = 255. Beware that it will give you a boatload of other warnings that you may or may not want.
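
As a usage sketch (the file name conv.c is just a placeholder), the flag goes on the ordinary compile line, typically alongside -Wall:

gcc -Wall -Wconversion -c conv.c

With it, gcc also diagnoses initializations like char a = 255 that merely wrap within the first power of 2; without it, only cases such as char a = 256 are reported by default.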

all this implicitness wouldn't make it easier to understand

You'll have to take that up with Dennis Ritchie. C is weakly-typed as far as arithmetic types are concerned. They all implicitly convert to each other, with various levels of bad behavior when the value is out of range depending on the types involved. Again, -Wconversion warns about the dangerous ones.

There are other design decisions in C that mean the weakness is quite important to avoid unwieldy code. For example, the fact that arithmetic is always done in at least an int means that char a = 1, b = 2; a = a + b involves an implicit conversion from int to char when the result of the addition is assigned to a. If you use -Wconversion, or if C didn't have the implicit conversion at all, you'd have to write a = (char)(a+b), which wouldn't be too popular. For that matter, char a = 1 and even char a = 'a' are both implicit conversions from int to char, since C has no literals of type char. So if it wasn't for all those implicit conversions either various other parts of the language would have to be different, or else you'd have to absolutely litter your code with casts. Some programmers want strong typing, which is fair enough, but you don't get it in C.
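
To make the last point concrete, here is a small sketch of the conversions hiding in that one statement (assuming a typical signed 8-bit char):

#include <stdio.h>

int main(void) {
    char a = 1, b = 2;     /* two implicit int-to-char conversions: 1 and 2 are int constants */
    a = a + b;             /* a and b are promoted to int, added as int, and the int result 3
                              is implicitly narrowed back to char when stored in a */
    a = (char)(a + b);     /* with the explicit cast that -Wconversion pushes you to write; a is now 5 */
    printf("%d\n", a);     /* prints 5; a is promoted to int again for the variadic call */
    return 0;
}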


Simple solution:

A signed char can hold values from -128 to 127. When you assign 129, the value wraps around: 127 is the last representable value, and the remaining 2 carries you past it to -127. (Try char a = 129; and print it: the value comes out as -127.)

Think of the char values as arranged in a circle: ..., 126, 127, -128, -127, -126, ..., -1, 0, 1, 2, ...

Whatever you assign, the stored value is the one you land on after wrapping around that circle, i.e. the value reduced modulo 256 into the range -128 to 127.

