Question regarding C argument promotions [closed]
Alright, I've been studying how to use loops to make my code more efficient, so that I can repeat a block of code without typing it over and over. After practicing what I've learned so far, I feel it's time to move on to the next chapter and learn how control statements let a program make decisions.
But before I advance, I still have a few questions about earlier material that I need an expert's help with. They are about data types.
A. Character Type
- I extracted the following from the book C Primer Plus, 5th ed.:
Somewhat oddly, C treats character constants as type int rather than char. For example, on an ASCII system with a 32-bit int and an 8-bit char, the code char grade = 'B'; represents 'B' as the numerical value 66 stored in a 32-bit unit, but grade winds up with 66 stored in an 8-bit unit. This characteristic of character constants makes it possible to define a character constant such as 'FATE', with four separate 8-bit ASCII codes stored in a 32-bit unit. However, attempting to assign such a character constant to a char variable results in only the last 8 bits being used, so the variable gets the value 'E'.
So the next thing I did after reading this was, of course, to follow what it says: I tried storing the word FATE in a variable declared as char grade, compiled the program, and printed the stored value with printf(). But instead of the character 'E' being printed, I get 'F'. Does this mean there is a mistake in the book, or is there something I misunderstood?
The passage above says that C treats character constants as type int. So to try it out, I assigned a number bigger than 255 (e.g. 356) to a char variable. Since 356 is within the range of a 32-bit int (I'm running Windows 7), I expected it to print 356 when I used the %d specifier. But instead of printing 356, it gives me 100, which is the value of the last 8 bits. Why does this happen? I thought char == int == 32 bits? (Although the book does mention before that a char is only one byte.)
B. Int and Floating Type
I understand that when a number stored in a short variable is passed to a variadic function, or to a function without a prototype, it is automatically promoted to int. The same happens with floating-point types: when a float is passed, it is converted to double. That is why there is no specifier for float; instead there is only %f for double and %Lf for long double.
But why is there a specifier for short although it is also promoted, yet none for float? Why don't they just give float a specifier with a modifier like %hf or something? Is there anything logical or technical behind this?
A lot of questions in one question... Here are answers to a couple:
This characteristic of character constants makes it possible to define a character constant such as 'FATE', with four separate 8-bit ASCII codes stored in a 32-bit unit. However, attempting to assign such a character constant to a char variable results in only the last 8 bits being used, so the variable gets the value 'E'.
This is actually implementation defined behavior. So yes, there's a mistake in the book. Many books on C are written with the assumption that the only C compiler in the world is the one the author used when testing the examples.
The compiler the author used treated the characters in 'FATE' as the bytes of an integer, with 'F' being the most significant byte and 'E' the least significant. Your compiler treats the characters in the literal as bytes of an integer with 'F' being the least significant byte and 'E' the most significant. For example, the first interpretation is how MSVC treats the value, while MinGW (a GCC compiler targeting Windows) treats the literal in the second way.
As far as there being no printf() format specifier that expects float, only specifiers that expect double - this is because the values passed to printf() for formatting are part of the variable argument list (the ... in printf()'s prototype). There is no type information about these arguments, so as you mentioned, the compiler must always promote them (from C99 6.5.2.2/6 "Function calls"):
If the expression that denotes the called function has a type that does not include a prototype, the integer promotions are performed on each argument, and arguments that have type float are promoted to double. These are called the default argument promotions.
And C99 6.5.2.2/7 "Function calls":
The ellipsis notation in a function prototype declarator causes argument type conversion to stop after the last declared parameter. The default argument promotions are performed on trailing arguments.
So in effect, it's impossible to pass a float to printf() - it will always be promoted to a double. That's why the format specifiers for floating-point values expect a double.
Also, due to the automatic promotion that is applied to short, I'm honestly not sure if the h modifier for formatting a short is strictly necessary (though it is necessary for use with the n specifier if you want the count of characters written to the stream placed in a short). It might be in C because it needs to be there to support the n specifier, for historical reasons, or for something I'm just not thinking of.
First, a char is by definition exactly 1 byte wide. Then the standard more or less says that the sizes should be:
sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long)
The exact sizes vary (except for char) by system and compiler, but on 32-bit Windows the sizes with GCC and VC are (AFAIK):
sizeof(short) == 2 (byte)
sizeof(int) == sizeof(long) == 4 (byte)
Your observation of 'F' versus 'E' in this case is a typical endianness issue (little- vs. big-endian: how a "word" is stored in memory).
Now what happens to your value? You have a variable that is 8 bits wide. You assign a bigger value ('FATE' or 356), but the compiler knows it is only allowed to store 8 bits, so it cuts off all the other bits.
To A: 3.) This is due to the different byte orderings of big- and little-endian CPU architectures. You get the first byte on a little-endian CPU (e.g. x86) and the last byte on a big-endian CPU (e.g. PPC). Actually, you always get the lowest 8 bits when the conversion from int to char is done, but the characters in the int are stored in reverse order.
7.) A char can only hold 8 bits, so everything else gets truncated the moment you assign the int to a char variable, and the lost bits can never be recovered from the char variable later.
To B: 3.) You might sometimes want to print only the lowest 16 bits of an int variable, regardless of what is in the upper half. It is not uncommon to pack multiple integer values into a single variable for certain optimizations. This works well for integer types, but makes little sense for floating-point types, which don't support bitwise operations directly - that might be the reason why there is no separate length modifier for float in printf.
A char is 1 byte long. The bit length of a byte can be 8, 16, or 32 bits; on general-purpose computers it is usually 8 bits. So the maximum number a char can represent depends on the bit length of a byte. To check it, see the limits.h header, where it is defined as CHAR_BIT.
char x = 'FATE' will probably depend on the byte ordering with which the machine/compiler interprets 'FATE', so this is system/compiler dependent. Someone please confirm/correct this.
If your system has 8-bit bytes, then when you do c = 356, only the lower 8 bits of the binary representation of 356 will be stored in the variable, because char data is always allocated 1 byte of storage. So %d will print 100, because the upper bits were lost when you assigned the value to the variable, and what is left is only the lower 8 bits.