short int literals in C
Why are there no short int literals in 'C'?
It doesn't make sense to have a short int literal in C since all integer expressions are evaluated as if the subexpressions were at least int in size.
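For instance, here is a minimal sketch of the promotion rule at work (the exact sizeof values are platform-dependent, but the literal 99 and the sum of two shorts both have type int):

    #include <stdio.h>

    int main(void)
    {
        short a = 1, b = 2;

        /* An unsuffixed decimal literal such as 99 already has type int,
           so there is nothing a "short literal" could add. */
        printf("sizeof 99      = %zu\n", sizeof 99);     /* == sizeof(int) */

        /* Both shorts are promoted to int before the addition, so the
           result of (a + b) has type int, not short. */
        printf("sizeof (a + b) = %zu\n", sizeof(a + b)); /* == sizeof(int) */

        return 0;
    }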
Alternatively: Because they aren't needed!
Data types of various sizes are needed to fit well with underlying hardware and/or to economize on storage space, but literals are a compile-time construct that gets stored into appropriate data structures anyway.
It's different with float vs. double, because the same number actually has a different internal representation in those two types - more different, anyway, than just a few leading zeros.
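And indeed C gives float its own literal suffix for exactly this reason. A quick check (a minimal sketch; the precision remark assumes the usual IEEE 754 formats):

    #include <stdio.h>

    int main(void)
    {
        /* 0.1 is a double literal; 0.1f is a float literal. They are not
           the same bits with a few leading zeros - the stored values differ. */
        printf("sizeof 0.1  = %zu\n", sizeof 0.1);   /* sizeof(double) */
        printf("sizeof 0.1f = %zu\n", sizeof 0.1f);  /* sizeof(float)  */

        /* The float literal carries less precision than the double one. */
        printf("0.1 as double: %.17g\n", 0.1);
        printf("0.1f widened:  %.17g\n", (double)0.1f);

        return 0;
    }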
Similarly, there's a difference between char and short even though they may be stored in the same bits: if the programmer is talking about character data, it will usually be more convenient for him to specify, say, 'A' than 65.
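For example (a small sketch; the value 65 assumes an ASCII character set):

    #include <stdio.h>

    int main(void)
    {
        /* 'A' and 65 denote the same value on an ASCII system, but the
           character form says what the programmer means. (In C, a
           character constant such as 'A' actually has type int.) */
        char initial = 'A';   /* clearer than: char initial = 65; */

        printf("initial = %c (numeric value %d)\n", initial, initial);
        printf("'A' == 65: %s\n", 'A' == 65 ? "true" : "false");

        return 0;
    }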
But a short 99 and an int 99 look the same to the programmer and are treated the same in the program... the wider-ranging type will easily do the work of both.
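A short illustration of that (minimal sketch): the same int literal initializes both types, with the compiler converting the constant at compile time, so no separate short-literal syntax is needed:

    #include <stdio.h>

    int main(void)
    {
        short s = 99;   /* the int constant 99 is converted to short here */
        int   i = 99;

        printf("s = %hd, i = %d, equal: %s\n", s, i, s == i ? "yes" : "no");

        return 0;
    }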
If we are talking original language design, remember that C got most of its present shape on the PDP-11, a 16-bit CPU. So there were integers for arithmetic and characters for string storage; pointers were basically the same as integers.
The language was very pragmatic and only later got a more formal and intricate syntax. So the answer is: it just happened to be that way. Only much later did we get 32-bit and 64-bit CPUs and the need to distinguish between integers of different lengths.
To this day I code almost all my C programs as if there were no types other than char and int. Oh, by the way, "char" in C can be either signed or unsigned according to the standard. This reflects that chars were meant for character storage (strings) and ints for arithmetic.
To clarify (thank you, semaj): the compiler can choose to treat a variable declared "char" as "unsigned char". This does not happen for an "int". An "int" is always signed, but with chars you cannot be sure: you have to assume that a char may use either signed or unsigned arithmetic. This is a speed optimization to accommodate CPUs that work faster with one implementation or the other. I.e. the focus is on chars as storage containers, not as an arithmetic type. (Its name is also a giveaway: it could have been called "short" or "small", but was called "char" for a reason.)
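You can see which choice your platform made via CHAR_MIN from <limits.h> (a minimal sketch; the outcome is implementation-defined by design):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* Whether plain char is signed is implementation-defined;
           CHAR_MIN is negative exactly when plain char is signed. */
        if (CHAR_MIN < 0)
            printf("plain char is signed here (CHAR_MIN = %d)\n", CHAR_MIN);
        else
            printf("plain char is unsigned here (CHAR_MIN = %d)\n", CHAR_MIN);

        return 0;
    }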