LP64, LLP64 and the IL32 transition
During the transition from 16 to 32 bit in the 80s, `int` was either 16 or 32 bit. Using the current 64-bit transition nomenclature, I understand there was a pretty even spread of ILP32 and LP32 machines. At the time I believe it was understood that `int` would always follow the register or pointer width for any given architecture and that `long` would remain 32 bit.
Fast forward 25 years and I see that LP64 is pretty mainstream, but until I encountered 64-bit platforms [my discovery of desktop Linux in 2007 :)], I always expected IP64 to be the next logical step.
- Was this (LP64) the expected evolution for 64-bit?
- How does the `char` ≤ `short` ≤ `int` ≤ `long` relationship fit into this emerging scheme of fixing an integer type to each platform we leave behind?
- How do these transition schemes relate to the use of (your choice of {l,u}case) `WORD`/`DWORD` on various platforms?
- Some areas of Windows still contain `INT` forms that are 16-bit. Will Windows grow out of LLP64 or is it too late?
- Why was `int` chosen to be left behind this time, as opposed to during the 32-bit transition?
How I see it is that Windows is an oddball in the whole x64 transition. But putting that aside, neither C nor C++ ever defined the integral types to be fixed-length. I find the whole `int`/`long`/pointer thing quite understandable, if you look at it this way:
- `int`: mostly 32 bits (Linux, Mac and Windows)
- `long`: 64 bits on Mac and Linux, 32 on Windows
- `long long`: 64 bits on Mac, Linux, and Windows x64
- `(u)intptr_t`: exact length of a pointer (32 bits on 32-bit, 64 bits on 64-bit systems)
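To make the list above concrete, here is a minimal C sketch (assuming a C99 compiler and `<stdint.h>`) that prints these sizes for whatever data model it is built under:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Typical output: LP64 (Linux/Mac x86-64) prints 4/8/8/8,
       LLP64 (Windows x64) prints 4/4/8/8, ILP32 prints 4/4/8/4. */
    printf("int:       %zu\n", sizeof(int));
    printf("long:      %zu\n", sizeof(long));
    printf("long long: %zu\n", sizeof(long long));
    printf("intptr_t:  %zu\n", sizeof(intptr_t));
    return 0;
}
```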
`WORD` and `DWORD` are ugly, and should be avoided. If the API forces you to use them, replace `DWORD` with `DWORD_PTR` when you're dealing with... well, pointers. It was never correct to use `(D)WORD` there in the first place, IMHO.
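As a rough sketch of what that replacement looks like in practice (the `Context` struct here is just a hypothetical placeholder; `SetWindowLongPtr`/`GetWindowLongPtr` and `GWLP_USERDATA` are the pointer-sized variants of the old `SetWindowLong` API):

```c
#include <windows.h>

/* Hypothetical per-window state, purely for illustration. */
struct Context { int value; };

void attach_context(HWND hwnd, struct Context *ctx)
{
    /* Wrong on Win64: DWORD is always 32 bits, so casting a pointer
       to it would truncate the upper half of a 64-bit address.      */
    /* DWORD truncated = (DWORD)(DWORD_PTR)ctx; */

    /* Right: LONG_PTR/DWORD_PTR grow with the pointer size, and the
       *Ptr API variants take them.                                  */
    SetWindowLongPtr(hwnd, GWLP_USERDATA, (LONG_PTR)ctx);
}

struct Context *get_context(HWND hwnd)
{
    return (struct Context *)GetWindowLongPtr(hwnd, GWLP_USERDATA);
}
```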
I don't think Windows will change its decision, ever. Too much trouble already.
Why was `int` left behind? Why does Venus rotate in the opposite direction? The answer to the first question is found here (I believe); the second is a bit more complicated ;)
Instead of looking at this as `int` being "left behind", I would say you should look at it in terms of not being able to leave behind any size of type that might be needed. I suppose compilers could define `int32_t` in terms of some internal `__int32_t` extension type, but with C99 still not being widely supported, it would have been a major pain for apps to have to work around missing `int32_t` definitions when their build systems couldn't find a 32-bit type among the standard types. And having a 32-bit type is essential, regardless of what your native word size is (for instance, it's the only correct type for Unicode codepoint values).
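A tiny illustration of that point, assuming nothing beyond `<inttypes.h>`: Unicode codepoints run up to U+10FFFF, which needs 21 bits, so an exact 32-bit type is the smallest standard fixed-width type that fits:

```c
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    /* U+1F600 needs 21 bits: too wide for a 16-bit type, but every
       codepoint up to U+10FFFF fits comfortably in 32 bits.         */
    uint32_t codepoint = 0x1F600;
    printf("U+%06" PRIX32 " stored in %zu bytes\n",
           codepoint, sizeof codepoint);
    return 0;
}
```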
For the same reason, it would not be feasible to have made `short` 32-bit and `int` 64-bit: a 16-bit type is essential for many things, audio processing being the first that comes to mind. (Not to mention Windows'/Java's ugly UTF-16 obsession...)
Really, I don't think the 16-to-32-bit and 32-to-64-bit transitions are at all comparable. Leaving behind 16-bit was leaving behind a system where most numbers encountered in ordinary, everyday life would not fit in a basic type and where hacks like "far" pointers had to be used to work with nontrivial data sets. On the other hand, most applications have minimal need for 64-bit types. Large monetary figures, multimedia file sizes/offsets, disk positions, high-end databases, memory-mapped access to large files, etc. are some specialized applications that come to mind, but there's no reason to think a word processor would ever need billions of characters or that a web page would ever need billions of HTML elements. There are simply fundamental differences in the relationship of numeric magnitudes to the realities of the physical world, the human mind, and so on.