Is there such a thing as a 64-bit int in C++?
Is there a real platform/compiler combo that defines int as 64 bits? Or is this just used to scare new programmers like myself into using int32_t where size matters (e.g. saving to a file) in order to make it "portable"?
There absolutely are such systems. There may be more in the future (or there may not). And do you want to take a bet on what int will be on a possible 128-bit architecture?
Wikipedia has an incomplete-but-useful rundown: http://en.wikipedia.org/wiki/64-bit#64-bit_data_models
In C99 (and C++11), long long is required to be at least 64 bits; GCC and Microsoft both make it exactly 64.
http://jk-technology.com/c/inttypes.html#long_long
I know that under SGI 64-bit IRIX, long ago, the long type declared a 64-bit integer, unless you passed the -n32 flag and got 32-bit code instead. Sun's solution was to make up a long long type which was 64 bits and leave long as 32 bits. The old Apple Macintosh OS treated int as 16 bits. Rambling on, I can point out that char is 16 bits on several TI DSPs.
With the exception of the long long solution (since it changed the syntax of the language at the time), all of these were legal under the C standard. So, yeah, the rationale behind <stdint.h> is not just a scare story. It's happened.
Of course there are compilers that have 64-bit ints.
An int is typically the natural word size for your architecture (i.e. a 64-bit system has 64-bit ints, a 16-bit system has 16-bit ints, etc).
And you shouldn't use int32_t unless you know the variable requires 32 bits, such as for a network protocol, binary file format, or hardware register. If you use an int32_t on a 64-bit machine where it isn't necessary, then you may be introducing a performance issue for no reason. Since an int is the natural word size, it tends to also be the most efficient datatype. This is especially true on RISC architectures.
If you want your code to be portable you might want to store 64-bit values in a long long.