Do I need a 64-bit processor to use a 64-bit data type?
I have a few questions:
Do I need a 64-bit processor to use a 64-bit data type (__int64 or int64_t)?
What does the "t" in int64_t stand for?
Starting from what versions of GCC and VCC is this data type supported?
Does a 64-bit data type just double the data length, or is there something else going on under the hood too?
You don't need a 64-bit processor to use a 64-bit data type. It all depends on the compiler and only on the compiler. The compiler can provide you with 128-bit, 237-bit or 803-bit data types, if it so desires.
However, keep in mind that normally 32-bit CPUs cannot handle 64-bit values directly, which means that the burden of supporting all necessary language operations for a 64-bit type lies on the compiler and the library. The compiler will have to generate a more-or-less complex sequence of 32-bit CPU instructions in order to perform additions, shifts, multiplications etc. on 64-bit values. This means that in code generated for 32-bit CPUs, basic language operations on 64-bit data types will not be as efficient as they would be in code generated for 64-bit CPUs (since in the latter most language operations would be carried out by a single CPU instruction).
The "t" in int64_t stands for either "type" or "typedef name". That's an old accepted naming convention for standard library typedefs.
As for compiler versions, it is actually an ambiguous question. The typedef name int64_t is part of the C standard library, while support for 64-bit integer types (under any name) is part of the compiler proper. So which one are you asking about? For example, the MSVC compiler has supported 64-bit data types for a long time, but the names for these types were different: the 64-bit signed integer is called __int64 or something like that in MSVC. The int64_t typedef itself became part of the C language with the C99 version of the specification, and C++ only adopted it later, in C++11's <cstdint> header; MSVC did not ship a stdint.h until Visual Studio 2010. So, with older compilers, you generally cannot expect int64_t to be available in C++ code.
As for data length... Well, yeah, it is just doubling the number of bits. The rest follows.
- No, you can process such data on a 32 bit machine. So long as your compiler supports those data types you are fine.
- int64_t is just its name, as defined in the standard.
- I think all versions of GCC and MSVC this century support 64 bit integers on 32 bit architecture.
- A 64 bit integer is just twice the size of a 32 bit integer.
If you look at /usr/include/stdint.h, you'll find that int64_t is defined as
typedef long long int int64_t;
So, as David said, it's compiler and not architecture dependent.
No, compilers on 32-bit architectures emulate 64-bit arithmetic. It's not terribly fast, but it's not that bad.
The t refers to type. This is a legacy from C, where struct types had to be referred to with the struct keyword unless given a typedef name.
64-bit integral types may have increased alignment, but that's about it.
I've no idea for point 3.