
On what basis are the size and range of a datatype decided?

I want to know what the basis is on which the size of a datatype is decided. For example, the size of the integer datatype in Java is 4 bytes. So why is it exactly 4 and not anything else? Moreover, the maximum number that can be stored with the int type is 2,147,483,648. Where does this number come from? I mean, what is the formula to get this number for any other data type?

But why is it exactly 4? I am still not clear. Some say it is so the type can represent any number in a particular range, but there are other datatypes that provide a larger range. Still very confused...


Because 4 bytes is 4*8 = 32 bits, and 2^32 = 4294967296. For an unsigned int that is the number of distinct values (so its maximum is one less); for a signed int, half of that, 2^32 / 2 = 2147483648, is the magnitude, which is your number. The number of bytes used to represent a datatype is decided by the architecture. For C++, only sizeof(char) is guaranteed to be 1.
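As a rough sketch (C++, assuming a platform where int is 4 bytes, as is typical), the figures can be derived from the bit count instead of being memorized:

```cpp
// Minimal sketch: derive the 32-bit limits from the number of bits.
// Assumes int is 4 bytes on this platform; uses 64-bit math to avoid overflow.
#include <cstdint>
#include <iostream>

int main() {
    const int bits = 32;                        // 4 bytes * 8 bits per byte
    const std::uint64_t states = 1ULL << bits;  // 2^32 = 4294967296 distinct bit patterns

    std::cout << "distinct values: " << states << '\n';
    std::cout << "unsigned max:    " << states - 1 << '\n';      // 4294967295
    std::cout << "signed max:      " << states / 2 - 1 << '\n';  // 2147483647
    std::cout << "signed min:      -" << states / 2 << '\n';     // -2147483648
    return 0;
}
```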


I don't know where you got your number from; it is wrong. The maximum value of such a type is always odd, not even.

For unsigned types, C prescribes a pure binary representation with some number of bits, say x. The maximum is then always 2^x - 1, thus an odd number.

For signed types the rule for the maximum (thus positive) value is similar: 2^(x-1) - 1. For the minimum value things are a bit more complicated, but usually negative numbers use the so-called two's complement representation, and the minimum value is then (the negative of the maximum value) minus one, so an even number.
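A quick way to check both rules on your own machine, using std::numeric_limits (this sketch assumes a typical platform where int is 32 bits and negative numbers use two's complement):

```cpp
// Check that the unsigned maximum is 2^x - 1 (odd) and that the signed
// minimum equals -(maximum) - 1 on a two's-complement platform.
#include <iostream>
#include <limits>

int main() {
    const unsigned umax = std::numeric_limits<unsigned>::max(); // 2^32 - 1 for a 32-bit unsigned
    const int      smax = std::numeric_limits<int>::max();      // 2^31 - 1
    const int      smin = std::numeric_limits<int>::min();      // -2^31

    std::cout << std::boolalpha;
    std::cout << "unsigned max " << umax << " is odd: " << (umax % 2 == 1) << '\n';
    std::cout << "signed max   " << smax << " is odd: " << (smax % 2 == 1) << '\n';
    std::cout << "min == -max - 1: " << (smin == -smax - 1) << '\n';
    return 0;
}
```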


In assembler, the size in bits of each type is defined by the architecture that has to work with the numbers. Most current architectures group bits into 8-bit bytes and double the size when going to the next type: 8, 16, 32, 64, 128... but not all architectures support all the types, and some old architectures had weird ones (14-bit integers, for example).

When you use a programming language, the language abstracts the lower levels, and its types are defined as abstractions of the lower-level types. Depending on the language the types might differ: char, short int, int (some have jokingly proposed short, tall, grande). In some cases the sizes are exactly defined (in Java or C# an int is exactly 32 bits and a long is 64 bits), while in others, like C/C++, only the relationships among the types are defined (long is not smaller than int, which in turn is not smaller than short, then char...).
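To see what a particular C++ implementation chose, you can simply print the sizes. Only the relative ordering and minimum ranges are guaranteed, so the exact byte counts in the comments below are just typical values, not promises:

```cpp
// Print the byte counts the current compiler/platform uses for each integer
// type. Only sizeof(char) == 1 is fixed; the rest is implementation-defined.
#include <iostream>

int main() {
    std::cout << "char:      " << sizeof(char)      << " byte(s)\n"; // always 1
    std::cout << "short:     " << sizeof(short)     << " byte(s)\n"; // typically 2
    std::cout << "int:       " << sizeof(int)       << " byte(s)\n"; // typically 4
    std::cout << "long:      " << sizeof(long)      << " byte(s)\n"; // 4 or 8 depending on platform
    std::cout << "long long: " << sizeof(long long) << " byte(s)\n"; // typically 8
    return 0;
}
```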

The maximum number that a given unsigned type can hold can be calculated as 2^N-1, so for a 32 bit unsigned int, the maximum value is 4294967295. The reason for the -1 part is that there are 2^N distinct numbers, one of which is 0, so only 2^N-1 are left for non-zero values.

When it comes to signed integer types, again the architecture has much to say. Most current architectures use two's complement. On those platforms the highest value of an N-bit signed integer is 2^(N-1) - 1 and the most negative is -2^(N-1). Some older architectures reserved one bit for the sign and stored the magnitude separately, in which case the negative range is reduced by one (that approach allows for two zero values: 0 and -0).

As to your question, you will have to pick the language first. Then, in the case of C or C++, you will have to pick your platform, and even the compiler. While in most cases the types map directly (int is 32 bits on most 32- and 64-bit architectures in C++), that is not guaranteed, and some types might differ (long might be a 32- or 64-bit integer type). As for the actual size of a type in C++, besides char, which is guaranteed to have CHAR_BIT bits, you can use sizeof(type) * CHAR_BIT to get the size in bits.
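Putting that together, here is a sketch that computes the width of int with sizeof(int) * CHAR_BIT and then derives the range from that width (it assumes two's complement for the signed minimum, and a width of at most 63 bits so the 64-bit shift does not overflow):

```cpp
// Derive the range of int from its width in bits and compare it with what
// the implementation itself reports via std::numeric_limits.
#include <climits>   // CHAR_BIT
#include <cstdint>
#include <iostream>
#include <limits>

int main() {
    const unsigned bits = sizeof(int) * CHAR_BIT;                  // typically 32
    const std::int64_t max = (std::int64_t(1) << (bits - 1)) - 1;  // 2^(N-1) - 1
    const std::int64_t min = -(std::int64_t(1) << (bits - 1));     // -2^(N-1)

    std::cout << "int is " << bits << " bits wide\n";
    std::cout << "derived max " << max << " vs reported " << std::numeric_limits<int>::max() << '\n';
    std::cout << "derived min " << min << " vs reported " << std::numeric_limits<int>::min() << '\n';
    return 0;
}
```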


4 bytes = 4*8 bits = 32 bits. Now 2^32 = 4294967296. Divide by 2 and you get 2147483648. So an int spans from -2147483648 to +2147483647 (the positive side loses one value to make room for zero).


It is all determined by hardware. 32-bit platforms were in vogue when these languages were designed. This meant that it was most efficient to operate on 32-bit numbers and that pointers were 32 bits. The most common platforms had 8-bit bytes, so 32 bits are 4 bytes. The most common signed integer representation was two's complement, so the largest representable number is 2^31 - 1, or 2,147,483,647, and the smallest is -2^31, or -2,147,483,648. An unsigned integer has the range from 0 to 2^32 - 1, or 4,294,967,295.

C++ does not actually require integers to have these sizes. It is meant for writing efficient code on a wider range of platforms, so it says:

There are four signed integer types: "signed char", "short int", "int", and "long int." In this list, each type provides at least as much storage as those preceding it in the list. Plain ints have the natural size suggested by the architecture of the execution environment; the other signed integer types are provided to meet special needs.


As a slight variant of Vladimir's answer:

Int is a signed data type, so as Vladimir says:

4 bytes = 4*8 bits = 32 bits. 

Actually though, 2^32 = 4,294,967,296.

Because this is signed, you halve this value: 4,294,967,296 / 2 = 2,147,483,648 values on each side of zero. The maximum is therefore 2,147,483,647, since the non-negative half also has to hold zero.


To begin with, computers only understand binary digits (0 and 1). Eight binary digits (bits) grouped together form the least-addressable memory unit, the byte.

Since a byte can only represent 256 different states, which is far less than what we require, other units are defined by combining multiple bytes to represent a single value, such as int for integers.
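A tiny illustration (C++, assuming 8-bit bytes as on all mainstream hardware): one byte has 2^8 = 256 states, so a single unsigned char can only count from 0 to 255, and anything larger has to be built from several bytes:

```cpp
// One byte gives 256 states (0..255); adding 1 to 255 wraps back around to 0.
#include <iostream>

int main() {
    unsigned char b = 255;                                   // largest of the 256 states of one byte
    b = b + 1;                                               // wraps around to 0
    std::cout << static_cast<int>(b) << '\n';                // prints 0
    std::cout << "int uses " << sizeof(int) << " bytes\n";   // typically 4, i.e. 256^4 states
    return 0;
}
```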

The way each unit is interpreted can be found in the link above.


Primitive data types tend to be based on those provided by the hardware architecture for efficiency reasons. Even platform independent languages such as Java and C# have been designed with common 32 bit processors in mind. As others have explained the minimum and maximum values of integral types depend on the lowest and highest numbers that can be represented in the number of bits that are available in each type, reserving one bit to represent the sign for signed types.
