
Size of integers?

This has to do with a question I read yesterday: How to determine how many bytes an integer needs?

Anyway, the part that I have a question about is this:

I'm looking for the most efficient way to calculate the minimum number of bytes needed to store an integer without losing precision.

e.g.

int: 10 = 1 byte

int: 257 = 2 bytes

My question is, why does 10 require 1 byte, and why does 257 require 2? From what I understand, you can represent 10 as 1010, which is 4 bits, and 257 as 100000001, which is 9 bits. Does it have to do with word size? Is it that you can't have just 4 bits, but you need the whole byte, and you can't just have 9 bits, you need the whole 2 bytes?


That's right, bytes come in sizes of 8 bits each [1], and you usually can't subdivide them.

[1] Usually (for pedants and troglodytes).
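
For illustration, here is a rough C sketch (my own take on the arithmetic, not code from the question): count how many bits a value occupies, then round up to whole bytes.

```c
#include <stdio.h>

/* A minimal sketch, assuming an unsigned value: find the highest set bit,
 * then round the bit count up to a multiple of 8. */
static unsigned bytes_needed(unsigned long long v)
{
    unsigned bits = 0;
    do {
        bits++;
        v >>= 1;
    } while (v != 0);
    return (bits + 7) / 8;   /* round up to whole bytes */
}

int main(void)
{
    printf("%u\n", bytes_needed(10));   /* 10  = 1010      ->  4 bits -> 1 byte  */
    printf("%u\n", bytes_needed(257));  /* 257 = 100000001 ->  9 bits -> 2 bytes */
    return 0;
}
```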


Heh, yes, each byte has an address and so you can't use less than one.

In fact, it's a bit difficult to use less than 4 or 8, because access to unaligned scalars is slow, and so language processors tend to align addressable objects to multiples of 4, 8, or even 16 when they're concerned about cache blocks. The data bus is usually as wide as the registers (generally 32 or 64 bits), so if an object isn't aligned to that width, the CPU really has to snag two words and combine them. That's slow, so the compiler guards against it.

Sometimes, even more alignment is added.

Typically, an individual object declaration will get a 4- or 8-byte alignment, but a function, module (linker input file), or other large object may get 16 or 32, because using a partial cache block tends to waste the unused portion of the block, and cache performance is critical these days.
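
As a rough illustration (the exact numbers are typical for common 32/64-bit ABIs, not guaranteed), that padding and alignment shows up directly in struct layout:

```c
#include <stdio.h>
#include <stddef.h>

/* Assumed layout: the compiler pads and aligns members so that scalars sit on
 * "natural" boundaries, which is why a struct can be bigger than the sum of
 * its members. */
struct example {
    char c;   /* 1 byte, then (typically) 3 bytes of padding            */
    int  i;   /* 4 bytes, aligned to a 4-byte boundary                  */
    char d;   /* 1 byte, then padding so the whole struct stays aligned */
};

int main(void)
{
    printf("sizeof(struct example) = %zu\n", sizeof(struct example)); /* often 12, not 6 */
    printf("offsetof(i)            = %zu\n", offsetof(struct example, i)); /* often 4 */
    printf("alignment of int       = %zu\n", _Alignof(int));               /* often 4 */
    return 0;
}
```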


Memory is allocated in bytes, and a 9-bit value will of course need a second byte to accommodate the 9th bit.


It is not hard to come up with schemes that represent small numbers in a reduced number of bytes or bits. For example, UTF-8 is a way to represent Unicode code points (up to 21 bits) as sequences of 1 to 4 bytes, in a way that ensures code points in the range 0 to 127 occupy 1 byte.

But these schemes have the downside that larger numbers tend to take MORE bits to represent than they would unencoded. And besides, you are trading the number of bits needed to represent the numbers against the extra processor time of encoding and decoding them.
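
To sketch that tradeoff, here is a LEB128-style variable-length encoding (an assumed stand-in for illustration, not UTF-8 itself): 7 payload bits per byte plus a continuation bit, so small values fit in one byte but large values need more bytes than a fixed-width word would.

```c
#include <stdio.h>
#include <stdint.h>

/* Sketch of a varint encoder: emit 7 bits at a time, setting the high bit
 * of each byte that has more data following it. */
static size_t encode_varint(uint64_t v, unsigned char *out)
{
    size_t n = 0;
    do {
        unsigned char b = v & 0x7F;   /* low 7 bits               */
        v >>= 7;
        if (v != 0)
            b |= 0x80;                /* set continuation bit     */
        out[n++] = b;
    } while (v != 0);
    return n;                         /* number of bytes produced */
}

int main(void)
{
    unsigned char buf[10];
    printf("100  -> %zu byte(s)\n", encode_varint(100, buf));          /* 1 */
    printf("300  -> %zu byte(s)\n", encode_varint(300, buf));          /* 2 */
    printf("2^62 -> %zu byte(s)\n", encode_varint(1ULL << 62, buf));   /* 9, vs 8 bytes fixed-width */
    return 0;
}
```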

My question is, why does 10 require 1 byte, and why does 257 require 2?

Theoretically it doesn't / they don't. But in practice, computers are primarily designed to deal with memory in word-sized chunks (typically 32 or 64 bits). Addressing memory at the byte level, and doing arithmetic on variable-sized number representations, is going to be a LOT slower.

Besides, memory is cheap, so for most applications it is simply not enough payback to justify trying to reduce "wastage" below the word granularity.
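
A small C example of that word granularity (the sizes shown are typical, not guaranteed by the standard): both values below occupy the same fixed-size int, no matter how few bits they actually need.

```c
#include <stdio.h>

int main(void)
{
    int small = 10;
    int large = 257;

    /* In practice both values get the same fixed-size storage, a whole
     * machine word (commonly 4 bytes for int), regardless of how few bits
     * the value itself would need. */
    printf("sizeof small = %zu\n", sizeof small);  /* typically 4 */
    printf("sizeof large = %zu\n", sizeof large);  /* typically 4 */
    return 0;
}
```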

