Why is number of bits always(?) a power of two? [closed]
We have 8-bit, 16-bit, 32-bit and 64-bit hardware architectures and operating systems. But not, say, 42-bit or 69-bit ones.
Why? Is it something fundamental that makes 2^n bits a better choice, or is it just about compatibility with existing systems? (It's obviously convenient that a 64-bit register can hold two 32-bit pointers, or that a 32-bit data unit can hold 4 bytes.)
That's mostly a matter of tradition. It is not even always true. For example, floating-point units in processors (even contemporary ones) have 80-bit registers. And there is nothing that would force us to have 8-bit bytes instead of 13-bit bytes.
Sometimes this has a mathematical basis. For example, if you decide on an N-bit byte and want to do integer multiplication, you need exactly 2N bits to store the result. Then you also want to add/subtract/multiply those 2N-bit integers, and now you need 2N-bit general-purpose registers for the addition/subtraction results and 4N-bit registers for the multiplication results.
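A minimal C sketch of that point, assuming 32-bit operands (the values are just an example): the product of two N-bit numbers can need up to 2N bits, so a 32x32 multiplication has to be widened to 64 bits to avoid losing the high half.

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* The largest 32-bit value squared does not fit in 32 bits,
       but it always fits in 64 bits: (2^32 - 1)^2 < 2^64. */
    uint32_t a = UINT32_MAX;                 /* 0xFFFFFFFF */
    uint32_t b = UINT32_MAX;
    uint64_t product = (uint64_t)a * b;      /* widen before multiplying */
    printf("%llu\n", (unsigned long long)product);  /* 18446744065119617025 */
    return 0;
}
```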
http://en.wikipedia.org/wiki/Word_%28computer_architecture%29#Word_size_choice
Different amounts of memory are used to store data values with different degrees of precision. The commonly used sizes are usually a power of 2 multiple of the unit of address resolution (byte or word). Converting the index of an item in an array into the address of the item then requires only a shift operation rather than a multiplication. In some cases this relationship can also avoid the use of division operations. As a result, most modern computer designs have word sizes (and other operand sizes) that are a power of 2 times the size of a byte.
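As a small illustration of that shift-versus-multiply point (the base address and the 8-byte element size below are made up for the example):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* With a power-of-two element size, converting an array index into
       an address is a shift instead of a multiply. */
    uintptr_t base  = 0x1000;                       /* hypothetical array base */
    uintptr_t index = 5;
    uintptr_t addr_shift = base + (index << 3);     /* index * 8 done as a shift */
    uintptr_t addr_mul   = base + index * 8;        /* same result */
    printf("%#lx %#lx\n", (unsigned long)addr_shift, (unsigned long)addr_mul);
    return 0;
}
```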
Partially, it's a matter of addressing. Having N bits of address allows you to address at most 2^N units of memory, and hardware designers prefer to use all of that capability. So, for example, 3 bits are exactly enough to address every bit of an 8-bit bus, and so on.
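A tiny sketch of that relationship (just printing the arithmetic, nothing hardware-specific):

```c
#include <stdio.h>

int main(void) {
    /* N address bits distinguish exactly 2^N locations, so sizing memory
       as a power of two wastes none of the address combinations. */
    for (unsigned n = 1; n <= 6; n++)
        printf("%u address bits -> %u addressable units\n", n, 1u << n);
    return 0;
}
```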
The venerable PDP-10 was 36 bits.
Your memory system wants to be a byte multiple, which makes your cache want to be a byte multiple, which makes your whole system want to be a byte multiple.
Speaking as a HW designer, you generally want to design CPUs to byte boundaries of some kind, i.e. multiples of 8. Otherwise you either have to add a lot of awkward circuitry to a 49-bit system to make it utilize the mod-8 bits, or you end up ignoring the extra bits, in which case they were a waste, unless you needed the extra bit for instructions, which is never the case on 16-bit or wider systems.
Many (most?) early pre-microprocessor CPUs had a number of bits per word that was not a power of two.
In particular, Seymour Cray and his team built many highly influential machines with non-power-of-two word sizes and address sizes -- 12-bit, 48-bit, 60-bit, etc.
A surprisingly large number of early computers had 36-bit words, entirely due to the fact that humans have 10 fingers. The Wikipedia "36-bit" article has more details on the relationship between 10 fingers and 36 bits, and links to articles on many other historically important but no longer popular bit sizes, most of them not a power of two.
I speculate that
(a) 8-bit addressable memory became popular because it was slightly more convenient for storing 7-bit ASCII and 4-bit BCD, without either awkward packing or wasting multiple bits per character; and no other memory width had any great advantage.
(b) As Stephen C. Steel points out, that slight advantage is multiplied by economies of scale and market forces -- more 8-bit-wide memories are used, and so economies of scale make them slightly cheaper, leading to even more 8-bit-wide memories being used in new designs, etc.
(c) Wider bus widths in theory made a CPU faster, but putting the entire CPU on a single chip made it vastly cheaper and perhaps slightly faster than any previous multi-part CPU system of any bus width. At first there were barely enough transistors for a 4-bit CPU, then an 8-bit CPU. Later, there were barely enough transistors for a 16-bit CPU, to huge fanfare and a "16-bit" marketing campaign. Right around the time one would expect a 24-bit CPU ...
(d) the RISC revolution struck. The first two RISC chips were 32 bits, for whatever reason, and people had been conditioned to think that "more bits are better", so every manufacturer jumped on the 32-bit bandwagon. Also, IEEE 754-1985 was standardized with 32-bit and 64-bit floating-point numbers. There were some 24-bit CPUs, but most people have never heard of them.
(e) For software compatibility reasons, manufacturers maintained the illusion of a 32-bit databus even on processors with a 64-bit front-side bus (such as the Intel Pentium and the AMD K5, etc.) or on motherboards with a 4-bit-wide bus (the LPC bus).
At one time, computer word lengths tended to be a multiple of 6 bits, because computers typically used 6-bit character sets, without support for lower-case letters.
IBM made a high-performance computer, the STRETCH, for Los Alamos, which had a 64-bit word. It had the unusual feature that individual bits in the computer's memory could be directly addressed, which forced the word length to be a power of two. It also had a more extended character set, which allowed mathematical symbols (in addition to lower case) to be included; they were used in a special higher-level language named COLASL.
When IBM came out with the very popular System/360 mainframe, even though it did not have bit addressing, it kept the eight-bit byte, primarily to allow efficient storage of packed decimal quantities at four bits to the decimal digit. Because that machine was so popular, it was very influential, and the PDP-11 computer from DEC was designed with a 16-bit word and 8-bit characters. The PDP-11 was also the first true little-endian machine, and it was also very popular and influential.
But it isn't just because of following fashion. 8-bit characters allow lower-case text, and as computers became cheaper, being able to easily use them for word processing was valued. And just as the STRETCH needed to have a word that had a power of two size in bits to allow bits to be easily addressed, today's computers needed to have a word that was a power-of-two multiple of 8 (which happens to be two to the third power itself) to allow characters to be easily addressed.
If we still used 6-bit characters, computers would tend to have 24-, 48-, or 96-bit words.
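To make the "easily addressed" point concrete, here is a small sketch (assuming an 8-byte word; the address is made up): with a power-of-two word size, a byte's position within its word is just the low bits of the address, with no division needed.

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* For an 8-byte (64-bit) word, the low 3 bits of a byte address select
       the byte within the word; the remaining bits select the word itself. */
    uintptr_t byte_addr    = 0x100B;            /* hypothetical byte address */
    uintptr_t word_index   = byte_addr >> 3;    /* which 8-byte word */
    unsigned  byte_in_word = byte_addr & 0x7;   /* which byte inside it */
    printf("word %#lx, byte %u\n", (unsigned long)word_index, byte_in_word);
    return 0;
}
```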
As others have pointed out, in the early days, things weren't so clear cut: words came in all sorts of oddball sizes.
But the push to standardize on 8-bit bytes was also driven by memory chip technology. In the early days, many memory chips were organized as 1 bit per address. Memory for n-bit words was constructed by using memory chips in groups of n (with the corresponding address lines tied together, and each chip's single data bit contributing one bit of the n-bit word).
As memory chip densities got higher, manufacturers packed multiple chips in a single package. Because the most popular word sizes in use were multiples of 8 bits, 8-bit-wide memory was particularly popular: this meant it was also the cheapest. As more and more architectures jumped onto the 8-bit-byte bandwagon, the price premium for memory chips that didn't use 8-bit bytes got bigger and bigger. Similar arguments account for the moves from 8 to 16, 16 to 32, and 32 to 64.
You can still design a system with 24-bit memory, but that memory will probably be much more expensive than a similar design using 32-bit memory. Unless there is a really good reason to stick at 24 bits, most designers would opt for 32 bits when it's both cheaper and more capable.
Related, but possibly not the reason, I heard that the convention of 8 bits in a byte is because it's how IBM rigged up the IBM System/360 architecture.
A common reason is that you can number your bits in binary. This comes in useful in quite a few situations. For instance, in bitshift or rotate operations. You can rotate a 16 bit value over 0 to 15 bits. An attempt to rotate over 16 bits is also trivial: that's equivalent to a rotation over 0 bits. And a rotation over 1027 bits is equal to a rotation over 3 bits. In general, a rotation of a register of width W over N bits equals a rotation over N modulo W, and the operation "modulo W" is trivial when W is a power of 2.
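Here is a minimal C sketch of that point for a 16-bit rotate (the specific values are just examples): because the width is a power of two, reducing the rotate count modulo 16 is a single AND with 15.

```c
#include <stdint.h>
#include <stdio.h>

/* Rotate a 16-bit value left by n bits. */
static uint16_t rotl16(uint16_t x, unsigned n) {
    n &= 15;                        /* n mod 16 is trivial for a power-of-two width */
    if (n == 0) return x;           /* avoid shifting by the full register width */
    return (uint16_t)((x << n) | (x >> (16 - n)));
}

int main(void) {
    uint16_t v = 0x1234;
    printf("%#06x\n", (unsigned)rotl16(v, 3));     /* rotate by 3 */
    printf("%#06x\n", (unsigned)rotl16(v, 1027));  /* 1027 mod 16 = 3: same result */
    return 0;
}
```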
The 80186, 8086, 8088 and "Real Mode" on 80286 and later processors used a 20-bit segmented memory addressing system. The 80286 had 24 native address lines and then the 386 and later had either 32 or 64.
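For reference, a tiny sketch of how those 20-bit real-mode addresses are formed from 16-bit segment:offset pairs (the segment and offset chosen here are the classic reset vector, just as an example):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Real-mode x86 forms a 20-bit physical address as (segment << 4) + offset. */
    uint16_t segment = 0xF000;
    uint16_t offset  = 0xFFF0;                          /* F000:FFF0 */
    uint32_t physical = ((uint32_t)segment << 4) + offset;
    printf("%#07x\n", (unsigned)physical);              /* 0xffff0, a 20-bit address */
    return 0;
}
```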
Another counterexample: the PIC16C8X series microcontrollers have a 14-bit-wide instruction set.
The byte size is related to the encoding of characters, mostly those of the Western world, hence 8 bits. The word size is not related to encoding; it is related to the width of addresses, which is why it has varied from 4 to 80 bits and so on.
My trusty old HP 32S calculator was 12-bit.
Because the space reserved for the address is always a fixed number of bits. Once you have defined a fixed address (or pointer) size, you want to make the best of it, so you use every value it can hold, right up to the highest one. And the number of distinct values you can form from N bits (each 0 or 1) is always a power of two, 2^N.
Maybe you can find something useful here: Binary_numeral_system
The ICL 1900 series were all 24-bit (words). I bet there aren't a lot of people who remember these. Do you?
We have; just look at PIC microcontrollers.