
Why does a byte only have 0 to 255?

Why does a byte only range from 0 to 255?


Strictly speaking, the term "byte" can actually refer to a unit holding something other than 256 values. It's just that 256 is the almost universal size. From Wikipedia:

Historically, a byte was the number of bits used to encode a single character of text in a computer and it is for this reason the basic addressable element in many computer architectures.

The size of the byte has historically been hardware dependent and no definitive standards exist that mandate the size. The de facto standard of eight bits is a convenient power of two permitting the values 0 through 255 for one byte. Many types of applications use variables representable in eight or fewer bits, and processor designers optimize for this common usage. The popularity of major commercial computing architectures has aided in the ubiquitous acceptance of the 8-bit size. The term octet was defined to explicitly denote a sequence of 8 bits because of the ambiguity associated with the term byte.

Ironically, these days the size of "a single character" is no longer considered to be a single byte in most cases... most commonly, the idea of a "character" is associated with Unicode, where characters can be represented in a number of different formats, but are typically either 16 or 32 bits.

It would be amusing for a system which used UCS-4/UTF-32 (the direct 32-bit representation of Unicode) to designate 32 bits as a byte. The confusion caused would be spectacular.

However, assuming we take "byte" as synonymous with "octet", there are eight independent bits, each of which can be either on or off, true or false, 1 or 0, however you wish to think of it. That leads to 256 possible values, which are typically numbered 0 to 255. (That's not always the case though. For example, the designers of Java unfortunately decided to treat bytes as signed integers in the range -128 to 127.)
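For the curious, here is a minimal Java sketch of both points, the 2^8 count and the signed-byte caveat (the class name is my own):

    public class EightBits {
        public static void main(String[] args) {
            System.out.println(1 << 8);    // 2^8 = 256 distinct bit patterns
            byte b = (byte) 0xFF;          // all eight bits set: 11111111
            System.out.println(b);         // -1, because Java bytes are signed
            System.out.println(b & 0xFF);  // 255, the same bits read as unsigned
        }
    }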


Because a byte, by its standard definition, is 8 bits which can represent 256 values (0 through 255).


Byte ≠ Octet

Why does a byte only range from 0 to 255?

It doesn’t.

An octet has 8 bits, thus allowing for 2⁸ = 256 possibilities. A byte is ill-defined. One should not equate the two terms, as they are not completely interchangeable. Also, wicked programming languages that support only signed characters (ʏᴏᴜ ᴋɴᴏw ᴡʜᴏ ʏᴏᴜ ᴀʀᴇ﹗) can only represent the values −128 to 127, not 0 to 255.

Big Iron takes a long time to rust.

Most, but not all, modern machines have 8-bit bytes, but that is a relatively recent phenomenon. It certainly has not always been that way. Many very early computers had 4-bit bytes, and 6-bit bytes were once common even comparatively recently. Both of those types of bytes hold rather fewer than 256 values.

Those 6‑bit bytes could be quite convenient, since with a word size of 36 bits, six such bytes fit cleanly into one of those 36‑bit words without any jiggering. That made it very useful for holding Fieldata, used by the very popular Sperry ᴜɴɪᴠᴀᴄ computers. You can fit only 4 ᴀsᴄɪɪ characters into a 36‑bit word, not 6 Fieldata characters. We had 1100 series at the computing center when I was an undergraduate, but this remains true even with the modern 2200 series.
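To make that packing concrete, here is a small sketch in Java, with a 64-bit long standing in for the 36-bit word; the pack helper is my own illustration, not period code:

    public class SixBitWord {
        // Pack six 6-bit codes (each 0..63) into the low 36 bits of a long.
        static long pack(int[] codes) {
            long word = 0;
            for (int c : codes) {
                word = (word << 6) | (c & 0x3F);  // shift left 6, append code
            }
            return word;  // exactly 6 * 6 = 36 bits used, nothing left over
        }

        public static void main(String[] args) {
            long w = pack(new int[] {1, 2, 3, 4, 5, 6});
            System.out.println(Long.toBinaryString(w));
        }
    }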

Enter ASCII

ᴀsᴄɪɪ — which was and is only a 7‑ not an 8‑bit code — paved the way for breaking out of that world. The importance of the ɪʙᴍ 360, which had 8‑bit bytes whether they held ᴀsᴄɪɪ or not, should not be understated.

Nevertheless, many machines long supported ᴅᴇᴄ’s Radix‑50. This was a 40‑character repertoire wherein three of its characters could be efficiently packed into a single 16‑bit word under two distinct encoding schemes. I used plenty of ᴅᴇᴄ ᴘᴅᴘ‑11s and Vaxen during my university days, and Rad‑50 was simply a fact of life, a reality that had to be accommodated.
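The arithmetic behind it: 40³ = 64,000, which is less than 2¹⁶ = 65,536, so three characters fit in one 16-bit word. A hedged Java sketch follows; the exact character table is an assumption based on one common PDP-11 ordering:

    public class Rad50 {
        // Assumed repertoire: space, A-Z, $, ., %, 0-9 (40 characters total).
        static final String CHARS = " ABCDEFGHIJKLMNOPQRSTUVWXYZ$.%0123456789";

        // Encode three characters into one value that fits in 16 bits.
        static int encode(char a, char b, char c) {
            return (CHARS.indexOf(a) * 40 + CHARS.indexOf(b)) * 40 + CHARS.indexOf(c);
        }

        public static void main(String[] args) {
            int word = encode('A', 'B', 'C');
            System.out.println(word);              // 1683
            System.out.println(word < (1 << 16));  // true: fits in 16 bits
        }
    }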


A byte has 8 bits (eight 1s or 0s): 01000111 = 71

Each bit represents a value: 1, 2, 4, 8, 16, 32, 64, 128, reading from right to left.

Example:

128   64   32   16    8    4    2    1
  0    1    0    0    0    1    1    1  =  71
  1    1    1    1    1    1    1    1  = 255 (max)
  0    0    0    0    0    0    0    0  =   0 (min)

Using binary 1s and 0s and only 8 bits (1 byte), we can have at most one of each value: 1 × 128 + 1 × 64 + 1 × 32 + … + 1 × 1, giving a maximum total of 255 and a minimum of 0.
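A small Java sketch of that tally (class and variable names are mine):

    public class BitWeights {
        public static void main(String[] args) {
            int bits = 0b01000111;          // the example byte: 71
            int total = 0;
            for (int i = 7; i >= 0; i--) {
                int weight = 1 << i;        // 128, 64, 32, ..., 1
                int bit = (bits >> i) & 1;  // this position's 0 or 1
                total += bit * weight;
            }
            System.out.println(total);      // 71
        }
    }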


You are wrong! A byte ranges from 0 to 63 or from 0 to 99!

Do you believe in God? God said in the Holy Bible:

The basic unit of information is a byte. Each byte contains an unspecified amount of information, but it must be capable of holding at least 64 distinct values. That is, we know that any number between 0 and 63, inclusive, can be contained in one byte. Furthermore, each byte contains at most 100 distinct values. On a binary computer a byte must therefore be composed of six bits; on a decimal computer we have two digits per byte.* - The Art of Computer Programming, Volume 1, written by Donald Knuth.

And...

* Since 1975 or so, the word "byte" has come to mean a sequence of precisely eight binary digits, capable of representing the numbers 0 to 255. Real-world bytes are therefore larger than the bytes of the hypothetical MIX machine; indeed, MIX's old-style bytes are just barely bigger than nybbles. When we speak of bytes in connection with MIX we shall confine ourselves to the former sense of the word, harking back to the days when bytes were not yet standardized. - The Art of Computer Programming, Volume 1, written by Donald Knuth.

:-)


A byte has only 8 bits. A bit is a binary digit. So a byte can hold 2^8 = 256 distinct numbers, ranging from 0 to 2^8 − 1 = 255.

It's the same as asking why a 3 digit decimal number can represent values 0 through 999, which is answered in the same manner (10^3 - 1).

Originally, bytes weren't always 8 bits, though. A byte was "a couple" of bits that could be 6, 7, or 9 bits as well. That was later standardized, and it made sense to make those units a power of two, given the binary nature of computing. Hence came the nibble (4 bits, or half a byte) and the 8-bit byte.

[edit] That is also why octal and hexadecimal numbering became popular. One octal digit represents 3 bits and one hexadecimal digit represents 4 bits, so a two-digit hexadecimal number represents exactly one byte. It makes a lot more sense to count from 0x00 to 0xFF than from 0 to 255. :)
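A quick Java sketch of one byte's value in each of those bases (the class name is mine):

    public class Bases {
        public static void main(String[] args) {
            int b = 255;                                    // all eight bits set
            System.out.println(Integer.toBinaryString(b));  // 11111111 (1 bit per digit)
            System.out.println(Integer.toOctalString(b));   // 377 (3 bits per digit)
            System.out.println(Integer.toHexString(b));     // ff (4 bits per digit)
        }
    }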


I'll note that on the PDP-10 series of computers, a byte was a variable-length construct, defined by a "byte pointer" that specified the number of bits as well as the offset from the beginning of the storage area. There was then a set of machine instructions for dealing with byte pointers, including:

  • LDB - Load Byte
  • DPB - Deposit Byte
  • ILDB - Increment pointer, then Load Byte
  • IDPB - Increment pointer, then Deposit Byte

In fact, a "byte" was what we today would call a bit field. Using a byte pointer to represent the next in a series of bytes of the same size was only one of its uses.

Some of the character sets in use were "sixbit" (upper-case only, six bytes to a 36-bit word), ASCII (upper and lower-case, five bytes to a word, with a bit left over), and only rarely EBCDIC (the IBM character set, which used four eight-bit bytes per word, wastefully leaving four bits per word unused).


Strictly speaking, it doesn't.

On most modern systems a byte is 8 binary bits, but this was not always the case: many older computers used 7 bits to represent ASCII characters, and punched-card systems were often based on 6-bit characters, for example.

If you're talking about an 8-bit byte, this can represent any range you wish. However, it can only represent 256 distinct values, so it is typically used to represent 0..255 ("unsigned byte") or -128..+127 ("signed byte").
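To illustrate that last sentence, a minimal Java sketch reading one bit pattern both ways (the class name is mine):

    public class TwoReadings {
        public static void main(String[] args) {
            byte b = (byte) 0b10000000;                 // bit pattern 10000000
            System.out.println(b);                      // -128 (signed reading)
            System.out.println(Byte.toUnsignedInt(b));  // 128 (unsigned reading)
        }
    }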
