
If a byte is an 8-bit integer, then how can we set it to 255?

The byte keyword denotes an integral type that stores values as indicated in the documentation: it's an unsigned 8-bit integer.

If it's only 8 bits, then how can we assign it the value 255?

byte myByte = 255;

I thought 8 bits was the same thing as just one character?


There are 256 different configurations of bits in a byte:

0000 0000
0000 0001
0000 0010
...
1111 1111

So you can assign a byte any value in the 0-255 range.
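Here's a quick C# sketch of that range (the class and variable names are just for illustration):

using System;

class ByteRangeDemo
{
    static void Main()
    {
        // A byte holds exactly 2^8 = 256 distinct values: 0 through 255.
        Console.WriteLine(byte.MinValue); // 0
        Console.WriteLine(byte.MaxValue); // 255

        byte myByte = 255;    // fine: 255 is the last of the 256 configurations
        // byte tooBig = 256; // compile-time error: 256 doesn't fit in a byte
    }
}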


Characters are described (in a basic sense) by a numeric representation that fits inside an 8-bit structure. If you look at the ASCII codes for ASCII characters, you'll see that they're related to numbers.

The largest integer an n-bit sequence can represent is given by the formula 2^n - 1 (as partially described above by @Marc Gravell), so an 8-bit structure can hold 256 values including 0 (note also that an IPv4 address is 4 separate sequences of 8-bit structures). If this were a signed integer, the first bit would be a flag for the sign and the remaining 7 would indicate the magnitude; it would still hold 256 values, but the maximum and minimum would be determined by the 7 trailing bits (2^7 - 1 = 127).

When you get into Unicode characters and "high ASCII" characters, the encoding requires more than an 8-bit structure. So in your example, if you were to assign a byte a value of 76, a lookup table could be consulted to derive the ASCII character L.
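As a rough illustration of that lookup, casting the byte to char in C# does the job (the names here are made up for the example):

using System;

class AsciiDemo
{
    static void Main()
    {
        byte value = 76;           // the integer 76, stored as the bits 0100 1100
        char letter = (char)value; // the same bits read as a character code
        Console.WriteLine(letter); // prints: L
    }
}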


11111111 (all 8 bits on) is 255: 128 + 64 + 32 + 16 + 8 + 4 + 2 + 1

Perhaps you're confusing this with 256, which is 2^8?
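If you want to check that sum yourself, a small C# snippet (names are illustrative) can parse the bit pattern and compare:

using System;

class BitSumDemo
{
    static void Main()
    {
        // Parse "11111111" as a base-2 number.
        byte allOn = Convert.ToByte("11111111", 2);
        Console.WriteLine(allOn);                              // 255
        Console.WriteLine(128 + 64 + 32 + 16 + 8 + 4 + 2 + 1); // 255
        // 2^8 = 256 is the count of patterns, not the largest value.
    }
}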


8 bits (unsigned) covers 0 through 255, i.e. a maximum of 2^8 - 1.

It sounds like you are confusing integer vs text representations of data.


I thought 8 bits was the same thing as just one character?

I think you're confusing the number 255 with the string "255".

Think about it this way: if computers stored numbers internally using characters, how would it store those characters? Using bits, right?

So in this hypothetical scenario, a computer would use bits to represent characters which it then in turn used to represent numbers. Aside from being horrendous from an efficiency standpoint, this is just redundant. Bits can represent numbers directly.
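A small C# sketch (illustrative names) makes the difference concrete: the number 255 fits in one byte, while the string "255" needs three character codes:

using System;
using System.Text;

class NumberVsStringDemo
{
    static void Main()
    {
        byte asNumber = 255;   // one byte: the bit pattern 11111111
        string asText = "255"; // three characters: '2', '5', '5'

        Console.WriteLine(asText.Length);                          // 3
        Console.WriteLine(Encoding.ASCII.GetBytes(asText).Length); // 3 bytes in ASCII
    }
}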


255 = 2^8 − 1 = FF[hex] = 11111111[bin]
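C# lets you write that same bit pattern in all three notations; binary literals need C# 7.0 or later (the snippet below is just a sketch):

using System;

class LiteralsDemo
{
    static void Main()
    {
        byte b1 = 255;         // decimal
        byte b2 = 0xFF;        // hexadecimal
        byte b3 = 0b1111_1111; // binary (C# 7.0+)
        Console.WriteLine(b1 == b2 && b2 == b3); // True: one bit pattern, three spellings
    }
}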


The range of values for unsigned 8 bits is 0 to 255, so this is perfectly valid.

8 bits is not the same as one character in C#. In C#, a character is 16 bits. And even if a character were 8 bits, that would have no relevance to the main question.
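You can verify the sizes directly; a minimal sketch:

using System;

class SizeDemo
{
    static void Main()
    {
        Console.WriteLine(sizeof(byte)); // 1 -- 8 bits
        Console.WriteLine(sizeof(char)); // 2 -- a C# char is a 16-bit UTF-16 code unit
    }
}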


I think you're confusing character encoding with the actual integral value stored in the variable.

An 8-bit value can have 256 configurations, as answered by Arkain.
Optionally, in ASCII, each of those configurations represents a different ASCII character.
So, basically, it depends how you interpret the value: as an integer or as a character.

ASCII Table
Wikipedia on ASCII
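To make the "it depends how you interpret it" point concrete, here's a minimal C# sketch (the value 65 is just an example):

using System;
using System.Text;

class InterpretationDemo
{
    static void Main()
    {
        byte value = 65;

        Console.WriteLine(value); // 65 -- read as an integer
        Console.WriteLine(Encoding.ASCII.GetString(new[] { value })); // A -- read as ASCII
    }
}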


Sure, a bit late to answer, but for those who get here from a Google search, here we go...

Like others have said, a character is definitely different from an integer. Whether it's 8 bits or not is irrelevant, but I can help by simply stating how each one works:

for an 8-bit integer, a value range between 0 and 255 is possible (or -128 to 127 if it's signed: in two's complement, the first bit decides the sign; see the sketch at the end of this answer)

for an 8-bit character, it will most likely be an ASCII character, usually referenced by an index written in hexadecimal, e.g. FF or 0A. Because computers back in the day were only 8-bit, the result was a 16x16 table, i.e. 256 possible characters in the extended ASCII character set (standard ASCII itself only defines 128).

Either way, if the byte is 8 bits long, then both an ASCII code and an 8-bit integer will fit in the variable's data. I would recommend using a more dedicated data type for simplicity, though (e.g. char for ASCII or raw character data, int for general-purpose integers, which are 32-bit in C#).
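Here's the sketch mentioned above, comparing the unsigned and signed 8-bit ranges in C# (sbyte is C#'s signed 8-bit type):

using System;

class SignedVsUnsignedDemo
{
    static void Main()
    {
        Console.WriteLine($"{byte.MinValue} to {byte.MaxValue}");   // 0 to 255
        Console.WriteLine($"{sbyte.MinValue} to {sbyte.MaxValue}"); // -128 to 127
        // Both cover 256 bit patterns; only the interpretation differs.
    }
}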
