Interpreting hex values?
This might be very basic or even silly to experts here, but I wanted to get my head around this. Most of the time I write hex values like this in C:
unsigned int a = 0xFFFF1232;
Let's say I am trying to extract the first and last 16-bits then I can simply do:
unsigned short high = a >> 16; // Gives me 0xFFFF
unsigned short low = a & 0x0000FFFF; // Gives me 0x00001232 which is then stored as 0x1232
In some of the code I am reading I have come across the following:
unsigned short high = a >> 16;
unsigned short low = a & 0xFFFF;
I have two questions:

- When you are ANDing a 32-bit value with a mask, why do people write 0xFFFF instead of 0x0000FFFF? Is it just to keep it compact?
- Is it always safe to write 0x0000FFFF as 0xFFFF? Is it interpreted differently in any context?
They're completely synonymous. Leaving out the leading zeros makes it a little more readable.
They are identical.
And, you're making an assumption that ints are always 32 bits long. If your platform happened to use 64-bit ints, would you write it like this?
unsigned short low = a & 0x000000000000FFFF; // ouch. Did I count them right?
And there's another reason why you shouldn't waste time putting in leading zeroes: you'll try to do it with decimals next, which is a Bad Idea:
int x = 00000377;
printf("%d\n", x); // 255! WTF.... (the leading zero makes it an octal literal)
In your example low is a short (typically 16 bits), so the leading zeroes are not only redundant, they suggest that a 32-bit result is expected. Since the upper bits are discarded anyway, dropping them arguably makes the intent of the code clearer.
In fact, in this case
unsigned short low = a;
would suffice, though it is perhaps less clear.
Further, you should not assume particular integer widths; use the <stdint.h> types instead:
uint32_t a = 0xFFFF1232;
uint16_t high = a >> 16;
uint16_t low = a & 0xFFFF;
If you are using VC++, which does not supply that header, you can use this implementation.
Something like 0x1 is called a literal; it defaults to type int. An int is not necessarily 32 bits, depending on the compiler or platform, so yes, there is a context in which it is not safe to leave off the leading zeroes.
For example, in embedded systems it is common to encounter int types that are only 16 bits wide. If you want a wider integer, you need to use long int. In that case,
unsigned int a = 0xFFFF1234;
silently truncates the value to 16 bits: a compiler warning at best, an undetected bug at worst. You would need to use
unsigned long int a = 0xFFFF1234L;
instead. (A hex literal too large for int is automatically given the next wider type that fits, but the L suffix makes the intended width explicit.)
In your case 0x0000FFFF and 0xFFFF are identical.
This would not be the case if your variables weren't unsigned: a 16-bit value of 0xFFFF means -1, which is 0xFFFFFFFF in 32 bits.