Bit field manipulation: setting a bit
#include <stdio.h>

int main()
{
    /* a signed bit field 3 bits wide */
    struct s {
        int bit_fld : 3;
    };
    struct s a;
    a.bit_fld = 0x10;
    a.bit_fld = (a.bit_fld | (1 << 2));
    printf("%x\n", a.bit_fld);
    return 0;
}
This program outputs fffffffc.
I tried to calculate the output by hand, but I could not get the output the compiler produced.
bit_fld = 00010000 and (1<<2) = 0100; ORing the two should give 00010100, which is 0x14 in hexadecimal.
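For comparison, with a plain int (no bit field) the same steps do give 0x14; a quick sketch:

#include <stdio.h>

int main(void)
{
    int plain = 0x10;           /* illustrative variable, not from the struct above */
    plain = plain | (1 << 2);   /* 0x10 | 0x04 = 0x14 */
    printf("%x\n", plain);      /* prints 14 */
    return 0;
}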
Why is my understanding of the output wrong? Help me understand where I'm mistaken.
a.bit_fld is only 3 bits big, so it can't store the value 0x10. The behavior is implementation-defined, but in this case it has probably stored 0.
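A minimal sketch of that truncation, assuming the implementation simply keeps the low 3 bits of the assigned value:

#include <stdio.h>

int main(void)
{
    /* 0x10 is binary 10000; a 3-bit field can only keep the low 3 bits. */
    int value = 0x10;
    int low3  = value & 0x7;               /* 10000 & 111 = 000 */
    printf("low 3 bits of 0x10 = %d\n", low3);  /* prints 0 */
    return 0;
}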
Then 1 << 2 is binary 100, as you say. Assuming we did store 0 in the first step, the result of (a.bit_fld | (1 << 2)) is an int with value 4 (binary 100).
In a signed 2's complement 3-bit representation, that bit pattern represents the value -4, so it's not at all surprising that -4 is what you get when you store the value 4 into a.bit_fld, although again this is implementation-defined.
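A hedged sketch of reading the 3-bit pattern 100 as a signed two's complement value, which mirrors what such an implementation likely does when 4 is stored into the signed 3-bit field:

#include <stdio.h>

int main(void)
{
    int stored = 4;                          /* bit pattern 100 in 3 bits */
    /* If bit 2 (the sign bit of a 3-bit field) is set, subtract 2^3. */
    int value = (stored & 0x4) ? stored - 8 : stored;
    printf("%d\n", value);                   /* prints -4 */
    return 0;
}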
In the printf call, a.bit_fld is promoted to int before being passed as a vararg. The 2's complement 32-bit representation of -4 is 0xfffffffc, which is what you see.
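You can see the same bit pattern explicitly by converting -4 to unsigned int (assuming a 32-bit int), which is well-defined and sidesteps the %x-on-int issue discussed next:

#include <stdio.h>

int main(void)
{
    int n = -4;
    /* Conversion to unsigned adds 2^32, giving 0xfffffffc for a 32-bit int. */
    printf("%x\n", (unsigned int)n);   /* prints fffffffc */
    return 0;
}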
It's also undefined behavior to pass an int instead of an unsigned int to printf for the %x format. It's not surprising that it appears to work, though: for varargs in general there are certain circumstances where it's valid to pass an int and read it as an unsigned int. printf isn't one of them, but an implementation isn't going to go out of its way to stop it from appearing to work.
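If the intent is to treat the field as a small unsigned value, here is a sketch of one way to avoid both surprises (field names follow the question; the cast sidesteps the %x/int mismatch):

#include <stdio.h>

struct s {
    unsigned int bit_fld : 3;   /* unsigned, so it holds values 0..7 */
};

int main(void)
{
    struct s a;
    a.bit_fld = 0x10 & 0x7;                   /* explicitly keep only the low 3 bits: 0 */
    a.bit_fld = a.bit_fld | (1u << 2);        /* set bit 2 */
    printf("%x\n", (unsigned int)a.bit_fld);  /* prints 4 */
    return 0;
}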