
Bit and byte operations

/* Bit Masking */
/* Bit masking can be used to switch a character between lowercase and uppercase */
#define BIT_POS(N) ( 1U << (N) )
#define SET_FLAG(N, F) ( (N) |= (F) )
#define CLR_FLAG(N, F) ( (N) &= -(F) )
#define TST_FLAG(N, F) ( (N) & (F) )
#define BIT_RANGE(N, M) ( BIT_POS((M)+1 - (N))-1 << (N) )
#define BIT_SHIFTL(B, N) ( (unsigned)(B) << (N) )
#define BIT_SHIFTR(B, N) ( (unsigned)(B) >> (N) )
#define SET_MFLAG(N, F, V) ( CLR_FLAG(N, F), SET_FLAG(N, V) )
#define CLR_MFLAG(N, F) ( (N) &= ~(F) )
#define GET_MFLAG(N, F) ( (N) & (F) )

#include <stdio.h>
int main(void)
{
    unsigned char ascii_char = 'A'; /* char = 8 bits only */
    int test_nbr = 10;
    printf("Starting character = %c\n", ascii_char);
    /* The 5th bit position determines if the character is
    uppercase or lowercase.
    5th bit = 0 - Uppercase
    5th bit = 1 - Lowercase */
    printf("\nTurn 5th bit on = %c\n", SET_FLAG(ascii_char, BIT_POS(5)) );
    printf("Turn 5th bit off = %c\n\n", CLR_FLAG(ascii_char, BIT_POS(5)) );
    printf("Look at shifting bits\n");
    printf("=====================\n");
    printf("Current value = %d\n", test_nbr);
    printf("Shifting one position left = %d\n",
           test_nbr = BIT_SHIFTL(test_nbr, 1) );
    printf("Shifting two positions right = %d\n",
           BIT_SHIFTR(test_nbr, 2) );
    return 0;
}

In the above code, what does the U mean in

#define BIT_POS(N) ( 1U << (N) )

Also, the above program compiles fine and the output is:

Starting character = A 

Turn 5th bit on = a  
Turn 5th bit off = ` 

Look at shifting bits
=====================
Current value = 10
Shifting one position left = 20
Shifting two positions right = 5

but when the 5th bit is turned off, the result should be A instead of ` (ASCII 96). Please clarify. Thank you.


The U means that the 1 is an unsigned int constant, as opposed to a plain (signed) int.
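A minimal standalone sketch of why the suffix matters (assuming the common 32-bit int): with a plain signed 1, shifting a bit into the sign position is undefined behavior in C, while the unsigned form is well-defined.

#include <stdio.h>

int main(void)
{
    /* Assuming 32-bit int: 1 << 31 would shift into the sign bit,
       which is undefined behavior for a signed operand.
       1U << 31 is well-defined unsigned arithmetic. */
    unsigned int high_bit = 1U << 31;   /* 0x80000000 */
    printf("high bit = %#x\n", high_bit);
    return 0;
}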


As for the other problem, I'd speculate that the - in CLR_FLAG is causing it; try using ~ (bitwise NOT) instead. I have not tested this, though.


The problem is in this line:

#define CLR_FLAG(N, F) ( (N) &= -(F) )

The CLR_FLAG macro performs a bitwise AND of N and -F, that is, N and minus F. What you really want is the bitwise one's complement of F:

#define CLR_FLAG(N, F) ( (N) &= ~(F) )

Note that it now uses ~F; the ~ operator performs a bitwise NOT.
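A quick demonstration of the difference (a small standalone sketch, not part of the original program): with F = BIT_POS(5) = 0x20, two's-complement negation gives -0x20 = ...1110 0000, which wipes out bits 0 through 4 as well, while ~0x20 = ...1101 1111 clears only bit 5. That is exactly why 'a' (0x61) came out as ` (0x60) instead of A (0x41).

#include <stdio.h>

int main(void)
{
    unsigned char c = 'a';      /* 0x61 = 0110 0001 */
    unsigned int  f = 1U << 5;  /* 0x20 = 0010 0000 */

    /* -f is the two's complement of f: ...1110 0000, clearing
       bits 0-4 along with bit 5.  ~f is ...1101 1111, clearing
       only bit 5. */
    printf("c & -f = %c (0x%02x)\n", c & -f, c & -f);  /* ` (0x60) */
    printf("c & ~f = %c (0x%02x)\n", c & ~f, c & ~f);  /* A (0x41) */
    return 0;
}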


If you actually want to change a character between uppercase and lowercase, you should not do it this way. Use toupper()/tolower() instead. Those routines make your intent much clearer, and they take into account any locale-specific differences in what counts as an uppercase or lowercase character. A small example follows.
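A quick sketch using the standard <ctype.h> routines:

#include <stdio.h>
#include <ctype.h>

int main(void)
{
    /* tolower()/toupper() take an int whose value must be
       representable as an unsigned char (or be EOF), hence the
       unsigned char type here. */
    unsigned char c = 'A';
    printf("tolower('%c') = %c\n", c, tolower(c));
    printf("toupper('%c') = %c\n", 'a', toupper('a'));
    return 0;
}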
