
Do bitwise operators (other than shifts) make any mathematical sense in base-10?

According to Wikipedia, shifts can be used to multiply and divide by powers of 2:

A left arithmetic shift by n is equivalent to multiplying by 2^n (provided the value does not overflow), while a right arithmetic shift by n of a two's complement value is equivalent to dividing by 2^n and rounding toward negative infinity.

I was always wondering whether any of the other bitwise operators (~, |, &, ^) make mathematical sense when applied to base-10. I understand how they work, but can the results of such operations be used to calculate anything useful in the decimal world?


"yep base-10 is what I mean"

In that case, yes, they can be extended to base-10 in several ways, though they aren't nearly as useful as in binary.

One idea is that &, |, etc. are the same as doing arithmetic mod-2 to the individual binary digits. If a and b are single binary-digits, then

a & b = a * b (mod 2)
a ^ b = a + b (mod 2)
   ~a = 1-a   (mod 2)
a | b = ~(~a & ~b) = 1 - (1-a)*(1-b) (mod 2)

The equivalents in base-10 would be (note again these are applied per-digit, not to the whole number)

a & b = a * b (mod 10)
a ^ b = a + b (mod 10)
   ~a = 9-a   (mod 10)
a | b = ~(~a & ~b) = 9 - (9-a)*(9-b) (mod 10)

The first three are useful when designing circuits which use BCD (~a being the 9's complement), such as non-graphing calculators, though we just use * and + rather than & and ^ when writing the equations. The first is also apparently used in some old ciphers.
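
For illustration, here is a minimal C sketch of those per-digit base-10 analogues; the function names dec_and, dec_xor and dec_not are made up for this example:

#include <stdio.h>

/* Per-digit base-10 analogues of &, ^ and ~, following the mod-10 rules above. */
unsigned dec_and(unsigned a, unsigned b) {      /* each digit: (a*b) mod 10 */
    unsigned result = 0, place = 1;
    while (a || b) {
        result += ((a % 10) * (b % 10) % 10) * place;
        a /= 10;  b /= 10;  place *= 10;
    }
    return result;
}

unsigned dec_xor(unsigned a, unsigned b) {      /* each digit: (a+b) mod 10 */
    unsigned result = 0, place = 1;
    while (a || b) {
        result += ((a % 10 + b % 10) % 10) * place;
        a /= 10;  b /= 10;  place *= 10;
    }
    return result;
}

unsigned dec_not(unsigned a, unsigned digits) { /* 9's complement over a fixed width */
    unsigned result = 0, place = 1;
    for (unsigned i = 0; i < digits; i++) {
        result += (9 - a % 10) * place;
        a /= 10;  place *= 10;
    }
    return result;
}

int main(void) {
    printf("%u\n", dec_and(345, 27)); /* digits 3*0, 4*2, 5*7 -> 085 -> 85 */
    printf("%u\n", dec_xor(345, 27)); /* digits 3+0, 4+2, 5+7 -> 362 */
    printf("%u\n", dec_not(345, 3));  /* 9's complement of 345 -> 654 */
    return 0;
}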


A fun trick to swap two integers without a temporary variable is by using bitwise XOR:

void swap(int &a, int &b) {
   a = a ^ b;
   b = b ^ a; // b now holds the original a
   a = a ^ b; // a now holds the original b
}

This works because XOR is associative and is its own inverse (x ^ x = 0 and x ^ 0 = x), so b ^ (a ^ b) = a. Note that it fails if a and b refer to the same object, since the first XOR zeroes it.


Yes, there are other useful operations, but they tend to be oriented towards operations involving powers of 2 (for obvious reasons), e.g. test for odd/even, test for power of 2, round up/down to nearest power of 2, etc.

See Hacker's Delight by Henry S. Warren.
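
For instance, a few of the classic tricks of that kind, as a rough C sketch:

#include <stdint.h>
#include <stdio.h>

int is_odd(uint32_t n)  { return n & 1; }                        /* lowest bit set? */
int is_pow2(uint32_t n) { return n != 0 && (n & (n - 1)) == 0; } /* exactly one bit set? */

uint32_t round_down_8(uint32_t n) { return n & ~(uint32_t)7; }   /* nearest lower multiple of 8 */

uint32_t next_pow2(uint32_t n) {  /* round up to a power of two by smearing the top bit down */
    n--;
    n |= n >> 1;  n |= n >> 2;  n |= n >> 4;
    n |= n >> 8;  n |= n >> 16;
    return n + 1;
}

int main(void) {
    printf("%d %d %u %u\n", is_odd(7), is_pow2(64), round_down_8(29), next_pow2(37));
    /* prints: 1 1 24 64 */
    return 0;
}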


In every language I've used (admittedly, almost exclusively C and C-derivatives), the bitwise operators are exclusively integer operations (unless, of course, you overload them).

While you can twiddle the bits of a floating-point number (it has bits of its own, after all), doing so won't necessarily give you the same kind of result as twiddling the bits of an integer. See Single Precision and Double Precision for descriptions of the bit layout of floating-point numbers, and see Fast Inverse Square Root for an example of bit twiddling a floating-point number to advantage.
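
As a small illustration of what twiddling a float's bits looks like, here is a sketch that copies the bytes of a float into an integer and picks apart the IEEE 754 single-precision fields (the inverse-square-root hack begins with the same kind of reinterpretation):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    float f = -1.5f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);  /* well-defined way to reinterpret the bytes */

    uint32_t sign     =  bits >> 31;          /* 1 bit */
    uint32_t exponent = (bits >> 23) & 0xFF;  /* 8 bits, biased by 127 */
    uint32_t mantissa =  bits & 0x7FFFFF;     /* 23 bits of fraction */

    printf("sign=%u exponent=%u mantissa=0x%06X\n", sign, exponent, mantissa);
    /* -1.5f -> sign=1 exponent=127 mantissa=0x400000 */
    return 0;
}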

EDIT

For integral numbers, bitwise operations always make sense; they are designed for integers.

n << 1 == n * 2
n << 2 == n * 4
n << 3 == n * 8

n >> 1 == n / 2
n >> 2 == n / 4
n >> 3 == n / 8   // for non-negative n; signed right shift and division round differently

n & 1 ∈ {0, 1}             // isolates the lowest bit
n & 2 ∈ {0, 2}             // isolates bit 1
n & 3 ∈ {0, 1, 2, 3}       // isolates the two lowest bits

n | 1 ∈ {n, n + 1}               // forces the lowest bit on
n | 2 ∈ {n, n + 2}               // forces bit 1 on
n | 3 ∈ {n, n + 1, n + 2, n + 3} // forces the two lowest bits on

And so on.
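
A quick sanity check of those identities for one sample value (a minimal sketch):

#include <assert.h>
#include <stdio.h>

int main(void) {
    unsigned n = 13;

    assert((n << 1) == n * 2);
    assert((n << 3) == n * 8);
    assert((n >> 1) == n / 2);
    assert((n >> 3) == n / 8);

    printf("%u %u %u\n", n & 1, n & 3, n | 3);  /* prints: 1 1 15 */
    return 0;
}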


You can calculate logarithms using just bitwise operators...

Finding the exponent of n = 2**x using bitwise operations [logarithm in base 2 of n]
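
One minimal way to do that, counting right shifts until the value reaches 1 (a sketch, assuming n > 0):

#include <stdio.h>

unsigned ilog2(unsigned n) {  /* floor(log2(n)) for n > 0 */
    unsigned x = 0;
    while (n >>= 1)           /* shift right until the leading bit has been consumed */
        x++;
    return x;
}

int main(void) {
    printf("%u %u %u\n", ilog2(1), ilog2(64), ilog2(100));  /* prints: 0 6 6 */
    return 0;
}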


You can sometime substitute bitwise operations for boolean operations. For example, the following code:

if ((a < 0) && (b < 0))
{
  // do something
}

In C this can be replaced by:

if ((a & b) < 0)
{
  // do something
}

This works because one bit in an integer is used as the sign bit (1 indicates negative). The result of (a & b) is otherwise a meaningless number, but its sign bit is the AND of the two sign bits, so checking the sign of the result does what you want.

This may or may not help performance. Doing two boolean tests/branches will be worse on a number of architectures and compilers, but modern x86 compilers can probably generate a single branch using some of the newer instructions even with the normal syntax.

As always, if it does result in a performance increase, comment the code: put the "normal" way of doing it in a comment and note that the replacement is equivalent but faster.

Likewise, ~, | and ^ can be used in a similar way if all the conditions are of the form (x < 0). For comparison conditions you can generally use subtraction:

if ((a < b) | (b < c))
{
}

becomes:

if (((a-b) | (b-c)) < 0)
{
}

because a-b will be negative only if a is less than b. There can be issues with this one if you get within a factor of 2 of max int - i.e. arithmetic overflow, so be careful.

These are valid optimizations in some cases, but otherwise quite useless. And to get really ugly, floating point numbers also have sign bits... ;-)

EXAMPLE: As an example, let's say you want to take action depending on the relative order of a, b and c. You can write nested if/else constructs, or you can do this:

x = ((a < b) << 2) | ((b < c) << 1) | (c < a);
switch (x) { ... }

I have used this in code with up to 9 conditions, and also with the subtraction trick mentioned above plus extra logic to isolate the sign bits instead of a less-than. It was faster than the branching equivalent. However, the subtraction and sign-bit extraction are no longer necessary, because the standard has long specified that a true comparison evaluates to 1, and with conditional moves and the like an ordinary less-than can be quite efficient these days.
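
For concreteness, a sketch of what that dispatch might look like; the cases and their actions here are just placeholders:

#include <stdio.h>

/* Encode the relative order of a, b, c in a 3-bit value:
   bit 2 = (a < b), bit 1 = (b < c), bit 0 = (c < a). */
void act_on_order(int a, int b, int c) {
    int x = ((a < b) << 2) | ((b < c) << 1) | (c < a);
    switch (x) {
        case 6:  printf("a < b < c (strictly increasing)\n"); break;  /* 110 */
        case 3:  printf("b < c < a (b is the smallest)\n");   break;  /* 011 */
        case 5:  printf("c < a < b\n");                       break;  /* 101 */
        default: printf("ties or another ordering (code %d)\n", x);   break;
    }
}

int main(void) {
    act_on_order(1, 2, 3);  /* -> code 6 */
    act_on_order(3, 1, 2);  /* -> code 3 */
    return 0;
}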
