
Counting common bits in a sequence of unsigned longs

I am looking for a faster algorithm than the one below for the following task. Given a sequence of 64-bit unsigned integers, return a count of the number of times each of the sixty-four bits is set in the sequence.

Example:

4608 = 0000000000000000000000000000000000000000000000000001001000000000 
4097 = 0000000000000000000000000000000000000000000000000001000000000001
2048 = 0000000000000000000000000000000000000000000000000000100000000000

counts 0000000000000000000000000000000000000000000000000002101000000001

Example:

2560 = 0000000000000000000000000000000000000000000000000000101000000000
530  = 0000000000000000000000000000000000000000000000000000001000010010
512  = 0000000000000000000000000000000000000000000000000000001000000000

counts 0000000000000000000000000000000000000000000000000000103000010010

Currently I am using a rather obvious and naive approach:

static int bits = sizeof(ulong) * 8;

public static int[] CommonBits(params ulong[] values) {
    int[] counts = new int[bits];
    int length = values.Length;

    for (int i = 0; i < length; i++) {
        ulong value = values[i];
        for (int j = 0; j < bits && value != 0; j++, value = value >> 1) {
            counts[j] += (int)(value & 1UL);
        }
    }

    return counts;
}
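
For reference, calling this on the first example gives:

int[] counts = CommonBits(4608, 4097, 2048);
// counts[12] == 2, counts[11] == 1, counts[9] == 1, counts[0] == 1,
// all other entries 0.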


A small speed improvement might be achieved by first OR'ing all the integers together, then using the result to determine which bit positions need checking at all. You would still iterate over each bit, but a position that is zero in every value is skipped after a single test, rather than being tested values.Length times.
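
A minimal sketch of that idea, reusing the question's signature:

public static int[] CommonBits(params ulong[] values)
{
    int[] counts = new int[64];

    // A zero bit in the union means that position is zero in every
    // value, so it can be skipped with a single test.
    ulong union = 0;
    foreach (ulong v in values)
        union |= v;

    for (int j = 0; j < 64; j++)
    {
        if ((union & (1UL << j)) == 0)
            continue; // never set anywhere

        foreach (ulong v in values)
            counts[j] += (int)((v >> j) & 1UL);
    }

    return counts;
}

Note this only pays off when many of the 64 positions are never set; for dense inputs the extra OR pass is pure overhead.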


I'll direct you to the classic Bit Twiddling Hacks, but your goal seems slightly different from typical counting (you want a count per bit position rather than a single total, and your 'counts' output is an unusual format), though maybe it'll still be useful.


The best I can do here is just get silly with it and unroll the inner loop... that seems to have cut the running time in half (roughly 4 seconds as opposed to the 8 of yours to process 100 ulongs 100,000 times)... I used a quick command-line app to generate the following code:

for (int i = 0; i < length; i++)
{
    ulong value = values[i];
    if (0ul != (value & 1ul)) counts[0]++;
    if (0ul != (value & 2ul)) counts[1]++;
    if (0ul != (value & 4ul)) counts[2]++;
    //etc...
    if (0ul != (value & 4611686018427387904ul)) counts[62]++;
    if (0ul != (value & 9223372036854775808ul)) counts[63]++;
}

that was the best I can do... As per my comment, you'll waste some amount of performance (I know not how much) running this in a 32-bit environment. If you're that concerned about performance, it may benefit you to first convert the data to uint.

Tough problem... it may even benefit you to marshal it into C++, but that entirely depends on your application. Sorry I couldn't be more help; maybe someone else will see something I missed.

Update: a few more profiler sessions show a steady 36% improvement. Shrug. I tried.
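
As an aside, the quick generator app mentioned above might look something like this (a sketch, not the exact one used):

// Prints one unrolled test per bit position.
using System;

class UnrollGen
{
    static void Main()
    {
        for (int j = 0; j < 64; j++)
            Console.WriteLine("if (0ul != (value & {0}ul)) counts[{1}]++;",
                              1UL << j, j);
    }
}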


Ok, let me try again :D

Expand each byte of the 64-bit integer into a 64-bit integer of its own, in which bit n of the byte becomes the low bit of byte lane n.

For instance:

10110101 -> 0000000100000000000000010000000100000000000000010000000000000001 (use a lookup table for that translation)

Then just sum the expanded values together; each byte lane of the running sum accumulates the count for one bit position, and you end up with an array of unsigned chars holding the counts.

You only have to make 8*(number of 64-bit integers) additions.

Code in C:

#include <stdlib.h>

typedef unsigned long long int64; // must be unsigned so >> shifts in zeros
// LOOKUPTABLE is external: int64 LOOKUPTABLE[256], mapping each byte value
// to its expanded 64-bit form (bit n of the byte -> low bit of byte lane n)
extern int64 LOOKUPTABLE[256];

unsigned char* bitcounts(int64* int64array, int len)
{
    int64* array64;
    int64 tmp;
    array64 = (int64*)malloc(8 * sizeof(int64)); // 8 words = 64 one-byte counters
    for (int i = 0; i < 8; i++) array64[i] = 0;  // set to 0

    for (int j = 0; j < len; j++)
    {
        tmp = int64array[j];
        for (int i = 7; tmp; i--) // break early once the remaining bytes are zero
        {
            array64[i] += LOOKUPTABLE[tmp & 0xFF];
            tmp = tmp >> 8;
        }
    }
    return (unsigned char*)array64;
}

This reduces the running time compared to the naive implementation by roughly a factor of 8, because it counts 8 bits at a time.

EDIT:

I fixed the code to break out of the inner loop early for small integers, but I am still unsure about endianness. Also, this works only for up to 255 inputs, because the unsigned char counters overflow after 255. If you have a longer input sequence, you can change the code to hold counts up to 2^16 (16-bit counters), at roughly half the speed.
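
For reference, here is a sketch of how the 256-entry lookup table can be generated (shown in the question's C# with an illustrative name; the table contents are the same regardless of language):

// Bit n of the input byte becomes the low bit of byte lane n in the
// 64-bit result, so summing table entries accumulates one per-bit
// counter in each byte lane.
static ulong[] BuildLookupTable()
{
    var table = new ulong[256];
    for (int b = 0; b < 256; b++)
        for (int bit = 0; bit < 8; bit++)
            if ((b & (1 << bit)) != 0)
                table[b] |= 1UL << (bit * 8);
    return table;
}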


// Two passes: first histogram the byte values at each byte position,
// then expand each histogram into per-bit counts.
// values: the input array of 64-bit words; length: its element count.
const unsigned int BYTESPERVALUE = 64 / 8;
unsigned int bcount[BYTESPERVALUE][256];
memset(bcount, 0, sizeof bcount);
for (int i = length; --i >= 0; )
  for (int j = BYTESPERVALUE ; --j >= 0; ) {
    const unsigned int jth_byte = (values[i] >> (j * 8)) & 0xff;
    bcount[j][jth_byte]++; // count byte value (0..255) instances
  }

unsigned int count[64];
memset(count, 0, sizeof count);
for (int i = BYTESPERVALUE; --i >= 0; )
  for (int j = 256; --j >= 0; ) // check each byte value instance
    for (int k = 8; --k >= 0; ) // for each bit in a given byte
      if (j & (1 << k)) // if bit was set, then add its count
        count[i * 8 + k] += bcount[i][j];


Another approach that might be profitable would be to build an array of 256 elements that encodes the actions you need to take in incrementing the count array.

Here is a sample for a 4-element table, which handles 2 bits at a time instead of 8:

int bitToSubscript[4][3] =
{
    {0},       // No bits set
    {1,0},     // Bit 0 set
    {1,1},     // Bit 1 set
    {2,0,1}    // Bit 0 and bit 1 set
};             // row format: {count, subscript...}

The algorithm then degenerates to:

  • Pick the two right-hand bits off the number.
  • Use that as a small integer to index into the bitToSubscript array.
  • In that array, pull off the first integer. That is the number of entries in the count array that you need to increment.
  • Based on that count, iterate through the remainder of the row, incrementing count, based on the subscripts you pull out of the bitToSubscript array.
  • Once that loop is done, shift your original number two bits to the right... rinse and repeat as needed.

Now there is one issue I ignored in that description: the actual subscripts are relative. You need to keep track of where you are in the count array. Every time you loop, you add two to an offset, and to that offset you add the relative subscript from the bitToSubscript array. A sketch of this scheme follows below.
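
A minimal C# sketch of that 2-bit scheme (hypothetical; a real version would use a generated 256-entry table and step 8 bits at a time):

static readonly int[][] bitToSubscript =
{
    new[] { 0 },        // no bits set
    new[] { 1, 0 },     // bit 0 set
    new[] { 1, 1 },     // bit 1 set
    new[] { 2, 0, 1 },  // bit 0 and bit 1 set
};

public static int[] CommonBits(params ulong[] values)
{
    int[] counts = new int[64];
    foreach (ulong v in values)
    {
        ulong value = v;
        for (int offset = 0; value != 0; offset += 2, value >>= 2)
        {
            int[] row = bitToSubscript[(int)(value & 3)];
            int n = row[0];                // number of subscripts in this row
            for (int k = 1; k <= n; k++)
                counts[offset + row[k]]++; // subscripts are relative to offset
        }
    }
    return counts;
}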

It should be possible to scale up to the size you want, based on this small example. I would think that another program could be used to generate the source code for the bitToSubscript array, so that it can simply be hard-coded in your program.

There are other variations on this scheme, but I would expect it to run faster on average than anything that does it one bit at a time.

Good Hunting.

Evil.


I believe this should give a nice speed improvement:

  // Accumulates counts for four bit positions at a time in 4-bit (nibble)
  // lanes: accum0 covers bits 0,4,8,..., accum1 covers bits 1,5,9,..., and
  // so on. A nibble holds at most 15, so the lanes are flushed into
  // counts[] every 15 values to prevent overflow.
  const ulong mask = 0x1111111111111111; // every fourth bit set

  public static int[] CommonBits(params ulong[] values)
  {
    int[] counts = new int[64];

    ulong accum0 = 0, accum1 = 0, accum2 = 0, accum3 = 0;

    int i = 0;
    foreach( ulong v in values ) {
      if (i == 15) {
        // Flush each nibble lane into counts[]; the shifts leave the
        // accumulators at zero afterwards.
        for( int j = 0; j < 64; j += 4 ) {
          counts[j]   += ((int)accum0) & 15;
          counts[j+1] += ((int)accum1) & 15;
          counts[j+2] += ((int)accum2) & 15;
          counts[j+3] += ((int)accum3) & 15;
          accum0 >>= 4;
          accum1 >>= 4;
          accum2 >>= 4;
          accum3 >>= 4;
        }
        i = 0;
      }

      accum0 += (v)      & mask;
      accum1 += (v >> 1) & mask;
      accum2 += (v >> 2) & mask;
      accum3 += (v >> 3) & mask;
      i++;
    }

    // Final flush for the tail of fewer than 15 values.
    for( int j = 0; j < 64; j += 4 ) {
      counts[j]   += ((int)accum0) & 15;
      counts[j+1] += ((int)accum1) & 15;
      counts[j+2] += ((int)accum2) & 15;
      counts[j+3] += ((int)accum3) & 15;
      accum0 >>= 4;
      accum1 >>= 4;
      accum2 >>= 4;
      accum3 >>= 4;
    }

    return counts;
  }

Demo: http://ideone.com/eNn4O (needs more test cases)


http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetNaive

One of them:

unsigned int v; // count the number of bits set in v
unsigned int c; // c accumulates the total bits set in v
for (c = 0; v; c++)
{
  v &= v - 1; // clear the least significant bit set
}

Keep in mind that this method does one iteration per set bit, i.e. approximately O(log2(n)) in the worst case, where n is the number whose bits are being counted; for 10 (binary 1010) it needs only 2 iterations.

You could also take the method for counting 32 bits with 64-bit arithmetic and apply it to each half of the word, which would take about 2*15 + 4 instructions:

// option 3, for at most 32-bit values in v:
c =  ((v & 0xfff) * 0x1001001001001ULL & 0x84210842108421ULL) % 0x1f;
c += (((v & 0xfff000) >> 12) * 0x1001001001001ULL & 0x84210842108421ULL) % 0x1f;
c += ((v >> 24) * 0x1001001001001ULL & 0x84210842108421ULL) % 0x1f;

If you have an SSE4.2-capable processor you can use the POPCNT instruction. http://en.wikipedia.org/wiki/SSE4
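
On the .NET side, newer runtimes (.NET Core 3.0 and later) expose the hardware popcount as System.Numerics.BitOperations.PopCount; note it returns a single total per value, not the per-position counts asked for here:

using System.Numerics;

ulong v = 4608;
int total = BitOperations.PopCount(v); // 2, since 4608 has bits 9 and 12 set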

