
Profiling a Set implementation on 64-bit machines

Relevant information on my system: Core2Duo T6500, gcc (GCC) 4.4.1 20090725 (Red Hat 4.4.1-2).

Using the basic set implementation, where each stored set is really just a bit string whose bits follow the lexicographical order of the elements, you can use standard bit operations for set operations like Union, Intersection, elementQ, etc.
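
For concreteness, here is a minimal sketch of what such a bit-string representation might look like, assuming sets over elements 0..63 packed into a 64-bit integer (the names set_t, SET_UNION, etc. are illustrative, not taken from any particular library):

typedef unsigned long long set_t;   /* one bit per element, elements 0..63 */

#define SET_ADD(s, e)          ((s) | (1ULL << (e)))     /* add element e */
#define SET_ELEMENTQ(s, e)     (((s) >> (e)) & 1ULL)     /* is e a member? */
#define SET_UNION(a, b)        ((a) | (b))
#define SET_INTERSECTION(a, b) ((a) & (b))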

My question is about determining the size of the set. Implementations like Cliquer use a

static int set_bit_count[256]

to store how many bits are set in each possible 8-bit string; the algorithm then walks through the set 8 bits at a time to determine its size.
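
For reference, the table-driven counting might look roughly like this (a sketch only, assuming the table is filled in at startup; init_bit_count and popcount_table are hypothetical names, not Cliquer's actual code):

static int set_bit_count[256];   /* set_bit_count[i] == number of 1 bits in i */

static void init_bit_count(void) {
    int i;
    for (i = 1; i < 256; i++)
        set_bit_count[i] = (i & 1) + set_bit_count[i >> 1];
}

static int popcount_table(unsigned long long x) {
    int count = 0;
    while (x) {
        count += set_bit_count[x & 0xFF];   /* look up 8 bits at a time */
        x >>= 8;
    }
    return count;
}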

I have two problems with that way:

  1. If registers are more than 8x faster than cache or RAM, this would waste speed.
  2. On a 64-bit machine, aren't int operations slower than, say, operations on unsigned long long int, which I assume is the native integer type on 64-bit CPUs?

But I would imagine just using a simple

int count = 0;
while (x) {
    x &= x - 1;   /* clear the lowest set bit */
    ++count;
}

could be faster, as everything could be stored in registers. But on the downside, could something other than the obvious 8x as many operations make it slower?

Also, there are so many different combinations of int, uint, unsigned long, and unsigned long long that I have no idea where to start testing.

Do you know any essays on this topic?

Do you know any other SO questions on this topic?

Do you have any insights into this?

Do you have any suggestions on how to profile this? I've never used gprof, and when I use time.h I can't get granularity finer than one second.

I would be very grateful if you did.


Most likely (though I'm too lazy to test right now), the fastest would be

int popcount(unsigned x) {
    int count;
#if defined(__GNUC__)
    /* GCC: emit the POPCNT instruction directly */
    __asm__("popcnt %1, %0" : "=r" (count) : "r" (x));
#elif defined(_MSC_VER)
    /* MSVC 32-bit inline asm: POPCNT needs register operands */
    __asm {
        mov eax, x
        popcnt eax, eax
        mov count, eax
    };
#else
    /* blah, who cares */
    for (count = 0; x; count += x & 1, x >>= 1);
#endif
    return count;
}

(Though this will explode if the CPU doesn't support SSE4.2.) Of course, it would be better (and more portable) to use the compilers' built-in intrinsics, and in general I would trust the compiler to choose whatever implementation is best for the current target platform.

int popcount(unsigned x);
#if defined(__GNUC__)
# define popcount __builtin_popcount
#elif defined(_MSC_VER)
# include <intrin.h>          /* for __popcnt */
# define popcount __popcnt
#else
/* fallback implementation */
#endif


I would profile the two different implementations using a random number generator to create the bit patterns. I would loop over many iterations, accumulating something during each iteration (e.g., the exclusive-OR of the bit counts), which I'd print out at the end of the loop. The accumulating and printing are necessary so that the compiler doesn't optimize away anything of importance.
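
A minimal sketch of such a harness (illustrative only; popcount() stands for whichever implementation is being measured, and the buffer size and repeat count are arbitrary):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int popcount(unsigned x);   /* the implementation under test */

#define N    1000000
#define REPS 100

int main(void) {
    static unsigned data[N];
    unsigned acc = 0;
    clock_t start;
    double seconds;
    long i;
    int r;

    /* generate the random bit patterns outside the timed region */
    for (i = 0; i < N; i++)
        data[i] = ((unsigned)rand() << 16) ^ (unsigned)rand();

    start = clock();
    for (r = 0; r < REPS; r++)
        for (i = 0; i < N; i++)
            acc ^= (unsigned)popcount(data[i]);   /* XOR so the work isn't optimized away */
    seconds = (double)(clock() - start) / CLOCKS_PER_SEC;

    printf("acc=%u, %ld calls in %.3f s\n", acc, (long)N * REPS, seconds);
    return 0;
}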
