
What type should I use for a bitset type in C?

I have to define a bitset type to build bit arrays. Bitwise operations like AND/OR/XOR between those arrays (to compare them, for example) are the predominant operations. What type should I use as the bitset element type?

I think the type should be the widest NON-SIMULATED type the compiler can handle. That is, if a compiler emulates a 64-bit type (because the machine or the OS does not support it, for example), hiding compound operations behind what looks like a simple AND, then a 32-bit type should be used instead. How can I determine this?

A few more questions:

The new C99 <stdint.h> header defines some exact-width integer types, for which:

"These are of the form intN_t and uintN_t. Both types must be represented by exactly N bits with no padding bits. intN_t must be encoded as a two's complement signed integer and uintN_t as an unsigned integer. These types are optional unless the implementation supports types with widths of 8, 16, 32 or 64, then it shall typedef them to the corresponding types with corresponding N. Any other N is optional."

So I think that checking whether the implementation has a 64-bit type is a first step, right?
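
Something like this minimal sketch is what I have in mind (bitset_word and BITSET_WORD_BITS are just names I made up; this only tells me the type exists, not whether it is emulated):

#include <stdint.h>

/* UINT64_MAX is only defined by <stdint.h> when uint64_t exists,
   so it can be used to detect the optional 64-bit type. */
#if defined(UINT64_MAX)
typedef uint64_t bitset_word;
#define BITSET_WORD_BITS 64
#else
typedef uint32_t bitset_word;
#define BITSET_WORD_BITS 32
#endif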

My project uses the SDL library, which defines these types depending on a macro:

#ifdef SDL_HAS_64BIT_TYPE
typedef int64_t     Sint64;
#ifndef SYMBIAN32_GCCE
typedef uint64_t    Uint64;
#endif
#else
/* This is really just a hack to prevent the compiler from complaining */
typedef struct {
    Uint32 hi;
    Uint32 lo;
} Uint64, Sint64;
#endif

So perhaps I could make my bitset type definition depend on that macro (not optimal, however, since I would like to keep my code SDL-independent). Roughly the sketch below.
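
Again, bitset_word is just my placeholder name; SDL.h pulls in the Uint32/Uint64 typedefs:

#include "SDL.h"

/* Fall back to 32 bits when SDL says the 64-bit type is missing or faked. */
#ifdef SDL_HAS_64BIT_TYPE
typedef Uint64 bitset_word;
#else
typedef Uint32 bitset_word;
#endif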

What are your opinions on this?


There isn't an easy way to determine the maximum size non-emulated integer type, and most people don't bother to try. You can take either of two approaches, both of which work.

  1. Decide that you will go with 32-bit integers because they are available everywhere.
  2. Decide that you will have a configuration macro (not necessarily the one from SDL) which controls whether you use a 32-bit or 64-bit (or 16-bit, or 128-bit) data type. You specify the 'right' value when you configure your build (see the sketch after this list).
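
A sketch of option 2, using a hypothetical BITSET_WORD_BITS macro that you set at configure/build time (e.g. -DBITSET_WORD_BITS=64):

#include <stdint.h>

/* Default to 32 bits (option 1); override from the build system. */
#ifndef BITSET_WORD_BITS
#define BITSET_WORD_BITS 32
#endif

#if BITSET_WORD_BITS == 64
typedef uint64_t bitset_word;
#elif BITSET_WORD_BITS == 32
typedef uint32_t bitset_word;
#elif BITSET_WORD_BITS == 16
typedef uint16_t bitset_word;
#else
#error "Unsupported BITSET_WORD_BITS"
#endif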

If you want to detect emulated vs native arithmetic, you would probably run some timing tests on a test program that multiplies values of various widths.
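
For example, a crude (and decidedly unscientific) harness along these lines; the constants and loop count are arbitrary, and volatile is only there to keep the optimizer from deleting the loops:

#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define ITERATIONS 100000000u

static double time_mul32(void) {
    volatile uint32_t acc = 1;
    clock_t t0 = clock();
    for (uint32_t i = 1; i <= ITERATIONS; i++)
        acc = acc * 2654435761u + i;              /* 32-bit multiply */
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

static double time_mul64(void) {
    volatile uint64_t acc = 1;
    clock_t t0 = clock();
    for (uint32_t i = 1; i <= ITERATIONS; i++)
        acc = acc * 0x9E3779B97F4A7C15u + i;      /* 64-bit multiply */
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void) {
    printf("32-bit: %.3fs  64-bit: %.3fs\n", time_mul32(), time_mul64());
    /* A 64-bit loop that is many times slower than the 32-bit one
       suggests the 64-bit arithmetic is emulated in software. */
    return 0;
}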


Traditional Unix practice, for signal and fd sets, is to use unsigned long (historically it may even have been plain long, but using signed types for bitsets is a very bad idea and almost surely leads to undefined behavior). While some ancient 16-bit machines may have had long larger than the system word size, I think you'll have a very hard time finding any modern machine with that problem. On the other hand, 64-bit Windows has a 32-bit long, so you wouldn't get the optimal type there, but you might as well just live with it for simplicity.
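
For illustration, the usual word-array pattern (the helper names are mine, not from any standard header):

#include <limits.h>
#include <stddef.h>

#define BITS_PER_WORD (sizeof(unsigned long) * CHAR_BIT)
#define BITSET_WORDS(nbits) (((nbits) + BITS_PER_WORD - 1) / BITS_PER_WORD)

/* Set and test a single bit in an array of unsigned long words. */
static void bit_set(unsigned long *set, size_t n) {
    set[n / BITS_PER_WORD] |= 1UL << (n % BITS_PER_WORD);
}

static int bit_test(const unsigned long *set, size_t n) {
    return (set[n / BITS_PER_WORD] >> (n % BITS_PER_WORD)) & 1UL;
}

/* Word-at-a-time AND of two equally sized sets, the kind of
   operation the question cares about. */
static void bit_and(unsigned long *dst, const unsigned long *a,
                    const unsigned long *b, size_t nwords) {
    for (size_t i = 0; i < nwords; i++)
        dst[i] = a[i] & b[i];
}

Declaring unsigned long set[BITSET_WORDS(1000)] = {0}; then gives you a zeroed 1000-bit set.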

Another approach, if you have C99 stdint.h at your disposal, is to use uintptr_t. That's almost surely the machine word size.

Yet another approach, which definitely wins on simplicity, is just to always use bytes.

Finally, note that on little endian systems, the representation in memory will be identical regardless of what type size you use.
