Endianness manipulation - is there a C library for this?

With the sort of programs I write (working with raw file data) I often need functions to convert between big and little endian. Usually I write these myself (which is covered by many other posts here) but I'm not that keen on doing this for a number of reasons - the main one being lack of testing. I don't really want to spend ages testing my code in a big endian emulator, and often just omit the code for big endian machines altogether. I also would rather make use of faster functions provided by various compilers, while still keeping my programs cross-platform.

The only things I can find are socket calls like htons(), but they require different #include files on each platform, and some GPL code like this; however, that particular file, while comprehensive, seems to miss some of the high-performance functions provided by certain compilers.

So, does anyone know of a library (ideally just a .h file) that is well tested and provides a standard set of functions for dealing with endianness across many compilers and platforms?


There have been a number of proposals for a Boost class (for C++, at least) to do exactly that over the last decade, but none have ever come to fruition, unfortunately.

I'm not aware of any better generalized solution than the htons() function set.
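If you do stick with that family, the per-platform #include problem is usually handled with a small conditional block; a minimal sketch (the headers and the Winsock link requirement are standard, the wrapper function name is just an illustration):

#include <stdint.h>

#ifdef _WIN32
#include <winsock2.h>    /* htonl()/ntohl() on Windows; link against ws2_32 */
#else
#include <arpa/inet.h>   /* htonl()/ntohl() on POSIX systems */
#endif

/* Convert a 32-bit value from host order to network (big-endian) order. */
uint32_t to_network_order(uint32_t host_value)
{
    return htonl(host_value);
}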


On Linux, there's <endian.h>:

http://man7.org/linux/man-pages/man3/htole32.3.html

I'd be interested to learn if other operating systems support it as well.
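For example, here's a minimal sketch of pulling a little-endian 32-bit field out of a raw buffer with those conversions (on glibc they require the _DEFAULT_SOURCE feature-test macro):

#define _DEFAULT_SOURCE   /* glibc feature-test macro required for <endian.h> */
#include <endian.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Four bytes as they might appear in a little-endian file format. */
    const unsigned char raw[4] = { 0x78, 0x56, 0x34, 0x12 };

    uint32_t stored;
    memcpy(&stored, raw, sizeof stored);   /* copy avoids aliasing problems */

    uint32_t value = le32toh(stored);      /* little-endian -> host order */
    printf("0x%08x\n", (unsigned)value);   /* prints 0x12345678 on any host */

    uint32_t out = htole32(value);         /* host order -> little-endian */
    (void)out;                             /* ready to be written back out */
    return 0;
}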


It's easiest just not to write endian-dependent code in the first place. You should never care what the endianness of the system you're running on actually is; the only thing that matters is the mandated endianness of any external data you're reading or writing. Rather than converting between big- and little-endian values, convert from a specific endianness to the host endianness, and you can write that code in an endian-agnostic way that's (almost) completely portable:

For example, suppose you're reading a 32-bit big-endian (or little-endian) integer from a file stream:

#include <stdint.h>
#include <stdio.h>

/*
 * Read a 32-bit big-endian integer from a file stream and return it in
 * host order. Note that callers should check feof(fp) afterward to
 * verify that there was enough data to read.
 */
uint32_t GetBE32(FILE* fp)
{
    uint32_t result;
    /* Cast each byte before shifting so the shift is performed on an
       unsigned 32-bit value (shifting a byte into the sign bit of a
       plain int is undefined behavior). */
    result  = (uint32_t)fgetc(fp) << 24;
    result |= (uint32_t)fgetc(fp) << 16;
    result |= (uint32_t)fgetc(fp) <<  8;
    result |= (uint32_t)fgetc(fp);
    return result;
}

/* Read a 32-bit little-endian integer from a file stream. */
uint32_t GetLE32(FILE* fp)
{
    uint32_t result;
    result  = (uint32_t)fgetc(fp);
    result |= (uint32_t)fgetc(fp) <<  8;
    result |= (uint32_t)fgetc(fp) << 16;
    result |= (uint32_t)fgetc(fp) << 24;
    return result;
}

(I say "(almost) completely portable" because it does assume that there are 8 bits per byte. But if you're on a system where that isn't true, you're probably going to have bigger issues when dealing with external data.)
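The same approach works in reverse when writing; a minimal sketch of a hypothetical PutBE32 counterpart, mirroring GetBE32 above:

/* Write a 32-bit value to a file stream in big-endian order,
 * regardless of the host's byte order. */
void PutBE32(FILE* fp, uint32_t value)
{
    fputc((value >> 24) & 0xFF, fp);
    fputc((value >> 16) & 0xFF, fp);
    fputc((value >>  8) & 0xFF, fp);
    fputc( value        & 0xFF, fp);
}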


For what it's worth...

Like the OP, I often need byte-ordering-aware routines for shuffling data among different machines and protocols. (In my case, I need them for embedded processors rather than for big iron.)

After several iterations, I've posted an endian library written in pure C to Github. What it lacks in documentation it makes up for in comprehensive unit testing.

https://github.com/rdpoor/endian

endian's main departure from most byte-ordering libraries is that it doesn't presume a byte-at-a-time read or write function, but works directly on void * memory buffers. This gives the compiler the freedom to optimize where it can, and when the desired byte order matches the host machine's, it short-circuits the byte shuffling altogether.
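As an illustration of that buffer-oriented idea (a hypothetical sketch of the technique, not the library's actual API):

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Detect the host byte order at run time; compilers typically fold
 * this to a constant. */
static bool host_is_little_endian(void)
{
    const uint16_t probe = 1;
    return *(const unsigned char *)&probe == 1;
}

/* Read a 32-bit value from an arbitrary memory buffer, byte-swapping
 * only when the buffer's order differs from the host's. */
uint32_t read_u32(const void *buf, bool buf_is_little_endian)
{
    uint32_t v;
    memcpy(&v, buf, sizeof v);   /* plain copy; the compiler can optimize it */
    if (buf_is_little_endian != host_is_little_endian()) {
        v = (v >> 24) | ((v >> 8) & 0x0000FF00u)
          | ((v << 8) & 0x00FF0000u) | (v << 24);
    }
    return v;
}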
