
Splitting a char array into a sequence of ints and floats

I'm writing a program in C++ to listen to a stream of TCP messages from another program that provides tracking data from a webcam. I have the socket connected and I'm getting all the information in, but I'm having difficulty splitting it up into the data I want.

Here's the format of the data coming in:

8 byte header: 4 character string, integer

32 byte message: integer, float, float, float, float, float

This is all being put into a char array called buffer. I need to be able to parse the different bytes out into the primitives I need. I have tried making smaller sub-arrays, such as headerString, filled by looping through and copying the first 4 elements of the buffer array, and I do get the correct header ('CCV ') printed out. But when I try the same thing with the next four elements (to get the integer) and print it out, I get weird ASCII characters. I've tried converting the headerInt array to an integer with the atoi method from stdlib.h, but it always prints out zero.

I've already done this in Python using the excellent unpack method; is there any alternative in C++?

Any help greatly appreciated,

Jordan

Links

CCV packet structure

Python unpack method


The buffer only contains the raw image of what you read over the network. You'll have to convert the bytes in the buffer to whatever format you want. The string is easy (sOffset here is the byte offset where the string starts in the buffer):

std::string s(buffer + sOffset, 4);

(Assuming, of course, that the internal character encoding is the same as the one used on the wire, probably an extension of ASCII.)

The others are more complicated, and depend on the format of the external data. From the description of the header, I gather that the integers are four bytes, but that still doesn't tell me anything about their representation. Depending on the byte order, either:

int getInt(unsigned char* buffer, int offset)
{
    return (buffer[offset    ] << 24)
        |  (buffer[offset + 1] << 16)
        |  (buffer[offset + 2] <<  8)
        |  (buffer[offset + 3]      );
}

or

int getInt(unsigned char* buffer, int offset)
{
    return (buffer[offset + 3] << 24)
        |  (buffer[offset + 2] << 16)
        |  (buffer[offset + 1] <<  8)
        |  (buffer[offset    ]      );
}

will probably do the trick. (Other four-byte representations of integers are possible, but they are exceedingly rare. Similarly, the conversion of the unsigned result of the shifts and ORs into an int is implementation defined, but in practice the above will work almost everywhere.)
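
As a concrete (and hypothetical) usage fragment, assuming the sender writes its integers in network byte order and that buffer is the char array from the question, the big-endian variant above would extract the header fields roughly like this, inside whatever function reads the stream:

// Sketch only: offsets follow the 8-byte header described in the question.
unsigned char* bytes = reinterpret_cast<unsigned char*>(buffer);
std::string tag(buffer, 4);          // the 4-character string, e.g. "CCV "
int headerValue = getInt(bytes, 4);  // the integer in bytes 4..7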

The only hint you give concerning the representation of the floats is in the message format: 32 bytes, minus a 4-byte integer, leaves 28 bytes for 5 floats; but 5 doesn't divide 28 evenly, so I cannot even guess at the length of the floats (except that there must be some padding in there somewhere). And converting floating point can be more or less complicated if the external format isn't exactly like the internal format.
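
If the floats do turn out to be 4-byte IEEE 754 values (common, but not guaranteed by the description above), a sketch along these lines would convert them. getFloat is a hypothetical helper, and the wire order is assumed least-significant-byte first; swap the shifts if the sender uses network order:

#include <cstdint>
#include <cstring>

// Assumes the sender transmits 32-bit IEEE 754 floats, least significant
// byte first. Reassemble the bit pattern, then copy it into a float;
// memcpy avoids the aliasing problems of a pointer cast.
float getFloat(unsigned char* buffer, int offset)
{
    std::uint32_t bits = (std::uint32_t(buffer[offset + 3]) << 24)
                       | (std::uint32_t(buffer[offset + 2]) << 16)
                       | (std::uint32_t(buffer[offset + 1]) <<  8)
                       |  std::uint32_t(buffer[offset    ]);
    float value;
    std::memcpy(&value, &bits, sizeof value);
    return value;
}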


Something like this may work:

struct Header {
    char string[4];
    int integers[2];
    float floats[5];
};

Header* header = (Header*)buffer;

You should check that sizeof(Header) == 32.
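
One caveat worth adding (not part of the original answer): reading through the pointer cast above technically breaks strict aliasing and relies on the compiler inserting no padding. A memcpy into the struct is a slightly safer sketch under the same layout and byte-order assumptions:

#include <cstring>

Header header;                                // the struct defined above
std::memcpy(&header, buffer, sizeof header);  // copy the raw bytes in
// header.string, header.integers[0], header.floats[0], ... are now usable,
// but only if the sender's byte order and float format match this machine's.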

