
What is the correct way to output hex data to a file?

I've read about [ostream] << hex << 0x[hex value], but I have some questions about it

(1) I defined my file stream, output, to be a hex output file stream using output.open("BWhite.bmp", ios::binary);. Since I did that, does that make the hex parameter in the output << operation redundant?

(2) If I have an integer value I wanted to store in the file, and I used this:

int i = 0;
output << i;

would i be stored in little endian or big endian? Will the endianness change based on which computer the program is executed or compiled on?

Does the size of this value depend on the computer it's run on? Would I need to use the hex parameter?

(3) Is there a way to output raw hex digits to a file? If I want the file to have the hex digit 43, what should I use?

output << 0x43 and output << hex << 0x43 both output ASCII 4, then ASCII 3.

The purpose of outputting these hex digits is to make the header for a .bmp file.


The formatted output operator << is for just that: formatted output. It's for strings.

As such, the std::hex stream manipulator tells streams to output numbers as strings formatted as hex.

If you want to output raw binary data, use the unformatted output functions only, e.g. basic_ostream::put and basic_ostream::write.

You could output an int like this:

int n = 42;
output.write(reinterpret_cast<const char*>(&n), sizeof(n));

The endianness of this output will depend on the architecture. If you wish to have more control, I suggest the following:

int32_t n = 42;
char data[4];
data[0] = static_cast<char>(n & 0xFF);
data[1] = static_cast<char>((n >> 8) & 0xFF);
data[2] = static_cast<char>((n >> 16) & 0xFF);
data[3] = static_cast<char>((n >> 24) & 0xFF);
output.write(data, 4);

This sample will output a 32 bit integer as little-endian regardless of the endianness of the platform. Be careful converting that back if char is signed, though.
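Going the other way, here is a minimal sketch (the function name read_le32 is just for illustration) that reads those four bytes back as a little-endian 32 bit value; going through unsigned char avoids the sign-extension problem when plain char is signed:

#include <cstdint>
#include <istream>

// Read a 32 bit little-endian integer back from the stream.
int32_t read_le32(std::istream& in)
{
    char data[4];
    in.read(data, 4);
    uint32_t b0 = static_cast<unsigned char>(data[0]);
    uint32_t b1 = static_cast<unsigned char>(data[1]);
    uint32_t b2 = static_cast<unsigned char>(data[2]);
    uint32_t b3 = static_cast<unsigned char>(data[3]);
    return static_cast<int32_t>(b0 | (b1 << 8) | (b2 << 16) | (b3 << 24));
}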


You say

"Is there a way to output raw hex digits to a file? If I want the file to have the hex digit 43, what should I use? "

"Raw hex digits" will depend on the interpretation you do on a collection of bits. Consider the following:

 Binary  :    0 1 0 0 1 0 1 0
 Hex     :    4 A
 Octal   :    1 1 2
 Decimal :    7 4
 ASCII   :    J

All of the above represent the same numeric quantity; we just interpret it differently.

So you simply need to store the data in binary format, that is, the exact bit pattern that represents the number.

EDIT1

When you open a file in text mode and write a number to it, say 74 (as in the example above), it will be stored as the two ASCII characters '7' and '4'. To avoid this, open the file in binary mode with ios::binary and write it with write(). Check http://courses.cs.vt.edu/~cs2604/fall00/binio.html#write
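A small sketch of the difference, assuming out is an already open std::ofstream:

int n = 74;

// Formatted output: writes the two ASCII characters '7' and '4'.
out << n;

// Unformatted output: writes the sizeof(int) raw bytes that hold the value 74.
out.write(reinterpret_cast<const char*>(&n), sizeof(n));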


The purpose of outputting these hex digits is to make the header for a .bmp file.

You seem to have a large misconception of how files work.

The stream operator << generates text (human readable output). The .bmp file format is a binary format that is not human readable (well, it is, but it's not nice and I would not read it without tools).

What you really want to do is generate binary output and place it in the file:

char   x = 0x43;
output.write(&x, sizeof(x));

This will write one byte of data with the hex value 0x43 to the output stream. This is the binary representation you want.
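For example, a .bmp file starts with the two signature bytes 'B' and 'M' (0x42, 0x4D); a minimal sketch of writing them to your stream would be:

char signature[2] = { 0x42, 0x4D };   // 'B', 'M' -- the BMP file signature
output.write(signature, sizeof(signature));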

would i be stored in little endian or big endian? Will the endianness change based on which computer the program is executed or compiled on?

Neither; you are again outputting text (not binary data).

int i = 0;
output.write(reinterpret_cast<char*>(&i), sizeof(i)); // Writes the binary representation of i

Here you do need to worry about the endianness (and size) of the integer value, and these will vary depending on the hardware your application runs on. For the value 0 there is not much to worry about with endianness, but you should still worry about the size of the integer.

I would stick some asserts into my code to validate that the architecture is OK for the code, and let people worry about it if their architecture does not match the requirements:

int test = 0x12345678;
assert((sizeof(test) * CHAR_BIT == 32) && "BMP uses 32 bit ints");
assert((((char*)&test)[0] == 0x78) && "BMP uses little endian");

There is a family of functions that will help you with endianness and size.

http://www.gnu.org/s/hello/manual/libc/Byte-Order.html

Function: uint32_t htonl (uint32_t hostlong)
This function converts the uint32_t integer hostlong from host byte order to network byte order.

// Writing to a file
uint32_t hostValue = 0x12345678;
uint32_t network   = htonl(hostValue);
output.write(reinterpret_cast<const char*>(&network), sizeof(network));

// Reading from a file (input is an open std::ifstream)
uint32_t network;
input.read(reinterpret_cast<char*>(&network), sizeof(network));
uint32_t hostValue = ntohl(network);    // convert back to host byte order.

// Unfortunately the BMP format was designed with Intel in mind,
// and thus all its integers are little-endian.
// Network byte order (as used by htonl() and family) is big-endian,
// so this may not be of much use to you.
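If you want the same convenience for little-endian output, here is a hedged sketch of a hand-rolled helper (write_le32 is an illustrative name, not a standard function) that produces the same bytes on any host:

#include <cstdint>
#include <ostream>

// Write a 32 bit value as four little-endian bytes, regardless of host byte order.
void write_le32(std::ostream& out, uint32_t value)
{
    char data[4];
    data[0] = static_cast<char>( value        & 0xFF);
    data[1] = static_cast<char>((value >>  8) & 0xFF);
    data[2] = static_cast<char>((value >> 16) & 0xFF);
    data[3] = static_cast<char>((value >> 24) & 0xFF);
    out.write(data, 4);
}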

Last thing. When you open a file in binary mode, output.open("BWhite.bmp", ios::binary), it does nothing to the stream apart from how it treats the end-of-line sequence. When the file is in binary mode the output is not modified (what you put in the stream is what is written to the file).

If you leave the stream in text mode, then '\n' characters are converted to the end-of-line sequence (an OS-specific set of characters that marks the end of a line). Since you are writing a binary file you definitely do not want any interference with the characters you write, so binary is the correct mode. But it does not affect any other operation that you perform on the stream.
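To illustrate (the file names here are made up): writing the single byte '\n' behaves differently depending on the mode, but only on platforms whose end-of-line sequence is more than one character (e.g. Windows):

#include <fstream>

std::ofstream text_out("demo.txt");                   // text mode (default)
std::ofstream bin_out("demo.bin", std::ios::binary);  // binary mode

text_out.put('\n');   // on Windows this is translated to the two bytes 0x0D 0x0A
bin_out.put('\n');    // always written as the single byte 0x0A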
