Managing changes in a memory-based data format
So I've been using a compact data type in C++, and saving it to a file or loading it back involves just copying the bits of memory in and out.
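For concreteness, the raw dump I mean looks roughly like this (Data here is just an illustrative, trivially copyable struct):

#include <cstdio>

struct Data { // any trivially copyable struct works this way
    int id;
    float value;
    char name[32];
};

void save(const Data& d, std::FILE* f) {
    std::fwrite(&d, sizeof d, 1, f); // dump the raw bytes
}

void load(Data& d, std::FILE* f) {
    std::fread(&d, sizeof d, 1, f); // copy them straight back in
}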
However, the obvious drawback of this is that if you need to add or remove fields in the data, it becomes kind of messy. There are also problems with versioning: suppose you distribute a program which uses version A of the data, then the next day you make version B of it, and later on version C.
I suppose this could be solved by using something like XML or JSON, but suppose you can't do that for technical reasons.
What is the best way to handle this, apart from writing a separate if case for each version (which I'd imagine would get pretty ugly)?
I don't know what your "technical reasons" are, but if they involve speed or data size then I might suggest Protocol Buffers as your solution: it's explicitly designed to handle versioning. It will be slightly slower and slightly larger than simply dumping a struct, but only slightly, and it will be far more portable and tolerant of format changes.
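As a rough sketch of what that might look like in C++ (the Person message and person.pb.h are assumptions for illustration; they would be generated from a .proto schema you define, e.g. message Person { string name = 1; int32 age = 2; }):

#include <string>
#include "person.pb.h" // hypothetical header generated by protoc

std::string save(const Person& p) {
    std::string bytes;
    p.SerializeToString(&bytes); // compact binary wire format
    return bytes;
}

Person load(const std::string& bytes) {
    Person p;
    // fields added in later schema versions are skipped by old readers,
    // and fields missing from old data simply keep their defaults
    p.ParseFromString(bytes);
    return p;
}

Version B then just adds a new numbered field to the .proto; data written by either version stays readable by the other.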
An idea that comes from 3ds Max (if I remember correctly): divide the file into chunks, where each chunk has a header (a long, say) identifying it, followed by its length. When reading, if you do not recognize a header, you skip to the next chunk using the length. This process applies recursively inside each chunk and ensures backward compatibility.
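Here is a minimal sketch of such a reader; the chunk layout (a 32-bit id followed by a 32-bit payload length) is an assumption for illustration, and real formats such as RIFF differ in the details:

#include <cstdint>
#include <fstream>
#include <vector>

void readChunks(std::ifstream& in) { // expects a stream opened in binary mode
    std::uint32_t id = 0, len = 0;
    while (in.read(reinterpret_cast<char*>(&id), sizeof id) &&
           in.read(reinterpret_cast<char*>(&len), sizeof len)) {
        std::vector<char> payload(len);
        in.read(payload.data(), payload.size()); // consume exactly len bytes
        switch (id) {
        // case SOME_KNOWN_ID: parse payload here; break;
        default:
            break; // unknown chunk: the length already let us step over it
        }
    }
}

Because every chunk carries its own length, an old reader steps over chunks added by a newer writer without ever mis-parsing them.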
If you go the "column-oriented" way, then you can add fields as you like.
original struct, the old way:

#include <string>
#include <vector>

struct Person {
    std::string name;
    int age;
};
std::vector<Person> People; // single file holds every field of every record
adding a field, the old way:

struct Person {
    std::string name;
    int age;
    std::string address; // now every existing file on disk must be rewritten
};
new and improved way:
namespace People {
    std::vector<std::string> name; // first file
    std::vector<int> age;          // second file
}
adding to the new way:
namespace People {
    std::vector<std::string> name;
    std::vector<int> age;
    std::vector<std::string> address; // add a third file, leave the other two alone
}
The gist is that each field is its own file. The added benefit is that a user only needs to read/write the fields he wants, so versioning becomes easier.
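A minimal sketch of one of those per-field files, assuming a flat binary file per field (the file names are illustrative; variable-length fields such as the strings would additionally need a per-element length prefix):

#include <fstream>
#include <vector>

void saveAges(const std::vector<int>& age) {
    std::ofstream out("people.age", std::ios::binary);
    out.write(reinterpret_cast<const char*>(age.data()),
              age.size() * sizeof(int));
}

std::vector<int> loadAges() {
    std::vector<int> age;
    std::ifstream in("people.age", std::ios::binary);
    int a = 0;
    while (in.read(reinterpret_cast<char*>(&a), sizeof a))
        age.push_back(a);
    return age;
}

Adding address later just means writing a new people.address file; people.name and people.age on disk never change, and readers that don't know about addresses keep working.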
We deal with this at my work. It's not the best situation, but here are some things you can do:
add a header to all files, with the first field being "version" and the second field being "length". On load you can then deal with old versions appropriately.
if you can, make the rule "never delete data fields, always add new fields at the end". If you do this, your loading code can load an old, shorter file by reading only the data that is available into the struct and leaving the trailing fields (the ones that weren't in the file) at their initialized defaults. This falls apart once you start having arrays of structs; at that point you need to load the data manually.
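A minimal sketch combining the two rules, assuming a trivially copyable record whose fields are only ever appended (the field names and types are illustrative):

#include <algorithm>
#include <cstdint>
#include <fstream>

struct Record {
    std::int32_t a = 0;
    std::int32_t b = 0;
    std::int32_t c = 0; // appended in a later version; old files lack it
};

bool loadRecord(std::ifstream& in, Record& r) {
    std::int32_t version = 0, length = 0;
    in.read(reinterpret_cast<char*>(&version), sizeof version);
    in.read(reinterpret_cast<char*>(&length), sizeof length);
    if (!in) return false;
    // read only the bytes the file actually contains; newer trailing
    // fields keep their default-initialized values
    const std::int32_t n = std::min(
        length, static_cast<std::int32_t>(sizeof(Record)));
    in.read(reinterpret_cast<char*>(&r), n);
    // version is also available here for any format-specific fix-ups
    return static_cast<bool>(in);
}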
This is how I would hack at it. It's a "hacky" approach, but it could be extended into something more sophisticated.
#include <cstdint>

struct block
{
    // stuff here of a fixed length
};

struct file
{
    std::int32_t fileversion; // different parsers, one for each file version
    int offsetlen;            // length of each block in bytes
    int numblocks;            // number of blocks
    int* fileoffsets;         // internal offsets, one per block
    block* blocklist;         // the blocks themselves
};
To write a file with fixed-size blocks, the algorithm would be something like this:
// assuming an std::ofstream out, opened in binary mode
out.write(reinterpret_cast<const char*>(&f.fileversion), sizeof f.fileversion);
out.write(reinterpret_cast<const char*>(&f.offsetlen), sizeof f.offsetlen);
out.write(reinterpret_cast<const char*>(&f.numblocks), sizeof f.numblocks);
for (int i = 0; i < f.numblocks; ++i)
    out.write(reinterpret_cast<const char*>(&f.blocklist[i]), f.offsetlen); // each block is offsetlen bytes
and to read:
// assuming an std::ifstream in, opened in binary mode
in.read(reinterpret_cast<char*>(&f.fileversion), sizeof f.fileversion);
in.read(reinterpret_cast<char*>(&f.offsetlen), sizeof f.offsetlen);
in.read(reinterpret_cast<char*>(&f.numblocks), sizeof f.numblocks);
f.blocklist = new block[f.numblocks];
for (int i = 0; i < f.numblocks; ++i)
    in.read(reinterpret_cast<char*>(&f.blocklist[i]), f.offsetlen);
With heterogeneous blocks, you will need to track the offsets as you read.
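A minimal sketch of that heterogeneous case, under the assumption that the header is followed by numblocks + 1 offsets (so block i spans offsets[i] to offsets[i + 1]) and then the raw block data:

#include <cstdint>
#include <fstream>
#include <vector>

std::vector<std::vector<char>> readBlocks(std::ifstream& in) {
    std::int32_t fileversion = 0, numblocks = 0;
    in.read(reinterpret_cast<char*>(&fileversion), sizeof fileversion);
    in.read(reinterpret_cast<char*>(&numblocks), sizeof numblocks);

    // one extra offset marks the end of the last block
    std::vector<std::int32_t> offsets(numblocks + 1);
    in.read(reinterpret_cast<char*>(offsets.data()),
            offsets.size() * sizeof(std::int32_t));

    std::vector<std::vector<char>> blocks(numblocks);
    for (std::int32_t i = 0; i < numblocks; ++i) {
        blocks[i].resize(offsets[i + 1] - offsets[i]); // per-block length
        in.seekg(offsets[i]);
        in.read(blocks[i].data(), blocks[i].size());
    }
    return blocks;
}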