Reading 32-bit caches in a 64-bit environment
We have a lot of caches that were built on 32-bit machines which we now have to read in a 64-bit environment. We get a segmentation fault when we try to open and read a cache file.
It would take weeks to regenerate the caches, so I would like to know how we can still process our 32-bit cache files on 64-bit machines.
Here's the code that we use to read and write our caches:
bool IntArray::fload(const char* fname, long offset, long _size){
long size = _size * sizeof(long);
long fd = open(fname, O_RDONLY);
if ( fd >0 ){
struct stat file_status;
if ( stat(fname, &file_status) == 0 ){
if ( offset < 0 || offset > file_status.st_size ){
std::__throw_out_of_range("offset out of range");
return false;
}
if ( size + offset > file_status.st_size ){
std::__throw_out_of_range("read size out of range");
return false;
}
void *map = mmap(NULL, file_status.st_size, PROT_READ, MAP_SHARED, fd, offset);
if (map == MAP_FAILED) {
close(fd);
std::__throw_runtime_error("Error mmapping the file");
return false;
}
this->resize(_size);
memcpy(this->values, map, size);
if (munmap(map, file_status.st_size) == -1) {
close(fd);
std::__throw_runtime_error("Error un-mmapping the file");
return false;
/* Decide here whether to close(fd) and exit() or not. Depends... */
}
close(fd);
return true;
}
}
return false;
}
bool IntArray::fsave(const char* fname){
long fd = open(fname, O_WRONLY | O_CREAT, 0644); //O_TRUNC
if ( fd >0 ){
long size = this->_size * sizeof(long);
long r = write(fd,this->values,size);
close(fd);
if ( r != size ){
std::__throw_runtime_error("Error writing the file");
}
return true;
}
return false;
}
From the line:
long size = this->_size * sizeof(long);
I assume that values points to an array of long. On most OSes except Windows, long is 32 bits in a 32-bit build and 64 bits in a 64-bit build.
You should read your file as a dump of 32-bit values (int32_t, for instance) and then copy them into long. And you should probably version your file format so that you know which logic to apply when reading.
As a matter of fact, designing a file format instead of just dumping memory will prevent this kind of issue (endianness, padding, and floating-point format are other issues that will arise if you aim for even slightly wider portability than the program which wrote the file; padding especially can change with compiler releases and compilation flags).
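A minimal sketch of that read-side idea, reading the file as raw 32-bit values and widening each to the platform's long. The function name load32 and the plain-fread approach are illustrative, not from the poster's code:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Sketch: read a file that was written as a dump of 32-bit integers and
// widen each entry to long, regardless of the platform's sizeof(long).
std::vector<long> load32(const char *fname, size_t count) {
    std::vector<long> out;
    FILE *f = std::fopen(fname, "rb");
    if (!f)
        return out; // empty vector signals failure in this sketch

    std::vector<int32_t> raw(count);
    size_t got = std::fread(raw.data(), sizeof(int32_t), count, f);
    std::fclose(f);

    out.reserve(got);
    for (size_t i = 0; i < got; i++)
        out.push_back(static_cast<long>(raw[i])); // widen 32-bit -> native long
    return out;
}
```

On an LP64 system this reads the 32-bit cache correctly even though sizeof(long) is 8; a version marker in the file header would let the same loader pick between the 32-bit and a future 64-bit layout.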
You need to change the memory layout of this->values (whatever type that may be; you don't mention that crucial information) on the 64-bit machines so that it becomes identical to the layout used by the 32-bit machines.
You might need compiler tricks like struct packing or similar to do that, and if this->values happens to contain classes, you will have a lot of pain with the internal class pointers the compiler generates.
BTW, does C++ have proper explicitly sized integer types yet? #include <cstdint>?
You've fallen foul of using long as a 32-bit data type ... which it is not, at least on UN*X systems, in 64-bit builds (LP64 data model: int is 32-bit but long and pointers are 64-bit).
On Windows 64 (IL32P64 data model: int and long 32-bit but pointers 64-bit) your code, which performs size calculations in units of sizeof(long) and directly memcpy()s from the mapped file into the object's array, would actually continue to work.
On UN*X, this means that when migrating to 64-bit, to keep your code portable, it'd be a better idea to switch to an explicitly-sized int32_t (from <stdint.h>) to make sure your data structure layout remains the same for both 32-bit and 64-bit target compiles.
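To illustrate that point: if the element type is pinned to int32_t, the layout cannot drift between data models. The struct name Int32Array below is a stand-in for the poster's class, not its real definition:

```cpp
#include <cstdint>

// Sketch: fixing the element type at int32_t keeps the on-disk layout
// identical for 32-bit (ILP32) and 64-bit (LP64/IL32P64) builds.
struct Int32Array {
    int32_t *values; // elements are always exactly 4 bytes wide
    long _size;      // element count (in-memory only, never written to disk)
};

// Compile-time guarantee: the element size never depends on the data model.
static_assert(sizeof(int32_t) == 4, "int32_t must be exactly 32 bits");
```

With this layout the original memcpy()/write() approach keeps working unchanged on both architectures, because the serialized bytes no longer depend on sizeof(long).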
If you insist on keeping the long, then you'd have to change the internalization / externalization of the array from a simple memcpy() / write() to doing things differently. Sans error handling (you have that already above), it'd look like this for the ::fsave() method, instead of using write() as you do:
long *array = this->values;
size_t bytes = this->_size * sizeof(int32_t);
ftruncate(fd, bytes); /* the file must be large enough before mapping for write */
int32_t *filebase = static_cast<int32_t *>(
    mmap(NULL, bytes, PROT_WRITE, MAP_SHARED, fd, 0));
for (long i = 0; i < this->_size; i++) {
    if (array[i] > INT32_MAX || array[i] < INT32_MIN)
        throw std::bad_cast(); /* value doesn't fit in 32 bits */
    filebase[i] = static_cast<int32_t>(array[i]);
}
munmap(filebase, bytes);
and for the ::fload() you'd do the following instead of the memcpy():
long *array = this->values;
size_t bytes = this->_size * sizeof(int32_t);
int32_t *filebase = static_cast<int32_t *>(
    mmap(NULL, bytes, PROT_READ, MAP_SHARED, fd, offset));
for (long i = 0; i < this->_size; i++)
    array[i] = filebase[i]; /* widen each 32-bit value to native long */
munmap(filebase, bytes);
Note: As has already been mentioned, this approach will fail if you've got anything more complex than a simple array, because apart from data type size differences there might be different alignment restrictions and different padding rules as well. That seems not to be the case for you, so only keep it in mind if you ever consider extending this mechanism (don't; use a tested library like boost::any or Qt::Variant that can externalize/internalize).