Can you force a crash if a write occurs to a given memory location with finer than page granularity?
I'm writing a program that for performance reasons uses shared memory (sockets and pipes have been evaluated as alternatives and are not fast enough for my task; generally speaking, any IPC method that involves copies is too slow). In the shared memory region I am writing many structs of a fixed size. One program is responsible for writing the structs into shared memory, and many clients read from it. However, there is one member of each struct that clients need to write to (a reference count, which they will update atomically). All of the other members should be read-only to the clients.
Because clients need to change that one member, they can't map the shared memory region as read only. But they shouldn't be tinkering with the other members either, and since these programs are written in C++, memory corruption is possible. Ideally, it should be as difficult as possible for one client to crash another. I'm only worried about buggy clients, not malicious ones, so imperfect solutions are allowed.
I can try to stop clients from overwriting by declaring the members as const in the header they use, but that won't prevent memory corruption (buffer overflows, bad casts, etc.) from overwriting them. I can insert canaries, but then I have to constantly pay the cost of checking them.
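For illustration, the client-side header trick might look like the sketch below (field names and types are made up); the writer would use the same layout without the const qualifiers:

#include <atomic>
#include <cstdint>

// Hypothetical client-side view of a record: everything clients must not
// touch is const-qualified, so accidental direct writes fail to compile.
// Nothing here stops a stray pointer from scribbling over the fields at
// run time, which is exactly the limitation described above.
struct RecordView {
    const uint64_t        id;
    const uint64_t        payload[6];
    std::atomic<uint32_t> refCount;   // the one field clients may update
};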
Instead of storing the reference count member directly, I could store a pointer to the actual count, which would live in a separately mapped writable page, while keeping the structs in read-only mapped pages. This will work; the OS will force my application to crash if I write to the pointed-to data. But indirect storage can be undesirable when writing lock-free algorithms, because needing to follow another level of indirection can change whether something can be done atomically.
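Concretely, that indirect layout might look like this (names and sizes are made up); the extra dereference on refCount is what gets in the way of operating atomically on the record itself:

#include <atomic>
#include <cstdint>

// Records live in pages the clients map read-only; the counters live in a
// separate read-write region. Every ref-count update pays one extra
// dereference, and the counter no longer sits next to the data it protects.
struct IndirectRecord {
    uint64_t               id;
    uint64_t               payload[6];
    std::atomic<uint32_t> *refCount;   // points into the writable region
};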
Is there any way to mark smaller areas of memory such that writing to them will cause your app to blow up? Some platforms have hardware watchpoints, and maybe I could activate one of those with inline assembly, but I'd be limited to only four at a time on 32-bit x86, and each one could only cover part of the struct because they're limited to 4 bytes. It'd also make my program painful to debug ;)
Edit: I found this rather eye-popping paper, but unfortunately it requires ECC memory and a modified Linux kernel.
I don't think it's possible to make a few bits read-only like that at the OS level.
One thing that occurred to me just now: you could put the reference counts in a different page, as you suggested. If the structs are all the same size and sit in sequential memory locations, you can use pointer arithmetic to locate a reference count from a structure's pointer rather than storing a pointer inside the structure. That may suit your use case better than the extra indirection.
long *refCountersBase;   // start address of the ref-counter page
MyStruct *structsBase;   // start address of the structures page

// Get the address of the reference counter for a given struct.
long *getRefCounter(MyStruct *myStruct)
{
    size_t n = myStruct - structsBase;   // index of the struct in its region
    return refCountersBase + n;          // counter at the same index
}
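To make this concrete, here is a minimal sketch of how a client could set up the two regions with different protections, assuming POSIX shared memory and that the writer has already created two objects with the hypothetical names "/structs" and "/refcounts". It fills in the structsBase and refCountersBase globals from the snippet above; error handling is omitted:

#include <cstddef>
#include <fcntl.h>
#include <sys/mman.h>

void mapRegions(size_t structBytes, size_t counterBytes)
{
    int sfd = shm_open("/structs",   O_RDONLY, 0);
    int cfd = shm_open("/refcounts", O_RDWR,   0);

    // A stray write into this mapping faults immediately.
    structsBase = static_cast<MyStruct *>(
        mmap(nullptr, structBytes, PROT_READ, MAP_SHARED, sfd, 0));

    // Only the counters are writable.
    refCountersBase = static_cast<long *>(
        mmap(nullptr, counterBytes, PROT_READ | PROT_WRITE, MAP_SHARED, cfd, 0));
}

A client can then bump a counter atomically, for example with the GCC/Clang builtin __atomic_add_fetch(getRefCounter(p), 1, __ATOMIC_ACQ_REL).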
You would need to add a signal handler for SIGSEGV that recovers from the fault, but only for certain addresses. A starting point might be http://www.opengroup.org/onlinepubs/009695399/basedefs/signal.h.html and the corresponding documentation for your OS.
Edit: I believe what you want is to perform the write and return if the faulting address is actually OK, and to tail-call the previous signal handler (the pointer you get back when you install your own) if you want to propagate the fault. I'm not experienced in these things, though.
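A rough sketch of that kind of handler follows. The predicate isRecoverableAddress and the recovery step are only placeholders; recovering safely from SIGSEGV is very platform-specific:

#include <signal.h>
#include <cstdlib>

static struct sigaction previousAction;   // handler installed before ours

// Hypothetical predicate over the faulting address.
bool isRecoverableAddress(void *addr);

static void onSegv(int sig, siginfo_t *info, void *context)
{
    if (isRecoverableAddress(info->si_addr)) {
        // Recover here, e.g. by temporarily making the page writable with
        // mprotect before returning; simply returning without fixing the
        // cause just re-faults on the same instruction.
        return;
    }
    // Not our address: forward to the previous handler, or die.
    if (previousAction.sa_flags & SA_SIGINFO)
        previousAction.sa_sigaction(sig, info, context);
    else if (previousAction.sa_handler == SIG_DFL ||
             previousAction.sa_handler == SIG_IGN)
        abort();
    else
        previousAction.sa_handler(sig);
}

void installSegvHandler()
{
    struct sigaction sa = {};
    sa.sa_sigaction = onSegv;
    sa.sa_flags = SA_SIGINFO;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, &previousAction);
}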
I have never heard of enforcing read-only at finer than page granularity, so you may be out of luck in that direction unless you can put each struct on two pages. If you can afford two pages per struct, you can put the ref count on one of the pages and make the other read-only.
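A sketch of that two-page layout, assuming 4 KiB pages (the field names and padding are illustrative only):

#include <atomic>
#include <cstdint>
#include <sys/mman.h>

constexpr size_t kPageSize = 4096;   // assumed page size

// One record spans exactly two pages: a payload page that clients can
// protect read-only, followed by a writable page holding only the count.
struct alignas(kPageSize) TwoPageRecord {
    // --- page 0: payload ---
    uint64_t id;
    uint64_t payload[6];
    char     pad0[kPageSize - 7 * sizeof(uint64_t)];
    // --- page 1: the one writable field ---
    std::atomic<long> refCount;
    char     pad1[kPageSize - sizeof(std::atomic<long>)];
};
static_assert(sizeof(TwoPageRecord) == 2 * kPageSize, "layout assumption");

// In a client, the payload page of a record rec could then be locked down:
//   mprotect(rec, kPageSize, PROT_READ);   // rec is page-aligned via alignas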
You could write an API rather than just use headers. Forcing clients to use the API would remove most corruption issues.
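For example, the client-facing API could hand out only const access to the payload while exposing a narrow call for the reference count (names and layout are hypothetical):

#include <atomic>
#include <cstdint>

// Hypothetical record type living in shared memory.
struct Record {
    uint64_t          id;
    uint64_t          payload[6];
    std::atomic<long> refCount;
};

// Clients only ever see a handle, not raw pointers into the region.
class RecordHandle {
public:
    explicit RecordHandle(Record *r) : rec_(r) {}
    const Record &data() const { return *rec_; }   // read-only view
    void addRef()  { rec_->refCount.fetch_add(1, std::memory_order_acq_rel); }
    void release() { rec_->refCount.fetch_sub(1, std::memory_order_acq_rel); }
private:
    Record *rec_;
};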
Keeping the data with the reference count rather than on a different page will help with locality of data and so improve cache performance.
You also need to consider that a reader may crash or otherwise fail to update its ref count properly, and that the writer may fail to complete an update. Coping with these cases requires extra checks, which you can fold into the API. It may be worth measuring the performance cost of some kind of integrity checking; something as simple as an adler32 checksum may be fast enough.
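A sketch of such a check using zlib's adler32 (which fields get checksummed, and the record layout, are assumptions):

#include <zlib.h>
#include <cstdint>

// The writer stores a checksum of the read-only part of each record last;
// a client verifies it before trusting the data. Layout is illustrative.
struct CheckedRecord {
    uint64_t payload[7];
    uint32_t checksum;   // adler32 of payload, written by the writer
};

bool verify(const CheckedRecord &r)
{
    uLong sum = adler32(0L, Z_NULL, 0);   // initial seed, per zlib docs
    sum = adler32(sum, reinterpret_cast<const Bytef *>(r.payload),
                  sizeof(r.payload));
    return static_cast<uint32_t>(sum) == r.checksum;
}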