Heap Behavior in C++
Is there anything wrong with the optimization of overloading the global operator new to round up all allocations to the next power of two? Theoretically, this would lower fragmentation at the cost of higher worst-case memory consumption. Does the OS allocator already do something equivalent, making this redundant, or does it do its best to conserve memory?
Basically, given that memory usage isn't as much of an issue as performance, should I do this?
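For concreteness, here is a minimal sketch of the overload the question describes; next_pow2 is just an illustrative helper, and a real version would also need the array and sized forms of new/delete:

#include <cstdlib>
#include <new>

// Round n up to the next power of two (assumes n fits; illustrative only).
static std::size_t next_pow2(std::size_t n) {
    std::size_t p = 1;
    while (p < n)
        p <<= 1;
    return p;
}

void* operator new(std::size_t size) {
    // Forward the rounded-up request to malloc; new(0) must still
    // return a unique pointer, hence the "size ? size : 1".
    if (void* p = std::malloc(next_pow2(size ? size : 1)))
        return p;
    throw std::bad_alloc{};
}

void operator delete(void* p) noexcept {
    std::free(p);
}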
The default memory allocator is probably quite smart and will deal well with large numbers of small to medium sized objects, as this is the most common case. For most allocators, the number of bytes requested is rarely the exact amount allocated. For example, if you say:
char * p = new char[3];
the allocator almost certainly does something like:
char * p = new char[16]; // or some minimum power of 2 block size
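If you want to see what your allocator actually hands out, glibc offers malloc_usable_size (a non-standard extension, so this sketch is platform-specific):

#include <cstdio>
#include <cstdlib>
#include <malloc.h>  // glibc-specific

int main() {
    void* p = std::malloc(3);
    // Typically prints something larger than 3 (e.g. 24 on glibc),
    // showing the request was already rounded up internally.
    std::printf("usable size: %zu\n", malloc_usable_size(p));
    std::free(p);
}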
Unless you can demonstrate that you have an actual problem with allocations, you should not consider writing your own version of new.
You should try implementing it for fun. As soon as it works, throw it away.
Should you do this? No.
Two reasons:
- Overloading the global new operator will inevitably cause you pain, especially when external libraries depend on the stock version.
- Modern OS heap implementations already take fragmentation into account. If you're on Windows, look into the "Low Fragmentation Heap" if you have a special need (see the sketch below).
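For reference, the LFH has been the default since Windows Vista; on older versions you could opt a heap in explicitly, roughly like this:

#include <windows.h>
#include <cstdio>

int main() {
    ULONG enableLFH = 2;  // 2 selects the Low Fragmentation Heap
    if (HeapSetInformation(GetProcessHeap(),
                           HeapCompatibilityInformation,
                           &enableLFH, sizeof enableLFH))
        std::printf("LFH enabled\n");
    return 0;
}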
To summarize, don't mess with it unless you can prove (by profiling) that it is a problem to begin with. Don't optimize prematurely.
I agree with Neil, Alienfluid and Fredoverflow that in most cases you don't want to write your own memory allocator, but I still wrote my own about 15 years ago and have refined it over the years (the first version redefined malloc/free; later versions used the global new/delete operators). In my experience, the advantages can be enormous:
- Memory leak tracing can be built into your application. No need to run external tools that slow your application down.
- If you implement different strategies, you sometimes find difficult problems simply by switching to a different memory allocation strategy.
- To find difficult memory-related bugs, you can easily add logging to your memory allocator and refine it further (e.g. log all news and deletes for allocations of N bytes).
- You can use page-allocation strategies, where you allocate a complete 4 KB page and adjust the page protection so that buffer overflows are caught immediately.
- You can add logic to delete that reports when memory is freed twice.
- It's easy to add a red zone to memory allocations (a checksum before the allocated memory and one after it) to find buffer overflows/underflows more quickly (see the sketch after this list).
- ...
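To make the red-zone point above concrete, here is a simplified sketch; it ignores over-aligned types and keeps no allocation table, and kCanary is just an arbitrary marker value chosen for illustration:

#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <new>

constexpr std::size_t kCanary = 0xDEADBEEF;

void* operator new(std::size_t size) {
    // Layout: [size][front canary][user bytes ...][back canary]
    auto* raw = static_cast<unsigned char*>(
        std::malloc(3 * sizeof(std::size_t) + size));
    if (!raw) throw std::bad_alloc{};
    std::memcpy(raw, &size, sizeof size);
    std::memcpy(raw + sizeof size, &kCanary, sizeof kCanary);
    std::memcpy(raw + 2 * sizeof(std::size_t) + size, &kCanary, sizeof kCanary);
    return raw + 2 * sizeof(std::size_t);
}

void operator delete(void* p) noexcept {
    if (!p) return;
    auto* raw = static_cast<unsigned char*>(p) - 2 * sizeof(std::size_t);
    std::size_t size, front, back;
    std::memcpy(&size, raw, sizeof size);
    std::memcpy(&front, raw + sizeof size, sizeof front);
    std::memcpy(&back, raw + 2 * sizeof(std::size_t) + size, sizeof back);
    // Report corrupted canaries instead of silently freeing.
    if (front != kCanary || back != kCanary)
        std::fprintf(stderr, "red zone damaged for block %p\n", p);
    std::free(raw);
}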