
Overriding global operator new to track huge memory allocations?

I am trying to produce a special build of a large monolithic application. The problem I am trying to solve is tracking hard-to-reproduce huge memory allocations (30-80 gigabytes, judging by what OS reports). I believe the problem is an std::vector resized to a negative 32-bit integer value. The only platform exhibiting this behavior is Solaris (maybe it's the only platform that manages to successfully allocate such chunks of contiguous memory). Can I globally replace std::vector with my class, delegating all calls to the real vector, watching for suspicious allocations (size > 0x7FFFFFFFu)? Maybe selectively replace the constructor that takes size_t and the resize() methods? Maybe even hijacking the global operator new?


Why not do something like this?

#include <cstdlib>
#include <new>

void *operator new(size_t size)
{
    // if (size > MAX_SIZE) ...
    void *p = malloc(size);
    if (!p)
        throw std::bad_alloc();
    return p;
}

void *operator new[](size_t size)
{
    // if (size > MAX_SIZE) ...
    void *p = malloc(size);
    if (!p)
        throw std::bad_alloc();
    return p;
}

Setting a breakpoint in the if would find the problem right away.


You can provide a custom allocator for your vector at the time it's constructed.

As a first step, it could simply delegate to std::allocator and firewall the requested allocation size.


Take a look at the implementation of the std::vector class on the problem platform. Each implementation handles memory management differently (e.g. some double the currently allocated space when you add an object beyond the vector's current capacity). If your objects are sufficiently large and/or a large number of entries are being added to the vector, it would be possible for an attempted allocation to exceed the available (contiguous) memory on the machine. If that is the case, you'll want to look into a custom allocator for that vector.

If you're storing that many large items in a vector, you may want to look into another collection (e.g. std::list) or try storing pointers instead of the actual objects.


You can supply your own allocator type to std::vector to track allocations, but I doubt that's the cause. First, the sizes (30-80 GB) imply this is 64-bit code. How could a negative 32-bit integer value make it into the vector's size, which is 64-bit? It would have been promoted to 64-bit first, preserving its value. Second, the fact that this problem only occurs on Solaris can indicate a different issue: as far as I remember, Solaris is the only OS that commits memory at allocation time, while the other operating systems only mark the address space as allocated until the pages are actually touched. So I would search for allocations that are never used.
