What is a good size for medium-sized memory allocations?

For a serializing system, I need to allocate buffers to write data into. The size needed is not known in advance, so the basic pattern is to malloc N bytes and use realloc if more is needed. The size of N would be large enough to accommodate most objects, making reallocation rare.

This made me think that there is probably an optimal initial number of bytes that malloc can satisfy more easily than others. I'm guessing somewhere close to the page size, although not necessarily exactly that if malloc needs some room for housekeeping.

Now, I'm sure it is a useless optimization, and if it really mattered, I could use a pool, but I'm curious; I can't be the first programmer to think "give me whatever chunk of bytes is easiest to allocate" as a start. Is there a way to determine this?

Any answer that specifically applies to modern GCC/G++ and/or Linux will be accepted.


From reading this wiki page it seems that the answer would vary wildly depending on the malloc implementation you're using and the OS. The bit on OpenBSD's malloc is particularly interesting. It sounds like you'll want to look at mmap too, but at a guess I'd say allocations of the default page size (4096 bytes?) are what's optimised for.
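For reference, on glibc/Linux you can inspect both numbers directly. A minimal sketch, assuming glibc (malloc_usable_size is a glibc extension declared in <malloc.h>, not portable C, and the 16-byte overhead guess is just an assumption):

#include <malloc.h>   /* malloc_usable_size (glibc extension) */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>   /* sysconf */

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);   /* typically 4096 on Linux */
    printf("page size: %ld bytes\n", page);

    /* Request slightly less than a page, leaving room for the
       allocator's per-chunk bookkeeping (16 is just a guess). */
    void *p = malloc((size_t)page - 16);
    if (p) {
        printf("requested %ld, usable %zu\n",
               page - 16, malloc_usable_size(p));
        free(p);
    }
    return 0;
}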


My suggestion would be to find appropriate malloc/realloc/free source code so that you can implement your own "malloc_first" alongside the others in the same source module (using the same memory structures), which simply allocates and returns the first available block greater than or equal to a passed minimum_bytes parameter. If 0 is passed, you get the first available block, period.

An appropriate declaration could be

void *malloc_first (size_t minimum_bytes, size_t *actual_bytes);

How doable such an undertaking would be, I don't know. I suggest you attempt it on Linux, where all the source code is available.
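For what it's worth, a rough approximation is possible on glibc without patching the allocator, using the malloc_usable_size extension. This is only a sketch under that assumption; unlike a true allocator-level malloc_first, it cannot pick the "easiest" free block, it merely reveals the size of the block glibc already chose:

#include <malloc.h>   /* malloc_usable_size (glibc extension) */
#include <stdlib.h>

/* Hypothetical stand-in for the proposed malloc_first: allocate at
   least minimum_bytes and report how much space is actually usable. */
void *malloc_first(size_t minimum_bytes, size_t *actual_bytes)
{
    void *p = malloc(minimum_bytes ? minimum_bytes : 1);
    if (p && actual_bytes)
        *actual_bytes = malloc_usable_size(p);
    return p;
}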


The way it's done in similar cases is for the first malloc to allocate some significant but not-too-large chunk that would suit most cases (as you described), and for every subsequent realloc call to double the requested size.

So, if at first you allocate 100, the next time you'll realloc to 200, then 400, 800, and so on. This way, the chance of needing another reallocation drops each time you do it.

If memory serves me right, that's how std::vector behaves.
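A minimal sketch of that doubling pattern in C (grow_buffer and the initial capacity of 100 are illustrative names and values, not part of any standard API):

#include <stdlib.h>

/* Grow buf to hold at least `needed` bytes by doubling its capacity.
   Returns the (possibly moved) buffer, or NULL on failure, in which
   case the original buffer is untouched and still owned by the caller. */
static void *grow_buffer(void *buf, size_t *capacity, size_t needed)
{
    size_t cap = *capacity ? *capacity : 100;  /* initial guess */
    while (cap < needed)
        cap *= 2;                              /* 100 -> 200 -> 400 -> ... */
    void *p = realloc(buf, cap);
    if (p)
        *capacity = cap;
    return p;
}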

After edit:

The optimal initial allocation size is the one that covers most of your cases on the one hand, but isn't too wasteful on the other. If your average case is 50 but it can spike to 500, you'll want to allocate 50 initially and then double or triple (or multiply by 10) on each subsequent realloc, so that you can reach 500 in 1-3 reallocs, while any further reallocs would be unlikely and infrequent. So it depends on your usage patterns, basically.
