Why are circular linked lists used in storage allocators instead of a tree?

How come storage allocators use a circular linked list to store allocated/free addresses instead of a balanced tree? Traversing a linked list would require O(n) order of complexity whereas a balanced tree could be traversed in O(log n), right? What's the advantage/reasoning behind it?


The premise ("storage allocators use a circular linked list to store allocated/free addresses") is not necessarily true. It might be true for some allocators, but it's not true in general.

If the allocator uses a linked-list-like structure to keep track of blocks of memory, it is often embedded as metadata in the memory blocks themselves, i.e. not kept as a separate data structure on the side.

For example, each block of memory could start with its status (free/allocated) and the size of the block. This approach basically implements a linked list (using the size, you can easily determine the start address of the next block), but it has a property that a plain linked list doesn't have: you can still find a specific memory block (node) just by knowing its memory address.

So you'd have O(1) access to a block's metadata (because you, or the compiler, know the memory address of the block). Merging neighboring free blocks is also straightforward. If it's necessary to run some kind of defragmentation or compaction algorithm, that can be done using the linked-list-like structure. Finding a free block of sufficient size can be done with it too (although sometimes a second embedded linked list is kept specifically for free blocks, to minimize the overhead of the allocation functions).
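As a minimal sketch of this embedded-metadata idea (the header layout and names here are my own invention, not any real allocator's):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical header embedded at the start of every block. */
typedef struct block_header {
    size_t size;  /* size of the whole block, header included */
    int    free;  /* 1 if the block is free, 0 if allocated   */
} block_header;

/* O(1): the header sits immediately before the payload pointer
   the allocator handed out, so an address is all we need.     */
static block_header *header_of(void *payload) {
    return (block_header *)((uint8_t *)payload - sizeof(block_header));
}

/* The "linked list" is implicit: the next block starts exactly
   size bytes after the current one.                           */
static block_header *next_block(block_header *b) {
    return (block_header *)((uint8_t *)b + b->size);
}

/* Merging a free block with a free successor is one addition;
   no pointers need to be rewired.                             */
static void coalesce(block_header *b, const block_header *heap_end) {
    block_header *n = next_block(b);
    if (n < heap_end && b->free && n->free)
        b->size += n->size;
}
```

Finding a free block of sufficient size still walks blocks one by one; the win is the O(1) jump from an address to its metadata and the trivial merge.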

Of course, this is just one possible approach to the problem. But it goes to show that using a linked list is not necessarily a worse choice than another data structure.


Well, allocators are often purpose-built: carefully crafted to the particular demands they are expected to service.

As such, there are probably more complicated and less regular structures to be found in many industrial-strength allocators.

Still, presuming the premise of your question is accurate:

Worst-case complexity matters most for very large traversals. Most allocators are designed so that the necessary amount of traversal is usually quite small: so small that the additional overhead required to maintain a balanced tree makes traversal slower in the average case. Additionally, engineers prefer the simplest solution except where a more complex one is obviously better, and a circularly linked list is about as simple as it gets.
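For a sense of how little machinery such a list needs, here is a sketch in the spirit of the classic K&R first-fit allocator (the names, and the simplifications noted in the comments, are mine):

```c
#include <stddef.h>

/* Free-list node, embedded in the free memory itself.
   Sizes are counted in units of sizeof(node). */
typedef struct node {
    struct node *next;   /* next free block, circularly linked */
    size_t       units;  /* size of this free block, in nodes  */
} node;

static node  base;           /* degenerate list head        */
static node *freep = NULL;   /* roving "start here" pointer */

static void ring_init(void) {
    if (freep == NULL) {     /* one-node ring holding only base */
        base.next  = &base;
        base.units = 0;
        freep      = &base;
    }
}

/* Donate a region to the ring; a stand-in for both free() and
   "ask the OS for more". (K&R's free() additionally keeps the
   ring address-ordered and coalesces neighbors.) */
static void donate(node *region, size_t nunits) {
    ring_init();
    region->units = nunits;
    region->next  = freep->next;
    freep->next   = region;
}

/* First fit around the ring. The roving pointer makes each search
   resume where the previous one stopped, so the expected number
   of nodes visited stays small. */
static void *alloc_units(size_t nunits) {
    ring_init();
    node *prev = freep;
    for (node *p = prev->next; ; prev = p, p = p->next) {
        if (p->units >= nunits) {
            if (p->units == nunits) {   /* exact fit: unlink node   */
                prev->next = p->next;
            } else {                    /* carve tail off the block */
                p->units -= nunits;
                p += p->units;
                p->units = nunits;
            }
            freep = prev;
            return (void *)(p + 1);     /* payload starts past header */
        }
        if (p == freep)                 /* wrapped all the way round */
            return NULL;
    }
}
```

A balanced tree that buys O(log n) lookups has to pay for rebalancing on every insert and delete; when the loop above typically inspects only a handful of nodes, that bookkeeping never earns its keep.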


"Traversing a linked list would require O(n) order of complexity"

Yes, but the purpose of a storage allocator is to provide some allocated space, and that does not necessarily require "traversing" the structure that stores previous allocations. If, for example, we allocate memory in chunks of one specific size every time (so we keep chunks of that size in our structure), then we just need to return the first one. In general, we just have to find some node that is big enough, and we stop looking as soon as we find one, which usually happens quite quickly.
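A fixed-size-chunk pool makes that concrete: allocation is popping the head of a free list, with no traversal at all (a generic sketch, not any particular allocator's code):

```c
#include <stddef.h>

#define CHUNK_SIZE  64      /* every allocation is this size */
#define POOL_CHUNKS 1024

/* A free chunk stores the link to the next free chunk in its
   own first bytes, so the free list costs no extra memory. */
typedef union chunk {
    union chunk  *next;
    unsigned char bytes[CHUNK_SIZE];
} chunk;

static chunk  pool[POOL_CHUNKS];
static chunk *free_head = NULL;

static void pool_init(void) {
    for (size_t i = 0; i + 1 < POOL_CHUNKS; i++)
        pool[i].next = &pool[i + 1];    /* thread the chunks together */
    pool[POOL_CHUNKS - 1].next = NULL;
    free_head = &pool[0];
}

/* O(1): no search, just pop the first free chunk. */
static void *pool_alloc(void) {
    chunk *c = free_head;
    if (c != NULL)
        free_head = c->next;
    return c;
}

/* O(1): push the chunk back onto the free list. */
static void pool_free(void *p) {
    chunk *c = (chunk *)p;
    c->next   = free_head;
    free_head = c;
}
```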

"whereas a balanced tree could be traversed in O(log n), right?"

We could find a specific element in O(log n), but we can't "traverse" the tree in that time, because by definition a "traversal" of a data structure visits every node, and there are O(n) nodes. And we can only find a specific element in O(log n) if the tree has an appropriate search-tree property. Which node do we want, again? A size-ordered tree would let us efficiently find, for example, the smallest free block that is big enough; but that isn't necessarily what we want to give back anyway, since that policy tends to produce lots of tiny chunks that may never suit any future allocation and only bloat the structure.
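To make the policy point concrete, here is a sketch of best fit next to first fit over a plain free list (the node layout is hypothetical); even if a size-ordered tree found the same best-fit block in O(log n), the policy itself is what produces the unusable slivers:

```c
#include <stddef.h>

typedef struct free_block {
    size_t size;
    struct free_block *next;
} free_block;

/* Best fit: scan the whole list for the smallest block that is
   large enough. Repeatedly shaving off the tightest fit leaves
   tiny remainders that rarely satisfy any later request. */
static free_block *best_fit(free_block *head, size_t want) {
    free_block *best = NULL;
    for (free_block *p = head; p != NULL; p = p->next)
        if (p->size >= want && (best == NULL || p->size < best->size))
            best = p;
    return best;
}

/* First fit: stop at the first block that is large enough.
   The expected scan is short, and in practice the fragmentation
   behavior is often no worse. */
static free_block *first_fit(free_block *head, size_t want) {
    for (free_block *p = head; p != NULL; p = p->next)
        if (p->size >= want)
            return p;
    return NULL;
}
```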
