Speed/Memory usage estimate for different data structures

I'm trying to decide which data structure to use for the following.

Let's say I have maybe 10 million keys, each containing a pointer to a unique object holding some data.

The keys are UUIDs; think of them as 16-byte binary arrays. The UUIDs are generated using a good-quality random number generator.

I've been considering the following, but I'd like to know the pros and cons of each in terms of speed and memory consumption. Some fair estimates for the best/worst/average case on a 64-bit platform would be nice.

I need to be able to insert a virtually unlimited number of items.

  • Binary tree
  • Hash table
  • Radix tree (bit-based or 2-bit multi-way)

The operations I need on these are: insert, delete, search

I like the idea of a radix tree, but it's proving to be the hardest to implement, and I haven't found a suitable existing implementation that I could incorporate into a commercial product.


Given that:

  • You don't care about ordering
  • Your key is already random
  • You have 10 million items

The short answer

A hash table will probably be the best for your case.

Speed

A hash table (std::unordered_map) will be O( 1 ) as long as hashing takes constant time. In your case O( 1 ) holds because you don't even need to hash: just using the lower 32 bits of the already-random UUID should be good enough. The cost of a lookup will be similar to one or two pointer indirections.
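
For illustration, a minimal sketch of that idea (the Uuid alias is my stand-in for your 16-byte key; I take the low 64 bits since size_t is 64 bits here, though as noted even 32 would do):

    #include <array>
    #include <cstdint>
    #include <cstring>
    #include <unordered_map>

    // Stand-in for the 16-byte binary UUID key.
    using Uuid = std::array<std::uint8_t, 16>;

    // The key is already uniformly random, so no real mixing is needed:
    // just reinterpret the first bytes of the UUID as the hash value.
    struct UuidHash {
        std::size_t operator()(const Uuid& u) const {
            std::uint64_t h;
            std::memcpy(&h, u.data(), sizeof h); // low 64 bits of the UUID
            return static_cast<std::size_t>(h);
        }
    };

    // std::array provides operator==, so the default key equality works.
    std::unordered_map<Uuid, void*, UuidHash> objects;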

A binary tree (std::map) will be O( log2 n ), so for 10 million items you'll have 24 comparisons and 24 potential cache misses. Even for n = 4,000 it'll use 12 comparisons, so it very quickly becomes significantly worse than a hash table.

A radix tree will be O( k ), so you'll have a maximum of k comparisons and k potential cache misses. In the very unlikely best case, the radix tree will be as fast as a hash table. At worst (assuming k is a somewhat reasonable 16, for a 256-way tree over 16-byte keys) it'll perform better than a binary tree but far worse than a hash table.

So if speed is top priority, use a hash table.

Overhead

A typical hash table will have around 1–3 pointers of overhead per item when full. If not full, you'll probably be wasting 1 pointer of space per empty slot. Because your key is very random, you should be able to keep the table nearly full while still being faster than a binary tree, but for maximum possible speed you'll of course want to give it plenty of headroom. For 10 million items on a 32-bit machine, expect 38–114 MiB of overhead for a full table, or 76–153 MiB for a half-full one; roughly double those figures for 64-bit pointers.

A red-black tree, the most common std::map implementation, will have 3 pointers + 1 bool per item. Some implementations exploit pointer alignment to merge the bool into one of the pointers. Depending on the implementation and how full the hash table is, a red-black tree might have slightly lower overhead. Expect 114–153 MiB.

A radix tree will have 1 pointer per item and 1 pointer per empty slot. Unfortunately, I think such large random keys will leave you with very many empty slots toward the leaves of the tree, so it will probably use more memory than either of the above. Decreasing k can lower this overhead, but will similarly lower performance.

If low overhead is important, use a hash table or a binary tree. If it's the top priority, use a full hash table.

Note that std::unordered_map gives you only coarse control over when it resizes (via max_load_factor, reserve, and rehash), so getting one truly full takes some effort. Boost Intrusive has a very nice unordered_map implementation that will put you directly in control of that and many other things.
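
For what it's worth, here's the coarse control the standard container does expose, reusing the Uuid/UuidHash types from the sketch above (the exact bucket counts chosen remain implementation-defined):

    // Reusing the Uuid/UuidHash types from the earlier sketch:
    void prepare(std::unordered_map<Uuid, void*, UuidHash>& objects) {
        objects.max_load_factor(2.0f); // allow ~2 items per bucket: fewer empty
                                       // buckets wasted, slightly longer chains
        objects.reserve(10000000);     // pre-allocate buckets for 10M items so
                                       // no rehash happens during the bulk insert
    }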


I would try std::map or std::unordered_map first.

They've had many smart people developing and improving them over many years.

Is there any reason why you can't use std::map or std::unordered_map?


I just did a quick calculation, and I think you might be fine with a standard tree. 10 million keys is a reasonable number. With a balanced tree that's a depth of only about 23 nodes to check. With a radix tree you'd instead have a key length of 128 bits to check.

Your key can also be represented and compared extremely cheaply. Use a tuple (Boost or C++0x) of two 64-bit values to get the same 128-bit key. The tuple's ordering is enough for use in the map. Key copying is thus cheap, as is comparison. Comparing whole integers as-is is likely cheaper than the masking and bit-level comparisons involved in a radix depth search.
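
A minimal sketch of that representation, using std::pair rather than a full tuple (pair's built-in lexicographic operator< already gives std::map the ordering it needs; the byte order produced by memcpy only affects the ordering, which doesn't matter to you anyway):

    #include <cstdint>
    #include <cstring>
    #include <map>
    #include <utility>

    // Two 64-bit halves; std::pair compares lexicographically.
    using UuidKey = std::pair<std::uint64_t, std::uint64_t>;

    // Pack 16 raw UUID bytes into the key. The resulting order depends on
    // byte order, but any consistent total order is fine for the map.
    UuidKey make_key(const unsigned char* uuid16) {
        UuidKey k;
        std::memcpy(&k.first,  uuid16,     8);
        std::memcpy(&k.second, uuid16 + 8, 8);
        return k;
    }

    std::map<UuidKey, void*> objects; // two integer compares per tree node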

So in this case a map is likely to work just fine.

I'd avoid an unordered_map here, since UUIDs tend to be structured data. This means that a standard hashing procedure (for a hash map) could easily perform very poorly.

Update:

Since you are using random UUIDs, the hashing might be just fine -- though such large hash tables carry a significant memory overhead if they are to remain efficient.

Also, given totally random UUIDs, the radix tree will likely end up with the same balancing as the binary tree (since the key distribution is completely even). Thus you may not even save steps, and you'd still incur the overhead of the bit operations. But there are so many ways to specialize and optimize a radix tree that it's hard to say definitively whether it could be faster, or would always be slower.


IMO a radix tree is not hard to implement. However, a simple hash table would be sufficient here. Just allocate an array of 2^16 lists of objects and use the first 2 bytes of the UUID to index the list the object is inserted into. Then each search only needs to scan a list of roughly 150 items (10 million / 2^16 ≈ 153).
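
A minimal sketch of that scheme (the Uuid alias and Entry layout are my own choices; deletion would just be an erase from the matching bucket's list):

    #include <array>
    #include <cstdint>
    #include <list>
    #include <vector>

    using Uuid = std::array<std::uint8_t, 16>;
    struct Entry { Uuid id; void* obj; };

    // 2^16 buckets; the first two bytes of the random UUID pick the bucket.
    std::vector<std::list<Entry>> buckets(1 << 16);

    std::size_t bucket_of(const Uuid& id) {
        return (std::size_t(id[0]) << 8) | id[1];
    }

    void insert(const Uuid& id, void* obj) {
        buckets[bucket_of(id)].push_back({id, obj});
    }

    void* find(const Uuid& id) {
        for (const Entry& e : buckets[bucket_of(id)]) // ~150-entry scan
            if (e.id == id) return e.obj;
        return nullptr;
    }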

Or allocate an array of 20M pointers. To store an object, hash its UUID into the range 0–20M and walk forward from that index to the first free (NULL) slot, storing the pointer there. Searching means walking from the hash index until you find the object or hit a NULL slot. Deletion needs a little more care than the other operations: you can't simply NULL a slot without breaking later probe chains, so tombstones (or re-inserting the following cluster) are the usual fix. See http://en.wikipedia.org/wiki/Hash_function for background.
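
And a sketch of that open-addressing variant with linear probing (sizes and types are assumptions; deletion is left out precisely because of the tombstone caveat above, and insert assumes the table never completely fills):

    #include <array>
    #include <cstdint>
    #include <cstring>
    #include <vector>

    using Uuid = std::array<std::uint8_t, 16>;
    struct Slot { Uuid id{}; void* obj = nullptr; };

    const std::size_t TABLE_SIZE = 20000000; // ~2x the expected 10M items
    std::vector<Slot> table(TABLE_SIZE);

    std::size_t slot_of(const Uuid& u) {     // random UUID -> 0..TABLE_SIZE-1
        std::uint64_t h;
        std::memcpy(&h, u.data(), 8);
        return h % TABLE_SIZE;
    }

    void insert(const Uuid& id, void* obj) {
        std::size_t i = slot_of(id);
        while (table[i].obj != nullptr)      // probe to the first free slot
            i = (i + 1) % TABLE_SIZE;
        table[i] = {id, obj};
    }

    void* find(const Uuid& id) {
        for (std::size_t i = slot_of(id); table[i].obj != nullptr;
             i = (i + 1) % TABLE_SIZE)
            if (table[i].id == id) return table[i].obj;
        return nullptr;                      // hit an empty slot: not present
    }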
