
C++: What is faster - lookup in hashmap or switch statement?

I have a code pattern which translates one integer to another. Just like this:

int t(int value) {
    switch (value) {
        case 1: return const_1;
        case 3: return const_2;
        case 4: return const_3;
        case 8: return const_4;
        default: return 0;
    }
}

It has about 50 entries currently; there may be more later, but probably no more than a hundred or two. All the values are predefined, and of course I can order the case labels by value. So the question is: which will be faster, this approach or putting the mapping into a hash map (I have no access to std::map, so I mean the custom hash map available in my SDK) and performing lookups in that table? Maybe it's a bit of premature optimization, though... but I'd like your opinions.

Thanks in advance.

EDIT: My case values will be in the range 0 to 0xffff. As for the point about a hash map being more readable: I'm not sure it really will be, because I still need to populate it with values, so that sheet of constant mappings still needs to live somewhere in my code.

EDIT-2: Many useful answers have already been given, thanks a lot. I'd like to add some info. My hash key is an integer, and my hash function for integers is basically just one multiplication with integer overflow:

EXPORT_C __NAKED__ unsigned int DefaultHash::Integer(const int& /*aInt*/)
{
_asm mov edx, [esp+4]
_asm mov eax, 9E3779B9h
_asm mul dword ptr [edx]
_asm ret
}

So it should be quite fast.


A switch construct is faster (or at least not slower).

That's mostly because a switch construct gives static data to the compiler, while a runtime structure like a hash map doesn't.

When possible, compilers compile switch constructs into an array of code pointers (a jump table): each item of the array, indexed by your case values, points to the associated code. At runtime this takes O(1), while a hash map is O(1) only on average and O(n) in the worst case, and in any case it costs a larger constant number of memory accesses.


I will add my 5 cents:

For around 50 entries, std::unordered_map (hash based, O(1)) is typically slower than std::map (tree based, O(log N)), and both of them are slower than boost::flat_map (a sorted vector, O(log N)), which I tend to use in such cases. A switch cannot always be compiled to a jump table, and when it can, you can simply put your values (or functions) in a vector yourself and access them by index. Otherwise, a switch is only marginally faster than boost::flat_map.

Please note the word "typically" at the beginning; if you do care about the performance of this piece of code, profile it (and share the results with us :)).


A switch statement is going to be quicker than a lookup in a hash map.

However, a map is going to result in much more readable code if you ever change the mappings. You can easily do this with a map by reading the results in from a file. In a switch statement you'd have to change the code and recompile.


The switch will be faster. If it's a small number of cases, as in your example, it will use an if-chain. If a large number of cases, and if they are reasonably compact, it has the option to generate a jump-table, which only takes a few instructions. (BTW you don't have to order the cases.) The hash-map is O(1), but will probably take in the range of 10-40 instructions.


An array will have the fastest access time, by definition.

The switch statement compares values, then uses a jump table (an array of code addresses).

The hashmap computes a hash value from the key, then either searches a tree in memory or uses the hash value as an index into an array. It is slower because of the cost of computing the hash value.

On most modern platforms, 64k entries is not a big amount of data and can be statically allocated as a constant array.

One problem with the array technique is handling keys you have not accounted for. One solution is to use a unique sentinel value: when the sentinel is returned, you know you have an unknown key.

I suggest using a static const array of values.


The speed of a hash map will depend on two things: the speed of the hash function, and the number of collisions. When all of the values are known ahead of time, it's possible to create a perfect hash function that has no collisions. If you can generate a perfect hash function that only consists of a couple of arithmetic operations, it will potentially be faster than the switch.


I agree with using an array, but I don't have the reputation to vote for it. It's only 65536 entries, so unless you're under serious memory constraints and/or you're returning something much larger than the int in your example, you will be much better off with a static const array. An array of 64k ints is generally only 256 KB, and it would be half or a quarter of that size if you can use a short or a char. I think the best you can hope for from a switch statement is a conditional branch for values outside its array of code pointers and an indirect jump through the table for values inside it. Being able to just execute "return my_array[value]" results in a single memory fetch (possibly from L3 cache).

For readability, you can stick the array in its own file and line up all the values in a grid with something like 10 or 16 entries per line. Then you comment each line with the first part of each entry number (e.g. "// 0x12A?" ), and have periodic comment lines that would line up with the columns to fill in the last digit for the entry number (e.g. "// 0 1 2 3 4 5 6 7 8 9 A B C D E F"). I've done this for several arrays of 256 entries, which has been much easier to manage than a switch statement. I've also used arrays with 64k entries for fast integer logarithms, which get more complicated to manage, but I was able to write a program to generate all the array code.

With something that large, managing the code this way may not pay off until you're dealing with more entries; it depends on your editor and your skill with it. Maintaining such an array means adjusting a spot in a chart instead of hunting for values that may or may not be in a long list of "case 1: return const_1;" lines. A couple of for loops should suffice to generate a 64k-entry array, properly commented and filled with default values.

For access safety, you might consider using some sort of bounds checking. This could be done with boost's preconditions, throwing an exception or special return if the number is out of bounds, or a simple "return my_array[value&0xffff]". However, you might have a strong enough guarantee on your incoming value that you don't need any of it.


I think it is not obvious which is going to be faster. You might need to profile both approaches.

The hash map should have complexity of O(1).

The switch (with non-contiguous keys like yours) may be optimized into a binary search (at least with GCC), which has complexity of O(log n).

On the other hand, any operation done on a hash map will be much more expensive than an operation done in a switch.


Hash table lookup is generally O(1) when collisions are not considered. The C++ standard doesn't specify how switch is implemented, but it can be compiled to a jump table, whose time complexity is O(1) too, or to a binary search, which is O(log n), or a combination of both, depending on how many case statements there are, how dense they are, etc.

In short: at a small scale like yours, switch is faster, but a hash table might win at a larger scale.

