
Efficient Dictionary lookup

For my C++ application, there is a requirement to check if a word is a valid English dictionary word or not. What is the best way to do it? Is there a freely available dictionary that I can make use of? I just need a collection of all possible words. How can I make this lookup least expensive? Do I need to hash it?


Use either a std::set<std::string> or a std::unordered_set<std::string>. The latter is new in C++0x and may or may not be supported by your C++ Standard Library implementation; if it is not, the library may provide a hash_set of some kind instead: consult your documentation to find out.

Which of these (set, typically implemented as a balanced binary search tree, and unordered_set, implemented as a hash table) is more efficient depends on the number of elements you are storing in the container and on how your Standard Library implementation implements them. Your best bet is to try both and see which performs better for your specific scenario.

Alternatively, if the list of words is fixed, you might consider using a sorted std::vector and using std::binary_search to find words in it.


With regards to the presence of a word list, it depends on the platform. Under Linux, /usr/share/dict/words contains a list of English words that might meet your needs. Otherwise, there are doubtlessly such lists available on the network.

Given the size of such lists, the fastest access will be to load it into a hash table: std::unordered_set, if you have it; otherwise, many C++ compilers come with a hash_set, although different compilers give it a slightly different interface and put it in different namespaces. If that still has performance problems, you can do better if you know the number of entries in advance (so the table never has to grow) and implement the hash table in a std::vector (or even a C-style array); handling collisions will be a bit more complicated, however.

Another possibility would be a trie. This will almost certainly result in the fewest basic operations per lookup, and is fairly simple to implement. Typical implementations will have very poor locality, however, which could make them slower than some of the other solutions in actual practice (or not—the only way to know is to implement both and measure).


I actually did this a few months ago, or something close to this. You can probably find one online for free.

Like on this website: http://wordlist.sourceforge.net/

Just put it in a text file and compare words against the list. A linear scan is O(n), with n being the number of words in the list. Do you need the lookup to be faster than that?

Hope this helps.
