
Efficient most common suffix algorithm?

I have a few GBs worth of strings, and for every prefix I want to find 10 most common suffixes. Is there an efficient algorithm for that?

An obvious solution would be:

  • Store a sorted list of <string, count> pairs.
  • Use binary search to identify the extent covered by the prefix we're searching for.
  • Find the 10 highest counts in this extent.
  • Possibly precompute results for all short prefixes, so a query never needs to scan a large portion of the data.
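A minimal sketch of the steps above, assuming the data fits in a sorted in-memory list (the sample strings and counts are made up for illustration):

```python
import bisect
import heapq

# Sorted list of (string, count) pairs; sorting makes every prefix's
# matches a contiguous extent that binary search can locate.
strings = sorted([
    ("apple", 50), ("applet", 30), ("apply", 70),
    ("banana", 20), ("band", 90), ("bandit", 10),
])
keys = [s for s, _ in strings]

def top_suffixes(prefix, k=10):
    # Binary search for the extent of entries starting with `prefix`.
    lo = bisect.bisect_left(keys, prefix)
    # Appending a maximal code point gives an upper bound just past the
    # prefix range (a common trick, not the only way to bound it).
    hi = bisect.bisect_right(keys, prefix + "\uffff")
    # Keep the k highest counts within the extent.
    return heapq.nlargest(k, strings[lo:hi], key=lambda p: p[1])

print(top_suffixes("app", 3))
# → [('apply', 70), ('apple', 50), ('applet', 30)]
```

The query costs O(log n) for the search plus O(m log k) for the top-k scan, where m is the extent size; the precomputation step in the last bullet exists precisely to cap m for short, popular prefixes.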

I'm not sure whether that would actually be efficient. Is there a better approach I've overlooked?

Answers must be returned in real time, but preprocessing can take as long as necessary.


Place the words in a tree, e.g. a trie or radix tree, storing a "number of occurrences" counter at each full word, so you know which nodes are word endings and how common they are.

Find the prefix/postfix combos by iteration.

Both of these operations are O(n*k), where n is the number of words and k is the length of the longest word; this is the same complexity as a hash table.
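A minimal trie sketch along these lines, assuming per-node child maps and a count stored at word endings (the class and function names are illustrative, not a standard API):

```python
from collections import defaultdict
import heapq

class TrieNode:
    # Each node maps characters to children; count > 0 marks a full word.
    def __init__(self):
        self.children = defaultdict(TrieNode)
        self.count = 0

def insert(root, word, count=1):
    node = root
    for ch in word:
        node = node.children[ch]
    node.count += count

def top_k(root, prefix, k=10):
    # Walk down to the node for `prefix`, then collect all word endings
    # beneath it and keep the k most common.
    node = root
    for ch in prefix:
        if ch not in node.children:
            return []
        node = node.children[ch]
    results = []
    stack = [(node, prefix)]
    while stack:
        cur, word = stack.pop()
        if cur.count:
            results.append((word, cur.count))
        for ch, child in cur.children.items():
            stack.append((child, word + ch))
    return heapq.nlargest(k, results, key=lambda p: p[1])

root = TrieNode()
for w, c in [("apple", 50), ("apply", 70), ("applet", 30), ("band", 90)]:
    insert(root, w, c)
print(top_k(root, "app", 2))
# → [('apply', 70), ('apple', 50)]
```

As with the sorted-list approach, the answerer's note about precomputation still applies: caching the top-k list at each shallow node avoids walking a huge subtree for very common prefixes.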

The HAT-trie is a cache-conscious version that promises high performance.
