
Best data structures for searching millions of filenames? [duplicate]

This question already has answers here: Closed 11 years ago.

Possible Duplicate:

Build an index for substring search?

I'm developing a filename search tool. I'd like to search a hard drive (or multiple hard drives) containing perhaps millions of filenames.

Given the file: 'application 3 - jack smithinson'

Searches:

  1. 'application', '3', 'jack', 'smithinson'
  2. 'smith'
  3. 'inson'

Should all return this file.

What are the best data structures for this kind of operation and why?

  1. Binary tree.
  2. Trie.
  3. SQLite database of filenames.
  4. More?
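
For reference on why options 1 and 2 alone fall short: a trie or binary tree answers prefix queries, but the 'smith' and 'inson' searches above need substring matches. One classic workaround is to index every suffix of every filename, so a substring search becomes a prefix search. Below is a minimal sketch of that idea; the class and method names are illustrative, and a real suffix array would avoid the quadratic space this naive version pays by storing full suffix copies.

    import java.util.ArrayList;
    import java.util.List;

    public class SuffixIndex {
        // Each entry pairs one suffix with the filename it came from.
        private final List<String[]> entries = new ArrayList<>();

        public void add(String filename) {
            String lower = filename.toLowerCase();
            for (int i = 0; i < lower.length(); i++) {
                entries.add(new String[] { lower.substring(i), filename });
            }
        }

        public void build() {
            // Sort once after all filenames are added; searches then binary-search.
            entries.sort((a, b) -> a[0].compareTo(b[0]));
        }

        public List<String> search(String term) {
            String t = term.toLowerCase();
            // Binary-search for the first suffix >= term.
            int lo = 0, hi = entries.size();
            while (lo < hi) {
                int mid = (lo + hi) >>> 1;
                if (entries.get(mid)[0].compareTo(t) < 0) lo = mid + 1;
                else hi = mid;
            }
            // Collect filenames while suffixes still start with the term.
            List<String> hits = new ArrayList<>();
            for (int i = lo; i < entries.size() && entries.get(i)[0].startsWith(t); i++) {
                if (!hits.contains(entries.get(i)[1])) hits.add(entries.get(i)[1]);
            }
            return hits;
        }

        public static void main(String[] args) {
            SuffixIndex idx = new SuffixIndex();
            idx.add("application 3 - jack smithinson");
            idx.build();
            System.out.println(idx.search("smith")); // matches mid-word
            System.out.println(idx.search("inson")); // matches a word's tail
        }
    }

The sorted suffix list trades memory for query speed: each lookup is one binary search plus a scan of the matching range, regardless of how many filenames are indexed.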


Store these file names in Lucene indexes. You can find more information here: http://incubator.apache.org/lucene.net/. Lucene lets you create highly optimized indexes for search; Yahoo has used it for years for its web search engine. It offers an abstract way to create indexes without worrying about the internal implementation. It's as easy as creating an XML document in memory and then serializing it to disk.
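
To make this concrete, here is a minimal sketch using the Java Lucene API (the answer links Lucene.NET, which mirrors it closely). The index path, the "name" field, and the wildcard query are illustrative assumptions, not from the original answer; note that leading-wildcard queries are slow on large indexes, and an n-gram analyzer is the usual remedy, omitted here for brevity.

    import java.nio.file.Paths;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.TextField;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.ScoreDoc;
    import org.apache.lucene.search.TopDocs;
    import org.apache.lucene.search.WildcardQuery;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;

    public class FilenameSearch {
        public static void main(String[] args) throws Exception {
            Directory dir = FSDirectory.open(Paths.get("filename-index"));

            // Index: one document per file, the name as an analyzed text field.
            try (IndexWriter writer =
                     new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
                Document doc = new Document();
                doc.add(new TextField("name",
                        "application 3 - jack smithinson", Field.Store.YES));
                writer.addDocument(doc);
            }

            // Search: a wildcard query matches substrings within tokens,
            // so "*inson*" finds "smithinson".
            try (DirectoryReader reader = DirectoryReader.open(dir)) {
                IndexSearcher searcher = new IndexSearcher(reader);
                TopDocs hits = searcher.search(
                        new WildcardQuery(new Term("name", "*inson*")), 10);
                for (ScoreDoc sd : hits.scoreDocs) {
                    System.out.println(searcher.doc(sd.doc).get("name"));
                }
            }
        }
    }

Plain word searches like 'application' or 'jack' need no wildcards at all: the StandardAnalyzer already splits the filename into lowercase tokens at index time, so a simple TermQuery matches them directly.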

