Best data structures for searching millions of filenames? [duplicate]
Possible Duplicate:
Build an index for substring search?
I'm developing a filename search tool. I'd like to search a hard drive (or multiple hard drives) containing perhaps millions of filenames.
Given the file: `application 3 - jack smithinson`

Searches for:
- 'application', '3', 'jack', 'smithinson'
- 'smith'
- 'inson'

should all return this file.
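Note that every example query above reduces to a case-insensitive substring test against the full filename: whole tokens like 'jack' and infixes like 'inson' both pass. A minimal sketch of that required behavior (the filename here is just the example from the question):

```python
def matches(query: str, filename: str) -> bool:
    # Case-insensitive substring test: covers whole tokens
    # ('application', 'jack') as well as infixes ('smith', 'inson').
    return query.lower() in filename.lower()

filenames = ["application 3 - jack smithinson"]
for q in ["application", "3", "jack", "smithinson", "smith", "inson"]:
    assert any(matches(q, f) for f in filenames)
```

A linear scan like this is fine for a few thousand names, but at millions of filenames per query it becomes the bottleneck, which is why an index is needed.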
What are the best data structures for this kind of operation, and why?
- Binary tree?
- Trie?
- SQLite database of filenames?
- Something else?
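For comparison with the options above: a plain trie only matches prefixes, so it cannot find 'inson' inside 'smithinson' without indexing every suffix. One common alternative that does handle substrings is a character n-gram index (trigrams here), similar in spirit to what full-text engines use internally. This is an illustrative sketch, not a production implementation; all names in it are made up for the example:

```python
from collections import defaultdict

def build_trigram_index(filenames):
    """Map each lowercase 3-gram to the set of filename ids containing it."""
    index = defaultdict(set)
    for i, name in enumerate(filenames):
        text = name.lower()
        for j in range(len(text) - 2):
            index[text[j:j + 3]].add(i)
    return index

def search(index, filenames, query):
    q = query.lower()
    if len(q) < 3:
        # Queries shorter than a trigram fall back to a linear scan.
        return [f for f in filenames if q in f.lower()]
    # A candidate must contain every trigram of the query...
    grams = [q[j:j + 3] for j in range(len(q) - 2)]
    candidates = set.intersection(*(index.get(g, set()) for g in grams))
    # ...then verify, since disjoint trigrams can produce false positives.
    return [filenames[i] for i in sorted(candidates) if q in filenames[i].lower()]
```

Usage: `search(build_trigram_index(files), files, "inson")` returns every filename containing 'inson'. The index trades memory for speed: each query touches only the filenames sharing its trigrams instead of scanning all of them.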
Store the file names in Lucene indexes. You can find more information here: http://incubator.apache.org/lucene.net/. Lucene lets you create highly optimized search indexes; Yahoo has used it for years for its web search engine. It offers an abstract way to create indexes without worrying about the internal implementation — roughly as easy as creating an XML document in memory and then serializing it to disk.