
Best practices for implementing a Lucene search in Java

Each document in my Lucene index is similar to a post on Stack Overflow, and I am searching through an index that contains millions of documents. Each user should only be able to search posts belonging to their own company. I have no control over how the data is indexed; I only need to implement a simple search (that works) on top of it.

Here is my first draft:

String q = "mysql";
String companyId = "1001";

String[] fields = { "body", "subject", "number", "category", "tags" };

Map<String, Float> boost = new HashMap<String, Float>();
boost.put("body", 10f);
boost.put("subject", 10f);
boost.put("number", 5f);
boost.put("category", 5f);
boost.put("tags", 5f);

MultiFieldQueryParser mfqp = new MultiFieldQueryParser(fields, new StandardAnalyzer(), boost);
mfqp.setAllowLeadingWildcard(true);
Query userQuery = mfqp.parse(q);

TermQuery companyQuery = new TermQuery(new Term("company_id", companyId));

BooleanQuery booleanQuery = new BooleanQuery();
BooleanQuery.setMaxClauseCount(50000);
booleanQuery.add(userQuery, BooleanClause.Occur.MUST);
booleanQuery.add(companyQuery, BooleanClause.Occur.MUST);

FSDirectory directory = FSDirectory.getDirectory(new File("/tmp/index"));
IndexSearcher searcher = SearcherManager.getIndexSearcherInstance(directory);
Hits hits = searcher.search(booleanQuery);

It mostly works, but I am seeing memory issues: I get an OutOfMemoryError every four or five days. I took a heap dump and saw that Lucene Term and TermInfo objects top the list. I am using a singleton instance of IndexSearcher, and I can see only one instance of it in the heap.

Any review of the way I am doing this? What am I doing wrong, and what could I do better in general?


There is no obvious bug in your code (at least not as far as I can tell). It might be best to analyze your heap dump with a more powerful tool than VisualVM. I recommend using the Eclipse Memory Analyzer (MAT) (not installed by default, but available from the default update site). It's excellent.

If you need help using MAT, see the blog post "Eclipse Memory Analyzer, 10 useful tips/articles" by Markus Kohler, the author of the tool.


What's your heap size? Are there particular searches that cause your memory usage to spike?

My guess is that you are hitting OOMEs when you perform wildcard queries. Internally, Lucene rewrites a wildcard query into an OR query over ALL of the terms that match the wildcard. The problem is exacerbated by the fact that you allow leading wildcards: a search like "body:*" would load every single term in the body field into memory.
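To make the expansion concrete, here is a toy simulation in plain Java (this is not Lucene's actual code, and the class and method names are made up for illustration): each term that matches the wildcard becomes one clause of the rewritten BooleanQuery, and a leading wildcard forces a scan of the entire term dictionary because the sorted term index cannot be used to seek to a prefix.

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

public class WildcardExpansionDemo {
    // Count how many clauses a wildcard query would expand to against a
    // simulated term dictionary. Each matching term = one BooleanQuery clause.
    static int expandedClauses(List<String> termDictionary, String wildcard) {
        // Translate Lucene-style wildcards to a regex: '*' -> '.*', '?' -> '.'
        String regex = wildcard.replace("*", ".*").replace("?", ".");
        Pattern p = Pattern.compile(regex);
        int clauses = 0;
        for (String term : termDictionary) {       // leading wildcard: full scan
            if (p.matcher(term).matches()) {
                clauses++;
            }
        }
        return clauses;
    }

    public static void main(String[] args) {
        List<String> dict = Arrays.asList(
                "mysql", "postgresql", "sqlite", "nosql", "java", "lucene");
        System.out.println(expandedClauses(dict, "*sql")); // prints 3
        System.out.println(expandedClauses(dict, "my*"));  // prints 1
    }
}
```

On a toy dictionary the counts are small, but on an index with millions of documents the term dictionary can contain millions of entries, which is why a query like "*" can blow past even a 50,000 clause limit.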

My recommendation would be to run a memory profiler while executing wildcard queries and see what you get. If the wildcard queries are the culprit, then at the very least disable leading wildcards, or lower your maximum clause count.
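Separately, since the company_id restriction never changes per company, Lucene of that era offers Filter classes (e.g. wrapping a query in a QueryWrapperFilter inside a CachingWrapperFilter) that skip scoring and can be cached across searches, instead of adding the restriction as a scoring MUST clause. Conceptually, a cached filter is just a precomputed set of allowed doc ids intersected with the query's matches. A toy illustration in plain Java, with BitSet standing in for the doc-id set (the class and method names here are invented for the sketch, not Lucene API):

```java
import java.util.BitSet;

public class FilterDemo {
    // A cached "filter" is just a precomputed bitset of allowed doc ids;
    // applying it is a cheap intersection, with no scoring involved.
    static BitSet applyFilter(BitSet queryMatches, BitSet companyDocs) {
        BitSet result = (BitSet) queryMatches.clone();
        result.and(companyDocs); // keep only docs belonging to the company
        return result;
    }

    public static void main(String[] args) {
        BitSet matches = new BitSet();     // docs matching the user query "mysql"
        matches.set(1);
        matches.set(3);
        matches.set(7);

        BitSet company1001 = new BitSet(); // docs owned by company 1001,
        company1001.set(3);                // computed once and cached
        company1001.set(7);
        company1001.set(9);

        System.out.println(applyFilter(matches, company1001)); // prints {3, 7}
    }
}
```

The bitset is built once per company and reused, so repeated searches pay only for the intersection rather than re-evaluating the company_id term as part of every scored query.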


Where do you usually experience the out-of-memory issues? Is it around this block?

MultiFieldQueryParser mfqp = new MultiFieldQueryParser(fields, new StandardAnalyzer(), boost);
mfqp.setAllowLeadingWildcard(true); 
Query userQuery = mfqp.parse(q);

Also, are you running the code for querying in conjunction with the indexing process?

