Situation: users can upload documents. When a document is uploaded, a queue message is placed onto the queue with the document's ID. The worker role picks this up, retrieves the document, and parses it completely with Lucene. Aft…
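A minimal sketch of that worker loop, with the queue client, document store, and Lucene indexing step replaced by in-memory stand-ins (all names here are placeholders, not the actual APIs involved):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class IndexWorker {
    // Placeholder for the document-store lookup (assumption, not a real API).
    static String fetchDocument(String id) {
        return "contents of " + id;
    }

    // Placeholder for the Lucene parse/index step.
    static String indexDocument(String id, String body) {
        return "indexed:" + id;
    }

    // Drain the queue: each message carries only a document ID,
    // as described above.
    public static List<String> drain(Queue<String> queue) {
        List<String> results = new ArrayList<>();
        while (!queue.isEmpty()) {
            String docId = queue.poll();
            String body = fetchDocument(docId);
            results.add(indexDocument(docId, body));
        }
        return results;
    }

    public static void main(String[] args) {
        Queue<String> queue = new ArrayDeque<>(List.of("doc-1", "doc-2"));
        System.out.println(drain(queue)); // [indexed:doc-1, indexed:doc-2]
    }
}
```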
I want to add some user data to the document's indexed field; this data will be stripped out by my custom tokenizer at run time and used later on by my custom filter.
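Inside a custom tokenizer or TokenFilter, the per-token logic for stripping embedded user data can be isolated and tested on its own. A sketch of that core logic in plain Java, assuming a hypothetical `term|userdata` marker convention (a real implementation would define its own delimiter):

```java
public class TokenPayloadSplitter {
    // Splits "term|userdata" into the bare term and the stripped user data.
    // The '|' marker is a hypothetical convention for illustration only.
    public static String[] split(String token) {
        int pos = token.indexOf('|');
        if (pos < 0) {
            return new String[] { token, "" };
        }
        return new String[] { token.substring(0, pos), token.substring(pos + 1) };
    }

    public static void main(String[] args) {
        String[] parts = split("lucene|boost=2");
        System.out.println(parts[0] + " / " + parts[1]); // lucene / boost=2
    }
}
```

In a real TokenFilter, the stripped portion would typically be stored in a custom attribute or as a token payload so the downstream filter can read it.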
I want to grab some data from a Lucene index file, but I can't read it. I tried to use Luke, but it always crashes with java.lang.OutOfMemoryError: Java heap space. Note: -Xmx doesn't help.
I'm using Solr 3.4 and FieldCollapsing. I would like to group all messages using FieldCollapsing, but by default every group contains only one message.
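That one-document-per-group behavior is the `group.limit` default (1). Raising it returns more documents per group; a sketch of the request, with the field name `message_thread` as a placeholder:

```
http://localhost:8983/solr/select?q=*:*&group=true&group.field=message_thread&group.limit=10
```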
I have a situation where I need to use both EdgeNGramFilterFactory and NGramFilterFactory.
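Since chaining the two filters in one analyzer compounds their output, one common approach is to define two field types and `copyField` the source into both. A schema.xml sketch, with field-type names and gram sizes chosen for illustration:

```xml
<fieldType name="text_edge" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.EdgeNGramFilterFactory" minGramSize="2" maxGramSize="15" side="front"/>
  </analyzer>
</fieldType>
<fieldType name="text_ngram" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.NGramFilterFactory" minGramSize="2" maxGramSize="15"/>
  </analyzer>
</fieldType>
```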
I'm a beginner and I used Lucene to calculate precision in my program, but the results are not correct. I want to know if there is any way of tracing the code to see what exactly happens. Th…
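One way to trace the problem is to compute precision independently of Lucene, from the retrieved and relevant document sets, and compare the two results. A minimal sketch (document IDs are made up):

```java
import java.util.List;
import java.util.Set;

public class PrecisionCheck {
    // precision = |retrieved ∩ relevant| / |retrieved|
    public static double precision(List<String> retrieved, Set<String> relevant) {
        if (retrieved.isEmpty()) return 0.0;
        long hits = retrieved.stream().filter(relevant::contains).count();
        return (double) hits / retrieved.size();
    }

    public static void main(String[] args) {
        List<String> retrieved = List.of("d1", "d2", "d3", "d4");
        Set<String> relevant = Set.of("d1", "d3", "d9");
        // 2 of the 4 retrieved documents are relevant.
        System.out.println(precision(retrieved, relevant)); // 0.5
    }
}
```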
I'm currently implementing a Solr solution where a user is able to select various options to search for a product. I can now take all those options and put them together into one single long query, o…
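An alternative to one long concatenated query is to send each selected option as a separate `fq` (filter query) parameter, which Solr can cache independently. A sketch of the request, with field names invented for illustration:

```
q=product_name:phone&fq=color:black&fq=brand:acme&fq=price:[100 TO 300]
```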
Using Solr 3.3.

Key   Store   Item Name   Description   Category   Price
=========================================================
I am able to retrieve the most frequently used terms in my index via the terms component described here:
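A typical terms component request looks like this, with the field name `text` as a placeholder:

```
http://localhost:8983/solr/terms?terms.fl=text&terms.limit=10&terms.sort=count
```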
Is the forward slash "/" a reserved character in Solr field names? I'm having trouble writing a Solr sort query that will parse for fields containing a forward slash "/".
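For query values (as opposed to field names), special characters can be backslash-escaped before the query parser sees them. A plain-Java sketch of such an escaper, modeled on the behavior of SolrJ's `ClientUtils.escapeQueryChars` (this is a sketch, not that class, and the exact character set to escape depends on the Solr version):

```java
public class QueryEscaper {
    // Backslash-escapes characters the query parser may treat specially,
    // including '/'. The character set below is an assumption for
    // illustration, not an authoritative list.
    public static String escape(String s) {
        StringBuilder sb = new StringBuilder();
        for (char c : s.toCharArray()) {
            if ("\\+-!():^[]\"{}~*?|&;/ ".indexOf(c) >= 0) {
                sb.append('\\');
            }
            sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(escape("path/to/field")); // path\/to\/field
    }
}
```

Field names with slashes are a separate problem: escaping helps in query values, but renaming the field (e.g. replacing "/" with "_") at index time is often the simpler fix.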