Result of semantic search on Java Lucene

I've implemented Latent Semantic Analysis on top of Lucene.

The result of the algorithm is a matrix with 2 columns, where the first column is the document index and the second is the similarity.

I want to write this result into the org.apache.lucene.search.Collector passed to Searcher's search method, but I don't know how to set the results on the collector object.

The code for the search method is:

    public void search(Weight weight, Filter filter, Collector collector) throws IOException
    {
        String textQuery = weight.getQuery().toString("contents");
        System.out.println(textQuery);
        double[][] ind = lsa.searchOnDoc(textQuery);
        // ind contains the document index and the similarity
        if (ind != null)
        {
            //construct the collector object
            for (int i = 0; i < ind.length; i++)
            {
                int doc = (int) ind[i][0];
                double simi = ind[i][1];
                //collector.collect(doc);
                //collector.setScorer(simi);
                //This is the problem
            }
        }
        else
        {
            collector = null;
        }
    }

I don't know the right steps to copy the values of ind into the collector object.

Can you help me?


I don't quite get why you decided to shove LSI into Searcher.
And getting your text query from Weight looks especially shady - why not use the original query instead and skip all the (broken) conversions?

But the Collector is handled as follows (a rough sketch of the whole sequence follows this list).
For each segment in your index:

  1. Supply it the corresponding SegmentReader with collector.setNextReader(reader, base). You can get these with ir.getSequentialSubReaders() and ir.getSubReaderStarts() on the top-level reader. So,

    • reader may be used by the collector to load sort fields/caches during collection, and additional fields to augment the search results once collection is done,
    • base is the number added to segment/local docIDs (they start from 0 for each segment) to convert them to index/global docIDs.
  2. Supply it a Scorer implementation with collector.setScorer(scorer).
    The collector may use it during the next phase to get the score for each document. Though if the collector only counts the results, sorts on some stored field, or just feels like it - the scorer will be ignored.
    The only method collectors invoke on a Scorer instance is scorer.score(), which should return the score (I kid you not) for the current document being collected.

  3. Repeatedly call collector.collect(id) with a monotonically increasing sequence of segment/local docIDs that match your query.
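
Put together, a rough sketch of that handshake with a 3.x-era API (`topReader` would be your Searcher's top-level IndexReader, `scorer` the wrapper described below, and `matchesInSegment(...)` a hypothetical helper returning the segment-local docIDs your LSA matched, in increasing order):

    IndexReader[] subReaders = topReader.getSequentialSubReaders();
    int base = 0;
    for (IndexReader sub : subReaders) {
        collector.setNextReader(sub, base);          // 1. announce the segment
        collector.setScorer(scorer);                 // 2. hand over a Scorer
        for (int localDoc : matchesInSegment(sub)) {
            collector.collect(localDoc);             // 3. segment-local docIDs, ascending
        }
        base += sub.maxDoc();                        // doc base of the next segment
    }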

Going back to your code - make a wrapper that extends Scorer, use a single instance with a field that you update with simi on each iteration, have the wrapper's score() method return that field, and shove this instance into the collector with setScorer() before the loop.
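
A minimal sketch of such a wrapper, assuming the Lucene 2.9/3.0-style Scorer constructor that takes a Similarity (in 3.1+ it takes a Weight instead); the class and field names here are made up:

    import org.apache.lucene.search.Scorer;

    // Minimal Scorer wrapper: its only job is to hand the collector a
    // precomputed LSA similarity for the document currently being collected.
    class PrecomputedScorer extends Scorer {
        float currentScore;
        int currentDoc = -1;

        PrecomputedScorer() {
            super(null);             // we never compute scores ourselves, so no Similarity needed
        }

        @Override
        public float score() {
            return currentScore;     // whatever the search loop stored last
        }

        @Override
        public int docID() {
            return currentDoc;
        }

        @Override
        public int nextDoc() {
            return NO_MORE_DOCS;     // iteration is driven by the search loop, not by Lucene
        }

        @Override
        public int advance(int target) {
            return NO_MORE_DOCS;
        }
    }

Then the loop in your search method becomes roughly:

    PrecomputedScorer scorer = new PrecomputedScorer();
    collector.setScorer(scorer);
    for (int i = 0; i < ind.length; i++) {
        scorer.currentDoc = (int) ind[i][0];     // must be a segment-local docID
        scorer.currentScore = (float) ind[i][1];
        collector.collect(scorer.currentDoc);
    }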

You also need lsa.searchOnDoc to return per-segment results.
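
If searchOnDoc can only hand you index-wide docIDs, you'll need to map them back to per-segment ones yourself. A hypothetical helper, assuming starts holds each segment's doc base in ascending order (starts[0] == 0):

    // Hypothetical helper: map an index-wide docID to its segment-local docID,
    // given the per-segment doc bases (ascending, starts[0] == 0).
    static int toLocalDocId(int globalDoc, int[] starts) {
        int segment = 0;
        while (segment + 1 < starts.length && starts[segment + 1] <= globalDoc) {
            segment++;                   // find the segment whose range holds globalDoc
        }
        return globalDoc - starts[segment];
    }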
