
Lucene DuplicateFilter question

Why doesn't DuplicateFilter work together with other filters? For example, after a small modification of DuplicateFilterTest, it looks as though the duplicate filter is not combined with the other filters but instead trims the results first:

    public void testKeepsLastFilter()
            throws Throwable {
        DuplicateFilter df = new DuplicateFilter(KEY_FIELD);
        df.setKeepMode(DuplicateFilter.KM_USE_LAST_OCCURRENCE);

        Query q = new ConstantScoreQuery(new ChainedFilter(new Filter[]{
                new QueryWrapperFilter(tq),
                // new QueryWrapperFilter(new TermQuery(new Term("text", "out"))), // works as expected: "out" is in the last document
                new QueryWrapperFilter(new TermQuery(new Term("text", "now"))) // why doesn't this work? "now" is in the third document, but the hit count is 0

        }, ChainedFilter.AND));

        // these variants don't produce hits either:
        // ScoreDoc[] hits = searcher.search(new FilteredQuery(tq, df), new QueryWrapperFilter(new TermQuery(new Term("text", "now"))), 1000).scoreDocs;
        // ScoreDoc[] hits = searcher.search(new FilteredQuery(tq, new QueryWrapperFilter(new TermQuery(new Term("text", "now")))), df, 1000).scoreDocs;

        ScoreDoc[] hits = searcher.search(q, df, 1000).scoreDocs;

        assertTrue("Filtered searching should have found some matches", hits.length > 0);
        for (int i = 0; i < hits.length; i++) {
            Document d = searcher.doc(hits[i].doc);
            String url = d.get(KEY_FIELD);
            TermDocs td = reader.termDocs(new Term(KEY_FIELD, url));
            int lastDoc = 0;
            while (td.next()) {
                lastDoc = td.doc();
            }
            assertEquals("Duplicate urls should return last doc", lastDoc, hits[i].doc);
        }
    }


DuplicateFilter independently constructs a filter that selects either the first or the last occurrence of the documents containing each key, across the whole index. This bit set can be cached with minimal memory overhead.

Your second filter independently selects some other set of documents, and the two choices need not coincide. To deduplicate relative to an arbitrary subset of documents, the filter would probably need a field cache to stay performant, and that is where things get expensive RAM-wise.
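To make the mismatch concrete, here is a minimal self-contained sketch (plain java.util.BitSet, no Lucene dependency, with hypothetical doc numbers and keys) of how a last-occurrence duplicate filter and a term filter each produce a bit set independently, so their AND can be empty even though both match documents on their own:

```java
import java.util.BitSet;
import java.util.HashMap;
import java.util.Map;

public class DuplicateIntersectionDemo {

    // Simulates KM_USE_LAST_OCCURRENCE: for each key, keep only
    // the bit of the last document carrying that key.
    static BitSet keepLastPerKey(String[] keys) {
        Map<String, Integer> lastDoc = new HashMap<>();
        for (int doc = 0; doc < keys.length; doc++) {
            lastDoc.put(keys[doc], doc);
        }
        BitSet bits = new BitSet(keys.length);
        for (int doc : lastDoc.values()) {
            bits.set(doc);
        }
        return bits;
    }

    public static void main(String[] args) {
        // Hypothetical index: four docs all sharing the same key;
        // doc 2 contains "now", doc 3 contains "out".
        String[] keys = {"url1", "url1", "url1", "url1"};

        BitSet dupFilter = keepLastPerKey(keys); // selects doc 3 only

        BitSet nowFilter = new BitSet(4);        // docs matching "now"
        nowFilter.set(2);

        BitSet outFilter = new BitSet(4);        // docs matching "out"
        outFilter.set(3);

        BitSet andNow = (BitSet) dupFilter.clone();
        andNow.and(nowFilter);                   // empty: doc 2 was discarded as a duplicate

        BitSet andOut = (BitSet) dupFilter.clone();
        andOut.and(outFilter);                   // doc 3 survives: it is the last occurrence

        System.out.println(andNow.cardinality()); // 0
        System.out.println(andOut.cardinality()); // 1
    }
}
```

This is exactly the pattern in the test above: the "now" document exists, but it is not the last occurrence of its key, so the duplicate filter has already dropped it before the AND is computed.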

