
Lucene stop phrases filter

I'm trying to write a filter for Lucene, similar to StopFilter (i.e., extending TokenFilter), but I need to remove phrases (sequences of tokens) instead of individual words.

The "stop 开发者_运维问答phrases" are represented themselves as a sequence of tokens: punctuation is not considered.

I think I need to do some kind of buffering of the tokens in the token stream, and when a full phrase is matched, I discard all tokens in the buffer.

What would be the best approach to implement a "stop phrases" filter given a stream of tokens like Lucene's TokenStream?


In this thread I was given a solution: use Lucene's CachingTokenFilter as a starting point.

That solution was actually the right way to go.

EDIT: the link above is dead, so here is a transcript of the thread instead.

MY QUESTION:

I'm trying to implement a "stop phrases filter" with the new TokenStream API.

I would like to be able to peek N tokens ahead, check whether the current token plus the N subsequent tokens match a "stop phrase" (the stop phrases are stored in a HashSet), and then either discard all of these tokens if they match a stop phrase, or keep them all if they don't.

For this purpose I would like to use captureState() and then restoreState() to get back to the starting point of the stream.
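
For reference, the intended peek-and-restore pattern looks roughly like this (a minimal sketch against the Lucene 2.9/3.x attribute API; the class name is hypothetical, and as the answer below explains, restoreState() restores attribute values, not the stream position):

    // Sketch only: restoreState() copies saved attribute values back,
    // but it does not rewind the underlying stream.
    import java.io.IOException;

    import org.apache.lucene.analysis.TokenFilter;
    import org.apache.lucene.analysis.TokenStream;

    public class PeekingFilter extends TokenFilter {

        protected PeekingFilter(TokenStream input) {
            super(input);
        }

        @Override
        public final boolean incrementToken() throws IOException {
            if (!input.incrementToken())
                return false;
            State saved = captureState();   // snapshot of the current attribute values
            input.incrementToken();         // peek: the attributes now hold the next token
            restoreState(saved);            // the snapshot is copied back into the attributes,
                                            // but `input` itself stays advanced by one token
            return true;
        }
    }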

I tried many combinations of these APIs. My last attempt is in the code below, which doesn't work.

    // Stop phrases, stored as space-separated strings of terms
    static private HashSet<String> m_stop_phrases = new HashSet<String>();
    // Length of the longest stop phrase, used to bound the look-ahead below
    static private int m_max_stop_phrase_length = 0;
    ...
    // (termAtt and posIncrAtt are the term / position-increment attributes,
    // declared in the elided code above)
    public final boolean incrementToken() throws IOException {
        if (!input.incrementToken())
            return false;
        Stack<State> stateStack = new Stack<State>();
        StringBuilder match_string_builder = new StringBuilder();
        int skippedPositions = 0;
        boolean is_next_token = true;
        // Accumulate upcoming terms into a space-separated string, capturing
        // each token's state so the stream can supposedly be rolled back later
        while (is_next_token && match_string_builder.length() < m_max_stop_phrase_length) {
            if (match_string_builder.length() > 0)
                match_string_builder.append(" ");
            match_string_builder.append(termAtt.term());
            skippedPositions += posIncrAtt.getPositionIncrement();
            stateStack.push(captureState());
            is_next_token = input.incrementToken();
            if (m_stop_phrases.contains(match_string_builder.toString())) {
                // Stop phrase found: skip the matched tokens
                // without restoring their states
                posIncrAtt.setPositionIncrement(posIncrAtt.getPositionIncrement() + skippedPositions);
                return is_next_token;
            }
        }
        // No stop phrase found: try to restore the stream
        // (this rollback is what doesn't work; see the answer below)
        while (!stateStack.empty())
            restoreState(stateStack.pop());
        return true;
    }

What is the correct direction to take to implement my "stop phrases" filter?

CORRECT ANSWER:

restoreState only restores the token contents, not the complete stream, so you cannot roll back the token stream (this was also not possible with the old API). Because of this, the while loop at the end of your code does not work as you expect. You may use CachingTokenFilter, which can be reset and consumed again, as a source for further work.
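
For illustration, here is a minimal sketch of that two-pass approach, using the same Lucene 2.9/3.x API as the code above (the phrase-matching logic itself is elided):

    import java.io.IOException;
    import java.io.Reader;
    import java.util.ArrayList;
    import java.util.List;

    import org.apache.lucene.analysis.CachingTokenFilter;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.standard.StandardTokenizer;
    import org.apache.lucene.analysis.tokenattributes.TermAttribute;
    import org.apache.lucene.util.Version;

    public class TwoPassSketch {

        public static void process(Reader reader) throws IOException {
            TokenStream source = new StandardTokenizer(Version.LUCENE_29, reader);
            CachingTokenFilter cached = new CachingTokenFilter(source);
            TermAttribute termAtt = cached.addAttribute(TermAttribute.class);

            // First pass: consuming the filter fills its internal cache.
            List<String> terms = new ArrayList<String>();
            while (cached.incrementToken()) {
                terms.add(termAtt.term());
            }

            // ... match `terms` against the stop-phrase set here ...

            // Second pass: reset() rewinds to the start of the cache, so the
            // same tokens can be consumed again, skipping matched positions.
            cached.reset();
            while (cached.incrementToken()) {
                // emit or drop the current token based on the match results
            }
        }
    }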


You'll really have to write your own Analyzer, I should think, since whether or not a sequence of words is a "phrase" depends on cues, such as punctuation, that are no longer available after tokenization.
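
For example, a custom Analyzer in that era's API would assemble its chain roughly as follows (a sketch only; StopPhraseFilter stands for the hypothetical filter discussed in the question, and a punctuation-preserving tokenizer would have to replace StandardTokenizer for those cues to survive):

    import java.io.Reader;

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.LowerCaseFilter;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.standard.StandardTokenizer;
    import org.apache.lucene.util.Version;

    public class StopPhraseAnalyzer extends Analyzer {

        @Override
        public TokenStream tokenStream(String fieldName, Reader reader) {
            TokenStream stream = new StandardTokenizer(Version.LUCENE_29, reader);
            stream = new LowerCaseFilter(stream);
            // StopPhraseFilter is the hypothetical filter discussed above
            stream = new StopPhraseFilter(stream);
            return stream;
        }
    }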
