
Problem using Solr WordDelimiterFilter

I am doing some tests with WordDelimiterFilter in Solr, but it does not preserve the protected word list I pass to it. Could you please inspect the code and the example output below and point out what is missing or used incorrectly?

When running this code:

import java.io.Reader;
import java.util.HashMap;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.util.Version;
import org.apache.solr.analysis.WordDelimiterFilterFactory;
import org.apache.solr.common.ResourceLoader;
import org.apache.solr.core.SolrResourceLoader;

private static Analyzer getWordDelimiterAnalyzer() {
    return new Analyzer() {
        @Override
        public TokenStream tokenStream(String fieldName, Reader reader) {
            // Tokenize the raw input first, then run the tokens through the filter.
            TokenStream stream = new StandardTokenizer(Version.LUCENE_32, reader);
            WordDelimiterFilterFactory wordDelimiterFilterFactory = new WordDelimiterFilterFactory();
            HashMap<String, String> args = new HashMap<String, String>();
            args.put("generateWordParts", "1");
            args.put("generateNumberParts", "1");
            args.put("catenateWords", "1");
            args.put("catenateNumbers", "1");
            args.put("catenateAll", "0");
            args.put("luceneMatchVersion", Version.LUCENE_32.name());
            args.put("language", "English");
            args.put("protected", "protected.txt");
            wordDelimiterFilterFactory.init(args);
            // Let the factory load protected.txt through a Solr resource loader.
            ResourceLoader loader = new SolrResourceLoader(null, null);
            wordDelimiterFilterFactory.inform(loader);
            /* Alternative: feed the protected words from an in-memory mock loader.
            List<String> protectedWords = new ArrayList<String>();
            protectedWords.add("good bye");
            protectedWords.add("hello world");
            wordDelimiterFilterFactory.inform(new LinesMockSolrResourceLoader(protectedWords));
            */
            return wordDelimiterFilterFactory.create(stream);
        }
    };
}

input text:

hello world

good bye

what is your plan for future?


protected strings:

good bye

hello world


output:

(hello,startOffset=0,endOffset=5,positionIncrement=1,type=)

(world,startOffset=6,endOffset=11,positionIncrement=1,type=)

(good,startOffset=12,endOffset=16,positionIncrement=1,type=)

(bye,startOffset=17,endOffset=20,positionIncrement=1,type=)

(what,startOffset=21,endOffset=25,positionIncrement=1,type=)

(is,startOffset=26,endOffset=28,positionIncrement=1,type=)

(your,startOffset=29,endOffset=33,positionIncrement=1,type=)

(plan,startOffset=34,endOffset=38,positionIncrement=1,type=)

(for,startOffset=39,endOffset=42,positionIncrement=1,type=)

(future,startOffset=43,endOffset=49,positionIncrement=1,type=)


You are using StandardTokenizer, which tokenizes at least on whitespace, so "hello world" will always be split into "hello" and "world".

TokenStream stream = new StandardTokenizer(Version.LUCENE_32, reader);
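
To see this in isolation, here is a minimal sketch (the class name TokenizerDemo is made up for illustration) that runs StandardTokenizer alone on "hello world"; the phrase reaches WordDelimiterFilter as two separate tokens, so a protected entry like "hello world" can never match:

import java.io.StringReader;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class TokenizerDemo {
    public static void main(String[] args) throws Exception {
        // StandardTokenizer splits the input on whitespace (among other rules),
        // so downstream filters only ever see the individual words.
        StandardTokenizer tokenizer = new StandardTokenizer(Version.LUCENE_32,
                new StringReader("hello world"));
        CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
        while (tokenizer.incrementToken()) {
            System.out.println(term.toString()); // prints "hello", then "world"
        }
        tokenizer.end();
        tokenizer.close();
    }
}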

See Lucene Documentation:

public final class StandardTokenizer extends Tokenizer

A grammar-based tokenizer constructed with JFlex

This should be a good tokenizer for most European-language documents:

  • Splits words at punctuation characters, removing punctuation. However, a dot that's not followed by whitespace is considered part of a token.
  • Splits words at hyphens, unless there's a number in the token, in which case the whole token is interpreted as a product number and is not split.
  • Recognizes email addresses and internet hostnames as one token.

The word delimiter filter's protected word list is meant for cases like:

  • ISBN2345677 being split into ISBN and 2345677 (the default behavior)
  • text2html not being split into text, 2, html (because text2html was added to the protected words)
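
A minimal sketch of that intended use, assuming protected.txt can be found on the resource loader's path and contains the single line text2html (the class name ProtectedWordsDemo is made up, and WhitespaceTokenizer is used only to keep the example small):

import java.io.StringReader;
import java.util.HashMap;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;
import org.apache.solr.analysis.WordDelimiterFilterFactory;
import org.apache.solr.core.SolrResourceLoader;

public class ProtectedWordsDemo {
    public static void main(String[] args) throws Exception {
        WordDelimiterFilterFactory factory = new WordDelimiterFilterFactory();
        HashMap<String, String> params = new HashMap<String, String>();
        params.put("generateWordParts", "1");
        params.put("generateNumberParts", "1");
        params.put("luceneMatchVersion", Version.LUCENE_32.name());
        params.put("protected", "protected.txt"); // assumed to contain: text2html
        factory.init(params);
        factory.inform(new SolrResourceLoader(null, null));

        // Both inputs arrive as single tokens; only the unprotected one is split.
        TokenStream stream = factory.create(new WhitespaceTokenizer(
                Version.LUCENE_32, new StringReader("ISBN2345677 text2html")));
        CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
        while (stream.incrementToken()) {
            System.out.println(term.toString());
        }
        // expected output: ISBN, 2345677, text2html
    }
}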

If you really want to do something like what you describe, you could use the KeywordTokenizer, but then you have to do all the splitting yourself.
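
A minimal sketch of that approach (the class name KeywordDemo is made up); KeywordTokenizer ships with Lucene and emits the entire input as one token:

import java.io.StringReader;
import org.apache.lucene.analysis.KeywordTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class KeywordDemo {
    public static void main(String[] args) throws Exception {
        // KeywordTokenizer passes the whole input through as a single token,
        // so a multi-word phrase like "hello world" stays intact; any splitting
        // you still want has to be done by your own downstream filters.
        KeywordTokenizer tokenizer = new KeywordTokenizer(
                new StringReader("hello world"));
        CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
        while (tokenizer.incrementToken()) {
            System.out.println(term.toString()); // prints "hello world"
        }
        tokenizer.end();
        tokenizer.close();
    }
}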

