
Searching TokenStream fields in Lucene

I am just starting out with Lucene, and I feel like I must have a fundamental misunderstanding of it, but from the samples and documentation I could not figure out this issue.

I cannot seem to get Lucene to return results for fields which are initialized with a TokenStream, whereas fields initialized with a string work fine. I am using Lucene.NET 2.9.2 RC2.

[Edit] I've also tried this with the latest Java version (3.0.3) and see the same behavior, so it is not some quirk of the port.

Here is a basic example:

Directory index = new RAMDirectory();
Document doc = new Document();
// Field built from a pre-tokenized TokenStream rather than a plain string
doc.Add(new Field("fieldName", new StandardTokenizer(new StringReader("Field Value Goes Here"))));
IndexWriter iw = new IndexWriter(index, new StandardAnalyzer());
iw.AddDocument(doc);
iw.Commit();
iw.Close();
// Query the same field for "value"
Query q = new QueryParser("fieldName", new StandardAnalyzer()).Parse("value");
IndexSearcher searcher = new IndexSearcher(index, true);
Console.WriteLine(searcher.Search(q).Length());

(I realize this uses APIs deprecated in 2.9, but that's just for brevity... pretend the arguments that specify the version are there and that I use one of the new Search overloads.)

This returns no results.

However, if I replace the line that adds the field with

doc.Add(new Field("fieldName", "Field Value Goes Here", Field.Store.NO, Field.Index.ANALYZED));

then the query returns a hit, as I would expect. It also works if I use the TextReader version.
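
By the TextReader version I mean something like the following (again using the deprecated constructors for brevity); in this form the field is analyzed at AddDocument time by the IndexWriter's StandardAnalyzer rather than being pre-tokenized:

doc.Add(new Field("fieldName", new StringReader("Field Value Goes Here")));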

Both fields are indexed and tokenized, with (I think) the same tokenizer/analyzer (I've also tried others), and neither is stored, so my intuition is that they should behave the same. What am I missing?


I have found the answer: it is a casing issue.

The token stream produced by StandardAnalyzer includes a LowerCaseFilter, whereas a StandardTokenizer created directly applies no such filter. The pre-tokenized field therefore indexes the terms with their original capitalization ("Field", "Value", ...), while the QueryParser's StandardAnalyzer lowercases the query term to "value", so nothing ever matches.
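
A minimal sketch of a fix, assuming the same deprecated Lucene.NET 2.9 constructors as in the example above: wrap the tokenizer in the same filters StandardAnalyzer applies, so the pre-analyzed terms are lowercased to match the query terms (StandardAnalyzer also adds a StopFilter, which does not matter for this example):

TokenStream stream = new StandardTokenizer(new StringReader("Field Value Goes Here"));
stream = new StandardFilter(stream);   // same token clean-up StandardAnalyzer applies
stream = new LowerCaseFilter(stream);  // the missing piece: lowercase the terms
doc.Add(new Field("fieldName", stream));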
