Normalizing Unicode data for indexing (for Multi-byte languages): What products do this? Does Lucene/Hadoop/Solr?

I have several (1 million+) documents, email messages, etc., that I need to index and search through. Each document potentially has a different encoding.

What products (or configuration of those products) do I need to learn and understand to do this properly?

My first guess is something Lucene-based, but this is something I'm learning as I go. My main desire is to start the time-consuming encoding process ASAP so that we can concurrently build the search front end. This may require some sort of normalisation of double-byte characters.

Any help is appreciated.


Convert everything to UTF-8 and run it through Normalization Form D, too. That will help with your searches.
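
A minimal Java sketch of that idea, assuming you already know (or have detected) each document's source encoding; the ISO-8859-1 input here is just for illustration:

    import java.nio.charset.Charset;
    import java.nio.charset.StandardCharsets;
    import java.text.Normalizer;

    public class NormalizeExample {
        // Decode raw bytes using the known source charset, then apply NFD.
        static String toNfd(byte[] raw, Charset sourceCharset) {
            String decoded = new String(raw, sourceCharset);
            return Normalizer.normalize(decoded, Normalizer.Form.NFD);
        }

        public static void main(String[] args) {
            byte[] raw = "Ångström".getBytes(StandardCharsets.ISO_8859_1);
            String normalized = toNfd(raw, StandardCharsets.ISO_8859_1);
            // Re-encode as UTF-8 for storage and indexing.
            byte[] utf8 = normalized.getBytes(StandardCharsets.UTF_8);
            System.out.println(normalized + " -> " + utf8.length + " UTF-8 bytes");
        }
    }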


You could try Tika.
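
If Tika is on the classpath, a rough sketch of extracting plain text looks like this (Tika detects the document format and character encoding itself; the file name is hypothetical):

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import org.apache.tika.Tika;

    public class TikaExtractExample {
        public static void main(String[] args) throws Exception {
            Tika tika = new Tika();
            try (InputStream in = Files.newInputStream(Paths.get("message.eml"))) {
                // Auto-detects the document type and encoding, then returns the extracted text.
                String text = tika.parseToString(in);
                System.out.println(text);
            }
        }
    }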


Are you implying you need to transform the documents themselves? This sounds like a bad idea, especially on a large, heterogeneous collection.

A good search engine will have robust encoding detection. Lucene does, and Solr uses it (Hadoop isn't a search engine). And I don't think it's possible to have a search engine that doesn't use a normalised encoding in its internal index format. So normalisation won't be a selection criterion, though trying out each product's encoding detection should be.
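
If you want to exercise encoding detection on your collection before committing to a stack, one option is ICU4J's CharsetDetector (the same kind of detection Tika builds on). A rough sketch, with a hypothetical file name:

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import com.ibm.icu.text.CharsetDetector;
    import com.ibm.icu.text.CharsetMatch;

    public class DetectEncodingExample {
        public static void main(String[] args) throws Exception {
            byte[] raw = Files.readAllBytes(Paths.get("message.eml"));
            CharsetDetector detector = new CharsetDetector();
            detector.setText(raw);
            CharsetMatch match = detector.detect(); // best guess; can be null for very short input
            if (match != null) {
                System.out.println(match.getName() + " (confidence " + match.getConfidence() + ")");
            }
        }
    }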


I suggest you use Solr. The ExtractingRequestHandler handles encodings and document formats. It is relatively easy to get a working prototype using Solr. DataImportHandler enables importing a document repository into Solr.
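
A rough SolrJ sketch of posting one file to the ExtractingRequestHandler; the Solr URL, core name, document id, and file name are assumptions for illustration:

    import java.io.File;
    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.request.AbstractUpdateRequest;
    import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;

    public class SolrCellExample {
        public static void main(String[] args) throws Exception {
            try (SolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/docs").build()) {
                ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update/extract");
                req.addFile(new File("message.eml"), "message/rfc822"); // extraction happens server-side
                req.setParam("literal.id", "doc-1");                    // value for the schema's unique key field
                req.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
                solr.request(req);
            }
        }
    }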
