How can I make pattern search faster?
I am working with a roughly 1 GB file that grows incrementally, and I want to search it for a particular pattern. Currently I am using Java regular expressions; do you have any idea how I can do this faster?
Sounds like a job for Apache Lucene.
You will probably have to rethink your search strategy, but this library is made for exactly this kind of task, including adding to indexes incrementally.
It works by building inverted indexes of your data (documents in Lucene parlance), and then quickly checking those indexes to find which documents contain parts of your pattern.
You can store metadata alongside the indexed documents, so in the majority of use cases you may not need to consult the big file at all.
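As a rough illustration, here is a minimal sketch of incremental indexing and searching with Lucene. It assumes a recent Lucene version; the `content`/`offset` field names, the `index/` directory, and the sample text are placeholders, not anything from the original question:

```java
import java.nio.file.Paths;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StoredField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class LuceneSketch {
    public static void main(String[] args) throws Exception {
        Directory dir = FSDirectory.open(Paths.get("index"));

        // Index one chunk of newly appended data; call this again each time the file grows.
        try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
            Document doc = new Document();
            doc.add(new TextField("content", "some newly appended text", Field.Store.NO));
            doc.add(new StoredField("offset", 123456L)); // metadata: where this chunk lives in the big file
            writer.addDocument(doc);
        }

        // Query the index instead of rescanning the 1 GB file.
        try (DirectoryReader reader = DirectoryReader.open(dir)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            TopDocs hits = searcher.search(new TermQuery(new Term("content", "appended")), 10);
            System.out.println("matches: " + hits.totalHits);
        }
    }
}
```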
Basically what you need is a state machine that can process a stream, with the stream bound to the file. Each time the file grows, you read what has been appended to it (like the Linux tail command, which appends to standard output the lines added to a file).
If you need to stop and restart your analyser, you can either store the start position somewhere (which may depend on the window your pattern matching needs) and restart from there, or restart from scratch.
That is for the "increasing file" part of the problem.
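A minimal sketch of that tail-like loop, assuming a line-oriented file; the file name `data.log` and the one-second poll interval are placeholders:

```java
import java.io.IOException;
import java.io.RandomAccessFile;

public class TailFollower {
    public static void main(String[] args) throws IOException, InterruptedException {
        long position = 0; // persist this if you need to survive restarts
        while (true) {
            try (RandomAccessFile file = new RandomAccessFile("data.log", "r")) {
                if (file.length() > position) {
                    file.seek(position);              // skip what we already processed
                    String line;
                    while ((line = file.readLine()) != null) {
                        process(line);                // feed the state machine / matcher
                    }
                    position = file.getFilePointer(); // remember where we stopped
                }
            }
            Thread.sleep(1000); // poll for newly appended data
        }
    }

    static void process(String line) {
        // apply your pattern matching here
    }
}
```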
As for the best way to process the content, it depends on what you really need and on the kind of data and pattern you want to apply. Regular expressions may well be the best solution: flexible, fast, and relatively convenient.
From my understanding, Lucene would be good if you wanted to do document-style search over natural language content. It would be a poor choice for matching all dates, or all lines with a specific property, not least because Lucene first builds an index of the document. Since indexing itself takes time, it only pays off for really heavy, repeated querying.
You can try using the Pattern and Matcher classes to search with compiled expressions.
See http://download.oracle.com/javase/1.4.2/docs/api/java/util/regex/Pattern.html and http://download.oracle.com/javase/tutorial/essential/regex/
or use your favorite search engine to search for the terms:
- java regular expression optimization
- java regular expression performance
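For example, a minimal sketch of compiling a pattern once and reusing it across lines; the pattern and file name are placeholders:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CompiledSearch {
    public static void main(String[] args) throws IOException {
        // Compile once, outside the loop; recompiling per line is a common performance mistake.
        Pattern pattern = Pattern.compile("\\bERROR\\b");
        try (BufferedReader reader = new BufferedReader(new FileReader("data.log"))) {
            Matcher matcher = pattern.matcher("");
            String line;
            while ((line = reader.readLine()) != null) {
                matcher.reset(line); // reuse the Matcher instead of allocating a new one
                if (matcher.find()) {
                    System.out.println(line);
                }
            }
        }
    }
}
```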
I think it depends on:
- the structure of your data (line oriented?)
- the complexity of the match
- the speed at which the data file is growing
If your data is line oriented (or block oriented) and a match must occur within such a unit, you can match up to the last complete block and store the file position of that endpoint. The next scan should start at that endpoint (possibly using RandomAccessFile.seek()).
This particularly helps if the data isn't growing all that fast.
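A minimal sketch of such a resumable scan, persisting the endpoint between runs; the checkpoint file name `scan.offset`, the data file name, and the pattern are placeholders:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.regex.Pattern;

public class ResumableScan {
    public static void main(String[] args) throws IOException {
        Path checkpoint = Paths.get("scan.offset");
        long start = Files.exists(checkpoint)
                ? Long.parseLong(new String(Files.readAllBytes(checkpoint)).trim())
                : 0L;

        Pattern pattern = Pattern.compile("needle");
        try (RandomAccessFile file = new RandomAccessFile("data.log", "r")) {
            file.seek(start); // resume where the previous scan ended
            String line;
            while ((line = file.readLine()) != null) {
                if (pattern.matcher(line).find()) {
                    System.out.println(line);
                }
            }
            // store the endpoint of the last complete line for the next run
            Files.write(checkpoint, Long.toString(file.getFilePointer()).getBytes());
        }
    }
}
```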
If your match is highly complex but contains a distinctive fixed text, and the pattern doesn't occur all that often, you may be faster with a String.contains() check, applying the pattern only when that check succeeds. Since compiled patterns tend to be highly optimized, however, this is definitely not guaranteed to be faster.
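A minimal sketch of that pre-filter idea; the literal and the pattern are made-up examples, and you should measure before adopting this:

```java
import java.util.regex.Pattern;

public class PrefilteredMatch {
    // A complex pattern that always contains the fixed literal "ORDER-".
    private static final Pattern ORDER = Pattern.compile("ORDER-\\d{4}-[A-Z]{2}\\d{6}");

    static boolean matches(String line) {
        // Cheap substring check first; only run the regex engine on candidates.
        return line.contains("ORDER-") && ORDER.matcher(line).find();
    }

    public static void main(String[] args) {
        System.out.println(matches("shipped ORDER-2011-AB123456 today")); // true
        System.out.println(matches("no order reference here"));           // false
    }
}
```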
You may even think of replacing the regex with a hand-written parser, possibly based on StringTokenizer or some such. That's definitely a lot of work to get right, but it would allow you to pass some extra intelligence about the data into the parser, allowing it to fail fast. This would only be a good option if you really know a lot about the data that you can't encode in a pattern.
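As a toy illustration of the fail-fast idea (the "ID:" plus eight digits format is hypothetical): by checking the cheapest properties first, the hand-rolled matcher rejects most lines without ever examining their contents.

```java
public class HandRolledMatcher {
    // Matches lines of the hypothetical form "ID:" followed by exactly 8 digits,
    // bailing out at the first check that rules a line out.
    static boolean matchesIdLine(String line) {
        if (line.length() != 11 || !line.startsWith("ID:")) {
            return false; // fail fast on length and prefix before looking at digits
        }
        for (int i = 3; i < 11; i++) {
            char c = line.charAt(i);
            if (c < '0' || c > '9') {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(matchesIdLine("ID:12345678")); // true
        System.out.println(matchesIdLine("ID:1234567X")); // false
    }
}
```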