What is generally faster, grepping through files or running a SQL LIKE %x% query through blobs?

Say I'm designing a tool that would save code snippets either in a PostgreSQL/MySQL database or on the file system. I want to search through these snippets. Using a search engine like Sphinx doesn't seem practical because we need exact text matches of code when searching code.

grep and ack have always worked great, but storing stuff in a database makes a large collection more manageable in certain ways. I'm wondering how the performance of running grep recursively over a tree of directories compares to running a query like SQL's LIKE or MySQL's REGEXP function over an equivalent number of records with TEXT blobs.


If you have 1M files to grep through, you will (as best I'm aware) scan each one with the regular expression.

For all intents and purposes, you're going to end up doing the same thing over table rows if you mass-query them using a LIKE operator or a regular expression.
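You can see this for yourself by asking the database for its query plan. A minimal sketch, using SQLite as a stand-in for PostgreSQL/MySQL (whose EXPLAIN output differs in format but tells the same story): a leading-wildcard LIKE cannot use an ordinary index, so the plan is a full table scan.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE snippets (id INTEGER PRIMARY KEY, body TEXT)")

# A LIKE '%x%' predicate has a leading wildcard, so no B-tree index can
# help; the plan below reports a full scan of the table.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM snippets WHERE body LIKE '%foo%'"
).fetchall()
print(plan)  # the detail column reports a SCAN of snippets
```

In other words, the database walks every row's blob just like grep walks every file.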

My own experience with grep, however, is that I seldom search for something that doesn't contain at least one full word, so you might be able to take advantage of a database to reduce the set in which you're searching.

MySQL has native full text search features, but I'd recommend against them, because using them means you're not using InnoDB.

You can read about those from Postgres here:

http://www.postgresql.org/docs/current/static/textsearch.html

After creating an index on a tsvector column, you can then do your "grep" in two steps: one to immediately find rows that might vaguely qualify, followed by another pass applying your true criteria:

select * from docs where tsvcol @@ :tsquery and (regexp at will);

That will be significantly faster than anything grep can do.
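The same two-step idea can be sketched without Postgres at all. The following is a toy illustration using SQLite and a hand-rolled inverted word index (table and function names are made up for the example): step one uses an indexed word lookup to narrow the candidate rows, step two runs the real regex only on the survivors.

```python
import re
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE snippets (id INTEGER PRIMARY KEY, body TEXT);
CREATE TABLE words (word TEXT, snippet_id INTEGER);
CREATE INDEX idx_words ON words (word);
""")

def add_snippet(body):
    cur = conn.execute("INSERT INTO snippets (body) VALUES (?)", (body,))
    sid = cur.lastrowid
    # index every distinct word in the snippet
    conn.executemany(
        "INSERT INTO words (word, snippet_id) VALUES (?, ?)",
        {(w.lower(), sid) for w in re.findall(r"\w+", body)},
    )
    return sid

def search(word, pattern):
    # step 1: indexed word lookup narrows the candidate set
    rows = conn.execute(
        "SELECT s.id, s.body FROM snippets s "
        "JOIN words w ON w.snippet_id = s.id WHERE w.word = ?",
        (word.lower(),),
    )
    # step 2: exact regex match only on the few candidates
    return [sid for sid, body in rows if re.search(pattern, body)]

add_snippet("def parse_config(path): return json.load(open(path))")
add_snippet("print('hello world')")
print(search("parse_config", r"parse_config\("))  # -> [1]
```

Postgres's tsvector/tsquery machinery does the first step far better than this sketch, but the shape of the query is the same: cheap index filter first, expensive exact match second.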


I can't compare them, but both will take a long time. My guess is that grep will be faster.

But MySQL supports full text indexing and searching, which will be faster than grep -- again, my guess.

Also, I didn't understand what the problem is with Sphinx or Lucene. Anyway, here's a benchmark comparing MySQL, Sphinx and Lucene


The internet seems to agree that grep uses Boyer-Moore, which makes the query time depend additively (rather than multiplicatively) on the query size. That isn't especially relevant here, though.
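To make the "additive" point concrete, here is a sketch of the Horspool simplification of Boyer-Moore (not grep's actual implementation, just an illustration of the idea): the bad-character shift table lets a longer pattern skip further ahead on each mismatch, so a longer query doesn't proportionally slow the scan down.

```python
def horspool_find(text, pattern):
    """Boyer-Moore-Horspool: return the index of pattern in text, or -1.

    On a mismatch we shift by a precomputed distance keyed on the text
    character under the pattern's last position; longer patterns allow
    larger shifts, which is why cost grows only additively with query size.
    """
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    # shift distance for each character that occurs in pattern[:-1]
    shift = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
    i = m - 1
    while i < n:
        j, k = m - 1, i
        while j >= 0 and text[k] == pattern[j]:
            j -= 1
            k -= 1
        if j < 0:
            return k + 1  # full match ended just after k
        i += shift.get(text[i], m)  # unseen character: skip whole pattern
    return -1

print(horspool_find("grep through a million files", "million"))  # -> 15
```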

I think it's near-optimal for a one-time search. But in your case you can do better, since you have repeated searches whose structure you can exploit (e.g. by indexing certain common substrings in your query), as bpgergo hints at.

Also, I'm not sure the regular expression engine you're thinking of using is optimized for a plain (non-special) query; you could try it and see.

You may wish to keep all the files you're searching through in memory to avoid hard-disk slowdowns. This should work unless you are searching a staggering amount of text.
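A minimal sketch of that approach (function names are invented for the example): read the whole tree into a dict once, then every subsequent regex search runs over strings already in RAM rather than touching the disk.

```python
import pathlib
import re

def load_tree(root):
    """Read every file under root once, so later searches hit RAM, not disk."""
    return {
        str(p): p.read_text(errors="replace")
        for p in pathlib.Path(root).rglob("*")
        if p.is_file()
    }

def grep_cache(cache, pattern):
    """Return the paths whose cached contents match the regex."""
    rx = re.compile(pattern)
    return [path for path, text in cache.items() if rx.search(text)]
```

The cache pays the disk cost once at startup; repeated searches after that are pure CPU work.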


If you want a full-text index on code, I would recommend Russ Cox's codesearch tools: https://code.google.com/p/codesearch/

This is how Google Code Search worked: http://swtch.com/~rsc/regexp/regexp4.html
