
Is using a giant OR regex inefficient in Python?

Simple question: is using a giant OR regex inefficient in Python? I am building a script to search for bad files. I have a source file that contains 50 or so "signatures" so far. The list is in the form of:

 Djfsid
 LJflsdflsdf
 fjlsdlf
 fsdf
 .
 .
 .

There are no real "consistencies", so optimizing the list by removing duplicates or checking whether one entry is a substring of another won't do much.

I basically want to os.walk down a directory, open each file, check for the signatures, close it and move on.

To speed things up I will break the list up into 50/N sublists, where N is the number of cores, and have a thread work on a few entries of the list each.

Which would be better: a giant regex, re.search('(entry1|entry2|entry3....|entryK)', FILE_CONTENTS), or a giant loop, for i in xrange(0, NUM_SUBENTRIES): ... if subentry[i] in FILE_CONTENTS: ...?
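
In other words, the two candidates look something like this (a minimal sketch with made-up signatures and file contents; re.escape is used so that any regex metacharacters in a signature are matched literally):

    import re

    signatures = ["Djfsid", "LJflsdflsdf", "fjlsdlf", "fsdf"]  # placeholders
    file_contents = "header fjlsdlf payload"  # stands in for one file's text

    # Option 1: one combined, escaped alternation, compiled once up front.
    pattern = re.compile("|".join(re.escape(s) for s in signatures))
    found = pattern.search(file_contents) is not None

    # Option 2: plain substring tests, short-circuiting on the first hit.
    found = any(s in file_contents for s in signatures)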

Also, is this a good way to multithread? This is Unix, so multiple threads can work on the same file at the same time. Will disk access basically bottleneck me to the point where multithreading is useless?


Also, is this a good way to multithread?

Not really.

Will disk access basically bottleneck me to the point where multithreading is useless?

Correct.

You might want to look closely at multiprocessing; a sketch of the whole pipeline follows the list below.

1. A worker Process should do the os.walk and put the file names into a Queue.

  2. A pool of worker Process instances. Each will get a file name from the Queue, open it, check the signature and enqueue results into a "good" Queue and a "bad" Queue. Create as many of these as it takes to make the CPU 100% busy.

  3. Another worker Process instance can dequeue good entries and log them.

  4. Another worker Process instance can dequeue bad entries and delete or rename or whatever is supposed to happen. This can interfere with the os.walk. A possibility is to log these into a "do this next" file which is processed after the os.walk is finished.
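
Here is what that pipeline might look like as a minimal Python 3 sketch. The signatures, the starting directory ".", and the function names are made up for illustration, and for simplicity the os.walk runs in the main process rather than in its own worker Process:

    import os
    import re
    from multiprocessing import Process, Queue, cpu_count

    # Placeholder signatures standing in for the ~50 entries in the question.
    SIGNATURES = ["Djfsid", "LJflsdflsdf", "fjlsdlf", "fsdf"]
    PATTERN = re.compile(b"|".join(re.escape(s.encode()) for s in SIGNATURES))
    SENTINEL = None

    def checker(name_q, good_q, bad_q):
        # Step 2: pull file names off the queue and test each file's
        # contents against the combined pattern.
        while True:
            path = name_q.get()
            if path is SENTINEL:
                break
            try:
                with open(path, "rb") as f:
                    contents = f.read()
            except OSError:
                continue
            (bad_q if PATTERN.search(contents) else good_q).put(path)

    def drain(q, label):
        # Steps 3 and 4: log results; deleting or renaming bad files (or
        # writing the "do this next" file) would go here.
        while True:
            path = q.get()
            if path is SENTINEL:
                break
            print(label, path)

    if __name__ == "__main__":
        n_workers = cpu_count()
        name_q, good_q, bad_q = Queue(), Queue(), Queue()
        workers = [Process(target=checker, args=(name_q, good_q, bad_q))
                   for _ in range(n_workers)]
        loggers = [Process(target=drain, args=(good_q, "good:")),
                   Process(target=drain, args=(bad_q, "bad:"))]
        for p in workers + loggers:
            p.start()
        # Step 1: feed file names to the checkers (done here in the main
        # process; it could equally be its own worker Process).
        for dirpath, _, filenames in os.walk("."):
            for name in filenames:
                name_q.put(os.path.join(dirpath, name))
        for _ in range(n_workers):  # one sentinel per checker
            name_q.put(SENTINEL)
        for p in workers:
            p.join()
        good_q.put(SENTINEL)  # all checkers are done; stop the loggers
        bad_q.put(SENTINEL)
        for p in loggers:
            p.join()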


It would depend on the machine you are using. If you use the machine's maximum capacity, it will slow down, of course. I think the best way to find out is to try.


Don't worry about optimisation.

50 data points is tiny compared to what your computer can manage, so you'll probably waste a lot of your time, and make your program more complicated.
