
How can I avoid re-processing input that has already been processed in my Perl script?

I have a huge Perl script (1500+ lines) that takes about 8 hours to run.

It generates SQL from HTML that is then imported into a website. Basically it's reverse-engineering a whole forum into a new one (I have permission).

The script runs from the beginning each time, parsing HTML that hasn't changed in ages. The results are stored in memory as arrays of hashes until all the HTML has been parsed, and then the SQL is generated.

I'd like it to pre-load the result from last time into memory and then only process the changes, but how can this be done?


Well, you can use YAML, JSON, Data::Dumper, or even Storable to dump and restore Perl data structures of arbitrary complexity.

(Storable is a binary format, unreadable by humans and with limited compatibility across Perl versions, but sometimes that's fine.)
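As a minimal sketch of that idea using Storable: cache the parsed structure to disk after the first run, and reload it on later runs. The cache filename and the `parse_all_html` routine are placeholders standing in for your script's own parsing code.

```perl
use strict;
use warnings;
use Storable qw(store retrieve);

# Hypothetical cache file holding the parsed forum data
# (an array of hashes, as described in the question).
my $cache_file = 'parsed_pages.storable';

my $pages;
if (-e $cache_file) {
    # Cheap path: restore the structure from the previous run.
    $pages = retrieve($cache_file);
}
else {
    # Expensive path: parse everything, then save for next time.
    $pages = parse_all_html();
    store($pages, $cache_file);
}

# Stand-in for the real 8-hour HTML parse.
sub parse_all_html {
    return [ { file => 'page1.html', rows => [] } ];
}
```

If you'd rather have a human-readable cache for debugging, swap `store`/`retrieve` for `YAML::DumpFile`/`YAML::LoadFile` with the same structure.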

You can also use perl -d:DProf ./myscript.pl ; dprofpp to find the real hot spots. (Don't do that with the 8-hour version -- dprofpp will then take forever.)


Storable?


A lot depends on exactly how you are doing this. However, if you are operating at HTML-file granularity, a simple approach is to keep a table of files and the last time you processed each one. Then, as you go through the files, skip any file whose modification time is not later than its last-processed time.

You can persist the table in a variety of ways: See, for example, DB_File.
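A sketch of that approach with DB_File: tie the last-processed table to an on-disk Berkeley DB file, and compare each file's mtime against it. The database filename and the `process_file` handler are placeholders for your script's own pieces.

```perl
use strict;
use warnings;
use DB_File;

# Persistent table: filename => epoch mtime at last processing.
# 'processed.db' is a hypothetical name for the on-disk table.
tie my %last_seen, 'DB_File', 'processed.db'
    or die "Can't tie processed.db: $!";

for my $file (glob '*.html') {
    my $mtime = (stat $file)[9];
    # Skip files that haven't changed since the last run.
    next if exists $last_seen{$file} && $last_seen{$file} >= $mtime;

    process_file($file);
    $last_seen{$file} = $mtime;    # written straight through to disk
}

untie %last_seen;

# Stand-in for the real per-file parse-and-generate-SQL step.
sub process_file {
    my ($file) = @_;
    print "processing $file\n";
}
```

Because the hash is tied, every update lands in the database file immediately, so even if the script dies part-way through, the files already handled won't be re-processed on the next run.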
