PHP Cron script efficiency using CURL to load files

I'm pulling in search query results using cURL, then iterating through a database to load additional queries, and storing the results back in the database. I'm running into trouble with PHP's maximum execution time. I've tried raising the limit, which I don't think is working on my host, using this:

ini_set('max_execution_time', 600);

in the file that is run by cron, so it only raises the limit for the importing process.

The question is: would it be more efficient to store the result of each cURL request in the database and have a secondary function that pulls those results and sorts them into the relevant tables (running that secondary function, say, every 10 minutes), OR is it more efficient to pull the file and insert the sorted records in one go?


You can always find out whether your host allows you to change max_execution_time by calling ini_get('max_execution_time') right after your call to ini_set().
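For example, a minimal sketch of that check (the 600-second value is just the figure from your question):

    <?php
    // Attempt to raise the execution limit for this cron run only.
    ini_set('max_execution_time', 600);

    // Read the setting back; if the host has locked the directive
    // down, the value will not have changed to 600.
    $limit = ini_get('max_execution_time');

    if ((int) $limit !== 600) {
        // The override was ignored; log it so the cron output shows
        // why the import may be cut short.
        error_log("max_execution_time override failed; current value: $limit");
    }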

Instead of storing the raw results in the database, I would put them into a directory. Name the files using microtime(true), which makes it easy to pull the most or least recently written file. Then have a separate script that checks whether there are files in the directory and, if so, processes one of them. Run the scripts on a 1-minute interval.
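A rough sketch of that split, assuming a ./queue directory, an example URL, and a hypothetical processResult() function that does your database sorting:

    <?php
    // fetch.php -- run by cron; downloads one result into the queue.
    $ch = curl_init('https://example.com/search?q=term'); // placeholder URL
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $body = curl_exec($ch);
    curl_close($ch);

    if ($body !== false) {
        // microtime(true) gives a sortable timestamp for the file name.
        file_put_contents('./queue/' . microtime(true) . '.dat', $body);
    }

    <?php
    // process.php -- run by cron every minute; handles the oldest file.
    $files = glob('./queue/*.dat');
    if ($files) {
        sort($files);      // timestamp names sort oldest-first
        $oldest = $files[0];
        processResult(file_get_contents($oldest)); // hypothetical DB routine
        unlink($oldest);   // remove the file once it has been handled
    }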

I will note that there is a possible race condition if processing a file takes more than one minute (the next cron run could pick up the same file), though in practice that is unlikely to ever occur.
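If you want to guard against it anyway, a simple sketch using flock() on a lock file (the lock-file path is just an example):

    <?php
    // process.php guard: exit immediately if a previous run is still going.
    $lock = fopen('./queue/.lock', 'c');
    if (!flock($lock, LOCK_EX | LOCK_NB)) {
        exit; // another instance holds the lock; let it finish
    }
    // ... process the oldest file as above ...
    flock($lock, LOCK_UN);
    fclose($lock);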
