
How can I resume downloads in Perl?

I have a project that depends on some binaries that have to be downloaded from the web at install time. What I currently do is:

    if ( -e "src/$file" ) {
        # skip that file
    }
    else {
        # use wget to download the file into src/
        system( 'wget', '-P', 'src', $url );
    }

The problem with this approach is that when I interrupt a download in the middle and invoke the script again, the partially downloaded file is also skipped (which is not desired); instead, I want wget to resume the download of the partially downloaded file.

How should I go about it? Possible solutions I could think of:

  1. Download to a temporary file, say download_tmp, and move it to the original filename only if the download succeeds.
  2. Handle $SIG{INT} to write proper cleanup code.

But neither of these would help resume a partial download.

Any insights?


First, I don't understand what this has to do with Perl, since you're using wget to do the downloading... You could use libwww-perl (perldoc LWP) and have more control over the download process.
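For example, a minimal sketch of a resumable download with LWP::UserAgent might look like this (the URL and filename are placeholders, not part of the original question):

    use strict;
    use warnings;
    use LWP::UserAgent;

    my $url = 'http://example.com/files/tool.bin';   # placeholder
    my $tmp = 'tool.bin.tmp';                        # placeholder

    # Resume: ask the server to send only the bytes we don't have yet.
    my $offset = -e $tmp ? -s $tmp : 0;

    open my $fh, '>>', $tmp or die "Cannot open $tmp: $!";
    binmode $fh;

    my $ua  = LWP::UserAgent->new;
    my $res = $ua->get(
        $url,
        ( $offset ? ( 'Range' => "bytes=$offset-" ) : () ),
        ':content_cb' => sub { print {$fh} $_[0] },
    );
    close $fh or die "Cannot close $tmp: $!";

    # 206 means the server honoured the Range header; a plain 200 with a
    # non-zero offset means it resent the whole file and $tmp is now corrupt.
    die "Server ignored the Range request; delete $tmp and start over\n"
        if $offset && $res->code == 200;
    die "Download failed: ", $res->status_line, "\n" unless $res->is_success;

Whether the server honours Range requests is entirely up to the server, so the 200-vs-206 check matters in practice.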

Then, I second your idea of downloading to a "tmp" filename and moving the file into place on success.

However, I think you need to go further and verify the integrity of the files. Computing an MD5 or SHA hash is very easy; compare the hash of the downloaded file with what you're expecting. You can keep a short file on the server containing the checksum (filename.md5), and declare success only when they match.
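A rough sketch of that check with Digest::MD5 (the filenames and the checksum URL are made up for illustration):

    use strict;
    use warnings;
    use Digest::MD5;
    use LWP::Simple qw(get);

    my $tmp     = 'tool.bin.tmp';                             # placeholder
    my $md5_url = 'http://example.com/files/tool.bin.md5';    # placeholder

    # Expected checksum: first field of a typical "md5sum" style line.
    my $remote = get($md5_url) or die "Cannot fetch $md5_url\n";
    my ($expected) = split ' ', $remote;

    # Hash the downloaded file.
    open my $fh, '<', $tmp or die "Cannot open $tmp: $!";
    binmode $fh;
    my $got = Digest::MD5->new->addfile($fh)->hexdigest;
    close $fh;

    # Only move the file into place when the checksums match.
    if ( $got eq $expected ) {
        rename $tmp, 'tool.bin' or die "rename failed: $!";
    }
    else {
        die "Checksum mismatch: got $got, expected $expected\n";
    }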

Note that catching every signal and generally trying to make the process unkillable, and then expecting it to have worked, is bound to fail at one point or another. There could be a network timeout, a crash, a power failure, a configuration problem on the server... You should instead assume downloads can fail, because they will, and write the code so that your process can recover.

Finally, you're not telling us what kind of binaries you're downloading and what you're doing with them. Since you use wget, I'm going to assume you're on Unix; you should consider using RPM+Yum or the like, since they handle all of this for you. RPMs are easy to write, really.


Use your first approach (rough sketch below):

  1. Download to "FileName".tmp
  2. Move "FileName".tmp to "FileName"
    (move, not copy!)
  3. Once per diem, clean out all .tmp files (paranoia rulez)
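A minimal sketch of that recipe, with placeholder names (step 3 here is just a glob-and-unlink pass you could run from cron):

    use strict;
    use warnings;

    my $url    = 'http://example.com/files/tool.bin';   # placeholder
    my $target = 'src/tool.bin';                         # placeholder
    my $tmp    = "$target.tmp";

    # 1. download to "FileName".tmp (-c lets wget resume a leftover .tmp)
    system( 'wget', '-c', '-O', $tmp, $url ) == 0 or die "wget failed: $?";

    # 2. move, not copy: rename is atomic on the same filesystem, so the
    #    final name never points at a half-written file
    rename $tmp, $target or die "Cannot rename $tmp to $target: $!";

    # 3. clean out .tmp files that are more than a day old
    unlink grep { -M $_ > 1 } glob 'src/*.tmp';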


You could just use wget's -N and -c options and remove the entire "if file exists" logic.
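Roughly, in the install script (with $url as a placeholder, downloading into src/):

    # -c resumes a partially downloaded file; -N only fetches files that are
    # newer than the local copy, so the manual "file exists" check goes away.
    system( 'wget', '-N', '-c', '-P', 'src', $url ) == 0
        or die "wget failed for $url: $?";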
