Using wget to download all the hulkshare/mediafire linked files on a page
So I've been trying to set up wget to download all the mp3s from www.goodmusicallday.com. Unfortunately, rather than hosting the mp3s itself, the site uploads them to www.hulkshare.com and links to the download pages there. Is there a way to use the recursive and filtering abilities of wget to make it visit each hulkshare page and download the linked mp3?
Any help is much appreciated.
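For reference, the kind of invocation I had in mind was something like this (just a sketch; the -H/-D host-spanning and the -A mp3 filter are my guesses at the right flags, and they only help if hulkshare serves the mp3s behind plain HTML links, which, as the answers below note, it doesn't seem to):

    # Sketch: recurse from the blog, let wget hop over to hulkshare (-H with
    # a -D whitelist), and keep only mp3 files (-A).
    # Depth 2 = blog post page -> hulkshare download page.
    wget -r -l 2 -H -D goodmusicallday.com,hulkshare.com -A '*.mp3' \
         http://www.goodmusicallday.com/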
So, a friend of mine actually figured out an awesome way to do this. Just enter the code below in Terminal:
IFS=$'\n'; r() { echo "$1" | sed "s/.*$2=\([^'\"&;]*\).*/\1/"; }; for l in $(wget goodmusicallday.com -O- | grep soundFile); do wget -c "$(r "$l" soundFile)" -O "$(r "$l" titles)"; done
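Unrolled with comments, here's what that one-liner is doing (the same script, assuming each track shows up on its own grep'd line with soundFile= and titles= parameters in the embedded player markup):

    # Split the loop on newlines only, so titles containing spaces stay intact.
    IFS=$'\n'
    # r LINE PARAM: extract the value of "PARAM=..." from a line of player markup.
    r() { echo "$1" | sed "s/.*$2=\([^'\"&;]*\).*/\1/"; }
    # Fetch the blog page, keep only the lines that embed a player (they mention
    # soundFile), then download each mp3 (soundFile=) under its title (titles=).
    for l in $(wget goodmusicallday.com -O- | grep soundFile); do
        wget -c "$(r "$l" soundFile)" -O "$(r "$l" titles)"
    done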
I guess not!
I have tried on several occasions to do scripted downloads from MediaFire, but in vain.
That's exactly why they don't offer a simple download link and put a timer on the page instead.
If you look carefully, you will notice that the actual file host is not www.mediafire.com but something like download666.com.
So I don't think it is possible with wget.
wget can only save the day when the download links are plain HTML links, i.e. ordinary <a> tags.
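For comparison, the simple case wget does handle, a page whose mp3s sit behind ordinary anchor links, needs nothing more than this (example.com is a placeholder, of course):

    # Only works when the files sit behind plain <a href="...mp3"> links:
    # recurse one level, keep only mp3s, and flatten the directory structure.
    wget -r -l 1 -A '*.mp3' -nd http://example.com/music/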
Regards,