
Getting all pdf files from a domain (for example *.adomain.com)

I need to download all PDF files from a certain domain. There are about 6000 PDFs on that domain, and most of them don't have an HTML link (either the link has been removed or there never was one in the first place).

I know there are about 6000 files because I'm googling: filetype:pdf site:*.adomain.com

However, Google lists only the first 1000 results. I believe there are two ways to achieve this:

a) Use Google. But how can I get all 6000 results from Google? Maybe with a scraper? (I tried Scroogle, with no luck.)

b) Skip Google and search the domain directly for PDF files. How do I do that when most of them are not linked?


If the links to the files have been removed, and you have no permission to list the directories, it's basically impossible to know which URLs have a PDF file behind them.

You could have a look at http://www.archive.org and look up a previous state of the page if you believe there have been links to the files in the past.
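The Wayback Machine also exposes a CDX query API that can list every URL it has captured for a domain, which lets you skip browsing snapshots by hand. A sketch (adomain.com is the placeholder domain from the question; the endpoint and its `url`, `filter`, `fl`, and `collapse` parameters are part of the public CDX API, but what it returns depends entirely on what the archive has crawled):

```shell
# List archived URLs on the domain whose MIME type was application/pdf.
#   url=adomain.com/*                       match everything under the domain
#   filter=mimetype:application/pdf         keep only PDF captures
#   fl=original                             print just the original URL column
#   collapse=urlkey                         one line per unique URL
curl 'http://web.archive.org/cdx/search/cdx?url=adomain.com/*&filter=mimetype:application/pdf&fl=original&collapse=urlkey'
```

The resulting URL list can then be fed to wget (e.g. `wget -i urls.txt`), including files whose links were later removed from the live site.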

To recursively retrieve all PDFs that are linked on the site, I recommend wget. From the examples at http://www.gnu.org/software/wget/manual/html_node/Advanced-Usage.html#Advanced-Usage

You want to download all the gifs from a directory on an http server. You tried ‘wget http://www.server.com/dir/*.gif’, but that didn't work because http retrieval does not support globbing. In that case, use:

     wget -r -l1 --no-parent -A.gif http://www.server.com/dir/

More verbose, but the effect is the same. ‘-r -l1’ means to retrieve recursively (see Recursive Download), with maximum depth of 1. ‘--no-parent’ means that references to the parent directory are ignored (see Directory-Based Limits), and ‘-A.gif’ means to download only the gif files. ‘-A "*.gif"’ would have worked too.

(Simply replace .gif with .pdf!)
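Adapted to this question, a sketch (assuming the site lives at www.adomain.com; raise or drop the depth limit so the crawl covers the whole site rather than one directory level):

```shell
# Crawl the site and keep only PDF files.
#   -r          recursive retrieval
#   -l inf      no depth limit (the manual's example used -l1 for one level)
#   --no-parent don't ascend into the parent directory
#   -A.pdf      accept only files ending in .pdf
wget -r -l inf --no-parent -A.pdf http://www.adomain.com/
```

Note the caveat from the start of this answer still applies: wget can only download PDFs that are linked from some page it crawls; files with no inbound link anywhere on the site will not be found this way.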

