
Running a spider (web crawler) to find specific content

First of all, I don't know whether this is the right place for this question. If not, I'm sorry :)

I'm thinking of writing a spider to crawl the web looking for specific embedded files.

However, I was wondering whether ISPs allow running a spider, because it will make lots of requests at a fast pace.

Or should I build in some delay between the requests?

I've read my ISP's contract, but I couldn't find anything specific about crawling.


You might look at wget. It's got some helpful ideas. You should take note of the robots.txt on the site(s) you wish to crawl, and you should leave a delay between requests so as not to create denial-of-service conditions.
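As a concrete illustration of honoring robots.txt plus a between-request delay, here is a minimal sketch using only Python's standard library; the base URL, paths, and user-agent string are hypothetical placeholders, not anything from the original answer:

```python
import time
import urllib.robotparser
import urllib.request

BASE_URL = "https://example.com"   # hypothetical target site
USER_AGENT = "my-spider/0.1"       # identify your crawler honestly
DELAY_SECONDS = 2                  # pause between requests to stay polite

# Parse the site's robots.txt to learn which paths we may fetch.
rp = urllib.robotparser.RobotFileParser()
rp.set_url(BASE_URL + "/robots.txt")
rp.read()

for path in ["/", "/media/", "/downloads/"]:  # hypothetical crawl frontier
    url = BASE_URL + path
    if not rp.can_fetch(USER_AGENT, url):
        print("robots.txt disallows", url, "- skipping")
        continue
    req = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
    with urllib.request.urlopen(req) as resp:
        body = resp.read()
        print("fetched", url, "-", len(body), "bytes")
    time.sleep(DELAY_SECONDS)  # delay so we never hammer the server
```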


There's nothing that outright forbids crawling; it doesn't differ much from normal user interaction. If you open a page with lots of pictures, the browser makes a lot of requests at once.

You may have a transfer limit, though - keep track of how much data you have downloaded.

The thing you must consider is that crawling a lot of pages can be treated as a DoS attack or be forbidden by the site operator. Follow their rules. If they require that no more than N requests be made daily from one computer, respect it. Add some delays so you don't block access to the site.
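To make that advice concrete (respect a per-day request cap, watch your transfer volume, and delay between fetches), here is a minimal sketch; the budget numbers are invented placeholders you would replace with the site's actual rules and your ISP's actual cap:

```python
import time
import urllib.request

MAX_REQUESTS_PER_DAY = 1000    # hypothetical site rule ("no more than N per day")
MAX_BYTES = 500 * 1024 * 1024  # hypothetical ISP transfer budget (500 MB)
DELAY_SECONDS = 2              # spacing between requests

requests_made = 0
bytes_downloaded = 0

def polite_fetch(url):
    """Fetch one URL while respecting the request and transfer budgets."""
    global requests_made, bytes_downloaded
    if requests_made >= MAX_REQUESTS_PER_DAY:
        raise RuntimeError("daily request budget exhausted")
    if bytes_downloaded >= MAX_BYTES:
        raise RuntimeError("transfer budget exhausted")
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
    requests_made += 1
    bytes_downloaded += len(body)   # note how much data we have downloaded
    time.sleep(DELAY_SECONDS)       # spread requests out over time
    return body
```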
