How fast can I actually crawl a website?
I'm going to crawl a website for some information. It's about 170,000+ pages. So, how many requests can I make? I'm going to extract the HTML and pull some information out of it. It's already a very popular site, so I don't think it would die if I was just cruising fast over all the pages... The only thing that makes me nervous is that I don't know if the owner will block my IP or something if I do that? Is that normal? Should I just load 5 pages/min? Then it will take forever... I want to get fresh data every 24 hours, you see.
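For scale, here is a quick back-of-envelope calculation using the figures from the question (the numbers are rough and assume one request per page):

```python
pages = 170_000
seconds_per_day = 24 * 60 * 60          # 86,400 seconds

# Requests per second needed to refresh everything every 24 hours:
print(pages / seconds_per_day)          # ~1.97 requests/second

# At 5 pages/minute the full pass would instead take:
print(pages / 5 / 60 / 24)              # ~23.6 days
```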
Thanks for all the responses!
It will take some time. I suggest you use rotating proxies and add multi-threading; 10 threads will do. That way you can have 10 requests in flight at the same time. Using proxies will slow things down, and adding a delay of at least 1.5 seconds after each request will slow you down further, but it lowers the risk of getting banned.
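A minimal sketch of that idea, assuming Python with the `requests` library; the URL list, proxy list, delay, and thread count below are placeholders, not values from the answer:

```python
import time
import requests
from concurrent.futures import ThreadPoolExecutor

# Hypothetical inputs - replace with your real URL list and proxy list.
urls = ["https://example.com/page/%d" % i for i in range(100)]
proxies = [None]  # e.g. [{"http": "http://proxy1:8080"}, ...] when rotating proxies

DELAY_SECONDS = 1.5   # pause after each request, as suggested above
NUM_THREADS = 10      # 10 concurrent requests

def fetch(url):
    proxy = proxies[hash(url) % len(proxies)]   # crude round-robin proxy pick
    try:
        resp = requests.get(url, proxies=proxy, timeout=10)
        resp.raise_for_status()
        return resp.text
    except requests.RequestException as exc:
        print(f"failed: {url}: {exc}")
        return None
    finally:
        time.sleep(DELAY_SECONDS)               # stay polite even on failures

with ThreadPoolExecutor(max_workers=NUM_THREADS) as pool:
    pages = list(pool.map(fetch, urls))
```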
I created a web crawler a couple of years ago that crawled about 7 GB a night from the BBC's website (limited by bandwidth) and never got blocked, but adding a 1-second delay between requests is the decent thing to do.
A delay of a second or two after each request should be sufficient. Making your bot crawl as fast as possible may in fact get you banned. In my day job I manage the sites for a couple of newspapers and I see homegrown crawlers occasionally. Bad ones really can cause quite a lot of system load and result in a new addition to the IP blacklist. Don't be that guy.
As long as you're obeying their robots.txt instructions, you should probably be alright. The standard delay I've seen between requests is 2 seconds - that's fairly often the limit after which you might start having your traffic throttled or your IP blocked.
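If you want to check robots.txt programmatically, here is a small sketch using Python's standard `urllib.robotparser`; the site URL and user-agent string are placeholders:

```python
import urllib.robotparser

# Hypothetical target site - substitute the site you actually intend to crawl.
rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

# can_fetch() reports whether a given user agent is allowed to request a path.
print(rp.can_fetch("MyCrawler/1.0", "https://example.com/some/page"))

# Some robots.txt files also declare a Crawl-delay; honour it if one is present.
print(rp.crawl_delay("MyCrawler/1.0"))
```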