Does threading violate robots.txt? [closed]
I'm new to scraping, and I recently realized that threading is probably the way to go to crawl a site quickly. Before I begin hacking that out, though, I figured it would be smart to determine whether that will end up getting me throttled. So the question is: if I rewrite my program to use threads to crawl more quickly, will that violate most sites' robots.txt?
Depends: if your threads have their own separate queues of URLs to crawl and there is no synchronization between the queues of any kind, then you could end up violating a site's robots.txt when two (or more) threads attempt to crawl URLs from the same site in quick succession. Of course, a well-designed crawler would not do that!
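A minimal sketch of what that cross-thread synchronization could look like in Python (the class name, the one-second default delay, and the overall design are illustrative assumptions, not a standard API):

```python
import threading
import time
from urllib.parse import urlparse

class HostThrottle:
    """Shared by all crawler threads: enforces a minimum delay between
    requests to the same host, no matter which thread does the fetching."""

    def __init__(self, min_delay=1.0):
        self.min_delay = min_delay
        self.lock = threading.Lock()
        self.last_fetch = {}  # host -> time of the most recent request

    def wait(self, url):
        host = urlparse(url).netloc
        while True:
            with self.lock:
                now = time.monotonic()
                ready_at = self.last_fetch.get(host, 0) + self.min_delay
                if now >= ready_at:
                    self.last_fetch[host] = now
                    return
            # Back off without holding the lock, then re-check.
            time.sleep(ready_at - now)
```

With one `HostThrottle` instance shared by every thread, two threads that both pick up URLs for the same site will serialize their requests instead of hitting it back to back.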
The very "simple" crawlers have some sort of shared priority queue where work is enqueued in accordance with the Robots Exclusion Protocol, and all the threads pull URLs to crawl from that queue. There are many problems with such an approach, especially when trying to scale up and crawl the entire World Wide Web.
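As a rough illustration of that shared-queue pattern, here is a sketch with a handful of worker threads pulling from one thread-safe queue; it reuses the `HostThrottle` from the previous sketch, and `fetch_and_parse` is a hypothetical stand-in for your download-and-extract-links logic:

```python
import queue
import threading

url_queue = queue.Queue()   # shared, thread-safe work queue
throttle = HostThrottle()   # the shared per-host throttle from above

def worker():
    while True:
        url = url_queue.get()
        try:
            throttle.wait(url)    # single politeness gate for all threads
            fetch_and_parse(url)  # hypothetical: download page, enqueue new URLs
        finally:
            url_queue.task_done()

for _ in range(8):
    threading.Thread(target=worker, daemon=True).start()
```

The queue gives you the speedup from threading while keeping one central place to apply per-site rules, which is exactly where this design starts to strain at web scale.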
The more advanced crawlers perform "budget" calculations (see the BEAST budget enforcement section) that let them schedule crawling intelligently based on various criteria: spam indicators, robots.txt, coverage vs. freshness, etc. Budget enforcement makes it much easier for multithreaded crawlers to crawl fast and crawl politely!
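A toy version of budget enforcement might look like the following; the per-host quota of 100 and the spend-one-unit policy are made up purely for illustration:

```python
import collections
from urllib.parse import urlparse

# Hypothetical per-host quota; a real crawler would derive these from
# spam scores, robots.txt, freshness targets, etc., and refill over time.
budgets = collections.defaultdict(lambda: 100)

def should_crawl(url):
    """Spend one unit of the host's budget; refuse once it is exhausted.
    In a multithreaded crawler this would need a lock around the update."""
    host = urlparse(url).netloc
    if budgets[host] <= 0:
        return False
    budgets[host] -= 1
    return True
```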
They are unrelated. robots.txt says whether or not you are allowed to access something; it has no standard way to say "please send only one request at a time" (though some crawlers honor the non-standard Crawl-delay directive).
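For what robots.txt *can* express, Python's standard `urllib.robotparser` covers both the permission check and the non-standard extensions (`crawl_delay` and `request_rate` require Python 3.6+); the user agent and URLs here are placeholders:

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://example.com/robots.txt")
rp.read()

print(rp.can_fetch("MyBot", "https://example.com/some/page"))  # allowed at all?
print(rp.crawl_delay("MyBot"))    # Crawl-delay for this agent, or None
print(rp.request_rate("MyBot"))   # Request-rate as (requests, seconds), or None
```

If both `crawl_delay` and `request_rate` come back `None`, the site has said nothing about rate, and any throttling is on you to choose politely.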