
Does ScraperWiki rate limit sites it is scraping?

Does ScraperWiki somehow automatically rate limit scraping, or should I add something like sleep(1 * random.random()) to the loop?


There is no automatic rate limiting. You can add a sleep call, written in your scraper's language, to rate-limit yourself.
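For example, here is a minimal sketch of such a loop in Python, assuming the classic ScraperWiki Python library (where scraperwiki.scrape(url) fetches a page); the URL list is a placeholder:

    import random
    import time

    import scraperwiki

    # Placeholder list of pages to fetch.
    urls = ["http://example.com/page/%d" % i for i in range(1, 6)]

    for url in urls:
        html = scraperwiki.scrape(url)  # fetch the page
        # ... parse html and save records here ...
        # Pause up to one second between requests to spread out the load.
        time.sleep(1 * random.random())

random.random() returns a float in [0, 1), so each pause lasts at most a second; a fixed time.sleep(1) works just as well if you don't need the jitter.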

Very few servers enforce rate limits, and servers hosting public data usually don't.

It is, however, good practice to make sure you don't overload the remote server. By default, scrapers run in a single thread, so there is a built-in limit to the load you can generate.

