Screen Scraping Efficiency
We are going to be scraping thousands of websites each night to update client data, and we are in the process of deciding which language we would like to use to do the scraping.
We are not locked into any platform or language, and I am simply looking for efficiency. If I have to learn a new language to make my servers perform well, that is fine.
Which language/platform will provide the highest scraping efficiency per dollar for us? Really, I'm looking for real-world experience with high-volume scraping; it comes down to maximizing CPU, memory, and bandwidth.
You will be I/O bound anyway; the performance of your code won't matter at all (unless you're a really bad programmer).
Using a combination of Python and Beautiful Soup, it's incredibly easy to write screen-scraping code very quickly. There is a learning curve for Beautiful Soup, but it's worth it.
Efficiency-wise, I'd say it's just as quick as any other method out there. I've never done thousands of sites at once, but I'd wager that it's definitely up to the task.
For web scraping I use Python with lxml and a few other libraries: http://webscraping.com/blog
I/O is the main bottleneck when crawling - to download data at a good rate you need to use multiple threads.
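That I/O-bound pattern maps naturally onto a thread pool. Here is a minimal standard-library sketch; the worker count and timeout are arbitrary placeholders, and a real crawler would add retries, error handling, and per-host throttling:

```python
# Minimal multi-threaded downloader sketch using only the standard library.
# Worker count and timeout are illustrative placeholders.
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def fetch(url):
    # Download one page; each call blocks on network I/O,
    # which is why running many of them in parallel pays off.
    with urlopen(url, timeout=30) as resp:
        return resp.read()

def fetch_all(urls, fetch=fetch, workers=20):
    # Threads work fine here despite the GIL: the workers spend
    # almost all their time waiting on sockets, not running Python.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(zip(urls, pool.map(fetch, urls)))
```

Because the fetch function is injectable, the pooling logic can be exercised without touching the network.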
I cache all downloaded HTML to disk, so memory use stays low.
Often after crawling I need to re-scrape different features from the cached pages, and then CPU becomes the bottleneck.
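The cache-then-rescrape workflow above can be sketched with the standard library; the hash-based file naming here is my own assumption for illustration, not the author's actual scheme:

```python
# Sketch of a disk cache for downloaded HTML, so pages can be re-scraped
# later without re-downloading. File naming by URL hash is an assumption.
import hashlib
from pathlib import Path

class HtmlCache:
    def __init__(self, root="cache"):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def _path(self, url):
        # Hash the URL to get a safe, fixed-length filename.
        return self.root / hashlib.sha1(url.encode()).hexdigest()

    def get(self, url):
        # Return cached bytes, or None on a cache miss.
        p = self._path(url)
        return p.read_bytes() if p.exists() else None

    def put(self, url, html):
        self._path(url).write_bytes(html)
```

A crawler checks `get()` before downloading and calls `put()` after; later extraction passes then hit only the disk, trading bandwidth for CPU.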
If you know C, a single synchronous transfer (libcurl's "easy" interface) is a short day's work with libcURL. Multiple asynchronous transfers (the "multi" interface) are a couple of hours more.
With the volume that thousands of sites would require, you may be better off economically by looking at commercial packages. They eliminate the I/O problem, have tools specifically designed to handle the nuances of individual sites, plus post-scraping tools to normalize the data and scheduling to keep it current.
I would recommend Web Scraping Language.
Compare a simple WSL query:
GOTO example.com >> EXTRACT {'column1':td[0], 'column2': td[1]} IN table.spad
with the equivalent Python:
import urllib2
from BeautifulSoup import BeautifulSoup

soup = BeautifulSoup(urllib2.urlopen('http://example.com').read())
for row in soup('table', {'class': 'spad'})[0].tbody('tr'):
    tds = row('td')
    print tds[0].string, tds[1].string