What pure Python library should I use to scrape a website?
I currently have some Ruby code used to scrape some websites. I was using Ruby because at the time I was using Ruby on Rails for a site, and it just made sense.
Now I'm trying to port this over to Google App Engine, and keep getting stuck.
I've ported Python Mechanize to work with Google App Engine, but it doesn't support DOM inspection with XPath.
I've tried the built-in ElementTree, but it choked on the first HTML blob I fed it when it hit the entity '&mdash;'.
Do I keep trying to hack ElementTree in there, or do I try to use something else?
thanks, Mark
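For reference, the ElementTree failure described above is easy to reproduce with the standard library alone: ElementTree's XML parser only knows the five predefined XML entities, so HTML-only entities like `&mdash;` raise a parse error. A minimal sketch of the problem and one workaround (unescaping the entities first) — this is an illustration, not the asker's actual code:

```python
import html
import xml.etree.ElementTree as ET

blob = "<p>Hello &mdash; world</p>"

# ElementTree's XML parser rejects HTML-only entities such as &mdash;
try:
    ET.fromstring(blob)
    parsed_directly = True
except ET.ParseError:
    parsed_directly = False

# One workaround: resolve HTML entities to characters before parsing.
tree = ET.fromstring(html.unescape(blob))
```

This only helps with well-formed markup, though; real-world HTML usually needs a forgiving parser rather than an XML one.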
Beautiful Soup.
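A minimal sketch of what using it looks like, assuming the `bs4` package with the stdlib `html.parser` backend (which keeps the whole stack pure Python):

```python
from bs4 import BeautifulSoup

html_doc = """
<html><body>
  <a href="/one">First</a>
  <a href="/two">Second</a>
  <p class="note">Some text &mdash; with an entity</p>
</body></html>
"""

# "html.parser" is the pure-Python stdlib backend; no C extension needed.
soup = BeautifulSoup(html_doc, "html.parser")

links = [a["href"] for a in soup.find_all("a")]
note = soup.find("p", class_="note").get_text()
```

Note that HTML entities like `&mdash;` are resolved for you, which sidesteps the ElementTree problem from the question.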
lxml -- vastly better than ElementTree.
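If lxml is available, XPath queries over messy HTML are direct; a small sketch (note that lxml wraps libxml2 in C, so it is fast but not pure Python, which matters on App Engine):

```python
from lxml import html

# lxml.html tolerates messy real-world HTML and supports full XPath 1.0.
doc = html.fromstring("""
<html><body>
  <div id="main">
    <a href="/a">A</a>
    <a href="/b">B</a>
  </div>
</body></html>
""")

hrefs = doc.xpath("//div[@id='main']/a/@href")
texts = doc.xpath("//a/text()")
```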
There's also Scrapy, which might be more up your alley.
There are a number of examples of web page scrapers written using pyparsing, such as this one (which extracts all URL links from yahoo.com) and this one (which extracts the NIST NTP server addresses). Be sure to use the pyparsing helper method makeHTMLTags instead of hand-coding "<" + Literal(tagname) + ">". makeHTMLTags creates a very robust parser that accommodates extra whitespace, upper/lower-case inconsistencies, unexpected attributes, attribute values in various quoting styles, and so on. Pyparsing also gives you more control over special syntax issues, such as custom entities. Finally, it is pure Python, liberally licensed, and has a small footprint (a single source module), so it is easy to drop into your GAE app alongside your other application code.
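A short sketch of the makeHTMLTags approach, extracting link targets from deliberately inconsistent markup (the page snippet here is made up for illustration):

```python
from pyparsing import makeHTMLTags

# makeHTMLTags returns parsers for the opening and closing tag;
# the opening-tag parser exposes attributes as named results.
a_start, a_end = makeHTMLTags("a")

page = """
<body>
  <A HREF='/first'>one</A>
  <a href = "/second" target="_blank">two</a>
</body>
"""

# scanString tolerates the case, spacing, and quoting variations above.
hrefs = [tokens.href for tokens, start, end in a_start.scanString(page)]
```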
BeautifulSoup is good, but its API is awkward. Try ElementSoup, which provides an ElementTree interface to BeautifulSoup.