
Easiest way to fetch all href contents on page in Ruby?

I'm writing a simple web crawler in Ruby and I need to fetch all of the href contents on a page. What's the best way to do this, or to do any other web-page source parsing? Some pages might not be valid, but I still want to be able to parse them.

Are there any good Ruby HTML parsers that allow validity-agnostic parsing, or is the best way just to do it by hand with regexps?

Is it possible to use XPath on a non-XHTML page?


Have a look at Nokogiri. It parses broken HTML leniently, so validity doesn't matter, and you can run XPath queries against the resulting document even when the source isn't XHTML. Short example:

require 'open-uri'
require 'nokogiri'

# URI.open is the open-uri entry point (a bare open no longer works in Ruby 3+)
doc = Nokogiri::HTML(URI.open('http://www.google.com/search?q=tenderlove'))

# XPath: select every element that has an href attribute and print its value
doc.search('//*[@href]').each { |node| p node[:href] }


Take a look at Mechanize. It has a links method for grabbing all the links on a page; see the sketch below.
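A minimal sketch of what that looks like (assuming a recent Mechanize; the URL here is just a placeholder):

require 'mechanize'

agent = Mechanize.new
page = agent.get('http://example.com/')  # placeholder URL

# page.links returns a link object for each anchor; href is the raw attribute value
page.links.each { |link| p link.href }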
