Scrapy SgmlLinkExtractor is ignoring allowed links
Please take a look at this spider example in the Scrapy documentation. The explanation is:
This spider would start crawling example.com's home page, collecting category links and item links, parsing the latter with the parse_item method. For each item response, some data will be extracted from the HTML using XPath, and an Item will be filled with it.
I copied the same spider exactly, and replaced "example.com" with another initial url.
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.item import Item
from stb.items import StbItem

class StbSpider(CrawlSpider):
    domain_name = "stb"
    start_urls = ['http://www.stblaw.com/bios/MAlpuche.htm']

    rules = (Rule(SgmlLinkExtractor(allow=(r'/bios/.\w+\.htm', )), callback='parse', follow=True), )

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        item = StbItem()
        item['JD'] = hxs.select('//td[@class="bodycopysmall"]').re('\d\d\d\d\sJ.D.')
        return item

SPIDER = StbSpider()
But my spider "stb" does not collect links from "/bios/" as it is supposed to. It crawls the initial URL, scrapes item['JD'], writes it to a file, and then quits.
Why is the SgmlLinkExtractor being ignored? The Rule is definitely being read, because it catches syntax errors inside the Rule line.
Is this a bug? Is there something wrong in my code? There are no errors, apart from a bunch of unhandled errors that I see with every run.
It would be nice to know what I am doing wrong here. Thanks for any clues. Am I misunderstanding what SgmlLinkExtractor is supposed to do?
The parse method is actually implemented and used internally by the CrawlSpider class, and you're unintentionally overriding it. If you rename your callback to something else, like parse_item, then the Rule should work.
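The shadowing problem can be illustrated without Scrapy. This is a minimal sketch with made-up names, not Scrapy's real internals: the base class's parse is the entry point that follows links and dispatches to the rule's callback, so a subclass that defines its own parse replaces that machinery and no links ever get followed.

```python
class BaseSpider:
    """Toy stand-in for CrawlSpider (illustrative only)."""
    callback_name = 'parse_item'  # which method the rule points at

    def parse(self, response):
        # The base parse both follows links AND invokes the configured
        # callback; overriding it in a subclass discards the link-following.
        followed = ['link1', 'link2']  # stand-in for links a Rule would extract
        item = getattr(self, self.callback_name)(response)
        return followed, item

class GoodSpider(BaseSpider):
    def parse_item(self, response):  # distinct name: base parse still runs
        return {'JD': response}

class BrokenSpider(BaseSpider):
    callback_name = 'parse'

    def parse(self, response):  # shadows BaseSpider.parse entirely
        return {'JD': response}

links, item = GoodSpider().parse('page')
print(links)                          # links are still followed
print(BrokenSpider().parse('page'))   # only the item; no link-following ran
```

Running it shows GoodSpider returning both followed links and the item, while BrokenSpider returns only the item, which mirrors the "scrapes the start URL and quits" behavior in the question.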