Can't get Scrapy to follow links
I am trying to scrape a website, but I can't get Scrapy to follow links: there are no Python errors, and nothing shows up in Wireshark. I thought the problem might be the regex, so I tried ".*" to match any link, but that doesn't work either. The "parse" method does run, though; what I need is to follow links to "sinopsis.aspx" with parse_peliculas as the callback.
Edit: Commenting out the parse method gets the rules working, and parse_peliculas runs. What I need to do now is rename the parse method and add a rule with a callback for it, but I still can't get that to work.
This is my spider code:
import re
from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule
from Cinesillo.items import CinemarkItem, PeliculasItem

class CinemarkSpider(CrawlSpider):
    name = 'cinemark'
    allowed_domains = ['cinemark.com.mx']
    start_urls = ['http://www.cinemark.com.mx/smartphone/iphone/vercartelera.aspx?fecha=&id_theater=555',
                  'http://www.cinemark.com.mx/smartphone/iphone/vercartelera.aspx?fecha=&id_theater=528']

    rules = (Rule(SgmlLinkExtractor(allow=(r'sinopsis.aspx.*', )), callback='parse_peliculas', follow=True),)

    def parse(self, response):
        item = CinemarkItem()
        hxs = HtmlXPathSelector(response)
        cine = hxs.select('(//td[@class="title2"])[1]')
        direccion = hxs.select('(//td[@class="title2"])[2]')
        item['nombre'] = cine.select('text()').extract()
        item['direccion'] = direccion.select('text()').extract()
        return item

    def parse_peliculas(self, response):
        item = PeliculasItem()
        hxs = HtmlXPathSelector(response)
        titulo = hxs.select('//td[@class="pop_up_title"]')
        item['titulo'] = titulo.select('text()').extract()
        return item
Thanks
When writing crawl spider rules, avoid using parse as the callback, since CrawlSpider uses the parse method itself to implement its logic; if you override parse, the crawl spider will no longer work.
http://readthedocs.org/docs/scrapy/en/latest/topics/spiders.html