
python-scrapy: how to fetch a URL (not by following links) inside a spider?

How can I fetch a URL from inside my spider and extract something from the page with HtmlXPathSelector? The URL is a string I want to supply in the code, not a link to follow.

I tried something like this:

import urllib2
from scrapy.selector import HtmlXPathSelector

req = urllib2.Request('http://www.example.com/' + some_string + '/')
req.add_header('User-Agent', 'Mozilla/5.0')
response = urllib2.urlopen(req)
hxs = HtmlXPathSelector(response)

but at this moment it throws an exception:

[Failure instance: Traceback: <type 'exceptions.AttributeError'>: addinfourl instance has no attribute 'encoding'


You will need to construct a scrapy.http.HtmlResponse object with body=urllib2.urlopen(req).read(). But why exactly do you need urllib2 instead of returning a scrapy Request with a callback?


Scrapy's documentation is not explicit about how to unit-test spiders; I don't recommend using Scrapy to crawl data if you want a unit test for each spider.

