How much of a difference is there between HTML parsing and web crawling in Python? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.

Closed 4 years ago.

Improve this question

I need to grab some data from other websites for my Django site. I am confused about whether I should use Python HTML parsing libraries or web crawling libraries. Do search engine libraries also fall into the same category?

I want to know how big the difference is between the two, and if I want to use those functions inside my website, which should I use?


If you can get away with background web crawling, use Scrapy. If you need to grab something immediately, use html5lib (more robust) or lxml (faster). If you are going to do the latter, use the excellent requests library to fetch the page. I would avoid BeautifulSoup, mechanize, urllib2, and httplib.
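For the "grab something immediately" case, a minimal sketch along those lines might look like this; the URL and the XPath expression are placeholders, not part of the answer:

```python
# Minimal sketch: fetch a page with requests and parse it with lxml.
# The URL and the XPath used here are illustrative assumptions.
import requests
from lxml import html

def grab_headings(url):
    # Fetch the page; fail loudly on HTTP errors instead of parsing an error page.
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()

    # Parse the HTML and pull out whatever you need, here all <h2> texts.
    tree = html.fromstring(resp.content)
    return [h.text_content().strip() for h in tree.xpath("//h2")]

print(grab_headings("https://example.com/"))
```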


An HTML parser will parse the page, and from the result you can collect the links it contains. You add those links to a queue and visit those pages in turn. Combine these steps in a loop and you have built a basic crawler, as sketched below.
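A toy version of that loop might look like the following; the start URL, page limit, and same-domain check are assumptions for the sketch, and a real crawler would also need robots.txt handling, politeness delays, and error handling:

```python
# Toy breadth-first crawler: parse a page, collect its links, queue them, repeat.
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from lxml import html

def crawl(start_url, max_pages=20):
    seen = set()
    queue = deque([start_url])
    domain = urlparse(start_url).netloc  # stay on one site (an assumption)

    while queue and len(seen) < max_pages:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)

        resp = requests.get(url, timeout=10)
        tree = html.fromstring(resp.content)

        # Collect links, resolve them against the current page, and enqueue new ones.
        for href in tree.xpath("//a/@href"):
            link = urljoin(url, href)
            if urlparse(link).netloc == domain and link not in seen:
                queue.append(link)
    return seen
```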

Crawling libraries are ready-to-use solutions that do the crawling for you. They provide extra features such as detecting recursive links, cycles, and so on. A lot of the features you would otherwise have to write yourself have already been implemented in these libraries.

However, the first option (rolling your own) is preferable if you have special requirements that the libraries do not satisfy.


I've done similar things before. Web crawlers were not useful to me when I wanted the parsing to happen immediately, in order to fetch something and present it to the user; they are more appropriate for batch jobs. I found BeautifulSoup, lxml and mechanize quite useful.
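For the "fetch and present to the user" case inside a Django site, a rough sketch of a view might look like this; the URL, view name, and selector are placeholders, and BeautifulSoup is used only because this answer mentions it:

```python
# Hypothetical Django view: fetch an external page on request, parse it with
# BeautifulSoup, and return the extracted data as JSON.
import requests
from bs4 import BeautifulSoup
from django.http import JsonResponse

def external_headlines(request):
    resp = requests.get("https://example.com/news", timeout=10)  # placeholder URL
    resp.raise_for_status()

    soup = BeautifulSoup(resp.text, "html.parser")
    headlines = [h.get_text(strip=True) for h in soup.find_all("h2")]
    return JsonResponse({"headlines": headlines})
```

In practice you would want to cache or rate-limit something like this, since the external request blocks the view for the duration of the fetch.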
