Is there any language which is just "perfect" for web scraping? [closed]
I have used three languages for web scraping - Ruby, PHP and Python - and honestly none of them seems perfect for the task.
Ruby has excellent Mechanize and XML parsing libraries, but its spreadsheet support is very poor.
PHP has excellent spreadsheet and HTML parsing libraries, but it does not have an equivalent of WWW::Mechanize.
Python has a very poor Mechanize library. I had many problems with it and am still unable to solve them. Its spreadsheet library is also less than ideal, since it is unable to create XLSX files.
Is there anything which is just perfect for web scraping?
PS: I am working on the Windows platform.
Check out Python + Scrapy; it is pretty good:
http://scrapy.org/
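For example, here is a minimal spider sketch, assuming a reasonably recent Scrapy version; the start URL and CSS selector are just placeholders for whatever site you are scraping:

    # Minimal Scrapy spider sketch; the URL and selector are placeholders.
    import scrapy

    class TitlesSpider(scrapy.Spider):
        name = "titles"
        start_urls = ["http://example.com/"]

        def parse(self, response):
            # Yield one item per <h1> heading found on the page.
            for heading in response.css("h1::text").getall():
                yield {"title": heading}

Running it with "scrapy runspider titles_spider.py -o titles.csv" should dump the scraped items to a CSV file.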
Why not just use the XML Spreadsheet format? It's super simple to create, and it would probably be trivial with any type of class-based system.
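As a rough sketch of how simple it is, here is SpreadsheetML (the 2003 XML Spreadsheet format) written by hand from Python; the rows and filename are hypothetical, and real data would also need XML-escaping:

    # Write scraped rows out in the XML Spreadsheet (SpreadsheetML) format,
    # which Excel opens directly.
    rows = [("url", "title"), ("http://example.com/", "Example Domain")]

    header = (
        '<?xml version="1.0"?>\n'
        '<Workbook xmlns="urn:schemas-microsoft-com:office:spreadsheet"\n'
        '          xmlns:ss="urn:schemas-microsoft-com:office:spreadsheet">\n'
        ' <Worksheet ss:Name="Sheet1"><Table>\n'
    )
    footer = ' </Table></Worksheet>\n</Workbook>\n'

    with open("scraped.xml", "w") as f:
        f.write(header)
        for row in rows:
            cells = "".join(
                '<Cell><Data ss:Type="String">%s</Data></Cell>' % value
                for value in row
            )
            f.write("  <Row>%s</Row>\n" % cells)
        f.write(footer)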
Also, for Python have you tried BeautifulSoup for parsing? Urllib+BeautifulSoup makes a pretty powerful combo.
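A minimal sketch of that combo, assuming Python 3 (urllib.request from the standard library) plus the bs4 package; the URL is a placeholder:

    # Fetch a page with urllib and parse it with BeautifulSoup.
    from urllib.request import urlopen
    from bs4 import BeautifulSoup

    html = urlopen("http://example.com/").read()
    soup = BeautifulSoup(html, "html.parser")

    # Print the text and target of every link on the page.
    for a in soup.find_all("a", href=True):
        print(a.get_text(strip=True), a["href"])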
Short answer is no.
The problem is that HTML is a large family of formats - and only the more recent variants are consistent (and XML based). If you're going to use PHP then I would recommend the DOM parser, as it can handle a lot of HTML which does not qualify as well-formed XML.
Reading between the lines of your post - you seem to be:
1) capturing content from the web with a requirement for complex interaction management
2) parsing the data into a consistent machine readable format
3) writing the data to a spreadsheet
These are certainly three separate problems - if no one language meets all three requirements, then why not use the best tool for each job and just settle on a suitable interim format/medium for the data?
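For example, a sketch assuming a plain CSV file as the interim format - scrape and parse in Python (the URL and column names here are placeholders), then let whichever tool handles spreadsheets best pick the CSV up afterwards:

    # Steps 1 and 2: fetch and parse, writing rows to a CSV interim file.
    # Step 3 (spreadsheet output) can then be done by any other tool/language.
    import csv
    from urllib.request import urlopen
    from bs4 import BeautifulSoup

    def scrape(url):
        soup = BeautifulSoup(urlopen(url).read(), "html.parser")
        for a in soup.find_all("a", href=True):
            yield (a.get_text(strip=True), a["href"])

    with open("interim.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["text", "href"])
        writer.writerows(scrape("http://example.com/"))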
C.
Python + Beautiful Soup for web scraping, and since you are on Windows, you could use win32com for Excel automation to generate your XLSX files.
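Something along these lines, assuming pywin32 (win32com) and Excel are installed; the rows and output path are placeholders for whatever your scraping step produces:

    # Rough sketch: drive Excel via COM to save scraped rows as an .xlsx file.
    import win32com.client

    rows = [("url", "title"), ("http://example.com/", "Example Domain")]

    excel = win32com.client.Dispatch("Excel.Application")
    excel.Visible = False
    workbook = excel.Workbooks.Add()
    sheet = workbook.Worksheets(1)

    for r, row in enumerate(rows, start=1):
        for c, value in enumerate(row, start=1):
            sheet.Cells(r, c).Value = value

    # FileFormat=51 is xlOpenXMLWorkbook, i.e. the .xlsx format.
    workbook.SaveAs(r"C:\temp\scraped.xlsx", FileFormat=51)
    excel.Quit()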