
Crawling Wikipedia

I'm crawling Wikipedia using a website downloader for Windows. I looked through all the options in this tool to find an option to download Wikipedia pages for a specific period, for example from 2005 until now.

Does anyone have any idea how to crawl the website for a specific period of time?


Why not download the SQL database containing all of Wikipedia?

You can then query it using SQL.
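For reference, Wikipedia publishes its full database dumps at dumps.wikimedia.org; the page content itself ships as XML rather than SQL. A minimal sketch of filtering a dump by revision date, assuming the standard dump export format (the namespace version varies between dumps, and the file path is just an example):

```python
# Stream a Wikipedia XML dump and keep only pages whose revision
# timestamp falls inside a given period. The namespace URI below is
# an assumption; real dumps state theirs in the root element.
import xml.etree.ElementTree as ET

NS = "{http://www.mediawiki.org/xml/export-0.10/}"

def pages_in_period(dump_path, start, end):
    """Yield (title, timestamp) for pages whose revision timestamp
    lies between `start` and `end` (ISO-8601 strings compare
    correctly as plain strings)."""
    for _, elem in ET.iterparse(dump_path, events=("end",)):
        if elem.tag != NS + "page":
            continue
        ts_el = elem.find(f"{NS}revision/{NS}timestamp")
        if ts_el is not None and start <= ts_el.text <= end:
            yield elem.find(NS + "title").text, ts_el.text
        elem.clear()  # free memory; real dumps are tens of gigabytes
```

Streaming with `iterparse` matters here, since loading a full dump into memory is not practical.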


Give the Wikipedia API and your programming skills a try.


There should be no need for web scraping; use the MediaWiki API to request the information you want directly. I'm not sure what you mean by "Wikipedia pages for a specific period" - do you mean last edited at a certain time? If so, while skimming the docs, I noticed an API call that lets you look at the last n revisions; just ask for the last revision and see what its date is.
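The call in question is `prop=revisions`. A sketch of checking a page's last-edit date this way, assuming the English Wikipedia endpoint (the page title you pass in is up to you):

```python
# Ask the MediaWiki API for only the most recent revision of a page
# and read its timestamp. Endpoint and parameters are real API options.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = "https://en.wikipedia.org/w/api.php"

def last_revision_url(title):
    """Build the API URL requesting only the latest revision's timestamp."""
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvlimit": 1,          # just the most recent revision
        "rvprop": "timestamp",
        "format": "json",
    }
    return API + "?" + urlencode(params)

def last_edit_timestamp(title):
    """Fetch the page's most recent edit timestamp (ISO-8601 string)."""
    with urlopen(last_revision_url(title)) as resp:
        data = json.load(resp)
    page = next(iter(data["query"]["pages"].values()))
    return page["revisions"][0]["timestamp"]
```

You could then keep only the pages whose timestamp falls after 2005, which matches what the question asks for.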


It depends on whether the website in question offers an archive, and most don't, so there is no straightforward way to crawl a sample starting from a specific date. But you can build some intelligence into your crawler to read the page's creation date or something like that.

But you can also look at Wikipedia API at http://en.wikipedia.org/w/api.php
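For the period part specifically, that API's `rvstart`/`rvend` parameters (real options of `prop=revisions`) bound the revisions returned. A sketch under that assumption; the title you query is your choice:

```python
# Fetch only the revisions of a page that fall within a date range,
# using the MediaWiki API's rvstart/rvend parameters.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = "https://en.wikipedia.org/w/api.php"

def period_params(title, start, end):
    """Parameters selecting revisions between `start` and `end`.
    The API walks newest-to-oldest by default, so rvstart takes the
    newer bound and rvend the older one."""
    return {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvstart": end,
        "rvend": start,
        "rvlimit": 50,
        "rvprop": "ids|timestamp",
        "format": "json",
    }

def revisions_in_period(title, start, end):
    """Return the page's revisions whose timestamps lie in [start, end]."""
    url = API + "?" + urlencode(period_params(title, start, end))
    with urlopen(url) as resp:
        data = json.load(resp)
    page = next(iter(data["query"]["pages"].values()))
    return page.get("revisions", [])
```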
