
How to download images from "wikimedia search result" using wget?

I need to mirror every image that appears on this page:

http://commons.wikimedia.org/w/index.php?title=Special:Search&ns0=1&ns6=1&ns12=1&ns14=1&ns100=1&ns106=1&redirs=0&search=buitenzorg&limit=900&offset=0

The mirror should give us the full-size images, not the thumbnails. What is the best way to do this with wget?

UPDATE:

I've updated the solution below.


Regex is your friend, my friend! Using cat, grep and wget you'll get this task done pretty fast. Download the search-results page with wget, then run:

grep -oP 'class="searchResultImage".+?href="\K.+?\.jpg' DownloadedSearchResults.html

(Note: egrep does not support lookbehind assertions, so this uses GNU grep's -P mode with \K to drop the leading context from the match.)

That should give you http://commons.wikimedia.org/ based links to each image's description page. Now, for each one of those results, download the page and run:

grep -oP 'class="fullImageLink".*?href="\K.+?\.jpg' DownloadedImagePage.html

That should give you a direct link to the highest resolution available for that image.
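The two extraction steps above can be combined into one runnable sketch. Since egrep has no lookbehind support, this uses GNU grep's -P mode with \K instead; the HTML snippet and file names below are made-up stand-ins, not the real Commons markup:

```shell
# Stand-in for a saved search-results page (markup structure assumed).
cat > SearchResults.html <<'EOF'
<li class="searchResultImage"><a href="http://commons.wikimedia.org/wiki/File:Buitenzorg_1.jpg">Buitenzorg 1</a></li>
EOF

# Step 1: pull each image description-page link out of the search results.
grep -oP 'class="searchResultImage".*?href="\K[^"]+\.jpg' SearchResults.html
# prints http://commons.wikimedia.org/wiki/File:Buitenzorg_1.jpg

# Step 2, per result (needs network access, so shown as comments):
#   wget -q -O ImagePage.html "$link"
#   grep -oP 'class="fullImageLink".*?href="\K[^"]+\.jpg' ImagePage.html
# then wget the direct full-resolution URL that step 2 prints.
```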

I'm hoping your bash knowledge will do the rest. Good luck!


I came here with the same problem and found this: http://meta.wikimedia.org/wiki/Wikix

I don't have access to a Linux machine right now, so I haven't tried it yet.


It is quite difficult to write the whole script in the Stack Overflow editor, so you can find it at the address below. The script only downloads the images on the first page; you can modify it to automate downloading the other pages.

http://pastebin.com/xuPaqxKW
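Since the linked script only covers the first page, here is a hedged sketch of how the pagination could be automated. The limit and offset parameters come from the search URL in the question; the page size of 100 and the number of pages are arbitrary assumptions:

```shell
# Generate one search-results URL per page by stepping the "offset"
# parameter. The wget line is commented out so the sketch runs offline.
base='http://commons.wikimedia.org/w/index.php?title=Special:Search&search=buitenzorg&limit=100'
for offset in 0 100 200; do
  url="${base}&offset=${offset}"
  echo "$url"
  # wget -q -O "results_${offset}.html" "$url"   # uncomment to fetch
done
```

Each fetched results page can then be fed through the grep extraction from the first answer.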
