What should the initial list of URLs be for a crawler to start its work?
I want a list of URLs from which my crawler can start crawling efficiently, so that it can cover a maximum part of the web. Do you have any other ideas for creating an initial index for different hosts? Thank you.
- http://www.dmoz.org is a good seed.
- As said in other answers, querying a search engine is a good way to orient a crawl.
Maybe results from a search engine, queried with keywords from the problem domain you're trying to explore?
IMO it doesn't really matter - as long as those URLs link out to various parts of the web, you can be reasonably sure your crawler will reach most non-dark (i.e. linked-to) pages on the Web, sooner or later (probably later, given the size of the Web).
I'd suggest some site's front page that has many links leading out to many different places on the web (hint hint), and go from there.
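To make the seed suggestions above concrete, here is a hedged sketch of a starting frontier. dmoz comes from the answer above; the other entries are arbitrary examples of link-rich front pages I'm assuming for illustration, not recommendations from this thread:

```python
# A possible starting frontier for the crawler.
SEED_URLS = [
    "http://www.dmoz.org",           # human-edited web directory (suggested above)
    "https://en.wikipedia.org",      # example of a densely interlinked front page
    "https://news.ycombinator.com",  # example of a link aggregator
]
```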
Wherever you start, the problem you'll have won't be a lack of links - quite the contrary. You'll have the exact opposite problem, and will need to implement an algorithm to keep track of where you've been, where you should go next, and how to avoid semi-infinite and infinite loops.
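A minimal Python sketch of that bookkeeping, assuming a breadth-first frontier and a visited set; the regex-based link extraction is a toy stand-in for a real HTML parser, and a real crawler would also need per-host politeness and robots.txt handling:

```python
import re
import urllib.request
from collections import deque
from urllib.parse import urljoin

def extract_links(url):
    """Fetch a page and return the absolute http(s) URLs it links to."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            html = resp.read().decode("utf-8", errors="replace")
    except Exception:
        return []  # unreachable or non-decodable page: just skip it
    links = []
    for href in re.findall(r'href="([^"]+)"', html):
        absolute = urljoin(url, href)      # resolve relative links
        if absolute.startswith("http"):
            links.append(absolute)
    return links

def crawl(seed_urls, max_pages=100):
    visited = set()              # where we've been
    frontier = deque(seed_urls)  # where to go next (FIFO = breadth-first)
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue             # deduplicating is what breaks the loops
        visited.add(url)
        for link in extract_links(url):
            if link not in visited:
                frontier.append(link)
    return visited

pages = crawl(["http://www.dmoz.org"])  # or any seed list like the one above
```

The `max_pages` cap is arbitrary; without some bound (and without the visited set), a crawl of a cyclically linked site never terminates.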