
PHP web-crawler [duplicate]

This question already has answers here: Closed 11 years ago.

I'm looking for a PHP web-crawler to gather all the links for a large site and tell me if the links are broken.

So far I've tried modifying an example from here myself (my question about the code). I've also tried grabbing phpDig, but the site is down. Any suggestions on how I should proceed would be great.

EDIT

The problem isn't grabbing the links; the issue is the scale. I'm not sure the script I modified is sufficient to handle what could be thousands of URLs. When I set the search depth to 4, the crawler timed out through the browser. Someone else mentioned something about killing processes so as not to overload the server; could someone please elaborate on that?


Not a ready-to-use solution, but Simple HTML DOM Parser is one of my favourite DOM parsers. It lets you use CSS selectors to find nodes in the document, so you can easily collect the <a href=""> elements. With those hyperlinks you can build your own crawler and check whether the pages are still available.

You can find it here.
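A minimal sketch of that approach, assuming the library file is saved as simple_html_dom.php next to the script and that an HTTP status of 400 or above (or no response) counts as "broken"; the start URL and function names below are illustrative, only file_get_html(), find() and ->href come from Simple HTML DOM itself:

<?php
// Requires the Simple HTML DOM library in the same directory.
include 'simple_html_dom.php';

// Fetch a page and return every href found in its <a> tags.
function get_links($url) {
    $links = array();
    $html = file_get_html($url);          // Simple HTML DOM loader
    if (!$html) {
        return $links;                     // page could not be fetched
    }
    foreach ($html->find('a') as $a) {     // CSS-style selector for anchors
        if (!empty($a->href)) {
            $links[] = $a->href;
        }
    }
    $html->clear();                        // free the DOM to limit memory use
    return $links;
}

// Report whether a URL answers with an HTTP error (>= 400) or not at all.
function is_broken($url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_NOBODY, true);          // HEAD-style request, no body
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);
    curl_exec($ch);
    $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    return $code === 0 || $code >= 400;
}

// Example run against a placeholder start page.
foreach (get_links('http://example.com/') as $link) {
    echo $link . ' => ' . (is_broken($link) ? 'BROKEN' : 'OK') . "\n";
}

For a site with thousands of URLs, running something like this from the command line (php crawler.php) rather than through the browser avoids the browser timeout mentioned in the question edit.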

