Grabbing HTML/links from the same domain
I am a bit of a noob when it comes to this topic. I am writing a userscript to improve the UI of a web app, and I need to grab links from a URL. The site has a login and password system in front of the actual content, and I would like to start grabbing the links once I am on the main site.
Basically, on the main site after the login there are several links that go to different pages on the same domain (e.g. www.somedomain.com/page?=1), and each of those pages contains more links. I would like to go and pull the links off all the child pages, and keep grabbing the children's links until I decide to stop or a page has no links left.
I was thinking of using an iframe to load each URL and grab the text, but I am pretty sure that would be a slow solution. I have also looked into YQL, but some URLs I tested from the console were blocked by the site; the returned XML says access denied for some parts of the site.
I would like to know the best way to do this. Sorry if my explanation is confusing.
There really isn't a best way to do this. It's going to be slow no matter what, since you're basically implementing a spider in the browser.
Since the pages are on the same domain (so the requests go out with your login cookies), you can fetch their source with a simple ajax call. Using jQuery:
$.get('/path/to/page', function(data){
    // data = page source
});
Then parse the source for links using a regex like:
/<a [^\>]+href="([^\"]+)"/g
Test that they point to the same domain, then repeat on each one...
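A minimal sketch of that loop, assuming a hypothetical crawl() helper, a visited map so pages aren't fetched twice, and a depth limit so it eventually stops (the start path and depth are just placeholders):

var visited = {};

function crawl(path, depth) {
    // stop once we hit the depth limit or a page we've already seen
    if (depth <= 0 || visited[path]) return;
    visited[path] = true;

    $.get(path, function(data){
        var linkRe = /<a [^\>]+href="([^\"]+)"/g;
        var match;
        while ((match = linkRe.exec(data)) !== null) {
            var href = match[1];
            // keep only relative links or absolute links on this host
            if (href.charAt(0) === '/' || href.indexOf(location.origin) === 0) {
                var nextPath = href.replace(location.origin, '');
                // do something with nextPath here, then recurse into it
                crawl(nextPath, depth - 1);
            }
        }
    });
}

crawl('/page?=1', 3); // start from the first page, follow links at most 3 levels deep

You'll probably also want to strip #fragments and duplicate query strings before recursing, or you'll end up fetching the same page more than once.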