Tools/libraries to resolve/expand thousands of URLs

In a crawler-like project we have a common and widely used task: resolving/expanding thousands of URLs. Say we have (a very simplified example):

http://bit.ly/4Agih5

A GET request to 'http://bit.ly/4Agih5' returns a 3xx response, and we follow the redirect straight to:

http://stackoverflow.com

A GET request to 'http://stackoverflow.com' returns 200, so 'stackoverflow.com' is the result we need.

Any URL (not only well-known shorteners like bit.ly) is allowed as input. Some URLs redirect once, some don't redirect at all (the result is the URL itself in that case), and some redirect multiple times. Our task is to follow all redirects, imitating browser behavior as closely as possible. In general, given some URL A, the resolver should return a URL B that is the same as what a browser would end up showing if A were opened in it.
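To pin down the intended semantics, here is a minimal sketch of such a resolve loop in plain Java. It is illustrative only, not our actual code: the HEAD method, the 10-hop limit, and the helper name resolve are assumptions made for the example.

import java.net.HttpURLConnection;
import java.net.URL;

// Illustrative sketch: walk the Location chain until a non-3xx response.
static String resolve(String start) throws Exception {
    String current = start;
    for (int hops = 0; hops < 10; hops++) {         // guard against redirect loops
        HttpURLConnection conn = (HttpURLConnection) new URL(current).openConnection();
        conn.setInstanceFollowRedirects(false);     // we follow manually
        conn.setRequestMethod("HEAD");              // headers only, no body
        int code = conn.getResponseCode();
        String location = conn.getHeaderField("Location");
        conn.disconnect();
        if (code / 100 != 3 || location == null) {
            return current;                         // 200 (or an error): final URL
        }
        current = new URL(new URL(current), location).toString(); // resolve relative Location
    }
    throw new IllegalStateException("too many redirects: " + start);
}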

So far we have used Java, a pool of threads, and plain URLConnection to solve this task (a sketch of this setup follows the lists below). The advantages are obvious:

  • simplicity - just create a URLConnection, enable redirect following, and that's (almost) it;
  • solid HTTP support - Java provides everything we need to imitate a browser as closely as possible: automatic redirect following and cookie support.

Unfortunately such approach has also drawbacks:

  • performance - threads are not free, and URLConnection starts downloading the document right after getInputStream() is called, even if we don't need the body;
  • memory footprint - we are not sure exactly, but URL and URLConnection seem to be quite heavyweight objects, and again the GET result is buffered right after the getInputStream() call.
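For reference, our current setup boils down to roughly the following sketch; the urls collection, the pool size of 100, and the error handling are placeholders. The getInputStream() call marked below is exactly where the unwanted body download begins.

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of the current approach: one blocking thread per in-flight URL.
ExecutorService pool = Executors.newFixedThreadPool(100); // pool size is arbitrary here
for (String u : urls) {                                   // urls: some Iterable<String>
    pool.submit(() -> {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(u).openConnection();
            conn.setInstanceFollowRedirects(true);   // let Java walk the 3xx chain
            conn.getInputStream().close();           // forces the request; body buffering starts here
            System.out.println(u + " -> " + conn.getURL()); // getURL() reflects the followed redirects
        } catch (Exception e) {
            e.printStackTrace();
        }
    });
}
pool.shutdown();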

Are there other solutions (or improvements to this one) which may significantly increase speed and decrease memory consumption? Presumably, we need something like:

  • a high-performance, lightweight Java HTTP client based on java.nio (see the sketch below);
  • a C HTTP client that uses poll() or select();
  • some ready-made library that resolves/expands URLs.
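On the first point: if a newer JDK is an option, java.net.http.HttpClient (Java 11+) is close to what is asked for; its sendAsync() multiplexes many requests over a small internal NIO-based machinery instead of one blocking thread per URL. A minimal sketch, assuming a List<String> urls as input:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

// Sketch: resolve many URLs concurrently without a thread per request.
HttpClient client = HttpClient.newBuilder()
        .followRedirects(HttpClient.Redirect.NORMAL) // auto-follow 3xx, except https -> http
        .build();

List<CompletableFuture<String>> futures = urls.stream()
        .map(u -> client.sendAsync(
                    HttpRequest.newBuilder(URI.create(u))
                            .method("HEAD", HttpRequest.BodyPublishers.noBody()) // no body download
                            .build(),
                    HttpResponse.BodyHandlers.discarding())
                .thenApply(resp -> u + " -> " + resp.uri())) // uri() is the final URL after redirects
        .collect(Collectors.toList());

futures.forEach(f -> System.out.println(f.join()));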


You can use Python, Gevent, and urlopen. Combine this gevent example with the redirect handling in this SO question.

I would not recommend Nutch; it is very complex to set up and has numerous dependencies (Hadoop, HDFS).


I'd use a Selenium script to read URLs off a queue and GET them, then wait about 5 seconds per browser to see if a redirect occurs, and if so, put the new redirect URL back into the queue for the next instance to process. You can run as many instances simultaneously as you want.
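A minimal single-browser sketch of that loop in Java; the queue wiring, the method name drain, and the flat 5-second wait are illustrative choices, and for throughput you would run several of these workers in parallel:

import java.util.concurrent.BlockingQueue;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

// Illustrative single-browser worker: drains URLs from a shared queue.
static void drain(BlockingQueue<String> queue) throws InterruptedException {
    WebDriver driver = new FirefoxDriver();
    String url;
    while ((url = queue.poll()) != null) {
        driver.get(url);            // loads the page; JS and meta redirects run too
        Thread.sleep(5000);         // give delayed redirects ~5 seconds to fire
        String finalUrl = driver.getCurrentUrl();
        if (!finalUrl.equals(url)) {
            queue.add(finalUrl);    // redirected: re-check the new URL next round
        } else {
            System.out.println("resolved: " + finalUrl);
        }
    }
    driver.quit();
}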

UPDATE:

If you only care about the Location header (which is what most non-JavaScript, non-meta redirects use), you can simply check it; you never need to touch the input stream:

import java.net.HttpURLConnection;
import java.net.URL;

HttpURLConnection.setFollowRedirects(false); // static: disables auto-follow for all instances
URL url = new URL("http://bit.ly/abc123");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestMethod("HEAD");               // headers are enough; never downloads the body
String newLocation = conn.getHeaderField("Location"); // null if there was no redirect

If newLocation is populated, stick that URL back into the queue and have it followed in the next round.
