org.apache.commons.httpclient.HttpClient stuck on request
I have this code:
while (!lastPage && currentPage < maxPageSize) {
    StringBuilder request = new StringBuilder("http://catalog.bizrate.com/services/catalog/v1/us/" + " some more ...");
    currentPage++;
    HttpClient client = new HttpClient(new MultiThreadedHttpConnectionManager());
    client.getHttpConnectionManager().getParams().setConnectionTimeout(15000);
    GetMethod get = new GetMethod(request.toString());
    HostConfiguration configuration = new HostConfiguration();
    int iGetResultCode = client.executeMethod(configuration, get);
    if (iGetResultCode != HttpStatus.SC_OK) {
        System.err.println("Method failed: " + get.getStatusLine());
        return;
    }
    XMLStreamReader reader
        = XMLInputFactory.newInstance().createXMLStreamReader(get.getResponseBodyAsStream());
    while (reader.hasNext()) {
        int type = reader.next();
        // some more xml parsing ...
    }
    reader.close();
    get.releaseConnection();
}
Somehow the code gets stuck from time to time on the line executing the request (client.executeMethod(...)).
I can't find a configuration for a request timeout (as opposed to the connection timeout). Can someone help me, or is there something I am doing basically wrong?
The client I am using is Apache Commons HttpClient 3.x.
You can also set a socket read timeout using setSoTimeout(),
but even that is no guarantee.
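For the HttpClient 3.x API used in the question, a sketch of where the socket timeout can be set (the 15-second values are illustrative; this fragment assumes the commons-httpclient 3.x jar on the classpath and is not a complete program):

```java
// Cap how long a read on the socket may block. Without this, a server
// that accepts the connection but never sends data can hang
// executeMethod() / getResponseBodyAsStream() indefinitely.
HttpClient client = new HttpClient(new MultiThreadedHttpConnectionManager());
client.getHttpConnectionManager().getParams().setConnectionTimeout(15000); // TCP connect
client.getHttpConnectionManager().getParams().setSoTimeout(15000);         // socket reads

// The read timeout can also be set per request:
GetMethod get = new GetMethod(request.toString());
get.getParams().setSoTimeout(15000);
```

When the timeout fires, the read throws a SocketTimeoutException rather than blocking forever, which is why it still does not bound the total request time: a server that trickles one byte every few seconds will keep resetting the timer.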
The only reliable solution is to run the request in a different thread and interrupt that thread after the timeout. You can use FutureTask to do this. See my answer to this question for examples:
java native Process timeout
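A minimal, self-contained sketch of that pattern using an ExecutorService (the sleeping task stands in for the blocking client.executeMethod(...) call; the helper name runWithTimeout is my own):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutDemo {
    // Run a blocking task with a hard wall-clock timeout.
    // In the real code the Callable would call client.executeMethod(...)
    // and return the status code.
    static <T> T runWithTimeout(Callable<T> task, long timeoutMs)
            throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<T> future = executor.submit(task);
        try {
            return future.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            future.cancel(true); // interrupts the worker thread
            throw e;
        } finally {
            executor.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        // Fast task completes within the limit.
        System.out.println(runWithTimeout(() -> "done", 1000));

        // Slow task is abandoned after 200 ms.
        try {
            runWithTimeout(() -> { Thread.sleep(5000); return "late"; }, 200);
        } catch (TimeoutException e) {
            System.out.println("timed out");
        }
    }
}
```

Note that cancel(true) only interrupts the thread; a socket read that ignores interruption will still block in the background, so this bounds how long *your* code waits, not how long the worker lives.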
PoolingClientConnectionManager
maintains a maximum limit of connections on a per-route basis and in total. By default this implementation will create no more than two concurrent connections per given route and no more than 20 connections in total. For many real-world applications these limits may prove too constraining, especially if they use HTTP as a transport protocol for their services. Connection limits, however, can be adjusted using HTTP parameters.
For more information, you can refer to PoolingClientConnectionManager
Java API
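A sketch of how those limits can be raised in HttpClient 4.x (this assumes the httpclient 4.2+ jar; the numbers and the hostname are illustrative, not recommendations):

```java
// Raise the pool limits beyond the 2-per-route / 20-total defaults.
PoolingClientConnectionManager cm = new PoolingClientConnectionManager();
cm.setMaxTotal(200);           // total connections across all routes
cm.setDefaultMaxPerRoute(20);  // default cap per route

// Optionally, a higher cap for one busy host:
HttpHost host = new HttpHost("catalog.bizrate.com", 80);
cm.setMaxPerRoute(new HttpRoute(host), 50);

HttpClient client = new DefaultHttpClient(cm);
```

With the defaults, a third concurrent request to the same host simply blocks waiting for a pooled connection, which can look exactly like a "stuck" request.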