Pipelining in Tomcat - parallel?
I am writing a service using Tomcat and am trying to understand the pipelining feature of HTTP/1.1 and its implementation in Tomcat.
Here are my questions:
1] Is pipelining in Tomcat parallel? That is, after it receives a pipelined request, does it break it down into individual requests and invoke them all in parallel? Here is a small test I did. From my tests it looks like it does, but I am trying to find an authoritative document:
import java.io.DataInputStream;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public static void main(String[] args) throws IOException, InterruptedException
{
    Socket socket = new Socket();
    socket.connect(new InetSocketAddress("ServerHost", 2080));
    int bufferSize = 166;
    byte[] reply = new byte[bufferSize];
    DataInputStream dis = null;

    // First without pipelining - TEST1
    // socket.getOutputStream().write(
    //         ("GET URI HTTP/1.1\r\n" +
    //          "Host: ServerHost:2080\r\n" +
    //          "\r\n").getBytes());
    //
    // final long before = System.currentTimeMillis();
    // dis = new DataInputStream(socket.getInputStream());
    // Thread.sleep(20);
    // final long after = System.currentTimeMillis();
    //
    // dis.readFully(reply);
    // System.out.println(new String(reply));

    // Now pipeline 3 requests - TEST2
    byte[] request = ("GET URI HTTP/1.1\r\n" +
            "Host: ServerHost:2080\r\n" +
            "\r\n" +
            "GET URI HTTP/1.1\r\n" +
            "Host: ServerHost:2080\r\n" +
            "\r\n" +
            "GET URI HTTP/1.1\r\n" +
            "Host: ServerHost:2080\r\n" +
            "\r\n").getBytes();
    socket.getOutputStream().write(request);

    bufferSize = 1000 * 1;
    reply = new byte[bufferSize];
    final long before = System.currentTimeMillis();
    dis = new DataInputStream(socket.getInputStream());
    Thread.sleep(20); // Thread.sleep is static; calling it via currentThread() is misleading
    final long after = System.currentTimeMillis();
    dis.readFully(reply);
    System.out.println(new String(reply));

    long time = after - before;
    System.out.println("Request took: " + time + " milli secs");
}
In the above test, the TEST2 response time is not [20*3 = 60+ ms]. The actual GET requests are very fast. This hints that they are being parallelized, unless I am missing something?
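One weakness of the test above is that readFully into a fixed-size buffer blurs where one response ends and the next begins, so it cannot show when each individual response arrived. A more reliable check is to split the byte stream on Content-Length boundaries and timestamp each complete response. The sketch below shows only the splitting step; the class and method names are hypothetical, and it assumes plain Content-Length delimited responses (no chunked transfer encoding):

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Splits a byte stream containing back-to-back HTTP/1.1 responses into
// individual bodies, using each response's Content-Length header to find
// where that response ends and the next one begins.
public class PipelineSplit {

    public static List<String> splitBodies(byte[] data) {
        String s = new String(data, StandardCharsets.ISO_8859_1);
        List<String> bodies = new ArrayList<>();
        int pos = 0;
        while (pos < s.length()) {
            int headerEnd = s.indexOf("\r\n\r\n", pos);
            if (headerEnd < 0) break;              // headers not complete yet
            String headers = s.substring(pos, headerEnd);
            int len = contentLength(headers);
            int bodyStart = headerEnd + 4;
            if (bodyStart + len > s.length()) break; // body not complete yet
            bodies.add(s.substring(bodyStart, bodyStart + len));
            pos = bodyStart + len;                 // next response starts here
        }
        return bodies;
    }

    private static int contentLength(String headers) {
        for (String line : headers.split("\r\n")) {
            int colon = line.indexOf(':');
            if (colon > 0 && line.substring(0, colon).trim()
                    .equalsIgnoreCase("Content-Length")) {
                return Integer.parseInt(line.substring(colon + 1).trim());
            }
        }
        return 0; // simplification: assume no body when the header is absent
    }
}
```

With this in place, the client loop can record System.currentTimeMillis() each time splitBodies returns one more body than before, giving a per-response arrival time instead of a single timing around the whole read.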
2] What is the default pipeline depth in Tomcat? How can I control it?
3] When allowing pipelining on the server side for my service, do I need to consider anything else, assuming that the client handles pipelining as described in http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8.1.4? Any experiences are welcome.
I had a similar question about how Apache works, and after making several tests I can confirm that Apache does in fact wait for each request to be processed before starting to process the next one, so processing is SEQUENTIAL.
The concept of pipelining says that the server must be able to accept requests at any point in time, but the processing of those requests takes place in the order they were received. That is, parallel processing does not take place.
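Whatever the processing model, RFC 2616 requires that a server send its responses in the same order the pipelined requests were received. The sketch below (class and method names are hypothetical, and the "handler" is a stand-in for real request processing) shows one way ordering can be preserved even if a server did process requests concurrently: submit work to a pool, but drain the Futures in submission order, so a slow first request still gets its response written first:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrates the ordering constraint on pipelined responses: even if
// requests are handled concurrently, responses are written back in the
// order the requests arrived on the connection.
public class OrderedResponses {

    public static List<String> handle(List<String> requests) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        try {
            // Accept all requests up front, as pipelining allows.
            List<Future<String>> inFlight = new ArrayList<>();
            for (String req : requests) {
                inFlight.add(pool.submit(() -> "response-to-" + req));
            }
            // Write responses strictly in request order: each get() blocks
            // until *that* response is ready, regardless of which task
            // finished first.
            List<String> responses = new ArrayList<>();
            for (Future<String> f : inFlight) {
                responses.add(f.get());
            }
            return responses;
        } finally {
            pool.shutdown();
        }
    }
}
```

Purely sequential processing, as observed with Apache above, satisfies the same ordering constraint trivially; the point is that the wire order of responses is fixed by the spec either way.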