How to measure the time HTTP requests spend sitting in the accept-queue?

I am using Apache2 on Ubuntu 9.10, and I am trying to tune my configuration for a web application to reduce latency of responses to HTTP requests.

During a moderately heavy load on my small server, there are 24 apache2 processes handling requests. Additional requests get queued.

Using "netstat", I see 24 connections are ESTABLISHED and 125 connections are TIME_WAIT. I am trying to figure out if that is considered a reasonable backlog.

Most requests get serviced in a fraction of a second, so I am assuming requests move through the accept-queue fairly quickly, probably within 1 or 2 seconds, but I would like to be more certain.

Can anyone recommend an easy way to measure the time an HTTP request sits in the accept-queue?

The suggestions I have come across so far seem to start the clock after the apache2 worker accepts the connection. I'm trying to quantify the accept-queue delay before that.

thanks in advance, David Jones


I don't know if you can specifically measure the time before a connection is accepted, but you can measure the latency and variability of response times (and that's the part that really matters) using the ab tool that ships with the Apache utilities, e.g. ab -n 100 -c 3 http://yourserver/.

It will generate traffic at whatever concurrency level you configure, then break down the response times and give you the standard deviation.

Server Hostname:        stackoverflow.com
Document Length:        192529 bytes
Concurrency Level:      3
Time taken for tests:   48.769 seconds
Complete requests:      100
Failed requests:        44
   (Connect: 0, Receive: 0, Length: 44, Exceptions: 0)
Write errors:           0
Total transferred:      19427481 bytes
HTML transferred:       19400608 bytes
Requests per second:    2.05 [#/sec] (mean)
Time per request:       1463.078 [ms] (mean)
Time per request:       487.693 [ms] (mean, across all concurrent requests)
Transfer rate:          389.02 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:      101  109   9.0    105     152
Processing:   829 1336 488.0   1002    2246
Waiting:      103  115  38.9    104     368
Total:        939 1444 485.2   1112    2351

Percentage of the requests served within a certain time (ms)
  50%   1112
  66%   1972
  75%   1985
  80%   1990
  90%   2062
  95%   2162
  98%   2310
  99%   2351
 100%   2351 (longest request)

(SO didn't perform particularly well :)

The other thing you could do is put a timestamp in the request itself and compare it against the clock as soon as you start handling the request. If you generate the traffic on the same machine, or have the clocks synchronised, this lets you measure the time from when the request was sent to when a worker starts handling it, which includes the accept-queue delay you are after.
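A minimal sketch of that idea, assuming client and server run on the same machine (so they share a clock); the header name X-Sent-At and port 8000 are made up for illustration:

```python
# Sketch: client stamps each request with its send time; the server
# computes the delta the moment a worker starts handling the request.
# That delta includes any time spent waiting in the accept-queue.
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

deltas = []  # observed sent -> handled delays, in seconds

class TimingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        sent_at = float(self.headers["X-Sent-At"])   # client's send time
        deltas.append(time.time() - sent_at)         # queue + parse delay
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 8000), TimingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

for _ in range(5):
    req = urllib.request.Request(
        "http://127.0.0.1:8000/",
        headers={"X-Sent-At": str(time.time())})
    urllib.request.urlopen(req).read()

server.shutdown()
print("max sent->handled delay: %.1f ms" % (max(deltas) * 1000))
```

On a loaded server you would put the timestamp in from your real traffic generator and log the deltas in the handler; the distribution of those deltas bounds how long requests sit queued before being accepted and dispatched.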
