Node.js slower than Apache

I am comparing the performance of Node.js (0.5.1-pre) vs Apache (2.2.17) for a very simple scenario: serving a text file.

Here's the code I use for node server:

var http = require('http')
  , fs = require('fs')

fs.readFile('/var/www/README.txt',
    function(err, data) {
        http.createServer(function(req, res) {
            res.writeHead(200, {'Content-Type': 'text/plain'})
            res.end(data)
        }).listen(8080, '127.0.0.1')
    }
)

For Apache I am just using the default configuration that ships with Ubuntu 11.04.

When running Apache Bench with the following parameters against Apache

ab -n10000 -c100 http://127.0.0.1/README.txt

I get the following runtimes:

Time taken for tests:   1.083 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      27630000 bytes
HTML transferred:       24830000 bytes
Requests per second:    9229.38 [#/sec] (mean)
Time per request:       10.835 [ms] (mean)
Time per request:       0.108 [ms] (mean, across all concurrent requests)
Transfer rate:          24903.11 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.8      0       9
Processing:     5   10   2.0     10      23
Waiting:        4   10   1.9     10      21
Total:          6   11   2.1     10      23

Percentage of the requests served within a certain time (ms)
  50%     10
  66%     11
  75%     11
  80%     11
  90%     14
  95%     15
  98%     18
  99%     19
 100%     23 (longest request)

When running ApacheBench against the Node instance, these are the runtimes:

Time taken for tests:   1.712 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      25470000 bytes
HTML transferred:       24830000 bytes
Requests per second:    5840.83 [#/sec] (mean)
Time per request:       17.121 [ms] (mean)
Time per request:       0.171 [ms] (mean, across all concurrent requests)
Transfer rate:          14527.94 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.9      0       8
Processing:     0   17   8.8     16      53
Waiting:        0   17   8.6     16      48
Total:          1   17   8.7     17      53

Percentage of the requests served within a certain time (ms)
  50%     17
  66%     21
  75%     23
  80%     25
  90%     28
  95%     31
  98%     35
  99%     38
 100%     53 (longest request)

Node is clearly slower than Apache here. This is especially surprising considering that Apache is doing a lot of other work as well, such as logging.

Am I doing it wrong? Or is Node.js really slower in this scenario?

Edit 1: I do notice that Node's concurrency handling is better: when I increase the number of simultaneous requests to 1000, Apache starts dropping a few of them, while Node handles them all with no connections dropped.


Dynamic requests

Node.js is very good at handling a lot of small dynamic requests (which can be hanging/long-polling), but it is not as good at handling large buffers. Ryan Dahl (the author of Node.js) explained this in one of his presentations. I recommend studying those slides; I also watched the talk online somewhere.
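
As a rough illustration (my own sketch, not taken from the slides), this is the kind of workload meant by small hanging/long-polling requests: hold each connection open and answer later with a tiny payload, while the event loop keeps serving other clients. The port and delay below are arbitrary.

var http = require('http');

http.createServer(function (req, res) {
    // Hold the connection open; the single event loop keeps serving other
    // clients while this timer is pending.
    setTimeout(function () {
        res.writeHead(200, {'Content-Type': 'application/json'});
        res.end(JSON.stringify({now: Date.now()}));
    }, 5000); // pretend we are waiting five seconds for some event
}).listen(8081, '127.0.0.1');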

Garbage Collector

As you can see on slide 13 (of 45), it does poorly with big buffers.

Slide 15 of 45:

V8 has a generational garbage collector. Moves objects around randomly. Node can’t get a pointer to raw string data to write to socket.

Use Buffer

Slide 16 of 45:

Using Node’s new Buffer object, the results change.

Still not as good as, for example, nginx, but a lot better. These slides are also fairly old, so Ryan has probably improved this since then.
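
To make the contrast concrete, here is a minimal sketch of serving a pre-read Buffer versus a JavaScript string. Note that fs.readFile without an encoding argument already returns a Buffer, so the code in the question is effectively doing this already.

var http = require('http');
var fs = require('fs');

// Keep the file as a Buffer (raw bytes outside the V8 heap) instead of a
// JavaScript string, so each response writes the bytes without re-encoding.
var data = fs.readFileSync('/var/www/README.txt');   // Buffer
var text = data.toString('utf8');                    // string, for contrast

http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end(data);     // writes the Buffer directly
    // res.end(text);  // would pay an extra encode/copy on every request
}).listen(8080, '127.0.0.1');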

CDN

Still, I don't think you should be using Node.js to host static files. You are probably better off hosting them on a CDN, which is optimized for serving static files. Some popular CDNs (some even free) are listed on Wikipedia.

Nginx (+ Memcached)

If you don't want to use a CDN to host your static files, I recommend using Nginx with memcached instead, which is very fast.


In this scenario Apache is probably using sendfile, which results in the kernel sending a chunk of memory data (cached by the fs driver) directly to the socket. In the case of Node there is some overhead in copying data in userspace between V8, libeio and the kernel (see this great article on using sendfile in Node).
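
For what it's worth, the usual way in Node to avoid holding a whole file in a userspace buffer is to stream it per request; a minimal sketch follows. This is still not kernel sendfile (each chunk is copied through Node), and whether it helps here is debatable, since the code in the question already keeps the file in memory.

var http = require('http');
var fs = require('fs');

// Stream the file per request instead of buffering it: data flows through
// Node in chunks. Every chunk is still copied between kernel and userspace.
http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    fs.createReadStream('/var/www/README.txt').pipe(res);
}).listen(8080, '127.0.0.1');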

There are plenty of scenarios where Node will outperform Apache, like "send a stream of data at a constant slow speed to as many TCP connections as possible".
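
A hypothetical sketch of that kind of workload (the port, interval and payload are arbitrary):

var http = require('http');

// Drip a small chunk to every connected client once a second and keep the
// connection open indefinitely; an idle connection costs a socket and a
// pending timer, not a thread.
http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    var timer = setInterval(function () {
        res.write('tick ' + new Date().toISOString() + '\n');
    }, 1000);
    req.on('close', function () {
        clearInterval(timer); // stop writing once the client disconnects
    });
}).listen(8082, '127.0.0.1');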


The result of your benchmark can change in favor of Node.js if you increase the concurrency and use a cache in Node.js.

Sample code from the book "Node Cookbook":

var http = require('http');
var path = require('path');
var fs = require('fs');

var mimeTypes = {
    '.js'  : 'text/javascript',
    '.html': 'text/html',
    '.css' : 'text/css'
};
var cache = {};

// Read the file from disk on the first request, serve it from memory afterwards.
function cacheAndDeliver(f, cb) {
    if (!cache[f]) {
        fs.readFile(f, function (err, data) {
            if (!err) {
                cache[f] = {content: data};
            }
            cb(err, data);
        });
        return;
    }
    console.log('loading ' + f + ' from cache');
    cb(null, cache[f].content);
}

http.createServer(function (request, response) {
    var lookup = path.basename(decodeURI(request.url)) || 'index.html';
    var f = 'content/' + lookup;
    fs.exists(f, function (exists) {
        if (!exists) {
            response.writeHead(404); // no such file found!
            response.end('Page Not Found!');
            return;
        }
        cacheAndDeliver(f, function (err, data) {
            if (err) {
                response.writeHead(500);
                response.end('Server Error!');
                return;
            }
            var headers = {'Content-type': mimeTypes[path.extname(lookup)]};
            response.writeHead(200, headers);
            response.end(data);
        });
    });
}).listen(8080); // port chosen to match the question's setup


Really, all you're doing here is getting the system to copy data between buffers in memory, in different processes' address spaces - the disk cache means you aren't really touching the disk, and you're using local sockets.

So the fewer copies that have to be done per request, the faster it goes.

Edit: I suggested adding caching, but in fact I see now you're already doing that - you read the file once, then start the server and send back the same buffer each time.

Have you tried appending the header part to the file data once upfront, so you only have to do a single write operation for each request?
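
The http module does not let you control the raw header bytes, but the idea can be sketched with a plain TCP server that answers every request with one precomputed Buffer. This is a hypothetical illustration for a current Node version (Buffer.from and Buffer.concat did not exist in 0.4/0.5), not a real HTTP server: it never parses the request.

var net = require('net');
var fs = require('fs');

// Build the entire HTTP response (status line, headers, body) into one
// Buffer up front and answer every request with a single write.
var body = fs.readFileSync('/var/www/README.txt');
var head = Buffer.from(
    'HTTP/1.1 200 OK\r\n' +
    'Content-Type: text/plain\r\n' +
    'Content-Length: ' + body.length + '\r\n' +
    'Connection: close\r\n' +
    '\r\n');
var response = Buffer.concat([head, body]);

net.createServer(function (socket) {
    socket.once('data', function () { // wait for the request, but ignore it
        socket.end(response);         // one write, then close the connection
    });
}).listen(8080, '127.0.0.1');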


$ cat /var/www/test.php
<?php
for ($i=0; $i<10; $i++) {
        echo "hello, world\n";
}


$ ab -r -n 100000 -k -c 50 http://localhost/test.php
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Completed 100000 requests
Finished 100000 requests


Server Software:        Apache/2.2.17
Server Hostname:        localhost
Server Port:            80

Document Path:          /test.php
Document Length:        130 bytes

Concurrency Level:      50
Time taken for tests:   3.656 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Keep-Alive requests:    100000
Total transferred:      37100000 bytes
HTML transferred:       13000000 bytes
Requests per second:    27350.70 [#/sec] (mean)
Time per request:       1.828 [ms] (mean)
Time per request:       0.037 [ms] (mean, across all concurrent requests)
Transfer rate:          9909.29 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.1      0       3
Processing:     0    2   2.7      0      29
Waiting:        0    2   2.7      0      29
Total:          0    2   2.7      0      29

Percentage of the requests served within a certain time (ms)
  50%      0
  66%      2
  75%      3
  80%      3
  90%      5
  95%      7
  98%     10
  99%     12
 100%     29 (longest request)

$ cat node-test.js 
var http = require('http');
http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Hello World\n');
}).listen(1337, "127.0.0.1");
console.log('Server running at http://127.0.0.1:1337/');

$ ab -r -n 100000 -k -c 50 http://localhost:1337/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Completed 100000 requests
Finished 100000 requests


Server Software:        
Server Hostname:        localhost
Server Port:            1337

Document Path:          /
Document Length:        12 bytes

Concurrency Level:      50
Time taken for tests:   14.708 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Keep-Alive requests:    0
Total transferred:      7600000 bytes
HTML transferred:       1200000 bytes
Requests per second:    6799.08 [#/sec] (mean)
Time per request:       7.354 [ms] (mean)
Time per request:       0.147 [ms] (mean, across all concurrent requests)
Transfer rate:          504.62 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.1      0       3
Processing:     0    7   3.8      7      28
Waiting:        0    7   3.8      7      28
Total:          1    7   3.8      7      28

Percentage of the requests served within a certain time (ms)
  50%      7
  66%      9
  75%     10
  80%     11
  90%     12
  95%     14
  98%     16
  99%     17
 100%     28 (longest request)

$ node --version
v0.4.8


Environment for the benchmarks above:

Apache:

$ apache2 -version
Server version: Apache/2.2.17 (Ubuntu)
Server built:   Feb 22 2011 18:35:08

PHP APC cache/accelerator is installed.

The tests were run on my laptop, a Sager NP9280 with a Core i7 920 and 12 GB of RAM.

$ uname -a
Linux presto 2.6.38-8-generic #42-Ubuntu SMP Mon Apr 11 03:31:24 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux

Kubuntu Natty
