
Efficient implementation for serving tens of thousands of short-lived HTTP requests on a single Linux node?

I'm reading about the different approaches taken by node.js, Ruby, Jetty, and company for scaling request-handling capacity on a single machine.

As an application developer, i.e. someone with very little understanding of the kernel and networking, I'm curious to understand the different approaches each implementation takes (kernel select, polling the socket for connections, event-based I/O, and so on).

Please note that I'm not asking about special handling features (such as Jetty continuations (request->wait->request), a pattern typical for AJAX clients). More generally: if you wanted to implement a server that responds with "Hello World" to the maximum number of concurrent clients, how would you do it, and why?

Information / References to reading material would be great.


Take a look at The C10K problem page.
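To make the event-based approach the question mentions concrete, here is a minimal sketch of a single-threaded "Hello World" server using Python's standard `selectors` module, which wraps epoll on Linux. One thread multiplexes all connections through the kernel's readiness notifications instead of spawning a thread per client; the port number and buffer sizes are arbitrary choices for illustration.

```python
import selectors
import socket

RESPONSE = (
    b"HTTP/1.1 200 OK\r\n"
    b"Content-Type: text/plain\r\n"
    b"Content-Length: 11\r\n"
    b"Connection: close\r\n\r\n"
    b"Hello World"
)

sel = selectors.DefaultSelector()  # epoll-backed on Linux

def accept(server):
    # A new connection is ready; register it with the same selector.
    conn, _addr = server.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, handle)

def handle(conn):
    # The client socket is readable; read the request, reply, close.
    try:
        data = conn.recv(4096)  # request bytes are ignored in this sketch
        if data:
            conn.sendall(RESPONSE)
    finally:
        sel.unregister(conn)
        conn.close()

def serve(port=8080):
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("", port))
    server.listen(1024)  # sizeable backlog for bursts of connections
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ, accept)
    while True:  # the event loop: block until any socket is ready
        for key, _mask in sel.select():
            key.data(key.fileobj)
```

The key design point is that nothing here blocks per client: the only blocking call is `sel.select()`, which waits on all sockets at once, so memory and scheduling cost grow with the number of *ready* events, not the number of open connections. A thread-per-connection design, by contrast, pays a stack and context-switch cost for every idle client, which is exactly the bottleneck the C10K page discusses.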
