What webserver / module / technique should I use to serve everything from memory?
I have lots of lookup tables from which I'll generate my web responses.
I think IIS with ASP.NET would let me keep static lookup tables in memory and use them to serve responses very quickly.
Are there, however, also non-.NET solutions which can do the same?
I've looked at FastCGI, but as I understand it, it starts X processes, any one of which can handle Y requests, and the processes are by definition shielded from each other. I could configure FastCGI to use just one process, but does that have scalability implications?
Anything using PHP or any other interpreted language won't fly because it is also CGI- or FastCGI-bound, right?
I understand memcached could be an option, though it would require another (local) socket connection, which I'd rather avoid since keeping everything in-process memory should be much faster.
The solution can run under Windows or Unix; it doesn't matter too much. The only thing that matters is that there will be a lot of requests (100/sec now, growing to 500/sec in a year), and I want to reduce the number of webservers needed to process them.
The current solution uses PHP and memcached (with the occasional hit to the SQL server backend). Although it is fast (for PHP, anyway), Apache has real problems once 50 requests/sec is passed.
I've put a bounty on this question since I haven't seen enough responses to make a wise choice.
At the moment I'm considering either ASP.NET or FastCGI with C(++).
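To make the FastCGI-with-C(++) option concrete, here is a minimal sketch of what I have in mind, assuming libfcgi's fcgi_stdio wrapper (the table contents and query handling are placeholders). Each worker process builds the table once at startup and then serves every request from memory; with X processes you of course hold X copies of the table.

```cpp
// Minimal FastCGI responder, assuming libfcgi (fcgi_stdio.h).
// The lookup table lives in process memory and survives across requests,
// because the process itself is long-running.
#include "fcgi_stdio.h"   // wraps stdio so printf writes to the FastCGI stream
#include <cstdlib>
#include <string>
#include <unordered_map>

int main() {
    // Built once at process start; shared by every request this process handles.
    // Placeholder contents; real code would load the lookup tables here.
    static const std::unordered_map<std::string, std::string> table = {
        {"foo", "bar"},
        {"baz", "qux"},
    };

    // Each iteration of this loop handles one request.
    while (FCGI_Accept() >= 0) {
        const char* q = getenv("QUERY_STRING");  // real code would parse this properly
        auto it = table.find(q ? q : "");
        printf("Content-Type: text/plain\r\n\r\n");
        printf("%s\n", it != table.end() ? it->second.c_str() : "not found");
    }
    return 0;
}
```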
It sounds like you should be using an in-memory key-value datastore like Redis. If you intend to have multiple web servers in the future, then you should definitely be using a centralized memory store. Redis is especially ideal in this scenario as it supports advanced data structures like lists, sets and sorted sets. It's also pretty fast: it can do 110,000 SETs/second and 81,000 GETs/second on an entry-level Linux box. Check the benchmarks. If you go down that route, I have a C# Redis client that can simplify access.
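Since the question also considers C(++), here is a minimal sketch of the same idea using the hiredis C client; the library choice, key names and values are my assumptions, not part of the question.

```cpp
// Minimal sketch of talking to Redis from C(++) via the hiredis client.
#include <hiredis/hiredis.h>
#include <cstdio>

int main() {
    redisContext* c = redisConnect("127.0.0.1", 6379);
    if (c == nullptr || c->err) {
        fprintf(stderr, "connection error\n");
        return 1;
    }

    // Store one lookup-table entry centrally so every web server sees it.
    redisReply* reply = static_cast<redisReply*>(
        redisCommand(c, "SET lookup:foo %s", "bar"));
    freeReplyObject(reply);

    // Fetch it back; in the web tier this replaces the local table lookup.
    reply = static_cast<redisReply*>(redisCommand(c, "GET lookup:foo"));
    if (reply->type == REDIS_REPLY_STRING)
        printf("lookup:foo -> %s\n", reply->str);
    freeReplyObject(reply);

    redisFree(c);
    return 0;
}
```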
To use shared memory you need an application server that is 'always running' in the same process. In that case you can use static classes or the shared 'Application' cache. The most popular application servers are Java servlet containers (e.g. Tomcat) and ASP.NET.
Moving from disk to memory will yield significant performance savings, and if that performance matters to you, I don't think you want to be using an interpreted language. There is always overhead in handling a request: network I/O, protocol parsing, setting up a worker thread, etc. The difference between an out-of-process shared memory store (on the same host) and an in-process one is negligible compared to the overall time it takes to complete the request.
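To put a rough number on the in-process side of that comparison, here is a quick microbenchmark sketch (table size and keys are arbitrary assumptions); a hash lookup costs on the order of nanoseconds, so the per-request overhead listed above dominates either way.

```cpp
// Rough microbenchmark: average cost of an in-process hash-table lookup.
#include <chrono>
#include <cstdio>
#include <string>
#include <unordered_map>

int main() {
    // Fill a table with 100,000 placeholder entries.
    std::unordered_map<std::string, std::string> table;
    for (int i = 0; i < 100000; ++i)
        table["key" + std::to_string(i)] = "value";

    const int lookups = 1000000;
    std::size_t hits = 0;
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < lookups; ++i)
        hits += table.count("key" + std::to_string(i % 100000));
    auto elapsed = std::chrono::steady_clock::now() - start;

    // Note: this includes the cost of building the key string each time,
    // so it overstates the pure lookup cost.
    auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(elapsed).count();
    printf("%zu hits, %.0f ns per lookup\n", hits, double(ns) / lookups);
    return 0;
}
```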
First of all, let me try to think through your direct questions with you:
- For the performance you're aiming at, I would say that demanding shared-memory access to the lookup tables is overkill. For example, the memcached developers on expected performance: "On a fast machine with very high speed networking (or local access - ed.), memcached can easily handle 200,000+ requests per second."
- You're currently probably limited by CPU time since you generate every page dynamically. If at all possible: cache, cache, cache! Cache your front page and rebuild it just once every minute or five minutes. For logged-in users, cache user-specific pages that they might visit again in their session. For example: where 50 requests a second is not too bad for a dynamic page, a reverse proxy such as varnish can serve thousands of static pages a second on a pretty mediocre server. My best hint would be to look into setting up a reverse proxy using varnish or squid (see the sketch after this list).
- If you still need to generate a lot of pages dynamically, use a PHP accelerator to avoid recompiling the PHP code every time a script is run. According to Wikipedia, this alone is a 2- to 10-fold performance increase.
- mod_php is the fastest way to run PHP.
- Besides using FastCGI, you could write an Apache module and keep your data in shared memory space with the webserver itself. This could be very fast. However, I've never heard of anybody doing this to increase performance, and it's a very inflexible solution.
- If you move towards more static content or go the FastCGI route: lighttpd is faster than Apache.
- Still not fast enough? In-kernel webservers such as TUX can be very fast.
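To illustrate the reverse-proxy hint above: the application only has to mark its output as cacheable, and varnish or squid will absorb the repeat requests. A minimal sketch in the same FastCGI style as the question's example; the 60-second TTL is an arbitrary assumption to tune.

```cpp
// Sketch: let a reverse proxy cache dynamically generated pages.
#include "fcgi_stdio.h"

int main() {
    while (FCGI_Accept() >= 0) {
        // Cache-Control tells the proxy it may reuse this response for 60s,
        // so the expensive dynamic generation runs at most once a minute.
        printf("Content-Type: text/html\r\n");
        printf("Cache-Control: public, max-age=60\r\n\r\n");
        printf("<html><body>expensively generated page</body></html>\n");
    }
    return 0;
}
```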
Secondly: you are not the first to encounter this challenge, and fortunately some of the bigger fish are kind enough to share their 'tricks' with us. I guess this is beyond the scope of your question, but it can be truly inspiring to see how these guys have solved their problems, so I decided to share the material known to me.
Look at this presentation on Facebook's architecture, and this presentation on 'building scalable web services', which contains some notes on the Flickr design.
Facebook also lists an impressive toolset that they have developed and contributed to, and they share notes on their architecture. Some of their performance-improving tricks:
- Some performance-improving customizations to memcached, such as memcached-over-UDP.
- HipHop is a PHP-to-optimized-C++ compiler. Facebook engineers claim a 50% reduction in CPU usage.
- Implement computationally intensive services in a 'faster language', and wire everything together using Thrift.