Scaling File Systems

This could be a question for serverfault as well, but it also covers topics from here.

I am building a new web site that consists of 6 servers: 1 MySQL, 1 web, 2 file processing servers, and 2 file servers. In short, the file processing servers process files and copy them to the file servers. In this case I have two options:

I can set up a web server on each file server and serve files directly from there, e.g. file1.domain.com/file.zip. Some files (not all of them) will need authentication, so I will authenticate users via memcache from those servers. 90% of the requests won't need any authentication.
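As a rough sketch of that check (the memcache host, key scheme, and cookie name below are placeholders I made up for illustration), the file server could do something like this before serving a protected file:

```php
<?php
// Runs on a file server before a protected download is served.
// Assumes the main site has written "session:<token>" into memcache at login.
$mc = new Memcached();
$mc->addServer('memcache.internal', 11211); // hypothetical memcache host

$token = isset($_COOKIE['auth_token']) ? $_COOKIE['auth_token'] : null;

if ($token === null || $mc->get('session:' . $token) === false) {
    header('HTTP/1.1 403 Forbidden');
    exit('Access denied');
}
// ...otherwise fall through and serve the file.
```

Public files (the 90% case) would skip this check entirely, so the memcache lookup only costs anything on the authenticated minority of requests.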

Or I can set up NFS and serve files directly from the web server, e.g. www.domain.com/fileserve.php?id=2323 (a basic example).

As the project is heavily file-based, the second option might not be as effective as the first, since it will consume more memory on the web server (even if I split files into chunks while serving).
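For completeness, a rough sketch of what fileserve.php could do to stream in chunks instead of loading the whole file into memory (the NFS mount path and chunk size are placeholder assumptions):

```php
<?php
// fileserve.php - streams a file from the NFS mount in fixed-size chunks,
// so the whole file is never held in PHP's memory at once.
$id   = (int) $_GET['id'];
$path = '/mnt/fileserver/' . $id . '.zip'; // hypothetical NFS mount point

if (!is_file($path)) {
    header('HTTP/1.1 404 Not Found');
    exit;
}

header('Content-Type: application/octet-stream');
header('Content-Length: ' . filesize($path));

$fp = fopen($path, 'rb');
while (!feof($fp)) {
    echo fread($fp, 8192); // 8 KB chunks; tune as needed
    flush();
}
fclose($fp);
```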

The setup will stay the same for a long time, so we won't be adding new file servers later.

What are your thoughts? Which option is better, or is there a different approach you'd suggest?

Thanks in advance,


Just me, but I would actually put a set of reverse proxy rules on the "web server" and then proxy HTTP requests (possibly load balanced if they have equal filesystems) back to a lightweight HTTP server on the file servers.

This gives you flexibility and the ability to implement future caching, logging, filter chains, rewrite rules, authentication, etc. I find having a fronting web server as a proxy layer a very effective solution.
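As a rough illustration only, assuming nginx as the fronting web server and a lightweight HTTP daemon on port 8080 on each file server (hostnames and port are made up), the proxy rules might look something like:

```nginx
# On the fronting web server: proxy /files/ requests to the file servers.
upstream file_servers {
    # Both backends can be listed if their filesystems are identical.
    server file1.internal:8080;
    server file2.internal:8080;
}

server {
    listen 80;
    server_name www.domain.com;

    location /files/ {
        proxy_pass http://file_servers;
        proxy_set_header Host $host;
        # Caching, logging, rewrite rules, auth checks, etc.
        # can all be layered in here later.
    }
}
```

The nice part is that anything you add later goes into that location block on the front end, without touching the file servers themselves.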


I recommend your option #1: allow the file servers to act as web servers. I have personally found NFS to be a little flaky when used under high volume.


You can also use a Content Delivery Network such as simplecdn.com; it can solve the bandwidth and server load issues.
