High-performance middleware communication for a distributed application
I am designing a distributed architecture with a web front end (probably ASP.NET MVC, and eventually ExtJS as well) and a number of application modules as back-end services. My goal is to be completely free to deploy these .NET services on one, two, or three different servers so I can distribute the workload among several machines.
Which technology should I use to write these back-end services and to communicate among them?
For example, if I write .NET WCF wrappers for my business logic (.NET class libraries), I believe I can change the binding: named pipes for high performance on the same box, and when deploying to multiple servers I just change the binding in the configuration file to netTcp, and everything should hopefully keep working.
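The binding switch described above can be done entirely in configuration. A minimal sketch of an `app.config` fragment, assuming a hypothetical `OrderService` and `IOrderService` contract (the names and addresses are illustrative, not from the original post):

```xml
<system.serviceModel>
  <services>
    <service name="MyCompany.Services.OrderService">
      <!-- Same-machine deployment: named pipes (fastest on one box) -->
      <endpoint address="net.pipe://localhost/OrderService"
                binding="netNamedPipeBinding"
                contract="MyCompany.Contracts.IOrderService" />
      <!-- Cross-machine deployment: comment the endpoint above and
           uncomment this one; no code changes are required. -->
      <!--
      <endpoint address="net.tcp://localhost:8100/OrderService"
                binding="netTcpBinding"
                contract="MyCompany.Contracts.IOrderService" />
      -->
    </service>
  </services>
</system.serviceModel>
```

Because the service contract and implementation stay the same, only this configuration file differs between the single-server and multi-server deployments.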
As for the WCF services themselves, is it better to host them in IIS or in a custom-written Windows service?
My goal is to get the highest possible performance and to design an architecture that is scalable and reliable, with no compromises on network traffic or latency. That is why I am considering WCF over plain XML web services: it lets me use binary transfers instead of SOAP.
Thanks, Davide.
"Highest possible performance" is a nebulous target; you never really know what the highest possible performance of a system is. All you can do is measure and test to see whether your system meets your performance requirements.
I recommend starting with WCF hosted in IIS. Better still, build a small fraction of your system as a proof of concept for the chosen technologies, then profile it to see where and what is too slow for your requirements. The WCF/IIS approach gives the easiest implementation and maintainability. If you then find that IIS imposes too many limits (and cannot be configured to remove them; IIS is highly configurable), you can self-host your services. Likewise, if SOAP uses too much bandwidth for your requirements, try binary transfers. Implementing a fraction of the system up front for these tests can save you some rework.
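If SOAP bandwidth does turn out to be a problem, one option that keeps HTTP as the transport is WCF's binary message encoder. A hedged sketch of a custom binding, assuming a hypothetical binding name (`binaryHttp`) chosen for illustration:

```xml
<system.serviceModel>
  <bindings>
    <customBinding>
      <!-- Binary encoding over HTTP: smaller messages than text SOAP,
           but both ends must be WCF to understand the encoding. -->
      <binding name="binaryHttp">
        <binaryMessageEncoding />
        <httpTransport />
      </binding>
    </customBinding>
  </bindings>
</system.serviceModel>
```

An endpoint can then reference it with `binding="customBinding" bindingConfiguration="binaryHttp"`. Note that netTcpBinding and netNamedPipeBinding already use binary encoding by default, so this fragment mainly matters if you must stay on HTTP (for example, through firewalls or IIS).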
Do you want an API that helps you build, manage, and reconfigure the service layer quickly and easily, or do you need to build a high-performance service layer where everything unnecessary must be avoided?
WCF is a generic (unified), configurable, and highly extensible API for building a service layer. But that configurability, extensibility, and unification have costs. The first cost is the complexity of the API, which is somewhat ironic: when an API is not extensible, developers complain about that, and when it is extensible, they complain about its complexity and performance impact.
Yes, all of this has a performance impact. WCF has many layers, and some things are handled less efficiently in order to maintain the unified model, extensibility, configuration, and so on.
If you really need a high-performance service layer where every performance decrease matters, then you must build your own hard-coded communication layer that follows exactly the requirements and expectations your client has. Even then, "highest possible performance" means nothing on its own. If the client has a performance requirement, they must specify it in a measurable way, for example:
- the system must be able to serve xxx concurrent requests
- the average time of serving the request must be xxx ms
- the highest time of serving the request must be xxx ms
There is also no need to optimize the application for requirements that haven't been defined.