Arguments for using WCF/OData as an access layer instead of EF/L2S/nHibernate directly
We develop mostly low-traffic but highly specialized web applications. Normally we use L2S, EF or nHibernate as the access layer and then throw ASP.NET MVC at it; for normal CRUD operations we query the ISession/DataContext directly, but for more advanced functions/side effects we put them in some kind of service layer.
Now, I was thinking about publishing the data through OData (WCF Data Services) and querying that from the controllers (or even from jQuery, once a good template engine shows up), and publishing the service operations through a WCF service (or as custom methods on the WCF Data Service?). What advantages/disadvantages does this architecture pose?
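For concreteness, this is roughly the kind of service I have in mind - a minimal WCF Data Services class sitting over an EF model (the NorthwindEntities context and the access rules are just placeholders, not our real model):

```csharp
using System.Data.Services;
using System.Data.Services.Common;

// Minimal sketch: expose an existing EF context (here a made-up NorthwindEntities)
// as an OData feed. Access rules would obviously be tightened in a real application.
public class NorthwindDataService : DataService<NorthwindEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}
```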
Do I gain anything except higher complexity and latency? Better separation of concerns (or is that just an illusion)?
Edit: Could it be a good idea to create a completely AJAX-driven solution with e.g. WCF RIA Services? Or does one lose too much flexibility? It feels like you could completely decouple your views from your logic then; heck, one should be able to just write pure HTML, not even ASP.NET MVC should be needed? But I guess a lot of new problems would arise?
Don't do it. Sorry, but this is a stupid, over-engineered approach. You are IN ONE PROCESS and you insist on running a network connection AND serializing all the data passing through into XML and back out, plus running it over an HTTP connection with limited query semantics? Don't tell anyone you even tried.
Separation of concerns is an illusion here - you replace a highly optimized domain model with a simplified data layer.
THAT SAID: I love OData - great stuff. But it is not an in-process technology, it is a FRONT-END technology, like ASP.NET MVC - just not for the end user, but for ANOTHER program to integrate with your data. It should be used in similar scenarios, and when exposing data across trust boundaries (Silverlight, for example, is a trust boundary, as the requests can be faked).
It is NOT optimized to replace high-end, in-process application runtime layers like NHibernate.
As TomTom mentions, you don't want to pay the loopback cost of OData when you are within a single process. If you have direct line-of-sight to your database and it's your own application's database, then there is no reason to put WCF Data Services in the middle. I would continue to use one of the other options you mentioned (L2S, EF, nHibernate).
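For contrast, a plain in-process query sketch (the NorthwindEntities context and Orders set are illustrative EF names): no serialization, no HTTP hop, one round trip to the database.

```csharp
using System;
using System.Linq;

// In-process data access against your own database; names are illustrative.
class InProcessSample
{
    static void LoadOpenOrders()
    {
        using (var db = new NorthwindEntities())
        {
            var openOrders = db.Orders
                               .Where(o => o.ShippedDate == null)
                               .Take(20)
                               .ToList(); // single round trip to SQL Server

            Console.WriteLine(openOrders.Count);
        }
    }
}
```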
Now, if you need to expose data over an HTTP endpoint for other applications to consume, or even for your own application if you have jQuery code on the client that needs data from the server, then an OData endpoint can definitely help, and WCF Data Services is the simplest way to create one.
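To give a rough idea, any HTTP client can then query the feed with the standard OData query options. A minimal sketch using WebClient against a made-up service URL (jQuery on the client would hit the same kind of URL):

```csharp
using System;
using System.Net;

class ODataHttpSample
{
    static void Main()
    {
        // Hypothetical service URL; $filter and $top are standard OData query options.
        var client = new WebClient();
        client.Headers[HttpRequestHeader.Accept] = "application/json"; // JSON instead of Atom
        string json = client.DownloadString(
            "http://localhost/Northwind.svc/Orders?$filter=CustomerID eq 'ALFKI'&$top=10");
        Console.WriteLine(json);
    }
}
```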
TomTom has a lot of votes and although he's not wrong, he's also not right, in spite of his persuasive tone.
In this particular instance, the OP appears to be writing an intranet LOB-style app that probably only stands to be impeded by an OData service mimicking the underlying database, but what if he were not mimicking the underlying database?
If he were building an application based on various or unknown future data sources, then the services layer could unify, re-present, simplify and aggregate those sources, even if a large proportion of queries eventually lead back to a SQL Server in the next room.
Similarly, if you're building an application of massive scale, and by scale I mean millions of users expecting to wait a few seconds between actions, not millions of FX trades an hour, then placing a services layer between your application and the data is a common pattern. The scalability of the internet is based on many small stateless HTTP servers and the caching infrastructure in between.
In real life, the same queries are run countless times; people refresh pages or click the same link over and over. No one really asks for 10m rows, because not many humans can look at that much in one go. So working in small pages keeps the data flowing and requests interleaving. You also have the opportunity to introduce a shared in-RAM cache in the services layer, or even a RAM database.
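As a sketch of what that middle-tier cache might look like (the names are made up, and a real implementation would think about invalidation and per-user variation):

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.Caching;

// Hypothetical middle-tier reader that serves repeated requests from a shared in-RAM cache.
public class CachedOrderReader
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public IList<Order> GetRecentOrders(Func<IList<Order>> loadFromStore)
    {
        var cached = Cache.Get("recent-orders") as IList<Order>;
        if (cached != null)
            return cached;                                    // same query, served from RAM

        IList<Order> orders = loadFromStore();                // hit the database once
        Cache.Set("recent-orders", orders,
                  DateTimeOffset.Now.AddSeconds(30));         // share the result for 30 seconds
        return orders;
    }
}

public class Order { public int Id { get; set; } }
```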
You may even find that you need to shard your database or partition it between SQL and a key/value store. You can then do the joins in the middle tier, scaled out, and offload the joining and compute-intensive stuff away from the database server.
The rule with internet scale is that the database is your hot spot and you need to do everything you can to prevent anyone talking to it! Be that the local HTTP cache on an iPad, your ISP's proxy, the IIS output cache, or a Redis cache, all those layers help to spread the load and ease the burden.
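At the web tier, even the built-in MVC output cache takes pressure off the database for those repeated page hits. A minimal sketch (controller, duration and repository call are illustrative):

```csharp
using System.Web.Mvc;

public class ProductsController : Controller
{
    // Repeated requests for the same id are served from the output cache for 60 seconds,
    // so they never reach the data layer at all.
    [OutputCache(Duration = 60, VaryByParam = "id")]
    public ActionResult Details(int id)
    {
        var product = LoadProduct(id); // hypothetical repository call
        return View(product);
    }

    private object LoadProduct(int id)
    {
        return new { Id = id }; // placeholder for real data access
    }
}
```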
So if Carl came to interview with me and told me he'd considered putting an OData layer before his SQL boxes, I'd be interested to hear his reasoning.
WCF Data Services and OData support JSON, so you can minimize the payload by leveraging that. Plus, with WCF Data Services you can completely control your data access. You don't have to use Entity Framework. You can customize everything. The benefit is that the protocol structure is completely handled for you by using WCF Data Services and OData. And consuming the service from MVC is an Add Service Reference away. WCF Data Services runs on WCF, so you have the ability to do other web services beyond just OData-style delivery, which makes it extremely flexible.
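As a rough sketch of what the consuming code looks like once the service reference is added (the NorthwindEntities proxy, URI and entity names are made up):

```csharp
using System;
using System.Linq;

// "NorthwindEntities" stands in for the client proxy that Add Service Reference generates
// (a DataServiceContext subclass); the LINQ query is translated into an OData URI.
class MvcConsumerSample
{
    static void QueryOpenOrders()
    {
        var ctx = new NorthwindEntities(new Uri("http://localhost/Northwind.svc/"));

        var openOrders = ctx.Orders
                            .Where(o => o.ShippedDate == null)
                            .Take(20)
                            .ToList(); // executed over HTTP against the service

        Console.WriteLine(openOrders.Count);
    }
}
```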
There are limitations here and there that come with the nature of OData as well as the way WCF Data Services handles OData, but they are fairly specific and if they arise in your architecture there are ways around them.
If your solution is isolated to a single web application, then having the data layer embedded in that application works well. But if you have any need whatsoever to have another app or process hit the data layer or shared business logic, then exploring the option of putting your data layer in a WCF Data Service is well worth it. For example, you could write a PowerShell script to call a web service method in 2 lines of code. So if you have domain logic that you want to be able to run from your web app and from a command line or scheduled task, then your WCF Data Service layer could handle that scenario for all of them without duplicating logic or code.
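For example (a hedged sketch - the operation name and entity properties are made up), a service operation on the WCF Data Service makes the same domain logic reachable from the MVC app, a scheduled task, or a script via a plain URI such as /Northwind.svc/OverdueOrders:

```csharp
using System;
using System.Linq;
using System.Data.Services;
using System.ServiceModel.Web;

// Hypothetical service operation exposing shared domain logic over a plain URI.
public class NorthwindDataService : DataService<NorthwindEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("Orders", EntitySetRights.AllRead);
        config.SetServiceOperationAccessRule("OverdueOrders", ServiceOperationRights.AllRead);
    }

    [WebGet]
    public IQueryable<Order> OverdueOrders()
    {
        // Same rule whether it's called from MVC, PowerShell or a scheduled task.
        return CurrentDataSource.Orders
            .Where(o => o.ShippedDate == null && o.RequiredDate < DateTime.Now);
    }
}
```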
Many ways to skin a cat. I have used both approaches in business applications and would not say that one or the other should be avoided. They both work well and provide plenty of value without being detrimental.
To be fair, there are benefits to this approach that may outweigh the performance concerns, which are admittedly tremendous. An application built this way will have orders of magnitude more latency and may cost several times more in compute resources to execute than an in-process solution.
That having been said, in development scenarios where human resources are limited, this may work better. It allows contractors to be hired on quickly to write new screens or whole new applications in whatever language suits them. Developers can get up to speed faster than with a proprietary homegrown solution. No more sa passwords in config files, injection of a custom security layer if required, unified logging and auditing, combining several data stores into one consistent resource. If you have a heterogeneous platform, you don't need to write SDKs; they have already been written in many important languages. OData works very well with MS Excel, which is a huge win at many organizations. Depending on your network topology, it might be cheaper and even faster to route out over the internet than to use a leased line if you're in a remote office, or behind a firewall (at a client site doing a demo, for instance).
For large datasets, the overhead of the request and packaging becomes less important - in reporting scenarios, for instance. While I have never designed something like this, I can see where it might be useful, depending on your corporate culture and available resources, to consume OData endpoints internally.