Entity Framework Self Tracking Entities on a N-Tier application

This is a general architecture question, hopefully for folks out there already using EF in production applications.

We have a typical N-Tier application:

  • WPF Client
  • WCF Services
  • EF STE DTOs
  • EF Data Layer

The application loads all known business types at startup (when the user logs in) and then loads a very large "work batch" on demand. This batch is around 4-8 MB and is composed of over 1,000 business objects. When we finish loading this batch we link everything to the previously loaded business types, etc.

In the end we have around 2K-5K business objects in memory, all correctly referenced, so we can use and abuse LINQ on the client side. We also do some complex math on all of these objects on the client side, so we really need the large graph.

The issue comes when we want to save changes to the database. With such a large object graph, we really don't want to send everything back over the network.

Our current approach, which I dislike given the complexity of the T4 templates so far, is to detach and attach everything on update. To update a given object we basically detach it from the rest of the graph, send it over the network, update it on the WCF side, and then reattach it on the client side. The main problem is updating linked objects: say you add something that references something else that is also added, which in turn references something modified, and so on. This forces a lot of client code just to make sure we don't break anything.

All this is done with generated code, so we are talking about 200-800 lines of T4 code per template.
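
To make that round trip concrete, here is a minimal sketch of the detach/send/reattach idea. WorkItem, IWorkService and WorkEntities are made-up names, and the helpers used (MarkAsUnchanged, ApplyChanges, ChangeTracker) are the ones the standard STE templates generate; this is only the shape of such code, not our generated code.

```csharp
// Client side: detach one entity from its graph, send only it over WCF, re-link it.
public void SaveSingleItem(WorkItem item, IWorkService proxy)
{
    // Pause change tracking so un-linking isn't itself recorded as a change.
    item.ChangeTracker.ChangeTrackingEnabled = false;

    var batch = item.Batch;                  // remember neighbours for re-linking
    var type  = item.BusinessType;
    item.Batch = null;                       // detach from the rest of the graph
    item.BusinessType = null;

    proxy.UpdateWorkItem(item);              // only this STE crosses the wire

    item.Batch = batch;                      // reattach on the client
    item.BusinessType = type;
    item.ChangeTracker.ChangeTrackingEnabled = true;
    item.MarkAsUnchanged();                  // STE extension method: reset its state
}

// Service side: apply the tracked changes and save.
public void UpdateWorkItem(WorkItem item)
{
    using (var context = new WorkEntities())
    {
        // ApplyChanges reads the STE's ChangeTracker and sets EF entity states.
        context.WorkItems.ApplyChanges(item);
        context.SaveChanges();
    }
}
```

The pain point is exactly the bookkeeping in the first method: every navigation property has to be remembered, nulled out and restored, which is why the templates that generate this grow so quickly.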

What I'm looking at right now is a way to customize serialization and deserialization of the STEs, so that I can control what is and isn't sent over the network and update batches instead of just a single STE: check each reference and see whether it is Unchanged or not; if it is Unchanged, don't serialize it; if it isn't, serialize it and update everything on the WCF side simply by attaching it to the context.

After some studying I found two possible ways to do this.

One is by writing a custom DataContractSerializer.
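
If that route is taken, the tricky part is getting WCF to use the custom serializer at all. Below is a minimal sketch of the wiring, assuming a hypothetical UnchangedReferencePruner class (an IDataContractSurrogate implementation, not shown) that trims Unchanged references; the behavior replaces the default DataContractSerializerOperationBehavior on each operation.

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel.Description;
using System.Xml;

// Swaps the serializer WCF uses for each operation. UnchangedReferencePruner is an
// assumed surrogate that decides which references are worth serializing.
public class PruningSerializerBehavior : DataContractSerializerOperationBehavior
{
    public PruningSerializerBehavior(OperationDescription operation) : base(operation) { }

    // The string-based CreateSerializer overload can be overridden the same way.
    public override XmlObjectSerializer CreateSerializer(
        Type type, XmlDictionaryString name, XmlDictionaryString ns, IList<Type> knownTypes)
    {
        return new DataContractSerializer(type, name, ns, knownTypes,
            int.MaxValue,                       // maxItemsInObjectGraph
            false,                              // ignoreExtensionDataObject
            true,                               // preserveObjectReferences
            new UnchangedReferencePruner());    // the pruning logic lives here
    }

    // Installs the behavior on every operation of an endpoint (client or service side).
    public static void Install(ServiceEndpoint endpoint)
    {
        foreach (OperationDescription op in endpoint.Contract.Operations)
        {
            op.Behaviors.Remove<DataContractSerializerOperationBehavior>();
            op.Behaviors.Add(new PruningSerializerBehavior(op));
        }
    }
}
```

Writing the surrogate itself, and keeping deserialization symmetric on the WCF side, is where most of the work and most of the risk ends up.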

The second is to change the STE template generated by EF and play around with the KnownTypeAttribute: instead of generating one attribute per referenced type, have it reference a method that inspects the object and marks for serialization only those references that are not Unchanged.
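
One caveat: the static-method form of KnownTypeAttribute only tells the serializer which types it may encounter and never sees the instance being serialized, so the per-reference decision would have to happen elsewhere, for example in an [OnSerializing] callback emitted by the modified template. A rough, hypothetical sketch of what that template output could look like (entity names made up; ChangeTracker and ObjectState as in the standard STE template):

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.Serialization;

[DataContract(IsReference = true)]
[KnownType("GetKnownTypes")]          // one method instead of one attribute per type
public partial class WorkItem
{
    [DataMember] public BusinessType BusinessType { get; set; }
    [DataMember] public ObjectChangeTracker ChangeTracker { get; set; }

    // Called by the serializer to discover types; it has no access to the instance.
    private static IEnumerable<Type> GetKnownTypes()
    {
        return new[] { typeof(BusinessType), typeof(WorkBatch) };
    }

    [OnSerializing]
    private void PruneUnchangedReferences(StreamingContext context)
    {
        // Per-instance pruning: drop navigation properties that are Unchanged so
        // they never cross the wire. (A real version would pause change tracking
        // here and restore the links in an [OnSerialized] callback.)
        if (BusinessType != null &&
            BusinessType.ChangeTracker.State == ObjectState.Unchanged)
        {
            BusinessType = null;
        }
    }
}
```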

  • Has anyone ever come across this issue before?
  • What solutions did you use?
  • What problems did you encounter down the line?
  • How easy was it to maintain the templates created?


I don't know your whole application design, but if you generally load the work batch in the service and then just hand it to the client to play with, the service layer looks somewhat unnecessary: you could load the data directly from the database (and get much better performance). Depending on the complexity of the computation, you could also do some of it directly in the database and again get much better performance.
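
For example, an aggregate written against the context is translated to SQL and runs inside the database, so only the result crosses the wire instead of thousands of objects (entity and property names below are made up):

```csharp
using System.Linq;

public static class BatchCalculations
{
    public static decimal GetBatchTotal(int batchId)
    {
        using (var context = new WorkEntities())   // assumed ObjectContext name
        {
            // LINQ to Entities translates this into a single SQL SUM.
            return context.WorkItems
                          .Where(w => w.BatchId == batchId)
                          .Sum(w => w.Amount);
        }
    }
}
```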

Your approach of saving only part of the graph is an abuse of the STE concept. STEs work in a simple manner: you load a graph, modify the graph, and save the same graph. If you want a big data set for reading but only save small chunks, it is probably better to load the data set read-only and, once you decide to update a chunk, load just that chunk again, modify it, and send it back.
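
In code that split could look something like the sketch below (contract and entity names are made up): one read-only call returning the whole batch for display and client-side calculation, and a focused pair of calls that reloads a single chunk as STEs, lets the user edit it, and saves just that chunk.

```csharp
using System.ServiceModel;

// Hypothetical contract reflecting the "read big, write small" split.
[ServiceContract]
public interface IWorkService
{
    // Large read-only graph for client-side LINQ and calculations.
    [OperationContract]
    WorkBatch GetWorkBatch(int batchId);

    // Small, fresh STE graph loaded only when the user starts editing it.
    [OperationContract]
    WorkChunk GetChunkForEdit(int chunkId);

    // Saves only that chunk; the STE load/modify/save cycle stays intact.
    [OperationContract]
    void SaveChunk(WorkChunk chunk);
}
```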

Interfering with the internal behavior of STEs is, IMHO, the best way to lose changes in corner-case or unexpected scenarios.

By the way, this somewhat looks like a scenario for syncing a local database with a central one. I have never done that, but it is quite common in smart clients.
