Do Queryability and Lazy Loading in C# blur the lines between Data Access and Business Logic?
I am experiencing a mid-career philosophical architectural crisis. I see very clear lines between what is considered client code (UI, web services, MVC, MVP, etc.) and the Service Layer. The lines from the Service Layer back, though, are getting more blurred by the minute. And it all started with the ability to write queries with LINQ and the concept of lazy loading.
I have created a Business Layer that consists of Contracts and Implementations. The Implementations can in turn depend on other Contracts, and so on. This is handled via an IoC container with DI. There is one service that handles data access, and all it does is return a UnitOfWork. This UnitOfWork creates a transaction when instantiated and commits the data in its Commit method. [View this Article (Testability and Entity Framework 4.0)]:
public interface IUnitOfWork : IDisposable {
    IRepository<T> GetRepository<T>() where T : class;
    void Commit();
}
The Repository is generic and works against two implementations (EF4 and an in-memory data store). T ranges over POCOs generated from the database schema or the EF4 mappings. Testability is built into the Repository design: we can leverage the in-memory implementation to assert results against expectations.
public interface IRepository<T> where T : class {
    IQueryable<T> Table { get; }
    void Add(T entity);
    void Remove(T entity);
}
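For concreteness, here is a minimal sketch of what the in-memory implementation of that contract might look like. InMemoryRepository<T> is an illustrative name and implementation, not the article's actual code; the interface is repeated so the snippet stands alone.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// The contract from above, repeated so this sketch compiles on its own.
public interface IRepository<T> where T : class {
    IQueryable<T> Table { get; }
    void Add(T entity);
    void Remove(T entity);
}

// Hypothetical in-memory implementation used by tests.
public class InMemoryRepository<T> : IRepository<T> where T : class {
    private readonly List<T> _items = new List<T>();

    // AsQueryable lets LINQ to Objects stand in for LINQ to Entities,
    // so business code can query Table the same way in tests.
    public IQueryable<T> Table { get { return _items.AsQueryable(); } }

    public void Add(T entity) { _items.Add(entity); }
    public void Remove(T entity) { _items.Remove(entity); }
}
```

A test would seed such a repository, run the business method under test, and assert on the results.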
While the Data Source is abstracted, IQueryable still gives me the ability to create queries anywhere I want within the Business logic. Here is an example.
public interface IFoo {
    Bar[] GetAll();
}

public class FooImpl : IFoo {
    IDataAccess _dataAccess;

    public FooImpl(IDataAccess dataAccess) {
        _dataAccess = dataAccess;
    }

    public Bar[] GetAll() {
        Bar[] output;
        using (var work = _dataAccess.DoWork()) {
            output = work.GetRepository<Bar>().Table.ToArray();
        }
        return output;
    }
}
Now you can see how the queries could get even more complex as you perform joins with complex filters.
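For instance, a hypothetical business method might build a join with a filter entirely in LINQ. Customer, Order, and the method below are illustrative, not the article's actual code; the query shape is the same one you could write against IRepository<T>.Table.

```csharp
using System;
using System.Linq;

// Hypothetical POCOs for illustration.
public class Customer { public int Id; public string Name; }
public class Order { public int CustomerId; public decimal Total; public bool Shipped; }

public static class JoinDemo {
    // Names of customers with an unshipped order over 100 —
    // a "business rule" expressed purely as a query.
    public static string[] LargeUnshippedBuyers(IQueryable<Customer> customers,
                                                IQueryable<Order> orders) {
        return customers
            .Join(orders, c => c.Id, o => o.CustomerId,
                  (c, o) => new { c.Name, o.Total, o.Shipped })
            .Where(x => !x.Shipped && x.Total > 100m)
            .Select(x => x.Name)
            .Distinct()
            .ToArray();
    }
}
```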
Therefore, my questions are:
- Does it matter that there is no clear distinction between the BLL and the DAL?
- Is queryability considered data access or business logic when behind a Repository layer that acts like an InMemory abstraction?
Addition: The more I think about it, maybe the second question was the only one that should have been asked.
I think the best way to answer your questions is to step back a moment and consider why separation between business logic layers and data access layers is the recommended practice.
In my mind, the reasons are simple: keep the business logic separate from the data layer because the business logic is where the value is; because the data layer and the business logic will need to change over time, more or less independently of each other; and because the business logic needs to be readable without detailed knowledge of everything the data access layer does.
So the litmus test for your query gymnastics boils down to this:
- Can you make a change to the data schema in your system without upsetting a significant portion of the business logic?
- Is your business logic readable to you and to other C# developers?
1. Only if you care more about philosophy than getting stuff done. :)
2. I'd say it's business logic because you have an abstraction in between. I would call that repository layer part of DAL, and anything that uses it, BL.
But yeah, this is blurry to me as well. I don't think it matters, though. The point of using patterns like this is to write clean, usable code that is also easy to communicate, and that goal is accomplished either way.
1. Does it matter that there is no clear distinction between the BLL and the DAL?
It sure does matter! Any programmer that uses your Table property needs to understand the ramifications (database roundtrip, query translation, object tracking). That goes for programmers reading the business logic classes as well.
2. Is queryability considered data access or business logic when behind a Repository layer that acts like an in-memory abstraction?
Abstraction is a blanket that we hide our problems under.
If your abstraction is perfect, then the queries could be abstractly considered as operating against in-memory collections and therefore they are not data access.
However, abstractions leak. If you want queries that make sense in the data world, there must be effort to work above and beyond the abstraction. That extra effort (which defeats abstraction) produces data access code.
Some examples:
output = work.GetRepository<Bar>().Table.ToArray();
This code is (abstractly) fine. But in the data world it results in scanning an entire table, which is (at least generally) dumb!
badquery = work.GetRepository<Customer>().Table.Where(c => c.Name.Contains("Bob")).ToArray();
goodquery = work.GetRepository<Customer>().Table.Where(c => c.Name.StartsWith("Bob")).ToArray();
goodquery is better than badquery when there's an index on Customer.Name: Contains translates to LIKE '%Bob%', which can't use the index, while StartsWith translates to LIKE 'Bob%', which can. But that fact is not available to us unless we lift the abstraction.
badquery = work.GetRepository<Customer>().Table
    .GroupBy(c => c.Orders.Count())
    .Select(g => new
    {
        TheCount = g.Key,
        TheCustomers = g.ToList()
    }).ToArray();
goodquery = work.GetRepository<Customer>().Table
    .Select(c => new { Customer = c, theCount = c.Orders.Count() })
    .ToArray()
    .GroupBy(x => x.theCount)
    .Select(g => new
    {
        TheCount = g.Key,
        TheCustomers = g.Select(x => x.Customer).ToList()
    })
    .ToArray();
goodquery is better than badquery, since badquery will re-query the database once per group, by group key (and worse, it is highly unlikely there is an index to help with filtering customers by c.Orders.Count()).
"Testability is built into the Repository design. We can leverage the in-memory implementation to assert results with expectations."
Be under no illusions that your queries are being tested if you actually run them against in-memory collections. Those queries are untestable unless a database is involved.
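To make the leak concrete: a query that calls an ordinary C# helper method runs fine against LINQ to Objects, but a real EF4 provider cannot translate the call to SQL and throws NotSupportedException at runtime. The names below are illustrative, not from the article.

```csharp
using System;
using System.Linq;

public static class LeakDemo {
    public class Customer { public string Name; }

    // A plain C# helper: LINQ to Objects simply invokes it,
    // but LINQ to Entities has no SQL translation for it and
    // fails at runtime with NotSupportedException.
    public static bool LooksImportant(Customer c) {
        return c.Name.StartsWith("B");
    }

    public static int CountImportant(IQueryable<Customer> table) {
        // Passes against the in-memory repository; blows up against EF4.
        return table.Count(c => LooksImportant(c));
    }
}
```

So a green in-memory test proves the query's logic over objects, not that the query is executable (or sensible) against the database.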