
Does a Middle Ground Exist? (Unit Testing vs. Integration Testing)

Consider an implementation of the Repository Pattern (or similar). I'll try to keep the example/illustration as succinct as possible:

interface IRepository<T>
{
    void Add(T entity);
}

public class Repository<T> : IRepository<T>
{
    public void Add(T entity)
    {
        // Some logic to add the entity to the repository here.
    }
}

In this particular implementation, the Repository is defined by an interface, IRepository, with a single method that adds an entity to the repository, which makes Repository dependent upon the generic type T. (The Repository must also be implicitly dependent upon another type, TDataAccessLayer, since abstracting the data access is the entire point of the Repository Pattern; this dependency, however, is not currently readily available.) At this point, from what I understand so far, I have two options: Unit Testing and Integration Testing.
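For illustration only, here is one hypothetical way of surfacing that implicit dependency, with an assumed IDataAccess<T> abstraction injected through the constructor (the name and shape are my own, not part of the example above):

interface IDataAccess<T>
{
    void Insert(T entity);
}

public class Repository<T> : IRepository<T>
{
    private readonly IDataAccess<T> _dataAccess;

    // The implicit dependency on the Data Access Layer is now explicit,
    // so a test could supply a fake or mock implementation of IDataAccess<T>.
    public Repository(IDataAccess<T> dataAccess)
    {
        _dataAccess = dataAccess;
    }

    public void Add(T entity)
    {
        if (entity == null)
        {
            return; // guard behaviour assumed purely for illustration
        }

        _dataAccess.Insert(entity);
    }
}

The consumer still only sees IRepository<T>; only whatever composes the object graph has to know about IDataAccess<T>.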

Since Integration Testing can be assumed to involve a greater number of moving parts, I would much rather Unit Test first in order to at least verify baseline functionality. However, without creating some sort of "entity" property (of generic type T), I can see no way of asserting that any logic is actually performed within the Add() method of the Repository implementation.

Is there, perhaps, a middle ground somewhere between Unit Testing and Integration Testing which allows one (through Reflection or some other means) to verify that specific points of execution have been reached within a tested unit?

The only solution I've come up with for this particular issue is to further abstract the Data Access Layer from the Repository, so that the Add() method accepts not only an entity argument but also a Data Access argument. This seems to me like it might defeat the purpose of the Repository Pattern, however, since the consumer of the Repository must now know about the Data Access Layer.
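As a sketch of what I mean (again, IDataAccess<T> is a hypothetical name), the interface would end up looking something like this, with every caller forced to supply the Data Access object:

interface IRepository<T>
{
    // The caller must now know about, construct, and pass in the Data Access
    // Layer on every call, which leaks the very detail the pattern should hide.
    void Add(T entity, IDataAccess<T> dataAccess);
}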

In response to the request for examples:

(1) In regard to Unit Testing, I'm not sure something like a Repository could actually be Unit Tested with my understanding of current testing techniques. Because a Repository is an abstraction (wrapper) around a specific Data Access Layer, it seems that the only method of verification would be an Integration Test? (Granted, a Repository interface may not be tied to any specific DAL, but any implemented Repository must surely be tied to a specific DAL implementation, hence the need to be able to test that the Add() method actually performs some work.)

(2) In regard to Integration Testing, the test, as I understand the technique, would verify that the Add() method performs work by actually calling it (which should add a record to the repository) and then checking that the data was actually added to the repository (or database, in a specific scenario). This may look something like:

[TestMethod]
public void Add()
{
    Repository<Int32> repository = new Repository<Int32>();
    Int32 testData = 10;

    repository.Add(testData);

    // Intended to illustrate the point succinctly. Perhaps the repository Get() method would not
    // be called (and a DBCommand unrelated to the repository issued instead). However, assuming the
    // Get() method to have been previously verified, this could work.
    Assert.IsTrue(testData == repository.Get(testData));
}

So, in this instance, assuming the repository is a wrapper around some database logic layer, the database is actually hit twice during the test (once during insert, and once during retrieve).

Now, what I could see being useful would be a technique for verifying that a certain execution path is taken at runtime. An example could be: if a non-null reference is passed in, verify that execution path A is taken, and if a null reference is passed in, verify that execution path B is taken. Perhaps one could also verify that a particular LINQ query would be executed. The database is then never actually hit during the test (allowing prototyping and development of an implementation without any actual DAL in place).
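That is essentially interaction-based testing. A minimal sketch with a mocking framework such as Moq (an assumption here), written against the constructor-injected IDataAccess<T> variant sketched earlier, might look like:

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

[TestMethod]
public void Add_NonNullEntity_ReachesDataAccessLayer()
{
    var dataAccess = new Mock<IDataAccess<String>>();
    var repository = new Repository<String>(dataAccess.Object);

    repository.Add("test");

    // Execution path A: the entity should be handed to the DAL exactly once.
    dataAccess.Verify(d => d.Insert("test"), Times.Once());
}

[TestMethod]
public void Add_NullEntity_NeverTouchesDataAccessLayer()
{
    var dataAccess = new Mock<IDataAccess<String>>();
    var repository = new Repository<String>(dataAccess.Object);

    repository.Add(null);

    // Execution path B: the DAL should never be called for a null entity.
    dataAccess.Verify(d => d.Insert(It.IsAny<String>()), Times.Never());
}

No database (or even a real DAL implementation) is needed; the test only checks which calls the unit makes against its abstraction.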


It sounds like you're describing the testing of an implementation detail rather than fulfillment of the requirements of a pattern by an implementer of the pattern. It doesn't matter whether "specific points of execution" have been reached within the tested unit; it only matters whether the concrete implementation upholds the contract of the interface. It's perfectly acceptable for tests to create a T entity for testing purposes; that's what mocks are for.


If you want to do integration testing, you need to use the real database. If you just want to test things quickly, you could try an in-memory database. The question is what you can and cannot test. As long as your database access code is database-specific, you are using an external system (to stay in unit-test speak), which you should mock. But since you really want to know whether your data ends up in the database, you need to test against the real database.

But if you use some database abstraction, e.g. an ORM mapper, you can at least test whether the mapping works correctly. For those tests the ORM mapper can be pointed at an in-memory database to check that it behaves as expected.
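As a sketch of that idea, assuming Entity Framework Core with its InMemory provider (the ORM choice, entity, and context names here are all assumptions for illustration; the same approach works with, say, NHibernate over an in-memory SQLite connection):

using System;
using System.Linq;
using Microsoft.EntityFrameworkCore;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical entity and context, used only for this sketch.
public class Customer
{
    public Int32 Id { get; set; }
    public String Name { get; set; }
}

public class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }

    public DbSet<Customer> Customers { get; set; }
}

[TestMethod]
public void Add_MappingRoundTripsThroughInMemoryDatabase()
{
    DbContextOptions<AppDbContext> options = new DbContextOptionsBuilder<AppDbContext>()
        .UseInMemoryDatabase("RepositoryTests") // no real database server is hit
        .Options;

    using (var context = new AppDbContext(options))
    {
        context.Customers.Add(new Customer { Name = "test" });
        context.SaveChanges();
    }

    using (var context = new AppDbContext(options))
    {
        // Verifies that the mapping round-trips, independent of any database vendor.
        Assert.AreEqual(1, context.Customers.Count(c => c.Name == "test"));
    }
}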

If you do not use an ORM mapper, creating an additional database abstraction layer purely for the sake of having an abstraction only gives you more code that can contain errors, errors you then have to uncover in your "true" unit tests, and that is not going to make you more productive.

