
Coverage analysis for Functional Tests

In the project that I am working on, we have functional tests written over Selenium. The application undergoes functional changes with each feature release.

Is there a tool or mechanism we can use to keep track of the gaps in the automated functional tests, so that the manual testers can at least keep an eye on these areas?

Note: we are not doing FTDD (functional-test-driven development), so the functional test coverage can be quite poor, even though we ensure high unit test coverage. We use NCover to check unit test coverage.


There are at least two (commercial but cheap) tools that I know of that allow you to attach to the IIS process to capture coverage data for IIS applications.

NCover:

NCover includes the //iis command line switch. This switch sets up the coverage environment within IIS and restarts the web server. You’ll run NCover like this to analyze coverage for your web applications:

NCover.Console.exe nunit-console.exe TestAssembly.dll //iis

When you run NCover in this way, IIS will be restarted to allow NCover to monitor your coverage, and your tests will be run. Once finished, NCover will stop IIS and detach itself.

See: http://docs.ncover.com/how-to/code-coverage-of-asp-net-applications-on-iis/

DotCover by jetbrains:

dotCover has Visual Studio integration which allows you to attach to an IIS application in the same way you would if you wanted to debug your IIS application. This can probably also be started with the command-line dotCover tool, although I've never actually tried it.

See http://www.jetbrains.com/dotcover/

I think Rational and Microsoft Team System also have solutions, but they're a bit more expensive.


We use a system for testing that involves a person creating a text narrative - a test script - for manual testing of functionality. This is enumerated in some fashion (e.g. [functionality]-001). Then our Selenium tests are noted as covering one or more of these enumerations.

When new functionality is constructed, new test scripts are written and enumerated. Then, with the Selenium tests, we can compare the list of what's automated against the enumerated test scripts - the delta is what must be tested manually.
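The bookkeeping above amounts to a set difference. A minimal sketch (the script IDs and test names here are made up for illustration; in practice they would come from your test-script index and your Selenium test annotations):

```python
# All enumerated manual test scripts (e.g. "[functionality]-001" style IDs).
all_scripts = {"login-001", "login-002", "checkout-001", "checkout-002", "search-001"}

# The script IDs each Selenium test is annotated as covering
# (hypothetical test names, for illustration only).
selenium_coverage = {
    "test_login_happy_path": {"login-001"},
    "test_checkout_flow": {"checkout-001", "checkout-002"},
}

# Everything any automated test covers.
automated = set().union(*selenium_coverage.values())

# The delta: scripts with no automated coverage, to be tested manually.
manual_delta = sorted(all_scripts - automated)
print(manual_delta)  # -> ['login-002', 'search-001']
```

Keeping the enumeration machine-readable (a CSV or a naming convention in the test code) is what makes this delta cheap to recompute on every release.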


Some of our Test Coverage tools (at present, Java, C# and COBOL) are designed to handle this kind of thing.

If you run your application and exercise a particular function, you can use these test coverage tools to collect code coverage data for that particular functionality. In essence, this is a record of all code which the functionality exercises. With some minor scripting, you can arrange for each functionality test to run and obtain code coverage data for that test.

The collected test coverage vectors can be combined by the tool into a summary vector, which gives you a code coverage number for your code based on the entire set of functionality tests.

If you change the code base, the test coverage tool will tell you which blocks of code have changed (it compares at the method level for differences). This in turn can be applied to the test coverage vectors already collected for individual functionalities; if there's an intersection, you need to run the functional test again, as the code on which it depended has changed.

This way you can decide which functionalities need to be re-tested after a change.
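The scheme described above can be sketched with per-functionality coverage vectors modeled as sets of method names (the method names, functionality names, and change set here are invented; a real tool would emit these for you):

```python
# Per-functionality coverage vectors: the methods each functional test exercises.
coverage = {
    "place_order":  {"Cart.Add", "Cart.Total", "Order.Submit"},
    "search_items": {"Search.Query", "Search.Rank"},
    "view_cart":    {"Cart.Add", "Cart.Total"},
}

# Summary vector: all code touched by the entire functional suite.
summary = set().union(*coverage.values())

# Methods the tool reports as changed since the last build (assumed input).
changed = {"Cart.Total"}

# A functionality must be re-tested if its vector intersects the change set.
to_retest = sorted(name for name, methods in coverage.items() if methods & changed)
print(to_retest)  # -> ['place_order', 'view_cart']
```

Here `search_items` is skipped because none of the code it depends on changed, which is exactly the re-test pruning the answer describes.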


NDepend has the ability to display code coverage deltas between different versions.

from the NDepend site (http://www.ndepend.com/Features.aspx#Coverage):

Writing automatic tests is a central practice for increasing code correctness. Knowing which part of the code is covered by automatic tests helps improve the tests and, consequently, helps increase code correctness.

NDepend gathers code coverage data from NCover™ and Visual Studio Team System™. From this data, NDepend infers some metrics on methods, types, namespaces and assemblies: PercentageCoverage, NbLinesOfCodeCovered, NbLinesOfCodeNotCovered and BranchCoverage (from NCover only).

These metrics can be used in conjunction with other NDepend features. For example, you can find out which code has been added or refactored since the last release and is not thoroughly covered by tests. You can write a CQL constraint to continuously check that a set of classes is 100% covered. You can list which complex methods need more tests.
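To give a flavor of the kind of CQL constraint mentioned above, here is a hedged sketch using the PercentageCoverage metric named in the quote (the exact syntax varies by NDepend version, so treat this as illustrative, not copy-paste ready):

```
// Warn if any method in the (hypothetical) MyApp.Core namespace
// is not fully covered by tests.
WARN IF Count > 0 IN
SELECT METHODS FROM NAMESPACES "MyApp.Core"
WHERE PercentageCoverage < 100
```

Such constraints can run as part of the build, so coverage regressions surface continuously rather than at release time.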

A video demo is here: http://s3.amazonaws.com/NDependOnlineDemos/Coverage_viewlet_swf.html


Can anyone suggest a functional coverage tool for .NET Core?

NCover: does not have .NET Core support.

dotCover: there is a defect in open state for this scenario.
