Many projects follow the same pattern for software testing. These projects test the entire system by running manual test cases through the user interface. When the number of manual test cases grows beyond the capacity of the current staff, they introduce automation tools to execute the manual test cases. This testing strategy leads to discussions about data management, test maintenance costs, and ever-increasing execution times.
While there are ways to improve the existing testing process, ultimately I think the strategy is wrong. More specifically, the test boundaries are wrong. We can't effectively test a system using end-to-end tests exclusively. What follows is an example of an alternative testing strategy.
Imagine we have a system with three Modules (A, B, C).
For the sake of simplicity, let's assume that each module has 10 paths. Every path in Module A depends on every path in Module B, and every path in Module B depends on every path in Module C.
If we test the entire system across all the modules, we need 1000 tests to cover all the paths through the system.
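As a rough sketch of that arithmetic (the 10 paths per module are simply the assumption above):

paths_per_module = 10

# Every full-system test exercises one path in A, one in B and one in C,
# so covering every combination takes 10 * 10 * 10 tests.
full_system_tests = paths_per_module ** 3
print(full_system_tests)  # 1000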
We can drastically reduce the testing effort by splitting our test boundaries.
When we unit test each module in isolation, we'll need 10 tests per module, one for each path in the module. However, these 30 tests don't cover the interactions between the modules. For that we need 200 integration tests: 100 to validate A and B, plus 100 to cover B and C.
So if we change our test boundaries, we can get the same test coverage for the system using 200 + 30 = 230 test cases.
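Continuing the same back-of-the-envelope sketch:

paths_per_module = 10

unit_tests = 3 * paths_per_module             # 10 each for A, B and C
integration_tests = 2 * paths_per_module ** 2 # 100 for A-B plus 100 for B-C
total = unit_tests + integration_tests
print(unit_tests, integration_tests, total)   # 30 200 230, versus 1000 above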
The most powerful benefit of smaller test boundaries is the ability to quickly localize failures. If a single path in Module B has a defect and we test at the full-system boundary, we will have 100 failing test cases. This large number of failing tests makes it difficult to track down a single defect.
If we split our testing boundaries up, we have a single unit test failure for Module B, 10 broken integration tests between Module A and Module B, and 10 broken tests between Module B and Module C. Twenty-one failing tests is still a lot, but the single failing unit test tells us the exact location of the defect, so we can find it quickly.
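As an illustration only (calculate_discount and its two paths are invented here, not part of the example system), a unit test at the Module B boundary names the broken path directly:

import unittest

# Stand-in for one small piece of Module B: two of its ten paths.
def calculate_discount(customer_type):
    return 0.10 if customer_type == "preferred" else 0.0

class ModuleBPathTests(unittest.TestCase):
    # One unit test per path. A defect in a single path produces exactly
    # one failure, and the test name points straight at the broken path.
    def test_preferred_customers_get_ten_percent_discount(self):
        self.assertEqual(calculate_discount("preferred"), 0.10)

    def test_other_customers_get_no_discount(self):
        self.assertEqual(calculate_discount("standard"), 0.0)

if __name__ == "__main__":
    unittest.main()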
I realize this is a simple example, but I hope it illustrates the effects and impact of using smaller testing boundaries.
Can you use this strategy and still use acceptance tests to drive requirements definitions and represent customer expectations?
It seems that if you move into the guts of the architecture, you've lost the ability to have your tests readable and understandable by the customer.
I always consider the validation and verification provided by automated tests to be a side effect. Micro tests help drive the design, and acceptance tests help drive the requirements and the definition of done; verification and validation come along as a side effect.