Identifying Test Cases #3

2010-10-04 - General

Review
In the previous two posts, we discussed some ways to look at code and pick out test cases, including boundary test cases, input and output classes, and special value testing. This time we're going to take a departure and discuss specific practices that will help you creatively find more test cases. When used with existing code, many of them can quickly flush out defects. When used while writing code, they can dramatically reduce the number of defects written in the first place.

Pairing
Pairing is the practice of two people collaborating on a single task, together and at the same time. Working side by side in real time is important because it greatly increases the richness and efficiency of communication.

With two minds focused on the work, a creative conversation ensues as the pair identifies test cases. Working with a pair can not only double the number of test cases found, but also uncover brand new directions in which to test. Pairing can take the form of pair programming or pair testing, or even be used during requirements elicitation. By mixing and matching who you pair with, you can learn a great deal about finding new test cases, the functionality and design of the system, and the desired system behavior.

While pairing, remember to keep an open mind about new approaches and be willing to share your take on the solution. It is important for both members to remain engaged in the work being performed, so eliminate distractions by turning off e-mail and cell phones and making sure you each have a period of dedicated time in which to work.

For an additional perspective on pairing for testing, check out this article by Jonathan Kohl, published in Better Software: "Pair Testing: How I Brought Developers into the Test Lab"

Test-storming and the testing notebook
At the beginning of a programming or testing session, I like to discuss with my pair what we're about to build or test. I then suggest we enter brainstorming mode about potential test cases. During development, this gives us a better understanding of what we're going to build and creates awareness of corner cases and exceptions as we code. In exploratory testing sessions, it gives a little structure and planning to the session. In both development and testing, we reserve the right to deviate from the plan created by this initial test-storming. The purpose of test-storming is to generate understanding, initiate some creative thinking, and produce a mini-plan of attack.

Test-storming ideas are recorded in a notebook (or on index cards) so that we don't lose the ideas and can refer back to them during the session. I don't use any particular format or standard for this, as different problem spaces benefit from different note-taking styles. For example, sometimes I test-storm with a truth table, a timeline, a decision tree, or a diagram. Often I use a few of these in combination during a session. I practice no particular formality here - any ceremony defeats the point of quickly generating test scenarios.

I keep the notebook handy throughout the session. If a new test case comes to mind while pairing, I quietly write it down until there is an appropriate time to bring it up. This keeps the flow going without losing any good test case ideas.

Test Driven Design
Test-driven development is a technique that follows this micro-process:
1. Write a failing test case - the smallest, simplest one
2. Write just enough code to make it pass - and no more
3. Refactor - improve the code while keeping all the tests passing
4. Repeat from step 1

By creating a failing test case first and only writing enough code to make it pass, we ensure that our code is fully covered by tests. Furthermore, by writing these automated tests, we encourage our design to be modular. It is difficult to create a test case for code that has many hard-coded dependencies. By creating the test first, we've designed part of the programming interface and the result will typically be more flexible and extensible. Finally, by having code which is well-covered by automated tests, we have much greater confidence in our refactoring.
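As an illustration only, here is what one turn of that cycle might look like. The days_in_month function and its test are hypothetical examples of mine, and Python's built-in unittest module is used simply because it is widely available:

import unittest

# Step 1: write the smallest, simplest failing test first. It fails because
# days_in_month does not exist yet (or returns the wrong value).
class DaysInMonthTest(unittest.TestCase):
    def test_january_has_31_days(self):
        self.assertEqual(days_in_month(1), 31)

# Step 2: write just enough code to make that test pass - and no more.
def days_in_month(month):
    return 31

# Step 3: refactor while keeping the test green, then repeat from step 1 with
# the next smallest failing test (for example, test_april_has_30_days),
# letting each new test pull the real implementation into existence.

if __name__ == "__main__":
    unittest.main()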

It is important to realize that the automated unit tests created by TDD are not, on their own, sufficient for all software testing. Other approaches are still required in tandem, such as performance and scalability testing, usability testing, integration testing, and various other forms of testing.

For a deeper discussion of TDD, see Introduction to Test Driven Development (TDD) by Scott Ambler.

All-pairs (also called pairwise testing)
All-pairs testing is quite different from pair testing: it is a method of generating test case inputs. Consider, for example, a function for specifying a car to be purchased, with three parameters: color (red, green, blue, black, white, or silver), transmission (automatic or manual), and trim (sport, luxury, standard, or special edition). We can generate test cases by creating various combinations of these parameters, such as red-manual-sport or white-manual-standard.

The simplest defects are caused by coding problems involving a single parameter, while the next simplest involve the combination of two parameters. All-pairs testing uses an algorithm to generate test case inputs which include all possible pairings of the input values. This is different from generating all possible combinations of all the inputs. For the car example above, covering every combination requires 6 × 2 × 4 = 48 test cases, while every pairing of values can be covered in roughly two dozen. Generating each possible pairing yields significantly fewer test cases than generating each combination, while still giving a decent amount of coverage through the code.

It is useful to use a program to generate the pairwise inputs automatically. James Bach provides a simple-to-use all-pairs tool at: All Pairs Testing Tool. The .rtf file included in the download does an excellent job of describing what the program does and how Allpairs works.
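To get a feel for the mechanics, here is a minimal greedy sketch in Python (standard library only) for the car example above. The all_pairs function, the car_options data, and the greedy approach are my own illustrative assumptions - this is not the algorithm Bach's tool uses, just one simple way to cover every pair of values:

from itertools import combinations, product

def all_pairs(parameters):
    # Collect every pair of values, across any two parameters, that must be
    # covered by at least one test case.
    uncovered = set()
    for (i, values_i), (j, values_j) in combinations(enumerate(parameters), 2):
        uncovered.update((i, a, j, b) for a in values_i for b in values_j)

    # Greedily pick the candidate row that covers the most still-uncovered
    # pairs until nothing remains uncovered.
    candidates = list(product(*parameters))  # fine for small parameter sets
    tests = []
    while uncovered:
        best = max(candidates, key=lambda row: sum(
            (i, row[i], j, row[j]) in uncovered
            for i, j in combinations(range(len(row)), 2)))
        tests.append(best)
        for i, j in combinations(range(len(best)), 2):
            uncovered.discard((i, best[i], j, best[j]))
    return tests

car_options = [
    ["red", "green", "blue", "black", "white", "silver"],  # color
    ["automatic", "manual"],                               # transmission
    ["sport", "luxury", "standard", "special edition"],    # trim
]

tests = all_pairs(car_options)
print(len(tests), "pairwise test cases instead of", 6 * 2 * 4, "full combinations")
for row in tests:
    print(row)

For this example the greedy set comes out at a couple dozen rows rather than all 48 combinations. Note that the full Cartesian product is used here only as a candidate pool, which is fine at this scale but wouldn't scale to many parameters - one more reason to reach for a dedicated tool.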

Flash Review
At the end of a session, I review with my pair what we've covered from our test notebook. Test cases which we've completed (preferably as automated tests) are crossed out. We highlight what we need to do next and note whether we missed any cases. We think back over what we've just accomplished and look to see if anything is missing or if there are more test cases to go. Often we'll ask the question - "how else can we try to break this thing?" Having a point where we consciously perform a review is useful because it tends to solidify what we've learned and uncover a few new things to test.

Wrap-up
In this post I've discussed some practices I follow while developing and testing. Combined, these techniques have served as a robust way of preventing and detecting defects.

Now, for homework:
1. Get a test notebook and put it on your desk so that it's ready for your next coding or testing session.
2. Try out test-storming and flash review by yourself and with a pair. Note what pairing added to these practices.
3. Get a copy of the All-pairs tool mentioned above and try it out for a few problems which you need to test.

