When automating existing test scripts, it can be hard to turn them into sensible automated tests. Sensible meaning that, besides being robust and maintainable, they point you to the right problem when they fail. In other words, it should be clear why a test fails. You shouldn’t have to spend a lot of time figuring out what is wrong.
One of the reasons it’s hard is that the current tests are structured in a way that obscures what exactly is being tested. They validate a certain area of an application, like a page, a screen or a certain component (e.g. the shopping cart if you have a web shop). In other words, they validate a subject.
Applications are all about behavior. An application does something. When you test the application, you test whether the application behaves according to spec. When I hear “We need to test this screen”, my first response is “You need to test this screen for which behavior?”. In other words: what is it the application does that you’re testing?
Let’s take the shopping cart of a web shop as an example. Instead of a single “Test shopping cart” case, write test cases like:
- Adding items to the shopping cart
- Clearing the shopping cart
- Calculating the total amount of all items in the cart
As you can see, these test cases all start with a verb and express the actual behavior of the application we are testing. These test cases are perfect candidates to be defined as features in ATDD/BDD frameworks like Cucumber, SpecFlow and jBehave, each with different scenarios underneath them.
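To make this concrete, here is a minimal sketch of what one of these test cases could look like as a Cucumber feature file. The feature corresponds to one behavior, with scenarios underneath it; the product name and step wording are invented for illustration:

```gherkin
Feature: Adding items to the shopping cart

  Scenario: Adding a single item
    Given an empty shopping cart
    When I add a copy of "Clean Code" to the cart
    Then the cart contains 1 item

  Scenario: Adding the same item twice
    Given a shopping cart containing a copy of "Clean Code"
    When I add another copy of "Clean Code" to the cart
    Then the cart contains 2 copies of "Clean Code"
```

Note that the feature tests one behavior only: adding items. Clearing the cart and calculating the total each get their own feature file, so a failure immediately tells you which behavior is broken.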
The reason it isn’t always apparent which behavior we are testing is that manual tests are often organized to be efficient for the tester. A tester often combines multiple implicit tests into a single test run. For example, that could mean creating a bunch of things with some variation while testing input validation along the way, because doing everything separately takes too much time. Fortunately, we don’t have that problem when we automate. Instead, we can spend time on making the tests as clear as possible, so that if a test fails, it is crystal clear why it fails.
When you have tests that clearly test certain behavior of your application, it’s of course still possible to organize them by subject. Just make sure the actual tests test one behavior and one behavior only. If you’re using something like Cucumber, you could create a directory per subject/system under test (e.g. Shopping Cart), and have all the feature files related to that subject in there.
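As a sketch of how such an organization could look on disk (the directory and file names here are hypothetical, assuming a standard Cucumber project layout with a `features` directory):

```text
features/
  shopping_cart/
    adding_items.feature
    clearing_the_cart.feature
    calculating_the_total.feature
  checkout/
    entering_a_shipping_address.feature
    paying_for_an_order.feature
```

Each directory groups features by subject, while each feature file still describes exactly one behavior.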
To summarize: make sure you know which behavior you are testing. If the answer to the question “What is this test validating?” doesn’t contain a verb (e.g. “The shopping cart”), you know that your test isn’t explicit enough and needs to be split up. You know the subject you’re testing; now it’s time to extract the behavior of that subject. Doing this helps you create clear and robust automated tests.
This blog was written by Harm Pauw.