Let’s say somebody asks you the question “What is a good speed to drive?”. How would you answer it? First of all, you’d need more context to even begin to formulate an answer: where are you driving (in a residential area, on a highway?), are there speed limits, what are the weather conditions, what’s your fuel mileage, do you have somewhere to be at a certain time? But more importantly, how is driving at a particular speed a goal? Getting safely from A to B within a certain time frame sounds more like the actual goal, and measuring speed helps you track progress towards that goal.

So, what’s the reason for this seemingly non-IT-related case? Because “What is a good speed to drive?” is the same kind of question as a common one in software development: “What’s a good percentage of code coverage to have?”. Like the speed question, the answer depends on the context, and the percentage isn’t a real goal in itself. So what does code coverage actually mean, and how can we use it to achieve the real goal: creating quality software?

Running (unit) tests on your code is an important part of assessing the quality of your product. By building the code, we verify that the source code is syntactically correct (it leads to something we can run), and by testing it, we verify that it is semantically correct (it behaves the way it should). Code coverage is a measure that describes the degree to which code is exercised by your tests. For this, we use tools (like JaCoCo and Cobertura for Java) and make them part of our Continuous Integration/Delivery pipeline. Tools like SonarQube can help generate insights based on these measurements. What’s important to realize is that this measure doesn’t tell you anything about the quality of the tests being run, so you can’t conclude that code covered by tests automatically has high quality.
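As a sketch of how this wiring typically looks: in a Maven project, the JaCoCo agent can be attached to the test run and a report generated during the build. The version and phase below are one common setup, not the only option:

```xml
<!-- In the <build><plugins> section of pom.xml -->
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.8.11</version>
  <executions>
    <execution>
      <goals>
        <!-- Attaches the JaCoCo agent to the JVM that runs the tests -->
        <goal>prepare-agent</goal>
      </goals>
    </execution>
    <execution>
      <id>report</id>
      <phase>verify</phase>
      <goals>
        <!-- Writes an HTML/XML coverage report under target/site/jacoco -->
        <goal>report</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

From there, a CI server or SonarQube can pick up the generated report on every build.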

One way to define quality software is to look at it from an external and internal point of view. From an external point of view, customers experiencing bugs in production should be a rare occurrence. From a more internal point of view, developers should feel comfortable adding or changing functionality without being afraid of introducing bugs that slip into production. You could say that you test enough if both are the case.

Keeping track of code coverage can give you a partial answer to whether you test enough by showing the ratio between tested and untested code. It’s interesting to know what code is covered, but it’s far more interesting to know what code is not covered by your tests. In code that isn’t covered by tests, it’s easier for a bug to slip into production than in code that is (well) tested.

So does that mean you should strive for 100% code coverage? There are always parts of code that are hard to test (e.g. I/O, multi-threaded and network code), and the benefits are not always worth the costs. If you strive for 100% coverage, you’re making code coverage a goal instead of using it as an aid. If your way of working includes writing unit tests for every new feature or change (preferably before writing production code, by practicing Test Driven Development), you will automatically end up with a code coverage of about 80%-90% for those features or changes.

Beware, however, of setting bars (e.g. “we need 80% code coverage”), especially if you set them for somebody else. I’ve seen developers write bogus tests to increase code coverage just because a certain percentage was forced upon them by some manager or customer. The tests exercised the code, but didn’t actually verify whether the behavior was correct. Martin Fowler calls this assertion-free testing. To quote Brian Marick:
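To make this concrete with a hypothetical example (the class and method names are made up): both “tests” below give the add() method full line coverage, so a coverage tool cannot tell them apart, yet only the second one would ever catch a bug.

```java
public class CoverageExample {

    // Production code under test (hypothetical).
    static int add(int a, int b) {
        return a + b;
    }

    // Assertion-free test: exercises add() — and therefore counts as
    // coverage — but verifies nothing. A broken add() would still "pass".
    static void bogusTest() {
        add(2, 3); // result ignored
    }

    // Real test: exercises add() and checks its behavior.
    static void realTest() {
        if (add(2, 3) != 5) {
            throw new AssertionError("add(2, 3) should be 5");
        }
    }

    public static void main(String[] args) {
        bogusTest();
        realTest();
        System.out.println("both tests passed");
    }
}
```

In a real project these would be JUnit tests, but the point is the same: the coverage number rises identically in both cases.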

I expect a high level of coverage. Sometimes managers require one. There’s a subtle difference.

So measuring code coverage can help increase software quality by identifying untested parts of your application. Beware of making a high coverage percentage a goal in its own right, and don’t force it on others, because chances are it will lead to counterproductive behavior. If you start with low coverage, it’s far more effective to make writing unit tests for all new functionality part of your way of working. Combine this with identifying high-risk areas that are currently uncovered and write tests for those areas. This way, you’ll soon reap the benefits and achieve the real goal: increasing the quality of your software.


This blog was written by Harm Pauw.
