100% Code Coverage or Bust?

Over the past 10+ years I have heard many stakeholders, clients, and even the occasional quality lead demand that a project have “100% unit test code coverage,” without much explanation of (or, really, interest in) how or why that metric was decided. While I can appreciate the idea of setting a lofty goal and then pursuing it with decisive enthusiasm, my viewpoint has evolved to question any global edict set without consideration for the details of the individual project. Rather than spout off my personal beliefs on the matter for an entire post, I reached out to our development team to see if their experience and expertise had led them to similar conclusions:

Should you always shoot for 100% code coverage with unit tests? Why or why not?

David Pine


Whenever I hear someone tell me that they’re at 100% code coverage, two thoughts quickly come to mind: “you have a lot of extra time” or “you have a small code base.” Should you always shoot for 100% code coverage with unit tests? No, but it is important to unit test. It’s like the childhood saying “practice makes perfect” and the corresponding retort “no one can be perfect, so why practice?” I believe in unit tests, but a goal of 100% is simply unrealistic.

I would argue that setting a percentage goal is the wrong approach to take. The number of unit tests (and the coverage percentage) should be irrelevant. You should, however, have enough unit tests to validate your code to the extent that they add value and verify its intent. Ideally, unit tests allow for ease of refactoring and provide assurance that no breaking changes are introduced.
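A small sketch of what "verifies the intent of your code" can look like in practice, using Python's `unittest` and a hypothetical `apply_discount` function (the function and its tests are illustrative, not from the original post). Each test documents a behavior the code promises, so a refactor that breaks that promise fails immediately:

```python
import unittest


def apply_discount(price: float, percent: float) -> float:
    """Hypothetical example: return price reduced by percent."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTests(unittest.TestCase):
    # Each test states the *intent* of the code, not just that it runs.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(19.99, 0), 19.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Run with `python -m unittest`. Note that none of these tests exist to raise a coverage number; each one pins down a piece of intended behavior.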

Dustan Vodvarka

Unit testing for the sake of code coverage is not something I would recommend. Reaching 100% code coverage demands a lot of developer time while providing minimal value in return. Instead, we should focus on covering the important paths and outcomes. Tests should have a clear purpose, and they should actually test some kind of logic. Unit tests help the developer and will find bugs before something is checked in. When 100% coverage is required, people get lazy when writing tests, and a false sense of security can develop. Like anything in life, it is all about finding a balance that works for you.

Tyler Evert

Testing is waste, in the sense that it doesn't directly add anything for the customer. Customers aren't any happier if we test something 4 or 5 or 6 times; they only really care about quality. The only reason we test at all is to find and fix bugs, so testing is a question of economy. The closer you get to 100% code coverage, the harder it becomes to write the elaborate automated tests needed to truly exercise every remaining line of code, and those tests find fewer and fewer bugs. Depending on how expensive it is in your domain to let a bug slip by, it's almost certainly not worth it to hit 100%.

Perhaps more important is the fact that code coverage says nothing about finding bugs or ensuring quality; it just says "this was run." It's hazardous to use code coverage as a benchmark for quality, as it doesn't actually ensure meaningful, effective tests are written. It's a good tool for sanity checks, but not a good tool for proving quality.

Nick Kremer

In my view, 100% code coverage is not something one should try to attain or force. The problem lies in the fact that code coverage is a fallible metric: a line of code can be touched by a test (which is all coverage normally measures) without actually being tested properly, for example when assertions are bad or missing. I therefore think shooting for something more like “full functional” coverage is a better concept. Even with “full functional” coverage, though, you should consider diminishing returns. The more obscure test scenarios can take more and more time to cover, and when that starts to happen, it comes down to whether you feel it's worth it based on a cost/benefit analysis of the time and money involved.
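That fallibility is easy to demonstrate with a hypothetical sketch. Both tests below execute every line of `normalize`, so a coverage tool scores them identically, yet only the second can ever catch a wrong result:

```python
def normalize(values):
    """Hypothetical function: scale a list so its values sum to 1."""
    total = sum(values)
    return [v / total for v in values]


def test_normalize_no_assertions():
    # 100% line coverage of normalize(), yet this "test" can only fail
    # if the code raises an exception; a wrong result passes unnoticed.
    normalize([1, 1, 2])


def test_normalize_with_assertions():
    # Identical coverage, but the result is actually verified.
    result = normalize([1, 1, 2])
    assert result == [0.25, 0.25, 0.5]
    assert abs(sum(result) - 1.0) < 1e-9
```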

One thing to consider, even in a system with 100% code coverage, is that when a new bug is reported, a unit test should be created for that bug. Again, just because you have 100% code coverage doesn't mean there are no bugs. When a bug does rear its ugly head, a good process I've found is to write a unit test that validates the bug (at this point the test should fail), then fix the bug and re-run the test, which should now pass (assuming the test correctly exercises the scenario). This way the bug fix is immediately validated by a test, and you have more peace of mind going forward when doing something like refactoring down the road.

James Pemberton

100% code coverage is a good goal, but in most real-world situations an unattainable one. Most modern applications have pieces and parts that cannot be unit tested. As a best practice, code that can't be tested should be isolated and mocked where needed, without worrying about its coverage. Another way to think about coverage is that all business logic in your application should have 100% coverage, and those tests should be meaningful: the unit tests should exercise the logic in the code properly and assert the correct output of that logic. Unit tests shouldn't exist to satisfy an arbitrary need for a high code coverage number.
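One way to sketch that isolate-and-mock approach, assuming Python and `unittest.mock` (the service, URL, and function names here are hypothetical): the untestable HTTP call sits behind a small injected function, so the business logic gets meaningful coverage while the network code is simply left out.

```python
import urllib.request
from unittest import mock


def fetch_rate(currency: str) -> float:
    # Hypothetical network call: not unit-testable, so it is isolated
    # behind this one small function that tests can replace.
    with urllib.request.urlopen(f"https://example.com/rates/{currency}") as resp:
        return float(resp.read())


def price_in(amount: float, currency: str, fetch=fetch_rate) -> float:
    # The business logic under test; the untestable dependency is injected.
    return round(amount * fetch(currency), 2)


def test_price_in_converts_with_fetched_rate():
    fake_fetch = mock.Mock(return_value=1.25)
    assert price_in(10.0, "EUR", fetch=fake_fetch) == 12.5
    fake_fetch.assert_called_once_with("EUR")
```

The design choice is the seam: because `price_in` takes `fetch` as a parameter, the test never touches the network, yet the logic that matters is fully asserted.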

Nick Wessing

100% code coverage is a great long-term goal, but if the team is given this goal before they have really mastered unit testing, it's going to cause more problems than it solves. I have seen many poorly designed unit tests that don't actually test anything, and I would hate to see 100% code coverage worth of those. It's important that everyone on the team is capable of writing great tests; at that point, the team can decide whether to start shooting for 100% code coverage. If the team is not targeting 100% coverage, then it is important to identify which parts of the system are the most important to test. The more complicated the business logic, the more important it is to test that particular component.

Colin McCabe

Our profession often looks to code coverage to understand whether the products we develop are of high quality. Measuring quality is necessary: it indicates where the team can improve, and it is critical in building trust with both stakeholders and customers. Code coverage percentage falls short of providing this, however. 100% coverage does not indicate that the behavior of the product is thoughtfully exercised, only that every code path was reached in some way. For this reason, a single percentage number can be misleading and can direct us to activity that doesn't further the original goal of developing with quality.

This leaves us with the larger questions of what is quality in the first place and how can it be measured? One might start by breaking quality down into less ambiguous components like Correctness, Maintainability, and Usability. But these are questions best answered at another time...


What do you think about unit test code coverage requirements? Have they inspired higher quality in your work, or caused more harm than good?
