I had been a proponent of automated testing for a long time before we really adopted it at my last company. It was one of those things people wanted to do but never felt they had time for. We only started writing tests for all our code at the start of a new project, a rewrite of an existing product. The fact that on day one we could create and maintain 100% test coverage was a strong driver in writing more tests. I knew that code coverage was an imperfect metric, but it's so easy to track that it's natural to use it. This, I think, is the key problem that leads people to believe 100% is what they should be aiming for: when measuring coverage is so easy, and 100% coverage of your first piece of code is achievable, it's natural to try to maintain it.
I think a lot of people who haven't really considered what tests they should write also assume 100% coverage is what they should aim for. This seems to be the default opinion people come to the question with, if they even realise that there is a question to be asked.
I believe you should definitely not try to write "as much automation as possible" or fall into the common trap of thinking that 100% automation is an ideal to be aimed for.
Writing automated tests takes time, running them takes time, and maintaining them takes time. Imagine, for the sake of argument, that only 50% of your tests are useful. You have wasted time writing the other 50%, you have to wait for them to finish executing before you get the results of the useful ones, and you have to spend time maintaining them. Most of the work in software development is maintenance, not originally creating the code. Less code is a laudable aim, and that applies to your tests too.
So, you may well ask, how can we decide whether we've written enough tests for a piece of code? Luckily, that's exactly what I'm going to tell you next :).
I propose splitting your tests into four groups:
By structuring your test cases along these lines, when a test fails people know why it was created. If it is the test that is wrong rather than the code under test, they can understand why it's important to fix the test rather than delete or disable it. Or, if it really isn't important to maintain, they can delete it with a clear conscience. Note that I would suggest explicitly laying out your test code along these four categories, so anyone reading it knows which group a test belongs to; one possible layout is sketched below.
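For example, the explicit layout might look something like this in a Python test file using pytest. The function under test (`add`) and the four class names are illustrative placeholders for whichever categories your team settles on, not a fixed prescription:

```python
# A minimal sketch of laying test code out by category, using pytest-style
# test classes. The group names and the function under test are placeholders.

def add(*numbers: int) -> int:
    """Stand-in for the code under test."""
    return sum(numbers)


class TestCoreBehaviour:
    """Tests for the main behaviour callers rely on."""

    def test_adds_two_numbers(self):
        assert add(2, 3) == 5


class TestEdgeCases:
    """Tests for boundary inputs."""

    def test_empty_input_is_zero(self):
        assert add() == 0


class TestRegressions:
    """Tests that reproduce specific past bugs, so a failure points straight
    back at the report that motivated them."""

    def test_negative_numbers(self):
        assert add(-1, -1) == -2


class TestContracts:
    """Tests documenting guarantees other code depends on."""

    def test_result_is_an_int(self):
        assert isinstance(add(1, 1), int)
```

Grouping by class (or by module, or by a naming convention) is a detail; the point is that the category of each test is visible to the next person who reads or reviews it.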
Code review of the tests also becomes easier, because a reviewer can look at the functionality under test and form their own opinion about which tests should appear in each category. This is where a code coverage report is useful: scanning through it as part of a code review may reveal something you think should be tested but isn't.
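If your project happens to be in Python, a quick sketch of producing such a report with the coverage.py package might look like the following (the package name "my_package" and the "tests" directory are placeholders; most people would just run `coverage run -m pytest` followed by `coverage html` instead):

```python
# A rough sketch, assuming a Python project with coverage.py and pytest installed.
import coverage
import pytest

cov = coverage.Coverage(source=["my_package"])  # placeholder package name
cov.start()

pytest.main(["tests"])  # run the test suite while coverage is recording

cov.stop()
cov.save()
cov.html_report(directory="htmlcov")  # browsable report to scan during review
```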
You can obviously adapt the above to your own needs, but I think the categories are broad enough that they would suit most teams.