The software industry is filled with praise for unit/integration/functional testing and reasons why it's a good idea. A little bit like Agile, IMO. It's rare to find people who will state on the record that they think it's a bad idea. Test coverage tools are enthusiastically installed and run on directives from higher up. Testing framework support is built into all major IDEs, and endless books extol its virtues. However, once you step into the trenches and look at the work being produced, it's all too common to find that it was all just lip service 😉
So what’s wrong?
Once enough experts said out loud that testing was a good thing, developers assumed it must be correct. When developers went back to work the next day, many found the concept of TDD obtuse and probably gave up after a quick try (it's a bit like functional programming in that respect). While you are focused on writing straightforward methods, the value of testing those methods is not immediately obvious. It is difficult to imagine the situation where the codebase grows and now has hundreds of those straightforward methods. In my experience, the majority of production problems arise from these slips in simple logic, as opposed to a misread of some fundamental aspect.
The primary problem, I believe, lies in the way the value proposition of testing is put forward. It has been advocated primarily as an academically 'right' thing to do, as opposed to something with practical benefits that contribute to success.
How should we think about testing?
Imagine that you are a freelancer and you have been approached to build software for a need – let's say generating bills from a bunch of expenses. You start building; since there are no architects/leads/managers asking for tests, you haven't bothered with them. At some point you write important code which calculates taxes, applies commission and such. As you keep adding features you do manual testing along the way to make sure stuff still works. All is going smoothly and the codebase has grown to a few thousand lines… the quick manual test you were doing now takes a bit longer – starting to break the flow a little. As you add more and more it gets disruptive, so you start skipping test steps by judging the impact of each change.
The client calls up and asks for a change. Now you can't quite remember all the details; it's a small change to billing, but you can't be sure that you recall all the side effects accurately. So you make a small sub-optimal change; entropy has now sneaked in. Unfortunately, the client keeps making these changes – till one day the inevitable slip-up happens. A bill has been calculated incorrectly in prod – how could this be – you scramble… ah, a small oversight… no problem… a minor heroic effort like skipping dinner with family… all good again. The good news is that more people are using the software. Over time, more features are added. Then another slip happens… you probably start missing those tests around about now… but you don't want to slow down your feature delivery, and it would take non-trivial time to add the missing tests. I hope you can see the pattern.
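The slip-up in the story is exactly the kind a small automated check catches before the client does. A minimal sketch of what that might look like – `calculate_bill` and its tax/commission rates are hypothetical stand-ins for the billing logic described above, not code from any real project:

```python
def calculate_bill(expenses, tax_rate=0.10, commission_rate=0.05):
    """Sum expenses, apply commission, then tax the total.
    Illustrative only: rates and order of operations are assumptions."""
    subtotal = sum(expenses)
    commission = subtotal * commission_rate
    return round((subtotal + commission) * (1 + tax_rate), 2)


def test_bill_applies_commission_then_tax():
    # 100 + 5% commission = 105; 105 + 10% tax = 115.50
    assert calculate_bill([60.0, 40.0]) == 115.50


def test_empty_expense_list_is_zero():
    # No expenses should never produce a non-zero bill
    assert calculate_bill([]) == 0.0
```

Two tests like these take minutes to write, run in milliseconds, and pin down the side effects you could no longer recall when the change request came in.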
So now the lack of automated verification is starting to cost you actual reputation damage; is it worth spending time on it? Will it give you a competitive advantage?
IMO testing has to be looked at as a means to an end: more stable products that break less often and can be changed with a higher degree of certainty. Go after logic and calculations first. Cover end-to-end configuration with some workflow tests. If a piece is static, once-off code, perhaps you can skip it for now. In a nutshell, testing is not dogma; think about what is important to you and balance it against the effort. Whatever you do, do not think you can scale a codebase without tests. That is just naive 🙂