Automated Tests Are a Code Smell
By Adrian Sutton
Writing automated tests to prove software works correctly is now well established, and relying solely or even primarily on manual testing is considered a “very bad sign”. A comprehensive automated test suite gives us a great deal of confidence that if we break something we’ll find out before it hits production.
Despite that, automated tests shouldn’t be our first line of defence against things going wrong. Sure, they’re powerful, but all they can do is point out that something is broken; they can’t do anything to prevent it being broken in the first place.
So when we write tests, we should be asking ourselves: can I prevent this problem from happening in the first place? Is there a different design that makes it impossible for this problem to happen?
For example, Checkstyle has support for import control, allowing you to write assertions about what different packages can depend on: package A can use package B, but package C can’t. If you’re concerned about package structure, it makes a fair bit of sense. Except that it’s a form of testing, and the feedback comes late in the cycle. Much better would be to split the code into separate source trees so that the restrictions are made explicit to the compiler and IDE. That way autocomplete won’t offer suggestions from forbidden packages and the code won’t compile if you use them. It is therefore much harder to do the wrong thing, and feedback comes much sooner.
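To make that concrete, here is a rough sketch of what the source-tree split might look like in a Gradle multi-module build. The module and package names are made up for illustration, and the author doesn’t tie the idea to any particular build tool.

    // settings.gradle.kts -- each area of the codebase gets its own source tree (module)
    rootProject.name = "example"
    include("core", "persistence", "web")

    // web/build.gradle.kts -- web declares a dependency on core only; with no
    // dependency on persistence, any import of a persistence class fails to
    // compile and never shows up in IDE autocomplete
    dependencies {
        implementation(project(":core"))
    }

The restriction that import control would only report after the fact becomes something the build structure simply doesn’t allow you to express in the first place.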
Each time we write an automated test, we’re admitting that there is a reasonable likelihood that someone could mistakenly break it. In the majority of cases an automated test is the best we can do, but we should be on the lookout for opportunities to replace automated tests with algorithms, designs or tools that eliminate those mistakes in the first place, or at least identify them earlier.