Why We Test

Why we fight... errr, test.

Software testing is a lot like war. Without the glory, or the fighting, or the blood, or the killing, or the dead people. In fact, testing is nothing like war. In both endeavors, though, without proper motivation you will fail.

At first we test to show it works

When most people start out as developers they have some vague idea of testing as showing that the product works, or satisfying themselves that the code they wrote actually works. While this is not entirely a bad viewpoint, it is a pretty useless definition. Testing can never show that the product works in every intended (and unintended) way. Furthermore, attempting to formalize testing by stating that the goal is to prove the product correct is not only hilariously misguided but downright dangerous.

Then we test to show that it doesn't work

Most books on software testing will inform the reader that the old goal of showing a product to be A-OK is pretty useless, and generally provide ample examples of where that goal will lead testers astray. It boils down to how hard it is to find things you are not looking for. If you don't attempt to break software, you're pretty unlikely to find out how it's broken, i.e. find the bugs. That is generally proclaimed to be the ultimate goal of a tester: find the bugs, break the software, show how much shit it really is! But how can anyone truly be satisfied with working on software which he really believes is bad? How can you find joy in showing other people their mistakes? While this notion of testing is much more useful than the previous one, inasmuch as it helps us find bugs, it does not capture the entire story.

Finally we test to find out what works and what doesn't

The idea that all software is crap, and that if I don't find hundreds of bugs just by booting the computer I'm failing at my job, does not sit well with me. I was delighted to discover the philosophies behind exploratory testing championed by James Bach and Michael Bolton. Our job as testers is to give the stakeholders (i.e. those who decide whether to release or not) as much information as we can about the state of the software. Under this definition, testing without finding any notable bugs doesn't have to demonstrate a failure on the part of the tester, provided that the tests he performed give adequate(tm) coverage of the application under test. Under this notion the important thing becomes coverage. Whether we find bugs or not matters far less than the amount of valuable information we can give.

Next level testing

The viewpoint I have adopted is that I test to provide my stakeholders with as valuable information as possible, as soon as possible. The value of the information depends on the context: who is asking for it and why. Merely reporting bugs without highlighting the risks associated with them and the level of testing performed is a missed opportunity. The point is to understand the product, not just from a technical perspective but from the users' perspective, and not just the users you have but also the users you want to have.

Why is this motivating?

I personally find this viewpoint extremely satisfying, as I love learning. I love puzzles and figuring things out. Performing well as a tester under the last definition requires one to truly understand the system under test. I like to think that no one should know the SUT better than the testers, at least in the large. Truly understanding something, from the nuts and bolts all the way to its uses and usefulness, is a rare challenge, and one testers are faced with every day. To be able to say with reasonable confidence that something is useful or valuable to someone, you must have some understanding of that someone and what he intends to accomplish. Testing exercises a wonderful mixture of hard, analytical skills as well as soft, human empathy and understanding. A competent tester is modern computer science's equivalent of the renaissance man, a polymath, a jack-of-all-trades. That's a fantastic place to be.