When it comes to software testing, a lot of IT departments can feel a bit like Goldilocks.
Now, don’t go thinking I’m trying to say that the people tasked with testing your software regularly wander into random houses in the forest or anything; rather, they all face the challenging task of trying to find the elusive option that’s “just right”. But instead of choosing between bowls of porridge and the like, they’re faced with determining just how much to test their software projects.
And, unlike the little girl in the fairy tale, your testers have a whole lot more than a mere three choices.
The process of finding defects is, in many ways, like trying to navigate a maze blindfolded: even if you have a good idea of where you should be going, you simply won’t know how close you came to the exit (or whether you got there at all) until you take the blindfold off and see for yourself. This presents a huge challenge for IT departments, because it’s genuinely hard to determine how much testing needs to be done. Test too little and you risk critical defects making their way into release, leading to costly rework; test too much and you’re potentially wasting money on diminishing returns.
To that end, it’s always important to make sure your testing team (whether it’s in-house, offshore, a rural sourcing/onshore outsourcing model, or a combination) has an accurate forecast of how many defects to expect on any given engagement; without an endpoint in mind, your team is still effectively blindfolded. For example, our testers (including our Rural Software Testing teams) use a proprietary methodology to generate a self-adjusting forecast of how many defects we expect to find at various stages in the testing process; but, as the old saying goes, there are many ways to skin a cat. So long as your team is working toward an objectively forecasted goal, you can aim for whatever defect removal efficiency your organization is comfortable with. Remember, though: how much you test is all about risk management, so the more you spend on testing, the less you should be spending on rework.
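To make that idea concrete: our self-adjusting methodology is proprietary, so the sketch below is just a generic illustration of the concept, not our actual method. It scales the forecasts for remaining test phases by how the actuals have tracked expectations so far, and computes the standard defect removal efficiency (DRE) ratio. Every phase name and number here is hypothetical.

```python
# Illustrative sketch only (not Lighthouse's proprietary methodology):
# a naive self-adjusting defect forecast plus a defect removal
# efficiency (DRE) check. All phase names and numbers are hypothetical.

def adjust_forecast(forecast, actuals):
    """Scale the forecasts for remaining phases by the ratio of defects
    found so far to defects expected so far (proportional adjustment)."""
    phases = list(forecast)
    done = len(actuals)  # number of completed phases
    expected_so_far = sum(forecast[p] for p in phases[:done])
    found_so_far = sum(actuals)
    ratio = found_so_far / expected_so_far if expected_so_far else 1.0
    adjusted = dict(forecast)
    for p in phases[done:]:
        adjusted[p] = round(forecast[p] * ratio)
    return adjusted

def defect_removal_efficiency(found_in_testing, found_after_release):
    """DRE = defects removed before release / total defects found."""
    total = found_in_testing + found_after_release
    return found_in_testing / total if total else 1.0

# Hypothetical engagement: forecast per test phase, with the first two
# phases complete and running roughly 25% over the expected defect count.
forecast = {"unit": 120, "integration": 80, "system": 50, "uat": 20}
actuals = [150, 95]

print(adjust_forecast(forecast, actuals))
# -> remaining phases scaled up: {'unit': 120, 'integration': 80,
#    'system': 61, 'uat': 24}
print(f"DRE: {defect_removal_efficiency(245, 15):.0%}")  # -> DRE: 94%
```

Proportional scaling is about the simplest adjustment scheme there is; the point isn’t the math, it’s that your team has an explicit, numeric target to test against.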
Uncertainty in testing is completely normal. But the next time your testers are acting like Goldilocks, try forecasting their exit criteria so everyone has a clear idea of what needs to be done to get the project where it needs to be.
It’ll make testing much more “bear”-able—I promise.
Cheers,
Mike Hodge
Lighthouse Technologies, Inc.
Software Testing | Quality Assurance Consulting | Oracle EBS Consulting