Thursday, September 6, 2007

Selling automated testing to management

I don't think there are many developers out there these days who don't think automated unit testing is a good idea. Most seem to agree that automated functional and integration testing are worthwhile as well, even if they're generally harder to achieve. Being able to be reasonably confident that the change you just made didn't disrupt far-flung parts of the project, just by executing a test suite, is great. However, it's not a tangible, quantifiable thing you can sell to management.

There are generally two cases where you need to sell the effort involved in building and maintaining these suites. The first (and easier) is when you already have the suites in place but need to invest in maintaining them. This is an easier sell because management will usually have seen the benefit and/or the ongoing expense is fairly small, so justifying the ongoing effort isn't all that bad. I'll throw new development in here as well... writing the suites as you go usually isn't as bad as retrofitting them onto an existing code base.

Which brings us to the second case. When you have a legacy product, adding automated test suites is usually time-consuming at best, and technically challenging in most cases. Your legacy product probably wasn't built in a way that encourages automated testing: the interfaces may not be there, it's tough to run in a test sandbox, and so on. If you want to get automated tests working, you're probably facing a significant investment in people terms, and could be facing some infrastructure expenditures as well.

The wrong way to pitch this is to go to management, roll out your proposal, and say that you need to spend $500k over the next three months building something that provides purely qualitative benefit. Unfortunately, most attempts at quantification are difficult. You can talk about improved quality, improved productivity, increased agility... but at the end of the day that $500k is staring them down and making everybody in the room uncomfortable.

A better way is to talk about it in terms that can be strictly quantified, and the easiest one I've found is around regression testing. Figuring out how much you spend each year on regression testing is usually pretty straightforward.
  1. Write down how many times a year you execute a full regression test
  2. Write down how many people a single test cycle takes
  3. Write down how long each cycle takes
  4. Figure out how much a tester's time is worth
Doing the math here should give you a dollar amount for your regression expenditure. The next step is simpler, although less precise: estimate how much you think you could cut the time you got for #3. From what I've seen, this should be on the order of 50%-80%. This gives you the total savings realized from automated testing.
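
If it helps, here's the same math as a rough sketch in Python (the function names and parameters are just mine for illustration, not from any particular tool):

    # Rough sketch of the regression-cost math; names are illustrative only.
    def regression_cost(cycles_per_year, testers_per_cycle, weeks_per_cycle,
                        cost_per_tester_year, weeks_per_year=52):
        """Annual spend on manual regression testing."""
        return (cycles_per_year * testers_per_cycle
                * (weeks_per_cycle / weeks_per_year) * cost_per_tester_year)

    def automation_savings(current_cost, reduction):
        """Yearly savings if automation cuts regression time by `reduction` (a 0-1 fraction)."""
        return current_cost * reduction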

A quick example may help:
  1. We release twice a year, and execute on average two regression test cycles per release (one beta, one GA)
  2. Each test requires four test engineers
  3. The test cycle takes four weeks
  4. Each tester costs the org $100k a year
This means we have a total cost of 4 cycles * 4 testers * (4/52 of a year) * $100k, which is about $123k. If I can cut my regression test time to 1 week, I save at least $92k per year. Keep the last two words in mind... that's per year. Forever.
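
Plugging those numbers into the sketch above, just to check the arithmetic:

    current = regression_cost(cycles_per_year=4,   # 2 releases x 2 cycles each
                              testers_per_cycle=4,
                              weeks_per_cycle=4,
                              cost_per_tester_year=100000)
    reduced = regression_cost(4, 4, 1, 100000)     # same cycle cut to one week
    print(round(current), round(current - reduced))  # ~123077 and ~92308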

Keep in mind this is a minimum. You still have all the intangible benefits. But $92k/year is something the business folks can plug into their project valuation tools and evaluate. When we ran these numbers for our project it became clear that dedicating people full time to build out automated test suites for existing projects made perfect financial as well as technical sense.

3 comments:

Jeremy Weiskotten said...

Good post. I experienced something similar, although management was generally in favor of unit testing legacy code and there was quite a bit of test coverage in place already.

We didn't have to justify the cost of testing -- an expensive Agile consultant did that. But we did have to strike a balance between bringing legacy code under test (which invariably required refactoring WITHOUT tests to make the code testable) and cranking out new features with good test coverage built in.

We generally didn't bother adding tests to legacy code unless it was being modified, was known to be trouble, or was a particularly valuable (business-wise) component.

If you haven't already, I recommend picking up a copy of Michael Feathers' "Working Effectively with Legacy Code". It presents solutions to some common problems you face when bringing legacy code under test -- mostly related to decoupling awkward collaborators.

mccv said...

We ended up taking a mixed stance on legacy code. The sales job we had to do was in the context of rolling out our agile pilot (from a 150-ish engineer org to an 800-ish engineer org), and laying out the necessary preconditions for making that successful.

We did end up deciding not to test some projects... they were just too bulky and had too much inertia to make agile. The interesting thing was that we also decided, somewhat independently, that those same projects were dead-ended and would be replaced by new projects rather than sustained indefinitely.

Anonymous said...

This is great info to know.