Appendix: ROI Example for a Test Automation Initiative
Let’s look at an example of a fictitious automation
initiative where the SUT is in place and automated tests need to be developed to replace part of the manually executed regression tests.
Let’s assume the following for our example:
- Licenses for the Test Automation tool cost 10K per year.
- Training for the automation team costs 10K for 2 weeks.
- It will take 6 months to automate 80% of the regression test suite.
- Two FTEs work on the automation project, with an average daily cost of 1K per FTE.
- There are monthly releases, with 1 regression test cycle per release.
- Each regression test cycle run manually takes 3 FTEs for 5 days.
- The average cost of one FTE running manual tests is 1K per day.
- There are 20 working days in a month.
- Once the automated suite is in place, 3 man-days of maintenance effort are needed per month.
The ROI formula we use is the following:

ROI = gain from investment / cost of investment

This calculation does not consider the time value of money; the ROI is therefore computed without incorporating the net present value concept.
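As a sketch, the formula can be expressed as a small Python helper (amounts in K; the function name is mine, not from any library):

```python
def roi(cost_without: float, cost_with: float) -> float:
    """ROI = gain from investment / cost of investment.

    The gain is the saving achieved by automating: the difference
    between the cost without automation and the cost with it.
    """
    return (cost_without - cost_with) / cost_with

# Example with the 3-year totals derived later in this appendix:
print(f"{roi(540, 518.5):.2%}")  # → 4.15%
```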
The gain from investment is going to be the cost savings achieved on the testing activity.
To calculate this gain, we will compare the costs with automation and without automation over a period of 3 years, which is a reasonable horizon for most initiatives of this type and scale.
Without automation, the monthly cost equals running the entire suite manually: 3 FTEs * 5 days * 1K = 15K.
With automation, in the first month you will have the manual testing costs (15K), the licenses (10K), the training (10K), and the automation effort (20 days * 2 FTEs * 1K = 40K), which sums to 75K.
To make the computation simple but realistic, let’s say that for the first month there are no benefits since you are learning and setting up the tool. The following 2 months you manage to automate 10% of the tests each month, and then your efficiency increases to 20% per month.
So the cost with automation will be the automation effort (40K) plus the remaining manual effort (15K reduced by 10% for month 2, by 20% for month 3, by 40% for month 4, by 60% for month 5, and by 80% for month 6).
By month 6 the automation work is complete, and in the following months you just have 3 man-days of maintenance effort per month. Therefore, after the sixth month, each additional month leaves you with 20% of the manual testing (3K) plus the automation maintenance (3K), for a total of 6K per month.
| Month | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | ... | 24 | ... | 36 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Without automation | 15 | 15 | 15 | 15 | 15 | 15 | 15 | 15 | 15 | 15 | 15 | 15 | ... | 15 | ... | 15 |
| With automation | 75 | 53.5 | 52 | 49 | 46 | 43 | 6 | 6 | 6 | 6 | 6 | 6 | ... | 6 | ... | 6 |
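The monthly figures above can be reproduced with a short sketch (amounts in K; the constants follow the assumptions listed at the start of this appendix):

```python
MANUAL = 15              # 3 FTEs * 5 days * 1K per regression cycle
AUTOMATION_EFFORT = 40   # 2 FTEs * 20 working days * 1K
# Share of the suite automated each month (no benefit in month 1)
automated_share = {1: 0.0, 2: 0.10, 3: 0.20, 4: 0.40, 5: 0.60, 6: 0.80}

def cost_with_automation(month: int) -> float:
    """Monthly cost of the automation scenario."""
    if month <= 6:
        cost = AUTOMATION_EFFORT + MANUAL * (1 - automated_share[month])
        if month == 1:
            cost += 10 + 10  # tool license + training, month 1 only
        return cost
    return 3.0 + 3.0         # 20% still manual (3K) + maintenance (3K)

print([cost_with_automation(m) for m in range(1, 8)])
# → [75.0, 53.5, 52.0, 49.0, 46.0, 43.0, 6.0]
```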
To correctly draw a graph you need to accumulate the costs. Also, at months 13 and 25 the license renewal cost must be considered (remember, it was 10K per year).
| Month | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | ... | 24 | ... | 36 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Cum. without automation | 15 | 30 | 45 | 60 | 75 | 90 | 105 | 120 | 135 | 150 | 165 | 180 | ... | 360 | ... | 540 |
| Cum. with automation | 75 | 128.5 | 180.5 | 229.5 | 275.5 | 318.5 | 324.5 | 330.5 | 336.5 | 342.5 | 348.5 | 354.5 | ... | 436.5 | ... | 518.5 |
Using the last table with the cumulative costs, we can draw the following graph. As you can see from the graph, the costs with automation become lower than the costs without automation around month 34. This is called the break-even point. In this case, almost 3 years are needed for the automation initiative to become profitable.
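A quick sketch confirming the break-even point from the cumulative series (amounts in K), including the license renewals at months 13 and 25:

```python
monthly_with = [75, 53.5, 52, 49, 46, 43] + [6] * 30  # months 1-36
cum_with, total = [], 0.0
for month, cost in enumerate(monthly_with, start=1):
    if month in (13, 25):
        cost += 10                      # yearly license renewal
    total += cost
    cum_with.append(total)

cum_without = [15 * m for m in range(1, 37)]
# First month where the automation scenario is cheaper cumulatively
break_even = next(m for m in range(1, 37) if cum_with[m - 1] < cum_without[m - 1])
print(break_even, cum_with[-1], cum_without[-1])  # → 34 518.5 540
```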
To calculate the ROI over 3 years, let's look at the costs with and without automation:
Costs without automation: 540K
Costs with automation: 518.5K
→ ROI = gain from investment / cost of investment = (540 - 518.5) / 518.5 ≈ 4.15%
This was a simple example, and one could argue that the business case is weak. But that would be comparing apples to oranges: with automation you have added benefits. First of all, you can run all the regression tests nightly, rather than just once per release as you would if you ran them manually.
Therefore, to make it more realistic and keep a similar quality objective, you could assume a scenario where the manual tests are run 2 times for each release.
In that case, the costs without automation are 30K instead of 15K for each monthly release.
Adapting the table and the resulting graph to this scenario, the break-even point comes sooner, around month 16.
And the ROI over 3 years is:
Costs without automation: 1080K
Costs with automation: 667K
→ ROI = (1080 - 667) / 667 ≈ 62%
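Recomputing the totals from the stated assumptions for this two-runs-per-release scenario (amounts in K; the monthly figures are derived the same way as in the first table, with 30K of manual effort instead of 15K):

```python
# Months 1-6: 40K automation effort + remaining manual effort (100%, 90%,
# 80%, 60%, 40%, 20% of 30K); month 1 also carries license (10K) + training (10K).
monthly_with = [90, 67, 64, 58, 52, 46] + [9] * 30  # then 6K manual + 3K maintenance
cost_with = sum(monthly_with) + 2 * 10              # license renewals, months 13 and 25
cost_without = 30 * 36

print(cost_without, cost_with)                      # → 1080 667
print(f"ROI = {(cost_without - cost_with) / cost_with:.0%}")  # → ROI = 62%
```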
As you can see, the business case changes considerably, and we can push the scenario even further.
When using automation you find issues earlier, while the cost of fixing them is lower. To account for this, you can assume that a certain percentage of the issues previously discovered during UAT are now discovered during development, with little to no additional fixing cost.
Let’s assume that, without Test Automation, 1 regression defect is spotted in UAT every other release; over 1 year that makes 6 defects. Considering 2 days of effort to fix a defect found in UAT (bug reporting, analysis, fix implementation, deployment, and retest), this means 12K saved each year, and 36K over 3 years.
Let’s consider this and update the ROI:
Costs without automation: 1080K + 36K = 1116K
Costs with automation: unchanged, 667K
→ ROI = (1116 - 667) / 667 ≈ 67%
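The defect-savings arithmetic and the updated ROI, as a sketch (amounts in K):

```python
defects_per_year = 12 // 2       # 1 regression defect every other monthly release
saving_per_defect = 2 * 1        # 2 days of fixing effort at 1K/day
yearly_saving = defects_per_year * saving_per_defect
three_year_saving = 3 * yearly_saving

cost_without = 1080 + three_year_saving  # defect savings count against "no automation"
cost_with = 667
print(three_year_saving)                                      # → 36
print(f"ROI = {(cost_without - cost_with) / cost_with:.0%}")  # → ROI = 67%
```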
Moreover, the manual test effort you saved (12 man-days each month) can be reinvested in usability testing or other value-adding tasks. Last but not least, the team members who went from 15 boring man-days of manual regression testing per month to just 3 will be much happier!
With these examples, I just wanted to illustrate how an investment that may initially seem worthless can in reality prove beneficial. Test Automation, both at the unit testing level and the system level, is rarely a poor investment.
Sure, this type of analysis can and should be done in specific cases, such as:
- Legacy applications with a limited lifetime
- Technologically complex systems where Test Automation might be challenging
- Systems where releases are infrequent and where regressions can be tolerated and quickly fixed
Aside from these specific cases, Test Automation should not be doubted: the question is not if, but when, how, and to what degree.
In my experience, I have seen contexts where Test Automation was not applied from the beginning, and the manual testing effort was one of the reasons that slowed down release cycles and limited system evolution. Other times, the Test Automation effort had to be justified and a formal ROI was deeply questioned. Luckily, on the NIS project, I was afforded a “good” context, where we do not question whether to apply Test Automation, but rather how to do it in the most efficient way.