Chapter 8
Software Test Metrics
The broad benefits of metrics discussed in Chapters 1 and 5 are relevant to software
testing too. Although testing is part of the full life cycle, it has become a project of
its own kind, with its own unique objectives. Testing is conducted to find defects
and to improve product quality. The three objectives of any project, that is, faster,
better, and cheaper, also apply to testing. Metrics help to push testing to
greater levels.
Project Metrics
Definitions of test project metrics are exactly the same as metric definitions in any
project, with the same meaning and purpose. Project metrics help to conserve proj-
ect resources and make optimal use of them. Project metrics track requirement
changes and help to take corrective measures when requirement changes threaten
the project. Project metrics also help to satisfy customers.
Schedule Variance
In testing projects, delivering on time is important. Measuring time and schedule
variance as testing milestones are crossed gives feedback that helps the team
work toward meeting the delivery schedule.
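As a minimal sketch, schedule variance can be computed as the percentage deviation of actual duration from planned duration; the milestone figures below are illustrative, not from the book.

```python
def schedule_variance_pct(planned_days: float, actual_days: float) -> float:
    """Schedule variance as a percentage of the planned duration.
    Positive means the milestone slipped; negative means it finished early."""
    return (actual_days - planned_days) / planned_days * 100.0

# A test-execution milestone planned for 20 days that actually took 23:
print(round(schedule_variance_pct(20, 23), 1))  # 15.0
```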
Effort Variance
Completing testing within the budgeted effort is the next concern. An effort variance
metric helps to control the cost of testing.
118 Simple Statistical Methods for Software Engineering
Cost
The cost of testing per release is measured. Cost variance tracks dollars spent in
excess of the budget. In addition to human cost, we need to consider investment in
tools and outsourcing and see if we can execute testing within budget.
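Effort and cost variance follow the same percentage-deviation pattern as schedule variance. A small sketch, with assumed budget figures covering human cost plus tools and outsourcing:

```python
def variance_pct(budgeted: float, actual: float) -> float:
    """Generic variance metric: overrun (positive) or saving (negative)
    relative to the budgeted value, as a percentage."""
    return (actual - budgeted) / budgeted * 100.0

# Effort variance: 250 person-hours budgeted, 280 spent.
print(round(variance_pct(250, 280), 1))   # 12.0

# Cost variance, including tool licences and outsourcing (assumed figures):
budget = 50_000 + 8_000   # human cost + tools/outsourcing, in dollars
actual = 55_000 + 9_500
print(round(variance_pct(budget, actual), 1))  # 11.2
```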
Human Productivity
From the view of project management, we look at defects found per tester. The
result is used to provide feedback to testers and hence improve test results. Defect
discovery rate is one of the metrics used as an index of team productivity.
Requirement Stability
Testing closely follows requirements. Test cases mirror use cases. Hence, the big-
gest uncertainty in testing is requirement stability. This is measured and tracked.
The requirement stability index (RSI), also called requirement volatility, is defined
as follows:
RSI = (Original req + Req changed + Req added + Req deleted)/Original req
RSI is a metric that might already exist in the metrics system developed for the
entire development project. One just has to reuse it.
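A direct translation of the RSI definition into code; the requirement counts below are made up for illustration. A value close to 1 indicates stable requirements, and it grows as changes, additions, and deletions accumulate.

```python
def rsi(original: int, changed: int, added: int, deleted: int) -> float:
    """Requirement stability index: total requirement activity
    relative to the original requirement count."""
    return (original + changed + added + deleted) / original

# 100 original requirements; 5 changed, 8 added, 2 deleted during testing:
print(rsi(100, 5, 8, 2))  # 1.15
```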
Resource Utilization
From an operational perspective, resource utilization is a key metric. This metric
extends to tools, systems, and people.
Box 8.1 S Curves in Testing
There are a few S curves used in testing. Cumulative defects arrived is the
first curve. This is also called the reliability growth curve. This curve ends
in a plateau zone beyond which further testing does not find defects. The
product is said to have become stable, as far as we know, and can be shipped.
Cumulative test cases executed is another S curve. The pattern of this S curve
tells a lot about the nature and quality of test progress. Experienced testers
can interpret this pattern and take appropriate decisions.
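As an illustration, the reliability growth curve is often approximated by a logistic (S-shaped) function; the parameters below (total defect count, week of fastest discovery, steepness) are assumed for the sketch, not taken from the book.

```python
import math

def logistic(t: float, plateau: float, midpoint: float, rate: float) -> float:
    """Cumulative defects found by time t under a logistic growth model.
    'plateau' is the asymptotic total defect count, 'midpoint' the week of
    fastest defect discovery, and 'rate' the steepness of the S curve."""
    return plateau / (1.0 + math.exp(-rate * (t - midpoint)))

# Assumed parameters: about 200 total defects, fastest discovery in week 7.
curve = [round(logistic(week, 200, 7, 0.8)) for week in range(1, 15)]
print(curve)  # rises slowly, steepens mid-test, then plateaus toward 200
```

The plateau at the right-hand end is the zone where further testing finds few defects and the product is considered stable.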
Customer Satisfaction
Ultimately, testing is a service. Customer satisfaction must be measured by con-
ducting surveys in order to improve service quality.
Test Effectiveness
This metric captures the percentage of defects found by testing. Of course, the
stretch goal of the test team is to reach 100% effectiveness. The metric is defined
as follows:
Test effectiveness = Defects found by tests/(Defects found by tests + Defects found by business users)
Process Metrics
The testing process is managed with several metrics, continuously tracked dur-
ing the testing life cycle. A few of them are cumulatively graphed to derive deeper
meanings.
Box 8.2 Measuring the Return on Investment of Test Automation
It is good to automate test cases. Test automation needs creativity. Carefully
directed test automation results in cost savings. To make sure that auto-
mation remains profitable, we introduce a metric, the ROI of test automation.
Regression tests can be automated; the ROI is great. When manual test-
ing is difficult or impossible, automation is required. The simulation of a
test scenario is difficult manually and is best performed through automa-
tion. In special cases such as testing firmware, automation is the only way.
In testing middle layers with missing upper or lower layers, automation is
the only way.
Investment in automation may yield benefits beyond the current project.
The tool must be generic enough to accommodate the needs of different proj-
ects. The tool need not aim at solving the requirements of a single project; it
must be planned to address the needs of upcoming projects. It is an organi-
zational asset.
ROI from automation may vary from two to ten, typically.
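The book gives only the typical range, not a formula, so the sketch below assumes one simple ROI model: manual effort saved across repeated runs divided by the effort invested in building and maintaining the automation. All figures are made up.

```python
def automation_roi(build_hours: float, maintain_hours_per_run: float,
                   manual_hours_per_run: float, runs: int) -> float:
    """Estimated ROI of automating a regression suite under an assumed
    model: total effort saved over 'runs' executions divided by the
    effort invested in building and maintaining the automation."""
    invested = build_hours + maintain_hours_per_run * runs
    saved = manual_hours_per_run * runs
    return saved / invested

# 120 h to build, 2 h upkeep per run, replacing 40 h of manual regression,
# executed over 20 releases:
print(automation_roi(120, 2, 40, 20))  # 5.0 -- inside the typical 2-10 range
```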
Defect Removal Efficiency
This metric is used in a special context in testing projects. There are other defini-
tions for this metric in other contexts. Defect removal efficiency (DRE) in testing
means the number of defects removed per unit time, defined as follows:
DRE = Number of defects/(Detection time + Resolution time + Retesting time)
The inverse of this number is called defect turnaround time (TAT), defined as
follows:
TAT = 1/DRE
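Both definitions translate directly into code; the defect count and effort hours below are illustrative.

```python
def dre(defects: int, detection_h: float, resolution_h: float,
        retesting_h: float) -> float:
    """Defect removal efficiency: defects removed per hour of combined
    detection, resolution, and retesting effort."""
    return defects / (detection_h + resolution_h + retesting_h)

def tat(dre_value: float) -> float:
    """Defect turnaround time, the inverse of DRE."""
    return 1.0 / dre_value

rate = dre(40, 60, 120, 20)        # 40 defects over 200 hours of effort
print(rate)                        # 0.2 defects per hour
print(round(tat(rate), 1))         # 5.0 hours per defect
```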
Test Cases Count
We can cumulatively count test cases designed, executed, and succeeded until any
point of time chosen for inquiry. This count is more meaningful when plotted as a
cumulative chart, as shown in Figure 8.1. These charts measure dynamic changes
in test case counts.
Figure 8.1 Cumulative test cases count: cumulative numbers of test cases designed, executed, and succeeded, plotted against testing time in weeks, with the review date marked.

Discernible in these charts are a few useful metrics: the percentage of success-
ful test cases and the percentage of executed test cases. These two metrics provide
useful feedback to the test management. The test team would strive to fill in the
gaps in these areas. The first refers to the quality of test cases. The second pertains to a
commitment to execute test cases.
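A sketch of how the cumulative counts and the two percentage metrics can be derived from weekly counts; the numbers are made up for illustration.

```python
# Weekly counts of test cases designed, executed, and succeeded (assumed data).
designed  = [30, 25, 20, 10, 5, 5, 5]
executed  = [10, 20, 25, 15, 10, 10, 5]
succeeded = [8, 16, 22, 13, 9, 9, 5]

def cumulative(xs):
    """Running total, i.e. the y-values of the cumulative chart."""
    total, out = 0, []
    for x in xs:
        total += x
        out.append(total)
    return out

cum_designed  = cumulative(designed)    # ends at 100
cum_executed  = cumulative(executed)    # ends at 95
cum_succeeded = cumulative(succeeded)   # ends at 82

# The two feedback metrics at the review date (end of the data):
pct_executed   = 100.0 * cum_executed[-1] / cum_designed[-1]
pct_successful = 100.0 * cum_succeeded[-1] / cum_executed[-1]
print(round(pct_executed, 1), round(pct_successful, 1))  # 95.0 86.3
```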
Test Coverage
Functionality Coverage
It is vital to know the proportion of requirements (functionalities) covered by test
cases. This is the simplest and most widely used test coverage metric. This metric
helps to control and improve coverage and makes the application more usable by
customers. As testing progresses, coverage will increase in the light of this metric
and will reach 100% in the ideal case.
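A minimal sketch of functionality coverage, assuming a traceability map from requirement IDs to test cases; all IDs below are hypothetical.

```python
def functionality_coverage(requirements, traced):
    """Percentage of requirements covered by at least one test case.
    'traced' maps requirement IDs to the test cases that exercise them."""
    covered = [r for r in requirements if traced.get(r)]
    return 100.0 * len(covered) / len(requirements)

reqs  = ["REQ-1", "REQ-2", "REQ-3", "REQ-4"]
trace = {"REQ-1": ["TC-01", "TC-02"], "REQ-2": ["TC-03"], "REQ-4": ["TC-07"]}
print(functionality_coverage(reqs, trace))  # 75.0 -- REQ-3 is uncovered
```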
Code Coverage
Yet another coverage metric traces the proportion of code covered by test cases.
Coverage helps eliminate gaps in a test suite.
This is a tenuous metric. Higher coverage does not assure better
quality. An experienced tester knows to take a balanced view on this and make sure
a minimum coverage has been achieved and critical paths have been covered.
Code coverage is a very useful metric.
However, you need to know how to use it.
Coverage metric tools are available to track line, statement, block, decision,
path, and condition coverage. They provide excellent reports with back tracing and
help achieve higher test efficiency.
Box 8.3 Unit Test Defect Data
A unit test is cost effective. It improves reliability. It reveals bugs that are
otherwise devious. A unit test needs design knowledge and is best performed
by testers with design knowledge. For best results, testers can collaborate
with designers and developers. The level of thoroughness and documenta-
tion depends on test strategy and goals. However, often enough, unit tests
are not well documented, and unit test defects are not entered into the bug
tracker. There is not much to motivate the designer except project objectives
and leadership drive.