Deciding which test cases to automate is a critical decision. Not every test case can be automated, and some defects can be found only by slow, careful manual testing. Although speed is an advantage of automation, only selected test cases should be automated.
Automation Progress
To track the automation project, we can measure automation progress by setting
up the following metric:
Automation progress = (Number of test cases actually automated / Number of test cases selected for automation) × 100
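A minimal sketch of this calculation in Python, with illustrative figures (the function name and counts are assumptions, not from any actual project):

def automation_progress(automated: int, selected: int) -> float:
    """Percentage of the test cases selected for automation that are actually automated."""
    if selected == 0:
        raise ValueError("no test cases selected for automation")
    return 100.0 * automated / selected

# Illustrative counts: 180 of 240 selected test cases automated so far.
print(f"Automation progress: {automation_progress(180, 240):.1f}%")  # 75.0%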
Case Study: Defect Age Data
Defect age is a well-known metric in the industry, but only a limited number of organizations collect it. Defect age is the time a defect spends in the product, from injection until resolution. It can be calculated from the defect data in the bug tracker if the appropriate fields have been set and populated. The fields required are phase injected and phase detected. It is evident, then, that the defect database must be maintained throughout the project life cycle. Moreover, the information in the phase injected column is obtained only by reasoning during causal analysis, not from the log book. The term data in testing thus includes even reasoned, intuitive judgments.
Defect age = 1 + phase detected − phase injected
Some people use the formula without the 1; they track only the incremental difference and keep the base value at 0. With the 1 included, a defect detected in the same phase where it was injected has a defect age of 1. If discovery happens in the next phase, defect age is 2. If a defect is injected during the requirements phase but discovered in the test phase, defect age is 4.
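A minimal sketch of the defect age calculation in Python, assuming a typical four-phase ordering of requirements, design, coding, and testing; the phase list and sample calls are illustrative:

# Hypothetical phase ordering; the 1-based index is the phase number.
PHASES = ["requirements", "design", "coding", "testing"]

def defect_age(phase_injected: str, phase_detected: str) -> int:
    """Defect age = 1 + phase detected - phase injected."""
    injected = PHASES.index(phase_injected) + 1
    detected = PHASES.index(phase_detected) + 1
    return 1 + detected - injected

print(defect_age("coding", "coding"))         # 1: found in the phase where it was injected
print(defect_age("design", "coding"))         # 2: found in the next phase
print(defect_age("requirements", "testing"))  # 4: injected early, found in test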
This case study concerns a testing project whose defect database had 32 fields and was quite rich in raw data. However, no one derived the defect age metric, although the possibility was there.
Well after the release, data mining by a QA analyst surfaced the defect age metric. The motivation for creating this metric from the available data came from an unexpected direction. Managers asked for a model of defect economics, and one of the factors driving cost was obviously defect age. Supporting metrics in this model building were the cost of finding defects and the cost of fixing defects. Both could be calculated from the data available in the database. The cost of finding defects was labeled as part of the cost of appraisal, and the cost of fixing
defects was labeled as rework, or the cost of poor quality, both labels being unmistakable influences from the cost of quality (COQ) framework. However, these metrics were reported as part of the defect age study and not a COQ study. The life of bugs was nearer to people than formal frameworks were; testers perceived COQ as a larger metric meant for senior managers.
(Elsewhere, the cost of quality is also computed within the testing process. Three components of COQ are used: prevention, appraisal, and failure. Failure means rework on test cases and retesting, appraisal refers to reviews, and prevention includes training and defect prevention effort.)
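A rough sketch of such a computation, with made-up effort figures; only the three component names come from the text above:

# Hypothetical effort data (person-hours) for a testing process,
# bucketed into the three COQ components named above.
coq_effort = {
    "prevention": 40,   # training, defect prevention effort
    "appraisal": 120,   # reviews
    "failure": 200,     # rework on test cases and retesting
}

total = sum(coq_effort.values())
for component, hours in coq_effort.items():
    print(f"{component:10s} {hours:4d} h  ({100 * hours / total:.0f}% of COQ)")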
The goal of creating the defect age metric was to establish a cost model by discovering a relationship between defect age and the cost of defects.
The most important quality metric is cost of failure.
Crosby
Designers did not want to think of the cost of defects as the cost of failure, but testers did. Despite the controversy, defect age became a potential metric. There was also an engineering notion that defect age reflects product reliability: the lower the defect age, the higher the reliability. The project team strove to reduce defect age.
Another purpose of creating the defect age metric was to check the 1:10:100 rule for the cost of fixing defects. The rule says that if fixing a defect early costs a dollar, fixing it late attracts exponentially increasing costs. Deep-set defects are difficult to find and costly to fix. The data revealed that in this project the rule was closer to 1:2:4.2; there was no dramatic rise in cost as defect age increased.
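A sketch of how such a ratio could be derived from defect records, assuming each record carries a defect age and a fixing cost; the sample data and the resulting ratios are illustrative, not the project's actual figures:

from collections import defaultdict

# Hypothetical defect records: (defect age, cost to fix in person-hours).
defects = [(1, 2), (1, 3), (2, 5), (2, 6), (3, 9), (4, 12), (4, 13)]

# Average fixing cost for each defect age.
cost_by_age = defaultdict(list)
for age, cost in defects:
    cost_by_age[age].append(cost)
avg_cost = {age: sum(c) / len(c) for age, c in sorted(cost_by_age.items())}

# Express costs relative to the cheapest (earliest-found) defects,
# in the spirit of the 1:10:100 rule of thumb.
base = avg_cost[min(avg_cost)]
print({age: round(cost / base, 1) for age, cost in avg_cost.items()})
# {1: 1.0, 2: 2.2, 3: 3.6, 4: 5.0}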
Review Questions
1. How many metrics would you use in a test dashboard in a testing project?
2. What are the metrics that can be used in unit testing?
3. What are the metrics that can be used in smoke tests?
4. Suggest simple ways of assessing reliability of software before release.
5. What metric data will be used while making a decision about stopping testing?
6. Compare test effectiveness with test efficiency.
7. Mention the names of two commonly used S curves in testing.
8. Relate defect age to the cost of testing. In your opinion, what would be the
expression for such a relationship?
Exercises
1. Develop a minimum set of metrics you would maintain for a testing agency that provides testing services to software development companies.
2. Develop a risk metrics system for risk-based testing in a software develop-
ment project.
3. Suggest a matrix format for checking requirement traceability with test cases.
4. Develop a metric system to be used for managing test automation.
5. Develop a template for a monthly test report based on metric data. Mention the names of the metrics and the charts you would use.
Suggested Readings
Kaur, A., B. Suri and A. Sharma, Software testing product metrics—A survey, Proceedings of
National Conference on Challenges & Opportunities in Information Technology (COIT-
2007), RIMT-IET, Mandi Gobindgarh, March 23, 2007.
Kelly, D. P. and R. S. Oshana, Improving software quality using statistical testing techniques,
Information and Software Technology, 42, 801–807, 2000.
Nirpal, P. B. and K. V. Kale, A brief overview of software testing metrics, International Journal
on Computer Science and Engineering, 3(1), 204–211, 2011.
Poore, J. H. and C. J. Trammell, Application of Statistical Science to Testing and Evaluating
Software Intensive Systems. Available at http://sqrl.eecs.utk.edu/papers/199905_cjt
_hmc.pdf.
Prabhakar, J. Test execution through metrics, STEP-AUTO Conference, Bangalore, Test
Management, September 19, 2007.