could be 24 hours for express service, 48 hours for next-priority tasks, a week for regular tasks, a fortnight for odd tasks, and so on. There are different performance slabs for different task categories. If delivery is made within the stipulated time, the service-level agreement (SLA) is complied with; if not, it is a noncompliance and a breach of contract. The customer virtually controls the maintenance team by closely monitoring the SLAs. The criteria are often designed to minimize risk to the customer.
The SLA compliance metric is based on counts, defined as follows:

\[
\text{SLA compliance} = \frac{\text{Number of deliveries that met SLA}}{\text{Total number of deliveries}} \times 100
\]
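As an illustration with assumed figures, if 190 of 200 deliveries in a month met their SLA targets, the SLA compliance is (190/200) × 100 = 95%.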
Customers sign separate SLAs with the maintenance organization for each delivery attribute, such as quality, response time, priority levels, and volume delivered per month. Every month, the SLA compliance percentage is measured for each attribute. Noncompliance may attract penalties, and hence these metrics are respected and sincerely tracked.
Figure 7.2 Resource utilization—Team Skill Index (TSI). (The figure is an assessment form with fields for the maintenance project ID, team reference, assessment date, and evaluator. Each team member is given a skill score on a 0–10 scale for problem solving, platform experience, domain experience, oral communication, written communication, and analysis capability; the scores are averaged into the TSI. Interpretation of the overall average: TSI > 8 is good; TSI between 5 and 8 is poor, arrange training; TSI < 5 signals an alarming problem, escalate it.)
SLA metrics enable teams to perform within the limits set. They do not measure the exact performance but only register whether the SLA is met or not. For example, while providing a workaround governed by a 48-hour SLA criterion, the agent does not report the actual time. Even if the agent completes the job within 12 hours, delivery will be officially logged only at the 48th hour because the SLA says so. Twelve hours is unaccounted for. This is where Parkinson's law plays a role: work expands to fill the time available. There is no motivation to do one's best, only to do just so much, the bare minimum, merely to avoid a penalty. Under SLA, we never know the true capabilities of teams. SLA compliance is a business metric in its strict sense.
Percentage of On-Time Delivery
Of all the service attributes, time is the most crucial. A special metric is constructed to track the percentage of deliveries made on time. The on-time delivery (OTD) metric elicits respect from the maintenance team because of its inherent business context. This metric is very different from schedule variance (SV), which is a process metric and measures delay. As a process metric, even the magnitude of delay is captured as information. In OTD, the magnitude of delay is not captured. OTD is a discrete metric, whereas SV is a continuous metric. OTD captures partial information, whereas SV captures complete information. If we have not delivered on time, measuring the delay (as SV does) at least offers the consolation of knowing by how much we missed. Under OTD, even if the delay is small, no mercy is shown: the delivery is said to have failed. OTD belongs to a pass/fail world of hard decisions.
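The contrast can be made concrete with a small sketch. The following Python fragment uses hypothetical planned and actual delivery times (not from the book) and computes both metrics side by side: OTD discards the magnitude of delay, while SV retains it.

```python
# Hypothetical planned vs. actual delivery times (in hours) for six tasks.
planned = [48, 48, 24, 168, 48, 24]
actual  = [36, 50, 24, 200, 47, 30]

# OTD: pass/fail per delivery, then the percentage delivered on time.
on_time = [a <= p for p, a in zip(planned, actual)]
otd_percent = 100.0 * sum(on_time) / len(on_time)

# SV: signed percentage deviation per delivery; the size of the delay is kept.
sv_percent = [100.0 * (a - p) / p for p, a in zip(planned, actual)]

print(f"OTD = {otd_percent:.1f}% on time")
print("SV per delivery (%):", [round(v, 1) for v in sv_percent])
```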
Enhancement Size
The enhancement size metric helps in understanding the enhancement job better, besides serving as an estimator of cost, schedule, and quality. The Netherlands Software Metrics Users Association (NESMA) guideline "Function Point Analysis for Software Enhancement, Version 2.2.1" defines enhancement as changes to the functionality of an information system, so-called adaptive maintenance. Enhancement involves three possible tasks:
Addition of functionality
Deletion of functionality
Change of functionality
Addition of functionality is measured as added FP. The deletion of functionality is measured as 0.40 × deleted FP. Changed functionality is measured as impact factor (IF) × changed FP; the impact factor can take values from 0 to 1.
The total is called the enhancement size, calculated as follows: enhancement function point (EFP) = added FP + 0.40 × deleted FP + IF × changed FP.
The previously mentioned equation shows the effect of deletion and change. Working with EFP is a good practice.
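As an illustration with assumed figures, an enhancement that adds 50 FP, deletes 10 FP, and changes 20 FP with an impact factor of 0.5 yields EFP = 50 + 0.40 × 10 + 0.5 × 20 = 64.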
As a proxy, LOC can also be used to judge enhancement size, though such a
metric may not be available early in the enhancement life cycle and may not help
in estimation. It is a pity to see that some projects do not include deleted size and
changed size in their calculation.
Bug Complexity
On a lower scale of measurement, bug complexity is rated as high, medium, or low by the maintenance engineer. This is a subjective measurement, but it works.
Bug complexity can also be assessed on a continuous scale of 1–10, 10 being the most complex and 1 being the least. This is still subjective but has better granularity and has the extra advantage of being a numerical expression allowing further calculations.
The objective treatment of bug size considers the factors driving bug fixing effort. In one model by Andrea De Lucia [6], the number of tasks required and the application size are considered as the factors, and a linear regression equation relating these two to effort is constructed.
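A minimal sketch of fitting such a regression, using made-up historical records rather than De Lucia's published data or coefficients, could look like this in Python:

```python
import numpy as np

# Hypothetical history of past fixes: (tasks, size in KLOC, effort in person-hours).
history = np.array([
    [3, 12, 40],
    [5, 30, 85],
    [2, 8, 25],
    [8, 55, 150],
    [6, 20, 90],
], dtype=float)

tasks, size, effort = history[:, 0], history[:, 1], history[:, 2]

# Design matrix with an intercept term: effort ≈ b0 + b1*tasks + b2*size.
X = np.column_stack([np.ones_like(tasks), tasks, size])
coeffs, *_ = np.linalg.lstsq(X, effort, rcond=None)

b0, b1, b2 = coeffs
print(f"effort ≈ {b0:.1f} + {b1:.1f} * tasks + {b2:.2f} * size")

# Predict effort for a new maintenance request (illustrative values only).
print("predicted effort:", round(b0 + b1 * 4 + b2 * 25, 1))
```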
The purpose of measuring bug complexity is to use the answer to predict bug fixing effort. That means the purpose is to build an estimation model. During analysis, such an estimate is made by the maintenance engineer, usually on the fly. Because analysis of the bug is made to fix the bug and not to build a model, exploring additional factors or increasing the depth of measurements is not suggested. Bug fixing is the main objective, model building a concomitant one. Moreover, we do not need extraordinary precision in estimating the bug fixing effort; we need a reasonably useful indicator.
Do not measure with a micrometer, mark with a chalk and cut
with an axe.
Murphy’s Law of Measurement
Box 7.2 Bug Repair Time Metric
A maintenance manager desired to statistically establish the team's bug repair capability and circulated a form to gather data. Senior engineers responded truthfully: the time spent on bug fixing was approximately 5–6 hours every day. A new recruit did not know how to respond and simply wrote "From 9 am till 6 pm, I spent time on bug fixing," a statement too good to be true. The new recruit filled in what he thought was an appropriate value and not what he actually did. Bug repair time is a metric difficult to collect and even more difficult to validate.
There are several equally simple ways of building an estimation model, including estimation by analogy and proxy-based estimation.
Bug complexity is thus measured with a consciously selected level of approximation. Later, this will affect the effort variance metric.
Effort Variance (EV)
The formula for the effort variance metric remains the same as before:

\[
\text{Effort variance} = \frac{\text{Actual effort} - \text{Estimated effort}}{\text{Estimated effort}} \times 100
\]
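For instance, with assumed figures, if the estimated effort for an enhancement is 120 person-hours and the actual effort is 138 person-hours, the effort variance is (138 − 120)/120 × 100 = 15%.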
However, context and interpretation change in maintenance. This metric can be easily applied to enhancement projects where estimation is performed reasonably well, and the metric accordingly carries full meaning. It is beneficial to calculate effort variance twice, first with the initial estimate and later with a revised estimate after the change request is better understood. Even after the second estimate, the first estimate is still used as a budget control metric and the second as a process control metric.
In bug fixing, effort variance can be calculated, if at all, only approximately because of approximations in estimation.
Schedule Variance (SV)
The SV metric is treated like effort variance. Often, this is restricted to enhancement projects and not implemented in bug fixing tasks for two principal reasons: bug fixing is tightly controlled by SLAs, and the actual time of fixing is not available. Many times, bug fixing happens without estimation.
Quality
Quality of Enhancement
The quality metric is calculated by dividing the defect count by size and is expressed as defects per EFP. The quality of each release is monitored.
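As an illustration with assumed figures, a release of 64 EFP that shows 8 post-delivery defects has a quality level of 8/64 = 0.125 defects per EFP.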
Quality of Bug Fix
Sometimes, maintenance activities inadvertently harm quality; while fixing a bug, another could be introduced. In a typical bug-fixing environment, the support team does not know if a fixed bug reopens in the field. If the same bug returns, people still may not detect its arrival because there is no traceability. Usage-triggered failures seldom come to the knowledge of the bug fixer. The bug arrival rate is not usually connected with quality because no one connects the dots. In such a fluid situation, unless a quality metric is defined and collected, the quality of the software under maintenance cannot be known and improved. However, such a step needs to be negotiated with the customer
and be seen as a business need. If quality improves, maintenance cost will come down and the customer will benefit; but preventive maintenance has to be paid for.
Productivity
Productivity can be expressed in different ways. We suggest the metric of EFP per man-month. This metric eventually controls the cost of maintenance; it helps in cost control.
For measuring bug fixing productivity, the number of fixes per man-month could be a basic metric.
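With assumed figures, a four-person team that delivers 128 EFP of enhancements in a month achieves 128/4 = 32 EFP per man-month, and a team that closes 60 bug fixes with 5 person-months of effort achieves 12 fixes per man-month.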
Measuring productivity is straightforward, but estimating productivity from
contributing factors is not. If the metric includes estimation (which in a broader
sense is a right expectation), then making use of models such as COCOMO may
be included in our purview.
Time to Repair (TTR)
This is a metric automatically collected by the bug-tracking tool. If one wants to improve the process of bug fixing, optimize it, and achieve excellence, subprocess metrics such as (1) time for replication, (2) time for analysis and design, and (3) time for implementation and testing can be collected. These subprocess metrics can be obtained by a quick survey. The bug tracker tool may not be equipped to collect subprocess metrics. Such surveys are performed occasionally to obtain information to improve the process. It is quite possible to collect metrics at more granular levels, measuring analysis time and design time separately, if the cost is justified by the expected gain. One can go a step further and apply lean techniques such as value stream map analysis, waiting time analysis, and idle time analysis. This metric will help to make the operation more efficient.
Most certainly, one does not choose subprocess metrics for regular data collection until the organization achieves high maturity and people volunteer to provide "personal" data. However, whatever the level of granularity, bug repair time is one of the most effective and beautiful metrics in software engineering.
Box 7.3 The Queue
Bugs form a queue, and customers wait for fixes. Customer satisfaction improves with faster response and better quality; both depend on human resources. When resource utilization is 100%, customer satisfaction is less than the best. When resource utilization is 90%, customer satisfaction improves. Moral of the story: some human reserves must be maintained to boost customer satisfaction, and there is a trade-off between the two. Maintenance organizations keep buffer resources and operate at less than 100% resource utilization because losing customer satisfaction is costlier.
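The utilization effect can be illustrated with a standard single-server (M/M/1) queue model; this is not from the book, and the service rate below is hypothetical, but it captures the same intuition: as utilization approaches 100%, waiting time grows without bound.

```python
# Illustrative M/M/1 queue: mean wait in queue Wq = rho / (mu * (1 - rho)),
# where mu is the service rate (bugs fixed per day) and rho is utilization.
mu = 4.0  # hypothetical: the team can fix 4 bugs per day on average

for rho in (0.70, 0.80, 0.90, 0.95, 0.99):
    wq_days = rho / (mu * (1.0 - rho))
    print(f"utilization {rho:.0%}: average wait ≈ {wq_days:.2f} days")
```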