Chapter 5
Deriving Metrics
Creating Meaning in Data
Direct observations are also called base measures or raw data. Such data are either entered
in the computer by people or recorded automatically by machines. Automated data
collection is gaining currency. When people enter data, they have a chance to see what
they are keying in and validate it. Data captured by machines are not immediately
seen by people; such automatic data are visited only during report generation or process analysis.
Derived measures are computed from base measures. Derived measures are known
by several names. Two of these names are significant: key performance indicators and
metrics; we use the term metrics. Errors in base measures propagate into metrics. In a
broader sense, metrics also constitute data. However, metrics carry richer information
than base measures. We create meaning in data by creating metrics.
Deriving Metrics as a Key Performance Indicator
Measurement is essentially a mapping process. Primitive man counted sheep
by mapping them, one to one, to a bundle of sticks. A missing sheep showed
up as a mismatch. Words had not yet been invented, but there was mapping all the
same. A similar mapping is performed in function point counting; measurement
is seen as a counting process, a new name for mapping. The mapping phase in
measurement is well described in the COSMIC function point manual. With the
help of language, we have given a name to what is counted: software size. With
the help of number theory, we assign it a numerical value by applying rules.
In a similar manner, we count defects in software. Here the mapping is obvious.
The discovery of defects is conducted by a testing process. Each defect is given
72 Simple Statistical Methods for Software Engineering
a name or an identification number. The total number of defects in a given module
is counted from the defect log.
Size is a base measure. Defect count is another base measure. The ratio of defects
to size is called defect density. It is a derived measure, a composite formed from two
independent base measures, and it denotes product quality.
Productivity is another example of a derived measure.
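The two derived measures just mentioned can be sketched in a few lines of Python; the figures below are hypothetical, chosen only to illustrate how metrics are built as ratios of base measures:

```python
# Sketch: two derived measures computed from base measures.
# The numbers are illustrative, not taken from the text.

size_fp = 250        # base measure: software size in function points
defects_found = 15   # base measure: defect count from the defect log
effort_hours = 1200  # base measure: effort spent on the module

# Derived measures (metrics) built as ratios of base measures:
defect_density = defects_found / size_fp  # defects per function point
productivity = size_fp / effort_hours     # function points per hour

print(f"Defect density: {defect_density:.3f} defects/FP")
print(f"Productivity:   {productivity:.4f} FP/hour")
```

Note that any error in the base measures (a miscounted defect, a mis-sized module) propagates directly into both derived values.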
Measures are directly mapped values. Metrics are meaningful indicators con-
structed against requirements.
A “measure” refers to a directly mapped value.
A “metric” refers to a meaningful indicator.
Technically speaking, size is a measure, and complexity is a metric. Arrival time
is a measure, and delay in arrival is a metric.
Metrics carry more meaning than measures. Hence, metrics are “meaningful
indicators.”
Estimation and Metrics
A few metrics such as effort variance and schedule variance are based on estimated
and observed values. For instance, the metric effort variance is defined as follows:
Effort variance (%) = (Actual effort − Estimated effort) / Estimated effort × 100
This metric truly and directly reflects any uncertainty in estimation.
Accurate measurement combines with ambiguous estimation to produce
ambiguous metrics.
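The effort variance computation can be sketched as follows; the function name and the sample figures are illustrative, not from the text:

```python
def effort_variance_pct(actual: float, estimated: float) -> float:
    """Effort variance % = (actual - estimated) / estimated * 100."""
    if estimated <= 0:
        raise ValueError("estimated effort must be positive")
    return (actual - estimated) / estimated * 100.0

# A project estimated at 400 hours that actually took 460 hours
# overran its estimate by 15%:
print(effort_variance_pct(actual=460, estimated=400))
```

Because the estimate sits in the denominator, an ambiguous estimate makes the whole metric ambiguous, however accurately the actual effort was measured.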
Measurement capability and the estimation system support each other. They are
similar in so much as both are observations. Metrics measure the past and the
present; estimation measures the future.
Paradigms for Metrics
What's measured improves.
Peter F. Drucker
Deriving Metrics 73
“Measure what matters” is a rule of thumb. We do not measure trivial aspects;
we do measure critical factors. Intrinsic to this logic is the assumption that hav-
ing metrics is an advantage: we can improve what we do. The balanced scorecard
measures performance to achieve improvement. Areas for improvement are identi-
fied by strategic mapping. Loyal followers of the balanced scorecard approach use
this method to improve performance through measurements.
Another paradigm for measurement can be seen in quality function deploy-
ment (QFD). This is an attempt to measure the “whats” and the “hows.” The QFD
structure and the associated metrics have benefited several organizations.
Capability Maturity Model Integration (CMMI) suggests measurement of every
process at each level of process maturity. The list of metrics thus derived could be
comprehensive. The goal question metric (GQM) paradigm is suggested to select
metrics at each level.
ITIL suggests measurements to improve service quality.
ISO 9000 indicates the measure-analyze-improve approach. It protects the quality
of data through meticulous calibration of measuring devices.
The Six Sigma initiative suggests metrics to solve problems. It has a measure
phase, where Y = F(X) is used to define X (causal) metrics and Y (result) metrics.
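The Y = F(X) idea can be sketched in code: the result metric Y is modeled as a function of causal metrics X. The linear form, the metric names, and the coefficients below are entirely hypothetical, chosen only to show the structure:

```python
# Sketch of the Six Sigma Y = F(X) view: a result metric (Y) expressed
# as a function of causal metrics (X). The linear model and its
# coefficients are hypothetical, purely for illustration.

def predicted_defect_density(complexity: float, review_coverage: float) -> float:
    """Y (defect density) as an assumed linear function of two X metrics."""
    return 0.02 + 0.004 * complexity - 0.015 * review_coverage

# A complex, lightly reviewed module vs. a simple, well-reviewed one:
y_risky = predicted_defect_density(complexity=12, review_coverage=0.3)
y_safe = predicted_defect_density(complexity=4, review_coverage=0.9)
print(y_risky, y_safe)
```

In practice the form of F and its coefficients would come from regression on project data, not be assumed as here.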
In the lean enterprise, wastes and values are identified and measured to eliminate
waste.
In clean room methodology, software usage is measured and statistically tested.
Reliability metrics are used in this case.
In the personal software process (PSP), Humphrey proposed a set of personal-level
metrics. The choice was based on the special qualities and attributes of PSP.
Barry Boehm uses a narrowed-down set of metrics to estimate cost in his
constructive cost model (COCOMO). COCOMO metrics have made history by
contributing to estimation model building.
A metric design follows the framework used for improvement. There are many
frameworks and models for achieving excellence. Metrics are used by each of
them as a driver of improvement. The system of metrics easily embraces the parent
framework.
GQM Paradigm
Most metrics are naturally driven by strong and self-evident contexts. In special
applications such as breakthrough innovation, model building, and reliability
research, we need special metrics. We are keen that every metric carry a purpose.
Special initiatives and hence special metrics should still connect with business
goals. The tree that makes the connection is the GQM paradigm [1].
The GQM paradigm is an approach to manage research metrics and hence is
more effective in problem solving and model building. It is not so influential in
driving the five categories of industry metrics mentioned earlier.
The Software Engineering Institute (SEI) introduced the GQ(I)M paradigm [2] as a
value-adding refinement to the GQM paradigm. GQ(I)M uses Peter Senge's mental
models to drive metric choice and indicators to convey meaning. GQ(I)M certainly
has helped to widen the reach of GQM.
Difficulties with Applying GQM to
Designing a Metrics System
First, the intermediate stage (question) in the GQM paradigm is not very helpful.
We simply map metrics to goals. The mapping phase in COSMIC size measurement
is a good illustration.
Second, while applying GQM, people tend to start with corporate goals and
attempt to drill down to metrics. This often turns out to be a futile attempt. Large
organizations have spent days with GQM, getting lost in defining goals, subgoals,
and sub-subgoals. All these goals go into goal translation and never make it
to metric definitions in a single workshop. Rather, we would first derive per-
formance goals from business goals using a goal tree. This is goal deployment, a
leadership game. Designers of metrics should distinguish metrics mapping from
goal deployment; they should pick selected performance goals and map them to
metrics.
Third, some metrics are specified by clients. Customer-specified metrics seem
to run very well in organizations. Data collection is smooth. There is no need for a
separate mechanism to identify and define these metrics.
Box 5.1 Flying a Plane and GQM
Even the simplest propeller airplane has meters to measure altitude and
fuel. These measurements are intrinsic to the airplane design. The meters are
fitted by the manufacturer and come with the airplane as basic parts of it.
One cannot fly without altitude, speed, and fuel level metrics. Flying a plane
without an altimeter is unthinkable. A plane without a speedometer is unrealistic.
A pilot cannot make decisions without a fuel indicator. These metrics are not
goal driven and certainly not business strategy driven but are driven by design
requirements. One can think of purposes for each metric, but these purposes are
not derived from business strategies and business goals; they are implicitly
inherent in product engineering. Whether there are goals or not, these meters
will be fitted to the plane, almost spontaneously, like a reflex action triggered by
survival needs. There are no options here. These metrics are indispensable and
obvious. One does not need a GQM approach to figure them out.
Hence, cost, schedule, and quality metrics are also indispensable in a soft-
ware development project. These are not goal driven but are based on opera-
tional needs. One does not have a choice.
Fourth, some metrics are driven by requirements. If the requirements state
that the software must be developed for maintainability, there is a natural metric
associated with this requirement: the maintainability index. Meeting both
functional and nonfunctional requirements might need metrics support. Thus, the
recognized performance targets easily and organically map into performance met-
rics. One need not apply GQM and make heavy weather of it.
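The chapter does not give a formula for the maintainability index; one widely cited formulation (the classic three-term Oman-Hagemeister form) combines Halstead volume, cyclomatic complexity, and lines of code. A sketch, with hypothetical input figures:

```python
import math

def maintainability_index(halstead_volume: float,
                          cyclomatic_complexity: float,
                          loc: int) -> float:
    """Classic three-term maintainability index (Oman-Hagemeister form).
    Higher values indicate more maintainable code."""
    return (171.0
            - 5.2 * math.log(halstead_volume)
            - 0.23 * cyclomatic_complexity
            - 16.2 * math.log(loc))

# Illustrative base measures for one module (hypothetical numbers):
mi = maintainability_index(halstead_volume=1500.0,
                           cyclomatic_complexity=12,
                           loc=300)
print(round(mi, 1))
```

This illustrates the point of the paragraph: the metric follows naturally from the requirement, with no goal-question analysis needed to discover it.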
Fifth, metrics are often constructed to serve information needs in the organiza-
tion. Hence, management information systems (MIS) automatically capture many
metrics. These metrics are an inherent part of the MIS. Metric teams have to extract
metric data from the MIS or ERP systems. These metrics do not follow the GQM
road.
Sixth, some metrics are derived from operational needs, for example, schedule
variance. Such needs are compelling, and one does not have a chance to exercise
options. When the needs are clearly and decisively known, we need not rediscover
them by GQM.
Seventh, even in the Six Sigma way of problem solving, where the Y and X vari-
ables are defined to characterize the cause-and-effect relationships that constitute
the problem, metrics are derived by mapping through a cause-effect diagram. The
selection of the problem to be solved is a goal-driven process, but deriving the vari-
ables (metrics) is conducted through causal mapping, not GQM.
Need-Driven Metrics
It is our finding that successful metrics are driven by transparent needs. The link
between metrics and needs must be organic, spontaneous, and natural. The bottom
line:
If we can do without metrics, we will do without metrics. We use metrics
only when they are needed.
The system of assigned goals, personal goals, and all the subgoals finally boils
down to performance goals that reflect the pressing needs of the system. Once a
metric connects with needs, it works.
The connection between needs and metrics must be concrete, spontaneous,
transparent, and direct. Hence, mapping is the preferred connecting mechanism.
Using “questions” is too verbose to be of practical value.
Mapping is a better connector than questions.
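A direct goal-to-metric mapping can be expressed as a simple lookup table, with no intermediate question layer. The goal and metric names below are hypothetical examples:

```python
# Direct mapping from performance goals to metrics, skipping the
# intermediate "question" step of GQM. All names are illustrative.
goal_to_metrics = {
    "deliver on schedule": ["schedule variance"],
    "stay within budget": ["effort variance"],
    "improve product quality": ["defect density",
                                "defect removal efficiency"],
}

for goal, metrics in goal_to_metrics.items():
    print(f"{goal} -> {', '.join(metrics)}")
```

The flat structure makes the need-to-metric connection visible at a glance, which is precisely the transparency the text asks for.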
A more serious concern is obtaining commitment from agents to support
metrics. A need-based system wins commitment more readily than inquiries and
questions do.