13
Risk Measurement
Imagine a measurement system that, when working effectively, offers the opportunity to reduce supply chain risk. Next, imagine the possible outcomes when such a system fails to work as intended. A number of years ago a consumer products company with $100 million in annual sales developed a scorecard system to measure supplier performance. The system was never validated and was less than professional in appearance, and many larger suppliers challenged their scores, particularly when the scores were lower than what they received from their more sophisticated customers. The measurement system was such a nonstarter that it deterred the company from moving forward with its supplier measurement objectives. It also affected, and not in a good way, the company's relationships with its suppliers. Not much in the way of risk reduction occurred here.
Welcome to the world of measurement, a topic that can enhance or impede a company's risk management efforts. This chapter examines risk measurement from a variety of perspectives. We first discuss measurement validity and reliability, something that is critical as companies create new ways to evaluate risk. This is followed by a presentation of best-in-class supplier performance measurement systems, quantified risk indexes, and a system for measuring risk at the country level. Next, we present the increasingly talked-about subject of total cost measurement. The chapter concludes with a set of emerging risk metrics.
RISK MEASUREMENT VALIDITY AND RELIABILITY
As supply chain risk management (SCRM) evolves as a discipline, it
almost goes without saying that measurement will play an integral part.
Recall that measurement is one of the key risk enablers we introduced in Chapter 3. As we work with companies, we are seeing all kinds of new measures, measurement models, and risk indexes emerging as part of the risk management process. Whenever measurement plays a central role, a basic question must be asked: Is the measure or model valid and reliable?
Valid means that an indicator or model measures what it is supposed to measure. If we had to replace the word valid with another word, that word would be accurate. If a social scientist develops a scale to measure individual happiness, for example, does that scale actually measure happiness? In the risk arena, if a measure is supposed to measure the probability of a supplier failing financially, does the measure actually measure financial distress? Or, an index might translate risk scores into a system that assigns red, yellow, or green risk indicators. Is the cutoff value defining red versus yellow actually where the cutoff should be? If a supplier measure indicates a supplier is high risk, is it really a higher risk compared with other suppliers?
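To make the cutoff question concrete, here is a minimal sketch of how a risk index might translate a numeric score into a red/yellow/green indicator. The 0-100 scale and the cutoff values of 40 and 70 are hypothetical, chosen only for illustration; they are not taken from any particular index.

```python
def risk_indicator(score: float, yellow_cutoff: float = 40.0,
                   red_cutoff: float = 70.0) -> str:
    """Translate a 0-100 risk score into a color-coded indicator.

    The cutoffs here are illustrative placeholders. Validating the
    index means testing whether cutoffs like these actually separate
    low-risk from high-risk suppliers.
    """
    if score >= red_cutoff:
        return "red"
    if score >= yellow_cutoff:
        return "yellow"
    return "green"

print(risk_indicator(72.5))  # red
print(risk_indicator(45.0))  # yellow
print(risk_indicator(12.0))  # green
```

Note that moving a cutoff by even a few points can reclassify a large share of suppliers, which is exactly why cutoff placement deserves empirical support rather than a guess.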
We do not want to give the impression that validating a measure or model is easy to do. Our concern is that far too often risk measures and indicators are developed but not sufficiently tested, usually because validation can be a time-consuming process. In the social sciences, and many observers consider business to be a social science, researchers have to address many kinds of measurement validity or risk having their work rejected by external reviewers. Different kinds of validity can include construct, convergent, face, internal, predictive, statistical conclusion, content, criterion, and concurrent validity. Validity has many dimensions, enough to give a person a serious headache.
A second important dimension of a risk measure is reliability. Reliability is the extent to which a measure provides results that are consistent from use to use. A watch could measure time (it has validity), but it could become inaccurate as its battery wears down. Or, the same piece of equipment used to measure blood pressure is not reliable if it gives different readings when no real change in a person's blood pressure occurred. Something that is reliable means that we have confidence in its use time and time again.
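To show what checking reliability might look like in practice, here is a minimal sketch of a test-retest check. The repeated readings are hypothetical; the idea is simply that scoring the same unchanged supplier several times should produce nearly identical results.

```python
from statistics import mean, stdev

# Hypothetical repeated risk scores for one supplier, collected while
# nothing about the supplier actually changed.
readings = [61.2, 60.8, 61.5, 60.9, 61.1]

# A reliable measure shows only trivial spread across repeated use.
spread = stdev(readings)
cv = spread / mean(readings)  # coefficient of variation

print(f"mean={mean(readings):.1f}  stdev={spread:.2f}  cv={cv:.2%}")
# If the coefficient of variation exceeded some agreed tolerance,
# the measure would be behaving like the blood pressure equipment
# that gives different readings with no real change in the patient.
```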
Possible problems with risk measures and indexes are similar to Type I and Type II measurement errors in quality management. A risk measure or index may be so sensitive that it raises a red flag when no unusual problem or risk exists (i.e., a false positive, or Type I error). After receiving enough false warnings, trust in the system erodes as users become desensitized to what the measure conveys. Another possible outcome is similar to Type II quality errors—the measure says there is no problem when in fact there is a risk event pending or likely. Unfortunately, when this is the case we are lulled into a false sense of security when the system should be picking up various signals. Perhaps the model supporting the risk measure is not sensitive enough. Or, perhaps the right factors are not part of the model, causing the model to miss some important clues.
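These two failure modes can be quantified once flags are compared with actual outcomes. The sketch below, using made-up history, computes the Type I (false positive) and Type II (false negative) rates for a simple red-flag measure.

```python
# Hypothetical history pairing each red flag raised by the measure
# with whether a real risk event actually followed.
history = [
    {"flagged": True,  "event": False},  # false alarm (Type I)
    {"flagged": True,  "event": True},   # correct warning
    {"flagged": False, "event": True},   # missed event (Type II)
    {"flagged": False, "event": False},  # correct all-clear
    {"flagged": True,  "event": False},  # false alarm (Type I)
    {"flagged": False, "event": False},  # correct all-clear
]

false_alarms = sum(h["flagged"] and not h["event"] for h in history)
misses = sum(not h["flagged"] and h["event"] for h in history)
non_events = sum(not h["event"] for h in history)
events = sum(h["event"] for h in history)

print(f"Type I  (false positive) rate: {false_alarms / non_events:.0%}")
print(f"Type II (false negative) rate: {misses / events:.0%}")
```

A measure with a high Type I rate trains users to ignore it; a measure with a high Type II rate quietly fails them, which is arguably worse.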
If measurement validity is so important, how do companies ensure their risk measures are valid? Perhaps the best way to validate a risk measure or measurement approach is through simulation testing using historical data, similar to what occurs when validating forecasting models. After all, any measure that is forward looking, and most risk measures should be forward looking, is essentially a forecasting tool.
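Here is a minimal sketch of that simulation idea, using invented historical records: sweep several candidate cutoffs over past supplier scores and known outcomes, and see how false alarms trade off against missed failures at each cutoff.

```python
# Hypothetical history: (risk score at the time, whether the supplier
# actually failed within the following year).
history = [(82, True), (75, False), (68, True), (90, True),
           (55, False), (61, False), (79, True), (47, False),
           (71, False), (88, True), (58, True), (42, False)]

def error_counts(cutoff):
    """Treat scores >= cutoff as a red flag and score against outcomes."""
    false_alarms = sum(1 for s, failed in history if s >= cutoff and not failed)
    missed = sum(1 for s, failed in history if s < cutoff and failed)
    return false_alarms, missed

# Backtest candidate cutoffs the way a forecasting model is validated
# against history before anyone trusts its forecasts.
for cutoff in (50, 60, 70, 80):
    fa, missed = error_counts(cutoff)
    print(f"cutoff={cutoff}: {fa} false alarms, {missed} missed failures")
```

Which cutoff is "right" depends on the relative cost of a false alarm versus a missed failure, a judgment the historical simulation informs but cannot make on its own.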
Validity and Bridge Safety Measures
Here is an example of model validity that fell out of nowhere. When an oversize truck traveling in Washington State hit a bridge girder, causing an entire section of Interstate 5 to fall into the Skagit River, it was not long before the system that calculates sufficiency ratings for bridges came under scrutiny.¹ And the verdict of this scrutiny was that the sufficiency rating system to assess bridge safety has some serious shortcomings.
Part of the problem is the complexity of the sufficiency rating system developed in the 1970s. About 20 factors, almost half of which have nothing to do with a bridge's actual condition, are put into a magic formula that generates a single sufficiency bridge rating. Mathematically, it is possible that a bridge that is more vulnerable to collapse has a higher sufficiency rating than a bridge that is less vulnerable. A sufficiency rating less than 80 is necessary to qualify for federal funding for bridge modifications, while a rating under 50 qualifies a bridge for replacement. In other words, serious decisions are made because of the sufficiency rating. And it appears that the rating system may not be doing what it is supposed to do.
Compiling so many factors into one rating increases the probability that serious deficiencies are overshadowed by other factors, such as average daily traffic and detour length if a bridge is taken out of service, as the sketch below illustrates.
Shortcomings in the current rating system are causing engineers to look at new ways to measure bridge risk, including using software to predict how bridges will change and possibly fail over time, along with cost-benefit analysis to optimize spending on maintenance and repair. However the measurement of bridge hazard risk eventually turns out, the one thing that is becoming increasingly clear is that bridge sufficiency ratings are not all that sufficient.
SUPPLIER PERFORMANCE MEASUREMENT—DOING IT RIGHT
Most rms, particularly larger ones, will say they have some sort of sup-
plier performance measurement system in place. Many companies call the
output from these measurement systems supplier scorecards. Our discus-
sion here is not a how- to on supplier performance measurement; other
sources have covered this topic quite well. Rather, we address the issues
that tend to aect supplier measurement systems at most companies. Lets
highlight these shortcomings through a case example.
The Case of the Deceptive Scorecards
During a review of a supplier scorecard system at a global logistics company, a training instructor asked a buyer to name one of his best-performing suppliers in terms of its performance score. Without hesitation the buyer provided a supplier's name. Another participant in the room responded quickly by saying that in the operations facility this is one of the worst suppliers his group deals with on a day-to-day basis. How can one person say this is a supplier worthy of a preferred status while another person would like to see this supplier go away? And, perhaps most importantly, what are the risks of a measurement system that awards high scores (and likely future business) to what may be poorly performing suppliers? The irony here is that a system that is designed to reduce supply chain risk could actually be increasing risk.
ese dierences of opinion resulted in a spirited discussion among the
participants in the room. During this discussion the group reached con-
sensus about a number of important points. First, the group agreed that
although the measurement system is supported by an extensive database
that allows all kinds of on- demand analyses, the data to support that system
are largely collected and input manually. Furthermore, many performance
items require subjective judgments. Second, most buyers had responsibility
for inputting data quarterly for about 25 suppliers, a heavy burden that is in
addition to their normal workload. Many in attendance also agreed that the
data are input just before, and sometimes aer, the quarterly cuto. ird,
attendees acknowledged that supplier scores are used as one indicator of a
buyer’s job performance, potentially creating a conict of interest.
The group also agreed that all suppliers are essentially held to the same criteria with the same assigned weights, even though no one believes that suppliers are equally important or similar. Participants further agreed that internal customers have no way to provide input into the measurement process, even though this group has the best perspective regarding a supplier's day-to-day performance. Some participants were even confused about how to rate a supplier since some suppliers provide material from more than one site. Finally, no clear agreement emerged that the measurement process was contributing to better supplier performance. It was taken as an article of faith that measurement is a worthwhile pursuit.
Table13.1 provides a set of guidelines for assessing whether a supplier
performance measurement system is likely to satisfy its intended use.
Evaluating a measurement system against these criteria will help ensure
the system is leading- edge. In fact, the items that appear in this table essen-
tially dene the characteristics of a world- class supplier performance mea-
surement system. If supplier measurement is an important objective, then
let’s at least do it right. Measuring performance incorrectly is an invitation
to trouble, and we all know that trouble and risk are best friends.
TABLE13.1
Characteristics of an Eective Supplier Measurement System
e measurement system allows scoring exibility so all performance categories and
suppliers are not measured the same way.
Internal customers evaluate supplier performance through an online portal that
feeds information directly to the measurement system.
Performance reports are forwarded electronically to suppliers with review and
acknowledgment required by executive supplier management.
Each location at a supplier receives an operational performance report while the
supplier’s corporate oce receives a “relationship” performance report.
Supplier performance reports include total cost measures wherever possible instead
of price measures.
Supplier performance, particularly cost, quality, and delivery, is updated in real time
as transactions occur.
e measurement system separates critical suppliers from marginally important
suppliers.
e supplier measurement database allows user exibility when retrieving and
displaying data.
e measurement system provides early- warning performance alerts such as
predicted late deliveries from suppliers.
Suppliers have the ability to view their performance online with comparisons against
other suppliers.
e measurement system is regularly compared against best- practice companies.
Real performance improvement can be demonstrated as a result of the measurement
system.
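As a companion to the first characteristic in Table 13.1, here is a minimal sketch of scoring flexibility: different weight profiles for different supplier segments, rather than one formula applied to everyone. The segment names, categories, and weights are hypothetical.

```python
# Hypothetical weight profiles: a critical production supplier is
# scored differently from a routine one, reflecting the table's point
# that suppliers are not equally important or similar.
profiles = {
    "critical": {"quality": 0.4, "delivery": 0.3,
                 "total_cost": 0.2, "responsiveness": 0.1},
    "routine":  {"quality": 0.2, "delivery": 0.2,
                 "total_cost": 0.5, "responsiveness": 0.1},
}

def scorecard(category_scores: dict, segment: str) -> float:
    """Weighted supplier score using the profile for its segment."""
    weights = profiles[segment]
    return sum(weights[c] * category_scores[c] for c in weights)

scores = {"quality": 90, "delivery": 70,
          "total_cost": 80, "responsiveness": 85}
print(f"Scored as critical supplier: {scorecard(scores, 'critical'):.1f}")  # 81.5
print(f"Scored as routine supplier:  {scorecard(scores, 'routine'):.1f}")   # 80.5
```

The same raw performance produces different scores under different profiles, which is the point: the weights should reflect what actually matters for that supplier relationship.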