
4. Interpreting and Communicating Insights


Certain graphical displays help us interpret and communicate powerful insights from the immense amount of information produced by the Monte Carlo simulation process. These displays visually communicate the range of exposure we face and the central tendency of the outcomes we care about. They prioritize our attention on the risk factors that can affect us most.

Cash Flow and Cumulative Cash Flow with Probability Bands

To plot the cash flow and cumulative cash flow with their 80th percentile probability bands, we first specify a vector of the desired quantiles (the 10th, 50th, and 90th percentiles).

q80 <- c(0.1, 0.5, 0.9)

We then apply the quantile() function to each year of the cash flow (Figure 4-1) and cumulative cash flow (Figure 4-2) using the sapply() function.

cash.flow.q80 <- sapply(year, function(y) quantile(cash.flow[, y], q80))
cum.cash.flow.q80 <- sapply(year, function(y) quantile(cum.cash.flow[, y], q80))

To plot these results, use the following:

 # Plots the 80th percentile cash flow quantiles.
plot(
  0,
  type = "n",
  xlim = c(1, kHorizon),
  ylim = c(min(cash.flow.q80) / 1000,
           max(cash.flow.q80) / 1000),
  xlab = "Year",
  ylab = "[$000]",
  main = "Cash Flow",
  tck = 1
)
lines(
  year,
  cash.flow.q80[1,] / 1000,
  type = "b",
  lty = 1,
  col = "blue",
  pch = 16
)
lines(
  year,
  cash.flow.q80[2, ] / 1000,
  type = "b",
  lty = 1,
  col = "red",
  pch = 18
)
lines(
  year,
  cash.flow.q80[3, ] / 1000,
  type = "b",
  lty = 1,
  col = "darkgreen",
  pch = 16
)
legend(
  "topleft",
  legend = q80,
  bg = "grey",
  pch = c(16, 18, 16),
  col = c("blue", "red", "dark green")
)
Figure 4-1

The cash flow with probability bands around the 80th percentile prediction interval, representing the range of possible cash flows given all the relevant information we have at hand

 # Plots the 80th percentile cumulative cash flow quantiles.
plot(
  0,
  type = "n",
  xlim = c(1, kHorizon),
  ylim = c(min(cum.cash.flow.q80) / 1000,
           max(cum.cash.flow.q80) / 1000),
  xlab = "Year",
  ylab = "[$000]",
  main = "Cumulative Cash Flow",
  tck = 1
)
lines(
  year,
  cum.cash.flow.q80[1,] / 1000,
  type = "b",
  lty = 1,
  col = "blue",
  pch = 16
)
lines(
  year,
  cum.cash.flow.q80[2,] / 1000,
  type = "b",
  lty = 1,
  col = "red",
  pch = 18
)
lines(
  year,
  cum.cash.flow.q80[3,] / 1000,
  type = "b",
  lty = 1,
  col = "darkgreen",
  pch = 16
)
legend(
  "topleft",
  legend = q80,
  bg = "grey",
  pch = c(16, 18, 16),
  col = c("blue", "red", "dark green")
)
Figure 4-2

The cumulative cash flow with probability bands, useful for determining the potential range of time in which payback on a cash basis can occur

The cash flow graph (Figure 4-1) shows us that there is an 80% probability, given the quality of our information, that we can break even between the third and sixteenth year of operation, and the 50–50 outcome occurs around Year 5. The cumulative cash flow graph (Figure 4-2) is much less encouraging. It shows us there is an 80% probability that we will reach payback between Year 7 and never! The cash flow graph is important for showing when positive cash flow will likely occur, but the cumulative cash flow graph presents an initial insight into the more sobering reality of the economic value of the investment opportunity; that is, when an investment’s resultant cash flows pay back the initial cash outlays. Of course, this result does not yet include the effects of the time value of money, so sobering results can often turn frightening when the discount rate is properly applied.
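If you prefer to read these probabilities directly from the simulation rather than estimate them from the charts, a minimal sketch like the following works, assuming the cash.flow and cum.cash.flow matrices from the simulation (trials in rows, years in columns) are still in memory.

# Fraction of trials with positive annual cash flow in each year.
prob.positive.cf <- sapply(year, function(y) mean(cash.flow[, y] > 0))
# Fraction of trials that have reached payback (positive cumulative
# cash flow) by each year.
prob.payback <- sapply(year, function(y) mean(cum.cash.flow[, y] > 0))
round(rbind(prob.positive.cf, prob.payback), 2)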

The Histogram of NPV

The histogram of the NPV is quite easy to set up using the hist() function of R. First, establish the locations of the histogram bins’ breakpoints. Here we begin with 20 bins.

breakpoints <- seq(min(npv), max(npv), abs(min(npv) - max(npv)) / 20)

R can set these bin breakpoints automatically, but they frequently do not provide enough detail. Supplying them ourselves gives us some flexibility to experiment with the number of bins and their size.

# Calculates and plots the histogram of NPV.
hist(
  npv / 1000,
  freq = FALSE,
  breaks = breakpoints / 1000,
  main = "Histogram of NPV",
  xlab = "NPV [$000]",
  ylab = "Probability Density",
  col = "blue"
)

By default, the hist() function plots the actual counts of the values that fall into the histogram bins. By setting freq = FALSE, we force the hist() function to plot the probability density instead. The end result is shown in Figure 4-3.
Figure 4-3

The histogram of the NPV shows the central tendency of the net value of the investment opportunity

The uncertainty in this problem has clearly produced a situation where the great bulk of the NPV outcomes will most likely fall below $0. In fact, on average they will fall below $0, as the mean NPV = -$5.8 million. Now we need to understand by just how much.
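As a quick numeric check on the center of the distribution (the exact figures will vary slightly from run to run because of simulation error), we can summarize the simulated npv vector directly:

# Summarize the simulated NPV in millions of dollars.
mean(npv) / 1e6                       # approximately -5.8 in the run shown here
quantile(npv, c(0.1, 0.5, 0.9)) / 1e6 # the p10, p50, and p90 of NPV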

The Cumulative Probability Distribution of NPV

The approach to plotting the cumulative probability distribution is similar to that used for the cash flow and cumulative cash flow probability bands. This time we don't need sapply(), but we do need a longer vector of quantile values to get enough detail in the plot. We define the quantile breakpoint values by

cum.quantiles <- seq(0, 1, by = 0.05)

which gives a vector of 21 values from 0 to 1 with step sizes of 0.05, as shown here:

[1] 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00

Now, using the quantile() function again, we find the cumulative probability of the NPV at the 21 points.

cum.npv.vals <- quantile(npv, cum.quantiles)

If we put these values into a data frame, like cum.npv.frame <- data.frame(cum.npv.vals), we see in Figure 4-4 that we face an 80% probability that the NPV will fall between -$56.2 million and $56.6 million. The probability that the outcome will be less than $0 is about 62%.
Figure 4-4

The domain and range of the cumulative probability of the NPV. The 80th percentile prediction interval, indicated in blue, shows there is ∼62% probability that the outcome will be $0 or less.
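The same figures can be computed directly from the simulated samples rather than read off the plot. This short sketch assumes the npv vector from the simulation is available:

# Empirical probability that the NPV is zero or less.
mean(npv <= 0)                        # roughly 0.62 in the run shown here
# The 80th percentile prediction interval of NPV, in $ millions.
quantile(npv, c(0.1, 0.9)) / 1e6      # roughly -56.2 and 56.6 here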

Now, to plot the NPV cumulative probabilities, as shown in Figure 4-5, we use the following code.

# Plot the cumulative probability NPV curve.
plot(
  cum.npv.vals / 1000,
  cum.quantiles,
  main = "Cumulative Probability of NPV",
  xlab = "NPV [$000]",
  ylab = "Cumulative Probability",
  "b",
  tck = 1,
  col = "blue",
  pch = 16
)
Figure 4-5

The cumulative probability distribution of the NPV

Therefore, by using both the probability density and the cumulative probability plots, we can observe where the bulk of the results would likely fall, as well as other important probability intervals (e.g., the 80th percentile prediction interval). We also see that the deterministic approach to the problem could have led us into trouble if we had stopped there—an expected loss of $5.8 million is certainly more sobering than the originally determined $8.2 million. Recall, though, that I emphasized that the initial deterministic results were tentative. We only acquire an informative picture of the decisions we make after we consider the effects of uncertainty and risk.

This is not the end of the analysis, though. There very clearly exists some opportunity to do better than less than $0. The question is this: What can we do to manipulate future outcomes to our benefit? Before we can answer that question, though, we need to know what we most likely should manipulate.

The Waterfall Chart of the Pro Forma Present Values

A waterfall chart, like the one shown in Figure 4-6, is useful for showing the relative cumulative contribution that the present value of the revenue (or accrued benefits) and cost elements make to the NPV.
Figure 4-6

The pro forma waterfall chart shows how the present values of the major pro forma line items contribute cumulatively to the NPV

To create a waterfall chart, we first need to extract the appropriate benefit and cost elements from the pro forma array.

# Extract the rows from the pro forma for the waterfall chart.
waterfall.rows <- c(2, 3, 5, 6, 10, 13)
waterfall.headers <- pro.forma.headers[waterfall.rows]
wf.pro.forma <- pro.forma[waterfall.rows, ]

Next, we find the present value of these elements in the same way we found the NPV of the cash flow.

# Find the present value of the extracted pro forma elements.
pv.wf.pro.forma <- rep(0, length(waterfall.rows))
for (w in 1:length(waterfall.rows)) {
 pv.wf.pro.forma[w] <- sum(wf.pro.forma[w, ] * discount.factors)
}

We create a cumulative sum of the present values.

cum.pv.wf.pro.forma1 <- cumsum(pv.wf.pro.forma)

We duplicate the prior vector, shifting all the values one position to the right by prepending a zero and dropping the last value.

cum.pv.wf.pro.forma2 <- c(0, cum.pv.wf.pro.forma1[1:(length(waterfall.rows) - 1)])

Now, to plot the floating columns of the waterfall chart, we need two vectors, one to capture the high position of each bar, and the other to capture the low position of each bar. We can do this by finding the pairwise maximum of the last two vectors, then by finding the pairwise minimum of the last two vectors.

wf.high <- pmax(cum.pv.wf.pro.forma1, cum.pv.wf.pro.forma2)
wf.low <- pmin(cum.pv.wf.pro.forma1, cum.pv.wf.pro.forma2)

Finally, we assign these last two vectors to an array.

waterfall <- array(0, c(2, length(waterfall.rows)))
waterfall[1, ] <- wf.high
waterfall[2, ] <- wf.low
colnames(waterfall) <- waterfall.headers
rownames(waterfall) <- c("high", "low")

We then plot the waterfall with the following code by co-opting R's box plot function.

# Plot the waterfall.
boxplot(
  waterfall / 1000,
  data = waterfall / 1000,
  notch = FALSE,
  main = "Waterfall Chart",
  xlab = waterfall.headers,
  ylab = "$000",
  col = c("blue", rep("red", 5))
)

If we want to improve profit by controlling costs, variable cost should get our first attention, followed by GS&A and CAPEX. However, don't assume that reducing these items arithmetically necessarily leads to increased profit, as there could be systematic effects that relate, say, capital spent now to operating costs incurred later. In a real analysis, this idea should be explored in a more detailed model for completeness, but for the purposes of this tutorial we won't consider it further.

The Tornado Sensitivity Chart

For our initial pass of analysis in Chapter 2, the deterministic sensitivity analysis provided some clues about which variables deserve our attention, either by manipulating them to our benefit or by developing a mitigation plan to prevent an undesirable outcome in the objective function. However, this assumes that the variables are under our control, that they are essentially decision variables. During the initial investigation and planning of a business opportunity, this condition of control is rarely the case.

As we’ve already recognized that the variables’ actual future outcomes are most likely uncertain, in Chapter 3 we represented the variables in the model with distributions. Each distribution represents our conception of what the range of those outcomes could be with associated probabilities for intervals across the range. (This implies that a ±x% relative change in any variable might either be highly improbable or just a small fraction of the most likely range of actual behavior.)

What we need is a way to prioritize our attention on the uncertain variables, based on our current information about them and on how strongly their likely range of behavior might affect the average value of the objective function, the NPV. So, before we go seeking ways to control variables on the guidance of the deterministic sensitivity analysis, we first need to understand whether the probable range of outcomes for any variable is significant enough to matter in making a clear decision of any kind.

Tornado sensitivity analysis works by observing how much the average NPV changes in response to the 80th percentile range of each variable, taken one variable at a time. We choose a variable and set it to its p10 value, then record the effect on the average NPV. Next we set the same variable to its p90 value and record the effect again. During both of these iterations, we let all the other variables run according to their defined distributions. Repeating this process for each variable, we observe how much each one influences the objective function through both its functional strength and its likelihood of occurrence.

We can make this observation easily with a floating bar chart, where each bar is assigned to a particular variable. The width of a bar runs from the low response of the average NPV to the high response for the given variable. The bars are ordered such that the variable with the widest bar goes at the top and the narrowest is at the bottom. The declining widths of the bars give the chart its distinctive tornado, or funnel, shape.

To set up the code for this routine, we begin just as we did in the original deterministic sensitivity analysis by duplicating and modifying the base R code file (Risk_Model.R), then giving it a name like Risk_Model_Sensitivity.R. Our new file starts out looking just like the original.

# Read source data and function files. Modify the path names to match your
# directory structure and file names.
source("/Applications/R/RProjects/BizSimWithR/data/global_assumptions.R")
d.data <- read.csv("/Applications/R/RProjects/BizSimWithR/data/risk_assumptions.csv")
source("/Applications/R/RProjects/BizSimWithR/libraries/My_Functions.R")
# Slice the values from data frame d.data.
d.vals <- d.data[, 2:4]

Now we add some additional lines to set up the process that will loop through our variables, and we create an initialized array to store the values of the NPV on each sensitivity iteration.

sens.range <- c(0.1, 0.9)
len.d.vals <- length(d.vals[, 1])
len.sens.range <- length(sens.range)
npv.sens <- array(0, c(len.d.vals, len.sens.range))

Just as before, we assign the appropriate distribution to each uncertain assumption and simulate them.

 # Assign values to variables using appropriate distributions.
p1.capex <- CalcBrownJohnson(0, d.vals[1, 1], d.vals[1, 2],
  d.vals[1, 3], , kSampsize)
p1.dur <- round(CalcBrownJohnson(1, d.vals[2, 1], d.vals[2, 2],
  d.vals[2, 3], , kSampsize), 0)
p2.capex <- CalcBrownJohnson(0, d.vals[3, 1], d.vals[3, 2],
  d.vals[3, 3], , kSampsize)
p2.dur <- round(CalcBrownJohnson(1, d.vals[4, 1], d.vals[4, 2],
  d.vals[4, 3], , kSampsize), 0)
maint.capex <- CalcBrownJohnson(0, d.vals[5, 1], d.vals[5, 2],
  d.vals[5, 3], , kSampsize)
fixed.prod.cost <- CalcBrownJohnson(0, d.vals[6, 1], d.vals[6, 2],
  d.vals[6, 3], , kSampsize)
prod.cost.escal <- CalcBrownJohnson( , d.vals[7, 1], d.vals[7, 2],
  d.vals[7, 3], , kSampsize)
var.prod.cost <- CalcBrownJohnson(0, d.vals[8, 1], d.vals[8, 2],
  d.vals[8, 3], , kSampsize)
var.cost.redux <- CalcBrownJohnson( , d.vals[9, 1], d.vals[9, 2],
  d.vals[9, 3], , kSampsize)
gsa.rate <- CalcBrownJohnson(0, d.vals[10, 1], d.vals[10, 2],
  d.vals[10, 3], 100, kSampsize)
time.to.peak.sales <- round(CalcBrownJohnson(1, d.vals[11, 1],
  d.vals[11, 2], d.vals[11, 3], ,kSampsize), 0)
mkt.demand <- CalcBrownJohnson(0, d.vals[12, 1], d.vals[12, 2],
  d.vals[12, 3], ,kSampsize)
price <- CalcBrownJohnson(0, d.vals[13, 1], d.vals[13, 2],
  d.vals[13, 3], ,kSampsize)
rr.comes.to.market <- rbinom(kSampsize, 1, d.vals[14, 2])
rr.time.to.market <- round(CalcBrownJohnson(1, d.vals[15, 1],
  d.vals[15, 2], d.vals[15, 3], ,kSampsize), 0)
early.market.share <- CalcBrownJohnson(0, d.vals[16, 1],
  d.vals[16, 2], d.vals[16, 3], 100, kSampsize)
late.market.share <- CalcBrownJohnson(0, d.vals[17, 1],
  d.vals[17, 2], d.vals[17, 3], 100,kSampsize)
price.redux <- CalcBrownJohnson(0, d.vals[18, 1], d.vals[18, 2],
  d.vals[18, 3], ,kSampsize)

Next, we collect our simulated assumptions into an array that we iterate through as we sequentially replace each variable’s simulated samples with the p10 and p90 quantile values.

 d.vals.vect <- c(
  p1.capex,
  p1.dur,
  p2.capex,
  p2.dur,
  maint.capex,
  fixed.prod.cost,
  prod.cost.escal,
  var.prod.cost,
  var.cost.redux,
  gsa.rate,
  time.to.peak.sales,
  mkt.demand,
  price,
  rr.comes.to.market,
  rr.time.to.market,
  early.market.share,
  late.market.share,
  price.redux
)
d.vals.temp <- array(d.vals.vect, dim=c(kSampsize, len.d.vals))

The array d.vals.temp holds the original set of simulated samples. For the run of the sensitivity analysis, we borrow values from this array and place them in a duplicate array.

d.vals.temp2 <- d.vals.temp

We also need an array that contains only the p10 and p90 assumption parameters. We extract those from the d.vals array with the following:

d.vals2 <- d.vals[, -2]

Recall that the d.vals array holds three columns of assumption parameters for each variable: the p10, p50, and p90 values.

The negative subscript [, -2] tells R to remove the second column from d.vals, which is the p50 column.
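If the negative subscript is unfamiliar, here is a toy illustration (not part of the model) of dropping a column by negative index:

# A small data frame with three columns.
toy <- data.frame(p10 = c(1, 2), p50 = c(3, 4), p90 = c(5, 6))
toy[, -2]    # drops the second (p50) column, leaving p10 and p90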

Now, to summarize again, our R code will loop through the array of borrowed simulated values associated with each variable. When a specific variable is selected for testing in the loop, the simulated samples in that variable's column will be replaced first with its p10 value from d.vals2, and then with its p90 value. Once the code has run its two tests at the sensitivity points for a given variable, it will restore the variable's original samples from the d.vals.temp array to the d.vals.temp2 array and move on to the next variable to repeat the process.

Before we move on, let’s think about the way we set up the deterministic sensitivity analysis. In that case, we stepped across the sensitivity points defined for the low, median (p50), and high values for our variables.

The central value of the sensitivity analysis was always the same because we were running a deterministic model. In stochastic systems, like the one we’re representing in our uncertain model, the central value of the objective function (i.e., the mean) we find in the base uncertainty model will not necessarily be a function of the p50s of the assumptions’ parameters. We therefore can’t just find the center of the sensitivity analysis here by setting all the variables to their p50s in one step.
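A tiny, hypothetical example (not part of the model) shows why: for a nonlinear function of a skewed, uncertain input, evaluating the function at the input's p50 does not generally reproduce the mean of the function's output.

# Hypothetical illustration: a skewed input pushed through a nonlinear function.
set.seed(42)
x <- rlnorm(10000, meanlog = 0, sdlog = 1)
mean(x^2)       # mean of the output, roughly exp(2), about 7.4
median(x)^2     # the output at the input's p50, roughly 1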

Another problem also arises from the simulation error we discussed in the last chapter. We likely will not get the exact same mean NPV from the samples simulated in the base uncertainty model as we would get from the samples simulated here in this instance of the sensitivity model, although they will probably be close for a large enough sample set. The way around this conundrum is to run the model twice with the same simulation samples, once to find the mean NPV, then once more to find the sensitivity of that mean NPV to the range in the assumptions. By doing this, we maintain the central value around which the sensitivity behaves.

How do we do this? By placing our base model in one big function that has one parameter: the data array that contains the required samples for a given run of the model. We start the function at the point in our base uncertainty model where we extracted the values from our original data parameter array, simulated their samples with the appropriate distribution, and then assigned them to meaningful variable names. Our function won't need to resimulate the samples for each variable, though. It will simply use the samples we've already generated and placed in an indexed array. The following code represents the base model converted to a function that returns the samples for the NPV calculation.

CalcBizSim <- function(x) {
# x is the data array that contains the presimulated samples for
# each variable.
       p1.capex <- x[, 1]
       p1.dur <- x[, 2]
       p2.capex <- x[, 3]
       p2.dur <- x[, 4]
       maint.capex <- x[, 5]
       fixed.prod.cost <- x[, 6]
       prod.cost.escal <- x[, 7]
       var.prod.cost <- x[, 8]
       var.cost.redux <- x[, 9]
       gsa.rate <- x[, 10]
       time.to.peak.sales <- x[, 11]
       mkt.demand <- x[, 12]
       price <- x[, 13]
       rr.comes.to.market <- x[, 14]
       rr.time.to.market <- x[, 15]
       early.market.share <- x[, 16]
       late.market.share <- x[, 17]
       price.redux <- x[, 18]
# CAPEX module
phase <- t(sapply(run, function(r) (year <= p1.dur[r]) * 1 +
  (year > p1.dur[r] & year <= (p1.dur[r] + p2.dur[r])) * 2 +
  (year > (p1.dur[r] + p2.dur[r])) *3))
capex <- t(sapply(run, function(r) (phase[r, ] == 1) * p1.capex[r]/p1.dur[r] +
  (phase[r, ] == 2) * p2.capex[r]/p2.dur[r] +
  (phase[r, ] == 3) * maint.capex[r]))
# Depreciation module
depr.matrix <- array(sapply(run, function(r) sapply(year, function(y)
ifelse(y <= p1.dur[r] & year>0, 0,
  ifelse(y == (p1.dur[r]+1) & year<y+kDeprPer & year>=y, p1.capex[r] / kDeprPer,
    ifelse((year >= y) & (year < (y + kDeprPer)), capex[r, y - 1] / kDeprPer, 0)
    )
  )
)), dim=c(kHorizon, kHorizon, kSampsize))
depr <- t(sapply(run, function(r) sapply(year, function(y)
  sum(depr.matrix[y, , r]))))
# Competition module
market.share <- (rr.comes.to.market ==1) * ((rr.time.to.market <= p1.dur) *
  early.market.share/100 + (rr.time.to.market > p1.dur) *
    late.market.share/100 ) +
  (rr.comes.to.market == 0)*1
# Sales module
mkt.adoption <- t(sapply(run, function(r) market.share[r] *
  pmin(cumsum(phase[r, ] > 1) / time.to.peak.sales[r], 1)))
sales <- t(sapply(run, function(r) mkt.adoption[r, ] * mkt.demand[r] *
  1000 * 2000))
revenue <- t(sapply(run, function(r) sales[r, ] * price[r] *
  (1 - rr.comes.to.market[r] * price.redux[r]/100)))
# OPEX module
fixed.cost <- t(sapply(run, function(r) (phase[r, ] > 1) * fixed.prod.cost[r] *
  (1 + prod.cost.escal[r]/100)^(year - p1.dur[r] -1)))
var.cost <- t(sapply(run, function(r) var.prod.cost[r] *
  (1 - var.cost.redux[r]/100)^(year - p1.dur[r] -1 ) * sales[r, ]))
gsa <- t(sapply(run, function(r) (gsa.rate[r]/100) * revenue[r, ]))
opex <- fixed.cost + var.cost
# Value
gross.profit <- revenue - gsa
op.profit.before.tax <- gross.profit - opex - depr
tax <- op.profit.before.tax * kTaxRate/100
op.profit.after.tax <- op.profit.before.tax - tax
cash.flow <- op.profit.after.tax + depr - capex
cum.cash.flow <- t(sapply(run, function(r) cumsum(cash.flow[r, ])))
# Following the convention for when payments are counted as occurring
# at the end of a time period.
discount.factors <- 1/(1 + kDiscountRate/100)^year
discounted.cash.flow <- t(sapply(run, function(r) cash.flow[r, ] *
  discount.factors))
npv <- sapply(run, function(r) sum(discounted.cash.flow[r, ]))
return(npv)
}

I recommend placing this function code in a separate .R file and then importing it at the beginning of the sensitivity analysis file using the source() function in the same manner as we did with the global assumptions and other functions.
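For example, if you save the function as CalcBizSim.R in the same libraries directory used earlier (the file name and path here are illustrative; adjust them to your own setup), the import looks like this:

# Import the base model function; modify the path to match your directory
# structure and file name.
source("/Applications/R/RProjects/BizSimWithR/libraries/CalcBizSim.R")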

To calculate the base mean NPV, we run CalcBizSim() once with the values we assigned to the d.vals.temp array.

base.mean.npv <- mean(CalcBizSim(d.vals.temp))

Then, to find the sensitivity of the base mean NPV to the range in our assumptions, we loop across the secondary array that contains our borrowed samples, replacing each variable's samples with a vector of the same length containing, in turn, the p10 and then the p90 value.

for (i in 1:len.d.vals) {
  for (k in 1:len.sens.range) {
    # For a given variable, replace its samples with a vector containing
    # each sensitivity endpoint.
    d.vals.temp2[, i] <- rep(d.vals2[i, k], kSampsize)
    # Calculate the mean NPV by calling the CalcBizSim() function.
    mean.npv <- mean(CalcBizSim(d.vals.temp2))
    # Insert the resultant mean NPV into an array that catalogs the
    # variation in the mean NPV by each variable's sensitivity points.
    npv.sens[i, k] <- mean.npv
  }
  # Restore the current variable's column with its original
  # simulated samples.
  d.vals.temp2[, i] <- d.vals.temp[, i]
}

The remainder of the code works just like the code that set up the graphical results for the deterministic sensitivity analysis.

# Label the npv.sens array with the variable names and sensitivity points.
var.names <- d.data$variable
rownames(npv.sens) <- var.names
colnames(npv.sens) <- sens.range
# Sets up the sensitivity array as deviations from the base mean NPV.
npv.sens.array <- array(0, c(len.d.vals, 2))
npv.sens.array[, 1] <- npv.sens[, 1] - base.mean.npv
npv.sens.array[, 2] <- npv.sens[, 2] - base.mean.npv
rownames(npv.sens.array) <- var.names
colnames(npv.sens.array) <- sens.range
# Calculates the rank order of the NPV sensitivity based on the
# absolute range caused by a given variable. The npv.sens.array
# is reindexed by this rank ordering for the bar plot.
npv.sens.rank <- order(abs(npv.sens.array[, 1] -
  npv.sens.array[, 2]), decreasing = FALSE)
ranked.npv.sens.array <- npv.sens.array[npv.sens.rank, ]
ranked.var.names <- var.names[npv.sens.rank]
rownames(ranked.npv.sens.array) <- ranked.var.names
# Plots the sensitivity array.
par(mai = c(1, 1.75, .5, .5))
barplot(
  t(ranked.npv.sens.array) / 1000,
  main = "NPV Sensitivity to
  Uncertainty Ranges",
  names.arg = ranked.var.names,
  col = "red",
  xlab = "NPV [$000]",
  beside = TRUE,
  horiz = TRUE,
  offset = base.mean.npv / 1000,
  las = 1,
  space = c(-1, 1),
  cex.names = 1
)

We now obtain a chart (Figure 4-7) very much like our deterministic sensitivity chart, except this one is based on the variation in our mean NPV caused by all the sources of variation and risk (and the bars are colored red). The chart tells us that the top five variables could easily turn a bad situation worse, but it also tells us that if we could harness the drivers behind the variation in each variable, effectively converting each of these uncertainties into a decision, we could create an immensely valuable opportunity (well, immensely so relative to the negative mean NPV we currently see).

One of the important features of this type of sensitivity analysis is that it usually reveals that not all uncertainties are as important as we likely thought they were. In fact, we often find that at least one uncertainty is much more important than we anticipated, a feature of a good model.
Figure 4-7

The tornado chart displays the sensitivity of the mean NPV to the 80th percentile range of each uncertainty

Although it is beyond the scope of this book, the next refinement in our analysis would be to find strategies that would potentially allow us to capture the upper end of each of those upper bars. The cumulative sum of the net value of reaching those goals would show us the marginal contribution of the value of control on those top four variables. After some market and competitive reconnaissance and creative planning, we might even be able to find a legal way to deter RoadRunner from coming to market at all at little to no cost to us.
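A very rough first pass at that refinement can be sketched from the tornado results we already have: treat the favorable end of each variable's bar, measured from the base mean NPV, as an optimistic upper bound on what controlling that variable alone could be worth. This sketch ignores interactions among variables and the cost of exercising control, and it assumes the npv.sens.array object from the code above is still in memory.

# Optimistic upper bound on the value of controlling each variable alone:
# the gap between the favorable end of its bar and the base mean NPV.
upside <- apply(npv.sens.array, 1, max)
value.of.control <- pmax(upside, 0)
sort(value.of.control, decreasing = TRUE) / 1000    # in $000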

You can see the uninterrupted source code for this procedure in Appendix B.

Closing Comments

As I pointed out in Chapter 1, there are many ways the case study presented here could have reflected more complexity in the business environment, or the way the revenue and specific cost elements were represented could have taken completely different forms. My goal, though, in presenting this case study as a tutorial was to provide a platform for the following:
  1. Thinking about R as a simulation alternative to spreadsheets.

  2. Extending R from its common orientation of statistical analysis of empirical data toward an orientation more characterized as probabilistic reasoning.

The framework that we developed represented the mechanical structure for processing information we acquire through the reasoning process, but it by no means represents a complete analysis. We based our analysis on one design structure for engineering, market development, and operations. No one initial business solution can be thorough and comprehensive enough to address all the best (and worst) ways to allocate resources in the pursuit of success.

For our business case analysis to be truly informative, we must consider multiple alternatives for achieving the same goal, simply because the process of value creation is pathway dependent. Each set of pathways faces opportunity costs that must be thoroughly explored before we can honestly say that we have performed our fiduciary responsibilities. In this tutorial, we considered only one pathway, one decision strategy. To truly learn what the future has to offer, we need to contrast and compare a good handful of thematically different ways we could take advantage of those potential offerings, measuring the risks and benefits of each, and discarding the ones that present the least opportunity to create value. We'll explore this alternative generation approach in Part 2.

This approach is the scientific process of thinking: We develop hypotheses about how we can create value in a given opportunity context, explore the conditioning effects for what imparts greater utility to those hypotheses, and discard the hypotheses that demonstrate weak explanatory power for creating value. This, in my opinion, is the best kind of process-driven, data-supported decision making.
