PROBLEM-SOLVING
TOOLS

Once priorities for the organization are established using the strategy and planning tools highlighted earlier, the organization will then engage in improvement initiatives. Six Sigma, in particular, offers a wealth of methods and tools to assist with problem solving. In fact, its primary aim is to structure problems and to provide methods of analysis for solving them. The primary problem-solving method is DMAIC, described first in this chapter. Other tools include cause-and-effect diagrams, five-why analysis, and brainstorming.

DMAIC (DEFINE-MEASURE-ANALYZE-IMPROVE-CONTROL)

Many refer to DMAIC as the roadmap for Six Sigma. Without question, it is the “backbone” methodology applied in Six Sigma improvement efforts. Some may question why DMAIC is recognized as a problem-solving tool rather than a strategy and planning tool given its comprehensive scope. Put simply, DMAIC is a means to an end. It does not necessarily determine the end, but rather provides, as indicated, the roadmap. The vision for a DMAIC endeavor is developed using the voice of the customer and voice of the business tools. The outcome of these strategic analyses should recognize the opportunities for improvement, and it is in pursuing these opportunities that the DMAIC method is employed. In addition, a focus on quality achieved through variation reduction can be a core element of a company’s philosophy and strategy.

Figure 21.1. The DMAIC Method.

The stages of the DMAIC process are highlighted in Figure 21.1. Each step is reviewed next.*

Define

The strategy and planning tools described in Chapter 20 provide the logistician with a multitude of improvement opportunities. Voice of the customer highlights customer needs, voice of the business features the needs and the constraints of the company, value stream mapping illuminates the wastes, and the XY matrix helps to prioritize projects. The “Define” stage of DMAIC picks up where the XY matrix leaves off by defining the problem, selecting the project, and scoping the project. First, the problem must be stated clearly and succinctly. In turn, the project’s purpose, scope, team members, resource requirements, and potential constraints must be delineated. It should be clear to everyone involved what is at stake, how and when the mission of the project is to be achieved, and who is responsible for what actions. Again, voice of the customer, voice of the business, and value stream mapping provide critical input in this stage of the process.

Measure

Precision in defining the problem should facilitate the next stage, measurement. Measurement refers to assessment of the current state. Should the focal problem for a DMAIC project be “improved reliability in delivery,” transit time would serve as the primary measure. True to Six Sigma’s concern with variation reduction, one would look not only at the average transit time but also the variance around it. We would be concerned with the accuracy in measurement as well. How is “transit time” determined? When does the clock start and stop? Who is currently measuring transit time? Can we trust the timekeeper? Questions of this sort come into play in this stage of the DMAIC process. Should multiple measures be necessary to assess a particular area of performance fully, all measures should be reviewed with the same scrutiny. In addition, they should be prioritized so that everyone knows which measures are most important. Common areas of measurement include cost, time, and quality.
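
To make this concrete, the sketch below uses hypothetical transit times (Python) to compute the average alongside two dispersion measures, the sample standard deviation and the coefficient of variation, that a Six Sigma practitioner would examine with the mean:

```python
import statistics

# Hypothetical transit times (hours) for ten recent deliveries
transit_times = [46, 50, 43, 58, 47, 52, 61, 45, 49, 55]

mean_time = statistics.mean(transit_times)   # average transit time
std_dev = statistics.stdev(transit_times)    # sample standard deviation
cv = std_dev / mean_time                     # coefficient of variation

print(f"mean = {mean_time:.1f} h, std dev = {std_dev:.1f} h, CV = {cv:.2f}")
```

Two carriers with the same average transit time can differ sharply in standard deviation; it is the dispersion, not the mean, that reveals the reliability problem.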

The best measures will prove to be those that are:

Quantifiable

Easily measured

Robust

Reliable

Valid

In the absence of valid measures, we are sure to experience “GIGO” (garbage in/garbage out), though we are unlikely to realize that bad information is driving our decision making. As noted in previous discussions, too often we become complacent with measurement, relying on measures for which we have long-standing history that enables the company to track “progress.” However, careful analysis often suggests that we are measuring the wrong things or measuring the right things in the wrong way. The DMAIC process provides the perfect opportunity to correct errors in measurement. Though it can sometimes lead to separation anxiety, we must part with measures that act as bad compasses, telling us consistently to walk in the wrong direction.

Analyze

Given a clear statement of the problem and identification of focal measures, the DMAIC process proceeds with the “Analyze” step. This is where DMAIC borrows significantly from the scientific method in its pursuit of truth — to find what lies at the root of the problem that is leading to dissatisfied customers, unnecessary costs, dwindling margins, and frustration. The scientific method guides the researcher through three basic steps:

  1. Observation of a phenomenon or a group of phenomena

  2. Development of hypotheses that seek to explain and predict the phenomenon or phenomena

  3. Testing of the hypotheses for causal relationships

These are the same basic steps employed in the analysis stage of DMAIC.

Six Sigma’s borrowing from the community of scientists does not end with application of the scientific method itself, however, for it is in analyzing problems that Six Sigma practitioners look, act, and sound most like physicists, chemists, and statisticians. Six Sigma commonly employs tools like Design of Experiments to understand the cause-effect relationships among two or more factors, much like the biologist would test the effect of light on plant species in a lab setting. The logistician might examine the variance in delivery reliability by controlling for different factors associated with shipments of interest including, but not limited to, the way in which the shipment is tendered, dispatched, and scheduled; the way in which the order is physically prepared, staged, and loaded; the carriers and drivers used to fulfill the delivery; the time of day for pickup and delivery; weather conditions; and the processing of documentation associated with the shipment. Multiple factors may be to blame for the inconsistency of delivery and some factors may interact with others to complicate and worsen the problem.

Inferential statistics are often tapped to provide critical analysis of observations. Parametric techniques such as analysis of variance and regression analysis, along with nonparametric tools such as chi-square tests, are used to generalize findings from a sample of observations. The purpose of all methods is, again, to better understand the phenomena at work such that the cause-and-effect relationship can be realigned to provide improved outcomes: satisfied customers, minimized costs, healthy margins, and harmonious operations.

Improve

Unfortunately, recognizing the root cause of the problem is not sufficient for correcting it. Action must be taken. That is the concern of the “Improve” stage of DMAIC. Another way of looking at this stage is that it offers the opportunity for competitive advantage when many companies in an industry are staring at a common problem; it is the firm that deals with the problem swiftly and most effectively that achieves valued differentiation. Being the first to solve the problem does not count for much unless the solution is acted on.

Making effective change is not an easy thing for any organization. Most good ideas never see the light of day given the challenges they face in this stage of implementation. And what is more pathetic than a good idea that fails to be recognized and implemented? The Lean Six Sigma organization is less prone to this disconnect between good ideas and good implementation because bringing effective ideas to the fore of the organization and pursuing them relentlessly is what Lean Six Sigma thinking is all about. It does not start with the good idea itself; it starts with discipline, developing a culture that relishes opportunities for improvement. Key to establishing a culture that is flexible and poised for opportunities is an orientation favoring teamwork. An organization with individuals not only interested in but, in fact, vested in the success of the whole is far more likely to meet change with favor than one that forces change on resistant bands of employees. That much said, teamwork alone can be misguided in the absence of leadership. Therefore, vision must guide the team effort for anything worthwhile to be accomplished.

Once the culture for embracing change is in place (and this is no easy feat), the opportunities themselves must be seized using a structured approach. The approach involves open communication of what is at stake, how the improvement will be managed, and what is expected of all team members. Not all team members will necessarily be involved in every improvement opportunity, yet providing open communication to everyone establishes “communal understanding” and an environment of support, while minimizing suspicions that inevitably surface when efforts are made to manage change in a cloistered or underhanded manner.

Along with the communication of the change and expected contribution of team members is communication of the critical measures used to judge the success of the effort and to assess the contribution of each individual. The individual measures hold team members accountable for contribution, though the measures must correlate directly with the bigger goal of the change. Too often, measures ensure the busy-ness of people, but the efforts of the people do not translate into meaningful productivity — actions that fulfill the organizational vision. Lean Six Sigma Logistics and the DMAIC process embrace the belief that every action taken by every team member contributes to value in the eyes of the customer and, in turn, success for the company. Improvement efforts are pursued to rid the wastes and distractions that get in the way of meaningful productivity.

One key point relative to the Improve stage is that Six Sigma in and of itself does not provide the actual solution to the problem. That is, the Six Sigma DMAIC model provides a problem-solving method, but we need to rely on our Lean tools in order to generate possible solutions to the problem.

Control

Despite the challenges presented in bringing a good idea to light in the “Improve” stage of DMAIC, what can prove even more challenging is sustaining the effort. “Control” is the final stage of the DMAIC process, and it focuses on this aspect of improvement projects: avoiding complacency when the project is going well and goals are being met and taking corrective action when either the project strays or the environment changes. Clearly, elements of sustained or corrective action should be part of the improvement initiative from its outset, though this might be regarded as pessimism by some. Despite best efforts and well-established plans, the team must be ready to adapt to the situation. Robust, flexible processes will be those that prove most adept at accommodating change. Processes should be designed such that they can meet not only the immediate challenges of day-to-day fluctuation but also the dramatic or perhaps unthinkable challenges that might be revealed. The Lean Six Sigma organization must be ready for anything associated with these most critical aspects of the service.

Primary considerations in this phase of the DMAIC process center on issues of motivation and measurement. The Lean Six Sigma organization must be sure that the right performance is being measured and recognized. Performance that fails to correlate with the desired outcomes is waste. Unfortunately, inconsistency between expected and desired outcomes often does not become apparent until the improvement project is under way. It is on this basis that many companies engage in a limited trial with each DMAIC project before full-scale rollout. The trial allows for a more comprehensive understanding of the issues involved such that any misjudgments can be corrected before they derail the larger effort.

In sum, the DMAIC method is the backbone of Six Sigma methodology, offering a roadmap to improvement projects from conception to completion. As noted, critical to any effort to bring about meaningful change are the cultural attributes of discipline and teamwork. The DMAIC process will only lead to frustrated effort in the absence of these organizational prerequisites. Given the comprehensive nature of the DMAIC method, subsequent tools in this section offer insights that support the “Analyze” stage.

CAUSAL ANALYSIS TOOLS

Many tools are available in the Lean and Six Sigma domains to assist with root cause analysis. Some are very basic and can be conducted with little training and no formal data collection. These include brainstorming, cause-and-effect diagrams, and five-why analysis. These tools provide preliminary analysis but, perhaps more importantly, serve as a starting point for discussion and subsequent analysis. Other tools are more technical and quantitative, calling for extensive data collection to feed the analysis. In return, these tools offer deeper insights that should be free of the biases often found in qualitative approaches. Tools that belong to this more technical category include Design of Experiments and inferential statistical methods. Both the qualitative and quantitative types of tools are reviewed briefly below.

Brainstorming

Brainstorming offers a general-purpose way to initiate conversation and gather ideas. Brainstorming sessions are likely to occur throughout the DMAIC process as a way to not only gather ideas but also get team members involved in problem recognition and resolution. Though the sessions should encourage free and open conveyance of ideas, they need not be devoid of structure. In fact, brainstorming sessions that lack structure may prove confusing and unproductive, defeating the improvement effort before it begins. First impressions often serve as lasting impressions. If the kickoff session for an improvement initiative is characterized by chaos rather than order, all confidence in the effort may be lost before it ever has a chance to be built.

One way to structure a brainstorming session is to ask everyone to focus on a single question or problem and then gather the input of participants in sequence around the table. This prevents a free-for-all scene that is commonly experienced in brainstorming sessions. Ideas should be captured on a white board with as little paraphrasing as possible. Ideas that are not relevant to the focal question or beyond the purview of the current scenario should be captured as well in a “parking lot” list. Parking lot items can be revisited later, as necessary. Ideas are gathered until no one has anything new to offer. Once ideas are documented, the leader might engage the group in a “mind-mapping” exercise, a technique used to organize the ideas by displaying them visually and drawing the interconnections that exist among the set. Mind maps help to provide synthesis to the brainstorming exercise.*

Cause-and-Effect Diagrams

Cause-and-effect diagrams (sometimes referred to as fishbone diagrams or Ishikawa diagrams) provide a structured, though qualitative, approach to problem solving. The main purpose of these diagrams is to generate discussion that can close in on the root cause or causes of a focal problem. The cause-and-effect diagram often provides structure to causal analysis brainstorming and serves as a good starting point for deeper analysis. Rarely is the diagram sufficient in and of itself to justify action. Rather, the diagram is a preliminary analytical tool that narrows the scope for subsequent analysis.

Common categories to look toward as potential sources of root causes include people, process, technology, equipment, material, and environment. Though these categories are often used in a manufacturing environment, they find application in logistics as well. Figure 21.2 organizes possible sources of customer dissatisfaction with the ferry service described in Section 2. Discussion should focus on a specific question, such as “Why is the ferry service so

Figure 21.2. Cause-and-Effect Diagram for Ferry Service.

unreliable?” A brainstorming session conducted around this question might generate the list of possible causes depicted in the diagram.
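
For later analysis, the diagram’s content can be captured as a simple mapping from category to candidate causes. The Python sketch below uses hypothetical entries for the ferry example:

```python
# A cause-and-effect diagram captured as a mapping from category to
# candidate causes (hypothetical entries for the ferry example)
fishbone = {
    "People":      ["operator skill varies", "high turnover"],
    "Process":     ["no standard boarding procedure"],
    "Equipment":   ["aging boat engines"],
    "Environment": ["weather", "tidal conditions"],
}

problem = "Why is the ferry service so unreliable?"

for category, causes in fishbone.items():
    for cause in causes:
        print(f"{problem} -> {category}: {cause}")
```

Keeping the causes in a structure like this makes it easy to tally, sort, and revisit them as deeper analysis narrows the list.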

Like all forms of brainstorming, cause-and-effect diagrams are qualitative in nature and rely on the imagination of the team members involved to populate the diagram. Two caveats of cause-and-effect diagrams are the prospects of not working on the real problem and failing to identify the true cause of a problem. Yet another concern associated with diagram efforts is the fact that the possible causes identified in the diagram are not necessarily ranked as likely culprits of the problem. These concerns reiterate the need for additional analysis.

Five-Why Analysis

Five-why analysis is another technique that is commonly used to ascertain the root cause of a problem. The belief is that by focusing on a key problem, we can get to its core by asking the question “Why?” in succession. By asking “Why?” up to five times, one can usually reach the essence of the problem, at which point the root cause should be readily apparent. The five-why approach ensures deeper investigation than that typically associated with fishbone diagrams.

A favorite tool of Lean practitioners, five-why analysis is a convenient way to explore cause-and-effect relationships. We can gain a better appreciation for how it works by returning to our ferry example and its unreliable service. Using the cause-and-effect diagram in Figure 21.2, let’s assume that the consensus opinion is that service varies most based on personnel. The line of inquiry might proceed as follows:

Why #1: Why is the ferry service so unreliable?

Response: Because different people are working at different times and the skill levels vary.

Why #2: Why do the skill levels vary?

Response: Some operators are veterans, but many others are newcomers; it takes time to learn the job.

Why #3: Why do we have so many newcomers?

Response: Turnover has been quite high among ferry operators.

Why #4: Why has turnover been high among operators?

Response: Ferry operators have complained about the long hours and erratic shift changes.

Why #5: Why are operators asked to work long hours and erratic shifts?

Response: Poor scheduling has led to unusually long hours and demands for frequent shift changes.

What is interesting to note from this line of inquiry is that the response to the fifth “Why?” points to a potential root cause that was not identified in the cause-and-effect diagram. While poor scheduling is probably not entirely to blame for unreliable service and dissatisfied customers, it seems like a viable problem to tackle in the near term. Can the ferry operator expect service reliability to close in on perfection by making this one simple change? Probably not, but marked improvement should be expected as the cause-effect chain operates in reverse of the five-why questioning: Improved scheduling should lead to more reasonable, steady hours, which should reduce turnover, which should eliminate the need for newcomers, which should provide a more consistent level of experience among “veteran” operators, which should improve the reliability of service. You get the idea.
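
The five-why chain above can be recorded as a simple ordered list, which also makes the reverse reading explicit. The Python sketch below is purely illustrative:

```python
# The ferry five-why chain, symptom first and candidate root cause last
why_chain = [
    "unreliable ferry service",
    "varying operator skill levels",
    "many newcomers on the job",
    "high turnover among operators",
    "long hours and erratic shift changes",
    "poor scheduling",  # candidate root cause
]

# Walking the chain forward asks "why?"; read backward, it predicts how
# fixing the root cause should propagate up to the original symptom.
for effect, cause in zip(why_chain, why_chain[1:]):
    print(f"Why {effect}? Because of {cause}.")
```
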

Something that should not be lost on the ferry service provider, and should not be lost on anyone pursuing Lean Six Sigma Logistics, is that a demonstrably inferior service offering is unlikely to attract significant volumes of new business in the presence of a superior alternative. That is, even with improved reliability in service, the ferry cannot reasonably expect to steal customers away from the bridge, at least not under normal circumstances. This is why companies are challenged to identify the best way to serve customers and not simply improve existing, conventional means. This calls for out-of-the-box thinking that can be spurred by the causal analysis tools presented here, if the company encourages team members to avoid “captive thinking.”

Design of Experiments

As previously indicated, the qualitative causal analysis tools like brainstorming, cause-and-effect diagrams, and five-why analysis are excellent ways to help define a focal problem or initiate analysis, but they are limited in the depth that they can provide. Their value is tied to bringing the parties together for the sake of further investigation of problems and highlighting likely candidates for root causes. Subsequent, in-depth analysis might include Design of Experiments (DOE) methods described earlier. DOE can provide an unbiased, empirical method for root cause examination.

In technical terms, DOE is based on the controlled isolation of cause-and-effect relationships. Experiments involve applying carefully crafted manipulations to randomly assigned subsets of a sample. In doing so, the researcher can observe how changes to the input factors result in different outputs. By then using inferential statistical methods (described next), the researcher tries to generalize the observed findings to a broader set of circumstances. This basic approach to experiments is used widely in the physical and social sciences.
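
As a minimal illustration of this controlled approach, the sketch below works through a hypothetical 2 × 2 full-factorial design on transit time, with carrier and pickup window as the manipulated factors (all values are invented; the effects are computed in the standard coded-run fashion):

```python
# A 2x2 full-factorial sketch: factor levels are coded -1/+1 and each
# run records the mean transit time (hours) observed for that treatment.
runs = [
    # (carrier, pickup, transit time)
    (-1, -1, 44.0),  # carrier A, AM pickup
    (+1, -1, 50.0),  # carrier B, AM pickup
    (-1, +1, 46.0),  # carrier A, PM pickup
    (+1, +1, 60.0),  # carrier B, PM pickup
]

half = len(runs) / 2  # each factor level appears in half the runs

# Main effects: average response at +1 minus average response at -1
carrier_effect = sum(c * y for c, p, y in runs) / half
pickup_effect = sum(p * y for c, p, y in runs) / half
interaction = sum(c * p * y for c, p, y in runs) / half

print(f"carrier: {carrier_effect:+.1f} h, pickup: {pickup_effect:+.1f} h, "
      f"interaction: {interaction:+.1f} h")
```

Here the positive interaction term signals that carrier B suffers disproportionately from PM pickups, exactly the kind of compounding effect described above.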

In recent years, Six Sigma practitioners have embraced DOE as a critical element in the DMAIC process. Its power as an analytical tool has gained wide acceptance among operations-oriented personnel, many of whom have relied historically on heuristic (nonquantitative) methods of analysis. Given the precision with which one can determine cause-and-effect relationships through direct observation of controlled conditions, people are cracking open those long-forgotten yet trusty textbooks on research design and statistics to learn (or relearn in some instances) how tried-and-true techniques like experiments can help to solve everyday problems. While a thorough discussion of DOE is beyond the scope of our current effort, worthwhile references are available.*

  

Inferential Statistics

Coupled with the widespread adoption of technical research methods like DOE is the common usage of inferential statistics. Six Sigma, with its pursuit of variance reduction, is to credit for bringing statistical methods to prominence in the analysis of everyday operations. In order to understand variance, one must realize that variance is simply an expression of dispersion. If everything happened the same way all the time, we would have no dispersion in our observations and, hence, no variance. The fact that variance does occur leads us to examine why it happens.

The basic premise of statistics is to use sample data to infer what is occurring in reality. We develop models composed of observable, measurable variables to represent inputs in our prediction of some output. A good model is an efficient model — one that has much to say about reality using a few predictors. Some models may use only a single predictor (univariate analysis), while others rely on multiple predictors (multivariate analysis) to explain what is happening in the output variable (or “dependent” variable). Regression analysis is the primary method used to assess the influence of predictor variables on a dependent variable.

Predictor variables may be either number based (continuous) or label based (categorical or discrete). Continuous variables are those that can be measured using numerical scales that reflect the degrees present in a condition of interest, whether that condition is time (hours), heat (degrees Fahrenheit), weight (pounds), happiness (1 = very unhappy, 7 = very happy), or any other condition that can be quantified in some way. Categorical variables refer to those that are not expressed in numbers, but rather in labels. Examples include gender (F = female, M = male), professional certification (yes, no), and work shift (first, second, third). A much-heralded tool in the Six Sigma tool kit is analysis of variance (ANOVA). ANOVA is a special case of regression where one or more categorical variables, rather than continuous variables, are used to predict behavior of an output variable. Both continuous and categorical variables may be combined in a multivariate regression model to predict a single output variable.

To illustrate the various kinds of regression analyses, let’s revisit our ferryboat example. The output variable of interest is ferry service reliability. Should we decide to use only a single predictor variable to explain variance in service reliability, we can elect to perform simple regression analysis by selecting a continuous variable (say, years of experience for the boat operator) or t-test analysis by using a single categorical variable (weather: good or bad). These relationships are expressed mathematically using the nomenclature of the output (Y) as a function of the input (X), or in our case:

  Service Reliability = f(Operator Experience)   or   Service Reliability = f(Weather)

When multiple predictors are added to the mix, we should see added explanatory power in the model. If the model is composed entirely of categorical predictors, we use ANOVA. If the model uses continuous variables or a combination of continuous and categorical variables, we use multiple regression. A multiple regression analysis might take on the appearance of the statement below:

  Service Reliability = f(Operator Experience, Boat Age, Weather, Time of Day, Season)

By including the different predictor variables in the model, we are stating that we expect each one to contribute to our understanding of the outcome variable (service reliability), that a relationship exists between each predictor and the outcome. When we expect a relationship to be present between two factors, we have a hypothesis. Our success in developing a good regression model is measured by finding predictors that appear to influence our output variable or, put another way, finding support for our hypotheses. Success is also measured by the amount of variance in the output variable that is explained by the predictors. This measure of variance explained is known as the R-square (R2) statistic or coefficient of determination. For example, if R2 equals 0.82, then 82 percent of the variance in the output variable is explained by the inputs.

Figure 21.3 illustrates the concept of explained variance by using Venn diagrams. The first diagram shows that approximately half of the variance in the output variable (ferry service reliability) is explained by the predictor variable (operator experience). The second diagram shows that even more variance (67 percent) is explained when the age of the boat, weather, time of day, and season are added to the mix. Adding predictors to the model improves the explanatory power of the model, but at the expense of efficiency or “model parsimony” as scientists call it.

Figure 21.3. Variance Explained in an Output Variable.

To reiterate Lord Kelvin’s observation, “When you can measure what you are speaking about and express it in numbers, you know something about it; but when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind.” Inferential statistics provide us with tools to measure and quantify our understanding. Once we understand the sources of variation in the output variable, we can take action to control it. Six Sigma pioneers are to credit for turning operations practitioners into physical and social scientists by demonstrating the relevance of proven tools to the problems faced every day in the supply chain domain.

One caveat when using regression analysis and ANOVA is tied to the realization that these methods rely on correlations, or the “traveling together” of data. Causality cannot be tested using correlations, though it can be inferred given observation of the phenomenon of interest. Correlation analysis is prone to the problem of common underlying causes, sometimes referred to as “spurious effects.” This problem seems to rear its head regularly in studies conducted within the medical community, such as suggesting that regular flossing of teeth leads to longer life expectancy when, most likely, flossing is one part (though not an inconsequential part) of a larger regimen for good health and long life expectancy. To suggest that failure to floss regularly is the root cause of shortened life expectancy is an overstatement, to say the least. Therefore, great consideration must be given to the selection of predictor variables used in analysis such that the root cause is represented somewhere in the set of predictors. Brainstorming, cause-and-effect diagrams, and five-why analysis help to ensure that root causes will be represented in the analysis.

Another caveat is related to the fact that many statistical methods rely on the assumption of normally distributed data, representative of a “normal population” where the mean, mode, and median are equal, with half of the distribution below the mean and half above the mean. A case in point resides in the fact that Six Sigma quality, resulting in no more than 3.4 defects per million opportunities, assumes a perfectly normal distribution. The truth is that most data, and the populations from which data are drawn, are not perfectly normal. Some may be somewhat skewed (asymmetrical) or exhibit kurtosis (distributions unusually flat or peaked relative to the normal curve). Other samples may demonstrate dramatic departures from normality (e.g., a bimodal distribution).

The point is that on collecting data, one must engage in preliminary data analysis: reviewing the characteristics of the data set, checking for completeness, assessing the reliability and validity of the data, and, finally, examining its distribution. Data that violate the assumption of normality can often be standardized or corrected. In other cases, statistics packages offer adjusted statistical tests or tests appropriate for non-normal data. So, one must not simply assume that the data are complete, reliable, valid, and normal. It is up to the individual conducting the research to ensure that the findings are valid based on sound science. If the data are junk or not fully considered in the choice of statistical test, garbage in/garbage out makes a repeat appearance, and the effort is wasted and perhaps misguiding in its result.
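
Such a preliminary screen need not be elaborate. The hypothetical Python sketch below checks completeness and then compares the mean and median as a crude symmetry check:

```python
import statistics

# Hypothetical raw sample with missing entries and one outlier (95.0)
raw = [42.0, 45.0, 44.0, None, 43.0, 41.0, 95.0, 44.0, 46.0, None]

complete = [x for x in raw if x is not None]
missing = len(raw) - len(complete)

sample_mean = statistics.mean(complete)
sample_median = statistics.median(complete)

# A mean well above the median hints at right skew; here the single
# outlier drags the mean upward while the median stays put.
skew_hint = "right-skewed" if sample_mean > sample_median else "left-skewed or symmetric"

print(f"{missing} missing of {len(raw)}; mean = {sample_mean:.1f}, "
      f"median = {sample_median:.1f} -> {skew_hint}")
```

A screen like this would flag both the missing observations and the asymmetry before any formal test is chosen.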

A final caveat worth mentioning here is tied to the ease with which complicated statistical analysis can be conducted today with the aid of PC-based statistics packages. On completing data collection, analysis is a few points and clicks away. The danger is in not understanding the underlying logic at work when models are unwittingly put together and tested. A sound review of statistics is strongly suggested before venturing far down the path of using inferential statistics as a viable problem-solving tool. However, Lean, Six Sigma, and eliminating waste are not only about statistics, so we should not let our natural phobia of mathematics and statistics stop us from doing what is right.

This book has free materials available for download from the
Web Added Value™ Resource Center at www.jrosspub.com.




* An excellent treatment of the DMAIC method can be found in Gardner, Daniel L., Supply Chain Vector, J. Ross Publishing, Boca Raton, FL, 2004.

* An interesting read on mind mapping can be found in Buzan, Tony and Buzan, Barry, The Mind Map Book: How to Use Radiant Thinking to Maximize Your Brain’s Untapped Potential, Plume, New York, 1996.

* For a good, quick reference on DOE, see The Black Belt Memory Jogger, Goal/QPC and Six Sigma Academy, Salem, NH, 2002. Other references include: Anderson, Mark J. and Whitcomb, Patrick J., DOE Simplified: Practical Tools for Effective Experimentation, Productivity Press, New York, 2000; Barrentine, Larry B., An Introduction to Design of Experiments: A Simplified Approach, ASQ Quality Press, Milwaukee, 1999; and, Breyfogle, Forrest W., Implementing Six Sigma: Smarter Solutions Using Statistical Methods, 2nd ed., Wiley, New York, 2003.
