2.5. Visual Six Sigma: Strategies, Process, Roadmap, and Guidelines

In this section, we explore the three strategies that underlie Visual Six Sigma. We then present the Visual Six Sigma Data Analysis Process, which supports these strategies through six steps, and define the Visual Six Sigma Roadmap, which expands on three of those key steps. The section closes with guidelines that help you assess your performance as a Visual Six Sigma practitioner.

2.5.1. Visual Six Sigma Strategies

As mentioned earlier, Visual Six Sigma exploits the following three key strategies to support the goal of managing variation in relation to performance requirements:

  1. Using dynamic visualization to literally see the sources of variation in your data.

  2. Using exploratory data analysis techniques to identify key drivers and models, especially for situations with many variables.

  3. Using confirmatory statistical methods only when the conclusions are not obvious.

Note that with reference to the section titled "Variation and Statistics," Strategy 1 falls within what was called EDA, or statistics as detective. Strategy 3 falls within what we defined as CDA, or statistics as judge. Strategy 2 has aspects of both EDA and CDA.

Earlier, we stressed that by working in the EDA mode of statistics as detective we have to give up the possibility of a neat conceptual and analytical framework. Rather, the proper analysis of our data has to be driven by a set of informal rules or heuristics that allow us to make new, useful discoveries. However, there are still some useful principles that can guide us. Jeroen de Mast and Albert Trip offer an excellent articulation and positioning of these principles in the Six Sigma context. Unsurprisingly, these principles are applicable within Visual Six Sigma, and appear in a modified form in the Visual Six Sigma Roadmap presented later (Exhibit 2.4).

As you recall from Chapter 1, one of the goals of Visual Six Sigma is to equip users who know their business with some simple ideas and tools to get from data to decisions easily and quickly. Indeed, we would argue that the only prerequisite for a useful analysis, other than having high-quality data, is knowledge of what the different variables being analyzed actually represent. We cannot emphasize strongly enough the need for contextual knowledge to guide interpretation; it is not surprising that this is one of the key principles listed by de Mast and Trip.

As mentioned earlier, a motivating factor for this book is our conviction that the balance in emphasis between EDA and CDA in Six Sigma is not always correct. Yet another motivation for this book is to address the perception that a team must adhere strictly to the phases of DMAIC, even when the data or problem context does not warrant doing so. The use of the three key Visual Six Sigma strategies provides the opportunity to reengineer the process of going from data to decisions. In part, this is accomplished by freeing you, the practitioner, from the need to conduct unnecessary analyses.

2.5.2. Visual Six Sigma Process

We have found the simple process shown in Exhibit 2.3 to be effective in many real-world situations. We refer to this in the remainder of the book as the Visual Six Sigma (VSS) Data Analysis Process.

Exhibit 2.3. Visual Six Sigma Data Analysis Process

This process gives rise to the subtitle of this book, Making Data Analysis Lean. As the exhibit shows, it may not always be necessary to engage in the "Model Relationships" activity; this reflects the third Visual Six Sigma strategy. An acid test for a Six Sigma practitioner is to ask, "If I did have a useful model of Ys against Xs from CDA, how would it change my recommended actions for the business?"

The steps in the Visual Six Sigma Data Analysis Process may be briefly described as follows (a small illustrative sketch of the Uncover and Model Relationships steps appears after the list):

  • Frame Problem. Identify the specific failure to produce what is required (see prior section titled "Measurements"). Identify your general strategy for improvement, estimate the time and resources needed, and calculate the likely benefit if you succeed. Identify the Y or Ys of interest.

  • Collect Data. Identify potential Xs using techniques such as brainstorming, process maps, data mining, failure modes and effects analysis (FMEA), and subject matter knowledge. Passively or actively collect data that relate these to the Ys of interest.

  • Uncover Relationships. Validate the data to understand their strengths, weaknesses, and relevance to your problem. Using exploratory tools and your understanding of the data context, generate hypotheses and explore whether and how the Xs relate to the Ys.

  • Model Relationships. Build statistical models relating the Xs to the Ys. Determine statistically which Xs explain variation in the Ys and may represent causal factors.

  • Revise Knowledge. Optimize settings of the Xs to give the best values for the Ys. Explore the distribution of Ys as the Xs are allowed to shift a little from their optimal settings. Collect new data to verify that the improvement is real.

  • Utilize Knowledge. Implement the improvement and monitor or review the Ys with an appropriate frequency to see that the improvement is maintained.
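As a concrete, if minimal, illustration of the middle steps, here is a sketch in Python. The book itself carries out these steps in JMP; the file name and the column names Y, X1, X2, and X3 below are hypothetical stand-ins, not taken from the book.

```python
# Minimal, hypothetical sketch of the Uncover/Model Relationships steps.
# The file and column names are assumptions; the book uses JMP for this work.
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.formula.api as smf

df = pd.read_csv("process_data.csv")       # data gathered in the Collect Data step

# Uncover Relationships: validate the data, then explore Y against candidate Xs.
print(df.describe())                       # quick check of counts, ranges, and anomalies
sns.pairplot(df[["Y", "X1", "X2", "X3"]])  # visual screen for patterns and outliers
plt.show()

# Model Relationships: only needed if the visual evidence is not already conclusive.
model = smf.ols("Y ~ X1 + X2 + X3", data=df).fit()
print(model.summary())                     # which Xs explain variation in Y?
```

If the exploratory display already makes the conclusion obvious, the third Visual Six Sigma strategy says the formal modeling step can simply be skipped.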

2.5.3. Visual Six Sigma Roadmap: Uncover Relationships, Model Relationships, and Revise Knowledge

In this section, we expand on the three steps in the Visual Six Sigma Data Analysis Process that benefit most from the power of visual methods: Uncover Relationships, Model Relationships, and Revise Knowledge. These activities represent where we see the biggest opportunities for removing waste from the process of going from data to decisions.

The Visual Six Sigma Roadmap in Exhibit 2.4 guides you through these three important steps. Given that the displays used for visualization and discovery depend on your own perceptive and cognitive style, the Visual Six Sigma Roadmap focuses on the goal, or the what, of each step. However, in Chapter 3, we will make specific suggestions about how each step can be accomplished using JMP.

Exhibit 2.4. The Visual Six Sigma Roadmap: What We Do

The Roadmap uses the Six Sigma convention that a variable is usually assigned to a Y role (an outcome or effect of interest) or to an X role (a possible cause that may influence a Y). The phrase Hot X in Exhibit 2.4 signals that, according to the available data, this variable really does appear to have an impact on the Y of interest. Of course, in order to make such a determination, this X variable must have been included in your initial picture of how the process operates. Those X variables that are not Hot Xs, in spite of prior expectations, can be thought of as being moved into the noise function for that Y. Other terms for Hot X are Red X and Vital X. Whatever terminology is used, it is important to understand that for any given Y there may be more than one X that has an impact and, in such cases, to understand the joint impact of these Xs.
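To make the idea of joint impact concrete, here is a small, hedged sketch that fits a model containing two candidate Hot Xs and their interaction; the file and column names are assumptions for illustration only.

```python
# Hypothetical sketch: screening two candidate Hot Xs together, including
# their joint (interaction) effect on Y. File and column names are invented.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("process_data.csv")

# The formula "X1 * X2" expands to X1 + X2 + X1:X2, so the X1:X2 term
# captures whether the effect of X1 on Y depends on the setting of X2.
fit = smf.ols("Y ~ X1 * X2", data=df).fit()
print(fit.params)   # estimated effects, including the X1:X2 interaction
print(fit.pvalues)  # a rough guide to which terms behave like Hot Xs
```

A sizable interaction term is a signal that the two Xs should be studied, and eventually set, jointly rather than one at a time.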

Note that, although the designations of Y or X for a particular variable are useful, whether a variable is a Y or an X depends on how the problem is framed and on the stage of the analysis. Processes are often modeled as both serial (a set of connected steps) and hierarchical (an ordered grouping of levels of steps, where one step at a higher level comprises a series of steps at a lower level). Indeed, one of the tough choices to be made in the Frame Problem step (Exhibit 2.3) is to decide on an appropriate level of detail and granularity for usefully modeling the process. Even when a manufacturing process is only moderately complex, it is often necessary to use a divide-and-conquer approach in process and product improvement and design efforts. Improvement and design projects are often divided into small pieces that reflect how the final product is made and operates. Thankfully, in transactional situations modeling the process is usually more straightforward.
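Purely as a hypothetical illustration of this serial and hierarchical view, the step names below are invented; the point is only that a higher-level step expands into an ordered series of lower-level steps.

```python
# Invented example of a serial and hierarchical process description:
# the top level is an ordered sequence of steps, and each step expands
# into its own ordered sequence of sub-steps at the next level down.
process = {
    "Receive order":    ["Validate customer data", "Check credit"],
    "Assemble product": ["Pick components", "Build", "Inspect"],
    "Ship":             ["Pack", "Label", "Dispatch"],
}

for step, sub_steps in process.items():   # serial at the higher level
    print(step)
    for sub in sub_steps:                 # hierarchical expansion
        print("  ->", sub)
```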

Earlier, we used the phrase "data of high quality." Although data cleansing is often presented as an initial step prior to any data analysis, we feel that it is better to include this vital activity as part of the Uncover and Model Relationships steps (Exhibit 2.3), particularly when there are large numbers of variables. For example, it is perfectly possible to have a multivariate outlier that is not an outlier in any single variable. Thus the assessment of data quality and any required remedial action is understood to be woven into the Visual Six Sigma Roadmap.
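To see how a point can be a multivariate outlier without being extreme in any single variable, here is a small simulated example; the data are invented purely for illustration.

```python
# Simulated example: a point that is unremarkable in X1 and in X2 separately,
# but is a clear outlier once the strong correlation between them is used.
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.normal(0.0, 1.0, 200)
x2 = 0.9 * x1 + rng.normal(0.0, 0.2, 200)   # X2 tracks X1 closely
data = np.column_stack([x1, x2])

# Add a point near the middle of each marginal range, but well off the trend.
data = np.vstack([data, [1.0, -1.0]])

center = data.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(data, rowvar=False))
diff = data - center
md = np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))  # Mahalanobis distances

print("added point:", md[-1], " largest of the rest:", md[:-1].max())
```

Neither coordinate of the added point is unusual on its own, yet its Mahalanobis distance dwarfs that of every other observation, which is why data cleansing benefits from being woven into the multivariate exploration itself.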

We should also comment on the need to understand the measurement process behind each variable in your data. The study of this process, variously known as a Gauge Repeatability and Reproducibility (Gauge R&R) study or a Measurement System Analysis (MSA), is critically important. It is only when you understand the pattern of variation that results from repeatedly measuring the same item that you can correctly interpret the pattern of variation when you measure different items of that type.

In many ways, an MSA is best seen as an application of DOE to a measurement process, and it is properly the subject of a Visual Six Sigma effort of its own. To generalize, we would say that:

  • In a transactional environment, the conventional MSA is often too sophisticated.

  • In a manufacturing environment, the conventional MSA is often not sophisticated enough.

As an example of the second point: If the process to measure a small feature is automated, involving robot handling and vision systems, then the two Rs in Gauge R&R (corresponding to repeatability and reproducibility variation) may not be of interest. Instead we may be concerned with the variation when the robot loads and orients the part, when the camera tracks to supposedly fixed locations, and when the laser scans in a given pattern to examine the feature.
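The following sketch is not a full Gauge R&R study, only a hedged illustration of the core MSA idea using simulated numbers: measure each part several times, then separate measurement variation from genuine part-to-part variation.

```python
# Simulated sketch of the core MSA idea: measure each part several times and
# compare within-part (measurement) variation to between-part variation.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n_parts, n_repeats = 10, 3
true_part_values = rng.normal(100, 5, n_parts)            # real part-to-part spread
measurements = (true_part_values[:, None]
                + rng.normal(0, 1, (n_parts, n_repeats)))  # measurement noise

df = pd.DataFrame(measurements, index=[f"part_{i}" for i in range(n_parts)])

within = df.var(axis=1, ddof=1).mean()    # repeatability (measurement) variance
between = df.mean(axis=1).var(ddof=1)     # part variation plus noise / n_repeats
part_var = max(between - within / n_repeats, 0.0)

print(f"measurement sd ~ {within ** 0.5:.2f}, part-to-part sd ~ {part_var ** 0.5:.2f}")
```

If the measurement standard deviation were comparable to the part-to-part standard deviation, differences seen between parts could not be interpreted with any confidence.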

The Revise Knowledge activity is where we try to integrate what we have learned in the Uncover Relationships and possibly the Model Relationships steps with what we already know. There are many aspects to this, and most of them are particular to the specific context.

Regardless, one of the vital tasks associated with the Revise Knowledge step is to consider how, or whether, our new findings will generalize. Note that Step 4 in Model Relationships already alerts us to this kind of problem, but this represents an extreme case.

Perhaps unsurprisingly, the best way to tackle this issue is to collect additional, new data via confirmatory runs to check how these fit with what we now expect. This is particularly important when we have changed the settings of the Hot Xs to achieve what appear to be better outcomes. As we acquire and investigate more and more data under the new settings, we gain more and more assurance that we did indeed make a real improvement. Many businesses develop elaborate protocols to manage the risk of making such changes. Although there are some statistical aspects, there are at least as many contextual ones, so it is difficult to give general guidance.
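One hedged way to put numbers on "more and more assurance" is to watch a confidence interval for the mean of Y under the new settings tighten as confirmatory runs accumulate, and compare it with the old baseline. The baseline value and data below are simulated assumptions, not a prescribed protocol.

```python
# Illustrative only: track the 95% confidence interval for mean Y under the
# new X settings as confirmatory runs accumulate, against an assumed baseline.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
baseline_mean = 50.0                      # assumed pre-change performance
new_runs = rng.normal(53.0, 2.0, 20)      # simulated confirmatory runs

for n in (5, 10, 20):
    sample = new_runs[:n]
    low, high = stats.t.interval(0.95, n - 1,
                                 loc=sample.mean(), scale=stats.sem(sample))
    print(f"n={n:2d}  95% CI for new mean: ({low:.1f}, {high:.1f})  "
          f"baseline = {baseline_mean}")
```

If the interval stays clearly away from the baseline as the number of runs grows, the improvement becomes increasingly hard to explain away as noise.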

In any case, confirmatory runs, no matter how they are chosen, are an expression of the fact that learning should be cumulative. Assuming that the performance gap continues to justify it, the continued application of the Visual Six Sigma Data Analysis Process (Exhibit 2.3) gives us the possibility of a virtuous circle.

2.5.4. Guidelines

Finally, the following are some guidelines that may help you as a practitioner of Visual Six Sigma:

  • Customer requirements of your process or product should establish the context and objectives for all the analyses you conduct.

  • These objectives can always be rephrased in terms of the identification, control, reduction, and/or anticipation of sources of variation.

  • If you do not measure it, then you are guessing.

  • If you do not know the operational definition of your measurement or the capability of your measurement process, then you are still guessing.

  • If you spend more time accessing and integrating data than on Visual Six Sigma itself, then your information system needs to be carefully examined.

  • The choice of which variables and observational units to include in constructing a set of data should be driven by your current process or product understanding and the objectives that have been set.

  • Given that you have made such a choice, you need to be concerned about how your findings are likely to generalize to other similar situations.

  • Any analysis that ignores business and contextual information and tries to just manipulate numbers will always fail.

  • Any dataset has information that can be revealed by dynamic visualization.

  • Models are used to make predictions, but a useful prediction need not involve a formal model.

  • All models are wrong, but some are useful.

  • The more sophisticated the model you build, the more opportunity for error in constructing it.

  • If you cannot communicate your findings readily to business stakeholders, then you have failed.

  • If the course of action is not influenced by your findings, then the analysis was pointless.
