2
Basics of Experimentation and Response Surface Methodology

“The best time to plan an experiment is after you have done it.”

Sir Ronald A. Fisher

Overview

Our approach to Strategy of Formulation Development relies heavily on the use of response surface methodology. Data is collected using an experimental design from which a model is developed to understand the formulation system and identify formulations that meet the objective of the study. In this chapter we discuss the fundamentals of good experimentation that enable the collection of good data. These fundamentals include well-defined objectives, high quality data, and diagnosis of the experimental environment. We also introduce a roadmap for sequential experimentation and modeling of formulation systems.

CHAPTER CONTENTS

Overview

2.1 Fundamentals of Good Experimentation

Well-Defined Objectives

High Quality Data

How Many Formulations or Blends Do I Need to Test?

2.2 Diagnosis of the Experimental Environment

2.3 Experimentation Strategy and the Evolution of the Experimental Environment

Screening Phase

Optimization Phase

2.4 Roadmap for Experimenting with Formulations

2.1 Fundamentals of Good Experimentation

In formulation experimentation, as in any other area of science, certain fundamental concepts are critical to the effective use of the associated techniques. These basic ideas are summarized in Table 2.1, discussed briefly in the following paragraphs, and addressed in detail in later chapters. These ideas are useful in all types of experimentation and are not restricted to experiments with formulations.

Table 2.1 – Fundamentals of Good Experimentation

Well-Defined Objectives

•   What questions need to be answered

•   Choice of components (x's) to be studied

•   Component ranges and critical properties or responses (y's)

High Quality Data

•   Randomization

•   Blocking

•   Good administration of the experimentation process

Response (y) Variation

•   Experiment and testing variation

•   Replication

Diagnosis of Experimental Environment

•   Objectives, high quality data, experiment-to-experiment variation and test variation

•   Experimentation strategy

Well-Defined Objectives

A well-defined objective is a basic requirement for conducting good experiments. The objectives include what components are to be studied and what ranges are to be investigated. In all of the formulation studies discussed earlier, the properties of a series of blends or formulations are measured. Clearly defined objectives enable us to identify which formulations to test, in what order, and in what amount. The objectives also define what success looks like, i.e., when the studies have been successfully completed.

In determining the objectives we typically first determine which components (x’s) and responses (y’s) should be considered. The component variables (that is, the proportions of each component present in the mixture) are those that will be deliberately controlled or varied in making up various formulations. The component variables may be referred to as factors (in experimental design literature), predictor variables (in regression analysis literature), proportions (expressed by volume, weight, moles, and so on), or component ratios. The component variables are usually designated as x's. Several names have been used to describe a mixture of two or more components. The terms used most often are formulation, composition, blend, run, mixture, trial, and test. We will use these terms interchangeably. In most instances, formulation, blend, mixture, or test will be used to describe a mixture of ingredients that is being evaluated.

Another part of determining the objectives is to identify for each formulation the measurements of the product properties or responses (y's) that are to be made. The measured variables depend on the proportions of the various components (x's) in the mixture. Experimenters should always ask, "Am I looking at the right y’s?" It is only when the x's and y's are delineated that you can have a clearly defined objective.

High Quality Data

One of the useful by-products of using the statistical approach to formulation development is that high quality data is developed in the process. Conversely, when data is collected haphazardly, or has an unknown “pedigree” (Snee and Hoerl 2012), there are often significant limitations to the data that make development of good models challenging. These problems include important variables excluded from the data (lurking variables), inappropriate ranges of the x variables, missing or inaccurate data, poor time scales (e.g., daily versus hourly data), and so on. With designed experiments, high quality data is developed primarily through the use of randomization, good administration of the experimental process, and blocking. Data cleaning techniques are discussed by Cody (2008).

Randomization

We run experiments in a random order so that any unknown biases do not persistently confuse the average effects of any of the components that have been deliberately varied in the experiments. In other words, randomization ensures that the effect of any lurking variables will not be confused with a particular x variable.

The following example shows how randomization reduces the effect of lurking variables. In Table 2.2 we see data from a 10-run experiment. The only variable actually affecting the response is Variable Z, which is unknown to the experimenter and has a positive effect. The experimenter varies the variable of interest, x1, in the same sequence in which Variable Z changes. As a result, the effect of x1 is perfectly correlated with the effect of Variable Z.

When the experimenter plots the Response (yy) versus x1, a strong straight line (linear) relationship is found (Figure 2.1a). Of course, we know that this effect is really the effect of Variable Z (Figure 2.1b). Figure 2.1c shows the correlation between the Response (yy) and the unknown Variable Z.

Table 2.2 – Example Showing the Relationship between a Response Variable, an Experimental Variable (X1), and a Lurking Variable (Z)

Run Unknown Variable Z X1 Response (yy)
1 10 0.00 81.5
2 15 0.25 89.0
3 20 0.50 93.0
4 25 0.75 93.5
5 30 1.00 98.0
6 10 0.00 78.5
7 15 0.25 81.0
8 20 0.50 87.0
9 25 0.75 96.5
10 30 1.00 102.0

Figure 2.1a – Randomization Example – Plot of Response (yy) versus X1 – Strong Correlation Observed

image

Figure 2.1b – Randomization Example – Plot of X1 versus Unknown Variable Z – Variable X1 Is Perfectly Correlated with Z

image

Figure 2.1c – Randomization Example – Plot of Response (yy) versus Unknown Variable Z – Strong Correlation Is Observed

image

Now randomization is introduced. In Table 2.2a the levels of x1 have been randomized. In Figure 2.1d we see that there is now no apparent effect due to x1; all the variation is due to the lurking Variable Z, as we saw in Figure 2.1c. Further, in Figure 2.1e we see that the randomization has reduced the correlation between x1 and Z to essentially zero.

Table 2.2a – Data from Table 2.2 with the Levels of Experimental Variable X1 Randomized to Reduce the Effect of the Lurking Variable (Z)

Run Unknown Variable Z X1 Randomized Response (yy)
1 10 0.00 81.5
2 15 0.25 89.0
3 20 0.50 93.0
4 25 1.00 93.5
5 30 0.25 98.0
6 10 0.75 78.5
7 15 1.00 81.0
8 20 0.00 87.0
9 25 0.75 96.5
10 30 0.50 102.0
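The confounding in Table 2.2 and its removal in Table 2.2a can be checked numerically. The following sketch (plain Python, standard library only) computes the Pearson correlation between the lurking variable Z and each version of x1:

```python
# Pearson correlation between the lurking variable Z and x1, for both
# the sequential run order (Table 2.2) and the randomized order (Table 2.2a)
from math import sqrt

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / sqrt(sum((x - ma) ** 2 for x in a) *
                      sum((y - mb) ** 2 for y in b))

z = [10, 15, 20, 25, 30, 10, 15, 20, 25, 30]
x1_sequential = [0.00, 0.25, 0.50, 0.75, 1.00, 0.00, 0.25, 0.50, 0.75, 1.00]
x1_randomized = [0.00, 0.25, 0.50, 1.00, 0.25, 0.75, 1.00, 0.00, 0.75, 0.50]

print(pearson(z, x1_sequential))  # 1.0 : x1 perfectly confounded with Z
print(pearson(z, x1_randomized))  # 0.1 : confounding essentially removed
```

With the sequential order the correlation is exactly 1.0, so the effect of x1 cannot be separated from that of Z; after randomization it drops to 0.1, essentially zero for practical purposes.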

Figure 2.1d – Randomization Example – Plot of Response (yy) versus X1 Randomized – No Correlation Is Seen

image

Figure 2.1e – Randomization Example – Plot of Unknown Variable Z versus X1 Randomized – Randomization Has Removed the Correlation between X1 and Z

image

Table 2.3 illustrates how experiments are typically randomized. The table shows the results of a 5-blend experiment in which each blend is run in duplicate, with each run tested one time. The ten runs (5 blends, each prepared and tested twice) were conducted in the following random order: B1, B2, B3, B5, B2, B4, B5, B1, B4, B3. Note that the response (y) data listed in this table is real data, not the hypothetical response data (yy) shown above.

Testing the blends in a random order reduces the effects of the variation introduced by variables not controlled in the experiment, i.e., lurking variables. Randomization spreads the effects of the uncontrolled variables across the experiment. As a result, the estimated effects of all the variables studied are affected a little rather than having a few effects severely biased, which can happen when randomization is not used.

Table 2.3 – Illustration of Randomization of Test Order That Reduces the Effects of Unknown Sources of Variation

Blend Run Order X1 X2 Response (y)
B1 1, 8 0.00 1.00 79, 76
B2 2, 5 0.25 0.75 95, 103
B3 3, 10 0.50 0.50 104, 110
B4 6, 9 0.75 0.25 105, 108
B5 4, 7 1.00 0.00 103, 99

Randomization ensures that every component variable will have its fair share of the favorable and unfavorable characteristics of the experimental environment. Randomization also ensures valid estimates of experimental variation and makes possible the application of statistical tests of significance and the construction of confidence intervals for the observed effects. It is better to include randomization in all experimental situations rather than to contaminate results with potential unknown biases because of lurking variables that changed over time during the experimentation.
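Generating a random run order is straightforward in practice. A minimal sketch, assuming 5 blends each prepared and tested twice as in Table 2.3 (the fixed seed is only for reproducibility; in a real experiment any unpredictable order will do):

```python
# Randomizing the run order for 5 blends, each prepared and tested twice,
# as in Table 2.3
import random

blends = ["B1", "B2", "B3", "B4", "B5"] * 2  # each blend appears twice
random.seed(42)      # fixed here only so the example is reproducible
random.shuffle(blends)
print(blends)        # the order in which the 10 runs would be executed
```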

Blocking

We sometimes block experiments to remove the effects of important extraneous variables that may be present. Some examples include raw material lots, teams, operators, and day of the week. The variation that is induced by these so-called noise or environmental variables can be accounted for by blocking. In essence, we introduce a blocking variable, perhaps equal to 1 for Day 1 and 2 for Day 2, and incorporate that in the model. The effects of the blocked variables are still present but are isolated in the statistical analysis so that the effects of the components and other variables are not affected.

This type of extraneous variation, that is, variation not related to the component levels, is sometimes referred to as bias variation. Bias is experimental variation in which the numerical value of the variation tends to remain constant over a number of experimental runs until some non-experimental variable, such as operator, raw material batch, or machine, is changed. You may find, for example, that a formulation consistently performs better when using an active ingredient purchased from a particular vendor. Bias variation may also follow a consistent pattern or cycle, depending on the hour of the day, day of the week, or season of the year.

Blocking accounts for background variables that are not direct factors in our experiment by taking advantage of naturally homogeneous groupings in materials, machines, or time. For example, suppose there is only enough time to run 10 of 20 blends under investigation in one day, and then we must finish the other 10 later. It would be most advantageous to group the blends into 2 blocks of 10 each so that we can estimate a time effect and can still determine the effects of the various component variables in the system. That is, the blocking variable should be independent of any of the terms in the model. Blocking is therefore an important experimental consideration. In our experience, blocking is less needed in formulation experimentation than in other fields of experimentation. This doesn’t mean that it should be ignored, however. Blocking is discussed further in Chapter 8.

Both blocking and randomization are used to address variation from extraneous variables that are not part of the experiment. However, there is a big difference. Blocking is used to account for variation that we can anticipate in advance, such as running the experiment over two days. We can fully account for this variation by incorporating a day variable in the model. Randomization, on the other hand, is used to protect against extraneous variation that we cannot anticipate in advance, such as changes in ambient humidity during the experiment.
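The distinction can be made concrete with a small sketch. The data below are hypothetical: four runs on each of two days, with Day 2 running uniformly higher for reasons unrelated to the formulation. Including a day (block) variable amounts to estimating and removing the shift between block averages:

```python
# Hypothetical sketch: 4 runs on each of two days. Day 2 results run about
# 5 units higher for reasons unrelated to the formulation. Centering each
# block on its own mean isolates the day effect from the component effects.
day1 = [82.0, 85.0, 88.0, 91.0]
day2 = [87.5, 90.5, 93.5, 96.5]

block_effect = sum(day2) / len(day2) - sum(day1) / len(day1)
print(block_effect)  # 5.5 : the estimated day-to-day (block) shift

# Within-block (block-centered) values, free of the day effect:
centered = [y - sum(day1) / 4 for y in day1] + [y - sum(day2) / 4 for y in day2]
print(centered)      # identical patterns on both days once the shift is removed
```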

Experimentation Administration

The use of good experimental controls helps ensure that the experiment is run as defined in the randomization sequence. In addition, these controls ensure that variables not included in the experiment are held as constant as possible, that rigor is used in data collection and measurement, and that any abnormalities during the experiment are documented. The result is that unbiased, high quality data is collected. Lack of controls typically introduces additional variation into the experiment, making it difficult to identify the important components. For example, different people may record data in different units, or undocumented changes may be made to variables not included in the experiment, causing consternation in analysis. Good administration of the experimentation is enhanced by providing careful direction to the persons conducting the experimental runs.

Variation – Experimental and Testing

Experimental variation is that variation observed in the results of repeat experiments carried out under identical conditions. This variation is also referred to as experimental error, although it does not imply that any mistakes have been made. It is a fact of life that everything has variation, and test results will tend to change to some degree when repeat measurements are made, even for such routine things as taking our blood pressure in the doctor’s office. How do we know when one formulation is really better than another when duplicate experiments do not yield identical results? Figure 2.2 shows a plot of the response (y) versus x1 for the data in Table 2.3. Here we can clearly discern the shape of the response function even when there is variation between the replicate measurements. The replicates in fact provide greater confidence in understanding the response function.

Figure 2.2 – Relationship between Y and X1 for Data in Table 2.3

image

As noted, experimental variation is a fact of life. A good experimental program will take this fact into account and will estimate the variation between replicate experiments. Experimental variation can come from many sources, such as weighing variation, analytical or testing variation, sampling variation, operator variation, and administrative mistakes, to name just a few. In complex problems, one must define and estimate quantitatively many components of variation.

We distinguish between two types of variation that are frequently confused: experimental variation and testing variation.

•   Experimental variation is all the variation that exists between experiments conducted under the same conditions, i.e., replicated formulations.

•   Testing variation is only the variation that exists between multiple tests conducted on the same experimental unit or sample. Test variation represents the variation introduced by the measurement method alone.

Figure 2.3 shows a blend that has been made on two separate occasions (two replicates made under identical conditions and with identical formulations) and then tested on each occasion in duplicate. The experimental variation, that is, all variation between the two replicates, is measured by comparing the averages of the results of the two experiments: (T1+T2)/2 versus (T3+T4)/2. The test variation is measured by the differences between duplicate tests on the same samples: T1-T2 and T3-T4.

Figure 2.3 – Experiment and Test Variation

image

Note that many other sources of variation could exist between replicates besides test variation, such as slight, but undetected changes in experimental conditions, ambient temperature or humidity variation, slight differences in how the samples were collected or prepared, and so on. Experimental variation is the variation used to test the significance of the effects of variables. This is because it includes the test variation, but also the other sources of variation that cause replicates to produce different results. Test-to-test variation is therefore not the appropriate measure of variation to conduct statistical tests of significance, as it does not include all sources of experimental variation.
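The two kinds of variation in Figure 2.3 can be computed directly. The T1 through T4 values below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical results for the setup of Figure 2.3: one blend made on two
# occasions (replicates), each tested twice. Values are illustrative only.
t1, t2 = 101.0, 99.0   # duplicate tests on replicate 1
t3, t4 = 104.0, 106.0  # duplicate tests on replicate 2

# Test variation: differences between duplicate tests on the same sample
test_diffs = (t1 - t2, t3 - t4)

# Experimental variation: difference between the replicate averages,
# which includes test variation plus all other replicate-to-replicate sources
replicate_diff = (t1 + t2) / 2 - (t3 + t4) / 2

print(test_diffs)      # (2.0, -2.0)
print(replicate_diff)  # -5.0
```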

The Value of Replication

Experimental variation is a natural part of any investigation and your experimental strategy should take this into account. Any experiment should be designed to detect the effects of the variables or components in the presence of experimental error. Replication of all or perhaps only some points in the design is the principal statistical tool for measuring experimental variation. It also provides an opportunity for the effects of uncontrolled variables to balance out, as we saw in Figures 2.1a through 2.1e. Thus, replication aids randomization in decreasing bias variation. Replication will also help locate atypical (outlier) observations in the experiments.
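The duplicate blends in Table 2.3 can be used to estimate the experimental standard deviation. A standard estimate from paired duplicates is s^2 = sum(d_i^2)/(2k), where d_i is the difference within pair i and k is the number of pairs; the sketch below applies it to the Table 2.3 data:

```python
# Estimating the experimental standard deviation from the duplicate blends
# in Table 2.3, using the paired-duplicates formula s^2 = sum(d_i^2) / (2k)
from math import sqrt

pairs = [(79, 76), (95, 103), (104, 110), (105, 108), (103, 99)]
diffs = [a - b for a, b in pairs]
s = sqrt(sum(d * d for d in diffs) / (2 * len(pairs)))
print(round(s, 2))  # 3.66 : estimated experimental standard deviation
```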

Statistical theory shows that the average of a number of observations is more precise than a single observation (Hoerl and Snee 2012). If y is a dependent variable whose standard deviation for a single observation is s, and n independent observations (replicates) of y are made with identical settings of the x's in the experiment, then the standard deviation of the average value of y is s/SQRT(n), where SQRT denotes the square root function. That is, the uncertainty in the average decreases with the square root of the sample size. Note that to avoid confusion with s, the standard deviation of one observation, the standard deviation of the average of y is often referred to as the standard error of the average of y. In Table 2.4 we see that the values of 1/SQRT(n) decrease rapidly at first as n increases, but more slowly as n becomes larger.

Table 2.4 – Percent Reduction in Standard Deviation versus Sample Size (n)

No. Replicates (n) 1/SQRT(n) Percent Reduction in Standard Deviation Versus n=1
1 1.00  
2 0.71 29
3 0.58 42
4 0.50 50
5 0.45 55
10 0.32 68
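Table 2.4 can be reproduced with a few lines of code; the percent reduction versus a single run is 100 * (1 - 1/SQRT(n)):

```python
# Reproducing Table 2.4: the standard error of an average of n replicates
# is s / sqrt(n), so the percent reduction versus a single run (n = 1)
# is 100 * (1 - 1/sqrt(n)).
from math import sqrt

for n in [1, 2, 3, 4, 5, 10]:
    factor = 1 / sqrt(n)
    reduction = 100 * (1 - factor)
    print(n, round(factor, 2), round(reduction))
```

The diminishing returns are visible directly: going from 1 to 2 replicates cuts the standard error by 29 percent, but going from 5 to 10 gains only another 13 points.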

A small amount of replication is helpful, even essential, but large amounts of replication are grossly wasteful of experimental time and money. Table 2.4 shows that the advantages of replication reach a point of diminishing returns, which is why large amounts of replication are not usually practical. It is usually a better strategy to save this effort for subsequent rounds of experimentation. The strategies discussed in this book all involve moderate, never large, amounts of replication.

How Many Formulations or Blends Do I Need to Test?

One of the first and most important questions asked by experimenters is, "How many formulations or blends should I prepare and test?" Certainly you want to learn as much as possible, but not at an impractical cost in time and money. Experimental programs, which usually involve more than one experiment, typically encompass between 8 and 60 runs. For any single group of runs we like to keep the sample size less than 30-35. This helps the learning process move faster (you get data more quickly) and reduces the amount of experimental administration involved with very large experiments. In the final analysis, the number of experimental runs involved depends on the specific system being studied and the objectives of the experimental program.

The number of formulations evaluated depends on the number of components and relationship between the size of the effects to be detected and the size of the experimental variation. The form of the model is also a determining factor. Models involving quadratic and cubic terms require larger numbers of formulations than models involving only linear terms. Both the relationship between size of effect and size of experimental variation and the size of the model are part of the experimental environment discussed in the following section.

2.2 Diagnosis of the Experimental Environment

A central theme of this book is that the best formulation experiment to run depends on the experimental environment. Some environmental characteristics should have little influence on the choice of experimental strategy; other characteristics should have a major influence. It is important to know which characteristics matter.

Number of components – The most important characteristic of a formulation experiment is the number of components in the formulation. If there are only three components (x's), then a complete exploration of the effects of these components is possible in a moderate number of tested formulations. On the other hand, to explore the effects of a 30-component formulation with comparable thoroughness would require an inordinate number of experimental runs.

Trace components – In some mixture systems, one of the components makes up most (e.g., 90-95%) of the mixture, and the other components are present in trace amounts (e.g., 5%). Obviously, when mixing lemonade, for example, water is the dominant ingredient. The effects of the trace components can be studied using the classic factorial and response surface designs. In these designs the levels of the trace components are varied independently, and the level of the major component is adjusted so that the levels of all of the components in each blend add up to 1. In effect, the major component "takes up the slack" and, hence, is called the slack variable. In our lemonade example, we could vary all the ingredients except water independently, and then add enough water to fill the glass.

Some formulation studies involve two or more major components and trace components. We recommend that mixture designs be used to study the response of this type of system.
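The slack-variable idea is simple to express in code. In this sketch the trace-component levels (hypothetical values for the lemonade example) are set by the design, and the major component is computed as whatever remains:

```python
# Sketch of the slack-variable idea from the lemonade example: the trace
# components (hypothetical levels for sugar and lemon juice) are varied
# independently by the design, and water takes up the slack so that the
# proportions sum to 1.
sugar, lemon_juice = 0.06, 0.04       # trace components, set by the design
water = 1.0 - (sugar + lemon_juice)   # slack variable: the major component

print(round(water, 4))  # 0.9 : water absorbs whatever remains
```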

Component constraints – In mixture experiments where each of the components can be tested through the total range of 0-1, the component variables are said to be unconstrained. In many other mixture experiments it is not possible or practical to explore the total range of 0-1 on all components. For example, a salad dressing made of 100% vinegar would not be of much interest. These formulations may be constrained between a lower limit ai and an upper limit bi:

0 ≤ ai ≤ xi ≤ bi ≤ 1.0

We will see in subsequent chapters that the experimental regions so defined, and therefore the designs used, will depend on the nature of the component constraints.
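A feasibility check for a constrained mixture follows directly from the inequality above: each proportion must lie within its [a_i, b_i] bounds, and the proportions must sum to 1. The salad-dressing bounds below are hypothetical, used only for illustration:

```python
# Checking whether a candidate blend satisfies mixture constraints:
# every component within its [a_i, b_i] bounds and proportions summing to 1
def is_feasible(x, lower, upper, tol=1e-9):
    return (abs(sum(x) - 1.0) < tol
            and all(a - tol <= xi <= b + tol
                    for xi, a, b in zip(x, lower, upper)))

# components: (oil, vinegar, seasonings), with illustrative constraints
lower = [0.50, 0.20, 0.00]
upper = [0.80, 0.45, 0.10]

print(is_feasible([0.70, 0.25, 0.05], lower, upper))  # True
print(is_feasible([0.00, 1.00, 0.00], lower, upper))  # False: 100% vinegar
```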

Prediction – The quality of prediction possible will vary with the number of components, their constraints, the experimental error, and the number of formulations tested. Every formulation experiment should lead to a model that will give accurate predictions about future behavior of the system. A set of test formulations may solve the problem of finding a satisfactory formulation, but the ultimate objective should be to develop an accurate prediction equation that is actionable going into the future. An experimenter who merely fills notebooks with records of observations but does not produce a useful predictive model has not done an adequate job.

Available theory – If there is an adequate theory available, you should use known theoretical models and derive an experimental strategy designed specifically to the mathematical form of the model. Most often there are two impediments to relying exclusively on a theoretical model:

•   The adequacy of the model may not yet have been thoroughly established, that is, empirically verified.

•   Even when a theoretical model can accurately predict some of the critical responses (for example, solubility and cost of a formulated product), there are usually other important responses for which no theoretical models exist (for example, color impurity, viscosity, and aesthetic properties).

For these reasons, most experimental programs must be designed on the basis of an appropriate empirical model--that is, a model based primarily on data. Good data for an empirical model will also be useful in developing a theoretical model if one is needed. In other words, empirical models and theoretical models should not be viewed as competitors, but rather as synergistic tools. We should always use whatever theory exists in experimentation and modeling, and empirical models should always be interpreted in light of current theory.

2.3 Experimentation Strategy and the Evolution of the Experimental Environment

Proper diagnosis of the environment is critically important to sound experimentation and to problem solving in general (Hoerl and Snee 2015). Sequential application of experimental design and statistical analysis of the subsequent data are essential to most projects addressing large, complex, unstructured problems. Contrary to textbook problems, these complex problems cannot be solved with any one technique, but generally require a sequential approach linking and integrating multiple methods.

The study of how to accomplish such an integration of multiple tools to address complex problems has been referred to as statistical engineering, which has been defined as “The study of how to best utilize statistical concepts, methods, and tools, and integrate them with information technology and other relevant disciplines, to achieve enhanced results.” (Hoerl and Snee 2010, p.12). Some key words and phrases in this definition warrant elaboration. First of all, statistical engineering is defined as the study of something, i.e., a discipline, not a set of tools or techniques.  As with any other engineering discipline, it uses existing concepts, methods, and tools in novel ways to achieve novel results.

Integration is also a key word, not only integration of multiple statistical methods, but also integration with other disciplines, especially information technology. Computer technology is absolutely critical to address the problems discussed in this text. The phrase achieve enhanced results is key in the sense that statistical engineering is inherently tool neutral. That is, it promotes neither classical nor computer-aided experimental designs, favors neither linear nor non-linear models, and so on. Rather, as an engineering discipline its "loyalty" is to solving the problem--generating results, rather than relying on pre-determined methods. Tools are of course important, but within a statistical engineering paradigm they would be chosen based on the unique nature of the problem, to generate results. As we shall see, each of these principles plays an important role in the approach to experimentation with formulations that we propose.

Every experimental program has a beginning and an end. During its lifetime, every program evolves through a sequence of phases. The best experimental strategy changes greatly from phase to phase. Therefore, the experimenter must learn to recognize where the experimental program is within the natural evolutionary process. Figure 2.4 summarizes the typical stages in an experimental formulation program.

We begin with screening and then move to optimization. This strategy differs from the strategy of experimentation for process variables (Pfeifer 1988), which has a characterization phase between screening and optimization. In formulation experimentation, the characterization and optimization phases involve the same activities. The design and analysis of the types of designs used in the screening and optimization phases, including designs selected using optimal algorithms, are discussed in Chapters 3 through 9 of this book.

Figure 2.4 – Comparison of Experimental Environments

Characteristic Screening Optimization
No. of Components 6 or More 2-5
Desired Information Critical Components Prediction Equation, Optimization, Design Space
Model Form Linear Blending Linear and Curvilinear Blending
Experiment Design Screening: Simplex and Extreme Vertices Designs Response Surface: Simplex and Extreme Vertices Designs

Screening Phase

At the beginning of an experimental program one should include a large number of components to ensure that no important variables are overlooked. Good experimental strategy starts by studying these candidate variables in screening experiments to find those components that are most important and necessary. If too few components are included at the beginning, some attractive formulations may not be found.

The focus at this point is on estimating linear blending effects. We repeatedly stress the concept of “boldness” in good experimental strategy. This means that we want to look at wider ranges of the components than we think are likely to produce optimal results. In this way, we are able to detect any real effects that are present. Experimentally testing all the possible candidate components is another aspect of boldness.

An experimental program may never have passed the screening stage, despite having been underway for some time. In such a situation, a body of folklore about the effects (or lack of effects) of the component variables will have built up. The wise experimenter will let the screening experiment help sort the facts from the folklore.

Optimization Phase

After the screening phase, we attempt to optimize the mixture formulation and to predict, through response surface experimentation, how changes in the proportions of the composition affect the measured physical properties of the mixture. The resulting models typically include both linear and non-linear blending effects. The goal is to develop models (prediction equations) that can be used to predict the characteristics of the formulations given the percentages of the ingredients in the formulation. These prediction equations can also be used to determine operating windows that define the formulations that will meet all the response requirements and specifications. The operating window is also called the sweet spot by some. It is referred to in the pharmaceutical and biotech industries as the design space.

From an ease of learning perspective, we begin discussion of response surface experimentation with mixtures that are unconstrained or have only lower bounds on their components, and then proceed to screening designs. We conclude with mixture experimentation with upper and lower constrained concentrations of the components. This book concentrates on screening and response surface experiments because they are used most frequently in practice.

We also note that both phases are not used in all situations. In some situations a screening experiment will be sufficient to solve the problem and generate the needed information. In other situations experimentation has progressed to the point that 3-5 components have been identified as important and a response surface design can be used to develop a prediction equation and identify useful operating conditions (design space) as needed. As a result the overall proposed strategy actually provides three strategies:

•   Screening followed by optimization

•   Screening

•   Optimization

All three strategies produce data and information that lead to findings and conclusions.

2.4 Roadmap for Experimenting with Formulations

The procedures discussed above provide a roadmap for experimenting with formulations that is summarized in Figure 2.5. Much of the content of Figure 2.5 has been discussed above but some additional comment is needed.

Figure 2.5 – Roadmap for Experimenting with Formulations

•   Define experiment objective using variety of inputs

•   Choose components (x’s) and responses (y’s)

•   Select experimental strategy – Screening or Response Surface?

•   Identify constraints on mixture components

•   Select a blending model form, e.g., linear, quadratic

•   Select appropriate experimental design and replication

•   Augment to protect against higher order terms?

•   Include process variables?

•   Distribute proposal widely for comment

•   Revise as needed based on feedback received

•   Conduct experiment

•   Analyze data – Simplify Models – Profile Plots – Contour Plots

•   Practical Conclusions – Report – Oral and Written

While most of these points are self-explanatory, we would like to comment briefly about distributing the proposal widely for comment and final documentation. It is always a good idea to discuss your proposed experimental program and experiment designs with colleagues prior to the execution. This helps you think through your planned approach and get input from your colleagues about how the problem could be approached differently and better. In the process you also get information about who supports your project and who doesn’t. Some organizations require the submission and management approval of an experimental project prior to its implementation.

Documentation of the experiment and results is also needed; research not reported is research not done. Such documentation can be a formal written report or a Microsoft PowerPoint or Apple Keynote presentation. In any event, it is a good approach to present your findings orally before preparing any formal report. In preparing and giving the oral report, you deepen your understanding of the material and identify any weaknesses that may be present, some of which may suggest that additional experiments are needed. The oral presentation will also help identify supporters and detractors of your work and findings.

2.5 Summary and Looking Forward

In this chapter we have discussed the fundamentals of good experimentation that enable the collection of good data. These fundamentals include the size (number of formulations or blends) of typical experimental studies and the evolution of the experimental environment that is fundamental to our proposed strategy, which included two phases: screening and optimization. We also introduced a roadmap for sequential experimentation and modeling of formulation systems and showed how the proposed strategy, concepts, methods, and tools are linked together using the principles of statistical engineering (Hoerl and Snee 2010).

In the next chapter we discuss experimental designs for formulations development when the components can be varied from 0 to 100% of the blend. These are the focus areas:

•   Geometry of the region of experimentation

•   Development of blending models that predict the performance of a blend given the composition of the ingredients in the blend

•   Types of designs including simplex, simplex-centroid, and response surface designs

All of the designs and models are introduced and illustrated with examples.

2.6 References

Cody, R. P. (2008) Cody’s Data Cleaning Techniques Using SAS, 2nd Edition, SAS Institute, Cary, NC.

Hoerl, R. W. and R. D. Snee. (2010) “Statistical Thinking and Methods in Quality Improvement: A Look to the Future.” Quality Engineering, (with discussion) 22 (3), July-September 2010, 119-139.

Hoerl, R. W. and R. D. Snee. (2015) “Guiding Beacon: Using Statistical Engineering Principles for Problem Solving.” Quality Progress, June 2015, 52-54.

Pfeifer, C. G. (1988) “Planning Efficient and Effective Experiments.” Materials Engineering, January 1988, 35-39.

Snee, R. D. and R. W. Hoerl. (2012) “Inquiry on Pedigree: Do you know the quality and origin of your data?” Quality Progress, December 2012, 66-68.
