7
Screening Constrained Formulation Systems

“Our objective is to reduce the number of ‘essential’ components that we have to work with … by finding those components that have no effects and/or equal effects ….”

Ronald D. Snee and Donald W. Marquardt (1976)

Overview

The central theme of this book is the need to have a strategy for your formulations development work. Briefly stated, our proposed strategy, which has been successfully applied and refined for decades, is to first run screening experiments. These are to be followed by optimization experiments in order to better understand the effects of the critical components that were identified in the screening experiments. In this chapter we discuss screening experiments for formulations development when the components are subject to constraints. We discuss the designs and models used to plan and analyze such experiments. Several examples are used to illustrate the formulation situations and the proposed methodology.

CHAPTER CONTENTS

Overview

7.1 Strategy for Screening Formulations

7.2 A Formulation Screening Case Study

7.3 Blending Model and Design Considerations

7.4 Analysis: Estimation of Component Effects

Calculating Component Effects: Examples

7.5 Formulation Robustness

7.6 XVERT Algorithm for Computing Subsets of Extreme Vertices

Eight-Component XVERT Design and Analysis

7.7 Summary and Looking Forward

7.8 References

Plackett-Burman Designs for 12, 16, and 20 Runs

7.1 Strategy for Screening Formulations

A pharmaceutical formulation scientist related the following experience. He had worked with a team on the development of a formulation that involved 11 components. He reported that the work was successful in that a formulation “that worked” was identified. Unfortunately, the experience was not a good one; in fact it was “very painful”. The project took a lot of time; there was a lot of stress, anxiety, and uncertainty. He commented further, “We were never sure that the formulation that we found was even close to the best formulation and whether there were other workable formulations that were less costly”.

There must be a better way. Fortunately, there is.

In Chapter 2 we introduced a strategy for developing formulations. This strategy called for performing screening experiments before conducting optimization experiments.  In Chapter 5 we showed how to design and analyze screening experiments when the components could be varied from 0 to 100% of the mixture. In Chapter 6 we discussed how to design and analyze response surface designs when there were upper and lower bounds on the components.

In this chapter we move the methodology one step further and show how to conduct screening experiments when there are lower and upper bounds on the components. When there are only lower bounds on the components, the design region is still a simplex when expressed in terms of pseudo-components, and therefore simplex-screening designs discussed in Chapter 5 can be used. A different approach is needed when the components are subject to both lower and upper constraints.

We begin by referring to the 7-component pharmaceutical tablet formulation study introduced in Chapter 1. This is followed by a discussion of the blending model and design considerations used to develop formulation screening designs. Next, the calculation and interpretation of component effects when there are upper and lower bounds on the components are discussed and illustrated with examples.

Sections 7.5 and 7.6 discuss the construction of formulation screening designs. One case discusses the use of screening designs to assess the robustness of a formulation. The authors of the study (Martinello et al. 2006) constructed the design using the D-Optimality criterion. The other case involves an eight-component blending study in which the screening design was constructed using the XVERT algorithm.

7.2 A Formulation Screening Case Study

In Chapter 1 we introduced a pharmaceutical tablet study (Martinello et al. 2006) that investigated a formulation of the compound paracetamol, which was known to have poor flowability (a measure of how well the material flows through the tableting equipment) and poor compressibility. The study involved seven ingredients varied over the following ranges.

Component Low Level High Level
Microcel 50 88
KollydonVA64 10 25
Flowlac 0 25
KollydonCL30 0 10
PEG 400 0 10
Aerosil 0 3
MgSt 0.5 2.5

Nine responses were measured; of particular interest were repose angle, compressibility, and water content (Table 7.2). The 19-blend extreme vertices design shown in Table 7.1 was used to define the formulations to be tested. The design was selected using the D-Optimality criterion, which is discussed below.

 

Table 7.1 – Pharmaceutical Tablet Compactability Study Design

Blend Microcel Kollydon VA64 Flowlac Kollydon CL30 PEG  400 Aerosil MgSt
1 0.58 0.165 0.125 0.05 0.05 0.015 0.015
2 0.615 0.25 0 0 0.1 0.03 0.005
3 0.5 0.25 0.245 0 0 0 0.005
4 0.5 0.25 0.025 0.1 0.1 0 0.025
5 0.595 0.25 0 0.1 0 0.03 0.025
6 0.5 0.1 0.245 0 0.1 0.03 0.025
7 0.875 0.1 0 0 0 0 0.025
8 0.58 0.165 0.125 0.05 0.05 0.015 0.015
9 0.5 0.1 0.245 0.1 0 0.03 0.025
10 0.525 0.1 0.25 0 0.1 0 0.025
11 0.865 0.1 0 0 0 0.03 0.005
12 0.595 0.25 0 0 0.1 0.03 0.025
13 0.58 0.165 0.125 0.05 0.05 0.015 0.015
14 0.5 0.25 0.245 0 0 0 0.005
15 0.695 0.1 0 0.1 0.1 0 0.005
16 0.58 0.165 0.125 0.05 0.05 0.015 0.015
17 0.695 0.1 0 0.1 0.1 0 0.005
18 0.515 0.1 0.25 0.1 0 0.03 0.005
19 0.58 0.165 0.125 0.05 0.05 0.015 0.015

As in any screening experiment, we want to know which of the components are most important as measured by their effect (positive or negative) on the response. This information will enable us to answer the following questions:

•   Can we select a formulation based on the results of this experiment?

•   Is additional experimentation needed?

•   If so, which components should be the focus of future experimentation?

It is not unusual for a screening experiment to provide the information needed to choose a desirable formulation. Such was the case in this particular study. If additional experimentation is needed, the screening experiment results provide a firm basis for designing the optimization experiment.

In this case, a seven-term linear blending model was developed and used to identify an optimal formulation, which, when tested, produced measured responses that were very close to those predicted by the linear blending model. A discussion of the calculation and interpretation of the component effects is presented in Section 7.4.

Table 7.2 – Pharmaceutical Tablet Compactability Study Response Measurements

Blend Repose Angle Compressibility Water Content
1 20.93 30.49 2.38
2 18.87 27.49 2.4
3 43.87 31.46 2.35
4 45.8 35.91 2.35
5 12.23 30.65 2.87
6 10.73 30.75 2.45
7 42.9 33.25 2.25
8 16.97 30.45 1.79
9 16.9 29.97 2.5
10 39.67 33.82 1.54
11 14.07 30.05 2.51
12 14.6 30.46 2.15
13 16.43 31.34 1.5
14 45.43 34.12 2.25
15 43.77 35.35 2.19
16 18 29.66 1.97
17 48.1 34.43 2.15
18 16.93 30.09 1.65
19 21.37 30.21 2.05

7.3 Blending Model and Design Considerations

As discussed in Chapter 5, a linear blending model of the following form is used to analyze the results of a screening study.

E(y) = b1x1 + b2x2 + … + bqxq

This model is fit to data using the Analyze Platform in JMP: Analyze ► Fit Model. As we will see later, the coefficients (bi) in the model are used to calculate the effects of the different components.
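For readers who want to reproduce this fit outside JMP, here is a minimal sketch in R. It assumes the blend proportions and the response for the tablet study have been placed in a data frame named blends with columns x1 through x7 and y; these names are our own, chosen for illustration.

# Fit the linear blending model with no intercept, so that
# E(y) = b1*x1 + b2*x2 + ... + b7*x7.
fit <- lm(y ~ 0 + x1 + x2 + x3 + x4 + x5 + x6 + x7, data = blends)
summary(fit)$coefficients   # coefficient estimates bi, standard errors, t-ratios, p-values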

So, from an experimental design perspective, we want to select a design that will provide good estimates of the coefficients in the linear blending model. Elfving (1952) showed that you need consider only the vertices of the experimental region. The collection of vertices, or a subset thereof, is called an extreme vertices design. As the number of components increases, the number of vertices also increases. In some cases the number of vertices can be quite large--sometimes > 500. Snee and Marquardt (1974) showed that good experimental designs could be constructed from a subset of the vertices.

The question is how to select a good subset of the vertices. There are two useful ways to select a subset of the vertices, both using computer algorithms:

•   Optimum Design Algorithm

•   XVERT Algorithm

The optimal design algorithm is the preferred method because such algorithms are built into statistical software such as JMP, which also includes routines to analyze the resulting data. The XVERT algorithm is easy to implement in a spreadsheet and produces very good designs when optimal design software is not available.

Here are the different types of design optimality criteria:

Criteria Objective
D-Optimality Minimizes the volume of the confidence region for the model coefficients.
I-Optimality Minimizes the average prediction variance over the experimental region.
G-Optimality Minimizes the maximum prediction variance over the points in the design.
A-Optimality Minimizes the sum of the variances of the coefficients in the model. Particularly useful for linear model designs. Sometimes called trace optimality.

The A-Optimality criterion minimizes the variance of the coefficients in the model. As such, it is particularly useful in creating designs for linear blending models. A-Optimality is sometimes referred to as T- or Trace-Optimality, as the trace of the coefficient variance-covariance matrix is the criterion to be minimized. The XVERT algorithm is designed to minimize this trace. Montgomery (2013) and Goos and Jones (2011) provide additional information about the use of optimality criteria in designing experiments.

The D-Optimality and I-Optimality criteria are probably the most widely used in practice. The D criterion tends to select points on the boundary of the region. The I criterion includes points in the interior of the region. An effective strategy is to use more than one criterion to select designs and then compare the designs, basing the comparison on the context and subject matter of the particular formulation being studied.
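Because more than one criterion may be used, it helps to see how simply the D and A criteria can be scripted. The sketch below, written in R, is our own illustration; X is assumed to be the n x q matrix of component proportions for a candidate design (the model matrix of the linear blending model), and the function name is ours.

# Score a candidate design for the linear blending model.
# X: n x q matrix of component proportions (one row per blend).
design_criteria <- function(X) {
  M <- t(X) %*% X              # information matrix X'X for the no-intercept model
  c(D = det(M),                # D: a larger det(X'X) gives a smaller joint confidence region
    A = sum(diag(solve(M))))   # A: trace of (X'X)^-1, proportional to the sum of coefficient variances
}

A design with a larger D value and a smaller A value is preferred. The I and G criteria involve the prediction variance over the region or the design points and are most conveniently computed by design software such as JMP.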

The number of points in the design is an important consideration; the bare minimum design size is a number of points equal to the number of terms in the model. Such a design is often referred to as a saturated design and is not generally recommended. As discussed in Chapter 6, a general guideline is to select a design size that is 5 to 10 points more than the number of terms in the model. If it is important to include some replicates in the design to estimate experimental variation (error), then “5+5” is a good strategy: 5 points more than the number of parameters, with 5 of the design points duplicated. The five duplicated points provide five degrees of freedom for estimating pure error, which gives the lack-of-fit F test adequate power.

7.4 Analysis: Estimation of Component Effects

As noted in Chapter 5, the purpose of screening experiments is to estimate component effects. Recall that formulas to calculate component effects for simplex designs were discussed in Chapter 5. Those formulas are generalized here to take into account the different ranges of the components.

Central to the calculation of the component effects is the reference blend, which is typically chosen as the centroid of the experimental region defined by the upper and lower bounds on the components. Component effects can be calculated in two ways as summarized below:

Formulation Region Reference Formulation Component Effect Direction
Simplex Centroid of the Simplex Component Axes
Constrained Region Centroid of the Region Cox or Piepel Direction

A Cox effect direction (Cox 1971) for a given component is a line connecting the reference formulation to that component’s vertex of the overall simplex. Along a Cox effect direction, an increase (or decrease) in the component of interest is offset by decreases (or increases) in the other components in the formulation, holding constant the relative proportions (i.e., ratios) of the components in the reference formulation. An example is shown in Figure 7.1.

Figure 7.1 – Constrained Formulation Region and Reference Mixture with Cox Effect Directions


A Piepel effect direction (Piepel 1982) for a given component is a line connecting the reference formulation (expressed in pseudo-components) to that component’s vertex of the pseudo-component simplex. The pseudo-component simplex is the simplex defined by the lower bounds on each component such that the resulting simplex encloses the constrained region. Piepel showed that the Cox effect direction may lead to extrapolating the fitted model beyond the constrained region when assessing component effects. An example of Piepel effect directions and the pseudo-component simplex is given in Figure 7.2.

Figure 7.2 – Constrained Formulation Region and Reference Mixture with Piepel Effect Directions


We believe that the Cox effect direction is generally the more useful and understandable of the two because it investigates how a response variable will change if an individual component increases or decreases relative to the reference formulation. This concept of a component effect has existed in several areas of science for many years. One can always compute the effects using both the Cox and Piepel directions, compare the results, and see what difference in conclusions may result. More discussion of the Cox and Piepel directions is provided by Snee and Piepel (2013).

The effect of a component over its full range (from its lower bound to its upper bound), using the Cox or Piepel effect directions, can be calculated as follows:

Ei = (Ri / (1 – si)) (bi – (b1s1 + b2s2 + … + bqsq))

The terms in the calculation are as follows:

•   Ri is the range of component xi (its upper bound minus its lower bound).

•   bi is the coefficient of xi in the linear blending model.

•   si is the amount of xi in the reference blend.

Effects using the Piepel effect directions are computed by expressing the components in pseudo-components and using the effect formula above.
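The formula translates directly into code. The short R function below is our own rendering of it; to obtain effects along the Piepel directions, the coefficients, reference blend, and ranges would first be expressed in pseudo-components.

# Ei = (Ri / (1 - si)) * (bi - (b1*s1 + ... + bq*sq))
# b: linear blending model coefficients; s: reference blend; R: component ranges.
component_effects <- function(b, s, R) {
  (R / (1 - s)) * (b - sum(b * s))
}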

Recall from Chapter 5 that Cox effects and their standard errors can be calculated directly in JMP. This is done in Fit Model by clicking the options triangle after fitting the linear blending model, and selecting “Estimates,” and then “Cox Estimates.”

Calculating Component Effects: Examples

Snee (2011) provides a useful example for studying the difference between the component effects calculated using the Cox and Piepel effect directions. This formulation study involves four components with lower and upper bounds. Nine formulations were tested (Table 7.3). The reference formulation (centroid of the region) is (0.20, 0.30, 0.064, 0.436).

Table 7.3 – Formulation Study Example

Formulation X1 X2 X3 X4 Y
1 0.25 0.40 0.10 0.25 102.6
2 0.25 0.40 0.03 0.32 102.0
3 0.25 0.20 0.10 0.45 101.2
4 0.15 0.40 0.03 0.42 101.7
5 0.25 0.20 0.03 0.52 100.7
6 0.15 0.20 0.10 0.55 100.7
7 0.15 0.40 0.10 0.35 101.8
8 0.15 0.20 0.03 0.62 100.2
9 0.20 0.30 0.06 0.44 101.2
Centroid 0.20 0.30 0.064 0.436  

The model coefficients and component effects calculated using the Cox and Piepel effect directions are shown in Table 7.4. In this example, the component effects for the Cox and Piepel effect directions are almost identical with a correlation coefficient of 1.000 (rounded to three decimals). This near equality is specific to this example and should not be interpreted as a general result.

Table 7.4 – Model Coefficients and Component Effects for Example

Component Model Coefficient Coefficient Std Error Effect Cox Direction Effect Piepel Direction
X1 103.2 0.8 0.24 0.27
X2 104.5 0.4 0.91 1.01
X3 104.0 1.3 0.20 0.24
X4 97.9 0.3 -2.22 -2.23
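As a check on the calculations, the following R sketch keys in the Table 7.3 data, fits the linear blending model, and applies the Section 7.4 effect formula along the Cox directions. The object names are our own, and the component ranges are read off the design in Table 7.3.

# Table 7.3 data (our transcription)
ex <- data.frame(
  x1 = c(0.25, 0.25, 0.25, 0.15, 0.25, 0.15, 0.15, 0.15, 0.20),
  x2 = c(0.40, 0.40, 0.20, 0.40, 0.20, 0.20, 0.40, 0.20, 0.30),
  x3 = c(0.10, 0.03, 0.10, 0.03, 0.03, 0.10, 0.10, 0.03, 0.06),
  x4 = c(0.25, 0.32, 0.45, 0.42, 0.52, 0.55, 0.35, 0.62, 0.44),
  y  = c(102.6, 102.0, 101.2, 101.7, 100.7, 100.7, 101.8, 100.2, 101.2))

fit <- lm(y ~ 0 + x1 + x2 + x3 + x4, data = ex)   # linear blending model
b   <- coef(fit)                                  # should agree closely with Table 7.4
s   <- c(0.20, 0.30, 0.064, 0.436)                # reference blend (centroid of the region)
R   <- c(0.10, 0.20, 0.07, 0.37)                  # component ranges read from Table 7.3

(R / (1 - s)) * (b - sum(b * s))                  # Cox-direction effects; compare with Table 7.4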

Snee and Piepel (2013) report that the Cox and Piepel component effects were also calculated for several other examples to see what differences were found and how the conclusions might change. The examples involved 3 to 11 components. There was very good agreement between the component effects calculated using the Cox and Piepel effect directions. The correlation coefficient was generally more than 0.99, which suggests there was little difference in the Cox and Piepel effects. This experience is not proof of a general finding, since counterexamples with significant differences can be constructed.

These results support the recommendation that the Cox effect direction should be the primary one used for constrained formulation regions. To be safe, one can always calculate the component effects with both the Cox and Piepel effect directions and compare the results.

Pharmaceutical Tablet Compactability Study – Calculation of Component Effects

The component effects for the Pharmaceutical Tablet Compactability Study Design discussed in Section 7.2 above are summarized in Table 7.5. These effects were computed using the Cox direction.

Table 7.5 – Pharmaceutical Tablet Compactability Study: Component Effects

Component Repose Angle Effect Repose Angle p-Value Compressibility Effect Compressibility p-Value Water Content Effect Water Content p-Value
Microcel 15.23 0.018 0.55 0.264 -0.10 0.770
KollydonVA64 3.89 0.260 -0.10 0.737 0.21 0.316
Flowlac 2.81 0.434 0.00 0.996 -0.38 0.099
KollydonCL30 2.76 0.413 0.50 0.093 0.00 0.999
PEG 400 1.66 0.635 0.23 0.436 -0.26 0.226
Aerosil -29.27 <.0001 -1.68 <.0001 0.14 0.492
MgSt -2.38 0.484 0.34 0.251 0.10 0.610
             
RMSE 5.97   1.28   0.36  
AdjRSQ 82   67   -4  
Model F-Ratio 14.61 0.000 7.11 0.002 0.88 0.537
Average 26.71   31.58   2.17  
No. Blends 19   19   19  

In Table 7.5 we see that the Microcel has a significant effect on the tablet repose angle. Aerosil has a significant effect on tablet repose angle and compressibility. None of the components have a significant effect on water content. Regarding repose angle, the Aerosil effect is much larger than the Microcel effect. We see this in the component effect scatter plots in Figure 7.3. These plots were constructed using the Fit Y by X command in JMP: Analyze ► Fit Y by X.

We also note in Figure 7.3 the curvilinear effect of the Aerosil component on repose angle and compressibility. Importantly, the linear effect is much stronger than the curvature effect--hence, the linear effect is sufficient to identify the component as important. In our experience this is a general rule: linear effects are generally stronger than curvature effects.

Interpreting Component Effects. We conclude this section with some guidance on the interpretation of component effects. When evaluating component effects to better understand formulation systems, we recommend the following:

1.   Evaluate the component effects plots using the Cox effect directions to assess the nature and magnitude of the component effects.

2.   Use the component effects plots to identify components that may have similar blending behavior. Assess whether similar blending of these components is supported by subject-matter science and business knowledge. If so, fit a reduced model that combines the components that are postulated to have similar blending. The value of reduced models is discussed in Chapter 10.

3.   Study the component effects plot and calculate the component effects to identify components that have no effect. The appropriate next step for a component that has no effect depends on the objectives of the experiment. Options include setting the component at any desirable level within the range studied or, if the lower end of its range is at or near zero, removing the component from the formulation.

In all cases, the selected action should take into account subject-matter science and business knowledge.

We have found that this approach typically results in an increased understanding of the formulation system being studied.

Figure 7.3 – Pharmaceutical Tablet Compactability Study Design: Plots of Tablet Repose Angle and Compressibility versus Aerosil and Microcel


7.5 Formulation Robustness

An important consideration in experimentation with formulations is robustness. How sensitive is the performance of the formulation to small variations in the amounts of the different components? This question can be answered by conducting a screening experiment centered at the target formulation.

The benefits of a pharmaceutical tablet formulation robustness study can be seen in the following example. The objective of the experiment was to see whether minor variations in the levels of a five-component formulation had an effect on the performance of the formulation. Here are the components and selected ranges:

Component Low Level High Level
Drug 0.16 0.17
Lactose 0.35 0.45
Microcrystalline Cellulose (MCC) 0.35 0.45
Starch 0.03 0.05
Magnesium Stearate 0.004 0.006

It was decided to construct a 15-blend design consisting of 14 vertices and the overall centroid of the region defined by the vertices. This results in 10 more blends than there are coefficients in the linear blending model. It was expected that the linear blending model would provide an adequate fit as the components were being varied over a small range. Elfving (1952) showed that for the linear model one needs only to include the vertices of the region in the design. None of the blends were repeated as the experimental and test variation was well known. The resulting 15-blend design and tablet measurements are shown in Tables 7.6 and 7.7.

Table 7.6 – Pharmaceutical Tablet Formulation Robustness Study Blends (component levels in percent)

Blend Drug Lactose MCC Starch Magnesium Stearate
1 16 45 35.6 3 0.4
2 17 42.4 35 5 0.6
3 16 35.4 45 3 0.6
4 17 35 44.6 3 0.4
5 17 44.4 35 3 0.6
6 17 42.6 35 5 0.4
7 16 35 43.4 5 0.6
8 16 45 35.6 3 0.4
9 17 35 44.4 3 0.6
10 16 45 35.4 3 0.6
11 16 43.4 35 5 0.6
12 17 42.6 35 5 0.4
13 17 35 44.6 3 0.4
14 16 35 43.6 5 0.4
15 16.5 39.5 39.5 4 0.5

  

Table 7.7 – Pharmaceutical Tablet Formulation Robustness Study: Blend Measurements

Blend Weight Thickness Hardness Dissolution (45min) Assay (%) Content Uniformity  (RSD %)
1 99.6 3.18 3.7 94.6 98.0 3.3
2 101.0 3.51 4.1 96.1 99.4 2.0
3 98.9 3.20 3.8 94.1 100.8 2.0
4 99.7 3.29 4.7 93.4 100.1 1.6
5 100.5 3.25 3.8 96.8 101.9 1.8
6 100.2 3.19 3.5 97.9 100.2 1.5
7 101.2 3.19 4.0 94.4 99.0 2.6
8 100.3 3.28 3.9 93.5 98.9 2.1
9 100.8 3.35 3.7 98.8 102.5 1.2
10 100.2 3.21 3.8 95.1 99.8 1.9
11 100.6 3.17 4.0 93.5 99.9 1.5
12 100.2 3.26 4.5 96.3 98.8 2.0
13 99.9 3.45 4.7 99.9 102.4 1.3
14 100.2 3.11 4.0 94.7 100.7 2.5
15 100.4 3.26 3.8 94.2 98.0 2.0
Specification 95-105 2.8-3.8 3-7 >75 90-110 <5.0

First we note in Table 7.7 that all the test results are well within specification for all responses. This indicates that the variations in component levels specified by the design (Table 7.6) did not produce any out-of-specification results (Table 7.7).

The results of fitting the linear blending model to the data and computing the component effects are summarized in Table 7.8.

 

Table 7.8 – Pharmaceutical Tablet Formulation Robustness: Component Effects

Tablet Property Average Std Dev Coefficient of Variation (%) Model Adjusted R-Square Model p-Value Formulation Robust?
Weight 100.2 0.58 0.6 24 0.152 Yes
Thickness 3.26 0.11 3.3 26 0.140 Yes
Hardness 4.0 0.36 9.1 6 0.361 Yes
Dissolution 95.6 2.0 2.1 27 0.133 Yes
Assay (%) 100.0 1.4 1.4 29 0.119 Yes
Content Uniformity 1.9 0.54 28.4 22 0.168 Yes

In Table 7.8 we see the following:

1.   The model p-values are all large, varying from 0.119 to 0.361. Thus, there is no evidence of a relationship between the component levels and the various tablet measurements over these small ranges of variation.

2.   The adjusted R2 statistics are all low, indicating that the model explains little of the variation in the data. This is consistent with the non-significant model p-values.

3.   The coefficient of variation is small for each of the responses and consistent with the level of variation typically observed for these measurements. The coefficient of variation is simply the standard deviation divided by the average, expressed as a percentage.

Based on this analysis and the fact that all test results were within specification, it was concluded that the formulation was robust to slight variations in the formulation ingredients, when varied over the ranges studied in this experiment. None of the components had any significant effect over the ranges studied.
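A minimal sketch of this analysis for a single response (tablet weight) is shown below, written in R under the assumption that the Table 7.6 proportions and Table 7.7 measurements have been combined in a data frame named robust with component columns drug, lactose, mcc, starch, and mgst; the column names are ours, not those of the original study.

fit0 <- lm(weight ~ 1, data = robust)                        # mean-only model
fit1 <- lm(weight ~ 0 + drug + lactose + mcc + starch + mgst,
           data = robust)                                    # linear blending model

cv_pct  <- 100 * sd(robust$weight) / mean(robust$weight)     # coefficient of variation (%)
adj_rsq <- 1 - (sum(residuals(fit1)^2) / df.residual(fit1)) /
               var(robust$weight)                            # adjusted R-square relative to the mean
model_p <- anova(fit0, fit1)[2, "Pr(>F)"]                    # large p: blending adds little beyond the mean

c(CV = cv_pct, AdjRSq = adj_rsq, ModelP = model_p)

Degrees-of-freedom conventions for mixture models differ slightly among software packages, so these values may not match Table 7.8 exactly; the qualitative conclusions (large p-values, low adjusted R-square, small coefficients of variation) are what matter.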

7.6 XVERT Algorithm for Computing Subsets of Extreme Vertices

The XVERT algorithm is used to select a subset of the vertices of a region defined by lower and upper constraints when JMP software or some other optimal design software is not available. The use of the JMP Custom Design algorithm is discussed in Chapter 8. The resulting design produces good estimates of the coefficients in the linear model (Elfving 1952). XVERT can also be used to compute all the vertices of a constrained region and accomplishes the same objective as the McLean and Anderson (1966) algorithm discussed in Chapter 6. The XVERT algorithm requires fewer computations than the McLean and Anderson algorithm and, as noted, can also be used to select a subset of the vertices.

The XVERT algorithm has five basic steps:

1.   Rank the components in order of increasing ranges (bi - ai): x1 has the smallest range; xq has the largest range.

2.   Form a two-level factorial design from the upper and lower bounds of the q-1 components with the smallest ranges (i.e., a 2^(q-1) factorial design).

3.   Compute the level of the qth component:

xq = 1 – (x1 + x2 + x3 + …. + xq-1)

4.   A given point is an extreme vertex if the calculated level of xq from Step 3 falls between the lower and upper bounds of xq: aq ≤ xq ≤ bq. For those points that fall outside the constraint limits, set xq equal to either the upper or the lower limit, whichever is closer to the computed value.

5.   From each point originally outside the limits, generate additional points (max = q-1) by adjusting the level of one, and only one, component by an amount equal to the difference between the computed value for xq and the substituted upper or lower limit. Adjusting more than one component at a time produces points that are not vertices. Additional extreme vertices are only those points whose adjusted component levels will remain within the limits of the components.

XVERT is also used to generate a subset of the vertices by starting with a fraction of a 2^(q-1) factorial design. The Plackett-Burman designs are particularly useful for this purpose (Snee and Marquardt 1974). These designs exist for n = 4, 8, 12, 16, 20, and so on. Designs for 12, 16, and 20 runs are included at the end of the chapter. Other fractions of two-level factorial designs can also be used.

The subset of the extreme vertices is selected using the following rules:

1.   Compute the level of xq for all points in the design:

xq = 1 – (x1 + x2 + x3 + …. + xq-1)

2.   All points where aq < xq < bq form the core of the design. These points are called core points because, in this design selection algorithm, they are automatically included in all designs evaluated, thereby forming the central portion of the design.

3.   Each point whose level of xq must be adjusted creates additional points that form a candidate subgroup. If a candidate subgroup has only one point, the point is added to the core points and the subgroup is eliminated from further consideration.

4.   As with the McLean and Anderson algorithm, some blends will be generated more than once. Extra points should be deleted from the list of core and candidate subgroup points so that each blend appears only once in the final list. The design will consist of the core points and one point from each candidate subgroup.

5.   In the illustrative example discussed below there are three components. A 2x2 factorial design is used to generate the vertices. When x3 is computed so that the component levels sum to one, two of the points (B, C) satisfy the constraints on the components. The other two points (A, D) can be adjusted to satisfy the constraints. These two points create two candidate subgroups: A1 and A2; D1 and D2. The resulting four-point design consists of points B, C, A1 or A2, and D1 or D2.

6.   To identify the best design, a series of candidate designs is developed by creating all possible combinations of the core points and one point from each candidate subgroup. The quality of the design is determined using the A-Optimality criterion, which minimizes the sum of the variances of the coefficients in the linear blending model (Montgomery 2013).

Experience with the XVERT algorithm has shown that a good design (one close to the best) can be developed from the core points and the first point from each candidate subgroup (Snee and Marquardt 1974). This is our recommended procedure when computer software that can evaluate all possible designs is not available. The computation of the vertices is easily done using a spreadsheet. Creating the software in R allows the evaluation of all possible designs.

The XVERT computations will be illustrated using the three-component example discussed in Section 6.3. Here are the component ranges in this example:

Component Minimum (a) Maximum (b) Range (b – a)
C1 0.2 0.6 0.4
C2 0.1 0.6 0.5
C3 0.1 0.5 0.4

Here are the components ranked in order of increasing ranges:

Component Minimum (a) Maximum (b) Range (b – a)
X1 = C1 0.2 0.6 0.4
X2 = C3 0.1 0.5 0.4
X3 = C2 0.1 0.6 0.5

The core points are generated by using the levels of a 2^(q-1) = 2^(3-1) = 2^2 (2x2) factorial design to determine the levels of the first q-1 = 3-1 = 2 factors. The level of xq = x3 is given as follows:

x3 = 1.0 - (x1 + x2)

Here are the resulting points:

Point X1 X2 X3 Comment
A 0.2 0.1 0.7 Out of Limits
B 0.6 0.1 0.3 Vertex
C 0.2 0.5 0.3 Vertex
D 0.6 0.5 -0.1 Out of Limits

Note that points A-D form a 2^2 design in x1 and x2. Points B and C both have x3 = 0.3, which is within the 0.1 to 0.6 range for x3. The levels of x3 for points A and D (x3 = 0.7 and -0.1, respectively) are not within the range specified for x3, so points A and D must be adjusted to meet the constraint on x3. Point A has x3 reduced by 0.1 to 0.6, the upper limit on x3. Point D has x3 increased by 0.2 to 0.1, the lower limit on x3. Either x1 or x2 can be adjusted to compensate for the alteration of x3, producing points A1 and A2 from point A and points D1 and D2 from point D, as shown below.

Candidate Subgroup Point X1 X2 X3 X3 Adjustment
  A 0.2 0.1 0.7  
1 A1 0.2 0.2 0.6 -0.1
1 A2 0.3 0.1 0.6 -0.1
           
  D 0.6 0.5 -0.1  
2 D1 0.6 0.3 0.1 0.2
2 D2 0.4 0.5 0.1 0.2

Each of the adjusted points produces two additional points, resulting in a total of six extreme vertices.

Point Vertex X1 X2 X3
B 3 0.6 0.1 0.3
C 4 0.2 0.5 0.3
A1 2 0.2 0.2 0.6
A2 5 0.3 0.1 0.6
D1 1 0.6 0.3 0.1
D2 6 0.4 0.5 0.1

These vertices are the same as those computed earlier using the McLean and Anderson algorithm with the exception that the positions of components 2 and 3 are reversed because of the reordering of the components.
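Because the steps are purely arithmetic, they are easy to script. The R sketch below is our own code using the bounds from the ranked-components table above; it reproduces the six extreme vertices, although the row order may differ from the listing above.

lower <- c(0.2, 0.1, 0.1)   # lower bounds for x1, x2, x3 (components ranked by increasing range)
upper <- c(0.6, 0.5, 0.6)   # upper bounds for x1, x2, x3
q <- length(lower)

# Step 2: two-level factorial in the q-1 components with the smallest ranges
base <- expand.grid(x1 = c(lower[1], upper[1]), x2 = c(lower[2], upper[2]))

vertices <- list()
for (i in seq_len(nrow(base))) {
  x  <- as.numeric(base[i, ])
  xq <- 1 - sum(x)                                      # Step 3: level of the last component
  if (xq >= lower[q] && xq <= upper[q]) {
    vertices[[length(vertices) + 1]] <- c(x, xq)        # Step 4: the point is already a vertex
  } else {
    xq_adj <- if (xq > upper[q]) upper[q] else lower[q] # Step 4: snap xq to the nearer limit
    delta  <- xq - xq_adj                               # amount to be absorbed by another component
    for (j in 1:(q - 1)) {                              # Step 5: adjust one component at a time
      x_new    <- x
      x_new[j] <- x_new[j] + delta
      if (x_new[j] >= lower[j] && x_new[j] <= upper[j]) {
        vertices[[length(vertices) + 1]] <- c(x_new, xq_adj)
      }
    }
  }
}
do.call(rbind, vertices)   # the six extreme vertices (row order may differ from the table above)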

To illustrate how the XVERT algorithm selects a subset of the vertices, we will construct a four-point design. Points A and D generate two candidate subgroups of two points each, resulting in four possible designs.

Candidate Design Points
1 B C A1 D1
2 B C A1 D2
3 B C A2 D1
4 B C A2 D2

Using the rule stated earlier, we would choose the B, C, A1, D1 design because it contains the core points (B, C) and the first point in each of the two candidate subgroups (A1, D1). In this particular example, it happens that this is the best design by all of the statistical optimality criteria discussed in Section 7.3. We note that in this case you would typically run all the vertices, including the overall centroid. We discuss the four-point design here to illustrate the use of the XVERT algorithm in selecting a subset of the vertices.
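Following up on the earlier remark about implementing the evaluation in software, the R sketch below is our own code for scoring the four candidate designs on the A criterion, the trace of (X'X)^-1 that the XVERT selection seeks to minimize. The design with the smallest value is preferred; for larger problems the same calculation is simply repeated over all combinations of one point per candidate subgroup.

# Vertices of the three-component example
pts <- rbind(B  = c(0.6, 0.1, 0.3), C  = c(0.2, 0.5, 0.3),
             A1 = c(0.2, 0.2, 0.6), A2 = c(0.3, 0.1, 0.6),
             D1 = c(0.6, 0.3, 0.1), D2 = c(0.4, 0.5, 0.1))

# The four candidate four-point designs
designs <- list(c("B", "C", "A1", "D1"), c("B", "C", "A1", "D2"),
                c("B", "C", "A2", "D1"), c("B", "C", "A2", "D2"))

sapply(designs, function(d) {
  X <- pts[d, ]                     # 4 x 3 model matrix for the linear blending model
  sum(diag(solve(t(X) %*% X)))      # A criterion: trace of (X'X)^-1 (smaller is better)
})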

Eight-Component XVERT Design and Analysis

The eight-component formulation study discussed by Snee and Marquardt (1976) illustrates the use of XVERT in developing a design for a linear mixture model. Work on a new product has progressed to the point at which it is necessary to determine the importance of eight candidate ingredients. Economic considerations require that the final formulation contain four of the ingredients (x1, x2, x5, and x6).

The effects of the other components are unknown, so it is decided to study all eight components over the following ranges:

Component Lower Upper Component Lower Upper
X1 0.10 0.45 X5 0.10 0.60
X2 0.05 0.50 X6 0.05 0.20
X3 0 0.10 X7 0 0.05
X4 0 0.10 X8 0 0.05

The resulting region has 182 vertices. In screening designs, we like to evaluate approximately 5 to 10 more blends than the number of components in the study (i.e., n should vary from n = q + 5 to n = q + 10). In this example, then, we ought to test 13 to 18 blends. There are sufficient time and funds available to test 20 blends. Hence, it is decided to use a 16-run extreme vertices screening design plus four replicate centroid points that would be interspersed at equal time intervals throughout the random testing sequence of the 16 blends.

The XVERT design and the measured responses are summarized in Table 7.9. The design was constructed using the first seven columns of the 16-run Plackett-Burman (1946) design shown at the end of this chapter. Finding the best XVERT design, which consists of the core points and one point from each of the candidate subgroups, would require the evaluation of 3 x 2 x 4 x 2^5 = 768 candidate designs, as the eight candidate subgroups contain 3, 2, 4, 2, 2, 2, 2, and 2 points, respectively. When software to do this is not available, we recommend that the design consist of the core points and the first point in each candidate subgroup; that is the design shown in Table 7.9. The overall centroid, used for the four replicate blends, is the average of the 16 vertices in the design. The centroid (average) of all 182 vertices could also be used if software is available to do the associated computations.

The component scatter plots are shown in Figure 7.4, which was constructed using the Fit Y by X  command in JMP: Analyze ►Fit Y by X. These exploratory plots are used to get a feel for the strength, magnitude, and direction of the component effects. In Figure 7.4 we see that the largest effects are due to components 1, 2, and 5. The component effects are summarized in Table 7.10 and shown graphically in Figure 7.5, which was constructed using the JMP Quality and Process platform (Analyze ► Quality and Process ► Variability/Attribute Gauge Chart).

In Figure 7.5 we see the large positive effect of component 5 and the negative effects of components 1 and 2 confirming the trends that we saw in Figure 7.4. Components 3 and 6 have no significant effect. The results of this screening experiment can be summarized as follows:

•   Components with equal effects: x1 and x2, x3 and x4, and x7 and x8.

•   Components with no significant effect: x3 and x6.

•   Curvature effects are not significant (p=0.317), Adj. R2= 0.97.

 

Table 7.9 – Eight-Component Screening Example

Blend Run Order X1 X2 X3 X4 X5 X6 X7 X8 Response (y)
1 18 0.1 0.5 0 0 0.1 0.2 0.05 0.05 30
2 3 0.1 0.05 0 0 0.55 0.2 0.05 0.05 113
3 11 0.1 0.5 0 0.1 0.1 0.2 0 0 17
4 17 0.15 0.05 0 0.1 0.6 0.05 0.05 0 94
5 1 0.1 0.05 0.1 0 0.55 0.2 0 0 89
6 13 0.1 0.5 0.1 0.1 0.1 0.05 0 0.05 18
7 7 0.1 0.05 0.1 0.1 0.55 0.05 0 0.05 90
8 12 0.4 0.05 0.1 0.1 0.1 0.2 0.05 0 20
9 16 0.35 0.05 0.1 0.1 0.1 0.2 0.05 0.05 21
10 8 0.3 0.5 0 0 0.1 0.05 0 0.05 15
11 6 0.1 0.5 0.1 0 0.2 0.05 0.05 0 28
12 14 0.45 0.05 0 0 0.45 0.05 0 0 48
13 4 0.45 0.2 0 0.1 0.1 0.05 0.05 0.05 18
14 19 0.45 0.15 0 0.1 0.1 0.2 0 0 7
15 2 0.45 0.25 0.1 0 0.1 0.05 0.05 0 16
16 9 0.45 0.1 0.1 0 0.1 0.2 0 0.05 19
17 5 0.259 0.222 0.05 0.05 0.244 0.125 0.025 0.025 38
18 10 0.259 0.222 0.05 0.05 0.244 0.125 0.025 0.025 30
19 15 0.259 0.222 0.05 0.05 0.244 0.125 0.025 0.025 35
20 20 0.259 0.222 0.05 0.05 0.244 0.125 0.025 0.025 40

Considering that high values of the response are desired and that the product must contain components x1, x2, x5, and x6, the suggested next steps are illustrated below:

Component Effect Action Regarding Component
X1 and X2 Large Negative Minimize amounts
X4 Small Negative Minimize amounts or delete
X5 Large Positive Consider increasing level
X3 and X6 Not significant (p=0.667) Set between max and min levels studied in the experiment
X7 and X8 Small Positive Consider studying over wider range

Figure 7.4 – Eight-Component Screening Example: Plots of Response versus Component Levels


Table 7.10 – Eight-Component Screening Example: Component Effects and Associated 95% Confidence Limits

Component Range Reference Effect Effect Std Error t-Ratio p-Value Lower Conf Limit Upper Conf Limit
X1 0.35 0.259 -34.32 3.55 -9.67 <.0001 -42.05 -30.76
X2 0.45 0.222 -28.66 3.65 -7.85 <.0001 -36.61 -25.01
X3 0.10 0.050 -4.42 2.88 -1.54 0.1503 -10.69 -1.54
X4 0.10 0.050 -6.21 2.89 -2.15 0.0524 -12.50 -3.33
X5 0.50 0.244 73.46 3.86 19.02 <.0001 65.04 77.32
X6 0.15 0.125 1.24 2.91 0.43 0.677 -5.10 4.16
X7 0.05 0.025 6.47 2.86 2.27 0.0429 0.24 9.33
X8 0.05 0.025 7.66 2.89 2.65 0.0212 1.36 10.55

Figure 7.5 – Eight-Component Screening Example: Plot of Component Effects and Associated 95% Confidence Intervals


7.7 Summary and Looking Forward

Screening designs are an effective strategic tool that enables formulation scientists to obtain good information about the blending behavior of components while running a small number of test blends. In the process, the most critical components are identified, and both the duration and the cost of formulations development are reduced. Computer algorithms are available in software such as JMP to construct the blending designs, compute the component effects, and produce the associated graphical displays.

In Chapter 8 we discuss how to design and analyze quadratic model designs--that is, response surface designs--when the components are subject to upper and lower constraints. The focus will be on using the optimality algorithms mentioned in this chapter to construct designs that meet the needs of the study in a minimum number of runs.

7.8 References

Cox, D. R. (1971) “A note on polynomial response functions for mixtures.” Biometrika, 58 (1), 155-159.

Elfving, G. (1952) “Optimum Allocation in Linear Regression Theory.” The Annals of Mathematical Statistics, 23 (2), 255-262.

Goos, P. and B. Jones. (2011) Optimal Design of Experiments: a Case Study Approach, John Wiley & Sons, Hoboken, NJ.

Martinello, T., T. M. Kaneko, M. V. R. Velasco, M. E. S. Taqueda, and V. O. Consiglieri. (2006) “Optimization of Poorly Compactable Drug Tablets Manufactured by Direct Compression using the Mixture Experimental Design.” International Journal of Pharmaceutics, 322 (1-2), 87-95.

McLean, R. A. and V. L. Anderson. (1966) “Extreme Vertices Design of Mixture Experiments.” Technometrics, 8 (3), 447-454.

Montgomery, D. C. (2013) Design and Analysis of Experiments, 8th Edition, John Wiley & Sons, Hoboken, NJ.

Piepel, G. F. (1982) “Measuring component effects in constrained mixture experiments.” Technometrics, 24 (1), 29-39.

Plackett, R. L. and J. P. Burman. (1946) “The Design of Optimum Multifactorial Experiments.” Biometrika, 33 (4), 305-325.

Snee, R. D. and D. W. Marquardt. (1974) “Extreme vertices designs for linear mixture models.” Technometrics, 16 (3), 399-408.

Snee, R. D. and D. W. Marquardt. (1976) “Screening concepts and designs for experiments with mixtures.” Technometrics, 18 (1), 19-29.

Snee, R. D. (2011) “Understanding Formulation Systems – A Six Sigma Approach.” Quality Engineering, 23 (3), 278-286.

Snee, R. D. and G. F. Piepel. (2013) “Assessing Component Effects in Formulation Systems.” Quality Engineering, 25 (1), 46-53.

Plackett-Burman Designs for 12, 16, and 20 Runs

Plackett-Burman 12-Run Design

Run X1 X2 X3 X4 X5 X6 X7 X8 X9 X10 X11
1 1 -1 1 -1 -1 -1 1 1 1 -1 1
2 1 1 -1 1 -1 -1 -1 1 1 1 -1
3 -1 1 1 -1 1 -1 -1 -1 1 1 1
4 1 -1 1 1 -1 1 -1 -1 -1 1 1
5 1 1 -1 1 1 -1 1 -1 -1 -1 1
6 1 1 1 -1 1 1 -1 1 -1 -1 -1
7 -1 1 1 1 -1 1 1 -1 1 -1 -1
8 -1 -1 1 1 1 -1 1 1 -1 1 -1
9 -1 -1 -1 1 1 1 -1 1 1 -1 1
10 1 -1 -1 -1 1 1 1 -1 1 1 -1
11 -1 1 -1 -1 -1 1 1 1 -1 1 1
12 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1

Plackett-Burman 16-Run Design

Run X1 X2 X3 X4 X5 X6 X7 X8 X9 X10 X11 X12 X13 X14 X15
1 1 1 1 1 -1 1 -1 1 1 -1 -1 1 -1 -1 -1
2 1 1 1 -1 1 -1 1 1 -1 -1 1 -1 -1 -1 1
3 1 1 -1 1 -1 1 1 -1 -1 1 -1 -1 -1 1 1
4 1 -1 1 -1 1 1 -1 -1 1 -1 -1 -1 1 1 1
5 -1 1 -1 1 1 -1 -1 1 -1 -1 -1 1 1 1 1
6 1 -1 1 1 -1 -1 1 -1 -1 -1 1 1 1 1 -1
7 -1 1 1 -1 -1 1 -1 -1 -1 1 1 1 1 -1 1
8 1 1 -1 -1 1 -1 -1 -1 1 1 1 1 -1 1 -1
9 1 -1 -1 1 -1 -1 -1 1 1 1 1 -1 1 -1 1
10 -1 -1 1 -1 -1 -1 1 1 1 1 -1 1 -1 1 1
11 -1 1 -1 -1 -1 1 1 1 1 -1 1 -1 1 1 -1
12 1 -1 -1 -1 1 1 1 1 -1 1 -1 1 1 -1 -1
13 -1 -1 -1 1 1 1 1 -1 1 -1 1 1 -1 -1 1
14 -1 -1 1 1 1 1 -1 1 -1 1 1 -1 -1 1 -1
15 -1 1 1 1 1 -1 1 -1 1 1 -1 -1 1 -1 -1
16 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1

Plackett-Burman 20-Run Design

Run X1 X2 X3 X4 X5 X6 X7 X8 X9 X10 X11 X12 X13 X14 X15 X16 X17 X18 X19
1 1 -1 1 1 -1 -1 -1 -1 1 -1 1 -1 1 1 1 1 -1 -1 1
2 1 1 -1 1 1 -1 -1 -1 -1 1 -1 1 -1 1 1 1 1 -1 -1
3 -1 1 1 -1 1 1 -1 -1 -1 -1 1 -1 1 -1 1 1 1 1 -1
4 -1 -1 1 1 -1 1 1 -1 -1 -1 -1 1 -1 1 -1 1 1 1 1
5 1 -1 -1 1 1 -1 1 1 -1 -1 -1 -1 1 -1 1 -1 1 1 1
6 1 1 -1 -1 1 1 -1 1 1 -1 -1 -1 -1 1 -1 1 -1 1 1
7 1 1 1 -1 -1 1 1 -1 1 1 -1 -1 -1 -1 1 -1 1 -1 1
8 1 1 1 1 -1 -1 1 1 -1 1 1 -1 -1 -1 -1 1 -1 1 -1
9 -1 1 1 1 1 -1 -1 1 1 -1 1 1 -1 -1 -1 -1 1 -1 1
10 1 -1 1 1 1 1 -1 -1 1 1 -1 1 1 -1 -1 -1 -1 1 -1
11 -1 1 -1 1 1 1 1 -1 -1 1 1 -1 1 1 -1 -1 -1 -1 1
12 1 -1 1 -1 1 1 1 1 -1 -1 1 1 -1 1 1 -1 -1 -1 -1
13 -1 1 -1 1 -1 1 1 1 1 -1 -1 1 1 -1 1 1 -1 -1 -1
14 -1 -1 1 -1 1 -1 1 1 1 1 -1 -1 1 1 -1 1 1 -1 -1
15 -1 -1 -1 1 -1 1 -1 1 1 1 1 -1 -1 1 1 -1 1 1 -1
16 -1 -1 -1 -1 1 -1 1 -1 1 1 1 1 -1 -1 1 1 -1 1 1
17 1 -1 -1 -1 -1 1 -1 1 -1 1 1 1 1 -1 -1 1 1 -1 1
18 1 1 -1 -1 -1 -1 1 -1 1 -1 1 1 1 1 -1 -1 1 1 -1
19 -1 1 1 -1 -1 -1 -1 1 -1 1 -1 1 1 1 1 -1 -1 1 1
20 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1