CHAPTER 22

Measurement Process: Statistical Concepts

None plan to fail;
but most fail to plan
Plan the work,
Work the plan

– Anonymous

SYNOPSIS

A measurement process is like a manufacturing process: it is affected by variation due to ‘common’ and ‘special’ causes. Based on this understanding, the concepts of ‘bias’ and ‘precision’ are developed. Both are essential for ‘calibration’ of an instrument. Precision, termed more ‘precisely’ as gage R&R, is the basis for determining the suitability of an instrument for any application, to ensure that measurement data are not influenced by instrument variation.

Measurement system ‘ideal’ but measurements are not identical

One is familiar with making measurements. Anyone accustomed to measuring similar items soon finds that although the items are similar, their measurements are not. In fact, it is well established that even an ideal measurement system would not produce measurements in which each measurement is identical to that of the standard.

Differences are bound to be present between measurements. In other words, ‘variation’ among measurements is bound to be present even if they were to be obtained from an ‘ideal’ set-up. This has to be recognised. This is the first statistical understanding that one should acquire regarding measurements.

Pattern of variation: measurements

Study the statistical logic flow chart in Figure 22.1.

From Figure 22.1, it can be inferred that measurements obtained from a measurement process which is in ‘control’ follow the statistical law called the Normal Law, shown in Figure 22.2. Hence, the measurements exhibit two statistical properties, namely, ‘location value’ and ‘width value’. Based on these two key parameters, other statistical properties of measurements are derived. These are discussed in the later sections of this chapter. At this juncture, it is pertinent to note that the statistical properties that measurement data produce, namely location and width, determine the quality of the measurement system.

Figure 22.1 Statistical logic flow chart on measurements


 

Figure 22.2 Characteristics of the measurement process


The importance of the Normal Law is brought out in Figure 22.3, whose author is W. J. Youden, an acknowledged expert in the field of statistical design of experiments.

Figure 22.3 Importance of normal law [Ref: Youden, W. J. (1962), Experimentation and Measurement]


Statistical properties of measurement data

Stability

The word stability conjures up various thoughts associated with the words consistent, uniform, homogeneous, less variation, etc. These qualitative but correct comprehensions are concretised as a model from which the parameters of stability can be determined. At this stage, the analysis of measurements enters the realm of the science of statistics, which is briefly dealt with here.

The measurements are obtained, grouped into a frequency distribution and represented graphically as a histogram. If the pattern resembles the normal distribution, it can be said that the measurements were obtained from a system under control and that the system is stable.

There is another statistical method for assessing system stability: analysis of the measurement data with an X-bar and R chart.

To assess system stability, only such statistical methods should be used.
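As a sketch of the X-bar and R chart approach, the following fragment computes trial control limits from a few subgroups of measurements. The data and the subgroup size of five are illustrative assumptions, not from the chapter; A2, D3 and D4 are the standard control-chart constants for subgroups of five.

```python
# Sketch: assessing measurement-system stability with an X-bar and R chart.
# Subgroup data below are illustrative, not from the chapter.
subgroups = [
    [6.0, 5.9, 6.1, 6.0, 5.8],
    [6.1, 6.0, 5.9, 6.2, 6.0],
    [5.9, 6.0, 6.1, 5.9, 6.0],
    [6.0, 6.1, 6.0, 5.8, 6.1],
]

# Standard control-chart constants for subgroup size n = 5
A2, D3, D4 = 0.577, 0.0, 2.114

xbars = [sum(g) / len(g) for g in subgroups]
ranges = [max(g) - min(g) for g in subgroups]
xbarbar = sum(xbars) / len(xbars)   # grand average
rbar = sum(ranges) / len(ranges)    # average range

# Control limits; points outside them signal 'special causes' (instability)
ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar
ucl_r, lcl_r = D4 * rbar, D3 * rbar

stable = (all(lcl_x <= x <= ucl_x for x in xbars)
          and all(lcl_r <= r <= ucl_r for r in ranges))
print(stable)
```

A system whose averages and ranges all fall within these limits shows only common-cause variation and may be judged stable.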

Bias

Location, the value around which individual measurements cluster, is assessed by the ‘average’ of the measurements, represented as X-bar. The measurements taken are compared against a standard/reference value. In an ideal measurement system, the value of X-bar would be the same as the reference value. In practice, X-bar may or may not equal the reference value; there may be a deviation. That is, the difference between the observed value and the reference value is not zero. This difference is known as bias, and bias is rarely ‘zero’, the ideal value. This is illustrated in Figure 22.4. Thus, ‘location’ is satisfactory when X-bar, its statistical measure, is close to the reference value.

Variation

The base or width of the Normal Law represents variation. Width is assessed by the standard deviation of the measurements. Standard deviation quantifies the variation due to common causes. When the standard deviation is as small as desired, the measuring equipment is satisfactory and hence possesses the property of ‘precision’, that is, repeatability.

Figure 22.4 Illustration of bias


Bias and variation: relationship

The relationship between ‘bias’ (location) and ‘repeatability’ is given in Figure 22.5. The following points need to be observed from Figure 22.5.

  1. All four cases are under ‘statistical control’.
  2. Though each case is within ‘statistical control’, only the first one is satisfactory: no bias and the least variability.

Therefore, a measurement system is said to be satisfactory, only when it is in statistical control with respect to ‘bias’ as well as ‘variation’.

Bias: assessment

Consider the data given in Table 22.1.

From Table 22.1, it can be seen that bias is rarely zero and assumes positive and negative values. The values of bias are represented as a histogram in Figure 22.6.

Average bias = Average observed value − Reference value
With reference to data in Table 22.1,
Average bias = 6.0067 − 6.0000 = 0.0067.
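The calculation above can be reproduced directly from the Table 22.1 data:

```python
# Computing average bias from the Table 22.1 data (reference value 6.0)
reference = 6.0
measurements = [5.8, 5.7, 5.9, 5.9, 6.0, 6.1, 6.0, 6.1,
                6.4, 6.3, 6.0, 6.1, 6.2, 5.6, 6.0]

# Bias of each trial is the observed value minus the reference value
biases = [m - reference for m in measurements]
average_bias = sum(biases) / len(biases)
print(round(average_bias, 4))  # 0.0067, matching the chapter's result
```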

Figure 22.5 Relationship between bias and repeatability


TABLE 22.1 Bias Study: Data

Trials    Measured value    Bias w.r.t. reference value 6.0
  1            5.8                  −0.2
  2            5.7                  −0.3
  3            5.9                  −0.1
  4            5.9                  −0.1
  5            6.0                   0.0
  6            6.1                   0.1
  7            6.0                   0.0
  8            6.1                   0.1
  9            6.4                   0.4
 10            6.3                   0.3
 11            6.0                   0.0
 12            6.1                   0.1
 13            6.2                   0.2
 14            5.6                  −0.4
 15            6.0                   0.0

 

Figure 22.6 Bias study—histogram of bias values


Test of significance of bias

Does the fact that ‘bias’ is not ‘zero’ mean that the measurement system is biased? Such an inference is not necessarily valid. A numerical value of ‘bias’, whether positive or negative, can still be consistent with a system that has no ‘bias’. This is judged by the statistical logic of ‘variation due to common causes’: if the variation found in the values of bias is due to ‘common causes’, then the bias values, positive as well as negative, are ‘statistically’ as good as zero. This assessment is made through a ‘statistical test of significance’, the subject matter of Chapter 23.
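The kind of significance test meant here can be sketched as a one-sample t-test on the Table 22.1 bias values. The full treatment belongs to Chapter 23; the critical value 2.145 used below is the standard two-sided t value for 14 degrees of freedom at the 5 per cent level.

```python
import math

# Sketch of a one-sample t-test on the Table 22.1 bias values
# (the formal treatment of such tests is the subject of Chapter 23)
biases = [-0.2, -0.3, -0.1, -0.1, 0.0, 0.1, 0.0, 0.1,
          0.4, 0.3, 0.0, 0.1, 0.2, -0.4, 0.0]

n = len(biases)
mean = sum(biases) / n
# Sample standard deviation of the bias values
s = math.sqrt(sum((b - mean) ** 2 for b in biases) / (n - 1))
# t statistic: how many standard errors the average bias is from zero
t = mean / (s / math.sqrt(n))

t_crit = 2.145  # two-sided t(0.975, df = 14) from standard tables
biased = abs(t) > t_crit
print(round(t, 3), biased)
```

Here the average bias of 0.0067 is well within the common-cause variation of the bias values, so it is ‘statistically’ as good as zero.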

Linearity

The difference in bias throughout the expected operating (measurement) range of the equipment is called linearity. Linearity can be thought of as the change of bias with respect to size. The concept of linearity is illustrated here.

Five parts A, B, C, D and E are chosen as reference parts covering the entire range of the instrument. Each part is measured five times, and the data obtained are given in Table 22.2.

The change in the values of bias from one reference value to another over the range of the instrument is referred to as linearity. In the ‘ideal’ case, the bias would be zero at all the reference values. In other words, there would be no linearity. This is illustrated in Figure 22.7 by plotting the values of bias against the reference values.

TABLE 22.2 Linearity Assessment


 

Figure 22.7 Bias vs. reference value


From Figure 22.7, it can be seen that the practical line of bias differs from the ‘ideal’ line of bias. The issue now is whether the practical line of bias is statistically different from the ideal one. If it is ‘statistically different’, then the instrument is afflicted by linearity, and it has to be probed to ensure that no bias is present over the range. If it is ‘statistically not different’, one can conclude that the instrument is not afflicted by linearity. This analysis calls for (i) fitting the regression line of bias against reference value as y = a + bx, where y is the bias corresponding to its reference value x, and (ii) applying a statistical test of significance on a and b to judge whether each one is significantly different from ‘zero’. If they are not, then statistically y = a + bx is as good as y = 0 for any value of x. If a and b are statistically different from ‘zero’, the line of regression is tenable, linearity exists, and the instrument has to be examined to eliminate ‘bias’ over the entire range. Chapter 24 deals with regression analysis to fit the line as well as with tests of significance on the line of regression.
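The fitting step can be sketched as an ordinary least-squares fit of bias against reference value. Since Table 22.2's figures are not reproduced in the text, the five reference values and average bias figures below are purely illustrative.

```python
# Sketch: fitting the linearity line bias = a + b * reference by least squares.
# Data are illustrative stand-ins for the Table 22.2 study.
refs = [2.0, 4.0, 6.0, 8.0, 10.0]          # reference values of parts A..E
bias = [0.05, 0.02, 0.00, -0.03, -0.06]    # average bias at each reference

n = len(refs)
mx = sum(refs) / n
my = sum(bias) / n

# Least-squares slope b and intercept a
b = (sum((x - mx) * (y - my) for x, y in zip(refs, bias))
     / sum((x - mx) ** 2 for x in refs))
a = my - b * mx
print(round(a, 4), round(b, 4))
```

Whether a and b differ significantly from zero, and hence whether linearity exists, is judged by the regression tests of significance covered in Chapter 24.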

Measurement capability

Capability is a technical term related to the statistical concept of variation/variability. Capability is a measure of the variation due to common causes. Variation is quantified through a statistical measure called the standard deviation, and capability is six times the standard deviation. Measurement capability determines the integrity of the measuring instrument itself. Hence, the measurement capability of an instrument has to be related to what is being measured and its specification, in such a way that instrument variability has no bearing on the measurements made on the product. These aspects are discussed here.
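As a minimal illustration of this definition, measurement capability can be computed as six times the standard deviation of repeated measurements; the Table 22.1 data are used here for illustration.

```python
import math

# Measurement capability as defined here: six times the standard
# deviation of repeated measurements (Table 22.1 data for illustration)
measurements = [5.8, 5.7, 5.9, 5.9, 6.0, 6.1, 6.0, 6.1,
                6.4, 6.3, 6.0, 6.1, 6.2, 5.6, 6.0]
n = len(measurements)
mean = sum(measurements) / n
# Sample standard deviation quantifies the common-cause variation
s = math.sqrt(sum((m - mean) ** 2 for m in measurements) / (n - 1))
capability = 6 * s
print(round(capability, 3))
```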

Relationship: product specification, process capability and measurement capability

Every product has a specification governing each of its quality characteristics. A specification is also a measure of width, that is, of variation. Here the variation is what is allowed as per mutual agreement between customer and producer, or as per mandatory stipulation.

For a process to turn out 100 per cent of the product that meets the specification, the width of the process, that is, variation in the process due to common causes called process capability, has to be superior to that of the product specification. In other words, the process width has to be much smaller than that of product specification width. This has been explained in Chapter 12.

To assure that the output complies with the limits of process capability, the measurement system must ensure that the measurement data are not influenced by the variability of the measurement system itself. This is possible when measurement capability is superior to process capability; in other words, when the width of the measurement system is less than the process width.

Thus specification, process capability, and measurement capability are to be compatible to achieve 100 per cent compliance of the final product to its specification. This interrelationship is portrayed in Figure 22.8.

From Figure 22.8, it can be seen that the best relationship to achieve defect-free output is given as

  1. Measurement capability should be superior to process capability. Measurement then catches drift in the process well in advance, so that timely action can be taken.
  2. Measurement capability should be superior to product tolerance. There is then better discrimination of items at the borderlines (upper and lower limits), and the risk of wrong classification (classifying good as bad and bad as good) is the least.

Figure 22.8 Relationship among product specification, process and measurement system capabilities


Precision

Precision is a general expression used in the context of system variation—width. Closeness of repeated readings to each other is precision. In practical terms, precision is

A combination of ‘repeatability’ and ‘reproducibility’
which is quantified as ‘gage R&R’.

Repeatability

Definition   Repeatability is the variation in measurements obtained with one measurement instrument when used several times by one appraiser while measuring the identical characteristic on the same part. This is the inherent variation or capability of the equipment itself, termed equipment variation (EV). In fact, repeatability is the common-cause (random error) variation from successive trials under defined conditions of measurement. The best term for repeatability is within-system variation, where the conditions of measurement are fixed and defined: fixed part, instrument, standard, method, operator, environment and assumptions. In addition to within-equipment variation, repeatability will include all within variation from any condition in the error model.

Reproducibility

Definition   Reproducibility is typically defined as the variation in the average of the measurements made by different appraisers using the same measuring instrument when measuring the identical characteristic on the same part. This is often true for manual instruments influenced by the skill of the operator. It is not true, however, for the measurement processes (i.e., automated systems) where the operator is not a major source of variation. For this reason, reproducibility is referred to as the average variation between systems or between conditions of measurement.

Gage R&R

Gage R&R (GRR) is an index of the quality of a measuring instrument’s capability, and it is on such an index that the decision on using an instrument for a given need/application is taken.

Gage R&R is an estimate of the combined variation of repeatability and reproducibility. Stated another way, GRR is the variance equal to the sum of within-system and between-system variances.
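Since the variances add, the two widths combine in quadrature. A minimal sketch, with illustrative EV and AV values:

```python
import math

# GRR as the combination of within-system (repeatability, EV) and
# between-system (reproducibility, AV) variation. Values are illustrative.
ev = 0.20   # equipment variation (repeatability)
av = 0.15   # appraiser variation (reproducibility)

# Variances add: GRR^2 = EV^2 + AV^2, so the widths combine in quadrature
grr = math.sqrt(ev ** 2 + av ** 2)
print(round(grr, 4))  # 0.25
```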

Consistency and uniformity

Consistency is the repeatability over time. Uniformity is homogeneity (sameness) of the repeatability over size. In terms of ‘difference in variation’ of the measurements taken, the term consistency refers to variation over time; and uniformity to that of the variation over operating range of the gauge.

Assessment of gage R&R

The average and range method (X-bar and R) provides an estimate of both repeatability and reproducibility for a measurement system. It is dealt with in Youden (1962), which can be referred to.
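A sketch of the average and range method follows. The summary figures (average range, spread of appraiser averages, numbers of parts and trials) are invented for illustration, and the K1/K2 constants are the commonly tabulated values for 3 trials and 3 appraisers; verify them against the tables in the reference before use.

```python
import math

# Sketch of the average and range (X-bar and R) method for gage R&R.
# All input figures are illustrative; K1/K2 are the commonly tabulated
# constants for 3 trials and 3 appraisers (verify against your tables).
rbar = 0.04            # average range of repeated trials, all appraisers
xdiff = 0.05           # range of the appraiser averages
n_parts, n_trials = 10, 3

K1 = 0.5908            # constant for 3 trials
K2 = 0.5231            # constant for 3 appraisers

ev = rbar * K1         # repeatability (equipment variation)
# Reproducibility, corrected for the repeatability contained in xdiff
av_sq = (xdiff * K2) ** 2 - ev ** 2 / (n_parts * n_trials)
av = math.sqrt(av_sq) if av_sq > 0 else 0.0
grr = math.sqrt(ev ** 2 + av ** 2)   # combined gage R&R
print(round(grr, 4))
```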

Applicability criteria: gage R&R, width error

The criteria for whether a measurement system’s variability is satisfactory depend upon the percentage of the manufacturing process variability, or of the part tolerance, that is consumed by measurement system variation. The final acceptance criteria for specific measurement systems depend on the measurement system’s environment and purpose, and should be agreed upon with the customer.

For measurement systems, whose purpose is to analyse a process, the thumb rule for measurement system acceptability is

  • Under 10 per cent error: generally considered to be an acceptable measurement system.
  • 10 to 30 per cent error: may be acceptable based on importance of application, cost of measurement device, cost of repair, etc.
  • Over 30 per cent error: considered not acceptable. Every effort should be made to improve the measurement system.
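The thumb rule above can be expressed as a small classification helper. Here `classify_grr` is a hypothetical name, and the ‘total variation’ argument may be either the process variation or the tolerance, per the criteria above.

```python
# Thumb rule for measurement system acceptability: gage R&R expressed
# as a percentage of total (process) variation or tolerance.
# classify_grr is a hypothetical helper name for illustration.
def classify_grr(grr, total_variation):
    pct = 100.0 * grr / total_variation
    if pct < 10:
        verdict = "acceptable"
    elif pct <= 30:
        verdict = "conditionally acceptable"  # depends on application, cost
    else:
        verdict = "not acceptable"
    return pct, verdict

print(classify_grr(0.05, 1.0))   # (5.0, 'acceptable')
print(classify_grr(0.25, 1.0))   # (25.0, 'conditionally acceptable')
print(classify_grr(0.40, 1.0))   # (40.0, 'not acceptable')
```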

Causes of bias/linearity, and inadequate repeatability and reproducibility

When bias, linearity or inadequate gage R&R is found, the reasons need to be identified and appropriate action needs to be taken. To facilitate this, Table 22.3 lists the possible causes for bias and linearity.

TABLE 22.3 Possible Causes for Bias/Linearity

Instrument needs calibration, reduction of the calibration interval
Worn-out instrument, equipment or fixture
Normal ageing or obsolescence
Poor maintenance: corrosion, rust, cleanliness
Worn-out or damaged master(s), error in master(s): minimum/maximum
Improper calibration (not covering the operating range) or use of the setting master(s)
Instrument design or method lacks robustness
Wrong gage for the application
Different measurement method: setup, loading, clamping, technique
Distortion (gauge or part) changes with part size
Environment: temperature, humidity, vibration, cleanliness
Application: part size, position, operator skill, fatigue, observation error (readability, parallax)

Tables 22.4 and 22.5 list the possible causes of poor gage R&R.

TABLE 22.4 Causes for Poor Repeatability: ‘Within’ a Set-up

Within-part (sample): form, position, surface finish, sample consistency
Within-instrument: repair, wear, equipment or fixture failure, poor-quality maintenance
Within-standard: quality, class, wear
Within-method: variation in setup, technique, zeroing, holding, clamping, point density
Within-appraiser: technique, position, lack of experience, skill or training, feel, fatigue
Within-environment: short-cycle fluctuations in temperature, humidity, vibration, lighting, cleanliness
Violation of an assumption: stable, proper operation
Instrument design or method lacks robustness, poor uniformity
Wrong gage for the application
Distortion (gage or part), lack of rigidity
Application: part, size, position, observation error (readability, parallax)

 

TABLE 22.5 Causes for Poor Reproducibility: ‘Between’ Set-ups

Between-parts (samples): average difference when measuring same parts A, B, C, etc., using the same instrument, operators and method
Between-instruments: average difference using instruments I, II, III, etc., for the same part, operators and environment
Between-standards: average influence of different setting standards in the measurement process
Between-methods: average difference caused by changing point densities, manual versus automated systems, holding or clamping methods, etc.
Between-appraisers (operators): average difference between appraisers X, Y, Z, etc., caused by training, technique, skill and experience
Between-environment: average difference in measurements over time 1, 2, 3, etc., caused by environmental cycles
Violation of an assumption in the study
Instrument design or method lacks robustness
Operator training effectiveness
Application: part, size, position, observation error (readability, parallax), etc.

Note: Set-up comprises samples, instruments, appraisers, environment, methods, standards.



When the statistical analysis indicates the possibility of bias, linearity, poor reproducibility or poor repeatability, the instrument in question needs to be checked and examined to set it right.

Conclusion

The different concepts explained in this chapter are generally not given due importance. With the use of high technology, measurements play a key role in decision making on product and process acceptance. Hence, the calibration of measuring instruments assumes great importance. There is now an exclusive international standard for the quality certification of laboratories. The subject matter of this chapter is essential to meet these demands.
