Chapter Ten
Quality Management Techniques

B. G. Dale, B. Dehe and D. Bamford

Introduction

This chapter provides an overview of six core quality management techniques, together with ‘Six Sigma’, a strategic improvement approach often deployed as part of an organization's improvement process:

  1. Quality Function Deployment
  2. Design of Experiments
  3. Failure Mode and Effects Analysis
  4. Statistical Process Control
  5. Benchmarking
  6. Business Process Re-engineering and Value Stream Mapping
  7. Six Sigma

Quality Function Deployment

With thanks to I. Ferguson and B. G. Dale (2007)

Quality Function Deployment (QFD) is a systematic procedure which is used to help build quality into the upstream processes and also into new product development. It helps to avoid problems in the downstream production and delivery processes and will consequently shorten the new product/service development time. The concept helps to promote proactive rather than reactive development by capturing and measuring the ‘voice of the customer’.

QFD is a technique that is used in the first place for translating the needs of the customers into design requirements, being based on the philosophy that the ‘voice of the customer’ drives all company operations. It requires reliable data from the following diverse sources: customers, design functionality, costs and capital, reliability, reproducibility.

It employs a step-by-step approach from customer needs and expectations through the four planning phases of:

  • Product planning
  • Product development
  • Process planning
  • Production planning through to manufactured products and delivered services

The technique of QFD seeks to identify those features of a product or service which satisfy the real needs and requirements of customers (market- or customer-required quality). A critical part of the analysis is that it takes into account discussions with the people who actually use the product in order to obtain data on issues such as:

  • What do they feel about existing products?
  • What bothers them?
  • What features should new products have?
  • What is required to satisfy their needs, expectations, thinking and ideas?
  • How and where is the product used?

Understanding Customer Needs

The voice of the customer is the cornerstone of QFD. Hence, talking and listening to the customer is paramount to understanding their real needs and requirements; of the three methods outlined below, the preferred method is direct contact with the customer.

  1. Direct contact with the customer:
    • Customer questionnaires
    • Face-to-face discussions with customers
    • Consumer contact.
  2. Failure-related information shows where customer needs are not being met and includes:
    • Field-failure data
    • Warranty returns
    • Customer complaints
    • Consumer association reports.
  3. Survey:
    • Market surveys
    • Dealer information
    • Trade shows
    • Test marketing
    • Product reports, as typically reported in trade magazines
    • Product to market share trend information
    • Competitive data.

In using these methods for understanding customer requirements, typical issues that need to be considered include:

  • What is wrong with the product and/or service
  • Performance features that delight the customer
  • The ‘if only’ factor and, in particular, when, how, and by whom it is used.

The QFD Road

The main objectives are to:

  • Identify customer requirements
  • Determine competitive opportunities
  • Determine substitute quality characteristics
  • Pinpoint requirements for further study.

An example of the ‘house of quality’ derived from the product-planning phase of QFD is shown in Figure 10.1.

Diagram shows the house of quality matrix, with relationships marked as a solid circle (strong), an open circle (medium) or a triangle (weak).

Figure 10.1 The house of quality

Source: Ferguson and Dale 2007:388

In simple terms, the key elements of the product planning stage comprise the following.

The project

The scope of the project should be clearly outlined, including targets, operating constraints and time scale. A clearly defined mission statement should be produced and a team formed. It is useful to create a business model which includes market definition and size, product life history, competitive products and prices, projected sales, prices and costs, and the estimated capital requirements and likely payback.

Customer needs

Gathering the voice of the customer can be done in different ways, as previously detailed. The information gathered can be entered into a chart similar to that shown in Figure 10.2, complete with full information on why the product is needed, for what purposes, who uses it and when, and where and how it is used. This information provides the basis for more easily translating the customer's voice into customer needs which can be satisfied by design features. For example, ‘In the UK, mainly men will use the mobile phone while on the move’ will translate the needs of that group of customers into a requirement for one-handed operation of the phone, including the ability to dial and hold from the same hand. The corresponding design features will then include the width and depth of the phone, the button areas and the button depression forces, etc. (see Figure 10.1).

Diagram shows a table with columns for customer classification, voice of the customer, what, who, when, where and how.

Figure 10.2 Gathering the voice of the customer and interpreting it into customer needs

Source: Ferguson and Dale 2007:389

Customer priorities and competitive comparisons and planned improvements

This is the key to prioritization and decision-making on critical design features, which will be a common thread throughout all the stages of the QFD process. The columns to the right in Figure 10.1 are used in the following way.

  1. Degree of importance (column 1). Information gathered during customer surveys, together with team knowledge, is the key for grading each ‘need’ on a scale of 1 to 5, with 5 being the most important.
  2. Our company rating (column 2). Listed here is an objective view of the company's standing against each customer need, from the perception of the customer, on a scale of 1 to 5, with 5 being very good and 1 being poor. As much information as can be obtained from impartial sources should be used in this analysis.
  3. Competitors' ratings (columns 3 and 4). Similar sources to those used for column 2 will provide this information for the major competitors. Benchmarking should be used to supplement the information acquired in this way.
  4. Planned level (column 5). This is the company's strategy for the new or modified product, influenced by competitive issues and strategic policy objectives.
  5. Improvement ratio (column 6). This is obtained by dividing the planned level by the company rating.
  6. Sales point (column 7). A maximum of 1.5 is given for a strong marketing feature, down to 1.0 for expected features. Only two or three such points should feature in this analysis; it is here that ‘excitement’ qualities are taken into consideration.
  7. Importance weight (column 8). The result of multiplying the degree of importance by the improvement ratio and by the sales point.
  8. Relative weight (column 9). This figure is obtained by expressing each importance weight as a percentage of the sum of all the weights.
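The arithmetic behind the improvement ratio, importance weight and relative weight columns can be sketched in a few lines of Python. All the customer needs, ratings and sales points below are invented purely for illustration:

```python
# Illustrative sketch of the QFD planning-column arithmetic.
# Each tuple: (need, importance 1-5, company rating 1-5, planned level 1-5, sales point)
needs = [
    ("Quick to turn on",  4, 3, 5, 1.5),
    ("Easy to hold",      5, 4, 4, 1.0),
    ("Long battery life", 3, 2, 4, 1.2),
]

rows = []
for name, importance, rating, planned, sales_point in needs:
    improvement_ratio = planned / rating                              # column 6
    importance_weight = importance * improvement_ratio * sales_point  # column 8
    rows.append((name, improvement_ratio, importance_weight))

total = sum(w for _, _, w in rows)
for name, ratio, weight in rows:
    relative_weight = 100 * weight / total                            # column 9, a percentage
    print(f"{name:18s} ratio={ratio:.2f} weight={weight:.2f} relative={relative_weight:.1f}%")
```

Because the relative weights are percentages of the total, they always sum to 100, which is what allows them to be carried forward as a common prioritization thread through the later QFD phases.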

Design features or requirements

This is a very challenging step for engineers. The key is to look for characteristics, features and technical requirements that express the customers' needs and are recognizable as quality features of the product, rather than finite design specifications. This assists in examining the best option for a number of criteria.

The central relationship matrix: the whats vs. the hows

The centre block of the house of quality shown in Figure 10.1 represents the relationship strength of each customer need with every design feature. The solid circle symbol represents a strong relationship, the open circle a medium relationship, and the triangle a weak relationship. These relationships are usually equated to numbers 9, 3, and 1 respectively. The difference between them represents a means of emphasizing a design feature that is very important over one that is less so.

If a customer need has no relationship with any design feature, this is highlighted by an empty row, indicating that the need will not be satisfied. Conversely, a design feature with no relationship to any customer need results in an empty column, indicating that the design feature is not required from a customer perspective.

Relative weights of importance

This calculation indicates the strength of each design feature required in relation to other design features, and the priority, from the customer's perspective, of the need that created the design feature. The weight of importance of each design feature is calculated by multiplying the relative weight of the customer need by the relationship strength designated in the central matrix. For example, from Figure 10.1 ‘Quick to turn on’ has a customer relative weight of 6.7 and is satisfied by the design feature ‘On/off response time’. The relationship between them is strong (9), so 9 × 6.7 = 60.3 is one component of the importance weight of the ‘On/off response time’ design feature.
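The matrix arithmetic can be sketched as follows. Apart from the 9 × 6.7 pairing of ‘Quick to turn on’ with ‘On/off response time’ taken from the worked example above, every need, feature and weight is invented for illustration:

```python
# Sketch: ranking design features by summing (relationship strength x
# customer relative weight) down each column of the central matrix.
STRONG, MEDIUM, WEAK = 9, 3, 1

customer_relative_weights = {"Quick to turn on": 6.7, "Clear display": 5.0}

# relationships[need][feature] = relationship strength (9, 3 or 1)
relationships = {
    "Quick to turn on": {"On/off response time": STRONG},
    "Clear display":    {"Screen contrast ratio": STRONG,
                         "On/off response time": WEAK},
}

feature_weights = {}
for need, links in relationships.items():
    for feature, strength in links.items():
        feature_weights[feature] = (feature_weights.get(feature, 0.0)
                                    + strength * customer_relative_weights[need])

print(feature_weights)  # 'On/off response time' gains 9 x 6.7 = 60.3 from the first need
```

The 9/3/1 weighting means a strong relationship dominates the column total, which is precisely how the matrix emphasizes a design feature that is very important over one that is less so.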

Design feature interactions: the hows vs. the hows

Each design feature needs reconciling with other design features. This is recorded in the roof of the house of quality. Its purpose is to relate the interactions to the proposed target values of design features. A positive relationship is an opportunity to reduce a value that may help to reconcile an interacting negative relationship. Negative relationships require determined design alternatives to weaken the relationship, as they are potential sources of conflict and quality assurance problems.

Target values

Each design feature should have a target value assigned to it in order to act as a benchmark in the choice of design concepts at a later stage in the process. The target value will normally be best in class and one that will satisfy the customer to the point of delight. The values are not design specifications and could well be enhanced as the QFD process proceeds. They will certainly be equal to or better than any competitively benchmarked design. These target values may be modified in the light of the information contained in the roof of the house. The reconciliation between relationships is helped by declaring the feature that is a constraint and adjusting the other value according to its ideal value.

Technical comparisons

Technical comparisons are made with the design features, both from the company's existing product range and also those competitive ranges which are under investigation.

The comparisons may be made on some form of quantitative scale or on a ‘same’, ‘better’, and ‘not so good’ basis. Reference will be made to competitive designs where the feature has a higher assessment, and if this cannot be bettered it should be adopted. The customer's evaluation of the company's product and that of its competitors should also be considered. In theory, the engineer's technical evaluation and the customer's evaluation should agree. If this is not the case then the target value chosen is not perceived as the best one.

Service information and special requirements

Service information affecting design features from warranty, complaints, field failures, defect records, internal quality costs, and product performance is recorded. The purpose of this is to ensure that concepts and design work later in the process will eliminate these faults. Safety items, special regulatory items, and environmental issues affecting any design feature are also recorded. The purpose of this is that any concept or product definition must be seen to satisfy these requirements.

Implementation of QFD

The following are key steps in the effective implementation of QFD:

  • Management issues
    • The process must be driven by senior management.
    • Appropriate resources and the provision of training need to be allocated and actioned by management.
    • Appoint a steering committee and a QFD champion.
    • Use of a project management system to act as a communication vehicle.
  • Project issues
    • Select the first project with a limited time frame and a good chance of an early success.
    • Establish a time frame for the project at the outset and keep to it.
    • Have a clear project definition and objectives and always have them in view; it is also important to identify any project limitations and operating constraints. This helps to create a focus on what is being done.
    • Develop a clear market definition and business model.
    • Provide a glossary of terms used in the QFD process.
  • The QFD team
    • Train as a team, using as many company-specific examples as possible.
    • Establish a core team, which is multidisciplinary, of between five and seven people.
    • Hold short, regular team meetings.
    • Do detailed work outside the meeting and use the meeting for analysis and decision-making. It is important that each member of the team is prepared to make a significant time commitment to the project.
    • One of the cornerstones of QFD is the customer's voice – it is best for all the team to be part of the data-collection process which is involved in listening to that voice.
    • It takes longer, but consensus decisions generally work best.
    • Team energy can be created by paying attention to direction, structure, project management, human issues.
  • Methods of working
    • Do as much concurrent work as possible (e.g. competitive benchmarking with existing product and process designs).
    • Create a planning matrix of customer needs to decide what should go into a house of quality; some items will be better achieved by traditional means.
    • Keep a realistic perspective on the detail entered about customer needs. Focus on the important, the difficult, and the new.
    • Try to ensure that the house of quality is kept to within an approximately 30 × 30 matrix.
    • Use the customer's voice and benchmarks as major decision-makers to achieve best in class.

Summary of QFD

The QFD process provides a powerful structure for product and process development. When it is used in an effective manner it can bring a correct customer focus to designs that will perform to a high degree of satisfaction with reliability and cost-worthiness. It shortens the development cycle for the design and results in fewer engineering changes. In this way the product/service which the customer receives not only meets their needs but also, if the customer interface has been done correctly, there can be unexpected product features which will cause delight and product loyalty. There will also be a common thread through all operations which is traceable back to what the customer really wants.

Design of Experiments

With thanks to I. Ferguson and B. G. Dale (2007)

The design of experiments is a series of techniques that involve the identification and control of parameters which have a potential impact on the performance and reliability of a product design and/or the output of a process, with the objective of optimizing product design, process design and process operation, and limiting the influence of noise factors. The methodology is used to analyse the significance of effects on system outputs of different values of design parameters. The objective is to optimize the values of these design parameters to make the performance of the system immune to variation. The concept can be applied to the design of new products and processes or to the redesign of existing ones, in order to:

  • Optimize product design, process design and process operation.
  • Achieve minimum variation of best system performance.
  • Achieve reproducibility of best system performance in manufacture and use.
  • Improve the productivity of design engineering activity.
  • Evaluate the statistical significance of the effect of any controlling factor on the outputs.
  • Reduce costs.

There are several experimental methodologies, such as trial and error, full factorial, fractional factorial and the Taguchi method. In this section only an overview of the Taguchi technique is provided.

Taguchi: An Overview of his Approach

Design of experiments historically required a great deal of statistical knowledge and understanding, which most industrial users of experiments found somewhat intimidating. Over the years much effort has been devoted to simplifying the task of experimentation. In the late 1970s, the work of Genichi Taguchi on experimental design made what is regarded by many as a major breakthrough in its application. Dr Taguchi was a statistician and electrical engineer who was involved in rebuilding the Japanese telephone system, and has been involved in applying design of experiments in the Japanese electronics industry for over 30 years. Since the 1980s, Taguchi (1986) has been an acknowledged worldwide consultant in his methodology. He promotes three distinct stages of designing in quality:

  • System design: the basic configuration of the system is developed. This involves the selection of parts and materials and the use of feasibility studies and prototyping. In system design technical knowledge and scientific skills are paramount.
  • Parameter design: the numerical values for the system variables (product and process parameters – termed factors) are chosen so that the system performs well, no matter what disturbances or noises (i.e. uncontrollable variables) are encountered by the system (i.e. robustness). The objective is to identify optimum levels for these control factors so that the product and/or process is least sensitive to the effect of changes in noise factors. The experimentation pinpoints this best combination of product/process parameter levels. The emphasis in parameter design is on using low-cost materials and processes in the production of the system; it is a key stage of designing in quality.
  • Tolerance design: the third stage in the design process, not to be confused with ‘tolerancing’. The tolerance design process uses experimental design to investigate the effect on the variance of the output characteristic of:
    • Product design: choosing the upper specification limit (USL) and lower specification limit (LSL) around the nominals of key design parameters that have been prescribed by the parameter design study. Having done this, reconciling the choice of limits of the factors in the design that are predicted to cause most variation, with, typically, the cost of reducing the tolerance gap, or the choice of more expensive materials.
    • Process design: choosing the USL and LSL around the nominals of key process factors that have been prescribed by the parameter design study. Having done this, reconciling the choice of limits of the factors in the process that are predicted to cause most variation, with, typically, the cost of reducing the tolerance gap, or the choice of more expensive methods.

Taguchi's approach also addresses the following:

  • Determining the quality level, as expressed in his loss function concept.
  • Improving the quality level in a cost-effective manner by parameter and tolerance design.
  • Monitoring the quality level using SPC. A feedback/feed-forward closed-loop system is also recommended.
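The loss function referred to in the first point is conventionally written L(y) = k(y − m)² for a ‘nominal is best’ characteristic, where m is the target value and k is fixed by the cost incurred at the tolerance limit. A minimal sketch, with illustrative target, tolerance and cost values:

```python
# Sketch of Taguchi's quadratic loss function L(y) = k * (y - m)**2.
# The target m, tolerance half-width delta and limit cost A are invented
# for illustration; k = A / delta**2 fixes the constant.
m = 10.0        # target (nominal) value of the quality characteristic
delta = 0.5     # half-width of the tolerance band
A = 20.0        # cost incurred when y deviates by delta from target
k = A / delta**2

def loss(y):
    """Monetary loss attributed to a single unit with output y."""
    return k * (y - m) ** 2

print(loss(10.0))   # on target: zero loss
print(loss(10.5))   # at the specification limit: equals A
print(loss(10.25))  # halfway to the limit: a quarter of A
```

The quadratic form captures Taguchi's central argument: loss grows continuously as the output drifts from target, rather than jumping from zero to A only when a specification limit is crossed.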

Taguchi's methods (i.e. engineering, experimental design and data analysis) have proven successful both in Japan and the West, and those organizations which have adopted his methods have succeeded in making continuous improvement. There is little doubt that his work has led to increased interest by engineers in a variety of approaches and methodologies relating to design of experiments. He has provided a technique to analyse the effects of control factors on variability with respect to noise. However, it should not be overlooked that a number of other people have made significant improvements with the other approaches to experimental design.

Steps in Experimental Design

Based on Ferguson (1995), the key steps in designing and running a fractional factorial experiment are outlined briefly below.

  • Step 1: Define the project objectives.
  • Step 2: Select critical characteristics.
  • Step 3: Determine the issues that affect the critical characteristics.
  • Step 4: Identify control factors and noise factors.
  • Step 5: Select the control factors to be optimized during the experiment.
  • Step 6: Choose the orthogonal array and assign factors to columns in the array.
  • Step 7: Choose the levels of the control factors.
  • Step 8: Choose sample size.
  • Step 9: Organize the experiment and carry it out.
  • Step 10: Analyse the data.
  • Step 11: Predict the result of the confirmation run.
  • Step 12: Interpret the confirmation run and decide if the project is finished.
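Steps 6 to 10 can be illustrated with the smallest orthogonal array, the L4, which studies three two-level control factors in four runs. The response values below are invented purely to show the main-effects arithmetic:

```python
# Sketch of a fractional factorial analysis on an L4 orthogonal array:
# three two-level factors (A, B, C) assigned to columns, four runs.
l4 = [
    # (A, B, C) factor levels for each run, coded 1 or 2
    (1, 1, 1),
    (1, 2, 2),
    (2, 1, 2),
    (2, 2, 1),
]
responses = [20.0, 24.0, 30.0, 34.0]  # one measured output per run (illustrative)

def main_effect(factor_index):
    """Mean response at level 2 minus mean response at level 1."""
    lvl = {1: [], 2: []}
    for run, y in zip(l4, responses):
        lvl[run[factor_index]].append(y)
    return sum(lvl[2]) / len(lvl[2]) - sum(lvl[1]) / len(lvl[1])

for i, name in enumerate("ABC"):
    print(name, main_effect(i))
```

The balance of the orthogonal array is what makes this simple averaging valid: each level of each factor appears with each level of every other factor equally often, so the level means isolate one factor's effect at a time.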

Summary of DoE

Experimental design using a variety of matrices which suit different conditions is a key technique for understanding the effect of each controllable factor, be it a product or a process design, in minimizing variation while centring the output on a target value. It is a major technique in investigating quality problems. Statistical design of experiments is a complex subject, but it is possible to develop ‘easy-to-use’ methods.

Failure Mode and Effects Analysis

With thanks to J. R. Aldridge and B. G. Dale (2007)

This section provides an overview of the concept of failure mode and effects analysis (FMEA), and its value as a planning tool to assist with building quality into an organization's product, service and processes (Dale and Shaw 1990a).

The technique of FMEA was developed around 1962 in the aerospace and defense industries as a method of reliability analysis, risk analysis and risk management. It is a systematic and analytical quality planning tool for identifying, at the product, service and process design and development stages, what might potentially go wrong, either with a product (during manufacture, or during end-use by the customer), or with the provision of a service, thereby aiding fault diagnosis. The use of FMEA is a powerful aid to advanced quality planning of new products and services, and can be applied to a wide range of problems which may occur in any system or process. Its effective use should lead to a reduction in:

  • Defects during the production of initial samples and in volume production
  • Customer complaints
  • Failures in the field
  • Performance-related deficiencies (these are less likely if a detailed development plan is generated from the design FMEA)
  • Warranty claims
  • Safety concerns.

In addition, there will be improved customer satisfaction and confidence as products and services are produced from robust and reliable production and delivery methods. It also has relevance in the case of product liability.

What is Failure Mode and Effects Analysis?

There are two categories of FMEA: design and process. A design FMEA assesses what could, if not corrected, go wrong with the product in service and during manufacture as a consequence of a weakness in the design. Design FMEA also assists in the identification or confirmation of critical characteristics. On the other hand, process FMEA is mainly concerned with the reasons for potential failure during manufacture and in service as a result of non-compliance with the original design intent, or failure to achieve the design specification.

The procedure involved in the development of FMEA examines ways in which a product, service or process can fail, and is known as progressive iteration. In brief, it involves the following steps:

  • The function of the product, service and/or process is agreed, along with suitable identifications.
  • Potential failure modes are identified.
  • The effects of each potential failure are assessed and summarized.
  • The causes of potential failure are examined.
  • Current controls for the detection of the failure mode are identified and reviewed.
  • A Risk Priority Number (RPN) is determined; the details are provided below.
  • The corrective action which is to be taken to help eliminate potential concerns is decided.
  • The potential failure modes in descending order of RPN are the focus for improvement action to reduce/eliminate the risk of failure occurring.
  • The recommendations, corrective actions and counter-measures which have been put into place are monitored and reviewed for effectiveness.

The RPN comprises an assessment of occurrence, detection and severity of ranking and is the multiplication of the three rankings:

  • The occurrence is the likelihood that a specific cause will result in the identified failure mode, and is based on perceived or (in the case of process capability) estimated probability. It is ranked on a scale of 1–10.
  • The detection criterion relates, in the case of a design FMEA, to the likelihood of the design verification programme pinpointing a potential failure mode before it reaches the customer; a ranking of 1–10 is again used. In the process FMEA, the detection criterion relates to the existing control plan.
  • The severity of effect, on a scale of 1–10, indicates the likelihood of the customer noticing any difference in the functionality of the product or service.

The resulting RPN should always be checked against past experience of similar products, services and situations.
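The RPN calculation and the descending-order review described above can be sketched as follows; the failure modes and their 1–10 rankings are invented for illustration:

```python
# Sketch of the RPN calculation: occurrence x detection x severity,
# each ranked 1-10, then failure modes reviewed in descending RPN order.
failure_modes = [
    # (description, occurrence, detection, severity) -- illustrative values
    ("Bolt under-torqued",   4, 7, 8),
    ("Wrong material grade", 2, 5, 9),
    ("Label misprinted",     6, 3, 2),
]

ranked = sorted(
    ((occ * det * sev, name) for name, occ, det, sev in failure_modes),
    reverse=True,
)
for rpn, name in ranked:
    print(f"RPN {rpn:4d}  {name}")
```

Note that the multiplication can mask a high individual ranking: a severity of 9 with low occurrence and detection scores still yields a modest RPN, which is one reason the text advises checking the resulting numbers against past experience rather than relying on them mechanically.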

The requisite information and actions are recorded on a standard format in the appropriate columns. An example of a process FMEA from Allied Signal Automotive is shown in Figure 10.3. The FMEA is a live document and should always be modified in the light of new information or changes.

Table shows a process FMEA form, with columns for process function, potential failure mode, potential effects of failure, class, current process controls, et cetera.

Figure 10.3 Potential failure mode and effects analysis (process FMEA)

Source: Dale and Shaw (2007:428)

From the design FMEA, the potential causes of failure should be studied and actions taken before designs and drawings are finalized. When used in the proper manner, FMEA prevents potential failures occurring in the manufacturing, production and/or delivery processes or end product in use, and will ensure that processes, products and services are more robust and reliable. It is a powerful technique and a number of well-publicized product recall campaigns could conceivably be avoided by the effective use of FMEA. However, it is important that FMEA is seen not just as a catalogue of potential failures, but also as a means for pursuing continuous improvement. Nor should it be viewed as a paperwork exercise carried out to retain business, as this will limit its usefulness. The concept, procedures and logic involved with FMEA are not new: every forward-thinking design, planning and production engineer and technical specialist carries out, in an informal manner, various aspects of FMEA. In fact, most of us in our daily routines will subconsciously use a simple informal FMEA. However, this mental analysis is rarely committed to paper in a format which can be evaluated by others and discussed as the basis for a corrective action plan. What FMEA does is to provide a planned systematic method of capturing and documenting this knowledge. It also forces people to use a disciplined approach, and is a vehicle for obtaining collective knowledge and experience through a team activity.

A pilot study carried out at Girobank within the data capture services of the headquarters operations directorate has confirmed that FMEA is of benefit in paper processing-type activities. The technique has since been incorporated into an interdepartmental improvement project to address sub-process improvement relating to a particular stream of work. One of the main benefits of process FMEA is that it has helped to address the complex internal customer–supplier relationship while improving sub-process procedures. The application of process FMEA is considered by the bank as a valuable improvement tool and will be developed alongside other such tools with Girobank's ongoing training initiatives (see Gosling et al. 1992).

Development of a Design FMEA

For a design FMEA the potential failure mode may be caused, for example, by an incorrect material choice, part geometry, or inappropriate dimensional specification.

The procedure then identifies the effects of each potential failure mode, examines the causes of potential failure and reviews current controls for the design FMEA, which usually include some form of design verification programme. In the case of a turbocharger this includes items such as material pull tests, heat-cycling tests of components subject to high temperatures, life cycle fatigue tests to failure, static engine testing, and dynamic engine testing on development vehicles. With regard to the latter, these tests are often carried out by the customers as part of their overall engine/vehicle evaluation programme. Past experience on similar products is often used to verify the validity of certain component parts for a design.

The occurrence for a design FMEA is an estimate, on a scale of 1–10, of the potential failure occurring at the hands of the customer, a ranking of 1 indicating that the failure is unlikely (typifying a possible failure rate of <1 in a million), and a ranking of 10 indicating an almost inevitable failure (typically 1 in 2).

The detection criterion rests on the likelihood of a current design verification programme highlighting a potential failure mode before it reaches the customer. A ranking of 1 indicates almost certain detection, and a ranking of 10 indicates that the current controls are very unlikely to detect the failure mode before dispatch to the customer.

The severity-of-effect ranking is again on a 1–10 basis. A ranking of 1 indicates that the customer is unlikely to notice any real effect, in the case of a vehicle, on performance or the performance of the sub-system. A ranking of 10 implies that a potential failure mode could affect safe vehicle operation and/or non-compliance with government regulations. A severity ranking cannot be altered by any means other than by redesign of a component or assembly; it is a fixed feature. Clearly serious implications exist under product liability legislation for high-severity rankings and these high rankings must be addressed as a matter of urgency.

The activity following the evaluation of current controls is the determination of the RPN.

Development of a Process FMEA

In the case of a process FMEA the potential failure mode may be caused by, for example, the operator assembling the part incorrectly, or by variation in the performance of the equipment or data entered incorrectly into a system by an operator.

The procedure then, as in the case of a design FMEA, identifies the effects of each potential failure mode, examines the causes of the potential failure mode, and reviews the current controls. For a process FMEA the current controls might be operator-performed inspection or SPC information on the capability of the process. The occurrence for a process FMEA is again based on a 1–10 scale, with a ranking of 1 indicating that a failure in the manufacturing process is almost certain not to occur. This is based on past experience of a similar process, both within the factory and in the field with the customer, typically identified by a high process capability value. Conversely, a ranking of 10 indicates that the failure is almost certain to occur and will almost definitely reach the subsequent operation or customer if counter-measures and controls are not put in place. An occurrence ranking of 10 suggests, and indeed demands, that corrective action be undertaken because it highlights a potentially incapable process.

Detection rankings for a process FMEA indicate for a ranking of 1 that the potential failure mode is unlikely to go undetected through the manufacturing process. A ranking of 10 suggests that current manufacturing inspection controls and procedures are unlikely to detect the potential failure mode in the component or the assembly before it leaves the factory, and that urgent corrective action is required. It is interesting to note that a successive inspection check (e.g. bolt torque conformance) does not result in the detection ranking being markedly reduced; it would still be assigned a ranking of between 7 and 10 since experience indicates that 100 per cent subsequent inspection is only capable of detecting 80 per cent or so of defects. The situation would be assessed differently in the case of automated inspection.

A much better method of detection is to introduce a successive check at a subsequent operation whereby the operator is unable to perform his or her operation unless the previous operation has been correctly executed. This can be achieved by designing fixturing in such a way that it will only accept conforming parts from a previous operation. Another method is to install error-proofing devices at the source.

The criterion for the severity-of-effect ranking is determined in a similar manner to that for a design FMEA.

The activity following the evaluation of current controls for a process FMEA is again the determination of the RPN.

Analysis of Failure Data

To apply FMEA effectively it is necessary to obtain some real figures for the calculation of the RPN, in particular for internal and external failure rates. These can then be used for compilation of the occurrence ranking. This was achieved at the plant by analysing and summarizing external failure and internal reject data. External failure data are collated using computer aids by field service engineers. The data are obtained from visits to customers to review units which have failed, the disposition of which is determined (i.e. whether the failure liability is due to the plant or the customer or if, in some cases, there is in fact no fault found). Internal process failure rates are collated weekly by the quality assurance department. The data are obtained from rejection notes attached to non-conforming parts by production and inspection personnel.

It is important to realize that if a process FMEA is being compiled for a new product, for which no internal or external failure rate history is known, then it is acceptable to use judgment on failure rates for a similar product. At the plant an analysis was performed of the external failure rates over a five-year period using a spreadsheet program. These failure rates were then ranked highest to lowest to identify the highest-occurring items. The external data were then compared with the internal failure rate data to identify trends in which external and internal failure rates were correlated. When looking at external failure rates, a degree of caution needs to be exercised. In terms of the company's products, a guarantee is given to the end-user from the time a product is sold. It is impractical to consider only the previous year of warranty data since, for many applications, particularly in the commercial diesel business, the completed vehicle may not be put into use for up to 18 months following the date of manufacture. Additionally, some customers are relatively slow in requesting visits for claims evaluation. This leads to a distorted overall picture, which makes the five-year evaluation more realistic. Consideration must also be given to any high-occurrence failures attributable to one cause. Investigation should be undertaken to see whether the cause has been eliminated and, if so, these failures should be disregarded. The emphasis should be placed on identifying consistent patterns of regularly occurring effects of failure; these are the failures to which corrective action should be applied.
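The ranking of external failure rates described above can be sketched as follows; the part names and rates are hypothetical illustration data, not figures from the plant:

```python
# Sketch of the failure-data analysis: rank external failure rates highest
# to lowest to identify the highest-occurring items.
# All part names and rates below are hypothetical.
external_failure_rates = {
    "injector body": 0.042,
    "delivery valve": 0.031,
    "cam ring": 0.008,
    "governor spring": 0.015,
}

# Rank highest to lowest so the worst offenders appear first
ranked = sorted(external_failure_rates.items(), key=lambda item: item[1], reverse=True)
for part, rate in ranked:
    print(f"{part}: {rate:.3f}")
```

In practice the ranked external list would then be set alongside the internal reject data to look for correlated trends, as described above.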

Recommended Actions for Design and Process FMEA

Following the determination of the RPN it is usual to perform a Pareto analysis and address the potential failure modes in order of decreasing RPN. Determining the figure for an acceptable RPN is really a matter of the application of common sense. If 100 is assumed to be the acceptable maximum then this should be checked against past experience. The rule to be applied is to adopt a consistent approach for each of the rankings, and generally it will be found that the high RPNs are as expected. This takes the form of identifying recommended action(s) and assigning personnel to take appropriate improvement measures by a particular date, which should be before scheduled product release to the customer.
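The Pareto step can be sketched as below, assuming a hypothetical list of failure modes and using the text's example acceptable maximum of 100:

```python
# Sketch of the Pareto step: address potential failure modes in order of
# decreasing RPN, flagging those above an agreed threshold.
# The failure modes and rankings are hypothetical illustration data.
failure_modes = [  # (mode, severity, occurrence, detection)
    ("seal leak", 7, 5, 4),
    ("thread stripped", 4, 3, 2),
    ("incorrect torque", 6, 4, 7),
]

RPN_THRESHOLD = 100  # example acceptable maximum from the text

scored = [(mode, s * o * d) for mode, s, o, d in failure_modes]
for mode, rpn in sorted(scored, key=lambda x: x[1], reverse=True):
    flag = "ACTION REQUIRED" if rpn > RPN_THRESHOLD else "acceptable"
    print(f"{mode}: RPN={rpn} ({flag})")
```

After corrective actions are completed, the same calculation is repeated with the revised rankings to confirm the RPN has fallen below the agreed limit.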

Following satisfactory completion of the actions the RPN can be recalculated, and if it is below the agreed acceptable limits then the corrective action can be assumed to be adequate. If this is not the case, then the design or process must be readdressed by appropriate corrective actions.

For a design FMEA the potential failure causes must be studied before drawings are released to production status. In the case of the process FMEA, all the controls and measures to ensure design intent which are carried forward into the final product must be implemented. If this is not done properly then problems relating to identified failure modes will occur during manufacture. In the case of a new process, potential failure modes may be overlooked because of lack of experience. However, if this is discovered at a later date, these must be included in both process and design FMEAs for future consideration.

Summary of FMEA

Finally, a few dos and don'ts are given which may help organizations to avoid some of the difficulties and traps typically encountered in the preparation and use of FMEA.

Do

  • Develop a strategy for the use of FMEA.
  • Drive the implementation with the full support of senior managers; it is the responsibility of the senior management to see that there is a positive attitude in the organization to FMEA.
  • Ensure that all personnel who are to be involved with the FMEA are made aware of the potential benefits arising from the procedure and the necessity for corrective action to be implemented if improvements are to be made.
  • Try to ensure that engineers feel that FMEAs are an important part of their job.
  • Make FMEA meetings short but regular throughout the early stages of the product life cycle.
  • Consider producing FMEA for product families, material categories, main assemblies and process routes (i.e. generic FMEA) rather than for each component.
  • Put into place a procedure for review/update of the FMEA; it should always be treated as a living document.

Don't

  • Overlook the benefits of involving customers and suppliers in the preparation of FMEA.
  • Start the FMEA process when the design has reached an almost fixed state, when changes will be that much harder to effect.
  • Allow the preparation of FMEA to be carried out in isolation by one individual.
  • Allow important failure modes to be dismissed lightly with comments such as ‘we’ve always done it like this’ or ‘that will involve a considerable investment to change’, without considering the feasibility and cost of the change.
  • Use the technique merely as window dressing for the customer. The effort expended when using FMEA in this way is little different from that required to use it correctly.

Statistical Process Control

With thanks to B. G. Dale and P. Shaw (2007)

Introduction

Statistical process control (SPC) is not a new concept; its roots can be traced back to the work of Shewhart (1931) at Bell Laboratories in 1923. The control charts in use today for monitoring processes are little different from those developed by Shewhart for distinguishing between controlled and uncontrolled variation.

Today, in the West, there is considerable interest in quality and how it might be improved effectively and economically. It is the pursuit of quality improvement that has promoted the revitalized interest in SPC.

The aim of this section is to give an overview of SPC and its concepts, both statistical and philosophical, to examine the issues involved with implementation, and to illustrate some typical problems encountered in the introduction and application of SPC. For more detailed studies see Dale and Shaw (1989, 1990b and 1991) and Dale et al. (1990).

What is Statistical Process Control?

Statistical process control is generally accepted as a means to control the process through the use of statistics or statistical methods.

There are four main uses of SPC:

  • To achieve process stability.
  • To provide guidance and understanding on how the process may be improved by the reduction of variation and to keep it reduced.
  • To assess the performance of a process.
  • To provide information to assist with management decision-making.

SPC is about control, capability and improvement, but only if used correctly and in a working environment that is conducive to the pursuit of continuous improvement, with the full involvement of every company employee. It is the responsibility of the senior management team to create these conditions, and they must be prime motivators in the promotion of this goal and provide the necessary support to all those engaged in this activity.

It should be recognized at the outset that on its own SPC will not solve problems; the control charts only record the ‘voice of the process’ and SPC may, at a basic level, simply confirm the presence of a problem. There are many tools and techniques which guide and support improvement and, in many instances, they may have to be used prior to the application of SPC, and concurrently with it to facilitate analysis and improvement.

The application of SPC can potentially be extensive. It is not simply for use in high-volume ‘metal cutting’; it can be used in most manufacturing areas, industrial or processing, and in non-manufacturing situations, including service and commerce.

The Development of Statistical Process Control

When first evolved, the control chart, using data that provided a good overall picture of the process under review, had control limits set out from the process average, which reflected the inherent variation of the process. This variation was established from an accurate review or study, and consequently the limits were deemed to reflect the actual ‘capability’ of the process. The charts so constructed were actually called ‘charts for controlling the process within its known capability’. As the word ‘capability’ has in the last decade been taken to mean something slightly different, the charts tend now to be called ‘performance-based’ charts (i.e. to control the process within its known performance).

When this idea was discussed with potential users, the question was asked, ‘But what if the control limits are outside the specification limits?’ This resulted in the development of a chart where the control limits were set in from the specification limits. The distance these limits are set in is a function of the inherent variation in the process. Those processes with greater variation will have limits which are set in further from the specification limits than those with less variation.

If an organization's quality objective is to produce parts or services to specification, the so-called tolerance-based chart may prove useful, and signals are given to alert operational personnel to the likelihood of producing out-of-specification products. This type of chart does not encourage the pursuit of improvement in process performance.

Using the performance-based charts with limits which reflect the inherent variation of the process and having some statistical estimate of this variation, the objective is to establish its source(s), perhaps using experimental design tools and appropriate tools and techniques, and strive to reduce it on a continuous improvement basis. The consequence of this is that control limits should, over time, reduce, reflecting the reduction in process variation and thereby demonstrating an organization's commitment to continuous improvement. This reduction in variation is confirmed by increased values or measures of process capability.

If an organization is not using SPC in this manner, management needs to critically evaluate their use of SPC.

Variation and Process Improvement

Products manufactured under the same conditions and to the same specification are seldom identical; they will most certainly vary in some respect. The variation, which may be large or almost immeasurably small, comes from the main constituents of a process – machine, manpower, method, material, and Mother Nature. The measuring system itself may also give rise to variation in the recorded measurement; this is why repeatability and reproducibility studies are so important.

  • Repeatability is the closeness between results of successive measurements of the same characteristics carried out under the same conditions.
  • Reproducibility is the closeness between the results of measurement of the same characteristic carried out under changed conditions of measurement.

An important means of improvement is the reduction of variation. SPC is a very useful technique because, given a capable measuring system, it ascertains the extent of the variation and whether it is due to special or common causes, process improvement being achieved by removal of either or both. It should be stressed that while SPC, if properly used, will give an indication of the magnitude of the variation, it will not give the source. The efforts of management, engineering, technical, management services and site services personnel should be directed at establishing the likely source or sources of variation and, more importantly, at reducing them continuously.

The first step in the use of SPC is to collect data to a plan and plot the gathered data on a graph called a control chart, as shown in Figure 10.4. Once the process is rendered stable by the identification and rectification of special causes of variation, its process capability can be assessed. The next task is to reduce, as much as possible, the common causes of variation so that the output from the process is centred around a nominal or target value. This is a continuing process in the pursuit of continuous improvement. It is not the natural state of a process to be in statistical control, and a great deal of effort is required to achieve this status and a great deal more to keep it so. The amount of this effort and its focus is a function of senior management within their overall remit.

Figure 10.4 Sample SPC chart: an X-bar chart with upper and lower action limits and upper and lower warning limits
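As a hedged illustration of this first step, the sketch below computes the centre line and British-style warning (plus or minus two standard errors) and action (plus or minus three standard errors) limits for an X-bar chart; the sample means, subgroup size and sigma are assumed for illustration:

```python
# Sketch of constructing X-bar chart limits from subgroup means.
# The data, sigma and subgroup size are hypothetical; sigma would normally
# be estimated from the process (e.g. from the average sample range).
import statistics

sample_means = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7]  # hypothetical
sigma, n = 0.45, 5  # assumed process standard deviation and subgroup size

centre = statistics.mean(sample_means)
se = sigma / n ** 0.5  # standard error of a subgroup mean
warning = (centre - 2 * se, centre + 2 * se)  # ~95% warning limits
action = (centre - 3 * se, centre + 3 * se)   # ~99.7% action limits
print(f"centre={centre:.3f} "
      f"warning=({warning[0]:.3f}, {warning[1]:.3f}) "
      f"action=({action[0]:.3f}, {action[1]:.3f})")
```

Points falling between the warning and action limits prompt closer watching; points beyond the action limits signal that the process should be investigated.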

What are special and common causes of variation?

Special (or assignable) causes of variation influence some or all the measurements in different ways. They occur intermittently in the form of shocks and disturbances to the system and reveal themselves as unusual patterns of variation on a control chart. Special causes should be identified and rectified and hopefully, with improved process or even product design, their occurrence will in the long term be minimized. In the short term, their presence should be highlighted and a response programme established to deal with them. It is imperative in the management and control of processes to record not only the occurrence of such causes, but any remedial action that has been taken, together with any changes that may occur or have been made in the process. This provides a valuable source of information in the form of a ‘process log’, to prevent the repetition of mistakes and enable the development of improved processes. Typical special causes may be:

  • Change in raw material
  • Change in machine setting
  • Broken tool or die or pattern
  • Failure to clean equipment
  • Equipment malfunction
  • Keying in incorrect data.

Common (or unassignable) causes influence all measurements in the same way. They produce the natural or random pattern of variation observed in data when they are free of special causes. Common causes arise from many sources and do not reveal themselves as unique patterns of variation; consequently, they are often difficult to identify. If only common cause variation is present, the process is considered to be stable, hence predictable. Typical common causes may be:

  • Badly maintained machines
  • Poor lighting
  • Poor workstation layout
  • Poor instructions
  • Poor supervision
  • Materials and equipment not suited to the requirements.

In the pursuit of process improvement it is important that a distinction is made between special and common cause sources of variation because their removal may call for different types and levels of resources and improvement action. Special causes can usually be corrected by operational personnel – the operator and/or first-line supervisor. Common causes require the attention of management, engineering, technical, management services, or site services personnel. Teams made up of relevant personnel are often set up to eliminate special and common causes of variation. Operational personnel often have a considerable knowledge of process parameters and they should be included in such teams.
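A minimal sketch of separating candidate special causes from common-cause noise, assuming action limits have already been established (all values hypothetical):

```python
# Sketch: flag readings outside the action limits as candidate special
# causes for operational personnel to investigate; everything inside the
# limits is treated as common-cause variation.
def special_cause_candidates(values, lower, upper):
    """Return (index, value) pairs for readings outside the control limits."""
    return [(i, v) for i, v in enumerate(values) if v < lower or v > upper]

readings = [10.0, 10.1, 9.9, 11.2, 10.0, 8.7, 10.1]  # hypothetical data
print(special_cause_candidates(readings, lower=9.0, upper=11.0))
# [(3, 11.2), (5, 8.7)]
```

Each flagged point would be logged, together with any remedial action taken, in the process log described above.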

Process Capability

The capability of a process is defined as three standard deviations on either side of the process average when the process is normally distributed. The Cp index is found as the result of comparing the perceived spread of the process with the appropriate specification width or tolerance band.

Cp = (USL − LSL) / 6σ

where USL and LSL are the upper and lower specification limits and σ is the process standard deviation.

Today, customers are specifying to their suppliers minimum requirements for Cp; for example:

Cp ≥ 1.33

In simple terms this means that all parts should lie comfortably inside the specification limits.

Given that the process ‘spread’ is equal to six standard deviations the following should be noted:

Cp ≥ 1.33 implies (USL − LSL) ≥ 8σ

It follows that:

  1. The specification limits have to be wide – commensurate with excellent physical and functional requirements of the product. Or
  2. The process variation as determined by the standard deviation has to be small. Or
  3. Both conditions (1) and (2) apply.

As the Cp index compares the ‘spread of the process’ with the tolerance band, it is primarily concerned with precision – it takes no account of the accuracy or setting of the process. It is for this reason that Cp is often defined as ‘process potential’ capability, i.e. what the process is potentially capable of achieving.

The Cpk index, however, takes into account both accuracy and precision by incorporating in the calculations x̄, the process (or grand) average. There are two formulae:

Cpk = (USL − x̄) / 3σ

where USL is the upper specification limit, or:

Cpk = (x̄ − LSL) / 3σ

and LSL is the lower specification limit.

It is customary to quote the smaller of the two values, giving the more critical part of the measurements distribution. Similar minimum requirements are often prescribed for Cpk as for Cp mentioned above.

Because Cpk indices assess both accuracy and precision, they are often defined as ‘process performance capability’ measures. That is, the Cpk gives an estimate of how the process actually performs (i.e. its capability) whereas the Cp gives an estimate of its potential (i.e. what it could do if the setting was on the nominal or target value of the specification).
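The two indices can be sketched directly from the formulae above; the specification limits, process mean and sigma below are hypothetical illustration values:

```python
# Sketch of the Cp (potential) and Cpk (performance) capability indices.
def cp(usl, lsl, sigma):
    """Process potential: tolerance band compared with the 6-sigma spread."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    """Process performance: the smaller one-sided index is quoted."""
    return min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))

# Hypothetical example: specification 10.0 +/- 0.3, mean 10.1, sigma 0.05
print(round(cp(10.3, 9.7, 0.05), 3))         # 2.0
print(round(cpk(10.3, 9.7, 10.1, 0.05), 3))  # 1.333
```

The example shows the distinction made in the text: the process is potentially very capable (Cp = 2.0), but because it is set off-centre its actual performance is lower (Cpk ≈ 1.33).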

In the calculation of both Cp and Cpk it is necessary to know or obtain an estimate of the process standard deviation (σ). The standard deviation can be estimated by using the formula:

σ̂ = R̄ / d2

where d2 is a constant derived from statistical tables and is dependent upon the sample size.

This exploits the relationship between the range and the standard deviation which was mentioned earlier in the chapter.

With reference to this the following points should be noted:

  • R̄ is the average within-sample variation. There may be considerable between-sample variation present in the process which should be included in the estimate of σ. If this is not investigated, σ could be underestimated, hence any Cp or Cpk index will be overestimated.
  • The indices implicitly assume that the data (measurements) when drawn out as a histogram or frequency distribution curve, give a reasonable approximation to the Normal (or Gaussian) Distribution Curve. While many processes will offer data which comply with this, there are exceptions, and some modifications in the calculations may be necessary.
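The R̄/d2 estimate can be sketched as follows; the d2 constants are standard values from SPC tables, while the subgroup ranges are hypothetical:

```python
# Sketch of estimating the process standard deviation from the average
# sample range: sigma-hat = R-bar / d2, where d2 depends on subgroup size.
# d2 values below are the standard constants from SPC tables.
D2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326}

def estimate_sigma(sample_ranges, n):
    """Estimate sigma from the average range of subgroups of size n."""
    r_bar = sum(sample_ranges) / len(sample_ranges)
    return r_bar / D2[n]

ranges = [0.9, 1.1, 1.0, 1.2, 0.8]  # hypothetical subgroup ranges, n = 5
print(round(estimate_sigma(ranges, n=5), 3))  # 0.43
```

Note that, as the first bullet above warns, this estimate reflects only within-sample variation; any between-sample variation must be investigated separately.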

The comments made on capability relate to data collected over the long term (many days or shifts) from a stable, in-control and predictable process. Often short-term capability needs to be investigated, particularly for new products or processes (it may be required as part of a supplier verification programme, i.e. initial sampling requirements or first article inspection). The time scale is then dramatically reduced to cover only a few hours' run of the process.

It is recommended that data are collected in the same manner as for initial control chart study, but the frequency of sampling is increased to get as many samples (of size n) as possible to give a good picture of the process (i.e. about 20 samples of size n). Data are plotted on the control chart with appropriate limits, but the following indices are calculated:

Pp = (USL − LSL) / 6σ

Ppk = the smaller of (USL − x̄) / 3σ and (x̄ − LSL) / 3σ

The formulae are exactly as for Cp and Cpk, but the minimum requirements may be higher (e.g. Pp ≥ 1.67); a value of 1.67 implies the tolerance band is 10 standard deviations wide while the process ‘spread’ equals six standard deviations, i.e.

Pp = 10σ / 6σ ≈ 1.67

It should not be forgotten that all capability indices are estimates derived from estimates of the process variation (σ). The reliability or confidence in the estimate of the process standard deviation is a function of:

  • The amount of data which have been collected
  • The manner in which the data were collected
  • The capability of the measuring system (i.e. its accuracy and precision)
  • The skill of the people using the measuring system
  • People's knowledge and understanding of statistics

Difficulties Experienced in Introducing and Applying SPC

The purpose behind the application of SPC is straightforward – to reduce variation in process output, first by establishing whether or not a process is in a state of statistical control, and secondly, if it is not, getting it under control by eliminating ‘special’ causes of variation. Finally, SPC may be used to help reduce ‘common’ causes of variation, as shown in Figure 10.5.

Figure 10.5 SPC chart after limit change: data, process average, UCL and LCL plotted against time

However, a number of organizations do encounter problems in the introduction and application of SPC. According to Dale and Shaw (2007) the top three difficulties in introducing SPC were:

  • Lack of knowledge of/expertise in SPC
  • Poor understanding and awareness within the company of the purpose of SPC
  • Lack of action from senior management.

The three main difficulties in its application were:

  • Applying SPC to a particular process
  • Resistance to change
  • Deciding which characteristic and/or parameter to chart.

When the range of difficulties is studied further (see Dale et al. 2007) it is apparent that they can be categorized under two main headings: management commitment, and having the knowledge and confidence to use SPC successfully.

It is clear that the majority of difficulties are caused by the lack of commitment, awareness, understanding, involvement and leadership of middle and senior managers.

Summary of SPC

SPC, supported by the positive commitment of all employees in an organization within a framework of TQM and strategic process improvement, has proved to be a major contributor in the pursuit of excellence. It supports the philosophy that products and services can always be improved. However, it is a technique which, by itself, will do little to improve quality. It is basically a measurement technique, and it is only when a mechanism is in place to remove ‘special’ causes of variation and to squeeze ‘common’ causes of variation out of the process that an organization will have progressed from simply charting data to using SPC to its fullest potential. Management commitment and leadership, together with a structured and ongoing training programme, are crucial to the success of SPC.

Benchmarking

With thanks to R. Love and B. G. Dale (2007)

Introduction

From the late 1980s onwards there has been a growth of interest in the subject of benchmarking as part of the culture of continuous improvement. This has been triggered by the success of the improvement methods used by the Xerox Corporation and by the development of the self-assessment methods promoted by the MBNQA and EFQM models for business excellence. Benchmarking as it is known today originated in Rank Xerox. It is now well documented that when Rank Xerox started to evaluate its copying machines against the Japanese competition it was found that the Japanese companies were selling their machines for what it cost Rank Xerox to make them. It was assumed that the Japanese-produced machines were of poor quality, but this proved not to be the case. This exposure of the corporation's vulnerability highlighted the need for change. In simple terms, the aim of benchmarking is to identify practices that can be implemented and adopted to improve company performance.

Benchmarking is an opportunity to learn from the experience of others. It helps to develop an improvement mindset amongst staff, facilitates an understanding of best practices and processes, helps to develop a better understanding of processes, challenges existing practices within the business, assists in setting goals based on fact and provides an educated viewpoint of what needs to be done rather than relying on whim and gut instinct.

Most organizations carry out what can be termed informal benchmarking. This traditional form of benchmarking has been carried out for years, beginning with military leaders. It takes two main forms:

  • Visits to other companies to obtain ideas on how to facilitate improvement in one's own organization.
  • The collection, in a variety of ways, of data about competitors.

This is often not done in any planned way; it is interesting but limited in its value owing to a lack of structure and clear objectives. This approach is often branded ‘industrial tourism’. To make the most effective use of benchmarking and use it as a learning experience as part of a continuous process rather than a one-off exercise, a more formal approach is required.

There are three main types of formal benchmarking:

  1. Internal benchmarking. This is the easiest and simplest form of benchmarking to conduct and involves benchmarking between businesses or functions within the same group of companies. Many companies commence benchmarking with this form of internal comparison. In this way best internal practice and initiatives are shared across the corporate business.
  2. Competitive benchmarking. This is a comparison with direct competitors, whether of products, services or processes within a company's market. It is often difficult, if not impossible in some industries, to obtain the data for this form of benchmarking as by the very nature of being a competitor the company is seen as a threat.
  3. Functional/generic benchmarking. This is comparison of specific processes with ‘best in class’ in different industries, often considered to be world class in their own right. ‘Functional’ relates to the functional similarities of organizations, while ‘generic’ looks at the broader similarities of businesses, usually in disparate operations. With functional benchmarking the partners will usually share common characteristics in the industry, whereas generic benchmarking is not restricted to an industry. It is usually not difficult to obtain access to other organizations to perform this type of benchmarking. Organizations are often keen to swap and share information in a network or partnership arrangement, particularly when no direct threat is presented to a company's business or market share.

There are a number of steps in a formal benchmarking process. They are now briefly described:

  • Identify what is the subject to be benchmarked (e.g. the invoicing process), decide who will be in the team, the support they require (e.g. training, project champion) and their roles and responsibilities, reach agreement on the benchmark measures to be used (e.g. number of invoices per day, per person), create a draft project plan and communicate with the required internal parties.
  • Identify which companies will be benchmarked from a set of selection criteria defined from the critical success factors of the project. Research potential partners and select the best partner(s).
  • Develop a data-collection plan. Agree the most appropriate means of collecting the data, the type of data to be collected, who will be involved and a plan of action to obtain the data (e.g. explore benchmarking databases, identify contacts in partnering organizations, the questionnaire(s) to be used and the composition, telephone surveys, site visits, etc.).
  • Tabulate and analyse data. Determine the reasons for the current gap (positive or negative) in performance between the company and the best amongst the companies involved in the benchmarking exercise.
  • Estimate, over an agreed time frame, the change in performance of the company and the benchmark company in order to assess whether the gap is going to grow or decrease, based on the plans and goals of the parties concerned.
  • Define and establish the goals to close or increase the gap in performance. This step requires effective communication of the benchmarking exercise.
  • Develop action plans to achieve the goals. This step involves gaining acceptance of the plans by all employees likely to be affected by the changes.
  • Implement the actions, plans and strategies. This involves effective project planning and management.
  • Assess and report the results of the action plans.
  • Reassess or recalibrate the benchmark to assess if the actual performance/ improvement is meeting that which has been projected. This should be conducted on a regular basis and involves maintaining good links with the benchmarking partners.

This section summarizes the main learning experiences as regards benchmarking from Dale et al. 2007. The ‘10-step’ benchmarking process, as shown in Figure 10.6, provides a good outline for benchmarking teams to follow.

Figure 10.6 The United Utilities benchmarking process: a block diagram of the planning, analysis, integration and action phases

Source: Dale et al. 2007

Success Factors

The choice of benchmarking partners is critical to the success or failure of the project, so it is important that due care and attention are paid to the selection. When contacting potential benchmarking partners it is helpful to identify the specific areas of activity and the measurement of success which are to be discussed during the visit. It was found to be important to send out a pack of information to those organizations that, in principle, had agreed to participate in a benchmarking project. The following is typical of the information the pack needed to include:

  • Covering letter including the reason for undertaking the benchmarking project.
  • Overview of the organization.
  • Details of the process being benchmarked, including key performance indicators (KPIs) and their definitions and descriptions.
  • Benchmarking code of conduct to be signed by both parties; reaching agreement on this encourages openness and trust between the partners.
  • Data-collection plan.
  • Questionnaire seeking data from the benchmarking partner, together with the same questionnaire completed by the benchmarking team to reflect the current state of the process being benchmarked.

The benchmarking teams rated and selected their partners using a criteria rating form to focus on the critical few. The criteria covered the critical success factors in terms of what needed to be achieved by the project, as well as aspects such as comparable size, structure, geography (where deemed appropriate), reputation with respect to product and service quality, and market position and segmentation. Consideration was also given to each partner's understanding of and experience with benchmarking.
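A criteria rating form of the kind described can be sketched as a simple weighted-scoring exercise. The criteria, weights, candidate names and scores below are all hypothetical inventions for illustration.

```python
# Hypothetical sketch of a criteria rating form for selecting
# benchmarking partners. Criteria, weights and scores are invented.

criteria_weights = {
    "critical success factor fit": 0.35,
    "comparable size/structure": 0.20,
    "quality reputation": 0.25,
    "benchmarking experience": 0.20,
}

def rate_partner(scores, weights=criteria_weights):
    """Weighted score (on a 0-10 scale) for one candidate partner."""
    return sum(weights[c] * scores[c] for c in weights)

candidates = {
    "Partner A": {"critical success factor fit": 8, "comparable size/structure": 6,
                  "quality reputation": 9, "benchmarking experience": 5},
    "Partner B": {"critical success factor fit": 6, "comparable size/structure": 9,
                  "quality reputation": 7, "benchmarking experience": 7},
}

# Rank candidates, highest-scoring first, to focus on the critical few
ranked = sorted(candidates, key=lambda p: rate_partner(candidates[p]), reverse=True)
print(ranked)
```

Weighting the critical-success-factor fit most heavily reflects the point made above: partners are selected first on what the project needs to achieve, and only then on comparability of size, structure and reputation.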

The key findings from each benchmarking visit were related to an action plan with respect to what was being, or had been, implemented. This helped to ensure that the best practices identified were captured and acted upon. In addition, it was found to be important that the analysis identified common threads from the benchmarking visits. Simple graphical displays were used to communicate to all concerned the comparison of the KPIs of the process being benchmarked with those of the partners. This assisted with the acceptance of the changes that needed to be made. Regular communication of progress was built into the project plan after the completion of each phase, so that everyone concerned was up to speed before being presented with the project findings.

As benchmarking is about breakthrough improvement and the implementation of best practices, looking only within one's own industry is insufficient, as the ‘best’ at particular practices often come from diverse areas. This is a common problem for very specific benchmarking projects.

It is important to contact the benchmarking partners early; identifying the right partner(s) can take more time than expected. Desk research into the companies being considered as benchmarking partners should be undertaken before making a decision, although this depends on time constraints and the type of project being tackled. It has been found useful to visit four to five organizations, and to make all the visits within a period of one month. However, as the company is looking for high long-term gains from benchmarking, it is worthwhile taking the time to ensure that the company being benchmarked is suitable for analysis.

After each visit to a benchmarking partner it is important to detail and collate what has been learnt as quickly as possible while the experience is fresh in the team members' minds. It was also found helpful to summarize what the organization was doing better than the benchmark partner in a report format, identifying key points and providing quantitative as well as qualitative data.

Difficulties and Pitfalls

In order for a benchmarking project to be a success there are certain difficulties and pitfalls that must be avoided. Based on the projects undertaken the most common ones are:

  • Unrealistic assumptions. When planning the actual project realistic assumptions need to be made about the time required to complete the individual steps of the benchmarking project, the resources needed and the commitment of employees, other than the team members. The planning of the project needs to be as pragmatic as possible. It is also important to ‘manage’ the expectations at senior management level in respect of quick results and instant benefits as well as their role in the benchmarking process.
  • Team members not freed to participate. The team members must be free to participate in the project; the activities associated with it should not become something extra that team members do as part of their normal working week, as this will hamper progress and may seriously affect time scales, commitment and, eventually, the findings of the project.
  • Lack of a contingency plan. If the project plan is based on a single set of circumstances or conditions, it is extremely vulnerable to changes. It is essential that a contingency plan be prepared to support the implementation and prepare for any unexpected major changes to the project. This contingency plan must be developed to cope with both favourable and adverse changes. If implementation is broken down into a number of sequential steps, then it must be possible to bring phase 2 forward if phase 1 takes less time than was initially expected, just as phase 2 would be delayed if phase 1 took longer than expected.
  • Failure to update the plan. Too often the creation of a plan is treated as an end in itself. Instead the plan should be considered a living document based on certain assumptions, such as time, cost, resources and levels of commitment, as well as external factors such as the benchmarking partner's response and the time of year. These assumptions will almost certainly change over the period of a benchmarking project, requiring the plan to be updated in terms of what is required and when. In any reconfiguration of the original project the assumptions should be taken into account and any necessary changes made. For example, if the project is to finish on time and achieve results of significance, more money may have to be spent on resources than was originally estimated.
  • Failure to communicate the plan. Communication of what has been done, what is currently being done and what is planned is vital to the success of a project. If people are not fully aware of what is expected of them, the type of information which has been gathered about best practice, how the benchmark information is to be used to initiate improvements and the changes that will result from the implementation of best practices found from the benchmarking project, then it is highly likely that the plan will fail. It is also important to consider what needs to be communicated and the detail, as well as how it should be done.
  • Inadequate project definition. If the benchmarking team is not aware of why it is doing a particular project and its capability to change a process then the project will lack direction and focus, leaving the team unsure of what to measure and what best practices it is looking for.
  • Inadequate process understanding. When documenting a process which is being benchmarked, it is important that not only the processes should be described but also each process step plus the main practices. When carrying this out, the question ‘How do we know this?’ should be asked a number of times (i.e. the ‘5 whys’ approach) in order to validate what the team considers to be the process with those who are involved at each step. If this is not done then any conclusions drawn from the benchmarking study may be invalid and a potential danger to the present process.
  • Team members try to do everything themselves. It is important that the team members do not become insular and try to do everything in relation to the project by themselves. At times they will need to seek the advice and help of individuals who are not directly involved in the benchmarking project. This assistance may be in areas such as data collection, where the data required are already being collected by someone either within a department or externally.
  • The subject area is too large. Unless the process is within the control of the team and within its comprehension then it is very difficult to both measure, in meaningful terms, what is done and ask the right questions of the benchmarking and business partners (e.g. customers and suppliers).
  • It seems like a good idea to use benchmarking (i.e. it is the latest fad or fashion). Benchmarking, like any other quality management technique, will not bring the expected benefit to the business when used inappropriately. A balance should therefore be struck between the scope of the problem, the expected return on investment and the level of improvement sought. There is little point in spending considerable time, money and resources on benchmarking a process which will not affect customers in any significant way or bring breakthrough improvement to business operations.

Summary of Benchmarking

Benchmarking is a technique for the continuous improvement of processes. It is therefore important to ensure that the process of benchmarking is thought of in a similar vein; the objective is the continual improvement of the benchmarking process used for each project by sharing each project's successes, pitfalls, and failures and thereby promoting continuous learning. There is also a need to ensure that benchmarking is incorporated into an organization's culture of continuous improvement. A benchmarking project is likely to generate other additional benchmarking projects within the process studied or with interfacing processes. A project, in addition to the savings generated, is also helpful in promoting understanding of KPIs and measures of quality; in other words, what do we need to have in place to understand what we do, how we do it, why we do it, and how well we do it?

Business Process Re-engineering and Value Stream Mapping

With thanks to J. Macdonald and B. G. Dale (2007)

Introduction

In recent times business process re-engineering (BPR) has emerged as the concept which enables an organization to take a radical and revolutionary look at the way in which it operates and the way work is done, and references to it abound in management and technical publications with such words as ‘radical’, ‘dramatic’, ‘rethinking’, ‘optimize’, and ‘redesign’. It has become popular in a short period of time, promising amazing results very quickly in relation to corporate and technological change, transformation and competitive pressures. The protagonists of BPR argue that it is a concept which enables an organization to make the necessary step changes, introducing originality and improvements to the business which will enable it to leapfrog the competition.

While TQM is based, in general, on continuous improvement of processes over a relatively long period of time, BPR emphasizes structural process redesign, process re-engineering and fundamental rethinking of the business by ignoring the status quo. This leads to claims of producing faster returns over a relatively short period of time through a one-step solution to a problem. Both are approaches to improving the performance of a business, but in the authors' view continuous improvement should come first and provide the base for the more radical change and improvements generated by BPR. It should also not be overlooked that TQM, too, drives breakthrough improvements.

The underlying issues in BPR are not necessarily new, although the language and approach are modern.

There is some confusion as to what constitutes BPR, what it covers, which initiatives it embraces, and its relationship with TQM. This is not helped by the variety of terms (e.g. business process improvement, business process redesign, business process re-engineering, core value-driven process re-engineering, process redesign, business restructuring, new industrial engineering, process simplification and value management) that authors use in their description of BPR, along with the lack of precision with which they use them. However, most of the terms refer to roughly the same type of activity, pointing out that gains in performance can be achieved by taking a holistic and objective view of business processes.

The authors view TQM and BPR as complementary and integral approaches rather than ones that are in opposition to each other. In fact many of the tools and techniques which have been proved and used in continuous improvement are employed in BPR projects, and a number of the principles and practices of BPR are very similar to those which underpin TQM and strategic process improvement.

Our combined practical and research evidence points to the fact that those companies which have been successful in building continuous improvement principles into their business operation in an evolutionary manner have created the solid platform and environment in which to develop the concept of BPR. Those organizations starting with TQM will have a better understanding of processes, which is central to both TQM and BPR. Having learned how to change using the continuous improvement philosophy, they are more ready to deal with the increasingly radical designing of new processes that is demanded by BPR. In general, it has been service industries and public sector organizations that have taken up the theme of BPR rather than manufacturing industry. It would be argued by managers in the former that, without BPR having been undertaken as part of the natural management process of running a business, they would simply not have survived. In general, service industries and the public sector have only relatively recently felt the winds of change.

The aim is to present, in simple terms, what BPR means, its main approaches and methods, techniques employed, and main principles and practices.

Approaches Used in BPR

The two main approaches employed in BPR are process redesign and process re-engineering, and these are examined below. The approaches are based on taking a holistic and objective view of the core processes that are needed to accomplish specific business objectives without being constrained by what already exists (i.e. ‘clean slate’). BPR covers a range of activities that result in radical change, to either individual processes or to the total organization. The main differences between the two approaches are that the latter involves greater structural change and risk while the former is quicker and less costly to implement but with potentially fewer benefits and improvements.

Business process redesign

Hammer and Champy (1993) point out that every re-engineering measure usually starts with process redesign. Process redesign can be carried out in many different ways depending on the degree to which the process is to be changed; it usually takes the existing process(es) as the base. It concentrates on those core processes with cross-functional boundaries and is generally customer-focused, with a view to process simplification, streamlining, mistake-proofing the process, efficiency and adaptability. It tends to seek answers to questions such as:

  • What is this process doing?
  • What are the core competencies?
  • What are the key elements?
  • What are its key measurables?
  • What are the main information flows?
  • Is the process necessary?
  • Is it adding value?
  • Is it producing an output which fully meets customer requirements?
  • How can it be improved?
  • How can it be done differently?
  • Who is the process owner?
  • Can it be done by someone less skilled?
  • Is the technology employed used to best advantage?
  • Can new technology provide new solutions?
  • Can activities be integrated?
  • Can activities be done in parallel?

It also uses many of the techniques employed in TQM and strategic process improvement initiatives, such as value stream mapping, and employs modern methods of information technology to best advantage, in particular for integrating process activities.

Business process re-engineering

Process re-engineering or new process design demands more imagination and inductive thinking and radical change than process redesign, with those charged with the implementation of a project encouraged to abandon their belief in the rules, procedures, practices, systems and values that have shaped the current organization. It raises and challenges assumptions such as make or buy decisions, structures, functional responsibilities and tasks, systems and documentation (e.g. supplier payment), elimination of specialist departments, etc. Hammer and Champy (1993) define re-engineering as:

A fundamental rethink and radical redesign of business processes to achieve dramatic improvements in critical contemporary measures of performance, such as cost, quality, service and speed.

The approach is based on the view that continuous improvement is not sufficient to meet the organizational expectations for business development and change. Business process re-engineering seeks to make major fundamental, radical and dramatic breakthroughs in performance and is holistic in nature. The main focus is to ensure a ‘clean slate’ or ‘greenfield’ approach to processes, pinpointing that part of the organization on which to put the emphasis and highlighting the processes which add value. It is, however, not without risks and the demands on resources, time and costs which are associated with the efforts involved in a re-engineering project.

The concept is based on making best use of information technology (IT) in terms of communication and information-handling systems. It harnesses the enablers of technology and the release of the innovative potential of people to achieve breakthrough improvements, requiring a change from functional thinking to process thinking.

The Principles of BPR

The fundamental principles of BPR represent good management practice. Despite the difference in emphasis and terminology used by various authors the principles and values remain relatively common. From publications such as Coulson-Thomas (1994), Hammer and Champy (1993), Macdonald (1995a, 1995b), and Tinnila (1995) the main principles of BPR can be summarized as follows:

  • Strategic in concept
  • Customer-focused
  • Output- rather than input-focused
  • Focused on key business processes
  • Process responsibility and decisions at the point where work is performed
  • Cross-functional in nature
  • Involves internal and external customer–supplier relationships
  • Involves senior management commitment and involvement
  • Involves networking people and their activities
  • Involves integration of people and technical aspects
  • Requires clear communication and visibility
  • Has a mindset of outrageous improvement
  • People at all levels of the organization must be prepared to question the status quo in terms of technology, practices, procedures, approaches, strategies.

Value Stream Mapping

Within a BPR initiative, it is critical to thoroughly understand and visualize the selected process in the first instance, and then to collect its associated performance data in terms of cycle time, speed, quality and cost (Venkataraman et al. 2014). Hence value stream mapping (VSM) is a cornerstone technique within BPR. VSM builds on process maps and flowcharts to provide a fact-based process description for understanding current problems and thinking about future states. It is a powerful way for the team to communicate and assess how the process should work and perform once waste and non-value-added activities have been removed. It is an excellent vehicle for involvement and participation (Bicheno and Holweg 2009).

George et al. (2005) explained that VSM is an elaborated process map encompassing data on WIP, set-up time, processing time, error rates, idle time, etc., as well as information regarding the flows. It is a fundamental technique as part of a Lean, Six Sigma, TQM or BPR approach (Gurumurthy and Kodaly 2011; Abdulmalek and Rajgopal 2006; Winkel et al. 2015). The team needs to decide on the appropriate level and boundaries of the VSM. A high-level perspective is often recommended at the start in order to depict the major elements and their interactions. Bicheno and Holweg (2009) argue that VSM should consider and analyse the big picture (production, human resources, marketing, finance, engineering, etc.), which allows a strategic indication of the opportunities to be determined. However, a low-level view depicting the specific value-added and non-value-added activities will be essential to generate the breakthrough improvement being sought. Following this logic, it is appropriate to map the current process first and then the ideal and future states. This should encourage the team to think outside the existing constraints and to think innovatively by stretching their imagination.

George et al. (2005) provide a 7-step method to create a VSM:

  1. Determine the process, product or service
  2. Draw the process flow
  3. Add the material flow
  4. Add the information flow
  5. Collect the process data and add them to the VSM
  6. Add process and lead time data to the VSM
  7. Verify and validate it.
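Steps 5 and 6 of the method above, collecting process data and adding time data to the map, can be sketched in code. The step names and timings below are hypothetical; the metric computed, process cycle efficiency (value-added time divided by total lead time), is a standard way of summarizing a current-state VSM.

```python
# Sketch of VSM steps 5-6: attach process data to the map and compute
# process cycle efficiency. Step names and times are hypothetical.

process_steps = [
    # (step name, processing time in minutes, queue/wait time in minutes)
    ("receive order",  5, 120),
    ("credit check",  10, 240),
    ("pick and pack", 25,  60),
    ("dispatch",      15, 180),
]

processing_time = sum(p for _, p, _ in process_steps)          # value-added time
lead_time = processing_time + sum(w for _, _, w in process_steps)
pce = processing_time / lead_time                              # process cycle efficiency

print(f"Total lead time: {lead_time} min, value-added: {processing_time} min")
print(f"Process cycle efficiency: {pce:.1%}")
```

A low efficiency figure of this kind is typical of a current-state map and points directly at the queue and wait times as the target for the future-state design.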

Similarly Bicheno and Holweg (2009) suggested the following cycle to implement a VSM:

  1. Organizing the team in sync with the BPR team
  2. Pre-mapping the selected processes
  3. Developing the basic VSM
  4. Collecting the process data and engaging with the different stakeholders
  5. Identifying the current state
  6. Visualizing the future state
  7. Running some simulation and collecting performance data
  8. Implementing the changes
  9. Reviewing the improvement.

The core application of VSM involves creating a ‘Current State’ map (see Figure 10.7) then identifying the necessary improvements.

Diagram shows VSM current state map with supplier, prod schedule, orders and customers.

Figure 10.7 VSM current state map

These are agreed and defined, then communicated using a ‘Future State’ map (see Figure 10.8).

Diagram shows VSM future state map with supplier, prod schedule, orders and customers.

Figure 10.8 VSM future state map

Summary of BPR and VSM

The authors are of the view that BPR is complementary to TQM, rather than being an alternative or in opposition to it. For example, TQM can help to ‘hold the gains’ achieved through BPR and can create an environment that will help to ensure the success of BPR projects.

BPR is based, in general, on radical and breakthrough change over a relatively short period and TQM is based, in general, on incremental improvement over the longer term and on working within existing framework systems and procedures by improving them. In the authors' view, aiming for large step changes makes a project riskier and more complex, and also involves greater expense. Incremental change is safer and costs less. The simplicity of incremental improvement often overshadows the fact that in practice it requires effort and constant application to implement in an effective and efficient manner.

TQM and BPR do share common themes, such as a focus on customers, key processes, eliminating waste, and benchmarking. BPR tends to concentrate on one process at a time using value stream mapping, whereas TQM takes a more holistic view of the organization's culture, building improvement into all its areas of operation. TQM acts as the foundation for an organization's day-to-day functioning and continual improvement, and it allows and supports the development of BPR as an effective business improvement technique. To get the best out of both concepts they should be combined and integrated to produce a comprehensive approach to business improvement. TQM can sometimes stall and plateau, and other initiatives, within the overall framework of the approach, can often provide the spark to revitalize it. BPR could provide this type of excitement, but to do so it needs to be positioned within the broader TQM approach.

BPR requires dedication, acceptance of risk and considerable upheaval. It is important that an organization is clear on this, because the upheaval involved can easily come into conflict with the anticipated cost savings. Not every organization is capable of accomplishing the level of change required, but any organization that has the ambition to be the best cannot ignore BPR and must accept the challenge. Some industries which operate in dynamic environments are more suited to taking on the risks associated with BPR than others, where the disturbance of processes could have severe consequences. It is also important for organizations to be clear on whether they need business process redesign or the more radical process re-engineering. Both are important to stimulate process innovation so that organizations can become more agile in responding to unpredictable changes and can respond quickly to the needs and demands of customers.

Managers are central to the success of re-engineering projects and they must be prepared to change their role and power structures and provide the necessary leadership.

Six Sigma

With thanks to A. van der Wiele, J. D. van Iwaarden, B. G. Dale and A. R. T. Williams (2007)

Introduction

This final section of the chapter presents, discusses and reviews Six Sigma. The authors consider Six Sigma to be a data-driven, integrated process improvement and problem-solving approach which encompasses and builds on all of the techniques described previously in this chapter: QFD, DoE, FMEA, SPC, benchmarking, and BPR and VSM, among many others.

Motorola created the concept of Six Sigma in the mid-1980s to improve the performance, productivity and quality of their key processes. The main factor behind its development was the need for continuous improvement in the manufacture of complex devices involving a large number of parts, with a consequently high probability of defects in the end product. At the same time, customers were demanding that Motorola improve the quality of its final product offerings; this external driver reinforced the need for continuous improvement. The goal of Six Sigma is value creation through quality improvement, attained by training employees in tools and techniques as well as in a problem-solving protocol. Six Sigma makes use of quality engineering methods within a defined problem-solving structure to identify and eliminate process defects and solve problems, and in this way improve yield, productivity, operating effectiveness and customer satisfaction (Bhote and Bhote 1991; Harry and Schroeder 1999; McFadden 1993; Pande et al. 2000; Pyzdek 2003; Gijo et al. 2014; Jesus et al. 2015). It is based on the well-established quality management ideas of understanding and eliminating the causes of variation and of robust design for manufacture and assembly; its roots are therefore in Statistical Process Control (SPC). The well-publicized bottom-line benefits achieved by Motorola (De Feo 2000) led to its adoption by high-profile organizations such as AlliedSignal (now Honeywell) and General Electric. Interest remains very high and a wide range of organizations have adopted Six Sigma. The concept is variously described in books and papers (Breyfogle 1999; Harry and Schroeder 1999; Snee and Hoerl 2003; Linderman et al. 2003; De Mast and Lokkerbol 2012; Shafer and Moeller 2012), and its protagonists claim it is a complete management system.

Many of the objectives of Six Sigma are similar to those of Total Quality Management (e.g. customer orientation and focus, team-based activity, comprehensive education and training, and problem-solving methodology) and it undoubtedly builds on TQM. There is no doubt that Six Sigma brings engineering and statistical analysis back into quality, returning quality to its roots.

Many of the success stories in the literature are from American organizations. AlliedSignal and General Electric have both used the financial benefits achieved through Six Sigma to persuade financial analysts that their firms' stock prices should be higher. This is perhaps the first time that executives have been able to argue that their quality initiatives will result in financial benefits that should be taken into account in the valuation of their companies.

It seems that Six Sigma is interpreted by different organizations in different ways. Some organizations interpret it simply as a measurement and improvement device, while others use it as a label for their organization-wide quality approach (Breyfogle 1999; Dusharme 2001; Harry and Schroeder 1999; Snee and Hoerl 2003; Jacobs et al. 2015).

What Does Six Sigma Mean?

A sigma is a statistical indication of variation in terms of the standard deviation of the characteristic under consideration. It indicates the spread of each unit around the target value, and therefore it is essentially an indication of how good a product or service is. Traditionally, designers used the three-sigma rule to evaluate whether or not a part would meet specification. When a part's specifications are consistent with a spread of six standard deviations of process variation (three sigma to either side of the target value), around 99.73 per cent of the parts for a process which is centred would be expected to conform to specification. The higher the sigma value, the lower the number of defects associated with the process, the lower the costs of rework and scrap and the lower the cycle time of the process. In essence, sigma measures the capability of a process to produce defect-free work and is a means of calibrating process performance to meet the requirements of customers. For example, a process that is at a quality level of three sigma means 66,807 defects per million opportunities (DPMO), while Six Sigma is 3.4 DPMO. Other sigma levels and their corresponding number of defects are presented in Table 10.1.

Table 10.1 Six Sigma and defects per million opportunities

Sigma    Defects per million opportunities (DPMO)
2        308,537
3        66,807
4        6,210
5        233
6        3.4
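The figures in Table 10.1 can be reproduced from the normal distribution. The sketch below assumes the conventional Six Sigma reporting practice of allowing for a 1.5-sigma long-term shift of the process mean, so the defect rate at a given sigma level is the one-sided normal tail beyond (sigma − 1.5) standard deviations.

```python
# Sketch: deriving the DPMO values of Table 10.1 from the normal
# distribution, assuming the conventional 1.5-sigma long-term shift.

from statistics import NormalDist

def dpmo(sigma_level, shift=1.5):
    """Defects per million opportunities at a given sigma level."""
    tail = 1.0 - NormalDist().cdf(sigma_level - shift)
    return tail * 1_000_000

for level in (2, 3, 4, 5, 6):
    print(f"{level} sigma -> {dpmo(level):,.1f} DPMO")
# Reproduces Table 10.1 to within rounding
# (e.g. 3 sigma -> 66,807 DPMO; 6 sigma -> 3.4 DPMO).
```

Without the 1.5-sigma shift the same calculation gives the short-term, centred-process figures, which is why a centred three-sigma process conforms 99.73 per cent of the time (two-sided) yet is quoted as 66,807 DPMO in the long-term convention.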

The question of how organizations perceive Six Sigma and what they are doing under the umbrella of a Six Sigma approach has been the focus of research undertaken by van Iwaarden et al. (2008). They conducted a survey project amongst British, American and Dutch companies that use Six Sigma. Their results indicate that the Six Sigma approach is universal in the three countries surveyed, and that Six Sigma improves efficiency and profitability. The latter issue is the major driving force for organizations to start a Six Sigma implementation process. Looking into the tools and techniques that are found to be important within the context of a Six Sigma approach, it is clear that the basic (statistical) quality tools and techniques are seen as the cornerstone of Six Sigma. However, many of these tools and techniques were found to already be in place before the companies started implementing Six Sigma, indicating that Six Sigma is usually based on existing knowledge and practices.

Six Sigma Prerequisites

Six Sigma, like any major organizational change programme, is not easy. Its success will depend on at least four major factors.

Firstly, it involves high levels of commitment and involvement of management. It is based on an understanding of statistics and this is not a popular area for most managers. It also requires that high-performing managers are released to be trained and, after training, that they commit a significant amount of their time to the Six Sigma concept. So it needs to be led by senior management.

Secondly, it cannot be treated as yet another stand-alone activity. Like TQM, it requires adherence to a whole philosophy rather than usage of a few tools and techniques, however sophisticated. A Six Sigma-style initiative demands a degree of sophistication from the organization adopting it and the organization must be ripe for the change. For example, the organization must be used to working with cross-functional teams and should have its major processes identified and under some degree of control. In short, it must have many of the fundamentals of TQM already in place.

Thirdly, Six Sigma is about reducing defects. Improvement depends on how opportunities for defects or failure are defined and measured (i.e. the possible defects). What matters in a Six Sigma approach, as in any other quality approach, is what the customer wants or needs; this is why QFD (see above) is also a technique strongly associated with Six Sigma. The most critical effects or failures are those that most concern the customer.

Fourthly, Six Sigma needs to be concentrated on those elements of a business which will result in customers perceiving that they would rather deal with them than with one of its competitors. Therefore Six Sigma, like any performance improvement drive, should start from strategy – where does a company want to be? What will really make a difference to getting there? And, therefore, where must it concentrate its improvement drives? For many organizations the key factors influencing whether they will achieve their desired strategy are: How many customers will stay loyal? What really governs a customer's actual purchasing behaviour (as opposed to what he or she says governs it)? Will enough customers continue to be willing to pay a slight premium for their products? Can the organization increase its market share?

In the aforementioned international survey project on Six Sigma by van Iwaarden et al. (2008), respondents were asked what level of quality experience was required at the start of the Six Sigma implementation, and what factors influenced the sustainability of a Six Sigma approach. From the survey it was concluded that a successful Six Sigma implementation had to build on experience of earlier quality management programmes. Having developed quality awareness and a quality culture, and having reached a certain level of quality management maturity, are essential prerequisites for the success of Six Sigma. For the sustainability of a Six Sigma approach, a wide range of items were found to be important, indicating that it is difficult, as with any management approach, to follow a specific approach over a long period of time. There will always be potential obstacles: the benefits of projects may diminish over time, management's focus may shift to other priorities, and important players in the organization may lose interest.

Six Sigma Core Elements

Six Sigma builds on a range of improvement methods that have proven to be effective. This can be seen in its central themes, which can be considered to be the following:

  • Focus on the customer. Six Sigma measures start with customer satisfaction. The emphasis is on understanding customer expectations and requirements.
  • Data- and fact-driven management. This is a classical quality management theme, including speaking with data, management decisions based on fact, developing an in-depth understanding of internal processes.
  • Specific training. A defined and formal infrastructure (based on the martial arts hierarchy) of champions, master black belts, black belts and green belts that head and influence Six Sigma projects (see Pyzdek 2003). Master black belts are the technical experts who provide training and support for the other belts; they also lead major cross-functional Six Sigma projects. The black belts undertake full-time work on Six Sigma projects and lead the project teams. The green belts are part-time process owners, and usually undertake work on a small scale, in contrast to the black belts.
  • Structured approach. Six Sigma is based on a structured problem-solving approach. For existing processes, the approach consists of the following steps: define, measure, analyse, improve and control (DMAIC); for new processes the steps are define, measure, analyse, design and verify (DMADV). Both approaches are discussed below.
  • Quality engineering. Six Sigma uses a full range of tools and techniques, as typically described in Chapters 9 and 10 of this book. A number of writers on Six Sigma suggest the application of specific tools and techniques against each stage of the DMAIC and DMADV problem-solving approaches.
  • Process focus, control and improvement. The key aspect is understanding the process in order to control its input and thereby facilitate its improvement. This involves an examination of potential defects, root causes and potential corrective and long-term actions. It is important to understand the relationship between inputs (X) and outputs (Y) with respect to issues such as: which Xs have the biggest positive effect on Ys; reduction of variation in inputs; and improvement in process outputs.
  • Proactive management. Management at all levels must attempt to understand the key principles of Six Sigma. They must be active in challenging why ‘things’ are done in a certain way, defining root causes of problems, setting and maintaining aggressive improvement targets, and being prepared to devote a large amount of their time. Managers must expect that, with Six Sigma, pressure on them will increase.
  • ‘Boundaryless’ collaboration. Teamwork is an essential part of Six Sigma and it is important to have a range of skills within the team.
  • Drive for perfection. In the drive to eliminate defects it is important to accept that from time to time things will go wrong and some projects will not achieve their goals. It is important to understand the reasons for failure, to learn from experiments, and to put in place counter-measures to prevent defects occurring in the future.
  • Cost savings of each project. A sense of urgency is created with Six Sigma, through the financial targets linked to each project. Each Six Sigma project should lead to verifiable bottom-line results.
  • Short-term improvement projects. A key requirement is that a time scale is agreed for a specific project's completion. Moreover, the duration of each project is relatively short, with a typical project lasting between three and six months.
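The input–output (X→Y) idea in the 'process focus' theme above can be sketched numerically. The following Python simulation is purely illustrative: the model Y = 3·X1 + 0.5·X2 and all of its figures are assumptions made here, not taken from this text. It shows that reducing variation in the dominant input X1 has the biggest effect on variation in the output Y:

```python
import random
from statistics import stdev

random.seed(42)  # fixed seed so the illustration is reproducible

def simulate_output_sd(x1_sd: float, x2_sd: float, n: int = 10_000) -> float:
    """Standard deviation of Y = 3*X1 + 0.5*X2 for normally
    distributed inputs; X1 is the dominant ('vital few') input."""
    ys = [3 * random.gauss(0, x1_sd) + 0.5 * random.gauss(0, x2_sd)
          for _ in range(n)]
    return stdev(ys)

before = simulate_output_sd(x1_sd=1.0, x2_sd=1.0)  # roughly sqrt(9 + 0.25)
after = simulate_output_sd(x1_sd=0.5, x2_sd=1.0)   # roughly sqrt(2.25 + 0.25)
print(f"output sd before: {before:.2f}, after halving X1 variation: {after:.2f}")
```

Halving the variation of X1 roughly halves the variation of Y, whereas the same effort applied to X2 would barely register — the numerical counterpart of asking which Xs have the biggest effect on the Ys.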

Structured Problem-Solving Approaches

Six Sigma improvement projects adhere to strict problem-solving approaches. Depending on the organizational processes to which they are applied (i.e. existing processes or new processes), improvement projects use either the DMAIC or DMADV approach. In the practice of consultancy firms, new acronyms are occasionally developed; however, they do not differ markedly from DMAIC and DMADV.

A number of writers (e.g. Eckes 2000; Pande et al. 2000) outline how the implementation of Six Sigma involves three aspects of process development:

  • Process improvement
  • Process design/redesign
  • Process management.

Process improvement

This primarily concerns the elimination of the root causes of process problems and is clearly associated with continuous improvement activities (i.e. improve what you already have). Most Six Sigma activities are initially based on process improvement. It involves the identification of the vital few (Xs) that influence results (Ys). The DMAIC structured problem-solving approach, which is employed in making improvements to existing processes and products, is related to other problem-solving approaches, including Deming's PDCA cycle. The DMAIC approach takes the following steps:

  • Define in clear terms the specific problem to be worked on and the process to be improved. The problem needs to be one where it is critical to succeed and/or one that will give the quickest or greatest returns. This phase also involves defining the project scope and boundary conditions, selecting appropriate performance metrics, and agreeing the goals of the selected project, its financial impact, and the project champion, process owner and team members.
  • Measure the factors which are critical to quality (Xs). This involves selecting the process outputs to be improved, developing a data collection plan, gathering data to evaluate current performance, and assessing the performance measurement systems and their capability.
  • Analyse the relationship between the cost of defects and key process variables. The purpose of this is to determine the root cause of variation and defects.
  • Improve the process using experimentation, pilot studies and simulation techniques to address the root causes of the problems identified in the analysis phase. This involves various improvement loops, with appropriate confirmation studies related to each phase of process improvement.
  • Control the process outputs (Ys) to ensure the long-term gains and improved performance of the process. This involves verifying the benefits and savings, ensuring that the changes are fully integrated into procedures, and communicating the findings as appropriate.
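The Control step above typically relies on control charts to verify that gains hold over time. As a minimal sketch (not a worked method from this text, and with illustrative data), the Python function below computes individuals-chart control limits from the average moving range, using the standard constant 2.66:

```python
from statistics import mean

def individuals_chart_limits(values: list[float]) -> tuple[float, float, float]:
    """Centre line and control limits for an individuals (X) chart.
    2.66 = 3 / d2, where d2 = 1.128 for moving ranges of size two."""
    centre = mean(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = mean(moving_ranges)
    return centre - 2.66 * mr_bar, centre, centre + 2.66 * mr_bar

measurements = [10, 12, 11, 13, 12, 11, 10, 12]  # hypothetical output data
lcl, cl, ucl = individuals_chart_limits(measurements)
print(f"LCL={lcl:.3f}  CL={cl:.3f}  UCL={ucl:.3f}")
# a point falling outside (LCL, UCL) would signal that the improved
# process has drifted and the gains are not being held
```

In a Six Sigma context the chart is set up at handover to the process owner, so that any later loss of the verified benefits is detected quickly.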

Process design/redesign

This is normally the second stage after process improvement. The emphasis is on new processes rather than fixes to existing ones and in this there are similarities to Business Process Re-engineering (see Chapter 10).

The method used is the DMADV approach, also called design for Six Sigma (DFSS). The steps are different from those of the DMAIC method, and can be described as:

  • Define the project goals and customer (internal and external) deliverables
  • Measure and determine customer needs and specifications
  • Analyse the process options to meet the customer needs
  • Design (detailed) the process to meet the customer needs
  • Verify the design performance and ability to meet customer needs.

Process management

This third phase reflects a change of emphasis from the oversight and direction of functions to the understanding and facilitation of processes. The key activities include: managing processes end to end; clearly identifying customer requirements; and developing meaningful measures of inputs, process activities and outputs.

A Six Sigma approach can be considered to have the following five stages:

  • Stage 1 – Identify the need. Gather data; analyse data; define cost of poor quality; consider the voice of the customer.
  • Stage 2 – Clarify the vision. Define project brief; set goals/target objectives; establish realistic time scales.
  • Stage 3 – Develop the plan. Establish the project initiative; identify the project champion; select the team; understand the present documentation system; review the current process; understand current process performance; consider training requirements; define the available resources.
  • Stage 4 – Implement the plan. Promote clear process ownership; optimize the process; record progress results and display using appropriate visual management methods; evaluate improvement and process management; review cost savings; establish continuous improvement mechanisms.
  • Stage 5 – Sustain improvement. Document the initiative; maintain team responsibility; apply knowledge gained across the board; communicate and recognize success; identify the next project.

Summary of Six Sigma

Six Sigma is a well-structured improvement approach with verifiable financial results. However, organizations' quality initiatives should not be driven solely by the savings needed to impress the financial analysts. This can be a dangerous approach because its focus is too short-term. Most organizations need a longer-term focus on customers, and not solely on suppliers of capital. A successful Six Sigma implementation should build upon a number of prerequisites such as an existing quality culture and a certain level of quality maturity. The sustainability of Six Sigma in the long term depends on many factors, such as top management commitment, being able to show successful projects, high investment in training, high investment in management time, and the involvement of key players in the organization.

Six Sigma can revolutionize an organization and it will go deep into its fabric; it therefore needs top management drive behind it. It must be seen as part of a total approach, and it demands a level of quality competence from the organization before the benefits can begin to be delivered. Quality improvement methods such as Six Sigma may be very powerful but they have to be directed and need a clear strategy to measure and interpret customers' needs successfully. Managers must also realize that, if their organization has no clear power structure and the desired level of competence is not present, then a Six Sigma programme is unlikely to work.

References

  1. Abdulmalek, F. W. and Rajgopal, J. (2006), Analyzing the benefits of lean manufacturing and value stream mapping via simulation: A process sector case study, International Journal of Production Economics, 107, 223–36.
  2. Aldridge, J. R. and Dale, B. G. (2007), Failure Mode and Effects Analysis. In: B. G. Dale, T. van der Wiele and J. van Iwaarden. 5th edn. Managing Quality. Oxford: Blackwell Publishing.
  3. Bhote, K. R. and Bhote, A. K. (1991), World-Class Quality: Using Design of Experiments to Make it Happen, 2nd edn. New York: American Management Association.
  4. Bicheno, J. and Holweg, M. (2009), The lean toolbox – the essential guide to lean transformation. 4th edn. Buckingham: PICSIE Books.
  5. Breyfogle, F. W. (1999), Implementing Six Sigma: Smarter Solutions Using Statistical Methods. New York: John Wiley & Sons.
  6. Coulson-Thomas, C. (1994), Business Process Re-engineering: Myth and Reality. London: Kogan Page.
  7. Dale, B. G. and Shaw, P. (1989), The application of statistical process control in UK automotive manufacture: Some research findings. Quality and Reliability Engineering International, 5(1), 5–15.
  8. Dale, B. G. and Shaw, P. (1990a), Failure mode and effects analysis in the motor industry: A state-of-the-art study. Quality and Reliability Engineering International, 6(3), 179–88.
  9. Dale, B. G. and Shaw, P. (1990b), Some problems encountered in the construction and interpretation of statistical process control. Quality and Reliability Engineering International, 6(1), 7–12.
  10. Dale, B. G. and Shaw, P. (1991), Statistical process control: an examination of some common queries. International Journal of Production Economics, 22(1), 33–41.
  11. Dale, B. G. and Shaw, P. (2007), Statistical Process Control. In: B. G. Dale, T. van der Wiele and J. van Iwaarden. 5th edn. Managing Quality. Oxford: Blackwell Publishing.
  12. Dale, B. G., Shaw, P. and Owen, M. (1990), SPC in the motor industry: an examination of implementation and use. International Journal of Vehicle Design, 11(2), 115–31.
  13. Dale, B. G., van der Wiele, T. and van Iwaarden, J. (2007), Managing Quality, 5th edn. Oxford: Blackwell Publishing.
  14. De Feo, J. A. (2000), An ROI story. Training and Development, 54(7), 25–7.
  15. De Mast, J. and Lokkerbol, J. (2012), An analysis of the Six Sigma DMAIC method from the perspective of problem solving. International Journal of Production Economics, 139(2), 604–14.
  16. Dusharme, D. (2001), Six Sigma survey: breaking through the Six Sigma hype. Quality Digest, November.
  17. Eckes, G. (2000), The Six Sigma Revolution: How General Electric and Others Turned Process into Profits. New York: John Wiley & Sons.
  18. Ferguson, I. (1995), A Practical Course in Parameter Design. Birmingham: Ian Ferguson Associates.
  19. Ferguson, I. and Dale, B. G. (2007), Quality Function Deployment. In: B. G. Dale, T. van der Wiele and J. van Iwaarden. 5th edn. Managing Quality. Oxford: Blackwell Publishing.
  20. George M. L., Rowlands, D., Price, M. and Maxey, J. (2005), The Lean Six Sigma Pocket Toolbook. New York: McGraw-Hill.
  21. Gijo, E. V., Bhat, S. and Jnanesh, N. A. (2014), Application of Six Sigma methodology in a small-scale foundry industry, International Journal of Lean Six Sigma, 5(2), 193–211.
  22. Gosling, C., Rowe, S. and Dale, B. G. (1992), The use of quality management tools and techniques in financial services: an examination. Proceedings of the 7th OMA (UK) Conference, Manchester Business School, June, 285–90.
  23. Gurumurthy, A. and Kodaly, R. (2011), Design of lean manufacturing systems using value stream mapping with simulation: A case study, Journal of Manufacturing Technology Management, 22(4), 444–73.
  24. Hammer, M. and Champy, J. (1993), Re-engineering the Corporation. London: Nicholas Brealey.
  25. Harry, M. and Schroeder, R. (1999), Six Sigma: The Breakthrough Management Strategy Revolutionizing the World's Top Corporations. New York: Currency.
  26. Jacobs, B. W., Swink, M. and Linderman, K. (2015), Performance effects of early and late Six Sigma adoptions, Journal of Operations Management, 36, 244–57.
  27. Jesus, A. R., Antony, J., Lepikson, H.A. and Teixeira Cavalcante, C. A. (2015), Key observations from a survey about Six Sigma implementation in Brazil, International Journal of Productivity and Performance Management, 64(1), 94–111.
  28. Linderman, K., Schroeder, R. G., Zaheer, S. and Choo, A. S. (2003), Six Sigma: A goal-theoretic perspective, Journal of Operations Management, 21(2), 193–203.
  29. Love, R. and Dale, B. G. (2007), Benchmarking. In: B. G. Dale, T. van der Wiele and J. van Iwaarden. 5th edn. Managing Quality. Oxford: Blackwell Publishing.
  30. Macdonald, J. (1995a), Understanding Business Process Re-engineering. London: Hodder & Stoughton.
  31. Macdonald, J. (1995b), Together TQM and BPR are winners. The TQM Magazine, 7(3), 21–5.
  32. Macdonald, J. and Dale, B. G. (2007), Business Process Re-engineering. In: B. G. Dale, T. van der Wiele and J. van Iwaarden. 5th edn. Managing Quality. Oxford: Blackwell Publishing.
  33. McFadden, F. R. (1993), Six Sigma quality programs. Quality Progress, 26(6), 37–42.
  34. Pande, P. S., Neuman, R. and Cavanagh, R. R. (2000), The Six Sigma Way: How GE, Motorola and Other Top Organizations are Honing their Performance. New York: McGraw-Hill.
  35. Pyzdek, T. (2003), The Six Sigma Handbook: The Complete Guide for Greenbelts, Blackbelts, and Managers at All Levels. New York: McGraw-Hill.
  36. Shafer, S. M. and Moeller, S. B. (2012), The effects of Six Sigma on corporate performance: An empirical investigation, Journal of Operations Management, 30(7), 521–32.
  37. Shewhart, W. A. (1931), Economic Control of Quality of Manufactured Product. New York: D. Van Nostrand Co. Inc.
  38. Snee, R. D. and Hoerl, R. W. (2003), Leading Six Sigma – A Step by Step Guide Based on Experience at GE and Other Six Sigma Organizations. New Jersey: Prentice Hall.
  39. Taguchi, G. (1986), Introduction to Quality Engineering: Designing Quality into Products and Processes. Tokyo: Asian Productivity Organization.
  40. Tinnila, M. (1995), Strategic perspective to business process re-design. Management Decision, 33(3), 25–34.
  41. van der Wiele, A., van Iwaarden, J. D., Dale, B. G. and Williams, A. R. T. (2007), Six Sigma. In: B. G. Dale, T. van der Wiele and J. van Iwaarden. 5th edn. Managing Quality. Oxford: Blackwell Publishing.
  42. van Iwaarden, J. D., van der Wiele, A., Dale, B. G., Williams, A. R. T. and Bertsch, B. (2008), The Six Sigma improvement approach: A transnational comparison. International Journal of Production Research, 46(23), 6739–58.
  43. Venkataraman, K., Ramnath, B. V., Kumar, V. M. and Elanchezhian, C. (2014), Application of Value Stream Mapping for Reduction of Cycle Time in a Machining Process. Procedia Materials Science, 6, 1187–96.
  44. Winkel, J., Edwards, K., Birgisdóttir, B. D. and Gunnarsdóttir, S. (2015), Facilitating and inhibiting factors in change processes based on the lean tool ‘value stream mapping’: An exploratory case study at hospital wards, International Journal of Human Factors and Ergonomics, 3(3-4), 291–302.