Chapter 6

Total Quality Management

Any exploration of lean management would be incomplete without examining its vital companion, total quality management (TQM). Philip Crosby gained fame with his belief that organizations establishing a quality program will see savings that more than offset the cost of the program.1 He referred to this as “quality is free.” Although prevention costs increase, well-executed TQM programs pay for themselves in the form of decreased internal and external failure costs. Numerous research reports have shown that quality can enhance return on sales and investment and can lower total system costs.2

Like lean management, TQM possesses a customer-driven philosophy for organizationwide continuous or ongoing improvement and waste elimination. It possesses a methodical foundation of numerous principles and tools, experimentation, scientific analysis, and problem solving. Although TQM often conjures up images of statistics and tools such as Pareto charts and control charts, it goes well beyond statistics and tools to incorporate components of leadership, culture, and teamwork as well. Similar to lean management, TQM should be viewed in a systematic manner with activities eventually leading to enhancing customer value.

Many definitions of quality have been offered. One definition suggests the term fitness for use to assess quality. However, variations of fitness exist in grades or levels of quality. Terms such as basic and premium are example descriptors used to portray various quality grades. Distilled from all of these definitions is the single dimension most important to the outcomes of TQM programs: quality must be viewed from the standpoint of the downstream customer.

From the ultimate consumer standpoint, the concept of conformance to specifications conveys the idea that consumers have an expectation to be met. This expectation is best viewed in terms of a specific outcome for a transaction. Because all processes possess inherent variability (random variation), consumers recognize three components to measure outcomes: an expected value and a reasonable output range made up of upper and lower specifications within which satisfactory outputs lie. This concept is shown in Figure 6.1. As long as process outcomes lie within upper and lower specification limits, an honest consumer will be satisfied.

Figure 6.1 Distribution of process outcomes

Consumers should establish their expectations (the three components) prior to a transaction. Variability that leads to unexpected outcomes exceeding specification limits leads to dissatisfied consumers. The best approach to measuring quality assesses whether consumer expectations are being met. Therefore, quality should be defined as the elimination of variability because if variability is eliminated, consumers’ expectations will be met.

As with lean management, a firm should understand its strengths and capabilities, its weaknesses, potential opportunities, and any threats assessed with a SWOT (discussed in Chapter 1) analysis prior to establishing a TQM program. After its completion, quality planning and management may proceed.

Effective planning and management of quality follows with a systems approach. TQM involves the execution of three interdependent planning stages, each comprising various activities. These three planning stages are strategic quality planning, tactical quality assurance, and operational quality management, control, and improvement.3 These three stages begin with quality planning, a strategic approach to identifying and understanding consumers’ wants, needs, and preferences and assessing an organization’s ability to meet them. Tactical quality assurance is a proactive set of activities having a goal of adherence and maintenance of product and service quality levels. Operational quality management, control, and improvement assess operational process outputs in an ongoing manner to ensure conformance to specifications, all the while attempting to improve future process outputs as well.

Table 6.1 Six TQM principles

  1. It must have a customer focus: externally and internally
  2. It must have top management’s utmost commitment
  3. Quality can be built into product design
  4. Quality can be built into process design
  5. After-the-sale service quality is essential
  6. Use of a variety of quality tools is necessary

Furthermore, in order to truly design an effective TQM program throughout these three planning stages, organizations must adhere to six specific TQM principles. These six principles are shown in Table 6.1. Each of the three planning stages as well as the six principles is addressed in the following sections.

Strategic Quality Planning

At the onset of any strategic planning, it is imperative to clearly understand program goal(s). Quality planning is a strategic approach to (a) identify and understand consumers’ wants, needs, and preferences, (b) establish TQM program goals, (c) assess an organization’s ability to meet these goals, and (d) quantify the costs of achieving the goals.

First, quality planning must engage customers to solicit market requirements. Consumers’ quality expectations must be understood. Second, specific and measurable goals that allow for subsequent assessment must be established. These goals must recognize market requirements as well as the various costs of quality (internal and external failure, appraisal, and prevention costs). Third, products and services that meet consumers’ wants, needs, and preferences must be designed, developed, and produced. Fourth, the costs of achieving these goals must be estimated. These costs include internal and external failure costs, prevention costs, the cost of inspection, and the cost of passing defects downstream.

During this stage, there must be a clear understanding of the importance upper management attaches to quality. A critical link exists between leadership, its commitment, and the ultimate success of the quality program. Leadership is often regarded as the single most critical factor in the success or failure of institutions.4 This is true for organizations as a whole or simply for programs such as TQM.

A good starting point to assess the management’s understanding of this importance is to examine an organization’s mission statement. A mission statement is typically viewed as a formal, short statement of the purpose of an organization. It is intended to guide the actions of the organization or to provide it with a sense of direction. It guides subsequent strategic choices such as strategic quality planning, tactical quality assurance, and quality management, control and improvement. If quality is important, reference to its importance should appear in the mission statement.

Remember, leadership should be viewed as interpersonal influence, exercised in situations and directed through the communication process, toward the attainment of a specified goal or goals. It includes setting the direction of the organization through a thorough, long-term vision of the organization’s value-producing processes. TQM and lean management should represent lifelong commitments to continuous improvement. In order to promote quality, there must be a concerted effort by all to better understand the customer’s wants, needs, and preferences. The practice of asking many questions to promote a better understanding is essential. People often assume they know what customers want, based upon their own preferences. Odd as it may seem, customers frequently have wants, needs, and preferences that differ from our own.

It is during this stage of developing a quality plan that the strategic, group hoshin kanri and nemawashi process discussed in Chapter 3 is again utilized. What is sought is a long-term systematic plan agreed to by all that will be used year after year to assess performance and alter future activities.

As noted earlier, engaging others is an important step in any major change. Before any formal steps are taken, successful group planning enhances the possibility of change with the consent of all stakeholders. Although it is time-consuming, the hoshin kanri process can turn skepticism and resistance into support, create cross-functional cooperation, fully engage the workforce in developing executable strategies, link improvement and corrective actions with financial results, and enhance the team’s ability to respond to changes and setbacks.

Tactical Quality Assurance

The second stage in the development of a TQM program is tactical quality assurance. This stage represents a proactive set of activities having a goal of adherence and maintenance of product and service quality levels. This includes providing inputs for establishing policies and standardized specifications, documenting outcomes for assessment and verification, and specific procedures to remedy deficiencies.

Tactical quality assurance entails multifunctional processes. As a result, numerous stakeholders should participate to provide process inputs. An example of this reflects the third principle noted in Table 6.1, which argues that quality can be built into the product or service design. This is sometimes referred to as the product’s manufacturability: its ease of production and its ability to conform to specifications. The essential idea is to design and build quality in rather than inspect it in, as nonconformance is costly. Product designs ought to start with obtaining market information such as customer needs. These needs must be reflected in procurement decisions, engineering design requirements, production processes, as well as distribution choices.

An invaluable tool used to reflect external customers’ specifications for various functional units is quality function deployment (QFD). QFD is designed to help planners focus on characteristics of a new or existing product or service from the consumer viewpoint. The QFD process begins with assessing consumers’ requirements (sometimes referred to as listening to the voice of the customer), sorting and prioritizing the requirements, and then translating these requirements into specific product or service characteristics. One tool that has proven useful in the QFD process is known as the House of Quality. This tool attempts to map customer requirements with product or service characteristics. This tool has also proven to facilitate communications among functional units of an organization.
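
To make the mapping concrete, the following minimal Python sketch scores a simplified House of Quality relationship matrix. The requirements, technical characteristics, importance weights, and relationship strengths are all hypothetical and chosen only for illustration.

    # A minimal House of Quality sketch: hypothetical customer requirements,
    # technical characteristics, and relationship strengths (9 = strong,
    # 3 = moderate, 1 = weak, 0 = none). All names and weights are illustrative.

    requirements = {          # customer requirement -> importance weight (1-10)
        "easy to carry": 8,
        "long battery life": 10,
        "durable casing": 6,
    }

    characteristics = ["weight (g)", "battery capacity (mAh)", "case thickness (mm)"]

    # relationship matrix: requirement -> strength of link to each characteristic
    relationships = {
        "easy to carry":      [9, 3, 1],
        "long battery life":  [3, 9, 0],
        "durable casing":     [1, 0, 9],
    }

    # Weighted importance of each technical characteristic: the sum of
    # (requirement importance x relationship strength) down each column.
    scores = [
        sum(requirements[req] * row[j] for req, row in relationships.items())
        for j in range(len(characteristics))
    ]

    for name, score in sorted(zip(characteristics, scores), key=lambda t: -t[1]):
        print(f"{name}: {score}")

Ranking the weighted scores in this way is one simple reading of which technical characteristics deserve the most design attention.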

Product or service design quality may be enhanced with several additional practices. Product simplicity utilizing fewer parts (e.g., fewer mechanical fasteners), reliance upon robotic technology, vertical orientation for assembly, product redundancies, improved supplier relations, and preventative maintenance are all practices aimed at ensuring adherence and maintenance of product and service quality levels.

The fourth principle noted in Table 6.1 suggests that quality can also be built into process design. Elements of process design, including standardizing operating practices with approaches such as International Organization for Standardization (ISO) 9000 or TS 16949, reduced bureaucracy through fewer management levels, as well as the involvement of employees, support adherence and maintenance of product and service quality levels.

The expression, “The next process is the customer,” attributed to Kaoru Ishikawa, acknowledges that downstream workers are essentially internal process customers.5 It is important to understand that the quality of downstream process work is limited by the quality of its upstream sources. Namely, upstream work limits the quality found downstream. Process design can lead to enhanced quality with an internal customer focus. To do so, it is first essential for everyone in the organization to understand the shared vision of a quality objective. Organizational leadership must convey this message and create the conditions so that it is understood, agreed to, and voluntarily pursued by all. Leadership is more likely to engage employees with an understanding of their importance through a demonstration of practices such as participative management and teamwork, and through an emphasis on quality at the source.

Once the quality culture is established, quality at the source, which is as much of a principle as it is practice, can be utilized. It acknowledges that quality is the responsibility of every upstream source: employee, work group, department, or vendor. It represents a decentralization of the responsibility for quality outcomes through a culture that appreciates the importance of adhering to standards and through the use of practices such as visual management and mistake proofing. However, it is incumbent on leadership to understand that accountability (being answerable for the satisfactory completion of a specific assignment) only occurs if employees possess both responsibility (the obligation incurred by individuals in their roles in the formal organization in order to effectively perform assignments) as well as authority (the power granted to individuals so that they can make final decisions to complete their assignments). Accountability is responsibility coupled with authority. Employees must be given the opportunity to take corrective actions, which may entail various responses ranging from simple reporting measures to root cause investigations to more extensive preventive measures.

An internal customer focus can be emphasized with further recognition of the vital contributions of employees. The skillset of employees should be regularly improved with an emphasis on education and training. Employees must be provided the necessary knowledge to use investigative tools and apply the technology to achieve quality objectives.

Further direct employee involvement can be achieved with a tool such as quality circles. These too represent a decentralization of management’s responsibility for achieving quality objectives. A quality circle is commonly a small group of employees doing related work that meets at regular intervals to pursue objectives of increased productivity and quality. It can provide substantial individual motivation and improve managerial decision making. Involving employees through education and training programs, utilization of participative management programs including quality circles, or team-based matrix organizational structures simply recognizes the value of employees.

Product or service quality can also be enhanced with both upstream (vendor) and downstream (customer) supply chain support. For example, consider process design elements that warrant potential examination including distribution choices, possible product installation, as well as continued after-the-sale support. Each of these can significantly impact product or service quality.

Operational Quality Management, Control, and Improvement

Nonconformities do occur despite an organization’s best efforts to proactively eliminate them. Quality management, control, and improvement refers to efforts to detect nonconformities and to ensure that operational process outputs meet consumer expectations today and exceed expectations tomorrow. The essential goal of this stage is to identify the source of variation so it may be reduced, or to possibly eliminate its source. The sixth principle noted in Table 6.1 recognizes the numerous quality control and improvement tools that exist, which can enhance quality management, control, and improvement efforts. There are two broad categories of quality tools. These two categories are often referred to as process improvement and statistical process control tools. Some of these tools are discussed in the following sections.

Process Improvement Tools

Benchmarking is a process of comparing one’s business processes and performance to industry bests. It may imply a comparison within a peer group. It need not be in the same industry. Namely, it may be a comparison with best practices from other industries. Quality is a common dimension for benchmark comparisons.

The simple intent is to achieve improvements by learning from other organizations. The process begins with an organization attempting to better understand an existing performance gap, then devising strategies for narrowing the gap, implementing the plan, and monitoring and controlling subsequent performance.

Brainstorming is a popular group creativity technique designed to generate a large number of ideas for improving quality. Although evidence suggests that its benefits for improving quality may be limited, it clearly offers the potential to boost morale, enhance work enjoyment, and to improve teamwork.

There are four basic rules in brainstorming, which are intended to reduce group member social inhibitions, stimulate idea generation, and to increase group creativity. The first rule supports the generation of many ideas. It suggests that the greater the number of ideas generated, the greater the chance of producing a radical and effective innovative solution. The second rule is to withhold criticism. In a group environment, criticism of another’s ideas often defeats the brainstorming process. Criticism frequently leads to either sharp disagreements, withdrawals, or both among participants. Instead, participants should focus on extending or adding to ideas. By suspending judgment, participants will feel free to generate unusual ideas. Participants should be advised to reserve criticism for a later stage of the process. The third rule is to welcome unusual ideas. Unusual ideas and new ways of thinking typically provide a greater range of options to be considered. The fourth rule suggests that ideas be combined to form better solutions.

Figure 6.2 Engineering process map and operation process map symbols

Figure 6.3 Example histogram

Process mapping or flowcharting is a tool that utilizes different shapes to represent different types of process flow tasks. An example is portrayed in Figure 6.2. This is an example of an “engineering” process map where the rectangular shape represents a task, while the triangular shape represents assessment. Similarly, there is also an “operational” process map that uses five different symbols that depict items in one of five process flow states: (a) the performance of a process task (operation), (b) transportation (movement), (c) being stored in inventory, (d) a delay (e.g., waiting to be moved), or (e) being inspected. It commonly uses a circular shape to portray a task, an arrow to portray a movement, a triangular shape to portray an inventory, a D shape to portray a delay, and a square to portray inspection. In either process map type, the user should depict the process in sufficient detail so that value-added activities may be distinguished from non-value-added activities.
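
As a rough illustration, the short Python sketch below classifies the steps of a hypothetical process using the five operational flow states described above. In the usual lean reading, only operation steps are candidates for value-added work; the step names are invented for the example.

    # A small sketch of an operational process map, assuming the five flow
    # states described above. Step names are hypothetical.

    from enum import Enum

    class FlowState(Enum):
        OPERATION = "operation"        # circle: a process task is performed
        TRANSPORT = "transportation"   # arrow: material is moved
        INVENTORY = "inventory"        # triangle: material is stored
        DELAY = "delay"                # D shape: waiting
        INSPECTION = "inspection"      # square: output is checked

    process = [
        ("receive raw material", FlowState.TRANSPORT),
        ("store in warehouse", FlowState.INVENTORY),
        ("machine part", FlowState.OPERATION),
        ("wait for paint booth", FlowState.DELAY),
        ("paint part", FlowState.OPERATION),
        ("final inspection", FlowState.INSPECTION),
    ]

    value_added = [name for name, state in process if state is FlowState.OPERATION]
    non_value_added = [name for name, state in process if state is not FlowState.OPERATION]

    print("Value-added candidates:", value_added)
    print("Non-value-added steps:", non_value_added)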

Another tool, the histogram, is a graphical representation of a data distribution. It typically consists of tabulated frequencies, shown as adjacent rectangles drawn over discrete intervals, with each rectangle’s area equal to the frequency of the observations in its interval. An example is shown in the following text as Figure 6.3. The x-axis represents the categories of concern, for example, repair times for various failures. The height of each rectangle, read against the y-axis, gives the frequency (or the probability) of that category occurring. This tool can help direct improvement efforts by identifying those concerns that happen with the greatest frequency.
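
The following minimal Python sketch tallies a set of hypothetical repair times into fixed-width intervals, mirroring the way a histogram such as Figure 6.3 is built; the data and bin choices are illustrative only.

    # A minimal histogram sketch using hypothetical repair times (in hours).
    # Observations are tallied into fixed-width intervals (bins); each bin's
    # count corresponds to the height of one rectangle in the chart.

    repair_times = [1.2, 0.8, 2.5, 1.9, 3.4, 0.7, 2.1, 1.5, 2.8, 1.1, 0.9, 2.2]

    bin_width = 1.0
    low = 0.0
    num_bins = 4   # covers 0.0 up to 4.0 hours

    counts = [0] * num_bins
    for t in repair_times:
        index = min(int((t - low) // bin_width), num_bins - 1)
        counts[index] += 1

    total = len(repair_times)
    for i, count in enumerate(counts):
        lower, upper = low + i * bin_width, low + (i + 1) * bin_width
        rel_freq = count / total
        print(f"[{lower:.1f}, {upper:.1f}): {'#' * count}  ({rel_freq:.2f})")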

A Pareto chart is a visual tool that represents a frequency distribution by classes or categories of concern. It is often thought of as an ordered histogram whereby categories of concern are arranged from most frequently occurring to least frequently occurring. The chart suggests the most frequently occurring issue be addressed first, but it is important to note that it may not be the most important. An example of a Pareto chart appears in the following text as Figure 6.4.

You will note that the horizontal x-axis of the chart identifies the categories of concern while the vertical y-axis of the chart depicts the relative frequency of each category. In all cases, the sum of the relative frequencies of the respective categories of concern will be 1.00. The dashed line in the figure corresponds with the cumulative frequency of the various categories. There are also several variations of Pareto charts.
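
A Pareto analysis can be sketched in a few lines. The Python example below uses hypothetical defect categories and counts, orders them from most to least frequent, and accumulates the relative frequencies that the dashed line in Figure 6.4 represents.

    # A Pareto analysis sketch with hypothetical defect categories and counts.
    # Categories are ordered from most to least frequent; the running total
    # corresponds to the cumulative frequency line on the chart.

    defects = {"scratches": 42, "misalignment": 23, "wrong color": 9,
               "missing screw": 17, "dents": 6}

    total = sum(defects.values())
    ordered = sorted(defects.items(), key=lambda item: item[1], reverse=True)

    cumulative = 0.0
    for category, count in ordered:
        rel_freq = count / total
        cumulative += rel_freq
        print(f"{category:<15} rel. freq. {rel_freq:.2f}  cumulative {cumulative:.2f}")
    # The relative frequencies sum to 1.00, as noted above.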

Cause-and-effect diagrams are often referred to by several alternative names. These names include Ishikawa diagram, after its developer Kaoru Ishikawa. It is also called a fishbone diagram, as it resembles the skeleton of a fish. And it is called a 6M diagram because the six primary “causal” branches emanating from the central “effect” trunk begin with the letter M (man, machine, materials, methods, metrics, and Mother Nature). An example is shown in Figure 6.5.

Figure 6.4 Example Pareto chart

Figure 6.5 Example cause-and-effect diagram

One of the six primary causes is often the root cause or source of the problem. Man may refer to causes such as inadequate training or low morale. Machine may refer to causes such as worn tooling or incorrect settings. Materials may refer to causes such as inferior quality of material elements or a component from a vendor that is out of specification. Methods may refer to work that does not follow standards or the proper sequence. Metrics typically drive behaviors, so this branch may refer to the use of the wrong measurements to encourage the desired outcomes. Mother Nature refers to elements in the environment that may lead to assignable variability, such as humidity or the level of lighting.

Emanating off the six primary cause branches, you will find secondary causes. These are typically referred to as twigs. In turn, these may have emanating tertiary branches commonly referred to as twiglets. This dissection or decomposition of the issue continues until the root cause problem source has been clearly identified.
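
One simple way to capture this decomposition is as a nested data structure. The Python sketch below records a hypothetical fishbone for a “late deliveries” effect, with the six M branches, twigs, and twiglets; all of the causes listed are invented for illustration.

    # A sketch of a cause-and-effect decomposition for a hypothetical effect
    # ("late deliveries"), stored as a nested dictionary: the six M branches,
    # secondary causes (twigs), and tertiary causes (twiglets).

    fishbone = {
        "effect": "late deliveries",
        "man": {"inadequate training": ["no onboarding plan"], "low morale": []},
        "machine": {"worn tooling": [], "incorrect settings": ["no setup checklist"]},
        "materials": {"out-of-spec components": ["single unqualified vendor"]},
        "methods": {"work not following standard sequence": []},
        "metrics": {"measuring speed but not accuracy": []},
        "mother nature": {"high humidity in warehouse": []},
    }

    def print_branches(diagram):
        print(f"Effect: {diagram['effect']}")
        for branch, twigs in diagram.items():
            if branch == "effect":
                continue
            print(f"  {branch}:")
            for twig, twiglets in twigs.items():
                print(f"    - {twig}")
                for twiglet in twiglets:
                    print(f"        * {twiglet}")

    print_branches(fishbone)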

A check sheet is a useful tool for data collection. It typically summarizes historical information often by date, time, location, and issue. An example is shown in the following text as Figure 6.6. Keeping a running tally of the issue or defect type by date, time, process location, part number, operator, or other diagnostic statistic enables relative issue frequencies, trends, or other meaningful patterns to be determined early in order to direct resources for improvement efforts.
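
The following Python sketch mimics a check sheet: hypothetical defect observations are tallied by date and defect type, and relative frequencies are then summarized to suggest where improvement resources might be directed.

    # A check sheet sketch: tally hypothetical defect observations by date and
    # defect type, then summarize relative frequencies per type.

    from collections import Counter, defaultdict

    observations = [  # (date, defect type) pairs as recorded on the floor
        ("Mon", "scratch"), ("Mon", "dent"), ("Mon", "scratch"),
        ("Tue", "scratch"), ("Tue", "misalignment"),
        ("Wed", "scratch"), ("Wed", "dent"), ("Wed", "scratch"),
    ]

    tally = defaultdict(Counter)        # date -> Counter of defect types
    for date, defect in observations:
        tally[date][defect] += 1

    for date, counts in tally.items():
        row = ", ".join(f"{defect}: {count}" for defect, count in counts.items())
        print(f"{date}: {row}")

    totals = Counter(defect for _, defect in observations)
    n = sum(totals.values())
    for defect, count in totals.most_common():
        print(f"{defect}: {count}/{n} = {count / n:.2f}")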

A scatter diagram is a graphical portrayal of the relationship between two variables. It is useful for depicting the correlation that may exist between the two variables. An example is shown in Figure 6.7. It can depict how one variable (e.g., humidity) may impact the outcome of a process. However, the user should remember that there may be additional variables that impact outcomes. Furthermore, correlation does not necessarily relate to causality.
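
A scatter diagram is usually drawn, but the underlying relationship can also be summarized numerically. The Python sketch below computes the Pearson correlation coefficient for hypothetical humidity and defect-count data; as noted above, even a strong correlation does not establish causality.

    # A scatter-diagram companion sketch: compute the Pearson correlation
    # between hypothetical humidity readings and defect counts per shift.

    from math import sqrt

    humidity = [30, 35, 40, 45, 50, 55, 60, 65]   # percent relative humidity
    defects  = [2, 3, 2, 4, 5, 5, 7, 8]           # defects per shift

    n = len(humidity)
    mean_x = sum(humidity) / n
    mean_y = sum(defects) / n

    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(humidity, defects))
    var_x = sum((x - mean_x) ** 2 for x in humidity)
    var_y = sum((y - mean_y) ** 2 for y in defects)

    r = cov / sqrt(var_x * var_y)
    print(f"Pearson correlation r = {r:.2f}")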

Statistical Process Control Tools

The second category of quality improvement tools may be divided into two subclasses: control charts and acceptance plans. Each of these subcategories is discussed in the following text.

Figure 6.6 Example check sheet

Figure 6.7 Example scatter diagram

Figure 6.8 Generic control chart*

The control chart was developed by Walter Shewhart in the 1920s. It is a graphical tool for describing and monitoring the state of control, typically for repetitive processes. The basic control chart consists of three elements: a center line (CL), which represents the process target level or process mean while in a state of statistical control (shown as the dashed line in Figure 6.8), an upper control limit (UCL), and a lower control limit (LCL). Although the details of constructing control charts go beyond the intent of this book, the UCL and LCL are typically established as some equal number of process standard deviations above and below the process mean (CL), regardless of the type of control chart being used. Over time, sample values are used to monitor process performance. An example of a generic control chart is depicted in Figure 6.8.

The construction of a control chart varies with two general data types. Some data is measured over a continuous (variable) scale such as ounces or inches. For continuous data, a means (x̄) chart, which is used to monitor whether a process is operating near its target level (central tendency), and a range (R) chart, which is used to monitor process variability, may be used to assess performance.
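
As a minimal sketch of the idea, the Python example below computes center lines and control limits for an x̄ chart and an R chart from hypothetical subgroups of five measurements. The constants A2, D3, and D4 used here are the values commonly tabulated for a subgroup size of five; other subgroup sizes require different constants from a control-chart table.

    # A minimal x-bar and R chart sketch with hypothetical subgroups of five
    # measurements each (e.g., fill weights in ounces).

    subgroups = [
        [10.1, 10.3, 9.9, 10.2, 10.0],
        [10.0, 10.4, 10.1, 9.8, 10.2],
        [9.9, 10.0, 10.1, 10.3, 10.2],
        [10.2, 10.1, 10.0, 9.9, 10.1],
    ]

    A2, D3, D4 = 0.577, 0.0, 2.114   # commonly tabulated values for n = 5

    x_bars = [sum(s) / len(s) for s in subgroups]     # subgroup means
    ranges = [max(s) - min(s) for s in subgroups]     # subgroup ranges

    x_double_bar = sum(x_bars) / len(x_bars)          # center line of x-bar chart
    r_bar = sum(ranges) / len(ranges)                 # center line of R chart

    ucl_x = x_double_bar + A2 * r_bar
    lcl_x = x_double_bar - A2 * r_bar
    ucl_r = D4 * r_bar
    lcl_r = D3 * r_bar

    print(f"x-bar chart: CL={x_double_bar:.3f}, UCL={ucl_x:.3f}, LCL={lcl_x:.3f}")
    print(f"R chart:     CL={r_bar:.3f}, UCL={ucl_r:.3f}, LCL={lcl_r:.3f}")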

The other data type is attribute (0, 1) in nature. An example is a light bulb, which either works correctly or is defective. For attribute data, several chart types may be used to assess performance. Two charts assess performance for items that either meet specifications (good) or do not (defective or nonconforming). These two charts are known as a p-chart, for monitoring the proportion defective in a sample, and an np-chart, for monitoring the number defective in a sample. Two additional control charts for attribute data monitor nonconformities when multiple defects are possible per unit of output in the sample. Furthermore, although an item may possess nonconformities, it may not be a defective item unless the number of nonconformities exceeds a hurdle value. These two charts, useful when multiple nonconformities may occur per unit of output, include the c-chart, for monitoring the total number of nonconformities in a sample, and the u-chart, for monitoring the average number of nonconformities in a sample.
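
For attribute data, a p-chart sketch is shown below. It uses hypothetical samples of 100 units each, computes the average proportion defective as the center line, and sets the control limits in the usual three-sigma form for a proportion, truncating the lower limit at zero.

    # A p-chart sketch using hypothetical samples of 100 units each.

    from math import sqrt

    sample_size = 100
    defectives = [4, 6, 3, 7, 5, 2, 8, 4]    # defective units found in each sample

    proportions = [d / sample_size for d in defectives]
    p_bar = sum(proportions) / len(proportions)           # center line

    sigma_p = sqrt(p_bar * (1 - p_bar) / sample_size)
    ucl = p_bar + 3 * sigma_p
    lcl = max(0.0, p_bar - 3 * sigma_p)

    print(f"CL={p_bar:.3f}, UCL={ucl:.3f}, LCL={lcl:.3f}")
    for i, p in enumerate(proportions, start=1):
        status = "in control" if lcl <= p <= ucl else "investigate"
        print(f"sample {i}: p={p:.2f} -> {status}")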

As noted earlier, control charts are useful for describing and monitoring the state of control, typically for repetitive processes. Variability in these processes is derived from two causes: random and assignable causes. Random causes cannot be eliminated, as they are simply inherent in the process and always will be present. Assignable variation suggests that the variation may be assigned to a particular cause and thereby eliminated by improvement efforts.

A process is deemed to be in a state of statistical control if all of the variability is attributable to random causes. Whenever assignable variation is present, the process is deemed to be out of control. Example causes of assignable variation include incorrect machine settings, operator error, and out-of-specification materials. In the presence of assignable variation, the quality statistic being monitored will typically exhibit greater variability or some other pattern. The chart is used to detect various conditions in order to signal the need for further investigation.

Control charts can reveal many conditions that suggest the need for deeper investigation. Any one of four general scenarios could suggest the need for further investigation. First, one should compare the actual versus the expected number of observations falling outside of the control limits. If the actual number varies significantly from expectations, an investigation may be suggested. Second, plotted data points should depict a random pattern if all of the variability is attributable to chance causes. Plotted data points that reflect patterns such as a trend are not random. Patterns suggest the need for further investigation. Third, the extent of variability reflected in the sample data points is preferably low. If a large degree of variability is reflected in the sample data points, there may be assignable causes of variation present. Finally, there should not be any evidence of runs in the data. A run is defined as a significant number of observations lying on the same side of the CL. In all four of these scenarios, any condition suggesting the need to conduct an investigation must be coupled with one’s judgment and experience and tempered by the cost of conducting an investigation versus the benefit of reducing defects being passed downstream.
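
Two of these four checks, points falling beyond the control limits and runs on one side of the center line, are easy to automate. The Python sketch below applies them to hypothetical sample means; the run-length threshold of eight consecutive points is an illustrative rule of thumb, not a universal standard.

    # A sketch of two of the signal checks described above: points beyond the
    # control limits and a run of consecutive points on one side of the center
    # line. The run length of eight is an illustrative threshold.

    def beyond_limits(points, lcl, ucl):
        return [i for i, x in enumerate(points) if x < lcl or x > ucl]

    def longest_run(points, center):
        longest = current = 0
        prev_side = None
        for x in points:
            side = "above" if x > center else "below"
            current = current + 1 if side == prev_side else 1
            longest = max(longest, current)
            prev_side = side
        return longest

    samples = [10.1, 10.2, 10.3, 10.2, 10.4, 10.3, 10.2, 10.5, 10.3, 10.4]
    CL, LCL, UCL = 10.0, 9.4, 10.6

    print("Points beyond limits:", beyond_limits(samples, LCL, UCL))
    if longest_run(samples, CL) >= 8:
        print("Run detected: consider investigating for assignable causes.")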

These four scenarios may lead to an interpretation that further investigation is needed. However, they do not necessarily mean assignable variation is present. The decision maker may conclude the process is out of control when in reality it is in control, or that a process is in control when assignable variation is present and the process truly is out of control. The common operating hypothesis is that a process is in a state of statistical control. A type I error occurs when a decision is made to conduct an investigation looking for a source of assignable variation when in reality the process has none. This is sometimes referred to as producer’s risk. The cost associated with a type I error may be lost production time and the cost of testing for an absent problem. On the other hand, a type II error occurs when a process continues to be deemed in control when in reality assignable variation is present. This is sometimes referred to as consumer’s risk. The cost associated with a type II error includes potential scrap, rework, as well as possible after-the-sale service costs, which can be difficult to measure.

Acceptance plans are typically used to assess the quality of a batch of items. It is common to use this tool to make an “accept” or “reject” decision (lot sentencing) upon receipt if the incoming quality of a batch is suspect or just prior to a batch shipment to a customer. It is also used for lot sentencing in lower-volume batch processes as batch orders flow from operation to operation. If applied during the flow path of a batch, it is more common to make a decision regarding the disposition of a lot just prior to a costly, irreversible, or covering operation.

While performing acceptance sampling, it is important to understand that the disposition decision made does not typically grade the level of quality. Rather, a decision to accept or reject is simply made. Acceptance sampling may be viewed as a means of auditing quality or providing assurance that specifications are being met. It is a less expensive alternative to 100 percent inspection but when applied, introduces the risk of accepting inferior lots and rejecting superior lots.

Although the details for the development of an acceptance plan go beyond the intent of this book, there are several characteristics of these plans that are noteworthy. First, acceptance plans may be devised for either continuous measures or attribute data. Second, there are plan variations that utilize single, double, or multiple samples in order to make a disposition decision. Third, in order to apply acceptance sampling, the user must determine several parameters, including the sample size, number of samples to be drawn, and the acceptance or rejection criterion. These parameters will determine the plan’s discriminatory power. It is important that this power ensure that the customer’s lowest acceptable quality level is being met.
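
To illustrate, the Python sketch below implements a hypothetical single-sampling plan by attributes: draw n items from the lot and accept it if no more than c defectives are found. It also evaluates the probability of acceptance at several incoming defect rates using a binomial model, which gives a point-by-point view of the plan’s discriminatory power (its operating characteristic curve). The values of n, c, and the defect rates are illustrative.

    # A single-sampling plan sketch by attributes: sample n items from a lot
    # and accept the lot if at most c defectives are found.

    from math import comb

    def prob_accept(n, c, p):
        """Probability that a lot with defect rate p is accepted."""
        return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

    n, c = 50, 2   # sample 50 items, accept if 2 or fewer defectives

    for p in (0.01, 0.02, 0.05, 0.10):
        print(f"defect rate {p:.2f}: P(accept) = {prob_accept(n, c, p):.3f}")

    # Lot sentencing for a single received batch:
    defectives_found = 3
    print("accept" if defectives_found <= c else "reject")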

If a lot is rejected, corrective action is warranted. This may include actions such as the return of the complete lot to the vendor or further inspection of the remaining items not previously evaluated. In either case, it is desirable to know why acceptable quality was not achieved so that preventative action(s) may be taken.

Summary

The TQM tools noted in the preceding sections are not meant to represent an exhaustive list. Many others exist such as various Six Sigma practices including the use of statistical tools and tests (e.g., regression analysis, paired comparisons, rank order tests, analysis of variance, failure modes and effects analysis), Dorian Shainin’s contributions (e.g., Lot Plots and the Red X effect), Taguchi’s contributions (e.g., his off-line quality control strategy consisting of three stages: system design, parameter design, and tolerance design; as well as the Taguchi Loss Function), TRIZ (or TIPS: the theory of inventive problem solving) consisting of generalized patterns and distinguishing characteristics, which may be used to solve problems, and others.

It must be remembered that TQM is a complementary and inseparable continuous improvement program sharing the same objectives as lean management. Both possess a customer-driven philosophy for organizationwide continuous or ongoing improvement. Both possess a systematic perspective consisting of leadership, culture, and teamwork as well as a methodical foundation of numerous principles and tools, experimentation, scientific analysis, and problem solving.

Similar to lean management, effective management of quality follows with a systems approach. TQM involves the execution of three interdependent planning stages, each comprising various activities. These three planning stages discussed earlier are strategic quality planning, tactical quality assurance, and operational quality management, control, and improvement. It is during the third phase that many of the TQM tools examined earlier are applied.

To reiterate, a fundamental understanding must exist that quality is free. Although there may be increased prevention costs, well-executed TQM programs pay for themselves in the form of decreased internal and external failure costs resulting in lower total system costs.

__________________

* Abbreviations: LCL, lower control limit; UCL, upper control limit.
