3
Quality Management: Practices, Tools, and Standards

  1. 3-1 Introduction and chapter objectives
  2. 3-2 Management practices
  3. 3-3 Quality function deployment
  4. 3-4 Benchmarking and performance evaluation
  5. 3-5 Health care analytics
  6. 3-6 Tools for continuous quality improvement
  7. 3-7 International Standards ISO 9000 and other derivatives
  8. Summary

3-1 Introduction and Chapter Objectives

The road to a quality organization is paved with the commitment of management. If management is not totally behind this effort, the road will be filled with potholes, and the effort will drag to a halt. A keen sense of involvement is a prerequisite for this journey, because like any journey of import, the company will sometimes find itself in uncharted territory. Company policies must be carefully formulated according to principles of a quality program. Major shifts in paradigms may occur. Resources must, of course, be allocated to accomplish the objectives, but this by itself is not sufficient. Personal support and motivation are the key ingredients to reaching the final destination.

In this chapter we look at some of the quality management practices that enable a company to achieve its goals. These practices start at the top, where top management creates the road map, and continue with middle and line management, who help employees follow the map. With an ever-watchful eye on the satisfaction of the customer, the entire workforce embarks on an intensive study of product design and process design. Company policies on vendor selection are discussed. Everything is examined through the lens of quality improvement. The importance of health care analytics is introduced as well.

The prime objective of this chapter is to provide a framework through which management accomplishes its task of quality assurance. Principles of total quality management are presented. Additional tools such as quality function deployment, which plays a major role in incorporating customer needs into products and processes, are discussed. Problems to address are identified and prioritized through Pareto charts and failure mode, effects, and criticality analysis. Following this, root cause identification is explored through cause-and-effect diagrams. The study of all processes, be they related to manufacturing or service, typically starts with a process map that identifies all operations, their precedence relationships, the inputs and outputs of each operation along with the controllable and uncontrollable factors, and the designated ownership of each. A simpler version of the process map is a flowchart, which shows the sequence of operations and decision points and assists in identifying value-added and non-value-added activities.

Finally, we consider the standards set out by the International Organization for Standardization (ISO): in particular, ISO 9000 standards. Organizations seek to be certified by these standards to demonstrate the existence of a quality management process in their company. Standards from some other industries are briefly examined.

3-2 Management Practices

A company's strategic plan is usually developed by top management; they are, after all, responsible for the long-range direction of the company. A good strategic plan addresses the needs of the company's constituencies. First and foremost, of course, is the customer, who can be internal and/or external. The customer wants a quality product or service at the lowest possible cost. Meeting the needs of the shareholders is another objective. Shareholders want to maximize their return on investment. Top management has the difficult task of balancing these needs and creating a long-term plan that will accomplish them.

What management needs are specific practices that enable them to install a quality program. That is what this chapter is about, but first we need some terminology. In this context, the term total quality management (TQM) refers to a comprehensive approach to improving quality. According to the U.S. Department of Defense, TQM is both a philosophy and a set of guiding principles that comprise the foundation of a continuously improving organization. Other frequently used terms are synonymous with TQM; among them are continuous quality improvement, quality management, total quality control, and companywide quality assurance.

Total Quality Management

Total quality management revolves around three main themes: the customer, the process, and the people. Figure 3-1 shows some basic features of a TQM model. At its core are the company vision and mission and management commitment. They bind the customer, the process, and the people into an integrated whole. A company's vision is quite simply what the company wants to be. The mission lays out the company's strategic focus. Every employee should understand the company's vision and mission so that individual efforts will contribute to the organizational mission. When employees do not understand the strategic focus, individuals and even departments pursue their own goals rather than those of the company, and the company's goals are inadvertently sabotaged. The classic example is maximizing production with no regard to quality or cost.

The figure shows two concentric circles: at the core are the company vision and mission and management commitment, surrounded by the customer, the process, and the people. The customer theme centers on satisfying customer needs and expectations; the process theme includes self-directed cross-functional teams, process analysis and continuous improvement, integration of vendors, and organizational culture; the people theme includes empowerment, organizational culture, and open channels of communication.

Figure 3-1 Features of a TQM model.

Management commitment is another core value in the TQM model. It must exist at all levels for the company to succeed in implementing TQM. Top management envisions the strategy and creates policy. Middle management works on implementation. At the operational level, appropriate quality management tools and techniques are used.

Satisfying customer needs and expectations is a major theme in TQM—in fact, it is the driving force. Without satisfied customers, market share will not grow and revenue will not increase. Management should not second-guess the customer. For example, commercial builders should construct general merchandise stores only after they have determined that there is enough customer interest to support them. If consumers prefer specialty stores, specialty stores should be constructed. Direct feedback using a data-driven approach is the best way to identify customer expectations and needs. A company's strategic plan must conform to these needs.

A key principle in quality programs is that customers are both internal and external. The receiving department of a processed component is a customer of that processing unit. Feedback from such internal customers identifies problem areas before the product reaches its finished stage, thus reducing the cost of scrap and rework.

Customer expectations can, to some extent, be managed by the organization. Factors such as the quality of products and services and the warranty policies offered by competitors influence customer expectations directly. The company can, through truthful advertising, shape the public's expectations. For example, if the average life of a lawn mower under specified operating conditions is 15 years, there is no reason to exaggerate it. In service operations, customers know which companies are responsive and friendly; this needs no advertising. Customer surveys can help management determine discrepancies between expectations and satisfaction. Identifying such discrepancies and taking measures to eliminate them is known as gap analysis.

The second theme in TQM is the process. Management is responsible for analyzing the process to improve it continuously. In this framework, vendors are part of the extended process, as advocated by Deming. As discussed earlier, integrating vendors into the process improves the vendors' products, which leads to better final products. Because problems can and do span functional areas, self-directed cross-functional teams are important for generating alternative feasible solutions—the process improves again. Technical tools and techniques along with management tools come in handy in the quest for quality improvement. Self-directed teams are given the authority to make decisions and to make appropriate changes in the process.

The third theme deals with people. Human “capital” is an organization's most important asset. Empowerment—involving employees in the decision-making process so that they take ownership of their work and the process—is a key factor in TQM. It is people who find better ways to do a job, and this is no small source of pride. With pride comes motivation. There is a sense of pride in making things better through the elimination of redundant or non-value-added tasks or combining operations. In TQM, managing is empowering.

Barriers restrict the flow of information. Thus, open channels of communication are imperative, and management must maintain them. For example, if marketing fails to talk to product design, a key input on customer needs will not be incorporated into the product. Management must work with its human resources staff to empower people to break down interdepartmental barriers. The traditional management role of coordinating and controlling has evolved into a paradigm of coaching and caring. Once people understand that they, and only they, can improve the state of affairs, and once they are given the authority to make appropriate changes, they will do the job that needs to be done. An intrinsic urge to do things better arises from within, and such an urge supersedes external forms of motivation.

Linking the human element and the company's vision is the fabric we call organizational culture. Culture comprises the beliefs, values, norms, and rules that prevail within an organization. How is business conducted? How does management behave? How are employees treated? What gets rewarded? How does the reward system work? How is input sought? How important are ethics? What is the social responsibility of the company? The answers to these and many other questions define an organization's culture. One culture may embrace a participative style of management that empowers its employees and delights its customers with innovative and timely products. Another culture may choose short-term profit over responsibility to the community at large. Consider, for example, the social responsibility adopted by the General Electric Company. The company and its employees made enormous contributions to support education, the arts, the environment, and human services organizations worldwide.

Vision and Quality Policy

A company's vision comprises its values and beliefs. The vision is what the company wants to be, and it is a message that every employee should not only hear but also believe in. Visions, carefully articulated, give a coherent sense of purpose. Visions are about the future, and effective visions are simple and inspirational. Finally, a vision must be motivational so as to evoke a bond that unites the efforts of people working toward a common organizational goal. From the vision emanates a mission statement for the organization that is more specific and goal oriented.

A service organization, IBM Direct, is dedicated to serving U.S. customers who order such IBM products as ES/9000 mainframes, RS/6000 and AS/400 systems, connectivity networks, and desktop software. Its vision for customer service is “to create an environment for customers where conducting business with IBM Direct is considered an enjoyable, pleasurable and satisfying experience.” This is what IBM Direct wants to be. Its mission is “to act as the focal point for post-sale customer issues for IBM Direct customers. We must address customer complaints to obtain timely and complete resolutions. And, through root cause analysis, we must ensure that our processes are optimized to improve our customer satisfaction.” Here, again, the mission statement gets specific. This is how IBM Direct will get to its vision. Note that no mention is made of a time frame. This issue is usually dealt with in goals and objectives.

Framed by senior management, a quality policy is the company's road map. It indicates what is to be done, and it differs from procedures and instructions, which address how it is to be done, where and when it is to be done, and who is to do it. A beacon in TQM leadership, Xerox Corporation is the first major U.S. corporation to regain market share after losing it to Japanese competitors. Xerox attributes its remarkable turnaround to its conversion to TQM philosophy. The company's decision to rededicate itself to quality through a strategy called Leadership Through Quality has paid off. Through this process, Xerox created a participatory style of management that focuses on quality improvement while reducing costs. It encouraged teamwork, sought more customer feedback, focused on product development to target key markets, encouraged greater employee involvement, and began competitive benchmarking. Greater customer satisfaction and enhanced business performance are the driving forces in its quality program, the commitment to which is set out in the Xerox quality policy: “Quality is the basic business principle at Xerox.”

Another practitioner of TQM, the Eastman Chemical Company, manufactures and markets over 400 chemicals, fibers, and plastics for over 7000 customers around the world. A strong focus on customers is reflected in its vision: “to be the world's preferred chemical company.” A similar message is conveyed in its quality goal: “to be the leader in quality and value of products and services.” Its vision, values, and goals define Eastman's quality culture. The company's quality management process is set out in four directives: “focus on customers; establish vision, mission, and indicators of performance; understand, stabilize, and maintain processes; and plan, do, check, act for continual improvement and innovation.”

Eastman Chemical encourages innovation and provides a structured approach to generating new ideas for products. Cross-functional teams help the company understand the needs of both its internal and external customers. The teams define and improve processes, and they help build long-term relationships with vendors and customers. Through the Eastman Innovative Process, a team of employees from various areas—design, sales, research, engineering, and manufacturing—guides an idea from inception to market. People have ownership of the product and of the process. Customer needs and expectations are addressed through the process and are carefully validated. One outcome of the TQM program has been the drastic reduction (almost 50%) of the time required to launch a new product. Through a program called Quality First, employees team with key vendors to improve the quality and value of purchased materials, equipment, and services. Over 70% of Eastman's worldwide customers have ranked the company as their best supplier. Additionally, Eastman has received an outstanding rating on five factors that customers view as most important: product quality, product uniformity, supplier integrity, correct delivery, and reliability. Extensive customer surveys led the company to institute a no-fault return policy on its plastic products. This policy, believed to be the only one of its kind in the chemical industry, allows customers to return any product for any reason for a full refund.

Balanced Scorecard

The balanced scorecard (BSC) is a management system that integrates measures derived from the organization's strategy. It integrates measures related to tangible as well as intangible assets. The focus of BSC is on accomplishing the company's mission through the development of a communication and learning system. It translates the mission and strategy to objectives and measures that span four dimensions: learning and growth, internal processes, customers, and financial (Kaplan and Norton 1996). Whereas traditional systems have focused only on financial measures (such as return on investment), which is a short-term measure, BSC considers all four perspectives from a long-term point of view. So, for example, even for the financial perspective, it considers measures derived from the business strategy, such as sales growth rate or market share in targeted regions or customers. Figure 3-2 shows the concept behind the development of a balanced scorecard.

The figure shows organizational strategy feeding four interconnected perspectives (learning and growth, internal processes, customers, and financial), which in turn feed the balanced scorecard of diagnostic and strategic measures; a feedback arrow connects the balanced scorecard back to organizational strategy.

Figure 3-2 Balanced scorecard.

Measures in the learning and growth perspective that serve as drivers for the other three perspectives are based on three themes. First, employee capabilities, which include employee satisfaction, retention, and productivity, are developed. Improving satisfaction typically improves retention and productivity. Second, development of information systems capabilities is as important as the system for procuring raw material, parts, or components. Third, creation of a climate for growth through motivation and empowerment is an intangible asset that merits consideration.

For each of the four perspectives, diagnostic and strategic measures could be identified. Diagnostic measures relate to keeping a business in control or in operation (similar to the concept of quality control). In contrast, strategic measures address achieving competitive excellence based on the business strategy. They relate to the position of the company relative to its competitors and information on its customers, markets, and suppliers. Strategic measures could be of two types: outcome measures and performance measures. Outcome measures are based on results from past efforts and are lagging indicators. Examples are return on equity or employee productivity. Performance measures reflect the uniqueness of the business strategy and are leading indicators, examples of which are sales growth rate by segment or percentage of revenue from new products. Each performance measure has to be related to an outcome measure through a cause-and-effect type of analysis, which in turn reflects the financial drivers of profitability. Such an analysis can also identify the specific internal processes that will deliver value to targeted customers if the company strategy is, say, to expand its market share for a particular category of customers.

In the learning and growth perspective, employee satisfaction, a strategic lag indicator, could be measured on an ordinal scale of 1 to 5. Another lag indicator could be the revenue per employee, a measure of employee productivity. A performance measure, a lead indicator, could be the strategic job coverage ratio, which is the ratio of the number of employees qualified for strategic jobs to the organizational needs that are anticipated. This is a measure of the degree to which the company has reskilled its employees. Under motivation, an outcome measure could be the number of suggestions per employee or the number of suggestions implemented.

When considering the internal processes perspective, one is required to identify the critical processes that will enable the meeting of customer or shareholder objectives. The expectations of specific external constituencies may impose demands on internal processes. Cycle time, throughput, and costs associated with existing processes are examples of diagnostic measures. In the strategic context, a business process for creating value could include innovation, operations, and post-sale service. Innovation may include basic research to develop new products and services or applied research to exploit existing technology. Time to develop new products is an example of a strategic outcome measure. Under post-sale service, measures such as responsiveness (measured by time to respond), friendliness, and reliability are applicable.

The strategic aspect of the customer perspective deals with identifying customers to target and the corresponding market segments. For most businesses, core outcome measures are market share, degree of customer retention, customer acquisition, customer satisfaction, and customer profitability from targeted segments. All of these are lagging measures and do not indicate what employees should be doing to achieve desired outcomes. Thus, under performance drivers (leading indicators), measures that relate to creating value for the customer are identified. These may fall in three broad areas: product/service attributes, customer relationship, and image and reputation. In association with product/service attributes, whereas lead time for existing products may be a diagnostic measure, time to serve targeted customers (e.g., quick check-in for business travelers in a hotel) is a strategic performance measure. Similarly, while quality of product/services (as measured by, say, defect rates) is considered as a “must,” some unique measures such as service guarantees (and the cost of such), which offer not only a full refund but also a premium above the purchase price, could be a performance measure.

Under the financial perspective, the strategic focus depends on the stage in which the organization currently resides (i.e., infancy, dominancy, or maturity). In the infancy stage, companies capitalize on significant potential for growth. Thus, large investments are made in equipment and infrastructure, which may result in negative cash flow. Sales growth rate by product or market segment could be a strategic outcome measure. In the dominancy phase, where the business relies mainly on its existing market, traditional measures such as gross margin or operating income are valid. Finally, a company in the mature stage may no longer invest in new capabilities; the goal is to maximize cash flow, and unit costs could be a measure. For all three phases, some common themes are revenue growth, cost reduction, and asset utilization (Kaplan and Norton 1996).

Several features are to be noted about the balanced scorecard. First, under strategic measures, the link between performance measures and outcome measures represents a cause-and-effect relationship. Comparing the observed outcome measures with the strategic performance expected provides feedback that may indicate a choice of different performance measures. Second, the measures in the balanced scorecard, taken together, reflect the business's performance. If that performance does not match the performance expected from the company strategy, a feedback loop exists to modify the strategy. Thus, the balanced scorecard serves an important purpose in linking the selection and implementation of an organization's strategy.

Performance Standards

One intended outcome of a quality policy is a desirable level of performance: that is, a defect-free product that meets or exceeds customer needs. Even though current performance may satisfy customers, organizations cannot afford to be complacent. Continuous improvement is the only way to stay abreast of the changing needs of the customer. The tourism industry, for instance, has seen dramatic changes in recent years; options have increased and customer expectations have risen. Top-notch facilities and a room filled with amenities are now the norm and don't necessarily impress the customer. Meeting and exceeding consumer expectations are no small challenges. Hyatt Hotels Corporation has met this challenge head-on. Its “In Touch 100” quality assurance initiative provides a framework for its quality philosophy and culture. Quality at Hyatt means consistently delivering products and services 100% of the time. The In Touch 100 program sets high standards—standards derived from guest and employee feedback—and specifies the pace that will achieve these standards every day. The core components of the quality assurance initiative are standards, technology, training, measurements, recognition, communication, and continuous improvement (Buzanis 1993).

Six Sigma Quality

Although a company may be striving toward an ultimate goal of zero defects, numerical standards for performance measurement should be avoided. Setting numerical values that may or may not be achievable can have an unintended negative emotional impact. Not meeting the standard, even though the company is making significant progress, can be demoralizing for everyone. Numerical goals also shift the emphasis to the short term, as long-term benefits are sacrificed for short-term gains.

So, the question is: How do we measure performance? The answer is: by making continuous improvement the goal and then measuring the trend (not the numbers) in improvement. This is also motivational. Another effective method is benchmarking; this involves identifying high-performance companies or intra-company departments and using their performance as the improvement goal. The idea is that although the goals may be difficult to achieve, others have shown that it can be done.

Quantitative goals do have their place, however, as Motorola, Inc. has shown with its concept of six sigma quality. Sigma (σ) stands for the standard deviation, which is a measure of variation in the process. Assuming that the process output is represented by a normal distribution, about 99.73% of the output is contained within bounds that are three standard deviations (3σ) from the mean. As shown in Figure 3-3, these are represented as the lower and upper tolerance limits (LTL and UTL). The normal distribution is characterized by two parameters: the mean and the standard deviation. The mean is a measure of the location of the process. Now, if the product specification limits are three standard deviations from the mean, the proportion of nonconforming product is about 0.27%, or approximately 2700 parts per million (ppm); that is, the two tails, each 1350 ppm, add to 2700 ppm. On the surface, this appears to be a good process, but appearances can be deceiving. When we realize that most products and services consist of numerous processes or operations, reality begins to dawn. Even though a single operation may yield 99.73% good parts, the compounding effect of out-of-tolerance parts will have a marked influence on the quality level of the finished product. For instance, for a product that contains 1000 parts or has 1000 operations, an average of 2.7 defects per product unit is expected. The probability that a product contains no defective parts is only 6.72% (e^-2.7, using the Poisson distribution discussed in a later chapter)! This means that only about 7 units in 100 will go through the entire manufacturing process without a defect (the rolled throughput yield)—not a desirable situation.
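To make the arithmetic concrete, the following is a minimal sketch in Python of the 3σ calculation above, using SciPy's normal and Poisson distributions. The 1000-part product is the illustrative figure from the text; the specific library calls are an assumption of this sketch, not part of the original discussion.

```python
# Sketch of the 3-sigma arithmetic: nonconforming fraction, defects per unit,
# and rolled throughput yield for a 1000-part product.
from scipy.stats import norm, poisson

# Proportion falling outside +/-3 sigma of a centered normal process
p_out = 2 * norm.sf(3)            # sf(3) = upper-tail area beyond 3 sigma
print(f"nonconforming fraction: {p_out:.6f}")   # ~0.0027, i.e., ~2700 ppm

# Expected defects for a product with 1000 parts or operations
n_parts = 1000
dpu = n_parts * p_out             # ~2.7 defects per unit

# Probability of a defect-free unit (Poisson) = rolled throughput yield
rty = poisson.pmf(0, dpu)         # equivalently exp(-dpu), ~0.067
print(f"defects per unit: {dpu:.2f}, rolled throughput yield: {rty:.4f}")
```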

The figure shows a bell-shaped normal distribution representing the process output, with a vertical line at the mean and vertical lines at the lower and upper tolerance limits (LTL and UTL) located at −3σ and +3σ from the mean. The shaded tail areas beyond the LTL and UTL each represent 1350 ppm (0.135%).

Figure 3-3 Process output represented by a normal distribution.

For a product to be built virtually defect free, it must be designed to specification limits that are significantly more than ±3σ from the mean. In other words, the process spread as measured by ±3σ has to be significantly less than the spread between the upper and lower specification limits (USL and LSL). Motorola's answer to this problem is six sigma quality; that is, process variability must be so small that the specification limits are six standard deviations from the mean. Figure 3-4 demonstrates this concept. If the process distribution is stable (i.e., it remains centered between the specification limits), the proportion of nonconforming product should be only about 0.001 ppm on each tail.

Figure representing six sigma capability: process distributions are superimposed, with the wider specification limits (LSL and USL) at −6σ and +6σ from the mean and the limits of the central curve at −3σ and +3σ.

Figure 3-4 Six sigma capability.

In real-world situations, the process distribution will not always be centered between the specification limits; process shifts to the right or left are not uncommon. It can be shown that even if the process mean shifts by as much as 1.5 standard deviations from the center, the proportion nonconforming will be about 3.4 ppm. Comparing this to a 3σ capability of 2700 ppm demonstrates the improvement in the expected level of quality from the process. If we consider the previous example for a product containing 1000 parts and we design it for 6σ capability, an average of 0.0034 defect per product unit (3.4 ppm) is expected instead of the 2.7 defects expected with 3σ capability. The cumulative yield (rolled throughput yield) from the process will thus be about 99.66%, a vast improvement over the 6.72% yield in the 3σ case.
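A similar sketch, under the same assumptions as before, reproduces the six sigma figures quoted above for both the centered process and the process whose mean has shifted by 1.5 standard deviations.

```python
# Sketch of the six sigma calculation: spec limits at +/-6 sigma,
# with and without a 1.5-sigma shift of the process mean.
from math import exp
from scipy.stats import norm

p_centered = 2 * norm.sf(6)                      # centered: ~0.002 ppm total (~0.001 ppm per tail)
print(f"centered process: {p_centered * 1e6:.4f} ppm")

shift = 1.5
p_shifted = norm.sf(6 - shift) + norm.sf(6 + shift)   # both tails after the shift
print(f"shifted process: {p_shifted * 1e6:.2f} ppm")  # ~3.4 ppm

# For the 1000-part product: expected defects per unit and rolled throughput yield
dpu = 1000 * p_shifted
rty = exp(-dpu)
print(f"defects per unit: {dpu:.4f}, rolled throughput yield: {rty:.4%}")   # ~0.0034, ~99.66%
```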

Establishing a goal of 3σ capability is acceptable as a starting point, however, because it allows an organization to set a baseline for improvement. As management becomes more process oriented, higher goals such as 6σ capability become possible. Such goals may require fundamental changes in management philosophy and the organizational culture.

Although the previous description of six sigma has defined it as a metric, in a broader perspective six sigma may also be viewed as a philosophy or a methodology for continuous improvement. Viewed as a philosophy, six sigma is a strategic business initiative. In this context, the theme of identifying customer needs and ways to satisfy the customer is central. As with other quality philosophies, continuous improvement is integral to six sigma as well.

As a methodology, six sigma may be viewed as the collection of the following steps or phases: define, measure, analyze, improve, and control. Within each phase there are certain tools that could be utilized, some of which are discussed in this chapter. In the define phase, customer needs are translated into specific attributes that are critical to meeting those needs. Typically, these are categorized as critical to quality, delivery, or cost. Identification of these attributes creates a framework for the study. For example, suppose that the waiting time of customers in a bank is the problem to be tackled. The number of tellers on duty during specific periods of the day is an attribute critical to reducing the waiting time and might be a factor of investigation.

The measure phase consists of identifying metrics for process performance, including the establishment of baseline levels. In our example, a chosen metric could be the average waiting time prior to service, in minutes, while the baseline level could be the current value of this metric, say 5 minutes. Key process input and output variables are also identified. In the define phase, useful tools include the process flowchart or its more detailed version, the process map. To separate the vital few from the trivial many, a Pareto chart could be appropriate. Further, to study the various factors that may affect an outcome, a cause-and-effect diagram may be used. In the measure phase, one first has to ensure that the measurement system itself is stable; the technical name for such a study is gage repeatability and reproducibility. Thereafter, benchmark measures of process capability could be utilized, some of which are defects per unit of product, parts per million nonconforming, and rolled throughput yield, representing the proportion of the final product that has no defects. Other measures of process capability are discussed later in a separate chapter.

In the analyze phase, the objective is to determine, through analysis of collected data, which of a multitude of factors affects the output variable(s) significantly. Tools may be simple graphical ones such as scatterplots and multi-vari charts. Alternatively, analytical models may be built linking the output or response variable to one or more independent variables through regression analysis. Hypothesis testing on selected parameters (e.g., average waiting time before and after process improvement) could be pursued. Analysis-of-variance techniques may be used to investigate the statistical significance of one or more factors on the response variable.
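As one illustration of the analyze phase, the sketch below runs a two-sample t-test on before-and-after waiting-time samples for the bank example; the data values are invented for illustration and are not from the text.

```python
# Hypothesis test comparing average customer waiting times before and after
# a process change (hypothetical data, in minutes).
import numpy as np
from scipy import stats

before = np.array([5.2, 6.1, 4.8, 5.9, 6.3, 5.5, 4.9, 6.0])   # hypothetical
after  = np.array([3.9, 4.2, 4.5, 3.8, 4.1, 4.4, 4.0, 4.3])   # hypothetical

# Welch's two-sample t-test; H0: the mean waiting times are equal
t_stat, p_value = stats.ttest_ind(before, after, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the reduction in average waiting time is statistically significant.
```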

The improve phase consists of identifying the factor levels of significant factors to optimize the performance measure chosen, which could be to minimize, maximize, or achieve a goal value. In our example, the goal could be to minimize the average waiting time of customers in the bank subject to certain resource or other process constraints. Here, concepts in design of experiments are handy tools.

Finally, the control phase deals with methods to sustain the gains identified in the preceding phase. Methods of statistical process control using control charts, discussed extensively in later chapters, are common tools. Process capability measures are also meaningful in this phase. They may provide a relative index of the degree to which the improved product, process, or service meets established norms based on customer requirements.

3-3 Quality Function Deployment

Quality function deployment (QFD) is a planning tool that focuses on designing quality into a product or service by incorporating customer needs. It is a systems approach involving cross-functional teams (whose members are not necessarily from product design) that looks at the complete cycle of product development. This quality cycle starts with creating a design that meets customer needs and continues on through conducting detailed product analyses of parts and components to achieve the desired product, identifying the processes necessary to make the product, developing product requirements, prototype testing, final product or service testing, and finishing with after-sales troubleshooting.

QFD is customer driven and translates customers' needs into appropriate technical requirements in products and services. It is proactive in nature. Also identified by other names—house of quality, matrix product planning, customer-driven engineering, and decision matrix—it has several advantages. It evaluates competitors from two perspectives, the customer's perspective and a technical perspective. The customer's view of competitors provides the company with valuable information on the market potential of its products. The technical perspective, which is a form of benchmarking, provides information on the relative performance of the company with respect to industry leaders. This analysis identifies the degree of improvements needed in products and processes and serves as a guide for resource allocation.

QFD reduces the product development cycle time in each functional area, from product inception and definition to production and sales. By considering product and design along with manufacturing feasibility and resource restrictions, QFD cuts down on time that would otherwise be spent on product redesign. Midstream design changes are minimized, along with concerns on process capability and post-introduction problems of the product. This results in significant benefits for products with long lead times, such as automobiles. Thus, QFD has been vital for the Ford Motor Company and General Motors in their implementation of total quality management.

Companies use QFD to create training programs, select new employees, establish supplier development criteria, and improve service. Cross-functional teams have also used QFD to show the linkages between departments and thereby have broken down existing barriers of communication. Although the advantages of QFD are obvious, its success requires a significant commitment of time and human resources because a large amount of information is necessary for its startup.

QFD Process

Figure 3-5 shows a QFD matrix, also referred to as the house of quality. The objective statement delineates the scope of the QFD project, thereby focusing the team effort. For a space shuttle project, for example, the objective could be to identify critical safety features. Only one task is specified in the objective. Multiple objectives are split into separate QFDs in order to keep a well-defined focus.

Schematic diagram representing the quality function deployment matrix. Four boxes are stacked vertically in the center, topped by a triangle; from the bottom up, the boxes denote the technical competitive assessment of the “hows,” the relationship matrix between the “hows” and the “whats,” the target goals of the “hows,” and the technical descriptors of the “hows,” while the triangle denotes the correlation matrix of the “hows.” On the right-hand side is a box representing the customer assessment of competitors, and on the left is a box denoting the objective statement.

Figure 3-5 Quality function deployment matrix: the house of quality.

The next step is to determine customer needs and wants. These are listed as the “whats” and represent the individual characteristics of the product or service. For example, in credit-card services, the “whats” could be attributes such as a low interest rate, error-free transactions, no annual fee, extended warranty at no additional cost, customer service 24 hours a day, and a customers' advocate in billing disputes. The list of “whats” is kept manageable by grouping similar items. Once the “whats” list is determined, a customer importance rating that prioritizes the “whats” is assigned to each item. Typically, a scale of 1–5 is used, with 1 being the least important. Multiple passes through the list may be necessary to arrive at ratings that are acceptable to the team. The ratings serve as weighting factors and are used as multipliers in determining the technical assessment of the “hows.” The focus is on attributes with high ratings because they maximize customer satisfaction. Let's suppose that we have rated attributes for credit-card services as shown in Table 3-1. Our ratings thus imply that our customers consider error-free transactions to be the most important attribute and no annual fee to be the least important.

Table 3-1 Importance Rating of Credit-Card Customer Requirements

Customer Requirement (“Whats”) Importance Rating
Low interest rate 2
Error-free transactions 5
No annual fee 1
Extended warranty at no additional cost 3
Customer service 24 hours a day 4
Customers' advocate in billing disputes 4

The customer plays an important role in determining the relative position of an organization with respect to that of its competitors for each requirement or “what.” Such a comparison is entered in the section on “customer assessment of competitors.” Thus, customer perception of the product or service is verified, which will help identify strengths and weaknesses of the company. Different focus groups or surveys should be used to attain statistical objectivity. One outcome of the analysis might be new customer requirements, which would then be added to the list of “whats,” or the importance ratings might change. Results from this analysis will indicate what dimensions of the product or service the company should focus on. The same rating scale that is used to denote the importance ratings of the customer requirements is used in this analysis.

Consider, for example, the customer assessment of competitors shown in Table 3-2, where A represents our organization. The ratings are average scores obtained from various samples of consumers. Companies B, C, and D are our competitors, so the maximum rating score for each “what” will serve as a benchmark and thus the acceptable standard toward which we will strive. For instance, company C has a rating of 4 in the category “customer service 24 hours a day” compared to our rating of 2; we are not doing as well in this “what.” We have identified a gap in a customer requirement that we consider important. To close this gap we could study company C's practices and determine whether we can adopt some of them. We conduct similar analyses with the other “whats,” gradually implementing improved services. Our goal is to meet or beat the circled values in Table 3-2, which represent the best performance in each customer requirement. That is, our goal is to become the benchmark.

Table 3-2 Customer Assessment of Competitors

Customer Requirements (“Whats”) Competitive Assessment of Companies
A B C D
Low interest rate 3 2 img 2
Error-free transactions 4 img 3 3
No annual fee img img 2 3
Extended warranty at no additional cost 2 2 1 img
Customer service 24 hours a day 2 2 img 3
Customers' advocate in billing disputes img 2 3 3

Coming up with a list of technical descriptors—the “hows”—that will enable our company to accomplish the customer requirements is the next step in the QFD process. Multidisciplinary teams whose members originate in various departments will brainstorm to arrive at this list. Departments such as product design and development, marketing, sales, accounting, finance, process design, manufacturing, purchasing, and customer service are likely to be represented on the team. The key is to have a breadth of disciplines in order to “capture” all feasible “hows.” To improve our company's ratings in the credit-card services example, the team might come up with these “hows”: software to detect errors in billing, employee training on data input and customer services, negotiations and agreements with major manufacturers and merchandise retailers to provide extended warranty, expanded scheduling (including flextime) of employee operational hours, effective recruiting, training in legal matters to assist customers in billing disputes, and obtaining financial management services.

Target goals are next set for selected technical descriptors or “hows.” Three symbols are used to indicate target goals: ↑ (maximize or increase the attained value), ↓ (minimize or decrease the attained value), and img (achieve a desired target value). Table 3-3 shows how our team might define target goals for the credit-card services example. Seven “hows” are listed along with their target goals. As an example, for how 1, creating software to detect billing errors, the desired target value is zero: that is, no billing errors. For how 2, it is desirable to maximize or increase the effect of employee training to reduce input errors and interact effectively with customers. Also, for how 4, the target value is to achieve customer service 24 hours a day. If measurable goals cannot be established for a technical descriptor, it should be eliminated from the list and the inclusion of other “hows” considered.

Table 3-3 Target Goals of Technical Descriptors

“Hows” 1 2 3 4 5 6 7
Target goals img img
Legend
Number  Technical descriptors or “hows”
 1 Software to detect billing errors
 2 Employee training on data input and customer services
 3 Negotiations with manufacturers and retailers (vendors)
 4 Expanded scheduling (including flextime) of employees
 5 Effective recruiting
 6 Legal training
 7 Financial management services
Symbol  Target goal
 ↑ Maximize or increase attained value
 ↓ Minimize or decrease attained value
img Achieve a target value

The correlation matrix of the relationship between the technical descriptors is the “roof” of the house of quality. In the correlation matrix shown in Figure 3-6, four levels of relationship are depicted: strong positive, positive, negative, and strong negative. These indicate the degree to which the “hows” support or complement each other or are in conflict. Negative relationships may require a trade-off in the objective values of the “hows” when a technical competitive assessment is conducted. In Figure 3-6, which correlates the “hows” for our credit-card services example, how 1, creating software to detect billing errors, has a strong positive relationship (++) with how 2, employee training on data input and customer services. The user friendliness of the software will have an impact on the type and amount of training needed. A strong positive relationship indicates the possibility of synergistic effects. Note that how 2 also has a strong positive relationship with how 5; this indicates that a good recruiting program in which desirable skills are incorporated into the selection procedure will form the backbone of a successful and effective training program.

Diagram illustrating the correlation matrix of the “hows,” where a triangle is placed on a rectangle. The rectangle is divided into two rows and seven columns (how 1 through how 7). The triangle is divided into sections that correlate the various “hows” at four levels of relationship: strong positive, positive, negative, and strong negative.

Figure 3-6 Correlation matrix of “Hows.”

Following this, a technical competitive assessment of the “hows” is conducted along the same lines as the customer assessment of competitors we discussed previously. The difference is that instead of using customers to obtain data on the relative position of the company's “whats” with respect to those of the competitors, the technical staff of the company provides the input on the “hows.” A rating scale of 1–5, as used in Table 3-2, may be used. Table 3-4 shows how our company's technical staff has assessed technical competitiveness for the “hows” in the credit-card services example. Our three competitors, companies B, C, and D, are again considered. For how 1 (software to detect billing errors), our company is doing relatively well, with a rating of 4, but company B, with its rating of 5, is doing better; company B is therefore the benchmark against which we will measure our performance. Similarly, company C is the benchmark for how 2; we will look to improve the quality and effectiveness of our training program. The other assessments reveal that we have room to improve in hows 3, 4, and 5, but in hows 6 and 7 we are the benchmarks. The circled values in Table 3-4 represent the benchmarks for each “how.”

Table 3-4 Technical Competitive Assessment of “Hows”

Company Technical Descriptors (“Hows”)
1 2 3 4 5 6 7
A 4 3 2 3 4 img img
B img 3 1 img 1 2 3
C 3 img 2 2 img 3 2
D 2 2 img 1 3 3 4

The analysis shown in Table 3-4 can also assist in setting objective values, denoted by the “how muches,” for the seven technical descriptors. The achievements of the highest-scoring companies are set as the “how muches,” which represent the minimum acceptable achievement level for each “how.” For example, for how 4, since company B has the highest rating, its achievement level will be the level that our company (company A) will strive to match or exceed. Thus, if company B provides customer service 16 hours a day, this becomes our objective value. If we cannot achieve these levels of “how muches,” we should not consider entering this market because our product or service will not be as good as the competition's.

In conducting the technical competitive assessment of the “hows,” the probability of achieving the objective value (the “how muches”) is incorporated in the analysis. Using a rating scale of 1–5, 5 representing a high probability of success, the absolute scores are multiplied by the probability scores to obtain weighted scores. These weighted scores now represent the relative position within the industry and the company's chances of becoming the leader in that category.

The final step of the QFD process involves the relationship matrix located in the center of the house of quality (see Figure 3-5). It provides a mechanism for analyzing how each technical descriptor will help in achieving each “what.” The relationship between a “how” and a “what” is represented by the following scale: 0 ≡ no relationship; 1 ≡ low relationship; 3 ≡ medium relationship; 5 ≡ high relationship. Table 3-5 shows the relationship matrix for the credit-card services example. Consider, for instance, how 2 (employee training on data input and customer services). Our technical staff believes that this “how” is related strongly to providing error-free transactions, so a score of 5 is assigned. Furthermore, this “how” has a moderate relationship with providing customer service 24 hours a day and serving as customers' advocate in billing disputes, so a score of 3 is assigned for these relationships. Similar interpretations are drawn from the other entries in the table. “Hows” that have a large number of zeros do not support meeting the customer requirements and should be dropped from the list.

Table 3-5 Relationship Matrix of Absolute and Relative Scores

Customer Requirements (“Whats”) Importance Ratings Technical Descriptors (“Hows”)
1 2 3 4 5 6 7
Low interest rate 2 0 (0) 0 (0) 5 (10) 0 (0) 0 (0) 0 (0) 5 (10)
Error-free transactions 5 5 (25) 5 (25) 0 (0) 3 (15) 5 (25) 0 (0) 0 (0)
No annual fee 1 0 (0) 0 (0) 3 (3) 0 (0) 0 (0) 0 (0) 5 (5)
Extended warranty 3 0 (0) 1 (3) 5 (15) 0 (0) 0 (0) 3 (9) 3 (9)
Customer service  24 hours a day 4 1 (4) 3 (12) 0 (0) 5 (20) 5 (20) 3 (12) 0 (0)
Customers' advocate in billing disputes 4 1 (4) 3 (12) 5 (20) 0 (0) 3 (12) 5 (20) 1 (4)
Absolute score 33 52 48 35 57 41 28
Relative score 6 2 3 5 1 4 7
Technical competitive assessment 5 5 4 4 5 4 5
Weighted absolute score 165 260 192 140 285 164 140
Final relative score 4 2 3 6.5 1 5 6.5

The cell values, shown in parentheses in Table 3-5, are obtained by multiplying the rated score by the importance rating of the corresponding customer requirement. The absolute score for each “How” is calculated by adding the values in parentheses. The relative score is merely a ranking of the absolute scores, with 1 representing the most important. It is observed that how 5 (effective recruiting) is most important because its absolute score of 57 is highest.

The analysis can be extended by considering the technical competitive assessment of the “hows.” Using the rating scores of the benchmark companies for each technical descriptor—that is, the objective values (the “how muches”) from the circled values in Table 3-4—our team can determine the importance of the “hows.” The weighted absolute scores in Table 3-5 are found by multiplying the corresponding absolute scores by the technical competitive assessment rating. The final scores demonstrate that the relative ratings of the top three “hows” are the same as before. However, the rankings of the remaining technical descriptors have changed. Hows 4 and 7 are tied for last place, each with a weighted absolute score of 140 and a final relative score of 6.5. Management may consider the ease or difficulty of implementing these “hows” in order to break the tie.
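The scoring in Table 3-5 can be reproduced with a short computation. The sketch below is only an illustration of the arithmetic, not part of the QFD methodology itself: it multiplies the relationship matrix by the importance ratings to obtain the absolute scores and then weights them by the technical competitive assessment ratings from Table 3-4.

```python
# Reproducing the Table 3-5 arithmetic for the credit-card services example.
import numpy as np

importance = np.array([2, 5, 1, 3, 4, 4])          # Table 3-1 ratings for the six "whats"
# Relationship matrix (rows: whats, columns: hows 1-7), scale 0/1/3/5
rel = np.array([
    [0, 0, 5, 0, 0, 0, 5],    # low interest rate
    [5, 5, 0, 3, 5, 0, 0],    # error-free transactions
    [0, 0, 3, 0, 0, 0, 5],    # no annual fee
    [0, 1, 5, 0, 0, 3, 3],    # extended warranty
    [1, 3, 0, 5, 5, 3, 0],    # customer service 24 hours a day
    [1, 3, 5, 0, 3, 5, 1],    # customers' advocate in billing disputes
])
tech_assessment = np.array([5, 5, 4, 4, 5, 4, 5])   # benchmark ratings from Table 3-4

absolute = importance @ rel                # -> [33, 52, 48, 35, 57, 41, 28]
weighted = absolute * tech_assessment      # -> [165, 260, 192, 140, 285, 164, 140]
print("absolute scores:", absolute)
print("weighted absolute scores:", weighted)
print("most important how:", int(np.argmax(weighted)) + 1)   # how 5: effective recruiting
```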

Our example QFD exercise illustrates the importance of teamwork in this process. An enormous amount of information must be gathered, all of which promotes cross-functional understanding of the product or service design system. Target values of the technical descriptors or “hows” are then used to generate the next level of house of quality diagram, where they will become the “whats.” The QFD process proceeds by determining the technical descriptors for these new “whats.” We can therefore consider implementation of the QFD process in different phases. As Figure 3-7 depicts, QFD facilitates the translation of customer requirements into a product whose features meet these requirements. Once such a product design is conceived, QFD may be used at the next level to identify specific characteristics of critical parts that will help in achieving the product designed. The next level may address the design of a process in order to make parts with the characteristics identified. Finally, the QFD process identifies production requirements for operating the process under specified conditions. Use of quality function deployment in such a multiphased environment requires a significant commitment of time and resources. However, the advantages—the spirit of teamwork, cross-functional understanding, and an enhanced product design—offset this commitment.

Figure depicting customer requirements, product design, parts characteristics, process design, and production requirements as phases of use of QFD.

Figure 3-7 Phases of use of QFD.

3-4 Benchmarking and Performance Evaluation

The goal of continuous improvement forces an organization to look for ways to improve operations. Be it a manufacturing or service organization, the company must be aware of the best practices in its industry and its relative position in the industry. Such information will set the priorities for areas that need improvement.

Organizations benefit from innovation. Innovative approaches cut costs, reduce lead time, improve productivity, save capital and human resources, and ultimately lead to increased revenue. They constitute the breakthroughs that push a product or process to new levels of excellence. However, breakthroughs do not happen very often. Visionary ideas are few and far between. Still, when improvements come, they are dramatic and memorable. The development of the computer chip is a prime example. Its ability to store enormous amounts of information in a fraction of the space that was previously required has revolutionized our lives. Figure 3-8 shows the impact of innovation on a chosen quality measure over time. At times a and b, innovations occur, resulting in steep increases in quality from x to y and from y to z.

A graph of a quality measure (y-axis) against time (x-axis) depicts the impact of innovation and continuous improvement: a dashed, gradually rising concave curve denotes continuous improvement, whereas a step-like curve denotes innovation.

Figure 3-8 Impact of innovation and continuous improvement.

Continuous improvement, on the other hand, leads to a slow but steady increase in the quality measure. Figure 3-8 shows that for certain periods of time a process with continuous improvement performs better than one that depends only on innovation. Of course, once an innovation takes place, the immense improvement in the quality measure initially outperforms the small improvements that occur on a gradual basis. This can be useful in gaining market share, but it is also a high-risk strategy because innovations are rare. A company must carefully assess how risk averse it is. If its aversion to risk is high, continuous improvement is its best strategy. A process that is guaranteed to improve gradually is always a wise investment.

One way to promote continuous improvement is through innovative adaptation of the best practices in the industry. To improve its operations, an organization can incorporate information on the companies perceived to be the leaders in the field. Depending on the relative position of the company with respect to the industry leader, gains will be incremental or dramatic. Incorporating such adaptations on an ongoing basis provides a framework for continuous improvement.

Benchmarking

As discussed earlier, the practice of identifying best practices in industry and thereby setting goals to emulate them is known as benchmarking. Companies cannot afford to stagnate; this guarantees a loss of market share to the competition. Continuous improvement is a mandate for survival, and such fast-paced improvement is facilitated by benchmarking. This practice enables an organization to accelerate its rate of improvement. While innovation allows an organization to “leapfrog” its competitors, it does not occur frequently and thus cannot be counted on. Benchmarking, on the other hand, is doable. To adopt the best, adapt it innovatively, and thus reap improvements is a strategy for success.

Specific steps for benchmarking vary from company to company, but the fundamental approach is the same. One company's benchmarking may not work at another organization because of different operating concerns. Successful benchmarking reflects the culture of the organization, works within the existing infrastructure, and is harmonious with the leadership philosophy. Motorola, Inc., winner of the Malcolm Baldrige Award for 1988, uses a five-step benchmarking model: (1) Decide what to benchmark; (2) select companies to benchmark; (3) obtain data and collect information; (4) analyze data and form action plans; and (5) recalibrate and start the process again.

AT&T, which has two Baldrige winners among its operating units, uses a nine-step model: (1) Decide what to benchmark; (2) develop a benchmarking plan; (3) select a method to collect data; (4) collect data; (5) select companies to benchmark; (6) collect data during a site visit; (7) compare processes, identify gaps, and make recommendations; (8) implement recommendations; and (9) recalibrate benchmarks.

A primary advantage of the benchmarking practice is that it promotes a thorough understanding of the company's own processes—the company's current profile is well understood. Intensive studies of existing practices often lead to identification of non-value-added activities and plans for process improvement. Second, benchmarking enables comparisons of performance measures in different dimensions, each with the best practices for that particular measure. It is not merely a comparison of the organization with a selected company, but a comparison with several companies that are the best for the measure chosen. Some common performance measures are return on assets, cycle time, percentage of on-time delivery, percentage of damaged goods, proportion of defects, and time spent on administrative functions. The spider chart shown in Figure 3-9 is used to compare multiple performance measures and gaps between the host company and industry benchmark practices. Six performance measures are being considered here. The scales are standardized: say, between 0 and 1, 0 being at the center and 1 at the outer circumference, which represents the most desired value. Best practices for each performance measure are indicated, along with the companies that achieve them. The current performance level of the company performing the benchmarking is also indicated in the figure. The difference between the company's level and that of the best practice for that performance measure is identified as the gap. The analysis that focuses on methods and processes to reduce this gap and thereby improve the company's competitive position is known as gap analysis.

Figure representing spider chart for gap analysis where six points on the circle represent performance measure (PM1–PM6 in anticlockwise manner) for companies A, B, and C. The opposite points are connected by dashed lines and all the lines intersect at the center of the circle. A six-sided figure (bold line) is formed by joining the six dashed lines representing best practice. Inside the six-sided figure is another smaller six-sided figure denoting current level of company performance.

Figure 3-9 Spider chart for gap analysis.
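
The gap computation behind the spider chart is simple arithmetic on the standardized 0-to-1 scales. The Python sketch below uses hypothetical scores for six performance measures (PM1 through PM6); the values are illustrative assumptions, not read from the figure.

```python
# Hypothetical standardized scores (0 = center, 1 = best practice at the circumference).
best_practice = {"PM1": 0.95, "PM2": 0.90, "PM3": 0.85,
                 "PM4": 0.92, "PM5": 0.88, "PM6": 0.97}
our_company   = {"PM1": 0.60, "PM2": 0.75, "PM3": 0.50,
                 "PM4": 0.80, "PM5": 0.55, "PM6": 0.70}

# The gap for each measure is the distance to the best practice on the 0-1 scale.
gaps = {pm: best_practice[pm] - our_company[pm] for pm in best_practice}
for pm, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{pm}: gap = {gap:.2f}")   # the largest gaps are candidates for improvement
```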

Another advantage of benchmarking is its focus on performance measures and processes, not on the product. Thus, benchmarking is not restricted to the confines of the industry in which the company resides. It extends beyond these boundaries and identifies organizations in other industries that are superior with respect to the measure chosen. It is usually difficult to obtain data from direct competitors. However, companies outside the industry are more likely to share such information. It then becomes the task of management to find ways to adapt those best practices innovatively within their own environment.

In the United States, one of the pioneers of benchmarking is Xerox Corporation. It embarked on this process because its market share eroded rapidly in the late 1970s to Japanese competition. Engineers from Xerox took competitors' products apart and looked at them component by component. When they found a better design, they sought ways to adapt it to their own products or, even better, to improve on it. Similarly, managers from Xerox began studying the best management practices in the market; this included companies both within and outside the industry. As Xerox explored ways to improve its warehousing operations, it found a benchmark outside its own industry: L. L. Bean, Inc., the outdoor sporting goods retailer.

L. L. Bean has a reputation of high customer satisfaction; the attributes that support this reputation are its ability to fill customer orders quickly and efficiently with minimal errors and to deliver undamaged merchandise. The backbone behind this successful operation is an effective management system aided by state-of-the-art operations planning that addresses warehouse layout, workflow design, and scheduling. Furthermore, the operations side of the process is backed by an organizational culture of empowerment, management commitment through effective education and training, and a motivational reward system of incentive bonuses.

Figure 3-10 demonstrates how benchmarking brings the “soft” and “hard” systems together. Benchmarking is not merely identification of the best practices. Rather, it seeks to determine how such practices can be adapted to the organization. The real value of benchmarking is accomplished only when the company has integrated the identified best practices successfully into its operation. To be successful in this task, soft and hard systems must mesh. The emerging organizational culture should empower employees to make decisions based on the new practice.

Figure depicting role of benchmarking in implementing best practices where arrows from components of soft systems (left) and hard systems (right) point at benchmarking (center). Soft systems include organizational culture of empowerment, strategic commitment, and motivation through reward and recognition. Hard systems include performance measurement, training for technical skills, and resource commitment.

Figure 3-10 Role of benchmarking in implementing best practices.

For benchmarking to succeed, management must demonstrate its strategic commitment to continuous improvement and must also motivate employees through an adequate reward and recognition system that promotes learning and innovative adaptation. When dealing with hard systems, resources must be made available to allow release time from other activities, access to information on best practices, and installation of new information systems to manage the information acquired. Technical skills required for benchmarking, such as flowcharting and process mapping, should be provided to team members through training sessions. The team must also identify performance measures for which the benchmarking will take place. Examples of such measures are return on investment, profitability, cycle time, and defect rate.

Several factors influence the adoption of benchmarking; change management is one of them. Figure 3-11 illustrates factors that influence benchmarking and the subsequent outcomes that derive from it. In the current environment of global competition, change is a given. Rather than react haphazardly to change, benchmarking provides an effective way to manage it. Benchmarking provides a road map for adapting best practices, a major component of change management. These are process-oriented changes. In addition, benchmarking facilitates cultural changes in an organization. These deal with overcoming resistance to change. This is a people-oriented approach, the objective being to demonstrate that change is not a threat but an opportunity.

A schematic diagram representing influences on benchmarking and its outcomes. Arrows from change management, time-based competition, and technological development point at benchmarking. From benchmarking, arrows point at the current profile and the competitive profile, and these in turn point at gap analysis. From gap analysis an arrow points at strategic and operations planning, and from there to the new competitive position.

Figure 3-11 Influences on benchmarking and its outcomes.

The ability to reduce process time and create a model of quick response is important to all organizations. The concept of time-based competition is linked to reductions in cycle time, the interval between the beginning and end of a process, which may consist of a sequence of activities. From the customer's point of view, cycle time is the elapsed time between placing an order and having it fulfilled satisfactorily. Reducing cycle time is strongly correlated with performance measures such as cost, market share, and customer satisfaction. Detailed flowcharting of the process can identify bottlenecks, decision loops, and non-value-added activities. Reducing decision and inspection points, creating electronic media systems for dynamic flow of information, standardizing procedures and reporting forms, and consolidating purchases are examples of tactics that reduce cycle time. Motorola, Inc., for example, reduced its corporate auditing process over a three-year period from an average of seven weeks to five days.

Technological development is another impetus for benchmarking. Consider the micro-electronics industry. Its development pace is so rapid that a company has no choice but to benchmark. Falling behind the competition in this industry means going out of business. In this situation, benchmarking is critical to survival.

Quality Auditing

The effectiveness of management control programs may be examined through a practice known as quality auditing. One reason that management control programs are implemented is to prevent problems. Despite such control, however, problems can and do occur, so quality audits are undertaken to identify them.

In any quality audit, three parties are involved. The party that requests the audit is known as the client, the party that conducts the audit is the auditor, and the party being audited is the auditee. Auditors can be of two types, internal or external. An internal auditor is an employee of the auditee. External auditors are not members of the auditee's organization. An external auditor may be a single individual or a member of an independent auditing organization.

Quality audits fulfill two major purposes. The first purpose, performed in the suitability quality audit, deals with an in-depth evaluation of the quality program against a reference standard, usually predetermined by the client. Reference standards are set by several organizations, including the ANSI/ASQ, ISO, and British Standards Institute (BSI). Some ISO standards are discussed later in this chapter. The entire organization may be audited or specific processes, products, or services may be audited. The second purpose, performed in the conformity quality audit, deals with a thorough evaluation of the operations and activities within the quality system and the degree to which they conform to the quality policies and procedures defined.

Quality audits may be categorized as one of three types. The most extensive and inclusive type is the system audit. This entails an evaluation of the quality program documentation (including policies, procedures, operating instructions, defined accountabilities, and responsibilities to achieve the quality function) using a reference standard. It also includes an evaluation of the activities and operations that are implemented to accomplish the quality objectives desired. Such audits therefore explore conformance to quality management standards and their implementation to specified norms. They encompass the evaluation of the phases of planning, implementation, evaluation, and comparison. An example of a system audit is a pre-award survey, which typically evaluates the ability of a potential vendor to provide a desired level of product or service.

A second type of quality audit (not as extensive as the system audit) is the process audit, which is an in-depth evaluation of one or more processes in the organization. All relevant elements of the identified process are examined and compared to specified standards. Because a process audit takes less time to conduct than a system audit, it is more focused and less costly. If management has already identified a process that needs to be evaluated and improved, the process audit is an effective means of verifying compliance and suggesting places for improvement. A process audit can also be triggered by unexpected output from a process. For industries that use continuous manufacturing processes, such as chemical industries, a process audit is the audit of choice.

The third type of quality audit is the product audit, which is an assessment of a final product or service on its ability to meet or exceed customer expectations. This audit may involve conducting periodic tests on the product or obtaining information from the customer on a particular service. The objective of a product audit is to determine the effectiveness of the management control system. Such an audit is separate from decisions on product acceptance or rejection and is therefore not part of the inspection system used for such processes. Customer or consumer input plays a major role in the decision to undertake a product audit. For a company producing a variety of products, a relative comparison of product performance that indicates poor performers could be used as a guideline for a product audit.

Audit quality is heavily influenced by the independence and objectivity of the auditor. For the audit to be effective, the auditor must be independent of the activities being examined. Thus, whether the auditor is internal or external may have an influence on audit quality. Consider the assessment of an organization's quality documentation. It is quite difficult for an internal auditor to be sufficiently independent to perform this evaluation effectively. For such suitability audits, external auditors are preferable. System audits are also normally conducted by external auditors. Process audits can be internal or external, as can product audits. An example of an internal product audit is a dock audit, where the product is examined prior to shipment. Product audits conducted at the end of a process line are also usually internal audits. Product audits conducted at the customer site are typically external audits.

Vendor audits are external. They are performed by representatives of the company that is seeking the vendor's services. Knowledge of product and part specifications, contractual obligations and their secrecy, and purchase agreements often necessitate a second-party audit where the client company sends personnel from its own staff to perform the audit. Conformity quality audits may be carried out by internal or external auditors as long as the individuals are not directly involved in the activities being audited.

Methods for conducting a quality audit are of two types. One approach is to conduct an evaluation of all quality system activities at a particular location or operation within an organization, known as a location-oriented quality audit. This audit examines the actions and interactions of the elements in the quality program at that location and may be used to interpret differences between locations. The second approach is to examine and evaluate activities relating to a particular element or function within a quality program at all locations where it applies before moving on to the next function in the program. This is known as a function-oriented quality audit. Successive visits to each location are necessary to complete the latter audit. It is helpful in evaluating the overall effectiveness of the quality program and also useful in tracing the continuity of a particular function through the locations where it is applicable.

The utility of a quality audit is derived only when remedial actions in deficient areas, exposed by the quality audit, are undertaken by company management. A quality audit does not necessarily prescribe actions for improvement; it typically identifies areas that do not conform to prescribed standards and therefore need attention. If several areas are deficient, a company may prioritize those that require immediate attention. Only on implementation of the remedial actions will a company improve its competitive position. Tools that help identify critical areas, find root causes to problems, and propose solutions include cause-and-effect diagrams, flowcharts, and Pareto charts; these are discussed later in the chapter.

Vendor Selection and Certification Programs

As discussed in Chapter 2, the modern trend is to establish long-term relationships with vendors. In an organization's pursuit of continuous improvement, the purchaser (customer) and vendor must be integrated in a quality system that serves the strategic missions of both companies. The vendor must be informed of the purchaser's strategies for market-share improvement, advance product information (including changes in design), and delivery requirements. The purchaser, on the other hand, should have access to information on the vendor's processes and be advised of their unique capabilities.

Cultivating a partnership between purchaser and vendor has several advantages. First, it is a win–win situation for both. To meet unique customer requirements, a purchaser can then redesign products or components collaboratively with the vendor. The vendor, who makes those particular components, has intimate knowledge of the components and the necessary processes that will produce the desired improvements. The purchaser is thus able to design its own product in a cost-effective manner and can be confident that the design will be feasible to implement. Alternatively, the purchaser may give the performance specification to the vendor and entrust them with design, manufacture, and testing. The purchaser thereby reduces design and development costs, lowers internal costs, and gains access to proprietary technology through its vendor, technology that would be expensive to develop internally. Through such partnerships, the purchaser is able to focus on its areas of expertise, thereby maintaining its competitive edge. Vendors gain from such partnerships by taking ownership of the product or component from design to manufacture; they can meet specifications more effectively because of their involvement in the entire process. They also gain an expanded insight into product and purchaser requirements through linkage with the purchaser; this helps them better meet those requirements. This, in turn, strengthens the vendor's relationship with the purchaser.

Vendor Rating and Selection

Maintaining data on the continual performance of vendors requires an evaluation scheme. Vendor rating based on established performance measures facilitates this process. There are several advantages in monitoring vendor ratings. Since the quality of the output product is a function of the quality of the incoming raw material or components procured through vendors, it makes sense to establish long-term relationships with vendors that consistently meet or exceed performance requirements. Analyzing the historical performance of vendors enables the company to select vendors that deliver their goods on time. Vendor rating goes beyond reporting on the historical performance of the vendor. It ensures a disciplined material control program. Rating vendors also helps reduce quality costs by optimizing the cost of material purchased.

Measures of vendor performance, which comprise the rating scheme, address the three major categories of quality, cost, and delivery. Under quality, some common measures are percent defective as expressed by defects in parts per million, process capability, product stoppages due to poor quality of vendor components, number of customer complaints, and average level of customer satisfaction. The category of cost includes such measures as scrap and rework cost, return cost, incoming-inspection cost, life-cycle costs, and warranty costs. The vendor's maintenance of delivery schedules is important to the purchaser in order to meet customer-defined schedules. Some measures in this category are percent of on-time deliveries, percent of late deliveries, percent of early deliveries, percent of underorder quantity, and percent of overorder quantity.

Which measures should be used is influenced by the type of product or service, the customer's expectations, and the level of quality systems that exists in the vendor's organization. For example, the Federal Express Corporation, winner of the 1990 Malcolm Baldrige National Quality Award in the service category, is synonymous with fast and reliable delivery. FedEx tracks its performance with such measures as late delivery, invoice adjustment needed, damaged packages, missing proof of delivery on invoices, lost packages, and missed pickups. For incoming material inspection, defectives per shipment, inspection costs, and cost of returning shipment are suitable measures. For vendors with statistical process control systems in place, measuring process capability is also useful. Customer satisfaction indices can be used with those vendors that have extensive companywide quality systems in place.

Vendor performance measures are prioritized according to their importance to the purchaser. Thus, a weighting scheme similar to that described in the house of quality (Figure 3-5) is often used. Let's consider a purchaser that uses rework and scrap cost, price, percent of on-time delivery, and percent of underorder quantity as its key performance measures. Table 3-6 shows these performance measures and the relative weight assigned to each one. This company feels that rework and scrap costs are most important, with a weight of 40. Note that price is not the sole determinant; in fact, it received the lowest weighting.

Table 3-6 Prioritizing Vendor Performance Measures Using a Weighting Scheme

                                          Vendor A            Vendor B            Vendor C
Performance Measure             Weight    Rating  Weighted    Rating  Weighted    Rating  Weighted
                                                  Rating              Rating              Rating
Price                             10        4        40         2        20         3        30
Rework and scrap cost             40        2        80         4       160         3       120
Percent of on-time delivery       30        1        30         3        90         2        60
Percent of underorder quantity    20        2        40         4        80         5       100
Weighted score                                      190                 350                 310
Rank                                                  3                   1                   2

Table 3-6 shows the evaluation of vendors A, B, and C. For each performance measure, the vendors are rated on a scale of 1–5, with 1 representing the least desirable performance. A weighted score is obtained by adding the products of the weight and the assigned rating for each performance measure (weighted rating). The weighted scores are then ranked, with 1 denoting the most desirable vendor. From Table 3-6 we can see that vendor B, with the highest weighted score of 350, is the most desirable.
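
The weighted-score calculation can be reproduced in a few lines. The Python sketch below uses the weights and ratings of Table 3-6 and returns the scores 190, 350, and 310 for vendors A, B, and C.

```python
# Weighted vendor scoring using the weights and ratings from Table 3-6
# (ratings are on a 1-5 scale, 1 = least desirable performance).
weights = {"Price": 10, "Rework and scrap cost": 40,
           "Percent of on-time delivery": 30, "Percent of underorder quantity": 20}

ratings = {   # performance measure -> rating for each vendor
    "Price":                          {"A": 4, "B": 2, "C": 3},
    "Rework and scrap cost":          {"A": 2, "B": 4, "C": 3},
    "Percent of on-time delivery":    {"A": 1, "B": 3, "C": 2},
    "Percent of underorder quantity": {"A": 2, "B": 4, "C": 5},
}

scores = {v: sum(weights[m] * ratings[m][v] for m in weights) for v in ("A", "B", "C")}
print(scores)   # {'A': 190, 'B': 350, 'C': 310}; vendor B ranks first
```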

Vendor evaluation in quality programs is quite comprehensive. Even the vendor's culture is subject to evaluation as the purchaser seeks to verify the existence of a quality program. Commitment to customer satisfaction as demonstrated by appropriate actions is another attribute the purchaser will examine closely. The purchaser will also assess the vendor's financial stability; it obviously prefers vendors that will continue to exist, so that it is not visited with the problems that follow from liquidation or bankruptcy. The vendor's technical expertise relating to product and process design is another key concern as vendor and purchaser work together to solve problems and to promote continuous improvement.

Vendor Certification

Vendor certification occurs when the vendor has reached the stage at which it consistently meets or exceeds the purchaser's expectations. Consequently, there is no need for the purchaser to perform routine inspections of the vendor's product. Certification motivates vendors to improve their processes and, consequently, their products and services. A vendor must also demonstrate a thorough understanding of the strategic quality goals of the customer such that its own strategic goals are in harmony with those of the customer. Improving key processes through joint efforts strengthens the relationship between purchaser and vendor. The purchaser should therefore assess the vendor's capabilities on a continuous basis and provide adequate feedback.

A vendor goes through several levels of acceptance before being identified as a long-term partner. Typically, these levels are an approved vendor, a preferred vendor, and finally a certified vendor: that is, a “partner” in the quality process. To move from one level to the next, the quality of the vendor's product or service must improve. The certification process usually transpires in the following manner. First, the process is documented; this defines the roles and responsibilities of personnel of both organizations. Performance measures, described previously, are chosen, and measurement methods are documented. An orientation meeting occurs at this step.

The next step is to gain a commitment from the vendor. The vendor and purchaser establish an environment of mutual respect. This is important because they must share vital and sometimes sensitive information in order to improve process and product quality. A quality system survey of the vendor is undertaken. In the event that the vendor is certified or registered by a third party, the purchaser may forego its own survey and focus instead on obtaining valid performance measurements. At this point, the purchaser sets acceptable performance standards on quality, cost, and delivery and then identifies those vendors that meet these standards. These are the approved vendors.

Following this step, the purchaser decides on what requirements it will use to define its preferred vendors. Obviously, these requirements will be more stringent than for approved vendors. For example, the purchaser may give the top 20% of its approved vendors preferred vendor status. Preferred vendors may be required to have a process control mechanism in place that demonstrates its focus on problem prevention (as opposed to problem detection).

At the next level of quality, the certified vendor, the criteria entail not only quality, costs, and delivery measures but also technical support, management attitude, and organizational quality culture. The value system for the certified vendor must be harmonious with that of the purchaser. An analysis of the performance levels of various attributes is undertaken, and vendors that meet the stipulated criteria are certified. Finally, a process is established to ensure vendor conformance on an ongoing basis. Normally, such reviews are conducted annually.

3M Company, as part of its vendor management process, uses five categories, representing increasing levels of demonstrated quality competence, to evaluate its vendors. The first category is the new vendor, whose performance capabilities are initially unknown; interim specifications are provided on an experimental basis. The next category is the approved vendor, where agreed-upon specifications are used and a self-survey is performed by the vendor. To qualify at this level, vendors need to have a minimum performance rating of 90% and must also maintain a rating of no less than 88%. Following this is the qualified vendor. To enter at this level, the vendor must demonstrate a minimum performance rating of 95% and must maintain a rating of at least 93%. Furthermore, the vendor must show that it meets ISO 9001 standards, or be approved by the U.S. Food and Drug Administration, or pass a quality system survey conducted by 3M. The next category is the preferred vendor. To enter this category, the vendor must demonstrate a minimum performance rating of 98% and must maintain a rating of at least 96%. The preferred vendor demonstrates continuous improvement in the process and consistently meets 3M standards. Minimal or no incoming inspection is performed. The highest level of achievement is the strategic vendor category. These are typically high-volume, critical-item, or equipment vendors that have entered into strategic partnerships with the company. They share their own strategic plans and cost data, make their plants and processes available for study by representatives from 3M, and are open to joint ventures, in which they pursue design and process innovations with 3M. The strategic vendor has a long-term relationship with the company.
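
The entry thresholds quoted above can be summarized in a small lookup function. This Python sketch is a deliberate simplification: it uses only the performance-rating cutoffs and ignores the additional ISO 9001/FDA/survey requirement for qualified vendors and the partnership criteria that define strategic vendors.

```python
# Simplified sketch of the entry thresholds described for 3M's vendor categories.
def entry_category(performance_rating: float) -> str:
    if performance_rating >= 98:
        return "preferred vendor"
    if performance_rating >= 95:
        return "qualified vendor"   # also requires ISO 9001, FDA approval, or a 3M survey
    if performance_rating >= 90:
        return "approved vendor"
    return "new vendor"             # capabilities not yet demonstrated

print(entry_category(96.5))   # qualified vendor
```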

Other certification criteria are based on accepted norms set by various agencies. The ISO is an organization that has prepared a set of standards: ISO 9001, Quality Management Systems: Requirements. Certification through this standard sends a message to the purchaser that the vendor has a documented quality system in place. To harmonize with the ISO 9000 standards, an international automotive quality standard, ISO/TS 16949, was developed. In the United States, the big three automakers, Daimler, Ford, and General Motors, have been standardizing their requirements for suppliers and now subscribe to the ISO/TS 16949 standards. Further, the Automotive Industry Action Group (AIAG) has been instrumental in eliminating multiple audits of suppliers and requirements (often conflicting) from customers. Over 13,000 first-tier suppliers to the big three automobile companies were required to adopt the standards. These first-tier suppliers, in turn, created a ripple effect for second-tier and others in the supply chain to move toward adoption of the ISO/TS 16949 standards.

3-5 Health Care Analytics

A unique service industry is that of health care. It is of paramount importance for several reasons. It not only comprises a significant portion of the gross domestic product but also addresses a basic service that is desirable for all citizens of a country. In the United States, the rising costs of health care and the associated increase in the aging population further necessitate a careful consideration of the quality and cost of health care.

Health Care Analytics and Big Data

Health-care-related data, whether on patients, physicians, hospitals and providers, research and evidence-based findings, or knowledge expansion through breakthroughs, are compounding at an aggressive rate. It may be physically impossible to keep up with all of these data on an individual basis. The concept of big data is a reality. The increased volume and velocity of such data are critical issues. Furthermore, a major challenge is the ability to integrate data from a variety of sources, some not necessarily compatible with each other, into a common platform on a dynamic basis and analyze the information to provide value to the provider, patient, and organization. This becomes a formidable task for health care analytics.

Application of health care analytics to big data may yield beneficial results to entities at several levels. Figure 3-12 shows some sequential steps in decision making with big data using health care analytics. At the micro-level, data collected from data warehouses and other sources will often require some form of cleansing to make them compatible for decision making. In the simplest format, process data on a patient, such as waiting time to registration, time to bed occupancy, and waiting time to see a physician, may be kept on individual patients. On the other hand, a laboratory in a facility conducting blood tests may keep records on turnaround time by day and time of day and the order number, which is linked to a particular patient. Hence, these two data sets could be linked by the patient number, a commonality present in both. Further, electronic medical records (EMRs) on patients and facilities may be kept in a variety of formats. Data on some patients for some of the process attributes could be missing, implying that some form of data cleansing could be necessary.
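
As a minimal sketch of such record linkage and cleansing, the following Python/pandas fragment joins two hypothetical data sets on a shared patient number and fills one missing process attribute. The column names and values are assumptions made purely for illustration.

```python
import pandas as pd

# Hypothetical process data on individual patients (times in minutes).
process = pd.DataFrame({
    "patient_id":       [101, 102, 103],
    "wait_to_register": [12, 35, None],      # one value is missing
    "wait_to_bed":      [40, 90, 55],
})
# Hypothetical laboratory turnaround times keyed to the same patient number.
lab = pd.DataFrame({
    "patient_id":      [101, 103],
    "turnaround_time": [52, 47],
})

merged = process.merge(lab, on="patient_id", how="left")   # link on the common key
merged["wait_to_register"] = merged["wait_to_register"].fillna(
    merged["wait_to_register"].median())                   # one simple cleansing rule
print(merged)
```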

A schematic diagram representing health care analytics using big data where arrows from knowledge base, ongoing research and development, and data warehouse and EMR point at big data. Upward arrows in series from big data connect data compatibility and cleansing, descriptive analytics through dashboards and visual displays, predictive analytics through model building, and prescriptive analytics for population health management.

Figure 3-12 Health care analytics using big data.

Once data are structured in a format suitable for processing, a visual means of summarizing the information is usually the first step in the analysis. Development of dashboards displaying measures such as average length of stay and mortality or morbidity rates is one form of visualization. Alternatively, summary graphs, such as a histogram of patient length of stay on a monthly basis, are another form of data visualization.
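
A minimal visualization sketch of this kind, assuming fabricated length-of-stay values in days, might look as follows in Python with matplotlib.

```python
import matplotlib.pyplot as plt

# Fabricated monthly length-of-stay values (days), used only to illustrate the idea.
length_of_stay = [2, 3, 3, 4, 5, 2, 7, 3, 4, 6, 2, 3, 5, 8, 4, 3, 2, 4]

plt.hist(length_of_stay, bins=range(1, 10))      # simple histogram summary
plt.xlabel("Length of stay (days)")
plt.ylabel("Number of patients")
plt.title("Length of stay, current month (hypothetical data)")
plt.show()
```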

At the next stage, say at the patient level, health care analytics could utilize the data to develop predictive models by the type of illness. Here, based on patient characteristics, risk-adjusted models, for example, could be developed for diabetic patients. Models using methods of regression analysis may be utilized in this context. Such information could be helpful to physicians for diagnosis as well as monitoring of patients. It could lead to changes in medications as well as patient advisement.
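
A minimal sketch of such a predictive model, here using logistic regression on fabricated patient data, might look as follows. The variables, coefficients, and outcome are assumptions for illustration only, not clinical values or the text's own model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Fabricated patient characteristics (e.g., stand-ins for age, BMI, baseline A1C)
# and a fabricated binary outcome to be predicted.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X @ np.array([0.8, 0.5, 1.2]) + rng.normal(size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)           # fit the risk model
new_patient = np.array([[0.2, -0.4, 1.1]])
print(model.predict_proba(new_patient)[0, 1])    # risk-adjusted probability estimate
```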

At the macro-level, analysis of various forms of health care data at the aggregate level could provide a status report on population health by state, region, country, or the world. Such information could shape the development of health care policies at the national level. Formulation of such policies based on prescriptive analytics of data could be utilized in population health management. They typically involve optimization techniques based on stated objectives and constraints. For example, for diabetic patients, guidelines for a maximum level of hemoglobin A1C levels could be prescribed along with those for cholesterol levels.
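
As a toy illustration of the optimization behind prescriptive analytics, the following linear-programming sketch allocates a limited program budget across two hypothetical interventions to maximize expected cases improved. Every number here is an assumption made for the example.

```python
from scipy.optimize import linprog

# Maximize 0.08*x1 + 0.05*x2 (expected cases improved per budget unit spent on
# diabetes screening and hypertension outreach, respectively), subject to a
# total budget of 100 units and a cap of 70 units per intervention.
res = linprog(c=[-0.08, -0.05],              # negate to maximize with linprog
              A_ub=[[1, 1]], b_ub=[100],
              bounds=[(0, 70), (0, 70)])
print(res.x, -res.fun)   # optimal allocation and expected cases improved
```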

Uniqueness of Health Care

Customers in Health Care and Their Various Needs

There are several features of the health care system in the United States that make it rather unique. Let us first consider who the customers are in such a system and what their needs are. First and foremost, patients are the primary customers. Their needs fall into two categories: process-related experiences and outcomes based on diagnosis. Satisfaction with physician and staff communication and waiting time to be assigned to a bed are examples in the process category. Mortality and morbidity rates could be outcome measures. It should be noted that, according to the Kano model, the needs of patients could be prioritized by importance, which will assist management in selecting a focused approach.

There are several secondary customers of the health care facility whose needs may be different from each other. Physicians, who are retained or perform services for the health care facility, are also customers. Their needs involve the availability of qualified medical and support staff, adequate facilities, and a suitable work schedule. Support staff and nurses prefer a comfortable work environment, adequate compensation, and an acceptable work schedule. Payers such as insurance companies and the federal or state government constitute another category of customers. They prefer an effective and efficient system in the delivery of health care services by hospitals and physicians. Investors and stakeholders of the facility are interested in a growing reputation of the organization and an acceptable rate of return on their investment.

Organizational Structure

Health care facilities usually have an organizational structure that is composed of medical and nonmedical staff. The management structure may consist of a chief executive officer and a chief operational officer who are not necessarily trained in the technical aspects of health care. It is obvious that senior management would prefer the facility to be ranked highly among peers. Improving patient satisfaction and operational measures and reducing errors may help in achieving the stated goals. However, input from physicians and health care knowledgeable staff is important since they deal with the technical aspects of curing an illness or alleviating the suffering. Flexibility of management in incorporating such technical input is important to the successful functioning of the organization.

Who Pays?

This is one of the most intricate issues in health care in the United States. The patient, the consumer, is often not the major contributor. The federal government, through the Centers for Medicare and Medicaid Services (CMS 2015), administers the Medicare program for the elderly and works with state governments to administer the Medicaid program. Health maintenance organizations (HMOs) (Kongstvedt 2001) are another source of managed health insurance for self-funded individuals or group plans for employers. They act as a liaison with health care providers, which include physicians, hospitals, and clinics, on a prepaid basis. Patients have to select a primary care physician, who often has to provide a referral to see a specialist. HMOs have various operational models. In a staff model, physicians are salaried employees of the organization. In a group model, the HMO may contract with a multispecialty physician group practice, where individual physicians are employed by the group and not the HMO.

Another form of health insurance coverage is through preferred provider organizations (PPOs). This is a variation of an HMO and combines features of traditional insurance with those of managed care. The PPO plan sponsor negotiates a fee-for-service rate with physicians and hospitals. Patients may choose from a network of providers but do need a primary care physician referral to see a specialist. A PPO enrollee has the option to seek care outside the network, for which they pay a higher cost.

Yet another alternative form of health insurance coverage is through a point-of-service (POS) model, which is a hybrid model that combines features of HMOs and PPOs. Patients pay a copayment for contracted services within a network of providers. However, they do not need a referral from a primary care physician before seeing a specialist. Additionally, they have the flexibility to seek care from an out-of-network provider similar to a traditional indemnity plan. However, the deductible and copayment may be higher in such instances.

Cost containment is a major issue and the U.S. Congress continues to consider legislation to regulate managed care providers. Some believe that for-profit HMOs place greater emphasis on revenue than on providing the needed care. The type, form, and criteria of reimbursement by private and public (such as the federal government) payers to health care providers have been evolving over the years. Some forms of reimbursement policies are now discussed.

Fee-for-Services

Prior to the 1970s, private and government payers reimbursed physicians and hospitals customary fees for their services. Insurance companies typically paid the full amount submitted. As the cost of health care continued to rise, a new model came into being in the 1970s.

Diagnosis-Related Groups

This scheme incorporates the concept of fixed case-rate payment. Based on their diagnosis, patients are categorized into diagnosis-related groups (DRGs). Introduced in the 1980s, the scheme gave hospitals specific payment rates based on the patient's diagnosis. This introduced the concept of capitation and encouraged hospitals to reduce their costs. Expensive, optional tests could be omitted. The hospitals retained more of their reimbursement if they could run their operations in an effective and efficient manner. Adoption of such a payment scheme had an impact in reducing the length of stay of patients in hospitals. Medicare's adoption of this prospective payment system (PPS) using DRG codes had a major impact on medical financing through the federal government. Currently, DRG codes exist in a variety of systems to meet expanded needs (Baker 2002). These include Medicare DRGs (CMS-DRGs and MS-DRGs), refined DRGs (R-DRG), all-patient DRGs (AP-DRG), severity DRGs (S-DRG), and all-patient severity-adjusted DRGs (APS-DRG), for instance. There are some barriers to the use of DRGs. First, there are many DRGs to choose from, and it may be difficult to identify the exact choice because of overlapping definitions. Second, many of the DRG codes are privately held. For example, the College of American Pathologists holds diagnosis codes, while the American Medical Association holds procedure codes.

Pay-for-Participation

A newer payment scheme is pay-for-participation, which provides incentives to hospitals, say, for participation in quality measurement, regardless of the actual quality delivered (Birkmeyer and Birkmeyer 2006). Such programs create procedure-specific patient outcome registries that promote collaboration among hospitals and provide regular feedback. Participants may meet regularly to discuss performance and methods through which quality may be improved. Quality, therefore, is judged collectively rather than for an individual hospital. Such a system does not require public reporting. Consequently, public support for these programs is lacking, even though collaborative efforts may lead to quality improvement.

Pay-for-Performance

The present trend of payment schemes leans toward the pay-for-performance system. It rewards high-performance hospitals and clinicians with a monetary bonus while low-performance providers are penalized a portion of their reimbursement (Chung and Shauver 2009). In general, this supports the concept of “value-based purchasing.” At the core of the implementation of such a scheme lies the selection of metrics that define quality. Such metrics, classified into the three areas of structure, process, and outcome, will be discussed subsequently as they also address a broader issue: How is quality measured in the area of health care?

The CMS initially piloted such a program in 2003 and currently has several demonstration projects. Core performance measures were developed in a variety of areas such as coronary artery bypass graft, heart failure, acute myocardial infarction, community-acquired pneumonia, and hip and knee replacement (Darr 2003). Process measures were mainly used to measure quality in these areas. Once hospitals report their compliance with the performance measures, CMS ranks the hospitals, makes the rankings public, and uses them to distribute incentives or assess penalties. Usually, hospitals in the top 10% receive a 2% bonus, those in the next 10% receive a 1% bonus, and those in the top 50% receive recognition but no monetary bonus. Hospitals that do not meet minimum performance levels could be penalized as much as 2% of their reimbursements.
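
The tiering described above can be expressed as a simple rule on a hospital's percentile rank among reporting hospitals. The function below is an illustrative simplification of these incentives, not an exact implementation of the CMS rules.

```python
# Simplified sketch of the bonus/penalty tiers described in the text,
# applied to a hospital's percentile rank (0-100) among reporting hospitals.
def incentive(percentile_rank: float) -> str:
    if percentile_rank >= 90:
        return "2% bonus"
    if percentile_rank >= 80:
        return "1% bonus"
    if percentile_rank >= 50:
        return "recognition, no bonus"
    return "no reward; up to a 2% penalty if minimum performance levels are not met"

print(incentive(92))   # 2% bonus
```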

The Institute of Medicine (IOM) has created design principles for pay-for-performance programs (IOM 2006). They focus on reliable measures that signify good care and optimal health outcomes, promote coordination among providers while maintaining a patient-centered approach, and reward data collection, reporting, and integration of information technology. While the intentions of the pay-for-performance system have been good, the results have been mixed. Carroll (2014) claims that while incentives may change practice, clinical outcomes have not necessarily improved. Desirable practices such as spending time with patients are not always quantifiable. Some studies (Rosenthal and Frank 2006) have shown that there could be some unintended consequences, such as avoidance of high-risk patients, when payments are linked to outcome improvements. CMS has proposed the elimination of negative incentives that result from injury, illness, or death.

Hybrid Programs

A national nonprofit organization, The Leapfrog Group, was founded in 2000 (www.leapfroggroup.org) through a membership of 160 private and public sector employers to create transparency in the quality and safety of health care in U.S. hospitals. Some major goals are to reduce preventable medical errors, encourage public reporting of quality and outcomes data, and assist consumers in making informed decisions. An objective is to improve quality while reducing costs. The four “leaps” are as follows (The Leapfrog Group 2000): computerized physician order entry; evidence-based hospital referral; intensive care unit physician staffing; and the Leapfrog safe practice score. An online voluntary hospital survey is available. The group has adopted a hospital rewards program, governed by measures of quality and safety, that provides incentives for both participation and excellence in performance, two concepts previously described.

Capitation

Capitation is a form of payment arrangement contracted by HMOs with health care providers, such as physicians and nurses, that pays a set amount per time period for each HMO-enrolled patient. The amount reimbursed to the provider is a function of the patient's medical history as well as the cost of providing care in the particular geographic location. Providers focus on preventive health care since there is a greater financial reward and less financial risk in preventing rather than treating an illness. Such risks are better managed by large providers (Cox 2011).

Bundled Payment

A bundled payment reimbursement scheme to health care providers, which may consist of hospitals and physicians, is based on the expected costs for clinically defined episodes of care (Rand Corporation 2015). Alternative names for this scheme are episode-based payment, case rate, evidence-based case rate, global bundled payment, or packaged pricing. Since a single payment is made to cover, for example, all inpatient and physician services in a coronary artery bypass graft (CABG) surgery as well as a period of post–acute care, it is expected that there will be better coordination among the service providers, leading to an efficient and less costly set of services.

The federal government, through the CMS, is an advocate of such a scheme, which is also linked to outcomes. It is expected to lead to financial and performance accountability for episodes of care. Currently four broadly defined models of care exist (CMS 2015): retrospective acute care inpatient hospitalization; retrospective acute care hospital stay plus post–acute care; retrospective post–acute care only; and prospective acute care based on an entire episode of care. Different episodes of care have been defined in a variety of areas that span many DRGs. These include, for example, acute myocardial infarction, amputation, atherosclerosis, cardiac arrhythmia, congestive heart failure, CABG surgery, diabetes, gastrointestinal hemorrhage, sepsis, and so forth. In the context of providing coordinated care to patients across various care settings, the model of accountable care organizations (ACOs) has evolved. These are groups of hospitals, physicians, and health care providers who voluntarily join to provide coordinated high-quality care to the Medicare patients that they serve. Such coordination may help to prevent medical errors and reduce duplication of services, thereby resulting in cost savings for Medicare, employers, and patients. Before an ACO can share in any savings generated, it must demonstrate that it has met defined quality performance standards developed by CMS. Currently, there are 33 quality measures in the 2014 ACO quality standards in four key domain areas of patient/caregiver experience, care coordination/patient safety, at-risk population (that includes diabetes, hypertension, ischemic vascular disease, heart failure, and coronary artery disease), and preventive care (CMS 2015).

Some advantages of bundled payments include the streamlining and coordination of care among providers. Bundling improves efficiency and reduces redundancy, such as duplicate testing and unnecessary care. It may encourage economies of scale, especially if providers use a single product or type of medical supply. Allowing for risk adjustment or case mix, on a patient-by-patient basis, assists in the determination of an equitable payment amount. Certain drawbacks may also exist in such a system. It does not discourage unnecessary episodes of care. Providers could avoid high-risk patients, overstate the severity of illness, provide the lowest level of service, or delay post-hospital care until after the end of the bundled payment. Certain illnesses may not fall neatly into the “defined episodes,” or a patient could have multiple bundles that overlap each other (Robinow 2010).

Challenges in Health Care Quality

Given the unique structure of the health care delivery system, there are some challenges to improving quality and reducing costs concurrently. A few of these are discussed.

Lack of Strategic Planning

In all organizations, in conjunction with the vision and mission, strategic plans must be created, from which operational goals and objectives should be derived. In the formulation of such plans, the priorities and needs of the customer must be the guiding light. With the patient being the primary customer, their prioritized needs demand attention. This leads to identification of the customer needs in health care and associated metrics for measuring and monitoring them.

Quality Metrics

Selection of metrics that define quality is influenced by the patient, physician, payer, health care facility, accreditation agencies such as The Joint Commission (TJC), not-for-profit agencies such as the National Committee for Quality Assurance (NCQA), and the federal government, such as CMS and the IOM, which is a division of the National Academies of Sciences, Engineering, and Medicine. If a health care organization seeks accreditation, wishes to use the seal of approval from the NCQA, or wishes to treat Medicare patients and be reimbursed by CMS, it must meet the prescribed quality standards set by the corresponding authority. At the national level, there needs to be some form of coordination to determine stated benchmarks.

Since the various stakeholders may have a different set of measures, prioritized in different ways, it may be a challenge to come up with measures that satisfy everyone. It should be noted that any chosen metric must be measurable for it to be monitored. Let us, for example, consider the quality metrics from a patient perspective. These could be facility related, such as satisfaction with the room, meals, or noise level at night; process related, such as the interaction with the physician, nurse, or staff, or the waiting time for a procedure; or outcome related, such as overall patient satisfaction, mortality, morbidity, or length of stay. From a more aggregate perspective at the national level, improved population health could be an objective, where certain measures such as the proportion of people suffering from diabetes or hypertension or deaths annually related to cancer would be of interest. On the other hand, some operational metrics of quality of a health care facility could be measured by the turnaround time for a laboratory test, the accuracy of the financial or billing system, or the efficiency of its collections. For a surgeon, the proportion of readmissions could be a quality metric.

Adoption and Integration of Health Information Technology (HIT)

As data continue to amass at an ever-increasing rate, the use of electronic health records (EHRs) or EMRs will be the only feasible option that will support decision making in an informed, timely, and error-free manner. The federal government through the CMS is strongly supporting this initiative.

While data input to EHR/EMR by physicians and providers is important, a critical issue is the development of a compatible health information technology platform that can assimilate and integrate various forms and types of data from different sources. Figure 3-13 demonstrates the challenges involved in this concept.

Figure illustrating health information technology platform. Laboratory, pharmacy, and facilities combine to form operational data; EMR and health information networks form clinical data; and billing and collections and income and expenditure combine to form financial data. All three data types (clinical, operational, and financial) collectively form the compatible health information technology platform.

Figure 3-13 Health information technology platform.

Presently, there is not a single standard for creating EHR/EMR records. Further, clinical data for the same patient, who may have multiple providers not in the same network, may be stored in different formats. Additionally, operational data from health care facilities, laboratories, and pharmacies could be in varying formats. Financial data, important for billing and collection and estimation of profitability of the organization, could be in yet another format. The challenge is to create an HIT platform that can aggregate and integrate these structures into a compatible platform that is suitable for health care decision making.

Health Care Decision Support Systems

One of the major challenges in decision making in health care in the twenty-first century is the development of an adequate health care decision support system (HCDSS). With the rate at which knowledge in the field is expanding, it may not be feasible for individuals to keep up with this information and utilize it in their decision making without appropriate technology support. While it is true that a decision support system can only recommend actions based on historical evidence, the physician's role in integrating that information with current knowledge, which is ongoing and dynamic, will always remain. Figure 3-14 displays the challenges and the benefits of developing an HCDSS.

A flow diagram depicting health care decision support systems. Arrows from the data warehouse and from ongoing research and discoveries point at an integrated dynamic knowledge base, and from here an upward arrow points at the search engine. From the search engine an arrow points at health care decision support systems, and from here arrows point at improved population health, improved patient outcomes, improved physician performance, and improved effectiveness and efficiency of facilities.

Figure 3-14 Health care decision support systems.

Aggregating and integrating information from various data warehouses as well as ongoing research and discoveries to create an integrated and dynamic knowledge base will require a coordinated and dedicated effort on a continual basis. This notion supports the concept of continuous quality improvement but faces several barriers. High investment costs and a lack of standards, such that most applications do not communicate well and interfacing costs are high, are two of them. Maintaining the privacy of patient records is another barrier as merging of information takes place (Bates and Gawande 2003).

An ongoing challenge is the development of an appropriate search engine, which is the backbone of a HCDSS. The search engine must be able to handle the volume of information and support timely decision making at the point of service (POS) level. It must be able to integrate the discoveries in the fields of genomics, proteomics, and pharmacogenomics, for example, to assist the health care provider.

With the adoption of an adequate search engine, a suitable HCDSS can be created. Methods of health care analytics will be utilized in formulating the decisions presented by the decision support system. The benefits from such an HCDSS will be realized at all levels. At the aggregate level, it may lead to improved population health. Such a DSS could project the amount of vaccination that would be necessary to stop the spread of a certain influenza virus in the population. At the patient and physician levels, major benefits may occur. Reductions in medication errors could occur as possible interactions between drugs to be taken by the patient are reported to the provider in a timely manner. Also, there is less chance of an error in the calculation of weight-based doses of medication. Additionally, as current information on the patient is input to the system, the DSS could provide a rapid response to certain adverse events such as nosocomial infections. As another example, remote monitoring of intensive care unit (ICU) patients could aid in timely decisions. On a more general basis, with the reduction of medical errors and updates on waiting times in various units within the facilities, the decisions recommended will improve the effectiveness and efficiency of the facilities.

3-6 Tools for Continuous Quality Improvement

To make rational decisions using data obtained on a product, process, or service, or from a consumer, organizations use certain graphical and analytical tools. We explore some of these tools here.

Pareto Diagrams

Pareto diagrams are important tools in the quality improvement process. Vilfredo Pareto, an Italian economist (1848–1923), found that wealth is concentrated in the hands of a few people. This observation led him to formulate the Pareto principle, which states that the majority of wealth is held by a disproportionately small segment of the population. In manufacturing or service organizations, for example, problem areas or defect types follow a similar distribution. Of all the problems that occur, only a few are quite frequent; the others seldom occur. These two groups are labeled the vital few and the trivial many. The Pareto principle also lends support to the 80/20 rule, which states that 80% of problems (nonconformities or defects) are created by 20% of causes. Pareto diagrams help prioritize problems by arranging them in decreasing order of importance. In an environment of limited resources, these diagrams help companies decide the order in which they should address problems.

Table 3-7 Customer Dissatisfaction in Airlines

Reasons Count
Lost baggage 15
Delay in arrival 40
Quality of meals 20
Attitude of attendant 25

Figure 3-15 shows a Pareto diagram of reasons for airline customer dissatisfaction. Delay in arrival is the major reason, cited by 40% of customers. Thus, this is the problem that the airlines should address first.

Figure depicting a Pareto diagram for dissatisfied airline customers where the left and the right vertical axes represent count of customers dissatisfied and percent, respectively. The horizontal axis denotes the concerns of the airline customers. Individual values are represented in descending order by bars, and the cumulative total is represented by the line. It is observed from the graph that delays in arrival is the major reason, as indicated by 40% of customers.

Figure 3-15 Pareto diagram for dissatisfied airline customers.
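
A Pareto chart such as Figure 3-15 can also be produced with general-purpose software. The following minimal Python sketch (assuming the pandas and matplotlib libraries are available; all names are illustrative) builds the chart from the Table 3-7 counts, with bars in descending order and a cumulative-percentage line on a secondary axis.

import pandas as pd
import matplotlib.pyplot as plt

# Data from Table 3-7: reasons for airline customer dissatisfaction
data = pd.Series(
    {"Lost baggage": 15, "Delay in arrival": 40,
     "Quality of meals": 20, "Attitude of attendant": 25}
)

# Sort counts in descending order and compute the cumulative percentage
counts = data.sort_values(ascending=False)
cum_pct = counts.cumsum() / counts.sum() * 100

fig, ax1 = plt.subplots()
ax1.bar(counts.index, counts.values, color="steelblue")
ax1.set_ylabel("Count of dissatisfied customers")
ax1.tick_params(axis="x", labelrotation=30)

# Cumulative-percentage line on a secondary axis, as in Figure 3-15
ax2 = ax1.twinx()
ax2.plot(counts.index, cum_pct.values, color="darkred", marker="o")
ax2.set_ylabel("Cumulative percent")
ax2.set_ylim(0, 110)

plt.title("Pareto diagram for dissatisfied airline customers")
plt.tight_layout()
plt.show()

Sorting the counts before plotting is what turns an ordinary bar chart into a Pareto diagram.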

Flowcharts

Flowcharts, which show the sequence of events in a process, are used for manufacturing and service operations. They are often used to diagram operational procedures to simplify a system, as they can identify bottlenecks, redundant steps, and non-value-added activities. A realistic flowchart can be constructed by using the knowledge of the personnel who are directly involved in the particular process. Valuable process information is usually gained through the construction of flowcharts. Figure 3-16 shows a flowchart for patients reporting to the emergency department in a hospital. The chart identifies where delays can occur: for example, in several steps that involve waiting. A more detailed flowchart would allow pinpointing of key problem areas that contribute to lengthening waiting time.

Flowchart for patients in an emergency department (ED): the process starts with the patient presenting to the ED, followed by a wait and then triage by a nurse. If the condition is stable, then after a wait the administrative staff registers the patient and, after a further wait, transports the patient to an ED bed; if the condition is unstable, the patient is transported immediately to an ED bed. After a wait and a primary assessment by a nurse, the patient is evaluated immediately by a doctor if the condition is unstable; if the condition is stable, the doctor evaluates the patient after a wait.

Figure 3-16 Flowchart for patients in an emergency department (ED).

Further, certain procedures could be modified or process operations could be combined to reduce waiting time. A detailed version of the flowchart is the process map, which identifies the following for each operation in a process: process inputs (e.g., material, equipment, personnel, measurement gage), process outputs (these could be the final results of the product or service), and process or product parameters (classified into the categories of controllable, procedural, or noise). Noise parameters are uncontrollable and could represent the in-flow rate of patients or the absenteeism of employees. Through discussion and data analysis, some of the parameters could be classified as critical. It will then be imperative to monitor the critical parameters to maintain or improve the process.

Cause-and-Effect Diagrams

Cause-and-effect diagrams were developed by Kaoru Ishikawa in 1943 and thus are often called Ishikawa diagrams. They are also known as fishbone diagrams because of their appearance (in the plotted form). Basically, cause-and-effect diagrams are used to identify and systematically list various causes that can be attributed to a problem (or an effect) (Ishikawa 1976). These diagrams thus help determine which of several causes has the greatest effect. A cause-and-effect diagram can aid in identifying the reasons why a process goes out of control. Alternatively, if a process is stable, these diagrams can help management decide which causes to investigate for process improvement. There are three main applications of cause-and-effect diagrams: cause enumeration, dispersion analysis, and process analysis.

Cause enumeration is usually developed through a brainstorming session in which all possible types of causes (however remote they may be) are listed to show their influence on the problems (or effect) in question. In dispersion analysis, each major cause is analyzed thoroughly by investigating the subcauses and their impact on the quality characteristic (or effect) in question. This process is repeated for each major cause in a prioritized order. The cause-and-effect diagram helps us analyze the reasons for any variability or dispersion. When cause-and-effect diagrams are constructed for process analysis, the emphasis is on listing the causes in the sequence in which the operations are actually conducted. This process is similar to creating a flow diagram, except that a cause-and-effect diagram lists in detail the causes that influence the quality characteristic of interest at each step of a process.

Using Minitab, create a column in the worksheet for each branch (main cause) and enter the corresponding subcauses. Then, execute the following: Stat > Quality Tools > Cause-and-Effect. Under Causes, enter the names or column numbers of the main causes. The Label for each branch may be entered to match the column names. In Effect, input a brief problem description. Click OK. Figure 3-17 shows the completed cause-and-effect diagram.

Figure depicting a cause-and-effect diagram for the bore size of tires where a rightward arrow denotes the bore size of the tires. Four branches above the arrow and three below the arrow denote the causes. From left to right the branches in the upper section denote measuring equipment, operator, mixing, and incoming material and branches in the lower section denote press, splicing, and tubing. The subcauses corresponding to each cause are also listed.

Figure 3-17 Cause-and-effect diagram for the bore size of tires.

Scatterplots

The simplest form of a scatterplot consists of plotting bivariate data to depict the relationship between two variables. When we analyze processes, the relationship between a controllable variable and a desired quality characteristic is frequently of importance. Knowing this relationship may help us decide how to set a controllable variable to achieve a desired level for the output characteristic. Scatterplots are often used as follow-ups to a cause-and-effect analysis.

Table 3-8 Data on Depth of Cut and Tool Wear

Observation Depth of Cut (mm) Tool Wear (mm) Observation Depth of Cut (mm) Tool Wear (mm)
1 2.1 0.035 21 5.6 0.073
2 4.2 0.041 22 4.7 0.064
3 1.5 0.031 23 1.9 0.030
4 1.8 0.027 24 2.4 0.029
5 2.3 0.033 25 3.2 0.039
6 3.8 0.045 26 3.4 0.038
7 2.6 0.038 27 2.8 0.040
8 4.3 0.047 28 2.2 0.031
9 3.4 0.040 29 2.0 0.033
10 4.5 0.058 30 2.9 0.035
11 2.6 0.039 31 3.0 0.032
12 5.2 0.056 32 3.6 0.038
13 4.1 0.048 33 1.9 0.032
14 3.0 0.037 34 5.1 0.052
15 2.2 0.028 35 4.7 0.050
16 4.6 0.057 36 5.2 0.058
17 4.8 0.060 37 4.1 0.048
18 5.3 0.068 38 4.3 0.049
19 3.9 0.048 39 3.8 0.042
20 3.5 0.036 40 3.6 0.045

Using Minitab, choose the commands Graph > Scatterplot. Select Simple and click OK. Under Y, enter the column number or name, in this case, “Tool wear.” Under X, enter the column number or name, in this case, “Depth of cut.” Click OK.

The resulting scatterplot is shown in Figure 3-18. It gives us an idea of the relationship that exists between depth of cut and amount of tool wear. In this case the relationship is generally nonlinear. For depth-of-cut values of less than 3.0 mm, the tool wear rate seems to be constant, whereas with increases in depth of cut, tool wear starts increasing at an increasing rate. For depth-of-cut values above 4.5 mm, tool wear appears to increase drastically. This information will help us determine the depth of cut to use to minimize downtime due to tool changes.

Figure depicting a scatterplot plotted between tool wear on the y-axis (on a scale of 0.03–0.07 mm) and depth of cut on the x-axis (on a scale of 1–6 mm) depicting a generally nonlinear relationship.

Figure 3-18 Scatterplot of tool wear versus depth of cut.
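
If Minitab is not available, the same plot can be sketched with Python and matplotlib. The fragment below is a minimal illustration using the first ten observations of Table 3-8; in practice all 40 pairs would be entered.

import matplotlib.pyplot as plt

# First ten (depth of cut, tool wear) pairs from Table 3-8; the full data set
# would be entered the same way.
depth_of_cut = [2.1, 4.2, 1.5, 1.8, 2.3, 3.8, 2.6, 4.3, 3.4, 4.5]
tool_wear = [0.035, 0.041, 0.031, 0.027, 0.033, 0.045, 0.038, 0.047, 0.040, 0.058]

plt.scatter(depth_of_cut, tool_wear)
plt.xlabel("Depth of cut (mm)")
plt.ylabel("Tool wear (mm)")
plt.title("Scatterplot of tool wear versus depth of cut")
plt.show()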

Multivariable Charts

In most manufacturing or service operations, there are usually several variables or attributes that affect product or service quality. Since realistic problems usually have more than two variables, multivariable charts are a useful means of displaying collective information.

Several types of multivariable charts are available (Blazek et al. 1987). One of these is known as a radial plot, or star, for which the variables of interest correspond to different rays emanating from a star. The length of each ray represents the magnitude of the variable.

A graph is plotted with percentage nonconforming on the y-axis (on a scale of 1–3) and sampling time on the x-axis (on a scale of 1–2) to depict a radial plot of multiple variables. At each data point, a vertical and a horizontal ray pass through the point: the upper and lower ends of the vertical ray denote temperature and silicon, and the left and right ends of the horizontal ray denote pressure and manganese, respectively.

Figure 3-19 Radial plot of multiple variables.

Several process characteristics can be observed from Figure 3-19. First, from time 1 to time 2, an improvement in the process performance is seen, as indicated by a decline in the percentage nonconforming. Next, we can examine what changes in the controllable variables led to this improvement. We see that a decrease in temperature, an increase in both pressure and manganese content, and a basically constant level of silicon caused this reduction in the percentage nonconforming.

Other forms of multivariable plots (such as standardized stars, glyphs, trees, faces, and weathervanes) are conceptually similar to radial plots. For details on these forms, refer to Gnanadesikan (1977).
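
The idea behind a radial (star) display can be explored with a polar chart in Python. The sketch below uses made-up, standardized values for the four variables of Figure 3-19 at two sampling times; it illustrates the plotting mechanics only and is not a reproduction of the figure.

import numpy as np
import matplotlib.pyplot as plt

# Illustrative (made-up) standardized values of the process variables at two
# sampling times; the actual values behind Figure 3-19 are not given in the text.
variables = ["Temperature", "Pressure", "Manganese", "Silicon"]
time_1 = [0.9, 0.4, 0.5, 0.6]
time_2 = [0.6, 0.7, 0.8, 0.6]

angles = np.linspace(0, 2 * np.pi, len(variables), endpoint=False).tolist()
angles += angles[:1]                      # close the polygon

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for label, values in [("Time 1", time_1), ("Time 2", time_2)]:
    vals = values + values[:1]
    ax.plot(angles, vals, marker="o", label=label)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(variables)
ax.set_title("Star (radial) plot of process variables at two sampling times")
ax.legend()
plt.show()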

Matrix and Three-Dimensional Plots

Investigating quality improvement in products and processes often involves data that deal with more than two variables. With the exception of multivariable charts, the graphical methods discussed so far deal with only one or two variables. The matrix plot is a graphical option for situations with more than two variables. This plot depicts two-variable relationships between a number of variables all in one plot. As a two-dimensional matrix of separate plots, it enables us to conceptualize relationships among the variables. The Minitab software can produce matrix plots.

Table 3-9 Data on Temperature, Pressure, and Seal Strength for Plastic Packages

Observation Temperature Pressure Seal Strength
1 180 80 8.5
2 190 60 9.5
3 160 80 8.0
4 200 40 10.5
5 210 45 10.3
6 190 50 9.0
7 220 50 11.4
8 240 35 10.2
9 220 50 11.0
10 210 40 10.6
11 190 60 8.8
12 200 70 9.8
13 230 50 10.4
14 240 45 10.0
15 240 30 11.2
16 220 40 11.5
17 250 30 10.8
18 180 70 9.3
19 190 75 9.6
20 200 65 9.9
21 210 55 10.1
22 230 50 11.3
23 200 40 10.8
24 240 40 10.9
25 250 35 10.8
26 230 45 11.5
27 220 40 11.3
28 180 70 9.6
29 210 60 10.1
30 220 55 11.1

Using Minitab, the data for the three variables are entered in a worksheet. Next, choose Graph > Matrix Plot and Matrix of Plots Simple. Under Graph variables, input the variable names or column numbers. Click OK. The resulting matrix plot is shown in Figure 3-20. Observe that seal strength tends to increase roughly linearly with temperature up to a certain point, about 210°C; beyond 210°C, seal strength tends to decrease. Seal strength decreases as pressure increases. Also, under the existing process conditions, temperature and pressure themselves show a decreasing relationship: higher pressures are associated with lower temperatures. Such graphical aids provide us with some insight on the relationships between the variables, taken two at a time.

Figure representing matrix plot of strength, temperature, and pressure of plastic package to investigate the impact of temperature and pressure on seal strength.

Figure 3-20 Matrix plot of strength, temperature, and pressure of plastic packages.
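
A matrix plot like Figure 3-20 can be approximated outside Minitab with the pandas scatter_matrix function. The sketch below is a minimal illustration using the first eight observations of Table 3-9; the full data set would be entered the same way.

import pandas as pd
from pandas.plotting import scatter_matrix
import matplotlib.pyplot as plt

# First eight observations from Table 3-9 (temperature, pressure, seal strength)
df = pd.DataFrame({
    "Temperature":   [180, 190, 160, 200, 210, 190, 220, 240],
    "Pressure":      [80, 60, 80, 40, 45, 50, 50, 35],
    "Seal strength": [8.5, 9.5, 8.0, 10.5, 10.3, 9.0, 11.4, 10.2],
})

# Matrix of pairwise scatterplots, analogous to Figure 3-20
scatter_matrix(df, diagonal="hist")
plt.suptitle("Matrix plot of strength, temperature, and pressure")
plt.show()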

Three-dimensional scatterplots depict the joint relationship of a dependent variable with two independent variables. While plots of two variables at a time show pairwise relationships, they do not show the joint effect of two variables on a third variable. Since interactions do occur between variables, a three-dimensional scatterplot is useful in identifying optimal process parameters based on a desired level of an output characteristic.

Figure representing a three-dimensional graph plotted between seal strength on the z-axis (on a scale of 8–11), temperature on the y-axis (on a scale of 150–250), and pressure on the x-axis (on a scale of 40–80) to depict the effect of temperature and pressure on seal strength.

Figure 3-21 Three-dimensional surface plot of strength versus temperature and pressure of plastic packages.
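
For completeness, a three-dimensional scatterplot of the same variables can be sketched in Python with matplotlib. Figure 3-21 itself is a surface plot produced in Minitab, so the fragment below (again using a subset of the Table 3-9 data) illustrates only the scatter version.

import matplotlib.pyplot as plt

# First eight observations from Table 3-9; the full data would be used in practice.
temperature = [180, 190, 160, 200, 210, 190, 220, 240]
pressure = [80, 60, 80, 40, 45, 50, 50, 35]
seal_strength = [8.5, 9.5, 8.0, 10.5, 10.3, 9.0, 11.4, 10.2]

fig = plt.figure()
ax = fig.add_subplot(projection="3d")   # three-dimensional axes
ax.scatter(temperature, pressure, seal_strength)
ax.set_xlabel("Temperature")
ax.set_ylabel("Pressure")
ax.set_zlabel("Seal strength")
plt.show()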

Failure Mode and Effects Criticality Analysis

Failure mode and effects criticality analysis (FMECA) is a disciplined procedure for systematically evaluating the impact of potential failures and thereby determining a priority among possible actions that will reduce the occurrence of such failures. It can be applied at the system level, at the design level for a product or service, at the process level for manufacturing or services, or at the functional level of a component or subsystem.

In products involving safety issues, say the braking mechanism in automobiles, FMECA assists in a thorough analysis of what the various failure modes could be, their impact and effect on the customer, the severity of the failure, the possible causes that may lead to such a failure, the chance of occurrence of such failures, existing controls, and the chance of detection of such failures. Based on the information specified above, a risk priority number (RPN) is calculated, which indicates a relative priority scheme to address the various failures. The risk priority number is the product of the severity, occurrence, and detection ratings. The larger the RPN, the higher the priority. Consequently, recommended actions are proposed for each failure mode. Based on the selected action, the ratings on severity, occurrence, and detection are revised. The severity ratings typically do not change since a chosen action normally influences only the occurrence and/or detection of the failure. Only through fundamental design changes can the severity be reduced. Associated with the action selected, a rating that measures the risk associated with taking that action is incorporated. This rating on risk is a measure of the degree of successful implementation. Finally, a weighted risk priority number, which is the product of the revised ratings on severity, occurrence, and detection and the risk rating, is computed. This number provides management with a priority in the subsequent failure-related problems to address.

Several benefits may accrue from using failure modes and effects criticality analysis. First, by addressing all potential failures even before a product is sold or service rendered, there exists the ability to improve quality and reliability. A FMECA study may identify some fundamental design changes that must be addressed. This creates a better product in the first place rather than subsequent changes in the product and/or process. Once a better design is achieved, processes to create such a design can be emphasized. All of this leads to a reduction in development time of products and, consequently, costs. Since a FMECA involves a team effort, it leads to a thorough analysis and thereby identification of all possible failures. Finally, customer satisfaction is improved, with fewer failures being experienced by the customer.

It is important to decide on the level at which FMECA will be used since the degree of detailed analysis will be influenced by this selection. At the system level, usually undertaken prior to the introduction of either a product or service, the analysis may identify the general areas of focus for failure reduction. For a product, for example, this could be suppliers providing components, parts manufactured or assembled by the organization, or the information system that links all the units. A design FMECA is used to analyze product or service designs prior to production or operation. Similarly, a process FMECA could be used to analyze the processing/assembly of a product or the performance of a service. Thus, a hierarchy exists in FMECA use.

After selection of the level and scope of the FMECA, a block diagram that depicts the units/operations and their interrelationships is constructed, and the unit or operation to be studied is outlined. Let us illustrate the use of FMECA through an example. Consider an original equipment manufacturer (OEM) that assembles computers to order and has a single supplier, as shown in Figure 3-22. Flows of information and goods take place in this chain. We restrict our focus to the OEM, where failure constitutes not meeting customer requirements regarding order quantity, quality, and delivery date.

Figure representing a schematic diagram of an original equipment manufacturer (OEM) with a single supplier. Goods flow from the supplier to the OEM and from the OEM to the customer. Information on orders received flows from the customer to the OEM, and a two-way flow of information takes place between the supplier and the OEM.

Figure 3-22 Original equipment manufacturer with a single supplier.

Now, functional requirements are defined based on the selected level and scope. Table 3-10 lists these requirements based on customer's order quantity, quality, and delivery date. Through group brainstorming, potential failures for each functional requirement are listed. There could be more than one failure mode for each function. Here, for example, a failure in not meeting order quantity could occur due to the supplier and/or the OEM, as shown in Table 3-10. The impact or effects of failures are then listed. In the example, it leads to customer dissatisfaction. Also, for failures in order quantity or delivery date, another effect could be the creation of back orders, if permissible.

Table 3-10 Failure Mode and Effects Criticality Analysis

Functional Requirement Failure Mode Failure Effects Severity Causes Occurrence Controls Detection Risk Priority Number
Meet customer order quantity Not meet specified order quantity due to supplier Dissatisfied customer; back order (if acceptable) 6 Lack of capacity at supplier 7 Available capacity/inventory level reports 4 168
Not meet specified order quantity due to OEM Dissatisfied customer; back order (if acceptable) 6 Lack of capacity at OEM 4 Available capacity reports 2 48
Meet customer order quality Not meet specified order quality at supplier Dissatisfied customer 7 Lack of process quality control at supplier 5 Process control; capability analysis 4 140
Not meet specified order quality at OEM Dissatisfied customer 7 Lack of process quality control at OEM 3 Incoming inspection; matching of customer orders with product bar code 3 63
Meet customer delivery date Not meet specified due date due to supplier Dissatisfied customer; back order (if acceptable) 6 Lack of capacity at supplier 7 Available capacity/inventory level reports 4 168
Not meet specified due date due to OEM Dissatisfied customer; back order (if acceptable) 6 Lack of capacity at OEM 4 Available capacity reports 2 48

The next step involves rating the severity of the failure. Severity ratings are a measure of the impact of such failures on the customer. Such a rating is typically on a discrete scale from 1 (no effect) to 10 (hazardous effect). The Automotive Industry Action Group (AIAG) has some guidelines for severity ratings (El-Haik and Roy 2005), which other industries have adapted correspondingly. Table 3-11 shows rating scores on severity, occurrence, and detection, each of which is on a discrete scale of 1–10. AIAG has guidelines on occurrence and detection ratings as well, with appropriate modifications for process functions (Ehrlich 2002). For the example we indicate a severity rating of 6 on failure to meet order quantity or delivery date, while a rating of 7 is assigned to order quality, indicating that it has more impact on customer dissatisfaction, as shown in Table 3-10.

Table 3-11 Rating Scores on Severity, Occurrence, and Detection in FMECA

Rating Score Severity Criteria Occurrence Criteria Detection Criteria
10 Hazardous without warning Very high; ≥50% in processes; ≥10% in automobile industry Almost impossible; no known controls available
9 Hazardous with warning Very high; 1 in 3 in processes; 5% in automobile industry Very remote chance of detection
8 Very high; customer dissatisfied; major disruption to production line in automobile industry High; 1 in 8 in processes; 2% in automobile industry Remote chance that current controls will detect
7 High; customer dissatisfied; minor disruption to production line in automobile industry High; 1 in 20 in processes; 1% in automobile industry Very low chance of detection
6 Moderate; customer experiences discomfort Moderate; 1 in 80 in processes; 0.5% in automobile industry Low chance of detection
5 Low; customer experiences some dissatisfaction Moderate; 1 in 400 in processes; 0.2% in automobile industry Moderate chance of detection
4 Very low; defect noticed by some customers Moderate; 1 in 2000 in processes; 0.1% in automobile industry Moderately high chance of detection
3 Minor; defect noticed by average customers Low; 1 in 15,000 in processes; 0.05% in automobile industry High chance of detection
2 Very minor; defect noticed by discriminating customers Very low; 1 in 150,000 in processes; 0.01% in automobile industry Very high chance of detection
1 None; no effect Remote; <1 in 1,500,000 in processes; ≤ 0.001% in automobile industry Almost certain detection
Source: Adapted from B. H. Ehrlich, Transactional Six Sigma and Lean Servicing, St. Lucie Press, 2002; B. El-Haik and D. M. Roy, Service Design for Six Sigma, Wiley, New York, 2005.

Causes of each failure are then listed, which will lead to suggestions of remedial actions. The next rating relates to the occurrence of failures; the larger the rating, the more likely the failure is to occur. The guidelines in Table 3-11 could be used to select the rated value. Here, we deem a failure in not meeting order quantity or delivery date to be more likely at the supplier (occurrence rating of 7) than at the OEM (rating of 4). Further, we believe that not meeting specified quality is less likely to happen at the supplier (rating of 5) and even more remote at the OEM (rating of 3). Existing controls to detect failures are studied. Finally, the chance of existing controls detecting failures is indicated by a rating score. In this example, there is a moderately high chance (rating of 4) of detecting lack of capacity at the supplier's location through available capacity/inventory reports, whereas detecting the same at the OEM through capacity reports has a very high chance (rating of 2). A similar situation exists for detecting lack of quality, with the OEM having a high chance of detection (rating of 3) through matching of customer orders with product bar codes relative to that of the supplier (rating of 4) through process control and capability analysis. Finally, in Table 3-10, a risk priority number (RPN) is calculated for each failure mode and listed. Larger RPN values indicate higher priority for the corresponding failure modes. Here, we would first address capacity issues at the supplier, which affect both order quantity (RPN of 168) and due dates (RPN of 168), followed by lack of quality at the supplier (RPN of 140).
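
Because the RPN is simply the product of the severity, occurrence, and detection ratings, the values in Table 3-10 are easy to verify. The following minimal Python sketch recomputes and ranks them (the shortened failure-mode labels are ours) and also shows the weighted RPN arithmetic used later in Table 3-12.

# Failure modes from Table 3-10: (label, severity, occurrence, detection)
failure_modes = [
    ("Order quantity - supplier", 6, 7, 4),
    ("Order quantity - OEM",      6, 4, 2),
    ("Order quality - supplier",  7, 5, 4),
    ("Order quality - OEM",       7, 3, 3),
    ("Delivery date - supplier",  6, 7, 4),
    ("Delivery date - OEM",       6, 4, 2),
]

# RPN = severity x occurrence x detection; larger values get higher priority
rpns = [(label, s * o * d) for label, s, o, d in failure_modes]
for label, rpn in sorted(rpns, key=lambda item: item[1], reverse=True):
    print(f"{label:28s} RPN = {rpn}")

# Weighted RPN for one row of Table 3-12: revised ratings (6, 4, 4) and risk rating 3
revised_rpn = 6 * 4 * 4          # 96
weighted_rpn = revised_rpn * 3   # 288
print("Weighted RPN =", weighted_rpn)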

Continuing with the FMECA, the next step involves listing specific recommended actions for addressing each failure mode. Table 3-12 presents this phase of the analysis. The objectives of the recommended actions are to reduce the severity and/or occurrence of the failure modes and to increase their detection through appropriate controls such as evaluation techniques or detection equipment. Whereas severity can be reduced only through a design change, occurrence may be reduced through design or process improvements. Of the possible recommended actions, Table 3-12 lists the action taken for each failure mode and the revised ratings on severity, occurrence, and detection. In this example, with no design changes made, the severity ratings are the same as before. However, occurrence has been lowered by the corresponding action taken. Also, for certain failure modes (e.g., lack of quality), the chances of detection are improved through monitoring of critical-to-quality (CTQ) characteristics (lower ratings compared to those in Table 3-10). A revised RPN is calculated, which could then be used for further follow-up actions. Here, it seems that failure to meet order quantity or the delivery date due to lack of capacity at the supplier is still a priority issue.

Table 3-12 Impact of Recommended Actions in FMECA Analysis

Functional Requirement Recommended Actions Actions Taken Revised Severity Revised Occurrence Revised Detection Revised RPN Risk Weighted Risk Priority Number
Meet customer order quantity Increase capacity through overtime at supplier Overtime at supplier 6 4 4 96 3 288
Flexibility to add suppliers or through overtime at OEM Overtime at OEM 6 2 2 24 1 24
Meet customer order quality Identify critical to quality characteristics (CTQ) through Pareto analysis at supplier Monitor CTQ characteristics at supplier 7 3 3 63 3 189
Pareto analysis of CTQ characteristics at OEM and appropriate remedial action Monitor CTQ characteristics at OEM 7 2 2 28 2 56
Meet customer delivery date Increase capacity through overtime at supplier Overtime at supplier 6 4 4 96 3 288
Flexibility to add suppliers or through overtime at OEM Overtime at OEM 6 2 2 24 1 24
Reduce downtime at OEM 6 2 2 24 4 96

Associated with each action, a rating on a scale of 1–5 is used to indicate the risk of taking that action. Here, risk refers to the chance of implementing the action successfully, with a 1 indicating the smallest risk. Using this concept, observe from Table 3-12 that it is easier to add overtime at the OEM (risk rating of 1) compared to that at the supplier (risk rating of 3), since the OEM has more control over its operations. Similarly, for the OEM it is easier to add overtime than it is to reduce downtime (risk rating of 4). Finally, the last step involves multiplying the revised RPN by the risk factor to obtain a weighted risk priority number. The larger this number, the higher the priority associated with the failure mode and the corresponding remedial action. Management could use this as a means of choosing areas to address.

3-7 International Standards ISO 9000 and Other Derivatives

Quality philosophies have revolutionized the way that business is conducted. It can be argued that without quality programs the global economy would not exist because quality programs have been so effective in driving down costs and increasing competitiveness. Total quality systems are no longer an option—they are required. Companies without quality programs are at risk. The emphasis on customer satisfaction and continuous quality improvement has necessitated a system of standards and guidelines that support the quality philosophy. To address this need, the ISO developed a set of standards, ISO 9000, 9001, and 9004.

The ISO 9000 standards referred to as quality management standards (QMSs) were revised in 2015. This series consists of three primary standards: ISO 9000, Quality Management Systems: Fundamentals and Vocabulary; ISO 9001, Quality Management Systems: Requirements; and ISO 9004, Quality Management Systems: Guidelines for Performance Improvements. ISO 9001 is a more generic standard applicable to manufacturing and service industries, which have the option of omitting requirements that do not apply to them specifically. Further, organizations may be certified and registered only to ISO 9001. ISO 9004 presents comprehensive quality management guidelines that could be used by companies to improve their existing quality systems; these are not subject to audit, and organizations do not register to ISO 9004.

Features of ISO 9000

Eight key management principles underlie the revisions of ISO 9000 and the associated ANSI/ISO/ASQ Q9000 standards. They reflect the philosophy and principles of total quality management, as discussed earlier in the chapter.

Since the primary focus of an organization is to meet or exceed customer needs, a desirable shift in the standards has been toward addressing those needs. This blends with our discussion of the philosophies of quality management, which equally emphasize this aspect. Senior management must set the direction for the vision and mission of the company, with input from all levels in order to obtain the necessary buy-in of all people. As stated in one of the points of Deming's System of Profound Knowledge, optimization of the system (which may consist of suppliers and internal and external customers) is a desirable approach. Further, the emphasis on improving the process based on observed data, and on information derived through analyses of the data, as advocated by Deming, is found in the current revision of the standards. Finally, the principle of continuous improvement, advocated in the philosophy of total quality management, is also embraced in the standards.

ISO 9001 and ISO 9004 are a consistent pair. They are designed for use together but may be used independently, with their structures being similar. Two fundamental themes, customer-related processes and the concept of continual improvement, are visible in the new revision of the standards.

The standards have evolved from documenting procedures to a focus on developing and managing effective processes. An emphasis on the role of top management is evident, along with a data-driven process of identifying measurable objectives and measuring performance against them. Concepts of quality improvement discussed under the Shewhart (Deming) cycle of plan–do–check–act are integrated into the standards.

The new version of ISO 9000 standards follows a high-level structure with uniform use of core texts and terms. The focus on a process-oriented approach is adopted along with an inclusion of topics on risk management, change management, and knowledge management.

Other Industry Standards

Various industries are adopting standards similar to ISO 9000 but modified to meet their specific needs. A few of these standards are listed:

  1. ISO/TS 16949. In the United States, the Big Three automotive companies (Daimler, Ford, and General Motors), working through the Automotive Industry Action Group (AIAG), adopted ISO/TS 16949, Quality Management Systems: Particular Requirements for Automotive Production and Relevant Service Organizations, thereby eliminating conflicting requirements for suppliers. Previously, each company had its own requirements for suppliers.
  2. AS 9100. The aerospace industry, following a process similar to that used in the automotive industry, has developed the standard AS 9100, Quality Management Systems—Requirements for Aviation, Space, and Defense Organizations. These standards incorporate the features of ISO 9001 as well as the Federal Aviation Administration (FAA) Aircraft Certification System Evaluation Program and Boeing's massive D1-9000 variant of ISO 9000. Companies such as Boeing, Rolls-Royce Allison, and Pratt & Whitney use AS 9100 as the basic quality management system for their suppliers.
  3. TL 9000. This is the standard developed in the telecommunications service industry to seek continuous improvement in quality and reliability. The Quality Excellence for Suppliers of Telecommunications (QuEST) Leadership Forum, formed by leading telecommunications service providers such as BellSouth, Bell Atlantic, Pacific Bell, and Southwestern Bell was instrumental in the creation of this standard. The membership now includes all regional Bell operating companies (RBOCs), AT&T, GTE, Bell Canada, and telecommunications suppliers such as Fujitsu Network Communications, Lucent Technologies, Motorola, and Nortel Networks. The globalization of the telecommunications industry has created a need for service providers and suppliers to implement common quality system requirements. The purpose of the standard is to effectively and efficiently manage hardware, software, and services by this industry. Through the adoption of such a standard, the intent is also to create cost- and performance-based metrics to evaluate efforts in the quality improvement area.
  4. ISO 13485. The ISO 13485 standard is applicable to medical device manufacturers and is a stand-alone standard.
  5. Anticipated developments. OHSAS 18001, the Occupational Health and Safety Assessment Series, is a set of international occupational health and safety management standards intended to help minimize risks to employees. ISO 45001, the Occupational Health and Safety Management Standard, is set to replace OHSAS 18001.

Summary

In this chapter we examined the philosophy of total quality management and the role management plays in accomplishing desired organizational goals and objectives. A company's vision describes what it wants to be; the vision molds the quality policy. This policy, along with the support and commitment of top management, defines the quality culture that prevails in an organization. Since meeting and exceeding customer needs are fundamental criteria for the existence and growth of any company, the steps of product design and development, process analysis, and production scheduling have to be integrated into the quality system.

The fundamental role played by top management cannot be overemphasized. Based on a company's strategic plans, the concept of using a balanced scorecard that links financial and other dimensions, such as learning and growth and customers, is a means for charting performance. Exposure to the techniques of failure mode and effects criticality analysis enables adequate product or process design.

The planning tool of quality function deployment is used in an interdisciplinary team effort to accomplish the desired customer requirements. Benchmarking enables a company to understand its relative performance with respect to industry performance measures and thus helps the company improve its competitive position. Adaptation of best practices to the organization's environment also ensures continuous improvement. Vendor quality audits, selection, and certification programs are important because final product quality is influenced by the quality of raw material and components.

The field of health care is unique and important. Accordingly, a discussion of health care analytics, its application to big data, and challenges in creating an information technology platform that will promote the development of a decision support system to impact point-of-service decisions are presented.

Since quality decisions are dependent on the collected data and information on products, processes, and customer satisfaction, simple tools for quality improvement that make use of such data have been presented. These include Pareto analysis, flowcharts, cause-and-effect diagrams, and various scatterplots. Finally, some international standards on quality assurance practices have been depicted.

Key Terms

  1. approved vendor
  2. AS 9100
  3. balanced scorecard; benchmarking
  4. big data
  5. cause-and-effect diagram
  6. certified vendor
  7. change management
  8. cycle time
  9. empowerment
  10. failure mode and effects criticality analysis
  11. flowchart
  12. gap analysis
  13. health care analytics
    1. data visualization
    2. predictive models
    3. prescriptive analytics
  14. health information technology
    1. health care decision support systems
  15. house of quality; ISO 13485; ISO/TS 16949
  16. ISO 9000, 9001, 9004
  17. matrix plot
  18. mission statement
  19. multivariable charts
  20. organizational culture
  21. Pareto diagram
  22. payment schemes in health care
    1. bundled payment
    2. capitation
    3. diagnosis-related groups
    4. fee-for-services
    5. pay-for-participation
    6. pay-for-performance
  23. performance standards
  24. preferred vendor
  25. quality audit
    1. conformity quality audit
    2. process audit
    3. product audit
    4. suitability quality audit
    5. system audit
  26. quality function deployment
  27. quality policy
  28. risk priority number
  29. scatter diagrams
  30. scatterplot
  31. six sigma quality
    1. define phase
    2. measure phase
    3. analyze phase
    4. improve phase
    5. control phase
  32. three-dimensional scatterplot
  33. time-based competition
  34. TL 9000
  35. vendor certification
  36. vendor rating
  37. vision

Exercises

  1. 3.1 Describe the total quality management philosophy. Choose a company and discuss how its quality culture fits this theme.
  2. 3.2 What are the advantages of creating a long-term partnership with vendors?
  3. 3.3 Compare and contrast a company vision, mission, and quality policy. Discuss these concepts in the context of a hospital of your choice.
  4. 3.4 Describe Motorola's concept of six sigma quality and explain the level of nonconforming product that could be expected from such a process.
  5. 3.5 What are the advantages of using quality function deployment? What are some key ingredients that are necessary for its success?
  6. 3.6 Select an organization of your choice in the following categories. Identify the organization's strategy. Based on these strategies, perform a balanced scorecard analysis by indicating possible diagnostic and strategic measures in each of the areas of learning and growth, internal processes, customers, and financial status.
    1. Information technology services
    2. Health care
    3. Semiconductor manufacturing
    4. Pharmaceutical
  7. 3.7 Consider the airline transportation industry. Develop a house of quality showing customer requirements and technical descriptors.
  8. 3.8 Consider a logistics company transporting goods on a global basis. Identify possible vision and mission statements and company strategies. Conduct a balanced scorecard analysis and indicate suggested diagnostic and strategic measures in each of the areas of learning and growth, internal processes, customers, and financial.
  9. 3.9 Consider the logistics company in Exercise 3-8. Conduct a quality function deployment analysis where the objective is to minimize delays in promised delivery dates.
  10. 3.10 Describe the steps of benchmarking relative to a company that develops microchips. What is the role of top management in this process?
  11. 3.11 What are the various types of quality audits? Discuss each and identify the context in which they are used.
  12. 3.12 A financial institution is considering outsourcing its information technology–related services. What are some criteria that the institution should consider? Propose a scheme to select a vendor.
  13. 3.13 The area of nanotechnology is of much importance in many phases of our lives—one particular area being development of drugs for Alzheimer's disease. Discuss the role of benchmarking, innovation, and time-based competition in this context.
  14. 3.14 In a large city, the mass-transit system, currently operated by the city, needs to be overhauled with projected demand expected to increase substantially in the future. The city government is considering possible outsourcing.
    1. Discuss the mission and objectives of such a system.
    2. What are some criteria to be used for selecting a vendor?
    3. For a private vendor, through a balanced scorecard analysis, propose possible diagnostic and strategic measures.
  15. 3.15 What is the purpose of vendor certification? Describe typical phases of certification.
  16. 3.16 Discuss the role of national and international standards in certifying vendors.
  17. 3.17 The postal system has undertaken a quality improvement project to reduce the number of lost packages. Construct a cause-and-effect diagram and discuss possible measures that should be taken.
  18. 3.18 The safe operation of an automobile is dependent on several subsystems (e.g., engine, transmission, braking mechanism). Construct a cause-and-effect diagram for automobile accidents. Conduct a failure mode and effects criticality analysis and comment on areas of emphasis for prevention of accidents.
  19. 3.19 Consider Exercise 3-18 on the prevention of automobile accidents. However, in this exercise, consider the driver of the automobile. Construct a cause-and-effect diagram for accidents influenced by the driver. Conduct a failure mode and effects criticality analysis considering issues related to the driver, assuming that the automobile is in fine condition.
  20. 3.20 You are asked to make a presentation to senior management outlining the demand for a product. Describe the data you would collect and the tools you would use to organize your presentation.
  21. 3.21 Consider a visit to your local physician's office for a routine procedure. Develop a flowchart for the process. What methods could be implemented to improve your satisfaction and reduce waiting time?
  22. 3.22 What are some reasons for failure of total quality management in organizations? Discuss.
  23. 3.23 A product goes through 20 independent operations. For each operation, the first-pass yield is 95%. What is the rolled throughput yield for the process?
  24. 3.24 Consider Exercise 3-23. Suppose, through a quality improvement effort, that the first-pass yield of each operation is improved to 98%. What is the percentage improvement in rolled throughput yield?
  25. 3.25 Consider Exercise 3-24. Through consolidation of activities, the number of operations has now been reduced to 10, with the first-pass yield of each operation being 98%. What is the percentage improvement in rolled throughput yield relative to that in Exercise 3-24?
  26. 3.26 Discuss the importance of health care analytics and its possible contributions.
  27. 3.27 What are some of the challenges faced by the health care industry in the twenty-first century?
  28. 3.28 Discuss the challenges and the contributions that could be derived from the development of a health care decision support system in the current century.
  29. 3.29 Discuss the role of established standards and third-party auditors in quality auditing. What is the role of ISO 9000 standards in this context?
  30. 3.30 In a printing company, data from the previous month show the following types of errors, with the unit cost (in dollars) of rectifying each error, in Table 3-13.
    1. Construct a Pareto chart and discuss the results.
    2. If management has a monthly allocation of $18,000, which areas should they tackle?

    Table 3-13

    Error Categories Frequency Unit Costs
    Typographical 4000 0.20
    Proofreading 3500 0.50
    Paper tension 80 50.00
    Paper misalignment 100 30.00
    Inadequate binding 120 100.00
  31. 3.31 An insurance company is interested in determining whether life insurance coverage is influenced linearly by disposable income. A randomly chosen sample of size 20 produced the data shown in Table 3-14. Construct a scatterplot. What conclusions can you draw?

    Table 3-14

    Disposable Income ($ thousands) Life Insurance Coverage ($ thousands) Disposable Income ($ thousands) Life Insurance Coverage ($ thousands)
    45 60 65 80
    40 58 60 90
    65 100 45 50
    50 50 40 50
    70 120 55 70
    75 140 55 60
    70 100 60 80
    40 50 75 120
    50 70 45 50
    45 60 65 70
  32. 3.32 Use a flowchart to develop an advertising campaign for a new product that you will present to top management.
  33. 3.33 Is accomplishing registration to ISO 9001 standards similar to undergoing an audit process? What are the differences?
  34. 3.34 Discuss the emerging role of ISO 9000 standards in the global economy.
  35. 3.35 In a chemical process, the parameters of temperature, pressure, proportion of catalyst, and pH value of the mixture influence the acceptability of the batch. The data from 20 observations are shown in Table 3-15.
    1. Construct a multivariable chart. What inferences can you make regarding the desirable values of the process parameters?
    2. Construct a matrix plot and make inferences on desirable process parameter levels.
    3. Construct contour plots of the proportion nonconforming by selecting two of the process parameters at a time and comment.

    Table 3-15

    Observation Temperature (°C) Pressure (kg/cm2) Proportion of Catalyst Acidity (pH) Proportion Nonconforming
    1 300 100 0.03 20 0.080
    2 350 90 0.04 20 0.070
    3 400 80 0.05 15 0.040
    4 500 70 0.06 25 0.060
    5 550 60 0.04 10 0.070
    6 500 50 0.06 15 0.050
    7 450 40 0.05 15 0.055
    8 450 30 0.04 20 0.060
    9 350 40 0.04 15 0.054
    10 400 40 0.04 15 0.052
    11 550 40 0.05 10 0.035
    12 350 90 0.04 20 0.070
    13 500 40 0.06 10 0.030
    14 350 80 0.04 15 0.070
    15 300 80 0.03 20 0.060
    16 550 30 0.05 10 0.030
    17 400 80 0.03 20 0.065
    18 500 40 0.05 15 0.035
    19 350 90 0.03 20 0.065
    20 500 30 0.06 10 0.040

References

  1. ASQ (2005). Quality Management Systems: Fundamentals and Vocabulary, ANSI/ISO/ASQ Q9000. Milwaukee, WI: American Society for Quality.
  2. ——— (2008). Quality Management Systems: Requirements, ANSI/ISO/ASQ Q9001. Milwaukee, WI: American Society for Quality.
  3. ——— (2009). Quality Management Systems: Guidelines for Performance Improvement, ANSI/ISO/ASQ Q9004. Milwaukee, WI: American Society for Quality.
  4. Baker, J. J. (2002). "Medicare Payment System for Hospital Inpatients: Diagnosis Related Groups," Journal of Health Care Finance, 28(3): 1–13.
  5. Bates, D. W., and A. A. Gawande (2003). "Improving Safety with Information Technology," New England Journal of Medicine, 348: 2526–2534.
  6. Birkmeyer, N. J., and J. D. Birkmeyer (2006). "Strategies for Improving Surgical Quality—Should Payers Reward Excellence or Effort?" New England Journal of Medicine, 354(8): 864–870.
  7. Blazek, L. W., B. Novic, and D. M. Scott (1987). "Displaying Multivariate Data Using Polyplots," Journal of Quality Technology, 19(2): 69–74.
  8. Buzanis, C. H. (1993). "Hyatt Hotels and Resorts: Achieving Quality Through Employee and Guest Feedback Mechanisms." In Managing Quality in America's Most Admired Companies, J. W. Spechler (Ed.). San Francisco, CA: Berrett-Koehler.
  9. Carroll, A. E. (2014). "The New Health Care: The Problem with 'Pay for Performance' in Medicine," New York Times, July 28, 2014.
  10. Centers for Medicare and Medicaid Services (CMS) (2015). www.medicare.gov.
  11. Chung, K. C., and M. J. Shauver (2009). "Measuring Quality in Healthcare and Its Implications for Pay-for-Performance Initiatives," Hand Clinics, 25(1): 71–81.
  12. Cox, T. (2011). "Exposing the True Risks of Capitation Financed Healthcare," Journal of Healthcare Risk Management, 30: 34–41.
  13. Darr, K. (2003). "The Centers for Medicare and Medicaid Services Proposal to Pay for Performance," Hospital Topics, 81(2): 30–32.
  14. Ehrlich, B. H. (2002). Transactional Six Sigma and Lean Servicing. Boca Raton, FL: St. Lucie Press.
  15. El-Haik, B., and D. M. Roy (2005). Service Design for Six Sigma. Hoboken, NJ: Wiley.
  16. Gnanadesikan, R. (1977). Methods for Statistical Data Analysis of Multivariate Observations. New York: Wiley.
  17. Institute of Medicine (IOM) (2006). Rewarding Provider Performance: Aligning Incentives in Medicare. Report Brief. http://iom.nationalacademies.org.
  18. Ishikawa, K. (1976). Guide to Quality Control. Hong Kong: Asian Productivity Organization, Nordica International Limited.
  19. ISO (2015a). Quality Management Systems: Fundamentals and Vocabulary, ISO 9000. Geneva: International Organization for Standardization.
  20. ——— (2015b). Quality Management Systems: Requirements, ISO 9001. Geneva: International Organization for Standardization.
  21. ——— (2015c). Quality Management Systems: Guidelines for Performance Improvement, ISO 9004. Geneva: International Organization for Standardization.
  22. Kaplan, R. S., and D. P. Norton (1996). The Balanced Scorecard. Boston, MA: Harvard Business School Press.
  23. Kongstvedt, P. R. (2001). The Managed Health Care Handbook, 4th ed. New York: Aspen.
  24. Mills, C. A. (1989). The Quality Audit. Milwaukee, WI: American Society for Quality Control.
  25. Minitab, Inc. (2014). Release 17. State College, PA: Minitab.
  26. Rand Corporation (2015). "Bundled Payment." www.rand.org/health/key-topics/paying-for-care/bundled-payment.
  27. Robinow, A. (2010). "The Potential of Global Payment: Insights from the Field." Washington, DC: The Commonwealth Fund.
  28. Rosenthal, M. B., and R. G. Frank (2006). "What Is the Empirical Basis for Paying for Quality in Health Care?" Medical Care Research and Review, 63(2): 135–157.