Chapter Four. Focus on the Customer and Eliminate Waste

The most significant source of waste is overproduction. . . . [which] results in shortages, because processes are busy making the wrong things. . . . you need extra capacity because you are using some of your labor and equipment to produce parts that are not yet needed.

Mike Rother and John Shook1

1. Mike Rother and John Shook, Learning to See: Value-Stream Mapping to Create Value and Eliminate Muda (The Lean Enterprise Institute, 2003), p. 43.

In Learning to See, Rother and Shook seek to teach readers how to recognize sources of waste. As humans, we become so conditioned to our environment and the way things have always been done that we are often blind to opportunities for improvement. For example, producing large quantities of products in batches in order to optimize the efficiency of a particular plant doesn’t appear wasteful from the perspective of the plant manager. But if the products then sit in inventory for months until they are needed by downstream processes to satisfy customer demand, it is tremendously wasteful. Waste in the world of integration can be hard to see because of past conditioning, because of our narrow perspective, and because we are dealing with “virtual” computer software products. Gold-plating, or building software functionality before it is needed, is a form of overproduction. As you progress through this chapter and the rest of the book, we hope that you will learn to see more opportunities to eliminate waste.

Lean Integration starts by focusing on the customer, that is, the people who demand, and usually pay for, the integrated capabilities in an enterprise. This requires that integration be defined as a service in order to establish a customer-supplier relationship. The Integration Competency Center acts as an internal services business, providing a variety of integration services to functional groups and disparate application teams across the enterprise.

This contrasts with the traditional approach where integration is performed as ad hoc point-to-point information exchanges or is controlled by a project with a finite life. When the project is over, ongoing management of the integration elements is dispersed to various teams, none of which sees the total picture, and the focus on the end customer rapidly fades. Enough has been said about the inevitable emergence of the integration hairball and its negative effects under this approach. In order to implement a Lean Integration strategy, you must break out of the old pattern where integration elements are static extensions of business systems and accept a new paradigm where integration components collectively are a system with their own life cycle.

This chapter presents the concepts and core principles of what it means to focus on the customer, eliminate muda from the customer’s perspective, and map the value chain of end-to-end activities that produce value. To amplify the concepts, we include case studies from a retail company that we shall call Clicks-and-Bricks, and a large bank that we call Big Bank, to provide concrete examples of waste elimination and value stream mapping.

Focus on the Customer

The critical element in the Lean paradigm is to view integration as a service and to understand the value that integration provides in the eyes of the customer. Value is a customer perspective, not a supplier perspective. For example, most business users don’t care about data quality; they care about reducing the order-to-cash cycle, increasing market campaign effectiveness, approving more loans without increasing risk, reducing average resolution time in the call center, and getting accurate and timely financial reports. Of course, data that is complete and consistent across multiple operational systems facilitates these business results, but it is the outcomes that business leaders care about and not the methods and tools that produced them.

The first step in the Lean Integration journey, therefore, is to identify who the internal customer is, understand the customer’s needs, define the services that the ICC will provide, and develop a clear picture of the value the customer expects. This is easier said than done. Chapter 12, Integration Methodology, delves into specific techniques for defining the mission of an ICC, its scope of operations, the activities it performs, the customers it serves, and the services it provides. But first it is essential to understand the concept.

Once you have defined the customer, the services that are provided, and the value that is expected, you can map out the value stream. Womack and Jones in Lean Thinking define the value stream this way: “The value stream is the set of all the specific actions required to bring a specific product (whether a good, a service, or, increasingly, a combination of the two) through three critical management tasks of any business: the problem-solving task running from concept through detailed design and engineering to production launch, the information management task running from order-taking through detailed scheduling to delivery, and the physical transformation task proceeding from raw materials to a finished product in the hands of the customer.”2

2. James P. Womack and Daniel T. Jones, Lean Thinking: Banish Waste and Create Wealth in Your Corporation (Free Press, 2003), p. 19.

A restaurant experience, for example, involves finding out what the customer wants to order, processing the order and related transactions including payment, and actually producing and delivering the meal. An integrated and optimized implementation of these tasks provides superior customer experiences. In a data integration example, the three management tasks are

1. Clarifying the business problem by figuring out the specific data that needs to be combined, cleansed, synchronized, or migrated

2. Transacting the project activities, such as initiation, planning, execution, tracking, and deployment

3. Building the integration infrastructure, such as installing middleware platforms, codifying business transformation rules, optimizing performance, and capturing operational metadata

Each of these three management tasks (problem solving, information management, and physical transformation) contains activities and steps that the customer will value, as well as some that from the customer’s perspective are non-value-adding. From a Lean perspective, non-value-added activities are muda and must be eliminated. The percentage of waste in a value stream is a good measure of the maturity of a Lean Integration practice. As Womack and Jones wrote, “Based on years of benchmarking and observation in organizations around the world, we have developed the following simple rules of thumb: Converting a classic production system . . . will double labor productivity all the way through the system while cutting production throughput times by 90 percent and reducing inventories in the system by 90 percent as well.”3

3. Ibid., p. 27.

In other words, if you have not yet implemented a Lean Integration practice, a detailed value-stream-mapping exercise will likely show that 50 percent of the labor and 90 percent of the delivery time in the end-to-end process is waste. By applying Lean practices, you should expect, over time, to produce twice as much work with the same staff, reduce lead time by 90 percent, and reduce work in progress and integration dependencies (“inventories” in the integration domain) by 90 percent. Do these claims sound fantastic and unbelievable? If so, it may be that you have become locked into the perspective, driven by years of fragmented silo thinking, that integrations are being built as efficiently as they can be. Time and time again we have come across managers and technical leaders in organizations who insist they are highly efficient. One manager told us he needed 90 days to build and deploy a new integration feed to the enterprise data warehouse. When we challenged that perspective and suggested it should take no more than 5 days, the response was “That would never work here.” An analysis of the manager’s operation showed that the value stream consisted of 1 day of value-added work and 89 days of waste. Here are a few of the specifics that contributed to the high percentage of waste:

• Problem-solving activities were slow because there were no documented data standards. Each integration problem required meetings (usually several of them) with multiple SMEs. Once the right people were in the room, the requirements were quickly identified, but simply finding a time to schedule the meeting when everyone was available typically added 20 to 30 days to the process. Furthermore, the time to schedule, and reschedule, meetings was non-value-added, as were the delays in waiting for meetings. The actual time spent solving problems in the meeting was also waste, but this may have been unavoidable in the short term until documentation, metadata, and processes improved.

• Process execution activities included a great deal of waiting time because multiple teams were involved and the communication between teams was inconsistent and slow because of a traditional batch and queue system. For example, a given project might involve staff from multiple teams such as project management, systems engineering, network security, multiple application teams, database administration, testing, change management, and operations management. Any time a team needed support from another team, someone would request it through either an email or an internal Web application that captured the details. The SLA for each team was typically 2 days (i.e., a request for support would typically result in a response within 2 days), which sounds relatively quick. Sometimes teams responded more quickly, and other times the response after 2 days was to ask the submitter to resend the request because of missing or inaccurate information. In any event, even a simple integration project required on the order of ten or more inter-team service requests, which added up to an average of 20 days of delay waiting for responses.

• Manual reviews of the changes to authorize movement of integration code from development to test and then from test to production added another 20 days to the process. The standard process in this case was that each time code was to be migrated, it required signoff by a change control committee that consisted of 10 to 15 staff members from different departments (the number varied based on the nature of the change). This process had been established over a period of years in response to production incidents and outages as a result of deploying changes where the dependencies between components were not well understood. Of course, the customer values changes that don’t disrupt current operations when deployed, but the fact that 10 or more people needed to review each change (twice) before moving it to production was a waste. Once again this waste may have been unavoidable in the short term until quality improved and change controls could be automated, but nonetheless it was still waste from the customer perspective.

• Other sources of waste included excessive project ramp-up time caused by disagreements about which tools and project standards to use, delays in gaining access to test environments shared by multiple project teams, and rework in the integration cycle once data quality issues were identified.

When we added up all the wasted time, we found that the amount of actual value-added work was only 1 day out of 90. This ratio of value-added work to non-value-added work is not unusual for organizations looking at their end-to-end processes from the customer’s point of view for the first time. Changing processes to eliminate this waste will not happen overnight. As anyone who has worked in this sort of complex, federated organizational environment knows, gaining agreement across the organization for the necessary process changes to eliminate the waste is neither easy nor quick. But nonetheless, it is possible to do so, and the rewards when you are successful are huge from the customer’s perspective.
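
To make the arithmetic concrete, the sketch below tallies the waste categories described above. It is purely illustrative: the per-category day counts are representative figures consistent with the ranges quoted, not exact measurements from the manager’s operation.

```python
# Illustrative tally of the 90-day value stream described above.
# The per-category figures are representative, not exact measurements.
waste_days = {
    "scheduling and holding requirements meetings": 25,      # "20 to 30 days"
    "waiting on inter-team service requests": 20,             # ~10 requests at a 2-day SLA
    "manual change-control reviews (dev->test, test->prod)": 20,
    "ramp-up disputes, test-environment contention, rework": 24,
}
value_added_days = 1
total_days = value_added_days + sum(waste_days.values())      # 90 days end to end

print(f"Total lead time: {total_days} days")
print(f"Value-added ratio: {value_added_days / total_days:.1%}")    # roughly 1%
print(f"Waste ratio: {sum(waste_days.values()) / total_days:.1%}")  # roughly 99%
```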

Integration Wastes

The seven wastes (muda) that were targeted in manufacturing and production lines are

1. Transportation (unnecessary movement of materials)

2. Inventory (excess inventory including work in progress)

3. Motion (extra steps by people to perform the work)

4. Waiting (periods of inactivity)

5. Overproduction (production ahead of demand)

6. Overprocessing (rework and reprocessing)

7. Defects (effort involved in inspecting for and fixing defects)

Developing software has many parallels with the production of physical goods, and Mary and Tom Poppendieck have done an excellent job in Implementing Lean Software Development of translating these wastes into the world of software development. Many of these are fairly obvious, of course, such as wasted effort fixing defects rather than developing defect-free code in the first place. Overproduction in the software world is the equivalent of adding features to a software component that were not requested or are not immediately required (sometimes referred to as “gold-plating”). As the Poppendiecks say, “Wise software development organizations place top priority on keeping the code base simple, clean, and small.”4 We can reexpress the seven wastes in terms of their software corollaries:

4. Mary and Tom Poppendieck, Implementing Lean Software Development: From Concept to Cash (Addison-Wesley, 2006), p. 69.

1. Transportation (unnecessary people or data handoffs)

2. Inventory (work not deployed)

3. Motion (task switching)

4. Waiting (delays)

5. Overproduction (extra features)

6. Overprocessing (revisiting decisions or excessive quality)

7. Defects (data and code defects)

Other parallels between the physical product world and the virtual software world are less obvious, such as transportation waste being the equivalent of loss of information in handoffs between designers, coders, testers, and implementers, or motion waste being the equivalent of task switching in an interrupt-driven work environment. We don’t need to repeat here what the Poppendiecks say about waste in software development, but their book does not address all the integration sub-disciplines, and there are several areas where we often see a tremendous amount of wasted time, effort, and money.

The following sections describe five integration wastes; the list is by no means exhaustive. We include these to highlight some areas that are often overlooked as waste and to change your perspective so that you can begin to identify waste in your own environment.

Waste 1: Gold-Plating

Building functional integration capabilities or extra features before they are needed is a waste. There is a strong desire among integration teams to anticipate organizational needs and build interfaces, data marts, canonical messages, or service-oriented architecture (SOA) services with the needs of the entire enterprise in mind. This is an admirable objective indeed, but the integration teams often don’t have the resources and funding to build the common capability, and so they run into trouble when the first project that could use the capability is saddled with the full cost to build the enterprise-wide solution. This practice is referred to as “the first person on the bus pays for the bus” or a similar metaphor. Business leaders whose project budgets are impacted by this practice hate it. The business units are given a budget to optimize their function and they become frustrated, and even angry, when told that the implementation will take longer and cost more money because it must be built as a generic reusable capability for other future users. This may indeed be the policy of the organization, but in the end it pits the integration team against the project sponsor rather than fostering alignment.

Developing functionality before it is needed is a waste of time and money for the initial development and a waste of the resources that are then required to maintain and support the increased complexity resulting from the unused functionality. The rationale for eliminating this category of waste includes these factors:

• There is an imperfect understanding of requirements for future projects if they are built in advance, so it is possible that the wrong thing will be developed.

• The additional code or data will cost money to develop and maintain without any benefits until the next project comes along to use it.

• The business benefits from the first project are delayed while the supposedly “ideal” solution that will meet future needs is being built.

• The business sponsor for the first project will be dissatisfied with the implementation team (which is a good way to chase away your customer).

We suggest another approach: Build only the features/functions that the first project requires, and do so in such a way that they can be extended in the future when the second project comes along. And if the nature of change is such that the second project requires refactoring of the solution that was developed for the first project, so be it. This approach is more desirable because of the risks and muda of building functionality prematurely.

Organizations would be much better off adding the cost of refactoring to the second project when (and if) it comes along rather than burdening the first project. Under this scenario, the needs of the second project are clear, as are the needs of the first project (since of course it is already in operation), so there is no ambiguity about what to build. Furthermore, the benefits of the first project will be realized sooner, which can in essence help to fund the cost of the second project (silo accounting practices may still make this difficult in many organizations, but nonetheless the advantage is clear). Our bottom-line advice is “Refactor integrations constantly so that they don’t become legacy.”
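
As a hypothetical illustration of this approach, consider a customer-record feed. The sketch below (all field names and structures are invented for illustration) shows a deliberately minimal mapping that satisfies only the first project, written so that the second project can extend it by refactoring once its requirements are actually known.

```python
# Hypothetical example: the first project needs only name and email synchronized.
# Resist the urge to map every field "just in case" a future project needs it.

def map_customer_v1(source_record: dict) -> dict:
    """Map only the fields the first project actually requires."""
    return {
        "customer_id": source_record["id"],
        "full_name": f'{source_record["first_name"]} {source_record["last_name"]}',
        "email": source_record["email"].strip().lower(),
    }

# When (and if) a second project arrives with a concrete requirement, refactor:
# extend the mapping with the newly needed fields rather than carrying
# speculative, unused logic from day one.

def map_customer_v2(source_record: dict) -> dict:
    """Refactored mapping once the second project's needs are known."""
    record = map_customer_v1(source_record)
    record["postal_code"] = source_record["postal_code"]  # new, now-known need
    return record
```

The cost of the refactoring lands on the second project, which is exactly where the uncertainty has been resolved and where the benefit is realized.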

Michael K. Levine adds another perspective to the waste of gold-plating in A Tale of Two Systems when explaining how requirements specifications are like work-in-process inventory and excessive requirements can be a drag on change: “That requirements document is inventory, just as piles of work in process in a factory are inventory. If you have a smaller inventory of ideas to change, change can be positive, instead of a threat. Think of unimplemented tested code, untested code, designs not yet coded, and requirements not yet designed as inventory waste, like computer parts when the industry moves so fast the probability of obsolescence is high.”5

5. Michael K. Levine, A Tale of Two Systems: Lean and Agile Software Development for Business Leaders (CRC Press, 2009), p. 292.

If there really is a solid need within the organization for a generic reusable interface or integration object, the cost of building it should be borne by the integration function (the ICC) rather than the first project team. For example, if a contractor told you that building your house will cost an extra $50,000 and take a few months longer to finish because he needs to develop an automated process for constructing modular cabinets so that all future houses can be built more quickly and at a lower cost, you simply wouldn’t hire that contractor. You would expect this sort of investment to be borne by the contractor in the interests of being faster, cheaper, and better than his/her competitors.

In summary, then, the recommended policy in a Lean Integration organization is to deliver solutions only to meet project needs and to build general-purpose reusable components either by refactoring existing components as new projects come along or by funding the reusable components as separate initiatives not tied to the business project. We will come back to the topic of how to address the “first person on the bus pays for the bus” in Chapter 11 on Financial Management.

Waste 2: Using Middleware like Peanut Butter

Applying middleware technology everywhere is a waste. This may sound like heresy to integration specialists; after all, a host of benefits emerges from implementing an abstraction layer between applications, such as facilitating loose coupling between applications, orchestrating flow-through processes, extending the life of legacy business applications, and many more. No debate on that front. But the reality is that while each middleware layer adds a potential benefit, it also adds a cost, so this is really a cost/benefit trade-off decision.

As integration professionals, we often deride point-to-point integrations as “evil” since they tightly couple components and, if applied without standards, over time will result in an integration hairball. True enough. But for specific high-volume data exchanges with stringent performance requirements, a point-to-point interface may still be the best solution when used as part of a managed integration system. Each middleware layer bears a cost in terms of technology, people development, organizational change, and complexity on an ongoing basis, so abstraction layers should be added only when the benefits outweigh the cost of sustaining them.

Another example of a middleware abstraction layer is canonical models or common data definitions. It is hard to argue against the principle of having a common definition of data across systems with incompatible data models, but nonetheless common data models are not static objects that are created at a point in time; they evolve constantly and need to be maintained. Unless you can justify the incremental staff to maintain the canonical model, don’t add this layer since it will surely become stale and irrelevant within just a few short years, which is yet another example of waste. That said, if you are implementing a metadata strategy along with a Lean Integration strategy, you may not need to add any staff at all since the labor productivity savings from the elimination of muda, along with the appropriate metadata tools, will more than compensate for the labor required to maintain the abstraction layer.

Waste 3: Reinventing the Wheel

Not taking advantage of economies of scale and instead reinventing the wheel is a waste. The reality is that there is a relatively small number of integration patterns that can satisfy the integration needs of even the largest corporations. While the total number of integrations may be large (thousands or tens of thousands), the number of patterns is quite small—probably no more than a handful for 90 percent or more of the data exchanges. If each project team treats the integration development work as a unique effort, the result over time will be thousands of “works of art.”

We know from years of experience in manufacturing lines that cost savings of 15 to 25 percent accrue every time volume doubles6 in a repeatable process. But if you are like Rembrandt producing an original oil painting, the second, third, and hundredth paintings will still take as long as the first. However, our experience in software development tells us that these savings are real, because of two factors: the benefits of the learning curve (the more times you do something, the better you get at it) and visibility into reuse opportunities (the more integrations one does, the more obvious the patterns become).

6. George Stalk, “Time—The Next Source of Competitive Advantage,” Harvard Business Review, no. 4 (July–August 1988).

Waste 4: Unnecessary Complexity

Unnecessary variation in tools and standards is a waste. We have also learned from the world of manufacturing that costs increase 20 to 35 percent every time variation doubles.7 Variation in the integration arena arises from different middleware platforms, development tools, interchange protocols, data formats, interface specifications, and metadata standards, to name a few. In the absence of governance around these and other sources of variation, the variety of integrations could be huge.

7. Ibid.

If we combine the effects of not reinventing the wheel and stopping unnecessary variation, the cost and quality implications are an order of magnitude different. To take a simple example, consider an organization with eight divisions where each develops one integration per year with its own preferred set of tools, techniques, and standards. It is fairly obvious that if they were to combine efforts and have one team build eight integrations, there would be cost savings from a reduced learning curve, reuse of tools, and common components.

Using this simple example and the rule-of-thumb savings, if the cost of building an integration for each of the eight divisions was $10,000, the cost for a central team would be $3,144 or less—a 70 percent reduction! This result comes from doubling volumes three times with savings of 15 percent each time and cutting variation in half three times with a 20 percent savings each time. Real-world case studies have shown that this kind of dramatic improvement in integration development savings is very achievable.
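
The arithmetic behind that figure is easy to verify. A minimal sketch, applying the 15-percent-per-volume-doubling and 20-percent-per-variation-halving rules of thumb quoted above:

```python
# Rule-of-thumb cost model for the eight-division example above.
baseline_cost = 10_000   # cost per integration when each division works alone
volume_doublings = 3     # 1 -> 2 -> 4 -> 8 integrations per year for one team
variation_halvings = 3   # 8 toolsets and standards -> 4 -> 2 -> 1

cost = baseline_cost * (1 - 0.15) ** volume_doublings * (1 - 0.20) ** variation_halvings
print(f"Cost per integration with a central team: ${cost:,.0f}")  # about $3,144
print(f"Reduction versus the baseline: {1 - cost / baseline_cost:.0%}")  # about 69%, in line with the "70 percent" cited above
```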

The basic idea is that a team that builds multiple integrations per year will see more opportunities for reuse and will get much faster simply by doing similar work with similar tools on a repeated basis. A centralized team that is producing higher volumes sees the patterns and can take advantage of them.

Waste 5: Not Planning for Retirement

Everything has a life cycle—people, products, software, even data and ideas. The typical life-cycle phases are birth, growth, change, and death. Traditional ad hoc or project-based integration practices deal effectively with the birth phase but do a poor job supporting the rest. More mature practices such as ICCs typically address the growth phase (in terms of both growth in volume and reuse of common components) as well as the ongoing change phase needed to ensure that integration points don’t disintegrate over time.

The phase that generally receives the least focus is the death phase—that is, determining in advance when and how integration points are no longer needed or adding value and should be eliminated. For example, when does a business no longer need a business intelligence report that is generated daily, weekly, or monthly from the data warehouse? Most of the organizations we have worked with have no systematic way of tracking whether business users are actually reading the reports that are generated or even care about them any longer. One organization that was producing 20,000 reports on a regular basis conducted a detailed survey and eventually reduced the number of reports to fewer than 1,000. Or what about data that is being replicated daily from operational systems to tables in a shared central repository—when is the shared data no longer needed? The Clicks-and-Bricks waste elimination case study (described later in this chapter) demonstrated that 25 percent of that company’s real-time message queue infrastructure was waste.

As a result of our experience in working with dozens of organizations across different industries around the globe, we have developed a rough rule of thumb about the amount of IT infrastructure waste. In any large organization (Global 2000) that has been in business for over 30 years and has not had a systematic enterprise-wide IT architecture with a clear rationalization (simplification) focus, 25 to 50 percent of the IT infrastructure is waste. Granted, this is not a scientific study, but it is based on dozens of real-life cases, and so far the rule of thumb is holding firm. Eliminating this waste is not easy because of the high degree of complexity and interdependency between many of the components, which of course is exactly why we wrote this book—to provide the structures, tools, and disciplines to systematically eliminate the waste and keep it from coming back.

Fortunately, one area of retirement planning that has been growing in many organizations in recent years is information life-cycle management (ILM). The proliferation of data and the massive size of databases have forced organizations to begin to establish formal policies and processes for the archiving or destruction of everything from emails, to Web site statistics, to outdated and inactive customer records. ILM is a relatively new class of middleware that helps organizations efficiently archive data that is not needed for day-to-day operations while keeping it accessible should the need arise, and then systematically destroy it when it no longer has value or could even become a liability.

For example, we spoke to the research director of a life sciences research organization who said the organization doesn’t keep the data generated from its tests once a particular research project is over. The data for a single sample from a genome sequencer or an electron microscope can be hundreds of gigabytes. The director said that while there is value in keeping the data for potential future research efforts, there is also a cost to maintain it, and that overall it costs less to rerun the experiment than to keep the data indefinitely.

Planning for the retirement of integration points and integration infrastructure is an essential component of a Lean Integration strategy. Subsequent chapters provide further specifics on how to go about creating business cases to justify retiring legacy integrations and to establish effective metrics and monitoring techniques to determine when an integration point is no longer being used or valued.
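
As a simple illustration of the kind of monitoring this implies, the sketch below flags integration points that have not been used within an agreed-upon inactivity window. It assumes your platform records a last-used timestamp for each integration point; the data, names, and threshold are hypothetical, not a specific product’s API.

```python
from datetime import datetime, timedelta

# Hypothetical usage log: integration point -> last time its output was consumed.
last_used = {
    "daily_sales_report": datetime(2010, 1, 4),
    "customer_sync_feed": datetime(2009, 12, 20),
    "legacy_inventory_extract": datetime(2008, 6, 15),
}

RETIREMENT_THRESHOLD = timedelta(days=180)  # agreed-upon inactivity window
as_of = datetime(2010, 1, 15)

candidates = [name for name, timestamp in last_used.items()
              if as_of - timestamp > RETIREMENT_THRESHOLD]
print("Retirement candidates:", candidates)  # ['legacy_inventory_extract']
```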

Focus on the Integration Value Chain

The Big Bank case highlights an example of mapping data flows in the delivery of integrated data in an operational environment. Another category of mapping the value chain relates to how the integration team (the ICC or Integration Factory) works with other functional teams in an enterprise to deliver new integrated solutions.

We tend to think of the ICC as the “prime contractor” responsible for delivering a finished product to the customer and all the other dependent functional teams as subcontractors. In the Clicks-and-Bricks waste elimination case study, the ICC was responsible and accountable for the end-to-end operation of the MQ infrastructure, yet the ICC could do so only by relying on and working with other groups. For example, a separate team of systems engineers was responsible for configuring the actual computer systems on which the queue objects ran; it was therefore essential that the ICC work closely with the engineering team to ensure that the MQ objects operated as expected on the IBM mainframe, UNIX server, AIX server, and Wintel server in all the different versions of operating systems and a wide variety of software and security configurations. The ICC also needed to work with production operations to schedule changes, with network security to maintain security keys, and with business analysts to understand future business volume projections in order to perform capacity planning.

In a nutshell, the integration team must

• Know who their customers are

• Offer clearly defined services that provide value to the customers

• Maintain integrated processes that deliver the value-added services

• Establish and manage internal and external subcontractor relationships with other groups that are (usually) invisible to the customer
