CHAPTER 2
The hidden levers driving waste

The overruns and waste described in the last section are real. I have watched some of these projects. This is not embezzlement or accounting fraud. This is a lot of well-meaning people working very hard to complete difficult tasks.

What we want to explore in this section is what people are inadvertently doing that is running up the costs. In Chapter 4, we look at some entrenched beliefs that keep this diseconomy locked in, but first we need to understand what is going on.

How to think about information systems resources

Costs in information systems do not behave as they do in more tangible industries, and attempts to apply analogies from one area to the other are prone to misapplication.

Traditional industries have been viewed through a lens based on the nature and relative amounts of the resource types they consume. The most common are:

  • Material
  • Labor
  • Capital
  • Energy

Traditional agriculture was labor- and capital-intensive (the capital being the land). Modern agriculture is capital- and energy-intensive: the amount of labor per unit of output has plummeted, with machinery and fuel taking its place.

The information systems industry has a slightly different set of drivers.

The cost of an information system, ultimately, is the resources consumed to create and run it. There are many intermediary players in this industry, and many different pricing models, but ultimately the resources consumed must be paid for. If a vendor of a subscription-based application manages to charge significantly more for an annual subscription than the resources they consume, they will make outsized profits for a while. But those margins will attract new players to the niche, and eventually competition will drive the subscription price down toward the resource costs. For this reason we focus our analysis on the resource costs, which over the long run are the real drivers, even though some short-term profit-taking will occur.

Let’s look at the main categories of resource consumption and how they behave economically. The costs are incurred in four main categories:

  • Computer hardware
  • Networking costs
  • Software licenses
  • Professional services

You can likely think of many more categories, but they are subsumed by these. For instance, Software as a Service (SaaS) is a bundling of hardware, networking, and software licensing costs that are charged in a different manner but underneath they consume those resources. We shall return to some of the common bundlings after we describe these four in a bit more detail.

Computer hardware

At one point, CPUs and storage devices dominated the cost of an information system. In the 1950s a mainframe computer cost tens of millions of dollars. There were no networking costs, and what little software there was came with the machine. A small team of analysts took care of the design and programming of the systems.

Since then, the cost of a transistor has been falling at an impressive rate, dropping well over a billion-fold over 40 years.11

Six decades of Moore’s law have seen the capacity of our hardware explode, while the cost has plummeted to near zero. One hardware component, disk drive space, dropped 100,000-fold over 29 years.12

The cost of storage has come down even more impressively. In 30 years, the cost of one megabyte of storage has gone from $250,000 to 3 cents.

Many of us making decisions about computer systems learned our basic design tradeoffs and habits at a time when the cost of storage was a million times higher than it is now. Many of the architectures we base our systems on were likewise built at a time when hardware costs ruled the world.

We marvel that the processing power of the smartphone in your pocket exceeds that of the mission control computers that sent the first men to the moon. Yet the reality is even more extreme than that. The computer in your pocket costs several hundred dollars. Commodity chips, such as those that power the myriad devices in our cars and appliances, can be acquired for as little as one cent, and with 50,000 transistors running at 20 million instructions per second, they are roughly equivalent to the ground control computers of the Apollo Moon missions.

The total amount spent on computer hardware per year is still a huge number ($1 trillion) but the amount of computer hardware needed for any given application is no longer the significant part of the equation.

We still need computer hardware to run our systems, but the cost is hardly a factor anymore, and we will need to rethink how we trade hardware costs for other costs.

Networking costs

The computer network, which we use to transfer data, voice, and video, is of course largely hardware-based as well. Most people separate network and communication costs from processing and storage hardware expenses, at least in part because the network became an external utility.

The proprietary data networks of the 1960s and 1970s, such as SNA from IBM and DECnet from Digital Equipment, had all the same disadvantages as today’s non-interchangeable proprietary “lock-in” platforms. With the advent of open standards such as Ethernet and TCP/IP, new entrants could compete at any layer in the communication stack, and performance and efficiency blossomed.

Over thirty years we’ve seen about a 100,000-fold improvement in internet throughput and speed.13

For many years, processing and storage were relatively cheap, and communicating between computers was relatively expensive and slow. We developed strategies that accommodated these tradeoffs. We updated databases on a weekly or monthly basis with small streams of transactions that shuttled the changes from one system to another. We often had dedicated communication links to allow piping data from system to system in uninterrupted bursts.

Message-based architectures, with Service-Oriented Architecture being the most mature, were developed to organize the transfer of small transactions from the system that created a change to the others that needed to know about it. We developed the idea that data could be packaged into messages and sent from queue to queue. Industries have become organized around messaging standards such as HL7 in healthcare, ACORD in insurance, SIP in telecommunications, and EDI in retail.
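
To make the queue-to-queue idea concrete, here is a minimal sketch in Python. The in-process queue and the message fields are illustrative stand-ins only; a real deployment would use messaging middleware and one of the industry standards just mentioned.

    # A minimal sketch of the message-based idea: the system that creates a
    # change packages it as a message and puts it on a queue; a downstream
    # system consumes the queue and reacts to the change.
    import json
    import queue

    order_events = queue.Queue()

    # Producer: the system of record announces a change (fields are hypothetical).
    change = {"type": "order_shipped", "order_id": "A-1001", "carrier": "rail"}
    order_events.put(json.dumps(change))

    # Consumer: a downstream system (billing, warehouse, analytics) reacts.
    while not order_events.empty():
        message = json.loads(order_events.get())
        if message["type"] == "order_shipped":
            print(f"billing: invoice order {message['order_id']}")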

Then once the World Wide Web took hold, what had seemed unthinkable just a few decades prior became commonplace. The idea that a transaction or a file might be exploded into dozens of “packets” sent along different routes, usually forwarded between dozens of network nodes, each owned and controlled by a different organization, and arriving at its destination a few hundred milliseconds after it was sent, went from fantasy to reality.

The revenue models of the World Wide Web rewarded suppliers that could increase bandwidth and decrease latency. In the 1980s, Nicholas Negroponte of the MIT Media Lab observed the evolution of the telephone and television marketplaces. In an almost comical historical accident, each marketplace had selected the delivery technology that would have much better suited the other: telephony was using a delivery technology really better suited to television, and vice versa.

The telephone was our network connection to the home and office. In the one hundred years leading up to the 1980s, AT&T had run a string of copper to virtually every establishment in the United States, and other telecoms had done the same overseas. Meanwhile, radio and television, being later to the game, had the option of using what at the time were scarce radio frequencies.

So we had a telephone network tethered to the land by the copper wires that carried its signals, at a time when it was becoming apparent that the near future would want telecommunication to be portable (as opposed to being tied to your home or office). At the same time, televisions, which were still large and not portable, would benefit from the higher bandwidth of fixed cable, and had little to lose in giving up portability.

The “Negroponte Switch” was the observation that we had all the network capacity we needed–it was just poorly assigned. If we ran TV over the phone lines and gave the phone companies the broadcast TV airwaves, everyone would be happy. Nevertheless, logistically, this type of “switch” just couldn’t happen. Not only was there no way to seize all those assets and reallocate them, the number of devices that would have to be swapped out simultaneously and repurposed was mind-boggling. So for some time, the Negroponte Switch was just a paradox. We knew what was needed, but there was no route to get us there.

There is no evidence that Negroponte foresaw the developments that would allow his switch to become reality. The cable TV industry was born around this time, and rather than repurpose the phone network that AT&T had spent a century building, the wildcatters of the early cable industry managed to lay a cable to everyone’s home in a matter of a decade or two.

Meanwhile, with the invention of the cell phone, the United States government was convinced to free up previously unavailable spectrum that had been set aside for military use. The cellular phone industry planted cell towers every few miles across the country. Where there had been two major communication networks, now there were four. In a bit more than a decade, the Negroponte Switch had been effected. Within a couple of decades the copper wires to “land line” phones and the broadcast TV airwaves would both become obsolete.

Networks and communication costs are generally metered in one way or another. We pay for throughput (how many bits we ship from one location to another) and often a premium for speed. There are many billing arrangements. For instance, cell phone carriers generally charge a fixed amount up to a particular throughput (in the case of a cell phone, download and upload traffic), but we are still essentially paying for the movement of a given amount of data.

For many information systems, the concept of “latency” in communication is more important than communication capacity. A low-latency network is one where you can get a response very rapidly. For a long time, latency in a computer application was managed by having computer terminals talk to a tuned database at the other end. As long as the database could serve up a response in less than a couple of seconds, the network rarely added more than half a second, and the user had an acceptable experience.

In the 1980s and early 1990s, data communication on common carriers was comically slow by today’s standards. A dial-up acoustic modem would deliver 300 to 2,400 bits per second. With start and stop bits, most characters took 10 bits on the wire, so this amounted to 30 to 240 characters per second, scarcely faster than a skilled typist. It is no wonder that systems and architectures built in this era assumed relatively fast connections to local databases, which were synced up asynchronously.
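
A minimal sketch of that arithmetic, assuming roughly 10 bits per character on the wire:

    # Line speed in bits per second translates directly into characters per
    # second once you account for the ~10 bits each character occupied
    # (8 data bits plus start/stop framing).
    BITS_PER_CHARACTER = 10

    for bits_per_second in (300, 2400):   # the acoustic-modem range cited above
        chars_per_second = bits_per_second / BITS_PER_CHARACTER
        hours_per_megabyte = 1_000_000 / chars_per_second / 3600
        print(f"{bits_per_second} bps = {chars_per_second:.0f} characters/second, "
              f"about {hours_per_megabyte:.1f} hours to move one megabyte")
    # 300 bps -> 30 chars/sec (~9.3 hours/MB); 2400 bps -> 240 chars/sec (~1.2 hours/MB)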

We can now rapidly and economically transmit volumes of data that, a few decades ago, would have been transferred faster by loading tapes onto a 747 and flying them to their destination.

Software licenses

Software is intellectual property, with a high one-time capital cost but a near zero cost to replicate. In the absence of a license fee, the cost to run software is almost entirely the cost of the hardware and network resources it consumes, which as we’ve just discussed are asymptotically approaching zero.

The cost to create software is almost entirely professional services. Because of the near zero cost of goods sold, software publishers have come up with many licensing strategies to recoup their outlay. The ability of a software vendor to capture premium prices is based primarily on whether there is a perceived comparable competitor, and on the switching cost to get to that competitor.

The cost to procure software that already exists depends on the licensing model, which is set by the owner / creator of the software.

The main licensing models that we will consider here are:

  • One-time (capitalized) cost – a firm can commission a software development firm to build software to its requirements. Once it is completed, the sponsor can do what they wish with it. Historically this labor was expensed in the year incurred, but because of the costs and timeframes involved, most companies now treat software development projects as capital expenses and put them on their balance sheet.
  • Premise-based / server-based costs – if a software firm builds a software package for resale, its intent is to sell it to many firms and thereby recoup more than the cost it incurred, charging as close as possible to the value it provides. In general, a large firm will be able to get more value out of a software system than a small firm. One way to extract more fees from a larger firm is per-user pricing (next bullet), but many firms build products for customers of a given size and price them accordingly. QuickBooks has features and price points appropriate for very small firms, and SAP has features and prices appropriate for very large firms. These prices are usually adjusted using proxy measures such as the number of servers or the number of cores.
  • User- or usage-based – more and more software firms are pricing by some sort of meter: the number of “named users” (people with unique logins), “concurrent users” (the number of users using the product at the same time), or transactions (the number of interactions users have with the system). As software moves to the cloud, more license models are becoming usage-based.
  • Maintenance and support – software has no “wear items” like the blades on a front-end loader or the teeth of a chain saw. It has no parts that degrade with use. And yet “maintenance” is a big part of software licensing, typically 15-20% of the original capital cost annually, which means that a system that survives 10 years has incurred one and a half to two times as much in maintenance as it cost to acquire (see the short sketch after this list). “Maintenance” is a gamble that the vendor will keep pace with changes in the software environment, and with changes to the business and regulatory environment. You are betting that the vendor will have a version that supports the operating system upgrade you will be installing. You are betting that they will supply required regulatory reports as laws change. “Support” (often bundled with maintenance) is a retainer that provides for expert help on call in the event the system fails in production.
  • Open source – more and more software is licensed under an open source license, which means that consumers of the software do not pay a license fee. The creation of the software consumed labor, but not in the conventional way. It came from developers who contribute in their spare time, or from companies who employ developers and contribute their output to an open source project. The companies are often motivated either to get more developers working on the product (thereby improving its quality) or to gain recognition for providing useful software.
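
As promised above, here is a minimal sketch of the maintenance arithmetic, using a hypothetical one-time license cost and the 15-20% annual rates cited in the maintenance bullet:

    # Cumulative maintenance fees relative to a hypothetical initial license cost.
    initial_license = 1_000_000  # hypothetical one-time license cost
    years = 10

    for annual_rate in (0.15, 0.20):
        cumulative = initial_license * annual_rate * years
        print(f"{annual_rate:.0%} for {years} years: "
              f"{cumulative / initial_license:.1f}x the original license cost")
    # 15% for 10 years: 1.5x; 20% for 10 years: 2.0x the original license cost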

Software, and therefore software licenses, exists at many layers in a firm’s information infrastructure. There are operating system licenses for servers as well as client devices. There is a vast number of software products at the infrastructure level that help processes work together, including load balancing and integration software. There is software to manage databases, workflow, messaging, and the like. Finally, there is “application software”: code that has been written to solve a specific business problem.

While many categories of software have become commoditized and have dropped dramatically in price, many other categories have moved in the opposite direction. The number of applications that a firm implements and manages has grown dramatically in the last few decades.

Software as a Service has already eclipsed new on-premise ERP implementations in some segments.14

As we will discuss later in this book, the attitude that business problems can be addressed by buying or building application systems is at one level so obvious as to not merit mentioning, and at the same time is the primary root cause of the runaway diseconomy of most large enterprises’ information systems.

Professional services

This category covers all the technical specialists that are employed to build, maintain, or operate software systems. It includes a company’s internal staff as well as consultants or outsourced contractors.

Gartner has estimated this to be a $900 billion industry, which employs 9 million professionals.15

The unit cost of professional services is the cost per hour to retain them–in other words, their salaries. The unit cost has remained relatively constant (adjusted for inflation) since the dawn of the computer age. Keep in mind this is at a time when most of the other cost components have dropped a million-fold or more.

What we would have hoped for is that productivity would have climbed over this period of technological advancement.

Pundits often scoff at the transportation industry for not keeping pace. They say that if the automotive industry had kept pace with the computer industry, a Rolls-Royce would now cost 2 cents to purchase and get 200,000 miles to the gallon. This snide comment misses two important observations. There is a great disparity in the minimum size of each device. The original transistors were about the size of a grain of rice. Because there is no logical minimum size for a bit of information, transistors could keep shrinking–and they did.

Each reduction in size comes with a concomitant reduction in the materials needed. An automobile must carry passengers (typically two or four), and therefore its limiting factor is a contained space big enough for a family. The family will weigh 500+ pounds, so even a gossamer vehicle is going to weigh close to half a ton. If the transportation industry had kept pace, we could buy a Rolls-Royce for pocket change, but it would fit on the head of a pin.

If we applied the analogy to the application software industry, we would say that we should be able to design and build a major application over a weekend.

To really humble ourselves, let’s compare labor productivity to something more comparable: stevedoring (loading and unloading ship cargo).

At the time of the birth of the computer industry, most cargo, and therefore most commerce, was carried by ship. An army of stevedores employed at each dock loaded and unloaded the cargo from each vessel and transferred it to its final mode of transportation (typically rail or truck).

In 1950 in New York, it took 1.9 person hours to handle a ton of cargo.16

By 2010, this had dropped to 5 minutes a ton.17 This was achieved primarily through the genius of containerization and cranes, along with a bit of software to manage scheduling and sequencing. It is a more than 20-fold improvement in labor productivity.

One would think that laborers in the information processing industry might have achieved similar levels of productivity enhancement. We are the beneficiaries of over five decades of compounding improvements in all the factors that go into information systems. Hardware and networking costs have plummeted, there is an amazing selection of free or very cheap software to choose from, and we have been creating and applying improved methods for that entire time.

Researchers claim, correctly, that information system professionals’ productivity is hard to measure. This is true, but let’s look at some macro statistics:

  • Labor Unit Costs have been mostly flat, except for the trend toward offshoring, which has had a moderating effect
  • Adjusted for inflation, professional salaries have been relatively flat for the last 50 years18
  • Overall number of hours spent by information processing professionals has grown considerably over the last five decades

Are we processing that much more information? Yes, we have big data, so at one level we are running a lot more information through our pipes. But for many companies, the core number of transactions has not grown so rapidly. If you are a manufacturing company, you could easily be building the same number of widgets. You may capture more data points along the way, but this is all in service of doing a better job of building those widgets.

Hardware, networking, software licensing, and professional services are the four fundamental factors of production in software implementation. How they interact and how they have changed over time has shaped the forces that are driving the consumption of these resources.

How information system costs really behave

Now that we have a shared understanding of how the component costs have changed over the last several decades, let’s explore the more complex contributors to the runaway and widespread waste.

While the cost to deploy and host a system has dropped dramatically in the last several decades, the cost to implement and run a major application has gone up rather than down. We owe it to ourselves to find out why.

So where is all that money going, and why?

Our contention is that the real cost drivers are the following, which have not received nearly enough attention:

  • Complexity
  • Dependency
  • Integration
  • People change management
  • Cost of application functionality change

Complexity

A complex system is one made of many interrelated parts. The level of complexity is driven by the number of parts and, especially, by the nature of their interrelationships. There are formal ways to measure the complexity of computer programs, such as the McCabe cyclomatic complexity measure or the Halstead metrics. Cyclomatic complexity examines the number of potential paths through a particular piece of code. The Halstead metrics focus on the cognitive load of attempting to understand the software, counting things like the vocabulary size and the number of operators and operands.
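
As an illustration, here is a minimal sketch of a McCabe-style count for a Python function. It uses one common approximation, one plus the number of decision points, and the sample function is hypothetical; real measurement tools refine this in various ways.

    # Count branch points (if/elif, loops, exception handlers, boolean
    # operators, conditional expressions) in a piece of code, then add one.
    import ast

    def cyclomatic_complexity(source: str) -> int:
        decision_nodes = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                          ast.BoolOp, ast.IfExp)
        tree = ast.parse(source)
        decisions = sum(isinstance(node, decision_nodes) for node in ast.walk(tree))
        return decisions + 1

    sample = """
    def classify(order):
        if order.total > 1000 and order.customer.is_new:
            return "review"
        for line in order.lines:
            if line.quantity < 0:
                return "reject"
        return "accept"
    """

    print(cyclomatic_complexity(sample))  # 5: four decision points plus one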

While these metrics can gauge the complexity of an individual program, the key issue is not the complexity of individual programs but the complexity of having, and relying on, literally thousands of individual programs, many of which interact with each other in hard-to-predict ways.

Dependency

One aspect of complexity that amplifies the problem is dependency. A component B is dependent on a component A if changes to A can affect B.

The following diagram contains a greatly abridged portion of a dependency analysis we did for a client. At the top of the diagram are applications, followed by databases, languages, infrastructure, operating systems, Application Programming Interfaces (APIs), and hardware–any of which might form the basis for dependency.

For instance, if a program calls a subroutine, and someone changes that subroutine, there is a very good chance that the calling program will be adversely impacted.

A system with 1 million independent components is not unthinkably complex; it is just a collection of parts that do not interact. The number of bottles in the following picture does not make the system complex.

A system with 1000 components that are mutually and intimately interdependent can be impossibly complex.

The issue with many enterprise systems is that the stewards of the system are often unaware of the nature of the dependencies. When you are unaware of the dependencies, the only conservative options are to make no changes at all, or to examine and test every possibly affected component.

It is rarely the obvious dependencies between programs that make dependency so pernicious in enterprise systems; it is the dependencies that cross levels. For instance, there are dependencies between application software and infrastructure software, between code and metadata, and between metadata and data.
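
A minimal sketch of what a dependency analysis does with this information: given a map of which components rely on which, it computes everything transitively affected by a change. The component names here are hypothetical.

    # Transitive impact analysis over a small dependency graph.
    # "depends_on" maps each component to the components it relies on, so a
    # change to X potentially affects everything that reaches X through it.
    from collections import defaultdict

    depends_on = {
        "claims_app":     ["claims_db", "auth_api"],
        "claims_db":      ["db_server_os"],
        "reporting_etl":  ["claims_db"],
        "data_warehouse": ["reporting_etl"],
        "auth_api":       ["db_server_os"],
    }

    # Invert the graph: for each component, who depends on it directly?
    dependents = defaultdict(set)
    for component, prerequisites in depends_on.items():
        for prerequisite in prerequisites:
            dependents[prerequisite].add(component)

    def impacted_by(changed: str) -> set:
        """Return every component transitively affected by changing `changed`."""
        affected, frontier = set(), [changed]
        while frontier:
            current = frontier.pop()
            for dependent in dependents[current]:
                if dependent not in affected:
                    affected.add(dependent)
                    frontier.append(dependent)
        return affected

    print(sorted(impacted_by("claims_db")))
    # ['claims_app', 'data_warehouse', 'reporting_etl']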

As we will explore later in this book, these dependencies tend to keep legacy systems locked in.

Integration

Integration is the act of getting subsystems that were independently developed to interoperate. It is generally viewed as a high value-added activity, mainly because we haven’t recognized that it is the independent development of things that should have been built to work together that eventually creates the need for integration.

The cost of integration should be viewed the same way the manufacturing industry views rework: as waste. The rework itself isn’t the waste (it is always better to fix something before you ship it), but rework is an indicator of the extent of the waste in the manufacturing process.

We should view the entire systems integration industry as an indication of the waste that we have engineered into our application implementation practices.

People change management

Large systems projects have large “change management” programs. This is the activity of getting a lot of people to simultaneously change the way they work in order to implement a new system, or often to implement an improved workflow.

In large systems projects this can consume as much as a third of the project budget. The surface explanation is that the software package already exists and can be acquired for a relatively low price; changing the software is very expensive, so the preferred approach is to change the organization to accommodate the software. Executives generally claim that this is adopting “best practices,” but often that is just an excuse.

It is hard to change people’s processes because we so rarely understand them. In a mature organization, most of the workforce has learned their function from the systems they use. They have learned the vocabulary from the terms presented in the user interface. They have adapted their workflow to accommodate that which is imposed by the system. They have even built workarounds to compensate for the shortcomings of the system.

Much of this is tacit. People have internalized this and it is how they do their job. Even small changes to workflow or terminology are difficult to master. Wholesale changes are especially hard.

But it needn’t be this way. This is a product of the new system being both inflexible and arbitrary. The inflexibility will be discussed in the next section. The arbitrariness comes from the system having been designed and built for another context and use. It is possible that it is a good match for the adopting organization, but often it is not.

Cost of application functionality change

The single most important metric, and one that virtually no one measures, is the cost of making a change to an application system.

People estimate change projects, but they generally do this on a project-by-project basis. As such, they are unable to compare the cost of a change in one system to the cost in another. In addition, they have no way of knowing where they stand with their competitors.

The reasons for not tracking this metric are many, but I suspect the two biggest contributors are:

  • There is no common denominator between changes
  • There is no linear relationship between the change and the cost

No common denominator between changes

Not all changes are equal in complexity. Typically, the cost of changing the layout of existing fields on a screen or a form is far less than the cost of adding fields to a screen or a form. Moreover, adding fields to a screen or a form is typically far easier than adding columns to a database, which in turn is easier than a change to the structure of the database.

The other reason for reluctance is that most change requests are bundles of individual changes. However, this shouldn’t be an excuse to avoid doing what is required. By retreating from the task of predicting and measuring the cost of change, a firm is unlikely to see where its real problems lie, and this blinds it to the real source of its legacy diseconomies.

We believe there is a small set of change types, and that a change request can readily be decomposed into a certain number of each type, as sketched below.
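
A minimal sketch of the decomposition idea, with hypothetical change types and unit costs; the point is the bookkeeping, not the particular numbers.

    # Track estimated effort per change request by decomposing it into typed
    # units, each with a baseline cost, scaled by the complexity of the
    # system being changed (see the next subsection).
    from dataclasses import dataclass

    BASELINE_HOURS = {                     # hypothetical unit costs per change type
        "rearrange_screen_fields":   4,
        "add_screen_field":          16,
        "add_database_column":       40,
        "change_database_structure": 120,
    }

    @dataclass
    class ChangeRequest:
        system: str
        counts: dict  # change type -> how many units of that type

        def estimated_hours(self, complexity_multiplier: float = 1.0) -> float:
            base = sum(BASELINE_HOURS[t] * n for t, n in self.counts.items())
            return base * complexity_multiplier

    request = ChangeRequest(
        system="claims",
        counts={"add_screen_field": 3, "add_database_column": 1},
    )
    print(request.estimated_hours(complexity_multiplier=2.5))  # 220.0 hours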

No linear relationship between the change and the cost

The other reason people don’t measure cost of change is that they believe it will be futile. They believe there isn’t a relationship between the change and the effort. They believe this because their experience tells them this is so.

But what almost everyone has failed to factor in, and what makes this whole problem tractable, is that the cost of a change is not proportional to the complexity of the change. The cost of a change is proportional to the complexity of the thing being changed.

Adding a field to a recently developed agile system is fairly easy. Adding a field to an ERP system could be an entire career. The biggest multiplier effect comes when a change affects not just a complex system, but other dependent components and systems.

We worked with a workers’ compensation insurance company. An injured worker sued, and won, a case establishing that the insurer had to pay not just for lost wages, but also for some percentage of the health insurance premium that the employer had been picking up while the worker was employed. The court case set a new precedent, and the company was obligated to comply, not just for the immediate case but for all similar cases.

On the surface, this was a reasonably small change to make to their systems. They would need some screens to ask the injured worker how much medical insurance they were receiving through their work. They would need a few more screens to verify this information with the employer. They would also need an algorithm to determine how much of this would be added to the claimant’s benefits.

Rather than the simple set of changes I just described, the actual change cost over a million dollars and took the better part of a year to implement. Were these people incompetent? No, anything but. But the systems environment was far more complex than anyone would have imagined. In the first place, there wasn’t one system. There were systems to manage self-insured companies. There was the standard Workers Compensation claims system. There was a pension-based claims system. There were interfaces between these systems and dozens of auxiliary systems, many of which were affected. The data warehouse was affected, as were the ETL (Extract, Transform, and Load) processes that fed it.

This drove home the idea that the cost of change is not driven by the complexity of the change, but by the complexity of the thing being changed.

It is our contention that a firm that tracked its changes this way would rapidly become wise. Its wisdom would lie in understanding which characteristics of its existing systems lead to a high cost of change. Once it knew this, it could move with much more confidence toward systematically reducing its legacy burden.

Summary

In any other industry, reducing the cost of your key inputs a million-fold would reduce the cost of the end product significantly. That the cost of implementing enterprise applications keeps going up should give us pause and make us wonder whether we are subtly sabotaging ourselves.

We’ve been struck by this contradiction for two decades. Over that time we have examined several hundred projects, some unmitigated disasters, some marginally successful. We’ve done root cause analysis, looking for what is behind this situation. Most of the bromides that have been floated to fix these problems (better project management, better methodologies, better tools, better technologies) are at best tangential and at worst red herrings. We will examine many of these in Chapter 4, but we will put forward our contention for the primary drivers of the current state of affairs: the application-centric mindset, coupled with an attraction to unnecessary complexity.
