Chapter 1

Why Simplicity?

"Simplicity is the ultimate sophistication."

Leonardo da Vinci
1.1. Solving conflicting requirements

Information systems (ISs) are now ubiquitous in nearly all large companies and organizations. They provide a permanently available online store to customers. They automate an ever-increasing proportion of business processes and tasks, thus contributing to the rationalization effort and cost reduction required by the globalization of competition. Senior executives use ISs to perform business activity monitoring that allows them to react quickly in fast-moving markets, where reducing the time to market is more important than ever. ISs have thus truly become an essential tool for sound decision-making as well as for selling or providing goods and services.

We might naively think that such a strategic position would logically call for putting maximal effort into designing robust and perennial systems. However, as most readers of this book will know from experience, such is hardly ever the case. Unlike road networks or buildings, most ISs are not really built or designed to last. Rather, they grow much more like living organisms, responding to a set of fluctuating and contradictory forces while trying to adapt in an open environment. A common situation is one in which the number of users grows, both inside (employees and IT personnel) and outside (customers) the company, while at the same time those same users all become more demanding. They expect more speed, more reliability, more flexibility, and a better user experience, all of these simultaneously.

The most acute conflict between these expectations is probably less between speed and reliability than between flexibility and reliability. Speed could certainly be achieved, at least in principle, by using mere brute force, which means by allotting sufficient technological and human resources to designing and operating the IS. Flexibility, on the other hand, could probably not be achieved even if we had an infinite amount of resources available. The fact that brute force will not do is a hint that what we are facing here is a deeper issue than achieving mere performance. More flexibility typically involves meeting unclear and fluctuating user requirements. Often it also means providing improved customization to all stakeholders. Agility and fast turnaround are thus the key requirements here. Building reliability, on the other hand, requires a lengthy design phase, deep understanding of the interdependence of subsystems, performing many tests, and gathering extensive feedback about the system's behavior. Building reliability means building human understanding, which is in essence a slow process.

At least two other factors often contribute to make the situation even worse. First, there is the succession of technological hypes for such things as “EAI”, “SOA”, “EJB”, “MDM”, or any other acronym you might have heard floating around in recent years. This succession of technologies progressively generates uncontrolled complexity in the IS. Second, under such difficult circumstances, some key employees with technical or business skills might simply want to quit and look for a better working environment. Now sum up all the previously mentioned forces that shape an IS: the need for flexibility, the multifaceted techno-hype, and perhaps a high turnover. The result, quite soon, is an organizational and technological nightmare that is probably best described as chaos! As physicists tell us, a chaotic system is one whose behavior is unpredictable. This is the exact opposite of why the IS was built in the first place. In such near-chaotic situations, nobody has a clear picture of what the system is really doing, what the information feeds contain, how the data are structured, or which processes are running on which hardware. Not surprisingly, nobody wants to assume responsibility for making any decisions or changes. Incidentally, it is not by chance that most system architecture endeavors start by mapping the existing system: nobody really knows what the system is made of! Does this sound familiar?

This apparently uncontrollable increase in the entropy of computing systems is by no means new. The recent need for opening older systems to the web, and the plethora of technologies that claim to be optimal in this respect, only exacerbated the existing tendency for computing systems to grow out of control. For nearly half a century, however, software languages, architecture principles, and development processes have been designed to solve this apparent contradiction of building computing systems that are both maintainable, meaning well-structured and understandable by human minds, and, at the same time, flexible enough to accommodate changing requirements. Let us briefly review some of these here.

On the software engineering side, object-oriented programming (OOP) was probably one of the most significant such attempts. In non-technical terms, what OOP in principle does is to allow constructing a larger system from smaller ones by progressive and controlled aggregation. Traditional procedural languages were notoriously bad at achieving such a goal and OOP was, no doubt, a major breakthrough.
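To give the flavor of this idea of progressive and controlled aggregation, here is a small illustrative sketch of our own (the class names and the ordering example are purely hypothetical, not taken from this book): a larger object is assembled from smaller, self-contained ones, each hiding its own logic.

```python
from dataclasses import dataclass, field

# Toy illustration: a larger system (Order) is built by aggregating
# smaller, self-contained parts (LineItem), each hiding its own logic.

@dataclass
class LineItem:
    name: str
    unit_price: float
    quantity: int

    def total(self) -> float:
        return self.unit_price * self.quantity

@dataclass
class Order:
    items: list = field(default_factory=list)

    def add(self, item: LineItem) -> None:
        self.items.append(item)

    def total(self) -> float:
        # The Order delegates to its parts instead of knowing their internals.
        return sum(item.total() for item in self.items)

order = Order()
order.add(LineItem("widget", 2.5, 4))
order.add(LineItem("gadget", 10.0, 1))
print(order.total())  # 20.0
```

The point is not the banking-free arithmetic but the shape: each object can be understood, tested, and replaced in isolation, which procedural code made much harder.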

Architectural principles were also proposed, with the aim of organizing and decoupling as much as possible the various processing layers. They all involve the idea of using components, which are reusable pieces of software that should be as autonomous and decoupled from the others as possible. The best known example here is probably the three-tier architecture where components in charge of the presentation logic are clearly separated from those in charge of implementing the business rules, which are in turn decoupled from those responsible for recording the data in permanent storage.
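As an illustrative sketch of this separation (the tier classes and the account example are hypothetical names of our own, not taken from this book), the following toy code keeps each tier dependent only on the tier directly below it:

```python
# Toy three-tier sketch: each tier talks only to the tier directly below
# it, so presentation, business rules, and storage can each be replaced
# independently of the others.

class DataTier:
    """Persistence: records data, knows nothing about business rules."""
    def __init__(self):
        self._accounts = {}

    def save(self, account_id: str, balance: float) -> None:
        self._accounts[account_id] = balance

    def load(self, account_id: str) -> float:
        return self._accounts.get(account_id, 0.0)

class BusinessTier:
    """Business rules: validates operations, knows nothing about display."""
    def __init__(self, data: DataTier):
        self._data = data

    def deposit(self, account_id: str, amount: float) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._data.save(account_id, self._data.load(account_id) + amount)

    def balance(self, account_id: str) -> float:
        return self._data.load(account_id)

class PresentationTier:
    """Presentation: formats results, holds no business logic."""
    def __init__(self, logic: BusinessTier):
        self._logic = logic

    def show_balance(self, account_id: str) -> str:
        return f"Account {account_id}: {self._logic.balance(account_id):.2f}"

data = DataTier()
logic = BusinessTier(data)
ui = PresentationTier(logic)
logic.deposit("A1", 50.0)
print(ui.show_balance("A1"))  # Account A1: 50.00
```

Swapping the in-memory `DataTier` for a real database, or the text-based `PresentationTier` for a web page, would leave the business rules untouched, which is precisely the decoupling these architectures aim at.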

More recently, we saw the advent of the so-called service-oriented architecture (SOA), motivated by the need for business-process flexibility and reusing legacy components. SOA proposes a component architecture, not just in terms of the software architecture for one application, but for the whole IS.

Finally, iterative engineering processes were designed, such as extreme programming or Lean Software Development, to provide controlled methods for dealing with unclear and quickly changing user requirements.

Each of these topics will be treated in depth in later chapters. For now, let us note that this continuous struggle explains why, during the early years of ISs, management was mostly driven by technological innovation. This is the first topic of the following section, where we take some time to review the recent history of IS management. The aim will be to put our approach, simplicity, in perspective as the next natural step.

1.2. Three periods in IS management

We can roughly identify three successive periods in IS management. To make our points as clearly as possible, we choose to paint each era in broad strokes, the reality being obviously less clear-cut.

1.2.1. Management driven by technology

Roughly speaking, this period spanned the years from 1970 to 2000. During this time, it was hoped that technological innovation alone would solve the entropy problem and allow building efficient and durable systems. This was the era of monolithic and closed systems where the same vendor would often provide both the software and the hardware running it. IBM and Digital were certainly key players here. Judging by the number of COBOL and UNIX systems still running strategic applications in today's banking systems, we can conclude that this approach had some serious success. This fact should certainly not be neglected and it could probably inspire current technological choices when it comes to thinking in terms of sustainability. We will come back to this later.

Relying on technology alone to drive the evolution of an IS presents two dangers that we refer to as the “fashion victim syndrome” and the “vendor trapping syndrome”.

Technological fashion victims trust in technology so blindly that they tend to systematically own the latest gadgets, thinking their life will change forever and for the better. Similar behavior could be observed from some tech-gurus in many IT departments during this first period. This undoubtedly fueled the impression, an often justified one, that ISs are like black holes, swallowing ever more resources while not producing much more than the previous versions, and sometimes even less. As is now apparent to any observant CIO, choosing the latest technologies implies risks that often outweigh the benefits of the hypothetical improvements claimed by the latest hype. This state of affairs led a prominent IT thinker [CAR 03] to make the provocative suggestion that wisdom in this field systematically belongs to technology followers rather than to the leaders.

Vendor trapping, on the other hand, is the situation in which the vendor leverages the strong software-hardware coupling to discourage customers from trying competitors' products. The most extreme form of trapping was simply locking: the software could not even run on alternative hardware.

With multi-platform languages like Java having been around for nearly 15 years now, cost-free hardware-agnostic system software like Linux for nearly 20 years, and the openness of IT systems promoted to a quasi-religion, this could sound almost like prehistory. But caution is still needed because the “trapping devil” is certainly not dead yet. Indeed, it has been rather active lately, tempting some of the major IT players.

1.2.2. Management through cost reduction

Largely as a reaction to this first era of IT extravagance, the turn of the century saw the advent of a much more austere era of systematic cost reductions. All of a sudden, ISs came under suspicion. They were perceived as ivory towers hiding a bunch of tech-gurus whose main preoccupation was to play with the latest technologies. Hence the tight control on spending, where each dollar had to be justified by immediate and measurable gains in business productivity.

This cost-killing obsession, the fear of the vendor trap, and the advent of the web as a major selling platform were factors that all pushed IT management to favor more open architectures. These architectures were meant to leverage the legacy systems by wrapping functionality of existing systems into reusable services to open the old platforms to the web where the business was progressively shifting.

This was, and still is, the Java-Linux era. The Java language, with its motto “write once, run everywhere”, was, at least apparently, the way to go for avoiding the vendor trap. The Linux operating system, on the other hand, was to contribute to cost reduction by avoiding the prohibitive license costs that would otherwise result when the IS needs to scale up.

One important consequence of IT management being driven primarily by cost reduction was that designing and modeling an IS upfront were considered a luxury one could no longer afford. Consequently, any form of abstract thinking was deemed academic and nearly useless. “Keep it Simple, Stupid” was the new motto. That probably also favored the advent of off-the-shelf solutions in the form of ERP1 packages. Explicit coding was to be replaced by mere customization. SAP and Oracle are likely the most prominent players in this ERP category.

Pushing outsourcing to its limits was still another consequence of the cost-cutting struggle. The outsourcing of specialized IT skills certainly began way before the cost reduction era; however, it is during this era that off-shore development really took off. It was motivated solely by the availability of a cheaper labor force in emerging countries for low value-added tasks such as coding specified software components. Experience showed, however, that the expected cost savings did not always materialize, because the effort incurred by additional coordination and specification was often underestimated.

As an increasing number of IT departments are now starting to realize, this drastic cost reduction period also often led to an accumulation of a heterogeneous set of technologies that were not really mastered. In a sense, many ISs just grew out of control, behaving like a set of cancer cells. Eventually, the initial attempt to reduce costs often resulted in expensive re-engineering processes and in massive system architecture endeavors, which could last for years, with no guarantee of success.

Much was learned, however, from this era. The most important lesson probably being that “cost reduction” alone cannot be the single driving force for building a sustainable and flexible IS.

1.2.3. Management through value creation

More recently, other approaches to IT management emerged which, by contrast with the previous one, are based on a somewhat more positive concept than “cost reduction”, namely that of “value creation”. In this perspective, the IS is considered an important intangible asset of a company, one that provides a substantial competitive advantage in a similar way as ordinary know-how, R&D, or copyrights do. A possible definition of the IS from this perspective could actually be the following: “The IS contains, or more simply is, the explicit knowledge of an organization”.

As for any other intangible asset, the contribution of the IS to value generation is not only hard to measure, but, more significantly, also difficult to define properly on a purely conceptual level. This difficulty can be traced back to a set of features of ISs that distinguish them from other assets:

– ISs are typically very complicated systems that grew slowly over time, without any real possibility of ever measuring the exact amount of effort that went into their design, construction, and maintenance. As mentioned before, ISs grow more like living organisms because of their complexity and openness.

– When considering generation of value, it becomes very hard, if not impossible, to clearly disentangle the contribution of the IS seen as a technical tool from other contributions such as the knowledge and the skills of the IT people in charge of its maintenance and development. The efficiency of the processes of the organization in which the IS operates obviously plays a big role in the generation of value. Indeed, even a technically robust IS could collapse within just a few months if key skilled personnel leave or if inappropriate changes are made to core processes in the IT department.

– Most often, ISs are unique systems, crafted for one company to answer its specific needs. Therefore, there is no real IS market that could help define a price or a value of an IS. Putting things differently, ISs are not easy to compare for the simple reason that they are intrinsically unique.

– Another rarely mentioned but fundamental difficulty in assessing the value of an IS is what we might call the present-or-future ambiguity. What if an IS, which is currently doing a perfect job as a generator of value, had only limited flexibility to accommodate future opportunities? Most IT managers and CIOs would certainly agree that this IS has poor value. Any sensible concept of value for an IS should thus take into account not just the current situation, but also its sustainability.

Yet, this confused situation has not discouraged many IT thinkers from talking, writing, and commenting endlessly about IS value. As is usual in such circumstances, the conceptual mess is never really eliminated but is rather recycled by those who see an opportunity to invent a plethora of theories to help them sell their precious experience and expertise.

That being said, it should be acknowledged that some of these approaches are of interest and could even have practical use. A common idea is to try to quantify an appropriate concept of “use value”, a concept that actually goes back as far as Marx's major work, Capital. No doubt it is interesting to try to apply this concept to ISs, even if only to see its limits. As the original definition by Marx was intrinsically subjective, the first task for any “use value theorist” will be to quantify it for the specific case of an IS. We shall come back to this in more detail in Chapter 3.

The advantage of these kinds of value-driven approaches to IS management is that they are based on actual measurements, which are certainly reassuring for the IT management teams who choose to use them. Their main weakness, however, lies in the great deal of arbitrariness they involve, both in what is measured and in how it is measured. As a quick example, many of these approaches neglect sustainability aspects of the IS altogether.

Thus, once more, this third era of IT management has undoubtedly brought us a few steps closer to wiser and more lucid IT management. Quite clearly, however, use value cannot be the whole story either.

1.3. And now … simplicity!

1.3.1. Technology, cost reduction, value creation … So what's next?

On the one hand, the idea of maximizing some sort of use value that emerged from the last management era looks to be on the right track. But at the same time, it really sounds too simplistic. Any CIO or IT professional in charge of a large IS could probably easily come up with real-life examples where use value is not the only thing worth considering. We previously mentioned sustainability, but even that would leave out many other important facets of the usefulness of an IS.

So, let us just face it: no single concept of value will ever suffice to capture what the desirable state is for an IS, both at present and in the future. In no way can this “set of desirable features” for an IS be summarized in just one number, even with a very sophisticated concept of value. Attempting to compress what is in essence a multi-component quantity into a single number is just wrong and can only mislead those who are in charge of making decisions. For these reasons, we believe that use value alone cannot provide safe guidance for IS management in the long run.

Once this weakness of any single-component concept of IS value has been acknowledged, one first and obvious step could be to define, more appropriately, a set of several, well-justified, concepts of values. Certainly, these should be both relevant and reasonably independent from one another. Chapter 3 follows this line of thought. But clearly, to be content with a move from a scalar concept of value to a mere set of values would expose our approach to the very same criticism that we made in the first place against the “use value”, namely that of arbitrariness. And this certainly would be a legitimate criticism.

Is there any way out? We believe there is! There is a deeper concept behind this set of desirable features that an IS should possess. This concept we believe is, well, simplicity! Behind this deceptively obvious, intuitive and apparently provocative word “simplicity” lies a rich, deep, and multifaceted set of concepts that we think could provide the much-sought guidance for the state and evolution of an IS.

The aim of this book is to make these concepts of simplicity precise and operational in real IT life. Our line of thought is that a proper conceptual understanding of simplicity is a prerequisite for sound decision making in IT.

As it happens, the complexity of ISs, which in a very rough sense (too rough indeed, as we shall see later) could be considered the counterpart of simplicity, has come under the close scrutiny of much academic and applied research in recent years. We shall briefly review a selection of these results in Chapter 2, namely those we consider the most relevant for our subject. Although not devoid of interest, many of these works on the complexity of ISs somehow implement the same strategy as the previously mentioned works on the value of an IS. It is usually argued that one single concept of complexity, taken from software engineering or from graph theory, could be miraculously appropriate for measuring the complexity of an IS, provided it is suitably generalized or enhanced. In other words, existing concepts of complexity are used and applied explicitly, as is, to ISs.
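To make concrete what such a single-number complexity measure looks like, here is a crude, hedged illustration of our own (not from this book): the compressed size of a description is sometimes used as a practical stand-in for descriptive complexity, with exactly the kind of limitations discussed above.

```python
import random
import zlib

# Our illustration, not from the text: Kolmogorov complexity itself is
# uncomputable, but the size of a compressed description gives a crude,
# practical upper bound on an object's descriptive complexity.

def compressed_size(data: bytes) -> int:
    return len(zlib.compress(data, level=9))

regular = b"ab" * 500  # admits a very short description: "repeat 'ab' 500 times"
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(1000))  # no short description

# The highly regular string compresses far better than the noisy one.
print(compressed_size(regular), "<", compressed_size(noisy))
```

Whether such a number means anything for a whole IS, where human and organizational complexity dominate, is precisely what the text above calls into question.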

Again, we think this approach is, by far, too simplistic and will not in fact be able to reliably describe the reality of an IS, where human, organizational, and technological complexities are inextricably entangled. Neither will a contrived concept of use value or complexity do. We need more. We need some form of practical wisdom for IS management.

1.4. Plan of the book

To build this wisdom, our strategy will not be to start from scratch and concoct some entirely new concepts of simplicity and complexity. Rather, we will draw on various areas of research that have resulted in an in-depth understanding of complexity and simplicity. This is the content of Chapter 2.

– The first of these domains is information theory, which is a part of mathematics. For more than half a century, mathematicians have struggled to understand complexity and related concepts such as information and randomness. They tried to do so by removing as much arbitrariness as possible. This is certainly a good place to go for those who look for complexity concepts that are robust and deep. The conclusion of this quick tour will not be an easy set of ready-to-use recipes for measuring the complexity of an IS, but rather a deeper conceptual understanding of what complexity is, in a restricted framework where it can be defined rigorously.

– The second topic we will draw from is design. In a beautiful and famous little book [MAE 06], John Maeda from MIT offers a designer's look at simplicity, for which he proposes 10 laws. These are not mere negations or antitheses of the complexity concepts suggested by information theory, for at least two reasons. The first and most obvious one is that designers and mathematicians have very different types of concerns. While mathematicians are in quest of intrinsic and robust concepts, designers take into account human factors that we believe are essential ingredients when dealing with ISs and how humans use them. Second, simplicity is a more positive concept that cannot simply be cast as negative complexity.

– The last topic on which we shall draw is software engineering, whose relationship to ISs is rather obvious compared with the previous, perhaps more exotic, topics. This more conservative approach has strengths and weaknesses of its own. The strength is that it can provide a set of metrics for various aspects of IT complexities that have already proven their validity in some very specific contexts. The weakness, as we already mentioned, is the rather high level of arbitrariness of the suggested metrics.
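One classic example of such a software-engineering metric is McCabe's cyclomatic complexity, which counts independent paths through a piece of code. The sketch below is our own rough approximation (not taken from this book), counting branching points in Python source:

```python
import ast

# Rough sketch of McCabe's cyclomatic complexity, approximated as
# 1 + the number of branching constructs found in the source code.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

snippet = """
def classify(x):
    if x < 0:
        return "negative"
    for _ in range(3):
        if x > 10:
            return "large"
    return "small"
"""
print(cyclomatic_complexity(snippet))  # 4
```

Such a metric is perfectly well-defined for one function, which illustrates both the strength (it is measurable and validated in its context) and the weakness (choosing which constructs count as branches is already somewhat arbitrary, and extrapolating the number to a whole IS even more so).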

There are deep connections between the concepts of complexity and value, which are actually already implicit in the idealized mathematical concepts, as we shall see. Our task thus will be to bring them out and make them explicit. To that end, in Chapter 3, we shall first select a few relevant concepts of value for an IS. The famous use value will be part of the set. This part can be read as a quick introduction to the most common concepts of value used in IT. We shall argue that the concepts we retain are all reasonably independent from one another and equally relevant in most practical situations. This part of the book lays the conceptual foundations for implementing simplicity in an IS.

Later, in Chapter 4, we identify how the main sources of uncontrolled complexity can be mitigated using our simplicity principles to increase the value of an IS. This chapter is more practical than the previous chapter.

Rephrasing the above slightly, we can consider that a set of well-chosen concepts of value is basically a black-box view of an IS. In no way, however, is it adequate to drive the evolution of an IS. Deeper understanding and more lucidity are needed. Complexity and simplicity enter by providing a white-box view of an IS, explaining what the true origin of value is. We stress once more, however, that this understanding does not necessarily lend itself to an easy and ready-to-use set of recipes.

This book is definitely not a for-dummies kind of book. In no way do we claim that simplicity is synonymous with easiness, neither conceptually nor practically. We claim that simplicity is a core value when designing ISs.

For productivity reasons, enterprise computing has often been, and still is, reduced to applying patterns, best practices, and templates at various levels. This is actually closely related to a topic that we shall discuss in depth in Chapter 4: excessive specialization and the resulting disempowerment. We think this is really a mistake, even with efficiency in mind. ISs naturally raise a number of intrinsically conceptual issues regarding information processing, model building, and evaluating complexity or abstraction. We believe that all these topics deserve, once in a while, a little more conceptual thinking. Chapter 2 has been written in this frame of mind.

Our previous caveats notwithstanding, the book concludes with a purely practical Chapter 5. This chapter actually summarizes several years of our own experience with contracts related to such topics as defining and building modular architectures, performing large-scale business modeling, and designing pivot formats to integrate different systems. We give a number of practical recommendations on how to implement simplicity principles in hardware, software, and functional architecture. We also discuss human and organizational matters in this same layered perspective.

Let us conclude this introduction by acknowledging that some arbitrariness is certainly unavoidable in our approach as well. We believe, however, that it offers improved guidance on how to manage an IS in the long run, when both sustainability and flexibility matter. Simplicity is certainly a more exciting concept than cost reduction. It is also deeper than value creation for which it provides an explanation and suggests practical improvements. Indeed, various aspects of simplicity touch all IS stakeholders: customers, IT managers, and top management. Finally, simplicity has the power of intuitive concepts.

Rather than trying to optimize one IS value, try to identify the key factors that contribute to different and equally essential concepts of value. This we claim is a CIO's main responsibility!

So, are you ready for a ride through simplicity?
1 Enterprise Resource Planning refers to an integrated application used to manage internal and external resources, including tangible assets, financial resources, materials, and human resources.
