
5.  Business Models


When many people first hear about businesses that make their living selling open source software, their first question is usually something like "How can you sell something that's available for free?" The complicated answer to that question is the subject of this chapter.

Even restricting the discussion to “free as in beer” doesn’t simplify it much. The business world is full of examples of free and paid services or products supporting and subsidizing each other in various ways. It’s a hard formula to get right, and business models based on such approaches can rapidly shift with technology and consumer habits. Ask any newspaper publisher.

Indeed, it’s misleading, or at least simplistic, to refer to an “open source business model” as if open source software existed outside of normal commercial relationships in which money is traded for a product or service of value.

Which brings us to the other overarching theme that I’ll cover. Open source software and the forces that have helped to shape it don’t exist in a bubble. They’re part of broader trends in software development, in business interactions, and in organizational culture generally.

How Can You Sell Something That You Give Away?

Free is a multifaceted word. Its meaning is less muddled in some other languages. Latin and the languages deriving from it use different words for the concepts of freedom (liber in Latin) and zero price (gratis in Latin). "Free" in modern English appears to have come from the Old English/Anglo-Saxon side of its language tree, where freogan came to embody both meanings. (English also has language roots in Latin by way of Norman French.)

As we’ve seen, “free software” in the sense that we’ve been discussing here has always been about free as in freedom rather than free as in beer. As Stallman writes in the first footnote to the GNU Manifesto:
  • The wording here was careless. The intention was that nobody would have to pay for permission to use the GNU system. But the words don’t make this clear, and people often interpret them as saying that copies of GNU should always be distributed at little or no charge. That was never the intent; later on, the manifesto mentions the possibility of companies providing the service of distribution for a profit. Subsequently I have learned to distinguish carefully between “free” in the sense of freedom and “free” in the sense of price. Free software is software that users have the freedom to distribute and change. Some users may obtain copies at no charge, while others pay to obtain copies—and if the funds help support improving the software, so much the better. The important thing is that everyone who has a copy has the freedom to cooperate with others in using it.

Freedom doesn’t pay the bills though.

The GNU Manifesto offers up some possibilities that apply mostly to individuals. For example, “People with new ideas could distribute programs as freeware, asking for donations from satisfied users, or selling hand-holding services.” Stallman also makes some more questionable suggestions such as “Suppose everyone who buys a computer has to pay x percent of the price as a software tax. The government gives this to an agency like the NSF [National Science Foundation] to spend on software development.” None of these really look like scalable business models.

Is There an “Open Source Business Model”?

In Free: The Future of a Radical Price (Hyperion, 2009), former Wired Magazine editor-in-chief Chris Anderson refers to “free” as the “most misunderstood word” and describes many of the ways in which giving things away gratis can be profitable. In all, he describes 50 business models that fall into three broad categories.

Categories of Business Models

There are direct subsidies. For example, Apple gives away many types of Apple Store Genius Bar tech support as part of the package you get when you buy an Apple phone or computer.

There are three-party or two-sided markets in which one customer class subsidizes another. Ad-supported media, including companies like Facebook, fall broadly into this category in that they’re giving away a service to consumers while charging businesses for access to that audience.

Finally, there’s freemium. A certain class of service or product is free but you need to pay to upgrade. Freemium is a common approach for selling many types of software. In-app purchases on iPhone and Android smartphones are classic examples. You can download the basic app for free, but you need to pay money to remove ads or get more features.

Getting the Balance Right

A general challenge with freemium is getting the balance of free and paid right. Make free too good and your conversion rate to paid might not be high enough to be profitable—even among people who would have been willing to pay if they had to in order to use the product at all. I use a variety of programs and software services that are useful enough to me that I’d probably be willing to pay for them if necessary. However, the free tier meets my needs well enough; I may not even value the incremental features of the paid version at all.

On the other hand, cripple the free version too much and it becomes uninteresting in its own right. If this happens, you don’t get many people to try your software.

Suppose you're a not-so-hypothetical online storage provider trying to grow your customer base. You could offer a free trial. That has pros and cons but probably isn't a good fit here. (Users will upload things, and some of them will lose their only copies of those things when the trial ends. They'll be mad. Probably not a great plan.) Instead, you decide that users will be able to sign up for some amount of storage that they can keep using forever at no charge. If they want more storage, they'll have to pay a tiered monthly fee depending on how much they need. But how much should you give them for free?

You could be a cheapskate and give them 10 megabytes. Seriously? You can store one MP3 song with that amount. No one's going to bother to use that. OK. How about 10 terabytes—a million times as much? That's more storage than all but a tiny sliver of individuals need. You'll get lots of sign-ups (assuming the service is otherwise useful) but few paying customers who require more space.

Building the Funnel with Free

The nice thing about a freemium model is that it’s a good way to acquire users with the product itself. They’re not paying customers yet, but they’re well into your sales funnel in marketing-speak. (To be specific, they’re potentially in an evaluation phase, which is often listed as the third phase of the funnel after awareness and interest.) They’re using your product. They still have to like it. And they still have to decide to buy it. But getting a potential customer to evaluate your product is a big step. Freemium models for software, especially relatively simple-to-use software, make getting from initial awareness to evaluation a relatively quick and low friction process if the experience for new users is otherwise solid.

At one level, business models that include open source software can be thought of as variants on freemium. However, be careful with this framing. It can encourage simplistic thinking. Successful business models usually involve more than just charging for support or offering consulting as an option. Furthermore, approaches such as open core may look like they’re open source on the surface without benefiting much from the open source development model.

Open Core versus Open Source

With open core, a company gives away a free product that is open source but then sells additional proprietary software that complements it in some way. This often takes the form of something like a “community” edition that’s free and an “enterprise” edition that requires either a license or a subscription fee. A typical distinction is that the enterprise edition will include features that tend to be important for large organizations running software in production but aren’t as big a deal for individuals or casual use. Andrew Lampitt is credited with coining the open core term.

The MySQL database—acquired by Oracle when it bought Sun—is a typical case. MySQL Enterprise Edition “includes the most comprehensive set of advanced features, management tools and technical support to achieve the highest levels of MySQL scalability, security, reliability, and uptime. It reduces the risk, cost, and complexity in developing, deploying, and managing business-critical MySQL applications.” Thus, even though you can use the base MySQL project for free, many of the features that you probably want as an enterprise user are behind the paywall.

In part because the upsell features are often not clearly partitioned off from the core project, many open core products require their contributors to sign a contributor license agreement (CLA), which assigns rights to the commercial owner of the product. (It may or may not assign copyright but, in any case, it gives the owner the right to use the contributions under a proprietary license if they want to.) Pure open source projects may use CLAs as well, but in that case, they serve a somewhat different purpose. For example, the Eclipse Contributor Agreement gives as its rationale: “It’s basically about documenting the provenance of all of the intellectual property coming into Eclipse. We want to have a clear record that you have agreed to the terms under which the Eclipse community has agreed to accept contributions.”

Many vendors are attracted to open core because it’s effectively a proprietary business model that uses open source but isn’t itself really a business model directly based on open source. What’s being sold is the proprietary add-ons. The vendor’s hope is that they’ve gotten the free and paid balance right. That the free is good enough to attract users and even outside contributors. But that most customers who would have been willing and able to pay anyway will pony up for the premium version.

As the president of the OSI, Simon Phipps, writes: “Open core is a game on rather than a valid expression of software freedom, because it does not [provide] software freedom for the software user . . . to use the package effectively in production, a business probably won’t find the functions of the core package sufficient, even in the (usual) case of the core package being highly capable. They will find the core package largely ineffective without certain ‘extras,’ and these are only available in the ‘enterprise version’ of the package, which is not open source. To use these features, you are forced to be a customer only of the sponsoring company. There’s no alternative, no way to do it yourself if the value delivered doesn’t justify the expense involved, or if you are time-rich and cash-poor. Worse, using the package locks you in to the supplier.”

Are You Taking Advantage of Open Source Development?

That’s the perspective from the users of the software. But what about from the vendor’s perspective? Is this just an argument that open core is not a sufficiently ideologically pure approach to building a business based on open source?

The issue isn’t ideological purity. It’s that a business model that’s not fully based on open source doesn’t accrue the full benefits of being based on open source either.

From a customer's perspective, if you need the enterprise features, you need the proprietary product. The fact that there's an open source version lacking features you need isn't all that relevant. It's no different from a freemium approach to traditional proprietary software or software-as-a-service. You can't try out what you don't have access to. The power of freemium to get potential users in the door can be significant. It's just that open core isn't meaningfully different from proprietary approaches to selling software simply because open source is part of the mix.

Companies also find that an open core model often doesn’t bring the full benefits of the open source development model. The power of open source as a development approach isn’t that anyone can see your code. It’s that individuals and companies can collaborate and work together cooperatively to make better software. However, open core almost can’t help sending off a vibe that the vendor owns the open source project and its community given that proprietary extensions depend on the open source core. It can be hard to attract outside contributors in this situation—which can be a community management challenge under the best of circumstances. The result is often an open source project that is open source in name only.

Subscriptions and Support

There's another freemium model that's common in open source. In fact, it's often called out as the open source business model, although that isn't really correct or is, at least, an oversimplification. With this model, you can obtain and use fully functional, nothing-held-back software under an open source license. You can use it without paying for as long as you want, no strings attached. But, if you want support, you're going to need to pay for it in some form. This differs from the typical subscription arrangement for proprietary software—Adobe Creative Cloud, for example—with which you lose access to the software if your subscription lapses.

It’s also different from the historical approach to proprietary software, which combined an up-front software license with some sort of maintenance fee for minor updates and support.

The Rise of the Independent Software Vendor

It’s difficult to identify the first company to sell software that wasn’t also hawking hardware (which is to say, the first Independent Software Vendor (ISV)). However, Cincom Systems—founded in 1968—is a good candidate. It sold what appears to be the first commercial database management system not to be developed by a system maker like IBM. Fun fact: not only is Cincom still extant as a private company in 2018 but one of its founders, Thomas Nies, is the CEO.

Over time, pure-play or mostly pure-play software companies packaging up bits and selling them became the dominant way in which customers acquired most of their software. As we’ve seen, ISVs like Microsoft selling closed-source proprietary software even became major suppliers of the operating systems and other “platform” software that historically were supplied by vendors as part of a bundle with their hardware.

When open source software came onto the scene, it didn’t bring with it the same requirement to purchase an up-front license. However, many users still wanted the other benefits associated with having a support relationship with a commercial entity.

Open Source Support Arrives

Probably the first company to systematically provide support for open source software in a formal way was Cygnus Solutions. It was founded by John Gilmore, Michael Tiemann, and David Henkel-Wallace in 1989 under the name Cygnus Support. Its tagline was: Making free software affordable.1

Cygnus Solutions maintained a number of the key GNU software products, including the GNU Debugger. It was also a major contributor to the GCC project, the GNU C compiler.

As Tiemann described the company in 1999’s Open Sources: Voices from the Open Source Revolution (O’Reilly Media): “We wrote our first contract in February of 1990, and by the end of April, we had already written over $150,000 worth of contracts. In May, we sent letters to 50 prospects we had identified as possibly interested in our support, and in June, to another 100. Suddenly, the business was real. By the end of the first year, we had written $725,000 worth of support and development contracts, and everywhere we looked, there was more opportunity.”

In the same book, Tiemann also touches on something else that would come to be important to successful open source businesses when he wrote: “Unless and until a competitor can match the 100+ engineers we have on staff today, most of whom are primary authors or maintainers of the software we support, they cannot displace us from our position as the ‘true GNU’ source (we supply over 80% of all changes made to GCC, GDB, and related utilities).”

Shortly after Tiemann wrote those words, Red Hat—which had just gone public in August 1999—acquired Cygnus. Red Hat dates back to 1993 when Bob Young, doing business as the ACC Corporation, started a mail-order catalog business that sold Linux and Unix software accessories out of his home. The following year, Marc Ewing started distributing his own curated version of Linux, and he chose Red Hat as the name. (He picked that unusual name because he was known for wearing his grandfather's red Cornell lacrosse hat while helping fellow students in the computer lab at Carnegie Mellon.)

Young found himself selling a lot of copies of Ewing’s product. In 1995, they joined together to become Red Hat Software, which sold boxed copies of Red Hat Linux, an early Linux distribution.

Linux Distributions Appear

Distributions, or "distros" as they're often called, first appeared in 1992, but more active projects arrived the next year. That's when Patrick Volkerding released Slackware, based on the earlier but not well-maintained SLS. The year 1993 also saw Ian Murdock's founding of Debian and its release near the end of the year.

Distributions brought together the core operating system components, including the kernel, and combined them with the other pieces, such as the utilities, programming tools, and web servers needed to create a working environment suitable for running applications. Distributions were a recognition that an operating system kernel and even the kernel plus a core set of utilities (such as those that are part of GNU in the case of Linux) aren’t that useful by themselves.

Over the next decade, the number of distributions exploded although only a handful were ever sold commercially.

Support was one of the first things to get added to commercial Linux distributions. Initially, this meant pretty much what it did with traditional retail boxed software. You called a helpdesk if you were having trouble installing something, the software didn't work as promised, or you wanted to report a bug. However, thinking of what a commercial software vendor like Red Hat does as merely support for open source software is not just too narrow a view. It's the wrong lens entirely.

Subscriptions: Beyond Support

Rather, as Steven Weber writes, you should be thinking about “building profitable economic models around the open source process.”

In Red Hat's case, it's an enterprise subscription software business that is based on an open source development model. What does this subscription look like in the context of open source software? It developed over time. It came about through experimenting, innovating, and perfecting a community-based model. It came from learning how best to participate in communities; adding features and functionality desired by customers; and then testing, hardening, compiling, and distributing stable, workable versions to customers.

One of the things that Michael Tiemann wrote back in 1999 that’s still very relevant today is that part of a business model for open source software is establishing in-house expertise in the design, optimization, and maintenance of the products being sold. This may be an obvious point in the case of proprietary software that is written by a single company. However, with open source software, it’s also difficult to provide effective support in the absence of active participation in the communities developing the software. That participation is what leads to having the expertise to solve difficult support problems.

And it goes beyond support. Users of software often want to influence product direction or the development of new features. With open source software, users can do so directly. However, working with the communities in which the software is developed isn’t necessarily easy or obvious to a company that isn’t familiar with doing so. As we’ve seen, there’s not really a template. Communities have different governance models, customs, and processes. Or just quirks. Even organizations that do want to participate directly in setting the direction of the software they use can benefit from a guide.

Focusing on Core Competencies

Furthermore, many organizations don't want to (or shouldn't) spend money and attention on developing all the software that they use. When I was an industry analyst in the early 2000s, I would talk with the banks and other financial institutions that were among the earliest adopters of Linux after the Internet infrastructure providers. Large banks had technologists with titles like "director of kernel engineering." But here's the thing. Banks are not actually in the business of writing and supporting operating systems. They need operating systems. But they also need bank branches. Yet they're not in the construction business.

Over time, banks and other end users did increasingly participate in community-based open source development; we saw this earlier with AMQP, for example. However, especially for the platforms that make up their infrastructure, most enterprises prefer to let companies that specialize in the software do much of the heavy lifting.

Ultimately, subscribers can choose the degree to which they participate in and influence technology and industry innovation. They can either use the open source product as they would any other product. Or they can actively participate in setting the development direction to a degree that is rare with proprietary products.

Open source software subscriptions do indeed provide fixes, updates, assistance, and certifications in a way that doesn’t look that different from other commercial software products. And that may be enough for many customers. However, the ability to participate in the open source development model creates opportunities that don’t exist with proprietary software.

Aligning Incentives with Subscriptions

Furthermore, subscriptions create different incentives for vendors than up-front licenses do. The usual way that licenses work is that there’s an up-front license fee and upgrade fees for major new versions, as well as ongoing maintenance charges. As a result, there’s a strong incentive for vendors to encourage upgrades. In practice, this means that—while the company selling the software has contractual obligations to fix bugs and patch security holes—they would actually prefer that customers upgrade to new versions when they become available. There’s thus an active disincentive to add new features to existing software.

With a subscription model, on the other hand, so long as a customer continues to subscribe, it doesn’t matter so much to the vendor whether a customer upgrades to a new version or not. There’s still some incentive to get customers to upgrade. It takes effort to add new features to older versions and, at some point, it can just become too hard to continue providing support. But there’s no financial impetus creating an artificial urgency to force upgrades.

Subscriptions do sometimes get a bad rap, especially in proprietary consumer software and services. But that’s mostly because, with most subscriptions of this type, you only retain access to the software so long as you pay the subscription fee. For someone only running some piece of software occasionally and casually, that can be a bad deal compared to just using a five-year-old software package that’s no longer supported but works fine for the task at hand.

Open source subscriptions are different though because you retain full control over and access to software even if you let a subscription lapse. That’s fundamental to free and open source software. It’s yours to do with as you please. That’s at the heart of a business model that makes profitable companies built around open source possible.

From Competition to Coopetition

The rise of open source software has paralleled other changes taking place in the software industry and beyond. Some of these changes arguably take their cues from the open source development model. Others are more likely the result of some of the same influences that helped make the widespread adoption of open source software possible such as the Internet and inexpensive computers.

One of the broad changes that has paralleled the rise of open source is an increasing trend toward greater coopetition—cooperative competition.

Coopetition Gets Coined

The term dates back to the early 20th century, but it started to see widespread use when Novell's Ray Noorda began using it to describe the company's business strategy in the 1990s. For example, Novell was at the time planning to get into the Internet portal business, which required it to seek partnerships with some of the same search engine providers and other companies that it would also be competing with.

In 1996, Harvard Business School's Adam Brandenburger and Yale's Barry Nalebuff wrote a New York Times best-selling book on the subject, adopting Noorda's term and examining the concept through the lens of game theory. They described it as follows: "Some people see business entirely as competition. They think doing business is waging war and assume they can't win unless somebody else loses. Other people see business entirely as cooperative teams and partnerships. But business is both cooperation and competition. It's coopetition."

The basic principles have been around forever. Marshall University’s Robert Deal describes in The Law of the Whale Hunt: Dispute Resolution, Property Law, and American Whalers, 1780–1880 (Cambridge University Press, 2016) how “Far from courts and law enforcement, competing crews of American whalers not known for their gentility and armed with harpoons tended to resolve disputes at sea over ownership of whales. Left to settle arguments on their own, whalemen created norms and customs to decide ownership of whales pursued by multiple crews.”2 Many situations aren’t ruled solely by either ruthless competition or wholly altruistic cooperation.

Why Coopetition Has Grown

The theory behind coopetition isn't that well established, with the result that there's debate over where coopetition is most effective and what the most effective strategies are. However, a 2012 paper by Paavo Ritala notes that "it has been suggested that it occurs in knowledge-intensive sectors in which rival firms collaborate in creating interoperable solutions and standards, in R&D, and in sharing risks."3

That's a good description of the IT industry, but it increasingly describes the many other industries that are selling products and services enabled by software or that simply are software. "Software is eating the world," as venture capitalist (and co-author of the first widely used web browser) Marc Andreessen famously put it in a 2011 Wall Street Journal piece. It seems at least plausible that coopetition's high profile of late is the result of complexity levels and customer demands that make it increasingly difficult to successfully avoid cooperation.

By way of contrast, I still remember one day in the early 1990s. I got an email from a sales rep, livid because he had learned that a networking card in a new computer system we had announced was made by Digital Equipment, a major competitor. Among the rep's choice words was something along the lines of "I'm fighting with these guys every day and you go and stab me in the back."

I tell this story because it nicely illustrates the degree to which the computer systems market has changed. Today, the idea that having commercial relationships with a competitor to supply some part or service would be scandalous or even especially notable would seem odd under most circumstances. There are echoes of this sort of behavior in the scuffles between Apple, Google, and Amazon in smartphones and voice assistants. But those are notable mostly because they’re not really the norm.

Coopetition is at the heart of most larger open source projects in which the participants are mostly developers working on the software as part of their day jobs. Look through the top contributors to the Linux kernel and you’ll see multiple semiconductor companies, software companies with Linux distributions, computer system vendors, and cloud service providers.4 Companies within each of these groups are often direct competitors and, indeed, may compete with others as well in certain aspects of their business. When the OpenStack Foundation was created, it was in large part to explicitly create a structure that could accommodate participation by competing corporate interests.

Open Source: Beneficiary and Catalyst

Open source software development has both benefited from and been a catalyst for coopetition. Seeing companies working cooperatively on an open source project, it’s easy to dismiss the novelty of working together in this way. After all, companies have cooperated in joint ventures and other types of partnerships forever.

What we observe with open source projects, however, is a sharply reduced level of overhead associated with cooperation. Companies work together in a variety of ways. But many of those ways involve contracts, non-disclosure agreements, and other legal niceties. While open source projects may have some of that—contributor license agreements for example—for the most part, starting to work on a project is as simple as submitting a pull request to let others know that you have pushed code to a repository.

Extensive involvement in a major project tends to be more formal and more structured of course. The details will depend on the project’s governance model, but major project contributors should be on the same page as to the project’s direction. Nonetheless, the overall process for working together in an open source project tends to be lighter weight, lower overhead, and faster than was historically the case for companies working together.

One specific change in this vein that we’ve seen is the way that software standards are now often developed.

Coopetition and Standards

Typically, we talk about two types of standards. One type is de jure standards, or standards according to the law. These are what you get when industry representatives, usually including competitors, sit down as part of a standards organization or other trade organization to create a standard for something. The process can be very long and arduous. It's also infamous for often producing long specifications that are technically rigorous in an academic way but don't actually get used much in the real world.

The Open Systems Interconnection model (OSI model) is one example. While OSI’s conceptual seven-layer model has been widely adopted as a way to talk about layers of the networking software stack, software that directly implemented OSI was never much used.

By contrast, TCP/IP, the Transmission Control Protocol (TCP) and the Internet Protocol (IP), came out of research and development conducted by the Defense Advanced Research Projects Agency (DARPA) beginning in the late 1960s. Today, they're the core communication protocols used by the Internet. Although TCP/IP was subsequently ratified as a formal standard, it started out as a de facto standard by virtue of widespread use. Widely used proprietary products such as Microsoft Windows or x86 processors can also be viewed as de facto standards.

Open source has developed something of a bias toward a de facto standardization process that is effectively coopetition and standardization through code. Or, if you prefer, “code first.”

We've seen this play out repeatedly in the software containers space. While they were subsequently standardized under the Open Container Initiative, the container runtime and image formats existed as implementations before they became a standard. The same is true of container orchestration, with Kubernetes evolving to become the most common way of orchestrating and managing a cluster of containers. It's a standard based on the size of its community and its adoption rather than the action of a standards body.

This approach is very flexible because it allows companies to work together developing software and then iterate as needed. There’s another advantage as well. One of the problems with specifications is that they rarely fully specify everything. As a result, software that’s based on standards often has to make assumptions about unspecified details and behaviors. By contrast, a standard achieved through a code first approach is its own reference implementation. It’s therefore a more effective approach to coopetition than developing a specification in committee only to have parties go off and develop their individual implementations—which can be subtly incompatible as was the case with the Fibre Channel storage interconnect early on, to give one example.

The Need for Speed

The ascendancy of the open source development model as an approach for collaboration and innovation wasn't the only interesting IT trend taking place in the mid- to late 2000s. A number of things were coming together in a way that would lead to both new platforms and new development practices.

From Physical to Virtual

Server virtualization, used to split physical servers into multiple virtual ones, was maturing and IT shops were becoming more comfortable with it. Virtualization was initially intended to reduce the number of boxes needed and hence to cut costs. However, it came to have other uses as well. Ubiquitous virtualization meant that IT organizations were becoming more accepting of not knowing exactly where their applications are physically running. In other words, another level of abstraction was becoming the norm as has happened many times in many places over the history of computing.

A vendor and software ecosystem was growing up alongside and on top of virtualization. One specific pain point this ecosystem addressed was "virtualization sprawl," a problem brought about by the fact that virtualization made it so easy to spin up new systems that the management burden could get out of hand. Concepts like automation, policy-based administration, standard operating environments, and self-service management were starting to replace system admin processes that had historically been handled by one-off scripts—assuming they weren't simply handled manually.

The Consumerization of IT

IT was also consumerizing and getting more mobile. By 2007, many professionals no longer used PCs tethered to a local area network. They used laptops running on Wi-Fi. Then Apple introduced the iPhone. Soon smartphones were everywhere, usually purchased by employees even though they were often used for both personal and business purposes. Meanwhile, on the software side, users were getting accustomed to responsive, slick consumer web properties like Amazon and Netflix during this post-dot-com Phase 2 of the web. Stodgy and hard-to-use enterprise software looked less attractive than ever.

Line of business users in companies also started noticing how slow their IT departments were to respond to requests. Enterprise IT departments rightly retorted that they operate under a lot of constraints—whether data security, detailed business requirements, or uptime—that a free social-media site does not. Nonetheless, the consumer web increasingly set an expectation and, if IT couldn’t or wouldn’t meet it, users would go to online services—whether to quickly put computing resources on a credit card or to purchase access to a complete online application.

Even those enterprise IT shops with tightly run software practices could see that the speed at which big Internet businesses such as Amazon and Netflix could enhance, update, and tune their customer-facing services was at a different level from what they could do. Yet a minuscule number of these deployments caused any kind of outage. These companies were different from more traditional businesses in many ways. Nonetheless they set benchmarks for what is possible.

Which brings us to DevOps.

The Rise of DevOps

DevOps touches many different aspects of the software development, delivery, and operations process. But, at a high level, it can be thought of as applying open source principles and practices to automation, platform design, and culture. The goal is to make the overall process associated with software faster, more flexible, and incremental. Ideas like continuous improvement based on metrics and data, which have transformed manufacturing in many industries, are at the heart of the DevOps concept. Amazon and Netflix got to where they are in part by using DevOps.

The DevOps Origin Story

DevOps grew out of Agile software development methodologies, which were formally laid out in a 2001 manifesto5 although they had roots going back much further. For example, there are antecedents to Agile and DevOps in the lean manufacturing and continuous improvement methods widely adopted by the automobile industry and elsewhere. The correspondence isn't perfect; lean approaches focus to a significant degree on reducing inventory, which doesn't cleanly map to software development. Nonetheless, it's not hard to find echoes of principles found in the Toyota Way (which underlies the Toyota Production System) like "respect for people," "the right process will produce the right results," and "continuously solving root problems" in DevOps. Appreciating this lineage also helps to understand that, while appropriate tools and platforms are important, DevOps is at least equally about culture and process.

The DevOps term was coined by Belgian consultant Patrick Debois, who had been frustrated by the walls of separation and lack of cohesion between application methods and infrastructure methods while on an assignment for the Belgian government. A presentation by John Allspaw and Paul Hammond at the O'Reilly Velocity 09 conference entitled "10+ Deploys per Day: Dev and Ops Cooperation at Flickr" provided the spark for Debois to form his own conference, called DevOpsDays, in Ghent, Belgium in 2009 to discuss these types of issues. DevOpsDays have since expanded as a sort of grassroots community effort to the point where, in 2018, there are dozens held every year around the world.

According to Frederic Paul in an InfoQ video interview from April 2012, Debois admitted that naming the movement was not as intentional as it might seem: “I picked ‘DevOpsDays’ as Dev and Ops working together because ‘Agile System Administration’ was too long,” he said. “There never was a grand plan for DevOps as a word.”6

Another noteworthy DevOps moment was the publication of The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win, written by Gene Kim, Kevin Behr and George Spafford in 2013 (IT Revolution Press). This book is a sort of fable about an IT manager who has to salvage a critical project that has bogged down and gotten his predecessor fired. A board member mentor guides him through new ways of thinking about IT, application development, and security—introducing DevOps in the process. Although DevOps has both evolved and been written about more systematically since then (including The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations by Gene Kim, Patrick Debois, Jez Humble, and John Willis (IT Revolution Press, 2016)), The Phoenix Project remains an influential text for the movement.

DevOps: Extending Beyond Agile

DevOps widened Agile principles to encompass the entire application life cycle including production operations. Thus, operations and security skills needed to be added to the cross-functional teams that included designers, testers, and developers. Improving collaboration, communication, and the level of cross-functional skills is an important DevOps tenet.

Taken to an extreme, there might no longer even be devs and ops people, just people with DevOps skill sets. But, more commonly, this view of DevOps focuses on "two pizza" cross-functional teams—small, multidisciplinary groups that own a service from its inception through its entire life cycle. This works in part because such services are autonomous, have bounded context, and can be developed independently of other services and groups, so long as they honor their API contract. It also assumes that these "generalist" teams have the necessary skills to operate the underlying platform.

For example, when thinking about security as part of DevOps—or even using the DevSecOps term to remind us of its importance—developers (and operations) don’t suddenly need to become security specialists in addition to the other hats they already wear. But they can often benefit from a greater awareness of security best practices (which may be different from what they’ve become accustomed to) and shifting away from a mindset that views security as some unfortunate obstacle.

Abstractions to Separate Concerns

However, especially in larger organizations, DevOps has evolved to mean something a bit different from closely communicating cross-functional teams, developers on pager duty, or sysadmins writing code. Those patterns may still be followed to greater or lesser degrees, but there's a greater focus on clean separation of concerns. It's about enabling ops to provide an environment for developers and then get out of the way as much as possible.

This is what Adrian Cockcroft, Netflix's former cloud and DevOps guru—he's now at Amazon Web Services (AWS)—was getting at with the "No Ops" term when he wrote about it.7 While Netflix was and is a special case, Cockcroft hinted at something that's broadly applicable: In evolved DevOps, a lot of what ops does is put core services in place and get out of the way. There's value in creating infrastructure, processes, and tools in a way that devs don't need to interact with ops as much—while being even more effective. (Netflix largely ran on Amazon cloud services, so it operated very little infrastructure itself, in spite of its vast scale.)

Reducing the friction of interactions between devs and ops doesn’t always mean making communication easier. It can also involve making communication unnecessary. Think about it this way: You do not, in fact, want to communicate with a bank teller more efficiently. You want to use an ATM. You want self-service.

With this model of DevOps, key aspects of operations happen outside of and independent of the application development process.

Of course, communication between dev and ops (as well as other disciplines) still matters. The most effective processes have continuous communication. This enables better collaboration, so that teams can identify failures before they happen; feedback, to continuously improve and cultivate growth; and transparency.

Site Reliability Engineers

At this point, it's worth mentioning Site Reliability Engineering. The term came out of Google in about 2003 when a team led by Ben Treynor was tasked with making Google's sites run smoothly, efficiently, and more reliably. Like other companies with large-scale infrastructures, Google was finding that existing system management paradigms didn't provide either the reliability or the ability to deploy new features quickly that it needed.

The idea is that a site reliability engineer (SRE) will spend about half their time on ops-related tasks like manual interventions and clearing issues. However, because the goal is to make the underlying system as automated and self-healing as possible, an SRE also spends significant time writing software that reduces the need for manual interventions or adds new features. Conceptually, this is somewhat like how a traditional system admin would write a script after they had to do the same task a few times. But the SRE concept puts that practice on steroids and expands the ops role into one with a much larger software development component.
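To make this concrete, here is a minimal sketch, in Python using only the standard library, of the kind of toil-reducing automation an SRE might write in place of a recurring manual intervention. The service name, health-check URL, thresholds, and the use of systemd are all illustrative assumptions, not anything prescribed by the SRE model itself.

```python
#!/usr/bin/env python3
"""Hypothetical toil-reduction script: restart a service that repeatedly
fails its health check, rather than paging a human to do it by hand."""

import logging
import subprocess
import time
import urllib.error
import urllib.request

# Illustrative values only; a real deployment would make these configurable.
HEALTH_URL = "http://localhost:8080/healthz"
SERVICE_NAME = "example-app"
CHECK_INTERVAL_SECONDS = 30
FAILURES_BEFORE_RESTART = 3

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")


def is_healthy(url: str) -> bool:
    """Return True if the health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=5) as response:
            return response.status == 200
    except (urllib.error.URLError, OSError):
        return False


def restart_service(name: str) -> None:
    """Restart the service via systemd; log it so the action is auditable."""
    logging.warning("Restarting %s after repeated health-check failures", name)
    subprocess.run(["systemctl", "restart", name], check=True)


def main() -> None:
    consecutive_failures = 0
    while True:
        if is_healthy(HEALTH_URL):
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            logging.info("Health check failed (%d in a row)", consecutive_failures)
            if consecutive_failures >= FAILURES_BEFORE_RESTART:
                restart_service(SERVICE_NAME)
                consecutive_failures = 0
        time.sleep(CHECK_INTERVAL_SECONDS)


if __name__ == "__main__":
    main()
```

The point isn't the specific script; it's that the remediation now lives in reviewable, versioned code rather than in a runbook step someone has to perform by hand at 3 a.m.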

Google’s Seth Vargo and Liz Fong-Jones argue that SRE is a variant of DevOps or “DevOps is like an abstract class in programming, and SRE is one possible implementation of that class” as they put it. I think of it more as an evolved form of ops for a separation-of-concerns DevOps model given that SRE teams support the groups actually developing software services. An SRE approach may indeed shift the location of the boundary between ops-centric roles and dev-centric roles. A concrete example might be one which embeds an application’s operational domain knowledge for a cluster of containers.8 But I’d argue that it’s still effectively a specialized ops function.

Manufacturing Analogs

Like DevOps more broadly, separation of functions also hearkens back to earlier examples from manufacturing and industrial organization. Red Hat’s Matt Micene writes that “The ‘Dev’ and ‘Ops’ split is not the result of personality, diverging skills, or a magic hat placed on the heads of new employees; it’s a by-product of Taylorism and Sloanianism. Clear and impermeable boundaries between responsibilities and personnel is a management function coupled with a focus on worker efficiency. The management split could have easily landed on product or project boundaries instead of skills, but the history of business management theory through today tells us that skills-based grouping is the ‘best’ way to be efficient.”9

In any case, DevOps should be viewed as a set of principles rather than a prescriptive set of rules.

Open Source and DevOps

Open source relates to DevOps across aspects that include platforms and tooling, process and automation, and culture.

Platforms and Tooling

A DevOps approach can be applied on just about any platform using any sort of tooling. DevOps can even be a good bridge between existing systems, existing applications, and existing development processes and new ones. The best tools in the world also won’t compensate for broken processes or toxic culture. Nonetheless, it’s far easier to streamline DevOps workflows with the right platform and tools.

Open source tooling is the default in DevOps. A 2015 DevOps Thought Leadership Survey by market researcher IDC found that a whopping 82 percent of early DevOps adopters said open source was “a critical or significant enabler of their DevOps strategy.” What's more, the further along the survey respondents were in implementing DevOps initiatives, the more important they thought open source and DevOps open source tools were.

At the platform level, a key trend pushing the use of new technologies is a shift from static platforms to dynamic, software-defined platforms that are programmable, which is to say controllable through APIs. The OpenStack project is a good example of how software-defined storage, software-defined networking, identity management, and other technologies can come together as a complete programmable infrastructure.

Containers are another important element of modern distributed application platforms. Containers modernize IT environments and processes, and provide a flexible foundation for implementing DevOps. At the organizational level, containers allow for appropriate ownership of the technology stack and processes, reducing hand-offs and the costly change coordination that comes with them. This lets application teams own container images, including all dependencies, while allowing operations teams to retain full ownership of the production platform.

With a standardized container infrastructure in place, IT operations teams can focus on building out and managing clusters of containers, meeting their security standards, automation needs, high availability requirements, and ultimately their cost profiles.

When thinking about the tool chain associated with DevOps, a good place to start is the automation of the continuous integration/continuous deployment (CI/CD) pipeline. The end goal is to make automation pervasive and consistent using a common language across both classic and cloud-native IT. For example, Ansible allows configurations to be expressed as “playbooks” in a data format that can be read by both humans and machines. This makes them easy to audit with other programs, and easy for non-developers to read and understand.
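As a rough illustration of what "easy to audit with other programs" can mean, here is a small Python sketch that loads a playbook-style YAML document and flags tasks that shell out directly instead of using an idempotent module. The playbook content and the audit rule are invented for this example, and the script assumes the PyYAML package is installed; it's not an Ansible tool, just a demonstration that configuration expressed as data is straightforward for other programs to inspect.

```python
"""Audit a playbook-like YAML document for tasks that bypass idempotent
modules. Assumes PyYAML (pip install pyyaml); everything here is a made-up
example rather than part of any real project."""

import yaml

EXAMPLE_PLAYBOOK = """
- name: Configure web servers
  hosts: webservers
  tasks:
    - name: Ensure nginx is installed
      package:
        name: nginx
        state: present
    - name: Tweak a config file by hand
      shell: echo 'worker_processes 4;' >> /etc/nginx/nginx.conf
"""

# Modules whose use we want a human to double-check during review.
FLAGGED_MODULES = {"shell", "command", "raw"}


def audit(playbook_text: str) -> list[str]:
    """Return a warning for each task that uses a flagged module."""
    warnings = []
    for play in yaml.safe_load(playbook_text):
        for task in play.get("tasks", []):
            used = FLAGGED_MODULES.intersection(task)
            if used:
                warnings.append(
                    f"Task '{task.get('name', '<unnamed>')}' uses {sorted(used)}"
                )
    return warnings


if __name__ == "__main__":
    for warning in audit(EXAMPLE_PLAYBOOK):
        print("WARNING:", warning)
```

Because the configuration is data rather than an opaque script, the same kind of check can run automatically against every proposed change.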

A wide range of other open source tools are common in DevOps environments including code repositories like Git, monitoring software like Prometheus and Hawkular, logging tools like Fluentd, and container content tools like Buildah.

Process

We also see congruence of open source development processes and those of DevOps. While not every open source project puts in the up-front work to fully implement DevOps workflows, many do.

For example, Edward Fry relates the story of one community that found “there are some huge benefits for part-time community teams. Planning goes from long, arduous design sessions to a quick prototyping and storyboarding process. Builds become automated, reliable, and resilient. Testing and bug detection are proactive instead of reactive, which turns into a happier clientele. Multiple full-time program managers are replaced with self-managing teams with a single part-time manager to oversee projects. Teams become smaller and more efficient, which equates to higher production rates and higher-quality project delivery. With results like these, it’s hard to argue against DevOps.”10

Whether or not they check all the DevOps boxes, significant open source projects almost can't help but have at least some characteristics of a DevOps process.

For example, there needs to be a common and consistent view into code. DevOps and open source projects are both well-adapted to using a distributed approach whereby each developer works directly with his or her own local repository with changes shared between repositories as a separate step. In fact, Git, which is widely used in platforms for DevOps, was designed by Linus Torvalds based on the needs of the Linux kernel project. It’s decentralized and aims to be fast, flexible, and robust.

And remember the earlier discussion about creating a good experience for new contributors by providing them with rapid feedback and incorporating their code when it’s ready? Automation and CI/CD systems are a great way to automate testing, build software more quickly, and push out more frequent releases.

Iteration, Experimentation, and Failure

At a higher level, DevOps embraces fast iteration, which sounds a lot like the bazaar approach to software development. They don’t align perfectly; software developed using a DevOps approach can still be carefully architected. However, DevOps has a general ethos that encompasses attributes like incremental changes, modularity, and experimentation.

Let’s talk about experimentation a bit more. Because it has a flip side. Failure.

Now that’s a word with a negative vibe. Among engineering and construction projects, it conjures up the Titanic sinking, the Tacoma Narrows bridge twisting in the wind, or the space shuttle Challenger exploding. These were all failures of engineering design or management.

Most failures in the pure software realm don't lead to the same visceral imagery as the above, but they can have widespread financial and human costs all the same. Think of the failed Healthcare.gov launch, the Target data breach, or really any number of multimillion-dollar projects that basically didn't work in the end. In 2012, the US Air Force scrapped an enterprise resource planning (ERP) software project after racking up $1 billion in costs.

In cases like these, playing the blame game is customary. Even when most of those involved don’t literally go down with the ship—as in the case of the Titanic—people get fired, careers get curtailed, and the Internet has a field day with both the individuals and the organizations.

But how do we square that with the frequent admonition to embrace failure in DevOps? If we should embrace failure, how can we punish it?

Not all failure is created equal. Understanding the different types of failure and structuring the environment and processes to minimize the bad kinds is what matters. The key is to "fail well," as Megan McArdle writes in The Up Side of Down: Why Failing Well Is the Key to Success (Penguin Books, 2015).

In that book, McArdle describes the Marshmallow Challenge, an experiment originally concocted by Peter Skillman, the former VP of design at Palm.11 In this challenge, groups receive 20 sticks of spaghetti, one yard of tape, one yard of string, and one marshmallow. Their objective is to build a structure that gets the marshmallow off the ground, as high as possible.

Skillman conducted his experiment with all sorts of participants from business school students to engineers to kindergarteners. The business school students did worst. I’m a former business school student, and this does not surprise me. According to Skillman, they spent too much time arguing about who was going to be the CEO of Spaghetti, Inc. The engineers did well, but also did not come out on top. As someone who also has an engineering degree and has participated in similar exercises, I suspect that they spent too much time arguing over the optimal structural design approach using a front-loaded waterfall software development methodology writ small.

By contrast, the kindergartners didn’t sit around talking about the problem. They just started building to determine what works and what doesn’t. And they did the best.

Setting up a system and environment that allows and encourages such experiments enables successful failure in Agile software development. It doesn’t mean that no one is accountable for failures. In fact, it makes accountability easier because "being accountable" needn’t equate to "having caused some disaster." In this respect, it changes the nature of accountability.

We should consider four principles when we think about such a system: scope, approach, workflow, and incentives.

Scope

The right scope is about constraining the impact of failure and stopping the cascading of additional failures. This is central to encouraging experimentation because it minimizes the effect of a failure. (And, if you don’t have failures, you’re not experimenting.) In general, you want to decouple activities and decisions from each other. From a DevOps perspective, this means making deployments incremental, frequent, and routine events—in part by deploying small, autonomous, and bounded context services (such as microservices or similar patterns).

Approach

The right approach is about continuously experimenting, iterating, and improving. This gets back to the Toyota Production System’s kaizen (continuous improvement) and other manufacturing antecedents. The most effective processes have continuous communication—think scrums and kanban—and allow for collaboration that can identify failures before they happen. At the same time, when failures do occur, the process allows for feedback to continuously improve and cultivate ongoing learning.

Workflow

The right workflow uses repeatable automation for consistency, thereby reducing the number of failures attributable to inevitable casual mistakes like a mistyped command. This allows for a greater focus on design errors and other systematic causes of failure. In DevOps, much of this takes the form of a CI/CD workflow that uses monitoring, feedback loops, and automated test suites to catch failures as early in the process as possible.
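As a simple illustration, the following hypothetical pre-deployment check is the kind of small gate that gets wired into a CI/CD pipeline so that a casual mistake (a missing or mistyped setting, say) fails fast in the pipeline rather than in production. The configuration fields and rules are invented for the example.

```python
"""Hypothetical CI gate: validate a deployment configuration before it ships.
Exits nonzero so the pipeline stops at the first detectable mistake."""

import json
import sys

# Invented fields for illustration; a real service would define its own schema.
REQUIRED_FIELDS = {"service_name", "image_tag", "replicas"}


def validate(config: dict) -> list[str]:
    """Return human-readable problems; an empty list means the config passes."""
    problems = []
    missing = REQUIRED_FIELDS - config.keys()
    if missing:
        problems.append(f"missing required fields: {sorted(missing)}")
    if config.get("replicas", 0) < 1:
        problems.append("replicas must be at least 1")
    if config.get("image_tag") == "latest":
        problems.append("pin an explicit image tag instead of 'latest'")
    return problems


if __name__ == "__main__":
    with open(sys.argv[1]) as handle:
        deployment_config = json.load(handle)
    issues = validate(deployment_config)
    for issue in issues:
        print("CONFIG ERROR:", issue)
    sys.exit(1 if issues else 0)
```

Run on every proposed change (for example, as a pipeline step invoking python validate_config.py deploy.json), feedback like this arrives minutes after a commit instead of after a failed deployment.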

Incentives

The right incentives align rewards and behavior with desirable outcomes. Incentives (such as advancement, money, recognition) need to reward trust, cooperation, and innovation. The key is that individuals have control over their own success. This is probably a good place to point out that failure is not always a positive outcome. Especially when failure is the result of repeatedly not following established processes and design rules, actions still have consequences.

Culture

I said there were four principles. But actually there are five. A healthy culture is a prerequisite for both successful DevOps projects and successful open source projects and communities. In addition to being a source of innovative tooling, open source serves as a great model for the iterative development, open collaboration, and transparent communities that DevOps requires to succeed.

The right culture is, at least in part, about building organizations and systems that allow for failing well—and thereby make accountability within that framework a positive attribute rather than part of a blame game. This requires transparency. It also requires an understanding that even good decisions can have bad outcomes. A technology doesn’t develop as expected. The market shifts. An architectural approach turns out not to scale. Stuff happens. Innovation is inherently risky. Cut your losses and move on, avoiding the sunk cost fallacy.

One of the key transformational elements is developing trust among developers, operations, IT management, and business owners through openness and accountability.

Ultimately, DevOps becomes most effective when its principles pervade an organization rather than being limited to developer and IT operations roles. This includes putting the incentives in place to encourage experimentation and (fast) failure, transparency in decision making, and reward systems that encourage trust and cooperation. The rich communication flows that characterize many distributed open source projects are likewise important to both DevOps initiatives and modern organizations more broadly.

Changing Culture

Shifting culture is always challenging and often needs to be an evolution. For example, Target CIO Mike McNamara noted in a 2017 interview that “What you come up against is: ‘My area can’t be agile because . . .’ It’s a natural resistance to change—and in some mission-critical areas, the concerns are warranted. So in those areas, we started developing releases in an agile manner but still released in a controlled environment. As teams got more comfortable with the process and the tools that support continuous integration and continuous deployment, they just naturally started becoming more and more agile.”12

It’s tempting to say that getting the cultural aspects right is the main thing you have to nail in both open source projects and in DevOps. But that’s too narrow, really. Culture is a broader story in IT and elsewhere. For all we talk about technology, that is in some respects the easy part. It’s the people who are hard.

Writing in The Open Organization Guide to IT Culture Change, Red Hat CIO Mike Kelley observes how "This shift to open principles and practices creates an unprecedented challenge for IT leaders. As their teams become more inclusive and collaborative, leaders must shift their strategies and tactics to harness the energy this new style of work generates. They need to perfect their methods for drawing multiple parties into dialog and ensuring everyone feels heard. And they need to hone their abilities to connect the work their teams are doing to their organization's values, aims, and goals—to make sure everyone in the department understands that they're part of something bigger than themselves (and their individual egos)."13

Pervasive Open Source

Business models associated with open source software products are important to get right. It takes viable business models that involve, not just using, but contributing back to projects to sustain healthy open source communities. While many individuals are motivated to contribute to open source projects on their own time, the vast amount of open source software powering today’s world depends on corporations contributing as part of a profitable business plan.

Those viable business models exist today, notwithstanding the many challenges to getting them right and the temptation for companies to free ride or otherwise avoid contributing—topics that I’ll cover more deeply in the next chapter. In fact, many organizations have discovered broad benefits to participating in open source software development and even in adopting open source practices in other aspects of their business.
