Chapter 5

Containers, Container Orchestration, and Cloud Native Realities

Please give me some good advice in your next letter. I promise not to follow it.

— Edna St. Vincent Millay

In this chapter, we take on the concepts of containers and container orchestration, which today mostly mean Docker-style containers and Kubernetes container orchestration. The current definition of containers is closest to the way Docker defined them a few years ago, which in turn drew on the concept of containerization that has been around for decades. Container orchestration, simply put, is the ability to take groupings of containers and run them in parallel so that they can do more at the same time. Kubernetes is currently the most popular mechanism for container orchestration and clustering.

We define containers along with container orchestration in greater detail later in this chapter. We also review the term cloud native, along with the varying definitions that have confused me and many others. However, the fundamental concepts of containers could prove invaluable when all is said and done.

There is more hype around containers, container orchestration, and cloud native than any other topic at the time of writing this book, so I suspect many of you turned directly to this chapter. I also suspect these concepts will outlive the hype and become foundational to how we build and deploy cloud solutions. We’re further down the road with container technologies, developers like them, and we’re successful with them.

However, there are several issues to consider, trade-offs that many have not yet encountered. This book continues its role as the dissonant buzzkill around issues we’ll soon face, again giving you the full and complete story rather than just blindly chasing the hype.

Containers

Let’s skip yet another comparison of containers and virtual machines (VMs); you can find that information in many other places. Instead, let’s look at specific values that containers bring to the cloud computing party and why you might want to leverage containers along with container orchestration. Keep in mind that containers are nothing new. Like most current technology, they’re built on past technological innovations and use cases that led to the industry trusting today’s containers as desired application development and deployment mechanisms.

The definition of containers started with a few PhD theses and IEEE articles, and then evolved through J2EE containers as the Web grew and Java needed to scale. J2EE is an open standard based around the Java programming language that defines how containers are to be created and executed using any J2EE application server. Many of the benefits of J2EE technology live on in the containers you see today, including the ability to provide clustering for scalability and reliability benefits.

As mentioned earlier, the Docker set of Platform as a Service (PaaS) products most closely follows the current definition of containers. So, the core benefits of containers as they exist today include the delivery of

  • Consistent and isolated environments. You create a platform within a platform that is the same no matter what platform or what cloud it runs on. Because the applications and sometimes the data are isolated, they can be consistently leveraged without regard for the native operating system host where the container runs (see Figure 5-1).

  • Cost-effectiveness. The open-source evolution of containers and container orchestration, along with economical tools and processes, makes containers more cost-effective than other technologies that provide the same features, such as portability and scalability. Note that cost-effective does not mean less expensive. We cover more about costs later.

  • Fast deployment. Containers and container orchestration provide consistent ways to test and deploy containers on cloud and non-cloud platforms.

  • Portability. This is the big selling point. Containers can run across different platforms that support containers. Thus, you should be able to write an application once, deploy it as a container or set of containers, and that application should be portable across many different cloud-based platforms.

  • Repeatability and automation. The development process is consistent from container to container, including many of the same tools and skill types.

  • Flexibility. The platform is very customizable, allowing developers to take containers in as many different directions as needed to accommodate the requirements of the business.

  • Wide support. Containers have a huge ecosystem of third-party providers for everything from container-based databases to container-based security. You can almost always find a solution that you don’t need to develop yourself.

Containers are not very difficult to understand as an architecture. Consider Figure 5-1, which defines the basic structure of a Docker container. This includes the application that resides and runs in the container, binaries and libraries (Bins/Libs) that support the runtime, and even a minimal OS layer within the container that provides basic services and passes requests on to the host operating system.


FIGURE 5-1 Containers leverage a wide variety of architectures. However, most provide isolation, meaning that they can operate as a separate operating system and platform inside of several host operating systems, such as those provided by cloud platforms. Of course, leveraging container orchestration layers for clustering operations can provide scalability and redundancy.

Containers need a host operating system to run, which can be on a cloud provider or not, and some sort of processing to support the host operating system, including basic infrastructure such as storage, compute, and networking: anything that traditional processes require to function. The idea of a container is to provide a layer of abstraction between the native operating system, the native cloud provider, and the application and data that define the business solution.
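To make that structure concrete, here is a minimal, hypothetical Dockerfile sketch. The file name, base image, and application are illustrative assumptions, not anything from the text, but the layering mirrors the architecture just described: a slimmed-down OS layer, supporting binaries and libraries, and the application itself.

```dockerfile
# Hypothetical sketch of the container layering described above.

# The base image supplies the slimmed-down in-container OS layer
FROM python:3.12-slim

WORKDIR /app

# Bins/Libs: dependencies installed into the container image
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# The application that resides and runs in the container
COPY app.py .

# What runs when the container starts, regardless of the host platform
CMD ["python", "app.py"]
```

Building this image produces the same isolated environment no matter which host operating system or cloud eventually runs it, which is the consistency benefit listed earlier.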

This container mechanism supplies a consistent and isolated runtime environment that allows for portability, which was the initial selling point of containers. However, this portability soon led to the understanding that containers could be much more. Indeed, they can provide an enhanced runtime environment for applications that can be better than a standard OS, including the ability to cluster containers using container orchestration layers, such as Kubernetes, that provide automation and the ability to scale. And that’s why containers are popular today and will be long after this book is published.

What Works

For the most part, containers deliver good support to the development community during the move to cloud computing. With containers providing a base architecture, they changed the way we think about software development in the cloud as container orchestration and Kubernetes provided mechanisms for scalability and reliability. The encouraging part is that containers were more of a grass-roots movement built on open-source origins. Thus, there were no big-name backers of containers trying to sell them as “their solution.” However, some of the larger brands did jump in feet first to promote containers and container orchestration. Those were the smarter players. Others saw no obvious way to make money and ignored containers. Those players ended up playing serious catch-up later in the market evolution, mostly by selling containers and container orchestration as public cloud services.

Containers now provide a sound development and deployment technology for the cloud, which in turn promotes the use of cloud-based platforms. Given their impact on the marketplace, we can consider containers a success. Indeed, the “cloud native” movement, as we discuss later in this chapter, was largely built on containers, container orchestration (Kubernetes), and microservices. All are tightly coupled technologies.

So, the core list of what works with containers includes

  • A standard development and deployment platform for applications.

  • Built-in application portability approaches. While not perfect, containers provide better portability than more traditional deployment approaches.

  • Cost-effective solutions. Containers are usually less expensive than traditional development and deployment technology.

  • A large ecosystem of technologies that support containers such as databases, storage, backup and recovery, and security, many of which are now several generations old.

Containers: What Doesn’t Work

The downside of leveraging containers includes issues around their rapidly increasing popularity, and thus the hype that surrounds containers. Like any other technology, containers are not right for all applications. Existing systems need to be carefully analyzed before they get “containerized.” Remember the old adage: Just because you can doesn’t mean you should.

Today the core downside of containers is the overapplication of container development and the overporting of existing applications to containers. The reality is that containers are contraindicated for many applications and application types. However, to date, some businesses saw their way clear to spend twice as much moving an application to containers as they spent to develop the application in the first place. They also chased the benefit of portability when the application was unlikely to ever move from its current host platform. Moreover, they did not understand that to truly take advantage of what containers have to offer means a complete rearchitecture of the application, which they typically didn’t do.

Of course, the mistakes extend to net new applications as well, that is, new applications that leverage containers as their development approach of choice. Again, many applications won’t find advantages with container-based development. As we cover in the next section, enterprises could spend as much as four times the money to build the same application if container-based development and deployment are used versus more traditional methods. Moreover, the end product could cost more to operate by using more cloud-based resources, such as storage and compute. In many of these situations, containers were force fit due to the decision makers being motivated by the hype surrounding containers, not understanding the business problems, or not caring about the problems, or all three.

So, here’s what doesn’t work (or rarely works) with containers:

  • Listening to hype reasons, and not business reasons.

  • Overstating the benefits, such as portability.

  • Not considering the operational costs.

  • Underestimating the amount of time and money needed to move an existing application to a container.

  • Not designing an existing application or a net new application as a purpose-built containerized application.

  • Not being able to forecast future costs based on the short life cycle of a container.

  • Failing to account for the increased price of container development, deployment, and operations talent due to its popularity.

Container: Cost Considerations

The obvious key to figuring out what will and won’t work is to determine the cost of leveraging containers. This includes the cost of development, deployment, and operations. Consider these costs against more traditional and commodity-type costs for similar development, deployment, and operations. The gap between these two choices is the “container tax.” It’s the additional money you’ll spend just to leverage containers.

Make sure the benefits outweigh the costs. Containers are a good architectural choice for the right projects. However, consider the benefits you want versus the benefits you need with a clear eye. Are the odds high, medium, or low that your business will use a hyped tech’s costly but nice-to-have features? There should be at least a medium to high probability that the enterprise will see a return on its investment in any technology.

Portability, for example, is something that many believe is a great feature to have in their back pocket because it removes the risk of provider lock-in. Let’s say you leverage a cloud provider using an application that’s tightly coupled to the cloud provider’s services and APIs. If the cloud provider raises prices, or lowers the quality of services, or if the platform is no longer a good fit for other reasons, a tightly coupled application will require additional costs and development time to port the application, and that translates into business risk.

An application that runs within containers and is actually built using containers provides better portability. That puts the application in a better, more portable state that can be easily and cheaply moved to another cloud provider.

However, it’s rarely a clearcut choice. Applications that leverage containers cost about 40 percent more to build. Container development prices are higher, as are the tools and technology prices, and, on average, it just takes more time to build an application using containers compared to more traditional approaches and tooling. Moreover, containers cost about 20 percent more to operate on a public cloud provider. These are costs you hope will be justified due to the lock-in risk they avoid, which should provide the ability to quickly move from cloud to cloud without a great deal of cost. But you must weigh these extra costs against the likelihood of that move ever occurring.
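The break-even arithmetic implied here can be sketched in a few lines. The 40 percent build premium and 20 percent operating premium come from the estimates above; the dollar figures, the avoided migration cost, and the probability of ever moving are purely hypothetical assumptions for illustration.

```python
# Back-of-the-envelope "container tax" sketch. The 40% build premium and
# 20% operating premium are the figures quoted in the text; the baseline
# costs and the avoided migration cost are hypothetical.

def container_tax(build_cost, annual_ops_cost, years,
                  build_premium=0.40, ops_premium=0.20):
    """Extra money spent over `years` just to leverage containers."""
    return build_cost * build_premium + annual_ops_cost * ops_premium * years

def worth_it(build_cost, annual_ops_cost, years,
             avoided_migration_cost, move_probability):
    """Crude test: expected lock-in savings versus the container tax."""
    tax = container_tax(build_cost, annual_ops_cost, years)
    expected_savings = avoided_migration_cost * move_probability
    return expected_savings > tax

# Hypothetical example: $500K build, $100K/year to operate, 5-year horizon.
tax = container_tax(500_000, 100_000, 5)
print(f"Container tax over 5 years: ${tax:,.0f}")  # $300,000

# A $1M migration avoided, but only a 20% chance you'd ever move:
print(worth_it(500_000, 100_000, 5, 1_000_000, 0.20))  # False
```

In this made-up scenario the container tax outweighs the expected lock-in savings, which is exactly the kind of application-by-application judgment the text argues for.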

The cost issues with containers are not only around the costs to build them; those are not that much higher. It’s the ongoing increased costs to operate them. That means you need a solid business reason to leverage containers in the first place.

When I’m called into projects after containers are preselected as the development and deployment technology of choice, I usually discover that few if any of the team’s reasons to leverage containers justify their use for either net new or the “containerization” of existing applications. This leads to many frowny faces when I make the unpopular announcement that containers are contraindicated for the project. Unfortunately, in many cases, the teams move forward with containers despite my feedback and just ignore the extra cost of development and operations. Of course, containerized or not, the application works just fine. However, one approach costs the business 20–40 percent more for the same application with ongoing cost overruns. Obviously, that expenditure could shortchange vital developments in other areas of the business.

More often, enterprises evaluate the use of containerization on an application-by-application basis, as it should be. Some applications will cost justify the use of containers, either net new or existing. Many will not. You need to cost justify every architectural or development decision to meet the core objective of supporting the needs of the business. The day we put technology trends over business needs is the day we should look for another job. Next week, next month, or next year, at some point you will be asked to cost justify your decisions. “All the cool kids were choosing containers” is the wrong answer.

Containers: Make the Tough Choices

Containers are an amazing technology and a concept that keeps evolving. It’s an architectural pattern that allows us to compartmentalize applications and data for portability and design simplicity. It comes with the added benefit of providing “cloud scale” capabilities, which means we can scale to any number of users or processing loads and we can perform almost any number of operations.

But containers and container orchestration are not always slam-dunk choices. The business case must justify the need to leverage containers and include how it will benefit the business ongoing.

Although the trends will become a bit clearer as time progresses, containers are or will become a part of all cloud computing architecture. However, we should leverage a technology such as containers only when there is a definable business benefit.

Container Orchestration and Clustering

Again, let’s not get into the details around container orchestration, which we typically refer to here as Kubernetes container orchestration. You can go other places to find more detail on the topics of containers and container orchestration, or even entire books written on these topics. Instead, let’s have a pragmatic discussion about the issues you need to consider with the use of this technology. However, know that references here to container orchestration include Kubernetes clustering services.

Initially, there were a few container orchestration players. Now, almost everyone is on the Kubernetes bandwagon, and for good reasons. As mentioned earlier, the Kubernetes platform is also open source, providing core services for container scheduling and for creating and operating container clusters. It’s well designed, tested on public cloud providers that need to scale, and supported by a large ecosystem of third-party products that provide everything from databases to security to operations administration.

My intent is not to push a specific container orchestration tool here. It’s more important to understand the concept of container orchestration and how it may or may not fit before even considering its use. Kubernetes provides a backdrop for the explanations of that concept, and it’s currently the container orchestration tool of choice for today’s container development projects.

Too often cloud architects move to technologies such as Kubernetes because of good press and popularity; Kubernetes is even offered by most cloud providers on demand. However, these architects end up spending more money and time than required, and that hurts the business. Worst case, you get found out and fired. The more likely scenario is that you just built something that diminished the value of the business. Pull off many more of those mistakes and you’ll end up looking for another job because your enterprise will sink under excessive IT spending and the lack of value returned to the business. Remember our teachings from Chapter 1, “How ‘Real’ Is the Value of Cloud Computing?”

For all practical purposes, container orchestration (which includes the use of nodes and clusters, see Figure 5-2) provides each container with the ability to virtually execute in its own private operating system rather than leverage static hardware and partitions. Many people compare containers with virtual machines, such as those offered by VMware and others. However, they are very different animals. Containers do not require a static partition of compute, storage, and memory to function. Consequently, the administrators are able to deploy a bunch of containers on servers without having to concern themselves with the limitations of physical hardware such as memory.


FIGURE 5-2 Container orchestration provides the ability to run many different or like containers in clusters that provide better scalability and flexibility. Container orchestration can support most platforms, most clouds, and most application development requirements. However, as with any technology, there is a right and wrong time to leverage orchestration. Understand the trade-offs before you embark on a business problem-solving journey with this technology.

Kubernetes uses container orchestration and clustering systems that can launch, manage, and kill containers with ease on any number of platforms (see Figure 5-2). Although we’re talking about container orchestration and clustering in the abstract, common patterns are beginning to emerge and will likely continue to emerge, such as

  • A master and slave relationship between a control node or nodes (the masters) and the nodes that carry out the work (the slaves).

    • The master nodes oversee the container clusters, handling duties such as scheduling and providing API access to the cluster. The master node basically oversees the slave nodes that do the work.

    • The slave nodes carry out the execution of containers and manage them during the operational state on the platforms they are assigned to run on. In the slave nodes, containers are launched and killed based on support requirements from the master node. Slave nodes functionally allow the many containers in the cluster to work together around a programmed task.

    • A container cluster is a dynamic system that places and manages containers, grouped together in pods, running on nodes, along with all the interconnections and communication channels. You can consider a cluster to be a grouping of containers that together carry out a task, either by replicating containers that do the same thing, which allows for scalability by distribution, or by running containers that do different things. Of course, this capability depends on the needs of the application/solution you’re looking to build using this technology.

  • Other services such as security, databases, and governance are carried out inside and outside of the container orchestration layer. This capability provides developers with the advantages of plugging in whatever service they believe is the best fit for their container orchestration solution.

  • This model supports (of course) fine-grained distribution. Indeed, containers can execute on any number of platforms; you could run 10 container instances on one platform and perhaps 4 on another. The number you use depends on what you need those containers to do, and then you can mix and match the containers and platforms to meet your exact needs.

  • Scalability and resiliency are additional benefits of this architecture, enabling technology, and deployment approach. You can program orchestration layers to automatically launch additional slave nodes and containers within those nodes to handle a dynamically increasing workload. The same mechanism can also work through partial outages by failing over to another slave node, or it can launch additional slave nodes to replace the nodes in outage.
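The node-and-cluster pattern above is typically expressed to the orchestrator as a manifest. Here is a minimal, hypothetical Kubernetes Deployment; the application name, image, and replica count are illustrative assumptions, not anything from the text.

```yaml
# Hypothetical Deployment: the control plane (the master) schedules these
# containers onto worker nodes, relaunches them if a container or node
# dies, and scales the replica count up or down on request.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service              # hypothetical application name
spec:
  replicas: 3                      # three cloned containers for scale/resiliency
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
        - name: order-service
          image: example.com/order-service:1.4   # hypothetical image
          resources:
            requests:              # no static hardware partition, just requests
              cpu: 250m
              memory: 256Mi
```

Submitting this manifest asks the cluster to keep three copies of the container running on whatever worker nodes have capacity, which is the mix-and-match distribution described in the bullets above.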

Figure 5-3 looks at how a container orchestration system functions as an architecture. You can see that these systems use a hierarchy of master nodes that manage slave nodes, which in turn manage the container instances, including those that run the business solutions. The solutions can include databases, applications, and security, and can operate across any number of platforms that support the execution of the containers.


FIGURE 5-3 Container orchestration and clustering typically work by using a hierarchy of nodes or actors that can work from the higher-level controllers, the masters, to the lower-level slaves that carry out the work at the direction of the master. Although you’ll see variations of this structure (depending on which technology you leverage), most container and orchestration solutions (such as Kubernetes) work in much the same way.

You also need to consider the value of declarative infrastructure programming. Declarative programming approaches just worry about the “what” we want to do or carry out in the program. This is you, the developer, declaring the desired state that is to be reached, which thus defines the output of the program without worrying about the steps needed to get to the desired output. In other words, you don’t worry about “how” something is done. The steps needed to make it happen are abstracted from you, thus making your programming tasks much easier to get to a desired output and state.

This approach contrasts with imperative infrastructure programming, which, simply put, describes “how” an object state should change and the order in which those changes should execute to get to a desired output and state.

Container orchestration tools allow you to declare what the end state and output need to be, and they work out how to get to that state and solve problems they might encounter along the way. You can focus on “what” you need to solve a business problem and not “how” specifically to solve the problem.
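As a concrete sketch of declarative infrastructure programming, consider a hypothetical Kubernetes autoscaling policy; every name and number here is an illustrative assumption. You declare only the desired end state, and the orchestrator works out how to reach and maintain it:

```yaml
# Declarative desired state: "keep this service between 3 and 10 replicas,
# targeting roughly 60% CPU." When to launch or kill containers (the "how")
# is left entirely to the orchestrator.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: order-service-hpa          # hypothetical names throughout
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: order-service
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60
```

The imperative alternative would be scripting the individual steps yourself, for example, watching CPU metrics and issuing explicit scale commands in the right order.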

Container Orchestration: What Works

Containers and container orchestration generally live up to the expectations and hype. They offer cost-effective ways to build highly scalable and reliable applications on cloud or non-cloud platforms, and they provide portability within and between platforms. Vast ecosystems have grown around container orchestration, including security systems, databases, governance systems, operations systems, and other systems needed to meet the specific requirements of an application that leverages this technology.

The ability to run containers that leverage redundancy, meaning either running cloned containers within a cluster manager or running more than a single cluster, provides both scalability and resiliency. Developers use containers to define an application that can solve business problems and then leverage container orchestration and clustering to effectively execute the containers and manage most aspects of the execution. Those magical capabilities are why most enterprises now choose container orchestration and clustering to build net-new cloud applications, as well as to redesign and rebuild applications to function on public clouds.

Of course, all this magic comes with trade-offs. Traditional development may still have more value over containers or container orchestration. Figure 5-4 depicts what most enterprises encounter when they adopt containers. The first trade-off is that, as mentioned earlier, containers cost more to build and deploy than traditional development approaches. It doesn’t matter if the application is net new or an application that must be refactored to move to a public cloud platform. There are exceptions, of course, but they are not the rule.


FIGURE 5-4 When comparing container-based development with or without orchestration and clustering, we must consider that the initial value is much lower due to the additional costs needed to build well-designed containerized applications. Over time the value increases and greatly surpasses that of traditional development approaches on clouds.

On the magical side of the argument, the longer the containerized applications exist, the more business value they can create. This is often preceded by a dip in value for the containerized applications as enterprises go through an initial learning curve. Eventually, the enterprise will find the value drivers of the containerized applications, such as portability, scalability, and resiliency, and the recognized value inflects upward, as you can see in the value curve for container-based development in Figure 5-4. At the same time, traditional development-based applications slowly decline in value over time, much as they did in the past.

If you’re sold on containers and the value they can bring to the business, first consider the added costs and then the longer-term benefits container-based development will likely bring. Many just assume that containers will be the best way to go. Perhaps, but the value this technology can bring will not be consistent from application to application. The values of portability, scalability, and resiliency are not the same for each application.

As a rule, container-based application development costs more and takes more time. You’ll need skilled developers and architects to do a container-based application right versus more traditional approaches that we’ve leveraged for years. Do you have that talent on staff? One skill set is new; the other is not. New skills are usually more expensive to find, develop, and/or hire, and this is no different.

Keep an eye on the values when you do a business case. The additional build costs, staffing requirements, and learning curve of container-based applications remove value. Be prepared to put specific values on the benefits of scalability, portability, and resiliency. The value curve will determine if containers are worth it for your specific application or the enterprise itself. If you find the value, then containers will work for you.

What Doesn’t Work

In two words, what doesn’t work: operational complexity. Containers bring additional complexity, and that means you need abstraction and automation tools to deal with the complexity. This is not an unsolvable problem, but you need additional resources to run most containerized applications versus those that are more traditionally created.

Why? Containers and container orchestration systems are new platforms that must be managed in addition to existing platforms that may or may not reside on public clouds. Container systems are not more difficult to operate; many could be easier to use when you consider the available operations tools. It’s that you must operate them in addition to the traditional systems, processes, and applications that still exist. This makes operations more complex, thus more involved and expensive to carry out.

In other words, containers are typically an additive technology, meaning they don’t replace an existing technology. The containerized solutions require new skill sets and possibly new staff to operate and manage the new tools that support containerized applications, which are usually net new as well. As the number of moving parts rises, the complexity and the cost of managing that complexity increase ongoing.

As the boom in container-based development and the number of container orchestration instances continue to rise, additive system issues will continue to fall away. However, today’s containerized applications make things more complex; thus, operations will be more difficult and more costly for now.

Obviously, the complexity impact on your specific applications and business requirements will vary. Here we’re talking about what’s generally true without illustrating the myriad of exceptions. I urge you to do your own business cases to determine your specific impact. It’s a wide variable.

Figure 5-5 shows what we currently see when it comes to complexity and the use of containers. As the number of container-based applications grows, with or without the use of container orchestration, the amount of complexity and the costs incurred from increased complexity rise as well. At a certain point, between three and six years of operation, the complexity curve levels out and even begins to decline. Again, this will happen as enterprises become more adept at managing their new container-based solutions, or when the traditional applications that must also be maintained retire and are removed as an operational responsibility. That’s the good and bad news here: You must first endure the bad news on your journey to containerized solutions.


FIGURE 5-5 The more containers an enterprise leverages, the more complexity costs they must endure. Eventually, the enterprise learns how to deal with the additional complexity, and operational complexity levels out.

Other downsides of leveraging containerized solutions include finding the skills you need to build, deploy, and operate them. Building containerized solutions correctly requires training and experience to make the right calls about which technology to leverage. Many enterprises that can’t find the talent either settle for less-qualified developers and designers or delay moving to containers until the skills can be found. Either path often leads to additional costs and risk. In many instances, the skills acquisition issues around containers are not even considered as part of the cost/benefit analysis to determine whether the technology is a good fit for the business. That’s not the way to find value for the business.

Container Orchestration: Cost Considerations

Much of what we called out earlier in the “Container: Cost Considerations” section of this chapter is applicable here as well. This includes the costs of skills, tools, and the time it takes to design and deploy a containerized application versus more traditional approaches. Your “container tax” may or may not be significant. Regardless, leveraging containers and container orchestration will cost you more than a traditional path.

So, let’s look specifically at a few interesting container orchestration trends that affect costs. As you can see in Figure 5-6, as enterprises leverage more container orchestration such as Kubernetes, the costs are initially very high, but they decline over time.


FIGURE 5-6 As the use of container orchestration increases, the relative costs should significantly drop.

It typically takes two to four years of container orchestration use until the costs decline enough to make it easier to define the hard values of using containers. As you may recall from Chapter 1, these hard costs are easy to understand, such as direct costs associated with operations and security. However, just as we defined in Chapter 1, the true benefit to the business is aligned more to the soft values, that is, the business abilities gained from the use of container technology. These new abilities include agility, compressed time-to-market, and innovation. These are all common soft values enterprises find with the use of containers and container orchestration.

It’s important to understand the costs and the trade-offs of using container orchestration technology, which are depicted in Figure 5-6. The curve angles will vary depending on the type of business involved, the types of problems to solve, and the hard and soft values benefits for a particular business operation. It’s the old “it depends” answer when it comes to costs, and how costs align with business value for each enterprise. This is an individual journey, but your best jumping-off point is to understand what’s generally true, as outlined here.

Container Orchestration: Make Tough Choices

Following the crowd is often the easiest thing to do, but it’s rarely the correct thing to do. Consider why IT exists in the first place: to bring value to the business. However, I’ve been in IT for a very long time, so I suspect that many of you are already off to containers and container orchestration and have not yet considered the business benefits of doing so. If that’s you, you’re not alone. In the old days I called this “management by magazine.” Others call it “chasing the hype,” adopting a technology because it’s popular, without first considering the desired outcome.

It takes a bit of courage to understand the core purpose of leveraging any technology and how that technology stacks up with your business and technical requirements. If that technology does not fit, it takes more courage to push back on the hype and look for better solutions that will bring more value to the business.

Containers and container orchestration and clustering are valid solutions for many purposes and should always be considered not only for their direct value but also for the value that they are likely to bring in the future. In many instances, they are the correct and optimized solution to leverage, but in some cases they are not. It’s all about doing the up-front work to ensure that the right decisions are made, and your cloud solutions are as optimal as they possibly can be.

Cloud Native

Cloud native computing (or cloud native) is a buzzword term that means slightly different things to different people.

At first, most people referred to proprietary services as cloud native: If we leverage storage and compute cloud services on any public cloud provider, then those cloud services are native to the specific cloud provider that hosts those native services. Or, a native cloud service runs on a specific public cloud provider and nowhere else, and you must leverage that specific service under the terms provided by that public cloud provider.

With that interpretation, cloud native applications are applications that leverage native services on a specific cloud provider. The services are optimized for that provider by using the native cloud services. Applications that use those services are pretty much locked into that specific provider given that they leverage services that don’t exist elsewhere on other clouds. So, in this context, cloud native allows us to create cloud native applications that are optimized for a specific cloud provider. The trade-off is that you give up easy portability to other providers. Let’s call this the classic definition of cloud native.

As the term cloud native started to take off, several cloud and non-cloud vendors “cloud-washed” their marketing and proprietary products to hop on the cloud bandwagon. Confusion naturally arose as they stated that their products were either cloud native or could build cloud native things. Personally, I believe search engine optimization (SEO) was the real reason they used those cloud-washed terms without regard for the confusion that arose, or the reality that their technology often met no one’s definition of cloud native. I call this the cloud-washed definition of cloud native.

A wider definition of cloud native recently emerged that seems to be the new normal. The Cloud Native Computing Foundation (CNCF) defines cloud native as “technologies [that] empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds.”1 Or, perhaps better put, cloud native applications can be deployed across multiple cloud environments, which is core to the CNCF cloud native proposition. I call this the CNCF definition of cloud native.

1 https://github.com/cncf/toc/blob/main/DEFINITION.md

I pulled a direct quote above, which I don’t do often in this book. The purpose here is to show you how others define cloud native and allow you to come to your own conclusions. The goal is to define cloud native and its value in ways that you may be able to weaponize. Or, how to leverage cloud native to a pragmatic end when building and deploying cloud-based solutions.

What’s the Right Definition of Cloud Native?

What concept should we be chasing as solution developers when we build and deploy next-generation cloud computing systems? Or: What the hell is cloud native, and why should we care?

Given the way the term has evolved, and how it will likely continue to evolve, what definition describes the value that you can employ, and what technology stack does that definition promote? The original and cloud-washed definitions are still out there, and they’re probably not what you’re looking for when you consider cloud native development for your enterprise. Be sure you know exactly what you sign up for. Just as in the early days of cloud computing, when we experienced “cloud-washing,” what is called cloud native may not be what you’re seeking. It’s perhaps the most overused and ambiguous term I’ve seen in this industry.

Today’s CNCF cloud native applications are built around containers, container orchestration, and microservices, and focus on the architecture of the applications more than the clouds they will deploy on. CNCF cloud native commoditizes public cloud providers by building applications above the clouds and then leveraging the public clouds as mere service providers to support the applications. This approach stands in contrast to the classic definition of cloud native that required building walled gardens within one provider’s cloud that limits the types and selection of services you can use.

We’ll use the CNCF definition of cloud native for the remainder of this chapter. It works with the larger idea that CNCF cloud native applications provide dynamic and scalable application attributes, functioning on many platforms that include public cloud providers, private clouds, and even traditional computing systems (legacy). The enabling technology stacks that CNCF cloud native promotes, as we stated previously, are containers, container orchestration, and microservices. The objective of leveraging this technology is to avoid cloud and platform lock-in.

The platforms in this CNCF architecture are foundational. However, foundational clouds don’t typically provide services directly to the applications. Instead, you’ll work through well-defined APIs that allow the underlying platforms, both cloud and traditional, to be abstracted away from the application. This provides better portability and should do so with very few trade-offs.

Figure 5-7 is a good representation of this idea. The top of the stack is where development, delivery, and deployment occur. Or, where the developers leverage a cloud native architecture. Moving down the stack, you’ll notice that developers and applications communicate with the container and container orchestration layers using microservices. These microservices define discrete interfaces that developers and applications can consistently leverage. They also remove the applications from any dependency on a specific proprietary service because they consistently use the same APIs for any service the application may need. Containers and container orchestration services (clustering) carry out the heavy lifting of application processing, and they leverage the underlying platforms (cloud and not cloud) to provide the primitive platform services (storage, compute, databases, networking, security, and so on). With this architecture, the public cloud providers (all platforms really) are no longer at the center of the architecture but are mere compute and storage service providers.
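The abstraction idea described above can be sketched in a few lines of Python. This is a minimal illustration, not any real provider SDK: `ObjectStore`, `InMemoryStore`, and `archive_order` are hypothetical names, and a production system would place provider-backed implementations (public cloud object storage, on-premises storage, and so on) behind the same discrete interface.

```python
from abc import ABC, abstractmethod


class ObjectStore(ABC):
    """Discrete storage interface the application codes against.

    The application never calls a provider SDK directly, so the
    underlying platform can be swapped without touching app code.
    """

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class InMemoryStore(ObjectStore):
    """Stand-in backend for local development and tests; real backends
    would wrap a specific cloud or legacy platform."""

    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]


def archive_order(store: ObjectStore, order_id: str, payload: bytes) -> None:
    # Application logic depends only on the abstract interface,
    # never on which platform sits underneath it.
    store.put(f"orders/{order_id}", payload)


store = InMemoryStore()
archive_order(store, "1001", b'{"total": 42}')
print(store.get("orders/1001"))  # b'{"total": 42}'
```

Swapping the platform then means swapping the `ObjectStore` implementation, which is the portability claim of this architecture in miniature.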


FIGURE 5-7 CNCF cloud native moves the application architecture above the public clouds, private clouds, and legacy systems to promote the best performance with better portability. This architecture is conceptual but promotes the use of “open” application development and deployment using a DevOps process and toolchain that leverages continuous integration, delivery, and deployment. In turn, DevOps processes leverage microservice interfaces with containers and container clusters that carry out the application functions using underlying cloud platforms.

This version of a cloud native architecture is nothing new. When containers and microservices came onto the scene several years ago, developers and application designers figured out that this version of a cloud native architecture was the best path. Thus, before this was labeled as a cloud native architecture, it was a best practice. There is no magic here, and this version of a cloud native architecture does come with certain advantages, including

  • The use of open systems throughout; thus, this solution tends to be more cost effective.

  • Portability is built into this version of a cloud native architecture; you don’t have to constantly worry about leveraging vendor proprietary features that will effectively lock in your application.

  • Scalability is built into this architecture as well, allowing container orchestration to scale the applications as needed, without changing the underlying architecture to accommodate scalability. It’s always there.

  • The use of public clouds as just service providers that are largely decoupled from your application deployment means you can switch out public clouds that do not provide the best pricing and/or do not behave in the best ways. While this makes cloud providers a bit nervous about the future impact of this version of a cloud native architecture, the reality is that the cloud providers will likely sell more services without having to invest in their platform’s customer interface. The cloud providers no longer deal with the user interfaces; instead, they deal directly with the APIs that are native to a specific cloud provider.
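To make the scalability point above concrete, here is the proportional scaling rule that horizontal autoscalers such as Kubernetes’ Horizontal Pod Autoscaler apply: the replica count scales with the ratio of observed to target utilization, clamped to configured bounds. The function name and the numbers below are illustrative.

```python
import math


def desired_replicas(current: int, utilization: float,
                     target: float = 0.5, min_r: int = 1,
                     max_r: int = 10) -> int:
    """Proportional horizontal-scaling rule: scale the replica count
    by observed vs. target utilization, then clamp to bounds. This
    mirrors the idea behind Kubernetes' Horizontal Pod Autoscaler."""
    raw = math.ceil(current * utilization / target)
    return max(min_r, min(max_r, raw))


print(desired_replicas(4, 0.9))  # 8: overloaded, scale out
print(desired_replicas(4, 0.2))  # 2: underused, scale in
```

The application never changes; the orchestration layer adjusts capacity around it, which is exactly why scalability is described as “always there” in this architecture.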

Portability Argument

The portability benefits of being cloud native and building cloud native applications are the first things you’ll hear about from those who support this type of cloud native architecture and technology stack. As I mentioned previously, because we abstract the cloud providers from the applications using a layer of technology that deals with the differences, the applications are truly portable. We can move them from cloud provider to cloud provider at will, when needed, and this architectural approach supports whatever the business needs to do.

There are a few downsides to this argument.

First, few applications ever leave the platforms they were developed on. We like the potential of built-in portability, and this benefit usually comes up many times when we design applications and pick the architecture. However, for all the time and money we will spend to avoid lock-in, that benefit may never materialize.

Second, is the cloud native (CNCF version) stack not a platform unto itself? We define portability as the ability to move between cloud providers by leveraging a cloud native stack to abstract the cloud providers from the application. Although we can indeed move our applications between cloud providers somewhat easily by not being locked into the provider, we’re still locked into the architecture we selected, in this case, a cloud native architecture. If you need to move to another architecture, say a proprietary architecture that provides some new key features that would bring value to the business, you’ll need to rewrite most of the systems and subsystems built for your cloud native architecture.

Most would chuckle at this scenario, considering this CNCF architecture is built to be portable across cloud providers and the original goal was to avoid lock-in. However, having done this for many years, more than once I’ve made application design and architecture decisions to avoid vendor lock-in only to find that the architecture and development approach I chose led me to a configuration that was hard to move to another vendor’s platform.

Finally, it costs more to avoid lock-in. I cover this later in greater detail, but portability does not come cheap. A term used among developers is a “portability tax,” or the extra money you’ll spend to get something that minimizes lock-in. Note: We use the word minimize, and not eliminate because lock-in can never be eliminated, just reduced, no matter what architectural approach you leverage.

Optimization Argument

Optimization of this CNCF architecture, or any other architectural approach, means providing the best performance and functionality with the smallest price tag. Thus, we need proof that this architecture is the best approach, with the understanding that it’s the closest we can get to being fully optimized.

There is less historical evidence around optimization related to cloud native architecture. However, we know enough about optimization itself that cloud native costs in time and/or money should factor into the decision-making process to determine whether a cloud native architecture and cloud native applications are truly the best choice for the enterprise. Business value is always the top objective.

Figure 5-8 shows our optimization curve again, this time with the added dimension of cost in the curve moving up and to the right. Note that we typically have a sweet spot of investment being made. Too little investment shows less optimization, and thus less value is returned to the business. On the optimization side of the curve, we can see that too much investment reduces the value returned to the business because we spent (and often continue to spend) too much on the technology and that dilutes the value. Thus, you want to target a point in the middle where the investment made at a specific level optimizes its business value.
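The shape of this curve can be sketched with a toy model. Every parameter here is a hypothetical assumption, not empirical data; the point is only the shape: value grows with diminishing returns as investment approaches a sweet spot, then overinvestment dilutes it.

```python
import math


def business_value(investment: float, sweet_spot: float = 5.0) -> float:
    """Toy value-vs-investment curve: diminishing returns approaching
    the sweet spot, then a dilution penalty for overspending.
    All parameters are illustrative, not empirical."""
    growth = 1 - math.exp(-investment / sweet_spot)      # diminishing returns
    dilution = max(0.0, investment - sweet_spot) * 0.05  # overinvestment drag
    return max(0.0, growth - dilution)


# Value rises toward the sweet spot, then falls off as spend keeps growing
for spend in (1, 3, 5, 8, 12):
    print(spend, round(business_value(spend), 2))
```

Modeling your own curve means replacing these made-up constants with estimates for your business, but the exercise of locating the sweet spot is the same.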


FIGURE 5-8 When considering the value of any architecture, including a cloud native architecture, you need to look at optimization as you increase the investment. You’ll find a point along the curve that’s close to being fully optimized. Obviously, that’s where you want to be.

Although this value curve relates specifically to cloud native architecture and application development, it can be applied to any number of technology solution patterns. It’s not just about finding where the technology is fully optimized relative to the investment, but how the curve compares against other development and architectural options, even proprietary cloud provider development that leverages proprietary tools and deployment platforms. Also consider architectures that are more in the center, with a mix of proprietary and open technologies that you can mix and match as needed. Most cloud-based solutions fall in the center. They allow you to mix and match technologies to find the right solution, but you must do so on a particular vendor’s platform.

We can’t really talk about optimization without talking about the cost of the overall investment (covered next). But the architectural choices really come down to your application requirements, as related to your business requirements, and how they should fit together to form a whole. There are a lot of technology and architectural choices out there, cloud native and many others. Don’t fall into the trap of just moving along with the crowd, no matter how compelling the arguments are to use some hyped approach or technology. Doing so will quickly lead you to underoptimized solutions, and your decisions could easily end up damaging the business. Don’t be that person.

Cost Argument

Believe it or not, cost is often overlooked when selecting the best development and architectural approaches. Or, should I say, we overlook the actual costs versus the known costs? It’s easy to consider the hard costs, or what investment it will take to put an architecture and technology stack into production over the years. The soft costs often prove more valuable in the long view, but they are more difficult to identify at the outset.

For example, using a cloud native architecture to build a set of applications could provide 20 percent more agility over a five-year period. That scenario might translate into a $20 million windfall for the business because the business can quickly adapt to market and customer changes, and quickly retool to build products that are better aligned with the market. The trouble is that the value of agility for your specific business is rarely easy or simple to determine. However, this is more valuable than any hard value that you will define and thus needs special attention.
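The back-of-envelope arithmetic behind a scenario like this one is simple; the revenue figure below is a hypothetical assumption for illustration, and as noted above, estimating the agility percentage for your specific business is the genuinely hard part.

```python
# Back-of-envelope soft-value estimate for the agility scenario above.
# The revenue figure is a hypothetical assumption, not a benchmark.
annual_revenue_influenced = 20_000_000   # revenue agility can affect, $/yr
agility_gain = 0.20                      # 20 percent more agility
years = 5

soft_value = annual_revenue_influenced * agility_gain * years
print(f"${soft_value:,.0f}")  # $20,000,000
```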

Figure 5-9 shows our friend the value curve again, this time looking at soft values over fully loaded cost. As we model different levels of investment, a different amount of soft value comes back to the business. Values such as agility, speed-to-market, and innovation will peak and then bend the value curve at the point of overinvestment.


FIGURE 5-9 Again, you need to look at the behavior of the soft value curve over different levels of investment, cloud native as well as other architectures and approaches. There is significant soft value drop that occurs at a certain point in spending, and you need to predetermine that point.

Several indicators determine where the value drop-off occurs as the soft value curve crosses the cost curve. First, the project hits the saturation point at a level of spending where no additional soft value benefits can be returned to the business just by spending more money. Second, if spending continues to rise, soft value drops significantly from that point. This is a classic case of tossing good money after bad. The chart displayed in Figure 5-9 follows the same pattern as the optimization curve in Figure 5-8. You’ll see this pattern repeat as we attempt to define something that is nearly fully optimized. Also, remember that soft values can be 50 times that of hard values, in terms of what’s returned to the business.

Figure 5-10 looks at the behavior of hard value returned to the business at different levels of spending. Unlike soft values, hard values mirror the spending or cost curve with an increase in value at the end of the curve, likely from discounts the business receives by spending more on something. For example, our hard costs include the underlying cloud services that support a cloud native architecture. Most cloud providers offer volume discounts as those support costs increase over time, which is reflected in the “hard value” curve that becomes steeper about two-thirds through the curve.
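The volume-discount effect can be sketched as a tiered pricing function, similar in shape to public cloud pricing tiers. The tier boundaries and discount rates below are hypothetical, not any provider’s actual price list.

```python
def discounted_cloud_cost(monthly_spend: float) -> float:
    """Tiered volume discount: each band of spend above a boundary
    earns a deeper discount, so the effective rate falls as spend
    grows. Boundaries and rates are hypothetical."""
    tiers = [(50_000, 0.00), (150_000, 0.10), (float("inf"), 0.20)]
    cost, remaining, prev_cap = 0.0, monthly_spend, 0.0
    for cap, discount in tiers:
        band = min(remaining, cap - prev_cap)
        if band <= 0:
            break
        cost += band * (1 - discount)
        remaining -= band
        prev_cap = cap
    return cost


# Effective rate falls as spend grows, steepening the hard-value curve
print(discounted_cloud_cost(40_000))   # 40000.0 (no discount yet)
print(discounted_cloud_cost(200_000))  # 180000.0 (blended discount)
```

This is the mechanism behind the “hard value” curve steepening late in Figure 5-10: past the tier boundaries, each additional dollar of spend buys more than a dollar of services.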


FIGURE 5-10 In general, when looking at the behavior of the hard value over different levels of investment, the more you spend, the more you save. Most technology investments embrace economies of scale with volume discounts.

Hard value is soft value’s boring little brother. While you can track 10–20 percent savings over time in hard cost savings, soft value is the real core multiplier you need to consider. As I already mentioned, it often brings 50 to 100 times more value back to the business for the money invested. The trick is to understand how to model soft value for your specific use case and how to find the best optimized point for your particular type of business. You must also determine which solution offers the best optimization point of many possible solutions, including cloud native and other technology stacks.

Technology Meets Reality

It doesn’t matter if the best optimized solution is the least popular solution. It’s your responsibility to optimize value brought back into the business using whatever approach and technology will best fit the solution parameters. Keep this in mind as you drive through the more hyped approaches and technologies, such as containers, container orchestration, and everything that’s employed using a CNCF cloud native architecture.

I’m often taken aback by those who make architecture and technology decisions before they understand what problems need to be solved. This trend seems to be getting worse as we’re pushed to make faster decisions, and most solutions can be made to work at various levels of underoptimization.

The “It works” method of evaluating success is why many businesses can’t find value in their technological investments, and why no one seems to understand how the “cloud thing” could cost more than originally projected, and/or why it can’t return its expected value. The cloud technology itself is not the issue; it’s the fact that someone picked the wrong technology. The business requirements pointed to a different configuration and/or solution, so now the business must suffer through a fix or, more likely, just deal with the problems and inefficiencies ongoing.

Improper technology decisions with cloud migration or digital transformations will weaken the business over time, and eventually force a merger with or acquisition by a business that better understands how to do this technology stuff. This is where technology meets reality and where we need to focus most of the work to improve how we weaponize technology for the business.

Call to Action

Understand what you’re getting into before you commit to a hyped technology like containers, container orchestration, and cloud native architecture and applications development. While the technology presented in this chapter represents only a fraction of the cloud technology you’ll need to consider for future projects, the purpose here is to go through the value and costs of each technology, as well as learn how to apply this process to other technologies that exist now or may arise in the future. This knowledge will help you avoid the urge to follow the popular technology crowd like lemmings off a cliff. Moving with the crowd is always less important than first solving your specific business problems.

Again, this is about your business first and foremost. Nobody (not even me) can pick a solution for your business without first having a deep understanding of the problems the business needs to solve. This is about returning value to the business through the effective use of technology that best fits the solution patterns. You need to make that commitment now and have the courage to stand up to others who would rather chase the hype than the right solutions. I wish you luck.
