Chapter 2. Adapt or Die

“There are two approaches to handling change: adapt or die vs. same mess for less!”

Dekel Tankel, Senior Director of Pivotal Cloud Foundry

The first adage is true for all businesses: you either adapt and evolve to changes in the surrounding environment, or you die! As a company, you need to avoid becoming extinct. Aim to be Amazon, not Borders.

Businesses today are under constant pressure to respond to the myriad of technical driving forces impacting software development and delivery. These driving forces include:

  • Anything as a service

  • Cloud computing

  • Containers

  • Agile

  • Automation

  • DevOps

  • Microservices

  • Business-capability teams

  • Cloud-native applications

This chapter explores each of these driving forces. The next chapter will explore how Cloud Foundry is uniquely positioned to leverage these forces.

Anything As A Service

In today’s world, services have become the de facto standard. Today’s premise is anything as a service (or XaaS). Services can be publicly hosted on the web or internal to a private data center. Every layer of information technology, from networking, storage, and compute through data and applications, is offered “as a service.” We have now reached the point where, if you are not leveraging compute resources as a service, it is unlikely that you are moving at the pace required to stay competitive.

The move to consuming services, beyond simply provisioning virtual machines (VMs) on demand, allows you to build your software against the highest level of abstraction possible. This approach is beneficial. You should not build everything yourself; you should not reinvent the wheel. To do so is costly, in terms of both time and money. It shifts focus and talent away from your revenue-generating business. If there is a service out there that has been deployed and managed in a repeatable and scalable way, becoming a consumer of that service allows you to focus on software supporting your revenue-generating business.

Cloud Computing

To understand the importance of today’s cloud application platforms, it is important to understand the progression of platforms in IT. Cloud computing is the third incarnation of the platform eras:

  • The first era was the mainframe, which dominated the industry from the 1950s through the early 1980s.

  • Client-server architecture was established as the dominant second platform from the 1980s right up until the second decade of the 21st century.

  • The “as a service” movement in IT has broadly been termed cloud computing, and it defines the third platform era we live in today.

With a move toward cheaper x86-based hardware, cloud computing allowed for converged infrastructure: moving away from dedicated infrastructure silos toward grouping commodity infrastructure components (e.g., servers, networks, and storage) into a single pool. This converged infrastructure provided better utilization and reuse of resources. As I discuss below, cloud computing has been subdivided into “as a service” layers (SaaS, PaaS, IaaS, etc.). Regardless of the layer, there are three definitive attributes of “as a service”:

Elasticity

The ability to handle growth in concurrent demand by dynamically scaling the service up and down at speed.

On demand

The ability to choose when and how to consume the required service.

Self-service

The ability to directly provision or obtain the required service without time-consuming ticketing.

These three tenets of XaaS describe the capability of provisioning cloud resources on demand as required. This self-service capability is a shift from procuring resources through a ticketing system involving handoffs and delays between developers and operations.
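These three tenets can be illustrated with a short sketch. The class and method names below are invented for illustration, and real IaaS APIs differ, but the shape is the same: consumers provision directly (self-service), when they choose (on demand), and scale dynamically (elasticity).

```python
# Hypothetical in-memory provisioner illustrating the three XaaS tenets.
# All names here are invented for illustration, not a real cloud API.

class ServicePool:
    def __init__(self):
        self.instances = []

    def provision(self, count=1):
        """Self-service and on demand: the consumer provisions directly,
        with no ticket or handoff to another team."""
        self.instances += [f"instance-{len(self.instances) + i}"
                           for i in range(count)]
        return self.instances[-count:]

    def scale_to(self, target):
        """Elasticity: grow or shrink the pool dynamically."""
        while len(self.instances) < target:
            self.provision()
        del self.instances[target:]
        return len(self.instances)

pool = ServicePool()
pool.provision(2)    # on demand, when the consumer decides
pool.scale_to(5)     # scale up under load
pool.scale_to(1)     # scale back down when demand drops
```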

Platform as a Service

IaaS and SaaS are generally well understood concepts.

Software as a service (SaaS) is the layer at the top of the software stack closest to the end user. SaaS provides the ability to consume software services on demand without having to procure, license, and install binary packages.

The bottom layer of the “as a service” stack is known as infrastructure as a service (IaaS). This provides the ability to leverage cloud-based resources including networking, storage, compute (CPU), and memory. This is what most people think of when they hear the phrase “moving to the cloud.” The IaaS layer includes both private clouds typically deployed inside a company’s data center and public clouds hosted via the Internet.

Platform Versus PaaS

Platforms offered as a service are less well understood for reasons discussed in the section “You Need a Cloud-Native Platform, Not a PaaS”. For now, just think of cloud-native platforms and PaaS as one and the same.

A technology platform leverages underlying resources of some kind to provide a set of higher-level services. Users are not required to understand how the lower-level resources are leveraged because platforms provide an abstraction of those resources. Users only interact with the platform.

A cloud-native platform describes a platform designed to reliably and predictably run and scale on top of potentially unreliable cloud-based infrastructure.

Many companies leverage IaaS for delivering software, with developers requesting VMs on demand. IaaS adoption alone, however, is not enough. Developers may be able to request and start a VM in minutes, but the rest of the stack—the middleware service layers and application frameworks—is still required, along with setting up, securing, and maintaining multiple environments for things like QA, performance testing, and production. As a higher abstraction above IaaS and middleware, cloud-native platforms take care of these concerns for you, allowing developers to keep their focus on developing their business applications.

If you do not use a platform for running applications, you are required to orchestrate and maintain all the state-dependent information yourself. This approach sets a lower boundary for how fast things can be recovered in the event of a failure.

Platforms drive down the mean time to recovery because they leverage patterns that are predictable, automatable, and repeatable, with known and defined failure and scaling characteristics.
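The recovery pattern a platform automates can be sketched in a few lines. This is a hypothetical supervisor loop, not any specific platform's implementation: a failed workload is restarted according to a known, repeatable policy rather than a manual runbook, which is what keeps the mean time to recovery low.

```python
# Hypothetical supervisor sketch: restart a failed workload according
# to a defined, repeatable policy. Names are illustrative only.

def supervise(run_once, max_restarts=3):
    """Run a workload, restarting automatically on failure
    up to max_restarts times before escalating."""
    attempts = 0
    while True:
        try:
            return run_once()
        except RuntimeError:
            attempts += 1
            if attempts > max_restarts:
                raise  # restart policy exhausted; escalate

# A stand-in workload that crashes twice before becoming healthy.
failures = {"left": 2}

def flaky_app():
    if failures["left"] > 0:
        failures["left"] -= 1
        raise RuntimeError("instance crashed")
    return "healthy"

print(supervise(flaky_app))  # recovers after two automated restarts
```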

Containers

In recent years, there has been a rapid growth in container-based technologies (such as LXC, Docker, Garden, and Rocket). Containers offer three distinct advantages over traditional VMs:

  1. Speed and efficiency

  2. Greater resource consolidation

  3. Application stack portability

Because containers can leverage a “slice” of a prebuilt machine or VM, they are generally regarded as much faster to create than a new VM. This also allows for a greater degree of resource consolidation, since many containers can run in isolation on a single machine.1 In addition, they have enabled a new era of application stack portability, because applications and dependencies developed in a container can easily be moved and run in different environments.

Understanding Containers

Containers typically use primitives such as control groups, namespaces, and several other OS features to control resources and isolate and secure the relevant processes. Containers are best understood as having two elements:

  • Container images: These package a repeatable runtime environment (encapsulating your application, dependencies, and file system) in a way that allows images to be moved between hosts. Container images carry instructions that specify how they should be run, but they are not explicitly self-executable, meaning they cannot run without a container management solution.

  • A container management solution: This often uses kernel features such as Linux namespaces to run a container image in isolation, often within a shared kernel space. Container management solutions arguably have two parts: the frontend management component known as a container engine such as Docker-engine or Garden-Linux, and the backend container runtime such as runC or runV.
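On a Linux host, these namespace primitives are directly visible: each entry under /proc/&lt;pid&gt;/ns names a namespace the process belongs to, and processes sharing a container share those namespaces. A small sketch (it degrades to an empty list on hosts without procfs):

```python
import os

def list_namespaces(pid="self"):
    """List the kernel namespaces a process belongs to (Linux only).

    Each entry under /proc/<pid>/ns identifies one namespace; two
    processes in the same container share these namespace identifiers.
    """
    path = f"/proc/{pid}/ns"
    if not os.path.isdir(path):  # non-Linux hosts have no procfs
        return []
    return sorted(os.listdir(path))

print(list_namespaces())  # on Linux, e.g. ['ipc', 'mnt', 'net', 'pid', ...]
```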

Agile

Agile software development can best be understood by referring to the Agile Software Development Manifesto. Being agile means you are adhering to the values and practices defined in the agile manifesto. This manifesto values:

  1. Individuals and interactions over processes and tools

  2. Working software over comprehensive documentation

  3. Customer collaboration over contract negotiation

  4. Responding to change over following a plan

There are specific agile software development methods, such as Extreme Programming (XP). XP values communication, simplicity, feedback, courage, and respect. Agile software development methods help teams respond to unpredictability through incremental, iterative work cadences (sometimes referred to as sprints). They focus on delivering smaller features more frequently as opposed to tackling one epic task at a time. The Agile methodology is an alternative to traditional sequential development strategies, such as the waterfall approach.

Many enterprises have moved toward embracing agile. Most teams now define epics that are broken down into smaller user stories weighted by a point system. Stories are implemented over sprints, with inception planning meetings, daily standups, and retrospective meetings to showcase demonstrable functions to key stakeholders. Some companies have further adopted the agile disciplines of pair programming and test-driven development. However, the most critical piece of agile is often omitted: each iteration must finish with software that can be deployed into production. This allows new features to be placed into the hands of real end users exactly when the product team decides, rather than only after reaching a significant milestone.

Agile can be a challenge for development teams that are not in a position to quickly deploy their applications with new features. Traditional release cycles, in which code is batched up and merged into a later release train, mean that these teams forgo regular end-user feedback. This feedback is the most valuable and most critical kind required. In essence, these teams adopt many of the agile principles without actually being agile. It is no use employing a “run, run, run” approach to development if you are subsequently unable to release software into production. Agile deployment allows teams to test new ideas, quickly identify failings through rapid feedback, learn, and repeat. This approach promotes a willingness to try new things and ultimately should result in products more tightly aligned to end-user expectations.

Automate Everything

Operational practices around continuous integration (CI) and continuous delivery (CD) have been established to address the following two significant pain points with software deployment:

  • Handoffs between different teams cause delays. Handoffs can occur during several points throughout an application life cycle, starting with procuring hardware, right through to scaling or updating applications running in production.

  • Release friction describes the constant need for human interaction as opposed to using automation for releasing software.

Deployment pipelines are release processes automated by tooling to establish repeatable flows. They enable the integration and delivery of software to production. Deployment pipelines automate manual tasks and activities, especially around integration testing. Repeatable flows of code mean that every release candidate goes through the same set of integration steps, tests, and checks. This pipeline enables releases to production with minimal interaction (ideally just at the push of a button). When establishing deployment pipelines, it is important to understand the progression of continuous integration and continuous delivery.
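A deployment pipeline's repeatable flow can be sketched as ordinary code. The stage names below are illustrative and not tied to any particular CI tool; the point is that every release candidate passes through the same ordered steps, and a failure at any step stops promotion.

```python
# Illustrative pipeline sketch: identical, ordered stages for every
# release candidate. Stage names are invented for this example.

STAGES = ["unit-tests", "integration-tests", "security-scan", "deploy-staging"]

def run_pipeline(candidate, run_stage):
    """Run each stage in order; return the audit trail of passed stages.

    run_stage(candidate, stage) returns True on success. A failure
    raises immediately, so a defective candidate is never promoted.
    """
    passed = []
    for stage in STAGES:
        if not run_stage(candidate, stage):
            raise RuntimeError(f"{candidate} failed at {stage}")
        passed.append(stage)
    return passed

# Usage: a stub runner in which every stage succeeds.
trail = run_pipeline("release-42", lambda c, s: True)
print(trail)
```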

Continuous Integration

Continuous integration (CI) is a development practice in which developers check code into a central shared repository. Each check-in is verified by an automated build and tests, allowing for early detection of problems and consistent software releases.

Continuous integration has enabled streamlined efficiencies for the story-to-demo part of the cycle. However, continuous integration without continuous delivery into production means you have only what Dave West has called water-Scrum-fall: a small measure of agility confined by a traditional waterfall process.

Continuous Delivery

Continuous delivery further extends continuous integration. The output of every code commit is a release candidate that progresses through an automated deployment pipeline to a staging environment, unless it is proven defective. If tests pass, the release candidate is deployable in an automated fashion. Whether or not new functionality is actually deployed comes down to a final business decision. However, it is vital to maintain the ability to deploy at the end of every iteration as opposed to a lengthy waterfall release cycle. This approach considers all of the factors necessary to take an idea from inception to production. The shorter that timeline, the sooner value is realized and further ideas emerge to build upon the previous ones. Companies that operate in this way have a significant advantage and are able to create products that constantly adapt to feedback and user demands.

DevOps

DevOps is a software development and operations culture that has grown in popularity over recent years. It breaks away from the traditional developer-versus-operations silos, focusing on communication, collaboration, integration, automation, and measurement across all the functions required to develop and run applications. The approach acknowledges the interdependence of:

  • Software development

  • Quality assurance and performance tuning

  • IT operations

  • Administration (SysAdmin, DBAs, etc.)

  • Project and release management

DevOps aims to span the full application life cycle to help organizations rapidly produce and operationally maintain performant software products and services.

Traditional silo approaches to IT have led to diametrically opposed objectives that ultimately hinder application quality and speed of delivery. Teams centered on a specialty introduce friction when they interoperate. An example of this friction occurs with change control. Developers achieve success when they innovate and deliver valuable features, so they constantly push for new functionality to be incorporated into a release. Conversely, IT operations achieve success when they minimize churn in favor of reliability, availability, and stability. To mitigate these opposing objectives, bureaucratic and time-consuming handoffs occur, resulting in deferred ownership and responsibility. Lack of overall ownership and constant handoffs from one department to the next can prolong a task from minutes or hours to days or weeks.

The DevOps movement has empowered teams to do what is right during application development, deployment, and ongoing operations. By eliminating silos and centralizing a collective ownership over the entire development-to-operations life cycle, barriers are quickly broken down in favor of what is best for the application. Shared toolsets and a common language are established around the single goal of developing and supporting software running in production. Applications are not only more robust, they constantly adapt to changing environmental factors. Silos are replaced by collaboration with all members under the same leadership. Burdensome processes are replaced by trust and accountability.

The DevOps culture has produced cross-functional teams centered around a specific business capability. They develop products instead of working on projects. Products, if successful, are long lived. The DevOps team responsible for the development-to-operations life-cycle of a business capability continues to take responsibility for that business capability until it ceases to be in production.

Microservices

Microservices is a term used to describe a software architectural style that has emerged over the last few years. It describes a modular approach to building software in which complex applications are composed of several small, independent processes communicating with each other through explicitly defined boundaries using language-agnostic APIs. These smaller services focus on doing a single task very well. They are highly decoupled and can scale independently.

By adopting a microservices architecture, the nature of what you develop and how you develop it changes. Teams are organized around structuring software in smaller chunks (features, not releases) as separate deployments that can move and scale independently. The act of deploying and the act of scaling can be treated as independent operations, introducing additional speed to scaling a specific component. Backing services are decoupled from application software, allowing applications to be loosely coupled. This ultimately extends the application lifespan as services can be replaced as required.
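The style can be illustrated with a deliberately tiny sketch. The two "services" below are plain functions standing in for networked processes, and the service names are invented; what matters is that they interact only through an explicitly defined, language-agnostic JSON contract, with each service hiding its own state.

```python
# Illustrative microservices sketch (not a real framework): two
# services coupled only by an explicit JSON contract.

import json

def inventory_service(request_json):
    """Owns one capability: stock levels. Its storage is entirely
    private; consumers see only the JSON contract."""
    stock = {"widget": 12, "gadget": 0}  # private state
    sku = json.loads(request_json)["sku"]
    return json.dumps({"sku": sku, "in_stock": stock.get(sku, 0) > 0})

def order_service(sku):
    """A separate service: consumes inventory only via its API, so the
    inventory implementation can be replaced without changing orders."""
    reply = json.loads(inventory_service(json.dumps({"sku": sku})))
    return "accepted" if reply["in_stock"] else "backordered"

print(order_service("widget"))  # accepted
print(order_service("gadget"))  # backordered
```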

Business-Capability Teams

By determining the required architecture upfront, organizations can be restructured around business-capability teams that are tasked with delivering the desired architecture. It may be difficult to transition to this in an established enterprise as this approach cuts orthogonally across existing organizational silos. A matrix overlay can help with the transition, but it is worth considering Conway’s law:

“Any organization that designs a system will produce a design whose structure is a copy of the organization’s communication structure.”

Melvin Conway

Therefore, if you want your architecture to be centered around business capability instead of specialty, you should go all in and structure your organization appropriately to match the desired architecture. Once you have adopted this approach, the next step is to define what business-capability teams are needed.

Cloud-Native Applications

An architectural style known as cloud-native applications has been established to describe the design of applications specifically written to run in a cloud environment. These applications avoid some of the anti-patterns that were established in the client-server era, such as writing data to the local file system. Those anti-patterns do not work as well in a cloud environment since, for example, local storage is ephemeral, as VMs can move between different hosts. The Twelve-Factor App explains the 12 principles underpinning cloud-native applications.

Platforms offer a set of contracts to the applications and services that run on them. These contracts ensure that applications are constrained to do the right thing. Twelve Factor can be thought of as the contract between an application and a cloud-native platform.
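One concrete example of such a contract is factor III of the twelve factors, "store config in the environment." A minimal sketch, with illustrative variable names: the platform, not the application build, supplies deployment-specific values such as the port to listen on.

```python
# Minimal sketch of twelve-factor config (factor III): read
# deployment-specific values from the environment, never from files
# baked into the build. Variable names are illustrative.

import os

def load_config():
    return {
        # The platform assigns the port; the app only reads it.
        "port": int(os.environ.get("PORT", "8080")),
        "db_url": os.environ.get("DATABASE_URL", "sqlite://local"),
    }

os.environ["PORT"] = "9000"   # the platform would set this, not the app
print(load_config()["port"])  # 9000
```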

There are benefits to adhering to a contract that constrains things correctly. Twitter is a great example of a constrained platform. You can only write 140 characters, but that constraint becomes an extremely valuable feature of the platform. You can do a lot with 140 characters coupled with the rich features surrounding that contract. Similarly, platform contracts are born out of previous tried and tested constraints; they are enabling and they make doing the right thing easy for developers.

Chapter Summary

This chapter has discussed the following:

  • There has been a systemic move to consuming services beyond simply provisioning VMs on demand. Consuming services allows you to focus on building business software against the highest level of abstraction possible.

  • For any company seeking to be disruptive through software, it starts with the broad and complete adoption of IaaS for compute resources, providing on-demand, elastic, and self-service benefits.

  • Platforms describe the layer sitting above the IaaS layer, leveraging it directly by providing an abstraction of infrastructure services and a contract for applications and backing services to run and scale.

  • Recently there has been a rapid growth in container-based solutions because they offer isolation, resource reuse, and portability.

  • Agile development coupled with continuous delivery provides the ability to deploy new functionality at will.

  • The DevOps movement has broken down the organizational silos and empowered teams to do what is right during application development, deployment, and ongoing operations.

  • Software should be centered around business-capability teams instead of specialty capabilities. This allows for a more modular approach to building microservices software with decoupled and well-defined boundaries. Microservices that have been explicitly designed to thrive in a cloud environment have been termed cloud-native applications.

As a result of these driving forces, cloud-native platforms have been established. The next chapter explores how cloud-native platforms are uniquely positioned to leverage these emerging trends, providing a way to quickly deliver business value.

1 The terms VM and machine are used interchangeably because containers can run in both environments.
