Chapter 6. Infrastructure Automation

For software targeting Windows and .NET, infrastructure is no longer the bottleneck it once was. VMs can be built and provisioned within minutes, whether we’re using on-premises Hyper-V or VMware, or commodity public cloud such as AWS or Azure. However, Continuous Delivery needs repeatable infrastructure configuration, which means automation, not point-and-click or manually executed PowerShell commands.

For Windows and .NET, our future-proof infrastructure or platform choices are currently:

IaaS

VMs running on commodity cloud

PaaS

Effectively just Microsoft Azure, using features such as Azure Cloud Services

In the future, we will see containerization technologies like Docker running Windows-specific workloads, but these technologies remain Linux-only at the time of writing. Microsoft is working in partnership with Docker to extend the Docker API to support containers running Windows.

Note

Most organizations that need pools of VMs for testing should use commodity cloud providers like Amazon AWS, Microsoft Azure, ElasticHosts, or Rackspace. The real costs of building and maintaining a self-hosted VM infrastructure (sometimes called “private cloud”) are much higher than most people realize, because many costs (power, cooling, resilience, training, disaster recovery) are hidden. Unless your organization has niche requirements (such as high throughput, low latency, or compliance restrictions), you should plan to use commodity cloud infrastructure.

Shared Versus Dedicated Infrastructure

For Continuous Delivery to be effective, we need to minimize any friction caused by shared infrastructure. If several product teams are regularly waiting for another team to finish using a testing environment, it becomes difficult to maintain regular, reliable releases.

Some infrastructure and tools are worth sharing so that we have a single source of truth: the version control system, the artifact repository, and some monitoring and metrics tools. Most other tools and technologies should be aligned and dedicated to a specific group of products or services. Dedicated infrastructure costs more in terms of hardware, but usually saves significant money by eliminating time wasted waiting for environments and retesting after another team has broken a shared environment.

Tip

An effective pattern for shared environments is to limit the testing time to 20 or 30 minutes, including deployments; if the new features deployed by one team do not pass automated checks in that time window, the code is automatically rolled back and the team must fix the problems before trying again.
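
A minimal sketch of such a gate in PowerShell follows; Deploy-Release and Invoke-AutomatedChecks are hypothetical commands standing in for a team’s own deployment and test-runner scripts, and the 30-minute budget is an example.

    # Time-boxed gate: deploy, run checks, roll back if the window is exceeded.
    # Deploy-Release and Invoke-AutomatedChecks are hypothetical commands.
    $budget = New-TimeSpan -Minutes 30

    Deploy-Release -Version $candidateVersion

    $checks   = Start-Job -ScriptBlock { Invoke-AutomatedChecks }
    $finished = Wait-Job -Job $checks -Timeout ([int]$budget.TotalSeconds)

    if (-not $finished -or (Receive-Job -Job $checks) -ne 'Passed') {
        # Out of time or checks failed: roll back automatically so the
        # environment is free for the next team.
        Deploy-Release -Version $lastKnownGoodVersion
        throw 'Automated checks did not pass within the window; rolled back.'
    }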

Avoid deploying applications and services from multiple teams onto the same virtual or physical hardware. This “deployment multitenancy” leads to conflicts between teams over deployment risks and reconfiguration. Specifically, a team running its software on a set of Windows machines should be able to run iisreset at any time, knowing that it will affect only its own services (see Figures 6-1 and 6-2).

Figure 6-1. Deployment multitenancy—iisreset causes friction
Figure 6-2. Deployment single tenancy—iisreset affects just one team

Using a Test-First Approach to Infrastructure

Automation is of little use if we cannot test the resulting infrastructure before we rely on it. Test-first approaches such as TDD have worked well for application software, and we should use similar test-first approaches for infrastructure code too.

To do this effectively with Windows and .NET, we can use Packer and Boxstarter for building base images; Vagrant for VM development; ServerSpec for declarative server testing; a configuration management tool like Chef, Puppet, Ansible, and/or PowerShell DSC; and a CI server with remote agents (such as TeamCity or GoCD).
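
As a taste of the base-image step, here is a minimal Boxstarter-style package script. Boxstarter scripts are plain PowerShell plus helpers such as Install-WindowsUpdate; the feature and package choices here are illustrative assumptions, not a prescription.

    # Provision a base Windows Server image; features and packages are examples.
    Install-WindowsFeature Web-Server, Web-Asp-Net45   # IIS with ASP.NET 4.5
    choco install webdeploy -y                         # MSDeploy, for app deployments later
    Install-WindowsUpdate -AcceptEula                  # patch the image to a known level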

Use Packer, Boxstarter, and ISO files to create base Windows images, which we can then use in a traditional TDD-style coding loop (a sketch of the first step follows the list):

  1. Write a failing server test (such as expecting a .NET application to be deployed in IIS on the VM).

  2. Implement enough logic to make the test pass (perhaps deploy the application using MSDeploy).

  3. Refactor the code to be cleaner and more maintainable.

  4. Push the configuration changes to version control.

  5. Wait for the CI system to

    1. Pull the changes from version control,

    2. Build the changes using Vagrant and ServerSpec, and

    3. Show that our tests have passed (a “green” build).

  6. Repeat.
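
To make step 1 concrete, here is a sketch of a failing server test. The toolchain above uses ServerSpec; to keep the examples in a single language, this sketch expresses the same kind of assertion with Pester, PowerShell’s test framework. The site name MyApp is hypothetical.

    # Step 1 as a Pester test; ServerSpec expresses the same idea in Ruby.
    # This fails until the deployment logic from step 2 has run.
    Describe 'MyApp server configuration' {

        It 'has an IIS site named MyApp' {
            Import-Module WebAdministration
            Get-Website -Name 'MyApp' | Should -Not -BeNullOrEmpty
        }

        It 'serves the application on port 80' {
            $response = Invoke-WebRequest -Uri 'http://localhost/' -UseBasicParsing
            $response.StatusCode | Should -Be 200
        }
    }

Step 2 then deploys the application (for example, with msdeploy.exe -verb:sync) until this test goes green.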

By building and testing our VM images and configuration management scripts using TDD and CI, we help to ensure that all changes to servers are tracked and derived from version control (Figure 6-3). This reduces the problems that beset many Windows server environments, caused by manual or opaque SCCM-driven configuration.

Figure 6-3. Use the base images to follow a TDD-style cycle
Warning

Some organizations and teams using Windows and .NET have tried to use raw PowerShell to automate the configuration and/or provisioning of infrastructure. Almost invariably, these PowerShell-only attempts progress well for a few months until they reach a certain size and complexity, at which point they begin to fail. The lack of tests becomes a major problem as the code grows unwieldy and brittle, resembling a poorly written version of Chef or Puppet! A better approach is to build on a foundation of Chef, Puppet, Ansible, or DSC.
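
To illustrate the difference, here is a minimal PowerShell DSC sketch of the declarative style these tools provide; the configuration name and paths are illustrative assumptions.

    # Declare the desired state instead of scripting the steps to reach it.
    Configuration WebServer {
        Import-DscResource -ModuleName PSDesiredStateConfiguration

        Node 'localhost' {
            WindowsFeature IIS {
                Ensure = 'Present'
                Name   = 'Web-Server'
            }
            File DeployRoot {
                Ensure          = 'Present'
                Type            = 'Directory'
                DestinationPath = 'C:\inetpub\myapp'
            }
        }
    }

    WebServer                                                  # compile to a MOF document
    # Start-DscConfiguration -Path .\WebServer -Wait -Verbose  # apply it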

Patching and OS Updates

Our approach to OS patching and Windows Updates needs to be compatible with Continuous Delivery. For Windows and .NET, this means that we need to reconsider how tools like System Center Configuration Manager (SCCM), Windows Server Update Services (WSUS), and Windows Update for Business (WUfB) interact with servers in our infrastructure, particularly when Chef/Puppet/Ansible/DSC controls aspects of server configuration.

In practice, whether we use commodity cloud or self-hosted infrastructure, this means following the pattern set by commodity cloud providers like AWS and Azure:

  • Server images are provided prepatched to a known patch level.

  • Product teams regularly update their base images to newer versions, retesting their code (see the sketch after this list).

  • Software and infrastructure are designed to cope with servers being rebooted for patching during the day (for example, in response to zero-day vulnerabilities).
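
As a sketch of the second point, the AWS Tools for PowerShell can resolve the newest prepatched Windows Server base image by name pattern instead of hardcoding an image ID; the filter value here is an example.

    # Resolve the latest prepatched Windows Server AMI rather than pinning
    # a hardcoded image ID; the name filter is an example.
    $latest = Get-EC2Image -Owner 'amazon' -Filter @(
            @{ Name = 'name'; Values = 'Windows_Server-2012-R2_RTM-English-64Bit-Base*' }
        ) |
        Sort-Object -Property CreationDate -Descending |
        Select-Object -First 1

    $latest.ImageId    # feed this into the team's provisioning scripts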

It is crucial to establish a boundary between the configuration undertaken by Chef/Puppet/Ansible/DSC and the configuration undertaken by SCCM/WSUS/WUfB. Several organizations have had success with letting SCCM/WSUS handle OS-level patches and updates, leaving all other updates and patches (such as for SQL Server) to the scriptable configuration tool. The goal is to avoid the conflicts that arise from a split-brain configuration approach, in which two tools each believe they own the same settings.
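
The configuration tool itself can establish that boundary. For example, pointing a server at WSUS for OS-level patching uses the documented Windows Update policy registry values; the WSUS URL in this sketch is a placeholder.

    # Delegate OS-level patching to WSUS via the documented Windows Update
    # policy registry values; everything else stays with Chef/Puppet/Ansible/DSC.
    $wu = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate'

    New-Item -Path $wu -Force | Out-Null
    Set-ItemProperty -Path $wu -Name 'WUServer'       -Value 'http://wsus.example.internal:8530'
    Set-ItemProperty -Path $wu -Name 'WUStatusServer' -Value 'http://wsus.example.internal:8530'

    New-Item -Path "$wu\AU" -Force | Out-Null
    Set-ItemProperty -Path "$wu\AU" -Name 'UseWUServer' -Value 1 -Type DWord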

Summary

For Continuous Delivery, we need to treat infrastructure as code, using proven techniques like TDD. We should expect to take advantage of existing tools for infrastructure automation and configuration rather than “rolling our own,” and we need to consider the possible negative effects of shared servers.
