Creating Environments in Azure

In Figure 6.2, we defined four environments: Dev, QA, Staging, and Production. In this next section, we’ll discuss how to use automation and infrastructure to provision these environments in Azure.

Immutable Infrastructure

Before we discuss ways to create our infrastructure, let’s look at two common models for defining your infrastructure and deployments. The first is the classic model, where a server runs for a relatively long time and patches and updates are applied to it in place. Any breaking change results in tweaks to the server configuration that must be rolled out and tested, creating an ongoing maintenance burden. The longer the server runs, and the more deployments that happen against that environment, the higher the risk of encountering issues caused by the state of the machine drifting from when it was originally provisioned. One example is a log file that gradually fills a local hard drive so that, after weeks of running smoothly, the server starts throwing exceptions because of a lack of disk space. Don’t be fooled by server uptime as an indicator of system health; the fact that a system has run for an extended period of time is not directly correlated with its inherent stability or its ability to properly handle incoming deployments.

Another much more efficient model is defined by the term “Immutable Infrastructure,” coined by Chad Fowler of 6Wunderkinder (now part of Microsoft). The term implies that all infrastructure, once deployed, cannot be changed. Therefore, whenever a new configuration or operating system patch is required, it should never be applied to running infrastructure. It must always be a new, reimaged instance, started from scratch. This makes things much more predictable and stable as it eliminates the state factors that might negatively impact future deployments. Immutable Infrastructure heavily relies on the assumption that your environments are fully automated in a repeatable and reliable process, from testing to configuration to deployment and monitoring.

There are three key issues related to the classic model:

No or minimal automation: Everything that breaks will require dedicated, manual attention, increasing operational complexity and resulting in higher maintenance costs that are hard to offset by reconfiguring and updating servers in place.

Slower and buggier deployments: The more moving pieces, the more likely that one of them will break and bring the entire system down. It’s not uncommon that the perceived modular architecture of the classic model is in fact a monolithic house of cards where an inadvertent change can have damaging consequences.

Expensive diagnostics: Once something actually fails, it will be both time- and resource-consuming to pinpoint the real reason. The fix is rarely a long-term one, but rather a Band-Aid that targets just one potential symptom.


Automation Doesn’t Mean Error-Proof

Although automation is great, if you aren’t testing your automated scripts, bad things can happen consistently and more quickly. Imagine an automation script pushing an invalid configuration file to thousands of servers in seconds! It’s important to treat automation and scripting as first-class citizens: scripts are versioned in source control, they are tested, and they include instrumentation and tracing for diagnostic purposes.


Infrastructure as Code

Another key aspect of automation is automating the creation of environments. As we discussed in Chapter 5, “Service Orchestration and Connectivity,” Azure provides an automated way to create your application topology using Azure Resource Manager (ARM) templates. For DevOps, we’ll want to ensure that the creation of all environments is fully automated using ARM templates, including any necessary installation or configuration scripts. Having infrastructure definitions available as code also means any team member can instantly run a script and provision a private instance of your team’s environment painlessly.
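As a concrete sketch of what that looks like, the commands below use the Azure CLI to stand up a private dev environment from a template; the resource group and file names are hypothetical, and older cross-platform versions of the CLI use slightly different verbs:

```shell
# Create a resource group to hold the private dev environment (name is hypothetical)
az group create --name flakio-dev-rg --location westus

# Provision the environment from the team's ARM template and dev parameter file
az deployment group create \
  --resource-group flakio-dev-rg \
  --template-file azuredeploy.json \
  --parameters @azuredeploy.dev.parameters.json
```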

Private Versus Shared Environments

There are many approaches you can use to define your infrastructure, but in the next section, we’ll look at two examples: a private model and a shared model. In Figure 6.3 you see a high-level infrastructure view of our continuous delivery pipeline where each microservice defines its own private infrastructure across Dev, QA, Staging, and Production. The benefit of this model is that each microservice team has complete isolation and control of its infrastructure. The downsides are that there can be a lot of unused or underutilized resources, and a higher cost to maintain this infrastructure.


FIGURE 6.3: Each microservice pipeline defines its own private Azure resources

Figure 6.3 should be intuitive in that the closer we get to production, the larger and more realistic the preproduction environment becomes. The one thing to note is that the QA environment can be of variable size: depending on the type of test being run in QA, you can easily scale the number of virtual machines up or down as needed. In other words, you pay only for what each test requires.

In the semi-shared model shown in Figure 6.4, each team still manages its development environment privately, but all other environments draw from a shared pool of resources managed by a clustering technology like Azure Container Service, Docker Swarm, Mesosphere, or Service Fabric, as discussed in Chapter 5 under “Orchestration.”


FIGURE 6.4: Microservices using a combination of private and shared resources

Additional Azure Infrastructure

While this is a high-level view of a per-environment topology, each microservice or environment can include a number of other Azure resources that you need to provision. Some examples include

Application Insights: Each microservice and environment should define its own App Insights resources to monitor diagnostics.

Key Vault: An Azure service that stores machine or application secrets.

Load Balancer: As discussed under “Orchestration” in Chapter 5, this will load balance traffic across Docker hosts.

Storage: Storage provides a durable and redundant data store for containers that use Docker volume mounting.

Virtual Network: You can create custom virtual networks and subnets as well as connections between multiple virtual networks.

Virtual Private Network (VPN): For microservices that require access to on-premises apps or data stores, you can create a site-to-site connection to directly and securely communicate with your on-premises network.

Creating Environments using Azure Resource Manager

Azure Resource Manager (ARM) enables you to define the desired state of your infrastructure using a set of declarative JSON files. For our continuous delivery pipeline, we can use ARM templates to automate the creation of the four environments shown in Figure 6.3: Dev, QA, Staging, and Production. You can choose to create an ARM template from scratch or start by customizing one of the many prebuilt ARM templates available on GitHub at https://github.com/Azure/azure-quickstart-templates.
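All ARM templates share the same top-level shape; a minimal empty template, which the quickstart templates above flesh out, looks like this:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": { },
  "variables": { },
  "resources": [ ],
  "outputs": { }
}
```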

Remember from the discussion on ARM that a common approach when creating multiple environments is to generalize the ARM template, meaning that instead of hard-coding variables into the ARM template, you parameterize the inputs of the template. For a virtual machine template, you can move the virtual machine size, name, Azure region, and other variable values into a parameter file. The result might look something like this:

azuredeploy.json: The ARM template for your microservice.

azuredeploy.dev.parameters.json: Dev environment parameters.

azuredeploy.qa.parameters.json: QA environment parameters.

azuredeploy.staging.parameters.json: Staging environment parameters.

azuredeploy.production.parameters.json: Production environment parameters.

For example, the “Simple Deployment of an Ubuntu VM with Docker” template from https://github.com/Azure/azure-quickstart-templates/tree/master/docker-simple-on-ubuntu includes the following three files:

azuredeploy.json: The ARM template that includes the virtual machine and the Docker virtual machine extension, an extension that automates installing and configuring Docker and Docker Compose.

azuredeploy.parameters.json: The parameterized set of variables that can be different per environment. The exact parameters created in this template are shown.

metadata.json: Documentation or a description file that describes the content of the ARM template.

The azuredeploy.parameters.json file includes parameters for the Azure storage account, location, admin username and password, and DNS name (for example, mydevserver).

...
"parameters": {
   "newStorageAccountName": {
    "value": "uniqueStorageAccount"
   },
   "location": {
    "value": "West US"
   },
   "adminUsername": {
    "value": "username"
   },
   "adminPassword": {
    "value": "password"
   },
   "dnsNameForPublicIP": {
    "value": "uniqueDNS"
   }
}

One common per-environment configuration setting missing in this parameter file is the capability to configure VM size. When creating environments, you probably have smaller VM sizes in development, like an A1 Basic VM with 1 core and 1.75GB of RAM, but your production environment would have a more powerful VM size configuration like the D4 size that includes 8 cores, 28GB RAM, and a 400GB SSD drive. We can parameterize the VM size by defining it as a new parameter in the azuredeploy.parameters.json file as shown here:

  "vmSize": {
   "value": "Standard_D4"
  },
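For this entry to be usable, the azuredeploy.json template itself must also declare a vmSize parameter in its parameters section; a minimal sketch (the default and allowed values here are illustrative) might look like this:

```json
"vmSize": {
  "type": "string",
  "defaultValue": "Standard_A1",
  "allowedValues": [
    "Standard_A1",
    "Standard_D4"
  ],
  "metadata": {
    "description": "Size of the virtual machine for this environment"
  }
}
```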

Next, you will need to open the azuredeploy.json file and under the hardwareProfile property, change the vmSize property to read the value from the newly added vmSize parameter.

      "properties": {
        "hardwareProfile": {
          "vmSize": "[parameters('vmSize')]"
        },
        ...

In this simple configuration, you can use one ARM template to define the virtual machine and have four parameter files that represent the different per-environment configuration settings.

For more advanced configuration scenarios, there are also templates you can take and customize for Docker Swarm, Mesosphere, or the Azure Container Service used in the examples for this book:

Docker Swarm: https://github.com/Azure/azure-quickstart-templates/tree/master/docker-swarm-cluster

Mesos with Swarm and Marathon: https://github.com/Azure/azure-quickstart-templates/tree/master/mesos-swarm-marathon

Azure Container Service using the example from this book: https://github.com/flakio/infrastructure

All these examples provide parameter files that you can use to change the total number of VMs included in your cluster (represented as the nodes parameter in the Docker Swarm template, the agents parameter in the Mesos template, and the agentCount parameter in the Azure Container Service template).
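For example, a QA environment that needs a larger cluster for load testing could simply override that value in its parameter file; a sketch using the Azure Container Service template’s agentCount parameter (the count itself is illustrative):

```json
"agentCount": {
  "value": 5
}
```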


Windows Server 2016 Containers Preview

At the time of this writing, Windows Server Containers support is still in preview, but an Azure Resource Manager template and the corresponding container configuration PowerShell script (containerConfig.ps1) are available on GitHub at http://bit.ly/armwindowscontainers.


Tracking Deployments with Tags and Labels

In a microservices architecture, you could have thousands of containers running on hundreds of hosts, and even have multiple versions of the same service running at the same time! To help organize and categorize your infrastructure and applications, one key feature to leverage is adding metadata through the use of Resource Manager tags and Docker labels.

Azure Resource Manager includes the capability to set up to 16 arbitrary tags, created as key/value pairs, that enable you to differentiate between environments (Dev, QA, Staging, and Production), between geographical locations (Western U.S. versus Eastern U.S.), or between organizational departments (Finance, Marketing, HR) to better track billing.

To do this, open the dev environment parameter file and set the values for environment and department as shown. The location tag is already included as a parameter so we don’t need to add it again.

    "environment": {
      "value": "dev"
    },
    "dept": {
     "value": "finance"
    },
...

Next, in the azuredeploy.json file, we will add a tags section to include metadata about the environment, location, and department. Instead of hard-coding these values, notice that the tag values are read from the parameters we created previously.

   {
     "apiVersion": "2015-05-01-preview",
     "type": "Microsoft.Compute/virtualMachines",
     "name": "[variables('vmName')]",
     "location": "[parameters('location')]",
     "tags": {
         "environment": "[parameters('environment')]",
         "location": "[parameters('location')]",
         "dept": "[parameters('dept')]"
     }
...
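Note that for those expressions to resolve, the template’s parameters section must also declare the new environment and dept parameters (location is typically declared already); a sketch:

```json
"environment": {
  "type": "string",
  "allowedValues": [ "dev", "qa", "staging", "production" ]
},
"dept": {
  "type": "string"
}
```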

Doing this enables you to easily find and filter your ARM resources by tag from the Azure portal or the command line. Instead of looking through a list of hundreds of virtual machines, you can filter the list to just the finance department’s production VMs.
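From the command line, the equivalent filtering can be done with the Azure CLI; for example (shown with the current az CLI, using the hypothetical tag values from above):

```shell
# List every resource tagged as belonging to the finance department
az resource list --tag dept=finance

# List resources tagged with the production environment
az resource list --tag environment=production
```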

For Docker, we can use labels to set key/value metadata on the Docker daemon (meaning the host), on a Docker image definition, or when a Docker container is created. For example, you can set a label on the Docker daemon running on the host to specify capabilities such as an SSD drive, using reverse domain name notation as shown:

docker daemon --label io.flak.storage="ssd"

Labels on Docker images can be set using the LABEL instruction in the Dockerfile. Remember that if you add a label in the Dockerfile, it is hard-coded into the image itself. For that reason, it’s best to add labels to a Docker image only for metadata that will not change based on the runtime environment. For per-environment labels, use the --label switch on the docker run command as shown:

docker run -d \
--label io.flak.environment="dev" \
--label io.flak.dept="finance" \
--label io.flak.location="westus" \
nginx
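For comparison, baking a label into the image with the Dockerfile LABEL instruction looks like this; use it only for metadata that is stable across environments (the service name below is illustrative):

```dockerfile
# Dockerfile fragment: metadata that does not vary per environment
FROM nginx
LABEL io.flak.service="catalog"
```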

Once you define the labels for your Docker containers, you can use standard Docker commands to filter based on specific label values. This example will show only those running containers that are in the “dev” environment.

docker ps --filter "label=io.flak.environment=dev"
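You can also read labels back from a specific container with docker inspect; for example (the container name here is hypothetical):

```shell
# Print the labels of a container named "web" as JSON
docker inspect --format '{{json .Config.Labels}}' web
```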

Third-Party Configuration and Deployment Tools

Azure supports a number of third-party configuration and deployment tools that your organization might already be using. These include popular configuration and deployment tools like Puppet, Chef, Octopus Deploy, and others. Azure’s virtual machine extension framework includes extensions to install and configure Puppet, Chef, Octopus Deploy, and other software on virtual machines either through the Azure portal or by defining the extension in an Azure Resource Manager template. You can find the full list of supported virtual machine extensions at http://bit.ly/azureextensions.

Dockerizing your DevOps Infrastructure

The growth of Docker has also started a new trend: to “Dockerize” everything. To Dockerize something is to create a Dockerfile for an application or piece of infrastructure so that it can be built as a Docker image and deployed as a Docker container. A number of tools are used in DevOps, including source control, continuous integration, build servers, and test servers, and many of the applications you would once have spent hours downloading, setting up, and configuring are now distributed as preconfigured Docker images. Here are just some examples:

For source control, there are many Git Docker images available including GitLab’s community edition: https://hub.docker.com/r/gitlab/gitlab-ce/

For continuous integration, Jenkins CI is available as a Docker image: https://hub.docker.com/_/jenkins/

For improving code quality, you can use the SonarQube Docker image to run static analysis tests to discover potential issues, ensure coding style guidelines are met, and get reports for code coverage and unit test results: https://hub.docker.com/_/sonarqube/

For build, unit, or integration tests, you can use a Docker container as an isolated host for compiling your project, running unit tests, or running coded-UI tests using tools like Selenium: https://hub.docker.com/r/selenium/

Applications and services like Maven, Tomcat, RabbitMQ, NGINX, Cassandra, Redis, MySQL, and others are all available as Docker images from http://hub.docker.com

Even the Azure command-line interface (CLI) is available as an image: https://hub.docker.com/r/microsoft/azure-cli/
