List of Figures

Chapter 1. Before you begin

Figure 1.1. To follow along with all the exercises in this book, create a free Azure account if you don’t already have one.

Figure 1.2. Complete Azure account sign-up information

Figure 1.3. The Azure portal, ready for you to create your own applications and solutions

Figure 1.4. Pizza as a Service model. As you move from homemade pizza, where you provide everything, to the restaurant model, where you just show up, the responsibilities and management demands change accordingly.

Figure 1.5. Cloud computing service model

Figure 1.6. Virtualization in action on a physical host in Azure

Figure 1.7. The Azure Cloud Shell in the web-based portal

Chapter 2. Creating a virtual machine

Figure 2.1. In this chapter, you create a basic VM, log in to install a web server, and then open a network port to allow customers to browse to the sample website.

Figure 2.2. Select and launch the Cloud Shell in the Azure portal by selecting the shell icon.

Figure 2.3. The exercises in this book mostly use the Bash version of the Cloud Shell. The first time you access the Cloud Shell, it will probably load the PowerShell shell. Select the Bash version, and then wait a few seconds for Azure to switch the shell. Each time you access the Cloud Shell after this change, the Bash version should load automatically.

Figure 2.4. An SSH key pair created in the Azure Cloud Shell with the ssh-keygen command
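
For reference, a minimal sketch of the commands behind this figure, run from the Cloud Shell (the default key path is assumed; press Enter to accept it):

    # Generate a 2048-bit RSA SSH key pair
    ssh-keygen -t rsa -b 2048

    # Display the public key so you can copy it when you create a VM
    cat ~/.ssh/id_rsa.pub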

Figure 2.5. Create an Ubuntu Linux VM in the Azure portal. Provide a VM name, and then enter a username and the SSH public key you created. Create a resource group, and select your closest Azure location.
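
The figure shows the portal workflow; a rough Azure CLI equivalent looks like the following sketch. The resource group, VM name, and username are assumptions, and the UbuntuLTS image alias reflects the CLI at the time of writing:

    # Create a resource group, then an Ubuntu VM that uses your SSH public key
    az group create --name azuremolchapter2 --location eastus

    az vm create \
        --resource-group azuremolchapter2 \
        --name webvm \
        --image UbuntuLTS \
        --admin-username azuremol \
        --ssh-key-value ~/.ssh/id_rsa.pub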

Figure 2.6. When you create a VM, you can change virtual network settings, add extensions, configure backups, and more. For this chapter, there’s nothing to change, but for your own deployments, you may want to configure additional settings.

Figure 2.7. Select your VM in the Azure portal, and then select Connect to generate the SSH connection information.

Figure 2.8. Use the connection string shown in the Azure portal to create an SSH connection to your VM from the Cloud Shell.

Figure 2.9. Select your VM in the Azure portal to view its information. The public IP address is shown at lower right.

Figure 2.10. To see your web server in action and view the default Apache 2 page, enter the public IP address in a web browser.

Figure 2.11. To save costs, delete resource groups when you no longer need them.
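
From the Cloud Shell, the cleanup is one hedged line (the resource group name is an assumption):

    # Delete the resource group and everything in it, with no confirmation prompt
    az group delete --name azuremolchapter2 --yes --no-wait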

Chapter 3. Azure Web Apps

Figure 3.1. In this chapter, you create an app service plan and a basic web app and then deploy a website from GitHub.
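
As a hedged CLI sketch of those two steps (the plan name, web app name, and SKU are assumptions; web app names must be globally unique):

    # Create an App Service plan, then a web app that supports local Git deployment
    az appservice plan create \
        --resource-group azuremolchapter3 \
        --name appservicemol \
        --sku S1

    az webapp create \
        --resource-group azuremolchapter3 \
        --plan appservicemol \
        --name azuremol \
        --deployment-local-git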

Figure 3.2. Select a specific version of a language in the Web Apps application settings.

Figure 3.3. Select your web app in the Azure portal. On the right side of the window is information such as the current state of the web app, its location, and its URL.

Figure 3.4. To see the default web app page in action, open a web browser to the URL of your site.

Figure 3.5. You create a local copy of the sample files from GitHub with the git clone command. To push these local files to your Azure web app, you use git push.
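
A minimal sketch of that Git workflow, assuming the book's sample repo and a web app configured for local Git deployment (the remote URL is a placeholder from your own web app):

    # Clone the sample files, then push them to your web app's Git remote
    git clone https://github.com/fouldsy/azure-mol-samples.git
    cd azure-mol-samples

    git remote add azure <Git clone URL shown for your web app>
    git push azure master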

Figure 3.6. Refresh your web browser to see the default web app page replaced with the basic static HTML site from GitHub.

Figure 3.7. Your application can generate application logs and server logs. To help you review or troubleshoot problems, these logs can be downloaded with FTP or viewed in real time.

Figure 3.8. You can view the Web Apps web server log streams of live logs from your application to help verify and debug application performance. The console box at the right side on the screen shows the real-time streaming logs from your web app.

Figure 3.9. Swap between available deployment slots to make your dev site live in production.

Chapter 4. Introduction to Azure Storage

Figure 4.1. An Azure Storage account allows you to create and use a wide variety of storage features, way beyond just somewhere to store files.

Figure 4.2. Unstructured data stored in a table: a key-value pair made up of the PartitionKey and RowKey. The data is shown in the cost and description fields.

Figure 4.3. Messages are received from the frontend application component; each message details what pizza the customer ordered in the Message Text property.

Figure 4.4. As each message is processed, it’s removed from the queue. The first message shown in figure 4.3 was removed once it was processed by the backend application component.

Chapter 5. Azure Networking basics

Figure 5.1. Software-defined network connections in Azure

Figure 5.2. The virtual network and subnet you create in the Azure portal form the basis for the infrastructure in this chapter.

Figure 5.3. Create a virtual network interface card in the Azure portal.

Figure 5.4. Create a public IP address and DNS name label in the Azure portal.

Figure 5.5. To attach the public IP address to your network interface, select Associate at the top of the overview window.

Figure 5.6. The public IP address is now associated with a network interface. With a dynamic assignment, no public IP address is shown until a VM is created and powered on.

Figure 5.7. Inbound packets are examined, and each NSG rule is applied in order of priority. If an Allow or Deny rule match is made, the packet is either forwarded to the VM or dropped.

Figure 5.8. Create an NSG in your resource group.

Figure 5.9. Default security rules are created that permit internal virtual network or load-balancer traffic but deny all other traffic.

Figure 5.10. Create an NSG rule to allow HTTP traffic.
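
The CLI equivalent is roughly the following (the resource group, NSG, and rule names are assumptions):

    # Allow inbound HTTP traffic on TCP port 80
    az network nsg rule create \
        --resource-group azuremolchapter5 \
        --nsg-name webnsg \
        --name allowhttp \
        --priority 100 \
        --direction Inbound \
        --access Allow \
        --protocol Tcp \
        --destination-port-range 80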

Figure 5.11. (Figure 5.1, repeated.) You’re bringing together two subnets, NSGs, rules, network interfaces, and VMs. This is close to a production-ready deployment where one VM runs the web server and is open to public traffic, and another VM in a separate subnet is used for remote connections to the rest of the application environment.

Chapter 6. Azure Resource Manager

Figure 6.1. One way to build an application in Azure is for all the resources related to that application deployment to be created in the same resource group and managed as one entity.

Figure 6.2. An alternate approach is to create and group resources based on their role. A common example is that all core network resources are in a separate resource group from the core application compute resources. The VMs in the compute resource group can access the network resources in the separate group, but the two sets of resources can be managed and secured independently.

Figure 6.3. The access control for each Azure resource lists what the current assignments are. You can add assignments, or select Roles to see information about what permission sets are available.

Figure 6.4. Create a resource lock in the Azure portal.

Figure 6.5. You can create up to 15 name:value tags for each Azure resource.

Figure 6.6. Humans make mistakes, such as mistyping a command or skipping a step in a deployment. You can end up with slightly different VMs at the end of each deployment. Automation is often used to remove the human operator from the equation and instead create consistent, identical deployments every time.

Figure 6.7. Azure Resource Manager handles dependencies for you. The platform knows the order in which to create resources and has awareness of the state of each resource without the use of handwritten logic and loops like those you must use in your own scripts.

Figure 6.8. Many extensions are available in Visual Studio Code to improve and streamline how you create and use Azure Resource Manager templates.

Figure 6.9. With Visual Studio, you can graphically build templates and explore JSON resources.

Figure 6.10. For each Resource Manager template in the GitHub sample repo, there’s a Deploy to Azure button. If you select this button, the Azure portal opens and loads the template. You’re prompted for some basic parameters, and the rest of the deployment is handled by the template.

Chapter 7. High availability and redundancy

Figure 7.1. If your application runs on a single VM, any outage on that VM causes the application to be inaccessible. This could mean customers take their business elsewhere or, at the least, aren’t satisfied with the service you provide.

Figure 7.2. Hardware in an Azure datacenter is logically divided into update domains and fault domains. These logical domains allow the Azure platform to understand how to distribute your VMs across the underlying hardware to meet your redundancy requirements. This is a basic example—an update domain likely contains more than one physical server.

Figure 7.3. The template in GitHub for this exercise loads in the Azure portal and prompts for a few parameters. Provide a resource group name, location, and SSH key, and then deploy the template to create your resources.

Figure 7.4. The availability set that your sample template deploys contains two fault domains and five update domains. The numbering system is zero-based. The update domains are created sequentially across the fault domains.
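
A hedged CLI sketch of an availability set with that layout (the names are assumptions):

    # Create an availability set with 2 fault domains and 5 update domains
    az vm availability-set create \
        --resource-group azuremolchapter7 \
        --name azuremolavailabilityset \
        --platform-fault-domain-count 2 \
        --platform-update-domain-count 5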

Figure 7.5. The first VM is created in fault domain 0 and update domain 0.

Figure 7.6. With a second VM created, the VMs are now evenly distributed across fault and update domains. This is often considered the minimal amount of redundancy to protect your applications.

Figure 7.7. The third VM is created back in fault domain 0, but in update domain 2. Although VMs 0 and 2 potentially share the same hardware failure risk, they’re in different update domains and so will not undergo regular maintenance at the same time.

Figure 7.8. The availability set lists the VMs it contains and shows the fault domain and update domain for each VM. This table lets you visualize how the VMs are distributed across the logical domains.

Figure 7.9. An Azure region can contain multiple availability zones: physically isolated datacenters that use independent power, network, and cooling. Azure virtual network resources such as public IP addresses and load balancers can span all zones in a region to provide redundancy for more than just the VMs.

Figure 7.10. When network resources are attached to a single Azure datacenter, or zone, an outage in that facility causes the entire application to be unreachable by the customer. It doesn’t matter that the other VMs continue to run in other zones. Without the network connectivity to distribute traffic from your customers, the whole application is unavailable.

Figure 7.11. To deploy the availability zone template in the Azure portal, specify a resource group, username, and password, and then the OS type and the number of VMs you wish to create. The template uses loops, copyIndex(), dependsOn, variables, and parameters, as covered in the previous chapter on Resource Manager.

Chapter 8. Load-balancing applications

Figure 8.1. Traffic from the internet enters the load balancer through a public IP address that’s attached to a frontend IP pool. The traffic is processed by load-balancer rules that determine how and where the traffic should be forwarded. Health probes attached to the rules ensure that traffic is only distributed to healthy nodes. A backend pool of virtual NICs connected to VMs then receives the traffic distributed by the load-balancer rules.

Figure 8.2. An internet load balancer may be used to distribute traffic to frontend VMs that run your website, which then connect to an internal load balancer to distribute traffic to a database tier of VMs. The internal load balancer isn’t publicly accessible and can only be accessed by the frontend VMs within the Azure virtual network.

Figure 8.3. A port-based load-balancer health probe checks for a VM response on a defined port and protocol. If the VM doesn’t respond within the given threshold, the VM is removed from the load-balancer traffic distribution. When the VM starts to respond correctly again, the health probe detects the change and adds the VM back into the load-balancer traffic distribution.

Figure 8.4. A VM that runs a web server and has a custom health.html page remains in the load-balancer traffic distribution provided that the health probe receives an HTTP code 200 (OK) response. If the web server process encounters a problem and can’t return requested pages, the VM is removed from the load-balancer traffic distribution. This provides a more thorough check of the web server state than port-based health probes.

Figure 8.5. With session affinity mode, the user connects to the same backend VM only for the duration of their session.

Figure 8.6. When you configure the load-balancer rules to use source IP affinity mode, the user can close and then start a new session but continue to connect to the same backend VM. Source IP affinity mode can use a 2-tuple hash that uses the source and destination IP address, or a 3-tuple hash that also uses the protocol.

Figure 8.7. Traffic in the load balancer is processed by NAT rules. If a protocol and port match a rule, the traffic is then forwarded to the defined backend VM. No health probes are attached, so the load balancer doesn’t check whether the VM is able to respond before it forwards the traffic. The traffic leaves the load balancer and is then processed by NSG rules. If the traffic is permitted, it’s passed to the VM.

Figure 8.8. One or more backend pools can be created in a load balancer. Each backend pool contains one or more VMs that run the same application component. In this example, one backend pool contains VMs that run the web application tier, and another backend pool contains the VMs that serve multimedia, such as images and video.

Figure 8.9. To prepare the virtual network, in this exercise you create a network, a subnet, and virtual NICs that are protected by an NSG. Rules attached to the NSG allow HTTP and SSH traffic.

Figure 8.10. No VMs have been created here—the load-balancer configuration deals with virtual network resources. There’s a tight relationship between the load balancer and virtual network resources.

Figure 8.11. When you open the public IP address of the load balancer in a web browser, traffic is distributed to one of the VMs that run your basic website. The load-balancer health probe uses the health.html page to confirm that the web server responds with an HTTP code 200 (OK). The VM is then available as part of the load-balancer traffic distribution.

Figure 8.12. In the Azure portal, select your load-balancer resource group and view the Resource Manager template.

Chapter 9. Applications that scale

Figure 9.1. You can scale your applications up and down, or in and out. The method you use depends on how your application is built to handle scale. Vertical scale adjusts the resources assigned to a VM or web app, such as the number of CPU cores or amount of memory. This method to scale an application works well if the application runs only one instance. Horizontal scale changes the number of instances that run your application and helps increase availability and resiliency.

Figure 9.2. As a database grows, it needs more resources to store and process the data in memory. To scale vertically in this scenario, you add more CPU and memory.

Figure 9.3. To manually scale a web app vertically, you change the pricing tier (size) of the underlying app service plan. The app service plan defines the amount of resources assigned to your web app. If your application requires a different amount of storage, number of CPUs, or deployment slots, you can change to a different tier to right-size the assigned resources to the application demand.

Figure 9.4. To deal with an increase in demand to your application, you can increase the number of VMs that run the application. This distributes the load across multiple VMs, rather than ever-larger single-instance VMs.

Figure 9.5. A virtual machine scale set logically groups together a set of VMs. Each VM is identical and can be centrally managed, updated, and scaled. You can define metrics that automatically increase or decrease the number of VMs in the scale set based on your application load.

Figure 9.6. Scale sets can automatically scale in and out. You define rules to monitor certain metrics that trigger the rules to increase or decrease the number of VM instances that run. As your application demand changes, so does the number of VM instances. This maximizes the performance and availability of your application, while also minimizing unnecessary cost when the application load decreases.

Figure 9.7. When you add an autoscale rule, you define the exact behavior required for the rule to trigger.

Figure 9.8. You should now have one rule that increases the instance count by one when the average CPU load is greater than 70%, and another rule that decreases the instance count by one when the average CPU load is less than 30%.

Figure 9.9. You should now have one rule that increases the instance count by one when the average CPU load is greater than 70%, and another rule that decreases the instance count by one when the average CPU load is less than 30%.
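
One hedged way to express those two rules with the CLI (the scale set and autoscale profile names are assumptions, and the syntax assumes the az monitor autoscale commands available in recent CLI versions):

    # Create an autoscale profile for the scale set, with instance limits
    az monitor autoscale create \
        --resource-group azuremolchapter9 \
        --resource scalesetmol \
        --resource-type Microsoft.Compute/virtualMachineScaleSets \
        --name autoscalemol \
        --min-count 2 --max-count 10 --count 2

    # Scale out by one instance when average CPU load is greater than 70%
    az monitor autoscale rule create \
        --resource-group azuremolchapter9 \
        --autoscale-name autoscalemol \
        --condition "Percentage CPU > 70 avg 10m" \
        --scale out 1

    # Scale in by one instance when average CPU load is less than 30%
    az monitor autoscale rule create \
        --resource-group azuremolchapter9 \
        --autoscale-name autoscalemol \
        --condition "Percentage CPU < 30 avg 10m" \
        --scale in 1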

Chapter 10. Global databases with Cosmos DB

Figure 10.1. In a structured database, data is stored in rows and columns within a table. Each row contains a fixed set of columns that represent the schema for the database.

Figure 10.2. In an unstructured database, data is stored without fixed mappings of columns to a row in a table. You can add toppings to a single pizza, for example, without updating the entire schema and other records.

Figure 10.3. Traditional structured databases scale vertically. As the database grows, you increase the amount of storage, memory, and CPU power on the server.

Figure 10.4. Unstructured NoSQL databases scale horizontally. As the database grows, it’s sharded into segments of data that are distributed across each database server.

Figure 10.5. In this section, you create a resource group and a Cosmos DB account. A document database is then created in this account, and you add three entries to represent a basic menu for your pizza store.

Figure 10.6. Create a Cosmos DB database with the SQL model type (API). You can also automatically enable geo-redundancy across a paired region as you create the database, but you’ll do that in a separate step later.
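
A minimal CLI sketch of creating the account with the SQL (DocumentDB) model (the account name is an assumption and must be globally unique):

    # Create a Cosmos DB account that uses the SQL model type
    az cosmosdb create \
        --resource-group azuremolchapter10 \
        --name azuremol \
        --kind GlobalDocumentDB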

Figure 10.7. A Cosmos DB database that uses the document model stores data in collections. These collections let you group data for quicker indexing and querying.

Figure 10.8. Create a collection to hold your pizza store menu items. You can choose how much storage to reserve and how much throughput (RU/s) you want your application to use.

Figure 10.9. With the Data Explorer in the Azure portal, you can browse your collections to query or create new documents. This graphical tool lets you quickly manage your database from a web browser.

Figure 10.10. Data is replicated from one primary Cosmos DB instance to multiple Azure regions around the world. Web applications can then be directed to read from their closest region, and customers are dynamically routed to their closest location to minimize latency and improve response times.

Figure 10.11. Select an Azure region around the world to replicate your Cosmos DB database to, and then choose Save. Those are all the steps required to globally distribute your data.

Figure 10.12. The flow of requests through a Cosmos DB SDK when an application uses location awareness to query Cosmos DB

Figure 10.13. The Keys section of your Cosmos DB account lists the connection information and access keys. You need this information when you build and run applications, such as in the end-of-chapter lab.

Figure 10.14. The basic Azure web app shows your short pizza menu based on data in the Cosmos DB database. The write endpoint is shown as East US, and you used a preferred location list to set West Europe as the primary read endpoint. This approach lets you pick locations as you deploy your app globally, without the need for complex traffic routing.

Chapter 11. Managing network traffic and routing

Figure 11.1. In this chapter, we examine how you can create DNS zones in Azure DNS. To minimize latency and improve response times, Traffic Manager can then be used to query DNS and direct customers to their closest application instance.

Figure 11.2. This simplified flow of DNS traffic shows how a user sends a DNS request for www.azuremol.com to a DNS server, receives a response that contains the associated IP address, and then can connect to the web application.

Figure 11.3. To delegate your domain to Azure, configure your current domain provider with the Azure name server addresses. When a customer makes a DNS query for your domain, the requests are sent directly to the Azure name servers for your zone.

Figure 11.4. You can view the Azure name servers for your DNS zone in the Azure portal, Azure CLI, or Azure PowerShell.
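
For example, a hedged CLI sketch to list the name servers for a zone (the zone and resource group names are assumptions):

    # Show the Azure name servers assigned to the DNS zone
    az network dns zone show \
        --resource-group azuremolchapter11 \
        --name azuremol.com \
        --query nameServers \
        --output table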

Figure 11.5. A customer sends a DNS query to a DNS service for www.azuremol.com. The DNS service forwards the query to Traffic Manager, which returns an endpoint based on the routing method in use. The endpoint is resolved to an IP address, which the customer uses to connect to the web application.

Figure 11.6. A parent Traffic Manager profile with the geographic routing method should use child profiles that contain multiple endpoints. Those child endpoints can then use priority routing to always direct traffic to the preferred endpoint. For example, the East US child profile always sends traffic to the endpoint in East US, provided the endpoint is healthy. If the endpoint is unhealthy, traffic is then directed to West Europe. Without this child profile, customers in East US couldn’t fail over to an alternate endpoint and would be unable to access your web application.

Figure 11.7. In this section, you associate your endpoints with the Traffic Manager profiles, and define the priority for the traffic to be distributed.

Figure 11.8. Select your resource group, and then choose the Traffic Manager profile for East US. Under Settings, select Endpoints, and then choose Add.

Figure 11.9. Create an endpoint named eastus. The target resource type is App Service. Select the web app you created in East US. With a priority of 1, all traffic is directed to this endpoint, provided the endpoint remains healthy and can serve traffic.

Figure 11.10. Two endpoints are listed for the Traffic Manager profile. The endpoint for East US has the lower priority, so it always receives traffic when the endpoint is healthy. Redundancy is provided with the West Europe endpoint, which is used only when the East US endpoint is unavailable.

Figure 11.11. The same configuration of endpoints as the previous Traffic Manager profile, this time with the location of the web apps reversed. These child profiles can be used to always route customers to the web app in either East US or West Europe, but you now have redundancy to fail over to another endpoint if the primary endpoint in the region is unavailable.

Figure 11.12. The child Traffic Manager profiles for East US and West Europe have been created, with the regional web apps and priorities configured as needed. Now you need to associate the child profiles with the parent profile.

Figure 11.13. This endpoint uses the nested profile for East US. The regional grouping directs all customers from North America/Central America/Caribbean to the endpoints configured in the child profile.

Figure 11.14. Nested child profiles with associated geographic regions. This parent Traffic Manager profile now directs all traffic from Europe to the web app in West Europe, with redundancy to use East US if there’s a problem. The opposite is true for customers in North America/Central America/Caribbean.

Figure 11.15. After the last few chapters, you should understand how to create highly available IaaS or PaaS applications in Azure. The IaaS solutions can use availability zones, load balancers, and scale sets. The PaaS solutions can use autoscaling web apps and Cosmos DB. Traffic Manager and Azure DNS can route customers to the most appropriate application instance automatically, based on their geographic location.

Chapter 12. Monitoring and troubleshooting

Figure 12.1. By default, boot diagnostics are enabled when you create a VM in the Azure portal. A storage account is created, which is where the boot diagnostics are stored. In a later exercise, you review and enable guest OS diagnostics, so don’t enable them right now. For production use, I recommend that you enable both boot diagnostics and guest OS diagnostics for each VM you create.

Figure 12.2. The boot diagnostics for a VM report on the health and boot status. If errors are displayed, you should be able to troubleshoot and diagnose the root cause. You can also download the logs from the portal for analysis on your local computer.

Figure 12.3. You can configure events and log levels for various components within the VM. This ability lets you centralize your VM logs for analysis and to generate alerts. Without the need to install complex, and often costly, monitoring systems, you can review and receive notifications when issues arise on your Azure VMs.

Figure 12.4. With the VM diagnostics extension installed, additional [Guest] metrics are available for review. You can search for and select the metrics to view, or change the time range as desired.

Figure 12.5. From the list of Azure subscriptions (you probably have only one), expand the list of regions. From a security perspective, you should only enable Network Watcher in Azure regions that you need to monitor for a given problem. Network Watcher can be used to capture packets for other applications and services across your subscription if you enable the feature in many regions.

Figure 12.6. Select your VM, and provide a local port on the VM to test. In this example, you want to test connectivity to port 80 to simulate a common web application on the VM. The remote IP address can be any external address for Network Watcher to simulate traffic. What really happens is that Network Watcher examines the effective security group rules to validate whether traffic can flow to the VM based on the source and destination IP addresses and ports.

Figure 12.7. When you select a VM, Network Watcher examines how all the NSG rules are applied and the order of precedence, and shows what effective rules are currently applied. You can then quickly drill down to the subnet, virtual NIC, and default rules to find and edit where a given rule is applied.

Figure 12.8. A network capture when viewed in Microsoft’s Message Analyzer. Each individual packet is available for inspection. You can group and filter by communication protocol or client-host. This depth of network data allows you to examine the actual packets that flow between nodes to troubleshoot where an error occurs. A former colleague once told me, “The packets never lie.” The puzzle is to figure out what the packets tell you.

Figure 12.9. When you start a packet capture, you can save the data to Azure Storage or a local file on the VM. You can also specify a maximum size or duration of the packet captures. To limit captures to particular addresses or ports, you can add filters and define your specific needs.
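
A hedged sketch of starting a capture from the CLI (the VM, storage account, and capture names are assumptions):

    # Capture packets on a VM for up to 5 minutes and save them to a storage account
    az network watcher packet-capture create \
        --resource-group azuremolchapter12 \
        --vm molvm \
        --name molcapture \
        --storage-account molstorage \
        --time-limit 300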

Figure 12.10. Create an alert when a security event for your VM records a Restart Virtual Machine operation.

Chapter 13. Backup, recovery, and replication

Figure 13.1. Multiple VMs or physical servers, from various providers and locations, can be backed up through the central orchestration service. Azure Backup uses defined policies to back up data at a given frequency or schedule. These backups can then be stored in Azure or in an on-premises storage solution. Throughout, data is encrypted for added security.

Figure 13.2. Incremental backups only back up the data that has changed since the previous operation. The first backup is always a full backup. Each subsequent backup job only backs up data that has changed since the previous job. You control the frequency of full backups with policies. This approach minimizes the amount of data that needs to securely travel across the network and be housed in the destination storage location. Azure Backup maintains the relationship of incremental backups to each other to ensure that when you restore data, it’s consistent and complete.

Figure 13.3. The recovery point objective (RPO) defines how much data loss you can sustain for a protected instance. The longer the RPO, the greater the acceptable data loss. An RPO of one day means up to 24 hours of data could be lost, depending on when the data loss occurred in relation to the last backup. An RPO of one week means up to seven days’ worth of data could be lost.

Figure 13.4. The RTO defines how long it’s acceptable for the data-restore process to take and the application to be unavailable. The more recovery points are involved in the restore process, the longer the RTO. In a similar manner, the closer the backup storage is to the restore point, the shorter the RTO.

Figure 13.5. When you create an Azure Backup policy, you can define how long to retain recovery points. These retention values allow you to build policies to fit various compliance and audit requirements that you must adhere to.

Figure 13.6. If needed, choose your Recovery Services vault, and then select your backup policy from the list of available policies. The backup schedule and retention options are shown for review.

Figure 13.7. To create the first backup, select the Backup Now button. The status updates when complete and shows the latest backup time, latest restore point, and oldest restore point.
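
A hedged CLI sketch of triggering that first backup (the vault, container, and item names are assumptions, and --retain-until takes a DD-MM-YYYY date):

    # Start an on-demand backup job for a protected VM
    az backup protection backup-now \
        --resource-group azuremolchapter13 \
        --vault-name azuremol \
        --container-name molvm \
        --item-name molvm \
        --retain-until 01-01-2020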

Figure 13.8. When you perform a file-level restore, you choose a recovery point to restore. A recovery script is then downloaded to your computer, which can only be executed by entering the generated password. The recovery script mounts the recovery point as a local volume on your computer. Once you’ve restored the files you need, you unmount the disks from your computer, which returns them for use in the recovery vault.

Figure 13.9. When the VM backup is complete, the overview page shows the data from the last backup and available restore points. To start the restore process, select Restore VM.

Figure 13.10. You can restore a complete VM or just the data disks. This example restores a complete VM and connects it to the same virtual network and subnet as the original VM. In practice, you should connect to a different subnet to keep the network traffic separate from production workloads.

Figure 13.11. Azure Site Recovery orchestrates the replication and migration of physical or virtual resources to another location. Both on-premises locations and Azure can serve as source and destination points for protection, replication, or migration.

Figure 13.12. Azure Site Recovery replicates configuration, data, and virtual networks from the production environment to a recovery environment. The VMs aren’t created in the recovery environment until a failover is initiated. Only the data replicates.

Figure 13.13. Changes on the production disks are immediately replicated to a storage account cache. This storage account cache prevents performance impacts on the production workloads as they wait to replicate changes to the remote recovery location. The changes from the storage account cache are then replicated to the remote recovery point to maintain data consistency.

Figure 13.14. Site Recovery populates these default values automatically for all the replicated resources, vaults, and storage cache it needs. To replicate a VM, select Enable Replication.

Chapter 14. Data encryption

Figure 14.1. In this basic example, an attacker could intercept network traffic that’s sent over an unencrypted HTTP connection. Because your data isn’t encrypted, the attacker could piece together the network packets and obtain your personal and financial information. If you instead connect to the web server over an encrypted HTTPS connection, an attacker can’t read the contents of the network packets and view the data.

Figure 14.2. You can easily upload and apply a custom SSL certificate to your web apps. A default wildcard certificate is already available at https://yourwebapp.azurewebsites.net, but if you use a custom domain name, you need to purchase and apply a custom SSL certificate.

Figure 14.3. When you encrypt your data, only you can decrypt and view the contents. If an attacker were to gain access to a virtual disk or individual files, they wouldn’t be able to decrypt the contents. Encryption methods can be combined: customers can connect to your web app over HTTPS, you can force traffic to storage accounts to be over HTTPS, and you can then encrypt the data that’s written to disk.

Figure 14.4. As data is written to a managed disk, it’s encrypted. In-memory data on the VM, or data on temporary disks local to the VM, isn’t encrypted unless the entire VM is enabled for encryption, which we look at later in this chapter. The automatic encryption of data written to managed disks causes no overhead to the VM. The Azure platform performs the encryption operation on the underlying storage. The VM doesn’t need to handle any encrypt/decrypt processes.

Figure 14.5. When you enable SSE, Azure blobs and files are encrypted as the data is written to disk. Azure tables and queues aren’t encrypted. For additional data security, you can force all communications with a Storage account to use secure communication protocols, such as HTTPS. This protects the data in transit until the moment it’s encrypted on disk.

Figure 14.6. When you encrypt a VM, you specify a service principal and encryption key to use. The credentials you provide for the service principal are used to authenticate against Azure Key Vault and request the specified key. If your credentials are valid, the encryption key is returned and used to encrypt the VM.

Figure 14.7. When a key vault is enabled for disk encryption, it grants permission for the Azure platform to request and use the encryption key to successfully start an encrypted VM.
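
A minimal sketch of creating a vault with that permission enabled (the vault name is an assumption and must be globally unique):

    # Create a key vault that the Azure platform can use for disk encryption
    az keyvault create \
        --resource-group azuremolchapter14 \
        --name azuremolkeyvault \
        --enabled-for-disk-encryption true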

Figure 14.8. (Repeats figure 14.6) You can now use your AAD service principal to request the use of an encryption key stored in a key vault. This encryption key can be used to encrypt a VM. You create and encrypt this VM in the end-of-chapter lab.

Figure 14.9. When you encrypt a VM, the Azure disk encryption extension is installed. This extension manages the use of BitLocker on Windows VMs or dm-crypt on Linux VMs, to perform the data encryption on your VM. The extension is also used when you query the encryption status for a VM.

Chapter 15. Securing information with Azure Key Vault

Figure 15.1. Azure Key Vault provides a secure way to store digital information such as certificates, keys, and secrets. These secure items can then be accessed directly by your applications and services, or Azure resources such as VMs. With minimal human interaction, you can centrally distribute secure credentials and certificates across your application environments.

Figure 15.2. Azure Key Vault is a logical resource in Azure, but any certificates, secrets, and keys are stored in a hardware security module (HSM). For development or test scenarios, a software-protected vault can be used, which performs any cryptographic operations, such as encrypting or decrypting data, in software rather than in hardware on the HSM. For production, you should use an HSM-protected vault, where all the processing is done on hardware.

Figure 15.3. In the next few exercises, you’ll build an example of a secret stored in a key vault that can be used as the database password for a MySQL Server install. A VM is created that has permissions to request the secret from the key vault. The retrieved secret is then used to automatically enter a secure credential during the application install process.

Figure 15.4. When you create a managed service identity for a VM, a service principal is created in Azure Active Directory. This service principal is a special type of account that can be used for resources to authenticate themselves. The VM then uses the Instance Metadata Service endpoint to make requests for access to resources. The endpoint connects to AAD to request access tokens when the VM needs to request data from other services. When an access token is returned, it can be used to request access to Azure resources, such as a key vault.

Figure 15.5. The VM uses the IMDS to request access to a key vault. The endpoint communicates with AAD to request an access token. The access token is returned to the VM, which is then used to request access from the key vault. If access is granted by the key vault, the secret for databasepassword is returned to the VM.

Figure 15.6. The curl request covers the first three steps on this diagram. The curl request is made, the endpoint communicates with AAD, and an access token is issued.

Figure 15.7. This second curl request covers the last two steps in the diagram. The access token is used to request the secret from the key vault. The JSON response is returned, which includes the value of the secret.
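
A hedged sketch of those two curl requests, run from inside the VM. The vault name, secret name, and API versions are assumptions based on the flow in the figures; the Instance Metadata Service address is fixed at 169.254.169.254:

    # Step 1: request an access token for Key Vault from the Instance Metadata Service
    curl -s -H Metadata:true \
        "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net"

    # Step 2: use the returned access token to read the databasepassword secret
    curl -s -H "Authorization: Bearer <access_token>" \
        "https://azuremol.vault.azure.net/secrets/databasepassword?api-version=2016-10-01"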

Figure 15.8. A user, application, or service can request a new certificate from a key vault. A certificate signing request (CSR) is sent by the key vault to a certificate authority (CA). This could be an external third-party CA or a trusted internal CA. Azure Key Vault can also act as its own CA to generate self-signed certificates. The CA then issues a signed X.509 certificate, which is stored in the key vault. Finally, the key vault returns the certificate to the original requestor.

Figure 15.9. Your Remote Desktop client may try to use your default local computer credentials. Instead, select Use a Different Account, and then provide the localhost\azuremol credentials that you specified when you created the VM.

Figure 15.10. In the Microsoft Management Console, add the Certificates snap-in on the local computer. Expand the Personal > Certificates store to view installed certificates. The certificate injected from Key Vault is listed.

Chapter 16. Azure Security Center and updates

Figure 16.1. Azure Security Center monitors your Azure resources and uses defined security policies to alert you to potential threats and vulnerabilities. Recommendations and steps to remediate issues are provided. You can also use just-in-time VM Access, monitor and apply security updates, and control whitelisted applications that can run on VMs.

Figure 16.2. The Azure Security Center Overview window provides a list of recommendations, alerts, and events. You can select a core resource type such as Compute or Networking to view a list of security items specific to those resources.

Figure 16.3. The VM you created already triggers security warnings. In this example, the first is that no firewall appliance is detected other than NSGs. The second warning is that the NSG allows traffic from any internet device, rather than restricting access to a specific IP address range.

Figure 16.4. With JIT VM access, NSG rules are configured to deny remote connections to a VM. RBAC permissions are used to verify permissions when a user requests access to a VM. These requests are audited, and if the request is granted, the NSG rules are updated to allow traffic from a given IP range, for a defined period of time. The user can access the VM only during this time. Once the time has expired, the NSG rules automatically revert to a deny state.

Figure 16.5. Select a VM from the Recommended options, and then choose to Enable JIT on 1 VMs. State currently shows that this VM is Open for all remote access, which flags the severity of the security concern as High.

Figure 16.6. When you enable JIT, you can change the default ports to be allowed, the allowed source IPs, and the maximum request time in hours. These JIT rules allow granular control over what’s permitted, to allow only the bare minimum of connectivity.

Figure 16.7. The JIT rules are created with the lowest priority numbers, which gives them the highest precedence. These priorities make sure the JIT rules take precedence over any later rules applied at the subnet level.

Figure 16.8. When you request access, only the specific ports and source IPs are permitted. The defaults are populated from the settings you provided when you enabled the VM for JIT, but the values can now be changed as needed for each access request.

Figure 16.9. The JIT rule now allows traffic on port 22 to your VM, but only from your public IP address. After the time period specified (by default, 3 hours), this NSG rule reverts to denying traffic.

Figure 16.10. Update Management installs a VM agent that collects information on the installed updates on each VM. This data is analyzed by Log Analytics and reported back to the Azure platform. The list of required updates can then be scheduled for automatic install through Azure Automation runbooks.

Figure 16.11. Operations Management Suite (OMS) covers multiple Azure services that work together to provide management and configuration features across your entire application environment. The services that use OMS aren’t limited to Azure VMs or resources and can work across other cloud-providers or on-premises systems when appropriately configured.

Figure 16.12. The OMS dashboard for Update Management reports the status of configured VMs. As more VMs are added to the service, you can quickly determine how many VMs require patches and how critical the missing patches are in terms of security and compliance. The pie charts show the number of Windows and Linux computers in need of updates, and then how many updates are required for each VM.

Figure 16.13. Once the VM agent has scanned for compliance, a list of available updates is provided. Depending on the OS and version, Update Management may be able to work with Log Analytics to classify the updates based on severity, or provide links to the relevant update hotfix pages.

Figure 16.14. The list of scheduled deployment tasks is shown. If desired, you can delete a given task; otherwise, the updates are automatically applied at the defined time.

Figure 16.15. In the Azure Automation account, you can manage multiple computers and view the status or apply updates. Both Azure VMs and non-Azure computers can be monitored and controlled by the same Azure Automation account. Behind the scenes, OMS can integrate with other providers to install agents on computers in a hybrid environment. This integration allows a single dashboard and management platform to handle your update needs.

Figure 16.16. You can monitor the status of running Azure Automation jobs in the portal. To help review or troubleshoot tasks, you can click a job to view any output and generated logs.

Chapter 17. Machine learning and artificial intelligence

Figure 17.1. A common use of AI in everyday life is digital assistants such as Cortana, Siri, and Google Assistant. You can use voice or text commands to interact with them, and they can monitor your daily calendar and commute conditions to warn you about traffic problems.

Figure 17.2. AI can take input from the user and make decisions that best suit the anticipated action. The AI isn’t preprogrammed with all of these possible responses and decision trees. Instead, it uses data models and algorithms to apply context to the user input and interpret the meaning and appropriate outcome.

Figure 17.3. Large amounts of raw data are processed and made ready for use. Different preparation techniques and data sanitization may be applied, depending on the raw inputs. ML algorithms are then applied to the prepared data to build an appropriate data model that reflects the best correlation among all the data points. Different data models may be produced and refined over time. Applications can then use the data models on their own data inputs to help guide their decision-making and understand patterns.

Figure 17.4. The Google Maps service receives multiple data points from users each day that record details of their commute. This data can be prepared and processed, along with the weather forecast and real-time weather during those commutes. ML algorithms can be applied to these large data sets and a data model produced. As a smaller sample of active drivers then feed their current travel conditions or weather data into the Google Maps service, the data model can be applied to predict your commute and generate a traffic alert to your smartphone that suggests an alternate route home.

Figure 17.5. Data science virtual machines (DSVMs) are available for Windows and Linux. This Windows Server 2016 DSVM comes with several data science applications preinstalled, such as R Server, Jupyter Notebooks, and Azure Machine Learning Studio. DSVMs let you quickly get up and running with processing big data and building ML algorithms.

Figure 17.6. In the upcoming exercises, you create a web app bot that integrates multiple Azure AI and ML services to interact with a customer and help them order pizza.

Figure 17.7. When you train the LUIS app, the intents and entities are input and processed to create a data model. This data model is then used by your web app bot to process language understanding and intent. The number of intents and entities input for processing is small, so the data model isn’t perfect. In the real world, many more intents and entities would be provided, and you’d repeatedly train, test, and refine the data model against progressively larger data sets to build an accurate model for processing language and intent.

Figure 17.8. As you reclassify the intent of messages and retrain the LUIS app, the data model is refined as additional data inputs are provided to the ML algorithms. When you enter similar greetings in the future, the data model will hopefully be improved and will respond more appropriately.

Figure 17.9. A customer can now access your bot online and ask to view the menu or order pizza. LUIS provides the language understanding, which allows the bot to process orders and send them to Azure Storage for additional processing.

Figure 17.10. With your web app bot running, start a conversation and try to order a pizza. In this example dialog, you can view the menu, order a pizza, and check the order status. The app is basic and isn’t really creating orders or updating the status beyond what pizza was ordered, but the exercise (hopefully!) shows how you can quickly deploy a bot in Azure.

Chapter 18. Azure Automation

Figure 18.1. Azure Automation provides many related features. A shared set of resources, such as credentials, certificates, schedules, and connection objects can be used to automatically run PowerShell or Python scripts on target servers. You can define the desired state of a server, and Azure Automation installs and configures the server appropriately. Host updates and security patches can be automatically applied. All these features work across both Windows and Linux servers, in Azure and on-premises or other cloud providers.

Figure 18.2. Information on the Run As account is shown, which includes an ApplicationId and TenantId. These are specific properties for AAD that help identify the credentials for this account. A CertificateThumbprint is shown, which matches up with a digital certificate we look at in the next step.

Figure 18.3. The thumbprint of the RunAsCertificate matches that shown in RunAsConnection. In your runbooks, you define which connection asset to use. The appropriate certificate is used to log in to the Azure account.

Figure 18.4. The output of the runbook can be viewed, along with any logs that are generated or errors and warnings. This basic example completes in a few seconds, but more complex runbooks may take longer. You can monitor the status of those longer runbooks and stop or pause their execution if needed.

Figure 18.5. The desired state configuration for a server is created and stored in Azure Automation. The Automation account acts as a pull server, which allows connected servers to pull the required configuration from a central location. Different configuration modes can be set for the remediation behavior of a server if its configuration deviates from the desired state.

Figure 18.6. After the VM has been connected to Azure Automation DSC, the desired state is applied and the IIS web server is installed.

Chapter 19. Azure containers

Figure 19.1. With a traditional VM infrastructure, the hypervisor on each virtualization host provides a layer of isolation by providing each VM with its own set of virtual hardware devices, such as a virtual CPU, virtual RAM, and virtual NICs. The VM installs a guest operating system, such as Ubuntu Linux or Windows Server, which can use this virtual hardware. Finally, you install your application and any required libraries. This level of isolation makes VMs very secure but adds a layer of overhead in terms of compute resources, storage, and startup times.

Figure 19.2. A container contains only the core libraries, binaries, and application code required to run an app. The container is lightweight and portable, because it removes the guest OS and virtual hardware layer, which also reduces the on-disk size of the container and startup times.

Figure 19.3. In a traditional VM host, the hypervisor provides the scheduling of requests from the virtual hardware in each VM onto the underlying physical hardware and infrastructure. The hypervisor typically has no awareness of what specific instructions the guest OS schedules on the physical CPU, only that CPU time is required.

Figure 19.4. Containers share a common guest OS and kernel. The container runtime handles the requests from the containers to the shared kernel. Each container runs in an isolated user space, and some additional security features protect containers from each other.

Figure 19.5. In a traditional monolithic application, everything runs as a single application. There may be various components within the application, but it runs from a single install and is patched and updated as a single instance. With microservices, each component is broken down into its own application service and unit of execution. Each component can be updated, patched, and scaled independently of the others.

Figure 19.6. A Dockerfile was used to build a complete container image, azuremol. This image was pushed to an online public registry called Docker Hub. You can now create a container instance using this prebuilt public image from Docker Hub, which provides a ready-to-run application image.
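
A hedged CLI sketch of that deployment (the image name follows the book’s naming and is an assumption):

    # Run the prebuilt azuremol image from Docker Hub as a container instance with a public IP
    az container create \
        --resource-group azuremolchapter19 \
        --name azuremol \
        --image iainfoulds/azuremol \
        --ip-address Public \
        --ports 80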

Figure 19.7. When you create a container instance, the pizza store website runs without any additional configuration. All the configuration and content are included within the container image. This quick exercise highlights the portability and power of containers—once the container image has been prepared, your app is up and running as soon as a new container instance is deployed.

Figure 19.8. Your sample container from Docker Hub runs on a two-node Kubernetes cluster that you create in Azure Kubernetes Service. The Kubernetes deployment contains two logical pods, one on each cluster node, with a container instance running inside each pod. You then expose a public load balancer to allow your web app to be viewed online.

Figure 19.9. With the Kubernetes cluster created in AKS, you can now create a Kubernetes deployment and run your app. Your container runs across both nodes, with one logical pod on each node; you need to create a Kubernetes service that exposes a public load balancer to route traffic to your app.
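
A hedged kubectl sketch of those steps (the image name is an assumption; kubectl run created a deployment in the Kubernetes versions current when this book was written):

    # Run the sample container across the cluster with two replicas
    kubectl run azuremol \
        --image=docker.io/iainfoulds/azuremol:latest \
        --port=80 \
        --replicas=2

    # Expose the deployment through a public load balancer
    kubectl expose deployment/azuremol \
        --type=LoadBalancer \
        --port=80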

Chapter 20. Azure and the Internet of Things

Figure 20.1. Messages are sent between many connected IoT devices and a central system. Your applications and services can then process the data received and send device instructions to perform additional actions in response to their collected data.

Figure 20.2. With an IoT hub, you can centrally provision and manage many IoT devices at scale. Two-way communication exists between devices and Azure to read and write data. You can process data received from devices and route it to other Azure services such as Web Apps and Storage. To monitor and troubleshoot issues, you can route information to Azure Event Grid, which we look at in the next chapter, and then link to other monitoring solutions.

Figure 20.3. Copy and paste the connection string for your Azure IoT device into the Raspberry Pi simulator. The connectionString variable is used to connect to Azure and transmit the simulated sensor data.

Figure 20.4. An IoT hub receives messages from connected IoT devices and sends the messages to an endpoint. These endpoints can be used by other Azure services to consume data from the IoT devices. A default endpoint for events exists, which services like web apps can read from.

Figure 20.5. Messages are sent from IoT devices to the IoT hub, which then directs the messages to an endpoint. In each endpoint, consumer groups can be created. These consumer groups allow other Azure services to access the device messages, which they otherwise wouldn’t have access to. With consumer groups, you don’t have to use message queues to allow external applications to read IoT device data.

Figure 20.6. To let your web app read the data from your simulated Raspberry Pi IoT device, you create a consumer group in the IoT hub. You then define two application settings for your web app that let you connect to the consumer group. To let your web browser automatically receive the stream of data from the Raspberry Pi as new data is received, you also enable a setting for WebSockets.

Figure 20.7. As messages are sent from IoT devices, they pass through the IoT hub to an endpoint. Your application code reads in web app application settings that define the IoT hub connection string and consumer group to use. Once connected to the IoT hub, the consumer group allows web apps to read the IoT device messages. Each time a new message is received from an IoT device, your web app uses a WebSocket connection with web browsers that access your site to automatically push updates. This connection allows you to view real-time data streamed from IoT devices, such as temperature and humidity information, from your simulated Raspberry Pi device.

Figure 20.8. The sample application uses a WebSocket connection between your web browser and web app to automatically update every 2 seconds with the latest data from your simulated Raspberry Pi device.

Chapter 21. Serverless computing

Figure 21.1. In a serverless computing environment, each application is broken down into small, discrete units of application components. Each component runs on a serverless computing provider, such as Azure Function Apps, and output is produced that can then be consumed by other serverless application components or other Azure services such as Azure IoT or Azure Storage.

Figure 21.2. In a logic app, an input could be when a tweet is posted, a file is uploaded, or a message is received from an IoT device. The logic app applies rules and filters to the data and determines if the message meets criteria you define. Output actions, such as generating an email, are then completed. All this logic involves no programming or application infrastructure other than an Azure subscription.

Figure 21.3. As with a logic app, an event notification or trigger usually starts an Azure function. The function app contains a small unit of code that executes a specific task. There’s no infrastructure to configure or maintain. Only your small code block is required. Once the code execution is complete, the output can be integrated with another Azure service or application.

Figure 21.4. Azure services like Azure IoT and Azure Storage can send notifications to Azure Event Grid. These notifications may happen when a message is received from an IoT device or a file is uploaded to storage. Azure Event Grid allows other services and providers to subscribe to these notifications to perform additional actions in response to events.

Figure 21.5. IoT devices connect to the IoT hub and can stream all their sensor data. There could be hundreds or thousands of connected IoT devices. Azure Event Hubs handles all these separate data streams and allows services such as Azure HDInsight to process the raw data in Hadoop or Spark clusters to analyze and generate reports.

Figure 21.6. Messages are placed in a service bus queue by application components—a frontend app, in this example. Other middleware or backend applications can then pick up these messages and process them as needed. Here, a backend application picks up the message and processes it. Advanced messaging features include guaranteeing the order of messages on the queue, locking messages, timeouts, and relays.

Figure 21.7. When your simulated Raspberry Pi IoT device sends message data, a temperature reading of 30°C or more generates an alert. Messages tagged with this alert are placed on a service bus. These messages can then be used to trigger logic apps.

Figure 21.8. As messages are transmitted from IoT devices to an IoT hub, they can be routed to specific endpoints based on criteria you define. Messages that contain a temperature alert in the message body can be routed to an endpoint that uses the service bus queue. Messages placed on the service bus queue that contain a temperature alert can then be used to trigger things like Azure logic apps or function apps.

Figure 21.9. Select your service bus endpoint, and then enter the query string that gets any message received from IoT devices tagged with a temperature alert.

Figure 21.10. Each message received on the service bus queue from the IoT hub triggers the logic app. When the logic app runs, it sends an email notification through a defined mail provider.

Figure 21.11. To get started with your logic app, select the template for “When a message is received in the Service Bus queue.”

Figure 21.12. Enter a name for your service bus connection, and select your queue from the Connection Name list. Then, select the RootManageSharedAccessKey connection name, and choose Create.

Figure 21.13. Search for and select your current email provider, such as Gmail or Outlook.com. You can also choose SMTP - Send an Email to manually configure a different provider.

Figure 21.14. The simulated Raspberry Pi device sends a message to the IoT hub every 2 seconds that contains temperature sensor readings. If the temperature is above 30°C, a temperature alert is noted. The IoT hub routes any messages that contain a temperature alert to a service bus queue. Messages on this queue trigger an Azure logic app to run. The logic app is connected to an email provider, such as Outlook or Gmail, and sends an email notification about the temperature warning from the IoT device.

Figure 21.15. The logic app triggers the function app. The message received on the service bus queue is passed into the function. Code in the function app parses the message, extracts the temperature, and returns that value to the logic app. It takes a few milliseconds for the function app to run this code, so the cost of performing these compute tasks is fractions of a cent.

Figure 21.16. Drag the Send an Email action below the analyzeTemperature function. Select the end of the message Body, and the Dynamic content dialog appears. To insert the temperature value computed by the function app, select the message Body from the analyzeTemperature function.

Figure 21.17. As messages are received from the simulated Raspberry Pi device, any messages that contain a temperature alert are routed to the service bus queue endpoint. Messages on the service bus queue trigger a logic app, which passes the message to a function app. A JavaScript function parses the temperature reading and returns it to the logic app, which then sends an email notification that includes the temperature recorded by a sensor on the IoT device.
