Chapter 4. Configure and manage virtual networks

The virtual network (or VNet) in Azure provides the foundation for the Azure networking infrastructure. Virtual machines are connected to virtual networks. This connection provides inbound and outbound connectivity to other virtual machines, to on-premises networks, and to the Internet. Azure provides many networking features that will be familiar to those already experienced in networking, such as the ability to control which network flows are permitted and to control network routing. This allows Azure deployments to implement familiar network architectures, such as network segmentation between the layers of an N-tier application.

This chapter focuses on the core capabilities enabling virtual networks to be used flexibly and securely to connect your Azure virtual machines.

Skills in this chapter:

Skill 4.1: Implement and manage virtual networking

Azure Virtual Networks (VNets) form the foundation for the Azure Networking infrastructure. Each virtual network allows you to define a network space, comprising one or more IP address ranges. This network space is then carved into subnets. IP addresses for virtual machines, as well as some other services such as an internal Azure Load Balancer, are assigned from these subnets.

Each subnet allows you to define which network flows are permitted (using Network Security Groups), and what network routes should be taken (using User-Defined Routes). Together, these features allow you to implement many common network topologies, such as a DMZ containing a network security appliance, or a multi-tier application architecture with restricted communications between application tiers.

Create and configure virtual networks and subnets

A virtual network (VNet) is an Azure resource. When creating a VNet, the most important setting to choose is the IP range (or ranges) the VNet will use.

IP ranges are defined using Classless Inter-Domain Routing (CIDR) notation. For example, the range 10.5.0.0/16 represents all IP addresses starting with 10.5 (the /16 indicates that the first 16 bits of the IP address given are fixed, while the remaining bits are variable across the IP range being defined). Each virtual network can use either a single IP range or multiple disjoint IP ranges.

Note CIDR Notation

You will need to understand CIDR notation to work effectively with virtual networks in Azure. There are many good explanations to be found online, for example at: https://devblogs.microsoft.com/premier-developer/understanding-cidr-notation-when-designing-azure-virtual-networks-and-subnets/

The IP ranges in your VNet are private to that VNet. An IP address in your VNet can only be accessed from within that VNet, or from other networks connected to the VNet.

Note Virtual Network IP Ranges

When choosing the IP ranges for your VNet, it is normally a good idea to plan your network space in advance. You will typically want to avoid creating overlaps with other virtual networks, or with on-premises environments, since any overlap will prevent you from connecting these networks together later.

Your VNet IP ranges will typically be taken from the private address ranges defined in RFC 1918. These IP ranges are:

  • 10.0.0.0 - 10.255.255.255 (10.0.0.0/8)

  • 172.16.0.0 - 172.31.255.255 (172.16.0.0/12)

  • 192.168.0.0 - 192.168.255.255 (192.168.0.0/16)

You can also use public, Internet-addressable IP ranges in your VNet. However, this is not recommended, since the addresses within your VNet will take priority, and virtual machines in your VNet will no longer be able to access the corresponding Internet addresses.

In addition, there are a small number of IP ranges reserved by the Azure platform, and which therefore cannot be used. These are:

  • 224.0.0.0/4 (Multicast)

  • 255.255.255.255/32 (Broadcast)

  • 127.0.0.0/8 (Loopback)

  • 169.254.0.0/16 (Link-local)

  • 168.63.129.16/32 (Azure-provided DNS)

Subnets

Subnets are used to divide the VNet IP space. Different subnets can have different network security and routing rules, enabling applications and application tiers to be isolated, and network flows between them controlled. For example, consider a typical 3-tier application architecture comprising a web tier, an application tier and a database tier. By implementing each tier as a separate subnet, you can control precisely which network flows are permitted between tiers and from the Internet.

The name of a subnet must be unique within that VNet. You cannot change the subnet name after it has been created.

Each subnet must also define a single network range (in CIDR format). This range must be contained within the IP ranges defined by the VNet. Only IP addresses from within the subnets can be assigned to virtual machines and other resources. Subnets do not have to span the entire VNet address space—they can be a subset, leaving unused space for future expansion.

Azure will hold back a total of 5 IP addresses from each subnet. Like standard IP networks, Azure reserves the first and last IP addresses in each subnet for the network address and for broadcast, respectively. Azure also reserves the three addresses immediately following the network address for internal use. For example, if the subnet address range is 192.168.1.0/24, then the first available IP address is 192.168.1.4.

If you create a VNet with 10 subnets, you lose 50 IP addresses to Azure. Careful upfront planning is critical to avoid a shortage of IP addresses later. Also, the smallest subnet supported in an Azure VNet is a /29, which provides three usable IP addresses.
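
You can check whether a specific address is actually available in a subnet using the Azure CLI, as in this minimal sketch (the VNet and resource group names are illustrative and assume the VNet already exists):

# Check whether a specific private IP address is available in the VNet
# (names are illustrative)
az network vnet check-ip-address --name ExamRef-vnet --resource-group ExamRef-RG \
    --ip-address 10.0.0.4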

A VNet is not required to have subnets defined, although you are required to define one subnet when creating a VNet using the Azure portal, and a VNet without subnets isn’t very useful. VNets typically have multiple subnets, and you can add new subnets to your VNet at any time.

Changes to subnets and address ranges can only be made if there are no devices connected to the subnet. If you wish to change a subnet’s address range, you must first delete all the objects in that subnet. Once the subnet is empty, you can change its address range to any range that is within the address space of the VNet and not assigned to any other subnet.

Subnets can only be deleted from VNets if they are empty. Once a subnet is deleted, the addresses that were part of its address range are released and become available for use within new subnets that you create.
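
As an illustration, changing the address range of an empty subnet can be done with the Azure CLI along these lines (the VNet, subnet, and prefix values are assumptions for the example):

# Change the address range of an empty subnet (illustrative names and ranges)
az network vnet subnet update --name Apps --vnet-name ExamRef-vnet \
    --resource-group ExamRef-RG --address-prefixes 10.0.8.0/24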

Additional virtual network properties

So far, we have focused on the most important properties of each VNet and subnet: the IP address ranges. There are some additional properties and features of VNets and subnets to also be aware of. Table 4-1 provides a summary of the properties supported by virtual networks.

Table 4-1 Properties of a virtual network

Property Description
Name The VNet name. It must be unique within the resource group. It is between 2-64 characters, may contain letters (case insensitive), numbers, underscores, periods, or hyphens. Must start with a letter or number and end with a letter, number, or underscore.
Location The Azure region in which the VNet is created. Each VNet is tied to a single Azure region and can only be used by resources (such as virtual machines) in that region.
Address Space An array of IP address ranges available for use by subnets.
DDoS Protection Settings that define whether additional DDoS protection is provided for resources in the VNet and, if so, which protection plan is used.
DHCP Options Contains an array of DNS servers. If specified, these DNS servers are configured on virtual machines in the virtual network in place of the Azure-provided DNS servers.
Subnets The list of subnets configured for this VNet.
Peerings The list of peerings configured for this VNet. Peerings are used to create network connectivity between separate VNets.

Table 4-2 provides a summary of the properties supported by virtual network subnets.

Table 4-2 Properties of a virtual network subnet

Property Description
Name The subnet name must be unique within the VNet. It is between 2-80 characters, may contain letters (case insensitive), numbers, underscores, periods, or hyphens. Must start with a letter or number. Must end with a letter, number, or underscore.
Address prefix The IP address range for the subnet, specified in CIDR notation. All subnets must sit within the VNet address space and cannot overlap. Support for multiple IP ranges in a single subnet is currently in preview.
Network security group Reference to the network security group (NSG) for the subnet. NSGs are essentially firewall rules that can be associated to a subnet and are used to control which inbound and outbound traffic flows are permitted.
Route table Route table applied to the subnet, used to override the default system routes. These are used to send traffic to destination networks via routes different from those that Azure uses by default.
Service endpoints (and policies) An array of Service Endpoints for this subnet. Service Endpoints provide a direct route to various Azure PaaS services (such as Azure Storage), without requiring an Internet-facing endpoint. Service Endpoint Policies provide further control over which instances of those services may be accessed.
Delegations An array of references to delegations on the subnet. Delegations allow subnets to be used by certain Azure services, which will then deploy managed resources (such as an Azure SQL Database Managed Instance) into the subnet. Access to these resources is private and can be controlled using NSGs. Delegations also support access to and from on-premises networks when hybrid networking is used.

Creating a virtual network and subnets using the Azure portal

To create a new VNet by using the Azure portal, first click Create A Resource and then select Networking. Next, click Virtual Network as shown in Figure 4-1.

A screen shot shows the Azure Marketplace after clicking Create A Resource, Networking, which shows the ‘Virtual network’ item which you click to create a new Virtual Network.

Figure 4-1 Creating a virtual network using the Azure portal

The Create Virtual Network blade opens. Here you can provide configuration information about the virtual network. This blade requires the following inputs, as shown in Figure 4-2:

  • Name of the virtual network

  • Address space to be used for the VNet using CIDR notation

  • Subscription in which the VNet is created

  • The resource group where the VNet is created

  • The location for the VNet

  • Subnet name for the first subnet in the VNet

  • The Address Range of the first Subnet

The blade also allows you to specify some additional settings, relating to DDoS protection, service endpoints and the Azure firewall service.

When creating a VNet using the Azure portal, you can only specify a single IP address range, and you must specify exactly one subnet. You can use the Azure portal to add more IP ranges and subnets after the VNet has been created.

A screen shot shows Create Virtual Network blade information, such as the name of the VNet, address space, resource group, location, and subnet information captured prior to provisioning.

Figure 4-2 Create a Virtual Network blade

Once the VNet has completed provisioning, you can review its settings using the Azure portal. Notice that the Apps subnet has been created based on the inputs provided, as shown in Figure 4-3.

A screen shot shows the VNet ExamRefVNET in the Azure portal showing the Apps subnet.

Figure 4-3 Virtual network created using the Azure portal

To create another subnet in the VNet, click +Subnet on this blade and provide the following inputs, as shown in Figure 4-4:

  • Name of the subnet

  • The IP address range

  • The network security group (if any)

  • The route table (if any)

  • Which service endpoints to connect from this subnet (if any)

  • Which Azure service the subnet should be delegated to (if any)

A screen shot shows the Add Subnet blade in the Azure portal showing the settings to create a new subnet called Data with IP address range 10.1.1.0/24, together with settings for the network security group, route table, service endpoints and subnet delegation.

Figure 4-4 Add Subnet blade, used to add a new subnet to an existing virtual network

Creating a virtual network and subnets using Azure PowerShell

The first step in creating a VNet using PowerShell is to build a local array containing an object representing each subnet. In the code example below, the New-AzVirtualNetworkSubnetConfig cmdlet is used to create two local objects that represent two subnets in the VNet. Notice how $subnets = @() creates an array, which is then loaded with each subnet object using the += operator.

$subnets = @()
$subnet1Name = "Apps"
$subnet2Name = "Data"
$subnet1AddressPrefix = "10.0.0.0/24"
$subnet2AddressPrefix = "10.0.1.0/24"
$subnets += New-AzVirtualNetworkSubnetConfig -Name $subnet1Name `
                  -AddressPrefix $subnet1AddressPrefix
$subnets += New-AzVirtualNetworkSubnetConfig -Name $subnet2Name `
                  -AddressPrefix $subnet2AddressPrefix

We’re now ready to create our VNet, which is achieved using the New-AzVirtualNetwork cmdlet, specifying the VNet name, resource group, location, address space, and the subnet array. You can also pass in multiple address spaces, similar to how the subnets are passed in, using an array.

$rgName   = "ExamRef-RG"
$location = "Central US"
$vnetAddressSpace = "10.0.0.0/16"
$VNetName = "ExamRef-vnet"
$vnet = New-AzVirtualNetwork -Name $VNetName `
              -ResourceGroupName $rgName `
              -Location $location `
              -AddressPrefix $vnetAddressSpace `
              -Subnet $subnets

After completing the scripts above, you can retrieve the VNet and subnet properties using the Get-AzVirtualNetwork cmdlet, as shown next.

Get-AzVirtualNetwork -Name $VNetName -ResourceGroupName $rgName

To add (or remove) a subnet from an existing VNet using Azure PowerShell, first use the Get-AzVirtualNetwork cmdlet to retrieve the current VNet settings into a local object, then make the necessary changes locally, then commit the changes using Set-AzVirtualNetwork. This next example shows how to add a new subnet to the VNet created earlier.

$subnet3Name = "Web"
$subnet3AddressPrefix = "10.0.2.0/24"
$vnet = Get-AzVirtualNetwork -Name $VNetName `
    -ResourceGroupName $rgName

$vnet.Subnets += New-AzVirtualNetworkSubnetConfig -Name $subnet3Name `
    -AddressPrefix $subnet3AddressPrefix

Set-AzVirtualNetwork -VirtualNetwork $vnet

Creating a virtual network and subnets using the Azure CLI

You can also create and configure VNets and subnets using the Azure CLI, using the az network vnet create command. You must specify the address space used by the VNet using the --address-prefixes parameter. You can optionally specify your first subnet using the --subnet-name and --subnet-prefixes parameters.

# Create a virtual network (assumes resource group already exists)
az network vnet create --name ExamRef-vnet --resource-group ExamRef-RG \
    --address-prefixes 10.0.0.0/16 --subnet-name Apps --subnet-prefixes 10.0.1.0/24

Following the creation of the VNet, you can create additional subnets using the az network vnet subnet create command.

# Create additional subnets
az network vnet subnet create --name Data --vnet-name ExamRef-vnet \
    --resource-group ExamRef-RG --address-prefix 10.0.2.0/24

After running these commands there should be a newly-provisioned VNet named ExamRef-vnet, containing two subnets: Apps and Data. You can verify the VNet settings by retrieving them using the az network vnet show command:

# Show virtual network settings
az network vnet show --name ExamRef-vnet --resource-group ExamRef-RG \
    --output jsonc

You’ve already seen how to use the az network vnet subnet create command to add a new subnet to an existing virtual network. To remove a subnet, use the az network vnet subnet delete command.
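
For example, the Data subnet created above could be removed as follows, provided it is empty:

# Delete an empty subnet from the virtual network
az network vnet subnet delete --name Data --vnet-name ExamRef-vnet \
    --resource-group ExamRef-RG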

Note Subnet Operations

In Azure PowerShell, adding and removing subnets requires you to first retrieve a local copy of the VNet object, modify this object locally, then submit the modified object back to Azure. In the Azure CLI, these operations work differently. These CLI commands work by creating and deleting the subnets directly, in a single operation, by addressing the subnet as a child resource of the VNet. This approach—exposing certain settings as both a resource property and, at the same time, as a child resource—is quite common across a number of Azure resource types, especially in the Microsoft.Network resource provider.

To modify other VNet settings, use the az network vnet update command, and to modify subnet settings use the az network vnet subnet update command. As with other Azure CLI commands, use the --help parameter with each of these commands to see the full list of supported parameters. For example, to modify the address space of our VNet to include an additional IP range, use:

az network vnet update --name ExamRef-vnet --resource-group ExamRef-RG \
    --address-prefixes 10.0.0.0/16 10.10.0.0/16

Note that this example specifies the existing address space (10.0.0.0/16) as well as the new address space (10.10.0.0/16).

Configure private IP addresses and network interfaces

VMs in Azure use TCP/IP to communicate with services in Azure, other VMs you have deployed in Azure, on-premises networks, and the Internet. Just as a physical server uses a network interface card (NIC) to connect to a physical network, virtual machines use a network interface resource (also referred to as a NIC) to connect to a virtual network or the Internet.

There are two types of IP addresses you can use in Azure:

  • Public IP addresses Used for communication with the Internet.

  • Private IP addresses Used for communication within Azure virtual networks and connected on-premises networks.

This section focuses on how to deploy and manage private IP addresses and network interfaces. Public IP addresses are discussed in the next section.

Network interfaces

Both public and private IP addresses are configured on virtual machines using network interface resources. Therefore, to understand how to use public and private IP addresses with your virtual machine, you first must understand network interfaces. A network interface is a standalone Azure resource. Since its only purpose is to provide network connectivity for virtual machines, it is typically provisioned and deleted with its corresponding virtual machine.

Just as a physical server can have more than one network card, you can associate multiple network interfaces with a single virtual machine. This is a common practice when configuring virtual machines to act as network virtual appliances. These appliances provide network security as well as routing and other features similar to physical network devices in a traditional network.

Table 4-3 details the most important properties of each network interface resource in Azure.

Table 4-3 Properties of a network interface

Property Description
Name The network interface name. Must be unique within the resource group. It is between 1-80 characters, may contain letters (case insensitive), numbers, underscores, periods, or hyphens. Must start with a letter or number and end with a letter, number, or underscore.
Location The location of the resource. Must be the same as the location of any virtual network or any virtual machine which the network interface will be connected to.
DNS Settings If specified, these DNS servers are configured on the virtual machine using this network interface, in place of the Azure-provided DNS servers. This setting overrides any VNet-level DNS settings, if both are specified.
IP Forwarding Used to enable IP forwarding on this network interface. It is used for network virtual appliances to allow the virtual machine to receive packets addressed to other networks.
IP Configurations A list of IP configurations for the network interface. These are the most important settings, containing the public and private IP address properties.
Network Security Group Used to reference a network security group to be applied to this network interface.
Accelerated Networking Used to enable accelerated networking (only supported on certain VM sizes).

The most important property of the network interface is the IP configuration. This is where the public and private IP address settings are configured. Each network interface supports an array of IP configurations, which enables each network interface to support multiple IP addresses.
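
As a brief sketch, a secondary IP configuration can be added to an existing network interface using the Azure CLI; the NIC, VNet, subnet, and IP address values below are assumptions for the example:

# Add a secondary IP configuration to an existing NIC (illustrative names)
az network nic ip-config create --name ipconfig2 --nic-name ExamRef-NIC \
    --resource-group ExamRef-RG --vnet-name ExamRef-vnet --subnet Apps \
    --private-ip-address 10.0.0.10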

Private IP addresses

Private IP addresses are configured as properties within the IP configurations of the network interface. They are not a separate resource. Each IP configuration specifies a single subnet, and the private IP address is allocated from the address space of that subnet.

There are two methods used to assign private IP addresses: dynamic or static. The default allocation method is dynamic, where the IP address is automatically allocated from the resource’s subnet (using an Azure DHCP server).

Dynamic allocation assigns private IP addresses from each subnet in order, starting with the lowest available IP in the subnet IP range. Remember that the first four IP addresses in each subnet are reserved by the Azure platform. For example, if the subnet is 10.10.0.0/24, the first private IP to be allocated will be 10.10.0.4 (because 10.10.0.0 to 10.10.0.3 are reserved).

Dynamically-allocated IP addresses can change when you stop and start the associated virtual machine. To avoid this, private IP addresses can also be allocated statically. Static allocation is used where you want to control which IP address is assigned to a specific server, and for that IP address to remain fixed.

Static private IP addresses are commonly used for:

  • Virtual machines that act as domain controllers or DNS servers

  • Resources that require firewall rules using IP addresses

  • Resources accessed by other apps/resources through an IP address explicitly, rather than a domain name.

To configure a static private IP address, simply specify static IP allocation within the network interface IP configuration, together with the desired IP address. You can specify an existing dynamic private IP address as the static private IP address, or choose a new private IP address. The address must be within the address range of the subnet associated with the IP configuration, and not currently in use.

When changing a private IP address, you may need to manually review and update network settings within the virtual machine. For this reason, it is preferable to plan and specify static private IP addresses in advance when first provisioning the virtual machine.

Note Configuring Static Private IP Addresses

Static private IP addresses should only be configured in the Azure network interface resource. They will be assigned to the virtual machine using DHCP, just like with dynamic private IP addresses. Do not configure private IP addresses directly within the virtual machine OS network settings.

Both IPv4 and IPv6 private IP addresses are supported. However, IPv6 support has a number of limitations. VMs cannot communicate with each other using private IPv6 addresses within a VNet; they can only use IPv6 to receive and respond to inbound traffic from the Internet via an Internet-facing load balancer.

Note Dynamic and Static Private IP Assignment

Private IPv4 address assignments can be either dynamic or static. Private IPv6 addresses can only be assigned dynamically.

Enabling static private IP addresses on VMs with the Azure portal

The network interface of a VM holds the configurations of the private IP address. This is known as the IP configuration. Using the Azure portal, you can modify the private IP address allocation method for the IP configuration from dynamic to static. You can also use the Azure portal to manage other network interface settings, such as assigning network security groups, public IP addresses, and adding new IP configurations.

Using the portal, locate the network interface for the VM to be assigned a static IP address. Once the blade loads for the NIC, click on IP Configurations, then select the IP configuration you wish to update. The IP Configuration blade is shown in Figure 4-5. Here, you can update the private IP address allocation method to Static and specify the static IP address.

A screen shot shows the Azure portal where a static private IP address has been assigned to an IP configuration of a NIC.

Figure 4-5 Assigning a Static Private IP Address to a NIC

Enabling static private IP addresses on VMs with PowerShell

When updating an existing network interface resource to use a static IP address, use two PowerShell cmdlets: Get-AzNetworkInterface and Set-AzNetworkInterface. First, use the Get-AzNetworkInterface cmdlet to create a local object representing the network interface. Next, access the IP configurations array of the network interface and modify the appropriate IP configuration object locally, specifying the static IP allocation method and the IP address that should be assigned. Finally, to save your changes to Azure, use the Set-AzNetworkInterface cmdlet.

# Update existing NIC to use a Static IP address and set the IP
# Assumes a NIC exists named 'ExamRef-NIC' in resource group 'ExamRef-RG'
# Assumes NIC is associated with an existing subnet with private IP 10.0.0.5 available
$nic = Get-AzNetworkInterface -Name ExamRef-NIC -ResourceGroupName ExamRef-RG
$nic.IpConfigurations[0].PrivateIpAllocationMethod = "Static"
$nic.IpConfigurations[0].PrivateIpAddress = "10.0.0.5"
Set-AzNetworkInterface -NetworkInterface $nic

To change a network interface from static to dynamic assignment, use:

# Update existing NIC to use a Dynamic IP address
# Assumes a NIC exists named 'ExamRef-NIC' in resource group 'ExamRef-RG'
$nic = Get-AzNetworkInterface -Name ExamRef-NIC -ResourceGroupName ExamRef-RG
$nic.IpConfigurations[0].PrivateIpAllocationMethod = "Dynamic"
Set-AzNetworkInterface -NetworkInterface $nic

Enabling static private IP addresses on VMs with the Azure CLI

To use the Azure CLI to update a network interface to a static private IP address, use one simple command: az network nic ip-config update. The name of the network interface and resource group are required, along with the name of the IP configuration to update and the new static IP address.

# Update existing NIC to use a Static IP Address and set the IP
# Assumes a NIC exists named 'ExamRef-NIC' in resource group 'ExamRef-RG'
# Assumes NIC has an IP configuration named 'ipconfig1'
# Assumes NIC is associated with an existing subnet with private IP 10.0.0.5 available
az network nic ip-config update --name ipconfig1 --nic-name ExamRef-NIC \
    --resource-group ExamRef-RG --private-ip-address 10.0.0.5

Note, there is no need to specify the static IP allocation method explicitly—this is implied by specifying the private IP address to use. To specify dynamic IP allocation, use the same command, specifying the IP address as “”.

# Update existing NIC to use a Dynamic IP Address
az network nic ip-config update --name ipconfig1 --nic-name ExamRef-NIC \
    --resource-group ExamRef-RG --private-ip-address ""

Create and configure public IP addresses

Associating a public IP address with a network interface creates an Internet-facing endpoint, allowing your virtual machine to receive network traffic directly from the Internet.

A public IP address is a standalone Azure resource. This contrasts with a private IP address that exists only as a collection of settings on another resource, such as a network interface or a load balancer.

To associate a public IP address with a virtual machine, the IP configuration of the network interface must be updated to contain a reference to the public IP address resource. As a standalone resource, public IP addresses can be created and deleted independently as well as moved from one virtual machine to another.
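
As an illustrative sketch, this association can be made from the Azure CLI by updating the IP configuration of the network interface; the NIC and public IP address resource names below are assumed to exist in the same region:

# Associate an existing public IP address resource with a NIC's IP configuration
# (resource names are illustrative)
az network nic ip-config update --name ipconfig1 --nic-name ExamRef-NIC \
    --resource-group ExamRef-RG --public-ip-address ExamRef-PublicIP1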

Basic vs Standard Pricing Tiers

Public IP addresses are available at two pricing tiers (or SKUs): Basic or Standard. All Public IP Addresses created before the introduction of these tiers are mapped to the Basic tier.

The main distinction is that Standard tier Public IP Addresses support zone-redundant deployment, allowing you to use availability zones to protect your deployments against potential outages caused by datacenter-level failures (such as fire, power failure, or cooling failure). There are a number of other important differences between the two tiers, as summarized in Table 4-4.

Table 4-4 Comparison of public IP Address Basic and Standard Tiers

Basic Tier Standard Tier
Supports both static and dynamic allocation methods. Supports static allocation only.
Open by default for inbound traffic. Use NSGs to restrict inbound or outbound traffic. Closed by default for inbound traffic. Use NSGs to allow inbound traffic and restrict outbound traffic.
Not zone redundant, but can be assigned to a specific availability zone. Zone redundant by default, or can instead be assigned to a specific availability zone.
Does not support public IP prefixes (discussed later). Supports public IP prefixes, allowing IP addresses to be assigned from a contiguous IP address block.

Public IP address allocation

As with private IP addresses, public IP addresses support both dynamic and static IP allocation. For the Basic tier, both static and dynamic allocation are supported, the default being dynamic. For the Standard tier, only static allocation is supported.

Under dynamic allocation, an actual IP address is only allocated to the public IP address resource when the resource is in use—that is, when it is associated with a resource such as a running virtual machine. If the virtual machine is stopped (deallocated) or deleted, the IP address assigned to the public IP address resource is released and returned to the pool of available IP addresses managed by Azure. When you restart the virtual machine, a different IP address will most likely be assigned.

If you wish to retain the IP address, the public IP address resource should be configured to use static IP allocation. An IP address will be assigned immediately (if one was not already dynamically assigned). This IP address will never change, regardless of whether the associated virtual machine is stopped or deleted.

Static public IP addresses are typically used in scenarios where a dependency is taken on a particular IP address. They are commonly used in the following scenarios:

  • Where firewall rules specify an IP address.

  • Where a DNS record would need to be updated when an IP address changes.

  • Where the source IP address is used as a (weak) form of authentication of the traffic source.

  • Where an SSL certificate specifies an explicit IP address rather than a domain name.

With private IP addresses, static allocation allows you to specify the IP address to use from the available subnet address range. In contrast, static allocation of public IP addresses does not allow you to specify which public IP address to use. Azure assigns the IP address from a pool of IP addresses in the Azure region where the resource is located.

Public IP address prefixes

When using multiple public IP addresses, it can be convenient to have all of the IP addresses allocated from a single IP range or prefix. For example, when configuring firewall rules, this allows you to configure a single rule for the prefix, rather than separate rules for each IP address.

To support this scenario, Azure allows you to reserve a public IP address prefix. Public IP address resources associated with that prefix will have their IP addresses assigned from that range, rather than from the general purpose Azure pool.

When creating a prefix, specify the prefix resource name, subnet size (for example, /28 for 16 IP addresses), and the Azure region where the IP addresses will be allocated. This feature is currently in preview, so check for the current level of support.

Once the prefix is created, individual public IP addresses can be created that are associated with this prefix. Note that only Standard-tier public IP addresses support allocation from a prefix, and thus only static allocation is supported. The IP address assigned to these resources is taken from the prefix range; you cannot specify a particular IP address from the range.
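
The following Azure CLI sketch illustrates the overall flow, first reserving a /28 prefix and then creating a Standard-tier public IP address from it; the resource names are illustrative, and because the feature is in preview the exact parameters may vary:

# Reserve a public IP prefix of 16 addresses (/28) - names are illustrative
az network public-ip prefix create --name ExamRef-Prefix --resource-group ExamRef-RG \
    --length 28 --location centralus

# Create a Standard-tier public IP address allocated from the prefix
az network public-ip create --name ExamRef-PublicIP-Prefix --resource-group ExamRef-RG \
    --sku Standard --allocation-method Static --public-ip-prefix ExamRef-Prefix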

DNS Labels

The Domain Name System (DNS) can be used to create a mapping from a domain name to an IP address. This allows you to reference IP address endpoints using a domain name, rather than using the assigned IP address directly.

There are four ways to configure a DNS label for an Azure public IP address:

  1. By specifying the DNS name label property of the public IP address resource.

  2. By creating a DNS A record in Azure DNS or a third-party DNS service hosting a DNS domain.

  3. By creating a DNS CNAME record in Azure DNS or a third-party DNS service hosting a DNS domain.

  4. By creating an alias record in Azure DNS.

In the first option, you specify the left-most part of the DNS label as a property in the public IP address resource. Azure provides the DNS suffix, which will be of the form <region>.cloudapp.azure.com. The DNS label you provide is concatenated with this suffix to form the fully-qualified domain name (FQDN), which can be used to look up the IP address via a DNS query.

For example, if your public IP address is deployed to the Central US region, and you specify the DNS label contoso-app, then the FQDN will be contoso-app.centralus.cloudapp.azure.com.

The major limitation of this first approach is that the DNS suffix is taken from an Azure-provided DNS domain. It does not support the use of your own vanity domain, such as contoso.com. To address this, you will need to use one of the other approaches.

In the second approach, you will have already hosted your vanity domain either in Azure DNS or a third-party DNS service. Using your hosting service, you can create a DNS entry in your vanity domain mapping to your public IP address resource. If you use a DNS A record, which maps directly to an IP address, you will need to update the DNS record if the assigned IP address changes. To avoid this, you will probably prefer to use static rather than dynamic IP allocation.

In the third approach, you start by creating a DNS label for your public IP address. You then create a CNAME record in your vanity domain which maps your chosen domain name to the Azure-provided DNS name. For example, you might map www.contoso.com to contoso-app.centralus.cloudapp.azure.com. This approach has the advantage of avoiding the need for static IP allocation, since the Azure-provided DNS entry updates automatically if the assigned IP address changes. However, the downside of this approach is that the Domain Name System does not support CNAME records at the apex (or root) of a DNS domain, hence while you can create a CNAME record for www.contoso.com, you cannot create one for contoso.com (without the www).

In the fourth approach, your vanity domain must be hosted in Azure DNS. You can then create an alias record, which works the same as an A record, except that rather than specifying the assigned IP address value explicitly in the DNS record, you simply reference the public IP address resource. The assigned IP address is taken from this resource and automatically configured in your DNS alias record. With alias records, the DNS record is automatically updated if the assigned IP address changes, avoiding the need for static IP allocation.
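
As a sketch of this fourth approach, an alias record in an Azure DNS zone can reference the public IP address resource directly; the zone name, record name, and public IP resource name below are assumptions for the example:

# Look up the resource ID of the public IP address (illustrative names)
pipId=$(az network public-ip show --name ExamRef-PublicIP1 --resource-group ExamRef-RG \
    --query id --output tsv)

# Create an alias A record in the Azure DNS zone that tracks the public IP resource
az network dns record-set a create --name www --zone-name contoso.com \
    --resource-group ExamRef-RG --target-resource $pipId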

Outbound Internet connections

When a public IP address is assigned to a virtual machine’s network interface, outbound traffic to the Internet will be routed through that IP address. The recipient will see your public IP address as the source IP address for the connection.

However, the virtual machine itself does not see the public IP address in its network settings—it only sees the private IP address. Traffic leaves the virtual machine via the private IP address, and Source Network Address Translation (SNAT) is used to map the outbound traffic from the private IP address to the public IP address.

Note that a public IP address is not required for outbound Internet traffic. Even without a public IP address assigned, virtual machines can still make outbound Internet connections. In this case, SNAT is used to map the private IP address to the Internet-facing IP address.

IPv4 and IPv6

Public IP address resources can use either an IPv4 or IPv6 address (but not both). Note that IPv6 support is limited as follows:

  • Only the Basic tier is supported.

  • Only dynamic allocation is supported.

  • Only Internet-facing load balancers (and not virtual machines) can be assigned a public IPv6 address.

Creating a public IP address using the Azure portal

Creating a new public IP address is a simple process when using the portal. Click Create A Resource, and then search for Public IP Address in the marketplace. Like all resources in Azure, some details will be required, including the name of the resource, the SKU (or pricing tier), the DNS name label, idle time-out, subscription, resource group, and location/region. For the Basic SKU, you also specify the IP version and static or dynamic assignment. For the Standard SKU, you choose between zone-redundant deployment and a specific availability zone.

The location is critical, as an IP address must be in the same location/region as the virtual machine or other resource that will use it. Figure 4-6 shows the Azure Create Public IP Address Blade.

A screen shot shows the Azure portal during the creation of a public IP address. The DNS name is set, along with the location where this address is used in the Azure Cloud.

Figure 4-6 Creating a Public IP Address in the Azure portal

Creating a public IP address using Azure PowerShell

To create a new public IP address by using Azure PowerShell, use the New-AzPublicIpAddress cmdlet, as shown in the next example. Each of the IP address properties discussed earlier is specified using an appropriate parameter.

# Creating a Public IP Address
# Set Variables
$publicIpName = "ExamRef-PublicIP1-PS"
$rgName = "ExamRefRG-PS"
$dnsPrefix = "examrefpubip1ps"
$location = "centralus"

# Create the Public IP
New-AzPublicIpAddress -Name $publicIpName `
              -ResourceGroupName $rgName `
              -AllocationMethod Static `
              -DomainNameLabel $dnsPrefix `
              -Location $location

Creating a public IP address using the Azure CLI

To create a new public IP address by using the Azure CLI, use the az network public-ip create command. Each of the IP address properties discussed earlier is specified using an appropriate parameter. For a full list, use az network public-ip create -h to see the inline help.

# Creating a Public IP Address
az network public-ip create --name ExamRef-PublicIP1-CLI --resource-group ExamRefRG-CLI \
    --dns-name examrefpubip1cli --allocation-method Static

Configure network routes

Network routes control how traffic is routed in your network. Azure provides default routing for common scenarios, with the ability to configure your own network routes where necessary.

System routes

Azure VMs that are added to a VNet can communicate with each other over the network automatically. Even if they are in different subnets, or are attempting to access the Internet, no configuration is required by you as the administrator. Unlike traditional networking, you do not need to specify a network gateway, even though the VMs are in different subnets. The same is true for communication from the VMs to your on-premises network when a hybrid connection from Azure to your datacenter has been established.

This ease of setup is made possible by what is known as system routes, which define how IP traffic flows in Azure VNets. The following are the default system routes that Azure will use and provide for you:

  • Within the same subnet

  • From one subnet to another within a VNet

  • VMs to the Internet

  • A VNet to another VNet through a VPN gateway

  • A VNet to another VNet through VNet peering

  • A VNet to your on-premises network through a VPN gateway or ExpressRoute

A diagram shows a virtual network with two subnets: Apps and Data. The routing of network traffic is using the default system routes.

Figure 4-7 N-Tier application deployed to Azure VNet using System Routes

Figure 4-7 shows an example of how these system routes make it easy to get up and running. System routes provide for most typical scenarios by default, without you having to make any routing configuration.

User-defined routes

There are some use cases where you will want to configure the routing of packets differently from what is provided by the default system routes. One of these scenarios is when you want to send traffic through a network virtual appliance, such as a third-party load balancer, firewall or router deployed into your VNet from the Azure Marketplace.

To make this possible, you must create what are known as user defined routes (UDRs). The UDR is implemented by creating a route table resource. Within the route table, a number of routes are configured. Each route specifies the destination IP range (in CIDR notation) and the next hop IP address. A variety of different types of next hop are supported. These are:

  • Virtual Appliance A virtual machine running a network application such as a load-balancer or firewall. With this next hop type, you also specify the IP address of the appliance, which can be a virtual machine or internal load-balancer for high-availability virtual appliances.

  • Virtual Network Gateway Used to route traffic to a VPN Gateway (but not an ExpressRoute Gateway, which uses BGP for custom routes). Since there can be only one VPN Gateway associated with a VNet, you are not prompted to specify the actual gateway resource.

  • Virtual Network Used to route traffic within the Virtual Network.

  • Internet Used to route a specific IP address or prefix to the Internet.

  • None Used to drop all traffic sent to a given IP address or prefix.

This route table is then associated with one or more subnets. Traffic originating in the subnet whose destination matches the destination IP range of a route table rule will instead be routed to the corresponding next hop IP address. The service running at this IP address is responsible for all onward routing.

Note Route Tables

You can have multiple route tables, and the same route table can be associated to one or more subnets. Each subnet can only be associated to a single route table. All VMs in a subnet use the route table associated to that subnet.

A diagram shows a virtual network with three subnets: Apps, Data, and DMZ. The routing of network traffic is using user defined routes to send packets from the Apps and Data subnets through the firewall which is deployed to the DMZ subnet.

Figure 4-8 N-Tier application deployed with a firewall using user defined routes

Figure 4-8 shows a UDR that has been created to direct outbound traffic via a virtual appliance. In this case the appliance is a firewall running as a VM in Azure in the DMZ subnet.

The same appliance can also be used to filter traffic between the Apps and Data subnets. An example route table implementing this design is shown in Figure 4-9.

A diagram shows a user defined route which is forcing traffic through the firewall device running at the IP address 10.0.99.4.

Figure 4-9 Route table rules forcing network traffic through firewall

Note Dedicated Subnets for Network Appliances

Do not apply a route table to a subnet if the route table contains a rule with a next hop address within that subnet. To do so could create a routing loop. For this reason, virtual network appliances should be deployed to dedicated subnets, separate from the resources that route through that appliance.

IP forwarding

User defined routes (UDR) allow for changing the default system routes that Azure creates for you in an Azure VNet. In the virtual appliance scenario, the UDRs forward traffic to a virtual appliance such as a firewall, which is running as an Azure virtual machine.

By default, a virtual machine in Azure will not accept a network packet addressed to an IP address other than its own. For that traffic to be allowed to pass into the virtual appliance, you must enable IP forwarding on the network interface of the virtual machine. This configuration doesn’t typically involve any changes to the Azure UDRs or the VNet.

IP forwarding can be enabled on a network interface by using the Azure portal, PowerShell, or the Azure CLI. In Figure 4-10, you see that the network interface of the NGFW1 VM has the IP forwarding set as Enabled. This VM is now able to accept and send packets that were not originally intended for this VM.
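
For example, IP forwarding could be enabled from the Azure CLI with a command along these lines (the network interface name is illustrative):

# Enable IP forwarding on the network interface of a virtual appliance
# (NIC name is illustrative)
az network nic update --name NGFW1-NIC --resource-group ExamRef-RG --ip-forwarding true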

A screen shot shows the network interface for a virtual appliance firewall that has been enabled for IP forwarding.

Figure 4-10 IP Forwarding enabled on a virtual appliance

How routes are applied

A given network packet may match multiple route table rules. When designing and implementing custom routes, it’s important to understand the precedence rules that Azure applies.

If multiple routes contain the same address prefix, Azure selects the route type, based on the following priority:

  1. User defined routes

  2. System routes for traffic in a virtual network, across a virtual network peering, or to a virtual network service endpoint

  3. BGP routes

  4. Other system routes

Within a single route table, a given network packet may match multiple routing rules. There is no explicit precedence order on the rules in a route table. Instead, precedence is given to the rule with the most specific match to the destination IP address.

For example, if a route table contains one rule for prefix 10.10.0.0/16, and another rule for 10.10.30.0/28, then any traffic to IP address 10.10.30.4 will be matched against the second rule in preference to the first.

When troubleshooting networking issues, it can be useful to get a deeper insight into exactly which routes are being applied to a given network interface. The effective routes feature of each network interface allows you to see the full details of every network route applied to that network interface, giving you full insight into how each outbound connection will be routed based on the destination IP address.

Forced tunneling

A special case is when routes are configured with the destination IP prefix 0.0.0.0/0. Given the precedence rules described above, this route controls traffic destined for any IP address that is not covered by any other route.

By default, Azure implements a system route directing all traffic matching 0.0.0.0/0 (and not matching any other route) to the Internet. If you override this route, this traffic is instead directed to the next hop you specify. By using a VPN Gateway as the next hop, you can direct all Internet-bound traffic over your VPN connection to an on-premises network security appliance. This is known as forced tunneling.

Implementing a custom route using the 0.0.0.0/0 prefix has several implications. First, traffic to Azure platform services will also be routed via your custom route. This may add considerable additional latency to these connections. To prevent this, use service endpoints to maintain a direct connection to these services.

Second, you will no longer be able to access resources in your subnet directly from the Internet. Instead, you will need to configure an indirect path, with inbound traffic passing through the next hop device.
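
As an illustrative sketch, a forced tunneling route could be added to an existing route table as follows, directing all otherwise-unmatched traffic to the VPN gateway (the route table and route names are assumptions for the example):

# Add a default route (0.0.0.0/0) sending Internet-bound traffic to the VPN gateway
# (route table name is illustrative)
az network route-table route create --name Default-Route --route-table-name RouteTable1 \
    --resource-group ExamRef-RG --address-prefix 0.0.0.0/0 \
    --next-hop-type VirtualNetworkGateway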

Configure user defined routes using the Azure portal

To configure user defined routes, the first step is to create a route table resource. From the Azure portal, click +Create A Resource, then click Networking, then click Route Table to open the Create Route Table blade, as shown in Figure 4-11. Fill in the route table name, select the subscription and resource group, and specify the route table location, which must be the same Azure region as the subnets that will use this route table.

A screen shot shows the Create Route Table blade from the Azure portal. The name of the route table is ExamRef-RouteTable, and the location is North Europe.

Figure 4-11 The Create Route Table blade in the Azure portal

Having created the route table, the next step is to define the routes. Open the route table blade, and under Settings click Routes to open the list of routes in the route table. Then click +Add to open the Add Route blade, as shown in Figure 4-12.

A screen shot shows the Add Route blade from the Azure portal. The name of the route is VNet3-Route, the address prefix is 10.3.0.0/16, the next hop type is virtual appliance, and the next hop address is 10.2.20.4.

Figure 4-12 The Add Route Blade in The Azure Portal

Repeat this process for each custom route in the route table. The list of routes in the route table will be shown in the route table blade, as shown in Figure 4-13.

A screen shot shows the list of routes in the route table blade in the Azure portal.

Figure 4-13 The list of routes in the route table blade in the Azure portal

The final step is to specify which subnets this route table should be associated with. This can be configured either from the subnet, or from the route table. In the latter case, from the route table blade under Settings click Subnets, to open the list of subnets associated with the route table. Click +Associate to open the Associate Subnet blade, as shown in Figure 4-14.

A screen shot shows the Associate subnet blade in the Azure portal. The virtual network VNet1 and the subnet Default have been selected.

Figure 4-14 The Associate Subnet blade for a route table, in the Azure portal

After creating the subnet association, the route table blade will show a list of associated subnets as shown in Figure 4-15.

A screen shot shows the list of subnets in the route table blade in the Azure portal. The Default subnet of virtual network VNet1 with IP address range 10.1.0.0/24.

Figure 4-15 The list of subnets in the route table blade in the Azure portal

To see the effective routes for a given network interface, navigate to the network interface blade in the Azure portal, then click Effective Routes to open the Effective Routes blade, as shown in Figure 4-16.

A screen shot shows the list of effective routes in the network interface blade in the Azure portal. Several default routes are shown, including the virtual network and a VNet peering route. A user defined route is given, specifying the next hop IP address 10.2.20.4 for all traffic for destination IP address prefix 10.3.0.0/16.

Figure 4-16 The list of effective routes for the VNet1-VM network interface

Configure user defined routes using Azure PowerShell

To configure user defined routes using Azure PowerShell, follow the same sequence as used for the Azure portal: first create the route table resource by using the New-AzRouteTable cmdlet, then add routes to the route table using the Add-AzRouteConfig cmdlet and commit the changes using the Set-AzRouteTable cmdlet.

Next, associate the route table with the subnet(s). In this case, the subnet must be associated with the route table, not the other way around.

# Create the route table resource
$rt = New-AzRouteTable -Name RouteTable1 -ResourceGroupName ExamRef-RG `
    -Location 'North Europe'

# Add a route to the local route table object
Add-AzRouteConfig -RouteTable $rt `
 -Name Route1 `
 -AddressPrefix 10.3.0.0/16 `
 -NextHopType VirtualAppliance `
 -NextHopIpAddress 10.2.20.4

# Commit the route table back to Azure
Set-AzRouteTable -RouteTable $rt

# Find the VNet and subnet
$vnet = Get-AzVirtualNetwork -Name VNet1 -ResourceGroupName ExamRef-RG
$subnet = $vnet.Subnets | Where-Object {$_.Name -eq "Default"}

# Update the subnet to specify the route table.
# This cmdlet requires us to re-specify the subnet address prefix, which we take
# from the existing subnet
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet `
 -Name Default `
 -AddressPrefix $subnet.AddressPrefix `
 -RouteTable $rt

# Commit the VNet back to Azure
Set-AzVirtualNetwork -VirtualNetwork $vnet

Azure PowerShell can also be used to retrieve the effective routes for a network interface by using the Get-AzEffectiveRouteTable cmdlet.

# Get effective routes for a network interface
Get-AzEffectiveRouteTable -NetworkInterfaceName VNet1-VM `
 -ResourceGroupName ExamRef-RG

Configure user defined routes using the Azure CLI

To configure UDRs with the Azure CLI, start by creating the route table resource using the az network route-table create command. Then add routes to the route table using the az network route-table route create command. To associate the route table with a subnet, use the az network vnet subnet update command, specifying the --route-table parameter.

# Create route table
az network route-table create --name RouteTable1 --resource-group ExamRef-RG

# Add route(s) to route table
az network route-table route create --name Route1 --route-table-name RouteTable1 \
    --resource-group ExamRef-RG --address-prefix 10.3.0.0/16 \
    --next-hop-type VirtualAppliance --next-hop-ip-address 10.2.20.4

# Associate route table with subnet
az network vnet subnet update --name Default --vnet-name VNet1 \
    --resource-group ExamRef-RG --route-table RouteTable1

The Azure CLI can also be used to review effective routes on a network interface, using the az network nic show-effective-route-table command.

# Get effective routes for a nic
az network nic show-effective-route-table --name VM1-NIC --resource-group ExamRef-RG

Skill 4.2: Create connectivity between virtual networks

A virtual network provides a private network space in Azure for your virtual machines. In some scenarios, you may need virtual machines in one virtual network to communicate with virtual machines in another virtual network. This section explains how to achieve this by creating private connections between your virtual networks. There are two kinds of private connections available: VNet Peering, and Site-to-Site VPN connections.

Connectivity between virtual networks is useful in several common scenarios. One example is where applications in different virtual networks need access to a shared service such as a domain controller, a network security appliance, or a gateway. Since each virtual network exists in a single Azure region, another scenario requiring connectivity between virtual networks is for communication between virtual machines in different Azure regions.

Create and configure VNet peering

VNet peering allows virtual machines in two separate virtual networks to communicate directly, using their private IP addresses. The VNets can either be in the same Azure region, or separate Azure regions. Peering between VNets in different regions is called Global VNet peering. In all cases, traffic between peered VNets travels over the Microsoft backbone infrastructure, not the public Internet.

Note Vnet Peering Restrictions

You can peer VNets in different subscriptions, even if those subscriptions are under different Azure Active Directory tenants (cross-tenant peering is not supported via the Azure portal; you need to use the Azure CLI, PowerShell, or templates).

You can also use VNet peering to connect Resource Manager VNets to the older "Classic" VNets. However, peering between two Classic VNets is not supported (a VNet-to-VNet VPN can be used in this case).

The peered VNets must have non-overlapping IP address spaces. In addition, the VNet address space cannot be modified once the VNet is peered with another VNet.

Note Peering Connections

When peering two Resource Manager VNets, such as VNet1 and VNet2, two peering connections are required – one from VNet1 to VNet2, and one from VNet2 to VNet1.

However, when peering between a Resource Manager VNet and a Classic VNet, a connection is only made from the Resource Manager VNet.

VNet peering gives the same network performance between VMs as if they were placed in a single, large VNet, while maintaining the manageability that comes from using two or more separate VNets. There is no bandwidth cap imposed on peered VNets. The only limits are those on the VMs themselves, based on VM series and size.

Note Peering Limits

Be aware of the limit of 100 peering connections per VNet. This is a hard limit.

No VNet gateways are required for VNet peering. This avoids the cost, throughput limitations, additional latency, and additional complexity associated with using VNet gateways.

Note Global Peering Limitations

Global peering cannot be used to access the frontend IP of an internal Azure load-balancer, or a virtual network gateway, in the remote virtual network. In these cases, a VNet-to-VNet VPN should be used instead. This limitation applies only to global VNet peering between Azure regions, not to VNet peering within an Azure region.

By default, peered VNets appear as a single network for connectivity purposes. That is, there are no restrictions on connectivity between the peered VNets, so virtual machines in peered VNets can communicate with each other as if they were in the same VNet. In addition, the VirtualNetwork service tag (described in Skill 4.4) spans the address space of both peered networks.

Alternatively, you also have the option to limit connectivity—with this option, there is no automatic outbound connectivity between peered VNets, and the VirtualNetwork service tag does not include the address space of the peered VNet. In this case, you control connectivity between peered virtual networks using network security groups.
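
For example, connectivity can be limited on an existing peering by clearing its virtual network access flag. The following PowerShell sketch assumes the VNet1-to-VNet2 peering used later in this skill; adjust the names to match your environment.

# Disable automatic connectivity across an existing peering
$peering = Get-AzVirtualNetworkPeering -Name VNet1-to-VNet2 `
    -VirtualNetworkName VNet1 -ResourceGroupName ExamRef-RG

$peering.AllowVirtualNetworkAccess = $false
Set-AzVirtualNetworkPeering -VirtualNetworkPeering $peering

With this flag cleared, traffic between the peered VNets flows only where your network security group rules explicitly allow it.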

A simple example of VNet peering is shown in Figure 4-17. This shows two VNets that have been connected using VNet peering. This allows, for example, the WEB1 virtual machine in VNetA to connect to the MYSQL1 database in VNetB.

The diagram shows two VNets that are peered together in the North Central Region. The two VNets use different, non-overlapping address spaces, as required when peering two VNets together.

Figure 4-17 VNet Peering between two virtual networks

Once peered, traffic between VMs is routed through the Microsoft backbone network. Traffic does not pass over the public Internet, even when using global VNet peering to connect VNets in different Azure regions.

While global VNet peering allows for open connectivity between virtual machines across VNets in different Azure regions, a limitation is that a VM can only connect to the frontend IP address of an internal Azure Load Balancer in the same region. To connect to an internal Azure Load Balancer across regions, a VNet-to-VNet VPN connection is required.

It is important to understand that VNet peering is a pairwise relationship between two virtual networks. To create connectivity across 3 virtual networks (VNetA, VNetB, and VNetC), all 3 pairs must be peered (VNetA to VNetB, VNetB to VNetC and VNetA to VNetC). This is illustrated in Figure 4-18.

This functional diagram shows three VNets: A, B, and C. A is peered to B, and B is also peered to C. Although it might seem that A is now connected to C via B, this is not the case.

Figure 4-18 VNet peerings do not have a transitive relationship

Service chaining and hub-and-spoke networks

A common way to reduce duplication of resources is to use a hub-and-spoke network topology. In this approach, shared resources (such as domain controllers, DNS servers, monitoring systems, and so on) are deployed into a dedicated hub VNet. These services are accessed from multiple applications, each deployed to their own separate spoke VNets.

As you have just seen, VNet peering is not transitive. This means there is no automatic connectivity between spokes in a hub and spoke topology. Where such connectivity is required, one approach is to deploy additional VNet peerings between spokes. However, with a large number of spokes, this can quickly become unwieldy.

An alternative approach is to deploy a network virtual appliance (NVA) into the hub, using user-defined routes (UDRs) to route inter-spoke traffic through the NVA. This is known as service chaining, and it enables spoke-to-spoke communication without requiring additional VNet peerings, as illustrated in Figure 4-19.

This functional diagram shows a hub VNet peered with each of two spoke VNets. Each spoke VNet contains web and database server VMs. The hub VNet contains network appliance and domain controller VMs. User-defined routes direct outbound traffic from each spoke VNet to the network appliance in the hub VNet.

Figure 4-19 Service chaining allows for the use of common services across VNet Peerings

To transit traffic from one spoke VNet to another spoke VNet via an NVA in the hub VNet, the VNet peerings must be configured correctly. By default, a peering connection will only accept traffic originating from the VNet to which it is connected. This will not be the case for traffic forwarded between spoke VNets via an NVA in a hub VNet. To permit such traffic, the VNet peerings must enable the Allow Forwarded Traffic setting.
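
As a sketch, the setting can be enabled on an existing peering using PowerShell. The peering and VNet names below are illustrative only, representing a spoke-to-hub peering in a hub-and-spoke topology.

# Accept traffic forwarded by the NVA in the hub on the spoke-to-hub peering
$peering = Get-AzVirtualNetworkPeering -Name Spoke1-to-Hub `
    -VirtualNetworkName Spoke1-VNet -ResourceGroupName ExamRef-RG

$peering.AllowForwardedTraffic = $true
Set-AzVirtualNetworkPeering -VirtualNetworkPeering $peering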

Sharing virtual network gateways

Suppose two peered VNets, say VNet-A and VNet-B, wish to send traffic to an external network via a virtual network gateway (this external network could be an on-premises network, or another Azure VNet connected via a site-to-site VPN connection). Rather than deploy two virtual network gateways, it is much simpler and more cost-efficient for both VNets to share a single gateway. This can be achieved provided both VNets are deployed to the same Azure region and the peering settings are configured correctly.

Suppose the virtual network gateway is deployed to VNet-A, allowing VNet-A to communicate with the external network. By default, only traffic originating in VNet-A is permitted to use this gateway, and the external network is only able to connect to VMs in VNet-A. To allow connectivity between VNet-B and the external network, the following settings must be configured:

  • Use Remote Gateways This setting must be enabled on the peering connection from VNet-B to VNet-A. This informs VNet-B of the availability of the gateway in VNet-A. Note that to enable this setting, VNet-B cannot have its own virtual network gateway.

  • Allow Gateway Transit This option must be enabled on the peering connection from VNet-A to VNet-B. This permits traffic from VNet-B to use VNet-A’s gateway to send traffic to the external network.

Note that in this case, the Allow Forwarded Traffic peering option is not required.
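
As a PowerShell sketch of these settings (the peering names are illustrative, and the gateway is assumed to be deployed in VNet-A):

# On the peering from VNet-A (which hosts the gateway) to VNet-B
$peeringAtoB = Get-AzVirtualNetworkPeering -Name VNetA-to-VNetB `
    -VirtualNetworkName VNet-A -ResourceGroupName ExamRef-RG
$peeringAtoB.AllowGatewayTransit = $true
Set-AzVirtualNetworkPeering -VirtualNetworkPeering $peeringAtoB

# On the peering from VNet-B to VNet-A
$peeringBtoA = Get-AzVirtualNetworkPeering -Name VNetB-to-VNetA `
    -VirtualNetworkName VNet-B -ResourceGroupName ExamRef-RG
$peeringBtoA.UseRemoteGateways = $true
Set-AzVirtualNetworkPeering -VirtualNetworkPeering $peeringBtoA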

Creating a VNet peering using the Azure portal

To create a peering connection between two VNets, the VNets must already have been created and must not have overlapping address spaces.

To create a new VNet peering from VNet1 to VNet2, connect to the Azure portal and locate VNet1. Under Settings, click Peerings, and then select +Add to open the Add Peering blade. Use the following settings for a standard peering connection, as shown in Figure 4-20.

  • Name VNet1-to-VNet2

  • Peer details

    • Resource Manager

    • Subscription Select the Subscription for VNet2

    • Virtual Network Choose VNet2

      (Alternatively, you can specify the peer VNet by selecting the I Know My Resource ID checkbox and entering the peer VNet resource ID)

  • Configuration

    • Allow Virtual Network Access Enabled

    • Leave the remaining three checkboxes unchecked

A screen shot of the Azure portal shows the Add Peering blade configured to peer VNet1 to VNet2.

Figure 4-20 Adding peering from VNet1 to VNet2 using the Azure portal

Click OK to create the peering and return to the VNet1 – Peerings view. After refreshing your browser, the VNet peering appears in the portal with the peering status Initiated, as seen in Figure 4-21.

A screen shot shows the VNet1-to-VNet2 peering created by using Azure portal. It shows its peering status as Initiated.

Figure 4-21 VNet1-to-VNet2 peering showing status as Initiated in the Azure portal

To complete the VNet peering, you need to create a second peering in the opposite direction, from VNet2 to VNet1. Open VNet2 in the Azure portal and click Peerings. Click +Add to open the Add Peering blade, and fill in the settings as follows, as shown in Figure 4-22.

A screen shot of the Azure portal shows the Add Peering blade configured to peer VNet2 to VNet1.

Figure 4-22 Adding Peering from VNet2 to VNet1 using the Azure portal

  • Name VNet2-to-VNet1

  • Peer details

    • Resource Manager

    • Subscription Select the Subscription for VNet1

    • Virtual Network Choose VNet1

  • Configuration

    • Allow Virtual Network Access Enabled

    • Leave the remaining three boxes unchecked for this example

Once the peering has completed provisioning, it will appear in the portal with the peering status Connected to peer network VNet1, as seen in Figure 4-23.

A screen shot shows the VNet2-to-VNet1 peering created by using the Azure portal. It shows its peering status as Connected.

Figure 4-23 VNet2-to-VNet1 peering showing as Connected in the Azure portal

Returning to the peering blade of VNet1 shows that the first peering, from VNet1 to VNet2, also now shows a peering status of Connected, as shown in Figure 4-24.

A screen shot shows the VNet1-to-VNet2 peering created by using the Azure portal. It shows its peering status as Connected.

Figure 4-24 VNet1-to-VNet2 peering showing as Connected in the Azure portal

Now VNet1 and VNet2 are peered, and VMs on these networks can communicate with each other as if they were in a single virtual network.

Creating a VNet peering using PowerShell

When creating a new VNet peering using PowerShell, first use the Get-AzVirtualNetwork cmdlet to load information about VNet1 and VNet2 into two local variables. Next, use the Add-AzVirtualNetworkPeering cmdlet to create two VNet peerings, from VNet1 to VNet2 and from VNet2 to VNet1. Once both peerings have been created, they provision and move to the Connected peering state. You can use the Get-AzVirtualNetworkPeering cmdlet to verify the peering status of the VNets.

# Load Vnet1 and VNet2 into local variables
$vnet1 = Get-AzVirtualNetwork `
   -Name VNet1 `
   -ResourceGroupName ExamRef-RG

$vnet2 = Get-AzVirtualNetwork `
   -Name VNet2 `
   -ResourceGroupName ExamRef-RG

# Peer VNet1 to VNet2
Add-AzVirtualNetworkPeering `
   -Name 'VNet1-to-VNet2' `
   -VirtualNetwork $vnet1 `
   -RemoteVirtualNetworkId $vnet2.Id

# Peer VNet2 to VNet1
Add-AzVirtualNetworkPeering `
   -Name 'VNet2-to-VNet1' `
   -VirtualNetwork $vnet2 `
   -RemoteVirtualNetworkId $vnet1.Id

# Check the peering status
Get-AzVirtualNetworkPeering `
   -ResourceGroupName ExamRef-RG `
   -VirtualNetworkName VNet1 `
   | Format-Table VirtualNetworkName, PeeringState

Creating VNet peering using the Azure CLI

To create a VNet peering using the Azure CLI, use the az network vnet peering create command. You can use the az network vnet peering list command to check the peering status.

If the remote VNet is in a different subscription or resource group, you will need to specify the full resource ID of the remote VNet instead of only the name. This resource ID can be found using the az network vnet show command.

# Peer VNet1 to VNet2
# Note: if the remote VNet is in a different subscription or resource group,
# specify the full resource ID
az network vnet peering create --name VNet1-to-VNet2 --resource-group ExamRef-RG \
    --vnet-name VNet1 --allow-vnet-access --remote-vnet VNet2

# Peer VNet2 to VNet1
# Note: if the remote VNet is in a different subscription or resource group,
# specify the full resource ID
az network vnet peering create --name VNet2-to-VNet1 --resource-group ExamRef-RG \
    --vnet-name VNet2 --allow-vnet-access --remote-vnet VNet1

# To see the current state of the peerings
az network vnet peering list --resource-group ExamRef-RG --vnet-name VNet1 -o table
az network vnet peering list --resource-group ExamRef-RG --vnet-name VNet2 -o table

Create a virtual network gateway and configure VNET to VNET connectivity

A virtual network gateway allows you to create connections from your virtual network to other networks. When creating a gateway, you must specify whether it will be used for VPN connections or ExpressRoute connections. Virtual network gateways used for VPN connections are called VPN gateways, while those used for ExpressRoute connections are called ExpressRoute gateways.

VPN gateways can be used to create VPN connections, either to on-premises networks or to other virtual networks. A VPN connection with an on-premises network is called a site-to-site VPN. These are discussed further in Skill 4.7. A VPN connection between two VNets is called a VNet-to-VNet connection.

Connecting virtual networks using VNet-to-VNet connections has several disadvantages over virtual network peering. The VPN gateways add additional cost and complexity. They also add network latency and reduce network bandwidth. In general, VNet peering should be used in preference to VNet-to-VNet connections. VNet-to-VNet connections, however, can be useful in scenarios where VNet peering is not suitable, such as when the additional security of end-to-end encryption is required.

A VNet-to-VNet connection is a type of site-to-site VPN connection in which a VPN gateway is used at both VPN endpoints. It therefore requires a VPN gateway to be deployed in each of the two VNets.

VPN gateways can only be deployed to a dedicated gateway subnet within the VNet. A gateway subnet is a special type of subnet that can only be used for virtual network gateways. Under the hood, the VPN gateway is implemented using Azure virtual machines (these are not directly accessible and are managed for you). While the minimum size for the gateway subnet is a CIDR /29, the Microsoft-recommended best practice is to use a CIDR /27 address block to allow for future expansion.

VPN Gateways are available in several pricing tiers, or SKUs. The correct tier should be chosen based on the required network capacity, as shown in Table 4-5.

Table 4-5 Comparison of VPN gateway pricing tiers

SKU                    Max VNet-to-VNet Connections    Max VNet-to-VNet Throughput
Basic                  10                              100 Mbps
VpnGw1 and VpnGw1Az    30                              650 Mbps
VpnGw2 and VpnGw2Az    30                              1 Gbps
VpnGw3 and VpnGw3Az    30                              1.25 Gbps

Note Re-Sizing VPN Gateways

You can resize a gateway between the VpnGw1, 2 and 3 tiers. However, you cannot resize a Basic tier gateway. The Basic tier is considered a legacy SKU and does not support all features.
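
For example, an existing gateway can be resized between the VpnGw tiers using the Resize-AzVirtualNetworkGateway cmdlet. The gateway name below matches the VNet2-GW gateway created later in this skill; adjust it for your environment.

# Resize an existing VPN gateway from VpnGw1 to VpnGw2
$gw = Get-AzVirtualNetworkGateway -Name VNet2-GW -ResourceGroupName ExamRef-RG
Resize-AzVirtualNetworkGateway -VirtualNetworkGateway $gw -GatewaySku VpnGw2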

Having created a gateway subnet and VPN gateway in each VNet, the VNets can be connected by creating VPN connections between these gateways. As with VNet peering, two connections must be made—one in each direction—and the virtual networks must have non-overlapping IP address ranges.

Creating a VNet-to-VNet connection between VNets automatically configures the necessary network routes. It also updates the VirtualNetwork service tag (explained in Skill 4.4) to include the IP address space from both VNets, so the default NSG rules will permit open connectivity between connected VNets. Connectivity between VNets can be restricted if necessary by modifying the NSGs.

For increased resilience to datacenter-level failures, virtual network gateways can be deployed to availability zones. This requires the use of dedicated SKUs, called VpnGw1Az, VpnGw2Az, and VpnGw3Az. Both zone-redundant and zone-specific deployment models are supported, and the choice is inferred from the associated public IP address rather than being specified explicitly as a gateway property.
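
The following PowerShell sketch shows the general pattern for a zone-redundant gateway. It assumes VNet2 and its gateway subnet already exist (as created later in this skill), and the resource names are examples only.

# Zone-resilient gateway SKUs require a Standard SKU (static) public IP address
$vnet = Get-AzVirtualNetwork -Name VNet2 -ResourceGroupName ExamRef-RG
$gwsubnet = Get-AzVirtualNetworkSubnetConfig -Name GatewaySubnet -VirtualNetwork $vnet

$pip = New-AzPublicIpAddress -Name VNet2-GW-IP-AZ -ResourceGroupName ExamRef-RG `
    -Location $vnet.Location -Sku Standard -AllocationMethod Static

$ipconf = New-AzVirtualNetworkGatewayIpConfig -Name GwIpConfAz `
    -Subnet $gwsubnet -PublicIpAddress $pip

# Create the gateway using an Az SKU
New-AzVirtualNetworkGateway -Name VNet2-GW-AZ -ResourceGroupName ExamRef-RG `
    -Location $vnet.Location -IpConfigurations $ipconf -GatewayType Vpn `
    -VpnType RouteBased -GatewaySku VpnGw1Az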

Creating a VPN gateway and VNet-to-VNet connection using the Azure portal

The following steps capture the basic process of creating a VNet-to-VNet connection between two VNets, VNet2 and VNet3. This guide assumes these VNets have been created in advance. The steps are as follows:

  • Create gateway subnets on VNet2 and VNet3

  • Provision VPN gateways on VNet2 and VNet3

  • Create a connection from the VPN gateway on VNet2 to the VPN gateway on VNet3

  • Create a connection from the VPN gateway on VNet3 to the VPN gateway on VNet2

Using the portal, navigate to VNet2 and click the Subnets link under Settings to open the Subnets blade. Click the +Gateway Subnet button and assign an address space using a /27 CIDR. An example is shown in Figure 4-25; you may need to choose a different subnet address range based on the address range assigned to your VNet. Do not modify the other subnet settings.

A screen shot shows the Azure portal adding a Gateway Subnet to VNet2, using a CIDR /27 IP address range.

Figure 4-25 Adding a Gateway Subnet to VNet2

Repeat this process to create a similar gateway subnet on VNet3. Once again, choose a /27 gateway subnet IP range within the available address space for VNet3.

Next, provision a VPN gateway for VNet2, as follows. From the Azure portal, click +Create A Resource, then click Networking, and then select Virtual Network Gateway. Complete the Create Virtual Network Gateway blade as follows:

  • Name VNet2-GW

  • Gateway type VPN

  • VPN Type Route-based

  • SKU VpnGw1

  • Virtual network VNet2 (you may need to set the correct location first)

  • First IP Configuration Create New, VNet2-GW-IP

  • Location <Same as VNet2>

Do not select the checkboxes for Enable Active-Active Mode or Configure BGP ASN. Figure 4-26 shows the completed gateway settings.

A screen shot shows the Azure portal creating a new VPN gateway for VNet2. The gateway is a route-based VPN gateway using SKU VpnGw1 and public IP address VNet2-GW-IP.

Figure 4-26 Creating the Azure VPN gateway for VNet2

Repeat this process to create a similar VPN gateway for VNet3.

Note VPN Gateway Provisioning Time

It may take up to 45 minutes to provision the VPN Gateways, which is normal.

The final step is to create the VPN connection between the VPN gateways. Two connections are required, one in each direction. Using the Azure portal, open the blade for the VNet2-GW VPN gateway, and click Connections. Then click +Add to open the Add Connection blade, and complete the settings as shown in Figure 4-27.

A screen shot shows the Azure portal and a new connection resource being provisioned between VNet2 and VNet3

Figure 4-27 Creating the Connection from VNet2 to VNet3

  • Name VNet2-to-VNet3

  • Connection type VNet-to-VNet

  • First virtual network gateway VNet2-GW (not editable)

  • Second virtual network gateway VNet3-GW

  • Shared key (PSK) <Choose a secure, random string, and keep a note of it>

Now navigate to the VNet3-GW VPN gateway blade and repeat the above process to create a second connection, from VNet3 to VNet2. Be sure to use the same shared key (PSK) as used when creating the first connection.

Both connections will be listed in the Azure portal on the Connections blade of both VPN gateways. After a short time, both should report their status as Connected, as shown in Figure 4-28.

A screen shot shows the Azure portal shows the VNet2-to-VNet3 and VNet3-to-VNet2 connections, both with status Connected.

Figure 4-28 VPN connections with status Connected

Creating a VPN Gateway and VNet-to-VNet connection using Azure PowerShell

The process for creating VPN gateways and VNet-to-VNet connections using Azure PowerShell follows the same steps as used by the Azure portal, as the following script demonstrates.

Note Creating a Gateway Subnet

When creating the gateway subnet, there is no special parameter or cmdlet name to denote that this is a gateway subnet rather than a normal subnet. The only distinction that identifies a gateway subnet is the subnet name, GatewaySubnet.

# Script to set up VPN gateways and VNet-to-VNet connection
# Assumes VNet2 and VNet3 already created
# with IP address ranges 10.2.0.0/16 and 10.3.0.0/16 respectively

# Name of resource group
$rg = 'ExamRef-RG'

# Create gateway subnets in VNet2 and VNet3
# Note: Gateway subnets are just normal subnets, with the name 'GatewaySubnet'
$vnet2 = Get-AzVirtualNetwork -Name VNet2 -ResourceGroupName $rg
$vnet2.Subnets += New-AzVirtualNetworkSubnetConfig -Name GatewaySubnet `
  -AddressPrefix 10.2.1.0/27
$vnet2 = Set-AzVirtualNetwork -VirtualNetwork $vnet2

$vnet3 = Get-AzVirtualNetwork -Name VNet3 -ResourceGroupName $rg
$vnet3.Subnets += New-AzVirtualNetworkSubnetConfig -Name GatewaySubnet `
  -AddressPrefix 10.3.1.0/27
$vnet3 = Set-AzVirtualNetwork -VirtualNetwork $vnet3

# Create VPN gateway in VNet2
$gwpip2 = New-AzPublicIpAddress -Name VNet2-GW-IP -ResourceGroupName $rg `
  -Location $vnet2.Location -AllocationMethod Dynamic

$gwsubnet2 = Get-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' `
  -VirtualNetwork $vnet2

$gwipconf2 = New-AzVirtualNetworkGatewayIpConfig -Name GwIPConf2 `
  -Subnet $gwsubnet2 -PublicIpAddress $gwpip2

$vnet2gw = New-AzVirtualNetworkGateway -Name VNet2-GW -ResourceGroupName $rg `
  -Location $vnet2.Location -IpConfigurations $gwipconf2 -GatewayType Vpn `
  -VpnType RouteBased -GatewaySku VpnGw1

# Create VPN gateway in VNet3
$gwpip3 = New-AzPublicIpAddress -Name VNet3-GW-IP -ResourceGroupName $rg `
  -Location $vnet3.Location -AllocationMethod Dynamic

$gwsubnet3 = Get-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' `
  -VirtualNetwork $vnet3

$gwipconf3 = New-AzVirtualNetworkGatewayIpConfig -Name GwIPConf3 `
  -Subnet $gwsubnet3 -PublicIpAddress $gwpip3

$vnet3gw = New-AzVirtualNetworkGateway -Name VNet3-GW -ResourceGroupName $rg `
  -Location $vnet3.Location -IpConfigurations $gwipconf3 -GatewayType Vpn `
  -VpnType RouteBased -GatewaySku VpnGw1

# Create Connections
New-AzVirtualNetworkGatewayConnection -Name VNet2-to-VNet3 `
  -ResourceGroupName $rg `
  -Location $vnet2.Location `
  -VirtualNetworkGateway1 $vnet2gw `
  -VirtualNetworkGateway2 $vnet3gw `
  -ConnectionType VNet2VNet `
  -SharedKey "secretkey123"

New-AzVirtualNetworkGatewayConnection -Name VNet3-to-VNet2 `
  -ResourceGroupName $rg `
  -Location $vnet3.Location `
  -VirtualNetworkGateway1 $vnet3gw `
  -VirtualNetworkGateway2 $vnet2gw `
  -ConnectionType VNet2VNet `
  -SharedKey "secretkey123"
Creating a VPN gateway and VNet-to-VNet connection using the Azure CLI

The process for creating VPN gateways and VNet-to-VNet connections using the Azure CLI follows the same steps as used by the Azure portal, as the following script demonstrates.

Once again, the gateway subnet is created simply by specifying the name GatewaySubnet when creating a normal subnet.

In this case, the public IP address required by the VPN gateway must be created beforehand, rather than being created implicitly when creating the gateway.

# Create VPN gateways and a VNet-to-VNet connection between VNet2 and VNet3
# in resource group ExamRef-RG-CLI
# Assumes VNet2 and VNet3 already created,
# with IP address ranges 10.2.0.0/16 and 10.3.0.0/16
# and locations NorthEurope and WestEurope, respectively

# Create gateway subnets in VNet2 and VNet3
az network vnet subnet create --name GatewaySubnet --vnet-name VNet2 \
    --resource-group ExamRef-RG-CLI --address-prefixes 10.2.1.0/27
az network vnet subnet create --name GatewaySubnet --vnet-name VNet3 \
    --resource-group ExamRef-RG-CLI --address-prefixes 10.3.1.0/27

# Create public IP addresses for use by VPN gateways
az network public-ip create --name VNet2-GW-IP --resource-group ExamRef-RG-CLI \
    --location NorthEurope
az network public-ip create --name VNet3-GW-IP --resource-group ExamRef-RG-CLI \
    --location WestEurope

# Create VPN gateways in VNet2 and VNet3
az network vnet-gateway create --name VNet2-GW --resource-group ExamRef-RG-CLI \
    --gateway-type vpn --sku VpnGw1 --vpn-type RouteBased \
    --vnet VNet2 --public-ip-addresses VNet2-GW-IP --location NorthEurope

az network vnet-gateway create --name VNet3-GW --resource-group ExamRef-RG-CLI \
    --gateway-type vpn --sku VpnGw1 --vpn-type RouteBased \
    --vnet VNet3 --public-ip-addresses VNet3-GW-IP --location WestEurope

# Create connections between VPN gateways
az network vpn-connection create --name VNet2-to-VNet3 --resource-group ExamRef-RG-CLI \
    --vnet-gateway1 VNet2-GW --vnet-gateway2 VNet3-GW \
    --shared-key secretkey123 --location NorthEurope

az network vpn-connection create --name VNet3-to-VNet2 --resource-group ExamRef-RG-CLI \
    --vnet-gateway1 VNet3-GW --vnet-gateway2 VNet2-GW \
    --shared-key secretkey123 --location WestEurope

Verify virtual network connectivity

There are many reasons why a connection between VNets might not work as expected. For example:

  • A peering connection may not have enabled the Allow Virtual Network Access option.

  • Network security groups may be configured to block the traffic.

  • The peering or VNet-to-VNet connection may not have been established successfully.

  • Network settings (e.g. firewall settings) within the VMs may be obstructing traffic.

  • User-defined routes (UDRs) may be misconfigured to route traffic incorrectly.

  • A network virtual appliance (NVA) used to bridge traffic between spokes in a hub-and-spoke network architecture may be misconfigured.

There are several ways to verify connections between VNets:

  • The simplest way is to check if VMs in each VNet can communicate with each other, for example by trying to create an RDP or SSH connection between them.

  • Verify the status of the peering connections or VNet-to-VNet connections, as shown in the sketch following this list.
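
A minimal PowerShell sketch of these checks follows. The connection name, target IP address, and port are examples only; substitute values from your own environment (Test-NetConnection must be run from inside a Windows VM).

# Check the peering state for VNet1
Get-AzVirtualNetworkPeering -ResourceGroupName ExamRef-RG -VirtualNetworkName VNet1 |
    Format-Table Name, PeeringState

# Check the status of a VNet-to-VNet connection
Get-AzVirtualNetworkGatewayConnection -Name VNet2-to-VNet3 -ResourceGroupName ExamRef-RG |
    Format-Table Name, ConnectionStatus

# From inside a Windows VM, test reachability of a VM in the remote VNet
Test-NetConnection -ComputerName 10.3.0.4 -Port 3389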

Network troubleshooting, including troubleshooting VPN connections, is covered in greater depth in Skill 4.6.

Skill 4.3: Configure name resolution

Humans work with names, but computers prefer IP addresses. Fundamentally, DNS is about mapping names to IP addresses, making name-based rather than IP-based networking possible. Simplifying somewhat, a client makes a DNS query containing a domain name, and receives a response containing the IP address for that name.

Almost everywhere you look, you’ll see DNS scenarios. From browsing the web, to smartphone apps, to IoT devices, to database lookups within an application, DNS is everywhere. Because DNS is so universal, it is especially important that DNS services offer exceptionally high availability and low latency, since the impact of DNS failures or delays will be widespread.

Azure DNS provides a high performance, highly-available DNS service in Azure. It can be used for two separate DNS scenarios:

  1. Providing Internet-facing name resolution for a public DNS domain by hosting the corresponding DNS zone.

  2. Providing internal name resolution between virtual machines within or between virtual networks.

In addition, Azure provides the ability to control which DNS servers are configured on your virtual machines, allowing you to use your own DNS servers instead of the Azure-provided service.

Configure Azure DNS

This section describes how Azure DNS is configured to host Internet-facing domains. We start with a summary of how the domain name system works, since understanding DNS is a prerequisite to understanding Azure DNS.

How DNS Works

To properly understand the various DNS services and features available in Azure, it is first necessary to understand how the domain name system works. In particular, it is important to understand the different roles played by recursive and authoritative DNS servers, and how a DNS query is routed to the correct DNS name servers using DNS delegation.

First, it’s important to understand the distinction between a domain name, and a DNS zone. The Internet-facing domain name system is a single global name hierarchy. A domain name is just a name within that hierarchy. Owning a domain name gives you the legal right to control the DNS records within that name, and any sub-domains of that name.

You purchase a domain name from a domain name registrar. The registrar then lets you control which name servers receive the DNS queries for that domain, by letting you configure the NS records for the domain.

A DNS zone is the representation of a domain name in an authoritative DNS server. It contains the collection of DNS records for a given domain name. The service hosting the DNS zone lets you manage the DNS records within the zone, and hosts the data on authoritative name servers, which answer DNS queries with DNS responses based on the configured DNS records.

In Azure, you can purchase domain names using the App Service Domains service. DNS zone hosting is provided by Azure DNS.

The DNS settings on the user’s device point to a recursive DNS server, also sometimes known as a local DNS service (or LDNS), or simply as a DNS resolver. The recursive DNS service is typically hosted by your company (if you’re at work) or by your ISP (if you’re at home). There are also public recursive DNS services available, such as Google’s 8.8.8.8 service. The recursive DNS service doesn’t host any DNS records, but it allows your device to off-load most of the work associated with resolving DNS queries.

A diagram shows a client making a DNS query for www.contoso.com to a recursive DNS name server. To resolve this DNS query, the recursive DNS server makes a series of DNS queries, first to the root name servers, then the com name servers, and finally the contoso.com name servers. It then returns the completed DNS response to the client.

Figure 4-29 The DNS Resolution Process

To understand the role of recursive and authoritative DNS servers, consider Figure 4-29, which describes the DNS resolution process for a single DNS query, www.contoso.com:

  1. Your PC makes a DNS query to its locally-configured recursive DNS server. This query is simply a packet sent over UDP port 53, although TCP can also be used (typically when responses are too big to fit in a UDP packet).

  2. Let’s assume the recursive DNS server has just been switched on, so there is nothing in its cache. It passes the query to one of the root name servers (the addresses of the root name servers are pre-configured). The root name servers are authoritative name servers—they host the actual DNS records for the root zone. A zone is simply the data representing a node in the DNS hierarchy.

  3. The root name servers don’t know anything about the contoso.com DNS zone. They do, however, know where you can find the com zone. So, they return a DNS record of type NS, which tells the recursive DNS server where to find the com zone.

  4. The recursive server tries again, this time calling the com name servers. Again, these are authoritative name servers, this time for the com zone.

  5. These name servers don’t recognize www.contoso.com, but they do have NS records that define where the contoso.com DNS zone can be found.

  6. The recursive server tries again, this time calling the authoritative contoso.com name servers.

  7. These servers are authoritative for the contoso.com DNS zone. And, there is a record on these servers matching the www record name. The server does recognize the www.contoso.com query name and returns the A record response that maps this name to an IP address.

  8. The recursive server then returns this result back to the client.

The recursive DNS server can also follow a chain of CNAME records (which map one DNS name to another name). And the recursive DNS server also caches the responses it receives, so that it can respond more quickly next time. The duration of the cache is determined by the TTL (time-to-live) property of each DNS record.

The domain name system is a distributed system, where one set of servers can refer queries to another set using NS records. The process we’ve just seen, mapping a query name to a result, perhaps via a long chain of authoritative DNS servers, is called DNS name resolution.

The NS records tell clients on the Internet where to find the name servers for a given DNS zone. The NS records for a DNS zone are configured in the parent zone, and a copy of the records is also present in the child zone. Setting up these NS records is called delegating a DNS domain.

A fully-qualified domain name (FQDN) is a domain name containing all components all the way up to the root zone. Strictly speaking, a fully-qualified name ends with a “.” (for example, “www.contoso.com.” with the trailing dot), which represents the root zone, although by convention the trailing “.” is often omitted.

Reverse DNS is the ability to map an IP address to a name (as opposed to name to IP address, which is what normal DNS provides). Some applications use reverse DNS as a weak form of authentication. For example, it’s commonly used in email spam-scoring algorithms.

Reverse DNS lookups use a completely independent DNS hierarchy from the forward lookups. The reverse lookup for www.contoso.com does not sit in the contoso.com zone. It instead sits in a separate DNS zone hierarchy based on reversed IP addresses. For example, suppose www.contoso.com resolves to IP address 1.2.3.4. Then the reverse lookup for the IP address 1.2.3.4 will typically be a record named 4 in the DNS zone 3.2.1.in-addr.arpa, giving a FQDN 4.3.2.1.in-addr.arpa (notice the reversed IP address.)

Reverse DNS lookup zones are controlled by whoever owns the IP subnet. The reverse DNS lookup zone for an IP block you own can be hosted in Azure DNS. Public IP addresses in Azure reside in Microsoft-owned IP blocks, and hence the reverse DNS lookups use Microsoft-managed reverse DNS lookup zones.

There’s nothing in the domain name system to ensure the reverse lookup maps to the same name as was used in the forward lookup. That’s achieved simply by the correct configuration in both forward and reverse lookup zones.
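
As a sketch, a reverse lookup zone for an IP block you own could be hosted in Azure DNS using the PowerShell cmdlets covered later in this skill. The zone and record values continue the 1.2.3.4 example above.

# Create the reverse lookup zone for the 1.2.3.0/24 block
New-AzDnsZone -Name 3.2.1.in-addr.arpa -ResourceGroupName ExamRef-RG

# Create the PTR record mapping 1.2.3.4 back to www.contoso.com
New-AzDnsRecordSet -Name '4' -RecordType PTR `
    -ZoneName 3.2.1.in-addr.arpa `
    -ResourceGroupName ExamRef-RG `
    -Ttl 3600 `
    -DnsRecords (New-AzDnsRecordConfig -Ptrdname "www.contoso.com")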

DNS services in Azure

There are several DNS-related services and features in Azure—an overview of each is given below. The first three items are Azure services, which you consume by creating service-specific resources that you will be billed for. The remaining three items are Azure features, which you configure using settings on other resource types, such as a virtual network, public IP address, or network interface.

  • Azure DNS Allows you to host your DNS domains in Azure. It provides the ability to create and manage the DNS records for your domain and provides name servers, which answer DNS queries for your domain from other users on the Internet.

       Azure DNS also supports private DNS zones, which are used for intranet-based name resolution for VM to VM lookups, including support for some scenarios not supported by the Azure-provided DNS service, which we’ll come to shortly. Private DNS zones are currently in preview.

  • Azure Traffic Manager An intelligent DNS service that uses DNS to implement global traffic management. Where Azure DNS always provides the same DNS response to any given DNS query, in Azure Traffic Manager the same query may result in one of several possible responses, depending on a number of factors which you control, such as where the end-user is located or which of your service endpoints is currently available. This enables you to route traffic intelligently between Azure regions, or between Azure deployments and on-premises deployments.

       Understanding Traffic Manager is out of scope for the AZ-103 exam.

  • App Service Domains Allows purchasing of domain names, which can then be hosted in Azure DNS. This service is integrated with Azure App Service, but can be used for any domain registration, even if App Service is not being used.

  • Azure-provided DNS Sometimes called Internal DNS, it allows the VMs in your virtual network to find each other, using DNS queries based on the hostname of each VM. The DNS queries are internal (private) to the virtual network.

  • Recursive DNS A service provided by Azure for DNS name resolution from your Azure VMs or other Azure services. You can also configure your VMs to use your own DNS server instead. This is sometimes informally called bring your own DNS. This is common, for example, when your VMs need to be joined to an Active Directory domain.

  • Reverse DNS Provides the ability to configure the reverse DNS lookup for an Azure-assigned public IP address. (Reverse DNS lookup zones for IP blocks you own can be hosted in Azure DNS).

Creating and delegating a DNS Zone to Azure DNS

A DNS zone is a resource in Azure DNS. Creating a DNS zone resource allocates authoritative DNS name servers to host the DNS records for that zone. Azure DNS can then be used to manage those DNS records. DNS queries directed to those DNS name servers receive a DNS response based on the DNS records configured at that time.

You do not have to own the corresponding domain name before creating a DNS zone in Azure DNS. You can create a DNS zone with any name, except for names on the public suffix list (see https://publicsuffix.org/ ). You can also create more than one DNS zone resource with the same DNS zone name, so long as they are in different resource groups. In this case, the DNS zones will be allocated to separate DNS name servers, so no conflict arises.

You can test your DNS records by directing DNS queries directly to the assigned DNS name servers for your zone. For general use, however, your DNS zone should be delegated from the parent zone. This requires you to own the corresponding domain name.
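
For example, on Windows you can direct a test query at one of the assigned name servers using Resolve-DnsName. The name server shown here is illustrative; use one of the name servers actually assigned to your zone.

# Query an assigned Azure DNS name server directly, before delegation is configured
Resolve-DnsName -Name www.examref.com -Type A -Server ns1-03.azure-dns.com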

Before you can delegate your DNS zone to Azure DNS, you first need to know the names of the name servers assigned to your zone. These can be obtained using the Azure portal, PowerShell, or CLI after the DNS zone resource has been created. You can’t predict in advance which name server pool will be assigned to your DNS zone. You need to create the DNS zone, and then check.

The assigned name servers will vary between zones, so if you’re setting up multiple zones in Azure DNS you need to check the name servers on each one. Don’t assume that the name servers will be the same across all your zones.

Each domain name registrar has their own DNS management tool allowing you to set the name server (NS) records for a domain. In the registrar’s DNS management page, edit the NS records, replacing them with the name servers assigned by Azure DNS.

When delegating a domain to Azure DNS, you must use the name server names provided by Azure DNS. You should always use all four name server names, regardless of the name of your domain. Domain delegation does not require the name server name to match your domain name.

Note Delegating DNS Zones to Azure DNS

When delegating a domain to Azure DNS, do not use DNS glue records to point to the Azure DNS name server IP addresses directly. These IP addresses may change in the future. Delegations using name server names in your own zone, sometimes called vanity name servers, are not currently supported in Azure DNS.

Azure DNS treats child zones as entirely separate zones. Delegating a child zone therefore follows the same process as delegating the parent zone:

  1. Create the child zone resource.

  2. Identify the name servers for the child zone. These will be different to the name servers assigned to the parent zone.

  3. Create NS records in the parent zone to delegate the child zone. The name of the NS records should be the child zone name (excluding the parent zone name suffix), and the RDATA in the NS records should be the child zone name servers.

Note Delegating Child DNS Zones to Azure DNS

When you delegate a child zone, any existing name servers in the parent zone that match the child zone name will become hidden. You’ll still see them in the Azure portal, but they won’t resolve from the name servers since the delegation to the child zone will take precedence. To avoid this issue, before delegating the child zone, you should check for any records that will be hidden and replicate them into the child zone. This applies with any DNS service, not just Azure DNS.

Managing DNS records in Azure DNS

Each record in the domain name system includes the following properties:

  • Name The name of the DNS record is combined with the name of the DNS zone, to form the fully-qualified domain name (FQDN). For example, the record www in zone contoso.com corresponds to the FQDN www.contoso.com.

  • Type The type of DNS record determines what data is associated with the record and what purpose it is used for. A list of record types supported by Azure DNS is provided in Table 4-6.

  • TTL The TTL (or Time-to-Live) tells recursive DNS servers how long a DNS record should be cached.

  • RDATA The data returned for each DNS record. The type of data returned depends on the DNS record type. For example, an A record will return an IPv4 address, whereas a CNAME record returns another domain name.

The collection of records in a DNS zone with the same name and the same type is called a resource record set (or RRSet, or in Azure DNS, simply a record set). Records in Azure DNS are managed using record sets. Record sets are a child resource of the DNS zone, and can contain up to 20 individual DNS records. The name, type and TTL are configured on the record set, and the RDATA is configured on each DNS record within the record set.

To create a DNS record set at the root (or apex) of a DNS zone, use the record set name “@”. For example, the record set named “@” in the zone contoso.com will resolve against queries for contoso.com. You can also use “*” in the record set name to create wildcard records (subject to DNS wildcard matching rules).

Azure DNS supports all commonly-used DNS record types. The full list of supported record types, together with a description of each, is provided in Table 4-6.

Table 4-6 DNS Record Types in Azure DNS

DNS Record Type Remarks
A Used to map a name to an IPv4 address.
AAAA Used to map a name to an IPv6 address.
CAA Used to specify which certificate authorities can issue certificates for a domain. Note that CAA records are not currently available in the Azure portal, so they must be configured using the Azure CLI or Azure PowerShell.
CNAME Provides a mapping from one DNS name to another. The DNS standards do not allow CNAME records at the zone apex. In addition, you cannot create a CNAME record with the same name as a record of any other record type, and CNAME record sets only support a single DNS record rather than a list of records. These are DNS RFC constraints, not Azure DNS limitations.
MX Used for mail server configuration.
NS

An NS record set at the zone apex containing the name servers for the DNS zone is required by the DNS standards. This is created for you when the DNS zone is created. It can be edited, for example to add additional records when co-hosting a DNS zone with more than one provider, but not deleted.

You can create additional NS record sets to delegate child zones.

PTR Used for reverse DNS lookups in reverse lookup zones.
SOA An SOA record is required at the apex of every zone. This is created and deleted with the DNS zone resource.
SRV

SRV records are used for service discovery for a wide range of services, from Kerberos to Minecraft to the Session Initiation Protocol used for Internet telephony.

Note that the Service and Protocol parameters are specified as part of the record set name, for example: _service._protocol.media.contoso.com. Some DNS services prompt you to enter these values separately, then merge them to form the record set name. With Azure DNS, you need to specify them as part of the record set name, but they are not entered separately.

TXT Used for a wide range of applications, including email Sender Policy Framework (SPF).

Note SPF Records

Sender Policy Framework (SPF) records are used to identify legitimate mail servers for a domain and help prevent spam. The SPF record type was deprecated by RFC7208, which states that the TXT record type should be used for SPF records.

Alias records

Azure DNS offers integration with other services hosted in Azure via Alias records.

With conventional DNS records, you explicitly specify the target, such as the IP address of an A record. If the IP address changes, you need to update the DNS record accordingly.

Alias records allow you to define the target of the DNS record implicitly, by referencing another Azure resource. The value of the DNS record is populated automatically based on the resource it references and is updated automatically if that resource changes.

Alias records can reference three different resource types:

  • An A or AAAA record can reference a public IP address, of type IPv4 or IPv6 respectively

  • An A, AAAA, or CNAME record can reference a Traffic Manager profile. This exposes the dynamic, traffic-managed name resolution of the Traffic Manager directly within a record in your DNS domain. Prior to this feature, you had to create a CNAME record from your domain to a record in the trafficmanager.net domain provided by Azure Traffic Manager.

  • An A, AAAA or CNAME record can also reference another record in the same DNS zone. This lets you create synchronized records with ease.

Alias records are a very useful way to address a number of scenarios.

First, Alias records allow you to avoid orphaned DNS records. A common problem with DNS systems is that records are not cleaned up when the services they reference are deleted. The DNS record is left dangling. With Alias records, the DNS record no longer resolves once the underlying service is deleted.

Second, as we have already discussed, by updating automatically when underlying resources change, Alias records reduce your management overhead and help you avoid accidental application downtime.

Third, since Alias records enable you to avoid using a CNAME record when using a vanity domain name with Azure Traffic Manager, they enable you to implement a traffic-managed record at the apex of your domain.
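
As a PowerShell sketch, an alias record set at the zone apex referencing a public IP address can be created by supplying a target resource ID instead of record data. The examref.com zone and the ExamRef-IP public IP are the examples used elsewhere in this skill.

# Create an alias A record at the zone apex, pointing at a public IP resource
$pip = Get-AzPublicIpAddress -Name ExamRef-IP -ResourceGroupName ExamRef-RG

New-AzDnsRecordSet -Name '@' -RecordType A `
    -ZoneName examref.com `
    -ResourceGroupName ExamRef-RG `
    -Ttl 3600 `
    -TargetResourceId $pip.Id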

Creating DNS zones and DNS records using the Azure portal

To create a DNS zone, click +Create A Resource, then Networking, then click DNS Zone to open the Create DNS Zone blade. Fill in the blade by specifying the DNS domain name as the DNS zone resource name, and selecting your resource group, as shown in Figure 4-30.

A screen shot shows the Create DNS Zone blade of the Azure portal, with DNS zone name examref.com.

Figure 4-30 Creating a DNS zone using the Azure portal

Note DNS Zones and Azure Region

When creating a DNS zone, the location field only specifies the resource group location. It does not apply to the DNS zone resource itself, which is global rather than regional.

Once the DNS zone has been created, open the DNS zone blade. The Azure DNS name servers assigned to the zone are listed in the essentials panel, as highlighted in Figure 4-31.

A screen shot shows the DNS zone blade of the Azure Portal. The DNS zone name is examref.com. The assigned name servers are highlighted, they are ns1-03.azure-dns.com, ns2-03.azure-dns.net, ns3-03.azure-dns.org, and ns4-03.azure-dns.info.

Figure 4-31 The DNS zone blade, highlighting the Azure DNS name servers assigned to this zone

To set up DNS delegation for the DNS zone, these name servers must be listed in the corresponding NS records in the parent zone. If the domain name was purchased using the Azure App Service Domains service, this will be done automatically. Otherwise, this must be configured at the DNS registrar where the domain name was purchased.

To create a DNS record in a new record set, click +Record Set to open the Add Record Set blade. If there is an existing record with the same name and type as the record you wish to create, you should instead click on the existing record set and add the new record there. To create a pair of A records with name www (giving the fully-qualified domain name www.examref.com), fill in the blade with the following values, as shown in Figure 4-32.

  • Name www

  • Type A

  • Alias record set No

  • TTL 1 hour (or choose your own value)

  • IP Addresses Enter A record IP addresses, one for each DNS record in the record set.

A screen shot shows the Add Record Set blade of the Azure Portal. The record set name is www, and the record type is A. Two IP addresses are provided, to create a record set containing 2 DNS records.

Figure 4-32 The Add Record Set blade

Suppose now you wish to create a DNS record at the zone apex (so the fully-qualified domain name is simply the DNS zone name examref.com), pointing to a dynamically-allocated public IP address. Click +Add Record Set again and complete the Add Record Set blade with the following settings, as shown in Figure 4-33.

A screen shot shows the Add Record Set blade of the Azure Portal. The record set name is @, and the record type is A. The Alias Record Set option is Yes, and the public IP address ExamRef-IP has been selected.

Figure 4-33 The Add Record Set blade for an Alias record set

  • Name @ (this is a DNS convention for records at the zone apex)

  • Type A

  • Alias record set Yes

  • Choose subscription Choose the subscription containing the public IP address

  • Azure resource Choose the public IP address resource

  • TTL 1 hour (or choose your own value)

Creating DNS zones and DNS records using Azure PowerShell

DNS Zones and record sets are created using the New-AzDnsZone and New-AzDnsRecordSet cmdlets, respectively.

You can specify DNS records when creating the record set, or you can create an empty record set and add DNS records afterward, using the Get-AzDnsRecordSet, Add-AzDnsRecordConfig, and Set-AzDnsRecordSet cmdlets.

You can use a similar sequence to remove DNS records from an existing record set, using the Get-AzDnsRecordSet, Remove-AzDnsRecordConfig, and Set-AzDnsRecordSet cmdlets. Note that when removing a record, you must specify all RDATA fields for the resource type and they must all be an exact match to an existing record.

# Create a DNS zone
New-AzDnsZone -Name examref.com -ResourceGroupName ExamRef-RG

# Create a record set containing a single record
New-AzDnsRecordSet -Name www -RecordType A `
          -ZoneName examref.com `
          -ResourceGroupName ExamRef-RG `
          -Ttl 3600 `
          -DnsRecords (New-AzDnsRecordConfig -IPv4Address "1.2.3.4")

# Create a record set at the zone apex containing multiple records
$records = @()
$records += New-AzDnsRecordConfig -IPv4Address "1.2.3.4"
$records += New-AzDnsRecordConfig -IPv4Address "5.6.7.8"
New-AzDnsRecordSet -Name '@' -RecordType A `
          -ZoneName examref.com `
          -ResourceGroupName ExamRef-RG `
          -Ttl 3600 `
          -DnsRecords $records

# Add a new record to and remove an existing record from an existing record set
$recordset = Get-AzDnsRecordSet -Name www -RecordType A `
          -ZoneName examref.com `
          -ResourceGroupName ExamRef-RG
Add-AzDnsRecordConfig -RecordSet $recordset -IPv4Address "5.6.7.8"
Remove-AzDnsRecordConfig -RecordSet $recordset -IPv4Address "1.2.3.4"
Set-AzDnsRecordSet -RecordSet $recordset

# View records
Get-AzDnsRecordSet -ZoneName examref.com -ResourceGroupName ExamRef-RG
Creating DNS Zones and DNS Records using the Azure CLI

To create a DNS zone using the Azure CLI, use the az network dns zone create command.

To manage DNS records, first create an empty record set using the az network dns record-set A create command. In this case, A represents the DNS A record type—substitute a different record type as required.

DNS records are then added and removed using the az network dns record-set a add-record and az network dns record-set a remove-record commands (again, substituting the required record type).

DNS records can be listed using the az network dns record-set list command, which returns a list of records of all record types.

# Create a DNS zone
az network dns zone create --name examref.com --resource-group ExamRef-RG

# Create an empty record set of type 'A'
az network dns record-set a create --name www --zone-name examref.com \
    --resource-group ExamRef-RG --ttl 3600

# Add A records to the above record set
az network dns record-set a add-record --record-set-name www \
    --zone-name examref.com --resource-group ExamRef-RG --ipv4-address 1.2.3.4
az network dns record-set a add-record --record-set-name www \
    --zone-name examref.com --resource-group ExamRef-RG --ipv4-address 5.6.7.8

# Remove an A record from the record set
az network dns record-set a remove-record --record-set-name www \
    --zone-name examref.com --resource-group ExamRef-RG --ipv4-address 1.2.3.4

# View records
az network dns record-set list --zone-name examref.com \
    --resource-group ExamRef-RG -o table
Importing and Exporting DNS zone files using the Azure CLI

A DNS zone file is a text file that contains details of every DNS record in the zone. It follows a standard format, making it suitable for transferring DNS records between DNS systems. Using a zone file is a quick, reliable, and convenient way to transfer a DNS zone into or out of Azure DNS.

Azure DNS supports importing and exporting zone files by using the az network dns zone import and az network dns zone export commands. Since zone files are processed client-side in the CLI itself, zone file import and export are not available via any other Azure DNS tools, such as PowerShell, the Azure portal, or even the Azure DNS SDKs or REST API.

Importing a zone file will create a new zone in Azure DNS if one does not already exist. If the zone already exists, the record sets in the zone file are merged with the existing record sets.

SOA parameters are taken from the imported zone file, except for the host property, for which the value assigned by Azure DNS is retained. Similarly, for the NS record set at the zone apex, which contains the Azure DNS name servers assigned to the zone, the TTL is always taken from the imported zone file, but the name server names are taken from the zone in Azure DNS.

# Export a DNS zone file
az network dns zone export --name examref.com --resource-group ExamRef-RG \
    --file-name "examref.com.txt"

# Import a DNS zone file (to a different resource group)
az network dns zone import --name examref.com --resource-group ExamRef2-RG \
    --file-name "examref.com.txt"

Configure custom DNS settings

When a virtual machine connects to a virtual network, it receives its IP address via DHCP. As part of that DHCP exchange, DNS settings are also configured in the VM. By default, VMs are configured to use Azure’s recursive DNS servers. These provide name resolution for Internet-hosted domains, plus private VM-to-VM name resolution within a virtual network.

The hostname of the VM is used to create a DNS record mapping to the private IP address of the VM. You specify the hostname—which is simply the VM name—when you create the virtual machine. Azure specifies the DNS suffix, using a value that is unique to the virtual network. These suffixes end with internal.cloudapp.net. The hostname and DNS suffix together form the unique fully-qualified domain name.

Name resolution for these DNS records is private—they can only be resolved from within the virtual network. The DNS suffix is configured as a lookup suffix within each VM, so names can be resolved between VMs within the virtual network using the hostname only.

This built-in DNS service uses the IP address 168.63.129.16. This is a special static IP address that is reserved by the platform for this purpose. This IP address provides both the authoritative DNS service for Azure-provided DNS and Azure’s recursive DNS service, which is used to resolve Internet DNS names from Azure VMs.

Bring your own DNS

Alternatively, you can configure your own DNS settings, which will be configured on the VMs instead during the DHCP exchange. This enables you to specify your own DNS servers, either in Azure or running on-premises. With your own DNS servers, you can support any DNS scenario, including scenarios not supported by the Azure-provided service. Example scenarios requiring you to use your own DNS servers include name resolution between VMs in different virtual networks, name resolution between on-premises resources and Azure virtual machines, reverse DNS lookup of internal IP addresses, and name resolution for non-Internet-facing domains, such as domains associated with Active Directory.

You should not specify your own DNS settings within the VM itself, since the platform is then unaware of the settings you have chosen. Instead, Azure provides configuration options within the virtual network settings. These DNS server settings are at the virtual network level, and apply to all VMs in the virtual network.

You can also specify VM-specific DNS server settings on each network interface. These take precedence over settings at the virtual network level. Where multiple VMs are deployed in an availability set, setting DNS servers on any network interface updates all VMs in the availability set. The DNS servers applied are the union of the network interface-level DNS servers from across the availability set.

Note DNS Name Server Settings

Custom DNS settings can be configured at the VNet level, and the network interface level, but not at the subnet level. To use specific settings for an individual subnet, you must configure those settings on each network interface in the subnet.

You can use these DNS settings to direct your VMs’ DNS queries to any DNS servers you choose. They can point to IP addresses of on-premises servers, such as an Active Directory Domain Controller or network appliance, a DNS service running in an Azure Virtual Machine, or anywhere else on the Internet.

If you use your own DNS servers, those servers will need to offer a recursive DNS service, otherwise name resolution for Internet domains from your virtual machines will break. If you point the DNS settings directly at an Internet-based recursive DNS service, such as Google 8.8.8.8, then you will not be able to perform VM-to-VM lookups.

Note Restart Virtual Machines when Changing DNS Settings

If you make changes to the DNS settings at the virtual network level, any affected virtual machines must be restarted to pick up the new settings. If you make changes to DNS settings at the network interface level, the affected VM (or VMs across the availability set, if used) will restart automatically to pick up the new settings.

One challenge when using your own DNS servers is that you will need to register each VM in your DNS service. To do this, you can configure the DNS service to accept Dynamic DNS queries, which the VM will send when it boots. This allows the VMs to register with the DNS server automatically. A problem with this approach is that the DNS suffix in the Dynamic DNS query must match the DNS zone name configured on the DNS server, and Azure does not support configuring the DNS suffix via the Azure platform settings. As a workaround, you can configure the correct DNS suffix within each VM yourself, using a start-up script.
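
As an illustration of that workaround, the following start-up script sketch sets the connection-specific DNS suffix inside a Windows VM so that Dynamic DNS registrations match the zone configured on your DNS server. The zone name contoso.local and the interface alias are assumptions for this example.

# Set the connection-specific DNS suffix inside the VM (run at start-up).
# Assumes the zone on your DNS servers is contoso.local and the NIC alias is "Ethernet".
Set-DnsClient -InterfaceAlias "Ethernet" `
    -ConnectionSpecificSuffix "contoso.local" `
    -RegisterThisConnectionsAddress $true `
    -UseSuffixWhenRegistering $true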

Configure custom DNS settings using the Azure portal

To configure the DNS servers on a VNet, open the virtual network blade, and then click DNS Servers under Settings, as seen in Figure 4-34. You can then enter the DNS servers you wish the VMs in this virtual network to use. After saving your changes, you need to restart the VMs in the VNet to pick up the change.

A screen shot shows how the DNS settings for the virtual network have been configured. In this case, the virtual network has been configured with two DNS servers, with IP addresses 10.0.0.25 and 10.0.0.125.

Figure 4-34 Custom DNS servers for a virtual network configured using the Portal

The steps to configure the DNS servers on an individual VM are similar. Open the blade for the VM’s network interface, and then click on DNS Servers under Settings, as seen in Figure 4-35. You can then enter the DNS servers you wish this VM to use. Note that VMs in an availability set will adopt the union of DNS servers from network interfaces across the availability set. After saving your changes, your VM (or VMs in the availability set) will automatically restart to pick up the change.

A screen shot shows how the DNS settings for the network interface have been configured. In this case, the network interface has been configured with two DNS servers, with IP addresses 10.0.0.25 and 10.0.0.125.

Figure 4-35 Custom DNS servers for a network interface configured using the Portal

Configure custom DNS settings using Azure PowerShell

To configure custom virtual network DNS settings when creating a virtual network using Azure PowerShell, use the DNSServer parameter of the New-AzVirtualNetwork cmdlet.

# Create a virtual network with custom DNS settings
New-AzVirtualNetwork -Name VNet1 `
 -ResourceGroupName ExamRef-RG `
 -Location "North Europe" `
 -AddressPrefix 10.1.0.0/16 `
 -DNSServer 10.0.0.4,10.0.0.5 `
 -Subnet (New-AzVirtualNetworkSubnetConfig `
   -Name Default `
   -AddressPrefix 10.1.0.0/24)

To change the custom DNS settings on an existing VNet, use the Get-AzVirtualNetwork cmdlet to create a local object representing the VNet. Modify the DNS settings locally on this object, then commit your changes using the Set-AzVirtualNetwork cmdlet. Existing VMs must be restarted to pick up the change.

# Modify the DNS server configuration of an existing VNet
$vnet = Get-AzVirtualNetwork -Name VNet1 `
 -ResourceGroupName ExamRef-RG

$vnet.DhcpOptions.DnsServers.Clear()
$vnet.DhcpOptions.DnsServers.Add("10.10.200.1")
$vnet.DhcpOptions.DnsServers.Add("10.10.200.2")

Set-AzVirtualNetwork -VirtualNetwork $vnet

# Restart the VMs in the VNet to pick up the DNS change (example for 1 VM)
$vm = Get-AzVM -Name VNet1-VM -ResourceGroupName ExamRef-RG
Restart-AzVM -Id $vm.Id

When creating a virtual machine using Azure PowerShell, there is no option to specify the DNS settings. To change the custom DNS settings on the network interface of an existing VM, use the Get-AzNetworkInterface cmdlet to create a local object representing the network interface. Modify the DNS settings locally on this object, then commit your changes using the Set-AzNetworkInterface cmdlet. This will cause the VM (or VMs in the availability set) to restart automatically to pick up the change.

# Update the DNS settings on a network interface
$nic = Get-AzNetworkInterface `
       -Name VM1-NIC `
       -ResourceGroupName ExamRef-RG

$nic.DnsSettings.DnsServers.Clear()
$nic.DnsSettings.DnsServers.Add("8.8.8.8")
$nic.DnsSettings.DnsServers.Add("8.8.4.4")

# Commit the DNS change. This will cause the VM (or VMs in the availability set) to restart
Set-AzNetworkInterface -NetworkInterface $nic

Configure custom DNS settings using the Azure CLI

Use the --dns-servers parameter to specify custom DNS servers when creating a virtual network using the az network vnet create command.

# Create a virtual network using custom name servers (uses default subnet)
az network vnet create --name VNet1 --resource-group ExamRef-RG \
  --address-prefixes 10.0.0.0/16 --dns-servers 8.8.8.8 8.8.4.4

To modify the DNS server configuration on an existing VNet, use the az network vnet update command. Use the --dns-servers parameter to specify custom DNS settings, and the --remove parameter to remove the custom DNS servers and revert to the Azure-provided DNS defaults. VMs must be restarted to pick up the change.

# Set custom DNS servers on a VNet
az network vnet update --name VNet1 --resource-group ExamRef-RG --dns-servers 10.0.0.254

# Remove custom DNS servers from a VNet
az network vnet update --name VNet1 --resource-group ExamRef-RG \
  --remove DHCPOptions.DNSServers

To modify the DNS server configuration on an existing VM, use the az network nic update command. NIC-level DNS settings are aggregated across availability sets, and VMs must be restarted to pick up the change.

# Set custom DNS servers on a NIC
az network nic update --name VM1-NIC --resource-group ExamRef-RG \
  --dns-servers 8.8.8.8 8.8.4.4

Configure private DNS zones

In addition to supporting Internet-facing DNS domains, Azure DNS also supports private DNS domains as a Preview feature. This provides an alternative approach to name resolution within and between virtual networks.

By using private DNS zones, you can use your own custom domain names, including DNS suffix, rather than the Azure-provided DNS suffix, without the overhead or complexity of running your own DNS servers.

The service supports automatic registration of VMs into the private zone, but only from a single virtual network, called the registration VNet. This must be registered with the DNS zone before any VMs are created.

If you want to resolve VM names from multiple virtual networks, the VMs in any other networks must be registered with the service manually (or via a custom automation). Name resolution between VNets is independent of connectivity between VNets, so peering your virtual networks or setting up a VNet-to-VNet connection is not required.

Name resolution is supported from up to 10 virtual networks. These are called resolution VNets. The zone name is not registered with the VMs as a DNS search suffix, so you will need to configure the suffix within each VM yourself or use fully-qualified domain names in your DNS queries.

Create private DNS zones using Azure PowerShell or the Azure CLI

As a preview feature, there are several limitations. Most notably, you cannot configure private DNS zones using the Azure portal—you need to use Azure PowerShell or the Azure CLI.

With Azure PowerShell, specify the ZoneType Private parameter to create a private DNS zone with the New-AzDnsZone cmdlet. Use the RegistrationVirtualNetwork and ResolutionVirtualNetwork parameters to specify the virtual networks.

# Create a private DNS zone
$vnet1 = Get-AzVirtualNetwork -Name VNet1 -ResourceGroupName ExamRef-RG
$vnet2 = Get-AzVirtualNetwork -Name VNet2 -ResourceGroupName ExamRef-RG

New-AzDnsZone -Name contoso.local `
   -ResourceGroupName ExamRef-RG `
   -ZoneType Private `
   -RegistrationVirtualNetwork $vnet1 `
   -ResolutionVirtualNetwork $vnet2

With the Azure CLI, specify --zone-type private when creating a DNS zone with the az network dns zone create command to create a private zone. Use the --registration-vnets and --resolution-vnets parameters to specify the virtual networks, using either the network name or the resource ID.

# Create a private DNS zone
az network dns zone create --name contoso.local --resource-group ExamRef-RG \
  --zone-type private --registration-vnets VNet1 --resolution-vnets VNet2

Once created, you can manage DNS records in a private DNS zone using the Azure portal, PowerShell, or CLI, in the same way as for public DNS zones. Only manually-registered DNS entries are visible using these tools—the DNS records corresponding to the automatically-registered VMs in the registration VNet are not available.
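
For example, records can be added to the private zone manually using the same Az.Dns cmdlets used for public zones. A sketch follows; the record name and IP address are hypothetical.

# Add an A record to the private DNS zone contoso.local
New-AzDnsRecordSet -Name db1 `
    -RecordType A `
    -ZoneName contoso.local `
    -ResourceGroupName ExamRef-RG `
    -Ttl 3600 `
    -DnsRecords (New-AzDnsRecordConfig -IPv4Address "10.1.0.10")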

Skill 4.4: Create and configure a network security group (NSG)

Network security groups (NSGs) allow you to control which network flows are permitted into and out of your virtual networks and virtual machines. Each NSG contains lists of inbound and outbound rules, which give you fine-grained control over exactly which network flows are allowed or denied.

Create security rules

A network security group (NSG) is a standalone Azure resource that acts as a network traffic filter. Each NSG contains a list of security rules, used to allow or deny inbound or outbound network traffic based on properties of that traffic such as protocol, source and destination IP address, and port. To apply the NSG, it is associated with either a subnet or with a specific VM's network interface.

NSG rules

NSG rules define which traffic flows are allowed or denied by the NSG. Table 4-7 describes the properties of an NSG rule.

Table 4-7 NSG rule properties

Name
  Description: The name of the rule.
  Constraints: Must be unique within the network security group. Must end with a letter, number, or underscore. Cannot exceed 80 characters.
  Considerations: You can have several rules within an NSG, so follow a naming convention that allows you to identify the purpose of each rule.

Protocol
  Description: The network protocol the rule applies to.
  Constraints: TCP, UDP, or * (all protocols).
  Considerations: Using * as the protocol includes ICMP as well as TCP and UDP. In the Azure portal, select 'Any' instead of '*'.

Source port range(s)
  Description: Source port range(s) to match for the rule.
  Constraints: A single port number from 1 to 65535, a port range (example: 1-65535), a list of ports or port ranges, or * (all ports).
  Considerations: Source ports are often ephemeral, so unless your client program uses a specific port, use * in most cases. Try to reduce the number of rules by specifying multiple ports or port ranges in a single rule.

Destination port range(s)
  Description: Destination port range(s) to match for the rule.
  Constraints: A single port number from 1 to 65535, a port range (example: 1-65535), a list of ports or port ranges, or * (all ports).
  Considerations: Try to reduce the number of rules by specifying multiple ports or port ranges in a single rule.

Source address prefix(es)
  Description: Source address prefix(es) or service tag(s) to match for the rule.
  Constraints: A single IP address (example: 10.10.10.10), an IP subnet (example: 192.168.1.0/24), a service tag, a list of the above, or * (all addresses).
  Considerations: Consider using ranges, service tags, and lists to reduce the number of rules. The IP addresses of Azure VMs can also be specified implicitly using application security groups.

Destination address prefix(es)
  Description: Destination address prefix(es) or service tag(s) to match for the rule.
  Constraints: A single IP address (example: 10.10.10.10), an IP subnet (example: 192.168.1.0/24), a service tag, a list of the above, or * (all addresses).
  Considerations: Consider using ranges, service tags, and lists to reduce the number of rules. The IP addresses of Azure VMs can also be specified implicitly using application security groups.

Direction
  Description: The direction of traffic to match for the rule.
  Constraints: Inbound or Outbound.
  Considerations: Inbound and outbound rules are processed separately, based on traffic direction.

Priority
  Description: Rules are checked in order of priority. Once a matching rule is found, no more rules are tested.
  Constraints: A number between 100 and 4096, unique within the NSG.
  Considerations: Consider leaving gaps of 100 between rule priorities, to leave space for new rules you might create in the future.

Action
  Description: The type of action to apply if the rule matches.
  Constraints: Allow or Deny.
  Considerations: Keep in mind that if an allow rule is not matched for a packet, the packet is dropped.

Note NSG Rule Priority

NSG rules are enforced based on their priority. Priority values for the rules you create run from 100 to 4096, while the default rules use priorities from 65000 to 65500. Rules are read and enforced starting with priority 100, then 101, 102, and so on. When a rule is found that matches the traffic under consideration, the rule is applied and all further processing stops; subsequent rules are disregarded.

For example, suppose you had an inbound rule that allowed TCP traffic on any port with a priority of 250, and another that denied TCP traffic on port 80 with a priority of 125. An inbound TCP connection on port 80 would be denied, since the deny rule has the lower priority value and is therefore applied before the allow rule is considered.

Service Tags

Many Azure services are accessed via Internet-facing endpoints. These endpoints can change over time, for example as new Azure regions are built. This makes it difficult to use NSG rules to control access to those services—it’s hard to identify the list of IP ranges to use, and even harder to keep the list up-to-date.

To address this problem, Azure provides service tags. These are platform-defined shortcuts that map to the IP ranges of various Azure services. The IP ranges associated with each service tag are updated automatically whenever the IP addresses used by the service change.

Service tags are used in NSG rules as a quick and reliable way of creating rules that control traffic to each service. Typically, they are used in outbound rules to control which other Azure services the VMs in a VNet can or cannot access.

Note that service tags control access to the service, but not to a specific resource within that service. For example, a service tag might be used in an NSG rule allowing a VM to connect to Azure storage. This rule cannot control which account in Azure storage the VM will attempt to use.

Service tags are provided for around 20 Azure services, and the list is growing. Here are some of the most commonly-used service tags.

  • VirtualNetwork Denotes the virtual network address space where the NSG is assigned. It refers to the entire virtual network (not just the subnet), plus all connected virtual networks and any on-premises address space connected via Site-to-Site VPN or ExpressRoute (which we discuss in the next skill section of this chapter).

    Note that the network address space of peered virtual networks is only included if the Allow Virtual Network Access property is set to Enabled.

  • Internet Denotes the public Internet address space. This includes the Internet-facing Azure IP address ranges, used for public IP addresses and Azure platform services.

  • AzureCloud Denotes the Azure datacenter public IP space. This service tag can be scoped to a specific Azure region, for example by specifying AzureCloud.EastUs.

  • AzureLoadBalancer Denotes the IPs where Azure load balancer health probes originate. Traffic from these addresses should be allowed for any load-balanced VMs. Note that this service tag cannot be used to control traffic coming through the load balancer from elsewhere; that traffic can be filtered using the originating source IP, which is not modified as it passes through the Azure load balancer.

  • AzureTrafficManager Performs a similar role for Azure Traffic Manager. It is used to allow traffic from the source IP addresses of Traffic Manager health probes.

  • Storage Represents the IP addresses used by the Azure Storage service. As with the AzureCloud service tag, the Storage service tag can be scoped to a region. For example, you can specify Storage.WestUS to only allow access to storage accounts in the West US region.

  • Sql Represents the IP addresses used by the Azure SQL Database service. This service tag can also be scoped to a specific region.
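
As an illustration of using service tags in rules, the following Azure PowerShell sketch creates an outbound rule allowing access to Azure Storage in a single region, plus a lower-priority rule denying other Internet-bound traffic. The rule names and priorities are arbitrary choices for this example.

# Allow outbound access to Azure Storage in West US; deny other Internet-bound traffic
$allowStorage = New-AzNetworkSecurityRuleConfig -Name Allow_Storage_WestUS `
    -Access Allow -Protocol * -Direction Outbound -Priority 200 `
    -SourceAddressPrefix VirtualNetwork -SourcePortRange * `
    -DestinationAddressPrefix Storage.WestUS -DestinationPortRange *

$denyInternet = New-AzNetworkSecurityRuleConfig -Name Deny_Internet `
    -Access Deny -Protocol * -Direction Outbound -Priority 300 `
    -SourceAddressPrefix VirtualNetwork -SourcePortRange * `
    -DestinationAddressPrefix Internet -DestinationPortRange *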

Default rules

All NSGs have a set of default rules. You cannot add to, edit, or delete these default rules. However, since they have the lowest possible priority, they can be overridden by other rules which you create.

The default rules allow and disallow traffic as follows:

  • Virtual network Traffic originating and ending in a virtual network is allowed both in inbound and outbound directions.

  • Internet Outbound traffic is allowed, but inbound traffic is blocked.

  • Load balancer Allows Azure load balancer to probe the health of your VMs and role instances. If you are not using a load balanced set, you can override this rule.

Note Load Balancer Traffic

The Load Balancer default rule uses the AzureLoadBalancer service tag. This applies only to Azure load balancer health probes, which originate at the load balancer. It does not apply to traffic received through the load balancer, which retains its original source IP address and port.

Table 4-8 shows the default inbound rules for each NSG.

Table 4-8 Default Inbound Rules

Name Priority Source Source Port Destination Destination Port Protocol Access
AllowVNetInBound 65000 VirtualNetwork Any VirtualNetwork Any Any Allow
AllowAzureLoadBalancerInBound 65001 AzureLoadBalancer Any Any Any Any Allow
DenyAllInBound 65500 Any Any Any Any Any Deny

Table 4-9 shows the default outbound rules for each NSG.

Table 4-9 Default Outbound Rules

Name Priority Source Source Port Destination Destination Port Protocol Access
AllowVNetOutBound 65000 VirtualNetwork Any VirtualNetwork Any Any Allow
AllowInternetOutBound 65001 Any Any Internet Any Any Allow
DenyAllOutBound 65500 Any Any Any Any Any Deny

Application security groups

As you have seen, NSG rules are like traditional firewall rules, and are defined using source and destination IP blocks. They enable you to segment network traffic between application tiers by placing each tier in a separate subnet.

This creates some management challenges. The IP blocks for each subnet must be carefully planned in advance. To allow for additional servers to be added in future, each subnet must be bigger than you really need, making inefficient use of the IP space. And if you make a subnet too small and run out of space, it can be time-consuming to reconfigure the network to free up additional space, especially without application downtime. Also, each subnet requires a separate NSG, making it difficult to get an overall picture of the permitted and blocked traffic at an application level.

Application security groups (ASGs) address these challenges by offering an alternative approach to network segmentation. They allow you to achieve the same goal of segmenting your application into separate tiers, and strictly controlling the permitted network flows between tiers. But they avoid the need to associate each tier with a separate subnet, and therefore all the challenges associated with planning and managing subnets fall away. With ASGs, you define which application tier each VM belongs to explicitly, rather than implicitly based on the subnet in which the VM has been placed. All VMs can be placed in a single subnet, and a single NSG is used to define all permitted network flows between application tiers. Since a single subnet is used, the IP space can be managed much more flexibly. And since there is a single NSG, with rules referring to named application tiers, the network rules are easier to understand, and can all be managed in one place.

Figure 4-36 shows an example. We have a standard 3-tier application architecture, with web servers, application servers, and database servers. These servers have been grouped by associating each server with the appropriate application security group. All servers are placed in the same subnet, without having to think about how the network space is subdivided. A single network security group contains rules defining the permitted traffic flows between application tiers.

A screen shot shows VMs divided into WebServers, AppServers and DatabaseServers, all placed in a single subnet. Each group of VMs is grouped using an application security group. A single NSG controls the permitted network flows, which are Internet to WebServers, WebServers to AppServers, AppServers to DatabaseServers, and from the Azure load balancer (health probes) to the virtual network. All other flows are denied.

Figure 4-36 Using application security groups to simplify subnet and NSG management

Application security groups enable you to configure network security as a natural extension of an application’s structure, allowing you to group virtual machines and define network security policies based on those groups. You can reuse your security policy at scale without the manual maintenance of explicit IP addresses. The platform handles the complexity of explicit IP addresses and multiple rule sets, allowing you to focus on your business logic.

Configuring application security groups is straightforward; the steps are listed below, with a PowerShell sketch after the list:

  1. First, you create an application security group resource for each server group. This resource has no properties, other than its name, resource group, and location.

  2. Next, you associate the network interface from each VM with the appropriate application security group. This defines which group (or groups) each VM belongs to.

  3. Finally, you define your network security group rules using application security group names instead of explicit IP ranges. This is similar to how rules are configured using named service tags.
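
The following Azure PowerShell sketch illustrates these three steps. It assumes two hypothetical application security groups (WebServers and AppServers), a NIC named Web1-NIC, and an inbound rule allowing the web tier to reach the app tier on port 8080; it is a sketch of the approach rather than a complete deployment.

# 1. Create the application security groups
$asgWeb = New-AzApplicationSecurityGroup -ResourceGroupName ExamRef-RG `
    -Name WebServers -Location centralus
$asgApp = New-AzApplicationSecurityGroup -ResourceGroupName ExamRef-RG `
    -Name AppServers -Location centralus

# 2. Associate a VM's network interface with an application security group
#    (assumes the first IP configuration on the NIC; sketch only)
$nic = Get-AzNetworkInterface -Name Web1-NIC -ResourceGroupName ExamRef-RG
$nic.IpConfigurations[0].ApplicationSecurityGroups = @($asgWeb)
Set-AzNetworkInterface -NetworkInterface $nic

# 3. Reference the groups in an NSG rule instead of explicit IP ranges
$rule = New-AzNetworkSecurityRuleConfig -Name Allow_Web_To_App `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 110 `
    -SourceApplicationSecurityGroup $asgWeb -SourcePortRange * `
    -DestinationApplicationSecurityGroup $asgApp -DestinationPortRange 8080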

Create an NSG using the Azure portal

To create an NSG using the portal, first click Create A Resource, then Networking, then select Network Security Group. Once the Create Network Security Group blade loads, you will need to provide a name, the subscription where your resources are located, the resource group for the NSG, and the location (this must be the same region as the resources to which you wish to apply the NSG). In Figure 4-37, the NSG is named AppsNSG and will be used to allow HTTP traffic into the Apps subnet.

A screen shot shows the Azure portal creating a new network security group. The name AppsNSG has been supplied and placed into the ExamRefRG resource group and located in the Central US Azure region.

Figure 4-37 Creating a network security group using the Azure Portal

After the NSG has been created, open the NSG Overview blade as shown in Figure 4-38. Here, you see that the NSG has been created, but there are no inbound or outbound security rules beyond the default rules.

A screen shot shows the Azure portal on the Overview blade of the AppsNSG. The default inbound and outbound security rules are shown.

Figure 4-38 The NSG Overview blade, showing the inbound and outbound security rules

The next step is to create the inbound rule for HTTP and HTTPS traffic. Under the Settings area, click Inbound Security Rules, then click +Add to open the Add Inbound Security Rule panel. Notice how the panel has both Basic and Advanced modes, depending on the level of control required. To allow HTTP and HTTPS traffic on ports 80 and 443, fill in the settings as shown in Figure 4-39:

  • Source Any

  • Source Port Ranges *

  • Destination VirtualNetwork

  • Destination Port Ranges 80,443

  • Protocol TCP

  • Action Allow

  • Priority 100

  • Name Allow_HTTP_HTTPS

  • Description Allow HTTP and HTTPS inbound traffic on ports 80 and 443

A screen shot shows the Add inbound security rule blade for AppsNSG. The details on the blade allow TCP traffic on ports 80 and 443 from any source to the VirtualNetwork subnet where the NSG is applied.

Figure 4-39 Adding an Inbound Rule to allow HTTP traffic

Once all the settings have been filled in, click the Add button to create the NSG rule.

Note Applying NSGs to Virtual Networks

The destination IP range refers to the VirtualNetwork service tag. This allows the NSG to be applied to any subnet in any VNet, and avoids coupling the NSG to a specific IP range. Traffic will only be permitted into those subnets where the NSG is applied.

Once the inbound rule has been saved, it will appear in the portal. Review your rule to ensure it has been created correctly.

Create an NSG using Azure PowerShell

To create an NSG and configure the rules by using Azure PowerShell, you need to use the New-AzNetworkSecurityRuleConfig and New-AzNetworkSecurityGroup PowerShell cmdlets together.

# Create array to contain NSG rules
$rules = @()

# Build a new inbound rule to allow TCP traffic on ports 80 and 443 to the subnet,
# and add it to the $rules array
$rules += New-AzNetworkSecurityRuleConfig -Name Allow_HTTP_HTTPS `
            -Description "Allow HTTP and HTTPS inbound on ports 80 and 443" `
            -Access Allow `
            -Protocol Tcp `
            -Direction Inbound `
            -Priority 100 `
            -SourceAddressPrefix * `
            -SourcePortRange * `
            -DestinationAddressPrefix VirtualNetwork `
            -DestinationPortRange 80,443

# Create an NSG, including the new inbound rule
$nsg = New-AzNetworkSecurityGroup -ResourceGroupName ExamRef-RG `
            -Location centralus `
            -Name AppsNSG `
            -SecurityRules $rules

Create an NSG using the Azure CLI

Creating an NSG using the Azure CLI is a multi-step process, just as it was with the portal and PowerShell. First, use the az network nsg create command to create the NSG. Once created, use the az network nsg rule create command to add each NSG rule.

# Create the NSG
az network nsg create --name AppsNSG --resource-group ExamRef-RG

# Create the NSG inbound rule allowing TCP traffic on ports 80 and 443
az network nsg rule create --name Allow_HTTP_HTTPS --nsg-name AppsNSG \
  --resource-group ExamRef-RG --direction Inbound --priority 100 \
  --access Allow --source-address-prefixes "*" --source-port-ranges "*" \
  --destination-address-prefixes "VirtualNetwork" --destination-port-ranges 80 443 \
  --description "Allow HTTP and HTTPS inbound on ports 80 and 443" --protocol TCP

Associate NSG to a subnet or network interface

NSGs are used to define the rules of how traffic is filtered for your IaaS deployments in Azure. We have seen how to create NSG resources and define the NSG rules. However, these NSGs by themselves are not effective until they are associated with a resource in Azure.

NSGs can be associated with the network interfaces (NICs) attached to VMs, or they can be associated with a subnet. Each NIC or subnet can only be associated with a single NSG. However, a single NSG can be associated with multiple NICs and/or subnets.

When an NSG is associated with a NIC, it applies to all IP configurations on that NIC. All inbound and outbound traffic to and from the NIC must be allowed by the NSG. It is possible to have a multi-NIC VM, and you can associate the same or different NSGs with each network interface.

Alternatively, NSGs can be associated with a subnet, in which case they apply to all traffic to and from resources in that subnet. This approach is useful when applying the same rule across multiple VMs.

Note How NSGs Are Applied

Microsoft does not recommend deploying NSGs to both subnets and to NICs within that subnet. However, this configuration is supported, and it’s important to understand how NSGs are applied when deployed in this way.

For inbound traffic, first the NSG at the subnet is applied, followed by the NSG at the NIC. Traffic only flows if both NSGs allow the traffic to pass.

For outbound traffic, the sequence is reversed. First the NSG at the NIC is applied, followed by the NSG at the subnet. Again, traffic only flows if both NSGs allow the traffic to pass.

In all cases, rules within each NSG are applied in priority order, with the first matching rule being effective.

Associating an NSG with a subnet using the Azure portal

We have seen how to create an NSG and how to add an inbound rule for HTTP and HTTPS traffic. Yet, this NSG has not been associated with any subnets or NICs, and so is not in effect.

The next task will be to associate it with the Apps subnet. You can use either the NSG blade or the virtual network subnet blade for this task; we’ll use the former.

In the NSG blade of the Azure portal, click the subnets link to show the list of subnets currently associated with the NSG (this should be empty at this stage). Click +Associate to open the Associate Subnet blade. The portal will ask for two configurations: the virtual network, and the subnet. Note that you can only select virtual networks in the same Azure region as the NSG. In Figure 4-40, the virtual network ExamRefVNET and subnet Apps has been selected.

A screen shot shows the Azure portal with the AppsNSG network security group being associated with the virtual network ExamRefVNET and the subnet Apps.

Figure 4-40 The ExamRefVNET virtual network and Apps subnet have been selected

After being saved, the rules of the NSG are enforced for all network interfaces associated with this subnet. This will allow inbound TCP traffic on ports 80 and 443 for all VMs connected to this subnet. Of course, you need a web server VM configured and listening on port 80 or 443 to respond.

Associating an NSG with a subnet using Azure PowerShell

To associate an NSG with a subnet using Azure PowerShell the subnet configuration must be updated to reference the NSG. This is achieved in three steps:

  1. Use Get-AzVirtualNetwork to retrieve a local object representing the VNet.

  2. Identify the desired subnet within the VNet object and update the subnet configuration to reference the NSG.

  3. Use Set-AzVirtualNetwork to save the updated VNet configuration back to Azure.

# Associate the NSG with the Apps subnet in the virtual network ExamRef-vnet
$vnet = Get-AzVirtualNetwork -Name ExamRef-vnet -ResourceGroupName ExamRef-RG

# Find the 'Apps' subnet
$subnet = $vnet.Subnets | Where-Object {$_.Name -eq "Apps"}

# Modify the 'Apps' subnet to reference the NSG. Assumes $nsg is already
# populated from the earlier script or via Get-AzNetworkSecurityGroup
$subnet.NetworkSecurityGroup = $nsg

Set-AzVirtualNetwork -VirtualNetwork $vnet

In step 2 above, you can also use the Set-AzVirtualNetworkSubnetConfig cmdlet to update the subnet configuration. However, this requires you to specify the AddressPrefix for the subnet, even if this has already been defined.
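
A sketch of that alternative follows. The address prefix must match whatever the Apps subnet already uses; 10.0.1.0/24 is assumed here purely for illustration.

# Alternative: update the subnet with Set-AzVirtualNetworkSubnetConfig
# (the subnet's existing AddressPrefix must be re-specified)
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet `
    -Name Apps `
    -AddressPrefix 10.0.1.0/24 `
    -NetworkSecurityGroup $nsg

Set-AzVirtualNetwork -VirtualNetwork $vnet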

Associating an NSG with a subnet using the Azure CLI

To associate an NSG with a subnet using the Azure CLI, use the az network vnet subnet update command.

# Associate the NSG with the ExamRef-vnet Apps Subnet
az network vnet subnet update --name Apps --vnet-name ExamRef-vnet \
  --resource-group ExamRef-RG --network-security-group AppsNSG

This example specifies the NSG by name, which assumes the NSG resource is in the same resource group as the VNet. If the NSG is in a different resource group, specify the full NSG resource ID instead.

To remove an NSG from a subnet, use the same command, specifying empty double quotes (“”) as the NSG name.

Identify required ports

When defining NSGs, it can be a challenge to identify all the network flows that an application requires. Certain flows, such as from a middle tier to a database tier, will be obvious to anyone familiar with the application architecture. Other flows, however, such as DNS lookups, connections to Active Directory, or checks against licensing or key management servers, are less obvious, but still critical.

Azure provides two very useful tools which can help you identify the network flows used by a running application. These are the Service Map and NSG flow logs.

Service Map

The Service Map is a Log Analytics solution. It helps you document the network flows from a running application. It works by installing two agents on each server: the Microsoft Monitoring Agent (MMA) and the Dependency Agent. Both agents are available for Windows and Linux. There is no requirement that the application be running in Azure; it can also be used for on-premises applications.

Service Map provides rich reporting of dependencies and network flows, including traffic volumes and internal application processes. Machines can be grouped to provide a logical view that reflects the application architecture. Of interest when configuring NSGs is the Failed Connections view (Figure 4-41), which shows network flows that cannot be completed. This may indicate a missing or mis-configured NSG rule.

A screen shot of the service map feature in the Azure portal shows links from processes running on a virtual machine to a number of network ports. The link from the backup.pl process to an IP address is highlighted, showing the failed connection.

Figure 4-41 A Service Map example, showing a failed connection

To get started with the Azure Service Map, from the Azure portal click Create A Resource and install the Service Map solution from the Azure Marketplace. As the Service Map is a Log Analytics solution, you will need to create a Log Analytics workspace, or reference an existing workspace.

The next step is to on-board your VMs to Log Analytics. This will also install the Microsoft Monitoring Agent (MMA) on each VM. Open the Log Analytics workspace, and under Workspace Data Sources, click Virtual Machines to see a list of virtual machines together with their on-boarding status, as shown in Figure 4-42.

A screen shot of the Azure portal shows a list of virtual machines, taken from the data sources within the Log Analytics workspace. One virtual machine is shown as connected to this workspace, while another virtual machine is not connected.

Figure 4-42 The virtual machines list under Log Analytics data sources

Click the virtual machine to on-board, which will open a separate blade, then click Connect, as shown in Figure 4-43. After a short delay to install the MMA, the virtual machine will be connected to the Log Analytics workspace.

A screen shot of the Azure portal shows the ExamRef-VM1 machine as Not Connected to the Log Analytics workspace. A connect button is enabled, allowing the VM to be connected.

Figure 4-43 Connecting a virtual machine to Log Analytics

Next, for the Service Map solution, an additional VM extension called the Dependency Agent must be installed. This can be installed in many ways, including a standalone installer, PowerShell DSC, or via a PowerShell script, as shown below:

# Deploy the Dependency agent to every VM in a resource group
$version = "9.4"
$ExtPublisher = "Microsoft.Azure.Monitoring.DependencyAgent"
$OsExtensionMap = @{ "Windows" = "DependencyAgentWindows"; "Linux" = "DependencyAgentLinux" }
$rmgroup = "ExamRef-RG"

Get-AzVM -ResourceGroupName $rmgroup |
ForEach-Object {
   ""
   $name = $_.Name
   $os = $_.StorageProfile.OsDisk.OsType
   $location = $_.Location
   $vmRmGroup = $_.ResourceGroupName
   "${name}: ${os} (${location})"
   Get-Date -Format o
   $ext = $OsExtensionMap.($os.ToString())
   $result = Set-AzVMExtension -ResourceGroupName $vmRmGroup -VMName $name `
       -Location $location `
       -Publisher $ExtPublisher -ExtensionType $ext -Name "DependencyAgent" `
       -TypeHandlerVersion $version
   $result.IsSuccessStatusCode
}

Deployment is now complete. It's best to allow several days for data to be gathered, since some network flows may only be used infrequently.

To view the Service Map, click Solutions within the Log Analytics workspace, then click on the Service Map solution. On the Overview blade, a summary tile should show the number of virtual machines on-boarded, as shown in Figure 4-44.

A screen shot of the Azure portal shows the overview blade of the Service Map solution in Azure Log Analytics. The blade includes a summary section, which includes a tile showing two VMs are reporting data to the service map. One machine is Windows, and one is Linux.

Figure 4-44 The Service Map solution overview

Finally, click the Service Map summary tile to open the Service Map. An example is shown in Figure 4-45. From here, you can browse processes and connections for each VM.

A screen shot of the Azure portal shows a simple service map for a Linux VM called ExamRef-VM1. The VM has connected to two servers on port 80 (www), seven servers on port 443 (https), and one IP address (168.63.129.16) on port 32526.

Figure 4-45 The Service Map for ExamRef-VM1

NSG flow logs

NSG flow logs are a form of Azure diagnostic logs. They record both allowed and denied network flows in and out of an NSG. By analyzing NSG flow logs, you can understand which traffic flows your application is using, and which flows are being requested by an application, but blocked by your NSG. You can then review if the NSG rules should be updated to allow or deny these flows.

An example flow log is shown next. It is written in JavaScript Object Notation (JSON), and starts with metadata such as the timestamp and resource ID of the NSG generating the log. Then, for each NSG rule, the log gives a list of FlowTuples that describe the flow itself.

{
   "time": "2018-05-01T15:00:02.1713710Z",
   "systemId": "<Id>",
   "category": "NetworkSecurityGroupFlowEvent",
   "resourceId":
"/SUBSCRIPTIONS/<Id>/RESOURCEGROUPS/<rg>/PROVIDERS/MICROSOFT.NETWORK/
NETWORKSECURITYGROUPS/MYVM-NSG",
   "operationName": "NetworkSecurityGroupFlowEvents",
   "properties": {
      "Version": 1,
      "flows": [
         {
              "rule": "UserRule_default-allow-rdp",
              "flows": [
                 {
                      "mac": "000D3A170C69",
                      "flowTuples": [
                          "1525186745,192.168.1.4,10.0.0.4,55960,3389,T,I,A"
                       ]
                 }
               ]
         }
      ]
   }
}

Each FlowTuple describes the application of that security rule to a particular network flow. The fields in the FlowTuple are described in Table 4-10. (An enhanced v2 flow log format is currently in Preview, giving additional information regarding the duration and data volume of each flow.)

Table 4-10 NSG Flow Log FlowTuple Fields

Example data What the data represents Explanation
1525186745 Time stamp The time stamp of when the flow occurred, in UNIX epoch format. In the previous example, the date converts to May 1, 2018 at 2:59:05 PM GMT.
192.168.1.4 Source IP address The source IP address that the flow originated from.
10.0.0.4 Destination IP address The destination IP address that the flow was destined to; in the previous example, the private IP address of the VM.
55960 Source port The source port that the flow originated from.
3389 Destination port The destination port that the flow was destined to. Since the traffic was destined to port 3389, the rule named UserRule_default-allow-rdp processed the flow.
T Protocol Whether the protocol of the flow was TCP (T) or UDP (U).
I Direction Whether the traffic was inbound (I) or outbound (O).
A Action Whether the traffic was allowed (A) or denied (D).
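
Since each FlowTuple is a simple comma-separated string, it can be split into named fields for ad-hoc analysis. The PowerShell sketch below parses the tuple from the example log, using the field order in Table 4-10.

# Parse a FlowTuple from an NSG flow log (field order per Table 4-10)
$tuple = "1525186745,192.168.1.4,10.0.0.4,55960,3389,T,I,A"
$f = $tuple -split ','

[pscustomobject]@{
    Time            = [DateTimeOffset]::FromUnixTimeSeconds([long]$f[0]).UtcDateTime
    SourceIP        = $f[1]
    DestinationIP   = $f[2]
    SourcePort      = [int]$f[3]
    DestinationPort = [int]$f[4]
    Protocol        = $f[5]   # T = TCP, U = UDP
    Direction       = $f[6]   # I = inbound, O = outbound
    Action          = $f[7]   # A = allowed, D = denied
}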

Analyzing large flow logs can be a substantial task. To make this easier, the Traffic Analytics solution for Log Analytics can be used to analyze the logs and summarize the data in a variety of easy-to-consume reports.

Before using NSG Flow Logs, your subscription must be registered to use the Microsoft.Insights resource provider. You can register the resource provider using the Azure portal, by clicking All Services, then Subscriptions. Choose your subscription to open the Subscription blade, then click Resource Providers. Find the Microsoft.Insights resource provider (as shown in Figure 4-46) and register it if necessary.

A screen shot of the Azure portal shows the resource provider pane within the Subscriptions blade. The list of resource providers includes Microsoft.Insights, which has the status Registered.

Figure 4-46 Registering the Microsoft.Insights Resource Provider

You can also register the resource provider using PowerShell:

# Register the Microsoft.Insights Resource Provider
Register-AzResourceProvider -ProviderNamespace Microsoft.Insights

Or, you can register the resource provider using the Azure CLI:

# Register the Microsoft.Insights Resource Provider
az provider register --namespace Microsoft.Insights

Another pre-requisite is to create a storage account to store the NSG Flow Logs. In the Azure Portal, click Create A Resource, then Storage, then Storage Account, and fill in the Create Storage Account blade, specifying the storage account name and other settings. For more details on creating storage accounts, see Chapter 2.

NSG Flow Logs are one of many network diagnostics features provided by Azure Network Watcher. In the Azure portal, select All Services, then enter Network Watcher in the search filter, then click Network Watcher in the results to open the Network Watcher blade. The Network Watcher service is enabled automatically in any region where you have deployed a virtual network.

Within the Network Watcher blade in the Azure portal, click NSG Flow Logs. A list of NSGs is shown, together with the flow log and traffic analytics status for each NSG (see Figure 4-47).

A screen shot of the Azure portal shows the NSG Flow Logs pane in Azure Network Watcher. A list of NSGs is shown, and for each NSG you can see the Flow Log status (Enabled or Disabled), and the Traffic Analytics status (Enabled or Disabled).

Figure 4-47 NSG Flow Logs view in Network Watcher, showing the Flow Log status for each NSG

To enable NSG Flow Logs, click on an NSG to open the Flow Logs Settings blade (Figure 4-48). Within this blade, you can enable NSG Flow Logs and select the storage account used to store the logs. You can also optionally enable Traffic Analytics, and select the Log Analytics workspace used by this solution.

A screen shot of the Azure portal shows the NSG Flow Settings blade. The Flow Logs have been enabled, and a storage account chosen to store the log data. Traffic Analytics has also been enabled, and a Log Analytics workspace selected to analyze the data.

Figure 4-48 NSG Flow Log settings
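
Flow logs can also be enabled programmatically. The following Azure PowerShell sketch assumes the default Network Watcher created for the Central US region, an existing storage account, and the AppsNSG created earlier; the resource names are illustrative.

# Enable NSG flow logs for AppsNSG, storing the logs in an existing storage account
$nw = Get-AzNetworkWatcher -ResourceGroupName NetworkWatcherRG `
    -Name NetworkWatcher_centralus
$nsg = Get-AzNetworkSecurityGroup -Name AppsNSG -ResourceGroupName ExamRef-RG
$storage = Get-AzStorageAccount -ResourceGroupName ExamRef-RG -Name examrefflowlogs

Set-AzNetworkWatcherConfigFlowLog -NetworkWatcher $nw `
    -TargetResourceId $nsg.Id `
    -StorageAccountId $storage.Id `
    -EnableFlowLog $true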

Evaluate effective security rules

When troubleshooting networking issues, it can be useful to get a deeper insight into exactly how NSGs are being applied. When NSG rules are defined using service tags and application security groups, instead of explicit IP addresses or prefixes, it sometimes isn’t clear whether a particular flow matches a particular rule, or not.

The Effective Security Rules view is designed to provide this insight. It allows you to drill into each NSG rule and see the exact list of source and destination IP prefixes that have been applied, regardless of how the NSG rule was defined.

To access the Effective Security Rules view, your virtual machine must be running. This is because the data is taken directly from the configuration of the running VM.

View effective security rules using the Azure portal

Using the Azure portal, open the Virtual Machine blade, then click Networking. This will show the networking settings, including the NSG rules (and includes a convenient link to add new rules). At the top of this blade, click Effective Security Rules (as highlighted in Figure 4-49) to open the Effective Security Rules blade.

A screen shot of the Azure portal shows the Networking blade for an Azure virtual machine. The blade includes a list of NSG rules. At the top of the blade, a link to Effective Security Rules is highlighted.

Figure 4-49 Azure Virtual Machine Networking blade

At first sight, the Effective Security Rules blade (Figure 4-50) looks very similar to the Networking blade shown previously. It shows the name of the network interface and associated NSGs, together with a list of NSG rules.

A screen shot of the Azure portal shows the Effective Security Rules blade for an Azure virtual machine. The blade shows the network interface, the associated NSGs, and the list of NSG rules.

Figure 4-50 The Effective Security Rules blade

The difference becomes clear when you click on one of the NSG rules. This opens an additional pane, showing the exact source and destination IP address prefixes used by that rule. For example, in Figure 4-51, you can see the exact list of 122 IP address prefixes used for outbound Internet traffic.

A screen shot of the Azure portal shows the Effective Security Rules blade. The AllowInternetOutBound default rule has been selected, opening a panel showing the full list of 122 IP address prefixes used by that rule.

Figure 4-51 Effective Security Rules showing Internet address prefixes

Having access to the exact list of address prefixes for each NSG rule allows you to investigate networking issues without fear of any ambiguity over how NSG rules are defined.

View effective security rules using Azure PowerShell

Effective security rules are also available using Azure PowerShell, using the Get-AzEffectiveNetworkSecurityGroup cmdlet. This cmdlet returns an object containing all effective rules for a given network interface, including NSGs at both the subnet and network interface level.

# Get effective security rules for a NIC
Get-AzEffectiveNetworkSecurityGroup -NetworkInterfaceName examref-vm1638 `
  -ResourceGroupName ExamRef-RG

View effective security rules using the Azure CLI

Effective security rules are also available using the Azure CLI, using the az network nic list-effective-nsg command.

When using the table output format, the results show a summary of the NSG rules but do not show the individual IP address prefixes. Using a verbose output format such as JSON gives the full output, including IP address prefixes.

# Get effective security rules for a NIC
az network nic list-effective-nsg --name examref-vm1638 --resource-group ExamRef-RG \
  --output json

Skill 4.5: Implement Azure load balancer

Azure Load Balancer is a fully-managed load-balancing service, used to distribute inbound traffic across a pool of backend servers running in an Azure virtual network. It can receive traffic on either Internet-facing or Intranet-facing endpoints, and supports both UDP and TCP traffic.

Azure Load Balancer operates at the transport layer (OSI layer 4), routing inbound and outbound connections at the packet level. It does not terminate TCP connections, and thus does not have visibility into application-level constructs. For example, it cannot support SSL offloading, URL path-based routing, or cookie-based session affinity (for these, see “Application Gateway” in Skill 3.1.)

Azure Load Balancer provides low latency and high throughput, scaling to millions of network flows. It also supports automatic failover between backend servers based on health probes, enabling high availability applications.

Configure internal load balancer, load balancing rules, and public load balancer

The deployment of Azure Load Balancer involves the coordinated configuration of several groups of settings. These settings work together to define the overall load balancer behavior.

Basic and Standard Load Balancer tiers

Azure Load Balancer is available in two pricing tiers, or SKUs: Basic and Standard. They offer different levels of scale, features, and pricing. Table 4-11 provides a comparison of the main feature differences between the Basic and Standard tiers.

Table 4-11 Comparison between Standard and Basic Load Balancer Tiers

Availability Zones
  Standard: Supports zone-specific or zone-redundant deployments, including cross-zone load balancing.
  Basic: Not supported.

Backend Pools
  Standard: Up to 1,000 servers; any mix of VMs, availability sets, and VM scale sets in the same VNet.
  Basic: Up to 100 servers; must be VMs in the same availability set or a single VM scale set.

Health Probes
  Standard: TCP, HTTP, HTTPS.
  Basic: TCP, HTTP.

Diagnostics
  Standard: Rich metrics via Azure Monitor, including byte and packet counters, health probe status, connection attempts, outbound connection health, and more.
  Basic: Azure Log Analytics for public load balancers only; alerts and backend pool health count.

Security
  Standard: Closed by default. Permitted inbound flows must be whitelisted using network security groups.
  Basic: Open by default. Flows can optionally be restricted using network security groups.

Outbound Connectivity
  Standard: Supports multiple outbound IP addresses, configurable via outbound rules.
  Basic: Single outbound IP, not configurable.

Other Features
  Standard: Supports HA Ports, TCP Reset on idle timeout, and faster management operations.
  Basic: Not available.

Pricing
  Standard: Based on the number of rules and data processed.
  Basic: Free.

SLA
  Standard: 99.99% for the data path, with two healthy VMs.
  Basic: None.

For a complete comparison of Basic and Standard Load Balancer tiers, see: https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview#skus.

Frontend IP configuration

Azure Load Balancer supports two modes: internal load balancer or public load balancer. In each case, the frontend IP configuration defines the endpoint upon which the load balancer receives incoming traffic.

  • Internal load balancer Used to load-balance traffic for Intranet-facing applications, or between application tiers. The frontend IP configuration references a subnet, and an IP address from that subnet is allocated using either dynamic or static assignment to the load balancer.

  • Public load balancer Used to load-balance traffic for Internet-facing applications. The frontend IP configuration references a separate public IP address resource, which is used to receive inbound traffic.

When used with IaaS VMs, each load balancer can support multiple frontend IP configurations. This allows it to receive traffic on multiple IP addresses, to load-balance traffic for multiple applications. All frontend configurations, however, must be of the same type, internal or public.

A public load balancer must be associated with a public IP address resource. If the load balancer uses the standard pricing tier, then the public IP address must also use the standard pricing tier. Standard-tier load balancers support both zone-specific and zone-redundant deployment options. The choice of deployment option is taken from the associated public IP address, rather than being specified explicitly in the load balancer properties.

Backend configuration

The backend pool defines the backend servers over which the load balancer will distribute incoming traffic.

When using a basic-tier load balancer, this backend pool must comprise either a single virtual machine, virtual machines in the same availability set, or a VM scale set (traffic will be distributed to all virtual machines in the VM scale set). You cannot distribute traffic to multiple virtual machines unless they are members of the same availability set or VM scale set.

With a standard-tier load-balancer, these restrictions are lifted. Backend pools can comprise a combination of virtual machines, across availability sets and VM scale sets.

Health Probes

Azure Load Balancer supports continual health probing of backend pool instances, to determine which instances are healthy and able to receive traffic. The load balancer will stop sending traffic flows to any backend pool instance that is determined to be unhealthy. Unhealthy instances continue to receive health probes, so the load balancer can resume sending traffic to that instance once it returns to a healthy state.

Azure Load Balancer supports three types of health probes:

  • TCP Probes attempt to initiate a connection by completing a three-way TCP handshake (SYN, SYN-ACK, ACK). If successful, the connection is then closed with a four-way handshake (FIN, ACK, FIN, ACK).

  • HTTP Probes issue an HTTP GET with a specified path.

  • HTTPS Probes are similar to HTTP probes, except that a TLS/SSL wrapper is used. HTTPS probes are only supported on the standard-tier load balancer.

All three probe types must also specify the probe port and the probe interval. The minimum probe interval is five seconds, and the minimum consecutive probe failure threshold is two probes. For HTTP and HTTPS probes, the probe path must also be given.

An endpoint is marked unhealthy if:

  • For HTTP or HTTPS probes only, the endpoint returns an HTTP status code other than 200 OK.

  • The probe endpoint closes the connection using a TCP reset.

  • The probe endpoint fails to respond during the timeout period, for a consecutive number of requests. The number of failed requests required to mark the endpoint unhealthy is configurable.

Configuring a dedicated health check page, such as /healthcheck.php, enables each backend server to implement custom application logic to decide whether it is healthy. Checking the availability of a backend database is an example of this.

When configuring network security groups (NSGs) for backend servers, it is important to allow both inbound traffic and probe traffic. Azure Load Balancer does not modify the source IP address of inbound traffic, so inbound traffic rules should be configured as if the load balancer was not in use. Whitelisting inbound probe traffic is achieved by allowing traffic originating from the AzureLoadBalancer service tag.

Load-balancing rules

Similar to Azure Application Gateway, load-balancing rules are used to connect the frontend IP configuration to the backend server pool, and to a health probe. Unlike App Gateway, there is no separate backend HTTP settings configuration; any additional HTTP settings are defined directly within the load-balancing rule itself. These include frontend and backend ports, idle timeout, protocol (TCP or UDP), and IP version (IPv4 or IPv6).

The load-balancing rule also allows you to configure how inbound connections are distributed between backend instances. There are three options:

  • None Traffic is distributed based on a 5-tuple hash of source IP, destination IP, source port, destination port, and protocol. This is the default option.

  • Source IP Traffic is distributed based on a 2-tuple hash of source and destination IP only.

  • Source IP and Protocol Traffic is distributed based on a 3-tuple hash of source IP, destination IP, and protocol.

Under the default option, new TCP sessions from a given client might be routed to a different backend endpoint, since the source port will have changed. By excluding the source port from the load-balancing algorithm, the Source IP and the Source IP and Protocol options provide a consistent mapping between a client and an individual backend server across separate connections. This is useful in applications where traffic between the client and server uses more than one connection or protocol, for example a media upload that uses a TCP session to control and monitor the upload together with UDP packets to upload the media data.

Inbound NAT Rules

You have seen how Azure Load Balancer can be configured to distribute inbound traffic across a pool of backend servers. Another common scenario is where a connection must be made to a specific backend server via the load balancer frontend. This is useful for gaining access to a specific server, such as when diagnosing a problem, without exposing a new endpoint on that server.

The direct connectivity to individual servers is achieved by creating a port mapping from the frontend to a specific backend server. This mapping is also known as an inbound NAT rule. Each inbound NAT rule specifies a frontend IP address, frontend port, protocol (TCP or UDP), backend server, and backend port. Once enabled, traffic received by the frontend IP on the designated frontend port is directed to the specified backend server and port.
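
As a sketch, the following creates an inbound NAT rule configuration mapping frontend port 50001 to RDP (port 3389) on a single backend instance. The rule and frontend names are assumptions, and the rule must still be added to the load balancer and associated with the target VM's NIC.

# Define an inbound NAT rule: frontend port 50001 -> backend port 3389 (RDP)
# $frontendIP is assumed to be an existing frontend IP configuration object
$natRule = New-AzLoadBalancerInboundNatRuleConfig -Name RDP-VM1 `
    -FrontendIpConfiguration $frontendIP `
    -Protocol Tcp `
    -FrontendPort 50001 `
    -BackendPort 3389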

Network Security Group configuration

The final step in configuring the Azure load balancer is to ensure that Network Security Groups (NSGs) are correctly configured. These NSGs can be associated with the subnet containing the backend virtual machines, or with their network interfaces. Two inbound security rules are required.

First, an inbound rule must permit traffic from the end users to the backend servers. Even though traffic passes through the load balancer, this does not change the source IP of the inbound traffic, hence the rule must reference the end user source IP address and port range.

A second inbound rule must permit traffic originating from the load balancer health probe. The IP addresses from which the health probes originate are defined in the AzureLoadBalancer service tag, which should be used to define the source IP address range for this rule.
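
A minimal sketch of these two inbound rules is shown below, assuming the application listens on port 80 and that end users connect from the Internet; the rule names and priorities are illustrative.

# Rule 1: allow end-user traffic to the backend servers on port 80
$allowWeb = New-AzNetworkSecurityRuleConfig -Name Allow_Inbound_HTTP `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 100 `
    -SourceAddressPrefix Internet -SourcePortRange * `
    -DestinationAddressPrefix VirtualNetwork -DestinationPortRange 80

# Rule 2: allow health probe traffic from the Azure load balancer
$allowProbe = New-AzNetworkSecurityRuleConfig -Name Allow_LB_Probes `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 110 `
    -SourceAddressPrefix AzureLoadBalancer -SourcePortRange * `
    -DestinationAddressPrefix VirtualNetwork -DestinationPortRange 80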

Note Load Balancers and Network Security Groups

Standard-tier load balancers use standard-tier public IP addresses, which are closed to inbound traffic by default. When using a standard-tier load balancer, traffic must therefore be whitelisted using NSGs. In contrast, with basic-tier load balancers traffic should be whitelisted using NSGs, but will also flow if NSGs are not used.

Create an Azure load balancer using the Azure Portal

To use the Azure load balancer, the administrator must first provision the resource, which includes the frontend IP configuration. After this step has been completed, you can create the backend pool, the health probes, and finally the load-balancing rule.

To create the load balancer in the portal, click +Create a resource, followed by Networking, then click Load Balancer. This will open the Create Load Balancer blade, as shown in Figure 4-52. Complete the blade as follows:

  • Name Provide a name for the load balancer resource.

  • Type Choose between Public or Internal.

  • SKU Select the pricing tier: Basic or Standard.

  • Public IP address (public load balancers only): Choose an existing public IP address resource, or create a new one. Standard-tier load balancers must use standard-tier public IP addresses.

  • Virtual network, subnet and IP assignment (internal load balancers only): Choose the virtual network and subnet from which the frontend IP address will be allocated, and choose between static and dynamic allocation.

  • Availability zone (standard-tier load balancers only): For public load balancers, the availability zone is configured as part of the public IP address configuration. For internal load balancers, it is explicitly specified.

  • Subscription, resource group, and location Specify as required.

A screenshot showing the Create A Load Balancer blade, with information such as the Name, Resource Group, and Location captured prior to provisioning.

Figure 4-52 Creating a Load Balancer with the Azure Portal

After the load balancer has been created, the next steps are to create the backend pool, the health probe, and finally the load-balancing rule.

To create a backend pool, open the load balancer blade in the Azure Portal, then click Backend Pools, followed by +Add. This opens the Add Backend Pool blade, as shown in Figure 4-53. Specify the backend pool name and, for a standard load balancer, select the virtual machines (and their IP addresses) to include in the backend pool. For basic load balancers, you will need to choose between adding an individual virtual machine, an availability set, or a VM scale set.

A screenshot shows the Azure Portal with the backend pool configured to use IPv4, and a VM from the examref-vnet.

Figure 4-53 Creating a backend pool and adding virtual machines, using a standard load balancer

To create a health probe, navigate to the load balancer blade and click Health Probes followed by +Add. This opens the Add Health Probe blade as shown in Figure 4-54. Specify the health probe name, together with the protocol, port, probe interval, and consecutive probe failures threshold.

A screen shot shows the Azure Portal creating a health probe in Azure Load Balancer. The probe name is healthProbe1, the protocol is TCP, the port is 80, the interval is 5, and the unhealthy threshold is 2.

Figure 4-54 Creating a health probe in Azure Load Balancer

The final step is to configure a load balancing rule, which links the frontend IP configuration to the backend pool, specifying the health probe and other load balancing settings. From the load balancer blade, click Load Balancing Rules, followed by +Add. This opens the Add Load Balancing Rule blade, as shown in Figure 4-55. Choose the frontend IP configuration, backend pool, and health probe selected earlier. For HTTP traffic, select TCP, specify port 80 for both the frontend and backend ports, select None for session persistence, and leave the idle timeout at the default (4 minute) value.

A screen shot shows the Azure Portal creating the load balancing rule. The rule references the backend pool and health probes created in the previous steps.

Figure 4-55 Creating a load balancing rule in Azure load balancer

Note Floating IP

The last setting, Floating IP (direct server return), is only recommended when load-balancing traffic for a SQL Server AlwaysOn Availability Group listener. For other scenarios, the Floating IP setting should be left disabled.

The final step is to ensure NSGs are configured to allow incoming traffic and health probe traffic. With this in place, if the VMs added to the backend pool are configured with a web server, you should be able to connect to the public IP address of the load balancer and see the webpage.

Create an Azure load balancer using PowerShell

Creating an Azure load balancer using PowerShell involves several steps. In the case of a public load balancer, the public IP address must be created. Next, the frontend IP configuration, backend pool, health probe, and load balancing rule are configured, each as a separate local object. The load balancer itself is created using these local objects to specify the load balancer configuration.

# Set Variables
$rgName = "ExamRef-RG"
$location = "West Europe"

# Create the Public IP
$publicIP = New-AzPublicIpAddress `
   -Name ExamRefLB-IP `
   -ResourceGroupName $rgName `
   -AllocationMethod Static `
   -Location $location

#Create Frontend IP Configuration
$frontendIP = New-AzLoadBalancerFrontendIpConfig `
   -Name ExamRefFrontEnd `
   -PublicIpAddress $publicIP

# Create Backend Pool
$beAddressPool = New-AzLoadBalancerBackendAddressPoolConfig `
   -Name ExamRefBackEndPool

#Create HTTP Probe
$healthProbe = New-AzLoadBalancerProbeConfig `
   -Name HealthProbe `
   -RequestPath '/' `
   -Protocol http `
   -Port 80 `
   -IntervalInSeconds 5 `
   -ProbeCount 2

#Create Load Balancer Rule
$lbrule = New-AzLoadBalancerRuleConfig `
   -Name ExamRefRuleHTTPPS `
   -FrontendIpConfiguration $frontendIP `
   -BackendAddressPool $beAddressPool `
   -Probe $healthProbe `
   -Protocol Tcp `
   -FrontendPort 80 `
   -BackendPort 80

#Create Load Balancer
$lb = New-AzLoadBalancer `
   -ResourceGroupName $rgName `
   -Name ExamRefLB `
   -Location $location `
   -FrontendIpConfiguration $frontendIP `
   -LoadBalancingRule $lbrule `
   -BackendAddressPool $beAddressPool `
   -Probe $healthProbe

Having created the load balancer, the next step is to add virtual machines to the backend pool. When using Azure PowerShell, the process is not to add a virtual machine or network interface to the backend pool, but rather the other way around, by adding a reference to the backend pool to the network interface of the VM. This is similar to the process used to add virtual machines to an App Gateway backend pool.

The following PowerShell script shows how to add a virtual machine to a load balancer backend pool. Note that when updating the IP configuration of the network interface, all existing IP configuration settings must be re-stated, otherwise they will be lost.

# Set Variables
$rgName = "ExamRef-RG"
# Add VM1 to the LB backend pool
# First, get the VM. Then get the NIC based on the VM ID
$vm1 = Get-AzVM -Name VM1 -ResourceGroupName $rgName
$vm1nic = Get-AzNetworkInterface -ResourceGroupName $rgName `
   | where {$_.VirtualMachine.Id -eq $vm1.Id}

# Get the LB and backend pool (skip if you have these already)
$lb = Get-AzLoadBalancer `
   -Name ExamRefLB `
   -ResourceGroupName $rgName

$beAddressPool = Get-AzLoadBalancerBackendAddressPoolConfig `
   -Name ExamRefBackEndPool `
   -LoadBalancer $lb

# Update the IP config of the NIC to reference the backend pool of the load balancer
# Note: This is NOT an incremental change. You need to specify ALL settings of the IP config.
#      Existing settings (such as public IP addresses) will be lost if not re-specified.
#      This example re-specifies the subnet only (this is mandatory)
$ipconfig = Get-AzNetworkInterfaceIpConfig `
   -Name ipconfig1 `
   -NetworkInterface $vm1nic

Set-AzNetworkInterfaceIpConfig `
   -Name ipconfig1 `
   -NetworkInterface $vm1nic `
   -SubnetId $ipconfig.Subnet.Id `
   -LoadBalancerBackendAddressPoolId $beAddressPool.Id

# Commit the change
Set-AzNetworkInterface -NetworkInterface $vm1nic

The final step is to ensure NSGs are configured to allow incoming traffic and health probe traffic. With this in place, if the VMs added to the backend pool are configured with a web server, you should be able to connect to the public IP address of the load balancer and see the webpage.

Create an Azure load balancer using the Azure CLI

The same configuration steps are required when creating a load balancer using the Azure CLI as when creating load balancers in the portal or with PowerShell. First, for public load balancers only, create the public IP address that the load balancer will use. Next, create the load balancer itself, using the az network lb create command. This step also creates the frontend IP configuration and backend pool. The load balancer is then updated incrementally to add the health probe and load balancing rule.

# Create a Public IP Address
az network public-ip create --name ExamRefLB-IP --resource-group ExamRef-RG \
    --allocation-method Static --location westeurope

# Create Load Balancer
az network lb create --name ExamRefLB --resource-group ExamRef-RG \
    --location westeurope --backend-pool-name ExamRefBackEndPool \
    --frontend-ip-name ExamRefFrontEnd --public-ip-address ExamRefLB-IP

# Create HTTP Probe
az network lb probe create --name HealthProbe --lb-name ExamRefLB \
    --resource-group ExamRef-RG --protocol http --port 80 --path / \
    --interval 5 --threshold 2

# Create Load Balancer Rule
az network lb rule create --name ExamRefRule --lb-name ExamRefLB \
    --resource-group ExamRef-RG --protocol Tcp --frontend-port 80 \
    --backend-port 80 --frontend-ip-name ExamRefFrontEnd \
    --backend-pool-name ExamRefBackEndPool --probe-name HealthProbe

Having created the load balancer, the next step is to add virtual machines to the backend pool. As with Azure PowerShell, the Azure CLI implements this by adding a reference to the backend pool to the network interface of the VM. This is more straightforward with the Azure CLI than it is with PowerShell, since incremental update of the network interface is supported. Note that the name (or resource ID) of the network interface attached to the VM is required.

# Add the Web Servers to the Backend Pool
az network nic ip-config address-pool add --address-pool ExamRefBackEndPool \
    --lb-name ExamRefLB --resource-group ExamRef-RG --nic-name vm1-nic \
    --ip-config-name ipconfig1

The final step is to ensure that NSGs are configured to allow incoming traffic and health probe traffic. With this in place, if the VMs added to the backend pool are configured with a web server, you should be able to connect to the public IP address of the load balancer and see the webpage.

Troubleshoot load balancing

Both basic- and standard-tier load balancers support diagnostic logs and metrics to enable common troubleshooting scenarios. The available diagnostics differ between the two tiers.

Basic-tier load balancer metrics and diagnostics

The basic tier load balancer provides the following diagnostic logs:

  • Alert event logs These logs record load balancer alert events. They are written whenever a load balancer alert is raised, at most once every five minutes.

  • Health probe logs These logs allow you to investigate the status of health probes for backend servers. They are written whenever there is a change in health probe status.

  • Metrics Used to track common load balancer metrics.

To enable basic-tier load-balancer logs, open the load balancer blade in the Azure Portal, select Diagnostic Logs and click Turn On Diagnostics to open the diagnostics configuration blade, shown in Figure 4-56.

A screen shot showing the load balancer diagnostics settings blade. Options are available to send logs to a storage account, event hub, or Log Analytics workspace. The available logs are LoadBalancerAlertEvent, LoadBalancerProbeHealthStatus, and AllMetrics.

Figure 4-56 Configuring diagnostics logs in a basic-tier load-balancer

Having configured the diagnostics logs, they can be downloaded for offline analysis or analyzed using Log Analytics.
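
Diagnostic settings can also be configured programmatically. The following Azure PowerShell sketch is hedged: it assumes the Set-AzDiagnosticSetting cmdlet from the Az.Monitor module and the log category names shown in Figure 4-56, and the storage account name is illustrative.

# Send basic-tier load balancer logs to a storage account (illustrative names)
$lb = Get-AzLoadBalancer -Name ExamRefLB -ResourceGroupName ExamRef-RG
$sa = Get-AzStorageAccount -Name examrefstorage -ResourceGroupName ExamRef-RG

Set-AzDiagnosticSetting `
   -ResourceId $lb.Id `
   -StorageAccountId $sa.Id `
   -Category LoadBalancerAlertEvent, LoadBalancerProbeHealthStatus `
   -Enabled $true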

Standard-tier load balancer metrics and diagnostics

The standard load balancer also supports diagnostics, via metrics routed automatically to Azure Monitor. Available metrics include byte count, packet count, health probe status, SYN count (for new connections), and more. Azure Monitor supports charting and alerting based on these metrics. In addition, they are exposed as multi-dimensional metrics, meaning that charts and alerts can be built using filtered views, for example filtered by protocol, source IP, or port. An example chart is shown in Figure 4-57.

A screen shot showing a chart taken from the Azure Monitor metrics for a standard-tier Azure load balancer. The chart shows the SYN count fluctuating between 0 and 110 over a 30-minute period.

Figure 4-57 Azure Monitor chart showing standard load balancer SYN count
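
These metrics can also be retrieved programmatically. The following Azure PowerShell sketch uses Get-AzMetric from the Az.Monitor module; the metric name SYNCount is an assumption matching the chart above, and the load balancer name is illustrative.

# Retrieve the SYN count metric for a standard load balancer over the last hour
$lb = Get-AzLoadBalancer -Name ExamRefLB -ResourceGroupName ExamRef-RG

Get-AzMetric `
   -ResourceId $lb.Id `
   -MetricName "SYNCount" `
   -StartTime (Get-Date).AddHours(-1) `
   -EndTime (Get-Date) `
   -TimeGrain 00:01:00 `
   -AggregationType Total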

Skill 4.6: Monitor and troubleshoot virtual networking

Azure offers numerous features and services to enable you to monitor your network and investigate network issues. These features provide a wide range of diagnostic and alerting capabilities. A good understanding of the range of features available will enable you to investigate network issues quickly and effectively.

Monitor on-premises connectivity

Azure Network Performance Monitor (NPM) is a network monitoring solution for hybrid networks. It enables you to monitor network connectivity and performance between various points in your network, both in Azure and on premises. It can provide reports of network performance and raise alerts when network issues are detected.

NPM provides three services:

  • Performance Monitor Used to monitor connectivity between various points in your network, both in Azure and on-premises. Agents on nodes at each end of the connection gather data on connectivity, packet loss, latency, and available network paths.

  • Service Connectivity Monitor Used to monitor outbound connectivity from nodes on your network to any external service with an open TCP port, such as web sites, applications, or databases. This measures latency, response time, and packet loss, enabling you to determine whether poor performance is caused by network or application issues.

  • ExpressRoute Used to monitor end-to-end connectivity between your on-premises network and Azure, over ExpressRoute. This service can auto-discover your ExpressRoute network topology. It can then track your ExpressRoute bandwidth utilization, packet loss, and latency. These are measured at the circuit, peering, and Azure virtual network level.

NPM also provides a dashboard giving an overview of the network status, as well as detailed per-service charts and reports.

Deploying Network Performance Monitor

NPM is a Log Analytics solution. Log Analytics agents are installed on each node used to measure network connectivity and performance. These agents perform synthetic transactions over either TCP or ICMP to measure network performance. Data gathered from these agents is channeled into a Log Analytics workspace. NPM analyzes this data to provide both reporting and alerting.

NPM can be installed from the Azure Marketplace (from the Azure Portal, click +Create A Resource and search for Network Performance Monitor). It is also available from Network Watcher, an Azure service that acts as a hub for a wide range of network monitoring and diagnostic tools. You will be required to create a Log Analytics workspace or select an existing workspace to use. Be sure to deploy your Log Analytics workspace to one of the regions supported by Network Performance Monitor, as listed at: https://docs.microsoft.com/azure/azure-monitor/insights/network-performance-monitor#supported-regions.

Having deployed NPM, the monitoring agents must be installed and configured. The choice of where to install the agents depends on your network topology and which parts of your network you plan to measure. To monitor a given network link, agents should be installed on servers at both ends of that link. To monitor connections between subnets, an agent on at least one server in each subnet is required.

To install the NPM monitoring agent on an Azure virtual machine, simply open the Log Analytics workspace, and click Virtual Machines (under Workspace Data Sources) to see a list of virtual machines and the status of their Log Analytics connection (Figure 4-58). From there, click on a VM and click Connect to add the VM to Log Analytics. After a few minutes, refresh the list of virtual machines to see the updated list.

A screen shot from the Azure Portal shows the Azure Virtual Machines list in a Log Analytics Workspace. Several machines are listed, with log analytics connection status Connected or Not Connected.

Figure 4-58 Connecting Azure Virtual Machines to a Log Analytics workspace
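
Connecting an Azure VM to the workspace installs the Log Analytics (Microsoft Monitoring Agent) VM extension. The following Azure PowerShell sketch shows an equivalent manual installation and is hedged: the publisher, extension type, version, and the workspace ID/key settings are assumptions for the Windows variant of the agent, and the placeholder values must be replaced with values from your workspace.

# Install the Log Analytics agent extension on a Windows VM (illustrative values)
$workspaceId  = "<log-analytics-workspace-id>"
$workspaceKey = "<log-analytics-workspace-key>"

Set-AzVMExtension `
   -ResourceGroupName ExamRef-RG `
   -Location "West Europe" `
   -VMName VM1 `
   -Name MicrosoftMonitoringAgent `
   -Publisher Microsoft.EnterpriseCloud.Monitoring `
   -ExtensionType MicrosoftMonitoringAgent `
   -TypeHandlerVersion 1.0 `
   -Settings @{ workspaceId = $workspaceId } `
   -ProtectedSettings @{ workspaceKey = $workspaceKey }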

To connect on-premises servers with Log Analytics, you need to install the Log Analytics agent. Open the Log Analytics Workspace, then click View Solutions under Configure Monitoring Solutions. Select the NPM solution and click the Solution Requires Additional Configuration tile, as shown in Figure 4-59.

A screen shot from the Azure Portal shows a tile reading Network Performance Monitor, Solution Requires Additional Configuration.

Figure 4-59 Solution Requires Additional Configuration tile in Network Performance Monitor

Here you will find options to download and install the Log Analytics agent, the workspace IDs and keys needed to configure the agent, and a PowerShell script to open the necessary firewall ports, as shown in Figure 4-60.

A screen shot from the Azure Portal shows the Network Performance Monitor configuration blade. This includes links to download the agent (32-bit or 64-bit), workspace ID, primary and secondary keys, and a link to download a PowerShell script to open the necessary firewall ports.

Figure 4-60 Network Performance Monitor Configuration

Having installed and configured the agents, ensure that Network Security Groups and on-premises firewalls are configured to allow the agents to communicate. The default port used is TCP 8084.

Finally, on the left-nav, complete the Network, Subnetworks, and Nodes sections to describe your network topology, as shown in Figure 4-61. This allows you to define the networks and subnets in your network and identify which monitoring nodes sit within each network segment.

A screen shot from the Azure Portal shows the Subnet configuration within Network Performance Monitor. This shows a subnetwork (10.0.1.0/24) with a single monitored node (10.0.1.4).

Figure 4-61 Network Performance Monitor Network and Subnet Configuration

Performance Monitor

Performance Monitor enables you to monitor packet loss and latency between your endpoints, both in Azure and on-premises. A VM or server running the Log Analytics agent is required at both ends of each monitored connection.

To configure Performance Monitor, first complete the Performance Monitor tab on the Setup section of the Network Performance Monitor Configuration blade. This allows you to specify TCP or ICMP-based monitoring.

Next, use the Performance Monitor section to define your monitoring rules. Each rule requires you to specify the source and destination networks, and the network protocol. You can also choose whether to enable health monitoring events based on defined criteria, and whether to raise alerts based on those events. An example Performance Monitor rule is shown in Figure 4-62.

A screen shot from the Azure Portal shows an example Performance Monitor rule configuration. The rule is enabled, and monitors connectivity between All Networks, All Subnetworks, All Networks, and All Subnetworks. Health monitoring is enabled, based on auto-detecting sudden changes in packet loss or latency. Email alerts are enabled.

Figure 4-62 Example Performance Monitor Rule Configuration

Once configured, Performance Monitor will continually gather data from the Log Analytics agents, enabling both reporting and alerts. Figure 4-63 gives an example of a packet loss and latency chart from Performance Monitor.

A screen shot from the Azure Portal shows two charts, one for packet loss and one for latency. The source subnet is 10.0.0.0/24 and the destination subnet is 10.0.1.0/24.

Figure 4-63 Example Performance Monitor Packet Loss and Latency Report

Service Connectivity Monitor

Service Connectivity Monitor is used to test outbound connectivity from your network to any external service with an open TCP port, such as a website, application, or database. It supports pre-configured endpoints for Microsoft Office 365 and Dynamics 365. You can also configure custom tests to arbitrary endpoints.

To use the pre-configured endpoints, select the Service Connectivity Monitor tab from the setup section of the Network Performance Monitor Configuration blade, as shown in Figure 4-64. Select the services to monitor, click +Add Agents to choose which of your network nodes should monitor these services, then click Save And Continue.

A screen shot from the Azure Portal shows the Service Connectivity Monitor tab of the Setup section of the Network Performance Monitor Configuration blade. Four Office365 services have been selected.

Figure 4-64 Configuring Service Connectivity Monitor for Microsoft Services

Now move to the Service Connectivity Monitor section, on the left-nav. This shows the existing tests and allows you to configure custom tests. Figure 4-65 shows a custom test to check the availability of the Azure management portal, at https://portal.azure.com.

A screen shot from the Azure Portal shows a custom test in the Service Connectivity Monitor section, configured to check the availability of the Azure management portal at https://portal.azure.com.

Figure 4-65 Configuring a custom test in Service Connectivity Monitor

Once configured, Service Connectivity Monitor will generate packet loss and network performance charts (showing latency and response times) for each tested endpoint. Figure 4-66 provides an example chart.

A screen shot from the Azure Portal shows packet loss and network performance charts from Service Connectivity Monitor. Packet loss is 0%. The network performance chart shows both latency (around 160ms) and response time (between 320ms and 450ms).

Figure 4-66 Packet loss and network performance charts from Service Connectivity Monitor

ExpressRoute Monitor

ExpressRoute Monitor allows you to monitor end-to-end network connectivity and performance between on-premises and Azure endpoints over ExpressRoute connections. It can auto-detect ExpressRoute circuits and your network topology, and track bandwidth utilization, packet loss and network latency. Reports are available for each ExpressRoute circuit or peering, and also for each Azure virtual network using ExpressRoute.

To configure ExpressRoute Monitor, use the ExpressRoute Monitor section of the Network Performance Monitor Configuration blade (see Figure 4-67). First, ExpressRoute resources (such as gateways and circuits) are identified in your subscriptions. Next, the monitoring for each peering can be enabled, configuring health events and choosing monitoring agents.

A screen shot from the Azure Portal shows the ExpressRoute Monitor configuration, at the Discovery stage. A number of subscriptions are listed with a button to Discover ExpressRoute Resources.

Figure 4-67 Configuring ExpressRoute Monitor

Once configured, it takes 30-60 minutes for the first ExpressRoute reporting data to become available. Several reports and charts are available, including bandwidth utilization, latency, and packet loss for each ExpressRoute circuit and for each peering. A network topology view shows network connections and status (Figure 4-68). Log Analytics alerts can be configured for a wide range of events, such as high latency, packet drops, high and low utilization, and more.

A screen shot from the Azure Portal shows the ExpressRoute Monitor network topology view. The diagram shows two independent ExpressRoute pathways between two endpoints.

Figure 4-68 Network Topology view in ExpressRoute Monitor

Use network resource monitoring

Earlier in this chapter, you saw how Application Gateways and Azure load balancers emit diagnostic logs, which can be used for detailed insight into the status of each service. These logs can be captured in a storage account, streamed to an EventHub, or integrated with an Azure Log Analytics workspace, which enables customized queries and log-based alerting. In the case of App Gateway, you also saw how the Azure Application Gateway Analytics Log Analytics solution provides a pre-configured dashboard and charts showing App Gateway status.

Diagnostic logs are also available for a number of other networking resources, including Traffic Manager, Azure DNS, and Network Security Groups. In each case, they give deeper insight into the status and operation of each service, as well as supporting log-based alerts through Log Analytics. In the case of NSGs, the Traffic Analytics Log Analytics solution provides detailed reports giving insight into the successful and blocked traffic flows into and out of your Azure services.

Use Network Watcher

Network Watcher provides a central hub for a wide range of network monitoring and diagnostic tools. These tools are valuable across a wide range of network troubleshooting scenarios, and also provide access to other tools listed in this skill section, such as the Network Performance Monitor and Connection Monitor.

Deploying Network Watcher

Network Watcher is enabled as a single instance per Azure region. It is not deployed like a conventional Azure resource, although it does appear as a resource in a resource group.

Any subscription containing a virtual network resource will automatically have Network Watcher enabled. Otherwise, it can be enabled via the Azure Portal, under All Services, Network Watcher, which also shows the Network Watcher status per region. It can also be deployed via the command line (using the New-AzNetworkWatcher cmdlet or the az network watcher configure command), which unlike the Azure Portal gives control over the resource group used.

Some of the Network Watcher tools require the Network Watcher VM extension to be installed on the VM being monitored. This extension is available for both Windows and Linux VMs. It is installed automatically when using Network Watcher via the Azure Portal.

The Network Watcher VM extension can also be installed via Azure PowerShell:

# Install Network Watcher VM extension
Set-AzVMExtension `
   -ResourceGroupName ExamRef-RG `
   -Location "West Europe" `
   -VMName VM1 `
   -Name networkWatcherAgent `
   -Publisher Microsoft.Azure.NetworkWatcher `
   -Type NetworkWatcherAgentWindows `
   -TypeHandlerVersion 1.4

It can also be installed via the Azure CLI:

# Install Network Watcher VM extension
az vm extension set --vm-name VM1 --resource-group ExamRef-RG \
    --publisher Microsoft.Azure.NetworkWatcher --version 1.4 \
    --name NetworkWatcherAgentWindows --extension-instance-name NetworkWatcherAgent

IP Flow Verify

The IP Flow Verify tool provides a quick and easy way to test if a given network flow will be allowed into or out of an Azure virtual machine. It will report whether the requested traffic is allowed or blocked, and in the latter case which NSG rule is blocking the flow. It is a useful tool for verifying that NSGs are correctly configured.

It works by simulating the requested packet flow through the NSGs applied to the VM. For this reason, the VM must be in a running state.

To use IP Flow Verify via the Azure Portal, open Network Watcher and click IP Flow Verify. Select the VM and NIC to verify, and specify the protocol, direction, and remote and local IP addresses and ports, as shown in Figure 4-69.

A screen shot from the Azure Portal shows the Network Watcher IP Flow Verify blade. An Azure VM is selected, together with protocol (TCP), direction (outbound) and local and remote IP addresses and ports. The results show the flow is allowed, under the security rule AllowVnetOutBound.

Figure 4-69 Using Network Watcher IP Flow Verify

IP Flow Verify can also be used from PowerShell, using the Test-AzNetworkWatcherIPFlow cmdlet, or from the Azure CLI, using the az network watcher test-ip-flow command.
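
As a minimal PowerShell sketch, the following verifies whether an outbound flow from a VM to a remote HTTPS endpoint is allowed. The Network Watcher name and resource group, the IP addresses, and the ports shown are illustrative.

# Get the Network Watcher and the target VM (names are illustrative)
$networkWatcher = Get-AzNetworkWatcher `
    -Name NetworkWatcher_westeurope `
    -ResourceGroupName NetworkWatcherRG
$vm = Get-AzVM -Name VM1 -ResourceGroupName ExamRef-RG

# Test an outbound TCP flow from the VM to a remote endpoint on port 443
Test-AzNetworkWatcherIPFlow `
    -NetworkWatcher $networkWatcher `
    -TargetVirtualMachineId $vm.Id `
    -Direction Outbound `
    -Protocol TCP `
    -LocalIPAddress 10.0.0.4 `
    -LocalPort 60000 `
    -RemoteIPAddress 13.107.21.200 `
    -RemotePort 443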

Next Hop

The Next Hop tool provides a useful way to understand how a VM’s outbound traffic is being directed. For a given outbound flow, it shows the next hop IP address and type, and the route table ID of any user-defined route in effect. Possible next hop types are:

  • Internet

  • VirtualAppliance

  • VirtualNetworkGateway

  • VirtualNetwork

  • VirtualNetworkPeering

  • VirtualNetworkServiceEndpoint

  • None (traffic to this destination is dropped)

To use Next Hop via the Azure Portal, open Network Watcher and click Next Hop. Select the source VM, NIC and IP address, and the destination address, as shown in Figure 4-70. The destination can be any IP address, either on the internal network or the Internet.

A screen shot from the Azure Portal shows the Network Watcher Next Hop blade. An Azure VM is selected, together with a destination IP address. The next hop results show the next hop type as Internet and the route table ID as System Route.

Figure 4-70 Using Network Watcher Next Hop

Next Hop can also be used from PowerShell using the Get-AzNetworkWatcherNextHop cmdlet, or the Azure CLI using the az network watcher show-next-hop command.
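
A minimal PowerShell sketch follows, assuming the $networkWatcher and $vm objects retrieved as in the IP Flow Verify example; the source and destination addresses are illustrative.

# Determine the next hop for traffic from the VM to an Internet address
Get-AzNetworkWatcherNextHop `
    -NetworkWatcher $networkWatcher `
    -TargetVirtualMachineId $vm.Id `
    -SourceIPAddress 10.0.0.4 `
    -DestinationIPAddress 13.107.21.200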

Packet Captures

The Packet Capture tool allows you to capture network packets entering or leaving your virtual machines. It is a powerful tool for deep network diagnostics.

You can capture all packets, or a filtered subset based on protocol and local and remote IP addresses and ports. You can also specify the maximum packet and overall capture size, and a time limit (captures start almost immediately once configured).

Packet captures are stored as a file on the VM or in an Azure storage account, in which case NSGs must allow access from the VM to Azure storage. These captures are in a standard format, and can be analyzed off-line using common tools such as WireShark or Microsoft Message Analyzer.

To use the Packet Capture tool, open Network Watcher and click on Packet Capture, then click +Add. Select the VM, give the capture a name, and specify the destination, packet and total size, time limit, and filters. An example is shown in Figure 4-71.

A screen shot from the Azure Portal shows the Network Watcher Add Packet Capture blade. An Azure VM is selected, together with packet capture settings: Storage Account, Maximum Bytes Per Packet And Per Sessions, Time Limit, and a TCP filter specifying the local IP address and local ports.

Figure 4-71 Using Network Watcher Packet Capture

Packet Capture can also be used from PowerShell. The following script shows how to start a packet capture, check packet capture status, and stop a packet capture.

# Get the Network Watcher resource
$nw = Get-AzResource | Where {$_.ResourceType `
    -eq "Microsoft.Network/networkWatchers" -and $_.Location -eq "WestEurope" }

$networkWatcher = Get-AzNetworkWatcher `
    -Name $nw.Name `
    -ResourceGroupName $nw.ResourceGroupName

# Get the storage account to store the capture in
$storageAccount = Get-AzStorageAccount `
    -Name examrefstorage `
    -ResourceGroupName ExamRef-RG

# Set up filters
$filter1 = New-AzPacketCaptureFilterConfig `
    -Protocol TCP `
    -RemoteIPAddress "1.1.1.1-255.255.255.255" `
    -LocalIPAddress "10.0.0.3" `
    -LocalPort "1-65535" `
    -RemotePort "20;80;443"

$filter2 = New-AzPacketCaptureFilterConfig `
    -Protocol UDP

# Get the VM
$vm = Get-AzVM -Name VM1 -ResourceGroupName ExamRef-RG

# Start the packet capture
New-AzNetworkWatcherPacketCapture `
    -NetworkWatcher $networkWatcher `
    -TargetVirtualMachineId $vm.Id `
    -PacketCaptureName "PacketCaptureTest" `
    -StorageAccountId $storageAccount.id `
    -TimeLimitInSeconds 60 `
    -Filter $filter1, $filter2

# Check packet capture status
Get-AzNetworkWatcherPacketCapture `
    -NetworkWatcher $networkWatcher `
    -PacketCaptureName "PacketCaptureTest"

# Stop packet capture
Stop-AzNetworkWatcherPacketCapture `
    -NetworkWatcher $networkWatcher `
    -PacketCaptureName "PacketCaptureTest"

You can also use Packet Capture from the Azure CLI, as shown in the following script:

# Start packet capture
az network watcher packet-capture create --name PacketCaptureTest2 \
    --resource-group ExamRef-RG --vm VM1 --time-limit 300 \
    --storage-account examrefstorage \
    --filters '[ { "protocol": "TCP", "remoteIPAddress": "1.1.1.1-255.255.255.255", "localIPAddress": "10.0.0.3", "remotePort": "20" } ]'

# Get packet capture status
az network watcher packet-capture show-status --name PacketCaptureTest2 \
    --location WestEurope

# Stop packet capture
az network watcher packet-capture stop --name PacketCaptureTest2 \
    --location WestEurope

Network Topology

The Network Topology view in Network Watcher provides a diagrammatic view of the resources in your virtual network. It is not a diagnostic or alerting tool; rather, it is a quick and easy way to review your network resources and manually check for misconfiguration.

A limitation of the tool is that it only shows the topology within a single virtual network. All common network resource types are supported, although for Application Gateways, only the backend pool connected to the network interface is shown.

To use Network Topology via the Azure Portal, open Network Watcher and click Topology. Select the resource group and virtual network, and the topology will be shown.

An example topology is given in Figure 4-72. In this example, you can see that the NSG has been misconfigured, since it is configured on VM1, but not on VM2. An NSG should be added to VM2 or moved to the subnet level.

A screen shot from the Azure Portal shows the Network Watcher Network Topology. The topology shows two VMs, VM1 and VM2, in the same subnet, connected to a load balancer. An NSG is assigned only to the NIC of VM1.

Figure 4-72 Using Network Watcher Network Topology

The underlying topology data can be downloaded in JSON format via Azure PowerShell or the Azure CLI, using the Get-AzNetworkWatcherTopology cmdlet or the az network watcher show-topology command, respectively.
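
A minimal PowerShell sketch follows, assuming the $networkWatcher object retrieved earlier; the resource group name is illustrative.

# Retrieve the topology for a resource group and convert it to JSON
$topology = Get-AzNetworkWatcherTopology `
    -NetworkWatcher $networkWatcher `
    -TargetResourceGroupName ExamRef-RG

$topology | ConvertTo-Json -Depth 10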

Troubleshoot external networking

We have already seen how the Network Performance Monitor provides a range of powerful features to monitor and diagnose issues across both Azure and on-premises networks, including detailed analytics for ExpressRoute connections.

Another pair of useful tools to investigate issues with external networks are the Connection Monitor and Connection Troubleshoot tools in Network Watcher. These are discussed in the next section: “Troubleshoot virtual network connectivity.”

In this section we discuss another feature of Network Watcher, VPN Troubleshoot, which is designed specifically to diagnose problems with VPN connections.

Do not forget that for simple validation that a VPN connection is working, it is also worthwhile trying to connect between VMs at either end of the VPN tunnel, using standard tools such as tcping.

VPN Troubleshoot

The VPN Troubleshoot feature in Network Watcher provides automated diagnostics of Azure VPN gateways and connections. The results provide a detailed report on gateway and connection health, with pointers to common issues that enable informed remediation.

VPN Troubleshoot only supports route-based VPN gateways (not policy-based gateways or ExpressRoute gateways). It supports both IPsec Site-to-Site VPN connections and VNet-to-VNet connections; it does not support ExpressRoute connections or Point-to-Site connections.

During the troubleshooting process, logs are written to a storage account. This account must be created before starting the troubleshooting process.

To use VPN Troubleshoot via the Azure Portal, first open Network Watcher, followed by clicking VPN Troubleshoot. Select the storage container for the troubleshooting logs, then select which VPN resources to troubleshoot, as shown in Figure 4-73. Finally, click Start Troubleshooting.

A screen shot from the Azure Portal shows the Network Watcher VPN Troubleshoot feature. Two VPN gateways are listed, of which one has been selected. A storage account has also been selected. A button at the top of the page reads Start Troubleshooting.

Figure 4-73 Using Network Watcher VPN Troubleshoot

The troubleshooting process takes a few minutes to run. Once complete, the results will be shown at the bottom of the page, as shown in Figure 4-74.

A screenshot from the Azure Portal shows the Network Watcher VPN Troubleshoot output. In this case, it shows the gateway is running normally.

Figure 4-74 Network Watcher VPN Troubleshoot Results

VPN Troubleshoot can also be accessed via PowerShell, as demonstrated in the following script. In this example, troubleshooting is run on a VPN connection. It can also be run on a VPN gateway.

# Get the Network Watcher resource
$nw = Get-AzResource | Where {$_.ResourceType `
    -eq "Microsoft.Network/networkWatchers" -and $_.Location -eq "WestEurope" }

$networkWatcher = Get-AzNetworkWatcher `
    -Name $nw.Name `
    -ResourceGroupName $nw.ResourceGroupName

# Create a storage account and container for logs
# (You could also use an existing account/container)
$sa = New-AzStorageAccount `
    -Name examrefstorage `
    -SkuName Standard_LRS `
    -ResourceGroupName ExamRef-RG `
    -Location "WestEurope"

Set-AzCurrentStorageAccount `
    -ResourceGroupName $sa.ResourceGroupName `
    -Name $sa.StorageAccountName

$sc = New-AzStorageContainer -Name logs

# Get the connection to troubleshoot
$connection = Get-AzVirtualNetworkGatewayConnection `
    -Name Vnet1-to-Vnet2 `
    -ResourceGroupName ExamRef-RG1

# Start VPN Troubleshoot
Start-AzNetworkWatcherResourceTroubleshooting `
    -NetworkWatcher $networkWatcher `
    -TargetResourceId $connection.Id `
    -StorageId $sa.Id `
    -StoragePath "$($sa.PrimaryEndpoints.Blob)$($sc.name)"

VPN Troubleshoot can also be accessed via the Azure CLI. In these example commands, values in { } should be replaced with the output of earlier commands.

# Create a storage account and container for logs
# (You could also use an existing account/container)
az storage account create --name examrefstorage --location westeurope \
    --resource-group ExamRef-RG --sku Standard_LRS

az storage account keys list --resource-group ExamRef-RG \
    --account-name examrefstorage

az storage container create --account-name examrefstorage \
    --account-key {storageAccountKey} --name logs

# Start VPN Troubleshoot
# Note: Assumes storage account and VPN connection are in the same resource group
# If not, specify using full resource IDs instead
# Use JSON output, since table output does not show the actual troubleshooting result
az network watcher troubleshooting start --resource-group ExamRef-RG \
    --resource Vnet1-to-Vnet2 --resource-type vpnConnection \
    --storage-account examrefstorage \
    --storage-path https://examrefstorage.blob.core.windows.net/logs --output json

Troubleshoot virtual network connectivity

A number of the tools we have already seen can be useful for troubleshooting connectivity issues between and within virtual networks. Network Watcher offers two more tools that are particularly useful in this scenario: Connection Troubleshoot and Connection Monitor.

Connection Troubleshoot

Connection Troubleshoot is a Network Watcher feature that allows you to test connectivity between an Azure VM or an Application Gateway and another endpoint, which can be another Azure VM or an arbitrary Internet or intranet endpoint. This diagnostic tool can identify a range of problems, including guest VM issues (such as guest firewall configuration, low memory, or high CPU), Azure configuration issues (such as Network Security Groups blocking traffic or routes diverting traffic), and other network issues, such as DNS failures.

To use Connection Troubleshoot from the Azure Portal, open Network Watcher then click Connection Troubleshoot. Specify the source VM, then specify the destination, either as another VM or by giving a URI, FQDN, or IPv4 address. Specify the protocol to use (either TCP or ICMP). For TCP, you can specify the destination port, and, under Advanced Settings, the source port. An example configuration is shown in Figure 4-75.

A screen shot from the Azure Portal shows the Network Watcher Connection Troubleshoot configuration. A source VM has been specified, using the name and resource group name. The destination has been specified as a FQDN (azure.microsoft.com) and protocol (TCP), with destination port 443.

Figure 4-75 Network Watcher Connection Troubleshoot configuration

The test takes a few minutes to run. Upon completion, the results will be shown at the bottom of the page. An example output is shown in Figure 4-76.

A screen shot from the Azure Portal shows the Network Watcher Connection Troubleshoot results. The overall status is Reachable. A table shows the status of test probes across each Hop in the journey. Probe statistics (Number Of Probes, Number Of Failures, Min/Max/Average Latency) are also shown.

Figure 4-76 Network Watcher Connection Troubleshoot results

Connection Troubleshoot is also available via PowerShell, using the Test-AzNetworkWatcherConnectivity cmdlet, and via the Azure CLI, using the az network watcher test-connectivity command.
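
A minimal PowerShell sketch follows, assuming the $networkWatcher and $vm objects used in the earlier Network Watcher examples; the destination address and port are illustrative.

# Test connectivity from the VM to an external HTTPS endpoint
Test-AzNetworkWatcherConnectivity `
    -NetworkWatcher $networkWatcher `
    -SourceId $vm.Id `
    -DestinationAddress azure.microsoft.com `
    -DestinationPort 443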

Connection Monitor

The Connection Monitor in Network Watcher is similar to Connection Troubleshoot, in that it uses the same mechanism to test the connection between an Azure VM or App Gateway and another endpoint. The difference is that Connection Monitor provides ongoing connection monitoring, whereas Connection Troubleshoot only provides a point-in-time test.

Data from Connection Monitor is surfaced in Azure Monitor. Charts show key metrics such as round-trip time and probe failures. Azure Monitor can also be used to configure alerts, triggered by connection failures or a drop in performance.

To use Connection Monitor via the Azure Portal, open Network Watcher, then click Connection Monitor. A list of active monitored connections is shown. Click +Add to create a new monitored connection, then fill in the connection settings. The settings are the same as for Connection Troubleshoot, plus the probe frequency. An example is shown in Figure 4-77.

A screen shot from the Azure Portal shows the Network Watcher Connection Monitor configuration. The source is VM1 and the destination is VM2. Port 443 has been selected, with a probe interval of 60 seconds.

Figure 4-77 Network Watcher Connection Monitor configuration

The monitored connection will be listed on the Connection Monitor blade within Network Watcher. Click on a monitored connection to open the results panel, as shown in Figure 4-78. The chart shows average round-trip time and % probe failures. Click on the chart to view the data in Azure Monitor. From there, alerts can be configured based on these metrics exceeding thresholds you define. The table below the chart shows the current connection status—clicking on each line gives further details about the status, which is similar to the results obtained from Connection Troubleshoot.

A screen shot from the Azure Portal shows the Network Watcher Connection Monitor status. The chart shows probes failing as of approximately 17:05. The detailed status shows this was caused by an NSG rule, Deny-Internet-TCP-80-Outbound.

Figure 4-78 Network Watcher Connection Monitor status
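
Connection Monitor can also be created programmatically. The following PowerShell sketch is hedged: it assumes the New-AzNetworkWatcherConnectionMonitor cmdlet, and the monitor name, VM names, port, and probe interval shown are illustrative.

# Create a monitored connection from VM1 to VM2 on port 443, probing every 60 seconds
$vm1 = Get-AzVM -Name VM1 -ResourceGroupName ExamRef-RG
$vm2 = Get-AzVM -Name VM2 -ResourceGroupName ExamRef-RG

New-AzNetworkWatcherConnectionMonitor `
    -NetworkWatcher $networkWatcher `
    -Name VM1-to-VM2 `
    -SourceResourceId $vm1.Id `
    -DestinationResourceId $vm2.Id `
    -DestinationPort 443 `
    -MonitoringIntervalInSeconds 60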

Skill 4.7: Integrate on-premises network with Azure virtual network

Many Azure deployments require connectivity between the on-premises network and the Azure VNet. This integrated network is called a hybrid network.

Hybrid networks are commonly used for Intranet applications, which may be hosted in Azure but only accessed from the on-premises network. They are also used by Azure applications that require access to an on-premises resource, such as a database.

Hybrid networks provide connectivity between the private IP space of the on-premises network and the private IP space of the Azure VNet. The VNet can be thought of as an extension of the existing on-premises network. The concept is similar to extending the on-premises network to a new office location.

Create and configure Azure VPN Gateway

A virtual network gateway allows you to create connections from your virtual network to other networks. When creating a gateway, you must specify whether it will be used for VPN connections or ExpressRoute connections. Virtual network gateways used for VPN connections are called VPN gateways, while those used for ExpressRoute connections are called ExpressRoute gateways.

Earlier in this chapter we saw how VPN gateways can be used to connect one Azure VNet to another. They can also be used to create VPN tunnels between Azure VNets and on-premises networks—this is called a site-to-site VPN. They can also be used as a hub for point-to-site networks, where individual machines connect to an Azure VNet via the VPN client on the machine.

Gateway subnets

VPN gateways can only be deployed to a dedicated gateway subnet within the VNet. A gateway subnet is a special type of subnet that can only be used for virtual network gateways. Under the hood, the VPN gateway is implemented using Azure virtual machines (these are not directly accessible and are managed for you). While the minimum size for the gateway subnet is a CIDR /29, the Microsoft-recommended best practice is to use a CIDR /27 address block to allow for future expansion.

A VPN connection between an on-premises network and an Azure VNet can only be established if the network ranges do not overlap. Network address ranges should be planned carefully to avoid restricting future connectivity options.

Gateway SKUs

VPN Gateways are available in several pricing tiers, or SKUs. The correct tier should be chosen based on the required network capacity, as shown in Table 4-12.

Table 4-12 Comparison of VPN Gateway Pricing Tiers

SKU                    Max Site-to-Site VPN Connections    Throughput
Basic                  10                                  100 Mbps
VpnGw1 and VpnGw1Az    30                                  650 Mbps
VpnGw2 and VpnGw2Az    30                                  1 Gbps
VpnGw3 and VpnGw3Az    30                                  1.25 Gbps

Note Resizing VPN Gateways

You can resize a gateway between the VpnGw1, 2, and 3 tiers. You cannot, however, resize a Basic tier gateway.

BGP

Border Gateway Protocol (BGP) is a standard used on the Internet to exchange routing information between networks. BGP can optionally be enabled on your VPN gateway, if the on-premises gateway also supports it. If used, it enables the VPN gateway and the on-premises gateway to exchange routing information automatically, avoiding the need to configure routes manually.

BGP also enables highly available, redundant connections (see the next section) and advanced features such as transit routing across multiple networks. It is also used where a VPN connection acts as a failover in case the primary ExpressRoute connection fails.
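
BGP is enabled when the gateway is created. As a hedged sketch (the ASN value is illustrative, and the other parameters follow the gateway creation example later in this skill, including the $gwipconf IP configuration object), the relevant parameters on New-AzVirtualNetworkGateway are EnableBgp and Asn. Active-active mode, discussed in the next section, is enabled with the EnableActiveActiveFeature switch plus a second gateway IP configuration.

# Create a route-based VPN gateway with BGP enabled (ASN value is illustrative)
$vnet1gw = New-AzVirtualNetworkGateway `
   -Name VNet1-GW `
   -ResourceGroupName ExamRef-RG `
   -Location 'North Europe' `
   -IpConfigurations $gwipconf `
   -GatewayType Vpn `
   -VpnType RouteBased `
   -GatewaySku VpnGw1 `
   -EnableBgp $true `
   -Asn 65010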

High Availability

By default, each VPN gateway is deployed as two VMs in an active-standby configuration. To reduce downtime in the event the active instance fails, an active-active configuration can also be used (not supported for Basic SKU gateways). In this mode, both gateway instances have their own public IP addresses, and two connections are made to the on-premises VPN endpoint.

Dual on-premises VPN endpoints can also be used. This requires BGP to be enabled, and works with both active-standby and active-active VPN gateways. Combining dual on-premises endpoints with active-active VPN gateways provides a fully redundant configuration, avoiding single points of failure, as shown in Figure 4-79. In this configuration, traffic is distributed over all four VPN tunnels.

A diagram shows an on-premises network connected to an Azure VNet. The on-premises network is connected via two VPN endpoints. The Azure network also uses two endpoints, in the form of an active-active VPN gateway. Both on-premises VPN endpoints are connected to both Azure endpoints, creating 4 connections in total.

Figure 4-79 Dual on-premises VPN endpoints connected to active-active VPN gateways

For increased resilience to datacenter-level failures, virtual network gateways can be deployed to availability zones. This requires the use of dedicated SKUs, called VpnGw1Az, VpnGw2Az, and VpnGw3Az. Both zone-redundant and zone-specific deployment models are supported, the choice being inferred from the associated public IP address rather than being specified explicitly as a gateway property.

Create a VPN Gateway using the Azure Portal

Before creating the VPN gateway, first create the gateway subnet. Using the Azure Portal, navigate to your virtual network and click the Subnets link under Settings to open the subnets blade. Click the +Gateway Subnet button and assign an address space using a /27 CIDR, as seen in Figure 4-80. Do not modify the other subnet settings.

A screen shot shows the Azure Portal adding a Gateway Subnet to an Azure virtual network, using a CIDR /27 IP address range.

Figure 4-80 Adding a Gateway Subnet to a virtual network

Next, provision a VPN gateway as follows. From the Azure Portal, click +Create A Resource, then click Networking, and then select Virtual Network Gateway. Complete the Create Virtual Network Gateway blade as follows:

  • Name VNet-GW

  • Gateway type VPN

  • VPN Type Route-based

  • SKU VpnGw1

  • Virtual Network <choose your VNet>

  • First IP Configuration Create New, VNet-GW-IP

  • Location <Same as your VNet>

Do not select the checkboxes for Enable Active-Active Mode or Configure BGP ASN. Figure 4-81 shows the completed gateway settings.

A screen shot shows the Azure Portal creating a new VPN gateway. The gateway is a route-based VPN gateway using SKU VpnGw1 and public IP address VNet2-GW-IP.

Figure 4-81 Creating an Azure VPN Gateway

Create a VPN Gateway using PowerShell

The process for creating a VPN gateway using Azure PowerShell follows the same steps as used in the Azure Portal, as the following script demonstrates.

Note Gateway Subnets

When creating the gateway subnet, there is no special parameter or cmdlet name to denote that this is a gateway subnet rather than a normal subnet. The only distinction that identifies a gateway subnet is the subnet name, GatewaySubnet.

# Script to create a VPN gateway in VNet1
# Assumes VNet1 is already created, with IP address range 10.1.0.0/16

# Name of resource group
$rg = 'ExamRef-RG'

# Create gateway subnet in VNet1
# Note: Gateway subnets are just normal subnets, with the name 'GatewaySubnet'
$vnet1 = Get-AzVirtualNetwork `
   -Name VNet1 `
   -ResourceGroupName $rg

$vnet1.Subnets += New-AzVirtualNetworkSubnetConfig `
   -Name GatewaySubnet `
   -AddressPrefix 10.1.1.0/27

$vnet1 = Set-AzVirtualNetwork `
   -VirtualNetwork $vnet1

# Create VPN gateway in VNet1
$gwpip = New-AzPublicIpAddress `
   -Name VNet1-GW-IP `
   -ResourceGroupName $rg `
   -Location 'North Europe' `
   -AllocationMethod Dynamic

$gwsubnet = Get-AzVirtualNetworkSubnetConfig `
   -Name 'GatewaySubnet' `
   -VirtualNetwork $vnet1

$gwipconf = New-AzVirtualNetworkGatewayIpConfig `
   -Name GwIPConf `
   -Subnet $gwsubnet `
   -PublicIpAddress $gwpip

$vnet1gw = New-AzVirtualNetworkGateway `
   -Name VNet1-GW `
   -ResourceGroupName $rg `
   -Location 'North Europe' `
   -IpConfigurations $gwipconf `
   -GatewayType Vpn `
   -VpnType RouteBased `
   -GatewaySku VpnGw1

Create a VPN Gateway using the Azure CLI

The process for creating VPN gateways using the Azure CLI follows similar steps. First the public IP address and gateway subnet are created, followed by the gateway itself. Once again, the gateway subnet is created simply by specifying the name ‘GatewaySubnet’ when creating a normal subnet.

In this case, the public IP address required by the VPN gateway must be created beforehand, rather than being created implicitly when creating the gateway.

# Create VPN gateway in VNet1 (already created, with IP address range 10.1.0.0/16)

# Create gateway subnet in VNet1
az network vnet subnet create --name GatewaySubnet --vnet-name VNet1 \
    --resource-group ExamRef-RG --address-prefixes 10.1.1.0/27

# Create public IP address for use by the VPN gateway
az network public-ip create --name VNet1-GW-IP --resource-group ExamRef-RG \
    --location NorthEurope

# Create VPN gateway in VNet1
az network vnet-gateway create --name VNet1-GW --resource-group ExamRef-RG \
    --gateway-type vpn --sku VpnGw1 --vpn-type RouteBased --vnet VNet1 \
    --public-ip-addresses VNet1-GW-IP --location NorthEurope

Create and configure site-to-site VPN

Site-to-site VPNs enable on-premises networks to be connected to an Azure virtual network. This connection enables on-premises servers and Azure VMs to communicate over their private network space, without being exposed to the Internet.

Site-to-Site connections are established between your VPN on-premises device and an Azure VPN gateway. Traffic flows over the public Internet, enclosed in a secure, encrypted tunnel between these two endpoints. The underlying VPN encryption method used is IPsec IKEv2. An example is illustrated in Figure 4-82.

The diagram shows a Site-to-Site VPN connection between Azure and an on-premises datacenter. The diagram includes IP addressing showing how each network connects to the other via the VPN gateway.

Figure 4-82 Site-to-site VPN connection between Azure and On-Premises

Supported VPN devices

A wide range of on-premises VPN devices is supported for Azure Site-to-Site VPNs, from many device manufacturers. A full list, including links to configuration instructions, is given in the Azure documentation (see https://docs.microsoft.com/azure/vpn-gateway/vpn-gateway-about-vpn-devices). For certain devices, Azure also provides configuration scripts to automate the setup process.

If you do not have access to a hardware VPN device, a software-based device can be used, such as Microsoft Routing and Remote Access Service (RRAS) on Windows, or OpenSWAN on Linux.

Note VPN IP Addresses

The on-premises VPN device must be deployed with an Internet-facing static IPv4 address.

Multi-Site networks

Each VPN gateway can support multiple Site-to-Site VPN connections. This is called a multi-site connection. Multi-site connections are commonly used to connect an Azure virtual network to multiple on-premises sites. They can also be used to create VPN connections to other Azure virtual networks in cases where VNet peering is not available (see Skill 4.2).

To use a multi-site connection, a route-based VPN is required. Since each VNet supports only a single VPN gateway, all connections share the available bandwidth. In Figure 4-83, you see an example of a network with three sites and two VNets in different Azure regions.

The diagram shows three enterprise locations that are connected to two Azure VNets that are in different Azure regions.

Figure 4-83 Multi-Site Site-to-Site Network with three locations and two Azure VNets

Create a Site-to-Site VPN using the Azure Portal

Before creating a Site-to-Site VPN connection, ensure that your on-premises VPN device is supported and deployed with a static Internet-facing IPv4 address. Plan your network so that on-premises and Azure address spaces do not overlap, then deploy your Azure VPN gateway as described earlier.

Next, deploy a local network gateway resource in Azure. This resource represents your on-premises network and is where details of that network (such as IP prefixes and the gateway IP address) are configured. In the Azure Portal, click +Create A Resource, search for Local Network Gateway, and click to open the Create Local Network Gateway blade as shown in Figure 4-84. Fill in the blade as follows:

  • Name Choose a name for the local network gateway resource.

  • IP address The Internet-facing IP address of your on-premises VPN gateway.

  • Address space The on-premises network address space.

  • Configure BGP settings Leave unchecked, unless using BGP.

  • Subscription, resource group and location Choose any values. The location does not have to match the location of your VPN gateway.

The diagram shows the Create Local Network Gateway blade from the Azure Portal. The Gateway Name, IP Address and Address Space have been filled in.

Figure 4-84 Create Local Network Gateway

Next, configure your on-premises VPN device. You will need to specify a shared key (choose any sufficiently random, secret value) and the public IP address of the Azure VPN gateway. Use the configuration guides or configuration scripts available from the Azure documentation pages, if available for your device.

Next, create the VPN connection in Azure. Open the blade for your VPN gateway, and click Connections to see the list of current connections. Then click +Add to open the Add Connection blade, as shown in Figure 4-85. Fill in the blade as follows:

  • Name Choose a name for the connection

  • Connection type Site-to-site (IPsec)

  • Virtual network gateway The currently selected Azure VPN gateway (fixed)

  • Local network gateway Choose your local network gateway resource

  • Shared key (PSK) Enter the same value as used on-premises

  • Subscription, resource group, and location These are taken from the VPN gateway (fixed)

A screen shot shows the Add Connection blade from the VPN Gateway section of the Azure Portal. The blade is configured for a Site-to-Site VPN connection, with the local network gateway created earlier selected.

Figure 4-85 Add VPN Connection

The VPN connection will now be created. The connection status can be seen in the Connections list for the VPN gateway. The status will initially be Updating, and after a few moments it should change to Connected once the connection is established, as shown in Figure 4-86.

A screen shot shows the connection status for the OnPremConnection VPN connection. The status is Connected.

Figure 4-86 VPN Connection status Connected

Configure a Site-to-Site VPN using Azure PowerShell

The process for creating a Site-to-Site VPN using Azure PowerShell follows the same steps required for the Azure Portal. We assume you have already planned your network, deployed your on-premises VPN device, and created your VPN gateway in Azure, as described earlier. The following script shows how to create the local network gateway and the VPN connection.

# Create local network gateway
$localnw = New-AzLocalNetworkGateway `
   -Name LocalNetGW `
   -ResourceGroupName ExamRef-RG `
   -Location "West Europe" `
   -GatewayIpAddress "53.50.123.195" `
   -AddressPrefix "10.5.0.0/16"

# Get VPN gateway
$gateway = Get-AzVirtualNetworkGateway `
 -Name VPNGW1 `
 -ResourceGroupName ExamRef-RG

# Create the connection
$conn = New-AzVirtualNetworkGatewayConnection `
   -Name OnPremConnection `
   -ResourceGroupName ExamRef-RG `
   -Location 'West Europe' `
   -VirtualNetworkGateway1 $gateway `
   -LocalNetworkGateway2 $localnw `
   -ConnectionType IPsec `
   -SharedKey "abc123"
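
As a quick check after running the script, you can query the connection status directly from PowerShell; once the tunnel is established it should report Connected. This is a minimal sketch using the names from the script above.

# Check the connection status (should eventually report 'Connected')
(Get-AzVirtualNetworkGatewayConnection `
   -Name OnPremConnection `
   -ResourceGroupName ExamRef-RG).ConnectionStatus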

Configure a Site-to-Site VPN using the Azure CLI

The process for creating a Site-to-Site VPN using the Azure CLI follows the same steps as the Azure Portal and Azure PowerShell. The following script shows how to create the local network gateway and the VPN connection, assuming the on-premises VPN device and Azure VPN gateway have already been deployed.

# Create Local Network Gateway
az network local-gateway create --gateway-ip-address 53.50.123.195 \
  --name LocalNetGW --resource-group ExamRef-RG --local-address-prefixes 10.5.0.0/16

# Create VPN Connection
az network vpn-connection create --name OnPremConnection --resource-group ExamRef-RG \
  --vnet-gateway1 VPNGW1 --location westeurope --shared-key abc123 \
  --local-gateway2 LocalNetGW

Configure ExpressRoute

ExpressRoute is a secure and reliable private connection between your on-premises network and the Microsoft cloud. The connection is provided by a third-party network provider who has partnered with Microsoft to offer ExpressRoute services. This third party is known as the ExpressRoute provider.

Unlike a Site-to-Site VPN, network traffic using ExpressRoute passes over your provider’s network rather than the Internet. The latency and bandwidth of an ExpressRoute circuit are therefore more predictable and stable.

Another key difference between ExpressRoute connections and Site-to-Site VPN connections is that Site-to-Site VPN connections only provide connectivity to your Azure VNet, whereas ExpressRoute provides connectivity to all Microsoft cloud services. This includes Azure VNets, Azure platform services (such as CosmosDB), and Microsoft services outside of Azure such as Office 365 and Dynamics 365.

Connectivity models

ExpressRoute connectivity can be established in one of three ways. The capabilities and features of ExpressRoute are the same in each case.

  • If your network already has a presence at a co-location facility with a cloud exchange, your co-location provider can establish a virtual cross-connection with the Microsoft Cloud. This provides either a layer 2 or a managed layer 3 connection.

  • Your connectivity provider may be able to provide a point-to-point Ethernet connection from their network to your on-premises network. Again, this approach offers either a layer 2 or managed layer 3 connection.

  • Finally, your existing IPVPN WAN provider may be able to integrate ExpressRoute into your WAN, if they are registered as an ExpressRoute provider. In this case, your provider will typically offer managed layer 3 connectivity.

These connectivity options are shown in Figure 4-87.

The diagram shows a logical diagram of the three types of ExpressRoute circuits. These include co-location, point-to-point Ethernet, and IPVPN WAN.

Figure 4-87 ExpressRoute connectivity models

Circuits and peering

An ExpressRoute circuit is an Azure resource used to represent the logical connection between your on-premises network and Microsoft. Each circuit is identified by a GUID called a service key (s-key), which is shared with your connectivity provider.

Each circuit has a fixed bandwidth, and a specific peering location. The available bandwidth options are 50 Mbps, 100 Mbps, 200 Mbps, 500 Mbps, 1 Gbps, 2 Gbps, 5 Gbps, and 10 Gbps. This bandwidth can be either metered or unlimited:

  • Metered All inbound data transfer is free of charge, and all outbound data transfer is charged at a pre-determined rate. Users are also charged a fixed monthly port fee (based on high-availability dual ports).

  • Unlimited All inbound and outbound data transfer is free of charge. Users are charged a single fixed monthly port fee (based on high-availability dual ports).

New ExpressRoute circuits offer two peering options, also known as routing domains: Azure Private Peering and Microsoft Peering. Each circuit can use either one or both peerings. These peerings are shown in Figure 4-88.

  • Azure Private Peering Provides connectivity to your Azure virtual networks over your private (intranet) address space. This peering is considered a trusted extension of your core network into Azure.

  • Microsoft Peering Provides connectivity over the Internet address space into Microsoft services such as Office 365, Dynamics 365, and Internet-facing endpoints of Azure platform (PaaS) services.

Older circuits may use a third peering model, Azure Public Peering, which provides connectivity to Azure PaaS services only. Public peering is deprecated and is not available for new circuits.

The diagram shows functional architecture of the two types of peerings or routing domains supported by ExpressRoute. The customer network is connected to the Partner edge network. This is connected via ExpressRoute (using two separate connections) to the Microsoft Edge. From there, the Microsoft Peering connects to all Internet-facing Microsoft services, and the Private Peering connects to Azure Virtual Networks.

Figure 4-88 ExpressRoute peering options

Each ExpressRoute circuit has two connections from your network edge to two Microsoft edge routers, configured using BGP. Microsoft requires a BGP connection from your network edge to each of these Microsoft edge routers. You can choose not to deploy redundant devices or Ethernet circuits at your end; however, connectivity providers use redundant devices to ensure that your connections are handed off to Microsoft in a redundant manner. Figure 4-89 shows a redundant connectivity configuration.

The diagram shows two cities that are part of an enterprise configuration connecting to ExpressRoute in New York City and Las Vegas. The connections are redundant and able to leverage Azure resources in two different regions.

Figure 4-89 Multiple cities connected to ExpressRoute in two Azure regions

Global availability and ExpressRoute Premium

ExpressRoute is only available in certain cities throughout the world, so it is important to check with your local providers to determine availability. For a list of ExpressRoute providers and their supported locations, see: https://docs.microsoft.com/azure/expressroute/expressroute-locations.

By default, each ExpressRoute circuit enables connectivity to Microsoft data centers within a geopolitical region. For example, a connection in Amsterdam gives you access to all Microsoft datacenters in Europe.

With the ExpressRoute Premium add-on, connectivity is extended to all Microsoft datacenters worldwide. This add-on also raises the number of routes permitted for the Azure Private Peering from 4,000 to 10,000. It also increases the number of virtual networks that can be connected to each ExpressRoute circuit, from 10 to between 20 and 100 (depending on the bandwidth of the circuit).
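
An existing Standard circuit can later be upgraded to use the Premium add-on by updating the circuit SKU. The following Azure PowerShell fragment is a minimal sketch, assuming a circuit named ER-Circuit in the ExamRef-RG resource group and a metered billing model; Premium pricing applies as soon as the change is saved.

# Minimal sketch: enable the ExpressRoute Premium add-on on an existing circuit
$ckt = Get-AzExpressRouteCircuit -Name ER-Circuit -ResourceGroupName ExamRef-RG

# Change the SKU tier from Standard to Premium (keeping metered data)
$ckt.Sku.Tier = "Premium"
$ckt.Sku.Name = "Premium_MeteredData"

# Save the updated circuit
Set-AzExpressRouteCircuit -ExpressRouteCircuit $ckt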

Creating an ExpressRoute circuit

To create an ExpressRoute circuit using the Azure Portal, click +Create a resource, then Networking, then ExpressRoute to open the Create ExpressRoute blade (Figure 4-90). Specify the circuit name, provider and peering location, then specify the bandwidth, billing model, and whether or not the ExpressRoute Premium add-on is required. Finally, specify the subscription, resource group, and resource location.

An Azure Portal screen shot shows the Create ExpressRoute circuit blade. The Provider, Peering Location, Bandwidth, and Billing Model have been selected. The SKU options are Standard and Premium. Under the Create button, a warning reads: By clicking the Create button, you understand that billing will start immediately upon creation of the ExpressRoute and you agree to accept the charges.

Figure 4-90 Creating an ExpressRoute circuit
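
The same circuit can also be created with Azure PowerShell. The following is a minimal sketch; the circuit name, provider, peering location, and bandwidth are illustrative and must match options offered by your ExpressRoute provider.

# Minimal sketch: create an ExpressRoute circuit (values are illustrative)
$ckt = New-AzExpressRouteCircuit `
   -Name ER-Circuit `
   -ResourceGroupName ExamRef-RG `
   -Location "West Europe" `
   -SkuTier Standard `
   -SkuFamily MeteredData `
   -ServiceProviderName "Equinix" `
   -PeeringLocation "Amsterdam" `
   -BandwidthInMbps 200

# The service key (s-key) is shared with your ExpressRoute provider
$ckt.ServiceKey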

Note Expressroute Locations

When creating an ExpressRoute circuit, you must specify both the peering location and the location of the ExpressRoute circuit resource. These are independent settings, although Microsoft suggests as a best practice that they be geographically close.

Note Expressroute Billing

Billing for the circuit begins immediately upon resource creation and does not depend upon completing the configuration with the ExpressRoute provider. ExpressRoute circuits can be expensive, so care is advised. It is a good practice to restrict the ability to create ExpressRoute circuits using Azure Policy.

The ExpressRoute circuit will be created. The resource overview blade will show the provider status as Not Provisioned, together with the service key. Copy the service key and share it with your ExpressRoute provider. The provider status will change to Provisioning and finally to Provisioned once the provider setup is complete.

Next, you need to provision either Azure Private Peering or Microsoft Peering for your circuit. From the ExpressRoute circuit blade, click Peerings, and select the type of peering to configure. Fill in the peering details (such as the BGP ASN and peer subnets) as prompted, and then save the configuration.

For Microsoft Peering, you may see the status Validation Needed for the advertised public IP prefixes. This is because Microsoft needs to validate that you own these IP prefixes before updating their routing to use the ExpressRoute connection. In this case, use the Azure Portal to raise a support ticket to perform the validation.
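
For reference, Azure Private Peering can also be configured using Azure PowerShell once the circuit has been provisioned by your provider. The following is a minimal sketch; the circuit name, peer ASN, VLAN ID, and the two /30 peering subnets are illustrative values.

# Minimal sketch: configure Azure Private Peering (values are illustrative)
$ckt = Get-AzExpressRouteCircuit -Name ER-Circuit -ResourceGroupName ExamRef-RG

Add-AzExpressRouteCircuitPeeringConfig `
   -Name AzurePrivatePeering `
   -ExpressRouteCircuit $ckt `
   -PeeringType AzurePrivatePeering `
   -PeerASN 65010 `
   -PrimaryPeerAddressPrefix "192.168.10.0/30" `
   -SecondaryPeerAddressPrefix "192.168.10.4/30" `
   -VlanId 200

# Save the peering configuration back to the circuit
Set-AzExpressRouteCircuit -ExpressRouteCircuit $ckt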

Connecting virtual networks to ExpressRoute

Virtual networks are connected to ExpressRoute circuits using an ExpressRoute gateway. An ExpressRoute gateway is a virtual network gateway, created with the ExpressRoute option (rather than the VPN option, used to create VPN gateways). Just as with VPN gateways, the ExpressRoute gateway must be created in the gateway subnet of the virtual network.

Once the ExpressRoute gateway is created, it can be connected to the ExpressRoute circuit. The process is the same as adding a VPN connection to a VPN gateway, except that the ExpressRoute connection type is selected and the ExpressRoute circuit is specified. The circuit must already have been provisioned by your connectivity provider and have Azure Private Peering configured.
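
As a rough illustration, the following Azure PowerShell sketch creates an ExpressRoute gateway in an existing virtual network (which must already contain a GatewaySubnet) and then connects it to the circuit. All resource names are assumptions for the purpose of the example.

# Minimal sketch: create an ExpressRoute gateway and connect it to a circuit
# (assumes a VNet named ExamRefVNET with a GatewaySubnet already exists)
$vnet   = Get-AzVirtualNetwork -Name ExamRefVNET -ResourceGroupName ExamRef-RG
$subnet = Get-AzVirtualNetworkSubnetConfig -Name GatewaySubnet -VirtualNetwork $vnet

$pip = New-AzPublicIpAddress -Name ERGW-PIP -ResourceGroupName ExamRef-RG `
   -Location "West Europe" -AllocationMethod Dynamic

$ipconf = New-AzVirtualNetworkGatewayIpConfig -Name gwipconf `
   -SubnetId $subnet.Id -PublicIpAddressId $pip.Id

# GatewayType ExpressRoute (rather than Vpn) creates an ExpressRoute gateway
$ergw = New-AzVirtualNetworkGateway -Name ERGW -ResourceGroupName ExamRef-RG `
   -Location "West Europe" -IpConfigurations $ipconf `
   -GatewayType ExpressRoute -GatewaySku Standard

# Connect the gateway to the provisioned circuit
$ckt = Get-AzExpressRouteCircuit -Name ER-Circuit -ResourceGroupName ExamRef-RG
New-AzVirtualNetworkGatewayConnection -Name ERConnection -ResourceGroupName ExamRef-RG `
   -Location "West Europe" -VirtualNetworkGateway1 $ergw `
   -PeerId $ckt.Id -ConnectionType ExpressRoute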

Verify and troubleshoot on-premises connectivity

To verify connectivity or troubleshoot connectivity between on-premises networks and Azure:

  • Verify the status and configuration of all VPN connections, virtual network gateways, ExpressRoute connections, or ExpressRoute circuits involved.

  • For ExpressRoute, try to reset a failed circuit using the Get-AzExpressRouteCircuit and Set-AzExpressRouteCircuit PowerShell cmdlets (see the sketch after this list), as described at: https://docs.microsoft.com/azure/expressroute/reset-circuit.

  • Try to connect from an on-premises server to an Azure VM, and vice-versa, for example by using SSH or a simple TCP connection test.

  • Use standard network tools such as tcping or tracert to confirm connectivity between networks.

  • Use the Azure network diagnostics tools described in Skill 3.3.
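
As referenced in the ExpressRoute bullet above, resetting a failed circuit is a two-step operation: retrieve the circuit object, then write it back unchanged. A minimal sketch, assuming the circuit name and resource group used in the earlier examples:

# Minimal sketch: reset a failed ExpressRoute circuit
$ckt = Get-AzExpressRouteCircuit -Name ER-Circuit -ResourceGroupName ExamRef-RG
Set-AzExpressRouteCircuit -ExpressRouteCircuit $ckt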
