Basic Hyper-V Networking

Networking is a huge subject in Windows Server 2012 Hyper-V. We are going to start with the basics, moving through some new features that can be difficult to understand at first, and building your knowledge up in layers, before you go on to the more-complex topics in Chapter 5, “Cloud Computing,” Chapter 7, “Using File Servers,” and Chapter 8, “Building Hyper-V Clusters.” We recommend that you read through each subject, even if you are experienced in networking or Hyper-V. At the very least, this will serve as a refresher, but you might find that you learn something that will be important to later topics.

In this section, you will look at the basics of Hyper-V networking that are required to connect your virtual machines to a network. You will be introduced to the new Hyper-V virtual switch before looking at the greatly anticipated NIC teaming feature.

Using the Hyper-V Extensible Virtual Switch

In the past, Hyper-V had a relatively simple virtual device in the host called a virtual network to connect virtual machines to networks. The virtual network has been replaced by something much more exciting (for nerds who get excited by this sort of thing) and powerful: the virtual switch. This is a central piece in Microsoft’s Windows Server 2012 cloud operating system strategy. Users of previous versions of Hyper-V will still create the new virtual switch the same way that they created the virtual network, and the same types are still used. However, it won’t be long until the power of the switch becomes evident.

Understanding the Virtual Network Interface Controller

In Chapter 3, “Managing Virtual Machines,” you saw that virtual machines could have one or more virtual network interface controllers (NICs) in their configuration to allow the virtual machines to connect to a network. Virtual NICs are not confined to just virtual machines; the management OS can also have virtual NICs. This might be a little difficult to understand at first. Where does this NIC reside? What does it connect to? It resides in the management OS, and it appears in Network Connections just like a physical NIC. Like every virtual NIC in virtual machines, the management OS virtual NICs connect to a virtual switch. You will see a few ways to create and use virtual NICs as you proceed through this chapter.
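
Here is a minimal PowerShell sketch of adding an extra virtual NIC to the management OS and confirming that it appears alongside the physical NICs. The switch name External1 and the adapter name Management2 are assumptions for this illustration:

# Add a virtual NIC to the management OS, attached to an existing external virtual switch
Add-VMNetworkAdapter -ManagementOS -Name "Management2" -SwitchName "External1"

# The new adapter appears in Network Connections as vEthernet (Management2)
Get-NetAdapter | Format-Table Name, InterfaceDescription, Status -AutoSize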

Introducing the Virtual Switch

The virtual switch performs the same basic task as a physical switch. A physical switch connects the NICs of physical servers to a network, providing each connection with its own isolated source-to-destination path. A network administrator might route that network or might make it an isolated network. There are three kinds of virtual switch, and each kind connects the virtual NICs of virtual machines to a different kind of network:

External Virtual Switch The role of the external virtual switch, shown in Figure 4-1, is to connect virtual NICs to a physical network. Each virtual NIC is connected to a single virtual switch. The switch is connected to a physical network. This connects the virtual NICs to the physical network.
The virtual NICs participate on a LAN, just like the NICs in a physical machine do. Each virtual NIC has its own MAC or Ethernet address, and each network stack has its own IPv4 and/or IPv6 address. Each virtual NIC is completely separate from all of the other virtual NICs and from the NICs that are used by the management OS in the host itself. For example, the Windows Firewall in the management OS has nothing to do with the networking of the virtual machines in the host, and vice versa.

Figure 4-1 An external virtual switch

You can have more than one external virtual switch in a host. For example, if you want two external switches, you will need two connections to a physical network. Thanks to the many innovative features of Hyper-V in Windows Server 2012, you do not need more than one virtual switch to support multiple networks, as you will learn in this chapter and in Chapter 5.
Traditionally, an administrator wanted to isolate the network connection of the management OS from that of the virtual machines, so it had its own physical network connection. However, when there were a limited number of physical NICs in the host, the management OS could share the physical network connection that was used by the virtual switch. Doing this creates a virtual NIC in the management OS that is connected to the external virtual switch. This management OS virtual NIC appears in Network Connections with a name of vEthernet (<Name Of Switch>) and a device name of Hyper-V Virtual Ethernet Adapter. The management OS virtual NIC needs to be configured (settings and/or protocols) just as a physical NIC would require.
The Hyper-V administrator can choose between connecting the management OS to the virtual switch or directly to a physical NIC—if the administrator is not considering something else called converged fabrics, which you will read about later in the chapter.
Private Virtual Switch Figure 4-2 shows a private virtual switch. This type of virtual switch has no connections to either the management OS or the physical network; it is completely isolated, and therefore any virtual NICs that are connected to a private virtual switch are isolated too.

Figure 4-2 A private virtual switch

In past versions of Hyper-V, the private virtual switch was often used to secure sensitive workloads that should not be open to contact by the physical network or other virtual machines. One virtual machine running a firewall service could have two virtual NICs: one connected to an external switch, and another connected to the private network where the sensitive workload was connected. The firewall would then control and route access to the private network.
Virtual switches (of any type) are not distributed. This means that a virtual machine connected to a private network on Host1 cannot communicate with a virtual machine connected to a similarly named private network on Host2. This could cause issues if the cooperating virtual machines were migrated to different hosts. This could be avoided by adding some complexity, such as new isolated VLANs on the physical network or rules to keep virtual machines together. However, as you will see in Chapter 5, there are new techniques we can use instead of private networks, such as Port ACLs (access control lists) or third-party solutions, that can isolate our workloads without the use of private virtual switches.
Internal Virtual Switch The internal virtual switch isolates connected virtual NICs from the physical network, but it connects the management OS by using a virtual NIC, as you can see in Figure 4-3.
As with the private virtual switch, you do have to take care to keep virtual machines on this switch together on the same host. Otherwise, the machines will not be able to reach each other across the network. While the internal type of the virtual switch has a place in a lab environment, it has little use in production systems, thanks to the new cloud networking functionality in Windows Server 2012 that can achieve the same results, but with the flexibility of being able to move virtual machines between different hosts. (Chapter 5 provides more details.)

Figure 4-3 Internal virtual switch


Creating Virtual Switches

If you want to create a virtual switch for basic functionality, you can do this in the Hyper-V Manager console. Under Actions, you will see an option for Virtual Switch Manager. Clicking that option opens the Virtual Switch Manager for the selected host, as shown in Figure 4-4.

Figure 4-4 The Virtual Switch Manager


To create a new virtual switch, you do the following:

1. On the left side of the screen, under Virtual Switches, select New Virtual Network Switch.
2. On the right-hand side of the screen, under Create Virtual Switch, choose the type of virtual switch you want to create. You can change this type in a moment or after you create the switch.
3. Click the Create Virtual Switch button. The Virtual Switch Manager window changes to reveal more configuration options for the type of switch that you have selected (see Figure 4-5).

Figure 4-5 Configure the new virtual switch.


The options available depend on the type of virtual switch that you have decided to create.

You can choose External Network (with further options), Internal Network, or Private Network to change the type of virtual switch that you want to create. The choice of External Network has one configuration requirement and offers two options:

  • From the options in the drop-down list box, you choose the physical network adapter that you want to connect the virtual switch to. Only connections that are unused will be available. Unfortunately, this does not give you the human-friendly name of the network connection; it uses the actual device name. You can retrieve the device name of the desired network connection by opening Network Connections and using the details view, or by running Get-NetAdapter | Format-Table -AutoSize in PowerShell.

Meaningful Network Names
The way that Windows names the network connections has been an annoyance for administrators for a long time. Local Area Connection, Local Area Connection 1, and so on, or Ethernet, Ethernet 1, and so on, as the names appear in Windows Server 2012, have nothing to do with the names or the orders of the devices on the physical hardware. One workaround has been to plug in the network cables one at a time, and rename the network in Network Connections with a more meaningful name such as Management OS or External1.
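
If you prefer to script that workaround, here is a quick sketch; the adapter name Ethernet 2 and the new name Management OS are example values only:

Rename-NetAdapter -Name "Ethernet 2" -NewName "Management OS"
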
Some manufacturers are choosing to store the NIC names from the back of the server chassis in the BIOS (PCI-SIG Engineering Change Notice, or ECN). Windows Server 2012 has a new feature called Consistent Device Naming (CDN) that can detect those BIOS-stored names of the network devices and use them to name the network connections for you. This makes life much easier for operators and administrators. With CDN, you just need to know what switch port is connected to what NIC port on the back of the server and, in theory, the network names will have a corresponding name, making server or host network configuration much easier.

  • The Allow Management Operating System To Share This Network Adapter option is selected by default if you have chosen to create an External Network. You should clear this check box if you want to physically isolate the management OS networking on a different network. In the past, Hyper-V engineers typically wanted to isolate the management OS from virtual machine traffic and so have cleared this option.

Single-Root I/O Virtualization (SR-IOV) is a new feature in Windows Server 2012 Hyper-V that allows virtual NICs to run with less latency than usual if you have the required hardware. You’ll learn more about this feature later in the chapter. It is not selected by default and should be selected only if you are absolutely certain that you do want to enable SR-IOV; this will require some understanding of this advanced feature. Note that this option can be selected only when you create the switch, and it cannot be changed without deleting and re-creating the switch.


A Shortcut to Network Connections
How much administrative or engineering effort is wasted navigating through Control Panel or Server Manager to get to Network Connections? There is a quicker route: just run NCPA.CPL.

If you choose either of the following configurations for the virtual switch, a management OS virtual NIC will be created and connected to the virtual switch:

  • An external virtual switch that is shared with the management OS
  • An internal virtual switch

If you did choose one of these configurations, you can configure the management OS virtual network with a VLAN ID. This will isolate the traffic at the virtual and physical switch level, and will require some work by the physical network administrators to trunk the connected switch ports to support VLANs. Choosing this option does not place the virtual switch on the VLAN; it places only the management OS virtual NIC on that VLAN.
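
The same VLAN assignment can be made with PowerShell. This is a minimal sketch that assumes the management OS virtual NIC carries the switch name, External1, and that VLAN 101 is the required VLAN:

# Place the management OS virtual NIC in access mode on VLAN 101
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "External1" -Access -VlanId 101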

When you create a virtual switch, you might get a warning that the creation process may disconnect anyone using services on the management OS. For example, creating an external virtual switch via Remote Desktop will disconnect your session if you select the Remote Desktop NIC to be the connection point for an external switch.

You can return to Virtual Switch Manager to modify a switch (except for the SR-IOV setting) or to even delete it.


Always Have a Backdoor to the Host
Virtualization hosts, such as Hyper-V hosts, are extremely valuable resources because they can support numerous servers. From a business’s point of view, those multiple services enable the organization to operate. Configuring networking from a remote location, such as a remote office or your desk on another floor, can have risks. If you make a mistake with the network configuration, you can disconnect the host, the virtual machines, and your own ability to use Remote Desktop to fix the problem quickly.
Therefore, we strongly urge you to consider having a secured backdoor to get remote KVM access to your hosts such as Dell’s DRAC or HP’s iLO. Some solutions offer SSL and Active Directory integrated authentication so only authorized administrators can gain remote access. This can be coupled with physical network remote access and firewall policies where security is a concern. If you make a mistake while remotely configuring networking on the host, you can use the KVM console to log into the server, fix the issue, and quickly return to normal service without having to travel to the computer room or request local operator assistance.

You can use the New-VMSwitch PowerShell cmdlet to create a virtual switch that is disconnected from the management OS:

New-VMSwitch "External1" -NetAdapterName "Ethernet" -AllowManagementOS 0

That snippet creates an external virtual switch by default. You can change the type by adding the -SwitchType flag and choosing from External, Internal, or Private:

New-VMSwitch "Internal1" -SwitchType Internal

This creates a new virtual switch that virtual machines can be connected to. The management OS will also get a new virtual NIC called vEthernet (Internal1) that will require an IP configuration. Here is how you can configure IPv4 for this virtual NIC:

New-NetIPAddress -InterfaceAlias "vEthernet (Internal1)" -IPAddress 10.0.15.1 `
-PrefixLength 24 -DefaultGateway 10.0.15.254
Set-DnsClientServerAddress -InterfaceAlias "vEthernet (Internal1)" `
-ServerAddresses 10.0.15.21, 10.0.15.22

Get-VMSwitch can be used to retrieve a virtual switch. Set-VMSwitch can be used to configure a virtual switch. For example, you can quickly reconfigure a virtual switch to use a different physical connection by using this line of PowerShell:

Set-VMSwitch "External1" -NetAdapterName "Ethernet 3"

You could always change your mind about not sharing the virtual switch’s network connection with the management OS:

Set-VMSwitch "External1" -AllowManagementOS 1

Here’s an example of the power of PowerShell; you can quickly move all the virtual NICs that are connected to one virtual switch to another virtual switch in one line of code instead of dozens, hundreds, or even thousands of mouse clicks:

Get-VMNetworkAdapter * | Where-Object {$_.SwitchName -EQ "Private1"} | `
Connect-VMNetworkAdapter -SwitchName "External1"

The first piece of the code retrieves all the virtual NICs on the host, before filtering them down to just the ones connected to the Private1 virtual switch. The remaining virtual NICs are then changed so they connect to the External1 virtual switch instead.

By default, all newly created virtual NICs have a dynamically assigned MAC or Ethernet address. Each MAC address is assigned from a pool of addresses that are defined in the Virtual Switch Manager screen, as you can see in Figure 4-6. You can alter this range of MAC addresses if required by the network administrator. Note that this will not affect the in-use MAC addresses of any virtual NIC that is currently in operation on the host at the time of the change.
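
You can also inspect or adjust the pool with PowerShell. The following is a sketch only; the MAC address values are placeholders, and any change to the range should be agreed on with the network administrator:

# View the current dynamic MAC address range for this host
Get-VMHost | Format-List MacAddressMinimum, MacAddressMaximum

# Change the range (placeholder values shown)
Set-VMHost -MacAddressMinimum "00155D300000" -MacAddressMaximum "00155D3000FF"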

Figure 4-6 The MAC address range for this host


You have learned the basics of configuring a virtual switch. Now it’s time to see why it’s called the extensible virtual switch.

Adding Virtual Switch Extensibility

By itself, the Hyper-V virtual switch is a powerful network appliance, but it can be given more functionality by installing third-party extensions. Each of these extensions adds functionality to the Hyper-V virtual switch instead of replacing it. The extensions are certified Network Driver Interface Specification (NDIS) filter drivers or Windows Filtering Platform (WFP) filters/drivers.

As you can see in Figure 4-7, three types of extensions can be added to a Hyper-V extensible virtual switch:

Capturing Extension The role of the NDIS filter capturing extension is to monitor network traffic as it passes through the virtual switch. It may not alter this traffic. The extension may report back to a central monitoring service or application to allow administration.
In the legacy physical datacenter, network administrators would use data from network appliances to analyze problems between two computers. Those tools are useless when the two computers in question are running on the same host and the traffic never leaves the virtual switch and never gets near the physical network. This extension type offers network, virtualization, and application administrators the tools that can analyze communications within the virtual switch itself. The monitoring solution is agile because the virtual switch extension provides a hardware abstraction; there is no dependency on having certain physical hardware that is supported by a network monitoring solution.
At the time of this writing, InMon is offering a beta version of their capture filter solution called sFlow Agent for Windows Server 2012 Hyper-V.
Filtering Extension The filtering extension can inspect (doing everything a capturing extension can do), modify, and insert packets in the virtual switch. It can also drop packets or prevent packet delivery to one or more destinations. As you can see in Figure 4-7, the filtering extension sees inbound data before the capturing extension, and it sees outbound data after the capturing extension.
As you may have guessed, this means that certified third-party filter extensions can add feature-rich virtual firewall functionality to the Hyper-V extensible virtual switch. The ability to apply filtering at the virtual-switch layer means that the dependency on physical firewall appliances and the misuse of VLANs for virtual machine and service network isolation is no longer necessary. Instead, we can use flatter networks and leverage software-defined networking (SDN) with less human involvement, and this makes Windows Server 2012 more suited for enterprise and cloud deployment.
5nine Software offers a solution, called Agentless Security Manager for Hyper-V, that has a filtering extension that has advanced firewall functionality from within the Hyper-V virtual switch.
Forwarding Extension The third and final extension type is the forwarding extension. This all-encompassing extension type can do everything that the capturing and filtering extensions can do. But forwarding extensions also can do something else: they can make the Hyper-V virtual switch look like a completely different switch to third-party administration software.
The forwarding extension is the first extension to see incoming data and the last extension to see outgoing data.
At the time of this writing, there are two known solutions in this space. Cisco has the Nexus 1000V, described as a distributed virtual switching platform that extends Cisco functionality into the Hyper-V extensible virtual switch. It can be managed using the same Cisco command tools that are used to configure the physical network. NEC offers a solution called ProgrammableFlow for Hyper-V that promises to give you software-defined networking (see Chapter 5), security, and easier administration for cloud computing.
You can install multiple forwarding extensions on a host, but only a single forwarding extension can be enabled in each specific virtual switch. This is because the forwarding extension creates a lot of change that will be very specific to the solution at hand.

Figure 4-7 The architecture of the Hyper-V extensible virtual switch


You will have to refer to the vendors of the third-party extensions for specific guidance and instructions for their solutions. However, you can perform the following operations in the Virtual Switch Manager:

  • Enable and disable extensions.
  • Reorder extensions within their own type. For example, you cannot make a capturing extension see inbound data before a filtering extension sees it.

To perform these operations on a virtual switch, you follow these steps:

1. Open Virtual Switch Manager.
2. Click the expand button on the relevant virtual switch to expand its properties.
3. Click Extensions.

Here you can do the following:

  • Select (to enable) or deselect (to disable) the check boxes beside an extension.
  • Select an extension and click Move Up or Move Down to reorder the extension within its own extension type.
  • Select an extension to see its description under Details for Selected Extensions.
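
These operations can also be scripted. The following sketch assumes a virtual switch named External1 and uses the name of the built-in Microsoft capturing extension purely as an example:

# List the extensions on a virtual switch and whether they are enabled
Get-VMSwitchExtension -VMSwitchName "External1" | Format-Table Name, Enabled

# Enable or disable an extension by name
Enable-VMSwitchExtension -VMSwitchName "External1" -Name "Microsoft NDIS Capture"
Disable-VMSwitchExtension -VMSwitchName "External1" -Name "Microsoft NDIS Capture"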

The small amount of text in this chapter on the extensibility of the Hyper-V virtual switch might mislead you on the importance of this functionality. The features, security, flexibility, and the extension of SDN into the physical network make the extensibility of the virtual switch extremely important. In fact, it’s one of the headline-making features of Windows Server 2012 that make it a true cloud operating system and is described extensively in Chapter 5.

Supporting VLANs

The virtual LAN (VLAN) is a method of dividing a physical LAN into multiple subnets for a number of reasons. VLANs can create more address space, control broadcast domains, and isolate traffic. They have been used (many say misused) to create security boundaries that are routed and filtered by firewalls. Each VLAN has an ID, or tag, that is used to dynamically associate a device with that VLAN so it can communicate only on the VLAN.

We can support VLANs in Hyper-V in many ways, some of which you will read about in Chapter 5. You have already seen how to associate a management OS virtual NIC with a VLAN when the management OS shares a physical connection with an external virtual switch. We will cover additional basic VLAN solutions in this chapter.

The typical request is to have a single virtual switch that can support many virtual machines connected to many VLANs. In reality, what is being requested is to have many virtual NICs connected to many VLANs; you’ll soon see why.

Configuring Physical Isolation of VLANs

A very crude approach is to have one physical NIC for every required VLAN. You can see this in Figure 4-8. In this example, you have two VLANs, 101 and 102. A port is configured on the switch for each VLAN. Each port is connected to a physical NIC on the host. An external virtual switch is created for each of the VLANs/physical NICs. And the virtual NICs of the virtual machines are connected to the associated VLANs/virtual switches.

Figure 4-8 Physical isolation for each VLAN


This is not a good solution. It might be OK for a lab or the smallest of installations, but it is not flexible, it is not scalable, and it will require lots of NICs if you have lots of VLANs (and double the number if you require NIC teaming).


Isolation of LAN Traffic from Internet Connections
Although the Hyper-V virtual switch securely isolates traffic, it is always desirable to physically isolate LAN traffic from Internet traffic. This isolation protects against distributed denial-of-service (DDoS) attacks, which sustain artificially created, exceptional levels of traffic that are designed to crash appliances, hardware, or services. Physically isolating LAN services from Internet services means that the LAN services can remain available to internal users while the DDoS attack is underway.
The ideal is that the physical isolation is done at the host level, with Internet services having their own hosts on a physically isolated network infrastructure. But small and medium businesses don’t have that luxury; they must use some level of shared infrastructure. They can have Internet-facing virtual machines (virtual NICs) operational on one external virtual switch (and physical connection), and internal services running on another external virtual switch (and physical connection), as previously shown in Figure 4-8. And once again, the ideal is to physically isolate the routing and switching to ensure that the DDoS attack does not affect the physical infrastructure for internal services.

The only scalable approach to VLANs is to deal with them at the software layer. There are many ways to do that, and the next two options are some of those that you can use.

Assigning VLAN IDs to Virtual NICs

With this approach, you create a single external virtual switch and then configure the VLAN ID of each virtual NIC to link it to the required VLAN. The benefits are as follows:

  • The solution is software based, so it can be automated or orchestrated for self-service cloud computing.
  • Scalability is not an issue because the approach is software based.
  • The solution requires very little infrastructure configuration.
  • The solution is secure because only the virtualization administrator can configure the VLAN ID.

Figure 4-9 shows the solution. The network administrator creates the required VLANs and sets up a trunk port in the switch. The physical NIC in the host is connected to the trunk port. An external virtual switch is created, as usual, and connected to the physical NIC. Virtual machines are created, as usual, and their NICs are connected to the single virtual switch, even though lots of virtual machines are going to be connected to lots of VLANs. The trick here is that the virtualization administrator (or the orchestration software) assigns each virtual NIC to a specific VLAN. The trunk is passed into the virtual switch, and this allows the virtual NIC to be bound to and communicate on the assigned VLAN and only the assigned VLAN.

Note that if you trunk the switch port, every virtual NIC that is assigned to the switch port must be assigned a VLAN ID to communicate.

Figure 4-9 Using virtual NICs to assign VLAN IDs


You can configure the VLAN ID of a virtual NIC as follows:

1. Open the settings of the virtual machine.
2. Click the required virtual network adapter.
3. Select the Enable Virtual LAN Identification check box and enter the required VLAN ID.

You can change virtual NIC VLANs while a virtual machine is running. You can also change the VLAN ID of a virtual network by using PowerShell:

Set-VMNetworkAdapterVLAN -VMName "Virtual Machine 1" -Access -VLANID 101

This is a simple scenario; Virtual Machine 1 has only a single virtual NIC, so we can just target the virtual machine to assign the VLAN to that virtual NIC. The virtual NIC is configured in access mode (a simple assignment) to tag only traffic for VLAN 101.

What if we had to assign a VLAN to a virtual machine with more than one virtual NIC, such as Virtual Machine 2, previously seen in Figure 4-9? Set-VMNetworkAdapterVLAN lets us target a specific virtual NIC, but targeting by name is a problem, because the name of each virtual NIC in the virtual machine settings is the rather anonymous Network Adapter, no matter how many virtual NICs the virtual machine has. So which one do you configure? The trick is to work with the virtual NIC objects themselves. First, you could run the following to get the virtual NICs of the virtual machine:

$NICS = Get-VMNetworkAdapter -VMName "Virtual Machine 2"

The results of the query are stored as an array in the $NICS variable. An array is a programming construct that contains more than one value. Each result is stored in the array and is indexed from 0 to N. In this case, Virtual Machine 2 has two virtual NICs. That’s two entries in the array, with the first being indexed as [0] and the second indexed as [1]. We know that we want to configure the second virtual NIC to be in VLAN 102:

Set-VMNetworkAdapterVLAN -VMNetworkAdapter $NICS[1] -Access -VLANID 102

If you want, you can flatten the entire solution down to a single, difficult-to-read line, in the traditional PowerShell way:

Set-VMNetworkAdapterVLAN -VMNetworkAdapter `
(Get-VMNetworkAdapter -VMName "Virtual Machine 2")[1] -Access -VLANID 102

Using a Virtual Switch VLAN Trunk

Another approach to dealing with VLANs is to simply pass the VLAN trunk up through the virtual switch port and into the virtual NIC of the virtual machine. In the example in Figure 4-10, the virtual NIC of Virtual Machine 1 will be configured in access mode to connect to a single VLAN. The virtual NIC of Virtual Machine 2 will be configured in trunk mode to allow VLANs that are specified by the Hyper-V or cloud administrator.

Figure 4-10 Trunk mode for a virtual NIC


There is a certain level of trust being placed on the administrator of the guest operating system in a virtual machine that is granted trunk mode access to VLANs. The guest OS administrator should know the following:

  • How to configure VLANs in the guest OS
  • What traffic to send over what VLAN
  • Not to misuse this delegated right

There is a certain amount of control that the Hyper-V or cloud administrator retains. Only specified VLANs will be trunked, so the guest OS administrator can successfully tag traffic for only the delegated VLANs.

Creating a trunked virtual switch port through to a virtual NIC is an advanced feature that should be rarely used, and it does not have a place in the GUI. You can configure trunk mode to the virtual NIC by using PowerShell only:

Set-VMNetworkAdapterVLAN -VMName "Virtual Machine 2" -Trunk `
-AllowedVlanIdList "102-199" -NativeVlanId 102

The cmdlet has configured the virtual switch’s port for Virtual Machine 2 to be trunked for VLANs 102 to 199. A required safety measure is to specify a fallback VLAN in case the guest OS administrator fails to set VLAN IDs within the guest OS. In this example, the fallback VLAN is 102.

In this example, only one virtual NIC is in the virtual machine; you can target specific virtual NICs by using the method that was shown in the previous section. You can query the results of your VLAN engineering as follows:

PS C:\> Get-VMNetworkAdapterVLAN

VMName           VMNetworkAdapterName Mode   VlanList
------           -------------------- ----   --------
VirtualMachine1  Network Adapter      Access 101
VirtualMachine2  Network Adapter      Trunk  102,102-199

This is where we’re going to leave VLANs for now, but we will make a quick return during the coverage of NIC teaming.

Supporting NIC Teaming

If you ever delivered presentations on or sold Hyper-V implementations before Windows Server 2012, you were guaranteed one question: is there built-in NIC teaming? Finally, the answer is yes—and it is completely supported for Hyper-V and Failover Clustering.

NIC teaming, also known as NIC bonding or load balancing and failover (LBFO) for NICs, was not supported by Microsoft in the past. Third parties such as server manufacturers offered NIC teaming software. But this software made huge changes to how Windows and Hyper-V networking worked, and Microsoft never supported these solutions in any way on their operating systems or for virtualization. Microsoft Support has a policy that could require customers to re-create a problem without third-party NIC teaming to prove that the add-on is not the cause of a problem.

But now Windows Server 2012 includes a NIC teaming solution. We no longer need to use third-party NIC teaming software, although hardware vendors might continue to offer it with the promise of additional features, albeit with the price of losing support from Microsoft.

If you have used NIC teaming in the past, you might be tempted to skip ahead in the chapter. Instead, we urge you to read this section of the book because it might prevent you from making some mistakes that are caused by common misunderstandings of how NIC teaming works. You will also require the knowledge in this section when you start to look at enabling Receive-Side Scaling (RSS, used by SMB Multichannel) and Dynamic Virtual Machine Queue (DVMQ).

Understanding NIC Teaming

In NIC teaming, we group NICs together to act as a single unit, much as we use RAID to group disks. This results in the following:

Load Balancing When we have two or more NICs, we can distribute and balance the network traffic across each NIC in a team. This is sometimes referred to as NIC or link aggregation, but those terms can mislead those who are not knowledgeable about NIC teaming. Although a NIC team does aggregate NICs, you don’t just get a 20 GbE pipe when you team together two 10 GbE NICs. Instead, you get the ability to distribute traffic across two 10 GbE NICs, and how that distribution works depends on how you configure the team, and that decision depends on various factors that you will soon be familiar with.
Failover Services that are important to the business usually have a service-level agreement (SLA) indicating that they must be available (not just operational) to the customer (or the end user) a certain percentage of the time. With features such as fault-tolerant storage and highly available Hyper-V hosts, we can protect the SLA from the operational point of view, but networking affects the availability of the service; a single faulty switch can bring down the entire service.
By using failover in NIC teaming, we put in more than one NIC to cover the eventuality that something will fail in the networking stack. In Figure 4-11, there are two NICs in the NIC team. The Hyper-V virtual switch is connected to the team instead of to a physical NIC. That means that all virtual NICs have two paths in to and out of the physical network. Both physical NICs are connected to different switches (possibly in a stack, or one logical switch configuration), and each switch has two paths to the upstream network. In this configuration, 50 percent of the entire physical network, from the NICs to the upstream network, could fail, and the virtual machines would still remain operational via one path or another through the vertical stack.
It is this high availability that is so desired for dense enterprise-level virtualization, where so many eggs (virtual machines or services) are placed in one basket (host).

Figure 4-11 NIC teaming with highly available networking


Together, these two functions are known as LBFO. Typically, NIC teaming might add more functionality, support VLANs, and require some configuration depending on the environment and intended usage.

Using Windows Server 2012 NIC Teaming

The NIC teaming in Windows Server 2012 gives us LBFO as you would expect. You can see the basic concept of a NIC team in Figure 4-12. A Windows Server 2012 NIC team is made up of 1 to 32 physical NICs, also known as team members. The interesting thing is that these NICs can be not only any model, but also from any manufacturer. The only requirement is that the team member NICs have passed the Windows Hardware Quality Labs (WHQL) test—that is, they are on the Windows Server 2012 Hardware Compatibility List (HCL), which can be found at www.windowsservercatalog.com. The team members can even be of different speeds, but this is not recommended because the team will run at the lowest common denominator speed.

The ability to combine NICs from different manufacturers offers another kind of fault tolerance. It is not unheard of for there to be a NIC driver or firmware that is unstable. A true mission-critical environment, such as air traffic control or a stock market, might combine NICs from two manufacturers just in case one driver or firmware fails. If that happens, the other NIC brand or model should continue to operate (if there wasn’t a blue screen of death), and the services on that server would continue to be available.

A NIC team has at least one team interface, also known as a team NIC or tNIC. Each team interface appears in Network Connections. You will configure the protocol settings of the team interface rather than those of the team members. The team members become physical communication channels for the team interfaces.

Figure 4-12 Components of a NIC team


There are some important points regarding team interfaces:

The First Team Interface and Default Mode The first team interface is in a mode known as Default mode: all traffic is passed through from the NIC team to the team interface. You can configure and then assign a VLAN ID to the original team interface, putting it in VLAN mode.
Additional Team Interfaces You can have more than one team interface. Any additional team interfaces that you create must be in VLAN mode (have an assigned VLAN ID). All traffic tagged for that VLAN will go to that team interface. This traffic will no longer travel to the original team interface that is in Default mode.
This is useful if you are doing nonvirtual NIC networking and want to support more than one VLAN connection on a single NIC team.
The Black Hole If traffic for a VLAN cannot find a team interface with the associated VLAN ID or a team interface in Default mode, that traffic is sent to a black hole by the team.
External Virtual Switches and NIC Teaming You can configure a Hyper-V external virtual switch to use the team interface of a NIC team as the connection to the physical network, as shown previously in Figure 4-11.
In this case, Microsoft will support this configuration only if there is just one team interface on the NIC team and it is used for just the external virtual switch. In other words, the NIC team will be dedicated to just this single external virtual switch. This team interface must be in Default mode, and not assigned a VLAN ID in this scenario. All VLAN filtering will be done at the virtual NIC or guest OS levels.

NIC teaming enables us to give our hosts and virtual machines higher levels of availability in case of an outage. We can use one or more NIC teams for the management OS and associated network connections (possibly using additional team interfaces), and we can use another NIC team to connect the external virtual switch to the physical network.


NIC Teaming and Wi-Fi NICs
You cannot team Wi-Fi NICs.

Configuring NIC Teams

If you asked administrators how they configured the settings of their NIC teams in the past, most of them probably would (if they were honest) say that they just took the default options; the NIC team just worked. Some might even admit that they looked at the list of options, figured that everything was working anyway, and left the settings as is—it is best not to mess with these things.

The reality is that their success resulted from a lot of blind luck. It is only when you start to place a massive load (the kind that dense virtualization can create) or you purchase 10 GbE networking and want to see how much of it you can consume, that you soon realize that there must be more to NIC teaming, because the default options are not giving you the results that you were expecting. And there is more to it than accepting the defaults:

  • What sort of switching environment do you have? That affects how the switch ports that are connected to the team members will know that this is a team.
  • What kind of workload are you putting on the NIC team? This will determine how outbound traffic is distributed across the team and how inbound traffic is handled by the switches.
  • How will your NIC team configuration impact the usage of advanced NIC hardware features?

The first choice you have to make is related to the kinds of switching in the physical network that the NIC team is connected to:

Switch-Independent Teaming A NIC team configured for Switch-Independent Teaming is probably connected to multiple independent switches, but it can be used where there is just a single switch. The switches connected to this type of NIC team have no participation in the functionality of the team and require no manual or automated configuration.
Normally, all team members are active and share the burden if one of them fails. The Switch-Independent Teaming option has one odd feature that you can use: you can configure one of the team members to be a hot standby NIC that will be idle until a team member fails. The usefulness of this feature might be limited to troubleshooting; if you suspect a driver, NIC team configuration, or switch issue is being caused by the NIC team load balancing in a two-member NIC team, you could set one of the team members to be a hot standby (without breaking the team) to see whether that resolves the issue.
Switch-Dependent Teaming As the name suggests, with this kind of NIC team, there is a dependence on the switches having some kind of configuration. In Switch-Dependent Teaming, all of the switch ports that the NIC team is connected to must be a part of a single switch or logical switch—for example, a stacked switch made up of multiple appliances.
There are two ways to configure Switch-Dependent Teaming, depending on the physical switching environment.
The first is Static Teaming, also called Generic Teaming (see IEEE 802.3ad draft v1 for more information). This configuration requires the server administrator to configure the team, the network administrator to configure the switch ports, and for the cabling to be highly managed. Being static, it is not a very flexible solution in a dynamic cloud environment.
The second approach is Link Aggregation Control Protocol (LACP) or Dynamic Teaming (see IEEE 802.3ad-2000 for more information). LACP is a layer 2 control protocol that can be used to automatically detect, configure, and manage, as one logical link, multiple physical links between two adjacent LACP-enabled devices. An LACP-enabled NIC team will reach out to LACP-enabled switches to inform the devices of the NIC team’s presence and team members. This allows for quicker and easier deployment and configuration with less human involvement, which is more cloud-friendly.

The second choice you have to make regarding the configuration of a NIC team is how to distribute the traffic across the team. There are two options:

Hyper-V Port This approach is usually, but not always, the option that you will select when creating a team for a virtual switch. Each virtual NIC that is transmitting through the NIC team is assigned (automatically by the NIC team) a team member (a physical NIC) for outbound and inbound communications. This means that if you have a team made up of 32 1-GbE team members, one virtual NIC will always be able to transmit at a maximum of only 1 GbE, depending on how much the bandwidth is being shared or throttled. At first this might seem like an illogical choice; more bandwidth is always better, right? Not always.
The reason that virtualization first was adopted was that applications were using only a small percentage of the resources that were available to them in legacy physical server installations. This included processor, memory, storage, and network bandwidth. Most workloads do not require huge amounts of bandwidth and will work just fine with a share of a single NIC. Hyper-V Port is suitable when dense hosts are deployed, with many more virtual NICs (such as those in virtual machines) than there are physical NICs (team members) in the NIC team.
Hyper-V Port is also required if we decide to turn on a feature called Dynamic Virtual Machine Queue (DVMQ) that improves network performance for virtual NICs. DVMQ is an offload that binds the MAC address of a virtual NIC to a queue on a specific physical NIC (team member). When the physical network sends traffic to a virtual NIC, it needs to know which team member to target. Hyper-V Port gives it that predictability; the physical network knows which team member to target with traffic addressed to a specific virtual NIC, so DVMQ can optimize the flow of traffic. Without this binding, the physical network would hit random or all team members, and DVMQ would be all but useless.
Note that when a team is configured for Hyper-V Port load distribution but is not used by a Hyper-V virtual switch, that team will always transmit on only a single team member.
Don’t make the mistake of thinking that Hyper-V Port doesn’t give you link aggregation. It does, but it’s a bigger-picture thing, where the sum of your virtual NICs share the total bandwidth with team member failover. You just need to remember that each virtual NIC can use only one team member at a time for network transmissions.
Address Hashing The Address Hashing option hashes the addressing of outbound packets and uses the results to distribute the packets across the members of the NIC team. With this configuration, traffic from a single source can have access to the total bandwidth of the NIC team. How that traffic is distributed depends on how the hashing algorithm performs. There are three types of data that Address Hashing load distribution can automatically choose from:
  • 4-tuple hash: This method hashes TCP/UDP ports and it is the most granular data offering the best results. It cannot be used for non-TCP or non-UDP traffic and it cannot be used for encrypted data, such as IPsec, that hides the TCP/UDP ports.
  • 2-tuple hash: Address hashing will use the source and destination IP addresses from the packets.
  • Source and destination MAC addresses: This is used if the traffic is not IP based.

There are four basic types of NIC teams based on the configuration of switch dependency and load distribution. Notes on each configuration are shown in Table 4-1.

Table 4-1: NIC team configurations

Switch-Independent with Hyper-V Port Each virtual NIC sends and receives on the same team member.
Best used when:
• The number of virtual NICs greatly exceeds the number of team members.
• You want to use DVMQ.
• You do not need any one virtual machine to exceed the bandwidth of a single team member.

Switch-Independent with Address Hashing Sends across all team members and receives on just one team member NIC.
Best used when:
• Switch diversity is important.
• You need to have a standby team member.
• You have heavy outbound but light inbound services, such as web servers.

Switch-Dependent with Hyper-V Port Each virtual NIC sends on a single team member. Inbound traffic is subject to the (physical) switch’s load distribution algorithm.
Best used when:
• You want to use LACP.
• You do not need any one virtual machine to exceed the bandwidth of a single team member.

Switch-Dependent with Address Hashing Outbound traffic is sent across all team members. Inbound traffic is sent across all team members, based on how the (physical) switch is configured.
Best used when:
• Switch diversity is important.
• You want maximum bandwidth availability for each connection.

You might be thinking that there is more to NIC teaming than you previously believed. Don’t think that this is Microsoft just making NIC teaming complex. It’s not that at all; go have a look at any NIC teaming software or manuals that you have been using before Windows Server 2012 and you’ll soon see that these options are not all that unique. Most of us have never really had to deal with large-capacity networking before, so we’ve never really had an opportunity to see how a misconfigured team with the default options can underperform. Our advice is to take some time reading through the options, maybe reviewing it a few times, before moving forward. You might even want to take note of what page this guidance is on so you can quickly return to it in the future.


Learn More about Windows Server 2012 NIC Teaming
Jose Barreto and Don Stanwyck of Microsoft presented on the subjects of NIC Teaming and Server Message Block (SMB — the file server protocol) Multichannel at the TechEd North America 2012 conference. You can find the video recording and the slides of this presentation (WSV314) at the following site:

Creating NIC Teaming in Virtual Machines

Are you wondering who in their right mind would want to create a NIC team in a virtual machine? In some scenarios, it is required—for example, when virtual NICs use Single-Root IO Virtualization (SR-IOV). When SR-IOV is enabled on a host, all connections to the physical NIC bypass much of the management OS, including NIC teaming. That means a virtual switch (and therefore all the connecting virtual machines) that is enabled for SR-IOV does not have network-path fault tolerance. We can solve that by providing two SR-IOV NICs in the host, each with its own virtual switch, and by putting two virtual NICs in each required virtual machine, and then by enabling NIC teaming in the virtual machine. You can see this scenario in Figure 4-13.

Figure 4-13 Enabling NIC teaming in a virtual machine for SR-IOV


Windows Server 2012 NIC teaming is not available for legacy operating systems, so you will need to use Windows Server 2012 as the guest OS for this architecture.

The process for creating the team inside the virtual machine will be the same as it is for a physical server. However, you must configure each virtual NIC that will be a team member in the virtual machine to allow NIC teaming. There are two ways to do this, both of which are available to only the Hyper-V administrator or cloud management system.

In Hyper-V Manager, you can do the following:

1. Open the settings of the virtual machine.
2. Expand the first virtual NIC that will be a team member and browse to Advanced Features.
3. Select the check box labeled Enable This Network Adapter To Be Part Of A Team In The Guest Operating System.
4. Repeat this configuration change for each virtual NIC that will be a team member in the guest operating system.

Alternatively, you can use PowerShell. The first example configures all virtual NICs to allow NIC teaming:

Set-VMNetworkAdapter -VMName "Virtual Machine 1" -AllowTeaming On

If a virtual machine has many virtual NICs and you want to target this setting change, you could run this code snippet that uses an array to capture and configure the virtual NICs:

$VNICS = Get-VMNetworkAdapter -VMName "Virtual Machine 1"
Set-VMNetworkAdapter -VMNetworkAdapter $VNICS[0] -AllowTeaming On
Set-VMNetworkAdapter -VMNetworkAdapter $VNICS[1] -AllowTeaming On

Now these NICs are configured to support NIC teaming in the guest OS.

How will you configure switch dependency and load distribution in a guest OS NIC team? There is no choice to make; you will find that you can use only Switch-Independent teaming with Address Hashing inside a guest OS. A guest OS NIC team can be created with lots of virtual NICs, but Microsoft will support only the solution with two team members (virtual NICs in the team).
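
Creating the team inside the guest OS looks just like it does on a physical server. The following sketch assumes a Windows Server 2012 guest with two virtual NICs named Ethernet and Ethernet 2, and it uses the only supported combination of settings:

# Run inside the guest OS: a two-member, switch-independent team
# using an address-hashing load distribution method
New-NetLbfoTeam -Name "GuestTeam" -TeamMembers "Ethernet", "Ethernet 2" `
-TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts -Confirm:$false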


Don’t Go Crazy with Guest OS NIC Teaming
You should not do something just because you can, and guest OS NIC teaming is a perfect example. Your usage of this solution should be very limited. For example, say you are enabling SR-IOV for some virtual machines, and their workloads need network-path fault tolerance—or you’re a consultant with limited resources who needs to demonstrate the technology.
If your scenario doesn’t require NIC teaming in the guest OS, continue with the simpler and more cloud-friendly design in which a virtual NIC connects to a virtual switch that is connected to a NIC team.

Creating and Configuring NIC Teams

After you have determined your NIC team design, you can create one. You can do this by using the GUI or PowerShell. In the GUI, you can get to the NIC Teaming utility in one of two ways, opening the window shown in Figure 4-14:

  • Launch LBFOADMIN.EXE.
  • Open Server Manager, browse to Local Server, and click the hyperlink beside NIC Teaming (which will be set to either Disabled or Enabled, depending on whether there is a NIC team).

Figure 4-14 The NIC Teaming console


The Network Adapters view under Adapters And Interfaces shows the NICs that are installed in the machine as well as their speed and current team membership status. You can create a new team in two ways. The first is to select each NIC, expand Tasks under Adapters And Interfaces, and select Add To New Team. This opens the New Team window (Figure 4-15) with the NICs already preselected. The second method is to click Tasks under Teams and select New Team. This opens the New Team window with no NICs preselected. Now you will configure the team:

1. Select or modify the selection of NICs that will be team members.
2. Name the team. This will be the name of the first team interface and how the team interface will appear in Network Connections.

Figure 4-15 Creating a new NIC team

3. Expand Additional Properties to configure the NIC team.
You configure the NIC team by using three drop-down list boxes:
  • Teaming Mode: Choose from Static Teaming (Switch-Dependent), Switch-Independent, and LACP (Switch-Dependent).
  • Load Balancing Mode: The options are Address Hash and Hyper-V Port.
  • Standby Adapter: This is available only in Switch-Independent teaming mode. You select which team member will be the hot standby if another member fails.
If you are creating a NIC team for a Hyper-V Switch, you are finished and can click OK. However, if your intention is to bind this team interface to a VLAN, continue to step 4.
4. You can click the hyperlink beside Primary Team Interface to open the New Team Interface window. Here you can choose to leave the Team Interface in Default mode (accepting packets tagged for all VLANs, except those tagged for other Team Interfaces on the NIC team). Or you can bind the new Team Interface to a VLAN by selecting Specific VLAN and entering the VLAN ID.

Your new team will be created when you click OK. The team will appear under Teams. The NICs will be shown as team members in Network Adapters under Adapters And Interfaces. The new NIC team will also appear as a Microsoft Network Adapter Multiplexor Driver in Network Connections using the NIC team name as the connection name.

You can also create a team by using PowerShell. The following example creates it in LACP Switch-Dependent mode, with Hyper-V Port load distribution, using two selected NICs. If every NIC was to be in the team, you could replace the names of the NICs with the * wildcard.

New-NetLBFOTeam -Name "ConvergedNetTeam" -TeamMembers "Ethernet", "Ethernet 2" `
-TeamingMode LACP -LoadBalancingAlgorithm HyperVPort -Confirm:$false

The -Confirm:$false flag and value instruct this cmdlet not to ask for confirmation. You can use this option with a cmdlet if there is no -Force flag. The -LoadBalancingAlgorithm flag requires some special attention. If it is not specified, the team defaults to Address Hashing. But if you want to specify a load-balancing algorithm, the cmdlet demands that you know precisely which kind of load distribution you require for your NIC team. It’s not just a matter of Hyper-V Port vs. Address Hashing, as in the GUI. The cmdlet offers the Hyper-V Port method plus each of the three possible Address Hashing methods:

  • HyperVPort: The Hyper-V Port load distribution method.
  • IPAddresses: 2-tuple hash but requires IP.
  • MacAddresses: The least efficient but does not depend on IP.
  • TransportPorts: 4-tuple hash is the most efficient but requires visibility of destination TCP/UDP ports.

If you want to keep it simple and duplicate what is in the GUI, do one of the following:

  • Do not use the flag if you want generic Address Hashing.
  • Use the flag and specify HyperVPort.

If this is a NIC team for normal server communications (in other words, not for a Hyper-V virtual switch), you can configure the protocols and IP addressing of the team interface in Network Connections. Do not configure the protocol settings of the team members. The only selected protocol in the team members should be Microsoft Network Adapter Multiplexor Protocol.

You can always return to the NIC Teaming console to modify or delete teams, and you can do the same in PowerShell by using Set-NetLBFOTeam and Remove-NetLBFOTeam. The full list of LBFO PowerShell cmdlets and their documentation can be found at http://technet.microsoft.com/library/jj130849.aspx.

The status of the team members is shown in Adapters And Interfaces, and the health of the team is in Teams. You can also retrieve team status information by using Get-NetLBFOTeam.
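
Here is a minimal sketch of a health check from PowerShell, assuming the ConvergedNetTeam team created earlier in this section:

# Overall health of the team
Get-NetLbfoTeam -Name "ConvergedNetTeam" | Format-List Name, Status, TeamingMode, LoadBalancingAlgorithm

# Status of the individual team members
Get-NetLbfoTeamMember -Team "ConvergedNetTeam"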

If a team or team members are immediately unhealthy after the creation of the team, double-check both the NIC team and the switch configurations. For example, an LACP team will be immediately unhealthy if the switches are incompatible; the NIC team will have a fault and the team members will change their state to Faulted: LACP Negotiation. You can also find status information for NIC teaming in Event Viewer at Application And Services Logs ⇒ Microsoft ⇒ Windows ⇒ MsLbfoProvider ⇒ Operational. Strangely, important issues such as a NIC being disabled create Information-level entries instead of Warning or Critical ones.

After your team is ready, you should test it for the LBFO functionality:

  • Put outbound and inbound loads through the NIC team to determine the maximum throughput.
  • Test the failover by removing a cable and/or disabling a NIC one at a time.
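
One simple way to simulate a team member failure from the management OS is sketched here; the team member name Ethernet 2 is an assumption, and a test like this should be run only before the host carries production workloads:

# Temporarily disable one team member and confirm that traffic fails over
Disable-NetAdapter -Name "Ethernet 2" -Confirm:$false

# Bring the team member back and confirm the team returns to a healthy state
Enable-NetAdapter -Name "Ethernet 2"
Get-NetLbfoTeam -Name "ConvergedNetTeam"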

Creating and Configuring Team Interfaces

If your NIC team is not going to be used to connect a Hyper-V virtual switch to the physical network, you can add more team interfaces to the team. Each additional team interface will be in VLAN mode, binding it to a specific VLAN. This will allow the server in question to connect to multiple VLANs or subnets without the expense or complexity of adding more physical connections.

You can use the NIC Teaming console to add a team interface:

1. Select the team you are adding the team interface to in Teams.
2. Browse to Team Interfaces in Adapters And Interfaces.
3. Select Tasks and click Add Interface. This opens the New Team Interface window.
4. Name the new interface. It is a good idea to include something descriptive such as the purpose or VLAN ID in the name.
5. You can have a maximum of only one team interface in Default mode. Specify the VLAN ID for this VLAN mode team interface in Specific VLAN.

A new team interface is created when you click OK, and the team interface name will appear in Network Connections. You can then configure the protocols of this new connection.

The PowerShell alternative for creating a team interface (or team NIC) is as follows:

Add-NetLBFOTeamNIC -Team "ConvergedNetTeam" -Name "ConvergedNetTeam - VLAN 102" `
-VLANID 102 -Confirm:$false

If the original team interface is not going to be used to connect a Hyper-V virtual switch, you can switch it from Default mode to VLAN mode by editing it in the GUI, or by running PowerShell:

Set-NetLBFOTeamNIC -Team ConvergedNetTeam -Name "ConvergedNetTeam" -VLANID 101

You should be aware that changing the team interface to use a different VLAN will change its name from ConvergedNetTeam to ConvergedNetTeam - VLAN 101.

If you view the team interfaces in the NIC Teaming console or run Get-NetLBFOTeamNIC, you will see that one of the team interfaces is the primary team interface. This is the original team interface.
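
For example, the following sketch (again assuming the ConvergedNetTeam team) lists the team interfaces so that you can see the VLAN binding of each one and which one is primary:

Get-NetLbfoTeamNic -Team "ConvergedNetTeam" | Format-List Name, VlanID, Primary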

Connecting a Hyper-V Switch to the NIC Team

You can create a new virtual switch or change a virtual switch to use the NIC team. You can do this in Virtual Switch Manager by using these steps:

1. Get the device name (from Network Connections or Get-NetAdapter | FL Name, InterfaceDescription).
2. Create or edit an external virtual switch that will not be used for SR-IOV (remember that NIC teaming in the management OS and SR-IOV are incompatible).
3. Change the External Network to select the device that is the team interface for the NIC team.

It’s a little easier in PowerShell because you can use the team interface name instead of the device name (InterfaceDescription). Here is how to modify an existing external virtual switch to use the team interface of a new NIC team:

Set-VMSwitch -Name External1 -NetAdapterName ConvergedNetTeam

Now you know how to create, and importantly, design NIC teams to suit your workloads. Don’t go rushing off yet to create lots of NIC teams. First you’re going to want to learn how to take advantage of some powerful hardware features in Windows Server 2012 Hyper-V, and then you’ll get to read how you might need only one or two NIC teams in places where you might have used four or more in the past.
