Chapter 7

Deploying Hosts and Clusters in VMM 2012

Chapter 4, “Setting Up and Deploying VMM 2012,” introduced Hyper-V hosts. This chapter provides more details about adding and managing Hyper-V hosts, and it explains how to use VMM to create a cluster from Hyper-V servers deployed on bare-metal machines. It shows how to add servers to trusted domains, untrusted domains, and workgroups. It explains how to work with VMware and Citrix virtualization servers in VMM and how to maintain and update Hyper-V clusters.

The chapter includes the following topics:

  • Deploying Hyper-V clusters on bare-metal
  • Using dynamic optimization, power optimization, and cluster remediation to keep your Hyper-V clusters healthy, efficient, and up-to-date
  • Using VMM 2012's improved integration with VMware to deploy and manage ESX/ESXi hosts and clusters
  • Deploying and managing XenServer hosts and clusters

Adding Existing Hyper-V Servers and Clusters

A new VMM installation has a single, empty host group called All Hosts. Chapter 4 explained how to create host groups and add Hyper-V hosts to them.

You can add an existing Hyper-V server to the VMM-managed fabric using the following methods, all of which install a VMM agent on the server:

  • Using Windows Server computers in a trusted Active Directory domain
  • Using Windows Server computers in an untrusted Active Directory domain
  • Using Windows Server computers in a perimeter network

The remainder of this section describes these options. A subsequent section describes another option:

  • Using physical computers provisioned as virtual-machine hosts
This option adds a bare-metal computer with Windows Server 2008 R2 SP1 automatically deployed, including the Hyper-V role. In the Add Resource Wizard, it is the fourth Windows computer location option.

Adding a Hyper-V Server in a Trusted Domain

The large majority of servers that you add to VMM are Hyper-V servers that are members of the same domain as the VMM server or a domain that is trusted by the domain containing the VMM server. Before you can add Hyper-V servers that are located in a trusted domain, the following steps must be taken:

  • Have one or more servers available that are joined to the same domain as the VMM server, or to a domain trusted by the VMM server's domain.
  • Check the requirements for supported versions of Hyper-V virtual-machine hosts (see Chapter 4).
  • Check network access to the Hyper-V hosts that are added.
  • Validate Hyper-V clusters before you add them to VMM (although you can do this from VMM later); see the sketch following this list.

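For the validation mentioned in the last item, you can run the validation up front from PowerShell. Here is a minimal sketch, assuming the FailoverClusters module is installed and reachable node names (the names below are examples):

PS C:\> Import-Module FailoverClusters
PS C:\> Test-Cluster -Node hv1.private.cloud, hv2.private.cloud

Test-Cluster writes a validation report that you can review before you bring the cluster under VMM management.
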
Do not add a Windows server that does not yet run the Hyper-V role unless the server is available for a reboot, because VMM enables the role and restarts the server (see the “Adding New Hyper-V Servers” section). To add one or more Hyper-V servers or clusters to the fabric of your VMM environment, follow these steps:

1. In the VMM console, select Fabric, and choose Servers from the Navigation pane. Select Fabric Resources from the ribbon, click Add Resources, and select Hyper-V Hosts and Clusters (Figure 7.1).

Figure 7.1 Adding a host or cluster in a trusted domain

2. In the Add Resource Wizard in the Resource Location tab, choose Windows Server Computers In A Trusted Active Directory Domain. Click Next.
3. On the Credentials tab, enter the appropriate credentials to discover and add a Hyper-V host or cluster or, alternatively, select a Run As account. These credentials will allow you to add the VMM service account to the local Administrators group on the Hyper-V hosts. Click Next.
4. On the Discovery Scope tab, specify one or more Hyper-V servers and clusters. Each server or cluster must be on a separate line. You can identify the hosts by their name, their FQDN, their IPv4 address, or their IPv6 address. By default, the discovery process uses Active Directory (AD), but you can avoid this by checking the Skip AD Verification box. When you have finished your list, click Next.
If you have hundreds of servers, you might not want to specify the hosts for discovery by name; you can specify an Active Directory query to search for the Windows Server computers.
For example, you could search for hosts and clusters that start with the letters hv (hv*) (as shown in Figure 7.2) or that contain the letters hv (*hv*).

Figure 7.2 Using an AD query to discover hosts

5. The Add Resource Wizard will generate a list of discovered computers; it is up to you to select which computers you want to add as hosts. The wizard will specify the FQDNs of the hosts, the clusters they are in, the OS, and the discovered hypervisor (in this case Hyper-V). As you can see in Figure 7.3, you can discover and add Hyper-V servers in the same step. Click Next.

Figure 7.3 Selecting hosts to be added

6. On the Host Settings tab, you can specify the host group and VM placement settings. All of these selections can be changed later, so you can ignore most of the other fields. Interestingly, if you want to add a host previously managed by another VMM server, you can select Reassociate This Host With This VMM Environment. This could save some time because the VMM agent might still be installed. When you are done, click Next.
7. When you are satisfied with the results shown on the Summary tab, click Finish.
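
The same addition can be scripted. Here is a minimal sketch, assuming a Run As account named Run_As_Domain_Admin and a host group named Production already exist (both names are examples):

PS C:\> $Credential = Get-SCRunAsAccount -Name "Run_As_Domain_Admin"
PS C:\> $HostGroup = Get-SCVMHostGroup -Name "Production"
PS C:\> Add-SCVMHost "hvserver1.private.cloud" -VMHostGroup $HostGroup -Credential $Credential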

Adding a Hyper-V Server in an Untrusted Domain

Some of your Hyper-V servers and clusters may be part of an untrusted AD domain. An example of this type of setup would be a hosting provider managing a customer's Hyper-V hosts from their own management domain without trusting the customer's domains.

To set up this type of Hyper-V host, take the following steps to integrate the hosts and clusters:

1. In the VMM console, select Fabric, and choose Servers from the Navigation pane. Right-click the host group to which you want to add the hosts or clusters; or select Fabric Resources from the ribbon, click Add Resources, and select Hyper-V Hosts and Clusters.
2. In the Add Resource Wizard in the Resource Location tab, choose Windows Server Computers In An Untrusted Active Directory Domain. Click Next.
3. On the Credentials tab, specify the appropriate credentials to discover the host in the untrusted AD domain. If the Run As account does not exist yet, you can create one. Select a suitable Run As account, and click Next.
4. Enter the FQDN or IP address of the Hyper-V host in the untrusted domain, and click Add. When you are done adding hosts, click Next.
5. Choose a host group and click Next.
6. Confirm the summary and click Finish. After a minute or so, you will see the Hyper-V host in the untrusted AD domain in your Hosts view, as shown in Figure 7.4.

Figure 7.4 The Hosts view


Adding a Hyper-V Server in a Perimeter Network

You can add a Hyper-V server that is configured to be in a workgroup and is unrelated to any AD domain. Often such Hyper-V hosts are in a perimeter network or DMZ. For this category, an encryption key or a CA-signed certificate is required to authenticate the Hyper-V server from the perimeter network before VMM will accept it.

A non-enterprise scenario for this type of setup is a demo laptop with Windows Server 2008 R2 SP1 plus Hyper-V in a workgroup and several Hyper-V guests, including a domain controller and VMM running on the same laptop. Before you can add the Hyper-V host, you must install the VMM agent locally from VMM media and provide an encryption key. VMM requests this key when you add a Hyper-V Server in a perimeter network. An alternative is to provide a CA-signed certificate.

To add a Hyper-V server in a perimeter network, follow these steps:

1. From the VMM media, run setup.exe.
2. Click Local Agent at the bottom of the VMM Install screen under Optional Installations.
3. Click Next on the Welcome screen.
4. Accept the Microsoft Software Notice Terms and click Next.
5. Accept or change the default destination folder of the VMM Agent installation and click Next.
6. Check the box for This Host Is On A Perimeter Network, and provide an encryption key. The key is stored in a security file, which by default is saved in the root of the VMM Agent installation folder. You can optionally use a CA certificate for encrypting communications with this host; in this case, you must provide the thumbprint of the certificate (Figure 7.5). Click Next.

Figure 7.5 Specifying the security file folder

7. Choose how VMM contacts this host: either use the local computer name or use its IP address. Click Next.
8. Accept or change the VMM Agent ports used to communicate with the VMM server. By default, VMM uses port 5986 for communications with the virtualization host and port 443 to transfer files between VMM and host computers. Click Next to continue.
9. If you are ready, click Install to start the installation.
10. Click Finish to complete the local-agent installation.

On the VMM management server, you can now directly add the Windows server; a reboot of the perimeter host is not necessary. These are the steps:

1. In the VMM console, select Fabric, and choose Servers from the Navigation pane. Either right-click the host group to which you want to add the hosts or clusters or, alternatively, select Fabric Resources from the ribbon and click Add Resources and select Hyper-V Hosts And Clusters.
2. In the Add Resource Wizard in the Resource Location tab, choose Windows Server Computers In A Perimeter Network and click Next.
3. On the Target Resources tab of the Add Resource Wizard, enter the name of the computer, add the encryption key, and provide the path of the security file. You can optionally select the host group to which the perimeter host is added. Click Add.
4. After clicking Add, you can add more Hyper-V perimeter hosts or click Next to continue.
5. Optionally, add one or more virtual-machine paths or use the defaults. Click Next.
6. When you are happy with the results, click Finish.
7. In a short while, your new perimeter Hyper-V hosts will appear in the host groups you selected. You can recognize the hosts easily because they don't have fully qualified domain names. If you had chosen to add perimeter hosts with their IP addresses, you would not have seen the host names.
8. If you right-click a host after it has been refreshed, you can check a large number of properties; just walk down the different tabs. The Status tab should give you an idea about how successful the addition of a host has been (Figure 7.6).

Figure 7.6 Checking the host status


In the Hosts view of one of your host groups, you can add additional fields by right-clicking one of the headers. Check one or more fields and they show up in the current view. The Group By This Column option is at the bottom; this option gives you a great deal of flexibility in the way hosts can be viewed. Figure 7.7 shows an example of the Managed Computers view.

Figure 7.7 Group view of managed computers


Note
For service deployment, a two-way trust is required between the VMM server and the VM guest OS. The domain membership of the Hyper-V host should not impact service and application deployment. That said, a service can be tiered across different hypervisors without any limitations, as long as that two-way trust is maintained.

Adding New Hyper-V Servers

If you want to use a Windows Server 2008 or 2008 R2 server that doesn't have the Hyper-V role enabled, you can still use any of the previous choices. The Add Resource Wizard enables the Hyper-V role for you, but the server is restarted at least once, so be careful if this server carries production workloads. Of course, according to best practices, a Hyper-V server should be fully dedicated to this role, and you should add only servers without other duties.

Adding New Hyper-V Servers with Bare-Metal Deployment

If you have only a few Hyper-V hosts, you don't really need to learn bare-metal deployment. You will probably be able to provision five Hyper-V hosts much faster than you will be able to set up the prerequisites and successfully test a bare-metal deployment with VMM. Nevertheless, if you are just curious about the technology or if your organization expects you to rapidly expand the number of Hyper-V hosts, then bare-metal deployment is for you.

So how does bare-metal deployment work in VMM? First, a number of prerequisites need to be met before you can actually start deploying your first Hyper-V host without ever touching the server. It is always nice to do some work up front and then let the system do all the work for you. You may finally get an opportunity to read your RSS feeds.

Prerequisites

You can find the prerequisites for bare-metal deployment in Chapter 4, but a shortened version of them is given here. Before you can launch a bare-metal deployment, you'll need the following:

  • A Windows Server 2008 R2 SP1 server with Windows Deployment Services (WDS) serving as a PXE server. There is an alternative route if you cannot implement a PXE server in your environment.
  • A DHCP server.
  • Physical hosts that support Hyper-V and include a baseboard management controller (BMC) supporting IPMI, DCMI, or SMASH. In earlier versions, a custom provider route was offered; however, you should use the standard protocols first. Also, be careful to update the firmware to a version that supports one of the required protocols. Examples of supported BMCs include:
    • HP: Integrated Lights Out (iLO)
    • Dell: Dell Remote Access Controller (DRAC)
    • IBM: Remote Supervisor Adapter (RSA)
  • A Windows Server 2008 R2 operating-system image.

Bare-Metal Deployment Steps

You'll need to complete quite a few steps for a successful bare-metal deployment. As with most tasks, the process can be broken down into a few basic procedures. Here is an overview of that process.

1. Configure the physical computer:
  • Rack and stack the server(s).
  • Configure the local disk (array, logical disk).
  • Set the BIOS details (virtualization, power, enable PXE).
  • Set the BMC credentials.
  • Disconnect any SAN ports.
  • Update all relevant firmware, especially for the BMC.
2. Optionally, create DNS entries for the new hosts (in case your DNS replication takes a long time).
3. Configure a Windows Deployment Services (WDS) server to provide Preboot eXecution Environment (PXE) services. Add this server to VMM, and have a DHCP server available.
4. Prepare a sysprepped VHD, rename it to reflect its use, and place it in the VMM library. Add other resources such as custom device drivers and unattend scripts to the library.
5. Create one or several host profiles to describe the configurations of the Hyper-V hosts you want to deploy (hostname, VHD file, network, disk, drivers, OS configuration, host settings).
6. Start the Add Resources Wizard to discover physical computers, supply information (host group, host profile), and start the deployment process.
7. As soon as you commit your job, the physical host will turn off and on via the BMC. Then it will boot from the network, look for a PXE server, receive an IP address from a DHCP server, and initiate the bootstrap.
8. Finally, the physical computer will boot into the Windows Preinstallation Environment (WinPE) image that VMM has prepared. The WinPE agent will do most of the remaining work: format and partition the disks, transfer the VHD containing the OS, convert a dynamic VHD to a fixed VHD, set up booting from the native VHD, inject matching drivers from the VMM library, apply the OS-customization settings, enable the Hyper-V role, and reboot the host. After the customization is finished, the VMM agent is installed and the host is refreshed in VMM.

Understanding Physical Machine Management (OOB/BMC)

In VMM you can remotely control a host via out-of-band (OOB) management if that host uses one of the supported BMCs. Think of the concept as a computer within a computer. Even when the host is switched off, you can still independently control the host and perform operations such as power off, power on, and reset.

Microsoft supports several standards-based OOB power-management-configuration-provider options:

  • Intelligent Platform Management Interface (IPMI) versions 1.5 or 2.0
  • Data Center Management Interface (DCMI) version 1.0
  • System Management Architecture for Server Hardware (SMASH) version 1.0 over WS-Management (WS-Man)

Tip
Some BMCs use case-sensitive credentials, so take this into account when you create a Run As account to access your OOB management.

Configuring BMC Settings

After a Hyper-V host has been added to VMM, you'll need to configure its BMC settings on the Hardware tab of the host's properties, as described in the following steps:

1. Right-click an existing server and select Properties from the context menu.
2. Select the Hardware tab and scroll to the bottom. Under the Advanced menu, you can manually configure the BMC settings (Figure 7.8). Check the box to enable OOB management for this physical machine, select the OOB power-management configuration provider, provide the BMC address and BMC port, and specify a valid Run As account. Click OK when you are ready.

Figure 7.8 Configuring the BMC settings

You can now control a server from VMM even if it is in a powered-off state.
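
You can script the same settings. The sketch below assumes that Set-SCVMHost accepts the same BMC parameters as New-SCVMHost (shown later in this chapter) and that a Run As account named Run_As_HP_iLO exists; verify the parameter names with Get-Help Set-SCVMHost:

PS C:\> $RunAsAccount = Get-SCRunAsAccount -Name "Run_As_HP_iLO"
PS C:\> $VMHost = Get-SCVMHost -ComputerName "hvserver1"
PS C:\> Set-SCVMHost -VMHost $VMHost -BMCAddress "172.16.3.22" -BMCProtocol "IPMI" -BMCPort "623" -BMCRunAsAccount $RunAsAccount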

Configuring a PXE Server

If you can use a PXE server for bare-metal deployment, your deployment of Hyper-V hosts can be fully automated. Of course, PXE servers are out of the question in some environments. If that is the case, you can bypass a PXE server by preparing a specially configured ISO file from which your bare-metal servers can boot. You can do this manually from USB or a virtual DVD using your BMC, or you can burn that ISO file to create a bootable DVD if you have no other options. On a positive note, the entire bare-metal deployment workflow can still be triggered from VMM with all the other steps and reporting except the PXE-boot part of the workflow.

The next steps explain how to prepare a PXE server based on Windows Deployment Services in Windows Server 2008 R2 SP1. First, place the bare-metal computers you want to convert to Hyper-V servers in the same subnet as your WDS/PXE and DHCP server. You need to do this because PXE boot messages are nonroutable. Note that no customizations are necessary in WDS: you don't have to set any parameters within WDS, and you don't have to add any images or drivers. All you have to think about is where to place the WDS server, because it makes a difference whether you combine the PXE server with a DHCP server or keep them separate. As a general best practice, you should keep the two separate if you want to avoid having to perform a custom configuration of your DHCP-server options.


Combining WDS and DHCP on the Same Server
If a DHCP server is running on the same computer as the Windows Deployment Server host, check the “Do not listen on port 67” and “Configure DHCP option 60” boxes in your DHCP configuration to indicate that this server is also a PXE server.
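
These two check boxes can also be set from an elevated command prompt using the WDSUTIL tool:

PS C:\> wdsutil /Set-Server /UseDHCPPorts:No /DHCPOption60:Yes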

Here is how a WDS/PXE server is configured:

1. On the server you have designated for Windows Deployment Services, start servermanager.msc.
2. In Server Manager, right-click Roles, select Add Roles, click Next, and then select Windows Deployment Services. Click Next.
3. At the WDS tab, leave both the Deployment Server and Transport Server role services selected and click Next.
4. On the Confirmation screen, verify that both roles have been selected and click Install. The server may need to be restarted after the installation completes.
5. The Results screen shows you whether the installation of WDS is successful. Click Close.
6. When you expand Windows Deployment Services in Server Manager, a yellow triangle appears in front of the server name. You can configure WDS by right-clicking the server and selecting Configure Server.
7. The next screen explains that the WDS server needs to be a member of an Active Directory domain, that there should be an active DHCP server and an active DNS server on the network, and that an NTFS partition should be available for image storage. As a best practice, you can make a separate partition available to WDS, but because VMM handles the image, there is no real need for this. Click Next.
8. Select the path for the remote installation folder (for example, D:\RemoteInstall) and click Next.
9. On the PXE Server Initial Settings screen, leave the default option, Do Not Respond To Any Client Computers, selected. Click Next.
10. The last screen asks you to add images to the server, but you don't need to do this because the image is stored in the VMM library. When you are ready, click Finish.
11. Windows Deployment Services is now ready for action.
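
The new PXE server must still be added to the VMM fabric (choose Add Resources and then PXE Server from the ribbon in the Fabric workspace). A minimal PowerShell sketch, assuming a Run As account with administrative rights on the WDS server and an example server name:

PS C:\> $Credential = Get-SCRunAsAccount -Name "Run_As_Domain_Admin"
PS C:\> Add-SCPXEServer -ComputerName "wds1.private.cloud" -Credential $Credential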

Creating Host Profiles

As mentioned in the overview of the bare-metal deployment steps, you need to do a number of things before the actual deployment. Your server's BIOS should be configured for Hyper-V (the usual hardware-virtualization settings), and you might want to do some configuration (enable/disable power management/C-state processor power modes) based on your own preferences.


Note
In our practice, we often need to switch off advanced power management and disable C-states to avoid crashing hosts and slowing down live migration. We effectively kill the feature that marketing calls core parking. If you can afford to thoroughly test your new servers before they go into production, you can experiment with having power-management settings enabled and make your own decision on this topic.

In addition to racking and stacking your servers and setting BIOS, this is a good time to configure your BMCs:

  • Configure your boot disks or “Boot from SAN” configuration.
  • Set an IP for a network that can connect to the PXE server and configure the BMC credentials.
  • Make a record of your servers, including the IP addresses, the MAC addresses of the network adapters, the BMC information, unique server identifiers, and so forth.

Now you are ready to leave your servers in the dark and close the data-center door behind you. You can return to your desk and open your VMM console to remotely control the bare-metal deployment from your management computer. If you have forgotten any of the previous steps, the BMC will save you another trip to the data center. So yes, BMCs are worth investing in, even if you and the servers are in the same building.

The first task in VMM is to create one or more host profiles. A host profile is like a template that describes how to configure the bare-metal host. Take the following steps to prepare a host profile:

1. Select the Fabric workspace and choose Home from the ribbon.
2. Expand Profiles in the Navigation pane, or choose Create and then Host Profile from the ribbon.
3. Enter a name and description for the Hyper-V host profile.
4. Because VMM supports deploying only versions of Windows Server that boot from a virtual hard disk (VHD), such as Windows Server 2008 R2, you need to select a VHD file from the library. This VHD must contain a sysprepped image of Windows Server 2008 R2. Browse to the VHD file, and click Next (Figure 7.9).
If you use a dynamic VHD, this file is automatically converted to a fixed-type VHD during deployment. To speed up testing, you can check the “Do not convert the VHD to fixed type during deployment” check box.

Figure 7.9 Selecting a VHD file

5. On the Hardware Configuration screen, in the Management NIC section, you can obtain an IP address through the DHCP service or allocate a static IP from a logical network, as defined in VMM Networks.
6. In the Disk and Partitions section, you can leave the defaults at one disk with one partition or use the Add Disk and Add Partition buttons to create additional ones.
Unless your disk is bigger than 2 TB, you can leave the partitioning scheme at Master Boot Record (MBR). In this section, you can navigate between disks and partitions. The default primary partition is named OS, uses all remaining free disk space, uses the NTFS file system, and is designated as the boot partition.
7. In the Driver Options section (Figure 7.10), you can leave the default selected, which automatically applies drivers that match the Plug and Play (PnP) IDs discovered on the computer during Windows installation. Alternatively, you can specify custom drivers based on a driver tag; this is covered in a later section. Click Next.

Figure 7.10 Configuring driver options

8. On the OS Configuration screen, you can specify all relevant settings for configuring the operating system, including joining a domain; specifying the local administrator password, identity information, product key, and time zone; as well as answer files and [GUIRunOnce] commands. Oddly enough, you can't select a Run As account for the local Administrator account. Click Next.
9. On the Host Settings screen, you can provide one or more virtual-machine placement paths. These paths can be configured in VMM later, so just click Next.
10. Finally, you can confirm all settings and click Finish.
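
Host profiles can also be created from PowerShell. The following is only a sketch: the VHD name, domain, and Run As account are examples, and the domain-join parameter names are assumptions you should verify with Get-Help New-SCVMHostProfile:

PS C:\> $VHD = Get-SCVirtualHardDisk | where { $_.Name -eq "W2K8R2SP1.vhd" }
PS C:\> $DomainJoinAccount = Get-SCRunAsAccount -Name "Run_As_Domain_Join"
PS C:\> New-SCVMHostProfile -Name "HyperVHostProfile01" -Description "Bare-metal Hyper-V hosts" -VirtualHardDisk $VHD -Domain "private.cloud" -DomainJoinRunAsAccount $DomainJoinAccount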

Detailed Bare-Metal Deployment Steps

To better understand the exact steps of a bare-metal deployment, study the process flow. The numbers in the following steps refer to Figure 7.11.

Figure 7.11 Explaining the process flow of bare-metal deployment

1. The VMM server issues an OOB reboot to the bare-metal server.
2. The bare-metal server boots from the WDS (PXE) server using boot.wim.
3. The VMM server receives a request to authorize the PXE boot.
4. The bare-metal server downloads WinPE from the VMM library.
5. WinPE is able to run custom commands and configure partitions on the bare-metal server as defined in a host profile prepared by the administrator. All custom commands connected to a host profile must be prepared in advance (see the “Running Post-Deployment Scripts” section for more information).
6. The bare-metal server downloads the VHD from the VMM library.
7. VMM copies the custom drivers to WinPE at runtime and injects these drivers into the VHD automatically, based on the settings in the host profile (using PnP ID matching or tags). These custom drivers are placed and tagged in the VMM library (see the “Adding Drivers” section for more information).
8. The operating system (OS) is customized by the unattend.xml file, and the bare-metal server is joined to the domain as part of the sysprep customization phase when the machine is booted into the VHD for the first time. The domain information is contained in the unattend.xml file that VMM generates. This unattend.xml file contains settings from the host profile and is merged with the custom unattend.xml that the user can add to the host profile.
9. The VMM agent that is running in Windows PE enables the Hyper-V role on the VHD before it is booted for the first time. This is done to save a reboot step later. Although the VMM host agent will also try to enable Hyper-V when it is installed, this step is ignored because it was enabled earlier. This is why you see “Enabling Hyper-V role” twice in the job log.
10. When these steps are complete, the VMM agent is installed.

If you click on the Jobs pane and select the related VMM job, you will be able to see which steps ran successfully and which failed.


Note
Be careful not to modify the boot.wim file manually. The boot.wim on the WDS server is customized by the VMM server and should not be modified by any method other than using the Update Windows PE action from the VMM UI or the Publish-SCWindowsPE cmdlet.

Discovering and Deploying Hosts

If you have made all the preparations described in the previous paragraphs, you are ready to test your bare-metal deployment. Do not make any special configurations yet. Just see if you can successfully discover the hosts, boot from the PXE server into WinPE, and deploy the VHD with all its configuration steps. If that succeeds, you are ready to make some additional configurations to perfect your bare-metal deployment.

Before you begin the bare-metal deployment, connect to your BMC screen so you can see what happens. If the PXE boot does not kick in automatically, press F12 to manually force the server to boot from the network. In some blade-server enclosure configurations, it is possible to configure a one-time boot for PXE, which will force the server to boot from the network at the next reboot.

Let's kick off a bare-metal deployment by following these steps:

1. In the VMM console, select the Fabric workspace and choose Fabric Resources from the ribbon.
2. Right-click Servers in the Navigation pane and choose Add Hyper-V Hosts and Clusters, or choose Add Resources and then Hyper-V Hosts And Clusters from the ribbon.
3. On the Resource Location screen, select Physical Computers To Be Provisioned As Virtual Machine Hosts and click Next.
4. On the Credentials and Protocol screen, specify a valid Run As account to discover computers that use BMC technology, select the correct OOB management protocol, and select a port, as shown in Figure 7.12. Click Next.

Figure 7.12 Providing credentials and the OOB protocol

5. On the Discovery Scope tab, specify one IP address, an entire IP subnet, or an IP range, as shown in Figure 7.13. Click Next.

Figure 7.13 Discovering bare-metal servers

6. On the Target Resources tab, verify the correct IP address and SMBIOS ID before you select one or more servers. Check the correct servers and click Next.
7. On the Provisioning Options tab, select the correct host group, choose the appropriate host profile, and click Next.
If the host profile you selected allocates a static IP address from a logical network, you must specify the MAC address of the management NIC and its IP configuration on the next page of the wizard.
8. On the Deployment Customization tab, double-check the selected computers, provide names, and click Next.
If you redeploy a server that already has a computer object in Active Directory, you can check the Skip Active Directory check box for this computer name.
9. When you are satisfied with the summary, click Finish to start the bare-metal deployment.
Check the job information in the Jobs pane and the BMC screen to see what steps the server goes through.
As soon as the bare-metal deployment job starts, you can follow its deployment steps. VMM uses BMC to power on the physical machine and initiates the PXE boot. You have exactly 10 minutes before this step times out, and there is no way to get additional information about this step until the server is able to PXE-boot into WinPE.

At this point, the following steps occur without your intervention.

1. When the physical machine has received an IP address, it can download boot.wim from DCMgr\Boot\Windows\Images on the PXE server. If the download is successful, the machine boots into WinPE.
2. WinPE is initialized by the command-line tool wpeinit.exe, and it installs PnP devices, processes unattend.xml settings, and loads network resources.
3. When the network is available, VMM registers the physical machine, configures the disk(s), and starts to transfer the VHD from the VMM library.
4. Now that PXE boot has succeeded and the physical server has successfully started WinPE, progress is shown in the Create A New Host job, as shown in Figure 7.14. VMM uses Background Intelligent Transfer Service (BITS) to deploy the file.

Figure 7.14 Progress of the bare-metal deployment job

5. If you did not check “Do not convert the VHD to fixed type during deployment” in the host profile, the job converts the VHD from dynamic to fixed.
Next is the setup of the Boot Configuration Data (BCD) store, which enables booting from VHD (also called native VHD).
6. Custom drivers are injected from the VMM library to the correct location in the VHD.
7. The OS-customization script is executed using the information provided in the host profile.
8. After reboot, the matching drivers are installed.
9. Hyper-V is enabled.
10. The OS-customization script is run.
11. The server performs a final reboot.
12. The VMM agent is installed.
13. The new host is refreshed in VMM so that all the details become available.

This completes a full cycle of a bare-metal deployment of a Hyper-V server. You can deploy one server at a time; however, if the process works well, it is just as easy to deploy an entire blade enclosure with 16 blades or even more. This is a big time-saver and, most important, you'll end up with identical servers. Uniformly installed servers help you lay a solid foundation for your private cloud.

Adding Drivers

As with any generation of Windows, the operating system includes support for device drivers of the current generation of servers. As soon as a vendor introduces a new server generation with a new class of network or storage devices, you are out of luck and have to do your own plug before you play. Fortunately, VMM offers the opportunity to inject drivers into the VHD from which a Hyper-V host boots. So, if PnP does not recognize any of your server's device drivers, you'll need to do some extra preparation and testing. This is what you have to do:

1. Go to your server vendor's website and download the drivers you need.
2. Extract the driver package to a temporary location.
3. Create a directory called Drivers in the root of your VMM library.
4. Create another directory with a short name for your server model. This name is the tag for your custom drivers. The tag is referenced in the host profile that describes the configuration of your server.
5. Copy all extracted drivers to the tag directory.
6. Refresh the library and wait until you can see the Drivers directory, including the tag name and the drivers in that directory.
7. To add a custom tag, right-click each driver, select Properties, and click Select.
8. Click New Tag, rename the tag to your tag name, and click Add. Click OK when you are ready (Figure 7.15).

Figure 7.15 Adding a custom tag to a driver package

9. If you want to see the custom tags in the library, simply add the Custom Driver Tag column to your view.
10. Find the host's profile, right-click it, and select Properties. Go to the Hardware Configuration tab and click Driver Options. Change from Filter Drivers With Matching PnP IDs to Filter Drivers With Matching Tags, and click Select to choose the tags you prepared earlier. Click OK.
In Figure 7.16, you can see the difference between deploying the same server using custom tags and without custom tags.
You are ready to deploy your bare-metal servers with custom drivers.

Figure 7.16 Deploying with custom tags (left) and without custom tags (right)


Tip
Some hardware vendors offer an easy solution for adding custom drivers to the VMM library. As an example, see HP's whitepaper “Implementing Microsoft Windows Server 2012 Beta on HP ProLiant servers,” which is available from HP's website.

Creating an ISO File

Even if you can't use a WDS/PXE server, you can still fully benefit from using bare-metal deployment for rapidly deploying new Hyper-V hosts. All of the steps remain the same except you don't have to install WDS and add it to the VMM fabric. The requirement for a DHCP server still stands.

Instead of booting from a PXE server, you can boot from an ISO file that is configured to use the VMM bare-metal deployment script. This ISO file is created by a PowerShell cmdlet. Start a PowerShell session from within VMM and issue the following cmdlet, using a valid path in which to create the ISO file (the directory does not have to exist). The command does not give any output if it succeeds, so either check the directory for the existence of the ISO file or check under Jobs.

PS C:\> Publish-SCWindowsPE -ISOPath E:\ISO -UseWindowsAIK

PS C:\> dir E:\ISO

    Directory: E:\ISO

Mode                LastWriteTime     Length Name
----                -------------     ------ ----

-a---         1/30/2012   1:05 PM  182130688 iso

You can burn the ISO file to a disk, place it on a bootable USB drive, or connect to the ISO file through a BMC.

The process then continues as follows:

1. Attach the ISO file or insert a disk/USB drive into the physical machine.
2. Run the Add Resource Wizard and choose Physical Computers To Be Provisioned As Virtual Machine Hosts, as you would normally do for bare-metal deployment.
3. When the machine prompts you to “Press enter to boot from DVD,” press Enter.
4. The bare-metal deployment continues as usual.

Adding Custom Commands

You can use custom commands in the host profile to prepare certain activities. The next PowerShell code snippet provides an example of a general command execution during the bare-metal process. In this example, a script calls an HP disk utility to delete a RAID configuration and then create a new mirrored RAID configuration before kicking off the rest of the deployment.

#1 Get the resource folder location (HPArrayUtility.cr) in the VMM library
PS C:\> $resource = Get-SCCustomResource | where { $_.SharePath -eq "\\vmmserver\ProgramData\Virtual Machine Manager Library Files\HPArrayUtility.cr" }

#2 Get the host profile to be used
PS C:\> $HostProfile = Get-SCVMHostProfile -Name "host gce profile"

#3 Configure script command settings
PS C:\> $scriptSetting = New-SCScriptCommandSetting
PS C:\> Set-SCScriptCommandSetting -ScriptCommandSetting $scriptSetting -WorkingDirectory "" -PersistStandardOutputPath "" -PersistStandardErrorPath "" -MatchStandardOutput "" -MatchStandardError ".+" -MatchExitCode "[1-9][0-9]*" -FailOnMatch -AlwaysReboot $false -MatchRebootExitCode "{1641}|{3010}|{3011}" -RestartScriptOnExitCodeReboot $false

#4 Run hpacucli.exe with a command to delete the RAID configuration
PS C:\> Add-SCScriptCommand -VMHostProfile $HostProfile -Executable "hpacucli.exe" -ScriptCommandSetting $scriptSetting -CommandParameters "ctrl slot=1 delete forced" -TimeoutSeconds 120 -LibraryResource $resource

#5 Run hpacucli.exe with a command to create a new mirror RAID configuration
PS C:\> Add-SCScriptCommand -VMHostProfile $HostProfile -Executable "hpacucli.exe" -ScriptCommandSetting $scriptSetting -CommandParameters "ctrl slot=1 create type=ld drives=1:1,1:2 raid=1" -TimeoutSeconds 120 -LibraryResource $resource

Running Post-Deployment Scripts

After the new host job finishes, the admin can decide to run one or more post-deployment general command executions (GCEs), for example, to configure NIC teaming. These GCEs can be initiated from the Run Command action available in the user interface, but they can also be scripted and started with the Invoke-SCScriptCommand PowerShell cmdlet.
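
Here is a minimal sketch of such a scripted GCE, assuming the host has already been added and a suitable Run As account exists; the command itself is a placeholder rather than a working NIC-teaming configuration:

PS C:\> $VMHost = Get-SCVMHost -ComputerName "hvserver1"
PS C:\> $RunAsAccount = Get-SCRunAsAccount -Name "Run_As_Domain_Admin"
PS C:\> Invoke-SCScriptCommand -VMHost $VMHost -RunAsAccount $RunAsAccount -Executable "%WINDIR%\System32\cmd.exe" -CommandParameters "/q /c echo NIC teaming setup goes here" -TimeoutSeconds 120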

Troubleshooting Bare-Metal Deployment

If you can perform a bare-metal deployment without any errors, hats off to you! In practice, you will stumble into one or more gotchas. The following sections provide some hints and guidance for common situations.

PXE Boot Fails

When you're configuring a WDS server for PXE, don't configure it the way you would for other deployment purposes. VMM takes full control of the whole process, so all you need is a basic WDS role installation. Don't make any configurations or add any images.

A PXE boot can fail for many reasons, including the following:

  • The PXE server is not in the same subnet as the network adapter that is used to boot from the network.
  • No DHCP server is available in the subnet used for PXE boot.
  • PXE and DHCP are on the same server, and no additional DHCP options are set.
  • The physical server to be provisioned already has a boot partition from another installation.

Make sure you have installed the WDS server according to the detailed requirements described in this chapter.

When the server boots, either press F12 or use your BMC to start it from the network. If everything fails, resort to the procedure explained under “Creating an ISO File.”

PXE Boot Succeeds, but WinPE Fails

You might encounter a situation in which you can boot from PXE and an IP address is provisioned to the server, but the process halts when WinPE kicks in. When the provisioning process halts, you'll probably get a message like “Synchronizing Time with Server.” After this, error 803d0010 is displayed, prompting you to check X:\VMM\vmmAgentPE.exe.etl. If you are unlucky, this file will be full of blank entries.

A likely reason for the process stall is that WinPE does not have a suitable network driver to continue the installation. Press Shift+F10 to open a command prompt and enter ipconfig /all to check for a network configuration.

The ISO-file method described earlier will not help you here, because that method also requires a network connection after WinPE boots.

You need to add drivers to the WinPE image that is taken from the Windows Automated Installation Kit location by VMM and deployed to the WDS/PXE server. This involves the following process:

1. Identify the tags of the matching drivers in the VMM library and locate the default WinPE image. By default, the image is in the following location:
C:\Program Files\Windows AIK\Tools\PETools\amd64\winpe.wim
2. Prepare the working directories.
3. Copy the default WIM file to a working directory and use DISM to mount winpe.wim.
4. Find the path of each driver that matches the tag and use DISM to insert it into the mounted WinPE image.
5. Commit the changes.
6. Republish the winpe.wim to the PXE server(s) managed by VMM.

DISM: Deployment Image Servicing and Management Tool
DISM enumerates, installs, uninstalls, configures, and updates features and packages in Windows images. The commands that are available depend on the image being serviced and whether the image is offline or running.

First, check whether the custom drivers can be found in the library with the tags you have given them:

PS C:\> $tags = "BL460G6"
PS C:\> Get-SCDriverPackage | where { $_.Tags -match $tags } | Select-Object Class, INFFile, Type, Tags, Provider, Version, Date | ft -auto

Class       INFFile    Type Tags      Provider                Version
-----       -------    ---- ----      --------                -------

system      evbd.inf   INF  {BL460G6} Hewlett-Packard Company 6.2.16.0
net         bxnd.inf   INF  {BL460G6} Hewlett-Packard Company 6.2.9.0
SCSIAdapter bxois.inf  INF  {BL460G6} Hewlett-Packard Company 6.2.7.0
system      bxdiag.inf INF  {BL460G6} Hewlett-Packard Company 6.2.3.0

You can see some more detail by issuing the following command:

PS C:\> $tags = "BL460G6"
PS C:\> Get-SCDriverPackage | where { $_.Tags -match $tags }

[output shows only one driver]

PlugAndPlayIDs    : {B06BDRV\L2ND&PCI_164A14E4&SUBSYS_3101103C, B06BDRV\L2ND&PCI_16AA14E4&SUBSYS_3102103C, B06BDRV\L2ND
                    &PCI_164A14E4&SUBSYS_3106103C, B06BDRV\L2ND&PCI_16AA14E4&SUBSYS_310C103C…}
Tags              : {BL460G6}
TagsString        : BL460G6
Type              : INF
INFFile           : bxnd.inf
Date              : 2/4/2011 12:00:00 AM
Version           : 6.2.9.0
Class             : net
Provider          : Hewlett-Packard Company
Signed            : True
Signer            : Microsoft Windows Hardware Compatibility Publisher
BootCritical      : False
Release           :
State             : Normal
LibraryShareId    : 2dda2b24-bf52-4308-a4df-8c192a097e52
SharePath         : \\vmmlib1.private.cloud\SCVMMLibrary1\Drivers\BL460G6\bxnd.inf
Directory         : \\vmmlib1.private.cloud\SCVMMLibrary1\Drivers\BL460G6
Size              : 6153482
IsOrphaned        : False
FamilyName        :
Namespace         : Global
ReleaseTime       :
HostVolumeId      : 9eecdae3-c395-4fd1-a17b-0cd07179eac7
HostVolume        :
Classification    :
HostId            : 64f2e51f-469f-4d6a-9d28-30c06a241fc9
HostType          : LibraryServer
HostName          : vmmlib1.private.cloud
VMHost            :
LibraryServer     : vmmlib1.private.cloud
Cloud             :
LibraryGroup      :
GrantedToList     : {}
UserRoleID        : 00000000-0000-0000-0000-000000000000
UserRole          :
Owner             :
ObjectType        : DriverPackage
Accessibility     : Public
Name              : bxnd.inf
IsViewOnly        : False
Description       :
AddedTime         : 1/25/2012 9:06:33 PM
ModifiedTime      : 1/30/2012 12:23:34 PM
Enabled           : True
MostRecentTask    :
ServerConnection  : Microsoft.SystemCenter.VirtualMachineManager.Remoting.ServerConnection
ID                : ddf2560a-c718-41c4-bebb-f9eb260f00ff
MarkedForDeletion : False
IsFullyCached     : True

After you have verified the tags of the custom drivers in the library, you are ready to run a script to prepare a winpe.wim image with the custom drivers injected.

#1 Get tags for matching drivers in the VMM library
# Master WIM = C:\Program Files\Windows AIK\Tools\PETools\amd64\winpe.wim
# Driver tag = winpe
PS C:\> $wim = "C:\Program Files\Windows AIK\Tools\PETools\amd64\winpe.wim"
PS C:\> $tags = "winpe"

#2 Prepare directories
PS C:\> $winpesrcdir = $wim
PS C:\> $workingdir = $env:temp + "\" + [System.Guid]::NewGuid().toString()
PS C:\> $mountdir = $workingdir + "\mount"
PS C:\> $wimfile = $workingdir + "\winpe.wim"
PS C:\> mkdir $workingdir
PS C:\> mkdir $mountdir

#3 Copy the default WIM file and mount it using DISM
PS C:\> copy $winpesrcdir $workingdir
PS C:\> dism /Mount-Wim /WimFile:$wimfile /Index:1 /MountDir:$mountdir

#4 Find the path of each driver that matches the tag and insert it into the mounted WIM using DISM
PS C:\> $drivers = Get-SCDriverPackage | where { $_.Tags -match $tags }
foreach ($driver in $drivers)
{
    $path = $driver.SharePath
    dism /Image:$mountdir /Add-Driver /Driver:$path
}

#5 Commit the changes
PS C:\> dism /Unmount-Wim /MountDir:$mountdir /Commit

#6 Republish the WIM file to every PXE server managed by VMM
PS C:\> Publish-SCWindowsPE -Path $wimfile

#7 Clean up
PS C:\> del $wimfile
PS C:\> rmdir $mountdir
PS C:\> rmdir $workingdir

Step 4 is the part of the script where the drivers are actually injected into the mounted WIM file using DISM:

[output shows only one driver]

Deployment Image Servicing and Management tool
Version: 6.1.7600.16385

Image Version: 6.1.7600.16385

Found 1 driver package(s) to install.
Installing 1 of 1 - \\vmmlib1.private.cloud\SCVMMLibrary1\Drivers\DL460G6\Broadcom10G\bxnd.inf: The driver package was
successfully installed.
The operation completed successfully.

Deployment Image Servicing and Management tool
Version: 6.1.7600.16385

Machine Is Not Detected

If you are using some sort of hardware-virtualization technology, such as HP Virtual Connect, chances are that you not only configured virtual MAC and WWN addresses, but you also virtualized your servers' unique identifiers. If so, you will have a logical serial number and a logical UUID (as shown in Figure 7.17).

Figure 7.17 The physical and logical serial numbers and UUIDs


Because the bare-metal deployment process looks at the physical UUID and not the virtual one, the only way to successfully run the bare-metal deployment job is to use PowerShell. You can start the process in the GUI; but before you kick it off, click the button for viewing the PowerShell command. Save the file, replace the SMBiosGuid with the logical UUID, and copy and paste the cmdlet into a VMM PowerShell command session.

PS C:\> $HostGroup = Get-SCVMHostGroup -ID "0e3ba228-a059-46be-aa41-2f5cf0f4b96e" -Name "All Hosts"
PS C:\> $RunAsAccount = Get-SCRunAsAccount -Name "Run_As_HP_iLO"
PS C:\> $HostProfile = Get-SCVMHostProfile -ID "d3982328-2a4b-48d9-8eaa-ad5129e8cc5e"
PS C:\> New-SCVMHost -ComputerName "hv1" -VMHostProfile $HostProfile -VMHostGroup $HostGroup -BMCAddress "172.16.3.22" -SMBiosGuid "41AF9DB5-1AF3-4369-9805-60F8EDE56C51" -BMCRunAsAccount $RunAsAccount -RunAsynchronously -BMCProtocol "IPMI" -BMCPort "623" -ManagementAdapterMACAddress "00-17-A4-77-00-60" -LogicalNetwork "VM1" -Subnet "192.168.1.0/24" -IPAddress "192.168.1.51"

Server Already Exists in Active Directory

A bare-metal deployment will fail if the name of the new server already exists in Active Directory. To sidestep the issue, you can skip the AD check: on the Deployment Customization page of the wizard, check the Skip Active Directory check box for the computer name.

If you are using the PowerShell script, just add the following to the cmdlet:

-BypassADMachineAccountCheck

Server Already Exists in VMM

A bare-metal deployment will fail when the name of the server already exists in VMM. In that case, double-check the name and remove it from VMM if appropriate. Removing a server from VMM requires elevated privileges. At any rate, be careful when you have to do this.

Managing Hyper-V Clusters in VMM

If you add a Hyper-V host that is part of a cluster, VMM automatically adds all nodes of that cluster and installs a VMM agent on each. Once the cluster has been inventoried, you can view its properties. On the General tab, you will find not only the cluster name and host-group location, but also its cluster reserve state. The default is 1, which means that the resources of one cluster node are reserved for high availability. If you start with a one-node cluster and the cluster reserve is 1, the cluster reserve state will not show OK. Of course, for testing purposes, you can change this value to 0. In general, though, you should have one cluster reserve per eight cluster nodes. This means that for a 16-node cluster, you follow best practices by changing this value to 2.
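
The cluster reserve can also be changed from PowerShell. A short sketch, assuming Set-SCVMHostCluster exposes a -ClusterReserve parameter for this setting (verify with Get-Help Set-SCVMHostCluster) and an example cluster name:

PS C:\> $Cluster = Get-SCVMHostCluster -Name "cluster1.private.cloud"
PS C:\> Set-SCVMHostCluster -VMHostCluster $Cluster -ClusterReserve 2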

You can obtain more information about the cluster from the tabs of the Properties screen, including the following:

Status

The status displays information such as whether a cluster validation test was run, whether the test was successful or not, and the state of cluster core resources and cluster services.

Available Storage

If the cluster has access to disks that are still in the available storage section, you'll see those disks here. As a best practice, you should have at least one small, shared disk available for testing/validation purposes.

Shared Volumes

A similar view (Figure 7.18) is available for disks that are shared volumes of the cluster. You can also add available disks to the list of Cluster Shared Volumes, remove disks, or convert shared volumes to available storage. Naturally, these tasks are performed only when these disks hold no active virtual machines.

Figure 7.18 Viewing shared storage


Virtual Networks

This tab shows which virtual networks are available to the entire cluster. If you make a spelling error in the name of a virtual network on one of the nodes in the cluster, the virtual network will not show up here.

Custom Properties

You can add additional custom properties to the cluster for further identification or custom sorting and grouping.

Once the cluster is managed under VMM, you can perform a number of actions against the cluster:

Create Service

You can create a new service on this cluster, based on a service template. A service can be a single- or multitier combination of virtual machines with applications (see Chapter 8).

Create Virtual Machine

You can create a new VM based on an existing VM, VM template, or VHD. You can also create a new VM with a blank VHD.

Refresh

You can reread all the properties of the cluster and cluster nodes to capture any changes since the last update. The refresh interval is 10 minutes.

Optimize Hosts

You can begin a dynamic resource-optimization task for balancing guests across the cluster.

Move to Host Group

You can move a cluster to another host group.

Uncluster

You can perform what Failover Cluster Manager calls Destroy Cluster. You cannot uncluster if there are still active resources on one of the cluster nodes. If there are, the job will fail with Error 25330 (Figure 7.19).

Figure 7.19 A failed Uncluster Cluster action


Add Cluster Node

Expand the cluster to the current maximum of 16 nodes. The candidate node must have access to the same shared storage as the other cluster nodes (so that it can be a possible owner) and must be validated before joining the cluster.

This option does not support clusters with asymmetric storage (where not all cluster disks are presented to all cluster nodes, which can be useful in multi-site clusters). Asymmetric storage is a feature that was introduced in Windows Server 2008 R2 SP1.

Validate Cluster

Revalidate the cluster. The validation status will appear under Status on the cluster's Properties screen.

Remove

Remove the cluster from VMM management. The cluster will remain unaltered, but VMM will uninstall its agents from all cluster nodes.

Properties

View the cluster's properties, as discussed earlier.

Automated Creation of Hyper-V Clusters

If you've followed along with the discussion, you have added one or more hosts and clusters and brought existing hosts and clusters under VMM management. As soon as you have one or more Hyper-V machines available, you can build clusters from them. Before you create a cluster, you need to prepare all the network and storage connections because VMM validates the potential cluster nodes before they can join the cluster. It is recommended that all cluster nodes be as similar as possible, including but not limited to service packs, Windows updates, and hotfixes.

This is how a cluster can be created from VMM:

1. In the VMM console, select Fabric, and choose Servers from the Navigation pane. Click Create from the ribbon and select Hyper-V Cluster.
2. On the General tab of the Create Cluster Wizard, provide a name for the cluster and select a Run As account or provide a username and password. Click Next.
3. On the Nodes tab, select a host group with available Hyper-V hosts, point at one or more hosts, and click Add to move them to the Hosts To Cluster column. Click Next.
4. On the IP Address tab, either select a static IP pool from which VMM will pick a static IP address, or manually provide a cluster IP address. Click Next.
5. On the Storage tab (Figure 7.20), select any disks you want to cluster. For each, choose the partition style (MBR or GPT), file-system type (NTFS), volume label, whether to quick-format the disk, and whether to add the disk to CSV. Click Next. By default, VMM will automatically select the smallest disk as your quorum disk, depending on the number of cluster nodes. From this Storage view, you can also format the disks and mark them as CSV disks.

Figure 7.20 Selecting disks for a cluster

6. On the Virtual Networks screen, select which logical networks to use for creating external virtual networks. Click Next.
7. On the Summary screen, review your settings and click Finish.
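
The wizard's work can be scripted with Install-SCVMHostCluster. A minimal sketch, assuming two managed hosts with prepared storage and networking, and an existing Run As account (all names are examples):

PS C:\> $Credential = Get-SCRunAsAccount -Name "Run_As_Domain_Admin"
PS C:\> $Node1 = Get-SCVMHost -ComputerName "hv1"
PS C:\> $Node2 = Get-SCVMHost -ComputerName "hv2"
PS C:\> Install-SCVMHostCluster -ClusterName "cluster1" -VMHost @($Node1, $Node2) -Credential $Credential -ClusterIPAddress "192.168.1.50"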

You can also add nodes to existing Hyper-V clusters. Take these steps to add a node to an existing cluster:

1. In the VMM console, select Fabric, and choose Servers from the Navigation pane. Right-click the cluster you want to expand and select Add Cluster Node from the context menu.
2. In the Available Hosts column, select the hosts you want to add to this cluster and click Add to move them to the Hosts To Cluster column. Click Add.
3. Select a valid Run As account, or manually add the proper credentials, and click OK.

Troubleshooting a Failed Add Cluster Node Job
The Add Cluster Node job will fail with Error 25343 if no network adapter matches the cluster virtual network of the nodes in the existing cluster. In that case, prepare the network adapters first, then open the properties of the host, select the Hardware tab, and map the network adapters to the proper logical networks, as shown in Figure 7.21.

Figure 7.21 Mapping network adapters to logical networks

If the Add Cluster Node job fails with Error 20400, you'll need to take a look at the storage configuration for the node you are adding. You'll find that the disks were not presented to this node, the disks are not online, or VMM is not aware of these disks. Correct the storage configuration and refresh the candidate cluster node.

An alternative way to add a node to an existing cluster is to drag the node to the cluster. The Add Node To Cluster Wizard kicks in and you will only have to provide credentials to start the job.

Configuring Dynamic Optimization and Power Optimization

This section deals with dynamic optimization and power optimization, two cluster-optimization techniques that keep a cluster load-balanced and can reduce its power consumption.

Dynamic optimization refers to the built-in support for load-balancing a cluster. It is no longer dependent on performance and resource optimization (PRO) and integration with Operations Manager, as was the case with VMM 2008 R2. Power optimization can help conserve power by shutting down hosts that are not running any workloads; hosts are turned on again when workload activity increases. Power optimization requires a BMC in the virtualization host.

Both dynamic optimization and power optimization work with Hyper-V, VMware ESX, and XenServer clusters. The respective automated VM migration functionality of the different hypervisors is used to either balance workloads or evacuate a host so that a host can be powered off.

Dynamic optimization and power optimization are configurable on a per-host-group basis. To view dynamic optimization and power optimization in action, you must deploy and run virtual machines on the host cluster.


Note
You can configure dynamic optimization and power optimization on any host group. However, the settings have no effect unless the host group contains a host cluster supporting a VM migration technology.

Dynamic Optimization in VMM

Dynamic optimization in VMM builds on the host reserve settings from VMM 2008 R2. As in that version, a host receives its host reserve settings when it is added to a host group.

A host reserve is set by default on the All Hosts host group and can be changed for all underlying host groups. It is also possible to create an underlying host group with different host reserve settings. In such cases, you must deselect “Use the host reserves settings from the parent host group” on the Host Reserves tab of the subgroup's Properties screen. For instance, by default 10 percent of the CPU is reserved, meaning that a host is available for placement until 90 percent of the CPU is utilized.

A different host reserve can also be set at the host level to effectively override the settings of the host group. If no override is set, the host receives the settings of the host group when the host is placed under management by VMM. Also, when a host is moved from one host group to another, the host will inherit the host reserve settings of that group, unless you have set specific host reserves on the host itself.

You can check the current host reserve settings for any specific host or host group by issuing the following PowerShell commands:

PS C:> Get-SCVMHost -computername "hvserver1"

RunAsAccount                          : Run_As_Domain_Admin
OverallStateString                    : OK
OverallState                          : OK
CommunicationStateString              : Responding
CommunicationState                    : Responding
Name                                  : hvserver1.private.cloud
FullyQualifiedDomainName              : hvserver1.private.cloud
ComputerName                          : hvserver1
DomainName                            : private.cloud
Description                           :
RemoteUserName                        :
OverrideHostGroupReserves             : True
CPUPercentageReserve                  : 30
NetworkPercentageReserve              : 0
DiskSpaceReserveMB                    : 10240
MaxDiskIOReservation                  : 10000
MemoryReserveMB                       : 256

In the previous example, the host reserve has an override for CPUPercentageReserve. To check for a specific host group, use this command:

PS C:> Get-SCHostReserve -VMHostGroup Production

CPUReserveOff                   : False
CPUPlacementLevel               : 40
CPUStartOptimizationLevel       : 30
CPUVMHostReserveLevel           : 15
MemoryReserveOff                : False
MemoryReserveMode               : Megabyte
MemoryPlacementLevel            : 1024
MemoryStartOptimizationLevel    : 1024
MemoryVMHostReserveLevel        : 1024
DiskSpaceReserveOff             : False
DiskSpaceReserveMode            : Percentage
DiskSpacePlacementLevel         : 10
DiskSpaceVMHostReserveLevel     : 10
DiskIOReserveOff                : False
DiskIOReserveMode               : IOPS
DiskIOPlacementLevel            : 1000
DiskIOStartOptimizationLevel    : 1000
DiskIOVMHostReserveLevel        : 1000
NetworkIOReserveOff             : False
NetworkIOReserveMode            : Percentage
NetworkIOPlacementLevel         : 0
NetworkIOStartOptimizationLevel : 0
NetworkIOVMHostReserveLevel     : 0
Name                            : Production
ReadOnly                        : False
ConnectedHostGroup              : All Hosts\Production
OwnerHostGroup                  : All Hosts\Production
ServerConnection                : Microsoft.SystemCenter.VirtualMachineManager.Remoting.ServerConnection
ID                              : aab05537-0406-49d5-9670-5f0a938196b7
IsViewOnly                      : False
ObjectType                      : HostReserveSettings
MarkedForDeletion               : False
IsFullyCached                   : True

As you can see from the output, there are several settings for optimization levels. The value for a reserve is also the starting point for dynamic optimization. In other words, if the MemoryPlacementLevel is at 1,024 MB (the default), then MemoryStartOptimizationLevel is also set at 1,024 MB, unless it is manually set to another value. The Set-SCHostReserve command is used to set host reserves such as placement levels and start optimization levels.
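
For example, to raise the CPU thresholds on the Production host group to the values shown in the output above, you could use something like the following sketch. The Set-SCHostReserve parameter names used here are an assumption, mirroring the property names in the Get-SCHostReserve output; verify them with Get-Help Set-SCHostReserve before relying on them.

PS C:> # Assumed parameter names, mirroring the Get-SCHostReserve properties
PS C:> $hostGroup = Get-SCVMHostGroup -Name "Production"
PS C:> Set-SCHostReserve -VMHostGroup $hostGroup -CPUPlacementLevel 40 -CPUStartOptimizationLevel 30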

Dynamic optimization is also a property of the host group that can be configured to migrate virtual machines within host clusters with a specified frequency and aggressiveness. If no overrides are set, the host group receives the Dynamic Optimization settings from the parent host group.

By default, the dynamic optimization settings are configured with medium aggressiveness and a frequency of 10 minutes. Aggressiveness can be set in five steps from Low to High (with an intermediate step between Low and Medium and another between Medium and High); aggressiveness defines how eagerly VMM looks for optimization opportunities:

  • High: Balances a cluster for small gains, resulting in more VM migrations
  • Medium: The default setting
  • Low: Balances for only substantial gain, resulting in fewer VM migrations

In Figure 7.22, the Dynamic Optimization settings have been changed to a more aggressive setting but with an hourly interval.

Figure 7.22 Overriding the default Dynamic Optimization settings

7.22

Frequency determines how often VMM automatically migrates virtual machines to balance the load. The default is every 10 minutes, and the value can be set to a maximum of 2,440 minutes (just over 40 hours). Test these settings carefully before you automate dynamic optimization. In the current version of Hyper-V, simultaneous live migrations are not possible; migrations are queued, and each waits until the previous one has finished. If you set the frequency too high, migrations from the previous optimization run may not have finished yet.

For dynamic optimization to work, you need clusters with two or more nodes. Any hosts or clusters that do not support migration are ignored. An additional requirement is that VMs must be configured to be highly available and placed on shared storage.

If you want to test dynamic optimization, you can start the process on demand by right-clicking one of your clusters and selecting Optimize Hosts from the context menu.

VMM calculates resource optimizations and proposes a migration for one or more VMs. In the example in Figure 7.23, host hv2 is running all the VMs and VMM proposes to move five VMs to host hv1.

Figure 7.23 Calculating optimizations

7.23

The resulting PowerShell script looks like this:

PS C:> $hostCluster = Get-SCVMHostCluster -Name "HVCluster1.private.cloud"
PS C:> Start-SCDynamicOptimization -VMHostCluster $hostCluster

Under Jobs, you can easily track how VMM optimizes the cluster and which VMs have moved (Figure 7.24).

Figure 7.24 You can track the optimization.

7.24

If no resource optimizations are available when you run this command, you will be notified that the host cluster is either within desired resource-usage limits or no further optimization is possible.

Power Optimization in VMM

Power optimization is functionally part of dynamic optimization and can be set only if a host group has been configured for dynamic optimization. If you enable power optimization, you effectively allow VMM to power VM hosts off and on based on their actual usage. Before a host is powered off, any VMs still running on it are migrated to the remaining nodes in the cluster.

By default, power optimization is switched off. If you switch it on, it operates 24 hours a day unless you change the schedule.

Power optimization behaves differently depending on whether a cluster was created by VMM or outside VMM. For clusters created by VMM, you can set up power optimization on clusters of four or more nodes; for clusters created outside VMM, you need five or more nodes, as shown in Table 7.1.

Table 7.1 Minimum Number of Cluster Nodes for Power Optimization

Nodes Powered Off    Cluster Created in VMM    Cluster Created Outside VMM
0                    3 or fewer nodes          4 or fewer nodes
1                    4 or 5 nodes              5 or 6 nodes
2                    6 or 7 nodes              7 or 8 nodes
3                    8 or 9 nodes              9 or 10 nodes
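
Before you enable power optimization, you can quickly check whether a cluster meets the node minimums in Table 7.1. A minimal PowerShell sketch, reusing the cluster name from the examples in this chapter and the Nodes collection of the cluster object:

PS C:> # Count the cluster nodes and compare the result against Table 7.1
PS C:> $cluster = Get-SCVMHostCluster -Name "HVCluster1.private.cloud"
PS C:> $cluster.Nodes.Count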

If the number of cluster nodes is sufficient, you can enable power optimization using the following steps:

1. In the VMM console, select Fabric, and choose Servers from the Navigation pane. Right-click the host group you want to enable for power optimization and choose Properties.
2. Click Dynamic Optimization, uncheck “Use dynamic optimization from the parent host group,” check “Automatically migrate virtual machines to balance load at this frequency (minutes),” and set it to an acceptable value.
3. Under Power Optimization (bottom of screen), enable power optimization and click Settings.
4. On the Power Optimization Settings screen, click the blue squares to disable or enable power optimization for a particular day and hour as shown in Figure 7.25. If you are a 24x7x365 company and your hosts are constantly utilized, power optimization is probably better switched off.

Figure 7.25 Modifying the power-optimization schedule

7.25

Cluster Remediation

Traditionally, updating clusters has been a laborious task. Windows Server Update Services (WSUS) and System Center Configuration Manager (SCCM or Config Mgr) are still not cluster-aware, so you cannot take the risk of automatically updating cluster nodes. If you update a node in a Hyper-V cluster, it reboots without migrating the VMs to another node. Simultaneous reboots of more nodes than the cluster can handle (majority node or other quorum requirements) can take the entire cluster down.

Fortunately, VMM introduces a workflow for updating a Hyper-V cluster based on the assumption that the cluster and the VMs should stay highly available. As when remediating a single Hyper-V host, described in Chapter 5, “Understanding the VMM Library,” VMM uses a supported WSUS server as the source for the update catalog and update baselines. The following steps assume that the update server is already part of the VMM fabric.

These are the steps for setting up cluster remediation:

1. In the VMM console, select Library, and choose Update Catalog and Baselines from the Navigation pane. Right-click Update Baselines and select Synchronize Update Server (Figure 7.26).

Figure 7.26 Synchronizing the update server

7.26
2. Click Create on the ribbon and select Baseline.
3. Specify a name and a description and click Next.
4. On the Updates screen, click Add to select all required updates relevant to your Hyper-V cluster. Click Next when you are ready.
5. On the Assignment Scope screen, select only those clusters that you intend to update with the newly created Hyper-V Cluster baseline (Figure 7.27). Click Next.

Figure 7.27 Assigning clusters to a baseline

7.27
6. Review the summary and click Finish. In the Baselines view, you now see an additional baseline called Hyper-V Cluster with one assigned cluster and one or more updates.
7. Now change to Fabric, expand Servers in the Navigation pane, and click Compliance on the ribbon. Select the Hyper-V cluster you want to remediate. If this is the first time you are updating your cluster from VMM, the compliance status is Unknown and the operational status is Pending Compliance Scan.
8. In the View pane, right-click a cluster and select Scan from the context menu. The operational status changes to Scanning. In the Jobs view you can check progress.
9. When the scan is ready, it reports the compliance status of all cluster nodes. To initiate remediation, right-click the cluster and select Remediate.
10. Before remediation starts, you can adjust some additional settings. First, check that the remediation method is Live Migration. You can choose not to restart the servers after remediation and postpone that action to a later time. If a cluster node is in Maintenance mode, you need to check the cluster option “Allow remediation of clusters with nodes already in maintenance mode.” If there is no need to keep the VMs online during remediation and you want to speed up the entire process, you can change the cluster-remediation method from Live Migration to Save State (also known as Quick Migration). If you are remediating a Windows Server 2008 Hyper-V cluster, Save State is the only available option.
11. In the Jobs view, you can monitor the individual steps.
In the example in Figure 7.28, one of the cluster nodes is placed in Maintenance mode. This effectively evacuates all highly available VMs, installs the updates, reboots the host, and starts a compliance scan. When the node is compliant, VMM stops Maintenance mode and moves on to the next cluster node, repeating all the necessary steps until all nodes have been remediated.

Figure 7.28 The details of a remediation job

7.28
Any non–highly available VMs are put into the Save state.
The cluster-remediation method employed by VMM is fully aware of Hyper-V, its version and migration capabilities, and the state of the cluster nodes. Currently, this remediation process has to be started manually and cannot be scheduled. Of course, you can create a PowerShell script and kick it off at a later time, but you might want to be available while the cluster-remediation process is running.
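
If you do script it, a rough outline might look like the following sketch. Start-SCComplianceScan and Start-SCUpdateRemediation are the relevant cmdlets, but the exact parameter sets (for example, how to force Live Migration rather than Save State) should be verified with Get-Help before you schedule anything.

PS C:> # Scan and remediate each cluster node against the Hyper-V Cluster baseline
PS C:> $cluster = Get-SCVMHostCluster -Name "HVCluster1.private.cloud"
PS C:> $baseline = Get-SCBaseline -Name "Hyper-V Cluster"
PS C:> $nodes = $cluster.Nodes | ForEach-Object { Get-SCVMMManagedComputer -ComputerName $_.Name }
PS C:> $nodes | ForEach-Object { Start-SCComplianceScan -VMMManagedComputer $_ }
PS C:> $nodes | ForEach-Object { Start-SCUpdateRemediation -VMMManagedComputer $_ -Baseline $baseline }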
At the end of the cluster-remediation process, the status should be as shown in Figure 7.29.

Figure 7.29 The remediation is complete.

7.29

Adding Existing VMware ESX Hosts

VMM allows you to deploy and manage virtual machines on VMware ESX hosts, as you can with any supported hypervisor. The only difference is that you cannot bare-metal-deploy an ESX host, because the native VHD deployment model VMM uses is not applicable to VMware hosts. Cluster remediation cannot be applied either, because WSUS does not support updating non-Windows hosts. This should not keep you from adding your VMware ESX hosts to VMM, because all the other deployment and management capabilities fully apply to ESX hosts, including creating clouds, creating VMs, and using PowerShell scripting. You can even deploy a multitiered service across multiple hypervisors. Service modeling is described in subsequent chapters.

Because you can't create new VMware ESX hosts, you must first add existing ESX hosts and clusters. Check the requirements in Chapter 4 before you begin, because VMM no longer supports some older versions of ESX and vCenter.

VMM integrates directly with VMware vCenter Server. As soon as vCenter is added to the VMM fabric, you can start adding ESX hosts and clusters.

VMware ESX Integration Improvements

Let's look at some of the differences between managing VMware ESX hosts in VMM 2012 and VMM 2008 R2.

  • Unlike VMM 2008 R2, VMM 2012 lets you specify individual ESX hosts to add instead of importing an entire vCenter data center.
  • You can add an ESX host to any existing VMM 2012 host group, so in contrast to VMM 2008 R2, you can mix and match Hyper-V and VMware hosts.
  • You can import VMware templates into the VMM library without copying the .vmdk file to the library. Unlike in VMM 2008 R2, only the template metadata is registered in the VMM 2012 library, allowing for much faster virtual-machine deployment.
  • If you delete a VMware template from the VMM library, it is not deleted from the VMware data store, as was the case with VMM 2008 R2.
  • Whereas VMM 2008 R2 supported Secure File Transfer Protocol (SFTP) for file transfers, VMM 2012 uses HTTPS for all file transfers between ESX hosts and the VMM library. Therefore, you no longer need to enable Secure Shell (SSH) to access an ESX host, as was necessary in VMM 2008 R2. Root credentials are needed for file transfers between ESX servers and VMM.
  • VMM 2012 supports VMware's distributed virtual switch functionality, but you must still configure it from within vCenter. There was no support for distributed virtual switches in VMM 2008 R2.
  • Port groups are still supported, but they must also be configured in vCenter Server. Unlike VMM 2008 R2, the current version of VMM does not automatically create port groups on ESX hosts.

Supported Features

When managing VMware ESX hosts, VMM 2012 supports the following features:

  • The PowerShell command shell
  • Placement of VMs and services using host ratings for creation, deployment, and migration of VMware virtual machines
  • Deployment of VMM services
  • Private clouds with ESX host resources
  • Self-service roles and quotas with ESX host resources
  • Dynamic optimization (but turn off VMware's Distributed Resource Scheduler)
  • Power optimization to turn ESX hosts on or off
  • Live migration (vMotion)
  • Live storage migration (Storage vMotion)
  • Migration to and from the VMM library (but VMware thin-provisioned disks become thick when placed in a VMM library)
  • Network migration between hosts
  • Maintenance mode
  • VMM library for storing VMware VMs, .vmdk files, and VMware templates
  • .vmdk files (older formats created in VMware Server and Workstation have to be converted)
  • Templates
  • Standard and distributed vSwitches and port groups
  • Storage of type VMware Paravirtual
  • Hot addition and hot removal of virtual hard disks on VMware VMs
  • Conversion (between VMware VM and Hyper-V VM via V2V process)
  • Performance and resource optimization (PRO) for monitoring and alerting on ESX hosts via VMM, with Operations Manager integration
  • Recognition of VMware fault-tolerant VMs (VMM shows only the primary VM from the vCenter server; if it fails, VMM recognizes the new primary)
  • Dynamic memory (on Hyper-V hosts only; see the limitations below)

Limitations

Here are some limitations and unsupported features:

  • VMM does not support VMware VMs with disks on an IDE bus; V2V conversion of these machines fails.
  • Storage must be added to ESX hosts outside of VMM and cannot be provisioned using SMI-S storage automation functionality.
  • VMM does not automatically create port groups on ESX hosts. Port groups and VLANs must be configured outside of VMM.
  • VMM does not integrate with VMware vCloud.
  • You cannot use VMM to deploy vApps.
  • Update management/cluster remediation is not supported for ESX hosts.
  • Bare-metal deployment is not supported for ESX hosts.
  • Dynamic memory is supported only on Hyper-V hosts that are running an OS that supports dynamic memory.

Capabilities

A VM on an ESX host managed by VMM can have the capabilities listed in Table 7.2.

Table 7.2 VMware VM Capabilities

Category            Minimum    Maximum
Processor Range     1          8
Memory Range        4 MB       255 GB
DVD Range           1          4
Hard-Disk Count     0          255
Disk Size Range     0 MB       256 GB
Network Adapters    0          64

Adding a VMware vCenter Server

Before you add a VMware vCenter Server to VMM, check the requirements in Chapter 4. Not all versions are supported.

Secure Sockets Layer (SSL) is used for communication between the VMM management server and the vCenter Server. Either a third-party or a self-signed certificate can be used. Self-signed certificates must be stored in the Trusted People Certificate store.

Before adding the vCenter Server, create a Run As account with the correct Active Directory credentials. This account must have administrative privileges on vCenter Server. A local administrator account on the vCenter Server is also supported.

Adding a vCenter Server does not automatically add VMware ESX hosts. That requires an additional step.

These are the steps to add a VMware vCenter server:

1. In the VMM console, select Fabric, expand Servers from the Navigation pane, and right-click vCenter Server; or click Add Resources from the ribbon and select VMware vCenter Server.
2. Provide the computer name of the vCenter server and specify the correct TCP/IP port (default 443). Also, select your prepared Run As account for connecting to the vCenter server. If you have a certificate, you can leave the “Communicate with VMware ESX host in secure mode” check box checked. Click OK when you are ready.
3. When the VMware certificate appears, click Import.

The PowerShell command to add a VMware vCenter server is

PS C:> $certificate = Get-SCCertificate -ComputerName "vcenter1.private.cloud" -TCPPort 443
PS C:> $runAsAccount = Get-SCRunAsAccount -Name "Run_As_Domain_Admin"
PS C:> Add-SCVirtualizationManager -ComputerName "vcenter1.private.cloud" -TCPPort "443" -Credential $runAsAccount -Certificate $certificate -EnableSecureMode $true -RunAsynchronously

To obtain some extra information about the imported virtualization manager, execute the following code:

PS C:> Get-SCVirtualizationManager -ComputerName vcenter1.private.cloud

Name                           : vcenter1.private.cloud
SecureMode                     : True
SslTcpPort                     : 443
SslCertificateHash             : 01F5675198A1E88C41228A3F484AC76118D91B0E
NumberOfManagedHosts           : 0
ManagedHosts                   : {}
UnmanagedHosts                 : {}
UnmanagedHostClusters          : {}
NumberOfManagedVirtualMachines : 0
Status                         : Responding
StatusString                   : Responding
Version                        : 4.1.0
UserName                       : administrator
Domain                         : private
RunAsAccount                   : Run_As_Domain_Admin
ServerConnection               : Microsoft.SystemCenter.VirtualMachineManager.Remoting.ServerConnection
ID                             : 09447c1c-c63e-4d18-97fc-90abf9c9981f
IsViewOnly                     : False
ObjectType                     : VirtualizationManager
MarkedForDeletion              : False
IsFullyCached                  : True

Adding a VMware ESX/ESXi Host or Cluster

Now that the VMware vCenter Server has been successfully added to VMM, you can add VMware ESX/ESXi hosts and clusters. Follow these steps:

1. In the VMM console, select Fabric, expand Servers from the Navigation pane, click Add Resources from the ribbon, and select VMware ESX Hosts and Clusters.
2. Select a Run As account with root access to the ESX host you are adding to VMM, and click Next.
3. A list of ESX hosts will appear. Select the hosts you want to add, and click Next.
4. On the Host Settings page, choose a host group for the ESX hosts. Click Next.
5. Review the Summary page and click Finish to confirm.
If the host status shows OK, the ESX host has been added successfully. You can also check the status by looking at the job details or by expanding the host group that contains the ESX servers. If an ESX host shows OK (Limited), either the Run As account does not have root credentials or you enabled Secure mode without importing the certificate.

To update the ESX-host status, follow these steps:

1. Right-click the ESX host that has an OK (Limited) status and select Properties.
2. Select the Management tab and select a Run As account with root credentials on the ESX host.
3. Click Retrieve to get the certificate and public key for the host. You can view the thumbprint details by clicking View Details.
4. Check the “Accept the certificate for this host” check box and click OK when you are finished.
5. Verify that the host status has changed to OK.
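
You can perform the same check from PowerShell with the Get-SCVMHost cmdlet used earlier in this chapter; the property names below appear in the host output shown earlier. The ESX hostname is a placeholder for your own host.

PS C:> # Confirm that the ESX host is responding with full (root) access
PS C:> Get-SCVMHost -ComputerName "esx1.private.cloud" | Select-Object Name, OverallStateString, CommunicationStateString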

Adding Existing XenServer Hosts

You have seen how to integrate both Microsoft and VMware hypervisors and how to bring these hosts under management in VMM. In VMM 2012, Microsoft introduces a third supported hypervisor, Citrix XenServer. XenServer host management is made possible by a XenServer integration pack written by Citrix. Without depending on Citrix XenCenter, this hypervisor integrates fully with the VMM management and library servers (Figure 7.30).

Figure 7.30 Integration of XenServer in VMM

7.30

Supported Features

VMM 2012 offers you the following functionality in combination with Citrix XenServer:

  • Deployment of virtual machines and services using VMM templates
  • Intelligent placement with host ratings
  • Paravirtualized VMs (PVs), meaning there is no requirement to use emulated device drivers, freeing up processor time and resources
  • Highly available VMs (HAVMs)
  • VM migration (XenMotion) within a resource pool
  • LAN migration between XenServer and the VMM library
  • VM conversion using the Physical-to-Virtual (P2V) instead of the Virtual-to-Virtual (V2V) method
  • Dynamic optimization
  • Power optimization (requires a BMC in the host for shutdown, power-on, and restart of host)
  • Maintenance mode (for evacuating a host before maintenance)
  • XenServer storage with support for all kinds of local and shared XenServer repositories (iSCSI, NFS, HBA, StorageLink). All storage must be added to XenServer outside of VMM.
  • ISO repositories on NFS or CIFS. The ISO repository must be read-write. Attachment of ISOs is supported only from the VMM library.
  • XenServer host networking by wrapping a single virtual switch around all of the XenServer switches on a single physical adapter (Figure 7.31)
  • Windows with highly available VMs
  • Standard VHDs
  • Regular start, stop, save state, pause, and shut down VM actions
  • Enhanced XenServer checkpoints
  • Guest-console access

Figure 7.31 Network integration of XenServer in VMM

7.31

Limitations

You need to be aware of these limitations when integrating XenServer in your private cloud with VMM 2012:

  • Dynamic memory is not supported.
  • Virtual floppy drive and COM ports are not available as VM devices.
  • SMI-S for discovering and configuring storage on XenServer hosts is not supported.
  • XenServer templates are not supported.
  • Bare-metal deployment of XenServer hosts is not supported.

Capabilities

Table 7.3 shows supported configurations for VMs on XenServer hosts.

Table 7.3 XenServer VM Capabilities

Category            Minimum    Maximum
Processor Range     1          8
Memory Range        16 MB      32 GB
DVD Range           1          4
Hard-Disk Count     0          7
Disk Size Range     0 MB       2,040 GB
Network Adapters    0          7

Installing Microsoft System Center Integration Pack

Integrating Citrix XenServer into your VMM environment is supported for specific versions of XenServer. The requirements for Citrix XenServer integration can be found in Chapter 4.

Unlike the case with Hyper-V, you cannot create a new XenServer host by means of bare-metal deployment. The XenServer hosts or pools (clusters) must already exist before you can add them to the VMM fabric.

Before you can add a XenServer host or cluster (called a pool in XenServer terminology), you need to prepare XenServer by installing the Microsoft System Center integration pack in Domain 0 (Dom0). Like the parent partition in Hyper-V, Dom0 is more privileged than the other partitions and has full access to the hardware. Dom0 starts automatically when XenServer boots, and you use its console to configure XenServer for VMM integration.

The Microsoft System Center integration pack can be downloaded from MyCitrix. If you don't have an account for MyCitrix, register for one at

www.citrix.com/english/mycitrix/

Follow these steps to install the integration pack:

1. Mount the ISO of the integration pack.
# mkdir -p /mnt/tmp
# mount <path_to_Integration_Pack_iso>/XenServer-6.0.0.2-integration-suite.iso /mnt/tmp -o loop

OR

# mount /dev/dvd /mnt/tmp
mount: block device /dev/dvd is write-protected, mounting read-only
2. Start the installation script.
# cd /mnt/tmp
# ls
install.sh
ms-scx-1.0.0-32074.i386.rpm
openpegasus-2.10.0-xs1.i386.rpm
xenserver-vnc-control-6.0.0-52391p.noarch.rpm
xs-cim-cmpi-6.0.0.52271p.i386.rpm
XS-PACKAGES
XS-REPOSITORY
# ./install.sh

You can check the installed version of the integration pack using a XenServer command:

# xe host-param-get uuid=<host-uuid> param-name=software-version

After installing the XenServer integration pack, you can verify that the installation was correct by using winrm. On the VMM management server, open a PowerShell console from within VMM (so the virtualmachinemanager module is already loaded).

PS C:> winrm enum http://schemas.citrix.com/wbem/wscim/1/cim-schema/2/Xen_HostComputerSystem -r:https://<hostname>:5989 -encoding:utf-8 -a:basic -u:<XenUser> -p:<Password> -skipCACheck

Xen_HostComputerSystem
    AvailableRequestedStates = 3, 10, 4
    CN = xenserver1.private.cloud
    Caption = XenServer Host
    CommunicationStatus = null
    CreationClassName = Xen_HostComputerSystem
    Dedicated = 2
    Description = Default install of XenServer
    DetailedStatus = null
    ElementName = xenserver1
    EnabledDefault = 2
    EnabledState = 2
    Generation = null
    HealthState = 5
    IdentifyingDescriptions = IPv4Address, ProductBrand, ProductVersion, BuildNumber
    InstallDate = null
    InstanceID = null
    Name = 420b6087-a97e-403d-b626-0c3f68dc97a9
    NameFormat = Other
    OperatingStatus = null
    OperationalStatus = 2
    OtherConfig = agent_start_time=1327743281., boot_time=1327743234., iscsi_iqn=iqn.2011-12.com.example:5e105dfa
    OtherDedicatedDescriptions = null
    OtherEnabledState = null
    OtherIdentifyingInfo = 192.168.1.71, XenServer, 6.0.0, 50762p
    PowerManagementCapabilities = null
    PrimaryOwnerContact = null
    PrimaryOwnerName = null
    PrimaryStatus = null
    RequestedState = 2
    ResetCapability = null
    Roles = null
    StartTime = 2012-01-28T09:33:54Z
    Status = OK
    StatusDescriptions = null
    TimeOfLastStateChange = 2011-12-11T09:43:27Z
    TimeOffset = 3600
    TransitioningToState = 12

Checking the XenServer Hostname

If the XenServer host does not have an FQDN, VMM imports the XenServer based on its IP address. If VMM displays a XenServer host with only its IP address, perform the following steps to correct this:

1. Remove the XenServer host from VMM.
2. Run the following command to check whether the server has an FQDN:
# hostname -f
xenserver1
3. From the XenServer console menu, select the Network And Management Interface menu, which shows the hostname along with other information. When the NIC for the management interface is shown, press Enter. Press Enter again to accept the IP-address configuration. Press Tab to go to Hostname and complete the FQDN, as shown in Figure 7.32.

Figure 7.32 Changing the hostname of XenServer

7.32
4. Open a XenServer command shell and check the FQDN again with hostname -f, which should now show the full hostname.
# hostname -f
xenserver1.private.cloud
5. Because the certificate is not yet aware of this name change, you have to rename or remove the certificate file (xapi-ssl.pem).
# cd /etc/xensource
# rm xapi-ssl.pem
rm: remove regular file ‘xapi-ssl.pem’? y
6. Restart the server.

Adding a XenServer Host or Cluster

Before you add your first XenServer host to VMM, prepare a Run As account with credentials for root access to the XenServer hosts. When you are ready, perform these steps to add a XenServer host or cluster:

1. In the VMM console, select Fabric, expand Servers from the Navigation pane, click Add Resources from the ribbon, and select Citrix XenServer Hosts and Clusters.
2. On the Server Settings screen, enter the FQDN of the XenServer host and accept the default TCP port 5989 unless you have configured a different one. Leave “Use certificates to communicate with this host” turned on, select an appropriate Run As account for the XenServer, and choose a target host group. Click Add and repeat this step for each host or cluster you want to add.
3. Select one of the added hosts and click View Certificate.
4. A thumbprint of the CA Root certificate is shown. Check the name of the host on the certificate and click OK when you are ready.
5. Review the summary and click Finish.
The XenServer host is now added to VMM. Because the integration pack is already installed, this job completes quickly.

The alternative route is to use PowerShell for adding a Citrix XenServer host:

PS C:> $RunAsAccount = Get-SCRunAsAccount -Name "Run_As_XenServer"
PS C:> $HostGroup = Get-SCVMHostGroup -ID "35fc20ae-fa7c-4395-a55a-0591fe1cf68b" -Name "Development"
PS C:> $Certificate = Get-SCCertificate -ComputerName "xenserver1.private.cloud" -TCPPort 5989
PS C:> Add-SCVMHost -ComputerName "xenserver1.private.cloud" -TCPPort "5989" -EnableSecureMode $true -Credential $RunAsAccount -VMHostGroup $HostGroup -XenServerHost -RunAsynchronously -Certificate $Certificate

You are now ready to manage XenServer hosts and clusters as you would any other virtualization host in VMM. Your private cloud has become a truly heterogeneous cloud, and you are ready to deploy virtual machines and services to any of the hypervisors.
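
As a final sanity check of this heterogeneous fabric, you can list all managed hosts grouped by hypervisor type. The VirtualizationPlatform property name is an assumption here; confirm it with Get-SCVMHost | Get-Member before using it in scripts.

PS C:> # Group all managed hosts by hypervisor (property name assumed: VirtualizationPlatform)
PS C:> Get-SCVMHost | Group-Object -Property VirtualizationPlatform | Select-Object Name, Count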

Summary

This chapter discussed the deployment of hosts and clusters. Hyper-V hosts that are domain-joined in a nontrusted domain or in a perimeter network need to be treated differently. The VMM Add Host Wizard makes this task easy. A more ambitious task is the bare-metal deployment method, which transforms servers without an OS into fully operational Hyper-V servers. Building on the previous chapter about integrating storage and networking, you can automate the creation of Hyper-V clusters and configure dynamic optimization and power optimization.

The chapter also showed you how to add a VMware vCenter Server, including VMware ESX hosts and clusters, and explained which vSphere features are supported, along with vSphere's capabilities and limitations.

Finally, the newly supported Citrix XenServer hypervisor was introduced. You learned how to add XenServer hosts without using XenCenter, and you saw the supported features, capabilities, and limitations of this hypervisor platform under the wings of VMM 2012.
