At this stage, you are probably hungry to do some actual UCS server configuration (I know I am), and this is where it all starts: with policies.
Policies are used to create service profile templates, and from these templates we can assign service profiles to our servers. Before we start though, we should create our UCS organization.
Creating the UCS Organization
We create UCS organizations to simplify our management. They offer us a hierarchical way of organizing our policies (as well as our pools and service profiles). We create the organization by going to the Servers tab, expanding any one of the options, such as Servers ➤ Policies ➤ root ➤ Sub-Organizations, and selecting “Add.” Give the organization a name, and click “OK” (Figure 4-1).
You will receive an acknowledgment that the organization has been created (Figure 4-2).
You will also notice that the same organization has been created under Service Profiles and also under Service Profile Templates, as well as Pools. You will also find the new organization in the LAN tab, the SAN tab, the Storage tab, and the Chassis tab.
Now, we can start to create our policies. We will not cover every single policy option, as there are a lot of them. Instead, we will focus on the ones required to create a service profile template, which will then be applied to our servers.
Storage Policies
Our storage policy is going to be quite simple; we will just create a mirrored RAID volume from our two disks. To do this, go to “Servers ➤ Policies ➤ root,” right-click “Local Disk Config Policies” and select the pop-up option to create one. Call it “LocalDiskPol” and set the mode to “RAID 1 Mirrored” (Figure 4-3).
Dynamic vNIC Connection Policies
Dynamic vNICs are not applicable to us (in our sandboxed environment), as these are used for determining connectivity between virtual machines and dynamic vNICs running on servers with VIC adapters. However, if we were running through the service profile wizard (which we will do in the next chapter), this is where we would set up this connectivity. The same wizard, though, also includes VLAN creation, which is where we will sidestep to now.
Creating VLANs
UCSPE has already created some VLANs for us, but we will create some more by going to LAN ➤ LAN Cloud ➤ VLANs. We can create VLANs on a per-fabric basis, or on both fabrics at the same time. Click “All” and then click “Add.” Our first VLAN will be called “DB,” and will have a VLAN ID of 10 (Figure 4-4).
As the GUI shows, we can use this to create ranges of VLANs on both fabrics (Common/Global), individual fabrics, or we can configure the fabrics differently. The latter option allows us to specify different VLAN IDs for each fabric (though the name we give this particular VLAN will be the same across both fabrics, just the VLAN ID will be different).
The sharing type is for setting up private VLANs (PVLANs) and allows us, if we should so desire, to isolate ports. We create a primary VLAN and one (or more) secondary VLANs, each of which can be either an isolated or a community VLAN. Isolated ports can communicate only with the promiscuous port in the primary VLAN, not even with each other. Community ports can communicate with each other and with promiscuous ports. For both isolated and community VLANs, we must create the primary VLAN first.
Repeat the process, creating a VLAN 13, which is our DMZ. Our VLANs should look like Figure 4-5.
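Before creating a VLAN it is worth checking that the ID we picked is actually usable. The sketch below encodes the commonly documented default reserved range on the fabric interconnects (3968–4047 and 4094); verify this against your UCS Manager version, as the exact reserved IDs are an assumption here.

```python
# Sketch: sanity-check planned VLAN IDs before creating them in UCS Manager.
# The reserved range below (3968-4047 and 4094) is the commonly documented
# default on the fabric interconnects -- confirm it for your UCSM release.
RESERVED = set(range(3968, 4048)) | {4094}

def valid_vlan_id(vlan_id: int) -> bool:
    """A usable VLAN ID is 1-4093 and not in the reserved range."""
    return 1 <= vlan_id <= 4093 and vlan_id not in RESERVED

# Our planned VLANs from this section
planned = {"DB": 10, "DMZ": 13}
for name, vid in planned.items():
    print(f"{name}: VLAN {vid} -> {'ok' if valid_vlan_id(vid) else 'reserved/invalid'}")
```

Both of our VLANs (10 and 13) sit comfortably outside the reserved range.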
vNIC/vHBA Placement
UCS blades have a component called a “Mezzanine” card. Mezzanine cards can give us storage acceleration, port expansion, GPUs (Graphics Processing Units) and VICs (Virtual Interface Cards). We also have mLOMs (modular LAN on Motherboard) cards, which offer VIC expansion.
The UCS, as we spoke about back in Chapter 2, has IOMs, and each IOM has a defined amount of internal bandwidth (the bandwidth that goes to the blades). Per blade, the 2104 provides 2x 10 Gb, the 2204 provides 4x 10 Gb, and the 2208 provides 8x 10 Gb. This means that, with the 2208, a blade can get 80 Gb of KR bandwidth across a pair of IOMs.
The “KR” in this equation is a data rate specification across a backplane medium (K), using a 64B/66B (R) coding scheme (which is all to do with the electrical encoding at the physical layer) in a single lane configuration. For a deeper dive into this, check out this very good blog post: www.tbijlsma.com/2012/03/how-ucs-achieves-80gbe-of-bandwidth-per-blade/
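The arithmetic above is simple enough to sketch: each 10GBASE-KR lane carries 10 Gb/s, and the lane counts per blade (across a pair of IOMs) come straight from the figures quoted for each IOM model.

```python
# Sketch of the per-blade bandwidth arithmetic above. Each 10GBASE-KR lane
# is 10 Gb/s; lane counts are per blade across a pair of IOMs, per the text.
KR_LANE_GBPS = 10
LANES_PER_BLADE = {"2104": 2, "2204": 4, "2208": 8}

def blade_bandwidth_gbps(iom_model: str) -> int:
    """Total KR bandwidth a blade sees across both IOMs, in Gb/s."""
    return LANES_PER_BLADE[iom_model] * KR_LANE_GBPS

for model in LANES_PER_BLADE:
    print(f"{model}: {blade_bandwidth_gbps(model)} Gb/s per blade")
# The 2208 row gives 80 Gb/s, matching the 80 Gb figure above.
```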
We can control how each of our vNICs is assigned to these lanes through a “Placement Policy,” allowing us to utilize the hardware capacity to its fullest. For example, we could place all vNICs on one card and all vHBAs (virtual Host Bus Adapters) on another card; this could be for compatibility reasons, or card speed.
To create a placement policy we would go to Servers ➤ Policies ➤ root ➤ Sub-Organizations ➤ LearningUCS ➤ vNIC/vHBA Placement Policies. Although we don’t need to create one ourselves, we would do so by clicking on the “Add” button (Figure 4-6).
The options we have are
All: The vCON (virtual network interface connection) is used for all vNICs and vHBAs, whether they are assigned to it, not assigned to it, or dynamic.
Assigned Only: The vCON can only be used by vNICs and vHBAs that are explicitly assigned to it.
Exclude Dynamic: The vCON cannot be used for dynamic vNICs or vHBAs.
Exclude Unassigned: The vCON can only be used by vNICs or vHBAs assigned to it, or by dynamic vNICs and vHBAs.
Exclude usNIC: The vCON cannot be used by user-space NICs.
vMedia Policies
vMedia policies allow us to boot our servers from ISO images stored on a share. We create these by going to Servers ➤ Policies ➤ root ➤ Sub-Organizations ➤ LearningUCS ➤ vMedia Policies. To create one, click “Add” and enter the details, such as those in Figure 4-7.
In the preceding policy, we would be loading a CD ISO image called Linux.iso from https://san.domain.local/ISOs/Linux. Well, depending on our server boot policy, that is.
Server Boot Policies
Server boot policies control how we boot our servers and in what order we try these options. We configure a boot policy by going to Servers ➤ Policies ➤ root ➤ Sub-Organizations ➤ LearningUCS ➤ Boot Policies. Click on “Add” to create a new policy. In Figure 4-8, we are creating a policy to first boot from a CD (or DVD) mounted via the CIMC. It will then try to boot from a local LUN if no CD or DVD is found.
Maintenance Policies
We will, from time to time, have to perform maintenance on our UCS, usually in the way of upgrading the firmware. You may upgrade at a particular time, taking the inevitable reboots of the fabric and IOMs as you go. However, you may not want the blades to reboot at the same time, so, unless you want to cause an outage, it’s a good idea to implement a maintenance policy. Head to Servers ➤ Policies ➤ root ➤ Sub-Organizations ➤ LearningUCS ➤ Maintenance Policies and click “Add.” Create a maintenance policy that will (at a very minimum) require a user acknowledgment before rebooting the servers (Figure 4-9).
Server Pool Policies
Server pools are used for servers that share characteristics, such as type, amount of memory, drive configuration, or the type of CPU. We create a pool first (Servers ➤ Pools ➤ root ➤ Server Pools). We start by naming our server pool as shown in Figure 4-10.
Next, we add our servers, selecting them in the first window (Figure 4-11).
Once we have added the servers (Figure 4-12), click “Finish.”
We can also create a pool qualification, which will, as we just mentioned, pool servers based on characteristics. We do this from Servers ➤ Policies ➤ root ➤ Sub-Organizations ➤ LearningUCS ➤ Server Pool Policy Qualifications (Figure 4-13).
In our qualification, we are going to keep it simple and just match against the server product ID (PID), as shown in Figure 4-14.
Once we have added this (Figure 4-15), we can click on “Finish” to create the qualification.
The last step is to create a policy to tie these all together. We do this by going to Servers ➤ Policies ➤ root ➤ Sub-Organizations ➤ LearningUCS ➤ Server Pool Policies. We name our policy and either assign it to a pool or select a qualification, but not both (Figure 4-16). While the GUI lets us set both when we create the policy, if we do, we will find the pool empty when we come back to it. Pool assignments are fairly static, whereas qualifications are more dynamic in nature.
The last policies we are going to cover are some small but very important ones!
Operational Policies
Operational policies cover aspects of the servers like BIOS, IPMI, management IP addresses, power control, scrub policies, KVM management and graphics card policies. There are three that we should cover, starting with management IP addresses.
Management IP Addresses
The management IP addresses come from a defined pool of IP addresses, and it is one of these addresses we connect to when we launch the KVM from the UCS GUI. We create the pool by going to LAN ➤ Pools ➤ root ➤ Sub-Organizations ➤ LearningUCS ➤ IP Pools. We can create them under LAN ➤ Pools ➤ root ➤ IP Pools as well, if we so desire. Create a new IP pool called “KVM-IP-Pool” (Figure 4-17).
Click “Next,” and assign a block of IP addresses (Figure 4-18). This needs to be large enough to cover all the servers you have (and any future ones).
Pick a range that doesn’t overlap with anything (such as your DHCP scope) otherwise this could cause issues in your environment. The pool will appear in the GUI (Figure 4-19).
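The two checks above (enough addresses for our servers, no overlap with the DHCP scope) are easy to script before committing to a range. The addresses below are hypothetical examples, not the values from the book's figures.

```python
# Sketch: check a planned KVM management IP block is large enough for our
# servers and does not overlap an existing DHCP scope.
# All addresses here are hypothetical examples.
import ipaddress

def pool_ok(first, last, dhcp_first, dhcp_last, servers_needed):
    """True if [first, last] holds servers_needed addresses and is
    disjoint from the DHCP scope [dhcp_first, dhcp_last]."""
    start = int(ipaddress.ip_address(first))
    end = int(ipaddress.ip_address(last))
    d_start = int(ipaddress.ip_address(dhcp_first))
    d_end = int(ipaddress.ip_address(dhcp_last))
    size = end - start + 1
    overlaps = start <= d_end and d_start <= end
    return size >= servers_needed and not overlaps

# 20-address KVM block, clear of a DHCP scope of .50-.150, 8 servers
print(pool_ok("192.168.1.200", "192.168.1.219",
              "192.168.1.50", "192.168.1.150", 8))  # True
```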
As we are not adding an IPv6 pool, click “Next,” and then click “Finish.” Now that we have our IP address pool, we need to specify which port the KVM will use.
KVM Management Policy
The default KVM port is 2068, but we can change that by going to Servers ➤ Policies ➤ root ➤ Sub-Organizations ➤ LearningUCS ➤ KVM Management Policy. Create a new policy called KVM-Port-Policy, setting the port to 3099 (Figure 4-20).
Onto our final policy.
Scrub Policies
The last set of policies we are going to implement are scrub policies. These control how the disks on a server will be treated in scenarios such as moving blades. For example, say you are balancing the blades in your UCS chassis, evening out three application blades across three chassis. You have arranged the downtime, attached the service profile to the empty destination slot, and removed the blade. When you insert it into the new chassis slot, the blade is discovered, and once it has booted up, you find that (due to the configured scrub policy) the disks have been wiped.
This is where scrub policies will save you. Head to Servers ➤ Policies ➤ root ➤ Sub-Organizations ➤ LearningUCS ➤ Scrub Policies. Set all the options to “No” (Figure 4-21).
Now, we can keep our data safe if we move a blade! Before we move on to the next chapter, however, we need to create a few more items, namely our pools and a VSAN.
UUID Pool
We need to be able to identify our servers in UCS; well, more specifically, the UCS systems need to identify our servers. While we can name them (by giving them labels) in the UCS GUI, the backend systems have a different way of referencing the servers, and this is through a UUID. The UUID (Universally Unique Identifier) is a 128-bit reference. We can create a pool of UUIDs, saving us from manually assigning them to each of our servers. To create the pool, go to Servers ➤ Pools ➤ root ➤ Sub-Organizations ➤ LearningUCS ➤ UUID Suffix Pools. Click “Add” and create a block of 30 UUIDs, as in Figure 4-22.
Once we have created our pool, we can see the sequential suffixes (Figure 4-23).
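UCS generates the suffixes sequentially from the block's starting value, in the XXXX-XXXXXXXXXXXX form shown in the GUI. A minimal sketch of that generation, using a hypothetical starting suffix:

```python
# Sketch: generate a sequential block of UCS-style UUID suffixes
# (the XXXX-XXXXXXXXXXXX form shown in the GUI). The starting value of 1
# is a hypothetical example, not a value from the book's figures.
def uuid_suffix_block(start: int, size: int):
    """Return `size` sequential 16-hex-digit suffixes, formatted 4-12."""
    suffixes = []
    for i in range(start, start + size):
        s = f"{i:016X}"  # 64-bit suffix as 16 hex digits
        suffixes.append(f"{s[:4]}-{s[4:]}")
    return suffixes

block = uuid_suffix_block(1, 30)
print(block[0], "...", block[-1])  # 0000-000000000001 ... 0000-00000000001E
```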
MAC Pools
In the same way that our servers need a unique identifier, so do our network interfaces. We do this through MAC pools. Navigate to LAN ➤ Pools ➤ root. Click “Add” and name the MAC pool (such as “MyMacPool,” as in Figure 4-24).
Click “Next” to add the MAC addresses (Figure 4-25). Cisco suggests that the block uses 00:25:B5:xx:xx:xx for compatibility reasons.
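The MAC block works the same way as the UUID block: a fixed prefix plus a sequential suffix. A sketch of that expansion under Cisco's suggested 00:25:B5 prefix (the starting offset and block size here are hypothetical):

```python
# Sketch: expand a sequential MAC block under Cisco's suggested 00:25:B5
# prefix. Starting offset and block size are hypothetical examples.
def mac_block(oui: str, start: int, size: int):
    """Yield `size` MACs of the form <oui>:XX:XX:XX with a sequential
    24-bit suffix."""
    for i in range(start, start + size):
        suffix = f"{i:06X}"
        yield f"{oui}:{suffix[:2]}:{suffix[2:4]}:{suffix[4:]}"

macs = list(mac_block("00:25:B5", 0, 32))
print(macs[0], "...", macs[-1])  # 00:25:B5:00:00:00 ... 00:25:B5:00:00:1F
```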
WWNN
In the same way that we created blocks of IDs for our servers and MAC addresses for our network cards, our SAN fabric will also need some uniqueness. We do this through the WWNN (World Wide Node Names) pool, which has a number of WWNs (World Wide Names). Navigate to “SAN ➤ Pools ➤ root ➤ Sub-Organizations ➤ LearningUCS” and right-click WWNN Pools, choosing the option to create a new one. Name the pool “wwnn-pool” (Figure 4-26) and click “Next.”
Create a block of sixty WWNs, following the naming advice of Cisco (20:00:00:25:b5:xx:xx:xx), as shown in Figure 4-27.
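If we are generating or importing WWNs from elsewhere, it is worth validating that they follow the eight-octet format and sit under the suggested prefix before adding them to the pool. A small sketch (the example WWN values are hypothetical):

```python
# Sketch: validate that candidate WWNs use the 8-octet colon-separated
# format and sit under the 20:00:00:25:b5 prefix Cisco suggests.
# The example values below are hypothetical.
import re

WWN_RE = re.compile(r"^([0-9a-f]{2}:){7}[0-9a-f]{2}$")

def wwn_ok(wwn: str, prefix: str = "20:00:00:25:b5") -> bool:
    """True if `wwn` is 8 hex octets and starts with the given prefix."""
    w = wwn.lower()
    return bool(WWN_RE.match(w)) and w.startswith(prefix)

print(wwn_ok("20:00:00:25:b5:00:00:01"))  # True
print(wwn_ok("50:06:01:60:90:20:1e:7a"))  # False: well-formed, wrong prefix
```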
VSAN
The last component we are going to create is our VSAN. This will enable us to separate our storage traffic. We will be using 2000 and 2001 as our VSAN and FCoE (Fibre Channel over Ethernet) VLAN IDs (as these are the ones Cisco suggests), as shown in Figure 4-28.
Summary
In this chapter, we have created the policies and pools to control our servers. In the next chapter, we will start assigning these to our servers.