When UCSPE starts up, it generates a new inventory. This comprises two chassis with five blades, one enclosure with two nodes, two fex, and ten rack servers. While the naming of the devices will vary between what you may see on your screen and the screenshots in this book, any differences should be minor.
We can see the equipment that has been created for us by clicking on the Equipment link on the left-hand side.
Managing UCSPE Hardware
Adding and Removing Devices
If you want to edit the auto-generated hardware that UCSPE has provided, you can remove and add devices. You don't have to remove (or add) anything here; this section is mainly for reference if you want to create your own setup. However, it does give us an excellent, logical way of introducing all the different components of the UCS and how they are connected. If you do follow the steps below, you might want to do a factory reset on the VM afterwards, which will set us up with a brand new UCS before we move on to the next chapter.
Removing UCS Devices
If we only want to remove one piece of equipment, we can disconnect individual devices by clicking on the red circle next to the line item we want to remove.
This will remove the devices. If (after waiting a few minutes) nothing has changed, click on the trashcan icon on each device's line item to delete them one by one. Refreshing the page is also useful here, as is clicking on the Equipment link, to pick up any changes that have been made.
If the devices do not appear to change state, then click the Equipment link on the left-hand side and the screen should refresh. You may have to do this a lot with UCSPE as it can be slow to pick up changes, such as when we come to add hardware shortly.
Now that we have an empty canvas (so to speak) we can start with the Fabric Interconnects.
Fabric Interconnects
The Fabric Interconnects (also referred to as “FICs” or “FIs”; FI is the better term to use, as “FIC” sounds much like “FEX,” which we will cover shortly) are where all the magic happens. This is where we manage the UCS estate, as the FIs are where the UCS Manager software runs.
Generally, we would have two Fabric Interconnects, though you may also encounter a UCS-Mini. The UCS-Mini can handle between two and fifteen servers (a maximum of eight blade servers and seven rack servers), and places the FIs (the UCS 6324 model) within the chassis, rather than having them as separate hardware.
The FI runs a version of the Cisco Nexus software providing northbound connectivity to the rest of the network as well as connectivity to storage.
Adding Devices
Next, we will come to our chassis.
Chassis
UCSS-S3260 – a modular storage server with dual M5 server nodes.
UCSC-C3X60 – similar to the S3260 but is now discontinued. Both are optimized for large datasets.
UCSB-5108-DC.
UCSB-5108-DC2.
N20-C6508.
UCSB-5108-AC2.
The 5108s are 8-slot, 6RU chassis with two I/O bays. The N20-C6508 is the same chassis as the previous models, but is now discontinued.
Now that we have our first chassis, we need to fill it with the components that connect it to our FIs and make it hum gently in the data center (power supplies and fans).
Next, we can add the IOMs. IOM stands for “I/O Module” (Input/Output Module); these are also known as FEXs. They are the line cards that connect our chassis to our Fabric Interconnects. They also provide the interface connections to the blade servers, and host the CMC (Chassis Management Controller), which monitors our components, such as fans, power supplies, and temperatures, and is also the component responsible for detecting blade insertion and removal. Lastly, they provide the Chassis Management Switch (CMS), which gives us KVM (Keyboard, Video, Mouse), Serial over LAN (SoL), and Intelligent Platform Management Interface (IPMI) access to our blades.
We are not going to configure the IOMs just yet, instead, we are going to see how we can quickly create two more. If we click back onto the main Equipment list, then we can click on the duplicate icon next to our chassis, and again to create two more chassis.
This ability to duplicate can save a lot of time if we need to add multiples of the same hardware component and is especially useful when adding servers.
We can now connect our chassis to our Fabric Interconnects. We can do this by editing the first chassis and clicking the pencil icon under “Edit Port” on port 1/1 (IOM 1, port 1). The Peer Device Type needs to be set to “fi.” Select FI A and select a free port (such as 1/20). Repeat the process for IOM port 1/2, selecting the next available port on the same FI (1/21).
IOM connectivity
| Chassis | IOM | IOM Port | Peer Device | Peer Port |
|---|---|---|---|---|
| 2 | 1 | 1/1 | FI A | 1/24 |
| 2 | 1 | 1/2 | FI A | 1/25 |
| 2 | 2 | 2/1 | FI B | 1/24 |
| 2 | 2 | 2/2 | FI B | 1/25 |
| 3 | 1 | 1/1 | FI A | 1/28 |
| 3 | 1 | 1/2 | FI A | 1/29 |
| 3 | 2 | 2/1 | FI B | 1/28 |
| 3 | 2 | 2/2 | FI B | 1/29 |
The reason we leave gaps between the ports of one IOM and the ports of another (as they go into the FI) is that if we want to increase bandwidth later on, we can keep things nice and neat and ordered. Also, bear in mind that the cabling is one IOM to one FI. The IOM, essentially, becomes part of the FI, so we never cross the streams. This isn’t Ghostbusters. Bad things really will happen. Maybe not end-of-the-world type stuff, but certainly a call to Cisco TAC (Technical Assistance Center)!
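The “never cross the streams” rule above lends itself to a quick sanity check. The following Python sketch (the data is simply the cabling table transcribed, not anything read from UCSPE) verifies that each IOM is cabled to exactly one FI:

```python
# Sketch: validate an IOM-to-FI cabling plan against the rule that
# each IOM connects to exactly one FI (IOM 1 -> FI A, IOM 2 -> FI B).

LINKS = [
    # (chassis, iom, iom_port, peer_fi, peer_port)
    (2, 1, "1/1", "A", "1/24"),
    (2, 1, "1/2", "A", "1/25"),
    (2, 2, "2/1", "B", "1/24"),
    (2, 2, "2/2", "B", "1/25"),
    (3, 1, "1/1", "A", "1/28"),
    (3, 1, "1/2", "A", "1/29"),
    (3, 2, "2/1", "B", "1/28"),
    (3, 2, "2/2", "B", "1/29"),
]

def validate(links):
    """Return True if no IOM is cabled to more than one FI."""
    fi_per_iom = {}
    for chassis, iom, _, fi, _ in links:
        fi_per_iom.setdefault((chassis, iom), set()).add(fi)
    return all(len(fis) == 1 for fis in fi_per_iom.values())

print(validate(LINKS))  # True: no IOM is split across both fabrics
```

If a link in the plan accidentally crossed an IOM over to the wrong fabric, the check would return False before anyone had to call TAC.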
Blade Servers
Our chassis are fairly useless if we have no (B-series) servers to run in them. So, add some servers by dragging the server from the menu at the bottom into the chassis. Each server will need CPU, memory, and storage, so add these as well. When adding servers, plan them out carefully. For example, if you have three chassis and servers that will perform different functions (such as ESXi hypervisors, database servers, application servers, and so on), share these out across all three chassis so that if one chassis has an issue, the servers in the other chassis can continue to serve your data and environment as required.
Once you have added your servers, click on the red button next to each chassis (to remove it) and then click the green button to insert it again. The red button should turn green, as should the green button next to each of the servers. You may need to wait a few minutes before you can insert a chassis again.
Now that we have some blade servers, we should add some rack servers. Before we do this, however, we are going to add some FEXs.
FEX
| Model | Server ports | Uplinks |
|---|---|---|
| N2K-C2232TM-E-10GE | 32x 1/10GBASE-T | 8x 10GE |
| N2K-C2232TM-10GE | 32x 1/10GBASE-T | 8x 10GE |
| N2K-C2148T-1GE | 48x 1GBASE-T | 4x SFP+ |
| N2K-C2232PP-10GE | 32x 1/10GE SFP/SFP+ | 8x 10GE |
| N2K-C2348UPQ-10GE | 48x 1/10 Gigabit Ethernet (SFP/SFP+) | 6x 40G |
| N9K-C93180YC-FX3 | 48x 1/10/25-Gbps fiber | 6x 40/100G |
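One practical way to compare these FEX models is their oversubscription ratio: total server-facing bandwidth divided by total uplink bandwidth. The figures below are back-of-the-envelope calculations assuming every port runs at its maximum listed speed, not published Cisco specifications:

```python
# Rough oversubscription ratio for a FEX: server-facing bandwidth
# divided by uplink bandwidth, assuming all ports at maximum speed.

def oversubscription(server_ports, server_gbps, uplinks, uplink_gbps):
    return (server_ports * server_gbps) / (uplinks * uplink_gbps)

# N2K-C2232PP-10GE: 32x 10GE server ports, 8x 10GE uplinks
print(oversubscription(32, 10, 8, 10))   # 4.0 -> 4:1 oversubscribed

# N2K-C2348UPQ: 48x 10GE server ports, 6x 40G uplinks
print(oversubscription(48, 10, 6, 40))   # 2.0 -> 2:1 oversubscribed
```

A lower ratio means less contention for uplink bandwidth when many servers are busy at once, which is part of why we leave spare FI ports free to add uplinks later.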
Rack Servers
In UCSPE, the rack servers (C-series) connect to the FEXs, and there is a wide variety of servers to choose from; far too many to list here with all their differences. Similarly to the blade servers, you will need to add CPU, memory, disks, I/O adapters, storage controllers, and PSUs.
When we connect rack servers, we need to consider how we connect them; we have options of “Direct Attach Server,” “Single Wire Management,” or “Dual Wire Management.”
Direct Attach Mode
In Direct Attach mode (available with UCS version 3.1 or later), the servers attach directly to the fabric interconnects, bypassing the need to have FEXs.
Single Wire Management
As the name suggests, single wire management uses a single cable into the FEX for management and data traffic.
Dual Wire Management
In dual wire mode, separate cables are used for data and management.
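The three attachment options above can be summarized as a small lookup table. This is purely an illustrative model (the mode names and cable counts are paraphrased from the descriptions above, not a UCS API):

```python
# Illustrative summary of the three rack-server attachment options.
# "uses_fex" is whether a FEX sits between the server and the FI;
# "cables" is the number of cables per fabric side in this sketch.

ATTACH_MODES = {
    "direct":      {"uses_fex": False, "cables": 1,
                    "note": "server attaches to the FI directly (UCS 3.1+)"},
    "single-wire": {"uses_fex": True,  "cables": 1,
                    "note": "management and data share one cable into the FEX"},
    "dual-wire":   {"uses_fex": True,  "cables": 2,
                    "note": "separate cables for data and management"},
}

for mode, props in ATTACH_MODES.items():
    print(f"{mode}: fex={props['uses_fex']}, cables={props['cables']}")
```

Single wire management halves the cabling of dual wire at the cost of sharing one link, while direct attach removes the FEX from the path entirely.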
Enclosures
UCS enclosures, such as the UCSC-C4200-SFF, can host up to four “nodes,” such as the C125. These are designed as a dense compute form factor with high core counts, for environments where the ability to scale out with compute-intensive machines is critical.
We do need to add fans and power to the enclosures, but not IOMs. We can add the nodes by dragging them onto the chassis as we do with the other hardware. Once we have added the nodes (as well as the CPU, memory, I/O adapters, and disks), we can connect the node to the FEX by clicking the chain icon where it says “Manage Links of Node.”
We can see how this all looks by clicking the Equipment link at the top left-hand corner and then clicking the UCS icon at the end of the row of icons. We can log in using the username and password of “ucspe.”
Summary
In this chapter, we looked at how to add and remove the components available to us in UCSPE, as well as how to connect them all together.