© The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature 2023
S. Fordham, Introducing Cisco Unified Computing System, https://doi.org/10.1007/978-1-4842-8986-0_2

2. The UCS Components

Stuart Fordham
Bedfordshire, UK
 

When UCSPE starts up, it generates a new inventory. This comprises two chassis with five blades, one enclosure with two nodes, two FEXs, and ten rack servers. The device names may vary between what you see on your screen and the screenshots in this book, but any differences should be minor.

We can see the equipment that has been created for us by clicking on the Equipment link on the left-hand side.

Managing UCSPE Hardware

Adding and Removing Devices

If you want to edit the auto-generated hardware that UCSPE has provided, you can remove and add devices. You don’t have to remove (or add) anything here; this is more for reference if you want to create your own setup. However, it does give us an excellent, logical way of introducing all the different components of the UCS and how they are connected. If you do follow the steps below, you may want to perform a factory reset on the VM afterward, which will give us a brand-new UCS before we move on to the next chapter.

Removing UCS Devices

Before we can remove a piece of hardware, we need to disconnect it first. The easiest way to do this is to click on the broken chain-link icon at the top right-hand corner (Figure 2-1). This will disconnect all the devices (apart from the Fabric Interconnects).


Figure 2-1

Disconnecting the devices

If we only want to remove a single piece of equipment, we can disconnect it individually by clicking on the red circle next to its line item.

If you have chosen to remove all of them, then once their green circles have changed to red, click on the red circle at the top (Figure 2-2).


Figure 2-2

Removing the devices

This will remove the devices. If, after waiting a few minutes, nothing has changed, click on the trashcan icon on each device’s line item to delete them one by one. Refreshing the page is also useful here, as is clicking on the Equipment link, to pick up any changes that have been made.

The only devices that are left will be the fabric interconnects (Figure 2-3).


Figure 2-3

The hardware inventory

If the devices do not appear to change state, then click the Equipment link on the left-hand side and the screen should refresh. You may have to do this a lot with UCSPE as it can be slow to pick up changes, such as when we come to add hardware shortly.

Now that we have an empty canvas (so to speak) we can start with the Fabric Interconnects.

Fabric Interconnects

The Fabric Interconnects (also referred to as “FICs” or “FIs”; FI is the better term to use, as “FIC” sounds much like “FEX,” which we will cover shortly) are where all the magic happens. This is where we manage the UCS estate, as the FIs are where the UCS software is held.

Generally, we would have two Fabric Interconnects, though you may also encounter a UCS-Mini. The UCS-Mini can handle between two and fifteen servers (a maximum of eight blade servers and seven rack servers) and places the FIs (the UCS 6324 model) within the chassis, rather than having them as separate hardware.

The FI runs a version of the Cisco Nexus software providing northbound connectivity to the rest of the network as well as connectivity to storage.
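Because the UCS software lives on the FIs, and UCSPE emulates its XML API, we can also query the Fabric Interconnects programmatically. The following is a minimal sketch using the Cisco UCS Python SDK (ucsmsdk); it assumes the SDK is installed (pip install ucsmsdk), that the emulated UCS Manager instance is up and running, and that 192.168.1.10 is a placeholder for your UCSPE VM’s IP address. The default UCSPE credentials are ucspe/ucspe.

```python
# Minimal sketch: list the Fabric Interconnects via the UCS Manager XML API.
# Assumes ucsmsdk is installed and 192.168.1.10 is a placeholder for your UCSPE VM's IP.
from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("192.168.1.10", "ucspe", "ucspe")  # default UCSPE credentials
handle.login()

# NetworkElement is the UCSM class that represents a Fabric Interconnect.
for fi in handle.query_classid("NetworkElement"):
    print(fi.dn, fi.model, fi.serial)

handle.logout()
```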

We can change the FI model if we want to by clicking on the cog icon at the top (Figure 2-4).


Figure 2-4

Changing the Fabric Interconnect

The available models are shown in Table 2-1.
Table 2-1

Fabric Interconnect models

| Model            | Size (RU) | 100G ports | 40/100G ports | 40G ports | 10/25G ports | 10G ports | PSU | Fans |
|------------------|-----------|------------|---------------|-----------|--------------|-----------|-----|------|
| UCS-FI-M-6324    | N/A       | -          | -             | 1         | -            | 4         | N/A | N/A  |
| UCS-FI-6296UP    | 2         | -          | -             | -         | -            | 48+48     | 2   | 2    |
| UCS-FI-6332-16UP | 1         | -          | -             | 24        | -            | 16        | 1+1 | 2+2  |
| UCS-FI-6248UP    | 1         | -          | -             | -         | -            | 32+16     | 2   | 1+1  |
| UCS-FI-6454      | 1         | 54         | 6             | -         | -            | -         | 2   | 3+1  |
| UCS-FI-64108     | 2         | 108        | 12            | -         | 96           | -         | 2   | 2+1  |
| UCS-FI-6332      | 1         | -          | -             | 32        | -            | -         | 1+1 | 2+2  |

If you do change the FI, then you will have to restart UCSPE (Figure 2-5).


Figure 2-5

Restarting UCSPE

The interconnects come with two power supplies (PSUs) and four fans (Figure 2-6).


Figure 2-6

The Fabric Interconnects

Adding Devices

Next, we will come to our chassis.

Chassis

The chassis holds our blade servers. The chassis model options we have are
  • UCSS-S3260 – a modular storage server with dual M5 server nodes.

  • UCSC-C3X60 – similar to the S3260 but is now discontinued. Both are optimized for large datasets.

  • UCSB-5108-DC.

  • UCSB-5108-DC2.

  • N20-C6508.

  • UCSB-5108-AC2.

The 5108s are 8-slot, 6RU chassis with two I/O bays. The N20-C6508 is the same chassis as the other 5108s but is now discontinued.

You can add a chassis, such as the UCSB-5108-AC2, by clicking on the plus sign next to the word “Chassis” on the Equipment page. Enter the name for the chassis, select the model and click on “Add” (Figure 2-7).


Figure 2-7

Adding a Chassis

The chassis will appear in our inventory on the left-hand side (Figure 2-8).


Figure 2-8

The new Chassis

Now that we have our first chassis, we need to fill it with the components that connect it to our FIs and make it hum gently in the data center (power supplies and fans).

If we select the chassis and click on the edit button to the right on the item line, then we can see that we have many options of components we can add (Figure 2-9).


Figure 2-9

Chassis hardware

We are not going to add any blade servers at the moment, but we do need to add some power. We do this by clicking on “Psu” (not sure why Cisco didn’t capitalize all of “PSU,” but there we are), selecting an appropriate model (such as the Platinum II AC power supply), and dragging it up to the chassis, above where the model is shown and underneath the plus and minus buttons (Figure 2-10).


Figure 2-10

Adding a PSU to a chassis

Once you let go, you can select how many to add. UCSPE will tell us how many slots are available to fill. We can decide which slot to add an item to by typing in the slot number (such as “1”) or a range (such as “1-4”) and pressing Enter. You should now have four power supplies (Figure 2-11):


Figure 2-11

Our Chassis has power!

Chassis also require fans, and we add these in the same manner, by clicking on the Fan link and adding eight fans (Figure 2-12). In the box, you can type “1-8” to add all eight fans in one go.


Figure 2-12

Adding chassis fans
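If you would rather not count the PSUs and fans by eye, you can check them through the API as well. This is a hedged sketch: it assumes ucsmsdk is installed, the emulated UCS Manager is running, 192.168.1.10 is a placeholder for your UCSPE VM’s IP, and that (as I understand the UCSM object model) the PSUs appear as EquipmentPsu objects and the chassis fans as EquipmentFanModule objects under the chassis.

```python
# Sketch: count the PSUs and fan modules under chassis 1 via the UCSM API.
# Assumes ucsmsdk is installed; 192.168.1.10 is a placeholder IP for the UCSPE VM.
from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("192.168.1.10", "ucspe", "ucspe")
handle.login()

# PSUs are direct children of the chassis; the fans sit inside fan modules
# (class names assumed from the UCSM object model).
psus = handle.query_children(in_dn="sys/chassis-1", class_id="EquipmentPsu")
fan_modules = handle.query_children(in_dn="sys/chassis-1", class_id="EquipmentFanModule")
print("Chassis 1: %d PSUs, %d fan modules" % (len(psus), len(fan_modules)))

handle.logout()
```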

Next, we can add the IOMs. The IOMs are I/O Modules (Input/Output Modules), also known as FEXs. They are the line cards that connect our chassis to our fabric interconnects. They also provide the interface connections to the blade servers and contain the Chassis Management Controller (CMC), which monitors components such as the fans, power supplies, and temperatures, and which is also responsible for detecting blade insertion and removal. Lastly, they provide the Chassis Management Switch (CMS), which gives us KVM (Keyboard, Video, Mouse), Serial over LAN (SoL), and Intelligent Platform Management Interface (IPMI) access to our blades.

The IOM options we have are as follows:

| Model  | Fabric ports | Server ports | Throughput |
|--------|--------------|--------------|------------|
| 2304   | 4x40GE       | 8x40Gbps     | 320Gbps    |
| 2208XP | 8x10Gbps     | 32x10Gbps    | 80Gbps     |
| 2204XP | 4x10Gbps     | 16x10Gbps    | 40Gbps     |
| 2408   | 8x25GE       | 32x10Gbps    | 400Gbps    |

If we add two 2408 IOMs and click the word “Equipment” in the left-hand pane, then we should see the chassis change, listing the (currently disconnected) IOM ports (Figure 2-13).


Figure 2-13

IOMs in our Chassis

We are not going to configure the IOMs just yet; instead, we are going to see how we can quickly create two more chassis. If we click back onto the main Equipment list, we can click on the duplicate icon next to our chassis, and then again, to create two more.

We should have three chassis now (Figure 2-14).


Figure 2-14

Three Chassis

This ability to duplicate can save a lot of time if we need to add multiples of the same hardware component and is especially useful when adding servers.

We can now connect our chassis to our Fabric Interconnects. We do this by editing the first chassis and clicking the pencil icon under “Edit Port” on port 1/1 (IOM 1, port 1). The Peer Device Type needs to be set to “fi.” Select FI A and choose a free port (such as 1/20). Repeat the process for IOM port 1/2, selecting the next available port on the same FI (1/21).

Next, edit IOM port 2/1 (IOM 2, port 1), selecting FI B and the same port number as used for IOM port 1/1 (1/20), and repeat for 2/2, selecting the next port (1/21) (Figure 2-15).


Figure 2-15

Chassis 1 IOM connectivity

Repeat the process on the other chassis, following Table 2-2.
Table 2-2

IOM connectivity

| Chassis | IOM | IOM Port | Peer Device | Peer Port |
|---------|-----|----------|-------------|-----------|
| 2       | 1   | 1/1      | FI A        | 1/24      |
| 2       | 1   | 1/2      | FI A        | 1/25      |
| 2       | 2   | 2/1      | FI B        | 1/24      |
| 2       | 2   | 2/2      | FI B        | 1/25      |
| 3       | 1   | 1/1      | FI A        | 1/28      |
| 3       | 1   | 1/2      | FI A        | 1/29      |
| 3       | 2   | 2/1      | FI B        | 1/28      |
| 3       | 2   | 2/2      | FI B        | 1/29      |

The reason we leave gaps between the ports of one IOM and the ports of another (as they go into the FI) is that, if we want to increase bandwidth later on, we can keep things nice, neat, and ordered. Also, bear in mind that the cabling is one IOM to one FI. The IOM essentially becomes part of the FI, so we never cross the streams. This isn’t Ghostbusters: bad things really will happen. Maybe not end-of-the-world type stuff, but certainly a call to Cisco TAC (Technical Assistance Center)!
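Once UCS Manager has discovered the chassis, one hedged way to sanity-check the one-IOM-to-one-FI rule is to print the fabric each IOM reports. The sketch below assumes ucsmsdk is installed, 192.168.1.10 is a placeholder for your UCSPE VM’s IP, and that the EquipmentIOCard class and its switch_id property behave as I understand the UCSM object model.

```python
# Sketch: print which fabric (A or B) each discovered IOM belongs to.
# Assumes ucsmsdk is installed; 192.168.1.10 is a placeholder IP for the UCSPE VM.
from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("192.168.1.10", "ucspe", "ucspe")
handle.login()

# EquipmentIOCard represents an IOM; switch_id ("A"/"B") is the fabric it serves
# (property name assumed from the UCSM object model).
for iom in handle.query_classid("EquipmentIOCard"):
    print(iom.dn, "-> FI", iom.switch_id)  # dn looks like sys/chassis-1/slot-1

handle.logout()
```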

If you were to start UCS now, this is what your systems would look like (Figure 2-16).


Figure 2-16

Our topology so far

Blade Servers

Our chassis are fairly useless if we have no (B-series) servers to run in them. So, add some servers by dragging a server from the menu at the bottom into the chassis. Each server will need CPU, memory, and storage, so add these as well. When adding servers, plan them out carefully. For example, if you have three chassis and servers that will perform different functions (such as ESXi hypervisors, database servers, application servers, and so on), spread these out across all three chassis so that if one chassis has an issue, the servers in the other chassis can continue to serve your data and environment as required.

Once you have added your servers, click on the red button next to each of the chassis (to remove them) and then click the green button to insert them again. The red button should turn green, as should the indicator next to each of the servers. You may need to wait a few minutes before you can insert them again.

Our chassis will look something like this (Figure 2-17):


Figure 2-17

Our chassis
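Once the blades have been inserted and discovered, a quick way to confirm they are spread across the chassis as planned is to count them per chassis through the API. Again, this is only a sketch: it assumes ucsmsdk is installed, 192.168.1.10 is a placeholder for your UCSPE VM’s IP, and that ComputeBlade objects carry a chassis_id property, as I understand the UCSM object model.

```python
# Sketch: count discovered blades per chassis to check they are spread out evenly.
# Assumes ucsmsdk is installed; 192.168.1.10 is a placeholder IP for the UCSPE VM.
from collections import Counter
from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("192.168.1.10", "ucspe", "ucspe")
handle.login()

# ComputeBlade is the UCSM class for a B-series blade server.
blades = handle.query_classid("ComputeBlade")
per_chassis = Counter(blade.chassis_id for blade in blades)
for chassis_id, count in sorted(per_chassis.items()):
    print("Chassis %s: %d blades" % (chassis_id, count))

handle.logout()
```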

Now that we have some blade servers, we should add some rack servers. Before we do this, however, we are going to add some FEXs.

FEX

FEX stands for “Fabric Extender.” These allow us to increase the number of ports we have at our disposal. The options we have in UCSPE are

| Model              | Server ports           | Uplinks     |
|--------------------|------------------------|-------------|
| N2K-C2232TM-E-10GE | 32x 1/10GBASE-T        | 8x 10GE     |
| N2K-C2232TM-10GE   | 32x 1/10GBASE-T        | 8x 10GE     |
| N2K-C2148T-1GE     | 48x 1GBASE-T           | 4x SFP+     |
| N2K-C2232PP-10GE   | 32x 1/10GE SFP/SFP+    | 8x 10GE     |
| N2K-C2348UPQ-10GE  | 48x 1/10GE (SFP/SFP+)  | 6x 40GE     |
| N9K-C93180YC-FX3   | 48x 1/10/25GE fiber    | 6x 40/100GE |

Rack Servers

In UCSPE, the rack servers (C-series) connect to the FEXs, and there is a wide variety of servers to choose from: far too many to list here with all their differences. As with the blade servers, though, you will need to add CPU, memory, disks, I/O adapters, storage controllers, and PSUs.
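As a hedged aside, the FEXs and rack servers can also be listed through the API once they have been added and discovered. The sketch assumes ucsmsdk is installed, 192.168.1.10 is a placeholder for your UCSPE VM’s IP, and that EquipmentFex and ComputeRackUnit are the relevant UCSM classes, as I understand the object model.

```python
# Sketch: list the fabric extenders and C-series rack servers UCSM has discovered.
# Assumes ucsmsdk is installed; 192.168.1.10 is a placeholder IP for the UCSPE VM.
from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("192.168.1.10", "ucspe", "ucspe")
handle.login()

for fex in handle.query_classid("EquipmentFex"):
    print("FEX:", fex.dn, fex.model)
for rack in handle.query_classid("ComputeRackUnit"):
    print("Rack server:", rack.dn, rack.model, rack.serial)

handle.logout()
```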

When we connect rack servers, we need to consider how we connect them; we have options of “Direct Attach Server,” “Single Wire Management,” or “Dual Wire Management.”

Direct Attach Mode

In Direct Attach mode (available with UCS version 3.1 or later), the servers attach directly to the fabric interconnects, bypassing the need to have FEXs.

Single Wire Management

As the name suggests, single wire management uses a single cable into the FEX for management and data traffic.

Dual Wire Management

In dual wire mode, separate cables are used for data and management.

Enclosures

UCS enclosures, such as the UCSC-C4200-SFF, can host up to four “nodes,” such as the C125. These are designed for a dense compute form factor with high core densities, where the ability to scale out with compute-intensive machines is critical.

We do need to add fans and power to the enclosures, but not IOMs. We can add the nodes by dragging them onto the chassis as we do with the other hardware. Once we have added the nodes (as well as the CPU, memory, I/O adapters, and disks), we can connect the node to the FEX by clicking the chain icon where it says “Manage Links of Node.”

We can see how this all looks by clicking the Equipment link at the top left-hand corner and then clicking the UCS icon at the end of the row of icons. We can log in using the username and password of “ucspe.”
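The same ucspe/ucspe credentials also work against the emulator’s XML API, so we can pull a quick inventory summary programmatically if we prefer. This is a sketch under the same assumptions as the earlier snippets (ucsmsdk installed, 192.168.1.10 as a placeholder for the UCSPE VM’s IP, and UCSM class names as I understand them).

```python
# Sketch: summarize the UCSPE inventory by UCSM object class.
# Assumes ucsmsdk is installed; 192.168.1.10 is a placeholder IP for the UCSPE VM.
from ucsmsdk.ucshandle import UcsHandle

CLASSES = [
    "NetworkElement",    # Fabric Interconnects
    "EquipmentChassis",  # blade chassis
    "EquipmentFex",      # fabric extenders
    "ComputeBlade",      # B-series blade servers
    "ComputeRackUnit",   # C-series rack servers
]

handle = UcsHandle("192.168.1.10", "ucspe", "ucspe")
handle.login()

for class_id in CLASSES:
    print("%-17s %d" % (class_id, len(handle.query_classid(class_id))))

handle.logout()
```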

The default UCSPE-generated layout will look a little like this (Figure 2-18):


Figure 2-18

Our UCS environment

Summary

In this chapter, we looked at how to add and remove the components available to us in UCSPE, as well as how to connect them all together.
