IOM - Fabric Interconnect physical cabling

IOMs connect to the individual blade servers through backplane server ports and to the Fabric Interconnects through fabric uplink ports. IOM connectivity to the blade servers does not require any user configuration.

IOM-to-Fabric Interconnect connectivity, however, requires physical cabling. Both the IOMs and Fabric Interconnects in the third generation have Quad Small Form-factor Pluggable Plus (QSFP+) slots, whereas the second generation has Small Form-factor Pluggable Plus (SFP+) slots. The Fabric Interconnect 6300 series and I/O module 2300 series provide 40 Gig QSFP+ ports with a maximum of 320 Gig of throughput from each chassis, while the Fabric Interconnect 6200 series and I/O module 2200 series provide 10 Gig SFP+ ports with a maximum of 160 Gig of throughput (a quick calculation behind these maximums follows the list below). There is a variety of possibilities in terms of physical interfaces. Some of the common configurations for the second generation include the following:

  • 10 Gb FET SFP+ interface (a special multimode optical fibre SFP+ module that can only be used with UCS and Nexus equipment)
  • 10 Gb CU SFP+ (copper Twinax cable)
  • 10 Gb SR SFP+ (short-range multimode optical fibre SFP+ module for distances up to 300 m)
  • 10 Gb LR SFP+ (long-range single-mode optical fibre SFP+ module for distances beyond 300 m)
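
As a quick check on the throughput figures quoted above, and assuming the usual fabric port counts (four 40 Gig fabric ports on a third-generation IOM and eight 10 Gig fabric ports on a 2208XP-class second-generation IOM, with two IOMs per chassis):

    2 IOMs x 4 ports x 40 Gig = 320 Gig per chassis (6300/2300 series)
    2 IOMs x 8 ports x 10 Gig = 160 Gig per chassis (6200/2200 series)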

The following diagram shows eight connections from IOM 0 to Fabric Interconnect A and eight connections from IOM 1 to Fabric Interconnect B. Depending on the bandwidth requirements and the model, it is possible to have 1, 2, 4, or 8 connections from each IOM to its Fabric Interconnect.

Although a larger number of links provides higher bandwidth to the individual servers, each link consumes a physical port on the Fabric Interconnect, so more links per IOM also reduces the total number of UCS chassis that can be connected to a pair of Fabric Interconnects.
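
As a rough example of this trade-off, assume a Fabric Interconnect with 32 usable server ports (for instance, the fixed ports of a 6248UP) and ignore the ports needed for northbound uplinks:

    8 links per IOM: 32 / 8 = 4 chassis per Fabric Interconnect pair
    2 links per IOM: 32 / 2 = 16 chassis per Fabric Interconnect pair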

Fabric Interconnect and I/O module cabling:

As shown in the preceding diagram, IOM-to-Fabric Interconnect links only support direct connections. Fabric Interconnect to northbound Nexus switch connectivity, however, can either be direct, using a regular port channel (PC), or the connections from a single Fabric Interconnect may be spread across two different Nexus switches using a virtual PortChannel (vPC).

The next diagram shows a direct connection between the Fabric Interconnects and the Nexus switches. All links from Fabric Interconnect A connect to Nexus Switch 1, and all links from Fabric Interconnect B connect to Nexus Switch 2. These links can be aggregated into a port channel.
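
On the Nexus side, such an aggregation is a standard LACP port channel. The following is a minimal sketch of what this might look like on Nexus Switch 1; the interface numbers, port channel ID, and trunk settings are assumptions, and the matching uplink port channel on the Fabric Interconnect is configured through UCS Manager rather than on the switch:

    ! Nexus Switch 1 - regular port channel toward Fabric Interconnect A (sketch)
    feature lacp
    !
    interface port-channel 10
      description Uplinks from Fabric Interconnect A
      switchport mode trunk
    !
    interface Ethernet1/1
      description Link 1 from Fabric Interconnect A
      switchport mode trunk
      channel-group 10 mode active
    !
    interface Ethernet1/2
      description Link 2 from Fabric Interconnect A
      switchport mode trunk
      channel-group 10 mode active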

There are two other connections that need to be configured:

  • Cluster heartbeat connectivity: Each Fabric Interconnect has two fast Ethernet ports, L1 and L2. These ports are connected directly between the two Fabric Interconnects using Cat 6 UTP cables, L1 to L1 and L2 to L2. They carry the cluster configuration between the Fabric Interconnects and allow each one to continuously monitor the other's status. The cluster configuration provides only management-plane redundancy, not data-plane redundancy; these ports do not forward any data traffic and are used solely for heartbeats between the Fabric Interconnects.
  • Management connectivity: Each Fabric Interconnect has one management port, mgmt0, which is also a fast Ethernet port and is connected with a Cat 6 UTP cable to an out-of-band management switch for remote management of the Fabric Interconnect. In addition to the two individual management IP addresses, a clustered (virtual) IP address is assigned to provide management redundancy regardless of which Fabric Interconnect is primary (a rough CLI sketch follows this list).
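
A minimal sketch of assigning these addresses from the UCS Manager CLI is shown below. The IP addresses are purely illustrative and the exact command syntax can vary between UCS Manager releases, so treat this as an approximation rather than a definitive procedure:

    UCS-A# scope fabric-interconnect a
    UCS-A /fabric-interconnect # set out-of-band ip 192.168.10.11 netmask 255.255.255.0 gw 192.168.10.1
    UCS-A /fabric-interconnect* # commit-buffer
    UCS-A# scope system
    UCS-A /system # set virtual-ip 192.168.10.10
    UCS-A /system* # commit-buffer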

Fabric Interconnect and Nexus connectivity:

The next diagram shows Fabric Interconnect-to-Nexus switch connectivity where the links from each Fabric Interconnect are spread across both Nexus switches. One connection from Fabric Interconnect A goes to Nexus Switch 1 and the other to Nexus Switch 2, and both are configured as a vPC. Similarly, one connection from Fabric Interconnect B goes to Nexus Switch 2 and the other to Nexus Switch 1, and both are also configured as a vPC. It is also imperative to have a vPC peer link between the two Nexus switches, shown as the two physical links between Nexus Switch 1 and Nexus Switch 2. Without this connectivity and configuration between the Nexus switches, vPC will not work.
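
The following is a minimal sketch of the corresponding vPC configuration on Nexus Switch 1 (Nexus Switch 2 would mirror it with its own addressing). The domain ID, keepalive addresses, and interface numbers are assumptions, and the uplink port channels on the Fabric Interconnect side are still configured through UCS Manager:

    ! Nexus Switch 1 - vPC toward Fabric Interconnects A and B (sketch)
    feature lacp
    feature vpc
    !
    vpc domain 10
      peer-keepalive destination 192.168.100.2 source 192.168.100.1
    !
    interface port-channel 1
      description vPC peer link to Nexus Switch 2
      switchport mode trunk
      vpc peer-link
    !
    interface Ethernet1/47
      switchport mode trunk
      channel-group 1 mode active
    interface Ethernet1/48
      switchport mode trunk
      channel-group 1 mode active
    !
    interface port-channel 11
      description vPC member link from Fabric Interconnect A
      switchport mode trunk
      vpc 11
    !
    interface Ethernet1/1
      switchport mode trunk
      channel-group 11 mode active
    !
    interface port-channel 12
      description vPC member link from Fabric Interconnect B
      switchport mode trunk
      vpc 12
    !
    interface Ethernet1/2
      switchport mode trunk
      channel-group 12 mode active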

The physical slots in the Nexus switches support the same set of SFP+ modules for connectivity as the Fabric Interconnects and IOMs.

Fabric Interconnect and Nexus connectivity with vPC:
