Fabric design overview
This chapter provides a high-level overview of common fabric designs based on IBM b-type Gen 5 16 Gbps products and features. The topics include various topologies along with benefits and limitations for each topology. The guidelines that are outlined in this chapter do not apply to every environment, but they can help guide you through the decisions that you must make for a successful storage area network (SAN) design.
This chapter includes the following sections:
2.1, “Topologies”
2.2, “Gen 5 Fibre Channel technology”
2.3, “Standard features”
2.1 Topologies
This section describes the most common topologies for fabric connectivity: core-edge, edge-core-edge, and full-mesh fabrics. A topology is described in terms of how the switches are interconnected, such as ring, core-edge, edge-core-edge, or full mesh.
The preferred SAN topology to optimize performance, management, and scalability is a tiered, core-edge topology (sometimes called core-edge or tiered core edge). This approach provides good performance without unnecessary interconnections. At a high level, the tiered topology has many edge switches that are used for device connectivity, and fewer core switches that are used for routing traffic between the edge switches, as shown in Figure 2-1.
Figure 2-1 Four scenarios of tiered network topologies (hops shown in bolded orange)
The scenarios have these characteristics:
Scenario A has localized traffic, which can have small performance advantages but does not provide ease of scalability or manageability.
Scenario B, also known as edge-core, separates the storage and servers, thus providing ease of management and moderate scalability.
Scenario C, also known as edge-core-edge, has both storage and servers on edge switches, which provides ease of management and is much more scalable.
Scenario D is a full-mesh topology, and server to storage is no more than one hop. Designing with UltraScale Inter-Chassis Links (ICLs) is an efficient way to save front-end ports, and users can easily build a large (for example, 1536-port or larger) fabric with minimal SAN design considerations.
2.1.1 Edge-core topology
The edge-core topology (Scenario B in Figure 2-1) places initiators (servers) on the edge tier and storage (targets) on the core tier. Because the servers and storage are on different switches, this topology provides ease of management and good performance, with most traffic traversing only one hop from the edge to the core.
The disadvantage of this design is that storage devices and ISLs contend for the same core ports, which can constrain expansion.
 
Note: Adding an IBM SAN director as the SAN core can reduce the expansion contention.
2.1.2 Edge-core-edge topology
The edge-core-edge topology (Scenario C in Figure 2-1) places initiators on one edge tier and storage on another edge tier, leaving the core for switch interconnections or connecting devices with network-wide scope, such as Dense Wavelength Division Multiplexers (DWDMs), inter-fabric routers, storage virtualizers, tape libraries, and encryption engines.
Because servers and storage are on different switches, this design enables independent scaling of compute and storage resources, ease of management, and optimal performance. Traffic traverses only two hops from the edge through the core to the other edge. In addition, it provides an easy path for expansion because ports and switches can readily be added to the appropriate tier as needed.
2.1.3 Full-mesh topology
A full-mesh topology (Scenario D in Figure 2-1 on page 10) allows you to place servers and storage anywhere, because communication between source and destination is no more than one hop. With optical UltraScale ICLs, you can build a full-mesh topology that is scalable and cost-effective compared to the previous generation of SAN products.
 
Note: Hop count is not a concern if the total switching latency is less than the disk I/O timeout value.
2.2 Gen 5 Fibre Channel technology
Gen 5 Fibre Channel technology is designed for high-density server virtualization, cloud architectures, and next generation flash and SSD storage. Gen 5 provides Fabric Vision and 16 Gbps performance.
This section describes the new Gen 5 Fibre Channel technology and its features.
2.2.1 Condor3 ASIC
The Condor3 ASIC is the heart of the Gen 5 switches. It provides unmatched performance compared to its predecessors, switching more frames per second and delivering greater total throughput bandwidth, all with increased energy efficiency. Here are some of the significant Condor3 ASIC specifications:
Performance and compatibility:
 – 420 million frames switched per second
 – 768 Gbps of bandwidth
 – 16/10/8/4/2 Gbps speed
 – EX/E/F/M/D_Port on any port
Industry-leading efficiency with less than 1 watt/Gbps.
More scalable across distance:
 – 8000 buffers (four times as many as Gen 4)
 – Up to 5000 km distance at 2 Gbps
Unmatched investment protection that is compatible with over 30 million existing SAN ports.
UltraScale Optical ICLs support optical connections to up to 10 chassis and distances up to 100 meters.
Fabric Vision provides advanced diagnostic tests, monitoring, and management that maximizes availability, resiliency, and performance.
ClearLink Diagnostic Ports ensure link-level integrity from the server adapter across fabrics and ICLs.
Forward Error Correction provides automatic recovery of transmission errors, which enhances reliability of transmission, and in turn results in higher availability and performance.
In-flight Encryption/Compression: Secure ISL connectivity and compression of ISL traffic for bandwidth optimization.
Any Condor3 port can be configured for 10 Gbps native Fibre Channel, which eliminates the need for specialized ports for optical MAN (10 Gbps DWDM) connectivity.
ASIC-Enabled Buffer Credit Loss Detection and Automatic Recovery at Virtual Channel Level.
Auto Link Tuning for Back-end Ports.
E_Port Top Talkers, with concurrency with Fibre Channel Routing: Monitors the top bandwidth-consuming flows in real time on each individual ISL and EX_Port.
2.2.2 Fabric Vision
Brocade Fabric Vision technology is an advanced hardware and software solution that combines capabilities from the Gen 5 Fibre Channel ASIC, Fabric OS, and IBM Network Advisor.
Fabric Vision is partially compatible with Gen 4 switches and fully supported on Gen 5 switches. It is a set of software features that work with the Gen 5 hardware capabilities to provide advanced diagnostic tests, improved monitoring, and management. It is designed to maximize availability, resiliency, and performance, and to simplify SAN deployment and management.
 
Licensing: Most Fabric Vision features and capabilities, such as Brocade ClearLink Diagnostics, are included in Fabric OS. MAPS and Flow Vision are available with an optional Fabric Vision license.
If you have existing licenses for both Advanced Performance Monitoring and Brocade Fabric Watch, you will automatically receive the Fabric Vision capabilities when you upgrade to Fabric OS 7.2.0 or later. You do not need to purchase an additional license.
There are many technologies behind Fabric Vision:
Switch, director, and adapter ASICs.
Delivery in the Brocade Fabric Operating System (FOS) began primarily with Version 7.0, with some features available earlier.
IBM Network Advisor V12.0 and later delivers aspects of the new architecture.
The following are the main Fabric Vision features:
ClearLink Diagnostic Ports: Ensures optical and signal integrity for Gen 5 Fibre Channel optics and cables.
Fabric Performance Impact (FPI) Monitoring: Uses predefined thresholds and alerts with MAPS to automatically detect and alert on latency, identify slow drain devices, and pinpoint exactly which devices are causing, and which are affected by, a bottlenecked port. FPI monitoring can also automatically mitigate the effects of slow drain devices, or even resolve the slow drain behavior at the source.
FPI functionality replaces the Bottleneck and Credit recovery functionality in Fabric OS v7.3.0 and later.
Latency Bottleneck Detection: Enables proactive monitoring, alerting, and visualization of high latency devices and high latency ISLs that are affecting application performance. It simplifies SAN administration by narrowing troubleshooting efforts.
Forward Error Correction (FEC): Automatically detects and recovers from bit errors, enhancing transmission reliability and performance.
Buffer Credit Recovery at the VC level: Automatically detects and recovers buffer credit loss at the Virtual Channel level, providing protection against performance degradation and enhancing application availability.
Health and Performance Dashboards: Integrate with IBM Network Advisor to present all the critical information in one window.
Monitoring and Alerting Policy Suite (MAPS): Policy-based monitoring tool that simplifies fabric-wide threshold configuration and monitoring.
Flow Vision: A comprehensive tool that enables administrators to identify, monitor, and analyze specific application data flows without using taps.
 
Note: MAPS and Flow Vision are available only with FOS V7.2 or later.
ClearLink Diagnostic Ports
ClearLink Diagnostic Ports identify and isolate optics and cable problems faster, which reduces fabric deployment and diagnostic times. ClearLink has these main functions:
Non-intrusively verifies transceiver and cable health
Tests electrical and optical transceiver components
Monitors and trends transceiver health based on uptime
Conducts cable health checks
Monitors and sets alerts for digital diagnostic tests
Ensures predictable application performance over links
Provides granular latency and distance measurement for buffer credit assignment
Simulates application-level I/O profiles
The foundation of the advanced diagnostic tests for 16G SFP+ optics and 16G links is a new diagnostic port type that is known as the D_Port. A D_Port is used to diagnose optics and cables, and is configured by the user to run diagnostic tests.
D_Port mode allows you to convert a Fibre Channel port into a diagnostic port for testing link traffic, electrical loopbacks, and optical loopbacks between a pair of switches, or between an Access Gateway and a switch. Support is also provided for running D_Port tests between a host bus adapter (HBA) and a switch. The test results that are reported can be useful in diagnosing various port and link problems.
 
Note: D_Ports must use 10G or 16G Brocade-branded small form-factor pluggable (SFP) transceivers.
Understanding D_Port
D_Port does not carry any user traffic, and is designed to run only specific diagnostic tests for identifying link-level faults or failures. To start a port in D_Port mode, you must configure both ends of the link between a pair of switches (or switches configured as Access Gateways), and disable the existing port before you can configure it as a D_Port.
Figure 2-2 illustrates an example D_Port connection between a pair of switches through SFPs (port assignments vary).
Figure 2-2 Example of a basic D_Port connection between switches
After the ports are configured as D_Ports, the following basic test suite is run in the following order, depending on the SFPs that are installed:
1. Electrical loopback (with 16G SFP+ only)
2. Optical loopback (with 16G SFP+ only)
3. Link traffic (with 10G SFPs and 16G SFP+)
4. Link latency and distance measurement (with 10G SFPs and 16G SFP+)
 
Note: Electrical and optical loopback tests are not supported for ICLs.
Figure 2-3 shows the D_Port test capabilities.
Figure 2-3 D_Port tests
The following are the fundamentals of D_Port testing:
The user configures the ports on both ends of the connection.
After both sides are configured, a basic test suite is initiated automatically when the link comes online, conducting diagnostic tests in the following order:
a. Electrical loopback
b. Optical loopback
c. Link traffic
After the automatic test is complete, the user can view results (through command-line interface (CLI) or graphical user interface (GUI)) and rectify any issues that are reported.
The user can also start (and restart) the test manually to verify the link.
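As an illustration, the following minimal CLI sketch shows a typical D_Port workflow on one switch (port 10 is an example; the same configuration must be applied on the other end of the link, and the exact options should be verified against the Fabric OS Command Reference for your release):
switch:admin> portdisable 10
switch:admin> portcfgdport --enable 10
switch:admin> portenable 10
switch:admin> portdporttest --start 10
switch:admin> portdporttest --show 10
The tests start automatically when the link comes online; portdporttest --start is needed only to rerun them manually, and portdporttest --show reports the per-test results.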
Advantages of ClearLink Diagnostic ports
Use the D_Port tests for the following situations:
Testing a new link before you add it to the fabric
Testing a trunk member before you join it with the trunk
Testing long-distance cables and SFPs
Tests can be run with the following options:
Number of test frames to transmit
Size of test frames
Duration of the test
User-defined test payload
Predefined pattern for use in the test payload
Testing with FEC on or off (default is off)
Testing with credit recovery (CR) on or off (default is off)
For more information about using D_Port, see 7.2, “ClearLink Diagnostics Port” on page 193.
Latency Bottleneck Detection
A bottleneck is a port in the fabric where frames cannot get through as fast as they should. The offered load in the port is greater than the achieved egress throughput. Bottlenecks can cause unwanted degradation in throughput on various links. When a bottleneck occurs at one place, other points in the fabric can experience bottlenecks as the traffic backs up.
 
Note: If you are running Fabric OS v7.3.0 or later, use Fabric Performance Impact Monitor, which replaces Bottleneck Detection. If you are running an earlier version, you can use the Bottleneck Detection feature. Bottleneck Detection and FPI are mutually exclusive.
The Latency Bottleneck Detection feature enables you to perform the following tasks:
Prevent degradation of throughput in the fabric.
The bottleneck detection feature alerts you to the existence and locations of devices that are causing latency. If you receive alerts for one or more F_Ports, use the CLI to check whether these F_Ports have a history of bottlenecks.
Reduce the time that it takes to troubleshoot network problems.
If you notice one or more applications that are slowing down, you can determine whether any latency devices are attached to the fabric and where they are. You can use the CLI to display a history of bottleneck conditions on a port. If the CLI shows above-threshold bottleneck severity, you can narrow the problem down to device latency rather than problems in the fabric.
A latency bottleneck is a port where the offered load exceeds the rate at which the other end of the link can continuously accept traffic, but does not exceed the physical capacity of the link. This condition can be caused by a device that is attached to the fabric that is slow to process received frames and send back credit returns. A latency bottleneck because of such a device can spread through the fabric and can slow down unrelated flows that share links with the slow flow.
A congestion bottleneck is a port that is unable to transmit frames at the offered rate because the offered rate is greater than the physical data rate of the line. For example, this condition can be caused by trying to transfer data at 8 Gbps over a 4 Gbps ISL.
You can set alert thresholds for the severity and duration of the bottleneck. If a bottleneck is reported, you can then investigate and optimize the resource allocation for the fabric. Using the zone setup and Top Talkers, you can also determine which flows are destined to the affected F_Ports.
You configure bottleneck detection on a per-fabric or per-switch basis, with per-port exclusions.
 
Note: Bottleneck detection is disabled by default. The preferred practice is to enable bottleneck detection on all switches in the fabric, and leave it on to gather statistics continuously.
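As a hedged sketch, bottleneck detection is enabled and inspected with the bottleneckmon command (the port number is an example; thresholds can be tuned with additional options that are described in the Command Reference):
switch:admin> bottleneckmon --enable -alert
switch:admin> bottleneckmon --status
switch:admin> bottleneckmon --show 10
The --enable -alert form turns on switch-wide monitoring with alerting, --status confirms the active settings, and --show displays the bottleneck history for a port.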
Supported configurations for bottleneck detection
Remember the following configuration rules for bottleneck detection:
The switch must be running FOS V6.4.0 or later.
Bottleneck detection is supported on Fibre Channel ports and FCoE F_Ports.
Bottleneck detection is supported on the following port types:
 – E_Ports
 – EX_Ports
 – F_Ports
 – FL_Ports
F_Port and E_Port trunks are supported.
Long-distance E_Ports are supported.
FCoE F_Ports are supported.
Bottleneck detection is supported on 4 Gbps, 8 Gbps, and 16 Gbps platforms.
Bottleneck detection is supported in Access Gateway mode.
Bottleneck detection is supported whether Virtual Fabrics is enabled or disabled. In VF mode, bottleneck detection is supported on all fabrics, including the base fabric.
For more information about how bottlenecks are configured and displayed, see the Fabric OS Administrator’s Guide.
Forward Error Correction
Forward Error Correction (FEC) provides a data transmission error control method by including redundant data (error-correcting code) to ensure error-free transmission on a specified port or port range. When FEC is enabled, it can correct one burst of up to 11-bit errors in every 2112-bit transmission, whether the error is in a frame or a primitive.
FEC is enabled by default. It is supported on E_Ports of 16 Gbps-capable switches and on the N_Ports and F_Ports of an Access Gateway by using RDY, Normal (R_RDY), or Virtual Channel (VC_RDY) flow control modes. FEC is enabled automatically when negotiation with a switch detects FEC capability, persists across driver reloads and system reboots, and functions with features such as QoS, trunking, and BB_Credit recovery.
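Although FEC needs no setup in most cases, it can be verified or toggled per port, for example before connecting a DWDM link (where it is not supported). A minimal sketch (port 10 is an example):
switch:admin> portcfgfec --show 10
switch:admin> portcfgfec --disable 10
switch:admin> portcfgfec --enable 10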
Limitations
Here are the limitations of this feature:
FEC is configurable only on Gen 5 16 Gbps-capable switches.
FEC is supported only on 1860 and 1867 Brocade Fabric Adapter ports operating in HBA mode that are connected to 16 Gbps Gen 5 switches running FOS V7.1 and later.
FEC is not supported in the following scenarios:
When the HBA port speed changes to less than 16 Gbps, this feature is disabled.
For HBA ports that operate in loop mode or in direct-attach configurations.
On ports with DWDM.
Buffer credit recovery at the Virtual Channel level
The management of buffer credits in wide-area SAN architectures is critical. Furthermore, many issues can arise in the SAN network whenever buffer credit starvation or buffer credit loss occurs.
Buffer credit loss detection and recovery is part of the Gen 5 Fibre Channel diagnostic and error recovery technologies. It helps you avoid a “stuck” link condition, in which a lack of buffer credits for an extended time period results in loss of communication across the link.
The IBM b-type Gen 5 16 Gbps Fibre Channel network implements a multiplexed ISL architecture called Virtual Channels (VCs), which enables efficient usage of E_Port to E_Port ISL links.
Virtual Channels create multiple logical data paths across a single physical link or connection. They are allocated their own network resources, such as queues and buffer-to-buffer credits.
Virtual Channels are divided into three priority groups. P1 is the highest priority, which is used for Class F, F_RJT, and ACK traffic. P2 is the next highest priority, which is used for data frames. The data Virtual Channels can be further prioritized to provide higher levels of Quality of Service (QoS). P3 is the lowest priority and is used for broadcast and multicast traffic.
QoS is a licensed traffic shaping feature that is available in FOS. QoS allows the prioritization of data traffic based on the SID and DID of each frame.
Through the usage of QoS zones, traffic can be divided into three priorities: High, medium, and low, as shown in Figure 2-4. The seven data VCs, VC8 through VC14, are used to multiplex data frames based on QoS zones when congestion occurs.
Figure 2-4 Virtual Channel on a QoS enabled ISL
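QoS priorities are assigned through zone-name prefixes: a zone whose name begins with QOSH_ is treated as high priority, a zone beginning with QOSL_ as low priority, and unprefixed zones default to medium. A minimal sketch with hypothetical zone names and placeholder WWNs (QoS must also be enabled on the ISLs):
switch:admin> zonecreate "QOSH_payroll", "10:00:00:05:1e:41:4e:1d; 10:00:00:05:1e:41:4e:2a"
switch:admin> zonecreate "QOSL_backup", "10:00:00:05:1e:41:4e:3b; 10:00:00:05:1e:41:4e:4c"
switch:admin> cfgcreate "qoscfg", "QOSH_payroll; QOSL_backup"
switch:admin> cfgenable "qoscfg"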
IBM Gen 5 Fibre Channel switches can detect buffer credit loss at the VC level. If the application-specific integrated circuits (ASICs) detect only a single lost buffer credit, they can restore the buffer credit without interrupting the ISL data flow. If the ASICs detect more than one lost buffer credit, or if they detect a “stuck” VC, they can recover from the condition by resetting the link. This process requires retransmission of the frames that were in transit across the link at the time of the link reset.
When a switch automatically detects and recovers buffer credit loss at the VC level, it provides protection against performance degradation and enhances application availability.
Health and Performance Dashboards
The IBM b-type Gen 5 16 Gbps switches, integrated with IBM Network Advisor V12.x and later, can provide all the critical information about the health and performance of a network in a single window. With a customizable dashboard, it is possible to define what is critical and what to monitor.
MAPS also provides a CLI dashboard, which is available when you do not have IBM Network Advisor.
For more information about dashboards, see Chapter 4, “IBM Network Advisor” on page 49.
Monitoring and Alerting Policy Suite
MAPS is an optional SAN health monitor that is supported on all switches that run FOS V7.2.0 or later. It allows each switch to constantly monitor itself for potential faults and automatically alerts you to problems before they become costly failures.
MAPS tracks various SAN fabric metrics and events. Monitoring fabric-wide events, ports, and environmental parameters enables early fault detection and isolation, and performance measurements.
MAPS provides a set of predefined monitoring policies that allow you to immediately use MAPS on activation.
In addition, MAPS provides customizable monitoring thresholds. These thresholds allow you to configure specific groups of ports or other elements so that they share a common threshold value. You can configure MAPS to provide notifications before problems arise, for example, when network traffic through a port is approaching the bandwidth limit. MAPS lets you define how often to check each switch and fabric measure, and specify notification thresholds. Whenever fabric measures exceed these thresholds, MAPS automatically provides notification by using several methods, including email messages, SNMP traps, and log entries.
The MAPS dashboard provides you with the ability to quickly view what is happening on the switch. This insight helps administrators dig deeper to see details about exactly what is happening on the switch (for example, the kinds of errors and the error count).
MAPS provides a seamless migration of all customized Fabric Watch thresholds, thus allowing you to take advantage of the advanced capabilities of MAPS. MAPS provides extra advanced monitoring, such as monitoring for the same error counters across different periods, or having more than two thresholds for error counters. MAPS also provides support for you to monitor the statistics that are provided by the Flow Monitor feature of Flow Vision.
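A hedged sketch of a typical MAPS activation flow from the CLI (the policy and action names shown are the FOS defaults; verify them for your release):
switch:admin> mapsconfig --actions raslog,email,snmp
switch:admin> mapspolicy --show -summary
switch:admin> mapspolicy --enable -policy dflt_moderate_policy
switch:admin> mapsdb --show
The first command defines which alert actions MAPS may take, the next two list and enable a predefined policy, and mapsdb --show displays the CLI dashboard.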
 
Note: MAPS is the next generation monitoring tool that replaces Fabric Watch. MAPS cannot coexist with Fabric Watch. MAPS was introduced in FOS v7.2.0. Fabric Watch is no longer available in FOS v7.4.0 or later.
Flow Vision
Introduced in FOS V7.2, Flow Vision is a comprehensive tool that enables administrators to identify, monitor, and analyze specific application data flows.
Flow Vision provides these features:
Flow Monitor: Provides comprehensive visibility into application flows in the fabric, including the ability to learn (discover) flows automatically.
Flow Mirror: You can use this function to nondisruptively create copies of the application flows, which can be captured for deeper analysis (only mirroring to processor is supported in FOS V7.2).
Flow Generator: Test traffic generator for pre-testing the SAN infrastructure (including internal connections) for robustness before deploying the applications.
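For example, a Flow Monitor flow can be defined between a source and destination device on an ingress port. This is a hedged sketch with placeholder PIDs and a hypothetical flow name; check the flow command syntax for your FOS release:
switch:admin> flow --create appflow1 -feature monitor -ingrport 17 -srcdev 0x010200 -dstdev 0x020800
switch:admin> flow --show appflow1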
 
Note: Using Flow Vision features requires a Fabric Vision license or both Fabric Watch and Advanced Performance Monitor (APM) licenses. Flow Vision is the next generation Performance monitoring tool and it replaces APM. APM cannot coexist with Flow Vision. Flow Vision was introduced in FOS v7.2.0. APM is no longer available in FOS v7.4.0 or later.
2.3 Standard features
This section describes some of the standard features that are available.
2.3.1 Zoning
Zoning is a fabric-based service that enables you to partition your SAN into logical groups of devices that can access each other.
For example, you can partition your SAN into two zones, winzone and unixzone, so that your Windows servers and storage do not interact with your UNIX servers and storage. You can use zones to logically consolidate equipment for efficiency or to facilitate time-sensitive functions. For example, you can create a temporary zone to back up nonmember devices.
A device in a zone can communicate only with other devices that are connected to the fabric within the same zone. A device not included in the zone is not available to members of that zone. When zoning is enabled, devices that are not included in any zone configuration are inaccessible to all other devices in the fabric. For more information about this topic, see Introduction to Storage Area Networks, SG24-5470.
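Continuing the winzone and unixzone example, a minimal zoning sketch from the CLI (the WWNs are placeholders):
switch:admin> zonecreate "winzone", "10:00:00:05:1e:7a:7a:01; 10:00:00:05:1e:7a:7a:02"
switch:admin> zonecreate "unixzone", "10:00:00:05:1e:7a:7a:03; 10:00:00:05:1e:7a:7a:04"
switch:admin> cfgcreate "mycfg", "winzone; unixzone"
switch:admin> cfgenable "mycfg"
The zones take effect when the configuration that contains them is enabled.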
2.3.2 ISL Trunking
ISL Trunking is an optional software product that is available for all FOS-based Fibre Channel switches, directors, and fabric backbones. ISL Trunking technology optimizes the usage of bandwidth by allowing a group of links to merge into a single logical link, called a trunk group. Traffic is distributed dynamically over this trunk group, achieving greater performance with fewer links. Within the trunk group, multiple physical ports appear as a single port, which simplifies management. Trunking also improves system reliability by maintaining in-order delivery of data and avoiding I/O retries if one link within the trunk group fails.
Figure 2-5 shows the ISL with and without trunking.
Figure 2-5 ISL Trunking
The first example in Figure 2-5 on page 20 (on the left) shows a fabric without trunking. When the trunk is not enabled, there is no traffic optimization, so a link can become congested even when there is bandwidth available on other ISL links.
When the trunking feature is activated, all physical ISLs become a single logical ISL, so performance is optimized by balancing the traffic across all physical links automatically. Trunking is frame-based instead of exchange-based. Because a frame is much smaller than an exchange, frame-based trunks are more granular and better balanced than exchange-based trunks, providing maximum usage of the links.
 
 
Note: An ISL Trunking license is required for any type of trunking, and must be installed on each switch that participates in trunking.
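A minimal sketch of enabling trunking and verifying trunk formation (port 4 is an example):
switch:admin> portcfgtrunkport 4, 1
switch:admin> switchcfgtrunk 1
switch:admin> trunkshow
portcfgtrunkport enables trunking on a single port (a mode of 0 disables it), switchcfgtrunk enables it on every port of the switch, and trunkshow lists the trunk groups and their member ports.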
Port groups for trunking
To establish a trunk, several conditions must be met, one of which is that all of the ports in a trunk group must belong to the same port group. A port group is a group of eight ports, the members of which are based on the user port number, such as 0 - 7, 8 - 15, 16 - 23, and so on up to the number of ports on the switch. The maximum number of port groups is platform-specific.
Ports in a port group are usually contiguous, but they might not be. For information about which ports can be used in the same port group for trunking, see the appropriate Hardware Reference Manual for your switch.
Figure 2-6 shows the port group for the SAN96B-5.
Figure 2-6 SAN96B-5 port group
Supported configurations for trunking
Here are the supported configurations for trunking:
Trunk links can be 2 Gbps, 4 Gbps, 8 Gbps, 10 Gbps, or 16 Gbps, depending on the b-type platform.
The maximum number of ports per trunk and trunks per switch depends on the b-type platform.
You can have up to eight ports in one trunk group to create high-performance ISL trunks between switches, providing up to 128 Gbps (based on a 16 Gbps port speed).
If in-flight encryption/compression is enabled, you can have a maximum of two ports per trunk.
An E_Port or EX_Port trunk can be up to eight ports wide. All the ports must be next to each other, in the clearly marked groups on the front of the product.
Trunks operate best when the cable length of each trunked link is roughly equal to that of the others in the trunk. For optimal performance, no more than 30 meters of difference is recommended. Trunks are compatible with both short wavelength (SWL) and long wavelength (LWL) fiber-optic cables and transceivers.
Trunking is performed according to the QoS configuration on the ports. That is, in a trunk group, if there are some ports with QoS enabled and some with QoS disabled, they form two different trunks: One with QoS enabled and the other with QoS disabled.
Requirements for trunk groups
The following requirements apply to all types of trunking:
The Trunking license must be installed on every switch that participates in trunking.
All of the ports in a trunk group must belong to the same port group.
All of the ports in a trunk group must meet the following conditions:
 – They must be running at the same speed.
 – They must be configured for the same distance.
 – They must have the same encryption, compression, QoS, and FEC settings.
Trunk groups must be between b-type switches. Trunking is not supported on M-EOS or third-party switches.
There must be a direct connection between participating switches.
Trunking cannot be done if ports are in ISL R_RDY mode. You can disable this mode by using the portCfgIslMode command.
Trunking is supported only on FC ports. Virtual FC ports (VE_ or VEX_Ports) do not support trunking.
2.3.3 Dynamic Path Selection
Available as a standard FOS feature, exchange-based routing or Dynamic Path Selection (DPS) optimizes fabric-wide performance by automatically routing data to the most efficient available path in the fabric.
With DPS, exchanges (communication between end devices in a fabric) are assigned to egress ports in ratios that are proportional to the potential bandwidth of each ISL or trunk group. When there are multiple paths to a destination, the input traffic is distributed across the different paths in proportion to the bandwidth that is available on each path. This configuration improves the usage of the available paths and reduces possible congestion. Every time there is a change in the network that changes the available paths, the input traffic can be redistributed across them. This is an easy and nondisruptive process when the exchange-based routing policy is engaged.
DPS augments ISL Trunking to provide more effective load balancing. With DPS, traffic loads are distributed at the exchange level across independent ISLs or trunks, and in-order delivery is ensured within the exchange. The combination of trunking and DPS provides immediate benefits to network performance, even in the absence of 16 Gbps devices. DPS in particular can provide performance advantages when connecting to lower-speed 4 Gbps switches. As a result, this combination of technologies provides the greatest design flexibility and the highest degree of load balancing.
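The routing policy is selected with the aptpolicy command; exchange-based routing is the default on Gen 5 platforms. A minimal sketch (the switch must be disabled while the policy is changed):
switch:admin> switchdisable
switch:admin> aptpolicy 3
switch:admin> switchenable
Here, 3 selects exchange-based routing (DPS) and 1 selects port-based routing; running aptpolicy with no arguments displays the current policy.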
Figure 2-7 shows DPS balancing data flow between different ISL trunk paths.
Figure 2-7 Dynamic Path Selection
Port types
The following port types can be part of a b-type device:
D_Port: A diagnostic port that lets an administrator isolate the ISL to diagnose link-level faults. This port runs only specific diagnostic tests and does not carry any fabric traffic. For more information, see “ClearLink Diagnostic Ports” on page 13.
E_Port: An expansion port that is assigned to ISL links to expand a fabric by connecting it to other switches. Two connected E_Ports form an ISL. When E_Ports are used to connect switches, those switches merge into a single fabric without an isolation demarcation point. ISLs are non-routed links. For more information, see 2.3.2, “ISL Trunking” on page 20.
EX_Port: A type of E_Port that connects a Fibre Channel router to an edge fabric. From the point of view of a switch in an edge fabric, an EX_Port appears as a normal E_Port. It follows applicable Fibre Channel standards like other E_Ports. However, the router terminates EX_Ports rather than allowing different fabrics to merge, which happens on a switch with regular E_Ports. An EX_Port cannot be connected to another EX_Port.
F_Port: A fabric port that is assigned to fabric-capable devices, such as SAN storage devices.
G_Port: A generic port that acts as a transition port for non-loop fabric-capable devices.
L_/FL_Port: A loop or fabric loop port that connects loop devices. L_Ports are associated with private loop devices, and FL_Ports are associated with public loop devices.
M_Port: A mirror port that is configured to duplicate (mirror) the traffic that passes between a specified source port and destination port. This configuration is supported only for pairs of F_Ports. For more information about port mirroring, see the Fabric OS Troubleshooting and Diagnostics Guide.
U_Port: A universal Fibre Channel port. This is the base Fibre Channel port type, and all unidentified or uninitiated ports are listed as U_Ports.
VE_Port: A virtual E_Port that is a gigabit Ethernet switch port that is configured for an FCIP tunnel.
VEX_Port: A virtual EX_Port that connects a Fibre Channel router to an edge fabric. From the point of view of a switch in an edge fabric, a VEX_Port appears as a normal VE_Port. It follows the same Fibre Channel protocol as other VE_Ports. However, the router terminates VEX_Ports rather than allowing different fabrics to merge, which is what happens on a switch with regular VE_Ports.
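The type that each port is currently operating as can be displayed with the switchshow command. The following is an illustrative, truncated sample of its output (addresses and WWNs are placeholders):
switch:admin> switchshow
Index Port Address Media Speed State   Proto
  10   10  011000  id    N16   Online  FC F-Port 10:00:00:05:1e:7a:7a:01
  11   11  011100  id    N16   Online  FC E-Port 10:00:00:05:33:13:7a:b4 "edge1" (Trunk master)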
2.3.4 In-flight encryption and compression
The in-flight encryption and compression features of FOS allow frames to be encrypted or compressed at the egress point of an ISL between two IBM b-type switches. They can then be decrypted or extracted at the ingress point of the ISL. These features use port-based encryption and compression. You can enable the encryption and compression feature for both E_Ports and EX_Ports on a per-port basis. By default, this feature is initially disabled for all ports on a switch.
The purpose of encryption is to provide security for frames while they are in flight between two switches. The purpose of compression is for better bandwidth usage on the ISLs, especially over long distance. An average compression ratio of 2:1 is provided, but your compression ratios will depend on the compressibility of your data. Frames are never left in an encrypted or compressed state when delivered to an end device. Both ends of the ISL must terminate in 16G-capable FC ports.
Encryption and compression can be enabled at the same time for an ISL, or you can enable either encryption or compression selectively. Figure 2-8 shows an example of 16 Gbps links connecting three Brocade switches. One link is configured with encryption and compression, one with just encryption, and one with just compression.
Figure 2-8 Encryption and compression on 16 Gbps ISLs
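A hedged sketch of the per-port configuration (port 2 is an example; in-flight encryption also requires switch-to-switch authentication secrets to be configured first, and the port must be disabled while the settings are changed):
switch:admin> portdisable 2
switch:admin> portcfgencrypt --enable 2
switch:admin> portcfgcompress --enable 2
switch:admin> portenable 2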
 
Note: No license is needed to configure and enable in-flight encryption or compression.
For more information, see the Fabric OS Administrator’s Guide.
2.3.5 NPIV
N_Port ID Virtualization (NPIV) enables a single Fibre Channel protocol port to appear as multiple, distinct ports, providing separate port identification within the fabric for each operating system image behind the port (as though each operating system image had its own unique physical port).
NPIV assigns a different virtual port ID to each Fibre Channel protocol device. NPIV enables you to allocate virtual addresses without affecting your existing hardware implementation. The virtual port has the same properties as an N_Port, and can register with all services of the fabric.
Each NPIV device has a unique device PID, Port worldwide name (WWN), and Node WWN, and behaves the same as all other physical devices in the fabric. Multiple virtual devices that are emulated by NPIV appear no different from regular devices that are connected to a non-NPIV port.
The same zoning rules apply to NPIV devices as to non-NPIV devices. Zones can be defined by domain,port notation, by WWN, or both. However, to perform zoning to the granularity of the virtual N_Port IDs, you must use WWN-based zoning.
If you are using domain port zoning for an NPIV port, and all the virtual PIDs that are associated with the port are included in the zone, then a port login (PLOGI) to a non-existent virtual PID is not blocked by the switch. Rather, it is delivered to the device that is attached to the NPIV port. In cases where the device cannot handle such unexpected PLOGIs, use WWN-based zoning.
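NPIV is enabled per switch port (it is enabled by default on most Gen 5 platforms). A minimal sketch for verifying or enabling it on port 10:
switch:admin> portcfgnpivport --enable 10
switch:admin> portcfgshow 10
The portcfgshow output includes the NPIV capability setting for the port.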
For more information, see the Fabric OS Administrator’s Guide.
2.3.6 Dynamic Fabric Provisioning
Introduced in FOS V7.0, Dynamic Fabric Provisioning (DFP) simplifies server deployment in a Fibre Channel SAN (FC SAN) environment.
Server deployment typically requires that multiple administrative teams (for example, server and storage teams) coordinate with each other to perform configuration tasks, such as zone creation in the fabric and LUN mapping and masking on the storage device. These tasks must be complete before the server is deployed. Before you can configure WWN zones and LUN masks, you must discover the physical port worldwide name (PWWN) of the server. This requirement means that administrative teams cannot start their configuration tasks until the physical server arrives (and its physical PWWN is known). Because the configuration tasks are sequential and interdependent across various administrative teams, it might take several days before the server is deployed in an FC SAN.
DFP simplifies and accelerates new server deployment and improves operational efficiency by using a fabric-assigned PWWN (FA-PWWN). An FA-PWWN is a “virtual” port WWN that can be used instead of the physical PWWN to create zoning, and LUN mapping and masking. When the server is later attached to the SAN, the FA-PWWN is then assigned to the server.
The FA-PWWN feature allows you to perform the following tasks:
Replace one server with another server, or replace failed HBAs or adapters within a server, without having to change any zoning or LUN mapping and masking configurations.
Easily move servers across ports or Access Gateways by reassigning the FA-PWWN to another port.
Use the FA-PWWN to represent a server in boot LUN zone configurations so that any physical server that is mapped to this FA-PWWN can boot from that LUN, thus simplifying boot over SAN configuration.
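A hedged sketch of assigning and displaying an FA-PWWN from the CLI; treat the exact option spelling as an assumption and verify it in the Fabric OS Command Reference for your release:
switch:admin> fapwwn --assign -port 10
switch:admin> fapwwn --show all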
 
Note: For the server to use the FA-PWWN feature, it must use a Brocade HBA or adapter, and the HBA must be configured to use the FA-PWWN. For more information, see the release notes for the HBA or adapter versions that support this feature.
For more information, see the Fabric OS Administrator’s Guide.
