Implementation and migration strategies
This chapter introduces SAN design, redundancy and resiliency, migration assessment, migration strategy, and the creation of a migration plan to replace or expand an existing SAN. These topics also apply to planning a new SAN because the requirements and concerns for a new SAN are largely a subset of those for replacing an existing SAN.
A licensing overview and information about specific usage are provided to explain the available advanced functionalities and their licensing requirements.
This chapter includes the following sections:
Designing a storage area network
Migration assessment
Migration strategy
Developing a migration plan
Preparing to migrate
Performing the migration and validation
Completing the migration
Licensing
9.1 Designing a storage area network
The storage area network (SAN) planning process is similar to any type of project planning and includes the following phases:
Phase I: Gathering requirements
Phase II: Developing technical specifications
Phase III: Estimating project costs
Phase IV: Analyzing return on investment (ROI) or total cost of ownership (TCO) (if necessary)
Phase V: Creating a detailed SAN design and implementation plan
When you select the criteria to meet, engage subject matter experts (SMEs) from the server, storage, and networking areas so that the role of the fabric is understood. Because most SANs operate for a long time before they are renewed and are difficult to redesign, consider future growth from the start. Deploying new SANs or expanding existing ones to meet additional workloads requires critical assessment of business and technology requirements. Proper focus on planning helps ensure that the SAN, when it is deployed, meets all current and future business objectives, including availability, deployment simplicity, performance, future business growth, and cost.
A critical aspect of successful implementation that is often overlooked is the ongoing management of the fabric. Identifying individual SMEs for all the components that make up the SAN, and providing adequate and up-to-date training on those components, are critical for efficient design and operational management of the fabric. When you are designing a new SAN or expanding an existing SAN, take the following parameters into account:
Application virtualization
 – Which applications will run in a virtual machine (VM) environment?
 – How many VMs per server?
 – Under what conditions will migration of VMs take place, such as business or non-business hours, and is more CPU or memory needed to maintain response times?
 – Is there a need for Flash systems to improve read response times?
Homogeneous/heterogeneous server and storage platforms
 – Will the system be using blade servers or rack servers?
 – Is auto-tiering in place?
 – Fabric OS compatibility and feature support?
 – What is the refresh cycle of servers and storage platforms?
Scalability
 – How many user ports are needed now?
 – How many inter-switch links (ISLs)/UltraScale inter-chassis links (ICLs) are required for minimizing congestion?
 – Will you scale out at the edge or the core?
Backup and disaster tolerance
 – Is there a centralized backup? This will determine the number of ISLs needed to minimize congestion at peak loads.
 – What is the impact of backup on latency-sensitive applications?
 – Is the disaster solution based on a metro Fibre Channel (FC) or Fibre Channel over IP (FCIP) solution?
Diagnostics and manageability
 – What is the primary management interface to the SAN (command-line interface, IBM Network Advisor, or third-party tool)?
 – How often will the Fabric Operating System (FOS) be updated?
 – How will you validate cable and optics integrity?
Investment protection
 – Support for future FC technology and interoperability
 – Support for alternative technologies such as Fibre Channel over Ethernet (FCoE)
9.1.1 Redundancy and resiliency
An important aspect of SAN topology is the resiliency and redundancy of the fabric. The main objective is to remove any single point of failure. Resiliency is the ability of the network to continue to function, recover from a failure, or both. Redundancy describes duplication of components, even an entire fabric, to eliminate a single point of failure in the network. IBM b-type fabrics have resiliency built into Fabric OS, which can quickly “repair” the network to overcome most failures. For example, when a link between switches fails, Fibre Channel shortest path first (FSPF) quickly recalculates all traffic flows if a second route is available, which is when redundancy in the fabric becomes important.
The key to high availability and enterprise-class installation is redundancy. By eliminating a single point of failure, business continuance can be provided through most foreseeable and even unforeseeable events. At the highest level of fabric design, the complete network should be redundant, with two completely separate fabrics that do not share any network equipment (routers or switches). Servers and storage devices should be connected to both networks by using some form of Multi-Path I/O (MPIO) solution.
MPIO allows data to flow across both networks seamlessly in either an active/active or active/passive mode. It ensures that if one path fails, an alternative is readily available. Ideally, the networks would be identical, but at a minimum they should be based on the same switch architecture. In some cases, these networks are in the same location. However, to provide for Disaster Recovery (DR), two separate locations are often used, either for each complete network or for sections of each network. Regardless of the physical location, there are two separate networks for complete redundancy.
9.1.2 Switch interconnections
As mentioned in the previous section, there should be at least two of every element in the SAN to provide redundancy and improve resiliency. The number of available ports and device locality (server/storage tiered design) determine the number of ISLs needed to meet performance requirements. This requirement means that there should be a minimum of two trunks, with at least two ISLs per trunk. Each source switch should be connected to at least two other switches, and so on.
In addition to redundant fabrics, redundant links should be placed on different blades, different ASICs, or at least different port groups whenever possible. Whatever method is used, it is important to be consistent across the fabric.
9.1.3 UltraScale ICL connectivity for Gen 5 directors
The SAN768B-2 and SAN384B-2 platforms use second-generation UltraScale ICL technology from Brocade with optical QSFP. The SAN768B-2 allows up to 32 QSFP ports, and the SAN384B-2 allows up to 16 QSFP ports to help preserve switch ports for end devices. Each QSFP port has four independent 16 Gbps links, each of which terminates on a different ASIC within the core blade. Each core blade has four ASICs. A pair of connections between two QSFP ports can create 32 Gbps of bandwidth.
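The aggregate bandwidth of an UltraScale ICL connection follows directly from the per-link figures above (four independent 16 Gbps links per QSFP). The following is a minimal planning sketch; the function name and the example QSFP counts are illustrative assumptions, not values from any Brocade tool.

# Rough ICL bandwidth arithmetic for Gen 5 directors (illustrative sketch).
LINKS_PER_QSFP = 4      # four independent links per optical QSFP (see text above)
GBPS_PER_LINK = 16

def icl_bandwidth_gbps(qsfp_connections):
    """Aggregate bandwidth of the given number of QSFP ICL connections."""
    return qsfp_connections * LINKS_PER_QSFP * GBPS_PER_LINK

print(icl_bandwidth_gbps(1))   # 64 Gbps in a single QSFP cable
print(icl_bandwidth_gbps(4))   # 256 Gbps across four QSFP cables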
9.1.4 SAN768B-2 and SAN384B-2 UltraScale ICL connection preferred practices
Each core blade in a chassis must be connected to each of the two core blades in the destination chassis to achieve full redundancy. For redundancy, use at least one pair of links between two core blades.
Follow these guidelines when designing switch ISL and UltraScale ICL connectivity (a small planning sketch follows this list):
There should be at least two core switches.
Every edge switch should have at least two trunks to each core switch.
Select small trunk groups (keep trunks to two ISLs) unless you anticipate very high traffic volumes. This configuration ensures that you can lose a trunk member without losing ISL connectivity.
Place redundant links on separate blades.
Trunks should be in a port group (ports within an ASIC boundary).
Allow no more than 30 m of cable length difference between ISL trunk members for optimal performance.
Use the same cable length for all UltraScale ICL connections.
Avoid using ISLs to the same domain if there are UltraScale ICL connections.
Use the same type of optics on both sides of the trunks: Short Wavelength (SWL), Long Wavelength (LWL), or Extended Long Wavelength (ELWL).
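Several of these guidelines can be expressed as simple checks, as in the sketch below. This is a planning aid only; the data structure that describes a trunk (switch pair, member count, member cable lengths) is an assumption for illustration and is not output from Fabric OS or IBM Network Advisor.

# Minimal planning checks for ISL trunk design (assumed input format).
from collections import defaultdict

# Each entry: (edge_switch, core_switch, member_count, member_cable_lengths_m)
trunks = [
    ("edge1", "core1", 2, [10.0, 12.0]),
    ("edge1", "core1", 2, [10.0, 10.0]),
    ("edge1", "core2", 2, [10.0, 50.0]),   # more than 30 m difference; will be flagged
    ("edge1", "core2", 2, [10.0, 10.0]),
]

def check_trunks(trunks, max_length_diff_m=30.0):
    issues = []
    pairs = defaultdict(int)
    for edge, core, members, lengths in trunks:
        pairs[(edge, core)] += 1
        if members < 2:
            issues.append(f"{edge}-{core}: trunk has fewer than two ISL members")
        if max(lengths) - min(lengths) > max_length_diff_m:
            issues.append(f"{edge}-{core}: member cable lengths differ by more than {max_length_diff_m} m")
    for (edge, core), count in pairs.items():
        if count < 2:
            issues.append(f"{edge}-{core}: fewer than two trunks to this core switch")
    return issues

print(check_trunks(trunks) or "No issues found")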
9.1.5 Device placement
Device placement is a balance between traffic isolation, scalability, manageability, and serviceability.
With the growth of virtualization and multinode clustering on the UNIX platform, frame congestion can become a serious concern in the fabric if there are interoperability issues with the end devices.
Traffic locality
Designing device connectivity depends on the expected data flow between devices. For simplicity, communicating hosts and targets can be attached to the same switch. However, this approach does not scale well. Given the high-speed, low-latency nature of Fibre Channel, attaching these host-target pairs on different switches does not mean that performance is adversely affected. Although traffic congestion is possible, it can be mitigated with proper provisioning of ISLs/UltraScale ICLs. With current generation switches, locality is not required for performance or to reduce latencies. For mission-critical applications, architects might want to localize the traffic when using Flash systems or in exceptional cases, particularly if the number of ISLs available is restricted or there is a concern for resiliency in a multi-hop environment.
One common scheme for scaling a core-edge topology is dividing the edge switches into a storage tier and a host/initiator tier. This approach lends itself to ease of management and ease of expansion. In addition, host and storage devices generally have different performance requirements, cost structures, and other factors that can be readily accommodated by placing initiators and targets in different tiers.
Fan-in ratios and oversubscription
Another aspect of data flow is the “fan-in ratio” or “oversubscription”, in terms of source ports to target ports and devices to ISLs. This is also referred to as the “fan-out ratio” when viewed from the storage array perspective. The ratio is the number of device ports that share a single port, whether ISL, UltraScale ICL, or target. This number is always expressed from the single entity point of view, such as 7:1 for seven hosts that are using a single ISL or storage port.
 
Note: One approach to ISL oversubscription is to use the base values 3:1 for very high-performance applications, 7:1 when intermediate performance is required, and 15:1 when performance is not the key consideration. When you are working with environments where high performance is required and you do not want to sacrifice a high port count for ISL connectivity, consider the use of the b-type enterprise ICL technology to interconnect your fabric backbone.
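These base ratios translate directly into an ISL count for a given number of host ports, as in the following sketch. The function and the sample port count of 96 are illustrative assumptions.

import math

def isls_required(host_ports, oversubscription_ratio):
    """ISLs needed so that no more than `oversubscription_ratio` host ports share one ISL."""
    return math.ceil(host_ports / oversubscription_ratio)

# Example: 96 host ports on an edge switch.
for ratio in (3, 7, 15):
    print(f"{ratio}:1 -> {isls_required(96, ratio)} ISLs")
# 3:1 -> 32 ISLs, 7:1 -> 14 ISLs, 15:1 -> 7 ISLs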
What is the optimum number of hosts that should connect to a storage port? This seems like a fairly simple question. However, when you consider clustered hosts, VMs, and the number of logical unit numbers (LUNs) per server, the situation can quickly become much more complex. Determining how many hosts to connect to a particular storage port can be narrowed down to three considerations:
Port queue depth
I/O per second (IOPS)
Throughput
Of these three, throughput is the only network component. Thus, a simple calculation is to add up the expected bandwidth usage for each host that is accessing the storage port. The total should not exceed the supported bandwidth of the target port.
However, in practice it is highly unlikely that all hosts perform at their maximum level at any one time. With the traditional application-per-server deployment, the host bus adapter (HBA) bandwidth is overprovisioned. However, with virtual servers (KVM, Xen, Hyper-V, proprietary UNIX OSs, and VMware) the situation can change radically. Network oversubscription is built into the virtual server concept. To the extent that servers use virtualization technologies, the network-based oversubscription should be reduced proportionally. Therefore, it might be prudent to oversubscribe ports to ensure a balance between cost and performance.
Another method is to assign host ports to storage ports based on capacity (density). The intended result is a few high-capacity hosts and a larger number of low-capacity servers assigned to each storage port, thus distributing the load across multiple storage ports.
Regardless of the method that is used to determine the fan-in/fan-out ratios, port monitoring should be used to determine actual utilization and what adjustments, if any, should be made. In addition, ongoing monitoring provides useful heuristic data for effective expansion and efficient assignment of existing storage ports. To determine the device-to-ISL fan-in ratio, a simple calculation method works best. The storage port should not be oversubscribed into the core. For example, an 8 Gbps storage port should have an 8 Gbps pipe into the core.
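The throughput check described above amounts to summing the expected per-host bandwidth against the target port, and the device-to-ISL rule amounts to matching the storage port rate to its path into the core. The following sketch shows both calculations; the per-host bandwidth figures and port speeds are hypothetical examples.

# Hypothetical expected peak bandwidth per host (Gbps) sharing one storage port.
expected_host_gbps = [0.8, 1.2, 0.5, 0.9, 1.0, 0.6, 1.1]

storage_port_gbps = 8   # for example, an 8 Gbps storage port
core_pipe_gbps = 8      # aggregate ISL/ICL bandwidth carrying this port into the core

total = sum(expected_host_gbps)
print(f"Fan-in ratio: {len(expected_host_gbps)}:1, expected load {total:.1f} Gbps")

if total > storage_port_gbps:
    print("Expected load exceeds the storage port bandwidth; reduce the fan-in.")
if core_pipe_gbps < storage_port_gbps:
    print("Storage port is oversubscribed into the core; add ISL/ICL bandwidth.")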
9.2 Migration assessment
It is important to understand the current application environment and the new SAN requirements before attempting a migration. There is more than one way to proceed with the migration process, depending on the current SAN architecture, fabric topology, size, and number of active devices attached. A SAN fabric migration can be done either offline or online, depending on the application or project requirements. An offline migration is the simpler of the two approaches, although careful planning is still required. However, in many environments where planned downtime is not possible, the migration must be performed online. An online migration in a single or redundant fabric requires careful evaluation of the application availability and currently deployed topology to plan for a methodical migration path.
Consider these factors, regardless of the migration approach:
Assessing the existing fabric topology
Assessing the new fabric topology
Logistic planning for hardware installation
Topology and zone planning
Preliminary migration planning
9.2.1 Assessing the existing fabric topology
Determine whether the current environment is a single fabric or a redundant fabric. If the current environment uses a redundant fabric, a rolling migration might be an option, where one fabric is active and the other fabric is migrated offline. Similarly, device paths in a single resilient fabric can be failed over as devices are moved to the new fabric. Both methods minimize fabric downtime and I/O interruptions, if multipathing software is in use. Migrating a single non-resilient fabric is more complex and requires application interruption or an outage if the host must be rebooted.
Consider these elements when assessing the migration activity:
Application failover considerations: If multipathing software such as Microsoft MPIO, IBM AIX® MPIO, Hitachi HiCommand Dynamic Link Manager, or EMC PowerPath is in use, collect metrics to determine how long it takes to fail over and fail back in the existing SAN.
Storage failover considerations: Move all the LUNs to a single controller if not dual-pathed. Verify that the number of LUNs from a single port does not exceed the vendor recommendation.
Topology change at the time of migration: Migrating to a new fabric is a good opportunity to address any performance bottlenecks, server and storage scalability, and general maintenance of the fabric, such as structured cable management. High-density directors with ICLs offer an opportunity to simplify traditional SAN designs.
Zone configuration export/modify strategy: If some or all of the devices in the old fabric are being migrated to the new fabric, the existing zone database can be exported and then imported into the IBM b-type SAN to minimize the migration time frame.
Server and storage device placement: Although hop count is no longer an issue, keeping the number of hops between server and storage to no more than two can minimize possible congestion issues as the SAN expands. Whatever method is used for device placement, be consistent across switches and fabrics.
9.2.2 Assessing the new fabric
Consider these points when assessing the new fabric:
Fabric OS upgrade requirements: Before connecting any devices, verify that the switches are running the correct version of Fabric OS.
Capture configuration parameters of the existing switch: Capture the configuration of the existing switches and compare that configuration with the new ones.
Analyze the existing zoning: Assess the existing zone database. Clean the zone database by removing any zone members that are no longer part of the fabric.
Trunking setup considerations: ISL Trunking is a hardware-based frame-striping mechanism with predictable latencies for traffic flows. In a multi-switch environment, multiple trunks should exist so that if an entire trunk fails, the remaining trunks are not congested.
Future server or storage expansion: Planning for the future is key to ensuring that the architecture that is put in place for the new SAN will meet long-term requirements.
9.2.3 Logistic planning of hardware installation
When planning a migration, consider the following concerns about facilities and logistics:
Rack space requirements: IBM b-type Gen 5 directors use front-to-back airflow, which allows a narrow rack and the implementation of hot/cold aisle cooling.
Power requirements: The SAN768B-2 and SAN384B-2 have a power consumption of 1952 W and 1064 W, respectively, when fully loaded (a simple power and cooling estimate follows this list).
Cable requirements: Confirm that the cable plant is within the required specifications and uses structured cabling, when possible, to minimize device placement errors during the migration.
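For facilities planning, the power figures above can be turned into a rough load and cooling estimate, as in the following sketch. The chassis mix is a hypothetical example, and the watts-to-BTU/hr factor of about 3.412 is the standard conversion constant.

# Hypothetical chassis mix for a redundant fabric pair, using the fully loaded
# power figures from the text above.
power_w = {"SAN768B-2": 1952, "SAN384B-2": 1064}
planned = {"SAN768B-2": 2, "SAN384B-2": 2}

total_w = sum(power_w[model] * count for model, count in planned.items())
btu_per_hr = total_w * 3.412   # standard watts-to-BTU/hr conversion

print(f"Total power: {total_w} W, cooling load: {btu_per_hr:.0f} BTU/hr")
# Total power: 6032 W, cooling load: 20581 BTU/hr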
9.2.4 Preliminary migration planning
When developing a preliminary migration plan, consider the following topics:
To complete a successful migration, identify the personnel that are needed during the key phases of the project: Facilities management, network administration, SAN administration/engineering, server administration with knowledge of the dual-pathing and failover software, storage administration with knowledge of redundant paths, and project management.
Identify and analyze key implemented features, and define equivalent solutions for IBM b-type SAN infrastructure.
Identify and analyze advanced features that might need to be considered, such as FCIP, Encryption, or FICON for the new SAN.
Define move groups based on applications, storage ports, and zoned hosts.
Identify and resolve any service level agreement (SLA) conflicts within move groups.
Create port maps for host/storage on the migrated SAN.
Review the migration plan with the user or business group, and revise as needed.
Complete the final migration plan.
Establish the following post-migration verification criteria:
Total number of devices in the fabric (a simple device-count comparison sketch follows this list).
Baseline performance metrics for ISL and ICLs.
Baseline latency measurements for server and storage ports.
Hosts are dual-pathed to the fabric so that the use of failover mechanisms minimizes the disruption to production I/O.
The IBM b-type devices basic setup and configuration has been performed in advance.
All required switch licenses have been acquired and installed.
If IBM Network Advisor is being used, it has already been set up and is able to discover the IBM b-type fabric.
Regardless of the type of fabric, perform the migration during non-peak business hours.
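One simple way to verify the device-count criterion is to compare the device WWPNs collected from the old fabric with those collected after the migration, as in the following sketch. The WWPN lists are hypothetical placeholders for whatever inventory export you use from your management tools.

# Hypothetical WWPN inventories exported before and after the migration.
pre_migration = {
    "10:00:00:05:1e:xx:xx:01",
    "10:00:00:05:1e:xx:xx:02",
    "10:00:00:05:1e:xx:xx:03",
}
post_migration = {
    "10:00:00:05:1e:xx:xx:01",
    "10:00:00:05:1e:xx:xx:03",
}

missing = pre_migration - post_migration
print(f"Devices before: {len(pre_migration)}, after: {len(post_migration)}")
if missing:
    print("Not logged in after migration:", ", ".join(sorted(missing)))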
9.2.5 Gather infrastructure information
Detailed information is required for the following items:
Individual Fabric Details
Device Details
Device Mapping Details
Application-Specific Details
9.3 Migration strategy
The migration process can be simplified by preparing a migration plan in advance. Besides cabling, rack space, and power requirements, other factors such as scheduling downtime, personnel security, and application change windows as well as host and storage failover can significantly affect the SAN operations. The current configuration and operational requirements of a target SAN might impose additional constraints. The key to a successful migration is to minimize fabric interruption or to completely eliminate downtime, whenever possible, by identifying issues in advance.
Effective planning provides the preliminary groundwork for the evaluation phase and sets the foundation for the migration process. After reviewing the requirements that apply to a specific situation, the migration process will fall into one of the following categories:
Online Redundant Fabric Migration
A redundant fabric provides the flexibility to upgrade one fabric by bringing it offline while redirecting active I/O to the other fabric. Current I/O operations are not affected by the migration activity. With this strategy, the hosts are operating in a degraded mode with no data path protection. Any failure on the remaining active path stops I/O completely.
With proper planning, any downtime or outage is minimized. When the fabric upgrade has been completed and verified, it can be brought back online by restoring the I/O paths. The migration process is repeated for the second fabric after all I/O paths are successfully restored on the first fabric.
Offline fabric migration
An offline fabric migration assumes that a fabric can be brought offline to perform the migration and that I/O is stopped during the downtime. This method is the safest and most convenient method for migration.
9.3.1 Migration methods
Infrastructure resiliency or redundancy of the fabric determines the primary migration strategy. While you are preparing the migration plan, identify the strategy to be used. The migration has the following options:
Port-to-Port migration: This method is a straightforward port-to-port migration from one fabric to another. It requires all logically grouped initiator/target pairs to be moved during a single migration activity. This strategy is called migrating by “move groups.” For example, when a storage port is moved, all associated HBAs that are accessing LUNs through this port must also be moved.
Application migration: This method is possible if the physical infrastructure is not shared across application tiers. If the application happens to run on a new server and storage infrastructure, a validation is required to check whether all the data has been migrated before the cutover. SANs tend to be logically identified as database, web services, backup, and so on.
Device migration: This method is a logical approach to offline migration because customers physically isolate servers and storage devices in racks or sections of the data center. Migrating devices by using this method provides a clear high-level accounting, especially for the racks that are relocated as part of the migration.
9.4 Developing a migration plan
A plan is the foundation for a successful project or, in this case, a SAN migration. The best practices for such a plan can be derived from any number of formal methodologies. However, any good plan should include at least the following steps:
Project scope and success criteria
Phases, tasks, and subtasks
Resource definitions
Timelines
Task dependencies
Tracking criteria
Checkpoints
Deliverables for procedures, designs, and configurations
Fallback plans
Signoff criteria
9.5 Preparing to migrate
Performing the following steps ahead of time helps to minimize the time that is required for migration. Keep checklists to track switch configurations, zone information, and port mappings.
Migration preparation falls into the following categories:
Build the new SAN infrastructure
Configure the SAN
Validate the new SAN
9.6 Performing the migration and validation
The migration includes the following tasks:
Run the migration plan
Validate migration per phase
Validate application operations per phase
Sign off on the SAN migration phase
When the migration assessment, qualification, and preparation are complete, the SAN can be migrated. Based on the criteria that are listed in the previous sections, select the primary migration strategy from these options:
Offline migration
Redundant and single fabric online migration
9.6.1 Offline migration
Although this method requires the fabric to be offline, it is also the safest option for migration.
9.6.2 Redundant and single fabric online migration
This strategy involves migrating devices by keeping the applications online. It is challenging to facilitate and requires a great deal of planning. However, if planned properly, and if the key applications that need to remain online are designed to support high availability and redundancy, this method can allow migration with no interruption of service. The key to this approach is setting the correct expectations in advance.
9.7 Completing the migration
When the migration activity is complete, it is critical to follow a post-migration plan. Several steps help ensure that all the work that you completed is protected and validated. The following are some of the post-migration activities:
Run IBM SAN Health
Validate new SAN configurations
Validate application operations
Back up new SAN configurations
Sign off on SAN migration
Decommission the old SAN infrastructure
9.8 Licensing
The following license types are supported in Fabric OS:
Permanent license: A permanent license enables a license-controlled feature to run on the switch indefinitely.
Temporary license: A temporary license enables a license-controlled feature to run on the switch on a temporary basis. A temporary license enables demonstration and evaluation of a licensed feature, and can be valid for up to 45 days.
Universal temporary license: A universal temporary license can only be installed once on a switch, but can be applied to as many switches as required. Temporary use duration (the length of time the feature will be enabled on a switch) is provided with the license keys.
Slot-based licensing: A slot-based license allows you to select the slots that the license will enable up to the capacity purchased. You can increase the capacity without disrupting slots that already have licensed features running. Each licensed feature that is supported on the blade has a separate slot-based license key.
9.8.1 Available Fabric OS licenses
This subsection provides a description and usage of the available licenses.
10 Gigabit FCIP/Fibre Channel (10G license)
The 10G license has these characteristics:
Allows 10 Gbps operation of FC ports on the Brocade SAN48B-5 or SAN96B-5 switches or the FC ports of FC16-32 or FC16-48 port blades installed on a SAN384B-2 or SAN768B-2 Backbone.
Enables the two 10-GbE ports on the FX8-24 extension blade when installed on the SAN384B, SAN768B, SAN384B-2, or SAN768B-2 Backbone.
Allows selection of the following operational modes on the FX8-24 blade:
 – 10 1-GbE ports and 1 10-GbE port
 – 2 10-GbE ports
License is slot-based when applied to a Brocade Backbone. It is chassis-based when applied to a SAN48B-5 or SAN96B-5 switch.
This license allows the establishment of a 10G ISL in metro optical connectivity or with 10G dense wavelength division multiplexing (DWDM) devices. It also allows a 10G FCIP connection.
SAN06B-R Upgrade
The SAN06B-R Upgrade has these characteristics:
Enables full hardware capabilities on the Brocade SAN06B-R base switch, increasing the number of Fibre Channel ports from four to sixteen and the number of GbE ports from two to six.
Supports up to eight FCIP tunnels instead of two.
Supports advanced capabilities such as Open Systems tape read/write pipelining.
 
Note: The SAN06B-R switch must have the SAN06B-R Upgrade license to add FICON Management Server (CUP) or Advanced FICON Acceleration licenses.
Adaptive Networking with QoS
This license provides a rich framework of capability, allowing a user to ensure that high-priority connections obtain the bandwidth necessary for optimum performance, even in congested environments. The QoS SID/DID Prioritization and Ingress Rate Limiting features are included in this license, and are fully available on all 8G and 16G platforms.
 
Notes:
The SAN96B-5 does not require an Adaptive Networking with QoS license to enable the capabilities that are associated with this license. These capabilities are included by default on the SAN96B-5.
This license is automatically enabled for new switches that operate with only Fabric OS 7.2.0 or later, and for existing switches that are upgraded to Fabric OS 7.2.0 or later.
Advanced Extension
The Advanced Extension license provides these features:
Enables two advanced extension features: FCIP Trunking and Adaptive Rate Limiting.
The FCIP Trunking feature allows all of the following configurations:
 – Multiple (up to 4) IP source and destination address pairs (defined as FCIP Circuits) using multiple (up to 4) 1-GbE or 10-GbE interfaces to provide a high-bandwidth FCIP tunnel and failover resiliency.
 – Support for up to 4 of the following quality of service (QoS) classes: Class-F, high, medium, and low priority, each as a TCP connection.
The Adaptive Rate Limiting feature provides a minimum bandwidth guarantee for each tunnel with full usage of available network bandwidth without any negative impact to throughput performance under high traffic load (see the sketch after this list).
Available on the SAN06B-R switch, SAN42B-R, and on the SAN768B, SAN384B, SAN768B-2 and SAN384B-2 platforms for the FX8-24 on an individual slot basis.
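The relationship between FCIP circuits, Adaptive Rate Limiting floors and ceilings, and the resulting tunnel bandwidth reduces to simple sums, as in the following sketch. The circuit definitions are hypothetical and do not represent the actual Fabric OS configuration syntax.

# Hypothetical FCIP tunnel made of up to four circuits, each with an Adaptive
# Rate Limiting minimum (guaranteed) and maximum rate in Gbps.
circuits = [
    {"min_gbps": 0.5, "max_gbps": 1.0},   # circuit over a 1-GbE interface
    {"min_gbps": 0.5, "max_gbps": 1.0},
    {"min_gbps": 2.0, "max_gbps": 10.0},  # circuit over a 10-GbE interface
]

guaranteed = sum(c["min_gbps"] for c in circuits)
ceiling = sum(c["max_gbps"] for c in circuits)

print(f"Tunnel guaranteed bandwidth: {guaranteed} Gbps, ceiling: {ceiling} Gbps")
print(f"Circuits in tunnel: {len(circuits)} (up to 4 are supported)")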
Advanced Acceleration for FICON
This license allows use of specialized data management techniques and automated intelligence to accelerate FICON tape read and write and IBM Global Mirror data replication operations over distance. It maintains the integrity of command and acknowledgment sequences.
This license is available on the SAN06B-R and SAN42B-R switches. It is also available on the SAN384B, SAN768B, SAN384B-2, and SAN768B-2 directors for the FX8-24 blade on an individual slot basis.
Advanced Performance Monitoring
The Advanced Performance Monitoring (APM) license offers these features:
Enables performance monitoring of networked storage resources
Includes the Top Talkers feature
Helps to identify end-to-end bandwidth usage by host/target pairs and is designed to provide information for capacity planning
The Fabric Vision license is equivalent to the combination of both the APM and Fabric Watch (FW) licenses. If you have both the APM and the FW licenses installed, you do not need the Fabric Vision license.
Fabric Watch
The FW license constantly monitors mission-critical switch operations for potential faults and automatically alerts administrators about problems before they become costly failures. Fabric Watch includes Port Fencing capabilities.
The Fabric Vision license is equivalent to the combination of both the APM and FW licenses. If you have both the APM and FW licenses installed, you do not need the Fabric Vision license.
Extended Fabrics
This license provides greater than 10 km of switched fabric connectivity at full bandwidth over long distances (depending on the platform, this distance can be up to 3000 km).
Fabric interconnectivity over Fibre Channel at longer distances
ISLs can use long-distance dark fiber connections to transfer data. Wavelength-division multiplexing, such as DWDM, coarse wavelength division multiplexing (CWDM), and time-division multiplexing (TDM), can be used to increase the capacity of the links. As Fibre Channel speeds increase, the maximum distance decreases for each switch.
The Extended Fabrics feature extends the distance the ISLs can reach over an extended fiber. This extension is accomplished by providing enough buffer credits on each side of the link to compensate for latency that is caused by the extended distance.
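A commonly cited rule of thumb for sizing long-distance buffer credits is roughly half a credit per kilometre per Gbps of link speed for full-size frames. The following sketch applies that approximation only as a planning aid; it is not a substitute for the sizing guidance in the Fabric OS documentation.

import math

def buffer_credits_estimate(distance_km, speed_gbps):
    """Rule-of-thumb estimate: about 0.5 buffer credits per km per Gbps,
    assuming full-size (about 2 KB) frames."""
    return math.ceil(distance_km * speed_gbps * 0.5)

print(buffer_credits_estimate(100, 16))   # about 800 credits for a 100 km ISL at 16 Gbps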
Simplified management over distance
Each device that is attached to the SAN appears as a local device, which simplifies deployment and administration.
Optimized switch buffering
When Extended Fabrics is installed on gateway switches (with E_Port connectivity from one switch to another), the ISLs (E_Ports) are configured with a large pool of buffer credits. The enhanced switch buffers help ensure that data transfer can occur at near-full bandwidth to use the connection over the extended links efficiently. This efficiency ensures the highest possible performance on ISLs.
 
Note: This license is not required for long-distance connectivity that uses licensed 10G ports.
ISL Trunking
The ISL Trunking license provides these features:
Provides the ability to aggregate multiple physical links into one logical link for enhanced network performance and fault tolerance.
Includes Access Gateway ISL Trunking on those products that support Access Gateway deployment.
Ports on Demand
The Ports on Demand license allows you to instantly scale the fabric by provisioning additional ports by using license key upgrades.
 
Note: This license applies to the SAN24B-4, SAN40B-4, SAN80B-4, SAN24B-5, SAN48B-5, SAN96B-5, and Flex System FC5022 16 Gb SAN Scalable switches.
DataFort Compatibility
This license provides the ability to read, write, decrypt, and encrypt NetApp DataFort-encrypted disk LUNs and tapes on the following switches:
Encryption Switch (SAN32B-E4)
Enterprise b-type directors with FS8-18 blade
The DataFort Compatibility license includes metadata, encryption, and compression algorithms.
Encryption Performance Upgrade
This license provides additional encryption bandwidth on encryption platforms. For the Brocade Encryption Switch, two Encryption Performance Upgrade licenses can be installed to enable the full available bandwidth. On a Brocade enterprise platform, a single Performance License can be installed to enable full bandwidth on all FS8-18 blades that are installed in the chassis.
Enhanced Group Management
The Enhanced Group Management license enables full management of the device in a data center fabric with deeper element management functionality and greater management task aggregation throughout the environment. This license is used with Brocade Network Advisor application software. This license is applicable to all of the IBM b-type 8G and 16G FC platforms.
 
Note: This license is enabled by default on all 16G FC platforms, and on SAN768B and SAN384B platforms that are running Fabric OS v7.0.0 or later. This license is not included by default on 8G FC fixed port switches (SAN24B-4, SAN40B-4, SAN80B-4, and 8G FC blade server SAN I/O modules).
Fabric Vision
The Fabric Vision (FV) license allows you to activate the following features:
Monitoring and Alerting Policy Suite (MAPS)
Flow Vision
Run D_Port tests between a switch and non-Brocade HBAs
This license replaces the Advanced Performance Monitor (APM) and Fabric Watch (FW) licenses. If you have the Fabric Vision license, you can use Advanced Performance Monitoring and Fabric Watch features without the APM and FW licenses.
Integrated Routing
Integrated Routing (IR) allows any ports on the SAN40B-4, SAN80B-4, SAN48B-5, SAN96B-5, SAN42B-R, SAN32B-E4, Flex System FC5022 switches, or on the SAN768B, SAN384B, SAN384B-2 and SAN768B-2 platforms to be configured as an EX_Port supporting FC-FC routing. This configuration provides improved scalability and fault isolation, along with multivendor interoperability.
FICON CUP
FICON with control unit port (CUP) Activation provides in-band management of the supported SAN b-type switch and director products by system automation for z/OS. To enable in-band management on multiple switches and directors, each chassis must be configured with the appropriate FICON CUP feature. This support provides a single point of control for managing connectivity in active FICON I/O configurations. System automation for OS/390 or z/OS can now concurrently manage IBM 9032 ESCON directors, in addition to supported SAN b-type switch and director products with FICON.
Full Fabric
The Full Fabric license enables a switch to connect to a multi-switch fabric through E_Ports, forming ISL connections.
 
Note: This license is only required on select blade server SAN I/O models and the SAN24B-4, and does not apply to other fixed-port switches or chassis-based platforms.
ICL 8-Link
ICL 8-Link activates all eight links on ICL ports on a Brocade SAN384B or half of the ICL bandwidth for each ICL port on the SAN768B platform by enabling only eight links out of the sixteen links available. This license allows you to purchase half the bandwidth of SAN768B ports initially and upgrade with an additional ICL 8-Link license to use the full ICL bandwidth later.
This license is also useful for environments that need to create ICL connections between a SAN768B and a SAN384B. The latter cannot support more than eight links on an ICL port.
It is available on the SAN768B and SAN384B backbones only.
ICL 16-Link
This license activates all 16 links on ICL ports on a SAN768B chassis. Each chassis must have the ICL 16-Link license installed in order to enable the full 16-link ICL connections.
This license is available only on the SAN768B.
Inter-Chassis Link (1st POD)
This license activates half of the ICL bandwidth on a SAN768B-2, or all the ICL bandwidth on a SAN384B-2. It allows you to enable only the bandwidth that is needed and upgrade to more bandwidth later. This license is also useful for environments that need to create ICL connections between a SAN768B-2 and a SAN384B-2. The latter platform supports only half the number of ICL links that the former platform supports.
This license is available only on the Gen5 Backbones.
Inter-Chassis Link (2nd POD)
Activates the remaining ICL bandwidth on the Brocade SAN768B-2 chassis. Each chassis must have this ICL license installed to enable all available ICL connections.
This license is available only on the Gen5 backbones.
Enterprise ICL
This license allows you to connect four or more chassis to a SAN768B-2 or SAN384B-2 chassis by using ICLs. For each Gen5 backbone, you can connect up to three Gen5 Backbones with ICLs without this license. This license is required only on the Gen5 Backbone that is connected to four or more Gen5 Backbone chassis.
This license requirement does not depend on the total number of Gen5 Backbone chassis that exist in a fabric. Rather, it depends only on the number of chassis connected directly to a Gen5 Backbone with ICLs.
You must also have an ICL POD license on each Gen5 Backbone to activate the ICL ports.
The Enterprise ICL license only allows the connection of four or more chassis by using ICLs. It does not enable the ICL ports on a chassis.
 
Note: This license applies only to the Gen5 Backbone family.
Server Application Optimization
When deployed with Brocade server adapters, the Server Application Optimization (SAO) license optimizes overall application performance for physical servers with virtual machines by extending virtual channels to the server infrastructure. Application-specific traffic flows can be configured, prioritized, and optimized throughout the entire data center infrastructure.
 
Notes:
The Brocade SAN96B-5 does not require an SAO license to enable the capabilities that are associated with this license. These capabilities are included by default on the SAN96B-5.
This license is automatically enabled for new switches that operate with only Fabric OS 7.2.0 or later and for existing switches that are upgraded to Fabric OS 7.2.0 or later.
WAN Rate Upgrade 1
This license provides additional WAN transmission throughput up to 10 Gbps on a SAN42B-R. Without the WAN Rate Upgrade 1 license, the SAN42B-R provides WAN throughput of 5 Gbps. Upgrade licenses do not impose a restriction on the number of physical ports that can be used, provided that the aggregate bandwidth of all configured FCIP tunnels does not exceed the licensed limit.
WAN Rate Upgrade 2
Provides unlimited WAN transmission throughput (other than the physical port limit) and enables two 40 GbE ports on a SAN42B-R switch. The 40 GbE ports cannot be used without the WAN Rate Upgrade 2 license.
The WAN Rate Upgrade 1 license must be installed before you install and activate the WAN Rate Upgrade 2 license. The WAN Rate Upgrade 1 license cannot be removed until you remove the WAN Rate Upgrade 2 license.
Enterprise software bundle for SAN768B-2 and SAN384B-2
The enterprise software bundle is a bundle of FOS features on top of the base FOS functionality that is included in the hardware base for both the SAN768B-2 and the SAN384B-2. It includes the following features (licenses):
Adaptive Networking
Advanced Performance Monitoring
Extended Fabrics
Fabric Watch
ISL Trunking
Server Application Optimization
Fabric Vision
FICON with CUP activation
Integrated routing
Inter-Chassis License with eight 16 Gbps 2 km quad small form factor pluggables (QSFPs)
Inter-Chassis License with sixteen 16 Gbps 100m QSFPs
16 Gbps 2 km QSFP
Inter-chassis QSFP bundle
Inter-Chassis License conversion
Enterprise ICL license
9.8.2 License administration
When you receive a transaction key, you need to retrieve the license ID (LID) of the b-type switch by using the licenseidshow command:
switch:admin> licenseidshow
a4:f8:69:33:22:00:ea:18
Go to the IBM Storage License Keys portal. Select the license activation type Generate SAN b-type switch feature license key, and follow the instructions to generate the license key. See Figure 9-1.
Figure 9-1 Switch feature activation
Use your web browser to connect to Web Tools (or Element Manager → Hardware through IBM Network Advisor) to find the WWN of the switch and manage licensing. After logging in to Web Tools using admin credentials, click Configure → Switch Admin to open the Switch Administration window.
In the Switch Administration window, the WWN of the switch is displayed, along with a License tab that you can use to manage licensing.
For an in-depth licensing overview, see the Fabric OS Software Licensing Guide.