Host attachment considerations for Virtual Tape Library
This chapter describes the preferred practices for connecting hosts to the IBM System Storage TS7600 ProtecTIER family Virtual Tape Library (VTL), including device driver specifications for various operating system platforms, such as IBM AIX®, UNIX, Linux, and Solaris. This chapter also describes the suggested settings for LUN masking, persistent device name binding, and considerations about control path failover (CPF).
 
Note: This chapter applies only when you use the VTL emulation feature of ProtecTIER. It does not apply to the File System Interface (FSI).
This chapter describes the following topics:
General suggestions (to connect a backup application host to a ProtecTIER VTL system)
For further details about the topics in this chapter and managing your tape environment, see the IBM Tape Device Drivers Installation and User's Guide, GC27-2130.
6.1 General suggestions
When you use the VTL emulation of the ProtecTIER system, there are several general suggestions that you can follow to connect any backup application host to the
ProtecTIER system:
Ensure that the operating system (OS) and version of your host, and your backup server version, are listed as supported in the IBM Interoperability Matrix and in the Backup Application independent software vendor (ISV) Support Matrix. For more information, see 6.2, “Device driver specifications” on page 80.
Install the suggested device driver on the host, as specified in the IBM Interoperability Matrix. For more information, see 6.2, “Device driver specifications” on page 80.
When possible, configure CPF to enable redundancy to access virtual robot devices. For more information, see 6.2, “Device driver specifications” on page 80.
Set up persistent device naming to avoid changes on devices that are recognized by the OS after a system restart. For example, persistent naming can be configured under Linux by using the udev device manager. For more information, see 6.2, “Device driver specifications” on page 80.
When you set up persistent naming, do not use SAN Discovery in Tivoli Storage Manager (rebranded to IBM Spectrum Protect as of version 7.1.3). The Tivoli Storage Manager SAN Discovery function discovers IBM devices based on the original OS device name, not on customized device names as they are created, for example, with udev.
When you share a VTL across several backup hosts, enable the LUN masking feature and configure LUN masking groups, as described in 6.3, “LUN masking for VTL systems” on page 87.
6.2 Device driver specifications
Select the appropriate device driver, depending on the backup application, OS, and version of the host that you attach to the ProtecTIER server.
To access the IBM SSIC and the ProtecTIER ISV Support Matrix, see Appendix B, “ProtecTIER compatibility” on page 457.
To ensure that your host hardware is supported and that the firmware versions are at the minimum required levels, review the Notes section of the ProtecTIER ISV Support Matrix. This section specifies which device driver must be installed on the host to work with the ProtecTIER VTL.
Table 6-1 summarizes which device driver to choose (either IBM Tape Device Driver, native OS driver, or ISV device driver) for each application. For detailed information about versions and specific configurations, see the latest release of the ProtecTIER ISV Support Matrix.
Table 6-1 Summary of recommended device drivers by backup application

Backup application | IBM Tape Device Drivers | Native OS driver or ISV driver
-------------------|-------------------------|-------------------------------
IBM Tivoli Storage Manager (rebranded to IBM Spectrum Protect as of V7.1.3) | All platforms | Not applicable (NA)
Symantec Veritas NetBackup (NetBackup) | AIX with NetBackup V6.5.2 and later; when an IBM tape driver is used, its multipath function must be disabled | AIX with NetBackup earlier than V6.5.2 (requires the Symantec ovpass driver); Solaris (requires the Symantec sg driver for drives and the Solaris st driver for the robot/changer devices); all other platforms
EMC NetWorker | Windows; AIX; Solaris (requires the Solaris sg driver for the robot/changer devices and the IBM tape driver for tape drives) | Linux; HP-UX; Solaris
Commvault | Windows; AIX | All other platforms: native OS drivers
HP Data Protector | Windows | All other platforms: native OS drivers
Although the following sections list the specifications grouped by OS, always see the latest release of the ISV Support Matrix for current information. To access the latest IBM SSIC and ProtecTIER ISV Support Matrix, see Appendix A, “ProtecTIER compatibility” on page 465.
6.2.1 AIX specifications to work with VTL
The following backup and recovery applications and AIX specifications work with VTL:
The IBM Spectrum Protect (formerly Tivoli Storage Manager) backup application on all AIX OS versions requires IBM Tape Device Drivers for the TS3500 Library medium changer and for LTO3 drives.
The EMC NetWorker (Legato) backup application on all AIX OS versions requires IBM Tape Device Drivers for the LTO3 tape drives.
The HP Data Protector backup application requires the native OS driver for changer and drive devices.
Symantec NetBackup (NetBackup) V6.5.2 and later uses the IBM tape driver with the TS3500 Library medium changer and LTO3 drives. Earlier releases require the Symantec ovpass driver and the V-TS3500 library.
For all other backup applications on AIX platforms, use the native Small Computer System Interface (SCSI) pass-through driver for all existing VTL emulations.
6.2.2 Solaris specifications to work with VTL
The following backup and recovery applications and Solaris specifications work with VTL:
The IBM Spectrum Protect (formerly Tivoli Storage Manager) backup application requires IBM Tape Device Drivers on all Solaris platforms.
The EMC NetWorker (Legato) backup application supports either the IBM Tape Device Driver or the native st driver.
The HP Data Protector backup application requires a Solaris sst driver for the TS3500 medium-changer, and the native driver for the drives.
All other backup applications on Solaris use the native driver for all existing VTL emulations.
6.2.3 Linux specifications to work with VTL
The following backup and recovery applications and Linux specifications work with VTL:
The IBM Spectrum Protect backup application (formerly Tivoli Storage Manager) requires IBM Tape Device Drivers on all Linux platforms.
The EMC NetWorker (Legato) backup application requires only the native st driver, and it can support up to 128 tape drives per host.
For all other backup applications on Linux platforms, use the native SCSI pass-through driver for all existing VTL emulations.
Implementation of control path failover (CPF) is possible only with the Tivoli Storage Manager backup application on all Linux platforms.
6.2.4 Windows specifications to work with VTL
The following backup and recovery applications and Windows specifications work with VTL:
IBM Spectrum Protect (formerly Tivoli Storage Manager), EMC NetWorker, and Commvault require IBM Tape Device Drivers.
NetBackup and all other backup applications that are not previously listed use the native Windows driver for the VTL emulations.
6.2.5 IBM Tape Device Driver
For the IBM Tape Device Driver, an installation and user guide contains detailed steps to install, upgrade, or uninstall the device driver for all supported OS platforms. See the IBM Tape Device Drivers Installation and User's Guide, GC27-2130.
The IBM Tape Device Drivers can be downloaded from the Fix Central website. Fix Central also provides fixes and updates for your system's software, hardware, and operating system.
To download the IBM Tape Device Driver for your platform, complete the following steps:
1. Go to the IBM Fix Central website:
2. Click the Select product tab and complete the following steps:
a. From the Product Group drop-down menu, select System Storage.
b. From the Select from System Storage drop-down menu, select Tape systems.
c. From the next Select from Tape systems drop-down menu, select Tape drivers and software.
d. From the Select from Tape drivers and software drop-down menu, select Tape device drivers.
e. From the Platform drop-down menu, select your operating system. You can select the generic form of the platform (Linux) and all device drivers for that platform are listed.
f. Click Continue. In the window that opens, select the download that you need.
6.2.6 Control path failover and data path failover
The path failover features ensure the use of a redundant path in the event that communication over the primary path fails. These path failover features are built in to the IBM Tape Device Drivers, and are enabled by default for ProtecTIER.
ProtecTIER offers path failover capabilities that enable the IBM Tape Device Driver to resend a command to an alternate path. The IBM Tape Device Driver initiates error recovery and continues the operation on the alternate path without interrupting the application.
Two types of path failover capabilities exist:
Control Path Failover (CPF). Control refers to the command set that controls the library (the SCSI Medium Changer command set).
Data Path Failover (DPF). Data refers to the command set that carries the customer data to and from the tape drives.
Path failover means the same in both: that is, where there is redundancy in the path from the application to the intended target (the library accessory or the drive mechanism, respectively), the IBM Tape Device Driver transparently fails over to another path in response to a break in the active path.
 
Note: CPF and DPF are activated when errors or events occur on the physical or transmission layer, such as cable failures, HBA hardware errors (and some software HBA errors), and tape drive hardware errors.
Both types of failover include host-side failover when configured with multiple HBA ports connected into a switch, but CPF includes target-side failover through the control paths that are enabled on more than one tape drive. DPF includes target-side failover for the dual-ported tape drives, but ProtecTIER does not virtualize dual-ported tape drives. Table 6-2 summarizes CPF and DPF support offered on ProtecTIER.
Table 6-2 CPF and DPF support enabled on ProtecTIER
Failover type
Host side
Target side
Control Path Failover
With multiple HBA ports connected into a switch
With robot accessible through multiple ports
Data Path Failover
With multiple HBA ports connected into a switch
Not supported
For more details about CPF and DPF see the Common extended features topic in IBM Tape Device Drivers Installation and User’s Guide, GC27-2130.
Because CPF and DPF require the IBM Tape Device Driver, the backup applications connected to ProtecTIER that support the path failover features are those that use the IBM Tape Device Driver.
 
Note: CPF and DPF are not features owned by IBM Spectrum Protect (known as Tivoli Storage Manager prior to version 7.1.3) and are not related to the IBM Spectrum Protect SAN Discovery function.
In the ProtecTIER Manager, you can verify that CPF is enabled, which is the default, by checking the properties of a defined library in the Configuration window (Figure 6-1).
 
Tip: To use CPF, in addition to it being enabled for the library, make sure to have more than one robot enabled and available in the library.
Figure 6-1 Control path failover mode enabled at ProtecTIER Manager
Enabling control path failover in IBM Spectrum Protect (formerly Tivoli Storage Manager)
To enable CPF/DPF in an AIX system with Tivoli Storage Manager, enable path failover support on each SCSI medium changer by running the chdev command in AIX:
chdev -l smc0 -a alt_pathing=yes
chdev -l smc1 -a alt_pathing=yes
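On a host with many changer devices, the same change can be scripted. The following sketch is a dry run: it only prints the chdev commands so that they can be reviewed before being run on the AIX host (the device names smc0 and smc1 are examples).

```shell
# Dry run: print the chdev command that enables alternate pathing
# (path failover) for each SCSI medium changer. Remove the echo, or
# pipe the output to sh, to apply the change on a real AIX host.
for smc in smc0 smc1; do
  echo "chdev -l $smc -a alt_pathing=yes"
done
```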
Primary and alternative paths
When the device driver configures a logical device with path failover support enabled, the first device that is configured always becomes the primary path.
On AIX systems, on SCSI attached devices, -P is appended to the location field. On Fibre attached devices, -PRI is appended to the location field of the device (Example 6-1 on page 85). When a second logical device is configured with path failover support enabled for the same physical device, it configures as an alternative path.
On SCSI attached devices, -A is appended to the location field. On Fibre attached devices, -ALT is appended to the location field of the device (Example 6-1 on page 85). A third logical device is also configured as an alternative path with either -A or -ALT appended, and so on. The device driver supports up to 16 physical paths for a single device.
If smc0 is configured first, and then smc1 is configured, the lsdev -Cc tape command output is similar to Example 6-1 on page 85.
Example 6-1 Primary and alternative path example for Fibre attached devices on AIX
aixserver> lsdev -Cc tape | grep smc
smc0 Available 06-09-02-PRI IBM 3584 Library Medium Changer (FCP)
smc1 Available 0B-09-02-ALT IBM 3584 Library Medium Changer (FCP)
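A quick way to see which path is primary and which is alternate is to filter the lsdev output for the -PRI and -ALT suffixes. The following sketch embeds the sample output from Example 6-1; on a live AIX host, you would pipe the output of `lsdev -Cc tape` instead.

```shell
# Classify medium-changer paths as primary or alternate based on the
# -PRI/-ALT suffix in the location field of the lsdev output.
lsdev_output='smc0 Available 06-09-02-PRI IBM 3584 Library Medium Changer (FCP)
smc1 Available 0B-09-02-ALT IBM 3584 Library Medium Changer (FCP)'

printf '%s\n' "$lsdev_output" | awk '
/-PRI/ { print $1, "primary path" }
/-ALT/ { print $1, "alternate path" }'
```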
 
Configuring CPF: Detailed procedures of how to configure CPF for AIX and other platforms are in the topic about installing and configuring OS device drivers in IBM System Storage TS7600 with ProtecTIER Version 3.3, SG24-7968, and in the IBM Tape Device Drivers Installation and User's Guide, GC27-2130.
Redundant robots with Symantec NetBackup V6.5.2
NetBackup V6.0 became the first release to support multiple paths to tape drives. In NetBackup V6.5.2, the method for handling multiple robots is enhanced.
This version of NetBackup can handle multiple robot instances without the IBM Tape Device Driver because the path failover mechanism is implemented in the NetBackup software.
The V-TS3500 library type presents redundant robots to NetBackup V6.5.2, which eliminates the single robot limitation.
After you configure your storage devices (use the Configure Storage Devices wizard), only the first path that is detected for the robot is stored in the Enterprise Media Manager database.
If other paths to the library robot exist, you can configure them as alternative paths by enabling multiple path support in NetBackup. Use the NetBackup robtest utility to enable and manage multiple path support for library robots.
If all paths fail and the robot is marked as down, then, in multiple path automatic mode, NetBackup regularly scans for the robot until it becomes available again. Automatic mode is the default. If you use multiple path manual mode, NetBackup regularly attempts to access the robot through all the paths that are configured in the multipath configuration.
To enable multiple paths for library robots, complete the following steps:
1. Start the robtest utility:
 – For UNIX: /usr/openv/volmgr/bin/robtest
 – For Windows: install_path\Volmgr\bin\robtest.exe
2. Select the library robot for which you want to enable multiple paths.
3. At the Enter tld commands prompt, enter the following command:
multipath enable
When the multipath feature is enabled, it defaults to running in automatic mode. Automatic mode scans all paths for each library robot at each tldcd daemon start and requires no additional setup.
6.2.7 Persistent device naming
Persistent device naming from a hardware perspective is a way of permanently assigning SCSI target identifiers (IDs) to the same Fibre Channel (FC) LUNs. With persistent naming, these devices are discovered across system restarts, even if the device's ID on the fabric changes. Some host bus adapter (HBA) drivers have this capability built in, and some do not. Therefore, you must rely on additional software for persistent binding.
From a software perspective, the device files that are associated with the FC LUNs can be symbolically linked to the same secondary device file based on the LUN information. This setup ensures persistence upon discovery, even if the device's ID on the fabric changes.
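Under Linux, this software-side symbolic linking is typically done with a udev rule. The fragment below is an illustrative sketch only: the match keys, the serial number, and the symlink name are assumptions, so verify the real attributes of your devices with `udevadm info` before creating such a rule.

```
# /etc/udev/rules.d/98-vtl-persistent.rules (illustrative sketch)
# Create a stable /dev/tape/vtl_drive0 symlink for the tape device with
# this (placeholder) serial number, regardless of discovery order.
KERNEL=="st*", ENV{ID_SERIAL}=="PLACEHOLDER_SERIAL", SYMLINK+="tape/vtl_drive0"
```

After adding the rule, reload udev (for example, with `udevadm control --reload` followed by `udevadm trigger`) so the symlink is created without a restart.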
Operating systems and upper-level applications (such as backup software) typically require a static or predictable SCSI target ID for storage reliability, and for persistent device naming.
An example where persistent naming is useful is a specific host that always assigns the same device name to the first tape library and drives that it finds (Figure 6-2).
Figure 6-2 Persistent device name binding
Why persistent device naming matters
Persistent device naming support ensures that attached devices are always configured with the same logical name that is based on the SCSI ID, LUN ID, and HBA. You want to be certain that the same logical names are assigned to your device, even when the system is restarted.
For example, when the AIX OS is started, the HBA performs a device discovery and assigns a default logical name to each device that is found, in sequential order.
Assume that an AIX system is connected to a tape library with two tape drives, with a LUN ID of 0 and target addresses of 0, 1, and 2. The HBA initially configures them as Available with the following logical names:
rmt0 target 0, lun 0 Available
rmt1 target 1, lun 0 Available
rmt2 target 2, lun 0 Available
Suppose that the tape devices are deleted from the system (by running rmdev -dl rmt1 and rmdev -dl rmt2) before you restart the machine. On the next restart, if the existing rmt1 target 1 device is powered off, or not connected, the HBA initially configures two devices as Available with the following logical names:
rmt0 target 0, lun 0 Available
rmt1 target 2, lun 0 Available
If the previous rmt1, target 1 device is powered on after restart and the cfgmgr command is run, the HBA configures the device as rmt2 rather than rmt1:
rmt2 target 1, lun 0 Available
This example is a simple one. Imagine if you have a system with 200 tape drives, and with every system restart, each device is assigned a different name. This situation could cause extra work for a system administrator to correctly reconfigure all of the devices after each restart or device reconfiguration, such as changing the characteristics of a VTL.
For applications that need a consistent naming convention for all attached devices, use persistent device naming support by defining a unique logical name (other than the AIX default names) that is associated with the specific SCSI ID, LUN ID, and HBA that the device is connected to.
In AIX, you can change the logical name of a device by running the chdev command. For example, to change the logical name of the device rmt1 to rmt-1, run the following command:
chdev -l rmt1 -a new_name=rmt-1
With this command, rmt-1 is no longer named by HBA discovery order but is predefined at its SCSI ID and LUN ID. If the device is absent, rmt-1 remains in the defined state and is not configured for use, and after a restart it is configured with the same name at the same location.
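For many drives, the renaming can be scripted. The following sketch is a dry run that only prints the chdev commands: the rmt0 to rmt2 device list and the rmt-N naming scheme are examples, so adjust them to your own convention before applying.

```shell
# Dry run: print a persistent-name chdev command for each default rmtN
# device, mapping rmtN to rmt-N. Remove the echo to apply on AIX.
for dev in rmt0 rmt1 rmt2; do
  echo "chdev -l $dev -a new_name=rmt-${dev#rmt}"
done
```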
 
Path failover: When path failover is enabled, if you change the logical name for either a primary or alternative device, only the individual device name changes.
Detailed procedures for configuring persistent device naming for AIX and other platforms are in the IBM Tape Device Drivers Installation and User's Guide.
6.3 LUN masking for VTL systems
Administrators can manage the visibility of specific devices to specific hosts in the IBM ProtecTIER environment. This ability is called LUN masking.
LUN masking permits specific devices (such as tape drives or robots) to be seen by only a select group of host initiators. You can use this feature to assign specific drives to a specific host that runs backup application modules. It enables multiple initiators to share the target FC port without having conflicts on the devices that are being emulated.
The LUN masking setup can be monitored and modified at any time during system operation. Every modification to LUN masking in a ProtecTIER server that might affect the host configuration requires rescanning by the host systems. By default, LUN masking is disabled.
Without LUN masking, all of the devices in the environment are visible to all of the FC attached hosts in the fabric if SAN zoning is set up accordingly. When you enable LUN masking, no LUNs are assigned to any backup host, and the user must create LUN masking groups and associate them with the backup hosts.
Figure 6-3 shows the management of a ProtecTIER environment. The ProtecTIER system includes several devices, such as tape drives and robots. Each device is assigned a LUN ID. The administrator manages two hosts, and each host has two HBA ports, where each HBA port has a unique worldwide name (WWN).
A host initiator is equivalent to a host port. The host initiator uses the port’s WWN for identification. By default, all the devices in the environment are visible to all the hosts. For security purposes, you must hide some of the devices from one of the ports. To accomplish this task, you must create a LUN masking group, and assign a host initiator and specific devices to that group. Performing this process ensures that the selected devices are only visible to the selected hosts.
Figure 6-3 LUN masking scenario
6.3.1 LUN masking methods and preferred practices
Use LUN masking to manage device visibility. LUN masking conceals specific devices (tape drives or robots) from the view of host initiators while enabling a selected host initiator group to view them. Several preferred practices for LUN masking are as follows:
Define host aliases to identify the host ports. When you define backup host aliases, use a practical naming scheme as in the following example:
 – hostname-FE0 (for front-end port 0)
 – hostname-P0 (for port 0)
With more than two backup hosts, use LUN masking to load balance ProtecTIER performance across multiple front-end ports.
Regardless of LUN masking, virtual drives are physically assigned to one front-end port, so backup hosts must be attached to that single port. For load balancing purposes, distribute drives across multiple front-end ports. If possible, distribute drives across all four front-end ports.
Use LUN masking to establish two or more front-end paths to a backup server for redundancy. For example, the following configurations are relevant:
 – In environments with up to four backup servers, you can dedicate a single front-end port to each backup server rather than using LUN masking, but you then lose load balancing across multiple front-end ports and path redundancy.
 – In environments where front-end ports are shared, and you want to prevent backup hosts from sharing, use LUN masking to isolate each backup host.
6.3.2 LUN masking configuration steps
Figure 6-4 shows the steps for a LUN masking configuration.
Figure 6-4 LUN masking configuration steps
Host initiator management
First, perform host initiator management by completing the following steps:
1. From ProtecTIER Manager, click VT → Host Initiator Management. A list of available host initiators is displayed (Figure 6-5).
2. Select one or more host initiators from the list, or manually add the host initiator by entering the appropriate WWN, as shown in Figure 6-5.
 
Maximum host initiators: You can define a maximum of 1024 host initiators on a ProtecTIER system.
Figure 6-5 Host initiator management
3. You can also assign an alias to the WWN by clicking Modify (Figure 6-6). Aliases help you more easily identify which host is related to the WWN. The ProtecTIER worldwide port names (WWNs) are found in the Host Initiator Management window.
Figure 6-6 Modifying a host initiator alias
LUN masking groups
Now that you defined a host initiator, you can create a LUN masking group. Figure 6-7 shows the LUN Masking Group window.
Figure 6-7 LUN Masking Group window
To create a LUN masking group, complete the following steps:
1. From ProtecTIER Manager, click VT → LUN Masking → Configure LUN Masking Groups.
2. At the LUN Masking Group pane, click Add and enter a name for the group.
3. Click Add in the “Selected node initiators” pane to add one or more host initiators to the group. The list of Host Initiators is displayed, and you can check the boxes of the necessary hosts.
4. Click Add in the “Library mappings” pane to select the library that contains the devices that you want to make visible to the hosts. Then, select the devices to assign to that group.
5. After you select all the necessary options, click Save Changes to create your LUN masking group.
6. If LUN masking is not enabled, the ProtecTIER Manager asks this question: “LUN masking is disabled. Would you like to enable it?” Click Yes if you are ready to enable it, or No if you do not want to enable it yet. Even if you click No, the LUN masking group that you created is saved.
You can create more LUN masking groups, or you can modify an existing group for adding or removing devices, libraries, or host initiators.
 
Important:
A maximum of 512 LUN masking groups can be configured per system.
A maximum of 512 drives can be configured per LUN masking group.
Each group must contain at least one host initiator and one device (tape drive or robot). Robots can be added as required.
A host initiator can belong to only one LUN masking group, but a group can contain multiple host initiators, and you can define multiple groups.
A device can belong to multiple LUN masking groups.
Reassigning LUNs
After you modify a LUN masking group, unwanted gaps might occur in the LUN numbering sequence.
For example, removing a device from an existing group causes gaps in the LUN numbering scheme if that device does not have the highest LUN number. As a result, the backup application might have trouble scanning the devices; if so, renumber the LUNs.
To reassign a LUN, complete the following steps:
1. From ProtecTIER Manager, click VT → LUN Masking → Configure LUN Masking Groups. The LUN Masking Group window opens.
2. Select one of the existing groups, and click Reassign LUNs at the bottom of the Select Devices pane.
3. The system displays the Reassign LUNs window, which has the following message (as shown in Figure 6-8):
You are about to renumber all the LUN values of the available devices in the group and all host connected must be rescanned
Figure 6-8 Reassigning LUNs
4. Click Yes to renumber. The LUN values are sequentially renumbered and all the devices in the Selected Devices pane are assigned new LUN numbers, sequentially, starting with zero.
Enabling or disabling LUN masking
LUN masking is disabled by default. When LUN masking is disabled, devices are accessible by all hosts that are zoned to the respective front-end ports. When LUN masking is enabled for the first time, all devices are masked (hidden) from all hosts. You can then create LUN masking groups to associate host initiators with specific VTL devices, and open paths between hosts and devices. You can also enable or disable LUN masking at any time.
To enable or disable LUN masking, complete the following steps:
1. From the ProtecTIER Manager, click VT → LUN Masking → Enable/Disable LUN Masking.
2. If no LUN masking groups are created, ProtecTIER Manager notifies you that if you enable the LUN masking feature without configuring LUN masking groups, the devices are hidden from the hosts. ProtecTIER Manager prompts you to confirm whether you want to proceed with this process.
3. When the Enable/Disable LUN masking window opens, select Enable LUN masking, and click OK, as shown in Figure 6-9.
Figure 6-9 Enabling/Disabling LUN masking
You can use this same procedure to disable LUN masking. After you enable or disable the LUN masking option, rescan the devices from the host systems. Rescanning retrieves the updated list of visible devices and their associated LUN numbers.
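On a Linux backup host, this rescan can be triggered through the standard sysfs scan interface. The sketch below only prints the rescan command for each FC HBA it finds; drop the echo (and run as root) to actually trigger the rescan. The host numbering varies by system.

```shell
# Dry run: print a SCSI bus rescan command for each HBA visible under
# /sys/class/scsi_host. Writing '- - -' to the scan file rescans all
# channels, targets, and LUNs on that host adapter.
for h in /sys/class/scsi_host/host*; do
  [ -e "$h" ] || continue   # glob unmatched: no SCSI hosts on this system
  echo "echo '- - -' > $h/scan"
done
```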
 
 
Important: Every modification to LUN masking in a ProtecTIER server might affect the host configuration and might require rescanning by the hosts.
 
