ProtecTIER File System Interface: General introduction
This chapter describes network configuration considerations for setting up your network with the IBM System Storage TS7600 ProtecTIER family File System Interface (FSI). For the general configuration process, review the relevant user’s guide for IBM TS7600 with ProtecTIER, for example User’s Guide for FSI Systems, GA32-2235. This chapter describes how to create Network File System (NFS) exports and Common Internet File System (CIFS) shares on ProtecTIER FSI, and how to connect them to your backup host.
This chapter describes the following topics:
ProtecTIER FSI network overview
File System Interface guidelines for NFS
File System Interface guidelines for CIFS
FSI file system scalability
 
Important: ProtecTIER GA Version 3.4 was released with only the Virtual Tape Library (VTL) interface support. File System Interface (FSI) support was added to ProtecTIER PGA 3.4 Version. For details, see the announcement letter:
The ProtecTIER FSI presents ProtecTIER as a network-attached storage backup and recovery target that can use the HyperFactor algorithm and ProtecTIER native replication bandwidth reduction techniques for storing and replicating deduplicated data. The ProtecTIER FSI interface is intended to be used for backup and restore of data sets by using a backup application.
ProtecTIER FSI supports various backup applications, such as IBM Spectrum Protect, Symantec NetBackup, and EMC NetWorker. A list of supported backup applications is in Appendix B, “ProtecTIER compatibility” on page 457.
Starting with ProtecTIER Version 3.2, support is provided for Windows based servers through the CIFS protocol, and starting with ProtecTIER Version 3.3, support is provided for UNIX clients through the NFS protocol. ProtecTIER emulates a UNIX or Windows file system behavior, and presents a virtualized hierarchy of file systems, directories, and files to UNIX NFS or Windows CIFS clients. These clients can perform file system operations on the emulated file system content.
You can use the ProtecTIER FSI to create multiple user file systems in a single ProtecTIER repository. When you create a user file system, the maximum size of the user file system is dynamically calculated by determining the total free nominal space in the repository and comparing it to the overall maximum user file system size of 256 TB. For example, if 180 TB of nominal space is free in the repository, a new user file system can be up to 180 TB; if more than 256 TB is free, the 256 TB per file system limit applies.
The size of all file systems shrinks proportionally if the deduplication ratio is lower than expected. If the deduplication ratio is higher than expected, you can extend the file system size up to the 256 TB limit by using ProtecTIER Manager.
The FSI interface of ProtecTIER for UNIX and Windows clients is supported on a single node. Dual node cluster support is currently not available. However, a single node can serve multiple CIFS and NFS exports in the same repository.
On the current release, exporting a single FSI share through CIFS or NFS protocol is mutually exclusive. To change the export type from NFS to CIFS, you must delete the NFS share definition before you export it through CIFS, and vice versa. Disabling the share definition alone is not sufficient.
 
 
Important: ProtecTIER FSI support is intended for storing backup images that are produced by backup applications, and not for primary storage deduplication. ProtecTIER performs best when sequential streams are delivered to ProtecTIER rather than random input/output (I/O).
5.1 ProtecTIER FSI network overview
This section provides an overview of the ProtecTIER FSI network configuration.
5.1.1 ProtecTIER network
ProtecTIER servers have several physical network ports. The number of ports varies based on the ProtecTIER model. Ports are used for management, replication, or file system-related operations from the hosts. Each port is assigned to one of these uses.
This configuration is achieved by assigning the physical ports to a virtual interface on the ProtecTIER server. The set of virtual interfaces on the ProtecTIER product includes external, replic1, replic2, fsi1, fsi2, and so on, up to fsi_n, as shown in Figure 5-4 on page 69. Each virtual interface can have one or more physical network ports assigned to it.
The default setup of the ProtecTIER product assigns all of the FSI physical ports to a single virtual interface by using round robin load balancing. This setup can be changed as needed.
If more than one physical port is assigned to a virtual interface, be sure to configure the bonding methodology in this interface to align with the network environment to fulfill the wanted behavior in terms of performance and redundancy. For more information about the bonding methods that are available with the ProtecTIER product, see Chapter 3, “Networking essentials” on page 37.
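On the ProtecTIER server itself, the bonding method is configured through ProtecTIER Manager. If you also bond the ports on a Linux backup server, you can verify the host side of the configuration with standard operating system tools. The following is a minimal sketch, assuming a hypothetical bond device that is named bond0 on the backup server:
# Show the bonding mode, the link state of each member port, and the active member
cat /proc/net/bonding/bond0
# List the interfaces that are enslaved to the bond
ip link show master bond0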
5.1.2 Network configuration considerations
This section describes network configuration considerations and preferred practices for FSI. The following guidelines are valid for CIFS and NFS configurations. Because ProtecTIER IP replication with FSI is realized on a file share level, you should create a dedicated CIFS share or NFS export for each backup server that you use with ProtecTIER FSI:
Make sure that the backup application runs in the context of a user that has read and write permissions on the FSI share or export (a quick verification sketch follows this list).
You must have at least two different network subnets to separate the ProtecTIER management IP interface from the ProtecTIER file system (application) interfaces.
For FSI workloads, you must have sufficient Transmission Control Protocol/Internet Protocol (TCP/IP) infrastructure for the incoming and outgoing traffic of backup servers. Ensure that you do not suffer from network bandwidth congestion.
If bonding of network adapters is implemented, it must be implemented on all involved network devices, that is, the ProtecTIER server, the backup server, and the network switches. Enabling bonding only on the ProtecTIER server might not be enough to achieve the best results.
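As a quick verification of the first guideline, you can confirm from the backup server that the backup application's user can write to and delete from the mounted share or export. The following is a minimal sketch; the user name tsminst1 and the mount point /mnt/ptfsi are hypothetical:
# Both commands must complete without a "Permission denied" error
sudo -u tsminst1 touch /mnt/ptfsi/.pt_write_test
sudo -u tsminst1 rm /mnt/ptfsi/.pt_write_test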
5.1.3 Connecting a ProtecTIER server to the network
To better understand the requirements that arise from using the FSI model of ProtecTIER in your network environment, look at the potential connections that you will deal with during the initial deployment. The diagrams displayed in this section reference the newest TS7650G DD6 model.
As shown in Figure 5-1, this example uses the connection (labeled 1) to attach the ProtecTIER server to the customer network. Through this connection, you use the ProtecTIER Manager graphical user interface (GUI) and connect to the ProtecTIER server for management and configuration purposes.
The ProtecTIER IP replication feature shows two Ethernet connections (labeled 21 and 22). By default, the replication workload is balanced across both ports.
Figure 5-1 ProtecTIER network interfaces for customer and replication network on Gateway model
To use the FSI feature on a ProtecTIER Gateway, you must prepare at least one dedicated subnet (and as many as four when using 1 GbE cards) for the backup server traffic to the ProtecTIER server. The data that is transferred from the backup server to the FSI interface must not use the customer network IP interface or the replication interfaces. For details about the cabling for other ProtecTIER models, review the chapter about hardware planning for the IBM TS7600 ProtecTIER system in IBM System Storage TS7600 with ProtecTIER Version 3.3, SG24-7968.
Figure 5-2 shows the 1 GbE interface model of the ProtecTIER server. The interfaces that are labeled 13, 14, 15, and 16 are available for FSI traffic between the backup servers and the ProtecTIER server.
Figure 5-2 ProtecTIER network interfaces for FSI traffic from backup servers on a Gateway model for FSI 1 GbE
Now that you see all of the important interfaces for potential network traffic, you can review the configuration through the ProtecTIER Manager GUI.
To configure all networking-related aspects of the ProtecTIER server, open the ProtecTIER Manager GUI and click Node → Network configuration (Figure 5-3).
Figure 5-3 ProtecTIER Manager GUI network configuration window
The Network configuration window (Figure 5-4 on page 69) provides options to change the networking parameters.
Figure 5-4 ProtecTIER network configuration window
When you perform ProtecTIER network configuration, you can assign a physical device to a ProtecTIER virtual device for all interfaces, even if the virtual interface contains only one physical device.
 
Tip: Optionally, you can also configure your network from the ProtecTIER Service menu directly in the ProtecTIER server.
Multiple ways exist to set up your networking to ensure that you have an HA configuration, and that you distribute the load across all available resources. The default setup is a single virtual interface, fsi1, which consists of all four physical 1 Gb ports (Table 5-1).
Table 5-1 DD5 1 GbE Ethernet default port assignments - Gateway FSI 1 GbE
Network types   Virtual interfaces                 Assigned physical ports
                Network IP    LB    Subnet         Name   Speed   Slot      Port
External        External IP   RR    1              Eth2   1 GbE   Onboard   -
Application     fsi1          RR    2              Eth3   1 GbE   1         13
                                                   Eth4   1 GbE   1         14
                                                   Eth5   1 GbE   1         15
                                                   Eth6   1 GbE   1         16
Replication     replic1       N/A   3              Eth0   1 GbE   Onboard   -
                replic2       N/A   4              Eth1   1 GbE   Onboard   -
 
Separation of networks: Again, you must separate your external customer management network from your backup FSI network. An important step is to configure the ProtecTIER network so that each virtual interface (IP) is on a different network and preferably a different VLAN in a multitier network infrastructure.
If you use the configuration that is shown in Table 5-1 on page 69, all of your backup servers connect to the IP of the ProtecTIER application virtual interface fsi1. The default load-balancing (LB) method of round robin (RR) mode 1 works without special network infrastructure hardware requirements. This LB mode permits, depending on your network infrastructure, a unidirectional bandwidth increase.
This configuration means that, from the perspective of a single data stream that flows outbound from a ProtecTIER server, you can potentially benefit from up to 4 Gb of bandwidth, which is essentially the combined throughput of all four aggregated interfaces.
It also means that restores from the ProtecTIER server to your backup server can benefit from this aggregated outbound bandwidth. For further details about port aggregation, see Chapter 3, “Networking essentials” on page 37.
Backing up your data creates a data stream that is directed toward the ProtecTIER server. Single data streams directed toward a ProtecTIER server do not benefit from the potential bandwidth increase when you use the round-robin LB method in this example.
To fully use the ProtecTIER server resources in this configuration, you must use multiple backup servers that back up to their respective file systems on the ProtecTIER server. To further optimize the potential throughput of single backup server environments, you must understand the link aggregation methods that can be used for load balancing and increasing throughput, as listed in Table 3-1 on page 42.
5.1.4 Replication
You can use ProtecTIER to define replication policies that replicate a file system's directories, and all the objects that these directories contain, recursively to remote ProtecTIER repositories without any disruption to the operation of the file system as a target for backup. You can define up to 64 source directories per replication policy, and up to three remote ProtecTIER destinations.
The replicated data in the remote destination can be easily used to restore data in the case of a Disaster Recovery (DR), or in the case of a DR test (without any interruption to the backup and replication procedures).
The ProtecTIER system must be able to continuously track all of the changes that are made to a directory, or to a set of directories, that is defined in a replication policy. Therefore, do not disable a replication policy unless the policy is no longer considered relevant.
If maintenance is scheduled for the network that is used for replication, a possibility (although not mandatory) is to suspend the replication to a specific destination. Suspending replication enables the ProtecTIER system to continue supervising all of the changes, but it does not attempt to send the replication data through the network for the time that is defined by the suspend operation. The suspend operation is limited in time, with a maximum suspend time of 72 hours.
If a policy is disabled for some reason, a new Replication Destination Directory (RDD) must be defined to re-enable the policy. The ProtecTIER system does not need to replicate all of the data from scratch if the old RDD is not deleted; it needs to create only the structure and metadata in the new RDD. Therefore, do not delete the old RDD until at least a new cycle of replication to the new RDD is complete.
5.1.5 Disaster recovery: Test
Use the ProtecTIER cloning function for DR testing in an FSI environment. Cloning creates a space-efficient, writable, and point-in-time copy of the data without disruption to the ongoing replications and recovery point objective (RPO). The DR test can be performed on the cloned data while the source repository continues replicating data without modifying any data on the cloned copy.
5.1.6 Disaster recovery: Event
If there is a real DR event where the primary repository that owns the backup data is temporarily or permanently down, the data can be restored from the replicated copy. If you want to do new backups at the DR ProtecTIER system during the DR event, then you must take ownership of the RDD to have write privileges.
Taking ownership of an RDD means that the replication directory can be accessed through shares/exports with read/write permissions. After an RDD is modified to be read/write accessible, the source repository can no longer replicate data to the modified RDD. The modified RDD now becomes a “regular” directory, and can be used as a source for replication. It can also have shares that are defined to it with writing permissions.
For more information about this procedure, see the chapter about native replication and disaster recovery in IBM System Storage TS7600 with ProtecTIER Version 3.3, SG24-7968.
5.1.7 General FSI suggestions
Disable compression, deduplication, encryption, and multiplexing features in the backup application when you use ProtecTIER as the backup target, as shown in Table 5-2.
Table 5-2 Suggested settings
Parameter       Value in backup application
Compression     Disable
Deduplication   Disable
Encryption      Disable
Multiplexing    Disable
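How these settings are named depends on the backup application. As one illustration only, on an IBM Spectrum Protect client for UNIX, client compression and client-side deduplication can be switched off in the dsm.sys server stanza. The following is a minimal sketch; the stanza name and server address are hypothetical, and encryption stays disabled as long as no include.encrypt statements are defined:
* Hypothetical dsm.sys stanza for a Spectrum Protect client that writes to a ProtecTIER FSI target
SERVERNAME        PTFSI_BACKUPS
COMMMETHOD        TCPIP
TCPSERVERADDRESS  backupserver.example.com
* Let ProtecTIER deduplicate the data: send it uncompressed and without client-side deduplication
COMPRESSION       NO
DEDUPLICATION     NO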
5.2 File System Interface guidelines for NFS
This section provides an introduction to, and preferred practices for, configuring the ProtecTIER FSI for NFS protocol. The ProtecTIER FSI for NFS emulates a Network File System that is accessed by UNIX Operating Systems. The FSI-NFS file system presents a virtualized hierarchy of file systems, directories, and files to UNIX NFS clients. The ProtecTIER FSI is intended to be used for backup and restore of data sets by using a backup application.
 
Important: ProtecTIER GA Version 3.4 was released with only the Virtual Tape Library (VTL) interface support. File System Interface (FSI) support was added to ProtecTIER PGA 3.4 Version. For details, see the announcement letter:
5.2.1 ProtecTIER NFS authentication and security management
As of ProtecTIER Version 3.3, FSI-NFS exports are implemented with NFS protocol Version 3. Access to the export is granted either for a single host or a host group. Before guiding you through the process of creating and mounting an FSI-NFS export, this section describes the most important options that you must specify when you use the create NFS export wizard:
Port security
Root squash/no root squash
 
Port security
To configure port security, go to the Properties tab of the NFS export wizard. The Port Security option is under the Details section (Figure 5-5 on page 73).
In the Details section, select whether you want to enable NFS clients to connect from ports higher than 1023. Port numbers 0 - 1023 are the well-known ports, also known as system ports; on the client operating system, only root users and system services can open these ports. Ports 1024 and higher are known as user ports.
Keep the default setting and leave the check box selected, as shown in Figure 5-5.
Figure 5-5 Create an NFS export wizard: Port Security definition
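On the client side, the Linux NFS client uses a reserved source port (below 1024) by default, which works regardless of how the Port Security option is set. If the export is configured to accept client ports above 1023, you can optionally add the noresvport mount option so that the client uses a non-privileged source port. The following is a minimal sketch based on the Linux mount options that are shown later in Example 5-6; the server address, export path, and mount point are placeholders:
mount -o rw,soft,intr,nolock,noresvport,timeo=3000,nfsvers=3,proto=tcp <PTServerIPAddress>:/<ExportPath> /<mountpoint>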
Root squash
Enabling this option prevents root users on the NFS client systems from having root privileges on the NFS export that is provided by ProtecTIER. If you do not enable root squash, any root user of a remote system might delete any user data on the mounted NFS export because root can delete any data of foreign users.
To prevent this action, the root squash option maps the root user ID 0 (UID) and group ID 0 (GID) to a customized UID and GID. With this mapping, the remote root user cannot delete or modify any data other than the data that is created with the customized UID. Typically, the root squash function maps the UID to nfsnobody by default, but in the ProtecTIER implementation, the UID of that user is higher than the value that you are allowed to enter in the wizard's field.
Alternatively, the no_root_squash option turns off root squashing.
To select the User ID Mapping for root squash or no root squash, use the Add Host Access window shown in the Create an NFS export wizard (Figure 5-6).
Figure 5-6 Create NFS Export wizard - User ID Mapping definition
Mapping the root user to a UID that does not exist on the ProtecTIER system is possible but not recommended. Instead, map it to an existing user such as nobody. The nobody user has limited permissions and is not allowed to log in to the system. Alternatively, you can create a user and a group with limited permissions and map the root users of the client host systems to these IDs.
Example 5-1 shows how to use the grep command to determine the UID and the GID of the user nobody. This user exists in the ProtecTIER system. You must log on to the ProtecTIER CLI using SSH to query the user account information.
Example 5-1 Determine the user ID and group ID of user nobody on the ProtecTIER server
[root@BUPKIS]# grep nobody /etc/passwd
nobody:x:99:99:Nobody:/:/sbin/nologin
nfsnobody:x:4294967294:4294967294:Anonymous NFS User:/var/lib/nfs:/sbin/nologin
[root@BUPKIS]# grep nobody /etc/group
nobody:x:99:
The output of the commands in Example 5-1 shows that the numeric value for the user and group are both 99. You can use this number to configure the root user ID mapping, or to create a customized user account and a dedicated group to map one or more root accounts of the remote NFS clients.
If you decide not to use the existing nobody account, you can create your own customized group and several users, as shown in Example 5-2.
Example 5-2 Create a customized group and user
[root@BUPKIS]# groupadd -g 65536 nfsanonymous
[root@BUPKIS]# useradd -u 65536 -g nfsanonymous -M -s /sbin/nologin -c "Anonymous PT NFS client user" nfsanonymous
5.2.2 Understanding root squash
The basics of NFS root squash and no root squash are explained in “Root squash” on page 73. The following section demonstrates the effects of turning root squash on or off.
Example 5-3 shows a directory listing of a ProtecTIER NFS export. The file1 file was created by a root user. Usually, the user ID of root is 0, but because root squash was turned on when the NFS export was defined, the root user ID is mapped to a defined UID and GID (in this example, user ID 65536 and group ID 65536). The file2 file was created by the tsminst1 user, which belongs to the tsmsrvrs group.
Example 5-3 Directory listing on an NFS share
[tsminst1@Amsterdam thekla_tsm6]$ ls -ltrh
total 1.0K
-rw-r--r--. 1 65536 65536 12 Nov 7 02:07 file1
-rw-r--r--. 1 tsminst1 tsmsrvrs 12 Nov 7 02:08 file2
When root squash is enabled, the root user loses the authority to delete files that belong to any other user ID than the root squash user ID. In this example, the root user is not allowed to delete files of tsminst1 anymore. Turning on root squash is an important security feature. It prevents the possibility that any root user of any host can mount the export and delete data that belongs to other systems and users.
Example 5-4 demonstrates that the root user is not allowed to delete file2, which belongs to tsminst1. The delete command fails with the error message Permission denied.
Example 5-4 Deleting files with root squash enabled in the NFS export definition
[root@Amsterdam thekla_tsm6]# rm file2
rm: remove regular file `file2'? y
rm: cannot remove `file2': Permission denied
To demonstrate the power of the root user without the root squash function enabled, we modified the NFS export definition and disabled root squash. In contrast to Example 5-4, the root user can now delete file2 even though the file is owned by tsminst1. The result of the delete operation is shown in Example 5-5. The file2 file was deleted without any error.
Example 5-5 Deleting files with root squash disabled in the NFS export definition
[root@Amsterdam thekla_tsm6]# rm file2
rm: remove regular file `file2'? y
[root@Amsterdam thekla_tsm6]# ls -ltr
total 1
-rw-r--r--. 1 65536 65536 12 Nov 7 02:07 file1
5.3 File System Interface guidelines for CIFS
This section provides a general introduction to, and preferred practices for, configuring the ProtecTIER FSI for CIFS. The ProtecTIER FSI emulates Windows file system behavior and presents a virtualized hierarchy of file systems, directories, and files to Windows CIFS clients. Clients can perform all Windows file system operations on the emulated file system content. The ProtecTIER FSI interface is intended to be used for backup and restore of data sets using a backup application.
This section describes how to create a CIFS share on ProtecTIER, how to connect to a CIFS share, and preferred practices.
 
Important: ProtecTIER GA Version 3.4 was released with only the Virtual Tape Library (VTL) interface support. File System Interface (FSI) support was added to ProtecTIER PGA 3.4 Version. For details, see the announcement letter:
5.3.1 Mounting the NFS export in a UNIX system
When working with FSI, properly mounting the export in your host system is important. Example 5-6 shows the parameters to set when using a UNIX system.
Example 5-6 Suggested parameters for using a UNIX system
Linux: mount -o rw,soft,intr,nolock,timeo=3000,nfsvers=3,proto=tcp <PTServerIPAdd>:/<ExportPath> /<mountpoint>
Solaris 10: mount -o rw,soft,intr,llock,timeo=3000,vers=3,proto=tcp <PTServerIPAddress>:/<ExportPath> /<mountpoint>
Solaris 11: mount -o rw,soft,intr,llock,timeo=3000,vers=3,proto=tcp <PTServerIPAddress>:/<ExportPath> /<mountpoint>
AIX: mount -o rw,soft,intr,llock,timeo=3000,vers=3,proto=tcp,rsize=262144,wsize=262144 <PTServerIPAddress>:/<Exportpath> /<path>
 
[root@flash ~]# mount -o rw,soft,intr,nolock,timeo=3000,nfsvers=3,proto=tcp 10.0.35.129:/flash /mnt/flash
[root@flash ~]# mount
/dev/mapper/vg_flash-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/sda2 on /boot type ext4 (rw)
/dev/sda1 on /boot/efi type vfat (rw,umask=0077,shortname=winnt)
/dev/mapper/vg_flash-lv_home on /home type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
10.0.35.129:/flash on /mnt/flash type nfs (rw,soft,intr,nolock,timeo=3000,nfsvers=3,proto=tcp,addr=10.0.35.129)
backup_utility on /mnt/fuse type fuse.backup_utility (rw,nosuid,nodev)
After it is mounted, the export can be used with your backup application.
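If you want the export to be mounted again automatically after the backup server is rebooted, you can add a matching entry to /etc/fstab. The following is a minimal sketch for Linux that mirrors the mount options from Example 5-6; the server address, export path, and mount point are placeholders:
# /etc/fstab entry for the ProtecTIER FSI export (Linux)
<PTServerIPAddress>:/<ExportPath>  /<mountpoint>  nfs  rw,soft,intr,nolock,timeo=3000,nfsvers=3,proto=tcp  0 0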
5.3.2 ProtecTIER authentication and user management
The ProtecTIER product supports two modes of authentication and user management in a CIFS environment:
Active Directory
Workgroup
In the Active Directory mode, the ProtecTIER system joins an existing domain that is defined by the user. The domain users can work with the file systems if they are authenticated by the Active Directory server. In Workgroup mode, the ProtecTIER system manages the users that can access the file systems. In Workgroup mode, you define the users through the ProtecTIER Manager GUI.
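For example, on a Windows backup server you can map the CIFS share to a drive letter before you point the backup application at it. The following is a minimal sketch; the server address, share name, domain, and user name are placeholders, and the trailing asterisk prompts for the password:
C:\> net use Z: \\<PTServerIPAddress>\<ShareName> /user:<DOMAIN>\<UserName> *
C:\> net use
In Workgroup mode, specify one of the users that you defined through the ProtecTIER Manager GUI instead of a domain account.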
Active Directory and user IDs
The ProtecTIER system assigns user IDs to Windows users that access the system through CIFS. The only control that you must set is the range of user IDs (UIDs) that are generated. Set a range that does not overlap with the UIDs that are used for existing UNIX users in the organization.
Active Directory realm
One of the parameters that you must provide to the ProtecTIER system when you set the authentication mode to Active Directory is the realm. In most cases, the name of the realm is the DNS domain name of the Active Directory server. The realm must always be in uppercase characters and must not be a single word (for example, add .COM or .LOCAL to the domain name).
Some helpful commands can be used to define the realm:
From the Active Directory server, run the following command:
C:\> ksetup
default realm = RNDLAB02.COM ---------------> The realm
From the ProtecTIER server, run the following command:
net ads lookup -S IP_Address_of_ADServer
Example 5-7 shows output for the net ads lookup command.
Example 5-7 Output of the net ads lookup command
net ads lookup -S 9.148.222.90
output:
Information for Domain Controller: 9.148.222.90
Response Type: SAMLOGON
GUID: 9a1ce6f2-17e3-4ad2-8e41-70f82306a18e
Flags:….
Forest: rndlab02.com
Domain: RNDLAB02.COM ---------------> The realm
Domain Controller: RNDAD02.rndlab02.com
Pre-Win2k Domain: RNDLAB02
Pre-Win2k Hostname: RNDAD02
Server Site Name : Default-First-Site-Name
Client Site Name : Default-First-Site-Name
5.4 FSI file system scalability
An FSI file system can scale to the following values:
Maximum virtual file systems per repository: 128; 48 on SM2
Maximum nominal virtual file system size: 256 TB
Maximum files per virtual file system: 1 million
Maximum files per repository: 16 million; up to 3 million on SM2
Maximum “open files” per replication (streams): 192; 64 on SM2