Application considerations and data types
This chapter describes guidelines for the settings and parameters that can be modified in your applications, and for specific data types, to achieve optimal performance and deduplication factoring ratios.
This chapter describes the following topics:
IBM Domino
Microsoft Exchange
Microsoft SQL Server
DB2
Oracle
SAP
Readers of the relevant sections should be familiar with the backup and restore concept of the managed application or data type. Therefore, this chapter does not provide steps for configuring backup applications.
 
Notes:
ProtecTIER GA Version 3.4 was released with only Virtual Tape Library (VTL) interface support. File System Interface (FSI) support was added in the ProtecTIER V3.4 PGA release. For details, see the announcement letter:
Beginning with ProtecTIER V3.4, the OpenStorage (OST) interface is not supported.
Beginning with version 7.1.3, Tivoli Storage Manager was rebranded to IBM Spectrum Protect. The scenarios in this chapter were conducted with a version prior to 7.1.3, so this chapter uses the name Tivoli Storage Manager for legacy and reference purposes.
20.1 IBM Domino
This section describes the settings and parameters that should be modified in IBM Domino environments to enable the optimal factoring for the ProtecTIER product.
20.1.1 Common server
IBM Domino employs a common email or application server for several members of the company or network. Clients usually run backup policies from the common server (the Domino server) that stores all the data on the physical or virtual tape.
Domino servers in enterprise or secured environments are configured in application clusters. In contrast to server-based clusters, the shared storage resources that are assigned to the application are available on both (or more) cluster nodes simultaneously. Access to them is fully controlled by the clustered applications, in this case, the Domino servers.
A Domino mail environment typically has a configuration of active-active clusters, where each Domino server is always active only on a dedicated node of the dual-node cluster and never fails over, and both Domino applications (that is, the applications running on the server cluster) control the same storage resources. However, only the dedicated portion of the databases (a set of mail files) is served at one time by a single node.
The common understanding of the application failover to the standby cluster node does not apply in Domino environments. If there is a node failure, the application that is running on a live node takes full management control over all portions (all mail files) of storage resources instantaneously.
From the Domino functional perspective, the following categories of Domino server installations exist:
An email server that supports IBM Lotus Notes, IMAP, POP3, SMTP, and WebMail access
(IBM iNotes®).
An application server where the Lotus Notes client provides the application run time.
A database server that offers Notes Storage Facility.
A web server that enables Lotus Notes clients to access the data through a web browser.
A directory server for authentication services (hub/gateway).
Instant messaging and web conferencing, also known as IBM Sametime®.
This section focuses on email, application, and database servers, which usually hold the largest amount of data in Domino server repositories. The other listed features are highly transactional services with small amounts of data, and are therefore not optimal candidates for ProtecTIER deduplication.
20.1.2 Existing backup and disk space usage
Running the existing backup commands and using the generally suggested methods causes a low factoring ratio even if the data change rate is low. These actions reduce the benefit of the ProtecTIER solution and increase disk space usage.
The root cause is the inherent compaction of the database, which reshuffles the data inside the Notes Storage Format (NSF) database files. Although this function reduces space usage from the perspective of Domino, it also changes the layout and data pattern of every NSF.
The next time that the ProtecTIER server receives blocks from these databases, they all look unique, so the factoring ratio is low. Ratios of 1:2 - 1:3 are possible in environments with Domino Version 7. However, running compaction that is based on the Domino DELETE operation is a preferred practice for Domino, so disabling it is not a solution.
Simply stated, compaction saves expensive primary storage on the Domino server and increases the performance of mailbox operations, especially in clustered environments, so no client wants to disable it.
Regarding deduplication efficiency, another factor to consider is the method that is used to store email attachments. Working documents are compressed and archived by one of the widely available compression tools, and are converted to various file formats (.zip, .rar, .tar, .7z, and others), which do not factor optimally. The same is true for media files in email attachments, such as compressed pictures (.jpg and .gif), movies (.mpg, .mov, and .avi), or music (.mp3).
20.1.3 Domino attachments and object service
Deduplication can be improved by using a feature in the Domino environment: Domino Attachment and Object Service (DAOS). DAOS removes all the email or application attachments from the Notes Storage Format (NSF) files and stores them separately in the server file system in a single occurrence as a Notes Large Object (NLO). This feature has been available since Domino Version 8.5.
See examples of a dedicated storage repository for NSF and NLO objects:
Figure 20-1 shows the location of mail files in Domino storage.
Figure 20-2 on page 299 shows the folder structure of DAOS attachments.
Figure 20-1 The location of mail files in Domino storage
Figure 20-2 shows another example of a dedicated storage repository for NSF and NLO objects.
Figure 20-2 The folder structure of DAOS attachments
DAOS divides the database objects into two categories of items:
Database items in .nsf files
Attachment items in .nlo files
The DAOS repository holds only one instance of each attachment, and the multiple NSF files that contain the relevant metadata links reuse it. DAOS reduces the effect of using the DELETE option because the DAOS layout does not hold each attachment multiple times. This arrangement mitigates the compaction effect, and the change to NLO files is marginal.
Backing up the NLO files in the DAOS repository can be done either while the Domino server is down or while it is up and running. The backup does not require any Domino API-based utilities. After the NLO files are initially written, Domino never modifies their contents, so the backup mechanism does not need to work around file-write activity. NLO files can be backed up like any other generic files in the file system. Only the NLO files that are complete, and not in the process of being written or renamed, must be backed up.
Any files that are busy can be skipped until the next backup job runs. Most backup applications automatically skip files that they cannot read because of other activity.
 
Important: If Domino is running during a backup process (online backup), an important step is to first back up all NSF files before you proceed with the NLO backup because the metadata references in the NSFs are related to the newly detached NLO files.
In typical Domino environments, dedicated and independent processes are used to back up NSF and NLO files. The NLO file is just a flat file on the disk, but the NSF file is in database format, which means that different tools are used to back up each type of file (for example, Tivoli Data Protection for Mail, in contrast with the Tivoli Storage Manager backup-archive client). Typically, operating system flat file backups are not retained in the backup server for the same period as online backups of Domino NSF files.
 
Important: Ensure that the retention period of NLO file backups is at least as long as the longest retention period that is used for online backups of NSF files (monthly, quarterly, or yearly backups).
Domino can keep NLO files on disk for a period after the links to them become invalid and the NLO files are no longer needed. If this DAOS retention period is longer than the backup retention of NSF files in the backup server, the previous statement does not apply. This case is the only exception where backup retention of NLO files does not play a role.
In that case, a backup copy is never used for the restoration of mail files with detached data, because the relevant NLO file is still retained on the disk by DAOS. However, a minimal backup retention is still needed to protect data against a disaster or file system corruption.
The disk footprint savings with DAOS apply to the backup processing as well. The NLO files represent the static data that used to be in the NSF, and was backed up every cycle even though it had not changed. In a typical mail environment, a large reduction in the NSF footprint, plus a small amount of NLO data, translates almost directly into a reduction in the backup footprint. In addition to the duplicate data being eliminated, the mail file data is also separated into static and dynamic components.
By applying an incremental backup regimen to the static NLO data, only the NLO files that were created since the last backup cycle need to be processed. Those files typically represent a small amount of data compared to the entire set of NLO files.
In the incremental backup case, duplicate NLOs are not backed up again. Therefore, the space savings from DAOS are directly proportional to the number of duplicate NLOs seen in the environment, and the backup time savings is the product of the space that is saved and the backup throughput.
The ProtecTIER server greatly benefits from this behavior. We have seen factoring ratios 3 - 5 times higher than before DAOS is enabled.
20.1.4 Applying the DAOS solution
To assess the benefit of using DAOS, run the DAOS Estimator tool by issuing the DAOSest command at the Domino server console. Perform this action outside of office hours, because the estimation procedure affects server performance, especially on mail servers with hundreds to thousands of users.
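For example, the estimator is typically started as a server task at the Domino console. The following invocation is a sketch; verify the task name against your Domino release:
load daosest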
The histogram result of the estimation job is shown in Example 20-1 on page 301:
The top line shows the number of files.
The bottom line shows the specific size of the detachment threshold.
The middle line shows what the result represents in percentage of all attachments.
Example 20-1 Relative and percentage numbers of attachments of different size
==============================================================================
|66362 |19744 |33629 |35311 |161416|90946 |18550 | 3458 | 426 | 22 | 0 |
==============================================================================
| 0.0% | 0.1% | 0.3% | 0.6% | 7.5% |20.4% |31.5% |25.1% |11.4% | 3.2% | 0.0% |
==============================================================================
| 4k | 8k | 16k | 32k | 64k | 1MB | 5MB | 20MB | 100MB| 1GB | >1GB |
==============================================================================
The summary of the estimation process is also shown in a different form in Example 20-2 on page 303.
Before anything is done with DAOS, some prerequisites must be addressed. These prerequisites might not all apply in your situation, but it is important to verify them to ensure that any changes that are needed can be accommodated. These items are all requirements for enabling DAOS, and are not optional.
 
Consultation: Before you implement DAOS, consult with your Domino representative.
The prerequisites are as follows:
Disable SCOS Shared mail
The Single Copy Object Store (SCOS) is an older approach to attachment consolidation. This feature is not compatible with DAOS and must be disabled before you enable DAOS.
Disable NSFDB2
The NSFDB2 is a feature that you can use to store NSF data in DB2 running either on the same or a different server. This feature is also not compatible with DAOS and must be disabled on every NSF application that participates in DAOS.
Upgrade Domino server
Although DAOS was introduced in Domino V8.5.0, many important stability and performance improvements were made in subsequent releases. Therefore, all new DAOS deployments should use Domino 8.5.3 or later.
Enable transaction logging
The DAOS depends on transaction logging for correct operation. Because DAOS must update several locations simultaneously, it is important that all those updates succeed or fail (and are later rolled back) as a unit.
Adjust backup/restore processes
You must have reliable backup and restore procedures in a production environment to avoid the possibility of data loss. DAOS adds some complexity to the backup and restore process, so you must have a well-established backup and restore foundation for DAOS. Transaction logging introduces additional features that provide even better recovery options.
Upgrade Names.nsf design
The design of the Names.nsf file was changed to accommodate DAOS, and the Server document has a tab that covers the DAOS settings. Names.nsf must use the new pubnames.ntf template on all Domino servers that are enabled for DAOS.
20.1.5 ProtecTIER considerations
In contrast to the general suggestions for DAOS deployment on Domino servers, this section summarizes preferred practices or limitations that apply when the ProtecTIER deduplication solution for backup and recovery is in place:
Disable NLO compression.
The mail attachments that are represented by NLO files use a certain amount of disk space on the Domino server, so Domino administrators tend to enable one of the available compression techniques on the attachments in NSF files. If no attachment compression is enabled on the NSF files, or if Huffman compression is being used, enabling LZ1 compression (by running compact -ZU) can save a significant amount of disk space. However, compressed NLO files appear as unique data to the ProtecTIER server, so for the best factoring ratio, leave NLO compression disabled.
 
Tip: To achieve the best factoring ratio, avoid using the -ZU flag during compaction.
Disable design and data document compression.
Another Domino space-saving feature is design and data document compression. Enabling these compression forms can also save disk space, but they have a negative effect on deduplication results in the ProtecTIER server. The savings from these features are independent from DAOS and do not achieve the level of savings that you can make with the ProtecTIER solution.
 
Tip: Do not compress Design and Data documents.
Consider the compacting frequency.
Compacting less frequently is not popular with Domino administrators. However, it does not have a significant effect on the performance of a Domino server, mailboxes, or the storage capacity that is used by the Domino server. Complicating factors are backups with retention periods and the database file unique identifier, also called database instance identifier (DBIID).
When the DBIID is changed by running a compact job (which is the default action unless the -b parameter is specified), a Domino server always considers the database eligible for a full backup, regardless of whether the next backup job is scheduled as incremental only.
With a backup schedule that consists of weekly full backups (during the weekend) and daily incremental backup of databases with a changed DBIID (during weekdays), you should perform compact jobs on a weekly basis before the full backup occurs. This setup has a positive effect on the ProtecTIER factoring ratio.
 
Tip: Schedule compact jobs less frequently and ensure that they always complete before the next full (or selective) backup of NSF databases. Incremental backup does not back up Domino NSF files, unless the DBIID has changed.
Compact only selected databases.
Not all Domino databases must be compacted regularly. If the percentage of white space (unused space) in the database is, for example, less than 10% of the mailbox size, consider excluding that database from the compaction. The space savings of such a compact job are negligible, but your factoring ratio decreases. Use the -S 10 option to direct the compact task only to databases with 10% or more white space. The database DBIID still changes, unless the -b option is used.
 
Tip: Do not compact databases that use storage space efficiently.
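As a sketch of these suggestions at the Domino server console (the mail directory name is an assumption; adjust it to your environment), the -S 10 option limits compaction to databases with at least 10% white space, and -b recovers space in place without changing the DBIID:
load compact mail -S 10 -b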
Disable encryption of attachments.
When access to server resources is restricted to responsible personnel only and there is minimal or no risk of data exposure, Domino administrators should disable encryption on detached email attachments (NLO files). Enabled encryption has a negative effect on the ProtecTIER factoring ratio, because each block of data that is sent to the ProtecTIER server appears as a unique block of data. Although encryption is enabled by default, you can disable it by adding the following parameter to the Notes.ini file:
DAOS_ENCRYPT_NLO=0
The setting cannot be changed retroactively, and the only way to remove encryption from an existing DAOS installation is to completely disable DAOS.
 
Encryption: Encryption has a negative effect on factoring ratios. Disable it if possible.
Define the appropriate thresholds of the DAOS process.
If the attachment is larger than the minimum participation size, it is stored in the DAOS repository. If it is smaller, it is still stored in the NSF, as it would be without the DAOS feature enabled.
Choosing a size that is too large results in too few attachments being stored in DAOS (low yield), which reduces the savings that DAOS can offer and the ProtecTIER product can benefit from. Conversely, choosing too small of a size can result in a high yield, resulting in an unmanageable number of files in the DAOS repository. The statistics in Example 20-2 show the DAOS minimum size versus the number of NLOs and disk space that is required.
Example 20-2 DAOS minimum size versus the number of NLOs and disk space
0.0 KB will result in 429864 .nlo files using 180.7 GB
4.0 KB will result in 363502 .nlo files using 136.6 GB
8.0 KB will result in 343758 .nlo files using 130.5 GB
16.0 KB will result in 310129 .nlo files using 128.2 GB
32.0 KB will result in 274818 .nlo files using 119.4 GB
64.0 KB will result in 113402 .nlo files using 110.4 GB
1.0 MB will result in 22456 .nlo files using 85.8 GB
5.0 MB will result in 3906 .nlo files using 47.9 GB
20.0 MB will result in 448 .nlo files using 17.6 GB
100.0 MB will result in 22 .nlo files using 3.8 GB
Look for a value that yields about 80 - 90% of the theoretical maximum of the DAOS repository size. Although that value might sound low, it is generally the best trade-off between the DAOS benefits and the resulting number of files.
 
Hint: Determine the appropriate minimum size at which attachments are offloaded from the NSF to NLO files.
20.1.6 Preparing Domino databases for DAOS
The task of preparing Domino databases for DAOS should be performed only once. To accomplish this task, complete the following steps:
1. Depending on your operating system, choose the most appropriate procedure:
 – If the Domino server is running Windows, click Start → Programs → Lotus Applications → Lotus Domino Server.
 – If the Domino server is running UNIX, enter the following command in the command-line interface (CLI):
/opt/lotus/bin/server
2. Double-click nlnotes.exe, and go to the workspace window.
3. Browse for the names.nsf file. The location is usually E:\Lotus\Domino\Data.
4. Click Configuration → Servers → All Server Documents. Then, open the document that is related to your Domino server.
5. Double-click the page to change it to edit mode. Select the Transactional Logging tab.
6. Set the following parameters:
 – Log path: logdir
 – Logging style: Circular
 – Maximum log space: 512 MB
7. Save your parameters and close the window.
8. Shut down the Domino server by entering the following command at the CLI:
exit <password>
9. Add the following line to the notes.ini file:
CREATE_R85_DATABASE=1
10. Start the Domino server again and use the password.
 
Starting time: For the initial startup sequence after you make these changes, it might take several minutes for the start sequence to run.
11. Complete steps 5 - 7 to edit the server document again. Open the DAOS tab. You might need to scroll to the right to see the tab.
12. Update the following parameters:
 – Store Attachments in DAOS: ENABLED
 – Minimum size: 4096
 – DAOS base path: daos
 – Defer deletion: 30 days
13. Restart the Domino Server by entering the following command at the CLI:
restart server [password]
After the next compaction, the DAOS directory is created in the Domino data directory. That directory contains entries of the following form:
0001/<really_long_name>.nlo
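To verify that DAOS is active after the restart, you can query the DAOS manager from the server console. The following command is a sketch; confirm it against your Domino release:
tell daosmgr status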
 
20.2 Microsoft Exchange
This section describes the suggested settings for the Microsoft Exchange (Exchange) environment to improve the backup throughput and the factoring ratio of the ProtecTIER server. The examples that are used in this section are based on IBM Tivoli Storage Manager, but the suggested settings apply to most enterprise backup applications. Some of the settings might not be available in other backup applications. Contact the backup application provider for additional information.
20.2.1 Defragmentation
Defragmentation is commonly used in the Microsoft Exchange environment to recover the disk efficiency of fragmented disks. The defragmentation process rearranges the data that is stored on the disk and creates continuous storage space. There are two types of defragmentation processes: online defragmentation and offline defragmentation.
Online defragmentation
The online defragmentation process removes objects that are no longer being used while Exchange databases remain online. Before Microsoft Exchange 2010, the online defragmentation ran as part of daily Mailbox database maintenance, although this Mailbox database maintenance can be scheduled to run at different times.
As of Exchange 2010, online defragmentation is separated from the Mailbox database maintenance process, and it runs continuously in the background. Additional details about database defragmentation are available at the following web page:
Offline defragmentation
Offline defragmentation is a manual process that creates a database file and copies database records without the white space from the original database file to the newly created database file. When the defragmentation process is complete, the original database is removed and the new database file is renamed as the original.
Offline defragmentation is not part of regular Mailbox database maintenance. It can be done only when the Mailbox database is in an offline state, and it requires a large amount of storage space, because both the original database file and the newly created database file must coexist on the disk during the defragmentation process.
20.2.2 Suggestions for Microsoft Exchange
The following list includes the suggested processes and settings of the backup applications to optimize the performance and factoring ratio of the ProtecTIER server:
Perform a daily full backup rather than daily incremental backups where only transaction logs are backed up.
Create one backup job for each database (or for each storage group) without multistreaming to keep similar data blocks in the same stream.
Create concurrent backup jobs if there is more than one database (or more than one storage group) in the Exchange servers to improve overall backup throughput. Example 20-3 shows how to create different backup jobs for different databases in Tivoli Storage Manager. Remember to increase the number of mount points for the client node if multiple databases are housed in one Exchange server (see the sketch after this list).
Example 20-3 Create multiple backup jobs for different databases with Tivoli Storage Manager
TDPEXCC BACKup <Storage Group 01/ Mailbox Database 01> full
TDPEXCC BACKup <Storage Group 02/ Mailbox Database 02> full
TDPEXCC BACKup <Storage Group 03/ Mailbox Database 03> full
Disable compression and encryption in Exchange databases and backup applications.
If personal archive files (.pst) are backed up, avoid enabling compression and encryption whenever possible.
Configure a longer interval for an online defragmentation process, for example, reschedule the daily database maintenance to be on a weekly basis.
Consider a LAN-free backup that enables data to be sent directly to storage devices.
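As noted above, the client node must be allowed enough mount points to run concurrent jobs. The following Tivoli Storage Manager administrative command is a sketch (the node name and value are assumptions; set MAXNUMMP to at least the number of concurrent backup jobs):
UPDATE NODE exchsrv01 MAXNUMMP=4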
20.2.3 Microsoft Exchange 2010
Because online defragmentation is moved out of the daily maintenance process and runs continuously in the background, there is no option to disable or schedule it. This situation impacts the factoring ratio. However, we do expect deduplication for Exchange 2010 because Single Instance Storage (SIS) is no longer supported in Exchange 2010. You can find more information about SIS in the Microsoft Exchange team blog at the following web page:
20.3 Microsoft SQL Server
This section describes the suggested settings for the Microsoft SQL environment to improve the backup throughput and factoring ratio of the ProtecTIER server. The examples that are used in this section are based on IBM Tivoli Storage Manager, but the suggested settings apply to most enterprise backup applications. Some of the settings might not be available in other backup applications. For more information, contact the backup application provider.
20.3.1 Integrating the ProtecTIER server with Microsoft SQL Server backup
A ProtecTIER server can be integrated with traditional backup applications, or with the native SQL server backup utility to back up the Microsoft SQL server.
To back up a Microsoft SQL Server with traditional backup applications, such as Tivoli Storage Manager (as of version 7.1.3, rebranded to IBM Spectrum Protect), the ProtecTIER product can be deployed as a Virtual Tape Library (VTL) or as a File System Interface (FSI) to work in conjunction with the backup applications.
To back up the Microsoft SQL Server with the native SQL server backup utility, the ProtecTIER product can be used as CIFS shares through FSI deployment. This section describes the suggested ways to integrate the ProtecTIER server with different backup methods.
Using native SQL Server backup
You can back up the Microsoft SQL Server with the native SQL server backup and restore utility, where data files can be backed up directly to backup media without using third-party backup applications. The backup media can be disk or tape devices. Most administrators choose to back up to disk rather than to tape devices because the native SQL server backup does not have a tape media management capability similar to other backup applications.
The ProtecTIER product can be deployed as an FSI that provides CIFS shares to a Microsoft SQL server, and the native SQL server backup can use the CIFS share as the destination (Figure 20-3).
Figure 20-3 Use the ProtecTIER CIFS share as the destination of the SQL native backup
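As a sketch of this setup, assuming the FSI CIFS share is exported as \\protectier\sqlbackup (the share, database, and file names are hypothetical), the native backup can target the share with a UNC path directly from Transact-SQL:
BACKUP DATABASE SalesDB
TO DISK = '\\protectier\sqlbackup\SalesDB_full.bak'
WITH INIT, NAME = 'SalesDB full backup';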
Using third-party backup applications
Most backup applications support Microsoft SQL backup through an SQL backup agent, for example, IBM Tivoli Storage Manager Data Protection for Microsoft SQL Server. Backup applications use the ProtecTIER server as tape devices, file devices, or disk storage as backup destinations during the backup server configuration. For details about how to integrate the ProtecTIER server with different types of back-end storage, see Part 2, “Back-end storage subsystems” on page 97.
20.3.2 Index defragmentation
The Microsoft SQL Server maintains indexes to track table updates, and these indexes can become fragmented over time. Heavily fragmented indexes might affect database query performance, so Microsoft SQL Server provides a defragmentation feature that reorders the index rows in contiguous pages.
The defragmentation process rebuilds the indexes by compacting the index pages and reorganizing the index rows. This process results in the indexes being seen as new data blocks in backup streams, which can affect the deduplication process that identifies unique data at block level.
Index defragmentation can adversely affect database workload performance. You should perform index defragmentation only when it is necessary. Before you defragment, see the following web page:
20.3.3 Suggestions for Microsoft SQL Server
The following list includes some suggestions for the Microsoft SQL Server to improve the backup throughput and deduplication ratio of the ProtecTIER server:
Perform full backups whenever possible.
When you use the ProtecTIER server as CIFS shares, always use the Universal Naming Convention (UNC) path rather than a Windows mapped drive to ensure that the correct CIFS shares are used, and to avoid the Windows connection timeout issue.
Do not schedule index defragmentation on a regular basis. Perform index defragmentation only when necessary.
Disable compression and encryption in the Microsoft SQL server and
backup applications.
Limit the number of backup streams to one stream for one database backup, or one stream per physical volume if one single large database is split into multiple physical volumes. Setting the number of streams to 1 gives the best factoring ratio, but it might affect overall backup performance. Set the number of streams to the minimum number that does not inhibit performance by using the following option:
STRIPes=1
Use a larger buffer size for a better deduplication ratio. Increase the buffer size gradually from the default buffer size of the backup application, but do not exceed the amount of buffer space that the system memory can handle.
Limit the number of input/output (I/O) buffers in a backup stream. Ideally, there should be two buffers per stream: one buffer for reading data from SQL Server and the other for sending data to the backup application, as shown by the following settings (combined into a sample command after this list):
 – BUFFer=2
 – BUFFERSIze=1024
 – SQLBUFFer=0
 – SQLBUFFSIze=1024
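As an illustration only, assuming the Data Protection for SQL command-line client (TDPSQLC) and a database named SalesDB (both names are assumptions for this sketch), the suggested values can be combined in a single full backup command:
TDPSQLC BACKup SalesDB FULL /STRIPes=1 /BUFFers=2 /BUFFERSIze=1024 /SQLBUFFers=0 /SQLBUFFSIze=1024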
20.3.4 LiteSpeed for SQL Server
LiteSpeed for SQL Server is a backup utility that compresses and encrypts the SQL database before the data is stored in backup devices. The factoring ratio is greatly impacted if the data is compressed and encrypted before it reaches the ProtecTIER repository. The ProtecTIER product offers little deduplication benefit if LiteSpeed is used for SQL server backup.
20.4 DB2
This section describes the settings and parameters that should be modified in DB2 environments to enable the maximum performance and optimum factoring for the ProtecTIER server. It also explains why combining DB2 compression with ProtecTIER deduplication is possible.
 
Updating DB2: Update your DB2 to Version 9.7 Fix Pack 4 or later and use the DEDUP_DEVICE option for backing up your database. This action results in the best deduplication ratio. DB2 compression types can work with deduplication.
20.4.1 Combining DB2 compression and ProtecTIER deduplication
DB2 offers multiple options to use compression in conjunction with database rows, database values, or both. Run the select tabname,compression from SYSCAT.TABLES command to verify the settings for your database.
Table 20-1 lists the available compression types.
Table 20-1 DB2 compression types
SYSCAT.TABLES value   Compression type active
R                     Row compression is activated if licensed. A row format that supports compression can be used.
V                     Value compression is activated. A row format that supports compression is used.
B                     Both value and row compression are activated.
N                     No compression is activated. A row format that does not support compression is used.
With DB2 compression, data in the database is compressed on a table-row basis. These compressed rows are written to disk as DB2 pages with a default size of 4 K. Changes in a DB2 database with compression enabled affect only the data in these specific DB2 pages; the changes do not affect the entire database because of block based compression, which is different from traditional compression approaches. Also, after changes occur in the database, only the changed pages are recompressed.
Effectively, compressed DB2 pages are transparent to HyperFactor, and a large sequence of compressed pages factors well if the pages are not changed. Therefore, there is no general penalty for using compression in DB2. The data change rate affects the deduplication ratio in the same way regardless of whether you use compression.
Remember, even though DB2 compression has a friendly synergy with ProtecTIER deduplication, the full deduplication potential can be reached only when all other data reduction technologies, such as compression, are disabled.
 
Important: Using another form of compression with DB2 database backups, for example, Tivoli Storage Manager (version 7.1.3 was rebranded to IBM Spectrum Protect) compression or the compression feature of another backup software, still impacts your achievable deduplication ratio.
20.4.2 Upgrading the DB2 database to improve deduplication
The strongest suggestion for improving deduplication is to upgrade the DB2 database to DB2 9.7 Fix Pack 4 or later so that you can use the DEDUP_DEVICE option. This feature improves the DB2 backup process to make it deduplication friendly. Only with this version or a later version can you experience the full benefit of the optimized DB2 data handling for deduplication devices.
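Before you rely on the DEDUP_DEVICE option, you can confirm the installed level from the DB2 instance owner's shell. The db2level command reports the product version and Fix Pack level:
db2level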
You can download DB2 Fix Packs for DB2, for Linux, UNIX, and Windows, and IBM DB2 Connect™ products from the following web page:
To fully understand the improvements of the DEDUP_DEVICE option, look at the default DB2 database backup behavior. When a DB2 backup operation begins, one or more buffer manipulator (db2bm) threads are started. These threads are responsible for accessing data in the database and streaming it to one or more backup buffers. Likewise, one or more media controller (db2med) threads are started and these threads are responsible for writing data in the backup buffers to files on the target backup device.
The number of db2bm threads that is used is controlled by the PARALLELISM option of the BACKUP DATABASE command. The number of db2med threads that is used is controlled by
the OPEN n SESSIONS option. Finally, a DB2 agent (db2agent) thread is assigned the responsibility of directing communication between the buffer manipulator threads and the media controller threads.
This process is shown in Figure 20-4.
Figure 20-4 DB2 backup process model
Without the DEDUP_DEVICE option, data that is retrieved by buffer manipulator (db2bm) threads is read and multiplexed across all of the output streams that are being used by the media controller (db2med) thread. There is no deterministic pattern to the way in which data is placed in the output streams that are used (Figure 20-5). As a result, when the output streams are directed to a deduplication device, the device thrashes in an attempt to identify chunks of data that are already backed up.
Figure 20-5 Default database backup behavior
20.4.3 DB2 DEDUP_DEVICE setting
When the DEDUP_DEVICE option is used with the BACKUP DATABASE command, data that is retrieved by buffer manipulator (db2bm) threads is no longer read and multiplexed across the output streams that are being used by the media controller (db2med) threads. Instead, as data is read from a particular table space, all of that table space’s data is sent to only one output stream. Furthermore, data for a particular table space is always written in order, from lowest to highest page. As a result, a predictable and deterministic pattern of the data emerges in each output stream, making it easy for a deduplication device to identify chunks of data that are already backed up.
Figure 20-6 illustrates this change in backup behavior when the DEDUP_DEVICE option of the BACKUP DATABASE command is used.
Figure 20-6 Database backup behavior with the DEDUP_DEVICE option
When you use the DEDUP_DEVICE option, each table space is backed up to a dedicated tape drive. Using a number of virtual tape drives that is equal to or greater than the number of table spaces you want to back up is suggested.
If the database contains table spaces that are considerably larger than the others (above 30% of the entire database size), the backup is prolonged. If this situation affects the backup window, consult your DB2 support to assist you in splitting the larger table spaces to make them smaller. Also, communicate this information to the DB2 database planning staff so that future deployments can directly benefit from the improved deduplication without any drawback.
20.4.4 Example of DEDUP_DEVICE setting
Example 20-4 uses 16 tape drives to back up 16 table spaces (ideally of equal size) in parallel to Tivoli Storage Manager by using the DEDUP_DEVICE option.
Example 20-4 Multistreamed backup of DB2
db2 backup db <database_name> use tsm open 16 sessions dedup_device exclude logs
This action results in the best possible deduplication ratio. With DB2 9.7 Fix Pack 4 and later, the DB2 self-tuning capability is able to support this backup command by choosing all tuning values automatically.
20.4.5 Excluding logs from the DB2 database backup
Use the exclude logs parameter to avoid backing up your database logs to the same destination as your database. Database logs tend to have a 100% change rate and therefore have a negative effect on your overall HyperFactor ratio. Instead, redirect the archive logs directly to storage with no further active data reduction technology. Using the include logs parameter with the DB2 backup command results in archive logs being automatically added to the backup images. This action causes different patterns in the backup streams and reduces deduplication efficiency.
20.4.6 DB2 suggested settings without DEDUP_DEVICE
Backing up to a deduplication device when the DEDUP_DEVICE option is not available can still be optimized by applying some rules. The DB2 settings in Table 20-2 provide the best deduplication efficiency for backing up without the DEDUP_DEVICE option.
Table 20-2 Suggested DB2 settings
sessions / OPEN n SESSIONS
Suggested value: Minimum (see note 1). Change the value to read the data at the required backup rate.

buffers / WITH num-buff BUFFERS
Suggested value: parallelism + sessions + 2. The number of buffers should be #sessions + #parallelism + 2. Also, the following calculation must fit: (num-buffers * buffer-size) < UTIL_HEAP_SZ (UTIL_HEAP_SZ is the database utility heap size).

buffer / BUFFER buff-size
Suggested value: 16384. This value requires much memory. If this value is too much for your environment, use the largest possible BUFFER value instead. The value of this parameter is specified in multiples of 4 KB pages.

parallelism / PARALLELISM
Suggested value: Minimum (see note 1). Change the value to read the data at the required backup rate.

Note 1: Select the minimum value that configures an acceptable backup window time frame. Although a value of 1 is the best for deduplication, it might increase backup times in large multi-table space databases.
 
Setting: The large BUFFER size of 16384 is the setting with the most effect on your HyperFactor deduplication. The bigger the BUFFER value is, the better your deduplication ratio is.
20.4.7 Example of DB2 command using sessions, buffers, and parallelism
Example 20-5 shows an example of a DB2 backup command that uses four sessions, eight buffers, a buffer size of 16384, and a parallelism of 2.
Example 20-5 Database backup command
db2 backup db <databasename> use tsm open 4 sessions with 8 buffers buffer 16384 parallelism 2
 
Tip: Always use the same parameters for restore as you did for backup (number of sessions, buffers, buffer size, and parallelism) to ensure maximum restore performance.
20.5 Oracle
Oracle Recovery Manager (RMAN) is a backup and recovery utility for Oracle databases. RMAN backs up Oracle databases directly to disk, or to other storage devices by using third-party backup applications. Backup applications interface with RMAN to back up Oracle databases to various storage devices, such as tape or a file system.
The ProtecTIER server can be deployed as a VTL or FSI. For more details about how to set up VTL, and FSI, see Chapter 4, “Virtual Tape Library guidelines” on page 49 and Chapter 5, “ProtecTIER File System Interface: General introduction” on page 65.
This section describes the optimal settings and guidelines for RMAN to improve the backup throughput and factoring ratio of the ProtecTIER solution. The focus is on preferred practice parameters in an Oracle RMAN environment when you use ProtecTIER network-attached storage (NAS) as a disk target for RMAN backups.
20.5.1 Suggested RMAN settings
The following list describes several suggested settings for RMAN:
Backup routine:
 – Perform daily full backups whenever possible. A full backup enables the simplest and fastest restoration.
 – If full backup is not feasible, consider using incremental backups (level 0 - level 1 backups).
 – Back up all database components, including data files, control files, redo logs, the spfile, and archive logs (where applicable).
Archive log mode:
 – For online backups of Oracle databases, the database must have archive log mode enabled.
 – Make sure that multiple redo log groups are defined, and that their members are adequately sized.
 – Archive locations must be saved outside of ProtecTIER NAS.
 – Keep all archive logs generated between backups.
 – Enable ARCHIVELOG mode for your database. Make sure that the RMAN backup scripts use separate commands for backing up data (table spaces, data files, and control files) and for backing up archive logs. Separating the data files and archive logs into two separate backup streams provides better deduplication, because offline archive logs are unique and cannot be deduplicated. Run archive log backups as often as possible for better point-in-time recovery capabilities (see the sketch after this list).
Disable compression and encryption in Oracle databases and backup applications.
Disable or minimize multiplexing. Multiplexing enables RMAN to combine data blocks from different files into a single backup set, which impacts the factoring ratio. RMAN multiplexing is affected by the following two parameters:
 – The FILESPERSET parameter determines how many files should be included in each backup set. Set FILESPERSET=1 to send only one file per backup set in each channel (backup stream).
 – The MAXOPENFILES parameter defines how many files RMAN can read from the Oracle source simultaneously. Set MAXOPENFILES=1 so that RMAN does not read from more than one file at a time. Both parameters appear in the sketch that follows this list.
Example 20-6 shows a multiplexing calculation with various FILESPERSET and MAXOPENFILES settings.
Example 20-6 Calculation of multiplexing in RMAN
Scenario 1: FILESPERSET=6, MAXOPENFILES=3, number of data files=4
Multiplex = 3 (Limiting by the MAXOPENFILES setting)
 
Scenario 2: FILESPERSET=2, MAXOPENFILES=3, number of data files=4
Multiplex = 2 (Limiting by the FILESPERSET setting)
 
Scenario 3: FILESPERSET=8, MAXOPENFILES=4, number of data files=2
Multiplex = 2 (Limiting by the number of data files)
 
Scenario 4: FILESPERSET=1, MAXOPENFILES=1, number of data files=4
Multiplex = 1 (Limiting by the FILESPERSET and MAXOPENFILES settings)
Increase the number of parallel backup streams to improve backup throughput. Ensure that the number of ProtecTIER virtual tape drives that are available for Oracle backup matches the number of parallel streams that are configured in RMAN. For example, this value is enabled in the definition of the Tivoli Storage Manager client node on the Tivoli Storage Manager server by using the MAXNUMMP=32 parameter. Set PARALLELISM=32 (up to 64).
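The following RMAN sketch combines the multiplexing and archive log suggestions from this list: one file per backup set, one open file per channel, and a separate backup command for the archive logs. The mount point and channel count are illustrative only:
run {
allocate channel c1 device type disk format '/mnt/RMAN/%U' maxopenfiles 1;
allocate channel c2 device type disk format '/mnt/RMAN/%U' maxopenfiles 1;
backup filesperset 1 database;
backup archivelog all delete input;
release channel c1;
release channel c2;
}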
Figure 20-7 depicts a case study that shows the factoring ratio and a backup throughput result with different multiplexing and parallel channel settings. The result is taken from a case study of Oracle database backup with Tivoli Storage Manager, and a 30-day retention period on ProtecTIER virtual tape. A full backup that is performed on alternate days averages a 5% data change rate between the full backups.
Figure 20-7 Example of multiplexing and parallelism effect on the HyperFactor ratio
 
Environmental variations: The parameters that are used in this test are not absolute requirements. Different environments might produce different results, depending on the data change rate and backup practices. Fine-tune the RMAN settings in your environment gradually to get the settings that do not inhibit performance.
Be aware of the following usage guidelines:
Using Direct NFS (dNFS). Oracle 11gR2 introduced dNFS, an optimized NFS client that provides faster and more scalable access to NFS storage located on NAS storage devices. dNFS was tested and evaluated, but is currently not supported. Use the standard client kernel NFS instead.
Using Automatic Storage Management (ASM). The use of ASM was tested with ProtecTIER by using ASMLib and found to be fully functional. Using ASM for storing the database provided better performance for backups while using fewer concurrent channels.
Using hidden underscore parameters. The use of Oracle hidden, undocumented, and unsupported parameters is strongly discouraged when using ProtecTIER. Some performance white papers suggest changing the _backup_disk_bufcnt and _backup_disk_bufsz hidden parameters. Tests were performed with those setups, and changing those default hidden parameters is not recommended when using ProtecTIER.
20.5.2 Mounting NFS Oracle Server to ProtecTIER NAS
For RMAN backups to work with ProtecTIER NAS, use the following mount parameters on the Oracle servers. In Example 20-7, the ProtecTIER server (protectier) exports the share /RMAN, and the Oracle server uses the mount point /mnt/RMAN as the destination target for RMAN backups.
Example 20-7 Suggested parameters from Oracle Servers for RMAN backups with ProtecTIER NAS
mount -o rw,soft,intr,nolock,timeo=3000,nfsvers=3,proto=tcp protectier:/RMAN /mnt/RMAN
 
Attention: A suggestion is to use the soft mount attribute, because the hard mount option retries I/O forever, which might lead to Oracle database crashes. If the ProtecTIER server goes down while the NFS client uses a hard mount, the client retries forever; the retries might eventually succeed, but they can also have negative consequences on the Oracle database server side.
The suggested timeout value to use with the soft option is five minutes (timeo=3000, specified in tenths of a second). If the ProtecTIER server takes longer than this timeout to come back (for example, while rebooting), the Oracle NFS client detects the condition and stops retrying, which leads only to a backup failure. When the Oracle DBA reinitiates the RMAN backup jobs for the failed data files, the data is rewritten with no data loss.
Oracle requires the NFS mount point to be set to a hard mount for NAS storage systems, because Oracle assumes that the mount point will contain table spaces and archive logs; any downtime of the NAS storage would then prevent Oracle from restarting or mounting the database. When an NFS mount point is being used by an Oracle instance, the instance checks for specific mount options.
If any of the options are incorrect, the Oracle instance issues the following message and the operation fails:
ORA-27054: NFS file system where the file is created or resides is not mounted with correct options
This behavior is described in document 359515.1 at the My Oracle Support website:
Because a soft mount is suggested for ProtecTIER NAS while Oracle expects a hard mount, messages from Oracle that report incorrect mount options (shown in Example 20-8) are written to the alert.log file.
Example 20-8 Messages indicate incorrect mount option during RMAN backups to ProtecTIER NAS
WARNING:NFS file system /mnt/ORC2_RMAN1 mounted with incorrect options (rw,vers=3,rsize=1048576,soft,intr,nolock,proto=tcp,timeo=3000,retrans=2,sec=sys,addr=10.1.2.135)
WARNING:Expected NFS mount options: rsize>=32768,wsize>=32768,hard,
Wed Jul 10 03:10:31 2013
WARNING:NFS file system /mnt/ORC2_RMAN1 mounted with incorrect options (rw,vers=3,rsize=1048576,soft,intr,nolock,proto=tcp,timeo=3000,retrans=2,sec=sys,addr=10.1.2.135)
WARNING:Expected NFS mount options: rsize>=32768,wsize>=32768,hard,
Wed Jul 10 03:10:31 2013
WARNING:NFS file system /mnt/ORC2_RMAN1 mounted with incorrect options (rw,vers=3,rsize=1048576,soft,intr,nolock,proto=tcp,timeo=3000,retrans=2,sec=sys,addr=10.1.2.135)
WARNING:Expected NFS mount options: rsize>=32768,wsize>=32768,hard,
Wed Jul 10 03:10:31 2013
To work around the Oracle NFS checks that produce error message ORA-27054, use the following commands:
-- Set event until database shutdown
alter system set events '10298 trace name context forever,level 32';
 
-- Set event after database restart
alter system set event="10298 trace name context forever, level 32" scope = spfile;
These commands prevent the NFS checks of the database, which enables you to use the suggested mount options, especially the soft mount option, during RMAN backups to ProtecTIER NAS.
To re-enable Oracle's NFS checks, use the following SQL command:
-- Set event until database shutdown
alter system set events '10298 trace name context off';
-- Set event after database restart
alter system set event="10298 trace name context off" scope = spfile;
 
Note: This setting applies to all mount points that are used by the instance, not only to those that are used by RMAN. If you are using an NFS mount point for data file or archive log containers, use the suggested mount point settings that are described in My Oracle Support note 359515.1 for those mount points.
20.5.3 Using ProtecTIER NAS to run RMAN incremental merge backups
Using Oracle RMAN incremental backups, you can decrease the amount of data that must be backed up by including only the data file blocks that changed since a specified previous backup.
This kind of backup is permitted on the ProtecTIER server, but it has no significant advantages in terms of deduplication. The backup space is lower, but the time to recover from those backups is longer. The fastest restore in these terms comes from image copy backups.
Because image copy backup might take much time and storage space, Oracle suggests using the Incrementally Updated Backups feature, which enables you to avoid the overhead of taking full image copy backups of datafiles, while providing the same recovery advantages as image copy backups.
At the beginning of a backup strategy, RMAN creates an image copy backup of the datafile. Then, at regular intervals, such as daily or weekly, level 1 incremental backups are taken and applied to the image copy backup, rolling it forward to the point in time when the level 1 incremental was created.
During restore and recovery of the database, RMAN can restore from this incrementally updated copy and then apply changes from the redo log, with the same results as restoring the database from a full backup taken at the System Change Number (SCN) of the most recently applied incremental level 1 backup.
A backup strategy based on incrementally updated backups can help minimize time required for media recovery of your database. For example, if you run scripts to implement this strategy daily, at recovery time you never have more than one day of redo to apply.
The downside of this backup strategy is that only one valid full backup exists at any time, making a recovery to a point before the latest backup impossible.
Using the ProtecTIER Cloning feature, you can duplicate the backed-up directory to a new directory without any physical effect (100% deduplication) and then apply the incremental changes to one of the copies. This method ensures that any unchanged data in the data files is deduplicated, and all new or changed information is available. When you use this method, multiple copies of a backup can be kept at the same time, and a restore of the database to any version is available at all times.
Example 20-9 and Example 20-10 on page 320 show an instance of an implementation of RMAN incremental merge backup with ProtecTIER clone.
Example 20-9 Master script for running the backup and clone
#!/bin/bash
# Run rman using catalog to backup incremental
rman TARGET / CATALOG rman/rman@rman CMDFILE rman_inc.rcv
 
# Run clone to end of backup
NOW=$(date +%Y%m%d_%H%M)
loginInline="ptadmin,ptadmin"
sourceFS=ORC2_RMAN1
targetFS=ORC2_RMAN1
sourceDir="ORC2_RMAN1/RMAN_backups"
targetDir="ORC2_RMAN1/clone_${NOW}"
ssh ptadmin@protectier /opt/dtc/ptcli/ptcli CloneDirectory --loginInline $loginInline --force --sourceFS $sourceFS --targetFS $targetFS --sourceDir $sourceDir --targetDir $targetDir
Example 20-10 RMAN Backup script (rman_inc.rcv) using 12 channels
run {
allocate channel c1 device type disk format '/mnt/ORC2_RMAN1/RMAN_backups/%U' ;
allocate channel c2 device type disk format '/mnt/ORC2_RMAN1/RMAN_backups/%U' ;
allocate channel c3 device type disk format '/mnt/ORC2_RMAN1/RMAN_backups/%U' ;
allocate channel c4 device type disk format '/mnt/ORC2_RMAN1/RMAN_backups/%U' ;
allocate channel c5 device type disk format '/mnt/ORC2_RMAN1/RMAN_backups/%U' ;
allocate channel c6 device type disk format '/mnt/ORC2_RMAN1/RMAN_backups/%U' ;
allocate channel c7 device type disk format '/mnt/ORC2_RMAN1/RMAN_backups/%U' ;
allocate channel c8 device type disk format '/mnt/ORC2_RMAN1/RMAN_backups/%U' ;
allocate channel c9 device type disk format '/mnt/ORC2_RMAN1/RMAN_backups/%U' ;
allocate channel c10 device type disk format '/mnt/ORC2_RMAN1/RMAN_backups/%U' ;
allocate channel c11 device type disk format '/mnt/ORC2_RMAN1/RMAN_backups/%U' ;
allocate channel c12 device type disk format '/mnt/ORC2_RMAN1/RMAN_backups/%U' ;
 
CROSSCHECK BACKUP;
 
RECOVER COPY OF DATABASE WITH TAG 'LVL0_MERGE_INCR';
 
BACKUP CHECK LOGICAL INCREMENTAL LEVEL 1 CUMULATIVE COPIES=1
FOR RECOVER OF COPY WITH TAG 'LVL0_MERGE_INCR' DATABASE;
 
sql 'ALTER SYSTEM ARCHIVE LOG CURRENT';
 
BACKUP CHECK LOGICAL AS COMPRESSED BACKUPSET FILESPERSET 10 ARCHIVELOG ALL DELETE INPUT;
 
DELETE NOPROMPT OBSOLETE;
DELETE NOPROMPT EXPIRED BACKUP;
 
release channel c1;
release channel c2;
release channel c3;
release channel c4;
release channel c5;
release channel c6;
release channel c7;
release channel c8;
release channel c9;
release channel c10;
release channel c11;
release channel c12;
}
When using incremental backups, a strong suggestion is to enable block change tracking for faster incremental backups by using one of the following commands (the second form specifies an explicit location for the tracking file):
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/mnt/RMAN_backup/rman_change_track.f' REUSE;
20.5.4 Using ProtecTIER NAS to store Oracle Data Pump exports
Data Pump Export is a ready-to-use Oracle utility for unloading data and metadata into a set of operating system files that is called a dump file set. The dump file set can be imported only by the Data Pump Import utility. The dump file set can be imported on the same system, or it can be moved to another system and loaded there.
The dump file set is made up of one or more disk files that contain table data, database object metadata, and control information. The files are written in a proprietary, binary format. During an import operation, the Data Pump Import utility uses these files to locate each database object in the dump file set and import it to the database.
Export and import are commonly used for creating logical backups of a full database, a schema, or a single object.
Oracle Data Pump Export and Import can use the ProtecTIER NAS location, when mounted as an NFS location using the mounting options and the event setting described previously.
Because the export dump files are binary, the deduplication ratio can vary based on the changes made to the exported objects in a single dump file. Compressing the dump files by using the COMPRESSION=DATA_ONLY parameter makes the deduplication less efficient, and is therefore not recommended. Using the PARALLEL=n parameter multiplexes the export into multiple files and speeds up the export operations.
Deduplication on export files is decent, and was tested to be as good as 1:10 on multiple exports of the same object.
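As a sketch (the directory object, schema, and file names are hypothetical), an export that uses parallelism and writes to the ProtecTIER NFS mount without data compression can look like the following commands:
SQL> CREATE DIRECTORY pt_exports AS '/mnt/RMAN/exports';
expdp system SCHEMAS=sales DIRECTORY=pt_exports DUMPFILE=sales_%U.dmp PARALLEL=4 LOGFILE=sales_exp.log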
20.5.5 Using ProtecTIER NAS to store Oracle database files and offline redo logs
Placing data files on the ProtecTIER storage affects database performance, and this kind of usage might hinder the performance of the ProtecTIER system as well. Because Oracle data files are always open and data is constantly written into them, putting the data files on ProtecTIER storage causes the ProtecTIER server to constantly evaluate the data for deduplication.
 
Attention: Even though the ProtecTIER NAS was not designed for it, internal tests have proven that it is possible to create table spaces on it. However, doing so is strongly discouraged.
Using ProtecTIER NAS as a single archive log destination works, but again, it is not recommended. The purpose of ProtecTIER NAS is to provide a target for backups. Using it as the only archive log destination might affect database performance, and in some cases might cause the database to hang.
The use of the ProtecTIER storage to reduce costs by storing read-only or offline table spaces was tested and found to be working and valid.
20.5.6 Other suggestions for RMAN
When working with RMAN and ProtecTIER, other suggestions are relevant:
As a preferred practice, use client mount parameters for Oracle database servers:
mount -o rw,soft,intr,nolock,timeo=3000,nfsvers=3,proto=tcp protectier:/RMAN /mnt/RMAN
Disable Oracle NFS check by using the following SQL command for each Oracle database instance:
SQL> alter system set event="10298 trace name context forever, level 32";
To disable Oracle NFS checks during the lifecycle of the database instance, use the following command:
SQL> alter system set event="10298 trace name context forever, level 32" scope = spfile;
To enable Oracle NFS checks (when no longer using RMAN backups to ProtecTIER NAS), use the following command (a verification sketch follows this list):
SQL> alter system set events '10298 trace name context off';
Do not use ProtecTIER NAS to store or migrate table spaces, tables, or data files. Use it only as a target for RMAN backups.
Do not use ProtecTIER NAS as primary or alternate archive log destination.
Because this scenario suggests disabling the Oracle NFS check, Oracle servers that use NAS from other vendors (NetApp, EMC, Hitachi, or IBM N series) to store Oracle table spaces should not use ProtecTIER NAS for Oracle RMAN backups.
Oracle Real Application Clusters (RAC) are currently not supported.
Oracle Direct NFS (dNFS) is being tested and evaluated, but is currently not supported. Use standard client kernel NFS instead.
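To confirm the event setting that is currently in effect for the instance, you can display it from SQL*Plus (a quick sketch; the output format varies by version):
SQL> show parameter event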
In summary, ProtecTIER can be deployed as either a System Backup to Tape (SBT) target or a disk target for RMAN backups. Using ProtecTIER NAS as a target for RMAN disk backups greatly strengthens the benefits to Oracle database administrators (DBAs), who gain complete control of their RMAN backups while also gaining the benefits of deduplication and scalability enhancements.
Additionally, using soft rather than hard mount options helps ensure that the Oracle database does not hang or crash during a timeout between the ProtecTIER NAS unit and the Oracle database server.
20.6 SAP
This section describes settings and parameters to be modified for optimum performance when you are working with specific data types, such as SAP integrated with Tivoli
Storage Manager.
 
Note: Beginning with version 7.1.3, Tivoli Storage Manager was rebranded to IBM Spectrum Protect. The scenarios in this topic were conducted with a version earlier than 7.1.3. For legacy and reference purposes, we continue to refer to Tivoli Storage Manager.
20.6.1 SAP introduction
SAP is an acronym for Systems, Applications, and Products in Data Processing. SAP provides a common centralized database for all the applications that are running in an organization. The database instance is a mandatory installation component for the installation of an SAP system.
SAP supports the following databases:
Oracle
MS SQL Server
IBM DB2 Universal Database™ for UNIX and Windows
SAP liveCache technology
MaxDB
IBM DB2 Universal Database for z/OS®
IBM DB2 Universal Database for iSeries
IBM Informix®
For more database and operating system support information, see the Product Availability Matrix (PAM) at the SAP Service Marketplace. Log in at the following address:
20.6.2 Data protection for SAP
Data protection for the SAP server involves steps to protect all of the software components that are needed by the SAP system to operate. The base components of the SAP server are the operating system, the SAP application server, the database instance, and the data files. Each component requires different data protection techniques.
The SAP system uses the relational database as main storage for all SAP data and meta information. This main storage is the basis for the tight integration of all SAP application modules and ensures consistent data storage. Data in the SAP database is unique for every company, and if the data is lost, it cannot be simply reinstalled in the same manner as an operating system is reinstalled. Therefore, be especially careful when you plan the protection of the data that is stored in the SAP database.
Protection of the SAP database
The protection of the SAP database has two parts: protecting the database binary files and configuration files, and protecting data that is stored in the data files.
Database binary files and configuration files are typically backed up as part of the operating system or file system backup. The backup of the database data files and other supporting structures that are associated with SAP data should be performed by a dedicated tool that is designed especially for database backup. You can use database backup and restore tools to back up and restore data in a consistent state.
The backup tools can also perform an online backup of the database and backup of the redo log files just after the log files are archived. A backup of a database creates a copy of the database’s data files, control files, and, optionally, log files. It then stores these files on backup media.
A consistent backup, also called an offline backup or cold backup, is a backup of all the data files in the database that is taken when all interim changes are physically written to the data files. With a consistent backup, partial changes from the log files that are not written to the data files are not backed up. If you restore a database from a consistent backup, the database is in a consistent state when the restore operation finishes.
Also note the following information:
For an Oracle database, a consistent backup can be taken only when the database is shut down for the entire duration of the backup procedure.
For a DB2 Universal Database (UDB), a database must be deactivated, or the instance must be stopped before the backup operation starts.
The database must stay inactive until the backup finishes. This action ensures that there are no data changes on the database at the time the backup is being taken. A consistent backup is always a backup of the entire database; it cannot be a partial or incremental backup.
You can take an offline backup by using a dedicated database backup tool (such as Oracle Recovery Manager, BR*Tools, or the DB2 BACKUP command) or a non-database backup tool, such as the Tivoli Storage Manager backup-archive client. The dedicated database backup tools ensure that all the objects that are required for a successful database restore are included in the backup image.
The database backup tool also ensures that the location, time stamp, type, and other information about the backup copy is registered in the repository, such as BR*Tools logs or a DB2 database history file. Using the metadata in the repository, backup tools can perform an automatic restore that is based on the specified time stamp without prompting for the backup images to restore and their location.
IBM offers products for both data protection and data retention and reduction. For example, in the SAP environment, there are Tivoli Storage Manager for Enterprise Resource Planning (ERP) and Tivoli Storage Manager for Databases for data protection. For data retention, you can use IBM DB2 CommonStore for SAP. Both solutions can use a Tivoli Storage Manager server as the media manager.
 
Note: Beginning with version 7.1.3, Tivoli Storage Manager was rebranded to IBM Spectrum Protect. The scenarios in this topic were conducted with a version earlier than 7.1.3. For legacy and reference purposes, this book continues to use the name Tivoli Storage Manager.
Tivoli Storage Manager for ERP, formerly known as Tivoli Data Protection for SAP, is a component of the Tivoli Storage Manager family that provides a complete backup solution for SAP databases. The current version supports Oracle and DB2 only.
The following features are available for Tivoli Storage Manager for ERP:
Handles large amounts of data
Optimized processor usage that reduces the overall time for backup and restore
Optimized for an SAP environment
Supports multiple management classes
These solutions are illustrated in Figure 20-8.
Figure 20-8 Backup and archival products
 
Additional information: For more information, see the following web page:
20.6.3 Integration of Tivoli Storage Manager for ERP with SAP
Tivoli Storage Manager for ERP is fully integrated into the SAP environment. The communication between the backup and archive server is performed by an application programming interface (API) called ProLE. This API is shared with other Tivoli Data Protection products. ProLE runs as a background process and provides communication with the Tivoli Storage Manager server. Figure 20-9 on page 326 shows a sample architecture of Tivoli Storage Manager for ERP integrated with SAP.
Figure 20-9 Tivoli Storage Manager for ERP sample scenario
 
Additional information: For more information, see the IBM Knowledge Center:
20.6.4 Tivoli Storage Manager for ERP for Oracle database
Tivoli Storage Manager for ERP is a client and server program that manages backups and restores in conjunction with the Tivoli Storage Manager. With Tivoli Storage Manager for ERP, it is possible to handle SAP database backups, and it includes the ability to manage backup storage and processing independently from normal SAP operations.
Furthermore, Data Protection for SAP in combination with Tivoli Storage Manager provides reliable, high performance, and repeatable backup and restore processes to manage large volumes of data more efficiently.
For Oracle databases, two options exist to implement a backup using Tivoli Storage Manager:
Tivoli Storage Manager for ERP using the BACKINT interface
Tivoli Storage Manager for ERP using Oracle Recovery Manager (RMAN)
With the integration, it is possible to follow the ERP backup and restore procedures and to use the integrated SAP database utilities BRBACKUP, BRARCHIVE, BRRESTORE, and SAPDBA for backup and restore. Other SAP-related files (executable files) are backed up by using Tivoli Storage Manager standard techniques for file backup and restore, for example, incremental backup, file filtering, and point-in-time recovery.
Tivoli Storage Manager for ERP for Oracle using BACKINT
Using this feature, you can perform the traditional Oracle online backup with automation provided by BACKINT. Figure 20-10 shows the data interface between Oracle Databases and Tivoli Storage Manager for ERP for Oracle using the BACKINT interface.
Figure 20-10 Tivoli Storage Manager for ERP for Oracle using BACKINT
The backup proceeds as follows:
1. BR*Tools takes control.
2. BRBACKUP calls the Tivoli Storage Manager for ERP by using BACKINT.
3. BACKINT changes the table spaces to backup mode with the following command:
alter tablespace <tablespace name> begin backup
4. BACKINT, by using Tivoli Storage Manager for ERP, reads all the data files and saves them to the Tivoli Storage Manager server.
5. BR*Tools updates the catalog with information about the backed-up data files.
 
Logs: BR*Tools logs are stored in the /oracle/<SID>/saparch directory.
BRBACKUP automatically backs up the logs and profiles after every backup operation. In the case of bare metal restore or disaster recovery, logs and profiles must be restored to enable BR*Tools to restore data files. The process can be simplified if the logs and profiles are backed up by a Tivoli Storage Manager backup archive client during the file system backup.
Using this method, the chosen data files are sent to Tivoli Storage Manager one by one. No compression or block checking is performed at this level.
When a database is in backup mode, the amount of redo logs that are written to disk increases because Oracle writes the entire dirty block to disk, not just the updated data. In some cases, when the backup routine fails for any reason, a data file can remain in backup mode, which can degrade performance and cause additional I/O to the disk.
Tivoli Storage Manager for ERP for Oracle using RMAN
Using this feature, you can take advantage of all the facilities that are provided by RMAN. In general, RMAN is able to perform a backup in less time compared to the traditional backup using BACKINT because RMAN sends only used data blocks (in an Oracle data file) to Tivoli Storage Manager. The other interesting feature is block checking, which discovers bad blocks as soon as they occur.
In addition, you can use the Oracle Recovery Manager (RMAN) utility to run some tasks that are not provided by BR*Tools, such as incremental backups, releasing backup versions, and catalog maintenance.
 
Note: Beginning with version 7.1.3, Tivoli Storage Manager was rebranded to IBM Spectrum Protect. The scenarios in this topic were conducted with a version earlier than 7.1.3. For legacy and reference purposes, we continue to refer to Tivoli Storage Manager.
Figure 20-11 shows the data interface between Oracle Database and Oracle for SAP using RMAN.
Figure 20-11 Tivoli Storage Manager for ERP for Oracle using RMAN
20.6.5 Tivoli Storage Manager for ERP for DB2
Tivoli Storage Manager for ERP for DB2 was created to provide an intelligent interface for managing backup and restore by using Tivoli Storage Manager. It is fully integrated into the SAP environment. The backup command DB2 BACKUP DATABASE and the restore command DB2 RESTORE DATABASE are run at the DB2 CLI, which calls the Tivoli Data Protection for SAP module.
The backup and restore of the DB2 log files is provided by the BR*Tools commands BRARCHIVE and BRRESTORE. In addition, you can use the Tivoli Storage Manager for ERP for DB2 Tools BackOM and the built-in Log Manager.
Figure 20-12 shows the data interface between DB2 Databases and Tivoli Storage Manager for ERP for DB2.
Figure 20-12 Tivoli Storage Manager for ERP for DB2
The archiving of DB2 offline log files is provided by the SAP tool BRARCHIVE. The retrieval of DB2 offline log files is provided by the SAP tool BRRESTORE and by the Tivoli Storage Manager for ERP tool BackOM. As of DB2 Version 9.X, offline log files can be archived and retrieved with the DB2 built-in Log Manager.
The DB2 command line processor (CLP) interprets commands for the DB2 database and passes control to a DB2 Server Process. In the case of Tivoli Storage Manager for ERP,
the LOAD <libraryname> option causes DB2 to start the Tivoli Storage Manager for ERP shared library.
This process runs during the backup or restore, loads the library dynamically, and communicates with it through the Tivoli Storage Manager API. To start a backup or restore, the DB2 CLP communicates with the DB2 Server Process, providing the server process with the relevant information for processing the database.
 
Additional information: For more information about backup methodologies for SAP, see SAP Backup using Tivoli Storage Manager, SG24-7686.
All backup solutions that are described in this section can be integrated with advanced backup techniques, such as LAN-free backup, parallel transfer of backup data to and from the Tivoli Storage Manager server, or multiplexing.
 
Reduction of processing time: Implementation of these techniques can reduce backup and restore times, and eliminate the effect of backup data transfers on LAN throughput.
20.6.6 SAP BR*Tools for Oracle using BACKINT
SAP BR*Tools for Oracle is a package of utilities, developed by SAP AG to protect and manage SAP data that is stored in Oracle databases. BR*Tools supports functions for online, offline, partial, or full backups of databases (BRBACKUP) and backups of archived redo logs (BRARCHIVE). It has functions for database restore and recovery (BRRECOVER and BRRESTORE).
In addition to using BR*Tools for database recoverability tasks, you can also have it serve as a tool for creating homogeneous database copies, and to assist with database migration to different platforms or database versions.
BRBACKUP is the BR*Tools utility that enables online or offline backup of database files (data files, control files, and online redo log files). BRBACKUP can be used to back up individual data files, table spaces, or the entire Oracle database. BRBACKUP also backs up the BR*Tools configuration profiles and logs that are required for the database’s disaster recovery.
The smallest unit that BRBACKUP can save is a file. You can use BRBACKUP to back up both database files and non-database files and directories. Use the backup_mode parameter in the initialization profile init<DBSID>.sap or the brbackup -m|-mode command-line option for this purpose.
Before an offline backup is taken, BRBACKUP automatically shuts down the database, and it restarts the database when the backup completes. BRBACKUP can also change the status of the table spaces that are being backed up by issuing BEGIN/END BACKUP.
You can also instruct BRBACKUP to use software compression, which can speed up the backup, especially if the network is slow.
 
Compression: If you plan to send data to a ProtecTIER server, do not enable software compression; it might affect the overall deduplication ratio.
The most frequently used BRBACKUP function is a full database backup. Example 20-11 shows BRBACKUP.
Example 20-11 Online backup by using the databases BRBACKUP tool
$su - cptadm
$BRBACKUP -c -u / -t ONLINE_CONS -m FULL -p /oracle/CPT/102_64/dbs/initCPT.sap
You can perform a full backup by running BRBACKUP with the following options:
The mode option (-mode/-m) is set to FULL or ALL.
You can start a full backup either in online mode (-type/-t online_cons) or in offline mode (-type offline). In the case of the online_cons type, the offline redo log files that are generated during the full backup are also backed up to the same media.
The backup storage media is defined by the BR*Tools profile file that is specified by the BRBACKUP parameter -profile/-p.
The user name and password that BRBACKUP uses to log on to the Oracle database system are specified by the parameter -user/-u. If you are working as a DBA user that is authenticated to the database by the OS (an OPS$ user), you can use "/" as the value of this parameter.
The parameter -confirm/-c stands for an unattended mode, which is mostly used in the backup scripts, so BR*Tools does not prompt you for confirmations.
Archived redo log backup functions
BRARCHIVE provides functions for offline redo log files backup in Oracle databases that run in archiving mode. If archiving is enabled, a database cannot overwrite an active log file until the content is archived. Whenever an active redo log is filled, the database performs a log switch and starts writing to another log file. The full redo log files are archived by Oracle background processes into the archivelog directory.
The redo log is the most important database component for a recovery from a crash, media failure, or user failure. Therefore, at least the production databases should be configured in archiving mode. To prevent the archivelog directory from filling up, BRARCHIVE should be run periodically to move the offline redo logs from the archive directory to the backup media.
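As an illustrative sketch only (the user and profile path follow the pattern of Example 20-11), a periodic BRARCHIVE run that saves the offline redo logs and then deletes them from the archive directory might look like the following commands:
$su - cptadm
$brarchive -u / -c -sd -p /oracle/CPT/102_64/dbs/initCPT.sap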
BR*Tools and Tivoli Storage Manager for ERP with Oracle
BR*Tools interacts with Tivoli Storage Manager for ERP with Oracle through the BACKINT interface. The communication of BR*Tools and BACKINT occurs as follows:
1. The BR*Tools utility BRBACKUP informs Oracle of what data must be backed up and puts the database into the correct backup state (online or offline backup).
2. BRBACKUP calls Tivoli Data Protection for ERP using the BACKINT interface with a list of all files to be backed up.
3. Tivoli Data Protection for ERP reads all the requested files from the database and reports back to BRBACKUP. BRBACKUP adds these files to the repository that contains all processed backups.
4. BACKINT transfers the data to the Tivoli Storage Manager server by using the Tivoli Storage Manager Client API.
5. The BR*Tools updates the repository that contains information about the status of
the files.
BR*Tools configuration
To configure BR*Tools, complete the following steps:
1. The BR*Tools configuration is stored in the init<SID>.sap initialization profile file. The configuration file contains parameters that affect the performance of backup and restore functions. The file is in the following default locations:
 – On UNIX: <ORACLE_HOME>/dbs
 – On Windows: <ORACLE_HOME>\database
2. Some parameters that are specified in the profile can be overridden if the BR*Tools programs are called with different command options. In the BR*Tools profile, you can specify the backup adapter that is used to transfer data (cpio, BACKINT, or RMAN).
3. If you set up BR*Tools to use the BACKINT adapter, you need to reference the appropriate BACKINT profile (*.utl file) in the BR*Tools profile. If you want to instruct BR*Tools to use Oracle RMAN, you must define the RMAN channel parameters in the BR*Tools profile.
4. The configuration profile of Tivoli Storage Manager for ERP is defined in the init<SID>.utl file, which is in the same directory as the BR*Tools profile (init<SID>.sap). The configuration parameters in the init<SID>.utl file include the Tivoli Storage Manager node name and the management classes to be used for data files backup and offline redo logs backup. If the backup retention is going to be controlled by Tivoli Storage Manager for ERP, you can set up the number of backup versions to be kept in this file.
5. The configuration file of the Tivoli Storage Manager API (dsm.sys) is, by default, stored in the Tivoli Storage Manager API installation directory (specified by the environment variable DSMI_DIR). The configuration file of the Tivoli Storage Manager API client defines the network settings (protocol and network address of the Tivoli Storage Manager server) to enable communication between the API client and the Tivoli Storage Manager server.
6. You also specify in this file the authentication type (PASSWORDACCESS) that the Tivoli Storage Manager API client uses to connect to the Tivoli Storage Manager server. Additionally, if the storage agent is operable on the local node, you can instruct the Tivoli Storage Manager API client in this file to use LAN-free backup (by using the LANFREE yes|no option).
7. Instruct BR*Tools to use the BACKINT interface by setting the backup_dev_type parameter in the SAP initialization file (init<SID>.sap) as follows:
backup_dev_type = util_file
8. Instruct BR*Tools to use the init<SID>.utl file (created by the Tivoli Storage Manager for ERP installation wizard) by setting the util_par_file parameter in the SAP
initialization file:
util_par_file=<full_path>/init<SID>.utl
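Combining steps 7 and 8, the relevant lines of an init<SID>.sap profile might read as follows (the path is illustrative):
backup_dev_type = util_file
util_par_file = /oracle/CPT/102_64/dbs/initCPT.utl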
 
Archiving functions: Tivoli Storage Manager for ERP uses the Tivoli Storage Manager archive functions to transfer data to Tivoli Storage Manager server and back. Therefore, the management classes that are assigned to Tivoli Storage Manager for ERP (in the init<SID>.utl file) must have an archive copy group defined.
For more information, see the IBM Knowledge Center:
20.6.7 SAP BR*Tools for Oracle using RMAN with Tivoli Storage Manager
Configure BR*Tools for use with the RMAN Tivoli Storage Manager channel as follows:
On the Tivoli Storage Manager server, complete the following steps:
a. Define a policy domain with two management classes that are used to transfer data and logs. Define an archive copy group in each of the management classes. If the retention control is performed at the Tivoli Storage Manager server, specify RETVER=<days> for each archive copy group. If the retention control is performed by Tivoli Storage Manager for ERP, specify RETVER=nolimit.
b. Register the Tivoli Storage Manager node with the defined domain. Update the parameter MAXNUMMP for the Tivoli Storage Manager node to MAXNUMMP=2 (based on the
parallelism that is required).
On the client node, complete the following steps:
a. Update or create the DSM.OPT and DSM.SYS files to configure the Tivoli Storage Manager API client. The PASSWORDACCESS parameter must be set to “PROMPT” in this configuration.
b. Set up the environment values DSMI_DIR and DSMI_LOG for the Oracle OS user.
c. Install IBM Tivoli Storage Manager for ERP - Oracle on the Oracle server with SAP installed on it.
d. Configure the client resources for the Oracle server in the IBM Tivoli Storage Manager for ERP configuration file (<ORACLE_HOME>/dbs/init<SID>.utl).
e. Check the defined Tivoli Storage Manager node name and Tivoli Storage Manager management classes to be used for the backup of offline redo log files and data files. Ensure that the SERVER parameter refers to an existing stanza in the DSM.SYS file.
f. If the retention control is driven by Tivoli Storage Manager for ERP, set the
MAX_VERSIONS parameter.
g. Switch to the Oracle instance owner and update the Tivoli Storage Manager node password for Oracle by running the following command:
backint -p <ORACLE_HOME>/dbs/init<SID>.utl -f password
h. Ensure that RMAN can access the Tivoli Storage Manager for ERP API. The following links must exist (or be created):
 • ln -s /usr/tivoli/tsm/tdp_r3/ora/libtdp_r3.<ext> /usr/lib/libobk.<ext>
 • ln -s /usr/lib/libobk.<ext> $ORACLE_HOME/lib/libobk.<ext>
i. Instruct BR*Tools to use RMAN by setting the backup_dev_type and rman_parms options in the SAP initialization file (init<SID>.sap) as follows:
 • backup_dev_type = rman_util
 • rman_parms="ENV=(XINT_PROFILE=<ORACLE_HOME>/dbs/init<SID>.utl,PROLE_PORT=<portnumber>,&BR_INFO)"
j. Instruct BR*Tools to use the init<SID>.utl file for Tivoli Storage Manager specific parameters by setting the util_par_file parameter in the SAP initialization file:
util_par_file=<path to Tivoli Storage Manager for ERP util file - init<SID>.utl>
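Combining steps i and j, an illustrative init<SID>.sap fragment might look like the following lines. The path and the ProLE port number are assumptions; use the port that your Tivoli Storage Manager for ERP installation configured:
backup_dev_type = rman_util
rman_parms = "ENV=(XINT_PROFILE=/oracle/CPT/102_64/dbs/initCPT.utl,PROLE_PORT=57323,&BR_INFO)"
util_par_file = /oracle/CPT/102_64/dbs/initCPT.utl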
20.6.8 Configuring DB2 to use Tivoli Storage Manager for ERP
Configure DB2 to use Tivoli Storage Manager for ERP as follows:
On the Tivoli Storage Manager server, complete the following steps:
a. Define a policy domain with two management classes that are used to transfer data and logs. Define an archive copy group for both management classes. If the retention control is performed at the Tivoli Storage Manager server, specify RETVER=<days> for each archive copy group. If the retention control is performed at Tivoli Storage Manager for ERP level, specify RETVER=nolimit.
b. Register Tivoli Storage Manager node with the defined domain. Update the parameter MAXNUMMP for Tivoli Storage Manager node to MAXNUMMP=2 (based on the parallelism that
is required).
On the client node, complete the following steps:
a. Update or create the Tivoli Storage Manager API client option files DSM.OPT and DSM.SYS. The PASSWORDACCESS=GENERATE parameter must be set for this configuration.
b. Configure the environment values DSMI_DIR, DSMI_CONFIG, and DSMI_LOG in the DB2 instance owner user’s profile. You must restart the DB2 instance to make the parameters effective for DB2.
c. Install Tivoli Storage Manager for ERP - DB2 (as of version 7.1.3, Tivoli Storage Manager is known as IBM Spectrum Protect) on the DB2 UDB server, with SAP already installed. You can use the installation wizard to specify the name of the Tivoli Storage Manager server stanza (in DSM.SYS), the Tivoli Storage Manager node name, and the management classes to be used for the backup of data and archived logs.
d. Check the client resource for the Tivoli Storage Manager server in the Tivoli Storage Manager for ERP configuration file /db2/<SID>/tdp_r3/init<SID>.utl. Verify that the following environment variables are set correctly in the DB2 owner user’s profile:
 • XINT_PROFILE
 • DB2_VENDOR_LIB
 • TDP_DIR
e. Switch to the DB2 instance owner and update the Tivoli Storage Manager client password for DB2 node by running the following command:
$/usr/tivoli/tsm/tdp_r3/db264/backom -c password
f. Restart the DB2 instance.
g. Optionally, you can set up DB2 automatic log management so that the archived logs are sent to Tivoli Storage Manager by using the Tivoli Storage Manager media management library that is provided by Tivoli Storage Manager for ERP. This task can be accomplished by setting the DB2 configuration parameters LOGARCHMETH1 and LOGARCHOPT1 as follows:
 • update db cfg for <SID> using LOGARCHMETH1 VENDOR:/usr/tivoli/tsm/tdp_r3/db264/libtdpdb264.a
 • update db cfg for <SID> using LOGARCHOPT1 /db2/<SID>/tdp_r3/vendor.env
h. If you use the direct log backup method that is specified in step g, you should also specify the FAILARCHPATH DB2 configuration parameter. FAILARCHPATH points to a directory that is used as temporary storage for offline logs if the Tivoli Storage Manager server is unavailable, which can prevent DB2 from filling up the log directory. The command syntax is as follows:
update db cfg for <SID> using FAILARCHPATH <offline log path>
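To verify the log management settings from steps g and h, you can display the relevant database configuration parameters (the SID is illustrative):
$ db2 get db cfg for CPT | grep -iE 'LOGARCHMETH1|LOGARCHOPT1|FAILARCHPATH'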
20.6.9 Preferred practices for Tivoli Storage Manager for ERP with ProtecTIER
The configuration profile of Tivoli Storage Manager for ERP is defined in the init<SID>.utl file, which is in the same directory as the BR*Tools profile (init<SID>.sap).
When the ProtecTIER VTL is defined to Tivoli Storage Manager, some settings must be made in Tivoli Storage Manager for ERP to optimize this integration.
Set the following information in the init<SID>.utl file:
Disable multiplexing. The MULTIPLEXING parameter specifies how many files are read simultaneously and are multiplexed. If a file is multiplexed, it can affect the deduplication ratio. Set MULTIPLEXING=1.
Use as many backup sessions in parallel as possible. The MAX_SESSIONS parameter defines the number of parallel sessions to be established. The valid range of MAX_SESSIONS is 1 - 32. Also define the SESSIONS parameter in each Tivoli Storage Manager stanza in the .utl file to specify the maximum number of sessions in that Tivoli Storage Manager server stanza.
 
Important: The MAX_SESSIONS parameter setting must not exceed the number of tape drives that are available simultaneously to the node in the Tivoli Storage Manager servers to be accessed. This maximum is established by the MAXNUMMP parameter settings in the Tivoli Storage Manager node definition.
Disable compression by configuring RL_COMPRESSION=NO. The RL_COMPRESSION parameter specifies whether a null block compression of the data should be performed before transmission to Tivoli Storage Manager. Although RL_COMPRESSION can improve throughput when the network is the bottleneck, it introduces additional processor load on the SAP server and can affect the ProtecTIER deduplication ratio.
On the Tivoli Storage Manager server, complete the following steps:
1. Update the MAXNUMMP parameter for the Tivoli Storage Manager node to MAXNUMMP=x, where x is the number of parallel sessions that are required. This number should match the MAX_SESSIONS parameter that is set in the .utl file. The MAXNUMMP parameter specifies the maximum number of mount points that a node can use on the server for operations such as backup and archive.
2. Update the COMPression parameter for the Tivoli Storage Manager node to COMPression=NO. This setting specifies that the client node does not compress its files before it sends them to the server for backup and archive.
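Putting these settings together, an illustrative init<SID>.utl fragment and the matching server command might look like the following lines. The node name and the session count of 8 are assumptions; match the value to the number of virtual tape drives that are available:
MAX_SESSIONS 8
MULTIPLEXING 1
RL_COMPRESSION NO
On the Tivoli Storage Manager server:
update node SAP_CPT maxnummp=8 compression=no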
20.7 VMware
In addition to the now available vStorage APIs, the vStorage APIs for Data Protection (VADP) are also available. VADP replaces the VMware Consolidated Backup (VCB) framework, and offers multiple methods to improve your VMware backup. With the new VADP comes the option to use incremental virtual machine image backups by using the changed block tracking (CBT) feature.
In contrast to a full virtual machine image backup, CBT reduces the amount of backed-up data because only the blocks that changed since the last full backup are backed up. With CBT enabled, the backup operation backs up only the changed blocks, which the ProtecTIER server sees as a high data change rate because almost all of the data it receives is new. For ProtecTIER deduplication to perform optimally, run at least one full backup per week.
 
Incremental backups: If you use incremental virtual machine image backups, run at least one full virtual machine image backup per week to optimize your deduplication ratio.
Follow the general preferred practices and the Tivoli Storage Manager (as of version 7.1.3, rebranded to IBM Spectrum Protect) practices that are described in Chapter 13, “IBM Spectrum Protect” on page 187.
20.7.1 Technical overview
VMware ESX is installed directly on the hardware and does not require an underlying operating system. It is a virtualization platform that is used to create the virtual machines (VMs) as a set of configuration and disk files that together perform all the functions of a physical machine.
vCenter Server
The vCenter server is a service that acts as a central administration point for ESX hosts that are connected to a network. This service directs actions on the virtual machines and the hosts. The vCenter server is the working core of the vCenter.
Multiple vCenter servers can be joined to a linked mode group, where you can log on to any single vCenter server to view and manage the inventories of all the vCenter server systems in the group.
With vCenter, an administrator can manage every component of a virtual environment. ESX servers, VMs, and extended functions, such as Distributed Resource Scheduler (DRS), vMotion, and VM backup, are all managed through the vCenter server by using the vSphere Client GUI.
20.7.2 Settings and tuning for VMware and Tivoli Storage Manager
When you set up VMware for Tivoli Storage Manager (as of version 7.1.3, Tivoli Storage Manager has been rebranded to IBM Spectrum Protect), there are guidelines for using the vStorage API, changed block tracking (CBT), and the formats for virtual disk files. This section provides a brief description of those guidelines.
vStorage API
The vStorage application programming interfaces (APIs) for data protection enable backup software to protect system, application, and user data in your virtual machines in a simple and scalable way. These APIs enable backup software to perform the following actions:
Perform full, differential, and incremental image backup and restore of virtual machines
Perform file-level backup of virtual machines using supported Windows and Linux operating systems
Ensure data consistency by using Microsoft Volume Shadow Copy Services (VSS) for virtual machines that run supported Microsoft Windows operating systems
Changed block tracking (CBT)
Virtual machines that run on ESX/ESXi hosts can track disk sectors that change. This feature is called CBT. On many file systems, CBT identifies the disk sectors that are altered between two change set IDs. On Virtual Machine File System (VMFS) partitions, CBT can also identify all the disk sectors in use. CBT is useful when you set up incremental backups.
Virtual disk formats
When you perform certain virtual machine management operations (such as creating a virtual disk, cloning a virtual machine to a template, or migrating a virtual machine), you can specify a format for the virtual disk file. However, you cannot specify the disk format if the disk is on an NFS data store. The NFS server determines the allocation policy for the disk. The disk formats listed in this section are supported.
Thick format
This format is the default virtual disk format. The thick virtual disk does not change its size, and from the beginning occupies the entire data storage space that is provisioned to it. Thick format does not zero the blocks in the allocated space.
 
Conversion: Converting a thick disk format to a thin provisioned format is not possible.
Thin provisioned format
Use this format to save storage space. For the thin provisioned format, specify as much data storage space as the disk requires based on the value that you enter for the disk size. However, the thin disk starts small and, at first, uses only as much data storage space as the disk needs for its initial operations.
 
Thin disk considerations: If a virtual disk supports clustering solutions such as fault tolerance, you cannot make the disk thin. If the thin disk needs more space later, it can grow to its maximum capacity and occupy the entire data storage space that is provisioned to it. Also, you can manually convert the thin disk to thick disk.
20.7.3 Backup solutions
This section describes backup solutions and prerequisites for VMware backup by using the ProtecTIER server and Tivoli Storage Manager.
Full VM backup on ProtecTIER
You can use Tivoli Storage Manager Data Protection (DP) for VMware to back up and restore VM data through SAN-based data movement. Data movement is possible over the following two data paths:
The first data path is from the VMware data store to the vStorage server through SAN.
The second data path is from the vStorage server to the ProtecTIER server. It could be SAN-based if Tivoli Storage Manager LAN-free is used.
The backup data path uses the Tivoli Storage Manager for SAN feature (LAN-free backup). In this book, LAN-free backup is used on VMware.
The DP for VMware stores virtual machine full backup images (full-VM) as a collection of control and data files. The data files contain the contents of virtual machine disk files, and the control files are small metadata files that are used during full VM restore operations and full VM incremental backups. In most cases, VMs are cloned from a predetermined template. As a result, there is substantial duplication of data. The ProtecTIER solution, in conjunction with Tivoli Storage Manager, deduplicates such data.
Prerequisites to VMware backup using ProtecTIER and Tivoli Storage Manager
Check that the following items are complete before you use VMware to back up your data with the ProtecTIER product and Tivoli Storage Manager (rebranded as IBM Spectrum Protect beginning with version 7.1.3):
The ProtecTIER repository exists.
The ProtecTIER deduplication function is enabled.
The Tivoli Storage Manager server is installed with a license.
The Tivoli Storage Manager storage agent is installed on the vStorage server, if LAN-free is used.
The Tivoli Storage Manager Backup-Archive client is installed on the vStorage server.
 
Tip: For the best performance, the vStorage server must have separate HBA ports, with one port connected to the ProtecTIER repository and another connected to the disk subsystem that stores the VMware data store.
Prerequisites for ESX/ESXi
Check that the following items are complete before you use VMware ESX/ESXi with the ProtecTIER repository and Tivoli Storage Manager:
The host must be running ESX/ESXi Version 4.0 or later.
The VMware vCenter Server must be Version 4.1.x or later.
For incremental backup, the virtual machine that owns the disks to be tracked must be hardware Version 7 or later.
CBT must be enabled for the virtual machine. (In the vSphere client, click Edit → Settings → Options → Advanced/General → Configuration Parameters.)
The configuration (.vmx) file of the virtual machine must contain the following entry:
ctkEnabled = "TRUE"
For each virtual disk, the .vmx file must contain the following entry:
scsix:x.ctkEnabled = "TRUE"
For each virtual disk and snapshot disk, a .ctk file must exist (Example 20-12).
Example 20-12 Both the virtual disk and snapshot disk have an associated .ctk file
vmname.vmdk
vmname-flat.vmdk
vmname-ctk.vmdk
vmname-000001.vmdk
vmname-000001-delta.vmdk
vmname-000001-ctk.vmdk
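To confirm that CBT is active for a particular virtual machine, you can list the .ctk files in its datastore directory from the ESXi shell (the datastore and VM names are illustrative):
ls /vmfs/volumes/datastore1/vmname/*-ctk.vmdk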
VMware topology
Check that the topology shown in Figure 20-13 is in place before you use VMware with ProtecTIER and Tivoli Storage Manager.
Figure 20-13 Tivoli Storage Manager/VMware topology
The following conditions should be true:
The Tivoli Storage Manager backup-archive client on the vStorage server must read GUEST OS data from the disk subsystem over the SAN.
The Tivoli Storage Manager storage agent on the vStorage server must write GUEST OS data to the ProtecTIER repository over the SAN.
The Tivoli Storage Manager server writes control data to the ProtecTIER repository through the SAN.
20.7.4 Zoning
The tables in this section describe the required fabric zoning for the ProtecTIER repository, the hosts, and Tivoli Storage Manager:
The HBA ports are described in Table 20-3 on page 340.
A SAN zoning example is listed in Table 20-4 on page 340.
Table 20-3 HBA ports
ProtecTIER: PT front-end port_0, PT front-end port_1
Tivoli Storage Manager Storage Agent: Tivoli Storage Manager Storage Agent port_0, Tivoli Storage Manager Storage Agent port_1
Tivoli Storage Manager Server: Tivoli Storage Manager server port_0, Tivoli Storage Manager server port_1
ESX/ESXi Server: ESX/ESXi server port_0, ESX/ESXi server port_1
XIV: XIV_Module4 port_0, XIV_Module5 port_0, XIV_Module6 port_0, XIV_Module4 port_1, XIV_Module5 port_1, XIV_Module6 port_1
Table 20-4 SAN zoning examples
Zone_XIV_TSM_StorageAgent_0: Tivoli Storage Manager Storage Agent port_0, XIV_Module4 port_0, XIV_Module5 port_0, XIV_Module6 port_0
Zone_XIV_TSM_StorageAgent_1: Tivoli Storage Manager Storage Agent port_1, XIV_Module4 port_1, XIV_Module5 port_1, XIV_Module6 port_1
Zone_PT_TSM_StorageAgent_0: Tivoli Storage Manager Storage Agent port_0, PT front-end port_0
Zone_PT_TSM_StorageAgent_1: Tivoli Storage Manager Storage Agent port_1, PT front-end port_1
Zone_PT_TSM_Server_0: Tivoli Storage Manager server port_0, PT front-end port_0
Zone_PT_TSM_Server_1: Tivoli Storage Manager server port_1, PT front-end port_1
Zone_ESX_XIV_0: ESX/ESXi server port_0, XIV_Module4 port_0, XIV_Module5 port_0, XIV_Module6 port_0
Zone_ESX_XIV_1: ESX/ESXi server port_1, XIV_Module4 port_1, XIV_Module5 port_1, XIV_Module6 port_1
20.7.5 Configuring the ProtecTIER server
This section describes all the steps that are required to create and to configure a new VTL in the ProtecTIER server. Also described is the optional procedure to enable and configure LUN masking for the Tivoli Storage Manager server and the Tivoli Storage Manager storage agent (vStorage server in a Tivoli Storage Manager environment).
To create the VTL, complete the following steps:
1. From the VT drop-down menu of ProtecTIER Manager, click VT → VT Library → Create new library. The Create new library window opens.
2. Enter the name of the library in the VT name field, and click Next. The Library type window opens in the Create new library window.
3. Select IBM TS3500 as the library type to simulate. The Tape model window opens.
4. Select IBM ULT3580-TD3 as the tape model to simulate. The Port Assignment
window opens.
5. Create the robot and the drives (such as one robot and 20 drives). The drives are assigned across all front-end ports to ensure better performance. The Cartridges window opens.
6. Create cartridges that are based on the backup policy (for example, 100 cartridges). The Slots window opens.
7. Create 100 slots and 32 import/export slots by selecting 100 in the No. of slots selection box, and 32 in the Number of import/exports slots selection box. Click Next.
 
Important: The creation of the library takes the system offline for a few minutes.
8. (Optional) Enable and configure LUN masking for the Tivoli Storage Manager server and the Tivoli Storage Manager storage agent (vStorage server). If you have multiple backup servers that are connected to the ProtecTIER server, enabling the LUN masking feature is suggested. Enable and configure LUN masking by completing the following steps:
a. From the expanded list on the left side of the ProtecTIER Manager window, click VT → LUN Masking → Enable/Disable LUN masking. ProtecTIER Manager notifies you that if you enable the LUN masking feature without configuring LUN masking groups, the devices are hidden from the hosts, and prompts you to confirm whether you want to proceed.
b. When the Enable/Disable LUN masking dialog box opens, select Enable LUN masking, and click OK.
c. From the expanded list on the left side of the ProtecTIER Manager window, click VT → LUN Masking → Configure LUN masking groups. The LUN Masking window opens.
d. In the “Selected Host Initiators” frame, click Add. The Host Initiator Management window opens.
e. Create LUN masking groups for the Tivoli Storage Manager server and the Tivoli Storage Manager storage agent by adding the WWPNs of the Tivoli Storage Manager server and the Tivoli Storage Manager storage agent to the list.
f. Select the check box beside each of the added ports, and click Save Changes. The LUN Masking window opens.
g. In the Library Mappings frame, click Add, and add the library that is called “TSM_VMW” to the library mappings list.
h. Click Save Changes.
20.7.6 Installing the tape driver on the Tivoli Storage Manager server and the Tivoli Storage Manager storage agent
You can install the tape driver on the Tivoli Storage Manager and the Tivoli Storage Manager storage agent for Windows and Linux based systems.
 
Note: Beginning with version 7.1.3, Tivoli Storage Manager was rebranded to IBM Spectrum Protect. The scenarios in this topic were conducted with a version earlier than 7.1.3. For legacy and reference purposes, we continue to refer to Tivoli Storage Manager.
The Windows Device Manager shows the tape changer as “Unknown Medium Changer” and the tape drives as “IBM ULT3580-TD3 SCSI Sequential Device” (Figure 20-14).
Figure 20-14 Server Manager window
Procedure for Windows systems
Complete the following steps:
1. Download the IBM Tape Device Driver from the following website:
2. Run install_exclusive.exe (Figure 20-15) to install the IBM Tape Driver for Tivoli Storage Manager. The installation application initiates, and when complete, displays a dialog box that notifies you that the installation was successful.
Figure 20-15 Installation program
3. After installation is complete, Windows Device Manager shows the tape changer as “IBM 3584 Tape Library”, and the tape drives as “IBM ULTRIUM III 3580 TAPE DRIVE” (Figure 20-16).
Figure 20-16 Server manager window - showing renamed changer and tape drive
Procedure for Linux systems
Complete the following steps:
1. Download the tape device driver from the following website:
2. Install the following RPM packages:
 – lin_tape-1.54.0-1
 – lin_taped-1.54.0-1
20.7.7 Tivoli Storage Manager storage agent configuration
This section describes the process of configuring the Tivoli Storage Manager storage agent to establish communication with the Tivoli Storage Manager server in a VMware environment.
To accomplish this task, complete the following steps:
1. To establish communication for Tivoli Storage Manager server and Tivoli Storage Manager storage agent, run the commands that are shown in Example 20-13 at the CLI of the Tivoli Storage Manager storage agent.
Example 20-13 Establish communication for Tivoli Storage Manager server and Tivoli Storage Manager storage agent
dsmsta.exe setstorageserver
myname=ARCX385084529
mypassword=<user_password>
myhladdress=x.x.110.38
servername=ARCX3650N1332
serverpassword=open1sys
hladdress=x.x.110.65
lladdress=1500
2. Disable automatic mounting of volumes on the Tivoli Storage Manager storage agent host by running the following commands at the CLI prompt of the Tivoli Storage Manager storage agent (a scripted alternative is shown at the end of this section):
C:\> diskpart
DISKPART> automount disable
DISKPART> exit
 
Important: The diskpart command is necessary to keep the Tivoli Storage Manager storage agent from damaging the SAN volumes that are used for raw disk mapping (RDM) virtual disks.
3. To enable the Tivoli Storage Manager storage agent to access online GUEST OS data, click Server manager → Storage → Disk Management → Online (Figure 20-17).
Figure 20-17 Server Manager window shows Tivoli Storage Manager storage agent set to Online
4. Note the device name and serial number of the Tivoli Storage Manager storage agent (Figure 20-18). You need this information to define the path in the Tivoli Storage Manager server in a later step.
Figure 20-18 Serial numbers of Tivoli Storage Manager storage agent
 
If you do not use persistent naming, take a screen capture of the Tivoli Storage Manager management console so that you can have a readily accessible record of the tape device information in the Tivoli Storage Manager storage agent.
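As a scripted alternative to the interactive diskpart session in step 2 (a sketch; the script file name is arbitrary), you can pass the command from a file:
C:\> echo automount disable > automount.txt
C:\> diskpart /s automount.txt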
 
20.7.8 Tivoli Storage Manager server configuration
This section describes the procedure to define and to configure the Tivoli Storage Manager server in a VMware environment through the Tivoli Storage Manager server CLI. To accomplish this task, complete the following steps:
1. Define the server for the storage agent by running the following command at the Tivoli Storage Manager server CLI:
define server ARCX385084529 SERVERPAssword=admin HLAddress=x.x.110.38 LLAddress=1500 COMMmethod=TCPIP
2. Set the name of the Tivoli Storage Manager server by running the following command:
set servername ARCX3650N1332
3. Set the password by running the following command:
set serverpassword admin
4. Set the Tivoli Storage Manager server IP address by running the following command:
set serverhladdress x.xx.xxx.xx
5. Set the Tivoli Storage Manager server port by running the following command:
set serverlladdress 1502
6. Create the library by running the following command:
define library VMW_LIB libtype=scsi autolabel=yes shared=yes RELABELSCRatch=yes
7. Choose all devices that are related to tsminst1.
8. Define a library path from the Tivoli Storage Manager server to the physical OS devices by running the following command:
DEFINE PATH ARCX3650N1332 VMW_LIB srctype=server desttype=library autodetect=yes device=/dev/IBMchanger0
9. Define all the drives by running the following commands:
 – define drive VMW_LIB drive0
 – define drive VMW_LIB drive1
10. Define the drives path from Tivoli Storage Manager server to the physical OS devices by running the following commands:
 – define path ARCX3650N1332 drive0 srctype=server desttype=drive library=VMW_LIB autodetect=yes device=/dev/IBMtape0
 – define path ARCX3650N1332 drive1 srctype=server desttype=drive library=VMW_LIB autodetect=yes device=/dev/IBMtape1
11. Define the drives path from Tivoli Storage Manager storage agent to the physical OS devices by running the following commands:
 – define path ARCX385084529 drive0 srctype=server desttype=drive library=VMW_LIB autodetect=yes device=\\.\Tape0
 – define path ARCX385084529 drive1 srctype=server desttype=drive library=VMW_LIB autodetect=yes device=\\.\Tape1
 
Important: Ensure that the device on the Tivoli Storage Manager server and the device on the Tivoli Storage Manager storage agent, which are mapped to the same drive path, have the same serial number. They are essentially the same device. (For the serial numbers in the Tivoli Storage Manager storage agent, see your notes from step 4 on page 347.)
12. Query the drive and verify that it has a status of online by running the following command:
query drive
13. Check in and label the cartridges by running the following command (a verification sketch appears at the end of this section):
label LIBVOL VMW_LIB search=yes labelsource=barcode CHECKIN=scratch overwrite=yes waitt=0
14. Define a device class by running the following command:
define devclass LTOCLASS3 library=VMW_LIB devtype=lto format=ULTRIUM3C
15. Define a storage pool by running the following command:
define stgpool VMW_POOL LTOCLASS3 pooltype=primary maxscratch=99999
16. Define a domain by running the following command:
DEFine DOmain VMW_DOMAIN BACKRETention=60 ARCHRETention=365
17. Define a policy set by running the following command:
DEFine POlicyset VMW_DOMAIN VMW_POLICY
18. Define a management class by running the following command:
DEFine MGmtclass VMW_DOMAIN VMW_POLICY LTOCLASS3
19. Define a copy group by running the following command:
DEFine COpygroup VMW_DOMAIN VMW_POLICY LTOCLASS3 DESTination=VMW_POOL
20. Define an archive copy group by running the following command:
DEFine COpygroup VMW_DOMAIN VMW_POLICY LTOCLASS3 Type=Archive DESTination=VMW_POOL
21. Assign the default management class by running the following command:
ASsign DEFMGmtclass VMW_DOMAIN VMW_POLICY LTOCLASS3
22. Activate the policy set by running the following command:
ACTivate POlicyset VMW_DOMAIN VMW_POLICY
23. Register the node for Tivoli Storage Manager BAC by running the following command:
register node ARCX385084529 admin passexp=0 userid=admin domain=VMW_DOMAIN compression=no type=client DATAWritepath=lanfree DATAReadpath=lanfree hladdress=x.x.110.38 lladdress=1502 archdelete=yes backdelete=yes maxnummp=999
 
Compression value: Specify the value of compression as no.
For LAN-free backup, specify the value of DATAWritepath and DATAReadpath as lanfree.
24. Give the appropriate permissions to the administrator by running the following command:
GRant AUTHority admin CLasses=SYstem
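As a final sanity check (the verification sketch referenced in step 13), you can confirm the drive status, the paths, and the checked-in cartridges with standard administrative commands:
query drive
query path
query libvolume VMW_LIB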
20.7.9 Tivoli Storage Manager client installation
Install the Tivoli Storage Manager client in a VMware environment by using the IBM Tivoli Storage Management installation program:
1. In the wizard (Figure 20-19), select all the components to install, and click Next.
Figure 20-19 Tivoli Storage Manager installation program
 
Important: Ensure that VMware Backup Tools are installed.
2. When you are prompted to select the type of installation, click Complete and finish
the installation.
20.7.10 Disabling compression and deduplication on Tivoli Storage Manager
Disable compression and deduplication in Tivoli Storage Manager by using the Tivoli Storage Manager GUI as follows:
1. From the Tivoli Storage Manager GUI drop-down menus, click Edit → Client Preferences (Figure 20-20). The Client Preferences window opens.
Figure 20-20 Tivoli Storage Manager GUI with Client Preferences selected
2. In the menu at the left, select Deduplication (Figure 20-21). The Deduplication Preferences page opens.
Figure 20-21 Deduplication Preferences window
3. Disable Tivoli Storage Manager deduplication by clearing the Enable Deduplication check box.
 
Note: Tivoli Storage Manager is able to combine both compression and deduplication. The details are explained in Chapter 4, "Introduction to IBM Tivoli Storage Manager deduplication", in Implementing IBM Storage Data Deduplication Solutions, SG24-7888.
4. In the menu at the left, click Backup (Figure 20-22). The Backup Preferences page opens.
Figure 20-22 Backup Preferences window
5. Disable Tivoli Storage Manager compression by clearing the Compress Objects
check box.
20.7.11 Configuring a full VM backup through the vStorage API
Configure a full VMware backup from the Tivoli Storage Manager GUI as follows:
1. From the Tivoli Storage Manager GUI drop-down menus, click Edit → Client Preferences. The Client Preferences window opens.
2. From the menu at the left, click VM Backup (Figure 20-23).
Figure 20-23 VM Backup window
3. Under Backup Type, select VMware Full VM.
4. In the Domain for VM Backup selection box, select Domain Full VM.
5. In the VM Options selection box, select All-VM (if you are using a Windows Guest OS, select ALL-WINDOWS), and click Insert.
6. Enter the IP address, user name, and password of the vCenter.
7. Under VM Management Class, select VStorage.
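The equivalent settings can also be placed in the client options file. The following dsm.opt fragment is a sketch; the vCenter address is hypothetical, and the vCenter password (the VMCPW option) can be stored by the client rather than written into the file:
VMBACKUPTYPE FULLVM
DOMAIN.VMFULL ALL-VM
VMCHOST vcenter.example.com
VMCUSER administrator
VMMC VStorage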
20.7.12 VMware Guest OS backup to ProtecTIER
Use one of these methods to implement VMware Guest OS backup to the ProtecTIER server:
Backing up VM by using the Tivoli Storage Manager GUI
Complete the following steps:
1. From the Tivoli Storage Manager GUI drop-down menus, click Actions → Backup VM (Figure 20-24).
Figure 20-24 Tivoli Storage Manager GUI showing the Backup VM menu cascade
2. The Backup Virtual Machine window opens (Figure 20-25). In the Backup selection box, select the backup type: VMware Full VM (vStorage) or VMware Full VM (incremental).
Figure 20-25 Choose the type of backup
Full VM backup by using the CLI
Start a full VM backup by using the -mode=full parameter in the following command:
PS C:\Program Files\Tivoli\TSM\baclient> ./dsmc backup vm win2k3_large_3 -mode=full -vmbackuptype=fullvm
Tivoli Storage Manager backs up all of the inspected data, so the total data reduction ratio is 0%. The system displays output similar to Example 20-14.
Example 20-14 Output log: full backup
Total number of objects inspected: 1
Total number of objects backed up: 1
Total number of objects updated: 0
Total number of objects rebound: 0
Total number of objects deleted: 0
Total number of objects expired: 0
Total number of objects failed: 0
Total number of subfile objects: 0
Total number of bytes inspected: 15.00 GB
Total number of bytes transferred: 15.00 GB
LanFree data bytes: 15.00 GB
Data transfer time: 842.86 sec
Network data transfer rate: 18,661.04 KB/sec
Aggregate data transfer rate: 12,541.90 KB/sec
Objects compressed by: 0%
Total data reduction ratio: 0.00%
Subfile objects reduced by: 0%
Elapsed processing time: 00:20:54
Incremental VM backup by using the CLI
Start an incremental VM backup by using the -mode=incremental parameter as follows:
C:\Program Files\Tivoli\TSM\baclient> ./dsmc backup vm win2k3_large_3 -mode=incremental -vmbackuptype=fullvm
Tivoli Storage Manager backs up only the changed data that is found by VMware CBT, so the total data reduction ratio is 99.77%. The output is shown in Example 20-15.
Example 20-15 Output log: incremental backup
Total number of objects inspected: 1
Total number of objects backed up: 1
Total number of objects updated: 0
Total number of objects rebound: 0
Total number of objects deleted: 0
Total number of objects expired: 0
Total number of objects failed: 0
Total number of subfile objects: 0
Total number of bytes inspected: 15.00 GB
Total number of bytes transferred: 38.63 GB
LanFree data bytes: 38.63 GB
Data transfer time: 2.77 sec
Network data transfer rate: 13,461.05 KB/sec
Aggregate data transfer rate: 2,689.30 KB/sec
Objects compressed by: 0%
Total data reduction ratio: 99.77%
Subfile objects reduced by: 0%
Elapsed processing time: 00:00:13
Restoring VM by using the Tivoli Storage Manager GUI
Complete the following steps:
1. In the Tivoli Storage Manager GUI, click Actions → Restore VM (Figure 20-26).
Figure 20-26 Tivoli Storage Manager GUI with Restore VM menu cascade
2. The Restore Virtual Machine window opens (Figure 20-27). Select the version to be restored: either a full backup (FULL) or an incremental backup (INCR).
Figure 20-27 Restore Virtual Machine window
3. Tivoli Storage Manager prompts you to select whether to restore to the original location or to a new location (Figure 20-28). If you choose to restore to a new location, enter the following information, and click Restore:
 – Name: The Guest OS name that is managed by vCenter.
 – Datacenter: The name of the data center that stores the new Guest OS.
 – Host: The IP of the ESX server that stores the new Guest OS.
 – Datastore: The name of data store that stores the new Guest OS.
Figure 20-28 Restore Destination window
 