Deduplication considerations
This chapter describes the IBM System Storage TS7600 ProtecTIER family data deduplication concepts, methods, and system components. It elaborates on the benefits of data deduplication with IBM HyperFactor and on general ProtecTIER deduplication considerations, and it identifies data types that are candidates for high factoring ratios, and data types that can have a negative effect on factoring ratios.
This chapter describes the following topics:
2.1 HyperFactor data deduplication
Data deduplication is the process of storing only a single instance of data that is backed up repetitively. It reduces the amount of space that is needed to store data on disk. Data deduplication is not a storage device; it is a function of a system, for example, a Virtual Tape Library (VTL), an OpenStorage (OST) application programming interface (API) for versions of ProtecTIER earlier than v3.4, or a File System Interface (FSI).
Data deduplication is not an input/output (I/O) protocol. However, it does require an I/O protocol for data transfer, such as Fibre Channel Protocol (FCP), Common Internet File System (CIFS), Network File System (NFS), or an API.
Figure 2-1 illustrates how ProtecTIER data deduplication stores repeated instances of identical data in a single instance. This process saves storage capacity and bandwidth. Data deduplication can provide greater data reduction than previous technologies, such as Lempel-Ziv (LZ) compression and differencing, which is used for differential backups.
Using data deduplication does not always make sense, because not all types of data can be deduplicated with the same efficiency. Data deduplication might also interfere with other technologies, such as compression, encryption, or data security requirements. Data deduplication is transparent to users and to applications.
Figure 2-1 Simplified data deduplication process
With data deduplication, the incoming data stream is read and analyzed by the ProtecTIER HyperFactor algorithm, which looks for duplicate data. By using inline processing, ProtecTIER ensures high performance, scalability, and 100% data integrity. It compares data elements of variable sizes to identify duplicate data. After the duplicate data is identified, one instance of each element is stored, pointers are created for the duplicate items, and the duplicate items themselves are not stored again but only referenced.
The effectiveness of data deduplication depends on several variables:
The data change rate
The number of backups
The amount of repetitive or similar data in your backups
The data retention period
For example, if you back up the exact same uncompressible data once a week for six months, you store the first copy and do not store the next 24, which would provide a 25:1 data deduplication ratio. If you back up an uncompressible file on week one, back up the exact same file again on week two and never back it up again, you have a 2:1 deduplication ratio.
A more likely scenario is that some portion of your data changes from backup to backup, so your data deduplication ratio changes over time. For example, assume that you take weekly full and daily differential incremental backups. Assume that your data change rate for the full backups is 15% and for the daily incrementals is 30%. After 30 days, your deduplication ratio might be approximately 6:1, but if you keep your backups for up to 180 days, your deduplication ratio might increase to 10:1.
These examples, and the remainder of this book, describe the deduplication ratio as being the nominal data (total backup data that has been received) divided by the physical data (amount of disk space that is used to store it).
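The ratio arithmetic in these examples can be expressed with a simple model. The function below is an illustrative sketch, not a ProtecTIER internal: it assumes the first full backup is stored whole and each later backup stores only its changed fraction.

```python
def dedup_ratio(full_size_tb, change_rate, num_backups):
    """Simplified model: deduplication ratio = nominal data / physical data.
    The first full backup stores everything; each later backup stores
    only the fraction of data that changed since the previous one."""
    nominal = full_size_tb * num_backups                              # data received
    physical = full_size_tb * (1 + change_rate * (num_backups - 1))   # data stored
    return nominal / physical

# 25 identical weekly fulls of 1 TB (0% change): 25:1, as in the text.
print(round(dedup_ratio(1.0, 0.0, 25), 1))   # 25.0

# 26 weekly fulls with a 15% change rate: the ratio levels off near 5.5:1.
print(round(dedup_ratio(1.0, 0.15, 26), 1))  # 5.5
```

The model shows why the ratio keeps climbing as retention grows: nominal data grows by a full backup each cycle, while physical data grows only by the changed fraction.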
Data deduplication can provide storage savings, but the benefit that you derive is determined by your data and your backup policies. Workloads with a high database content generally have the highest deduplication ratios. However, product functions, such as IBM Tivoli Storage Manager incremental forever backup, Oracle Recovery Manager (RMAN), or LiteSpeed for SQL Server, can affect the deduplication ratio.
Compressed, encrypted, or otherwise scrambled workloads typically do not benefit from deduplication, because the potential deduplication candidates are no longer similar. For more information, see 2.6, “Data types” on page 35.
2.1.1 HyperFactor, deduplication, and bandwidth savings
The cornerstone of ProtecTIER is HyperFactor, the IBM technology that deduplicates data inline as it is received from the backup application. ProtecTIER bandwidth-efficient replication, inline performance, and scalability directly stem from the technological breakthroughs inherent to HyperFactor.
HyperFactor is based on a series of algorithms that identify and filter out the elements of a data stream that were previously stored by ProtecTIER. Over time, HyperFactor can increase the usable capacity of an amount of physical storage by 25 times or more.
With replication, the data reduction value of HyperFactor is extended to bandwidth savings and storage savings for the disaster recovery (DR) operation. These performance and scalability attributes are critical for the DR operation, in addition to the primary site data protection operation.
When a change occurs at the source site of a replication grid, the differences between the original and the copy elements are calculated, and only the changes are sent over the replication link. The search for similar elements is quick, and uses a small and efficient memory-resident index. After similar data elements are found, HyperFactor compares the new data to the similar data to identify and store only the byte-level changes.
With this approach, HyperFactor can surpass the reduction ratios that are attainable by any other data reduction method. HyperFactor can reduce any duplicate data, regardless of its location or how recently it was stored. When new data is received, HyperFactor checks to see whether similar data is already stored. If similar data is already stored, then only the difference between the new data and previously stored data must be retained. This technique is an effective and high performance one for identifying duplicate data.
Data deduplication using the HyperFactor technology identifies data similarities, and checks those similarities against the fixed size Memory Resident Index every time new data is received. When similar matches are found, a binary differential comparison is performed on similar elements. Unique data with corresponding pointers is stored in the repository and the Memory Resident Index is updated with the new similarities. Existing data is not stored again.
HyperFactor data deduplication uses a fixed-size 4 gigabyte (GB) Memory Resident Index to track similarities for up to 1 petabyte (PB) of physical disk in a single repository. Depending on the data deduplication ratio for your data, you could store much more than 1 PB of data on your disk array.
For example, with a ratio of 12:1, you can store 12 PB of data on 1 PB of a disk array. With the Memory Resident Index, HyperFactor can identify potentially duplicate data quickly for large amounts of data, and it does this action on data load or inline, reducing the amount of processing required for your data.
The read-back rate of the ProtecTIER deduplication technology is generally higher than the write rate to the system, because there is no risk of fragmentation. No access to the index or heavy computation is required during a restore; the system needs only to open the metadata files and fetch the data according to the pointers that they contain.
2.2 ProtecTIER HyperFactor deduplication processing
Data deduplication is performed while the data is being backed up to the ProtecTIER (inline) server, in contrast to after the data is written to it (post processing). The advantage of inline data deduplication is that the data is processed only once, and no additional processing is performed after the backup window. Inline data deduplication requires less disk storage, because the native data is not stored before data deduplication.
2.3 Components of a ProtecTIER system
The ProtecTIER data deduplication system consists of three main components:
ProtecTIER server
HyperFactor deduplication software
Disk storage subsystem
Two of these components, the ProtecTIER server and the HyperFactor deduplication software, are always bundled together for your convenience. Depending on the model of ProtecTIER that you look at, you might have a bundled disk storage subsystem. Do not share the storage subsystem assigned to ProtecTIER with other applications. For an overview of ProtecTIER models, see 1.3, “ProtecTIER models” on page 11.
The components shown in Figure 2-2 are described in the next three sections.
Figure 2-2 ProtecTIER components
2.3.1 ProtecTIER server
Every ProtecTIER deduplication system uses a server with the Red Hat Enterprise Linux (RHEL) operating system on which the HyperFactor software runs.
The IBM System Storage TS7620 ProtecTIER Deduplication Appliance Express (TS7620) comes as a bundle (a 3959 SM2 server and LSI MegaRAID system storage) that enables up to 300 megabytes per second (MBps) of performance, depending on the ProtecTIER interface and the number of storage expansion drawers that are connected.
 
Note: The TS7650 Appliance (3958 AP1) and TS7610 Express (3959 SM1) are withdrawn from marketing. For details about support, see the IBM Software Support Lifecycle website and search for ProtecTIER.
The TS7650G server (3958 DD5) is a high-performance configuration. It can support 1600 MBps or more, depending on the configuration of the installation environment. A ProtecTIER high availability (HA), active-active, two-server cluster configuration (in a cluster, the servers are also known as nodes) is available with the TS7650G solution. A TS7650G two-server cluster can deliver 2500 MBps or more, depending on the configuration of the installation environment.
ProtecTIER version 3.4 introduces a new hardware platform, the TS7650G server (3958 DD6), which is also a high-performance configuration. Its performance characteristics are similar to those of the 3958 DD5 server; however, the footprint of the installed system (as shown in Figure 2-2) is reduced by up to 80%, because even the full HA configuration with two nodes occupies only two units in the rack.
2.3.2 HyperFactor deduplication algorithm
The HyperFactor data deduplication solution can be ordered in three interface styles:
The VTL interface
The OST API (Versions of ProtecTIER earlier than V3.4)
The FSI with CIFS and NFS support (PGA v3.4)
Because the interface methods cannot be mixed, you must choose one or deploy multiple ProtecTIER models simultaneously.
 
Note: With the ProtecTIER FSI model, you can have CIFS and NFS connectivity with the same machine.
2.3.3 Disk storage subsystem
Data that is processed by the ProtecTIER HyperFactor data deduplication software is stored on disk. The ProtecTIER appliance system, the TS7620 Appliance Express, is pre-bundled with disk storage in a ready-to-run configuration.
The ProtecTIER TS7650G Gateway server attaches to a wide variety of disk storage subsystems, which must be made available separately. For a list of supported disk systems, see the TS7650/TS7650G independent software vendor (ISV) and Interoperability Matrix at the IBM System Storage Interoperation Center (SSIC) web page.
 
Compression: ProtecTIER performs compression after the deduplication process completes, unless you decide to turn off compression, which is not recommended. Allocating disk arrays that run their own compression to ProtecTIER is also not recommended, because no additional benefit is gained.
If you want to use encryption, attach ProtecTIER to a back-end storage system that supports encryption, rather than backing up encrypted data to ProtecTIER. This action has a beneficial effect on your data deduplication ratio.
The ProtecTIER back-end storage subsystem must be a random access disk storage subsystem from the supported back-end storage list. Consult the TS7650/TS7650G ISV and Interoperability Matrix.
 
Attaching a physical tape drive: Attaching a physical tape drive directly to a ProtecTIER system is not supported. However, operating your backup application with a combination of ProtecTIER and physical tape drives is feasible.
2.4 Benefits of ProtecTIER HyperFactor
When appropriately deployed, data deduplication can provide benefits over traditional backups to disk or VTLs. Data deduplication enables remote vaulting of backup data using less bandwidth, because only changed data is sent to the remote site. Long-term data retention for local or offsite storage might still be achieved most economically with physical tape.
2.4.1 Flexibility
ProtecTIER is independent from the backup application. You can combine multiple backup applications from different vendors (included in the ProtecTIER compatibility matrix) to work with one single ProtecTIER solution. All attached backup solutions directly benefit from the whole ProtecTIER deduplication potential; sharing the repository with multiple backup applications is possible.
2.4.2 High availability
ProtecTIER offers true high availability with active-active, dual-node clustering for VTL and OST (for versions earlier than V3.4) models. With the correct setup, the ProtecTIER solution mimics the behavior of a physical tape library, so you can access your data even if a node is unavailable. The initial configuration of the FSI model is available only as a single node.
2.4.3 High performance, low storage requirements, and lower environmental costs
Data deduplication can reduce the amount of disk storage that is required to store data and keep it online. Performing restores from disk can be faster than restoring from tape, and having the data online for longer periods reduces the possibility that the required data might be sent offsite.
Inline deduplication has no need for more post-processing space, and therefore further reduces space requirements. If data deduplication reduces your disk storage requirements, then the environmental costs for running and cooling the disk storage are also reduced.
2.5 General ProtecTIER deduplication considerations
The following considerations and best practices can help you better understand what to do or not do regarding ProtecTIER deduplication.
2.5.1 Rethinking your overall backup strategy
You can achieve the preferred practices for ProtecTIER by adopting the examples that are provided in this chapter in your environment. Revisit your backup and recovery strategy from a broader perspective. One of the biggest benefits of ProtecTIER is fast restore performance. Most clients are more interested in quickly restoring their data, should the need arise, than in quickly backing it up. Restoring your data quickly and efficiently is crucial to business continuity.
Rethink the method that you use to run backups to enable the fastest restore possible. For example, backing up data to a ProtecTIER server with only a few streams, and using only a few mount points, is no longer necessary. Think big! It is all virtual and virtual tape drives are available at no additional cost.
Keeping the number of used cartridges low to save money no longer applies in the virtual world of ProtecTIER. Using as many cartridges in parallel as possible, to some extent, is a good idea. The maximum number of cartridges in a VTL is greater than 65,000. You do not need to use all of them, but you should plan on using more virtual resources than you would use physical resources. This guideline is true for virtual tape drives and virtual tape cartridges. This general approach is also true for FSI deployments, and also OST for versions earlier than 3.4.
If you use methodologies such as client compression to reduce the load on your network, you might want to rethink compression too. Most pipes are “fat,” meaning that your infrastructure has plenty of bandwidth to support many uncompressed backups. This situation ensures faster backups and faster restores. This condition is true for network and Fibre Channel (FC) infrastructures. Local area network (LAN)-free backups in your data center can be possible if you do not have infrastructure bandwidth congestion.
If you perform incremental backups, especially for your databases, you might also want to rethink this process for critical applications. Multiple full backups, especially on a high frequency schedule, might appear to be a waste of space, but this situation is where you can benefit the most from ProtecTIER deduplication.
A ProtecTIER server has the best deduplication, the highest backup speed, and the highest restore speed if you write multiple full backups of the same objects to it. Your infrastructure should be up to the challenge because resources tend to sit idle during non-backup hours. So why not increase the usage of your already available resources?
As an additional benefit, the restore performance is further increased by the reduced number of restore steps. With ProtecTIER technology, you do not need to restore your database by first restoring the latest full backup, then multiple incremental backups, and finally applying the database logs. Simply restoring the latest full backup and applying the logs is sufficient to be at your recovery point objective (RPO).
Evaluate these suggestions with your data protection peers, staff, and infrastructure strategists to transform your data protection strategy and achieve the most out of your solution. This task is not always easy. If you cannot change the current setup now, at least make sure that you have an effect on the planning for future deployments.
2.5.2 Data reduction technologies should not be combined
ProtecTIER data deduplication is a data reduction technology. Compression is another data reduction technology. Tivoli Storage Manager (part of the IBM Spectrum Protect™ family) is an example of an application that provides its own brand of compression and deduplication. Tivoli Storage Manager also offers incremental forever backup, with which only changed data is backed up, so it can be thought of as a data reduction technology. There are many other potential data reduction technologies.
 
Important: Do not combine multiple data reduction technologies, because there is no benefit in compressing or deduplicating data multiple times. If your goal is to achieve a high deduplication ratio, disable all other data reduction technologies.
If you prefer to combine another data reduction technology with a ProtecTIER solution, a solution without deduplication is also available. Ask your IBM marketing representative for a ProtecTIER solution without a capacity license.
Some of the scenarios that enable the combination of data reduction technologies are described in this section.
Tivoli Storage Manager can combine both compression and deduplication. Details are explained in the Introduction to IBM Tivoli Storage Manager deduplication chapter of Implementing IBM Storage Data Deduplication Solutions, SG24-7888.
IBM DB2 database software can handle data in a way such that it can be compressed in DB2, but still achieves high deduplication ratios. For more information about using DB2 compression with a ProtecTIER repository, see 20.4.1, “Combining DB2 compression and ProtecTIER deduplication” on page 309.
2.5.3 Data streams must be in order
Many technologies that are available for improving performance and throughput for physical tape drives do not work well with deduplication. Multiplexing, for example, shuffles the data, so you cannot identify potential candidates for deduplication in the data stream. If you aim for a narrow backup window, increase the number of streams, increase parallelism, and disable multiplexing. Disabling multiplexing improves the HyperFactor process and increases performance.
Encryption also results in shuffled data. A small change in an encrypted file produces a file that, to a deduplication solution, appears different. Potential deduplication candidates cannot be identified, because the patterns do not match anymore. Analyze your environment for other potential data shuffling causes, and aim to eliminate them.
2.5.4 Data organization in your ProtecTIER repository
The ProtecTIER repository is the place where your deduplicated data is stored. You can define one or many VTLs with multiple slots and cartridges. You can define one or many storage units for the OST API, or you can have multiple file shares for the FSI. No matter what type of ProtecTIER repository you use, logically segment your data and group similar backup types together.
This setup enables your IBM Service Support Representative (SSR) to run a detailed deduplication analysis at cartridge granularity. If you can supply for detailed analysis a list of virtual cartridges, or a virtual cartridge range, that contains one specific type of backed-up data, the analysis provides valuable data that you can use to improve your data protection environment.
 
Organization: Apply a meaningful organization scheme to your backup data. For VTL, multiple slots and cartridges should align to different bar code ranges. For FSI, dedicated directories with meaningful names should align to dedicated backup servers.
2.5.5 The dynamics of the ProtecTIER repository
In addition to the data that you write into your ProtecTIER repository, there are two other major behaviors of the ProtecTIER repository that must be understood. First, the repository dynamically reacts to the quality of your data.
If the data that you back up to the ProtecTIER repository suddenly changes and enables a higher deduplication ratio, the repository adapts and can store more data. If the quality of your data changes and enables only a reduced deduplication ratio, the ProtecTIER repository also reacts to this change, and less data can be stored.
 
Repository size: The ProtecTIER nominal repository size is calculated by using the following formula:
Physical Repository Size x HyperFactor Ratio = Available Free Space for you to write to (Nominal Free Space)
If your HyperFactor ratio changes, the available space for you to write to adapts.
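The formula above can be illustrated with a short sketch; the function name and the capacity figures are hypothetical examples, not product values.

```python
def nominal_free_space(physical_tb, hyperfactor_ratio):
    """Physical Repository Size x HyperFactor Ratio = Nominal Free Space."""
    return physical_tb * hyperfactor_ratio

# 100 TB of physical disk at a 12:1 HyperFactor ratio:
print(nominal_free_space(100, 12))  # 1200 (TB of nominal, writable space)

# If the ratio drops to 8:1, the writable nominal space shrinks accordingly:
print(nominal_free_space(100, 8))   # 800
```

This is why a change in the quality of your backup data directly grows or shrinks the space that appears available for writing, even though the physical capacity is unchanged.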
A ProtecTIER repository is not directly aware of your data retention requirements; it stores all data unless informed otherwise. For VTL emulations especially, it is important to specify whether you still need the data.
As an example, IBM Spectrum Protect™ (formerly Tivoli Storage Manager) uses the RelabelScratch option of its library definition to communicate to the ProtecTIER repository that the space of a virtual cartridge can be freed. Other backup applications might rely on housekeeping scripts to initiate a label sequence or a write sequence from the beginning of tape, which has the same effect. Make sure to regularly free up unused space.
After you release the unused space, it becomes marked as Pending in the ProtecTIER repository. The ProtecTIER repository then automatically uses internal processes to optimize the available space for future backups:
Internal Deleter processes reduce the Pending space and, in the process, create Fragmented space.
Internal Defragger processes then reduce the Fragmented space.
In the Capacity section of the ProtecTIER GUI (Figure 2-3), the pie chart at the right side shows the nominal data, and you can see the pending space. In this example, the pending space is 16.9 terabytes (TB).
Figure 2-3 ProtecTIER repository with pending nominal space
As a result of the delete operation, some fragmented space can occur, as shown in the pie chart on the left in Figure 2-3. Further ProtecTIER internal housekeeping eliminates that space. The newly reworked repository is perfectly aligned to your next incoming backup.
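The Deleter and Defragger flow described above can be sketched as a small space-accounting model. The state names follow the text, but the function names and the capacity figures (loosely based on the Figure 2-3 example) are hypothetical.

```python
# Hypothetical space-accounting sketch of the repository states in the text.
space = {"used": 80.0, "pending": 16.9, "fragmented": 0.0, "free": 3.1}

def deleter(space, amount_tb):
    """Deleter reduces Pending space and, in the process, creates Fragmented space."""
    moved = min(amount_tb, space["pending"])
    space["pending"] -= moved
    space["fragmented"] += moved

def defragger(space, amount_tb):
    """Defragger reduces Fragmented space, returning it as free, writable space."""
    moved = min(amount_tb, space["fragmented"])
    space["fragmented"] -= moved
    space["free"] += moved

deleter(space, 16.9)    # all pending space becomes fragmented
defragger(space, 16.9)  # housekeeping reclaims it as free space
print(space["pending"], space["fragmented"], round(space["free"], 1))  # 0.0 0.0 20.0
```

The point of the sketch is the ordering: released space is not immediately writable; it passes through the Pending and Fragmented states before housekeeping returns it as free space for the next incoming backup.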
2.5.6 ProtecTIER repository usage
From a technical standpoint, there is no problem with having a ProtecTIER repository that is 100% used (although from a usability standpoint it is not a good practice to permit a repository to get 100% full). After you reach a steady state where daily housekeeping frees up enough space to enable daily backups, a high usage is possible.
In reality, data tends to grow, so sooner or later you might face changed requirements for your ProtecTIER repository size. Configure an automated message that informs you when the repository usage crosses a specified threshold. Depending on the time that you need to prepare a capacity upgrade of the ProtecTIER back-end storage, a value such as 80% or 90% can be selected to provide ample preparation time.
To configure an automated message triggered by a threshold, follow these steps:
1. To access the configuration window, click System → Configuration in the ProtecTIER Manager GUI, as shown in Figure 2-4.
Figure 2-4 ProtecTIER Manager GUI
2. Click Physical Space Threshold Alerts and select the values for information and warning messages, as shown in Figure 2-5.
Figure 2-5 Physical Space Threshold Alerts
 
Important: If an out-of-space condition occurs, adding more virtual cartridges to a VTL does not enable you to store more data in your ProtecTIER repository. You must expand your repository by adding more physical disks to the back end to store more data.
2.5.7 Compression
Compression has a negative effect on your deduplication ratio. It effectively shuffles the data sent to the ProtecTIER repository, making pattern matching difficult. As expected, this action affects data matching rates and the factoring performance. The ProtecTIER repository compresses the data before it is written to the back-end physical disk. To avoid this negative effect, disable any compression features that are defined in the backup server for the ProtecTIER repository. Client compression should be disabled as well.
 
Note: Compression can hide in unexpected places. Table and Row compression features of databases, IBM Lotus Notes® compaction technology, compressed files, and *.mpeg files are all examples of compressed data. Compressed data files are not necessarily easily identified, but still lower your HyperFactor deduplication ratio.
2.5.8 Encryption
Encryption has a negative effect on your deduplication ratio. It makes each piece of data that is sent to the ProtecTIER repository unique, including duplicate data. This situation affects the data matching rates and the factoring performance. Even if the same data is sent each time, it appears differently to the deduplication engine, as shown in Figure 2-6. To avoid this negative effect, disable any encryption features working with data that is sent to ProtecTIER.
Figure 2-6 Challenges combining encryption with data deduplication
 
Important:
If you prefer to run your environment with encryption, consider enabling disk storage-based encryption, for example, IBM System Storage DS8870, which features Full Disk Encryption (FDE).
If you prefer to have client-side encryption enabled, consider using a ProtecTIER solution without deduplication, as described in 2.5.2, “Data reduction technologies should not be combined” on page 29.
Using an encryption switch between the ProtecTIER server and the storage system has been implemented in the field, but this is only supported under a request for product quotation (RPQ).
 
2.5.9 Database logs and other data types with high data change rates
If you have specific data with high change rates, you might decide to point the backup of this data to a target other than the ProtecTIER repository, to maximize your deduplication ratio in ProtecTIER. For example, database logs are known to have a high change rate, namely 100%. As database logs track all changes in the database, they are never identical. Consider multiple ProtecTIER deployments, some with deduplication enabled and some with deduplication disabled if you prefer to store data on VTLs.
 
Backing up database logs: You can back up database logs to a ProtecTIER repository without issue, but be aware that it has an effect on your deduplication ratio.
2.5.10 Multiplexing
Multiplexing has a negative effect on your deduplication ratio. It mixes up the bits of data from many different sources. This situation makes it harder to detect segments of data that already exist in the repository, so the HyperFactor and compression rates are greatly reduced. If you want to avoid this situation, disable any multiplexing features in your backup environment. To meet your backup window needs, increase the number of streams and the parallelism of the backup operation.
2.5.11 Tape block size
A large tape block size positively affects your HyperFactor deduplication ratio. To optimize the backup server, set the block size for data that is sent to the (virtual) tape drives to at least 256 KB.
2.5.12 File size
Many small files, less than 32 kilobytes (KB) in size, have a negative effect on your deduplication ratio. They do not factor well, although the built-in compression might reduce their stored size. If you have a special application that generates many of these small files, they are probably not good deduplication candidates.
2.6 Data types
Deduplication is primarily influenced by the type of data that you have. Depending on whether the data is structured to a high degree or unstructured, and possibly already compressed, deduplication yields a higher or lower ratio. For more information about data types, see Chapter 20, “Application considerations and data types” on page 295.
2.6.1 Candidates for a high factoring ratio
Potential candidates for a high deduplication ratio are all kinds of structured data. For example, databases are perfect candidates for deduplication, as is email. Most applications that deal with structured data, such as databases and email, offer some compression to reduce the amount of storage the application data needs.
Because these types of data are good candidates for data reduction in general, many application vendors already have implemented some compression, compaction, or defragmentation. Turning off these application-internal data reduction technologies, or ensuring that they do not affect the backup data stream, enables high deduplication ratios.
For an example of effectively using DB2 compression with a ProtecTIER repository, see 20.4.1, “Combining DB2 compression and ProtecTIER deduplication” on page 309.
2.6.2 Candidates for a low factoring ratio
Data types that are unstructured have a negative effect on the achievable data deduplication ratio. Image data is an example of this type of data. Some image formats include *.jpg, *.exif, *.tiff, or *.gif. All of them come with compression that shuffles the data and reduces the achievable deduplication ratio. This situation is also true for video formats, such as *.mpg, *.mp4, *.3gp, *.flv, or *.asf. All of these data types are also compressed, which affects your deduplication ratio in a negative way.
The same situation generally applies to voice or audio data. Formats, such as *.mp3, *.aac, *.ogg, *.wma, or *.m4a, are also compressed. Backing up image files, video files, or audio files to a ProtecTIER repository results in a combination of data reduction technologies. This situation produces low deduplication ratios, because already reduced data cannot be reduced again (for more information, see 2.5.2, “Data reduction technologies should not be combined” on page 29).
All of the mentioned file types include compression. This compression does not work well with data deduplication. For the same reason, archives are also not good deduplication candidates because most archives are already compressed. File types, such as *.zip (Phil Katz zip, such as pkzip and pkunzip), *.gz (GNU zip, such as gzip and gzip -d), *.rar, or *.tgz, all use a compression algorithm.
 
Note: Multiple full backups of identical data yield high deduplication ratios. If you back up a compressed or encrypted file multiple times without changing it between the backup cycles, you have a high deduplication ratio. Changing only one single file in a huge compressed archive affects the whole data structure of that archive, which does not result in good deduplication.