FlexClone volumes
This chapter introduces FlexClones and helps storage system administrators learn about the full value that FlexClone volumes can bring to their operations.
The following topics are covered:
 – Introduction to FlexClone volumes
 – FlexClone operation
 – Practical applications of FlexClone
 – FlexClone performance
 – Creating a FlexClone
 – Accessing FlexClone volumes
 – Splitting FlexClone volumes
 – Summary
6.1 Introduction to FlexClone volumes
In this chapter, we describe a feature that allows IBM System Storage N series administrators to instantly create clones of a flexible volume (FlexVol). A FlexClone volume is a writable point-in-time image of a FlexVol volume or another FlexClone volume, as shown in Figure 6-1.
Figure 6-1 FlexClone usage
FlexClone volumes add a new level of agility and efficiency to storage operations. They take only a few seconds to create, and are created without interrupting access to the parent FlexVol volume.
FlexClone volumes use space efficiently, taking advantage of the Data ONTAP architecture to store only the data that changes between the parent and the clone. This represents a significant potential saving in dollars, space, and energy. In addition to these benefits, FlexClone volumes have the same high performance as other kinds of volumes, as shown in Figure 6-2.
Figure 6-2 Before FlexClone
Conceptually, FlexClone volumes are useful for any situation where testing or development occurs, any situation where progress is made by locking in incremental improvements, and any situation where there is a need to distribute data in changeable form without endangering the integrity of the original, as shown in Figure 6-3.
Figure 6-3 Testing with FlexClone volumes
For example, imagine a situation where the IT staff must make substantive changes to a production environment, as shown in Figure 6-4. However, the cost and risk involved are too high to make them on the production volume. Ideally, there would be an instant writable copy of the production system available at minimal cost in terms of storage and service interruptions.
Figure 6-4 FlexClone example
Figure 6-4 shows the following process:
T0: FlexClone 1 is created and changes are applied and verified. The changed blocks are written to new locations.
T1: FlexClone 2 is created; the changes from FlexClone 1 plus additional changes are applied. Changed blocks are written to new locations. (The original FlexVol and FlexClone 1 blocks are untouched.)
T2: FlexClone 3 is created; the changes from FlexClone 1 and FlexClone 2 plus additional changes are applied. Changed blocks are written to new locations. (The original FlexVol, FlexClone 1, and FlexClone 2 blocks are untouched.)
T3: FlexClone 4 is created; the changes from FlexClone 1, FlexClone 2, and FlexClone 3 plus additional changes are applied. The testing fails, and this FlexClone is deleted. (No changes are made to the original FlexVol, FlexClone 1, FlexClone 2, or FlexClone 3.)
T4: FlexClone 5 is created; the changes from FlexClone 1, FlexClone 2, and FlexClone 3 plus additional changes are applied and verified.
By using FlexClone volumes, the IT staff gets an instant point-in-time copy of the production data that is created transparently and uses only enough space to hold the desired changes. The staff can try out upgrades using FlexClone volumes.
At every point where the IT staff make solid progress, they clone their working FlexClone volume to lock in the successes. At any point where they get stuck, they just destroy the working clone and go back to the point of their last success. When everything is finally working as planned, the staff can either split off the clone to replace the current production volumes or codify the successful upgrade process to use on the production system during the next maintenance window.
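As a minimal sketch of this workflow on the CLI (the volume names upgrade_step1 and upgrade_step2 are hypothetical), a milestone is locked in by cloning the working clone, a failed attempt is discarded, and a successful result is split off as an independent volume:

vol clone create upgrade_step2 -b upgrade_step1
vol offline upgrade_step2
vol destroy upgrade_step2
vol clone split start upgrade_step1

The first command locks in the progress made on upgrade_step1 by cloning it (a new base Snapshot is created automatically). The next two commands show how a failed working clone is discarded without touching upgrade_step1 or the original FlexVol volume. The last command shows how a successful clone is promoted to an independent volume, which can then replace the production volume.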
To summarize, FlexClone allows you to make the necessary changes to your infrastructure without worrying about crashing your production systems or making untested changes on the system under tight maintenance window deadlines. The results are less risk, less stress, and higher levels of service for IT customers.
6.2 FlexClone operation
FlexClone volumes have all the capabilities of a FlexVol volume, including growing, shrinking, and being the source of a Snapshot or even another FlexClone volume. The technology that makes it all possible is integral to how Data ONTAP manages storage.
IBM System Storage N series storage systems use the Write Anywhere File Layout (WAFL) to manage disk storage, and FlexClone writes use free blocks in the aggregate. Any new data written to the volume does not need to go to a specific spot on the disk; it can be written anywhere. WAFL then updates the metadata to integrate the newly written data into the correct place in the file system.
If the new data is meant to replace older data, and the older data is not part of a Snapshot, WAFL marks the blocks containing the old data as reusable. This happens asynchronously and does not affect performance. Snapshot copies work by making a copy of the metadata associated with the volume: Data ONTAP preserves pointers to all the disk blocks in use at the time the Snapshot is created.
When a file is changed, the Snapshot still points to the disk blocks where the file existed before it was modified, and changes are written to new disk blocks. As data is changed in the parent FlexVol, the original data blocks stay associated with the Snapshot, rather than getting marked for reuse.
All the metadata updates are just pointer changes, and the storage system takes advantage of locality of reference, non-volatile RAM (NVRAM), and RAID technology to keep everything fast and reliable. Figure 6-5 provides a graphical illustration of how it works.
FlexClone reads are satisfied in either of the following ways:
From blocks that have been written to the FlexClone volume itself
From the parent FlexVol volume, if the data is unchanged since the Snapshot was taken
Figure 6-5 Snapshot
You can think of a FlexClone volume as a transparent writable layer in front of the Snapshot (Figure 6-6). A FlexClone volume is writable, so it needs some physical space to store the data that is written to the clone. It uses the same mechanism used by Snapshot copies to get available blocks from the containing aggregate.
Figure 6-6 Think of a FlexClone volume as a transparent writable layer in front of a Snapshot
A Snapshot simply links to existing data that was overwritten in the parent. In contrast, a FlexClone volume stores the data written to it on disk (using WAFL) and then links to the new data as well (Figure 6-7). The disk space associated with the Snapshot and FlexClone is accounted for separately from the data in the parent FlexVol.
Figure 6-7 FlexClone operation
When a FlexClone volume is first created, it needs to know the parent FlexVol and also the Snapshot of the parent to use as its base. The Snapshot can already exist, or it can be created automatically as part of the cloning operation.
The FlexClone volume takes a copy of the Snapshot metadata and then updates its metadata as the clone volume is created. Creating the FlexClone volume takes just a few moments, because the copied metadata is small compared with the actual data.
The parent FlexVol can change independently of the FlexClone volume because the Snapshot is there to keep track of the changes and prevent the original parent’s blocks from being reused while the Snapshot exists. The same Snapshot is read-only and can be efficiently reused as the base for multiple FlexClone volumes.
Space is used efficiently, because the only new disk space consumed is for the small amount of clone metadata and for updates or additions made to either the parent FlexVol volume or the FlexClone volume.
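For example, reusing the volume and Snapshot names from the examples later in this chapter (a sketch only), several clones can be created from the same base Snapshot without consuming additional data blocks:

vol clone create cifs_vol1_clone1 -b cifs_vol1 cifs_vol1_snap1
vol clone create cifs_vol1_clone2 -b cifs_vol1 cifs_vol1_snap1

Both clones share the blocks captured by cifs_vol1_snap1; only the writes subsequently made to each clone consume new space in the aggregate.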
FlexClone volumes appear to the storage administrator just like a FlexVol, that is, they look like a regular volume and have all of the same properties and capabilities. Using the CLI, FilerView, or DataFabric Manager, you can manage volumes, Snapshot copies, and FlexClone volumes, including getting their status (as shown in Example 6-1) and seeing the relationships between the parent, Snapshot, and clone.
Example 6-1 The vol status command
itsotuc1> vol status cifs_vol1_clone5
Volume State Status Options
cifs_vol1_clone5 online raid_dp, flex create_ucode=on,
sis convert_ucode=on
Clone, backed by volume 'cifs_vol1', snapshot 'clone_cifs_vol1_clone5.1'
Volume UUID: eb289e50-5c00-11e0-b9d8-00a098098a07
Containing aggregate: 'aggr1'
 
itsotuc1>
The CLI is required to create and split a FlexClone volume. FlexClone volumes are treated just like a FlexVol volume for most operations. The main limitation is that Data ONTAP forbids operations that would destroy the parent FlexVol volume or base Snapshot while dependent FlexClone volumes exist.
Other caveats apply. Management information in external files (for example, /etc) associated with the parent FlexVol volume is not copied. Quotas for the clone volume are reset rather than added to those of the parent FlexVol volume. LUNs in the cloned volume are automatically marked offline until they are uniquely mapped to a host system. Lastly, splitting the FlexClone volume from the parent volume to create a fully independent volume requires adequate free space in the aggregate to copy the shared blocks.
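As a hypothetical illustration of the LUN caveat (the LUN path and initiator group name here are examples only, not from this chapter's environment), a LUN inside a newly created clone volume is mapped to a host and then brought online:

lun map /vol/cifs_vol1_clone1/lun0 test_igroup 0
lun online /vol/cifs_vol1_clone1/lun0

Until the cloned LUN is uniquely mapped and brought online, it is not visible to hosts.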
6.3 Practical applications of FlexClone
FlexClone technology enables multiple, instant data set clones with almost no additional storage consumed at creation time. It provides dramatic improvements for application test and development environments, and is tightly integrated with the file system technology and microkernel design in a way that renders competitive methods archaic.
FlexClone volumes are ideal for managing production data sets. They allow effortless error containment for bug fixing and development. They simplify platform upgrades for Enterprise Resource Planning (ERP) and Customer Relationship Management (CRM) applications. Instant FlexClone volumes provide data for multiple simulations against large data sets for ECAD, MCAD, and Seismic applications, without unnecessary duplication or waste of physical space.
The ability to split a FlexClone volume from its parent allows administrators to easily create new permanent, independent volumes for forking project data. FlexClone volumes have their limits, but the real range of applications is limited only by imagination. Table 6-1 lists a few of the more common examples.
Table 6-1 Application of FlexClone
Application area
Benefits
Application testing
You can make the necessary changes to infrastructure without worrying about crashing production systems, avoid making untested changes on the system under tight maintenance window deadlines, and experience less risk, less stress, and higher levels of service.
Data mining
Data mining operations and software can be implemented more flexibly because both reads and writes are allowed.
Parallel processing
Multiple FlexClone volumes of a single milestone/production data set can be used by parallel processing applications across multiple servers to get results more quickly.
Online backup
You can immediately resume a read-write workload upon discovering corruption in the production data set by mounting the clone instead. Use database features such as IBM DB2® write-suspend or Oracle hot backup mode to transparently prepare the database volumes for cloning by briefly delaying write activity to the database. This step is necessary because databases must maintain a point of consistency. (A command sketch of this workflow follows the table.)
System deployment
You can maintain a template environment and use FlexClone volumes to build and deploy either identical or varied environments, create a test template that is cloned as needed for predictable testing, and migrate faster and more efficiently by using the Data ONTAP SnapMirror feature in combination with FlexClone volumes.
IT operations
You can maintain multiple copies of production systems (live, development, test, reporting, and so on) and refresh working FlexClone volumes regularly to work on data as close to live production systems as practical.
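The online backup row above can be sketched with a couple of commands (the volume and Snapshot names db_vol, db_backup_snap, and db_vol_verify are illustrative only, and the quiesce step depends on the database product). The database is first held at a consistent point with its own tools (for example, DB2 write-suspend or Oracle hot backup mode), and then:

snap create db_vol db_backup_snap
vol clone create db_vol_verify -b db_vol db_backup_snap

After the Snapshot exists, database write activity is resumed. The clone db_vol_verify can be mounted read-write at any time to verify the backup, or to take over if corruption is found in the production data set.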
6.4 FlexClone performance
The performance of FlexClone volumes is nearly identical to the performance of flexible volumes. This is due to the way that cloning is tightly integrated with WAFL and the IBM System Storage N series architecture. Unlike other implementations of cloning technology, FlexClone volumes are implemented as a simple extension to existing core mechanisms.
The impact of cloning operations on other system activity will also be relatively light and transitory. The FlexClone create operation is nearly identical to creating a Snapshot. Some CPU, memory, and disk resources are used during the operation, which usually completes in seconds. The clone metadata is held in memory like a regular volume, so the impact on storage system memory consumption is identical to having another volume available. After the clone creation completes, all ongoing accesses to the clone are nearly identical to accessing a regular volume.
Splitting the FlexClone volume to create a fully independent volume also uses resources. While the split is occurring, the blocks shared with the parent are copied to free blocks in the aggregate (Figure 6-8). This incurs disk I/O operations and can potentially compete with other disk operations in the aggregate.
Figure 6-8 FlexClone split
The copy operation also uses CPU and memory resources, which might impact the performance of a fully loaded storage system. Data ONTAP addresses these potential issues by completing the split operation in the background, and sets priorities in a way that does not significantly impact foreground operations. It is also possible to manually stop and restart the split operation if some critical job requires the full resources of the storage system.
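For example (a sketch using the clone from the examples later in this chapter), a running split can be paused and resumed from the CLI:

vol clone split stop cifs_vol1_clone5
vol clone split start cifs_vol1_clone5

Stopping the split leaves the volume as a clone, and the blocks that were already copied do not need to be copied again when the split is restarted.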
The final area to consider is the impact on disk usage from frequent operations where FlexClone volumes are split off and used to replace the parent FlexVol volume. The split volume is allocated free blocks in the aggregate, taking contiguous chunks as they are available. If there is ample free space in the aggregate, the blocks allocated to the split volume will be mostly contiguous. If the split is used to replace the original volume, the blocks associated with the destroyed original volume will become available and create a potentially large free area within the aggregate. That free area will also be mostly contiguous.
In cases where many simultaneous volume operations reduce the contiguous regions available for volumes, Data ONTAP provides block reallocation functionality. The reallocate command makes defragmentation and sequential reallocation more flexible and effective. It reduces any impact of frequent clone split and replace operations, and optimizes performance after other disk operations (for example, adding disks to an aggregate) that might unbalance block allocations.
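A minimal sketch of using this functionality on one of this chapter's volumes (the exact options available vary by Data ONTAP release) might look like this:

reallocate on
reallocate measure /vol/cifs_vol1
reallocate start /vol/cifs_vol1
reallocate status

The measure command reports how unevenly the blocks of the volume are laid out, start runs a sequential reallocation scan in the background, and status shows the progress of running reallocation jobs.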
6.5 Creating a FlexClone
Our demonstration in this section uses the Data ONTAP CLI.
Perform the following steps:
1. When a FlexClone volume is created, the vol clone create command automatically creates a new Snapshot for the clone if none is specified. Alternatively, an existing Snapshot can be specified for the vol clone create command. We create a new Snapshot by issuing the snap create <volume_name> <snap_name> command, as shown in Example 6-2.
Example 6-2 Create and list Snapshots
itsotuc1> snap create cifs_vol1 cifs_vol1_snap1
itsotuc1> snap list cifs_vol1
Volume cifs_vol1
working...
 
%/used %/total date name
---------- ---------- ------------ --------
27% (27%) 0% ( 0%) Mar 29 22:51 cifs_vol1_snap1
44% (29%) 0% ( 0%) Mar 29 20:00 hourly.0
57% (35%) 0% ( 0%) Mar 29 16:00 hourly.1
63% (29%) 0% ( 0%) Mar 29 12:00 hourly.2
68% (29%) 0% ( 0%) Mar 29 08:00 hourly.3
71% (26%) 0% ( 0%) Mar 29 00:00 nightly.0
 
itsotuc1>
2. Run the vol status <volume_name> command on your volume to obtain the current status of the volume and to identify the aggregate to which it belongs (Example 6-3).
Example 6-3 The vol status command
itsotuc1> vol status cifs_vol1
Volume State Status Options
cifs_vol1 online raid_dp, flex create_ucode=on,
convert_ucode=on
Volume UUID: 2321fa5a-5b24-11e0-aade-00a098098a07
Containing aggregate: 'aggr1'
itsotuc1>
itsotuc1> vol container cifs_vol1
Volume 'cifs_vol1' is contained in aggregate 'aggr1'
 
itsotuc1>
3. Use the df -g command to check the available disk space on your volume (Example 6-4).
Example 6-4 df -g command (output modified for clarity)
itsotuc1> df -g
Filesystem total used avail capacity Mounted on
/vol/cifs_vol1/ 8GB 0GB 7GB 0% /vol/cifs_vol1/
/vol/cifs_vol1/.snapshot 2GB 0GB 1GB 0% /vol/cifs_vol1/.snapshot
/vol/vol0/ 164GB 4GB 159GB 3% /vol/vol0/
/vol/vol0/.snapshot 41GB 0GB 41GB 0% /vol/vol0/.snapshot
 
itsotuc1>
4. Use the df -Ag command to check the available disk space on the aggregates (Example 6-5).
Example 6-5 df -Ag command
itsotuc1> df -Ag
Aggregate total used avail capacity
aggr1 227GB 10GB 217GB 4%
aggr1/.snapshot 11GB 0GB 11GB 0%
aggr0 227GB 206GB 20GB 91%
aggr0/.snapshot 11GB 0GB 11GB 3%
itsotuc1>
5. Clone the existing volume by issuing the following command:
vol clone create <cloneVol> -s volume -b <parentVol> <parentSnap>
Where:
 – <cloneVol> is the name of the new clone volume.
 – -s volume is the space guarantee for the volume.
 – -b <parentVol> is the volume to be cloned.
 – <parentSnap> is the parent Snapshot (it can be omitted, in which case a new Snapshot for the clone is created automatically).
6. Create the clone volume and check its status using the vol status command, as shown in Example 6-6.
Example 6-6 The vol clone create command
itsotuc1> vol clone create cifs_vol1_clone1 -s volume -b cifs_vol1 cifs_vol1_snap1
Tue Mar 29 23:26:51 GMT [wafl.volume.clone.created:info]: Volume clone cifs_vol1_clone1 of volume cifs_vol1 was created successfully.
Creation of clone volume 'cifs_vol1_clone1' has completed.
 
itsotuc1> vol status cifs_vol1_clone1
Volume State Status Options
cifs_vol1_clone1 online raid_dp, flex create_ucode=on,
convert_ucode=on
Clone, backed by volume 'cifs_vol1', snapshot 'cifs_vol1_snap1'
Volume UUID: 5b069910-5bee-11e0-b9d8-00a098098a07
Containing aggregate: 'aggr1'
itsotuc1>
The snap list command shows that the Snapshot we selected for our clone, cifs_vol1_snap1, is busy. This Snapshot is the source for our new clone, and hence it cannot be deleted before the FlexClone volume is deleted. See Example 6-7.
Example 6-7 Status of FlexClone creation
itsotuc1> snap list cifs_vol1
Volume cifs_vol1
working...
 
%/used %/total date name
---------- ---------- ------------ --------
10% (10%) 0% ( 0%) Mar 29 22:51 cifs_vol1_snap1 (busy,vclone)
18% (10%) 0% ( 0%) Mar 29 20:00 hourly.0
27% (13%) 0% ( 0%) Mar 29 16:00 hourly.1
32% (10%) 0% ( 0%) Mar 29 12:00 hourly.2
37% (10%) 0% ( 0%) Mar 29 08:00 hourly.3
41% ( 9%) 0% ( 0%) Mar 29 00:00 nightly.0
 
itsotuc1>
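As a follow-on sketch, attempting to remove this busy Snapshot is refused by Data ONTAP:

snap delete cifs_vol1 cifs_vol1_snap1

The command fails because the Snapshot is in use by the clone. The Snapshot becomes deletable only after cifs_vol1_clone1 is destroyed or split from its parent.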
6.6 Accessing FlexClone volumes
In the previous sections, we demonstrated how to create FlexClone volumes based on a parent volume and a Snapshot.
In this section, we explain how to connect to a FlexClone volume.
To connect to a FlexClone volume from a Windows 2008 server, perform the following steps:
1. Use the vol status command to check which volumes are available as shown in Example 6-8.
Example 6-8 Listing volumes
itsotuc1> vol status
Volume State Status Options
cifs_vol1 online raid_dp, flex create_ucode=on,
sis convert_ucode=on
vol0 online raid_dp, flex root
cifs_vol1_clone1 online raid_dp, flex create_ucode=on,
sis convert_ucode=on
cifs_vol1_clone2 online raid_dp, flex create_ucode=on,
sis convert_ucode=on
cifs_vol1_clone3 online raid_dp, flex create_ucode=on,
sis convert_ucode=on
cifs_vol1_clone4 online raid_dp, flex create_ucode=on,
sis convert_ucode=on
cifs_vol1_clone5 online raid_dp, flex create_ucode=on,
sis convert_ucode=on
itsotuc1> vol status cifs_vol1_clone5
Volume State Status Options
cifs_vol1_clone5 online raid_dp, flex create_ucode=on,
sis convert_ucode=on
Clone, backed by volume 'cifs_vol1', snapshot 'clone_cifs_vol1_clone5.1'
Volume UUID: eb289e50-5c00-11e0-b9d8-00a098098a07
Containing aggregate: 'aggr1'
itsotuc1>
2. In order to connect to cifs_vol1_clone5, we first create a share for our FlexClone volume.
We use System Manager for this task as shown in Figure 6-9.
Click Storage → Shares → Create → Browse.
Figure 6-9 Creating a CIFS share on our FlexClone volume
3. Use the Browse window to select cifs_vol1_clone5 to share as shown in Figure 6-10.
Click OK.
Figure 6-10 Select the location for the CIFS share
4. The location details are then updated in the previous dialog, as shown in Figure 6-11.
Click Create.
Figure 6-11 Preparing to create a CIFS share
5. Observe that our CIFS share creation is successful as shown in Figure 6-12.
Normally, next you would select Edit to modify the share settings, such as access permissions, but we will leave the default values in this example.
Figure 6-12 Add CIFS share successful
6. From our Windows 2008 server, click Start → Run and type the name of the CIFS share as shown in Figure 6-13.
Figure 6-13 Connecting the FlexClone CIFS share from a Windows 2008 server
7. Open the CIFS share and verify its content as shown in Figure 6-14.
Figure 6-14 Verifying that our data from the FlexClone is available
8. Write to the FlexClone CIFS share as shown in Figure 6-15.
Figure 6-15 Writing data to the FlexClone CIFS share
We have successfully created a writable FlexClone copy and accessed it from a Windows 2008 server.
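The same share can also be created from the Data ONTAP CLI instead of System Manager; a sketch (the share name clone5_share is arbitrary) is:

cifs shares -add clone5_share /vol/cifs_vol1_clone5

After the share exists, the Windows server connects to \\itsotuc1\clone5_share in the same way as shown in the preceding steps.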
6.7 Splitting FlexClone volumes
This section describes how a FlexClone volume can be split from its parent volume.
There are some limitations to FlexClone volumes, and it is likely that at some point an IBM N series administrator will want to obtain the full benefits of a FlexVol volume. To do so, the FlexClone volume can be split from its parent volume and Snapshot.
 
Reference: For more information about the limitations of FlexClone volumes, see the
IBM System Storage N series Data ONTAP 8.0 7-Mode File Access and Protocols Management Guide, available at:
While they remain clones, FlexClone volumes do not take up their own space in the containing aggregate. A FlexClone volume is based on pointers to a Snapshot, and the only space it requires from the aggregate is the delta: the data that has been written to the writable FlexClone volume since it was created.
We now demonstrate how to split a FlexClone volume from its parent FlexVol volume, so that it becomes an individual FlexVol volume.
Example 6-9 shows how we check the available space in the containing aggregate aggr1 and in the FlexClone volume cifs_vol1_clone5.
Example 6-9 Checking space in aggr1 before split
itsotuc1> df -Ag aggr1
Aggregate total used avail capacity
aggr1 227GB 75GB 151GB 33%
aggr1/.snapshot 11GB 0GB 11GB 0%
itsotuc1>
itsotuc1> df -g cifs_vol1_clone5
Filesystem total used avail capacity Mounted on
/vol/cifs_vol1_clone5/ 24GB 22GB 1GB 92% /vol/cifs_vol1_clone5/
/vol/cifs_vol1_clone5/.snapshot 6GB 0GB 5GB 0% /vol/cifs_vol1_clone5/.snapshot
 
itsotuc1>
Now we split the clone as shown in Example 6-10.
Example 6-10 Splitting the clone from its parent
itsotuc1> vol status cifs_vol1_clone5
Volume State Status Options
cifs_vol1_clone5 online raid_dp, flex create_ucode=on,
sis convert_ucode=on
Clone, backed by volume 'cifs_vol1', snapshot 'clone_cifs_vol1_clone5.1'
Volume UUID: eb289e50-5c00-11e0-b9d8-00a098098a07
Containing aggregate: 'aggr1'
 
itsotuc1> vol clone split start cifs_vol1_clone5
Wed Mar 30 04:05:56 GMT [wafl.volume.clone.split.started:info]: Clone split was started for volume cifs_vol1_clone5
Wed Mar 30 04:05:56 GMT [wafl.scan.start:info]: Starting volume clone split on volume cifs_vol1_clone5.
Clone volume 'cifs_vol1_clone5' will be split from its parent.
Monitor system log or use 'vol clone split status' for progress.
 
itsotuc1> vol clone split status
Volume 'cifs_vol1_clone5', 13567 of 135681 inodes processed (9%)
980097 blocks scanned. 974729 blocks updated.
 
itsotuc1>
Splitting a FlexClone volume can take hours or even days if the FlexClone volume is very large. Check the IBM N series syslog, or check the actual progress with the vol clone split status CLI command. In our case, the 24 GB volume split took about 15 minutes.
As soon as the FlexClone split action is started, we can immediately see the result in available capacity for aggr1 as shown in Example 6-11.
Example 6-11 Checking space in aggr1 after split
itsotuc1> df -Ag aggr1
Aggregate total used avail capacity
aggr1 227GB 97GB 129GB 43%
aggr1/.snapshot 11GB 0GB 11GB 0%
 
itsotuc1> df -g cifs_vol1_clone5
Filesystem total used avail capacity Mounted on
/vol/cifs_vol1_clone5/ 24GB 22GB 1GB 93% /vol/cifs_vol1_clone5/
/vol/cifs_vol1_clone5/.snapshot 6GB 0GB 6GB 0% /vol/cifs_vol1_clone5/.snapshot
 
itsotuc1>
By observing the output from df -Ag and from df -g cifs_vol1_clone5, we can see that the available space in the aggregate is immediately reduced by 22 GB when the clone split is initiated. This is exactly how much space our new FlexVol volume (it is no longer a FlexClone volume) takes up in the aggregate.
6.8 Summary
Storage administrators now have access to greater flexibility and performance. Flexible volumes, aggregates, and RAID-DP provide unparalleled levels of storage virtualization, enabling IT staff to economically manage and protect enterprise data without compromise. FlexClone volumes are one of the many powerful features that make it possible, providing instantaneous writable volume copies that use only as much storage as necessary to hold new data.
FlexClone volumes enable and simplify many operations. Application testing benefits from less risk, less stress, and higher service levels by using FlexClone volumes to try out changes on clone volumes, and upgrades under tight maintenance windows can be completed by simply swapping tested FlexClone volumes for the originals.
Data mining and parallel processing benefit by using multiple writable FlexClone volumes from a single data set, all without using more physical storage than needed to hold the updates.
FlexClone volumes can be used as online backup and disaster recovery volumes, immediately resuming read-write operation if a problem occurs. System deployment becomes much easier by cloning template volumes for testing and rollout. IT operations benefit from multiple copies of the production system that can be used for testing and development and refreshed as needed to mirror the live data more closely.
 