Storage migration
This chapter describes the steps that are involved in migrating data from an existing external storage system to the capacity of the IBM SAN Volume Controller (SVC) by using the storage migration wizard. Migrating data from other storage systems to the SVC consolidates storage. It also allows for IBM Spectrum Virtualize features, such as Easy Tier, thin provisioning, compression, encryption, storage replication, and the easy-to-use graphical user interface (GUI) to be realized across all volumes.
Storage migration uses the volume mirroring functionality to allow reads and writes during the migration, minimizing disruption and downtime. After the migration is complete, the existing system can be retired. The SVC supports migration through Fibre Channel and Internet Small Computer Systems Interface (iSCSI) connections. Storage migration can be used to migrate data from other storage systems and from another IBM SVC.
This chapter includes the following topics:
9.1, “Storage migration overview”
9.2, “Storage migration wizard”
 
Note: This chapter does not cover migration outside of the storage migration wizard. To migrate data outside of the wizard, you must use Import. For information about the Import action, see Chapter 6, “Storage pools” on page 197.
9.1 Storage migration overview
To migrate data from an existing storage system to the SVC, it is necessary to use the built-in external virtualization capability. This capability places externally connected logical units (LUs) under the control of the SVC. After volumes are virtualized, hosts continue to access them but do so through the SVC, which acts as a proxy.
 
Attention: The system does not require a license for its own control and expansion enclosures. However, a license is required for each enclosure of any external systems that are being virtualized. Data can be migrated from existing storage systems to your system by using the external virtualization function within 45 days of purchase of the system without purchase of a license. After 45 days, any ongoing use of the external virtualization function requires a license for each enclosure in each external system.
Set the license temporarily during the migration process so that messages indicating a license violation are not sent. When the migration is complete, or after 45 days, either reset the license to its original limit or purchase a new license.
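If you prefer the command-line interface, the license limit can be viewed and adjusted with the lslicense and chlicense commands. The following is a minimal sketch; the value 10 is a placeholder, and the unit of the -virtualization parameter (enclosures or terabytes) depends on the product and licensing scheme:
    # Display the current license settings
    lslicense
    # Temporarily raise the external virtualization limit for the migration
    chlicense -virtualization 10
    # After the migration, reset the limit to the originally licensed value
    chlicense -virtualization <original_value>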
The following steps give an overview of the storage migration process:
Typically, storage systems divide storage into many SCSI LUs that are presented to hosts.
I/O to the LUs must be stopped, and changes must be made to the mapping of the external storage system LUs and to the fabric configuration so that the original LUs are presented directly to the SVC and no longer to the hosts. The SVC discovers the external LUs as unmanaged MDisks.
The unmanaged MDisks are imported to the SVC as image-mode volumes and placed into a temporary storage pool. This storage pool is now a logical container for the LUs.
Each MDisk has a one-to-one mapping with an image-mode volume. From a data perspective, the image-mode volumes represent the LUs exactly as they were before the import operation. The image-mode volumes are on the same physical drives of the external storage system and the data remains unchanged. The SVC is presenting active images of the LUs and is acting as a proxy.
The hosts must have the existing storage system multipath device driver removed, and are then configured for SVC attachment. The SVC hosts are defined with worldwide port names (WWPNs) or iSCSI qualified names (IQNs), and the volumes are mapped to the hosts. After the volumes are mapped, the hosts discover the SVC volumes through a host rescan or reboot operation.
IBM Spectrum Virtualize volume mirroring operations are then initiated. The image-mode volumes are mirrored to generic volumes. Volume mirroring is an online migration task, which means a host can still access and use the volumes during the mirror synchronization process.
After the mirror operations are complete, the image-mode volumes are removed. The external storage system LUs are now migrated and the now redundant storage can be decommissioned or reused elsewhere.
9.1.1 Interoperability and compatibility
Interoperability is an important consideration when a new storage system is set up in an environment that contains existing storage infrastructure. Before attaching any external storage systems to the SVC, see the IBM System Storage Interoperation Center (SSIC):
Select IBM System Storage SAN Volume Controller in Storage Family, then SVC Storage Controller Support in Storage Model. You can then refine your search by selecting the external storage controller that you want to use in the Storage Controller menu.
The matrix results give you indications about the external storage that you want to attach to the SVC, such as minimum firmware level or support for disks greater than 2 TB.
9.1.2 Prerequisites
Before the storage migration wizard can be started, the external storage system must be visible to the SVC. You also need to confirm that the restrictions and prerequisites are met.
Administrators can migrate data from the external storage system to the system by using either iSCSI connections, Fibre Channel connections, or Fibre Channel over Ethernet connections. For more details about how to manage external storage, see Chapter 6, “Storage pools” on page 197.
Prerequisites for Fibre Channel connections
The following are prerequisites for Fibre Channel connections:
Cable this system into the same storage area network (SAN) as the external storage system that you are migrating. If you are using Fibre Channel, connect the Fibre Channel cables to the Fibre Channel ports in both nodes of your system, and then to the Fibre Channel network. If you are using Fibre Channel over Ethernet, connect Ethernet cables to the 10 Gbps Ethernet ports.
Change VMware ESX host settings, or do not run VMware ESX. If you have VMware ESX server hosts, you must change settings on the VMware host so copies of the volumes can be recognized by the system after the migration is completed. To enable volume copies to be recognized by the system for VMware ESX hosts, you must complete one of the following actions:
 – Enable the EnableResignature setting.
 – Disable the DisallowSnapshotLUN setting.
To learn more about these settings, consult the documentation for the VMware ESX host. A hedged command-line sketch follows the note below.
 
Note: Test the setting changes on a non-production server. The LUN has a different unique identifier after it is imported. It appears as a mirrored volume to the VMware server.
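As an illustration only: on older VMware ESX 3.x hosts, these values could be changed from the service console as shown in the following sketch, whereas later ESXi releases resignature individual snapshot volumes with esxcli instead. Verify the correct procedure for your ESX version before relying on this sketch:
    # ESX 3.x service console (illustrative; verify for your ESX version)
    esxcfg-advcfg -s 1 /LVM/EnableResignature      # enable volume resignaturing
    esxcfg-advcfg -s 0 /LVM/DisallowSnapshotLUN    # allow snapshot LUNs to be seen
    # Later ESXi releases resignature per volume, for example:
    esxcli storage vmfs snapshot list
    esxcli storage vmfs snapshot resignature -l <volume_label>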
Prerequisites for iSCSI connections
The following are prerequisites for iSCSI connections:
Cable this system to the external storage system with a redundant switched fabric. Migrating iSCSI external storage requires that the system and the storage system are connected through an Ethernet switch. Symmetric ports on all nodes of the system must be connected to the same switch and must be configured on the same subnet.
In addition, modify the Ethernet port attributes to enable external storage connectivity on the port. To modify the Ethernet port for external storage, click Network → Ethernet Ports, right-click a configured port, and select Modify Storage Ports to enable the port for external storage connections. A hedged CLI sketch follows this list of prerequisites.
Cable the Ethernet ports on the storage system to fabric in the same way as the system and ensure that they are configured in the same subnet. Optionally, you can use a virtual local area network (VLAN) to define network traffic for the system ports.
For full redundancy, configure two Ethernet fabrics with separate Ethernet switches. If the source system nodes and the external storage system both have more than two Ethernet ports, additional redundant iSCSI connections can be established for increased throughput.
Change VMware ESX host settings, or do not run VMware ESX. If you have VMware ESX server hosts, you must change settings on the VMware host so copies of the volumes can be recognized by the system after the migration is completed. To enable volume copies to be recognized by the system for VMware ESX hosts, complete one of the following actions:
 – Enable the EnableResignature setting.
 – Disable the DisallowSnapshotLUN setting.
To learn more about these settings, consult the documentation for the VMware ESX host.
 
Note: Test the setting changes on a non-production server. The LUN has a different unique identifier after it is imported. It appears as a mirrored volume to the VMware server.
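The Modify Storage Ports GUI action corresponds to the cfgportip CLI command. The following is a minimal sketch; the node name and addresses are placeholders, and the -storage parameter is an assumption that depends on your software level, so verify it against your documentation:
    # Assign an IP address to Ethernet port 1 on node1 (placeholder values)
    cfgportip -node node1 -ip 192.168.1.10 -mask 255.255.255.0 -gw 192.168.1.1 1
    # Enable the port for external storage connections (the -storage parameter
    # is an assumption in this sketch; its availability depends on code level)
    cfgportip -node node1 -storage yes 1
    # Verify the port configuration
    lsportip 1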
If the external storage system is not detected, the warning message shown in Figure 9-1 is displayed when you attempt to start the migration wizard. Click Close and correct the problem before you try to start the migration wizard again.
Figure 9-1 Error message if no external storage is detected
9.2 Storage migration wizard
The storage migration wizard simplifies the migration task. The wizard features easy-to-follow windows that guide users through the entire process.
 
Attention: The risk of losing data when the storage migration wizard is used correctly is low. However, it is prudent to avoid potential data loss by creating a backup of all the data that is stored on the hosts, the existing storage systems, and the SVC before the wizard is used.
Complete the following steps to perform the migration by using the storage migration wizard:
1. Navigate to Pools → System Migration, as shown in Figure 9-2. The System Migration pane provides access to the storage migration wizard and displays information about the migration progress.
Figure 9-2 Accessing the System Migration pane
2. Click Start New Migration to begin the storage migration wizard, as shown in Figure 9-3.
Figure 9-3 Starting a migration
 
Note: Starting a new migration adds a volume to be migrated in the list displayed in the pane. After a volume is migrated, it will remain in the list until you “finalize” the migration.
3. If both Fibre Channel and iSCSI external systems are detected, a dialog is shown asking you which protocol should be used. Select the type of attachment between the SVC and the external system from which you want to migrate volumes and click Next. If only one type of attachment is detected, this dialog is not displayed.
 
4. When the wizard starts, you are prompted to verify the restrictions and prerequisites that are listed in Figure 9-4. Address the following restrictions and prerequisites:
 – Restrictions:
 • You are not using the storage migration wizard to migrate clustered hosts, including clusters of VMware hosts and Virtual I/O Servers (VIOS).
 • You are not using the storage migration wizard to migrate SAN boot images.
If you have either of these two environments, the migration must be performed outside of the wizard because more steps are required.
The VMware vSphere Storage vMotion feature might be an alternative for migrating VMware clusters. For information about this topic, see:
 – Prerequisites:
 • SVC nodes and the external storage system are connected to the same SAN fabric.
 • If there are VMware ESX hosts involved in the data migration, the VMware ESX hosts are set to allow volume copies to be recognized.
See 9.1.2, “Prerequisites” on page 393 for more details about the Storage Migration prerequisites.
If all restrictions are satisfied and prerequisites are met, select all of the boxes and click Next, as shown in Figure 9-4.
Figure 9-4 Restrictions and prerequisites confirmation
5. Prepare the environment for migration by following the on-screen instructions that are shown in Figure 9-5.
Figure 9-5 Preparing your environment for storage migration
The preparation phase includes the following steps:
a. Before migrating storage, ensure that all host operations are stopped to prevent applications from generating I/O to the system that is being migrated.
b. Remove all existing zones between the hosts and the system you are migrating.
c. Hosts usually do not support concurrent multipath drivers. You might need to remove drivers that are not compatible with the SVC from the hosts and use the recommended device drivers. For more information about supported drivers, check the IBM System Storage Interoperation Center (SSIC):
d. If you are migrating external storage systems that connect to the system by using Fibre Channel or Fibre Channel over Ethernet connections, ensure that you complete appropriate zoning changes to simplify migration. Use the following guidelines to ensure that zones are configured correctly for migration (a hedged switch CLI sketch appears after this preparation list):
 • Zoning rules
For every storage system, create one zone that contains this system's ports from every node and all external storage system ports, unless otherwise stated by the zoning guidelines for that storage system.
This system requires single-initiator zoning for all large configurations that contain more than 64 host objects. Each server Fibre Channel port must be in its own zone, which contains the Fibre Channel port and this system's ports. In configurations of fewer than 64 hosts, you can have up to 40 Fibre Channel ports in a host zone if the zone contains similar HBAs and operating systems.
 • Storage system zones
In a storage system zone, this system's nodes identify the storage systems. Generally, create one zone for each storage system. Host systems cannot operate on the storage systems directly. All data transfer occurs through this system's nodes.
 • Host zones
In the host zone, the host systems can identify and address this system's nodes. You can have more than one host zone and more than one storage system zone. Create one host zone for each host Fibre Channel port.
 – Because the SVC is now seen as a host cluster from the external system to be migrated, you must define the SVC as a host or host group by using its WWPNs or IQNs on the system to be migrated. Some legacy systems do not permit LUN-to-host mapping and instead present all the LUs to the SVC. In that case, all of the LUs should be migrated.
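As an illustration of the preceding zoning guidelines, the following is a hedged sketch for a Brocade Fibre Channel switch. All aliases, zone names, and WWPNs are hypothetical placeholders, and other switch vendors use different syntax:
    # Create aliases for an SVC node port and an external controller port
    alicreate "SVC_N1P1", "50:05:07:68:01:10:aa:bb"
    alicreate "CTRLA_P1", "20:14:00:a0:b8:cc:dd:ee"
    # Create one storage system zone containing the SVC and controller ports
    zonecreate "SVC_CTRLA_ZONE", "SVC_N1P1; CTRLA_P1"
    # Add the zone to the active configuration and enable it
    cfgadd "PROD_CFG", "SVC_CTRLA_ZONE"
    cfgenable "PROD_CFG"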
6. If the previous preparation steps were followed, the SVC is now seen as a host by the system to be migrated, and LUs can be mapped to the SVC. Map the external storage system by following the on-screen instructions that are shown in Figure 9-6.
Figure 9-6 Steps to map the LUs to be migrated to the SVC
Before you migrate storage, record the hosts and their WWPNs or IQNs for each volume that is being migrated, and the SCSI LUN when mapped to the SVC.
Table 9-1 shows an example of a table that is used to capture information that relates to the external storage system LUs.
Table 9-1 Example table for capturing external LU information
Volume Name or ID    Hosts accessing this LUN    Host WWPNs or IQNs    SCSI LUN when mapped
1 IBM DB2® logs      DB2server                   21000024FF2...        0
2 DB2 data           DB2Server                   21000024FF2...        1
3 file system        FileServer1                 21000024FF2...        2
 
Note: Make sure to record the SCSI ID of the LUs to which the host is originally mapped. Some operating systems do not support changing the SCSI ID during the migration.
Click Next and wait for the system to discover external devices.
7. The next window shows all of the MDisks that were found. If the MDisks to be migrated are not in the list, check your zoning or IP configuration, as applicable, and your LUN mappings. Repeat the previous step to trigger the discovery procedure again.
Select the MDisks that you want to migrate, as shown in Figure 9-7. In this example, only mdisk7 and mdisk1 are being migrated. To display detailed information about an MDisk, double-click it. To select multiple elements from the table, use the standard Shift+left-click or Ctrl+left-click actions. Optionally, you can export the discovered MDisks list to a CSV file for later use by clicking Export to CSV.
Figure 9-7 Discovering mapped LUs from external storage
 
Note: Select only the MDisks that are applicable to the current migration plan. After step 15 on page 405 of the current migration completes, another migration can be started to migrate any remaining MDisks.
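If you want to drive the discovery from the CLI instead, the following is a minimal sketch using the commands that correspond to this step:
    # Rescan for newly mapped LUs; discovered LUs appear as MDisks
    detectmdisk
    # List only the MDisks that are not yet in use by the system
    lsmdisk -filtervalue mode=unmanaged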
8. Click Next and wait for the MDisks to be imported. During this task, the system creates a new storage pool called MigrationPool_XXXX and adds the imported MDisks to the storage pool as image-mode volumes.
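The import that the wizard performs at this step corresponds to creating image-mode volumes manually. The following is a minimal sketch with placeholder pool and volume names (the wizard generates its own names):
    # Create a temporary migration pool (the 8192 MB extent size and the
    # pool name are placeholders chosen to mirror the wizard's convention)
    mkmdiskgrp -name MigrationPool_8192 -ext 8192
    # Import an unmanaged MDisk as an image-mode volume in that pool
    mkvdisk -mdiskgrp MigrationPool_8192 -iogrp 0 -vtype image -mdisk mdisk7 -name migrated_vol_01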
9. The next window lists all of the hosts that are configured on the system and enables you to configure new hosts. This step is optional and can be bypassed by clicking Next. In this example, the host ITSO_Host is already configured, as shown in Figure 9-8. If no host is selected, you will be able to create a host after the migration completes and map the imported volumes to it.
Figure 9-8 Listing of configured hosts to map the imported Volume to
10. If the host that needs access to the migrated data is not configured, select Add Host to begin the Add Host wizard. Enter the host connection type, name, and connection details. Optionally, click Advanced to modify the host type and I/O group assignment. Figure 9-9 shows the Add Host wizard with the details completed.
For more information about the Add Host wizard, see Chapter 8, “Hosts” on page 337.
Figure 9-9 If not already defined, you can create a host during the migration process
11. Click Add. The host is created and is now listed in the Configure Hosts window, as shown in Figure 9-8 on page 400. Click Next to proceed.
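On the CLI, the Add Host wizard corresponds to the mkhost command. A minimal sketch follows; the WWPN and IQN values are placeholders:
    # Define a Fibre Channel host by its WWPNs (placeholder value)
    mkhost -name ITSO_Host -fcwwpn 2100002094FF2000
    # For an iSCSI host, define it by its initiator IQN instead (placeholder)
    mkhost -name ITSO_Host2 -iscsiname iqn.1994-05.com.example:host1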
12. The next window lists the new volumes and enables you to map them to hosts. The volumes are listed with names that were automatically assigned by the system. The names can be changed to reflect something more meaningful to the user by selecting the volume and clicking Rename in the Actions menu.
 
13. Map the volumes to hosts by selecting the volumes and clicking Map to Host, as shown in Figure 9-10. This step is optional and can be bypassed by clicking Next.
Figure 9-10 Select the host to map the new Volume to
You can manually assign a SCSI ID to the LUNs you are mapping. This technique is particularly useful when the host needs to have the same LUN ID for a LUN before and after it is migrated. To assign the SCSI ID manually, select the Self Assign option and follow the instructions as shown in Figure 9-11.
Figure 9-11 Manually assign a LUN SCSI ID to mapped Volume
When your LUN mapping is ready, click Next. A new window is displayed with a summary of the new and existing mappings, as shown in Figure 9-12.
Figure 9-12 Volumes mapping summary before migration
Click Map Volumes and wait for the mappings to be created.
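On the CLI, this mapping, including the manually assigned SCSI ID, corresponds to the mkvdiskhostmap command. A minimal sketch with placeholder names:
    # Map the volume to the host, preserving the SCSI ID recorded earlier
    mkvdiskhostmap -host ITSO_Host -scsi 0 migrated_vol_01
    # Verify the host mappings and their SCSI IDs
    lshostvdiskmap ITSO_Host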
14. Select the storage pool that you want to migrate the imported volumes into. Ensure that the selected storage pool has enough space to accommodate the migrated volumes before you continue. This step is optional: you can decide not to migrate to a storage pool and to leave the imported MDisk as an image-mode volume. However, this technique is not recommended because no volume mirroring is created. Therefore, there is no protection for the imported MDisk, and no data is transferred between the system to be migrated and the SVC. Click Next, as shown in Figure 9-13.
Figure 9-13 Select the pool to migrate the MDisk to
The migration starts. This task continues running in the background and uses the volume mirroring function to place a generic copy of the image-mode volumes in the selected storage pool. For more information about Volume Mirroring, see Chapter 7, “Volumes” on page 251.
 
Note: With volume mirroring, the system creates two copies (Copy0 and Copy1) of a volume. Typically, Copy0 is located in the Migration Pool, and Copy1 is created in the target pool of the migration. When the host generates a write I/O on the volume, data is written at the same time on both copies. Read I/Os are performed on the preferred copy. In the background, a mirror synchronization of the two copies is performed and runs until the two copies are synchronized. The speed of this background synchronization can be changed in the volume properties.
See Chapter 7, “Volumes” on page 251 for more information about volume mirroring synchronization rate.
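On the CLI, this stage corresponds to adding a volume copy and monitoring its synchronization. A minimal sketch; the pool name, volume name, and sync rate value are placeholders:
    # Add a second copy of the volume in the target pool; background
    # synchronization of the two copies starts immediately
    addvdiskcopy -mdiskgrp Pool0 migrated_vol_01
    # Monitor the synchronization progress of all volume copies
    lsvdisksyncprogress
    # Optionally adjust the background synchronization rate (0-100)
    chvdisk -syncrate 80 migrated_vol_01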
15. Click Finish to end the storage migration wizard, as shown in Figure 9-14.
Figure 9-14 Migration is started
16. The end of the wizard is not the end of the migration task. You can find the progress of the migration in the Storage Migration window, as shown in Figure 9-15. The target storage pool and the progress of the volume copy synchronization are also displayed there.
Figure 9-15 Ongoing Migrations are listed in the Storage Migration window
17. When the migration completes, select all of the migrations that you want to finalize, right-click the selection, and click Finalize, as shown in Figure 9-16.
Figure 9-16 Finalizing a migration
You are asked to confirm the Finalize action because it removes the MDisk from the migration pool and deletes the primary copy of the mirrored volume. The secondary copy remains in the destination pool and becomes the primary. Figure 9-17 displays the confirmation message.
Figure 9-17 Migration finalization confirmation
18. When finalized, the image-mode copies of the volumes are deleted and the associated MDisks are removed from the migration pool. The status of those MDisks returns to unmanaged. You can verify the status of the MDisks by navigating to Pools → External Storage, as shown in Figure 9-18. In the example, mdisk7 has been migrated and finalized, so it appears as unmanaged in the External Storage window. Mdisk1 is still being migrated and has not been finalized; it appears as image and belongs to the migration pool.
Figure 9-18 External Storage MDisks window
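On the CLI, finalizing corresponds to removing the image-mode copy. A minimal sketch, assuming copy 0 is the image-mode copy in the migration pool; verify with lsvdiskcopy before removing anything:
    # Confirm which copy of the volume is the image-mode copy
    lsvdiskcopy migrated_vol_01
    # Remove the image-mode copy; the generic copy in the target pool
    # remains and becomes the only copy
    rmvdiskcopy -copy 0 migrated_vol_01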
All of the steps that are described in the storage migration wizard can be performed manually by using the CLI, but generally use the wizard as a guide.
 
Note: For a “real-life” demonstration of the storage migration capabilities offered with IBM Spectrum Virtualize, see the following page:
The demonstration includes three different step-by-step scenarios showing the integration of an SVC cluster into an existing environment with one Microsoft Windows Server (image mode), one IBM AIX server (LVM mirroring), and one VMware ESXi server (storage vMotion).