File system conversion and migration
This chapter provides information about file system conversion and migration from Volume Manager (VxVM) in Symantec Storage Foundation powered by Veritas to the AIX Logical Volume Manager (LVM).
This chapter contains the following topics:
2.1 Conversion versus migration
First, it is important to introduce the difference between conversion and migration.
For the purposes of this book, the term conversion applies when data is transformed so that it is presented in a different form. Migration, in contrast, applies when data is copied, moved, or restored between different media.
Although utilities exist to perform conversion, migration is generally the most common and reliable way to make this kind of change.
2.2 Volume and file system migration
During the planning phase of a cluster migration, one of the most important steps is to plan for the data migration, because it may include backup and restore activities that can span several hours or even days.
In the following sections, we present information about volume group and file system data migration. We briefly discuss how Veritas Volume Manager performs LVM migration and demonstrate how to achieve the reverse.
2.2.1 Volume group conversion and migration
In this section, we provide a short overview of the LVM to VxVM migration in order for you to understand the overall process.
LVM to VxVM migration
Symantec Storage Foundation supports both the online migration of LVM logical volumes to VxVM volumes and the offline conversion of LVM and JFS/JFS2 data.
Both operations are subject to some limitations.
Offline conversion
The offline method requires that all JFS/JFS2 file systems and LVM logical volumes be unmounted during the process. The conversion of the data is performed by the vxconvert tool.
The conversion is accomplished by overwriting the LVM and JFS/JFS2 metadata with VxVM and VxFS metadata. During the file system conversion, vxconvert saves a copy of the original JFS/JFS2 metadata, scans the inode tree, and builds new VxFS metadata, which is finally written over the old JFS/JFS2 metadata within the logical volume. The data blocks are not touched during the operation and are therefore fully preserved.
This method of conversion is quick, mature, and has proven reliable over many years. The time that the conversion takes is directly affected by the number of file systems and the number of entries in their structures.
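As an illustration only (this offline flow was not reproduced in our test scenario), the procedure is conceptually as simple as unmounting the file systems that belong to the volume group and then invoking the conversion utility. The vxconvert utility is menu driven in some releases, so confirm the exact invocation against the vxconvert documentation for your Storage Foundation level:
# umount <file-system>
# vxconvert <volume-group>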
 
Note: Before attempting to convert or migrate data, always ensure that you have current backups.
Online migration
The online migration process requires that a diskgroup and volume structure be created in Veritas Volume Manager that reflects the existing LVM structure.
 
Note: In our scenario, it was required that at least two disks were assigned to the VxVM diskgroup to perform the migration. This is because VxVM creates mirrored volumes to store data.
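As an illustration of what such a matching structure might look like (the names below are placeholders, not commands captured from our scenario), a non-CDS diskgroup containing a mirrored volume with the same name and size as the LVM logical volume could be created along the following lines; confirm the exact attributes against the vxdg and vxassist documentation for your release:
# vxdg init <volume-group> <disk1> <disk2> cds=off
# vxassist -g <volume-group> make <logical-volume> <size> layout=mirror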
After creating a structure on VxVM matching the current LVM layout, the vxmigadm command can be used to analyze, start, abort, and commit the migration.
During the analyze process, Veritas verifies whether all requirements for the migration are met; if they are not, error messages are displayed. The following list shows some of the requirements that triggered errors during our trial:
The VxVM diskgroup name must match the LVM volume group name.
The VxVM diskgroup must not be in CDS format.
The VxVM volume names must match the names and sizes of the LVM logical volumes.
The permissions of volume devices and raw devices must match between VxVM and LVM devices.
 
Note: We recommend performing the analyze operation before attempting the migration to avoid undesired results. The analyze operation can be started with the following command:
# vxmigadm analyze -g <volume-group>
Upon a successful analysis of the environment, the start operation can be performed. The vxmigadm utility creates an entirely new structure and starts the process. This is a very interesting part of the migration; we reproduced the steps manually, and they are shown in the next examples with an explanation for each step.
In the following examples, we migrate the LVM volume group bogusvg containing a single logical volume boguslv03 to the VxVM structure.
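Assuming that the start operation follows the same command syntax as the analyze operation shown previously (confirm this against the vxmigadm documentation), the migration is started with a command similar to the following:
# vxmigadm start -g bogusvg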
When the migration is started, vxmigadm makes changes to the VxVM diskgroups. Initially, the volumes are stopped, putting them in the DISABLED state. The former LVM logical volumes and their respective entries under the /dev directory are renamed through a call to chlv, as shown in Example 2-1.
Example 2-1 chlv is called to rename the LVM logical volume
[peter101:root] / # chlv -n boguslv03_vxlv boguslv03
[peter101:root] / # lsvg -l bogusvg
bogusvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
boguslv03_vxlv jfs2 8 8 1 closed/syncd /will/bogusfs03
Next, a call to vxddladm is performed and a foreign disk device is added to VxVM, with the LVM logical volume as its backend device, as shown in Example 2-2.
Example 2-2 vxddladm creates a foreign disk device with the LVM logical volume as the backend
[peter101:root] / # /usr/sbin/vxddladm addforeign blockpath=/dev/boguslv03_vxlv charpath=/dev/rboguslv03_vxlv
 
[peter101:root] / # /usr/sbin/vxddladm listforeign
 
The Paths included are
-----------------------
 
Based on Directory names:
-----------------------
 
Based on Full Path:
--------------------
/dev/boguslv03_vxlv block /dev/rboguslv03_vxlv char Suppress auto
The LVM logical volume now becomes a disk device in the VxVM database. New entries with the original names are then created with new major and minor numbers, associating the devices with the VxVM diskgroup. The disk initially appears with its type set to simple (second column), but additional commands re-add it as nopriv, as shown in Example 2-3.
Example 2-3 vxdisk list command
[peter101:root] / # vxdisk list
DEVICE TYPE DISK GROUP STATUS
boguslv03_vxlv nopriv - -           online
disk_0 auto:LVM - -             LVM
disk_1 auto:cdsdisk disk bogusvg       online
disk_2 auto:LVM - -             LVM
disk_3 auto:cdsdisk disk_3 bogusvg       online
ds3400-0_0 auto:cdsdisk - -             online
ds3400-0_1 auto:cdsdisk - -             online
ds3400-0_2 auto:cdsdisk corben_dg101 corben_dg1   online
ds3400-0_3 auto:cdsdisk - -             online
ds3400-0_4 auto:cdsdisk corben_dg102 corben_dg1   online
New calls to vxmake create a new subdisk and plex pair, which are then associated by vxsd. At this time, vxmigadm creates new entries in /dev using the original logical volume name, but binds their major and minor numbers to the VxVM diskgroup.
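Reconstructed from the object names and sizes visible in the vxprint output in Example 2-5, the vxmake and vxsd calls are roughly equivalent to the following sketch; the exact attributes that vxmigadm uses may differ:
# vxmake -g bogusvg sd boguslv03_vxlv-01 disk=boguslv03_vxlv offset=0 len=2097152
# vxmake -g bogusvg plex boguslv03-lvm_plex
# vxsd -g bogusvg assoc boguslv03-lvm_plex boguslv03_vxlv-01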
With the plex ready, vxmigadm replaces the original plex with the one associated with the LVM logical volume. Example 2-4 illustrates the removal of the original plex. Notice that afterward there are no plexes associated with the volume boguslv03.
Example 2-4 Removal of the plex
[peter101:root] / # /usr/sbin/vxplex -g bogusvg -f dis boguslv03-01
[peter101:root] / # vxprint -htv -g bogusvg
V NAME RVG/VSET/CO KSTATE STATE LENGTH READPOL PREFPLEX UTYPE
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
SC NAME PLEX CACHE DISKOFFS LENGTH [COL/]OFF DEVICE MODE
DC NAME PARENTVOL LOGVOL
SP NAME SNAPVOL DCO
EX NAME ASSOC VC PERMS MODE STATE
 
v boguslv03 - DISABLED CLEAN 2097152 SELECT - fsgen
Example 2-5 shows the step in which the new plex boguslv03-lvm_plex is attached to the diskgroup.
Example 2-5 The plex with the LVM volume as backend is attached to the diskgroup
[peter101:root] / # /usr/sbin/vxplex -g bogusvg att boguslv03 boguslv03-lvm_plex
[peter101:root] / # vxprint -htv -g bogusvg
V NAME RVG/VSET/CO KSTATE STATE LENGTH READPOL PREFPLEX UTYPE
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
SC NAME PLEX CACHE DISKOFFS LENGTH [COL/]OFF DEVICE MODE
DC NAME PARENTVOL LOGVOL
SP NAME SNAPVOL DCO
EX NAME ASSOC VC PERMS MODE STATE
 
v boguslv03 - DISABLED EMPTY 2097152 SELECT - fsgen
pl boguslv03-lvm_plex boguslv03 DISABLED EMPTY 2097152 CONCAT - RW
sd boguslv03_vxlv-01 boguslv03-lvm_plex boguslv03_vxlv 0 2097152 0 boguslv03_vxlv ENA
At this point, the initial migration setup is ready and the volume can be enabled. Notice that during the entire setup, the VxVM volume boguslv03 was disabled. Right after enabling the volume, vxmigadm calls vxsnap to create a snapshot of the volumes.
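vxmigadm performs this re-enablement itself. Done by hand, it would amount to a vxvol call similar to the one below; because the volume is in the EMPTY state after the plex attach, an initialization form such as vxvol init active may be needed rather than a plain vxvol start, and the vxsnap step performed by vxmigadm is not reproduced here:
# vxvol -g bogusvg init active boguslv03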
Now the file system can be made available to the application using the VxVM devices from the /dev/vx/dsk directory. The /etc/filesystems entries are then changed as shown in Example 2-6.
Example 2-6 /etc/filesystems changed device
/will/bogusfs03:
        dev             = /dev/vx/dsk/boguslv03
        vfs             = jfs2
        log             = INLINE
        mount           = false
        options         = rw
        account         = false
After the VxVM mirror copies reach a consistent, synchronized state with the LVM copies, you can commit the migration. Upon a commit, the old LVM volumes are dissociated from the VxVM configuration.
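Assuming that the commit operation follows the same command syntax as the analyze operation (check the vxmigadm documentation for your release), the commit looks similar to the following:
# vxmigadm commit -g <volume-group>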
As you may have noticed, during the entire volume group migration process the file system itself was never touched and remains a JFS2 file system. The actual migration from JFS2 to VxFS cannot be performed online and requires an extended outage.
This process provides some flexibility to decide the best moment to commit the migration, or even to attempt an uninstallation instead.
While a utility is provided to automate all the steps of the LVM to VxVM conversion, the opposite migration can be performed by following the uninstallation process manually for each volume group.
 
Note: Detailed information about LVM to VxVM conversion is available in the Storage Foundation documentation at the Symantec website:
https://sort.symantec.com/documents/doc_details/sfha/6.0/Linux/ProductGuides
2.2.2 Limitations of migration
Although the vxmigadm command is convenient, it does have specific limitations that must be considered when planning for the migration.
In environments with a high number of disks, it is often necessary to use the Big or Scalable LVM volume group types. At the time this documentation was written, the available version of Storage Foundation did not support the migration of Big or Scalable volume groups. In addition, if advanced LVM functions such as snapshots are used, the volume group cannot be migrated.
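One quick way to check the type of an existing volume group before planning a migration is to look at the MAX PVs value reported by lsvg: 32 indicates an original volume group, 128 a Big volume group, and 1024 a Scalable volume group.
# lsvg <volume-group> | grep "MAX PVs"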
 
Note: For detailed information about migration limitations, refer to the Storage Foundation and high availability documentation at the Symantec website:
https://sort.symantec.com/documents/doc_details/sfha/6.0/Linux/ProductGuides
2.2.3 Migrating from VxVM to LVM
The management of VxVM volumes is not supported through the PowerHA management interface, so if you are planning to migrate to PowerHA, it is a good idea to consider moving data from VxVM to LVM.
Conversion from VxVM to LVM is not supported in the same way as the inverse direction. However, Symantec offers an uninstallation method that allows administrators to migrate logical volumes from VxVM back to LVM.
The procedure is quite simple and is well explained in the Storage Foundation Installation Guide, available at the Symantec website referenced earlier.
 
Important: Be aware that while JFS/JFS2 file systems can be converted to VxFS, the opposite direction is not supported by either IBM or Symantec. Also, always have valid backups before ever performing a migration.
Although the lack of support mentioned above may sway your decision about performing a storage migration, it does not affect a migration of the clustering software, because PowerHA does support the VxFS file system.
Migrating a logical volume from VxVM to LVM
The migration process requires available disks to create a new LVM structure matching the VxVM structure to be migrated.
The following examples demonstrate how the migration of a volume from VxVM to LVM was accomplished by using the procedures described in the Symantec uninstallation documentation. Example 2-7 shows the list of available volumes in the disk group.
Example 2-7 List of available volumes in diskgroup
[peter101:root] / # vxprint -htv -g will_dg01
V NAME RVG/VSET/CO KSTATE STATE LENGTH READPOL PREFPLEX UTYPE
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
SC NAME PLEX CACHE DISKOFFS LENGTH [COL/]OFF DEVICE MODE
DC NAME PARENTVOL LOGVOL
SP NAME SNAPVOL DCO
EX NAME ASSOC VC PERMS MODE STATE
 
v will_lv01 - ENABLED ACTIVE 2097152 SELECT - fsgen
pl will_lv01-01 will_lv01 ENABLED ACTIVE 2097152 CONCAT - RW
sd vscsi-0_0-01 will_lv01-01 vscsi-0_0 0 2097152 0 disk_1 ENA
The next examples illustrate the steps taken to create the LVM volume group and the logical volume, followed by the actual migration step, which is a simple dd from the old volume into the new volume.
First, in Example 2-8, a new volume group is created using hdisk7 and hdisk8 with a physical partition (PP) size of 128 MB. Next, a new 2 GB logical volume is created by using 16 PPs.
Example 2-8 Migrating a logical volume from VxVM to LVM
[peter101:root] / # mkvg -y bogusvg -s 128 hdisk7 hdisk8
bogusvg
 
[peter101:root] / # mklv -t vxfs -y boguslv02 bogusvg 16
boguslv02
The actual migration, shown in Example 2-9, is performed by executing a dd from the old VxVM volume to the new LVM logical volume. Next, fsck is run against the new logical volume, and the file system is then mounted.
Example 2-9 Issuing the dd command
[peter101:root] / # dd if=/dev/vx/dsk/will_dg01/will_lv01 of=/dev/boguslv02 bs=4024k
260+1 records in.
260+1 records out.
[peter101:root] / # fsck -V vxfs /will/bogusfs
 
file system is clean - log replay is not required
[peter101:root] / # mount /will/bogusfs
[peter101:root] / # df -m /will/bogusfs
Filesystem MB blocks Free %Used Iused %Iused Mounted on
/dev/boguslv02 1024.00 941.85 9% 6 1% /will/bogusfs
Notice that the mounted file system is 1 GB in size, although the logical volume created before the migration is 2 GB. This is because the migrated file system was 1 GB in size before the migration. The difference was intentional, to illustrate that the migration can be performed even if the sizes of the old and new logical volumes do not match.
To adjust the sizes, the file system can be grown to match the size of the logical volume by using the VxFS fsadm utility, as shown in Example 2-10.
Example 2-10 Adjusting the file system size
[peter101:root] / # fsadm -b 2G /will/bogusfs
UX:vxfs fsadm: INFO: V-3-25942: /dev/rboguslv02 size increased from 2097152 sectors to 4194304 sectors
 
[peter101:root] / # df -m /will/bogusfs
Filesystem MB blocks Free %Used Iused %Iused Mounted on
/dev/boguslv02 2048.00 1901.62 8% 7 1% /will/bogusfs
Notice that after running fsadm, the file system was increased to 2 GB. Although this is a simple migration example, the same approach can be applied to more complex scenarios.
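After the data has been verified on the LVM copy, the now unused VxVM objects can be removed. The following is a minimal sketch of that cleanup, assuming that no other volumes remain in the diskgroup:
# vxvol -g will_dg01 stop will_lv01
# vxedit -g will_dg01 -rf rm will_lv01
# vxdg destroy will_dg01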
 
Note: The uninstallation itself is a procedure supported by Symantec; however, we suggest checking with Symantec whether it is supported in a migration scenario.