Introduction
IBM Storwize HyperSwap is a response to the increasing demand for continuous application availability, minimal downtime in the event of an outage, and non-disruptive migrations.
IT centers with IBM i can take full advantage of the HyperSwap solution.
In this IBM Redpaper™ publication, we provide instructions to implement Storwize HyperSwap with IBM i. We also describe some business continuity scenarios in this area, including solutions with HyperSwap and IBM i Live Partition Mobility, and a solution with HyperSwap and IBM PowerHA® for i.
1.1 IBM Storwize HyperSwap overview
The IBM HyperSwap function is a high availability feature that provides dual-site, active-active access to a volume. It provides continuous data availability in case of hardware failure, power failure, connectivity failure, or disasters. HyperSwap capabilities are also available on other IBM storage systems that support more than one I/O group (for example, Storwize V5030 systems), and also on IBM FlashSystem V9000 and A9000.
This feature was introduced with IBM Spectrum™ Virtualize V7.5 on Storwize and SVC devices.
The HyperSwap function provides a new level of resilience because it can also handle an outage of a Storwize V7000 control enclosure or of a cluster in a single site. Before HyperSwap was introduced, the Storwize V7000 could handle an outage of external (virtualized) or internal storage through the volume mirroring feature. However, volume mirroring did not cover an outage of an entire site or controller; that could be achieved only at the host level in combination with a second storage system.
Starting with V7.5, a technology is available that provides a high availability (HA) solution that is transparent to the host across two locations up to 300 km apart. This is the same distance limitation as for Metro Mirror.
The HA solution is based on HyperSwap volumes, which have a copy at two different and independent sites. Data that is written to the volume is automatically sent to both sites; if one site is no longer available, the remaining site still allows access to the volume.
A new Metro Mirror capability, active-active Metro Mirror, is used to maintain a fully independent copy of the data at each site. When data is written by hosts at either site, both copies are synchronously updated before the write operation is completed. The HyperSwap function automatically optimizes itself to minimize the data that is transmitted between sites and to minimize host read and write latency.
To define HyperSwap volumes, active-active relationships are made between the copies at each site. This is normally done by using the GUI, but the relationships can also be created by using the CLI. The relationships provide access to whichever copy is up to date through a single volume, which has a unique ID. As with normal remote mirror relationships, the HyperSwap relationships can be grouped into consistency groups. The consistency groups fail over consistently as a group, based on the state of all of the copies in the group. An image that can be used for disaster recovery is maintained at each site.
Because redundancy across the locations is needed, a Storwize V7000 HyperSwap configuration requires one system (one control enclosure with two nodes) in each location. This results in a minimum configuration of two control enclosures, one in each site. Depending on the hardware that is used, additional control enclosures can be added, as long as they are supported by the system (the Storwize V7000 allows more control enclosures than the Storwize V5000).
In addition to the two sites that hold the data, a third, independent site is required for any HyperSwap solution. This site is mandatory to avoid so-called “split-brain” situations, which can occur if the two data sites can no longer communicate with each other but both are still up and running. The third site is the location of the quorum disk, which acts as a tiebreaker. For this purpose, the third site needs an independent connection to both data sites.
Previously, this had to be a Fibre Channel-attached storage device. However, since V7.6 there is also the option to use an IP quorum, which is a piece of software that is installed on a supported server (for example, Linux). Check the product documentation for all prerequisites of this setup. When an IP quorum is used, it is automatically selected as the active quorum.
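The tiebreaking role of the third site can be illustrated with a short sketch. This is a simplified model of the decision logic, not the actual IBM Spectrum Virtualize implementation; the function name and states are hypothetical:

```python
# Simplified model of quorum tiebreaking (illustrative only, not the
# actual Spectrum Virtualize logic): when the two data sites lose
# contact with each other, only the site that wins the race to the
# quorum keeps serving I/O; the other stops, avoiding a split brain.

def surviving_site(sites_see_each_other: bool,
                   site1_reaches_quorum: bool,
                   site2_reaches_quorum: bool) -> str:
    """Decide which site continues to serve I/O after a failure."""
    if sites_see_each_other:
        return "both"                  # no split: normal operation
    if site1_reaches_quorum and not site2_reaches_quorum:
        return "site1"
    if site2_reaches_quorum and not site1_reaches_quorum:
        return "site2"
    if site1_reaches_quorum and site2_reaches_quorum:
        return "site1"                 # the quorum grants the win to one racer
    return "none"                      # no quorum reachable: stop I/O

# Inter-site link down, only site 2 still reaches the third site:
print(surviving_site(False, False, True))   # site2 keeps serving I/O
```

The key point the sketch captures is the last case: if neither data site can reach the quorum, both must stop serving I/O, because neither can prove it is the surviving half.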
Several requirements must be validated for the Storwize V7000 HyperSwap implementations, specifically for the SAN extension. For more information about HyperSwap prerequisites, see the IBM Storwize V7000 Knowledge Center.
How a HyperSwap setup works is shown in Figure 1-1.
Figure 1-1 How HyperSwap works
Every volume has a copy that is mirrored through a special Metro Mirror (MM) relationship. With this mirror, the target volume gets the same ID as the source and is seen as the same volume. Because every volume is accessed at the same time through two I/O groups, the maximum number of paths is doubled: the server sees four nodes for every volume (two from I/O group 0 and two from I/O group 1). The multipath driver selects the preferred path (via ALUA) and uses it as long as it is online and reachable. If traffic is directed to the access (non-owning) I/O group, the data is forwarded to the owning I/O group.
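The path-selection behavior described above can be sketched as follows. This is a hedged illustration of how an ALUA-aware multipath driver might choose among the four paths; the class, states, and function names are illustrative, not a specific driver's API:

```python
# Illustrative sketch of ALUA-based path selection for a HyperSwap
# volume (names and states are hypothetical, not a real driver's API).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Path:
    node: str
    state: str      # "active-optimized" (preferred) or "active-non-optimized"
    online: bool

def select_path(paths: list) -> Optional[Path]:
    """Prefer an online active-optimized (preferred) path; otherwise
    fail over to any online non-optimized path."""
    for p in paths:
        if p.online and p.state == "active-optimized":
            return p
    for p in paths:
        if p.online:
            return p
    return None     # no path available

# A HyperSwap volume is presented by four nodes: two per I/O group.
paths = [
    Path("iogrp0-node1", "active-optimized", True),
    Path("iogrp0-node2", "active-optimized", True),
    Path("iogrp1-node1", "active-non-optimized", True),
    Path("iogrp1-node2", "active-non-optimized", True),
]
print(select_path(paths).node)   # a preferred path is used while one is online
```

If both nodes of the preferred I/O group go offline, the same function falls back to the non-optimized paths, which corresponds to the forwarding of I/O to the owning I/O group described above.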
1.2 IBM i overview
IBM i servers are the first choice for companies that want the benefits of business solutions without the complexity. The IBM i product line offers the most integrated and flexible set of servers in the industry, designed for small and medium businesses, with scalability for large business solutions. IBM i servers run in partitions of IBM POWER® systems.
Next we present some features of IBM i servers that are important for working with external storage systems.
1.2.1 Single-level storage and object-oriented architecture
When you create a new file in a UNIX system, you must tell the system where to put the file and how big to make it. You must balance files across different disk units to provide good system performance. If you discover later that a file needs to be larger, you need to copy it to a location on disk that has enough space for the new, larger file. You may need to move files between disk units to maintain system performance.
IBM i server is different in that it takes responsibility for managing the information in auxiliary storage pools (also called disk pools or ASPs).
When you create a file, you estimate how many records it should have. You do not assign it to a storage location; instead, the system places the file in the best location that ensures the best performance. In fact, it normally spreads the data in the file across multiple disk units. When you add more records to the file, the system automatically assigns additional space on one or more disk units.
Therefore, it makes sense to use disk copy functions that operate on either the entire disk space or an IASP. IBM PowerHA supports only IASP-based copies.
IBM i uses single-level storage and an object-oriented architecture. It sees all disk space and main memory as one storage area and uses the same set of virtual addresses to cover both. Paging of the objects in this virtual address space is performed in 4 KB pages. However, data is usually blocked and transferred to storage devices in blocks larger than 4 KB. Blocking of transferred data is based on many factors, for example, Expert Cache usage.
1.2.2 Translation from 520 byte blocks to 512 byte blocks
IBM i disks have a block size of 520 bytes. Most fixed block (FB) storage devices are formatted with a block size of 512 bytes, so a translation or mapping is required to attach them to IBM i (IBM DS8000® supports IBM i with a native disk format of 520 bytes).
IBM i performs the following change to the data layout to support 512-byte blocks in external storage: for every page (8 × 520-byte sectors), it uses an additional ninth sector. It stores the 8-byte headers of the 520-byte sectors in the ninth sector, thereby changing the previous 8 × 520-byte blocks to 9 × 512-byte blocks. The data that was previously stored in 8 sectors is now spread across 9 sectors, so the required disk capacity on the Storwize V7000 is 9/8 of the IBM i usable capacity; conversely, the usable capacity in IBM i is 8/9 of the allocated capacity in Storwize.
Therefore, when attaching a Storwize system to IBM i, whether through vSCSI, NPIV, or native attachment, this 520:512-byte block mapping imposes a capacity constraint: only 8/9 of the effective capacity can be used.
The impact of this translation to IBM i disk performance is negligible.
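The 8/9 relationship above can be expressed as a small sizing calculation. The function names are illustrative helpers, not part of any IBM tool:

```python
# Sizing helpers for the 520-to-512-byte translation (function names are
# illustrative): every 8 x 520-byte IBM i sectors are stored in
# 9 x 512-byte Storwize sectors, so usable IBM i capacity is 8/9 of the
# allocated Storwize capacity, and the allocation must be 9/8 of the
# required usable capacity.

def storwize_capacity_needed(ibmi_usable_gb: float) -> float:
    """Capacity to allocate on Storwize for a given IBM i usable capacity."""
    return ibmi_usable_gb * 9 / 8

def ibmi_usable_capacity(storwize_gb: float) -> float:
    """IBM i usable capacity for a given allocated Storwize capacity."""
    return storwize_gb * 8 / 9

print(storwize_capacity_needed(100))  # 112.5 GB must be allocated
print(ibmi_usable_capacity(90))       # 80.0 GB usable in IBM i
```

For example, providing 100 GB of usable IBM i capacity requires allocating 112.5 GB on the Storwize system.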