Typical use cases for IBM Spectrum Virtualize for Public Cloud
This chapter describes five use cases for IBM Spectrum Virtualize for Public Cloud and includes the following topics:
2.1, "Deploying whole IT services in the public cloud"
2.2, "Disaster Recovery"
2.3, "IBM FlashCopy in the public cloud"
2.4, "Safeguarded Copy"
2.5, "Workload relocation into the public cloud"
2.1 Deploying whole IT services in the public cloud
Companies are approaching and using public cloud services from multiple angles. Users that are rewriting and modernizing applications for cloud complement those users that are looking to move to cloud-only new services or to extend existing IT into a hybrid model to address quickly changing capacity and scalability requirements.
The delivery models for public cloud are available in the following general as-a-service categories:
Software as a Service: SaaS provides the greatest level of abstraction, in which the user interacts only with the software. IBM Storage Insights is one example: clients are not involved with any of the back-end components.
Infrastructure as a Service: In IaaS, server instances and even bare metal servers are provisioned on a subscription basis. IBM Cloud Classic Infrastructure is one example. Network components, such as VPN gateways, also can be subscribed to discretely.
Platform as a Service: PaaS is the intermediate model and typically the most common cloud environment. Microsoft Azure, Amazon Web Services, and OpenShift are examples. In PaaS, virtualization is managed by the provider and abstracted from the user.
The workload deployment is composed of two major use cases, as shown in Figure 2-1:
Hybrid cloud: The integration between the off-premises public cloud services with an on-premises IT environment.
Cloud-native: The full application’s stack is moved to cloud as SaaS, PaaS, IaaS, or as a combination of the three delivery models.
Figure 2-1 The two major deployment models for public cloud
Cloud-native implementations (that is, whole IT services that are deployed in the public cloud) are composed of several use cases, all with the lowest common denominator of having a full application deployment in the public cloud data centers. The technical details, final architecture, and roles and responsibilities depend on SaaS, PaaS, or IaaS usage.
Within the IaaS domain, the transparency of cloud services is the highest because the user’s visibility (and responsibility) into the application stack is much deeper compared to the other delivery models. Conversely, the burden for its deployment is higher because all the components must be designed from the server up.
At the time of this writing, IBM Spectrum Virtualize for Public Cloud is framed only within the IaaS cloud delivery model so that the user can interact with their storage environment as they did on-premises, which provides more granular control over performance.
2.1.1 Business justification
A stand-alone workload or an application with few on-premises dependencies, relatively low performance requirements, and no highly regulated data represents a good fit for a cloud-native deployment. The drivers that motivate businesses toward cloud-native deployment are generally financial: decreasing capital expenditure (CapEx) and operating expenditure (OpEx), optimizing resource management, and eliminating hidden or shadow IT resources. Other benefits are greater flexibility and scalability and a streamlined flow in delivering IT services because of the global footprint of cloud data centers.
At its core, the cloud environment is highly focused on standardization and automation. Therefore, the full spectrum of features and customization that are available in a typical on-premises or outsourcing deployment might not be natively available in the cloud catalog.
Nevertheless, the client does not lose performance and capabilities when deploying a cloud-native application. Storage virtualization with IBM Spectrum Virtualize for Public Cloud enables the IT staff to maintain the technical capabilities and skills to deploy, run, and manage highly available and highly reliable cloud-native applications in a public cloud. In this context, IBM Spectrum Virtualize for Public Cloud acts as a bridge between the standardized cloud delivery model and the enterprise assets that the client uses in their traditional IT environment.
In a hybrid multicloud environment, the orchestration of the infrastructure requires multiple entities that are tightly integrated with each other and smartly respond to administrator or user needs, and that is where a software-defined environment (SDE) has an important role in the overall orchestration.
Integration between service delivery, management, orchestration, automation, and hardware systems is becoming a requirement to support the emergence of SDEs. For SDEs to provide their benefits, they must understand and manage all the components of the infrastructure, including storage, and that makes software-defined storage (SDS) more relevant and important.
IBM Spectrum Connect collects information from storage systems and provides a simplified multicloud deployment across IBM Storage systems. IBM Spectrum Virtualize for Public Cloud on Microsoft Azure and IBM Spectrum Connect integrate vRealize Orchestrator with vRealize Automation, which takes the service around infrastructure beyond orchestration.
By integrating the Advanced Service Designer feature of vRealize Automation with vRealize Orchestrator, an organization can offer anything as a service (XaaS) to its users. By using the XaaS feature of vRealize Automation, an IBM Spectrum Virtualize storage system and IBM Spectrum Virtualize for Public Cloud on Azure can be delivered as SaaS in a multicloud environment, whether deployed in a private cloud or a public cloud.
2.1.2 Highly available deployment models
The architecture is directly responsible for an application’s reliability and availability if a component failure (hardware or software) occurs. When an application is fully hosted on cloud, the cloud data center becomes the primary site (production site). Cloud deployment does not automatically guarantee 100% uptime, backups by default, or replication of the application between different sites.
These security, availability, and recovery features are often incorporated into the SaaS model. They might be partially provided in the PaaS model. However, in the IaaS model, they are entirely the customer’s responsibility.
Having reliable cloud deployments means that the service provider must meet the required service level agreement (SLA), which guarantees service availability and uptime. Companies that use a public cloud IaaS can meet required SLAs by implementing highly available solutions and duplicating the infrastructure in the same data center or in two data centers to maintain business continuity in case of failures.
If business continuity is not enough to meet the requirements of the SLA, Disaster Recovery (DR) implementations split the application among multiple cloud data centers (usually at least 300 km [186.4 miles] apart) to protect against a major disaster affecting the organization’s main campus.
The highly available deployment models for an application that is fully deployed on public cloud are summarized as follows:
Highly available cloud deployment on a single primary site
All the solution’s components are duplicated (or more) within the same data center. This solution continues to function because there are no single points of failure (SPOFs), but it does not function if the data center is unavailable.
Highly available cloud deployment on multi-site
The architecture is split among multiple cloud data centers from multiple cloud providers to mitigate the failure of an entire data center or provider, or spread globally to recover the solution if a major disaster affects the campus.
Highly available cloud deployment on a single primary site
When fully moving an application to a cloud IaaS that is the primary site for service delivery, a reasonable approach is implementing at least a highly available architecture. Each component (servers, network components, and storage) is redundant to avoid SPOF.
Within the single primary site deployment, storage is deployed as native cloud storage. By using the public cloud catalog storage, users can take advantage of the intrinsic availability (and SLAs) of the storage service, which, in this case, is Microsoft Azure Managed Disks.
When IBM Spectrum Virtualize for Public Cloud is deployed as a clustered pair of Azure VM instances, it mediates between the Cloud Block Storage and the workload hosts. In the specific context of single-site deployment, IBM Spectrum Virtualize for Public Cloud supports extra features that enhance the public cloud block-storage offering.
At the storage level, IBM Spectrum Virtualize for Public Cloud resolves some limitations that stem from the standardized model of public cloud providers: a maximum number of LUNs per host, a maximum volume size, and poor granularity in the choice of tiers for storage snapshots.
IBM Spectrum Virtualize for Public Cloud also provides a view of storage management beyond the cloud portal. The cloud portal gives only a high-level view of the storage infrastructure and some limited operations at the volume level (such as volume resizing, IOPS tuning, and snapshot space increases); what it does not provide is a holistic view of the storage from the application perspective. Another advantage of IBM Spectrum Virtualize for Public Cloud is that it integrates with IBM Storage Insights to provide advanced monitoring, reporting, and alerting by using data that is gathered from the IBM Spectrum Virtualize instances.
The benefits of an IBM Spectrum Virtualize for Public Cloud single site deployment are listed in Table 2-1.
Table 2-1 Benefits of IBM Spectrum Virtualize for Public Cloud single site deployment
Single point of control for cloud storage resources:
 – Designed to increase management efficiency and to help support application availability.
Pools the capacity of multiple storage volumes:
 – Helps to overcome volume size limitations.
 – Helps to manage storage as a resource to meet business requirements, and not just as a set of independent volumes.
 – Helps administrators to better deploy storage as required beyond traditional “islands”.
 – Can help to increase the use of storage assets.
 – Insulates applications from maintenance or changes to a storage volume offering.
Manages tiered storage:
 – Helps to balance performance needs against infrastructure costs in a tiered storage environment.
 – Provides automated policy-driven control to put data in the right place at the right time automatically among different storage tiers and classes.
Easy-to-use IBM FlashSystem family management interface:
 – Provides a single interface for storage configuration, management, and service tasks, regardless of the configuration that is available from the public cloud portal.
 – Helps administrators use storage assets and volumes more efficiently.
 – Works with IBM Spectrum Control Insights and IBM Spectrum Protect for extra capabilities to manage capacity and performance.
Dynamic data migration:
 – Migrates data among volumes and LUNs without taking applications that use that data offline.
 – Manages and scales storage capacity without disrupting applications.
Advanced network-based copy services:
 – Copies data across multiple storage systems with IBM FlashCopy.
 – Copies data across metropolitan and global distances as needed to create high-availability storage solutions between multiple data centers.
Thin provisioning and snapshot replication:
 – Reduces volume requirements by using storage only when data changes.
 – Improves storage administrator productivity through automated on-demand storage provisioning.
 – Makes snapshots available on lower-tier storage volumes.
IBM Spectrum Protect Snapshot application-aware snapshots:
 – Performs near-instant application-aware snapshot backups, with minimal performance impact, for IBM Db2®, Oracle, SAP, VMware, Microsoft SQL Server, and Microsoft Exchange.
 – Provides advanced and granular restoration of Microsoft Exchange data.
Third-party native integration:
 – Provides integration with VMware vRealize.
Safeguarded Copy:
 – This Spectrum Virtualize function provides a valuable ransomware mitigation solution, especially when combined with an implementation of IBM Spectrum Virtualize for Public Cloud.
Highly available cloud deployment on multiple sites
When the application architecture spans over multiple data centers, it can tolerate the failure of the entire primary data center by switching to the secondary data center. The primary and secondary data centers can be deployed as:
Active-active: The secondary site is always running and synchronously aligned with the primary site.1
Active-passive: The secondary site is always running but asynchronously replicated (with a specific recovery point objective [RPO]) or running only for specific situations, such as acting as a recovery site or test environment. Storage is always active and available for data replication.
The active-passive configuration is usually the best fit for many cloud use cases, including DR, as described in 2.2, “Disaster Recovery” on page 15. The ability to provision compute resources on demand in a few minutes with only the storage that is provisioned and aligned with a specific RPO is a huge driver for a cost-effective DR infrastructure, and lowers the total cost of ownership (TCO).
The replication among multiple cloud data centers is no different from the traditional approach, except for the number of available tools in the cloud. Although solutions that are based on hypervisor or application-layer replication, such as VMware, Veeam, and Zerto, are available in the public cloud, storage-based replication is still the preferable approach if the environment is heterogeneous (virtual servers, bare metal servers, multiple hypervisors, and so on).
Active-passive asynchronous mirroring that uses Global Mirror with Change Volumes (GMCV) provides a minimum RPO of 2 minutes (the Change Volume [CV] cycle period ranges from 1 minute to 1 day, and a best practice is to set the cycle period to half of the RPO), and can replicate a heterogeneous environment.
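The following minimal CLI sketch shows how such a GMCV relationship might be configured; the volume, cluster, and relationship names and the 10-minute RPO target are hypothetical:

  # Create an asynchronous relationship in multi-cycling (GMCV) mode
  mkrcrelationship -master dbvol -aux dbvol_dr -cluster svpc_azure -global -cyclingmode multi -name dbvol_rel
  # Attach the change volumes that buffer each cycle (created beforehand)
  chrcrelationship -masterchange dbvol_cv dbvol_rel
  chrcrelationship -auxchange dbvol_dr_cv dbvol_rel
  # For a 10-minute RPO target, set the cycle period to half of the RPO (300 seconds)
  chrcrelationship -cycleperiodseconds 300 dbvol_rel
  startrcrelationship dbvol_rel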
2.2 Disaster Recovery
Customers have long adopted DR strategies to protect their proliferating data and infrastructure workloads in a cost-effective manner when a highly available (HA) level of recovery point objective (RPO) is not a business requirement.
Technology is only one crucial piece of a DR solution, and not the one that always dictates the overall approach.
This section describes the DR approach and the benefits of IBM Spectrum Virtualize for Public Cloud on Azure.
A DR strategy is the predominant aspect of an overall resiliency solution because it determines what classes of physical events the solution can address, sets the requirements in terms of distance, and sets constraints on technology.
2.2.1 Business justification
Table 2-2 lists the drivers and the challenges of having a DR solution on cloud and what capabilities IBM Spectrum Virtualize for Public Cloud provides in these areas.
Table 2-2 Drivers, challenges, and capabilities that are provided by IBM Spectrum Virtualize for Public Cloud
Adoption driver: The promise of reduced operational expenditures and capital expenditures.
Challenges: Hidden costs; availability of data when needed.
IBM Spectrum Virtualize for Public Cloud capabilities:
 – Optimized for Cloud Block Storage.
 – IBM Easy Tier to optimize the most valuable storage usage, which maximizes Cloud Block Storage performance.
 – Thin provisioning to control the storage provisioning.
 – Snapshots feature for backup and DR solutions.
 – HA cluster architecture.
Adoption driver: Bridging technologies from on-premises to cloud.
Challenge: Disparate infrastructure: How can my on-premises production data be readily available in the cloud in a disaster?
IBM Spectrum Virtualize for Public Cloud capabilities:
 – Any-to-any replication.
 – Support for over 400 different storage devices (on-premises), including iSCSI on-premises and when deployed in cloud.
Adoption driver: Using the cloud for backup and DR.
Challenges: Covering virtual and physical environments; solutions to meet a range of RPO/RTO needs.
IBM Spectrum Virtualize for Public Cloud capabilities: A storage-based, serverless replication with options for low RPO/RTO:
 – Global Mirror for asynchronous replication with an RPO close to “0” (not recommended for Public Cloud).
 – Metro Mirror for synchronous replication (not supported for Public Cloud).
 – GMCVs for asynchronous replication with a tunable RPO (recommended for Public Cloud deployments).
At the time of this writing, IBM Spectrum Virtualize for Public Cloud includes the following DR-related features:
Can be implemented at several locations in Microsoft Azure and installed by using Azure Marketplace.
Is deployed on an Azure VM instance.
Offers data replication between the FlashSystem family, V9000, IBM SAN Volume Controller, or VersaStack and the public cloud.
Supports two node clusters in Microsoft Azure.
Offers data services for Azure Managed Disks.
Offers common management with the IBM Spectrum Virtualize GUI with full admin access and a dedicated instance.
No incoming data transfer cost.
Replicates between two Azure locations.
Replicates between on-premises and Microsoft Azure running IBM Spectrum Virtualize on-premises and IBM Spectrum Virtualize for Public Cloud on Azure.
2.2.2 Two common DR scenarios with IBM Spectrum Virtualize for Public Cloud
The following most common scenarios can be implemented with IBM Spectrum Virtualize for Public Cloud:
IBM Spectrum Virtualize Hybrid Cloud DR for “Any to Any”.
IBM Spectrum Virtualize for Public Cloud solution on Azure Cloud DR, as shown in Figure 2-2.
Figure 2-2 IBM Spectrum Virtualize for Public Cloud on Azure Cloud DR solution
As shown in Figure 2-2 on page 16, a customer can deploy a storage replication infrastructure in a public cloud by using IBM Spectrum Virtualize for Public Cloud.
This scenario includes the following elements:
Primary storage is in the customer’s physical data center. The customer has an on-premises IBM Spectrum Virtualize solution that is installed.
Auxiliary storage sits on the DR site, which can be an IBM Spectrum Virtualize cluster running in the public cloud.
The virtual IBM Spectrum Virtualize cluster manages the storage that is provided by Azure Managed Disks.
A replication partnership that uses GMCVs is established between an on-premises IBM Spectrum Virtualize cluster or FlashSystem solution and the virtual IBM Spectrum Virtualize cluster to provide DR.
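A hedged sketch of the underlying IP partnership setup follows; the address and bandwidth values are examples only:

  # On the on-premises cluster: define a partnership to the cloud cluster
  mkippartnership -type ipv4 -clusterip 10.20.30.40 -linkbandwidthmbits 1000 -backgroundcopyrate 50
  # Run the matching command on the IBM Spectrum Virtualize for Public Cloud
  # cluster, pointing back at the on-premises cluster IP, so that the
  # partnership becomes fully configured on both sides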
When talking about DR, understand that IBM Spectrum Virtualize for Public Cloud is one important piece of a more complex solution that has prerequisites, considerations, and best practices that must be applied.
2.3 IBM FlashCopy in the public cloud
The IBM FlashCopy function in IBM Spectrum Virtualize can perform a point-in-time (PiT) copy of one or more volumes. You can use FlashCopy to help you solve critical and challenging business needs that require duplication of data of your source volume. Volumes can remain online and active while you create consistent copies of the data sets. Because the copy is performed at the block level, it operates below the host operating system and its cache. Therefore, the copy is not apparent to the host unless it is mapped.
2.3.1 Business justification
The business applications for FlashCopy are wide-ranging. Common use cases for FlashCopy include, but are not limited to, the following examples:
Rapidly creating consistent backups of dynamically changing data.
Rapidly creating consistent copies of production data to facilitate data movement or migration between hosts.
Rapidly creating copies of production data sets for:
 – Application development and testing
 – Auditing purposes and data mining
 – Quality assurance
Rapidly creating copies of replication targets for testing data integrity.
Regardless of your business needs, FlashCopy with IBM Spectrum Virtualize is flexible and offers a broad feature set, which makes it applicable to many scenarios.
2.3.2 FlashCopy mapping
The association between the source volume and the target volume is defined by a FlashCopy map. The FlashCopy map can have three different types (as defined in the GUI), four attributes, and seven different states.
FlashCopy in the GUI can be one of the following types:
Snapshot
Sometimes referred to as nocopy. A PiT copy of a volume without a background copy of the data from the source volume to the target. Only the changed blocks on the source volume are copied to preserve the point in time. The target copy cannot be used without an active link to the source, which is achieved by setting the copy and clean rate to zero.
Clone
Sometimes referred to as one time full copy. A PiT copy of a volume with a background copy of the data from the source volume to the target. All blocks from the source volume are copied to the target volume. The target copy becomes a usable independent volume, which is achieved with a copy and clean rate greater than zero and an autodelete flag; therefore, no cleanup of the map is necessary after the background copy is finished.
Backup
Sometimes referred to as an iterative incremental. A backup FlashCopy mapping consists of a PiT full copy of a source volume, plus periodic increments or “deltas” of data that changed between two points in time.
This mapping is where the copy and clean rates are greater than zero, no autodelete flag is set, and you use an incremental flag to preserve the bitmaps between activations so that only the deltas since the last “backup” must be copied.
It is named as such because the most typical use case is backup processing, which causes heavy reads, so a full copy is made to insulate the primary volume against those heavy reads. Also, because backups occur periodically (typically daily), the incremental flag allows only the deltas between refreshes to be copied.
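In CLI terms, the three types correspond to different mkfcmap attribute combinations. The following sketch uses hypothetical volume names:

  # Snapshot: no background copy; the target stays dependent on the source
  mkfcmap -source vol1 -target vol1_snap -copyrate 0
  # Clone: one-time full copy; the mapping deletes itself when the copy completes
  mkfcmap -source vol1 -target vol1_clone -copyrate 100 -autodelete
  # Backup: full copy plus deltas on each restart of the mapping
  mkfcmap -source vol1 -target vol1_bkp -copyrate 100 -incremental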
The FlashCopy mapping has four property attributes (clean rate, copy rate, autodelete, and incremental) and seven different states. Users can perform the following tasks on a FlashCopy mapping:
Create: Define a source and a target, and set the properties of the mapping.
Prepare: The system must be prepared before a FlashCopy copy starts. Preparation flushes the cache and makes it “transparent” for a short time so that no data is lost.
Start: The FlashCopy mapping is started and the copy begins immediately. The target volume is immediately accessible.
Stop: The FlashCopy mapping is stopped (by the system or user). Depending on the state of the mapping, the target volume is usable or not.
Modify: Some properties of the FlashCopy mapping can be modified after creation.
Delete: Delete the FlashCopy mapping, which does not delete any of the volumes (source or target) from the mapping.
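These tasks map to CLI commands, as in the following sketch against a hypothetical mapping named fcmap0:

  # Prepare: flush the cache so that the copy is consistent
  prestartfcmap fcmap0
  # Start: trigger the point-in-time copy; the target is immediately accessible
  startfcmap fcmap0
  # Modify: change a property, such as the background copy rate
  chfcmap -copyrate 80 fcmap0
  # Stop and delete: the source and target volumes themselves are retained
  stopfcmap fcmap0
  rmfcmap fcmap0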
The source and target volumes must be the same size. The minimum granularity that IBM Spectrum Virtualize supports for FlashCopy is an entire volume. It is not possible to use FlashCopy to copy only part of a volume.
 
Important: As with any PiT copy technology, you are bound by operating system and application requirements for interdependent data and the restriction to an entire volume.
The source and target volumes must belong to the same IBM Spectrum Virtualize system, but they do not have to be in the same I/O group or storage pool. For scalability and performance reasons, FlashCopy source and target volumes and maps might need to be aligned in the same I/O group and possibly the same preferred node.
For more information, see section 6.2.4 “FlashCopy planning considerations” of IBM FlashSystem Best Practices and Performance Guidelines for IBM Spectrum Virtualize Version 8.4.2, SG24-8508.
Volumes cannot have their sizes increased or decreased while they are members of a FlashCopy mapping.
All FlashCopy operations occur on FlashCopy mappings. FlashCopy does not alter source volumes. Multiple operations can occur at the same time on multiple FlashCopy mappings by using consistency groups.
2.3.3 Consistency groups
To overcome the issue of dependent writes across volumes and create a consistent image of the client data, perform a FlashCopy operation on multiple volumes as an atomic operation. To accomplish this task, IBM Spectrum Virtualize supports the concept of consistency groups.
Consistency groups preserve PiT data consistency across multiple volumes for applications that include related data that spans multiple volumes. For these volumes, consistency groups maintain the integrity of the FlashCopy by ensuring that dependent writes are run in the application’s intended sequence.
FlashCopy mappings can be part of a consistency group, even if only one mapping exists in the consistency group. If a FlashCopy mapping is not part of any consistency group, it is referred to as stand alone.
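A minimal sketch, assuming two hypothetical volumes (db_data and db_log) that must stay consistent with each other:

  # Create the consistency group and add one mapping per volume
  mkfcconsistgrp -name db_grp
  mkfcmap -source db_data -target db_data_snap -copyrate 0 -consistgrp db_grp
  mkfcmap -source db_log -target db_log_snap -copyrate 0 -consistgrp db_grp
  # Starting the group triggers all member mappings as one atomic point in time
  startfcconsistgrp -prep db_grp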
2.3.4 Crash-consistent copy and host considerations
FlashCopy consistency groups do not provide application consistency. They ensure only that volume points-in-time are consistent between volumes.
Because FlashCopy is at the block level, you must understand the interaction between your application and the host operating system. From a logical standpoint, it is easiest to think of these objects as “layers” that sit on top of one another. The application is the topmost layer, and beneath it is the operating system layer.
Both of these layers have various levels and methods of caching data to provide better speed. Because the IBM SAN Volume Controller and FlashCopy sit below these layers, they are unaware of the cache at the application or operating system layers.
To ensure the integrity of the copy that is made, it is necessary to flush the host operating system and application cache for any outstanding reads or writes before the FlashCopy operation is performed. Failing to flush the host operating system and application cache produces what is referred to as a crash-consistent copy.
The resulting copy requires the same type of recovery procedure, such as log replay and file system checks, that is required following a host crash. FlashCopy copies that are crash-consistent often can be used after the file system and application recovery procedures.
This concept is shown in Figure 2-3, where in-flight I/Os in cache buffers (if unflushed) are not in the volume; therefore, they are not captured in the FlashCopy.
Figure 2-3 Buffered I/Os are lost if unflushed
Various operating systems and applications provide facilities to stop I/O operations and ensure that all data is flushed from the host cache. If these facilities are available, they can be used to prepare a FlashCopy operation. When this type of facility is unavailable, the host cache must be flushed manually by quiescing the application and unmounting the file system or drives.
The target volumes are overwritten with a complete image of the source volumes. Before the FlashCopy mappings are started, it is important that any data that is held on the host operating system (or application) caches for the target volumes is discarded. The easiest way to ensure that no data is held in these caches is to unmount the target volumes before the FlashCopy operation starts.
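On a Linux host, one hedged way to sequence this quiesce is shown below; the mount points, cluster address, and mapping name are hypothetical:

  # Discard any cached data for the target volume before the copy starts
  umount /mnt/target
  # Flush dirty pages and block new writes on the source file system
  sync
  fsfreeze -f /mnt/appdata
  # Trigger the prepared point-in-time copy on the IBM Spectrum Virtualize CLI
  ssh superuser@svpc-cluster "startfcmap -prep fcmap0"
  # Resume I/O on the source
  fsfreeze -u /mnt/appdata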
 
Best practice: From a practical perspective, when you have an application that is backed by a database and you want to make a FlashCopy of that application’s data, it is sufficient in most cases to use the write-suspend method that is available in most modern databases because the database maintains strict control over I/O.
This method is in contrast to flushing data from the application and backing database, which is safer and always the suggested method. However, the write-suspend method can be used when such facilities do not exist or your environment is time sensitive.
2.4 Safeguarded Copy
Combining these use cases of remote replication and FlashCopy with the new safeguarded child pool function that was introduced in Spectrum Virtualize 8.4.2.0, we now have a powerful cyber-resilience use case for Spectrum Virtualize in Public Cloud on Azure. On-premises workloads can be protected from ransomware and other data corruption attacks with a truly air-gapped solution in the public cloud.
2.4.1 Business justification
The regulatory and business justifications for this use case are clear: high-profile ransomware attacks are widely reported in the news, crippling business processes and, in some cases, threatening lives as healthcare organizations were attacked in the midst of the COVID-19 global pandemic.
2.4.2 Solution design
As shown in Figure 2-4, on-premises primary volumes are replicated by way of IP to a Spectrum Virtualize in Public Cloud instance on Azure.
Figure 2-4 Safeguarded Copy
The pool from which those destination or auxiliary volumes are created was configured with a safeguarded child pool, a volume group was set up to contain those volumes, and a safeguarded policy was assigned to the volume group that governs the frequency and retention duration for the safeguarded copies.
 
Copy Services Manager is installed on-premises or, ideally, in Azure. It is configured to communicate with the IBM Spectrum Virtualize for Public Cloud instance to translate the policy into scheduled actions. It also provides a convenient orchestration portal for managing recovery and restoration from the safeguarded copies to a recovery volume or the original replication destination volume.
Because Copy Services Manager is primarily a replication orchestration tool, it is perfectly positioned to also manage the replication of the recovered or restored data back to the primary site. For more information, see IBM FlashSystem Safeguarded Copy Implementation Guide, REDP-5654.
2.4.3 Component summary
Spectrum Virtualize 8.4.2.0 features the following components:
Safeguarded Child Pool: A new feature that provides a region of a storage pool for making non-modifiable copies of volumes in that pool to guard against malicious or accidental data corruption.
Volume Group: A new container type that provides a way to group a set of volumes to which a safeguarded policy is applied and acted upon in a crash consistent manner. When safeguarded copies are taken for volumes in a volume group, a consistency group is automatically created to keep those volumes crash consistent with one another.
Safeguarded Policy: A new object type that governs the frequency and retention duration for safeguarded copies. Three default policies and other custom policies can be created by using the mksafeguardedpolicy CLI command. The three default policies are listed in Table 2-3.
Table 2-3 Default policies
Policy: predefinedsgpolicy0; frequency: every 6 hours; retention: 7 days.
Policy: predefinedsgpolicy1; frequency: every 1 week; retention: 30 days.
Policy: predefinedsgpolicy2; frequency: every 1 month; retention: 365 days.
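A hedged sketch of creating a custom policy and assigning it to a volume group follows; the names are hypothetical, and the exact parameter set should be verified against the 8.4.2.0 CLI reference:

  # Create a policy: one safeguarded copy every 4 hours, retained for 14 days
  mksafeguardedpolicy -name sgpol_4h14d -backupunit hour -backupinterval 4 -retentiondays 14
  # Assign the policy to an existing volume group
  chvolumegroup -safeguardedpolicy sgpol_4h14d app_volume_group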
Copy Services Manager (CSM): Application that has long existed as a replication and point-in-time copy orchestration tool for IBM storage (Spectrum Virtualize, DS8K, XIV). With version 6.2, CSM integrates with Spectrum Virtualize 8.4.2.0 to periodically scan for volume groups with volumes and a safeguarded policy that is associated. Upon detection of such, CSM creates objects within its own framework (sessions, copy sets, and scheduled tasks) to run on the policy and create safeguarded backups with the frequency that is stipulated in the policy.
Moreover, it allows for the orchestration of recovery (create a copy of a safeguarded copy onto a new volume) and restoration (copy data back to the source volume from a safeguarded copy).
2.5 Workload relocation into the public cloud
In this section, a use case for IBM Spectrum Virtualize for Public Cloud is described in which an entire workload segment is migrated from a customer’s enterprise into the cloud. Although the process described here for relocating a workload into the cloud with IBM Spectrum Virtualize uses only Remote Copy, other mechanisms are available that can accomplish this task.
2.5.1 Business justification
All the drivers that motivate businesses to use virtualization technologies make deploying services into the cloud even more compelling because the cost of idle resources is further absorbed by the cloud provider. However, specific limitations in regulatory or process controls can prevent a business from moving all workloads and application services into the cloud.
An ideal case for a hybrid cloud solution is the relocation of a segment of the environment that is well suited to the cloud, such as development. Another might be a specific application group that does not require regulatory isolation or low-response-time integration with on-premises applications.
Although performance might be a factor, do not assume that cloud deployments automatically diminish performance. Depending on the location of the cloud service data center and the intended audience for the migrated service, the performance can conceivably be superior to the on-premises performance before migration.
In summary, moving a workload into the cloud might provide similar functions with better economies because of scaling physical resources in the cloud provider. Moreover, the cost of services in the cloud is structured, measurable, and predictable.
2.5.2 Data migration
Several methods are available for performing data migrations to the cloud, including the following general approaches:
IBM Spectrum Virtualize Remote Copy
Host-side mirroring (Storage vMotion or IBM AIX® Logical Volume Manager mirroring)
Appliance-based data transfer, such as IBM Aspera® or IBM Transparent Data Migration Facility
The first method was described in 2.2, “Disaster Recovery” on page 15, and is essentially the same process as DR. The only difference is that instead of a persistent replication, after the initial synchronization is complete, the goal is to schedule the cutover of the application onto the compute nodes in the cloud environment that is attached to the IBM Spectrum Virtualize storage.
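A hedged outline of that cutover step, assuming a hypothetical relationship named mig_rel that reached the consistent synchronized state:

  # Quiesce the application on-premises, then stop replication and enable
  # read/write access to the auxiliary (cloud) copy of the data
  stoprcrelationship -access mig_rel
  # Map the auxiliary volumes to the cloud compute hosts and start the
  # application in the cloud environment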
The second method, host-side mirroring, is largely impractical because it requires the server to have concurrent access to both the local (source) and remote (target) storage. Because the object is to relocate the workload (compute and storage) into the cloud environment, that task is more easily accomplished by replicating the storage and, after it is synchronized, bringing up the server in the cloud environment and making the suitable adjustments to the server for use in the cloud.
Also, the practical impediments to creating an iSCSI connection (the only connection method that is currently available for IBM Spectrum Virtualize for Public Cloud) from on-premises host systems into the cloud are beyond the scope of this use case. Traditional VMware Storage vMotion is similar but, again, requires the target storage to be visible through iSCSI to the host.
The third method entails the use of third-party software, hardware, or both to move the data from one environment to another. The general idea is that the target system includes an operating system and some empty storage that is provisioned to it, which acts as a landing pad for data that is on the source system. Going into detail about these methods is outside the scope of this document; however, the process is no different for an on-premises to cloud migration than for an on-premises to on-premises migration.
Table 2-4 lists the migration methods.
Table 2-4 Migration methods
Migration method: Remote Copy; best suited operating system: stand-alone Windows, Linux, or VMware (any version); pros versus cons: simple versus limited scope.
Migration method: Host Mirror; best suited operating system: VMware vSphere 5.1 or higher; pros versus cons: simple versus limited scope.
Migration method: Appliance; best suited operating system: N/A; pros versus cons: flexible versus cost and complexity.
2.5.3 Host provisioning
In addition to the replication of data, it is necessary for compute nodes and networking to be provisioned within the cloud provider upon which to run the relocated workload. Currently, in Azure the VM compute nodes are available with storage that is provisioned to the VM compute instance by using an iSCSI connection.
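On a Linux VM instance, that iSCSI connection typically follows the standard open-iscsi flow, sketched here with a hypothetical IBM Spectrum Virtualize node address:

  # Discover the iSCSI targets that the IBM Spectrum Virtualize node presents
  iscsiadm -m discovery -t sendtargets -p 10.1.2.10
  # Log in to the discovered target to surface the mapped volumes to the VM
  iscsiadm -m node --login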
2.5.4 Implementation considerations
The workload relocation into the public cloud use case includes the following implementation considerations:
Naming conventions: Naming is an important consideration for the manageability of a standard on-premises IBM Spectrum Virtualize environment. Because of the many layers of virtualization in a cloud implementation, maintaining a consistent and meaningful naming convention for all objects, such as managed disks (MDisks), volumes, FlashCopy mappings, Remote Copy relationships, hosts, and host clusters, is even more necessary.
Monitoring integration: Integration into IBM Spectrum Control or some other performance monitoring framework is useful for maintaining metrics for reporting or troubleshooting. IBM Spectrum Control is well suited for managing IBM Spectrum Virtualize environments.
Planning and scheduling: Regardless of the method that is chosen, gather as much information ahead of time as possible (file system information, application custodians, full impact analysis of related systems, and so on).
Ensure a solid backout plan: If interrelated systems or other circumstances require rolling the application servers back to on-premises, plan the migration to ensure as little difficulty as possible in the rollback, which might mean keeping zoning in the library (even if it is not in the active configuration) and not destroying source volumes for a specific period.

1 Spectrum Virtualize Highly Available multi-site topologies, such as HyperSwap and Enhanced Stretch Cluster, are not supported by Spectrum Virtualize Public Cloud as of this writing.