Symantec Cluster Server powered by Veritas
This chapter introduces Symantec Cluster Server, previously known as Veritas Cluster Server (VCS) for AIX on IBM Power. It is a high availability software package that is designed to reduce both planned and unplanned downtime in a business critical environment.
 
Note: While previous versions of VCS work with Linux on IBM Power, at the time of writing, Symantec does not offer a version of this product for Linux on Power. The last version to support Linux on Power was version 5, which is no longer supported by Symantec.
The following topics are discussed: an executive overview, the components of a Symantec cluster, cluster resources, cluster configurations, cluster communication, cluster installation and setup, cluster administration facilities, and a comparison of PowerHA and Symantec Cluster Server.
3.1 Executive overview
Symantec Cluster Server is a clustering solution that is available on Oracle Solaris, HP-UX, AIX, Linux, VMware, and Windows. It scales up to 64 nodes in an AIX cluster, and supports the management of multiple VCS clusters (Windows or UNIX) from a single web-based or Java-based graphical user interface (GUI). However, each individual cluster must be composed of systems running the same operating system.
Symantec Cluster Server provides base functionality similar to that of the IBM PowerHA SystemMirror for AIX (PowerHA) product: it eliminates single points of failure through redundant components, automatically detects application, adapter, network, and node failures, and manages failover to a remote server with limited outage to the end user.
The VCS GUI-based cluster management console provides a common administrative interface in a cross platform environment. There is also integration with other Symantec products, such as the Symantec Replicator Option and Symantec Cluster Server’s Global Cluster Option.
3.2 Components of a Symantec cluster
A Symantec cluster is composed of nodes, external shared disks, networks, applications, and clients. Specifically, a cluster is defined as all servers with the same cluster ID connected via a set of redundant heartbeat paths:
Nodes: Nodes in a Symantec cluster are called cluster servers. There can be up to 64 cluster servers in an AIX Symantec cluster. A node runs an application or multiple applications, and can be added to or removed from a cluster dynamically.
Shared external disk devices: Symantec Cluster Server supports a number of third-party storage vendors, and works in small computer system interface (SCSI), network-attached storage (NAS), and storage area network (SAN) environments. In addition, Symantec offers a Cluster Server Storage Certification Suite (SCS) for OEM disk vendors to certify their disks for use with VCS. Contact Symantec directly for more information about SCS.
Networks and disk channels: In a VCS cluster, these channels are required both for heartbeat communication, which determines the status of resources in the cluster, and for client traffic. VCS uses its own protocol, Low Latency Transport (LLT), for cluster heartbeat communication. A second protocol, Group Membership Services/Atomic Broadcast (GAB), is used for communicating cluster configuration and state information between servers in the cluster. The LLT and GAB protocols are used instead of a TCP/IP-based communication mechanism. VCS requires a minimum of two dedicated private heartbeat connections, or high-priority network links, for cluster communication. To enable active takeover of resources should one of these heartbeat paths fail, a third dedicated heartbeat connection is required. (A sketch of a minimal LLT and GAB configuration follows this list.)
Client traffic is sent and received over public networks. A public network can also be defined as a low-priority network, so that if the dedicated high-priority networks fail, heartbeats can be sent at a slower rate over this secondary network. A further means of protecting the cluster is disk-based I/O fencing via the vxfen driver. The disks that act as coordination points are called coordinator disks: three standard disks or LUNs set aside for I/O fencing during cluster reconfiguration. Coordinator disks serve no other storage purpose in the VCS configuration. You can configure coordinator disks to use Symantec Dynamic Multi-Pathing (DMP), which allows them to take advantage of the path failover and the dynamic add and remove capabilities of DMP. Therefore, you can configure I/O fencing to use either DMP devices or the underlying raw character devices; the SCSI-3 disk policy is either raw or dmp, depending on the disk device that you use, and is dmp by default. Previous versions of VCS used GABdisk, which is no longer supported as of version 5.1.
Ethernet is the only supported IP network type for VCS.
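To illustrate how these heartbeat links are defined, the following is a minimal sketch of the LLT and GAB configuration files on one AIX cluster server, assuming a two-node cluster with two private high-priority links (en1, en2) and the public network (en0) as a low-priority link. The node names, cluster ID, and device paths are placeholders, and the exact link syntax should be verified against the Symantec installation documentation for your release; these files are normally generated by the installer script.

    # /etc/llttab (illustrative)
    set-node node1                               # this server's LLT node name
    set-cluster 100                              # cluster ID, unique per cluster
    link en1 /dev/dlpi/en:1 - ether - -          # first private high-priority link
    link en2 /dev/dlpi/en:2 - ether - -          # second private high-priority link
    link-lowpri en0 /dev/dlpi/en:0 - ether - -   # public network as low-priority heartbeat

    # /etc/llthosts (maps LLT node IDs to node names)
    0 node1
    1 node2

    # /etc/gabtab (starts GAB; -n2 seeds the cluster when two nodes have joined)
    /sbin/gabconfig -c -n2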
3.3 Cluster resources
Resources to be made highly available include network adapters, shared storage, IP addresses, applications, and processes. Resources have a type associated with them and you can have multiple instances of a resource type. Control of each resource type involves bringing the resource online, taking it offline, and monitoring its health.
Agents: For each resource type, VCS has a cluster agent that controls the resource. Types of VCS agents include:
 – Bundled agents are standard agents that come bundled with the VCS software for basic resource types, such as disk, IP, and mount. Examples of actual agents are Application, IP, DiskGroup, and Mount. For more information, see the Symantec Bundled Agents Reference Guide.
 – Enterprise agents are for applications, and are purchased separately from VCS. Enterprise agents exist for products such as DB2, Oracle, and Symantec Netbackup.
 – Storage agents also exist to provide access and control over storage components, such as the Symantec ServPoint (NAS) appliance.
 – Custom agents can be created using the Symantec developer agent for additional resource types, including applications for which there is no enterprise agent. See the Symantec Cluster Server Agents Developers Guide for information about creating new cluster agents.
Symantec cluster agents are multithreaded, so they support the monitoring of multiple instances of a resource type.
Resource categories: A resource also has a category associated with it that determines how VCS handles the resource. Resource categories include:
On-Off VCS starts and stops the resource as required (most resources are On-Off).
On-Only Brought online by VCS, but is not stopped when the related service group is taken offline. An example of this kind of resource would be starting a daemon.
Persistent VCS cannot take the resource online or offline, but needs to use it, so it monitors its availability. An example would be the network card that an IP address is configured upon.
Service group: A set of resources that are logically grouped to provide a service. Individual resource dependencies must be explicitly defined when the service group is created to determine the order in which resources are brought online and taken offline. When Symantec Cluster Server is started, the cluster server engine examines resource dependencies and starts all the required agents. A cluster server can support multiple service groups.
Operations are performed on resources and also on service groups. All resources that comprise a service group will move if any resource in the service group needs to move in response to a failure. However, where there are multiple service groups running on a cluster server, only the affected service group is moved.
The service group type defines takeover relationships, which are either:
 – Failover: The service group runs on only one cluster server at a time and supports failover of resources between cluster server nodes. Failover can be both unplanned (an unexpected resource outage) and planned, for example for maintenance purposes; a command sketch follows this list. Although the nodes that can take over a service group are defined, there are three methods by which the destination failover node is decided:
 • Priority: The SystemList attribute is used to set the priority for each cluster server. The server with the lowest priority value that is in the running state becomes the target system. Priority is determined by the order in which the servers are defined in the SystemList, with the first server in the list having the lowest priority value. This is the default method of determining the target node at failover, although priority can also be set explicitly.
 • RoundRobin: The system running the smallest number of service groups becomes the target.
 • Load: The cluster server with the most available capacity becomes the target node. To determine available capacity, each cluster server is assigned a capacity and each service group a load; the available capacity is calculated from the capacity of the node minus the load of the service groups active on it.
 – Parallel: The service group is active on all cluster nodes simultaneously, with resources running on each node. Applications must be able to run on multiple servers simultaneously with no data corruption. This type of service group is sometimes also described as concurrent. A parallel service group is used for workloads such as web hosting.
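The following is a brief command sketch of how a failover service group is typically brought online, switched for planned maintenance, and taken offline; the service group name appsg and the node names node1 and node2 are placeholders.

    # Display a summary of service group and system states
    hastatus -sum

    # Bring the service group online on a specific cluster server
    hagrp -online appsg -sys node1

    # Planned failover: switch the service group to another cluster server
    hagrp -switch appsg -to node2

    # Take the service group offline on the server where it is running
    hagrp -offline appsg -sys node2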
The VCS web interface is typically defined as a service group and kept highly available. Note, however, that although actions can be initiated from the browser, it is not possible to add or remove elements from the configuration via the browser. The Java VCS console should be used for making configuration changes.
In addition, service group dependencies can be defined. Service group dependencies apply when a resource is brought online, when a resource faults, and when the service group is taken offline. Service group dependencies are defined in terms of a parent and child, and a service group can be both a child and parent. Service group dependencies are defined by three parameters:
 – Category
 – Location
 – Type
Values for these parameters are:
 – Category: online or offline
 – Location: local, global, or remote
 – Type: soft or hard
As an example, take two service groups with a dependency of online, remote, and soft. The category online means that the parent service group must wait for the child service group to be brought online before it is started. The remote location parameter requires that the parent and child run on different servers. Finally, the type soft has implications for service group behavior should a resource fault. See the Symantec Cluster Server User Guide for detailed descriptions of each option. Configuring service group dependencies adds complexity, so it must be carefully planned.
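In the main.cf file, such a dependency is expressed with a single requires group statement inside the parent service group definition. The sketch below assumes a parent service group named appsg and a child service group named dbsg; the statement form should be confirmed against the Symantec Cluster Server Administrator's Guide for your release.

    group appsg (
        SystemList = { node1 = 0, node2 = 1 }
        )

        // online remote soft: appsg waits for dbsg to come online
        // on a different cluster server before appsg is started
        requires group dbsg online remote soft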
Attributes: All VCS components have attributes associated with them that are used to define their configuration. Each attribute has a data type and dimension. Definitions for data types and dimensions are detailed in the Symantec Cluster Server User Guide. An example of a resource attribute is the IP address associated with a network interface card.
System zones: VCS supports system zones, which are a subset of systems for a service group to use at initial failover. The service group chooses a host within its system zone before choosing any other host.
3.4 Cluster configurations
The Symantec terms used to describe supported cluster configurations are:
Asymmetric There is a defined primary and a dedicated backup server. Only the primary server is running a production workload.
Symmetric There is a two node cluster where each cluster server is configured to provide a highly available service and acts as a backup to the other.
N-to-1 There are N production cluster servers and a single backup server. This setup relies on the concept that failure of multiple servers at any one time is relatively unlikely. In addition, the number of slots in a server limits the total number of nodes capable of being connected in this cluster configuration.
N+1 An extra cluster server is included as a spare. Should any of the N production servers fail, its service groups move to the spare cluster server. When the failed server is recovered, it simply joins as the new spare, so there is no further service interruption to fail back the service group.
N-to-N There are multiple service groups running on multiple servers, and each service group can fail over to a potentially different server.
3.5 Cluster communication
Cross-cluster communication is required to achieve automated failure detection and recovery in a high availability environment. Essentially, all cluster servers in a Symantec cluster must run the following components:
High availability daemon (HAD)
This is the primary process and is sometimes referred to as the cluster server engine. A further process, hashadow, monitors HAD and can restart it if required. VCS agents monitor the state of resources and pass information to their local HAD. The HAD then communicates information about cluster status to the other HAD processes using the GAB and LLT protocols.
Group membership services/atomic broadcast (GAB)
GAB operates in the kernel space, monitors cluster membership, tracks cluster status (resources and service groups), and distributes this information among cluster nodes using the low latency transport layer.
Low latency transport (LLT)
LLT operates in kernel space, supporting communication between servers in a cluster, and handles heartbeat communication. LLT runs directly on top of the DLPI layer in UNIX. LLT load balances cluster communication over the private network links.
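The health of these communication layers can be checked from the command line on any cluster server, for example with the following commands.

    # Show verbose LLT link status for all nodes
    lltstat -nvv

    # Show GAB port membership (port a is GAB itself, port h is the HAD engine)
    gabconfig -a

    # Confirm that HAD is running and summarize cluster state
    hastatus -sum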
A critical question related to cluster communication is, “What happens when communication is lost between cluster servers?” VCS uses heartbeats to determine the health of its peers and requires a minimum of two heartbeat paths, which can be private, public, or disk based. With only a single heartbeat path, VCS is unable to distinguish between a network failure and a system failure. The handling of communication loss on a single network, as opposed to all networks, is called jeopardy. If there is a failure on all communication channels, the action taken depends on which channels have been lost and the state of the channels before the failure. Essentially, VCS takes action such that only one node has a service group at any one time, in some instances disabling failover to avoid possible corruption of data. A full discussion is included in “Network partitions and split-brain” in Chapter 22, “Troubleshooting and Recovery”, in the Symantec Cluster Server 6.1 Administrator’s Guide - Linux (http://tinyurl.com/kb7pxrw).
3.6 Cluster installation and setup
Installation of VCS on AIX can be done via installp or SMIT. Note, however, that if installp is used, LLT, GAB, and the main.cf file must be configured manually. Alternatively, and as we recommend, the installer script bundled with the Symantec packages can be used to handle the installation of the required software and the initial cluster configuration.
 
Note: All installations of the cluster during the creation of this IBM Redbooks publication were done via the installer script bundled with the required Symantec package.
After the VCS software has been installed, configuration is typically done via the VCS Java GUI interface. The first step is to carry out careful planning of the wanted high availability environment. There are no specific tools in VCS to help with this process. When this has been done, service groups are created and resources are added to them, including resource dependencies. Resources are chosen from the bundled agents and enterprise agents, or if there are no existing agents for a particular resource, a custom agent can be built. After the service groups have been defined, the cluster definition is automatically synchronized to all cluster servers.
Under VCS, the cluster configuration is stored in ASCII files. The two main files are the main.cf and types.cf:
main.cf: Defines the entire cluster
types.cf: Defines the resource types
These files are user readable and can be edited in a text editor. A new cluster can be created based on these files as templates.
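As a simple illustration, a minimal main.cf for a two-node cluster with a single failover service group might look like the following sketch. The cluster name, node names, interface, and IP address are placeholders, and a real configuration is normally generated by the installer and the GUI rather than written by hand.

    include "types.cf"

    cluster demo_clus (
        )

    system node1 (
        )

    system node2 (
        )

    group appsg (
        SystemList = { node1 = 0, node2 = 1 }   // lower value = higher priority
        AutoStartList = { node1 }
        )

        NIC app_nic (
            Device = en0
            )

        IP app_ip (
            Device = en0
            Address = "10.1.1.50"
            NetMask = "255.255.255.0"
            )

        // resource dependency: bring the NIC online before the IP address
        app_ip requires app_nic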
3.7 Cluster administration facilities
Administration in a Symantec cluster is generally carried out via the Cluster Manager Java GUI. The Cluster Manager provides a graphical view of cluster status for resources, service groups, and heartbeat communication, among other elements:
Security: A VCS administrator can have one of five user categories: Cluster Administrator, Cluster Operator, Group Administrator, Group Operator, and Cluster Guest. Functions within these categories overlap. The Cluster Administrator has full privileges, and the Cluster Guest has read-only access. User categories are set implicitly for the cluster by default, but can also be set explicitly for individual service groups.
Logging: VCS generates both error messages and log entries for activity in the cluster from both the cluster engine and each of the agents. Log files related to the cluster engine can be found in the /var/VRTSvcs/log directory, and agent log files in the $VCS_HOME/log directory. Each VCS message has a tag that indicates the type of the message. Tags range from TAG_A to TAG_E, where TAG_A indicates an error message and TAG_D indicates that an action has occurred in the VCS cluster. Log files are ASCII text and user readable. However, the cluster management interface is typically used to view logs.
Monitoring and diagnostic tools: VCS can monitor both system events and applications. Event triggers allow the system administrator to define actions to be performed when a service group or resource hits a particular trigger. Triggers can also be used to carry out an action before a service group comes online or goes offline. The action is typically a script, which can be edited by the user. The event triggers themselves are predefined; some can be enabled by administrators, whereas others are enabled by default. In addition, VCS provides simple network management protocol (SNMP), management information base (MIB), and simple mail transfer protocol (SMTP) notification. The severity level of a notification is configurable. Event notification is implemented in VCS using triggers; a trigger sketch follows this list.
Emulation tools: The VCS Java Cluster Manager GUI offers a feature called the HA Fire Drill. This feature runs checks against resources, checking for, and in some cases fixing, specific errors. These checks verify that the resources defined in the VCS configuration file (main.cf) have the required infrastructure to fail over to another node, for example by checking for the existence of mount directories. The checks can only be run while the service group is online, and they verify that the specified node is a viable failover target capable of hosting that service group. More information about the HA Fire Drill and how to run it can be found in the Symantec Cluster Server Administrator’s Guide.
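Event triggers are implemented as scripts placed in the VCS triggers directory, and sample triggers are shipped with the product (typically in /opt/VRTSvcs/bin/sample_triggers). The following is an illustrative sketch of a resfault trigger that simply logs a resource fault; the argument order shown is an assumption and should be checked against the triggers chapter of the Administrator's Guide before use.

    #!/bin/sh
    # /opt/VRTSvcs/bin/triggers/resfault (illustrative sketch)
    # Assumed arguments: $1 = system name, $2 = resource name, $3 = previous state
    SYSTEM=$1
    RESOURCE=$2
    OLDSTATE=$3

    # Record the fault; a real trigger might notify an administrator or
    # collect application diagnostics before failover completes.
    echo "`date`: resource $RESOURCE faulted on $SYSTEM (was $OLDSTATE)" >> /var/tmp/resfault.log

    exit 0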
3.8 PowerHA and Symantec Cluster Server compared
The following section describes PowerHA and highlights where terminology and operation differ between PowerHA and Symantec Cluster Server (VCS). PowerHA and VCS have fairly comparable function, but differ in some areas. VCS supports cross-platform management, is integrated with other Symantec products, and uses a GUI as its primary management interface. PowerHA is optimized for AIX and IBM POWER servers, and is tightly integrated with the AIX operating system. PowerHA can readily use availability functions in the operating system to extend its capabilities to the monitoring and management of non-cluster events.
3.8.1 Components of a PowerHA cluster
A PowerHA cluster is similarly composed of nodes, external shared disks, networks, applications, and clients:
Nodes Nodes in a PowerHA cluster are called cluster nodes, compared with the VCS term cluster server. There can be up to 16 nodes in a PowerHA/ES cluster, including in a concurrent access configuration. A node runs one or more applications, and can be added to or removed from a cluster dynamically.
Shared disks PowerHA has built-in support for a wide variety of disk attachments, including Fibre Channel and several varieties of SCSI. PowerHA provides an interface for OEM disk vendors to provide additional attachments for NAS, SAN, and other disks.
Networks IP networks in a PowerHA cluster are used for both heartbeat/message communication to determine the status of the resources in the cluster, and also for client traffic. PowerHA uses an optimized heartbeat protocol over IP. Supported IP networks, up to PowerHA 6.1, include Ethernet, FDDI, token-ring, SP-Switch, and ATM. PowerHA 7.1 and above only supports Ethernet IP networks. Non-IP networks are also supported to prevent the Internet Protocol network from becoming a single point of failure in a cluster.
Networks based on SNA are also supported as cluster resources. Cluster configuration information is propagated over the public Internet Protocol networks in a PowerHA cluster. However, heartbeats and messages, including cluster status information, are communicated over all PowerHA networks.
3.8.2 Cluster resources
Resources to be made highly available include network adapters, shared storage, IP addresses, applications, and processes. Resources have a type, and you can have multiple instances of a resource type.
PowerHA event scripts: Both PowerHA and VCS support built-in processing of common cluster events. PowerHA provides a set of predefined event scripts that handle bringing resources online, taking them offline, and moving them if required. VCS uses bundled agents. PowerHA provides an event customization process and VCS provides a means to develop agents.
Application server This is the PowerHA term used to describe how applications are controlled in a PowerHA environment. Each application server is composed of a start and a stop script, which can be customized on a per node basis; a sketch of such scripts follows below. Sample start and stop scripts for common applications are available for download at no cost.
Application monitor Both PowerHA and VCS have support for application monitoring, providing for retry/restart recovery, relocation of the application, and for different processing requirements, based on the node where the application is being run.
The function of an application server coupled with an application monitor is similar to a VCS enterprise agent.
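As a sketch, the start and stop scripts that define an application server might look like the following; the script paths and the appctl command are placeholders for whatever the application actually provides.

    #!/bin/ksh
    # /usr/local/cluster/app_start.sh - called when the resource group comes online
    /opt/myapp/bin/appctl start          # placeholder application start command
    exit $?                              # a non-zero exit is treated as a start failure

    #!/bin/ksh
    # /usr/local/cluster/app_stop.sh - called when the resource group is taken offline or moved
    /opt/myapp/bin/appctl stop
    exit $?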
Resource group: This is equivalent to a VCS service group, and is the term used to define a set of resources that comprise a service. The resource group defines startup, fallover, and fallback behavior; a clmgr sketch follows the policy descriptions below. The startup options include:
Online On Home Node
A list of participating nodes is defined for a resource group, with the order of nodes indicating the node priority for the resource group. Resources are owned by the highest priority node available. If there is a failure, the next active node with the highest priority will take over.
Online On First Available Node
A list of participating nodes is defined for a resource group. However, the resource group will come online to the node that activates first in the cluster. This could result in multiple resource groups starting up on only one node.
Online Using Node Distribution Policy
Similar to the previous option but it will activate only one resource group at a time during node startup. If there are more resource groups than nodes, it will spread the resource groups across all the nodes. This prevents all resource groups from starting on just one node.
Online On All Available Nodes
Resource group activates on all nodes as each node joins the cluster. Typically, this also implies concurrent shared data access across the nodes. An example of an application that uses this option is Oracle Real Application Clusters. This option only supports the use of raw logical volumes or GPFS. Other applications that do not usually require shared data, such as some application servers, can also use this option.
The fallover options include:
Fallover To Next Priority Node In The List
The resource group, which is online on only one node at a time, follows the default node priority order specified in the resource group’s node list. It moves to the highest-priority node available.
Fallover Using Dynamic Node Priority (DNP)
It is also possible to set a dynamic node priority (DNP) policy, which can be used at failover time to determine the best takeover node. Each potential takeover node is queried regarding the DNP policy, which might be something like most free CPU. This option is only utilized if there are more than two nodes in a cluster. There are both predefined and custom user-defined options available in PowerHA.
The fallback options include:
Never fallback
A resource group will not automatically fall back to the highest priority node when that node rejoins the cluster.
Fallback To Higher Priority Node In The List
A resource group will automatically fall back to the highest priority node when that node rejoins the cluster. This incurs a small outage. This option can also be combined with a feature called a fallback timer, which allows you to specify a date and time for the resource group to move back.
By default, resource groups are brought online in parallel to minimize the total time required to bring resources online. It is possible, however, to define a temporal order if resource groups need to be brought online sequentially. Also, it is possible to define resource group dependencies to achieve the wanted results.
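These policies map onto the attributes supplied when a resource group is created, for example with the clmgr command line. The following sketch is illustrative: the resource group, node, service label, and volume group names are placeholders, and the abbreviated policy values (OHN, FNPN, NFB) should be confirmed against the clmgr documentation for the PowerHA release in use.

    # Create a resource group owned primarily by node1, falling over to node2:
    #   STARTUP=OHN   - Online On Home Node
    #   FALLOVER=FNPN - Fallover To Next Priority Node In The List
    #   FALLBACK=NFB  - Never Fallback
    clmgr add resource_group app_rg \
        NODES="node1 node2" \
        STARTUP=OHN FALLOVER=FNPN FALLBACK=NFB

    # Add a service IP label and a shared volume group to the resource group
    clmgr modify resource_group app_rg \
        SERVICE_LABEL=app_svc VOLUME_GROUP=appvg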
3.8.3 Cluster configurations
PowerHA and VCS are reasonably comparable in terms of supported cluster configurations, although the terminology differs. PowerHA cluster configurations include:
Standby configurations
Support a traditional hardware configuration where there is redundant equipment available as a hot standby. Though historically this implies a one-to-one configuration, it could also be many-to-one.
Mutual Takeover configurations
All cluster nodes do useful work and act as a backup to each other.
Concurrent All cluster nodes are active and have simultaneous access to the same shared resources.
3.8.4 Cluster communications
Cross cluster communication is a part of all high availability software, and in PowerHA this task is carried out by the following components:
Cluster manager daemon (clstrmgrES): This can be considered similar to the VCS cluster engine and must be running on all active nodes in a PowerHA cluster. In classic (pre-version 7) PowerHA, the clstrmgrES is responsible for monitoring nodes and networks for possible failure, and for keeping track of the cluster peers. Beginning with PowerHA v7, some of the functions carried out by RSCT, specifically topology services (topsvcs), moved to Cluster Aware AIX (CAA). The cluster manager executes scripts in response to changes in the cluster (events) to maintain availability in the clustered environment.
Cluster communications daemon (clcomd): This provides cluster-based communications.
Reliable Scalable Cluster Technology (RSCT): This is used extensively in PowerHA for messaging, monitoring cluster status, and event monitoring. RSCT is part of the AIX base operating system and is composed of:
 – Group services: Coordinates distributed messaging and synchronization tasks.
 – Event management: Monitors system resources and generates events when resource status changes.
PowerHA and VCS both have a defined method to determine whether a remote system is alive, and a defined response to the situation where communication has been lost between all cluster nodes. These methods essentially achieve the same result, which is to avoid multiple nodes trying to grab the same resources.
3.8.5 Cluster installation and setup
Installation of PowerHA for AIX software is via the standard AIX installation process using installp, from the command line or via SMIT. Installation of PowerHA automatically updates a number of AIX files, such as /etc/services and /etc/inittab. No further system-related configuration is required following the installation of the PowerHA software.
The main SMIT PowerHA configuration menu (fast path smitty sysmirror) outlines the steps that are required to configure a cluster. The cluster topology is defined first and synchronized via the network to all nodes in the cluster and then the resource groups are set up. Resource groups can be created on a single PowerHA node and the definitions propagated to all other nodes in the cluster. The resources, which comprise the resource group, have implicit dependencies that are captured in the PowerHA software logic.
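Verification and synchronization can be driven either from the SMIT menus or from the clmgr command line; the following is a brief sketch (menu labels vary slightly by release).

    # Verify the cluster definition, then propagate it to all nodes
    clmgr verify cluster
    clmgr sync cluster

    # Equivalent SMIT path: smitty sysmirror
    #   -> Cluster Applications and Resources
    #   -> Verify and Synchronize Cluster Configuration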
PowerHA configuration information is held in the object data manager (ODM) database, providing a secure but easily shareable means of managing the configuration. A cluster snapshot function is also available, which captures the current cluster configuration in two ASCII user readable files. The output from the snapshot can then be used to clone an existing PowerHA cluster or to reapply an earlier configuration. In addition, the snapshot can be easily modified to capture additional user-defined configuration information as part of the PowerHA snapshot. VCS does not have a snapshot function per se, but allows for the current configuration to be dumped to file. The resulting VCS configuration files can be used to clone cluster configurations. There is no VCS equivalent to applying a cluster snapshot.
3.8.6 Cluster administration facilities
Cluster management is typically via the System Management Interface Tool (SMIT). The PowerHA menus are tightly integrated with SMIT and are easy to use. There is also close integration with the AIX operating system:
Security: PowerHA employs AIX user management to control access to cluster management function. By default, the user must have root privilege to make any changes. AIX roles can be defined if wanted to provide a more granular level of user control. Achieving high availability requires good change management, and this includes restricting access to users who can modify the configuration.
Logging: PowerHA log files are simple ASCII text files. There are separate logs for messages from the cluster daemons and for cluster events. The primary log file for cluster events is the hacmp.out file, which by default is in /tmp. The system administrator can define a non-default directory for individual PowerHA log files. The contents of the log files can be viewed via SMIT or a web browser. In addition, RSCT logs are also maintained for PowerHA/ES.
Monitoring and diagnostic tools: PowerHA has extensive event monitoring capability and it is possible to define a custom PowerHA event to run in response to the outcome of event monitoring. In addition, multiple pre-events and post-events can be scripted for all cluster events to tailor them for local conditions. PowerHA and VCS both support flexible notification methods, SNMP, SMTP, and email notification. PowerHA uses the AIX error notification facility and can be configured to react to any error reported to AIX. VCS is based on event triggers and reacts to information from agents. PowerHA also supports pager notification.
Emulation tools: PowerHA can emulate error log entries to validate any customization to error notification. The VCS Java Cluster Manager GUI offers a feature called the HA Fire Drill, which runs checks against resources, checking for and fixing specific errors.
Both PowerHA and VCS provide tools to enable maintenance and change in a cluster without downtime. PowerHA has the cluster single point of control (CSPOC) and dynamic reconfiguration capability (DARE). CSPOC allows a cluster change to be made on a single node in the cluster and for the change to be applied to all nodes. Dynamic reconfiguration uses the cldare command to change configuration, status, and location of resource groups dynamically. It is possible to add nodes, remove nodes, and support rolling operating systems or other software upgrades. VCS has the same capabilities and cluster changes are automatically propagated to other cluster servers. However, PowerHA has the unique ability to emulate migrations for testing purposes.
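As a sketch of such an online change, a resource group can be moved to another node from any single cluster node, with the change coordinated across the cluster. The resource group and node names below are placeholders, and the option syntax should be confirmed against the PowerHA documentation for your release.

    # Move a resource group to another node using the classic utility
    clRGmove -g app_rg -n node2 -m

    # The same operation through the clmgr interface
    clmgr move resource_group app_rg NODE=node2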
3.8.7 PowerHA and Symantec Cluster Server feature comparison summary
Table 3-1 shows the PowerHA and Symantec Cluster Server environment support.
Table 3-1 PowerHA and Symantec Cluster Server environment support

Environment                       | PowerHA                                   | VCS for AIX
Operating system                  | AIX 6.1.6 and above; AIX 7.1.0 and above  | AIX 6.1.6 and above; AIX 7.1.0 and above
Network connectivity              | Ethernet                                  | Ethernet
Disk connectivity                 | iSCSI, Fibre Channel                      | iSCSI, Fibre Channel
Maximum servers in a cluster      | 16                                        | 64*
Concurrent disk access            | Yes (raw logical volumes only)            | N/A
Integrated DLPAR fallover support | Yes                                       | No; can be customized via scripts
LPAR support                      | Yes                                       | Yes
Storage subsystems                | All supported by VIOS                     | All supported by VIOS

* VCS is capable of supporting clusters with up to 64 nodes. Symantec has tested and qualified configurations of up to 32 nodes at the time of the 6.1 release.
Table 3-2 shows the PowerHA and Veritas disaster recovery support.
Table 3-2 PowerHA and Veritas disaster recovery support

Replication option supported            | PowerHA Enterprise Edition | VCS for AIX
GLVM                                    | Yes                        | No
XIV Sync/Async                          | Yes                        | Yes
DS8000 Metro/Global Mirror              | Yes                        | Yes
SVC Metro/Global Mirror                 | Yes                        | Yes
IBM Storwize V7000 Metro/Global Mirror  | Yes                        | Yes
EMC SRDF Sync/Async                     | Yes                        | Yes
EMC SRDF/Star                           | No                         | Yes
EMC RecoverPoint                        | No                         | No
Hitachi TrueCopy/Universal Replicator   | Yes                        | Yes
HP Continuous Access                    | Yes                        | Yes
HP 3PAR Remote Copy                     | No                         | Yes
NetApp SnapMirror                       | No                         | Yes
 