Chapter 1. Strategic Information Technology Initiatives

This chapter includes the following topics:

  • Managing Applications
  • Managing Distributed Servers
  • Facing the Unavoidable WAN
  • Changing the Application Business Model
  • Consolidating and Protecting Servers in the New IT Operational Model

Organizations, companies, and governments have been using a distributed workforce for hundreds of years to more effectively reach a target audience, regional market, or geographic territory. Today, with advances in information technology, larger companies must compete in a global economy, which results in a workforce that is distributed globally. Supporting the applications that a distributed workforce must access to perform its daily tasks has become a major business concern, because those applications drive productivity, revenue, and customer satisfaction. Although this book does not address the process of employee management in a distributed workforce environment, it does address the IT aspects that directly impact the ability of employees to function efficiently in such an environment.

This chapter introduces fundamental concepts related to applications, distributed servers, and wide-area networks (WAN) in a distributed workforce environment. It also explains how IT departments have had to modify their business models to support a distributed workforce.

Managing Applications

Software applications have become critical to an employee’s productivity in today’s workplace, driving greater competitive advantage and improving key business metrics. Utility applications are those that are pervasive across all employees in an enterprise (or subscribers in a service provider network). Utility applications generally include e-mail, file, print, portal, search, voice, video, collaboration, and similar applications.

Applications are no longer limited to simple word processing or spreadsheet programs. Critical business tools now range from the simple web browser to applications that support functions such as e-mail, video on demand (VoD), database access, and streaming media. Setting aside Voice over IP (VoIP) and streaming media, these applications drive the majority of traffic that traverses most enterprise WAN connections today. They have evolved from a centralized client/server model to a distributed architecture, which now includes client workstations, personal digital assistants (PDAs), printers, remote desktop terminals, and even telephones connecting over a broad array of WAN infrastructure possibilities.

Although maintaining a distributed workforce has many benefits, such as having knowledgeable employees closer to customers, these benefits cannot be realized without facing a list of challenges. Acquisitions, mergers, outsourcing, and diverse employee responsibilities are all contributors that force IT organizations to deal with a distributed workforce. Acquisitions and mergers create a unique set of challenges because common application platform “religions” need to be agreed upon, and the demands of corporate communication increase. Outsourcing creates not only network security concerns, but also several application-level challenges that require access to applications that might be housed in a corporate data center across potentially distant security boundaries. Lastly, diverse employee responsibilities create unique branch challenges, based on the role and expected output of each employee within a remote branch location.

In each of the previously mentioned scenarios, application performance becomes harder to maintain as the distance between the user and the application grows. As network links span larger geographies, latency increases, because data simply takes longer to travel from point to point across the network. Coupling the limitations of physics with application inefficiencies, commonly called application chatter, leads to dramatically slower response times than would be encountered on a local-area network (LAN). While bandwidth capacity might continue to increase for such long-distance connections, applications and content continue to become more robust and rich-media centric, so the need for capacity will always outpace what is currently available. These variables, and many others, impact the overall performance of not just the application, but also the employee.
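
The impact of chatter is easy to quantify with a rough model in which each protocol exchange costs one full round trip. The following Python sketch uses assumed round-trip counts, latencies, and link rates purely for illustration; real applications and networks vary widely.

# Illustrative LAN vs. WAN response time for a "chatty" operation.
# All values are assumptions chosen to demonstrate the effect.

def response_time(round_trips, rtt_seconds, payload_bytes, link_bps):
    """Serialization delay plus one round trip per protocol exchange."""
    serialization = (payload_bytes * 8) / link_bps
    return serialization + round_trips * rtt_seconds

PAYLOAD = 1_000_000   # a 1-MB document
CHATTER = 500         # protocol exchanges; chatty applications can be worse

lan = response_time(CHATTER, 0.0005, PAYLOAD, 100_000_000)  # 0.5-ms RTT, 100 Mbps
wan = response_time(CHATTER, 0.080, PAYLOAD, 1_544_000)     # 80-ms RTT, T1

print(f"LAN: {lan:.1f} s, WAN: {wan:.1f} s")  # LAN: 0.3 s, WAN: 45.2 s

Even though the T1 is only about 65 times slower than the LAN in raw bandwidth, the chatter-dominated operation takes more than 100 times longer, because each round trip pays the full WAN latency.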

If you ask a network administrator why a specific application runs slowly in the remote branch office, he might say it is the application itself. If you pose the same question to an application manager, she might say it is the network that is causing slow application performance. Who is right in this situation? Many times, they are both right. This section describes testing new applications in the work environment and reducing application latency as methods of improving the usability of applications in a distributed environment.

Testing New Applications

Most enterprises have a structured testing model for the introduction of a new application into the work environment. Many times, the new application is written to meet an enterprise customer’s business objective: process the data input by the user, and save the processed data in a defined location. New application testing typically occurs within the customer’s controlled lab environment. A common test configuration includes a couple of client workstations, a server, and a switched network environment. This type of testing environment proves the application’s functional capabilities but many times does not expose the limitations the application imposes on an end user based in a remote branch office on the other side of a slow WAN link. In many cases, these limitations are not discovered until a production pilot, or until the application is deployed en masse.

Figure 1-1 shows a simple application test environment, which includes an application server, switch, and two clients.

Figure 1-1. Simple Application Test Environment

Reducing Application Latency

Application vendors are aware of many of the limitations created in a distributed workforce environment. To reduce application latency, many have introduced client-side features, such as application caching, that are enabled on the local workstation for each client application. Some vendors provide applications that require software to be loaded on the client workstation before the application can be launched.

Client-based application caching is not enough to overcome the obstacles that are introduced when accessing centralized applications and the associated data over the WAN. Although application caches do aid in the overall performance for a given user, they do not address all of the limitations imposed by application inefficiency, physics, the exponential increase in need for capacity, and the growing geographically distributed workforce.

Two applications common to the distributed workforce are Microsoft Outlook and Internet Explorer. Both applications allow the user or application administrator to define a certain amount of client disk space for application caching. The application cache on the client workstation operates independently of the application server that hosts the data the client is requesting, providing a better-performing application experience.

Microsoft Outlook retains a copy of the user’s mailbox on the user’s local disk storage. A local copy allows the application to access all of the user’s mail locally. On-disk access reduces the application’s frequent dependency on the WAN; Outlook checks for new mail periodically and appends it to the locally cached copy of the user’s mailbox.

Microsoft Internet Explorer supports configurable storage and location options for cached Internet and intranet content. The browser cache stores copies of any objects that do not contain header settings that prohibit the caching of objects for later use. Commonly cached objects include graphics files, Java objects, and sound files associated with web pages. A cached object is effective only if the object is requested two or more times. Users and application administrators have the option of increasing or decreasing the amount of space allowed for cached content, ranging from as little as 1 MB to as much as 32 GB of on-disk storage.

In both Microsoft Outlook and Microsoft Internet Explorer, application caching is effective only for the application being cached and only for the user for whom caching is configured. Access to any content or object that does not reside within the client’s local application cache must traverse the WAN, and not all application traffic can be cached by the client’s local application cache.
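
The behavior just described reduces to a simple rule: serve an object locally if it has been fetched before, and store it only when its headers permit. The following Python sketch is a minimal illustration of that rule, not the actual implementation of either product; fetch_from_server() is a hypothetical stand-in for a real network request.

# Minimal sketch of a client-side object cache. Real browser caches also
# honor expiration, validation, and size limits.

cache = {}

def fetch_from_server(url):
    # Hypothetical placeholder for a network request over the WAN;
    # returns (headers, body).
    return ({"Cache-Control": "public"}, b"...object bytes...")

def get_object(url):
    if url in cache:
        return cache[url]            # second and later requests stay local
    headers, body = fetch_from_server(url)
    if "no-store" not in headers.get("Cache-Control", ""):
        cache[url] = body            # store only if the headers permit it
    return body

Note that the first request always traverses the WAN; the cache pays off only when the same object is requested two or more times, as stated above.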

Managing Distributed Servers

In the 1990s, it was common to build a distributed network architecture that involved deploying application-specific or function-specific servers in each of the locations where users were present. High-speed WAN connectivity was very expensive by today’s standards, and connectivity at rates below 512 kilobits per second (kbps) was common. To allow efficient access to applications and to common, shared storage for collaboration and other purposes, distributed servers located in branch offices became commonplace, as illustrated in Figure 1-2. Having distributed servers creates several challenges for the IT organization, including a difficult path toward implementing reliable and secure data protection and recovery, timely onsite service and support, and efficient, centralized management.

Figure 1-2. Traditional Distributed Server Architecture

Protecting Data on Distributed Servers

A common method for protecting data on distributed servers is to leverage a form of direct-attached tape backup or shared tape backup in each of the locations where servers are present. Tape cartridges have been used for years as a common and trusted form of data protection. As a common practice, third-party services, or even a local employee, will take the tape(s) offsite after each backup has been completed. Although this is a trusted method, tapes can be stolen, misplaced, or lost in transit, or can become defective. Furthermore, some employees might not feel the same sense of urgency about manually taking tapes offsite, which might lead to some or all of the tapes never actually leaving the location.

As an alternative to tape backups, centralized backups have been used, but at a cost that impacts the WAN itself. Although it is not uncommon to run centralized backups over the WAN via third-party applications, data transfer mechanisms such as the File Transfer Protocol (FTP), or host-based replication implementations, these models call for a reliable and high-capacity WAN connection. Such means of protecting data perform best when WAN utilization is low, such as when no other business transactions are taking place, commonly after hours. Even in scenarios where WAN capacity is high and link utilization is low, performance might suffer due to other causes such as server limitations, latency, packet loss, or limitations within the transport protocol.
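
A quick calculation shows why these models demand a reliable, high-capacity connection. The following Python sketch estimates how long a backup would take over links of different sizes; the data volume, link rates, and free capacity are assumed values used only for illustration.

# Illustrative backup-window calculation; all inputs are assumptions.

def transfer_hours(data_gb, link_mbps, usable_fraction):
    bits = data_gb * 8e9
    return bits / (link_mbps * 1e6 * usable_fraction) / 3600

# 125 GB of branch data over a T1 that is 80 percent free after hours:
print(f"T1:   {transfer_hours(125, 1.544, 0.8):.0f} hours")  # 225 hours
# The same data over a 45-Mbps DS-3 with the same free capacity:
print(f"DS-3: {transfer_hours(125, 45, 0.8):.1f} hours")     # 7.7 hours

Under these assumptions, a full backup over a T1 cannot complete even in a week of after-hours windows, which is why centralized backup over low-capacity WAN links so often fails in practice.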

Centralized data protection is driven by a variety of forces including lower cost of management and less capital investment. Another key driver of centralized data protection is regulation initiated by government agencies or compliance agencies within a particular vertical. These regulations include Sarbanes-Oxley (SOX) and the Health Insurance Portability and Accountability Act (HIPAA).

Providing Timely Remote Service and Support

Remote service and support present another challenge with a distributed infrastructure: the further from the corporate data center an asset resides, the more costly that asset is to support. If a branch server fails, for instance, it is not uncommon for the users in that branch to go without access to the data or applications hosted on that server. In some cases, users might be able to reconfigure their workstations to access information from another repository, which might require them to reach that repository over the WAN. This can have a disastrous impact on user productivity and also increase WAN utilization, which might cause other applications and services using the WAN to suffer as well.

Using Centralized Management Methods

Several products exist today—either native to the operating system or offered by third parties—that allow for centralized management of distributed servers. Although these centralized management methods are effective, they still involve several aspects that impact the WAN. In some cases, remote desktop capabilities are required to manage the remote server, and this creates WAN traffic, as well as additional security considerations in the branch.

Operating system and application patch management, along with antivirus signature file distribution, can create a significant amount of WAN traffic, ranging from several hundred kilobytes to over 100 MB per patch. The distribution of this critical traffic needs to be timed in such a way that it does not impact the business-related traffic on the WAN. Once a patch is applied to an operating system, the process commonly involves a reboot of the branch server to enable the changes. The problem is further exacerbated when such mechanisms for software and patch distribution are extended to include desktop image management. In these cases, the objects being transferred over the network can be multiple gigabytes in size.

Alternatives to centralized server management include onsite administration of patches, which is often considered more expensive to the corporation due to the human factors involved.

Facing the Unavoidable WAN

Nearly all remote locations today have some form of network connection to the data center or to an intermediate location such as a regional office. This connection, commonly a WAN connection in the case of a remote office, carries all traffic to the data center and beyond via fiber, cable modem, DSL, satellite, metro Ethernet, or other interconnect technology. Today, WAN traffic comprises more than just file server access, file transfer, data protection, and e-mail message transmissions; business and personal Internet traffic, streaming media, printing, management, enterprise applications, and thin-client sessions all traverse the same shared WAN connection. The WAN now has to support traffic that might not have been planned for in the past, and all of this traffic shares a single connection. In this way, the reliance of the IT organization on the network continues to increase over time, and the demands placed on the network increase as well.

In today’s business model, many times the users, and the applications and content needed by the users, dictate what services the WAN supports. The web browser, for example, was traditionally seen as a non-business-critical application on the user desktop. Some operating systems used to support the full removal of the web browser. Today, the web browser is one of the first applications a user launches after logging into the workstation. The web browser is now the portal into business-critical applications such as customer relationship management (CRM), enterprise resource planning (ERP), and document collaboration applications, and to personal destinations such as e-mail hosting sites and web logs, known as “blogs.”

As more and more applications transition from client/server to browser based, and as application vendors continue to standardize on web-based protocols such as the Hypertext Transfer Protocol (HTTP) and Extensible Markup Language (XML), dependency on the web browser will only increase within the corporation. This is one form of traffic that calls for a significant amount of awareness and optimization when planning for the future.

Most traditional business functions rely on protocols that are more client/server centric. The Common Internet File System (CIFS) protocol is one of many widely used and accepted protocols for reading, writing, transferring, or otherwise manipulating content stored on a file server share. CIFS is commonly recognized as a chatty protocol with substantial overhead in terms of client and server transactions; many enterprises consider it costly to WAN links, but necessary to support business transactions and productivity functions when file shares are centralized.

Changing the Application Business Model

In light of the challenges discussed in this chapter so far, IT organizations have begun turning to new ways of solving complex infrastructure, productivity, and performance issues. A new class of networking technologies called application acceleration and WAN optimization helps to overcome many of these obstacles. These technologies are deployed on devices called accelerators, which reside at strategic points in the network—typically one or more exist on each end of a WAN connection—and employ advanced compression, flow optimization, latency mitigation, and other techniques to improve performance and minimize bandwidth consumption.

These devices are fundamentally changing the application business model and IT at large, as they enable centralization and consolidation of resources while ensuring performance service levels. As such, remote users are able to work with centralized servers, applications, data, and more, and receive performance similar to that of having the infrastructure in the same office. In short, accelerators help to mitigate the performance challenges presented by the WAN and ensure more efficient and effective utilization of network resources.
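
One of the techniques accelerators employ, compression, is easy to demonstrate. The following sketch uses Python’s standard zlib library on deliberately redundant data; production accelerators apply far more sophisticated methods, such as dictionary-based redundancy elimination across flows, but the underlying principle is the same.

import zlib

# Redundant data compresses well, which is why compression reduces WAN
# bandwidth consumption. The payload below is invented sample data.

payload = b"INVOICE,2007-01-15,ACME Corp,Widget,42,19.99\n" * 1000

compressed = zlib.compress(payload)
ratio = len(payload) / len(compressed)

print(f"original:   {len(payload)} bytes")
print(f"compressed: {len(compressed)} bytes ({ratio:.0f}x reduction)")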

Accelerators and the foundational technologies that they employ are the topic of the remainder of this book and will be examined in more detail in later chapters.

Prior to deploying accelerators, the first step in transforming the way enterprise applications and service infrastructure are deployed and managed, and in optimizing networks to support business-critical application traffic, is to gain full awareness of how the network, in particular the WAN, is being used. Several utilities are available today to analyze and categorize the traffic that traverses a network. Utilities ranging in cost from freeware to multiple millions of dollars provide deeper inspection and granular examination of traffic flows.

These utilities help application and network administrators ensure that the network is provisioned in such a way that packet handling is aligned with business priority and application requirements (discussed at length in Chapter 3, “Aligning Network Resources with Business Priority”). They also help administrators understand what traffic needs to be addressed when considering an accelerator solution. Each application that traverses the WAN reacts differently to an accelerator, so understanding what traffic is crossing the network helps determine which applications can be optimized and which function better without optimization or require none at all.
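
As a simplified illustration of what such utilities do, the following Python sketch classifies flow records by well-known destination port and totals the bytes observed per application. Real analysis tools rely on flow export and deep packet inspection rather than port numbers alone, and the flow records here are invented for the example.

from collections import Counter

# Classify hypothetical (destination port, bytes) flow records.

PORT_APPS = {80: "HTTP", 443: "HTTPS", 445: "CIFS", 25: "SMTP", 21: "FTP"}

flows = [
    (445, 48_000_000), (80, 22_000_000), (443, 9_000_000),
    (25, 5_000_000), (445, 31_000_000), (3127, 1_000_000),
]

by_app = Counter()
for port, nbytes in flows:
    by_app[PORT_APPS.get(port, "other")] += nbytes

for app, nbytes in by_app.most_common():
    print(f"{app:8s} {nbytes / 1e6:7.1f} MB")

In this invented sample, CIFS dominates the link, which would make centralized file shares an obvious first candidate for optimization.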

After determining which applications can be targeted for optimization, consider how the client uses these applications. Business applications utilize several different methods of interaction, including client to server, thin-client to server, and web-based sessions. Also included in this consideration should be any protocols that are natively leveraged by the operating system to map to remote resources, such as the Common Internet File System (CIFS), the Network File System (NFS), the Messaging Application Programming Interface (MAPI), and remote-procedure call (RPC)-based services.

In some cases, removing servers from branch locations and centralizing the applications, storage, and management in the data center will prove to be not only possible, but also more cost effective and efficient when combined with the addition of an accelerator solution. Leveraging an optimized WAN will allow branch locations to reduce their overall operating and capital expenses while maintaining the overall user experience in the branch, and allow for a greater level of control over the traffic that traverses the WAN.

Consolidating and Protecting Servers in the New IT Operational Model

Already burdened with the challenges of providing access to content, applications, and services for the growing geographically dispersed workforce, IT organizations are also faced with a conflicting challenge: controlling capital and operational costs across the enterprise while enabling an “always-on” infrastructure.

Companies sell into hypercompetitive markets that are driven by the explosion of the Internet, supply chain efficiencies, diminishing consumer prices and profits, and the entrance of larger profit-centric organizations into historically niche markets. Managers now treat IT organizations as profit and loss centers, controlling expenses in the same way that other departments within an enterprise organization control theirs.

Looking at infrastructure deployments globally, IT organizations quickly realize that the choice to move to a distributed server model has fulfilled the needs of many initiatives, such as enabling productivity through high-performance access to local infrastructure resources. It has, however, also created a nightmare of a capital and operational expenditure model that, in the new economy, must be controlled. Enter server consolidation.

Server Consolidation

Servers and the associated infrastructure deployed in distributed locations, particularly remote branch offices, comprise arguably the largest cost center for any IT organization that operates within a distributed, global enterprise. Having a local server infrastructure for applications, content, and collaboration does certainly provide remote users with the access and performance metrics they need to drive productivity and revenue, but also requires many costly capital components, data protection, and management resources. Examining the remote branch office server infrastructure more closely reveals that most offices have a plethora of services and applications running on remote servers to provide basic networking functions and interfaces to revenue-producing corporate applications. Such services and applications could include the following:

  • Network access services: Dynamic Host Configuration Protocol (DHCP), Domain Name System (DNS), Windows Internet Name Service (WINS), and Microsoft Active Directory (AD) enable user workstations to become addressable on the network, identify and resolve other network resources, and provide login and security services.

  • File and print services: CIFS and NFS for file sharing and local spooling and management of print jobs enable collaboration, data storage, record keeping, and productivity.

  • E-mail and communication services: Simple Mail Transfer Protocol (SMTP), Post Office Protocol v3 (POP3), Internet Message Access Protocol (IMAP), MAPI, and a variety of call control, VoIP, and other telephony applications provide the foundation for employee communication within the organization and abroad.

  • Data protection services: These applications, including Symantec Veritas NetBackup, EMC Legato, Connected TLM, and others, enable the transfer of data from network nodes to be stored as a backup copy for compliance or recovery purposes.

  • Software distribution services: Applications such as Microsoft Systems Management Server (SMS), Novell ZENworks, or Symantec’s Software Download Solution allow IT organizations to distribute patches for operating systems and applications to ensure proper operation and security.

While the prices of servers might continue to fall, the costs of server management and data storage management are increasing. Each server deployed requires a number of costly components that, when examined as a solution, dwarf the capital cost of the server and associated components themselves. These costly components include the following:

  • Server hardware and maintenance: A server for the remote branch office costs approximately $1000 but can be as high as $10,000 or more. Purchasing a maintenance contract from the server vendor ensures fast replacement of failed components and potentially onsite troubleshooting but is generally accompanied by a price tag of $500 or more per year for the first three years and might increase year over year, thus making server replacement an attractive solution as the age of the server increases.

  • Data storage capacity: Each server needs a repository for the operating system, applications, and data that will be used by remote branch office users. For smaller sites that require no redundancy, a single disk might be used, but for medium and large branch offices where redundancy and performance are required, data storage might involve multiple disks within the server chassis (also known as direct-attached storage, or DAS) using Redundant Array of Independent Disks (RAID) technology. Many medium and large branches require capacity that goes beyond what the server itself can hold, dictating the need for costly external storage or dedicated storage infrastructure such as network-attached storage (NAS) devices or storage area networking (SAN) components. The cost of remote branch office storage can range from a few hundred dollars to tens of thousands of dollars or more.

  • Data protection hardware: Data stored on client workstations and servers must be protected, and copies must be kept for long periods of time. For the small remote branch office, a single tape drive attached to the server might suffice, but for offices with larger data storage requirements, external tape subsystems, including automation and libraries, might be required. Such solutions can range in price from a few hundred dollars to tens of thousands of dollars.

  • Data protection media and management: Costly tape media is required when using tape as a form of data protection. Some government regulations require that corporations retain copies of their data for longer periods of time; organizations will find that they need to have as much as ten times the disk storage capacity or more simply to hold archived copies of protected data to meet such regulations, requiring a large number of tapes. Furthermore, these tapes might be vaulted offsite for additional protection.

  • Server operating system and maintenance: Operating systems for servers range from zero cost (freeware) to $1000 or more, depending on the vendor and functions provided. Purchasing a maintenance or support contract from a server operating system vendor can be even more expensive than the server hardware maintenance contract. Adding to this, monthly patch management and related operating system upkeep increase the administrative staff’s direct involvement and the department’s operating expenses.

Alongside these components are a number of equally costly expenses, including the cost of applications and application support, antivirus software and support, and management tools. When coupled with the operational costs incurred by having to manage the server infrastructure, including full-time, part-time, or contract IT resources, many organizations find that the first-year cost of a single server, including capital expenditure, can be $50,000 or more. Second- and third-year expenses, which do not include the majority of the original capital expense (with the exception of additional storage capacity and tape media), can be $35,000 or more per server. For an enterprise with 100 branch offices, with two servers in each branch, this adds up to a first-year investment of $10 million and a total three-year investment of $24 million.
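
The arithmetic behind these totals is straightforward, as the following sketch shows; the per-server figures are the estimates quoted above.

# Branch server cost arithmetic from the example in the text.

BRANCHES = 100
SERVERS_PER_BRANCH = 2
FIRST_YEAR_PER_SERVER = 50_000   # capital plus first-year operations
LATER_YEAR_PER_SERVER = 35_000   # recurring cost in years two and three

servers = BRANCHES * SERVERS_PER_BRANCH
first_year = servers * FIRST_YEAR_PER_SERVER
three_year = first_year + 2 * servers * LATER_YEAR_PER_SERVER

print(f"first-year investment: ${first_year / 1e6:.0f} million")  # $10 million
print(f"three-year investment: ${three_year / 1e6:.0f} million")  # $24 million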

Simply consolidating server infrastructure into one or more managed data centers is an attractive option. While this eliminates the need to deploy, manage, and maintain the remote office servers, it creates two much larger problems:

  • Remote branch office users are subject to a dramatic change in information and application access. In a utopian environment where WAN bandwidth were unlimited and physics (and, more specifically, the speed of light) could be defeated, perceived user performance would not change noticeably. The reality is that WANs do not behave like LANs because they do not share the same characteristics (discussed in Chapter 2, “Barriers to Application Performance”), and application performance quickly degrades.

  • Supporting these centralized services requires greater WAN capacity. Many WAN connections run at nearly full utilization, driven by the increasing richness of content and value provided by deploying telephone communications (VoIP) using existing network connections.

Solutions such as application acceleration and WAN optimization can fill the gap between performance and consolidation while also ensuring that network capacity is controlled and effectively used, thereby improving performance for applications and services that are already centralized while maintaining performance levels for services that are being centralized from the remote branch office. With these technologies in place, IT organizations can safely remove many of these high-cost items from the remote branch office infrastructure, replacing them with lower capital- and operational-cost components such as accelerators, without impeding the ability of the remote branch office worker to be productive and drive revenue. An example of an accelerator deployment is shown in Figure 1-3.

Figure 1-3. A Centralized Deployment Leveraging Accelerators

Compliance, Data Protection, Business Continuity, and Disaster Recovery

Compliance, data protection, business continuity, and disaster recovery have become priorities for IT organizations in the last decade, particularly due to acts of war, natural disasters, scandalous employee behavior (financial, security, and otherwise), and regulations from government or industry bodies. Today’s IT organizations face additional pressure to ensure that every byte of data is protected, secured, and replicated to a distant facility in real time (synchronous) or near real time (asynchronous) so that transactions relative to key business applications, communications, or even basic productivity applications are recoverable:

  • Synchronous replication: Used for business-critical tier-1 applications, data, and infrastructure. Provides the ability to guarantee data coherency to the last successfully completed transaction. Generally bound to small geographic areas due to transmission latency and bandwidth requirements, typically a metropolitan area. Used primarily for hot sites, where recovery times need to be minimal.

  • Asynchronous replication: Used for less-critical applications or for longer-distance replication where transmission latency and bandwidth prohibit synchronous operation. Enables replication of data across a larger geography but might compromise coherency, as the replica site might not have received the latest transactions. Used primarily for warm or cold sites, where recovery times can take longer.
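
The trade-off between the two models can be expressed in a few lines of Python. This is a conceptual sketch only; real replication is implemented in array firmware or host software, and replicate_over_wan() is a hypothetical stand-in for the actual transfer.

pending = []          # writes not yet applied at the replica site

def replicate_over_wan(record):
    pass              # placeholder: ship the record to the remote site

def write_sync(record):
    # Synchronous: the application is not acknowledged until the replica
    # confirms, guaranteeing coherency to the last completed transaction
    # at the cost of a full WAN round trip per write.
    replicate_over_wan(record)
    return "ack"

def write_async(record):
    # Asynchronous: acknowledge immediately, replicate in the background.
    # Tolerates distance and latency, but records still in 'pending'
    # would be lost if the primary site failed before they were shipped.
    pending.append(record)
    return "ack"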

With a distributed server infrastructure, meeting coherency challenges is next to impossible. The distance separating the IT personnel chartered with data management responsibilities is commonly compounded by reduced technical expertise in the branch and by differences of opinion about the priority of such tasks. For example, IT resources chartered with data protection tasks in remote branch offices might treat data protection management as a repetitive annoyance. (Swapping tapes and rerunning failed backup jobs might not be the most enjoyable of responsibilities.) IT resources chartered with data protection tasks in the data center see data protection as a business-critical job function and generally take great strides to ensure its success.

Data Protection and Compliance

Server consolidation enables the cost-savings metrics discussed in the previous section and also allows IT organizations to meet data protection requirements and regulations from industry or government bodies more effectively. Not only is it good practice to keep copies of data for recovery purposes, many industries and agencies are mandated to do so and must adhere to strict guidelines. Having fewer silos of data in fewer locations means fewer pieces of data need to be protected, which translates into better compliance with such regulations.

A side benefit of server consolidation is that fewer redundant copies of data need to be kept, and global collaboration across multiple locations can be safely enabled, thereby mitigating version control issues and data discrepancies in most cases. For example, assume that a company has 100 branch offices, each with two servers and 250 GB of total storage capacity per remote location. If the storage capacity at each location is 50 percent utilized, each location has approximately 125 GB of data that needs to be protected. If each site has even 20 GB of data that is common among all of the locations (an extremely conservative estimate), 2 TB of data storage capacity is wasted across the enterprise, not only on disk, but also on tape media housing the backup copies. Alongside the capacity wasted on redundancy, approximately 12.5 TB of disk capacity sits unutilized. Other, less commonly measured resources also benefit from server consolidation, such as processor utilization, memory consumption, and network interface utilization. Figure 1-4 illustrates storage utilization in a distributed infrastructure.

Figure 1-4. Low Utilization and Stranded Capacity
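
The stranded-capacity arithmetic from this example is reproduced in the following sketch; the site count, capacity, utilization, and common-data figures are the same assumptions used in the text.

# Stranded-capacity arithmetic from the example in the text.

SITES = 100
CAPACITY_GB = 250     # total storage per site
UTILIZATION = 0.50    # fraction of capacity actually used
COMMON_GB = 20        # data duplicated at every site (conservative)

data_per_site = CAPACITY_GB * UTILIZATION           # GB to protect per site
redundant_tb = SITES * COMMON_GB / 1000             # duplicated data
unused_tb = SITES * CAPACITY_GB * (1 - UTILIZATION) / 1000

print(f"data to protect per site: {data_per_site:.0f} GB")   # 125 GB
print(f"redundant copies:         {redundant_tb:.1f} TB")    # 2.0 TB
print(f"unutilized capacity:      {unused_tb:.1f} TB")       # 12.5 TB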

By collapsing this infrastructure into the data center, redundant copies of data can be safely eliminated. Higher levels of efficiency can be gained, because data can be stored on shared arrays, which in turn enables capacity planning efforts and capital purchases to be amortized. The need to overprovision a remote branch office server with excess capacity that goes largely unutilized is nearly eliminated. Expert-level, experienced, and well-trained data center IT staff using enterprise-class data protection software and hardware can manage the protection of the data, ensuring that backup windows are better managed, fewer operations fail, and failed operations are corrected.

Figure 1-5 illustrates an efficiently designed network, using accelerators for storage consolidation and data protection. Within the illustration, an automated backup process has been centralized to the data center, eliminating the need for human intervention.

Figure 1-5. High Utilization, High Efficiency, and Lower Costs

With fewer silos of data, a smaller number of data protection components need to be purchased and maintained, including tape drives, tape libraries, and expensive tape media, which again lowers costs.

Business Continuance and Disaster Recovery

Business continuance and disaster recovery planning are becoming commonplace in today’s enterprise, driven particularly by compliance (as mentioned in the previous section) and by the overarching threats presented by malicious attackers (inside and outside), natural disasters, and terrorist attacks. While business continuance and disaster recovery are two distinct but adjacent business initiatives, they are often coupled to provide a more holistic approach to ensuring that a business can survive a disastrous scenario through careful contingency planning, effective system recovery processes, service failover, and rerouting of application traffic and workload. Business continuance and disaster recovery are defined as follows:

  • Business continuance: The ability of a business to continue operations in the event of a disaster. This requires that systems be readily available, in an active (hot site), standby (warm site), or offline (cold site) mode to assume responsibility when primary systems are rendered unavailable.

  • Disaster recovery: The ability of a business to recover business-critical data, applications, and services after a disastrous event has taken place.

Both of these initiatives rely not only on well-documented and tested processes, but also on the availability of data, infrastructure, and personnel to run business-critical applications. To ensure that data and infrastructure are readily available, many organizations have deployed secondary (or tertiary) data centers that have facilities and hardware necessary to resume operation. Data can be replicated from the primary data center(s) to the secondary or tertiary data center(s) via synchronous or asynchronous methods to ensure different levels of recoverability and continuity.

With a consolidated server infrastructure, organizations are better positioned to implement more effective business continuance and disaster recovery solutions. Less server infrastructure is required, and fewer silos of data must be replicated to distant data centers. Accelerators can even be deployed to improve the throughput and efficiency of replication by minimizing bandwidth consumption, enabling better utilization of available network capacity, and ensuring that the replicated data in the secondary site is more coherent with the data stored in the primary site.

Summary

Transitioning the IT business model from distributed to centralized will bring many enterprises a higher degree of control over their resource utilization and improve manageability, while also reducing the overall capital and operating expenses associated with managing a distributed server, application, and storage infrastructure.

Application acceleration and WAN optimization solutions position organizations for initiatives such as server consolidation and regulatory compliance by overcoming WAN conditions to enable consolidation without compromising performance. Accelerators not only enable consolidation and improve performance for applications that are already centralized, but also ensure that remote-user performance expectations for services that were initially distributed are maintained once that infrastructure is centralized. Such technologies help to improve overall user productivity and organizational posture relative to meeting business objectives and driving revenue. With a consolidated server, application, and storage infrastructure, IT organizations find themselves operating in a new model. Accelerators create uncompromising efficiency, positioning the WAN to meet the demands of the global workforce, government regulations, data protection needs, business continuance, and disaster recovery.
