Chapter 3
Hyperconnectivity Drives Innovation

As you read this chapter, you may have already noted the significance of business agility and the roadmap we presented using the balanced scorecard in previous chapters. We laid a foundation there, defining the various business stakeholders and showing how today's technology forces have helped align chief information officers (CIOs), chief marketing officers (CMOs), and others in a turbulent business environment. We also defined the Business Agility Readiness Roadmap, with clear guideposts to help you make rapid progress in optimizing your business processes, and we highlighted cases of success and failure to help you identify the stage your business is at and transform it with these technology forces. Whatever sector you are in today, it is imperative to transform your current business model and processes to create business agility so that you can survive and thrive; it was acceptable to run a business independently just a few years ago, but no longer. Finally, we introduced business ecosystems, a concept that brings all stakeholders, such as employees, customers, and partners (distributors, value-added resellers, systems integrators, independent consultants, investors, shareholders), into your business to create agility and help it succeed. On reflection, you will realize that all of them play a major role in your business, and none should be ignored or taken lightly.

We have entered the age of hyperconnectivity—the state in which we are always connected to the Internet through a number of devices (computers, PDAs, smart devices, smartphones), accessing applications and content for various purposes. Rapid innovation in the technology sector has continued to drive consumers and businesses to adopt devices that enable increasingly mobile and connected access to their content and services. This new era is about much more than just sheer numbers of new network-connected devices, computers, and applications. It is about providing pervasive access to, and continuous presence on, the network, where anyone or anything can interact with anyone or anything else no matter where they are located. We also referred to a paradigm shift from a system of record (SOR) to a system of engagement (SOE) that is maturing rapidly in adoption both at the consumer level and at the business level.

As Fabio Castiglioni and Michele Crudele of IBM write, today's online consumers require a rich experience that draws on information from disparate structured and unstructured sources, social channels, and their friends' recommendations. This gives them quick information and better resources for effective decision making. Such experiences have become the norm in the consumer space, and we need to replicate them in our businesses, too. An SOE enables this rich experience by extracting value from information that arrives through multiple channels and by enabling new digitized business models. A cloud operating environment (CloudOE) is the platform that supports SOE workloads: it provides the agility and velocity that an SOE needs by offering an ecosystem for developing, deploying, and operating SOE applications.

Hyperconnectivity has driven major innovations in emerging technologies such as cloud, social, mobility, big data, and predictive analytics. In this chapter we focus on the cloud technology and architecture that enable business agility. In recent times, cloud computing has emerged as a much-hyped and much-discussed concept in information technology (IT). It has drawn the attention not only of IT professionals but of industry and management people everywhere seeking to understand it and reap benefits from it. The IT media have written about it extensively, and major conferences still draw large attendances of knowledge workers. It has gone to the extent that software mogul Marc Benioff of Salesforce.com made the sharp statement that “Software is dead,” while in response, Larry Ellison of Oracle commented, “If we're dead—if there's no hardware or software in the cloud—we are so screwed.…But it's not water vapor! All of it is a computer attached to a network. What do you think Google runs on? Do they run on water vapor? I mean, cloud—it's all databases, operating systems, memory, microprocessors, and the Internet. Then there's a definition: What's cloud computing? It's using a computer that's out there. ‘Open source is going to destroy our business, and there'll be nothing but open source and we'll be out of business.’ And minicomputers are going to destroy mainframes and PCs are going to destroy minicomputers and open source is going to destroy all standards and all software is going to be delivered as a service. I've been at this a long time, and there are still mainframes—but it was the first industry that was going to be destroyed, and watching mainframes be destroyed is like watching a glacier melt.…” Ellison concluded in anger and frustration, “What the hell is cloud computing??”

Bob Evans, senior vice president at Oracle, later laid out some revealing facts in a blog post at Forbes, clearing away doubts people may have had. Consider a few of them. Almost eight years earlier, before cloud terminology was established, Oracle started developing a new generation of application suites (called Fusion Applications) designed for all modes of cloud deployment. Oracle Database 12c, released recently, supports the cloud deployment frameworks of today's major data centers and is the outcome of those years of development effort. Oracle's software as a service (SaaS) revenue has already exceeded the $1 billion mark, and Oracle is the only company today to offer all levels of cloud services: SaaS, platform as a service (PaaS), and infrastructure as a service (IaaS). Oracle has helped over 10,000 customers reap the benefits of cloud infrastructure and now supports over 25 million users globally. One may argue that this could not have happened had Larry Ellison not come to appreciate cloud computing; we can certainly understand the dilemma he faced as an innovator while these emerging technologies were disrupting the business (www.forbes.com/sites/oracle/2013/01/18/oracle-cloud-10000-customers-and-25-million-users/).

You will agree with us that cloud computing is undoubtedly the hottest and most commonly used buzzword in IT, and various IT service providers interpret it to their own advantage and benefit. In this chapter we attempt to simplify the term, the concept, and the paradigm of cloud computing and its related technologies for business executives, technical developers, and knowledge workers alike. Cloud computing has become a prominent battleground among major IT vendors offering hardware, software, network infrastructure, and consulting services. Enterprises have been slow to adopt it for various reasons, namely immature standards, concern for data security, and the perception that the model is not yet fit for running mission-critical applications (Hunter 2009).

Cloud computing is becoming a new platform for enterprise and personal computing. It competes with traditional desktop or handheld computers (including smartphones) that run applications directly on the device. But SaaS and cloud computing will rise to the level of an industry platform only when firms open their technology to other industry players. Salesforce.com, for example, created a customer relationship management (CRM) product and configured it not as packaged software but as software delivered over servers and accessed through a browser. Salesforce.com developed its own in-house platform for delivering the software as a service to its customers. It then opened AppExchange as an integration platform for other application companies building products that use features of the Salesforce.com CRM product. Finally, Salesforce.com created a new industry platform, Force.com, a development and deployment environment built on Salesforce's SaaS infrastructure.

Paradigm Shifts: Mainframes to Client-Server to Cloud Computing

Information technology has undergone several paradigm shifts: from mainframes to client-server to Internet computing and now to cloud computing. Mainframes arrived in the 1960s, initially serving single users, and gradually evolved in the 1970s into multi-user systems in which several users connected through terminals shared computing resources in real time. In this model the large computing resource was virtualized, and a virtual machine was allocated to each user sharing the system; in reality, the terminals were accessing virtual instances of a mainframe's computing resources. Cloud computing applies the same concept of virtual instances across many thousands of machines.

We then witnessed the next wave, client-server computing, in which the mainframe's role as the computing center was diluted and computing resources were distributed. As computing power increased, work gradually shifted away from centralized computing resources toward increasingly powerful distributed systems. In the client-server age, PCs and PC-based applications dominated: many routine tasks moved to the desktop, and more resources were deployed there to run PC- or client-based applications, while the mainframe was reserved for corporate enterprise resource planning (ERP) and data-processing applications. The standardization of networking technology simplified connecting systems as Transmission Control Protocol/Internet Protocol (TCP/IP) became the protocol of the growing Internet in the 1980s. The emergence of the web and HTTP in the late 1990s then brought computing back to the data center environment.

Next Evolution: Large Data Center—Grid Computing—Cluster Computing

In recent times, computing capabilities, including hardware, software, and storage, have evolved enormously. Hardware and storage devices are mass-produced and commoditized; a few commodity servers can now handle extensive data processing that previously required large mainframes or minicomputers.

Grid computing enables groups of networked commodity computers to be pooled and provisioned on demand to meet the changing needs of business. Instead of dedicating servers and storage to each application, grid computing lets multiple applications share computing infrastructure, resulting in greater flexibility, cost and power efficiency, performance, scalability, and availability, all at the same time.

Cluster computing, in which a set of loosely or tightly connected low-cost commodity computers works together so that in many respects it can be viewed as a single system, provides scalable on-demand flexibility.

Defining Cloud Computing

There have been many interpretations of cloud computing concepts and technology by vendors and service providers. Some of the leading firms and institutions that have defined cloud computing are:

  • Gartner. Cloud computing is a style of computing where massively scalable IT-related capabilities are provided as a service across the Internet to multiple external customers.
  • Forrester Research. The cloud provides a pool of abstracted, highly scalable, and managed infrastructure capable of hosting end-customer applications and billed by consumption.
  • The 451 Group. The cloud is IT as a service, delivered by IT resources that are independent of location.
  • Wikipedia. Cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet.
  • IBM. A cloud computing platform dynamically provisions, configures, reconfigures, and deprovisions servers as needed. Cloud applications use large data centers and powerful servers that host web applications and web services.
  • University of California at Berkeley. The cloud offers the illusion of infinite computing resources available on demand, the elimination of up-front commitments by cloud users, and the ability to pay for use of computing resources on a short-term basis as needed.

Prior to the popularity of cloud computing, a number of related service offerings attracted only minor attention: grid computing, utility computing, elastic computing, and software as a service. The basic technologies of each have been incorporated into cloud computing, which is attracting far more interest than its predecessors. This attraction may be a function of the maturity of the technology and the services offered, or it may be driven by the marketing blitz that has occurred as Amazon, Google, Apple, and other big names have gotten behind it (Smith 2009).

What Is Cloud Computing?

The National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce, develops standards and guidelines for emerging technologies. NIST defines cloud computing as a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. Cloud computing refers to both the applications delivered as services over the Internet and the hardware and systems software in the data centers that provide those services.

According to NIST, this cloud model promotes availability and comprises five essential characteristics, three service models, and four deployment models.

  1. On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed, automatically and without requiring human interaction with each service provider.
  2. Broad network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., laptops, PDAs, and smartphones).
  3. Resource pooling. The provider's computing resources are pooled to serve multiple consumers using a multitenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control over or knowledge of the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or data center). Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.
  4. Rapid elasticity. Capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and can be rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.
  5. Measured service. Cloud systems automatically control and optimize resource use by leveraging metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and the consumer of the utilized service.
We can classify cloud computing several ways, such as hardware, infrastructure, platform, framework, application, and even data center. NIST groups cloud offerings into three service models:

  1. Software as a Service (SaaS). The capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser. The consumer does not manage or control the underlying cloud infrastructure, including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings. Salesforce.com CRM is an example of this service model.
  2. Platform as a Service (PaaS). The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure, including network, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.
  3. Infrastructure as a Service (IaaS). The capability provided to the consumer is to provision processing, storage, networks, and the other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control on selecting network components. Amazon EC2 and S3 are examples of this service model.

In the IaaS model, users subscribe to the use of certain components of a provider's IT infrastructure. Although subscribers don't control the entire cloud infrastructure, they do control selected portions of it, such as firewalls, operating systems, deployed applications, and storage. In the PaaS model, a combination of applications forming a platform is subscribed to as a service; for example, a combination of software tools may be used as a programming and software platform. The SaaS model can be seen as a special case of PaaS in which a single application is subscribed to as a service, often accessed through a web browser.
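
As a small illustration of the IaaS self-service model, here is a minimal sketch using the AWS boto3 SDK to provision a single EC2 virtual server. The image ID and key pair name are placeholders, not values from this chapter.

```python
# Minimal IaaS provisioning sketch using the AWS boto3 SDK.
# The AMI ID and key pair name are placeholders.
import boto3

ec2 = boto3.resource("ec2")

# Request one small virtual server; the subscriber chooses the OS image
# and instance size but never touches the underlying physical hardware.
instances = ec2.create_instances(
    ImageId="ami-xxxxxxxx",   # placeholder machine image
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",     # placeholder key pair
)
print("Provisioned instance:", instances[0].id)
```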

A private cloud is usually owned and used by the same organization, such as a corporation. It often refers to a proprietary computing infrastructure owned by the organization, and provides computing and information services to its employees behind the organization's firewall.

A public cloud often refers to computing and IT infrastructure that is owned by one organization but provides computing and information services to external users or subscribers. By subscribing to services from well-established providers, new start-ups, for example, can quickly meet their computing and information technology (CIT) needs without investing much money or time in implementing their own infrastructure.

NIST also defines four deployment models:

  1. Private cloud. Here the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on or off the premises.
  2. Public cloud. The cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.
  3. Community cloud. The cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, and policy and compliance considerations). It may be managed by organizations or a third party and may exist on or off the premises.
  4. Hybrid cloud. The cloud infrastructure is a combination of two or more clouds (private, public, or community) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).

Key Business Drivers for Cloud Services

We have all witnessed how information and communication technology (ICT) has evolved over time, offering both greater benefits and new challenges to business. In a challenging business and economic environment, businesses constantly look for technology and services to optimize cost and increase efficiency. Start-ups and small and medium-sized businesses (SMBs) look for low up-front infrastructure investment, which requires capital expenditure (capex), and for optimized operational cost to run the business. In a tough economy, even large enterprises are pressured to cut costs and rightsize, depending on market conditions and customer demand. Cloud computing has emerged in the recent past as a promising technology and services paradigm to address these business challenges.

Business Value Propositions for Cloud Computing

Cloud computing offers a number of business benefits; we enumerate several here and support them with real-life case references to help your business achieve greater agility for sustainable competitive advantage.

  • Cloud computing is dynamically scalable. Businesses can draw as much computing power as necessary on an hourly basis. As demand from internal users or external customers grows or shrinks, the necessary compute, storage, and network capacity can be added or removed just as quickly.
  • The resources can be purchased with operational funds, rather than as a capital expenditure. Many IT departments face a long approval process for capital funding, in addition to the wait for equipment delivery and installation. Cloud computing allows them to bring capacity online within a day and to do so using their operational budgets.
  • The equipment does not reside in the company facility. It does not require upgrades to the electrical system, the allocation of floor space, modifications to the air-conditioning, or expanding the IT staff. Computers at Amazon.com consume space, power, and staffing support at Amazon instead of within the customer's company.
  • There are competing providers for this service. If the first cloud provider does not deliver acceptable performance, a company can always shift its business to another company offering better service or lower prices.

President Obama Election Campaign Leverages Cloud to Win

We all know how President Barack Obama's election campaign was successfully executed and managed using these technologies, led by its chief technology officer (CTO), Harper Reed. As in any other business operation, data was critical to the campaign; it arrived in various sizes, types, and frequencies and needed to be consolidated, integrated, and analyzed in real time to provide insights that allowed Obama to tailor his message to the right audience. The team also needed to manage the media, volunteers, and donors supporting the campaign. One can imagine the challenges and limitations of the computing infrastructure resources required for specific time frames during the campaign.

Cloud infrastructure, and especially platform as a service (PaaS), came in handy for the campaign team, providing scalable computing and storage capacity on demand, on a pay-as-you-go basis, at crucial times. The team used autoscaling to add cloud resources quickly, enabling as many as 7,000 volunteers to make more than two million calls to voters in the campaign's final four days.
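
To make the autoscaling idea concrete, here is a hypothetical sketch, using the AWS boto3 SDK, of the kind of scaling group such a team might configure. All names and sizes are illustrative, not details of the campaign's actual setup.

```python
# Hypothetical autoscaling-group sketch (AWS boto3); all names are illustrative.
import boto3

autoscaling = boto3.client("autoscaling")

# Keep at least 2 servers running at quiet times, but allow the group
# to grow to 200 servers under peak call volume.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="call-tool-asg",                         # hypothetical name
    LaunchTemplate={"LaunchTemplateName": "call-tool-template"},  # hypothetical template
    MinSize=2,
    MaxSize=200,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)
```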

Concerns and Risk Assessment of Cloud Computing

Cloud computing offers numerous benefits, but each benefit carries corresponding disadvantages or concerns; it is therefore imperative to assess all risks before implementing and deploying major business-critical applications on the cloud.

  • Security, data privacy, and regulatory compliance are foremost among many concerns. Many companies are hesitant to host their internal data on a computer that is external to their own premises and potentially cohosted with another company's applications. Companies may also be concerned about the physical location of data stored in the cloud: the laws of the country hosting the equipment apply to the data on those machines. European and Asian companies have expressed concern about having their data stored on computers in the United States, where it falls under the jurisdiction of the U.S. Patriot Act, allowing the U.S. government to access that data very easily.
  • Experienced users of cloud computing services have noticed big variations in the performance of their applications running in the cloud. Because many companies share the same resources, it is possible to land in a neighborhood that is extremely busy and noisy, leaving little room for one's applications to run and communicate.
  • Bugs in these very large systems are still being worked out. There have been instances in which entire cloud services crashed and were unavailable for hours or days; when this happens, applications remain offline until the larger problem is fixed.
  • Each cloud vendor offers unique services and unique ways to communicate with the computer resources. It is possible for subscribers to get so deeply embedded into these unique and proprietary services that they cannot move applications without some major changes to both software and data.
  • It appears that a cloud provider has an infinite number of computers and storage disks to meet subscribers' needs, but these resources are finite, and the provider multiplexes them among thousands of applications that start and stop every hour. If all customers called for services at the same time, the provider could run out of available resources: the cloud computing equivalent of a busy signal on Mother's Day or an insurance claim line after a major hurricane. These concerns, and others far more technical in nature, are well known to the major cloud computing providers and the intermediary support companies.
  • From a hardware provisioning and pricing point of view, three aspects are new in cloud computing.
    1. The appearance of infinite computing resources available on demand, quickly enough to follow load surges, thereby eliminating the need for cloud computing users to plan far ahead for provisioning.
    2. The elimination of an up-front commitment by cloud users, thereby allowing companies to start small and increase hardware resources only when there is an increase in their needs.
    3. The ability to pay for use of computing resources on a short-term basis as needed (for example, processors by the hour and storage by the day) and release them as needed, thereby rewarding conservation by letting machines and storage go when they are no longer useful (Armbrust et al. 2010).

Understanding Cloud Architecture

Having covered cloud terminology, nomenclature, benefits, and inherent challenges, let us examine the underlying architecture, which will help you adopt and deploy the cloud with ease and confidence. Cloud infrastructure, and the deployment of services such as platform, application, infrastructure, and database, became possible with the proliferation of virtualization as a robust technology.

Virtualization provides the high server utilization needed in the cloud computing paradigm. It smooths out the variations between applications that need barely any CPU time (they can share a CPU with other applications) and those that are compute-intensive and need every CPU cycle they can get. Virtualization is the single most revolutionary cloud technology whose broad acceptance and deployment truly enabled the cloud computing trend to begin. Without virtualization, the economics of the cloud would not have been viable.

Virtualization Strengthens Cloud Deployment

We can view virtualization technology as part of an overall optimization strategy that includes the client and the overall server platform (hardware, operating systems, database, and storage). Platform virtualization is a technique that abstracts computer resources, separating the operating system from the underlying physical server resources. Instead of the operating system (OS) running directly on hardware, the OS interacts with a new software layer called a virtual machine monitor, which accesses the hardware and presents the OS with a virtual set of hardware resources. This means multiple virtual machine images or instances can run on a single physical server, and new instances can be generated and run on demand, creating the basis for elastic computing resources. Basically, server virtualization removes physical barriers and isolates each technology from the others, eliminating dependencies. It allows multiple independent virtual operating systems, databases, storage systems, and applications to run on the same physical hardware. This recalls the IBM mainframes of the 1960s, which used time-sharing virtualization to let many people share a large computer without interacting or interfering with each other. (See Figure 3.1.)
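
To see what "multiple virtual machine instances on a single physical server" looks like in practice, here is a small sketch using the libvirt Python bindings to enumerate the virtual machines a hypervisor is hosting. It assumes a local QEMU/KVM host; the connection URI may differ in your environment.

```python
# List the virtual machine instances on one physical host via libvirt.
# Assumes a local QEMU/KVM hypervisor; adjust the URI for your setup.
import libvirt

conn = libvirt.open("qemu:///system")
try:
    for dom in conn.listAllDomains():
        # Each domain is an independent virtual OS sharing the same
        # physical server through the virtual machine monitor.
        print(dom.name(), "active" if dom.isActive() else "inactive")
finally:
    conn.close()
```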


Figure 3.1 Cloud Computing and Virtualization Deployment Architecture

Virtualization Enables Server Consolidation

Virtualization enables server consolidation, which is the cornerstone of the cloud computing deployment architecture and provides economies of scale and elasticity. Server consolidation can reduce hardware requirements by a 10-to-1 ratio or better, accelerate server provisioning time by 50 to 70 percent, and cut energy costs by 80 percent by powering down servers without affecting applications. Examining server utilization, we observe that the average server in a corporate data center runs at a typical utilization of only 6 percent; even at peak load, utilization is no better than 20 percent, and in the best-run data centers servers average only 15 percent or less of their maximum capacity. When these same data centers fully adopt server virtualization, CPU utilization increases to 65 percent or higher, which helps make cloud services appealing and cost effective. Virtualization thus reduces capex, operational expenditure (opex), and total cost of ownership (TCO).

In the continued endeavor to increase business efficiency and agility, virtualization in the cloud infrastructure offers server consolidation that reduces overall cost. Capex is minimized as multiple machines are consolidated onto a single host, and consolidation significantly reduces opex, which in turn lowers the total cost of ownership of these computing services.
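
A quick back-of-the-envelope check, using the utilization figures cited above, shows why consolidation pays off; the 100-server starting point is an assumed example.

```python
# Rough consolidation arithmetic using the chapter's figures:
# ~6% average utilization before virtualization, ~65% after.
servers_before = 100                      # assumed example fleet
useful_work = servers_before * 0.06       # ~6 servers' worth of real work

servers_after = useful_work / 0.65        # run the same work at 65% utilization
print(f"Servers needed after consolidation: {servers_after:.0f}")         # ~9
print(f"Consolidation ratio: {servers_before / servers_after:.0f}-to-1")  # ~11-to-1
```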

There are two main server virtualization platforms in deployment in many data centers today.

  1. Hosted virtualization. This runs on top of a general-purpose operating system such as Windows Server or Linux below the virtualization layer. VMware Server and Microsoft Virtual Server are good examples of this deployment.
  2. Hypervisor-based virtualization. The most popular virtualization platform doesn't need a general-purpose operating system such as Windows Server or Linux underneath and is also referred to as bare-metal virtualization. VMware ESX Server and Oracle VM (based on the Xen hypervisor) are good examples of this implementation in various data centers.

Cloud Service Models and Security

As cloud computing matures and gains acceptance in mainstream business technology strategies and deployments, those engaged in enterprise business computing raise concerns regarding security, time to value, public versus private deployment, lock-in, cost, and more. For businesses, these questions are reasonable as cloud adoption grows rapidly around the world and increasingly essential workloads move from their traditional on-premises locations to the cloud.

Cloud computing will not be accepted by common users unless the trust and dependability issues are addressed satisfactorily. (See Figure 3.2.)


Figure 3.2 Cloud Service Model

Cloud-Based Solutions to Meet the IT Needs of Multiregional Branch Offices

Cloud computing offers various deployment models to meet the IT needs of companies with multiregional branch offices around the globe. Cloud-based solutions can be deployed on one of these deployment models or on a combination of them.

Private Cloud

For organizations that have a well-established computing and information technology infrastructure, such as some high-tech companies in the IT business, a completely private cloud-based solution may be a better choice for providing computing and IT services to new branch offices in other cities or other countries. In such a case, the data center, servers, and all major computing and IT devices reside behind the firewall on the organization's enterprise network, located on-site at the head office, while users in branch offices access the computing and information services through a virtual private network (VPN) or a web browser if a web interface has been made available for accessing the service (Harris 2011).

Figure 3.3 depicts the architecture of a private cloud-based solution.


Figure 3.3 Architecture of Private Cloud-Based Solution

In a private cloud-based solution, since everything is under the control of the same organization, the organization has total freedom and autonomy in managing all components of the computing and information technology infrastructure.

Federated Cloud

Federated cloud computing can be seen as a variant of private cloud computing. As in a private cloud, the information technology infrastructure is still owned solely by the organization, but the equipment, servers, and services are distributed among the head office and branch offices. This may be necessary when branch offices have different missions, each needing more dedicated computing and information services; for example, one branch may work on data mining while another concentrates on server development. (See Figure 3.4.)


Figure 3.4 Architecture of Federated Cloud-Based Solution

Public Cloud

A public cloud-based solution is one in which the organization subscribes to all needed computing and information technology services from providers in the public cloud. This solution suits organizations that have no resources for, or interest in, implementing their own CIT infrastructure, as well as small start-ups. By subscribing to computing and IT services readily available in the public cloud, a start-up can quickly get its business going and have its creative ideas tested; if the business doesn't fly, it can exit easily with less to lose. (See Figure 3.5.)


Figure 3.5 Architecture of Public Cloud-Based Solution

Hybrid Cloud

Consistent with the definition of hybrid, a hybrid cloud involves both a private and a public cloud. A hybrid cloud-based solution may suit an organization whose established computing and information technology (CIT) infrastructure is sufficient for its current needs but that doesn't want to invest heavily in money and time to expand that infrastructure for a new business. In this case, it would rather obtain the needed CIT services from a reliable source in the public cloud; by doing so, the organization can wind the new venture down more easily if it doesn't go well. (See Figure 3.6.)


Figure 3.6 Architecture of Hybrid Cloud-Based Solution

Advantages of Cloud Computing-Based Solution

A cloud computing–based IT solution for organizations with multiregional offices brings both advantages and challenges. The advantages are as follows:

  • Cloud computing provides organizations with more agile solutions to meet their IT needs. With cloud computing, subscribers can quickly get the IT services needed to run their businesses. This is especially suitable for new businesses undertaking brilliant but risky ventures. By subscribing to services from a third party, an organization can save money on capital investment. It can also save on maintenance because the computing and information technology (CIT) infrastructure is owned and maintained by the third party.
  • Cloud computing-based solutions are reliable when offered by trusted cloud services providers such as Amazon, Rackspace, Salesforce, Oracle, and others. Cloud solutions scale up with business needs and can just as easily be scaled down if required.
  • Cloud computing solutions offer high-end security, delivered from data centers equipped with modern security tools and technologies.

Design and Deployment Strategy for Cloud Services

In your endeavor to create business agility, as you (or your company) begin to formulate a cloud strategy, you need to understand the inherent capabilities that cloud computing offers. These capabilities can help you gain competitive advantage by creating opportunities for cost advantage and organizational agility. We see at least the following seven capabilities:

  1. Managed interface
  2. Geolocation independence
  3. Source independence
  4. Access from anywhere
  5. Digital business setups
  6. Audit control
  7. On-demand elasticity

In deriving these capabilities, enterprises should treat their information as an asset and find ways of generating value from it and accessing it. When formulating its cloud strategy, a company should consider the extent to which it needs each of the seven capabilities; subscribing to these services should be based on the company's needs and cost considerations. Cost matters because each capability implies investments in technology, processes, people, and governance. Companies should therefore evaluate these capabilities before implementing them in order to derive value from cloud computing (Iyer and Henderson 2010).

Managed Interface

This capability creates an infrastructure that is organic and responsive to changing user requirements. As companies develop and deploy application services, each service can be used by other services through an interface called an application programming interface (API). In deciding which APIs to provide, a company's involvement can range from high, acting as gatekeeper and moderator in the provisioning of applications, to low, allowing developers to write applications freely.

This results in four strategies for managing APIs. Under the open co-innovation model, companies such as Amazon and Google take a very open, community-driven approach to services: a developer wanting access makes a request through Amazon Web Services' discussion forum, to which both Amazon employees and community members contribute. Companies must decide how open and participative they want to make their API processes. Those that adopt the Apple or Salesforce.com model need to make huge investments in internal resources for staffing and implementation.

The Amazon and Google model requires creating forums and ground rules for participation; internal experts are also needed to track discussions, identify emerging trends, and respond appropriately. The open model followed by Google and Amazon allows unfettered innovation but no quality control of the user experience. The qualified model, on the other hand, guarantees a level of user experience but may cause developer consternation over the certification process. Overall, creating an API-based interface to existing applications unlocks their potential by making them more accessible to internal and external requests, creating immense opportunities for collaboration and innovation.
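
As a concrete illustration of a managed interface, here is a minimal sketch in Python, using the Flask framework, that exposes an existing internal capability through an API. The endpoint path and data are hypothetical.

```python
# Minimal API sketch (Flask); the endpoint and data are hypothetical.
from flask import Flask, jsonify

app = Flask(__name__)

# An existing internal capability, now reachable by internal teams and,
# depending on the governance model chosen, external developers too.
ORDERS = {"1001": {"status": "shipped"}}

@app.route("/api/v1/orders/<order_id>", methods=["GET"])
def get_order(order_id):
    order = ORDERS.get(order_id)
    if order is None:
        return jsonify(error="not found"), 404
    return jsonify(order)

if __name__ == "__main__":
    app.run()
```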

Geolocation Independence

This capability enables controlled access to services and information assets from anywhere within an enterprise, without the need to know where they are located. Today, organizations with large data sets typically employ experts whose sole role is to know where data elements exist within the databases. Usually, these experts work for the IT architecture group to ensure that each application follows internal architecture policies and that proper approval is given to access data. The downside is that business unit personnel often feel these experts slow down responses to competitive pressures by increasing application development times.

Application development at Google follows a different path. Google has made huge investments in an infrastructure that is "built to build." This infrastructure allows developers to build new applications that access information assets without knowing the exact location of their physical storage, and it ensures that applications access the right data sources, keeping data integrity high. When a few software engineers in Bangalore set about building a new application, they were able to tap into many preexisting features of Google's infrastructure, such as discussion forums and news feeds, and reuse them to create a new product.

Source Independence

This capability of cloud computing enables a company to control access to services and also to switch service providers easily and at low cost. The current model of information systems development carries the risk of a company being locked into a vendor or solution. With cloud computing, the functionality an application delivers is based on a call to a service, which should make it easy and inexpensive to move to a different vendor for the same functionality. The key enabler of that choice is access to both the existing data and its metadata, which allows the data to be imported easily into a new application. The same principle must apply to services inside the company firewall: just as a company can move from one vendor to another, business units must be able to use or disassociate from their internal IT service. This is the source independence described here.

The data that users create and store within their favorite social networking sites can be accessed by applications that conform to the OpenSocial Foundation API specifications. Prior to this initiative, each application required a proprietary development effort to enable it to run on a social networking site. Using the OpenSocial standards, application developers can now write a program once and have it run on any social networking site conforming to these standards. Social networking sites gain value by having many applications run on top of their data. Users benefit from this sourcing independence because of the many applications that run using their data.

Access from Anywhere

This capability of cloud computing allows users to access any company service from any platform or device via a web browser. The first generation of the web linked information via hyperlinks, with access by pointing and clicking. In the current mashup era, every piece of information should also be accessible via a program, and information assets should be accessible from any device. To prevent performance problems with such ubiquitous access, however, companies may have to implement intelligent caching techniques and provide high-bandwidth connectivity.

Digital Business Setups

A digital business environment can be defined as a suite of integrated applications (processes) and tools that support specific major business capabilities or needs. With cloud computing, this capability gives decision makers integrated and seamless access to all the capabilities needed to analyze and execute business decisions. Such an environment is assembled by configuring various services (memory, processors, and so on) to suit a business need. With EC2, all the configuration work is preserved and reusable: if the same environment is required later, it can be mirrored by Amazon's servers and invoked identically at a future date. A digital business environment is analogous to the long-established concept of a virtual machine (VM).
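
As a sketch of how a configured environment can be preserved for identical reuse, the following uses the AWS boto3 SDK to capture a running EC2 instance as a machine image; the instance ID and image name are placeholders, and capturing an image is just one way to realize the idea described above.

```python
# Capture a configured EC2 environment as a reusable machine image.
# The instance ID and image name are placeholders.
import boto3

ec2 = boto3.client("ec2")

response = ec2.create_image(
    InstanceId="i-0123456789abcdef0",  # placeholder instance
    Name="digital-business-env-v1",    # placeholder image name
    Description="Configured environment captured for identical later reuse",
)
print("Reusable image:", response["ImageId"])
```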

Audit Control

This cloud capability enables the users and usage of every information service within an organization to be tracked. The ability to verify the history, location, or application of an item through recorded documentation is crucial for ensuring that companies comply with internal and external constraints. Internally, it may be important to track a partner's use of services; in other instances, safe harbor compliance rules may require companies to audit the use of their data from other parts of the world.

On-Demand Elasticity

On-demand elasticity provides a self-service capability for rapidly scaling service usage up or down, transparently and automatically. Most organizations plan their processing capacity to cope with peak loads, so much of that capacity sits unused most of the time. In addition, the pricing scheme for cloud computing should match the needs of the organization, though this has proven difficult to enforce.
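
A minimal sketch of such a self-service elasticity rule, again using the AWS boto3 SDK: the policy asks the provider to scale a group automatically so that average CPU utilization stays near a target. The group name is the hypothetical one used earlier.

```python
# Target-tracking elasticity rule (AWS boto3); the group name is hypothetical.
import boto3

autoscaling = boto3.client("autoscaling")

# Scale the group up and down automatically to hold average CPU near 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="call-tool-asg",
    PolicyName="target-50-percent-cpu",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```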

Five Steps in Cloud Apps Deployment

It is highly recommended to follow best practices for cloud application deployment, proceeding in distinct steps based on proven guidelines.

  1. Populate the cloud application with initial data (see the data-loading sketch after this list). Typical concerns at this step include:
     • Extracting, transforming, and loading large volumes of data across the various cloud deployments
     • Connectivity with diverse systems
     • High performance requirements
  2. Provide user access. Typical concerns include:
     • Autodetection of new employees from the human resources (HR) system
     • Approval workflows for auditing and traceability
     • Role-based profiles to provision users based on job function
     • Password policies to set initial passwords and force password resets
  3. Manage user access. Typical concerns include:
     • Reconciling who has access to what applications and entitlements
     • Detecting excessive access and dormant accounts
     • Password aging and detection of orphaned accounts
     • Self-service account management and password reset
     • Automated account disabling upon termination
     • Certification review reporting and role management
  4. Integrate applications and their data. Typical concerns include:
     • Extracting, transforming, and loading bulk data in real time
     • Capturing, transforming, and updating transactions in real time
     • High performance requirements with a nonintrusive solution
     • Connectivity with diverse systems
  5. Optimize business processes.
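
As a sketch of step 1, the following extracts records from a local CSV file, applies a small transformation, and loads them into a cloud application through its REST API. The endpoint URL and field names are hypothetical.

```python
# Extract-transform-load sketch for populating a cloud app with initial data.
# The API endpoint and field names are hypothetical.
import csv

import requests

API_URL = "https://app.example.com/api/v1/accounts"  # hypothetical endpoint

with open("accounts.csv", newline="") as f:
    for row in csv.DictReader(f):
        record = {
            "name": row["name"].strip(),      # simple cleanup
            "region": row["region"].upper(),  # simple transform
        }
        resp = requests.post(API_URL, json=record, timeout=30)
        resp.raise_for_status()  # fail fast if a load step errors
```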

Obstacles and Opportunities for Cloud Computing

Now let us examine various obstacles and opportunities pertaining to cloud-based services deployment. The first three affect adoption, the next five affect growth, and the last two are policy and business obstacles. Each obstacle is paired with an opportunity to overcome that obstacle, ranging from product development to research projects (Armbrust et al. 2010).

  1. Business continuity and service availability. Organizations worry about whether utility computing services will have adequate availability, and this makes some wary of cloud computing. Ironically, existing SaaS products have set a high standard in this regard. Google Search has a reputation for being highly available, to the point that even a small disruption is picked up by major news sources. Users expect similar availability from new services, which is difficult to do.
  2. Data lock-in. Application programming interfaces (APIs) for cloud computing are still essentially proprietary, or at least have not been the subject of active standardization. Thus, customers cannot easily extract their data and programs from one site to run on another. The difficulty of extracting data from the cloud is preventing some organizations from adopting cloud computing. Customer lock-in may be attractive to cloud computing providers, but their users are vulnerable to price increases, to reliability problems, or even to providers going out of business.

    One solution would be to standardize the APIs so that a SaaS developer could deploy services and data across multiple cloud computing providers, and the failure of a single company would not take all copies of customer data with it. One might worry that this would lead to a race to the bottom in cloud pricing and flatten the profits of cloud computing providers. Two arguments blunt this fear. First, the quality of a service matters as well as its price, so customers will not necessarily jump to the lowest-cost service; some Internet service providers today cost a factor of 10 more than others because they are more dependable and offer extra services to improve usability. Second, in addition to mitigating data lock-in concerns, standardization of APIs enables a new usage model in which the same software infrastructure can be used in an internal data center and in a public cloud. Such an option could enable hybrid cloud computing, or surge computing, in which the public cloud captures the extra tasks that cannot easily be run in the data center (or private cloud) during temporarily heavy workloads. (See the provider-neutral provisioning sketch after this list.)

  3. Data confidentiality/auditability. Despite most companies outsourcing payroll and many using external e-mail services to hold sensitive information, security is one of the most often cited objections to cloud computing. There are also auditability requirements, in the sense of the Sarbanes-Oxley Act and Health Insurance Portability and Accountability Act (HIPAA) regulations, that must be met before corporate data can move to the cloud. Cloud users face security threats from both outside and inside the cloud, and many of the issues involved in protecting clouds from outside threats are similar to those already facing large data centers.

    In the cloud, however, this responsibility is divided among potentially many parties, including the cloud user, the cloud vendor, and any third-party vendors that users rely on for security-sensitive software or configurations. The cloud user is responsible for application-level security. The cloud provider is responsible for physical security, and likely for enforcing external firewall policies. Security for intermediate layers of the software stack is shared between the user and the operator; the lower the level of abstraction exposed to the user, the more responsibility goes with it. Amazon EC2 users have more technical responsibility (that is, they must implement or procure more of the necessary functionality themselves) for their security than do Azure users, who in turn have more responsibilities than AppEngine customers.

  4. Data transfer bottlenecks. Applications continue to become more data-intensive. At $100 to $150 per terabyte transferred, these costs quickly add up, making data transfer an important issue. Cloud users and cloud providers must think about the implications of placement and traffic at every level of the system if they want to minimize costs; this kind of reasoning can be seen in Amazon's development of its CloudFront service. One opportunity to overcome the high cost of Internet transfers is to ship disks. While this does not address every use case, it effectively handles large delay-tolerant point-to-point transfers, such as importing large data sets. (See the worked transfer-cost comparison after this list.)
  5. Performance unpredictability. Multiple virtual machines (VMs) can share CPUs and main memory surprisingly well in cloud computing, but network and disk input/output (I/O) sharing is more problematic. As a result, different EC2 instances vary more in their I/O performance than in main memory performance. One opportunity is to improve architectures and operating systems to efficiently virtualize interrupts and I/O channels.
  6. Scalable storage. Three properties whose combination gives cloud computing its appeal are short-term usage (which implies scaling down as well as up when demand drops), no up-front cost, and seemingly infinite capacity on demand. The opportunity is to offer storage on demand, according to business needs, with these same properties.
  7. Bugs in large-scale distributed systems. One of the difficult challenges in cloud computing is removing errors in these very large-scale distributed systems. A common occurrence is that these bugs cannot be reproduced in smaller configurations, so the debugging must occur at scale in the production data centers. One opportunity may be the reliance on virtual machines in cloud computing. Many traditional SaaS providers developed their infrastructure without using VMs, either because they preceded the recent popularity of VMs or because they felt they could not afford the performance hit of VMs.
  8. Scaling quickly. The pay-as-you-go model certainly applies to storage and to network bandwidth, both of which count bytes used. Computation is slightly different, depending on the virtualization level. Google AppEngine automatically scales in response to load increases and decreases, and users are charged by the cycles used. Amazon Web Services (AWS) charges by the hour for the number of instances you occupy, even if your machine is idle. The opportunity is then to scale quickly up and down automatically in response to load in order to save money, but without violating service-level agreements.
  9. Reputation fate sharing. One customer's bad behavior can affect the reputations of others using the same cloud. For instance, blacklisting of EC2 IP addresses by spam prevention services may limit which applications can be effectively hosted. An opportunity would be to create reputation-guarding services similar to the “trusted e-mail” services currently offered (for a fee) to services hosted on smaller Internet service providers (ISPs), which experience a microcosm of this problem. Another legal issue is the question of transfer of legal liability—cloud computing providers would want customers to be liable and not the providers (e.g., the company sending the spam should be held liable, not Amazon).
  10. Software licensing. Current software licenses commonly restrict the computers on which the software can run. Users pay for the software and then pay an annual maintenance fee. Many cloud computing providers originally relied on open source software in part because the licensing model for commercial software is not a good match to utility computing. The primary opportunity is either for open source to remain popular or simply for commercial software companies to change their licensing structure to better fit cloud computing. For example, Microsoft and Amazon now offer pay-as-you-go software licensing for Windows Server and Windows SQL Server on EC2.
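
Returning to the data lock-in obstacle (item 2), one existing approach to provider-neutral APIs is the Apache Libcloud library. The sketch below provisions against EC2 through Libcloud's common interface; the credentials are placeholders.

```python
# Provider-neutral cloud access via Apache Libcloud; credentials are placeholders.
from libcloud.compute.providers import get_driver
from libcloud.compute.types import Provider

# The same code path can target EC2, Rackspace, and many other clouds
# simply by changing the Provider constant below.
Driver = get_driver(Provider.EC2)
driver = Driver("ACCESS_KEY_PLACEHOLDER", "SECRET_KEY_PLACEHOLDER")

for node in driver.list_nodes():
    print(node.name, node.state)
```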
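
Returning to the data transfer bottleneck (item 4), here is the worked comparison promised above, using the $100 to $150 per terabyte figure; the 10 TB data set and 100 Mbps sustained link speed are assumed example values.

```python
# Network transfer versus shipping disks, with assumed example values.
data_tb = 10                  # assumed data set size
cost_per_tb = 150             # upper end of the cited $100-$150 range
transfer_cost = data_tb * cost_per_tb

link_mbps = 100               # assumed sustained link speed
seconds = data_tb * 8e12 / (link_mbps * 1e6)  # bits / (bits per second)
print(f"Network: ${transfer_cost}, ~{seconds / 86400:.1f} days in transit")  # ~9.3 days
print("Shipped disks: courier cost only, a day or two regardless of size")
```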

There are also a few challenges and concerns surrounding cloud computing–based IT solutions. Operational costs may be high: although the capital cost of a cloud computing–based solution can be much lower, the company's operational cost may exceed that of running its own CIT infrastructure because of the subscription fees for the subscribed services. A cloud computing–based IT solution should provide reliable computing and information technology (CIT) services if care is taken in selecting the providers; however, two factors may make those services unreliable:

  1. Unpredictable failure of networks external to the organization. Since those portions of the network are not under the organization's control, it cannot do much to avoid such failures or to ensure quick recovery once a failure has occurred.
  2. Services discontinuity. It is very important to choose a reliable and trustworthy service provider; financial stability and a proven service record should be the major criteria in selection.

As organizations become increasingly global in their business, supply chains and transaction management grow more complex. For most businesses, growth is tied to the ability to operate both domestically and overseas, with each added location increasing complexity.

Driving Growth More Efficiently

Global transactions and sourcing are becoming increasingly important, and here cloud solutions have a significant impact on efficiency and cost savings. A major benefit of cloud technology over typical on-premises software is the elimination of IT staff and support. Transacting with new global partners is significantly easier and less expensive using the cloud, giving businesses more flexibility to transact with multiple business partners in different regions. In today's environment, businesses clearly require an adoption methodology that provides flexibility and responsiveness to changing dynamics.

In this scenario, however, chief financial officers (CFOs) struggle to manage risk as business becomes increasingly dependent on layers of global partners and suppliers. A massive amount of risk exists in the following areas:

  • Trading partner risk. Who are the partners? Are there hidden suppliers that could put the brand and business at risk?
  • Compliance risk. Is there visibility into documents and data to file and comply with Sarbanes-Oxley and other regulations in a timely manner?
  • Finance-related risk. Is credit or liquidity an issue that will hinder suppliers, and hence business? Is the business making money and benefiting in the supply chain? Are rising costs in China going to impact margins?

Cloud technology provides a standard platform that generates visibility into transaction parties, payments, and documents. Visibility, agility, and a collaborative environment can mitigate the risks associated with trading partners, compliance, and credit. The average consumer product transaction can involve anywhere from five to 20 different parties in the steps leading from product manufacture to the store shelf. Each of the parties involved in a supply chain poses several elements of risk.

Database Services on a Private Cloud

For database environments, the PaaS cloud model provides better IT services than the IaaS model. The PaaS model provides enough resources in the cloud that databases can get up and running quickly, while still leaving users enough latitude to create the applications they need. Additionally, central IT management, security, and efficiency are greatly enhanced through consistency and economies of scale. Conversely, with the IaaS model each tenant must build most of the stack on its own, lengthening the time to deployment and resulting in inconsistent stacks that are harder to manage. A private cloud is an efficient way to deliver database services because it enables IT departments to consolidate servers, storage, and database workloads onto a shared hardware and software infrastructure. Databases deployed on a private cloud offer compelling advantages in cost, quality of service, and agility by providing on-demand access to database services in a self-service, elastically scalable, and metered manner. For database workloads with strict security, compliance, and performance requirements, private clouds are often a better option than public clouds. A minimal sketch of such a self-service, metered interface follows.
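
To make "self-service, elastically scalable, and metered" concrete, here is a minimal sketch of a private-cloud database service catalog. The class, the size tiers, and the capacity figures are hypothetical illustrations, not any vendor's API.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    # Hypothetical size tiers a private-cloud database service might expose.
    TIERS = {"small": 2, "medium": 8, "large": 32}  # vCPUs per tier

    @dataclass
    class DatabaseService:
        """Toy self-service catalog: provision on demand, meter every grant."""
        capacity_vcpus: int
        used_vcpus: int = 0
        usage_log: list = field(default_factory=list)

        def provision(self, tenant: str, tier: str) -> str:
            vcpus = TIERS[tier]
            if self.used_vcpus + vcpus > self.capacity_vcpus:
                raise RuntimeError("insufficient pooled capacity")
            self.used_vcpus += vcpus
            # The metering log is what later enables chargeback per tenant.
            self.usage_log.append((tenant, tier, vcpus,
                                   datetime.now(timezone.utc)))
            return f"{tenant}-db ({tier}, {vcpus} vCPUs) is ready"

    svc = DatabaseService(capacity_vcpus=64)
    print(svc.provision("finance", "medium"))
    print(svc.provision("hr", "small"))

The essential point is the shared, finite pool: tenants draw database capacity on demand and every grant is metered, which is what distinguishes a private-cloud database service from a set of independently managed servers.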

Shared Services

Information technology departments can leverage shared services to reduce costs and meet the demands of their business users, but there are many operational, security, organizational, and financial aspects of shared services that must be managed to ensure effective adoption. Consolidation is vital to shared services, as it allows IT to restructure resources by combining multiple applications into a cohesive environment. Consolidation goes beyond hard cost savings; it simplifies management, improves resource utilization, and streamlines conformity to security and compliance standards.

  • Server consolidation. Reduce the number of physical servers and consolidate databases onto a smaller server footprint.
  • Storage consolidation. Unify the storage pool through improved use of free space in a virtual storage pool.
  • Operating system consolidation. Reduce the number of operating system installations. Reducing server footprint does not always provide the best return on investment (ROI), but reducing the number of operating systems will improve overall manageability.
  • Database consolidation. Reduce the number of database instances through schema consolidation. Consolidate separate databases as schemas in a single database, reducing the number of databases to manage and maintain (see the sketch after this list).
  • Workload consolidation. Merge the redundant databases that support business intelligence or operational data store systems. When consolidated into a single data store, these workloads benefit from the additional resources and scalability provided by the private cloud infrastructure.
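
As a toy illustration of schema consolidation, the sketch below uses Python's built-in sqlite3 module to attach two formerly separate databases to one connection so that their tables can be queried and copied as namespaces of a single consolidated database. The file and table names are invented for illustration.

    import sqlite3

    # Two formerly separate application databases (hypothetical file names).
    for name in ("sales.db", "inventory.db"):
        con = sqlite3.connect(name)
        con.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER, label TEXT)")
        con.commit()
        con.close()

    # Consolidate: attach both files to one connection as named schemas.
    con = sqlite3.connect("consolidated.db")
    con.execute("ATTACH DATABASE 'sales.db' AS sales")
    con.execute("ATTACH DATABASE 'inventory.db' AS inventory")

    # Copy each application's tables into the consolidated database,
    # keeping one namespace per former database.
    con.execute("CREATE TABLE IF NOT EXISTS sales_items AS SELECT * FROM sales.items")
    con.execute("CREATE TABLE IF NOT EXISTS inventory_items AS SELECT * FROM inventory.items")
    con.commit()
    con.close()

After consolidation there is one database to patch, back up, and secure instead of several, which is precisely the manageability gain the list above describes.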

Security Considerations in Private and Public Clouds

Cloud computing offers promising convenience, elasticity, transparency, and economy, but with the many benefits come challenging issues of security and privacy. The history of computing since the 1960s can be viewed as a continuous move toward ever-greater specialization and distribution of computing resources. First we had mainframes, and security was fairly simple. Then we added minicomputers, desktop and laptop computers, and client-server models, and it got more complicated. Tim Mather, Subra Kumaraswamy, and Shahed Latif outline in depth the information security and privacy issues pertaining to the various cloud deployment models in Cloud Security and Privacy (Mather et al. 2009).

Components of Information Security

Broadly, we need to consider three components of infrastructure-level security and their implications for cloud deployment.

  1. Network level. Shared infrastructure such as virtual local area networks (VLANs), both private and public, along with Dynamic Host Configuration Protocol (DHCP) servers, firewalls, and load balancers, imposes limitations on point-to-point encryption, extranet security, and monitoring. This exposes threats such as Domain Name System (DNS) hijacking, denial of service (DoS), and distributed denial of service (DDoS). These may be mitigated by deploying a virtual private cloud and a virtual private network (VPN)–based solution with strong authentication.
  2. Host level. Shared infrastructure such as hardware (CPU, memory, disks, network) and software (the virtualization layer, e.g., Xen, and the web console) brings limitations in patching and configuration management across a large number of dynamic nodes, host-based intrusion detection systems (IDSs), and access management. The threats here include image configuration drift and vulnerabilities, targeted DoS attacks, and attacks on standard OS services. These can be mitigated by strategies such as secure-by-default configuration, hardened images, turning off unneeded OS services, software firewalls, logging, access provisioning, and patch and configuration management (a minimal hardening-audit sketch follows this list).
  3. Application level. Shared infrastructure such as virtualized hosts, networks, and firewalls (when hosted on IaaS or PaaS), a virtualized stack (e.g., LAMP), and database versus dataspace services (e.g., SimpleDB, BigTable) imposes further limitations in SaaS and PaaS deployments. In cloud deployment, application-level security is highly critical, especially against denial of service (DoS) and economic denial of service (EDoS), an attack against the billing model that underlies the cost of providing a service, with the goal of bankrupting the service itself.
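
As a minimal sketch of the host-level secure-by-default idea, the audit below compares a host's configuration against a hardening baseline and reports drift. The baseline keys, values, and service names are assumptions for illustration, not a published standard.

    # Hypothetical hardening baseline for a guest image (illustrative only).
    BASELINE = {
        "firewall_enabled": True,
        "remote_logging_enabled": True,
        "password_auth_disabled": True,
        "extra_services": set(),  # nothing running beyond the baseline set
    }

    def audit(host_config: dict) -> list:
        """Return a list of deviations from the hardening baseline."""
        findings = []
        for key, expected in BASELINE.items():
            actual = host_config.get(key)
            if actual != expected:
                findings.append(f"{key}: expected {expected!r}, found {actual!r}")
        return findings

    # Example of a host that has drifted from its hardened image.
    drifted = {
        "firewall_enabled": True,
        "remote_logging_enabled": False,       # logging was turned off
        "password_auth_disabled": True,
        "extra_services": {"telnet", "ftp"},   # legacy services re-enabled
    }
    for finding in audit(drifted):
        print("DRIFT:", finding)

Running such a check against every node on each provisioning cycle is one way to contain the image configuration drift described above.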

Data Security and Storage

Data security and storage are other major issues that need attention during cloud deployment.

  • Knowing when and where the data was located within the cloud is important for audit and compliance purposes.
  • Example: Amazon Web Services (AWS)
    1. Store <d1, t1, ex1.s3.amazonaws.com>
    2. Process <d2, t2, ec2.compute2.amazonaws.com>
    3. Restore <d3, t3, ex2.s3.amazonaws.com>
  • Computational accuracy (as well as data integrity) must also be assured.
  • Example: financial calculation: sum ((((2*3)*4)/6)-2) = $2.00
    1. Correct, assuming U.S. dollars
    2. But what about the dollars of other countries?
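
A minimal sketch of the <data, time, location> provenance records in the example above: each operation on a data item is logged with a timestamp and the endpoint that handled it, so an auditor can later reconstruct where the data was and when. The endpoints echo the AWS-style host names above; the logging helper itself is hypothetical.

    from datetime import datetime, timezone

    provenance_log = []  # append-only audit trail of <data, time, location>

    def record(operation: str, data_id: str, location: str) -> None:
        """Log one <data, time, location> provenance tuple for later audits."""
        provenance_log.append((operation, data_id,
                               datetime.now(timezone.utc).isoformat(), location))

    # Mirror the AWS-style example above.
    record("store", "d1", "ex1.s3.amazonaws.com")
    record("process", "d2", "ec2.compute2.amazonaws.com")
    record("restore", "d3", "ex2.s3.amazonaws.com")

    for entry in provenance_log:
        print(entry)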

Lack of Control in the Cloud

Most security problems stem from the consumer's loss of control and the lack of trust mechanisms.

  • Data, applications, and resources are located with the provider.
  • User identity management is handled by the cloud.
  • User access control rules, security policies, and enforcement are managed by the cloud provider.
  • The consumer relies on the provider to ensure:
    1. Data security and privacy
    2. Resource availability
    3. Monitoring and repair of services/resources

Identity and Access Management

Managing access for diverse user populations (employees, contractors, partners, etc.) is a critical administration task for a cloud administrator. As more and more personal, financial, and medical data is hosted in the cloud, authentication requirements will increase. Software applications hosted in the cloud require access control, and there will be a growing need for higher-assurance authentication, including authentication from mobile devices; a minimal sketch of one such factor follows.
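
One common higher-assurance factor, especially from mobile devices, is the time-based one-time password (TOTP) generated by authenticator apps. The sketch below is a minimal standard-library implementation of RFC 6238; the demo secret is illustrative, and real deployments provision a distinct secret per user and device.

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
        """RFC 6238 time-based one-time password using HMAC-SHA1."""
        key = base64.b32decode(secret_b32)
        counter = struct.pack(">Q", int(time.time()) // step)
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Demo secret (base32-encoded); provisioned per user/device in practice.
    print("one-time code:", totp("JBSWY3DPEHPK3PXP"))

Because the code changes every 30 seconds and is derived from a shared secret, a stolen password alone is not enough to authenticate, which is the extra assurance the paragraph above calls for.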

What Are the Key Privacy Concerns?

Privacy rights and obligations relate to the collection, use, disclosure, storage, and destruction of personal data or personally identifiable information (PII). They concern the accountability of organizations to data subjects, as well as the transparency of an organization's practices around personal information.

Some considerations to mitigate privacy concerns are storage, retention, and destruction; auditing, monitoring, and risk management; and privacy breaches.

Storage.

The aggregation of data raises new privacy issues. Some governments may decide to search through data without necessarily notifying the data owner, depending on where the data resides and whether the cloud provider itself has any right to see and access customer data. Some services today track user behavior for a range of purposes, from sending targeted advertising to improving services.

Retention.

It is important to determine who enforces the retention policy in the cloud and how exceptions to that policy are managed. The policy also needs to define clearly how long personal information transferred to the cloud is retained. And does the organization own the data, or does the cloud service provider (CSP) own it? A minimal enforcement sketch follows.
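
As a minimal sketch of enforcing a retention window on cloud object storage, the snippet below deletes objects older than the window from an S3 bucket using boto3. The bucket name and retention period are assumptions, and in practice a provider's built-in lifecycle rules would usually be preferred over hand-rolled deletion.

    from datetime import datetime, timedelta, timezone

    import boto3

    BUCKET = "example-records"        # hypothetical bucket name
    RETENTION = timedelta(days=365)   # hypothetical retention window

    s3 = boto3.client("s3")
    cutoff = datetime.now(timezone.utc) - RETENTION

    # List all objects; delete any whose last-modified time predates the cutoff.
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET):
        for obj in page.get("Contents", []):
            if obj["LastModified"] < cutoff:
                s3.delete_object(Bucket=BUCKET, Key=obj["Key"])
                print("deleted:", obj["Key"])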

Destruction.

Cloud service providers usually replicate the data across multiple systems and sites, and increased availability is one of the benefits they provide.

  • How do we know that the CSP didn't retain additional copies?
  • Did the CSP really destroy the data, or just make it inaccessible to the organization?
  • Is the CSP keeping the information longer than necessary so that it can mine the data for its own use?

Auditing, Monitoring, and Risk Management.

If business-critical processes are migrated to a cloud computing model, internal security processes need to evolve to allow multiple cloud providers to participate in those processes as needed.

These include processes such as security monitoring, auditing, forensics, incident response, and business continuity.

Privacy Breaches.

It is critical to define a clear policy regarding privacy breaches with cloud service providers, such as:

  • How do we know that a breach has occurred?
  • How do we ensure that the CSP notifies us when a breach occurs?
  • Who is responsible for managing the breach notification process (and the costs associated with it)?
  • If contracts include liability for breaches resulting from the negligence of the CSP:
    1. How is the contract enforced?
    2. How is it determined who is at fault?

Virtual Machine Introspection

IBM Research is pursuing an approach called “virtual machine introspection.” It puts security inside a protected VM running on the same physical machine as the guest VMs running in the cloud. The security VM employs a number of protective methods, including the whitelisting and blacklisting of guest kernel functions.

It can determine the operating system and version of the guest VM and can start monitoring a VM without any beginning assumption of its running state or integrity.

Instead of running 50 virus scanners on a machine with 50 guest VMs, virtual machine introspection uses just one, which is much more efficient, says Matthias Schunter, a researcher at IBM Research's Zurich lab. “Another big advantage is the VM can't do anything against the virus scan since it's not aware it's being scanned,” he says. In another application, a virtual intrusion detection system runs inside the physical machine to monitor traffic among the guest VMs. The virtual networks hidden inside a physical machine are not visible to conventional detectors because the detectors usually reside in a separate machine, Schunter says.
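
As a toy sketch of the whitelisting idea only (not IBM's implementation, which inspects live guest memory from the privileged security VM), the check below flags any observed guest kernel function that is not on an approved list. The function names and the observation data are invented for illustration.

    # Hypothetical whitelist of approved guest kernel functions.
    KERNEL_WHITELIST = {"sys_read", "sys_write", "sys_open", "sys_close"}

    def inspect(observed_calls: list) -> list:
        """Flag observed kernel functions that are not on the whitelist."""
        return [fn for fn in observed_calls if fn not in KERNEL_WHITELIST]

    # Calls a security VM might observe inside one guest (invented data).
    observed = ["sys_read", "sys_write", "hidden_rootkit_hook", "sys_open"]
    for suspicious in inspect(observed):
        print("ALERT: non-whitelisted kernel function:", suspicious)

As Schunter notes above, the strength of the real mechanism is that the scan runs outside the guest, so malware inside the VM cannot detect or disable it.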

Service-Level Agreement

The following are some of the key areas that need to be addressed by the cloud computing contract, generally referred to as a service-level agreement (SLA):

  • A clear articulation of fees for base services and modifications over time.
  • Well-defined performance metrics and remedies for service failures, and an understanding of how the metrics may change over time (see the availability sketch after this list).
  • Security, privacy, and audit commitments that will satisfy regulatory concerns, including an understanding of where data and information (including intellectual property) reside.
  • Clear delineation of the affiliated entities that may receive services under the contract as well as provision for the continued receipt of services by divested entities during a transition period.
  • Understanding the process for changes to the solution over time and the impact on connections between the cloud solution and other systems and processes used by a customer.
  • Adequate provision for termination of the contract and moving to a substitute provider, including termination assistance and recovery of all data.
  • Addressing business continuity, disaster recovery, and force majeure events.
  • Clear restrictions on use and ownership of customer data and any intellectual property of the customer resident in the cloud.
  • Access to and recovery of customer data as needed, and an understanding of the customer's rights with regard to litigation holds and e-discovery requirements.
  • A reasonable allocation of risk for breaches of contract and for third-party claims related to the solution.
  • Understanding subcontractors that may be used by the service provider and the conditions for the service provider using subcontractors.
  • Addressing the resolution and impact of disputes and bankruptcy (e.g., software escrow arrangements for SaaS offering).
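
To make "well-defined performance metrics and remedies" concrete, here is a minimal sketch of checking a monthly availability commitment and computing a service credit. The 99.9 percent target and the credit tiers are invented, since every SLA defines its own.

    # Hypothetical SLA terms; every figure is an assumption for illustration.
    AVAILABILITY_TARGET = 99.9               # percent uptime promised per month
    CREDIT_TIERS = [(99.0, 25), (99.9, 10)]  # (availability floor %, credit % of fee)

    def monthly_availability(downtime_minutes: float, days: int = 30) -> float:
        """Percent of the month the service was up."""
        total_minutes = days * 24 * 60
        return 100.0 * (total_minutes - downtime_minutes) / total_minutes

    def service_credit(availability: float) -> int:
        """Credit owed (as a percent of the monthly fee) under the toy tiers."""
        if availability >= AVAILABILITY_TARGET:
            return 0
        for floor, credit in CREDIT_TIERS:
            if availability < floor:
                return credit
        return CREDIT_TIERS[-1][1]

    a = monthly_availability(downtime_minutes=90)  # 90 minutes of downtime
    print(f"availability {a:.3f}% -> credit {service_credit(a)}% of the monthly fee")

Ninety minutes of downtime in a 30-day month works out to roughly 99.79 percent availability, which misses the 99.9 percent target and, under these invented tiers, earns a 10 percent credit.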

Intellectual Property Rights

In an IaaS environment, the customer maintains all intellectual property ownership rights related to any applications that it runs using the IaaS platform. Of course, reasonable confidentiality provisions will need to be included to protect any trade secrets that the customer places on the IaaS platform. The obligations to acquire third-party consents should also be straightforward in the cloud computing environment, with the provider being responsible for any consent required for it to operate its solution and the customer acquiring necessary third-party consents required in connection with any application or data that the customer brings to the public cloud platform. The intellectual property and licensing structure for an SaaS or a PaaS solution could be more complex, depending on the intellectual property at issue. The provider will retain ownership of its solution, but the customer will need to consider the ownership of any intellectual property for any interfaces or add-ons that the customer develops in connection with using the services as well as the ownership of applications developed on a PaaS platform.

References

  1. Armbrust, M., A. Fox, R. Griffith, A. D. Joseph, R. Katz, A. Konwinski, and M. Zaharia. 2010. “A View of Cloud Computing.” Communications of the ACM, 53(4): 50–58.
  2. Harris, W. 2011. “Cloud Computing-Based IT Solutions for Organizations with Multiregional Branch Offices.” Proceedings of the European Conference on Information Management & Evaluation, 435–440.
  3. Hunter, P. 2009. “Cloud Aloud [Cloud Computing in Enterprises].” Engineering & Technology, 4(16): 54–56. doi: 10.1049/et.2009.1612.
  4. Iyer, B., and J. C. Henderson. 2010. “Preparing for the Future: Understanding the Seven Capabilities of Cloud Computing.” MIS Quarterly Executive, 9(2): 117–131.
  5. Mather, Tim, Subra Kumaraswamy, and Shahed Latif. 2009. Cloud Security and Privacy: An Enterprise Perspective on Risks and Compliance. O'Reilly Media.
  6. Smith, R. 2009. “Computing in the Cloud.” Research Technology Management, 52(5): 65–68.