In the previous chapters, we noted the significance of business agility and presented a roadmap using the balanced scorecard. We laid a foundation by defining the various business stakeholders and showing how today's technology forces have helped align chief information officers (CIOs), chief marketing officers (CMOs), and others in a turbulent business environment. We also defined the Business Agility Readiness Roadmap, with clear guideposts to help you make rapid progress in optimizing your business processes, and we highlighted cases of success and failure to help you identify clearly the stage your business is at and to transform it under the impact of these technology forces. As you will have noticed, transforming your current business model and processes to create business agility is imperative if you are to survive and thrive, whatever business sector you are in today. It was acceptable to run a business independently just a few years ago, but no longer. We introduced business ecosystems, a concept that brings all stakeholders, such as employees, customers, and partners (distributors, value-added resellers, systems integrators, independent consultants, investors, shareholders), into your business to create business agility and to help your business succeed. On reflection, you will realize that all of them play a major role in your business, and none should be ignored or taken lightly.
We have entered the age of hyperconnectivity—the state in which we are always connected to the Internet through a number of devices (computers, PDAs, smart devices, smartphones), accessing applications and content for various purposes. Rapid innovation in the technology sector has continued to drive consumers and businesses to adopt devices that enable increasingly mobile and connected access to their content and services. This new era is about much more than just sheer numbers of new network-connected devices, computers, and applications. It is about providing pervasive access to, and continuous presence on, the network, where anyone or anything can interact with anyone or anything else no matter where they are located. We also referred to a paradigm shift from a system of record (SOR) to a system of engagement (SOE) that is maturing rapidly in adoption both at the consumer level and at the business level.
As Fabio Castiglioni and Michele Crudele at IBM write, today's online consumers require a rich experience that leverages information from disparate structured and unstructured sources, social channels, and their friends' recommendations. This gives them quick information and better resources for effective decision making. It has become the norm in the consumer space, and we need to replicate it in our businesses, too. An SOE enables this rich experience by extracting value from information that arrives through multiple channels and by enabling new digitized business models. A cloud operating environment (CloudOE) is the platform that supports SOE workloads; it provides the agility and velocity an SOE needs by offering an ecosystem for developing, deploying, and operating SOE applications.
Hyperconnectivity has driven major innovations in emerging technologies such as cloud, social, mobility, big data, and predictive analytics. In this chapter we stay focused on the cloud technology and architecture that enable business agility. Cloud computing has emerged in recent years as a much-hyped and much-discussed concept in information technology (IT), drawing the attention not only of IT professionals but of industry and management people seeking to understand it and reap its benefits. The IT media have written about it, and major conferences still draw large audiences of knowledge workers. It has gone so far that software mogul Marc Benioff of Salesforce.com declared that “Software is dead,” while in response, Larry Ellison of Oracle commented, “If we're dead—if there's no hardware or software in the cloud—we are so screwed.…But it's not water vapor! All of it is a computer attached to a network. What do you think Google runs on? Do they run on water vapor? I mean, cloud—it's all databases, operating systems, memory, microprocessors, and the Internet. Then there's a definition: What's cloud computing? It's using a computer that's out there. ‘Open source is going to destroy our business, and there'll be nothing but open source and we'll be out of business.’ And minicomputers are going to destroy mainframes and PCs are going to destroy minicomputers and open source is going to destroy all standards and all software is going to be delivered as a service. I've been at this a long time, and there are still mainframes—but it was the first industry that was going to be destroyed, and watching mainframes be destroyed is like watching a glacier melt.…” Ellison concluded in anger and frustration, “What the hell is cloud computing??”
Bob Evans, senior vice president at Oracle, later laid out some revealing facts in a blog post at Forbes, clearing away the doubts people may have had. Consider a few of them. Almost eight years earlier, before cloud terminology was established, Oracle had started developing a new generation of application suites (called Fusion Applications) designed for all modes of cloud deployment. Oracle Database 12c, released shortly before the post, supports the cloud deployment frameworks of today's major data centers and is the outcome of those years of development effort. Oracle's software as a service (SaaS) revenue had already exceeded the $1 billion mark, and it was the only company at the time to offer all levels of cloud services: SaaS, platform as a service (PaaS), and infrastructure as a service (IaaS). Oracle had helped over 10,000 customers reap the benefits of cloud infrastructure and supported over 25 million users globally. One may argue that this could not have happened if Larry Ellison hadn't come to appreciate cloud computing; we can also understand the dilemma he faced as an innovator while these emerging technologies were disrupting the business (www.forbes.com/sites/oracle/2013/01/18/oracle-cloud-10000-customers-and-25-million-users/).
You will agree that cloud computing is undoubtedly the hottest and most commonly used buzzword in IT, and that IT service providers have interpreted it in numerous ways to their own advantage. Here we attempt to simplify the term, the concept, and the paradigm of cloud computing and its related technologies for business executives, technical developers, and knowledge workers alike. Cloud computing has become a prominent battleground among major IT vendors offering hardware, software, network infrastructure, and consulting services. Enterprises have been slow to adopt it for various reasons, namely immature standards, concern for data security, and the perception that the model is not yet fit for running mission-critical applications (Hunter 2009).
Cloud computing is becoming a new platform for enterprise and personal computing, competing with traditional desktop or handheld computers (including smartphones) that run applications directly on the device. But SaaS and cloud computing will rise to the level of an industry platform only when firms open their technology to other industry players. For example, Salesforce.com created a customer relationship management (CRM) product and configured it not as packaged software but as software delivered from servers and accessed through a browser. Salesforce.com first developed its own in-house platform for delivering the software as a service to its customers. It then developed AppExchange as an open integration platform for other application companies building products that use features of the Salesforce.com CRM product. Finally, Salesforce.com created a new industry platform, Force.com, a development and deployment environment built on Salesforce's SaaS infrastructure.
Information technology has undergone several paradigm shifts: from mainframes to client-server to Internet computing and then to cloud computing. The mainframes of the 1960s were initially meant for single users but evolved in the 1970s into multi-user systems in which several users at terminals shared computing resources in real time. In this model the large computing resource was virtualized, and a virtual machine was allocated to each user sharing the system; in reality, the terminals were accessing virtual instances of a mainframe's computing resources. Cloud computing applies the same concept of virtual instances, but across many thousands of machines.
The next wave was client-server computing, in which the mainframe's role as the computing center was diluted and computing resources were distributed. As computing power increased, work gradually shifted away from centralized resources toward increasingly powerful distributed systems. In the client-server age, PCs and PC-based applications dominated; many routine tasks moved to the desktop, and more resources were deployed there to run client-based applications, while the mainframe was reserved for corporate enterprise resource planning (ERP) and data-processing applications. The standardization of networking technology simplified the ability to connect systems as Transmission Control Protocol/Internet Protocol (TCP/IP) became the protocol of the growing Internet in the 1980s. The emergence of the web and HTTP in the 1990s brought computing back to the data center.
In recent times there has been an enormous evolution in computing capabilities, spanning hardware, software, and storage. Hardware and storage devices are mass-produced and commoditized, and a few commodity servers can now handle data processing workloads that once required large mainframes or minicomputers.
Grid computing enables groups of networked commodity computers to be pooled and provisioned on demand to meet the changing needs of business. Instead of dedicating servers and storage to each application, grid computing lets multiple applications share computing infrastructure, yielding greater flexibility and better cost, power efficiency, performance, scalability, and availability, all at the same time.
Cluster computing, consisting of a set of loosely or tightly connected low-cost commodity computers that work together so that in many respects they can be viewed as a single system, provides scalable on-demand flexibility.
Cloud computing concepts and technology have been interpreted in many ways by vendors and service providers, and several leading firms and standards bodies have offered their own definitions.
Prior to the popularity of cloud computing, a number of related service offerings attracted only minor attention: grid computing, utility computing, elastic computing, and software as a service. The basic technologies of each have been incorporated into cloud computing, which appears to be attracting far more interest than its predecessors. This attraction may be a function of the maturity of the technology and the services offered, or it may be driven by the marketing blitz that has occurred as Amazon, Google, Apple, and other big names have gotten behind it (Smith 2009).
The National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce, develops standards and guidelines for emerging technologies. NIST defines cloud computing as a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. Cloud computing thus refers both to the applications delivered as services over the Internet and to the hardware and systems software in the data centers that provide those services.
According to NIST, this cloud model promotes availability and comprises five essential characteristics, three service models, and four deployment models.
In the IaaS model, users subscribe to certain components of a provider's IT infrastructure. Although subscribers don't control the underlying cloud infrastructure, they do control selected portions of it, such as firewalls, operating systems, deployed applications, and storage. In the PaaS model, a combination of applications forming a platform, for example a set of software tools used as a programming and deployment platform, is subscribed to as a service. The SaaS model can be seen as a special case of PaaS in which a single application is subscribed to as a service; such services are often accessed through a web browser.
A private cloud is usually owned and used by the same organization, such as a corporation. It often refers to a proprietary computing infrastructure owned by the organization that provides computing and information services to its employees behind the organization's firewall.
A public cloud often refers to computing and IT infrastructure that is owned by one organization but provides computing and information services to external users or subscribers. By subscribing to services provided by well-established companies, new start-ups, for example, can quickly meet their computing and information technology (CIT) needs without investing significant money and time in implementing their own computing and IT infrastructure.
Information and communication technology (ICT) has, over time, offered both greater benefits and greater challenges to business. In a challenging business and economic environment, businesses constantly look for technology and services that optimize cost and increase efficiency. Start-ups and small and medium-sized businesses (SMBs) look to minimize the capital expenditure (capex) that infrastructure requires and to optimize the operational cost of running their business. In a tough economy, even large enterprises are pressured to cut costs and rightsize, depending on market conditions and customer demand. Cloud computing has emerged in the recent past as a promising technology and services paradigm to address these business challenges.
Cloud computing offers a number of business benefits; we enumerate several below and support them with real-life case references to help your business achieve greater agility for sustainable competitive advantage.
We all know how President Barack Obama's election campaign was successfully executed, run, and managed using these technologies, led by its chief technology officer (CTO), Harper Reed. As in any business operation, data was critical to the campaign; it came in various sizes, types, and frequencies and had to be consolidated, integrated, and analyzed in real time to provide the insights needed to tailor the candidate's message to the right audience. The team also needed to manage the media, volunteers, and donors supporting the campaign. One can imagine the challenges and limitations the team faced in securing the computing infrastructure required for specific time frames during the campaign.
Cloud infrastructure, and especially platform as a service (PaaS), came in handy, giving the campaign team scalable computing and storage capacity on demand, on a pay-as-you-go basis, when they needed it most. The team used autoscaling to quickly add cloud resources, enabling as many as 7,000 volunteers to make more than two million calls to voters in the campaign's final four days.
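As a concrete illustration of that kind of on-demand scaling, here is a minimal sketch using the AWS Auto Scaling API through the boto3 library; the group name and capacities are invented for illustration and are not taken from the campaign.

```python
# Minimal sketch of on-demand scaling with the AWS Auto Scaling API.
# The group name and capacities are placeholders for illustration.
import boto3

autoscaling = boto3.client('autoscaling', region_name='us-east-1')

def scale_to(capacity: int) -> None:
    """Set the desired number of instances in a worker fleet."""
    autoscaling.set_desired_capacity(
        AutoScalingGroupName='call-tool-workers',  # hypothetical group
        DesiredCapacity=capacity,
        HonorCooldown=False,                       # scale right away
    )

scale_to(200)  # ramp up for a peak push
# ... peak workload runs ...
scale_to(10)   # release capacity, and cost, when the peak ends
```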
Cloud computing offers numerous benefits, but each benefit brings corresponding disadvantages or concerns; it is therefore imperative to assess all risks before implementing and deploying major business-critical applications on the cloud.
Having covered cloud terminology, benefits, and inherent challenges, let us examine the underlying architecture, an understanding of which helps adoption and deployment proceed with ease and confidence. Cloud infrastructure, and the deployment of services such as platform, application, infrastructure, and database, became possible with the proliferation of virtualization as a robust technology.
Virtualization provides the high server utilization that the cloud computing paradigm needs. It smooths out the variation between applications that need barely any CPU time (and can share a CPU with others) and compute-intensive applications that need every CPU cycle they can get. Virtualization is the single most revolutionary cloud technology: its broad acceptance and deployment truly enabled the cloud computing trend to begin, and without it the economics of the cloud would not have been viable.
We can view virtualization as part of an overall optimization strategy covering the client and the server platform (hardware, operating systems, database, and storage). Platform virtualization abstracts computer resources, separating the operating system from the underlying physical server. Instead of running directly on the hardware, the operating system (OS) interacts with a new software layer, the virtual machine monitor, which accesses the hardware and presents the OS with a virtual set of hardware resources. Multiple virtual machine images, or instances, can therefore run on a single physical server, and new instances can be generated and run on demand, creating the basis for elastic computing resources. Server virtualization removes physical barriers and isolates one technology from another, eliminating dependencies; it allows multiple independent virtual operating systems, databases, storage systems, and applications to run on the same physical hardware. This recalls the IBM mainframes of the 1960s, whose time-sharing virtualization enabled many people to share a large computer without interacting or interfering with each other. (See Figure 3.1.)
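To make the idea concrete, here is a minimal sketch using the libvirt Python bindings to list the virtual machine instances sharing one physical host and to start another on demand; the connection URI and guest name are assumptions made for the sketch.

```python
# Minimal sketch with the libvirt Python bindings: several independent
# VM instances share one physical host, and one more is started on
# demand. The URI and the guest name 'web-vm-01' are assumptions.
import libvirt

conn = libvirt.open('qemu:///system')   # connect to the local hypervisor

# Each domain is a complete virtual machine on the same hardware.
for dom in conn.listAllDomains():
    state = 'running' if dom.isActive() else 'stopped'
    print(dom.name(), state)

# Start one more instance, the basis of elastic capacity.
guest = conn.lookupByName('web-vm-01')
if not guest.isActive():
    guest.create()                      # boot the virtual machine
conn.close()
```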
Virtualization enables server consolidation, the cornerstone of cloud deployment architecture and the source of its economies of scale and elasticity. Server consolidation reduces hardware requirements by a 10-to-1 ratio or better, accelerates server provisioning time by 50 to 70 percent, cuts energy costs by up to 80 percent, and allows servers to be powered down without affecting applications. Looking at server utilization, the average server in a corporate data center runs at only about 6 percent utilization; even at peak load, utilization is no better than 20 percent, and in the best-run data centers servers average only 15 percent or less of their maximum capacity. When these same data centers fully adopt server virtualization, CPU utilization rises to 65 percent or higher, which is what makes cloud services appealing and cost effective. Virtualization thus reduces capital expenditure (capex), operational expenditure (opex), and total cost of ownership (TCO).
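The back-of-the-envelope arithmetic behind these figures can be checked directly; the following snippet uses the average utilization numbers cited above.

```python
# Rough consolidation arithmetic from the utilization figures above.
standalone_utilization = 0.06   # average utilization of a dedicated server
virtualized_utilization = 0.65  # achievable utilization after virtualization

consolidation_ratio = virtualized_utilization / standalone_utilization
print(f"~{consolidation_ratio:.0f} physical servers per virtualized host")
# Prints ~11, consistent with the 10-to-1 ratio cited above.
```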
In the continued drive for efficiency and agility, the server consolidation that virtualization brings to cloud infrastructure reduces all three costs: capex is minimized as multiple machines are consolidated onto a single host; consolidation also cuts opex significantly; and together these reductions lower the overall TCO of the computing services involved.
There are two main server virtualization platforms in deployment in many data centers today.
As cloud computing matures and gains acceptance in mainstream business technology strategies and deployments, those engaged in enterprise computing naturally raise concerns about security, time to value, public versus private deployment, lock-in, cost, and more. These questions are reasonable, as cloud adoption is growing rapidly around the world and increasingly essential workloads are moving from their traditional on-premises locations to the cloud.
Cloud computing will not be accepted by common users unless the trust and dependability issues are addressed satisfactorily. (See Figure 3.2.)
Cloud computing offers various deployment models to meet the IT needs of companies with branch offices across multiple regions; cloud-based solutions can be deployed on one of these models or on a combination of them.
For organizations that have a well-established computing and information technology infrastructure, such as some high-tech companies in the IT business, a completely private cloud-based solution may be a better choice for providing computing and IT services to new branch offices in other cities or other countries. In such a case, the data center, servers, and all major computing and IT devices reside behind the firewall on the organization's enterprise network, located on-site at the head office, while users in branch offices access the computing and information services through a virtual private network (VPN) or a web browser if a web interface has been made available for accessing the service (Harris 2011).
Figure 3.3 depicts the architecture of a private cloud-based solution.
In a private cloud-based solution, since everything is under the control of the same organization, the organization retains total freedom and autonomy in managing all components of the computing and information technology infrastructure.
Federated cloud computing can be seen as a variant of private cloud computing. As in private cloud computing, in federated cloud computing the information technology infrastructure is still privately owned solely by the organization, but the equipment, servers, and services are distributed among the head office and branch offices. This may be necessary when different branch offices have different missions and each needs more dedicated computing and information services. For example, one branch may be working on data mining, while another branch may be concentrating more on server development. (See Figure 3.4.)
A public cloud-based solution is for the organization to subscribe to all needed computing and information technology services from providers in the public cloud. This solution suits small start-ups and organizations that have neither the resources nor the interest to implement their own CIT infrastructure. By subscribing to the needed computing and IT services readily available in the public cloud, a start-up can quickly get its business going and have its creative ideas tested; if the business doesn't fly, it can step off the boat with less to lose. (See Figure 3.5.)
Consistent with the definition of hybrid, a hybrid cloud involves both a private and a public cloud. A hybrid cloud-based solution may suit an organization that has an established computing and information technology (CIT) infrastructure sufficient for its current needs but doesn't want to invest heavily in money and time to expand that infrastructure for a new business; it would rather obtain the additional CIT services from a reliable source in the public cloud. By doing so, it can more easily turn around if the new venture doesn't go well. (See Figure 3.6.)
A cloud computing–based IT solution for organizations with multiregional offices brings both advantages and challenges. The advantages are as follows:
In your endeavor to create business agility, as you (or your company) begin to formulate a cloud strategy, you need to understand the inherent capabilities that cloud computing offers. These capabilities can help you gain competitive advantage by creating opportunities for cost advantage and organizational agility. We see the following seven capabilities (if not more), and we describe each in turn.
In deriving these capabilities, enterprises should treat their information as an asset and look for ways of generating value from it and accessing it. When formulating its cloud strategy, a company should consider the extent to which it needs each of the seven capabilities; subscribing to these services should be based on the company's needs and on cost, because each capability implies investments in technology, processes, people, and governance. Companies should therefore evaluate these capabilities before implementing them in order to derive value from cloud computing (Iyer and Henderson 2010).
The first capability, an interface built on open application programming interfaces (APIs), creates an infrastructure that is organic and responsive to changing user requirements. As companies develop and deploy application services, each service can be used by other services through its API. In deciding which APIs to provide, a company's involvement can range from high, acting as gatekeeper and moderator in the provisioning of applications, to low, allowing developers to write applications freely.
This results in four strategies for managing APIs. Under the open co-innovation model, companies such as Amazon and Google take a very open, community-driven approach to services: a developer wanting access makes a request through Amazon Web Services' discussion forum, to which both Amazon employees and community members contribute. Companies decide how open and participative they want to make their API processes; those that adopt the Apple or Salesforce.com model must make huge investments in internal resources for staffing and implementation.
The Amazon and Google model requires creating forums and ground rules for participation, with internal experts to track discussions, identify emerging trends, and respond appropriately. The open model followed by Google and Amazon allows unfettered innovation but no quality control of the user experience; the qualified model, on the other hand, guarantees a level of user experience but may cause developer consternation over the certification process. Overall, creating an API-based interface to existing applications unlocks their potential by making them accessible to internal and external requests, creating immense opportunities for collaboration and innovation.
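As a minimal sketch of what an API-based interface to an existing application looks like, the following example wraps a simple lookup in an HTTP endpoint using Flask; the catalog data and route are invented for illustration.

```python
# Minimal sketch: exposing an existing application lookup as an HTTP API
# with Flask. The catalog data and route are invented for illustration.
from flask import Flask, jsonify

app = Flask(__name__)
CATALOG = {'sku-1': 'widget', 'sku-2': 'gadget'}   # stand-in for app data

@app.route('/api/v1/products/<sku>')
def get_product(sku: str):
    """Serve one lookup from the existing application to any caller."""
    if sku not in CATALOG:
        return jsonify(error='not found'), 404
    return jsonify(sku=sku, name=CATALOG[sku])

if __name__ == '__main__':
    app.run(port=8080)
```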
The second capability, location independence, provides access to services and information assets from anywhere within an enterprise without the requester needing to know where they reside. Today, organizations with large data sets typically employ experts whose sole role is to know where data elements exist within their databases. Usually these experts work for the IT architecture group, ensuring that each application follows internal architecture policies and that proper approval is given to access data. The downside is that business unit personnel often feel these experts slow responses to competitive pressures by increasing application development times.
Application development at Google follows a different path. Google has made huge investments in an infrastructure that is “built to build”: developers can create new applications that access information assets without knowing the exact location of their physical storage, while the infrastructure ensures that applications reach the right data sources and keeps data integrity high. When a few software engineers in Bangalore set about building a new product, they were able to tap into many preexisting features of Google's infrastructure, such as discussion forums and news feeds, and reuse them.
The third capability, sourcing independence, enables a company to control access to services and to switch service providers easily and at low cost. The current model of information systems development carries the risk of being locked into a vendor or solution. With cloud computing, the functionality an application delivers is based on a call to a service, which should make it easy and inexpensive to move to a different vendor for the same functionality. The key enabler of this choice is access to both the existing data and its metadata, which allows the data to be imported easily into a new application. The same principle must apply to services inside the company firewall: just as a company can move from one vendor to another, business units must be able to use or disassociate from their internal IT service.
The data that users create and store within their favorite social networking sites can be accessed by applications that conform to the OpenSocial Foundation API specifications. Prior to this initiative, each application required a proprietary development effort to enable it to run on a social networking site. Using the OpenSocial standards, application developers can now write a program once and have it run on any social networking site conforming to these standards. Social networking sites gain value by having many applications run on top of their data. Users benefit from this sourcing independence because of the many applications that run using their data.
The fourth capability, ubiquitous access, allows users to reach any company service from any platform or device via a web browser. The first generation of the web ensured that information was linked via hyperlinks and accessed by pointing and clicking; in the current mashup era, every bit of information should also be accessible via a program, and information assets should be accessible from any device. To prevent performance problems under such ubiquitous access, companies may have to implement intelligent caching techniques and provide high-bandwidth connectivity.
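One common form of the intelligent caching mentioned above is a time-to-live (TTL) cache, which absorbs repeated requests from many devices. The following minimal sketch shows the idea; the quote-fetching function and the 30-second TTL are invented for illustration.

```python
# Minimal sketch of a time-to-live (TTL) cache; names and numbers are
# invented for illustration.
import time

def fetch_quote(symbol: str) -> float:
    """Stand-in for an expensive back-end call."""
    return 42.0

class TTLCache:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}                      # key -> (expiry, value)

    def get(self, key, compute):
        """Serve from cache while fresh; recompute only after expiry."""
        entry = self._store.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]                   # still fresh
        value = compute()                     # stale or missing
        self._store[key] = (time.monotonic() + self.ttl, value)
        return value

cache = TTLCache(ttl_seconds=30)
price = cache.get('quote:ACME', lambda: fetch_quote('ACME'))
```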
The fifth capability is the digital business environment: a suite of integrated applications (processes) and tools supporting a specific major business capability or need. With cloud computing, this capability gives decision makers integrated, seamless access to everything needed to analyze and execute business decisions. Such an environment is assembled by configuring various services (memory, processors, and so on) to suit a business need. With EC2, all the work of configuration is preserved and reusable: if the same environment is required later, it can be mirrored on Amazon's servers and invoked identically at a future date. A digital business environment is analogous to the long-established concept of a virtual machine (VM).
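On EC2, preserving and re-invoking a configured environment can be sketched with the boto3 library as follows; the instance ID, image name, and instance type are placeholders, and in practice one would wait for the image to become available before launching from it.

```python
# Minimal sketch with boto3: snapshot a configured EC2 environment as an
# image, then recreate it later. IDs and names are placeholders; in
# practice you would wait for the image to become available first.
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Preserve the configured environment once...
image = ec2.create_image(InstanceId='i-0123456789abcdef0',
                         Name='analytics-env-v1')

# ...and invoke an identical copy whenever it is needed again.
ec2.run_instances(ImageId=image['ImageId'],
                  InstanceType='m5.large',
                  MinCount=1, MaxCount=1)
```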
The sixth capability, addressability and traceability, enables the users and usage of every information service within an organization to be tracked. The ability to verify the history, location, or application of an item through recorded documentation is crucial for complying with internal and external constraints: internally it may be important to track partners' use of services, while in other instances safe harbor compliance rules may require companies to audit the use of their data from other parts of the world.
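A minimal sketch of such traceability is a structured audit trail, one timestamped entry per service invocation; the field names below are invented for illustration.

```python
# Minimal sketch of a structured audit trail: one timestamped entry per
# service invocation. Field names are invented for illustration.
import json, logging, time

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger('audit')

def record_access(user: str, service: str, action: str) -> None:
    """Append one traceable entry recording who used what, and how."""
    audit_log.info(json.dumps({
        'ts': time.time(),     # when
        'user': user,          # who
        'service': service,    # which information service
        'action': action,      # how it was used
    }))

record_access('partner-42', 'customer-db', 'read')
```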
The seventh capability, on-demand elasticity, provides a self-service means of rapidly scaling service usage up or down, transparently and automatically. Most organizations plan their processing capacity to cope with peak loads, so much of that capacity remains unused most of the time. In addition, the pricing scheme for cloud computing should match the needs of the organization, although this has proven difficult to enforce.
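The waste from provisioning for peak load is easy to quantify; the following snippet works through the arithmetic with invented numbers.

```python
# Back-of-the-envelope arithmetic for peak provisioning; all numbers
# are invented for illustration.
peak_servers = 100      # capacity needed at peak load
average_servers = 20    # capacity actually used most of the time

owned_utilization = average_servers / peak_servers
print(f"A fleet sized for peak runs at {owned_utilization:.0%} on average")
# With pay-per-use elasticity you pay for roughly 20 server-equivalents,
# not 100, scaling to 100 only while the peak lasts.
```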
We highly recommend following proven guidelines and best practices, step by step, when deploying cloud applications.
Now let us examine the obstacles and opportunities pertaining to cloud-based services deployment. The first three obstacles affect adoption, the next five affect growth, and the last two are policy and business obstacles. Each obstacle is paired with an opportunity to overcome it, ranging from product development to research projects (Armbrust et al. 2010).
One solution would be to standardize APIs so that an SaaS developer could deploy services and data across multiple cloud computing providers, ensuring that the failure of a single company would not take all copies of customer data with it. One might worry that this would lead to a race to the bottom in cloud pricing and flatten providers' profits. Two arguments mitigate this fear. First, the quality of a service matters as well as its price, so customers will not necessarily jump to the lowest-cost service; some Internet service providers today cost a factor of 10 more than others because they are more dependable and offer extra services that improve usability. Second, in addition to mitigating data lock-in concerns, standardized APIs enable a new usage model in which the same software infrastructure can run in an internal data center and in a public cloud. This option enables hybrid cloud computing, or surge computing, in which the public cloud captures the extra tasks that cannot easily be run in the data center (or private cloud) during temporarily heavy workloads.
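Libraries that wrap many providers behind one interface already hint at what standardized APIs enable. Here is a minimal sketch using Apache Libcloud; the credentials, region, and node name are placeholders, and the same calls would work against other supported providers.

```python
# Minimal sketch with Apache Libcloud, which puts one interface over
# many providers. Credentials, region, and node name are placeholders;
# swapping Provider.EC2 for another driver leaves the rest unchanged.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

Driver = get_driver(Provider.EC2)             # could be another provider
conn = Driver('ACCESS_KEY_ID', 'SECRET_KEY',  # placeholder credentials
              region='us-east-1')

sizes = conn.list_sizes()
images = conn.list_images()
node = conn.create_node(name='surge-worker-1',
                        size=sizes[0], image=images[0])
print(node.name, node.state)
```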
In traditional on-premises computing, one organization bears responsibility for security end to end. In the cloud, however, this responsibility is divided among potentially many parties, including the cloud user, the cloud vendor, and any third-party vendors that users rely on for security-sensitive software or configurations. The cloud user is responsible for application-level security; the cloud provider is responsible for physical security, and likely for enforcing external firewall policies. Security for intermediate layers of the software stack is shared between the user and the operator: the lower the level of abstraction exposed to the user, the more responsibility goes with it. Amazon EC2 users thus carry more technical responsibility for their security (they must implement or procure more of the necessary functionality themselves) than Azure users, who in turn carry more than AppEngine customers.
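For example, an EC2 customer configures network-level controls, such as firewall rules, personally. A minimal sketch with boto3 follows; the security group ID is a placeholder.

```python
# Minimal sketch with boto3: an EC2 customer opening only inbound HTTPS
# on a security group. The group ID is a placeholder.
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')
ec2.authorize_security_group_ingress(
    GroupId='sg-0123456789abcdef0',          # hypothetical security group
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 443, 'ToPort': 443,      # allow HTTPS only
        'IpRanges': [{'CidrIp': '0.0.0.0/0'}],
    }],
)
```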
There are a few challenges and concerns surrounding cloud computing–based IT solutions. First, the operational costs may be high: although the capital cost can be much lower when a cloud computing–based solution is chosen, the company's operational cost may exceed that of running its own CIT infrastructure because of the subscription fees for the services used. Second, reliability needs attention: a cloud computing–based IT solution can provide reliable computing and information technology (CIT) services if care is taken in selecting providers, but factors largely outside the subscriber's control, such as provider outages and failures of the network connection to the provider, may make those services unreliable.
Global transactions and sourcing are becoming increasingly important, and here cloud solutions have a significant impact on efficiency and cost savings. A major benefit of cloud technology over typical on-premises software is the reduced burden of IT staffing and support. Transacting with new global partners is significantly easier and less expensive using the cloud, giving businesses more flexibility to transact with multiple business partners in different regions. In today's environment, businesses clearly require an adoption methodology that provides flexibility and responsiveness to changing dynamics.
In this scenario, however, chief financial officers (CFOs) struggle to manage risk as business becomes increasingly dependent on layers of global partners and suppliers. A massive amount of risk exists in the following areas:
Cloud technology provides a standard platform that generates visibility into transaction parties, payments, and documents. Visibility, agility, and a collaborative environment can mitigate the risks associated with trading partners, compliance, and credit. The average consumer product transaction can involve anywhere from five to 20 different parties in the steps leading from product manufacture to the store shelf. Each of the parties involved in a supply chain poses several elements of risk.
For database environments, the PaaS cloud model provides better IT services than the IaaS model: it provides enough resources in the cloud that databases can get up and running quickly while leaving users enough latitude to create the applications they need, and central IT management, security, and efficiency are greatly enhanced through consistency and economies of scale. With the IaaS model, by contrast, each tenant must build most of the stack on its own, lengthening time to deployment and producing inconsistent stacks that are harder to manage. A private cloud is an efficient way to deliver database services because it enables IT departments to consolidate servers, storage, and database workloads onto a shared hardware and software infrastructure. Databases deployed on a private cloud offer compelling advantages in cost, quality of service, and agility by providing on-demand access to database services in a self-service, elastically scalable, and metered manner. For many reasons, private clouds are a better option here than public clouds.
Information technology departments can leverage shared services to reduce costs and meet the demands of their business users, but many operational, security, organizational, and financial aspects of shared services must be managed to ensure effective adoption. Consolidation is vital to shared services, as it allows IT to restructure resources by combining multiple applications into a cohesive environment. Consolidation goes beyond hard cost savings: it simplifies management, improves resource utilization, and streamlines conformity to security and compliance standards.
Cloud computing promises convenience, elasticity, transparency, and economy, but with the many benefits come challenging issues of security and privacy. The history of computing since the 1960s can be viewed as a continuous move toward ever-greater specialization and distribution of computing resources: first we had mainframes, and security was fairly simple; then we added minicomputers, desktop and laptop computers, and client-server models, and it got more complicated. Tim Mather, Subra Kumaraswamy, and Shahed Latif outline these information security and privacy issues in depth, for the various cloud deployment models, in their book Cloud Security and Privacy (Mather et al. 2009).
Broadly, we need to consider three components of infrastructure-level security, at the network, host, and application levels, and their various implications for cloud deployment.
Data security and storage are other major issues that need attention during cloud deployment.
Most security problems stem from the consumer's loss of control and from a lack of trust mechanisms.
Managing access for diverse user populations (employees, contractors, partners, and so on) is a critical task for a cloud administrator. As more personal, financial, and medical data is hosted in the cloud, stronger authentication will be needed; software applications hosted in the cloud require access control, and there will be a growing need for higher-assurance authentication, including authentication from mobile devices.
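A minimal sketch of a role-based access check for such mixed user populations follows; the roles and permissions are invented for illustration.

```python
# Minimal sketch of a role-based access check for mixed user
# populations; roles and permissions are invented for illustration.
ROLE_PERMISSIONS = {
    'employee':   {'read_internal', 'write_internal'},
    'contractor': {'read_internal'},
    'partner':    {'read_shared'},
}

def is_allowed(role: str, permission: str) -> bool:
    """Return True if the role grants the requested permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_allowed('contractor', 'read_internal')
assert not is_allowed('partner', 'write_internal')
```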
Privacy rights and obligations relate to the collection, use, disclosure, storage, and destruction of personal data, or personally identifiable information (PII). They concern the accountability of organizations to data subjects and the transparency of an organization's practices around personal information.
Some considerations to mitigate privacy concerns are storage, retention, and destruction; auditing, monitoring, and risk management; and privacy breaches.
The aggregation of data raises new privacy issues. Some governments may decide to search through data without necessarily notifying the data owner, depending on where the data resides and whether the cloud provider itself has any right to see and access customer data. Some services today track user behavior for a range of purposes, from sending targeted advertising to improving services.
It is important to determine who enforces the retention policy in the cloud and how exceptions to it are managed. The policy also needs to define clearly how long personal information transferred to the cloud is retained, and whether the organization or the cloud service provider (CSP) owns the data.
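Where the provider supports it, a retention policy can be encoded in the cloud itself. The following minimal sketch sets an S3 lifecycle rule with boto3 that expires objects after 365 days; the bucket name and retention period are placeholders, and real retention periods must come from your own policy.

```python
# Minimal sketch with boto3: encoding retention as an S3 lifecycle rule
# that expires objects after 365 days. The bucket name is a placeholder.
import boto3

s3 = boto3.client('s3')
s3.put_bucket_lifecycle_configuration(
    Bucket='example-personal-data',          # hypothetical bucket
    LifecycleConfiguration={'Rules': [{
        'ID': 'retention-365d',
        'Status': 'Enabled',
        'Filter': {'Prefix': ''},            # apply to every object
        'Expiration': {'Days': 365},         # delete after one year
    }]},
)
```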
Cloud service providers usually replicate the data across multiple systems and sites, and increased availability is one of the benefits they provide.
If business-critical processes are migrated to a cloud computing model, internal security processes need to evolve to allow multiple cloud providers to participate in those processes as needed.
These include processes such as security monitoring, auditing, forensics, incident response, and business continuity.
It is critical to define a clear policy regarding privacy breaches with cloud service providers, such as:
IBM Research is pursuing an approach called “virtual machine introspection.” It puts security inside a protected VM running on the same physical machine as the guest VMs running in the cloud. The security VM employs a number of protective methods, including the whitelisting and blacklisting of guest kernel functions.
It can determine the operating system and version of the guest VM and can start monitoring a VM without any beginning assumption of its running state or integrity.
Instead of running 50 virus scanners on a machine with 50 guest VMs, virtual machine introspection uses just one, which is much more efficient, says Matthias Schunter, a researcher at IBM Research's Zurich lab. “Another big advantage is the VM can't do anything against the virus scan since it's not aware it's being scanned,” he says. In another application, a virtual intrusion detection system runs inside the physical machine to monitor traffic among the guest VMs. The virtual networks hidden inside a physical machine are not visible to conventional detectors because the detectors usually reside in a separate machine, Schunter says.
The following are some of the key areas that need to be addressed by the cloud computing contract, generally referred to as a service-level agreement (SLA); intellectual property ownership, discussed next, is chief among them.
In an IaaS environment, the customer maintains all intellectual property ownership rights related to any applications that it runs using the IaaS platform. Of course, reasonable confidentiality provisions will need to be included to protect any trade secrets that the customer places on the IaaS platform. The obligations to acquire third-party consents should also be straightforward in the cloud computing environment, with the provider being responsible for any consent required for it to operate its solution and the customer acquiring necessary third-party consents required in connection with any application or data that the customer brings to the public cloud platform. The intellectual property and licensing structure for an SaaS or a PaaS solution could be more complex, depending on the intellectual property at issue. The provider will retain ownership of its solution, but the customer will need to consider the ownership of any intellectual property for any interfaces or add-ons that the customer develops in connection with using the services as well as the ownership of applications developed on a PaaS platform.