Chapter 9
The Importance of Cloud Computing

The cloud refers to software and computing services that run on remote computers and are available over the internet through a web browser or applications on your computing device. The term cloud is new, but the concept is not; it is a twist on the old practice of remote computing. Many organizations operate a data center that provides software and computing services throughout the organization over a virtual private network (VPN). Software resides on an application server—not on local computers throughout the organization. Data resides on a database server in the data center. All computing services are provided by the data center.

Cloud computing is provided by vendors through services such as Amazon Cloud Drive, Microsoft OneDrive, Apple iCloud, Google Drive, Dropbox, Yahoo Mail, and Netflix. Many cloud computing services offer storage for data and applications. Some offer their own applications, such as Microsoft Office and Google Docs, enabling collaboration on projects within and outside the organization. Data and applications are available 24/7 over the internet. The cloud vendor is responsible for maintaining the cloud environment—the data center, servers, network connections, power, and applications.

In theory, there is no end to the cloud. The organization can use as much or as little space as necessary—for a fee. The cloud vendor has computing resources ready to meet practically any demand for service. Cloud computing is any pay-per-use service delivered in real time over the internet that extends an organization’s existing capabilities at a fraction of the cost of expanding a data center.

Rather than acquiring new servers, networks, and related applications within the organization’s data center, the organization uses a cloud service. The cloud provider is an aggregator of computing services from other vendors, offering one-stop shopping for an organization. These benefits make the cloud a viable alternative to operating a data center. It is estimated that every piece of information transmitted over a public network—voice, data, and images—is at some point in the cloud.

It Can Rain Too

The cloud can seem like the utopian answer to all the organization’s computing needs. However, there is a downside. Cloud-based applications and data require internet access. No internet connection means no access to applications and data. Furthermore, technical issues affecting the cloud vendor’s data center become the organization’s issues too, because when the cloud goes down, the organization no longer has access to applications and data. Compounding the problem is that the organization has no control over rectifying these issues. Unlike in the organization’s own data center, where the organization controls every facet of resolving technical issues, the cloud provider is responsible for fixing the problem.

Organizations that place their data and applications in the cloud are at risk of losing control. Data and applications that reside in the cloud vendor’s data center are out of the organization’s control. The organization must trust the cloud vendor to provide adequate security measures to protect the organization’s data. Furthermore, the organization must determine whether it could sustain itself if the cloud vendor denied access to the data and applications.

Governmental Access

An organization’s data is always at risk of being legally accessed by governmental agencies. The government typically gains access by serving legal notice to the organization that holds the data. The organization itself receives such notice if the data is held in the organization’s data center. What happens if the data is held at the cloud vendor’s data center? Must the cloud vendor hand over the data without notifying the organization? These are much-debated questions.

An organization can expect privacy unless data is disclosed to a third party. This is referred to as the third-party doctrine. A cloud vendor providing cloud computing services can be considered a third party; the government may search the organization’s data with the proper legal papers and issue an indefinite gag order to the cloud vendor, preventing the cloud vendor from disclosing that the government searched the data.

Cloud vendors are taking steps to address privacy concerns. Microsoft relocated its cloud servers to data centers in Germany and transferred both physical and logical access to cloud data to a data trustee. This greatly reduces Microsoft’s access to customer data. However, the privacy debate continues. Microsoft reported that within an 18-month period, the government made 5,600 legal demands for Microsoft to provide customer data stored on remote servers. Half required Microsoft to withhold notice of the search from the customer indefinitely.

The Cloud and Data Science

Data science, commonly referred to as big data, focuses on making sense out of large amounts of data by finding data patterns that can be used to develop predictions. Data scientists were long stifled by the limits of the technology available to extract, store, process, and analyze huge data sets. The computing environment available within an organization lacked the processing power, production environment, memory, and storage to effectively study sizable amounts of data.

Data scientists hit an electronic wall. The local computing environment was not scalable. Data grew by orders of magnitude each month while the organization’s computing technology remained stagnant, and resources for big data analysis competed head-to-head with mission-critical applications. They simply ran out of computing resources. Living within the allocated computing technology required big data analysis to be performed in steps—loading and unloading data and applications resulted in reliability errors and performance degradation. Compounding the challenge were the heavy data processing requirements to clean the data for analysis and the need to repeatedly test and fine-tune data models against the massive amount of data.

The cloud radically changed data science by removing the electronic wall that held back the big data revolution that is driving machine learning and other eye-opening insights. The cloud offers practically unlimited scalability, using the most powerful computing environments and technology available, all at a cost that most organizations can afford. As large amounts of data compound monthly, the organization acquires additional cloud services to store and process the data at an incremental cost without the hassle of investing in new equipment, expanding the data center, and hiring staff.

Some cloud providers offer services specifically designed to manage big data. They have the capability to acquire, clean, store, and share the data throughout the organization, and the resources to develop, test, and implement data models based on big data. The cloud enables data scientists to quickly build prototypes without worrying about computing assets. Once proven, the full version of the data model can be implemented in the cloud.

The Cloud Services

Cloud technology is the latest step in an evolution that began with stand-alone computing. Late in the last century, computing devices were connected to servers using client/server architecture. The computing device, called a client, requested services from a remote server over a local area network, known today as an intranet. Services included applications, data, and processing. In client/server architecture, some processing is performed locally on the computing device while processing required by all clients is handled on one or more common remote servers.

Client/server architecture is referred to as two-tier architecture, with the client as one tier and the server as the second tier. Multiple-tier architectures are commonplace today. For example, a client accesses a remote application and the remote application accesses a database. This is three-tier architecture: client, application, and database. Client/server architecture has a major disadvantage (Figure 9.1). There are no economies of scale. Investments in new infrastructure and new software licenses are necessary to expand capacity.
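To make the tiers concrete, the following minimal Python sketch separates the three tiers the text describes: a client calls an application function, and the application queries a database. The table, data, and function names are illustrative only, not taken from the chapter.

    import sqlite3

    # Database tier: an in-memory SQLite database stands in for the database server.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
    db.execute("INSERT INTO customers (name) VALUES ('Acme Corp')")

    # Application tier: business logic that would run on the application server.
    def get_customer_name(customer_id):
        row = db.execute(
            "SELECT name FROM customers WHERE id = ?", (customer_id,)
        ).fetchone()
        return row[0] if row else None

    # Client tier: the computing device requests a service from the application tier.
    print(get_customer_name(1))  # prints: Acme Corp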

Figure 9.1: Client-server lacks economies of scale

There are three types of services offered by a cloud provider. These are:

  1. Software as a Service (SaaS): With SaaS, the cloud provider offers access to applications hosted by the cloud provider and accessed through a web browser. The cloud provider is responsible for deploying, managing, and maintaining applications. Examples are Google Apps, Dropbox, and Salesforce. Organizations subscribe to the service. The cost of ownership of applications is covered by the cloud provider.
  2. Platform as a Service (PaaS): With PaaS, the cloud provider offers the platform that can be used to develop and deploy applications. The cloud provider offers the organization the operating system and related hardware and network infrastructure to develop and run the organization’s own applications. The organization focuses on building the applications. The cloud provider offers the tools and scalability to enable the organization to quickly respond to changing markets by requesting access to additional resources from the cloud provider. Examples are OpenShift, Heroku, and Google App Engine.
  3. Infrastructure as a Service (IaaS): With IaaS, the cloud provider offers the basic infrastructure building blocks to the organization, enabling the organization to assemble computing resources on demand. The cloud provider enables the organization to build a virtual data center. Components of the virtual data center can be accessed by the organization as if they were part of the organization’s traditional data center. However, there is no need for the organization to invest in the data center. It simply pays for components as needed. The cloud provider is responsible for management and maintenance of the physical data center. IaaS gives the organization virtual control over servers, storage, and processing. Examples are Exoscale, Navisite, and SoftLayer. IaaS is sometimes referred to as utility computing because it provides a utility-type service to organizations (a brief sketch follows this list).
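As one illustration of the IaaS model, the sketch below requests a virtual server on demand using the AWS SDK for Python (boto3). AWS is used here only because its SDK is widely known; it is not one of the vendors named above, and the region, image ID, and instance type are placeholders.

    import boto3

    # Connect to the IaaS provider's compute service (AWS EC2, as one example
    # of an IaaS API; the region is a placeholder).
    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Request a single virtual server on demand. The organization pays only
    # while the instance runs; no physical data center investment is needed.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder machine image
        InstanceType="t3.micro",          # placeholder instance size
        MinCount=1,
        MaxCount=1,
    )
    print("Launched instance:", response["Instances"][0]["InstanceId"])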

The Private Cloud

A private cloud is very similar to the traditional data center architecture in that services are provided only to entities within the organization. There are no commercial clients. An entity is a division of the organization, sometimes considered an internal client of the group that operates the private cloud. Internal clients don’t have control over the cloud environment. Control resides with the group that operates the private cloud.

The private cloud operation creates a virtual environment for each internal client using a pool of computing resources. The group operating the private cloud reconfigures computing resources to respond to the needs of internal clients. The private cloud can be created in one of three ways. The organization can own and operate computing resources that create the cloud—the traditional data center environment. The organization can outsource the private cloud to a vendor where computing resources provided by the vendor are solely used by the organization and not shared with other organizations.

A hybrid is another option—referred to as cloud bursting—where primary computing resources are owned and operated by the organization and additional on-demand computing resources are provided by a cloud vendor. Non-sensitive computing assets are moved to the public cloud, freeing private cloud resources for sensitive computing assets.

Private clouds are ideal for organizations that require secured processing and storage because the organization is in total control of security. Communication with the private cloud is conducted over private leased, secured lines with encryption. This offers greater security than is provided in the public cloud: all computing devices in the private cloud operate behind the organization’s firewall, and applications and personnel are under the organization’s control. No resources are shared outside the organization.

Private clouds come at a cost because there is one client—the organization—who underwrites the entire operation. Economies of scale are limited to internal clients, compared to the many clients associated with a public cloud operation. The organization can allocate computing resources quickly since it controls cloud resources. Using a public cloud may delay allocation because an agreement to use those resources must be reached between the organization and the cloud vendor.

The Public Cloud

The public cloud offers computing resources over the internet to individuals and organizations that do not require the security provided by a private cloud. The public cloud offers computing resources on demand, typically for a monthly fee. Computing resources can include expensive sophisticated applications, processing devices, and storage devices that otherwise might be out of the financial reach of the client. Access is seamless from anywhere at any time. Clients pay for the services they need for as long as they need them.

The public cloud offers economies of scale because expensive cloud infrastructure, computing devices, and applications are shared among many organizations. The public cloud vendor can provide state-of-the-art centralized operations with redundant architectures and environments because costs are leveraged among its client base. Redundancy enables the vendor to balance loads, which provides an expected level of services regardless of demand. Multiple computing devices and cloud operation centers located in multiple states and countries guarantee continuous availability of the cloud to all clients as long as the client has internet access.

The cloud vendor accepts the operational risks associated with the cloud. It ensures that services are available; applications and operating systems are updated; and computing resources are maintained to meet the client’s and regulatory requirements. Furthermore, the cloud vendor incorporates sophisticated security measures that might be out of reach in a private cloud environment. The cloud vendor also has certified full-time staff with skill sets that may not be economically available to organizations that operate a private cloud.

Hybrid Clouds

A hybrid cloud is a combination of a private cloud and a public cloud. The private cloud is used for sensitive processing and the public cloud is used for non-sensitive processing. Access to both clouds is seamless: users gain access through a browser-based portal that redirects requests to either the private or the public cloud. A key benefit of a hybrid cloud is that the private cloud can be used to satisfy regulatory requirements for secure processing and storage of data, while the public cloud provides the flexibility to meet growing demands.

There are a number of ways to implement a hybrid cloud. An organization can use two cloud vendors to supply the cloud—one for the private cloud and the other for the public cloud. Alternatively, a cloud vendor can provide a complete service where the private cloud computing resources are not shared and the public cloud computing resources are shared.

Still another option is for the organization to internally provide a private cloud and rely on a vendor to provide a public cloud. The drawback to implementing an internal private cloud is limited scalability. The organization would need to acquire more computing resources to expand. A cloud vendor needs only to reallocate existing resources to the private cloud (Figure 9.2).

Figure 9.2: A cloud vendor offers flexibility

Why Implement a Cloud?

The cloud offers many advantages for an organization that is growing and whose computing resource requirements fluctuate. The cloud provides the operational agility to meet growing demands with a sound economic foundation.

Easy to increase computing capacity.

Scalability both up and down.

A competitive advantage by increasing/decreasing computing capacity as needed without incurring long-term financial obligations.

Taking advantage of the latest technology without the burden of acquiring scarce resources.

Reduced time to market. Start-up time for a new initiative might require nine months to acquire computing resources. The cloud offers computing resources within days.

Disaster recovery. The cloud provider has the computing resources and expertise to handle recovery in a disaster. A cloud provider typically has replicated cloud data centers throughout the United States and outside the country.

Frees real estate. The cloud is off the organization’s premises. Space used for computing resources can be reallocated for other purposes.

No upfront investment in computing resources. The organization pays for computing resources using a subscription model.

No maintenance. The cloud vendor takes care of software updates and security patches as part of their core business.

No longer an information technology organization. Information technology has become a necessary part of the organization’s operation, although information technology is not the organization’s core business. The cloud shifts information technology to a cloud vendor whose core business is the cloud. The cloud vendor’s investment in cloud technology is an investment in the cloud vendor’s core business.

Shared resources. The cloud enables the staff to collaborate in real-time, increasing productivity.

Balanced work schedule. The cloud enables the staff to collaborate from anywhere in real time over the internet. Cloud vendors also offer cloud apps that can be used on mobile computing devices, giving staff access to the organization’s computing resources while on the go.

Reduces the carbon footprint. Rather than the organization maintaining computing resources that have a large carbon footprint, the organization shares those computing resources with other organizations in the cloud.

Staff focuses on business. In many organizations, the information technology staff account for 25 percent of the employees. The organization frees up headcount by moving to the cloud. Fewer information technology staff are required.

Hidden security. Staff can work with cloud-based applications that automatically save files to the cloud rather than on a local computing device. Files are never lost even if the local computing device crashes.

Why Not Use the Cloud?

The cloud is not a perfect technological solution. Here are common disadvantages:

Connectivity. The cloud internally and externally depends on network operations. Internally, the cloud provider connects its data centers located around the world over a network—the same public network that is used to connect everyone else. The organization connects to the cloud over the same network. Any network issue is also a cloud issue.

Traffic volume. The public network is a multi-lane highway that can handle very high volumes of traffic. An off-ramp is a narrower roadway to a cloud provider’s site. A traffic jam occurs unless the cloud provider manages the load to its sites as demand for its cloud services increases. Failure to do so results in slow response time, which is something an organization doesn’t expect from the vendor.

Software incompatibility. The presumption is that applications run on all computing devices, which is not necessarily the case. An organization may be using older applications and databases that are not compatible with computing resources offered by the cloud provider. The organization’s custom applications and third-party applications are built using technologies such as Java, C++, MySQL, and Oracle that require frequent upgrades, both on computing devices accessing the application and computing devices running the application. Cloud vendors are noted for installing upgrades faster than the organizations that use their services. Some upgrades must be applied on both sets of computing devices; failure to upgrade prevents access to the application. Likewise, some upgrades may not be compatible with an organization’s applications, preventing the application from running in the cloud.

Support. An advantage of using the cloud is offloading most of the organization’s computing responsibilities to the cloud vendor. The presumption is that it is economical for the cloud vendor to hire specialists since the expense can be allocated to other customers who also require those services. However, legacy applications that run an organization can become problematic since other organizations may not require the same specialist to maintain the application. The cloud vendor may refuse to accept the application or charge a premium to accept it.

Security. Responsibility for providing cybersecurity moves from the organization to the cloud provider. The cloud provider has multiple data centers around the world, each connected to customers and to each other. If a security gap exists in any of the data centers or connections, then it is highly likely that the vendor’s entire infrastructure is susceptible to the breach. An organization typically has one or a few data centers, decreasing points of failure compared with the cloud vendor.

Dependency. By switching computing responsibility from the organization to the cloud vendor, the organization’s sustainability is dependent on the sustainability of the cloud vendor. The organization cannot change cloud vendors quickly. If the relationship between the organization and the cloud vendor breaks down, the organization needs a contingency plan that enables it to move its cloud business to another cloud vendor with minimum interruptions. The breakdown in the relationship may not have anything to do with providing services. For example, the cloud vendor may be taken over by another cloud vendor, which might result in the organization sharing computing resources with competitors.

Mitigating Risk

Although computing risks seem to be offloaded to the cloud provider, the organization remains exposed. However, steps can be taken to mitigate risk by carefully selecting a cloud provider. Here are the steps that need to be taken (Figure 9.3).

Figure 9.3: Cloud risks to mitigate in an SLA

Encryption. The organization’s data must be encrypted at all times. AES-256 encryption is the most desirable because no practical attacks against it are known.
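As a sketch of what this requirement can look like in practice, the snippet below encrypts data with AES-256 in GCM mode using the Python cryptography package before the data leaves the organization. The library choice and the in-code key handling are for illustration only; production keys belong in a key management system.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Generate a 256-bit key (in practice, store and manage it in a KMS).
    key = AESGCM.generate_key(bit_length=256)
    aesgcm = AESGCM(key)

    plaintext = b"customer ledger, Q3"
    nonce = os.urandom(12)  # a unique nonce is required for every message

    # Encrypt before the data is sent to the cloud provider.
    ciphertext = aesgcm.encrypt(nonce, plaintext, None)

    # Only a holder of the key can recover the plaintext.
    assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext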

Demarcation. The cloud is a multi-tenant environment. All clients can share resources. Therefore, the cloud provider must demonstrate how the organization’s applications and data are segregated from other customers’ applications and data. There should be an electronic or physical wall between clients. The cloud provider may cage in computing devices for each client and require a separate key to unlock the cage.

Data replication. The cloud provider must show the organization how data is replicated and restored as part of the organization’s and cloud provider’s data recovery plan, should the cloud provider’s facilities experience a catastrophe.

Data ownership. Make sure it is clear to the cloud provider that the organization owns the data and the format of the data. The cloud provider is simply providing storage and applications to manipulate the data.

Application ownership. Make sure it is clear who owns the rights to the application. Let’s say that the cloud provider licenses a SQL database management system (DBMS). Queries are used to interact with the DBMS. Who owns the queries? This is especially important if the cloud provider’s staff writes queries for the organization. The sustainability of the organization may depend on those queries.

Termination. Negotiate terms of terminating the relationship prior to engaging the cloud provider. Termination terms clearly define who owns what and the process for moving the organization’s owned resources to another cloud provider. Furthermore, the terms of termination should clearly explain how applications and data residing on the cloud provider’s computing devices will be destroyed after they are moved to another cloud provider. Termination terms also identify the conditions under which the relationship can be terminated. If and when the time comes to move, simply execute the terms of the termination agreement.

Costs. Identify all costs associated with engaging the cloud provider. There should be no surprises or hidden costs. Costs include initial setup costs; ongoing costs; maintenance costs; change costs; and termination costs. Initial setup costs involve expenses to transfer the organization’s computing operations to the cloud provider. Ongoing costs are usually included in the monthly fee. Maintenance costs involve routine upgrades to applications and databases. Will the organization be charged a fee for moving data from the cloud to its own facility? Change costs involve non-routine enhancements to the cloud services, such as new applications, new databases, and services not covered in the original agreement. It is important to come to terms about these changes before they occur so there are no surprises at the time of the change. Termination costs are expenses associated with termination of the agreement, including transferring applications and data to another cloud vendor.

Security. Be sure that the cloud provider upgrades security to meet the organization’s requirements. The organization should set minimum security requirements and not waver from them.

Limitations. Ensure that the cloud provider has the computing resources and staff that they promise in their sales presentation. Trust but verify. The organization is buying experience, and the cloud provider’s organization should reflect that experience. Years of operation are not the only criterion to consider. The cloud provider’s infrastructure must reflect current technology.

Bandwidth. The cloud provider must have sufficient bandwidth today to meet demand for the next five years. Think of bandwidth as highway lanes. There should be sufficient lanes on the electronic highway—both the off ramp from the internet and internal highways—to maintain an acceptable response time. Many cloud providers are in a Catch-22 situation. Do they invest in a super-speed infrastructure hoping to attract clients or build the super-speed infrastructure as they bring on clients? The organization should be looking for a cloud provider who has the financial resources to build a super-speed infrastructure first. How much bandwidth is needed? There are tools available such as the Microsoft Assessment and Planning Toolkit that help an organization assess its needs.

Service-level agreement. The service-level agreement defines the relationship between the organization and the cloud provider. It contains expectations, limitations, liabilities, responsibilities, termination, fees, and other understandings that govern the relationship between both parties.

The Cloud Life Cycle

The cloud offers many options from à la carte to full-service. The cloud life cycle process helps to decide which options to choose. There are eight steps in the cloud life cycle process.

  1. Define the purpose. Decide the organization’s requirements first. The cloud can meet a variety of needs once those needs are identified. An organization experiencing a surge can use the cloud to quickly expand its capabilities practically overnight. An organization that hasn’t kept pace with technology can use the cloud to become current with technology to operate the organization. Still other organizations use the cloud to expand services to customers. For example, Adobe produces many creative applications, originally selling each product separately. The cloud is now used to provide customers access to all their creative applications online for one monthly subscription fee.
  2. Define the hardware. The cloud vendor offers a variety of hardware to run an organization’s applications, data, and computing operations.
  3. Define storage service. Storage is the place in the cloud in which you house applications and data. Vendors offer different services that are optimized for backing up applications and data or for archiving them.
  4. Define the network. Decide on the requirements for communicating with the cloud. Factors to consider are security; amount of network traffic generated by the organization, such as data, voice, and video; and transfer speeds.
  5. Define security. Security factors are authentication, authorization, encryption at rest, and encryption in transit.
  6. Define management processes and tools. Management processes and tools are used to give the organization control over its cloud assets. These include monitoring activity, managing applications and data residing in the cloud, and developing and deploying applications to the cloud.
  7. Define building and testing requirements. The cloud is more than a remote data center. The cloud can be the organization’s computing environment within which developers build and test applications. Identifying the organization’s needs to continue creating and maintaining applications in the cloud helps to select the best vendor and services to use for the organization.
  8. Define analytics. Analytics are used to monitor operations and provide decision support information to assist management in making decisions. Vendors are able to provide an assortment of analytical tools that can provide instant results and can respond to any query for information. The organization must identify its analytical requirements when selecting a vendor.
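The eight definitions above can be recorded as a simple requirements profile that the organization reviews with candidate cloud vendors. The sketch below is one hypothetical way to capture those decisions in Python; the field names and values are invented for illustration, not a vendor format.

    # Hypothetical requirements profile covering the eight life cycle steps.
    cloud_requirements = {
        "purpose": "expand capacity for seasonal demand surges",
        "hardware": {"vcpus": 16, "memory_gb": 64},
        "storage": {"type": "object storage", "backup": True, "archive_after_days": 90},
        "network": {"encrypted_transit": True, "traffic": ["data", "voice", "video"]},
        "security": ["authentication", "authorization",
                     "encryption at rest", "encryption in transit"],
        "management": ["monitoring", "application management", "deployment tooling"],
        "build_and_test": {"environments": ["development", "test", "production"]},
        "analytics": ["operational dashboards", "ad hoc queries"],
    }

    # Print the profile as a checklist for vendor discussions.
    for step, decision in cloud_requirements.items():
        print(f"{step}: {decision}")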

Cloud Architecture

The cloud architecture is a service-oriented architecture where the focus is for the cloud vendor to provide a wealth of services to customers. Each customer picks services that augment its organization’s operations. Customers pay only for services that they use. The vendor’s objective is to identify needs and provide services to meet the needs of its customers. The vendor then leverages the costs of development, operations, and maintenance of each service across the customers who subscribe to it.

A key element of the cloud architecture is the use of microservices to develop an application. The microservices concept has appeared elsewhere in computing, such as in the Unix operating system and in web services. The basis of microservices is to create self-contained mini-applications called services that do something very well. Each performs a granular function that can be assembled with other microservices to form an application (Figure 9.4).

Figure 9.4: Microservices in a cloud architecture

Think of a microservice as an event handler. An event handler is a common structure in a Windows-like operating environment in which there are many events happening at the same time. An event handler is a self-contained function that responds to a specific event. For example, in a Windows-like operating environment there are multiple applications appearing on the screen. When the user resizes the window of an application, all other applications need to adjust their screen to accommodate the change. Each application has a function called an event handler that contains code that resizes its window. Microservices are like event handlers, except the microservice is outside the application and is called in response to events occurring within the application or with any application that uses the microservice.

Let’s say an application needs to process credit card payments. Instead of embedding the code that processes credit card payments into the application, and into every other application that needs to do the same, a credit card processing microservice is created and used by all applications that need to perform this task. Developers simply call the microservice, provide it with the necessary information, and process the data it returns.
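A minimal sketch of what calling such a payment microservice could look like from an application, assuming a hypothetical HTTP endpoint and response format; the URL, request fields, and status value are invented for illustration.

    import requests

    # Hypothetical endpoint of the credit card processing microservice.
    PAYMENT_SERVICE_URL = "https://payments.example.com/api/v1/charge"

    def charge_credit_card(card_token, amount_cents, currency="USD"):
        """Call the payment microservice and interpret its response."""
        response = requests.post(
            PAYMENT_SERVICE_URL,
            json={"card_token": card_token,
                  "amount_cents": amount_cents,
                  "currency": currency},
            timeout=10,
        )
        response.raise_for_status()
        # The application only processes the data returned by the microservice;
        # the charging logic itself lives entirely inside the service.
        return response.json().get("status") == "approved"

    # Example usage from any application that needs to take payments.
    # approved = charge_credit_card("tok_abc123", 1999)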

Each microservice is developed independently of other microservices to meet the needs of vendors in the cloud community. However, each has an application programming interface (API) that is shared with developers. The API describes the microservice function; information that is needed to perform the function; any codes to turn on or off sub-features of the function; instructions on how to call the microservice; and instructions on how to interpret values returned by the microservice.

APIs, Fintech and Blockchain

The flexibility offered by open APIs and microservices has helped spur rapid growth in the financial technology (fintech) arena. Companies like PayPal, whose API enables safe payments worldwide, allow third-party applications large and small to tie into their technology. Today, hundreds of startups interface with the customer in new and unique ways, using big data to understand customer needs and providing useful services that drive innovation in the fintech sector. These startups offer developers best-of-breed technology, saving time and money and supplying expertise often beyond that of a project development team. The fintech area has caught fire, combining with developments in cryptocurrencies that use distributed ledger (blockchain) technology to create alternative currencies such as bitcoin, Ethereum, and others. Perhaps more importantly, blockchain technology is likely to support a new breed of innovations that use the immutability of the blockchain to enable smart contract based applications that do not require expensive and time-consuming third-party support and maintenance, because the blockchain is by design irreversible.

The microservice is maintained by a development team. Upgrades are usually made without the knowledge of the developers who use the microservice, unless the change affects the API. For example, a change in credit card processing is implemented immediately and brings all applications that use the microservice current. One change instantaneously occurs in many applications.

Furthermore, a microservice may be assembled from other microservices. For example, processing a credit card requires sub-processes such as authorizing access to perform the process; accessing secure information relating to the purchase from a database; and updating activity logs. Each of these might be a microservice that can be accessed by other applications aside from processing a credit card. The idea is that a microservice can be called from anywhere and from any application that is authorized to use the microservice.

There is a tendency to associate microservices with a single vendor, but that is too narrow a view of microservices. Keep in mind that the cloud can be a private cloud, a public cloud, or a hybrid of both. An application can be configured to use microservices available on a private cloud, on a public cloud, and on clouds offered by different vendors.

Each microservice must have a product owner who is responsible for maintaining the microservice and upgrading it based on feedback from developers. Microservices must be organized within a library management system, making it easy for developers to locate microservices that can be incorporated into their application.

Serverless Computing

Another element of the cloud architecture is serverless computing. When developing and deploying an application, the organization needs to consider the computing resources necessary to run the application. Computing resources include various hardware and software components. At times, developers are limited to building an application that can run on the existing computing resources. Other times, developers have to estimate computing resources needed to run the application, and then the organization needs to allocate the finances to acquire those resources. Furthermore, the organization has to allocate computing resources among applications.

The cloud practically eliminates the challenges of building an application to run in the organization’s computing environment by giving developers the freedom to design an application without consideration of computing resources. In other words, developers and the organization are working with serverless computing, which is computing with a virtually endless availability of hardware and software to run an application. Yes, applications require computing resources, including servers. However, the cloud vendor has what appears to be all the computing power an organization would ever require. Therefore, it seems as if the cloud is serverless.

The cloud vendor offers computing resources on an as-needed basis. Let’s say an application requires heavy data crunching, but only occasionally. The organization pays for the computing resources for those moments. There is no idle time. The organization no longer needs to acquire the computing power to crunch the data. Computing power is acquired just when it is required—and the acquisition is automatic once the application is configured for the cloud. The operation switches to the needed computing resources behind the scenes.
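A sketch of the occasional data crunching example written as a serverless function. The event/context handler signature follows the convention used by platforms such as AWS Lambda; the event fields are assumptions for illustration, and the platform runs and bills the function only when it is invoked.

    import statistics

    def handler(event, context):
        """Entry point invoked by the serverless platform on demand."""
        # The event is assumed to carry a batch of numeric readings to crunch.
        readings = event.get("readings", [])
        if not readings:
            return {"count": 0}
        return {
            "count": len(readings),
            "mean": statistics.fmean(readings),
            "stdev": statistics.pstdev(readings),
        }

    # Local test of the same code path the platform would invoke.
    print(handler({"readings": [3.2, 4.8, 5.1]}, None))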

Developers and the organization focus on building the application using a blend of custom code and microservices without concern over limitations of computing resources. The cloud environment ensures that the necessary computing resources are available when required by the application. Configuration of the application for the cloud takes care of fulfilling the computing requirements for the application.

DevOps

There are many scenarios for running workloads inside or outside of a relationship with a cloud provider. The methods used may be the most important factor in your decision to use one cloud vendor, multiple cloud vendors, or a hybrid. How are you to run your sales organization, your backend services such as accounting and finance, your supply chain, your web presence and customer outreach, and your development needs for all of the above? Who provides these services?

Development and operations (DevOps) is the process used in the cloud to eliminate barriers between application development and the operations that run the applications. DevOps replaces the traditional development and delivery methods that require many processes and staff who typically work in silos that impede the agility required for fast, economical responses to the organization’s demands. This was commonly referred to as the waterfall method, in which one silo passed along the work to the next until the last silo deployed the application.

DevOps automates many of the processes required to move an application from development into production. Developers move applications into the cloud using DevOps tools directly. The cloud provider may then manage the process of functional and nonfunctional, unit and iterative testing (continuous testing); version control; configuration management; change management; and other functions necessary to deploy the application.

At each stage of implementation, the application is either returned to the previous stage if there are issues with the application or pushed forward in the deployment process. For example, the cloud returns the application to the developers if the application fails to pass a test. In doing so, DevOps refocuses the organization on developing the application while the cloud is focused on managing the process.

The DevOps process enables developers to write code, build it into an application, run automated tests, and then deploy automatically so that the application is immediately available for use. The operations portion controls image management, rolling upgrades, security configuration, patch management, and environment configuration and deployment. The DevOps process brings a synergy of development staff and operations staff by forming a uniform process across silos, removing barriers that traditionally exist in the development and operational environments.
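One way to picture these automated hand-offs is as an ordered set of stages in which a failure at any stage returns the application to the developers, as described above. The sketch below is purely conceptual and does not represent any particular DevOps tool.

    # Conceptual pipeline: each stage either passes the build forward or
    # returns it to the developers.
    def build(app):  return True   # compile and package the application
    def test(app):   return True   # automated functional and security tests
    def deploy(app): return True   # push the application into production

    PIPELINE = [("build", build), ("test", test), ("deploy", deploy)]

    def run_pipeline(app):
        for stage_name, stage in PIPELINE:
            if not stage(app):
                print(f"{stage_name} failed: returned to developers")
                return False
            print(f"{stage_name} passed")
        print("application deployed and immediately available")
        return True

    run_pipeline("sales-app")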

With the developer figuratively pressing the button in DevOps to test an application, testing occurs that identifies policy issues, coding problems, quality problems, and security issues. The DevOps process returns the test results to the developer, who then modifies the code to address those problems.

Before DevOps, the development team and operations team worked relatively independently, resulting in risky deployment of new applications because of a lack of collaboration and synchronization. This led to increased costs and challenges tracking changes to applications. DevOps enables both teams to work as one team, each looking to produce a quality application. There is a continual feedback cycle that uses automated DevOps processes to help the team monitor and share information about development. The entire process from development through operations becomes measurable, and any delay clearly highlights where the process breaks down, thereby making the delay actionable.

Key to DevOps is a lean methodology that automates hand-offs between development, operations, and customers. Prior to DevOps, a “customer,” internal or external, enters a ticket for a change to an application—perhaps through the help desk, which is part of operations. The operations team records and sends the request to the development team, which works on the changes. The upgraded application is then sent to the testing team. The testing team needs operations to set up the testing environment. Testing also reviews security requirements, quality control, and compliance with the organization’s policies. Results of testing are then sent to the development team. The application is then modified and returned to testing if changes are necessary. Otherwise, the application is turned over to operations to begin the deployment process. There are too many gaps and hand-offs where details can be overlooked. Furthermore, delays occur because each group learns about the application only when it receives the application.

DevOps reduces the number of manual hand-offs by making all stakeholders aware of the status of the project, beginning with the initial change request. Tools are used to automate the process where possible. In some situations the tool performs the process and in others the tool enables the team to efficiently perform its role. For example, DevOps typically produces real-time reports that help the teams improve the process. These include the change fail rate, which measures the rate at which changes fail to achieve the desired goal; mean time to recover (MTTR), which calculates the average time to recover from a failure; and lead time for change, which is the elapsed time from when the request for change is received to when the change is fully implemented.
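These three reports reduce to simple arithmetic over deployment records. The sketch below computes them from a hypothetical change log; the record format is invented for illustration.

    from datetime import datetime, timedelta

    # Hypothetical change records: when requested, when deployed, whether the
    # change failed, and how long recovery took when it did.
    changes = [
        {"requested": datetime(2023, 1, 2), "deployed": datetime(2023, 1, 9),
         "failed": False, "recovery": None},
        {"requested": datetime(2023, 1, 5), "deployed": datetime(2023, 1, 20),
         "failed": True, "recovery": timedelta(hours=4)},
        {"requested": datetime(2023, 2, 1), "deployed": datetime(2023, 2, 8),
         "failed": False, "recovery": None},
    ]

    # Change fail rate: share of changes that failed to achieve the desired goal.
    fail_rate = sum(c["failed"] for c in changes) / len(changes)

    # Mean time to recover (MTTR): average recovery time across failed changes.
    recoveries = [c["recovery"] for c in changes if c["failed"]]
    mttr = sum(recoveries, timedelta()) / len(recoveries)

    # Lead time for change: elapsed time from request to full implementation.
    lead_times = [c["deployed"] - c["requested"] for c in changes]
    avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

    print(f"change fail rate: {fail_rate:.0%}")
    print(f"MTTR: {mttr}")
    print(f"average lead time: {avg_lead_time}")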

DevOps uses selective automation to optimize the development and operations process. The goal is to automate the process of developing applications and getting the applications deployed so customers can use them. Each phase of the process is automated so the application can be tracked and progress objectively measured, giving feedback to both the development and operations teams, who then improve the process. The DevOps process provides staff with the tools needed to optimize their role in the development and operations process.

It is smart to begin adoption of DevOps with a pilot application that can be used as a proof of concept. This is often done in coordination with a cloud provider. The cloud vendor provides the tools and environment to implement the DevOps process. The pilot application uses a lean development and operations team of approximately ten staff compared with an estimate of thirty staff members for implementing a typical application. The goal is to demonstrate that the concept of DevOps is a viable option for the organization. Aspects of the DevOps process are proven and there is no need for the staff to reinvent it since they can leverage existing solutions. The pilot application also identifies training needs for the developers and the operations staff on how to use the DevOps tool to automate their processes.

Once the pilot application has successfully been developed and implemented using the DevOps process, the organization makes a conscious effort to break down silos and bring the entire staff onboard using the DevOps process. In its purest form, all applications going forward must use the DevOps process without exception. Applications should be designed around microservices. Rather than focusing on designing a complete application, developers should be focused on designing microservices that provide functionality that can be utilized by many applications.

The DevOps Maturity Model

Not all applications are suited for the cloud. The DevOps maturity model helps to identify applications that are appropriate for the cloud. The DevOps maturity model is used to categorize applications based on objective criteria that are organized into five levels. These are:

Level One: Ad-Hoc Communication. There is no automation, no governance of the process, and no quality standards.

Level Two: Controlled Communication and Collaboration. Automation is ad-hoc without a formal automation process. There are no governance standards and quality management is ad-hoc with no formal quality management plan in place.

Level Three: Standard Communication Process. There is a standardized automation process in place and a standardized form of governance over the process. However, there are no quality standards in place.

Level Four: Communication Metrics Exist for Improvement. Automation metrics are in place to measure progress in developing and deploying the application against application goals. There are also metrics to measure the effectiveness of governance over the process, and quality metrics are in place to measure improvement performance.

Level Five: Constructive Communication Environment, Tools, and Processes. Optimization methods are in place to maximize throughput, govern the process, and provide continuous quality improvement.

Compliance

Depending on the business, organizations are governed by countless regulations. In the US, healthcare organizations must comply with the Health Insurance Portability and Accountability Act of 1996 (HIPAA) that requires the organization to protect health information. Public corporations must adhere to the Sarbanes–Oxley Act (SOX). Organizations that use information from European citizens must adhere to processes defined in the General Data Protection Regulation (GDPR). Failure to adhere to regulations exposes the organization to fines and possibly litigation.

The organization’s data is in the cloud. It is critical that the cloud provider has the necessary measures in place to ensure that regulatory requirements are met. The organization needs to perform a detailed walkthrough of processes available in the cloud to provide the degree of compliance required by regulators. In addition to protecting data, the cloud provider must have tools in place for internal auditors and regulatory auditors to use to audit the organization’s data to ensure regulatory compliance. The cloud provider and the organization must make sure data protection is compliant and both can prove it to regulators.

Cloud Security

The thought of placing the organization’s mission-critical information and applications in an unseen, remote location called the cloud is frightening. All the confidential and innermost data required to run the organization seems to be somewhere in space—obviously not space, but in remote servers owned and operated by the cloud provider.

The reality is that the cloud is often more secure than the organization’s own facilities that house data and applications. The cloud provider has the resources and motivation to employ the latest security measures and to ensure that those measures are updated (at times, hourly). Many organizations see security as a necessary evil that is secondary to their business. This attitude usually exposes the organization to potential security faults.

“Trust, but verify” is the foundation of using any vendor. Trust that the cloud provider has the best security defenses in place, but also verify this fact before a cloud provider is engaged. Executives of the organization remain liable if a security breach occurs, even if it occurs in the cloud. Here are some common threats:

Denial of service. Denial of service occurs when services are cut off or in some way limited, often when a hacker floods the cloud’s IP addresses with more requests than the cloud can process, resulting in decreased response time. The cloud provider must explain how it defends against such an attack.

Encryption break-in. Breaking into an encrypted file is difficult; however, older encryption algorithms can be defeated. It is important to ensure that the cloud provider uses the latest encryption algorithms for files at rest and in transit.

Physical theft. By now you realize that data and applications don’t reside in a cloud but on a server located in the cloud provider’s data center. Visiting the data center provides the opportunity to assess the physical security policies and practices of the vendor.

Ransomware. Ransomware is software that prevents access to applications and data (denial of service) by using encryption. Only the hacker has the ability to decipher it.

Data theft. Employees from the organization and from the cloud provider have access to the organization’s data. Assess what steps the cloud provider, and the organization itself, employ to prevent such theft.

Vulnerability exploitation. Operating systems, applications, and development tools are not perfect when it comes to security. Hackers are aware of this and exploit these vulnerabilities to gain access to information. The cloud provider, and the organization’s applications, must be using the latest products that have removed these vulnerabilities. The old reliable sales management system, for example, may have known vulnerabilities that haven’t been addressed. The cloud vendor may suggest that these be addressed or that the system be replaced with new technology.

Levels of Security

A cloud provider typically has data center facilities in one or more regions, possibly in a region of the United States or in countries outside of the United States. The organization can select the region for its applications and data. Furthermore, the organization can have different regions used for specific applications and databases.

The organization can add a level of security by encrypting data on the client side, where only the organization can decipher the data. This is in addition to the encryption provided by the cloud vendor in transit and at rest in the vendor’s facility. Even if data is intercepted, encryption makes it useless to the hacker.

Application-level security focuses on preventing unauthorized access to the application. The organization and the cloud provider should have logs that indicate when the application is accessed and the IDs and IP addresses that have access. Logs should also indicate all writing and reading of data with specific information to trace who had access or at least what computing device was used.

Another important security implementation is for the cloud provider to have application programming interface (API) logs. The cloud offers microservices that can be accessed from practically anywhere in the cloud. API logs record information about when the microservice was called and the application that called it. This enables the security staff to trace access back to the application if it was hacked.

The cloud provider should also keep data import and export logs to record any large movement of data. Ideally, the cloud has an alert system that calls attention to unusual transfers of data. The security staff can immediately monitor and investigate the activity and possibly halt the transfer. Similar alerts should occur when there have been a set number of failed attempts to access the application or data. Alerts should also be sounded when access is attempted from an unexpected IP address. Alerts trigger a real-time response to a potential hack.
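A sketch of the kind of threshold alert described above. Cloud providers expose this capability through their own logging and alerting services, so the function, threshold, and log format here are assumptions for illustration.

    # Illustrative threshold alert on unusually large data exports.
    EXPORT_ALERT_THRESHOLD_GB = 50

    def check_export(log_entry, alert):
        """Raise an alert when a single export exceeds the threshold."""
        size_gb = log_entry["bytes"] / 1e9
        if size_gb > EXPORT_ALERT_THRESHOLD_GB:
            alert(f"unusual export: {size_gb:.1f} GB by {log_entry['user']} "
                  f"from {log_entry['ip']}")

    # Example usage with a trivial alert channel.
    check_export(
        {"user": "svc-reporting", "ip": "198.51.100.7", "bytes": 120e9},
        alert=print,
    )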

Object-level security is another area to focus on. Objects are collections of data in a database. Security concerns exist at the database level and at the data level. Database-level security centers on access to the database, while data-level security looks at access to specific types of data within the database. In addition to encryption, access to data can be limited through database views. Based on authorization, the database management system can assemble virtual tables of data from tables in the database.

Platform-level security is a security process that prevents unauthorized access to computing devices such as computers, network services, application servers, and database servers. Without access, data and applications are secured. It is important to ensure that the cloud provider offers and implements all of these security levels to protect the organization’s applications and data.

Critical to successful security of the cloud is the organization’s ability to manage security access. As employees are hired, terminated, and transferred into new roles, the organization must modify security access to the organization’s computing resources. Some resources are internal and others are on the cloud. The cloud provider should offer a way for the organization to change security access settings for cloud resources quickly and in coordination with changes to internal security settings. Ideally, changes to the internal security settings should flow automatically to the cloud security settings.

An option to consider is acquiring security services from a third party other than the cloud provider. Third-party vendors offer security services across cloud providers. This is a valuable service to consider, since organizations tend to use multiple cloud providers. These vendors have the knowledge to leverage the assets of each cloud provider to the advantage of the organization.
