Chapter 9

Securing the Cloud

This chapter covers the following topics:

What Is Cloud and What Are the Cloud Service Models?

DevOps, Continuous Integration (CI), Continuous Delivery (CD), and DevSecOps

Describing the Customer vs. Provider Security Responsibility for the Different Cloud Service Models

Cisco Umbrella

Cisco Email Security in the Cloud

Cisco Cloudlock

Stealthwatch Cloud

AppDynamics Cloud Monitoring

Cisco Tetration

The following SCOR 350-701 exam objectives are covered in this chapter:

  • Domain 3.0 Securing the Cloud

    • 3.1 Identify security solutions for cloud environments

      • 3.1.a Public, private, hybrid, and community clouds

      • 3.1.b Cloud service models: SaaS, PaaS, IaaS (NIST 800-145)

    • 3.2 Compare the customer vs. provider security responsibility for the different cloud service models

      • 3.2.a Patch management in the cloud

      • 3.2.b Security assessment in the cloud

      • 3.2.c Cloud-delivered security solutions such as firewall, management, proxy, security intelligence, and CASB

    • 3.3 Describe the concept of DevSecOps (CI/CD pipeline, container orchestration, and security)

    • 3.4 Implement application and data security in cloud environments

    • 3.5 Identify security capabilities, deployment models, and policy management to secure the cloud

    • 3.6 Configure cloud logging and monitoring methodologies

    • 3.7 Describe application and workload security concepts

“Do I Know This Already?” Quiz

The “Do I Know This Already?” quiz allows you to assess whether you should read this entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 9-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions. You can find the answers in Appendix A, “Answers to the ‘Do I Know This Already?’ Quizzes and Q&A Sections.”

Table 9-1 “Do I Know This Already?” Section-to-Question Mapping

Foundation Topics Section

Questions

What Is Cloud and What Are the Cloud Service Models?

1

DevOps, Continuous Integration (CI), Continuous Delivery (CD), and DevSecOps

2–3

Describing the Customer vs. Provider Security Responsibility for the Different Cloud Service Models

4

Cisco Umbrella

5

Cisco Email Security in the Cloud

6

Cisco Cloudlock

7

Stealthwatch Cloud

8

AppDynamics Cloud Monitoring

9

Cisco Tetration

10

Caution

The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you incorrectly guess skews your self-assessment results and might provide you with a false sense of security.

1. Which of the following is a cloud computing model that provides everything except applications? Services provided by this model include all phases of the system development life cycle (SDLC) and can use application programming interfaces (APIs), website portals, or gateway software. These solutions tend to be proprietary, which can cause problems if the customer moves away from the provider’s platform.

  1. IaaS

  2. PaaS

  3. SaaS

  4. Hybrid clouds

2. Which of the following is a software and hardware development and project management methodology that has at least five to seven phases that follow in strict linear order, where each phase cannot start until the previous phase has been completed?

  1. Agile

  2. Waterfall

  3. DevOps

  4. CI/CD

3. Which of the following is a software development practice where programmers merge code changes in a central repository multiple times a day?

  1. Continuous Integration (CI)

  2. Agile Scrum

  3. Containers

  4. None of these answers is correct.

4. In which of the following cloud models is the end customer responsible for maintaining and patching applications and making sure that data is protected, but not the virtual network or operating system?

  1. PaaS

  2. SaaS

  3. IaaS

  4. IaaS and PaaS

5. Which technology is used by Cisco Umbrella to scale and to provide reliability of recursive DNS services?

  1. Umbrella Investigate

  2. Multicast

  3. BGP Route Reflectors

  4. Anycast

6. Which Cisco Email Security feature is used to detect spear phishing attacks by examining one or more parts of the SMTP message for manipulation, including the “Envelope-From,” “Reply To,” and “From” headers?

  1. Forged Email Detection (FED)

  2. Forged Email Protection (FEP)

  3. Sender Policy Framework (SPF)

  4. Domain-based Message Authentication, Reporting, and Conformance (DMARC)

7. Which of the following is a cloud access security broker (CASB) solution provided by Cisco?

  1. Tetration

  2. Stealthwatch Cloud

  3. Cloudlock

  4. Umbrella

8. The Cisco Stealthwatch Cloud Sensor appliance can be deployed in which two different modes?

  1. Processing network metadata from a SPAN or a network TAP

  2. Processing metadata out of NetFlow or IPFIX flow records

  3. Processing data from Tetration

  4. Processing data from Cloudlock

  5. Processing data from Umbrella

9. AppDynamics provides cloud monitoring and supports which of the following platforms?

  1. Kubernetes

  2. Azure

  3. AWS Lambda

  4. All of these answers are correct.

10. Which statement is not true about Cisco Tetration?

  1. Tetration uses software agents or can obtain telemetry information from Cisco’s network infrastructure devices.

  2. You can use the Application Dependency Mapping (ADM) functionality to provide insight into the kind of complex applications that run in a data center, but not in the cloud.

  3. ADM enables network admins to build tight network security policies based on various signals such as network flows, processes, and other side information like load balancer configs and route tags.

  4. Tetration’s Vulnerability Dashboard supports CVSS versions 2 and 3.

Foundation Topics

What Is Cloud and What Are the Cloud Service Models?

In Chapter 1, “Cybersecurity Fundamentals,” you learned that the National Institute of Standards and Technology (NIST) created Special Publication (SP) 800-145, “The NIST Definition of Cloud Computing,” to provide a standard set of definitions for the different aspects of cloud computing. The SP 800-145 document also compares the different cloud services and deployment strategies. In short, the advantages of using a cloud-based service include the use of distributed storage, scalability, resource pooling, access to applications and resources from any location, and automated management.


According to NIST, the essential characteristics of cloud computing include the following:

  • On-demand self-service

  • Broad network access

  • Resource pooling

  • Rapid elasticity

  • Measured service


Cloud deployment models include the following:

  • Public cloud: Open for public use

  • Private cloud: Used just by the client organization on-premises (on-prem) or at a dedicated area in a cloud provider

  • Community cloud: Shared between several organizations

  • Hybrid cloud: Composed of two or more clouds (including on-prem services)


Cloud computing can be broken into the following three basic models:

  • Infrastructure as a Service (IaaS): IaaS describes a cloud solution where you are renting infrastructure. You purchase virtual power to execute your software as needed. This is much like running a virtual server on your own equipment, except you are now running a virtual server on a virtual disk. This model is similar to a utility company model because you pay for what you use.

  • Platform as a Service (PaaS): PaaS provides everything except applications. Services provided by this model include all phases of the system development life cycle (SDLC) and can use application programming interfaces (APIs), website portals, or gateway software. These solutions tend to be proprietary, which can cause problems if the customer moves away from the provider’s platform.

  • Software as a Service (SaaS): SaaS is designed to provide a complete packaged solution. The software is rented out to the user. The service is usually provided through some type of front end or web portal. While the end user is free to use the service from anywhere, the company pays a per-use fee.

Note

NIST Special Publication 500-292, “NIST Cloud Computing Reference Architecture,” is another resource to learn more about cloud architecture.


DevOps, Continuous Integration (CI), Continuous Delivery (CD), and DevSecOps

DevOps (including the underlying technical, architectural, and cultural practices) represents a convergence of many technical, project management, and management movements. Before we define DevOps, let’s take a look at the history of development methodologies. Decades of lessons learned from software development, high-reliability organizations, manufacturing, high-trust management models, and other fields have evolved into the DevOps practices we know today.

The Waterfall Development Methodology

The waterfall model is a software and hardware development and project management methodology that has at least five to seven phases that follow in strict linear order. Each phase cannot start until the previous phase has been completed.

Figure 9-1 illustrates the typical phases of the waterfall development methodology.


Figure 9-1 The Typical Phases of the Waterfall Development Methodology

There are a few reasons why organizations use the waterfall methodology. One of the main reasons is that project requirements are agreed upon from the beginning; consequently, planning and scheduling are simple and clear. With a fully laid-out project schedule, an accurate estimate can be given for development cost, resources, and deadlines. Another reason is that measuring progress is easy as you move through the phases and hit the different milestones. Your end customer is also not perpetually adding new requirements to the project, thus delaying production.

There are several disadvantages to the waterfall methodology as well. One is that it can be difficult for customers to enumerate and communicate all of their needs at the beginning of the project. If your end customer is dissatisfied with the product in the verification phase, going back and redesigning the code can be very costly. In addition, a linear project plan is rigid and lacks flexibility for adapting to unexpected events.


The Agile Methodology

Agile is a software development and project management process in which a project is broken up into several stages, with constant collaboration with stakeholders and continuous improvement and iteration at every stage. The Agile methodology begins with end customers describing how the final product will be used and clearly articulating what problem it will solve. Once coding begins, the respective teams cycle through a process of planning, executing, and evaluating. This process may allow the final deliverable to change in order to better fit the customer’s needs. In an Agile environment, continuous collaboration is key: clear and ongoing communication among team members and project stakeholders allows fully informed decisions to be made.

The Agile methodology was formalized in 2001 by a group of 17 software practitioners, and it is documented in “The Manifesto for Agile Software Development” (https://agilemanifesto.org).

Figure 9-2 illustrates Agile’s four main values, as documented in “The Manifesto for Agile Software Development.”


Figure 9-2 The Agile Methodology’s Four Main Values

In Agile, the input to the development process is the creation of a business objective, concept, idea, or hypothesis. Then the work is added to a committed “backlog.” From there, software development teams that follow the standard Agile or iterative process will transform that idea into “user stories” and some sort of feature specification. This specification is then implemented in code. The code is then checked in to a version control repository (for example, GitLab or GitHub), where each change is integrated and tested with the rest of the software system.

In Agile, value is created only when services are running in production; consequently, you must ensure that you are not only delivering fast flow, but that your deployments can also be performed without causing chaos and disruptions, such as service outages, service impairments, or security or compliance failures.

Figure 9-3 illustrates the general steps of the Agile methodology.


Figure 9-3 The Agile Methodology’s General Steps

Many organizations have adopted an Agile-related framework called “Scrum.” Scrum helps organizations work together because it encourages teams to learn through experiences, self-organize while working on a solution, and reflect on their wins and losses to continuously improve. Scrum is primarily used by software development teams; however, its principles and lessons can be applied to all kinds of teamwork. Scrum describes a set of meetings, tools, and roles that work in concert to help teams structure and manage their work.

Tip

Scrum.org has a set of resources, certification, and training materials related to Scrum.

Figure 9-4 illustrates the high-level concepts of the Scrum framework. The Scrum framework uses the concept of “sprints” (a short, time-boxed period when a Scrum team works to complete a predefined amount of work). Sprints are one of the key concepts of the Scrum and Agile methodologies.


Figure 9-4 The Scrum Framework and Sprints

Agile is an implementation of the Lean management philosophy, which was created to eliminate waste of time and resources across all aspects of a business. The Lean philosophy was derived from the Toyota Production System of the 1980s.

Tip

The following video provides a good overview of the Agile methodology: https://www.youtube.com/watch?v=Z9QbYZh1YXY. The following GitHub repository includes a very detailed list of resources related to the Agile methodology: https://github.com/lorabv/awesome-agile.

Agile also uses the Kanban process. Kanban is a scheduling system for lean development and just-in-time (JIT) manufacturing originally developed by Taiichi Ohno from Toyota.

There is yet another methodology called Extreme Programming (XP). XP is a software development methodology designed to improve quality and to help teams adapt to the changing needs of the end customer. XP was originally developed by Kent Beck, who used it in the Chrysler Comprehensive Compensation System (C3) project to help manage the company’s payroll software. XP is similar to Agile in that its main goal is to provide iterative and frequent small releases throughout the development process. This enables both team members and customers to assess and review the development progress throughout the entire software development life cycle (SDLC).

Note

SDLC is also often used as an acronym for secure development life cycle. You will learn more about the secure development life cycle later in this chapter.

Figure 9-5 provides a good high-level overview of the Lean, Agile, Scrum, Kanban, and Extreme Programming concepts and associations.


Figure 9-5 Lean, Agile, Scrum, Kanban, and Extreme Programming Concepts


DevOps

DevOps is the outcome of many trusted principles—from software development, manufacturing, and leadership to the information technology value stream. DevOps relies on bodies of knowledge from Lean, Theory of Constraints, resilience engineering, learning organizations, safety culture, human factors, and many others. Today’s technology DevOps value stream includes the following areas:

  • Product management

  • Software (or hardware) development

  • Quality assurance (QA)

  • IT operations

  • Infosec and cybersecurity practices

Figure 9-6 illustrates the steps to embrace DevOps within an organization.


Figure 9-6 Embracing DevOps

There are “three general ways” to DevOps. The first “way” (illustrated in Figure 9-7) focuses on systems and flow. In this way (or method), you make work visible by reducing work “batch” sizes, reducing intervals of work, and preventing defects from being introduced by building quality and control into the process.


Figure 9-7 DevOps Systems and Flow

The second way is illustrated in Figure 9-8. This way includes a feedback loop to prevent problems from happening again (enabling faster detection and recovery by seeing problems as they occur and maximizing opportunities to learn and improve).


Figure 9-8 DevOps Feedback Loop

Figure 9-9 illustrates the third way (continuous experimentation and learning). In a true DevOps environment, you conduct dynamic, disciplined experimentation and take calculated risks. You also allocate time to fix issues and make systems better. The creation of shared code repositories helps tremendously in achieving this continuous experimentation and learning process.


Figure 9-9 DevOps Continuous Experimentation and Learning


CI/CD Pipelines

Continuous Integration (CI) is a software development practice in which programmers merge code changes into a central repository multiple times a day. Continuous Delivery (CD) builds on top of CI and provides a way to automate the entire software release process. When you adopt CI/CD methodologies, each change in code should trigger an automated build-and-test sequence, which should also provide feedback to the programmers who made the change.

CI/CD has been adopted by many organizations that provide cloud services (SaaS, PaaS, and so on). For instance, CD can include cloud infrastructure provisioning and deployment, which traditionally have been done manually and consist of multiple stages. The main goal is for the CI/CD process to be fully automated, with each run fully logged and visible to the entire team.

With CI/CD, most software releases go through the set of stages illustrated in Figure 9-10. A failure in any stage typically triggers a notification. For example, you can use Cisco WebEx Teams or Slack to let the responsible developers know about the cause of a given failure or to send notifications to the whole team after each successful deployment to production.


Figure 9-10 CI/CD Pipeline Stages

In Figure 9-10, the pipeline run is triggered by a source code repository (Git in this example). The code change typically sends a notification to a CI/CD tool, which runs the corresponding pipeline. Other notifications include automatically scheduled or user-initiated workflows, as well as results of other pipelines.
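To make these stages concrete, the following is a minimal, hypothetical GitLab CI configuration (.gitlab-ci.yml) sketching a source-to-deploy pipeline. The job names, scripts, and deploy script are assumptions for illustration, not from the chapter:

```yaml
# Hypothetical .gitlab-ci.yml sketch; job names and scripts are illustrative.
stages:
  - build
  - test
  - deploy

build-image:
  stage: build
  script:
    - docker build -t myapp:$CI_COMMIT_SHORT_SHA .   # package the app as a container

run-tests:
  stage: test
  script:
    - docker run --rm myapp:$CI_COMMIT_SHORT_SHA pytest   # automated "safety net" tests

deploy-production:
  stage: deploy
  script:
    - ./deploy.sh myapp:$CI_COMMIT_SHORT_SHA   # hypothetical deployment script
  only:
    - master   # deploy only approved changes from the master branch
```

A failure in any job halts the pipeline, and the CI/CD tool can notify the team (for example, via Cisco WebEx Teams or Slack).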

The Build stage includes the compilation of programs written in languages such as Java, C/C++, and Go. Programs written in interpreted languages such as Ruby, Python, and JavaScript do not require this step; however, they can still be packaged and deployed using Docker and other container technologies. Regardless of the language, cloud-native software is typically deployed with containers (in a microservice environment).

In the Test stage, automated tests are run to validate the code and the application behavior. The Test stage is an important stage, since it acts as a “safety net” to prevent easily reproducible bugs from being introduced. This concept can be applied to preventing security vulnerabilities, since at the end of the day, a security vulnerability is typically a software (or hardware) bug. The responsibility of writing test scripts can fall to a developer or a dedicated QA engineer. However, it is best done while new code is being written.
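As a simple sketch of what a Test-stage check might look like, the following Python example tests a hypothetical input-sanitization function (the function and test are assumptions for illustration, not from the chapter). If a change reintroduces a reproducible bug, including a security bug, the automated test fails and the pipeline stops before deployment:

```python
# Hypothetical function under test: strips characters that could enable
# injection-style bugs (illustrative only).
def sanitize_username(name: str) -> str:
    return "".join(ch for ch in name if ch.isalnum() or ch in "-_")


# A Test-stage check acting as a "safety net" for easily reproducible bugs.
def test_sanitize_username() -> None:
    assert sanitize_username("omar") == "omar"
    # Shell metacharacters and spaces are removed.
    assert sanitize_username("omar; rm -rf /") == "omarrm-rf"


if __name__ == "__main__":
    test_sanitize_username()
    print("all tests passed")
```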

Tip

Depending on the size and complexity of the project, the Test phase can last from seconds to hours. Many organizations with large-scale projects run tests in multiple stages, starting with tests (typically called “smoke tests”) that perform quick sanity checks from the user’s point of view. Large-scale tests are typically parallelized to reduce runtime. It’s very important that the test stage produce feedback to developers quickly, while the code is still fresh in their minds and they can maintain the state of flow.

Once you have built your code and passed all predefined tests, you are ready to deploy it (the Deploy stage). Traditionally, engineers have used multiple deploy environments (for example, a “beta” or “staging” environment used internally by the product team and a “production” environment).

Note

Organizations that have adopted the Agile methodology usually deploy work-in-progress manually to a staging environment for additional manual testing and review, and automatically deploy approved changes from the master branch to production.


The Serverless Buzzword

First, serverless does not mean that you do not need a “server” somewhere. Instead, it means that you use cloud platforms to host and/or develop your code. For example, you might have a “serverless” app that is deployed on a cloud provider like AWS, Azure, or Google Cloud Platform.

Serverless is a cloud computing execution model where the cloud provider (AWS, Azure, Google Cloud, and so on) dynamically manages the allocation and provisioning of servers. Serverless applications run in stateless containers that are ephemeral and event-triggered (fully managed by the cloud provider).

AWS Lambda is one of the most popular serverless platforms in the industry. Figure 9-11 shows an example of a “function” or application in AWS Lambda.


Figure 9-11 AWS Lambda Serverless Application Function Example

Note

In AWS Lambda, you run code without provisioning or managing servers, and you only pay for the compute time you consume. When you upload your code, Lambda takes care of everything required to run and scale your application (offering high availability and redundancy).
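As a sketch of what such a function might look like in Python (the handler name, event shape, and greeting are assumptions for illustration, not an excerpt from Figure 9-11):

```python
import json

# Hypothetical AWS Lambda handler: Lambda invokes this function on each
# trigger, passing the event payload and a runtime context object.
def handler(event, context):
    name = event.get("name", "world")
    # Return an API Gateway-style response with a JSON body.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Because the function is stateless, the same code can be exercised locally before upload, for example `handler({"name": "SCOR"}, None)`.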

As demonstrated in Figure 9-12, computing has evolved from traditional physical (bare-metal) servers to virtual machines (VMs), containers, and serverless architectures.


Figure 9-12 The Evolution of Computing


Container Orchestration

There have been multiple technologies and solutions to manage, deploy, and orchestrate containers in the industry. The following are the most popular:

  • Kubernetes: One of the most popular container orchestration and management frameworks, originally developed by Google, Kubernetes is a platform for creating, deploying, and managing distributed applications. You can download Kubernetes and access its documentation at https://kubernetes.io.

  • Nomad: A container management and orchestration platform by HashiCorp. You can download and obtain detailed information about Nomad at https://www.nomadproject.io.

  • Apache Mesos: A distributed systems kernel that provides native support for launching containers with Docker and AppC images. You can download Apache Mesos and access its documentation at https://mesos.apache.org.

  • Docker Swarm: A container cluster management and orchestration system integrated with the Docker Engine. You can access the Docker Swarm documentation at https://docs.docker.com/engine/swarm.


A Quick Introduction to Containers and Docker

Before you can even think of building a distributed system, you must first understand how the container images that contain your applications make up all the “underlying pieces” of such a distributed system. Applications are normally composed of a language runtime, libraries, and source code. For instance, your application may use third-party or open source shared libraries such as libc and OpenSSL. These shared libraries are typically shipped as shared components of the operating system installed on a system. The dependency on these libraries introduces difficulties when an application developed on your desktop, laptop, or any other development (dev) machine depends on a shared library that isn’t available when the program is deployed to the production system. Even when the dev and production systems share the exact same operating system version, bugs can occur when programmers forget to include dependent asset files inside the package that they deploy to production.

The good news is that you can package applications in a way that makes it easy to share them with others. This is an example where containers become very useful. Docker, one of the most popular container runtime engines, makes it easy to package an executable and push it to a remote registry where it can later be pulled by others.

Note

Container registries are available in all of the major public cloud providers (for example, AWS, Google Cloud Platform, and Microsoft Azure) as well as services to build images. You can also run your own registry using open source or commercial systems. These registries make it easy for developers to manage and deploy private images, while image-builder services provide easy integration with continuous delivery systems.

Container images bundle a program and its dependencies into a single artifact under a root file system. Containers are made up of a series of file system layers. Each layer adds, removes, or modifies files from the preceding layer in the file system. The overlay system is used both when packaging up the image and when the image is actually being used. During runtime, there are a variety of different concrete implementations of such file systems, including aufs, overlay, and overlay2.

Tip

The most popular container image format is the Docker image format, which has been standardized by the Open Container Initiative (OCI) to the OCI image format. Kubernetes supports both Docker and OCI images. Docker images also include additional metadata used by a container runtime to start a running application instance based on the contents of the container image.

Let’s take a look at an example of how container images work. Figure 9-13 shows three container images: A, B, and C. Container Image B is “forked” from Container Image A, and Python version 3 is added to it. Container Image C is then built upon Container Image B, and the programmer adds OpenSSL and nginx to develop a web server and enable TLS.


Figure 9-13 How Container Images Work

Abstractly, each container image layer builds upon the previous one. Each parent reference is a pointer. The example in Figure 9-13 includes a simple set of containers; in many environments, you will encounter a much larger directed acyclic graph.
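As a toy model of the parent-pointer relationship in Figure 9-13 (this sketch only illustrates the concept; it is not how a container runtime actually stores layers), each image can be resolved by walking its parent references back to the base:

```python
# Toy model: each image points to its parent (None marks the base image).
parents = {"A": None, "B": "A", "C": "B"}

def layer_chain(image: str) -> list[str]:
    """Walk parent pointers from an image back to its base, then
    return the chain in build order (base first)."""
    chain = []
    while image is not None:
        chain.append(image)
        image = parents[image]
    return list(reversed(chain))
```

Resolving Container Image C yields the build order A, B, C, mirroring how each layer in Figure 9-13 builds upon the previous one.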

Even though the SCOR exam does not cover Docker in detail, it is still good to see a few examples of Docker containers, images, and related commands. Figure 9-14 shows the output of the docker images command.


Figure 9-14 The docker images Command

Tip

The Docker images shown in Figure 9-14 are images of intentionally vulnerable applications that you can also use to practice your skills. These Docker images and containers are included in a VM created by Omar Santos called WebSploit (websploit.org). The VM is built on top of Kali Linux and includes several additional tools, along with the aforementioned Docker containers. This can be a good tool, not only to get familiarized with Docker, but also to learn and practice offensive and defensive security skills.

Figure 9-15 shows the output of the docker ps command used to see all the running Docker containers in a system.


Figure 9-15 The docker ps Command

As you learned earlier in this chapter, you can use a public, cloud provider, or private Docker image repository. Docker’s public image repository is called Docker Hub (https://hub.docker.com). You can find images by going to the Docker Hub website or by using the docker search command, as demonstrated in Figure 9-16.


Figure 9-16 The docker search Command

In Figure 9-16, the user searches for a container image that matches the “python” keyword.

Tip

You can practice and deploy your first container by using Katacoda (an interactive system that allows you to learn many different technologies, including Docker, Kubernetes, Git, Tensorflow, and many others). You can access Katacoda at https://www.katacoda.com. There are numerous interactive scenarios provided by Katacoda. For instance, you can use the “Deploying your first container” scenario to learn (hands-on) Docker: https://www.katacoda.com/courses/docker/deploying-first-container.

A Dockerfile can be used to automate the creation of a Docker container image. Example 9-1 shows an example of a Dockerfile. The Dockerfile shown in Example 9-1 is the official Python Docker image from Docker Hub.

Example 9-1 A Dockerfile Example

FROM alpine:3.10
# ensure local python is preferred over distribution python
ENV PATH /usr/local/bin:$PATH
# http://bugs.python.org/issue19846
# > At the moment, setting "LANG=C" on a Linux system *fundamentally
# breaks Python 3*, and that's not OK.
ENV LANG C.UTF-8
# install ca-certificates so that HTTPS works consistently
# other runtime dependencies for Python are installed later
RUN apk add --no-cache ca-certificates
ENV GPG_KEY E3FF2839C048B25C084DEBE9B26995E310250568
ENV PYTHON_VERSION 3.8.0
RUN set -ex \
        && apk add --no-cache --virtual .fetch-deps \
                gnupg \
                tar \
                xz \
        \
        && wget -O python.tar.xz "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz" \
        && wget -O python.tar.xz.asc "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz.asc" \
        && export GNUPGHOME="$(mktemp -d)" \
        && gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys
<output omitted for brevity>
CMD ["python3"]

The full code in Example 9-1 can be obtained from https://github.com/The-Art-of-Hacking/h4cker/blob/master/SCOR/Dockerfile_example.

Let’s create a simple Docker container based on the Dockerfile in Example 9-1. First, put the Dockerfile in a new directory/folder and then execute the command shown in Example 9-2 to create a Docker image called mypython.

Example 9-2 Building the Docker Image

┌─[omar@us-dev1]─[~/mypython]
└─── $ docker build -t mypython .
Sending build context to Docker daemon  5.632kB
Step 1/13 : FROM alpine:3.10
 ---> 965ea09ff2eb
Step 2/13 : ENV PATH /usr/local/bin:$PATH
 ---> Using cache
 ---> 3801354cb4a4
Step 3/13 : ENV LANG C.UTF-8
 ---> Using cache
 ---> f5ee976b0ef2
Step 4/13 : RUN apk add --no-cache ca-certificates
<output omitted for brevity>                                                        
Step 12/13 : RUN set -ex;
wget -O get-pip.py "$PYTHON_GET_PIP_URL";
echo "$PYTHON_GET_PIP_SHA256 *get-pip.py" | sha256sum -c -;
python get-pip.py  --disable-pip-version-check  --no-cache-dir
"pip==$PYTHON_PIP_VERSION"; pip --version; find /usr/local -depth
(( -type d -a ( -name test -o -name tests -o -name idle_test ) ) -o ( -type f
-a ( -name '*.pyc' -o -name '*.pyo' ) )) -exec rm -rf '{}' +; rm -f get-pip.py
 ---> Using cache
 ---> 646992bb197a
Step 13/13 : CMD ["python3"]
 ---> Using cache
 ---> 4790dbb6b084
Successfully built 4790dbb6b084
Successfully tagged mypython:latest

Example 9-3 shows the newly created image using the docker images command.

Example 9-3 The Newly Created Docker Image

┌─[omar@us-dev1]─[~/mypython]
└─── $ docker images
REPOSITORY       TAG           IMAGE ID         CREATED             SIZE
mypython         latest        4790dbb6b084     2 minutes ago       110MB

You can now execute the docker run mypython command to run a new container.

Tip

You can access the Docker documentation at https://docs.docker.com. You can also complete a free and quick hands-on tutorial to learn more about Docker containers at https://www.katacoda.com/courses/container-runtimes/what-is-a-container-image.


Kubernetes

In larger environments, you will not deploy and orchestrate Docker containers in a manual way. You want to automate as much as possible. This is where Kubernetes comes into play. Kubernetes (often referred to as k8s) automates the distribution, scheduling, and orchestration of application containers across a cluster. Figure 9-17 illustrates the Kubernetes cluster concept.

images

Figure 9-17 The Kubernetes Cluster

The following are the Kubernetes components:

  • Master: Coordinates all the activities in your cluster (scheduling, scaling, and deploying applications).

  • Node: A VM or physical server that acts as a worker machine in a Kubernetes cluster.

  • Pod: A group of one or more containers with shared storage and networking, including a specification for how to run the containers. Each pod has an IP address, and it is expected to be able to reach all other pods within the environment.
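To make the pod concept concrete, here is a minimal pod manifest that groups a single container under one specification. This is a hypothetical sketch; the pod name, image, and command are placeholders (the image name echoes the one built earlier in this chapter):

```yaml
# Hypothetical minimal pod: one container with default networking/storage.
apiVersion: v1
kind: Pod
metadata:
  name: mypython-pod
spec:
  containers:
  - name: mypython
    image: mypython:latest    # placeholder image tag
    command: ["python3"]
```

You would submit such a manifest to the cluster with kubectl apply -f pod.yaml.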

Table 9-2 lists the differences between the legacy “rules” of standalone Docker containers and Kubernetes deployments.

Table 9-2 Differences Between the Legacy “Rules” of Standalone Docker Containers and Kubernetes Deployments

Standalone Docker Legacy “Rules”

Kubernetes “Rules”

No native container-to-container networking unless on the same VM.

All containers can communicate with each other without NAT.

Proxies or port forwarding needed.

All nodes can communicate with all containers (and vice versa) without NAT.

Built-in segmentation.

The IP that a container sees itself as is the same IP that others will see it as.

Figure 9-18 shows a high-level overview of a Kubernetes deployment.

images

Figure 9-18 A Kubernetes Deployment

One of the easiest ways to learn Kubernetes is to use minikube (a lightweight Kubernetes implementation). Example 9-4 demonstrates how to start minikube with the minikube start command.

Example 9-4 Starting Kubernetes (minikube)

$ minikube start
* minikube v1.3.0 on Ubuntu 18.04
* Running on localhost (CPUs=2, Memory=2461MB, Disk=47990MB) ...
* OS release is Ubuntu 18.04.2 LTS
* Preparing Kubernetes v1.15.0 on Docker 18.09.5 ...
  - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
* Pulling images ...
* Launching Kubernetes ...
* Waiting for: apiserver proxy etcd scheduler controller dns
* Done! kubectl is now configured to use "minikube"
$

Once Kubernetes is deployed, you can check the version by running the kubectl version command (kubectl is the official Kubernetes client), as demonstrated in Example 9-5.

Example 9-5 Verifying the Version of Kubernetes

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:
"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:
"2019-08-05T09:23:26Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:
"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:
"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

$

You can view the nodes in a cluster by running the kubectl get nodes command, as demonstrated in Example 9-6. In Example 9-6, only one node is deployed (minikube).

Example 9-6 Displaying the Kubernetes Nodes

$ kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
minikube   Ready    master   2m12s   v1.15.0

Example 9-7 shows a deployment of a new app (omar-k8s-example).

Example 9-7 Deploying a New App

$ kubectl create deployment omar-k8s-example --image=omar-k8s-example-image:v1
deployment.apps/omar-k8s-example created
$ kubectl get deployments
NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
omar-k8s-example      1         1         1            1           10s

Tip

In a multi-node Kubernetes cluster, you use kubeadm to manage the nodes. Kubeadm is a Kubernetes component that provides a way to get a minimum viable Kubernetes cluster up and running. It is also worth noting that kubeadm only performs bootstrapping; it does not provision machines.

The Kubernetes official website includes different free hands-on tutorials that will help get you started with Kubernetes at https://kubernetes.io/docs/tutorials. The Katacoda website also has several hands-on Kubernetes tutorials at https://www.katacoda.com/courses/kubernetes.

The Google Cloud Platform (GCP) offers a hosted Kubernetes-as-a-Service called Google Kubernetes Engine (GKE). Azure and AWS offer similar solutions. Figure 9-19 shows a Kubernetes cluster in GCP called omar-k8s-cluster-1.

images

Figure 9-19 The omar-k8s-cluster-1 Kubernetes Cluster in GCP

Figure 9-20 shows the details of the Kubernetes cluster.

images

Figure 9-20 The omar-k8s-cluster-1 Kubernetes Cluster Details

Figure 9-21 shows the nodes within the Kubernetes cluster.

images

Figure 9-21 The Kubernetes Cluster Nodes

Example 9-8 shows the output of the kubectl get nodes command in Google’s Cloud Shell (the Google Cloud Platform interactive shell).

Example 9-8 The Output of the kubectl get nodes Command in Google’s Cloud Shell

santosomar@cloudshell:~ (omar-cyber-range)$ kubectl get nodes
NAME                                 STATUS   ROLES    AGE     VERSION
gke-omark8s-cluster1-4a3f623e-5cv0   Ready    <none>   5m28s   v1.14.8-gke.17
gke-omark8s-cluster1-4a3f623e-cq6c   Ready    <none>   5m27s   v1.14.8-gke.17
gke-omark8s-cluster1-4a3f623e-khm4   Ready    <none>   5m28s   v1.14.8-gke.17
santosomar@cloudshell:~ (omar-cyber-range)$

The GCP project in Example 9-8 is called omar-cyber-range.

Kubernetes supports a proxy that is responsible for routing network traffic to load-balanced services in a cluster. When deployed, the proxy must be present on every node in the Kubernetes cluster.

Kubernetes also runs a DNS server that provides naming and discovery for the services that are defined in the cluster. The Kubernetes DNS server also runs as a replicated service on the cluster. In other words, depending on how large your cluster is, you might see one or more DNS servers running at all times. Example 9-9 shows kube-dns running in the previously created Kubernetes cluster in the omar-cyber-range GCP project.

Example 9-9 The Kubernetes DNS Server

santosomar@cloudshell:~ (omar-cyber-range)$ kubectl get deployments
--namespace=kube-system kube-dns
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
kube-dns   2/2     2            2           36m

In Kubernetes 1.12, Kubernetes transitioned from the kube-dns DNS server to CoreDNS. If you are running a newer Kubernetes cluster, you may see coredns instead of kube-dns.

Kubernetes also has a GUI. The Kubernetes GUI is run as a single replica, but it is still managed by a Kubernetes deployment for reliability and upgrades. You can see this UI server using the command shown in Example 9-10.

Example 9-10 Deploying the Kubernetes System GUI (Dashboard)

santosomar@cloudshell:~ (omar-cyber-range)$ kubectl get deployments
--namespace=kube-system kubernetes-dashboard
NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kubernetes-dashboard   1         1         1            1           1d

Microservices and Micro-Segmentation

The ability to enforce network segmentation in container and VM environments is what people call micro-segmentation. Micro-segmentation operates at the level of individual VMs or containers, regardless of VLAN or subnet boundaries. Micro-segmentation solutions need to be “application aware,” meaning that the segmentation process starts and ends with the application itself.

Most micro-segmentation environments apply a zero-trust model. This model dictates that users cannot talk to applications and that applications cannot talk to other applications unless a defined set of policies permits them to do so.
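In a Kubernetes environment, one common way to express this zero-trust posture is a default-deny NetworkPolicy. The sketch below is a minimal example (not a complete policy set); it blocks all ingress traffic to every pod in a namespace until explicit allow policies are added:

```yaml
# Default-deny: selects all pods (empty podSelector) and declares an
# Ingress policy with no allow rules, so all inbound traffic is dropped.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```

Traffic is then permitted only by additional NetworkPolicy objects that explicitly select the allowed sources and destinations.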

In Chapter 3, “Software-Defined Networking Security and Network Programmability,” you learned about Contiv. As a refresher, Contiv is an open source project that allows you to deploy micro-segmentation policy-based services in container environments. It offers a higher level of networking abstraction for microservices by providing a policy framework. Contiv has built-in service discovery and service routing functions to allow you to scale out services.

Note

You can download Contiv and access its documentation at https://contiv.io.

With Contiv, you can assign an IP address to each container. This feature eliminates the need for host-based port NAT. Contiv can operate in different network environments, such as traditional Layer 2 and Layer 3 networks, as well as overlay networks. Contiv can be deployed with all major container orchestration platforms (or schedulers), such as Kubernetes and Docker Swarm. For instance, Kubernetes can provide compute resources to containers, and then Contiv provides networking capabilities.

Tip

The Contiv website includes several tutorials and step-by-step integration documentation at https://contiv.io/documents/tutorials/index.html.


DevSecOps

DevSecOps is a concept that has emerged in recent years to describe moving security activities to the start of the development life cycle and building security practices into the CI/CD pipeline. The business environment, culture, regulatory compliance, and external market drivers all shape how a secure development life cycle (also referred to as SDLC) and a DevSecOps program are implemented in an organization.

Tip

The DevSecOps project (https://devsecops.github.io) includes a set of tools and tutorials about DevSecOps and underlying practices. The Open DevSecOps GitHub organization (https://github.com/opendevsecops) includes a series of open source tools and resources, too.

The OWASP Proactive Controls (https://www.owasp.org/index.php/OWASP_Proactive_Controls) is a collection of secure development practices and guidelines that any software developer should follow to build secure applications. These practices will help you to shift security earlier into design, coding, and testing. Here are the OWASP Top 10 Proactive Controls:

  1. Define Security Requirements

  2. Leverage Security Frameworks and Libraries

  3. Secure Database Access

  4. Encode and Escape Data

  5. Validate All Inputs

  6. Implement Digital Identity

  7. Enforce Access Controls

  8. Protect Data Everywhere

  9. Implement Security Logging and Monitoring

  10. Handle All Errors and Exceptions

Additional best practices include integrating security functions and tools into automated testing in CI/CD pipelines, parameterizing queries to prevent SQL injection, and encoding data to protect against cross-site scripting (XSS) and other injection attacks. You should safely encode data before passing it to an interpreter. Validate all inputs and treat all data as untrusted! Validate parameters and data elements using whitelisting and other techniques. Implement proper identity and authentication controls, as well as access controls. You should always have a “secure by default” and “shift security to the left” mentality. This means making it easy for developers to write secure code and difficult for them to make dangerous mistakes, wiring secure defaults into their templates and frameworks, and building in the proactive controls you learned earlier.
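As a minimal Python sketch of two of these practices, parameterized queries and output encoding, using only the standard library (the table name and values are hypothetical):

```python
import sqlite3
import html

# Parameterized query: the driver binds user input as data, never as SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("omar",))

user_input = "omar' OR '1'='1"  # a classic SQL injection attempt
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the injection string is treated as a literal value

# Output encoding: escape untrusted data before placing it in an HTML context.
payload = "<script>alert(1)</script>"
print(html.escape(payload))  # &lt;script&gt;alert(1)&lt;/script&gt;
```

Had the query been built by string concatenation instead, the same input would have matched every row in the table.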

Before you can begin adding security checks and controls, you need to understand the workflows and tools that the engineering teams are using in a CI/CD pipeline. You should ask the questions listed in Figure 9-22.

images

Figure 9-22 CI/CD and DevSecOps

You should also use software assurance tools and methods, including fuzzing, static application security testing (SAST), and dynamic application security testing (DAST). For example, you can use tools like Findsecbugs and SonarQube. Findsecbugs is a tool designed to find bugs in applications created in the Java programming language. It can be used with continuous integration systems such as Jenkins and SonarQube. Findsecbugs provides support for popular Java frameworks, including Spring MVC, Apache Struts, Tapestry, and others. You can download and obtain more information about Findsecbugs at https://find-sec-bugs.github.io.

SonarQube is a tool that can be used to find vulnerabilities in code, and it provides support for continuous integration and DevOps environments. You can obtain additional information about SonarQube at https://www.sonarqube.org.

Fuzz testing, or fuzzing, is a technique that can be used to find software errors (or bugs) and security vulnerabilities in applications, operating systems, infrastructure devices, IoT devices, and other computing devices. Fuzzing involves sending random data to the unit being tested in order to find input validation issues, program failures, buffer overflows, and other flaws. Tools that are used to perform fuzzing are referred to as “fuzzers.” Examples of popular fuzzers are Peach, Mutiny, American Fuzzy Lop, and Synopsys Defensics.
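The core idea behind mutational fuzzing can be sketched in a few lines of Python. This is a toy illustration, not one of the tools above; parse_record stands in for hypothetical code under test:

```python
import random

def parse_record(data: bytes) -> str:
    # Hypothetical code under test: expects ASCII "key=value" records.
    key, _, value = data.decode("ascii").partition("=")
    return key.strip()

def mutate(seed: bytes) -> bytes:
    # Flip one random byte of a known-good input.
    buf = bytearray(seed)
    buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

random.seed(1)  # deterministic for the example
crashes = 0
for _ in range(200):
    try:
        parse_record(mutate(b"user=omar"))
    except UnicodeDecodeError:
        crashes += 1  # a real fuzzer would save this input for triage
print(f"{crashes} of 200 mutated inputs triggered an exception")
```

Real fuzzers add coverage feedback, smarter mutations, and crash triage, but the loop above captures the mutate-execute-observe cycle they all share.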

The Mutiny Fuzzing Framework is an open source fuzzer created by Cisco. It works by replaying packet capture files (pcaps) through a mutational fuzzer. You can download and obtain more information about Mutiny Fuzzing Framework at https://github.com/Cisco-Talos/mutiny-fuzzer. The Mutiny Fuzzing Framework uses Radamsa to perform mutations. Radamsa is a tool that can be used to generate test cases for fuzzers. You can download and obtain additional information about Radamsa at https://gitlab.com/akihe/radamsa.

American Fuzzy Lop (AFL) is a tool that provides features of compile-time instrumentation and genetic algorithms to automatically improve the functional coverage of fuzzing test cases. You can obtain additional information about AFL at http://lcamtuf.coredump.cx/afl/.

Peach is one of the most popular fuzzers in the industry. There is a free (open source) version, the Peach Fuzzer Community Edition, and a commercial version. You can download the Peach Fuzzer Community Edition and obtain additional information about the commercial version at https://www.peach.tech.

Tip

I have additional examples of fuzzers and fuzzing at my GitHub repository at https://h4cker.org/github.


Describing the Customer vs. Provider Security Responsibility for the Different Cloud Service Models

Cloud service providers (CSPs) such as Azure, AWS, and GCP have no choice but to take their security and compliance responsibilities very seriously. For instance, Amazon created the Shared Responsibility Model to describe in detail the responsibilities of AWS customers and the responsibilities of Amazon. The Amazon Shared Responsibility Model can be accessed at https://aws.amazon.com/compliance/shared-responsibility-model.

The “shared responsibility” depends on the type of cloud model (SaaS, PaaS, or IaaS). Figure 9-23 shows the responsibilities of a CSP and their customers in a SaaS environment.

images

Figure 9-23 SaaS Shared Security Responsibility

Figure 9-24 shows the responsibilities of a CSP and their customers in a PaaS environment.

images

Figure 9-24 PaaS Shared Security Responsibility

Figure 9-25 shows the responsibilities of a CSP and their customers in an IaaS environment.

images

Figure 9-25 IaaS Shared Security Responsibility


Patch Management in the Cloud

Patch management in the cloud is also a shared responsibility in IaaS and PaaS environments, but not in a SaaS environment. For example, in a SaaS environment, the CSP is the one responsible for patching all software and hardware vulnerabilities. However, in an IaaS environment, the CSP is responsible only for patching the hypervisors, physical compute and storage servers, and the physical network. You are responsible for patching the applications, operating systems (VMs), and any virtual networks you deploy.


Security Assessment in the Cloud and Questions to Ask Your Cloud Service Provider

When performing penetration testing in the cloud, you must first understand what you can do and what you cannot do. Most CSPs have detailed guidelines on how to perform security assessments and penetration testing in the cloud. Regardless, there are many potential threats when organizations move to a cloud model. For example, although your data is in the cloud, it must reside in a physical location somewhere. Your cloud provider should agree in writing to provide the level of security required for your customers. As discussed in Chapter 1, the following are questions to ask a cloud provider before signing a contract for its services:

  • Who has access? Access control is a key concern because insider attacks are a huge risk. Anyone who has been approved to access the cloud is a potential hacker, so you want to know who has access and how they were screened. Even without malice, an employee can leave, and you may then discover that you don’t have the password or that the cloud service was canceled because a bill went unpaid.

  • What are the provider’s regulatory requirements? Organizations operating in the United States, Canada, or the European Union have many regulatory requirements that they must abide by (for example, ISO/IEC 27002, EU-U.S. Privacy Shield Framework, ITIL, FedRAMP, and COBIT). You must ensure that your cloud provider can meet these requirements and is willing to undergo certification, accreditation, and review.

  • Do you have the right to audit? This particular item is no small matter in that the cloud provider should agree in writing to the terms of the audit. With cloud computing, maintaining compliance could become more difficult to achieve and even harder to demonstrate to auditors and assessors. Of the many regulations touching upon information technology, few were written with cloud computing in mind. Auditors and assessors may not be familiar with cloud computing generally or with a given cloud service in particular. Division of compliance responsibilities between cloud provider and cloud customer must be determined before any contracts are signed or service is started.

  • What type of training does the provider offer its employees? This is a rather important item to consider because people will always be the weakest link in security. Knowing how your provider trains its employees is an important item to review.

  • What type of data classification system does the provider use? Questions you should be concerned with here include what data classification standard is being used and whether the provider even uses data classification.

  • How is your data separated from other users’ data? Is the data on a shared server or a dedicated system? A dedicated server means that your information is the only thing on the server. With a shared server, the amount of disk space, processing power, bandwidth, and so on, is limited because others are sharing this device. If it is shared, the data could potentially become comingled in some way.

  • Is encryption being used? Encryption should be discussed. Is it being used while the data is at rest and in transit? You will also want to know what type of encryption is being used. For example, there are big technical differences between DES and AES; however, for both of these algorithms, the basic questions are the same: Who maintains control of the encryption keys? Is the data encrypted at rest in the cloud? Is the data encrypted in transit, or is it encrypted at rest and in transit?

  • What are the service level agreement (SLA) terms? The SLA serves as a contracted level of guaranteed service between the cloud provider and the customer that specifies what level of services will be provided.

  • What is the long-term viability of the provider? How long has the cloud provider been in business, and what is its track record? If it goes out of business, what happens to your data? Will your data be returned, and, if so, in what format?

  • Will they assume liability in the case of a breach? If a security incident occurs, what support will you receive from the cloud provider? While many providers promote their services as being “unhackable,” cloud-based services are an attractive target to hackers.

  • What is the disaster recovery/business continuity plan (DR/BCP)? Although you may not know the physical location of your services, it is physically located somewhere. All physical locations face threats such as fire, storms, natural disasters, and loss of power. In case of any of these events, how will the cloud provider respond, and what guarantee of continued services is it promising?

Tip

Even when you end a contract, you must ask what happens to the information after your contract with the cloud service provider ends. Insufficient due diligence is one of the biggest issues when moving to the cloud. Security professionals must verify that issues such as encryption, compliance, incident response, and so forth are all worked out before a contract is signed.

Cisco has several security solutions that can help protect the cloud and/or that are delivered from the cloud. The following sections cover these solutions.


Cisco Umbrella

Cisco Umbrella is a cloud-delivered solution, evolved from Cisco’s OpenDNS acquisition, that blocks connections to malicious destinations by using DNS.

Tip

Cisco Umbrella can be used on any device, including IoT devices, on any network, at any time. This is because its implementation is very straightforward and can be accomplished by forwarding DNS queries to the Umbrella cloud on existing DNS servers, running the Umbrella virtual appliances, and/or using the Microsoft Windows or macOS roaming client or the Cisco Security Connector for iOS.

OpenDNS is a suite of consumer products aimed at making your Internet faster, safer, and more reliable. The following website provides information on how to use OpenDNS to protect your system and your home: https://www.opendns.com/home-internet-security/.

Cisco Umbrella has the ability to see attacks before the application connection occurs. This limits the load on a firewall or any other network security infrastructure device. In addition, it helps to reduce alerts and improve security operations and incident response.

Umbrella looks at the patterns of DNS requests from devices and uses them to detect the following:

  • Compromised systems

  • Command-and-control callbacks

  • Malware and phishing attempts

  • Algorithm-generated domains

  • Domain co-occurrences

  • Newly registered domains

  • Malicious traffic and payloads that never reach the target


The Cisco Umbrella Architecture

The Cisco Umbrella global infrastructure includes dozens of data centers around the world that resolve more than 100 billion DNS requests from millions of users every day. Umbrella data centers are peered with more than 500 of the top ISPs and content delivery networks to exchange BGP routes and ensure that requests are routed efficiently, without adding any latency over regional DNS providers.

Cisco Umbrella uses Anycast IP routing in order to provide reliability of the recursive DNS service. All data centers announce the same IP address, and all requests are transparently sent to the fastest and lowest-latency data center available.

Tip

You can use Cisco Umbrella (OpenDNS) by just pointing your DNS configuration to the Anycast IP addresses 208.67.222.222 and 208.67.220.220. The website https://use.opendns.com provides additional instructions on how to configure your system to use OpenDNS/Umbrella.
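On a Linux host, for example, this can be a two-line change to /etc/resolv.conf (a sketch; many distributions manage this file through other tooling, such as systemd-resolved):

```
# /etc/resolv.conf -- point the host at the Umbrella/OpenDNS anycast resolvers
nameserver 208.67.222.222
nameserver 208.67.220.220
```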

Its scale and speed give Umbrella a massive amount of data and, perhaps more importantly, a very diverse data set that is not just from one geography or one protocol. This diversity enables Umbrella to offer unprecedented insight into staged and launched attacks. The data and threat analytics engines learn where threats are coming from, who is launching them, where they are going, and the width of the net of the attack—even before the first victim is hit. Umbrella uses authoritative DNS logs to find the following:

  • Newly staged infrastructures

  • Malicious domains, IP addresses, and ASNs

  • DNS hijacking

  • Fast flux domains

  • Related domains

Fast flux is a DNS technique used by botnets to hide phishing and malware delivery sites behind an ever-changing network of compromised hosts acting as proxies. Umbrella is able to find these types of threats by using modeling inside the data analytics. Machine learning and advanced algorithms are used heavily to find and automatically block malicious domains.

The following are a few examples of available models:

  • Co-occurrence model: This model identifies domains queried right before or after a given domain. This model helps uncover domains linked to the same attack, even if they’re hosted on separate networks.

  • Traffic spike model: This model recognizes when spikes in traffic to a domain match patterns seen with other attacks. For example, if the traffic for one domain matches the request patterns seen with exploit kits, you might want to block the domain before the full attack launches.

  • Predictive IP space monitoring model: This model starts with domains identified by the spike rank model and scores the steps attackers take to set up infrastructure (for example, hosting provider, name server, and IP address) to predict whether the domain is malicious. This identifies other destinations that can be proactively blocked before an attack launches.
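Umbrella’s models are proprietary, but the intuition behind one common class of DGA-detection signals can be sketched with a toy character-entropy heuristic in Python (an illustration only, not Umbrella’s implementation; the domains are made up):

```python
import math
from collections import Counter

def char_entropy(label: str) -> float:
    # Shannon entropy of the character distribution, in bits per character.
    counts = Counter(label)
    total = len(label)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Algorithm-generated labels tend to look more random (higher entropy)
# than human-chosen ones; production classifiers use many more features.
for domain in ("cisco", "xjw9phqk2lrtb4ze"):
    print(domain, round(char_entropy(domain), 2))
```

A single feature like this produces many false positives on its own; real systems combine it with lexical, registration, and traffic-pattern features, as the models above suggest.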


Secure Internet Gateway

When Cisco Umbrella servers receive a DNS request, they first identify which end customer the request came from and which policy to apply. Next, Cisco Umbrella determines whether the request is safe or whitelisted, malicious or blacklisted, or “risky.” Safe requests are allowed to be routed as usual, and malicious requests are routed to a block page. Risky requests can be routed to the cloud-based proxy for deeper inspection.

This concept of a cloud-based proxy is the basis for the secure Internet gateway (SIG). Before looking at the functionality of the proxy, it is helpful to understand what traffic is typically sent to the proxy. Most phishing, malware, ransomware, and other threats are hosted at domains that are classified as malicious. Yet some domains host both malicious and safe content; these are the domains that are classified as risky. These sites (such as facebook.com, reddit.com, pastebin.com, and so on) allow users to upload and share content, making them difficult to police.

Traditional web proxies or gateways examine all Internet requests, which adds latency and complexity. The Cisco Umbrella secure Internet gateway proxy intercepts and inspects only requests for risky domains. When the secure Internet gateway identifies a risky domain and begins to proxy that traffic, it uses the URL inspection engine to first classify the URL. The Cisco Umbrella secure Internet gateway uses Cisco Talos threat intelligence, the Cisco web reputation system, and third-party feeds to determine if a URL is malicious.

If the disposition of a web resource is still unknown after the URL inspection, if a file is present, the secure Internet gateway can also look at the file’s reputation. The file is inspected by both antivirus (AV) engines and Cisco Advanced Malware Protection (AMP) to block malicious files based on known signatures before they are downloaded.

Figure 9-26 shows the Cisco Umbrella Overview dashboard, which breaks down the different network requests, including the total number of DNS requests, proxy requests, total blocks, and security blocks.

images

Figure 9-26 The Cisco Umbrella Overview Dashboard

Cisco Umbrella provides different dashboards and detailed reports. Figure 9-27 shows the Security Overview report dashboard.

images

Figure 9-27 The Cisco Umbrella Security Overview Report

The report shown in Figure 9-27 can be used to obtain a high-level overview of all the blocked requests and the Umbrella deployment activity for your organization.

Tip

You can schedule Umbrella reports to be emailed to a specific recipient at regular intervals. The report will be displayed as a table showing an HTML version of the report and an attached .csv file with the complete data set. The email also includes a link to the “live version” of the same report.


Cisco Umbrella Investigate

Cisco Umbrella Investigate provides organizations access to global intelligence that can be used to enrich security data and events or help with incident response. Investigate provides the most complete view of an attacker’s infrastructure and enables security teams to discover malicious domains, IP addresses, and file hashes, and even predict emergent threats. Investigate provides access to this intelligence via a web console or an application programming interface (API). With the integration of the Cisco AMP Threat Grid data in Investigate, intelligence about an attacker’s infrastructure can be complemented by AMP Threat Grid’s intelligence about malware files, providing a complete view of the infrastructure used in an attack. Figure 9-28 shows a query of binarycousins.com using Investigate.

images

Figure 9-28 The Cisco Umbrella Investigate

Note

You can use Investigate, if your Umbrella license allows, by going to https://investigate.umbrella.com.

Cisco Umbrella Investigate provides a single, correlated source of intelligence and adds the security context needed to help organizations uncover and predict attacks. Investigate provides the following features:

  • Passive DNS Database: Provides historical DNS data.

  • WHOIS Record Data: Allows you to see domain ownership and uncover malicious domains registered with the same contact information.

  • Malware File Analysis: Provides behavioral indicators and network connections of malware samples with data from Cisco AMP Threat Grid.

  • Autonomous System Number (ASN) Attribution: Provides IP-to-ASN mappings.

  • IP Geolocation: Allows you to see in which country an IP address is located.

  • Domain and IP Reputation Scores: Allows you to leverage Investigate’s risk scoring across a number of domain attributes to assess suspicious domains.

  • Domain Co-Occurrences: Returns a list of domain names that were looked up around the same time as the domain being checked. The score next to the domain name in a co-occurrence is a measurement of requests from client IPs for these related domains. The co-occurrences are for the previous seven days and are shown whether the co-occurrence is suspicious or not.

  • Anomaly Detection: Allows you to detect fast flux domains and domains created by domain generation algorithms. This score is generated based on the likeliness of the domain name being generated by an algorithm rather than a human. This score ranges from –100 (suspicious) to 0 (benign).

  • DNS Request Patterns and Geo Distribution of Those Requests: Allows you to see suspicious spikes in global DNS requests to a specific domain.


Cisco Email Security in the Cloud

Chapter 10, “Content Security,” provides details about the Cisco Email Security Appliance (ESA) and the Cisco Web Security Appliance (WSA). The Cisco ESA is an on-premises email security solution. However, there is also a cloud-based email security solution provided by Cisco. This allows you to provide protection against threats like ransomware, business email compromise (BEC), phishing, spear phishing, whaling, and many other email-driven attacks.

Tip

The Cisco ESA, Cisco WSA, and the cloud-based email security solution use AsyncOS as the main operating system.

Cisco cloud email security supports several techniques to create the multiple layers of security needed to defend against the aforementioned attack types. These techniques include the following:

  • Geolocation-based filtering: To protect against sophisticated spear phishing by quickly controlling email content based on the location of the sender.

  • The Cisco Context Adaptive Scanning Engine (CASE): CASE leverages hundreds of thousands of adaptive attributes that are tuned automatically based on real-time analysis of cyber threats. CASE combines adaptive rules and the real-time outbreak rules published by Cisco Talos to evaluate every message and assign a unique threat level. Based on the threat level, CASE recommends a period of time to quarantine the message to prevent an outbreak (as well as rescan intervals to reevaluate the message based on updated outbreak rules from Talos). The higher the threat level, the more often CASE rescans the message while it is quarantined.

  • Automated threat data drawn from Cisco Talos: Threat intelligence from Cisco’s security research organization (Talos) provides a deeper understanding of underlying cybersecurity threats.

  • Advanced Malware Protection (AMP): To provide global visibility and continuous analytics across all components of the AMP architecture for endpoints and mobile devices. The Cisco AMP integration with Cisco Email Security also provides persistent protection against URL-based threats via real-time analysis of potentially malicious links.

images

Forged Email Detection

Cisco Email Security also provides a feature called Forged Email Detection (FED). FED is used to detect spear phishing attacks by examining one or more parts of the SMTP message for manipulation, including the “Envelope-From,” “Reply To,” and “From” headers.
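The kind of header inspection FED performs can be sketched with Python's standard email parser. The check below is a simplified illustration, not Cisco's implementation: the real feature also compares the From display name against a content dictionary of high-value identities (such as executives) and scores similarity. The sample message and addresses are invented for the example.

```python
from email import message_from_string
from email.utils import parseaddr

def check_forgery(raw_message: str, envelope_from: str) -> list:
    """Flag simple header mismatches of the kind FED inspects
    (illustrative logic only, not Cisco's implementation)."""
    msg = message_from_string(raw_message)
    findings = []
    # Compare the domain in the From header with the SMTP Envelope-From domain
    from_domain = parseaddr(msg.get("From", ""))[1].split("@")[-1].lower()
    env_domain = envelope_from.split("@")[-1].lower()
    if from_domain and from_domain != env_domain:
        findings.append("From domain does not match Envelope-From domain")
    # A Reply-To pointing at a different domain is a classic BEC indicator
    reply_to = parseaddr(msg.get("Reply-To", ""))[1]
    if reply_to and reply_to.split("@")[-1].lower() != from_domain:
        findings.append("Reply-To domain does not match From domain")
    return findings

raw = ("From: CEO <ceo@example.com>\r\n"
       "Reply-To: attacker@evil.test\r\n"
       "Subject: Urgent wire transfer\r\n\r\nPlease pay now.")
print(check_forgery(raw, "bulkmailer@evil.test"))
```

Both mismatches fire for this message, which is exactly the combination seen in many spear phishing campaigns.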

images

Sender Policy Framework

Cisco Email Security also has the Sender Policy Framework (SPF) for sender authentication and DomainKeys Identified Mail (DKIM) and Domain-based Message Authentication, Reporting, and Conformance (DMARC) for domain authentication.
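All three mechanisms are published as DNS TXT records that receiving mail systems look up. The fragment below shows illustrative records for a hypothetical example.com domain; the selector name, include host, and policy values are assumptions for illustration only.

```
; SPF: which hosts may send mail as example.com (RFC 7208)
example.com.               IN TXT "v=spf1 mx include:_spf.example.com -all"

; DKIM: public key for signatures made with selector "s1" (RFC 6376)
s1._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBgkq...AB"

; DMARC: policy tying SPF/DKIM results to the From domain (RFC 7489)
_dmarc.example.com.        IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
```

SPF answers "which servers may send as this domain," DKIM cryptographically signs the message, and DMARC tells receivers what to do when neither check aligns with the From header.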

images

Email Encryption

Cisco Email Security supports advanced encryption key services to manage email recipient registration, authentication, and per-message/per-recipient encryption keys. The email security gateway also gives compliance and security officers the control of and visibility into how sensitive data is delivered. The cloud-based Cisco Email Security solution also provides a customizable reporting dashboard to access information about encrypted email traffic, including the delivery method used and the top senders and receivers.

The Cisco Email Security cloud service also supports S/MIME. Secure/Multipurpose Internet Mail Extensions (S/MIME) is a standards-based method for sending and receiving secure, verified email messages. S/MIME uses a public/private key pair to encrypt or sign messages.

Note

The Cisco Email Security administrator guide details the features and deployment of the cloud email security solution at https://www.cisco.com/c/en/us/td/docs/security/ces/user_guide/sma_user_guide_11-4/b_SMA_Admin_Guide_11_4.html.

images

Cisco Email Security for Office 365

Cisco Email Security can provide protection for Office 365 deployments. Figure 9-29 shows how a traditional Office 365 email exchange is done without Cisco Email Security.

images

Figure 9-29 Office 365 Without Cisco Email Security

Figure 9-30 shows how an Office 365 email exchange is done with Cisco Email Security. You can see that the MX records are changed to the Cisco ESAs in the cisco-ces.com domain in this example. The Cisco Email Security solution relays all email to and from Office 365 for inspection. The Cisco Email Security cloud service provides a “public” email listener to protect all incoming and outgoing emails.

images

Figure 9-30 Office 365 with Cisco Email Security

Figure 9-31 shows the integration of the cloud-based Cisco Email Security solution and the AMP and Threat Grid clouds.

images

Figure 9-31 Office 365 with Cisco Email Security Advanced Protection

images

Cisco Cloudlock

Cloudlock is a company that Cisco acquired in 2016. Now called Cisco Cloudlock, the solution is a cloud access security broker (CASB). A CASB provides visibility and compliance checks, protects data against misuse and exfiltration, and provides threat protection against malware such as ransomware.

Cisco Cloudlock integrates with cloud services such as the following:

  • Box

  • Dropbox

  • G Suite

  • Office 365

  • Okta

  • OneLogin

  • Salesforce

  • ServiceNow

  • Slack

  • Webex Teams

Figure 9-32 shows the Cisco Cloudlock main dashboard. There you can see the different anomalies, top users with admin activity, and an overview of overall security risk based on location.

images

Figure 9-32 Cisco Cloudlock Main Dashboard

Figure 9-33 shows the Cisco Cloudlock Incidents dashboard. An incident in Cisco Cloudlock is a record of an instance of a document, object, event, or app triggering a Cloudlock policy. In the case of incidents related to data loss prevention (DLP), the incident includes information about the asset in violation, the platform account that owns the asset, how widely the asset is shared, and the history of the asset. Depending on the generating policy, the incident may contain a link to the asset itself and its content. Any access to content by security administrators is recorded in the Cloudlock Audit Log.

images

Figure 9-33 The Cisco Cloudlock Incidents Dashboard

A Cloudlock incident is typically created when an object or asset (document, object, field, post, and so on) in a platform monitored by Cisco Cloudlock meets three criteria:

  1. The object/asset has been created or changed.

  2. The object/asset is monitored by an active policy.

  3. The object/asset is in violation of the criteria (content and/or context criteria) in the policy.
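The three criteria above can be expressed as a simple predicate. This is illustrative logic only, not Cloudlock's code; the field names, the example policy, and the sample SSN value are all assumptions for the sketch.

```python
def should_create_incident(asset: dict, policy: dict) -> bool:
    """Sketch of the three conditions under which a Cloudlock-style
    incident is raised (illustrative, not Cloudlock's implementation)."""
    # 1. The object/asset has been created or changed
    changed = asset.get("created") or asset.get("modified")
    # 2. The object/asset is monitored by an active policy
    monitored = policy.get("active") and asset.get("platform") in policy.get("platforms", [])
    # 3. The object/asset violates the policy's content/context criteria
    violates = policy["matches"](asset)
    return bool(changed and monitored and violates)

ssn_policy = {
    "active": True,
    "platforms": ["box", "gsuite"],
    "matches": lambda a: "123-45-6789" in a.get("content", ""),
}
doc = {"modified": True, "platform": "box", "content": "SSN: 123-45-6789"}
print(should_create_incident(doc, ssn_policy))  # → True
```

Note that all three conditions must hold at once: an unchanged asset, an inactive policy, or a non-matching asset each suppresses the incident.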

Figure 9-34 shows an example of a Cisco Cloudlock incident.

images

Figure 9-34 A Cisco Cloudlock Incident

Figure 9-35 shows the Cisco Cloudlock Policies dashboard.

images

Figure 9-35 The Cisco Cloudlock Policies Dashboard

Policies are the automated rules you create in Cisco Cloudlock to customize information protection to match your organization’s needs. Policies generate incidents that enable you to monitor and correct security issues. Cisco Cloudlock policies operate independently of one another; they do not interact directly, and there is no explicit order of execution. The response actions associated with one policy do not interact with the response actions of another policy. You can add a new policy by selecting a predefined policy or by creating your own. Figure 9-36 shows how to add a predefined policy. In this example, a policy is added to alert on and block transactions that may include United States Social Security numbers.

images

Figure 9-36 Adding a Predefined Cisco Cloudlock Policy

You can also design and build your own policies in Cisco Cloudlock by starting with one of these categories:

  • Context-Only: Ignores the content contained in assets, and monitors only how widely assets are shared, their file types, and other metadata.

  • Custom Regex: Monitors content matching a regular expression you create.

  • Event Analysis: Monitors the platform events you select. Events are specific to each monitored platform. You can select individual raw events to monitor and/or events in normalized categories established by Cloudlock (for example, login events).

  • Salesforce Report Export Activity: Applies only to the Salesforce platform. You can use it to monitor when, where, and by whom Salesforce reports are exported from the platform.
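As an example of the Custom Regex category, the pattern below matches plausibly formatted US Social Security numbers while rejecting impossible area, group, and serial values. It is an illustrative pattern, not Cloudlock's predefined SSN regex, and the sample SSN is invented.

```python
import re

# Illustrative SSN pattern (not Cloudlock's exact predefined regex):
# rejects area 000/666/9xx, group 00, and serial 0000 via lookaheads.
SSN_PATTERN = re.compile(r"\b(?!000|666|9\d{2})\d{3}-(?!00)\d{2}-(?!0000)\d{4}\b")

def find_ssns(text: str):
    """Return every well-formed SSN found in the given text."""
    return SSN_PATTERN.findall(text)

print(find_ssns("Employee SSN 219-09-9999 on file; order #123-45 is unrelated."))
```

A policy built on a pattern like this trades some false negatives (SSNs written without hyphens) for far fewer false positives than a naive `\d{3}-\d{2}-\d{4}` match.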

Figure 9-37 shows the Cisco Cloudlock App Discovery dashboard.

images

Figure 9-37 The Cisco Cloudlock App Discovery Dashboard

In order to use the Cisco Cloudlock App Discovery feature to review and investigate usage of cloud apps on your network, at least one log source must be connected to Cloudlock. A log source is a network device logging network activity, such as a Cisco Web Security Appliance (WSA) or Cisco Umbrella. When at least one log source is integrated with App Discovery, data will become available in the App Discovery dashboard and the Discovered Apps list page.

Tip

Cisco Cloudlock provides a Composite Risk Score (CRS) in order to assess the relative risk of cloud-connected apps and services according to the following factors:

  • Business Risk, which incorporates the following:

    • The type of app usage—indirect use (such as a content delivery network), personal use (such as a game), or corporate use (such as a productivity tool).

    • The web reputation of the app, based on Cisco Talos research.

    • Financial viability risk to the provider of the app or service, based on Dun & Bradstreet’s Dynamic Risk Score.

    • Data storage risk—an assessment of the nature of the data stored by the app or service. This ranges from no storage (lowest risk) to unstructured data (highest risk). Unstructured data consists of individual files such as emails, photos, documents, and the like.

  • Usage Risk, which incorporates the following:

    • Traffic volume—the higher the volume of data flowing to and from an app, the higher the potential risk.

    • User count, as measured by number of unique IP addresses within your network that are used to access the app or service. The number of IP addresses is positively correlated with risk.

  • Vendor Compliance, which includes the security controls put in place by the vendor of the app or service as well as any security-related compliance certifications earned by the vendor.

images

Stealthwatch Cloud

In Chapter 5, “Network Visibility and Segmentation,” you learned about the Cisco Stealthwatch solution. The Cisco Stealthwatch solution uses NetFlow telemetry and contextual information from the Cisco network infrastructure. This solution allows network administrators and cybersecurity professionals to analyze network telemetry in a timely manner to defend against advanced cyber threats, including the following:

  • Network reconnaissance

  • Malware proliferation across the network for the purpose of stealing sensitive data or creating back doors to the network

  • Communications between the attacker (or command-and-control servers) and the compromised internal hosts

  • Data exfiltration

Cisco Stealthwatch aggregates and normalizes considerable amounts of NetFlow data and applies security analytics to detect malicious and suspicious activity. You can also monitor on-premises networks in your organization using Cisco Stealthwatch Cloud. To do so, you need to deploy at least one Cisco Stealthwatch Cloud Sensor appliance (virtual or physical). The sensor can be deployed in two different modes (not mutually exclusive):

  • By processing network metadata from a SPAN or a network TAP

  • By processing metadata out of NetFlow or IPFIX flow records

Cisco Stealthwatch Cloud can also be integrated with Meraki and Cisco Umbrella. The following document details the integration with Meraki: https://www.cisco.com/c/dam/en/us/td/docs/security/stealthwatch/cloud/portal/SWC_Meraki_DV_1_1.pdf.

The following document outlines how to integrate Stealthwatch Cloud with the Cisco Umbrella Investigate REST API in order to provide additional information in the Stealthwatch Cloud environment for external entity domain information: https://www.cisco.com/c/dam/en/us/td/docs/security/stealthwatch/cloud/portal/SWC_Umbrella_DV_1_0.pdf.

Tip

Cisco Stealthwatch Cloud integrates with Kubernetes (in GCP/GKE, other public clouds, and on-premises Kubernetes deployments). The solution deploys as a pod on a Kubernetes node and shims into the node-level network communication abstraction layer. This provides visibility, baselining, and anomaly detection for container-to-container and pod-to-pod communications.

images

AppDynamics Cloud Monitoring

AppDynamics was another company acquired by Cisco. AppDynamics (or AppD for short) provides end-to-end visibility of applications and can provide insights about application performance. AppD is able to automatically discover the flow of all traffic requests in your environment by creating a dynamic topology map of all your applications.

AppD also provides cloud monitoring and supports the following platforms:

  • AWS Monitoring

  • Microsoft Azure

  • Pivotal Cloud Foundry Monitoring

  • Cloud Foundry Foundation

  • Rackspace Monitoring

  • Kubernetes Monitoring

  • OpenShift Monitoring

  • HP Cloud Monitoring

  • Citrix Monitoring

  • OpenStack Monitoring

  • IBM Monitoring

  • Docker Monitoring

  • AWS Lambda Monitoring

AppDynamics can also be integrated with the Workload Optimization Manager, which is a server application running on a VM that you install on your network. You then assign Virtual Management services running on your network to be Workload Optimization Manager targets. Workload Optimization Manager discovers the devices each target manages, and then performs analysis, anticipates risks to performance or efficiency, and recommends actions you can take to avoid problems before they occur.

Figure 9-38 shows the Cloud Executive Dashboard of the Workload Optimization Manager.

images

Figure 9-38 The Cloud Executive Dashboard of the Workload Optimization Manager

Figure 9-39 shows the Workload Optimization Manager Cloud dashboard.

images

Figure 9-39 The Workload Optimization Manager Cloud Dashboard

The Workload Optimization Manager is a Cisco agentless technology that detects relationships and dependencies between applications and the infrastructure layers. It provides a global topological mapping of your environment (local and remote, across private and public clouds) and the interdependent relationships within it, mapping each layer of the full infrastructure stack to application demand. This enables real-time actions that ensure workloads get the resources they need when they need them; continuous placement, resizing, and capacity decisions can be automated, driving continuous health in the environment.

Figure 9-40 shows the cloud applications used in the organization’s cloud deployment. In this example, all applications are hosted in AWS.

images

Figure 9-40 The List of Cloud Applications

Figure 9-40 shows the list of cloud applications within your environment. The integration with the Cisco Workload Optimization Manager helps you to monitor and analyze application performance across your data centers and into public clouds (AWS in this example). AppDynamics and the Cisco Workload Optimization Manager quickly model what-if scenarios based on the real-time environment to accurately forecast capacity needs. In addition, you can track compute, storage, and database consumption (CPU, memory, latency, and Database Transaction Unit [DTU]) across cloud regions and availability zones.

Note

The Cisco Workload Optimization Manager and AppDynamics support AWS, Google Cloud Platform, and Microsoft Azure.

images

Cisco Tetration

Cisco Tetration is a solution that utilizes rich traffic flow telemetry to address critical data center operational use cases. It uses both hardware and software agents as telemetry sources and performs advanced analytics on the collected data. To access the information, Cisco Tetration provides a scalable point-and-click web UI to search information using visual queries and to visualize statistics using a variety of charts and tables. In addition, all administrative functions and cluster monitoring can be done through the same web UI. Cisco Tetration supports both on-premises and public cloud workloads.

Tetration Agents

Tetration uses software agents and can also obtain telemetry from Cisco network infrastructure devices. The Tetration software agent is a piece of software running within a host operating system (such as Linux or Windows). Its core functionality is to monitor and collect network flow information. It also collects other host information, such as network interfaces and the active processes running in the system. The information collected by the agent is exported to a set of collectors running within the Tetration cluster for further analytical processing. In addition, software agents have the capability to set firewall rules on the hosts where they are installed.

In order for Tetration to import user annotations from external orchestrators, it needs to establish outgoing connections to the orchestrator API servers (vCenter, Kubernetes, F5 BIG-IP, and so on). Sometimes it is not possible to allow direct incoming connections to the orchestrators from the Tetration cluster. The Secure Connector solves this issue by establishing an outgoing connection from the same network as the orchestrator to the Tetration cluster. This connection is used as a reverse tunnel to pass requests from the cluster back to the orchestrator API server.

Application Dependency Mapping

Application Dependency Mapping (ADM) is a functionality in Cisco Tetration that helps provide insight into the kind of complex applications that run in a data center or in the cloud.

ADM enables network admins to build tight network security policies based on various signals such as network flows, processes, and other side information like load balancer configs and route tags. Not only can these policies be exported in various formats for consumption by different enforcement engines, but Tetration can also verify policies against ongoing traffic in near real time.

Tetration Forensics Feature

The Forensics feature set enables monitoring and alerting for possible security incidents by capturing real-time forensic events and applying user-defined rules. The Forensics feature enables the following:

  • Defining rules to specify forensic events of interest

  • Defining trigger actions for matching forensic events

  • Searching for specific forensic events

  • Visualizing event-generating processes and their full lineages

Tip

Tetration computes a Forensics Score for each workload. A workload’s Forensics Score is derived from the forensic events observed on that workload, based on the profiles enabled for its scope. A score of 100 means no forensic events were observed via the configured rules in enabled profiles, and a score of 0 means a forensic event was detected that requires immediate action. The Forensics Score for a scope is the average of the workload scores within that scope. The Forensics Score for a given hour is the minimum of all scores within that hour.
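The aggregation rules in this tip are simple enough to express directly. The sketch below assumes per-workload Forensics Scores (0-100) have already been computed:

```python
def scope_score(workload_scores):
    """Scope score = average of the workload Forensics Scores (0-100)."""
    return sum(workload_scores) / len(workload_scores)

def hourly_score(scores_within_hour):
    """Hourly score = minimum of all scores observed in that hour."""
    return min(scores_within_hour)

print(scope_score([100, 80, 60]))   # → 80.0
print(hourly_score([100, 95, 40]))  # → 40
```

Using the minimum for the hourly roll-up means a single serious event (a low score) cannot be averaged away by an otherwise quiet hour.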

Tetration Security Dashboard

The Tetration Security Dashboard, shown in Figure 9-41, presents actionable security scores by bringing together multiple signals available in Tetration. It helps you understand your current security posture and improve it. A Security Score is a number between 0 and 100 that indicates the security posture in a given category. A score of 100 is the best, and a score of 0 is the worst.

images

Figure 9-41 The Tetration Security Dashboard

The Security Score computation takes into account vulnerabilities in installed software packages, consistency of process hashes, open ports on different interfaces, forensic and network anomaly events, and compliance/non-compliance to policies.

The Vulnerability Dashboard, shown in Figure 9-42, enables administrators to focus their efforts on the critical vulnerabilities and workloads that need the most attention. Administrators can select the relevant scope at the top of the page, as well as the Common Vulnerability Scoring System (CVSS) score. The dashboard highlights the distribution of vulnerabilities in the chosen scope and displays vulnerabilities by different attributes, such as the complexity of exploitation, whether the vulnerabilities can be exploited over the network, and whether the attacker needs local access to the workload. Furthermore, there are statistics to quickly filter out vulnerabilities that are remotely exploitable and have the lowest exploitation complexity.

images

Figure 9-42 The Tetration Vulnerability Dashboard

Tetration provides a REST API for interacting with all features in a programmatic way. Tetration also has the concept of connectors. Connectors are integrations that Tetration supports for a variety of use cases, including flow ingestion, inventory enrichment, and alert notifications.

Exam Preparation Tasks

As mentioned in the section “How to Use This Book” in the Introduction, you have several choices for exam preparation: the exercises here, Chapter 12, “Final Preparation,” and the exam simulation questions in the Pearson Test Prep Software Online.

Review All Key Topics

Review the most important topics in this chapter, noted with the Key Topic icon in the outer margin of the page. Table 9-3 lists these key topics and the page numbers on which each is found.

images

Table 9-3 Key Topics for Chapter 9

Key Topic Element | Description | Page Number
List | Identifying the essential characteristics of cloud computing | 551
List | Understanding the different cloud deployment models | 552
List | Identifying the different cloud service models | 552
Paragraph | Understanding what DevOps is | 552
Section | The Agile Methodology | 553
Section | DevOps | 556
Section | CI/CD Pipelines | 558
Section | The Serverless Buzzword | 559
Section | Container Orchestration | 559
Section | A Quick Introduction to Containers and Docker | 561
Section | Kubernetes | 565
Section | Microservices and Micro-Segmentation | 570
Section | DevSecOps | 571
Section | Describing the Customer vs. Provider Security Responsibility for the Different Cloud Service Models | 573
Section | Patch Management in the Cloud | 575
Section | Security Assessment in the Cloud and Questions to Ask Your Cloud Service Provider | 575
Section | Cisco Umbrella | 577
Section | The Cisco Umbrella Architecture | 577
Section | Secure Internet Gateway | 578
Section | Cisco Umbrella Investigate | 580
Section | Cisco Email Security in the Cloud | 582
Section | Forged Email Detection | 583
Section | Sender Policy Framework | 583
Section | Email Encryption | 583
Section | Cisco Email Security for Office 365 | 583
Section | Cisco Cloudlock | 584
Section | Stealthwatch Cloud | 590
Section | AppDynamics Cloud Monitoring | 590
Section | Cisco Tetration | 593

Define Key Terms

Define the following key terms from this chapter and check your answers in the glossary:

cloud access security broker (CASB)

Continuous Integration (CI)

Continuous Delivery (CD)

DevOps

DevSecOps

Kubernetes (k8s)

Nomad

Apache Mesos

Docker Swarm

Review Questions

1. What is Extreme Programming (XP)?

  1. A software development methodology designed to improve quality and for teams to adapt to the changing needs of the end customer

  2. A DevSecOps concept to provide better SAST and DAST solutions in a DevOps environment

  3. A software development methodology designed to provide cloud providers with the ability to scale and deploy more applications per workload

  4. None of these answers is correct.

2. Which of the following is a framework that helps organizations work together because it encourages teams to learn through experiences, self-organize while working on a solution, and reflect on their wins and losses to continuously improve?

  1. DevSecOps

  2. Scrum

  3. Waterfall

  4. None of these answers is correct.

3. Which of the following is the CI/CD pipeline stage that includes the compilation of programs written in languages such as Java, C/C++, and Go?

  1. Develop

  2. Build

  3. Deploy

  4. Package and Compile

4. Which of the following is a Kubernetes component that is a group of one or more containers with shared storage and networking, including a specification for how to run the containers?

  1. Pod

  2. k8s node

  3. kubectl

  4. kubeadm

5. Which of the following is a technique that can be used to find software errors (or bugs) and security vulnerabilities in applications, operating systems, infrastructure devices, IoT devices, and other computing devices? This technique involves sending random data to the unit being tested in order to find input validation issues, program failures, buffer overflows, and other flaws.

  1. Scanning

  2. DAST

  3. Fuzzing

  4. SAST

6. Which of the following is a Cisco Umbrella component that provides organizations access to global intelligence that can be used to enrich security data and events or help with incident response? It also provides the most complete view of an attacker’s infrastructure and enables security teams to discover malicious domains, IP addresses, and file hashes and even predict emergent threats.

  1. Investigate

  2. Internet Security Gateway

  3. Cloudlock

  4. CASB

7. Cisco Cloud Email Security supports which of the following techniques to create the multiple layers of security needed to defend against email-driven attacks?

  1. Geolocation-based filtering

  2. The Cisco Context Adaptive Scanning Engine (CASE)

  3. Advanced Malware Protection (AMP)

  4. All of these answers are correct.

8. Which of the following statements are true about the Cisco Email Security solution?

  1. The Sender Policy Framework (SPF) is used for sender authentication.

  2. DomainKeys Identified Mail (DKIM) is used for domain authentication.

  3. Domain-based Message Authentication, Reporting, and Conformance (DMARC) is used for domain authentication.

  4. All of these answers are correct.

9. You can design and build your own policies in Cisco Cloudlock by starting with which of the following categories?

  1. Custom Regex

  2. Event Analysis

  3. Salesforce Report Export Activity

  4. All of these answers are correct.

10. Cisco Cloudlock provides a ___________ in order to assess the relative risk of cloud-connected apps and services according to business risk, usage risk, and vendor compliance.

  1. Composite Risk Score (CRS)

  2. Composite Risk Rating (CRR)

  3. Common Vulnerability Scoring System (CVSS)

  4. None of these answers is correct.
