This chapter covers the following topics:
What Is Cloud and What Are the Cloud Service Models?
DevOps, Continuous Integration (CI), Continuous Delivery (CD), and DevSecOps
Describing the Customer vs. Provider Security Responsibility for the Different Cloud Service Models
Cisco Email Security in the Cloud
The following SCOR 350-701 exam objectives are covered in this chapter:
Domain 3.0 Securing the Cloud
3.1 Identify security solutions for cloud environments
3.1.a Public, private, hybrid, and community clouds
3.1.b Cloud service models: SaaS, PaaS, IaaS (NIST 800-145)
3.2 Compare the customer vs. provider security responsibility for the different cloud service models
3.2.a Patch management in the cloud
3.2.b Security assessment in the cloud
3.2.c Cloud-delivered security solutions such as firewall, management, proxy, security intelligence, and CASB
3.3 Describe the concept of DevSecOps (CI/CD pipeline, container orchestration, and security)
3.4 Implement application and data security in cloud environments
3.5 Identify security capabilities, deployment models, and policy management to secure the cloud
3.6 Configure cloud logging and monitoring methodologies
3.7 Describe application and workload security concepts
The “Do I Know This Already?” quiz allows you to assess whether you should read this entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 9-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions. You can find the answers in Appendix A, “Answers to the ‘Do I Know This Already?’ Quizzes and Q&A Sections.”
Table 9-1 “Do I Know This Already?” Section-to-Question Mapping
Foundation Topics Section | Questions
What Is Cloud and What Are the Cloud Service Models? | 1
DevOps, Continuous Integration (CI), Continuous Delivery (CD), and DevSecOps | 2–3
Describing the Customer vs. Provider Security Responsibility for the Different Cloud Service Models | 4
Cisco Umbrella | 5
Cisco Email Security in the Cloud | 6
Cisco Cloudlock | 7
Stealthwatch Cloud | 8
AppDynamics Cloud Monitoring | 9
Cisco Tetration | 10
1. Which of the following is a cloud computing model that provides everything except applications? Services provided by this model include all phases of the system development life cycle (SDLC) and can use application programming interfaces (APIs), website portals, or gateway software. These solutions tend to be proprietary, which can cause problems if the customer moves away from the provider’s platform.
IaaS
PaaS
SaaS
Hybrid clouds
2. Which of the following is a software and hardware development and project management methodology that has at least five to seven phases that follow in strict linear order, where each phase cannot start until the previous phase has been completed?
Agile
Waterfall
DevOps
CI/CD
3. Which of the following is a software development practice where programmers merge code changes in a central repository multiple times a day?
Continuous Integration (CI)
Agile Scrum
Containers
None of these answers is correct.
4. In which of the following cloud models is the end customer responsible for maintaining and patching applications and making sure that data is protected, but not the virtual network or operating system?
PaaS
SaaS
IaaS
IaaS and PaaS
5. Which technology is used by Cisco Umbrella to scale and to provide reliability of recursive DNS services?
Umbrella Investigate
Multicast
BGP Route Reflectors
Anycast
6. Which Cisco Email Security feature is used to detect spear phishing attacks by examining one or more parts of the SMTP message for manipulation, including the “Envelope-From,” “Reply To,” and “From” headers?
Forged Email Detection (FED)
Forged Email Protection (FEP)
Sender Policy Framework (SPF)
Domain-based Message Authentication, Reporting, and Conformance (DMARC)
7. Which of the following is a cloud access security broker (CASB) solution provided by Cisco?
Tetration
Stealthwatch Cloud
Cloudlock
Umbrella
8. The Cisco Stealthwatch Cloud Sensor appliance can be deployed in which two different modes?
Processing network metadata from a SPAN or a network TAP
Processing metadata out of NetFlow or IPFIX flow records
Processing data from Tetration
Processing data from Cloudlock
Processing data from Umbrella
9. AppDynamics provides cloud monitoring and supports which of the following platforms?
Kubernetes
Azure
AWS Lambda
All of these answers are correct.
10. Which statement is not true about Cisco Tetration?
Tetration uses software agents or can obtain telemetry information from Cisco’s network infrastructure devices.
You can use the Application Dependency Mapping (ADM) functionality to provide insight into the kind of complex applications that run in a data center, but not in the cloud.
ADM enables network admins to build tight network security policies based on various signals such as network flows, processes, and other side information like load balancer configs and route tags.
Tetration’s Vulnerability Dashboard supports CVSS versions 2 and 3.
Foundation Topics
In Chapter 1, “Cybersecurity Fundamentals,” you learned that the National Institute of Standards and Technology (NIST) created Special Publication (SP) 800-145, “The NIST Definition of Cloud Computing,” to provide a standard set of definitions for the different aspects of cloud computing. The SP 800-145 document also compares the different cloud services and deployment strategies. In short, the advantages of using a cloud-based service include the use of distributed storage, scalability, resource pooling, access to applications and resources from any location, and automated management.
According to NIST, the essential characteristics of cloud computing include the following:
On-demand self-service
Broad network access
Resource pooling
Rapid elasticity
Measured service
Cloud deployment models include the following:
Public cloud: Open for public use
Private cloud: Used just by the client organization on-premises (on-prem) or at a dedicated area in a cloud provider
Community cloud: Shared between several organizations
Hybrid cloud: Composed of two or more clouds (including on-prem services)
Cloud computing can be broken into the following three basic models:
Infrastructure as a Service (IaaS): IaaS describes a cloud solution where you are renting infrastructure. You purchase virtual power to execute your software as needed. This is much like running a virtual server on your own equipment, except you are now running a virtual server on a virtual disk. This model is similar to a utility company model because you pay for what you use.
Platform as a Service (PaaS): PaaS provides everything except applications. Services provided by this model include all phases of the system development life cycle (SDLC) and can use application programming interfaces (APIs), website portals, or gateway software. These solutions tend to be proprietary, which can cause problems if the customer moves away from the provider’s platform.
Software as a Service (SaaS): SaaS is designed to provide a complete packaged solution. The software is rented out to the user. The service is usually provided through some type of front end or web portal. While the end user is free to use the service from anywhere, the company pays a per-use fee.
DevOps (including the underlying technical, architectural, and cultural practices) represents a convergence of many technical, project management, and management movements. Before we define what DevOps is, let’s take a look at the history of development methodologies. Decades of lessons learned from software development, high-reliability organizations, manufacturing, high-trust management models, and other disciplines have evolved into the DevOps practices we know today.
The waterfall model is a software and hardware development and project management methodology that has at least five to seven phases that follow in strict linear order. Each phase cannot start until the previous phase has been completed.
Figure 9-1 illustrates the typical phases of the waterfall development methodology.
There are a few reasons why organizations use the waterfall methodology. One of the main reasons is that project requirements are agreed upon from the beginning; consequently, planning and scheduling are simple and clear. With a fully laid-out project schedule, an accurate estimate can be given for the development project’s cost, resources, and deadlines. Another reason is that measuring progress is easy as you move through the phases and hit the different milestones. Your end customer is not perpetually adding new requirements to the project, thus delaying production.
There are also several disadvantages to the waterfall methodology. One is that it can be difficult for customers to enumerate and communicate all of their needs at the beginning of the project. If your end customer is dissatisfied with the product in the verification phase, it can be very costly to go back and redesign the code. In the waterfall methodology, a linear project plan is rigid and lacks flexibility for adapting to unexpected events.
Agile is a software development and project management process where a project is managed by breaking it up into several stages and involving constant collaboration with stakeholders and continuous improvement and iteration at every stage. The Agile methodology begins with end customers describing how the final product will be used and clearly articulating what problem it will solve. Once the coding begins, the respective teams cycle through a process of planning, executing, and evaluating. This process may allow the final deliverable to change in order to better fit the customer’s needs. In an Agile environment, continuous collaboration is key. Clear and ongoing communication among team members and project stakeholders allows for fully informed decisions to be made.
The Agile methodology was originally articulated in 2001 by a group of 17 software practitioners, and it is documented in “The Manifesto for Agile Software Development” (https://agilemanifesto.org).
Figure 9-2 illustrates Agile’s four main values, as documented in “The Manifesto for Agile Software Development.”
In Agile, the input to the development process is the creation of a business objective, concept, idea, or hypothesis. Then the work is added to a committed “backlog.” From there, software development teams that follow the standard Agile or iterative process will transform that idea into “user stories” and some sort of feature specification. This specification is then implemented in code. The code is then checked in to a version control repository (for example, GitLab or GitHub), where each change is integrated and tested with the rest of the software system.
In Agile, value is created only when services are running in production; consequently, you must ensure that you are not only delivering fast flow, but that your deployments can also be performed without causing chaos and disruptions, such as service outages, service impairments, or security or compliance failures.
Figure 9-3 illustrates the general steps of the Agile methodology.
There is a concept adopted by many organizations related to Agile called “Scrum.” Scrum is a framework that helps organizations work together because it encourages teams to learn through experiences, self-organize while working on a solution, and reflect on their wins and losses to continuously improve. Scrum is used by software development teams; however, its principles and lessons can be applied to all kinds of teamwork. Scrum describes a set of meetings, tools, and roles that work in concert to help teams structure and manage their work.
Figure 9-4 illustrates the high-level concepts of the Scrum framework. The Scrum framework uses the concept of “sprints” (a short, time-boxed period when a Scrum team works to complete a predefined amount of work). Sprints are one of the key concepts of the Scrum and Agile methodologies.
Agile is an implementation of the Lean management philosophy created to eliminate waste of time and resources across all aspects of business. The Lean management philosophy was derived from the “Toyota Production System” from the 1980s.
Agile also uses the Kanban process. Kanban is a scheduling system for lean development and just-in-time (JIT) manufacturing originally developed by Taiichi Ohno from Toyota.
There is yet another concept called Extreme Programming (XP). XP is a software development methodology designed to improve quality and to help teams adapt to the changing needs of the end customer. XP was originally developed by Kent Beck, who used it in the Chrysler Comprehensive Compensation System (C3) to help manage the company’s payroll software. XP is similar to Agile, as its main goal is to provide iterative and frequent small releases throughout the development process. This enables both team members and customers to assess and review the development progress throughout the entire software development life cycle (SDLC).
Figure 9-5 provides a good high-level overview of the Lean, Agile, Scrum, Kanban, and Extreme Programming concepts and associations.
DevOps is the outcome of many trusted principles—from software development, manufacturing, and leadership to the information technology value stream. DevOps relies on bodies of knowledge from Lean, Theory of Constraints, resilience engineering, learning organizations, safety culture, human factors, and many others. Today’s technology DevOps value stream includes the following areas:
Product management
Software (or hardware) development
Quality assurance (QA)
IT operations
Infosec and cybersecurity practices
Figure 9-6 illustrates the steps to embrace DevOps within an organization.
There are three general “ways” of DevOps. The first way (illustrated in Figure 9-7) focuses on systems and flow. In this way (or method), you make work visible by reducing the work “batch” sizes, reducing intervals of work, and preventing defects from being introduced by building in quality and control.
The second way is illustrated in Figure 9-8. This way includes a feedback loop to prevent problems from happening again (enabling faster detection and recovery by seeing problems as they occur and maximizing opportunities to learn and improve).
Figure 9-9 illustrates the third way (continuous experimentation and learning). In a true DevOps environment, you conduct dynamic, disciplined experimentation and take risks. You also allocate time to fix issues and make systems better. The creation of shared code repositories helps tremendously in achieving this continuous experimentation and learning process.
Continuous Integration (CI) is a software development practice where programmers merge code changes in a central repository multiple times a day. Continuous Delivery (CD) sits on top of CI and provides a way for automating the entire software release process. When you adopt CI/CD methodologies, each change in code should trigger an automated build-and-test sequence. This automation should also provide feedback to the programmers who made the change.
CI/CD has been adopted by many organizations that provide cloud services (that is, SaaS, PaaS, and so on). For instance, CD can include cloud infrastructure provisioning and deployment, which traditionally have been done manually and consist of multiple stages. The main goal of the CI/CD processes is to be fully automated, with each run fully logged and visible to the entire team.
With CI/CD, most software releases go through the set of stages illustrated in Figure 9-10. A failure in any stage typically triggers a notification. For example, you can use Cisco WebEx Teams or Slack to let the responsible developers know about the cause of a given failure or to send notifications to the whole team after each successful deployment to production.
In Figure 9-10, the pipeline run is triggered by a source code repository (Git in this example). The code change typically sends a notification to a CI/CD tool, which runs the corresponding pipeline. Other notifications include automatically scheduled or user-initiated workflows, as well as results of other pipelines.
The Build stage includes the compilation of programs written in languages such as Java, C/C++, and Go. By contrast, Ruby, Python, and JavaScript programs do not require this compilation step; however, they can still be packaged and deployed using Docker and other container technologies. Regardless of the language, cloud-native software is typically deployed with containers (in a microservice environment).
In the Test stage, automated tests are run to validate the code and the application behavior. The Test stage is an important stage, since it acts as a “safety net” to prevent easily reproducible bugs from being introduced. This concept can be applied to preventing security vulnerabilities, since at the end of the day, a security vulnerability is typically a software (or hardware) bug. The responsibility of writing test scripts can fall to a developer or a dedicated QA engineer. However, it is best done while new code is being written.
Once you have built your code and it has passed all predefined tests, you are ready to deploy it (the Deploy stage). Traditionally, there have been multiple deploy environments used by engineers (for example, a “beta” or “staging” environment used internally by the product team and a “production” environment).
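The following is a minimal sketch of what such a pipeline script could look like in shell form. The build, test, and deploy commands, the run_tests.sh script, and the Slack webhook URL are all illustrative assumptions rather than part of any specific CI/CD product:

#!/bin/bash
# notify() posts a short message to a chat channel (for example, a Slack incoming webhook).
# The webhook URL is a placeholder and must be replaced with a real one.
notify() {
  curl -s -X POST -H 'Content-type: application/json' \
       --data "{\"text\": \"$1\"}" \
       "https://hooks.slack.com/services/REPLACE_ME"
}

set -e                                                  # stop the pipeline at the first failing stage
trap 'notify "Pipeline failed during the $STAGE stage"' ERR

STAGE="build"
docker build -t myapp:latest .                          # Build stage: package the application as a container image

STAGE="test"
docker run --rm myapp:latest ./run_tests.sh             # Test stage: run the automated test suite inside the image

STAGE="deploy"
kubectl set image deployment/myapp myapp=myapp:latest   # Deploy stage: roll the new image out to production

notify "Deployment to production succeeded"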
First, serverless does not mean that you do not need a “server” somewhere. Instead, it means that you will be using cloud platforms to host and/or to develop your code. For example, you might have a “serverless” app that is distributed in a cloud provider like AWS, Azure, or Google Cloud Platform.
Serverless is a cloud computing execution model where the cloud provider (AWS, Azure, Google Cloud, and so on) dynamically manages the allocation and provisioning of servers. Serverless applications run in stateless containers that are ephemeral and event-triggered (fully managed by the cloud provider).
AWS Lambda is one of the most popular serverless architectures in the industry. Figure 9-11 shows an example of a “function” or application in AWS Lambda.
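As a quick illustration, you can also interact with Lambda from the AWS CLI; the function name below is hypothetical, and only the aws lambda subcommands themselves are standard CLI calls:

# List the Lambda functions deployed in the current account and region
$ aws lambda list-functions

# Invoke a (hypothetical) function synchronously and save its response to a file
$ aws lambda invoke --function-name my-serverless-app response.json
$ cat response.json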
As demonstrated in Figure 9-12, computing has evolved from traditional physical (bare-metal) servers to virtual machines (VMs), containers, and serverless architectures.
There have been multiple technologies and solutions to manage, deploy, and orchestrate containers in the industry. The following are the most popular:
Kubernetes: One of the most popular container orchestration and management frameworks, originally developed by Google, Kubernetes is a platform for creating, deploying, and managing distributed applications. You can download Kubernetes and access its documentation at https://kubernetes.io.
Nomad: A container management and orchestration platform by HashiCorp. You can download and obtain detailed information about Nomad at https://www.nomadproject.io.
Apache Mesos: A distributed systems kernel that provides native support for launching containers with Docker and AppC images. You can download Apache Mesos and access its documentation at https://mesos.apache.org.
Docker Swarm: A container cluster management and orchestration system integrated with the Docker Engine. You can access the Docker Swarm documentation at https://docs.docker.com/engine/swarm.
Before you can even think of building a distributed system, you must first understand how the container images that contain your applications make up all the “underlying pieces” of such a distributed system. Applications are normally composed of a language runtime, libraries, and source code. For instance, your application may use third-party or open source shared libraries such as libc and OpenSSL. These shared libraries are typically shipped as shared components of the operating system installed on a system. The dependency on these libraries introduces difficulties when an application developed on your desktop, laptop, or any other development machine (dev system) has a dependency on a shared library that isn’t available when the program is deployed to the production system. Even when the dev and production systems share the exact same version of the operating system, bugs can occur when programmers forget to include dependent asset files inside a package that they deploy to production.
The good news is that you can package applications in a way that makes it easy to share them with others. This is an example where containers become very useful. Docker, one of the most popular container runtime engines, makes it easy to package an executable and push it to a remote registry where it can later be pulled by others.
Container images bundle a program and its dependencies into a single artifact under a root file system. Containers are made up of a series of file system layers. Each layer adds, removes, or modifies files from the preceding layer in the file system. The overlay system is used both when packaging up the image and when the image is actually being used. During runtime, there are a variety of different concrete implementations of such file systems, including aufs, overlay, and overlay2.
Let’s take a look at an example of how container images work. Figure 9-13 shows three container images: A, B, and C. Container Image B is “forked” from Container Image A. Then, in Container Image B, Python version 3 is added. Container Image C is built upon Container Image B, and the programmer adds OpenSSL and nginx to develop a web server and enable TLS.
Abstractly, each container image layer builds upon the previous one. Each parent reference is a pointer. The example in Figure 9-13 includes a simple set of containers; in many environments, you will encounter a much larger directed acyclic graph.
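The following is a minimal sketch of how the layering shown in Figure 9-13 could be reproduced with Docker. The image names (image-b and image-c) and the package choices are hypothetical, but each Dockerfile instruction becomes a new layer on top of its parent:

# "Image B": fork the Alpine base image (image A) and add Python 3
cat > Dockerfile.b <<'EOF'
FROM alpine:3.10
RUN apk add --no-cache python3
EOF
docker build -t image-b -f Dockerfile.b .

# "Image C": build on image B and add OpenSSL and nginx for a TLS-enabled web server
cat > Dockerfile.c <<'EOF'
FROM image-b
RUN apk add --no-cache openssl nginx
EOF
docker build -t image-c -f Dockerfile.c .

# Inspect the resulting layers; each FROM/RUN instruction appears as its own layer
docker history image-c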
Even though the SCOR exam does not cover Docker in detail, it is still good to see a few examples of Docker containers, images, and related commands. Figure 9-14 shows the output of the docker images command.
Figure 9-15 shows the output of the docker ps command used to see all the running Docker containers in a system.
As you learned earlier in this chapter, you can use a public, cloud provider, or private Docker image repository. Docker’s public image repository is called Docker Hub (https://hub.docker.com). You can find images by going to the Docker Hub website or by using the docker search command, as demonstrated in Figure 9-16.
A Dockerfile can be used to automate the creation of a Docker container image. Example 9-1 shows an example of a Dockerfile. The Dockerfile shown in Example 9-1 is the official Python Docker image from Docker Hub.
FROM alpine:3.10

# ensure local python is preferred over distribution python
ENV PATH /usr/local/bin:$PATH

# http://bugs.python.org/issue19846
# > At the moment, setting "LANG=C" on a Linux system *fundamentally breaks Python 3*, and that's not OK.
ENV LANG C.UTF-8

# install ca-certificates so that HTTPS works consistently
# other runtime dependencies for Python are installed later
RUN apk add --no-cache ca-certificates

ENV GPG_KEY E3FF2839C048B25C084DEBE9B26995E310250568
ENV PYTHON_VERSION 3.8.0

RUN set -ex \
    && apk add --no-cache --virtual .fetch-deps gnupg tar xz \
    && wget -O python.tar.xz "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz" \
    && wget -O python.tar.xz.asc "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz.asc" \
    && export GNUPGHOME="$(mktemp -d)" \
    && gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys \
<output omitted for brevity>

CMD ["python3"]
The full code in Example 9-1 can be obtained from https://github.com/The-Art-of-Hacking/h4cker/blob/master/SCOR/Dockerfile_example.
Let’s create a simple Docker container based on the Dockerfile in Example 9-1. First, put the Dockerfile in a new directory/folder and then execute the command shown in Example 9-2 to create a Docker image called mypython.
┌─[omar@us-dev1]─[~/mypython]
└─── $ docker build -t mypython .
Sending build context to Docker daemon 5.632kB
Step 1/13 : FROM alpine:3.10
---> 965ea09ff2eb
Step 2/13 : ENV PATH /usr/local/bin:$PATH
---> Using cache
---> 3801354cb4a4
Step 3/13 : ENV LANG C.UTF-8
---> Using cache
---> f5ee976b0ef2
Step 4/13 : RUN apk add --no-cache ca-certificates
<output omitted for brevity>
Step 12/13 : RUN set -ex; \
    wget -O get-pip.py "$PYTHON_GET_PIP_URL"; \
    echo "$PYTHON_GET_PIP_SHA256 *get-pip.py" | sha256sum -c -; \
    python get-pip.py --disable-pip-version-check --no-cache-dir "pip==$PYTHON_PIP_VERSION"; \
    pip --version; \
    find /usr/local -depth \
        \( \( -type d -a \( -name test -o -name tests -o -name idle_test \) \) \
        -o \( -type f -a \( -name '*.pyc' -o -name '*.pyo' \) \) \) -exec rm -rf '{}' +; \
    rm -f get-pip.py
---> Using cache
---> 646992bb197a
Step 13/13 : CMD ["python3"]
---> Using cache
---> 4790dbb6b084
Successfully built 4790dbb6b084
Successfully tagged mypython:latest
Example 9-3 shows the newly created image using the docker images command.
┌─[omar@us-dev1]─[~/mypython]
└─── $ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
mypython            latest              4790dbb6b084        2 minutes ago       110MB
You can now execute the docker run mypython command to run a new container.
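Because the Dockerfile ends with CMD ["python3"], a minimal way to try the image is to run it interactively, which drops you into a Python interpreter, or to run a one-off command instead:

$ docker run -it --rm mypython                                               # starts the container and opens the Python 3 interpreter
$ docker run --rm mypython python3 -c 'print("hello from the container")'    # runs a single command and removes the container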
In larger environments, you will not deploy and orchestrate Docker containers in a manual way. You want to automate as much as possible. This is where Kubernetes comes into play. Kubernetes (often referred to as k8s) automates the distribution, scheduling, and orchestration of application containers across a cluster. Figure 9-17 illustrates the Kubernetes cluster concept.
The following are the Kubernetes components:
Master: Coordinates all the activities in your cluster (scheduling, scaling, and deploying applications).
Node: A VM or physical server that acts as a worker machine in a Kubernetes cluster.
Pod: A group of one or more containers with shared storage and networking, including a specification for how to run the containers. Each pod has an IP address, and it is expected to be able to reach all other pods within the environment.
Table 9-2 lists the differences between the legacy “rules” of standalone Docker containers and Kubernetes deployments.
Table 9-2 Differences Between the Legacy “Rules” of Standalone Docker Containers and Kubernetes Deployments
Standalone Docker Legacy “Rules” | Kubernetes “Rules”
No native container-to-container networking unless on the same VM. | All containers can communicate with each other without NAT.
Proxies or port forwarding needed. | All nodes can communicate with all containers (and vice versa) without NAT.
Built-in segmentation. | The IP that a container sees itself as is the same IP that others will see it as.
Figure 9-18 shows a high-level overview of a Kubernetes deployment.
One of the easiest ways to learn Kubernetes is to use minikube (a lightweight Kubernetes implementation). Example 9-4 demonstrates how to start minikube with the minikube start command.
$ minikube start
* minikube v1.3.0 on Ubuntu 18.04
* Running on localhost (CPUs=2, Memory=2461MB, Disk=47990MB) ...
* OS release is Ubuntu 18.04.2 LTS
* Preparing Kubernetes v1.15.0 on Docker 18.09.5 ...
  - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
* Pulling images ...
* Launching Kubernetes ...
* Waiting for: apiserver proxy etcd scheduler controller dns
* Done! kubectl is now configured to use "minikube"
$
Once Kubernetes is deployed, you can check the version by running the kubectl version command (kubectl is the official Kubernetes client), as demonstrated in Example 9-5.
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T09:23:26Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
$
You can view the nodes in a cluster by running the kubectl get nodes command, as demonstrated in Example 9-6. In Example 9-6, only one node is deployed (minikube).
$ kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
minikube   Ready    master   2m12s   v1.15.0
Example 9-7 shows a deployment of a new app (omar-k8s-example).
$ kubectl create deployment omar-k8s-example --image=omar-k8s-example-image:v1
deployment.apps/omar-k8s-example created

$ kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
omar-k8s-example   1         1         1            1           10s
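From here, you can expose the deployment as a service and scale it out. The following commands continue the same hypothetical omar-k8s-example deployment; the port number and replica count are arbitrary choices for illustration:

# Expose the deployment through a load-balanced service (port 80 is an assumption)
$ kubectl expose deployment omar-k8s-example --type=LoadBalancer --port=80

# Scale the deployment to three replicas and verify the resulting pods
$ kubectl scale deployment omar-k8s-example --replicas=3
$ kubectl get pods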
The Google Cloud Platform (GCP) offers a hosted Kubernetes-as-a-Service called Google Kubernetes Engine (GKE). Azure and AWS offer similar solutions. Figure 9-19 shows a Kubernetes cluster in GCP called omar-k8s-cluster-1.
Figure 9-20 shows the details of the Kubernetes cluster.
Example 9-8 shows the output of the kubectl get nodes command in Google’s Cloud Shell (the Google Cloud Platform interactive shell).
santosomar@cloudshell:~ (omar-cyber-range)$ kubectl get nodes
NAME                                 STATUS   ROLES    AGE     VERSION
gke-omark8s-cluster1-4a3f623e-5cv0   Ready    <none>   5m28s   v1.14.8-gke.17
gke-omark8s-cluster1-4a3f623e-cq6c   Ready    <none>   5m27s   v1.14.8-gke.17
gke-omark8s-cluster1-4a3f623e-khm4   Ready    <none>   5m28s   v1.14.8-gke.17
santosomar@cloudshell:~ (omar-cyber-range)$
The GCP project in Example 9-8 is called omar-cyber-range.
Kubernetes supports a proxy that is responsible for routing network traffic to load-balanced services in a cluster. When deployed, the proxy must be present on every node in the Kubernetes cluster.
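In many clusters (for example, kubeadm-based deployments), the proxy runs as a DaemonSet named kube-proxy in the kube-system namespace, so you can verify it with a command similar to the following; the exact object name and type can vary by distribution:

$ kubectl get daemonsets --namespace=kube-system kube-proxy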
Kubernetes also runs a DNS server that provides naming and discovery for the services that are defined in the cluster. The Kubernetes DNS server also runs as a replicated service on the cluster. In other words, depending on how large your cluster is, you might see one or more DNS servers running at all times. Example 9-9 shows kube-dns running in the previously created Kubernetes cluster in the omar-cyber-range GCP project.
santosomar@cloudshell:~ (omar-cyber-range)$ kubectl get deployments --namespace=kube-system kube-dns
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
kube-dns   2/2     2            2           36m
In Kubernetes 1.12, Kubernetes began transitioning from the kube-dns DNS server to CoreDNS. If you are running a newer Kubernetes cluster, you may see coredns instead.
Kubernetes also has a GUI. The Kubernetes GUI is run as a single replica, but it is still managed by a Kubernetes deployment for reliability and upgrades. You can see this UI server using the command shown in Example 9-10.
santosomar@cloudshell:~ (omar-cyber-range)$ kubectl get deployments --namespace=kube-system kubernetes-dashboard
NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kubernetes-dashboard   1         1         1            1           1d
The ability to enforce network segmentation in container and VM environments is what people call micro-segmentation. Micro-segmentation operates at the VM level or between containers, regardless of VLAN or subnet boundaries. Micro-segmentation solutions need to be “application aware.” This means that the segmentation process starts and ends with the application itself.
Most micro-segmentation environments apply a zero-trust model. This model dictates that users cannot talk to applications and that applications cannot talk to other applications unless a defined set of policies permits them to do so.
In Chapter 3, “Software-Defined Networking Security and Network Programmability,” you learned about Contiv. As a refresher, Contiv is an open source project that allows you to deploy micro-segmentation policy-based services in container environments. It offers a higher level of networking abstraction for microservices by providing a policy framework. Contiv has built-in service discovery and service routing functions to allow you to scale out services.
With Contiv, you can assign an IP address to each container. This feature eliminates the need for host-based port NAT. Contiv can operate in different network environments, such as traditional Layer 2 and Layer 3 networks, as well as overlay networks. Contiv can be deployed with all major container orchestration platforms (or schedulers), such as Kubernetes and Docker Swarm. For instance, Kubernetes can provide compute resources to containers, and then Contiv provides networking capabilities.
DevSecOps is a concept that has emerged in recent years to describe moving security activities to the start of the development life cycle and building security practices into the CI/CD pipeline. The business environment, culture, legal and regulatory compliance, and external market drivers all influence how a secure development life cycle (also referred to as SDLC) and a DevSecOps program are implemented in an organization.
The OWASP Proactive Controls (https://www.owasp.org/index.php/OWASP_Proactive_Controls) is a collection of secure development practices and guidelines that any software developer should follow to build secure applications. These practices will help you to shift security earlier into design, coding, and testing. Here are the OWASP Top 10 Proactive Controls:
Define Security Requirements
Leverage Security Frameworks and Libraries
Secure Database Access
Encode and Escape Data
Validate All Inputs
Implement Digital Identity
Enforce Access Controls
Protect Data Everywhere
Implement Security Logging and Monitoring
Handle All Errors and Exceptions
Additional best practices include integrating security functions and tools into automated testing in CI/CD pipelines, parameterizing queries to prevent SQL injection, and encoding data to protect against cross-site scripting (XSS) and other injection attacks. You should safely encode data before passing it on to an interpreter. Validate all inputs and treat all data as untrusted! Validate parameters and data elements using whitelisting and other techniques. Implement proper identity and authentication controls, as well as access controls. You should always have the “secure by default” and “shifting security to the left” mentality. This means “making it easy” for developers to write secure code and difficult for them to make dangerous mistakes, wiring secure defaults into their templates and frameworks, and building in the proactive controls you learned about earlier.
Before you can begin adding security checks and controls, you need to understand the workflows and tools that the engineering teams are using in a CI/CD pipeline. You should ask the questions listed in Figure 9-22.
You should also use software assurance tools and methods, including fuzzing, static application security testing (SAST), and dynamic application security testing (DAST). For example, you can use tools like Findsecbugs and SonarQube. Findsecbugs is a tool designed to find bugs in applications created in the Java programming language. It can be used with continuous integration systems such as Jenkins and SonarQube. Findsecbugs provides support for popular Java frameworks, including Spring-MCV, Apache Struts, Tapestry, and others. You can download and obtain more information about Findsecbugs at https://find-sec-bugs.github.io.
SonarQube is a tool that can be used to find vulnerabilities in code, and it provides support for continuous integration and DevOps environments. You can obtain additional information about SonarQube at https://www.sonarqube.org.
Fuzz testing, or fuzzing, is a technique that can be used to find software errors (or bugs) and security vulnerabilities in applications, operating systems, infrastructure devices, IoT devices, and other computing devices. Fuzzing involves sending random data to the unit being tested in order to find input validation issues, program failures, buffer overflows, and other flaws. Tools that are used to perform fuzzing are referred to as “fuzzers.” Examples of popular fuzzers are Peach, Mutiny, American Fuzzy Lop, and Synopsys Defensics.
The Mutiny Fuzzing Framework is an open source fuzzer created by Cisco. It works by replaying packet capture files (pcaps) through a mutational fuzzer. You can download and obtain more information about Mutiny Fuzzing Framework at https://github.com/Cisco-Talos/mutiny-fuzzer. The Mutiny Fuzzing Framework uses Radamsa to perform mutations. Radamsa is a tool that can be used to generate test cases for fuzzers. You can download and obtain additional information about Radamsa at https://gitlab.com/akihe/radamsa.
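As a simple illustration of mutation-based fuzzing, Radamsa reads sample input from stdin and emits mutated variants of it; the HTTP request line used here is arbitrary sample data:

# Print one mutated variant of the sample input
$ echo "GET /index.html HTTP/1.1" | radamsa

# Different seeds produce different (but reproducible) mutations
$ echo "GET /index.html HTTP/1.1" | radamsa --seed 5
$ echo "GET /index.html HTTP/1.1" | radamsa --seed 42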
American Fuzzy Lop (AFL) is a tool that provides features of compile-time instrumentation and genetic algorithms to automatically improve the functional coverage of fuzzing test cases. You can obtain additional information about AFL at http://lcamtuf.coredump.cx/afl/.
Peach is one of the most popular fuzzers in the industry. There is a free (open source) version, the Peach Fuzzer Community Edition, and a commercial version. You can download the Peach Fuzzer Community Edition and obtain additional information about the commercial version at https://www.peach.tech.
Cloud service providers (CSPs) such as Azure, AWS, and GCP have no choice but to take their security and compliance responsibilities very seriously. For instance, Amazon created a Shared Responsibility Model to describe in detail the responsibilities of AWS customers and the responsibilities of Amazon. The Amazon Shared Responsibility Model can be accessed at https://aws.amazon.com/compliance/shared-responsibility-model.
The “shared responsibility” depends on the type of cloud model (SaaS, PaaS, or IaaS). Figure 9-23 shows the responsibilities of a CSP and their customers in a SaaS environment.
Figure 9-24 shows the responsibilities of a CSP and their customers in a PaaS environment.
Figure 9-25 shows the responsibilities of a CSP and their customers in an IaaS environment.
Patch management in the cloud is also a shared responsibility in IaaS and PaaS environments, but not in a SaaS environment. For example, in a SaaS environment, the CSP is the one responsible for patching all software and hardware vulnerabilities. However, in an IaaS environment, the CSP is responsible only for patching the hypervisors, physical compute and storage servers, and the physical network. You are responsible for patching the applications, operating systems (VMs), and any virtual networks you deploy.
When performing penetration testing in the cloud, you must first understand what you can do and what you cannot do. Most CSPs have detailed guidelines on how to perform security assessments and penetration testing in the cloud. Regardless, there are many potential threats when organizations move to a cloud model. For example, although your data is in the cloud, it must reside in a physical location somewhere. Your cloud provider should agree in writing to provide the level of security required for your customers. As discussed in Chapter 1, the following are questions to ask a cloud provider before signing a contract for its services:
Who has access? Access control is a key concern because insider attacks are a huge risk. Anyone who has been approved to access the cloud is a potential threat, so you want to know who has access and how they were screened. Even without malicious intent, problems can arise; for example, an employee can leave and you then find out that you don’t have the password, or the cloud service can get canceled because a bill didn’t get paid.
What are the provider’s regulatory requirements? Organizations operating in the United States, Canada, or the European Union have many regulatory requirements that they must abide by (for example, ISO/IEC 27002, EU-U.S. Privacy Shield Framework, ITIL, FedRAMP, and COBIT). You must ensure that your cloud provider can meet these requirements and is willing to undergo certification, accreditation, and review.
Do you have the right to audit? This particular item is no small matter in that the cloud provider should agree in writing to the terms of the audit. With cloud computing, maintaining compliance could become more difficult to achieve and even harder to demonstrate to auditors and assessors. Of the many regulations touching upon information technology, few were written with cloud computing in mind. Auditors and assessors may not be familiar with cloud computing generally or with a given cloud service in particular. Division of compliance responsibilities between cloud provider and cloud customer must be determined before any contracts are signed or service is started.
What type of training does the provider offer its employees? This is a rather important item to consider because people will always be the weakest link in security. Knowing how your provider trains its employees is an important item to review.
What type of data classification system does the provider use? Questions you should be concerned with here include what data classification standard is being used and whether the provider even uses data classification.
How is your data separated from other users’ data? Is the data on a shared server or a dedicated system? A dedicated server means that your information is the only thing on the server. With a shared server, the amount of disk space, processing power, bandwidth, and so on, is limited because others are sharing this device. If it is shared, the data could potentially become comingled in some way.
Is encryption being used? Encryption should be discussed. Is it being used while the data is at rest and in transit? You will also want to know what type of encryption is being used. For example, there are big technical differences between DES and AES; however, for both of these algorithms, the basic questions are the same: Who maintains control of the encryption keys? Is the data encrypted at rest in the cloud? Is the data encrypted in transit, or is it encrypted at rest and in transit?
What are the service level agreement (SLA) terms? The SLA serves as a contracted level of guaranteed service between the cloud provider and the customer that specifies what level of services will be provided.
What is the long-term viability of the provider? How long has the cloud provider been in business, and what is its track record? If it goes out of business, what happens to your data? Will your data be returned, and, if so, in what format?
Will they assume liability in the case of a breach? If a security incident occurs, what support will you receive from the cloud provider? While many providers promote their services as being “unhackable,” cloud-based services are an attractive target to hackers.
What is the disaster recovery/business continuity plan (DR/BCP)? Although you may not know the physical location of your services, it is physically located somewhere. All physical locations face threats such as fire, storms, natural disasters, and loss of power. In case of any of these events, how will the cloud provider respond, and what guarantee of continued services is it promising?
Cisco has several security solutions that can help protect the cloud and/or that are delivered from the cloud. The following sections cover these solutions.
Cisco Umbrella is a solution that evolved from Cisco’s acquisition of OpenDNS. It is a cloud-delivered solution that blocks malicious destinations using DNS.
OpenDNS is a suite of consumer products aimed at making your Internet faster, safer, and more reliable. The following website provides information on how to use OpenDNS to protect your system and your home: https://www.opendns.com/home-internet-security/.
Cisco Umbrella has the ability to see attacks before the application connection occurs. This limits the load on a firewall or any other network security infrastructure device. In addition, it helps to reduce alerts and improve security operations and incident response.
Umbrella looks at the patterns of DNS requests from devices and uses them to detect the following:
Compromised systems
Command-and-control callbacks
Malware and phishing attempts
Algorithm-generated domains
Domain co-occurrences
Newly registered domains
Malicious traffic and payloads that never reach the target
The Cisco Umbrella global infrastructure includes dozens of data centers around the world that resolve more than 100 billion DNS requests from millions of users every day. Umbrella data centers are peered with more than 500 of the top ISPs and content delivery networks to exchange BGP routes and ensure that requests are routed efficiently, without adding any latency over regional DNS providers.
Cisco Umbrella uses Anycast IP routing in order to provide reliability of the recursive DNS service. All data centers announce the same IP address, and all requests are transparently sent to the fastest and lowest-latency data center available.
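You can observe this from any client by pointing a DNS query directly at Umbrella’s well-known anycast resolvers (208.67.222.222 and 208.67.220.220); the example.com domain is used only as an example, and the debug.opendns.com query is a diagnostic that OpenDNS has historically supported:

# Resolve a domain through the Umbrella/OpenDNS anycast resolvers
$ dig @208.67.222.222 example.com +short
$ dig @208.67.220.220 example.com +short

# Ask the resolver which data center answered the query
$ dig @208.67.222.222 debug.opendns.com txt +short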
Its scale and speed give Umbrella a massive amount of data and, perhaps more importantly, a very diverse data set that is not just from one geography or one protocol. This diversity enables Umbrella to offer unprecedented insight into staged and launched attacks. The data and threat analytics engines learn where threats are coming from, who is launching them, where they are going, and the width of the net of the attack—even before the first victim is hit. Umbrella uses authoritative DNS logs to find the following:
Newly staged infrastructures
Malicious domains, IP addresses, and ASNs
DNS hijacking
Fast flux domains
Related domains
Fast flux is a DNS technique used by botnets to hide phishing and malware delivery sites behind an ever-changing network of compromised hosts acting as proxies. Umbrella is able to find these types of threats by using modeling inside the data analytics. Machine learning and advanced algorithms are used heavily to find and automatically block malicious domains.
The following are a few examples of available models:
Co-occurrence model: This model identifies domains queried right before or after a given domain. This model helps uncover domains linked to the same attack, even if they’re hosted on separate networks.
Traffic spike model: This model recognizes when spikes in traffic to a domain match patterns seen with other attacks. For example, if the traffic for one domain matches the request patterns seen with exploit kits, you might want to block the domain before the full attack launches.
Predictive IP space monitoring model: This model starts with domains identified by the spike rank model and scores the steps attackers take to set up infrastructure (for example, hosting provider, name server, and IP address) to predict whether the domain is malicious. This identifies other destinations that can be proactively blocked before an attack launches.
When Cisco Umbrella servers receive a DNS request, they first identify which end customer the request came from and which policy to apply. Next, Cisco Umbrella determines whether the request is safe or whitelisted, malicious or blacklisted, or “risky.” Safe requests are allowed to be routed as usual, and malicious requests are routed to a block page. Risky requests can be routed to the cloud-based proxy for deeper inspection.
This concept of a cloud-based proxy is the basis for the secure Internet gateway (SIG). Before looking at the functionality of the proxy, it is helpful to understand what traffic is typically sent to the proxy. Most phishing, malware, ransomware, and other threats are hosted at domains that are classified as malicious. Yet some domains host both malicious and safe content; these are the domains that are classified as risky. These sites (such as facebook.com, reddit.com, pastebin.com, and so on) allow users to upload and share content, making them difficult to police.
Traditional web proxies or gateways examine all Internet requests, which adds latency and complexity. The Cisco Umbrella secure Internet gateway proxy intercepts and inspects only requests for risky domains. When the secure Internet gateway identifies a risky domain and begins to proxy that traffic, it uses the URL inspection engine to first classify the URL. The Cisco Umbrella secure Internet gateway uses Cisco Talos threat intelligence, the Cisco web reputation system, and third-party feeds to determine if a URL is malicious.
If the disposition of a web resource is still unknown after the URL inspection, if a file is present, the secure Internet gateway can also look at the file’s reputation. The file is inspected by both antivirus (AV) engines and Cisco Advanced Malware Protection (AMP) to block malicious files based on known signatures before they are downloaded.
Figure 9-26 shows the Cisco Umbrella Overview dashboard, which breaks down the different network requests, including the total number of DNS requests, proxy requests, total blocks, and security blocks.
Cisco Umbrella provides different dashboards and detailed reports. Figure 9-27 shows the Security Overview report dashboard.
The report shown in Figure 9-27 can be used to obtain a high-level overview of all the blocked requests and the Umbrella deployment activity for your organization.
Cisco Umbrella Investigate provides organizations access to global intelligence that can be used to enrich security data and events or help with incident response. Investigate provides the most complete view of an attacker’s infrastructure and enables security teams to discover malicious domains, IP addresses, and file hashes, and even predict emergent threats. Investigate provides access to this intelligence via a web console or an application programming interface (API). With the integration of the Cisco AMP Threat Grid data in Investigate, intelligence about an attacker’s infrastructure can be complemented by AMP Threat Grid’s intelligence about malware files, providing a complete view of the infrastructure used in an attack. Figure 9-28 shows a query of binarycousins.com using Investigate.
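As a hedged sketch of what an API query might look like, the following curl command requests the categorization of the same domain; the token is a placeholder, and the exact endpoint path and response fields should be confirmed against the current Investigate API documentation:

# Query the categorization and security status of a domain via the Investigate API
$ curl -s -H "Authorization: Bearer $INVESTIGATE_API_TOKEN" \
    "https://investigate.api.umbrella.com/domains/categorization/binarycousins.com"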
Cisco Umbrella Investigate provides a single, correlated source of intelligence and adds the security context needed to help organizations uncover and predict attacks. Investigate provides the following features:
Passive DNS Database: Provides historical DNS data.
WHOIS Record Data: Allows you to see domain ownership and uncover malicious domains registered with the same contact information.
Malware File Analysis: Provides behavioral indicators and network connections of malware samples with data from Cisco AMP Threat Grid.
Autonomous System Number (ASN) Attribution: Provides IP-to-ASN mappings.
IP Geolocation: Allows you to see in which country an IP address is located.
Domain and IP Reputation Scores: Allows you to leverage Investigate’s risk scoring across a number of domain attributes to assess suspicious domains.
Domain Co-Occurrences: Returns a list of domain names that were looked up around the same time as the domain being checked. The score next to the domain name in a co-occurrence is a measurement of requests from client IPs for these related domains. The co-occurrences are for the previous seven days and are shown whether the co-occurrence is suspicious or not.
Anomaly Detection: Allows you to detect fast flux domains and domains created by domain generation algorithms. This score is generated based on the likeliness of the domain name being generated by an algorithm rather than a human. This score ranges from –100 (suspicious) to 0 (benign).
DNS Request Patterns and Geo Distribution of Those Requests: Allows you to see suspicious spikes in global DNS requests to a specific domain.
Chapter 10, “Content Security,” provides details about the Cisco Email Security Appliance (ESA) and the Cisco Web Security Appliance (WSA). The Cisco ESA is an on-premises email security solution. However, there is also a cloud-based email security solution provided by Cisco. This allows you to provide protection against threats like ransomware, business email compromise (BEC), phishing, spear phishing, whaling, and many other email-driven attacks.
Cisco cloud email security supports several techniques to create the multiple layers of security needed to defend against the aforementioned attack types. These techniques include the following:
Geolocation-based filtering: To protect against sophisticated spear phishing by quickly controlling email content based on the location of the sender.
The Cisco Context Adaptive Scanning Engine (CASE): CASE leverages hundreds of thousands of adaptive attributes that are tuned automatically based on real-time analysis of cyber threats. CASE combines adaptive rules and the real-time outbreak rules published by Cisco Talos to evaluate every message and assign a unique threat level. Based on the threat level, CASE recommends a period of time to quarantine the message to prevent an outbreak (as well as rescan intervals to reevaluate the message based on updated outbreak rules from Talos). The higher the threat level, the more often CASE rescans the message while it is quarantined.
Automated threat data drawn from Cisco Talos: Threat intelligence information from Cisco’s security research organization (Talos) to provide a deeper understanding of underlying cybersecurity threats.
Advanced Malware Protection (AMP): To provide global visibility and continuous analytics across all components of the AMP architecture for endpoints and mobile devices. The Cisco AMP integration with Cisco Email Security also provides persistent protection against URL-based threats via real-time analysis of potentially malicious links.
Cisco Email Security also provides a feature called Forged Email Detection (FED). FED is used to detect spear phishing attacks by examining one or more parts of the SMTP message for manipulation, including the “Envelope-From,” “Reply To,” and “From” headers.
Cisco Email Security also has the Sender Policy Framework (SPF) for sender authentication and DomainKeys Identified Mail (DKIM) and Domain-based Message Authentication, Reporting, and Conformance (DMARC) for domain authentication.
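Because SPF, DKIM, and DMARC policies are published as DNS TXT records, you can inspect a sending domain’s posture with simple DNS queries; cisco.com and the selector1 DKIM selector are used here only as examples:

# SPF record: lists the hosts authorized to send mail for the domain
$ dig +short TXT cisco.com | grep "v=spf1"

# DMARC policy: published under the _dmarc subdomain
$ dig +short TXT _dmarc.cisco.com

# DKIM public key: published under <selector>._domainkey; the selector name varies per sender
$ dig +short TXT selector1._domainkey.cisco.com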
Cisco Email Security supports advanced encryption key services to manage email recipient registration, authentication, and per-message/per-recipient encryption keys. The email security gateway also gives compliance and security officers the control of and visibility into how sensitive data is delivered. The cloud-based Cisco Email Security solution also provides a customizable reporting dashboard to access information about encrypted email traffic, including the delivery method used and the top senders and receivers.
The Cisco Email Security cloud service also supports S/MIME. Secure/Multipurpose Internet Mail Extensions (S/MIME) is a standards-based method for sending and receiving secure, verified email messages. S/MIME uses a public/private key pair to encrypt or sign messages.
Cisco Email Security can provide protection for Office 365 deployments. Figure 9-29 shows how a traditional Office 365 email exchange is done without Cisco Email Security.
Figure 9-30 shows how an Office 365 email exchange is done with Cisco Email Security. You can see that the MX records are changed to the Cisco ESAs in the cisco-ces.com domain in this example. The interaction between the Cisco Email Security solution and Office 365 relays all emails for inspection. The Cisco Email Security cloud service provides a “public” email listener to protect all incoming and outgoing emails.
Figure 9-31 shows the integration of the cloud-based Cisco Email Security solution and the AMP and Threat Grid clouds.
Cloudlock was a company that Cisco acquired a few years ago. Now called Cisco Cloudlock, the solution is a cloud access security broker (CASB). A CASB provides visibility and compliance checks, protects data against misuse and exfiltration, and provides threat protections against malware like ransomware.
Cisco Cloudlock integrates with cloud services such as the following:
Box
Dropbox
G Suite
Office 365
Okta
OneLogin
Salesforce
ServiceNow
Slack
Webex Teams
Figure 9-32 shows the Cisco Cloudlock main dashboard. There you can see the different anomalies, top users with admin activity, and an overview of overall security risk based on location.
A Cloudlock incident is typically created when an object or asset (document, object, field, post, and so on) in a platform monitored by Cisco Cloudlock meets three criteria:
The object/asset has been created or changed.
The object/asset is monitored by an active policy.
The object/asset is in violation of the criteria (content and/or context criteria) in the policy.
Figure 9-34 shows an example of a Cisco Cloudlock incident.
Figure 9-35 shows the Cisco Cloudlock Policies dashboard.
Policies are the automated rules you create in Cisco Cloudlock to customize information protection to match your organization’s needs. Policies generate incidents that enable you to monitor and correct security issues. Cisco Cloudlock policies operate independently of one another; they do not interact directly, and there is no explicit order of execution. The response actions associated with one policy do not interact with the response actions of another policy. You can add a new policy by selecting a predefined policy or by creating your own policy. Figure 9-36 shows how to add a predefined policy. In this example, a policy is added to alert on and block transactions that may include United States Social Security numbers.
You can also design and build your own policies in Cisco Cloudlock by starting with one of these categories:
Context-Only: Ignores the content contained in assets, and monitors only how widely assets are shared, their file types, and other metadata.
Custom Regex: Monitors content matching a regular expression you create.
Event Analysis: Monitors the platform events you select. Events are specific to each monitored platform. You can select individual raw events to monitor and/or events in normalized categories established by Cloudlock (for example, login events).
Salesforce Report Export Activity: Applies only to the Salesforce platform. You can use it to monitor when, where, and by whom Salesforce reports are exported from the platform.
Figure 9-37 shows the Cisco Cloudlock App Discovery dashboard.
In order to use the Cisco Cloudlock App Discovery feature to review and investigate usage of cloud apps on your network, at least one log source must be connected to Cloudlock. A log source is a network device logging network activity, such as a Cisco Web Security Appliance (WSA) or Cisco Umbrella. When at least one log source is integrated with App Discovery, data will become available in the App Discovery dashboard and the Discovered Apps list page.
In Chapter 5, “Network Visibility and Segmentation,” you learned about the Cisco Stealthwatch solution. The Cisco Stealthwatch solution uses NetFlow telemetry and contextual information from the Cisco network infrastructure. This solution allows network administrators and cybersecurity professionals to analyze network telemetry in a timely manner to defend against advanced cyber threats, including the following:
Network reconnaissance
Malware proliferation across the network for the purpose of stealing sensitive data or creating back doors to the network
Communications between the attacker (or command-and-control servers) and the compromised internal hosts
Data exfiltration
Cisco Stealthwatch aggregates and normalizes considerable amounts of NetFlow data to apply security analytics that detect malicious and suspicious activity. You can also monitor on-premises networks in your organization using Cisco Stealthwatch Cloud. In order to do so, you need to deploy at least one Cisco Stealthwatch Cloud Sensor appliance (virtual or physical). The Cisco Stealthwatch Cloud Sensor appliance can be deployed in two different modes (which are not mutually exclusive):
By processing network metadata from a SPAN or a network TAP
By processing metadata out of NetFlow or IPFIX flow records
Cisco Stealthwatch Cloud can also be integrated with Meraki and Cisco Umbrella. The following document details the integration with Meraki: https://www.cisco.com/c/dam/en/us/td/docs/security/stealthwatch/cloud/portal/SWC_Meraki_DV_1_1.pdf.
The following document outlines how to integrate Stealthwatch Cloud with the Cisco Umbrella Investigate REST API in order to provide additional information in the Stealthwatch Cloud environment for external entity domain information: https://www.cisco.com/c/dam/en/us/td/docs/security/stealthwatch/cloud/portal/SWC_Umbrella_DV_1_0.pdf.
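As a rough sketch of the kind of enrichment Investigate provides programmatically, the following queries the domain categorization endpoint of the Investigate REST API with the Python requests library. The API token is a placeholder, and the endpoint path and response fields should be confirmed against the current Investigate API documentation.

import requests

API_TOKEN = "YOUR-INVESTIGATE-API-TOKEN"  # placeholder access token
domain = "internetbadguys.com"            # Cisco's test domain

# Query the Investigate domain categorization endpoint (path assumed from
# the public Investigate API documentation).
response = requests.get(
    "https://investigate.api.umbrella.com/domains/categorization/" + domain,
    headers={"Authorization": "Bearer " + API_TOKEN},
    timeout=10,
)
response.raise_for_status()
print(response.json())  # domain status and content categories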
AppDynamics was another company acquired by Cisco. AppDynamics (or AppD for short) provides end-to-end visibility of applications and can provide insights about application performance. AppD is able to automatically discover the flow of all traffic requests in your environment by creating a dynamic topology map of all your applications.
AppD also provides cloud monitoring and supports the following platforms:
AWS Monitoring
Microsoft Azure
Pivotal Cloud Foundry Monitoring
Cloud Foundry Foundation
Rackspace Monitoring
Kubernetes Monitoring
OpenShift Monitoring
HP Cloud Monitoring
Citrix Monitoring
OpenStack Monitoring
IBM Monitoring
Docker Monitoring
AWS Lambda Monitoring
AppDynamics can also be integrated with the Workload Optimization Manager, which is a server application running on a VM that you install on your network. You then assign Virtual Management services running on your network to be Workload Optimization Manager targets. Workload Optimization Manager discovers the devices each target manages, and then performs analysis, anticipates risks to performance or efficiency, and recommends actions you can take to avoid problems before they occur.
Figure 9-38 shows the Cloud Executive Dashboard of the Workload Optimization Manager.
Figure 9-39 shows the Workload Optimization Manager Cloud dashboard.
The Workload Optimization Manager is an agentless Cisco technology that detects relationships and dependencies between applications and the underlying infrastructure layers. It provides a global topological mapping of your environment (local and remote, across private and public clouds) and the interdependent relationships within it, mapping each layer of the full infrastructure stack to application demand. This mapping drives real-time actions that ensure workloads get the resources they need when they need them, enabling placement, resizing, and capacity decisions that can be automated to maintain continuous health in the environment.
Figure 9-40 shows the cloud applications used in the organization’s cloud deployment. In this example, all applications are hosted in AWS.
The integration with the Cisco Workload Optimization Manager helps you monitor and analyze application performance across your data centers and into public clouds (AWS in this example). AppDynamics and the Cisco Workload Optimization Manager quickly model what-if scenarios based on the real-time environment to accurately forecast capacity needs. In addition, you can track compute, storage, and database consumption (CPU, memory, latency, and Database Transaction Units [DTUs]) across cloud regions and availability zones.
Cisco Tetration is a solution that utilizes rich traffic flow telemetry to address critical data center operational use cases. It uses both hardware and software agents as telemetry sources and performs advanced analytics on the collected data. To access the information, Cisco Tetration provides a scalable point-and-click web UI to search information using visual queries and to visualize statistics using a variety of charts and tables. In addition, all administrative functions and cluster monitoring can be performed through the same web UI. Cisco Tetration supports both on-premises and public cloud workloads.
Tetration uses software agents and can also obtain telemetry information from Cisco network infrastructure devices. The Tetration software agent is a piece of software running within a host operating system (such as Linux or Windows). Its core functionality is to monitor and collect network flow information. It also collects other host information, such as network interfaces and the active processes running in the system. The information collected by the agent is exported to a set of collectors running within the Tetration cluster for further analytical processing. In addition, software agents have the capability to set firewall rules on the hosts where they are installed.
In order for Tetration to import user annotations from external orchestrators, Tetration needs to establish outgoing connections to the orchestrator API servers (vCenter, Kubernetes, F5 BIG-IP, and so on). Sometimes it is not possible to allow direct incoming connections to the orchestrators from the Tetration cluster. The Secure Connector solves this issue by establishing an outgoing connection from the same network as the orchestrator to the Tetration cluster. This connection is used as a reverse tunnel to pass requests from the cluster back to the orchestrator API server.
Application Dependency Mapping (ADM) is a functionality in Cisco Tetration that helps provide insight into the kind of complex applications that run in a data center or in the cloud.
ADM enables network admins to build tight network security policies based on various signals such as network flows, processes, and other side information like load balancer configs and route tags. Not only can these policies be exported in various formats for consumption by different enforcement engines, but Tetration can also verify policies against ongoing traffic in near real time.
The Forensics feature set enables monitoring and alerting for possible security incidents by capturing real-time forensic events and applying user-defined rules. Specifically, it enables the following:
Defining rules to specify forensic events of interest
Defining trigger actions for matching forensic events
Searching for specific forensic events
Visualizing event-generating processes and their full lineages
The Tetration Security Dashboard, shown in Figure 9-41, presents actionable security scores by bringing together multiple signals available in Tetration, helping you understand your current security position and improve it. A Security Score is a number between 0 and 100 that indicates the security position in a given category; 100 is the best possible score and 0 is the worst.
The Security Score computation takes into account vulnerabilities in installed software packages, consistency of process hashes, open ports on different interfaces, forensic and network anomaly events, and compliance/non-compliance to policies.
The Vulnerability Dashboard, shown in Figure 9-42, enables administrators to focus their efforts on the critical vulnerabilities and workloads that need the most attention. Administrators can select the relevant scope at the top of the page as well as the Common Vulnerability Scoring System (CVSS) score. The dashboard highlights the distribution of vulnerabilities in the chosen scope and displays vulnerabilities by different attributes, such as the complexity of exploits, whether the vulnerabilities can be exploited over the network, and whether the attacker needs local access to the workload. Statistics are also provided to quickly filter on vulnerabilities that are remotely exploitable and have the lowest complexity to exploit.
Tetration provides a REST API for interacting with all features in a programmatic way. Tetration also has the concept of connectors. Connectors are integrations that Tetration supports for a variety of use cases, including flow ingestion, inventory enrichment, and alert notifications.
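As a minimal sketch of that programmatic access, the following uses Cisco's tetpyclient Python package, assuming an API credentials file downloaded from the cluster UI; the cluster URL and resource path are placeholders and should be verified against your cluster's OpenAPI documentation.

from tetpyclient import RestClient  # Cisco's Tetration OpenAPI client: pip install tetpyclient

# Placeholder cluster URL and credentials file generated in the Tetration UI.
rc = RestClient(
    "https://tetration-cluster.example.com",
    credentials_file="api_credentials.json",
    verify=True,
)

# List the ADM application workspaces visible to this API key's scope
# (resource path assumed from the OpenAPI documentation).
resp = rc.get("/applications")
resp.raise_for_status()
for app in resp.json():
    print(app.get("name"), app.get("id"))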
Exam Preparation Tasks
As mentioned in the section “How to Use This Book” in the Introduction, you have a couple of choices for exam preparation: the exercises here, Chapter 12, “Final Preparation,” and the exam simulation questions in the Pearson Test Prep Software Online.
Review the most important topics in this chapter, noted with the Key Topic icon in the outer margin of the page. Table 9-3 lists these key topics.
Table 9-3 Key Topics for Chapter 9
Key Topic Element | Description
List | Identifying the essential characteristics of cloud computing
List | Understanding the different cloud deployment models
List | Identifying the different cloud service models
Paragraph | Understanding what DevOps is
Section | The Agile Methodology
Section | DevOps
Section | CI/CD Pipelines
Section | The Serverless Buzzword
Section | Container Orchestration
Section | A Quick Introduction to Containers and Docker
Section | Kubernetes
Section | Microservices and Micro-Segmentation
Section | DevSecOps
Section | Describing the Customer vs. Provider Security Responsibility for the Different Cloud Service Models
Section | Patch Management in the Cloud
Section | Security Assessment in the Cloud and Questions to Ask Your Cloud Service Provider
Section | Cisco Umbrella
Section | The Cisco Umbrella Architecture
Section | Secure Internet Gateway
Section | Cisco Umbrella Investigate
Section | Cisco Email Security in the Cloud
Section | Forged Email Detection
Section | Sender Policy Framework
Section | Email Encryption
Section | Cisco Email Security for Office 365
Section | Cisco Cloudlock
Section | Stealthwatch Cloud
Section | AppDynamics Cloud Monitoring
Section | Cisco Tetration
Define the following key terms from this chapter and check your answers in the glossary:
1. What is Extreme Programming (EP)?
A software development methodology designed to improve quality and for teams to adapt to the changing needs of the end customer
A DevSecOps concept to provide better SAST and DAST solutions in a DevOps environment
A software development methodology designed to provide cloud providers with the ability to scale and deploy more applications per workload
None of these answers is correct.
2. Which of the following is a framework that helps organizations work together because it encourages teams to learn through experiences, self-organize while working on a solution, and reflect on their wins and losses to continuously improve?
DevSecOps
Scrum
Waterfall
None of these answers is correct.
3. Which of the following is the CI/CD pipeline stage that includes the compilation of programs written in languages such as Java, C/C++, and Go?
Develop
Build
Deploy
Package and Compile
4. Which of the following is a Kubernetes component that is a group of one or more containers with shared storage and networking, including a specification for how to run the containers?
Pod
k8s node
kubectl
kubeadm
5. Which of the following is a technique that can be used to find software errors (or bugs) and security vulnerabilities in applications, operating systems, infrastructure devices, IoT devices, and other computing devices? This technique involves sending random data to the unit being tested in order to find input validation issues, program failures, buffer overflows, and other flaws.
Scanning
DAST
Fuzzing
SAST
6. Which of the following is a Cisco Umbrella component that provides organizations access to global intelligence that can be used to enrich security data and events or help with incident response? It also provides the most complete view of an attacker’s infrastructure and enables security teams to discover malicious domains, IP addresses, and file hashes and even predict emergent threats.
Investigate
Internet Security Gateway
Cloudlock
CASB
7. Cisco Cloud Email Security supports which of the following techniques to create the multiple layers of security needed to defend against email-based threats?
Geolocation-based filtering
The Cisco Context Adaptive Scanning Engine (CASE)
Advanced Malware Protection (AMP)
All of these answers are correct.
8. Which of the following statements are true about the Cisco Email Security solution?
The Sender Policy Framework (SPF) is used for sender authentication.
DomainKeys Identified Mail (DKIM) is used for domain authentication.
Domain-based Message Authentication, Reporting, and Conformance (DMARC) is used for domain authentication.
All of these answers are correct.
9. You can design and build your own policies in Cisco Cloudlock by starting with which of the following categories?
Custom Regex
Event Analysis
Salesforce Report Export Activity
All of these answers are correct.
10. Cisco Cloudlock provides a ___________ in order to assess the relative risk of cloud-connected apps and services according to business risk, usage risk, and vendor compliance.
Composite Risk Score (CRS)
Composite Risk Rating (CRR)
Common Vulnerability Scoring System (CVSS)
None of these answers is correct.