Chapter 10

Security in the Cloud


CERTIFICATION OBJECTIVES

10.01     Data Security

10.02     Network Security

10.03     Access Control

Images     Two-Minute Drill

Q&A   Self Test


This chapter covers the concepts of security in the cloud as they apply to data both in motion across networks and at rest in storage, as well as the controlled access to data in both states. Our security coverage begins with some high-level best practices and then delves into the details of the mechanisms and technologies required to deliver against those practices. Some of these technologies include encryption (data confidentiality) and digital signatures (data integrity) and their supporting systems.

Access control is the process of determining who should be able to view, modify, or delete information. Controlling access to network resources such as files, folders, databases, and web applications relies on effective access control techniques.

CERTIFICATION OBJECTIVE 10.01

Data Security

Data security encompasses data as it traverses a network as well as stored data, or data at rest. Data security is concerned with data confidentiality (encryption), ensuring that only authorized parties can access data, and data integrity (digital signatures) ensuring that data is tamper-free and comes from a trusted party. These can be implemented alongside a public key infrastructure (PKI). Encryption is also used to create secure connections between locations in a technique called tunneling. These control mechanisms can be used separately or together for the utmost in security, and this section will explore them in detail in the following sections:

Images   Public key infrastructure

Images   Encryption protocols

Images   Tunneling protocols

Images   Ciphers

Images   Storage security

Images   Protected backups

Public Key Infrastructure

A public key infrastructure (PKI) is a hierarchy of trusted security certificates, as seen in Figure 10-1. These security certificates (also called X.509 certificates, or PKI certificates) are issued to users, applications, or computing devices. PKI certificates are used to encrypt and decrypt data, as well as to digitally sign and verify the integrity of data. Each certificate contains a unique, mathematically related public and private key pair. When the certificate is issued, it has an expiration date; certificates must be renewed before the expiration date. Otherwise, they are not usable.

FIGURE 10-1   Illustration of a public key infrastructure hierarchy


The certificate authority (CA) exists at the top of the PKI hierarchy, and it can issue, revoke, and renew all security certificates. Beneath it reside either end-entity certificates (issued to users, applications, and devices) or subordinate certificate authorities.

Subordinate CAs can also issue, revoke, and renew certificates for the scope of operations provided in their mandate from the root CA. A large enterprise, for example, Acme, might have a CA named Acme-CA. For each of its three U.S. regions, Acme might create subordinate CAs named West, East, and Central. These regions could be further divided into sections by creating subordinate CAs from each of those, one for production and one for development. Such a subordinate CA configuration would allow the cloud operations and development personnel in each of the three regions to control their own user and device PKI certificates without having control over resources from other regions or departments.

Certificates have a defined expiration date, but some need to be deactivated before their expiration. In such cases, administrators can revoke the certificate. Revoking a certificate places the certificate on the certificate revocation list (CRL). Computers check the CRL to verify that a certificate is not on the list when they validate a certificate.
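The issue/revoke/validate lifecycle described above can be sketched in pure Python. This is a toy model for study purposes only; the class and field names are illustrative and do not correspond to any real PKI API:

```python
from datetime import datetime, timedelta

class CertificateAuthority:
    """Toy CA that issues, revokes, and validates certificates (illustrative only)."""

    def __init__(self, name):
        self.name = name
        self.issued = {}        # serial number -> expiration date
        self.crl = set()        # serial numbers of revoked certificates
        self._next_serial = 1

    def issue(self, valid_days=365):
        serial = self._next_serial
        self._next_serial += 1
        self.issued[serial] = datetime.utcnow() + timedelta(days=valid_days)
        return serial

    def revoke(self, serial):
        if serial in self.issued:
            self.crl.add(serial)        # revocation = placement on the CRL

    def validate(self, serial):
        expires = self.issued.get(serial)
        if expires is None:             # never issued by this CA
            return False
        if serial in self.crl:          # computers check the CRL on validation
            return False
        return datetime.utcnow() < expires  # expired certificates are unusable

ca = CertificateAuthority("Acme-CA")
s = ca.issue()
print(ca.validate(s))   # True
ca.revoke(s)
print(ca.validate(s))   # False: the serial now appears on the CRL
```

Note how validation fails for three distinct reasons, mirroring the text: the certificate was never issued, it was revoked onto the CRL, or it expired without renewal.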

To help you better understand encryption and PKI, this section is divided into the following topics:

Images   Plaintext

Images   Obfuscation

Images   Ciphertext

Images   Cryptographic key

Images   Symmetric encryption

Images   Asymmetric encryption

Images   Digital signatures

Images

Instead of creating its own PKI, an organization may want to consider acquiring PKI certificates from a trusted third party such as Symantec or Entrust. Modern operating systems have a list of trusted CAs, and if an organization uses its own PKI, it has to ensure that all of its devices trust their CA.

Plaintext

Before data is encrypted, it is called plaintext. When an unencrypted e-mail message (i.e., an e-mail in plaintext form) is transmitted across a network, it is possible for a third party to intercept that message in its entirety.

Obfuscation

Obfuscation is the practice of using a defined pattern to mask sensitive data. The pattern can be a substitution pattern, a shuffling of characters, or a patterned removal of selected characters. Obfuscation is more secure than plaintext, but it can be reverse engineered by a malicious entity willing to spend the time to decode it.

Ciphertext

Ciphers are mathematical algorithms used to encrypt data. Applying an encryption algorithm (cipher) and a value to make the encryption unique (key) against plaintext results in what is called ciphertext; it is the encrypted version of the originating plaintext.
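The plaintext-cipher-key-ciphertext relationship can be demonstrated with a deliberately weak repeating-key XOR cipher. This is for illustration only; XOR like this must never be used to protect real data:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Apply a repeating-key XOR. The same call encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"Attack at dawn"
ciphertext = xor_cipher(plaintext, b"secret")        # cipher + key -> ciphertext
print(ciphertext != plaintext)                       # True: unreadable without the key
print(xor_cipher(ciphertext, b"secret") == plaintext)  # True: key recovers plaintext
```

The same algorithm with a different key produces entirely different ciphertext, which is why the key, not the algorithm, must be kept secret.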

Cryptographic Key

Many people can own the same model of lock while each has their own unique key that opens that specific lock. The same is true for cryptography. Multiple people can each use the same cryptographic algorithm, but they will each use a unique cryptographic key, or key for short, so that one person cannot decrypt the data of another.

Let’s continue with the lock example. Some locks have more tumblers than others, which makes them harder to pick. Similarly, some keys are longer than others. Longer keys produce ciphertext that is harder to break.

Keys are used for data at rest (stored data) and data in motion (transmission data). Keys are generated and distributed as part of the encryption process or they are configured before encryption begins.

Keys that are generated as part of the encryption process are said to be created in-band. Keys are generated and exchanged during the negotiation phase of communication. Similarly, systems that encrypt stored data will need to generate a key when encrypting the data if one has not been provided to it prior to encryption.

The other approach is to provide keys before engaging in encrypted communication or encryption. For transmission partners, keys are generated out-of-band, meaning that they are created in a separate process and distributed to transmission partners before communication starts. Such a key is known as a pre-shared key (PSK). Software or tools that encrypt stored data may be provided with a key upon installation or configuration and then this key will be used in the encryption process. Some systems use a PSK to encrypt unique keys that are generated for each file, backup job, or volume. This allows a different key to be used for discrete portions of data but these keys do not all have to be provided beforehand. Without such a process, new jobs or drives would require more administrative involvement before they could be encrypted.
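The pattern of using a PSK to wrap unique per-file keys is often called envelope encryption, and it can be sketched as follows. The XOR-based "encryption" here stands in for a real cipher, and all function names are illustrative:

```python
import os

PSK = os.urandom(16)   # pre-shared key, distributed out-of-band before encryption

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encrypt_file(contents: bytes):
    data_key = os.urandom(16)         # unique key generated per file or job
    wrapped = xor(data_key, PSK)      # only the data key is encrypted with the PSK
    return wrapped, xor(contents, data_key)

def decrypt_file(wrapped: bytes, ciphertext: bytes) -> bytes:
    data_key = xor(wrapped, PSK)      # unwrap the per-file key first
    return xor(ciphertext, data_key)

w, ct = encrypt_file(b"quarterly results")
print(decrypt_file(w, ct))   # b'quarterly results'
```

Because each file gets its own key while only the PSK must be provided beforehand, new jobs or drives can be encrypted without extra administrative involvement, exactly as described above.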

Distributed Key Generation   Multiple devices can work together to create a key in a process known as distributed key generation. Distributed key generation makes the system harder to compromise because no single device is responsible for the key; an attacker would need to compromise several systems. This is the method blockchains use to create keys for Bitcoin and the many other services that depend on blockchain technology.

Key Management System   However they are provided, each key must be documented and protected so that data can be decrypted when needed. Organizations can deploy a key management system (KMS) to manage issuing, validating, distributing, and revoking cryptographic keys so that keys are stored and managed in a single place. Cloud KMSs include such systems as AWS KMS, Microsoft Azure Key Vault, and Oracle Key Manager.

Elliptic Curve Cryptography   Elliptic curve cryptography (ECC) is a cryptographic approach that allows smaller keys to be used by relying on the algebraic structure of elliptic curves over finite fields. The math behind it is complex, but the short description is that ECC uses curve points rather than large prime-number factors to provide security equivalent to schemes with much larger keys. For example, a 256-bit ECC key is generally considered comparable in strength to a 3072-bit RSA key, so an ECC key of a given length is stronger than a prime-factorization key of the same length.

Symmetric Encryption

As just discussed, encrypting data requires a passphrase or key. Symmetric encryption, also called private key encryption, uses a single key that encrypts and decrypts data. Think of it as locking and unlocking a door using the same key. The key must be kept safe since anybody with it in their possession can unlock the door. Symmetric encryption is used to encrypt files, to secure some VPN solutions, and to encrypt Wi-Fi networks, just to name a few examples.

To see symmetric encryption in action, let’s consider a situation where a user, Stacey, encrypts a file on a hard disk:

1.   Stacey flags the file to be encrypted.

2.   The file encryption software uses a configured symmetric key (or passphrase) to encrypt the file contents. The key might be stored in a file or on a smartcard, or the user might be prompted for the passphrase at the time.

3.   This same symmetric key (or passphrase) is used when the file is decrypted.

Encrypting files on a single computer is easy with symmetric encryption, but when other parties that need the symmetric key are involved (e.g., when connecting to a VPN using symmetric encryption), it becomes problematic: How do we securely get the symmetric key to all parties? We could transmit the key to the other parties via e-mail or text message, but we would already have to have a way to encrypt this transmission in the first place. For this reason, symmetric encryption does not scale well.

Asymmetric Encryption

Asymmetric encryption uses two different keys to secure data: a public key and a private key. This key pair is stored in a PKI certificate (which itself can be stored as a file), in a user account database, or on a smartcard. Using two mathematically related keys is what PKI is all about: a hierarchy of trusted certificates each with their own unique public and private key pairs.

The public key can be freely shared, but the private key must be accessible only by the certificate owner. Both the public and private keys can be exported to a certificate file or just the public key by itself. Keys are exported to exchange with others for secure communications or to use as a backup. If the private key is stored in a certificate file, the file must be password protected.

The recipient’s public key is required to encrypt transmissions to them. Bear in mind that the recipient could be a user, an application, or a computer. The recipient then uses their mathematically related private key to decrypt the message.

Consider an example, shown in Figure 10-2, where user Roman sends user Trinity an encrypted e-mail message using a PKI, or asymmetric encryption:

FIGURE 10-2   Sending an encrypted e-mail message


1.   Roman flags an e-mail message for encryption. His e-mail software needs Trinity’s public key. PKI encryption uses the recipient’s public key to encrypt. If Roman cannot get Trinity’s public key, he cannot encrypt a message to her.

2.   Roman’s e-mail software encrypts and sends the message. Anybody intercepting the e-mail message will be unable to decipher the message content.

3.   Trinity opens the e-mail message using her e-mail program. Because the message is encrypted with her public key, only her mathematically related private key can decrypt the message.

Unlike symmetric encryption, PKI scales well. There is no need to find a safe way to distribute secret keys because only the public keys need be accessible by others, and public keys do not have to be kept secret.

Cloud providers use asymmetric encryption for communication between virtual machines. For example, AWS generates key pairs for authentication to Windows cloud-based virtual machines. AWS stores the public key, and you must download and safeguard the private key.
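The public-encrypts/private-decrypts relationship can be demonstrated with textbook RSA using tiny primes. This is hopelessly insecure and for study purposes only (real keys are thousands of bits long), but the mechanics are the same:

```python
# Textbook RSA with tiny primes -- insecure, for illustration only.
p, q = 61, 53
n = p * q                 # modulus, part of both keys (3233)
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent: modular inverse of e (Python 3.8+)

def encrypt(m: int) -> int:
    return pow(m, e, n)   # uses the recipient's PUBLIC key (e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)   # uses the recipient's PRIVATE key (d, n)

message = 65
c = encrypt(message)
print(c != message)   # True: ciphertext differs from plaintext
print(decrypt(c))     # 65: only the private key recovers the message
```

Anyone may hold (e, n) and encrypt, but only the holder of d can decrypt, which is why public keys can be freely shared while private keys must be guarded.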

Digital Signatures

A PKI allows us to trust the integrity of data by way of digital signatures. When data is digitally signed, a mathematical hashing function is applied to the data in the message, which results in what is called a message digest, or hash. The PKI private key of the signer is then used to encrypt the hash: this is the digital signature.

Notice that the message content has not been secured; for that encryption is required. Other parties needing to trust the digitally signed data use the mathematically related public key of the signer to validate the hash. Remember that public keys can be freely distributed to anyone without compromising security.

As an example of the digital signature at work, consider user Ana, who is sending user Zoey a high-priority e-mail message that Zoey must trust really did come from Ana:

1.   Ana creates the e-mail message and flags it to be digitally signed.

2.   Ana’s e-mail program uses her PKI private key to encrypt the generated message hash.

3.   The e-mail message is sent to Zoey, but it is not encrypted in this example, only signed.

4.   Zoey’s e-mail program verifies Ana’s digital signature by using Ana’s mathematically related public key; if Zoey does not have Ana’s public key, she cannot verify Ana’s digital signature.

Using a public key to verify a digital signature is valid because only the related private key could have created that unique signature, so the message had to have come from that party. This is referred to as nonrepudiation. If the message is tampered with along the way, the signature is invalidated. Again, unlike symmetric encryption, there is no need to safely transmit secret keys; public keys are designed to be publicly available.
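Ana and Zoey's exchange can be sketched by combining a hash function with the toy RSA keys from earlier. The tiny primes are insecure and the hash is truncated to fit them; a real implementation would use full-size keys and a standard signature scheme:

```python
import hashlib

# Toy RSA key pair (insecure tiny primes, illustration only)
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)

def digest(message: bytes) -> int:
    # Hash the message; reduce mod n so it fits the toy key size
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    # The signer's PRIVATE key encrypts the hash: this is the digital signature
    return pow(digest(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    # Anyone with the signer's PUBLIC key can validate the hash
    return pow(signature, e, n) == digest(message)

msg = b"High priority: wire the funds"
sig = sign(msg)
print(verify(msg, sig))                         # True
print(verify(msg + b" now", sig))               # tampering invalidates the signature
```

Note that the message itself travels unencrypted, just as in the example: signing provides integrity and nonrepudiation, not confidentiality.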

Images

Data confidentiality is achieved with encryption. Data authentication and integrity are achieved with digital signatures.

For the utmost in security, data can be encrypted and digitally signed, whether it is transmitted data or data at rest (stored).

Encryption Protocols

There are many methods that can be used to secure and verify the authenticity of data. These methods are called encryption protocols, and each is designed for specific purposes, such as encryption for confidentiality and digital signatures for data authenticity and verification (also known as nonrepudiation).

IPSec

Internet Protocol Security (IPSec) secures IP traffic using encryption and digital signatures. PKI certificates, symmetric keys, and other methods can be used to implement this type of security. IPSec is flexible because it is not application specific, so if IPSec secures the communication between hosts, it can encrypt and sign network traffic regardless of the application generating the traffic. IPSec can be used as both an encryption protocol as well as a tunneling protocol, discussed in the next section.

SSL/TLS

Unlike IPSec, Secure Sockets Layer (SSL) and its successor, Transport Layer Security (TLS), are used to secure the communication of specially configured applications. Like IPSec, SSL/TLS uses encryption and authentication (signatures) to accomplish this level of security. TLS has replaced SSL; all versions of SSL are now deprecated because of known weaknesses.

Most computer people associate SSL/TLS with secure web servers, but SSL/TLS can be applied to any network software that supports it, such as Simple Mail Transfer Protocol (SMTP) e-mail servers and Lightweight Directory Access Protocol (LDAP) directory servers. SSL and TLS rely on PKI certificates to obtain the keys required for encryption, decryption, and authentication. Take note that some secured communication, such as connecting to a secured website using Hypertext Transfer Protocol Secure (HTTPS), uses public and private key pairs (asymmetric) to encrypt a session-specific key (symmetric). Most public cloud services are accessed over HTTPS.
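The hybrid approach used by HTTPS, where an asymmetric key pair protects the exchange of a symmetric session key, can be sketched by combining the two toy primitives from earlier. All names and the byte-at-a-time RSA wrapping are illustrative simplifications of a real TLS handshake:

```python
import os

# Server's toy RSA key pair (insecure tiny primes, illustration only)
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# --- Handshake: client generates a symmetric session key and sends it
# --- encrypted with the server's PUBLIC key (asymmetric step)
session_key = os.urandom(8)
wrapped = [pow(byte, e, n) for byte in session_key]

# --- Server unwraps the session key with its PRIVATE key
recovered = bytes(pow(c, d, n) for c in wrapped)

# --- Bulk data now flows under the fast symmetric session key
request = b"GET /account HTTP/1.1"
print(recovered == session_key)                          # True
print(xor(xor(request, recovered), session_key) == request)  # True
```

The design choice mirrors real TLS: asymmetric operations are slow, so they are used only to agree on a symmetric key, which then encrypts the session traffic.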

Tunneling Protocols

Tunneling is the use of encapsulation and encryption to create a secure connection between devices so that intermediary devices cannot read the traffic and so that devices communicating over the tunnel are connected as if on a local network. Tunneling creates a secure way for devices to communicate with one another over less secure networks such as the Internet. This is a great way to extend an on-premises network into the public cloud.

Encapsulation is the packaging of data within another piece of data. Encapsulation is a normal function of network devices as data moves through the TCP/IP layers. For example, layer 3 IP packets are encapsulated inside layer 2 Ethernet frames. Tunneling encapsulates one IP packet destined for the recipient into another IP packet, treating the encapsulated packet simply as data to be transmitted. The reverse process of encapsulation is de-encapsulation, where the original IP packet is reassembled from the data received by a tunnel endpoint.
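Encapsulation can be illustrated by nesting one "packet" inside the payload of another. The outer header layout below is a simplified, hypothetical format, not a real IP header:

```python
import struct

def encapsulate(inner_packet: bytes, tunnel_src: int, tunnel_dst: int) -> bytes:
    # Simplified outer header: source address, destination address, payload length
    header = struct.pack("!IIH", tunnel_src, tunnel_dst, len(inner_packet))
    return header + inner_packet           # the inner packet is treated as plain data

def de_encapsulate(outer_packet: bytes) -> bytes:
    src, dst, length = struct.unpack("!IIH", outer_packet[:10])
    return outer_packet[10:10 + length]    # endpoint reassembles the original packet

inner = b"original IP packet bound for the private network"
outer = encapsulate(inner, tunnel_src=0x0A000001, tunnel_dst=0xC0A80001)
print(de_encapsulate(outer) == inner)   # True
```

Intermediary routers see only the outer header addressed to the tunnel endpoint; the original packet rides inside as opaque payload until it is de-encapsulated.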

Tunnel endpoints are the nodes that perform encapsulation, de-encapsulation, encryption, and decryption of data in the tunnel. They send and receive the encapsulated traffic that traverses the intermediary network.

Not all tunneling protocols encrypt the data that is transmitted through them, but all do encapsulate. If connectivity that seems local is all you are looking for, then a protocol that does not encrypt could work because it will operate faster without having to perform encryption on the data. However, for most uses of tunneling, encryption is a necessity because traffic is routed over an unsecured network. Without encryption, any node along the route could reassemble the data contained in the packets.

Tunneling consumes more network bandwidth and can result in lower speeds for connections over the tunnel. Rather than transmitting the original packets as-is, network devices must take each entire packet, including its header information, and package it into new packets that travel from node to node until they reach the tunnel endpoint, where the original packets are reassembled.

Tunneling protocols are network protocols that enable tunneling between devices or sites. They include GRE, IPSec, PPTP, and L2TP. Table 10-1 compares each of these tunneling protocols.

TABLE 10-1   Tunneling Protocols Compared


GRE

Generic Routing Encapsulation (GRE) is a lightweight, flexible tunneling protocol. GRE works with multiple protocols over IP version 4 and 6. GRE is not considered a secure tunneling protocol because it does not use encryption. GRE has an optional key field that can be used for authentication using checksum authentication or keyword authentication.

IPSec

Internet Protocol Security (IPSec) is a tunneling and encryption protocol. Its encryption features were mentioned previously in this chapter. IPSec tunneling secures IP traffic using Encapsulating Security Payload (ESP) to encrypt the data that is tunneled over it using PKI certificates or asymmetric keys. Keys are exchanged using the Internet Security Association and Key Management Protocol (ISAKMP) and the Oakley protocol, which establish a security association (SA) so that endpoints can negotiate security settings and exchange encryption keys.

IPSec also offers authentication through the Authentication Header (AH) protocol. The main disadvantage of IPSec is that it encrypts the data of the original IP packet but replicates the original packet's IP header information, so intermediary devices can see the final destination within the tunnel rather than just the tunnel endpoint. IPSec functions this way because it offers end-to-end encryption, meaning that data is encrypted not merely from tunnel endpoint to tunnel endpoint but from the original source to the final destination.

PPTP

Point-to-Point Tunneling Protocol (PPTP) is a tunneling protocol that uses GRE and Point-to-Point Protocol (PPP) to transport data. PPTP has a very flexible configuration for authentication and encryption, so different implementations can utilize a variety of authentication and encryption protocols. PPP or GRE frames can be encrypted, compressed, or both. The primary benefit of PPTP is its speed and its native support in Microsoft Windows.

The most widely used variation of PPTP is used in Microsoft VPN connections. These connections use PAP, CHAP, MS-CHAP version 1, or MS-CHAP version 2 for authentication. At the time of this publication, weaknesses have been found in most of these protocols. The only authentication methods currently considered secure for PPTP are Extensible Authentication Protocol Transport Layer Security (EAP-TLS) and Protected Extensible Authentication Protocol (PEAP).

PPTP data is encrypted using the Microsoft Point-to-Point Encryption (MPPE) protocol. MPPE uses RC4 key lengths of 128 bits, 56 bits, and 40 bits. Negotiation of keys is performed using the Compression Control Protocol (CCP). At the time of this publication, weaknesses in the RC4 cipher make PPTP implementations insecure because data can be decrypted using current toolsets.

L2TP

Layer 2 Tunneling Protocol (L2TP) offers improvements over PPTP. L2TP does not include built-in encryption but can be combined with IPSec to provide encryption and authentication using Encapsulating Security Payload (ESP) and Authentication Header (AH). However, L2TP is CPU intensive when encryption is used because data must be encapsulated twice: once with L2TP and again with IPSec.

L2TP is a flexible tunneling protocol, allowing a variety of protocols to be encrypted through it. L2TP has been used to encrypt and tunnel IP, Asynchronous Transfer Mode (ATM), and Frame Relay data.

Ciphers

Recall that plaintext fed to an encryption algorithm results in ciphertext. “Cipher” is synonymous with “encryption algorithm,” whether the algorithm is symmetric (same key) or asymmetric (different paired keys). There are two categories of ciphers: block ciphers and stream ciphers. Table 10-2 lists some of the more common ciphers.

TABLE 10-2   Common Block and Stream Ciphers


Block Ciphers

Designed to encrypt chunks or blocks of data, block ciphers convert plaintext to ciphertext in bulk as opposed to one data bit at a time, either using a fixed secret key or by generating keys from each encrypted block. A 128-bit block cipher produces a 128-bit block of ciphertext. This type of cipher is best applied to fixed-length segments of data, such as fixed-length network packets or files stored on a disk.
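The fixed-block behavior can be illustrated by splitting data into 128-bit (16-byte) blocks with simple padding. This sketch shows only the block handling, not actual encryption:

```python
BLOCK_SIZE = 16  # 128 bits, the block size used by AES

def to_blocks(data: bytes) -> list:
    """Pad data to a multiple of the block size, then split it into blocks."""
    pad_len = BLOCK_SIZE - (len(data) % BLOCK_SIZE)
    data += bytes([pad_len]) * pad_len    # PKCS#7-style padding
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

blocks = to_blocks(b"exactly 20 bytes....")
print(len(blocks))                                  # 2: 20 bytes pads out to 32
print(all(len(b) == BLOCK_SIZE for b in blocks))    # True: every block is 128 bits
```

A block cipher then encrypts each 16-byte block to a 16-byte block of ciphertext, which is why block ciphers suit fixed-length data such as files on disk.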

Some block ciphers include DES, AES, RC5, DSA, and 3DES. Each of these ciphers is shown in Table 10-2.

DES   Data Encryption Standard (DES) is a symmetric block cipher that uses block sizes of 64 bits and 16 rounds of encryption. 3DES or “triple-DES” encrypts data with DES three times using three keys. It is marginally better than DES and managed to extend the life of DES for a short time. DES and 3DES are now outdated protocols. They were succeeded by AES in 2001 as the new standard for government encryption.

AES   Advanced Encryption Standard (AES) is a symmetric block cipher that uses a 128-bit block and variable key sizes of 128, 192, and 256 bits. It performs 10 to 14 rounds of encryption depending on the key size used. AES replaces DES as the new standard for government encryption.

RC5   Rivest Cipher 5 (RC5) is a symmetric block cipher used to encrypt and decrypt data. It is named for its creator, Ron Rivest, and succeeded RC4. RC5 supports variable key sizes up to 2040 bits and uses 1 to 255 rounds of encryption.

DSA   The Digital Signature Algorithm (DSA) is an asymmetric algorithm used for message or data signing and verification rather than for encrypting data. DSA creates keys of variable lengths and can create per-user keys. DSA is accepted as a federal information processing standard in FIPS 186. DSA has a maximum key length of 2048 bits and was created in 1991.

Stream Ciphers

Unlike block ciphers that work on a chunk of data at a time, stream ciphers convert plaintext into ciphertext one binary bit at a time. Stream ciphers are considerably faster than block ciphers. Stream ciphers are best suited where there is an unknown or variable amount of data to be encrypted, such as variable-length network transmissions. Two examples shown in Table 10-2, RC4 and RSA, are discussed next.

RC4   Rivest Cipher 4 (RC4) is a symmetric stream cipher used to encrypt and decrypt data. RC4 uses symmetric keys up to 128 bits in length for encryption. It is named for its creator, Ron Rivest.
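RC4 is simple enough to implement in a few lines, which makes it useful for study even though it is broken and must never be used to protect real data. The two phases below are the standard key-scheduling algorithm (KSA) and pseudo-random generation algorithm (PRGA):

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA): permute the state array S using the key
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): XOR the keystream with the data,
    # one byte at a time -- the defining behavior of a stream cipher
    i = j = 0
    out = bytearray()
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

ct = rc4(b"Key", b"Plaintext")
print(ct.hex().upper())    # BBF316E8D940AF0AD3 (published RC4 test vector)
print(rc4(b"Key", ct))     # b'Plaintext': the same function decrypts
```

Because encryption is just XOR against a keystream, applying the cipher twice with the same key recovers the plaintext, another hallmark of symmetric stream ciphers.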

TKIP   Temporal Key Integrity Protocol (TKIP) is a protocol specified in IEEE 802.11i that enhances the WEP/RC4 encryption in wireless networks. It was created in 2002. TKIP takes a PSK called the secret root key and combines it with a unique random or pseudorandom value called an initialization vector. TKIP also tracks the order of pieces of encrypted data using a sequence counter. This helps protect against an attack where previous ciphertext is provided to a system to try to perform a transaction twice in what is known as a replay attack. Lastly, TKIP uses an integrity checking function called the message integrity code (MIC) to verify ciphertext in the communication stream.

RSA   Rivest, Shamir, Adleman (RSA) is an asymmetric cipher used to encrypt and decrypt data. It is named after its three creators, Ron Rivest, Adi Shamir, and Leonard Adleman, and was created in 1977. RSA uses asymmetric key pairs up to 4096 bits in length for encryption.

Images

Stream ciphers are faster than block ciphers.

Storage Security

Storage security is concerned with the security of data at rest or when it is stored on a cloud system. There are a large number of cloud services specifically dedicated to storage of data, such as Dropbox, Google Drive, Amazon Drive, Microsoft OneDrive, and SpiderOak. For these services, storage is the business. For other cloud services, storage is one of the core building blocks on which the cloud service is architected and it is important to implement effective security at this level.

From the cloud consumer perspective, storage security is built into the product offering, so cloud consumers do not need to implement these controls. It is, however, still important to understand cloud security controls in order to ensure that the cloud service meets organizational security and technology stipulations, contractual agreements, and regulatory requirements. Storage security is vital when setting up a private cloud or when providing cloud services to others since storage is the underlying component behind cloud systems.

Granular Storage Resource Controls

Based on the storage technology utilized in the cloud system, security mechanisms can be put in place to limit access to resources over the network. When using a storage area network (SAN), two techniques for limiting resource access are LUN masking and zoning. See Chapter 3 if you need a review of SANs and LUNs.

Images   LUN masking   LUN masking limits access to storage resources, namely logical unit numbers (LUNs), by applying a LUN mask either at the host bus adapter or at the switch level.

Images   Zoning   SANs can also utilize zoning, which is a practice of limiting access to LUNs that are attached to the storage controller.

LUN masking and zoning can be used in combination. Storage security is best implemented in layers, with data having to pass multiple checks before arriving at its intended target. All the possible security mechanisms, from software to operating system to storage system, should be implemented and configured to architect the most secure storage solution possible.

Securing Storage Resources

Data is the most valuable component of any cloud system. It is the reason that companies invest in these large, expensive infrastructures or services: to make certain that their users have access to the data they need to drive their business.

Storage is such a critical resource to the users of cloud models that special care must be taken with its security to make sure resources are available and accurate, and accessible for users who have been authorized for access.

Digital and Information Rights Management

Digital rights management (DRM) is a set of technologies that enforces specific usage limitations on data, such as preventing a document from being printed or e-mailed, or photos from being downloaded from a phone app. DRM is typically associated with consumer applications.

Similarly, information rights management (IRM) is a set of technologies that enforces specific usage limitations on data throughout enterprise systems, including cloud and distributed systems.

Protected Backups

Backups are copies of live data that are maintained in case something happens that makes the live dataset inaccessible. Because it is a copy of valuable data, it needs to have the same protections afforded it that the live data employs. It should be encrypted, password protected, and kept physically locked away from unauthorized access. See Chapter 12 for more information on backups and backup strategies.

CERTIFICATION OBJECTIVE 10.02

Network Security

Network security is the practice of protecting the usability, reliability, integrity, and safety of a network infrastructure and of the data traveling along it. As in many other areas, security in cloud computing has similarities to traditional computing models. If deployed without evaluating security, cloud systems may deliver against their functional requirements, but they will likely have many gaps that could lead to a compromised system.

Security Systems

IT administrators and security teams have many tools at their disposal, including dedicated security systems and vendor applications, and all of these must be compatible with existing systems and services.

Security systems are designed to protect the network against certain types of threats. One solution alone will not fully protect the company, because attackers have multiple avenues for exploitation. Security systems should be used in conjunction with one another to protect against these avenues and to provide layers of security. If an attacker bypasses one layer, they still cannot exploit the network without bypassing the next layer, and so forth. Security systems overlap only in some places, so having 20 security systems does not mean that the organization has 20 layers.

The security systems mentioned here, and many others, can exist either as networking/security hardware that is installed in a data center or as virtual appliances that are placed in hypervisors in much the same way as servers. Cloud environments can deploy virtual appliances to their networks easily with this method. Virtual appliances were covered in the “Systems Maintenance” section of Chapter 9. The following sections cover three important security systems: firewalls, intrusion detection/prevention systems, and SIEM systems.

Firewall

A firewall is used to control traffic. Firewalls operate at OSI layer 3 or above, meaning that, at a minimum, they can analyze packets and implement filtering on packets based on the information contained in the packet header, such as source and destination IP addresses, lengths, and sizes. However, most firewalls operate at a much higher level.

Firewalls in a cloud environment are typically virtual appliances or software services offered to cloud consumers. Common public cloud providers offer layer 4 firewalls such as the Azure Network Security Group (NSG) and AWS EC2 Security Group. These are implemented simply by configuring them on an administrative dashboard.

Such cloud firewalls perform filtering, to a large extent, with access control lists (ACLs). An ACL comprises one or more access control entries (ACEs), each of which specifies the access rights of an individual principal or entity.

The firewall processes the ACL in order from the first ACE to the last. For example, the first ACE might allow HTTP traffic to the web server, the second might allow SMTP traffic to the e-mail server, and the third might deny all traffic. If the firewall receives DNS traffic, it goes through the rules in order: ACE 1 is not matched because this is not HTTP traffic, ACE 2 is not matched because this is not SMTP traffic, and ACE 3 is matched because it covers anything else, so the firewall drops the packet. In the real world, ACLs are much more complex. Some firewall capabilities include the following:

Images   NAT/PAT   Network address translation (NAT) maps the private addresses used by internal devices to public addresses, allowing all of the organization’s employees to access the Internet with the use of a single public IP address.

Firewalls can be configured with multiple IP addresses, and NAT can be used to translate external IP addresses with internal IP addresses so that the actual IP address of the host is hidden from the outside world.

Port address translation (PAT) allows for mapping of private IP addresses to public IP addresses as well as for mapping multiple devices on a network to a single public IP address. PAT enables the sharing of a single public IP address between multiple clients trying to access the Internet. Each external-facing service has a port associated with it, and the PAT service knows which ones map to which internal servers. It repackages the data received, on the outside network to the inside network, addressing it to the destination server, and it makes this determination based on the port the data was sent to.

Images   Port/service   The firewall can be configured to filter out traffic that is not addressed to an open port. For example, the firewall may be configured to allow HTTPS traffic on port 443 and SMTP traffic on port 25. If it receives HTTP data on port 80, it will drop the packet.

Images   DMZ   A demilitarized zone is a network segment that has specific security rules on it. DMZs are created to segment traffic. A typical scenario is to place public-facing web servers in a DMZ. ACLs are then defined to allow web traffic to the web servers but not anywhere else. The web servers may need to connect back to the database server on an internal cloud segment, so ACLs allow the web servers in the DMZ to talk to the database server in the internal cloud segment, but connections from the outside would not be able to talk to the database server directly.

Images   Stateful packet inspection   Stateful packet inspection evaluates whether a session has been created for a packet before it will accept it. This is similar to the way an accounts receivable department looks to see if they issued a purchase order before paying an invoice. If there is no purchase order, no check is sent.

Images   IP spoofing detection   Spoofing is the modification of the source IP address to obscure the origin of a data transmission. Attackers will sometimes spoof the sending address, giving it a local value, to make data appear as if it came from another device on the local network. However, the firewall knows which addresses exist internally, because they are contained in its routing table, so if it sees data arriving from outside the network with an internal sender address, it knows that data is spoofed.

Spoofing is not limited to local addresses. Spoofing is often used in e-mail phishing to make e-mails appear as if they originated from a company’s servers when they actually came from an attacker. Spoofing is also used in man-in-the-middle attacks so that data is routed through a middleman.
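The ordered, first-match ACE processing described earlier (allow HTTP, allow SMTP, deny everything else) can be sketched in a few lines of Python. This is an illustrative model only; the rule fields, packet representation, and implicit-deny fallback are assumptions made for the example, not a real firewall API.

```python
# Hypothetical first-match ACL evaluation, modeled on the
# HTTP/SMTP/deny-all example in the text. All fields are invented.

ACL = [
    {"action": "allow", "protocol": "tcp", "port": 80},    # ACE 1: HTTP to web server
    {"action": "allow", "protocol": "tcp", "port": 25},    # ACE 2: SMTP to e-mail server
    {"action": "deny",  "protocol": "any", "port": None},  # ACE 3: deny all other traffic
]

def evaluate(packet, acl=ACL):
    """Return the action of the first ACE that matches the packet."""
    for ace in acl:
        proto_match = ace["protocol"] in ("any", packet["protocol"])
        port_match = ace["port"] is None or ace["port"] == packet["port"]
        if proto_match and port_match:
            return ace["action"]
    return "deny"  # implicit deny if no ACE matches

# DNS traffic (UDP port 53) falls through ACE 1 and ACE 2 to ACE 3.
print(evaluate({"protocol": "udp", "port": 53}))  # deny
print(evaluate({"protocol": "tcp", "port": 80}))  # allow
```

Because processing stops at the first match, the order of ACEs matters: placing the deny-all entry first would block everything.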

Firewalls often exist on the perimeter of the network, where they can screen the data that is going to and coming from the Internet. There is also a type of firewall called the host-based firewall that resides on an endpoint to screen the data that is received by the endpoint. If a device is not running as a web server, there is no reason for it to process web traffic. Web traffic sent to it is either malicious or sent by mistake, so it is either a waste of time to look at it or, more likely, a threat. The host-based firewall drops this traffic before it can do harm to the device.

Host-based firewalls can be configured based on policy so that many machines can use the same configuration. Chapter 9 covered how to automate configuring firewalls with scripting. You can also use Windows group policies to configure the Windows Defender firewall that comes with Windows. Many antivirus vendors bundle a host-based firewall with their products, and these can be managed with a central management application or cloud portal if the licensing for that application or portal has been purchased.

WAF   One specialized type of firewall is called a web application firewall (WAF). A WAF is a device that screens traffic intended for web applications. WAFs understand common web application attacks such as cross-site scripting (XSS) and SQL injection and can inspect traffic at the application layer of the OSI model.

Cloud Access Security Broker   Organizations may choose to have a third party screen traffic for cloud or on-premises systems. A cloud access security broker (CASB) is a cloud service that operates as the gateway between external users and other cloud systems. The CASB screens incoming traffic for malicious content and anomalous behavior and prevents that traffic from being delivered to the cloud systems it services.

Distributed Denial of Service (DDoS)   A distributed denial of service (DDoS) attack targets a single system simultaneously from multiple compromised systems to make that system unavailable. The attack is distributed because it uses thousands or millions of machines that could be spread across the globe. It denies services or disrupts availability by overwhelming the system so that it cannot respond to legitimate connection requests. Cloud firewalls or a CASB can often prevent DDoS traffic from reaching organizational cloud servers because the cloud vendor or CASB has the bandwidth to withstand the attack. However, some DDoS attacks, such as those performed by the Mirai botnet, have overwhelmed even some of the largest networks.

IDS/IPS

An intrusion detection system (IDS) or an intrusion prevention system (IPS) inspects traffic to identify malicious activity. IDSs and IPSs do this through two methods: signatures and heuristics. In a cloud environment, IDSs and IPSs are typically virtual appliances deployed from the cloud provider marketplace or third-party software services that are supported by the cloud provider.

Signatures are descriptions of what malicious data looks like. IDSs and IPSs review the data that passes through them and take action if they find data that matches a signature. The second screening method, heuristics, looks for patterns that appear malicious, such as a large number of failed authentication attempts. Heuristic detection operates on an understanding of what constitutes normal on the network. A baseline is configured and periodically updated so that the IDS or IPS understands what to expect from network traffic. Anything else is an anomaly, and the device will take action.
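A heuristic check of this kind can be sketched very simply. The baseline value and tolerance multiplier below are invented for illustration; real systems learn baselines from observed traffic and apply far more sophisticated statistics.

```python
# Toy heuristic anomaly check, assuming a learned baseline of "normal"
# failed-login counts per hour. Numbers are invented for the example.

BASELINE_FAILED_LOGINS_PER_HOUR = 5

def is_anomalous(failed_logins, baseline=BASELINE_FAILED_LOGINS_PER_HOUR,
                 tolerance=3.0):
    """Flag counts well above the baseline (here, more than 3x) as anomalies."""
    return failed_logins > baseline * tolerance

print(is_anomalous(4))    # False: within the normal range
print(is_anomalous(200))  # True: likely a brute-force attempt
```

An IDS would log and alert on the anomaly; an IPS could additionally block the offending source.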

So far, we have only said that these devices take action. IDSs and IPSs differ in how they react to the things they find. An IDS sends alerts and logs suspicious traffic but does not block it. An IPS can send alerts and log, but it can also block, queue, or quarantine the traffic. An IPS is sometimes generically referred to as an intrusion detection and prevention (IDP) system; the two terms are used in the same way.

IDSs and IPSs need to be in a position to collect network data. There are two configurations, network-based and host-based. A network-based IDS (NIDS) or IPS (NIPS) is placed next to a firewall, or built into a perimeter firewall. A firewall is an ideal place because it processes all traffic going between the inside network and the outside. If the NIPS is a separate device, it will need to have the traffic forwarded to it and then relay back instructions, or it will need to be the first device to screen the information.

The second type of IDS or IPS is the host-based IDS (HIDS) or IPS (HIPS). These devices reside on endpoints such as cloud servers or cloud virtual desktop infrastructure (VDI). A NIDS or NIPS can collect a lot of data, but it does not see everything because not all data passes through the perimeter. Consider malware that has infected a machine. It may reach out to other computers on the network without touching the perimeter device. A HIDS or HIPS would be able to identify this traffic when a NIDS or NIPS would not.

You can choose to implement both host-based and network-based IDS or IPS. They will still operate independently but also send data to a collection point so that the data between the devices can be correlated to produce better intelligence on what is normal and abnormal.

In the cloud, firewalls and IDS/IPS functionality can be achieved with cloud-specific configuration settings or by deploying a virtual appliance from the cloud provider marketplace.

SIEM

A security information and event management (SIEM) system archives logs and reviews the logs in real time against correlation rules to identify possible threats or problems. Additionally, a SIEM system monitors baselines and heuristics to identify issues that crop up before they turn into bigger issues. For example, a number of informational events, when analyzed together, may present a bigger issue that can be resolved before it creates problems for end users. Without such a system, administrators would likely not notice the events until errors or warnings appeared in the log, which would probably be the same time that users of the system also experience issues.
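The correlation idea can be illustrated with a minimal sketch: events that are individually only informational trigger an alert when they recur together. The event types, severity labels, and threshold below are invented for the example; real SIEM correlation rules also consider time windows, sources, and sequences.

```python
# Minimal SIEM correlation-rule sketch: repeated informational events,
# taken together, indicate a bigger issue worth investigating.
# Event names and the threshold are invented for illustration.

from collections import Counter

def correlate(events, threshold=3):
    """Return event types whose informational occurrences reach the threshold."""
    counts = Counter(e["type"] for e in events if e["severity"] == "info")
    return [etype for etype, n in counts.items() if n >= threshold]

events = [
    {"type": "disk_retry", "severity": "info"},
    {"type": "disk_retry", "severity": "info"},
    {"type": "disk_retry", "severity": "info"},
    {"type": "login_ok",   "severity": "info"},
]
print(correlate(events))  # ['disk_retry'] -> investigate before users notice
```

No single `disk_retry` event would warrant attention, but three of them together suggest a failing disk that can be replaced proactively.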

It is much easier to fix issues before end users discover them and begin complaining. IT administrators tend to think more clearly and logically when not under pressure. SIEM can change the way IT administrators investigate issues, eventually causing them to focus more on proactive indicators of an issue rather than fighting fires.

Security Applications

Vendors have created a vast number of applications to solve many security challenges, and many security applications are available as cloud services. Here are some vendor applications or application components you should be familiar with:

Images   The application programming interface (API)

Images   Antivirus and antimalware software

Images   The command-line interface (CLI)

Images   The web graphical user interface (GUI)

Images   Cloud portals

APIs

Application programming interfaces (APIs) are used to expose functions of an application or cloud service to other programs and services. APIs allow for expansion of the original application’s scope, and they are used to integrate multiple applications together as part of an organization’s cloud or security operations.

A vendor will create an API for its application and then release documentation so that developers and integrators know how to utilize that API. For example, Office 365, a cloud-based productivity suite that includes an e-mail application, has an API for importing and exporting contacts. Salesforce, a cloud-based customer relationship management (CRM) application, could integrate with Office 365 through that API so that contacts could be updated based on interactions in the CRM tool.

Antivirus/Antimalware

Antivirus or antimalware looks at actions on a system to identify malicious activity. Antivirus or antimalware does this through the same two methods used by an IDS/IPS, signatures and heuristics. The terms antivirus and antimalware are used interchangeably. Both antivirus and antimalware software detect viruses, trojans, bots, worms, and malicious cookies. Some antivirus or antimalware software also identify adware, spyware, and potentially unwanted applications.

Signatures are descriptions of what malicious actions look like. Antivirus and antimalware software reviews the data in memory, scans data on the system’s attached disks, and takes action if it finds data that matches a signature.

The second screening method, heuristics, looks for patterns that appear malicious, such as a user mode process trying to access kernel mode memory addresses. Just as with an IDS or IPS, antivirus or antimalware heuristics operates on an understanding of what constitutes normal on the device. A baseline is configured and periodically updated so that the antivirus or antimalware software understands what to expect from the system. Anything else is an anomaly, and the software will take action.

Many antivirus and antimalware vendors have a central management application or cloud portal option that can be purchased. These tools or portals are very valuable for ease of administration. Each antivirus or antimalware client reports in to the portal, and administrators can view all machines in a set of dashboards. Dashboards show things like machines with outdated signatures, number of viruses detected, virus detection rates, virus types, items in quarantine, number of files scanned, and much more. These administration tools usually allow administrators to deploy antivirus or antimalware software to endpoints without walking to each machine.

Some antivirus and antimalware tools come with other services bundled in. These include host-based firewalls, data loss prevention, password vaults, e-mail scanners, web scanners, and other features.

CLI

Applications can have different interfaces, and there are pros and cons to the interface depending on how the application is going to be used. The command-line interface (CLI) is an interface that is text-based. The user must type instructions to the program using predefined syntax in order to interact with the program. For example, Microsoft Azure and Amazon Web Services each have their own CLI.

CLI tools can seem cumbersome to those used to graphical user interfaces (GUIs), but their power comes with the ability to script them. A task that could take 30 minutes of clicking in a GUI to accomplish could be achieved in a few minutes with a script because all the functions are preprogrammed with a series of commands. CLI tools are usually more lightweight, so they are easier to deploy and have less of a burden on systems.

Web GUI

Graphical user interfaces are very easy to work with. Simply point and click on something to take an action. Users can get to work on an application very quickly if it has a GUI. Web GUIs offer that functionality from a web browser. The advantage of a web GUI is that tools can be administered from any machine that can connect to the web server and has the authorization to administer the tool.

Cloud Portal

A cloud portal is an interface that is accessed over the Internet to manage cloud tools or integrate with other tools. The portal displays useful dashboards and information for decision-making and ease of administration.

Cloud portals are like web GUIs, but they can be accessed from almost anywhere. Cloud portals are great for administering cloud tools and for integrating with a variety of legacy tools that would have required logging into many different web GUIs, launching traditional applications, or sending SSH commands to a CLI.

Impact of Security Tools on Systems and Services

Security tools can affect the systems they are installed on. Antivirus software could flag legitimate applications as malicious, and security tools may require additional steps to log on. Security tools can consume large amounts of CPU or memory resources, and logging tools utilize a large amount of storage. Since cloud resource utilization often determines how much the cloud consumer is charged, resource utilization directly impacts the bottom line.

Some security tools do not play well together. For example, it is never a good idea to install multiple antivirus tools on the same machine. Each tool will interpret the other tool as malicious because they each are scanning large numbers of files and trying to read memory. Some security tools may accidentally flag other security tools as malicious and cause problems running those tools. If this happens, whitelist the application in the security tool that blocks it. Some security tools can also cause issues with backup jobs because they lock files and folders while scanning them, forcing the backup job to wait or attempt a snapshot.

CERTIFICATION OBJECTIVE 10.03

Access Control

Access control is the process of determining who or what should be able to view, modify, or delete information. Controlling access to network resources such as files, folders, databases, and web applications is reliant upon effective access control techniques. Access control is accomplished by authenticating and authorizing both users and hosts.

Authentication means that an entity can prove that it is who or what it claims to be. Authorization means that an entity has access to all of the resources it is supposed to have access to, and no access to the resources it is not.

Together, the processes that determine who a claimant is, what they are allowed to do, and that log activity for later auditing are sometimes referred to as AAA, which stands for authentication, authorization, and accounting.

This section includes coverage of the following access control concepts:

Images   Identification

Images   Authentication

Images   Authorization

Images   Access control methodologies

Images   Multifactor authentication

Images   Single sign-on (SSO)

Images   Federation

Identification

Identification is the process of claiming an identity. In medieval days, a guard would ask, “Who goes there?” if a pair of strangers approached the gate, in response to which the strangers would reply, “Scott Wilson and Eric Vanderburg.” Of course, the guard would not just take the strangers’ word for it, and neither should a computer when a user or service tries to connect to it or log on. The next step is authentication.

Authentication

Authentication is the process of verifying that an entity requesting access to a resource is who or what it claims to be. When you log on to a computer, you present credentials that validate your identity, much like a driver’s license or passport identifies you to a police officer or customs officer. And just as the officer compares the photo on the ID to your face, the computer compares the credentials you offer with the information on hand to determine if you are who you claim to be. However, even if the computer trusts that you are who you say you are, that doesn’t necessarily mean you are allowed to be there. The next step is to determine whether the user identity is allowed access to the resource.

Authorization

Authorization determines if the authenticated individual should have the requested access to the resource. For example, after this book is published, if we authors go to a club to celebrate, bypass the line, and present our credentials to the bouncer, he or she will compare our names to a list of authorized individuals. If the owner of the club is a huge fan of this book, she might have put our names on the list, in which case, we are granted access to the club. If not, the bouncer will tell us to get lost. Computer systems compare the identity to the resource ACL to determine if the user can access the resource.

ACLs define the level of access, such as read-only (RO), modify (M), or full control (FC). Read-only access allows a user to view the data but not make changes to it. Modify allows the user to read the data and change it. Full control allows the user to read data, change it, delete it, or change the permissions on it.
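The three access levels can be thought of as cumulative sets of rights, as the following sketch shows. The individual right names (`read`, `write`, and so on) are invented for illustration; actual file systems define their own permission bits.

```python
# Sketch of the RO/M/FC access levels described above as cumulative
# sets of rights. Right names are invented for the example.

LEVELS = {
    "RO": {"read"},
    "M":  {"read", "write"},
    "FC": {"read", "write", "delete", "change_permissions"},
}

def allowed(level, operation):
    """Check whether a given access level permits an operation."""
    return operation in LEVELS.get(level, set())

print(allowed("RO", "write"))               # False: read-only cannot modify
print(allowed("M", "write"))                # True: modify can read and change
print(allowed("FC", "change_permissions"))  # True: only full control can re-permission
```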

Images

Devices may exchange authorization data using the Security Assertion Markup Language (SAML).

Organizations should implement approval procedures and access policies along with authorization techniques. Approval and access policy are discussed next.

Approval

Approval is an audit of the authorization function. In the bouncer example, imagine that the bouncer is a fan of this book, not the owner. A supervisor who is watching over a group of club staff members, including the bouncer, sees the bouncer letting us into the club and decides to check the list to verify that Scott Wilson and Eric Vanderburg are on it. If the supervisor finds that we are not on the list or sees our names written in with the bouncer’s handwriting, sanctions are soon to follow, and we might not be welcome there anymore.

Approval, when implemented in a computer setting, would see a connection from a new user who is on the ACL. Since the user has not logged in before, a second authentication would take place, such as sending a text to the user’s phone or an e-mail to their inbox. The user would enter the code from the text or e-mail to prove that they know the password from the first authentication and that they have access to the phone or e-mail as well. In future attempts, the approval step would not be needed since the user has logged in before. The concept of approval operates on a level of trust, and this can be adjusted depending on organizational preferences. One company could decide that approval will take place not just for new connections but also every fifth login or every two weeks, whichever comes first. Implementations are quite flexible.
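Such a trust policy can be sketched as a simple step-up decision. The field names and the specific rules (first login, every fifth login, every two weeks) are taken from the example above, but the record layout is an assumption made for illustration.

```python
# Hedged sketch of the approval trust policy described above: step up
# to a second factor for first-time logins, every fifth login, or at
# least every two weeks. Field names are invented for the example.

from datetime import datetime, timedelta

def needs_approval(user, now=None):
    """Decide whether this login requires a second authentication step."""
    now = now or datetime.utcnow()
    if user["login_count"] == 0:
        return True  # new connection: always require approval
    if user["login_count"] % 5 == 0:
        return True  # every fifth login
    if now - user["last_approval"] > timedelta(weeks=2):
        return True  # at least every two weeks
    return False

user = {"login_count": 3,
        "last_approval": datetime.utcnow() - timedelta(days=1)}
print(needs_approval(user))  # False: recently approved, not a fifth login
```

Tightening or loosening the policy is just a matter of changing the constants, which is what makes such implementations flexible.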

Access Policy

Access policy is the governing activities that establish authorization levels. Access policy determines who should have access to what. Access policy requires at least three roles. The first is the authorizer. This is the person or group that can define who has access. The second is the implementer. This is the person or group that assigns access based on an authorization. The third role is the auditor. This person or group reviews existing access to verify that each access granted is authorized.

Here is how access policy plays out on the job: The organization typically defines access based on a job role. In this example, human resources (HR) is the authorizer, IT is the implementer, and audit is the auditor. When a new person is hired for a job in marketing, HR would notify IT of the new hire, their name, start date, and that they are in marketing. IT would then grant them access to the systems that others in marketing have access to. This is typically accomplished by creating the user account and then adding that account to an appropriate group.

Auditors would routinely review group memberships to determine if they match what has been defined by HR. Any inconsistencies would be brought to the attention of management. Access policies have a lifetime; access is not provided forever, and at some point access rights are revoked. In this example, HR would notify IT to revoke access when employees are terminated, and IT would disable or remove the account. On the next audit cycle, the auditor would verify that the account had been disabled.

Federation

Federation uses SSO to authorize access for users or devices to potentially many very different protected network resources, such as file servers, websites, and database applications. The protected resources could exist within a single organization or between multiple organizations.

For business-to-business (B2B) relationships, such as between a cloud customer and a cloud provider, a federation allows the cloud customer to retain their own on-premises user accounts and passwords that can be used to access cloud services from the provider. This way the user does not have to remember a username and password for the cloud services as well as for the local network. Federation also allows cloud providers to rent, on demand, computing resources from other cloud providers to service their clients’ needs.

Here is a typical B2B federation scenario (see Figure 10-3):

FIGURE 10-3   An example of a B2B federation at work

Images

1.   User Bob in company A attempts to access an application on web application server 1 in company B.

2.   If Bob is not already authenticated, the web application server in company B redirects Bob to the federation server in company B for authentication.

3.   Since Bob’s user account does not exist in company B, the federation server in company B sends an authentication redirect to Bob.

4.   Bob is redirected to the company A federation server and gets authenticated since this is where his user account exists.

5.   The company A federation server returns a digitally signed authentication token to Bob.

6.   Bob presents the authentication token to the application on web application server 1 and is authorized to use the application.
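Steps 5 and 6 hinge on the application trusting a token it did not issue itself. The sketch below illustrates that idea with an HMAC over a shared secret; real federations use SAML assertions signed with PKI certificates, and the secret, token format, and function names here are invented purely for illustration.

```python
# Simplified signed-token sketch for steps 5-6 above: company A's
# federation server signs an assertion, and company B's application
# verifies it. Real federations use SAML with PKI signatures; HMAC
# with a shared secret stands in here purely for illustration.

import hashlib
import hmac

SHARED_SECRET = b"federation-trust-key"  # invented for the example

def issue_token(username, secret=SHARED_SECRET):
    """Company A: sign the username to produce an authentication token."""
    sig = hmac.new(secret, username.encode(), hashlib.sha256).hexdigest()
    return f"{username}:{sig}"

def verify_token(token, secret=SHARED_SECRET):
    """Company B: accept the token only if the signature checks out."""
    username, sig = token.rsplit(":", 1)
    expected = hmac.new(secret, username.encode(), hashlib.sha256).hexdigest()
    return username if hmac.compare_digest(sig, expected) else None

token = issue_token("Bob")         # step 5: company A signs the assertion
print(verify_token(token))         # step 6: company B verifies -> Bob
print(verify_token("Bob:forged"))  # tampered token -> None
```

Because company B only needs to verify the signature, Bob's password never leaves company A, which is the point of the federation.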

Access Control Methodologies

Several methodologies can be used to assign permissions to users so that they can use network resources. These methods include mandatory access control (MAC), discretionary access control (DAC), and non-discretionary access control (NDAC). NDAC consists of role-based access control (RBAC) and task-based access control (TBAC). RBAC, MAC, and DAC are compared in Table 10-3.

TABLE 10-3   Comparison of Access Control Methods

Images

Additionally, as systems become more complex and distributed, the policies, procedures, and technologies required to control access privileges, roles, and rights of users across a heterogeneous enterprise can be centralized in an identity and access management (IAM) system.

Mandatory Access Control

The word mandatory is used to describe this access control model because permissions to resources are controlled, or mandated, by the operating system (OS) or application, which looks at the requesting party and their attributes to determine whether or not access should be granted. These decisions are based on configured policies that are enforced by the OS or application.

With mandatory access control (MAC), data is labeled, or classified, in such a way that only those parties with certain attributes can access it. For example, perhaps only full-time employees can access a specific portion of an intranet web portal. Alternatively, perhaps only human resources employees can access files classified as confidential.

Discretionary Access Control

With the discretionary access control (DAC) methodology, the power to grant or deny user permissions to resources lies not with the OS or an application but rather with the data owner. Protected resources might be files on a file server or items in a specific web application.

Images

Most network environments use both DAC and RBAC; the data owner can give permissions to the resource by adding a group to the ACL.

There are no security labels or classifications with DAC; instead, each protected resource has an ACL that determines access. For example, we might add user RayLee with read and write permissions to the ACL of a specific folder on a file server so that she can access that data.

Non-Discretionary Access Control

With the non-discretionary access control (NDAC) methodology, access control decisions are based on organizational rules and cannot be modified at the discretion of non-privileged users. NDAC scales well because access control rules are not resource specific and can be applied across the board to new resources such as servers, computers, cloud services, and storage as those systems are provisioned.

NDAC has been implemented in the role-based access control (RBAC) and task-based access control (TBAC) methodologies. RBAC is by far the most popular, but both are discussed in this section. RBAC relies on group or role memberships to determine access while TBAC relies on the task that is being performed to make access decisions.

Role-Based Access Control   For many years, IT administrators, and now cloud administrators, have found it easier to manage permissions to resources by using groups, or roles. This is the premise of RBAC. A group or role has one or more members, and that group or role is assigned permissions to a resource.

Permissions are granted either implicitly or explicitly. Any user placed into a group or role inherits its permissions; this is known as implicit inheritance. Granting permissions to individual users is considered explicit permission assignment, and it does not scale as well in larger organizations as RBAC does. RBAC is implemented in different ways depending on where it is being performed. Solutions such as IAM can manage identity across an enterprise, including on-premises and cloud systems, by tying generalized IAM groups to locally defined groups. IAM can greatly reduce the time it takes to provision or remove users or to change their roles across an enterprise.

Sometimes the groups or roles in RBAC are defined by cloud vendors through an IAM solution such as AWS IAM. RBAC can also be applied at the operating system level, as in the case of a Microsoft Windows Active Directory group. RBAC can be applied at the application level, as in the case of Microsoft SharePoint Server roles. In the cloud, roles may be defined by the cloud provider if cloud consumers are not using provider IAM solutions.

To illustrate how cloud-based RBAC would be implemented using an IAM solution, consider the following commands, which can be run with the AWS CLI. AWS IAM can be managed through the web GUI or via the CLI. This first command creates a group called managers:

Images

The following command assigns the administrator policy to the managers group. AWS IAM has policies that package together a set of permissions to perform tasks. The administrator policy provides full access to AWS.

Images

We can then create a user named EricVanderburg with the following command:

Images

Last, we add the user EricVanderburg to the managers group with this command:

Images
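The logic behind those four steps (create a group, attach a policy, create a user, add the user to the group) can be modeled in a few lines. This is an illustrative in-memory sketch, not the AWS API: the function names and the policy label are invented, and the real operations are AWS CLI or API calls.

```python
# In-memory model mirroring the four IAM steps described above.
# Illustrative only; names and the policy label are invented.

groups, users = {}, {}

def create_group(name):
    groups[name] = {"policies": set(), "members": set()}

def attach_group_policy(group, policy):
    groups[group]["policies"].add(policy)

def create_user(name):
    users[name] = {}

def add_user_to_group(user, group):
    groups[group]["members"].add(user)

def user_policies(user):
    """Policies a user inherits implicitly through group membership."""
    return {p for g in groups.values() if user in g["members"]
            for p in g["policies"]}

create_group("managers")
attach_group_policy("managers", "AdministratorAccess")
create_user("EricVanderburg")
add_user_to_group("EricVanderburg", "managers")
print(user_policies("EricVanderburg"))  # {'AdministratorAccess'}
```

Note that the user was never granted a policy directly; the access arrives implicitly through the group, which is the essence of RBAC.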

Task-Based Access Control   The TBAC methodology is a dynamic method of providing access to resources. It differs greatly from the other NDAC methodology, RBAC, in that it is not based on subjects and objects (users and resources). TBAC was created around the concept of least privilege. In other models, a user might have the ability to access a reporting system, but they use that system only once a month. For the majority of each month, the user has more access than they require. In TBAC, users have no access to resources by default and are only provided access when they perform a task requiring it, and access is not retained after the task is complete.

TBAC systems provide access just as it is needed and are usually associated with workflows or transactions. TBAC can also be efficient because tasks and their required access can be defined when new processes are defined and they are independent of who performs them or how many times they are performed.
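The grant-during-task, revoke-after pattern maps naturally onto a scoped construct such as a Python context manager, as the hedged sketch below shows. The task and resource names are invented for illustration; real TBAC systems tie grants to workflow or transaction engines.

```python
# TBAC sketch: access exists only for the duration of a task, granted
# when the task starts and revoked when it completes. Names invented.

from contextlib import contextmanager

active_grants = set()

@contextmanager
def task(user, resource):
    """Grant access for the duration of a task, then revoke it."""
    grant = (user, resource)
    active_grants.add(grant)          # granted only because the task needs it
    try:
        yield
    finally:
        active_grants.discard(grant)  # revoked when the task completes

def has_access(user, resource):
    return (user, resource) in active_grants

print(has_access("alice", "reports"))      # False: no standing access
with task("alice", "reports"):
    print(has_access("alice", "reports"))  # True: only while the task runs
print(has_access("alice", "reports"))      # False: revoked afterward
```

This is least privilege in its purest form: outside the task, the user from the monthly-reporting example holds no access at all.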

Multifactor Authentication

Authentication means proving who (or what) you are. Authentication can be done with the standard username and password combination or with a variety of other methods.

Some environments use a combination of two or more of the three categories of authentication; this is known as multifactor authentication (MFA). Possessing a debit card, along with knowledge of the PIN, constitutes multifactor authentication. Combining these authentication methods is considered much more secure than single-factor authentication.
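The debit card example combines a knowledge factor with a possession factor, and the check can be sketched as requiring both to pass. The values and field choices below are invented for illustration; a real possession factor would use a time-based one-time code rather than a static comparison.

```python
# Sketch of combining two authentication factors, as in the debit
# card + PIN example. All values are invented for illustration.

def verify_mfa(pin_entered, pin_on_file, otp_entered, otp_sent):
    """Both the knowledge factor and the possession factor must pass."""
    knows = pin_entered == pin_on_file  # something you know (PIN)
    has = otp_entered == otp_sent       # something you have (code on phone)
    return knows and has

print(verify_mfa("4921", "4921", "88312", "88312"))  # True: both factors pass
print(verify_mfa("4921", "4921", "00000", "88312"))  # False: one factor fails
```

An attacker who steals the PIN alone, or the phone alone, still fails the combined check, which is why MFA is stronger than any single factor.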

A Real-World Look at Cloud RBAC

We were asked to help a company where a disgruntled employee had defaced the company website. The employee was terminated, but the company owner wanted to make sure that it would not happen again.

The company had started small, and the owner had always used cloud services to help the company grow quickly. The owner had set up cloud services on AWS, and the AWS username and password were stored on the company intranet for people to use when they needed to access AWS for different purposes. We conducted an assessment to identify the tasks that were performed on AWS and the people who performed those tasks.

Originally, the owner had done it all, but now there was an administrator to perform backups, several programmers to create software, a QA team to test the software, and helpdesk personnel to reset passwords and make front-end changes for trouble tickets. There were also some administrative users who managed payments for the account and a manager who approved new features.

As you can see, there were quite a few different roles, but everyone was using the same account. We provisioned accounts for each person who needed access to AWS. Accounts utilized the user's name as part of the naming convention, and users were instructed not to share their passwords with others. The password for the main AWS account was changed, and the owner stored that password in a safe at the company offices. The owner was also issued a named account.

We then created roles for backup operations, development, testing, helpdesk, management, and billing and assigned the necessary AWS policies to each group so that they could do their required tasks. Lastly, we placed the appropriate users into each role.

These are the five categories of authentication that can be combined in multifactor authentication scenarios:

Images   Something you know   Knowing your username and password is by far the most common. Knowing your first pet’s name, or the PIN for your credit card, or your mother’s maiden name all fall into this category.

Images   Something you have   Most of us have used a debit or credit card to make a purchase. We must physically have the card in our possession. For VPN authentication, a user would be given a hardware token with a changing numeric code synced with the VPN server. For cloud authentication, users could employ a mobile device authenticator app with a changing numeric code in addition to their username and password.

Images   Something you do   This measures the particular way an individual performs a routine task in order to validate their identity. Handwriting analysis can determine whether a user signs their name the same way, or a user might be asked to count to ten so the computer can check for their normal pauses and inflection points.

Images   Someplace you are   Geolocation is often used as an authentication method along with other methods. The organization may allow employees to access certain systems when they are in the workplace facility, but not from home. Traveling employees might be able to access some resources in the country, but not when traveling internationally.

Images   Something you are   This is where biometric authentication kicks in. Your fingerprints, your voice, your facial structure, the capillary pattern in your retinas—these are unique to you. Of course, voice impersonators could reproduce your voice, so some methods are more secure than others.

Images

Knowing both a username and password is not considered multifactor authentication, because they are both “something you know.”
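In AWS, for example, a virtual MFA device (an authenticator app, the "something you have") can be associated with a user from the CLI. The device name, account ID, and authentication codes below are placeholders:

```shell
# Create a virtual MFA device; the QR code seed is loaded into an
# authenticator app on the user's mobile device
aws iam create-virtual-mfa-device \
    --virtual-mfa-device-name EricVanderburgMFA \
    --outfile qrcode.png --bootstrap-method QRCodePNG

# Enable the device for the user by proving possession of it with
# two consecutive codes from the app
aws iam enable-mfa-device \
    --user-name EricVanderburg \
    --serial-number arn:aws:iam::123456789012:mfa/EricVanderburgMFA \
    --authentication-code1 123456 \
    --authentication-code2 789012
```

Once enabled, sign-in requires both the password (something you know) and a current code from the app (something you have).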

Single Sign-On

As individuals, we have all had to remember multiple usernames and passwords for various software at work, or even at home for multiple websites. Wouldn’t it be great if we logged in only once and had access to everything without being prompted to log in again? This is what single sign-on (SSO) is all about!

SSO can take the operating system, VPN, or web browser authentication credentials and present them to the relying party transparently, so the user does not even know it is happening. Modern Windows operating systems use the credential locker as a password vault to store varying types of credentials to facilitate SSO. Enterprise SSO solutions such as the open-source Shibboleth tool or Microsoft Active Directory Federation Services (ADFS) let cloud personnel implement SSO on a large scale. Cloud providers normally offer identity federation services to cloud customers.

The problem with SSO is that different software and websites may use different authentication mechanisms. This makes implementing SSO in a large environment difficult.
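As a sketch of federated SSO in practice (the ARNs are placeholders; ADFS is assumed as the identity provider), a SAML assertion produced by the user's single sign-on can be exchanged for temporary AWS credentials, so no separate AWS password is ever needed:

```shell
# Exchange a SAML assertion from the identity provider for
# temporary AWS credentials. The assertion file contains the
# base64-encoded SAML response issued after the user signed on
# to the corporate identity provider (e.g., ADFS).
aws sts assume-role-with-saml \
    --role-arn arn:aws:iam::123456789012:role/FederatedAdmin \
    --principal-arn arn:aws:iam::123456789012:saml-provider/ADFS \
    --saml-assertion file://assertion.b64
```

The returned temporary credentials let the user work in AWS under the federated role, with their one corporate login serving as the single sign-on.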

CERTIFICATION SUMMARY

This chapter focused on data security, network security, and access control, all of which are of interest to cloud personnel.

As a CompTIA Cloud+ candidate, you must understand the importance of applying best practices to your network. Assessing the network is only effective when comparing your results with an established baseline of normal configuration and activity. Auditing a network is best done by a third party, and you may be required to use only accredited auditors that conform to industry standards such as PCI or SOX. All computing equipment must be patched and hardened to minimize the potential for compromise.

An understanding of data security measures and access control methods is also important for the exam. Data security must be in place both for data as it traverses a network and for stored data. Encrypting data prevents unauthorized access to the data, while digital signatures verify the authenticity of the data. Various encryption protocols are used to accomplish these objectives. The various access control models discussed in this chapter include role-based access control, mandatory access control, and discretionary access control.

KEY TERMS

Use the following list to review the key terms discussed in this chapter. The definitions also can be found in the glossary.

access control entry (ACE)   Specifies the access rights of an individual principal or entity. An ACL is composed of one or more ACEs.

access control list (ACL)   A list that tracks permitted actions. An ACL for a server might contain the access rights of entities such as users, services, computers, or administrative accounts and whether those rights are permitted or denied, whereas an ACL for a firewall might contain the source address, port, and destination address for authorized communication and deny permissions for all others. ACLs are composed of a set of ACEs.

Advanced Encryption Standard (AES)   An algorithm used to encrypt and decrypt data. Principally, AES uses a 128-bit block and variable key sizes of 128, 192, and 256 bits. It performs 10 to 14 rounds of encryption depending on the key size used.

antimalware software   A piece of software that looks at actions on a system to identify malicious activity.

antivirus software   A piece of software that detects and removes malicious code such as viruses, trojans, worms, and bots.

application programming interface (API)   A structure that exposes functions of an application to other programs.

asymmetric encryption   Encryption mechanism that uses two different keys to encrypt and decrypt data.

authentication, authorization, and accounting (AAA)   The set of processes that determines who a claimant is and what they are allowed to do. These processes also log activity for later auditing.

block cipher   A method of converting plaintext to ciphertext in fixed-size blocks, as opposed to one data bit at a time. Blocks may be encrypted independently or chained so that each block's ciphertext influences the next.

certificate authority (CA)   Entity that issues digital certificates and makes its public keys available to the intended audience to provide proof of its authenticity.

certificate revocation list (CRL)   A list managed by a certificate authority (CA) and often published to a public source that describes each certificate that the CA has removed from service so that users and computers know if they should no longer trust a certificate.

ciphertext   Data that has been encrypted using a mathematical algorithm.

command-line interface (CLI)   An interface that is text-based.

data classification   Practice of sorting data into discrete categories that help define the access levels and type of protection required for that set of data.

data encryption   Algorithmic scheme that secures data by scrambling it into a code that is not readable by unauthorized resources.

Data Encryption Standard (DES)   An algorithm used to encrypt and decrypt data. Principally, DES is a symmetric key algorithm that uses block sizes of 64 bits and 16 rounds of encryption.

demilitarized zone (DMZ)   A network segment that has specific security rules on it to segment traffic.

digital signature   Mathematical hash of a dataset that is encrypted by the private key and used to validate that dataset.

discretionary access control (DAC)   Security mechanism in which the power to grant or deny permissions to resources lies with the data owner.

distributed denial of service (DDoS)   An attack that targets a single system simultaneously from multiple compromised systems.

elliptic curve cryptography (ECC)   A cryptographic function that allows for smaller keys to be used to provide the same security as those with larger keys through the use of finite field algebraic structure of elliptic curves.

Encapsulating Security Payload (ESP)   A cryptographic protocol used by IPSec to encrypt tunneled data using PKI certificates or asymmetric keys.

federation   Use of single sign-on (SSO) to authorize users or devices to many different protected network resources, such as file servers, websites, and database applications.

Generic Routing Encapsulation (GRE)   A lightweight, flexible tunneling protocol that works over IP but does not encrypt data.

host-based intrusion detection system (HIDS)   A system that analyzes activity on a host where a HIDS agent is installed for behavior patterns and notifies if patterns match those associated with malicious activity such as hacking or malware.

host-based intrusion prevention system (HIPS)   A system that analyzes activity on a host where a HIPS agent is installed for behavior patterns and takes action if patterns match those associated with malicious activity such as hacking or malware.

identity and access management (IAM)   The policies, procedures, and technologies required to control access privileges, roles, and rights of users across a heterogeneous enterprise.

Internet Protocol Security (IPSec)   A tunneling protocol that secures IP traffic using Encapsulating Security Payload (ESP) to encrypt the data that is tunneled over it using PKI certificates or asymmetric keys and offers authentication through the Authentication Header (AH) protocol.

intrusion detection and prevention (IDP)   A system that analyzes activity for behavior patterns and notifies or takes action if patterns match those associated with malicious activity such as hacking or malware.

key management system (KMS)   A system that can issue, validate, distribute, and revoke cryptographic keys. Cloud KMS include such systems as AWS KMS, Microsoft Azure Key Vault, and Oracle Key Manager.

Layer 2 Tunneling Protocol (L2TP)   A tunneling protocol that does not offer encryption on its own, but when combined with IPSec offers a high level of encryption at the cost of additional CPU overhead to encapsulate data twice.

mandatory access control (MAC)   Security mechanism in which access is mandated by the operating system or application and not by data owners.

multifactor authentication (MFA)   Authentication of resources using proof from more than one of the five authentication categories: something you know, something you have, something you do, somewhere you are, and something you are.

network address translation (NAT)   A service that consolidates the addresses needed for each internal device to a single valid public IP address, allowing all of an organization's employees to access the Internet using that one public address.

network-based intrusion detection system (NIDS)   A system that analyzes activity on a network egress point such as a firewall for behavior patterns and notifies if patterns match those associated with malicious activity such as hacking or malware.

network-based intrusion prevention system (NIPS)   A system that analyzes activity on a network egress point such as a firewall for behavior patterns and takes action if patterns match those associated with malicious activity such as hacking or malware.

plaintext   Unencrypted data.

Point-to-Point Tunneling Protocol (PPTP)   A tunneling protocol that uses GRE and Point-to-Point Protocol (PPP) to transport data using a variety of now outdated protocols. Primarily used with older Microsoft Windows VPN connections.

port address translation (PAT)   A service that maps private IP addresses to public IP addresses to translate multiple devices on a network to a single public IP address using port to IP mappings.

pre-shared key (PSK)   A piece of data that only communication partners know that is used along with a cryptographic algorithm to encrypt communications.

private key   One of two keys used for asymmetric encryption; available only to the intended data user and used for data decryption and creating digital signatures.

public key   One of two keys used for asymmetric encryption; available to anyone and used for data encryption and digital signature validation.

public key infrastructure (PKI)   Hierarchy of trusted security certificates issued to users or computing devices.

Rivest Cipher 4 (RC4)   An algorithm used to encrypt and decrypt data. Principally, RC4 is a stream cipher that uses symmetric keys up to 128 bits in length for encryption.

Rivest Cipher 5 (RC5)   An algorithm used to encrypt and decrypt data. Principally, RC5 is a block cipher that uses symmetric keys for encryption. RC5 succeeded RC4 and supports key sizes of up to 2040 bits.

role-based access control (RBAC)   Security mechanism in which all access is granted through predefined collections of permissions, called roles, instead of implicitly assigning access to users or resources individually.

Secure Sockets Layer (SSL)   A cryptographic protocol that allows for secure communications such as web browsing, FTP, VPN, instant messaging, and VoIP. See also Transport Layer Security (TLS).

security information and event management (SIEM)   A system that collects, correlates, and analyzes event logs. SIEM is also known as security incident event manager.

single sign-on (SSO)   Authentication process in which the resource requesting access can enter one set of credentials and use those credentials to access multiple applications or datasets, even if they have separate authorization mechanisms.

spoofing   The modification of the source IP address to obfuscate the original source.

stream cipher   A method of converting plaintext to ciphertext one bit at a time.

symmetric encryption   Encryption mechanism that uses a single key to both encrypt and decrypt data.

task-based access control (TBAC)   Security mechanism in which users have no access to resources by default and are only provided access when they perform a task requiring it. Access is not retained after the task is complete.

Temporal Key Integrity Protocol (TKIP)   A protocol specified in IEEE 802.11i that enhances the WEP/RC4 encryption in wireless networks.

Transport Layer Security (TLS)   A cryptographic protocol that allows for secure communications such as web browsing, FTP, VPN, instant messaging, and VoIP. TLS replaces the SSL protocol. See also Secure Sockets Layer (SSL).

tunnel endpoint   Node that performs encapsulation, de-encapsulation, encryption, and decryption of data in the tunnel. Tunnel endpoints sit at either end of the tunnel, where encapsulated data enters and leaves the intermediary network.

tunneling   The use of encapsulation and encryption to create a secure connection between devices so that intermediary devices cannot read the traffic and so that devices communicating over the tunnel are connected as if on a local network.

tunneling protocol   A network protocol that enables tunneling between devices or sites.

web graphical user interface (GUI)   An interface that is point and click and accessible over the Web.

Images TWO-MINUTE DRILL

Data Security

Images  A public key infrastructure (PKI) is a hierarchy of trusted security certificates that each contain unique public and private key pairs; used for data encryption and verification of data integrity.

Images  Ciphertext is the result of feeding plaintext into an encryption algorithm; this is the encrypted data. Block ciphers encrypt chunks of data at a time, whereas the faster stream ciphers encrypt data a binary bit at a time. Stream ciphers are best applied where there is an unknown variable amount of data to be encrypted.

Images  Symmetric encryption uses the same secret key for encryption and decryption. The challenge lies in safely distributing the key to all involved parties.

Images  Asymmetric encryption uses two mathematically related keys (public and private) to encrypt and decrypt. This implies a PKI. The public and private key pairs contained within a PKI certificate are unique to that subject. Normally, data is encrypted with the recipient’s public key, and the recipient decrypts that data with the related private key. It is safe to distribute public keys using any mechanism to the involved parties.

Images  A digital signature is a unique value created from the signer’s private key and the data to which the signature is attached. The recipient validates the signature using the signer’s public key. This assures the recipient that data came from whom it says it came from and that the data has not been tampered with.
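As a concrete sketch of the sign-and-validate flow using OpenSSL (commands assume a recent OpenSSL release; file names are illustrative):

```shell
# Generate an RSA key pair: a private key, then its extracted public key
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out key.pem
openssl pkey -in key.pem -pubout -out pub.pem

# Sign a file with the private key, producing the digital signature
echo "quarterly report data" > message.txt
openssl dgst -sha256 -sign key.pem -out message.sig message.txt

# Anyone holding the public key can validate the signature;
# this prints "Verified OK" if the data is untampered
openssl dgst -sha256 -verify pub.pem -signature message.sig message.txt
```

If even one bit of message.txt changes after signing, the final command reports a verification failure, demonstrating the integrity guarantee described above.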

Network Security

Images  Network security is the practice of protecting the usability, reliability, integrity, and safety of a network infrastructure and also the data traveling along it.

Images  Security systems are designed to protect the network against certain types of threats.

Images  A firewall is used to control traffic. It performs functions such as NAT/PAT, port and service filtering, DMZ management, stateful packet inspection, and IP spoofing detection.

Images  An intrusion detection system (IDS) or an intrusion prevention system (IPS) looks at traffic to identify malicious traffic. IDSs and IPSs differ in how they react to the things they find. An IDS sends alerts and logs suspicious traffic but does not block the traffic. An IPS can send alerts and log, but it can also block, queue, or quarantine the traffic.

Images  Application programming interfaces (APIs) are used to expose functions of an application or cloud service to other programs and services.

Access Control

Images  Mandatory access control (MAC) is a method of authorization whereby a computer system, based on configured policies, checks user or computer attributes along with data labels to grant access. Data labels might be applied to files or websites to determine who can access that data. The data owner cannot control resource permissions.

Images  Discretionary access control (DAC) allows the owner of the data to grant permissions, at their discretion, to users. This is what is normally done in smaller networks where there is a small user base. A larger user base necessitates the use of groups or roles to assign permissions.

Images  Non-discretionary access control (NDAC) consists of both role-based access control (RBAC) and task-based access control (TBAC). RBAC is a method of using groups and roles to assign permissions to network resources. This scales well because once groups or roles are given the appropriate permissions to resources, users can simply be made members of the group or role to inherit those permissions. TBAC was created around the concept of least privilege. In TBAC users have no access to resources by default and are only provided access when they perform a task requiring it and access is not retained after the task is complete. TBAC systems provide access just as it is needed and are usually associated with workflows or transactions.

Images  Multifactor authentication is any combination of two or more authentication methods stemming from what you know, what you have, what you do, where you are, and what you are. For example, you might have a smartcard and also know the PIN to use it. This is two-factor authentication.

Images  Single sign-on (SSO) requires users to authenticate only once. They are then authorized to use multiple cloud systems without having to log in each time.

Images  Federation allows SSO across multiple cloud systems using a single identity (username and password, for example), even across organizational boundaries.

Images SELF TEST

The following questions will help you measure your understanding of the material presented in this chapter. As indicated, some questions may have more than one correct answer, so be sure to read all the answer choices carefully.

Data Security

1.   You are invited to join an IT meeting where the merits and pitfalls of cloud computing are being debated. Your manager conveys her concerns of data confidentiality for cloud storage. What can be done to secure data stored in the cloud?

A.   Encrypt the data.

B.   Digitally sign the data.

C.   Use a stream cipher.

D.   Change default passwords.

2.   Which of the following works best to encrypt variable-length data?

A.   Block cipher

B.   Symmetric cipher

C.   Asymmetric cipher

D.   Stream cipher

3.   With PKI, which key is used to validate a digital signature?

A.   Private key

B.   Public key

C.   Secret key

D.   Signing key

4.   Which of the following is related to nonrepudiation?

A.   Block cipher

B.   PKI

C.   Symmetric encryption

D.   Stream cipher

Network Security

5.   Which service does a firewall use to segment traffic into different zones?

A.   DMZ

B.   ACL

C.   Port/service

D.   NAT/PAT

6.   Which device would be used in front of a cloud application to prevent web application attacks such as cross-site scripting (XSS) and SQL injection?

A.   IAM

B.   DLP

C.   WAF

D.   IDS/IPS

7.   Which device would be used to identify potentially malicious activity on a network?

A.   IAM

B.   DLP

C.   WAF

D.   IDS/IPS

Access Control

8.   Sean configures a web application to allow content managers to upload files to the website. What type of access control model is Sean using?

A.   DAC

B.   MAC

C.   RBAC

D.   GBAC

9.   You are the administrator of a Windows network. When creating a new user account, you specify a security clearance level of top secret so that the user can access classified files. What type of access control method is being used?

A.   DAC

B.   MAC

C.   RBAC

D.   GBAC

10.   John is architecting access control for a custom application. He would like to implement a non-discretionary access control method that does not rely upon roles. Which method would meet John’s criteria?

A.   RBAC

B.   MAC

C.   TBAC

D.   DAC

Images SELF TEST ANSWERS

Data Security

1.   Images   A. Encrypting data at rest protects the data from those not in possession of a decryption key.

Images   B, C, and D are incorrect. Digital signatures verify data authenticity, but they don’t deal with the question of confidentiality. Stream ciphers are best used for unpredictable variable-length network transmissions; a block cipher would be better suited for file encryption. While changing default passwords is always relevant, it does nothing to address the concern about data confidentiality.

2.   Images   D. Stream ciphers encrypt data, usually a bit at a time, so this works well for data that is not a fixed length.

Images   A, B, and C are incorrect. Symmetric and asymmetric ciphers do not apply in this context. Block ciphers are generally better suited for data blocks of fixed length.

3.   Images   B. The public key of the signer is used to validate a digital signature.

Images   A, C, and D are incorrect. Private keys create, and don't validate, digital signatures. A secret key is a symmetric key, whereas digital signatures rely on asymmetric keys; PKI is implied when discussing signatures. Signing keys, as private keys are sometimes called, digitally sign data.

4.   Images   B. PKI is related to nonrepudiation, which means that a verified digital signature proves the message came from the listed party. This is true because only the private key of the signing party could have created the validated signature.

Images   A, C, and D are incorrect. Block ciphers and stream ciphers are not related to nonrepudiation; they are types of encryption methods. Symmetric encryption cannot provide nonrepudiation because both parties share the same key, and nonrepudiation relies on PKI.

Network Security

5.   Images   A. A demilitarized zone (DMZ) is a segment that has specific security rules on it. DMZs are created to segment traffic.

Images   B, C, and D are incorrect. An ACL defines allowed and denied access. Ports and services are used to define filtering rules. NAT/PAT consolidate the addresses needed for each internal device to a single valid public IP address.

6.   Images   C. A web application firewall (WAF) is a specialized type of firewall that screens traffic intended for web applications. WAFs understand common web application attacks such as cross-site scripting (XSS) and SQL injection and can inspect traffic at the application layer of the OSI model.

Images   A, B, and D are incorrect. Identity and access management (IAM) systems manage entities across an enterprise. Data loss prevention (DLP) is used to restrict the flow of data to authorized parties and devices. IDS/IPS technologies look at traffic to identify malicious traffic.

7.   Images   D. IDS/IPS technologies look at traffic to identify malicious traffic.

Images   A, B, and C are incorrect. A web application firewall (WAF) is a specialized type of firewall that screens traffic intended for web applications. Identity and access management (IAM) systems manage entities across an enterprise. Data loss prevention (DLP) is used to restrict the flow of data to authorized parties and devices.

Access Control

8.   Images   C. Sean is using a role (content managers) to control who can upload files to the website. This is role-based access control (RBAC).

Images   A, B, and D are incorrect. DAC allows data owners to grant permissions to users. MAC uses data classification and other attributes so that computer systems can determine who should have access to what. GBAC is not an access control method.

9.   Images   B. Mandatory access control (MAC) uses attributes or labels (such as “top secret”) that enable computer systems to determine who should have access to what.

Images   A, C, and D are incorrect. DAC allows data owners to grant permissions to users. RBAC uses groups and roles so that their members inherit permissions to resources. GBAC is not an access control method.

10.   Images   C. Task-based access control uses tasks instead of roles to determine the level and extent of access.

Images   A, B, and D are incorrect. RBAC relies on roles for access control decisions, and John does not want to rely on roles. MAC and DAC are not non-discretionary access control methods.
