Chapter 11

Security Best Practices


CERTIFICATION OBJECTIVES

11.01     Cloud Security Engineering

11.02     Security Governance and Strategy

11.03     Vulnerability Management

Images     Two-Minute Drill

Q&A   Self Test


Cloud services make it easier for companies to adopt software, systems, and services. However, it can be easy to implement a cloud system that functions well but secures very little. There are some practical steps that can be taken to better secure systems and data in the cloud.

Cloud security engineering covers the core practices used to protect cloud systems. It involves host and guest computer hardening and layering security to provide multiple overlapping controls that an attacker would need to break through to reach systems and data. Additionally, systems should be designed so that users and services have only the privileges they need to function, in what is known as least privilege. Cloud security engineers must also divide the elements of sensitive tasks among multiple people to reduce the risk of fraud or error. Lastly, cloud security engineering seeks to automate security tasks.

Security governance is the set of activities and guidelines an organization uses to manage technologies, assets, and employees. Security governance ensures that the right activities are performed and that those activities accomplish the security goals of the organization. This is performed through security policies that set organizational expectations and procedures that define how tasks will be performed. Established industry standards and regulations can be used to craft the right mix of technologies and policies to meet regulatory requirements or established best practices.

Organizations also need to perform vulnerability scanning and penetration testing. These functions together form the basis for vulnerability management. Vulnerability management is the process of identifying possible vulnerabilities and enacting controls to mitigate those that are probable.

CERTIFICATION OBJECTIVE 11.01

Cloud Security Engineering

Cloud security engineering is the practice of protecting the usability, reliability, integrity, and safety of cloud data and infrastructure, as well as the users of cloud systems. As in many other areas, security in cloud computing has similarities to traditional computing models. If deployed without evaluating security, cloud systems may deliver against their functional requirements, but they will likely have many gaps that could lead to a compromised system.

As part of any cloud deployment, attention needs to be paid to specific security requirements so that the resources that are supposed to have access to data and software in the cloud system are the only resources that can read, write, or change it. This section provides coverage of the following practices and principles employed in cloud security engineering:

Images   Host and guest computer hardening

Images   Implementing layered security

Images   Protecting against availability attacks

Images   Least privilege

Images   Separation of duties

Images   Security automation

Host and Guest Computer Hardening

The hardening of computer systems and networks, whether they are on premises or in the cloud, involves ensuring that the host and guest computers are configured in a way that reduces the risk of attack from either internal or external sources. While the specific configuration steps for hardening vary from one system to another, the basic concepts involved are largely similar regardless of the technologies being hardened. Some of these central hardening concepts are as follows:

Images   Remove all software and services that are not needed on the system   Most operating systems and preloaded systems run default applications and services that not every configuration needs. Systems deployed from standard cloud templates may contain services that are not required for your specific use case. These additional services and applications increase the attack surface of any given system and thus should be removed.

Images   Maintain firmware and patch levels   Security holes are continually discovered in both software and firmware, and vendors release patches as quickly as they can to respond to those discoveries. Enterprises as well as cloud providers need to apply these patches to be protected from the patched vulnerabilities.

Images   Control account access   Unused accounts should be either disabled or removed entirely from systems. The remaining accounts should be audited to make sure they are necessary and that they have access only to the resources they require. Default accounts should be disabled or renamed, because if hackers looking to gain unauthorized access to a system can guess the username, they already have half of the information needed to log into that system.

For the same reason, all default passwords associated with any system or cloud service should be changed as well. In addition to security threats from malicious users who are attempting to access unauthorized systems or data, security administrators must also beware of the threat from a well-meaning employee who unknowingly accesses resources that shouldn’t be made available to them or, worse yet, deletes data that he or she did not intend to remove.

Images   Implement the principle of least privilege (POLP)   POLP dictates that users are given only the amount of access they need to carry out their duties and no additional privileges above that for anything else. Protecting against potential insider threats and protecting cloud consumers in a multitenant environment requires that privileged user management be implemented and that security policies follow the POLP.

Images   Disable unnecessary network ports   As with software and service hardening, only the required network ports should be enabled to and from servers and cloud services to reduce the attack surface.

Images   Deploy antivirus or antimalware software   Antivirus or antimalware software should be deployed to all systems that support it. The most secure approach to virus defense is one in which any malicious traffic must pass through multiple layers of detection before reaching its potential target, such as filtering at the perimeter, through e-mail gateways, and then on endpoints such as cloud servers or end-user machines.

Images   Configure logging   Logging should be enabled on all systems so that if an intrusion is attempted, it can be identified and mitigated or, at the very least, investigated. Cloud logging options can be leveraged to archive logs automatically, conserving space on servers and ensuring that data is available if needed. See Chapter 8 for more information on log automation.

Images   Limit physical access   If a malicious user has physical access to a network resource, they may have more options for gaining access to that resource. Because of this, physical access controls should be applied wherever possible. Some examples of physical access deterrents are locks on server room doors, network cabinets, and the network devices themselves. Additionally, servers should be secured at the BIOS level with a password so that malicious users cannot boot to secondary drives and bypass operating system security.

Images   Scan for vulnerabilities   Once the security configuration steps have been defined and implemented for a system, a vulnerability assessment should be performed using a third-party tool or service provider to make certain no security gaps were missed. Penetration testing can validate whether vulnerabilities are exploitable and whether other security controls are mitigating the vulnerability. Vulnerability scanning and penetration testing are discussed later in this chapter.

Images   Deploy a host-based firewall   Software firewalls should be deployed to the hosts and guests that will support them. These software firewalls can be configured with access control lists (ACLs), as discussed in Chapter 10, and protection tools in the same fashion as hardware firewalls.

Images   Deactivate default accounts   Many systems come provisioned with accounts that can be used to set up the software or device initially. The usernames and passwords of such accounts are well known to attackers, so it is best to deactivate these default accounts. Deactivation is better than just changing the password because attackers still know the default username, which gives them one piece of the puzzle even if the password is changed.

Implementing Layered Security

To protect network resources from threats, secure network design employs multiple overlapping controls to prevent unwanted access to protected cloud resources. Some layered security components include demilitarized zones, ACLs, and intrusion detection and prevention systems.

A demilitarized zone (DMZ) is a separate network that is layered in between an internal network and an external network to house resources that need to be accessed by both while preventing direct access from the outside network to the inside network. ACLs define the traffic that is allowed to traverse a network segment. Lastly, intrusion detection systems can detect anomalous network behavior and send alerts to system administrators to take action, while intrusion prevention systems can detect anomalies and take specific actions to remediate threats.

The real strength of demilitarized zones, ACLs, and intrusion detection and prevention systems (covered in Chapter 10) is that they can all be used together, creating a layered security system for the greatest possible security.

Consider an attacker trying to get to a cloud database. The attacker would first need to get through the firewall. A DMZ between networks, along with appropriately configured ACLs, would force the attacker to compromise a machine in the DMZ and then pivot from that machine to another machine on the internal network. However, an IDS/IPS on those networks might detect this activity, notify administrators, and block the attacker from making the connection. In this way, these technologies work together to provide a layered solution to protect the cloud database.

Protecting Against Availability Attacks

Attacks on availability are those designed to take a system down so that users, such as customers or employees, cannot use it. Some availability attacks are used to cause a system to restart so that weaknesses in the startup routines of the system can be exploited to inject code, start in a maintenance mode and reset passwords, or take other malicious actions.

Distributed Denial of Service (DDoS)

A DDoS attack targets a single system simultaneously from multiple compromised systems. DDoS was introduced back in Chapter 10 under the discussion on firewalls and cloud access security brokers (CASBs), but there are other protections against DDoS.

DDoS attacks are distributed because they use thousands or millions of machines that could be spread across the globe. Such an attack denies services or disrupts availability by overwhelming the system so that it cannot respond to legitimate connection requests. The distributed nature of these attacks makes it difficult for administrators to block malicious traffic based on its origination point and to distinguish approved traffic from attacking traffic. DDoS can quickly overwhelm network resources. However, large cloud systems can offer protection to cloud consumers.

Cloud DDoS protection solutions, such as those from Amazon, Microsoft, Verisign, or Cloudflare, not only protect cloud consumers from attack and loss of resource availability but also protect against excessive usage charges, since many cloud providers charge for how much data is sent and received. Cloud providers offering services such as a CASB, introduced in Chapter 10, can screen out some traffic, and they also have the bandwidth to absorb most DDoS traffic without becoming overwhelmed. There have been some high-profile DDoS attacks that caused disruption, such as those launched from Internet of Things (IoT) devices that took down large clouds, but most DDoS attacks cannot marshal resources at that scale.

Ping of Death (PoD)

PoD attacks send malformed ICMP packets with the intent of crashing systems that cannot process them. Most modern cloud firewall offerings, such as those from AWS (Shield), DigitalOcean, and Zscaler, can actively detect these packets and discard them before they cause damage.

Ping Flood Attacks

Ping flood attacks are similar to DDoS attacks in that they attempt to overwhelm a system with more traffic than it can handle. In this variety, the attack is usually attempted by a single system, which makes the attack easier to identify and block. Defense strategies for ping floods are the same as those for DDoS, including cloud DDoS protection.

Least Privilege

Another important security control is the principle of least privilege. Employees should be granted only the minimum permissions necessary to do their job. No more, no less. Incorporating the principle of least privilege limits potential misuse and risk of accidental mishandling or viewing of sensitive information by unauthorized people.

Least Privilege in the G Suite

We were asked to help a company determine if they had significant cybersecurity gaps. The company was using Google Docs in the G Suite enterprise for storing departmental data. They used this method instead of an internal department share structure because the workforce was distributed and often collaborated on work remotely.

Our auditors reviewed the configuration and found that all employees had access to the G Suite and all documents contained within. In discussions with the business owner, we explained the concept of least privilege and how easy it is to set up access permissions in the G Suite. However, we were told that they trusted each employee and were not concerned about it.

We then audited the users who had accessed files and found that some employees in design had been accessing information on customers. We also found that an employee in marketing had accessed HR data. We presented this to the business owner and she agreed that it was best not to tempt her employees with access to more information than they needed.

We worked out a data map for the information on the G Suite and outlined permissions for each role. We then showed administrators how to implement the permissions.

In the scenario described in the “Exam at Work” sidebar, least privilege would prevent employees from accidentally viewing salary data on other employees, which could cause morale problems and conflict. Least privilege also prevents employees from stealing that information and prevents malware from corrupting or encrypting the information using the user’s credentials.

Separation of Duties

Separation of duties, also known as segregation of duties, divides the responsibilities required to perform a sensitive task among two or more people so that one person, acting alone, cannot compromise the system. Separation of duties needs to be carefully planned and implemented. If implemented correctly, it can act as an internal control to help reduce potential damage caused by the actions of a single administrator.

By limiting permissions and influence over key parts of the cloud environment, no one individual can knowingly or unknowingly exercise full power over the system. For example, in an e-commerce organization with multiple layers of security spread across a series of cloud solutions, separation of duties would ensure that a single person is not responsible for every layer of that security, such as provisioning accounts, implementing ACLs, and configuring logging and alerting for the various cloud services and their integrations. Therefore, if that person were to become disgruntled, they would not have the ability to compromise the entire system or the data it contains; they would only be able to access their layer of the security model.


Separation of duties is the process of segregating tasks among two or more people. It prevents fraud because one person cannot compromise a system without colluding with others. Separation of duties is also called segregation of duties.

Security Automation

The last part of cloud security engineering is to automate security tasks. Security tasks must be performed at regular intervals, and it is important that they be performed each time correctly. Additionally, it can be quite a job to secure a large number of systems, and organizational security departments are supporting more systems and cloud services than ever before.

Security automation helps in both these areas. Automation ensures that tasks are performed the same way every time and that they are performed precisely on schedule. Furthermore, automation frees up valuable security resources so that they can focus on other tasks. Automation uses scripting, scheduled tasks, and automation tools to perform routine tasks so that IT staff can spend more time solving the real problems and proactively looking for ways to make things better and even more efficient.

This section discusses different security activities that can be automated to save time and standardize. They include the following:

Images   Disabling inactive accounts

Images   Eliminating outdated firewall rules

Images   Cleaning up outdated security settings

Images   Maintaining ACLs for target objects

Disabling Inactive Accounts

You can automate the disabling of inactive accounts. Use this capability sparingly, because disabling an account means the user can no longer log in. Choose to disable rather than remove an account, because once you remove an account, it cannot truly be re-created. If you create another account with the same name, it will still have a different security identifier and will not really be the same account. That is why it is best to disable accounts first and then remove them at some later point. Disabling is also important in case you need to take action on that account in the future, such as decrypting EFS-encrypted files, viewing profile settings, or logging onto that person’s e-mail. These are things that might need to be done for a terminated employee who is under investigation; if the account were deleted, they would still be possible but more difficult.

The following PowerShell script disables all accounts that have not been logged into for over 30 days. Of course, if you were in Europe, some people take a holiday longer than 30 days, but you can always re-enable the account when the person returns.

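A minimal sketch of such a script, assuming the ActiveDirectory PowerShell module is installed and the script runs with rights to disable users (the 30-day window matches the text above; the user-only scope is an assumption):

    # Find enabled user accounts inactive for more than 30 days and disable them.
    # Run as a scheduled task to automate the cleanup.
    Import-Module ActiveDirectory
    Search-ADAccount -AccountInactive -TimeSpan 30.00:00:00 -UsersOnly |
        Where-Object { $_.Enabled } |
        Disable-ADAccount

Pairing this with a report of which accounts were disabled makes it easy to re-enable an account when an employee returns from extended leave.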

Eliminating Outdated Firewall Rules

Through the course of adding and removing programs or changing server roles, the Windows Firewall rules for a virtual machine can become out of date. It can be difficult to automate the analysis and removal of outdated rules, so the best course of action is to remove all rules and reassign rules based on the current roles.

As mentioned many times in this book, it is imperative to document. Document the firewall rules that you put in place for virtual machines and organize the rules by role. For example, you would have one set of standard rules for database servers, web servers, file servers, domain controllers, certificate servers, VPN servers, FTP servers, DHCP servers, and a separate role for each type of application server.

Each of the firewall rules for a defined role can be scripted. Here is an example configuration for a virtual machine with the database role running Microsoft SQL Server 2016 with Analysis Services. This script allows remote management and communication over SQL Server ports. The last commands turn the firewall on, just in case it is not already on.

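A sketch of what such a database-role script might contain, assuming the SQL Server and Analysis Services default ports (TCP 1433, TCP 2383, and UDP 1434 for the Browser service) and WinRM (TCP 5985) as the remote management channel:

    # SQL Server default instance (TCP 1433)
    netsh advfirewall firewall add rule name="SQL Server" dir=in action=allow protocol=TCP localport=1433
    # SQL Server Analysis Services (TCP 2383)
    netsh advfirewall firewall add rule name="SQL Analysis Services" dir=in action=allow protocol=TCP localport=2383
    # SQL Browser service (UDP 1434)
    netsh advfirewall firewall add rule name="SQL Browser" dir=in action=allow protocol=UDP localport=1434
    # Remote management over WinRM (assumed management channel, TCP 5985)
    netsh advfirewall firewall add rule name="Remote Management (WinRM)" dir=in action=allow protocol=TCP localport=5985
    # Finally, make sure the firewall is on for all profiles
    netsh advfirewall set allprofiles state on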

Now, with the role descriptions and the scripts in hand, you can clear the configurations from a set of servers whose rules you believe are outdated and then you can reapply the company standard firewall rules for that role. Here is the command to clear the rules from the server. Essentially, this command resets the Windows Firewall to its default out-of-the-box settings.

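The reset is a single command:

    # Restore Windows Firewall to its default out-of-the-box settings
    netsh advfirewall reset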

Please note that firewall configuration formerly used the netsh firewall command context, but that context has been deprecated. The replacement is netsh advfirewall.

Firewalls are covered in more detail in the “Network Security” section of Chapter 10.

Cleaning Up Outdated Security Settings

VMware vSphere can be made much more secure by turning off some features for virtual machines. The first feature to disable is host guest file system (HGFS) file transfers. HGFS transfers files into the operating system of the virtual machine directly from the host, and a hacker or malware could potentially misuse this feature to download malware onto a guest or to exfiltrate data from the guest. Script these commands for each virtual machine:

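One way to script the change, shown here as a sketch using VMware PowerCLI (the VM name is a placeholder, and an active Connect-VIServer session is assumed):

    # Disable HGFS file transfers for a virtual machine
    $vm = Get-VM -Name "VM01"
    New-AdvancedSetting -Entity $vm -Name "isolation.tools.hgfsServerSet.disable" -Value "TRUE" -Confirm:$false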

The next feature to disable is the ability to copy and paste data between the remote console and the virtual machine. This is disabled by default, but in case someone turned it on, you can disable it again. Enabling copy and paste can allow for sensitive content to accidentally be placed on another machine. Script these commands for each virtual machine:

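A PowerCLI sketch under the same assumptions as the previous example:

    # Disable copy and paste between the remote console and the guest
    $vm = Get-VM -Name "VM01"
    New-AdvancedSetting -Entity $vm -Name "isolation.tools.copy.disable" -Value "TRUE" -Confirm:$false
    New-AdvancedSetting -Entity $vm -Name "isolation.tools.paste.disable" -Value "TRUE" -Confirm:$false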

The third item to disable is the ability for a user to disconnect VMware devices from the virtual machine. When this is turned on, administrative users on the virtual machine can run commands to disconnect devices such as network adapters, hard disk drives, and optical drives. Script these commands for each virtual machine:

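A PowerCLI sketch, again with a placeholder VM name:

    # Prevent users in the guest from disconnecting or editing virtual devices
    $vm = Get-VM -Name "VM01"
    New-AdvancedSetting -Entity $vm -Name "isolation.device.connectable.disable" -Value "TRUE" -Confirm:$false
    New-AdvancedSetting -Entity $vm -Name "isolation.device.edit.disable" -Value "TRUE" -Confirm:$false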

The fourth item to disable is the ability of processes running in the virtual machine to send configuration messages to the hypervisor. Processes on the virtual machine that modify configuration settings can potentially damage the virtual machine or cause it to be unstable. Script these commands for each virtual machine:

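A PowerCLI sketch under the same assumptions:

    # Block guest processes from sending configuration (setinfo) messages
    # to the hypervisor
    $vm = Get-VM -Name "VM01"
    New-AdvancedSetting -Entity $vm -Name "isolation.tools.setinfo.disable" -Value "TRUE" -Confirm:$false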

Maintaining ACLs for Target Objects

You can script setting access control lists for objects by using the cacls command. ACL scripting can be very useful if you want to change permissions for a large number of files and folders. Here is the command to give a group called DevOps full control to the D: drive and all subfolders:

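A sketch of the command, using the drive and group named above:

    # Grant the DevOps group Full control (F) on D:\ and all subfolders.
    # /T applies the change to the whole tree; /E edits the ACL rather than replacing it.
    cacls D:\ /T /E /G DevOps:F

On newer Windows systems, the icacls tool provides equivalent functionality and is the supported replacement for cacls.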

CERTIFICATION OBJECTIVE 11.02

Security Governance and Strategy

Attackers keep coming up with new attacks, so the line for security best practices continues to move. There are a variety of government agencies and standards bodies that publish security best practices and standards such as the ISO/IEC 27001 or NIST SP 800-53. These can give an organization some guidance on security governance practices and valuable security strategies, but each organization needs to determine for itself what is appropriate for its security based on its specific operations.

Implementing a practice just because it is listed in a standard might improve security, but it might not improve it as much as something else. Budgets are tight, so it is crucial to choose the security controls that will give your organization the best protection for your budget. This section covers best practices for governance and strategy. This text has organized these best practices into the following sections:

Images   Developing company security policies

Images   Account management policies

Images   Documenting security procedures

Images   Assessment and auditing

Images   Leveraging established industry standards and regulations

Images   Applying platform-specific security standards

Images   Data classification

Images   Keeping employees and tools up to date

Images   Roles and responsibilities

Developing Company Security Policies

Security policies set the organizational expectations for certain functional security areas. Policies should be defined based on what the organization is committed to doing, not on what it might do, because once a policy is put in place, others will expect the company to adhere to it. Policies usually come with sanctions for those who do not follow the policy, such as oral or written warnings, coaching, suspensions from work, or termination.

Security policies often use the terms personally identifiable information (PII) and protected health information (PHI). PII is information that identifies a person, such as name, phone number, address, e-mail address, Social Security number, and date of birth. PHI is similar to PII but refers to patient information and is used in HIPAA compliance and other similar areas. The term PII is common in security policies of many types of organizations, whereas PHI is common in security policies of healthcare organizations. Both terms are used in security policies to designate information that must not be disclosed to anyone who is not authorized to access it.

Some common security policies include the following:

Images   Acceptable use policy   States how organizational assets are to be used. This policy covers use of organizational equipment such as computers, laptops, phones, and office equipment. More importantly, it covers which cloud and other Internet services employees can use, acceptable norms for e-mail, and use of social networking.

Images   Audit policy   Specifies how often audits are to occur, the differences between internal and external audits, who should handle audits, how they are reported on, and the level of access granted to auditors. Both internal and external audits would cover internal systems and cloud systems used by the company. The audit policy also covers how audit findings and exceptions are to be handled.

Images   Backup policy   Covers how the organization will back up the data that it has. This includes data both on premises and in the cloud. The backup policy usually includes who is responsible for backing up data, how often backups will take place, the data types that will be backed up, and the recovery time objective (RTO) and recovery point objective (RPO) for each data type.

Images   BYOD policy   Specifies how employee-owned devices are to be used within the company and how they can be used if they access company cloud services and data.

Images   Cloud services policy   Defines which cloud services are acceptable for organizational use, how cloud services are evaluated, who is authorized to purchase cloud services, and how employees suggest or recommend cloud services to the review committee.

Images   Data destruction policy   Outlines how the organization will handle disposal of equipment that houses data, such as computers, servers, and hard drives. It should specify how that data will be wiped or destroyed, what evidence will be retained of the disposal or destruction, and who is authorized to dispose of assets. This covers not only digital data but also physical documents, which must be shredded when the policy requires it.

Images   Data retention policy   Specifies how long data of different types will be kept on organizational systems or cloud systems the organization utilizes. For example, the data retention policy may specify that e-mail on Office 365 will be retained for two years, financial documents on SAP S/4HANA will be retained for seven years, and other data will be retained for one year.

Images   Encryption policy   Specifies what should be encrypted in the organization and in cloud systems used by the organization, how encryption systems are evaluated, which cryptographic algorithms are acceptable, how cryptographic keys are managed, and how keys are disposed of.

Images   Incident response policy   Specifies the expectations for incident response times, recovery times, and investigation times; who the members of the incident response team are; how employees are to notify the team of incident indicators; what the indicators of an incident are; and how the team will vet incidents. Data may need to be retrieved from multiple cloud vendors, so the incident response policy will specify how that will take place and the expectations of the cloud provider and the cloud consumer in an incident.


The incident response plan must factor in the communication and coordination activities with each cloud provider. This can add significant time to an incident response timeline.

Images   Mobile device policy   Specifies which types of mobile devices can be used for organizational purposes, who authorizes mobile devices, how those devices are to be protected, where they can be used, which cloud services can be accessed by mobile devices, how they are encrypted, and how organizational data will be removed from mobile devices when they are retired or when employees leave.

Images   Privacy policy   Includes what information the organization considers private, how the organization will handle that information, the purposes and uses of that information, and how that information will be collected, destroyed, or returned.

Images   Remote access policy   Specifies which types of remote access are acceptable, how remote access will take place, how employees are authorized for remote access, auditing of remote access, and how remote access is revoked.

There are hundreds of other policies that can be defined for more granular things. However, best practice is to keep the number of policies to the minimum necessary so that employees can easily find the organization’s expectations regarding a particular subject.

Some organizations choose to bundle policies together into a handbook or a comprehensive security policy. Compliance requirements may specify which policies an organization needs to have and the minimum requirements for those policies. Be aware of which compliance requirements your organization falls under so that you can make sure your policies are in accordance with those requirements.

Account Management Policies

Account management policies establish expectations on how accounts and their associated credentials will be managed. Some simpler policies will be called a password policy. These policies deal only with the password elements of the account management policy and are often used when granularity on password requirements is needed.


Account management policies and password policies should apply to organizational systems and cloud systems that house organizational data.

Account management policies stipulate how long passwords need to be and how often they should be changed. They also specify who should be issued an account and how accounts are issued to users. This includes which approvals are necessary for provisioning an account. There may be rare cases where a password can be shared and account management policies will specify these circumstances, if any. These policies also establish requirements for how and when temporary passwords are issued, and the process for how and when passwords can be reset.

Two other sections in the account management policy require a bit more attention. They include the lockout policy and password complexity rules. These are covered next in their own sections.

Lockout Policy

A lockout is the automatic disabling of an account due to some potentially malicious action, most commonly too many incorrect password attempts. Lockout policy can be specified on a per-resource or per-domain basis. When single sign-on (SSO) is used, a single lockout policy applies across all systems connected to the SSO.

When a user’s password is entered incorrectly too many times, whether by the user or by an unauthorized person, the system disables the account for a predefined period. In some cases, the account is disabled until an administrator unlocks it. Alternatively, a system can be configured to lock out the account for a set amount of time that increases with each subsequent lockout, until a point when an administrator is required to unlock the account.

You may wish to notify on account lockouts. For example, if you have a cloud-based application, you may want to send users an e-mail when they enter their password incorrectly too many times. This way, if someone else tries to log on as the user, the authorized user will become aware of the attempt and can report that it was not authorized. Systems can be configured to automatically notify users when their account is locked out. Otherwise, users will find out their account is locked out when they are unable to log in.
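As a sketch, an Active Directory domain lockout policy could be configured with PowerShell as shown below (the threshold and durations are example values, not recommendations, and the domain name is a placeholder):

    # Lock accounts after 5 bad attempts; unlock automatically after 30 minutes
    Import-Module ActiveDirectory
    Set-ADDefaultDomainPasswordPolicy -Identity "example.internal" `
        -LockoutThreshold 5 `
        -LockoutDuration (New-TimeSpan -Minutes 30) `
        -LockoutObservationWindow (New-TimeSpan -Minutes 30)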

Password Complexity Rules

Stealing or cracking passwords is one of the most common ways that attackers infiltrate a network and break into systems. People generally choose weak passwords because passwords are hard to remember. Most people try to create a password using information that is easy for them to remember, and the easiest thing to remember is something you already know, such as your address, phone number, children’s names, or workplace. But this is also information that can be learned about you easily, so it is a bad choice for a password.

Password complexity has to do with how hard a password would be to break with brute force techniques, in which the attacker tries all possible combinations of a password until finding the right one. Considering numeric passwords alone, a two-digit password has 100 possible combinations, while a three-digit password has 1,000.

Numbers alone are the easiest to break because there are only ten possibilities for each digit (0–9). When you add letters into the mix, this creates more possibilities for the brute force attack to factor in. Special characters such as @#$%^&*() expand that scope even further. The best passwords are ones that you can remember, but that are unrelated to anything someone would be able to figure out about you and unrelated to any security questions you may have answered. They should also contain numbers, uppercase and lowercase letters, and special characters.
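The growth is exponential: the number of possible passwords equals the character set size raised to the password length. A quick PowerShell illustration for 8-character passwords (the character set sizes are exact counts; the output comments are rounded):

    # Keyspace = (character set size) ^ (password length)
    [math]::Pow(10, 8)   # digits only          : 1.0e8 combinations
    [math]::Pow(26, 8)   # lowercase letters    : ~2.1e11
    [math]::Pow(62, 8)   # letters + digits     : ~2.2e14
    [math]::Pow(95, 8)   # all printable ASCII  : ~6.6e15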

For those with multiple-language keyboards and application support, a password that combines multiple character systems such as Chinese or Russian can make it even harder to crack.

Security practitioners have for decades tried to find a balance between password complexity and usability. On the one hand, stronger passwords are harder to guess and more difficult to brute force crack. However, these more complex passwords are harder to remember. This can lead users to circumvent best practices by writing passwords down.

Similarly, frequent change intervals can cause users to construct passwords that follow predictable patterns, such as Fire$ale4Dec in December, Fire$ale4Jan in January, and so forth. Since users have so many passwords to remember, some use the same password in many places and change them all at the same time. However, when a data breach occurs in one location, the stolen usernames and passwords are often put into a database that attackers use to determine other likely passwords. In this example, the attacker might breach the database at the end of December, after which the user changes their passwords. An attacker reviewing the database in April would likely try Fire$ale4Apr and gain access to the system if the user had continued the pattern.

NIST has recognized these weaknesses in its Special Publication 800-63B. Here are some of the newer guidelines. First, the maximum allowed password length should be increased to at least 64 characters. Along with this, NIST recommends that password fields allow spaces and other printable characters in passwords. These two changes allow users to create longer but more natural passwords.

NIST has also relaxed some of the complexity rules and recommends that companies require just one uppercase, number, or symbol, not all three, and that passwords be kept longer with less frequent change intervals.

NIST also adds some requirements. It recommends that two-factor authentication be used, and it excludes SMS as a valid two-factor authentication method because of the potential for others to obtain the unencrypted SMS authentication data. It also requires that passwords be screened for how common and easily guessed they are. Authentication systems should restrict users from creating passwords that contain simple dictionary words, common phrases, or easily guessed information.
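A minimal sketch of such a screen, assuming a local wordlist of known common or breached passwords (the file path is a placeholder):

    # Reject candidate passwords that appear in a common-password list,
    # per NIST SP 800-63B guidance
    $candidate = Read-Host "Candidate password"
    $common = Get-Content "C:\Security\common-passwords.txt"
    if ($common -contains $candidate) {
        "Rejected: password appears in the common-password list."
    } else {
        "Accepted."
    }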

Documenting Security Procedures

Security policies specify the company’s expectations and provide general guidance for what to do, but they do not get into the specifics. This is where security procedures step in. Documenting procedures was covered in more detail in Chapter 9 so please review that section if this is unfamiliar to you. Security procedures outline the individual steps required to complete a task. Furthermore, security procedures ensure that those who follow the procedures will do the following:

Images   Perform the task consistently.

Images   Take a predictable amount of time to perform the task.

Images   Require the same resources each time the task is performed.

Assessment and Auditing

A network assessment is an objective review of an organization’s network infrastructure regarding current functionality and security capabilities. The environment is evaluated holistically against industry best practices and its ability to meet the organization’s requirements. Once all the assessment information has been documented, it is stored as a baseline for future audits to be performed against.

Complete audits must be scheduled on a regular basis to make certain that the configurations of all network resources are not changed in such a way that increases the risk to the environment or the organization.

Internal Audits

Internal audit teams validate that security controls are implemented correctly and that security systems are functioning as expected. Companies today operate in an environment of rapid change, and this increased frequency of change can result in more mistakes, leading to security issues. Internal audits can help catch these issues before they are exploited.

Take, for example, the technologies that enable administrators to move virtual machines between hosts with no downtime and minimal administrative effort. Because of this, some cloud environments have become extremely volatile. A side effect of that volatility is that the security posture of a guest on one cloud may not be retained when it has been migrated to a different, yet compatible, cloud. The audit team would have a specification of what the security posture should look like and they would use that to determine if the machine met the requirements after being moved to the new cloud.

A change management system can help identify changes in an environment, but initial baseline assessments and subsequent periodic audits are critical. Such evaluations make it possible for administrators to correlate performance logs on affected systems with change logs, so they can identify configuration errors that may be causing problems. Change management will be covered in more detail in Chapter 13.

Utilizing Third-Party Audits

When assessing or auditing a network, it is best practice to use a third-party product or service provider. Using external resources is preferable to using internal resources, as the latter often have both preconceived biases and preexisting knowledge about the network and security configuration.

Familiarity with the environment can produce unsuccessful audits because internal resources bring assumptions about the systems they are evaluating, and those assumptions can result in incomplete or incorrect findings. A set of eyes from an outside source not only eliminates familiarity as a potential hurdle but also brings a different (and in many cases, broader) set of skills to the evaluation.

The results of an unbiased third-party audit are more likely to hold up under scrutiny. Many regulations and standards stipulate third-party audits.

Leveraging Established Industry Standards and Regulations

As cloud computing has become ubiquitous, various standards for best practice deployments of cloud computing infrastructures have been developed. Standards have been established to improve the quality of IT organizations. Some examples of standards include the Information Technology Infrastructure Library (ITIL) and the Microsoft Operations Framework (MOF).

Regulations specify security requirements for business systems and clouds. Noncompliance with regulations can lead to fines or the inability to do business in that industry or in the company's current capacity. Some regulations include the Payment Card Industry Data Security Standard (PCI DSS), the Sarbanes–Oxley Act (SOX), and the Health Insurance Portability and Accountability Act (HIPAA). Regulatory compliance is more expensive for IT organizations than adhering to a set of standards or best practices. It requires the organization not only to build solutions according to the regulatory requirements but also to demonstrate compliance to auditors. The tools and labor required to generate the necessary proof can be costly.

In addition to adopting published best practices, organizations can implement one of the many tools available that can raise alerts when a deviation from these compliance frameworks is identified.

Applying Platform-Specific Security Standards

Many vendors have released their own security standards or device configuration guides. It is a good idea to follow the recommendations from these vendors. After all, Cisco created Cisco switches, so who better to recommend how to configure them? Seek out the configuration guides for the equipment you have and audit your device against those security guidelines.

Some vendors release multiple guidelines that are customized for different needs. For example, you may want to harden web application servers, so you look to your web hosting provider for guidance. However, they might offer different guidelines on how to configure the server for HIPAA, PCI DSS, NIST, or their general security best practice or hardening guide. Which one you choose depends on which compliance areas you need to adhere to.

Data Classification

Data classification is the practice of sorting data into discrete categories that help define the access levels and type of protection required for that set of data. These categories are then used to determine the disaster recovery mechanisms, cloud technologies required to store the data, and the placement of that data onto physically or logically separated storage resources.

The process for data classification can be divided into four steps that can be performed by teams within an organization. The first step is to identify the present data within the organization. Next, the data should be grouped into areas with similar sensitivity and availability needs. The third step is to define classifications for each unique sensitivity and availability requirement. The last step is to determine how the data will be handled in each classification.

Here are some of the different types of data that an organization would classify into categories such as public, trade secret, work product, financial data, customer data, strategic information, and employee data:

Images   Account ledgers

Images   Application development code

Images   Bank statements

Images   Change control documentation

Images   Client or customer deliverables

Images   Company brochures

Images   Contracts and SLAs

Images   Customer data

Images   HR records

Images   Network schematics

Images   Payroll

Images   Press releases

Images   Process documentation

Images   Project plans

Images   Templates

Images   Website content

Keeping Employees and Tools Up to Date

The rapidly evolving landscape of cloud technologies and virtualization presents dangers for cloud security departments that do not stay abreast of changes to both their toolsets and their training. Companies can use new virtualization technologies and tools to deploy new software more rapidly, leading to an acceleration of software development activities known as rapid deployment. See Chapter 9 for more details.

One hazard of rapid deployment is the propensity to either ignore security or proceed with the idea that functionality will be enabled immediately and security improved once the system is in place. Typically, however, requests for new functionality continue to take precedence, and security is rarely or inadequately revisited.

Many networks were initially designed to utilize traditional network security devices that monitor traffic and devices on a physical network. If the intra-virtual-machine traffic that those tools are watching for never routes through a physical network, it cannot be monitored by that traditional toolset. The problem with limiting network traffic to guests within the host is that if the tools are not virtualization or cloud aware, they will not provide the proper information to make a diagnosis or even to suggest changes to the infrastructure. Therefore, it is critical that monitoring and management toolsets (including cloud-based CLIs) are updated as frequently as the technology that they are designed to control.

Roles and Responsibilities

Security is a complex discipline and involves securing a variety of components, including applications, storage, network connectivity, and server configuration. There are many different security functions, security controls, and security technologies, so it is unlikely that a single person will be able to handle all of the company’s security needs. It is also important to evaluate methods for implementing separation of duties, introduced earlier in this chapter, by splitting the responsibilities of those managing security procedures among various people.

There are some benefits to having a different person in charge of each facet of the cloud security environment. Having different people running different configuration tests creates a system of checks and balances since not just one person has ultimate control. For example, a programmer would be responsible for verifying all of the code within their application and for making sure there are no security risks in the code itself, but the programmer would not be responsible for the web server or database server that is hosting or supporting the application. The person testing code security should be different from the person who wrote the code. Likewise, the person testing cloud service integration security should not be the person who configured it.

CERTIFICATION OBJECTIVE 11.03

Vulnerability Management

In addition to comprehensive testing of all areas affecting service and performance, it is incumbent on an organization to test for vulnerabilities as well. Security testing in the cloud is a critical part of having an optimal cloud environment. It is very similar to security testing in a traditional environment in that it covers components such as login security and the overall security layer.

Before doing any security tests, testers should always clearly define the scope, present it to the system owner, and get written permission to proceed. The contract that is in place with the cloud provider should then be reviewed to determine testing notification requirements. Inform the cloud provider of any planned security penetration testing prior to actually performing it unless the contract specifies otherwise.

Another thing for an organization to consider is that with a public cloud model, the organization does not own the infrastructure; therefore, the environment the resources are hosted in may not be all that familiar. For example, if you have an application that is hosted in a public cloud environment, that application might make some application programming interface (API) calls back into your data center via a firewall, or the application might be entirely hosted outside of your firewall.

Another primary security concern when using a cloud model is who has access to the organization’s data in the cloud and what the consequences would be if that data were lost or stolen. Being able to monitor and test access to that data is a primary responsibility of the cloud administrator and should be taken seriously, as a hosted account may not have all the proper security implemented. For example, a hosted resource might be running an older version of system software with known security issues, so keeping up with the security of hosted resources and the products running on them is vital.

Security testing should be performed on a regular basis to ensure consistent and timely cloud and network vulnerability management. Periodic security testing will reveal newly discovered vulnerabilities and recent configuration issues, enabling administrators to remediate them, ideally before attackers have an opportunity to exploit them.

Common testing scenarios include quarterly penetration testing with monthly vulnerability scanning or annual penetration testing with quarterly or monthly vulnerability scanning. It is absolutely necessary to run tests at intervals specified by compliance requirements. Testing should also be conducted whenever the organization undergoes significant changes.

Cloud vendors typically require notification before penetration testing is conducted on their networks. Microsoft Azure recently announced that it no longer requires such notification. Check with your cloud vendor before conducting vulnerability scanning or penetration testing to be sure you have permission.

In this section, you will learn about the following vulnerability management concepts:

Images   Black-box, gray-box, and white-box testing

Images   Vulnerability scanning

Images   Penetration testing

Images   Vulnerability management roles and responsibilities

Testing Methods

The three basic types of security testing in a cloud environment are known as black-box, gray-box, and white-box testing. They differ based on the amount of information the tester has about the targets before starting the test. When performing a black-box test, the tester knows as little as possible about the system, similar to a real-world hacker. Black-box testing is a good method, as it simulates a real-world attack and uncovers vulnerabilities that are discoverable even by someone who has no prior knowledge of the environment. However, it may not be right for all scenarios, because of the additional expense required for research and reconnaissance. Two other options are available: gray-box and white-box testing.

When performing gray-box testing, the test team begins with some information about the targets, usually what attackers could reasonably be assumed to discover through research, such as the list of target IP addresses, public DNS records, and public-facing URLs. Roles and configurations are not provided to the testing team in a gray-box test. Gray-box testing is faster and cheaper than black-box testing because some of the research and reconnaissance work is eliminated, but it is somewhat more expensive than white-box testing.

White-box testing is done with an insider’s view and can be much faster than black-box or gray-box testing. White-box testing makes it possible to focus on specific security concerns the organization may have because the tester spends less time figuring out which systems are accessible, their configurations, and other parameters.

Since testing is typically done at regular intervals, companies often perform black-box testing the first time and then perform gray- or white-box testing after that, assuming the testing team already knows the information gained from the first black-box test.

Vulnerability Scanning

Vulnerability scanning is the process of discovering flaws or weaknesses in systems and applications. These weaknesses can range anywhere from host and service misconfiguration to insecure application design. Vulnerability scanning can be performed manually, but it is common to use a vulnerability scanning application to perform automated testing.

Automated vulnerability scanning utilizes software to probe a target system. The vulnerability scanning software will send connection requests to a computer and then monitor the responses it receives. It may insert different data types into web forms and analyze the results. This allows the software to identify potential weaknesses in the system.

Vulnerability scanning includes basic reconnaissance tools such as port scanning, a process that queries each TCP/UDP port on a system to see if it is capable of receiving data; footprinting, the process of enumerating the computers or network devices on a target network; and fingerprinting, a process that determines the operating system and software running on a device.
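As an illustration of port scanning only (not a replacement for purpose-built scanners), here is a PowerShell sketch that probes a handful of common TCP ports on a placeholder host:

    # Probe a few well-known TCP ports and report their state
    $target = "server01.example.internal"
    foreach ($port in 21, 22, 25, 80, 443, 1433, 3389) {
        $r = Test-NetConnection -ComputerName $target -Port $port -WarningAction SilentlyContinue
        "{0,5} : {1}" -f $port, $(if ($r.TcpTestSucceeded) { "open" } else { "closed/filtered" })
    }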

Management may review the vulnerabilities and make determinations as to which ones they want to remediate and who will be responsible for remediation. The vulnerability remediation request (VRR) is a formal request to make a change to an application or system to remediate a known vulnerability.

Vulnerabilities are ranked using industry standards such as the Common Vulnerability Scoring System (CVSS). These rankings carry a risk score, and the CVSS number can be used to find additional threat and remediation information on the vulnerability in the National Vulnerability Database (NVD).
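For orientation, CVSS v3.x maps base scores to qualitative ratings (0.1–3.9 Low, 4.0–6.9 Medium, 7.0–8.9 High, 9.0–10.0 Critical). A small sketch of that mapping:

    # Map a CVSS v3.x base score to its qualitative severity rating
    function Get-CvssSeverity([double]$Score) {
        if     ($Score -eq 0)   { "None" }
        elseif ($Score -le 3.9) { "Low" }
        elseif ($Score -le 6.9) { "Medium" }
        elseif ($Score -le 8.9) { "High" }
        else                    { "Critical" }
    }
    Get-CvssSeverity 7.5   # -> High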

The remainder of this section discusses the phases, tools, and scope options for vulnerability scanning.

Phases

The vulnerability scanning process is organized into three phases: intelligence gathering, vulnerability assessment, and vulnerability validation. The phases are shown in Figure 11-1.

FIGURE 11-1   Vulnerability scanning phases

(Figure: Intelligence Gathering -> Vulnerability Assessment -> Vulnerability Validation)

Intelligence Gathering   A vulnerability scanning project begins by gathering information about the targets. Intelligence gathering consists of passive and active reconnaissance. Depending on your level of knowledge of the targets, this step may not be necessary.

Vulnerability Assessment   The second phase is vulnerability assessment. Vulnerability scanning tools are used at this stage to scan targets for common weaknesses such as outdated or unpatched software, published vulnerabilities, and weak configurations. The vulnerability assessment then measures the potential impact of discovered vulnerabilities. Identified vulnerabilities are classified according to their CVSS scores.

Vulnerability Validation   Automated scans alone do not represent a complete picture of the vulnerabilities present on the target machines. Automated scans are designed to be nondisruptive, so they tend to err on the side of caution when identifying the presence of security weaknesses. As a result, conditions which outwardly appear to be security flaws—but which in fact are not exploitable—are sometimes identified as being vulnerabilities. It takes experience in interpreting a tool’s reports, as well as knowledge of the system, to identify vulnerabilities that are likely exploitable.

Some vulnerability validation can be performed with automated tools. Such automation reduces the manual testing burden, but there will still be cases where manual validation is required to ensure a quality deliverable. Tools are discussed in the next section.

Tools

A wide variety of tools can be used to perform intelligence gathering and vulnerability assessment. Testers will likely use most of the intelligence gathering tools to gather information about their targets. Snmpwalk uses SNMP messages to obtain information on targets through their MIB data. See Chapter 7 for more information on SNMP. Fierce is used to find internal and external IP addresses for a target DNS name.

Sam Spade is a tool that combines a number of command-line functions together. These functions include Whois, a command that identifies the owner of a domain name; ping, a tool that tests to determine if a host is responding to ICMP packets; IPBlock, a tool that performs whois operations on a block of IP addresses; dig, a command that obtains resource records for a domain (see Chapter 14); traceroute, a command that identifies each hop from source to destination (see Chapter 14); and finger, a tool that obtains information on the user logged into a target machine. Please note that finger has been disabled on most machines for years now, so this tool is unlikely to work on targets today, but it remains in the Sam Spade suite of tools.

Nmap, Zenmap, and Unicornscan are each used to map a network by identifying the hosts that are online, the operating system they are running, installed applications, and security configuration such as host-based firewalls.
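
A hedged sketch of driving such a scan from a script is shown below. The flags are standard Nmap options (-sV probes service versions; -O attempts OS fingerprinting, which usually requires elevated privileges), and the target range is a reserved TEST-NET block used here as a placeholder.

    import subprocess

    # Hypothetical authorized target range (TEST-NET, for illustration).
    target = "192.0.2.0/24"

    # Service/version detection plus OS fingerprinting; run only
    # against systems you are authorized to scan.
    result = subprocess.run(
        ["nmap", "-sV", "-O", target],
        capture_output=True, text=True,
    )
    print(result.stdout)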

Some of the vulnerability assessment tools are specific to certain cloud platforms. For example, Amazon Inspector would be used for AWS servers, while Microsoft Azure Security Center would be used for servers in an Azure environment. Nessus, Nexpose, OpenVAS, and Security Center can scan cloud or on-premises systems. Of the four, OpenVAS is open source and available for free, making it an excellent tool for getting familiar with the process.
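
As a sketch of how findings might be pulled from one of these services programmatically, the following uses the boto3 client for Amazon Inspector (v2). It assumes AWS credentials are configured and Inspector is enabled in the account; the parameter and field names shown should be verified against current AWS documentation.

    import boto3

    # Sketch only: requires configured AWS credentials and permissions.
    inspector = boto3.client("inspector2")

    # Pull a page of findings and print severity and title for each.
    response = inspector.list_findings(maxResults=25)
    for finding in response.get("findings", []):
        print(finding.get("severity"), "-", finding.get("title"))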

Table 11-1 lists some of the tools along with their uses. A wide variety of the tools listed are open source. Some Linux distributions come with a large number of security tools preinstalled. The most popular security distribution is Kali, but others such as DEFT, Caine, Pentoo, Samurai Web Testing Framework, and Parrot Security offer similarly valuable tool sets, albeit with a somewhat different interface. These Linux security distributions can be used as a bootable DVD or can be installed on a system for permanent use. Linux security distributions contain hundreds of security tools, including many for penetration testing, vulnerability scanning, network analysis, and computer forensics.

TABLE 11-1   Vulnerability Scanning Tools

Images

Scope

A vulnerability scan can cover different scopes, such as external scanning, internal scanning, web application scanning, or a combination of them. External scanning is conducted from outside the company's internal network and targets web-facing nodes such as web servers, e-mail servers, FTP servers, and VPN servers.

Internal vulnerability scanning, on the other hand, is conducted from within the company network, targeting servers, workstations, and other devices on the corporate network. In both internal and external testing, web applications need close attention, as there can be significant variations in how each application works depending on its purpose and role. For example, an enterprise resource planning (ERP) system functions much differently than an asset tracking system. The ERP system has many more interconnections and is functionally more complex than the asset tracking system, just to name some of the differences.

Scans can also be performed through services offered by the cloud provider, such as Microsoft Azure Security Center, Google Cloud Security Scanner, or Amazon Inspector. Some of these tools scan only the types of systems that run on that vendor's platform, while others are more flexible. For example, Google Cloud Security Scanner scans Google App Engine apps for vulnerabilities, while Amazon Inspector can analyze any applications running within Amazon Web Services (AWS).

Penetration Testing

Penetration testing evaluates system security at a point in time by attacking target systems as an outside attacker would and then documenting which attacks were successful, how the systems were exploited, and which vulnerabilities were utilized. Penetration testing provides realistic, accurate, and precise data on system security.

A penetration test is a proactive and approved plan to measure the protection of a cloud infrastructure by exploiting system vulnerabilities, including operating system and software application bugs, insecure settings, and potentially dangerous or naïve end-user behavior, to obtain access to systems. Such assessments are also helpful in confirming the effectiveness of defensive mechanisms and in assessing end users' adherence to security policies.

Tests are usually performed using manual or automated technologies to compromise servers, endpoints, applications, wireless networks, network devices, mobile devices, and other potential points of exposure. Once vulnerabilities are exploited on a particular system, pen testers may use the compromised system to launch subsequent exploits against other internal resources, in a technique known as pivoting. Pivoting is performed to incrementally reach higher levels of security clearance and deeper access to electronic assets and data through privilege escalation.

The remainder of this section discusses the phases, tools, scope options, and testing limitations for penetration testing. The section concludes with a discussion of roles and responsibilities.

Phases

The penetration testing process is organized into seven phases: intelligence gathering, vulnerability assessment, vulnerability validation, attack planning and simulation, exploitation, post-exploitation, and reporting. The phases are shown in Figure 11-2.

FIGURE 11-2   Penetration testing phases

Images

As you can see, penetration testing begins with the three phases of vulnerability scanning, which were described in the previous section and are not repeated here. We will start with phase 4, attack planning and simulation.

Attack Planning and Simulation   Once the vulnerabilities have been enumerated and validated, the next step is to determine how the vulnerabilities can best be used together to exploit systems. Some of this comes with experience as penetration testers learn to see the subtle relationships between hosts that automated tools and complex scripts cannot detect. An initial plan of attack is built from this data.

This phase also involves attack plan simulations. The exploits outlined in the attack plan are simulated in a test environment, or automatically within penetration testing tools, to eliminate lingering false-positive results and refine the plan. A full attack strategy can then be assembled for use in the exploitation phase.

Exploitation   In the exploitation phase, penetration testers establish access to a system or resource by employing exploit packages that take advantage of discovered vulnerabilities. Penetration testing activities are performed for the approved scope following the attack strategy.

Post-exploitation   In this stage, evidence of exploitation of the vulnerabilities is collected, and remnants from the exploits are removed. As part of this, penetration testers clean up accounts and resident files that were put in place to perform the exploits.

Reporting   The last phase of penetration testing is to put all the details of the tests, including what worked and what didn't, into a report. The report documents the security vulnerabilities that were successfully exploited and is provided to a risk manager or someone else in charge of security in the organization. That person then coordinates with other teams to remediate the vulnerabilities, track remediation, and possibly schedule validation tests to ensure that the identified vulnerabilities have been successfully remediated.

Reports rank findings by risk rating and provide recommendations on how to remediate the items.

Tools

A wide variety of tools can be used to perform penetration testing. Table 11-2 lists several popular penetration testing tools. Some tools are large suites with various components, while others perform a particular task. Many tools are command-line driven, requiring familiarity with the command structure and usage.

TABLE 11-2   Penetration Testing Tools

Images

Penetration testers may try to crack passwords in order to test the strength of passwords users have created. Brute force attempts are usually made against a password database that has been downloaded from a system. Testers first obtain access to the password database and download it; however, the passwords in most databases cannot be read directly because they are stored as hashes. Penetration testers use a computer with a powerful CPU or GPU, or a network of distributed systems, to try all possible combinations until they find a match. The number of possible combinations increases exponentially as the number of characters in the password increases.
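
The exponential growth is easy to demonstrate. The Python sketch below brute-forces a hypothetical unsalted SHA-256 hash of a weak four-character password; real password databases typically use salted, deliberately slow hashes such as bcrypt, which raise the attacker's cost enormously.

    import hashlib
    import itertools
    import string

    # Hypothetical stolen hash: SHA-256 of a weak four-character password.
    target_hash = hashlib.sha256(b"ab1!").hexdigest()

    # A 40-character alphabet yields 40^4 = 2,560,000 four-character
    # candidates; at eight characters the keyspace exceeds 6.5 trillion.
    alphabet = string.ascii_lowercase + string.digits + "!@#$"

    found = None
    for length in range(1, 5):
        for candidate in itertools.product(alphabet, repeat=length):
            guess = "".join(candidate)
            if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
                found = guess
                break
        if found:
            break

    print("Recovered password:", found)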

Scope

A penetration test can cover any of the following different scopes, or a combination of them:

Images   External penetration testing   External penetration testing is conducted from the Internet, outside the company's internal network, with the targets being the company's web-facing hosts. These may include web servers, e-mail servers, FTP servers, and VPN servers.

Images   Internal penetration testing   Internal penetration testing is conducted from within the company network. Targets may include servers, workstations, network devices such as firewalls or routers, and Internet of things (IoT) devices such as webcams, IP lighting, or smart TVs.

Images   Web application penetration testing   Web application penetration testing is concerned with evaluating the security of web-based applications by issuing attacks against the site and its supporting infrastructures such as database servers, file servers, or authentication devices.

Images   Wireless penetration testing   Wireless penetration testing evaluates wireless access points and common weaknesses in a company’s wireless network. This includes attempting to crack wireless passwords, capture traffic on the wireless network, capture authentication information, and obtain unauthorized access to the network through a wireless connection. Wireless penetration testing also scans for rogue access points and peer-to-peer wireless connections.

Images   Physical penetration testing   Physical penetration testing evaluates the ability of an outsider to obtain direct access to company facilities and areas containing sensitive data.

Images   Social engineering penetration testing   Social engineering penetration testing can involve a person directly interacting with individuals, but it is more common to use remote social engineering tactics since these are most often employed by attackers.

Remote social engineering evaluates employee response to targeted phishing attacks. The penetration tester requests a listing of e-mail addresses to be tested. A custom phishing e-mail is then crafted and sent to every employee, using a spoofed source e-mail address or an external address that appears legitimate. The e-mail message encourages the user to perform a range of nonsecure activities such as clicking a link, visiting an unauthorized website, downloading a file, or revealing their username and password.

Testing Limitations

Testing limitations affect the scope of penetration testing by defining types of testing that are not allowed. Typical restrictions exclude memory corruption tests and similar tests that are likely to cause instability; such exclusions are assumed when testing production environments. Denial of service attacks are also often excluded from the scope of testing.

Images

The difference between a penetration test and a vulnerability assessment is that a penetration test simulates an attack on the environment.

Roles and Responsibilities

Security testing can be a complicated procedure and involves testing a variety of components, including applications, storage, network connectivity, and server configuration. Security testing requires specialized skill sets and should be performed by a team that is independent from DevOps.

Vulnerability scanning is an easier task to perform than penetration testing, and penetration testing requires vulnerability scanning, so this is an obvious place to define roles. Vulnerability analysts detect and validate vulnerabilities and then pass that information to penetration testers, who may be more familiar with certain areas such as operating systems, storage, software development, web services, and communications protocols. These penetration testers are also familiar with how such services can be exploited, and they stay up to date on new vulnerabilities, exploits, and tools.

The social engineering penetration tester may also be a distinct role, since this work requires knowledge of human behavior and of what will most effectively entice victims to read phishing e-mails and follow the instructions given.

The most important detail is that the security testing team should be distinct and independent from the DevOps team. Such a separation of duties ensures that the test accurately represents what an attacker could do. Furthermore, it provides a level of objectivity and reduces the likelihood of bias from internal knowledge or a conflict of interest that could arise if security testing team members have a personal stake, for example, in an application being launched on time.

CERTIFICATION SUMMARY

This chapter covered the concepts of cloud security engineering, security governance and strategy, and vulnerability management. Cloud security engineering is the practice of protecting the usability, reliability, integrity, and safety of information systems, including network and cloud infrastructures, and the data traversing and stored on such systems.

One method of engineering secure cloud systems is to harden them. Hardening involves ensuring that the host or guest is configured in such a way that reduces the risk of attack from either internal or external sources. Another method is to layer security technologies on top of one another so that systems are protected even if one security system fails because others are there to guard against intrusion. Next, incorporate the principle of least privilege by granting employees only the minimum permissions necessary to do their job. Along with least privilege is a concept called separation of duties, a process that divides the responsibilities required to perform a sensitive task among two or more people so that one person, acting alone, cannot compromise the system.

Security governance and strategy begins with the creation of company security policies. Security policies set the organizational expectations for certain functional security areas.

Data classification can help companies apply the correct protection mechanisms to the data they house and maintain. Data classification is the practice of sorting data into discrete categories that help define the access levels and type of protection required for that set of data. Some of the expectations captured in security policies come from standards, best practices, or compliance requirements.

Companies perform assessments and audits to identify areas where they are not meeting organizational expectations, contractual agreements, best practices, or regulatory requirements. Regulations such as HIPAA, SOX, and PCI DSS specify requirements for security controls, procedures, and policies that some organizations must comply with.

Cloud security professionals have limited time and much to do. Consider how existing security management processes can be automated. Some examples include removing inactive accounts, eliminating outdated firewall rules, cleaning up outdated security settings, and maintaining ACLs for target objects.
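
As one hedged example of such automation, the Python sketch below uses boto3 to flag IAM users whose console password has not been used in the last 90 days. It assumes configured AWS credentials with IAM read permissions, and it reports candidates for review rather than disabling anything automatically.

    from datetime import datetime, timedelta, timezone

    import boto3

    iam = boto3.client("iam")
    cutoff = datetime.now(timezone.utc) - timedelta(days=90)

    # Note: list_users returns one page; production code would paginate.
    for user in iam.list_users()["Users"]:
        # PasswordLastUsed is present only for users who have signed in
        # with a console password; treat absent values conservatively.
        last_used = user.get("PasswordLastUsed")
        if last_used is None or last_used < cutoff:
            print("Inactive candidate:", user["UserName"])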

Lastly, it is important to test systems for vulnerabilities and to remediate those vulnerabilities so that systems will be protected against attacks targeting those vulnerabilities. Vulnerability management consists of vulnerability scanning and penetration testing to identify weaknesses in organizational systems and the corresponding methods and techniques to remediate those weaknesses. Vulnerability scanning consists of intelligence gathering, vulnerability assessment, and vulnerability validation. Penetration testing includes the steps in vulnerability scanning as well as attack planning and simulation, exploitation, post-exploitation, and reporting.

KEY TERMS

Use the following list to review the key terms discussed in this chapter. The definitions also can be found in the glossary.

data classification   Practice of sorting data into discrete categories that help define the access levels and type of protection required for that set of data.

demilitarized zone (DMZ)   A separate network that is layered in between an internal network and an external network to house resources that need to be accessed by both while preventing direct access from the outside network to the inside network.

distributed denial of service (DDoS)   An attack that targets a single system simultaneously from multiple compromised systems.

fingerprinting   A process that determines the operating system and software running on a device.

footprinting   The process of enumerating the computers or network devices on a target network.

hardening   Ensuring that a host, guest, or network is configured in such a way that reduces the risk of attack from either internal or external sources.

least privilege   Principle that states employees should be granted only the minimum permissions necessary to do their job.

network assessment   Objective review of an organization’s network infrastructure regarding functionality and security capabilities used to establish a baseline for future audits.

network audit   Objective periodic review of an organization’s network infrastructure against an established baseline.

penetration testing   Process of evaluating network security with a simulated attack on the network from both external and internal attackers.

personally identifiable information (PII)   Data that represents the identity of a person, such as name, phone number, address, e-mail address, Social Security number, and date of birth. The PII term is mostly used in the context of privacy compliance.

ping flood   An attack that sends a massive number of ICMP packets to overwhelm a system with more traffic than it can handle.

Ping of Death (PoD)   An attack that sends malformed ICMP packets with the intent of crashing systems that cannot process them.

protected health information (PHI)   Data that represents the identity of a patient, such as name, phone number, address, e-mail address, Social Security number, and date of birth. The PHI term is mostly used in the context of HIPAA compliance.

separation of duties   A process that divides the responsibilities required to perform a sensitive task among two or more people so that one person, acting alone, cannot compromise the system.

spoofing   The modification of the source IP address to obfuscate the original source.

vulnerability assessment   Process used to identify and quantify any vulnerabilities in a network environment.

vulnerability remediation request (VRR)   A formal request to make a change to an application or system to remediate a known vulnerability.

vulnerability scanning   The process of discovering flaws or weaknesses in systems and applications.

Images TWO-MINUTE DRILL

Cloud Security Engineering

Images  To protect network resources from threats, secure network design employs multiple overlapping controls to prevent unwanted access to protected cloud resources. Some layered security components include demilitarized zones, ACLs, and intrusion detection and prevention systems.

Images  Hardening is the process of ensuring that a host or guest is not vulnerable to compromise. Logging must be enabled to track potential intrusions. Only the required software components should be installed on the system, software patches should regularly be applied, firewall and antimalware software should be functional and up to date, and any unused user accounts should be disabled or removed.

Images  Separation of duties, also known as segregation of duties, divides the responsibilities required to perform a sensitive task among two or more people so that one person, acting alone, cannot compromise the system.

Images  Incorporating the principle of least privilege limits potential misuse and risk of accidental mishandling or viewing of sensitive information by unauthorized people. Employees should be granted only the minimum permissions necessary to do their job. No more, no less.

Images  Security automation ensures that routine security procedures are performed consistently, and it frees up valuable security resources to perform other duties. Automation examples discussed in this chapter include disabling inactive accounts, eliminating outdated firewall rules, cleaning up outdated security settings, and maintaining ACLs.

Security Governance and Strategy

Images  Security policies set the organizational expectations for certain functional security areas.

Images  Complete audits must be scheduled on a regular basis to make certain that the configurations of all network resources are not changed in such a way that increases the risk to the environment or the organization.

Images  The rapidly evolving landscape of cloud technologies and virtualization presents dangers for cloud security departments that do not stay abreast of changes to both their toolsets and their training.

Images  Security is a complex discipline and involves securing a variety of components, including applications, storage, network connectivity, and server configuration. There are many different security functions, security controls, and security technologies, so it is unlikely that a single person will be able to handle all of the company's security needs. Roles and responsibilities define who is supposed to do what in security.

Vulnerability Management

Images  Security testing helps a company stay aware of security gaps in its technology infrastructure and cloud environments.

Images  Testing can be black-box, where no information about targets is provided; gray-box, where some information is provided; or white-box, where significant information is provided about the targets to test.

Images  Vulnerability scanning is the process of discovering flaws or weaknesses in systems and applications.

Images  The vulnerability scanning process is organized into three phases: intelligence gathering, vulnerability assessment, and vulnerability validation.

Images  Penetration testing is an extension of vulnerability scanning that evaluates system security at a point in time by attacking target systems as an outside attacker would and then documenting which attacks were successful, how the systems were exploited, and which vulnerabilities were utilized.

Images  The penetration testing process is organized into seven phases: intelligence gathering, vulnerability assessment, vulnerability validation, attack planning and simulation, exploitation, post-exploitation, and reporting.

Images  A penetration test tests network and host security by simulating malicious attacks and then analyzing the results (not to be confused with a vulnerability assessment, which only identifies weaknesses that can be determined without running a penetration test).

Images  Security testing requires specialized skill sets and should be performed by a team that is independent from DevOps.

Images SELF TEST

The following questions will help you measure your understanding of the material presented in this chapter. As indicated, some questions may have more than one correct answer, so be sure to read all the answer choices carefully.

Cloud Security Engineering

1.   You have been asked to harden a crucial network router. What should you do? (Choose two.)

A.   Disable the routing of IPv6 packets.

B.   Change the default administrative password.

C.   Apply firmware patches.

D.   Configure the router for SSO.

2.   Which best practice configures host computers so that they are not vulnerable to attack?

A.   Vulnerability assessment

B.   Penetration test

C.   Hardening

D.   PKI

3.   You are responsible for cloud security at your organization. The Chief Compliance Officer has mandated that the organization utilize layered security for all cloud systems. Which of the following would satisfy the requirement?

A.   Implementing ACLs and packet filtering on firewalls

B.   Configuring a DMZ with unique ACLs between networks and an IDS/IPS

C.   Specifying separation of duties for cloud administration and training additional personnel on security processes

D.   Defining a privacy policy, placing the privacy policy on the website, and emailing the policy to all current clients

Security Governance and Strategy

4.   Which policy would be used to specify how all employee-owned devices may be used to access organizational resources?

A.   Privacy policy

B.   Mobile device policy

C.   Remote access policy

D.   BYOD policy

5.   Which policy or set of rules temporarily disables an account when a threshold of incorrect passwords is attempted?

A.   Account lockout policy

B.   Threshold policy

C.   Disabling policy

D.   Password complexity enforcement rules

Vulnerability Management

6.   Which type of test simulates a network attack?

A.   Vulnerability assessment

B.   Establishing an attack baseline

C.   Hardening

D.   Penetration test

7.   Which of the following phases are unique to penetration testing? (Choose all that apply.)

A.   Intelligence gathering

B.   Vulnerability validation

C.   Attack planning and simulation

D.   Exploitation

8.   Which of the following describes a brute force attack?

A.   Attacking a site with exploit code until the password database is cracked

B.   Trying all possible password combinations until the correct one is found

C.   Performing a denial of service (DoS) attack on the server authenticator

D.   Using rainbow tables and password hashes to crack the password

Images SELF TEST ANSWERS

Cloud Security Engineering

1.   Images   B, C. Changing the default passwords and applying patches are important steps in hardening a device.

Images   A and D are incorrect. Without more information, disabling IPv6 packet routing does not harden a router, nor does configuring it for SSO.

2.   Images   C. Hardening configures systems such that they are protected from compromise.

Images   A, B, and D are incorrect. While vulnerability assessments identify security problems, they do not correct them. Penetration tests simulate an attack, but do not configure machines to be protected from such attacks. PKI (public key infrastructure) is a hierarchy of trusted security certificates; it does not address configuration issues.

3.   Images   B. Layered security requires multiple overlapping controls that are used together to protect systems. Configuring a DMZ with ACLs along with an IDS/IPS provides multiple layers because an attacker would have to compromise a machine in the DMZ and then pivot from that machine to another machine in the internal network. However, IDS/IPS systems might detect this activity and notify administrators and block the attacker from making the connection.

Images   A, C, and D are incorrect. A is incorrect because implementing ACLs and packet filtering is just one component, not a set of layers. C is incorrect because separation of duties and cross-training address fraud and resiliency. Only the fraud element would be a layer, but since no other controls are specified, this answer does not work. D is incorrect because each of these elements is only concerned with the privacy policy, which does not enhance the security of the system. Rather, it ensures that customers know how the company will protect their data and what data will be collected.

Security Governance and Strategy

4.   Images   D. The BYOD policy is the correct answer here. Bring your own device (BYOD) refers to devices that are employee owned rather than company owned; the BYOD policy governs how those devices may be used to access organizational resources.

Images   A, B, and C are incorrect. The privacy policy specifies the information that the organization will store and how it is handled, not how employee devices may be used. The mobile device policy could address some employee devices, but it may not address all employee-owned devices because some of those devices may not be mobile, such as a home desktop computer. Lastly, the remote access policy specifies how employees can access systems remotely, but it would not address how BYOD devices are used locally to access resources.

5.   Images   A. An account lockout policy temporarily disables an account after a certain number of failed logons. For example, if the policy were set to 3, then a user’s account would be temporarily disabled (locked out) after three failed tries until an administrator unlocks it.

Images   B, C, and D are incorrect. Threshold and disabling policies are not password policies. Password complexity enforcement rules ensure that users create complex passwords and that new passwords meet complexity requirements, but they do not impact users who enter their credentials incorrectly.

Vulnerability Management

6.   Images   D. Penetration tests simulate a network attack.

Images   A, B, and C are incorrect. Vulnerability assessments identify weaknesses but do not perform simulated network attacks. While establishing a usage baseline is valid, establishing an attack baseline is not. Hardening is the process of configuring a system to make it less vulnerable to attack; it does not simulate such attacks.

7.   Images   C and D. Penetration testing includes all the steps from vulnerability scanning. The two steps that are unique to penetration testing here are attack planning and simulation and exploitation.

Images   A and B are incorrect. Intelligence gathering and vulnerability validation are steps that are performed both in vulnerability scanning and in penetration testing, so they are not unique to penetration testing.

8.   Images   B. A brute force attack tries all possible combinations of a password. A brute force attack relies on the ability to try thousands or millions of passwords per second. For example, at the time of this writing, the password cracking system that is used at TCDI for penetration testing can try 17 million passwords per minute in a brute force attack.

Images   A, C, and D are incorrect. Attacking a site with exploit code is the exploitation phase of penetration testing. Simply trying all possible exploits is called kitchen sink exploiting and it usually results in problems. Performing a DoS attack on the authenticator would not be a brute force attack, but it could make it impossible for users to log into the system until the authenticator comes online again. Rainbow tables and password hashes might crack the password, but using them would not constitute a brute force attack.
