Chapter 6
Security Assessment and Testing

This chapter covers the following topics:

  • Design and Validate Assessment, Test, and Audit Strategies: Explains the use of assessment, test, and audit strategies, including internal, external, and third-party strategies.

  • Conduct Security Control Testing: Concepts discussed include the security control testing process, including vulnerability assessments, penetration testing, log reviews, synthetic transactions, code review and testing, misuse case testing, test coverage analysis, and interface testing.

  • Collect Security Process Data: Concepts discussed include NIST SP 800-137, account management, management review and approval, key performance and risk indicators, backup verification data, training and awareness, and disaster recovery and business continuity.

  • Analyze and Report Test Outputs: Explains the importance of analyzing and reporting test outputs, including automatic and manual reports.

  • Conduct or Facilitate Security Audits: Describes the internal, external, and third-party auditing processes and the three types of SOC reports.

Security assessment and testing covers designing, performing, and analyzing security testing. Security professionals must understand these processes to protect their assets from attacks.

Security assessment and testing requires a number of testing methods to determine an organization's vulnerabilities and risks. It assists an organization in managing the risks in planning, deploying, operating, and maintaining systems and processes. Its goal is to identify any technical, operational, and system deficiencies early in the process, before the systems that contain them are deployed. The earlier you discover those deficiencies, the cheaper they are to fix.

This chapter discusses assessment and testing strategies, security control testing, collection of security process data, analysis and reporting of test outputs, and internal, external, and third-party audits.

Foundation Topics

Design and Validate Assessment and Testing Strategies

Security professionals must take a lead role in ensuring that their organization plans, designs, executes, and validates appropriate security assessment, testing, and audit strategies so that risks are mitigated. The organization should rely on industry best practices, national and international standards, and vendor-recommended practices and guidelines to ensure that the strategies are planned and implemented appropriately.

Organizations will most likely establish a team that will be responsible for executing any assessment, testing, and auditing strategies. The team should consist of individuals who understand security assessment, testing, and auditing but should also include representatives from other areas of the organization. Verifying and validating security is an ongoing activity that never really stops. But security professionals should help guide an organization in terms of when a particular type of assessment or testing is best performed.

Security Testing

Security testing verifies that a control is functioning properly. It can be performed manually or with automated tools, should be carried out on a regular basis, and should cover all types of devices.

When performing security testing, security professionals should understand that the testing affects the performance of the devices involved. Testing cannot always be scheduled during non-peak hours, and restricting tests to non-peak hours can also skew the results because they do not reflect behavior under normal load.

Security professionals should consider the following factors when performing security testing:

  • Impact

  • Difficulty

  • Time needed

  • Changes that could affect the performance

  • System risk

  • System criticality

  • Security test availability

  • Information sensitivity level

  • Likelihood of technical failure or misconfiguration

Once security tests are performed, security professionals should analyze the results and make appropriate recommendations based on those results. In addition, the security testing tools themselves can be configured to send alerts or messages based on preconfigured triggers or filters. Without proper analysis, security testing does not provide a benefit to the organization.

Security Assessments

A security assessment is a review of the security status of a system, application, or other environment. During an assessment, a security professional reviews the results of the security tests, identifies any vulnerabilities, and makes recommendations for remediation. In this way, security testing feeds into security assessments.

Security professionals should prepare a formal security assessment report that includes all of the identified issues and recommendations. Also, they should document the actions taken based on the recommendations.

Security Auditing

Security auditing is the process of capturing and reviewing records of activity so that the individuals performing certain activities can be identified and held accountable. Like security assessment and testing, it can be performed internally, externally, and via a third party. Security auditing is covered in more detail later in this chapter and in Chapter 7, “Security Operations.”

Internal, External, and Third-party Security Assessment, Testing, and Auditing

Security assessment, testing, and auditing can be performed in three ways: internal, external, and third-party. Internal assessment, testing, and auditing are carried out by personnel within the organization. External assessment, testing, and auditing are carried out by a vendor or contractor that is engaged by the company.

Sometimes third-party assessment, testing, and auditing are performed by a party completely unrelated to the company and not previously engaged by it. This scenario often arises as a result of having to comply with some standard or regulation or when accreditation or certification is involved. Many certifying or regulating bodies may require engagement of a third party that has not had a previous relationship with the organization being assessed. In this case, the certifying body will work with the organization to engage an approved third party.

Companies should ensure that, at minimum, internal and external testing and assessments are completed on a regular basis.

Conduct Security Control Testing

Organizations must manage the security control testing that occurs to ensure that all security controls are tested thoroughly by authorized individuals. The facets of security control testing that organizations must include are vulnerability assessments, penetration testing, log reviews, synthetic transactions, code review and testing, misuse case testing, test coverage analysis, and interface testing.

Vulnerability Assessment

A vulnerability assessment helps to identify the areas of weakness in a network. It can also help to determine asset prioritization within an organization. A comprehensive vulnerability assessment is part of the risk management process. But for access control, security professionals should use vulnerability assessments that specifically target the access control mechanisms.

Vulnerability assessments usually fall into one of three categories:

  • Personnel testing: Reviews standard practices and procedures that users follow.

  • Physical testing: Reviews facility and perimeter protections.

  • System and network testing: Reviews systems, devices, and network topology.

The security analyst who will be performing a vulnerability assessment must understand the systems and devices that are on the network and the jobs they perform. The analyst needs this information to assess the vulnerabilities of those systems and devices based on the known and potential threats against them.

After gaining knowledge regarding the systems and devices, the security analyst should examine the existing controls in place and identify any threats against these controls. With this information in hand, the analyst should carry out threat modeling to identify the threats that could negatively affect systems and devices and the attack methods that could be used. The security analyst can then use all the information gathered to determine which automated tools to use to search for vulnerabilities. After the vulnerability analysis is complete, the security analyst should verify the results to ensure that they are accurate and then report the findings to management, with suggestions for remedial action.

Vulnerability assessment applications include Nessus, Open Vulnerability Assessment System (OpenVAS), Core Impact, Nexpose, GFI LanGuard, QualysGuard, and Microsoft Baseline Security Analyzer (MBSA). Of these applications, OpenVAS and MBSA are free.

When selecting a vulnerability assessment tool, you should research the following metrics: accuracy, reliability, scalability, and reporting. Accuracy is the most important metric. A false positive generally results in time spent researching an issue that does not exist. A false negative is more serious, as it means the scanner failed to identify an issue that poses a serious security risk.

Network Discovery Scan

A network discovery scan examines a range of IP addresses to determine which ports are open. This type of scan only shows a list of systems on the network and the ports in use on the network. It does not actually check for any vulnerabilities.

Topology discovery entails determining the devices in the network, their connectivity relationships to one another, and the internal IP addressing scheme in use. Any combination of these pieces of information allows a hacker to create a “map” of the network, which aids him tremendously in evaluating and interpreting the data he gathers in other parts of the hacking process. If he is completely successful, he will end up with a diagram of the network. Your challenge as a security professional is to determine whether such a mapping process is possible, using the same tools as the attacker. Based on your findings, you should determine steps to take that make topology discovery either more difficult or, better yet, impossible.

Operating system fingerprinting is the process of using some method to determine the operating system running on a host or a server. By identifying the OS version and build number, a hacker can look up common vulnerabilities of that OS using readily available documentation from the Internet. While many of the issues will have been addressed in subsequent updates, service packs, and hotfixes, there might be zero-day weaknesses (issues that have not been widely publicized or addressed by the vendor) that the hacker can leverage in the attack. Moreover, if any of the relevant security patches have not been applied, the weaknesses the patches were intended to address will exist on the machine. Therefore, the purpose of attempting OS fingerprinting during an assessment is to determine the relative ease with which it can be done and to identify methods to make it more difficult.

Operating systems have well-known vulnerabilities, and so do common services. By determining the services that are running on a system, an attacker also discovers potential vulnerabilities of the service of which he may attempt to take advantage. This is typically done with a port scan, in which all “open,” or “listening,” ports are identified. Once again, the lion’s share of these issues will have been mitigated with the proper security patches, but that is not always the case; it is not uncommon for security analysts to find that systems that are running vulnerable services are missing the relevant security patches. Consequently, when performing service discovery, check patches on systems found to have open ports. It is also advisable to close any ports not required for the system to do its job.

Network discovery tools can perform the following types of scans:

  • TCP SYN scan: Sends a packet to each scanned port with the SYN flag set. If a response is received with the SYN and ACK flags set, the port is open.

  • TCP ACK scan: Sends a packet to each port with the ACK flag set. If no response is received, then the port is marked as filtered. If an RST response is received, then the port is marked as unfiltered.

  • Xmas scan: Sends a packet with the FIN, PSH, and URG flags set. If the port is open, there is no response. If the port is closed, the target responds with a RST/ACK packet.

The result of this type of scan is that security professionals can determine whether ports are open, closed, or filtered. An open port is in use by an application on the remote system. A closed port is reachable, but no application is accepting connections on it. A filtered port cannot be reached, typically because a firewall or filtering device is blocking the probe.

The most widely used network discovery scanning tool is Nmap.
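
The idea behind such a scan can be illustrated with a short Python sketch that performs a simple TCP connect scan against a few ports. This is a minimal, illustrative example only: the target address 192.0.2.10 and the port list are hypothetical placeholders, and a full TCP connect is slower and noisier than the half-open SYN scan that Nmap performs by default.

    import socket

    # Hypothetical target and ports; scan only systems you are authorized to test.
    target = "192.0.2.10"
    ports = [22, 80, 443, 3389]

    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(1.0)
        try:
            # connect_ex() returns 0 when the TCP handshake completes (port open).
            result = s.connect_ex((target, port))
            print(f"Port {port}: {'open' if result == 0 else 'closed or filtered'}")
        finally:
            s.close()

Note that a simple connect scan cannot distinguish closed ports from filtered ports; tools such as Nmap make that distinction by interpreting RST responses versus timeouts.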

Network Vulnerability Scan

Network vulnerability scans perform a more complex scan of the network than network discovery scans. These scans will probe a targeted system or network to identify vulnerabilities. The tools used in this type of scan will contain a database of known vulnerabilities and will identify if a specific vulnerability exists on each device.
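
Conceptually, the scanner compares what it observes on each device against its database of known vulnerabilities. The following Python sketch illustrates only that matching step, using a tiny, made-up signature set keyed by service banner; real scanners use far richer signatures, version logic, and authenticated checks.

    # Abbreviated, illustrative vulnerability "database" keyed by service banner.
    KNOWN_VULNS = {
        "OpenSSH_7.2": ["CVE-2016-8858 (example entry)"],
        "Apache/2.4.49": ["CVE-2021-41773 (path traversal)"],
    }

    # Banners collected during service enumeration (invented values).
    observed = {
        "192.0.2.10:22": "OpenSSH_7.2",
        "192.0.2.11:80": "Apache/2.4.49",
        "192.0.2.12:80": "nginx/1.25.3",
    }

    for endpoint, banner in observed.items():
        findings = KNOWN_VULNS.get(banner, [])
        if findings:
            print(f"{endpoint} ({banner}): possible issues -> {', '.join(findings)}")
        else:
            print(f"{endpoint} ({banner}): no match in this signature set")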

There are two types of vulnerability scanners:

  • Passive vulnerability scanners: A passive vulnerability scanner (PVS) monitors network traffic at the packet layer to determine topology, services, and vulnerabilities. It avoids the instability that can be introduced to a system by actively scanning for vulnerabilities.

    PVS tools analyze the packet stream and look for vulnerabilities through direct analysis. They are deployed in much the same way as intrusion detection systems (IDSs) or packet analyzers. A PVS can pick out a network session that targets a protected server and monitor it as much as needed. The biggest benefit of a PVS is its ability to do its work without impacting the monitored network. Some examples of PVSs are the Nessus Network Monitor (formerly Tenable PVS) and NetScanTools Pro.

  • Active vulnerability scanners: Whereas passive scanners can only gather information, active vulnerability scanners (AVSs) can take action to block an attack, such as blocking a dangerous IP address. They can also be used to simulate an attack to assess readiness. They operate by sending transmissions to nodes and examining the responses. Because of this, these scanners may disrupt network traffic. Examples include Nessus and Microsoft Baseline Security Analyzer (MBSA).

Regardless of whether it’s active or passive, a vulnerability scanner cannot replace the expertise of trained security personnel. Moreover, these scanners are only as effective as the signature databases on which they depend, so the databases must be updated regularly. Finally, scanners require bandwidth and potentially slow the network.

For best performance, you can place a vulnerability scanner in a subnet that needs to be protected. You can also connect a scanner through a firewall to multiple subnets; this complicates the configuration and requires opening ports on the firewall, which could be problematic and could impact the performance of the firewall.

The most popular network vulnerability scanning tools include Qualys, Nessus, and MBSA.

Vulnerability scanners can use agents that are installed on the devices, or they can be agentless. While many vendors argue that using agents is always best, there are advantages and disadvantages to both, as presented in Table 6-1.

Table 6-1 Server-Based vs. Agent-Based Scanning

Agent-based (pull technology):

  • Can get information from disconnected machines or machines in the DMZ

  • Ideal for remote locations that have limited bandwidth

  • Less dependent on network connectivity

  • Based on policies defined in the central console

Server-based (push technology):

  • Good for networks with plentiful bandwidth

  • Dependent on network connectivity

  • Central authority does all the scanning and deployment

Some scanners can do both agent-based and server-based scanning (also called agentless or sensor-based scanning).

Web Application Vulnerability Scan

Because web applications are so widely used today, companies must ensure that their web applications remain secure and free of vulnerabilities. Web application vulnerability scanners are special tools that examine web applications for known vulnerabilities.

Popular web application vulnerability scanners include QualysGuard and Nexpose.

Penetration Testing

The goal of penetration testing, also known as ethical hacking, is to simulate an attack in order to identify any threats that could stem from internal or external sources attempting to exploit the vulnerabilities of a system or device.

The steps in performing a penetration test are as follows:

  1. Document information about the target system or device.

  2. Gather information about attack methods against the target system or device. This includes performing port scans.

  3. Identify the known vulnerabilities of the target system or device.

  4. Execute attacks against the target system or device to gain user and privileged access.

  5. Document the results of the penetration test and report the findings to management, with suggestions for remedial action.

Both internal and external tests should be performed. Internal tests occur from within the network, whereas external tests originate outside the network and target the servers and devices that are publicly visible.

Strategies for penetration testing are based on the testing objectives defined by the organization. The strategies that you should be familiar with include the following:

  • Blind test: The testing team is provided with limited knowledge of the network systems and devices, using only publicly available information. The organization’s security team knows that an attack is coming. This test requires more effort by the testing team because the team must simulate an actual attack.

  • Double-blind test: This test is like a blind test except the organization’s security team does not know that an attack is coming. Only a few individuals in the organization know about the attack, and they do not share this information with the security team. This test usually requires equal effort for both the testing team and the organization’s security team.

  • Target test: Both the testing team and the organization’s security team are given maximum information about the network and the type of attack that will occur. This is the easiest test to complete but does not provide a full picture of the organization’s security.

Penetration testing is also divided into categories based on the amount of information to be provided. The main categories that you should be familiar with include the following:

  • Zero-knowledge test: The testing team is provided with no knowledge regarding the organization’s network. The testing team can use any means available to obtain information about the organization’s network. This is also referred to as closed, or black-box, testing.

  • Partial-knowledge test: The testing team is provided with public knowledge regarding the organization’s network. Boundaries might be set for this type of test. This is also referred to as gray-box testing.

  • Full-knowledge test: The testing team is provided with all available knowledge regarding the organization’s network. This test is focused more on what attacks can be carried out. This is also referred to as white-box testing.

Penetration testing applications include Metasploit, Wireshark, Core Impact, Nessus, Cain & Abel, Kali Linux, and John the Ripper. When selecting a penetration testing tool, you should first determine which systems you want to test. Then research the different tools to discover which can perform the tests that you want to perform for those systems and research the tools’ methodologies for testing. In addition, the organization needs to select the correct individual to carry out the test. Remember that penetration tests should include manual methods as well as automated methods because relying on only one of these two will not yield a thorough result.

Table 6-2 compares vulnerability assessments and penetration tests.

Table 6-2 Comparison of Vulnerability Assessments and Penetration Tests

  • Purpose: A vulnerability assessment identifies vulnerabilities that may result in compromise of a system. A penetration test identifies ways to exploit vulnerabilities to circumvent the security features of systems.

  • When: Both are performed after significant system changes; schedule vulnerability assessments at least quarterly thereafter and penetration tests at least annually thereafter.

  • How: A vulnerability assessment uses automated tools with manual verification of identified issues. A penetration test uses both automated and manual methods to provide a comprehensive report.

  • Reports: A vulnerability assessment reports the potential risks posed by known vulnerabilities, ranked using the base scores associated with each vulnerability; both internal and external reports should be provided. A penetration test reports a description of each issue discovered, including the specific risks the issue may pose and specifically how and to what extent it may be exploited.

  • Duration: A vulnerability assessment typically takes several seconds to several minutes per scanned host. A penetration test takes days or weeks, depending on the scope and size of the environment to be tested, and may grow in duration if efforts uncover additional scope.

Log Reviews

A log is a recording of events that occur on an organizational asset, including systems, networks, devices, and facilities. Each entry in a log covers a single event that occurs on the asset. In most cases, there are separate logs for different event types, including security logs, operating system logs, and application logs. Because so many logs are generated on a single device, many organizations have trouble ensuring that the logs are reviewed in a timely manner. Log review, however, is probably one of the most important steps an organization can take to ensure that issues are detected before they become major problems.

Computer security logs are particularly important because they can help an organization identify security incidents, policy violations, and fraud. Log management ensures that computer security logs are stored in sufficient detail for an appropriate period of time so that audits, forensic analysis, and investigations can be supported and baselines, trends, and long-term problems can be identified.

The National Institute of Standards and Technology (NIST) has provided two special publications that relate to log management: NIST SP 800-92, “Guide to Computer Security Log Management,” and NIST SP 800-137, “Information Security Continuous Monitoring (ISCM) for Federal Information Systems and Organizations.” While both of these special publications are primarily used by federal government agencies and organizations, other organizations may want to use them as well because of the wealth of information they provide. The following section covers NIST SP 800-92, and NIST SP 800-137 is discussed later in this chapter.

NIST SP 800-92

NIST SP 800-92 makes the following recommendations for more efficient and effective log management:

  • Organizations should establish policies and procedures for log management. As part of the planning process, an organization should

    • Define its logging requirements and goals.

    • Develop policies that clearly define mandatory requirements and suggested recommendations for log management activities.

    • Ensure that related policies and procedures incorporate and support the log management requirements and recommendations.

  • Management should provide the necessary support for the efforts involving log management planning, policy, and procedures development.

  • Organizations should prioritize log management appropriately throughout the organization.

  • Organizations should create and maintain a log management infrastructure.

  • Organizations should provide proper support for all staff with log management responsibilities.

  • Organizations should establish standard log management operational processes. This includes ensuring that administrators

    • Monitor the logging status of all log sources.

    • Monitor log rotation and archival processes.

    • Check for upgrades and patches to logging software and acquire, test, and deploy them.

    • Ensure that each logging host’s clock is synchronized to a common time source.

    • Reconfigure logging as needed based on policy changes, technology changes, and other factors.

    • Document and report anomalies in log settings, configurations, and processes.

According to NIST SP 800-92, common log management infrastructure components include general functions (log parsing, event filtering, and event aggregation), storage (log rotation, log archival, log reduction, log conversion, log normalization, and log file integrity checking), log analysis (event correlation, log viewing, and log reporting), and log disposal (log clearing).

Syslog provides a simple framework for log entry generation, storage, and transfer that any operating system, security software, or application could use if designed to do so. Many log sources either use syslog as their native logging format or offer features that allow their log formats to be converted to syslog format. Each syslog message has only three parts. The first part specifies the facility and severity as numerical values. The second part of the message contains a timestamp and the hostname or IP address of the source of the log. The third part is the actual log message content.

No standard fields are defined within the message content; it is intended to be human-readable and not easily machine-parsable. This provides very high flexibility for log generators, which can place whatever information they deem important within the content field, but it makes automated analysis of the log data very challenging. A single source may use many different formats for its log message content, so an analysis program would need to be familiar with each format and be able to extract the meaning of the data within the fields of each format. This problem becomes much more challenging when log messages are generated by many sources. It might not be feasible to understand the meaning of all log messages, so analysis might be limited to keyword and pattern searches. Some organizations design their syslog infrastructures so that similar types of messages are grouped together or assigned similar codes, which can make log analysis automation easier to perform.
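
The following Python sketch shows how the three parts of a traditional BSD-style syslog message can be separated: the numeric priority value encodes the facility and severity, followed by the timestamp and hostname, and finally the free-form message content. The sample message is invented for illustration, and real-world message formats vary considerably.

    import re

    # Invented sample message in the traditional BSD syslog style.
    raw = "<34>Oct 11 22:14:15 webserver01 sshd[4721]: Failed password for root from 203.0.113.5"

    match = re.match(r"<(\d+)>(\w{3}\s+\d+\s[\d:]+)\s(\S+)\s(.*)", raw)
    if match:
        pri, timestamp, host, content = match.groups()
        facility, severity = divmod(int(pri), 8)   # PRI = facility * 8 + severity
        print(f"facility={facility} severity={severity}")
        print(f"timestamp={timestamp} host={host}")
        print(f"content={content}")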

As log security has become a greater concern, several implementations of syslog have been created that place greater emphasis on security. Most have been based on IETF’s RFC 3195, which was designed specifically to improve the security of syslog. Implementations based on this standard can support log confidentiality, integrity, and availability through several features, including reliable log delivery, transmission confidentiality protection, and transmission integrity protection and authentication.

Security information and event management (SIEM) products allow administrators to consolidate all security information logs. This consolidation ensures that administrators can perform analysis on all logs from a single resource rather than having to analyze each log on its separate resource. Most SIEM products support two ways of collecting logs from log generators:

  • Agentless: The SIEM server receives data from the individual hosts without needing to have any special software installed on those hosts. Some servers pull logs from the hosts, which is usually done by having the server authenticate to each host and retrieve its logs regularly. In other cases, the hosts push their logs to the server, which usually involves each host authenticating to the server and transferring its logs regularly. Regardless of whether the logs are pushed or pulled, the server then performs event filtering and aggregation and log normalization and analysis on the collected logs.

  • Agent-based: An agent program is installed on the host to perform event filtering and aggregation and log normalization for a particular type of log. The host then transmits the normalized log data to the SIEM server, usually on a real-time or near-real-time basis for analysis and storage. Multiple agents may need to be installed if a host has multiple types of logs of interest. Some SIEM products also offer agents for generic formats such as syslog and Simple Network Management Protocol (SNMP). A generic agent is used primarily to get log data from a source for which a format-specific agent and an agentless method are not available. Some products also allow administrators to create custom agents to handle unsupported log sources.

There are advantages and disadvantages to each method. The primary advantage of the agentless approach is that agents do not need to be installed, configured, and maintained on each logging host. The primary disadvantage is the lack of filtering and aggregation at the individual host level, which can cause significantly larger amounts of data to be transferred over networks and increase the amount of time it takes to filter and analyze the logs. Another potential disadvantage of the agentless method is that the SIEM server may need credentials for authenticating to each logging host. In some cases, only one of the two methods is feasible; for example, there might be no way to remotely collect logs from a particular host without installing an agent onto it.

SIEM products usually include support for several dozen types of log sources, such as OSs, security software, application servers (e.g., web servers, email servers), and even physical security control devices such as badge readers. For each supported log source type, except for generic formats such as syslog, the SIEM products typically know how to categorize the most important logged fields. This significantly improves the normalization, analysis, and correlation of log data over that performed by software with a less granular understanding of specific log sources and formats. Also, the SIEM software can perform event reduction by disregarding data fields that are not significant to computer security, potentially reducing the SIEM software’s network bandwidth and data storage usage.
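
As a simple illustration of the agent-based model, the Python standard library’s SysLogHandler can forward locally generated events to a central collector, with filtering applied on the host before anything crosses the network. The collector address siem.example.com is a hypothetical placeholder; a real SIEM agent would also normalize fields, buffer events, and protect the transmission.

    import logging
    import logging.handlers

    logger = logging.getLogger("app-agent")
    logger.setLevel(logging.INFO)

    # Forward events to a hypothetical central collector over UDP port 514.
    handler = logging.handlers.SysLogHandler(address=("siem.example.com", 514))
    # Host-side filtering: only warnings and above leave this machine.
    handler.setLevel(logging.WARNING)
    logger.addHandler(handler)

    logger.info("User logged in")                         # not forwarded (below threshold)
    logger.warning("Five failed logins for user admin")   # forwarded to the collector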

Typically, system, network, and security administrators are responsible for managing logging on their systems, performing regular analysis of their log data, documenting and reporting the results of their log management activities, and ensuring that log data is provided to the log management infrastructure in accordance with the organization’s policies. In addition, some of the organization’s security administrators act as log management infrastructure administrators, with responsibilities such as the following:

  • Contact system-level administrators to get additional information regarding an event or to request that they investigate a particular event.

  • Identify changes needed to system logging configurations (e.g., which entries and data fields are sent to the centralized log servers, what log format should be used) and inform system-level administrators of the necessary changes.

  • Initiate responses to events, including incident handling and operational problems (e.g., a failure of a log management infrastructure component).

  • Ensure that old log data is archived to removable media and disposed of properly once it is no longer needed.

  • Cooperate with requests from legal counsel, auditors, and others.

  • Monitor the status of the log management infrastructure (e.g., failures in logging software or log archival media, failures of local systems to transfer their log data) and initiate appropriate responses when problems occur.

  • Test and implement upgrades and updates to the log management infrastructure’s components.

  • Maintain the security of the log management infrastructure.

Organizations should develop policies that clearly define mandatory requirements and suggested recommendations for several aspects of log management, including log generation, log transmission, log storage and disposal, and log analysis. Table 6-3 gives examples of logging configuration settings that an organization can use. The types of values defined in Table 6-3 should only be applied to the hosts and host components previously specified by the organization as ones that must or should log security-related events.

Table 6-3 Examples of Logging Configuration Settings

  • Log retention duration: low-impact systems, 1–2 weeks; moderate-impact systems, 1–3 months; high-impact systems, 3–12 months

  • Log rotation: low-impact, optional (if performed, at least every week or every 25 MB); moderate-impact, every 6–24 hours or every 2–5 MB; high-impact, every 15–60 minutes or every 0.5–1.0 MB

  • Log data transfer frequency (to SIEM): low-impact, every 3–24 hours; moderate-impact, every 15–60 minutes; high-impact, at least every 5 minutes

  • Local log data analysis: low-impact, every 1–7 days; moderate-impact, every 12–24 hours; high-impact, at least 6 times a day

  • File integrity check for rotated logs: low-impact, optional; moderate-impact, yes; high-impact, yes

  • Encrypt rotated logs: low-impact, optional; moderate-impact, optional; high-impact, yes

  • Encrypt log data transfers to SIEM: low-impact, optional; moderate-impact, yes; high-impact, yes

Synthetic Transactions

Synthetic transaction monitoring, which is a type of proactive monitoring, is often preferred for websites and applications. It provides insight into the availability and performance of an application and warns of any potential issue before users experience any degradation in application behavior. It uses external agents to run scripted transactions against an application. For example, Microsoft’s System Center Operations Manager uses synthetic transactions to monitor databases, websites, and TCP port usage.
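
A very small synthetic-transaction probe can be built with nothing more than the Python standard library: run a scripted request on a schedule, time it, and verify that the response contains expected content. The URL and the expected marker text below are hypothetical placeholders; commercial tools add multistep transaction scripting, geographic distribution, and alerting.

    import time
    import urllib.request

    URL = "https://app.example.com/login"   # hypothetical monitored page
    EXPECTED_MARKER = "Sign in"             # text the healthy page should contain

    def run_probe():
        start = time.monotonic()
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                body = resp.read().decode("utf-8", errors="replace")
                elapsed = time.monotonic() - start
                healthy = resp.status == 200 and EXPECTED_MARKER in body
                print(f"status={resp.status} time={elapsed:.2f}s healthy={healthy}")
        except Exception as exc:
            print(f"probe failed: {exc}")   # would raise an alert in practice

    run_probe()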

In contrast, real user monitoring (RUM), which is a type of passive monitoring, captures and analyzes every transaction of every application or website user. Unlike synthetic monitoring, which attempts to gain performance insights by regularly testing synthetic interactions, RUM cuts through the guesswork by seeing exactly how users are interacting with the application.

Code Review and Testing

Code review and testing must occur throughout the entire system or application development life cycle. The goal of code review and testing is to identify bad programming patterns, security misconfigurations, functional bugs, and logic flaws.

In the planning and design phase, code review and testing include architecture security reviews and threat modeling. In the development phase, code review and testing include static source code analysis, manual code review, static binary code analysis, and manual binary review. Once an application is deployed, code review and testing involve penetration testing, vulnerability scanning, and fuzz testing.

Formal code review involves a careful and detailed process with multiple participants and multiple phases. In this type of code review, software developers attend meetings where each line of code is reviewed, usually using printed copies. Lightweight code review typically requires less overhead than formal code inspections, though it can be equally effective when done properly. Code review methods include the following:

  • Over-the-shoulder: One developer looks over the author’s shoulder as the author walks through the code.

  • Email pass-around: Source code is emailed to reviewers automatically after the code is checked in.

  • Pair programming: Two authors develop code together at the same workstation.

  • Tool-assisted code review: Authors and reviewers use tools designed for peer code review.

  • Black-box testing, or zero-knowledge testing: The team is provided with no knowledge regarding the organization’s application. The team can use any means at its disposal to obtain information about the organization’s application. This is also referred to as closed testing.

  • White-box testing: The team goes into the process with a deep understanding of the application or system. Using this knowledge, the team builds test cases to exercise each path, input field, and processing routine.

  • Gray-box testing: The team is provided more information than in black-box testing but not as much as in white-box testing. Gray-box testing has the advantage of being nonintrusive while maintaining the boundary between developer and tester. On the other hand, it may not uncover some of the problems that might be discovered with white-box testing.

Table 6-4 compares black-box, gray-box, and white-box testing.

Table 6-4 Black-Box, Gray-Box, and White-Box Testing

  • Black box: Internal workings of the application are not known. Also called closed-box, data-driven, and functional testing. Performed by end users, testers, and developers. Least time-consuming.

  • Gray box: Internal workings of the application are somewhat known. Also called translucent testing, as the tester has partial knowledge. Performed by end users, testers, and developers. More time-consuming than black-box testing but less so than white-box testing.

  • White box: Internal workings of the application are fully known. Also known as clear-box, structural, or code-based testing. Performed by testers and developers. Most exhaustive and time-consuming.

Other types of testing include dynamic versus static testing and manual versus automatic testing.

Code Review Process

Code review varies from organization to organization. Fagan inspections are the most formal code reviews that can occur and should adhere to the following process:

  1. Plan

  2. Overview

  3. Prepare

  4. Inspect

  5. Rework

  6. Follow-up

Most organizations do not strictly adhere to the Fagan inspection process. Each organization should adopt a code review process that fits its business requirements. The more restrictive the environment, the more formal the code review process should be.

Static Testing

Static testing analyzes software security without actually running the software. This is usually accomplished by reviewing the source code or the compiled application. Automated tools are used to detect common software flaws. Static testing tools should be used throughout the software development process.
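
The toy example below illustrates the static approach in Python: it uses the ast module to inspect source code without running it and flags calls to eval() and exec(), one of the many weak constructs that commercial static analyzers look for.

    import ast

    # A tiny, made-up code fragment to analyze.
    SOURCE = "user_input = input()\nresult = eval(user_input)\n"

    tree = ast.parse(SOURCE)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in ("eval", "exec"):
                print(f"Line {node.lineno}: call to {node.func.id}() flagged as unsafe")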

Dynamic Testing

Dynamic testing analyzes software security in the runtime environment. With this testing, the tester should not have access to the application’s source code.

Dynamic testing often includes the use of synthetic transactions, which are scripted transactions that have a known result. These synthetic transactions are executed against the tested code, and the output is then compared to the expected output. Any discrepancies between the two should be investigated for possible source code flaws.

Fuzz Testing

Fuzz testing is a dynamic testing technique that provides invalid, unexpected, or random input to software to test the software's limits and discover flaws. The input provided can be randomly generated by the tool or specially crafted to test for known vulnerabilities.

Fuzz testers include Untidy, Peach Fuzzer, and Microsoft SDL File/Regex Fuzzer.
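
The core idea can be sketched in a few lines of Python: generate large amounts of random input, feed it to the routine under test (here the standard library JSON parser), and watch for anything other than a clean rejection. Dedicated fuzzers add mutation strategies, coverage-guided feedback, and crash triage.

    import json
    import random

    random.seed(1)   # make the fuzzing run reproducible

    for i in range(1000):
        # Random byte strings of random length serve as hostile input.
        data = bytes(random.getrandbits(8) for _ in range(random.randint(1, 64)))
        try:
            json.loads(data)
        except ValueError:
            pass   # clean rejection of malformed input is the expected outcome
        except Exception as exc:
            # Anything else is a potential flaw worth investigating.
            print(f"iteration {i}: unexpected {type(exc).__name__}: {exc!r}")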

Misuse Case Testing

Misuse case testing, also referred to as negative testing, tests an application to ensure that the application can handle invalid input or unexpected behavior. This testing is completed to ensure that an application will not crash and to improve the quality of an application by identifying its weak points. When misuse case testing is performed, organizations should expect to find issues. Misuse case testing should verify the following (a minimal sketch of such negative tests appears after this list):

  • Required fields must be populated.

  • Fields with a defined data type can only accept data that is the required data type.

  • Fields with character limits allow only the configured number of characters.

  • Fields with a defined data range accept only data within that range.

  • Fields accept only valid data.
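
The sketch below shows what such negative tests can look like using Python’s unittest framework. The validate_username function is a made-up stand-in for the input-handling code under test; each test deliberately supplies invalid input and asserts that the code rejects it rather than crashing or accepting it.

    import unittest

    def validate_username(value):
        """Hypothetical input handler: requires 3-12 alphanumeric characters."""
        if not isinstance(value, str) or not value.isalnum() or not 3 <= len(value) <= 12:
            raise ValueError("invalid username")
        return value

    class MisuseCaseTests(unittest.TestCase):
        def test_rejects_empty_required_field(self):
            with self.assertRaises(ValueError):
                validate_username("")

        def test_rejects_wrong_data_type(self):
            with self.assertRaises(ValueError):
                validate_username(12345)

        def test_rejects_value_over_character_limit(self):
            with self.assertRaises(ValueError):
                validate_username("x" * 50)

        def test_rejects_injection_style_input(self):
            with self.assertRaises(ValueError):
                validate_username("admin'; DROP TABLE users;--")

    if __name__ == "__main__":
        unittest.main()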

Test Coverage Analysis

Test coverage analysis uses test cases that are written against the application requirements specifications. Individuals involved in this analysis do not need to see the code to write the test cases. Once a document that describes all the test cases is written, test groups report the percentage of test cases that were run, that passed, that failed, and so on. The application developer usually performs test coverage analysis as a part of unit testing. Quality assurance groups use overall test coverage analysis to report test metrics and coverage according to the test plan.

Test coverage analysis identifies where additional test cases are needed to increase coverage. It helps developers find areas of an application not exercised by a set of test cases. It also provides a quantitative measure of code coverage, which indirectly measures the quality of the application or product.

One disadvantage of code coverage measurement is that it measures only the code that exists and is exercised; it cannot evaluate functionality that has not been written. In other words, the analysis looks at structures and functions that already exist, not at those that are missing.
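
As a minimal illustration, Python’s built-in trace module can report which statements a set of test cases actually executed, which is the raw data behind a coverage percentage. Dedicated coverage tools report line and branch coverage per module and highlight the unexercised paths; the single “test case” below deliberately leaves one branch uncovered.

    import trace

    def classify(n):
        if n < 0:
            return "negative"
        elif n == 0:
            return "zero"      # never executed by the test case below
        return "positive"

    tracer = trace.Trace(count=True, trace=False)
    tracer.runfunc(classify, 5)   # a single, incomplete test case
    results = tracer.results()
    # Writes per-file counts; show_missing marks the lines that never ran.
    results.write_results(show_missing=True, summary=True, coverdir=".")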

Interface Testing

Interface testing evaluates whether an application’s systems or components correctly pass data and control to one another. It verifies whether module interactions are working properly and errors are handled correctly. Interfaces that should be tested include client interfaces, server interfaces, remote interfaces, graphical user interfaces (GUIs), application programming interfaces (APIs), external and internal interfaces, and physical interfaces.

GUI testing involves testing a product’s GUI to ensure that it meets its specifications through the use of test cases. API testing tests APIs directly in isolation and as part of the end-to-end transactions exercised during integration testing to determine whether the APIs return the correct responses.
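
A basic API-level interface test can be expressed directly in Python: call the endpoint, then assert on the status code and the structure of the returned data. The endpoint URL and the expected fields below are hypothetical; in practice this kind of check usually lives in a test framework and runs against every build.

    import json
    import urllib.request

    API_URL = "https://api.example.com/v1/users/42"   # hypothetical endpoint

    with urllib.request.urlopen(API_URL, timeout=10) as resp:
        assert resp.status == 200, f"unexpected status {resp.status}"
        payload = json.loads(resp.read().decode("utf-8"))

    # Verify that the interface returns the agreed-upon fields and types.
    assert isinstance(payload.get("id"), int)
    assert isinstance(payload.get("email"), str)
    assert "password" not in payload, "sensitive field exposed across the interface"
    print("interface contract checks passed")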

Collect Security Process Data

After security controls are tested, organizations must ensure that they collect the appropriate security process data. NIST SP 800-137 provides guidelines for developing an information security continuous monitoring (ISCM) program. Security professionals should ensure that security process data that is collected includes account management, management review, key performance and risk indicators, backup verification data, training and awareness, and disaster recovery and business continuity.

NIST SP 800-137

According to NIST SP 800-137, ISCM is defined as maintaining ongoing awareness of information security, vulnerabilities, and threats to support organizational risk management decisions.

Organizations should take the following steps to establish, implement, and maintain ISCM:

  1. Define an ISCM strategy based on risk tolerance that maintains clear visibility into assets, awareness of vulnerabilities, up-to-date threat information, and mission/business impacts.

  2. Establish an ISCM program that includes metrics, status monitoring frequencies, control assessment frequencies, and an ISCM technical architecture.

  3. Implement an ISCM program and collect the security-related information required for metrics, assessments, and reporting. Automate collection, analysis, and reporting of data where possible.

  4. Analyze the data collected, report findings, and determine the appropriate responses. It may be necessary to collect additional information to clarify or supplement existing monitoring data.

  5. Respond to findings with technical, management, and operational mitigating activities or acceptance, transference/sharing, or avoidance/rejection.

  6. Review and update the monitoring program, adjusting the ISCM strategy and maturing measurement capabilities to increase visibility into assets and awareness of vulnerabilities, further enable data-driven control of the security of an organization’s information infrastructure, and increase organizational resilience.

Account Management

Account management is important because it involves the addition and deletion of accounts that are granted access to systems or networks. But account management also involves changing the permissions or privileges granted to those accounts. If account management is not monitored and recorded properly, organizations may discover that accounts have been created for the sole purpose of carrying out fraudulent or malicious activities. Two-person controls should be used with account management, often involving one administrator who creates accounts and another who assigns those accounts the appropriate permissions or privileges.

Escalation and revocation are two terms that are important to security professionals. Account escalation occurs when a user account is granted more permissions based on new job duties or a complete job change. Security professionals should fully analyze a user’s needs prior to changing the current permissions or privileges, making sure to grant only permissions or privileges that are needed for the new task and to remove those that are no longer needed. Without such analysis, users may retain permissions that cause security issues because separation of duties is no longer maintained. For example, suppose a user is hired in the accounts payable department to print out all vendor checks. Later this user receives a promotion to approve payment for the same accounts. If this user’s old permission to print checks is not removed, this single user would be able to both approve the checks and print them, which is a direct violation of separation of duties.

Account revocation occurs when a user account is revoked because a user is no longer with an organization. Security professionals must keep in mind that there will be objects that belong to this user. If the user account is simply deleted, access to the objects owned by the user may be lost. It may be a better plan to disable the account for a certain period. Account revocation policies should also distinguish between revoking an account for a user who resigns from an organization and revoking an account for a user who is terminated.

Management Review and Approval

Management review of security process data should be mandatory. No matter how much data an organization collects on its security processes, the data is useless if it is never reviewed by an administrator. Guidelines and procedures should be established to ensure that management review occurs in a timely manner. Without regular review, even the most minor security issue can be quickly turned into a major security breach.

Management review should include an approval process whereby management reviews any recommendations from security professionals and approves or rejects the recommendations based on the data given. If alternatives are given, management should approve the alternative that best satisfies the organizational needs. Security professionals should ensure that the reports provided to management are as comprehensive as possible so that all the data can be analyzed to ensure the most appropriate solution is selected.

Key Performance and Risk Indicators

By using key performance and risk indicators of security process data, organizations can better identify when security risks are likely to occur. Key performance indicators (KPIs) allow organizations to determine whether levels of performance are below or above established norms. Key risk indicators (KRIs) allow organizations to identify whether certain risks are more or less likely to occur.
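
As a simple example of a key risk indicator, the sketch below computes the percentage of hosts missing critical patches and compares it to a tolerance threshold; crossing the threshold would be escalated to management as a rising risk. The inventory data and the 10 percent threshold are invented for illustration.

    # Illustrative inventory: host name -> number of missing critical patches.
    hosts = {"web01": 0, "web02": 3, "db01": 1, "app01": 0, "app02": 0}

    KRI_THRESHOLD_PERCENT = 10.0   # hypothetical risk tolerance set by management

    unpatched = sum(1 for missing in hosts.values() if missing > 0)
    kri_value = 100.0 * unpatched / len(hosts)

    status = "ABOVE tolerance" if kri_value > KRI_THRESHOLD_PERCENT else "within tolerance"
    print(f"KRI: {kri_value:.1f}% of hosts missing critical patches ({status})")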

NIST has released the Framework for Improving Critical Infrastructure Cybersecurity, also known as the Cybersecurity Framework, which focuses on using business drivers to guide cybersecurity activities and considering cybersecurity risks as part of the organization’s risk management processes. The framework consists of three parts: the Framework Core, the Framework Profiles, and the Framework Implementation Tiers.

The Framework Core is a set of cybersecurity activities, outcomes, and informative references that are common across critical infrastructure sectors, providing the detailed guidance for developing individual organizational profiles. The Framework Core consists of five concurrent and continuous functions—identify, protect, detect, respond, and recover.

After each function is identified, categories and subcategories for each function are recorded. The Framework Profiles are developed based on the business needs of the categories and subcategories. Through use of the Framework Profiles, the framework helps an organization align its cybersecurity activities with its business requirements, risk tolerances, and resources.

The Framework Implementation Tiers provide a mechanism for organizations to view and understand the characteristics of their approach to managing cybersecurity risk. The following tiers are used: Tier 1, partial; Tier 2, risk informed; Tier 3, repeatable; and Tier 4, adaptive.

Organizations will continue to have unique risks—different threats, different vulnerabilities, and different risk tolerances—and how they implement the practices in the framework will vary. Ultimately, the framework is aimed at reducing and better managing cybersecurity risks and is not a one-size-fits-all approach to managing cybersecurity.

Backup Verification Data

Any security process data that is collected should also be backed up. Security professionals should ensure that their organization has the appropriate backup and restore guidelines in place for all security process data. If data is not backed up properly, a failure can result in vital data being lost forever. In addition, personnel should test the restore process on a regular basis to make sure it works as it should. If an organization is unable to restore a backup properly, the organization might as well not have the backup.
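
Part of that verification can be automated. The sketch below compares a SHA-256 digest of a source file with the copy produced by a test restore; matching digests indicate the restored data is intact, while a mismatch (or an error reading the restored file) means the backup cannot be trusted. The file paths are hypothetical placeholders.

    import hashlib

    def sha256_of(path):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    SOURCE = "/data/finance/ledger.db"            # hypothetical original file
    RESTORED = "/restore-test/finance/ledger.db"  # hypothetical test-restore copy

    if sha256_of(SOURCE) == sha256_of(RESTORED):
        print("restore verified: digests match")
    else:
        print("restore FAILED verification: digests differ")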

Training and Awareness

All personnel must understand any security assessment and testing strategies that an organization employs. Technical personnel may need to be trained in the details of security assessment and testing, including security control testing and collecting security process data. Other personnel, however, need only general awareness training on this subject. Security professionals should help personnel understand what type of assessment and testing occurs, what is captured by this process, and why this is important to the organization. Management must fully support the security assessment and testing strategy and must communicate to all personnel and stakeholders the importance of this program.

Disaster Recovery and Business Continuity

Any disaster recovery and business continuity plans that an organization develops must consider security assessment and testing, security control testing, and security process data collection. Often when an organization goes into disaster recovery mode, personnel do not think about these processes. As a matter of fact, ordinary security controls often fall by the wayside at such times. A security professional is responsible for ensuring that this does not happen. Security professionals involved in developing the disaster recovery and business continuity plans must cover all these areas.

Analyze and Report Test Outputs

Personnel should understand the automated and manual reporting that can be done as part of security assessment and testing. Output must be reported in a timely manner to management in order to ensure that they understand the value of this process. It may be necessary to provide different reports depending on the level of audience understanding. For example, high-level management may need only a summary of findings. But technical personnel should be given details of the findings to ensure that they can implement the appropriate controls to mitigate or prevent any risks found during security assessment and testing.

Personnel may need special training on how to run manual reports and how to analyze the report outputs.

Conduct or Facilitate Security Audits

Organizations should conduct internal, external, and third-party audits as part of any security assessment and testing strategy. These audits should test all security controls that are currently in place. The following are some guidelines to consider as part of a good security audit plan:

  • At minimum, perform annual audits to establish a security baseline.

  • Determine your organization’s objectives for the audit and share them with the auditors.

  • Set the ground rules for the audit, including the dates/times of the audit, before the audit starts.

  • Choose auditors who have security experience.

  • Involve business unit managers early in the process.

  • Ensure that auditors rely on experience, not just checklists.

  • Ensure that the auditor’s report reflects risks that the organization has identified.

  • Ensure that the audit is conducted properly.

  • Ensure that the audit covers all systems and all policies and procedures.

  • Examine the report when the audit is complete.

Remember that internal audits are performed by personnel within the organization, while external and third-party audits are performed by individuals or companies outside the organization. Both types of audits should occur.

Many regulations today require that audits occur. Organizations used to rely on Statement on Auditing Standards (SAS) 70, which provided auditors information and verification about data center controls and processes related to data center users and their financial reporting. A SAS 70 audit verified that the controls and processes set in place by a data center are actually followed. The Statement on Standards for Attestation Engagements (SSAE) 16, Reporting on Controls at a Service Organization, is a newer standard that verifies the controls and processes and also requires a written assertion regarding the design and operating effectiveness of the controls being reviewed.

An SSAE 16 audit results in a Service Organization Control (SOC) 1 report. This report focuses on internal controls over financial reporting. There are two types of SOC 1 reports:

  • SOC 1, Type 1 report: Focuses on the auditors’ opinion of the accuracy and completeness of the data center management’s design of controls, system, and/or service.

  • SOC 1, Type 2 report: Includes the Type 1 report as well as an audit of the effectiveness of controls over a certain time period, normally between six months and a year.

Two other report types are also available: SOC 2 and SOC 3. Both of these audits provide benchmarks for controls related to the security, availability, processing integrity, confidentiality, or privacy of a system and its information. A SOC 2 report includes service auditor testing and results, and a SOC 3 report provides only the system description and auditor opinion. A SOC 3 report is for general use and provides a level of certification for data center operators that assures data center users of facility security, high availability, and process integrity. Table 6-5 briefly compares the three types of SOC reports.

Table 6-5 SOC Reports Comparison

  • SOC 1: Reports on internal controls over financial reporting. Used by user auditors and the controller office.

  • SOC 2: Reports on security, availability, processing integrity, confidentiality, or privacy controls. Used by management, regulators, and others; shared under a nondisclosure agreement (NDA).

  • SOC 3: Reports on security, availability, processing integrity, confidentiality, or privacy controls. Publicly available to anyone.

Exam Preparation Tasks

As mentioned in the section “About the CISSP Cert Guide, Third Edition” in the Introduction, you have a couple of choices for exam preparation: the exercises here, Chapter 9, “Final Preparation,” and the exam simulation questions in the Pearson Test Prep Software Online.

Review All Key Topics

Review the most important topics in this chapter, noted with the Key Topics icon in the outer margin of the page. Table 6-6 lists a reference of these key topics and the page numbers on which each is found.

Table 6-6 Key Topics for Chapter 6

  • List: Three categories of vulnerability assessments (page 536)

  • Table 6-1: Server-Based vs. Agent-Based Scanning (page 539)

  • List: Steps in a penetration test (page 539)

  • List: Strategies for penetration testing (page 540)

  • List: Penetration testing categories (page 540)

  • Table 6-2: Comparison of Vulnerability Assessments and Penetration Tests (page 541)

  • List: NIST SP 800-92 recommendations for log management (page 542)

  • Table 6-3: Examples of Logging Configuration Settings (page 545)

  • Table 6-4: Black-Box, Gray-Box, and White-Box Testing (page 547)

  • List: Steps to establish, implement, and maintain ISCM (page 550)

  • List: Types of SOC 1 reports (page 555)

  • Table 6-5: SOC Reports Comparison (page 555)

Define Key Terms

Define the following key terms from this chapter and check your answers in the glossary:

account management

active vulnerability scanner (AVS)

black-box testing

blind test

code review and testing

double-blind test

dynamic testing

full-knowledge test

fuzz testing

gray-box testing

information security continuous monitoring (ISCM)

interface testing

log

log review

misuse case testing

negative testing

network discovery scan

network vulnerability scan

NIST SP 800-137

NIST SP 800-92

operating system fingerprinting

partial-knowledge test

passive vulnerability scanner (PVS)

penetration test

real user monitoring (RUM)

static testing

synthetic transaction monitoring

target test

test coverage analysis

topology discovery

vulnerability

vulnerability assessment

white-box testing

zero-knowledge test

Answer Review Questions

1. For which of the following penetration tests does the testing team know an attack is coming but have limited knowledge of the network systems and devices and only publicly available information?

  a. Target test

  b. Physical test

  c. Blind test

  d. Double-blind test

2. Which of the following is NOT a guideline according to NIST SP 800-92?

  a. Organizations should establish policies and procedures for log management.

  b. Organizations should create and maintain a log management infrastructure.

  c. Organizations should prioritize log management appropriately throughout the organization.

  d. Choose auditors with security experience.

3. According to NIST SP 800-92, which of the following are facets of log management infrastructure? (Choose all that apply.)

  a. General functions (log parsing, event filtering, and event aggregation)

  b. Storage (log rotation, log archival, log reduction, log conversion, log normalization, log file integrity checking)

  c. Log analysis (event correlation, log viewing, log reporting)

  d. Log disposal (log clearing)

4. What are the two ways of collecting logs using security information and event management (SIEM) products, according to NIST SP 800-92?

  a. Passive and active

  b. Agentless and agent-based

  c. Push and pull

  d. Throughput and rate

5. Which monitoring method captures and analyzes every transaction of every application or website user?

  a. RUM

  b. Synthetic transaction monitoring

  c. Code review and testing

  d. Misuse case testing

6. Which type of testing is also known as negative testing?

  a. RUM

  b. Synthetic transaction monitoring

  c. Code review and testing

  d. Misuse case testing

7. What is the first step of the information security continuous monitoring (ISCM) plan, according to NIST SP 800-137?

  a. Establish an ISCM program.

  b. Define the ISCM strategy.

  c. Implement an ISCM program.

  d. Analyze the data collected.

8. What is the second step of the information security continuous monitoring (ISCM) plan, according to NIST SP 800-137?

  a. Establish an ISCM program.

  b. Define the ISCM strategy.

  c. Implement an ISCM program.

  d. Analyze the data collected.

9. Which of the following is NOT a guideline for internal, external, and third-party audits?

  a. Choose auditors with security experience.

  b. Involve business unit managers early in the process.

  c. At minimum, perform bi-annual audits to establish a security baseline.

  d. Ensure that the audit covers all systems and all policies and procedures.

10. Which SOC report should be shared with the general public?

  a. SOC 1, Type 1

  b. SOC 1, Type 2

  c. SOC 2

  d. SOC 3

11. Which of the following is the last step in performing a penetration test?

  a. Document the results of the penetration test and report the findings to management, with suggestions for remedial action.

  b. Gather information about attack methods against the target system or device.

  c. Document information about the target system or device.

  d. Execute attacks against the target system or device to gain user and privileged access.

12. In which of the following does the testing team have zero knowledge of the organization’s network?

  a. Gray-box testing

  b. Black-box testing

  c. White-box testing

  d. Physical testing

13. Which of the following is defined as a dynamic testing tool that provides input to the software to test the software’s limits and discover flaws?

  a. Interface testing

  b. Static testing

  c. Test coverage analysis

  d. Fuzz testing

14. Which factors should security professionals consider when performing security testing? (Choose all that apply.)

  a. Changes that could affect the performance

  b. System risk

  c. Information sensitivity level

  d. Likelihood of technical failure or misconfiguration

15. Which of the following can a hacker use to identify common vulnerabilities in an operating system running on a host or server?

  a. Operating system fingerprinting

  b. Network discovery scan

  c. Key performance and risk indicators

  d. Third-party audits

Answers and Explanations

1. c. With a blind test, the testing team knows an attack is coming and has limited knowledge of the network systems and devices and publicly available information. A target test occurs when the testing team and the organization’s security team are given maximum information about the network and the type of attack that will occur. A physical test is not a type of penetration test. It is a type of vulnerability assessment. A double-blind test is like a blind test except that the organization’s security team does not know an attack is coming.

2. d. NIST SP 800-92 does not include any guidance regarding auditors, so the option "Choose auditors with security experience" is not a guideline according to NIST SP 800-92.

3. a, b, c, d. According to NIST SP 800-92, log management functions should include general functions (log parsing, event filtering, and event aggregation), storage (log rotation, log archival, log reduction, log conversion, log normalization, log file integrity checking), log analysis (event correlation, log viewing, log reporting), and log disposal (log clearing).
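
To make the general functions concrete, the following is a minimal, illustrative Python sketch of log parsing, event filtering, and event aggregation. The syslog-style record format and the failed-login filter rule are assumptions made only for this example; NIST SP 800-92 does not prescribe them.

```python
# Minimal sketch of three general log management functions:
# log parsing, event filtering, and event aggregation.
from collections import Counter

def parse(line):
    """Log parsing: split a 'timestamp host message' record into fields."""
    timestamp, host, message = line.split(" ", 2)
    return {"timestamp": timestamp, "host": host, "message": message}

def is_interesting(event):
    """Event filtering: keep only failed-login events (illustrative rule)."""
    return "Failed login" in event["message"]

def aggregate(events):
    """Event aggregation: count filtered events per host."""
    return Counter(event["host"] for event in events)

raw_logs = [
    "2024-05-01T10:00:01 web01 Failed login for user root",
    "2024-05-01T10:00:02 web01 Accepted login for user alice",
    "2024-05-01T10:00:03 db01 Failed login for user admin",
]

events = [parse(line) for line in raw_logs]        # log parsing
failed = [e for e in events if is_interesting(e)]  # event filtering
print(aggregate(failed))                           # event aggregation per host
```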

4. b. The two ways of collecting logs using security information and event management (SIEM) products, according to NIST SP 800-92, are agentless and agent-based.

5. a. Real user monitoring (RUM) captures and analyzes every transaction of every application or website user.

6. d. Misuse case testing, which deliberately exercises an application in ways it was not intended to be used, is also known as negative testing.
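
To illustrate the idea, here is a minimal Python sketch of negative testing: the tests deliberately supply malformed or malicious input and assert that it is rejected. The validate_username function is a hypothetical example created only for this sketch.

```python
# Minimal sketch of negative (misuse case) testing: invalid input must be rejected.
import re

def validate_username(username):
    """Accept 3-16 character alphanumeric usernames; reject anything else."""
    if not re.fullmatch(r"[A-Za-z0-9]{3,16}", username):
        raise ValueError("invalid username")
    return username

def test_rejects_injection_attempt():
    try:
        validate_username("admin'; DROP TABLE users;--")
    except ValueError:
        pass  # expected: the misuse input is rejected
    else:
        raise AssertionError("malicious input was accepted")

def test_rejects_empty_username():
    try:
        validate_username("")
    except ValueError:
        pass  # expected
    else:
        raise AssertionError("empty input was accepted")

if __name__ == "__main__":
    test_rejects_injection_attempt()
    test_rejects_empty_username()
    print("negative tests passed")
```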

7. b. The steps in an ISCM program, according to NIST SP 800-137, are

  1. Define an ISCM strategy.

  2. Establish an ISCM program.

  3. Implement an ISCM program and collect the security-related information required for metrics, assessments, and reporting.

  4. Analyze the data collected, report findings, and determine the appropriate responses.

  5. Respond to findings.

  6. Review and update the monitoring program.

8. a. The steps in an ISCM program, according to NIST SP 800-137, are

  1. Define an ISCM strategy.

  2. Establish an ISCM program.

  3. Implement an ISCM program and collect the security-related information required for metrics, assessments, and reporting.

  4. Analyze the data collected, report findings, and determine the appropriate responses.

  5. Respond to findings.

  6. Review and update the monitoring program.

9. c. The following are guidelines for internal, external, and third-party audits:

  • At minimum, perform annual audits to establish a security baseline.

  • Determine your organization’s objectives for the audit and share them with the auditors.

  • Set the ground rules for the audit, including the dates/times of the audit, before the audit starts.

  • Choose auditors who have security experience.

  • Involve business unit managers early in the process.

  • Ensure that auditors rely on experience, not just checklists.

  • Ensure that the auditor’s report reflects risks that the organization has identified.

  • Ensure that the audit is conducted properly.

  • Ensure that the audit covers all systems and all policies and procedures.

  • Examine the report when the audit is complete.

10. d. SOC 3 is the only SOC report that should be shared with the general public.

11. a. The steps in performing a penetration test are as follows:

  1. Document information about the target system or device.

  2. Gather information about attack methods against the target system or device. This includes performing port scans.

  3. Identify the known vulnerabilities of the target system or device.

  4. Execute attacks against the target system or device to gain user and privileged access.

  5. Document the results of the penetration test and report the findings to management, with suggestions for remedial action.

12. b. In black-box testing, or zero-knowledge testing, the testing team is provided with no knowledge of the organization’s network. In white-box testing, the testing team goes into the testing process with a deep understanding of the application or system. In gray-box testing, the testing team is provided more information than in black-box testing but not as much as in white-box testing. Gray-box testing has the advantage of being nonintrusive while maintaining the boundary between developer and tester. Physical testing reviews facility and perimeter protections.

13. d. Fuzz testing is a dynamic testing technique that provides input to the software to test the software’s limits and discover flaws. The input can be randomly generated by the tool or specially crafted to test for known vulnerabilities. Interface testing evaluates whether an application’s systems or components correctly pass data and control to one another; it verifies that module interactions work properly and that errors are handled correctly. Static testing analyzes software security without actually running the software, usually by reviewing the source code or the compiled application. Test coverage analysis uses test cases that are written against the application requirements specifications.
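
As a rough illustration, the following Python sketch shows the core loop of a fuzzer: randomly generated input is fed to a deliberately fragile toy parser (parse_record, invented for this example) until an unexpected exception reveals a flaw. Real fuzzing tools are far more sophisticated than this sketch.

```python
# Minimal sketch of fuzz testing: feed random input to a parser and watch for crashes.
import random
import string

def parse_record(data):
    """Toy parser expecting 'key=value;key=value' input."""
    fields = {}
    for pair in data.split(";"):
        key, value = pair.split("=")  # crashes unless each pair has exactly one '='
        fields[key] = value
    return fields

random.seed(1)  # fixed seed so the run is repeatable
for i in range(1000):
    length = random.randint(0, 20)
    fuzz_input = "".join(random.choice(string.printable) for _ in range(length))
    try:
        parse_record(fuzz_input)
    except Exception as exc:  # any unhandled exception is a potential flaw
        print(f"crash on attempt {i}: input {fuzz_input!r} raised {type(exc).__name__}")
        break
```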

14. a, b, c, d. Security professionals should consider the following factors when performing security testing:

  • Impact

  • Difficulty

  • Time needed

  • Changes that could affect the performance

  • System risk

  • System criticality

  • Security test availability

  • Information sensitivity level

  • Likelihood of technical failure or misconfiguration

15. a. Operating system fingerprinting is the process of using some method to determine the operating system running on a host or server. By identifying the OS version and build number, a hacker can identify common vulnerabilities of that OS using readily available documentation from the Internet. A network discovery scan examines a range of IP addresses to determine which ports are open; it shows only a list of systems on the network and the ports in use and does not actually check for any vulnerabilities. By using key performance and risk indicators of security process data, organizations can better identify when security risks are likely to occur. Key performance indicators allow organizations to determine whether levels of performance are below or above established norms. Key risk indicators allow organizations to identify whether certain risks are more or less likely to occur. Organizations should conduct internal, external, and third-party audits as part of any security assessment and testing strategy.
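
For illustration only, the following Python sketch performs a simple TCP connect-style discovery scan of a few common ports on a single host. The target address and port list are placeholders, and such scans should be run only against systems you are authorized to test; dedicated tools such as Nmap perform network discovery and OS fingerprinting far more thoroughly than this sketch.

```python
# Minimal sketch of a TCP connect-style network discovery scan.
import socket

def scan_host(host, ports, timeout=0.5):
    """Return the subset of ports on which a TCP connection succeeded."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    target = "127.0.0.1"  # placeholder target; scan only authorized systems
    print(scan_host(target, [22, 80, 443, 3389]))
```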
