Organizations face a wide range of challenges today, including ever-expanding risks to organizational assets, intellectual property, and customer data. Understanding and managing these risks is an integral component of organizational success. The security practitioner is expected to participate in the organizational risk management process, assist in identifying risks to information systems, and develop and implement controls to mitigate identified risks. As a result, the security practitioner must have a firm understanding of risk, response, and recovery concepts and best practices.
Topics
Effective incident response allows organizations to respond to threats that attempt to exploit vulnerabilities to compromise the confidentiality, integrity, and availability of organizational assets.
A Systems Security Certified Practitioner (SSCP) plays an integral role in incident response at any organization. The security practitioner must:
The security practitioner will be expected to be a key participant in the organizational risk management process. As a result, it is imperative that the security practitioner have a strong understanding of the responsibilities in the risk management process. To obtain this understanding, we will define key risk management concepts and then present an overview of the risk management process. We will then present real-world examples to reinforce key risk management concepts.
While you are reviewing risk management concepts, it is critical to remember that the ultimate purpose of information security is to reduce risks to organizational assets to levels that are deemed acceptable by senior management. Information security should not be performed using a “secure at any cost” approach. The cost of controls should never exceed the loss that would result if the confidentiality, integrity, or availability of a system were compromised. Risks, threats, vulnerabilities, and potential impacts should be assessed. Only after assessing these factors can cost-effective information security controls be selected and implemented that eliminate or reduce risks to acceptable levels.
We will draw on the National Institute of Standards and Technology (NIST) Special Publication 800-30 R1, “Guide for Conducting Risk Assessments”1 to establish definitions of key risk management concepts.
These basic risk management concepts merge to form a model for risk as seen in the NIST diagram shown in Figure 3-1.
Now that we have established definitions for basic risk management concepts, we can proceed with an explanation of the risk management process. It is important to keep these definitions in mind when reviewing the risk management process because these concepts will be continually referenced.
Risk management is the process of identifying risks, assessing their potential impacts to the organization, determining the likelihood of their occurrence, communicating findings to management and other affected parties, and developing and implementing risk mitigation strategies to reduce risks to levels that are acceptable to the organization. The first step in the risk management process is conducting a risk assessment.
Risk assessments evaluate threats to information systems, system vulnerabilities and weaknesses, and the likelihood that threats will exploit those vulnerabilities and weaknesses to cause adverse effects. For example, a risk assessment could be conducted to determine the likelihood that an unpatched system connected directly to the Internet would be compromised. The risk assessment would determine that there is an almost 100% likelihood that the system would be compromised by any number of potential threats, such as casual hackers and automated programs. Although this is an extreme example, it helps to illustrate the purpose of conducting risk assessments.
The security practitioner will be expected to be a key participant in the risk assessment process. The security practitioner's responsibilities may include identifying system, application, and network vulnerabilities or researching potential threats to information systems. Regardless of the security practitioner's role, it is important that the security practitioner understand the risk assessment process and how it relates to implementing controls, safeguards, and countermeasures to reduce risk exposure to acceptable levels.
When performing a risk assessment, an organization should use a methodology that uses repeatable steps to produce reliable results. Consistency is vital in the risk assessment process, as failure to follow an established methodology can result in inconsistent results. Inconsistent results prevent organizations from accurately assessing risks to organizational assets and can result in ineffective risk mitigation, risk transference, and risk acceptance decisions.
Although a number of risk assessment methodologies exist, they generally follow a similar approach. NIST Special Publication 800-30 R1, “Guide for Conducting Risk Assessments” details a four-step risk assessment process. The risk assessment process described by NIST is composed of the steps shown in Figure 3-2.
Although a complete discussion of each step described within the NIST Risk Assessment Process is outside the scope of this text, we will use the methodology as a guideline to explain the typical risk assessment process. A brief description of a number of the steps is provided to help the security practitioner understand the methodology and functions that the security practitioner will be expected to perform as an SSCP.
The first step in the risk assessment process is to prepare for the assessment. The objective of this step is to establish a context for the risk assessment. This context is established and informed by the results from the risk framing step of the risk management process. Risk framing identifies, for example, organizational information regarding policies and requirements for conducting risk assessments, specific assessment methodologies to be employed, procedures for selecting risk factors to be considered, scope of the assessments, rigor of analyses, degree of formality, and requirements that facilitate consistent and repeatable risk determinations across the organization. Preparing for a risk assessment includes the following tasks:
The second step in the risk assessment process is to conduct the assessment. The objective of this step is to produce a list of information security risks that can be prioritized by risk level and used to inform risk response decisions. To accomplish this objective, organizations analyze threats and vulnerabilities, impacts and likelihood, and the uncertainty associated with the risk assessment process. This step also includes the gathering of essential information as a part of each task and is conducted in accordance with the assessment context established in the prepare step of the risk assessment process (Step 1). The expectation for risk assessments is to adequately cover the entire threat space in accordance with the specific definitions, guidance, and direction established during the prepare step. However, in practice, adequate coverage within available resources may dictate generalizing threat sources, threat events, and vulnerabilities to ensure full coverage and assessing specific, detailed sources, events, and vulnerabilities only as necessary to accomplish risk assessment objectives. Conducting risk assessments includes the following specific tasks:
The specific tasks are presented in a sequential manner for clarity. Depending on the purpose of the risk assessment, the SSCP may find reordering the tasks advantageous. Whatever adjustments the SSCP makes to the tasks, risk assessments should meet the stated purpose, scope, assumptions, and constraints established by the organizations initiating the assessments.
As mentioned previously, a threat is the potential for a particular threat source to successfully exercise a specific vulnerability. During the threat identification stage, potential threats to information resources are identified. Threat sources can originate from natural threats, human threats, or environmental threats. Natural threats include earthquakes, floods, tornadoes, hurricanes, tsunamis, and the like. Human threats are events that are either caused through employee error or negligence or are events that are caused intentionally by humans via malicious attacks that attempt to compromise the confidentiality, integrity, and availability of IT systems and data. Environmental threats are those issues that arise because of environmental conditions such as power failure, HVAC failure, or electrical fire.
For the SSCP, the first step of a risk assessment involves understanding the threats that may target an organization. This step considers both adversarial threats and non-adversarial threats such as natural disasters. During this step, the security practitioner may:
Several threat sources may be referenced as part of the threat identification. Honey pots and honey nets provide targeted and specific threat information in near–real time, but they require expert operation, interpretation, and analysis and therefore can be cumbersome and expensive for smaller organizations. Incorrectly configured and deployed honey pots/nets can also provide a staging ground for further attacks against an organization. Organizations may also consider threat catalogs such as Appendix D of NIST SP 800-30 R1 or the German BSI Threats Catalogue.2
An organization can subscribe to threat source information services. These services scan the Internet and the globe to determine active threats and how they may be targeting a specific industry, business, or economy. The more an organization is willing to pay, the more specific the threat intelligence the organization may be able to obtain. Some programs, such as the United States FBI's InfraGard, are designed to provide threat information from the government to businesses and non-governmental organizations.
Since threat information is volatile yet crucial for the risk management process, organizations in similar economies or sectors may band together to share threat information among themselves. These coalitions often operate within a “no attribution” environment where confidentiality of the source of the threat information is the norm. While some sectors, such as government, have collaborated well, other industries have been slow to adopt threat sharing for fear of losing competitive advantage over rivals.
After the threat identification process has been completed, a threat statement should be generated. A threat statement lists potential threat sources that could exploit system vulnerabilities. Successfully identifying threats coupled with vulnerability identification are prerequisites for selecting and implementing information security controls.
Threat events are characterized by the threat sources that could initiate the events. The SSCP needs to define these threat events with sufficient detail to accomplish the purpose of the risk assessment. Multiple threat sources can initiate a single threat event. Conversely, a single threat source can potentially initiate any of multiple threat events. Therefore, there can be a many-to-many relationship among threat events and threat sources that can potentially increase the complexity of the risk assessment. For each threat event identified, organizations need to determine the relevance of the event. The values selected by organizations have a direct linkage to organizational risk tolerance. The more risk averse, the greater the range of values considered. Organizations accepting greater risk or having a greater risk tolerance are more likely to require substantive evidence before giving serious consideration to threat events. If a threat event is deemed to be irrelevant, no further consideration is given. For relevant threat events, organizations need to identify all potential threat sources that could initiate the events.
Identifying vulnerabilities is an important step in the risk assessment process. This step allows for the identification of both technical and nontechnical vulnerabilities that, if exploited, could result in a compromise of system or data confidentiality, integrity, and availability. A review of existing documentation and reports is a good starting point for the vulnerability identification process. This review can include the results of previous risk assessments. It can also include a review of audit reports from compliance assessments, a review of security bulletins and advisories provided by vendors, and data made available via personal and social networking. NIST SP 800-30 R1 Appendix F provides a set of tables for use in identifying predisposing conditions and vulnerabilities.
Vulnerability identification and assessment can be performed via a combination of automated and manual techniques. Automated techniques such as system scanning allow a security practitioner to identify technical vulnerabilities present in assessed IT systems. These technical vulnerabilities may result from failure to apply operating system and application patches in a timely manner, architecture design problems, or configuration errors. By using automated tools, systems can be rapidly assessed and vulnerabilities can be quickly identified.
Although a comprehensive list of vulnerability assessment tools is outside the scope of this text, an SSCP should spend a significant amount of time becoming familiar with the various tools that could be used to perform automated assessments. Although many commercial tools are available, there are also many open-source tools equally as effective in performing automated vulnerability assessments. Figure 3-3 is a screenshot of policy configuration using Nessus, a widely used vulnerability assessment tool.
Manual vulnerability assessment techniques may require more time to perform when compared to automated techniques. Typically, automated tools will initially be used to identify vulnerabilities. Manual techniques can then be used to validate automated findings. By performing manual techniques, one can eliminate false positives. False positives are potential vulnerabilities identified by automated tools that are not actual vulnerabilities.
Manual techniques may involve attempts to actually exploit vulnerabilities. When approved personnel attempt to exploit vulnerabilities to gain access to systems and data, it is referred to as penetration testing. Although vulnerability assessments merely attempt to identify vulnerabilities, penetration testing actually attempts to exploit vulnerabilities. Penetration testing is performed to assess security controls and to determine if vulnerabilities can be successfully exploited. Figure 3-4 is a screenshot of the Metasploit console, a widely used penetration-testing tool.
Likelihood determination attempts to define the likelihood that a given vulnerability could be successfully exploited within the current environment. Likelihood is often the most difficult factor in the risk framework to determine. Factors that must be considered when assessing the likelihood of a successful exploit include:
An impact analysis defines the impact to an organization that would result if a vulnerability were successfully exploited. An impact analysis cannot be performed until system mission, system and data criticality, and system and data sensitivity have been obtained and assessed. The system mission refers to the functionality provided by the system in terms of business or IT processes supported. System and data criticality refer to the system’s importance to supporting the organizational mission. System and data sensitivity refer to requirements for data confidentiality and integrity.
In many cases, system and data criticality and system and data sensitivity can be assessed by determining the adverse impact to the organization that would result from a loss of system and data confidentiality, integrity, or availability. Remember that confidentiality refers to the importance of restricting access to data so that they are not disclosed to unauthorized parties. Integrity refers to the importance that unauthorized modification of data is prevented. Availability refers to the importance that systems and data are available when needed to support business and technical requirements. When a person assesses each of these factors individually and aggregates the individual impacts resulting from a loss of confidentiality, integrity, and availability, the overall adverse impact from a system compromise can be assessed.
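The aggregation described above can be sketched in code. One common approach (the “high water mark” used in FIPS 199 security categorization) takes the overall impact to be the highest impact among the three attributes; the rating scale and the example values below are illustrative assumptions, not prescribed by any standard.

```python
# Sketch of aggregating per-attribute impact ratings into an overall
# impact level using a "high water mark" approach (in the style of
# FIPS 199). The rating scale and example values are illustrative.

LEVELS = ["low", "moderate", "high"]  # ordered from least to most severe

def overall_impact(confidentiality: str, integrity: str, availability: str) -> str:
    """Overall impact is the highest impact among the three attributes."""
    ratings = [confidentiality, integrity, availability]
    return max(ratings, key=LEVELS.index)

# Example: a public web server -- disclosure matters little (the data is
# public), but a defaced page (integrity loss) would be highly damaging.
print(overall_impact("low", "high", "moderate"))  # -> high
```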
Impact can be assessed in either quantitative or qualitative terms. A quantitative impact analysis assigns a dollar value to the impact. The dollar value can be calculated based on an assessment of the likelihood of a threat source exploiting a vulnerability, the loss resulting from a successful exploit, and an approximation of the number of times that a threat source will exploit a vulnerability over a defined period. To understand how a dollar value can be assigned to an adverse impact, one must review some fundamental concepts. An explanation of each of these concepts is provided below.
Single Loss Expectancy (SLE) = Asset Value (AV) × Exposure Factor (EF)
Annual Loss Expectancy (ALE) = Single Loss Expectancy (SLE) × Annualized Rate of Occurrence (ARO)
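The two formulas above can be expressed as a short calculation. The asset value, exposure factor, and rate of occurrence in the example are hypothetical figures chosen only to illustrate the arithmetic.

```python
# Illustrative quantitative impact calculation using the SLE and ALE
# formulas above. All dollar figures and rates are hypothetical.

def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE = Asset Value x Exposure Factor, where the exposure factor is
    the fraction of the asset's value lost in a single incident (0.0-1.0)."""
    return asset_value * exposure_factor

def annual_loss_expectancy(sle: float, aro: float) -> float:
    """ALE = SLE x Annualized Rate of Occurrence (expected incidents/year)."""
    return sle * aro

# Example: a $200,000 database server where a successful exploit is
# expected to destroy 30% of its value, twice per year on average.
sle = single_loss_expectancy(200_000, 0.30)   # $60,000 per incident
ale = annual_loss_expectancy(sle, 2.0)        # $120,000 per year
print(f"SLE=${sle:,.0f}  ALE=${ale:,.0f}")
```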
Organizations can use the results of annual loss expectancy calculations to determine the quantitative impact to an organization if an exploitation of a specific vulnerability were successful. In addition to the results of quantitative impact analysis, organizations should evaluate the results of qualitative impact analysis.
A qualitative impact analysis assesses impact in relative terms such as high impact, medium impact, and low impact without assigning a dollar value to the impact. A qualitative assessment is often used when it is difficult or impossible to accurately define loss in terms of dollars. For example, a qualitative assessment may be used to assess the impact resulting from a loss of customer confidence, from negative public relations, or from brand devaluation.
Organizations generally find it most helpful to use a blended approach to impact analysis. When they evaluate both the quantitative and qualitative impacts to an organization, a complete picture of impacts can be obtained. The results of these impact assessments provide required data for the risk determination process.
In the risk determination step, the overall risk to an IT system is assessed. Risk determination uses the outputs from previous steps in the risk assessment process to assess overall risk. Risk determination results from the combination of:
A risk-level matrix can be created that analyzes the combined impact of these factors to assess the overall risk to a given IT system. The exact process for creating a risk-level matrix is outside the scope of this text. For additional information, refer to NIST Special Publication 800-30 R1, “Guide for Conducting Risk Assessments.”
The result of creating a risk-level matrix is that overall risk level may be expressed in relative terms of high, medium, and low risk. Figure 3-5 provides an example of a risk matrix.
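A risk-level matrix of this kind can be sketched as a simple lookup table indexed by likelihood and impact. The specific cell values below are illustrative (loosely patterned after the example matrix in NIST SP 800-30), not a prescribed mapping; an organization would tune them to its own risk tolerance.

```python
# A minimal risk-level matrix: overall risk is read from a table indexed
# by (likelihood, impact). Cell values are illustrative assumptions.

MATRIX = {
    ("high",   "high"): "high",   ("high",   "medium"): "medium", ("high",   "low"): "low",
    ("medium", "high"): "medium", ("medium", "medium"): "medium", ("medium", "low"): "low",
    ("low",    "high"): "low",    ("low",    "medium"): "low",    ("low",    "low"): "low",
}

def risk_level(likelihood: str, impact: str) -> str:
    """Return the overall risk level for a likelihood/impact pair."""
    return MATRIX[(likelihood, impact)]

print(risk_level("medium", "high"))  # -> medium
```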
A description of each risk level and recommended actions:
The third step in the risk assessment process is to communicate the assessment results and share risk-related information. The objective of this step is to ensure that decision makers across the organization have the appropriate risk-related information needed to inform and guide risk decisions. Communicating and sharing information consists of the following specific tasks:
The fourth step in the risk assessment process is to maintain the assessment. The objective of this step is to keep current the specific knowledge of the risk organizations incur. The results of risk assessments inform risk management decisions and guide risk responses. To support the ongoing review of risk management decisions (e.g., acquisition decisions, authorization decisions for information systems and common controls, connection decisions), organizations maintain risk assessments to incorporate any changes detected through risk monitoring. Risk monitoring provides organizations with the means to, on an ongoing basis:
Maintaining risk assessments includes the following specific tasks:
Risk treatment is the next step in the risk assessment process. The goal of risk treatment is to reduce risk exposure to levels that are acceptable to the organization. Risk treatment can be performed using a number of different strategies. These strategies include:
Risk mitigation reduces risks to the organization by implementing technical, managerial, and operational controls. Controls should be selected and implemented to reduce risk to acceptable levels. When controls are selected, they should be selected based on their cost, effectiveness, and ability to reduce risk. Controls restrict or constrain behavior to acceptable actions. To help understand controls, look at some examples of security controls.
A simple control is the requirement for a password to access a critical system. When an organization implements a password control, unauthorized users would theoretically be prevented from accessing a system. The key to selecting controls is to select controls that are appropriate based on the risk to the organization. Although we noted that a password is an example of a control, we did not discuss the length of the password.
If a password is implemented that is one character long and must be changed on an annual basis, a control is implemented; however, the control will have almost no effect on reducing the risk to the organization because the password could be easily cracked. If the password is cracked, unauthorized users could access the system and, as a result, the control has not effectively reduced the risk to the organization.
On the other hand, if a password that is 200 characters long and must be changed on an hourly basis is implemented, the control will effectively prevent unauthorized access. The issue in this case is that a password with those requirements imposes an unacceptably high cost on the organization. End users would experience a significant productivity loss because they would be constantly changing their passwords.
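A back-of-envelope calculation makes the one-character example concrete: the search space an attacker must cover grows exponentially with password length. The alphabet size and guess rate below are assumptions chosen only to illustrate the scale of the difference.

```python
# Back-of-envelope illustration of why a one-character password is a
# worthless control: worst-case brute-force time grows exponentially
# with length. Assumes a 94-character printable-ASCII alphabet and a
# hypothetical attacker testing one billion guesses per second.

ALPHABET_SIZE = 94
GUESSES_PER_SECOND = 1e9

def worst_case_seconds(length: int) -> float:
    """Seconds to exhaust the full keyspace at the assumed guess rate."""
    return ALPHABET_SIZE ** length / GUESSES_PER_SECOND

print(f"1 char:   {worst_case_seconds(1):.1e} s")   # cracked instantly
print(f"12 chars: {worst_case_seconds(12):.1e} s")  # millions of years
```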
The key to control selection is to implement cost-effective controls that reduce or mitigate risks to levels that are acceptable to the organization. By implementing controls based on this concept, organizations will reduce risk but not totally eliminate it.
Controls can be categorized in technical, managerial, or operational control categories. Managerial controls are controls that dictate how activities should be performed. Policies, procedures, standards, and guidelines are examples of managerial controls. These controls provide a framework for managing personnel and operations. They also can establish requirements for systems operations. An example of a managerial control is the requirement that information security policies should be reviewed on an annual basis and updated as necessary to ensure that they accurately reflect the environment and remain valid.
Technical controls are designed to control end-user and system actions. They can exist within operating systems, applications, databases, and network devices. Examples of technical controls include password constraints, access control lists, firewalls, data encryption, antivirus software, and intrusion prevention systems.
Technical controls help to enforce requirements specified within managerial controls. For example, an organization could implement a malicious code policy as a managerial control. The policy could require that all end-user systems have antivirus software installed. Installation of the antivirus software would be the technical control that supports the managerial control.
In addition to being categorized as technical, managerial, or operational, controls can be simultaneously categorized as either preventative or detective. Preventative controls attempt to prevent adverse behavior and actions from occurring. Examples of preventative controls include firewalls, intrusion prevention systems, and segregation of duties. Detective controls are used to detect actual or attempted violations of system security. Examples of detective controls include intrusion detection systems and audit logging.
Although implementing controls will reduce risk, some amount of risk will remain even after controls have been selected and implemented. The risk that remains after risk reduction and mitigation efforts are complete is referred to as residual risk. Organizations must determine how to treat this residual risk. Residual risk can be treated by risk transference, risk avoidance, or risk acceptance.
Risk transference transfers risk from an organization to a third party. Some types of risk such as financial risk can be reduced by transferring it to a third party via a number of methods. The most common risk transference method is insurance. An organization can transfer its risk to a third party by purchasing insurance. When an organization purchases insurance, it effectively sells its risk to a third party. The insurer agrees to accept the risk in exchange for premium payments made by the insured. If the risk is realized, the insurer compensates the insured party for any incurred losses.
Organizations must be cautious when relying on risk transference because some risk cannot be transferred. Outsourcing sensitive information processing may seem like a way to outsource the risk of a personnel data breach. If a breach occurs, surely those affected would understand that the outsourced processor caused the breach and direct their anger there, right? Unfortunately, this is rarely the case. In most instances, the victims seek damages against the organization that originally collected the information. Organizations must conduct due diligence when transferring operations and data to third parties because the organization remains responsible for the protection of the data.
Finally, organizations must be aware of risk that cannot be transferred. While financial risk is easy to transfer to a third party through insurance, reputational damage and customer loyalty are almost impossible to transfer. When an information security risk affects these aspects of an organization, the organization must go on the defensive and engage public relations teams to try to maintain customer goodwill and protect the organization's reputation.
Another alternative to mitigate risk is to avoid the risk. Risk can be avoided by eliminating the entire situation causing the risk. This could involve disabling system functionality or preventing risky activities when risk cannot be adequately reduced. In drastic measures, it can involve shutting down entire systems or parts of a business.
Residual risk can also be accepted by an organization. A risk acceptance strategy indicates that an organization is willing to accept the risk associated with the potential occurrence of a specific event. It is important that when an organization chooses risk acceptance, it clearly understands the risk that is present, the probability that the loss related to the risk will occur, and the cost that would be incurred if the loss were realized. Organizations may determine that risk acceptance is appropriate when the cost of implementing controls exceeds the anticipated losses.
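The cost-benefit test described above can be sketched as a simple comparison: a control is worth implementing only when the reduction in annual loss expectancy it provides exceeds its annual cost. The figures in the example are hypothetical.

```python
# Sketch of the risk-acceptance decision described above: implement a
# control only if its annual cost is less than the reduction in annual
# loss expectancy (ALE) it delivers. All figures are hypothetical.

def control_is_cost_effective(ale_before: float, ale_after: float,
                              annual_control_cost: float) -> bool:
    """True if the ALE reduction exceeds the control's annual cost;
    otherwise risk acceptance may be the appropriate treatment."""
    return (ale_before - ale_after) > annual_control_cost

# A control costing $50,000/year that cuts ALE from $120,000 to $30,000
# saves $90,000/year -- implementing it makes sense.
print(control_is_cost_effective(120_000, 30_000, 50_000))  # True

# The same control against a $60,000 ALE saves only $30,000/year -- here
# accepting the residual risk is the rational choice.
print(control_is_cost_effective(60_000, 30_000, 50_000))   # False
```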
Once assessed, whether quantitatively or qualitatively, risk should be reported and recorded. Organizations should have a way to aggregate risk in a centralized function that combines information security risk with other risk such as market risk, legal risk, human capital risk, and financial risk. The organizational risk executive function serves as a way to understand total risk to the organization. Risk is aggregated in a system called a risk register or, in some cases, a risk dashboard. The SSCP must ensure only risk information (not vulnerability or threat information) is reported to the risk register. The risk register serves as a way for the organization to know its possible exposure at a given time.
The risk register will generally be shared with stakeholders, allowing them to be kept aware of issues and providing a means of tracking the response to issues. It can be used to flag new risks and to make suggestions on what course of action to take to resolve any issues. The risk register is there to help with the decision-making process and enables managers and stakeholders to handle risk in the most appropriate way. The risk register is a document that contains information about identified risks, analysis of risk severity, and evaluations of the possible solutions to be applied. Presenting this in a spreadsheet is often the easiest way to manage things so that key information can be found and applied quickly and easily.
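As noted, a spreadsheet is often the easiest way to keep a register. The sketch below writes a minimal register to a CSV file; the field names and example entries are illustrative assumptions, not a prescribed schema.

```python
# A minimal risk register kept as a CSV spreadsheet, as suggested above.
# Field names and example entries are illustrative assumptions.
import csv

FIELDS = ["id", "description", "likelihood", "impact", "risk_level",
          "treatment", "owner", "status"]

risks = [
    {"id": "R-001", "description": "Unpatched Internet-facing web server",
     "likelihood": "high", "impact": "high", "risk_level": "high",
     "treatment": "mitigate: apply patches, restrict exposure",
     "owner": "IT Operations", "status": "open"},
    {"id": "R-002", "description": "Loss of backup media in transit",
     "likelihood": "low", "impact": "medium", "risk_level": "low",
     "treatment": "transfer: insured courier service",
     "owner": "Security", "status": "accepted"},
]

# Write the register so stakeholders can review and track responses.
with open("risk_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(risks)
```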
The SSCP should be familiar with how to create a risk register for the organization. If you are unsure how to create a risk register, the following is a brief guide on how to get started in just a few steps:
The risk register addresses risk management in four key steps:
A security audit is an evaluation of how well the objectives of a security framework are met and a verification that the security framework is appropriate for the organization. Nothing that comes out of an audit should surprise security practitioners if they have been doing their continuous monitoring. Think of it this way: monitoring is an ongoing evaluation of a security framework done by the people who manage security day-to-day, while an audit is an evaluation of the security framework performed by someone outside of day-to-day security operations. Security audits serve two purposes for the security practitioner. First, they point out areas where security controls are lacking, policy is not being enforced, or ambiguity exists in the security framework. Second, they highlight the security practices that are being done right. Auditors, in general, should not be perceived as the “bad guys” out to prove what a bad job the organization is doing. On the contrary, auditors should be viewed as professionals who are there to assist the organization in driving the security program forward and to help security practitioners make management aware of what security steps are being correctly taken and what more needs to be done.
So, who audits and why? There are two categories of auditors. The first type is an internal auditor; these people work for the company, and their audits can be perceived as an internal checkup. The other type is an external auditor; these people either are under contract with the company to perform objective audits or are brought in by other external parties. Audits can be performed for several reasons. This list is by no means exhaustive of all the reasons that an audit may be performed, but it covers most of them.
What are the auditors going to use as a benchmark to test against? Auditors should use the organization's security framework as a basis for them to audit against; however, to ensure that everything is covered, they will first compare the organization's framework against a well-known and accepted standard. What methodology will the auditors use for the audit? There are many different methodologies used by auditors worldwide. Here are a few of them:
Auditors may also use a methodology of their own design or one that has been adapted from several guidelines. What difference does it make to you as a security practitioner which methodology they use? From a technical standpoint, it does not matter; however, it is a good idea to be familiar with the methodology they are going to use to understand how management will be receiving the audit results.3
Auditors collect information about your security processes and are responsible for:
Before auditors begin an audit, they define the audit scope, which outlines what they are going to examine. One way to define the scope of an audit is to break the IT systems into manageable areas upon which audits may be based. For example, an audit may be divided into eight domains of security responsibility, such as:
Auditors may also limit the scope by physical location, or they may just choose to review a subset of your security framework.
The security practitioner will be asked to participate in the audit by helping the auditors collect information and by interpreting the findings for them. That said, there may be times when IT will not be asked to participate: IT might be the reason for the audit, and the auditors need an unbiased interpretation of the data. There are several areas in which the security practitioner will be asked to participate. Before participating in an audit, ensure that controls, policies, and standards are in place to support the target areas, and be sure they are working at the prescribed level. An audit is not the place to discover that logging or some other function is not working.
As part of an audit, the auditors will want to review system documentation. This can include:
Once the security practitioner has finished helping the auditors gather the required data and finished assisting them with interpreting the information, the practitioner’s work is not over. There are still a few more steps that need to be accomplished to complete the audit process.
After an audit is performed, an exit interview will alert personnel to glaring issues they need to be concerned about immediately. Besides these preliminary alerts, an auditor should avoid giving detailed verbal assessments, which may falsely set the expectation level of the organization with regard to security preparedness in the audited scope.
After the auditors have finished tabulating their results, they will present the findings to management. These findings will contain a comparison of the audit findings versus the company’s security framework and industry standards or “best practices.” These findings will also contain recommendations for mitigation or correction of documented risks or instances of noncompliance.
Management will have the opportunity to review the audit findings and respond to the auditors. This is a written response that becomes part of the audit documentation. It outlines plans to remedy findings that are out of compliance, or it explains why management disagrees with the audit findings.
The security practitioner should be involved in the presentation of the audit findings and assist with the management response. Even if input is not sought for the management response, the security practitioner needs to be aware of what issues were presented and what the management’s responses to those issues were.
Once all these steps are completed, the security cycle starts over again. The findings of the audit need to be fixed, mitigated, or introduced into the organization's security framework.
The security practitioner will be expected to perform security assessment activities including, but not limited to, vulnerability scanning, penetration testing, and internal and external assessments, and to interpret the results.
Vulnerability scanning is simply the process of checking a system for weaknesses. These vulnerabilities can take the form of applications or operating systems that are missing patches, misconfigured systems, unnecessary applications, or open ports. While these tests can be conducted from outside the network, as an attacker would, it is advisable to also perform a vulnerability assessment from a network segment that has unrestricted access to the host being assessed. Why? If a system is tested only from the outside world, the assessment will identify only the vulnerabilities that can be exploited from the outside. What happens if an attacker gains access inside the target network? Now there are vulnerabilities exposed to the attacker that could easily have been avoided. Unlike a penetration test, which is discussed later in this domain, a vulnerability tester has access to network diagrams, configurations, login credentials, and other information needed to make a complete evaluation of the system. The goal of a vulnerability assessment is to study the security level of the systems, identify problems, offer mitigation techniques, and assist in prioritizing improvements.
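At its core, a vulnerability scan begins by discovering which services a host exposes. The following is a minimal sketch using only the standard library, not a substitute for a full scanning product; the host address and port list are hypothetical and should be replaced with targets the security practitioner is authorized to assess.

```python
import socket

def tcp_connect_scan(host, ports, timeout=1.0):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the three-way handshake succeeds
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Hypothetical example: probe a handful of well-known ports on the local host
print(tcp_connect_scan("127.0.0.1", [22, 80, 143, 443]))
```

Running the same scan from an internal segment and from outside the perimeter, per the guidance above, will typically produce two different lists, and both views matter.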
The benefits of vulnerability testing include the following:
The disadvantages of vulnerability testing include:
Note that vulnerability testing software is often placed into two broad categories:
General vulnerability software probes hosts and operating systems for known flaws. It also probes common applications for flaws. Application-specific vulnerability tools are designed specifically to analyze certain types of application software. For example, database scanners are optimized to understand the deep issues and weaknesses of Oracle databases, Microsoft SQL Server, etc., and they can uncover implementation problems therein. Scanners optimized for web servers look deeply into issues surrounding those systems.
Vulnerability scanning software, in general, is often referred to as V/A (vulnerability assessment) software, and sometimes combines a port mapping function to identify which hosts are where and the applications they offer with further analysis that assigns vulnerabilities to applications. Good vulnerability software will offer mitigation techniques or links to manufacturer websites for further research. This stage of security testing is often an automated software process. It is also beneficial to use multiple tools and cross-reference the results of those tools for a more accurate picture. As with any automated process, the security practitioner needs to examine the results closely to ensure that they are accurate for the organization's environment.
Vulnerability testing usually employs software specific to the activity and tends to have the following qualities:
Problems that may arise when using vulnerability analysis tools include:
A variety of scanner tools are available:
Even if a scanner reports a service as vulnerable, or as missing a patch that leads to a vulnerability, the system is not necessarily vulnerable. Accuracy is a function of the scanner's quality; that is, how well the testing mechanisms are built (better tests equal better results), how up to date the testing scripts are (fresher scripts are more likely to spot a fuller range of known problems), and how well it performs OS fingerprinting (knowing which OS the host runs helps the scanner pinpoint issues for applications that run on that OS). Double-check the scanner's work. Verify that a claimed vulnerability is an actual vulnerability. Good scanners will reference documents to help the security practitioner learn more about the issue.
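One lightweight way to double-check a scanner's claim is to grab the service banner and compare the advertised version against the versions the scanner flagged. This is only a sketch: the host, port, and version strings in the example are hypothetical, and many services do not volunteer a banner at all, so a missing banner proves nothing either way.

```python
import socket

def grab_banner(host, port, timeout=2.0):
    """Connect and return whatever greeting the service volunteers, if any."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            return s.recv(1024).decode(errors="replace").strip()
    except OSError:
        return None  # closed, filtered, or silent service

def looks_vulnerable(banner, vulnerable_versions):
    """Crude check: does the advertised version match a known-bad one?"""
    if banner is None:
        return False
    return any(version in banner for version in vulnerable_versions)

# Hypothetical example: a scanner flagged SSH on 10.1.2.8; confirm the
# advertised version before accepting the finding.
# banner = grab_banner("10.1.2.8", 22)
# print(looks_vulnerable(banner, ["OpenSSH_5.3", "OpenSSH_6.6"]))
```

A banner match still only narrows the question; backported patches can leave an old version string on a fixed binary, which is exactly why manual verification matters.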
Organizations serious about security create hardened host configuration procedures and use policy to mandate host deployment and change. There are many ingredients to creating a secure host, but the security practitioner should always remember that what is secure today might not be secure tomorrow, because conditions are ever changing. There are several areas to consider when securing a host or when evaluating its security. These are discussed in the following sections.
Services that are not critical to the role the host serves should be disabled or removed as appropriate for that platform. For the services the host does offer, make sure it is using server programs considered secure, make sure the security practitioner fully understands them, and tighten the configuration files to the highest degree possible. Unneeded services are often installed and left at their defaults, and because they are not needed, administrators ignore or forget about them. This may draw unwanted traffic to the host from other hosts attempting connections, and it leaves the host exposed to weaknesses in those services. If a host does not need a particular process for its operation, do not install it. If software is installed but not used or intended for use on the machine, its presence may go unremembered and undocumented, and it will therefore likely never be patched. Port mapping programs use many techniques to discover the services available on a host. These results should be compared with the policy that defines the host and its role. One must continually ask the critical questions, for the less a host offers as a service to the world while still performing its job, the better for its security (because there is less chance of subverting extraneous applications).
Certain programs used on systems are known to be insecure, cannot be made secure, and are easily exploitable; therefore, only secure alternatives should be used. These applications were developed for private, secure LAN environments, but as connectivity proliferated worldwide, their use spread to insecure communication channels. Their weaknesses fall into three categories:
Common services are studied carefully for weaknesses by people motivated to attack the organization’s systems. Therefore, to protect hosts, one must understand the implications of using these and other services that are commonly hacked. Eliminate them when necessary or substitute them for more secure versions. For example:
Least privilege is the concept that describes the minimum number of permissions required to perform a particular task. This applies to services/daemon processes as well as user permissions. Often systems installed out of the box are at minimum security levels. Make an effort to understand how secure newly installed configurations are, and take steps to lock down settings using vendor recommendations.
For UNIX-based systems, remove all unnecessary SUID (set user ID) and SGID (set group ID) programs that embed the ability for a program running in one user context to access another program. This ability becomes even more dangerous in the context of a program running with root user permissions as a part of its normal operation. For Windows-based systems, use the Microsoft Management Console (MMC) “security configuration and analysis” and “security templates” snap-ins to analyze and secure multiple features of the operating system, including audit and policy settings and the registry.
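On a UNIX-based system, the sweep for SUID/SGID programs can be sketched as follows. This illustrative walk only reports candidates for review; deciding which bits are legitimately needed remains a policy decision.

```python
import os
import stat

def find_suid_sgid(root):
    """Walk `root` and collect regular files with the SUID or SGID bit set."""
    findings = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.lstat(path).st_mode
            except OSError:
                continue  # unreadable entry; skip rather than abort the sweep
            if stat.S_ISREG(mode) and mode & (stat.S_ISUID | stat.S_ISGID):
                findings.append(path)
    return findings

# Review the list against documented need before removing any bits, e.g.:
# print(find_suid_sgid("/usr/bin"))
```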
Patches are pieces of software code meant to fix a vulnerability or problem that has been identified in a portion of an operating system or in an application that runs on a host. Keep the following in mind regarding patching:
In a perfect world, applications are built from the ground up with security in mind. Applications should prevent privilege escalation and buffer overflows and a myriad of other threatening problems. However, this is not always the case, and applications need to be evaluated for their ability not to compromise a host. Insecure services and daemons that run on hardened hosts may by nature weaken the host. Applications should come from trusted sources. Similarly, it is inadvisable to download executables from websites the security practitioner knows nothing about. Executables should be hashed and verified with the publisher. Signed executables also provide a level of assurance regarding the integrity of the file. Some programs can help evaluate a host’s applications for problems. In particular, these focus on web-based systems and database systems:
The SSCP should do the following:
Firewalls are designed to be points of data restriction (choke points) between security domains. They operate on a set of rules driven by a security policy to determine what types of data are allowed from one side to the other (point A to point B) and back again (point B to point A). Similarly, routers can also serve some of these functions when configured with access control lists (ACLs). Organizations deploy these devices to not only connect network segments together but also to restrict access to only those data flows that are required. This can help protect organizational data assets. Routers with ACLs, if used, are usually placed in front of the firewalls to reduce the noise and volume of traffic hitting the firewall. This allows the firewall to be more thorough in its analysis and handling of traffic. This strategy is also known as layering or defense in depth.
Changes to devices should be governed by change control processes that specify what types of changes can occur and when they can occur. This prevents haphazard and dangerous changes to devices that are designed to protect internal systems from other potentially hostile networks, such as the Internet or an extranet to which the organization’s internal network connects. Change control processes should include security testing to ensure the changes were implemented correctly and as expected.
Configuration of these devices should be reflected in security procedures, and the rules of the access control lists should be driven by organizational policy. The point of testing is to ensure that machine configurations match approved policy.
With this sample baseline in mind, port scanners and vulnerability scanners can be leveraged to test the choke point’s ability to filter as specified. If internal (trusted) systems are reachable from the external (untrusted) side in ways not specified by policy, a mismatch has occurred and should be assessed. Likewise, internal to external testing should conclude that only the allowed outbound traffic could occur. The test should compare a device’s logs with the tests dispatched from the test host.
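The comparison of scan results against policy can be sketched as a simple set difference. The port numbers below are hypothetical; in practice, the observed set comes from port scans run from each side of the choke point.

```python
def policy_mismatches(observed_open, policy_allowed):
    """Compare scan results against the approved rule base.

    observed_open  - set of ports found reachable through the choke point
    policy_allowed - set of ports policy says should be reachable
    Returns (unexpected_open, expected_but_blocked).
    """
    unexpected = observed_open - policy_allowed   # holes to investigate
    blocked = policy_allowed - observed_open      # broken rules or dead services
    return unexpected, blocked

# Hypothetical baseline: policy allows only SMTP and HTTPS inbound,
# but an external scan also found 3389 reachable.
print(policy_mismatches({25, 443, 3389}, {25, 443}))  # -> ({3389}, set())
```

Either mismatch direction is a finding: an unexpected open port is a potential hole, while an expected-but-blocked port may indicate a broken rule or a failed service.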
Advanced firewall testing will test a device’s ability to perform the following (this is a partial list and is a function of the firewall’s capabilities):
Advanced firewall testing can leverage a vulnerability or port scanner’s ability to dispatch denial-of-service and reconnaissance tests. A scanner can be configured to direct, for example, massive amounts of SYN packets at an internal host. If the firewall is operating properly and effectively, it will limit the number of these half-open attempts by intercepting them so that the internal host is not adversely affected. These tests must be used with care because there is always a chance that the firewall will not do what is expected and the internal hosts might be affected.
IDSs are technical security controls designed to monitor for and alert on the presence of suspicious or disallowed system activity within host processes and across networks. Device logging is used for recording many types of events that occur within hosts and network devices. Logs, whether generated by IDS or hosts, are used as audit trails and permanent records of what happened and when. Organizations have a responsibility to ensure that their monitoring systems are functioning correctly and alerting on the broad range of communications commonly in use. Documenting this testing can also be used to show due diligence. Likewise, the security practitioner can use testing to confirm that IDS detects traffic patterns as claimed by the vendor.
With regard to IDS testing, methods should include the ability to provide a stimulus (i.e., send data that simulate an exploitation of a particular vulnerability) and observe the appropriate response by the IDS. Testing can also uncover an IDS’s inability to detect purposeful evasion techniques that might be used by attackers. Under controlled conditions, stimulus can be crafted and sent from vulnerability scanners. Response can be observed in log files generated by the IDS or any other monitoring system used in conjunction with the IDS. If the appropriate response is not generated, investigation of the causes can be undertaken.
With regard to host logging tests, methods should also include the ability to provide a stimulus (i.e., send data that simulates a “log-able” event) and observe the appropriate response by the monitoring system. Under controlled conditions, stimulus can be crafted in many ways depending on the security practitioner's test. For example, if a host is configured to log an alert every time an administrator or equivalent logs on, the security practitioner can simply log on as the “root” user to the organization's UNIX system. In this example, the response can be observed in the system’s log files. If the appropriate log entry is not generated, investigation of the causes can be undertaken.
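The response side of such a test can be automated by searching the relevant log for the expected entry. A minimal sketch follows; the log path and message text in the commented example are assumptions and must be replaced with the organization's actual log location and alert format.

```python
import re

def log_contains(log_path, pattern):
    """Scan a log file for a line matching `pattern` (the expected response)."""
    regex = re.compile(pattern)
    with open(log_path, errors="replace") as fh:
        return any(regex.search(line) for line in fh)

# Hypothetical test: after logging on as root (the stimulus), confirm the
# monitoring system recorded it.
# print(log_contains("/var/log/auth.log", r"session opened for user root"))
```

If the check returns False after the stimulus, investigation of the causes can begin immediately rather than after an incident.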
The overall goal is to make sure the monitoring is configured to the organization’s specifications and that it has all of the features needed.
The following traffic types and conditions are those the security practitioner should consider testing for in an IDS environment, as vulnerability exploits can be contained within any of them. If the monitoring systems the security practitioner uses do not cover all of them, the organization's systems are open to exploitation:
IPSs are technical security controls designed to monitor for and alert on the presence of suspicious or disallowed system activity within host processes and across networks, and then take action on suspicious activities. Likewise, the security practitioner can use testing to confirm that an IPS detects traffic patterns and reacts as claimed by the vendor. When auditing an IPS, note that its position in the architecture is slightly different from that of an IDS: an IPS must be positioned inline with the traffic flow so the appropriate action can be taken. Some of the other key differences are as follows: the IPS acts on issues and handles the problems, while an IDS only reports on the traffic and requires some other party to react to the situation. The negative consequence of an IPS is that it may reject legitimate traffic, and only the IPS logs will show why that traffic is being rejected. Often the networking staff do not have access to those logs and may find network troubleshooting more difficult.
Some organizations use security gateways or web proxies to intercept certain communications and examine them for validity. Gateways perform their analysis on these communications based on a set of rules supplied by the organization—rules driven by policy—and pass them along if they are deemed appropriate and exploitation-free, or block them if they are not. Security gateway types include the following:
Security testing should encompass these gateway devices to ensure their proper operation. Depending on the device, the easiest way to test that it is working is to try to perform the behavior it is supposed to block. There are “standard” antivirus (AV) test files available on the Internet that do not contain a virus but have a signature that will be detected by all major AV vendors. While such a file will not ensure that the organization's AV will catch everything, it will at least confirm that the gateway is inspecting traffic for virus patterns.
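The best known of these is the EICAR test string, published specifically so that AV engines can be exercised without real malware. A sketch of generating the file follows; run it only in a controlled test, because any resident AV will (correctly) flag and may quarantine the result.

```python
# The EICAR string is inert, but every major AV engine is expected to flag it.
EICAR = (
    r"X5O!P%@AP[4\PZX54(P^)7CC)7}$"
    r"EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"
)

def write_eicar(path):
    """Write the 68-byte EICAR test file for gateway/AV testing."""
    with open(path, "wb") as fh:
        fh.write(EICAR.encode("ascii"))

# write_eicar("eicar.com")  # then attempt to move it through the gateway
```

Attempting to download or mail this file through the gateway should produce a block event and a corresponding log entry; absence of either is a finding.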
With the proliferation of wireless access devices comes the common situation where they are not configured for even minimal authentication and encryption because the people deploying them generally have no knowledge of the ramifications. Therefore, periodic wireless testing to spot unofficial access points is needed.
Adding 802.11-based wireless access points (AP) to networks increases overall convenience for users, mostly because of the new mobility that is possible. Whether using handheld wireless PDA devices, laptop computers, or the newly emerging wireless voice over IP telephones, users are now able to flexibly collaborate outside of the office confines—ordinarily within the building and within range of an AP—while still remaining connected to network-based resources. To enable wireless access, one can procure an inexpensive wireless access point and plug it into the network. Communication takes place from the unwired device to the AP. The AP serves (usually) as a bridge to the wired network.
The problem with this from a security perspective is that allowing the addition of one or more AP units onto the network infrastructure will likely open a large security hole. Therefore, no matter how secure the organization’s wired network is, adding one AP with no configured security is like adding a network connection to the parking lot, which allows anyone with the right tools and motivation to access the network. The implication here is that many APs as originally implemented in the 802.11b/g/n standard have easily breakable security. Security testers should have methods to detect rogue APs that have been added by employees or unauthorized persons so these wireless security holes can be examined. The following wireless networking high points discuss some of the issues surrounding security of these systems:
With this information, a security tester can test for the effectiveness of wireless security in the environment using specialized tools and techniques as presented here.
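Whatever tool performs the radio survey, the final step of rogue-AP detection reduces to comparing observed BSSIDs (AP MAC addresses) against the authorized inventory. The addresses below are made up for illustration.

```python
def find_rogue_aps(observed, authorized):
    """Flag scanned BSSIDs that are absent from the official AP inventory."""
    return sorted(set(observed) - set(authorized))

# Hypothetical scan results versus the official AP inventory:
observed = {"00:11:22:33:44:55", "66:77:88:99:aa:bb", "de:ad:be:ef:00:01"}
authorized = {"00:11:22:33:44:55", "66:77:88:99:aa:bb"}
print(find_rogue_aps(observed, authorized))  # -> ['de:ad:be:ef:00:01']
```

Note that a BSSID missing from the inventory may also be a neighboring business's AP; each flagged address still requires physical investigation before action is taken.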
To search for rogue (unauthorized) access points, the security practitioner can use some of the following techniques:
To lock down the enterprise from the possibility of rogue APs, the security practitioner can do the following:
To gauge security effectiveness of authorized APs:
There are a variety of useful wireless tools available:
War dialing attempts to locate unauthorized, also called rogue, modems connected to computers that are connected to networks. Attackers use tools to sequentially and automatically dial large blocks of numbers used by the organization in the hope that rogue modems, or modems used for out-of-band communication, will answer and allow a remote asynchronous connection. With weak or nonexistent authentication, these rogue modems may serve as a back door into the heart of a network, especially when connected to computers that host remote control applications with lax security. Security testers can use war dialing techniques as a preventive measure and attempt to discover these modems for subsequent elimination. Although modems and war dialing have fallen out of favor in the IT world, a security practitioner still needs to check for the presence of unauthorized modems connected to the network.
War driving is the wireless equivalent of war dialing. While war dialing involves checking banks of numbers for a modem, war driving involves traveling around with a wireless scanner looking for wireless access points. Netstumbler was one of the original products that people used for war driving. From the attacker perspective, war driving gives them a laundry list of access points where they can attach to a network and perform attacks. The best ones in a hacker’s eye are the unsecured wireless access points that allow unrestricted access to the corporate network. The hacker will not only compromise the corporate network but will then use the corporate Internet access to launch attacks at other targets that are then untraceable back to the hacker. From a security standpoint, war driving enables the security practitioner to detect rogue access points in and around the physical locations. Is an unsecured wireless access point that is not on the network a security threat to the network? It certainly is. If a user can connect their workstation to an unknown and unsecured network, they introduce a threat to the security of the network.
Penetration testing takes vulnerability assessment one step farther. It does not stop at identifying vulnerabilities; it also uses those vulnerabilities to expose other weaknesses in the network. Penetration testing consists of five different phases:
Penetration testing also has three different modes, which will be explored in more detail later in the domain. Those modes are:
These testers perform tests with the knowledge of the security and IT staff. They are given physical access to the network and sometimes even a normal username and password. Qualities include:
There are several pros and cons to consider with the white box approach:
Grey box testing involves giving some information to the penetration testing team. Sometimes this may be publicly discoverable information, and it may also include some information about systems inside the protective boundaries of the organization. Grey box testing allows the penetration testing team to focus on attacking the organization and trying to gain access, reducing the time spent on discovery. Organizations that feel they have a good grasp on what is publicly available about them often use this approach to maximize the resources focused on specific system attacks.
There are several pros and cons to consider with the grey box approach:
These testers generally perform unannounced tests that even the security and IT staff may not know about. Sometimes these tests are ordered by senior managers to test their staff and the systems for which the staff are responsible. Other times, the IT staff will hire covert testers under the agreement that the testers can and will test at any given time, such as four times per year. The objective is generally to see what they can see and get into whatever they can get into, without causing harm, of course. Qualities include:
There are several pros and cons to consider with the black box approach:
Without defined goals, security testing can be a meaningless and costly exercise. The following are examples of some high-level goals for security testing, thereby providing value and meaning for the organization:
It is extremely important for the security practitioner to ensure that there is business support and authorization (in accordance with a penetration testing policy) before conducting a penetration test. It is advisable to get this support and permission in writing before conducting the testing.
Software tools exist to assist in the testing of systems from many angles. Tools help the security practitioner to interpret how a system functions for evaluating its security. This section presents some tools available on the commercial market and in the open source space as a means to the testing end. Do not interpret the listing of a tool as a recommendation for its use. Likewise, just because a tool is not listed does not mean it is not worth considering. Choosing a tool to test a particular aspect is a personal or organizational choice.
Some points to consider regarding the use of software tools in the security testing process:
Be aware that the models for detection of vulnerabilities are inconsistent among different toolsets; therefore, results should be studied and made reasonably consistent among different tools.
It is often easier to understand the testing results by creating a graphical depiction in a simple matrix of vulnerabilities, ratings for each, and an overall vulnerability index derived as the product of vulnerability and system criticality. More complicated matrices may include details describing each vulnerability and sometimes ways to mitigate it or ways to confirm the vulnerability.
Tests should conclude with a report and matrices detailing the following:
Testers often present findings matrices that list each system and the vulnerabilities found for each with a “high, medium, low” ranking. The intent is to provide the recipient a list of what should be fixed first. The problem with this method is that it does not take into account the criticality of the system in question. You need a way to differentiate among the urgency for fixing “high” vulnerabilities across systems. Therefore, reports should rank true vulnerabilities by seriousness, taking into account how the organization views the asset’s value. Systems of high value may have their medium and low vulnerabilities fixed before a system of medium value has any of its vulnerabilities fixed. This criticality is determined by the organization, and the reports and matrices should help reflect a suggested path to rectifying the situation. For example, the organization has received a testing report listing the same two high vulnerabilities for a print server and an accounting database server. The database server is certainly more critical to the organization; therefore, its problems should be mitigated before those of the print server.
As another example, assume the organization has assigned a high value to the database server that houses data for the web server. The web server itself has no data but is considered medium value. In contrast, the FTP server is merely for convenience and is assigned a low value. A security testing matrix may show several high vulnerabilities for the low-value FTP server. It may also list high vulnerabilities for both the database server and the web server. The organization will likely be interested in fixing the high vulnerabilities first on the database server, then the web server, and then the FTP server. This level of triage is further complicated by trust and access between systems. If the web server gets the new pages and content from the FTP server, this may increase the priority of the FTP server issues over that of the usually low-value FTP server issues.
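This triage logic can be sketched as a simple product of severity and asset value, mirroring the examples above. The numeric scales (high = 3, medium = 2, low = 1) are an assumption for illustration; organizations should substitute their own rating scheme.

```python
def prioritize(findings, asset_value):
    """Rank findings by vulnerability severity multiplied by asset criticality."""
    ranked = [(severity * asset_value[host], host, severity)
              for host, severity in findings]
    return sorted(ranked, reverse=True)  # highest index first

# Hypothetical ratings: database = 3 (high value), web = 2, ftp = 1;
# severity uses the same 3/2/1 scale.
asset_value = {"database": 3, "web": 2, "ftp": 1}
findings = [("ftp", 3), ("database", 3), ("web", 3), ("database", 2)]
for index, host, severity in prioritize(findings, asset_value):
    print(index, host, severity)
```

With these inputs, the database server's high finding outranks the web server's, and even the database server's medium finding can outrank the FTP server's high finding, which is exactly the ordering the narrative above argues for.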
Basic security testing activities include reconnaissance and network mapping.
Penetration testing is an art. This means that different IT security practitioners have different methods for testing. This domain attempts to note the highlights to help the security practitioner differentiate among the various types and provides information on tools that assist in the endeavor. Security testing is an ethical responsibility. Testing must always be authorized, and the techniques should never be used for malice. This information on tools is presented for the purpose of helping the security practitioner spot weaknesses in the systems the security practitioner is authorized to test so that they may be improved.
Often reconnaissance is needed by a covert penetration tester who has not been granted regular access to perform a cooperative test. These testers are challenged with having little to no knowledge of the system and must collect it from other sources to form the basis of the test attack.
Reconnaissance is necessary for these testers because they likely have no idea what they will be testing at the commencement of the test. Their orders are usually “see what you can find and get into, but do not damage anything.”
Once the security practitioner thinks they know what should be tested based on the information collected, they must always check with the persons who have hired them to ensure that the systems the security practitioner intends to penetrate are actually owned by the organization. Doing otherwise may put delicate systems in harm’s way, or the tester may test something not owned by the organization ordering the test (leading to possible legal repercussions and financial exposure). These parameters should be defined before the test.
Social engineering is an activity that involves the manipulation of persons or physical reconnaissance to get information for use in exploitation or testing activities. Low-tech reconnaissance uses simple technical means to obtain information.
Before attackers or testers make an attempt on the organization's systems, they can learn about the target using low-technology techniques such as:
Mid-tech reconnaissance includes several ways to get information that can be used for testing.
The following are example attacks:
There are many sources for Whois information and tools, including:
In an effort to discover the names and types of servers operating inside or outside a network, attackers may attempt zone transfers from DNS servers. A zone transfer is a special type of query directed at a DNS server that asks the server for the entire contents of its zone (the domain that it serves). Information that is derived is only useful if the DNS server is authoritative for that domain. To find a DNS server authoritative for a domain, one can refer to the Whois search results, which often provide this information. Internet DNS servers will often restrict which servers are allowed to perform transfers, but internal DNS servers usually do not have these restrictions.
Secure systems should lock down DNS. Testers should see how the target does this by keeping the following in mind:
There are varieties of free programs available that will resolve DNS names, attempt zone transfers, and perform a reverse lookup of a specified range of IPs. Major operating systems also include the “nslookup” program, which can also perform these operations.
Network mapping is a process that paints the picture of which hosts are up and running externally or internally and what services are available on the system. Commonly, the security practitioner may see mapping in the context of external host testing and enumeration in the context of internal host testing, but this distinction is not ironclad; the two terms accomplish similar goals and are often used interchangeably.
When performing mapping of any kind, the tester should limit the mapping to the scope of the project. Testers may be given a range of IP addresses to map, so the testers should limit the query to that range. Overt and covert testing usually includes network mapping, which is the activity that involves techniques to discover the following:
Mapping is the precursor to vulnerability testing and usually defines what will be tested more deeply at that next stage. For example, consider a scenario where the security practitioner discovers that a host is listening on TCP port 143. This probably indicates the host is running application services for the IMAP mail service. Many IMAP implementations have vulnerabilities. During the network mapping phase, the security practitioner learns that host 10.1.2.8 is listening on this port. Network mapping may provide insight about the operating system the host is running, which may in turn narrow the possible IMAP applications. For example, it is unlikely that the Microsoft Exchange IMAP process will be running on a Solaris computer; therefore, if network mapping shows a host with telltale Solaris “fingerprints” as well as indications that the host is listening on TCP port 143, the IMAP server is probably not being provided by Microsoft Exchange. As such, when the security practitioner is later exploring vulnerabilities, they can likely eliminate Exchange IMAP vulnerabilities for this host.
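The mapping probe in this scenario can be sketched as a simple TCP connect test. This is a minimal illustration using Python’s standard socket library; the 10.1.2.8 address is the placeholder host from the example above, and a real engagement would use a purpose-built scanner such as Nmap.

```python
import socket

def probe_tcp_port(host, port, timeout=2.0):
    """Return True if a TCP service is listening on host:port.

    A successful connect() means something accepted the connection;
    a refusal or timeout means the port is closed or filtered.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Usage (placeholder address from the scenario above):
#   probe_tcp_port("10.1.2.8", 143)  -> True if something is listening on 143
```

A result of True on port 143 is only the starting point; fingerprinting and vulnerability analysis then narrow down which IMAP implementation is actually behind it.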
Mapping results can be compared to security policy to discover rogue or unauthorized services that appear to be running. For example, an organization may periodically run mapping routines to match results with what should be expected. If more services are running than one would expect to be running, the systems may have been accidentally misconfigured (therefore opening up a service not approved in the security policy) or the host(s) may have been compromised.
When performing mapping, make sure the security practitioner is performing the mapping on a host range owned by the organization. For example, suppose an nslookup DNS domain transfer for bobs-italianbookstore.com showed a mail server at 10.2.3.70 and a Web server at 10.19.40.2. Assuming the security practitioner does not work for bobs-italianbookstore.com and does not have intimate knowledge of its systems, the security practitioner might assume that it has two Internet connections. In a cooperative test, the best course of action is to check with the administrative staff for clarification. They may tell the security practitioner that the mail is hosted by another company and is outside the scope of the test, while the web server is a host to be tested. You should ask which part of the 10.0.0.0 network bobs-italianbookstore.com controls. Let us assume it controls the class C 10.19.40.0 range. Therefore, network mapping of 10.19.40.1 through 10.19.40.254 is appropriate and will not interfere with anyone else’s operations. Even though only one host is listed in the DNS, there may be other hosts up and running.
Depending on the level of stealth required (i.e., to avoid detection by IDS systems or other systems that will “notice” suspected activity if a threshold for certain types of communication is exceeded), network mapping may be performed very slowly over long periods of time. Stealth may be required for covert penetration tests.
Network mapping can involve a variety of techniques for probing hosts and ports. Several common techniques are:
Available mapping tools include:
Another technique for mapping a network is commonly known as “firewalking,” which uses traceroute techniques to discover which services a filtering device like a router or firewall will allow through. These tools generally function by transmitting TCP and UDP packets on a particular port with a time to live (TTL) equal to at least one greater than the targeted router or firewall. If the target allows the traffic, it will forward the packets to the next hop. At that point, the traffic will expire as it reaches the next hop, and an ICMP_TIME_EXCEEDED message will be generated and sent back out of the gateway to the test host. If the target router or firewall does not allow the traffic, it will drop the packets and the test host will not see a response.
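The decision logic behind firewalking can be illustrated without crafting real packets. The sketch below is a simplified model of the expected responses, not a packet-crafting tool; the hop count and port-policy flag are hypothetical inputs a real tool would discover by probing.

```python
def firewalk_response(gateway_hop, probe_ttl, port_allowed):
    """Model the expected result of a firewalking probe.

    gateway_hop:  hop count of the filtering router/firewall
    probe_ttl:    TTL set on the probe packet
    port_allowed: whether the filter permits the probed port

    Returns the response the test host should observe.
    """
    if probe_ttl <= gateway_hop:
        # TTL expires at or before the filter itself.
        return "ICMP_TIME_EXCEEDED (from an intermediate hop)"
    if port_allowed:
        # The filter forwards the probe; the TTL expires at the next
        # hop, which sends ICMP_TIME_EXCEEDED back to the test host.
        return "ICMP_TIME_EXCEEDED (beyond the filter: port is open through it)"
    # The filter silently drops the probe.
    return "no response (port filtered)"

# A probe with TTL one greater than the gateway reveals the filter policy:
#   firewalk_response(gateway_hop=5, probe_ttl=6, port_allowed=True)
#   firewalk_response(gateway_hop=5, probe_ttl=6, port_allowed=False)
```

The key observation is that receiving ICMP_TIME_EXCEEDED from a hop beyond the filter proves the filter forwarded the probe, while silence implies the port is blocked.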
Available firewalking tools include:
By all means, do not forget about the use of basic built-in operating system commands for discovering hosts and routes. Basic built-in and other tools include:
Another useful technique is system fingerprinting. System fingerprinting refers to testing techniques used by port scanners and vulnerability analysis software that attempt to identify the operating system in use on a network device and the versions of services running on the host. Why is it important to identify a system? By doing so, the security practitioner knows what they are dealing with and, later on, what vulnerabilities are likely for that system. As mentioned previously, Microsoft-centric vulnerabilities are (usually) not going to show up on Sun systems and vice versa.
One final resource to keep in mind: web repositories of vulnerability information are extremely useful for the security practitioner. Websites such as Shodan (www.shodanhq.com) actively compile vulnerable online devices in a searchable format. Additionally, organizations such as NIST maintain centralized vulnerability databases such as the National Vulnerability Database (http://nvd.nist.gov/).
Before active penetration, the security practitioner needs to evaluate the findings and perform risk analysis on the results to determine which hosts or services the security assessor is going to try to actively penetrate. The security practitioner should not perform an active penetration on every host until the organization has fully completed Phase 2. The security practitioner must also identify the potential business risks associated with performing a penetration test against particular hosts. The security practitioner can, and probably will, interrupt normal business processes if they perform a penetration test on a production system. The business leaders need to be made aware of that fact, and they need to be involved in deciding which devices to actively penetrate.
This bears repeating: think twice before attempting to exploit a possible vulnerability that may harm the system. For instance, if the system might be susceptible to a buffer overflow attack, it might be enough to identify the vulnerability without actually exploiting it and bringing down the system. Weigh the benefit of conclusively identifying a vulnerability against the risk of crashing the system. Here are some examples:
As with any security testing, documentation of the test, analysis of the results, and reporting those results to the proper managers are imperative. Many different methods can be utilized when reporting the results of a vulnerability scan. A comprehensive report will have separate sections for each management/technical level involved in the test. An overview of the results with a summary of the findings might be ideal for management, while a technical review of specific findings with remediation recommendations would be appropriate for the device administrators. As with any report that outlines issues, it is always best to offer solutions for fixing the issues as well as reporting their existence.
The following outline provides a high-level view of the steps that could be taken to exploit systems during a penetration test. It is similar in nature to a vulnerability test but goes further to perform an exploit.
Continuous monitoring represents the desire to have real-time risk information available at any time to make organizational decisions. Continuous monitoring systems are comprised of sensor networks, input from assessments, logging, and risk management. When implemented correctly, continuous monitoring systems can provide organizations with a sense of information security risk; when not configured correctly, they can lead an organization to false panic or a false sense of security.
The security practitioner should assume that all systems are susceptible to attack and at some point will be attacked. This mindset helps prepare for inevitable system compromises. Comprehensive policies and procedures and their effective use are excellent mitigation techniques for stemming the effectiveness of attacks. Security monitoring is a mitigation activity used to protect systems, identify network patterns, and identify potential attacks.
Monitoring terminology can seem arcane and confusing. The purpose of this list is to define by example and reinforce terminology commonly used when discussing monitoring technology.
What is an IDS and an IDPS and how do they differ? IDS stands for intrusion detection system. It is a passive system that detects security events but has limited ability to intervene on the event. Intrusion prevention is the process of performing intrusion detection and attempting to stop detected possible incidents. Intrusion detection and prevention systems (IDPS) are primarily focused on identifying possible incidents, logging information about them, attempting to stop them, and reporting them to security administrators.4
There are two types of IDS/IDPS devices. A network-based IDS, or NIDS, generally connects to one or more network segments; it monitors and interprets traffic and identifies security events by anomaly detection or by matching the signature of the event. A host-based IDS, or HIDS, usually resides as a software agent on the host. Most of the newer NIDS are IDPS devices rather than IDS. An active HIDS can do a variety of things to protect the host, ranging from intercepting system calls and examining them for validity to stopping application access to key system files or parameters. An active HIDS would be considered an IDPS. A passive HIDS (IDS) takes a snapshot of the file system in a known clean state and then compares it against the current file system on a regular basis, flagging alerts based on file changes.
Business data flow diagrams and data classification policies are a good starting point. HIDS and NIDS need to be deployed where they can best protect critical organizational assets. Of course, in a perfect world with unlimited budget, all devices would be protected by HIDS and NIDS, but most IT security staffs have to deal with limited personnel and limited budget. Protect critical assets first, and protect as much as possible with the fewest devices. A good starting point would be to place HIDS on all systems that contain financial data, HR data, PII, research, and any other data for which protection is mandated. NIDS should be placed at all ingress points into your network and on segments separating critical servers from the rest of the network. There used to be some debate among security practitioners as to whether NIDS should be placed outside your firewall to identify all attacks against your network or inside your firewall to identify only those that made it past your firewall rules. The general practice now is to not worry about whether it is raining outside; only worry about what is dripping into your living room. In other words, keep the NIDS inside the firewall.
Both HIDS and NIDS are notoriously noisy out of the box and should be tuned for specific environments. They also need to be configured to notify the responsible people via the approved method if they identify a potential incident. These, like the alerts themselves, must be configured properly to ensure that when something really does happen, the designated people know to take action. For the most part, low-priority events can send alerts via email, while high-priority events should page or call the security personnel on call.
The security practitioner should remember the security tenets of confidentiality, integrity, and availability. An IDS can alert on threats to these by matching traffic against known conditions (signatures) or by flagging unknown but suspect conditions (anomalies).
The security practitioner must make decisions concerning the deployment of monitoring systems. What types to deploy and where to deploy are functions of budget and system criticality. Implementation of monitoring should be supported by policy and justified by the risk assessment. The actual deployment of the sensors will depend on the value of the assets.
Monitoring control deployment considerations should include:
As system changes are made, the security systems that protect them should be considered. During the change control process, new changes should factor in how the security systems will handle them. Some sample questions that the security practitioner should consider are as follows:
Organizations must have a policy and plan for dealing with events as they occur and the corresponding forensics of incidents.
The security practitioner should consider asking the following questions:
When organizations suffer attacks, logging information, whether generated by a host, network device, IDS/IPS, or other device, may be at some point considered evidence by law enforcement personnel. Preserving a chain of custody for law enforcement is important so as not to taint the evidence for use in criminal proceedings.
If unauthorized activity is detected, IDS/IPS systems can take one or both of the following actions:
The following is a list of examples of passive IDS/IPS response:
The following is a list of examples of active IPS response:
Active IDS response pitfalls include:
Many monitoring systems provide the means to specify some sort of response if a particular signature fires, although doing so may have unintended consequences.
Entertain the notion that a signature has been written that is too generic in nature. This means that it sometimes fires because of exploit traffic and is a true positive but sometimes fires because of innocent traffic and is a false positive. If the signature is configured to send TCP resets to an offending address and does so in a false-positive situation, the IDS may be cutting off legitimate traffic.
Self-induced denial of service can also be a problem with active response systems. If an attacker decides to spoof the IP address of a business partner and sends attacks with the partner’s IP address, and the organization’s IDS reacts by dynamically modifying the edge router configuration, communications with the business partner will be cut off.
Take note that active response mechanisms should be used carefully and be limited to the types of actions listed above. Some organizations take it upon themselves to implement systems that actively counterattack systems they believe have attacked them. This is highly discouraged and may result in legal issues for the organization. It is irresponsible to counterattack any system for any reason.
Attackers are threats generally thought of as persons who perform overt or covert intrusions or attacks on systems, and who act when motive, means, and opportunity align. Motivations typically fall into one of the following types:
There is one category of “attacker” that is often overlooked by security professionals, and while it may seem unfair to lump them in with attackers, organizations ignore them at their peril. These are the individuals within an organization who, through ignorance, ego, or stress, cause unintended intrusions to occur. They could be the IT system administrator who finds it easier to grant everyone administrative rights to a server rather than take the time to define access controls correctly, or the administrative assistant who uses someone else’s login to get into the HR system just because it was already logged in. These individuals can cause as much damage to a company as someone who is deliberately attacking it, and the incidents they create need to be addressed like any other intrusion incident.
Intrusions are acts by persons, organizations, or systems that violate the security framework of the recipient. In some instances, it is easier to identify an intrusion as a violation. For example, you have a policy that prohibits users from using computing resources while logged in as someone else. While in the true sense of the definition the event is an intrusion, it is more acceptable to refer to the event as a violation. Intrusions are considered attacks but are not limited to computing systems. If something is classified as an intrusion, it will fall into one of the following two categories:
An event is a single occurrence that may or may not indicate an intrusion. All intrusions contain events, but not all events are intrusions. For example, a user account is locked for bad password attempts. That is a security event. If it was just the user forgetting what she changed her password to last Friday, it is not an intrusion; however, if she claims that she did not try to log in before her account was locked, it might be an intrusion.
There are two basic classes of monitoring devices: real-time and non–real-time.
Monitoring should be designed to positively identify actual attacks (true positives) without identifying regular communications as threats (false positives); in other words, it should do everything possible to increase the identification of actual attacks and decrease false-positive notifications.
Monitoring event information can provide considerable insight about possible attacks perpetrated within your network. This information can help organizations make necessary modifications to security policies and the supporting safeguards and countermeasures for improved protection.
Monitoring technology needs to be “tuned,” which is the process of customizing the default configuration to the unique needs of your systems. To tune out alarms that are harmless (false positives), the organization must know what types of data traffic are considered acceptable. If a security framework, data classification documentation, and business flow documentation exist, the organization should have a good idea of what information should be on its network and how it should be moving between devices.
Deploying monitoring systems is one part of a multilayer security strategy comprised of several types of security controls. By itself, it will not prevent intrusions or give an organization all the information it needs to analyze an intrusion.
When a system is compromised, an attacker will often alter certain key files to provide continued access and prevent detection. By applying a message digest (cryptographic hash) to key files and then checking the files periodically to ensure the hash has not changed, one can maintain a degree of assurance. On detecting a change, an alert will be triggered. Furthermore, following an attack, the same files can have their integrity checked to assess the extent of the compromise.
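A minimal version of this technique can be sketched with Python’s standard hashlib module. The function names here are illustrative; production tools such as Tripwire or AIDE implement the same idea with far more rigor (protected baselines, scheduled runs, tamper-resistant storage).

```python
import hashlib

def file_digest(path):
    """Compute a SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def take_baseline(paths):
    """Record digests for a set of key files in a known clean state."""
    return {p: file_digest(p) for p in paths}

def check_integrity(baseline):
    """Return the files whose contents no longer match the baseline."""
    return [p for p, digest in baseline.items() if file_digest(p) != digest]
```

A baseline taken at deployment time can be re-checked on a schedule; any path returned by check_integrity warrants an alert, and after an incident the same comparison helps scope the compromise.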
Some examples include:
Monitoring is the ongoing, near-real-time analysis of traditional and non-traditional data sources, related to targeted business activities and controls, to proactively identify, trend, and respond to potential compliance signals and to be predictive of user behavior. Monitoring is considered distinct from auditing, which is typically retrospective and often limited by time, frequency, and scope. Monitoring results inform corrective action plans, including full-scale compliance investigations, policy changes, enhanced training and communications, additional monitoring, focused audits, and other programmatic responses.
Log files can be cumbersome and unwieldy. They can also contain critical information within them that can document compliance with an organization's security framework. Your security framework should have the policies and procedures to cover log files; namely, they need to spell out:
Auditors are going to want to review host logs as part of the audit process. Security practitioners regularly review host log files as part of the organization’s security program. As part of the organization’s log processes, guidelines must be established for log retention and followed. If the organizational policy states to retain standard log files for only six months, that is all the organization should have. During normal log reviews, it is acceptable to use live log files as long as the review does not disrupt the normal logging process.
Any time an incident occurs, the security practitioner should save the log files of all devices that have been affected or are along the network path the intruder took. These files need to be saved differently than your standard log retention policy. Since it is possible that these log files might be used in a court case against the intruder, the security practitioner must follow sound forensic chain of custody principles when obtaining and preserving the logs.
Identifying log anomalies is often the first step in identifying security-related issues, both during an audit and during routine monitoring. What constitutes an anomaly? A log anomaly is anything out of the ordinary. Some will be glaringly obvious, for example, gaps in date/time stamps or account lockouts. Others will be harder to detect, such as someone trying to write data to a protected directory. While it might seem that logging everything is the best approach, so that no important data is missed, most organizations would soon drown under the amount of data collected.
Log files can grow beyond the organization’s means to gather meaningful data from them if they are not managed properly. How can the organization ensure they are getting the data needed without overburdening the organization’s resources with excessive log events? The first thing to remember is start with the organization’s most critical resources. Then be selective in the amount of data received from each host. Finally, get the data somewhere easily accessible for analysis.
A clipping level is a predefined criterion or threshold that sets off an event entry. For example, a security operations center does not want to be notified of every failed login attempt because everyone mistypes their password occasionally. Thus, set the clipping level to create a log entry only after two failed password attempts. Clipping levels usually have a time property associated with them. So that the logging process does not have to keep track of every single failed password attempt on the off chance that the password is mistyped again at the next login, set the time limit to a reasonable amount, for example, 30 minutes. Now the system only has to keep track of an invalid login attempt on a particular account for 30 minutes. If another invalid attempt does not come in on that account, the system can disregard the first one. Clipping levels are great for reducing the amount of data accumulating in log files. Care must be taken to ensure important data is not skipped, and, like everything else, the clipping levels need to be documented and protected, because an attacker would gain an advantage by knowing them.
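The clipping-level behavior described above can be sketched as follows. The threshold of two failures and the 30-minute window are the example values from the text; the class name and interface are illustrative.

```python
from collections import defaultdict

class ClippingLevel:
    """Emit a log event only after `threshold` failures per account
    within a sliding `window` of seconds."""

    def __init__(self, threshold=2, window=30 * 60):
        self.threshold = threshold
        self.window = window
        self.failures = defaultdict(list)  # account -> failure timestamps

    def record_failure(self, account, timestamp):
        """Record a failed login; return True if an event should be logged."""
        attempts = self.failures[account]
        # Discard attempts older than the window - the system need not
        # remember a single stale mistype indefinitely.
        attempts[:] = [t for t in attempts if timestamp - t < self.window]
        attempts.append(timestamp)
        return len(attempts) >= self.threshold
```

A single mistype is silently tracked and forgotten after the window expires; only a second failure inside the window produces a log entry.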
Log filtering usually takes place after the log files have been written. Clipping reduces the amount of data in the log; filtering reduces the amount of data viewed. Filters come in extremely handy when trying to isolate a particular host, user account, or event type. For example, you could filter a log file to only look at invalid login attempts or for all entries within a certain time window. Care must be taken when filtering so that you do not filter out too much information and miss what you are looking for.
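Filtering after the fact can be as simple as selecting matching entries from already-written log lines. The sketch below assumes a hypothetical log format of “timestamp user event” with sortable ISO-style timestamps; real log formats vary widely.

```python
def filter_log(lines, event_type=None, user=None, start=None, end=None):
    """Yield log entries matching the given criteria.

    Each line is assumed to have the form "<timestamp> <user> <event>",
    where <timestamp> is a sortable string such as "2015-06-01T09:30:00".
    """
    for line in lines:
        timestamp, entry_user, event = line.split(" ", 2)
        if event_type is not None and event != event_type:
            continue
        if user is not None and entry_user != user:
            continue
        if start is not None and timestamp < start:
            continue
        if end is not None and timestamp > end:
            continue
        yield line

log = [
    "2015-06-01T09:30:00 alice LOGIN_OK",
    "2015-06-01T09:31:10 bob LOGIN_FAILED",
    "2015-06-01T09:32:45 bob LOGIN_FAILED",
    "2015-06-01T10:15:00 carol LOGIN_OK",
]

# Isolate invalid login attempts within a time window:
failed = list(filter_log(log, event_type="LOGIN_FAILED",
                         start="2015-06-01T09:00:00",
                         end="2015-06-01T10:00:00"))
```

Note that the underlying log is untouched; the filter only narrows what is viewed, which is exactly the distinction from clipping.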
Log consolidation gives one-stop shopping for log file analysis. Log file consolidation is usually done on a separate server from the one that actually generates the log files. These servers are called security information and event management (SIEM) systems. Most systems have the ability to forward log messages to another server; thus, an entry not only appears in the log file of the originating server but also appears in the consolidated log on the SIEM. Log consolidation is extremely useful when you are trying to track a user or event that reaches across multiple servers or devices. Log consolidation is discussed further below in the centralized versus distributed log management section.
Now that all the organization’s systems are being logged, and they are being reviewed for anomalies, how long is the organization required to keep the logs? How should they be storing the logs?
Logs are a critical part of any system. They provide insight into what a system is doing, as well as what happened. Virtually every process running on a system generates logs in some form or another. Usually these logs are written to files on local disks. When your system grows to multiple hosts, managing the logs and accessing them can get complicated. Searching for a particular error across hundreds of log files on hundreds of servers is difficult without good tools. A common approach to this problem is to set up a centralized logging solution so that multiple logs can be aggregated in a central location.
Another option that you probably already have installed is syslog. Most people use rsyslog or syslog-ng, which are two syslog implementations. These daemons allow processes to send log messages to them, and the syslog configuration determines how they are stored. In a centralized logging setup, a central syslog daemon is set up on your network, and the client logging daemons are set up to forward messages to the central daemon.
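Forwarding application log messages to a central syslog daemon can be sketched with Python’s standard logging.handlers.SysLogHandler. The collector hostname, port, and logger name below are placeholders for the organization’s central rsyslog or syslog-ng host.

```python
import logging
import logging.handlers

def make_forwarding_logger(collector_host, collector_port=514):
    """Build a logger that forwards records to a central syslog
    collector over UDP (the default syslog transport)."""
    logger = logging.getLogger("app")
    logger.setLevel(logging.INFO)
    handler = logging.handlers.SysLogHandler(
        address=(collector_host, collector_port))
    handler.setFormatter(logging.Formatter("myapp: %(levelname)s %(message)s"))
    logger.addHandler(handler)
    return logger

# Usage (placeholder collector address):
#   log = make_forwarding_logger("logs.example.internal")
#   log.warning("disk usage above threshold")
```

With every host configured this way, a search for a given error happens in one place on the central daemon instead of across hundreds of local files.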
All of these have their specific features and differences, but their architectures are fairly similar. They generally consist of logging clients and agents on each specific host. The agents forward logs to a cluster of collectors, which in turn forward the messages to a scalable storage tier. The idea is that the collection tier is horizontally scalable to grow with the increased number of logging hosts and messages. Similarly, the storage tier is also intended to scale horizontally to grow with increased volume. Some examples include:
There are also several hosted “logging as a service” providers. The benefit of these is that you only need to configure your syslog forwarders or agents, and they manage the collection, storage, and access to the logs. All of the infrastructure that you would otherwise have to set up and maintain is handled by them, freeing you up to focus on your application. Each service provides a simple setup (usually syslog-forwarding based), an API, and a UI to support search and analysis. Some examples include:
Both NetFlow and sFlow are used to provide visibility into an organization’s network. Typically, NetFlow and sFlow are used to:
According to Cisco, NetFlow is an embedded instrumentation within Cisco IOS Software to characterize network operation. NetFlow allows for the collection and monitoring of network traffic, which can then be analyzed to create a picture of the network traffic flow and its volume across the network. The NetFlow solution is implemented using a NetFlow collector, a centralized server used to gather NetFlow information from monitored systems and allow for analysis of the captured data.5
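What a NetFlow collector computes can be illustrated with a toy aggregation: packets sharing the same 5-tuple (source and destination address, source and destination port, and protocol) are rolled up into a single flow record with packet and byte counters. This is an illustrative model only, not the NetFlow v5/v9 export format, and the packet dictionaries are invented sample data.

```python
from collections import defaultdict

def aggregate_flows(packets):
    """Roll up packets into flow records keyed by the 5-tuple.

    Each packet is a dict with src, dst, sport, dport, proto, and size.
    Returns {5-tuple: {"packets": n, "bytes": total}}.
    """
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for pkt in packets:
        key = (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])
        flows[key]["packets"] += 1
        flows[key]["bytes"] += pkt["size"]
    return dict(flows)

packets = [
    {"src": "10.0.0.5", "dst": "10.0.0.20", "sport": 51000,
     "dport": 80, "proto": "tcp", "size": 1500},
    {"src": "10.0.0.5", "dst": "10.0.0.20", "sport": 51000,
     "dport": 80, "proto": "tcp", "size": 400},
    {"src": "10.0.0.9", "dst": "10.0.0.20", "sport": 51001,
     "dport": 443, "proto": "tcp", "size": 800},
]
flows = aggregate_flows(packets)
```

Flow records like these, rather than raw packets, are what give the collector its compact, analyzable picture of who talked to whom, over what, and how much.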
sFlow, short for “sampled flow,” is an industry standard for packet export at layer 2 of the OSI model. It provides a means for exporting truncated packets, together with interface counters. Maintenance of the protocol is performed by the sFlow.org consortium, the authoritative source of the sFlow protocol specifications.
sFlow is a technology for monitoring traffic in data networks containing switches and routers. It is described in RFC 3176 (https://www.ietf.org/rfc/rfc3176.txt).
The sFlow monitoring system consists of an sFlow Agent (embedded in a switch or router, or in a standalone probe) and a central data collector, or sFlow Analyzer. The sFlow Agent uses sampling technology to capture traffic statistics from the device it is monitoring. sFlow Datagrams are used to immediately forward the sampled traffic statistics to an sFlow Analyzer for analysis.
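The sampling idea can be modeled simply: rather than exporting every packet, the agent exports roughly one in N and scales the counters back up. The sketch below uses deterministic 1-in-N counting for clarity; real sFlow agents sample randomly to avoid bias, and the traffic data here is invented.

```python
def sample_packets(packets, rate):
    """Deterministic 1-in-`rate` sampling (real agents sample randomly)."""
    return [pkt for i, pkt in enumerate(packets) if i % rate == 0]

def estimate_total_bytes(sampled_sizes, rate):
    """Scale the sampled byte count back up by the sampling rate."""
    return sum(sampled_sizes) * rate

traffic = [1500] * 1000          # 1000 packets of 1500 bytes each
sampled = sample_packets(traffic, rate=100)
estimate = estimate_total_bytes(sampled, rate=100)
```

Sampling trades per-packet precision for very low overhead, which is why sFlow scales to high-speed switch and router interfaces where full capture would not.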
SIEM technology is used in many enterprise organizations to provide real-time reporting and long-term analysis of security events. SIEM products evolved from two previously distinct product categories, namely security information management (SIM) and security event management (SEM).
SIEM combines the essential functions of SIM and SEM products to provide a comprehensive view of the enterprise network using the following functions:
The SIEM market is evolving towards integration with business management tools, internal fraud detection, geographical user activity monitoring, content monitoring, and business critical application monitoring. SIEM systems are implemented for compliance reporting, enhanced analytics, forensic discovery, automated risk assessment, and threat mitigation.
Compliance with monitoring and reporting regulations is often a significant factor in the decision to deploy a SIEM system. Other policy requirements of a specific organization may also play a role. Examples of regulatory requirements include the Health Insurance Portability and Accountability Act (HIPAA) for healthcare providers in the United States, or the Payment Card Industry’s Data Security Standard (PCI-DSS) for organizations that handle payment card information. A SIEM can help the organization to comply with monitoring mandates and document their compliance to auditors.
Attacks against network assets are a daily occurrence for many organizations. Attack sources can be inside or outside the organization’s network. As the boundaries of the enterprise network expand, the role of network security infrastructure must expand as well. Typically, security operations staff deploys many security measures, such as firewalls, IDS sensors, IPS appliances, Web and email content protection, and network authentication services. All of these can generate significant amounts of information, which the security operations staff can use to identify threats that could harm network security and operations.
For example, when an employee’s laptop becomes infected by malware, it may discover other systems on the corporate network and then attempt to attack those systems, spreading the infection. Each system under attack can report the malicious activity to the SIEM. The SIEM system can correlate those individual reports into a single alert that identifies the original infected system and its attempted targets. In this case, the SIEM’s ability to collect and correlate logs of failed authentication attempts allows security operations personnel to determine which system is infected so that it can be isolated from the network until the malware is removed. In some cases, it may be possible to use information from a switch or other network access device to automatically disable the network connection until remediation takes place. The SIEM can provide this information in both real-time alerts and historical reports that summarize the security status of the enterprise network over a large data collection, typically on the order of months rather than days. This historical perspective enables the security administrators to establish a baseline for normal network operations, so they can focus their daily effort on any new or abnormal security events.
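The correlation step in the malware scenario above can be sketched as grouping per-host attack reports by source and alerting when a single source targets many distinct systems. The threshold and the field names are illustrative; commercial SIEMs apply far richer correlation rules.

```python
from collections import defaultdict

def correlate_reports(reports, target_threshold=3):
    """Correlate individual attack reports into per-source alerts.

    Each report is a dict with "src" (the apparent attacking host)
    and "target" (the system that observed and reported the attack).
    A source seen attacking `target_threshold` or more distinct
    systems is flagged as a likely infected host.
    """
    targets_by_src = defaultdict(set)
    for r in reports:
        targets_by_src[r["src"]].add(r["target"])
    return {
        src: sorted(targets)
        for src, targets in targets_by_src.items()
        if len(targets) >= target_threshold
    }

reports = [
    {"src": "10.0.0.5", "target": "10.0.0.20"},
    {"src": "10.0.0.5", "target": "10.0.0.21"},
    {"src": "10.0.0.5", "target": "10.0.0.22"},
    {"src": "10.0.0.9", "target": "10.0.0.20"},
]
alerts = correlate_reports(reports)
```

The single correlated alert (one infected laptop probing three servers) is far more actionable than four separate event logs, which is the core value a SIEM adds over raw log collection.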
Full packet capture devices typically sit at the ingress and egress points of an organization’s network. These devices capture every single information packet that flows across the border. These devices are indispensable tools for the security practitioner as they capture the raw packet data, which can be analyzed later should an incident occur. As full packet capture devices capture every packet, they require an organizational network architecture that consolidates traffic to as few points as possible for collection. Many organizations use VPN tunnels and mandatory routing configurations for hosts to ensure all traffic is sent through a full packet capture device.
Full packet capture devices can consume tremendous amounts of storage. As every packet that flows into or out of the organization is captured, the security practitioner must be mindful of the storage capacities needed to maintain a useful packet capture. Knowing a breach occurred three months ago and only having one month of packet capture is not going to be helpful.
Most full packet capture devices also come with the ability to do a packet dump, or extract a certain cross-section of the traffic as defined by criteria such as IP address, protocol, or time of day. This traffic can then be analyzed using specialized tools or even replayed in a simulated browser if it is Web traffic. Full packet capture is indispensable in collecting evidence of workplace misconduct and understanding an attack. Often signatures for IDS/IPS systems come from the analysis and understanding of packet dumps from full packet capture systems.
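The extraction criteria mentioned above (IP, protocol, time of day) can be illustrated with a small Python filter. This sketch operates on already-decoded packet summaries rather than raw captures; the records and field names are hypothetical, and a real device would read from its pcap store instead:

```python
from datetime import time

# Hypothetical decoded packet summaries standing in for a raw capture store.
packets = [
    {"ts": time(9, 15), "src": "10.0.0.5", "dst": "203.0.113.9", "proto": "TCP", "dport": 80},
    {"ts": time(2, 40), "src": "10.0.0.5", "dst": "198.51.100.7", "proto": "TCP", "dport": 443},
    {"ts": time(3, 5), "src": "10.0.0.8", "dst": "198.51.100.7", "proto": "UDP", "dport": 53},
]

def extract(packets, src=None, proto=None, start=None, end=None):
    """Return packets matching the given IP / protocol / time-of-day criteria."""
    out = []
    for p in packets:
        if src and p["src"] != src:
            continue
        if proto and p["proto"] != proto:
            continue
        if start and p["ts"] < start:
            continue
        if end and p["ts"] > end:
            continue
        out.append(p)
    return out

# Pull only overnight TCP traffic from a suspect host for later analysis.
suspect = extract(packets, src="10.0.0.5", proto="TCP", start=time(0, 0), end=time(6, 0))
print(len(suspect))  # 1
```

In practice the same cross-section would be expressed as a capture filter (for example, a BPF expression) handed to the capture device or to an analysis tool.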
While full packet capture systems are extremely useful, it is important to note what they do not do. Most, if not all, are not IDS or IPS systems in themselves. Additionally, they may not include the native ability to decrypt transmissions such as SSL and HTTPS communications. Therefore, the security practitioner must understand how full packet capture fits with other information security technologies.
Monitoring of information and system state is one of the most important activities that the security practitioner can engage in to ensure that the infrastructure under their care is optimized and secured in a proactive manner. Monitoring can be focused on many different systems, from applications, to operating systems, to middleware and infrastructure.
According to VMware, vRealize Hyperic monitors operating systems, middleware, and applications running in physical, virtual, and cloud environments. Hyperic provides proactive performance management with complete and constant visibility into applications and infrastructure. It produces more than 50,000 performance metrics on more than 75 technologies at every layer of the stack. At startup, Hyperic automatically discovers and adds new servers and VMs to inventory, configures monitoring parameters, and collects performance metrics and events. Hyperic helps reduce operations workload, increases a company's IT management maturity level, and drives improvements in availability and infrastructure health.6
Microsoft also provides complete management and monitoring capabilities with their System Center Suite of products. Operations Manager, a component of Microsoft System Center 2012/2012 R2, is software that helps the security practitioner monitor services, devices, and operations for many computers from a single console.
Using Operations Manager in the environment makes it easier to monitor multiple computers, devices, services, and applications. The Operations Manager console, shown in Figure 3-7, enables you to check the health, performance, and availability of all monitored objects in the environment and helps you identify and resolve problems.
Operations Manager will tell you which monitored objects are not healthy, send alerts when problems are identified, and provide information to help you identify the cause of a problem and possible solutions.7
Effective network security demands an integrated defense-in-depth approach. The first layer of a defense-in-depth approach is the enforcement of the fundamental elements of network security. These fundamental security elements form a security baseline, creating a strong foundation on which more advanced methods and techniques can subsequently be built.
A security baseline defines a set of basic security objectives that must be met by any given service or system. The objectives are chosen to be pragmatic and complete, and do not prescribe specific technical means. Therefore, details on how these security objectives are fulfilled by a particular service or system must be documented in a separate security implementation document. These details depend on the operational environment a service or system is deployed into and may therefore apply any relevant security measure. Deviations from the baseline are possible and expected but must be explicitly documented.
Developing and deploying a security baseline can, however, be challenging due to the vast range of features available. A network security baseline is designed to assist in this endeavor by outlining the key security elements that should be addressed in the first phase of implementing defense-in-depth. The main focus is to secure the network infrastructure itself, including the control and management planes, as well as critical network services. The baseline addresses the following key areas of security:
Unless these baseline security elements are addressed, additional security technologies and features are typically useless. For example, if a default access account and password are active on a network infrastructure device, it is not necessary to mount a sophisticated attack because attackers can simply log in to the device and perform whatever actions they choose.
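The default-credentials example above lends itself to a simple automated baseline check. The following sketch flags inventory entries still using well-known defaults; the device inventory, field names, and credential list are all hypothetical illustrations, not drawn from any real product:

```python
# Well-known default credential pairs (illustrative list only).
DEFAULT_CREDS = {("admin", "admin"), ("cisco", "cisco"), ("root", "toor")}

# Hypothetical device inventory; a real check would pull this from a
# configuration management database, never from plaintext files.
devices = [
    {"name": "edge-rtr-1", "user": "admin", "password": "S7rong!pass"},
    {"name": "lab-sw-3", "user": "admin", "password": "admin"},
]

def flag_default_creds(devices):
    """Return the names of devices whose credentials match a known default."""
    return [d["name"] for d in devices if (d["user"], d["password"]) in DEFAULT_CREDS]

print(flag_default_creds(devices))  # ['lab-sw-3']
```

Automating such checks turns a baseline document from a static statement of intent into something that can be verified continuously.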
The use of metrics and analysis (MA) is a sophisticated practice in security management that takes advantage of data to produce usable, objective information and insights that guide decisions.
Analytics is the discovery and communication of meaningful patterns in data. Especially valuable in areas rich with recorded information, analytics relies on the simultaneous application of statistics, computer programming, and operations research to quantify performance. Risk analytics involves reporting on metrics from various areas that point out which businesses, processes, or systems are most at risk and require immediate attention.
Through MA, a CSO or other security professional can better understand risks and losses, discern trends, and manage performance. Software designed specifically for the security field can make the gathering of security and risk-significant data orderly, convenient, and accurate while holding the data in a format that facilitates analysis. Security and risk-focused incident management software offers both the standardization and consolidation of data. Such software also automates the task of analysis through trending and predictive analysis and the generation of customized statistical reports.
Carnegie Mellon University created the Systems Security Engineering Capability Maturity Model (SSE-CMM), which provides valuable metrics for security practitioners. Security metrics are a quantifiable measurement of actions that organizations take to manage and reduce security risks such as information theft, damage to reputation, financial theft, and business discontinuity. Administrators and other stakeholders use metrics to address concerns such as:
For example, the use of metrics and analysis at the Massachusetts Port Authority (Massport) to solve the specific problem of security door alarms is illustrative. Massport was able to greatly reduce such alarms through the analysis of alarm metrics. That analysis helped security management determine the cause of each type of alarm and develop solutions to eliminate or reduce them. Analysis of detailed door transaction data, including video, revealed the causes of the alarms, and that understanding led to a variety of corrective actions, including maintenance and user training.
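The first analytical step in a case like Massport's is simply tallying alarms by cause so the dominant one stands out. A minimal Python sketch, using a hypothetical door-alarm log (real data would come from the physical access control system):

```python
from collections import Counter

# Hypothetical door-alarm transaction records.
alarms = [
    {"door": "D12", "cause": "door_held_open"},
    {"door": "D12", "cause": "door_held_open"},
    {"door": "D03", "cause": "badge_misread"},
    {"door": "D12", "cause": "door_held_open"},
    {"door": "D07", "cause": "forced_entry"},
]

# Tally alarms by cause, most frequent first.
cause_counts = Counter(a["cause"] for a in alarms)
for cause, n in cause_counts.most_common():
    print(f"{cause}: {n}")
# The dominant cause (here, doors held open) points to the corrective action
# with the greatest payoff, such as user training or door maintenance.
```

Even this trivial aggregation illustrates the MA principle: raw event volume becomes actionable once it is reduced to a ranked list of causes.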
The security practitioner can use MA not only to identify security problems but also to gauge the effectiveness of the various security measures used to counteract those problems.
Data visualization is a general term that describes any effort to help people understand the significance of data by placing it in a visual context. Patterns, trends, and correlations that might go undetected in text-based data can be exposed and recognized more easily with data visualization software. Data visualization tools go beyond the standard charts and graphs used in Excel spreadsheets, displaying data in more sophisticated ways such as infographics, dials and gauges, geographic maps, sparklines, heat maps, and detailed bar, pie, and fever charts. The images may include interactive capabilities, enabling users to manipulate them or drill into the data for querying and analysis. Indicators designed to alert users when data has been updated or predefined conditions occur can also be included. Some examples of data visualization tools include:
According to the NIST Guide to Computer Security Log Management (SP 800-92), a log is a record of the events that occur in a system. The challenge that security practitioners face today is the increasing complexity of the systems that we are asked to manage, and as a result of that complexity, the increasing volumes of information generated by these systems. In order to be able to make sense of the day-to-day, moment-to-moment operation of systems, the security practitioner needs to be able to balance the continuous flow of information being gathered in logs with the ability to monitor and manage. Implementing the following NIST security log management recommendations should assist the security practitioner in facilitating more efficient and effective log management for the organization:
Findings are most effectively communicated in reports embracing five key elements:
This exercise will take the reader through the process of performing a basic vulnerability scan against a known vulnerable system using freely available tools. As part of risk identification, the security practitioner should be versed in using vulnerability detection and scanning tools. This exercise assumes the security practitioner has experience with virtual environments and basic networking.
The security practitioner should never conduct testing on any system without authorization and without assurance that the testing will not cause damage or undesired impacts. Therefore, the security professional should be familiar with the operation of virtual test environments.
This exercise relies on virtual machines. VirtualBox will be the virtual environment used throughout this exercise. VirtualBox is freely available on the Internet at https://www.virtualbox.org/wiki/Downloads.
To install VirtualBox on a chosen system, ensure the downloaded software matches the target system’s specifications and follow the directions for the specific platform found at https://www.virtualbox.org/manual/ch01.html#intro-installing.
Once VirtualBox is installed, two virtual machines will be downloaded for this exercise. The first is Kali Linux. Kali Linux is a staple in the security practitioner’s toolkit and comes preloaded with numerous vulnerability scanning and exploitation tools.
Download the Kali Virtual (i486) machine from http://www.offensive-security.com/kali-linux-vmware-arm-image-download/.
The root password for the images is toor.
Next, download the Metasploitable virtual machine available at https://information.rapid7.com/metasploitable-download.html?LS=1631875&CS=web.
The provider does request some marketing information.
The default login/password is msfadmin:msfadmin.
Alternatively, to create a new Metasploitable VM, download the image from http://sourceforge.net/projects/metasploitable/files/Metasploitable2/ and build it.
Both virtual machines will need to be unpacked since they have been zipped. One of the files is packed using 7-Zip. 7-Zip can unzip both files and can be downloaded from http://www.7-zip.org/.
The goal in creating the virtual environment is to ensure all malicious activity is contained within the virtual environment. When creating this environment, it is vitally important that the networks for all virtual machines be configured for a private subnet using the “host-only” option found in the network settings of VirtualBox for each virtual machine. Next, the downloaded virtual machines need to be loaded.
Follow these steps to configure Metasploitable:
Metasploitable.vmdk file, as shown in Figure 3-10.
Follow these steps to configure Kali Linux:
Now that both virtual machines are created, they both need to be started.
Log in to Metasploitable using the default credentials msfadmin/msfadmin.
The IP address may differ from the one noted in Figure 3-18. Take note of the IP address listed; this is the target’s IP address.
Follow these steps to launch a scan:
Log in with root as the username and toor as the password. Note that Zenmap automatically generates the command that a user would type to launch the scan from a terminal. Select Intense Scan from the profile drop-down menu. The types of scans and their meanings can be found at http://www.securesolutions.no/zenmap-preset-scans/.
The selected intense scan probes the most common TCP ports and attempts to identify the operating system and services of the target.
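The port-probing portion of such a scan can be sketched in a few lines of standard-library Python. This is a simplified illustration of a TCP connect scan, not a replacement for Nmap, and like any scan it should only be run against systems you are authorized to test:

```python
import socket

def tcp_connect_scan(host, ports, timeout=0.5):
    """Minimal TCP connect scan: a completed handshake means the port is open.
    Only scan hosts you are authorized to test."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success, an errno value otherwise.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example against the local host; inside the isolated lab, substitute the
# Metasploitable target's IP address noted earlier.
print(tcp_connect_scan("127.0.0.1", [22, 80, 65000]))
```

A full connect scan like this is noisy and easily logged by the target; Nmap's other scan types (SYN, FIN, and so on) exist precisely to trade off speed, stealth, and reliability.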
Once the scan is complete, review Nmap Output and note that the scan discovered the following open ports:
Just because a port is listed as open doesn’t mean it is vulnerable. The next step is to determine what services may be available on each port. Scrolling down further through the results will yield information about services and basic information about vulnerable configurations. See Figure 3-21.
Notice how Zenmap located a service on port 21 and identified it as an FTP service. Zenmap then attempted to log in with an anonymous account and was able to do so. Zenmap also identified the BIND version the target is running as 9.4.2. There are other findings as well, but for the sake of this exercise, the FTP and BIND services are of interest.
Having a vulnerability on a system does not automatically equate to risk. If there is no sensitive information on the system and the system is used only for testing, the vulnerability poses little risk because a compromise would have little impact. For the sake of this exercise, assume the target system holds sensitive information and is vital to the organization.
Risk must always be determined through the dimensions of impact, vulnerability, and threat. The scanner has provided vulnerability information. The last step is to determine the threat information. Threat information is sourced from a variety of services and systems. For the sake of this exercise, assume there is a highly motivated and capable threat.
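Combining the three dimensions can be illustrated with a simple qualitative scoring sketch. The scales and multiplicative combination here are illustrative only, not drawn from any particular standard or from the exercise itself:

```python
# Illustrative qualitative levels; real programs often use finer scales.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_score(threat, vulnerability, impact):
    """Risk rises with all three dimensions; any 'low' keeps the product small."""
    return LEVELS[threat] * LEVELS[vulnerability] * LEVELS[impact]

# The lab scenario: a capable threat, confirmed vulnerabilities, and a system
# assumed to be vital to the organization.
score = risk_score("high", "high", "high")
print(score)  # 27, the top of this 1-27 scale
```

The multiplicative form captures the intuition stated above: a confirmed vulnerability with no threat, or a capable threat against a system with no impact, yields a low product and a correspondingly low priority.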
The question the security professional must answer is “How would a threat exploit the vulnerabilities I’ve discovered to impact the organization?” In this scenario, the anonymous FTP access could allow access to sensitive information on the server if not properly secured and logically separated from the FTP server. Additionally, when researching the BIND service, the security professional discovers CVE-2014-3859 (http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-3859), which states this version of BIND used may be vulnerable to denial-of-service attacks. This information is combined with organizational risk information to determine the best mitigation approach.
The virtual setup provided here can be used for numerous tests of attacking a vulnerable system. Several tutorials are available online for additional research and training.
An information security practitioner must have a thorough understanding of fundamental risk, response, and recovery concepts. By understanding each of these concepts, the security practitioner will have the knowledge required to protect their organization and professionally execute their job responsibilities.