IT risk management is central to the practice of IT security in an organization's operations. All organizations depend on information technology and the information systems developed from that technology to successfully carry out their missions and business functions. Increasingly, portable devices such as personal digital assistants, tablets, laptops, and cell phones are being integrated into the IT systems of the organization.
Threats challenge the organization from many different directions, and many require continuous, ongoing threat identification and monitoring processes to determine the application of controls. Organizational IT systems are subject to serious threats and attacks that could have an adverse impact on the availability and integrity of the organization's hardware systems, information storage, and telecommunications.
Managing security risks to information systems is a complex, multifaceted undertaking. Responsibilities necessary to meet the goals and objectives of risk management exist at every level of the organization. Senior leaders and management provide corporate policies and governance, organizational managers provide planning and implementation to meet the requirements of these policies, and individuals such as security practitioners are charged with ensuring that the controls designed to reduce risk meet operational baselines efficiently and effectively. The Systems Security Certified Practitioner must be familiar with the enterprise's policies, as well as the standards, procedures, and guidelines that ensure due-care protection of its information and hardware assets. Through the use of risk analysis, assets and their vulnerabilities are identified, as well as potential threats that may exploit those vulnerabilities. A variety of tools and reporting techniques are used to reduce the impact of threats to the organization.
The Systems Security Certified Practitioner will be involved at many levels of risk identification, threat assessment, intrusion discovery, and eventual remediation and restoration activities. It is important to have a thorough understanding of the risks, vulnerabilities, and threats that face an organization's IT infrastructure. These will be processes and projects to which Systems Security Certified Practitioners contribute both knowledge and skills.
Risks may take many forms in an organization, including IT risk, financial risk, operational risk, commercial market risk, operational security risk, and personal exposure to risk. A senior management team of any organization is continuously burdened with the identification and mitigation of risks in any category that may result in damage or harm to assets of the organization. The most common risk management approach used by organizations is to identify, assess, manage, and control potential events or situations. Various processes are used to effectively perform risk management. Information risk management is the process of acknowledging and identifying risks, mitigating risks by the reduction of threats or vulnerabilities through the use of controls, and implementing strategies to maintain an acceptable risk level.
Risks are categorized depending upon the type of risk or the aspect of the business they affect. The top executives must consider every risk to an organization.
The definition of risk is stated as the probability or likelihood that a certain event or incident may occur that will have an adverse impact on an organization and the achievement of its mission or objectives. Since risk has been defined as a probability of something occurring, it may be represented by a mathematical function.
When information technology risk is defined, we are basically speaking of the unauthorized use, disruption, modification, exfiltration, copying, inspection, or destruction of information or of information processing hardware or software assets. Company managers use various tools such as risk analysis and risk assessments to analyze the potential risks that may impact organizational assets. There is no such thing as a risk-free environment, and there is no way to reduce risk to zero. The tools, devices, and techniques IT managers and security practitioners use to reduce the level of risk are referred to as controls. Residual risk is any risk remaining after the implementation of safeguards and controls. For instance, an asset valued at $100,000 may have a $100,000 insurance policy placed upon it. But that $100,000 insurance policy may have a $5,000 deductible, which is the amount the organization pays in the event of a loss. Obviously, with a total loss, the insurance company would reimburse the organization $95,000. The residual risk would be the $5,000 deductible.
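The residual-risk arithmetic above can be sketched in a few lines of code; this is a minimal illustration using the chapter's example figures, not a general valuation method:

```python
def residual_risk(asset_value: float, reimbursement: float) -> float:
    """Residual risk: the loss remaining after a safeguard (here, insurance) pays out."""
    return asset_value - reimbursement

# The chapter's example: a $100,000 asset insured with a $5,000 deductible.
asset_value = 100_000
deductible = 5_000
reimbursement = asset_value - deductible   # insurer pays $95,000 on a total loss

print(residual_risk(asset_value, reimbursement))  # -> 5000
```

The safeguard (the policy) transfers most of the risk, but the deductible remains as residual risk no matter how complete the coverage appears.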
The risk management process is simply a business function that involves identifying, evaluating, and controlling risks (Figure 5.1). As you have seen, risks are prevalent throughout the organization. For example, a commercial market risk may occur when a competitor may lower its prices below what it costs another company to manufacture the same product. A major supplier may have a fire, thus eliminating a supply of raw material to the organization. A hurricane might cause the manufacturing facility to be closed down for an extended period of time. Each of these represents risks to the organization and must be addressed through the risk management process. For instance, in the example of the major supplier experiencing a fire, management would have recognized that nondelivery of production materials would pose a threat to the organization. During a risk analysis process, threats that might affect a major supplier would be identified. In the event such threats came to fruition, causing the supplier to not be able to supply raw materials, plans would be in place to purchase the raw materials through other suppliers. Although this scenario is simplistic in nature, it illustrates the process behind risk management.
The loss of information processing hardware, software, and information is a serious risk to any organization. Information risk management (IRM) is the process whereby risks to IT hardware, software, and information assets are identified, threats and vulnerabilities are reduced to an acceptable level, and controls are implemented to maintain that level. At the heart of information risk management are decisions concerning priorities and monetary resources. The question for senior management is what assets to protect and how much to budget to protect them.
Several categories of risks to IT hardware, software, and information assets must be considered. The following are some of the categories that represent threat sources to an IT organization:
The determination of risk requires the knowledge of various attributes that exist in every scenario where there is risk, as detailed in the following sections.
Assets are resources that an organization uses to fulfill its mission or business objectives. Resource in this context may be very broad and refer to any asset including time, people, money and physical items. When defining risk, we are considering the possibility of loss of a resource. Assets and resources always represent value to the organization. Resources may be grouped into two major categories: tangible assets and intangible assets.
Assets may be ranked or prioritized within an organization. During asset analysis, a variety of scores will be utilized to rank assets. Of course, the people within an organization are the highest-value asset and must be protected at all costs. The second-highest ranking belongs to critical assets: if critical assets are lost or compromised, the viability of the organization is jeopardized. While hardware items, facilities, and products available from third-party vendors are ranked lower, critical assets in any organization generally include information assets such as customer/vendor data, trade secrets, proprietary information, intellectual property, and information generated by or critical to the ongoing operation of the organization.
Threats are defined as any incident or event that has the potential to harm an organization. Harm to an organization's IT infrastructure may be in the form of unauthorized access, data manipulation or change, denial of access, destruction of assets, and unauthorized information access and release. Threats to an organization are caused by a number of different threat actions. Common sources of threats are natural, human, or environmental. While it is easy to assume that a hurricane is a threat, it is in fact just a threat source. What this means, in essence, is that the hurricane is the actor that causes the threats. The actual threat caused by the hurricane is referred to as the threat action or threat agent. A threat vector is the path that a threat takes to cause harm.
While threat sources were identified in the section “Risk Management Process” earlier in this chapter, the actual threats that they pose may be individually identified. For example, a hurricane is a threat source, which means that a variety of threats to the organization may be posed during a hurricane. Each of these threats will follow a specific threat vector to harm the organization:
So while we may look at a number of different threat actions as being potential threats, in fact many of the threats are the same; they are just caused by different threat actions. For example, a fire, a flood, a wind storm, or a catastrophic mud slide may all cause the destruction of physical assets, such as computer hardware, networking, and infrastructure. As a security professional, you must plan for the resultant action of a threat regardless of the actual cause. The reality is that if the power goes out, the appropriate response must be determined regardless of why the power went out. An event triage is a process during which damage is assessed and restoration priorities are determined. During an event triage, you may predict that the power will be out for two hours, for example, or for four days; your prediction will influence your decisions for the proper recovery response. In any event, the power is out regardless of whether it was caused by a hurricane or forest fire.
The actions of humans pose the greatest risk to any organization. While hurricanes, tornadoes, and forest fires may be spectacular during the event and create great harm to the organization, the frequency and complexity of threats created by people far outweigh any other threat to the organization. Threats posed by the human element may be grouped into two categories:
Vulnerability is any flaw or weakness that may be attacked or exploited by a threat. A vulnerability may be characterized by either an intentional flaw or weakness placed by an individual or an unintentional flaw such as a mistake in manufacturing. For example, a programmer might intentionally create a back door in a software application. If not detected during a code review, the back door could provide access to the programmer or other people with malicious intent.
Any information system, application, controls, or information assets could be exploited by a threat or a threat agent. For instance, an unpatched or unsupported application or operating system on a host machine or server could create a security vulnerability. From a corporate risk perspective, the security of the entire IT security environment must take into consideration the physical aspects of the building and structure, HVAC, fire suppression, physical lockdown, and access restrictions as well as vulnerabilities to the electrical power, communications, and network infrastructure.
Vulnerabilities may be grouped into two different categories:
Controls are the mechanisms utilized during the risk management process to reduce the ability of a threat to exploit a vulnerability, which would result in harm to the organization. Controls may also be used to reduce the level of a vulnerability. For example, a control that might be used to keep a thief (threat) from exploiting the door (vulnerability) on a backyard work shed might be a heavy lock (control) on the shed door. This effort would reduce or mitigate the vulnerability of the door of the shed. The methods used to reduce or mitigate a threat, on the other hand, might include a heavy lock on the backyard gate, motion-sensing lighting in the backyard, and a guard dog. Therefore, by reducing a vulnerability and reducing the threat, the result is reduction of the overall risk.
There are three categories of controls:
The terms safeguards and controls are used interchangeably and describe any device, procedure, or action that provides a degree of protection to an asset. Controls are a general category of procedures, mechanisms, and techniques that make up the layered defense model, while safeguards are generally described in terms of preventive activities or devices put in place as a result of a risk analysis. A safeguard is generally the use of a control mechanism of some type; the concept of defense in depth provides for the use of multiple controls in a series. Second- or third-tier controls are sometimes referred to as safeguards because they may mitigate a threat that manages to pass through the primary control mechanism.
Every control can have a built-in or inherent weakness. A compensating control is a device, procedure, or mechanism that addresses the inherent weakness of the primary control. In most cases, a compensating control addresses conditions or situations that the primary control misses.
During a risk analysis, various threats as well as vulnerabilities are identified. Countermeasures are specific activities, procedures, or devices put in place to mitigate a risk or vulnerability identified during the risk analysis process. Examples of countermeasures include:
Exposure is defined as the estimated percentage of loss should a specific threat exploit the vulnerability of an asset. For example, if a server is valued at $10,000 and each time it is attacked $5,000 is required to rebuild the system, then the exposure factor is 50 percent. Exposure is always expressed as a percentage that when multiplied by the asset value results in the amount of loss during each attack. For instance, the example scenario above restated as a mathematical equation would appear as: $10,000 × .50 = $5,000
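The exposure factor calculation above can be expressed directly in code; a minimal sketch using the chapter's server example:

```python
def exposure_factor(loss_per_event: float, asset_value: float) -> float:
    """Exposure factor (EF): the estimated fraction of an asset's value
    lost each time a specific threat exploits its vulnerability."""
    return loss_per_event / asset_value

# The chapter's example: a $10,000 server that costs $5,000 to rebuild per attack.
ef = exposure_factor(5_000, 10_000)
print(ef)            # -> 0.5, i.e., a 50 percent exposure factor
print(10_000 * ef)   # -> 5000.0, the loss expected from each attack
```

Multiplying the asset value by the EF recovers the per-event loss, which is exactly the relationship used later in the single loss expectancy (SLE) calculation.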
The risk analysis process is an analytical method of identifying both threats and asset vulnerabilities and determining the likelihood and impact should the threat event occur and exploit the identified vulnerability.
Understanding risk impact is an important factor in risk analysis and risk assessment programs. The amount of impact or damage a threat may cause to an asset may determine the risk level to be dealt with. It may be determined that although a risk exists, the value or importance of the asset is such that it may not be worth protecting or that the cost to protect it outweighs the value.
Risk management requires the knowledge of best practices, industry standards, and a structured risk analysis method whereby assets and threats may be classified to determine the efforts required to reduce the risk to the organization. Through the years a structured series of standards and methodology has evolved in the form of frameworks.
Frameworks have originated through interaction with industry groups, consultants, and a variety of working committees managed by standards organizations. Three frameworks have become widely utilized throughout IT security and are discussed in the following sections.
The ISO/IEC 27000 series is a group of standards that offers guidance to IT security management organizations. The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) have published an extensive series of best practices, recommendations, and guidelines. The ISO/IEC refers to its family of standards as an information security management system (ISMS). The most popular of the standards in the series is ISO/IEC 27002, which is a security code of practice and guidelines for IT security management. ISO/IEC 27005 offers a framework based upon a broad scope of various factors within the organization. This type of framework allows each organization to address the risks based upon their own ISMS. This framework covers such areas as the following:
While the ISO/IEC 27000 series has its foundations in establishing a structure of practices within an organization and then certifying and accrediting the organization for meeting established benchmarks, NIST Special Publication 800-37 Revision 1, “Guide for Applying the Risk Management Framework to Federal Information Systems,” was created to guide IT organizations within the U.S. federal government with a more practical approach to risk management. The National Institute of Standards and Technology (NIST) originated as the National Bureau of Standards and is a nonregulatory agency of the U.S. Department of Commerce. NIST is responsible for developing information security standards and guidelines, including minimum requirements for federal information systems. Although the NIST bulletins and publications originated as guidelines for federal agencies, they are widely applied throughout private and public businesses and organizations.
There are a variety of terms associated with NIST publications:
The risk management framework (RMF) that is detailed in NIST SP 800-37 Revision 1 offers a six-step process for implementing information security and risk management activities into a cohesive system development life cycle. The risk management methodology described by NIST and this Special Publication is composed of the following steps:
Simply put, a risk management framework is a continuous methodology of categorizing the system based upon a number of criteria and then implementing and monitoring various risk mitigation controls (Figure 5.2). This is referred to as a system security life cycle approach.
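The six steps named in SP 800-37 Revision 1 (Categorize, Select, Implement, Assess, Authorize, Monitor) can be outlined as a simple life cycle loop. This sketch is only an illustrative enumeration of the steps, not an implementation of the framework itself:

```python
# The six RMF steps from NIST SP 800-37 Revision 1, in order.
RMF_STEPS = [
    "Categorize the information system",
    "Select security controls",
    "Implement security controls",
    "Assess security controls",
    "Authorize the information system",
    "Monitor security controls",
]

def rmf_cycle(system: str) -> None:
    """Walk a system through one pass of the RMF life cycle.
    In practice, step 6 (Monitor) feeds back into step 1 continuously,
    which is why the text calls this a system security life cycle approach."""
    for number, step in enumerate(RMF_STEPS, start=1):
        print(f"Step {number} for {system}: {step}")

rmf_cycle("payroll server")
```

The loop structure mirrors Figure 5.2: the process does not terminate after authorization; monitoring results drive recategorization and control reselection on the next pass.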
The categorization of the system can be quite extensive, requiring the determination of the system architecture.
NIST Special Publication 800-39, “Managing Information Security Risk: Organization, Mission, and Information System View” is a NIST document that concerns security risk in an IT environment. It was authored by the Information Technology Laboratory (ITL) at the National Institute of Standards and Technology, which provides technical leadership to the nation's measurement and standards infrastructure. ITL should not be confused with ITIL, the Information Technology Infrastructure Library, a multivolume series of best practices for the IT services industry. ITL provides research, guidelines, and outreach efforts in information systems security for industry, government, and academic organizations.
NIST Special Publication 800-39 contains these major topics:
This publication offers a very good starting point to understanding risk and IT security.
Risk analysis and risk assessment define various methods for understanding risk within an organization. To be effective, risk management should be a culture and process that is fully integrated throughout the organization: determining asset value, deciding upon risk mitigation controls, and eventually monitoring and assessing the value of those controls in the everyday environment.
There are two processes that an organization uses when considering risk. A risk analysis is an analytical approach usually employing facts and costs, while a risk assessment provides a more detailed approach utilizing a much broader scope of information, such as impact analysis and input from subject matter experts.
Risk analysis is the method by which we identify and analyze risk. Risk analysis is performed by identifying potential threats and vulnerabilities to arrive at a risk determination. The application of numerical analytical techniques is employed to determine asset value and thereby arrive at the cost of controls required to reduce or mitigate the overall risk associated with the asset. There are two general methodologies used to analyze risk: quantitative and qualitative.
The quantitative risk analysis process involves accumulating various facts and figures about an asset. These facts and figures may include information such as original cost, total cost of ownership, replacement cost, or other monetary amounts. The organization then utilizes this information to determine the total cost that should be budgeted for controls that are used to reduce risk to an acceptable level. Quantitative risk analysis is sometimes referred to as formal risk modeling. Here are some examples of establishing asset value:
The example figures cited here may have been determined based upon the cost to replace the items, the salaries of technical individuals, and possibly an estimated cost of lost revenue due to downtime, among other estimated costs.
The following costs or activities should be considered when values are assigned to assets:
Asset valuation, determined by various analytical methodologies, can be used to determine the impact to the organization of the loss of an asset. Several variables can be used to express the impact of a loss as a cost to the organization. As might be imagined, not every attack results in a 100 percent loss, and not every attack is successful. Therefore, a quantitative risk analysis might consider, for example, losing 60 percent of an asset, or a successful attack occurring only two times a year.
The ultimate question we are seeking an answer to is how much we should spend on a countermeasure. The following are the calculations that will assist us in answering this question:
SLE is the cost (in dollars) that can be lost if a risk event happens. A single loss is of course a one-time loss. But, as mentioned, not every event results in a total loss, so a factor must be used in the equation to represent the amount of loss we expect. So, if AV = asset value and EF = exposure factor, the equation would appear as follows:

SLE = AV × EF

Or

Single loss expectancy = asset value × exposure factor

In this equation, if an asset was worth $10,000 and we expect it to lose only half of its value with any given risk event, the equation would be as follows:

SLE = $10,000 × .50 = $5,000
The ALE is the total cost (in dollars) for all of the SLEs occurring during the year. The number of times we anticipate the risk event will happen during the year is represented by the annualized rate of occurrence (ARO). For instance, if an event happened once a year, ARO = 1; if an event happened twice a year, ARO = 2. So, the equation would appear as follows:

ALE = SLE × ARO

Or

Annualized loss expectancy = single loss expectancy × annualized rate of occurrence

In this equation, if the SLE is $5,000 and we expect two risk events to happen in a year, the equation would be as follows:

ALE = $5,000 × 2 = $10,000
The annualized rate of occurrence, or ARO, expresses the expected frequency of a risk event per year, where 1.0 (100 percent) represents one annual occurrence. If we expect two annual occurrences, the ARO equals 2.0, and if we expect an event to happen every two years, the ARO equals .5.
Normally, the ARO is derived from historical or empirical knowledge. For instance, the frequency of hurricanes or floods in a geographic area is recorded in almanacs and by government weather services. On occasion, the ARO may be derived based only on past personal experience. For instance, if a hacker was able to bypass the firewall four times during the last year, this would result in an ARO of 4.0.
An exposure factor of 1.0 (EF = 1.0) would represent a catastrophic or complete 100 percent loss of the asset.

An EF of .50 would represent a 50 percent loss of the asset.
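The SLE and ALE relationships above translate directly into code. This is a minimal sketch using the chapter's figures (a $10,000 asset, a 50 percent exposure factor, and two events per year):

```python
def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE = AV x EF: the dollars lost in a single risk event."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """ALE = SLE x ARO: the expected dollars lost per year,
    where ARO is the annualized rate of occurrence (2.0 = twice a year)."""
    return sle * aro

sle = single_loss_expectancy(10_000, 0.50)   # -> 5000.0
ale = annualized_loss_expectancy(sle, 2.0)   # -> 10000.0
print(sle, ale)
```

The ALE is the figure most useful to management: it answers the question of how much a countermeasure is worth, since spending more per year on a control than the ALE it prevents is rarely justified.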
Qualitative risk analysis is based upon the intuition and individual knowledge of the organization's subject matter experts, as opposed to the facts and figures of quantitative risk analysis. Qualitative analysis is a subjective valuation system in which asset value is determined based on factors other than accounting costs. During qualitative analysis, variables are considered such as customer flight, the cost to rebuild goodwill, estimated loss of potential revenue, bad publicity or press, and the loss of staff members' ability to maintain productivity.
In qualitative risk analysis, dollar figures may be difficult to assign because the results are subjective. For instance, it is difficult to place an exact dollar figure on damage to goodwill. In many cases, the results of qualitative analysis are expressed in terms of high, medium, and low or on a scale from 0 to 5.
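Qualitative ratings are often combined in a simple likelihood-by-impact matrix. This sketch assumes a three-level scale and a take-the-higher-rating convention; both are illustrative choices, not a prescribed standard:

```python
# Illustrative three-level qualitative scale, lowest to highest.
RATINGS = ["low", "medium", "high"]

def qualitative_risk(likelihood: str, impact: str) -> str:
    """Combine two qualitative ratings by taking the higher of the two,
    a common (but not universal) convention for simple risk matrices."""
    score = max(RATINGS.index(likelihood), RATINGS.index(impact))
    return RATINGS[score]

print(qualitative_risk("low", "high"))     # -> high
print(qualitative_risk("medium", "low"))   # -> medium
```

Even without dollar figures, such a matrix lets subject matter experts rank risks consistently so that mitigation effort flows to the highest-rated items first.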
On many occasions, it will be the responsibility of the security practitioner to participate in the risk analysis process. Information may be gathered under different subject information categories that will be used during the analysis process.
This information is required for accurate calculations during risk analysis. Much of it comes from accounting data sources and historical data, while other information might be available from subject matter experts (SMEs). SMEs such as database administrators, network managers and administrators, and technical staff may supply knowledge concerning the performance of an asset. Another source of data used in risk calculations may be supplied by department managers, executives, or general users based upon recollections of loss occurrences.
The following techniques can be used in gathering information relevant to an asset within its operational boundary or knowledge area.
Risk assessments are the primary method used during effective risk management to implement risk reduction strategies. Because risk management is a continuous process throughout the system development life cycle (SDLC), risk assessment can be used to identify, assess, and classify threats against an asset and determine the optimal mitigation technique or control to reduce risk. During this process, many operating parameters may be monitored and adjusted, with appropriate reports generated to facilitate management decision making.
Risk assessments build on risk analysis and incorporate the identification of specific risks, the likelihood of occurrence, the impact, and recommendations for controls. Even in small organizations, risk analysis and risk assessments require a substantial commitment of funds, personnel, and time. As might be expected, large organizations require teams to continuously perform risk assessments. In any organization, cost, time, and ease of use are primary concerns. Therefore, a proven framework may be adopted to provide efficient risk assessments.
NIST Special Publication 800-30 Revision 1, “Guide for Conducting Risk Assessments,” offers guidance for conducting risk assessments of federal information systems and organizations. This publication has been adopted by a large number of organizations worldwide and forms the foundation of their risk management strategies.
The Special Publication outlines the risk assessment component of risk management and provides a step-by-step process on how to prepare for risk assessments, how to conduct risk assessments, how to communicate risk assessments to key organizational personnel, and how to maintain the risk assessments over time.
In the original version of NIST Special Publication 800-30, which is now retired, a nine-step risk assessment process was outlined in detail. It was designed to be similar to a project plan, using typical project management concepts, and each of the nine steps included an input, a process method, and an output. Although effective and comprehensive, it proved to be very bulky and detailed when applied to hundreds of assets. There was a requirement for simplification featuring both speed and adaptability.
In the most recent version, NIST Special Publication 800-30 Revision 1, the risk assessment process has been distilled down to four primary steps. All of the original nine steps are still included in the process. Figure 5.3 illustrates the four basic steps in the risk assessment process and the specific tasks for conducting the assessment.
As part of the overall risk management plan, organizations analyze and assess risks. During this process, assets are identified, potential threats are determined, and vulnerabilities are evaluated to determine the probability that a threat will exploit a vulnerability and therefore cause harm to the organization.
The next step in the process of managing risks is to formulate a risk treatment plan. A treatment, by definition, is a strategy by which risks are reduced by mitigating threats and reducing the likelihood that a vulnerability may be exploited. The treatment plan is a list of procedures, devices, controls, or steps that may be taken in an effort to reduce risk or minimize the impact once a threat event has taken place.
The treatment plan details how the organization plans to respond to potential risks. It outlines how risks are managed regardless of whether they are low, high, or acceptable risks and outlines preferred strategies for dealing with identified risks. Sometimes treatment plans are referred to as risk assessment plans, but in actuality, the treatment plan is the result of an assessment plan. The primary purpose of the risk treatment plan is to determine precisely who is responsible for the implementation of controls in what time frame and with what budget. It may also detail the response to an event or incident.
Once an organization identifies risks, it must choose from among several different strategies to deal with the risk. Risk treatment involves identifying a range of options for mitigating techniques that may be used to reduce risk. Risk treatment also refers to the overall process of prioritizing risks, evaluating various mitigation options by weighing their benefits and costs, preparing and implementing a risk reduction plan, and then monitoring the mitigation process.
The following risk response options are available for the treatment of risks:
Acceptance
The effect of risk may be addressed by reducing the possibility that a threat will exploit a vulnerability prior to the event being triggered, or it may be reduced through incident response activities that limit continuing damage after an event is triggered. Prior to an event, appropriate controls may be used to reduce the likelihood of an event being triggered. After an event, the appropriate actions taken by a damage control team or an incident response process can limit damage caused by the event.
The risk treatment schedule documents the plan for implementing preferred risk mitigation strategies for dealing with identified risks. A risk treatment schedule is an output of the risk assessment phase. Risks have been identified, and various controls have been selected to reduce risk. The risk treatment schedule is a listing of risks in order of priorities. Figure 5.4 illustrates a typical risk treatment schedule.
Although each risk treatment plan can be customized specifically for an organization, at a minimum the plan should include the following sections:
Although the risk treatment schedule takes the form of a spreadsheet template, many of its fields are brief descriptions. The risk treatment plan should include full documentation supporting the identification of a risk during a risk assessment activity, an impact analysis, control selection criteria, control baselines and standards, and an incident response document.
A risk register is a primary document used to maintain a record of risks. It is a direct output of the risk assessment process. A risk register includes a detailed description of each risk that is listed. Although the risk register may appear to overlap the risk treatment schedule, the two documents serve different purposes. The risk treatment plan should include complete documentation concerning the identification of threats and asset vulnerabilities. It also prioritizes risks so they can be addressed with available resources. The risk register might include the following fields or columns:
Other types of risk treatments exist that do not fall into the preceding categories. Because it is nearly impossible to predict all possible risks, the organization may acknowledge that other, unidentified risks exist. If an unknown risk event is triggered and identified, it may be placed into an existing general response category. This type of response is called control and investigation: the damage is controlled by a general preplanned process, and then an investigation is launched into the source of the threat, the scope of the attack, and the amount of damage caused.
For example, a network intrusion prevention system (IPS) could be attacked through an exploit of a previously unknown flaw in its operating system. This type of attack is called a zero-day attack because the flaw is previously unknown to the manufacturer, and therefore no patches or mitigation techniques exist. A zero-day attack can exploit such a vulnerability, allowing an attacker to place a Trojan on the network. The Trojan can enable its malware component to open a back door to a database application. Although the organization may not have listed this attack as a possible risk in its risk register, it still may have taken action. The attack might immediately be classified as an Internet intrusion, and an incident response team might immediately be called in to control any damage. After the event, an investigation should be undertaken to identify the specific attack.
Most organizations today have an insufficient understanding of risk and an inability to identify and prevent risks. Many managers have insufficient knowledge of the potential threats and vulnerabilities within the organization, let alone the resources or a plan to address a risk event once it happens.
Organizations with limited risk visibility are continuously in a reactive state. Most organizations have insufficient understanding of how risk affects daily operations. The culture of many organizational groups is engaged almost exclusively with accomplishing daily tasks, and most managers are rarely able to comprehend the big picture or gain visibility into a risk landscape. Executives and managers at all levels are not risk aware even within their own workspace. Should a risk event occur, the focus is placed on fixing problems and putting out fires rather than on risk prevention.
Enterprise risk management (ERM) is a program designed to change the risk culture from reactive to proactive and to accurately forecast and mitigate risk across key programs. For ERM to be successful, it must be undertaken at an organizational level. This means that it must become part of the overall culture of the organization in order to identify and reduce risks on a proactive basis. To accomplish this task, senior management must decide to make the risk management program more visible throughout the organization.
Without access to comprehensive risk data, an organization may find it difficult to identify its outstanding risks and their probability of occurrence, causing forecasts to be less accurate. There may also be a disconnect between the information used by employees in the field and the information used by executive decision-makers.
An introduction of an enterprise risk management program may involve any of the following:
An organization-wide ERM program monitors the entire risk surface of an organization, which may include the monitoring of regulatory compliance, financial activities, and the physical attributes of the facilities as well as IT network risks. The ERM program is intended to raise awareness of risks to the organization and to shorten the reaction time needed either to mitigate the risk, thereby reducing the exposure, or to react to the risk, minimizing damage.
Managing the risk landscape involves many more actions than just identifying risks, establishing controls, and responding to problems. It involves continuous monitoring and effective communication of the status of internal controls so that threat events may be discovered immediately, or as soon as possible after an incursion. Rapid identification of problems or weaknesses and quick response actions help to reduce the cost of possible risk events.
An ideal situation would be to continuously manage risk through real-time metrics. To perform this type of monitoring, performance baselines must be established for all controls being monitored. These controls should trigger an alert in the event of state changes or activities outside of normal operating parameters. The baselines themselves must also be reviewed periodically, because normal behavior inevitably changes over time.
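A baseline-driven alert of this kind can be sketched as a simple threshold check. The metric names and baseline bands below are invented for illustration:

```python
# Sketch: flag metric readings that fall outside an established baseline band.
# The metrics and baseline values are invented illustrative examples.

baselines = {
    "cpu_percent":    {"low": 5, "high": 80},
    "logins_per_min": {"low": 0, "high": 30},
}

def check(metric, value):
    """Return an alert string if the value is outside the baseline, else None."""
    band = baselines[metric]
    if not (band["low"] <= value <= band["high"]):
        return f"ALERT: {metric}={value} outside baseline {band['low']}-{band['high']}"
    return None

print(check("cpu_percent", 97))     # outside the band, so an alert fires
print(check("logins_per_min", 12))  # within normal operating parameters
```

A production system would also re-learn the baseline values over time, as the surrounding text notes.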
The amount of data captured during passive monitoring can be substantial. Various software tools may be used to search the data for anomalies. During log review, various anomalies or activities are noted, each requiring an established procedure to be followed.
During passive monitoring, logs are accumulated from various machines manually or routed to a monitoring console. The serious deficiency with passive monitoring in many organizations is that logs may not be examined on a timely basis. Passive monitoring is, however, extremely helpful in device troubleshooting.
Depending upon established procedures and protocol for a designated type of intrusion, the Computer Emergency Response Team (CERT) may be required to perform necessary activities. Real-time monitoring differs from other forms of active monitoring in that it continuously listens to the traffic on the network and automatically sends alerts based upon defined criteria.
Even with real-time monitoring, a human response may not be fast enough. Various devices, such as a network intrusion prevention system (IPS), may discover an intrusion and immediately trigger an action, such as modifying a firewall rule to close a port or block an IP address. This automatic response is much more timely and efficient than a human response.
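The kind of automatic response described above can be sketched as a rule that blocks an offending address as soon as a detection threshold is crossed. The failed-login criterion and the in-memory block list are simplified stand-ins for a real sensor and firewall:

```python
# Sketch: an IPS-style automatic response -- on detection, block the source IP.
# The detection rule (failed-login threshold) and the in-memory "blocked" set
# are simplified stand-ins for a real sensor and a real firewall rule base.

FAILED_LOGIN_LIMIT = 5
failed_logins = {}      # source IP -> count of failed login attempts
blocked = set()         # stand-in for firewall deny rules

def record_failed_login(src_ip):
    """Count a failed login; block the source once it reaches the limit."""
    failed_logins[src_ip] = failed_logins.get(src_ip, 0) + 1
    if failed_logins[src_ip] >= FAILED_LOGIN_LIMIT:
        blocked.add(src_ip)   # automatic response, no human in the loop

for _ in range(6):
    record_failed_login("203.0.113.9")   # repeated failures: gets blocked
record_failed_login("198.51.100.7")      # single failure: not blocked

print(sorted(blocked))
```

The point of the sketch is the absence of a human step between detection and response.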
If an anomaly is located, operators are alerted based upon the severity of the intrusion. For instance, operators may be notified to remediate issues by special screens that appear on their consoles. These devices may also be configured to alert administrators by telephone or pager. Less severe intrusions or anomalies can be displayed on dashboards or in various reports.
Many organizations employ an in-house or a third-party security operations center (SOC) to monitor the physical perimeter, CCTV cameras, and facility access (Figure 5.5). An information security operations center (ISOC) primarily monitors the organization's applications, databases, websites, servers, and networks. The ISOC provides real-time situational awareness and is a primary center for network defense.
Many security practitioners are initially employed in an ISOC, which provides an excellent opportunity to learn about the network and the business of the organization. The ISOC staff includes security engineers, system administrators, and other IT personnel and network professionals who hold certifications such as the (ISC)2 Certified Information Systems Security Professional (CISSP).
Real-time security operations are sometimes outsourced to service providers that offer managed services to monitor the client's network on a 24/7 basis. This is ideal both for small to medium-sized organizations and for organizations that have numerous branch offices or facilities.
Time is of the essence in the world of cybersecurity. At the same time, organizational resources such as personnel, money, hardware infrastructure, and necessary skill sets are scarce. Threat intelligence is a method by which the organization obtains up-to-the-minute information concerning zero-day attacks, threat profiles, malware signatures, and other vital pieces of information required to protect the organization's resources.
The sharing of cyber threat information and threat intelligence has been strained at best. It does make sense, however, that if one company is attacked, other companies might want to know so that they can protect their own assets. The problem has been with current laws. The sharing of information between companies in the same industry may appear to be in violation of Securities and Exchange Commission regulations. Many are concerned that sharing specific threat intelligence with the government may violate privacy laws and regulations.
Various laws proposed in the U.S. Congress have failed on a number of counts. Although logic dictates the wisdom of sharing information, the business sector's fear of sharing information with the government stems from distrust: the possibility of opening a door that would allow the government to use the acquired data in a surveillance program. While private-sector organizations generally agree that attack information should be readily shared, the disagreement is over proprietary data such as customer lists and personally identifiable information.
A way around the current legal hurdles and legislation or lack thereof is by using a shared central proprietary attack database. Appliances are currently on the market that identify zero-day attacks or other anomalies and immediately upload the data to a proprietary database. Other appliances from the same company can access the database and immediately protect their user networks from the identified attack. Using this technique, only the intrusion data is shared while the end users remain anonymous. This may become an extremely viable method for participating organizations to share attack information as soon as it's available.
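The anonymized-sharing idea can be sketched in a few lines: only the attack indicators are submitted, never the identity of the contributing organization. The database, record format, and helper names below are invented for illustration:

```python
# Sketch: share only attack indicators, keeping the submitting organization
# anonymous. The central database, record format, and function names are
# invented illustrative assumptions.
import hashlib

shared_db = []  # stand-in for the central proprietary attack database

def submit_indicator(org_name, malware_sample: bytes, target_port: int):
    """Upload only the indicator data -- a hash of the sample and the targeted
    port. The submitting organization's identity is deliberately not stored."""
    record = {
        "sha256": hashlib.sha256(malware_sample).hexdigest(),
        "port": target_port,
    }
    shared_db.append(record)   # note: org_name never reaches the database
    return record

rec = submit_indicator("Acme Corp", b"\x4d\x5a\x90evil-payload", 445)
print(rec["sha256"][:16], rec["port"])
```

Other participants can then match the shared hash and port against their own traffic without ever learning who was attacked.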
The performance and availability of networks are constantly under threat from both internal and external sources. The security practitioner requires an understanding of monitoring techniques as well as how to interpret the results gathered during the monitoring process. Network devices are sometimes widely dispersed throughout an organization. This may also include offsite networks and remote offices.
Network monitoring involves two general categories of information:
SNMP makes use of information contained in a management information base (MIB). The MIB is a collection of information about network devices such as routers, switches, and servers, stored in a database that can be accessed using SNMP. Each managed object represents a characteristic of the device being managed. SNMP includes the following components:
As a security practitioner, you will undoubtedly be involved with monitoring the security aspects of networks and network-connected devices. This is different from the device monitoring discussed earlier, which gauges the health of a device and of the network. During security device monitoring, various alerts will be generated based upon established templates or parameters of a network device.
There is a wide range of log files that track various events on the network. Servers and devices record a wide range of items in logs, sometimes called syslogs. In practice, syslogs are used to gather information, usually in a database that may contain data aggregated from many different types of systems. The messages maintained in a syslog can be customized through templates to indicate facility level, location, type of activity, and the severity of the alert (such as emergency, critical, or error). The facility level indicates the type of message the program is logging, and various applications across the organization may have different facility level codes. Of particular importance to the security practitioner are the levels of message severity. There are eight severity levels:
0 - Emergency: the system is unusable
1 - Alert: action must be taken immediately
2 - Critical: critical conditions
3 - Error: error conditions
4 - Warning: warning conditions
5 - Notice: normal but significant conditions
6 - Informational: informational messages
7 - Debug: debug-level messages
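The facility and severity are packed together in the syslog priority (PRI) value, where PRI = facility × 8 + severity, per RFC 5424. A short sketch decoding it:

```python
# Sketch: decode a syslog PRI value into its facility code and severity name.
# Per RFC 5424, PRI = facility * 8 + severity.

SEVERITIES = ["Emergency", "Alert", "Critical", "Error",
              "Warning", "Notice", "Informational", "Debug"]

def decode_pri(pri):
    """Split a syslog PRI into (facility code, severity name)."""
    facility, severity = divmod(pri, 8)
    return facility, SEVERITIES[severity]

# A message beginning <34> has facility 4 and severity 2 (Critical).
print(decode_pri(34))
```

This is why a filter for severities 0 through 3 catches everything from Error up to Emergency regardless of which facility produced the message.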
Event data analysis is the process of taking raw data from numerous sources, assimilating and processing it, and presenting the result in a way that can be easily interpreted and acted upon. While the science of mathematical data analysis is quite extensive, the security practitioner should understand the sources of the data and what the data represents.
For many years, dashboards have been used to represent the correlation of data from numerous sources. At a glance, practitioners, analysts, administrators, and executives have been able to absorb large amounts of data when represented as a digital numerical display, dial with a pointer needle, bar chart, pie chart, or some other visual display technique. Microsoft products, such as Microsoft Excel, offer excellent capability to display graphs and dashboards of complex data. Excel tools such as those for formatting content, conditional formatting, pivot tables, and advanced data filtering have been used in IT data analysis.
Many vendors offer specialized tools that can be used to filter log files, memory dumps, packet captures, and other sources of raw IT event data. In many cases, data templates are used or modified to select the specific event data of interest. For example, log data or packet captures can be analyzed to identify the IP source of specific traffic. Large amounts of data over a period of time can be quickly analyzed to determine various factors such as frequency, length of visit, actions taken or data accessed, and other valuable information that can be used during investigation.
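Frequency analysis of this kind can be sketched with a few lines of Python. The space-delimited log format and the sample entries are simplified assumptions:

```python
# Sketch: count how often each source IP appears in a batch of log lines.
# The space-delimited log format and sample entries are simplified assumptions.

from collections import Counter

log_lines = [
    "2024-05-01T10:01:03 203.0.113.9 GET /login",
    "2024-05-01T10:01:04 203.0.113.9 POST /login",
    "2024-05-01T10:02:11 198.51.100.7 GET /index",
    "2024-05-01T10:02:12 203.0.113.9 POST /login",
]

# The second whitespace-delimited field is the source IP in this format.
hits = Counter(line.split()[1] for line in log_lines)
for ip, count in hits.most_common():
    print(ip, count)
```

The same counting approach extends to frequency of URLs visited, actions taken, or any other field extracted from the raw event data.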
Visualization is a technique of representing complex data in a visual form rather than a tabular form such as a list. Even simple network diagrams can be extremely complex and hard to understand. Visualization takes advantage of the human brain's capability for learning and processing complex information based upon visual input. The brain notices the intricacies and details of an image all at once, which allows us to analyze and absorb vast quantities of abstract information at the same time. Prior to graphic visualization, flowcharts, graphs, diagrams, and other techniques were used to depict the relationships between items on a network. As networks became much more advanced and complex, paper-based graphing techniques became inefficient.
Information visualization for IT networks is an emerging field of information communication. Data analysis is indispensable when diagnosing problems facing large data communication networks. By viewing the entire network as a spatial diagram of connected nodes and network devices, the analyst can easily take in large amounts of information and apply reason and insight rather than raw analytical effort in deciphering it. Many vendors supply visualization software that will map entire networks and allow the user to zoom in or out to the level of detail they desire. Future iterations of visualization software will likely be able to display problems and recommend solutions graphically.
In the past, IT network information has been represented by various methods, including nodes illustrated as hanging off a central bus, host workstations and servers drawn as circles in network designs, tree diagrams for Ethernet networks, and clouds representing the Internet or other mass communication technology. Figure 5.6 represents a data visualization of a large network. Major work centers are represented by the larger circles, work groups of satellite offices are represented by smaller circles in groups, and individual nodes are represented by single circles.
The results of data analysis can be communicated using a number of methods. The choice of medium or media used to convey the information is directly related to the speed of the communication. For example, displaying data on a computer screen is much faster than printing out a pile of paper, and a dashboard that visualizes data at a glance is much faster to absorb than a spreadsheet with columns of information.
Various roles within the organization require data to make informed decisions. Each of these roles generally requires only the data in which it is specifically interested. Data should be presented in a manner that is actionable by the individual receiving the communication. The following roles might be receiving and acting upon data:
The Systems Security Certified Practitioner must be familiar with the organization's policies, standards, procedures, and guidelines to ensure adequate information availability, integrity, and confidentiality.
Security administration includes the roles and responsibilities of many persons within the organization who must carry out various tasks according to established policy and directives. Practitioners may be involved with change control, configuration management, security awareness training, and the monitoring of systems and devices. The application of generally accepted industry best practices is the responsibility of the IT administrators and security practitioners. Key administration duties may include configuration, logging, monitoring, upgrading, and updating products as well as providing end-user support.
In this chapter, the importance of policies within an organization was stressed. Without policies and the resulting procedures and guidelines, there would be a complete lack of corporate governance with respect to IT security. Security policies are the foundation upon which the organization can rely for guidance. These policies also include the concept of continuity of operations, which includes all actions required to continue operations after a disaster event. Associated disaster preparedness and disaster recovery policies provide the steps required to restore operations to a point prior to the disaster.
The configuration and management of various systems and network products may be the responsibility of the security practitioner. Various concepts of patching and upgrading systems were discussed in this chapter. Version numbering is the methodology used to identify various versions of software, firmware, and hardware, while release management includes the responsibilities involved in the distribution of software changes, upgrades, or patches throughout the organization.
This chapter covered data classification policies with regard to the responsibilities of the security practitioner. The practitioner can be involved in both the classification process and the declassification process for data management policies.
Security education and awareness training was also discussed in this chapter. The security practitioner may be involved in conducting or facilitating security awareness training courses or sessions. During the sessions, topics such as malware, social media, passwords, and the implications of lost devices can be covered. Different groups of individuals will require different training.
Business continuity and disaster recovery plans are important programs to initiate within an organization. The security practitioner will be involved in originating or maintaining such plans and will definitely be involved if the plan is exercised. A business impact analysis is central to the creation of a business continuity plan. The plan should be tested using a variety of methods to ensure that individuals are aware of their responsibilities and that all of the details of the plan have been considered.
You can find the answers in Appendix A.
You can find the answers in Appendix B.
A. Eliminate risk
B. Mitigate the possibility of the use of malware
C. Enforce and maintain the AIC objectives
D. Maintain the organization's network operations
A. Altering elements of the enterprise in response to a risk analysis
B. Mitigating risk to the enterprise at any cost
C. Allowing a third party to assume all risk for the enterprise
D. Paying all costs associated with risks with internal budgets
A. Crackers
B. Employees
C. Hackers
D. Flood
A. Risk is identified and measured by performing a risk analysis.
B. Risk is controlled through the application of safeguards and countermeasures.
C. Risk is managed by periodically reviewing the risk and taking responsible actions based on the risk.
D. All risks can be totally eliminated through risk management.
A. Any vulnerability in an information technology system
B. Protective controls
C. Multilayered controls
D. Possibility for a source to exploit a specific vulnerability
A. Potential for a source to exploit a categorized vulnerability
B. Controls put in place to provide some amount of protection for an asset
C. Weakness in internal controls that could be exploited by a threat or a threat agent
D. A control designed to warn of an attack
A. Any event with the potential to harm an information system through unauthorized access
B. Controls put in place as a result of a risk analysis
C. The annualized rate of occurrence multiplied by the single loss expectancy
D. The company resource that could be lost due to an accident
A. A quantitative risk analysis does not use the hard cost of losses; a qualitative risk analysis does.
B. A quantitative risk analysis makes use of real numbers.
C. A quantitative risk analysis results in subjective high, medium, or low results.
D. A quantitative risk analysis cannot be automated.
A. Detective controls uncover attacks and prompt the action of preventative or corrective controls.
B. Controls perform as the countermeasures for threats.
C. Controls reduce the effect of an attack.
D. Corrective controls always reduce the likelihood of a premeditated attack.
A. A qualitative impact analysis identifies areas that require immediate improvement
B. A qualitative impact analysis provides a rationale for determining the effect of security controls
C. A quantitative impact analysis makes a cost benefit analysis simple
D. A quantitative impact analysis provides specific measurements of attack impacts
A. Distributing a multi-page form
B. Utilizing automated risk polling tools
C. Interviewing fired employees
D. Reviewing existing policy documents
A. Useful life
B. Value
C. Age
D. Most frequently used
A. Reduce risk to a level tolerable by the organization
B. Reduce all risks without respect to cost to the organization
C. Transfer all risks to external third parties
D. Prosecute any employees that are violating published security policies
A. An asset loss that could cause a financial or operational impact to the organization
B. Controls put in place that reduce the effects of threats
C. Competitive advantage, capability, credibility, or goodwill
D. Personnel, compensation, and retirement programs
A. The possibility that a threat exists must be determined as an element of the risk assessment.
B. The level of impact of a threat must be determined as an element of risk assessment.
C. A risk assessment is the last result of the risk management process.
D. Risk assessment is the first step in the risk management process.
A. Total cost of ownership (TCO) needs to be included in determining the total cost of the safeguard.
B. It is most common to consider the cost effectiveness of the safeguard.
C. The most effective safeguard should always be implemented regardless of cost.
D. Several criteria should be considered when determining the total cost of the safeguard.
A. Determining the effects of a denial of service and preparing the company's response
B. The removal of all exposure and threats to the organization
C. Defining the acceptable level of risk and assigning the responsibility of loss or disruption to a third-party, such as an insurance carrier
D. Defining the acceptable level of risk the organization can tolerate and reducing risk to that level
A. Passive monitoring
B. Active monitoring
C. Subjective monitoring
D. Real-time monitoring
A. Risk acceptance
B. Ignoring risk
C. Risk transference
D. Risk reduction
A. Administrative
B. Physical
C. Preventative
D. Technical