Chapter 8

Security Assessment and Testing

IN THIS CHAPTER

check Developing assessment and test strategies

check Performing vulnerability assessments, penetration tests, and more

check Collecting security process data

check Understanding test outputs

check Conducting internal, external, and third-party audits

In this chapter, you learn about the various tools and techniques that security professionals use to continually assess and validate an organization’s security environment. This domain represents 12 percent of the CISSP certification exam.

Design and Validate Assessment and Test Strategies

Modern security threats are rapidly and constantly evolving. Likewise, an organization’s systems, applications, networks, services, and users are frequently changing. Thus, it is critical that organizations develop an effective strategy to regularly test, evaluate, and adapt their business and technology environment to reduce the probability and impact of successful attacks, as well as achieve compliance with applicable laws, regulations, and contractual obligations.

Organizations need to implement a proactive assessment and test strategy for both existing and new information systems and assets. The strategy should be an integral part of the risk management process to help the organization identify new and changing risks that are important enough to warrant analysis, decisions, and action.

Security personnel must identify all applicable laws, regulations, and other legal obligations such as contracts to understand what assessments, testing, and auditing are required. Further, security personnel should examine their organization’s risk management framework and control framework to see what assessments, control testing, and audits are suggested or required. The combination of these would then become a part of the organization’s overall strategy for assuring that all its security-related tools, systems, and processes are operating properly.

There are three main perspectives that come into play when planning for an organization’s assessments, testing, and auditing:

  • Internal: This represents assessments, testing, and auditing performed by personnel who are a part of the organization. The advantages of using internal resources for assessments, tests, and audits include lower cost and greater familiarity with the organization’s practices and systems. However, internal personnel may not be as objective as external parties.
  • External: This represents assessments, testing, and audits performed by people from an external organization or agency. Some laws and regulations, as well as contractual obligations, may require external assessments, tests, and audits of certain systems and processes. The greatest advantage of using external personnel is that they’re objective. However, they’re often more expensive, particularly for activities requiring higher skill levels or specialized tools.
  • Third parties: This is all about audits of critical business activities that have been outsourced to external service providers, or third parties. Here, the systems and personnel being examined belong to an external service provider. Depending upon requirements in applicable laws, regulations, and contracts, these assessments of third parties may be performed by internal personnel, or in some cases external personnel may be required.

    tip Many third-party service providers will commission external audits whose audit reports can be distributed to their customers. This can help service providers avoid separate audits by each of their customers. Examples of such audits include SSAE 18, SOC 1, and SOC 2. Service providers also commission security consulting firms to conduct penetration tests on systems and applications, which helps them reduce the number of customers who would want to do this themselves.

Conduct Security Control Testing

Security control testing employs various tools and techniques, including vulnerability assessments, penetration (or pen) testing, synthetic transactions, interface testing, and more. You learn about these and other tools and techniques in the following sections.

Vulnerability assessments

A vulnerability assessment is performed to identify, evaluate, quantify, and prioritize security weaknesses in an application or system. Additionally, a vulnerability assessment provides remediation steps to mitigate specific vulnerabilities that are identified in the environment.

There are three general types of vulnerability assessments:

  • Port scan (not intensive)
  • Vulnerability scan (more intensive)
  • Penetration test (most intensive)

Generally, automated network-based scanning tools are used to identify vulnerabilities in applications, systems, and network devices in a network. Sometimes, system-based scanning tools are used to examine configuration settings to identify exploitable vulnerabilities. Often, network- and system-based tools are used together to build a more complete picture of vulnerabilities in an environment.

Port scanning

A port scan uses a tool that communicates over the network with one or more target systems on various Transmission Control Protocol/Internet Protocol (TCP/IP) ports. A port scan can discover the presence of ports that should probably be disabled (because they serve no useful or necessary purpose on a particular system).
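The core mechanics of a simple TCP connect scan can be sketched in a few lines of Python (the host and ports shown are arbitrary examples; real tools such as Nmap are far more capable and far stealthier):

```python
import socket

def tcp_connect_scan(host, ports, timeout=0.5):
    """Return the subset of ports that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success rather than raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example: probe a few well-known ports on the local machine
print(tcp_connect_scan("127.0.0.1", [22, 80, 443, 3389]))
```

A port that completes the connection is open; a port that refuses or times out is closed or filtered. Production scanners add techniques such as SYN (half-open) scanning, which this sketch omits.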

Vulnerability scans

Network-based vulnerability scanning tools send network messages to systems in a network to identify any utilities, programs, or tools that may be configured to communicate over the network. These tools attempt to identify the version of any utilities, programs, and tools; often, it is enough to know the versions of the programs that are running, because scanning tools often contain a database of known vulnerabilities associated with program versions. Scanning tools may also send specially crafted messages to running programs to see if those programs contain any exploitable vulnerabilities.
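For illustration, here is a minimal sketch of the banner-grabbing technique such scanners use to identify service versions; the service banner and the "known vulnerable" entry are invented for the example:

```python
import socket

def grab_banner(host, port, timeout=2.0):
    """Read the initial banner a service sends on connect (e.g., SSH, SMTP)."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            return s.recv(1024).decode(errors="replace").strip()
    except OSError:
        return None

# A real scanner compares reported versions against a vulnerability database;
# this single entry is a hypothetical stand-in for such a database
KNOWN_VULNERABLE = {"OpenSSH_7.2"}

banner = grab_banner("127.0.0.1", 22)
if banner and any(version in banner for version in KNOWN_VULNERABLE):
    print("Potentially vulnerable service:", banner)
```

Matching a version string against a vulnerability database is how scanners report many findings without ever attempting an exploit.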

Tools are also used to identify vulnerabilities in software applications. Generally, these tools are divided into two types: dynamic application security testing (DAST) and static application security testing (SAST). A DAST tool executes an application and then uses techniques such as fuzzing to attempt to identify exploitable vulnerabilities that could permit an attacker to compromise the application — altering or stealing data, or taking control of the system. A SAST tool examines an application’s source code and looks for exploitable vulnerabilities. Neither DAST nor SAST can find all vulnerabilities, but when used together by skilled personnel, many exploitable vulnerabilities can be found.

Examples of network-based vulnerability scanning tools include Nessus, Rapid7 Nexpose, and Qualys. Examples of system-based vulnerability scanning tools include Microsoft Baseline Security Analyzer (MBSA) and Flexera (formerly Secunia) PSI. Examples of application scanning tools include IBM AppScan, HP WebInspect, HP Fortify, Acunetix, and Burp Suite.

Unauthenticated and authenticated scans

Vulnerability scanning tools (both those used to examine systems and network devices, as well as those that examine applications) generally perform two types of scans: unauthenticated scans and authenticated scans. In an authenticated scan, the scanning tool will be configured with login credentials and will attempt to log in to the device, system, or application to identify vulnerabilities not discoverable otherwise. In an unauthenticated scan, the scanning tool will not attempt to log in; hence, it can only discover vulnerabilities that would be exploitable by someone who does not possess valid login credentials.

Vulnerability scan reports

Generally, all the types of scanning tools discussed in this section create some sort of a report that contains summary and detailed information about the scan that was performed and vulnerabilities that were identified. Many of these tools produce a good amount of detail, including steps used to identify each vulnerability, the severity of each vulnerability, and steps that can be taken to remediate each vulnerability.

Some vulnerability scanning tools employ a proprietary methodology for vulnerability identification, but most scanning tools include a Common Vulnerability Scoring System (CVSS) score for each identified vulnerability. Application security is discussed in more detail in Chapter 10.
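CVSS v3.x maps numeric base scores to qualitative severity ratings, which is how the "severity" column in a scan report is typically derived:

```python
def cvss_v3_severity(score: float) -> str:
    """Map a CVSS v3.x base score to its qualitative severity rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_v3_severity(9.8))  # → Critical
```

The numeric score itself is computed from base metrics such as attack vector, attack complexity, privileges required, and impact to confidentiality, integrity, and availability.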

Vulnerability assessments are a key part of risk management (discussed in Chapter 3).

Penetration testing

Penetration testing (pen testing for short) is the most rigorous form of vulnerability assessment. The level of effort required to perform a penetration test is far higher than for a port scan or vulnerability scan. Typically, an organization will employ a penetration test on a target system or environment when it wants to simulate an actual attack by an adversary.

Network penetration testing

A network penetration test of systems and network devices generally begins with a port scan and/or a vulnerability scan. This gives the pen tester an inventory of the attack surface of the network and the systems and devices connected to it. The pen test then continues with extensive manual techniques to identify and/or exploit vulnerabilities. In other words, the pen tester uses both automated and manual techniques to identify and confirm vulnerabilities.

Occasionally, a pen tester will exploit vulnerabilities during a penetration test. Pen testers generally tread carefully here because they must be acutely aware of the target environment. For instance, if a pen tester is testing a live production environment, exploiting vulnerabilities could result in malfunctions or outages in the target environment. In some cases, data corruption or data loss could also result.

When performing a penetration test, the pen tester will often take screen shots showing the exploited system or device. Often, a pen tester does this because system/device owners sometimes don’t believe that their environments contain exploitable vulnerabilities. By including screen shots in the final report, the pen tester is “proving” that vulnerabilities exist and are exploitable.

Pen testers often include details for reproducing exploits in their reports. This is helpful for system or network engineers who often want to reproduce the exploit, so that they can “see for themselves” that the vulnerability does, in fact, exist. It’s also helpful when engineers or developers make changes to mitigate the vulnerabilities; they can use the same techniques to see whether their fixes closed the vulnerabilities.

In addition to scanning networks, some other techniques are generally included in the topic of network penetration testing, including the following:

  • War dialing: Hackers use war dialing to sequentially dial all phone numbers in a range to discover any active modems. The hacker then attempts to compromise any connected systems or networks via the modem connection. This is old school, but it’s still used occasionally.
  • War driving: War driving is the 21st-century version of war dialing. Someone uses a laptop computer and literally drives around a densely populated area, looking to discover unprotected (or poorly protected) wireless access points.
  • Radiation monitoring: Radio frequency (RF) emanations are the electromagnetic radiation emitted by computers and network devices. Radiation monitoring is similar to packet sniffing and war driving in that someone uses sophisticated equipment to try to determine what data is being displayed on monitors, transmitted on local area networks (LANs), or processed in computers.
  • Eavesdropping: Eavesdropping is as low-tech as dumpster diving, but a little less (physically) dirty. Basically, an eavesdropper takes advantage of one or more persons who are talking or using a computer — and paying little attention to whether someone else is listening to their conversations or watching them work with discreet over-the-shoulder glances. (The technical term for the latter is shoulder surfing.)
  • Packet sniffing: A packet sniffer is a tool that captures all TCP/IP packets on a network, not just those being sent to the system or device doing the sniffing. An Ethernet network is a shared-media network (see Chapter 6), which means that any or all devices on the LAN can (theoretically) view all packets. However, switched-media LANs are more prevalent today and sniffers on switched-media LANs generally pick up only packets intended for the device running the sniffer.

    tip A network adapter that operates in promiscuous mode accepts all packets, not just the packets destined for the system, and passes them to the operating system.
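For illustration, here is a sketch of what a sniffer does with each captured frame: parsing the Ethernet II header to reveal source and destination hardware addresses and the payload type (the frame bytes here are synthetic, not captured traffic):

```python
import struct

def parse_ethernet_header(frame: bytes) -> dict:
    """Unpack the 14-byte Ethernet II header that a sniffer captures off the wire."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    fmt = lambda mac: ":".join(f"{octet:02x}" for octet in mac)
    return {"dst": fmt(dst), "src": fmt(src), "ethertype": hex(ethertype)}

# A synthetic frame: broadcast destination, made-up source MAC, IPv4 EtherType
frame = bytes.fromhex("ffffffffffff" + "021122334455" + "0800") + b"...payload..."
print(parse_ethernet_header(frame))
```

Real sniffers such as Wireshark and tcpdump perform this kind of dissection for every protocol layer, from Ethernet all the way up to application payloads.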

Application penetration testing

An application penetration test is used to identify vulnerabilities in a software application. Although the principles of an application penetration test are the same as a network penetration test, the tools and skills are somewhat different. Someone performing an application penetration test generally will have an extensive background in software development. Indeed, the best application pen testers are often former software developers or software engineers.

Physical penetration testing

Penetration tests are also performed on the controls protecting physical premises, to see whether it is possible for an intruder to bypass security controls such as locked doors and keycard-controlled entrances. Sometimes pen testers will employ various social engineering techniques to gain unauthorized access to work centers and sensitive areas within work centers such as computer rooms and file storage rooms. Often, they plant evidence, such as a business card or other object to prove they were successful.

tip Hacking for Dummies, 6th Edition, explores penetration testing and other techniques in more detail.

In addition to breaking into facilities, another popular technique used by physical pen testers is dumpster diving. Dumpster diving is low-tech penetration testing at its best (or worst), and is exactly what it sounds like. Dumpster diving can sometimes be an extraordinarily fruitful way to obtain information about an organization. Organizations in highly competitive environments also need to be concerned about where their trash and recycled paper goes.

Social engineering

Social engineering is any testing technique that employs some means of tricking individuals into performing an action or providing information that gives the pen tester the ability to break into an application, system, or network. Social engineering involves such low-tech tactics as an attacker pretending to be a support technician, calling an employee, and asking for their password. You’d think most people would be smart enough not to fall for this, but people are people (and Soylent Green is people)! Some of the ruses used in social engineering tests include the following:

  • Phishing messages: Email messages purporting to be something they’re not, in an attempt to lure someone into opening a file or clicking a link. Test phishing messages are, of course, harmless but are used to see how many personnel fall for the ruse.
  • Telephone calls: Calls to various workers inside an organization can trick them into performing tasks. For instance, a pen tester might call the service desk and ask to have a user’s password reset (possibly enabling the pen tester to log in using that user’s account), call an employee while claiming to be from the IT service desk to see whether the employee will give up login credentials or perform a task, or call an employee while posing as someone in need of assistance.
  • Tailgating: Attempts to enter a restricted work area by following legitimate personnel as they pass through a controlled doorway. Sometimes the tester will be carrying boxes in the hopes that an employee will hold the door open for them, or they may pose as a delivery or equipment repair person.

Log reviews

Reviewing your various security logs on a regular basis (daily, ideally) is a critical step in security control testing. Unfortunately, this important task often ranks only slightly higher than “updating documentation” on many administrators’ “to-do” list. Log reviews often happen only after an incident has already occurred. But that’s not the time to discover that your logging is incomplete or insufficient.

Logging requirements (including any regulatory or legal mandates) need to be clearly defined in an organization’s security policy, including:

  • What gets logged, such as
    • Events in network devices, such as firewalls, intrusion prevention systems (IPS), web filters, and data loss prevention (DLP) systems
    • Events in server and workstation operating systems
    • Events in subsystems, such as web servers, database management systems, and application gateways
    • Events in applications
  • What’s in the logs, such as
    • Date/time of event
    • Source (and destination, if applicable), protocol, and IP addresses
    • Device, system, and/or user IDs
    • Event ID and category
    • Event details
  • When and how often the logs are reviewed.
  • The level of logging (how verbose the logs are)
  • How and where the logs are transmitted, stored, and protected; for example:
    • Are the logs stored on a centralized log server or on the local system hard drives?
    • Which secure transmission protocol is used to ensure the integrity of the logging data in transit?
    • How are date and timestamps synchronized (such as an NTP server)?
    • Is encryption of the logs required?
    • Who is authorized access to the logs?
    • Which safeguards are in place to protect the integrity of the logs?
    • How is access to the logs logged?
  • How long the logs are retained.
  • Which events in logs are triggered to generate alerts, and to whom alerts are sent.
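As a simple illustration of automating a first-pass log review, the following sketch flags source IP addresses with repeated failed logins; the log format and field positions are invented for the example:

```python
from collections import Counter

def flag_failed_logins(log_lines, threshold=3):
    """Flag source IPs with repeated failed logins (log format is illustrative)."""
    failures = Counter()
    for line in log_lines:
        # Assumed illustrative format: "<timestamp> <ip> <event> <user>"
        parts = line.split()
        if len(parts) >= 3 and parts[2] == "LOGIN_FAILED":
            failures[parts[1]] += 1
    return [ip for ip, count in failures.items() if count >= threshold]

sample = [
    "2024-05-01T09:00:01 10.0.0.7 LOGIN_FAILED alice",
    "2024-05-01T09:00:02 10.0.0.7 LOGIN_FAILED alice",
    "2024-05-01T09:00:03 10.0.0.7 LOGIN_FAILED alice",
    "2024-05-01T09:00:04 10.0.0.9 LOGIN_OK bob",
]
print(flag_failed_logins(sample))  # → ['10.0.0.7']
```

In practice, this kind of correlation and thresholding is exactly what SIEM systems do at scale, across many log sources at once.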

tip Various log management tools, such as security information and event management (SIEM) systems (discussed in Chapter 9), often are used to help with real-time monitoring, parsing, anomaly detection, and generation of alerts to key personnel.

Synthetic transactions

Synthetic transactions are real-time actions or events that automatically execute on monitored objects. For example, a tool may be used to regularly perform a series of scripted steps on an e-commerce website to measure performance, identify impending performance issues, and simulate the user experience. Thus, synthetic transactions can help an organization proactively test, monitor, and ensure availability (refer to the C-I-A triad in Chapter 3) for critical systems and monitor service-level agreement (SLA) guarantees.

Application performance monitoring tools traditionally have produced such metrics as system uptime, correct processing, and transaction latency. While uptime certainly is an important aspect of availability, it is only one component. Increasingly, reachability (which is a more user- or application-centric metric) is becoming the preferred metric for organizations that focus on customer experience. After all, it doesn’t do your customers much good if your web servers are up 99.999 percent of the time, but Internet connections from their region of the world are slow, DNS doesn’t resolve quickly, or web pages take 5 or 6 seconds to load in an online world that measures responsiveness in milliseconds! Hence, other key metrics for applications are correct processing (perhaps expressed as a percentage, which should be pretty close to 100 percent!) and transaction latency (the length of time it takes for specific types of transactions to complete). These metrics help operations personnel spot application problems.
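A minimal sketch of a synthetic-transaction monitor might look like the following; the checkout flow here is a stand-in for a real scripted sequence of website steps:

```python
import time

def run_synthetic_transaction(transaction, sla_ms):
    """Execute a scripted transaction, record latency, and check it against an SLA."""
    start = time.perf_counter()
    try:
        transaction()
        ok = True
    except Exception:
        ok = False  # correct processing failed
    latency_ms = (time.perf_counter() - start) * 1000
    return {"ok": ok, "latency_ms": latency_ms,
            "within_sla": ok and latency_ms <= sla_ms}

# Stand-in for a scripted e-commerce flow (search -> add to cart -> check out)
def fake_checkout_flow():
    time.sleep(0.01)

print(run_synthetic_transaction(fake_checkout_flow, sla_ms=500))
```

Run on a schedule from multiple regions, measurements like these yield the correct-processing and transaction-latency metrics described above, and can trigger alerts before customers notice a problem.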

Code review and testing

Code review and testing (sometimes known as peer review) involves systematically examining application source code to identify bugs, mistakes, inefficiencies, and security vulnerabilities in software programs. Online software repositories, such as Mercurial and Git, enable software developers to manage source code in a collaborative development environment. A code review can be accomplished either manually by carefully examining code changes visually, or by using automated code reviewing software (such as IBM AppScan Source, HP Fortify, and CA Veracode). Different types of code review and testing techniques include

  • Pair programming. Pair (or peer) programming is a technique commonly used in agile software development and extreme programming (both discussed in Chapter 10), in which two developers work together and alternate between writing and reviewing code, line by line.
  • Lightweight code review. Often performed as part of the development process, consisting of informal walkthroughs, e-mail pass-around, tool-assisted, and/or over-the-shoulder (not recommended for the rare introverted or paranoid developer!) reviews.
  • Formal inspections. Structured processes, such as the Fagan inspection, used to identify defects in design documents, requirements specifications, test plans and source code, throughout the development process.

tip Code review and testing can be invaluable in helping to identify software vulnerabilities such as buffer overflows, script injection vulnerabilities, memory leaks, and race conditions (see Chapter 10 to learn more).

Misuse case testing

The opposite of use case testing (in which normal or expected behavior in a system or application is defined and tested), abuse/misuse case testing is the process of performing unintended and malicious actions in a system or application in order to produce abnormal or unexpected behavior, and thereby identify potential vulnerabilities.

After misuse case testing identifies a potential vulnerability, a use case can be developed to define new requirements for eliminating or mitigating similar vulnerabilities in other programs and applications.

A common technique used in misuse case testing is known as fuzzing. Fuzzing involves the use of automated tools that can produce dozens (or hundreds, or even more) of combinations of input strings to be fed to a program’s data input fields in order to elicit unexpected behavior. Fuzzing is used, for example, in an attempt to successfully attack a program using script injection. Script injection is a technique where a program is tricked into executing commands in various languages, mainly JavaScript and SQL. Tools such as HP WebInspect, IBM AppScan, Acunetix, and Burp Suite have built-in fuzzing and script injection tools that are pretty good at identifying script injection vulnerabilities in software applications.
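A toy fuzzer illustrates the idea: generate many random input strings, feed each to the program’s input handler, and record which inputs cause a crash. The fragile parse_quantity function below is a contrived stand-in for a real data input field:

```python
import random
import string

def fuzz(target, runs=200, seed=42):
    """Feed random input strings to a function and collect crashing inputs."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(runs):
        length = rng.randint(0, 30)
        payload = "".join(rng.choice(string.printable) for _ in range(length))
        try:
            target(payload)
        except Exception as exc:
            crashes.append((payload, type(exc).__name__))
    return crashes

# A deliberately fragile input handler (stand-in for a real data input field)
def parse_quantity(field: str) -> int:
    return int(field)  # crashes on anything that isn't an integer

print(f"{len(fuzz(parse_quantity))} crashing inputs found")
```

Production fuzzers are far smarter — they mutate valid inputs, track code coverage, and target specific injection payloads — but the feedback loop of "generate, inject, observe" is the same.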

Test coverage analysis

Test (or code) coverage analysis measures the percentage of source code that is tested by a given test (or validation) suite. Basic coverage criteria typically include

  • Branch coverage (for example, every branch at a decision point is executed as TRUE or FALSE)
  • Condition (or predicate) coverage (for example, each Boolean expression is evaluated to both TRUE and FALSE)
  • Function coverage (for example, every function or subroutine is called)
  • Statement coverage (for example, every statement is executed at least once)

For example, a security engineer might use a dynamic application security testing (DAST) tool, such as AppScan or WebInspect, to test a travel booking program to determine whether the program has any exploitable security defects. Tools such as these are powerful, and they use a variety of methods to “fuzz” input fields in attempts to discover flaws. But these tools also need to fill out forms in every conceivable combination, so that all the program’s code is executed. In this example of a travel booking tool, those combinations would involve every way in which flights, hotels, or cars could be searched, queried, examined, and finally booked. In a complex program, this can be daunting: highly systematic analysis is needed to make sure that every possible combination of conditions is tested so that all of the program’s code is exercised.
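The mechanics of statement coverage measurement can be sketched with Python’s tracing hook; the booking function and test inputs are contrived for the example:

```python
import sys

def measure_statement_coverage(func, test_inputs):
    """Run a set of test inputs against func and report which of its lines executed."""
    executed = set()
    code = func.__code__

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        for args in test_inputs:
            func(*args)
    finally:
        sys.settrace(None)
    return executed

def book_fare(age, is_member):
    fare = 100
    if age < 12:
        fare = 50          # reached only when a child fare is booked
    if is_member:
        fare = fare * 0.9  # reached only when a member discount applies
    return fare

# One input exercises only some statements; adding inputs raises coverage
partial = measure_statement_coverage(book_fare, [(30, False)])
full = measure_statement_coverage(book_fare, [(30, False), (8, True)])
print(len(partial), "of", len(full), "traced lines covered by the single input")
```

Real coverage tools (coverage.py, gcov, JaCoCo, and so on) use the same principle, and additionally report branch and condition coverage.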

Interface testing

Interface testing focuses on the interface between different systems and components. It ensures that functions (such as data transfer and control between systems or components) perform correctly and as expected. Interface testing also verifies that any execution errors are properly handled and do not expose any potential security vulnerabilities. Examples of interfaces tested include

  • Application programming interfaces (APIs)
  • Web services
  • Transaction processing gateways
  • Physical interfaces, such as keypads, keyboard/mouse/display, and device switches and indicators

tip APIs, web services, and transaction gateways can often be tested with automated tools such as HP WebInspect, IBM AppScan, and Acunetix, which are also used to test the human-input portion of web applications.
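As a simple sketch of interface testing, the following exercises a hypothetical transaction-gateway function with malformed inputs and verifies that each is rejected with a controlled, well-defined error rather than unexpected behavior; the function, its parameters, and the validation rules are invented for the example:

```python
def transfer_funds(amount_cents, currency):
    """Hypothetical transaction-gateway interface with strict input validation."""
    if not isinstance(amount_cents, int) or amount_cents <= 0:
        raise ValueError("amount_cents must be a positive integer")
    if currency not in {"USD", "EUR", "GBP"}:
        raise ValueError("unsupported currency")
    return {"status": "accepted", "amount_cents": amount_cents, "currency": currency}

def test_interface_error_handling():
    """Verify the interface rejects malformed input without leaking internals."""
    bad_calls = [(-5, "USD"), (0, "USD"), (100, "XXX"), ("100", "USD")]
    for amount, currency in bad_calls:
        try:
            transfer_funds(amount, currency)
            raise AssertionError(f"accepted invalid input: {(amount, currency)}")
        except ValueError:
            pass  # expected: a controlled, well-defined error
    assert transfer_funds(100, "USD")["status"] == "accepted"

test_interface_error_handling()
print("interface tests passed")
```

The security value lies in the negative cases: an interface that responds to bad input with a crash, a stack trace, or silent acceptance is exposing a potential vulnerability.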

Collect Security Process Data

Assessment of security management processes and systems helps an organization determine the efficacy of its key processes and controls. Periodic testing of key activities is an important part of management and regulatory oversight, to confirm the proper functioning of key processes, as well as identification of improvement areas.

Several factors must be considered when determining who will perform this testing, including:

  • Regulations. Various regulations specify which parties must perform testing, whether qualified internal staff or outside consultants.
  • Staff resources and qualifications. Regulations and other conditions permitting, an organization may have adequately skilled and qualified staff that can perform some or all of its testing.
  • Organizational integrity. While an organization may have the resources and expertise to test its management processes, often an organization will elect to have an outside, qualified organization perform testing. Independent outside testing helps avoid bias.

These factors will also determine required testing methods, including the tools used, testing criteria, sampling, and reporting. For example, a U.S. public company is required to evaluate its internal controls (including information security controls) in specific ways and against specific auditing standards under the Sarbanes–Oxley (SOX) Act of 2002, also known as the Public Company Accounting Reform and Investor Protection Act.

Account management

Management must regularly review user and system accounts and related business processes and records to ensure that privileges are provisioned and de-provisioned appropriately and with proper approvals. These reviews should confirm that

  • All user account provisioning was properly requested, reviewed, approved, and executed.
  • All internal personnel transfers result in timely termination of access that is no longer needed.
  • All personnel terminations result in timely termination of all access.
  • All users holding privileged account access still require it, and their administrative actions are logged.
  • All user accounts can be traced back to a proper request, review, and approval.
  • All unused user accounts are evaluated to see whether they can be deactivated.
  • All users’ access privileges are certified regularly as necessary.
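A first pass at several of these checks can be automated by cross-referencing account lists against HR and approval records; the account names and record structures below are invented for the example:

```python
def find_account_issues(active_accounts, terminated_users, approved_requests):
    """Flag accounts belonging to terminated users or lacking an approval record."""
    issues = {}
    for account in sorted(active_accounts):
        if account in terminated_users:
            issues[account] = "user terminated; access not revoked"
        elif account not in approved_requests:
            issues[account] = "no matching provisioning approval on file"
    return issues

# Illustrative data: active accounts, HR termination list, and approval records
accounts = {"alice", "bob", "carol"}
terminated = {"bob"}
approved = {"alice"}
print(find_account_issues(accounts, terminated, approved))
```

The output of a script like this becomes the work list for the review: each flagged account either gets its access revoked or has its approval paperwork brought up to date.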

Account management processes are discussed in more detail in Chapter 9.

Management review

Management provides resources and strategic direction for all aspects of an organization, including its information security program. As a part of its overall governance, management will need to review key aspects of the security program. There is no single way that this is done; instead, management will review the security program in the style, and with the rigor, with which it reviews other key activities in the organization. In larger organizations, this review will likely be quite formal, with executive-level reports created periodically for senior management, including key activities, events, and metrics (think eye candy here). In smaller organizations, this review will probably be a lot less formal. In the smallest organizations, as well as organizations with lower security maturity levels, there may not be any management review at all. Management review often includes these activities:

  • Review of recent security incidents
  • Review of security-related spending
  • Review (and ratification) of recent policy changes
  • Review (and ratification) of risk treatment decisions
  • Review (and ratification) of major changes to security-related processes, and the security-related components of other business processes
  • Review of operational- and management-level metrics and risk indicators

The internationally recognized standard, ISO/IEC 27001, “Information technology — Security techniques — Information security management systems — Requirements,” requires that an organization’s management determine what activities and elements in the information security program need to be monitored, the methods to be used, and the individuals or teams that will review them.

Key performance and risk indicators

Key performance and risk indicators are meaningful measurements of key activities in an information security program that can be used to help management at every level better understand how well the security program and its components are performing.

This is easier said than done; here are a few reasons why:

  • There is no single set of universal metrics that are applicable to every organization.
  • There are different ways to measure performance and risk.
  • Executives will want key activities measured in specific ways.
  • Maturity levels vary from organization to organization.

Organizations will typically develop metrics and key risk indicators (KRIs) around their key security-related activities to ensure that security processes are operating as expected. Metrics help identify improvement areas by alerting management to unexpected trends.

Some of the focus areas for security metrics include the following:

  • Vulnerability management: Operational metrics will include numbers of scans performed, numbers of vulnerabilities identified (and by severity), and numbers of patches applied. Key risk indicators will focus on the elapsed time between the public release of a vulnerability and the completion of patching.
  • Incident response: Operational metrics will focus on the numbers and categories of incidents, and whether trends suggest new weaknesses in defenses. Key risk indicators will focus on the time required to realize an incident is in progress (known as dwell time) and the time required to contain and resolve the incident.
  • Security awareness training: Operational metrics and key risk indicators generally focus on the completion rate over time.
  • Logging and monitoring: Operational metrics generally focus on the numbers and types of events that occur. Key risk indicators focus on the proportion of assets whose logs are being monitored, and the elapsed time between the start of an incident and the time when personnel begin to take action.

Key risk indicators are so called because they are harbingers of information risk in an organization. Although developing operational metrics is not all that difficult, security managers often struggle to develop key risk indicators that make sense to executive management. For example, the vulnerability management process involves the use of one or more vulnerability scanning tools and subsequent remediation efforts. Here, some good operational metrics include numbers of scans performed, numbers of vulnerabilities identified, and the time required to remediate identified vulnerabilities. These metrics, however, mean little to executive management because they lack business context. Fortunately, one or more good key risk indicators can be derived from data in the vulnerability management process. For instance, “percentage of servers supporting manufacturing whose critical security defects are not remediated within ten days” is a great key risk indicator. This metric directly helps management understand how well the vulnerability management process is performing in a specific business context. It is also a good leading indicator of the risk of a potential breach (which exploits an unpatched, vulnerable server) that could impact business operations (manufacturing, in this case).
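A KRI like the manufacturing-server example can be computed directly from vulnerability scan data; the findings records below are illustrative:

```python
from datetime import date

def kri_overdue_remediation(findings, sla_days=10, today=None):
    """Percentage of critical findings still open past the remediation SLA."""
    today = today or date.today()
    critical = [f for f in findings if f["severity"] == "critical"]
    if not critical:
        return 0.0
    overdue = [
        f for f in critical
        if f["remediated_on"] is None and (today - f["found_on"]).days > sla_days
    ]
    return 100.0 * len(overdue) / len(critical)

findings = [  # illustrative scan results for manufacturing servers
    {"severity": "critical", "found_on": date(2024, 5, 1), "remediated_on": date(2024, 5, 6)},
    {"severity": "critical", "found_on": date(2024, 5, 1), "remediated_on": None},
    {"severity": "high",     "found_on": date(2024, 5, 2), "remediated_on": None},
]
print(kri_overdue_remediation(findings, today=date(2024, 5, 20)))  # → 50.0
```

Notice that the business context comes entirely from how the findings are scoped (servers supporting manufacturing) and from the SLA threshold, not from the scanner itself.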

Backup verification data

Organizations need to routinely review and test system and data backups, along with recovery procedures, to ensure they are accurate, complete, and readable. In particular, organizations need to regularly test their ability to actually recover data from backup media, so that they can be confident of doing so in the event of a hardware malfunction or disaster.

On the surface, this seems easy enough. But, as they say, the devil’s in the details. There are several gotchas and considerations, including the following:

  • Data recovery versus disaster recovery: There are two main reasons for backing up data:

    • Data recovery: When various circumstances require the recovery of data from a past state.
    • Disaster recovery: When an event has resulted in damage to primary processing systems, necessitating recovery of data onto alternate processing systems.

    For data recovery, you want your backup media (in whatever form) logically and physically near your production systems, so that the logistics of data recovery are simple. However, disaster recovery requires backup media to be far away from the primary processing site so that it is not involved in the same natural disaster. These two are at odds with one another; organizations sometimes solve this by creating two sets of backup media: One stays in the primary processing center, while the other is stored at a secure, offsite storage facility.

  • Data integrity: For requests to “roll back” data to an earlier date and time, it is vital to know exactly what data needs to be recovered. Database management systems enforce a rule known as referential integrity, which means that a database cannot be recovered to a state in which the relationships between tables, expressed through primary and foreign keys, would be broken. This issue often comes into play in larger, distributed systems with multiple databases on different servers, sometimes owned by different organizations.
  • Version control: For requests to recover data to an earlier state, personnel also need to be mindful of all changes to programs and database design that are dependent on one another. For instance, rolling data back to a point in time last week may also require that associated computer programs be rolled back, if there were changes in the application that involved both code and data changes. Further, rolling back to an earlier point in time could also involve other components such as run-time libraries, subsystems such as Java, and even operating system versions and patches.
  • Staging environments: Depending upon the reason for recovering data from a point in time in the past, it may be appropriate to recover data onto a separate environment. For instance, if certain transactions in an e-commerce environment were lost, it may make sense to recover data including the lost transactions onto a test server, so that those transactions can be found. If older data was recovered onto the primary production environment, this would effectively wipe out transactions from that point in time up to the present.
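One basic building block of backup verification is confirming that what comes off the backup media matches what went onto it. The following Python sketch records a checksum at backup time, restores into a staging area (never production, per the staging-environment point above), and compares. The file names and directories are hypothetical stand-ins for real systems:

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large backups don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def back_up(source: Path, backup_dir: Path) -> str:
    """Copy a file to the backup location and record its checksum at backup time."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy2(source, backup_dir / source.name)
    return sha256_of(source)

def verify_restore(backup_dir: Path, name: str, staging_dir: Path,
                   expected: str) -> bool:
    """Restore into a staging area (never production) and compare checksums."""
    staging_dir.mkdir(parents=True, exist_ok=True)
    restored = staging_dir / name
    shutil.copy2(backup_dir / name, restored)
    return sha256_of(restored) == expected

# Demonstration with a scratch directory standing in for real systems.
work = Path(tempfile.mkdtemp())
source = work / "orders.db"
source.write_bytes(b"transaction records")
checksum = back_up(source, work / "backups")
restore_ok = verify_restore(work / "backups", "orders.db", work / "staging", checksum)
print(restore_ok)  # True
```

A checksum match proves the media is readable and intact; it does not prove the application can actually use the restored data, which is why full recovery exercises are still required.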

Training and awareness

Organizations need to measure the participation in and effectiveness of security training and awareness programs. Doing so helps ensure that individuals at all levels of the organization understand how to respond to new and evolving threats and vulnerabilities. Security awareness training is discussed in Chapter 3.

Disaster recovery and business continuity

Organizations need to periodically review and test their disaster recovery (DR) and business continuity (BC) plans, to determine whether recovery plans are up-to-date and will result in the successful continuation of critical business processes in the event of a disaster. Disaster recovery and business continuity plan development and testing are discussed in Chapters 3 and 9.

tip Information security continuous monitoring (ISCM) is defined in NIST SP 800-137 as “maintaining ongoing awareness of information security, vulnerabilities, and threats to support organizational risk management decisions.” An ISCM strategy helps the organization to systematically maintain an effective security management program in a dynamic environment.

Analyze Test Output and Generate Reports

Various systems and tools are capable of producing volumes of log and test data. Without proper analysis and interpretation, this data is useless or may be taken out of context. Security professionals must be able to analyze log and test data and report this information in meaningful ways, so that senior management can understand organizational risks and make informed security decisions.

Often this requires that test output and reports be developed for different audiences, with information in a form that is useful to each. For example, the output of a vulnerability scan report — with its lists of IP addresses, DNS names, and vulnerabilities with their respective Common Vulnerabilities and Exposures (CVE) identifiers and Common Vulnerability Scoring System (CVSS) scores — would be useful to system engineers and network engineers, who would use such reports as lists of individual defects to be fixed. But give that report to a senior executive, and he or she will have little idea what it’s about or what it means in business terms. For senior executives, vulnerability scan data should be rolled up into meaningful business metrics and key risk indicators that inform senior management of any appreciable changes in risk levels.

The key here for information security professionals is knowing the meaning of data and transforming it for various purposes and different audiences. Security professionals who do this well are more easily able to obtain funding for additional tools and staff. This is because they’re able to articulate the need for resources in business terms.

Conduct or Facilitate Security Audits

Auditing is the process of examining systems and/or business processes to ensure that they’ve been properly designed, are being properly used, and are operating effectively. Audits are frequently performed by an independent third party or an independent group within an organization. This helps to ensure that the audit results are accurate and are not biased because of organizational politics or other circumstances.

Audits are frequently performed to ensure an organization is in compliance with business or security policies and other requirements that the business may be subject to. These policies and requirements can include government laws and regulations, legal contracts, and industry or trade group standards and best practices.

The major factors in play for internal and external audits include

  • Purpose and scope. The reason for an internal or external audit, and the scope of the audit, need to be fully understood by both management in the audited organization and those performing the audit. Scope may include one or more of the following factors:
    • Organization business units and departments
    • Geographic locations
    • Business processes, systems, and networks
    • Time periods
  • Applicable standards or regulations. Often, an audit is performed under the auspices of a law, regulation, or standard, which typically determines such matters as who may perform the audit, auditor qualifications, the type of auditing, the scope of the audit, and the obligations of the audited organization at the conclusion of the audit.
  • Qualifications of auditors. The personnel performing audits often are required to have specific work experience, possess specific training and/or certifications, or work in certain types of firms.
  • Types of auditing. There are several types of audit activities that comprise an audit, including:
    • Observation. Auditors passively observe activities performed by personnel and/or information systems.
    • Inquiry. Auditors ask questions of control or process owners to understand how key activities are performed.
    • Inspection. Auditors inspect documents, records, and systems to verify that key controls or processes are operating properly.
    • Reperformance. Auditors perform tasks or transactions on their own to see whether the results are correct.
  • Sampling. The process of selecting a representative subset of items from a large population is known as sampling. Regulations and standards often specify the types and rates of sampling that are required for an audit.
  • Management response. In some types of audits, management in the auditee organization is permitted to write a statement in response to the auditor’s findings.
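To illustrate the sampling activity above, here is a brief Python sketch of two common audit sampling approaches — simple random sampling and systematic sampling — applied to a hypothetical population of change tickets. The population, sample size, and random seed are all invented for illustration:

```python
import random

# Hypothetical population under audit: 500 change-ticket identifiers.
population = [f"CHG-{n:04d}" for n in range(1, 501)]

# Auditors often fix a seed so the selection is reproducible and a reviewer
# can re-derive exactly the same sample.
rng = random.Random(2024)

# Simple random sample: every item has an equal chance of selection.
sample = rng.sample(population, 25)
print(sorted(sample))

# Systematic sampling: every k-th item from a random starting point.
k = len(population) // 25          # sampling interval (here, 20)
start = rng.randrange(k)
systematic = population[start::k][:25]
```

Which method (and what sample size) is acceptable is often dictated by the applicable regulation or standard, so the choice is rarely the auditor's alone.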

There are three main contexts for audits of information systems and related processes:

  • Internal audit: Personnel in the organization will conduct an audit on selected information systems and/or business processes.
  • External audit: Auditors from an outside firm will conduct an audit on one or more information systems and/or business processes.
  • Third-party audit: Auditors, internal or external, will perform an audit of a third-party service provider that is performing services on behalf of the organization. For example, an organization may outsource a part of its software development to another company. From time to time, the organization will audit the software development company, to ensure that its business processes and information systems are in compliance with applicable regulations and business requirements.

tip Security professionals who are interested in the information systems auditing profession may want to explore the Certified Information Systems Auditor (CISA) certification.

tip Business-critical systems need to be subject to regular audits as dictated by regulatory, contractual, or trade group requirements.

warning For organizations that are subject to regulatory requirements, such as Sarbanes-Oxley (discussed in Chapter 3), it’s all too easy and far too common to make the mistake of focusing on audits and compliance rather than on implementing a truly effective and comprehensive security strategy. Remember, compliance does not equal security. Compliance isn’t optional, but neither is security. Don’t assume that achieving compliance will automatically achieve effective security (or vice versa). Fortunately, security and compliance aren’t mutually exclusive — but you need to ensure your efforts truly achieve both objectives.
