Chapter 3

Security and Risk Management

IN THIS CHAPTER

check Aligning security to the business

check Understanding security governance principles and concepts

check Recognizing legal, regulatory, compliance and professional ethics issues

check Documenting security policies, standards, procedures and guidelines

check Developing business continuity requirements

check Implementing personnel security policies

check Applying risk management concepts and threat modeling

check Integrating security risk considerations

check Establishing and monitoring security education, training, and awareness programs

The Security and Risk Management domain addresses many fundamental security concepts and principles, as well as compliance, ethics, governance, security policies and procedures, business continuity planning, risk management, and security education, training, and awareness. This domain represents 15 percent of the CISSP certification exam.

Apply Security Governance Principles

For the CISSP exam, you must fully understand and be able to apply security governance principles including:

  • Alignment of security function to business strategy, goals, mission, and objectives
  • Organizational processes
  • Security roles and responsibilities
  • Control frameworks
  • Due care
  • Due diligence

Alignment of security function to business strategy, goals, mission, and objectives

In order for an information security program to be effective, it must be aligned with the organization’s mission, strategy, goals, and objectives; thus you must understand the differences and relationships between an organization’s mission statement, strategy, goals, and objectives. You also need to know how these elements can affect the organization’s information security policies and program. Proper alignment with the organization’s mission, strategy, goals, and objectives also helps to build business cases, secure budgets, and allocate resources for security program initiatives. With proper alignment, security projects and other activities are appropriately prioritized, and they fit better into organization policies, practices, and processes.

Mission (not-so-impossible) and strategy

Corny heading, yes, but there’s a good chance you’re humming the Mission Impossible theme song now — mission accomplished!

An organization’s mission statement expresses its reason for existence. A good mission statement is an easily understood, general-purpose statement that says what the organization is, what it does, and why it exists.

An organization’s strategy describes how it accomplishes its mission and is frequently adapted to address new challenges and business realities.

Goals and objectives

A goal is something (or many somethings) that an organization hopes to accomplish. A goal should be consistent with the organization’s mission statement or philosophy, and it should help define a vision for the organization. It should also whip people into a wild frenzy, running around their offices, waving their arms in the air, and yelling “GOOOAAALLL!” (Well, maybe only if they’re World Cup fans.)

An objective is a milestone or a specific result that is expected and, as such, helps an organization attain its goals and achieve its mission.

Security personnel should be acutely aware of their organizations’ goals and objectives. Only then can security professionals ensure that security capabilities will work with and protect all the organization’s current, changing, and new products, services, and endeavors.

warning Organizations often use the terms goals and objectives interchangeably without distinction. Worse yet, some organizations refer to goals as long-term objectives, and objectives as short-term goals! For the CISSP exam, an objective (short-term) supports a goal (intermediate-term), which supports a mission (long-term), which is accomplished with a well-defined strategy. All of these fall under the umbrella of the organization’s mission statement.

Organizational processes (security executive oversight)

In this section, we discuss key processes in the realm of security governance.

Governance committees and executive oversight

Security management starts (or should start!) at the top with executive management and board-level oversight. This generally takes the form of security governance, which simply means that the organization’s governing body has set the direction and the organization has policies and processes in place to ensure that executive management is following that direction, is fully informed, and is in control of information security strategy, policy, and operations.

A governance committee is a group of executives and/or managers who regularly meet to review security incidents, projects, operational metrics, and other aspects of concern to them. The governance committee will occasionally issue mandates to security management about new business activities and shifts in priorities and strategic direction.

In practice, this is not much different from governance in IT or other departments. Governance is how executive management stays involved in the goings-on in IT, security, and other parts of the business.

Acquisitions and divestitures

Organizations, particularly in private industry, are continually reinventing themselves. More than ever before, it is important to be agile and competitive. As a result, organizations acquire other organizations, split themselves into two (or more) separate companies, and reorganize internally to change the alignment of teams, departments, divisions, and business units.

There are several security-related considerations that should be taken into account when an organization acquires another organization, or when two (or more) organizations merge:

  • Security governance and management. How is security managed in each organization, and what important differences are there?
  • Security policy. How do policies between the two organizations differ, and what issues will be encountered when merging the policies into one?
  • Security posture. Which security controls are present in each organization, and how different are they from one another?
  • Security operations. What security operations are in place today and how do they operate? This includes vulnerability management, event monitoring, identity and access management, third-party risk management, and incident management.

If the security of one organization is vastly different from another, the organization should not be too hasty to connect the two organizations’ networks together.

Interestingly, a divestiture, in which an organization divides itself into two (or more) separate organizations or sells off a division, can be even trickier. Each new company probably will need to duplicate the security governance, management, controls, operations, and tools that the single organization had before the split. This doesn’t always mean that the two separate security functions need to be the same as the old; it is important to fully understand the business mission of each new organization, and which security regulations and standards apply to it. Only then can information security align to each new organization.

Security roles and responsibilities

The truism that information security is “everyone’s responsibility” is too often put into practice as everyone is responsible, but no one is accountable. To avoid this pitfall, specific roles and responsibilities for information security should be defined in an organization’s security policy, individual job or position descriptions, and third-party contracts. These roles and responsibilities should apply to employees, consultants, contractors, interns, and vendors. And they should apply to every level of staff, from C-level executives to line employees.

Management

Senior-level management is often responsible for information security at several levels, including the role as an information owner, which we discuss in the following section. However, in this context, management has a responsibility to demonstrate a strong commitment to an organization’s information security program through the following actions:

  • Creating, mandating, and approving a corporate information security policy: This policy should include a statement of support from management and should also be signed by the CEO, COO, CIO, or the Chairman of the Board.
  • Leading by example: A CEO who carries a mandatory identification badge and utilizes system access controls sets a good example.
  • Rewarding compliance: Management should expect proper security behavior and acknowledge, recognize, and/or reward employees accordingly.

remember Management is always ultimately responsible for an organization’s overall information security and for any information security decisions that are made (or not made). Our role as information security professionals is to report security issues and to make appropriate information security recommendations to management.

Users

An end-user (or user) includes just about everyone within an organization. Users aren’t specifically designated. They can be broadly defined as anyone who has authorized access to an organization’s internal information or information systems. Users include employees, contractors and other temporary help, consultants, vendors, customers, and anyone else with access. Some organizations call them employees, partners, associates, or what-have-you. Typical user responsibilities include

  • Complying with all applicable security requirements defined in organizational policies, standards, and procedures; applicable legislative or regulatory requirements; and contractual requirements (such as non-disclosure agreements and Service Level Agreements).
  • Exercising due care in safeguarding organizational information and information assets.
  • Participating in information security training and awareness efforts as required.
  • Reporting any suspicious activity, security violations, security problems, or security concerns to appropriate personnel.

Control frameworks

Organizations often adopt a control framework to aid in their legal and regulatory compliance efforts. Some examples of relevant security frameworks include

  • COBIT 5. Developed by ISACA (formerly known as the Information Systems Audit and Control Association) and the IT Governance Institute (ITGI), COBIT consists of several components, including:

    • Framework. Organizes IT governance objectives and best practices.
    • Process descriptions. Provides a reference model and common language.
    • Maturity models. Assess organizational maturity/capability and address gaps.

    The COBIT framework is popular in organizations that are subject to the Sarbanes-Oxley Act (SOX; discussed later in this chapter) or that otherwise report on internal control over financial reporting (ICOFR).

  • NIST (National Institute of Standards and Technology) Special Publication 800-53: Security and Privacy Controls for Federal Information Systems and Organizations. Known among information security professionals as NIST 800-53, this is a very popular and comprehensive controls framework required for U.S. government agencies. It is also widely used in private industry, both in the U.S. and throughout the world.
  • COSO (Committee of Sponsoring Organizations of the Treadway Commission). Developed by the Institute of Management Accountants (IMA), the American Accounting Association (AAA), the American Institute of Certified Public Accountants (AICPA), The Institute of Internal Auditors (IIA), and Financial Executives International (FEI), the COSO framework consists of five components:
    • Control environment. Provides the foundation for all other internal control components.
    • Risk assessment. Establishes objectives through identification and analysis of relevant risks and determines whether anything will prevent the organization from meeting its objectives.
    • Control activities. Policies and procedures that are created to ensure compliance with management directives. Various control activities are discussed in the other chapters of this book.
    • Information and communication. Ensures appropriate information systems and effective communications processes are in place throughout the organization.
    • Monitoring activities. Activities that assess performance over time and identify deficiencies and corrective actions.
  • ISO/IEC 27002 (International Organization for Standardization/International Electrotechnical Commission). Formally titled “Information technology — Security techniques — Code of practice for information security controls,” ISO/IEC 27002 documents security best practices in 14 domains, as follows:
    • Information security policies
    • Organization of information security
    • Human resource security
    • Asset management
    • Access control
    • Cryptography
    • Physical and environmental security
    • Operations security
    • Communications security
    • Systems acquisition, development, and maintenance
    • Supplier relationships
    • Information security incident management
    • Information security aspects of business continuity management
    • Compliance
  • ITIL (Information Technology Infrastructure Library). A set of best practices for IT service management consisting of five volumes, as follows:
    • Service Strategy. Addresses IT services strategy management, service portfolio management, IT services financial management, demand management, and business relationship management.
    • Service Design. Addresses design coordination, service catalog management, service level management, availability management, capacity management, IT service continuity management, information security management system, and supplier management.
    • Service Transition. Addresses transition planning and support, change management, service asset and configuration management, release and deployment management, service validation and testing, change evaluation, and knowledge management.
    • Service Operation. Addresses event management, incident management, service request fulfillment, problem management, and access management.
    • Continual Service Improvement. Defines a seven-step process for improvement initiatives, including identifying the strategy, defining what will be measured, gathering the data, processing the data, analyzing the information and data, presenting and using the information, and implementing the improvement.

Due care

Due care is the conduct that a reasonable person exercises in a given situation, which provides a standard for determining negligence. In the practice of information security, due care relates to the steps that individuals or organizations take to perform their duties and implement security best practices.

Another important aspect of due care is the principle of culpable negligence. If an organization fails to follow a standard of due care in the protection of its assets (or its personnel), the organization may be held culpably negligent. In such cases, jury awards may be adjusted accordingly, and the organization’s insurance company may be required to pay only a portion of any loss — the organization may get stuck paying the rest of the bill!

Due diligence

Due diligence is the prudent management and execution of due care. It’s most often used in legal and financial circles to describe the actions that an organization takes to research the viability and merits of an investment or merger/acquisition opportunity. In the context of information security, due diligence commonly refers to risk identification and risk management practices, not only in the day-to-day operations of an organization, but also in the case of technology procurement, as well as mergers and acquisitions.

warning The concepts of due care and due diligence are related but distinctly different. For example, in practice, due care is turning on logging; due diligence is regularly reviewing the logs.

Understand and Apply Concepts of Confidentiality, Integrity, and Availability

The CIA triad (also referred to as ICA) forms the basis of information security (see Figure 3-1). The triad is composed of three fundamental information security concepts:

  • Confidentiality
  • Integrity
  • Availability
image

FIGURE 3-1: The C-I-A triad.

As with any triangular shape, all three sides depend on each other (think of a three-sided pyramid or a three-legged stool) to form a stable structure. If one piece falls apart, the whole thing falls apart.

Confidentiality

Confidentiality limits access to information to subjects (users and machines) that require it. Privacy is a closely related concept that’s most often associated with personal data. Various U.S. and international laws exist to protect the privacy (confidentiality) of personal data.

Personal data most commonly refers to personally identifiable information (PII) or personal health information (PHI). PII includes names, addresses, Social Security numbers, contact information (in some cases), and financial or medical data. PHI consists of many of the same data elements as PII, but also includes an individual patient’s medical records and healthcare payment history. Personal data, in more comprehensive legal definitions (particularly in Europe), may also include race, marital status, sexual orientation or lifestyle, religious preference, political affiliations, and any number of other unique personal characteristics that may be collected or stored about an individual.

tip The U.S. Health Insurance Portability and Accountability Act (HIPAA), discussed later in this chapter, defines PHI as protected health information. In its more general context, PHI refers to personal health information.

The objective of privacy is the confidentiality and proper handling of personal data.

Integrity

Integrity safeguards the accuracy and completeness of information and processing methods. It ensures that

  • Unauthorized users or processes don’t make modifications to data.
  • Authorized users or processes don’t make unauthorized modifications to data.
  • Data is internally and externally consistent, meaning a given input produces an expected output.
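Integrity controls are commonly built on cryptographic hashes, which detect any modification to data. The following is a minimal sketch of the idea (the data values are hypothetical, and a real integrity program would also protect the baseline digest itself):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 digest of data as a hex string."""
    return hashlib.sha256(data).hexdigest()

# Record a baseline digest while the data is known to be good.
original = b"quarterly-report-v1"
baseline = sha256_of(original)

# Later, recompute and compare: any modification to the data
# (even a single bit) produces a different digest.
tampered = b"quarterly-report-v2"
assert sha256_of(original) == baseline   # integrity intact
assert sha256_of(tampered) != baseline   # modification detected
```

Note that a hash alone detects changes but doesn’t distinguish authorized from unauthorized ones; that distinction comes from access controls and change management.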

Availability

Availability ensures that authorized users have reliable and timely access to information, and to associated systems and assets, when and where needed. Availability is easily one of the most overlooked aspects of information security. In addition to Denial of Service attacks, threats to availability include single points of failure, inadequate capacity planning (for storage, bandwidth, and processing power), equipment malfunctions, and business interruptions or disasters.
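Availability is often quantified using mean time between failures (MTBF) and mean time to repair (MTTR). The sketch below is a minimal illustration with hypothetical figures, not a prescribed method:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Availability = MTBF / (MTBF + MTTR), as a fraction of uptime."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical example: a server that fails on average every
# 2,000 hours and takes 4 hours to repair.
a = availability(2000, 4)
print(f"{a:.4%}")   # roughly 99.80 percent uptime
```

Notice that availability improves either by making failures rarer (raising MTBF) or by repairing them faster (lowering MTTR), which is why redundancy and incident response both matter.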

Compliance

Compliance is composed of the set of activities undertaken by an organization in its attempts to abide by applicable laws, regulations, standards, and other legal obligations such as contract terms and conditions and service-level agreements (SLAs).

Because of the nature of compliance, and because there are many security- and privacy-related laws and standards, many organizations have adopted the fatally mistaken notion that being compliant with security regulations is the same thing as being secure. It is more accurate to say that compliance with security regulations and standards is a step in the right direction on the journey to becoming secure. The nature of threats today makes it plain that even organizations that are fully compliant with applicable security laws, regulations, and standards may be woefully insecure.

Legislative and regulatory compliance

A basic understanding of the major types and classifications of U.S. and international law, including key concepts and terms, is required for the CISSP exam.

Common law

Common law (also known as case law) originated in medieval England, and is derived from the decisions (or precedents) of judges. Common law is based on the doctrine of stare decisis (“let the decision stand”) and is often codified by statutes. Under the common law system of the United States, three major categories of laws are defined at the federal and state levels: criminal, civil (or tort), and administrative (or regulatory) laws.

Criminal law

Criminal law defines those crimes committed against society, even when the actual victim is a business or individual(s). Criminal laws are enacted to protect the general public. As such, in the eyes of the court, the victim is incidental to the greater cause.

CRIMINAL PENALTIES

Penalties under criminal law have two main purposes:

  • Punishment: Penalties may include jail/prison sentences, probation, fines, and/or financial restitution to the victim.
  • Deterrence: Penalties must be severe enough to dissuade any further criminal activity by the offender or anyone else considering a similar crime.

BURDEN OF PROOF UNDER CRIMINAL LAW

To be convicted under criminal law, a judge or jury must believe beyond a reasonable doubt that the defendant is guilty. Therefore, the burden of proof in a criminal case rests firmly with the prosecution.

CLASSIFICATIONS OF CRIMINAL LAW

Criminal law has two main classifications, depending on severity, such as the type of crime or attack, or the total loss in dollars:

  • Felony: More serious crimes, normally resulting in jail/prison terms of more than one year.
  • Misdemeanor: Less serious crimes, normally resulting in fines or jail/prison terms of less than one year.

Civil law

Civil (tort) law addresses wrongful acts committed against an individual or business, either willfully or negligently, resulting in damage, loss, injury, or death.

CIVIL PENALTIES

Unlike criminal penalties, civil penalties don’t include jail or prison terms. Instead, civil penalties provide financial restitution to the victim:

  • Compensatory damages: Actual damages to the victim, including attorney/legal fees, lost profits, investigative costs, and so on.
  • Punitive damages: Determined by a jury and intended to punish the offender.
  • Statutory damages: Mandatory damages determined by law and assessed for violating the law.

BURDEN OF PROOF UNDER CIVIL LAW

Findings of liability under civil law are typically easier to obtain than convictions under criminal law because the burden of proof is much lower. To prevail in a civil case, the plaintiff must show by a preponderance of the evidence that the defendant is liable. This simply means that the available evidence leads the judge or jury to conclude that the defendant’s liability is more likely than not.

LIABILITY AND DUE CARE

The concepts of liability and due care are germane to civil law cases, but they’re also applicable under administrative law, which we discuss in the next section.

The standard criterion for assessing the legal requirement to implement a recommended safeguard is to compare the cost of the safeguard with the estimated loss from the corresponding threat, if realized. If the cost is less than the estimated loss and the organization doesn’t implement the safeguard, legal liability may exist. This is based on the principle of proximate causation, in which an action taken or not taken was part of a sequence of events that resulted in negative consequences.
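This cost-versus-loss comparison is often performed with the annualized loss expectancy (ALE), a risk management calculation covered later in this chapter. A minimal sketch, using hypothetical dollar figures:

```python
def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """ALE = single loss expectancy (SLE) x annualized rate of occurrence (ARO)."""
    return sle * aro

# Hypothetical figures: a threat causing $50,000 per incident,
# expected about once every five years (ARO = 0.2).
ale = annualized_loss_expectancy(50_000, 0.2)   # $10,000 per year
safeguard_cost = 8_000                          # annual cost of the control

# If the safeguard costs less than the annualized loss it prevents,
# failing to implement it may expose the organization to liability.
if safeguard_cost < ale:
    print("Implementing the safeguard is economically justified")
```

The comparison only establishes a floor for due care; courts and regulators may expect more than the bare economics suggest.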

Under the Federal Sentencing Guidelines, senior corporate officers may be personally liable if their organization fails to comply with applicable laws. Such individuals must follow the prudent man (or person) rule, which requires them to perform their duties:

  • In good faith.
  • In the best interests of the enterprise.
  • With the care and diligence that ordinary, prudent people in a similar position would exercise under similar circumstances.

Administrative law

Administrative (regulatory) laws define standards of performance and conduct for major industries (including banking, energy, and healthcare), organizations, and government agencies. These laws are typically enforced by various government agencies, and violations may result in financial penalties and/or imprisonment.

International law

Given the global nature of the Internet, it’s often necessary for countries to cooperate in order to bring a computer criminal to justice. But because practically every country in the world has its own unique legal system, such cooperation is always difficult and often impossible. As a starting point, countries sometimes disagree on exactly what justice is. Other problems include

  • Lack of universal cooperation: We can’t answer the question, “Why can’t we all just get along?” but we can tell you that it’s highly unlikely that a 14-year-old hacker in some remote corner of the world will commit some dastardly crime that unites us all in our efforts to take him down, bringing about a lasting world peace.
  • Differing interpretations of laws: What’s illegal in one country (or even in one state in the U.S.) isn’t necessarily illegal in another.
  • Differing rules of evidence: This problem can encompass different rules for obtaining and collecting evidence, as well as different rules for admissibility of evidence.
  • Low priority: Different nations have different views regarding the seriousness of computer crimes; and in the realm of international relations, computer crimes are usually of minimal concern.
  • Outdated laws and technology: Related to the low-priority problem. Technology varies greatly throughout the world, and many countries (not only developing countries) lag far behind others. For this reason and many others, computer crime laws are often a low priority and aren’t kept current. This problem is further exacerbated by the differing technical capabilities of the various law enforcement agencies that may be involved in an international case.
  • Extradition: Many countries don’t have extradition treaties and won’t extradite suspects to a country that has different or controversial practices, such as capital punishment. Although capital punishment for a computer crime may sound extreme, recent events and the threat of cyberterrorism make this a very real possibility.

Besides common law systems (which we talk about in the section “Common law,” earlier in this chapter), other countries throughout the world use legal systems including:

  • Civil law systems: Not to be confused with U.S. civil law, which is based on common law. Civil law systems use constitutions and statutes exclusively and aren’t based on precedent. The role of a judge in a civil law system is to interpret the law. Civil law is the most widespread type of law system used throughout the world.
  • Napoleonic code: Originating in France after the French Revolution, the Napoleonic code has spread to many other countries in Europe and elsewhere. In this system, laws are developed by legislative bodies and interpreted by the courts. However, there is often no formal concept of legal precedent.
  • Religious (or customary) law systems: Derived from religious beliefs and values. Common religious law systems include Sharia in Islam, Halakha in Judaism, and Canon law in Christianity.
  • Pluralistic (or mixed) law systems: Combinations of various systems, such as civil and common law, civil and religious law, and common and religious law.

Privacy requirements compliance

Privacy and data protection laws are enacted to protect information collected and maintained about individuals from unauthorized disclosure or misuse. Privacy law is one area in which the United States lags behind many others, particularly the European Union (EU), whose General Data Protection Regulation (GDPR) imposes increasingly strict rules that restrict the transfer of personal information to countries (including the United States) that don’t protect such information to an equivalent degree. The EU GDPR privacy rules include the following requirements about personal data and records:

  • Must be collected fairly and lawfully, and only after the subject has provided explicit consent.
  • Must only be used for the purposes for which it was collected and only for a reasonable period of time.
  • Must be accurate and kept up to date.
  • Must be accessible to individuals who request a report on personal information held about themselves.
  • Individuals must have the right to have any errors in their personal data corrected.
  • Individuals must have the right for their information to be expunged from an organization’s information systems.
  • Personal data can’t be disclosed to other organizations or individuals unless authorized by law or consent of the individual.
  • Transmission of personal data to locations where equivalent privacy protection cannot be assured is prohibited.

Specific privacy and data protection laws are discussed later in this chapter.

Understand Legal and Regulatory Issues that Pertain to Information Security in a Global Context

CISSP candidates are expected to be familiar with the laws and regulations that are relevant to information security throughout the world and in various industries. This could include national laws, local laws, and any laws that pertain to the types of activities performed by organizations.

Computer crimes

Computer crime consists of any criminal activity in which computer systems or networks are used as tools. Computer crime also includes crimes in which computer systems are targeted, or in which computers are the scene of the crime committed. That’s a pretty wide spectrum.

The real world, however, has difficulty dealing with computer crimes. Several reasons why computer crimes are hard to cope with include

  • Lack of understanding: In general, legislators, judges, attorneys, law enforcement officials, and jurors don’t understand the many different technologies and issues involved in a computer crime.
  • Inadequate laws: Laws are slow to change, and fail to keep pace with rapidly evolving new technology.
  • Encryption: Increasingly, there are cases where law enforcement organizations are hindered in their criminal investigations because of advanced encryption techniques in mobile devices.
  • Multiple roles of computers in crime: These roles include crimes committed against a computer (such as hacking into a system and stealing information) and crimes committed by using a computer (such as using a system to launch a Distributed Denial of Service attack). Computers may also support criminal enterprises, where criminals use computers for crime-related recordkeeping or communications.

Computer crimes are often difficult to prosecute for the reasons we just listed, and also because of the following issues:

  • Lack of tangible assets: Traditional rules of property often don’t clearly apply in a computer crime case. However, property rules have been extended in many countries to include electronic information. Computing resources, bandwidth, and data (in the form of magnetic particles) are often the only assets at issue. These can be very difficult to quantify and assign a value to. The asset valuation process, which we discuss later in this chapter, can provide vital information for valuing electronic information.
  • Rules of evidence: Often, original documents aren’t available in a computer crime case. Most evidence in such a case is considered hearsay evidence (which we discuss later in the upcoming section “Hearsay rule”) and must meet certain requirements to be admissible in court. Often, evidence is a computer itself, or data on its hard drive.
  • Lack of evidence: Many crimes are difficult to prosecute because law enforcement agencies lack the skills or resources to even identify the perpetrator, much less gather sufficient evidence to bring charges and successfully prosecute. Frequently, skilled computer criminals use a long trail of compromised computers through different countries in order to make it as difficult as possible for even diligent law enforcement agencies to identify them. Further, encryption techniques sometimes prevent law enforcement from being able to search computers and mobile devices for evidence.
  • Definition of loss: A loss of confidentiality or integrity of data goes far beyond the normal definition of loss in a criminal or civil case.
  • Location of perpetrators: Often, the people who commit computer crimes against specific organizations do so from locations outside of the victim’s country. Computer criminals do this, knowing that even if they make a mistake and create discoverable evidence that identifies them, the victim’s country law enforcement agencies will have difficulty apprehending the criminal.
  • Criminal profiles: Computer criminals aren’t necessarily hardened criminals and may include the following:
    • Juveniles: In many countries, computer crime committed by juveniles isn’t taken seriously, and juvenile laws are inadequate to deter crime. A busy prosecutor is unlikely to pursue a low-profile crime committed by a juvenile that results in a three-year probation sentence for the offender.
    • Trusted individuals: Many computer criminals are individuals who hold a position of trust within a company and have no prior criminal record. Such an individual likely can afford a dream team for legal defense, and a judge may be inclined to levy a more lenient sentence for the first-time offender. However, recent corporate scandals in the U.S. have set a strong precedent for punishment at the highest levels.

Computer crimes are often classified under one of the following seven major categories:

  • Industrial espionage: Businesses are increasingly the targets of industrial espionage. These attacks include competitive intelligence gathering, as well as theft of product specifications, plans, and schematics, and business information such as marketing and customer information. Businesses can be inviting targets for an attacker due to

    • Lack of expertise: Despite heightened security awareness, a shortage of qualified security professionals exists and is getting worse. This results in organizations not having adequate preventive, detective, and response capabilities.
    • Lack of resources: Businesses often lack the resources to prevent, or even detect, attacks against their systems.
    • Lack of concern: Executive management and boards of directors in many organizations still turn a blind eye to requests for security resources.
    • Lack of reporting or prosecution: Because of public relations concerns and the inability to prosecute computer criminals because of either a lack of evidence or a lack of properly handled evidence, the majority of business attacks still go unreported. Further, few jurisdictions require organizations to disclose break-ins involving intellectual property.

    The cost to businesses can be significant, including loss of trade secrets or proprietary information, loss of revenue, and loss of reputation when intrusions are made public.

  • Financial attacks: Banks, large corporations, and e-commerce sites are the targets of financial attacks, many of which are motivated by greed. Financial attacks may seek to steal or embezzle funds, gain access to online financial information, extort individuals or businesses, or obtain the personal credit card numbers of customers. Ransomware attacks are immensely successful forms of financial attacks that encrypt information and demand a cryptocurrency ransom for the key to decrypt the information. Destructware attacks are similar to ransomware in that they often demand ransoms but do not provide keys to recover the encrypted information.
  • “Fun” attacks: “Fun” attacks are perpetrated by thrill-seekers and script kiddies who are motivated by curiosity or excitement. Although these attackers may not intend to do any harm or use any of the information that they access, they’re still dangerous and their activities are still illegal.

    These attacks can also be relatively easy to detect and prosecute. Because the perpetrators are often script kiddies (hackers who use scripts or programs written by other hackers because they don’t have programming skills themselves) or otherwise-inexperienced hackers, they may not know how to cover their tracks effectively.

    Also, because no real harm is normally done nor intended against the system, it may be tempting (although ill-advised) for a business to prosecute the individual and put a positive public relations spin on the incident. You’ve seen the film at 11:00: “We quickly detected the attack, prevented any harm to our network, and prosecuted the responsible individual; our security is unbreakable!” Such action, however, will likely motivate others to launch a more serious and concerted grudge attack against the business.

    Many computer criminals in this category only seek notoriety. Although it’s one thing to brag to a small circle of friends about defacing a public website, the wily hacker who appears on CNN reaches the next level of hacker celebrity-dom. These twisted individuals want to be caught to revel in their 15 minutes of fame.

  • Grudge attacks: Grudge attacks are targeted at individuals or businesses, and the attacker is motivated by a desire to take revenge against a person or organization. A disgruntled employee, for example, may steal trade secrets, delete valuable data, or plant a logic bomb in a critical system or application.

    Fortunately, these attacks (at least in the case of a disgruntled employee) can be easier to prevent or prosecute than many other types of attacks because:

    • The attacker is often known to the victim.
    • The attack has a visible impact that produces a viable evidence trail.
    • Most businesses (already sensitive to the possibility of wrongful-termination suits) have well-established termination procedures.
    • Specific laws (such as the U.S. Economic Espionage Act of 1996, which we discuss in the section “U.S. Economic Espionage Act of 1996,” later in this chapter) provide very severe penalties for such crimes.
  • Ideological attacks and hacktivism: Ideological attacks — commonly known as “hacktivism” — have become increasingly common in recent years. Hacktivists typically target businesses or organizations to protest a controversial position that does not agree with their own ideology. These attacks typically take the form of Distributed Denial-of-Service (DDoS) attacks, but can also include data theft. For example, the U.S. Senate and many businesses — including the Sony PlayStation Network — were targeted in 2011 and early 2012 because of their support for the Stop Online Piracy Act (SOPA).
  • Military and political intelligence attacks: Military and political intelligence attacks are perpetrated by criminals, traitors, or foreign military and intelligence agents seeking classified government, law enforcement, or military information. Such attacks are often carried out by governments during times of war and conflict.
  • Terrorist attacks: Terrorism exists at many levels on the Internet. Following the terrorist attacks against the U.S. on September 11, 2001, the general public became painfully aware of the extent of terrorism on the Internet. Terrorist organizations and cells use online capabilities to coordinate attacks, transfer funds, harm international commerce, disrupt critical systems, disseminate propaganda, recruit new members, and gain useful information about developing techniques and instruments of terror, including nuclear, biological, and chemical weapons.

Important international computer crime and information security laws and standards that the CISSP candidate should be familiar with include

  • U.S. Computer Fraud and Abuse Act of 1986
  • U.S. Electronic Communications Privacy Act (ECPA) of 1986
  • U.S. Computer Security Act of 1987
  • U.S. Federal Sentencing Guidelines of 1991 (not necessarily specific to computer crime, but certainly relevant)
  • U.S. Economic Espionage Act of 1996
  • U.S. Child Pornography Prevention Act of 1996
  • USA PATRIOT Act of 2001
  • U.S. Sarbanes-Oxley Act of 2002
  • U.S. Homeland Security Act of 2002
  • U.S. Federal Information Security Management Act (FISMA) of 2002
  • U.S. Controlling the Assault of Non-Solicited Pornography and Marketing (CAN-SPAM) Act of 2003
  • U.S. Identity Theft and Assumption Deterrence Act of 1998
  • U.S. Intelligence Reform and Terrorism Prevention Act of 2004
  • The Council of Europe’s Convention on Cybercrime of 2001
  • The Computer Misuse Act 1990 (U.K.)
  • Privacy and Electronic Communications Regulations of 2003 (U.K.)
  • Information Technology Act 2000 (India)
  • Cybercrime Act of 2001 (Australia)
  • General Data Protection Regulation (GDPR) (EU)
  • Payment Card Industry Data Security Standard (PCI DSS)

It is important to understand that cybersecurity and privacy laws change from time to time. The list of such laws in this book should not be considered complete or up to date. Instead, consider these a sampling of laws from the U.S. and elsewhere.

U.S. Computer Fraud and Abuse Act of 1986, 18 U.S.C. § 1030 (as amended)

In 1984, the first U.S. federal computer crime law, the U.S. Computer Fraud and Abuse Act of 1984, was passed. This intermediate act was narrowly defined and somewhat ambiguous. The law covered:

  • Classified national defense or foreign relations information
  • Records of financial institutions or credit reporting agencies
  • Government computers

The U.S. Computer Fraud and Abuse Act of 1986 enhanced and strengthened the 1984 law, clarifying definitions of criminal fraud and abuse for federal computer crimes and removing obstacles to prosecution.

The Act established two new felony offenses for the unauthorized access of federal interest computers and a misdemeanor for unauthorized trafficking in computer passwords:

  • Felony 1: Unauthorized access, or access that exceeds authorization, of a federal interest computer to further an intended fraud, shall be punishable as a felony [Subsection (a)(4)].
  • Felony 2: Altering, damaging, or destroying information in a federal interest computer or preventing authorized use of the computer or information, that causes an aggregate loss of $1,000 or more during a one-year period or potentially impairs medical treatment, shall be punishable as a felony [Subsection (a)(5)].

    This provision was stricken in its entirety and replaced with a more general provision, which we discuss later in this section.

  • Misdemeanor: Trafficking in computer passwords or similar information if it affects interstate or foreign commerce or permits unauthorized access to computers used by or for the U.S. government [Subsection (a)(6)].

tip The Act defines a federal interest computer (actually, the term was changed to protected computer in the 1996 amendments to the Act) as either a computer

  • “[E]xclusively for the use of a financial institution or the United States government, or, in the case of a computer not exclusively for such use, used by or for a financial institution or the United States government and the conduct constituting the offense affects that use by or for the financial institution or the government”
  • “[W]hich is used in interstate or foreign commerce or communication”

Several minor amendments to the U.S. Computer Fraud and Abuse Act were made in 1988, 1989, and 1990, and more significant amendments were made in 1994, 1996 (by the Economic Espionage Act of 1996), and 2001 (by the USA PATRIOT Act of 2001). The Act, in its present form, establishes eight specific computer crimes. In addition to the three that we discuss in the preceding list, these crimes include the following five provisions (we discuss subsection [a][5] in its current form in the following list):

  • Unauthorized access, or access that exceeds authorization, to a computer that results in disclosure of U.S. national defense or foreign relations information [Subsection (a)(1)].
  • Unauthorized access, or access that exceeds authorization, to a protected computer to obtain any information on that computer [Subsection (a)(2)].
  • Unauthorized access, or access that exceeds authorization, to a protected computer that affects the use of that computer by or for the U.S. government [Subsection (a)(3)].
  • Unauthorized access to a protected computer causing damage or reckless damage, or intentionally transmitting malicious code which causes damage to a protected computer [Subsection (a)(5), as amended].
  • Transmission, in interstate or foreign commerce, of any communication threatening to cause damage to a protected computer for the purpose of extortion [Subsection (a)(7)].

In the section “USA PATRIOT Act of 2001,” later in this chapter, we discuss major amendments to the U.S. Computer Fraud and Abuse Act of 1986 (as amended) that Congress introduced in 2001.

The U.S. Computer Fraud and Abuse Act of 1986 is the major computer crime law currently in effect. The CISSP exam likely tests your knowledge of the Act in its original 1986 form, but you should also be prepared for revisions to the exam that may cover the more recent amendments to the Act.

U.S. Electronic Communications Privacy Act (ECPA) of 1986

The ECPA complements the U.S. Computer Fraud and Abuse Act of 1986 and prohibits eavesdropping, interception, or unauthorized monitoring of wire, oral, and electronic communications. However, the ECPA does provide specific statutory exceptions, allowing network providers to monitor their networks for legitimate business purposes if they notify the network users of the monitoring process.

The ECPA was amended extensively by the USA PATRIOT Act of 2001. These changes are discussed in the upcoming “USA PATRIOT Act of 2001” section.

The U.S. Electronic Communications Privacy Act (ECPA) provides the legal basis for network monitoring.

U.S. Computer Security Act of 1987

The U.S. Computer Security Act of 1987 requires federal agencies to take extra security measures to prevent unauthorized access to computers that hold sensitive information. In addition to identifying and developing security plans for sensitive systems, the Act requires those agencies to provide security-related awareness training for their employees. The Act also assigns formal government responsibility for computer security to the National Institute of Standards and Technology (NIST) for information security standards, in general, and to the National Security Agency (NSA) for cryptography in classified government/military systems and applications.

U.S. Federal Sentencing Guidelines of 1991

In November 1991, the United States Sentencing Commission published Chapter 8, “Federal Sentencing Guidelines for Organizations,” of the U.S. Federal Sentencing Guidelines. These guidelines establish written standards of conduct for organizations, provide relief in sentencing for organizations that have demonstrated due diligence, and place responsibility for due care on senior management officials with penalties for negligence, including fines of up to $290 million.

U.S. Economic Espionage Act of 1996

The U.S. Economic Espionage Act (EEA) of 1996 was enacted to curtail industrial espionage, particularly when such activity benefits a foreign entity. The EEA makes it a criminal offense to take, download, receive, or possess trade secret information that’s been obtained without the owner’s authorization. Penalties include fines of up to $10 million, up to 15 years in prison, and forfeiture of any property used to commit the crime. The EEA also enacted the 1996 amendments to the U.S. Computer Fraud and Abuse Act, which we talk about in the section “U.S. Computer Fraud and Abuse Act of 1986, 18 U.S.C. § 1030 (as amended),” earlier in this chapter.

U.S. Child Pornography Prevention Act of 1996

The U.S. Child Pornography Prevention Act (CPPA) of 1996 was enacted to combat the use of computer technology to produce and distribute pornography involving children, including adults portraying children.

USA PATRIOT Act of 2001

Following the terrorist attacks against the United States on September 11, 2001, the USA PATRIOT Act of 2001 (Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act) was enacted in October 2001 and renewed in March 2006. (Many provisions originally set to expire have since been made permanent under the renewed Act.) This Act takes great strides to strengthen and amend existing computer crime laws, including the U.S. Computer Fraud and Abuse Act and the U.S. Electronic Communications Privacy Act (ECPA), as well as to empower U.S. law enforcement agencies, if only temporarily. U.S. federal courts have subsequently declared some of the Act’s provisions unconstitutional. The sections of the Act that are relevant to the CISSP exam include

  • Section 202 — Authority to Intercept Wire, Oral, and Electronic Communications Relating to Computer Fraud and Abuse Offenses: Under previous law, investigators couldn’t obtain a wiretap order for violations of the Computer Fraud and Abuse Act. This amendment authorizes such action for felony violations of that Act.
  • Section 209 — Seizure of Voice-Mail Messages Pursuant to Warrants: Under previous law, investigators could obtain access to e-mail under the ECPA but not voice-mail, which was covered by the more restrictive wiretap statute. This amendment authorizes access to voice-mail with a search warrant rather than a wiretap order.
  • Section 210 — Scope of Subpoenas for Records of Electronic Communications: Under previous law, subpoenas of electronic records were restricted to very limited information. This amendment expands the list of records that can be obtained and updates technology-specific terminology.
  • Section 211 — Clarification of Scope: This amendment governs privacy protection and disclosure to law enforcement of cable, telephone, and Internet service provider records.
  • Section 212 — Emergency Disclosure of Electronic Communications to Protect Life and Limb: Prior to this amendment, no special provisions existed that allowed a communications provider to disclose customer information to law enforcement officials in emergency situations, such as an imminent crime or terrorist attack, without exposing the provider to civil liability suits from the customer.
  • Section 214 — Pen Register and Trap and Trace Authority under FISA (Foreign Intelligence Surveillance Act): Clarifies law enforcement authority to trace communications on the Internet and other computer networks, and it authorizes the use of a pen/trap device nationwide, instead of limiting it to the jurisdiction of the court.

    technicalstuff A pen/trap device refers to a pen register that shows outgoing numbers called from a phone and a trap and trace device that shows incoming numbers that called a phone. Pen registers and trap and trace devices are collectively referred to as pen/trap devices because most technologies allow the same device to perform both types of traces (incoming and outgoing numbers).

  • Section 217 — Interception of Computer Trespasser Communications: Under previous law, it was permissible for organizations to monitor activity on their own networks but not necessarily for law enforcement to assist these organizations in monitoring, even when such help was specifically requested. This amendment allows organizations to authorize persons “acting under color (pretense or appearance) of law” to monitor trespassers on their computer systems.
  • Section 220 — Nationwide Service of Search Warrants for Electronic Evidence: Removes jurisdictional issues in obtaining search warrants for e-mail. For an excellent example of this problem, read The Cuckoo’s Egg: Tracking a Spy Through the Maze of Computer Espionage, by Clifford Stoll (Doubleday).
  • Section 814 — Deterrence and Prevention of Cyberterrorism: Greatly strengthens the U.S. Computer Fraud and Abuse Act, including raising the maximum prison sentence from 10 years to 20 years.
  • Section 815 — Additional Defense to Civil Actions Relating to Preserving Records in Response to Government Requests: Clarifies the “statutory authorization” (government authority) defense for violations of the ECPA.
  • Section 816 — Development and Support of Cybersecurity Forensic Capabilities: Requires the Attorney General to establish regional computer forensic laboratories, maintain existing laboratories, and provide forensic and training capabilities to Federal, State, and local law enforcement personnel and prosecutors.

warning The USA PATRIOT Act of 2001 changes many of the provisions in the computer crime laws, particularly the U.S. Computer Fraud and Abuse Act, which we discuss in the section “U.S. Computer Fraud and Abuse Act of 1986, 18 U.S.C. § 1030 (as amended),” earlier in this chapter; and the Electronic Communications Privacy Act of 1986, which we detail in the section “U.S. Electronic Communications Privacy Act (ECPA) of 1986,” earlier in this chapter. As a security professional, you must keep abreast of current laws and affairs to perform your job effectively.

U.S. Sarbanes-Oxley Act of 2002 (SOX)

In the wake of several major corporate and accounting scandals, SOX was passed in 2002 to restore public trust in publicly held corporations and public accounting firms by establishing new standards, and strengthening existing standards, for these entities, including auditing, governance, and financial disclosures.

SOX established the Public Company Accounting Oversight Board (PCAOB), which is a private-sector, nonprofit corporation responsible for overseeing auditors in the implementation of SOX. PCAOB’s “Auditing Standard No. 2” recognizes the role of information technology as it relates to a company’s internal controls and financial reporting. The Standard identifies the responsibility of Chief Information Officers (CIOs) for the security of information systems that process and store financial data, and it has many implications for information technology security and governance.

U.S. Homeland Security Act of 2002

This law consolidated 22 U.S. government agencies to form the Department of Homeland Security (DHS). The law also provided for the creation of a privacy official to enforce the Privacy Act of 1974.

U.S. Federal Information Security Management Act (FISMA) of 2002

FISMA extended the Computer Security Act of 1987 by requiring regular audits of U.S. government information systems and of organizations providing information services to the U.S. federal government.

U.S. CAN-SPAM Act of 2003

The U.S. CAN-SPAM Act (Controlling the Assault of Non-Solicited Pornography and Marketing Act) establishes standards for sending commercial e-mail messages, charges the U.S. Federal Trade Commission (FTC) with enforcement of the provision, and provides penalties that include fines and imprisonment for violations of the Act.

U.S. Identity Theft and Assumption Deterrence Act of 1998

This law made identity theft a federal crime and updated earlier U.S. laws on identity theft.

Directive 95/46/EC on the protection of personal data (1995, EU)

In 1995, the European Parliament ratified this essential legislation that protects personal information for all European citizens. The directive states that personal data should not be processed at all, except when certain conditions are met.

A legitimate concern about the disposition of European citizens’ personal data when it leaves computer systems in Europe and enters computer systems in the U.S. led to the creation of the Safe Harbor program (discussed in the following section).

Safe Harbor (1998)

In an agreement between the European Union and the U.S. Department of Commerce in 1998, the U.S. Department of Commerce developed a certification program called Safe Harbor. This permits U.S.-based organizations to certify themselves as properly handling private data belonging to European citizens.

U.S. Intelligence Reform and Terrorism Prevention Act of 2004

This law facilitates the sharing of intelligence information among various U.S. government agencies and provides protections for privacy and civil liberties.

The Council of Europe’s Convention on Cybercrime (2001)

The Convention on Cybercrime is an international treaty, currently signed by more than 40 countries (the U.S. ratified the treaty in 2006), requiring criminal laws to be established in signatory nations for computer hacking activities, child pornography, and intellectual property violations. The treaty also attempts to improve international cooperation with respect to monitoring, investigations, and prosecution.

The Computer Misuse Act 1990 (U.K.)

The Computer Misuse Act 1990 (U.K.) defines three criminal offenses related to computer crime: unauthorized access (whether successful or unsuccessful), unauthorized modification, and hindering authorized access (Denial of Service).

Privacy and Electronic Communications Regulations of 2003 (U.K.)

Similar to U.S. “do not call” laws, this law makes it illegal to use equipment to make automated telephone calls that play recorded messages.

Information Technology Act 2000 (India)

This law modernized India’s treatment of computer crime, defining offenses such as data theft, creation and spreading of malware, identity theft, pornography, child pornography, and cyber terrorism. This law also validated electronic contracts and electronic signatures.

Cybercrime Act 2001 (Australia)

The Cybercrime Act 2001 (Australia) establishes criminal penalties, including fines and imprisonment, for people who commit computer crimes (including unauthorized access, unauthorized modification, or Denial of Service) with intent to commit a serious offense.

Payment Card Industry Data Security Standard (PCI DSS)

Although not (yet) a legal mandate, the Payment Card Industry Data Security Standard (PCI DSS) is one example of an industry initiative for mandating and enforcing security standards. PCI DSS applies to any business worldwide that transmits, processes, or stores payment card (meaning credit card) transactions to conduct business with customers — whether that business handles thousands of credit card transactions a day or a single transaction a year. Compliance is mandated and enforced by the payment card brands (American Express, MasterCard, Visa, and so on) and each payment card brand manages its own compliance program.

tip Although PCI DSS is an industry standard rather than a legal mandate, many states are beginning to introduce legislation that would make PCI compliance (or at least compliance with certain provisions) mandatory for organizations that do business in that state.

PCI DSS requires organizations to submit an annual assessment and network scan, or to complete onsite PCI data security assessments and quarterly network scans. The actual requirements depend on the number of payment card transactions handled by an organization and other factors, such as previous data loss incidents.

PCI DSS version 3.2 consists of six core principles, supported by 12 accompanying requirements, and more than 200 specific procedures for compliance. These include

  • Principle 1: Build and maintain a secure network:
    • Requirement 1: Install and maintain a firewall configuration to protect cardholder data.
    • Requirement 2: Don’t use vendor-supplied defaults for system passwords and other security parameters.
  • Principle 2: Protect cardholder data:
    • Requirement 3: Protect stored cardholder data.
    • Requirement 4: Encrypt transmission of cardholder data across open, public networks.
  • Principle 3: Maintain a vulnerability management program:
    • Requirement 5: Use and regularly update antivirus software.
    • Requirement 6: Develop and maintain secure systems and applications.
  • Principle 4: Implement strong access control measures:
    • Requirement 7: Restrict access to cardholder data by business need-to-know.
    • Requirement 8: Assign a unique ID to each person who has computer access.
    • Requirement 9: Restrict physical access to cardholder data.
  • Principle 5: Regularly monitor and test networks:
    • Requirement 10: Track and monitor all access to network resources and cardholder data.
    • Requirement 11: Regularly test security systems and processes.
  • Principle 6: Maintain an information security policy:
    • Requirement 12: Maintain a policy that addresses information security.

Penalties for non-compliance are levied by the payment card brands and include not being allowed to process credit card transactions, fines up to $25,000 per month for minor violations, and fines up to $500,000 for violations that result in actual lost or stolen financial data.
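PCI DSS Requirement 3, for example, is commonly met in part by masking the primary account number (PAN) so that no more than the first six and last four digits are displayed. The sketch below illustrates that common display rule; the function name, input handling, and masking character are illustrative choices of ours, not part of the standard:

```python
def mask_pan(pan: str) -> str:
    """Mask a payment card number for display, keeping at most the
    first six and last four digits visible (a common way to satisfy
    the PCI DSS Requirement 3 masking rule)."""
    # Strip common separators before masking
    digits = pan.replace(" ", "").replace("-", "")
    if len(digits) < 13:
        raise ValueError("not a plausible PAN")
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

print(mask_pan("4111 1111 1111 1111"))  # 411111******1111
```

Note that masking addresses only display; stored PANs must still be rendered unreadable (for example, by truncation, strong one-way hashing, or encryption) to meet the requirement in full.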

Licensing and intellectual property

Given the difficulties in defining and prosecuting computer crimes, many prosecutors seek to convict computer criminals on more traditional criminal statutes, such as theft, fraud, extortion, and embezzlement. Intellectual property rights and privacy laws, in addition to specific computer crime laws, also exist to protect the general public and assist prosecutors.

remember The CISSP candidate should understand that because of the difficulty in prosecuting computer crimes, prosecutors often use more traditional criminal statutes, intellectual property rights, and privacy laws to convict criminals. You should also realize that specific computer crime laws do exist.

Intellectual property is protected by U.S. law under one of four classifications:

  • Patents
  • Trademarks
  • Copyrights
  • Trade secrets

Intellectual property rights worldwide are agreed upon, defined, and enforced by various organizations and treaties, including the World Intellectual Property Organization (WIPO), World Customs Organization (WCO), World Trade Organization (WTO), United Nations Commission on International Trade Law (UNCITRAL), European Union (EU), and Trade-Related Aspects of Intellectual Property Rights (TRIPs).

Licensing violations are among the most prevalent examples of intellectual property rights infringement. Other examples include plagiarism, software piracy, and corporate espionage.

Digital rights management (DRM) attempts to protect intellectual property rights by using access control technologies to prevent unauthorized copying or distribution of protected digital media.

Patents

A patent, as defined by the U.S. Patent and Trademark Office (PTO), is “the grant of a property right to the inventor.” A patent grant confers upon the owner (either an individual or a company) “the right to exclude others from making, using, offering for sale, selling, or importing the invention.” In order to qualify for a patent, an invention must be novel, useful, and not obvious. An invention must also be tangible — an idea cannot be patented. Examples of computer-related objects that may be protected by patents are computer hardware and physical devices, as well as firmware.

A patent is granted by the U.S. PTO for an invention that has been sufficiently documented by the applicant and that has been verified as original by the PTO. A U.S. patent is generally valid for 20 years from the date of application and is effective only within the U.S., including territories and possessions. Patent applications must be filed with the appropriate patent office in various countries throughout the world to receive patent protection in that country. The owner of the patent may grant a license to others for use of the invention or its design, often for a fee.

U.S. patent (and trademark) laws and rules are covered in 35 U.S.C. and 37 C.F.R., respectively. The Patent Cooperation Treaty (PCT) provides some international protection for patents. More than 130 countries worldwide have adopted the PCT. Patent infringements are not prosecuted by the U.S. PTO. Instead, the holder of a patent must enforce their patent rights through the appropriate legal system.

remember Patent grants were previously valid for only 17 years, but have recently been changed, for newly granted patents, to 20 years.

Trademarks

A trademark, as defined by the U.S. PTO, is “any word, name, symbol, or device, or any combination, used, or intended to be used, in commerce to identify and distinguish the goods of one manufacturer or seller from goods manufactured or sold by others.” Computer-related objects that may be protected by trademarks include corporate brands and operating system logos. U.S. Public Law 105–330, the Trademark Law Treaty Implementation Act, provides some international protection for U.S. registered trademarks.

Copyrights

A copyright is a form of protection granted to the authors of “original works of authorship,” both published and unpublished. A copyright protects a tangible form of expression rather than the idea or subject matter itself. Under the original Copyright Act of 1909, publication was generally the key to obtaining a federal copyright. However, the Copyright Act of 1976 changed this requirement, and copyright protection now applies to any original work of authorship immediately, from the time that it’s created in a tangible form. Object code or documentation are examples of computer-related objects that may be protected by copyrights.

Copyrights can be registered through the Copyright Office of the Library of Congress, but a work doesn’t need to be registered to be protected by copyright. Copyright protection generally lasts for the lifetime of the author plus 70 years.

Trade secrets

A trade secret is proprietary or business-related information that a company or individual uses and has exclusive rights to. To be considered a trade secret, the information must meet the following requirements:

  • Must be genuine and not obvious: Any unique method of accomplishing a task would constitute a trade secret, especially if it is backed up by copyrighted, patented, or proprietary software or methods that give that organization a competitive advantage.
  • Must provide the owner a competitive or economic advantage and, therefore, have value to the owner: For example, Google’s search algorithms — the “secret sauce” that makes it popular with users (and therefore advertisers) — aren’t universally known; they’re closely guarded trade secrets.
  • Must be reasonably protected from disclosure: This doesn’t mean that it must be kept absolutely and exclusively secret, but the owner must exercise due care in its protection.

Software source code or firmware code are examples of computer-related objects that an organization may protect as trade secrets.

Import/export controls

International import and export controls exist between countries to protect both intellectual property rights and certain sensitive technologies (such as encryption).

Information security professionals need to be aware of relevant import/export controls for any countries in which their organization operates or to which their employees travel. For example, it is not uncommon for laptops to be searched, and possibly confiscated, at airports to enforce various import/export controls.

Trans-border data flow

Related to import/export controls is the issue of trans-border data flow. As discussed earlier in this chapter, data privacy and breach disclosure laws vary greatly across different regions, countries, and U.S. states. Australia and European Union countries are two examples where data privacy regulations, in general, are far more stringent than in the U.S. Many countries restrict or completely forbid personal data of their citizens from leaving the country.

Issues of trans-border data flow, and data residency (where data is physically stored) are particularly germane for organizations operating in the public cloud. For these organizations, it is important to know — and have control over — where their data is stored. Issues of data residency and trans-border data flow should be addressed in any agreements or contracts with cloud service providers.

Privacy

Privacy in the context of electronic information about citizens is not universally well understood. Simply put, privacy has two main components:

  • Data protection: Here, we just mean the usual data security measures discussed in most of this book.
  • Appropriate handling and use: The ways in which information owners choose to process and distribute personal data.

Several important pieces of privacy and data protection legislation include the Federal Privacy Act, the Health Insurance Portability and Accountability Act (HIPAA), the Health Information Technology for Economic and Clinical Health Act (HITECH), and the Gramm-Leach-Bliley Act (GLBA) in the United States, and the Data Protection Act (DPA) in the United Kingdom. Finally, the Payment Card Industry Data Security Standard (PCI DSS) is an example of an industry policing itself — without the need for government laws or regulations.

Several privacy related laws that CISSP candidates should be familiar with include

  • U.S. Federal Privacy Act of 1974
  • U.S. Health Insurance Portability and Accountability Act (HIPAA) of 1996
  • U.S. Children’s Online Privacy Protection Act (COPPA) of 1998
  • U.S. Gramm-Leach-Bliley Financial Services Modernization Act of 1999
  • U.S. Health Information Technology for Economic and Clinical Health Act (HITECH) of 2009
  • U.K. Data Protection Act of 1998
  • European General Data Protection Regulation (GDPR)

U.S. Federal Privacy Act of 1974, 5 U.S.C. § 552A

The Federal Privacy Act of 1974 protects records and information maintained by U.S. government agencies about U.S. citizens and lawful permanent residents. Except under certain specific conditions, no agency may disclose any record about an individual “except pursuant to a written request by, or with the prior written consent of, the individual to whom the record pertains.” The Privacy Act also has provisions for access and amendment of an individual’s records by that individual, except in cases of “information compiled in reasonable anticipation of a civil action or proceeding.” The Privacy Act provides individual penalties for violations, including a misdemeanor charge and fines up to $5,000.

warning Although the Federal Privacy Act of 1974 pre-dates the Internet as we know it today, don’t dismiss its relevance. The provisions of the Privacy Act are as important as ever and remain in full force and effect today.

U.S. Health Insurance Portability and Accountability Act (HIPAA) of 1996, PL 104–191

HIPAA was signed into law effective August 1996. The HIPAA legislation gave Congress three years from that date to pass comprehensive health privacy legislation. When Congress failed to meet the deadline, the Department of Health and Human Services (HHS) received the authority to develop the privacy and security regulations for HIPAA. In October 1999, HHS released proposed HIPAA privacy regulations titled “Privacy Standards for Individually Identifiable Health Information,” which took effect in April 2003. HIPAA security standards were subsequently published in February 2003 and also took effect in April 2003. Organizations that must comply with HIPAA regulations are referred to as covered entities and include

  • Payers (or health plan): An individual or group health plan that provides — or pays the cost of — medical care; for example, insurers.
  • Healthcare clearinghouses: A public or private entity that processes or facilitates the processing of nonstandard data elements of health information into standard data elements, such as data warehouses.
  • Healthcare providers: A provider of medical or other health services, such as hospitals, HMOs, doctors, specialists, dentists, and counselors.

Civil penalties for HIPAA violations include fines of $100 per incident, up to $25,000 per provision, per calendar year. Criminal penalties include fines up to $250,000 and potential imprisonment of corporate officers for up to ten years. Additional state penalties may also apply.
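
The civil penalty caps described above lend themselves to a quick sanity check. This is a hypothetical illustration only — the function name and the scenario are ours, not part of the statute:

```python
def hipaa_civil_penalty(violations_of_one_provision: int) -> int:
    """Annual civil penalty for repeated violations of a single HIPAA
    provision: $100 per incident, capped at $25,000 per provision per
    calendar year (criminal penalties are separate and much larger)."""
    PER_INCIDENT = 100
    ANNUAL_CAP_PER_PROVISION = 25_000
    return min(violations_of_one_provision * PER_INCIDENT,
               ANNUAL_CAP_PER_PROVISION)

print(hipaa_civil_penalty(50))    # 50 incidents -> $5,000
print(hipaa_civil_penalty(1000))  # would be $100,000, capped at $25,000
```

Note that the cap applies per provision, so violating several provisions in the same year can multiply the total exposure.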

In 2009, Congress passed additional HIPAA provisions as part of the American Recovery and Reinvestment Act of 2009, requiring covered entities to publicly disclose security breaches involving personal information. (See the section “Disclosure laws” later in this chapter for a discussion of disclosure laws.)

Children’s Online Privacy Protection Act (COPPA) of 1998

This law provides for protection of online information about children under the age of 13. The law defines rules for the collection of information from children and means for obtaining consent from parents. Organizations are also restricted from marketing to children under the age of 13.

U.S. Gramm-Leach-Bliley Financial Services Modernization Act (GLBA) of 1999, PL 106-102

Gramm-Leach-Bliley (known as GLBA) opened up competition among banks, insurance companies, and securities companies. GLBA also requires financial institutions to better protect their customers’ personally identifiable information (PII) with three rules:

  • Financial Privacy Rule: Requires each financial institution to provide information to each customer regarding the protection of customers’ private information.
  • Safeguards Rule: Requires each financial institution to develop a formal written security plan that describes how the institution will protect its customers’ PII.
  • Pretexting Protection: Requires each financial institution to take precautions to prevent attempts by social engineers to acquire private information about institutions’ customers.

Civil penalties for GLBA violations are up to $100,000 for each violation. Furthermore, officers and directors of financial institutions are personally liable for civil penalties of not more than $10,000 for each violation.

U.S. Health Information Technology for Economic and Clinical Health Act (HITECH) of 2009

The HITECH Act, passed as part of the American Recovery and Reinvestment Act of 2009, broadens the scope of HIPAA compliance to include the business associates of HIPAA covered entities. These include third-party administrators, pharmacy benefit managers for health plans, claims processing/billing/transcription companies, and persons performing legal, accounting, and administrative work.

Another highly important provision of the HITECH Act promotes and, in many cases, funds the adoption of electronic health records (EHRs), in order to increase the effectiveness of individual medical treatment, improve efficiency in the U.S. healthcare system, and reduce the overall cost of healthcare. Anticipating that the widespread adoption of EHRs will increase privacy and security risks, the HITECH Act introduces new security and privacy-related requirements.

In the event of a breach of “unsecured protected health information,” the HITECH Act requires covered entities to notify the affected individuals and the Secretary of the U.S. Department of Health and Human Services (HHS). The regulation defines unsecured protected health information (PHI) as PHI that is not secured through the use of a technology or methodology to render it unusable, unreadable, or indecipherable to unauthorized individuals.

The notification requirements vary according to the number of individuals affected by the breach:

  • A data breach affecting more than 500 people must be reported immediately to the HHS, major media outlets and individuals affected by the breach, and must be posted on the official HHS website.
  • A data breach affecting fewer than 500 people must be reported to the individuals affected by the breach, and to the HHS secretary.
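
The two notification tiers above reduce to a simple decision rule. The helper below is a sketch of ours, not part of the regulation (which uses "more than 500" as the threshold):

```python
def hitech_notification_targets(individuals_affected: int) -> list:
    """Who must be notified of a breach of unsecured PHI under the
    HITECH Act, based on the number of individuals affected."""
    # Every breach is reported to the affected individuals and to the
    # Secretary of HHS.
    targets = ["affected individuals", "HHS Secretary"]
    if individuals_affected > 500:
        # Larger breaches must also be reported immediately to major
        # media outlets and posted on the official HHS website.
        targets += ["major media outlets", "HHS website posting"]
    return targets

print(hitech_notification_targets(350))   # individuals and HHS only
print(hitech_notification_targets(5000))  # adds media and HHS website
```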

Finally, the HITECH Act also requires the issuance of technical guidance on the technologies and methodologies “that render protected health information unusable, unreadable, or indecipherable to unauthorized individuals.” The guidance specifies data destruction and encryption as actions that render PHI unusable if it is lost or stolen. PHI that is encrypted and whose encryption keys are properly secured provides a “safe harbor” to covered entities and does not require them to issue data-breach notifications.

U.K. Data Protection Act

Passed by Parliament in 1998, the U.K. Data Protection Act (DPA) applies to any organization that handles sensitive personal data about living persons. Such data includes

  • Names
  • Birth and anniversary dates
  • Addresses, phone numbers, and e-mail addresses
  • Racial or ethnic origins
  • Political opinions and religious (or similar) beliefs
  • Trade or labor union membership
  • Physical or mental condition
  • Sexual orientation or lifestyle
  • Criminal or civil records or allegations

The DPA applies to electronically stored information, but certain paper records used for commercial purposes may also be covered. The DPA consists of eight privacy and disclosure principles as follows:

  • “Personal data shall be processed fairly and lawfully and [shall not be processed unless certain other conditions (set forth in the Act) are met].”
  • “Personal data shall be obtained only for one or more specified and lawful purposes, and shall not be further processed in any manner incompatible with that purpose or those purposes.”
  • “Personal data shall be adequate, relevant, and not excessive in relation to the purpose or purposes for which they are processed.”
  • “Personal data shall be accurate and, where necessary, kept up-to-date.”
  • “Personal data processed for any purpose or purposes shall not be kept for longer than is necessary for that purpose or those purposes.”
  • “Personal data shall be processed in accordance with the rights of data subjects under this Act.”
  • “Appropriate technical and organizational measures shall be taken against unauthorized or unlawful processing of personal data and against accidental loss or destruction of, or damage to, personal data.”
  • “Personal data shall not be transferred to a country or territory outside the European Economic Area unless that country or territory ensures an adequate level of protection for the rights and freedoms of data subjects in relation to the processing of personal data.”

DPA compliance is enforced by the Information Commissioner’s Office (ICO), an independent official body. Penalties generally include fines which may also be imposed against the officers of a company.

European Union General Data Protection Regulation (GDPR)

The European Union General Data Protection Regulation, known as GDPR, represents a significant revision of the 1995 EU Data Protection Directive (Directive 95/46/EC). Highlights of GDPR include the following:

  • Requires the enactment of a formal, documented data privacy program, which must direct all relevant business activities be designed with privacy by default and privacy by design.
  • Requires that organizations collecting personally identifiable information from any European resident obtain explicit consent for the collection and use of that information. Collection must be opt-in rather than opt-out: users must actively choose to permit data collection and usage.
  • Data subjects must have the ability to review information about themselves, be able to request that corrections be made, and be able to request that their data be expunged upon request.
  • Definition of a data controller, an organization that stores and processes personally identifiable information.
  • Definition of a data processor, an organization that stores and processes personally identifiable information as directed by a data controller.
  • Requires the appointment of a data protection officer (DPO), an individual that oversees the creation and operation of an organization’s data privacy program.
  • Requires that the relevant supervisory authority be notified within 72 hours of a data breach. (Affected data subjects must also be notified, without undue delay, when the breach poses a high risk to their rights and freedoms.)
  • Permits European authorities to levy fines on organizations that violate terms of GDPR, with those fines being as high as €20 million or 4 percent of the organization’s annual turnover (revenue).
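
The fine ceiling in the last bullet is "whichever is greater," which a one-line calculation makes concrete (turnover figures hypothetical):

```python
def gdpr_max_fine_eur(annual_turnover_eur: float) -> float:
    """Upper tier of GDPR administrative fines: the greater of EUR 20
    million or 4 percent of worldwide annual turnover (revenue)."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

print(gdpr_max_fine_eur(2_000_000_000))  # 4% of turnover: EUR 80 million
print(gdpr_max_fine_eur(100_000_000))    # the EUR 20 million floor applies
```

For any organization with annual turnover above EUR 500 million, the 4 percent figure dominates.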

Data breaches

In an effort to combat identity theft, many U.S. states have passed disclosure laws that compel organizations to publicly disclose security breaches that may result in the compromise of personal data.

Although these laws typically include statutory penalties, the damage to an organization’s reputation and the potential loss of business — caused by the public disclosure requirement of these laws — can be the most significant and damaging aspect to affected organizations. Thus, public disclosure laws shame organizations into implementing more effective information security policies and practices to lessen the risk of a data breach occurring in the first place.

By requiring organizations to notify individuals of a data breach, disclosure laws enable potential victims to take defensive or corrective action to help avoid or minimize the damage resulting from identity theft.

California Security Breach Information Act (SB-1386)

Passed in 2003, the California Security Breach Information Act (SB-1386) was the first U.S. state law to require organizations to notify all affected individuals “in the most expedient time possible and without unreasonable delay, consistent with the legitimate needs of law enforcement,” if their confidential or personal data is lost, stolen, or compromised, unless that data is encrypted.

The law is applicable to any organization that does business in the state of California — even a single customer or employee in California. An organization is subject to the law even if it doesn’t directly do business in California (for example, if it stores personal information about California residents for another company).

Other U.S. states quickly followed suit; 46 states, the District of Columbia, Puerto Rico, and the U.S. Virgin Islands now have public disclosure laws. However, these laws aren’t necessarily consistent from one state to another, nor are they without flaws and critics.

For example, until early 2008, Indiana’s Security Breach Disclosure and Identity Deception law (HEA 1101) did not require an organization to disclose a security breach “if access to the [lost or stolen] device is protected by a password [emphasis added] that has not been disclosed.” Indiana’s law has since been amended and is now one of the toughest state disclosure laws in effect, requiring public disclosure unless “all personal information … is protected by encryption.”

Finally, a provision in California’s and Indiana’s disclosure laws, as well as in most other states’ laws, allows an organization to avoid much of the cost of disclosure if the cost of providing such notice would exceed $250,000 or if more than 500,000 individuals would need to be notified. Instead, a substitute notice, consisting of e-mail notifications, conspicuous posting on the organization’s website, and notification of major statewide media, is permitted.
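
The substitute-notice thresholds just described reduce to a simple test. The function below is a hypothetical sketch; exact thresholds and conditions vary by state:

```python
def substitute_notice_permitted(notice_cost_usd: int,
                                individuals_to_notify: int) -> bool:
    """Whether a substitute notice (e-mail, conspicuous website posting,
    and statewide media) may replace direct individual notification,
    using the common state-law thresholds cited above."""
    return notice_cost_usd > 250_000 or individuals_to_notify > 500_000

print(substitute_notice_permitted(100_000, 600_000))  # True (headcount)
print(substitute_notice_permitted(50_000, 10_000))    # False (notify directly)
```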

Understand Professional Ethics

Ethics (or moral values) help to describe what you should do in a given situation based on a set of principles or values. Ethical behavior is important to maintaining credibility as an information security professional and is a requirement for maintaining your CISSP certification. An organization often defines its core values (along with its mission statement) to help ensure that its employees understand what is acceptable and expected as they work to achieve the organization’s mission, goals, and objectives.

Ethics are not easily discerned, and the line between ethical and unethical activity is often a fine one. Unethical activity doesn’t necessarily equate to illegal activity. And what may be acceptable in some organizations, cultures, or societies may be unacceptable or even illegal in others.

Ethical standards can be based on a common or national interest, individual rights, laws, tradition, culture, or religion. One helpful distinction between laws and ethics is that laws define what we must do and ethics define what we should do.

Many common fallacies abound about the proper use of computers, the Internet, and information, which contribute to this gray area:

  • The Computer Game Fallacy: Any system or network that’s not properly protected is fair game.
  • The Law-Abiding Citizen Fallacy: If no physical theft is involved, an activity really isn’t stealing.
  • The Shatterproof Fallacy: Any damage done will have a limited effect.
  • The Candy-from-a-Baby Fallacy: It’s so easy, it can’t be wrong.
  • The Hacker’s Fallacy: Computers provide a valuable means of learning that will, in turn, benefit society.

    remember The problem here lies in the distinction between hackers and crackers. Although both may have a genuine desire to learn, crackers do it at the expense of others.

  • The Free Information Fallacy: Any and all information should be free and thus can be obtained through any means.

Almost every recognized group of professionals defines a code of conduct or standards of ethical behavior by which its members must abide. For the CISSP, it is the (ISC)2 Code of Ethics. The CISSP candidate must be familiar with the (ISC)2 Code of Ethics and Request for Comments (RFC) 1087 “Ethics and the Internet” for professional guidance on ethics (and information that you need to know for the exam).

Exercise the (ISC)2 Code of Professional Ethics

As a requirement for (ISC)2 certification, all CISSP candidates must subscribe to and fully support all portions of the (ISC)2 Code of Ethics. Intentionally or knowingly violating any provision of the (ISC)2 Code of Ethics may subject you to a peer review panel and revocation of your hard-earned CISSP certification.

The (ISC)2 Code of Ethics consists of a preamble and four canons. The canons are listed in order of precedence; thus, any conflicts should be resolved in the order presented below:

Preamble:

  • The safety and welfare of society and the common good, duty to our principals, and to each other, requires that we adhere, and be seen to adhere, to the highest ethical standards of behavior.
  • Therefore, strict adherence to this Code is a condition of certification.

Canons:

  • Protect society, the common good, necessary public trust and confidence, and the infrastructure.
  • Act honorably, honestly, justly, responsibly, and legally.
  • Provide diligent and competent service to principals.
  • Advance and protect the profession.

tip The best approach to complying with the (ISC)2 Code of Professional Ethics is to never partake in any activity that provides even the appearance of an ethics violation. Making questionable moves puts your certification at risk, and it may also convey to others that such activity is acceptable. Remember to lead by example!

Support your organization’s code of ethics

Just about every organization has a code of ethics, or a statement of values, which it requires its employees or members to follow in their daily conduct. As a CISSP-certified information security professional, you are expected to be a leader in your organization, which means you exemplify your organization’s ethics (or values) and set a positive example for others to follow.

In addition to your organization’s code of ethics, two other computer security ethics standards you should be familiar with for the CISSP exam and adhere to are the Internet Architecture Board’s (IAB) “Ethics and the Internet” (RFC 1087) and the Computer Ethics Institute’s (CEI) “Ten Commandments of Computer Ethics”.

Internet Architecture Board (IAB) — Ethics and the Internet (RFC 1087)

Published by the Internet Architecture Board (IAB) (www.iab.org) in January 1989, RFC 1087 characterizes as unethical and unacceptable any activity that purposely

  • “Seeks to gain unauthorized access to the resources of the Internet.”
  • “Disrupts the intended use of the Internet.”
  • “Wastes resources (people, capacity, computer) through such actions.”
  • “Destroys the integrity of computer-based information.”
  • “Compromises the privacy of users.”

Other important tenets of RFC 1087 include

  • “Access to and use of the Internet is a privilege and should be treated as such by all users of [the] system.”
  • “Many of the Internet resources are provided by the U.S. Government. Abuse of the system thus becomes a Federal matter above and beyond simple professional ethics.”
  • “Negligence in the conduct of Internet-wide experiments is both irresponsible and unacceptable.”
  • “In the final analysis, the health and well-being of the Internet is the responsibility of its users who must, uniformly, guard against abuses which disrupt the system and threaten its long-term viability.”

Computer Ethics Institute (CEI)

The Computer Ethics Institute (CEI; http://computerethicsinstitute.org) is a nonprofit research, education, and public policy organization originally founded in 1985 by the Brookings Institution, IBM, the Washington Consulting Group, and the Washington Theological Consortium. CEI members include computer science and information technology professionals, corporate representatives, professional industry associations, public policy groups, and academia.

CEI’s mission is “to provide a moral compass for cyberspace.” It accomplishes this mission through computer-ethics educational activities that include publications, national conferences, membership and certificate programs, a case study repository, the Ask an Ethicist online forum, consultation, and (most famously) its “Ten Commandments of Computer Ethics,” which has been published in 23 languages (presented here in English):

  1. Thou shalt not use a computer to harm other people.
  2. Thou shalt not interfere with other people’s computer work.
  3. Thou shalt not snoop around in other people’s computer files.
  4. Thou shalt not use a computer to steal.
  5. Thou shalt not use a computer to bear false witness.
  6. Thou shalt not copy or use proprietary software for which you have not paid.
  7. Thou shalt not use other people’s computer resources without authorization or proper compensation.
  8. Thou shalt not appropriate other people’s intellectual output.
  9. Thou shalt think about the social consequences of the program you are writing or the system you are designing.
  10. Thou shalt always use a computer in ways that ensure consideration and respect for your fellow humans.

Develop and Implement Documented Security Policies, Standards, Procedures, and Guidelines

Policies, standards, procedures, and guidelines are all different from each other, but they also interact with each other in a variety of ways. It’s important to understand these differences and relationships, and also to recognize the different types of policies and their applications. To successfully develop and implement information security policies, standards, guidelines, and procedures, you must ensure that your efforts are consistent with the organization’s mission, goals, and objectives (discussed earlier in this chapter).

Policies, standards, procedures, and guidelines all work together as the blueprints for a successful information security program. They

  • Establish governance.
  • Provide valuable guidance and decision support.
  • Help establish legal authority.
  • Ensure that risks are kept to acceptable levels.

Too often, technical security solutions are implemented without these important blueprints. The results are often expensive and ineffective controls that aren’t uniformly applied and don’t support an overall security strategy.

Governance is a term that collectively represents the system of policies, standards, guidelines, and procedures — together with management oversight — that help steer an organization’s day-to-day operations and decisions.

Policies

A security policy forms the basis of an organization’s information security program. RFC 2196, The Site Security Handbook, defines a security policy as “a formal statement of rules by which people who are given access to an organization’s technology and information assets must abide.”

The four main types of policies are:

  • Senior Management: A high-level management statement of an organization’s security objectives, organizational and individual responsibilities, ethics and beliefs, and general requirements and controls.
  • Regulatory: Highly detailed and concise policies usually mandated by federal, state, industry, or other legal requirements.
  • Advisory: Not mandatory, but highly recommended, often with specific penalties or consequences for failure to comply. Most policies fall into this category.
  • Informative: Only informs, with no explicit requirements for compliance.

remember Standards, procedures, and guidelines are supporting elements of a policy and provide specific implementation details of the policy.

tip ISO/IEC 27002, Information Technology — Security Techniques — Code of Practice for Information Security Management, is an international standard for information security policy. ISO/IEC is the International Organization for Standardization and International Electrotechnical Commission. ISO/IEC 27002 consists of 12 sections that largely (but not completely) overlap the eight (ISC)2 security domains.

Standards (and baselines)

Standards are specific, mandatory requirements that further define and support higher-level policies. For example, a standard may require the use of a specific technology, such as a minimum requirement for encryption of sensitive data using AES. A standard may go so far as to specify the exact brand, product, or protocol to be implemented. A device or system hardening standard would define specific security configuration settings for applicable systems.

Baselines are similar to and related to standards. A baseline can be useful for identifying a consistent basis for an organization’s security architecture, taking into account system-specific parameters, such as different operating systems. After consistent baselines are established, appropriate standards can be defined across the organization.
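
To make the relationship concrete, here is a minimal sketch of how a hardening standard or baseline might be checked against a live system. The setting names and required values are entirely hypothetical:

```python
# Hypothetical hardening baseline: the settings a standard requires.
BASELINE = {
    "password_min_length": 12,
    "disk_encryption": "AES-256",
    "telnet_enabled": False,
}

def baseline_deviations(actual_settings: dict) -> dict:
    """Return each setting whose actual value differs from the baseline."""
    return {name: actual_settings.get(name)
            for name, required in BASELINE.items()
            if actual_settings.get(name) != required}

server = {"password_min_length": 8,
          "disk_encryption": "AES-256",
          "telnet_enabled": False}
print(baseline_deviations(server))  # {'password_min_length': 8}
```

In practice, checks like this are automated across entire fleets with configuration-compliance tooling rather than ad hoc scripts.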

tip Some organizations call their configuration documents standards (and still others call them standard operating environments) instead of baselines. This is a common and acceptable practice.

Procedures

Procedures provide detailed instructions on how to implement specific policies and meet the criteria defined in standards. Procedures may include Standard Operating Procedures (SOPs), run books, and user guides. For example, a procedure may be a step-by-step guide for encrypting sensitive files by using a specific software encryption product.

Guidelines

Guidelines are similar to standards but they function as recommendations rather than as compulsory requirements. For example, a guideline may provide tips or recommendations for determining the sensitivity of a file and whether encryption is required.

Understand Business Continuity Requirements

Business continuity and disaster recovery (discussed in detail in Chapter 9) work hand in hand to provide an organization with the means to continue and recover business operations when a disaster strikes. The two are different sides of the same coin, and each has its own goal:

  • Business continuity deals with keeping business operations running — perhaps in another location or by using different tools and processes — after a disaster has struck. This is sometimes called continuity of operations (COOP).
  • Disaster recovery deals with restoring normal business operations after the disaster takes place.

While the business continuity team is busy keeping business operations running via one of possibly several contingency plans, the disaster recovery team members are busy restoring the original facilities and equipment so that they can resume normal operations.

Here’s an analogy. Two boys kick a big anthill — a disaster for the ant colony. Some of the ants scramble to save the eggs and the food supply; that’s Ant City business continuity. Other ants work on rebuilding the anthill; that’s Ant City disaster recovery. Both teams work to ensure the anthill’s survival, but each team has its own role to play.

Business continuity and disaster recovery planning have these common elements:

  • Identification of critical business functions: The Business Impact Analysis (BIA) and Risk Assessment (discussed in the section “Conduct Business Impact Analysis,” later in this chapter) identify these functions.
  • Identification of possible scenarios: The planning team identifies all likely man-made and natural disaster scenarios, ranked by probability and by impact to the organization.
  • Experts: People who understand the organization’s critical business processes.

The similarities end with this list. Business continuity planning concentrates on continuing business operations, whereas disaster recovery planning focuses on recovering the original business functions. Although both plans deal with the long-term survival of the business, they involve different activities. When a significant disaster occurs, both activities kick into gear at the same time, keeping vital business functions running (business continuity) and getting things back to normal as soon as possible (disaster recovery).

Business continuity (and disaster recovery) planning exists because bad things happen. Organizations that want to survive a disastrous event need to make formal and extensive plans — contingency plans to keep the business running and recovery plans to return operations to normal.

Keeping a business operating during a disaster can be like juggling with one arm tied behind your back (we first thought of plate-spinning and one-armed paper hangers, but most of our readers are probably too young to understand these). You’d better plan in advance how you’re going to do it, and practice! It could happen at night, you know (one-handed juggling in the dark is a lot harder).

Before business continuity planning can begin, everyone on the project team has to make and understand some basic definitions and assumptions. These critical items include

  • Senior management support: The development of a Business Continuity Plan (BCP) is time consuming, with no immediate or tangible return on investment (ROI). To ensure a successful business continuity planning project, you need the support of the organization’s senior management, including adequate budget, manpower, and visible statements backing the project. Senior management needs to make explicit statements identifying the responsible parties, as well as the importance of the business continuity planning project, budget, priorities, urgency, and timing.
  • Senior management involvement: Senior management can’t just bless the business continuity planning project. Because senior managers and directors may have implicit and explicit responsibility for the organization’s ability to recover from a disaster, senior management needs to have a degree of direct involvement in the business continuity planning effort. The careers that these people save may be their own.
  • Project team membership: Which people do you want to put on the business continuity planning project team? The team must represent all relevant functions and business units. Many of the team members probably have their usual jobs, too, so the team needs to develop a realistic timeline for how quickly the business continuity planning project can make progress.
  • Who brings the donuts: Because it’s critical that business continuity planning meetings are well attended, quality donuts are an essential success component.

A business continuity planning project typically has four components: scope determination, the Business Impact Analysis (BIA), the Business Continuity Plan (BCP), and implementation. We discuss each of these components in the following sections.

Develop and document project scope and plan

The success and effectiveness of a business continuity planning project depend greatly on whether senior management and the project team properly define its scope. Business processes and technology can muddy the waters and make this task difficult. For instance, distributed systems’ dependence on at least some desktop systems for vital business functions expands the scope beyond core data-center functions. Geographically dispersed companies — often the result of mergers — complicate matters as well.

Also, large companies are understandably more complex. The boundaries where one business function ends and another begins are often fuzzy, poorly documented, and not well understood.

Political pressures can influence the scope of the business continuity planning project as well. A department that thinks it’s vital, but which falls outside the business continuity planning project scope, may lobby to be included in the project. Everybody wants to be important (and some just want to appear to be important). You need senior management support of scope (what the project team really needs to include and what it doesn’t) to put a stop to the political games.

Scope creep (what happens when a project’s scope grows beyond the original intent) can become scope leap if you have a weak or inexperienced business continuity planning project team. For the success of the project, strong leaders must make rational decisions about the scope of the project. Remember, you can change the scope of the business continuity planning project in later iterations of the project.

The project team needs to find a balance between too narrow a scope, which makes the plan ineffective, and too wide a scope, which makes the plan too cumbersome.

A complete BCP consists of several components that handle not only the continuation of critical business functions, but also all the functions and resources that support those critical functions. The various elements of a BCP are described in the following sections.

Emergency response

Emergency response teams must be identified for every possible type of disaster. These response teams need playbooks (detailed written procedures and checklists) to keep critical business functions operating.

Written procedures are vital for two reasons. First, the people who perform critical functions after a disaster may not be familiar with them: They may not usually perform those functions. (During a disaster, the people who ordinarily perform the function may be unavailable.) Second, the team probably needs to use different procedures and processes for performing the critical functions during a disaster than they would under normal conditions. Also, the circumstances surrounding a disaster might have people feeling out-of-sorts; having a written procedure guides them into action (kind of like the “break glass” instructions on some fire alarms, in case you forget what to do).

Damage assessment

When a disaster strikes, experts need to be called in to inspect the premises and determine the extent of the damage. Typically, you need experts who can assess building damage, as well as damage to any special equipment and machinery.

Depending on the nature of the disaster, you may have to perform damage assessment in stages. A first assessment may involve a quick walkthrough to look for obvious damage, followed by a more time-consuming and detailed assessment to look for problems that you don’t see right away.

Damage assessments determine whether an organization can still use buildings and equipment, whether they can use those items after some repairs, or whether they must abandon those items altogether.

Personnel safety

In any kind of disaster, the safety of personnel is the highest priority, ahead of buildings, equipment, computers, backup tapes, and so on. Personnel safety is critical not only because of the intrinsic value of human life, but also because people — not physical assets — make the business run.

Personnel notification

The BCP must have some provisions for notifying all affected personnel that a disaster has occurred. An organization needs to establish multiple methods for notifying key business-continuity personnel in case public communications infrastructures are interrupted.

Not all disasters are obvious: A fire or broken water main is a local event, not a regional one. And in an event such as a tornado or flood, employees who live even a few miles away may not know the condition of the business. Consequently, the organization needs a plan for communicating with employees, no matter what the situation.

Throughout a disaster and its recovery, management must be given regular status reports as well as updates on crucial tactical issues so that management can align resources to support critical business operations that function on a contingency basis. For instance, a manager of a corporate facilities department can loan equipment that critical departments need so that they can keep functioning.

Backups and media storage

Things go wrong with hardware and software, resulting in wrecked or unreachable data. When it’s gone, it’s gone! Thus IT departments everywhere make copies of their critical data on tapes, removable discs, or external storage systems, or in the cloud.

These backups must be performed regularly, usually once per day. For organizations with on-premises systems, backup media must also be stored off-site in the event that the facility housing the original systems is damaged. Having backup tapes in the data center may be convenient for doing a quick data restore but of little value if backup tapes are destroyed along with their respective systems. For organizations with cloud-based systems, the problem here is the same, but the technology differs a bit: It is imperative that data be backed up (or replicated) to a different geographic location, so that data can be recovered, no matter what happens.

For systems with large amounts of data, that data must be well understood in order to determine what kinds of backups need to be performed (real-time replication, full, differential, and incremental) and how frequently. Consider these factors:

  • The time that it takes to perform backups.
  • The effort required to restore data.
  • The procedures for restoring data from backups, compared with other methods for recovering the data.

For example, consider whether you can restore application software from backup tapes more quickly than by installing them from their release media (the original CD-ROMs or downloaded install files). Just make sure you can recover your configuration settings if you re-install software from release media. Also, if a large part of the database is static, do you really need to back it all up every day?
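The tradeoff among the backup types mentioned above (full, differential, and incremental) shows up most clearly at restore time. The following Python sketch, using hypothetical dates, shows how many backup sets a restore needs under each strategy — incrementals back up faster but require a longer restore chain:

```python
from datetime import date, timedelta

def restore_chain(target: date, last_full: date, strategy: str) -> list[str]:
    """Return the ordered list of backup sets needed to restore to `target`.

    "incremental" needs the full backup plus every incremental since it;
    "differential" needs only the full backup plus the latest differential.
    """
    if target < last_full:
        raise ValueError("target predates the last full backup")
    chain = [f"full {last_full}"]
    if strategy == "incremental":
        day = last_full + timedelta(days=1)
        while day <= target:
            chain.append(f"incremental {day}")
            day += timedelta(days=1)
    elif strategy == "differential" and target > last_full:
        chain.append(f"differential {target}")
    return chain

# A week after the full backup: incremental restore needs 8 sets,
# differential restore needs only 2.
full = date(2024, 3, 3)
print(len(restore_chain(date(2024, 3, 10), full, "incremental")))   # 8
print(len(restore_chain(date(2024, 3, 10), full, "differential")))  # 2
```

This is one reason the BIA’s recovery targets, not just backup-window convenience, should drive the choice of backup type.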

You must choose off-site storage of backup media and other materials (documentation, and so on) carefully. Factors to consider include survivability of the off-site storage facility, as well as the distance from the off-site facility to the data center, media transportation, and alternate processing sites. The facility needs to be close enough so that media retrieval doesn’t take too long (how long depends on the organization’s recovery needs), but not so close that the facility becomes involved in the same natural disaster as the business.

Cloud-based data replication and backup services are a viable alternative to off-site backup media storage. Today’s Internet speeds make it possible to back up critical data to a cloud-based storage provider — often faster than magnetic tapes can be returned from an off-site facility and data recovered from them.

tip Some organizations have one or more databases so large that the organizations literally can’t (or, at any rate, don’t) back them up to tape. Instead, they keep one or more replicated copies of their databases on other computers in other cities. Business continuity planners need to consider this possibility when developing continuity plans.

The purpose of off-site media storage is to ensure that up-to-date data is available in the event that systems in the primary data center are damaged.

Software escrow agreements

Your organization should consider software escrow agreements (wherein the software vendor sends a copy of its software code to a third-party escrow organization for safekeeping) with the software vendors whose applications support critical business functions. In the event that an insurmountable disaster (which could include bankruptcy) strikes the software vendor, your organization must consider all options for the continued maintenance of those critical applications, including in-house support.

External communications

The Corporate Communications, External Affairs, and (if applicable) Investor Relations departments should all have plans in place for communicating the facts about a disaster to the press, customers, and public. You need contingency plans for these functions if you want the organization to continue communicating to the outside world. Open communication during a disaster is vital so that customers, suppliers, and investors don’t panic (which they might do if they don’t know the true extent of the disaster).

The emergency communications plan needs to take into account the possibility that some corporate facilities or personnel may be unavailable. Thus you need to keep even the data and procedures related to the communications plan safe so that they’re available in any situation.

Utilities

Data-processing facilities that support time-critical business functions must keep running in the event of a power failure. Although every situation is different, the principle remains the same: The business continuity planning team must determine for what period of time the data-processing facility must be able to continue operating without utility power. A power engineer can find out the length of typical (we don’t want to say routine) power outages in your area and crunch the numbers to arrive at the mean outage duration. By using that information, as well as an inventory of the data center’s equipment and environmental equipment, you can determine whether the organization needs an uninterruptible power supply (UPS) alone, or a UPS and an electric generator.

A business can use uninterruptible power supplies (UPSs) and emergency electric generators to provide electric power during prolonged power outages. A UPS is also good for a controlled shutdown, if the organization is better off having its systems powered off during a disaster. A business can also use a stand-alone power system (SPS), another term for an off-the-grid system that generates power with solar, wind, hydro, or employees madly pedaling stationary bicycles (we’re kidding about that last one).

In a really long power outage (more than a day or two), it is also essential to have a plan for the replenishment of generator fuel.
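The UPS-versus-generator decision described above reduces to a simple comparison: can the batteries alone bridge the outages you realistically expect? This Python sketch uses hypothetical numbers; a real sizing exercise would use the power engineer’s outage statistics and the data center’s measured load:

```python
def power_plan(ups_minutes: float, required_minutes: float,
               longest_expected_outage_minutes: float) -> str:
    """Decide whether a UPS alone suffices or a generator is also needed.

    The facility must ride through the shorter of (a) the required operating
    time without utility power and (b) the longest expected outage. If the
    UPS battery runtime can't cover that, a generator is needed as well.
    """
    ride_through = min(required_minutes, longest_expected_outage_minutes)
    if ups_minutes >= ride_through:
        return "UPS alone"
    return "UPS + generator"

# Hypothetical: 30 minutes of battery, 4-hour operating requirement.
print(power_plan(30, 240, 20))   # short local outages: UPS alone
print(power_plan(30, 240, 480))  # prolonged outages: UPS + generator
```

Note that even with a generator, a UPS is still needed to carry the load during the seconds-to-minutes gap before the generator starts and stabilizes.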

Logistics and supplies

The business continuity planning team needs to study every aspect of critical functions that must be made to continue in a disaster. Every resource that’s needed to sustain the critical operation must be identified and then considered against every possible disaster scenario to determine what special plans must be made. For instance, if a business operation relies upon a just-in-time shipment of materials for its operation and an earthquake has closed the region’s only highway (or airport or sea/lake port), then alternative means for acquiring those materials must be determined in advance. Or, perhaps an emergency ration of those materials needs to be stockpiled so that the business function can continue uninterrupted.

Fire and water protection

Many natural disasters disrupt public utilities, including water supplies or delivery. In the event that a disaster has interrupted water delivery, new problems arise. Your facility may not be allowed to operate without the means for fighting a fire, should one occur.

In many places, businesses could be ordered to close if they can’t prove that they can effectively fight a fire using other means, such as FM-200 inert gas. Then again, if water supplies have been interrupted, you have other issues to contend with, such as drinking water and water for restrooms. Without water, you’re hosed!

We discuss fire protection in more detail in Chapter 5.

Documentation

Any critical business function must be able to continue operating after a disaster strikes. And to make sure you can sustain operations, you need to make available all relevant documentation for every critical piece of equipment, as well as every critical process and procedure that the organization performs in a given location.

Don’t be caught off guard by the trend of hardware and software products that don’t come with any printed documentation. Many vendors deliver their documentation only over the Internet, or they charge extra for a hard copy. But many types of disasters may disrupt Internet communications, thereby leaving an operation high and dry, with no instructions for how to use and manage tools or applications.

At least one set of hard copy (or CD-ROM soft copy) documentation — including your BCP and Disaster Recovery Plan (DRP) — should be stored at the same off-site storage facility that stores the organization’s backup tapes. It would also be smart to issue electronic copies of BCP and DRP documentation to all relevant personnel on USB storage devices (with encryption).

If the preceding sounds like the ancient past to you, then your organization may be fully in the cloud today. In such a case, you may be more inclined to maintain multiple soft copies of all required documentation so that personnel can use it when needed.

Continuity and recovery documentation must exist in hard copy in the event that it’s unavailable via electronic means.

Data processing continuity planning

Data processing facilities are so vital to businesses today that a lot of emphasis is placed on them. Generally, this comes down to these variables: where and how the business will continue to sustain its data processing functions.

Because data centers are so expensive and time-consuming to build, better business sense dictates having an alternate processing site available. The types of sites are

  • Cold site: A cold site is basically an empty computer room with environmental facilities (UPS; heating, ventilation, and air conditioning [HVAC]; and so on) but no computing equipment. This is the least-costly option, but more time is required to assume a workload because computers need to be brought in from somewhere and set up, and data and applications need to be loaded. Connectivity to other locations also needs to be installed.
  • Warm site: A warm site is basically a cold site, but with computers and communications links already in place. In order to take over production operations, you must load the computers with application software and business data.
  • Hot site: A hot site is indisputably the most expensive option: you equip it with the same computers as the production system, with application changes, operating system changes, and even patches kept in sync with their live production-system counterparts. You even keep business data up-to-date at the hot site by using some sort of mirroring or transaction replication. Because the organization trains its staff in how to operate the organization’s business applications (and staff members have documentation), the operations staff knows what to do to take over data processing operations at a moment’s notice. Hot sites may be cloud-based or in a co-location center.
  • Reciprocal site: Your organization and another organization sign a reciprocal agreement in which you both pledge the availability of your organization’s data center in the event of a disaster. Back in the day, when data centers were rare, many organizations made this sort of arrangement, but it’s fallen out of favor in recent years.
  • Multiple data centers: Larger organizations can consider the option of running daily operations out of two or more regional data centers that are hundreds (or more) of miles apart. The advantage of this arrangement is that the organization doesn’t have to make arrangements with outside vendors for hot/warm/cold sites, and the organization’s staff is already onsite and familiar with business and computer operations.
  • Cloud site: Organizations with primary information processing in the cloud are likely to employ cloud assets in multiple regions, and possibly with more than one cloud provider. Many organizations with primary processing on-premises employ hybrid cloud infrastructure for disaster recovery purposes — this is a common way for companies to ease their way into the cloud. Depending on the degree of readiness required, a cloud site can be as ready as a hot site, a warm site, or a cold site, as determined by the resources devoted to keeping the cloud site ready for production operations.

A hot site provides the most rapid recovery capability, but it also costs the most because of the effort required to maintain its readiness.

Table 3-1 compares these options side by side.

TABLE 3-1 Data Processing Continuity Planning Site Comparison

Feature                          Hot Site          Warm Site      Cold Site      Multiple Data Centers       Cloud Site
Cost                             Highest           Medium         Low            No additional               Variable
Computer-equipped                Yes               Yes            No             Yes                         Yes
Connectivity-equipped            Yes               Yes            No             Yes                         Yes
Data-equipped                    Yes               No             No             Yes                         Variable
Staffed                          Yes               No             No             Yes                         No
Typical lead time to readiness   Minutes to hours  Hours to days  Days to weeks  Minutes to hours or longer  Minutes to hours
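As a rough illustration of how a planning team might weigh these options, this Python sketch picks the lowest-cost site type whose readiness lead time fits within a given recovery target. The cost ranks and lead times are illustrative assumptions drawn loosely from Table 3-1, not authoritative figures:

```python
# Illustrative site attributes: relative cost rank (1 = cheapest) and an
# assumed lead time to readiness, in hours. Real values vary by organization.
SITES = {
    "hot":      {"cost_rank": 5, "lead_time_hours": 1},
    "warm":     {"cost_rank": 3, "lead_time_hours": 48},
    "cold":     {"cost_rank": 1, "lead_time_hours": 240},
    "multiple": {"cost_rank": 4, "lead_time_hours": 1},
    "cloud":    {"cost_rank": 2, "lead_time_hours": 2},
}

def cheapest_site_meeting_rto(rto_hours):
    """Return the lowest-cost site whose lead time fits within the RTO,
    or None if no site type is ready fast enough."""
    candidates = [(attrs["cost_rank"], name)
                  for name, attrs in SITES.items()
                  if attrs["lead_time_hours"] <= rto_hours]
    return min(candidates)[1] if candidates else None

print(cheapest_site_meeting_rto(4))     # cloud
print(cheapest_site_meeting_rto(2000))  # cold
print(cheapest_site_meeting_rto(0.5))   # None
```

The point of the sketch is the shape of the decision: a tighter recovery target eliminates the cheaper options, which is exactly why recovery targets must be set before the site strategy is chosen.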

Conduct Business Impact Analysis

The Business Impact Analysis (BIA) describes the impact that a disaster is expected to have on business operations. This important early step in business continuity planning helps an organization figure out which business processes are more resilient and which are more fragile.

A disaster’s impact includes quantitative and qualitative effects. The quantitative impact is generally financial, such as loss of revenue or output of production. The qualitative impact has more to do with the quality of goods and/or services.

Any BIA worth its salt needs to perform the following tasks well:

  • Perform a Vulnerability Assessment — not so much an application/infrastructure vulnerability assessment, but a big-picture, business process vulnerability assessment.
  • Carry out a Criticality Assessment — determining how critically important a particular business function is to the ongoing viability of the organization.
  • Determine the Maximum Tolerable Downtime.
  • Establish recovery targets.
  • Determine resource requirements.

You can get the scoop on these activities in the following sections.

Vulnerability Assessment

Often, a BIA includes a Vulnerability Assessment that helps get a handle on obvious and not-so-obvious weaknesses in business critical systems. A Vulnerability Assessment has quantitative (financial) and qualitative (operational) sections, similar to a Risk Assessment, which is covered later in this chapter.

The purpose of a Vulnerability Assessment is to determine the impact — both quantitative and qualitative — of the loss of a critical business function.

Quantitative losses include

  • Loss of revenue
  • Loss of operating capital
  • Loss of market share
  • Loss because of personal liabilities
  • Increase in expenses
  • Penalties because of violations of business contracts
  • Violations of laws and regulations (which can result in legal costs such as fines and civil penalties)

Qualitative losses include loss of

  • Service quality
  • Competitive advantages
  • Customer satisfaction
  • Prestige and reputation

The Vulnerability Assessment identifies critical support areas, which are business functions that, if lost, would cause significant harm to the business by jeopardizing critical business processes or the lives and safety of personnel. The Vulnerability Assessment should carefully study critical support areas to identify the resources that those areas require to continue functioning.

Quantitative losses include an increase in operating expenses because of any higher costs associated with executing the contingency plan. In other words, planners need to remember to consider operating costs that may be higher during a disaster situation.

Criticality Assessment

The business continuity planning team should inventory all high-level business functions (for example, customer support, order processing, returns, cash management, accounts receivable, payroll, and so on) and rank them in order of criticality. The team should also describe the impact of a disruption to each function on overall business operations.

The team members need to estimate the duration of a disaster event to effectively prepare the Criticality Assessment. Project team members need to consider the impact of a disruption based on the length of time that a disaster impairs specific critical business functions. You can see the vast difference in business impact of a disruption that lasts one minute, compared to one hour, one day, one week, or longer. Generally, the criticality of a business function depends on the degree of impact that its impairment has on the business.

remember Planners need to consider disasters that occur at different times in the business cycle, whatever that might be for an organization. Response to a disaster at the busiest time of the month (or year) may vary quite a bit from response at other times.

Identifying key players

Although you can consider a variety of angles when evaluating vulnerability and criticality, commonly you start with a high-level organization chart. (Hip people call this chart the org chart.) In most companies, the major functions pretty much follow the structure of the organization.

Following an org chart helps the business continuity planning project team consider all the steps in a critical process. Walk through the org chart, stopping at each manager’s or director’s position and asking, “What does he do?”, “What does she do?”, and “Who files the TPS reports?” This mental stroll can help jog your memory, and help you better see all the parts of the organization’s big picture.

tip When you’re cruising an org chart to make sure that it covers all areas of the organization, you may easily overlook outsourced functions that might not show up in the org chart. For instance, if your organization outsources accounts payable (A/P) functions, you might miss this detail if you don’t see it on an org chart. Okay, you’d probably notice the absence of all A/P. But if your organization outsources only part of A/P — say, a group that detects and investigates A/P fraud (looking for payment patterns that suggest the presence of phony payment requests) — your org chart probably doesn’t include that vital function.

Establishing Maximum Tolerable Downtime (MTD)

An extension of the Criticality Assessment (which we talk about in the section “Criticality Assessment,” earlier in this chapter) is a statement of Maximum Tolerable Downtime (MTD — also known as Maximum Tolerable Period of Disruption or MTPD) for each critical business function. Maximum Tolerable Downtime is the maximum period of time that a critical business function can be inoperative before the company incurs significant and long-lasting damage.

For example, imagine that your favorite online merchant — a bookseller, an auction house, or an online trading company — goes down for an hour, a day, or a week. At some point, you have to figure that a prolonged disruption sinks the ship, meaning the business can’t survive. Determining MTD involves figuring out at what point the organization suffers permanent, measurable loss as a result of a disaster. Online retailers know that even short outages may mean that some customers will switch brands and take their business elsewhere.

Make the MTD assessment a major factor in determining the criticality — and priority — of business functions. A function that can withstand only two hours of downtime obviously has a higher priority than another function that can withstand several days of downtime.

MTD is a measure of the longest period of time that a critical business function can be disrupted without suffering unacceptable consequences, perhaps threatening the actual survivability of the organization.
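Using MTD to prioritize business functions, as described above, is straightforward to sketch: the shortest tolerable downtime gets the highest priority. The function names and MTD values below are hypothetical:

```python
def prioritize_by_mtd(functions):
    """Rank business functions so the shortest MTD (in hours) comes first.
    `functions` maps a business function name to its MTD in hours."""
    return sorted(functions, key=functions.get)

# Hypothetical MTD values, in hours, from a Criticality Assessment.
mtds = {
    "payroll": 72,
    "order processing": 2,
    "customer support": 24,
    "returns": 168,
}
print(prioritize_by_mtd(mtds))
# ['order processing', 'customer support', 'payroll', 'returns']
```

This ordering then drives where the organization spends its finite recovery resources first.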

Determining Maximum Tolerable Outage (MTO)

During the Criticality Assessment, you establish a statement of Maximum Tolerable Outage (MTO) for each critical business function. Maximum Tolerable Outage is the maximum period of time that a critical business function can be operating in emergency or alternate processing mode. This matters because, in many cases, emergency or alternate processing mode performs at a lower level of throughput or quality, or at a higher cost. Although an organization’s survival can be assured through an interim period in alternate processing mode, the long-term business model may not be able to sustain the differences in throughput, quality, cost, or whatever aspects of alternate processing mode are different from normal processing.

Establish recovery targets

When you establish the Criticality Assessment, MTD, and MTO for each business process (which we talk about in the preceding sections), the planning team can establish recovery targets. These targets represent the period of time from the onset of a disaster until critical processes have resumed functioning.

Two primary recovery targets are usually established for each business process: a Recovery Time Objective (RTO) and Recovery Point Objective (RPO). We discuss these targets in the following sections.

RECOVERY TIME OBJECTIVE (RTO)

A Recovery Time Objective (RTO) is the maximum period of time in which a business process must be restored after a disaster.

An organization without a BCP that suffers a serious disaster, such as an earthquake or hurricane, could experience a recovery time of one to two weeks or more. An organization could possibly need this length of time to select a new location for processing data, purchase new systems, load application software and data, and resume processing. An organization that can’t tolerate such a long outage needs to establish a shorter RTO and determine the level of investments required to meet that target.

RECOVERY POINT OBJECTIVE (RPO)

A Recovery Point Objective (RPO) is the maximum period of time in which data might be lost if a disaster strikes.

A typical schedule for backing up data is once per day. If a disaster occurs late in the day, before that day’s backups are done, the organization can lose an entire day’s worth of information. This is because system and data recovery are often performed using the last good set of backups. An organization that requires a shorter RPO needs to figure out a way to make copies of transaction data more frequently than once per day.

Here are some examples of how organizations might establish their RPOs:

  • Keyed Invoices: An accounts payable department opens the mail and manually keys in the invoices that it receives from its suppliers. Data entry clerks spend their entire day inputting invoices. If a disaster occurs before backups are run at the end of the business day (and if that disaster requires the organization to rebuild systems from backup tapes), those clerks have to redo that whole day’s worth of data entry.
  • Online orders: A small business develops an online web application that customers can use to place orders. At the end of each day, the Orders department runs a program that prints out all the day’s orders, and the Shipping department fills those orders on the following day. If a disaster occurs at any time during the day, the business loses all online orders placed since the previous day’s backup.

If you establish the RPO for processes such as the ones in the preceding list as less than one business day, the organization needs to take some steps to save online data more than once per day.

Many organizations use off-site backup media storage, where backup tapes are transported off-site as frequently as every day, or where electronic vaulting to an off-site location is performed several times each day. Off-site storage matters because an event such as a fire can destroy the backup media along with the computers if the media is stored nearby.

HOW RTO AND RPO WORK TOGETHER

RPO and RTO targets are different measures of recovery for a system, but they work together. When the team establishes proposed targets, the team members need to understand how each target works.

At first glance, you might think that RPO should be a shorter time than RTO (or maybe the other way around). In fact, different businesses and applications present different business requirements that might make RPO less than RTO, equal to RTO, or greater than RTO. Here are some examples:

  • RPO greater than RTO: A business can recover an application in 4 hours (RTO), and it has a maximum data loss (RPO) of 24 hours. So, if a disaster occurs, the business can get the application running again in 4 hours, but the recovered data includes only what was entered up to 24 hours before the incident took place.
  • RPO equal to RTO: A business can recover an application in 12 hours (RTO), with a maximum data loss of 12 hours (RPO). You can probably imagine this scenario: An application mirrors (or replicates) data to a backup system in real-time. If a disaster occurs, the disaster recovery team requires 12 hours to start the backup system. After the team gets the system running, the business has data up to the moment the primary system failed, 12 hours in the past.
  • RPO less than RTO: The disaster recovery team can recover an application in 4 hours (RTO), with a maximum data loss of 1 hour (RPO). How can this situation happen? Maybe a back-office transaction-posting application, which receives and processes data from order-processing applications, fails. If the back-office application is down for 4 hours, data coming from the order-processing applications may be buffered someplace else, and when the back-office application resumes processing, it can then receive and process the waiting input data.
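
The relationships in the preceding list can be sketched in a few lines of code. This is purely illustrative; the application names and hour values below are hypothetical, not drawn from any real recovery plan:

```python
# Hypothetical recovery targets, in hours, for the three scenarios above.
scenarios = {
    "batch-restored app":   {"rto": 4,  "rpo": 24},  # RPO > RTO (daily backups)
    "mirrored application": {"rto": 12, "rpo": 12},  # RPO == RTO
    "back-office posting":  {"rto": 4,  "rpo": 1},   # RPO < RTO (input buffered upstream)
}

for name, t in scenarios.items():
    if t["rpo"] > t["rto"]:
        relation = "RPO > RTO"
    elif t["rpo"] == t["rto"]:
        relation = "RPO == RTO"
    else:
        relation = "RPO < RTO"
    print(f"{name}: recover in {t['rto']}h, lose up to {t['rpo']}h of data ({relation})")
```

The point of such a comparison is simply that RPO and RTO are independent targets: each must be negotiated with the business on its own terms.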

Defining Resource Requirements

The Resource Requirements portion of the BIA is a listing of the resources that an organization needs in order to continue operating each critical business function. In an organization that has finite resources (which is pretty much every organization), the most critical functions get first pick, and the lower-priority functions get the leftovers.

Understanding what resources are required to support a business process helps the project team to figure out what the contingency plan for that process needs to contain, and how the process can be operated in Emergency mode and then recovered.

Examples of required resources include

  • Systems and applications: In order for a business process to continue operating, it may require one or more IT systems or applications — not only the primary supporting application, but also other systems and applications that the primary application requires in order to continue functioning. Depending on the nature of the organization’s primary and alternate processing resources, these systems may be physical, virtual, or cloud-based.
  • Suppliers and partners: Many business processes require a supply of materials or services from outside organizations, without which the business process can’t continue operating.
  • Key personnel: Most business processes require a number of specifically trained or equipped staff members — or contingent workers such as contractors or personnel from another company — to run business processes and operate systems.
  • Business equipment: Anything from PBXs to copiers, postage machines, POS (point-of-sale) machines, red staplers, and any other machinery required to support critical business processes.

tip When you identify required resources for complex business processes, you may want to identify additional information about each resource, including resource owners, criticality, and dependencies.

Developing the Business Continuity Plan

After you define the scope of the business continuity planning project and develop the BIA, Criticality Assessment, MTDs, and MTOs, you know

  • What portion of the organization is included in the plan.
  • Of this portion of the organization, which business functions are so critical that the business would fail if these functions were interrupted for long (or even short) periods of time.
  • The general degree of impact on the business when one of the critical functions fails, derived from quantitative and qualitative data.

The hard part of the business continuity planning project begins now: You need to develop the strategy for continuing each critical business function when disasters occur, which is known as the Continuity Strategy.

When you develop a Continuity Strategy, you must set politics aside and look at the excruciating details of critical business functions. You need lots of strong coffee, several pizzas, buckets of Rolaids, and cool heads.

Making your business continuity planning project a success

For the important and time-consuming Continuity Strategy phase of the project, you need to follow these guidelines:

  • Call things like you see them. No biases. No angles. No politics. No favorites. No favors. You’re trying to ensure survival of the business before the disaster strikes.
  • Build smaller teams of experts. Each critical business function should have teams dedicated to just that function. That team’s job is to analyze just one critical business function and figure out how you can keep it functioning despite a disaster of some sort. Pick the right people for each team — people who really understand the details of the business process that they’re examining.
  • Brainstorm. Proper brainstorming considers all ideas, even silly ones (up to a point). Even a silly-sounding idea can lead to a good idea.
  • Have teams share results with each other. Teams working on individual continuity strategies can get ideas from each other. Each team can share highlights of its work over the past week or two. Some of the things that they say may spark ideas in other teams. You can improve the entire effort by holding these sharing sessions.
  • Don’t encourage competition or politics in or between teams. Don’t pit teams against each other. Identifying success factors isn’t a zero-sum game: Everyone needs to do an excellent job.
  • Retain a business continuity planning mentor/expert. If your organization doesn’t have experienced business continuity planners on staff, you need to bring in a consultant — someone who has helped develop plans for other organizations. Even more important than that, make sure the consultant you hire has been on the scene when disaster struck a business he or she was consulting for and has seen a BCP in action.

Simplifying large or complex critical functions

Some critical business functions may be too large and complex to examine in one big chunk. You can break down those complex functions into smaller components, perhaps like this:

  • People: Has the team identified the critical people — or more appropriately, the critical sub-functions — required to keep the function running?
  • Facilities: In the event that the function’s primary facilities are unavailable, where can the business perform the function?
  • Technology: What hardware, software, and other computing/network components support the critical function? If parts or all of these components are unavailable, what other equipment can support the critical business functions? Do you need to perform the functions any differently?
  • Miscellaneous: What supplies, other equipment, and services do you need to support the critical business function?

Analyzing processes is like disassembling toy building block houses — you have to break them down to the level of their individual components. You really do need to understand each step in even the largest processes in order to be able to develop good continuity plans for them.

If a team that analyzes a large complex business function breaks it into groups, such as the groups in the preceding list, the team members need to get together frequently to ensure that their respective strategies for each group eventually become a cohesive whole. Eventually these groups need to come back together and integrate their separate materials into one complete, cohesive plan.

Documenting the strategy

Now for the part that everyone loves: documentation. The details of the continuity plans for each critical function must be described in minute detail, step by step by step.

Why? The people who develop the strategy may very well not be the people who execute it. The people who develop the strategy may change roles in the company or change jobs altogether. Or the scope of an actual disaster may be wide enough that the critical personnel just aren’t available. Any skeptics should consider September 11 and the impact that this disaster had on a number of companies that lost practically everyone and everything.

Best practices for documenting BCPs exist. For this reason, you may want to have an expert around. For $300 an hour, a consultant can spend a couple of weeks developing templates. But watch out — your consultant might just download templates from a business continuity planning website, tweak them a little bit, and spend the rest of his or her time playing Candy Crush. To be sure you get a solid consultant, do the old-fashioned things: check references, ask for work samples, see whether he or she has a decent LinkedIn page. (We’re kidding about that last one!)

Implementing the BCP

It is an accomplishment indeed when the BCP documentation has been written, reviewed, edited, placed into three-ring binders, and distributed via thumb drives or online file storage accounts. However, the job isn’t yet done. The BCP needs senior management buy-in, the plan must be announced and socialized throughout the organization, and one or more persons must be dedicated to keeping the plan up-to-date. Oh yeah, and the plan needs to be tested!

Securing senior management approval

After the entire plan has been documented and reviewed by all stakeholders, it’s time for senior management to examine it and approve it. Not only must senior management approve the plan, but senior management must also publicly approve it. By “public” we don’t mean the general public; instead, we mean that senior management should make it well known inside the business that they support the business continuity planning process.

Senior management’s approval is needed so that all affected and involved employees in the organization understand the importance of emergency planning.

Promoting organizational awareness

Everyone in the organization needs to know about the plan and his or her role in it. You may need to establish training for potentially large numbers of people who need to be there when a disaster strikes.

All employees in the organization must know about the BCP.

Testing the plan

Regularly testing the BCP ensures that all essential personnel required to implement the plan understand their roles and responsibilities, and helps to ensure that the plan is kept up to date as the organization changes. BCP testing methods are similar to DRP testing methods (discussed in Chapter 9), and include

  • Read-through
  • Walkthrough
  • Simulation
  • Parallel
  • Full interruption

See Chapter 9 for a full explanation of these testing methods.

Maintaining the plan

No, the plan isn’t finished. It has just begun! Now the business continuity planning person (the project team members by this time have collected their commemorative denim shirts, mugs, and mouse pads, and have moved on to other projects) needs to periodically chase The Powers That Be to make sure that they know about all significant changes to the environment.

In fact, if the business continuity planning person has any leadership left at this point in the process, he or she needs to start attending the Change Control Board and IT Steering Committee meetings (or whatever the company calls them) and note any changes in the environment that may require updates to the BCP documents.

tip The BCP is easier to modify than it is to create out of thin air. Once or twice each year, someone knowledgeable needs to examine the detailed strategy and procedure documents in the BCP to make sure that they’ll still work — and update them if necessary.

tip You can read more about business continuity and disaster recovery planning in IT Disaster Recovery Planning For Dummies, by Peter H. Gregory.

Contribute to Personnel Security Policies

An organization needs clearly documented personnel security policies and procedures in order to facilitate the use and protection of information. There are numerous essential practices for protecting the business and its important information assets. These essential practices all have to do with how people — not technology — work together to support the business.

This is collectively known as administrative management and control.

Note: We tend to use the term essential practices versus best practices. The reason is simple: Best practices refers to the very best practices and technologies that can be brought to bear against a business problem, whereas essential practices means those activities and technologies that are considered essential to implement in an organization. Best practices are nearly impossible to achieve, and few organizations attempt it. However, essential practices are, well, essential, and definitely achievable in many organizations.

Employment candidate screening

Even before posting a “Help Wanted” sign (Do people still do that?!) or an ad on a job search website, an employer should ensure that the position to be filled is clearly documented and contains a complete description of the job requirements, the qualifications, and the scope of responsibilities and authority.

The job (or position) description should be created as a collaborative effort between the hiring manager — who fully understands the functional requirements of the specific position to be filled — and the human resources manager — who fully understands the applicable employment laws and organizational requirements to be addressed.

Having a clearly documented job (or position) description can benefit an organization for many reasons:

  • The hiring manager knows (and can clearly articulate) exactly what skills a certain job requires.
  • The human resources manager can pre-screen job applicants quickly and accurately.
  • Potential candidates can ensure they apply only for positions for which they’re qualified, and they can properly prepare themselves for interviews (for example, by matching their skills and experiences to the specific requirements of the position).
  • After the organization fills the position, the position description (in some cases, the employment contract) helps to reduce confusion about what the organization expects from the new employee and provides objective criteria for evaluating performance.

Concise job descriptions that clearly identify an individual’s responsibility and authority, particularly on information security issues, can help:

  • Reduce confusion and ambiguity.
  • Provide legal basis for an individual’s authority or actions.
  • Demonstrate any negligence or dereliction in carrying out assigned duties.

An organization should conduct background checks and verify application information for all potential employees and contractors. This process can help to expose any undesirable or unqualified candidates. For example:

  • A previous criminal conviction may immediately disqualify a candidate from certain positions within an organization.
  • Even when the existence of a criminal record itself doesn’t automatically disqualify a candidate, if the candidate fails to disclose this information in the job application or interview, it should be a clear warning sign for a potential employer.
  • Some positions that require a U.S. government security clearance are available only to U.S. citizens.
  • A candidate’s credit history should be examined if the position has significant financial responsibilities or handles high-value assets, or if a high opportunity for fraud exists.
  • It has been estimated that as many as 40 percent of job applicants “exaggerate the truth” on their résumés and applications. Common sources of omitted, exaggerated, or outright misleading information include employment dates, salary history, education, certifications, and achievements. Although the information itself may not be disqualifying, a dishonest applicant should not be given the opportunity to become a dishonest employee.

Most background checks require the written consent of the applicant and disclosure of certain private information (such as the applicant’s Social Security or other retirement system number). Private information obtained for the purposes of a background check, as well as the results of the background check, must be properly handled and safeguarded in accordance with applicable laws and the organization’s records retention and destruction policies.

Basic background checks and verification might include the following information:

  • Criminal record
  • Citizenship
  • Employment history
  • Education
  • Certifications and licenses
  • Financial history including judgments
  • Reference checks (personal and professional)
  • Union and association membership

Pre- and post-employment background checks can provide an employer with valuable information about an individual whom an organization is considering for a job or position within an organization. Such checks can give an immediate indication of an individual’s integrity (for example, by providing verification of information in the employment application) and can help screen out unqualified and undesirable applicants.

Personnel who fill sensitive positions should undergo a more extensive pre-employment screening and background check, possibly including:

  • Credit records (minimally, including bankruptcies, foreclosures, judgments, and public records; possibly a full credit report, depending on the position).
  • Drug testing (even in countries or U.S. states where certain narcotics and other substances such as cannabis are legal, if the organization’s policies prohibit their use, then drug testing should be used to enforce the policy).
  • Special background investigation (FBI and INTERPOL records, field interviews with former associates, or a personal interview with a private investigator).

Periodic post-employment screenings (such as credit records and drug testing) may also be necessary, particularly for personnel with access to financial data, cash, or high-value assets, or for personnel being considered for promotions to more sensitive or responsible positions.

Many organizations that did not perform drug screenings in the past do so today. Instead of drug testing all employees, some take a measured approach by screening employees when promoted to higher levels of responsibility, such as director or vice president.

Employment agreements and policies

Various employment agreements and policies should be signed when an individual joins an organization or is promoted to a more sensitive position within an organization. Employment agreements often include non-compete agreements, non-disclosure agreements, codes of conduct, and acceptable use policies. Typical employment policies might include Internet acceptable use, social media policy, remote access, mobile and personal device use (for example, “Bring Your Own Device,” or BYOD), and sexual harassment/fraternization.

Employment termination processes

Formal employment termination procedures should be implemented to help protect the organization from potential lawsuits, property theft and destruction, unauthorized access, or workplace violence. Procedures should be developed for various scenarios including resignations, termination, layoffs, accident or death, immediate departures versus prior notification, and hostile situations. Termination procedures may include

  • Having the former employee surrender keys, security badges, and parking permits.
  • Conducting an exit interview.
  • Requiring that security escort the former employee to collect his or her personal belongings and/or to leave the premises.
  • Recovering company assets and materials including laptop computers, mobile phones, tablets, and so on.
  • Changing door locks and system passwords as needed.
  • Formally turning over duties and responsibilities.
  • Removing network and system access and disabling user accounts.
  • Enforcing policies regarding retention of e-mail, personal files, and employment records.
  • Notifying customers, partners, vendors, service providers, and contractors, as appropriate.

Vendor, consultant, and contractor controls

Organizations commonly outsource many IT functions, particularly data center hosting, call-center or contact-center support, and application development. Information security policies and procedures must address outsourcing security and the use of service providers, vendors, and consultants, when appropriate. Access control, document exchange and review, maintenance hooks, on-site assessment, process and policy review, and service level agreements (SLAs) are good examples of outsourcing security considerations.

Compliance

Individual responsibilities for compliance with applicable policies and regulations within the organization should be understood by all personnel within an organization. Signed statements that attest to an individual’s understanding, acknowledgement, and/or agreement to comply may be appropriate for certain regulations and policies.

Privacy

Applicable privacy regulations and policy requirements should be documented and understood by all personnel within the organization. Signed statements that attest to an individual’s understanding, acknowledgement, and/or agreement to comply may also be appropriate.

Understand and Apply Risk Management Concepts

Beyond basic security fundamentals, the concepts of risk management are perhaps the most important and complex part of the security and risk management domain. Indeed, risk management is the process from which decisions are made to establish what security controls are necessary, implement security controls, acquire and use security tools, and hire security personnel.

Risk can never be completely eliminated. Given sufficient time, resources, motivation, and money, any system or environment, no matter how secure, can eventually be compromised. Some threats or events, such as natural disasters, are entirely beyond our control and often unpredictable. Therefore, the main goal of risk management is risk treatment: making intentional decisions about specific risks that organizations identify. Risk management consists of three main elements (each treated in the upcoming sections):

  • Threat identification
  • Risk analysis
  • Risk treatment

Identify threats and vulnerabilities

The business of information security is all about risk management. A risk consists of a threat and a vulnerability of an asset:

  • Threat: Any natural or man-made circumstance or event that could have an adverse or undesirable impact, minor or major, on an organizational asset or process.
  • Vulnerability: The absence or weakness of a safeguard or control in an asset or process (or an intrinsic weakness) that makes a threat potentially more harmful or costly, more likely to occur, or likely to occur more frequently.
  • Asset: A resource, process, product, or system that has some value to an organization and must therefore be protected. Assets may be tangible (computers, data, software, records, and so on) or intangible (privacy, access, public image, ethics, and so on), and those assets may likewise have a tangible value (purchase price) or intangible value (competitive advantage).

Remember: Risk = Asset Value × Threat Impact × Threat Probability.

The risk management triple consists of an asset, a threat, and a vulnerability.
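
As a rough sketch of how the Risk formula above might be applied, assuming impact and probability are expressed as fractions (all figures below are invented for illustration, not prescribed values):

```python
def risk_score(asset_value, threat_impact, threat_probability):
    """Risk = Asset Value x Threat Impact x Threat Probability.

    asset_value:        value in dollars (or any consistent unit)
    threat_impact:      fraction of the asset's value lost if the threat occurs (0.0-1.0)
    threat_probability: likelihood the threat occurs in the period considered (0.0-1.0)
    """
    return asset_value * threat_impact * threat_probability

# Invented example: a $50,000 server, 50% impact, 25% annual probability.
print(risk_score(50_000, 0.5, 0.25))  # 6250.0
```

Even a toy calculation like this makes the triple concrete: remove any one factor (the asset has no value, the threat has no impact, or the threat cannot occur) and the risk drops to zero.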

Risk assessment/analysis and treatment

Two key elements of risk management are the risk assessment and risk treatment (discussed in the following sections).

Risk assessment

A risk assessment begins with risk identification — detecting and defining specific elements of the three components of risk: assets, threats, and vulnerabilities.

The process of risk identification occurs during a risk assessment.

ASSET VALUATION

Identifying an organization’s assets and determining their value is a critical step in determining the appropriate level of security. The value of an asset to an organization can be both quantitative (related to its cost) and qualitative (its relative importance). An inaccurate or hastily conducted asset valuation process can have the following consequences:

  • Poorly chosen or improperly implemented controls
  • Controls that aren’t cost-effective
  • Controls that protect the wrong asset

A properly conducted asset valuation process has several benefits to an organization:

  • Supports quantitative and qualitative risk assessments, Business Impact Analyses (BIAs), and security auditing.
  • Facilitates cost-benefit analysis and supports management decisions regarding selection of appropriate safeguards.
  • Can be used to determine insurance requirements, budgeting, and replacement costs.
  • Helps demonstrate due care, thus (potentially) limiting personal liability on the part of directors and officers.

Four basic elements used to determine the value of an asset are

  • Initial and maintenance costs: Most often, a tangible dollar value that may include purchasing, licensing, development (or acquisition), maintenance, and support costs.
  • Organizational (or internal) value: Often a difficult and intangible value. It may include the cost of creating, acquiring, and re-creating information, and the business impact or loss if the information is lost or compromised. It can also include liability costs associated with privacy issues, personal injury, and death.
  • Public (or external) value: Another difficult and often intangible cost, public value can include loss of proprietary information or processes, as well as loss of business reputation.
  • Contribution to revenue: For instance, an asset worth $10,000 may be instrumental to the realization of $5 million in annual revenue. Hence, risk decisions for such an asset should consider not only its cost, but also its role in generating or protecting revenue.

THREAT ANALYSIS

To perform threat analysis, you follow these four basic steps:

  1. Define the actual threat.
  2. Identify possible consequences to the organization if the threat event occurs.
  3. Determine the probable frequency and impact of a threat event.
  4. Assess the probability that a threat will actually materialize.

For example, a company that has a major distribution center located along the Gulf Coast of the United States may be concerned about hurricanes. Possible consequences include power and communications outages, wind damage, and flooding. Using climatology, the company can determine that an annual average of three hurricanes pass within 50 miles of its location between June and September, and that a specific probability exists of a hurricane actually affecting the company’s operations during this period. During the remainder of the year, the threat of hurricanes has a low probability.
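
A back-of-the-envelope calculation for the hurricane example might look like the following sketch. The per-storm disruption probability is an assumed illustrative figure, not a value taken from climatology:

```python
# From the example: an average of three hurricanes pass within 50 miles each season.
storms_per_season = 3
# Assumed, illustrative: probability that any single storm disrupts operations.
p_disrupt = 0.10

# P(at least one disruptive storm) = 1 - P(no storm disrupts operations)
p_season = 1 - (1 - p_disrupt) ** storms_per_season
print(round(p_season, 3))  # 0.271
```

Under these assumed numbers, the company faces roughly a 27 percent chance of at least one disruptive hurricane per season, which it can then feed into its frequency and impact estimates.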

The number and types of threats that an organization must consider can be overwhelming, but you can generally categorize them as

  • Natural: Earthquakes, floods, hurricanes, lightning, fire, and so on.
  • Man-made: Unauthorized access, data-entry errors, strikes/labor disputes, theft, terrorism, sabotage, arson, social engineering, malicious code and viruses, and so on.

warning Not all threats can be easily or rigidly classified. For example, fires and utility losses can be both natural and man-made. See Chapter 9 for more on disaster recovery.

VULNERABILITY ASSESSMENT

A vulnerability assessment provides a valuable baseline for identifying vulnerabilities in an asset as well as identifying one or more potential methods for mitigating those vulnerabilities. For example, an organization may consider a Denial of Service (DoS) threat, coupled with a vulnerability found in Microsoft’s implementation of Domain Name System (DNS). However, if an organization’s DNS servers have been properly patched or the organization uses a UNIX-based DNSSEC server, the specific vulnerability may already have been adequately addressed, and no additional safeguards may be necessary for that threat.

Risk analysis

The next element in risk management is risk analysis — a methodical examination that brings together all the elements of risk management (identification, analysis, and treatment) and is critical to an organization for developing an effective risk management strategy.

Risk analysis involves the following four steps:

  1. Identify the assets to be protected, including their relative value, sensitivity, or importance to the organization.

    This component of risk identification is asset valuation.

  2. Define specific threats, including threat frequency and impact data.

    This component of risk identification is threat analysis.

  3. Calculate Annualized Loss Expectancy (ALE).

    The ALE calculation is a fundamental concept in risk analysis; we discuss this calculation later in this section.

  4. Select appropriate safeguards.

    This process is a component of both risk identification (vulnerability assessment) and risk treatment (which we discuss in the section “Risk treatment,” later in this chapter).

The Annualized Loss Expectancy (ALE) provides a standard, quantifiable measure of the impact that a realized threat has on an organization’s assets. Because it’s the estimated annual loss for a threat or event, expressed in dollars, ALE is particularly useful for determining the cost-benefit ratio of a safeguard or control. You determine ALE by using this formula:

SLE × ARO = ALE

Here’s an explanation of the elements in this formula:

  • Single Loss Expectancy (SLE): A measure of the loss incurred from a single realized threat or event, expressed in dollars. You calculate the SLE by using the formula Asset value × Exposure Factor (EF).

    Exposure Factor (EF) is a measure of the negative effect or impact that a realized threat or event would have on a specific asset, expressed as a percentage.

  • Annualized Rate of Occurrence (ARO): The estimated annual frequency of occurrence for a threat or event.
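
Putting the formulas together, a minimal sketch (with invented figures) might be:

```python
def single_loss_expectancy(asset_value, exposure_factor):
    """SLE = Asset Value x Exposure Factor (EF expressed as a fraction)."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle, aro):
    """ALE = SLE x Annualized Rate of Occurrence (ARO)."""
    return sle * aro

# Invented figures: a $200,000 asset, 25% exposure to the threat,
# and an event expected once every two years (ARO = 0.5).
sle = single_loss_expectancy(200_000, 0.25)  # 50000.0
ale = annualized_loss_expectancy(sle, 0.5)   # 25000.0
print(sle, ale)
```

In this invented scenario, an ALE of $25,000 suggests that spending much more than that per year on a safeguard against this one threat would not be cost-effective.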

The two major types of risk analysis are qualitative and quantitative, which we discuss in the following sections.

QUALITATIVE RISK ANALYSIS

Qualitative risk analysis is more subjective than quantitative risk analysis; a purely qualitative approach avoids specific numbers altogether. The challenge of such an approach is developing real scenarios that describe actual threats and potential losses to organizational assets.

Qualitative risk analysis has some advantages when compared with quantitative risk analysis; these include

  • No complex calculations are required.
  • Time and work effort involved is relatively low.
  • Volume of input data required is relatively low.

Disadvantages of qualitative risk analysis, compared with quantitative risk analysis, include

  • No financial costs are defined; therefore cost-benefit analysis isn’t possible.
  • The qualitative approach relies more on assumptions and guesswork.
  • Generally, qualitative risk analysis can’t be automated.
  • Qualitative analysis is less easily communicated. (Executives seem to understand “This will cost us $3 million over 12 months” better than “This will cause an unspecified loss at an undetermined future date.”)

A distinct advantage of qualitative risk analysis is that a large set of identified risks can be charted and sorted by asset value, risk, or other means. This can help an organization identify and distinguish higher risks from lower risks, even though precise dollar amounts may not be known.
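
Sorting such a register is straightforward once ratings are mapped to an ordinal scale. Everything below (the assets, threats, and ratings) is hypothetical:

```python
# Ordinal scale for qualitative ratings; the values are relative, not dollars.
LEVELS = {"low": 1, "medium": 2, "high": 3}

risks = [
    {"asset": "lobby kiosk",    "threat": "vandalism",  "rating": "low"},
    {"asset": "payroll system", "threat": "ransomware", "rating": "high"},
    {"asset": "public website", "threat": "defacement", "rating": "medium"},
]

# Highest-rated risks surface first, for treatment decisions.
for r in sorted(risks, key=lambda r: LEVELS[r["rating"]], reverse=True):
    print(f'{r["rating"]:>6}: {r["threat"]} -> {r["asset"]}')
```

The ranking tells management where to look first, even though no entry carries a dollar amount.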

A qualitative risk analysis doesn’t attempt to assign numeric values to the components (the assets and threats) of the risk analysis.

QUANTITATIVE RISK ANALYSIS

A fully quantitative risk analysis requires all elements of the process, including asset value, impact, threat frequency, safeguard effectiveness, safeguard costs, uncertainty, and probability, to be measured and assigned numeric values.

A quantitative risk analysis attempts to assign more objective numeric values (costs) to the components (assets and threats) of the risk analysis.
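For instance, the core quantitative calculation, ALE = SLE × ARO (using the terms defined earlier in this chapter), can be expressed as follows. The dollar figures are hypothetical:

```python
def annualized_loss_expectancy(sle, aro):
    """ALE = Single Loss Expectancy x Annualized Rate of Occurrence."""
    return sle * aro

# Hypothetical example: a $50,000 single loss expected once every two years.
ale = annualized_loss_expectancy(sle=50_000, aro=0.5)
print(ale)  # 25000.0
```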

Advantages of a quantitative risk analysis, compared with qualitative risk analysis, include the following:

  • Financial costs are defined; therefore, cost-benefit analysis can be determined.
  • More concise, specific data supports analysis; thus fewer assumptions and less guesswork are required.
  • Analysis and calculations can often be automated.
  • Specific quantifiable results are easier to communicate to executives and senior-level management.

Disadvantages of a quantitative risk analysis, compared with qualitative risk analysis, include the following:

  • Human biases will skew results.
  • Many complex calculations are usually required.
  • Time and work effort involved is relatively high.
  • Volume of input data required is relatively high.
  • The probability of threat events is difficult to determine.
  • Some assumptions are required.

Purely quantitative risk analysis is generally not possible or practical. Primarily, this is because it is difficult to determine a precise probability of occurrence for any given threat scenario. For this reason, many risk analyses are a blend of qualitative and quantitative risk analysis, known as a hybrid risk analysis.

HYBRID RISK ANALYSIS

A hybrid risk analysis combines elements of both a quantitative and qualitative risk analysis. The challenges of determining accurate probabilities of occurrence, as well as the true impact of an event, compel many risk managers to take a middle ground. In such cases, easily determined quantitative values (such as asset value) are used in conjunction with qualitative measures for probability of occurrence and risk level. Indeed, many so-called quantitative risk analyses are more accurately described as hybrid.

Risk treatment

A properly conducted risk analysis provides the basis for the next step in the risk management process: deciding what to do about risks that have been identified. The decision-making process is known as risk treatment. The four general methods of risk treatment are

  • Risk mitigation: This involves the implementation of one or more policies, controls, or other measures to protect an asset. Mitigation generally reduces the probability of threat realization or the impact of threat realization to an acceptable level.

    This is the most common risk control remedy.

  • Risk assignment (or transference): Transferring the potential loss associated with a risk to a third party, such as an insurance company or a service provider that explicitly agrees to accept risk.
  • Risk avoidance: Eliminating the risk altogether through a cessation of the activity or condition that introduced the risk in the first place.
  • Risk acceptance: Accepting the risk associated with a potential threat. This is sometimes done for convenience (not prudent) but more appropriately when the cost of other countermeasures is prohibitive, the probability or impact is low, or the benefits outweigh the costs.
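To make the decision process concrete, here is a deliberately simplified sketch. The thresholds and the accept/mitigate/transfer logic are hypothetical simplifications, not a complete decision model:

```python
def suggest_treatment(annual_loss_exposure, mitigation_cost, risk_appetite):
    """A simplified (hypothetical) decision sketch: accept small risks,
    mitigate when the control costs less than the exposure, and otherwise
    consider transferring or avoiding the risk."""
    if annual_loss_exposure <= risk_appetite:
        return "accept"
    if mitigation_cost < annual_loss_exposure:
        return "mitigate"
    return "transfer or avoid"

# Hypothetical figures.
print(suggest_treatment(annual_loss_exposure=500_000,
                        mitigation_cost=140_000,
                        risk_appetite=50_000))  # mitigate
print(suggest_treatment(annual_loss_exposure=20_000,
                        mitigation_cost=140_000,
                        risk_appetite=50_000))  # accept
```

In practice, the decision also weighs factors this sketch ignores, such as legal liability and operational impact.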

Countermeasure selection

As stated in the preceding section, mitigation is the most common method of risk treatment. Mitigation involves the implementation of one or more countermeasures. Several criteria for selecting countermeasures include cost-effectiveness, legal liability, operational impact, and technical factors.

COST-EFFECTIVENESS

The most common criterion for countermeasure selection is cost-effectiveness, which is determined through cost-benefit analysis. Cost-benefit analysis for a given countermeasure (or collection of countermeasures) can be computed as follows:

ALE before countermeasure – ALE after countermeasure – Cost of countermeasure = Value of countermeasure to the organization

For example, if the ALE associated with a specific threat (data loss) is $1,000,000; the ALE after a countermeasure (enterprise tape backup) has been implemented is $10,000 (recovery time); and the cost of the countermeasure (purchase, installation, training, and maintenance) is $140,000; then the value of the countermeasure to the organization is $850,000.
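That worked example translates directly into code (a minimal sketch):

```python
def countermeasure_value(ale_before, ale_after, annual_cost):
    """Value of a countermeasure = ALE before countermeasure
    - ALE after countermeasure - cost of countermeasure."""
    return ale_before - ale_after - annual_cost

# The worked example from the text: data loss vs. enterprise tape backup.
value = countermeasure_value(ale_before=1_000_000,
                             ale_after=10_000,
                             annual_cost=140_000)
print(value)  # 850000
```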

When calculating the cost of the countermeasure, you should consider the total cost of ownership (TCO), including:

  • Purchase, development, and licensing
  • Architecture and design
  • Testing and installation
  • Normal operating costs
  • Resource allocation
  • Maintenance and repair
  • Production or service disruptions

The total cost of a countermeasure is normally stated as an annualized amount.
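A simple (hypothetical) way to annualize TCO is to spread one-time costs over the countermeasure's expected service life and add the recurring annual costs:

```python
def annualized_tco(one_time_costs, annual_costs, years_of_service):
    """Spread one-time costs (purchase, design, installation) over the
    countermeasure's service life, then add recurring annual costs."""
    return sum(one_time_costs) / years_of_service + sum(annual_costs)

# Hypothetical figures for a backup system with a five-year service life.
tco = annualized_tco(
    one_time_costs=[100_000, 20_000, 15_000],  # purchase, design, installation
    annual_costs=[8_000, 5_000],               # operations, maintenance
    years_of_service=5,
)
print(tco)  # 40000.0
```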

LEGAL LIABILITY

An organization that fails to implement a countermeasure against a threat is exposed to legal liability if the cost to implement a countermeasure is less than the loss resulting from a realized threat (see due care and due diligence, discussed earlier in this chapter). The legal liability we’re talking about here could encompass statutory liability (as a result of failing to obey the law) or civil liability (as a result of failing to comply with a legal contract). A cost-benefit analysis is a useful tool for determining legal liability.

OPERATIONAL IMPACT

The operational impact of a countermeasure must also be considered. If a countermeasure is too difficult to implement and operate, or interferes excessively with normal operations or production, it may be circumvented or ignored and thus not be effective. The end result may be a risk that is higher than the original risk prior to the so-called mitigation.

TECHNICAL FACTORS

The countermeasure itself shouldn’t, in principle (but often does, in practice), introduce new vulnerabilities. For example, improper placement, configuration, or operation of a countermeasure can cause new vulnerabilities; lack of fail-safe capabilities, insufficient auditing and accounting features, or improper reset functions can cause asset damage or destruction; finally, covert channel access or other unsafe conditions are technical issues that can create new vulnerabilities. Every new component in an environment, including security solutions, adds to the potential attack surface.

Implementation

After appropriate countermeasures have been selected, they need to be implemented in the organization and integrated with other systems and countermeasures, when appropriate. Organizations that implement countermeasures are making planned changes to their environment in specific ways. Examples of countermeasure implementation include

  • Change to policy, standard, or procedure. An update to an official policy, technology standard, or procedure will require planning to ensure that the change will not have unintended effects in the organization. There will be some level of review(s), analysis, and discussion before the changes are accepted, published, and communicated. Changes to policy, standard, or procedure may also require changes to technology, and vice versa.
  • Change to technology. An update to something as big as network architecture, or as focused as the configuration setting of an individual system, is used to mitigate risk. Changes to technology usually involve business processes such as change management and configuration management, and may also impact procedures or standards. Significant changes may also involve discussions or processes at the IT Steering Committee or Security Committee levels. Disaster recovery planning and the CMDB may also be impacted.
  • Change to staff. A change in staffing could include training, reallocation of responsibilities, the addition of temporary staff (contractors or consultants), or hiring of additional staff.

Types of controls

A control is defined as a safeguard that is used to ensure a desired outcome. A control can be implemented in technology (for example, a program that enforces password complexity policy by requiring users to employ complex passwords), in a procedure (for example, a security incident response process that requires an incident responder to inform upper management), or a policy (for example, a policy that requires users to report security incidents to management). Organizations typically will have dozens to hundreds or even thousands of controls. There are so many controls that, sometimes, it makes sense to categorize controls in various ways. This can help security professionals better understand the types and categories of controls used in their organization. A few of these category groupings are discussed here.

The major types of controls are

  • Preventive controls: Used to prevent errors, unwanted events, and unauthorized actions.
  • Detective controls: Used to detect errors, unwanted events, and unauthorized activities. An example of a detective control is a video surveillance system.
  • Deterrent controls: Used to discourage people from carrying out an activity. For example, a video surveillance system employs visibly placed monitors to inform and remind people that a video surveillance system is in place.

Other types of controls include

  • Corrective controls: Used to reverse or minimize the impact of errors and unauthorized events. These are also known as recovery controls. An example of a recovery control is the verification of successful data recovery after a hardware failure.
  • Administrative controls: These are policies, standards, or procedures (typically, just statements written down in some way).
  • Compensating controls: These are controls that are enacted when primary controls cannot be implemented for any reason, such as excessive cost.

Another way to think of controls is how they are enforced. These types are

  • Automatic controls: Some form of automated mechanism ensures their enforcement and effectiveness. For example, such a system may automatically display a login page that requires a user to successfully authenticate prior to accessing the system.
  • Manual controls: Controls must be performed manually. For example, there may be a mandatory review of proposed changes in the change control process.
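The password-complexity program mentioned earlier is a classic automatic, preventive control. Here is a minimal sketch; the specific policy rules (12-character minimum, four character classes) are hypothetical:

```python
import re

def meets_complexity_policy(password):
    """Automatic preventive control: reject passwords that don't meet a
    (hypothetical) policy of minimum length plus mixed character classes."""
    return (
        len(password) >= 12
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[0-9]", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )

print(meets_complexity_policy("Tr0ub4dor&3x!"))  # True
print(meets_complexity_policy("password"))       # False
```

Because the system itself enforces the rule, no human needs to remember to apply it, which is exactly what makes the control automatic.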

Most organizations don’t attempt to create their control frameworks from scratch; instead, they adopt one of these well-known industry standard control frameworks:

  • ISO/IEC 27002 (Code of practice for information security management).
  • NIST 800-53 (Security and Privacy Controls for Federal Information Systems and Organizations).
  • NIST 800-171 (Protecting Controlled Unclassified Information in Nonfederal Systems and Organizations).
  • COBIT 5 for Information Security.
  • PCI DSS (Payment Card Industry Data Security Standard).
  • HIPAA Security Rule controls.
  • CIS (Center for Internet Security) Security Controls.

Organizations typically start with one of these, then make individual additions, changes, or deletions to controls, until they arrive at the precise set of controls they deem sufficient.

Control assessment

An organization that implemented controls, but failed to periodically assess those controls, would be considered negligent. The periodic assessment of controls is a necessary part of a sound risk management system.

Control assessment approach

There are various approaches to security control assessments (SCA), including:

  • Control self-assessment. Here, an organization examines its own controls to determine whether they are being followed and whether they are effective.
  • External assessment. An organization employs an external agency (which could be a different part of the organization, or an external entity such as an audit firm or a consulting firm) to assess its controls.
  • Variations in assessment frequency. Various circumstances will compel an organization to set schedules for control assessment. For example, highly critical controls may be assessed monthly or quarterly, other controls assessed annually, and low-risk controls assessed every other year.

Organizations often take a blended approach to control assessment: some controls may be assessed internally, others externally, and some both internally and externally.

tip Laws, regulations, and standards often have requirements dictating the frequency of control assessment, as well as whether controls must be assessed internally or externally.

Control assessment methodology

It would take an entire book (a long chapter, anyway) to detail the methods used to assess controls. Most of this subject matter lies outside the realm of most CISSPs, so we’ll just summarize here. If you are “fortunate” enough to work in a highly regulated environment, you may get exposure to these concepts, and more.

CONTROL ASSESSMENT TECHNIQUES

There are five basic techniques used to assess the effectiveness of a control:

  • Observation. Here, an auditor watches a control as it is being performed.
  • Inquiry. An auditor asks questions of control owners about the control, how it is performed, and how records (if any) are produced.
  • Corroborative inquiry. Here, an auditor asks other persons about a control, in order to see if their descriptions agree or conflict with those given by control owners.
  • Inspection. An auditor examines records, and other artifacts, to see whether the control is operating properly.
  • Reperformance. An auditor will perform actions associated with the control to see whether the results indicate proper control function.

Auditors often use more than one of the techniques above when testing control effectiveness. The method(s) used are sometimes determined by the auditor, but sometimes the law, regulation, or standard specifies the type of control testing required.

SAMPLING TECHNIQUES

Some controls are manifested in many physical locations, or are present in many separate information systems. Sometimes, an auditor will elect to examine a subset of systems or locations instead of all of them. In large organizations, or in organizations where controls are implemented identically in all locations, it makes sense to examine a subset of the total number of instances (auditors call the entire collection of instances the population).

The available techniques include the following:

  • Statistical. Random selection that represents the entire population.
  • Judgmental. Auditor selects samples based on specific criteria.
  • Discovery. Used for high-risk controls where even a single exception is significant; an auditor may continue to examine a large population in search of a single exception.

Some laws, regulations, and standards have their own rules about sampling and the techniques that are permitted.
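The statistical and judgmental techniques can be illustrated with a short Python sketch (the server names are hypothetical):

```python
import random

def statistical_sample(population, size, seed=None):
    """Statistical sampling: a random selection intended to represent
    the entire population."""
    rng = random.Random(seed)
    return rng.sample(population, size)

def judgmental_sample(population, criterion):
    """Judgmental sampling: the auditor selects items meeting specific criteria."""
    return [item for item in population if criterion(item)]

# Hypothetical population of 50 server names.
servers = [f"srv-{n:02d}" for n in range(50)]

print(statistical_sample(servers, 5, seed=1))
print(judgmental_sample(servers, lambda s: s.endswith("0")))
```

A fixed seed is used here only to make the random draw repeatable for illustration; in a real audit, the selection should not be predictable.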

REPORTING

Auditors will typically create formal reports that include several components, including:

  • Audit objectives
  • Personnel interviewed
  • Documents and records examined
  • Dates of interviews and examinations
  • Controls examined
  • Findings for each control (whether effective or ineffective)

Some laws, regulations, and standards specify elements required in audit reports, and sometimes even the format of a report.

Monitoring and measurement

Any safeguards or controls that are implemented need to be managed and, as you know, you can’t manage what you don’t measure! Monitoring and measurement not only help you manage safeguards and controls, they also help you verify and prove effectiveness (for auditing and other purposes).

Monitoring and measurement refer to active, intentional steps in controls and processes, so that management can understand how controls and processes are operating. Depending on the control or process, one or more of the following will be recorded for management reporting:

  • Number of events that occur
  • Outcome of each event (for example, success or failure)
  • Assets involved
  • Persons, departments, business units, or customers involved
  • Costs involved
  • Length of time
  • Location

For some controls, management may direct personnel (or systems, for automatic controls) to create alerts or exceptions in specific circumstances. This will inform management of specific events where they may wish to take action of some kind. For example, a bank’s customer representative might be required to inform a branch manager if a customer asks for change for a ten-thousand-dollar bill.
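Tallying event outcomes for management reporting can be as simple as the following sketch (the event log is hypothetical):

```python
from collections import Counter

def summarize_events(events):
    """Tally control events by outcome for management reporting."""
    return Counter(event["outcome"] for event in events)

# Hypothetical log of login-control events.
log = [
    {"outcome": "success", "user": "alice"},
    {"outcome": "failure", "user": "mallory"},
    {"outcome": "success", "user": "bob"},
]
summary = summarize_events(log)
print(summary["success"], summary["failure"])  # 2 1
```

The same pattern extends to the other measurements listed above (assets, costs, durations, locations) by counting or summing other fields.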

Asset valuation

Asset valuation is an important part of risk management, because managers and executives need to be aware of the tangible and intangible value of all assets involved in specific incidents of risk management.

Once in a while, an asset’s valuation can come from the accounting department’s balance sheet (for better organizations that have a good handle on asset inventory, value, and depreciation), but often that’s only a part of the story. For example, if an older server is involved in an incident and must be replaced, that replacement cost will be far higher than the asset’s depreciated value. Further, the time required to deploy and ready a replacement server, and the cost of downtime, also need to be considered.

There are sometimes other ways to assign values to assets. For example, an asset’s contribution to revenue may change one’s perspectives on an asset’s value. If an asset with a $10,000 replacement cost is key in helping the organization realize $5 million in revenue, is it still worth just $10,000?

Reporting

Regular reporting is critical to ensure that risk management is always “top of mind” for management. Reports should be accurate and concise. Never attempt to hide or downplay an issue, incident, or other bad news. Any changes to the organization’s risk posture — whether due to a new acquisition, changing technology, new threats, or the failure of a safeguard, among others — should be promptly reported and explained.

Potentially, there is a lot of reporting going on in a risk management process, including:

  • Additions and changes to the risk register
  • Risk treatment decisions
  • Internal audits
  • External audits
  • Changes to controls
  • Controls monitoring and key metrics
  • Changes in personnel related to the risk management program

You guessed it: Some laws, regulations, and standards may require these and other types of reports (and, in some cases, in specific formats).

Continuous improvement

Continuous (or continual) improvement is more than a state of mind or a philosophy: better organizations bake it into their business processes as a way of intentionally seeking opportunities to do things better.

ISO/IEC 27001 (Information Security Management Systems [ISMS] requirements) specifically requires continual improvement in several ways:

  • It requires management to promote continual improvement.
  • It requires a statement of commitment to continual improvement in an organization’s security policy.
  • It requires security planning to achieve continual improvement.
  • It requires that the organization provide resources in order to achieve continual improvement.
  • It requires that management reviews seek opportunities for continual improvement.
  • It requires a formal corrective action process that helps to bring about continual improvement.

Risk frameworks

If you ask an experienced security and risk professional about risk frameworks, chances are they will think you are talking about either risk assessment frameworks or risk management frameworks. These frameworks are distinct, but deal with the same general subject matter: identification of risk that can be treated in some way.

Risk assessment frameworks

Risk assessment frameworks are methodologies used to identify and assess risk in an organization. These methodologies are, for the most part, mature and well established.

Some common risk assessment methods include

  • Factor Analysis of Information Risk (FAIR). A proprietary framework for understanding, analyzing, and measuring information risk.
  • OpenFAIR. An open-source version of FAIR.
  • Operationally Critical Threat, Asset and Vulnerability Evaluation (OCTAVE). Developed by the CERT Coordination Center.
  • Threat Agent Risk Assessment (TARA). Developed by Intel, this is a newer kid on the block.

Risk management frameworks

A risk framework is a set of linked processes and records that work together to identify and manage risk in an organization. The activities in a typical risk management framework are

  • Create strategies and policies.
  • Establish risk tolerance.
  • Categorize systems and information.
  • Select a baseline set of security controls.
  • Implement security controls.
  • Assess security controls for effectiveness.
  • Authorize system operation.
  • Monitor security controls.

There is no need to build a risk management framework from scratch. Instead, there are several excellent frameworks available that can be adapted for any size and type of organization. These frameworks include

  • NIST SP800-37, Guide for Applying the Risk Management Framework to Federal Information Systems.
  • ISO/IEC 27005 (Information Security Risk Management).
  • Risk Management Framework (RMF) from the National Institute of Standards and Technology (NIST).
  • COBIT 5 from ISACA.
  • Enterprise Risk Management — Integrated Framework from COSO (Committee of Sponsoring Organizations of the Treadway Commission).

Understand and Apply Threat Modeling

Threat modeling is a type of risk analysis used to identify security defects in the design phase of an information system or business process. Threat modeling is most often applied to software applications, but it can be used for operating systems, devices, and business processes with equal effectiveness.

Threat modeling is typically attack-centric; it is most often used to identify vulnerabilities in software applications that an attacker could exploit.

Threat modeling is most effective when performed at the design phase of an information system, application, or process. When threats and their mitigation are identified at the design phase, much effort is saved through the avoidance of design changes and fixes in an existing system.

While there are different approaches to threat modeling, the typical steps are

  • Identifying threats
  • Determining and diagramming potential attacks
  • Performing reduction analysis
  • Remediation of threats

Identifying threats

Threat identification is the first step that is performed in threat modeling. Threats are those actions that an attacker may be able to successfully perform if there are corresponding vulnerabilities present in the application or system.

For software applications, two mnemonics are commonly used as memory aids during threat modeling. They are

  • STRIDE, a list of basic threats (developed by Microsoft):
    • Spoofing of user identity
    • Tampering
    • Repudiation
    • Information disclosure
    • Denial of service
    • Elevation of privilege
  • DREAD, an older technique used for assessing threats:
    • Damage
    • Reproducibility
    • Exploitability
    • Affected users
    • Discoverability

While these mnemonics themselves don’t contain threats, they do assist the individual performing threat modeling, by reminding the individual of basic threat categories (STRIDE) and their analysis (DREAD).
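DREAD scoring is usually just an average of the five ratings. Here is a minimal sketch; the 1-to-10 scale and the example ratings are a common convention, not part of the mnemonic itself:

```python
def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    """Average the five DREAD ratings into a single threat score.
    (Ratings on a 1-10 scale, by convention.)"""
    ratings = [damage, reproducibility, exploitability,
               affected_users, discoverability]
    return sum(ratings) / len(ratings)

# Hypothetical ratings for a SQL injection threat.
print(dread_score(8, 9, 7, 8, 8))  # 8.0
```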

tip Appendices D and E in NIST SP800-30, Guide for Conducting Risk Assessments, are an excellent general-purpose source for threats.

Determining and diagramming potential attacks

After threats have been identified, threat modeling continues with the creation of diagrams that illustrate attacks on an application or system. One common technique is the attack tree, which outlines the steps required to attack a system. Figure 3-2 illustrates an attack tree for a mobile banking application.

image

FIGURE 3-2: Attack tree for a mobile banking application.

remember An attack tree illustrates the steps used to attack a target system.
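An attack tree is naturally represented as a nested structure; enumerating its leaves yields the concrete attack steps. This sketch is hypothetical and is not the tree shown in Figure 3-2:

```python
# A minimal attack-tree representation: each node is a goal with sub-steps.
attack_tree = {
    "goal": "Steal funds via mobile banking app",
    "children": [
        {"goal": "Compromise user credentials", "children": [
            {"goal": "Phish the user", "children": []},
            {"goal": "Install keylogger", "children": []},
        ]},
        {"goal": "Exploit application vulnerability", "children": [
            {"goal": "Tamper with API requests", "children": []},
        ]},
    ],
}

def leaf_attacks(node):
    """Enumerate the leaf nodes: the concrete attack steps."""
    if not node["children"]:
        return [node["goal"]]
    return [leaf for child in node["children"] for leaf in leaf_attacks(child)]

print(leaf_attacks(attack_tree))
```

Each leaf is a candidate for mitigation, which feeds directly into the remediation step described later.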

Performing reduction analysis

When performing a threat analysis on a complex application or a system, it is likely that there will be many similar elements that represent duplications of technology. Reduction analysis is an optional step in threat modeling to avoid duplication of effort. It doesn’t make sense to spend a lot of time analyzing different components in an environment if they are all using the same technology and configuration.

Here are typical examples:

  • An application contains several form fields (which are derived from the same source code) that request a bank account number. Because all of the field input modules use the same code, detailed analysis only needs to be done once.
  • An application sends several different types of messages over the same TLS connection. Because the same certificate and connection are being used, detailed analysis of the TLS connection only needs to be done once.

Technologies and processes to remediate threats

Just as in routine risk analysis, the next step in threat analysis is the enumeration of potential measures to mitigate the identified threat. Because the nature of threats varies widely, remediation may consist of one or more of the following for each risk:

  • Change source code (for example, add functions to closely examine input fields and filter out injection attacks).
  • Change configuration (for example, switch to a more secure encryption algorithm, or expire passwords more frequently).
  • Change business process (for example, add or change steps in a process or procedure to record or examine key data).
  • Change personnel (for example, provide training, move responsibility for a task to another person).

remember Recall that the four options for risk treatment are mitigation, transfer, avoidance, and acceptance. In the case of threat modeling, some threats may be accepted as-is.

Integrate Security Risk Considerations into Supply Chain Management, Mergers, and Acquisitions

Integrating security risk considerations into supply chain management and merger and acquisition strategy helps to minimize the introduction of new or unknown risks into the organization.

It is often said that security in an organization is only as strong as its weakest link. In the context of service providers, mergers, and acquisitions, the security of all organizations in a given ecosystem will be dragged down by shoddy practices in any one of them. Connecting organizations together before sufficient analysis can result in significant impairment of the security capabilities overall.

remember The task of reconciling policies, requirements, business processes, and procedures during a merger or acquisition is rarely straightforward. Further, there should be no assumption of one organization’s policies, requirements, processes and procedures being the “right” or “best” way for all parties in the merger or acquisition – even if that organization is the acquiring entity.

Instead, each organization’s individual policies, requirements, processes, and procedures should be assessed to identify the best solution for the newly formed organization going forward.

Hardware, software, and services

Any new hardware, software, or services being considered by an organization should be appropriately evaluated to determine both how it will impact the organization’s overall security and risk posture, and how it will affect other hardware, software, services, and processes already in place within the organization. For example, integration issues can have a negative impact on a system’s integrity and availability.

Third-party assessment and monitoring

It’s important to consider the third parties that organizations use. Not only do organizations need to carefully examine their third-party risk programs, but a fresh look at the third parties themselves is also needed, to ensure that the risk level related to each third party has not changed to the detriment of the organization.

Any new third-party assessments or monitoring should be carefully considered. Contracts (including privacy, non-disclosure requirements, and security requirements) and service-level agreements (SLAs, discussed later in this section) should be reviewed to ensure that all important security issues and regulatory requirements still are addressed adequately.

Minimum security requirements

Minimum security requirements, standards and baselines should be documented to ensure they are fully understood and considered in acquisition strategy and practice. Blending security requirements from two previously separate organizations is almost never as easy as simply combining them together into one document. Instead, there may be many instances of overlap, underlap, gaps, and contradiction that must all be reconciled. A transition period may be required, so that there is ample time to adjust the security configurations, architectures, processes, and practices to meet the new set of requirements after the merger or acquisition.

Service-level requirements

Service-level agreements (SLAs) establish minimum performance standards for a system, application, network, service, or process. An organization establishes internal SLAs and operating level agreements (OLAs) to provide its end-users with a realistic expectation of the performance of its information systems, services, and processes. For example, a help desk SLA might prioritize incidents as 1, 2, 3, and 4, and establish SLA response times of 10 minutes, 1 hour, 4 hours, and 24 hours, respectively. In third-party relationships, SLAs provide contractual performance requirements that an outsourcing partner or vendor must meet. For example, an SLA with an Internet service provider might establish a maximum acceptable downtime which, if exceeded within a given period, results in invoice credits or (if desired) cancellation of the service contract.
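The help desk SLA described above maps neatly to a lookup table (a minimal sketch):

```python
from datetime import timedelta

# The help desk SLA from the text: response times by incident priority.
SLA_RESPONSE = {
    1: timedelta(minutes=10),
    2: timedelta(hours=1),
    3: timedelta(hours=4),
    4: timedelta(hours=24),
}

def sla_met(priority, actual_response):
    """Was the incident handled within its priority's SLA response time?"""
    return actual_response <= SLA_RESPONSE[priority]

print(sla_met(2, timedelta(minutes=45)))  # True
print(sla_met(1, timedelta(minutes=30)))  # False
```

Tracking the percentage of incidents for which `sla_met` returns true is a typical metric reported against an SLA or OLA.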

Establish and Manage Information Security Education, Training, and Awareness

The CISSP candidate should be familiar with the tools and objectives of security awareness, training, and education programs. Adversaries are well aware that, as organizations’ technical defenses improve, the most effective way to attack an organization is through its staff. Hence, all personnel in an organization need to be aware of attack techniques so that they can be on the lookout for these attacks and not be fooled by them.

Appropriate levels of awareness, training, and education required within the organization

remember Security awareness is an often-overlooked factor in an information security program. Although security is the focus of security practitioners in their day-to-day functions, it’s often taken for granted that common users possess this same level of security awareness. As a result, users can unwittingly become the weakest link in an information security program’s defenses. Several key factors are critical to the success of a security awareness program:

  • Senior-level management support: Under ideal circumstances, senior management is seen attending and actively participating in training efforts.
  • Clear demonstration of how security supports the organization’s business objectives: Employees need to understand why security is important to the organization and how it benefits the organization as a whole.
  • Clear demonstration of how security affects all individuals and their job functions: The awareness program needs to be relevant for everyone, so that everyone understands that “security is everyone’s responsibility.”
  • Taking into account the audience’s current level of training and understanding of security principles: Training that’s too basic will be ignored; training that’s too technical will not be understood.
  • Ensuring training is relevant and engaging: Training needs to be relevant and engaging for all audiences, reflecting applicable regulations, technologies in use, and the organization’s culture.
  • Action and follow-up: A glitzy presentation that’s forgotten as soon as the audience leaves the room is useless. Find ways to incorporate the security information you present with day-to-day activities and follow-up plans.

The three main components of an effective security awareness program are a general awareness program, formal training, and education.

Awareness

A general security awareness program provides basic security information and ensures that everyone understands the importance of security. Awareness programs may include the following elements:

  • Indoctrination and orientation: New employees and contractors should receive basic indoctrination and orientation. During the indoctrination, they may receive a copy of the corporate information security policy, be required to acknowledge and sign acceptable-use statements and non-disclosure agreements, and meet immediate supervisors and pertinent members of the security and IT staff.
  • Presentations: Lectures, video presentations, and interactive computer-based training (CBT) are excellent tools for disseminating security training and useful information. Employee bonuses and performance reviews are sometimes tied to participation in these types of security awareness programs.
  • Printed materials: Security posters, corporate newsletters, and periodic bulletins are useful for disseminating basic information such as security tips and promoting awareness of security.

Training

Formal training programs provide more in-depth information than an awareness program and may focus on specific security-related skills or tasks. Such training programs may include

  • Classroom training: Instructor-led or other formally facilitated training, possibly at corporate headquarters or a company training facility.
  • Self-paced training: Usually web-based training where students can proceed at their own pace.
  • On-the-job training: May include one-on-one mentoring with a peer or immediate supervisor.
  • Technical or vendor training: Training on a specific product or technology provided by a third party.
  • Apprenticeship or qualification programs: Formal probationary status or qualification standards that must be satisfactorily completed within a specified time period.

Education

An education program provides the deepest level of security training, focusing on underlying principles, methodologies, and concepts. In all but the largest organizations, this training is delivered by external agencies or by colleges, universities, and vocational schools.

An education program may include

  • Continuing education requirements: Continuing Education Units (CEUs) are becoming popular for maintaining high-level technical or professional certifications such as the CISSP or Certified Information Systems Auditor (CISA).
  • Certificate programs: Many colleges and universities offer adult education programs that have classes about current and relevant subjects for working professionals.
  • Formal education or degree requirements: Many companies offer tuition assistance or scholarships for employees enrolled in classes that are relevant to their profession.

Measuring the effectiveness of security training

As we say often in this book, you can’t manage what you don’t measure. Security awareness training is no exception. It is vital that security awareness training include a number of different measurements so that security managers and company leadership know whether the effort is worth it. Some examples include

  • Quizzes: Whether delivered in the classroom or via on-demand web-based training, quizzes send a clear message that workers are expected to learn and retain security awareness knowledge. Enforcing minimum passing scores makes quizzes even more effective.
  • Training metrics: It’s helpful to track completion rates to ensure that as many workers as possible complete required and optional training.
  • Other security program metrics: It can be useful to correlate security awareness training metrics with other metrics, such as security incidents, reports to ethics hotlines, and employees’ reporting of security issues. Note that some of these metrics may trend upward after training, which can indicate that workers are more aware of security-related issues and more likely to report them.
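The measurements above boil down to simple arithmetic over training records. The following sketch (ours, not from the exam outline; all record fields and data are hypothetical) shows how completion rate, quiz pass rate, and average score for a training campaign might be computed:

```python
# Illustrative sketch: simple security awareness training metrics.
# The record layout ({"employee", "completed", "score"}) is a made-up
# example, not a standard format.

def training_metrics(records, passing_score=80):
    """Summarize completion and quiz performance for one campaign."""
    total = len(records)
    completed = [r for r in records if r["completed"]]
    # Only completed trainings with a recorded quiz score count toward
    # pass rate and average score.
    scored = [r["score"] for r in completed if r["score"] is not None]
    passed = [s for s in scored if s >= passing_score]

    return {
        "completion_rate": len(completed) / total if total else 0.0,
        "pass_rate": len(passed) / len(scored) if scored else 0.0,
        "average_score": sum(scored) / len(scored) if scored else 0.0,
    }

# Example usage with fabricated data
records = [
    {"employee": "alice", "completed": True, "score": 90},
    {"employee": "bob", "completed": True, "score": 70},
    {"employee": "carol", "completed": False, "score": None},
    {"employee": "dave", "completed": True, "score": 85},
]
metrics = training_metrics(records)
```

Tracked over successive campaigns, these numbers show whether completion and pass rates are trending in the right direction, which is the kind of evidence management needs to judge whether the program is worth its cost.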

Periodic reviews for content relevancy

Congratulations! You’ve chosen a profession that is constantly and rapidly changing! As such, security education, training, and awareness programs must be reviewed and updated regularly to ensure they remain relevant, and to keep your own knowledge of current security concepts, trends, and technologies up to date. We suggest examining the content of security education and training programs at least once per year, removing any mention of obsolete or retired technologies or systems and adding current topics.
