Chapter 2

Domain 1: Security and Risk Management (e.g., Security, Risk, Compliance, Law, Regulations, Business Continuity)

Abstract

Security and Risk Management, the topic of this chapter and Domain 1 of the CISSP, presents numerous critically important terms and concepts that permeate several domains. This chapter introduces the CIA triad of Confidentiality, Integrity, and Availability, which are touched upon in virtually every domain and chapter. In addition to CIA, concepts such as the Principle of Least Privilege and Need to Know are presented. Key terms, concepts, and formulas related to risk management are presented within this chapter. Risk, threat, and vulnerability are basic terms that must be understood to prove successful with this domain. Understanding how to perform calculations using Annualized Loss Expectancy (ALE), Single Loss Expectancy (SLE), Annualized Rate of Occurrence (ARO), and Exposure Factor (EF) is highlighted as part of quantitative risk analysis. Important concepts related to information security governance such as privacy, due care, due diligence, certification, and accreditation are also a focus of this chapter.

Keywords

Confidentiality
Integrity
Availability
Subject
Object
Annualized Loss Expectancy
Threat
Vulnerability
Risk
Safeguard
Total Cost of Ownership
Return on Investment

Exam objectives in this chapter

Cornerstone Information Security Concepts
Legal and Regulatory Issues
Security and 3rd Parties
Ethics
Information Security Governance
Access Control Defensive Categories and Types
Risk Analysis
Types of Attackers

Unique Terms and Definitions

Confidentiality - seeks to prevent the unauthorized disclosure of information: it keeps data secret
Integrity - seeks to prevent unauthorized modification of information. In other words, integrity seeks to prevent unauthorized write access to data. Integrity also seeks to ensure data that is written in an authorized manner is complete and accurate.
Availability - ensures that information is available when needed
Subject - An active entity on an information system
Object - A passive data file
Annualized Loss Expectancy - the cost of loss due to a risk over a year
Threat - a potentially negative occurrence
Vulnerability - a weakness in a system
Risk - a matched threat and vulnerability
Safeguard - a measure taken to reduce risk
Total Cost of Ownership - the cost of a safeguard
Return on Investment - money saved by deploying a safeguard
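The quantitative risk terms above fit together arithmetically, and a short worked example makes the relationships concrete. The asset value, exposure factor, occurrence rate, and safeguard costs below are invented for illustration; the formulas themselves (SLE = Asset Value × Exposure Factor, ALE = SLE × ARO) are the standard ones covered later under quantitative risk analysis:

```python
# Hypothetical quantitative risk calculation using the terms defined above.
asset_value = 100_000      # dollar value of the asset (assumed)
exposure_factor = 0.25     # fraction of asset value lost per incident (assumed)
aro = 2                    # Annualized Rate of Occurrence: incidents per year (assumed)

sle = asset_value * exposure_factor  # Single Loss Expectancy: cost of one incident
ale = sle * aro                      # Annualized Loss Expectancy: yearly cost of the risk

safeguard_tco = 30_000     # Total Cost of Ownership of a safeguard, per year (assumed)
ale_after = 10_000         # ALE remaining after the safeguard is deployed (assumed)

# Return on Investment: money saved per year by deploying the safeguard.
roi = ale - ale_after - safeguard_tco

print(sle, ale, roi)  # 25000.0 50000.0 10000.0
```

A positive ROI, as here, is the kind of figure that justifies a safeguard to leadership in financial terms.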

Introduction

Our job as information security professionals is to evaluate risks against our critical assets and deploy safeguards to mitigate those risks. We work in various roles: firewall engineers, penetration testers, auditors, management, etc. The common thread is risk: it is part of our job description.
The Security and Risk Management domain focuses on risk analysis and mitigation. This domain also details security governance, or the organizational structure required for a successful information security program. The difference between organizations that are successful versus those that fail in this realm is usually not tied to dollars or size of staff: it is tied to the right people in the right roles. Knowledgeable and experienced information security staff with supportive and vested leadership is the key to success.
Speaking of leadership, learning to speak the language of your leadership is another key to personal success in this industry. The ability to effectively communicate information security concepts with C-level executives is a rare and needed skill. This domain will also help you to speak their language by discussing risk in terms such as Total Cost of Ownership (TCO) and Return on Investment (ROI).

Cornerstone Information Security Concepts

Before we can explain access control we must define cornerstone information security concepts. These concepts provide the foundation upon which the 8 domains of the Common Body of Knowledge are built.

Note

Cornerstone information security concepts will be repeated throughout this book. This repetition is by design: we introduce the concepts at the beginning of the first domain, and then reinforce them throughout the later domains, while focusing on issues specific to that domain. If you do not understand these cornerstone concepts, you will not pass the exam.

Confidentiality, Integrity and Availability

Confidentiality, Integrity, and Availability are referred to as the “CIA triad,” the cornerstone concept of information security. The triad, shown in Figure 2.1, forms the three-legged stool upon which information security is built. The order of the acronym may change (some prefer “AIC,” perhaps to avoid association with a certain intelligence agency), but the order is not important: understanding each concept is critical. This book will use the “CIA” acronym.
Figure 2.1 The CIA Triad
All three pieces of the CIA triad work together to provide assurance that data and systems remain secure. Do not assume that one part of the triad is more important than another. Every IT system will require a different prioritization of the three, depending on the data, user community, and timeliness required for accessing the data. There are opposing forces to CIA. As shown in Figure 2.2, those forces are disclosure, alteration, and destruction (DAD).
Figure 2.2 Disclosure, Alteration and Destruction

Confidentiality

Confidentiality seeks to prevent the unauthorized disclosure of information: it keeps data secret. In other words, confidentiality seeks to prevent unauthorized read access to data. An example of a confidentiality attack would be the theft of Personally Identifiable Information (PII), such as credit card information.
Data must only be accessible to users who have the clearance, formal access approval, and the need to know. Many nations share the desire to keep their national security information secret and accomplish this by ensuring that confidentiality controls are in place.
Large and small organizations need to keep data confidential. One U.S. law, the Health Insurance Portability and Accountability Act (HIPAA), requires that medical providers keep the personal and medical information of their patients private. Can you imagine the potential damage to a medical business if patients’ medical and personal data were somehow released to the public? That would not only lead to a loss in confidence but could expose the medical provider to possible legal action by the patients or government regulators.

Integrity

Integrity seeks to prevent unauthorized modification of information. In other words, integrity seeks to prevent unauthorized write access to data.
There are two types of integrity: data integrity and system integrity. Data integrity seeks to protect information against unauthorized modification; system integrity seeks to protect a system, such as a Windows 2008 server operating system, from unauthorized modification. If an unethical student compromises a college grade database to raise his failing grades, he has violated the data integrity. If he installs malicious software on the system to allow future “back door” access, he has violated the system integrity.

Availability

Availability ensures that information is available when needed. Systems need to be usable (available) for normal business use. An example of an attack on availability is a Denial of Service (DoS) attack, which seeks to deny service (or availability) of a system.

Tension Between the Concepts

Confidentiality, integrity, and availability are sometimes in opposition: locking your data in a safe and throwing away the key may help confidentiality and integrity, but harms availability. That is the wrong answer: our mission as information security professionals is to balance the needs of confidentiality, integrity, and availability, and make tradeoffs as needed. One sure sign of an information security rookie is throwing every confidentiality and integrity control at a problem, while not addressing availability. Properly balancing these concepts, as shown in Figure 2.3, is not easy, but worthwhile endeavors rarely are.
Figure 2.3 Balancing the CIA Triad

Disclosure, Alteration and Destruction

The CIA triad may also be described by its opposite: Disclosure, Alteration, and Destruction (DAD). Disclosure is the unauthorized release of information; alteration is the unauthorized modification of data; and destruction is making systems or data unavailable. While the order of the individual components of the CIA acronym sometimes changes, the DAD acronym is shown in that order.

Identity and Authentication, Authorization and Accountability (AAA)

The term “AAA” is often used to describe the cornerstone concepts of Authentication, Authorization, and Accountability. Left out of the AAA acronym is Identification, which is required before the remaining three “A’s” can be achieved.

Identity and Authentication

Identity is a claim: if your name is “Person X,” you identify yourself by saying “I am Person X.” Identity alone is weak because there is no proof. You can also identify yourself by saying “I am Person Y.” Proving an identity claim is called authentication: you authenticate the identity claim, usually by supplying a piece of information or an object that only you possess, such as a password in the digital world, or your passport in the physical world.
When you check in at the airport, the ticket agent asks for your name (your identity). You can say anything you would like, but if you lie you will quickly face a problem: the agent will ask for your driver’s license or passport. In other words, they will seek to authenticate your identity claim.
Figure 2.4 shows the relationship between identity and authentication. User Deckard logs into his email account at ericconrad.com. He types “deckard” in the username box; this is his identity on the system. Note that Deckard could type anything in the Username box: identification alone is weak. It requires proof, which is authentication. Deckard then types a password “R3plicant!” This is the correct password for the user Deckard at ericconrad.com, so Deckard’s identity claim is proven and he is logged in.
Figure 2.4 Identification and Authentication
Identities must be unique: if two employees are named John Smith, their usernames (identities) cannot both be jsmith; this would harm accountability. Sharing accounts (identities) also harms accountability: policy should forbid sharing accounts, and security awareness training should be conducted to educate users about this risk.
Ideally, usernames should be non-descriptive. The example username “jsmith” is a descriptive username: an attacker could guess the username by simply knowing the user’s actual name. This would provide one half (a valid identity) of the information required to launch a successful password guessing attack (the second half is jsmith’s password, required to authenticate). A non-descriptive identity of “bcon1203” would make password-guessing attacks (and many other types of attacks) more difficult.
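The identification-then-authentication flow described above can be sketched in a few lines of Python. The username and credential store below are invented to mirror the Deckard example, and the bare SHA-256 hash is a simplification: real systems should use salted, adaptive password hashing (such as bcrypt or scrypt) rather than a plain digest:

```python
import hashlib

# Hypothetical credential store: identity -> hash of the user's password.
# (Real systems should use salted, adaptive hashes, not bare SHA-256.)
users = {"deckard": hashlib.sha256(b"R3plicant!").hexdigest()}

def authenticate(identity: str, password: str) -> bool:
    """Identification is the claim (the identity); authentication proves it."""
    stored = users.get(identity)
    if stored is None:
        return False                      # an identity claim alone proves nothing
    supplied = hashlib.sha256(password.encode()).hexdigest()
    return supplied == stored             # proof: only the user knows the password

print(authenticate("deckard", "R3plicant!"))  # True
print(authenticate("deckard", "wrong"))       # False
```

Note that typing any username succeeds as identification; only the matching password completes authentication.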

Authorization

Authorization describes the actions you can perform on a system once you have been identified and authenticated. Actions may include reading, writing, or executing files or programs. If you are an information security manager for a company with a human resources database, you may be authorized to view your own data and perhaps some of your employees’ data (such as accrued sick time or vacation time). You would not be authorized to view the CIO’s salary.
Figure 2.5 shows authorization using an Ubuntu Linux system. User Deckard has identified and authenticated himself, and logged into the system. He uses the Linux “cat” command to view the contents of “sebastian-address.txt.” Deckard is authorized to view this file, so permission is granted. Deckard then tries to view the file “/etc/shadow,” which stores the users’ password hashes. Deckard is not authorized to view this file, and permission is denied.
Figure 2.5 Linux File Authorization
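The authorization decision in Figure 2.5 can be modeled as a simple lookup: given an authenticated subject and an object, is the requested action permitted? The subjects, filenames, and permission table below are invented to mirror the Deckard example; real operating systems enforce this with file permission bits and access control lists rather than a dictionary:

```python
# Hypothetical permission table mapping (subject, object) -> allowed actions,
# mirroring the Linux example: deckard may read the address file, but not /etc/shadow.
permissions = {
    ("deckard", "sebastian-address.txt"): {"read", "write"},
    ("root", "/etc/shadow"): {"read", "write"},
}

def authorized(subject: str, obj: str, action: str) -> bool:
    """Authorization: what an authenticated subject may do to an object."""
    return action in permissions.get((subject, obj), set())

print(authorized("deckard", "sebastian-address.txt", "read"))  # True
print(authorized("deckard", "/etc/shadow", "read"))            # False
```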

Accountability

Accountability holds users accountable for their actions. This is typically done by logging and analyzing audit data. Enforcing accountability helps keep “honest people honest.” For some users, knowing that data is logged is not enough to provide accountability: they must know that the data is logged and audited, and that sanctions may result from violation of policy.
The healthcare company Kaiser Permanente enforced accountability in 2009 when it fired or disciplined over 20 workers for violating policy (and possibly violating regulations such as HIPAA) by viewing Nadya Suleman’s (aka the Octomom) medical records without a need to know. See http://www.scmagazineus.com/octomoms-hospital-records-accessed-15-workers-fired/article/129820/ for more details. Logging that data is not enough: identifying violations and sanctioning the violators is also required.
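The two halves of accountability described above, logging and then auditing the log, can be sketched as follows. The subject names, object names, and care-team list are invented for illustration; the point is that recording an access is only half the control, and the review that flags violations is the other half:

```python
import datetime

# Hypothetical audit log: accountability requires recording who did what, and when.
audit_log = []

def audited_access(subject: str, obj: str, action: str, allowed: bool) -> None:
    """Record every access decision so violations can later be identified."""
    audit_log.append({
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "subject": subject, "object": obj, "action": action, "allowed": allowed,
    })

audited_access("nurse17", "patient-records/suleman", "read", True)

# Auditing: logging alone is not accountability; the log must be reviewed.
# Here we flag reads of a record by subjects outside its care team (assumed list).
care_team = {"dr-smith", "nurse03"}
violations = [e for e in audit_log
              if e["object"] == "patient-records/suleman"
              and e["subject"] not in care_team]
print(len(violations))  # 1
```

Sanctioning the flagged violators, as in the Kaiser Permanente case, is what turns this review into enforced accountability.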

Non-Repudiation

Non-repudiation means a user cannot deny (repudiate) having performed a transaction. It combines authentication and integrity: non-repudiation authenticates the identity of a user who performs a transaction, and ensures the integrity of that transaction. You must have both authentication and integrity to have non-repudiation: proving you signed a contract to buy a car (authenticating your identity as the purchaser) is not useful if the car dealer can change the price from $20,000 to $40,000 (violate the integrity of the contract).
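The integrity half of non-repudiation can be illustrated with a cryptographic hash: if the dealer alters the contract price, the digest of the originally signed document no longer matches. The contract text below is invented, and a hash alone does not provide non-repudiation; that additionally requires a digital signature binding the digest to the signer's identity:

```python
import hashlib

contract = b"Buyer agrees to purchase the vehicle for $20,000."
signed_digest = hashlib.sha256(contract).hexdigest()  # digest the buyer signed

tampered = b"Buyer agrees to purchase the vehicle for $40,000."

# Any alteration changes the digest, so tampering is detectable.
print(hashlib.sha256(contract).hexdigest() == signed_digest)   # True
print(hashlib.sha256(tampered).hexdigest() == signed_digest)   # False
```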

Least Privilege and Need to Know

Least privilege means users should be granted the minimum amount of access (authorization) required to do their jobs, but no more. Need to know is more granular than least privilege: the user must need to know that specific piece of information before accessing it.
Sebastian is a nurse who works in a medical facility with multiple practices. His practice has four doctors, and Sebastian could treat patients for any of those four doctors. Least privilege could allow Sebastian to access the records of the four doctors’ patients, but not access records for patients of other doctors in other practices.
Need to know means Sebastian can access a patient’s record only if he has a business need to do so. If there is a patient being treated by Sebastian’s practice, but not by Sebastian himself, least privilege could allow access, but need to know would not.
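The distinction between the two controls in Sebastian's example can be sketched as two nested checks: least privilege grants access at the level of the job role, while need to know narrows it to the specific records the business need covers. The patient names below are invented for illustration:

```python
# Hypothetical model of the nurse example above.
practice_patients = {"rachael", "leon", "pris", "roy"}   # patients of the practice's 4 doctors
sebastians_patients = {"rachael", "leon"}                # patients Sebastian actually treats

def least_privilege_allows(patient: str) -> bool:
    """Minimum access required for the job role: the practice's patients."""
    return patient in practice_patients

def need_to_know_allows(patient: str) -> bool:
    """More granular: a business need for this specific record is also required."""
    return least_privilege_allows(patient) and patient in sebastians_patients

print(least_privilege_allows("pris"))   # True: a patient of Sebastian's practice
print(need_to_know_allows("pris"))      # False: Sebastian does not treat her
```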

Learn By Example

Real-World Least Privilege

A large healthcare provider had a 60-member IT staff responsible for 4000 systems running Microsoft Windows. The company did not employ least privilege: the entire IT staff was granted Windows Domain Administrator access. Staff with such access included help desk personnel, backup administrators, and many others. All 60 domain administrators had super-user privileges on all 4000 Windows systems.
This level of privilege was excessive and led to problems. Operator errors led to violation of CIA. Because so many could do so much, damage to the environment was prevalent. Data was lost; unauthorized changes were made; systems crashed, and it was difficult to pinpoint the causes.
A new security officer was hired, and one of his first tasks was to enforce least privilege. Role-based accounts were created: a help desk role that allowed access to the ticketing system, a backup role that allowed backups and restoration, and so on. The domain administrator list was whittled down to a handful of authorized personnel.
Many former domain administrators complained about loss of super-user authorization, but everyone got enough access to do their job. The improvements were immediate and impressive: unauthorized changes virtually stopped and system crashes became far less common. Operators still made mistakes, but those mistakes were far less costly.

Subjects and Objects

A subject is an active entity on a data system. Most examples of subjects involve people accessing data files. However, computer programs can be subjects as well. A Dynamic Link Library file or a Perl script that updates database files with new information is also a subject.
An object is any passive data within the system. Objects can range from documents on physical paper, to database tables to text files. The important thing to remember about objects is that they are passive within the system. They do not manipulate other objects.
One tricky example of subjects and objects is important to understand: if you are running iexplore.exe (the Internet Explorer browser on a Microsoft Windows system), it is a subject while running in memory. When the browser is not running, the file iexplore.exe is an object on the filesystem.

Exam Warning

Keep all examples on the CISSP® exam simple by determining whether they fall into the definition of a subject or an object.

Defense-in-Depth

Defense-in-Depth (also called layered defenses) applies multiple safeguards (also called controls: measures taken to reduce risk) to protect an asset. Any single security control may fail; by deploying multiple controls, you improve the confidentiality, integrity, and availability of your data.

Learn By Example

Defense-in-Depth Malware Protection

A 12,000-employee company received 250,000 Internet emails per day. The vast majority of these emails were malicious, ranging from time- and resource-wasting spam, to malware such as worms and viruses. Attackers changed tactics frequently, always trying to evade safeguards designed to keep the spam and malware out.
The company deployed preventive defense-in-depth controls for Internet email-based malware protection. One set of UNIX mail servers filtered the incoming Internet email, each running two different auto-updating antivirus/antimalware solutions by two different major vendors. Mail that scanned clean was then forwarded to an internal Microsoft Exchange mail server, which ran yet another vendor’s antivirus software. Mail that passed that scan could reach a user’s client, which ran a fourth vendor’s antivirus software. The client desktops and laptops were also fully patched.
Despite those safeguards, a small percentage of malware successfully evaded four different antivirus checks and infected the users’ client systems. Fortunately, the company deployed additional defense-in-depth controls, such as Intrusion Detection Systems (IDSs), incident handling policies, and a CIRT (Computer Incident Response Team) to handle incidents. These defensive measures successfully identified infected client systems, allowing for timely response.
All controls can fail, and sometimes multiple controls will fail. Deploying a range of different defense-in-depth safeguards in your organization lowers the chance that all controls will fail.
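The layered-controls principle from the example above can be sketched as a chain of independent checks: a message reaches the user only if every layer passes it, so a threat that evades one control may still be caught by the next. The threat names and layer functions below are invented stand-ins for the antivirus products in the example:

```python
# Hypothetical defense-in-depth sketch: each function is an independent control.
def gateway_scan(msg):  return "worm-A" not in msg   # perimeter antivirus (assumed)
def exchange_scan(msg): return "worm-B" not in msg   # mail-server antivirus (assumed)
def client_scan(msg):   return "worm-C" not in msg   # desktop antivirus (assumed)

layers = [gateway_scan, exchange_scan, client_scan]

def delivered(msg: str) -> bool:
    """Mail reaches the user only if it passes every layer of defense."""
    return all(layer(msg) for layer in layers)

print(delivered("quarterly report"))     # True: passes all layers
print(delivered("invoice with worm-B"))  # False: caught at the second layer
```

Note the design point: the layers use different detection logic, so a single evasion technique is less likely to defeat them all.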

Due Care and Due Diligence

Due care is doing what a reasonable person would do. It is sometimes called the “prudent man” rule. The term derives from “duty of care”: parents have a duty to care for their children, for example. Due diligence is the management of due care.
Due care and due diligence are often confused; they are related, but different. Due care is informal; due diligence follows a process. Think of due diligence as a step beyond due care. Expecting your staff to keep their systems patched means you expect them to exercise due care. Verifying that your staff has patched their systems is an example of due diligence.

Gross Negligence

Gross negligence is the opposite of due care. It is a legally important concept. If you suffer loss of PII, but can demonstrate due care in protecting the PII, you are on legally stronger ground, for example. If you cannot demonstrate due care (you were grossly negligent), you are in a much worse legal position.

Legal and Regulatory Issues

Though general understanding of major legal systems and types of law is important, it is critical that information security professionals understand the concepts described in the next section. With the ubiquity of information systems, data, and applications comes a host of legal issues that require attention. Examples of legal concepts affecting information security include: crimes being committed or aided by computer systems, attacks on intellectual property, privacy concerns, and international issues.

Compliance with Laws and Regulations

Complying with laws and regulations is a top information security management priority: both in the real world and on the exam. An organization must be in compliance with all laws and regulations that apply to it. Ignorance of the law is never a valid excuse for breaking the law. Details of specific laws are covered in Chapter 10: Domain 9: Legal, Regulations, Investigations, and Compliance.

Exam Warning

The exam will hold you to a very high standard in regard to compliance with laws and regulations. We are not expected to know the law as well as a lawyer, but we are expected to know when to call a lawyer. Confusing the technical details of a security control such as Kerberos may or may not cause a significant negative consequence, for example. Breaking search and seizure laws due to confusion over the legality of searching an employee’s personal property, for example, is likely to cause very negative consequences. The most legally correct answer is often the best for the exam.

Major Legal Systems

In order to begin to appreciate common legal concepts at work in today’s global economy, an understanding of the major legal systems is required. These legal systems provide the framework that determines how a country develops laws pertaining to information systems in the first place. The three major systems of law are civil, common, and religious law.

Civil Law (Legal System)

The most common of the major legal systems is civil law, which is employed by many countries throughout the world. The system of civil law leverages codified laws or statutes to determine what is considered within the bounds of law. Though a legislative branch typically wields the power to create laws, there will still exist a judicial branch tasked with interpretation of the existing laws. The most significant difference between civil and common law is that, under civil law, judicial precedents and particular case rulings do not carry the weight they do under common law.

Common Law

Common law is the legal system used in the United States, Canada, the United Kingdom, and most former British colonies, amongst others. As we can see by the short list above, English influence has historically been the main indicator of common law being used in a country. The primary distinguishing feature of common law is the significant emphasis on particular cases and judicial precedents as determinants of laws. Though there is typically also a legislative body tasked with the creation of new statutes and laws, judicial rulings can, at times, supersede those laws. Because of the emphasis on judges’ interpretations there is significant possibility that as society changes over time, so too can judicial interpretations change in kind.

Note

Common law is the major legal system most likely to be referenced by the CISSP® exam. Therefore, this chapter will focus primarily on common law, which is the basis of the United Kingdom’s and the United States’ legal systems.

Religious Law

Religious law serves as the third of the major legal systems. Religious doctrine or interpretation serves as a source of legal understanding and statutes. However, the extent and degree to which religious texts, practices, or understanding are consulted can vary greatly. While Christianity, Judaism, and Hinduism have all had significant influence on national legal systems, Islam serves as the most common source for religious legal systems. Though there is great diversity in its application throughout the world, Sharia is the term used for Islamic law and it uses the Qur’an and Hadith as its foundation.

Other Systems

Though Customary Law is not considered as important as the other major legal systems described above, it is important with respect to information security. Customary law refers to those customs or practices that are so commonly accepted by a group that the custom is treated as a law. These practices can be later codified as laws in the more traditional sense, but the emphasis on prevailing acceptance of a group is quite important with respect to the concept of negligence, which, in turn, is important in information security. The concept of “best practices” is closely associated with Customary Law.
Suppose an organization maintains sensitive data but has no specific legal requirements regarding how the data must be protected. The data is later compromised. If it were discovered that the company did not employ firewalls or antivirus software, and housed the data on outdated systems, many would believe the organization violated accepted practices, if not a particular legal requirement, by failing to employ the customary safeguards associated with protecting sensitive data.

Criminal, Civil, and Administrative Law

As stated above, common law will be the most represented in the exam, so it will be the primary focus here. Within common law there are various branches of laws, including criminal, civil, and administrative law.

Criminal Law

Criminal law pertains to those laws where the victim can be seen as society itself. While it might seem odd to consider society the victim when an individual is murdered, the goal of criminal law is to promote and maintain an orderly and law-abiding citizenry. Criminal law can include penalties that remove an individual from society by incarceration or, in some extreme cases in some regions, death. The goals of criminal law are to deter crime and to punish offenders.
Due to the seriousness of potentially depriving someone of either their freedom or, in the most extreme cases, his or her life, the burden of proof in criminal cases is considerable. In order to convict someone accused of a criminal act, the crime must be proved beyond any reasonable doubt. Once proven, the punishment for commission of a criminal act will potentially include incarceration, financial penalties, or, in some jurisdictions, execution as punishment for the most heinous of criminal acts.

Civil Law

In addition to civil law being a major legal system in the world, it also serves as a type of law within the common law legal system. Another term associated with civil law is tort law, which deals with injury (loosely defined), resulting from someone violating their responsibility to provide a duty of care. Tort law is the primary component of civil law, and is the most significant source of lawsuits that seek damages.
Society is seen as the victim under criminal law; under civil law the victim will be an individual, group, or organization. While the government prosecutes an individual or organization under criminal law, within civil law the concerned parties are most commonly private parties. Another difference between criminal and civil law is the goal of each: the focus of criminal law is punishment and deterrence, while civil law focuses on compensating the victim.
Note that one act can, and very often does, result in both criminal and civil actions. A recent example of someone having both criminal and civil penalties levied is in the case of Bernie Madoff, whose elaborate Ponzi scheme swindled investors out of billions of dollars. Madoff pleaded guilty in a criminal court to 11 felonies including securities fraud, wire fraud, perjury, and money laundering. In addition to the criminal charges levied by the government, numerous civil suits sought compensatory damages for the monies lost by investors in the fraud.
The most popular example in recent history involves the O.J. Simpson murder trial, in which Mr. Simpson was acquitted in a criminal court for the murder of his wife Nicole Brown and Ronald Goldman, but later found liable in civil court proceedings for causing the wrongful death of Mr. Goldman.
The difference in outcomes is explained by the difference in the burden of proof for civil and criminal law. In the United States, the burden of proof in a criminal court is beyond a reasonable doubt, while the burden of proof in civil proceedings is the preponderance of the evidence. “Preponderance” means it is more likely than not. Satisfying the burden of proof requirement of the preponderance of the evidence in a civil matter is a much easier task than meeting the burden of proof requirement in criminal proceedings. The most common outcome of a successful ruling against a defendant is requiring the payment of financial damages. The most common types of financial damages are presented in Table 2.1.

Table 2.1

Common Types of Financial Damages

Statutory - Statutory damages are those prescribed by law, and can be awarded to the victim even if the victim incurred no actual loss or injury.
Compensatory - The purpose of compensatory damages is to provide the victim with a financial award in an effort to compensate for the loss or injury incurred as a direct result of the wrongdoing.
Punitive - The intent of punitive damages is to punish an individual or organization. These damages are typically awarded to discourage a particularly egregious violation where compensatory or statutory damages alone would not act as a deterrent.

Administrative Law

Administrative law, or regulatory law, is law enacted by government agencies. In the United States, the executive branch (deriving its authority from the Office of the President) enacts administrative law. Government-mandated compliance measures are administrative laws.
The executive branch can create administrative law without requiring input from the legislative branch, but the law must still operate within the confines of the civil and criminal code, and can still come under scrutiny by the judicial branch. Some examples of administrative law are FCC regulations, HIPAA Security mandates, FDA regulations, and FAA regulations.

Liability

Legal liability is another important legal concept for information security professionals and their employers. Society has grown quite litigious over the years, and the question of whether an organization is legally liable for specific actions or inactions can prove costly. Questions of liability often turn into questions regarding potential negligence. When attempting to determine whether certain actions or inactions constitute negligence, the Prudent Man Rule is often applied.
Two important terms to understand are due care and due diligence, which have become common standards that are used in determining corporate liability in courts of law.

Due Care

The standard of due care, or a duty of care, provides a framework that helps to define a minimum standard of protection that business stakeholders must attempt to achieve. Due care discussions often reference the Prudent Man Rule, and require that the organization engage in business practices that a prudent, right thinking, person would consider to be appropriate. Businesses that are found to have not been applying this minimum duty of care can be deemed as having been negligent in carrying out their duties.
The term “best practices” describes which information security practices and technologies organizations should adopt. Best practices are similar to due care in that both are abstract concepts that must be inferred rather than explicitly stated. Following best practices means organizations align themselves with the practices of the best in their industry; due care requires that organizations meet the minimum standard of care that prudent organizations would apply. As time passes, the practices that might today be considered best will tomorrow be thought of as the minimum necessary: those required by the standard of due care.

Due Diligence

A concept closely related to due care is due diligence. While due care intends to set a minimum necessary standard of care to be employed by an organization, due diligence requires that an organization continually scrutinize their own practices to ensure that they are always meeting or exceeding the requirements for protection of assets and stakeholders. Due diligence is the management of due care: it follows a formal process.
Prior to its application in information security, due diligence was already used in legal realms. Persons are said to have exercised due diligence, and therefore cannot be considered negligent, if they were prudent in their investigation of potential risks and threats. In information security there will always be unknown or unexpected threats just as there will always be unknown vulnerabilities. If an organization were compromised in such a way that caused significant financial harm to their consumers, stockholders, or the public, one of the ways in which the organization would defend its actions or inactions is by showing that they exercised due diligence in investigating the risk to the organization and acted sensibly and prudently in protecting against the risks being manifested.

Legal Aspects of Investigations

Investigations are a critical way in which information security professionals come into contact with the law. Forensic and incident response personnel often conduct investigations, and both need to have a basic understanding of legal matters to ensure that the legal merits of the investigation are not unintentionally tarnished. Evidence, and the appropriate method for handling evidence, is a critical legal issue that all information security professionals must understand. Another issue that touches both information security and legal investigations is search and seizure.

Evidence

Evidence is one of the most important legal concepts for information security professionals to understand. Information security professionals are commonly involved in investigations, and often have to obtain or handle evidence during the investigation. Some types of evidence carry more weight than others; however, information security professionals should attempt to provide all evidence, regardless of whether that evidence proves or disproves the facts of a case. While there are no absolute means to ensure that evidence will be allowed and helpful in a court of law, information security professionals should understand the basic rules of evidence. Evidence should be relevant, authentic, accurate, complete, and convincing. Evidence gathering should emphasize these criteria.
Real Evidence
The first, and most basic, category of evidence is that of real evidence. Real evidence consists of tangible or physical objects. A knife or bloody glove might constitute real evidence in some traditional criminal proceedings. However, with most computer incidents, real evidence is commonly made up of physical objects such as hard drives, DVDs, USB storage devices, or printed business records.
Direct Evidence
Direct evidence is testimony provided by a witness regarding what the witness actually experienced with her five senses. The witness must have experienced what she is testifying to, rather than having gained the knowledge indirectly through another person (hearsay, see below).
Circumstantial Evidence
Circumstantial evidence is evidence which serves to establish the circumstances related to particular points or even other evidence. For instance, circumstantial evidence might support claims made regarding other evidence or the accuracy of other evidence. Circumstantial evidence provides details regarding circumstances that allow for assumptions to be made regarding other types of evidence. This type of evidence offers indirect proof, and typically cannot be used as the sole evidence in a case. For instance, if a person testified that she directly witnessed the defendant create and distribute malware this would constitute direct evidence. If the forensics investigation of the defendant’s computer revealed the existence of source code for the malware, this would constitute circumstantial evidence.
Corroborative Evidence
In order to strengthen a particular fact or element of a case there might be a need for corroborative evidence. This type of evidence provides additional support for a fact that might have been called into question. This evidence does not establish a particular fact on its own, but rather provides additional support for other facts.
Hearsay
Hearsay evidence constitutes second-hand evidence. As opposed to direct evidence, which someone has witnessed with her five senses, hearsay evidence involves indirect information. Hearsay evidence is normally considered inadmissible in court. Numerous rules including Rules 803 and 804 of the Federal Rules of Evidence of the United States provide for exceptions to the general inadmissibility of hearsay evidence that is defined in Rule 802.
Business and computer generated records are generally considered hearsay evidence, but case law and updates to the Federal Rules of Evidence have established exceptions to the general rule of business records and computer generated data and logs being hearsay. The exception defined in Rule 803 provides for the admissibility of a record or report that was “made at or near the time by, or from information transmitted by, a person with knowledge, if kept in the course of a regularly conducted business activity, and if it was the regular practice of that business activity to make the memorandum, report, record or data compilation.”[1]
An additional consideration important to computer investigations pertains to the admissibility of binary disk and physical memory images. The Rule of Evidence that is interpreted to allow for disk and memory images to be admissible is actually not an exception to the hearsay rule, Rule 802, but is rather found in Rule 1001, which defines what constitutes originals when dealing with writings, recordings, and photographs. Rule 1001 states that “if data are stored in a computer or similar device, any printout or other output readable by sight, shown to reflect the data accurately, is an ‘original’.”[2] This definition has been interpreted to allow for both forensic reports as well as memory and disk images to be considered even though they would not constitute the traditional business record exception of Rule 803.
Best Evidence Rule
Courts prefer the best evidence possible. Original documents are preferred over copies: conclusive tangible objects are preferred over oral testimony. Recall that the five desirable criteria for evidence suggest that, where possible, evidence should be: relevant, authentic, accurate, complete, and convincing. The best evidence rule prefers evidence that meets these criteria.
Secondary Evidence
With computer crimes and incidents best evidence might not always be attainable. Secondary evidence is a class of evidence common in cases involving computers. Secondary evidence consists of copies of original documents and oral descriptions. Computer-generated logs and documents might also constitute secondary rather than best evidence. However, Rule 1001 of the United States Federal Rules of Evidence can allow for readable reports of data contained on a computer to be considered original as opposed to secondary evidence.

Evidence Integrity

Evidence must be reliable. It is common during forensic and incident response investigations to analyze digital media. It is critical to maintain the integrity of the data during the course of its acquisition and analysis. Cryptographic checksums can demonstrate that no data changed as a result of the acquisition and analysis. One-way hash functions such as MD5 or SHA-1 are commonly used for this purpose. The hashing algorithm processes the entire disk or image (every single bit) and outputs a resultant hash checksum. After analysis is complete, the entire disk can be hashed again. If even one bit of the disk or image has changed, the resultant hash checksum will differ from the one originally obtained.
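The acquire-hash, analyze, re-hash workflow described above can be sketched as follows. This is a minimal illustration; the function name and chunk size are the author's assumptions, not part of any forensic standard:

```python
import hashlib

def hash_image(path, algorithm="sha1", chunk_size=1 << 20):
    """Compute a one-way hash of a disk image, reading in chunks
    so that large images need not fit in memory."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hash the image at acquisition, and again after analysis; if even
# one bit has changed, the two digests will not match.
```

In practice the acquisition digest is recorded on the chain of custody documentation, so any later verification can be compared against it.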

Chain of Custody

In addition to the use of integrity hashing algorithms and checksums, another means to help express the reliability of evidence is by maintaining chain of custody documentation. Chain of custody requires that once evidence is acquired, full documentation be maintained regarding the who, what, when and where related to the handling of said evidence. Initials and/or signatures on the chain of custody form indicate that the signers attest to the accuracy of the information concerning their role noted on the chain of custody form.
The goal is to show that throughout the evidence lifecycle it is both known and documented how the evidence was handled. This also supports evidence integrity: no reasonable potential exists for another party to have altered the evidence. Figure 2.6 shows an evidence bag, which may be used to document the chain of custody for small items, such as disk drives.
image
Figure 2.6 Evidence Bag
While neither integrity checksums nor a chain of custody form is required in order for evidence to be admissible in a court of law, they both support the reliability of digital evidence. Use of integrity checksums and chain of custody by forensics investigators is best practice. An example chain of custody form can be seen in Figure 2.7.
image
Figure 2.7 Chain of Custody Form
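The who, what, when, and where entries on a chain of custody form could be captured in a simple append-only record structure. This is an illustrative sketch only; the field and handler names are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CustodyEntry:
    """One link in the chain of custody: who handled the evidence,
    what was done to it, and where (field names are illustrative)."""
    handler: str   # who
    action: str    # what (e.g. "acquired", "imaged", "transferred")
    location: str  # where
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))  # when

# The chain itself is an append-only list; each signature on the
# paper form corresponds to one entry attesting to its accuracy.
chain = [
    CustodyEntry("J. Smith", "acquired drive SN-1234", "Suite 200 lab"),
    CustodyEntry("J. Smith", "transferred to evidence locker", "Locker 7"),
]
```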

Reasonable Searches

The Fourth Amendment to the United States Constitution protects citizens from unreasonable search and seizure by the government. In all cases involving seized evidence, if a court determines the evidence was obtained illegally then it will be inadmissible in court. In most circumstances in order for law enforcement to search a private citizen’s property both probable cause and a search warrant issued by a judge are required. The search warrant will specify the area that will be searched and what law enforcement is searching for.
There are circumstances that do not require a search warrant, such as if the property is in plain sight or at public checkpoints. One important exception to the requirement for a search warrant in computer crimes is that of exigent circumstances. Exigent circumstances are those in which there is an immediate threat to human life or of evidence being destroyed. A court of law will later decide whether the circumstances were such that seizure without a warrant was indeed justified.
Search warrants only apply to law enforcement and those who are acting under the color of law enforcement. If private citizens carry out actions or investigations on behalf of law enforcement, then these individuals are acting under the color of law and can be considered agents of law enforcement. An example of acting under the color of law would be when law enforcement becomes involved in a corporate case and corporate security professionals are seizing data under direct supervision of law enforcement. If a person is acting under the color of law, then they must be cognizant of the Fourth Amendment rights related to unreasonable searches and seizures. A person acting under the color of law who deprives someone of his or her constitutionally protected rights can be found guilty of having committed a crime under Title 18 U.S.C. Section 242—Deprivation of Rights Under Color of Law.
A search warrant is not required if law enforcement is not involved in the case. However, organizations should exercise care in ensuring that employees are made aware in advance that their actions are monitored, and that their equipment, and perhaps even personal belongings, are subject to search. Certainly, these notifications should only be made if the organization’s security policy warrants them. Further, corporate policy regarding search and seizure must take into account the various privacy laws in the applicable jurisdiction.

Note

Due to the particular issues unique to investigations being carried out by, or on behalf of, law enforcement, an organization will need to make an informed decision about whether, or when, law enforcement will be brought in to assist with investigations.

Entrapment and Enticement

Another topic closely related to the involvement of law enforcement in the investigative process deals with the concepts of entrapment and enticement. Entrapment is when law enforcement, or an agent of law enforcement, persuades someone to commit a crime when the person otherwise had no intention to commit a crime. Entrapment can serve as a legal defense in a court of law, and, therefore, should be avoided if prosecution is a goal. A closely related concept is enticement. Enticement could still involve agents of law enforcement making the conditions for commission of a crime favorable, but the difference is that the person is determined to have already broken a law or is intent on doing so. The question as to whether the actions of law enforcement will constitute enticement or entrapment is ultimately up to a jury. Care should be taken to distinguish between these two terms.

Computer Crime

One aspect of the interaction of information security and the legal system is that of computer crimes. Applicable computer crime laws vary throughout the world, according to jurisdiction. However, regardless of region, some generalities exist. Computer crimes can be understood as belonging loosely to three different categories based upon the way in which computer systems relate to the wrongdoing: computer systems as targets; computer systems as a tool to perpetrate the crime; or computer systems involved but incidental. The last category occurs commonly because computer systems are such an indispensable component of modern life. The other two categories are more significant:
Computer systems as target—Crimes where the computer systems serve as a primary target, such as: disrupting online commerce by means of Distributed Denial of Service attacks, installing malware on systems for the distribution of spam, or exploiting a vulnerability on a system in order to store illegal content on it.
Computer as a tool—Crimes where the computer is a central component enabling the commission of the crime. Examples include: stealing trade secrets by compromising a database server, leveraging computers to steal cardholder data from payment systems, conducting computer based reconnaissance to target an individual for information disclosure or espionage, and using computer systems for the purposes of harassment.
As information systems have evolved, and as our businesses now leverage computer systems to a larger extent, traditional crimes such as theft and fraud are being perpetrated both by using and targeting computers. One of the most difficult aspects of prosecution of computer crimes is attribution. Meeting the burden of proof requirement in criminal proceedings, beyond a reasonable doubt, can be difficult given an attacker can often spoof the source of the crime or can leverage different systems under someone else’s control.

Intellectual Property

As opposed to physical or tangible property, intellectual property refers to intangible property that resulted from a creative act. The purpose of intellectual property law is to control the use of intangible property that can often be trivial to reproduce or abuse once made public or known. The following intellectual property concepts effectively create an exclusive monopoly on their use.

Trademark

Trademarks are associated with marketing: the purpose is to allow for the creation of a brand that distinguishes the source of products or services. A distinguishing name, logo, symbol, or image represents the most commonly trademarked items. In the United States two different symbols are used with distinctive marks that an individual or organization is intending to protect. The superscript TM symbol can be used freely to indicate an unregistered mark, and is shown in Figure 2.8.
image
Figure 2.8 Trademark Symbol
The circle R symbol is used with marks that have been formally registered as a trademark with the U.S. Patent and Trademark Office, and is shown in Figure 2.9. In addition to the registered and unregistered version of a trademark, servicemarks constitute a subset of brand recognition related intellectual property. As suggested by the name, a servicemark is used to brand a service offering rather than a particular product or company, and looks similar to the unregistered trademark, being denoted by a superscript SM symbol.
image
Figure 2.9 Registered Trademark Symbol

Patent

Patents provide a monopoly to the patent holder on the right to use, make, or sell an invention for a period of time in exchange for the patent holder making the invention public. During the life of the patent, the patent holder can, through civil litigation, exclude others from leveraging the patented invention. In order for an invention to be patented, it must be novel and non-obvious. The length of time a patent is valid (the patent term) varies throughout the world, and also by the type of invention being patented. Generally, in both Europe and the United States the patent term is 20 years from the initial filing date. Upon expiration of a patent, the invention becomes publicly available for production.

Learn By Example

Velcro

A quick example that illustrates patents and patent terms as well as trademarks is found in Velcro. Velcro, which is a particular brand of small fabric based hook and loop fastener, was invented in Switzerland in 1941 by George de Mestral. Expecting many commercial applications of his fabric hook and loop fastener, de Mestral applied for patents in numerous countries throughout the 1950s. In addition to seeking patents for his invention, de Mestral also trademarked the name Velcro in many countries. In 1978 the patent term for de Mestral’s invention expired, and small fabric-based hook and loop fasteners began being mass-produced cheaply by numerous companies. Though the patent expired, trademarks do not have an explicit expiration date, so use of the term Velcro on a product is still reserved for use by the company de Mestral started.

Copyright

Copyright represents a type of intellectual property that protects the form of expression in artistic, musical, or literary works, and is typically denoted by the circle c symbol as shown in Figure 2.10. The purpose of copyright is to preclude unauthorized duplication, distribution, or modification of a creative work. Note that the form of expression is protected rather than the subject matter or ideas represented. The creator or author of a work is, by default, the copyright holder at the time of creation, and has exclusive rights regarding the distribution of the copyrighted material. Even though there is an implied copyright granted to the author at the time of creation, a more explicit means of copyright exists. A registered copyright is one in which the creator has taken the trouble to file the copyright with the Copyright Office, in the United States, and provides a more formal means of copyright than that of the implied copyright of the author.
image
Figure 2.10 Copyright Symbol
Copyrights, like patents, have a specific term for which they are valid. Also like patents, this term can vary based on the type of work as well as the country in which the work is published. Once the copyright term has expired, then the work becomes part of the public domain. Currently, in the United States, a work typically has an enforceable copyright for 70 years after the death of the author. However, if the work is a product of a corporation then the term lasts for 95 years after the first publication or 120 years after creation, whichever comes first.[3] Though there are exceptions to this general rule, most European countries also subscribe to the copyright term lasting for life of the author plus an additional 70 years.
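The "whichever comes first" rule for US corporate works can be expressed as a small calculation. This sketch works at year granularity and deliberately ignores statutory details such as terms running through the end of the calendar year:

```python
def us_corporate_copyright_expiry(created, first_published):
    """US corporate works: copyright lasts 95 years after first
    publication or 120 years after creation, whichever comes first."""
    return min(first_published + 95, created + 120)

# A work created and first published in 1928 would be protected
# through 1928 + 95 = 2023, since that expires before 1928 + 120.
```

For works published promptly after creation, the 95-year publication term is almost always the binding one; the 120-year creation term matters mainly for long-unpublished works.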

Learn By Example

Copyright Term

One point of serious contention between Europe and the United States is Europe's lack of longer corporate copyrights. Whereas in the United States a product of corporate production might have an additional 25–50 years of copyright protection, Europe currently has no such additional protections. This issue became prominent in 2009 as the European copyright for a cartoon icon, Popeye, expired. In Europe, Popeye is now part of the public domain because more than 70 years have passed since the death of Popeye's creator, Elzie Segar, in 1938.
Though there have been successful attempts to bring better harmony to global copyright law, especially within the United States and Europe, serious inconsistencies still exist throughout the world. Many nations do not even acknowledge copyrights or their legal protection. This lack of acknowledgment further exacerbates the issue of global piracy.

Note

In the United States, as some extremely high value copyrights have been close to becoming part of the public domain there have been extensions to the copyright term. Copyright terms have consistently been lengthened as individuals and corporations have voiced concerns over financial losses resulting from works becoming part of the public domain.
The Copyright Term Extension Act, which was passed in 1998, extended the copyright term by 20 years. At the time, the copyright term was the author’s life plus 50 years, or 75 years for corporate works, but the extension increased the copyright term to life plus 70 years and 95 years, respectively. There are some, notably Lawrence Lessig, who derisively refer to the Copyright Term Extension Act as the Mickey Mouse Protection Act given the Act’s proximity to Mickey Mouse’s originally scheduled entry into the public domain.
Software is typically covered by copyright as if it were a literary work. Recall that copyright is intended to cover the form of expression rather than the ideas or subject matter. Software licensing fills some of this gap regarding intellectual property protections of software. Another software copyright issue is the concept of work for hire. Although the creator of the work is the implied copyright holder, care should be taken to distinguish whether the software developers or their employers are considered the copyright holders. In most instances, when developers are creating code for a specific organization, the organization itself is the copyright holder rather than the individual developers, as the code is being developed specifically as part of their employment.
Copyright limitations
Two important limitations on the exclusivity of the copyright holder's monopoly exist: the doctrines of first sale and fair use. The first sale doctrine allows a legitimate purchaser of copyrighted material to sell it to another person. If the purchasers of a CD later decide that they no longer care to own the CD, the first sale doctrine gives them the legal right to sell the copyrighted material even though they are not the copyright holders.
Fair use is another limitation on the copyright holder's exclusive intellectual property monopoly. The fair use doctrine allows someone to duplicate copyrighted material without requiring the payment, consent, or even knowledge of the copyright holder. There are no explicit requirements that must be met to ensure that a particular usage constitutes fair use, but there are established guidelines that a judge would use in determining whether or not the copyright holder's legal rights had been infringed upon. The four factors defined in the Copyright Act of 1976 as criteria for determining whether a use is covered by the fair use doctrine are: the purpose and character of the use; the nature of the copyrighted work; the amount of content duplicated compared to the overall length of the work; and whether the duplication might reduce the value or desirability of the original work.[4]

Licenses

Software licenses are a contract between a provider of software and the consumer. Though there are licenses that provide explicit permission for the consumer to do virtually anything with the software, including modifying it for use in another commercial product, most commercial software licensing places explicit limits on the use and distribution of the software. Software licenses such as end-user license agreements (EULAs) are an unusual form of contract because using the software typically constitutes contractual agreement, even though only a small minority of users read the lengthy EULA.

Trade Secrets

The final form of intellectual property that will be discussed is the trade secret. Trade secrets are business-proprietary information that is important to an organization's ability to compete. The easiest trade secrets to understand are of the "special sauce" variety. Kentucky Fried Chicken could suffer catastrophic losses if another fried chicken shop were able to crack Colonel Sanders' secret blend of 11 herbs and spices that results in the "finger licking goodness" we have all grown to know and love. Although the "special sauces" are very obviously trade secrets, any business information that provides a competitive edge, and is actively protected by the organization, can constitute a trade secret. The organization must exercise due care and due diligence in the protection of its trade secrets. Some of the most common protection methods are non-compete and non-disclosure agreements (NDAs). These agreements require that employees or other persons privy to business-confidential information respect the organization's intellectual property by not working for the organization's competitors or disclosing this information in an unauthorized manner. Lack of reasonable protection of trade secrets can make them cease to be trade secrets: if the organization does not take reasonable steps to keep the information confidential, then it is reasonable to assume that the organization does not derive a competitive advantage from its secrecy.

Intellectual Property Attacks

Though attacks upon intellectual property have existed since at least the first profit driven intellectual creation, the sophistication and volume of attacks has only increased with the growth of portable electronic media and Internet-based commerce. Well-known intellectual property attacks are software piracy and copyright infringement associated with music and movies. Both have grown easier with increased Internet connectivity and growth of piracy enabling sites, such as The Pirate Bay, and protocols such as BitTorrent. Other common intellectual property attacks include attacks against trade secrets and trademarks. Trade secrets can be targeted in corporate espionage schemes and also are prone to be targeted by malicious insiders. Because of the potentially high value of the targeted trade secrets, this type of intellectual property can draw highly motivated and sophisticated attackers.
Trademarks can fall under several different types of attacks including: counterfeiting, dilution, as well as cybersquatting and typosquatting. Counterfeiting involves attempting to pass off a product as if it were the original branded product. Counterfeiters try to capitalize on the value associated with a brand. Trademark dilution typically represents an unintentional attack in which the trademarked brand name is used to refer to the larger general class of products of which the brand is a specific instance. For example, the word Kleenex is commonly used in some parts of the United States to refer to any facial tissue, regardless of brand; this genericized use is an instance of trademark dilution.
Two more recent trademark attacks have developed out of the Internet-based economy: cyber- and typosquatting. Cybersquatting refers to an individual or organization registering or using, in bad faith, a domain name that is associated with another person’s trademark. People will often assume that the trademark owner and the domain owner are the same. This can allow the domain owner to infringe upon the actual trademark owner’s rights. The primary motivation of cybersquatters is money: they typically intend to capitalize on traffic to the domain by people assuming they are visiting the trademark owner’s Web site. Typosquatting refers to a specific type of cybersquatting in which the cybersquatter registers likely misspellings or mistyping of legitimate domain trademarks.
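The misspellings that typosquatters target tend to fall into a few mechanical classes (omitted characters, transposed characters, repeated characters). A minimal sketch of generating such variants for a single-label domain is shown below; the function name is illustrative and the list of typo classes is deliberately incomplete:

```python
def typo_variants(domain):
    """Generate a few classes of likely typos for a domain of the
    form 'name.tld' (illustration only; not an exhaustive model)."""
    name, dot, tld = domain.partition(".")
    variants = set()
    # Character omission: "example.com" -> "exmple.com"
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:] + dot + tld)
    # Adjacent transposition: "example.com" -> "examlpe.com"
    for i in range(len(name) - 1):
        swapped = name[:i] + name[i + 1] + name[i] + name[i + 2:]
        variants.add(swapped + dot + tld)
    # Character repetition: "example.com" -> "eexample.com"
    for i in range(len(name)):
        variants.add(name[:i + 1] + name[i] + name[i + 1:] + dot + tld)
    variants.discard(domain)
    return sorted(variants)
```

Trademark owners sometimes run the same kind of enumeration defensively, registering likely typo domains before a squatter can.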

Privacy

Privacy is the protection of the confidentiality of personal information. Many organizations host personal information about their users: PII (Personally Identifiable Information) such as social security numbers, financial information such as annual salary and bank account information required for payroll deposits, and healthcare information for insurance purposes. The confidentiality of this information must be assured.
One of the unfortunate side effects of the explosion of information systems over the past few decades is the loss of privacy. As more and more data about individuals is used and stored by information systems, the likelihood of that data being either inadvertently disclosed, sold to a third party, or intentionally compromised by a malicious insider or third party increases. Further, with publicly disclosed breaches of financial and health records routinely numbering in the millions to tens of millions of records compromised, the erosion of privacy of some of the most sensitive data is now commonplace. Previously, stealing millions of financial records could have meant physically walking out with enough paper records to fill a tractor trailer; now all of this data can fit onto a thumbnail-sized flash memory device.
Privacy laws related to information systems have cropped up throughout the world to provide citizens either greater control or security of their confidential data. While there are numerous different international privacy laws, one issue to understand is whether the citizen’s privacy protections are primarily opt-in or opt-out: does the citizen have to choose to do something to gain the benefit of the privacy law or is it chosen for them by default? For example: a company gathering personal data clearly states that the data can be sold to third party companies. Even though they clearly state this fact, albeit in fine print, the organization might require the individual to check a box to disallow their data being sold. This is an opt-out agreement because the individual had to do something in order to prevent their data from being resold. Privacy advocates typically prefer opt-in agreements where the individual would have to do something in order to have their data used in this fashion.

European Union Privacy

The European Union has taken an aggressive pro-privacy stance, while balancing the needs of business. Commerce would be impacted if member nations had different regulations regarding the collection and use of personally identifiable information. The EU Data Protection Directive allows for the free flow of information while still maintaining consistent protections of each member nation’s citizens’ data. The principles of the EU Data Protection Directive are:
Notifying individuals how their personal data is collected and used
Allowing individuals to opt out of sharing their personal data with third parties
Requiring individuals to opt into sharing the most sensitive personal data
Providing reasonable protections for personal data

OECD Privacy Guidelines

The Organization for Economic Cooperation and Development (OECD), though often considered exclusively European, consists of 30 member nations from around the world. The members, in addition to prominent European countries, include such countries as the United States, Mexico, Australia, Japan, and the Czech Republic. The OECD provides a forum in which countries can focus on issues that impact the global economy. The OECD will routinely issue consensus recommendations that can serve as an impetus to change current policy and legislation in the OECD member countries and beyond.
An example of such guidance is found in the OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data, which was issued in 1980. Global commerce requires that a citizen’s personal data flow between companies based in divergent regions. The OECD privacy guidance sought to provide a basic framework for the protections that should be afforded this personal data as it traverses the various world economies. The eight driving principles regarding the privacy of personal data are as follows:
Collection Limitation Principle—personal data collection should have limits, be obtained in a lawful manner, and, unless there is a compelling reason to the contrary, with the individual’s knowledge and approval.
Data Quality Principle—personal data should be complete, accurate, and maintained in a fashion consistent with the purposes for the data collection.
Purpose Specification Principle—the purpose for the data collection should be known, and the subsequent use of the data should be limited to the purposes outlined at the time of collection.
Use Limitation Principle—personal data should not be disclosed without either the consent of the individual or a legal requirement mandating disclosure.
Security Safeguards Principle—personal data should be reasonably protected against unauthorized use, disclosure, or alteration.
Openness Principle—the general policy concerning collection and use of personal data should be readily available.
Individual Participation Principle—individuals should be:
Able to find out if an entity holds any of their personal data
Made aware of any personal data being held
Given a reason for any denials to account for personal data being held, and a process for challenging any denials
Able to challenge the content of any personal data being held, and have a process for updating their personal data if found to be inaccurate or incomplete
Accountability Principle—the entity using the personal data should be accountable for adhering to the principles above.[5]

EU-US Safe Harbor

An interesting aspect of the EU Data Protection Directive is that the personal data of EU citizens may not be transmitted, even when permitted by the individual, to countries outside of the EU unless the receiving country is perceived by the EU to adequately protect that data. This presents a challenge regarding the sharing of data with the United States, which is perceived to have less stringent privacy protections. To help resolve this issue, the United States and European Union created the safe harbor framework, which gives US-based organizations the benefit of authorized data sharing. In order to be part of the safe harbor, US organizations must voluntarily consent to data privacy principles that are consistent with the EU Data Protection Directive.

US Privacy Act of 1974

All governments have a wealth of personally identifiable information on their citizens. The Privacy Act of 1974 was created to codify protection of US citizens’ data that is being used by the federal government. The Privacy Act defined guidelines regarding how US citizens’ personally identifiable information would be used, collected, and distributed. An additional protection was that the Privacy Act provides individuals with access to the data being maintained related to them, with some national security oriented exceptions.

International Cooperation

Beyond attribution, attacks bounced off multiple systems present an additional jurisdictional challenge: searching or seizing assets. Some involved systems might be in countries whose computer crime laws differ from those of the country prosecuting the crime. Or the country where evidence exists might not want to share the information with the country prosecuting the crime. These challenges can make successful prosecution of computer crimes very difficult.
To date, the most significant progress toward international cooperation in computer crime policy is the Council of Europe Convention on Cybercrime. In addition to the treaty being signed and subsequently ratified by a majority of the 47 European member countries, the United States has also signed and ratified the treaty. The primary focus of the Convention on Cybercrime is establishing standards in cybercrime policy to promote international cooperation during the investigation and prosecution of cybercrime. Additional information on the Council of Europe Convention on Cybercrime can be found here: http://conventions.coe.int/Treaty/en/Treaties/Html/185.htm.

Import/Export Restrictions

In the United States, law enforcement can, in some cases, be granted the legal right to perform wiretaps to monitor phone conversations. We will discuss legal searches and search warrants in the Reasonable Searches section of Legal Aspects of Investigations below. What if a would-be terrorist used an encrypted tunnel to carry Voice over IP calls rather than using traditional telephony? Even though law enforcement might have been granted the legal right to monitor this conversation, their attempts would be stymied by the encryption. Due to the successes of cryptography, many nations have limited the import and/or export of cryptosystems and associated cryptographic hardware. In some cases, countries would prefer their citizens to not have access to cryptosystems that their intelligence agencies cannot crack, and therefore attempt to impose import restrictions on cryptographic technologies.
In addition to import controls, some countries enact bans on the export of cryptographic technology to specific countries in an attempt to prevent unfriendly nations from having advanced encryption capabilities. Effectively, cryptography is treated as if it were a more traditional weapon, and nations desire to limit the spread of these arms. During the Cold War, CoCom, the Coordinating Committee for Multilateral Export Controls, was a multinational agreement to not export certain technologies, including encryption, to many communist countries. After the Cold War, the Wassenaar Arrangement became the standard for export controls. This multinational agreement was far less restrictive than the former CoCom, but still imposed significant restrictions on the export of cryptographic algorithms and technologies to countries not included in the Wassenaar Arrangement.
During the 1990s the United States was one of the primary instigators of banning the export of cryptographic technologies. The United States’ previous export restrictions have been greatly relaxed, though there are still countries to which it would be illegal to distribute cryptographic technologies. The set of countries to which the United States bars export of encryption technology changes over time, but typically includes countries considered to pose a significant threat to US interests. The United States is not alone in restricting export to specific countries considered politically unfriendly to its interests. Further information on laws surrounding cryptography can be found in the Cryptography Laws section of Chapter 4, Domain 3: Security Engineering.

Trans-border Data Flow

The concept of trans-border data flow was discussed tangentially with respect to privacy (see Privacy: OECD Privacy Guidelines above). While the OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data were issued in 1980, the need to consider the impact of data being transferred between countries has greatly increased in the years since. In general, the OECD recommends the unfettered flow of information, albeit with notable legitimate exceptions; the most important of these exceptions were identified in the privacy guidelines themselves. Five years after the privacy guidance, the OECD issued its Declaration on Transborder Data Flows, which further endorsed unimpeded data flows.

Important Laws and Regulations

An entire book could easily be filled with discussions of both US and international laws that directly or indirectly pertain to issues in information security. This section is not an exhaustive review of these laws. Instead, only those laws that are represented on the examination are included in the discussion. Table 2.2 at the end of this section provides a quick summary of laws and regulations that are commonly associated with information security.

Table 2.2

Common Information Security Laws and Regulations

Law – Noteworthy Points
HIPAA (Health Insurance Portability and Accountability Act) – The Privacy and Security portions seek to guard Protected Health Information (PHI) from unauthorized use or disclosure. The Security Rule provides guidance on Administrative, Physical, and Technical safeguards for the protection of PHI. HIPAA applies to covered entities, which are typically healthcare providers, health plans, and clearinghouses. Also, the HITECH Act of 2009 makes HIPAA’s privacy and security provisions apply to business associates of covered entities as well.
Computer Fraud and Abuse Act (Title 18 Section 1030) – One of the first US laws pertaining to computer crimes. Criminalized attacks on protected computers, which include government and financial computers as well as those engaged in foreign or interstate commerce, that resulted in $5,000 in damages during one year. The foreign and interstate commerce portion of the protected computer definition allowed many more computers than originally intended to be covered by this law.
Electronic Communications Privacy Act (ECPA) – Brought a level of search and seizure protection to non-telephony electronic communications similar to that afforded to telephone communications. Effectively, the ECPA protected electronic communications from warrantless wiretapping. The PATRIOT Act weakened some of the ECPA restrictions.
PATRIOT Act of 2001 – Expanded law enforcement’s electronic monitoring capabilities. Provided broader coverage for wiretaps. Allowed for search and seizure without requiring immediate disclosure. Generally lessened the judicial oversight required of law enforcement as related to electronic monitoring.
Gramm-Leach-Bliley Act (GLBA) – Requires financial institutions to protect the confidentiality and integrity of consumer financial information, and to notify consumers of their privacy practices.
California Senate Bill 1386 (SB1386) – One of the first US state-level breach notification laws. Requires organizations experiencing a personal data breach involving California residents to notify them of the potential disclosure. Served as impetus in the US for later state and federal attempts at breach notification laws.
Sarbanes-Oxley Act of 2002 (SOX) – Passed as a direct result of major accounting scandals in the United States, SOX created regulatory compliance mandates for publicly traded companies. Its primary goal was to ensure adequate financial disclosure and financial auditor independence. SOX requires financial disclosure, auditor independence, and internal security controls such as a risk assessment. Intentional violation of SOX can result in criminal penalties.
Payment Card Industry Data Security Standard (PCI-DSS) – The major vendors in the payment card portion of the financial industry have attempted to achieve adequate protection of cardholder data through self-regulation. By requiring merchants that process credit cards to adhere to PCI-DSS, the major credit card companies seek to ensure better protection of cardholder data through mandating security policy, security devices, control techniques, and monitoring of the systems and networks that make up cardholder data environments.

US Computer Fraud and Abuse Act

Title 18 United States Code Section 1030, more commonly known as the Computer Fraud and Abuse Act, was originally drafted in 1984, but still serves as an important piece of legislation related to the prosecution of computer crimes. The law has been amended numerous times, most notably by the USA PATRIOT Act and the more recent Identity Theft Enforcement and Restitution Act of 2008, which is too new to be included in the exam at the time of this writing.

Note

What do bot herders, phreakers, the New York Times attackers, and the authors of Blaster and Melissa all have in common? They were all convicted, in part, as a result of Title 18 United States Code Section 1030, the frequently amended Computer Fraud and Abuse Act. This law has provided for the largest number of computer crime convictions in the United States. Almost all of the notorious cyber criminals to receive convictions were prosecuted under this statute. The Computer Fraud and Abuse Act was instrumental in the successful prosecution of Albert Gonzalez, who compromised Heartland Payment Systems and TJX; Adrian Lamo, the “homeless hacker” who broke into the New York Times and Microsoft; Kevin Mitnick, perhaps the most widely known of all computer related felons; and Jeanson James Ancheta, one of the first persons to be prosecuted for his role as a bot herder.
The goal of the Computer Fraud and Abuse Act was to develop a means of deterring and prosecuting acts that damaged federal interest computers. “Federal interest computer” includes government, critical infrastructure, or financial processing systems; the definition also references computers engaged in interstate commerce. With the ubiquity of Internet-based commerce, this definition can be used to classify almost any Internet-connected computer as a protected computer. The Computer Fraud and Abuse Act criminalized intentional attacks against protected computers that resulted in aggregate damages of $5,000 in one year.

Note

The Computer Fraud and Abuse Act criminalized actions that resulted in $5,000 in damages to protected computers in one year. In 2008, the Identity Theft Enforcement and Restitution Act was passed, amending the Computer Fraud and Abuse Act. One of the more important changes removed the requirement that damages total $5,000. Another important amendment made damaging 10 or more computers a felony.

USA PATRIOT Act

The USA PATRIOT Act of 2001 was passed in response to the attacks on the US that took place on September 11, 2001. The full title is “Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act,” but it is often simply called the “Patriot Act.” The main thrust of the Patriot Act for information security professionals is less stringent oversight of law enforcement regarding data collection. Wiretaps have become broader in scope. Searches and seizures can be conducted without immediate notification of the person whose data or property is being seized. An additional consideration is that the Patriot Act amended the Computer Fraud and Abuse Act to strengthen the penalties for those convicted of attempting to damage a protected computer: up to 20 years in prison may be served for a second offense.

HIPAA

One of the more important regulations is HIPAA, the Health Insurance Portability and Accountability Act that was developed in the United States in 1996. HIPAA is a large and complex set of provisions that required changes in the health care industry. The Administrative Simplification portion, Title II, contains the information most important to information security professionals and includes the Privacy and Security Rules. The Administrative Simplification portion applies to what are termed covered entities, which includes health plans, healthcare providers, and clearinghouses. See the note below for additional information regarding HIPAA’s applicability.

Note

Though not testable at the time of this book’s printing, HIPAA has now become more widely applicable due to recent legislation. The Health Information Technology for Economic and Clinical Health Act (HITECH Act), which was signed into law as part of the American Recovery and Reinvestment Act of 2009, extended the privacy and security requirements under HIPAA to those that serve as business associates of covered entities. An additional component added by the HITECH Act is a requirement for breach notification. General breach notification information will be discussed in the next section.
The Privacy and Security portions are largely concerned with the safeguarding of Protected Health Information (PHI), which includes almost any individually identifiable information that a covered entity would use or store. The HIPAA Security Rule includes sections on Administrative, Physical, and Technical safeguards. Each safeguard is considered either a required or addressable implementation specification, which speaks to the degree of flexibility a covered entity has in implementation.

Exam Warning

Breach notification laws are still too recent and mutable to be considered testable material, but their importance to the marketplace will make them a subject of test questions in the very near future.

United States Breach Notification Laws

At present, over 47 US states have enacted breach notification laws (see: http://www.ncsl.org/issues-research/telecom/security-breach-notification-laws.aspx). There have been attempts at passing a general federal breach notification law in the United States, but these efforts have been unsuccessful thus far. Although it would be impossible to make blanket statements that would apply to all of the various state laws, there are some themes common to quite a few of the state laws that are quickly being adopted by organizations concerned with adhering to best practices.
The purpose of the breach notification laws is typically to notify the affected parties when their personal data has been compromised. One issue that frequently comes up in these laws is what constitutes a notification-worthy breach. Many laws have clauses that stipulate that the business only has to notify the affected parties if there is evidence to reasonably assume that their personal data will be used maliciously.
Another issue found in some of the state laws is a safe harbor for data that was encrypted at the time of compromise. This safe harbor can be a strong impetus for organizations to encrypt data that otherwise might not have a regulatory or other legal requirement to be encrypted. Breach notification laws are certainly here to stay, and a federal law seems likely in the near future. Many organizations in both the US and abroad consider encryption of confidential data to be a due diligence issue even if a specific breach notification law is not in force within the organization’s particular jurisdiction.
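The common clauses described above can be sketched as a simple decision function. This is an illustrative model only: the names (BreachRecord, notification_required) and the exact conditions are our own assumptions, not drawn from any particular statute, and real determinations require legal counsel.

```python
from dataclasses import dataclass

@dataclass
class BreachRecord:
    """Hypothetical summary of a breach, for illustration only."""
    contains_personal_data: bool
    encrypted_at_compromise: bool  # safe-harbor condition in some state laws
    likely_malicious_use: bool     # "reasonable likelihood of misuse" clause

def notification_required(breach: BreachRecord) -> bool:
    # No personal data involved: breach notification laws do not apply.
    if not breach.contains_personal_data:
        return False
    # Safe harbor: some state laws exempt data encrypted at the time of compromise.
    if breach.encrypted_at_compromise:
        return False
    # Some laws require notice only if malicious use is reasonably likely.
    return breach.likely_malicious_use

# Encrypted data falls under the safe harbor; no notification required.
print(notification_required(BreachRecord(True, True, True)))   # False
# Plaintext personal data with likely misuse triggers notification.
print(notification_required(BreachRecord(True, False, True)))  # True
```

The safe-harbor check illustrates why encryption at rest is attractive even absent a mandate: it can remove a breach from the notification regime entirely under many state laws.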

Security and 3rd Parties

Organizations are increasingly reliant upon 3rd parties to provide significant, and sometimes business-critical, services. While leveraging external organizations is by no means a recent phenomenon, the criticality of the role and the volume of services and products now typically warrant the specific attention of an organization’s information security department.

Service Provider Contractual Security

Contracts are the primary control for ensuring security when dealing with 3rd party organizations’ providing services. The tremendous surge in outsourcing, especially the ongoing shift toward cloud services, has made contractual security measures much more prominent. While contractual language will vary, there are several common contracts or agreements that are used when attempting to ensure security when dealing with 3rd party organizations.

Service Level Agreements (SLA)

A common way of ensuring security is through the use of Service Level Agreements, or SLAs. The SLA identifies key expectations that the vendor is contractually required to meet. SLAs are widely used for general performance expectations, but are increasingly leveraged for security purposes as well. SLAs primarily address availability.
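To make the availability focus concrete, an availability percentage in an SLA implies a concrete downtime budget. The sketch below is our own illustration (the function name and the 30-day month are assumptions, not part of any standard SLA):

```python
def allowed_downtime_minutes(availability_pct: float,
                             period_minutes: int = 30 * 24 * 60) -> float:
    """Downtime budget implied by an availability target over a 30-day month."""
    return period_minutes * (1 - availability_pct / 100)

# Each additional "nine" shrinks the permitted outage window by a factor of ten.
for target in (99.0, 99.9, 99.99):
    minutes = allowed_downtime_minutes(target)
    print(f"{target}% availability allows {minutes:.1f} minutes of downtime per month")
```

When reviewing an SLA, translating the percentage into minutes per month in this way makes it easier to judge whether the vendor’s commitment actually meets the business need.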

Attestation

Larger providers and more discerning customers regularly look to attestation as a means of ensuring that some level of scrutiny has been applied to the organization’s security posture. Information security attestation involves having a 3rd party organization review the practices of the service provider and make a statement about the security posture of the organization. The goal of the service provider is to provide evidence that they should be trusted. Typically, a 3rd party provides attestation after performing an audit of the service provider against a known baseline. However, another means of attestation that some service providers will offer is in the form of penetration test reports from assessments conducted by a 3rd party.
Historically, the primary attestation vehicle in security has been the SAS 70 review. However, the SAS 70 is not overtly concerned with information security. Increasingly, ISO 27001 certification is sought by larger service providers for attestation purposes. See Chapter 3, Domain 2: Asset Security for additional details on ISO 27001.
The Payment Card Industry Data Security Standard (PCI-DSS) also uses attestation: a PCI Qualified Security Assessor (QSA) may assess the security of an organization that uses credit cards. If the security meets the PCI-DSS standard, a Report on Compliance (ROC) and Attestation of Compliance (AOC) may be issued to the organization.

Right to Penetration Test/Right to Audit

Though 3rd party attestation is commonly being offered by vendors as a way to verify they are employing sound security practices, some organizations still would prefer to derive their own opinion as to the security of the 3rd party organization. The Right to Penetration Test and Right to Audit documents provide the originating organization with written approval to perform their own testing or have a trusted provider perform the assessment on their behalf. Typically, there will be limitations on what the pen testers or auditors are allowed to use or target, but these should be clearly defined in advance.
An alternative to the Right to Penetration Test/Right to Audit documents is for the service provider to present the originating organization with a 3rd party audit or penetration test that the service provider had performed. As stated above, these documents can also be thought of as attestation.

Procurement

Procurement is the process of acquiring products or services from a 3rd party. In many, if not most, organizations there is little insight either sought or provided regarding the security of the solution. Traditionally, when security was involved at all, its considerations were an afterthought incorporated late in the procurement process. Leveraging the security department early and often can serve as a preventive control that allows the organization to make risk-based decisions even prior to vendor or solution acceptance. While security will certainly not be the only, or even the most important, consideration, the earlier security is involved the greater the chance for meaningful discussion of the security challenges, as well as the countermeasures that might be required as a result of the procurement.

Vendor Governance

Given the various ways organizations leverage 3rd party organizations and vendors, there is a need for vendor governance, also called vendor management. The goal of vendor governance is to ensure that the business is continually getting sufficient quality from its 3rd party providers. Professionals performing this function will often be employed at both the originating organization and the 3rd party. Interestingly, vendor governance can itself be outsourced to yet another 3rd party. Ultimately, the goal is to ensure that strategic partnerships between organizations continually provide the expected value.

Acquisitions

Acquisitions can be disruptive to business, impacting aspects of both organizations. That goes doubly so for information security. Imagine that Tyrell Corporation has acquired Tannhauser, Inc. Tyrell Corporation has made a significant investment in information security, while Tannhauser has not. In fact, there are multiple live intrusions on the Tannhauser network, including a live worm infestation. What if Tyrell simply links the two corporate WANs together, with little or no filtering between the two?
Due diligence requires a thorough risk assessment of any acquired company’s information security program, including an effective assessment of the current state of network security. This includes performing vulnerability assessment and penetration testing of the acquired company before any merger of networks. See Chapter 7, Domain 6: Security Assessment and Testing for more information on the types of tests that should be performed.

Divestitures

Divestitures (also known as de-mergers and de-acquisitions) represent the flip side of Acquisitions: one company becomes two or more. Divestitures can represent more risk than acquisitions: how exactly will sensitive data be split up? How will IT systems be split?
It is quite common for formerly unified companies to split off and inadvertently maintain duplicate accounts and passwords within the two newly spun-off companies. This allows (former) insider attacks, in which an employee of the formerly unified company hacks into a divested company by reusing old credentials. Similar risks exist with the reuse of physical security controls, including keys and badges. All forms of access for former employees must be revoked.

Ethics

Ethics is doing what is morally right. The Hippocratic Oath, taken by doctors, is an example of a code of ethics.
Ethics are of paramount concern for information security professionals: we are often trusted with highly sensitive information, and our employers, clients, and customers must know that we will treat their information ethically.
Digital information also raises ethical issues. Imagine that your DNA were sequenced and stored in a database. That database could tell you whether you were predisposed to suffer certain genetic illnesses, such as Huntington’s disease. Then imagine insurance companies using that database to deny coverage today because you are likely to develop a disease in the future.

The (ISC)² Code of Ethics

The (ISC)² code of ethics is the most testable code of ethics on the exam. That’s fair: you cannot become a CISSP® without agreeing to the code of ethics (among other steps), so it is reasonable to expect new CISSPs® to understand what they are agreeing to.

Note

Download the (ISC)² code of ethics at http://www.isc2.org/ethics/default.aspx and study it carefully. You must understand the entire code, not just the details covered in this book.
The (ISC)² code of ethics includes the preamble, canons, and guidance. The preamble is the introduction to the code. The canons are mandatory: you must follow them to become (and remain) a CISSP®. The guidance is “advisory” (not mandatory): it provides supporting information for the canons.
The code of ethics preamble and canons are quoted here: “Safety of the commonwealth, duty to our principals, and to each other requires that we adhere, and be seen to adhere, to the highest ethical standards of behavior. Therefore, strict adherence to this Code is a condition of certification.”
The canons are the following:
Protect society, the commonwealth, and the infrastructure.
Act honorably, honestly, justly, responsibly, and legally.
Provide diligent and competent service to principals.
Advance and protect the profession.[6]
The canons are applied in order: when faced with an ethical dilemma, resolve it in favor of the earlier canon. In other words, it is more important to protect society than to advance and protect the profession.
This order makes sense. The South African system of Apartheid (racial segregation), for example, was legal but unethical. The canons address such conflicts between law and ethics in an unambiguous fashion.

The (ISC)² Code of Ethics Canons in Detail

The first, and therefore most important, canon of the (ISC)² Code of Ethics requires the information security professional to “protect society, the commonwealth, and the infrastructure.”[7] The focus of the first canon is on the public and their understanding of, and faith in, information systems. Security professionals are charged with promoting safe security practices and bettering the security of systems and infrastructure for the public good.
The second canon in the (ISC)² Code of Ethics charges information security professionals to “act honorably, honestly, justly, responsibly, and legally.”[8] This canon is fairly straightforward, but there are a few points worth emphasizing here. One point detailed within this canon relates to laws from different jurisdictions being found to be in conflict. The (ISC)² Code of Ethics suggests that priority be given to the jurisdiction in which services are being provided. Another point made by this canon relates to providing prudent advice, and cautioning the security professional against unnecessarily promoting fear, uncertainty, and doubt.
The (ISC)² Code of Ethics’ third canon requires that security professionals “provide diligent and competent service to principals.”[9] The primary focus of this canon is ensuring that the security professional provides competent service for which she is qualified and which maintains the value and confidentiality of information and the associated systems. An additional important consideration is to ensure that the professional does not have a conflict of interest in providing quality services.
The fourth and final canon in the (ISC)² Code of Ethics mandates that information security professionals “advance and protect the profession.”[10] This canon requires that security professionals maintain their skills, and advance the skills and knowledge of others. An additional consideration that warrants mention is that this canon requires individuals to avoid negatively impacting the security profession by associating in a professional fashion with those who might harm the profession.

Exam Warning

The (ISC)² code of ethics is highly testable, including applying the canons in order. You may be asked for the “best” ethical answer, when all answers are ethical, per the canons. In that case, choose the answer that is mentioned first in the canons. Also, the most ethical answer is usually the best: hold yourself to a very high ethical level on questions posed during the exam.

Computer Ethics Institute

The Computer Ethics Institute provides their “Ten Commandments of Computer Ethics” as a code of computer ethics. The code is both short and fairly straightforward. Both the name and format are reminiscent of the Ten Commandments of Judaism, Christianity, and Islam, but there is nothing overtly religious in nature about the Computer Ethics Institute’s Ten Commandments. The Computer Ethics Institute’s Ten Commandments of Computer Ethics are:
1. Thou shalt not use a computer to harm other people.
2. Thou shalt not interfere with other people’s computer work.
3. Thou shalt not snoop around in other people’s computer files.
4. Thou shalt not use a computer to steal.
5. Thou shalt not use a computer to bear false witness.
6. Thou shalt not copy or use proprietary software for which you have not paid.
7. Thou shalt not use other people’s computer resources without authorization or proper compensation.
8. Thou shalt not appropriate other people’s intellectual output.
9. Thou shalt think about the social consequences of the program you are writing or the system you are designing.
10. Thou shalt always use a computer in ways that ensure consideration and respect for your fellow humans.[11]

IAB’s Ethics and the Internet

Much like the fundamental protocols of the Internet, the Internet Activities Board’s (IAB) code of ethics, Ethics and the Internet, is defined in an RFC document. RFC 1087, Ethics and the Internet, was published in 1987 to present a policy relating to ethical behavior associated with the Internet. The RFC is short and easy to read, and provides five basic ethical principles. According to the IAB, the following practices would be considered unethical behavior if someone purposely:
Seeks to gain unauthorized access to the resources of the Internet;
Disrupts the intended use of the Internet;
Wastes resources (people, capacity, computer) through such actions;
Destroys the integrity of computer-based information;
Compromises the privacy of users.[12]

Information Security Governance

Information Security Governance is information security at the organizational level: senior management, policies, processes, and staffing. It is also the organizational priority provided by senior leadership, which is required for a successful information security program.

Security Policy and Related Documents

Documents such as policies and procedures are a required part of any successful information security program. These documents should be grounded in reality: they are not idealistic documents that sit on shelves collecting dust. They should mirror the real world, and provide guidance on the correct (and sometimes required) way of doing things.

Exam Warning

When discussing policies and related documents, terms like “mandatory” (compulsory) and “discretionary” may be a bit of an overstatement, but it is a useful one for the exam. This text will use those terms. We live in an information security world that is painted in shades of gray, but the exam asks black-and-white questions about the best choice. A guideline to follow best practices is “discretionary,” but if you decide not to follow a guideline, the decision should be well thought out and documented.

Policy

Policies are high-level management directives. Policy is mandatory: if you do not agree with your company’s sexual harassment policy, for example, you do not have the option of not following it.
Policy is high level: it does not delve into specifics. A server security policy would discuss protecting the confidentiality, integrity, and availability of the system (usually in those terms). It may discuss software updates and patching. The policy would not use terms like “Linux” or “Windows”; that is too low level. In fact, if you converted your servers from Windows to Linux, your server policy would not change. Other documents, like procedures, would change.
Components of Program Policy
All policy should contain these basic components:
Purpose
Scope
Responsibilities
Compliance
Purpose describes the need for the policy, typically to protect the confidentiality, integrity, and availability of protected data.
Scope describes what systems, people, facilities, and organizations are covered by the policy. Any related entities that are not in scope should be documented, to avoid confusion.
Responsibilities include responsibilities of information security staff, policy and management teams, as well as responsibilities of all members of the organization.
Compliance describes two related issues: how to judge the effectiveness of the policies (how well they are working), and what happens when policy is violated (the sanction). All policy must have “teeth”: a policy that forbids accessing explicit content via the Internet is not useful if there are no consequences for doing so.
Policy Types
NIST Special Publication 800-12 (see http://csrc.nist.gov/publications/nistpubs/800-12/handbook.pdf) discusses three specific policy types: program policy, issue-specific policy, and system-specific policy.
Program policy establishes an organization’s information security program. Examples of issue-specific policies listed in NIST SP 800-12 include email policy and email privacy policy. Examples of system-specific policies include a file server policy, or a Web server policy.

Procedures

A procedure is a step-by-step guide for accomplishing a task. Procedures are low level and specific. Like policies, procedures are mandatory.
Here is a simple example procedure for creating a new user:
1. Receive a new-user request form and verify its completeness.
2. Verify that the user’s manager has signed the form.
3. Verify that the user has read and agreed to the user account security policy.
4. Classify the user’s role by following role-assignment procedure NX-103.
5. Verify that the user has selected a “secret word,” such as their mother’s maiden name, and enter it into the help desk account profile.
6. Create the account and assign the proper role.
7. Assign the secret word as the initial password, and set “Force user to change password on next login to ‘True.’ ”
8. Email the New Account document to the user and their manager.
The steps of this procedure are mandatory. Security administrators do not have the option of skipping step 1, for example, and creating an account without a form.
Other safeguards depend on this fact: when a user calls the help desk as a result of a forgotten password, the help desk will follow their “forgotten password” procedure, which includes asking for the user’s secret word. They cannot do that unless step 5 was completed: without that word, the help desk cannot securely reset the password. This mitigates social engineering attacks, where an imposter tries to trick the help desk into resetting a password for an account they are not authorized to access.

Standards

A standard describes the specific use of technology, often applied to hardware and software. “All employees will receive an ACME Nexus-6 laptop with 2 gigabytes of memory, a 2.8 GHz dual core CPU, and a 300-gigabyte disk” is an example of a hardware standard. “The laptops will run Windows 7 Professional, 32-bit version” is an example of a software (operating system) standard.
Standards are mandatory. They lower the Total Cost of Ownership of a safeguard. Standards also support disaster recovery. Imagine two companies in buildings side by side in an office park, each with 1000 laptops in its building.
One company uses standard laptop hardware and software. The laptop operating system is installed from a central preconfigured and patched image. The standard operating system has preconfigured network file storage, all required tools and software preinstalled, and preconfigured antivirus and firewall software. Users are forbidden from installing their own applications.
The other company does not employ standards. The laptop hardware is made by a variety of vendors. Multiple operating systems are used, at various patch levels. Some use network storage; others do not. Many have applications installed by end-users.
Which company will recover more quickly if the buildings burn down? The first company needs to buy 1000 identical laptops, recover the OS image and imaging software from offsite storage, configure an imaging server, and rebuild the laptops. Not easy, but doable. The second company’s recovery will be far more difficult, and more likely to fail.

Guidelines

Guidelines are recommendations (which are discretionary). A guideline can be a useful piece of advice, such as “To create a strong password, take the first letter of every word in a sentence, and mix in some numbers and symbols. ‘I will pass the CISSP® exam in 6 months!’ becomes ‘Iwptcei6m!’ ”
You can create a strong password without following this advice, which is why guidelines are not mandatory. They are useful, especially for novice users.
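The guideline’s mnemonic can be sketched in a few lines of Python. This is only an illustration: the function name is ours, and the sketch keeps each word’s first character unchanged, so its output differs slightly in letter case from the book’s hand-built example.

```python
def mnemonic_password(sentence: str) -> str:
    """Build a password from the first character of each word in a sentence.

    Digits in the sentence survive as digits, and the sentence's closing
    punctuation (if any) is kept as a symbol, per the guideline's advice to
    mix in numbers and symbols.
    """
    words = sentence.split()
    password = "".join(word[0] for word in words)
    # Keep a trailing symbol such as "!" from the original sentence.
    if not sentence[-1].isalnum():
        password += sentence[-1]
    return password

# "I will pass the CISSP exam in 6 months!" -> "IwptCei6m!"
print(mnemonic_password("I will pass the CISSP exam in 6 months!"))
```

A real implementation would also enforce minimum length and character-class rules; guidelines, being discretionary, leave those choices to the user.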

Baselines

Baselines are uniform ways of implementing a standard. “Harden the system by applying the Center for Internet Security Linux benchmarks” is an example of a baseline (see http://benchmarks.cisecurity.org/en-us/?route=default for the Security Benchmarks division of the Center for Internet Security; they are a great resource). The system must meet the baseline described by those benchmarks.
Baselines are discretionary: it is acceptable to harden the system without following the aforementioned benchmarks, as long as it is at least as secure as a system hardened using the benchmarks. Formal exceptions to baselines will require senior management sign-off.
Table 2.3 summarizes the types of security documentation.

Table 2.3

Summary of Security Documentation

Policy: high-level management directive (mandatory)
Procedure: step-by-step guide for accomplishing a task (mandatory)
Standard: specific use of technology, such as hardware or software (mandatory)
Guideline: recommendation or advice (discretionary)
Baseline: uniform way of implementing a standard (discretionary)

Personnel Security

Users can pose the biggest security risk to an organization. Background checks should be performed, contractors need to be securely managed, and users must be properly trained and made aware of security risks, as we will discuss next. Controls such as Non-Disclosure Agreements (NDA) and related employment agreements are a recommended personnel security control, as we will discuss in Chapter 8, Domain 7: Security Operations.

Security Awareness and Training

Security awareness and training are often confused. Awareness changes user behavior; training provides a skill set.
Reminding users to never share accounts or write their passwords down is an example of awareness. It is assumed that some users are doing the wrong thing, and awareness is designed to change that behavior.
Security training teaches a user how to do something. Examples include training new help desk personnel to open, modify, and close service tickets; training network engineers to configure a router; and training a security administrator to create a new account.

Background Checks

Organizations should conduct a thorough background check before hiring an individual. A criminal records check should be conducted, and all experience, education, and certifications should be verified. Lying or exaggerating about education, certifications, and related credentials is one of the most common examples of dishonesty in the hiring process.
More thorough background checks should be conducted for roles with heightened privileges, such as access to money or classified information. These checks can include a financial investigation, a more thorough criminal records check, and interviews with friends, neighbors, and current and former coworkers.

Employee Termination

Termination should result in immediate revocation of all employee access. Beyond account revocation, termination should be a fair process. There are ethical and legal reasons for employing fair termination, but there is also an additional information security advantage. An organization’s worst enemy can be a disgruntled former employee, who, even without legitimate account access, knows where the “weak spots are.” This is especially true for IT personnel.
A negative reaction to termination is always possible, but using a fair termination process may lower the risk. As in many areas on the CISSP® exam, process trumps informal actions. A progressive discipline (also called ladder of discipline) process includes:
Coaching
Formal discussion
Verbal warning meeting, with Human Resources attendance (perhaps multiple warnings)
Written warning meeting, with Human Resources attendance (perhaps multiple warnings)
Termination
The employee should be given clear guidance on the cause of the discipline, and also given direct actionable steps required to end the process. An example is “You are being disciplined for failing to arrive at work in a timely fashion. You must arrive for work by 9:00 AM each workday, unless otherwise arranged or in cases of an emergency. This process will end when you consistently arrive for work on time. This process will continue if you continue to fail to arrive at work on time. This process can lead to termination of employment if the problem continues.”
If the process ends in termination, there are no surprises left. This is fair, and also lowers the chance of a negative reaction. People tend to act more reasonably if they feel they have been treated fairly.

Vendor, Consultant and Contractor Security

Vendors, consultants, and contractors can introduce risks to an organization. They are not direct employees, and sometimes have access to systems at multiple organizations. If allowed to, they may place an organization’s sensitive data on devices not controlled (or secured) by the organization.
Third-party personnel with access to sensitive data must be trained and made aware of risks, just as employees are. Background checks may also be required, depending on the level of access required. Information security policies, procedures, and other guidance should apply as well. Additional policies regarding ownership of data and intellectual property should be developed. Clear rules dictating where and when a third party may access or store data must be developed.
Other issues to consider include: how does a vendor with access to multiple organizations’ systems manage access control? Many vendors will re-use the same credentials across multiple sites, manually synchronizing passwords (if they are able or allowed to). As we will discuss in Chapter 6, Domain 5: Identity and Access Management, multi-factor authentication mitigates the risk of stolen, guessed or cracked credentials being reused elsewhere.
Also, from a technical perspective, how are the vendor’s systems secured and interconnected? Can a breach at vendor’s site (or any of the vendor’s clients) result in a breach at the client organization? Who is responsible for patching and securing vendor systems that exist onsite at the client?

Outsourcing and Offshoring

Outsourcing is the use of a third party to provide Information Technology support services that were previously performed in-house. Offshoring is outsourcing to another country.
Both can lower Total Cost of Ownership by providing IT services at lower cost. They may also enhance the information technology resources and skill sets available to a company (especially a small company), which can improve the confidentiality, integrity, and availability of data.
Offshoring can raise privacy and regulatory issues. For example, for a U.S. company that offshores data to Australia, there is no Health Insurance Portability and Accountability Act (HIPAA, the primary regulation covering health care data in the United States) in Australia. Nor is there SOX (the Sarbanes-Oxley Act, which regulates the financial reporting of publicly traded companies in the United States), the Gramm-Leach-Bliley Act (GLBA, which protects financial information in the United States), etc.
A thorough and accurate Risk Analysis must be performed before outsourcing or offshoring sensitive data. If the data will reside in another country, you must ensure that laws and regulations governing the data are followed, even beyond the laws of the offshored jurisdiction. This can be done contractually: the Australian company can agree to follow HIPAA via contract, for example.

Learn By Example

Do You Know Where Your Data Is?

University of California at San Francisco (UCSF) Medical Center outsourced transcription work to a Florida company. In 2003, a transcriptionist working for the Florida company subcontracted some of the work to a man in Texas, who then subcontracted it again to Ms. Beloch, a woman working in Pakistan.
Unbeknownst to UCSF, some of its transcription work had been offshored. UCSF’s ePHI (electronic Protected Health Information, federally regulated medical information) was in Pakistan, where HIPAA does not apply.
Ms. Beloch was not paid in a timely fashion, and emailed UCSF, threatening that if she was not paid, “I will expose all the voice files and patient records of UCSF … on the Internet.”[13] She attached UCSF ePHI to the email to prove her access. She was paid, and the data was not released.
You must always know where your data is. Any outsourcing agreement must contain rules on subcontractor access to sensitive data. Any offshoring agreement must contractually account for relevant laws and regulations such as HIPAA.

Access Control Defensive Categories and Types

To understand and appropriately implement access controls, it is vital to understand what benefit each control can add to security. In this section, each type of access control will be defined on the basis of how it adds to the security of the system.
There are six access control types:
Preventive
Detective
Corrective
Recovery
Deterrent
Compensating
These access control types can fall into one of three categories: administrative, technical, or physical.
1. Administrative (also called directive) controls are implemented by creating and following organizational policy, procedure, or regulation. User training and awareness also fall into this category.
2. Technical controls are implemented using software, hardware, or firmware that restricts logical access on an information technology system. Examples include firewalls, routers, encryption, etc.
3. Physical controls are implemented with physical devices, such as locks, fences, gates, security guards, etc.

Preventive

Preventive controls prevent actions from occurring. They apply restrictions to what a potential user, either authorized or unauthorized, can do. The assigning of privileges on a system is a good example of a preventive control, because limited privileges prevent the user from performing unauthorized actions on the system. An example of an administrative preventive control is a pre-employment drug screening, designed to prevent an organization from hiring an employee who is using illegal drugs.

Note

Some sources use the term “preventive,” others use “preventative” (extra “ta”). As far as the exam is concerned, they are synonyms.

Detective

Detective controls are controls that alert during or after a successful attack. Intrusion detection systems alerting after a successful attack, closed-circuit television cameras (CCTV) that alert guards to an intruder, and a building alarm system that is triggered by an intruder are all examples of detective controls.

Corrective

Corrective controls work by “correcting” a damaged system or process. Corrective access controls typically work hand in hand with detective access controls. Antivirus software has both components: first, it runs a scan, using its definition file to detect any software that matches its virus list. If it detects a virus, the corrective component takes over, placing the suspicious software in quarantine or deleting it from the system.

Recovery

After a security incident has occurred, recovery controls may need to be applied in order to restore the functionality of the system and organization. Recovery means that the system must be recovered: reinstalled from OS media or image, data restored from backups, etc.
The connection between corrective and recovery controls is important to understand. For example, let us say a user downloads a Trojan horse. A corrective control may be the antivirus software “quarantine.” If the quarantine does not correct the problem, then a recovery control may be implemented to reload software and rebuild the compromised system.

Deterrent

Deterrent controls deter users from performing actions on a system. Examples include a “beware of dog” sign: a thief facing two buildings, one with guard dogs and one without, is more likely to attack the building without guard dogs. A large fine for speeding deters drivers from speeding. A sanction policy that makes users understand they will be fired if they are caught surfing illicit or illegal Web sites is a deterrent.

Compensating

A compensating control is an additional security control put in place to compensate for weaknesses in other controls. For example, surfing explicit Web sites would be a cause for an employee to lose his/her job. This would be an administrative deterrent control. However, by also adding a review of each employee’s Web logs each day, we are adding a detective compensating control to augment the administrative control of firing an employee who surfs inappropriate Web sites.

Comparing Access Controls

Knowing how to categorize access control examples into the appropriate type and category is important. The exam requires that the taker be able to identify types and categories of access controls. However, in the real world, remember that controls do not always fit neatly into one category: the context determines the category.

Exam Warning

For control types on the exam, do not memorize examples: instead look for the context. A firewall is a clear-cut example of a preventive technical control, and a lock is a good example of a preventive physical control.
Other examples are less clear-cut. What control is an outdoor light? Light allows a guard to see an intruder (detective). Light may also deter crime (criminals will favor poorly-lit targets).
What control is a security guard? The guard could hold a door shut (prevent it from opening), or could see an intruder in a hallway (detect the intruder), or the fact that the guard is present could deter an attack, etc. In other words, a guard could be almost any control: the context is what determines which control the guard fulfills.
Here are more clear-cut examples:
Preventive
Physical: Lock, mantrap
Technical: Firewall
Administrative: Pre-employment drug screening
Detective
Physical: CCTV, light (used to see an intruder)
Technical: IDS
Administrative: Post-employment random drug tests
Deterrent
Physical: “Beware of dog” sign, light (deterring a physical attack)
Technical: Warning Banner presented before a login prompt
Administrative: Sanction policy

Risk Analysis

All information security professionals assess risk: we do it so often that it becomes second nature. A patch is released on a Tuesday. Your company normally tests for 2 weeks before installing, but a network-based worm is spreading on the Internet that infects un-patched systems. If you install the patch now, you risk downtime due to lack of testing. If you wait to test, you risk infection by the worm. What is the bigger risk? What should you do? Risk Analysis (RA) will help you decide.
The average person does a poor job of accurately analyzing risk: if you fear the risk of dying while traveling, and drive from New York to Florida instead of flying to mitigate that risk, you have done a poor job of analyzing risk. It is far riskier, per mile, to travel by car than by airplane when considering the risk of death while traveling.
Accurate Risk Analysis is a critical skill for an information security professional. We must hold ourselves to a higher standard when judging risk. Our risk decisions will dictate which safeguards we deploy to protect our assets, and the amount of money and resources we spend doing so. Poor decisions will result in wasted money, or even worse, compromised data.

Assets

Assets are valuable resources you are trying to protect. Assets can be data, systems, people, buildings, property, and so forth. The value or criticality of the asset will dictate what safeguards you deploy. People are your most valuable asset.

Threats and Vulnerabilities

A threat is a potentially harmful occurrence, like an earthquake, a power outage, or a network-based worm such as the Conficker (aka Downadup, see http://www.microsoft.com/security/worms/Conficker.aspx) worm, which began attacking Microsoft Windows operating systems in late 2008. A threat is a negative action that may harm a system.
A vulnerability is a weakness that allows a threat to cause harm. Examples of vulnerabilities (matching our previous threats) are buildings that are not built to withstand earthquakes, a data center without proper backup power, or a Microsoft Windows XP system that has not been patched in a few years.
Using the worm example, the threat is the Conficker worm. Conficker spreads through three vectors: lack of the MS08-067 patch (see http://technet.microsoft.com/en-us/security/bulletin/ms08-067), infected USB tokens that “autorun” when inserted into a Windows system, and weak passwords on network shares.
A networked Microsoft Windows system is vulnerable if it lacks the patch, or will automatically run software on a USB token when inserted, or has a network share with a weak password. If any of those three conditions are true, you have risk. A Linux system has no vulnerability to Conficker, and therefore no risk to Conficker.

Risk = Threat × Vulnerability

To have risk, a threat must connect to a vulnerability. This relationship is stated by the formula:

Risk = Threat × Vulnerability
You can assign a value to specific risks using this formula. Assign a number to both threats and vulnerabilities. We will use a range of 1–5 (the range is arbitrary; just keep it consistent when comparing different risks).

Learn By Example

Earthquake Disaster Risk Index

Risk is often counterintuitive. If you ask a layman whether the city of Boston or San Francisco faces the greater earthquake risk, most would answer “San Francisco.” It is on the California coast near the famous Pacific Ocean “Ring of Fire,” and has suffered major earthquakes in the past. Boston is in the northeast, which has not suffered a major earthquake since colonial times.
Rachel Davidson created the Earthquake Disaster Risk Index, which is used to judge risks of earthquakes between major world cities. Details are available at: http://www.sciencedaily.com/releases/1997/08/970821233648.htm.
She discovered that the risk of earthquakes to Boston and San Francisco was roughly the same: “Bostonians face an overall earthquake risk comparable to San Franciscans, despite the lower frequency of major earthquakes in the Boston area. The reason: Boston has a much larger percentage of buildings constructed before 1975, when the city incorporated seismic safety measures into its building code.”[14]
Compared to Boston, the threat of an earthquake in San Francisco is higher (more frequent earthquakes), but the vulnerability is lower (stronger seismic safety building codes). Boston has a lower threat (fewer earthquakes), but a higher vulnerability (weaker buildings). This means the two cities have roughly equal risk.
Using a scale of 1–5, here is San Francisco’s risk, using the Risk = Threat × Vulnerability calculation:
San Francisco threat: 4
San Francisco vulnerability: 2
San Francisco risk: 4 × 2 = 8
Here is Boston’s risk:
Boston threat: 2
Boston vulnerability: 4
Boston risk: 2 × 4 = 8
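The two calculations above can be expressed as a trivial Python sketch (the function and variable names are illustrative, not part of the exam formula):

```python
def risk(threat: int, vulnerability: int) -> int:
    """Risk = Threat x Vulnerability, scored on a consistent scale (here 1-5)."""
    return threat * vulnerability

# Higher threat, lower vulnerability (strong building codes).
san_francisco = risk(threat=4, vulnerability=2)
# Lower threat, higher vulnerability (older, weaker buildings).
boston = risk(threat=2, vulnerability=4)

print(san_francisco, boston)  # 8 8: roughly equal risk
```

The scale itself is arbitrary; what matters, as the text notes, is keeping it consistent when comparing different risks.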

Impact

The “Risk = Threat × Vulnerability” equation sometimes uses an added variable called impact: “Risk = Threat × Vulnerability × Impact.” Impact is the severity of the damage, sometimes expressed in dollars. Risk = Threat × Vulnerability × Cost is sometimes used for that reason. A synonym for impact is consequences.
Let’s use the “impact” formula using the same earthquake risk example for buildings in Boston. A company has two buildings in the same office park that are virtually identical. One building is full of people and equipment; the other is empty (awaiting future growth). The risk of damage from an earthquake to both is 8, using “Risk = Threat × Vulnerability.” The impact from a large earthquake is 2 for the empty building (potential loss of the building), and 5 for the full building (potential loss of human life). Here is the risk calculated using “Risk = Threat × Vulnerability × Impact”:
Empty Building Risk: 2 (threat) × 4 (vulnerability) × 2 (impact) = 16
Full Building Risk: 2 (threat) × 4 (vulnerability) × 5 (impact) = 40
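A minimal sketch of the extended formula, using the two building values above (names are ours):

```python
def risk_with_impact(threat: int, vulnerability: int, impact: int) -> int:
    """Risk = Threat x Vulnerability x Impact."""
    return threat * vulnerability * impact

empty_building = risk_with_impact(threat=2, vulnerability=4, impact=2)
full_building = risk_with_impact(threat=2, vulnerability=4, impact=5)

print(empty_building, full_building)  # 16 40
```

The same threat and vulnerability scores yield very different risk once impact (here, potential loss of human life versus loss of an empty building) is factored in.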

Exam Warning

Loss of human life has near-infinite impact on the exam. When calculating risk using the “Risk = Threat × Vulnerability × Impact” formula, any risk involving loss of human life is extremely high, and must be mitigated.

Risk Analysis Matrix

The Risk Analysis Matrix uses a grid to map the likelihood of a risk occurring against the consequences (or impact) that risk would have. The Australia/New Zealand standard AS/NZS ISO 31000:2009, Risk Management – Principles and Guidelines (see http://infostore.saiglobal.com/store/Details.aspx?ProductID=1378670), describes the Risk Analysis Matrix, shown in Table 2.4.

Table 2.4

Risk Analysis Matrix

[Grid mapping Likelihood (Rare to Almost Certain) against Consequences (Insignificant to Catastrophic); each cell is rated Low (L), Medium (M), High (H), or Extreme (E) risk]

The Risk Analysis Matrix allows you to perform Qualitative Risk Analysis (see section “Qualitative and Quantitative Risk Analysis”) based on likelihood (from “rare” to “almost certain”) and consequences (or impact), from “insignificant” to “catastrophic.” The resulting scores are Low (L), Medium (M), High (H), and Extreme (E) risk. Low risks are handled via normal processes; medium risks require management notification; high risks require senior management notification; and extreme risks require immediate action, including a detailed mitigation plan (and senior management notification).
The goal of the matrix is to identify high likelihood/high consequence risks (upper right quadrant of Table 2.4), and drive them down to low likelihood/low consequence risks (lower left quadrant of Table 2.4).

Calculating Annualized Loss Expectancy

The Annualized Loss Expectancy (ALE) calculation allows you to determine the annual cost of a loss due to a risk. Once calculated, ALE allows you to make informed decisions to mitigate the risk.
This section will use an example of risk due to lost or stolen unencrypted laptops. Assume your company has 1000 laptops that contain Personally Identifiable Information (PII). You are the Security Officer, and you are concerned about the risk of exposure of PII due to lost or stolen laptops. You would like to purchase and deploy a laptop encryption solution. The solution is expensive, so you need to convince management that the solution is worthwhile.

Asset Value

The Asset Value (AV) is the value of the asset you are trying to protect. In this example, each laptop costs $2500, but the real value is the PII. Theft of unencrypted PII has occurred previously, and has cost the company many times the value of the laptop in regulatory fines, bad publicity, legal fees, staff hours spent investigating, etc. The true average Asset Value of a laptop with PII for this example is $25,000 ($2500 for the hardware, and $22,500 for the exposed PII).
Tangible assets (such as computers or buildings) are straightforward to calculate. Intangible assets are more challenging. For example, what is the value of brand loyalty? According to Deloitte, there are three methods for calculating the value of intangible assets – market approach, income approach and cost approach:
“Market Approach: This approach assumes that the fair value of an asset reflects the price which comparable assets have been purchased in transactions under similar circumstances.
Income Approach: This approach is based on the premise that the value of an ... asset is the present value of the future earning capacity that an asset will generate over its remaining useful life.
Cost Approach: This approach estimates the fair value of the asset by reference to the costs that would be incurred in order to recreate or replace the asset” [15]

Exposure Factor

The Exposure Factor (EF) is the percentage of an asset’s value that is lost due to an incident. In the case of a stolen laptop with unencrypted PII, the Exposure Factor is 100%: the laptop and all the data are gone.

Single Loss Expectancy

The Single Loss Expectancy (SLE) is the cost of a single loss. SLE is the Asset Value (AV) times the Exposure Factor (EF). In our case, SLE is $25,000 (Asset Value) times 100% (Exposure Factor), or $25,000.

Annual Rate of Occurrence

The Annual Rate of Occurrence (ARO) is the number of losses you suffer per year. Looking through past events, you discover that you have suffered 11 lost or stolen laptops per year on average. Your ARO is 11.

Annualized Loss Expectancy

The Annualized Loss Expectancy (ALE) is your yearly cost due to a risk. It is calculated by multiplying the Single Loss Expectancy (SLE) times the Annual Rate of Occurrence (ARO). In our case, it is $25,000 (SLE) times 11 (ARO), or $275,000.
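The full SLE/ALE chain for the laptop example can be sketched in a few lines of Python (the variable names are ours; the figures come from the running example):

```python
asset_value = 25_000      # AV: $2,500 hardware + $22,500 exposed PII
exposure_factor = 1.00    # EF: 100%, the laptop and all data are lost

# Single Loss Expectancy: cost of one lost or stolen laptop.
sle = asset_value * exposure_factor   # $25,000

aro = 11                  # Annual Rate of Occurrence: losses per year

# Annualized Loss Expectancy: yearly cost due to this risk.
ale = sle * aro           # $275,000
```

Plugging in a different Exposure Factor or ARO immediately shows how a mitigating safeguard changes the yearly cost.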
Table 2.5 summarizes the equations used to determine Annualized Loss Expectancy.

Table 2.5

Summary of Risk Equations

SLE = AV × EF (Single Loss Expectancy = Asset Value × Exposure Factor)
ALE = SLE × ARO (Annualized Loss Expectancy = Single Loss Expectancy × Annual Rate of Occurrence)

Total Cost of Ownership

The Total Cost of Ownership (TCO) is the total cost of a mitigating safeguard. TCO combines upfront costs (often a one-time capital expense) plus annual cost of maintenance, including staff hours, vendor maintenance fees, software subscriptions, etc. These ongoing costs are usually considered operational expenses.
Using our laptop encryption example, the upfront cost of laptop encryption software is $100/laptop, or $100,000 for 1000 laptops. The vendor charges a 10% annual support fee, or $10,000/year. You estimate that it will take 4 staff hours per laptop to install the software, or 4000 staff hours. The staff performing this work makes $50/hour plus benefits; including benefits, the staff cost per hour is $70, which, times 4000 hours, is $280,000.
Your company uses a 3-year technology refresh cycle, so you calculate the Total Cost of Ownership over 3 years:
Software cost: $100,000
Three years’ vendor support: $10,000 × 3 = $30,000
Hourly staff cost: $280,000
Total Cost of Ownership over 3 years: $410,000
Total Cost of Ownership per year: $410,000/3 = $136,667/year
Your Annual Total Cost of Ownership for the laptop encryption project is $136,667 per year.
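The TCO arithmetic above can be sketched as follows. Variable names are illustrative; the figures match the running example:

```python
laptops = 1000
software_cost = 100 * laptops            # $100/laptop upfront
annual_support = 0.10 * software_cost    # 10% vendor support fee per year
staff_hours = 4 * laptops                # 4 staff hours per laptop to install
staff_cost = 70 * staff_hours            # $70/hour including benefits

refresh_years = 3                        # 3-year technology refresh cycle
tco = software_cost + annual_support * refresh_years + staff_cost
annual_tco = tco / refresh_years

print(f"TCO over {refresh_years} years: ${tco:,.0f}")   # $410,000
print(f"Annual TCO: ${annual_tco:,.0f}")                # $136,667 (rounded)
```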

Return on Investment

The Return on Investment (ROI) is the amount of money saved by implementing a safeguard. If your annual Total Cost of Ownership (TCO) is less than your Annualized Loss Expectancy (ALE), you have a positive ROI (and have made a good choice). If the TCO is higher than your ALE, you have made a poor choice.
The annual TCO of laptop encryption is $136,667; the Annualized Loss Expectancy for lost or stolen unencrypted laptops is $275,000. The math is summarized in Table 2.6.

Table 2.6

Annualized Loss Expectancy of Unencrypted Laptops

image

Implementing laptop encryption will change the Exposure Factor. The laptop hardware is worth $2500, and the exposed PII costs an additional $22,500, for a $25,000 Asset Value. If an unencrypted laptop is lost or stolen, the Exposure Factor is 100% (the hardware and all data are exposed). Laptop encryption mitigates the PII exposure risk, lowering the Exposure Factor from 100% (the laptop and all data) to 10% (just the laptop hardware).
The lower Exposure Factor lowers the Annualized Loss Expectancy from $275,000 to $27,500, as shown in Table 2.7.

Table 2.7

Annualized Loss Expectancy of Encrypted Laptops

image

You will save $247,500/year (the old ALE, $275,000, minus the new ALE, $27,500) by making an investment of $136,667. Your ROI is $110,833 per year ($247,500 minus $136,667). The laptop encryption project has a positive ROI, and is a wise investment.
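As a rough sketch, the ROI comparison reduces to a few lines. The values come from the running example, including the assumed 10% Exposure Factor for encrypted laptops:

```python
ale_unencrypted = 25_000 * 1.0 * 11   # AV x EF (100%) x ARO = $275,000
ale_encrypted   = 25_000 * 0.1 * 11   # EF drops to 10% = $27,500
annual_tco      = 410_000 / 3         # annual cost of the safeguard

annual_savings = ale_unencrypted - ale_encrypted   # $247,500
roi = annual_savings - annual_tco                  # about $110,833/year
print(f"ROI: ${roi:,.0f}/year; positive ROI: {roi > 0}")
```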

Budget and Metrics

When combined with Risk Analysis, the Total Cost of Ownership and Return on Investment calculations factor into proper budgeting. Some organizations are in the enviable position of ample information security funding, yet they are often compromised. Why? The answer is usually that they mitigated the wrong risks: they spent money where it may not have been necessary, and ignored larger risks. Regardless of staff size or budget, all organizations can take on only a finite number of information security projects. If they choose unwisely, information security can suffer.
Metrics can greatly assist the information security budgeting process. They help illustrate potentially costly risks, and demonstrate the effectiveness (and potential cost savings) of existing controls. They can also help champion the cause of information security.
The CIS Security Benchmarks site (available at: http://benchmarks.cisecurity.org/en-us/?route=downloads.metrics) lists the following metrics:
“Application Security
Number of Applications
Percentage of Critical Applications
Risk Assessment Coverage
Security Testing Coverage
Configuration Change Management
Mean-Time to Complete Changes
Percent of Changes with Security Review
Percent of Changes with Security Exceptions
Financial
Information Security Budget as % of IT Budget
Information Security Budget Allocation
Incident Management
Mean-Time to Incident Discovery
Incident Rate
Percentage of Incidents Detected by Internal Controls
Mean-Time Between Security Incidents
Mean-Time to Recovery
Patch Management
Patch Policy Compliance
Patch Management Coverage
Mean-Time to Patch
Vulnerability Management
Vulnerability Scan Coverage
Percent of Systems Without Known Severe Vulnerabilities
Mean-Time to Mitigate Vulnerabilities
Number of Known Vulnerability Instances” [16]

Risk Choices

Once we have assessed risk, we must decide what to do. Options include accepting the risk, mitigating or eliminating the risk, transferring the risk, and avoiding the risk.

Accept the Risk

Some risks may be accepted: in some cases, it is cheaper to leave an asset unprotected against a specific risk than to make the effort (and spend the money) required to protect it. This cannot be an ignorant decision: the risk must be weighed, and all options considered, before the risk is accepted.

Learn By Example

Accepting the Risk

A company conducted a Risk Analysis, which identified a mainframe as a source of risk. The mainframe was no longer used for new transactions; it served as an archive for historical data. The ability to restore the mainframe after a disk failure had eroded over time: hardware aged, support contracts expired and were not renewed, and employees who were mainframe subject matter experts left the company. The company was not confident it could restore lost data in a timely fashion, if at all.
The archival data needed to be kept online for 6 more months, pending the installation of a new archival system. What should be done about the backups in the meantime? Should the company buy new mainframe restoration hardware, purchase support contracts, or hire outsourced mainframe experts?
The risk management team asked the team supporting the archive retrieval, “What would happen if this data disappeared tomorrow, 6 months before the new archival system goes live?” The answer: the company could use paper records in the interim, which would represent a small operational inconvenience. No laws or regulations prohibited this plan.
The company decided to accept the risk of failing to restore the archival data due to a mainframe failure. Note that this decision was well thought out. Stakeholders were consulted, the operational impact was assessed, and laws and regulations were considered.

Risk Acceptance Criteria

Low likelihood/low consequence risks are candidates for risk acceptance. High and extreme risks cannot be accepted. There are cases, such as data protected by laws or regulations or risk to human life or safety, where accepting the risk is not an option.

Mitigate the Risk

Mitigating the risk means lowering the risk to an acceptable level. Lowering risk is also called “risk reduction,” and the process of lowering risk is also called “reduction analysis.” The laptop encryption example given in the previous Annualized Loss Expectancy section is an example of mitigating the risk. The risk of lost PII due to stolen laptops was mitigated by encrypting the data on the laptops. The risk has not been eliminated entirely: a weak or exposed encryption password could expose the PII, but the risk has been reduced to an acceptable level.
In some cases it is possible to remove the risk entirely: this is called eliminating the risk.

Transfer the Risk

Transferring the risk is sometimes referred to as the “insurance model.” Most people do not assume the risk of fire to their house: they pay an insurance company to assume that risk for them. The insurance companies are experts in Risk Analysis: buying risk is their business. If the average yearly monetary risk of fire to 1000 homes is $500,000 ($500/house), and they sell 1000 fire insurance policies for $600/year, they will make 20% profit. That assumes the insurance company has accurately evaluated risk, of course.
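The insurer's arithmetic can be sketched as follows, using the illustrative figures from the example:

```python
homes = 1000
expected_annual_loss = 500_000   # total yearly fire risk across all homes ($500/house)
premium = 600                    # price of one policy per year

revenue = homes * premium                 # $600,000 collected in premiums
profit = revenue - expected_annual_loss   # $100,000
margin = profit / expected_annual_loss    # 20% over the assumed risk
print(f"Profit: ${profit:,.0f} ({margin:.0%} over expected losses)")
```

As the text notes, the margin holds only if the insurer's risk estimate is accurate: if actual losses exceed $600,000, the insurer loses money.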

Risk Avoidance

A thorough Risk Analysis should be completed before taking on a new project. If the Risk Analysis discovers high or extreme risks that cannot be easily mitigated, avoiding the risk (and the project) may be the best option.
The math for this decision is straightforward: calculate the Annualized Loss Expectancy of the new project, and compare it with the Return on Investment expected due to the project. If the ALE is higher than the ROI (even after risk mitigation), risk avoidance is the best course. There may also be legal or regulatory reasons that will dictate avoiding the risk.
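A minimal sketch of this decision rule follows; the helper name and the legally_prohibited flag are hypothetical, introduced only for illustration:

```python
def should_avoid(ale_after_mitigation, expected_roi, legally_prohibited=False):
    """Return True when risk avoidance is the better course: the project's
    ALE (even after mitigation) exceeds its expected ROI, or laws/regulations
    rule the project out entirely."""
    return legally_prohibited or ale_after_mitigation > expected_roi

print(should_avoid(275_000, 100_000))   # True: avoid the project
print(should_avoid(27_500, 100_000))    # False: proceed
```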

Learn By Example

Avoiding the Risk

A company sells Apple iPods online. For security reasons, repeat customers must reenter their credit card numbers for each order. This is done to avoid the risk of storing credit card numbers on an Internet-facing system (where they may be more easily stolen).
Based on customer feedback, the business unit proposes a “save my credit card information” feature for repeat customers. A Risk Analysis of the new feature is conducted once the project is proposed. The business unit also calculates the Return on Investment for this feature.
The Risk Analysis shows that the information security architecture would need significant improvement to securely protect stored credit card information on Internet-facing systems. Doing so would also require more stringent Payment Card Industry (PCI) auditing, adding a considerable amount of staff hours to the Total Cost of Ownership (TCO).
The TCO is over double the ROI of the new feature, once all costs are tallied. The company decides to avoid the risk and not implement the credit card saving feature.

Quantitative and Qualitative Risk Analysis

Quantitative and Qualitative Risk Analysis are two methods for analyzing risk. Quantitative Risk Analysis uses hard metrics, such as dollars. Qualitative Risk Analysis uses simple approximate values. Quantitative is more objective; qualitative is more subjective. Hybrid Risk Analysis combines the two: using quantitative analysis for risks which may be easily expressed in hard numbers such as money, and qualitative for the remainder.

Exam Warning

Quantitative Risk Analysis requires you to calculate the quantity of the asset you are protecting. Quantitative-quantity is a hint to remember this for the exam.
Calculating the Annualized Loss Expectancy (ALE) is an example of Quantitative Risk Analysis. The inputs for ALE are hard numbers: Asset Value (in dollars), Exposure Factor (as a percentage) and Annual Rate of Occurrence (as a hard number).
The Risk Analysis Matrix (shown previously in Table 2.4) is an example of Qualitative Risk Analysis. Likelihood and Consequences are rough (and sometimes subjective) values, ranging from 1 to 5. Whether the consequences of a certain risk are a “4” or a “5” can be a matter of (subjective) debate.
Quantitative Risk Analysis is more difficult: to quantitatively analyze the risk of damage to a data center due to an earthquake, you would need to calculate the asset value of the data center: the cost of the building, the servers, network equipment, computer racks, monitors, etc. Then calculate the Exposure Factor, and so on.
To qualitatively analyze the same risk, you would research the risk, and agree that the likelihood is a 2, and the consequences are a 4, and use the Risk Analysis matrix to determine a risk of “high.”
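A qualitative lookup in the spirit of the Risk Analysis Matrix might be sketched as follows. The band boundaries below are illustrative assumptions, not the exact thresholds of Table 2.4:

```python
def qualitative_risk(likelihood, consequence):
    """Likelihood and consequence are rough 1-5 scores; the product is
    mapped to a qualitative band (assumed boundaries for illustration)."""
    score = likelihood * consequence
    if score >= 15:
        return "extreme"
    if score >= 8:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

# The earthquake example from the text: likelihood 2, consequences 4.
print(qualitative_risk(2, 4))   # high
```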

The Risk Management Process

The United States National Institute of Standards and Technology (NIST) published Special Publication 800-30, Risk Management Guide for Information Technology Systems (see http://csrc.nist.gov/publications/nistpubs/800-30/sp800-30.pdf). The guide describes a 9-step Risk Analysis process:
1. System Characterization
2. Threat Identification
3. Vulnerability Identification
4. Control Analysis
5. Likelihood Determination
6. Impact Analysis
7. Risk Determination
8. Control Recommendations
9. Results Documentation
We have covered these steps individually; let us end this section by following NIST’s process.
System Characterization describes the scope of the risk management effort and the systems that will be analyzed. The next two steps, Threat Identification and Vulnerability Identification, identify the threats and vulnerabilities required to determine risks using the “Risk = Threat × Vulnerability” formula.
Step 4, Control Analysis, analyzes the security controls (safeguards) that are in place or planned to mitigate risk. Steps 5 and 6, Likelihood Determination and Impact Analysis, are needed to identify the most important risks (especially those with high likelihood and high impact/consequence).
The previous 7 steps are used to determine Control Recommendations, or the risk mitigation strategy. That strategy is documented in the final step, Results Documentation.

Types of Attackers

Controlling access is not just controlling authorized users; it includes preventing unauthorized access. Information systems may be attacked by a variety of attackers, ranging from script kiddies to worms to militarized attacks. Attackers may use a variety of methods to attempt to compromise the confidentiality, integrity, and availability of systems.

Hackers

The term “hacker” is often used in the media to describe a malicious individual who attacks computer systems. The term originally described a non-malicious explorer who used technologies in ways their creators did not intend. The first definition of a hacker from a 1981 version of the Jargon File (see http://www.catb.org/jargon/) is: “HACKER [originally, someone who makes furniture with an axe] n. 1. A person who enjoys exploring the details of programming systems and how to stretch their capabilities, as opposed to most users who prefer to learn only the minimum necessary.”[17] The phrase “how to stretch their capabilities” is key: the original “hackers” were experts at pushing the bounds of technology, and enjoyed doing so.
The eighth definition of hacker from the same version of the Jargon File references malice: “A malicious or inquisitive meddler who tries to discover information by poking around. Hence ‘password hacker’, ‘network hacker’.”[18]
While some now use the term “hacker” simply to describe a malicious computer attacker, better terms include “malicious hacker” or “black hat.” “Cracker” is another commonly used, though sometimes controversial, term for a malicious hacker. It is controversial because “cracker” also refers to cracking software copy protection and cracking password hashes, and is also a derogatory racial term.

Black Hats and White Hats

Black hat attackers are malicious hackers, sometimes called crackers. The “black” derives from villains in fiction: Darth Vader wore all black. Black hats lack ethics, sometimes violate laws, and break into computer systems with malicious intent, and may violate the confidentiality, integrity, or availability of organizations’ systems and data.
White hat hackers are the “good guys,” including professional penetration testers who break into systems with permission and malware researchers who study malicious code to provide better understanding and to ethically disclose vulnerabilities to vendors. White hat hackers are also known as ethical hackers; they follow a code of ethics and obey laws. The name derives from fictional characters who wore white hats, like “Gandalf the White.”
Finally, gray hat hackers (sometimes spelled with the British “grey,” even outside of the UK) fall somewhere between black and white hats. According to searchsecurity.com, “Gray hat describes a cracker (or, if you prefer, hacker) who exploits a security weakness in a computer system or product in order to bring the weakness to the attention of the owners. Unlike a black hat, a gray hat acts without malicious intent. The goal of a gray hat is to improve system and network security. However, by publicizing a vulnerability, the gray hat may give other crackers the opportunity to exploit it. This differs from the white hat who alerts system owners and vendors of a vulnerability without actually exploiting it in public.”[19]

Script Kiddies

Script kiddies attack computer systems with tools they have little or no understanding of. Modern exploitation tools, such as the Metasploit Framework (http://www.metasploit.com/), are of high quality and so easy to use that security novices can successfully compromise some systems.

Note

The fact that script kiddies use tools such as Metasploit is not meant to imply anything negative about the tools. These tools are of high quality, and that quality allows novices to sometimes achieve impressive results. An older Metasploit slogan (“Point. Click. Root.”) illustrates this fact.
In the case of Metasploit, exploiting a system may take as few as four steps. Assume a victim host is a Microsoft Windows XP system that is missing patch MS08-067. Gaining a remote SYSTEM-level shell is as simple as:
1. Choose the exploit (MS08-067)
2. Choose the payload (run a command shell)
3. Choose the remote host (victim IP address)
4. Type “exploit”
If the exploit is successful, the attacker accesses a command shell running with SYSTEM privileges on the victim host. Figure 2.11 shows this process within Metasploit.
image
Figure 2.11 Using Metasploit to Own a System in 4 Steps
While script kiddies are not knowledgeable or experienced, they may still cause significant security issues for poorly protected systems.

Outsiders

Outsiders are attackers with no authorized access to a system or organization; they seek to gain unauthorized access. Outsiders launch the majority of attacks, but most are mitigated by defense-in-depth perimeter controls.

Insiders

An insider attack is launched by an internal user who may be authorized to use the system that is attacked. An insider attack may be intentional or accidental. Insider attackers range from poorly trained administrators who make mistakes, to malicious individuals who intentionally compromise the security of systems. An authorized insider who attacks a system may be in a position to cause significant impact.
NIST Special Publication 800-30 (http://csrc.nist.gov/publications/nistpubs/800-30/sp800-30.pdf) lists the following threat actions caused by insider attackers:
Assault on an employee
Blackmail
Browsing of proprietary information
Computer abuse
Fraud and theft
Information bribery
Input of falsified, corrupted data
Interception
Malicious code (e.g., virus, logic bomb, Trojan horse)
Sale of personal information
System bugs
System intrusion
System sabotage
Unauthorized system access[20]
Insiders cause most high-impact security incidents. This point is sometimes debated, because most attacks are launched by outside attackers. However, defense-in-depth mitigates most outside attacks: Internet-facing firewalls may deny thousands of attacks or more per day. As a result, most successful attacks are launched by insiders.

Hacktivist

A hacktivist is a hacker activist: someone who attacks computer systems for political reasons. “Hacktivism” is hacking activism. There have been many recent cases of hacktivism, including the 2007 DDoS attacks on Estonia's Internet infrastructure in reaction to a plan to move a Soviet-era statue in Tallinn. See http://www.wired.com/politics/security/magazine/15-09/ff_estonia for more information on this attack.
In March of 2010, Google came under attack by Vietnamese hacktivists. The story, reported in The Register (“Google frets over Vietnam hacktivist botnet,” http://www.theregister.co.uk/2010/03/31/vietnam_botnet/) said “Hackers used malware to establish a botnet in Vietnam as part of an apparently politically motivated attack with loose ties to the Operation Aurora attacks that hit Google and many other blue chip firms late last year, according to new research from McAfee and Google.”[21]
Google reported: “The malware infected the computers of potentially tens of thousands of users…These infected machines have been used both to spy on their owners as well as participate in distributed denial of service (DDoS) attacks against blogs containing messages of political dissent. Specifically, these attacks have tried to squelch opposition to bauxite mining efforts in Vietnam, an important and emotionally charged issue in the country.”[22]

Bots and Botnets

A “bot” (short for robot) is a computer system running malware that is controlled via a botnet. A botnet contains a central command and control (C&C) network, managed by humans called bot herders. The term “zombie” is sometimes used to describe a bot.
Many botnets use Internet Relay Chat (IRC) networks to provide command and control; others use HTTP, HTTPS, or proprietary protocols (sometimes obscured or encrypted). Figure 2.12 shows a packet capture of bot IRC command and control traffic, connecting to the “pLagUe” botnet, displayed with the Wireshark network protocol analyzer (see http://www.wireshark.org).
image
Figure 2.12 IRC botnet Command and Control Traffic
The bot in Figure 2.12 (called pLagUe{USA}{LAN}72705, indicating it is in the United States) reports to the C&C network. Other bots report in from Brazil (BRA), Mexico (MEX), and the United States. They report injecting viruses into autorun.inf: they are most likely infecting attached USB drives with viruses.
Systems become bots after becoming compromised via a variety of mechanisms, including server-side attacks, client-side attacks, and running Remote Access Trojans (RATs). As described in Domain 3: Security Engineering, a Trojan horse program performs two functions, one overt (such as playing a game) and one covert (such as joining the system to a botnet).
Once joined to a botnet, the bot may be instructed to steal local information such as credit card numbers or credentials for other systems, including online banks. Bots also send spam, host illicit Web sites including those used by drug-sale spam, and are used in coordinated Distributed Denial of Service (DDoS) attacks.

Phishers and Spear Phishers

A phisher (“fisher” spelled with the hacker “ph” in place of “f”) is a malicious attacker who attempts to trick users into divulging account credentials or PII. Many phishers attempt to steal online banking information, as the phishing attack in Figure 2.13 shows.
image
Figure 2.13 “PNC” Bank Phishing Attempt
This phishing attack triggered a warning from the email system, correctly warning, “This message may not be from whom it claims to be.” The attack is attempting to trick the user into clicking on the “demo” link, which is a malicious link pointing to a domain in Costa Rica (with no connection to PNC Bank); the relevant email plain text is highlighted in Figure 2.14.
image
Figure 2.14 Phishing Email “DEMO” URL
Phishing is a social engineering attack that sometimes includes other attacks, including client-side attacks. Users who click links in phishing emails may be subject to client-side attacks and theft of credentials. Simply visiting a phishing site is dangerous: the client may be automatically compromised.
Phishing attacks tend to be large scale: thousands or many more users may be targeted. The phishers are playing the odds: if they email 100,000 users and 1/10th of 1% of them click, the phisher will have 100 new victims. Spear phishing targets far fewer users: as few as a handful per organization. These targets are high value (often executives), and spear phishing attacks are more targeted, typically referring to the user by their full name, title, and other supporting information. Spear phishers target fewer users, but each potential victim is worth far more. Spear phishing is also called whaling or whale hunting (the executives are high-value “whales”).
Finally, vishing is voice phishing: attacks launched using the phone system. Attackers use automated voice scripts on voice over IP (VoIP) systems to automate calls to thousands of targets. Typical vishing attacks include telling the user that their bank account is locked, and the automated voice system will unlock it after verifying key information, such as account number and PIN.

Summary of Exam Objectives

Information security governance assures that an organization has the correct information structure, leadership, and guidance. Governance helps assure that a company has the proper administrative controls to mitigate risk. Risk Analysis (RA) helps ensure that an organization properly identifies, analyzes, and mitigates risk. Accurately assessing risk, and understanding terms such as Annualized Loss Expectancy, Total Cost of Ownership, and Return on Investment will not only help you in the exam, but also help advance your information security career.
An understanding and appreciation of legal systems, concepts, and terms are required of an information security practitioner working in the information-centric world today. The impact of the ubiquity of information systems on legal systems cannot be overstated. Whether the major legal system is Civil, Common, Religious, or a hybrid, information systems have made a lasting impact on legal systems throughout the world, causing the creation of new laws, reinterpretation of existing laws, and simply a new appreciation for the unique aspects that computers bring to the courts.
Finally, the nature of information security and the inherent sensitivity therein makes ethical frameworks an additional point requiring attention. This chapter presented the IAB’s RFC on Ethics and the Internet, the Computer Ethics Institute’s Ten Commandments of Computer Ethics, and the (ISC)2® Code of Ethics. The CISSP® exam will, no doubt, emphasize the Code of Ethics proffered by (ISC)2, which presents an ordered set of four canons that attend to matters of the public, the individual’s behavior, providing competent service, and the profession as a whole.

Self Test

Note

Please see the Self Test Appendix for explanations of all correct and incorrect answers.
1. Which of the following would be an example of a policy statement?
A. Protect PII by hardening servers
B. Harden Windows 7 by first installing the pre-hardened OS image
C. You may create a strong password by choosing the first letter of each word in a sentence and mixing in numbers and symbols
D. Download the CISecurity Windows benchmark and apply it
2. Which of the following describes the money saved by implementing a security control?
A. Total Cost of Ownership
B. Asset Value
C. Return on Investment
D. Control Savings
3. Which of the following is an example of program policy?
A. Establish the information security program
B. Email Policy
C. Application development policy
D. Server policy
4. Which of the following proves an identity claim?
A. Authentication
B. Authorization
C. Accountability
D. Auditing
5. Which of the following protects against unauthorized changes to data?
A. Confidentiality
B. Integrity
C. Availability
D. Alteration
Use the following scenario to answer questions 6 through 8:
Your company sells Apple iPods online and has suffered many denial-of-service (DoS) attacks. Your company makes an average $20,000 profit per week, and a typical DoS attack lowers sales by 40%. You suffer seven DoS attacks on average per year. A DoS-mitigation service is available for a subscription fee of $10,000/month. You have tested this service, and believe it will mitigate the attacks.
6. What is the Annual Rate of Occurrence in the above scenario?
A. $20,000
B. 40%
C. 7
D. $10,000
7. What is the annualized loss expectancy (ALE) of lost iPod sales due to the DoS attacks?
A. $20,000
B. $8000
C. $84,000
D. $56,000
8. Is the DoS mitigation service a good investment?
A. Yes, it will pay for itself
B. Yes, $10,000 is less than the $56,000 Annualized Loss Expectancy
C. No, the annual Total Cost of Ownership is higher than the Annualized Loss Expectancy
D. No, the annual Total Cost of Ownership is lower than the Annualized Loss Expectancy
9. Which of the following steps would be taken while conducting a Qualitative Risk Analysis?
A. Calculate the Asset Value
B. Calculate the Return on Investment
C. Complete the Risk Analysis Matrix
D. Complete the Annualized Loss Expectancy
10. What is the difference between a standard and a guideline?
A. Standards are compulsory and guidelines are mandatory
B. Standards are recommendations and guidelines are requirements
C. Standards are requirements and guidelines are recommendations
D. Standards are recommendations and guidelines are optional
11. An attacker sees a building is protected by security guards, and attacks a building next door with no guards. What control combination are the security guards?
A. Physical/Compensating
B. Physical/Detective
C. Physical/Deterrent
D. Physical/Preventive
12. Which canon of The (ISC)2® Code of Ethics should be considered the most important?
A. Protect society, the commonwealth, and the infrastructure
B. Advance and protect the profession
C. Act honorably, honestly, justly, responsibly, and legally
D. Provide diligent and competent service to principals
13. Which doctrine would likely allow for duplication of copyrighted material for research purposes without the consent of the copyright holder?
A. First sale
B. Fair use
C. First privilege
D. Free dilution
14. Which type of intellectual property is focused on maintaining brand recognition?
A. Patent
B. Trade Secrets
C. Copyright
D. Trademark
15. Drag and drop: Identify all objects listed below. Drag and drop all objects from left to right.
image
Figure 2.15 Drag and Drop

Self Test Quick Answer Key

1. A
2. C
3. A
4. A
5. B
6. C
7. D
8. C
9. C
10. C
11. C
12. A
13. B
14. D
15.
image
Figure 2.16 Drag and Drop Answer