Chapter 5
IN THIS CHAPTER
Using secure design principles
Understanding security models
Choosing the right controls and countermeasures
Recognizing security capabilities in information systems
Assessing and mitigating vulnerabilities
Decrypting cryptographic concepts and fundamentals
Getting physical with physical security design concepts
Security must be part of the design of information systems, as well as of the facilities that house information systems and workers. These topics are covered in the Security Architecture and Engineering domain, which represents 13 percent of the CISSP certification exam.
It is a natural human tendency to build things without first considering their design or security implications. A network engineer who is building a new network may just start plugging cables into routers and switches without first thinking about the overall design — much less any security considerations. Similarly, a software engineer assigned to write a new program is apt to just begin coding without planning the program’s design.
Looking at the consumer products available in the marketplace, we sometimes see such egregious usability and security flaws that we wonder how the responsible person or organization was ever allowed to participate in the product's design and development.
The engineering processes that require the inclusion of secure design principles include the following:
The application development lifecycle also includes security considerations that are nearly identical to security engineering principles here. Application development is covered in Chapter 10.
Security models help us understand complex security mechanisms in information systems. Security models illustrate concepts that can be used when analyzing an existing system or designing a new one.
In this section, we describe the concepts of confidentiality, integrity, and availability (known together as CIA, or the CIA Triad), and access control models. Learn more about the CIA Triad in Chapter 3.
Confidentiality refers to the concept that information and functions (objects) should be accessed only by authorized subjects. This is usually accomplished by several means, including:
These characteristics work together to ensure that secrets remain secrets.
Integrity refers to the concept that information in a system will arrive or be created correctly and maintain that correctness throughout its lifetime. Systems storing the information will reject attempted changes by unauthorized parties or unauthorized means. The characteristics of data integrity that are ensured by systems are
Some of the measures taken to ensure data integrity are
All of these steps help to ensure that the data in a system has the highest possible quality.
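One widely used technical measure for verifying data integrity is a cryptographic hash: record a digest while the data is known to be good, and any later change to the data produces a different digest. A minimal Python sketch (the data values here are illustrative):

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of the given data."""
    return hashlib.sha256(data).hexdigest()

# Record a digest while the data is known to be correct.
original = b"Employee count: 1500"
recorded_digest = sha256_digest(original)

# Later, recompute the digest and compare to detect unauthorized modification.
received = b"Employee count: 9500"
if sha256_digest(received) != recorded_digest:
    print("integrity check failed: data was altered")
```

Note that a plain hash detects accidental or unauthorized *changes*; detecting deliberate tampering by someone who can also replace the digest requires a keyed hash or digital signature, covered later in this chapter.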
Availability refers to the concept that a system (and the data within it) will be accessible when and where users want to use it. The characteristics of a system that determine its availability include
Models are used to express access control requirements in a theoretical or mathematical framework that precisely describes or quantifies real access control systems. Common access control models include Bell-LaPadula, Access Matrix, Take-Grant, Biba, Clark-Wilson, Information Flow, and Non-interference.
The Bell-LaPadula model was the first formal confidentiality model of a mandatory access control system. (We discuss mandatory and discretionary access controls in Chapter 7.) It was developed for the U.S. Department of Defense (DoD) to formalize the DoD multilevel security policy. As we discuss in Chapter 3, the DoD classifies information based on sensitivity at three basic levels: Confidential, Secret, and Top Secret. In order to access classified information (and systems), an individual must have access (a clearance level equal to or exceeding the classification of the information or system) and need-to-know (legitimately in need of access to perform a required job function). The Bell-LaPadula model implements the access component of this security policy.
Bell-LaPadula is a state machine model that addresses only the confidentiality of information. The basic premise of Bell-LaPadula is that information can’t flow downward. This means that information at a higher level is not permitted to be copied or moved to a lower level. Bell-LaPadula defines the following two properties:
Bell-LaPadula also defines two additional properties that give it the flexibility of a discretionary access control model:
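The model's two mandatory access properties, the simple security property ("no read up") and the *-property ("no write down"), can be sketched in a few lines of Python. The levels below use the DoD classifications mentioned earlier; the numeric ordering is illustrative:

```python
# Clearance/classification levels, ordered low to high.
LEVELS = {"Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(subject_clearance: str, object_classification: str) -> bool:
    """Simple security property: no read up."""
    return LEVELS[subject_clearance] >= LEVELS[object_classification]

def can_write(subject_clearance: str, object_classification: str) -> bool:
    """*-property (star property): no write down."""
    return LEVELS[subject_clearance] <= LEVELS[object_classification]

# A Secret-cleared subject can read Confidential data, but may not write to
# a Confidential object (writing down could leak Secret information).
print(can_read("Secret", "Confidential"))   # True
print(can_write("Secret", "Confidential"))  # False
```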
An Access Matrix model, in general, provides object access rights (read/write/execute, or R/W/X) to subjects in a discretionary access control (DAC) system. An Access Matrix consists of access control lists (columns) and capability lists (rows). See Table 5-1 for an example.
TABLE 5-1 An Access Matrix Example
Subject/Object | Directory: H/R | File: Personnel | Process: LPD
Thomas         | Read           | Read/Write      | Execute
Lisa           | Read           | Read            | Execute
Harold         | None           | None            | None
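The matrix in Table 5-1 maps naturally onto code: each row is a subject's capability list, and reading one object's column across all subjects yields that object's access control list. A minimal Python sketch:

```python
# The access matrix from Table 5-1: each subject (row) maps objects to
# the set of rights the subject holds over that object.
access_matrix = {
    "Thomas": {"Directory: H/R": {"read"},
               "File: Personnel": {"read", "write"},
               "Process: LPD": {"execute"}},
    "Lisa":   {"Directory: H/R": {"read"},
               "File: Personnel": {"read"},
               "Process: LPD": {"execute"}},
    "Harold": {"Directory: H/R": set(),
               "File: Personnel": set(),
               "Process: LPD": set()},
}

def is_allowed(subject: str, obj: str, right: str) -> bool:
    """Check one cell of the access matrix."""
    return right in access_matrix.get(subject, {}).get(obj, set())

print(is_allowed("Thomas", "File: Personnel", "write"))  # True
print(is_allowed("Lisa", "File: Personnel", "write"))    # False
```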
Take-Grant systems specify the rights that a subject can transfer to or from another subject or object. These rights are defined through four basic operations: create, revoke, take, and grant.
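A take-grant system can be modeled as a directed graph of rights. This hypothetical sketch implements just the take and grant operations over such a graph; the subjects, objects, and right names are invented for illustration:

```python
# Rights graph: rights[(subject, target)] is the set of rights the subject
# holds over the target. "t" = take right, "g" = grant right.
rights = {
    ("alice", "bob"): {"t"},      # alice may take rights that bob holds
    ("bob", "file1"): {"read"},
}

def take(taker, source, right, target):
    """take: the taker acquires a right the source holds over target,
    provided the taker holds 't' over the source."""
    if "t" in rights.get((taker, source), set()) and \
       right in rights.get((source, target), set()):
        rights.setdefault((taker, target), set()).add(right)

def grant(granter, receiver, right, target):
    """grant: the granter passes one of its rights over target to the
    receiver, provided the granter holds 'g' over the receiver."""
    if "g" in rights.get((granter, receiver), set()) and \
       right in rights.get((granter, target), set()):
        rights.setdefault((receiver, target), set()).add(right)

take("alice", "bob", "read", "file1")
print(rights[("alice", "file1")])  # {'read'}
```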
The Biba integrity model (sometimes referred to as Bell-LaPadula upside down) was the first formal integrity model. Biba is a lattice-based model that addresses the first goal of integrity: ensuring that modifications to data aren’t made by unauthorized users or processes. (See Chapter 3 for a complete discussion of the three goals of integrity.) Biba defines the following two properties:
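Biba's two properties are the Bell-LaPadula rules inverted: the simple integrity property forbids reading down (so a subject isn't contaminated by lower-integrity data), and the *-integrity property forbids writing up (so a subject can't corrupt higher-integrity data). A minimal sketch, with illustrative integrity-level names:

```python
# Integrity levels, ordered low to high.
LEVELS = {"Low": 1, "Medium": 2, "High": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    """Simple integrity property: no read down."""
    return LEVELS[subject_level] <= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    """*-integrity property: no write up."""
    return LEVELS[subject_level] >= LEVELS[object_level]

# A Medium-integrity subject may read High-integrity data,
# but may not write to it.
print(can_read("Medium", "High"))   # True
print(can_write("Medium", "High"))  # False
```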
The Clark-Wilson integrity model establishes a security framework for use in commercial activities, such as the banking industry. Clark-Wilson addresses all three goals of integrity and identifies special requirements for inputting data based on the following items and procedures:
The Clark-Wilson integrity model is based on the concept of a well-formed transaction, in which a transaction is sufficiently ordered and controlled so that it maintains internal and external consistency.
An Information Flow model is a type of access control model based on the flow of information, rather than on imposing access controls. Objects are assigned a security class and value, and their direction of flow — from one application to another or from one system to another — is controlled by a security policy. This model type is useful for analyzing covert channels, through detailed analysis of the flow of information in a system, including the sources of information and the paths of flow.
A non-interference model ensures that the actions of different objects and subjects aren’t seen by (and don’t interfere with) other objects and subjects on the same system.
Designing and building secure software is critical to information security, but the systems that software runs on must themselves be securely designed and built. Selecting appropriate controls is essential to designing a secure computing architecture. Numerous systems security evaluation models exist to help you select the right controls and countermeasures for your environment.
Evaluation criteria provide a standard for quantifying the security of a computer system or network. These criteria include the Trusted Computer System Evaluation Criteria (TCSEC), Trusted Network Interpretation (TNI), European Information Technology Security Evaluation Criteria (ITSEC), and the Common Criteria.
The Trusted Computer System Evaluation Criteria (TCSEC), commonly known as the Orange Book, is part of the Rainbow Series developed for the U.S. DoD by the National Computer Security Center (NCSC). It’s the formal implementation of the Bell-LaPadula model. The evaluation criteria were developed to achieve the following objectives:
The four basic control requirements identified in the Orange Book are
Covert channel analysis: TCSEC requires covert channel analysis to detect unintended communication paths that aren't protected by a system's normal security mechanisms. A covert storage channel conveys information by altering stored system data. A covert timing channel conveys information by altering a system resource's performance or timing.
A systems or security architect must understand covert channels and how they work in order to prevent the use of covert channels in the system environment.
These classes are further defined in Table 5-2.
TABLE 5-2 TCSEC Classes
Class     | Name                               | Sample Requirements
D         | Minimal protection                 | Reserved for systems that fail evaluation.
C1        | Discretionary protection (DAC)     | System doesn't need to distinguish between individual users and types of access.
C2        | Controlled access protection (DAC) | System must distinguish between individual users and types of access; object reuse security features required.
B1        | Labeled security protection (MAC)  | Sensitivity labels required for all subjects and storage objects.
B2        | Structured protection (MAC)        | Sensitivity labels required for all subjects and objects; trusted path requirements.
B3        | Security domains (MAC)             | Access control lists (ACLs) are specifically required; system must protect against covert channels.
A1        | Verified design (MAC)              | Formal Top-Level Specification (FTLS) required; configuration management procedures must be enforced throughout entire system lifecycle.
Beyond A1 |                                    | Self-protection and reference monitors are implemented in the Trusted Computing Base (TCB). TCB verified to source-code level.
Major limitations of the Orange Book include that
Part of the Rainbow Series, like TCSEC (discussed in the preceding section), Trusted Network Interpretation (TNI) addresses confidentiality and integrity in trusted computer/communications network systems. Within the Rainbow Series, it’s known as the Red Book.
Part I of the TNI is a guideline for extending the system protection standards defined in the TCSEC (the Orange Book) to networks. Part II of the TNI describes additional security features such as communications integrity, protection from denial of service, and transmission security.
Unlike TCSEC, the European Information Technology Security Evaluation Criteria (ITSEC) addresses confidentiality, integrity, and availability, as well as evaluating an entire system, defined as a Target of Evaluation (TOE), rather than a single computing platform.
ITSEC evaluates functionality (security objectives, or why; security-enforcing functions, or what; and security mechanisms, or how) and assurance (effectiveness and correctness) separately. The ten functionality (F) classes and seven evaluation (E) (assurance) levels are listed in Table 5-3.
TABLE 5-3 ITSEC Functionality (F) Classes and Evaluation (E) Levels mapped to TCSEC levels
(F) Class | (E) Level | Description
NA        | E0        | Equivalent to TCSEC level D
F-C1      | E1        | Equivalent to TCSEC level C1
F-C2      | E2        | Equivalent to TCSEC level C2
F-B1      | E3        | Equivalent to TCSEC level B1
F-B2      | E4        | Equivalent to TCSEC level B2
F-B3      | E5        | Equivalent to TCSEC level B3
F-B3      | E6        | Equivalent to TCSEC level A1
F-IN      | NA        | TOEs with high integrity requirements
F-AV      | NA        | TOEs with high availability requirements
F-DI      | NA        | TOEs with high integrity requirements during data communication
F-DC      | NA        | TOEs with high confidentiality requirements during data communication
F-DX      | NA        | Networks with high confidentiality and integrity requirements
The Common Criteria for Information Technology Security Evaluation (usually just called the Common Criteria) is an international effort to standardize and improve existing European and North American evaluation criteria. The Common Criteria has been adopted as an international standard in ISO/IEC 15408. The Common Criteria defines seven evaluation assurance levels (EALs), EAL1 through EAL7; these levels, along with the EAL0 designation for inadequate assurance, are listed in Table 5-4.
TABLE 5-4 The Common Criteria
Level | TCSEC Equivalent | ITSEC Equivalent | Description
EAL0  | N/A              | N/A              | Inadequate assurance
EAL1  | N/A              | N/A              | Functionally tested
EAL2  | C1               | E1               | Structurally tested
EAL3  | C2               | E2               | Methodically tested and checked
EAL4  | B1               | E3               | Methodically designed, tested, and reviewed
EAL5  | B2               | E4               | Semi-formally designed and tested
EAL6  | B3               | E5               | Semi-formally verified design and tested
EAL7  | A1               | E6               | Formally verified design and tested
System certification is a formal methodology for comprehensive testing and documentation of information system security safeguards, both technical and nontechnical, in a given environment by using established evaluation criteria (for example, the TCSEC).
Accreditation is an official, written approval for the operation of a specific system in a specific environment, as documented in the certification report. Accreditation is normally granted by a senior executive or Designated Approving Authority (DAA). The term DAA is used in the U.S. military and government. A DAA is normally a senior official, such as a commanding officer.
System certification and accreditation must be updated when any changes are made to the system or environment, and they must also be periodically re-validated, which typically happens every three years.
The certification and accreditation process has been formally implemented in U.S. military and government organizations as the Defense Information Technology Security Certification and Accreditation Process (DITSCAP) and National Information Assurance Certification and Accreditation Process (NIACAP), respectively. U.S. government agencies utilizing cloud-based systems and services are required to undergo FedRAMP certification and accreditation processes (described in this chapter). These important processes are used to make sure that a new (or changed) system has the proper design and operational characteristics, and that it’s suitable for a specific task.
The Defense Information Technology Security Certification and Accreditation Process (DITSCAP) formalizes the certification and accreditation process for U.S. DoD information systems through four distinct phases:
The National Information Assurance Certification and Accreditation Process (NIACAP) formalizes the certification and accreditation process for U.S. government national security information systems. NIACAP consists of four phases (Definition, Verification, Validation, and Post-Accreditation) that generally correspond to the DITSCAP phases. Additionally, NIACAP defines three types of accreditation:
The Federal Risk and Authorization Management Program (FedRAMP) is a standardized approach to assessments, authorization, and continuous monitoring of cloud-based service providers. This represents a change from controls-based security to risk-based security.
The Director of Central Intelligence Directive 6/3 is the process used to protect sensitive information that’s stored on computers used by the U.S. Central Intelligence Agency (CIA).
Various security controls and countermeasures that should be applied to security architecture, as appropriate, include defense in depth, system hardening, implementation of heterogeneous environments, and designing system resilience.
Defense in depth is a strategy for resisting attacks. A system that employs defense in depth will have two or more layers of protective controls that are designed to protect the system or data stored there.
An example defense-in-depth architecture would consist of a database protected by several components, such as:
All the layers listed here help to protect the database. In fact, each one of them by itself offers nearly complete protection. But when considered together, all these controls offer a varied (in effect, deeper) defense, hence the term defense in depth.
Most types of information systems, including computer operating systems, have several general-purpose features that make it easy to set up the systems. But systems that are exposed to the Internet should be “hardened,” or configured according to the following concepts:
System hardening guides can be obtained from a number of sources, such as:
The Center for Internet Security (www.cisecurity.org).

The Defense Information Systems Agency (DISA) Security Technical Implementation Guides (https://iase.disa.mil/stigs).

Rather than containing systems or components of a single type, a heterogeneous environment contains a variety of different types of systems. Contrast an environment that consists only of Windows Server 2016 systems running the latest versions of SQL Server and IIS with a more complex environment that contains Windows, Linux, and Solaris servers with Microsoft SQL Server, MySQL, and Oracle databases.
The advantage of a heterogeneous environment is its variety of systems; for one thing, the various types of systems probably won’t possess common vulnerabilities, which makes them harder to attack. However, the complexity of a heterogeneous environment also negatively impacts security, as there are more components that potentially can fail or be compromised.
The weakness of a homogeneous environment (one where all of the systems are the same) is its uniformity. If a weakness in one of the systems is discovered, all systems may have the weakness. If one of the systems is attacked and compromised, all may be attacked and compromised.
You can liken homogeneity to a herd of animals; if they are genetically identical, then they may all be susceptible to a disease that could wipe out the entire herd. If they are genetically diverse, then perhaps some will be able to survive the disease.
The resilience of a system is a measure of its ability to keep running, even under less-than-ideal conditions. Resilience is important at all levels, including network, operating system, subsystem (such as database management system or web server), and application.
Resilience can mean a lot of different things. Here are some examples:
Basic concepts related to security architecture include the Trusted Computing Base (TCB), Trusted Platform Module (TPM), secure modes of operation, open and closed systems, protection rings, security modes, and recovery procedures.
Basic computer (system) architecture refers to the structure of a computer system and comprises its hardware, firmware, and software.
Hardware consists of the physical components in computer architecture. The main components of the computer architecture include the CPU, memory, and bus.
The CPU (Central Processing Unit) or microprocessor is the electronic circuitry that performs a computer’s arithmetic, logic, and computing functions. As shown in Figure 5-1, the main components of a CPU include
Arithmetic Logic Unit (ALU): Performs numerical calculations and comparative logic functions, such as ADD, SUBTRACT, DIVIDE, and MULTIPLY.

The basic operation of a microprocessor consists of two distinct phases: fetch and execute. (It's not too different from what your dog does: You throw the stick, and he fetches the stick.) During the fetch phase, the CPU locates and retrieves a required instruction from memory. During the execute phase, the CPU decodes and executes the instruction. These two phases make up a basic machine cycle that's controlled by the CPU clock signals. Many complex instructions require more than a single machine cycle to execute.
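The fetch and execute phases can be simulated with a toy machine; the three-instruction program and accumulator design below are purely illustrative:

```python
# A toy machine whose loop mirrors the two phases described above:
# fetch (locate and retrieve the instruction) and execute (decode and run it).
memory = [("LOAD", 5), ("ADD", 3), ("HALT", None)]  # a tiny program

accumulator = 0
program_counter = 0
running = True

while running:
    # Fetch phase: retrieve the instruction the program counter points to.
    opcode, operand = memory[program_counter]
    program_counter += 1

    # Execute phase: decode the opcode and carry out the instruction.
    if opcode == "LOAD":
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "HALT":
        running = False

print(accumulator)  # 8
```

Each pass through the loop is one machine cycle; a real CPU paces these cycles with its clock signal.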
The four operating states for a computer (CPU) are
The two basic types of CPU designs used in modern computer systems are
Microprocessors are also often described as scalar or superscalar. A scalar processor executes a single instruction at a time. A superscalar processor can execute multiple instructions concurrently.
Finally, many systems (microprocessors) are classified according to additional functionality (which must be supported by the installed operating system):
Two related concepts are multistate and multiuser systems that, more correctly, refer to operating system capabilities:
An important security issue in multiuser systems involves privileged accounts, and programs or processes that run in a privileged state. Programs such as su (UNIX/Linux) and RunAs (Windows) allow a user to switch to a different account, such as root or administrator, and execute privileged commands in this context. Many programs rely on privileged service accounts to function properly. Utilities such as IBM’s Superzap, for example, are used to install fixes to the operating system or other applications.
The bus is a group of electronic conductors that interconnect the various components of the computer, transmitting signals, addresses, and data between these components. Bus structures are organized as follows:
Main memory (also known as main storage) is the part of the computer that stores programs, instructions, and data. The two basic types of physical (or real — as opposed to virtual — more on that later) memory are
Secondary memory (also known as secondary storage) is a variation of these two basic types of physical memory. It provides dynamic storage on nonvolatile magnetic media such as hard drives, solid-state drives, or tape drives (which are considered sequential memory because data can’t be directly accessed — instead, you must search from the beginning of the tape). Virtual memory (such as a paging file, swap space, or swap partition) is a type of secondary memory that uses both installed physical memory and available hard-drive space to present a larger apparent memory space to the CPU than actually exists in main storage.
Two important security concepts associated with memory are the protection domain (also called protected memory) and memory addressing.
A protection domain prevents other programs or processes from accessing and modifying the contents of address space that’s already been assigned to another active program or process. This protection can be performed by the operating system or implemented in hardware. The purpose of a protection domain is to protect the memory space assigned to a process so that no other process can read from the space or alter it. The memory space occupied by each process can be considered private.
Memory space describes the amount of physical memory available in a computer system (for example, 2 GB), whereas address space specifies where memory is located in a computer system (a memory address). Memory addressing describes the method used by the CPU to access the contents of memory. A physical memory address is a hard-coded address assigned to physically installed memory. It can only be accessed by the operating system that maps physical addresses to virtual addresses. A virtual (or symbolic) memory address is the address used by applications (and programmers) to specify a desired location in memory. Common virtual memory addressing modes include
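Two of the classic addressing modes, direct and indirect, can be illustrated with a flat array standing in for memory (the addresses and values below are illustrative):

```python
# Model memory as a flat array of cells.
memory = [0] * 8
memory[3] = 42   # a data value
memory[5] = 3    # cell 5 holds the *address* of the data, not the data itself

def direct_read(addr: int) -> int:
    """Direct addressing: the instruction's address field is the
    location of the data."""
    return memory[addr]

def indirect_read(addr: int) -> int:
    """Indirect addressing: the addressed cell contains a pointer,
    which in turn locates the data."""
    return memory[memory[addr]]

print(direct_read(3))    # 42
print(indirect_read(5))  # 42
```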
Firmware is a program or set of computer instructions stored in the physical circuitry of ROM memory. These types of programs are typically changed infrequently or not at all. In servers and user workstations, firmware usually stores the initial computer instructions that are executed when the server or workstation is powered on; the firmware starts the CPU and other onboard chips, and establishes communications by using the keyboard, monitor, network adaptor, and hard drive. The firmware retrieves blocks of data from the hard drive that are then used to load and start the operating system.
A computer’s BIOS is a common example of firmware. BIOS, or Basic Input-Output System, contains instructions needed to start a computer when it’s first powered on, initialize devices, and load the operating system from secondary storage (such as a hard drive).
Firmware is also found in devices such as smartphones, tablets, DSL/cable modems, and practically every other type of Internet-connected device, such as automobiles, thermostats, and even your refrigerator.
Firmware is typically stored on one or more ROM chips on a computer’s motherboard (the main circuit board containing the CPU(s), memory, and other circuitry).
Software includes the operating system and programs or applications that are installed on a computer system. We cover software security in Chapter 10.
A computer operating system (OS) is the software that controls the workings of a computer, enabling the computer to be used. The operating system can be thought of as a logical platform, through which other programs can be run to perform work.
The main components of an operating system are
The operating system controls a computer’s resources. The main functions of the operating system are
A virtual machine is a software implementation of a computer, enabling many running copies of an operating system to execute on a single running computer without interfering with each other. Virtual machines are typically controlled by a hypervisor, a software program that allocates resources for each resident operating system (called a guest).
A hypervisor serves as an operating system for multiple operating systems. One of the strengths of virtualization is that the resident operating system has little or no awareness of the fact that it’s running as a guest — instead, it may believe that it has direct control of the computer’s hardware. Only your system administrator knows for sure.
A container is a lightweight, standalone executable package of a piece of software that includes everything it needs to run. A container is essentially a bare-bones virtual machine that has only the minimum software necessary to deploy a given application. Docker is a popular container platform, and Kubernetes is a popular platform for orchestrating containers at scale.
A Trusted Computing Base (TCB) is the entire complement of protection mechanisms within a computer system (including hardware, firmware, and software) that’s responsible for enforcing a security policy. A security perimeter is the boundary that separates the TCB from the rest of the system.
Access control is the ability to permit or deny the use of an object (a passive entity, such as a system or file) by a subject (an active entity, such as an individual or a process).
A reference monitor is a system component that enforces access controls on an object. Stated another way, a reference monitor is an abstract machine that mediates all access to an object by a subject.
A security kernel is the combination of hardware, firmware, and software elements in a Trusted Computing Base that implements the reference monitor concept. Three requirements of a security kernel are that it must:
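The reference monitor concept behind the security kernel can be sketched as a single choke point through which every subject-to-object access must pass, with a rule base that can't be modified once loaded. The subjects, objects, and rights below are invented for illustration:

```python
class ReferenceMonitor:
    """Toy reference monitor: every access to an object by a subject is
    mediated by a single check against an immutable rule base."""

    def __init__(self, rules):
        # rules: set of (subject, object, right) tuples that are permitted.
        # frozenset makes the rule base unmodifiable after loading.
        self._rules = frozenset(rules)

    def access(self, subject, obj, right):
        """Mediate one access attempt; deny anything not explicitly allowed."""
        if (subject, obj, right) in self._rules:
            return f"{subject} {right}s {obj}"
        raise PermissionError(f"{subject} may not {right} {obj}")

rm = ReferenceMonitor({("alice", "payroll.db", "read")})
print(rm.access("alice", "payroll.db", "read"))  # alice reads payroll.db
# rm.access("bob", "payroll.db", "read") raises PermissionError
```

The design choice worth noting is default deny: the monitor permits only what the rule base explicitly allows, which is exactly the complete-mediation behavior a real security kernel must enforce in hardware, firmware, and software.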
A Trusted Platform Module (TPM) performs sensitive cryptographic functions on a physically separate, dedicated microprocessor. The TPM specification was written by the Trusted Computing Group (TCG) and is an international standard (ISO/IEC 11889 Series).
A TPM generates and stores cryptographic keys, and performs the following functions:
Common TPM uses include ensuring platform integrity, full disk encryption, password and cryptographic key protection, and digital rights management.
Security modes are used in mandatory access control (MAC) systems to enforce different levels of security. Techniques and concepts related to secure modes of operation include:
An open system is a vendor-independent system that complies with a published and accepted standard. This compliance with open standards promotes interoperability between systems and components made by different vendors. Additionally, open systems can be independently reviewed and evaluated, which facilitates identification of bugs and vulnerabilities and the rapid development of solutions and updates. Examples of open systems include the Linux operating system, the Open Office desktop productivity system, and the Apache web server.
A closed system uses proprietary hardware and/or software that may not be compatible with other systems or components. Source code for software in a closed system isn’t normally available to customers or researchers. Examples of closed systems include the Microsoft Windows operating system, Oracle database management system, and Apple iTunes.
The concept of protection rings implements multiple concentric domains with increasing levels of trust near the center. The most privileged ring is identified as Ring 0 and normally includes the operating system’s security kernel. Additional system components are placed in the appropriate concentric ring according to the principle of least privilege and to provide isolation, so that a breach of a component in one protection ring does not automatically provide access to components in more privileged rings. The MIT MULTICS operating system implements the concept of protection rings in its architecture, as did Novell Netware.
A system’s security mode of operation describes how a system handles stored information at various classification levels. Several security modes of operation, based on the classification level of information being processed on a system and the clearance level of authorized users, have been defined. These designations are typically used for U.S. military and government systems, and include
Security modes of operation generally come into play in environments that contain highly sensitive information, such as government and military environments. Most private and education systems run in multilevel mode, meaning they contain information at all sensitivity levels. See Chapter 3 for more on security clearance levels.
A hardware or software failure can potentially compromise a system’s security mechanisms. Security designs that protect a system during a hardware or software failure include
Unless detected (and corrected) by an experienced security analyst, many weaknesses may be present in a system and permit exploitation, attack, or malfunction. We discuss the most important problems in the following list:
Race conditions: Software code in multiprocessing and multiuser systems, unless very carefully designed and tested, can result in critical errors that are difficult to find. A race condition is a flaw in a system where the output or result of an activity in the system is unexpectedly tied to the timing of other events. The term race condition comes from the idea of two events or signals that are racing to influence an activity.
The most common race condition is the time-of-check-to-time-of-use bug caused by changes in a system between the checking of a condition and the use of the results of that check. For example, two programs that both try to open a file for exclusive use may both open the file, even though only one should be able to.
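The check-to-use window is easy to see in file-handling code. This Python sketch shows the vulnerable check-then-open pattern and the common mitigation of handling failure at the time of use, which removes the window entirely (the file path and contents are illustrative):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "report.txt")
with open(path, "w") as f:
    f.write("quarterly numbers")

# Vulnerable pattern: time of check...
if os.access(path, os.R_OK):
    # ...a window exists here in which another process could replace the
    # file (for example, with a symbolic link to a sensitive file)...
    with open(path) as f:              # ...time of use
        data = f.read()

# Safer pattern: skip the separate check and simply handle failure at the
# time of use, so there is no check-to-use window to race against.
try:
    with open(path) as f:
        data = f.read()
except OSError:
    data = None

print(data)  # quarterly numbers
```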
In this section, we discuss the techniques used to identify and fix vulnerabilities in systems. We will lightly discuss techniques for security assessments and testing, which is fully explored in Chapter 8.
The types of design vulnerabilities often found on endpoints involve defects in client-side code that is present in browsers and applications. The defects most often found include these:
Other weaknesses may be present in client systems. For a more complete understanding of application weaknesses, consult www.owasp.org.
Identifying weaknesses like the preceding examples will require one or more of the following techniques:
Design vulnerabilities found on servers fall into the following categories:
These defects are similar to those described in the preceding section on client-based vulnerabilities. This is because the terms client and server are a matter of perspective: in both cases, software is running on a system.
Database management systems are nearly as complex as the operating systems on which they reside. Vulnerabilities in database management systems include these:
Database security defects can be identified through manual examination or automated tools. Mitigation may be as easy as changing access permissions or as complex as redesigning the database schema and related application software programs.
Large-scale parallel data systems are systems with large numbers of processors. The processors may either reside in one physical location or be geographically distributed. Vulnerabilities in these systems include
Security defects in parallel systems can be identified through manual examination and mitigated through either configuration changes or system design changes.
Distributed systems are simply systems with components scattered throughout physical and logical space. Oftentimes, these components are owned and/or managed by different groups or organizations, sometimes in different countries. Some components may be privately used while others represent services available to the public (for example, Google Maps). Vulnerabilities in distributed systems include these:
Lack of centralized security and control. A distributed system that is controlled by more than one organization often lacks overall oversight for security management and security operations.
This is especially true of peer-to-peer systems that are often run by end users on lightly managed or unmanaged endpoints.
All of these weaknesses can also be present in simpler environments. These weaknesses and other defects can be detected through either the use of security scanning tools or manual techniques, and corrective actions taken to mitigate those defects.
Cryptographic systems are especially apt to contain vulnerabilities, for the simple reason that people focus on the cryptographic algorithm but fail to implement it properly. Like any powerful tool, if the operator doesn’t know how to use it, it can be useless at best and dangerous at worst.
The ways in which a cryptographic system may be vulnerable include these:
These and other vulnerabilities in cryptographic systems can be detected and mitigated through peer reviews of cryptosystems, assessments by qualified external parties, and the application of corrective actions to fix defects.
Industrial control systems (ICS) represent a wide variety of means for monitoring and controlling machinery of various kinds, including power generation, distribution, and consumption; natural gas and petroleum pipelines; municipal water, irrigation, and waste systems; traffic signals; manufacturing; and package distribution.
Weaknesses in industrial control systems include the following:
These vulnerabilities can be mitigated through a systematic process of establishing good controls, testing control effectiveness, and applying corrective action when controls are found to be ineffective.
The U.S. National Institute of Standards and Technology (NIST) defines three cloud computing service models as follows:
NIST further defines four cloud computing deployment models as follows:
Major public cloud service providers such as Amazon Web Services, Microsoft Azure, Google Cloud Platform, and Oracle Cloud Platform provide customers not only with virtually unlimited compute and storage at scale, but also a depth and breadth of security capabilities that often exceeds the capabilities of the customers themselves. However, this does not mean that cloud-based systems are inherently secure. The shared responsibility model is used by public cloud service providers to clearly define which aspects of security the provider is responsible for, and which aspects the customer is responsible for. SaaS models place the most responsibility on the cloud service provider, typically including securing the following:
However, the customer is always ultimately responsible for the security and privacy of its data. Additionally, identity and access management (IAM) is typically the customer’s responsibility.
In a PaaS model, the customer is typically responsible for the security of its applications and data, as well as IAM, among others.
In an IaaS model, the customer is typically responsible for the security of its applications and data, runtime and middleware, and operating systems. The cloud service provider is typically responsible for the security of networking and the data center (although cloud service providers generally do not provide firewalls). Virtualization, server, and storage security may be managed by either the cloud service provider or customer.
The security of Internet of Things (IoT) devices and systems is a rapidly evolving area of information security. IoT sensors and devices collect large amounts of both potentially sensitive data and seemingly innocuous data. However, because under certain circumstances practically any data that is collected can be used for nefarious purposes, security must be a critical design consideration for IoT devices and systems. This includes not only securing the data stored on the systems, but also how the data is collected, transmitted, processed, and used. There are many networking and communications protocols commonly used in IoT devices, including the following:
The security of these various protocols and their implementations must also be carefully considered in the design of secure IoT devices and systems.
Web-based systems contain many components, including application code, database management systems, operating systems, middleware, and the web server software itself. These components may, individually and collectively, have security design or implementation defects. Some of the defects present include these:
Failure to block cross-site request forgery attacks. Web sites that fail to employ proper session and session context management can be vulnerable to attacks in which users are tricked into sending commands to web sites that may cause them harm.
The example we like to use is where an attacker tricks a user into clicking a link that actually takes the user to a URL like this: http://bank.com/transfer?tohackeraccount:amount=99999.99.
These vulnerabilities can be mitigated in three main ways:
Mobile systems include the operating systems and applications on smartphones, tablets, phablets, smart watches, and wearables. The most popular operating system platforms for mobile systems are Apple iOS, Android, and Windows 10.
The vulnerabilities that are found on mobile systems include
In a managed corporate environment, the use of a mobile device management (MDM) system can mitigate many or all of these risks. For individual users, mitigation depends on each user doing the right thing and using strong security settings.
Embedded devices encompass the wide variety of systems and devices that are Internet connected. Mainly, we’re talking about devices that are not human connected in the computing sense. Examples of such devices include
These devices often run embedded systems, which are specialized operating systems designed to run on devices lacking computer-like human interaction through a keyboard or display. They still have an operating system that is very similar to that found on endpoints like laptops and mobile devices.
Some of the design defects in this class of device include
Because the majority of these devices cannot be altered, mitigation of these defects typically involves isolation of these devices on separate, heavily guarded networks that have tools in place to detect and block attacks.
Cryptography (from the Greek kryptos, meaning hidden, and graphia, meaning writing) is the science of encrypting and decrypting communications to make them unintelligible for all but the intended recipient.
Cryptography can be used to achieve several goals of information security, including confidentiality, integrity, and authentication.
Cryptography today has evolved into a complex science (some say an art) presenting many great promises and challenges in the field of information security. The basics of cryptography include various terms and concepts, the individual components of the cryptosystem, and the classes and types of ciphers.
The cryptographic lifecycle is the sequence of events that occurs throughout the use of cryptographic controls in a system. These steps include
These steps are not altogether different from the selection, implementation, examination, and correction of any other type of security control in a network and computing environment. Like virtually any other component in a network and computing environment, components in a cryptosystem must be periodically examined to ensure that they are still effective and being operated properly.
A plaintext message is a message in its original readable format or a ciphertext message that has been properly decrypted (unscrambled) to produce the original readable plaintext message.
A ciphertext message is a plaintext message that has been transformed (encrypted) into a scrambled message that’s unintelligible. This term doesn’t apply to messages from your boss that may also happen to be unintelligible!
Encryption (or enciphering) is the process of converting plaintext communications into ciphertext. Decryption (or deciphering) reverses that process, converting ciphertext into plaintext. (See Figure 5-2.)
Traffic on a network can be encrypted by using either end-to-end or link encryption.
With end-to-end encryption, packets are encrypted once at the original encryption source and then decrypted only at the final decryption destination. The advantages of end-to-end encryption are its speed and overall security. However, in order for the packets to be properly routed, only the data is encrypted, not the routing information.
Link encryption requires that each node (for example, a router) has separate key pairs for its upstream and downstream neighbors. Packets are decrypted and then re-encrypted at every node along the network path.
The following example, as shown in Figure 5-3, illustrates link encryption:
The advantage of using link encryption is that the entire packet (including routing information) is encrypted. However, link encryption has the following two disadvantages:
A cryptosystem is the hardware or software implementation that transforms plaintext into ciphertext (encrypting it) and back into plaintext (decrypting it).
An effective cryptosystem must have the following properties:
The encryption and decryption process is efficient for all possible keys within the cryptosystem’s keyspace.
A keyspace is the range of all possible values for a key in a cryptosystem.
Cryptosystems are typically composed of two basic elements:
Key clustering (or simply clustering) occurs when identical ciphertext messages are generated from a plaintext message by using the same encryption algorithm but different encryption keys. Key clustering indicates a weakness in a cryptographic algorithm because it statistically reduces the number of key combinations that must be attempted in a brute force attack.
Ciphers are cryptographic transformations. The two main classes of ciphers used in symmetric key algorithms are block and stream (see the section “Not Quite the Metric System: Symmetric and Asymmetric Key Systems,” later in this chapter), which describe how the ciphers operate on input data.
Block ciphers operate on a single fixed block (typically 128 bits) of plaintext to produce the corresponding ciphertext. Using a given key in a block cipher, the same plaintext block always produces the same ciphertext block. Advantages of block ciphers compared with stream ciphers are
Block ciphers are typically implemented in software. Examples of block ciphers include AES, DES, Blowfish, Twofish, and RC5.
Stream ciphers operate in real time on a continuous stream of data, typically bit by bit. Stream ciphers generally work faster than block ciphers and require less code to implement. However, the keys in a stream cipher are generally used only once (see the sidebar “A disposable cipher: The one-time pad”) and then discarded. Key management becomes a serious problem. Using a stream cipher, the same plaintext bit or byte will produce a different ciphertext bit or byte every time it is encrypted. Stream ciphers are typically implemented in hardware.
Examples of stream ciphers include Salsa20 and RC4.
The two basic types of ciphers are substitution and transposition. Both are involved in the process of transforming plaintext into ciphertext.
Substitution ciphers replace bits, characters, or character blocks in plaintext with alternate bits, characters, or character blocks to produce ciphertext. A classic example of a substitution cipher is one that Julius Caesar used: He substituted letters of the message with other letters from the same alphabet. (Read more about this in the sidebar “Tales from the crypt-o: A brief history of cryptography,” earlier in this chapter.) In a simple substitution cipher using the standard English alphabet, a cryptovariable (key) is added modulo 26 to the plaintext message. In modulo 26 addition, the remainder is the final result for any sum equal to or greater than 26. For example, a basic substitution cipher in which the word BOY is encrypted by adding three characters using modulo 26 math produces the following result:
 B   O   Y   PLAINTEXT
 2  15  25   NUMERIC VALUE
+3  +3  +3   SUBSTITUTION VALUE
 5  18   2   MODULO 26 RESULT
 E   R   B   CIPHERTEXT
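The arithmetic in this table can be sketched in a few lines of Python. This is a toy illustration (the function name is ours); it uses the common A = 0 numbering, which produces the same ciphertext as the A = 1 numbering shown above:

```python
def caesar_encrypt(plaintext: str, shift: int = 3) -> str:
    # add the key (shift) to each letter's numeric value, modulo 26
    return "".join(
        chr((ord(c) - ord("A") + shift) % 26 + ord("A"))
        for c in plaintext if c.isalpha()
    )

print(caesar_encrypt("BOY"))  # ERB
```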
A substitution cipher may be either monoalphabetic or polyalphabetic:
A more modern example of a substitution cipher is the S-boxes (Substitution boxes) employed in the Data Encryption Standard (DES) algorithm. The S-boxes in DES produce a nonlinear substitution (6 bits in, 4 bits out), which improves the strength of the encryption by hiding any statistical relationship between the plaintext and ciphertext characters. Note: Do not attempt to sing this to the tune of “Shave and a Haircut.”
Transposition ciphers rearrange bits, characters, or character blocks in plaintext to produce ciphertext. In a simple columnar transposition cipher, a message might be read horizontally but written vertically to produce the ciphertext as in the following example:
THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG
written in 9 columns as
THEQUICKB
ROWNFOXJU
MPSOVERTH
ELAZYDOG
then transposed (encrypted) vertically as
TRMEHOPLEWSAQNOZUFVYIOEDCXROKJTGBUH
The original letters of the plaintext message are the same; only the order has been changed to achieve encryption.
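The columnar transposition above can be sketched in Python. This is a toy illustration (the function name and approach are ours, not a standard library routine):

```python
def columnar_transpose(plaintext: str, columns: int) -> str:
    # strip spaces, write the text in rows of `columns` characters,
    # then read it back out column by column
    text = plaintext.replace(" ", "")
    rows = [text[i:i + columns] for i in range(0, len(text), columns)]
    return "".join(
        row[c] for c in range(columns) for row in rows if c < len(row)
    )

print(columnar_transpose("THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG", 9))
# TRMEHOPLEWSAQNOZUFVYIOEDCXROKJTGBUH
```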
DES performs permutations through the use of P-boxes (Permutation boxes) to spread the influence of a plaintext character over many characters so that they’re not easily traced back to the S-boxes used in the substitution cipher.
Other types of ciphers include
Technology does provide valid and interesting alternatives to cryptography when a message needs to be protected during transmission. Some useful options are listed in the following sections.
Steganography is the art of hiding the very existence of a message. It is related to but different from cryptography. Like cryptography, one purpose of steganography is to protect the contents of a message. However, unlike cryptography, the contents of the message aren’t encrypted. Instead, the existence of the message is hidden in some other communications medium.
For example, a message may be hidden in a graphic or sound file, in slack space on storage media, in traffic noise over a network, or in a digital image. By using the example of a digital image, the least significant bit (the right-most bit) of each byte in the image file can be used to transmit a hidden message without noticeably altering the image. However, because the message itself isn’t encrypted, if it is discovered, its contents can be easily compromised.
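As a sketch of the least-significant-bit technique, the following Python toy hides a message in the low bits of a byte buffer standing in for image pixel data. All names are illustrative (this is not a real steganography library), and no encryption is applied, so a discovered message is fully readable:

```python
def hide(carrier: bytearray, message: bytes) -> bytearray:
    # embed each message bit into the least significant bit of a carrier byte
    out = bytearray(carrier)
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # alters each byte by at most 1
    return out

def reveal(carrier: bytearray, length: int) -> bytes:
    # read the low bits back out and reassemble the hidden bytes
    bits = [b & 1 for b in carrier[: length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[n:n + 8]))
        for n in range(0, len(bits), 8)
    )

pixels = bytearray(range(64))          # stand-in for image pixel data
print(reveal(hide(pixels, b"HI"), 2))  # b'HI'
```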
Digital watermarking is a technique similar (and related) to steganography that can be used to verify the authenticity of an image or data, or to protect the intellectual property rights of the creator. Watermarking is the visible cousin of steganography — no attempt is made to hide its existence. Watermarks have long been used on paper currency and office letterhead or paper stock.
Within the last decade, the use of digital watermarking has become more widespread. For example, to display photo examples on the Internet without risking intellectual property theft, a copyright notice may be prominently imprinted across the image. As with steganography, nothing is encrypted using digital watermarking; the confidentiality of the material is not protected with a watermark.
Cryptographic algorithms are broadly classified as either symmetric or asymmetric key systems.
Symmetric key cryptography, also known as symmetric algorithm, secret key, single key, and private key cryptography, uses a single key to both encrypt and decrypt information. Two parties (for our example, Thomas and Richard) can exchange an encrypted message by using the following procedure:
In order for an attacker (Harold) to read the message, he must either guess the secret key (by using a brute-force attack, for example), obtain the secret key from Thomas or Richard using the rubber hose technique (another form of uh, brute-force attack — humans are typically the weakest link, and neither Thomas nor Richard have much tolerance for pain) or through social engineering (Thomas and Richard both like money and may be all too willing to help Harold’s Nigerian uncle claim his vast fortune) or intercept the secret key during the initial exchange.
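The single-shared-key idea can be sketched with a toy XOR cipher in Python. This is purely illustrative (an XOR scheme like this is secure only as a true one-time pad); the point is that the same key both encrypts and decrypts:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # toy symmetric cipher: XOR with a repeating key (illustration only)
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

shared_key = secrets.token_bytes(16)                  # the one secret both parties hold
ciphertext = xor_cipher(b"Meet at noon", shared_key)  # Thomas encrypts
recovered = xor_cipher(ciphertext, shared_key)        # Richard decrypts with the same key
assert recovered == b"Meet at noon"
```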
The following list includes the main disadvantages of symmetric systems:
Of course, symmetric systems do have many advantages:
Symmetric key algorithms include Data Encryption Standard (DES), Triple DES (3DES), Advanced Encryption Standard (AES), International Data Encryption Algorithm (IDEA), and Rivest Cipher 5 (RC5).
In the early 1970s, the National Institute of Standards and Technology (NIST) solicited vendors to submit encryption algorithm proposals to be evaluated by the National Security Agency (NSA) in support of a national cryptographic standard. This new encryption standard was used for private-sector and Sensitive but Unclassified (SBU) government data. In 1974, IBM submitted a 128-bit algorithm originally known as Lucifer. After some modifications (the algorithm was shortened to 56 bits and the S-boxes were changed), the IBM proposal was endorsed by the NSA and formally adopted as the Data Encryption Standard. It was published in Federal Information Processing Standard (FIPS) PUB 46 in 1977 (updated and revised in 1988 as FIPS PUB 46-1) and American National Standards Institute (ANSI) X3.92 in 1981.
The DES algorithm is a symmetric (or private) key cipher consisting of an algorithm and a key. The algorithm is a 64-bit block cipher based on a 56-bit symmetric key. (It consists of 56 key bits plus 8 parity bits … or think of it as 8 bytes, with each byte containing 7 key bits and 1 parity bit.) During encryption, the original message (plaintext) is divided into 64-bit blocks. Operating on a single block at a time, each 64-bit plaintext block is split into two 32-bit blocks. Under control of the 56-bit symmetric key, 16 rounds of transpositions and substitutions are performed on each block to produce the resulting ciphertext output.
The four distinct modes of operation (the mode of operation defines how the plaintext/ciphertext blocks are processed) in DES are Electronic Code Book, Cipher Block Chaining, Cipher Feedback, and Output Feedback.
The original goal of DES was to develop an encryption standard that could be used for 10 to 15 years. Although DES far exceeded this goal, in 1999, the Electronic Frontier Foundation achieved the inevitable, breaking a DES key in only 23 hours.
Electronic Code Book (ECB) mode is the native mode for DES operation and normally produces the highest throughput. It is best used for encrypting keys or small amounts of data. ECB mode operates on 64-bit blocks of plaintext independently and produces 64-bit blocks of ciphertext. One significant disadvantage of ECB is that the same plaintext, encrypted with the same key, always produces the same ciphertext. If used to encrypt large amounts of data, it’s susceptible to Chosen Text Attacks (CTA) (discussed in the section “Chosen Text Attack (CTA),” later in this chapter) because certain patterns may be revealed.
Cipher Block Chaining (CBC) mode is the most common mode of DES operation. Like ECB mode, CBC mode operates on 64-bit blocks of plaintext to produce 64-bit blocks of ciphertext. However, in CBC mode, each block is XORed (see the following sidebar “The XORcist,”) with the ciphertext of the preceding block to create a dependency, or chain, thereby producing a more random ciphertext result. The first block is encrypted with a random block known as the initialization vector (IV). One disadvantage of CBC mode is that errors propagate. However, this problem is limited to the block in which the error occurs and the block that immediately follows, after which, the decryption resynchronizes.
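The chaining idea can be sketched in Python with a simple XOR standing in for the real DES block transformation. This is purely illustrative (not a secure cipher); it shows why two identical plaintext blocks no longer produce identical ciphertext blocks:

```python
BLOCK = 8  # toy 64-bit block size, as in DES

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def toy_block_cipher(block: bytes, key: bytes) -> bytes:
    # stand-in for the real DES transformation; NOT a secure cipher
    return xor(block, key)

def cbc_encrypt(plaintext: bytes, key: bytes, iv: bytes) -> bytes:
    prev, out = iv, b""
    for i in range(0, len(plaintext), BLOCK):
        chained = xor(plaintext[i:i + BLOCK], prev)  # XOR with previous ciphertext
        prev = toy_block_cipher(chained, key)
        out += prev
    return out

ct = cbc_encrypt(b"SAMEDATA" * 2, b"K" * 8, b"I" * 8)
assert ct[:8] != ct[8:16]  # identical plaintext blocks, different ciphertext
```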
Cipher Feedback (CFB) mode is a stream cipher most often used to encrypt individual characters. In this mode, previously generated ciphertext is used as feedback for key generation in the next keystream. The resulting ciphertext is chained together, which causes errors to be multiplied throughout the encryption process.
Output Feedback (OFB) mode is also a stream cipher very similar to CFB. It is often used to encrypt satellite communications. In this mode, the previous keystream output (rather than ciphertext) is used as feedback for generating the next keystream. Because the resulting ciphertext is not chained together, errors don’t spread throughout the encryption process.
Triple Data Encryption Standard (3DES) effectively extended the life of the DES algorithm. In Triple DES implementations, a message is encrypted by using one key, encrypted by using a second key, and then again encrypted by using either the first key or a third key.
The use of three separate 56-bit encryption keys produces an effective key length of 168 bits. But Triple DES doesn’t just triple the work factor required to crack the DES algorithm (see the sidebar “Work factor: Force × effort = work!” in this chapter). Because the attacker doesn’t know whether he or she successfully cracked even the first 56-bit key (pick a number between 0 and 72 quadrillion!) until all three keys are cracked and the correct plaintext is produced, the work factor required is more like 2^56 x 2^56 x 2^56, or 72 quadrillion x 72 quadrillion x 72 quadrillion. (Don’t try this multiplication on a calculator; just trust us on this one.)
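You can sanity-check these numbers yourself in Python:

```python
print(2 ** 56)                     # 72057594037927936, about 72 quadrillion
print((2 ** 56) ** 3 == 2 ** 168)  # True: three chained 56-bit keys
```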
Using Triple DES (officially known as the Triple Data Encryption Algorithm, or TDEA) would seem enough to protect even the most sensitive data for at least a few lifetimes, but a few problems exist with Triple DES. First, the performance cost is significant. Although Triple DES is faster than many other symmetric encryption algorithms, it’s still unacceptably slow and therefore doesn’t work with many applications that require high-speed throughput of large volumes of data.
Second, a weakness exists in the implementation that allows a cryptanalyst to reduce the effective key size to 108 bits in a brute force attack. Although a 108-bit key size still requires a significant amount of time to crack (theoretically, several million millennia), it’s still a weakness.
In May 2002, NIST announced the Rijndael Block Cipher as the new standard to implement the Advanced Encryption Standard (AES), which replaced DES as the U.S. government standard for encrypting Sensitive but Unclassified data. AES was subsequently approved for encrypting classified U.S. government data up to the Top Secret level (using 192- or 256-bit key lengths).
The Rijndael Block Cipher, developed by Dr. Joan Daemen and Dr. Vincent Rijmen, uses variable block and key lengths (128, 192, or 256 bits) and between 10 and 14 rounds. It was designed to be simple, resistant to known attacks, and fast. It can be implemented in either hardware or software and has relatively low memory requirements.
Until recently, the only known successful attacks against AES were side-channel attacks, which don’t directly attack the encryption algorithm, but instead attack the system on which the encryption algorithm is implemented. Side-channel attacks using cache-timing techniques are most common against AES implementations. In 2009, a theoretical related-key attack against AES was published. The attack method is considered theoretical because, although it reduces the mathematical complexity required to break an AES key, it is still well beyond the computational capability available today.
The Blowfish Algorithm operates on 64-bit blocks, employs 16 rounds, and uses variable key lengths of up to 448 bits. The Twofish Algorithm, a finalist in the AES selection process, is a symmetric block cipher that operates on 128-bit blocks, employing 16 rounds with variable key lengths up to 256 bits. Both Blowfish and Twofish were designed by Bruce Schneier (and others) and are freely available in the public domain (neither algorithm has been patented). To date, there are no known successful cryptanalytic attacks against either algorithm.
Drs. Ron Rivest, Adi Shamir, and Len Adleman invented the RSA algorithm and founded the company RSA Data Security (RSA = Rivest, Shamir, Adleman). The Rivest Ciphers are a series of symmetric algorithms that include RC2, RC4, RC5, and RC6 (RC1 was never published and RC3 was broken during development):
The International Data Encryption Algorithm (IDEA) Cipher evolved from the Proposed Encryption Standard and the Improved Proposed Encryption Standard (IPES) originally developed in 1990. IDEA is a block cipher that operates on 64-bit plaintext blocks by using a 128-bit key. IDEA performs eight rounds on 16-bit sub-blocks and can operate in four distinct modes similar to DES. The IDEA Cipher provides stronger encryption than RC4 and Triple DES, but because it’s patented, it’s not widely used today. However, the patents were set to expire in various countries between 2010 and 2012. It is currently used in some software applications, including Pretty Good Privacy (PGP) email.
Asymmetric key cryptography (also known as asymmetric algorithm cryptography or public key cryptography) uses two separate keys: one key to encrypt and a different key to decrypt information. These keys are known as public and private key pairs. When two parties want to exchange an encrypted message by using asymmetric key cryptography, they follow these steps:
Only the private key can decrypt the message; thus, an attacker (Harold) possessing only the public key can’t decrypt the message. This also means that not even the original sender can decrypt the message. This use of an asymmetric key system is known as a secure message. A secure message guarantees the confidentiality of the message.
If the sender wants to guarantee the authenticity of a message (or, more correctly, the authenticity of the sender), he or she can sign the message with this procedure:
Of course, an attacker can also verify the authenticity of the message. This use of an asymmetric key system is known as an open message format because it guarantees only the authenticity, not the confidentiality.
If the sender wants to guarantee both the confidentiality and authenticity of a message, he or she can do so by using this procedure:
If an attacker intercepts the message, he or she can apply the sender’s public key, but then has an encrypted message that he or she can’t decrypt without the intended recipient’s private key. Thus, both confidentiality and authenticity are assured. This use of an asymmetric key system is known as a secure and signed message format.
A public key and a private key are mathematically related, but theoretically, no one can compute or derive the private key from the public key. This property of asymmetric systems is based on the concept of a one-way function. A one-way function is a problem that you can easily compute in one direction but not in the reverse direction. In asymmetric key systems, a trapdoor (private key) resolves the reverse operation of the one-way function.
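The trapdoor idea can be illustrated with textbook RSA and deliberately tiny numbers (real keys use primes hundreds of digits long; the values below are purely illustrative):

```python
# Textbook RSA with tiny illustrative numbers; NOT secure at this size.
p, q = 61, 53
n = p * q                # public modulus: 3233
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent (the trapdoor): 2753

message = 65
ciphertext = pow(message, e, n)    # easy for anyone holding the public key
recovered = pow(ciphertext, d, n)  # easy only for the private-key holder
assert recovered == message
```

Raising a number to a power modulo n is easy in both directions only if you hold d; recovering d from (n, e) alone requires factoring n, which is the hard reverse direction of the one-way function.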
Because of the complexity of asymmetric key systems, they are more commonly used for key management or digital signatures than for encryption of bulk information. Often, a hybrid system is employed, using an asymmetric system to securely distribute the secret keys of a symmetric key system that’s used to encrypt the data.
The main disadvantage of asymmetric systems is their lower speed. Because of the types of algorithms that are used to achieve the one-way hash functions, very large keys are required. (A 128-bit symmetric key has the equivalent strength of a 2,304-bit asymmetric key.) Those large keys, in turn, require more computational power, causing a significant loss of speed (up to 10,000 times slower than a comparable symmetric key system).
However, the many significant advantages of asymmetric systems include
Asymmetric key algorithms include RSA, Diffie-Hellman, El Gamal, Merkle-Hellman (Trapdoor) Knapsack, and Elliptic Curve, which we talk about in the following sections.
Drs. Ron Rivest, Adi Shamir, and Len Adleman published the RSA algorithm, which is a key transport algorithm based on the difficulty of factoring a number that’s the product of two large prime numbers. (Early implementations used 512-bit moduli; modern implementations use 2,048 bits or more.) Two users (Thomas and Richard) can securely transport symmetric keys by using RSA, like this:
Drs. Whitfield Diffie and Martin Hellman published a paper, entitled “New Directions in Cryptography,” that detailed a new paradigm for secure key exchange based on discrete logarithms. Diffie-Hellman is described as a key agreement algorithm. Two users (Thomas and Richard) can exchange symmetric keys by using Diffie-Hellman, like this:
Diffie-Hellman key exchange is vulnerable to Man-in-the-Middle Attacks, in which an attacker (Harold) intercepts the public values during the initial exchange and substitutes his own, creating session keys with each party that let him decrypt the session traffic. (You can read more about these attacks in the section “Man-in-the-Middle Attack,” later in this chapter.) A separate authentication mechanism is necessary to protect against this type of attack, ensuring that the two parties communicating in the session are, in fact, the legitimate parties.
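The exchange itself can be sketched with deliberately tiny numbers (real deployments use primes of 2,048 bits or more; the values below are purely illustrative):

```python
p, g = 23, 5               # public parameters: prime modulus and generator

a, b = 6, 15               # private values chosen by Thomas and Richard
A = pow(g, a, p)           # Thomas sends g^a mod p
B = pow(g, b, p)           # Richard sends g^b mod p

thomas_key = pow(B, a, p)  # (g^b)^a mod p
richard_key = pow(A, b, p)  # (g^a)^b mod p
assert thomas_key == richard_key  # both sides derive the same shared secret
```

Only A and B travel over the network; an eavesdropper who sees them cannot feasibly recover a or b when the prime is large, which is the discrete logarithm problem the scheme rests on.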
El Gamal is an unpatented, asymmetric key algorithm based on the discrete logarithm problem used in Diffie-Hellman (discussed in the preceding section). El Gamal extends the functionality of Diffie-Hellman to include encryption and digital signatures.
The Merkle-Hellman (Trapdoor) Knapsack, published in 1978, employs a unique approach to asymmetric cryptography. It’s based on the problem of determining what items, in a set of items that have fixed weights, can be combined in order to obtain a given total weight. Knapsack was broken in 1982.
Elliptic curves (EC) are far more difficult to compute than conventional discrete logarithm problems or factoring prime numbers. (A 160-bit EC key is equivalent to a 1,024-bit RSA key.) The use of smaller keys means that EC is significantly faster than other asymmetric algorithms (and many symmetric algorithms), and can be widely implemented in various hardware applications including wireless devices and smart cards.
Message authentication guarantees the authenticity and integrity of a message by ensuring that
Checksums, CRC-values, and parity checks are examples of basic message authentication and integrity controls. More advanced message authentication is performed by using digital signatures and message digests.
The Digital Signature Standard (DSS), published by the National Institute of Standards and Technology (NIST) in Federal Information Processing Standard (FIPS) 186-4, specifies three acceptable algorithms in its standard: the RSA Digital Signature Algorithm, the Digital Signature Algorithm (DSA, which is based on a modified El Gamal algorithm), and the Elliptic Curve Digital Signature Algorithm (ECDSA).
A digital signature is a simple way to verify the authenticity (and integrity) of a message. Instead of encrypting a message with the intended receiver’s public key, the sender encrypts it with his or her own private key. The sender’s public key properly decrypts the message, authenticating the originator of the message. This process is known as an open message format in asymmetric key systems, which we discuss in the section “Asymmetric key cryptography,” earlier in this chapter.
It’s often impractical to encrypt a message with the receiver’s public key to protect confidentiality, and then encrypt the entire message again by using the sender’s private key to protect authenticity and integrity. Instead, a representation of the encrypted message is encrypted with the sender’s private key to produce a digital signature. The intended recipient decrypts this representation by using the sender’s public key, and then independently calculates the expected results of the decrypted representation by using the same, known, one-way hashing algorithm. If the results are the same, the integrity of the original message is assured. This representation of the entire message is known as a message digest.
To digest means to reduce or condense something, and a message digest does precisely that. (Conversely, indigestion means to expand … like gases … how do you spell relief?) A message digest is a condensed representation of a message; think Reader’s Digest. Ideally, a message digest has the following properties:
Message digests are produced by using a one-way hash function. There are several types of one-way hashing algorithms (digest algorithms), including MD5, SHA-2 variants, and HMAC.
A one-way hashing algorithm produces a hashing value (or message digest) that can’t be reversed; that is, it can’t be decrypted. In other words, no trapdoor exists for a one-way hashing algorithm. The purpose of a one-way hashing algorithm is to ensure integrity and authentication.
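A quick sketch of these one-way properties, using Python's standard-library hashlib (the input strings are arbitrary):

```python
import hashlib

msg = b"The quick brown fox"
d1 = hashlib.sha256(msg).hexdigest()
d2 = hashlib.sha256(msg + b"!").hexdigest()  # one-character change

# The digest is fixed-size no matter how long the input is,
# and there is no function that recovers msg from d1.
print(len(d1) == len(d2) == 64)  # True: SHA-256 always yields 256 bits
print(d1 == d2)                  # False: a tiny change scrambles the digest
```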
MD (Message Digest) is a family of one-way hashing algorithms developed by Dr. Ron Rivest that includes MD (obsolete), MD2, MD3 (not widely used), MD4, MD5, and MD6:
Like MD, SHA (Secure Hash Algorithm) is another family of one-way hash functions. The SHA family of algorithms was designed by the U.S. National Security Agency (NSA) and published by NIST. The SHA family of algorithms includes SHA-1, SHA-2, and SHA-3:
The Hashed Message Authentication Code (HMAC) further extends the security of the MD5 and SHA-1 algorithms through the concept of a keyed digest. HMAC incorporates a previously shared secret key and the original message into a single message digest. Thus, even if an attacker intercepts a message, modifies its contents, and calculates a new message digest, the result doesn’t match the receiver’s hash calculation: without the secret key, the attacker can’t produce a valid digest for the modified message.
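The keyed-digest idea maps directly onto Python's standard-library hmac module; the key and message values below are invented for illustration:

```python
import hmac
import hashlib

secret = b"previously shared secret key"
message = b"Pay the vendor $500"

# Sender computes a keyed digest over the message.
tag = hmac.new(secret, message, hashlib.sha256).digest()

# An attacker who modifies the message can compute a digest, but
# without the secret key it won't match what the receiver expects.
forged = hmac.new(b"attacker guess", b"Pay the vendor $5,000",
                  hashlib.sha256).digest()
print(hmac.compare_digest(tag, forged))  # False

# The legitimate receiver, holding the same secret, verifies the tag.
expected = hmac.new(secret, message, hashlib.sha256).digest()
print(hmac.compare_digest(tag, expected))  # True
```

`hmac.compare_digest` is used instead of `==` to avoid leaking information through comparison timing.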
A Public Key Infrastructure (PKI) is an arrangement whereby a designated authority stores encryption keys or certificates (an electronic document that uses the public key of an organization or individual to establish identity, and a digital signature to establish authenticity) associated with users and systems, thereby enabling secure communications through the integration of digital signatures, digital certificates, and other services necessary to ensure confidentiality, integrity, authentication, non-repudiation, and access control.
Like physical keys, encryption keys must be safeguarded. Most successful attacks against encryption exploit some vulnerability in key management functions rather than some inherent weakness in the encryption algorithm. The following are the major functions associated with managing encryption keys:
Law enforcement has always been concerned about the potential use of encryption for criminal purposes. To counter this threat, NIST published the Escrowed Encryption Standard (EES) in Federal Information Processing Standards (FIPS) Publication 185 (1994). The premise of the EES is to divide a secret key into two parts and place those two parts into escrow with two separate, trusted organizations. With a court order, the two parts can be obtained by law enforcement officials, the secret key recovered, and the suspected communications decrypted. One implementation of the EES is the Clipper Chip proposed by the U.S. government. The Clipper Chip uses the Skipjack Secret Key algorithm for encryption and an 80-bit secret key.
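The two-part escrow split can be illustrated with a simple XOR secret-sharing sketch (the function names are invented, and real escrow schemes add binding and authentication around this core idea): one part is random, the other is the key XORed with it, so neither escrow agent alone learns anything about the key.

```python
import secrets

def escrow_split(key: bytes) -> tuple[bytes, bytes]:
    # Part 1 is uniformly random; part 2 = key XOR part 1.
    # Either part alone is indistinguishable from random noise.
    part1 = secrets.token_bytes(len(key))
    part2 = bytes(a ^ b for a, b in zip(key, part1))
    return part1, part2

def escrow_recover(part1: bytes, part2: bytes) -> bytes:
    # XORing the two escrowed parts back together yields the key.
    return bytes(a ^ b for a, b in zip(part1, part2))

key = secrets.token_bytes(10)  # stand-in for an 80-bit Skipjack key
p1, p2 = escrow_split(key)
print(escrow_recover(p1, p2) == key)  # True
```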
Attempts to crack a cryptosystem can be generally classified into four classes of attack methods:
The specific attack methods discussed in the following sections employ various elements of the four classes we describe in the preceding list.
The Birthday Attack attempts to exploit the probability of two messages producing the same message digest by using the same hash function. It’s based on the statistical probability (greater than 50 percent) that in a room containing 23 or more people, 2 people in that room have the same birthday. However, for 2 people in a room to share a specific birthday (such as August 3rd), 253 or more people must be in the room to have a statistical probability of greater than 50 percent (even if one of the birthdays is on February 29).
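The 23-person figure comes straight from the complement rule, P(shared) = 1 − P(all distinct), which is easy to check numerically:

```python
def shared_birthday_probability(people: int, days: int = 365) -> float:
    # P(at least one shared birthday) = 1 - P(all birthdays distinct)
    p_distinct = 1.0
    for i in range(people):
        p_distinct *= (days - i) / days
    return 1.0 - p_distinct

print(shared_birthday_probability(22))  # just under 50 percent
print(shared_birthday_probability(23))  # crosses the 50 percent threshold
```

The same reasoning is what makes hash collisions far cheaper to find than a preimage attack: the attacker only needs *any* two messages that collide, not a match to one specific digest.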
In a Ciphertext Only Attack (COA), the cryptanalyst obtains the ciphertext of several messages, all encrypted by using the same encryption algorithm, but he or she doesn’t have the associated plaintext. The cryptanalyst then attempts to decrypt the data by searching for repeating patterns and using statistical analysis. For example, certain words in the English language, such as the and or, occur frequently. This type of attack is generally difficult and requires a large sample of ciphertext.
In a Chosen Text Attack (CTA), the cryptanalyst selects a sample of plaintext and obtains the corresponding ciphertext. Several types of Chosen Text Attacks exist, including Chosen Plaintext, Adaptive Chosen Plaintext, Chosen Ciphertext, and Adaptive Chosen Ciphertext:
In a Known Plaintext Attack (KPA), the cryptanalyst has obtained the ciphertext and corresponding plaintext of several past messages, which he or she uses to decipher new messages.
A Man-in-the-Middle Attack involves an attacker intercepting messages between two parties on a network and potentially modifying the original message.
A Meet-in-the-Middle Attack involves an attacker encrypting known plaintext with each possible key on one end, decrypting the corresponding ciphertext with each possible key, and then comparing the results in the middle. Although commonly classified as a brute-force attack, this kind of attack may also be considered an analytic attack because it does involve some differential analysis.
A Replay Attack occurs when a session key is intercepted and used against a later encrypted session between the same two parties. Replay attacks can be countered by incorporating a time stamp in the session key.
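A sketch of that time-stamp countermeasure, using a keyed digest over the timestamp plus payload (the key value and the 30-second freshness window are invented for illustration):

```python
import hmac
import hashlib
import time

secret = b"session key shared by both parties"
MAX_AGE = 30  # seconds; anything older is rejected as a replay

def make_message(payload: bytes) -> tuple[bytes, bytes, bytes]:
    # Bind the current time into the authenticated message.
    ts = str(int(time.time())).encode()
    tag = hmac.new(secret, ts + payload, hashlib.sha256).digest()
    return ts, payload, tag

def accept(ts: bytes, payload: bytes, tag: bytes) -> bool:
    # Reject if the tag is invalid OR the timestamp is stale.
    expected = hmac.new(secret, ts + payload, hashlib.sha256).digest()
    fresh = int(time.time()) - int(ts) <= MAX_AGE
    return fresh and hmac.compare_digest(tag, expected)

ts, payload, tag = make_message(b"transfer approved")
print(accept(ts, payload, tag))  # True: authentic and fresh

# Replaying the same (validly tagged) message an hour later fails
# the freshness check, even though the digest itself still verifies.
stale_ts = str(int(time.time()) - 3600).encode()
stale_tag = hmac.new(secret, stale_ts + b"transfer approved",
                     hashlib.sha256).digest()
print(accept(stale_ts, b"transfer approved", stale_tag))  # False
```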
Finally, securely designed and built software running on securely designed and built systems must be operated in securely designed and built facilities. Otherwise, an adversary with unrestricted access to a system and its installed software will inevitably succeed in compromising your security efforts. Astute organizations involve security professionals during the design, planning, and construction of new or renovated locations and facilities. Proper site- and facility-requirements planning during the early stages of construction helps ensure that a new building or data center is adequate, safe, and secure — all of which can help an organization avoid costly situations later.
The principles of Crime Prevention Through Environmental Design (CPTED) have been widely adopted by security practitioners in the design of public and private buildings, offices, communities, and campuses since CPTED was first published in 1971. CPTED focuses on designing facilities by using techniques such as unobstructed areas, creative lighting, and functional landscaping, which help to naturally deter crime through positive psychological effects. By making it difficult for a criminal to hide, gain access to a facility, escape a location, or otherwise perpetrate an illegal and/or violent act, such techniques may cause a would-be criminal to decide against attacking a target or victim, and help to create an environment that’s perceived as (and that actually is) safer for legitimate people who regularly use the area. CPTED comprises three basic strategies:
Location, location, location! Although, to a certain degree, this bit of conventional business wisdom may be less important to profitability in the age of e-commerce, it’s still a critical factor in physical security. Important factors when considering a location include
Many of the physical and technical controls that we discuss in the section “Implement Site and Facility Security Controls,” later in this chapter, should be considered during the initial design of a secure facility. Doing so often helps reduce costs and improve the overall effectiveness of these controls. Other building design considerations include
The CISSP candidate must understand the various threats to physical security; the elements of site- and facility-requirements planning and design; the various physical security controls, including access controls, technical controls, environmental and life safety controls, and administrative controls; as well as how to support the implementation and operation of these controls, as covered in this section.
Wiring closets, server rooms, and media and evidence storage facilities contain high-value equipment and/or media that is critical to ongoing business operations or in support of investigations. Physical security controls often found in these locations include
High-security work areas often employ physical security controls above and beyond what is seen in ordinary work areas. In addition to key card access control systems and video surveillance, additional physical security controls may include
TABLE 5-5 General Fencing Height Requirements
| Height | General Effect |
| --- | --- |
| 3–4 ft (1 m) | Deters casual trespassers |
| 6–7 ft (2 m) | Too high to climb easily |
| 8 ft (2.4 m) + three-strand barbed wire | Deters more determined intruders |
Environmental and life safety controls, such as utilities and HVAC (heating, ventilation, and air conditioning), are necessary for maintaining a safe and acceptable operating environment for computers and personnel.
General considerations for electrical power include having one or more dedicated feeders from one or more utility substations or power grids, as well as ensuring that adequate physical access controls are implemented for electrical distribution panels and circuit breakers. An Emergency Power Off (EPO) switch should be installed near major systems and exit doors to shut down power in case of fire or electrical shock. Additionally, a backup power source should be established, such as a diesel or natural-gas power generator. Backup power should be provided for critical facilities and systems, including emergency lighting, fire detection and suppression, mainframes and servers (and certain workstations), HVAC, physical access control systems, and telecommunications equipment.
Protective controls for electrostatic discharge (ESD) include
Protective controls for electrical noise include
Using an Uninterruptible Power Supply (UPS) is perhaps the most important protection against electrical anomalies. A UPS provides clean power to sensitive systems and a temporary power source during electrical outages (blackouts, brownouts, and sags); this power supply must be sufficient to properly shut down the protected systems. Note: A UPS shouldn’t be used as a backup power source. A UPS — even a building UPS — is designed to provide temporary power, typically for 5 to 30 minutes, in order to give a backup generator time to start up or to allow a controlled and proper shutdown of protected systems.
Sensitive equipment can be damaged or affected by various electrical hazards and anomalies, including:
Electrostatic discharge (ESD): The ideal humidity range for computer equipment is 40 to 60 percent. Higher humidity causes condensation and corrosion. Lower humidity increases the potential for ESD (static electricity). A static charge of as little as 40V (volts) can damage sensitive circuits, and 2,000V can cause a system shutdown. The minimum discharge that can be felt by humans is 3,000V, and electrostatic discharges of over 25,000V are possible — so if you can feel it, it’s a problem for your equipment!
TABLE 5-6 Electrical Anomalies
| Electrical Event | Definition |
| --- | --- |
| Blackout | Total loss of power |
| Fault | Momentary loss of power |
| Brownout | Prolonged drop in voltage |
| Sag | Short drop in voltage |
| Inrush | Initial power rush |
| Spike | Momentary rush of power |
| Surge | Prolonged rush of power |
Heating, ventilation, and air conditioning (HVAC) systems maintain the proper environment for computers and personnel. HVAC-requirements planning involves complex calculations based on numerous factors, including the average BTUs (British Thermal Units) produced by the estimated computers and personnel occupying a given area, the size of the room, insulation characteristics, and ventilation systems.
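As a hypothetical simplified sketch of such a calculation: electrical load converts to heat at roughly 3.412 BTU per hour per watt, and each occupant adds a heat allowance (the 400 BTU/hr per-person figure and the function itself are simplifying assumptions; real HVAC sizing also accounts for room size, insulation, lighting, and ventilation, as the paragraph notes):

```python
# 1 watt of electrical load dissipates about 3.412 BTU per hour of heat.
WATTS_TO_BTU_HR = 3.412

def cooling_load_btu_hr(equipment_watts: float, people: int,
                        btu_per_person: float = 400.0) -> float:
    # Equipment heat plus a rough per-person sensible-heat allowance.
    return equipment_watts * WATTS_TO_BTU_HR + people * btu_per_person

# Example: a rack drawing 5 kW plus two staff members in the room.
print(round(cooling_load_btu_hr(5000, 2)))  # 17860
```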
The ideal temperature range for computer equipment is between 50°F and 80°F (10°C and 27°C). At temperatures as low as 100°F (38°C), magnetic storage media can be damaged.
The ideal humidity range for computer equipment is between 40 and 60 percent. Higher humidity causes condensation and corrosion. Lower humidity increases the potential for ESD (static electricity).
Doors and side panels on computer equipment racks should be kept closed (and locked, as a form of physical access control) to ensure proper airflow for cooling and ventilation. When possible, empty spaces in equipment racks (such as a half-filled rack or gaps between installed equipment) should be covered with blanking panels to reduce hot and cold air mixing between the hot side (typically the power-supply side of the equipment) and the cold side (typically the front of the equipment); such mixing of hot and cold air can reduce the efficiency of cooling systems.
Heating and cooling systems should be properly maintained, and air filters should be cleaned regularly to reduce dust contamination and fire hazards.
Most gas-discharge fire suppression systems automatically shut down HVAC systems prior to discharging, but a separate Emergency Power Off (EPO) switch should be installed near exits to facilitate a manual shutdown in an emergency.
Ideally, HVAC equipment should be dedicated, controlled, and monitored. If the systems aren’t dedicated or independently controlled, proper liaison with the building manager is necessary to ensure that everyone knows who to call when there are problems. Monitoring systems should alert the appropriate personnel when operating thresholds are exceeded.
Water damage (and damage from liquids in general) can occur from many different sources, including pipe breakage, firefighting efforts, leaking roofs, spilled drinks, flooding, and tsunamis. Wet computers and other electrical equipment pose a potentially lethal hazard.
Both preventive and detective controls are used to ensure that water in unwanted places does not disrupt business operations or destroy expensive assets. Common features include
Threats from fire can be potentially devastating and lethal. Proper precautions, preparation, and training not only help limit the spread of fire and damage, but more important, can also save lives.
Other hazards associated with fires include smoke, explosions, building collapse, release of toxic materials or vapors, and water damage.
For a fire to burn, it requires three elements: heat, oxygen, and fuel. These three elements are sometimes referred to as the fire triangle. (See Figure 5-4.) Fire suppression and extinguishing systems fight fires by removing one of these three elements or by temporarily breaking up the chemical reaction between these three elements (separating the fire triangle). Fires are classified according to the fuel type, as listed in Table 5-7.
TABLE 5-7 Fire Classes and Suppression/Extinguishing Methods
| Class | Description (Fuel) | Extinguishing Method |
| --- | --- | --- |
| A | Common combustibles, such as paper, wood, furniture, and clothing | Water or soda acid |
| B | Burnable fuels, such as gasoline or oil | CO2 or soda acid |
| C | Electrical fires, such as computers or electronics | CO2 (Note: The most important step to fight a fire in this class: Turn off electricity first!) |
| D | Special fires, such as combustible metals | May require total immersion or other special techniques |
| K (or F) | Cooking oils or fats | Water mist or fire blankets |
Fire detection and suppression systems are some of the most essential life safety controls for protecting facilities, equipment, and (most important) human lives.
The three main types of fire detection systems are
The two primary types of fire suppression systems are
Water sprinkler systems: Water extinguishes fire by removing the heat element from the fire triangle, and it’s most effective against Class A fires. Water is the primary fire-extinguishing agent for all business environments. Although water can potentially damage equipment, it’s one of the most effective, inexpensive, readily available, and least harmful (to humans) extinguishing agents available. The four variations of water sprinkler systems are
The four main types of water sprinkler systems are wet-pipe, dry-pipe, deluge, and preaction.
Carbon dioxide (CO2): CO2 is a commonly used colorless, odorless gas that extinguishes fire by removing the oxygen element from the fire triangle. (Refer to Figure 5-4.) CO2 is most effective against Class B and C fires. Because it removes oxygen, its use is potentially lethal and therefore best suited for unmanned areas or with a delay action (that includes manual override) in manned areas.
CO2 is also used in portable fire extinguishers, which should be located near all exits and within 50 feet (15 meters) of any electrical equipment. All portable fire extinguishers (CO2, water, and soda acid) should be clearly marked (listing the extinguisher type and the fire classes it can be used for) and periodically inspected. Additionally, all personnel should receive training in the proper use of fire extinguishers.
Gas-discharge: Gas-discharge systems suppress fire by separating the elements of the fire triangle (a chemical reaction); they are most effective against Class B and C fires. (Refer to Figure 5-4.) Inert gases don’t damage computer equipment, don’t leave liquid or solid residue, mix thoroughly with the air, and spread extremely quickly. However, these gases in concentrations higher than 10 percent are harmful if inhaled, and some types degrade into toxic chemicals (hydrogen fluoride, hydrogen bromide, and bromine) when used on fires that burn at temperatures above 900°F (482°C).
Halon used to be the gas of choice in gas-discharge fire suppression systems. However, because of Halon’s ozone-depleting characteristics, the Montreal Protocol of 1987 prohibited the further production and installation of Halon systems (beginning in 1994) and encouraged the replacement of existing systems. Acceptable replacements for Halon include FM-200 (most effective), CEA-410 or CEA-308, NAF-S-III, FE-13, Argon or Argonite, and Inergen.
Halon is an ozone-depleting substance. Acceptable replacements include FM-200, CEA-410 or CEA-308, NAF-S-III, FE-13, Argon or Argonite, and Inergen.