Chapter 5

Security Architecture and Engineering

IN THIS CHAPTER

check Using secure design principles

check Understanding security models

check Choosing the right controls and countermeasures

check Recognizing security capabilities in information systems

check Assessing and mitigating vulnerabilities

check Decrypting cryptographic concepts and fundamentals

check Getting physical with physical security design concepts

Security must be part of the design of information systems, as well as the facilities housing information systems and workers, which is covered in the Security Architecture and Engineering domain. This domain represents 13 percent of the CISSP certification exam.

Implement and Manage Engineering Processes Using Secure Design Principles

It is a natural human tendency to build things without first considering their design or security implications. A network engineer who is building a new network may just start plugging cables into routers and switches without first thinking about the overall design — much less any security considerations. Similarly, a software engineer assigned to write a new program is apt to just begin coding without planning the program’s design.

If we observe the outside world and the consumer products available in it, we sometimes see egregious usability and security flaws that make us wonder how the responsible person or organization was ever allowed to participate in the product's design and development.

tip Security professionals need to help organizations understand that security-by-design principles are a vital component of the development of any system.

The engineering processes that require the inclusion of secure design principles include the following:

  • Concept development. From the idea stage, security considerations are vital to the success of any new IT engineering endeavor. Every project and product starts with something — a whiteboard session, sketches on cocktail napkins or pizza boxes, or a conference call. However the project starts, someone should ask how vital data, functions, and components will be protected in this new thing. We’re not looking for detailed answers, but just enough confidence to know we aren’t the latest lemmings rushing toward the nearest cliff.
  • Requirements. Before actual design begins, one or more persons will define the requirements for the new system or feature. Often, there are several categories of requirements. Security, privacy, and regulatory requirements need to be included.
  • Design. After all requirements have been established and agreed upon, formal design of the system or component can begin. Design must incorporate all requirements established in the preceding step.
  • Development. Depending on what is being built, development may take many forms, including creating
    • System and device configurations
    • Data center equipment racking diagrams
    • Data flows for management and monitoring systems
  • Testing. Individual components and the entire system are tested to confirm that each and every requirement developed earlier has been achieved. Generally, someone other than the builder/developer should perform testing.
  • Implementation. When the system or component is placed into service, security considerations help ensure that this does not place the new system/component or related things at risk. Implementation activities include
    • Configuring and cabling network devices.
    • Installing and configuring operating systems or subsystems, such as database management systems, web servers, or applications.
    • Constructing physical facilities, work areas, or data centers.
  • Maintenance and support. After the system or facility is placed into service, all subsequent changes need to undergo similar engineering steps to ensure that new or changing security risks are quickly mitigated.
  • Decommissioning. When a system or facility reaches the end of its service life, it must be decommissioned without placing data, other systems, or personnel at risk.

tip The Building Security in Maturity Model (BSIMM) is a software security benchmarking tool that provides a framework for software security. It is composed of 256 measurements and 113 activities, organized into 12 practices across four domains: governance, intelligence, SSDL touchpoints, and deployment. Go to www.bsimm.com to learn more about BSIMM.

The application development lifecycle also includes security considerations that are nearly identical to security engineering principles here. Application development is covered in Chapter 10.

Understand the Fundamental Concepts of Security Models

Security models help us understand complex security mechanisms in information systems. Security models illustrate concepts that can be used when analyzing an existing system or designing a new one.

In this section, we describe the concepts of confidentiality, integrity, and availability (known together as CIA, or the CIA Triad), and access control models. Learn more about the CIA Triad in Chapter 3.

Confidentiality

Confidentiality refers to the concept that information and functions (objects) should be accessed only by authorized subjects. This is usually accomplished by several means, including:

  • Access and authorization: Ranging from physical access to facilities containing computers, to user account access, role-based access controls, and attribute-based access controls, the objective here is to make sure that only those persons with proper business authorization are permitted to access information. This topic is covered in Chapter 7.
  • Vulnerability management: This includes everything from system hardening to patch management and the elimination of vulnerabilities from applications. What we’re trying to avoid here is any possibility that someone can attack the system and get to the data.
  • Thorough system design: The overall design of the system excludes unauthorized subjects from access to protected data.
  • Sound data management practices: The organization has established processes that define the use of the information it manages or controls.

These characteristics work together to ensure that secrets remain secrets.

Integrity

Integrity refers to the concept that information in a system will arrive or be created correctly and maintain that correctness throughout its lifetime. Systems storing the information will reject attempted changes by unauthorized parties or unauthorized means. The characteristics of data integrity that are ensured by systems are

  • Completeness
  • Timeliness
  • Accuracy
  • Validity

Some of the measures taken to ensure data integrity are

  • Authorization: This refers to whether data has proper authorization to enter a system. The integrity of a data record includes whether it should even be in the system.
  • Input control: This includes verifying that the new data entering the system is in the proper format and in the proper range.
  • Access control: This is used to control who (and what) is permitted to change the data and when the data can be changed.
  • Output control: This includes verifying that the data leaving the system is in the proper format and complete.

All of these steps help to ensure that the data in a system has the highest possible quality.
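Input and output controls like these can be sketched as simple validation functions that accept or reject a record before it enters (or leaves) the system. The field names, formats, and ranges below are illustrative assumptions, not requirements from any particular standard:

```python
import re

def validate_input(record: dict) -> list:
    """Input control sketch: return a list of integrity problems.

    An empty list means the record passes format and range checks.
    The 'date' format and 'age' range here are example rules only.
    """
    problems = []
    # Format check: date must be YYYY-MM-DD
    if not re.fullmatch(r"\d{4}-\d{2}-\d{2}", record.get("date", "")):
        problems.append("date must be in YYYY-MM-DD format")
    # Range check: age must be an integer in a plausible range
    age = record.get("age")
    if not isinstance(age, int) or not 0 <= age <= 130:
        problems.append("age out of range")
    return problems
```

A record such as `{"date": "2024-01-31", "age": 42}` passes, while one with a malformed date or an out-of-range age is rejected with a list of specific problems.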

Availability

Availability refers to the concept that a system (and the data within it) will be accessible when and where users want to use it. The characteristics of a system that determine its availability include

  • Resilient hardware design: Features may include redundant power supplies, network adapters, processors, and other components. These help to ensure that a system will keep running even if some of its internal components fail.
  • Resilient software: The operating system and other software components need to be designed and configured to be as reliable as possible, incorporating techniques such as multithreading, multiprocessing, and multiprogramming.
  • Resilient architecture: We’re talking big picture here. In addition to resilient hardware design, we would suggest that other components have redundancy including routers, firewalls, switches, telecommunications circuits, and whatever other items may otherwise be single points of failure.
  • Sound configuration management, change management, and preventive maintenance processes: Availability includes not only the components of the system itself, but is also reliant on good system management practices. After all, availability means avoiding unscheduled downtime, which is often a consequence of sloppy configuration management and change management practices, or neglected preventive maintenance.
  • Established business continuity and disaster recovery plans: Organizations need to ensure that natural and man-made disasters do not negatively affect the availability of critical systems and data. This topic is covered in detail later in this chapter.

remember The CIA Triad comprises three principles of information protection: Confidentiality, Integrity, and Availability.

Access control models

Models are used to express access control requirements in a theoretical or mathematical framework that precisely describes or quantifies real access control systems. Common access control models include Bell-LaPadula, Access Matrix, Take-Grant, Biba, Clark-Wilson, Information Flow, and Non-interference.

remember Bell-LaPadula, Access Matrix, and Take-Grant models address confidentiality of stored information. Biba and Clark-Wilson address integrity of stored information.

Bell-LaPadula

The Bell-LaPadula model was the first formal confidentiality model of a mandatory access control system. (We discuss mandatory and discretionary access controls in Chapter 7.) It was developed for the U.S. Department of Defense (DoD) to formalize the DoD multilevel security policy. As we discuss in Chapter 3, the DoD classifies information based on sensitivity at three basic levels: Confidential, Secret, and Top Secret. In order to access classified information (and systems), an individual must have access (a clearance level equal to or exceeding the classification of the information or system) and need-to-know (legitimately in need of access to perform a required job function). The Bell-LaPadula model implements the access component of this security policy.

Bell-LaPadula is a state machine model that addresses only the confidentiality of information. The basic premise of Bell-LaPadula is that information can’t flow downward. This means that information at a higher level is not permitted to be copied or moved to a lower level. Bell-LaPadula defines the following two properties:

  • Simple security property (ss property): A subject can’t read information from an object that has a higher sensitivity label than the subject (also known as no read up, or NRU).
  • *-property (star property): A subject can’t write information to an object that has a lower sensitivity label than the subject (also known as no write down, or NWD).
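The two mandatory properties reduce to simple clearance comparisons. In this illustrative Python sketch (the numeric levels and function names are our own shorthand, not part of the formal model), a higher number means a higher sensitivity label:

```python
# Illustrative sketch of Bell-LaPadula's two mandatory access properties.
# Levels are arbitrary integers; higher means more sensitive.
LEVELS = {"Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    """Simple security property: no read up (NRU)."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    """*-property: no write down (NWD)."""
    return LEVELS[subject_level] <= LEVELS[object_level]
```

A Secret-cleared subject can read Confidential objects (no read up is satisfied) but cannot write to them (that would be a write down, risking leakage of Secret information into a Confidential container).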

Bell-LaPadula also defines two additional properties that give it the flexibility of a discretionary access control model:

  • Discretionary security property: This property determines access based on an Access Matrix — more on that model in the following section.
  • Trusted subject: A trusted subject is an entity that can violate the *-property but not its intent.

tip A state machine is an abstract model used to design computer programs; the state machine illustrates which “state” the program will be in at any time.

Access Matrix

An Access Matrix model, in general, provides object access rights (read/write/execute, or R/W/X) to subjects in a discretionary access control (DAC) system. An Access Matrix consists of access control lists (columns) and capability lists (rows). See Table 5-1 for an example.

TABLE 5-1 An Access Matrix Example

Subject/Object    Directory: H/R    File: Personnel    Process: LPD
Thomas            Read              Read/Write         Execute
Lisa              Read              Read               Execute
Harold            None              None               None
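Table 5-1 maps naturally onto a nested lookup: each subject's row is its capability list, and each object's column is its access control list. This sketch uses the names from the table; the data structure itself is our own illustration:

```python
# Access Matrix from Table 5-1: outer keys are subjects (capability lists),
# inner keys are objects (access control lists), values are granted rights.
ACCESS_MATRIX = {
    "Thomas": {"Directory: H/R": {"read"},
               "File: Personnel": {"read", "write"},
               "Process: LPD": {"execute"}},
    "Lisa":   {"Directory: H/R": {"read"},
               "File: Personnel": {"read"},
               "Process: LPD": {"execute"}},
    "Harold": {"Directory: H/R": set(),
               "File: Personnel": set(),
               "Process: LPD": set()},
}

def is_permitted(subject: str, obj: str, right: str) -> bool:
    """Check one cell of the matrix; unknown subjects/objects get no access."""
    return right in ACCESS_MATRIX.get(subject, {}).get(obj, set())
```

Reading down a column yields the ACL for that object; reading across a row yields the subject's capabilities.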

Take-Grant

Take-Grant systems specify the rights that a subject can transfer to or from another subject or object. These rights are defined through four basic operations: create, revoke, take, and grant.
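A much-simplified sketch of two of these operations follows. The rights table and function names are our own illustration (a real Take-Grant model is a directed graph with formal take and grant rules); here, a subject holding the grant right over an object may pass its other rights on that object to another subject, and revoke removes a right:

```python
# Simplified sketch of Take-Grant style right transfers.
# Keys are (subject, object) pairs; values are sets of rights.
rights = {("alice", "file1"): {"read", "grant"}}

def grant(granter: str, grantee: str, obj: str, right: str) -> None:
    """Granter passes one of its rights on obj to grantee (requires 'grant')."""
    held = rights.get((granter, obj), set())
    if "grant" in held and right in held:
        rights.setdefault((grantee, obj), set()).add(right)

def revoke(subject: str, obj: str, right: str) -> None:
    """Remove a previously transferred right."""
    rights.get((subject, obj), set()).discard(right)
```

The create and take operations would extend this sketch: create adds a new (subject, object) entry, and take lets a subject holding the take right pull rights from another subject.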

Biba

The Biba integrity model (sometimes referred to as Bell-LaPadula upside down) was the first formal integrity model. Biba is a lattice-based model that addresses the first goal of integrity: ensuring that modifications to data aren’t made by unauthorized users or processes. (See Chapter 3 for a complete discussion of the three goals of integrity.) Biba defines the following two properties:

  • Simple integrity property: A subject can’t read information from an object that has a lower integrity level than the subject (also called no read down).
  • *-integrity property (star integrity property): A subject can’t write information to an object that has a higher integrity level than the subject (also known as no write up).

Clark-Wilson

The Clark-Wilson integrity model establishes a security framework for use in commercial activities, such as the banking industry. Clark-Wilson addresses all three goals of integrity and identifies special requirements for inputting data based on the following items and procedures:

  • Unconstrained data item (UDI): Data outside the control area, such as input data.
  • Constrained data item (CDI): Data inside the control area. (Integrity must be preserved.)
  • Integrity verification procedures (IVP): Checks validity of CDIs.
  • Transformation procedures (TP): Maintains integrity of CDIs.

The Clark-Wilson integrity model is based on the concept of a well-formed transaction, in which a transaction is sufficiently ordered and controlled so that it maintains internal and external consistency.
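The interplay of these items can be sketched in code: a transformation procedure (TP) converts unconstrained input (a UDI) into a change to a constrained data item (CDI), and an integrity verification procedure (IVP) confirms the CDI remains valid. The function names and the validity rule (an account balance must never go negative) are hypothetical examples, not part of the formal model:

```python
# Hedged sketch of Clark-Wilson concepts: a UDI is raw input, a CDI is a
# validated record, the IVP checks validity, and the TP is the only
# permitted way to change the CDI (a well-formed transaction).
def ivp(cdi: dict) -> bool:
    """Integrity verification procedure: example rule, balance never negative."""
    return cdi["balance"] >= 0

def tp_deposit(cdi: dict, udi_amount: str) -> dict:
    """Transformation procedure: validate the UDI, then update the CDI."""
    amount = int(udi_amount)          # convert the unconstrained input
    if amount <= 0:
        raise ValueError("deposit must be positive")
    new_cdi = {**cdi, "balance": cdi["balance"] + amount}
    assert ivp(new_cdi)               # the CDI must remain valid afterward
    return new_cdi
```

Because users can only modify the CDI through the TP, and the TP enforces validation on entry and verification on exit, internal consistency is maintained.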

Information Flow

An Information Flow model is a type of access control model based on the flow of information, rather than on imposing access controls. Objects are assigned a security class and value, and their direction of flow — from one application to another or from one system to another — is controlled by a security policy. This model type is useful for analyzing covert channels, through detailed analysis of the flow of information in a system, including the sources of information and the paths of flow.

Non-Interference

A non-interference model ensures that the actions of different objects and subjects aren’t seen by (and don’t interfere with) other objects and subjects on the same system.

Select Controls Based upon Systems Security Requirements

Designing and building secure software is critical to information security, but the systems that software runs on must themselves be securely designed and built. Selecting appropriate controls is essential to designing a secure computing architecture. Numerous systems security evaluation models exist to help you select the right controls and countermeasures for your environment.

Evaluation criteria

Evaluation criteria provide a standard for quantifying the security of a computer system or network. These criteria include the Trusted Computer System Evaluation Criteria (TCSEC), Trusted Network Interpretation (TNI), European Information Technology Security Evaluation Criteria (ITSEC), and the Common Criteria.

Trusted Computer System Evaluation Criteria (TCSEC)

The Trusted Computer System Evaluation Criteria (TCSEC), commonly known as the Orange Book, is part of the Rainbow Series developed for the U.S. DoD by the National Computer Security Center (NCSC). It’s the formal implementation of the Bell-LaPadula model. The evaluation criteria were developed to achieve the following objectives:

  • Measurement: Provides a metric for assessing comparative levels of trust between different computer systems.
  • Guidance: Identifies standard security requirements that vendors must build into systems to achieve a given trust level.
  • Acquisition: Provides customers a standard for specifying acquisition requirements and identifying systems that meet those requirements.

The four basic control requirements identified in the Orange Book are

  • Security policy: The rules and procedures by which a trusted system operates. Specific TCSEC requirements include
    • Discretionary access control (DAC): Owners of objects are able to assign permissions to other subjects.
    • Mandatory access control (MAC): Permissions to objects are managed centrally by an administrator.
    • Object reuse: Protects confidentiality of objects that are reassigned after initial use. For example, a deleted file still exists on storage media; only the file allocation table (FAT) and first character of the file have been modified. Thus, residual data may be restored, a problem known as data remanence. Object-reuse requirements define procedures for actually erasing the data.
    • Labels: Sensitivity labels are required in MAC-based systems. (Read more about information classification in Chapter 3.) Specific TCSEC labeling requirements include integrity, export, and subject/object labels.
  • Assurance: Guarantees that a security policy is correctly implemented. Specific TCSEC requirements (listed here) are classified as operational assurance requirements:
    • System architecture: TCSEC requires features and principles of system design that implement specific security features.
    • System integrity: Hardware and firmware operate properly and are tested to verify proper operation.
    • Covert channel analysis: TCSEC requires covert channel analysis that detects unintended communication paths not protected by a system’s normal security mechanisms. A covert storage channel conveys information by altering stored system data. A covert timing channel conveys information by altering a system resource’s performance or timing.

      remember A systems or security architect must understand covert channels and how they work in order to prevent the use of covert channels in the system environment.

    • Trusted facility management: The assignment of a specific individual to administer the security-related functions of a system. Closely related to the concepts of least privilege, separation of duties, and need-to-know.
    • Trusted recovery: Ensures that security isn’t compromised in the event of a system crash or failure. This process involves two primary activities: failure preparation and system recovery.
    • Security testing: Specifies required testing by the developer and the National Computer Security Center (NCSC).
    • Design specification and verification: Requires a mathematical and automated proof that the design description is consistent with the security policy.
    • Configuration management: Identifying, controlling, accounting for, and auditing all changes made to the Trusted Computing Base (TCB) during the design, development, and maintenance phases of a system’s lifecycle.
    • Trusted distribution: Protects a system during transport from a vendor to a customer.
  • Accountability: The ability to associate users and processes with their actions. Specific TCSEC requirements include
    • Identification and authentication (I&A): Systems need to track who performs what activities. We discuss this topic in Chapter 7.
    • Trusted Path: A direct communications path between the user and the Trusted Computing Base (TCB) that doesn’t require interaction with untrusted applications or operating-system layers.
    • Audit: Recording, examining, analyzing, and reviewing security-related activities in a trusted system.
  • Documentation: Specific TCSEC requirements include
    • Security Features User’s Guide (SFUG): User’s manual for the system.
    • Trusted Facility Manual (TFM): System administrator’s and/or security administrator’s manual.
    • Test documentation: According to the TCSEC manual, this documentation must be in a position to “show how the security mechanisms were tested, and results of the security mechanisms’ functional testing.”
    • Design documentation: Defines system boundaries and internal components, such as the Trusted Computing Base (TCB).

remember The Orange Book defines four major hierarchical classes of security protection and numbered subclasses (higher numbers indicate higher security):

  • D: Minimal protection
  • C: Discretionary protection (C1 and C2)
  • B: Mandatory protection (B1, B2, and B3)
  • A: Verified protection (A1)

These classes are further defined in Table 5-2.

TABLE 5-2 TCSEC Classes

Class       Name                                  Sample Requirements
D           Minimal protection                    Reserved for systems that fail evaluation.
C1          Discretionary protection (DAC)        System doesn't need to distinguish between individual users and types of access.
C2          Controlled access protection (DAC)    System must distinguish between individual users and types of access; object reuse security features required.
B1          Labeled security protection (MAC)     Sensitivity labels required for all subjects and storage objects.
B2          Structured protection (MAC)           Sensitivity labels required for all subjects and objects; trusted path requirements.
B3          Security domains (MAC)                Access control lists (ACLs) are specifically required; system must protect against covert channels.
A1          Verified design (MAC)                 Formal Top-Level Specification (FTLS) required; configuration management procedures must be enforced throughout entire system lifecycle.
Beyond A1                                         Self-protection and reference monitors are implemented in the Trusted Computing Base (TCB). TCB verified to source-code level.

tip You don’t need to know specific requirements of each TCSEC level for the CISSP exam, but you should know at what levels DAC and MAC are implemented and the relative trust levels of the classes, including numbered subclasses.

Major limitations of the Orange Book include that

  • It addresses only confidentiality issues. It doesn’t include integrity and availability.
  • It isn’t applicable to most commercial systems.
  • It emphasizes protection from unauthorized access, despite statistical evidence that many security violations involve insiders.
  • It doesn’t address networking issues.

Trusted Network Interpretation (TNI)

Part of the Rainbow Series, like TCSEC (discussed in the preceding section), Trusted Network Interpretation (TNI) addresses confidentiality and integrity in trusted computer/communications network systems. Within the Rainbow Series, it’s known as the Red Book.

Part I of the TNI is a guideline for extending the system protection standards defined in the TCSEC (the Orange Book) to networks. Part II of the TNI describes additional security features such as communications integrity, protection from denial of service, and transmission security.

European Information Technology Security Evaluation Criteria (ITSEC)

Unlike TCSEC, the European Information Technology Security Evaluation Criteria (ITSEC) addresses confidentiality, integrity, and availability, as well as evaluating an entire system, defined as a Target of Evaluation (TOE), rather than a single computing platform.

ITSEC evaluates functionality (security objectives, or why; security-enforcing functions, or what; and security mechanisms, or how) and assurance (effectiveness and correctness) separately. The ten functionality (F) classes and seven evaluation (E) (assurance) levels are listed in Table 5-3.

TABLE 5-3 ITSEC Functionality (F) Classes and Evaluation (E) Levels mapped to TCSEC levels

(F) Class    (E) Level    Description
NA           E0           Equivalent to TCSEC level D
F-C1         E1           Equivalent to TCSEC level C1
F-C2         E2           Equivalent to TCSEC level C2
F-B1         E3           Equivalent to TCSEC level B1
F-B2         E4           Equivalent to TCSEC level B2
F-B3         E5           Equivalent to TCSEC level B3
F-B3         E6           Equivalent to TCSEC level A1
F-IN         NA           TOEs with high integrity requirements
F-AV         NA           TOEs with high availability requirements
F-DI         NA           TOEs with high integrity requirements during data communication
F-DC         NA           TOEs with high confidentiality requirements during data communication
F-DX         NA           Networks with high confidentiality and integrity requirements

tip You don’t need to know specific requirements of each ITSEC level for the CISSP exam, but you should know how the basic functionality levels (F-C1 through F-B3) and evaluation levels (E0 through E6) correlate to TCSEC levels.

Common Criteria

The Common Criteria for Information Technology Security Evaluation (usually just called Common Criteria) is an international effort to standardize and improve existing European and North American evaluation criteria. The Common Criteria has been adopted as an international standard in ISO 15408. The Common Criteria defines eight evaluation assurance levels (EALs), which are listed in Table 5-4.

TABLE 5-4 The Common Criteria

Level    TCSEC Equivalent    ITSEC Equivalent    Description
EAL0     N/A                 N/A                 Inadequate assurance
EAL1     N/A                 N/A                 Functionally tested
EAL2     C1                  E1                  Structurally tested
EAL3     C2                  E2                  Methodically tested and checked
EAL4     B1                  E3                  Methodically designed, tested, and reviewed
EAL5     B2                  E4                  Semi-formally designed and tested
EAL6     B3                  E5                  Semi-formally verified design and tested
EAL7     A1                  E6                  Formally verified design and tested

tip You don’t need to know specific requirements of each Common Criteria level for the CISSP exam, but you should understand the basic evaluation hierarchy (EAL0 through EAL7, in order of increasing levels of trust).

System certification and accreditation

System certification is a formal methodology for comprehensive testing and documentation of information system security safeguards, both technical and nontechnical, in a given environment by using established evaluation criteria (such as the TCSEC or the Common Criteria).

Accreditation is an official, written approval for the operation of a specific system in a specific environment, as documented in the certification report. Accreditation is normally granted by a senior executive or Designated Approving Authority (DAA). The term DAA is used in the U.S. military and government. A DAA is normally a senior official, such as a commanding officer.

System certification and accreditation must be updated when any changes are made to the system or environment, and they must also be periodically re-validated, which typically happens every three years.

The certification and accreditation process has been formally implemented in U.S. military and government organizations as the Defense Information Technology Security Certification and Accreditation Process (DITSCAP) and National Information Assurance Certification and Accreditation Process (NIACAP), respectively. U.S. government agencies utilizing cloud-based systems and services are required to undergo FedRAMP certification and accreditation processes (described in this chapter). These important processes are used to make sure that a new (or changed) system has the proper design and operational characteristics, and that it’s suitable for a specific task.

DITSCAP

The Defense Information Technology Security Certification and Accreditation Process (DITSCAP) formalizes the certification and accreditation process for U.S. DoD information systems through four distinct phases:

  • Definition: Security requirements are determined by defining the organization and system’s mission, environment, and architecture.
  • Verification: Ensures that a system undergoing development or modification remains compliant with the System Security Authorization Agreement (SSAA), which is a baseline security-configuration document.
  • Validation: Confirms compliance with the SSAA.
  • Post-Accreditation: Represents ongoing activities required to maintain compliance, and address new and evolving threats, throughout a system’s lifecycle.

NIACAP

The National Information Assurance Certification and Accreditation Process (NIACAP) formalizes the certification and accreditation process for U.S. government national security information systems. NIACAP consists of four phases (Definition, Verification, Validation, and Post-Accreditation) that generally correspond to the DITSCAP phases. Additionally, NIACAP defines three types of accreditation:

  • Site accreditation: All applications and systems at a specific location are evaluated.
  • Type accreditation: A specific application or system for multiple locations is evaluated.
  • System accreditation: A specific application or system at a specific location is evaluated.

FedRAMP

The Federal Risk and Authorization Management Program (FedRAMP) is a standardized approach to assessments, authorization, and continuous monitoring of cloud-based service providers. This represents a change from controls-based security to risk-based security.

DCID 6/3

The Director of Central Intelligence Directive 6/3 is the process used to protect sensitive information that’s stored on computers used by the U.S. Central Intelligence Agency (CIA).

Security controls and countermeasures

Various security controls and countermeasures that should be applied to security architecture, as appropriate, include defense in depth, system hardening, implementation of heterogeneous environments, and designing system resilience.

Defense in depth

Defense in depth is a strategy for resisting attacks. A system that employs defense in depth will have two or more layers of protective controls that are designed to protect the system or data stored there.

An example defense-in-depth architecture would consist of a database protected by several components, such as:

  • Screening router
  • Firewall
  • Intrusion prevention system
  • Hardened operating system
  • OS-based network access filtering

All the layers listed here help to protect the database, and each offers meaningful protection on its own. But when considered together, all these controls offer a varied (in effect, deeper) defense, hence the term defense in depth.

remember Defense in depth refers to the use of multiple layers of protection.
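The layering idea can be sketched in code as a request that must pass an ordered chain of independent checks before reaching the protected database. The layer rules below (a hypothetical blocked address, an allowed port, a naive injection signature) are illustrative assumptions, not real filtering logic:

```python
# Defense-in-depth sketch: each layer independently admits or rejects
# a request; the database is reached only if every layer admits it.
def screening_router(req: dict) -> bool:
    return req["source_ip"] not in {"203.0.113.9"}   # hypothetical blocklist

def firewall(req: dict) -> bool:
    return req["port"] in {443}                      # only HTTPS reaches the app

def ips(req: dict) -> bool:
    return "' OR 1=1" not in req["payload"]          # naive injection signature

LAYERS = [screening_router, firewall, ips]

def admit(req: dict) -> bool:
    """True only if every protective layer admits the request."""
    return all(layer(req) for layer in LAYERS)
```

The point of the design is that an attacker who evades one layer (say, the naive IPS signature) still faces the others; no single control's failure exposes the data.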

System hardening

Most types of information systems, including computer operating systems, have several general-purpose features that make it easy to set up the systems. But systems that are exposed to the Internet should be “hardened,” or configured according to the following concepts:

  • Remove all unnecessary components.
  • Remove all unnecessary accounts.
  • Close all unnecessary network listening ports.
  • Change all default passwords to complex, difficult to guess passwords.
  • Run all necessary programs at the lowest possible privilege.
  • Install security patches as soon as they are available.
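Several of these items are auditable in an automated way. This hedged sketch compares a host's listening ports and local accounts against allowlists (the allowlist contents are examples only, and in practice the inputs would come from tools such as ss, netstat, or the account database rather than being passed in directly):

```python
# Hedged hardening-audit sketch; the allowlists below are example policy,
# not a published baseline.
ALLOWED_PORTS = {22, 443}
ALLOWED_ACCOUNTS = {"root", "appsvc"}

def audit(listening_ports: set, accounts: set) -> list:
    """Report hardening violations: unexpected listeners, then unexpected accounts."""
    findings = []
    for port in sorted(listening_ports - ALLOWED_PORTS):
        findings.append(f"unexpected listening port: {port}")
    for acct in sorted(accounts - ALLOWED_ACCOUNTS):
        findings.append(f"unnecessary account: {acct}")
    return findings
```

An empty findings list means the host matches the policy; anything else is a candidate for removal or justification.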

System hardening guides can be obtained from a number of sources, such as the Center for Internet Security (CIS), which publishes configuration benchmarks, and the U.S. Defense Information Systems Agency (DISA), which publishes Security Technical Implementation Guides (STIGs).

tip Software and operating system vendors often provide their own hardening guides, which may also be useful.

Heterogeneous environment

Rather than containing systems or components of a single type, a heterogeneous environment contains a variety of different types of systems. Contrast an environment that consists only of Windows Server 2016 systems running the latest SQL Server and IIS with a more complex environment that contains Windows, Linux, and Solaris servers running Microsoft SQL Server, MySQL, and Oracle databases.

The advantage of a heterogeneous environment is its variety of systems; for one thing, the various types of systems probably won’t possess common vulnerabilities, which makes them harder to attack. However, the complexity of a heterogeneous environment also negatively impacts security, as there are more components that potentially can fail or be compromised.

The weakness of a homogeneous environment (one where all of the systems are the same) is its uniformity. If a weakness in one of the systems is discovered, all systems may have the weakness. If one of the systems is attacked and compromised, all may be attacked and compromised.

You can liken homogeneity to a herd of animals; if they are genetically identical, then they may all be susceptible to a disease that could wipe out the entire herd. If they are genetically diverse, then perhaps some will be able to survive the disease.

System resilience

The resilience of a system is a measure of its ability to keep running, even under less-than-ideal conditions. Resilience is important at all levels, including network, operating system, subsystem (such as database management system or web server), and application.

Resilience can mean a lot of different things. Here are some examples:

  • Filter malicious input: The system can recognize and reject input that may be an attack, such as the input typically seen in injection, buffer-overflow, and denial-of-service attacks.
  • Data replication: System copies critical data to a separate storage system in the event of component failure.
  • Redundant components: System contains redundant components that permit the system to continue running even when hardware failures or malfunctions occur. Examples of redundant components include multiple power supplies, multiple network interfaces, redundant storage techniques such as RAID, and redundant server architecture techniques such as clustering.
  • Maintenance hooks: Hidden, undocumented features in software programs that, although intended for maintenance or debugging, can expose data or functions to illicit use. Resilient systems are designed and reviewed to ensure that such hooks are removed before deployment.
  • Security countermeasures: Knowing that systems are subject to frequent or constant attack, systems architects need to include several security countermeasures in order to minimize system vulnerability. Such countermeasures include
    • Revealing as little information about the system as possible. For example, don’t permit the system to ever display the version of operating system, database, or application software that’s running.
    • Limiting access to only those persons who must use the system in order to fulfill needed organizational functions.
    • Disabling unnecessary services in order to reduce the number of attack targets.
    • Using strong authentication in order to make it as difficult as possible for outsiders to access the system.
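The "filter malicious input" countermeasure above can be sketched as an allow-list validator, which accepts only known-good input rather than trying to enumerate every known-bad string. The field name, character class, and length limit here are illustrative assumptions, not a universal rule:

```python
import re

# Allow-list validation sketch: a username field must match a short,
# safe pattern; anything else (injection strings, oversized buffers)
# is rejected before it reaches the rest of the application.

USERNAME_RE = re.compile(r"^[A-Za-z0-9_.-]{1,32}$")

def is_valid_username(value: str) -> bool:
    """Accept only short strings of explicitly allowed characters."""
    return bool(USERNAME_RE.fullmatch(value))

print(is_valid_username("alice_01"))              # True
print(is_valid_username("alice'; DROP TABLE--"))  # False: injection-style input
print(is_valid_username("A" * 500))               # False: overflow-style input
```

The allow-list approach is generally preferred over block-lists, because attackers are endlessly inventive about encoding bad input, whereas the set of legitimate input is usually small and stable.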

Understand Security Capabilities of Information Systems

Basic concepts related to security architecture include the Trusted Computing Base (TCB), Trusted Platform Module (TPM), secure modes of operation, open and closed systems, protection rings, security modes, and recovery procedures.

Computer architecture

Basic computer (system) architecture refers to the structure of a computer system and comprises its hardware, firmware, and software.

tip The CompTIA A+ certification exam covers computer architecture in depth and is an excellent way to prepare for this portion of the CISSP examination.

Hardware

Hardware consists of the physical components in computer architecture. The main components of the computer architecture include the CPU, memory, and bus.

CPU

The CPU (Central Processing Unit) or microprocessor is the electronic circuitry that performs a computer’s arithmetic, logic, and computing functions. As shown in Figure 5-1, the main components of a CPU include

  • Arithmetic Logic Unit (ALU): Performs numerical calculations and comparative logic functions, such as ADD, SUBTRACT, DIVIDE, and MULTIPLY.
  • Bus Interface Unit (BIU): Supervises data transfers over the bus system between the CPU and I/O devices.
  • Control Unit: Coordinates activities of the other CPU components during program execution.
  • Decode Unit: Converts incoming instructions into individual CPU commands.
  • Floating-Point Unit (FPU): Handles higher math operations for the ALU and control unit.
  • Memory Management Unit (MMU): Handles addressing and cataloging data that's stored in memory and translates logical addressing into physical addressing.
  • Pre-Fetch Unit: Preloads instructions into CPU registers.
  • Protection Test Unit (PTU): Monitors all CPU functions to ensure that they’re properly executed.
  • Registers: Hold CPU data, addresses, and instructions temporarily, in special buffers.
image

FIGURE 5-1: The main components of a CPU.

The basic operation of a microprocessor consists of two distinct phases: fetch and execute. (It’s not too different from what your dog does: You throw the stick, and he fetches the stick.) During the fetch phase, the CPU locates and retrieves a required instruction from memory. During the execute phase, the CPU decodes and executes the instruction. These two phases make up a basic machine cycle that’s controlled by the CPU clock signals. Many complex instructions require more than a single machine cycle to execute.
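The two-phase machine cycle can be sketched with a toy instruction set; the opcodes, operands, and memory layout here are invented purely for illustration:

```python
# Toy fetch-and-execute loop. Each iteration is one machine cycle:
# fetch the instruction at the program counter, then decode and execute it.

memory = [
    ("LOAD", 7),   # put 7 in the accumulator
    ("ADD", 5),    # add 5 to the accumulator
    ("HALT", 0),   # stop and return the result
]

def run(mem):
    acc, pc = 0, 0
    while True:
        opcode, operand = mem[pc]   # fetch: locate and retrieve instruction
        pc += 1
        if opcode == "LOAD":        # execute: decode and perform it
            acc = operand
        elif opcode == "ADD":
            acc += operand
        elif opcode == "HALT":
            return acc

print(run(memory))  # 12
```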

The four operating states for a computer (CPU) are

  • Operating (or run) state: The CPU executes an instruction or instructions.
  • Problem (or application) state: The CPU calculates a solution to an application-based problem. During this state, only a limited subset of instructions (non-privileged instructions) is available.
  • Supervisory state: The CPU executes a privileged instruction, meaning that instruction is available only to a system administrator or other authorized user/process.
  • Wait state: The CPU hasn’t yet completed execution of an instruction and must extend the cycle.

The two basic types of CPU designs used in modern computer systems are

  • Complex-Instruction-Set Computing (CISC): Can perform multiple operations per single instruction. Optimized for systems in which the fetch phase is the longest part of the instruction execution cycle. CPUs that use CISC include Intel x86, PDP-11, and Motorola 68000.
  • Reduced-Instruction-Set Computing (RISC): Uses fewer, simpler instructions than CISC architecture, requiring fewer clock cycles to execute. Optimized for systems in which the fetch and execute phases are approximately equal. CPUs that have RISC architecture include Alpha, PowerPC, and SPARC.

Microprocessors are also often described as scalar or superscalar. A scalar processor executes a single instruction at a time. A superscalar processor can execute multiple instructions concurrently.

Finally, many systems (microprocessors) are classified according to additional functionality (which must be supported by the installed operating system):

  • Multitasking: Alternates the execution of multiple subprograms or tasks on a single processor.
  • Multiprogramming: Alternates the execution of multiple programs on a single processor.
  • Multiprocessing: Executes multiple programs on multiple processors simultaneously.

Two related concepts are multistate and multiuser systems that, more correctly, refer to operating system capabilities:

  • Multistate: The operating system supports multiple operating states, such as single-user and multiuser modes in the UNIX/Linux world and Normal and Safe modes in the Windows world.
  • Multiuser: The operating system can differentiate between users. For example, it provides different shell environments, profiles, or privilege levels for each user, as well as process isolation between users.

An important security issue in multiuser systems involves privileged accounts, and programs or processes that run in a privileged state. Programs such as su (UNIX/Linux) and RunAs (Windows) allow a user to switch to a different account, such as root or administrator, and execute privileged commands in this context. Many programs rely on privileged service accounts to function properly. Utilities such as IBM’s Superzap, for example, are used to install fixes to the operating system or other applications.

BUS

The bus is a group of electronic conductors that interconnect the various components of the computer, transmitting signals, addresses, and data between these components. Bus structures are organized as follows:

  • Data bus: Transmits data between the CPU, memory, and peripheral devices.
  • Address bus: Transmits addresses of data and instructions between the CPU and memory.
  • Control bus: Transmits control information (device status) between the CPU and other devices.

Main memory

Main memory (also known as main storage) is the part of the computer that stores programs, instructions, and data. The two basic types of physical (or real — as opposed to virtual — more on that later) memory are

  • Random Access Memory (RAM): Volatile memory (data is lost if power is removed) that can be directly addressed and whose stored data can be altered. RAM is typically implemented in a computer’s architecture as cache memory and primary memory. The two main types of RAM are
    • Dynamic RAM (DRAM): Must be refreshed (the contents rewritten) every two milliseconds because of capacitance decay. Refreshing is accomplished by using multiple clock signals known as multiphase clock signals.
    • Static RAM (SRAM): Faster than DRAM and uses circuit latches to represent data, so it doesn’t need to be refreshed. Because SRAM doesn’t need to be refreshed, a single-phase clock signal is used.
  • Read-Only Memory (ROM): Nonvolatile memory (data is retained even if power is removed) that can be directly addressed but whose stored data can’t be easily altered. ROM is typically implemented in a computer’s architecture as firmware (which we discuss in the following section). Variations of ROM include
    • Programmable Read-Only Memory (PROM): This type of ROM can be written (programmed) only once and can’t be rewritten afterward.
    • Erasable Programmable Read-Only Memory (EPROM): This type of ROM is erased by shining ultraviolet light into the small window on the top of the chip. (No, we aren’t kidding.)
    • Electrically Erasable Programmable Read-Only Memory (EEPROM): This type of ROM can be erased and reprogrammed electrically, without UV light; it was one of the first that could be. Also known as Electrically Alterable Read-Only Memory (EAROM).
    • Flash Memory: A type of EEPROM that is erased and rewritten in blocks; it’s commonly used in USB thumb drives and solid-state drives.

remember Be sure you don’t confuse the term “main storage” with the storage provided by hard drives.

SECONDARY MEMORY

Secondary memory (also known as secondary storage) is a variation of these two basic types of physical memory. It provides dynamic storage on nonvolatile magnetic media such as hard drives, solid-state drives, or tape drives (which are considered sequential memory because data can’t be directly accessed — instead, you must search from the beginning of the tape). Virtual memory (such as a paging file, swap space, or swap partition) is a type of secondary memory that uses both installed physical memory and available hard-drive space to present a larger apparent memory space to the CPU than actually exists in main storage.
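Virtual memory's trick of presenting a larger apparent memory space than physically exists can be sketched with a toy page table; the page size, frame counts, and memory contents here are invented for illustration:

```python
# Toy virtual-memory model: a page table maps each virtual page either to
# a frame in physical RAM or to a slot in a swap file on disk, so the CPU
# sees more apparent memory than is physically installed.

PAGE = 4                          # 4 "bytes" per page, to keep things tiny
ram  = list(range(100, 108))      # 2 physical frames of 4 cells each
swap = list(range(200, 216))      # 4 swapped-out pages on disk

# virtual page -> (backing store, frame/slot index)
page_table = {0: ("ram", 0), 1: ("swap", 0), 2: ("ram", 1), 3: ("swap", 2)}

def read(virtual_addr):
    page, offset = divmod(virtual_addr, PAGE)
    where, index = page_table[page]
    backing = ram if where == "ram" else swap
    return backing[index * PAGE + offset]

print(read(1))   # 101: virtual page 0 lives in RAM frame 0
print(read(5))   # 201: virtual page 1 lives in the swap file
```

A real MMU also handles page faults (bringing a swapped page into RAM on demand), which this sketch omits.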

Two important security concepts associated with memory are the protection domain (also called protected memory) and memory addressing.

A protection domain prevents other programs or processes from accessing and modifying the contents of address space that’s already been assigned to another active program or process. This protection can be performed by the operating system or implemented in hardware. The purpose of a protection domain is to protect the memory space assigned to a process so that no other process can read from the space or alter it. The memory space occupied by each process can be considered private.

Memory space describes the amount of physical memory available in a computer system (for example, 2 GB), whereas address space specifies where memory is located in a computer system (a memory address). Memory addressing describes the method used by the CPU to access the contents of memory. A physical memory address is a hard-coded address assigned to physically installed memory; it’s accessible only to the operating system, which maps physical addresses to virtual addresses. A virtual (or symbolic) memory address is the address used by applications (and programmers) to specify a desired location in memory. Common virtual memory addressing modes include

  • Base addressing: An address used as the origin for calculating other addresses.
  • Absolute addressing: An address that identifies a location without reference to a base address — or it may be a base address itself.
  • Indexed addressing: Specifies an address relative to an index register. (If the index register changes, the resulting memory location changes.)
  • Indirect addressing: The specified address contains the address to the final desired location in memory.
  • Direct addressing: Specifies the address of the final desired memory location.

remember Don’t confuse the concepts of virtual memory and virtual addressing. Virtual memory combines physical memory and hard drive space to create a larger apparent memory space. Virtual addressing is the method used by applications and programmers to specify a desired location in memory.
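The addressing modes listed above can be illustrated with a toy memory array; the addresses, register values, and contents here are invented, and all five modes are set up to resolve to the same cell:

```python
# Toy illustration of addressing modes against a flat memory array.

mem = [0] * 16
mem[3] = 42      # the data we want lives at address 3
mem[9] = 3       # address 9 holds a pointer to address 3

BASE = 2         # stand-in for a base register
INDEX = 1        # stand-in for an index register

direct   = mem[3]               # direct: address names the final location
absolute = mem[3]               # absolute: no base register involved
based    = mem[BASE + 1]        # base address + displacement = 3
indexed  = mem[BASE + INDEX]    # base + index register = 3
indirect = mem[mem[9]]          # address 9 contains the final address

print(direct, absolute, based, indexed, indirect)  # 42 42 42 42 42
```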

Firmware

Firmware is a program or set of computer instructions stored in the physical circuitry of ROM memory. These types of programs are typically changed infrequently or not at all. In servers and user workstations, firmware usually stores the initial computer instructions that are executed when the server or workstation is powered on; the firmware starts the CPU and other onboard chips, and establishes communications by using the keyboard, monitor, network adaptor, and hard drive. The firmware retrieves blocks of data from the hard drive that are then used to load and start the operating system.

A computer’s BIOS is a common example of firmware. BIOS, or Basic Input-Output System, contains instructions needed to start a computer when it’s first powered on, initialize devices, and load the operating system from secondary storage (such as a hard drive).

Firmware is also found in devices such as smartphones, tablets, DSL/cable modems, and practically every other type of Internet-connected device, such as automobiles, thermostats, and even your refrigerator.

Firmware is typically stored on one or more ROM chips on a computer’s motherboard (the main circuit board containing the CPU(s), memory, and other circuitry).

Software

Software includes the operating system and programs or applications that are installed on a computer system. We cover software security in Chapter 10.

OPERATING SYSTEMS

A computer operating system (OS) is the software that controls the workings of a computer, enabling the computer to be used. The operating system can be thought of as a logical platform, through which other programs can be run to perform work.

The main components of an operating system are

  • Kernel: The core component of the operating system that manages processes, controls hardware devices, and handles communications with external devices and systems.
  • Device drivers: Software modules used by the kernel to communicate with internal and external devices that may be connected to the computer.
  • Tools: Independent programs that perform specific maintenance functions, such as filesystem repair or network testing. Tools can be run automatically or manually.

The operating system controls a computer’s resources. The main functions of the operating system are

  • Process management: Sets up an environment in which multiple independent processes (programs) can run.
  • Resource management: Controls access to all available resources, using schemes that may be based on priority or efficiency.
  • I/O device management: Controls communication to all devices that are connected to the computer, including hard drives, printers, monitors, keyboard, mouse, and so on.
  • Memory management: Controls the allocation and access to main memory (RAM), allocating it to processes, as well as general uses such as disk caching.
  • File management: Controls the file systems that are present on hard drives and other types of devices, and performs all file operations on behalf of individual processes.
  • Communications management: Controls communications on all available communications media on behalf of processes.
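As a sketch of the process management function above, here is a toy round-robin scheduler, one common way an operating system shares a single processor among multiple processes. The process names, burst times, and time quantum are invented for illustration:

```python
from collections import deque

# Toy round-robin scheduler: each ready process runs for one fixed time
# slice (quantum); unfinished processes go to the back of the ready queue.

def round_robin(processes, quantum=2):
    """processes: list of (name, remaining_time). Returns completion order."""
    ready = deque(processes)
    finished = []
    while ready:
        name, remaining = ready.popleft()
        remaining -= quantum                 # run for one time slice
        if remaining > 0:
            ready.append((name, remaining))  # not done: back of the queue
        else:
            finished.append(name)            # done: record completion
    return finished

print(round_robin([("editor", 3), ("backup", 6), ("shell", 1)]))
# ['shell', 'editor', 'backup']
```

Real schedulers also weigh priorities, I/O waits, and fairness; this sketch shows only the time-slicing idea.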

VIRTUALIZATION

A virtual machine is a software implementation of a computer, enabling many running copies of an operating system to execute on a single running computer without interfering with each other. Virtual machines are typically controlled by a hypervisor, a software program that allocates resources for each resident operating system (called a guest).

A hypervisor serves as an operating system for multiple operating systems. One of the strengths of virtualization is that the resident operating system has little or no awareness of the fact that it’s running as a guest — instead, it may believe that it has direct control of the computer’s hardware. Only your system administrator knows for sure.

CONTAINERIZATION

A container is a lightweight, standalone executable package of software that includes everything it needs to run. A container is essentially a bare-bones virtual machine that has only the minimum software necessary to deploy a given application. Docker is a popular container platform, and Kubernetes is a widely used system for orchestrating containers at scale.

Trusted Computing Base (TCB)

A Trusted Computing Base (TCB) is the entire complement of protection mechanisms within a computer system (including hardware, firmware, and software) that’s responsible for enforcing a security policy. A security perimeter is the boundary that separates the TCB from the rest of the system.

remember A Trusted Computing Base (TCB) is the total combination of protection mechanisms within a computer system (including hardware, firmware, and software) that’s responsible for enforcing a security policy.

Access control is the ability to permit or deny the use of an object (a passive entity, such as a system or file) by a subject (an active entity, such as an individual or a process).

remember Access control is the ability to permit or deny the use of an object (a system or file) by a subject (an individual or a process).

A reference monitor is a system component that enforces access controls on an object. Stated another way, a reference monitor is an abstract machine that mediates all access to an object by a subject.

remember A reference monitor is a system component that enforces access controls on an object.

A security kernel is the combination of hardware, firmware, and software elements in a Trusted Computing Base that implements the reference monitor concept. Three requirements of a security kernel are that it must:

  • Mediate all access
  • Be protected from modification
  • Be verified as correct

remember A security kernel is the combination of hardware, firmware, and software elements in a Trusted Computing Base (TCB) that implements the reference monitor concept.
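The reference monitor concept can be sketched as a single, deny-by-default choke point that consults an access matrix; the subjects, objects, and matrix contents below are invented for illustration:

```python
# Reference monitor sketch: ALL access by subjects to objects is mediated
# through this one function, and anything not explicitly granted is denied.

ACCESS_MATRIX = {
    ("alice", "payroll.db"): {"read"},
    ("bob", "payroll.db"): {"read", "write"},
}

def reference_monitor(subject, obj, action):
    """Mediate every access attempt; deny by default."""
    return action in ACCESS_MATRIX.get((subject, obj), set())

print(reference_monitor("alice", "payroll.db", "read"))    # True
print(reference_monitor("alice", "payroll.db", "write"))   # False
print(reference_monitor("mallory", "payroll.db", "read"))  # False: no entry
```

Note how the sketch reflects the three security kernel requirements in miniature: one function mediates all access, and in a real system that function would also have to be tamperproof and verified as correct.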

Trusted Platform Module (TPM)

A Trusted Platform Module (TPM) performs sensitive cryptographic functions on a physically separate, dedicated microprocessor. The TPM specification was written by the Trusted Computing Group (TCG) and is an international standard (ISO/IEC 11889 Series).

A TPM generates and stores cryptographic keys, and performs the following functions:

  • Attestation. Enables third-party verification of the system state using a cryptographic hash of the known good hardware and software configuration.
  • Binding. Binds a unique cryptographic key to specific hardware.
  • Sealing. Encrypts data with a unique cryptographic key and ensures that ciphertext can only be decrypted if the hardware is in a known good state.

Common TPM uses include ensuring platform integrity, full disk encryption, password and cryptographic key protection, and digital rights management.
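The sealing function can be sketched conceptually: data is recoverable only when the platform measurement matches the state at sealing time. Be aware that this toy construction (a hash-derived XOR keystream and a stand-in PCR value) is an assumption for illustration only; it is not real TPM behavior or secure cryptography:

```python
import hashlib

# Conceptual "sealing" sketch: the decryption key is derived from the
# platform measurement (a stand-in for a TPM PCR value), so ciphertext
# unseals correctly only when the platform is in the known good state.

def _keystream(pcr: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(pcr + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def seal(plaintext: bytes, pcr: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, _keystream(pcr, len(plaintext))))

unseal = seal  # XOR with the same keystream reverses itself

good_state = hashlib.sha256(b"known good firmware + bootloader").digest()
blob = seal(b"disk encryption key", good_state)

print(unseal(blob, good_state))  # recovered: b'disk encryption key'
bad_state = hashlib.sha256(b"tampered bootloader").digest()
print(unseal(blob, bad_state) == b"disk encryption key")  # False
```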

Secure modes of operation

Security modes are used in mandatory access control (MAC) systems to enforce different levels of security. Techniques and concepts related to secure modes of operation include:

  • Abstraction. The process of viewing an application from its highest-level functions, which makes all lower-level functions into abstractions. Lower-level functions are treated as black boxes — known to work, even if we don’t know how.
  • Data hiding. An object-oriented term that refers to the practice of encapsulating an object within another, in order to hide the first object’s functioning details.
  • System high mode. A system that operates at the highest level of information classification. Any user who wants to access such a system must have clearance at, or above, the information classification level.
  • Security kernel. Composed of hardware, software, and firmware components that mediate access and functions between subjects and objects. The security kernel is part of the protection rings model, in which the operating system kernel occupies the innermost ring and has full access to all system hardware and data; rings farther from the center carry progressively fewer access rights, so user programs in the outer rings have the fewest privileges.
  • Reference monitor. A component implemented by the security kernel that enforces access controls on data and devices on a system. In other words, when a user tries to access a file, the reference monitor ultimately performs the “Is this person allowed to access this file?” function.

remember The system’s reference monitor enforces access controls on a system.

Open and closed systems

An open system is a vendor-independent system that complies with a published and accepted standard. This compliance with open standards promotes interoperability between systems and components made by different vendors. Additionally, open systems can be independently reviewed and evaluated, which facilitates identification of bugs and vulnerabilities and the rapid development of solutions and updates. Examples of open systems include the Linux operating system, the OpenOffice desktop productivity suite, and the Apache web server.

A closed system uses proprietary hardware and/or software that may not be compatible with other systems or components. Source code for software in a closed system isn’t normally available to customers or researchers. Examples of closed systems include the Microsoft Windows operating system, Oracle database management system, and Apple iTunes.

technicalstuff The terms open systems and closed systems also refer to a system’s access model. A closed system does not allow access by default, whereas an open system does.

Protection rings

The concept of protection rings implements multiple concentric domains with increasing levels of trust near the center. The most privileged ring is identified as Ring 0 and normally includes the operating system’s security kernel. Additional system components are placed in the appropriate concentric ring according to the principle of least privilege, providing isolation so that a breach of a component in one protection ring does not automatically provide access to components in more privileged rings. The MIT MULTICS operating system implemented the concept of protection rings in its architecture, as did Novell NetWare.

Security modes

A system’s security mode of operation describes how a system handles stored information at various classification levels. Several security modes of operation, based on the classification level of information being processed on a system and the clearance level of authorized users, have been defined. These designations are typically used for U.S. military and government systems, and include

  • Dedicated: All authorized users must have a clearance level equal to or higher than the highest level of information processed on the system and a valid need-to-know.
  • System High: All authorized users must have a clearance level equal to or higher than the highest level of information processed on the system, but a valid need-to-know isn’t necessarily required.
  • Multilevel: Information at different classification levels is stored or processed on a trusted computer system (a system that employs all necessary hardware and software assurance measures and meets the specified requirements for reliability and security). Authorized users must have an appropriate clearance level, and access restrictions are enforced by the system accordingly.
  • Limited access: Authorized users aren’t required to have a security clearance, but the highest level of information on the system is Sensitive but Unclassified (SBU).

remember A trusted computer system is a system with a Trusted Computing Base (TCB).

Security modes of operation generally come into play in environments that contain highly sensitive information, such as government and military environments. Most private-sector and educational systems run in multilevel mode, meaning they contain information at all sensitivity levels. See Chapter 3 for more on security clearance levels.
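The clearance rules for the dedicated, system high, and multilevel modes can be sketched as a simple check; the integer encoding of the clearance levels is an illustrative assumption:

```python
# Sketch of the access rules for U.S. government security modes of operation.

LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_access(mode, clearance, classification, need_to_know):
    """classification: highest level on the system (or, for multilevel,
    the level of the specific object being accessed)."""
    cleared = LEVELS[clearance] >= LEVELS[classification]
    if mode == "dedicated":
        return cleared and need_to_know     # clearance AND need-to-know
    if mode == "system high":
        return cleared                      # need-to-know not required
    if mode == "multilevel":
        return cleared                      # enforced per object by the system
    raise ValueError(f"unknown mode: {mode}")

print(can_access("dedicated", "Secret", "Secret", need_to_know=True))        # True
print(can_access("dedicated", "Secret", "Secret", need_to_know=False))       # False
print(can_access("system high", "Top Secret", "Secret", need_to_know=False)) # True
```

In a real multilevel system the check runs against each object's own classification and enforces need-to-know as well; this sketch compresses those rules for brevity.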

Recovery procedures

A hardware or software failure can potentially compromise a system’s security mechanisms. Security designs that protect a system during a hardware or software failure include

  • Fault-tolerant systems: These systems continue to operate after the failure of a computer or network component. The system must be capable of detecting and correcting — or circumventing — a fault.
  • Fail-safe systems: When a hardware or software failure is detected, program execution is terminated, and the system is protected from compromise.
  • Fail-soft (resilient) systems: When a hardware or software failure is detected, certain noncritical processing is terminated, and the computer or network continues to function in a degraded mode.
  • Failover systems: When a hardware or software failure is detected, the system automatically transfers processing to a component, such as a clustered server.

Vulnerabilities in security architectures

Unless detected (and corrected) by an experienced security analyst, many weaknesses may be present in a system and permit exploitation, attack, or malfunction. We discuss the most important problems in the following list:

  • Covert channels: Unknown, hidden communications that take place within the medium of a legitimate communications channel.
  • Rootkits: By their very nature, rootkits are designed to subvert system architecture by inserting themselves into an environment in a way that makes them difficult or impossible to detect, using a variety of techniques to hide themselves from the target system. For instance, some rootkits run as a hypervisor and change the computer’s operating system into a guest, which changes the basic nature of the system in a powerful but subtle way. We wouldn’t normally discuss malware in a chapter on computer and security architecture, but rootkits are a game-changer that warrants mention here.
  • Race conditions: Software code in multiprocessing and multiuser systems, unless very carefully designed and tested, can result in critical errors that are difficult to find. A race condition is a flaw in a system where the output or result of an activity in the system is unexpectedly tied to the timing of other events. The term race condition comes from the idea of two events or signals that are racing to influence an activity.

    The most common race condition is the time-of-check-to-time-of-use bug caused by changes in a system between the checking of a condition and the use of the results of that check. For example, two programs that both try to open a file for exclusive use may both open the file, even though only one should be able to.

  • State attacks: Web-based applications use session management to distinguish users from one another. The mechanisms used by the web application to establish sessions must be able to resist attack. Primarily, the algorithms used to create session identifiers must prevent an attacker from stealing or guessing other users’ session identifiers. A successful attack would result in an attacker taking over another user’s session, which can lead to the compromise of confidential data, fraud, and monetary theft.
  • Emanations: The unintentional emissions of electromagnetic or acoustic energy from a system can be intercepted by others and possibly used to illicitly obtain information from the system. A common form of undesired emanations is radiated energy from CRT (cathode-ray tube, yes… they’re still out there, and not just in old movies!) computer monitors. A third party can discover what data is being displayed on a CRT by intercepting radiation emanating from the display adaptor or monitor from as far as several hundred meters. A third party can also eavesdrop on a network if it has one or more un-terminated coaxial cables in its cable plant.
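The time-of-check-to-time-of-use race described above can often be avoided by replacing a separate check and use with a single atomic operation; this sketch uses POSIX-style exclusive file creation, so there is no gap between the existence check and the creation for an attacker (or a second program) to race into:

```python
import os
import tempfile

# TOCTOU avoidance sketch: instead of "if not exists, then create" (two
# steps with a race window between them), ask the OS to create-exclusively
# in one atomic call. O_EXCL makes os.open fail if the file already exists.

path = os.path.join(tempfile.mkdtemp(), "lockfile")

def open_exclusively(p):
    """Atomically create p; return False if it already exists."""
    try:
        fd = os.open(p, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except FileExistsError:
        return False

print(open_exclusively(path))  # True: the first caller wins
print(open_exclusively(path))  # False: the second caller is safely refused
```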

Assess and Mitigate the Vulnerabilities of Security Architectures, Designs, and Solution Elements

In this section, we discuss the techniques used to identify and fix vulnerabilities in systems. We touch lightly on techniques for security assessment and testing, which are explored fully in Chapter 8.

Client-based systems

The types of design vulnerabilities often found on endpoints involve defects in client-side code that is present in browsers and applications. The defects most often found include these:

  • Sensitive data left behind in the file system. Generally, this consists of temporary files and cache files, which may be accessible by other users and processes on the system.
  • Unprotected local data. Local data stores may have loose permissions and lack encryption.
  • Vulnerable applets. Many browsers and other client applications often employ applets for viewing documents and video files. Often, the applets themselves may have exploitable weaknesses.
  • Unprotected or weakly protected communications. Data transmitted between the client and other systems may use weak encryption, or use no encryption at all.
  • Weak or nonexistent authentication. Authentication methods on the client, or between the client and server systems, may be unnecessarily weak. This permits an adversary to access the application, local data, or server data without first authenticating.

Other weaknesses may be present in client systems. For a more complete understanding of application weaknesses, consult www.owasp.org.

Identifying weaknesses like the preceding examples will require one or more of the following techniques:

  • Operating system examination
  • Network sniffing
  • Code review
  • Manual testing and observation

Server-based systems

Design vulnerabilities found on servers fall into the following categories:

  • Sensitive data left behind in the file system. Generally, this consists of temporary files and cache files, which may be accessible by other users and processes on the system.
  • Unprotected local data. Local data stores may have loose permissions and also lack encryption.
  • Unprotected or weakly protected communications. Data transmitted between the server and other systems (including clients) may use weak encryption, or use no encryption at all.
  • Weak or nonexistent authentication. Authentication methods on the server may be unnecessarily weak. This permits an adversary to access the application, local data, or server data without first authenticating.

These defects are similar to those in the preceding section on client-based systems, because the terms client and server are a matter of perspective: in both cases, software is running on a system.

Database systems

Database management systems are nearly as complex as the operating systems on which they reside. Vulnerabilities in database management systems include these:

  • Loose access permissions. Like applications and operating systems, database management systems have schemes of access controls that are often designed far too loosely, which permits more access to critical and sensitive information than is appropriate. Another aspect of loose access permissions is an excessive number of persons with privileged access. Finally, there can be failures to implement cryptography as an access control when appropriate.
  • Excessive retention of sensitive data. Keeping sensitive data longer than necessary increases the impact of a security breach.
  • Aggregation of personally identifiable information. Aggregating data about individuals is a potentially risky undertaking that can result in an organization possessing sensitive personal information. Sometimes, this happens when an organization deposits historic data from various sources into a data warehouse, where this disparate sensitive data is brought together for the first time. The result is a gold mine or a time bomb, depending on how you look at it.

Database security defects can be identified through manual examination or automated tools. Mitigation may be as easy as changing access permissions or as complex as redesigning the database schema and related application software programs.

Large-scale parallel data systems

Large-scale parallel data systems are systems with large numbers of processors. The processors may either reside in one physical location or be geographically distributed. Vulnerabilities in these systems include

  • Loose access permissions. Management interfaces or the processing systems themselves may have default, easily guessed, or shared logon credentials that would permit an intruder to easily attack the system.
  • Unprotected or weakly protected communications. Data transmitted between systems may be using either weak encryption or no encryption at all. This could enable an attacker to obtain sensitive data in transit or enough knowledge to compromise the system.

Security defects in parallel systems can be identified through manual examination and mitigated through either configuration changes or system design changes.

Distributed systems

Distributed systems are simply systems with components scattered throughout physical and logical space. Oftentimes, these components are owned and/or managed by different groups or organizations, sometimes in different countries. Some components may be privately used while others represent services available to the public (for example, Google Maps). Vulnerabilities in distributed systems include these:

  • Loose access permissions. Individual components in a distributed system may have individual, separate access control systems, or there may be one overarching access control system for all of the distributed system’s components. Either way, there are too many opportunities for access permissions to be too loose, thereby enabling some subjects access to more data and functions than they need.
  • Unprotected or weakly protected communications. Data transmitted between the server and other systems (including clients) may be using either weak encryption or no encryption at all.
  • Weak security inheritance. What we mean here is that in a distributed system, one component having weak security may compromise the security of the entire system. For example, a publicly accessible component may have direct open access to other components, bypassing local controls in those other components.
  • Lack of centralized security and control. A distributed system that is controlled by more than one organization often lacks overall oversight for security management and security operations.

    This is especially true of peer-to-peer systems that are often run by end users on lightly managed or unmanaged endpoints.

  • Critical paths. A critical path weakness is one where a system’s continued operation depends on the availability of a single component.

All of these weaknesses can also be present in simpler environments. These weaknesses and other defects can be detected through either the use of security scanning tools or manual techniques, and corrective actions taken to mitigate those defects.

tip High quality standards for cloud computing — for cloud service providers as well as organizations using cloud services — can be found at the Cloud Security Alliance (www.cloudsecurityalliance.org) and the European Network and Information Security Agency (ENISA — www.enisa.europa.eu).

Cryptographic systems

Cryptographic systems are especially apt to contain vulnerabilities, for the simple reason that people focus on the cryptographic algorithm but fail to implement it properly. Like any powerful tool, a cryptosystem whose operator doesn’t know how to use it is useless at best and dangerous at worst.

The ways in which a cryptographic system may be vulnerable include these:

  • Use of outdated algorithm. Developers and engineers must be careful to select encryption algorithms that are robust. Furthermore, algorithms in use should be reviewed at least once per year to ensure they continue to be sufficient.
  • Use of untested algorithm. Engineers sometimes make the mistake of either home-brewing their own cryptographic system or using one that is clearly insufficient. It’s best to use one of many publicly available cryptosystems that have stood the test of repeated scrutiny.
  • Failure to encrypt encryption keys. A proper cryptosystem sometimes requires that encryption keys themselves be encrypted.
  • Weak cryptographic keys. Choosing a great algorithm is all but undone if the initialization vector is too small, or too-short keys or too-simple keys are used.
  • Insufficient protection of cryptographic keys. A cryptographic system is only as strong as the protection of its encryption keys. If too many people have access to keys, or if the keys are not sufficiently protected, an intruder may be able to compromise the system simply by stealing and using the keys. Separate encryption keys should be used for the data encryption key (DEK) used to encrypt/decrypt data and the key encryption key (KEK) used to encrypt/decrypt the DEK.

These and other vulnerabilities in cryptographic systems can be detected and mitigated through peer reviews of cryptosystems, assessments by qualified external parties, and the application of corrective actions to fix defects.

Industrial control systems

Industrial control systems (ICS) represent a wide variety of means for monitoring and controlling machinery of various kinds, including power generation, distribution, and consumption; natural gas and petroleum pipelines; municipal water, irrigation, and waste systems; traffic signals; manufacturing; and package distribution.

Weaknesses in industrial control systems include the following:

  • Loose access permissions. Access to the monitoring and control functions of an ICS is often granted too loosely, thereby giving some users or systems access to more data and control than they need.
  • Failure to change default access credentials. All too often, organizations implement ICS components and fail to change the default administrative credentials on those components. This makes it far too easy for intruders to take over the ICS.
  • Access from personally owned devices. In the name of convenience, some organizations permit personnel to control machinery from personally owned smartphones and tablets. This vastly increases the ICS’s attack surface and provides opportunities for intruders to access and control critical machinery.
  • Lack of malware control. Many ICSs lack security components that detect and block malware and other malicious activity, making it far too easy for intruders to get into the ICS.
  • Failure to air gap the ICS. Many organizations fail to air gap (isolate) the ICS from the rest of its corporate network, thereby enabling excessive opportunities for malware and intruders to access the ICS via a corporate network where users invite malware through phishing and other means.
  • Failure to update ICS components. While the manufacturers of ICS components are notorious for failing to issue security patches, organizations are equally culpable in their failure to install these patches when they do arrive.

These vulnerabilities can be mitigated through a systematic process of establishing good controls, testing control effectiveness, and applying corrective action when controls are found to be ineffective.

Cloud-based systems

The U.S. National Institute of Standards and Technology (NIST) defines three cloud computing service models as follows:

  • Software as a Service (SaaS): Customers are provided access to an application running on a cloud infrastructure. The application is accessible from various client devices and interfaces, but the customer has no knowledge of, and does not manage or control, the underlying cloud infrastructure. The customer may have access to limited user-specific application settings.
  • Platform as a Service (PaaS): Customers can deploy supported applications onto the provider’s cloud infrastructure, but the customer has no knowledge of, and does not manage or control, the underlying cloud infrastructure. The customer has control over the deployed applications and limited configuration settings for the application-hosting environment.
  • Infrastructure as a Service (IaaS): Customers can provision processing, storage, networks, and other computing resources and deploy and run operating systems and applications, but the customer has no knowledge of, and does not manage or control, the underlying cloud infrastructure. The customer has control over operating systems, storage, and deployed applications, as well as some networking components (for example, host firewalls).

NIST further defines four cloud computing deployment models as follows:

  • Public: A cloud infrastructure that is open to use by the general public. It’s owned, managed, and operated by a third party (or parties) and exists on the cloud provider’s premises.
  • Community: A cloud infrastructure that is used exclusively by a specific group of organizations.
  • Private: A cloud infrastructure that is used exclusively by a single organization. It may be owned, managed, and operated by the organization or a third party (or a combination of both), and may exist on or off premises.
  • Hybrid: A cloud infrastructure that is composed of two or more of the aforementioned deployment models, bound together by standardized or proprietary technology that enables data and application portability (for example, failover to a secondary data center for disaster recovery or content delivery networks across multiple clouds).

Major public cloud service providers such as Amazon Web Services, Microsoft Azure, Google Cloud Platform, and Oracle Cloud Platform provide customers not only with virtually unlimited compute and storage at scale, but also a depth and breadth of security capabilities that often exceeds the capabilities of the customers themselves. However, this does not mean that cloud-based systems are inherently secure. The shared responsibility model is used by public cloud service providers to clearly define which aspects of security the provider is responsible for, and which aspects the customer is responsible for. SaaS models place the most responsibility on the cloud service provider, typically including securing the following:

  • Applications and data
  • Runtime and middleware
  • Servers, virtualization, and operating systems
  • Storage and networking
  • Physical data center

However, the customer is always ultimately responsible for the security and privacy of its data. Additionally, identity and access management (IAM) is typically the customer’s responsibility.

In a PaaS model, the customer is typically responsible for the security of its applications and data, as well as IAM, among others.

In an IaaS model, the customer is typically responsible for the security of its applications and data, runtime and middleware, and operating systems. The cloud service provider is typically responsible for the security of networking and the data center (although cloud service providers generally do not provide firewalls). Virtualization, server, and storage security may be managed by either the cloud service provider or customer.

tip The Cloud Security Alliance (CSA) publishes the Cloud Controls Matrix, which provides a framework for information security that is specifically designed for the cloud industry.

Internet of Things

The security of Internet of Things (IoT) devices and systems is a rapidly evolving area of information security. IoT sensors and devices collect large amounts of both potentially sensitive data and seemingly innocuous data. However, because under certain circumstances practically any collected data can be used for nefarious purposes, security must be a critical design consideration for IoT devices and systems. This includes not only securing the data stored on the systems, but also how the data is collected, transmitted, processed, and used. There are many networking and communications protocols commonly used in IoT devices, including the following:

  • IPv6 over Low power Wireless Personal Area Networks (6LoWPAN)
  • 5G
  • Wi-Fi
  • Bluetooth Mesh and Bluetooth Low-Energy (BLE)
  • Thread
  • Zigbee, and many others

The security of these various protocols and their implementations must also be carefully considered in the design of secure IoT devices and systems.

Assess and Mitigate Vulnerabilities in Web-Based Systems

Web-based systems contain many components, including application code, database management systems, operating systems, middleware, and the web server software itself. These components may, individually and collectively, have security design or implementation defects. Some of the defects present include these:

  • Failure to block injection attacks. Attacks such as JavaScript injection and SQL injection can permit an attacker to cause a web application to malfunction and expose sensitive internally stored data.
  • Defective authentication. There are many, many ways in which a web site can implement authentication — they are too numerous to list here. Authentication is essential to get right; many sites fail to do so.
  • Defective session management. Web servers create logical “sessions” to keep track of individual users. Many web sites’ session management mechanisms are vulnerable to abuse, most notably attacks that permit an attacker to take over another user’s session.
  • Failure to block cross-site scripting attacks. Web sites that fail to examine and sanitize input data can allow attackers to craft input that sends malicious content to the user.
  • Failure to block cross-site request forgery attacks. Web sites that fail to employ proper session and session context management can be vulnerable to attacks in which users are tricked into sending commands to web sites that may cause them harm.

    The example we like to use is where an attacker tricks a user into clicking a link that actually takes the user to a URL like this: http://bank.com/transfer?tohackeraccount:amount=99999.99.

  • Failure to protect direct object references. Web sites can sometimes be tricked into accessing and sending data to a user who is not authorized to view or modify it.
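The first bullet, injection, can be illustrated with Python’s built-in sqlite3 module. This is a minimal sketch (the table, data, and attacker input are hypothetical) contrasting naive string concatenation with a parameterized query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'top-secret')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: concatenating input into SQL lets the attacker rewrite the query.
leaked = conn.execute(
    "SELECT secret FROM users WHERE name = '" + attacker_input + "'"
).fetchall()

# Safer: a parameterized query treats the input strictly as data, not SQL syntax.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print(leaked)  # → [('top-secret',)]  (injection succeeded)
print(safe)    # → []                 (injection blocked)
```

In the parameterized form, the injected OR clause never reaches the SQL parser as syntax, so the query matches no rows.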

These vulnerabilities can be mitigated in three main ways:

  • Developer training on the techniques of safer software development.
  • Including security in the development lifecycle.
  • Use of dynamic and static application scanning tools.

tip For a more in-depth review of vulnerabilities in web-based systems, read the “Top 10” list at www.owasp.org.

Assess and Mitigate Vulnerabilities in Mobile Systems

Mobile systems include the operating systems and applications on smartphones, tablets, phablets, smart watches, and wearables. The most popular operating system platforms for mobile systems are Apple iOS, Android, and Windows 10.

The vulnerabilities that are found on mobile systems include

  • Lack of robust resource access controls. History has shown us that some mobile OSs lack robust controls that govern which apps are permitted to access resources on the mobile device, including:
    • Locally stored data
    • Contact list
    • Camera roll
    • Email messages
    • Location services
    • Camera
    • Microphone
  • Insufficient security screening of applications. Some mobile platform environments are quite good at screening out applications that contain security flaws or outright break the rules, but other platforms have more of an “anything goes” policy, apparently. The result is buyer beware: Your mobile app may be doing more than advertised.
  • Security settings defaults too lax. Many mobile platforms lack enforcement of basic security and, for example, don't require devices to automatically lock or have lock codes.

In a managed corporate environment, the use of a mobile device management (MDM) system can mitigate many or all of these risks. For individual users, mitigation comes down to doing the right thing and using strong security settings.

Assess and Mitigate Vulnerabilities in Embedded Devices

Embedded devices encompass the wide variety of systems and devices that are Internet connected. Mainly, we’re talking about devices that are not human connected in the computing sense. Examples of such devices include

  • Automobiles and other vehicles.
  • Home appliances, such as clothes washers and dryers, ranges and ovens, refrigerators, thermostats, televisions, video games, video surveillance systems, and home automation systems.
  • Medical care devices, such as IV infusion pumps and patient monitoring.
  • Heating, ventilation, and air conditioning (HVAC) systems.
  • Commercial video surveillance and key card systems.
  • Automated payment kiosks, fuel pumps, and automated teller machines (ATMs).
  • Network devices such as routers, switches, modems, firewalls, and so on.

These devices often run embedded systems, which are specialized operating systems designed to run on devices lacking computer-like human interaction through a keyboard or display. They still have an operating system that is very similar to that found on endpoints like laptops and mobile devices.

Some of the design defects in this class of device include

  • Lack of a security patching mechanism. Most of these devices utterly lack any means for remediating security defects that are found after manufacture.
  • Lack of anti-malware mechanisms. Most of these devices have no built-in defenses at all. They’re completely defenseless against attack by an intruder.
  • Lack of robust authentication. Many of these devices have simple, easily guessed default login credentials that cannot be changed (or, if they can be, are rarely changed by their owners).
  • Lack of monitoring capabilities. Many of these devices lack any means for sending security and event alerts.

Because the majority of these devices cannot be altered, mitigation of these defects typically involves isolation of these devices on separate, heavily guarded networks that have tools in place to detect and block attacks.

tip Many manufacturers of embedded, network-enabled devices do not permit customers to alter their configuration or apply security settings. This compels organizations to place these devices on separate, guarded networks.

Apply Cryptography

Cryptography (from the Greek kryptos, meaning hidden, and graphia, meaning writing) is the science of encrypting and decrypting communications to make them unintelligible for all but the intended recipient.

Cryptography can be used to achieve several goals of information security, including confidentiality, integrity, and authentication.

  • Confidentiality: First, cryptography protects the confidentiality (or secrecy) of information. Even when the transmission or storage medium has been compromised, the encrypted information is practically useless to unauthorized persons without the proper keys for decryption.
  • Integrity: Cryptography can also be used to ensure the integrity (or accuracy) of information through the use of hashing algorithms and message digests.
  • Authentication: Finally, cryptography can be used for authentication (and non-repudiation) services through digital signatures, digital certificates, or a Public Key Infrastructure (PKI).
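The integrity goal can be seen in a few lines of code; this minimal sketch uses Python’s standard hashlib module, with a hypothetical message, to show that any change to a message produces a completely different digest:

```python
import hashlib

message = b"Transfer $100 to account 12345"   # hypothetical message
digest = hashlib.sha256(message).hexdigest()

# Even a one-character change yields an entirely different digest,
# which is how hashing supports integrity checking.
tampered = b"Transfer $900 to account 12345"
assert hashlib.sha256(tampered).hexdigest() != digest
```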

remember The CISSP exam tests the candidate’s ability to apply general cryptographic concepts to real-world issues and problems. You don’t have to memorize cryptographic algorithms or the step-by-step operation of various cryptographic systems. However, you should have a firm grasp of cryptographic concepts and technologies, as well as their specific strengths, weaknesses, uses, and applications.

warning Don’t confuse these three points with the C-I-A triad, which we discuss in Chapter 3: The C-I-A triad deals with confidentiality, integrity, and availability; cryptography does nothing to ensure availability.

Cryptography today has evolved into a complex science (some say an art) presenting many great promises and challenges in the field of information security. The basics of cryptography include various terms and concepts, the individual components of the cryptosystem, and the classes and types of ciphers.

Cryptographic lifecycle

The cryptographic lifecycle is the sequence of events that occurs throughout the use of cryptographic controls in a system. These steps include

  • Development of requirements for a cryptosystem.
  • Selection of cryptographic controls.
  • Implementation of cryptosystem.
  • Examination of cryptosystem for proper implementation, effective key management, and efficacy of cryptographic algorithms.
  • Rotation of cryptographic keys.
  • Mitigation of any defects identified.

These steps are not altogether different from the selection, implementation, examination, and correction of any other type of security control in a network and computing environment. Like virtually any other component in a network and computing environment, components in a cryptosystem must be periodically examined to ensure that they are still effective and being operated properly.

Plaintext and ciphertext

A plaintext message is a message in its original readable format or a ciphertext message that has been properly decrypted (unscrambled) to produce the original readable plaintext message.

A ciphertext message is a plaintext message that has been transformed (encrypted) into a scrambled message that’s unintelligible. This term doesn’t apply to messages from your boss that may also happen to be unintelligible!

Encryption and decryption

Encryption (or enciphering) is the process of converting plaintext communications into ciphertext. Decryption (or deciphering) reverses that process, converting ciphertext into plaintext. (See Figure 5-2.)


FIGURE 5-2: Encryption and decryption.

Traffic on a network can be encrypted by using either end-to-end or link encryption.

End-to-end encryption

With end-to-end encryption, packets are encrypted once at the original encryption source and then decrypted only at the final decryption destination. The advantages of end-to-end encryption are its speed and overall security. However, in order for the packets to be properly routed, only the data is encrypted, not the routing information.

Link encryption

Link encryption requires that each node (for example, a router) have separate key pairs for its upstream and downstream neighbors. Packets are decrypted and then re-encrypted at every node along the network path.

The following example, as shown in Figure 5-3, illustrates link encryption:

  1. Computer 1 encrypts a message by using Secret Key A, and then transmits the message to Router 1.
  2. Router 1 decrypts the message by using Secret Key A, re-encrypts the message by using Secret Key B, and then transmits the message to Router 2.
  3. Router 2 decrypts the message by using Secret Key B, re-encrypts the message by using Secret Key C, and then transmits the message to Computer 2.
  4. Computer 2 decrypts the message by using Secret Key C.

FIGURE 5-3: Link encryption.
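The four steps above can be sketched with a toy XOR cipher. This is an illustration of the hop-by-hop decrypt/re-encrypt pattern only; XOR with a repeating key is not a secure algorithm, and the keys and message are hypothetical:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy cipher: XOR each byte with a repeating key (illustration only, not secure).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key_a, key_b, key_c = b"secret-A", b"secret-B", b"secret-C"  # one key per link

msg = b"PAY 99999.99"
hop1 = xor_cipher(msg, key_a)                      # Computer 1 -> Router 1 (Key A)
hop2 = xor_cipher(xor_cipher(hop1, key_a), key_b)  # Router 1: decrypt A, re-encrypt B
hop3 = xor_cipher(xor_cipher(hop2, key_b), key_c)  # Router 2: decrypt B, re-encrypt C
plain = xor_cipher(hop3, key_c)                    # Computer 2: decrypt C
assert plain == msg
```

Note that every node briefly holds the plaintext between decryption and re-encryption, which is exactly the inherent vulnerability of link encryption.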

The advantage of using link encryption is that the entire packet (including routing information) is encrypted. However, link encryption has the following two disadvantages:

  • Latency: Packets must be encrypted/decrypted at every node, which creates latency (delay) in the transmission of those packets.
  • Inherent vulnerability: If a node is compromised or a packet’s decrypted contents are cached in a node, the message can be compromised.

Putting it all together: The cryptosystem

A cryptosystem is the hardware or software implementation that transforms plaintext into ciphertext (encrypting it) and back into plaintext (decrypting it).

An effective cryptosystem must have the following properties:

  • The encryption and decryption process is efficient for all possible keys within the cryptosystem’s keyspace.

    tip A keyspace is the range of all possible values for a key in a cryptosystem.

  • The cryptosystem is easy to use. A cryptosystem that is difficult to use might be used improperly, leading to data loss or compromise.
  • The strength of the cryptosystem depends on the secrecy of the cryptovariables (or keys), rather than the secrecy of the algorithm.

tip A restricted algorithm refers to a cryptographic algorithm that must be kept secret in order to provide security. Restricted or proprietary algorithms are not very effective, because the effectiveness depends on keeping the algorithm itself secret rather than the complexity and high number of variable solutions of the algorithm, and therefore are not commonly used today. They are generally used only for applications that require minimal security.

Cryptosystems are typically composed of two basic elements:

  • Cryptographic algorithm: Also called a cipher, the cryptographic algorithm details the step-by-step mathematical function used to produce
    • Ciphertext (encipher)
    • Plaintext (decipher)
  • Cryptovariable: Also called a key, the cryptovariable is a secret value applied to the algorithm. The strength and effectiveness of the cryptosystem largely depend on the secrecy and strength of the cryptovariable.

Key clustering (or simply clustering) occurs when identical ciphertext messages are generated from a plaintext message by using the same encryption algorithm but different encryption keys. Key clustering indicates a weakness in a cryptographic algorithm because it statistically reduces the number of key combinations that must be attempted in a brute force attack.

remember A cryptosystem consists of the cryptographic algorithm (cipher) and the cryptovariable (key), as well as all the possible plaintexts and ciphertexts produced by the cipher and key.

remember An analogy of a cryptosystem is a deadbolt lock. A deadbolt lock can be easily identified, and its inner working mechanisms aren’t closely guarded state secrets. What makes a deadbolt lock effective is the individual key that controls a specific lock on a specific door. However, if the key is weak (imagine only one or two notches on a flat key) or not well protected (left under your doormat), the lock won’t protect your belongings. Similarly, if an attacker is able to determine what cryptographic algorithm (lock) was used to encrypt a message, it should still be protected because you’re using a strong key that you’ve kept secret, rather than a six-character password that you wrote on a scrap of paper and left under your mouse pad.

Classes of ciphers

Ciphers are cryptographic transformations. The two main classes of ciphers used in symmetric key algorithms are block and stream (see the section “Not Quite the Metric System: Symmetric and Asymmetric Key Systems,” later in this chapter), which describe how the ciphers operate on input data.

remember The two main classes of ciphers are block ciphers and stream ciphers.

BLOCK CIPHERS

Block ciphers operate on a single fixed block (typically 128 bits) of plaintext to produce the corresponding ciphertext. Using a given key in a block cipher, the same plaintext block always produces the same ciphertext block. Advantages of block ciphers compared with stream ciphers are

  • Reusable keys: Key management is much easier.
  • Interoperability: Block ciphers are more widely supported.

Block ciphers are typically implemented in software. Examples of block ciphers include AES, DES, Blowfish, Twofish, and RC5.

STREAM CIPHERS

Stream ciphers operate in real time on a continuous stream of data, typically bit by bit. Stream ciphers generally work faster than block ciphers and require less code to implement. However, the keys in a stream cipher are generally used only once (see the sidebar “A disposable cipher: The one-time pad”) and then discarded, which makes key management a serious challenge. Using a stream cipher, the same plaintext bit or byte will produce a different ciphertext bit or byte every time it is encrypted. Stream ciphers are typically implemented in hardware.

Examples of stream ciphers include Salsa20 and RC4.

remember A one-time pad is an example of a stream cipher and is considered unbreakable.
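The keystream idea can be sketched in a few lines. This one-time-pad-style example (using Python’s standard secrets module; the message is hypothetical) shows that encryption and decryption are the same XOR operation, and that identical plaintext bytes generally produce different ciphertext bytes:

```python
import secrets

def stream_encrypt(plaintext: bytes, keystream: bytes) -> bytes:
    # XOR each plaintext byte with the next keystream byte.
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

msg = b"AAAA"                              # four identical plaintext bytes
keystream = secrets.token_bytes(len(msg))  # fresh keystream, never reused
ciphertext = stream_encrypt(msg, keystream)

# Applying the same keystream again recovers the plaintext.
assert stream_encrypt(ciphertext, keystream) == msg
```

Reusing a keystream destroys this security, which is why key management is the hard part of stream ciphers.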

Types of ciphers

The two basic types of ciphers are substitution and transposition. Both are involved in the process of transforming plaintext into ciphertext.

remember Most modern cryptosystems use both substitution and permutation to achieve encryption.

SUBSTITUTION CIPHERS

Substitution ciphers replace bits, characters, or character blocks in plaintext with alternate bits, characters, or character blocks to produce ciphertext. A classic example of a substitution cipher is one that Julius Caesar used: He substituted letters of the message with other letters from the same alphabet. (Read more about this in the sidebar “Tales from the crypt-o: A brief history of cryptography,” earlier in this chapter.) In a simple substitution cipher using the standard English alphabet, a cryptovariable (key) is added modulo 26 to the plaintext message. In modulo 26 addition, the remainder is the final result for any sum equal to or greater than 26. For example, a basic substitution cipher in which the word BOY is encrypted by adding three characters using modulo 26 math produces the following result:

    B    O    Y    PLAINTEXT

    2   15   25    NUMERIC VALUE

  + 3    3    3    SUBSTITUTION VALUE

    5   18    2    MODULO 26 RESULT

    E    R    B    CIPHERTEXT
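The worked example above can be sketched in code; this is a minimal Caesar-style substitution, numbering the letters A=1 through Z=26 to match the example:

```python
def substitution_encrypt(plaintext: str, key: int) -> str:
    """Shift each uppercase letter by `key` positions using modulo 26 arithmetic."""
    result = []
    for ch in plaintext:
        value = ord(ch) - ord("A") + 1      # letter to numeric value (A=1 .. Z=26)
        shifted = (value + key) % 26 or 26  # modulo 26; a result of 0 means 26 (Z)
        result.append(chr(shifted - 1 + ord("A")))
    return "".join(result)

print(substitution_encrypt("BOY", 3))  # → ERB
```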

A substitution cipher may be either monoalphabetic or polyalphabetic:

  • Monoalphabetic: A single alphabet is used to encrypt the entire plaintext message.
  • Polyalphabetic: A more complex substitution that uses a different alphabet to encrypt each bit, character, or character block of a plaintext message.

A more modern example of a substitution cipher is the S-boxes (Substitution boxes) employed in the Data Encryption Standard (DES) algorithm. The S-boxes in DES produce a nonlinear substitution (6 bits in, 4 bits out), which improves the strength of the encryption by hiding any statistical relationship between the plaintext and ciphertext characters. (Note: Do not attempt to sing this to the tune of “Shave and a Haircut.”)

TRANSPOSITION (OR PERMUTATION) CIPHERS

Transposition ciphers rearrange bits, characters, or character blocks in plaintext to produce ciphertext. In a simple columnar transposition cipher, a message might be read horizontally but written vertically to produce the ciphertext as in the following example:

THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG

written in 9 columns as

THEQUICKB

ROWNFOXJU

MPSOVERTH

ELAZYDOG

then transposed (encrypted) vertically as

TRMEHOPLEWSAQNOZUFVYIOEDCXROKJTGBUH

The original letters of the plaintext message are the same; only the order has been changed to achieve encryption.
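The columnar transposition above can be reproduced in a few lines of Python; this sketch simply drops spaces, as in the example:

```python
def columnar_transpose(plaintext: str, columns: int) -> str:
    """Write the message horizontally in rows, then read it out vertically by column."""
    text = plaintext.replace(" ", "")
    rows = [text[i:i + columns] for i in range(0, len(text), columns)]
    # Read down each column in turn; a short final row contributes nothing to a
    # column it doesn't reach.
    return "".join(row[c] for c in range(columns) for row in rows if c < len(row))

print(columnar_transpose("THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG", 9))
# → TRMEHOPLEWSAQNOZUFVYIOEDCXROKJTGBUH
```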

DES performs permutations through the use of P-boxes (Permutation boxes) to spread the influence of a plaintext character over many characters so that they’re not easily traced back to the S-boxes used in the substitution cipher.

Other types of ciphers include

  • Codes: Use words and phrases to communicate a secret message.
  • Running (or book) ciphers: For example, the key is page 137 of The Catcher in the Rye, and text on that page is added modulo 26 to perform encryption/decryption.
  • Vernam ciphers: Also known as one-time pads, which are keystreams that can be used only once. We discuss these more in the earlier sidebar “A disposable cipher: The one-time pad.”
  • Concealment ciphers: These ciphers include steganography, which we discuss in the section “Steganography: A picture is worth a thousand (hidden) words,” later in this chapter.

Cryptography alternatives

Technology does provide valid and interesting alternatives to cryptography when a message needs to be protected during transmission. Some useful options are listed in the following sections.

Steganography: A picture is worth a thousand (hidden) words

Steganography is the art of hiding the very existence of a message. It is related to but different from cryptography. Like cryptography, one purpose of steganography is to protect the contents of a message. However, unlike cryptography, the contents of the message aren’t encrypted. Instead, the existence of the message is hidden in some other communications medium.

For example, a message may be hidden in a graphic or sound file, in slack space on storage media, or in traffic noise over a network. In the case of a digital image, the least significant bit (the right-most bit) of each byte in the image file can be used to carry a hidden message without noticeably altering the image. However, because the message itself isn’t encrypted, if it is discovered, its contents can be easily compromised.
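A least-significant-bit embedding can be sketched as follows (a toy example operating on raw bytes; real tools handle image formats such as PNG, and all names and values here are illustrative):

```python
def embed_bits(cover_bytes, message_bits):
    # Hide one message bit in the least significant bit of each cover byte
    stego = bytearray(cover_bytes)
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & 0b11111110) | bit
    return bytes(stego)

def extract_bits(stego_bytes, length):
    # Recover the hidden bits by reading each byte's least significant bit
    return [b & 1 for b in stego_bytes[:length]]

pixels = bytes([200, 13, 76, 255, 0, 42, 91, 180])  # hypothetical image data
secret = [1, 0, 1, 1, 0, 1, 0, 0]

stego = embed_bits(pixels, secret)
assert extract_bits(stego, len(secret)) == secret
# Each byte changes by at most 1, so the image looks visually unchanged
assert all(abs(a - b) <= 1 for a, b in zip(pixels, stego))
```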

Digital watermarking: The (ouch) low watermark

Digital watermarking is a technique similar (and related) to steganography that can be used to verify the authenticity of an image or data, or to protect the intellectual property rights of the creator. Watermarking is the visible cousin of steganography — no attempt is made to hide its existence. Watermarks have long been used on paper currency and office letterhead or paper stock.

Within the last decade, the use of digital watermarking has become more widespread. For example, to display photo examples on the Internet without risking intellectual property theft, a copyright notice may be prominently imprinted across the image. As with steganography, nothing is encrypted using digital watermarking; the confidentiality of the material is not protected with a watermark.

Not quite the metric system: Symmetric and asymmetric key systems

Cryptographic algorithms are broadly classified as either symmetric or asymmetric key systems.

Symmetric key cryptography

Symmetric key cryptography, also known as symmetric algorithm, secret key, single key, and private key cryptography, uses a single key to both encrypt and decrypt information. Two parties (for our example, Thomas and Richard) can exchange an encrypted message by using the following procedure:

  1. The sender (Thomas) encrypts the plaintext message with a secret key known only to the intended recipient (Richard).
  2. The sender then transmits the encrypted message to the intended recipient.
  3. The recipient decrypts the message with the same secret key to obtain the plaintext message.

In order for an attacker (Harold) to read the message, he must do one of the following: guess the secret key (by using a brute-force attack, for example); obtain the secret key from Thomas or Richard by using the rubber-hose technique (another form of, uh, brute-force attack; humans are typically the weakest link, and neither Thomas nor Richard has much tolerance for pain) or through social engineering (Thomas and Richard both like money and may be all too willing to help Harold’s Nigerian uncle claim his vast fortune); or intercept the secret key during the initial exchange.

The following list includes the main disadvantages of symmetric systems:

  • Distribution: Secure distribution of secret keys is absolutely required either through out-of-band methods or by using asymmetric systems.
  • Scalability: A different key is required for each pair of communicating parties.
  • Limited functionality: Symmetric systems can’t provide authentication or non-repudiation (see the earlier sidebar “He said, she said: The concept of non-repudiation”).

Of course, symmetric systems do have many advantages:

  • Speed: Symmetric systems are much faster than asymmetric systems.
  • Strength: Strength is gained when used with a large key (128 bit, 192 bit, 256 bit, or larger).
  • Availability: There are many algorithms available for organizations to select and use.

Symmetric key algorithms include Data Encryption Standard (DES), Triple DES (3DES), Advanced Encryption Standard (AES), International Data Encryption Algorithm (IDEA), and Rivest Cipher 5 (RC5).

remember Symmetric key systems use a shared secret key.

DATA ENCRYPTION STANDARD (DES)

In the early 1970s, the National Bureau of Standards (NBS, the predecessor of today’s National Institute of Standards and Technology [NIST]) solicited vendors to submit encryption algorithm proposals to be evaluated by the National Security Agency (NSA) in support of a national cryptographic standard. This new encryption standard was used for private-sector and Sensitive but Unclassified (SBU) government data. In 1974, IBM submitted a 128-bit algorithm originally known as Lucifer. After some modifications (the algorithm was shortened to 56 bits and the S-boxes were changed), the IBM proposal was endorsed by the NSA and formally adopted as the Data Encryption Standard. It was published in Federal Information Processing Standard (FIPS) PUB 46 in 1977 (updated and revised in 1988 as FIPS PUB 46-1) and American National Standards Institute (ANSI) X3.92 in 1981.

remember DES is a block cipher that uses a 56-bit key.

The DES algorithm is a symmetric (or private) key cipher consisting of an algorithm and a key. The algorithm is a 64-bit block cipher based on a 56-bit symmetric key. (It consists of 56 key bits plus 8 parity bits … or think of it as 8 bytes, with each byte containing 7 key bits and 1 parity bit.) During encryption, the original message (plaintext) is divided into 64-bit blocks. Operating on a single block at a time, each 64-bit plaintext block is split into two 32-bit blocks. Under control of the 56-bit symmetric key, 16 rounds of transpositions and substitutions are performed on each block to produce the resulting ciphertext output.

technicalstuff A parity bit is used to detect errors in a bit pattern. For example, with odd parity, if the 7 key bits in a byte contain an even number of ones, the parity bit is set to one, making the total number of one bits — including the parity bit — an odd number. With even parity, the parity bit is set so that the total number of one bits — including the parity bit — is an even number. If a received byte doesn’t have the expected parity, the transmission has been corrupted.
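Odd parity can be sketched as follows (a minimal illustration; the function name is ours):

```python
def odd_parity_bit(key_bits):
    # The parity bit is chosen so that the total count of one bits
    # (key bits plus parity bit) is an odd number
    ones = sum(key_bits)
    return 0 if ones % 2 == 1 else 1

# One byte of a DES key: 7 key bits plus 1 parity bit
print(odd_parity_bit([1, 0, 1, 1, 0, 0, 0]))  # three ones, already odd → 0
print(odd_parity_bit([1, 1, 0, 0, 0, 0, 0]))  # two ones, even → 1
```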

technicalstuff A round is a transformation (permutations and substitutions) that an encryption algorithm performs on a block of plaintext to convert (encrypt) it into ciphertext.

The four distinct modes of operation (the mode of operation defines how the plaintext/ciphertext blocks are processed) in DES are Electronic Code Book, Cipher Block Chaining, Cipher Feedback, and Output Feedback.

remember The four modes of DES are ECB, CBC, CFB, and OFB. ECB and CBC are the most commonly used.

The original goal of DES was to develop an encryption standard that could be used for 10 to 15 years. Although DES far exceeded this goal, in 1999, the Electronic Frontier Foundation achieved the inevitable, breaking a DES key in less than 23 hours.

Electronic Code Book (ECB)

Electronic Code Book (ECB) mode is the native mode for DES operation and normally produces the highest throughput. It is best used for encrypting keys or small amounts of data. ECB mode operates on 64-bit blocks of plaintext independently and produces 64-bit blocks of ciphertext. One significant disadvantage of ECB is that the same plaintext, encrypted with the same key, always produces the same ciphertext. If used to encrypt large amounts of data, it’s susceptible to Chosen Text Attacks (CTA) (discussed in the section “Chosen Text Attack (CTA),” later in this chapter) because certain patterns may be revealed.

Cipher Block Chaining (CBC)

Cipher Block Chaining (CBC) mode is the most common mode of DES operation. Like ECB mode, CBC mode operates on 64-bit blocks of plaintext to produce 64-bit blocks of ciphertext. However, in CBC mode, each block is XORed (see the following sidebar “The XORcist,”) with the ciphertext of the preceding block to create a dependency, or chain, thereby producing a more random ciphertext result. The first block is encrypted with a random block known as the initialization vector (IV). One disadvantage of CBC mode is that errors propagate. However, this problem is limited to the block in which the error occurs and the block that immediately follows, after which, the decryption resynchronizes.
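The contrast between ECB’s independent blocks and CBC’s chaining can be sketched with a stand-in block function (NOT a real cipher; the functions and numbers here are purely illustrative):

```python
def toy_encrypt_block(block, key):
    # Stand-in for the DES block transformation (NOT a real cipher);
    # small integers play the role of 64-bit blocks
    return (block * key + 7) % 256

def ecb_encrypt(blocks, key):
    # ECB: every block is encrypted independently, so identical
    # plaintext blocks always produce identical ciphertext blocks
    return [toy_encrypt_block(b, key) for b in blocks]

def cbc_encrypt(blocks, key, iv):
    # CBC: each plaintext block is XORed with the previous ciphertext
    # block (the first with the IV) before encryption
    ciphertext, previous = [], iv
    for block in blocks:
        encrypted = toy_encrypt_block(block ^ previous, key)
        ciphertext.append(encrypted)
        previous = encrypted
    return ciphertext

repeated = [5, 5, 5]
print(ecb_encrypt(repeated, key=11))        # [62, 62, 62]: the pattern leaks
print(cbc_encrypt(repeated, key=11, iv=3))  # [73, 75, 97]: chaining hides it
```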

Cipher Feedback (CFB)

Cipher Feedback (CFB) mode is a stream cipher most often used to encrypt individual characters. In this mode, previously generated ciphertext is used as feedback for key generation in the next keystream. The resulting ciphertext is chained together, which causes errors to be multiplied throughout the encryption process.

Output Feedback (OFB)

Output Feedback (OFB) mode is also a stream cipher very similar to CFB. It is often used to encrypt satellite communications. In this mode, the keystream output of the cipher (rather than the plaintext or ciphertext) is used as feedback for generation of the next keystream block. Because the resulting ciphertext is not chained together, errors don’t spread throughout the encryption process.

TRIPLE DES (3DES)

Triple Data Encryption Standard (3DES) effectively extended the life of the DES algorithm. In Triple DES implementations, a message is encrypted by using one key, encrypted by using a second key, and then again encrypted by using either the first key or a third key.

The use of three separate 56-bit encryption keys produces an effective key length of 168 bits. But Triple DES doesn’t just triple the work factor required to crack the DES algorithm (see the sidebar “Work factor: Force × effort = work!” in this chapter). Because the attacker doesn’t know whether he or she successfully cracked even the first 56-bit key (pick a number between 0 and 72 quadrillion!) until all three keys are cracked and the correct plaintext is produced, the work factor required is more like 2^56 × 2^56 × 2^56, or 72 quadrillion × 72 quadrillion × 72 quadrillion. (Don’t try this multiplication on a calculator; just trust us on this one.)

warning Double DES wasn’t a significant improvement to DES. In fact, by using a Meet-in-the-Middle Attack (see the section “Meet-in-the-Middle Attack,” later in this chapter), the work factor required to crack Double DES is only slightly greater than for DES. For this reason, Double DES isn’t commonly used.

Using Triple DES (officially known as the Triple Data Encryption Algorithm, or TDEA) would seem enough to protect even the most sensitive data for at least a few lifetimes, but a few problems exist with Triple DES. First, the performance cost is significant. Although Triple DES is faster than many other symmetric encryption algorithms, it’s still unacceptably slow and therefore doesn’t work with many applications that require high-speed throughput of large volumes of data.

Second, a weakness exists in the implementation that allows a cryptanalyst to reduce the effective key size to 108 bits in a brute force attack. Although a 108-bit key size still requires a significant amount of time to crack (theoretically, several million millennia), it’s still a weakness.

ADVANCED ENCRYPTION STANDARD (AES)

In 2001, NIST announced the Rijndael Block Cipher as the new standard to implement the Advanced Encryption Standard (AES); it became effective in May 2002, replacing DES as the U.S. government standard for encrypting Sensitive but Unclassified data. AES was subsequently approved for encrypting classified U.S. government data up to the Top Secret level (using 192- or 256-bit key lengths).

The Rijndael Block Cipher, developed by Dr. Joan Daemen and Dr. Vincent Rijmen, uses variable block and key lengths (128, 192, or 256 bits) and between 10 and 14 rounds. It was designed to be simple, resistant to known attacks, and fast. It can be implemented in either hardware or software and has relatively low memory requirements.

remember AES is based on the Rijndael Block Cipher.

Until recently, the only known successful attacks against AES were side-channel attacks, which don’t directly attack the encryption algorithm, but instead attack the system on which the encryption algorithm is implemented. Side-channel attacks using cache-timing techniques are most common against AES implementations. In 2009, a theoretical related-key attack against AES was published. The attack method is considered theoretical because, although it reduces the mathematical complexity required to break an AES key, it is still well beyond the computational capability available today.

BLOWFISH AND TWOFISH ALGORITHMS

The Blowfish Algorithm operates on 64-bit blocks, employs 16 rounds, and uses variable key lengths of up to 448 bits. The Twofish Algorithm, a finalist in the AES selection process, is a symmetric block cipher that operates on 128-bit blocks, employing 16 rounds with variable key lengths up to 256 bits. Both Blowfish and Twofish were designed by Bruce Schneier (and others) and are freely available in the public domain (neither algorithm has been patented). There are no known practical cryptanalytic attacks against the full versions of either algorithm, although Blowfish’s small 64-bit block size limits its safe use for encrypting large volumes of data.

RIVEST CIPHERS

Drs. Ron Rivest, Adi Shamir, and Len Adleman invented the RSA algorithm and founded the company RSA Data Security (RSA = Rivest, Shamir, Adleman). The Rivest Ciphers are a series of symmetric algorithms that include RC2, RC4, RC5, and RC6 (RC1 was never published and RC3 was broken during development):

  • RC2: A block-mode cipher that encrypts 64-bit blocks of data by using a variable-length key.
  • RC4: A stream cipher (data is encrypted in real time) that uses a variable-length key (128 bits is standard).
  • RC5: Similar to RC2, but includes a variable-length key (0 to 2,048 bits), variable block size (32, 64, or 128 bits), and variable number of processing rounds (0 to 255).
  • RC6: Derived from RC5 and a finalist in the AES selection process. It uses a 128-bit block size and variable-length keys of 128, 192, or 256 bits.

IDEA CIPHER

The International Data Encryption Algorithm (IDEA) Cipher evolved from the Proposed Encryption Standard and the Improved Proposed Encryption Standard (IPES) originally developed in 1990. IDEA is a block cipher that operates on 64-bit plaintext blocks by using a 128-bit key. IDEA performs eight rounds on 16-bit sub-blocks and can operate in four distinct modes similar to DES. The IDEA Cipher provides stronger encryption than RC4 and Triple DES, but because it was patented for many years, it never saw wide adoption; the patents expired in various countries between 2010 and 2012. It is currently used in some software applications, including Pretty Good Privacy (PGP) email.

Asymmetric key cryptography

Asymmetric key cryptography (also known as asymmetric algorithm cryptography or public key cryptography) uses two separate keys: one key to encrypt and a different key to decrypt information. These keys are known as public and private key pairs. When two parties want to exchange an encrypted message by using asymmetric key cryptography, they follow these steps:

  1. The sender (Thomas) encrypts the plaintext message with the intended recipient’s (Richard) public key.
  2. This produces a ciphertext message that can then be transmitted to the intended recipient (Richard).
  3. The recipient (Richard) then decrypts the message with his private key, known only to him.

Only the private key can decrypt the message; thus, an attacker (Harold) possessing only the public key can’t decrypt the message. This also means that not even the original sender can decrypt the message. This use of an asymmetric key system is known as a secure message. A secure message guarantees the confidentiality of the message.

remember Asymmetric key systems use a public key and a private key.

remember Secure message format uses the recipient’s public key to protect confidentiality.

If the sender wants to guarantee the authenticity of a message (or, more correctly, the authenticity of the sender), he or she can sign the message with this procedure:

  1. The sender (Thomas) encrypts the plaintext message with his own private key.
  2. This produces a ciphertext message that can then be transmitted to the intended recipient (Richard).
  3. To verify that the message is in fact from the purported sender, the recipient (Richard) applies the sender’s (Thomas’s) public key (which is known to every Tom, Dick, and Harry).

Of course, an attacker can also verify the authenticity of the message. This use of an asymmetric key system is known as an open message format because it guarantees only the authenticity, not the confidentiality.

remember Open message format uses the sender’s private key to ensure authenticity.

If the sender wants to guarantee both the confidentiality and authenticity of a message, he or she can do so by using this procedure:

  1. The sender (Thomas) encrypts the message first with the intended recipient’s (Richard’s) public key and then with his own private key.
  2. This produces a ciphertext message that can then be transmitted to the intended recipient (Richard).
  3. The recipient (Richard) uses the sender’s (Thomas’s) public key to verify the authenticity of the message, and then uses his own private key to decrypt the message’s contents.

If an attacker intercepts the message, he or she can apply the sender’s public key, but then has an encrypted message that he or she can’t decrypt without the intended recipient’s private key. Thus, both confidentiality and authenticity are assured. This use of an asymmetric key system is known as a secure and signed message format.

remember A secure and signed message format uses the sender’s private key and the recipient’s public key to protect confidentiality and ensure authenticity.

A public key and a private key are mathematically related, but theoretically, no one can compute or derive the private key from the public key. This property of asymmetric systems is based on the concept of a one-way function. A one-way function is a problem that you can easily compute in one direction but not in the reverse direction. In asymmetric key systems, a trapdoor (private key) resolves the reverse operation of the one-way function.

Because of the complexity of asymmetric key systems, they are more commonly used for key management or digital signatures than for encryption of bulk information. Often, a hybrid system is employed, using an asymmetric system to securely distribute the secret keys of a symmetric key system that’s used to encrypt the data.

The main disadvantage of asymmetric systems is their lower speed. Because of the types of algorithms used to achieve the underlying one-way functions, very large keys are required. (A 128-bit symmetric key has the equivalent strength of a 2,304-bit asymmetric key.) Those large keys, in turn, require more computational power, causing a significant loss of speed (up to 10,000 times slower than a comparable symmetric key system).

However, the many significant advantages of asymmetric systems include

  • Extended functionality: Asymmetric key systems can provide both confidentiality and authentication; symmetric systems can provide only confidentiality.
  • Scalability: Because symmetric key systems require secret key exchanges between all of the communicating parties, their scalability is limited. Asymmetric key systems, which do not require secret key exchanges, resolve key management issues associated with symmetric key systems, and are therefore more scalable.

Asymmetric key algorithms include RSA, Diffie-Hellman, El Gamal, Merkle-Hellman (Trapdoor) Knapsack, and Elliptic Curve, which we talk about in the following sections.

RSA

Drs. Ron Rivest, Adi Shamir, and Len Adleman published the RSA algorithm, which is a key transport algorithm based on the difficulty of factoring a number that’s the product of two large prime numbers (a modulus of 2,048 bits or more in current practice). Two users (Thomas and Richard) can securely transport symmetric keys by using RSA, like this:

  1. Thomas creates a symmetric key, encrypts it with Richard’s public key, and then transmits it to Richard.
  2. Richard decrypts the symmetric key by using his own private key.

remember RSA is an asymmetric key algorithm based on factoring prime numbers.
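The key-transport steps above can be sketched with textbook-sized numbers (tiny primes for illustration only; real RSA moduli are 2,048 bits or more):

```python
# Richard's key pair, built from tiny textbook primes
p, q = 61, 53
n = p * q                 # public modulus: 3233
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent
d = pow(e, -1, phi)       # private exponent: 2753 (Python 3.8+)

# Thomas encrypts a symmetric key with Richard's public key (e, n)
symmetric_key = 42
ciphertext = pow(symmetric_key, e, n)

# Richard recovers it with his private key d
recovered = pow(ciphertext, d, n)
assert recovered == symmetric_key
```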

DIFFIE-HELLMAN KEY EXCHANGE

Drs. Whitfield Diffie and Martin Hellman published a paper, entitled “New Directions in Cryptography,” that detailed a new paradigm for secure key exchange based on discrete logarithms. Diffie-Hellman is described as a key agreement algorithm. Two users (Thomas and Richard) can exchange symmetric keys by using Diffie-Hellman, like this:

  1. Thomas and Richard obtain each other’s public keys.
  2. Thomas and Richard then combine their own private keys with the public key of the other person, producing a symmetric key that only the two users involved in the exchange know.

Diffie-Hellman key exchange is vulnerable to Man-in-the-Middle Attacks, in which an attacker (Harold) intercepts the public keys during the initial exchange and substitutes his own, establishing a separate session key with each party that lets him decrypt the session. (You can read more about these attacks in the section “Man-in-the-Middle Attack,” later in this chapter.) A separate authentication mechanism is necessary to protect against this type of attack, ensuring that the two parties communicating in the session are, in fact, the legitimate parties.

remember Diffie-Hellman is an asymmetric key algorithm based on discrete logarithms.
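The exchange can be sketched with toy numbers (a tiny prime modulus for illustration; real deployments use 2,048-bit groups or elliptic curves, and the private keys shown are arbitrary):

```python
p, g = 23, 5      # public: a small prime modulus and a generator
a, b = 6, 15      # Thomas's and Richard's private keys

A = pow(g, a, p)  # Thomas's public value, sent to Richard
B = pow(g, b, p)  # Richard's public value, sent to Thomas

# Each side combines its own private key with the other's public value
shared_thomas = pow(B, a, p)
shared_richard = pow(A, b, p)
assert shared_thomas == shared_richard  # both arrive at the same secret
```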

EL GAMAL

El Gamal is an unpatented, asymmetric key algorithm based on the discrete logarithm problem used in Diffie-Hellman (discussed in the preceding section). El Gamal extends the functionality of Diffie-Hellman to include encryption and digital signatures.

MERKLE-HELLMAN (TRAPDOOR) KNAPSACK

The Merkle-Hellman (Trapdoor) Knapsack, published in 1978, employs a unique approach to asymmetric cryptography. It’s based on the problem of determining what items, in a set of items that have fixed weights, can be combined in order to obtain a given total weight. Knapsack was broken in 1982.

remember Knapsack is an asymmetric key algorithm based on fixed weights.

ELLIPTIC CURVE (EC)

Elliptic curve (EC) cryptography is based on the elliptic curve discrete logarithm problem, which is far more difficult to solve than conventional discrete logarithm problems or factoring the product of large prime numbers. (A 160-bit EC key is equivalent to a 1,024-bit RSA key.) The use of smaller keys means that EC is significantly faster than other asymmetric algorithms (and many symmetric algorithms), and can be widely implemented in various hardware applications including wireless devices and smart cards.

remember Elliptic Curve is more efficient than other asymmetric key systems and many symmetric key systems because it can use a smaller key.

Message authentication

Message authentication guarantees the authenticity and integrity of a message by ensuring that

  • A message hasn’t been altered (either maliciously or accidentally) during transmission.
  • A message isn’t a replay of a previous message.
  • The message was sent from the origin stated (it’s not a forgery).
  • The message is sent to the intended recipient.

Checksums, CRC-values, and parity checks are examples of basic message authentication and integrity controls. More advanced message authentication is performed by using digital signatures and message digests.

remember Digital signatures and message digests can be used to provide message authentication.

Digital signatures

The Digital Signature Standard (DSS), published by the National Institute of Standards and Technology (NIST) in Federal Information Processing Standard (FIPS) 186-4, specifies three acceptable algorithms in its standard: the RSA Digital Signature Algorithm, the Digital Signature Algorithm (DSA, which is based on a modified El Gamal algorithm), and the Elliptic Curve Digital Signature Algorithm (ECDSA).

A digital signature is a simple way to verify the authenticity (and integrity) of a message. Instead of encrypting a message with the intended receiver’s public key, the sender encrypts it with his or her own private key. The sender’s public key properly decrypts the message, authenticating the originator of the message. This process is known as an open message format in asymmetric key systems, which we discuss in the section “Asymmetric key cryptography,” earlier in this chapter.

Message digests

It’s often impractical to encrypt a message with the receiver’s public key to protect confidentiality, and then encrypt the entire message again by using the sender’s private key to protect authenticity and integrity. Instead, a representation of the encrypted message is encrypted with the sender’s private key to produce a digital signature. The intended recipient decrypts this representation by using the sender’s public key, and then independently calculates the expected results of the decrypted representation by using the same, known, one-way hashing algorithm. If the results are the same, the integrity of the original message is assured. This representation of the entire message is known as a message digest.
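The sign-the-digest flow can be sketched by combining a standard hash with toy RSA numbers (tiny textbook primes and no padding scheme, purely for illustration; real signatures use 2,048-bit keys and padding such as PSS):

```python
import hashlib

# Toy RSA key pair (p=61, q=53): public (n, e), private d
n, e, d = 3233, 17, 2753

message = b"Attack at dawn"
# Condense the message to a digest, reduced mod n for this toy example
digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n

signature = pow(digest, d, n)     # sender signs the digest with the private key
recovered = pow(signature, e, n)  # anyone can verify with the public key
assert recovered == digest        # digests match: authenticity and integrity
```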

To digest means to reduce or condense something, and a message digest does precisely that. (Conversely, indigestion means to expand … like gases … how do you spell relief?) A message digest is a condensed representation of a message; think Reader’s Digest. Ideally, a message digest has the following properties:

  • The original message can’t be re-created from the message digest.
  • Finding a message that produces a particular digest shouldn’t be computationally feasible.
  • No two messages should produce the same message digest (known as a collision).
  • The message digest should be calculated by using the entire contents of the original message — it shouldn’t be a representation of a representation.

Message digests are produced by using a one-way hash function. There are several types of one-way hashing algorithms (digest algorithms), including MD5, SHA-2 variants, and HMAC.

warning The SHA-1 digest algorithm is now considered obsolete. SHA-2 or SHA-3 should be used instead.

warning A collision results when two messages produce the same digest or when a message produces the same digest as a different message.

remember A one-way function ensures that the same key can’t encrypt and decrypt a message in an asymmetric key system. One key encrypts the message (produces ciphertext), and a second key (the trapdoor) decrypts the message (produces plaintext), effectively reversing the one-way function. A one-way function’s purpose is to ensure confidentiality.

A one-way hashing algorithm produces a hashing value (or message digest) that can’t be reversed; that is, it can’t be decrypted. In other words, no trapdoor exists for a one-way hashing algorithm. The purpose of a one-way hashing algorithm is to ensure integrity and authentication.

remember MD5, SHA-2, SHA-3, and HMAC are all examples of commonly used message authentication algorithms.

MD FAMILY

MD (Message Digest) is a family of one-way hashing algorithms developed by Dr. Ron Rivest that includes MD (obsolete), MD2, MD3 (not widely used), MD4, MD5, and MD6:

  • MD2: Developed in 1989, MD2 takes a variable-size input (message) and produces a fixed-size output (128-bit message digest). MD2 is very slow (it was originally developed for 8-bit computers), is highly susceptible to collisions, and is now considered obsolete.
  • MD4: Developed in 1990, MD4 produces a 128-bit digest and is used to compute NT-password hashes for various Microsoft Windows operating systems, including NT, XP, and Vista. An MD4 hash is typically represented as a 32-digit hexadecimal number. Several known weaknesses are associated with MD4, and it’s also susceptible to collision attacks.
  • MD5: Developed in 1991, MD5 was for many years one of the most popular hashing algorithms, commonly used to store passwords and to check the integrity of files. Like MD2 and MD4, MD5 produces a 128-bit digest. Messages are processed in 512-bit blocks, using four rounds of transformation. The resulting hash is typically represented as a 32-digit hexadecimal number. MD5 is susceptible to collisions and is now considered “cryptographically broken” by the U.S. Department of Homeland Security; it shouldn’t be used for new security applications.
  • MD6: Developed in 2008, MD6 uses very large input message blocks (up to 512 bytes) and produces variable-length digests (up to 512 bits). MD6 was originally submitted for consideration as the new SHA-3 standard but was eliminated from further consideration after the first round in July 2009. Unfortunately, the first widespread use of MD6 (albeit, unauthorized and illicit) was in the Conficker.B worm in late 2008, shortly after the algorithm was published!

SHA FAMILY

Like MD, SHA (Secure Hash Algorithm) is another family of one-way hash functions. The SHA family of algorithms is designed by the U.S. National Security Agency (NSA) and published by NIST. The SHA family of algorithms includes SHA-1, SHA-2, and SHA-3:

  • SHA-1: Published in 1995, SHA-1 takes a variable size input (message) and produces a fixed-size output (160-bit message digest, versus MD5’s 128-bit message digest). SHA-1 processes messages in 512-bit blocks and adds padding to a message length, if necessary, to produce a total message length that’s a multiple of 512. Note that SHA-1 is no longer considered a viable hash algorithm.
  • SHA-2: Published in 2001, SHA-2 consists of four hash functions — SHA-224, SHA-256, SHA-384, and SHA-512 — that have digest lengths of 224, 256, 384, and 512 bits, respectively. SHA-2 processes messages in 512-bit blocks for the SHA-224 and SHA-256 variants, and in 1,024-bit blocks for SHA-384 and SHA-512.
  • SHA-3: Published in 2015, SHA-3 includes SHA3-224, SHA3-256, SHA3-384, and SHA3-512, which produce digests of 224, 256, 384, and 512 bits, respectively. SHAKE128 and SHAKE256 are also variants of SHA3.
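The fixed digest lengths of the SHA family can be observed with Python’s standard hashlib module:

```python
import hashlib

message = b"THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG"

print(hashlib.sha1(message).digest_size * 8)      # 160 bits (obsolete)
print(hashlib.sha256(message).digest_size * 8)    # 256 bits
print(hashlib.sha3_512(message).digest_size * 8)  # 512 bits
```

Note that the digest length is fixed regardless of the size of the input message.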

HMAC

The Hashed Message Authentication Code (or Checksum) (HMAC) further extends the security of the MD5 and SHA-1 algorithms through the concept of a keyed digest. HMAC incorporates a previously shared secret key and the original message into a single message digest. Thus, even if an attacker intercepts a message, modifies its contents, and calculates a new message digest, the result doesn’t match the receiver’s hash calculation because the modified message’s hash doesn’t include the secret key.
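The keyed-digest idea can be sketched with Python’s standard hmac module (the key and messages here are illustrative):

```python
import hashlib
import hmac

secret_key = b"previously-shared-secret"   # exchanged out of band
message = b"Transfer $100 to Richard"

mac = hmac.new(secret_key, message, hashlib.sha256).hexdigest()

# The receiver recomputes the HMAC with the same key and compares
check = hmac.new(secret_key, message, hashlib.sha256).hexdigest()
assert hmac.compare_digest(mac, check)

# Without the secret key, a modified message can't produce a matching code
forged = hmac.new(secret_key, b"Transfer $9,999 to Harold", hashlib.sha256).hexdigest()
assert mac != forged
```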

Public Key Infrastructure (PKI)

A Public Key Infrastructure (PKI) is an arrangement whereby a designated authority stores encryption keys or certificates associated with users and systems. (A certificate is an electronic document that uses the public key of an organization or individual to establish identity, and a digital signature to establish authenticity.) A PKI enables secure communications through the integration of digital signatures, digital certificates, and other services necessary to ensure confidentiality, integrity, authentication, non-repudiation, and access control.

remember The four basic components of a PKI are the Certificate Authority, Registration Authority, repository, and archive:

  • Certificate Authority (CA): The Certificate Authority (CA) comprises hardware, software, and the personnel administering the PKI. The CA issues certificates, maintains and publishes status information and Certificate Revocation Lists (CRLs), and maintains archives.
  • Registration Authority (RA): The Registration Authority (RA) also comprises hardware, software, and the personnel administering the PKI. It’s responsible for verifying certificate contents for the CA.
  • Repository: A repository is a system that accepts certificates and CRLs from a CA and distributes them to authorized parties.
  • Archive: An archive offers long-term storage of archived information from the CA.

Key management functions

Like physical keys, encryption keys must be safeguarded. Most successful attacks against encryption exploit some vulnerability in key management functions rather than some inherent weakness in the encryption algorithm. The following are the major functions associated with managing encryption keys:

  • Key generation: Keys must be generated randomly on a secure system, and the generation sequence itself shouldn’t provide potential clues regarding the contents of the keyspace. Generated keys shouldn’t be displayed in the clear.
  • Key distribution: Keys must be securely distributed. This is a major vulnerability in symmetric key systems. Using an asymmetric system to securely distribute secret keys is one solution.
  • Key installation: Key installation is often a manual process. This process should ensure that the key isn’t compromised during installation, incorrectly entered, or made so unwieldy that it can’t be used readily.
  • Key storage: Keys must be stored on protected or encrypted storage media, or the application using the keys should include safeguards that prevent extraction of the keys.
  • Key change: Keys, like passwords, should be changed regularly, relative to the value of the information being protected and the frequency of use. Keys used frequently are more likely to be compromised through interception and statistical analysis.
  • Key control: Key control addresses the proper use of keys. Different keys have different functions and may only be approved for certain levels of classification.
  • Key disposal: Keys (and any distribution media) must be properly disposed of, erased, or destroyed so that the key’s contents are not disclosed, possibly providing an attacker insight into the key management system.

remember The seven key management issues are generation, distribution, installation, storage, change, control, and disposal.
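Several of these functions map directly onto standard-library calls. The sketch below shows key generation from a cryptographically secure source and a logging-safe fingerprint; the 256-bit key length and the fingerprint convention are illustrative assumptions, not requirements:

```python
import hashlib
import secrets

# Key generation: draw a 256-bit key from a cryptographically secure source,
# rather than from a predictable generator such as random.random().
key = secrets.token_bytes(32)

# Generated keys shouldn't be displayed in the clear. If a key must be
# referenced (in logs, for example), a short hash-based fingerprint is a
# common convention (illustrative, not a standard).
fingerprint = hashlib.sha256(key).hexdigest()[:16]
print("key fingerprint:", fingerprint)

# Key change: rotating a key is simply drawing fresh random bytes; the old
# key is then retired and disposed of securely.
new_key = secrets.token_bytes(32)
```

Distribution, storage, control, and disposal remain procedural and architectural concerns that no single library call solves.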

Key escrow and key recovery

Law enforcement has always been concerned about the potential use of encryption for criminal purposes. To counter this threat, NIST published the Escrowed Encryption Standard (EES) in Federal Information Processing Standards (FIPS) Publication 185 (1994). The premise of the EES is to divide a secret key into two parts and place those two parts into escrow with two separate, trusted organizations. With a court order, the two parts can be obtained by law enforcement officials, the secret key recovered, and the suspected communications decrypted. One implementation of the EES is the Clipper Chip proposed by the U.S. government. The Clipper Chip uses the Skipjack Secret Key algorithm for encryption and an 80-bit secret key.

Methods of attack

Attempts to crack a cryptosystem generally fall into four classes of attack methods:

  • Analytic attacks: An analytic attack uses algebraic manipulation in an attempt to reduce the complexity of the algorithm.
  • Brute-force attacks: In a brute-force (or exhaustion) attack, the cryptanalyst attempts every possible combination of key patterns, sometimes utilizing rainbow tables, and specialized or scalable computing architectures. This type of attack can be very time-intensive (up to several hundred million years) and resource-intensive, depending on the length of the key, the speed of the attacker’s computer … and the lifespan of the attacker.
  • Implementation attacks: Implementation attacks attempt to exploit some weakness in the cryptosystem such as vulnerability in a protocol or algorithm.
  • Statistical attacks: A statistical attack attempts to exploit some statistical weakness in the cryptosystem, such as a lack of randomness in key generation.

technicalstuff A rainbow table is a precomputed table used to reverse cryptographic hash functions in a specific algorithm. Examples of password-cracking programs that use rainbow tables include Ophcrack and RainbowCrack.
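The economics of a brute-force attack come down to keyspace size. This hypothetical sketch exhausts the entire keyspace of an unsalted 4-digit PIN hash in a fraction of a second; the same approach against a 128-bit key is computationally infeasible:

```python
import hashlib

# Hypothetical target: an unsalted SHA-256 hash of a 4-digit PIN.
target = hashlib.sha256(b"4871").hexdigest()

def crack(target_hex):
    """Exhaustion attack: try every key in the (tiny) keyspace."""
    for pin in range(10000):
        guess = str(pin).zfill(4).encode()
        if hashlib.sha256(guess).hexdigest() == target_hex:
            return guess.decode()
    return None

print(crack(target))  # 4871
```

Salting each stored hash with a random value defeats precomputed rainbow tables, because a single table can no longer be reused across all targets.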

The specific attack methods discussed in the following sections employ various elements of the four classes we describe in the preceding list.

The Birthday Attack

The Birthday Attack attempts to exploit the probability of two messages producing the same message digest by using the same hash function. It’s based on the statistical probability (greater than 50 percent) that in a room containing 23 or more people, 2 people in that room have the same birthday. However, for 2 people in a room to share a specific birthday (such as August 3rd), 253 or more people must be in the room to have a statistical probability of greater than 50 percent (even if one of the birthdays is on February 29).
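These probabilities are straightforward to verify. The sketch below computes both the shared-birthday case and the specific-birthday case, ignoring leap years for simplicity:

```python
from math import prod

def shared_birthday(n):
    """Probability that at least 2 of n people share some birthday."""
    return 1 - prod((365 - i) / 365 for i in range(n))

def specific_birthday(n):
    """Probability that at least 1 of n people has one specific birthday."""
    return 1 - (364 / 365) ** n

print(shared_birthday(23) > 0.5)     # True: 23 people cross the 50% mark
print(specific_birthday(253) > 0.5)  # True: a specific date needs 253 people
print(specific_birthday(252) > 0.5)  # False: 252 isn't quite enough
```

The same mathematics applies to hash functions: for an n-bit digest, collisions become likely after roughly 2^(n/2) attempts, which is why digest length matters so much.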

Ciphertext Only Attack (COA)

In a Ciphertext Only Attack (COA), the cryptanalyst obtains the ciphertext of several messages, all encrypted by using the same encryption algorithm, but he or she doesn’t have the associated plaintext. The cryptanalyst then attempts to decrypt the data by searching for repeating patterns and using statistical analysis. For example, certain words in the English language, such as “the” and “or,” occur frequently. This type of attack is generally difficult and requires a large sample of ciphertext.
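The statistical-analysis step can be sketched with a simple letter-frequency count. The ciphertext below is a hypothetical intercepted sample (a Caesar shift of 3); in any simple substitution cipher, the frequency profile of the plaintext language leaks straight through:

```python
from collections import Counter

# Hypothetical intercepted sample: "the quick brown fox jumps over the
# lazy dog" under a Caesar shift of 3.
ciphertext = "WKH TXLFN EURZQ IRA MXPSV RYHU WKH ODCB GRJ"

# Count letter frequencies; frequent ciphertext letters likely map to
# frequent English letters (e, t, a, o ...), giving the analyst a foothold.
counts = Counter(c for c in ciphertext if c.isalpha())
print(counts.most_common(3))
```

Here “R” (plaintext o) and “H” (plaintext e) top the list; with a larger sample, the mapping becomes increasingly obvious, which is exactly why a COA needs a large volume of ciphertext.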

Chosen Text Attack (CTA)

In a Chosen Text Attack (CTA), the cryptanalyst selects a sample of plaintext and obtains the corresponding ciphertext. Several types of Chosen Text Attacks exist, including Chosen Plaintext, Adaptive Chosen Plaintext, Chosen Ciphertext, and Adaptive Chosen Ciphertext:

  • Chosen Plaintext Attack (CPA): The cryptanalyst chooses plaintext to be encrypted, and the corresponding ciphertext is obtained.
  • Adaptive Chosen Plaintext Attack (ACPA): The cryptanalyst chooses plaintext to be encrypted; then, based on the resulting ciphertext, he or she chooses another sample to be encrypted.
  • Chosen Ciphertext Attack (CCA): The cryptanalyst chooses ciphertext to be decrypted, and the corresponding plaintext is obtained.
  • Adaptive Chosen Ciphertext Attack (ACCA): The cryptanalyst chooses ciphertext to be decrypted; then, based on the resulting plaintext, he or she chooses another sample to be decrypted.

Known Plaintext Attack (KPA)

In a Known Plaintext Attack (KPA), the cryptanalyst has obtained the ciphertext and corresponding plaintext of several past messages, which he or she uses to decipher new messages.

Man-in-the-Middle Attack

A Man-in-the-Middle Attack involves an attacker intercepting messages between two parties on a network and potentially modifying the original message.

Meet-in-the-Middle Attack

A Meet-in-the-Middle Attack targets ciphers that apply two successive encryption operations with two different keys (such as Double DES). The attacker encrypts known plaintext with each possible key on one end, decrypts the corresponding ciphertext with each possible key on the other end, and then compares the results in the middle. Although commonly classified as a brute-force attack, this kind of attack may also be considered an analytic attack because it does involve some differential analysis.

Replay Attack

A Replay Attack occurs when a session key is intercepted and used against a later encrypted session between the same two parties. Replay attacks can be countered by incorporating a time stamp in the session key.
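The timestamp countermeasure can be sketched as follows. The shared secret, the message framing, and the 30-second freshness window are illustrative assumptions; real protocols typically combine timestamps with nonces or sequence numbers:

```python
import hashlib
import hmac
import time

SECRET = b"session-secret"   # hypothetical shared session secret
WINDOW = 30                  # illustrative freshness window, in seconds

def make_token(message, now=None):
    """Bind a timestamp to the message under a keyed digest."""
    ts = str(int(now if now is not None else time.time())).encode()
    tag = hmac.new(SECRET, ts + b"|" + message, hashlib.sha256).hexdigest()
    return ts, tag

def verify(message, ts, tag, now=None):
    """Reject stale (replayed) or forged messages."""
    current = now if now is not None else time.time()
    if current - int(ts) > WINDOW:
        return False  # too old: treat as a replay
    expected = hmac.new(SECRET, ts + b"|" + message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

ts, tag = make_token(b"open the door", now=1000)
print(verify(b"open the door", ts, tag, now=1010))  # True: fresh
print(verify(b"open the door", ts, tag, now=2000))  # False: replayed later
```

Because the timestamp is covered by the keyed digest, an attacker can’t simply update it on a captured message without invalidating the tag.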

Apply Security Principles to Site and Facility Design

Finally, securely designed and built software running on securely designed and built systems must be operated in securely designed and built facilities. Otherwise, an adversary with unrestricted access to a system and its installed software will inevitably succeed in compromising your security efforts. Astute organizations involve security professionals during the design, planning, and construction of new or renovated locations and facilities. Proper site- and facility-requirements planning during the early stages of construction helps ensure that a new building or data center is adequate, safe, and secure — all of which can help an organization avoid costly situations later.

The principles of Crime Prevention Through Environmental Design (CPTED) have been widely adopted by security practitioners in the design of public and private buildings, offices, communities, and campuses since CPTED was first published in 1971. CPTED focuses on designing facilities by using techniques such as unobstructed areas, creative lighting, and functional landscaping, which help to naturally deter crime through positive psychological effects. By making it difficult for a criminal to hide, gain access to a facility, escape a location, or otherwise perpetrate an illegal and/or violent act, such techniques may cause a would-be criminal to decide against attacking a target or victim, and help to create an environment that’s perceived as (and that actually is) safer for legitimate people who regularly use the area. CPTED comprises three basic strategies:

  • Natural access control: Uses security zones (or defensible space) to limit or restrict movement and differentiate between public, semi-private, and private areas that require differing levels of protection. For example, this natural access control can be accomplished by limiting points of entry into a building and using structures such as sidewalks and lighting to guide visitors to main entrances and reception areas. Target hardening complements natural access controls by using mechanical and/or operational controls, such as window and door locks, alarms, picture identification requirements, and visitor sign-in/out procedures.
  • Natural surveillance: Reduces criminal threats by making intruder activity more observable and easily detected. Natural surveillance can be accomplished by maximizing visibility and activity in strategic areas, for example, by placing windows to overlook streets and parking areas, landscaping to eliminate hidden areas and create clear lines of sight, installing open railings on stairways to improve visibility, and using numerous low-intensity lighting fixtures to eliminate shadows and reduce security-camera glare or blind spots (particularly at night).
  • Territorial reinforcement: Creates a sense of pride and ownership, which causes intruders to more readily stand out and encourages people to report suspicious activity, instead of ignoring it. Territorial reinforcement is accomplished through maintenance activities (picking up litter, cleaning up graffiti, repairing broken windows, and replacing light bulbs), assigning individuals responsibility for an area or space, placing amenities (such as benches and water fountains) in common areas, and displaying prominent signage (where appropriate). It can also include scheduled activities, such as corporate-sponsored beautification projects and company picnics.

Choosing a secure location

Location, location, location! Although, to a certain degree, this bit of conventional business wisdom may be less important to profitability in the age of e-commerce, it’s still a critical factor in physical security. Important factors when considering a location include

  • Climatology and natural disasters: Although an organization is unlikely to choose a geographic location solely based on the likelihood of hurricanes or earthquakes, these factors must be considered when designing a safe and secure facility. Other related factors may include flood plains, the location of evacuation routes, and the adequacy of civil and emergency preparedness.
  • Local considerations: Is the location in a high-crime area? Are hazards nearby, such as hazardous materials storage, railway freight lines, or flight paths for the local airport? Is the area heavily industrialized (will air and noise pollution, including vibration, affect your systems)?
  • Visibility: Will your employees and facilities be targeted for crime, terrorism, or vandalism? Is the site near another high-visibility organization that may attract undesired attention? Is your facility located near a government or military target? Keeping a low profile is generally best because you avoid unwanted and unneeded attention; avoid external building markings, if possible.
  • Accessibility: Consider local traffic patterns, convenience to airports, proximity to emergency services (police, fire, and medical facilities), and availability of adequate housing. For example, will on-call employees have to drive for an hour to respond when your organization needs them?
  • Utilities: Where is the facility located in the power grid? Is electrical power stable and clean? Is sufficient fiber optic cable already in place to support telecommunications requirements?
  • Joint tenants: Will you have full access to all necessary environmental controls? Can (and should) physical security costs and responsibilities be shared between joint tenants? Are other tenants potential high-visibility targets? Do other tenants take security as seriously as your organization?

Designing a secure facility

Many of the physical and technical controls that we discuss in the section “Implement Site and Facility Security Controls,” later in this chapter, should be considered during the initial design of a secure facility. Doing so often helps reduce costs and improve the overall effectiveness of these controls. Other building design considerations include

  • Exterior walls: Ideally, exterior walls should be able to withstand high winds (tornadoes and hurricanes/typhoons) and reduce electronic emanations that can be detected and used to re-create high-value data (for example government or military data). If possible, exterior windows should be avoided throughout the building, particularly on lower levels. Metal bars over windows or reinforced windows on lower levels may be necessary. Any windows should be fixed (meaning you can’t open them), shatterproof, and sufficiently opaque to conceal inside activities.
  • Interior walls: Interior walls adjacent to secure or restricted areas must extend from the floor to the ceiling (through raised flooring and drop ceilings) and must comply with applicable building and fire codes. Walls adjacent to storage areas (such as closets containing janitorial supplies, paper, media, or other flammable materials) must meet minimum fire ratings, which are typically higher than for other interior walls. Ideally, Kevlar (bulletproof) walls should protect the most sensitive areas.
  • Floors: Flooring (both slab and raised) must be capable of bearing loads in accordance with local building codes (typically 150 pounds per square foot). Additionally, raised flooring must have a nonconductive surface and be properly grounded to reduce personnel safety risks.
  • Ceilings: Weight-bearing and fire ratings must be considered. Drop ceilings may temporarily conceal intruders and small water leaks; conversely, stained drop-ceiling tiles can reveal leaks while temporarily impeding water damage.
  • Doors: Doors and locks must be sufficiently strong and well-designed to resist forcible entry, and they need a fire rating equivalent to adjacent walls. Emergency exits must remain unlocked from the inside and should also be clearly marked, as well as monitored or alarmed. Electronic lock mechanisms and other access control devices should fail open (unlock) in the event of an emergency to permit people to exit the building. Many doors swing out to facilitate emergency exiting; thus door hinges are located on the outside of the room or building. These hinges must be properly secured to prevent an intruder from easily lifting hinge pins and removing the door.
  • Lighting: Exterior lighting for all physical spaces and buildings in the security perimeter (including entrances and parking areas) should be sufficient to provide safety for personnel, as well as to discourage prowlers and casual intruders.
  • Wiring: All wiring, conduits, and cable runs must comply with building and fire codes, and be properly protected. Plenum cabling must be used below raised floors and above drop ceilings because PVC-clad cabling releases toxic chemicals when it burns.

technicalstuff A plenum is the vacant area above a drop ceiling or below a raised floor. A fire in these areas can spread very rapidly and can carry smoke and noxious fumes to other areas of a burning building. For this reason, non-PVC-coated cabling, known as plenum cabling, must be used in these areas.

  • Electricity and HVAC: Electrical load and HVAC requirements must be carefully planned to ensure that sufficient power is available in the right locations and that proper climate ranges (temperature and humidity) are maintained.
  • Pipes: Locations of shutoff valves for water, steam, or gas pipes should be identified and appropriately marked. Drains should have positive flow, meaning they carry drainage away from the building.
  • Lightning strikes: Approximately 10,000 fires are started every year by lightning strikes in the United States alone, despite the fact that only 20 percent of all lightning ever reaches the ground. Lightning can heat the air in immediate contact with the stroke to 54,000° Fahrenheit (F), which translates to 30,000° Celsius (C), and lightning can discharge 100,000 amperes of electrical current. Now that’s an inrush!
  • Magnetic fields: Monitors and storage media can be permanently damaged or erased by magnetic fields.
  • Sabotage/terrorism/war/theft/vandalism: Both internal and external threats must be considered. A heightened security posture is also prudent during certain other disruptive situations — including labor disputes, corporate downsizing, hostile terminations, bad publicity, demonstrations/protests, and civil unrest.
  • Equipment failure: Equipment failures are inevitable. Maintenance and support agreements, ready spare parts, and redundant systems can mitigate the effects.
  • Loss of communications and utilities: Including voice and data; electricity; and heating, ventilation, and air conditioning (HVAC). Loss of communications and utilities may happen because of any of the factors discussed in the preceding bullets, as well as human errors and mistakes.
  • Vibration and movement: Causes may include earthquakes, landslides, and explosions. Equipment may also be damaged by sudden or severe vibrations, falling objects, or equipment racks tipping over. More seriously, vibrations or movement may weaken structural integrity, causing a building to collapse or otherwise be unusable.
  • Severe weather: Includes hurricanes, tornadoes, high winds, severe thunderstorms and lightning, rain, snow, sleet, and ice. Such forces of nature may cause fires, water damage and flooding, structural damage, loss of communications and utilities, and hazards to personnel.
  • Personnel loss: Can happen because of illness, injury, death, transfer, labor disputes, resignations, and terminations. The negative effects of a personnel loss can be mitigated through good security practices, such as documented procedures, job rotations, cross-training, and redundant functions.

Implement Site and Facility Security Controls

The CISSP candidate must understand the various threats to physical security; the elements of site- and facility-requirements planning and design; the various physical security controls, including access controls, technical controls, environmental and life safety controls, and administrative controls; as well as how to support the implementation and operation of these controls, as covered in this section.

tip Although much of the information in this section may seem to be common sense, the CISSP exam asks very specific and detailed questions about physical security, and many candidates lack practical experience in fighting fires, so don’t underestimate the importance of physical security — in real life and on the CISSP exam!

Wiring closets, server rooms, media storage facilities, and evidence storage

Wiring closets, server rooms, and media and evidence storage facilities contain high-value equipment and/or media that is critical to ongoing business operations or in support of investigations. Physical security controls often found in these locations include

  • Strong access controls. Typically, this includes the use of key cards, plus a PIN pad or biometric.
  • Fire suppression. Often, you’ll find inert gas fire suppression instead of water sprinklers, because water can damage computing equipment in case of discharge.
  • Video surveillance. Cameras fixed at entrances to wiring closets and data center entrances, as well as the interior of those facilities, to observe the goings-on of both authorized personnel and intruders.
  • Visitor log. All visitors, who generally require a continuous escort, often are required to sign a visitor log.
  • Asset check-in / check-out log. All personnel are required to log the introduction and removal of any equipment and media.

Restricted and work area security

High-security work areas often employ physical security controls above and beyond what is seen in ordinary work areas. In addition to key card access control systems and video surveillance, additional physical security controls may include

  • Multi-factor key card entry. Together with key cards, employees may be required to use a PIN Pad or biometric to access restricted areas.
  • Security guards. There may be more guards present at ingress / egress points, as well as roaming within the facility, to be on the alert for unauthorized personnel or unauthorized activities.
  • Guard dogs. These provide additional deterrence against unauthorized entry, and also assist in the capture of unauthorized personnel in a facility.
  • Security walls and fences. Restricted facilities may employ one or more security walls and fences to keep unauthorized personnel away from facilities. General height requirements for fencing are listed in Table 5-5.
  • Security lighting. Restricted facilities may have additional lighting, to expose and deter any would-be intruders.
  • Security gates, crash gates, and bollards. These controls limit the movement of vehicles near a facility to reduce vehicle-borne threats.

TABLE 5-5 General Fencing Height Requirements

  • 3–4 ft (1m): Deters casual trespassers
  • 6–7 ft (2m): Too high to climb easily
  • 8 ft (2.4m) + three-strand barbed wire: Deters more determined intruders

Utilities and HVAC considerations

Environmental and life safety controls, such as utilities and HVAC (heating, ventilation, and air conditioning) are necessary for maintaining a safe and acceptable operating environment for computers and personnel.

Electrical power

General considerations for electrical power include having one or more dedicated feeders from one or more utility substations or power grids, as well as ensuring that adequate physical access controls are implemented for electrical distribution panels and circuit breakers. An Emergency Power Off (EPO) switch should be installed near major systems and exit doors to shut down power in case of fire or electrical shock. Additionally, a backup power source should be established, such as a diesel or natural-gas power generator. Backup power should be provided for critical facilities and systems, including emergency lighting, fire detection and suppression, mainframes and servers (and certain workstations), HVAC, physical access control systems, and telecommunications equipment.

warning Although natural gas can be a cleaner alternative to diesel for backup power in terms of air and noise pollution, it’s generally not acceptable for emergency life systems (such as emergency lighting and fire protection systems) because the fuel source (natural gas) can’t be stored locally; the system instead relies on an external fuel supply delivered by pipeline.

Protective controls for electrostatic discharge (ESD) include

  • Maintain proper humidity levels (40 to 60 percent).
  • Ensure proper grounding.
  • Use anti-static flooring, anti-static carpeting, and floor mats.

Protective controls for electrical noise include

  • Install power line conditioners.
  • Ensure proper grounding.
  • Use shielded cabling.

Using an Uninterruptible Power Supply (UPS) is perhaps the most important protection against electrical anomalies. A UPS provides clean power to sensitive systems and a temporary power source during electrical outages (blackouts, brownouts, and sags); this power supply must be sufficient to properly shut down the protected systems. Note: A UPS shouldn’t be used as a backup power source. A UPS — even a building UPS — is designed to provide temporary power, typically for 5 to 30 minutes, in order to give a backup generator time to start up or to allow a controlled and proper shutdown of protected systems.

Electrical hazards

Sensitive equipment can be damaged or affected by various electrical hazards and anomalies, including:

  • Electrostatic discharge (ESD): The ideal humidity range for computer equipment is 40 to 60 percent. Higher humidity causes condensation and corrosion. Lower humidity increases the potential for ESD (static electricity). A static charge of as little as 40V (volts) can damage sensitive circuits, and 2,000V can cause a system shutdown. The minimum discharge that can be felt by humans is 3,000V, and electrostatic discharges of over 25,000V are possible — so if you can feel it, it’s a problem for your equipment!

    remember The ideal humidity range for computer equipment is 40 to 60 percent.

  • Electrical noise: Includes Electromagnetic Interference (EMI) and Radio Frequency Interference (RFI). EMI is generated by the different charges between the three electrical wires (hot, neutral, and ground) and can be either common-mode noise (caused by hot and ground) or traverse-mode noise (caused by a difference in power between the hot and neutral wires). RFI is caused by electrical components, such as fluorescent lighting and electric cables. A transient is a momentary line-noise disturbance.
  • Electrical anomalies: These anomalies include the ones listed in Table 5-6.

TABLE 5-6 Electrical Anomalies

  • Blackout: Total loss of power
  • Fault: Momentary loss of power
  • Brownout: Prolonged drop in voltage
  • Sag: Short drop in voltage
  • Inrush: Initial power rush
  • Spike: Momentary rush of power
  • Surge: Prolonged rush of power

tip You may want to come up with some meaningless mnemonic for the list in Table 5-6, such as Bob Frequently Buys Shoes In Shoe Stores. You need to know these terms for the CISSP exam.

remember It’s not the volts that kill — it’s the amps!

warning Surge protectors and surge suppressors provide only minimal protection for sensitive computer systems, and they’re more commonly (and dangerously) used to overload an electrical outlet or as a daisy-chained extension cord. The protective circuitry in most of these units costs less than one dollar (compare the cost of a low-end surge protector with that of a 6-foot extension cord), and you get what you pay for — these glorified extension cords provide only minimal spike protection. True, a surge protector does provide more protection than nothing at all, but don’t be lured into complacency by these units — check them regularly for proper use and operation, and don’t accept them as a viable alternative to a UPS.

HVAC

Heating, ventilation, and air conditioning (HVAC) systems maintain the proper environment for computers and personnel. HVAC-requirements planning involves complex calculations based on numerous factors, including the average BTUs (British Thermal Units) produced by the estimated computers and personnel occupying a given area, the size of the room, insulation characteristics, and ventilation systems.
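A first-order version of that calculation is simple arithmetic. The load figures below are hypothetical, and the per-person and conversion constants are common rules of thumb, not engineering values for any specific facility:

```python
# Rough sensible-heat load estimate for a small server room (hypothetical).
it_load_watts = 20_000      # IT equipment: nearly all power becomes heat
people = 4                  # occupants, at ~400 BTU/hr each (rule of thumb)

WATTS_TO_BTU_HR = 3.412     # 1 watt dissipated = ~3.412 BTU per hour
BTU_HR_PER_TON = 12_000     # 1 ton of cooling = 12,000 BTU per hour

btu_hr = it_load_watts * WATTS_TO_BTU_HR + people * 400
tons_of_cooling = btu_hr / BTU_HR_PER_TON
print(round(tons_of_cooling, 1))  # 5.8
```

A real HVAC design would then layer in the factors the paragraph above mentions: room size, insulation characteristics, lighting, and ventilation.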

The ideal temperature range for computer equipment is between 50°F and 80°F (10°C and 27°C). At temperatures as low as 100°F (38°C), magnetic storage media can be damaged.

remember The ideal temperature range for computer equipment is between 50°F and 80°F (10°C and 27°C).

The ideal humidity range for computer equipment is between 40 and 60 percent. Higher humidity causes condensation and corrosion. Lower humidity increases the potential for ESD (static electricity).

Doors and side panels on computer equipment racks should be kept closed (and locked, as a form of physical access control) to ensure proper airflow for cooling and ventilation. When possible, empty spaces in equipment racks (such as a half-filled rack or gaps between installed equipment) should be covered with blanking panels to reduce hot and cold air mixing between the hot side (typically the power-supply side of the equipment) and the cold side (typically the front of the equipment); such mixing of hot and cold air can reduce the efficiency of cooling systems.

Heating and cooling systems should be properly maintained, and air filters should be cleaned regularly to reduce dust contamination and fire hazards.

Most gas-discharge fire suppression systems automatically shut down HVAC systems prior to discharging, but a separate Emergency Power Off (EPO) switch should be installed near exits to facilitate a manual shutdown in an emergency.

Ideally, HVAC equipment should be dedicated, controlled, and monitored. If the systems aren’t dedicated or independently controlled, proper liaison with the building manager is necessary to ensure that everyone knows who to call when there are problems. Monitoring systems should alert the appropriate personnel when operating thresholds are exceeded.

Water issues

Water damage (and damage from liquids in general) can occur from many different sources, including pipe breakage, firefighting efforts, leaking roofs, spilled drinks, flooding, and tsunamis. Wet computers and other electrical equipment pose a potentially lethal hazard.

Both preventive and detective controls are used to ensure that water in unwanted places does not disrupt business operations or destroy expensive assets. Common features include

  • Water diversion. Barriers of various types help to prevent water from entering sensitive areas.
  • Water detection alarms. Sensors that detect the presence of water can alert personnel of the matter and provide valuable time before damage occurs.

Fire prevention, detection, and suppression

Threats from fire can be potentially devastating and lethal. Proper precautions, preparation, and training not only help limit the spread of fire and damage, but more important, can also save lives.

remember Saving human lives is the first priority in any life-threatening situation.

Other hazards associated with fires include smoke, explosions, building collapse, release of toxic materials or vapors, and water damage.

For a fire to burn, it requires three elements: heat, oxygen, and fuel. These three elements are sometimes referred to as the fire triangle. (See Figure 5-4.) Fire suppression and extinguishing systems fight fires by removing one of these three elements or by temporarily breaking up the chemical reaction between these three elements (separating the fire triangle). Fires are classified according to the fuel type, as listed in Table 5-7.

image

FIGURE 5-4: A fire needs these three elements to burn.

TABLE 5-7 Fire Classes and Suppression/Extinguishing Methods

  • Class A (common combustibles, such as paper, wood, furniture, and clothing): Water or soda acid
  • Class B (burnable fuels, such as gasoline or oil): CO2 or soda acid
  • Class C (electrical fires, such as computers or electronics): CO2 (Note: The most important step in fighting a fire in this class is to turn off the electricity first!)
  • Class D (special fires, such as combustible metals): May require total immersion or other special techniques
  • Class K (or F) (cooking oils or fats): Water mist or fire blankets

remember Saving human lives is the first priority in any life-threatening situation.

tip You must be able to describe Class A, B, and C fires and their primary extinguishing methods. The CISSP exam doesn’t ask about Class D and K (or F) fires (they aren’t common where computer equipment is concerned — unless your server room happens to be located directly above the deep fat fryers of a local bar and hot wings restaurant).

Fire detection and suppression

Fire detection and suppression systems are some of the most essential life safety controls for protecting facilities, equipment, and (most important) human lives.

The three main types of fire detection systems are

  • Heat-sensing: These devices sense either temperatures exceeding a predetermined level (fixed-temperature detectors) or rapidly rising temperatures (rate-of-rise detectors). Fixed-temperature detectors are more common and exhibit a lower false-alarm rate than rate-of-rise detectors.
  • Flame-sensing: These devices sense either the flicker (or pulsing) of flames or the infrared energy of a flame. These systems are relatively expensive but provide an extremely rapid response time.
  • Smoke-sensing: These devices detect smoke, one of the by-products of fire. The four types of smoke detectors are
    • Photoelectric: Sense variations in light intensity.
    • Beam: Similar to photoelectric; sense when smoke interrupts beams of light.
    • Ionization: Detect disturbances in the normal ionization current of radioactive materials.
    • Aspirating: Draw air into a sampling chamber to detect minute amounts of smoke.

remember The three main types of fire detection systems are heat-sensing, flame-sensing, and smoke-sensing.
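The two heat-sensing strategies described above differ only in what they measure: an absolute temperature versus a rate of change. This minimal sketch illustrates the distinction; the threshold values are hypothetical, not taken from any detector specification:

```python
# Illustrative sketch of the two heat-sensing detector strategies.
# Thresholds below are hypothetical examples, not vendor specifications.

def fixed_temperature_alarm(temp_f: float, threshold_f: float = 135.0) -> bool:
    """Fixed-temperature detector: alarm when the reading exceeds
    a predetermined absolute level."""
    return temp_f > threshold_f

def rate_of_rise_alarm(prev_temp_f: float, curr_temp_f: float,
                       minutes_elapsed: float,
                       max_rise_per_min: float = 15.0) -> bool:
    """Rate-of-rise detector: alarm when the temperature climbs
    faster than an allowed rate (degrees F per minute)."""
    rise_rate = (curr_temp_f - prev_temp_f) / minutes_elapsed
    return rise_rate > max_rise_per_min

# A slow, smoldering burn eventually trips the fixed-temperature detector;
# a sudden flash fire trips the rate-of-rise detector first.
print(fixed_temperature_alarm(150.0))        # True
print(rate_of_rise_alarm(70.0, 90.0, 1.0))   # True (20 degrees F/min rise)
```

Note how the rate-of-rise logic can false-alarm on rapid but benign temperature swings (for example, an HVAC fault), which is consistent with the higher false-alarm rate mentioned above.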

The two primary types of fire suppression systems are

  • Water sprinkler systems: Water extinguishes fire by removing the heat element from the fire triangle, and it’s most effective against Class A fires. Water is the primary fire-extinguishing agent for all business environments. Although water can potentially damage equipment, it’s one of the most effective, inexpensive, readily available, and least harmful (to humans) extinguishing agents available. The four variations of water sprinkler systems are

    • Wet-pipe (or closed-head): Most commonly used and considered the most reliable. Pipes are always charged with water and ready for activation. Typically, a fusible link in the nozzle melts or ruptures, opening a gate valve that releases the water flow. Disadvantages include flooding caused by nozzle or pipe failure, and by pipes freezing in cold weather.
    • Dry-pipe: No standing water in the pipes. At activation, a clapper valve opens, air is blown out of the pipe, and water flows. This type of system is less efficient than a wet-pipe system but reduces the risk of accidental flooding; the time delay provides an opportunity to shut down computer systems (or remove power), if conditions permit.
    • Deluge: Operates similarly to a dry-pipe system but is designed to deliver large volumes of water quickly. Deluge systems are typically not used for computer-equipment areas.
    • Preaction: Combines wet- and dry-pipe systems. Pipes are initially dry. When a heat sensor is triggered, the pipes are charged with water, and an alarm is activated. Water isn’t actually discharged until a fusible link melts (as in wet-pipe systems). This system is recommended for computer-equipment areas because it reduces the risk of accidental discharge by permitting manual intervention.

    remember The four main types of water sprinkler systems are wet-pipe, dry-pipe, deluge, and preaction.
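The preaction sequence described above is essentially a two-stage state machine: stage one (a heat sensor) charges the pipes and raises an alarm; water discharges only after stage two (a fusible link melts). This sketch models that sequence for study purposes; the class and method names are illustrative, not from any control-system API:

```python
# Hedged sketch of the two-stage preaction sprinkler sequence.
# State and method names are illustrative only.

class PreactionSystem:
    def __init__(self):
        self.pipes_charged = False   # pipes start dry
        self.alarm_on = False
        self.discharging = False

    def heat_sensor_triggered(self):
        # Stage 1: charge the dry pipes with water and activate the alarm.
        self.pipes_charged = True
        self.alarm_on = True

    def fusible_link_melted(self):
        # Stage 2: water discharges only if the pipes are already charged.
        if self.pipes_charged:
            self.discharging = True

system = PreactionSystem()
system.fusible_link_melted()     # accidental link failure: pipes still dry
print(system.discharging)        # False (no accidental discharge)
system.heat_sensor_triggered()
system.fusible_link_melted()
print(system.discharging)        # True (both stages satisfied)
```

The first call to `fusible_link_melted()` shows why preaction is recommended for computer rooms: a single component failure does not release water, and the stage-one alarm buys time for manual intervention.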

  • Gas discharge systems: Gas discharge systems may be portable (such as a CO2 extinguisher) or fixed (beneath a raised floor). These systems are typically classified according to the extinguishing agent that’s employed. These agents include
    • Carbon dioxide (CO2): CO2 is a commonly used colorless, odorless gas that extinguishes fire by removing the oxygen element from the fire triangle. (Refer to Figure 5-4.) CO2 is most effective against Class B and C fires. Because it removes oxygen, its use is potentially lethal and therefore best suited for unmanned areas or with a delay action (that includes manual override) in manned areas.

      CO2 is also used in portable fire extinguishers, which should be located near all exits and within 50 feet (15 meters) of any electrical equipment. All portable fire extinguishers (CO2, water, and soda acid) should be clearly marked (listing the extinguisher type and the fire classes it can be used for) and periodically inspected. Additionally, all personnel should receive training in the proper use of fire extinguishers.

    • Soda acid: Includes a variety of chemical compounds that extinguish fires by removing the fuel element (suppressing the flammable components of the fuel) of the fire triangle. (Refer to Figure 5-4.) Soda acid is most effective against Class A and B fires. It is not used for Class C fires because of the highly corrosive nature of many of the chemicals used.
    • Inert gases and other clean agents: These agents suppress fire by breaking up the chemical reaction among the elements of the fire triangle; they are most effective against Class B and C fires. (Refer to Figure 5-4.) Inert gases don't damage computer equipment, don't leave liquid or solid residue, mix thoroughly with the air, and spread extremely quickly. However, these gases in concentrations higher than 10 percent are harmful if inhaled, and some types degrade into toxic chemicals (hydrogen fluoride, hydrogen bromide, and bromine) when used on fires that burn at temperatures above 900°F (482°C).

      Halon used to be the gas of choice in gas-discharge fire suppression systems. However, because of Halon's ozone-depleting characteristics, the Montreal Protocol of 1987 prohibited the further production and installation of Halon systems (beginning in 1994) and encouraged the replacement of existing systems. Acceptable replacements for Halon include FM-200 (most effective), CEA-410 or CEA-308, NAF-S-III, FE-13, Argon or Argonite, and Inergen.

      remember Halon is an ozone-depleting substance. Acceptable replacements include FM-200, CEA-410 or CEA-308, NAF-S-III, FE-13, Argon or Argonite, and Inergen.
