Chapter 4

Security Architecture and Engineering

Terms you’ll need to understand:

  • Buffer overflows

  • Security models

  • Rings of protection

  • Public key infrastructure

  • Digital signatures

  • Common Criteria

  • Reference monitor

  • Trusted computing base

  • Open and closed systems

  • Emanations

  • Encryption

Topics you’ll need to master:

  • How to select controls based on system security requirements

  • Use of confidentiality models such as Bell-LaPadula

  • How to identify integrity models such as Biba and Clark-Wilson

  • Common flaws and security issues associated with security architecture designs

  • Cryptography and how it is used to protect sensitive information

  • The need for and placement of physical security controls

Introduction

The CISSP exam Security Architecture and Engineering domain deals with hardware, software, security controls, and documentation. When hardware is designed, it needs to be built to specific standards that should provide mechanisms to protect the confidentiality, integrity, and availability of the data. The operating systems (OS) that will run on the hardware must also be designed in such a way as to ensure security.

Building secure hardware and operating systems is just a start. Both vendors and customers need to have a way to verify that hardware and software perform as stated, to rate these systems, and to have some level of assurance that such systems will function in a known manner. Evaluation criteria allow the parties involved to have a level of assurance.

This chapter introduces cryptography and how it can be used at multiple layers to enhance security. To pass the CISSP exam, you need to understand system hardware and software models and how physical and logical controls can be used to secure systems. This chapter also covers cryptography (both symmetric and asymmetric), hashing, and digital signatures, which are also potential test topics.

Secure Design Guidelines and Governance Principles

Building security into an architecture from the beginning is much cheaper than attempting to add it later. Part of this proactive approach should include an assessment to determine whether sensitive assets require any additional levels of security pertaining to confidentiality and integrity. Figure 4.1 illustrates the defense-in-depth design process.

There are two types of security controls:

  • Physical security controls: These controls can be used to restrict work areas, provide media security controls, restrict server room access, and maintain proper data storage and access.

  • Logical security controls: These controls can be deployed through the application of cryptographic controls.

FIGURE 4.1 Defense-in-Depth Design Process for Security Architecture

Various types of cryptographic controls can be used. Choosing the appropriate type requires determining specific characteristics, such as the type of algorithm used, the key length, and the application.
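As a simple illustration of a logical, cryptographic control, the following Python sketch (assuming the third-party cryptography package is installed) encrypts and decrypts a small piece of sensitive data with a symmetric key; the Fernet construction fixes the algorithm and key length (AES-128 in CBC mode with an HMAC-SHA256 integrity check), so only the key itself must be managed.

    from cryptography.fernet import Fernet

    # Generate a symmetric key; in practice this key must be generated, distributed,
    # stored, and eventually destroyed under a formal key management process.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    token = cipher.encrypt(b"account number 4111-1111-1111-1111")  # ciphertext
    print(cipher.decrypt(token))                                   # recovers the plaintext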

A public key infrastructure (PKI) is an industry-standard framework that establishes third-party trust between two different parties. Key management is a critical component of a PKI and includes cryptographic key generation, distribution, storage, validation, and destruction.

It is important to remember that any system can be attacked, so it is critical to choose a cryptographic system that is strong enough for its purpose. Cryptographic keys can be compromised, whether through weak algorithms or weak keys, and many cryptanalytic attack methods exist for compromising them.

Note

Data at rest can be protected with a Trusted Platform Module (TPM) chip, which is a cryptographic hardware processor that can be used to provide a greater level of security than is provided through software encryption. A TPM chip installed on the motherboard of a client computer can also be used for system state authentication. A TPM chip can also be used to store encryption keys.

TPM chips are addressed in ISO 11889-1:2009 and can be used with other forms of data and system protections to provide a layered approach referred to as defense in depth.

A framework is used to categorize an information system or business and to guide which controls or standards are applicable. These frameworks are typically tied to governance, which should focus on the availability of services, the integrity of information, and the protection of data confidentiality.

One early framework is the set of design principles for effective security that Saltzer and Schroeder presented in “The Protection of Information in Computer Systems.” This 1975 paper may seem somewhat dated today, but it is still relevant and often covered in college and university courses. In this paper, Saltzer and Schroeder define a framework for secure systems design that is based on eight architectural principles:

  • Complete mediation

  • Economy of mechanism

  • Fail-safe defaults

  • Least privilege

  • Least common mechanism

  • Open design

  • Psychological acceptability

  • Separation of privilege

Another approach is the ISO/IEC 19249, Security Techniques—Catalogue of Architectural and Design Principles for Secure Products, Systems and Applications, which breaks out design principles into two groupings, each with five items:

  • Architectural principles:

    • Domain separation

    • Layering

    • Encapsulation

    • Redundancy

    • Virtualization

  • Design principles:

    • Least privilege

    • Attack surface minimization

    • Centralized parameter validation

    • Centralized general security services

    • Preparing for error and exception handling

Another governance framework is the IT Infrastructure Library (ITIL). ITIL specifies a set of processes, procedures, and tasks that can be integrated with an organization’s strategy to deliver value and maintain a minimum level of competency. ITIL can be used to create a baseline from which the organization can plan, implement, and measure its governance progress. ITIL presents a service lifecycle that includes the following components:

  • Continual service improvement

  • Service strategy

  • Service design

  • Service transition

  • Service operation

Enterprise Architecture

Security and governance can be enhanced by implementing an enterprise architecture (EA) plan. EA is the practice in information technology of organizing and documenting a company’s IT assets to enhance planning, management, and expansion. The primary purpose of using EA is to ensure that business strategy and IT investments are aligned. The benefit of EA is that it provides a means of traceability that extends from the highest level of business strategy down to the fundamental technology.

One early EA model is the Zachman Framework, which was designed to allow companies to structure policy documents for information systems so they focus on who, what, where, when, why, and how (see Figure 4.2).

FIGURE 4.2 Zachman Model

Federal law requires each government agency to set up its EA and a structure for its governance. This process is guided by the Federal Enterprise Architecture (FEA) framework, which is designed to use five models:

  • Performance reference model: A framework used to measure performance of major IT investments

  • Business reference model: A framework used to provide an organized, hierarchical model for day-to-day business operations

  • Service component reference model: A framework used to classify service components with respect to how they support business or performance objectives

  • Technical reference model: A framework used to categorize the standards, specifications, and technologies that support and enable the delivery of service components and capabilities

  • Data reference model: A framework used to provide a standard means by which data can be described, categorized, and shared

An independently designed, but later integrated, subset of the Zachman Framework is the Sherwood Applied Business Security Architecture (SABSA). Like the Zachman Framework, the SABSA model and methodology were developed for risk-driven enterprise information security architectures; its layers ask what, why, how, who, where, and when. For more information on the SABSA model, see www.sabsa-institute.org.

The ISO 27000 series is part of a family of governance standards that can trace their origins back to BS 7799. Organizations can become ISO 27001 certified by having their compliance verified by an accredited certification body. Some of the core ISO standards include the following:

  • ISO 27001: This document describes requirements for establishing, implementing, operating, monitoring, reviewing, and maintaining an information security management system (ISMS). It follows the Plan-Do-Check-Act model.

  • ISO 27002: This document, which began as the BS 7799 standard and was republished as the ISO 17799 standard, describes a code of practice for developing a security program within an organization.

  • ISO 27003: This document provides guidance on implementing an ISMS.

  • ISO 27004: This document describes ways to measure the effectiveness of an information security program.

  • ISO 27005: This document describes information security risk management.

True security is a layered process and requires more than governance. The items discussed in the following sections can be used to build a more secure organization.

Regulatory Compliance and Process Control

One area of concern for a security professional is protection of sensitive information, including financial data. One attempt to provide this protection is the Payment Card Industry Data Security Standard (PCI-DSS). This multinational standard, which was first released in 2004, was created to enforce strict standards of control for the protection of credit card, debit card, ATM card, and gift card numbers by mandating policies, security devices, controls, and network monitoring. PCI also sets standards for the protection of personally identifiable information that is associated with the cardholder on an account. Participating vendors include American Express, MasterCard, Visa, and Discover.

Whereas PCI is used to protect financial data, Control Objectives for Information and Related Technology (COBIT) was developed to meet the requirements of business and IT processes. It is a standard used for auditors worldwide and was developed by the Information Systems Audit and Control Association (ISACA). COBIT is divided into four control areas:

  • Planning and Organization

  • Acquisition and Implementation

  • Delivery and Support

  • Monitoring

Fundamental Concepts of Security Models

Modern computer systems can be broken down into four groupings, or layers:

  • Hardware

  • Kernel and device drivers

  • Operating system

  • Applications

Hardware interacts with low-level software such as the operating system kernel, while the operating system and applications do the work we need done. At the core of every computer system are the central processing unit (CPU) and the hardware that makes it run. The CPU is just one of the items found on the motherboard, which serves as the base for most crucial system components.

The following sections examine the various parts of a computer system, starting at the heart of the system.

Central Processing Unit

The CPU is the heart of a computer system and serves as the brain of the computer. The CPU consists of the following:

  • Arithmetic logic unit (ALU): The ALU performs arithmetic and logical operations. It is the brain of the CPU.

  • Control unit: The control unit manages the instructions it receives from memory. It decodes and executes the requested instructions and determines which instructions have priority for processing.

  • Memory: Memory is used to hold instructions and data to be processed. CPU memory (registers and cache) is not typical memory; it is much faster than main memory.

A CPU is capable of executing a series of basic operations, including fetch, decode, execute, and write operations. Pipelining overlaps these steps so that multiple instructions can be in different stages of execution at the same time. A CPU can operate in one of four states:

  • Supervisor state: The program can access the entire system.

  • Problem state: Only non-privileged instructions can be executed.

  • Ready state: The program is ready to resume processing.

  • Wait state: The program is waiting for an event to complete.

Because CPUs have very specific designs, the operating system as well as applications must be developed to work with the CPU. CPUs also have different types of registers to hold data and instructions. The base register contains the beginning address assigned to a process, and the limit register marks the end of the assigned memory segment. Together, these components are responsible for the recall and execution of programs.
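As a conceptual illustration (not how any particular CPU implements it), the following Python sketch shows the bounds check that base and limit registers make possible: every address a process generates is compared against the start and length of its assigned segment, one of the hardware mechanisms that helps contain buffer overflows.

    def within_segment(address: int, base: int, limit: int) -> bool:
        """Return True only if the address falls inside [base, base + limit)."""
        return base <= address < base + limit

    # A process assigned a 4 KB segment starting at address 0x5000:
    print(within_segment(0x5FFF, base=0x5000, limit=0x1000))  # True: last byte of the segment
    print(within_segment(0x6000, base=0x5000, limit=0x1000))  # False: one byte past the end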

CPUs have made great strides, as illustrated in Table 4.1. As the size of transistors has decreased, the number of transistors that can be placed on a CPU has increased. Thanks to increases in the total number of transistors and in clock speed, the power of CPUs has increased exponentially. Today, a 3.06 GHz Intel Core i7 can perform on the order of 18 billion instructions per second (18,000 MIPS).

TABLE 4.1 CPU Advancements

CPU             Year   Number of Transistors   Clock Speed
8080            1974   6,000                   2 MHz
80386           1986   275,000                 12.5 MHz
Pentium         1993   3,100,000               60 MHz
Intel Core 2    2006   291,000,000             2.66 GHz
Intel Core i7   2009   731,000,000             4.00 GHz
Intel Core M    2014   1,300,000,000           2.6 GHz

Note

Processor performance is often expressed in MIPS (millions of instructions per second). This measure is used to indicate how fast a CPU can execute instructions.

Two basic designs of CPUs are manufactured for modern computer systems:

  • Reduced instruction set computer (RISC): Uses simple instructions that require a reduced number of clock cycles

  • Complex instruction set computer (CISC): Performs multiple operations for a single instruction

The CPU requires two inputs to accomplish its duties: instructions and data. The data is passed to the CPU for manipulation, where it is typically worked on in either the problem state or the supervisor state. In the problem state, the CPU works on the data with non-privileged instructions. In the supervisor state, the CPU executes privileged instructions.

ExamAlert

A superscalar processor is a processor that can execute multiple instructions at the same time; a scalar processor can execute only one instruction at a time. You need to know this distinction for the CISSP exam.

A CPU can be classified into one of several categories, depending on its functionality. When the computer’s CPU, motherboard, and operating system all support the functionality, the computer system is also categorized according to the following:

  • Multiprogramming: Can interleave two or more programs for execution at any one time

  • Multitasking: Can perform two or more tasks or subtasks at a time

  • Multiprocessor: Supports two or more CPUs

A multiprocessor system can work in symmetric or asymmetric mode. In symmetric mode, all processors are treated as equals: any processor can handle any task, all peripherals are equally accessible, and no specialized path to resources is required. In asymmetric mode, one CPU schedules and coordinates tasks among the other processors and resources.

The data that CPUs work with is usually part of an application or a program. These programs are tracked using a process ID (PID). Anyone who has ever looked at Task Manager in Windows or executed a ps command on a Linux machine has probably seen a PID number. You can manipulate the priority of these tasks as well as start and stop them. Fortunately, most programs do much more than the first C code you wrote, which probably just said “Hello World.” Each independently executing path of instructions within a process is known as a thread.

A program that has the capability to carry out more than one thread at a time is referred to as multithreaded (see Figure 4.3).

FIGURE 4.3 Processes and Threads
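The following minimal Python sketch shows what a multithreaded program looks like in practice: one process (one PID) carrying out several threads of execution at the same time.

    import os
    import threading

    def worker(name: str) -> None:
        # Every thread runs inside the same process, so all of them report the same PID.
        print(f"thread {name} running in process {os.getpid()}")

    threads = [threading.Thread(target=worker, args=(f"T{i}",)) for i in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()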

Process activity uses process isolation to keep processes separated. Four process isolation techniques are used to ensure that processes do not interfere with one another and that each application receives adequate processor time to operate properly:

  • Encapsulation of processes or objects: Other processes do not interact with the application.

  • Virtual mapping: The application is written in such a way that it believes it is the only application running.

  • Time multiplexing: This allows the application or process to share the computer’s resources.

  • Naming distinctions: Processes are assigned their own unique names.

ExamAlert

To get a good look at naming distinctions, run ps -aux from the terminal of a Linux system and note the unique PID values.

An interrupt is another key piece of a computer system. It is an electrical connection between a device and the CPU, and the device can put a signal on this connection to get the CPU’s attention. Closely related are the methods a system uses to move data to and from devices. The following are common I/O methods:

  • Programmed I/O: Used to transfer data between a CPU and a peripheral device

  • Interrupt-driven I/O: A more efficient input/output method that requires complex hardware

  • I/O using DMA: I/O based on direct memory access that can bypass the processor and write the information directly to main memory

  • Memory-mapped I/O: A method that requires the CPU to reserve space for I/O functions and to make use of the address for both memory and I/O devices

  • Port-mapped I/O: A method that uses a special class of instruction that can read and write a single byte to an I/O device

ExamAlert

Interrupts can be maskable and non-maskable. Maskable interrupts can be ignored by the application or the system, whereas non-maskable interrupts cannot be ignored by the system. An example of a non-maskable interrupt in Windows is the interrupt that occurs when you press Ctrl+Alt+Delete.

There is a natural hierarchy to memory, and there must therefore be a way to manage memory and ensure that it does not become corrupted. That is the job of the memory management system. Memory management systems on multitasking operating systems are responsible for the following tasks:

  • Relocation: The system can swap memory contents out to secondary storage as needed and bring them back into main memory, possibly at a different address.

  • Protection: The system controls access to memory segments and restricts which processes can write to memory.

  • Sharing: The system allows sharing of information based on a user’s security level for access control. For instance, Mike may be able to read an object, whereas Shawn may be able to read and write to the object.

  • Logical organization: The system provides for the sharing of and support for dynamic link libraries.

  • Physical organization: The system provides for the physical organization of memory.

Storage Media

A computer is not just a CPU; memory is also an important component. The CPU uses memory to store instructions and data. Therefore, memory is an important type of storage media. The CPU is the only component that can directly access memory. Systems are designed this way because the CPU has a high level of system trust.

A CPU can use different types of addressing schemes to communicate with memory, including absolute addressing and relative addressing. In addition, memory can be addressed either physically or logically. Physical addressing refers to the hard-coded address assigned to memory. Applications and programmers writing code use logical addresses. Relative addressing involves using a known address with an offset applied.

Not only can memory be addressed in different ways, but there are also different types of memory. Memory can be either nonvolatile or volatile. The sections that follow provide examples of both of these types.

Tip

Two important security concepts associated with storage are protected memory and memory addressing. For the CISSP exam, you should understand that protected memory prevents other programs or processes from gaining access or modifying the contents of address space that has previously been assigned to another active program. Memory can be addressed either physically or logically. Memory addressing describes the method used by the CPU to access the contents of memory. This is especially important for understanding the root causes of buffer overflow attacks.

RAM

Random-access memory (RAM) is volatile memory: if power is lost, the data in RAM is destroyed. RAM comes in two basic forms: static random-access memory (SRAM), which uses circuit latches to represent binary data, and dynamic random-access memory (DRAM), which must be refreshed every few milliseconds.

SRAM doesn’t require a refresh signal, as DRAM does. SRAM chips are more complex and faster, and thus they are more expensive. DRAM access times are around 60 nanoseconds (ns) or more; SRAM has access times as fast as 10 ns. SRAM is often used for cache memory.

DRAM chips can be manufactured inexpensively. Dynamic refers to the memory chips’ need for a constant update signal (also called a refresh signal) to retain the information that is written there. Currently, there are five popular implementations of DRAM:

  • Synchronous DRAM (SDRAM): SDRAM shares a common clock signal with the transmitter of the data. The computer’s system bus clock provides the common signal that all SDRAM components use for each step to be performed.

  • Double data rate (DDR): DDR supports a double transfer rate compared to ordinary SDRAM.

  • DDR2: DDR2 splits each clock pulse in two, doubling the number of operations it can perform.

  • DDR3: DDR3 is a DRAM interface specification that transfers data at twice the rate of DDR2 (eight times the speed of its internal memory arrays), enabling higher bandwidth and peak data rates.

  • DDR4: DDR4 offers higher speed than DDR2 or DDR3 and is one of the latest variants of DRAM. It is not compatible with any earlier type of RAM.

ExamAlert

Memory leaks occur when programs or processes use RAM but cannot release it. Programs that suffer from memory leaks will eventually use up all available memory and can cause a system to halt or crash.
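A minimal Python sketch of the pattern behind many memory leaks (the cache name and sizes are made up for illustration): objects keep being added to a long-lived structure and are never released, so the memory the program holds only grows.

    import sys

    cache = []  # long-lived structure that nothing ever clears

    def handle_request(payload: bytes) -> None:
        # Each request stores data that is never released: a classic leak pattern.
        cache.append(payload)

    for _ in range(10_000):
        handle_request(b"x" * 1024)

    leaked = sys.getsizeof(cache) + sum(sys.getsizeof(item) for item in cache)
    print(f"roughly {leaked // 1024} KB now held and never freed")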

ROM

Read-only memory (ROM) is nonvolatile memory that retains information even if power is removed. ROM is typically used to load and store firmware. Firmware is embedded software much like BIOS or UEFI.

Tip

Most modern computer systems use Unified Extensible Firmware Interface (UEFI) instead of BIOS. UEFI offers several advantages over BIOS, including support for remote diagnostics and repair of systems even if no OS is installed.

Some common types of ROM include the following:

  • Erasable programmable read-only memory (EPROM)

  • Electrically erasable programmable read-only memory (EEPROM)

  • Flash memory

  • Programmable logic devices (PLDs)

Secondary Storage

Memory plays an important role in the world of storage, but other long-term types of storage are also needed. One of these is sequential storage. Tape drives are a type of sequential storage that must be read sequentially from beginning to end.

Another well-known type of secondary storage is direct-access storage. Direct-access storage devices do not have to be read sequentially; the system can identify the location of the information and go directly to it to read the data. A hard drive is an example of a direct-access storage device: A hard drive has a series of platters, read/write heads, motors, and drive electronics contained within a case designed to prevent contamination. Hard drives are used to hold data and software. Software is an operating system or application that you’ve installed on a computer system.

Compact discs (CDs) are a type of optical media. They use a laser/opto-electronic sensor combination to read or write data. A CD can be read-only, write-once, or rewriteable. CDs can hold up to around 800 MB on a single disk. A CD is manufactured by applying a thin layer of aluminum to what is primarily hard clear plastic. During manufacture or when a CD/R is burned, small bumps or pits are placed in the surface of the disc. These bumps or pits are converted into binary ones or zeros. Unlike a floppy disk, which has tracks and sectors, a CD comprises one long spiral track that begins at the inside of the disc and continues toward the outer edge.

Digital video discs (DVDs) are very similar to CDs in that both are optical media: DVDs just hold more data. The current version of optical storage is the Blu-ray disc. These optical disks can hold 50 GB or more of data.

More and more systems today are moving to solid-state drives (SSDs) and flash memory storage. Sizes up to 2 TB are now common.

I/O Bus Standards

The data that a CPU is working with must have a way to move from the storage media to the CPU. This is accomplished by means of a bus, a set of conductor lines that transmits data between the CPU, storage media, and other hardware devices. You need to understand two bus-related terms for the CISSP exam:

  • Northbridge: The northbridge, which is considered the memory controller hub (MCH), connects the CPU, RAM, and video memory.

  • Southbridge: The southbridge, also known as the I/O controller hub (ICH), connects input/output devices such as the hard drive, DVD drive, keyboard, mouse, and so on.

From the point of view of the CPU, the various adapters plugged in to a computer are external devices. These connectors and the bus architecture used to move data to the devices have changed over time. The following are some bus architectures with which you need to be familiar:

  • Industry Standard Architecture (ISA): The ISA bus started as an 8-bit bus designed for IBM PCs. It is now obsolete.

  • Peripheral Component Interconnect (PCI): The PCI bus was developed by Intel and served as a replacement for ISA and other bus standards. PCI Express is now the standard.

  • Peripheral Component Interconnect Express (PCIe): The PCIe bus was developed as an upgrade to PCI. It offers several advantages, such as greater bus throughput, a smaller physical footprint, better performance, and better error detection and reporting.

  • Serial ATA (SATA): The SATA standard is the current standard for connecting hard drives and solid-state drives to computers. It uses a serial design and smaller cables and offers greater speeds and better airflow inside the computer case.

  • Small Computer Systems Interface (SCSI): The SCSI bus allows a variety of devices to be daisy-chained off a single controller. Many servers use the SCSI bus for their preferred hard drive solution.

Universal Serial Bus (USB) has gained wide market share because it overcame the limitations of traditional serial interfaces. USB 2.0 devices can communicate at speeds up to 480 Mbps (60 MBps), whereas USB 3.0 devices have a maximum signaling rate of 5 Gbps (about 625 MBps). Up to 127 devices can be connected to a single host controller through tiers of hubs, eliminating the need for expansion slots on the motherboard. The newest USB standard is 3.2; its biggest improvement is a boost in data transfer bandwidth, up to 20 Gbps in its fastest (Gen 2x2) mode.

USB is used for flash memory, cameras, printers, external hard drives, and phones. USB has two fundamental advantages: It has broad product support and devices are typically recognized immediately when connected.

Many Apple computers make use of the Thunderbolt interface, and a few legacy FireWire (IEEE 1394) interfaces are still found on digital audio and video equipment.

Virtual Memory and Virtual Machines

Modern computer systems have developed specific ways to store and access information. One of these is virtual memory, which is the combination of the computer’s primary memory (RAM) and secondary storage (the hard drive or SSD). When these two technologies are combined, the OS can make the CPU believe that it has much more memory than it actually has. Examples of virtual memory include the following:

  • Page file

  • Swap space

  • Swap partition

These virtual memory types are user defined in terms of size, location, and other factors. When RAM is nearly depleted, the operating system begins saving data onto the computer’s hard drive in a process called paging. Paging takes part of a program out of memory and uses the page file to save those parts of the program. If the system requires more RAM than paging provides, it writes an entire process out to the swap space. The paging file/swap file lets data be moved back and forth between the hard drive and RAM as needed. A specific partition can even be dedicated to holding such data and is therefore called a swap partition. Individuals who have used a computer’s hibernation function, or who have ever opened more programs than their computers had memory to support, are familiar with the operation of virtual memory.
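As an illustration, the following Python sketch (assuming the third-party psutil package is available; its virtual_memory and swap_memory calls are not part of the standard library) reports how heavily physical RAM and the swap/page file are currently being used.

    import psutil

    ram = psutil.virtual_memory()   # physical memory statistics
    swap = psutil.swap_memory()     # page file / swap space statistics

    print(f"RAM used:  {ram.percent}% of {ram.total // 2**20} MB")
    print(f"Swap used: {swap.percent}% of {swap.total // 2**20} MB")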

Closely related to virtual memory are virtual machines, such as VMware Workstation and Oracle VM VirtualBox. VMware is one of the leaders in the machine virtualization market. A virtual machine enables the user to run a second OS within a virtual host. For example, a virtual machine can let you run another Windows OS, Linux x86, or any other OS that runs on an x86 processor and supports standard BIOS/UEFI booting.

Virtual systems make use of a hypervisor to manage the virtualized hardware resources to run a guest operating system. A Type 1 hypervisor runs directly on the hardware, with VM resources provided by the hypervisor, whereas a Type 2 hypervisor runs on a host operating system above the hardware. Virtual machines can be used for development and system administration, production, and to reduce the number of physical devices needed. Hypervisors are also being used to design virtual switches, routers, and firewalls.

Tip

Virtualization has been very important in the workplace, but cloud-based systems have more recently begun to take the place of VMs. Cloud-based systems enable employees to work from many different locations. The applications and data can reside in the cloud, and a user can access this content from any location that has connectivity. The potential disadvantage of cloud computing is that security can be an issue. It is important to consider who owns the cloud. Is it a private cloud (owned by the company) or a public cloud (owned by someone else)? In addition, what is the physical location of the cloud, who has access to it, and is it shared (co-tenancy)? It is critical to consider each of these factors before placing any corporate assets in the cloud.

Computer Configurations

The following are some of the most commonly used computer and device configurations:

  • Print server: Print servers are usually located close to printers and allow many users to access the same printer and share its resources.

  • File server: File servers allow users to have a centralized site to store files. A file server provides an easy way to perform backups because it can be done on one server rather than on all the client computers. It also allows for group collaboration and multiuser access.

  • Application server: An application server allows users to run applications that are not installed on an end user’s system. It is a very popular concept in thin client environments, which depend on a central server for processing power. Licensing is an important consideration with application servers.

  • Web server: Web servers provide web services to internal and external users via web pages. A sample web address or URL (uniform resource locator) is www.thesolutionfirm.com.

  • Database server: Database servers store and access data, including information such as product inventories, price lists, customer lists, and employee data. Because databases hold sensitive information, they require well-designed security controls. A database server typically sits in front of a database and brokers requests, acting as middleware between the untrusted users and the database holding the data.

  • Laptops and tablets: These are mobile devices that are easily lost or stolen. Mobile devices have become very powerful and must be properly secured.

  • Smartphones: Today’s smartphones are handheld computers that have large amounts of processing capability. They can take photos and offer onboard storage, Internet connectivity, and the ability to run applications. These devices are of particular concern as more companies start to support bring your own device (BYOD) policies. Such devices can easily fall outside of company policies and controls.

  • Industrial control systems (ICS): ICSs are typically used for industrial process control, such as with manufacturing systems on factory floors. ICSs can be used to operate and/or automate industrial processes. There are several categories of ICSs, including supervisory control and data acquisition (SCADA) systems, distributed control systems (DCSs), and field devices.

  • Embedded devices / Internet of Things (IoT): Embedded devices / IoT include ATMs, point-of-sale terminals, and smartwatches. More and more devices include embedded technology, such as smart refrigerators and Bluetooth-enabled toilets. The security of embedded devices is a growing concern, as these devices may not be patched or updated on a regular basis.

Note

We can expect more and more devices to have embedded technology as the Internet of Things (IoT) grows. Several companies even sell toilets with Bluetooth and SD card technology built in; like other devices, they are not immune to hacking (see www.extremetech.com/extreme/163119-smart-toilets-bidet-hacked-via-bluetoothgives-new-meaning-to-backdoor-vulnerability).

Security Architecture

Although a robust functional architecture is a good start, real security requires that you have a security architecture in place to control processes and applications. Concepts related to security architecture include the following:

  • Protection rings

  • Trusted computing base (TCB)

  • Open and closed systems

  • Security modes of operation

  • Operating states

  • Recovery procedures

  • Process isolation

Protection Rings

An operating system knows who and what to trust by relying on protection rings. Protection rings work much like your network of family members, friends, coworkers, and acquaintances. The people who are closest to you, such as your spouse and children, have the highest level of trust. Those who are distant acquaintances or are unknown to you probably have a lower level of trust. For example, when you see a guy on Canal Street in New York City hawking new Rolex watches for $100, you should have little trust in him and his relationship with the Rolex company!

Protection rings are conceptual rather than physical entities. Figure 4.4 illustrates the protection rings schema. The first implementation of such a system was in MIT’s Multics time-shared operating system.

FIGURE 4.4 Protection Rings

The protection rings model provides the operating system with various levels at which to execute code or to restrict that code’s access. The idea is to use engineering design to build in layers of control using secure design principles. The rings provide much greater granularity than a system that just operates in user and privileged modes. As code moves toward the outer bounds of the model, the layer number increases, and the level of trust decreases. This model includes the following layers:

  • Layer 0: This is the most trusted level. The operating system kernel resides at this level. Any process running at layer 0 is said to be operating in privileged mode.

  • Layer 1: This layer contains non-privileged portions of the operating system.

  • Layer 2: This is where I/O drivers, low-level operations, and utilities reside.

  • Layer 3: This layer is where applications and processes operate. It is the level at which individuals usually interact with the operating system. Applications operating here are said to be working in user mode, which is often referred to as problem mode because this is where the less-trusted applications run; it is, therefore, where most problems occur.

Not all systems use all rings in the protection rings model. Most systems that are used today operate in two modes: user mode and supervisor (privileged) mode.

Items that need high security, such as the operating system security kernel, are located in the center ring. This ring is unique because it has access rights to all domains in the system. Protection rings are part of the trusted computing base concept, which is described next.
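A minimal conceptual sketch in Python (not how any real operating system enforces rings) of the rule the model implies: a caller may only use a resource whose required ring number is the same as, or higher (less trusted) than, its own.

    def can_access(caller_ring: int, required_ring: int) -> bool:
        """Lower ring numbers are more trusted; deny any attempt to reach inward."""
        return caller_ring <= required_ring

    # Ring 3 (applications) trying to touch a ring 0 (kernel) resource is refused,
    # while ring 0 code may reach outward to ring 3 resources.
    print(can_access(caller_ring=3, required_ring=0))  # False
    print(can_access(caller_ring=0, required_ring=3))  # True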

Trusted Computing Base

The trusted computing base (TCB) is the sum of all the protection mechanisms within a computer and is responsible for enforcing the security policy. The TCB includes hardware, software, controls, and processes and is responsible for enforcing confidentiality and integrity. The TCB is the only portion of a system that operates at a high level of trust. It monitors four basic functions:

  • Input/output (I/O) operations: I/O operations are a security concern because operations from the outermost rings might need to interface with rings of greater protection. These cross-domain communications must be monitored.

  • Execution domain switching: Applications running in one domain or level of protection often invoke applications or services in other domains. If these requests are to obtain more sensitive data or service, their activity must be controlled.

  • Memory protection: To truly provide security, the TCB must monitor memory references to verify confidentiality and integrity in storage.

  • Process activation: Registers, process status information, and file access lists are vulnerable to loss of confidentiality in a multiprogramming environment. This type of potentially sensitive information must be protected.

ExamAlert

For the CISSP exam, you should understand not only that the TCB is tasked with enforcing security policy but also that the TCB is the sum of all protection mechanisms within a computer system that have also been evaluated for security assurance. It consists of hardware, firmware, and software.

Components that have not been evaluated are said to fall outside the security perimeter.

The TCB monitors the functions in the preceding list to ensure that the system operates correctly and adheres to security policy. The TCB follows the reference monitor concept. The reference monitor is an abstract machine that is used to implement security. The reference monitor’s job is to validate access to objects by authorized subjects. The reference monitor operates at the boundary between the trusted and untrusted realms. The reference monitor has three properties:

  • It cannot be bypassed and controls all access, as it must be invoked for every access attempt.

  • It cannot be altered and is protected from modification or change.

  • It must be small enough to be verified and tested correctly.

ExamAlert

For the CISSP exam, you should understand that the reference monitor enforces the security requirement for the security kernel.

The reference monitor is much like the bouncer at a club, standing between each subject and object and verifying that each subject meets the minimum requirements for access to an object (see Figure 4.5).

FIGURE 4.5 Reference Monitor

Note

Subjects are active entities such as people, processes, or devices.

Objects are passive entities that are designed to contain or receive information. Objects can be processes, software, or hardware.

The reference monitor can be designed to use tokens, capability lists, or labels:

  • Tokens: Communicate security attributes before requesting access

  • Capability lists: Offer faster lookup than security tokens but are not as flexible

  • Security labels: Used by high-security systems because these labels offer permanence (a minimal mediation sketch follows this list)
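The following minimal Python sketch (conceptual, not taken from any specific product; the subjects, objects, and permissions are made up for illustration) shows the mediation idea behind Figure 4.5 using a capability list: one small, always-invoked function decides and audits every access attempt, and anything not explicitly granted is denied.

    # Capability list: for each subject, the objects it may touch and how.
    CAPABILITIES = {
        "mike":  {"payroll.db": {"read"}, "design.doc": {"read", "write"}},
        "shawn": {"design.doc": {"read"}},
    }

    def reference_monitor(subject: str, obj: str, operation: str) -> bool:
        """Mediate every access attempt; the default decision is deny."""
        allowed = operation in CAPABILITIES.get(subject, {}).get(obj, set())
        print(f"audit: {subject} {operation} {obj} -> {'permit' if allowed else 'deny'}")
        return allowed

    reference_monitor("mike", "design.doc", "write")   # permit
    reference_monitor("shawn", "payroll.db", "read")   # deny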

At the heart of the operating system is the security kernel. The security kernel handles all user/application requests for access to system resources. A small security kernel is easy to verify, test, and validate as secure. However, in real life, the security kernel might be bloated with some unnecessary code because processes located inside can function faster and have privileged access. Vendors have taken different approaches to developing operating systems. For example, DOS used a monolithic kernel. Several of these designs are shown in Figure 4.6 and are described here:

  • Monolithic architecture: All of the OS processes work in kernel mode.

  • Layered OS design: This design separates system functionality into different layers.

  • Microkernel: A smaller kernel supports only critical processes.

  • Hybrid microkernel: The kernel structure is similar to a microkernel but implemented in terms of a monolithic design.

Although the reference monitor is conceptual, the security kernel can be found at the heart of every system. The security kernel is responsible for running the required controls used to enforce functionality and resist known attacks. As mentioned previously, the reference monitor operates at the security perimeter: the boundary between the trusted and untrusted realms. Components outside the security perimeter are not trusted. All trusted access control mechanisms are inside the security perimeter.

FIGURE 4.6 Operating System Architecture

Source: http://upload.wikimedia.org/wikipedia/commons/d/d0/OS-structure2.svg

Open and Closed Systems

Open systems accept input from other vendors and are based on standards and practices that allow connection to different devices and interfaces. The goal is to promote full interoperability whereby the system can be fully utilized.

Closed systems are proprietary. They use devices that are not based on open standards and that are generally locked. They lack standard interfaces to allow connection to other devices and interfaces.

For example, in the U.S. cell phone industry, AT&T and T-Mobile cell phones are based on the worldwide Global System for Mobile Communications (GSM) standard and can easily be used overseas on other networks with a simple change of the subscriber identity module (SIM). These are open-system phones. Phones that use Code Division Multiple Access (CDMA), such as Sprint and Verizon phones, do not have the same level of support and have almost completely been phased out. In 2010, carriers worldwide began this transition by agreeing to move to LTE, a 4G standard, with 2023 set as the final shutdown date for the older networks.

Note

The concept of open and closed can apply to more than just hardware. With open software, others can view and/or alter the source code, but with closed software, they cannot. For example, a Samsung Galaxy phone runs the open-source Android operating system, whereas an Apple iPhone runs the closed-source iOS.

Security Modes of Operation

Several security modes of operation are based on Department of Defense (DoD) 5220.22-M classification levels. According to the DoD, information being processed on a system and the clearance level of authorized users can be classified into one of four modes (see Table 4.2):

  • Dedicated: A need to know is required to access all information stored or processed. Every user requires formal access with clearance and approval and must have executed a signed nondisclosure agreement (NDA) for all the information stored and/or processed. This mode must also support enforced system access procedures. All hard-copy output and media removed will be handled at the level for which the system is accredited until reviewed by a knowledgeable individual. Because the system is dedicated to processing one particular type or classification of information, all authorized users can access all data.

  • System high: All users have a security clearance; however, a need to know is required only for some of the information contained within the system. Every user requires access approval and needs to have signed NDAs for all the information stored and/or processed. Access to an object by users not already possessing access permission must only be assigned by authorized users of the object. This mode must be capable of providing an audit trail that records time, date, user ID, terminal ID (if applicable), and filename. All users can access some data based on their need to know.

  • Compartmented: A valid need to know is required for some of the information on the system. All users must have formal access approval for all information they will access on the system and require proper clearance for the highest level of data classification on the system. All users must have signed NDAs for all information they will access on the system. All users can access some data based on their need to know and formal access approval.

  • Multilevel: Every user has a valid need to know for some of the information that is on the system, and more than one classification level can be processed at the same time. Users must have formal access approval and must have signed NDAs for all information they will access on the system. Mandatory access controls provide a means of restricting access to files based on their sensitivity labels. All users can access some data based on their need to know, clearance, and formal access approval.

TABLE 4.2 Security Modes of Operation

Mode           Dedicated   System High   Compartmented   Multilevel
Signed NDA     All         All           All             All
Clearance      All         All           All             Some
Approval       All         All           Some            Some
Need to know   All         Some          Some            Some
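The following minimal Python sketch simply encodes Table 4.2 as data (conceptual only): for each mode it records whether an attribute must be held by every user for all information on the system ("all") or only for the information a given user will actually access ("some").

    MODE_REQUIREMENTS = {
        "dedicated":     {"signed_nda": "all", "clearance": "all", "approval": "all", "need_to_know": "all"},
        "system high":   {"signed_nda": "all", "clearance": "all", "approval": "all", "need_to_know": "some"},
        "compartmented": {"signed_nda": "all", "clearance": "all", "approval": "some", "need_to_know": "some"},
        "multilevel":    {"signed_nda": "all", "clearance": "some", "approval": "some", "need_to_know": "some"},
    }

    def scope(mode: str, attribute: str) -> str:
        """Return 'all' or 'some' for an attribute in a given mode (per Table 4.2)."""
        return MODE_REQUIREMENTS[mode][attribute]

    print(scope("system high", "need_to_know"))  # some
    print(scope("dedicated", "need_to_know"))    # all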

Note

The terms sensitivity label and security label are associated with high-security, mandatory access control (MAC)-based systems.

Operating States

When systems are used to process and store sensitive information, there must be some agreed-on methods for how this will work. Generally, these concepts were developed to meet the requirements of handling sensitive government information with categories such as sensitive, secret, and top secret. The burden of handling this task can be placed on either administration or the system itself.

Generally, two designs are used:

  • Single-state systems: This type of system is designed and implemented to handle one category of information. The burden of management falls on the administrator, who must develop the policy and procedures to manage the system. The administrator must also determine who has access and what type of access the users have. These systems are dedicated to one mode of operation, so they are sometimes referred to as dedicated systems.

  • Multistate systems: These systems depend not on the administrator but on the system itself. More than one person can log in to a multistate system and access various types of data, depending on the level of clearance. As you would probably expect, these systems can be expensive. The XTS-400 that runs the Secure Trusted Operating Program (STOP) OS from BAE Systems is an example of a multistate system. A multistate system can operate as a compartmentalized system. This means that Mike can log in to the system with a secret clearance and access secret-level data, whereas Dwayne can log in with top-secret-level clearance and access a different level of data. These systems are compartmentalized and can segment data on a need-to-know basis.

Tip

Security-Enhanced Linux and TrustedBSD are freely available implementations of operating systems with limited multistate capabilities. Security evaluation is a problem for these free MLS implementations because of the expense and time it would take to fully qualify these systems.

Recovery Procedures

Unfortunately, things don’t always operate normally; they sometimes go wrong, and system failure can occur. A system failure could potentially compromise a system by corrupting integrity, opening security holes, or leaving data in an inconsistent state. Efficient designs have built-in recovery procedures to recover from potential problems. There are two basic types of recovery procedures:

  • Fail safe: If a failure is detected, the system is protected from compromise by termination of services.

  • Fail soft: A detected failure terminates the noncritical process. Systems in fail soft mode are still able to provide partial operational capability.

It is important to be able to recover when an issue arises. The best way to ensure recovery is to take a proactive approach and back up all critical files on a regular schedule. The goal of recovery is to recover to a known state. Common issues that require recovery include the following:

  • System reboot: An unexpected/unscheduled event can cause a system reboot.

  • System restart: This automatically occurs when a system goes down and forces an immediate reboot.

  • System cold start: This results from a major failure or component replacement.

  • System compromise: This can be caused by an attack or a breach of security.

Process Isolation

Process isolation is required to maintain a high level of system trust. For a system to be certified as a multilevel security system, it must support process isolation. Without process isolation, there would be no way to prevent one process from spilling over into another process’s memory space, corrupting data, or possibly making the whole system unstable. Process isolation is performed by the operating system; its job is to enforce memory boundaries. Separation of processes is an important topic; without it, a system could be designed with a single point of failure (SPOF) so that one flaw in the design or configuration could cause the entire system to stop operating.

For a system to be secure, the operating system must prevent unauthorized users from accessing areas of the system to which they should not have access, it should be robust, and it should have no single point of failure. Sometimes all this is accomplished through the use of a virtual machine. A virtual machine allows users to believe that they have the use of the entire system, but in reality, processes are completely isolated. To take this concept a step further, some systems that require truly robust security also implement hardware isolation so that the processes are segmented not only logically but also physically.
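The following minimal Python sketch uses two operating-system processes to show what process isolation means in practice: each process has its own address space, so a change made inside the child never appears in the parent (the __main__ guard is needed because some platforms start child processes by re-importing the module).

    from multiprocessing import Process, current_process

    counter = 0  # each process works on its own copy of this variable

    def increment() -> None:
        global counter
        counter += 1
        print(f"{current_process().name}: counter = {counter}")  # prints 1 in the child

    if __name__ == "__main__":
        child = Process(target=increment, name="child")
        child.start()
        child.join()
        print(f"parent: counter = {counter}")  # still 0: the child's change stayed in its own memory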

Note

Java uses a form of virtual machine because it uses a sandbox to contain code and allows it to function only in a controlled manner.

Common Formal Security Models

Security models are used to determine how security will be implemented, what subjects can access the system, and what objects they will have access to. Simply stated, a security model formalizes security policy. Security models of control are typically implemented by enforcing integrity, confidentiality, or other controls. Keep in mind that each of these models lays out broad guidelines and is not specific in nature. It is up to the developer to decide how these models will be used and integrated into specific designs (see Figure 4.7).

The sections that follow discuss the different security models of control in greater detail. The first three models discussed are considered lower-level models.

FIGURE 4.7 Security Model Fundamental Concepts Used in the Design of an OS

State Machine Model

The state machine model is based on a finite state machine (see Figure 4.8). State machines are used to model complex systems and deal with acceptors, recognizers, state variables, and transaction functions. A state machine defines the behavior of a finite number of states, the transitions between those states, and actions that can occur.

The most common representation of a state machine is a state machine table. For example, as Table 4.3 illustrates, if the machine is currently at state B under condition 2, the next state is state C under condition 3.

FIGURE 4.8 Finite State Model

TABLE 4.3 State Machine Table

State Transaction   State A   State B         State C
Condition 1
Condition 2                   Current state
Condition 3                                   Next state

A state machine model monitors the status of the system to prevent it from slipping into an insecure state. Systems that support the state machine model must have all their possible states examined to verify that all processes are controlled in accordance with the system security policy. The state machine concept serves as the basis of many security models. The model is valued for knowing in what state the system will reside. For example, if the system boots up in a secure state, and every transaction that occurs is secure, it must always be in a secure state and will not fail open. (To fail open means that all traffic or actions are allowed rather than denied.)
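The following minimal Python sketch (the states and conditions are made up, in the spirit of Table 4.3) shows a transition table driving a finite state machine; any input with no defined transition leaves the machine in its current, known state rather than letting it drift into an undefined one.

    # Transition table: (current_state, condition) -> next_state
    TRANSITIONS = {
        ("A", "condition 1"): "B",
        ("B", "condition 2"): "C",
        ("C", "condition 3"): "A",
    }

    def next_state(state: str, condition: str) -> str:
        # Undefined transitions fall back to the current (known) state.
        return TRANSITIONS.get((state, condition), state)

    state = "B"
    state = next_state(state, "condition 2")
    print(state)  # C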

Information Flow Model

The information flow model is an extension of the state machine concept and serves as the basis of design for both the Biba and Bell-LaPadula models, which are discussed later in this chapter. The information flow model consists of objects, state transitions, and lattice (flow policy) states. The goal with this model is to prevent unauthorized, insecure information flow in any direction. This model and others can make use of guards, which allow the exchange of data between various systems.

Noninterference Model

The noninterference model, defined by Goguen and Meseguer, was designed to make sure that objects and subjects of different levels don’t interfere with objects and subjects of other levels. The model uses inputs and outputs of either low or high sensitivity. Each data access attempt is independent of all others, and data cannot cross security boundaries.

Confidentiality

Although the models described so far serve as a basis for many security models developed later, one major concern with those earlier models is confidentiality. Government entities such as the DoD are concerned about the confidentiality of information. The DoD divides information into categories to ease the burden of managing who has access to various levels of information. The DoD information classifications are sensitive but unclassified (SBU), confidential, secret, and top secret. The Bell-LaPadula model was one of the first models to address the confidentiality needs of the DoD.

Bell-LaPadula Model

The Bell-LaPadula state machine model enforces confidentiality. This model uses mandatory access control to enforce the DoD multilevel security policy. For subjects to access information, they must have a clear need to know and must meet or exceed the information’s classification level.

The Bell-LaPadula model is defined by the following properties:

  • Simple security (ss) property: This property states that a subject at one level of confidentiality is not allowed to read information at a higher level of confidentiality. This is sometimes referred to as “no read up.” Figure 4.9 provides an example.

FIGURE 4.9 Bell-LaPadula Simple Security Model

  • Star (*) security property: This property states that a subject at one level of confidentiality is not allowed to write information to a lower level of confidentiality. This is also known as “no write down.” Figure 4.10 provides an example.

FIGURE 4.10 Bell-LaPadula Star Property

  • Strong star property: This property states that a subject cannot read or write to an object of higher or lower sensitivity. Figure 4.11 provides an example, and a minimal sketch of these access rules in code follows the figure.

FIGURE 4.11 Bell-LaPadula Strong Star Property
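A minimal Python sketch of the simple security and star properties (conceptual only; the numeric sensitivity levels are assumptions for illustration, not any real labeling scheme):

    LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

    def can_read(subject: str, obj: str) -> bool:
        # Simple security property: no read up.
        return LEVELS[subject] >= LEVELS[obj]

    def can_write(subject: str, obj: str) -> bool:
        # Star property: no write down.
        return LEVELS[subject] <= LEVELS[obj]

    print(can_read("secret", "top secret"))    # False: reading up is blocked
    print(can_write("secret", "confidential")) # False: writing down is blocked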

ExamAlert

Review the Bell-LaPadula simple security and star security models closely; they are easy to confuse with Biba’s two defining properties.

Tip

A fourth but rarely implemented property of the Bell-LaPadula model called the discretionary security property allows users to grant access to other users at the same clearance level by means of an access matrix.

Although the Bell-LaPadula model goes a long way in defining the operation of secure systems, it is not perfect. It does not address security issues such as covert channels, it was designed in an era when mainframes were the dominant platform, and it was built for multilevel security with only confidentiality in mind.
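To make the simple security and star properties more concrete, the following minimal C sketch checks read and write requests against integer sensitivity levels. The level names and function names are illustrative only and are not part of any formal Bell-LaPadula implementation.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical sensitivity levels: higher value = more sensitive */
enum level { UNCLASSIFIED = 0, CONFIDENTIAL = 1, SECRET = 2, TOP_SECRET = 3 };

/* Simple security (ss) property: no read up */
bool can_read(enum level subject, enum level object)
{
    return subject >= object;
}

/* Star (*) property: no write down */
bool can_write(enum level subject, enum level object)
{
    return subject <= object;
}

int main(void)
{
    /* A SECRET-cleared subject may read CONFIDENTIAL data (read down)... */
    printf("read down allowed:  %d\n", can_read(SECRET, CONFIDENTIAL));  /* 1 */
    /* ...but may not read TOP SECRET data (no read up)... */
    printf("read up allowed:    %d\n", can_read(SECRET, TOP_SECRET));    /* 0 */
    /* ...and may not write to a CONFIDENTIAL object (no write down) */
    printf("write down allowed: %d\n", can_write(SECRET, CONFIDENTIAL)); /* 0 */
    return 0;
}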

Tip

It is important to know that the Bell-LaPadula model deals with confidentiality. This means that reading information at a higher level than is allowed endangers confidentiality.

Integrity

Integrity is a good thing. It is one of the basic elements of the security triad, along with confidentiality and availability. Integrity plays an important role in security because it can be used to verify that unauthorized users are not modifying data, that authorized users are not making unauthorized changes, and that data remains internally and externally consistent so that databases balance. Whereas governmental entities are typically very concerned with confidentiality, other organizations might be more focused on the integrity of information. In general, integrity has four goals:

  • Images Prevent data modification by unauthorized parties

  • Images Prevent unauthorized data modification by authorized parties

  • Images Reflect the real world

  • Images Maintain internal and external consistency

Note

Some sources list only three goals of security by combining the third and fourth goals into one: maintain internal and external consistency and ensure that the data reflects the real world.

Two security models that address system integrity are the Biba and Clark-Wilson models, which are covered in the following sections. The Biba model addresses only the first integrity goal, whereas the Clark-Wilson model addresses all four goals.

Biba Model

The Biba model was the first model developed to address integrity concerns. Originally published in 1977, this lattice-based model has the following defining properties:

  • Images Simple integrity property: This property states that a subject at one level of integrity is not permitted to read an object of lower integrity.

  • Images Star (*) integrity property: This property states that a subject at one level of integrity is not permitted to write to an object of higher integrity. This is sometimes referred to as “no write up.”

  • Images Invocation property: This property prohibits a subject at one level of integrity from invoking a subject at a higher level of integrity.

Tip

The star property in both the Biba and Bell-LaPadula models deals with writes. One easy way to remember these rules is to think, “It’s written in the stars!”

The Biba model addresses only the first goal of integrity: preventing data modification by unauthorized parties. Other concerns, such as confidentiality, are not examined. The model also assumes that internal threats are handled by good coding practices, and it therefore focuses on external threats.

Tip

To remember the purpose of the Biba model, you can think that the i in Biba stands for integrity.

Tip

Remember that the Biba model deals with integrity and, as such, writing to an object of a higher level might endanger the integrity of the system.

Clark-Wilson Model

The Clark-Wilson model, which was created in 1987, differs from previous models because it was developed to be used for commercial activities. This model addresses all four goals of integrity. The Clark-Wilson model dictates that the separation of duties must be enforced, subjects must access data through an application, and auditing is required. Some terms associated with this model include the following:

  • Images User

  • Images Transformation procedure

  • Images Unconstrained data item

  • Images Constrained data item

  • Images Integrity verification procedure

The Clark-Wilson model features an access control triple, where subjects must access programs before accessing objects (subject–program–object). The access control triple is composed of the user, a transformational procedure, and the constrained data item. It was designed to protect integrity and prevent fraud. Authorized users cannot change data in an inappropriate way. The Clark-Wilson model checks three attributes: tampered, logged, and consistent (TLC).

The Clark-Wilson model differs from the Biba model in that subjects are restricted. This means that a subject at one level of access can read one set of data, whereas a subject at another level has access to a different set of data. The Clark-Wilson model controls the way in which subjects access objects so that the internal consistency of the system can be ensured, and data can be manipulated only in ways that protect consistency. Integrity verification procedures (IVPs) ensure that a data item is in a valid state. Data cannot be tampered with while being changed, and the integrity of the data must be consistent. The Clark-Wilson model requires all changes to be logged.

In the Clark-Wilson model, all changes to protected data must be made through transformation procedures (TPs). Constrained data items (CDIs) are data for which integrity must be preserved. Items not covered under the model are considered unconstrained data items (UDIs).

Tip

Remember that the Clark-Wilson model requires that users be authorized to access and modify data, and it deals with three key terms: tampered, logged, and consistent (TLC).

Take-Grant Model

The Take-Grant model is another confidentiality-based model that supports four basic operations: take, grant, create, and revoke. This model allows subjects with the take right to take rights from other subjects. Subjects possessing the grant right can grant this right to other subjects. The create and revoke operations work in the same manner: Someone with the create right can give the create right to others, and those with the revoke right can remove that right from others.

Brewer and Nash Model

The Brewer and Nash model is similar to the Bell-LaPadula model and is also sometimes referred to as the Chinese Wall model. It was developed to prevent conflict of interest (COI) problems. The Brewer and Nash model is context oriented in that it prevents a worker consulting for one firm from accessing data belonging to another, thereby preventing any COI. For example, imagine that your security firm does security work for many large firms. If one of your employees could access information about all the firms that your company has worked for, that person might be able to use this data in an unauthorized way.

Other Models

A security model defines and describes what protection mechanisms are to be used and what these controls are designed to achieve. The previous sections cover some of the most heavily tested models, but you should have a basic understanding of a few more security models, including the following:

  • Images Graham-Denning model: This model uses a formal set of eight protection rules for which each object has an owner and a controller. These rules define what you can create, delete, read, grant, or transfer.

  • Images Harrison-Ruzzo-Ullman model: This model is similar to the Graham-Denning model and details how subjects and objects can be created, deleted, accessed, or changed.

  • Images Lipner model: This model combines elements of the Bell-LaPadula and Biba models to guard both confidentiality and integrity.

  • Images Lattice model: This model is associated with MAC. Controls are applied to objects, and the model uses security levels that are represented by a lattice structure; this structure governs information flow. Subjects of the lattice model are allowed to access an object only if the security level of the subject is equal to or greater than that of the object. Overall access limits are set by having a least upper bound and a greatest lower bound for each security level.

ExamAlert

Spend some time reviewing all the models discussed in this section. Make sure you know which models are integrity based, which are confidentiality based, and the properties of each; you will need to know this information for the CISSP exam.

Tip

Although the security models described in this section are the ones the CISSP exam is most likely to focus on, there are many other models, such as the Sutherland, Boebert and Kain, Karger, Gong, and Jueneman models. Even though many security professionals may have never heard of these models, those who develop systems most likely learned of them in college.

Product Security Evaluation Models

A set of evaluation standards is needed when evaluating the security capabilities of information systems. A number of documents and guidelines have been developed to help evaluate and establish system assurance. These items are important to a CISSP candidate because they provide a level of trust and assurance that these systems will operate in a given and predictable manner. A trusted system has undergone testing and been validated to a specific standard. Assurance means freedom from doubt and a level of confidence that a system will perform as required every time it is used.

Think of product evaluation models as being similar to EPA gas mileage ratings, which give buyers and sellers a way to evaluate different automotive brands and models. In the world of product security, developers can use product evaluation systems when preparing to sell a system. A buyer can use the same evaluation models when preparing to make a purchase, as they provide a way to measure a system’s effectiveness and benchmark its abilities. The following sections describe documents and guidelines that facilitate these needs.

The Rainbow Series

The Rainbow Series is so named because each book in the series has a label of a different color. This 6-foot-tall stack of books was developed by the National Computer Security Center (NCSC), an organization that is part of the National Security Agency (NSA). These guidelines were developed for the Trusted Product Evaluation Program (TPEP), which tests commercial products against a comprehensive set of security-related criteria. The first of these books, released in 1983, is known as Trusted Computer System Evaluation Criteria (TCSEC), or the Orange Book. Many similar guides were also known by the color of the cover instead of their name, such as the Red Book. While the Orange Book is no longer commercially used, understanding TCSEC will help you understand how product security evaluation models have evolved into what we use today.

Note

Rainbow Series guidelines have all been replaced with Common Criteria, described later in this chapter.

The Orange Book: Trusted Computer System Evaluation Criteria

The Orange Book was developed to evaluate standalone systems. Its basis of measurement is confidentiality, so it is similar to the Bell-LaPadula model.

Note

Canada has its own version of the Orange Book, known as The Canadian Trusted Computer Product Evaluation Criteria (CTCPEC). It too has been replaced by Common Criteria.

Although the Orange Book is no longer considered current, it was one of the first product security standards. Table 4.4 lists the Orange Book levels.

TABLE 4.4 Orange Book Levels

Level   Items to Remember
A1      Built, installed, and delivered in a secure manner
B1      Security labels (MAC)
B2      Security labels and verification of no covert channels (MAC)
B3      Security labels, verification of no covert channels, and must stay secure during startup (MAC)
C1      Weak protection mechanisms (DAC)
C2      Strict login procedures (DAC)
D1      Failed or was not tested

The Red Book: Trusted Network Interpretation

The Red Book’s official name is the Trusted Network Interpretation (TNI). The purpose of the TNI is to examine security for networks and network components. Whereas the Orange Book addresses only confidentiality, the Red Book examines integrity and availability. It is also tasked with examining the operation of networked devices. The Red Book addresses three areas of review:

  • Images Denial of service (DoS) prevention: Management and continuity of operations

  • Images Compromise protection: Data and traffic confidentiality and selective routing

  • Images Communications integrity: Authentication, integrity, and nonrepudiation

Information Technology Security Evaluation Criteria (ITSEC)

ITSEC is a European standard developed in the 1980s to evaluate confidentiality, integrity, and availability of an entire system. ITSEC is unique in that it was the first standard to unify markets and bring all of Europe under one set of guidelines. ITSEC designates the target system as the target of evaluation (TOE). The evaluation is actually divided into two parts: One part evaluates functionality, and the other evaluates assurance.

ITSEC speaks of 10 functionality (F) classes and 7 assurance (E) classes. Assurance classes rate the effectiveness and correctness of a system. Table 4.5 shows these ratings and how they correspond to the TCSEC ratings.

TABLE 4.5 ITSEC Functionality Ratings and Comparison to TCSEC

F Class Rating   E Class Rating   TCSEC Rating
NA               E0               D
F1               E1               C1
F2               E2               C2
F3               E3               B1
F4               E4               B2
F5               E5               B3
F5               E6               A1
F6               TOEs with high integrity requirements
F7               TOEs with high availability requirements
F8               TOEs with high integrity requirements during data communications
F9               TOEs with high confidentiality requirements during data communications
F10              Networks with high confidentiality and integrity requirements

Common Criteria

With all the standards we have discussed to this point, it is easy to see how someone might have a hard time determining which one is the right choice. The International Organization for Standardization (ISO) had this thought as well, and it decided that instead of the various standards and ratings that existed, there should be a single global standard. Figure 4.12 illustrates the development of Common Criteria.

Images

FIGURE 4.12 Common Criteria Development

In 1997, the ISO released Common Criteria (ISO 15408), which is an amalgamated version of TCSEC, ITSEC, and CTCPEC. Common Criteria is designed around TCB entities, which include physical and logical controls, startup and recovery, reference mediation, and privileged states. Common Criteria categorizes assurance into one of seven increasingly strict levels, referred to as evaluation assurance levels (EALs):

  • Images EAL 1: Functionality tested

  • Images EAL 2: Structurally tested

  • Images EAL 3: Methodically checked and tested

  • Images EAL 4: Methodically designed, tested, and reviewed

  • Images EAL 5: Semi-formally designed and tested

  • Images EAL 6: Semi-formally verified, designed, and tested

  • Images EAL 7: Formally verified, designed, and tested

EALs provide a specific level of confidence in the security functions of the system being analyzed.

ExamAlert

If you are looking for an example of a high-level EAL 6 operating system, look no further than INTEGRITY-178B by Green Hills Software. This secure OS is used in jet fighters and other critical devices.

Like ITSEC, Common Criteria defines two types of security requirements: functional and assurance. Functional requirements define what a product or system does. They also define the security capabilities of a product. The assurance requirements and specifications to be used as the basis for evaluation are known as the security target (ST). A protection profile defines the system and its controls. The protection profile is divided into five sections:

  • Images Rationale

  • Images Evaluation assurance requirements

  • Images Descriptive elements

  • Images Functional requirements

  • Images Development assurance requirements

A security target consists of seven sections:

  • Images Introduction

  • Images Conformance Claims

  • Images Security Problem Definition

  • Images Security Objectives

  • Images Extended Components Definition

  • Images Security Requirements

  • Images TOE Security Specifications

A Common Criteria certification contains either a protection profile (PP) or a security target (ST).

Assurance requirements define how well a product is built. Assurance requirements inspire confidence in the product and show the correctness of its implementation.

ExamAlert

Common Criteria’s seven levels of assurance and two security requirements are required knowledge for the CISSP exam.

System Validation

No system or architecture will ever be completely secure; there will always be a certain level of risk. Security professionals must understand this risk and be comfortable with it, mitigate it, or offset it through a third party. All the documentation and guidelines already discussed dealt with ways to measure and assess risk. These can be a big help in ensuring that the implemented systems meet your requirements. However, before you begin to use the systems, you must complete the two additional steps of certification and accreditation.

U.S. federal agencies are required by law to have their IT systems and infrastructures certified and accredited. Although you shouldn’t expect to see in-depth certification and accreditation questions on the CISSP exam, it is worth knowing if you plan to interact with any agencies that require their use. These methodologies look at much more than a standard penetration test; they are more like an audit. They must validate that the systems are implemented, configured, and operating as expected and meet all security policies and procedures.

Certification and Accreditation

Certification is the process of validating that implemented systems are configured and operating as expected. It also validates that the systems are connected to and communicate with other systems in a secure and controlled manner and that they handle data in a secure and approved manner. The certification process is a technical evaluation of the system that can be carried out by independent security teams or by the existing staff. Its goal is to uncover any vulnerabilities or weaknesses in the implementation.

The results of the certification process are reported to the organization’s management for review and approval. If management agrees with the findings of the certification, the report is formally approved. The formal approval of the certification is the accreditation process. Management usually issues accreditation as a formal, written approval that the certified system is approved for use as specified in the certification documentation. If changes are made to the system or to the environment in which the system is used, the certification and accreditation process must be repeated. The entire process is also repeated periodically, at intervals that depend on the industry and the regulations the organization must comply with. For example, Section 404 of Sarbanes-Oxley requires an annual evaluation of internal systems that deal with financial controls and reporting systems.

ExamAlert

For the CISSP exam, you might want to remember that certification is seen as the technical aspect of validation, whereas accreditation is management’s approval.

Note

Nothing lasts forever, including certification. The certification process should be repeated when systems change, when items are modified, or on a periodic basis.

Vulnerabilities of Security Architectures

Like most other chapters of this book, this one also reviews potential threats and vulnerabilities. Any time a security professional makes the case for stronger security, there will be those who ask why funds should be spent that way. It’s important to point out not only the benefits of good security but also the potential risks of not implementing good practices and procedures.

We live in a world of risk. As security professionals, we need to be aware of the threats to security and understand how the various protection mechanisms discussed throughout this chapter can be used to raise the level of security.

Buffer Overflows

Buffer overflows occur because of poor coding techniques. A buffer is a temporary storage area that has been coded to hold a certain amount of data. If additional data is fed to the buffer, it can spill over or overflow to adjacent buffers. This can corrupt those buffers and cause the application to crash or possibly allow an attacker to execute his own code that he has loaded onto the stack. Ideally, programs should be written with error checking—such as to check that you cannot type 32 characters into a 24-character buffer; however, this type of error checking does not always occur. Error checking is really nothing more than making sure that buffers receive the correct type and amount of information required. Here is an example of a buffer overflow:

#include <stdio.h>
#include <string.h>

/* The 10-character source string (plus its terminating NUL) does not fit
   in the 8-byte buffer, so strcpy() writes past the end of the array. */
int abc()
{
    char buffer[8];
    strcpy(buffer, "AAAAAAAAAA");
    return 0;
}

int main()
{
    abc();
    return 0;
}

OS vendors are also working to make buffer overflow attacks harder by using techniques such as data execution prevention (DEP) and address space layout randomization (ASLR). DEP marks some areas of memory as either executable or non-executable; it can help avert some attacks by preventing injected code from executing in regions, such as the stack, that are marked non-executable. ASLR randomizes the positions of key areas in a process’s address space, such as the stack, heap, and libraries. Think of the shell game, where a small ball is placed under one of three shells and is then moved around. To win the game, you must guess which shell the ball is under. Most modern operating systems, such as Android, Windows, and FreeBSD, make use of ASLR.

Other defenses for buffer overflows include code reviews, using safe programming languages, and applying patches and updates in a timely manner. Finally, because all data should be suspect by default, data being input, processed, or output should be checked to make sure it matches the correct parameters.
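As a small illustration of this kind of parameter checking, the following hedged C sketch replaces the unbounded strcpy() from the earlier example with a length check and a bounded copy. The buffer size and input string are arbitrary.

#include <stdio.h>
#include <string.h>

int main(void)
{
    char buffer[8];
    const char *input = "AAAAAAAAAA";   /* 10 characters: too long for the buffer */

    /* Reject input that does not fit instead of overflowing the buffer */
    if (strlen(input) >= sizeof(buffer)) {
        fprintf(stderr, "input too long (%zu bytes, max %zu)\n",
                strlen(input), sizeof(buffer) - 1);
        return 1;
    }
    snprintf(buffer, sizeof(buffer), "%s", input);  /* always NUL-terminated */
    printf("copied: %s\n", buffer);
    return 0;
}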

Backdoors

Backdoors are potential threats to the security of systems and software. Programmers use backdoors, which are also sometimes referred to as maintenance hooks, during development to allow easy access to a piece of software. Often these backdoors are undocumented. A backdoor can be used when software is developed in sections and developers want a means of accessing certain parts of the program without having to run through all the code. If backdoors are not removed before the release of the software, they can allow an attacker to bypass security mechanisms and access the program.

State Attacks

A state attack is a form of attack that typically targets timing. The objective is to exploit the delay between the time of check (TOC) and the time of use (TOU). These attacks are sometimes called asynchronous attacks or race conditions because the attacker races to make a change to the object after it has been checked but before the system uses it.

For example, if a program creates a data file to hold the amount a customer owes, and the attacker can race to replace this value before the program reads it, he can successfully manipulate the program. In reality, a race condition can be difficult to exploit, and an attacker might have to make many attempts before succeeding.
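The following minimal C sketch shows the check-then-use gap on a POSIX system. The file path is hypothetical, and the assumption is that an attacker who wins the race swaps the file (for example, with a symbolic link) between the access() check and the open() call.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/tmp/balance.txt";   /* hypothetical data file */

    /* Time of check: verify the caller may read the file */
    if (access(path, R_OK) == 0) {
        /*
         * Race window: an attacker who wins the race can replace the file
         * (for example, with a symlink to another file) before it is opened.
         */
        int fd = open(path, O_RDONLY);        /* time of use */
        if (fd >= 0) {
            /* ... read and act on possibly attacker-controlled data ... */
            close(fd);
        }
    }
    return 0;
}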

Covert Channels

Covert channels provide a means of moving information in a manner that was not intended. Covert channels are a favorite of attackers because they know that you cannot deny what you must permit. The term was originally used in TCSEC documentation to refer to ways of transferring information from a higher classification to a lower classification. Covert channel attacks can be broadly separated into two types:

  • Images Covert timing channel attacks: Timing attacks are difficult to detect. They function by altering a component or by modifying resource timing.

  • Images Covert storage channel attacks: These attacks use one process to write data to a storage area and another process to read the data.

Here is an example of how covert channel attacks happen in real life. Your organization has decided to allow ping (Internet Control Message Protocol [ICMP]) traffic into and out of your network. Based on this knowledge, an attacker has planted the Loki program on your network. Loki uses the payload portion of a ping packet to move data into and out of your network. Therefore, the network administrator sees nothing but normal ping traffic and is not alerted, even though the attacker is busy stealing company secrets. Sadly, many programs can perform this type of attack.

ExamAlert

The CISSP exam expects you to understand the two types of covert channel attacks.

Incremental Attacks

The goal of an incremental attack is to make changes slowly over time. By making small changes over long periods, an attacker hopes to remain undetected. Two primary incremental attacks are data diddling, which is possible if the attacker has access to the system and can make small incremental changes to data or files, and the salami attack, which is similar to data diddling but involves making small changes to financial accounts or records, often referred to as "cooking the books."

Emanations

Anyone who has seen movies such as Enemy of the State or The Conversation knows something about surveillance technologies and conspiracy theories. If you have ever thought that only fringe elements are worried about such things, guess again. This might sound like science fiction, but the U.S. government was concerned enough about the possibility of emanation of stray electrical signals from electronic devices that the Department of Defense started a program to study emanation leakage.

Research actually began in the 1950s, and this research eventually led to the TEMPEST technology. The fear was that attackers might try to sniff the stray electrical signals that emanate from electronic devices. Devices built to TEMPEST standards, such as cathode ray tube (CRT) monitors, have had TEMPEST-grade copper mesh, known as a Faraday cage, embedded in the case to prevent signal leakage. This costly technology is found only in very high-security environments.

TEMPEST is now considered somewhat dated; newer technologies, such as white noise and control zones, are now used to control emanation security. White noise involves using special devices that send out a stream of frequencies that makes it impossible for an attacker to distinguish the real information. Control zones are facilities whose walls, floors, and ceilings are designed to block electrical signals from leaving the zone.

Another term associated with this category of technology is Van Eck phreaking. This is the name given to eavesdropping on the contents of a CRT through emanation leakage. Although this technique sounds far-fetched, Cambridge University successfully demonstrated the technique against an LCD monitor in 2004.

ExamAlert

For the CISSP exam, you need to know the technologies and techniques implemented to prevent intruders from capturing and decoding information emanated through the airwaves. TEMPEST, white noise, and control zones are the three primary controls.

Web-Based Vulnerabilities

Vulnerabilities in web-based systems involve application flaws or weaknesses in design. Exploits can be launched from a client or server. For example, an input validation attack occurs when client-side input is not properly validated. Application developers should never assume that users will input the correct data. A user bent on malicious activity will attempt to stretch a protocol or an application in an attempt to find possible vulnerabilities. Parameter problems are best solved by implementing pre-validation and post-validation controls. Pre-validation is implemented in the client but can be bypassed by using proxies and other injection techniques. Post-validation is performed to ensure that a program’s output is correct. Other security issues directly related to a lack of input validation include the following:

  • Images Cross-site scripting (XSS): An attack that exploits a user’s trust in a website; the attacker uses a vulnerable web application to deliver malicious script to other users’ browsers, where it executes.

  • Images Cross-site request forgery (CSRF): An attack that exploits the trust a website has in a user’s browser so that unauthorized commands are transmitted from a user that the website trusts.

  • Images Direct OS commands: The unauthorized execution of OS commands.

  • Images Directory traversal attack: A technique that allows an attacker to move from one directory to another.

  • Images Unicode encoding: A technique used to bypass security filters. One famous example used the Unicode string “%c0%af..%c0%af..”.

  • Images URL encoding: Used by an attacker to hide or execute an invalid application command via an HTTP request (for example, www.knowthetrade.com%2fmalicious.js%22%3e%3c%2fscript%3e).

Tip

XSS and CSRF are sometimes confused, so keep one key difference in mind: XSS exploits the user’s trust in a website, whereas CSRF exploits the website’s trust in the user’s browser.

One of the things that makes a programmer’s life difficult is that there is no such thing as trusted input. All input is potentially bad and must be verified. While the buffer overflow is the classic example of poor input validation, these attacks have become much more complex: Attackers have learned to insert malicious code in the buffer instead of just throwing “garbage” (that is, typing random gibberish) at an application to cause a buffer to overflow—which is just messy. There are also many tools available to launch these attacks; Figure 4.13 shows one example.

Images

FIGURE 4.13 The Burp Proxy Attack Tool

Attackers may also use the following techniques to exploit poor input validation:

  • Images XML injection

  • Images LDAP injection

  • Images SQL injection

All of these are the same type of attack, but they target different platforms.

Databases are another common target of malformed input. An attacker can attempt to insert database or SQL commands to disrupt the normal operation of a database. This could cause the database to become unstable and leak information. This type of attack is known as SQL injection. The attacker searches for web pages in which to insert SQL commands. Attackers use logic such as ' (a single quote) to test the database for vulnerabilities. Responses such as the one shown in the following code give the attacker the feedback needed to know that the database is vulnerable to attack:

Microsoft OLE DB Provider for ODBC Drivers error '80040e07'
[Microsoft][ODBC SQL Server Driver][SQL Server]Syntax error converting
the nvarchar value 'sa_login' to a column of data type int.
/index.asp, line 5

Although knowledge of the syntax and response used for a database attack is not required for the CISSP exam, it is useful to know this information as you attempt to secure your infrastructure.
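To see why separating query code from user data matters, consider the following hedged C sketch based on the SQLite C API. The table and column names are hypothetical; the point is the contrast between splicing user input into the query text and binding it as a parameter.

#include <sqlite3.h>
#include <stdio.h>

/* Vulnerable: user input is spliced directly into the SQL text */
void lookup_unsafe(sqlite3 *db, const char *user_input)
{
    char sql[256];
    snprintf(sql, sizeof(sql),
             "SELECT balance FROM accounts WHERE owner = '%s';", user_input);
    /* Input such as  ' OR '1'='1  changes the meaning of the query */
    sqlite3_exec(db, sql, NULL, NULL, NULL);
}

/* Safer: the query text is fixed and user input is bound as data only */
void lookup_safe(sqlite3 *db, const char *user_input)
{
    sqlite3_stmt *stmt = NULL;
    const char *sql = "SELECT balance FROM accounts WHERE owner = ?;";

    if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) == SQLITE_OK) {
        sqlite3_bind_text(stmt, 1, user_input, -1, SQLITE_TRANSIENT);
        while (sqlite3_step(stmt) == SQLITE_ROW) {
            printf("balance: %d\n", sqlite3_column_int(stmt, 0));
        }
    }
    sqlite3_finalize(stmt);
}

In the unsafe version, the attacker’s input becomes part of the query logic; in the safe version, the same input is treated purely as data.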

Caution

SQL injection attacks are among the top attack vectors and are responsible for a large number of attacks. CISSP candidates should understand the threat these attacks pose.

Injection attacks, such as SQL, LDAP, and others, can occur in many different programs and applications and take advantage of a common problem: No separation exists between the application code and the input data, which makes it possible for attackers to run their code on the victim’s system. Injection attacks require the following:

  • Images Footprinting: It is necessary to determine what technology the web application is running.

  • Images Identifying: User input points must be identified.

  • Images Testing: User input that is susceptible to the attack must be tested.

  • Images Exploiting: Extra bits of code are placed into the input to execute commands on the victim’s computer.

Mobile System Vulnerabilities

Mobile devices have increased in power and now have the ability to handle many tasks that previously only desktops and laptops could perform. More and more employees are bringing their own mobile devices to work and using them on corporate networks. Organizations might have a number of concerns about this arrangement, including the following:

  • Images Eavesdropping on voice calls

  • Images Mobile viruses and malware

  • Images Plaintext storage on mobile devices

  • Images Ease of loss and theft of mobile device

  • Images Camera phones’ ability to photograph sensitive information

  • Images Large storage ability, which can lead to data theft or exfiltration

  • Images Software that exposes local device data such as names, email addresses, or phone numbers

Bring your own technology (BYOT), also known as bring your own device (BYOD), requires an organization to build in administrative and technical controls to govern how the devices can be used at work. Some of these basic controls might include the following:

  • Images Passwords: One of the most basic and cheapest means of protecting a mobile system is to enforce use of passwords. Also, having the ability to remote wipe a missing or stolen device is recommended for corporate devices.

  • Images Multifactor authentication (MFA): Multiple forms of authentication strengthen passwords. For example, a user might be required to use Okta or Microsoft MFA to approve a login from an unknown location.

  • Images Session lifetimes: Limiting session times and cookies can promote security by logging users out of sensitive services after a set amount of idle time.

  • Images Wireless vulnerabilities: Wireless networks are often vulnerable to attack. For example, an attacker might be able to set up a rogue wireless access point and launch a man-in-the-middle attack.

  • Images Unpatched OS, software, or browser: Mobile devices, like other computing devices, must be patched at regular periodic intervals.

  • Images Insecure devices: Jailbroken devices pose a risk for corporate networks as they are likely to be missing patches and other security updates.

  • Images Mobile device management: Mobile device management (MDM) and mobile application management (MAM) can be used to secure devices and can allow only managed devices to access company resources.

Cryptography

Cryptography involves transforming plaintext data into unreadable data, or ciphertext. Today, cryptographic systems are mandatory to protect email, corporate data, personal information, and electronic transactions.

To give you a good understanding of cryptography, this section reviews how it relates to the foundations of security: privacy, authentication, integrity, and nonrepudiation.

Tip

One easy way to remember the primary goals of cryptography is to think of their initials, which spell PAIN: privacy, authentication, integrity, and nonrepudiation.

Confidentiality, or privacy, is the ability to guarantee that private information stays private. Cryptography provides confidentiality by transforming data. This transformation is called encryption. Encryption can protect confidentiality of information in storage or in transit. Just think about a CEO’s laptop. If it is lost or stolen, what is really worth more: the laptop or information regarding next year’s hot new product line? Information assets can be worth much more than the equipment on which they are stored. Hard disk encryption offers an easy way to protect information in the event that equipment is lost, stolen, or accessed by unauthorized individuals.

Authentication has several roles. First, authentication is usually associated with message encryption. Authentication provides a way to ensure that data or programs have not been modified and really come from the source that you believe them to have come from. Authentication is also used to confirm a user’s identity and is part of the identification and authentication process. The most common implementation of identification and authentication is with a username and password. Most passwords are encrypted, but they do not have to be. Without encryption, the authentication process is very weak. FTP and Telnet are examples of weak authentication. With these protocols, usernames and passwords are passed in unencrypted (that is, in plaintext), and anyone with access to the communication stream can intercept and capture these passwords. Virtual private networks (VPNs) also use authentication, but instead of using a plaintext username and password, they normally use digital certificates and digital signatures to more accurately identify the user and to protect the authentication process against spoofing.

Integrity is the assurance that information remains unaltered from the point at which it is created until it is received. If you’re selling widgets on the Internet for $100 each, you will likely go broke if a criminal can change the posted price to $1 at checkout. Integrity is critical for the exchange of information, be it engaging in e-commerce, maintaining trade secrets, or supplying accurate military communications.

Nonrepudiation is the capability to verify proof of identity. Nonrepudiation is used to ensure that a sender of data is provided with proof of delivery and that the recipient is assured of the sender’s identity. Neither party should be able to deny having sent or received the data at a later date. In the days of face-to-face transactions, nonrepudiation was not as hard to prove as it is today. The Internet makes many transactions faceless. You might never see the people that you deal with, and nonrepudiation is all the more critical. Nonrepudiation is achieved through digital signatures, digital certificates, and message authentication codes (MACs).

To help make this section a little easier to digest, review the following basic terms that are used throughout the rest of this chapter:

  • Images Plaintext: Text that is directly readable. Sometimes also called cleartext.

  • Images Encryption: The transformation of plaintext into ciphertext.

  • Images Ciphertext: Text that has been rendered unreadable by encryption.

  • Images Cryptographic algorithm: A set of mathematical procedures used to encrypt and decrypt data in a cryptographic system. For example, a simple substitution cipher such as the Caesar cipher simply shifts characters forward or backward three places in the alphabet.

  • Images Cryptographic key: A piece of information, also called a crypto variable, that controls how a cryptographic algorithm functions. It can be used to control the transformation of plaintext to ciphertext or ciphertext to plaintext. For example, an algorithm that shifts characters might use the key “+3” to shift characters forward by three positions. The word “cat” would be encrypted as “fdw” using this algorithm and key.

  • Images Key management: The generation, distribution, storage, and disposition of cryptographic keys. Key management is an important piece of the cryptographic process. Any portion of the key management process that is not handled correctly creates an opportunity to compromise the cryptographic system.

  • Images Digital rights management (DRM): A process that involves using tools, standards, and systems to protect intellectual property and copyrighted materials from misuse or theft. DRM is composed of data protection and data governance. Encryption technologies are used to provide data protection, and trust and policy management allow data governance so information can be distributed and used by authorized entities.

  • Images Steganography: The process of hiding a piece of information inside another message. Images, audio, and video are three examples of carriers that can be used to hide information.

  • Images Symmetric cryptography: Cryptography that provides for confidentiality by using a single key, a shared key, or the same key for both encryption and decryption.

  • Images Asymmetric cryptography: Cryptography that uses a private and public key pair for encryption and decryption. Both keys have dual functionality: What one key encrypts, the other key decrypts. Asymmetric cryptography provides for confidentiality, authentication, and nonrepudiation.

  • Images Cryptanalysis: The art and science of breaking a cryptography system or obtaining plaintext from ciphertext without a cryptographic key. Governments, the military, enterprises, and malicious hackers use cryptanalysis to find weaknesses and crack cryptographic systems.

  • Images Message digest or hash: A fixed-length value, commonly written as a hex string, used to uniquely identify a variable amount of data.

  • Images Digital signature: A hash value that is encrypted with a sender’s private key and used for authentication and integrity.

When symmetric encryption is used to convert plaintext into ciphertext, the transformation can be accomplished by using two types of ciphers:

  • Images Block ciphers: Ciphers that separate the message into blocks for encryption and decryption

  • Images Stream ciphers: Ciphers that divide the message into bits for encryption and decryption

Algorithms

An algorithm is a set of rules used to encrypt and decrypt data. It’s a set of instructions that is used with a cryptographic key to encrypt plaintext data. Encrypting plaintext data with different keys or with dissimilar algorithms produces different ciphertext.
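As a toy illustration of an algorithm plus a key, the following C sketch implements the character-shifting cipher described earlier; with a key of 3, it turns the word “cat” into “fdw.” It is shown only to illustrate the relationship among algorithm, key, plaintext, and ciphertext and has no real cryptographic strength.

#include <stdio.h>

/* Shift each lowercase letter forward by 'key' positions (the algorithm);
   changing the key changes the ciphertext produced from the same plaintext. */
void shift_encrypt(const char *plaintext, int key, char *ciphertext)
{
    size_t i;
    for (i = 0; plaintext[i] != '\0'; i++) {
        if (plaintext[i] >= 'a' && plaintext[i] <= 'z')
            ciphertext[i] = 'a' + (plaintext[i] - 'a' + key) % 26;
        else
            ciphertext[i] = plaintext[i];   /* leave other characters alone */
    }
    ciphertext[i] = '\0';
}

int main(void)
{
    char out[16];
    shift_encrypt("cat", 3, out);
    printf("%s\n", out);   /* prints "fdw" */
    return 0;
}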

Not all cryptosystems are of the same strength. The strength of a cryptosystem relies on the strength of an algorithm because a flawed algorithm can be broken. However, the strength of encryption also depends on the size and complexity of the key. For example, imagine that you’re contemplating buying a combination lock. One lock has 3 digits, whereas the other has 4. Which would you choose? Consider that there are 1,000 possible combinations for the 3-digit lock, but there are 10,000 possible combinations for the 4-digit lock. As you can see, just a 1-digit increase can create a significant difference. The more possible keys or combinations there are, the longer it takes an attacker to guess the right key.

The number of possible keys that a given key length allows is known as the key space. In the world of cryptography, key spaces are defined by the number of bits in the key. So, a 64-bit key has a key space of 2 to the power of 64, or 18,446,744,073,709,551,616 possible keys.
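The following short C sketch simply prints how quickly the key space grows as bits are added to the key; each additional bit doubles the number of possible keys.

#include <math.h>
#include <stdio.h>

int main(void)
{
    int bits[] = { 8, 16, 32, 56, 64, 128 };
    size_t i;

    for (i = 0; i < sizeof(bits) / sizeof(bits[0]); i++) {
        /* 2^n possible keys for an n-bit key */
        printf("%3d-bit key: about %.3e possible keys\n",
               bits[i], pow(2.0, bits[i]));
    }
    return 0;
}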

Keys must remain secret. Although a 7-digit combination lock can provide great security, it will do you little good if everyone knows the combination is your phone number.

Note

Data Encryption Standard (DES) uses a 64-bit key, with every 8th bit being a parity bit. 3DES (also called Triple DES), which uses three different keys and has a key strength of 168 bits, was the last official version of DES. All versions of DES have been retired.

The final consideration in the choice of a cryptosystem is the value of the data. Highly valued data requires more protection than data that has little value. Therefore, more valuable information needs stronger algorithms, larger keys, and more frequent key exchange to protect against attacks.

Cryptographic systems might make use of a nonce, which is a number generated as randomly as possible and used once. These pseudorandom numbers are different each time one is generated. An initialization vector (IV) is an example of a nonce. An IV can be added to a key and used to force creation of unique ciphertext even when encrypting the same message with the same cipher and the same key.

Modern cryptographic systems use two types of algorithms for encrypting and decrypting data:

  • Images Symmetric algorithms: Use the same key to encrypt and decrypt data

  • Images Asymmetric algorithms: Use different keys: one for encryption and the other for decryption

Table 4.6 highlights some of the key advantages and disadvantages of symmetric and asymmetric algorithms.

TABLE 4.6 Symmetric and Asymmetric Algorithms

Encryption Type   Advantages                                                                          Disadvantages
Symmetric         Faster than asymmetric                                                              Key distribution; provides only confidentiality
Asymmetric        Easy key exchange; provides confidentiality, authentication, and nonrepudiation     Slower than symmetric; requires larger keys

ExamAlert

Make sure you know the differences between symmetric and asymmetric encryption for the CISSP exam.

Cipher Types and Methods

Symmetric encryption methods include block and stream ciphers. Block ciphers operate on blocks, or fixed-size chunks, of data. (The Caesar cipher mentioned earlier in this chapter can be thought of as a degenerate case in which the “block” is a single character.) Most modern encryption algorithms implement some type of block cipher, and 64-bit blocks are a commonly used size. Block ciphers are widely used in software products. During the encryption and decryption process, the message is divided into blocks of bits. These blocks are then put through Boolean mathematical functions, resulting in the following:

  • Images Confusion: Occurs from substitution-type operations that create a complicated relationship between the plaintext and the key so that an attacker can’t alter the ciphertext to determine the key.

  • Images Diffusion: Occurs from transposition-type operations that shift pieces of the plaintext multiple times. The result is that changes are spread throughout the ciphertext.

A substitution box (s-box) performs a series of substitutions, transpositions, and exclusive-or (XOR) operations to obscure the relationship between the plaintext and the ciphertext. When properly implemented, s-boxes are designed to defeat cryptanalysis. An s-box takes a number of input bits (m) and transforms them into some number of output bits (n). S-boxes are implemented as a type of lookup table and used with symmetric encryption systems such as DES.
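The following tiny C sketch illustrates the lookup-table idea with a hypothetical 4-bit s-box; the substitution values are toy values chosen for illustration and are not taken from any standardized cipher.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical 4-bit s-box: each input nibble (0-15) maps to an output nibble */
static const uint8_t SBOX[16] = {
    0x6, 0xB, 0x0, 0xD, 0x9, 0x2, 0xF, 0x4,
    0xE, 0x1, 0x8, 0x7, 0x3, 0xC, 0xA, 0x5
};

uint8_t substitute(uint8_t nibble)
{
    return SBOX[nibble & 0x0F];   /* the table lookup performs the substitution */
}

int main(void)
{
    printf("s-box(0x6) = 0x%X\n", substitute(0x6));   /* prints 0xF */
    return 0;
}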

A stream cipher encrypts a stream of data 1 bit at a time. To accomplish this, a one-time pad is created from the encryption engine. This one-time pad is a key stream, and it is XORed with the plaintext data stream (1 bit at a time) to create ciphertext. Stream ciphers differ from each other in the engine they use to create the one-time pad; the engine receives the symmetric key as input to cause the creation of a unique key stream. The XOR operation is a Boolean math function that combines two bits: If exactly one of the bits is a 1, the result is a 1; if both bits are the same, the result is a 0. Table 4.7 provides a list of commonly used Boolean operators.

TABLE 4.7 Boolean Operators

Inputs   AND   OR   NAND   NOR   XOR
0 0      0     0    1      1     0
0 1      0     1    1      0     1
1 0      0     1    1      0     1
1 1      1     1    0      0     0

Stream ciphers operate at a higher speed than block ciphers and, in theory, are well suited for hardware implementation.
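Here is a minimal C sketch of the key stream idea. It assumes the key stream is already available as a byte array (a real stream cipher would generate it from the symmetric key); XORing the ciphertext with the same key stream recovers the plaintext.

#include <stdio.h>
#include <string.h>

/* XOR each byte of the message with the corresponding keystream byte */
void xor_with_keystream(const unsigned char *in, const unsigned char *keystream,
                        unsigned char *out, size_t len)
{
    for (size_t i = 0; i < len; i++)
        out[i] = in[i] ^ keystream[i];
}

int main(void)
{
    unsigned char plaintext[] = "ATTACK AT DAWN";
    unsigned char keystream[] = { 0x3A, 0x91, 0x5C, 0x07, 0xE2, 0x4B, 0x88,
                                  0x19, 0x6D, 0xF0, 0x25, 0xBE, 0x73, 0xC4 };
    unsigned char ciphertext[sizeof(plaintext)];
    unsigned char recovered[sizeof(plaintext)];
    size_t len = strlen((char *)plaintext);

    xor_with_keystream(plaintext, keystream, ciphertext, len);   /* encrypt */
    xor_with_keystream(ciphertext, keystream, recovered, len);   /* decrypt */
    recovered[len] = '\0';
    printf("%s\n", recovered);   /* prints "ATTACK AT DAWN" */
    return 0;
}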

Symmetric Encryption

In symmetric encryption, a single shared secret key is used for both encryption and the decryption, as shown in Figure 4.14. The key is referred to as a dual-use key because it is used to lock and unlock data. Symmetric encryption is the oldest form of encryption; scytale and Caesar’s cipher are examples of it (see Chapter 5, “Communications and Network Security”). Symmetric encryption provides confidentiality by keeping individuals who do not have the key from knowing the true contents of the message.

Images

FIGURE 4.14 Symmetric Encryption

The simple diagram in Figure 4.14 shows the symmetric encryption process. Plaintext is encrypted with the single shared key, resulting in ciphertext; the ciphertext is then transmitted to the message’s recipient, who reverses the process to decrypt the message. Symmetric encryption and decryption are fast, and symmetric encryption is very hard to break if a large key is used. However, it has three significant disadvantages:

  • Images Distribution of the symmetric key

  • Images Key management

  • Images Confidentiality only

Distribution of the symmetric key is the most serious deficiency with symmetric encryption. For symmetric encryption to be effective, there must be a secure method to transfer keys. In our modern world, there needs to be some type of out-of-band transmission. Just think about it: If Bob wants to send Alice a secret message but is afraid that Eavesdropper Eve can monitor their communication, how can he send the message? If the key is sent in plaintext, Eve can intercept it. Bob could deliver the key in person, mail it, or even send a courier. All these methods are highly impractical in our world of e-commerce and electronic communication.

In addition to the problem of key exchange, there is also a key management problem. If, for example, you had 10 people who all needed to communicate with each other in complete confidentiality, you would require 45 keys for them. The following formula is used to calculate the number of keys needed in symmetric encryption:

N(N – 1)/2

In this example, the calculation is as follows:

10(10 – 1)/2 = 45 keys

Table 4.8 shows how the number of keys climbs as the number of users increases.

TABLE 4.8 Symmetric Encryption Users and Keys

Number of Users   Number of Keys
5                 10
10                45
100               4,950
1,000             499,500
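As a quick check, the numbers in Table 4.8 can be reproduced with a few lines of C:

#include <stdio.h>

/* Number of unique pairwise symmetric keys needed for n users: n(n - 1) / 2 */
unsigned long keys_needed(unsigned long n)
{
    return n * (n - 1) / 2;
}

int main(void)
{
    unsigned long users[] = { 5, 10, 100, 1000 };
    size_t i;

    for (i = 0; i < sizeof(users) / sizeof(users[0]); i++)
        printf("%5lu users -> %lu keys\n", users[i], keys_needed(users[i]));
    return 0;
}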

The third and final problem with symmetric encryption is that it provides for confidentiality only. The ultimate goal of cryptography is to supply confidentiality, integrity, authenticity, and nonrepudiation.

Some examples of symmetric algorithms include the following:

  • Images Data Encryption Standard (DES): DES was once the most commonly used symmetric algorithm. It has been officially retired by NIST. Even the latest version of DES, 3DES, was retired in 2018 and was replaced by the new FIPS 197 standard, AES.

  • Images Blowfish: Blowfish is a general-purpose symmetric algorithm intended as a replacement for DES. Blowfish has a 64-bit block size and a variable key length of 32 bits to 448 bits.

  • Images Twofish: Twofish is a block cipher that operates on 128-bit blocks of data and is capable of using cryptographic keys up to 256 bits in length.

  • Images International Data Encryption Algorithm (IDEA): IDEA is a block cipher that uses a 128-bit key to encrypt 64-bit blocks of plaintext. It is patented but free for noncommercial use, and it is used by PGP.

  • Images Rijndael: This is a block cipher adopted as the Advanced Encryption Standard (AES) by the U.S. government to replace DES. Although Rijndael supports multiple block sizes, AES has a fixed block size of 128 bits. There are three approved key lengths: 128, 192, and 256 bits, which use 10, 12, and 14 rounds, respectively.

  • Images Rivest Cipher 4 (RC4): RC4 is a stream-based cipher. Stream ciphers treat the data as a stream of bits.

  • Images Rivest Cipher 5 (RC5): RC5 is a fast block cipher. It is different from other symmetric algorithms in that it supports a variable block size, a variable key size, and a variable number of rounds. Allowable choices for the block size are 32, 64, and 128 bits. The number of rounds can range from 0 to 255, and the key can range up to 2,040 bits.

  • Images Secure and Fast Encryption Routine (SAFER): SAFER is a block-based cipher that processes data in blocks of 64 and 128 bits.

  • Images MARS: MARS is a candidate for AES that was developed by IBM. It is a block cipher that has a 128-bit block size and a key length between 128 and 448 bits.

  • Images Carlisle Adams/Stafford Tavares (CAST): CAST is a 128- or 256-bit block cipher that was a candidate for AES.

  • Images Camellia: Camellia is a symmetric key block cipher with a block size of 128 bits and key sizes of 128, 192, and 256 bits. Developed by Mitsubishi Electric and NTT of Japan, Camellia is comparable to AES.

  • Images Skipjack: Skipjack, promoted by the NSA, uses an 80-bit key, supports the same four modes of operation as DES, and operates on 64-bit blocks of text. Skipjack faced public opposition because it was developed so that the government could maintain information enabling legal authorities (with a search warrant or approval of the court) to reconstruct a Skipjack access key and decrypt private communications between affected parties.

ExamAlert

Be sure to take your time to review the various encryption types, block sizes, and key lengths; you can expect to see these items on the CISSP exam. You will be expected to know some of the algorithms that are discussed in detail in the following section. Others may simply be used as distractors on the exam.

To provide authentication from cryptography, you must turn to asymmetric encryption. However, before we discuss asymmetric encryption, the sections that follow complete the discussion of DES and a couple other popular symmetric encryption methods.

Data Encryption Standard (DES)

DES grew out of an early 1970s project originally developed by IBM. IBM and NIST modified IBM’s original encryption standard, known as Lucifer, to use a 56-bit key. This revised standard was endorsed by the NSA, named DES, and published in 1977. It was released as an American National Standards Institute (ANSI) standard in 1981.

DES uses a 64-bit block to process 64 bits of plaintext at a time and outputs 64-bit blocks of ciphertext. As mentioned earlier, DES uses a 64-bit key (with every 8th bit being ignored) and has the following modes of operation:

  • Images Electronic Codebook (ECB) mode

  • Images Cipher Block Chaining (CBC) mode

  • Images Cipher Feedback (CFB) mode

  • Images Output Feedback (OFB) mode

  • Images Counter (CTR) mode

ExamAlert

These modes of operation can be applied to any symmetric key block cipher, such as DES, 3DES, or AES. You need to know them for the CISSP exam.

The written ANSI standard reports the DES key to be 64 bits, but 8 bits are actually used for parity to ensure the integrity of the remaining 56 bits. Therefore, in terms of encryption strength, the key is really only 56 bits long. Each 64-bit plaintext block is separated into two 32-bit blocks and then processed by this 56-bit key. The processing submits the plaintext to 16 rounds of transpositions and substitutions.
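To visualize splitting a block into two 32-bit halves and running it through multiple rounds, consider the following heavily simplified Feistel-style sketch in C. The round function, mixing constant, and subkeys are toy placeholders and bear no resemblance to the real DES round function or key schedule.

#include <stdint.h>
#include <stdio.h>

/* Toy round function: NOT the real DES f-function, just a placeholder */
static uint32_t toy_f(uint32_t half, uint32_t subkey)
{
    return (half ^ subkey) * 0x9E3779B9u;   /* arbitrary mixing constant */
}

/* Generic Feistel structure: each round mixes one half with the round
   function of the other half and then swaps them. */
void feistel_encrypt(uint32_t *left, uint32_t *right,
                     const uint32_t subkeys[], int rounds)
{
    for (int r = 0; r < rounds; r++) {
        uint32_t tmp = *right;
        *right = *left ^ toy_f(*right, subkeys[r]);
        *left = tmp;
    }
}

int main(void)
{
    uint32_t left = 0x01234567u, right = 0x89ABCDEFu;   /* one 64-bit block */
    uint32_t subkeys[16] = { 0xA5A5A5A5u };             /* toy subkeys */

    feistel_encrypt(&left, &right, subkeys, 16);        /* 16 rounds */
    printf("ciphertext block: %08X %08X\n", (unsigned)left, (unsigned)right);
    return 0;
}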

ExamAlert

Keep in mind that while DES operates on 64-bit blocks, the key has an effective length of only 56 bits.

Electronic Codebook (ECB) Mode

ECB is the native encryption mode of DES. If the last block is not full, padding is added to make the plaintext a full block. Although ECB produces the highest throughput, it is also the easiest form of DES encryption to break. If used with large amounts of data, it can be easily attacked because identical plaintext, when encrypted with the same key, will always produce the same ciphertext. ECB mode is appropriate only when used on small amounts of data. Figure 4.15 illustrates ECB.

Images

FIGURE 4.15 DES ECB Encryption

Tip

When using ECB, a given block of plaintext encrypted with a given key will always give the same ciphertext. ECB is the weakest form of DES.

Cipher Block Chaining (CBC) Mode

The CBC mode of DES, which is widely used, is similar to ECB. CBC processes 64-bit blocks of data but XORs the ciphertext created from each block into the next plaintext block before it is encrypted. In this process, called chaining, each block is dependent on the previous block; chaining is accomplished by using the XOR operation.

The CBC mode of DES makes the ciphertext more secure and less susceptible to cracking. CBC mode is subject to a slight risk of propagating transmission errors upon reception: an error in a received ciphertext block corrupts the decryption of that block and also affects the decryption of the following block.
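The chaining step itself is simple to express. The following C sketch uses a toy stand-in for the block cipher (a simple XOR with the key, which is not secure) purely so the chaining logic can run; note that the two identical plaintext blocks produce different ciphertext blocks.

#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 8   /* 64-bit blocks, as in DES */

/* Toy stand-in for a real block cipher (NOT secure): XORs the block with
   the key so that the chaining logic below can actually run. */
static void encrypt_block(const unsigned char *in, unsigned char *out,
                          const unsigned char *key)
{
    for (size_t i = 0; i < BLOCK_SIZE; i++)
        out[i] = in[i] ^ key[i];
}

/* CBC chaining: each plaintext block is XORed with the previous ciphertext
   block (or the IV for the first block) before being encrypted. */
static void cbc_encrypt(const unsigned char *plaintext, unsigned char *ciphertext,
                        size_t num_blocks, const unsigned char *key,
                        const unsigned char *iv)
{
    unsigned char chain[BLOCK_SIZE];
    unsigned char xored[BLOCK_SIZE];

    memcpy(chain, iv, BLOCK_SIZE);   /* the IV seeds the first block */

    for (size_t b = 0; b < num_blocks; b++) {
        for (size_t i = 0; i < BLOCK_SIZE; i++)
            xored[i] = plaintext[b * BLOCK_SIZE + i] ^ chain[i];

        encrypt_block(xored, &ciphertext[b * BLOCK_SIZE], key);
        memcpy(chain, &ciphertext[b * BLOCK_SIZE], BLOCK_SIZE);
    }
}

int main(void)
{
    /* Two identical 8-byte plaintext blocks produce different ciphertext blocks */
    unsigned char plaintext[16] = "SAMEDATASAMEDATA";
    unsigned char ciphertext[16];
    unsigned char key[BLOCK_SIZE] = "toy key";
    unsigned char iv[BLOCK_SIZE]  = { 1, 2, 3, 4, 5, 6, 7, 8 };

    cbc_encrypt(plaintext, ciphertext, 2, key, iv);
    for (size_t i = 0; i < sizeof(ciphertext); i++)
        printf("%02X", ciphertext[i]);
    printf("\n");
    return 0;
}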

Cipher Feedback (CFB) Mode

CFB is implemented using a small block size (of 1 bit to 1 byte) so that streaming data can be encrypted without waiting for 64 bits to accrue. The resulting effect is that CFB behaves as a stream cipher. It is similar to CBC in that previously generated ciphertext is added to subsequent blocks. And, as with CBC, errors and corruption during transmission can propagate through the decryption process on the receiving side.

Output Feedback (OFB) Mode

Like CFB mode, OFB mode emulates a stream cipher. Unlike CFB mode, however, OFB mode feeds the output of the encryption function (the key stream), rather than the ciphertext, back into the next block to be encrypted. Therefore, transmission errors do not propagate throughout the decryption process. An initialization vector is used to create the seed value for the first encrypted block, and the resulting key stream is XORed with the plaintext.

There is a derivative mode of OFB known as counter mode. Counter mode, as described later in this chapter, implements DES as a stream cipher and produces a ciphertext that does not repeat for long periods. Figure 4.16 illustrates DES OFB encryption.

Tip

Although DES remained secure for many years, in 1998 the Electronic Frontier Foundation (EFF) was able to crack DES by brute force in about 23 hours. When DES was officially retired, it was recommended that Triple DES (3DES) be used to ensure security. Triple DES has since been replaced by AES.

Images

FIGURE 4.16 DES OFB Encryption

Counter (CTR) Mode

Like CFB and OFB modes, counter mode also implements a block cipher into a stream cipher and adds a counter to the process. The counter is a function that produces a sequence that will not repeat for a long time. The counter value gets combined with an initialization vector to produce the input into the symmetric key block cipher. This value is then encrypted through the block cipher using the symmetric key. Counter mode is designed for operation on a multiprocessor machine where blocks can be encrypted in parallel, as shown in Figure 4.17.

Images

FIGURE 4.17 Counter Mode Encryption

Triple DES (3DES)

Before we get to the details of 3DES, let’s look at why 3DES was even invented. DES was adopted with a five-year certification, which means it needed to be recertified every five years. While DES initially passed its recertifications without any problems, NIST saw that DES was beginning to outlive its usefulness and began looking for candidates to replace it. DES had become the victim of increased computing power. As Moore’s law predicts, the number of transistors per square inch doubles every 18 to 24 months, and so does processing power. As a result, an encryption standard that originally required years to break through brute force was becoming dramatically easier to attack. The final demise of DES came in 1998, when the EFF’s purpose-built Deep Crack machine recovered a DES key by brute force in about 56 hours; in early 1999, working with distributed.net’s network of roughly 100,000 computers, it did so in under 24 hours. Although DES had been resistant to cracking for many years, these projects demonstrated the need for stronger algorithms.

Although AES was slated to be the long-term replacement, the government had not yet chosen the algorithm behind it. A temporary solution was needed to fill the gap before AES could be deployed. Some thought that Double DES might be used; after all, Double DES could have a 112-bit key. However, cryptanalysis showed that, because of the meet-in-the-middle attack, Double DES requires roughly the same work factor to crack as single DES (see https://www.hypr.com/meet-in-the-middle-mitm-attack/).

It turned out that 3DES provided a substantial increase in security (work factor). Therefore, to extend the usefulness of the DES encryption standard, 3DES was used as a stopgap solution. 3DES can make use of two or three keys to encrypt data, depending on how it is implemented; therefore, it has an effective key length of either 112 bits or 168 bits. 3DES performs 48 rounds of transpositions and substitutions. Although it is much more secure, it is approximately three times as slow as 56-bit DES. 3DES can be implemented in several ways:

  • Images DES EEE2: DES EEE2 uses two keys. The first key is reused during the third round of encryption. The encryption process is performed three times (encrypt, encrypt, encrypt).

  • Images DES EDE2: DES EDE2 uses two keys. Again, the first key is reused during the third round of encryption. Unlike DES EEE2, DES EDE2 encrypts, decrypts, and then encrypts.

  • Images DES EEE3: DES EEE3 uses three keys and performs the encryption process three times, each time encrypting. Sometimes, you might see the specifics of these ciphers mathematically summarized. For example, when discussing DES-EEE3 using E(K,P), where E refers to the encryption of plaintext P with key K, the process is summarized as E(K3,E(K2,E(K1,P))).

  • Images DES EDE3: DES EDE3 uses three keys but operates by encrypting, decrypting, and then encrypting the data. Figure 4.18 illustrates EDE3.

Images

FIGURE 4.18 3DES EDE3
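
Using the same E(K,P) notation shown for DES-EEE3, and letting D denote decryption, DES-EDE3 can be summarized as E(K3, D(K2, E(K1, P))); DES-EDE2 is the same operation with K3 replaced by K1.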

Advanced Encryption Standard (AES)

NIST selected Rijndael as the replacement for DES in 2000, and the resulting AES standard became effective in 2002. Several algorithms were examined, and Rijndael (which sounds like “rain doll”) was chosen. Its name derives from the names of its two developers: Vincent Rijmen and Joan Daemen. Rijndael is considered a fast, simple, robust encryption mechanism.

AES is likely the most important symmetric encryption standard today. It is widely used and commonly found in wireless access points and other products. In addition, Rijndael is known to stand up well to various types of attacks. The Rijndael algorithm uses three layers of transformations to encrypt/decrypt blocks of message text:

  • Images Linear mix transform

  • Images Nonlinear transform

  • Images Key addition transform

It also performs a series of rounds, each consisting of four steps:

  1. Byte sub: Each byte is replaced by an s-box substitution.

  2. Shift row: Bytes are arranged in a rectangular matrix and shifted.

  3. Mix column: Matrix multiplication is performed based on the arranged rectangle.

  4. Add round key: Each byte of the state is combined with the round key.

On the last round, the Mix column step (step 3) is omitted.

Rijndael is an iterated block cipher, and as developed, it supports variable key and block lengths of 128, 192, or 256 bits:

  • Images If both the key size and block size are 128 bits, there are 10 rounds.

  • Images If both the key size and block size are 192 bits, there are 12 rounds.

  • Images If both the key size and block size are 256 bits, there are 14 rounds.

As specified in the standard for AES, Rijndael is now fixed at a block size of 128 bits, but it can still deploy multiple key lengths (128, 192, or 256 bits).

International Data Encryption Algorithm (IDEA)

IDEA is a 64-bit block cipher that uses a 128-bit key. Although it was patented by a Swiss company, it has long been freely available for noncommercial use, and the patents have since expired. It is considered a secure encryption standard, and no practical attacks against the full cipher are known. Like DES, it can operate in the standard block cipher modes. At one time, it was thought that IDEA would replace DES, but patent licensing fees discouraged adoption.

Rivest Cipher Algorithms

Rivest cipher is a general term for a family of ciphers designed by Ron Rivest, including RC2, RC4, RC5, and RC6. Ron Rivest is one of the creators of RSA. RC1 was never released, and RC3 was broken by cryptanalysis before its release.

RC2 is an early algorithm in the series. It is a variable-key-size, 64-bit block cipher that can be used as a drop-in substitute for DES.

RC4 is a fast stream cipher that is faster than block mode ciphers, and it was widely used. It was especially suitable for low-power devices. The 40-bit version is used in Wired Equivalent Privacy (WEP). Although only 40-bit keys (together with a 24-bit IV, creating 64-bit WEP) were specified by the 802.11 standard, many vendors tried to strengthen the encryption through a de facto deployment of a 104-bit key (with the 24-bit IV, making 128-bit WEP).

RC5 is a block-based cipher in which the number of rounds can range from 0 to 255 and the key can range from 0 bits to 2,040 bits. RC6 is similar; it uses a variable key size and a variable number of rounds. RC6 added two features not found in RC5: integer multiplication and the use of four working registers rather than two.

Asymmetric Encryption

Asymmetric encryption is unlike symmetric encryption in that it uses two unique keys, as shown in Figure 4.19. What one key encrypts, the other key must decrypt. One of the greatest benefits of asymmetric encryption is that it overcomes one of the big barriers of symmetric encryption: key distribution.

Images

FIGURE 4.19 Asymmetric Encryption

Here’s how asymmetric encryption functions: Imagine that you want to send a client a message. You use your client’s public key to encrypt the message. When your client receives the message, he uses his private key to decrypt it. The important concept here is that if the message is encrypted with the public key, only the matching private key will decrypt it. The private key is kept secret, whereas the public key can be given to anyone. If the system is properly designed, it should not be possible to easily deduce a key pair’s private key from the public key.

Cryptographic systems can also make use of zero knowledge proof, a concept that allows one party to prove knowledge of a secret without revealing the secret itself. For example, if someone encrypts data with the private key, that data can be decrypted with the matching public key, providing a check of authenticity without the private key ever being disclosed. Asymmetric encryption provides the mechanism for accomplishing this: the holder of a private key can prove she holds that key without ever revealing its contents to anyone. Dr. W. Diffie and Dr. M. E. Hellman (discussed shortly) used this idea to permit the creation of a trusted session key while communicating across an untrusted communication path, and with that, key distribution was solved.

Public key cryptography is made possible by the use of one-way functions. A trapdoor one-way function is a mathematical calculation that is easy to compute in one direction but nearly impossible to reverse without knowledge of the trapdoor. Depending on the type of asymmetric encryption used, this calculation involves one of the following:

  • Images Manipulating discrete logarithms

  • Images Factoring large composite numbers into their original prime factors

As an example of a trapdoor function, consider an implementation that uses factoring. If you are given two prime numbers, such as 389 and 283, it is easy to multiply them together and get 110,087. However, if you are given only the product 110,087, it takes considerably more work to recover the factors, and with primes that are hundreds of digits long the task becomes computationally infeasible.

As you can see, anyone who knows the trapdoor can easily perform the function in both directions, but anyone lacking the trapdoor can perform the function in only one direction. Trapdoor functions are used in the forward direction when someone is using the public key function; the forward direction is used for encryption, verification of digital signatures, and receipt of symmetric keys. Trapdoor functions are used in the inverse direction when someone is using the private key function; the inverse direction is used for decryption, generation of digital signatures, and transmission of symmetric keys.
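
As a rough illustration of this asymmetry (toy numbers only; real systems use primes hundreds of digits long), the following Python sketch shows that multiplying two primes is immediate, whereas recovering them from the product requires a search:

p, q = 389, 283                  # toy primes; real moduli use primes of 300+ digits
n = p * q                        # the easy direction: 110,087

def trial_factor(n):
    """Recover a factor of n by brute-force trial division (the hard direction)."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1                  # n itself is prime

print(n, trial_factor(n))        # (283, 389) emerges only after many trial divisions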

When public key encryption is properly implemented, anyone with a private key can generate its public pair, but no one with a public key can easily derive its private pair. We have Diffie and Hellman to thank for helping develop public key encryption; they released the first key-exchange protocol in 1976.

Diffie-Hellman

Diffie-Hellman was the first public key-exchange algorithm. It was developed only for key exchange and not for data encryption or digital signatures. The Diffie-Hellman protocol allows two users to exchange a secret key over an insecure medium without any prior secrets.

Although in-depth knowledge of Diffie-Hellman’s operation is not necessary for the CISSP exam, its operation is classic and worth review for anyone interested in the working of cryptographic systems. Diffie-Hellman has two system parameters: p and g. Both parameters are public and can be used by all the system’s users. Parameter p is a prime number, and parameter g, which is usually called a generator, is an integer less than p that has the following property: For every number n between 1 and p – 1 inclusive, there is a power k of g such that g^k mod p = n. For example, when given the following public parameters:

p = Prime number

g = Generator

these values are used to generate the function y = g^x mod p. With this function, Alice and Bob can securely exchange a previously unshared secret (symmetric) key as follows:

Alice can use a private value a, which only she holds, to calculate

y_a = g^a mod p

Bob can use a private value b, which only he holds, to calculate

y_b = g^b mod p

Alice can now send y_a (as Alice’s nonce, or A-nonce) to Bob, and Bob can send y_b (as Bob’s nonce, or B-nonce) to Alice. Alice can then apply her private value a to the B-nonce. Her result will be (y_b)^a, or

g^(ba) mod p

Similarly, with his private value b, Bob can calculate (y_a)^b from the received A-nonce:

g^(ab) mod p

But guess what: Mathematically, g^(ba) mod p and g^(ab) mod p are equivalent. So, in fact, Bob and Alice have just, securely, exchanged a new secret key.
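
The exchange can be sketched in a few lines of Python. The parameters here are toy values chosen purely for illustration; real deployments use standardized groups with primes of 2,048 bits or more.

import secrets

p, g = 23, 5                      # public parameters (toy values)

a = secrets.randbelow(p - 2) + 1  # Alice's private value
b = secrets.randbelow(p - 2) + 1  # Bob's private value

A = pow(g, a, p)                  # Alice's public value (the A-nonce)
B = pow(g, b, p)                  # Bob's public value (the B-nonce)

# Each side combines its own private value with the other's public value
alice_key = pow(B, a, p)          # (g^b)^a mod p
bob_key = pow(A, b, p)            # (g^a)^b mod p
assert alice_key == bob_key       # both sides arrive at the same shared secret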

Diffie-Hellman is vulnerable to man-in-the-middle attacks because the key exchange does not authenticate the participants. To prove authenticity, digital signatures and digital certificates (which allow a party's public key to be accepted in advance, often within a PKI) should be used. Diffie-Hellman is used in conjunction with several authentication methods, including the Internet Key Exchange (IKE) component of IPsec.

The following are some important facts you should know about Diffie-Hellman:

  • Images It was the first asymmetric algorithm.

  • Images It provides key-exchange services.

  • Images It is considered a key agreement protocol.

  • Images It operates by means of discrete logarithms.

RSA

RSA was developed in 1977 by Ron Rivest, Adi Shamir, and Len Adleman at MIT. The cipher’s name is based on their initials. Although RSA is much slower than symmetric cryptosystems, it can be used to securely exchange symmetric keys and is considered very secure. RSA is based on the difficulty of factoring the product of two large prime numbers; to remain secure, it must use a modulus far larger than the 129-digit number of the original RSA challenge, because numbers well over 200 decimal digits have since been factored with number field sieve algorithms. You do not need to know the inner workings of RSA public and private key generation for the CISSP exam, but the information in this section will be useful for you as a security professional.

Typically, the plaintext is broken into equal-length blocks, each represented as an integer smaller than the modulus n, and each block is encrypted and decrypted. Cryptanalysts or anyone attempting to crack RSA would be left with the difficult challenge of factoring a large integer into its two prime factors. Cracking the key would require an extraordinary amount of computer processing power and time. RSA is commonly used with key sizes of 2,048 bits or larger.
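
For readers who want to see the mechanics, here is a toy RSA key generation and encryption in Python. The primes are deliberately tiny and the scheme omits padding, so it is illustrative only and not a secure implementation.

p, q = 61, 53
n = p * q                        # public modulus (3233)
phi = (p - 1) * (q - 1)          # 3120
e = 17                           # public exponent, chosen coprime to phi
d = pow(e, -1, phi)              # private exponent (modular inverse; Python 3.8+)

m = 65                           # message encoded as an integer smaller than n
c = pow(m, e, n)                 # encrypt with the public key (e, n)
assert pow(c, d, n) == m         # decrypt with the private key (d, n)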

The RSA algorithm has become the de facto standard for industrial-strength encryption, especially since the patent expired in 2000. It has been built into many protocols, firmware, and software products, such as Microsoft Edge, Google Chrome, and Mozilla Firefox.

Note

LUC is an alternative to RSA, although it is not widely used. It was invented in 1991 and uses Lucas functions.

Note

XTR is a public key cryptosystem developed by Arjen Lenstra and Eric Verheul that is also based on finite fields and discrete logs, and it is seen as a generic superset function for all discrete log functions.

El Gamal

El Gamal is an extension of the Diffie-Hellman key exchange. It can be used for digital signatures, key exchange, and encryption. El Gamal consists of three discrete components: a key generator, an encryption algorithm, and a decryption algorithm. It was released in 1985, and its security rests in part on the difficulty of solving the discrete logarithm problem.

Elliptic Curve Cryptosystem (ECC)

ECC is considered more secure per bit than earlier asymmetric algorithms because elliptic curve systems are harder to crack at comparable key sizes than systems based on ordinary discrete log or factoring problems. For cryptographic use, elliptic curves are defined over finite fields (such as prime fields), and the hard problem is defined analogously to the discrete logarithm problem. An elliptic curve is defined by the following equation:

y^2 = x^3 + ax + b

along with a single point O, the point at infinity.

The space of the elliptic curve has the following properties:

  • Images Addition is the counterpart of modular multiplication.

  • Images Multiplication is the counterpart of modular exponentiation.

Thus, given two points, P and R, on an elliptic curve where P = KR, finding K is known as the elliptic curve discrete logarithm problem. ECC is fast. According to RFC 4492, a 163-bit key used in ECC has similar cryptographic strength to a 1,024-bit key used in the RSA algorithm. It can therefore be implemented in smaller, less-powerful devices such as smartphones, tablets, smart cards, and other handheld devices.
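
As a small illustration of the underlying math (toy parameters only, not a curve used in practice), the following Python snippet checks whether a point lies on a curve defined over a small prime field:

p = 97                            # small prime field modulus
a, b = 2, 3                       # curve: y^2 = x^3 + ax + b (mod p)

def on_curve(x, y):
    """Return True if the point (x, y) satisfies the curve equation modulo p."""
    return (y * y - (x ** 3 + a * x + b)) % p == 0

print(on_curve(3, 6))             # True: 36 = 27 + 6 + 3 (mod 97)
print(on_curve(3, 7))             # False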

Merkle-Hellman Knapsack

Merkle-Hellman Knapsack (Knapsack) is an asymmetric algorithm based on fixed weights. Although this system was popular for a while, it was broken in 1982.

Review of Symmetric and Asymmetric Cryptographic Systems

To help ensure your success on the CISSP exam, Table 4.9 compares symmetric and asymmetric cryptographic systems.

TABLE 4.9 Symmetric and Asymmetric Systems Attributes and Features

Symmetric                            | Asymmetric
-------------------------------------|----------------------------------------------------------------
Confidentiality                      | Confidentiality, integrity, authentication, and nonrepudiation
One single shared key                | Two keys: public and private
Requires out-of-band exchange        | Useful for in-band exchange
Not scalable, too many keys needed   | Scalable, works for e-commerce
Small key size and fast              | Larger key size required and slower to process
Useful for bulk encryption           | Digital signatures, digital envelopes, digital certificates, and small amounts of data

ExamAlert

Before attempting the CISSP exam, it is prudent to know which category each of the asymmetric algorithms discussed fits into. Take some time to review the differences:

  • images Functions by using a discrete logarithm in a finite field: Diffie-Hellman; El Gamal

  • images Functions by using the product of large prime numbers: RSA

  • images Functions by means of fixed weights: Merkle-Hellman Knapsack

  • images Functions by means of elliptic curves: Elliptic curve cryptosystem (ECC)

Hybrid Encryption

Up to this point in the chapter, we have discussed symmetric and asymmetric ciphers individually, and as noted in Table 4.9, each has advantages and disadvantages. Although symmetric encryption is fast, key distribution is a problem. Asymmetric encryption offers easy key distribution but is not suited for large amounts of data. Hybrid encryption uses the advantages of each approach and combines them into a truly powerful system: The public key cryptosystem is used as a key encapsulation scheme, and the symmetric (secret key) cryptosystem is used as a data encapsulation scheme.

A hybrid encryption system works as follows. If Michael wants to send a message to his editor, Betsy, the following would occur (see Figure 4.20):

  1. Michael generates a random private key for the data encapsulation scheme. We can call this the session key.

  2. Michael encrypts the message with the data encapsulation scheme using the session key that was generated in step 1.

  3. Michael encrypts the session key using Betsy’s public key.

  4. Michael sends both the encrypted message and the encrypted key to Betsy.

  5. Betsy uses her private key to decrypt the session key and then uses the session key to decrypt the message.

Nearly all modern cryptosystems are built to work this way because this approach provides the speed of secret key cryptosystems and the convenient key exchange of public key cryptosystems. Hybrid cryptographic systems include IPsec, PGP, SSH, SET, SSL, WPA2-Enterprise, and TLS. (These systems are discussed in detail later in this chapter.)

Images

FIGURE 4.20 Hybrid Encryption
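
The following Python sketch mirrors Michael and Betsy's exchange using the third-party cryptography package (assumed to be installed): RSA-OAEP serves as the key encapsulation scheme and AES-GCM as the data encapsulation scheme. It is a minimal illustration, not a production design.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Betsy's key pair; Michael only needs the public half
betsy_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
betsy_public = betsy_private.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Steps 1-2: Michael generates a session key and encrypts the message with it
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"Draft chapter attached", None)

# Step 3: Michael wraps the session key with Betsy's public key
wrapped_key = betsy_public.encrypt(session_key, oaep)

# Steps 4-5: Betsy unwraps the session key with her private key and decrypts
recovered_key = betsy_private.decrypt(wrapped_key, oaep)
plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)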

Public Key Infrastructure and Key Management

Dealing with brick-and-mortar businesses gives us plenty of opportunity to develop trust with a vendor. We can see the store, talk to the employees, and get a good look at how the vendor does business. Internet transactions are far less transparent. We can’t see who we are dealing with, don’t know what type of operation they really run, and might not be sure we can trust them. Public key infrastructure (PKI) was made to address these concerns and bring trust, integrity, and security to electronic transactions.

PKI is a framework that consists of hardware, software, and policies that exist to manage, create, store, and distribute keys and digital certificates. The components of this framework include the following:

  • Images The certificate authority (CA)

  • Images The registration authority (RA)

  • Images The certificate revocation list (CRL)

  • Images Digital certificates

  • Images A certificate distribution system

Certificate Authorities

A good analogy for a CA is the Department of Motor Vehicles (DMV), a state entity that is responsible for issuing driver’s licenses, which are the known standard for physical identification. If you cash a check, go to a night club, or catch a plane, your driver’s license is one document that is widely accepted at these locations to prove your identity. CAs are like DMVs: They vouch for your identity in a digital world. VeriSign, Thawte, and Entrust are some of the companies that perform public CA services.

A CA doesn’t have to be an external third party; many companies decide to tackle these responsibilities by themselves. Regardless of who performs them, the following steps are necessary:

  1. The CA verifies the certificate request with the help of the RA.

  2. The individual’s identification is validated.

  3. A certificate is created by the CA, which certifies that the person matches the public key that is being offered.

Registration Authorities

The RA is like a messenger: It’s positioned between the client and the CA. Although the RA cannot generate a certificate, it can accept requests, verify a person’s identity, and pass along the information to the CA for certificate generation.

RAs play a key role when certificate services expand to cover large geographic areas. One central CA can delegate its responsibilities to regional RAs around the world.

ExamAlert

Expect to see CISSP exam questions that deal with the workings of PKI. It’s important to understand that the RA cannot issue certificates.

Certificate Revocation Lists

Just like driver’s licenses, digital certificates might not always remain valid. (I had a great aunt who drove with an expired license for years. In her case, she was afraid that at 95 years old, she might not pass the eye exam.) In corporate life, certificates might become invalid because someone leaves the company, information might change, or a private key might become compromised. For these reasons, the CRL must be maintained.

The CRL is maintained by the CA, which signs the list to maintain its accuracy. Whenever problems with digital certificates are reported, those certificates are considered invalid, and the CA adds their serial numbers to the CRL. Anyone relying on a digital certificate can check the CRL to verify that the certificate has not been revoked. A newer alternative to CRLs is the Online Certificate Status Protocol (OCSP); it has a client/server design that scales better than a CRL. When a user connects to a server, an OCSP request for certificate status information is sent, and the responder replies with a status of good, revoked, or unknown. Regardless of which method is used, problems with certificates are nothing new; to read about the problem Dell had in 2015, see www.infoworld.com/article/3008422/security/what-you-need-to-know-about-dells-root-certificate-security-debacle.html.

Digital Certificates

Digital certificates are at the heart of a PKI system. A digital certificate serves two roles:

  • Images It ensures the integrity of the public key and makes sure the key remains unchanged and in a valid state.

  • Images It validates that the public key is tied to the stated owner and that all associated information is true and correct.

The information needed to accomplish these goals is added to the digital certificate. Digital certificates are formatted to the X.509 standard, whose most current version is Version 3. One of the key developments in Version 3 is the addition of extensions. Version 3 includes the flexibility to support other topologies. It can operate as a web of trust, much like PGP. An X.509 certificate includes the following elements, and examples showing some of these elements are provided in Figure 4.21:

  • Images Version

  • Images Serial number

  • Images Algorithm ID

  • Images Issuer

  • Images Validity

    • Images Not before (a specified date)

    • Images Not after (a specified date)

  • Images Subject

  • Images Subject public key information

    • Images Public key algorithm

    • Images Subject public key

  • Images Issuer—unique identifier (optional)

  • Images Subject—unique identifier (optional)

  • Images Extensions (optional)

Digital certificates play a vital role in the chain of trust. Public key encryption works well when you are dealing with people you know because it’s easy for you to send each other a public key. But what about communicating with people you don’t know?

Note

Digital certificates are used to prove your identity when performing electronic transactions.

Images

FIGURE 4.21 X.509 Certificate
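
To see these fields on a live certificate, you can fetch and parse one with a short Python sketch. It uses the standard library plus the third-party cryptography package, and the host name is only an example:

import ssl
from cryptography import x509

pem = ssl.get_server_certificate(("example.com", 443))   # fetch the server's certificate
cert = x509.load_pem_x509_certificate(pem.encode())

print(cert.version, cert.serial_number)                   # version and serial number
print(cert.issuer.rfc4514_string())                       # issuer
print(cert.subject.rfc4514_string())                      # subject
print(cert.not_valid_before, cert.not_valid_after)        # validity window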

Although you might want to use an external certificate authority, doing so is not mandatory. You could decide to have your own organization act as a certificate authority. Regardless of whether you have a third party handle certificate duties or you perform them yourself, digital certificates typically contain the following critical pieces of information:

  • Images Identification information including username, serial number, and validity dates of the certificates

  • Images The public key of the certificate holder

  • Images The digital signature of the signature authority, which is critical because it certifies and validates the integrity of the entire package

The Client’s Role in PKI

It might seem that up to this point, all the work has fallen on the shoulders of the CAs; this is not entirely true, however. Clients are responsible for requesting digital certificates and for maintaining the security of their private keys. Loss or compromise of a private key would be devastating; it would mean that communications were no longer secure. If you are dealing with credit card numbers or other pieces of user identity, this type of loss of security could lead to identity theft.

Protecting a private key is an important issue because it’s easier for an attacker to target the key than to try to crack the certificate service. Organizations should concern themselves with seven key management issues:

  • Images Generation

  • Images Distribution

  • Images Installation

  • Images Storage

  • Images Key change

  • Images Key control

  • Images Key disposal

Key recovery and control is an important issue that must be addressed. One basic recovery and control method is the M of N control method of access. This method is designed to ensure that no one person can have total control; it is closely related to dual control. Therefore, if N number of administrators have the ability to perform a process, M number of those administrators must authenticate for access to occur. M of N control should require physical presence for access. Here is an example: Suppose that a typical M of N control method requires that four people have access to the archive server and at least two of them must be present to accomplish access. In this situation, M = 2 and N = 4. This would ensure that no one person could compromise the security system or gain access.

Note

Many organizations use hardware security modules (HSMs) to securely store and securely retrieve these escrowed keys. HSM systems protect keys and can detect and prevent tampering by destroying the key material if unauthorized access is detected.

Integrity and Authentication

One of the things cryptography offers to its users is the capability to verify integrity and authentication. Integrity assures a recipient that the information remained unchanged and is in its true original form. Authentication provides the capability to ensure that a message was sent by the party you believe sent it and that it is received by the intended recipient. To help ensure your success on the CISSP exam, review the integrity methods listed in Table 4.10.

TABLE 4.10 Integrity Verification

Method            | Description
------------------|------------------------------------------------
Parity            | Simple error detection code for networking
Hashing           | Integrity
Digital signature | Integrity, authentication, and nonrepudiation
Hashed MAC        | Integrity and data origin authentication
CBC MAC           | Integrity and data origin authentication
Checksum          | Redundancy check, weak integrity

Hashing and Message Digests

Hashing algorithms function by taking a variable amount of data and compressing it into a fixed-length value referred to as a hash value. Hashing provides a fingerprint or message digest of the data. Strong hashing algorithms are hard to break and make it computationally infeasible to find two messages that produce the same hash value. Hashing can be used to meet the goals of integrity and/or nonrepudiation, depending on how the algorithms are used. Hashes can help verify that information has remained unchanged. Figure 4.22 provides an overview of the hashing process.

Hashing algorithms are not intended to be reversed to reproduce the data. The purpose of the message digest is to verify the integrity of data and messages. In a well-designed message digest, even a slight change in the input string should change the output hash value drastically. This is known as the avalanche effect. For example, the trojanized SolarWinds component associated with Sunburst has been identified by the MD5 hash value b91ce2fa41029f6955bff20079468448. If the hash of the component you are running matches this value, you are exposed to the Sunburst malware; a different value indicates that the version you have may not be vulnerable.

Images

FIGURE 4.22 Hashing
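
The avalanche effect and the integrity check just described are easy to demonstrate with Python's standard hashlib module. The file name below is a placeholder, not the actual SolarWinds component name:

import hashlib

# Avalanche effect: a one-character change produces a completely different digest
print(hashlib.sha256(b"quarterly report v1").hexdigest())
print(hashlib.sha256(b"quarterly report v2").hexdigest())

# Indicator check: compare a file's MD5 against a published known-bad value
known_bad = "b91ce2fa41029f6955bff20079468448"
with open("suspect_component.dll", "rb") as f:
    if hashlib.md5(f.read()).hexdigest() == known_bad:
        print("Hash matches the known vulnerable build")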

Programs such as Tripwire, MD5sum, and Windows System File Verification rely on hashing. Some common hashing algorithms include the following:

  • Images Message-Digest algorithm series

  • Images Secure Hash Algorithm (SHA)

  • Images HAVAL

  • Images RIPEMD

  • Images Whirlpool

  • Images Tiger

Note

While there are many hashing algorithms, two of the most common families are the SHA series and the MD series.

The biggest problem for hashing is the possibility of collisions. Collisions result when two or more different inputs create the same output. Collisions can be reduced by moving to an algorithm that produces a larger hash.

Note

When considering hash values, remember that close does not count! If the hashes being compared differ in any way—even by just a single bit—the data being digested is not the same.

MD Series

All of the MD algorithms were developed by Ron Rivest. They have progressed through a series of versions over the years as technology has advanced. The original was MD2, which was optimized for 8-bit computers and is somewhat outdated. It has also fallen out of favor because MD2 has been found to suffer from collisions. MD4 was the next algorithm to be developed. The message is processed in 512-bit blocks plus a 64-bit binary representation of the original length of the message, which is concatenated to the message. As with MD2, MD4 was found to be vulnerable to possible attacks. This is why MD5 was developed; it could be considered MD4 with additional safety mechanisms. MD5 processes a variable-size input and produces a fixed 128-bit output. As with MD4, it processes the data in blocks of 512 bits.

Tip

Collisions occur when two different messages are passed through a hash and produce the same message digest value. This is undesirable because it can mask the fact that someone might have changed the contents of a file or message. MD5 and SHA-0 have been shown to be vulnerable to forced collisions.

SHA-1/2

SHA-1 is a version of the Secure Hash Algorithm (SHA) that is similar to MD5. It is considered the successor to MD5 and produces a 160-bit message digest. SHA-1 processes messages in 512-bit blocks and adds padding, if needed, to get the data to add up to the right number of bits. Although the digest is 160 bits, collision attacks have reduced SHA-1's effective strength well below that figure. SHA-1 is one of a series of SHA algorithms including SHA-0, SHA-1, and SHA-2. SHA-0 is no longer considered secure, and SHA-1 is no longer recommended. SHA-2 is actually a family of functions and is a safe replacement for SHA-1. The SHA-2 family includes SHA-224, SHA-256, SHA-384, and SHA-512.

SHA-3

SHA-3 is the newest family of hashing algorithms. It is built on a different internal construction (the Keccak sponge function) and serves as a replacement for SHA-1 and an alternative to SHA-2.

HAVAL

HAVAL is another one-way hashing algorithm that is similar to MD5. Unlike MD5, HAVAL is not tied to a fixed message-digest value. HAVAL-3-128 makes three passes and produces a 128-bit fingerprint; HAVAL-4-256 makes four passes and produces a 256-bit fingerprint.

Message Authentication Code (MAC)

A MAC is like a poor man’s version of a digital signature and is somewhat similar to a digital signature except that it uses symmetric encryption. MACs are created and verified with the same secret (symmetric) key. Four types of MACs exist: unconditionally secure, hash function based, stream cipher based, and block cipher based.

HMAC

Hash-Based Message Authentication Code (HMAC) was designed to be resistant to multicollision attacks. This resistance was added by including a shared secret key. In simple terms, HMAC functions by using a hashing algorithm such as MD5 or SHA-1 and altering the initial state of the data to be processed with a secret key. Even if someone can intercept and modify the data, doing so is of little use without the secret key, because there is no easy way to re-create a valid hashed value without it. For HMAC to be used successfully, the recipient must have acquired a copy of the symmetric key through some secure out-of-band mechanism.
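
A minimal HMAC sketch using Python's standard hmac module follows; the shared secret is assumed to have been exchanged out of band:

import hashlib
import hmac

secret = b"shared-out-of-band-key"             # symmetric key known to sender and receiver
message = b"Transfer $100 to account 42"

tag = hmac.new(secret, message, hashlib.sha256).hexdigest()   # sender computes the MAC

# The receiver recomputes the tag and compares it using a constant-time check
expected = hmac.new(secret, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))      # True only if message and key both match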

CBC-MAC

A cipher block chaining MAC uses the CBC mode of a symmetric algorithm such as DES to create a MAC. CBC-MAC differs from HMAC in that CBC-MAC is built on a symmetric block cipher, whereas HMAC is built on a cryptographic hash function combined with a shared secret key. The last block of the resulting ciphertext is used as the MAC and is appended to the actual message.

CMAC

Cipher-Based Message Authentication Code (CMAC) addresses some of the security deficiencies of CBC-MAC. CMAC has more complex logic and uses mathematical functions that can make use of AES for increased security. You can use CMAC to verify both the integrity and authenticity of a message.

Digital Signatures

Digital signatures, which are based on public key cryptography, are used to verify the authenticity and integrity of a message. Digital signatures are created by passing a message’s contents through a hashing algorithm and encrypting it with a sender’s private key. When the message is received, the recipient decrypts the encrypted hash and then recalculates the received message’s hash. These values should match to ensure the validity of the message and to prove that the message was sent by the party believed to have sent it (because only that party has access to the private key). Let’s break this process out step by step with an example to help detail the operation:

  1. Bill produces a message digest by passing a message through a hashing algorithm.

  2. The message digest is encrypted using Bill’s private key.

  3. The message is forwarded to the recipient, Alice.

  4. Alice creates a message digest from the message with the same hashing algorithm that Bill used. Alice then decrypts Bill’s signature digest by using his public key.

  5. Finally, Alice compares the two message digests—the one originally created by Bill and the other that she created. If the two values match, Alice can rest assured that the message is unaltered.

Figure 4.23 illustrates this process and demonstrates how the hashing function ensures integrity and the signing of the hash value provides authentication and nonrepudiation.

Images

FIGURE 4.23 Digital Signatures
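
The sign-and-verify flow can be sketched with the third-party cryptography package. This example uses RSA with PSS padding as one possible signature scheme; it illustrates the general process, not a specific standard:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Bill's key pair; Alice needs only the public half
bill_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"Chapter 4 final draft"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

# Steps 1-2: hash the message and sign the digest with the private key
signature = bill_private.sign(message, pss, hashes.SHA256())

# Steps 4-5: Alice verifies with Bill's public key; an exception is raised on mismatch
bill_private.public_key().verify(signature, message, pss, hashes.SHA256())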

DSA

Things are much easier when we have standards, and that is what the Digital Signature Algorithm (DSA) was designed for. DSA was proposed by NIST in 1991 for use in its Digital Signature Standard (DSS). DSA involves key generation, signature generation, and signature verification. The original standard uses SHA-1 to produce a 160-bit hash of the message, which is then signed with the signer's private key. Signing speeds are equivalent to RSA signing, but signature verification is much slower. The DSA digital signature is a pair of large numbers represented as binary digits.

Cryptographic System Review

As a recap and to help ensure your success on the CISSP exam, review the well-known cryptographic systems in Table 4.11.

TABLE 4.11 Algorithms and Their Functions

Category          | Algorithm
------------------|------------------------------------------------------------------------------
Symmetric         | DES, 3DES, Blowfish, Twofish, IDEA, CAST, SAFER, Skipjack, and RC (series)
Asymmetric        | RSA, ECC, Diffie-Hellman, Knapsack, LUC, and El Gamal
Hashing           | MD (series), SHA (series), HAVAL, Tiger, Whirlpool, and RIPEMD
Digital signature | DSA

Cryptographic Attacks

Attacks on cryptographic systems are not new. Whenever someone has information to hide, there is usually someone who would like to reveal it. Cryptanalysis is the analysis of cryptography, as can be seen in the parts of the word: crypt = secret or hidden and analysis = loosen or dissolve. The ultimate goal of cryptanalysis is to determine the key value, and these types of activities occur every day at organizations like the NSA and at locations where hackers and security specialists are working. Depending on which key is cracked, an attacker could gain access to confidential information or could pretend to be someone else and attempt some sort of masquerade attack.

Because cryptography can be a powerful tool and the ability to break many algorithms is limited, the Coordinating Committee for Multilateral Export Controls (CoCom) was established to deal with the control of cryptographic systems. CoCom disbanded in 1994 and was replaced by the Wassenaar Arrangement on Export Controls for Conventional Arms and Dual-Use Goods and Technologies. The Wassenaar Arrangement had wide support, bringing together more than 30 countries to control the export of cryptography.

One issue to consider before launching a cryptographic attack is what is known about the algorithm. Is it public or private? Auguste Kerckhoffs is credited with formulating, in the nineteenth century, Kerckhoffs’s principle, which states that a cryptographic system should remain secure even if everything about it, except the key, is public knowledge. An example of this debate can be seen in the development and crack of the Content Scrambling System (CSS). This method of encryption was developed by the DVD Copy Control Association (DVD CCA). Because the algorithm was proprietary, it was not made public. CSS was designed to allow only authorized DVD players to decode scrambled content stored on original DVD discs. This was the case until Jon Lech Johansen and others cracked CSS and posted a utility called DeCSS to the Internet in 1999. So, whereas some argue that algorithms should be secret, others continue to believe that open standards and systems allow for more robust, secure systems.

With a review of some of the basics completed, let’s now review some common attack techniques that might target a cryptographic system:

  • Images Known plaintext attack: This type of attack requires the attacker to have the plaintext and ciphertext of one or more messages. Encrypted file archives such as zip are prone to this type of attack.

  • Images Ciphertext-only attack: This type of attack requires the attacker to obtain several encrypted messages that have been encrypted using the same encryption algorithm. The attacker does not have the associated plaintext but attempts to crack the code by looking for patterns and using statistical analysis.

  • Images Chosen ciphertext: The attacker can choose ciphertexts and obtain their decrypted plaintext; the recovered plaintext can then be used to help discover the key.

  • Images Chosen plaintext: An attacker can have plaintext messages encrypted and then can analyze the ciphertext output.

  • Images Differential cryptanalysis: This type of attack, which is generally used to target block ciphers, works by looking for the difference between related bits of plaintext that are encrypted, and the difference between their resultant ciphertexts.

  • Images Linear cryptanalysis: Along with differential cryptanalysis, this is one of the two most widely used attacks on block ciphers. Linear cryptanalysis uses linear approximations of the cipher's operation to identify the most probable key bits. Known plaintext/ciphertext pairs are then studied to derive information about the key used to create them.

  • Images Birthday attack: This type of attack gets its name from the birthday paradox, which states that within a surprisingly small group of people, the chance that two or more share a birthday is unexpectedly high. The same logic is applied to calculate collisions in hash functions. A message digest can be susceptible to birthday attacks if the output of the hash function is not large enough to avoid collisions. (A short calculation illustrating the paradox appears after this list.)

  • Images Key clustering: This vulnerability can occur when two different keys produce the same ciphertext from the same message. This can sometimes be the result of having a small key space or might be a characteristic of some cryptosystems. Key clustering is a real problem as it means that two or more different keys could also decrypt the secure content. A strong cryptosystem should have a low frequency of key clustering occurrences. If it doesn’t, this is yet another way that a cryptosystem might be targeted for attack.

  • Images Replay attack: This method of attack occurs when the attacker can intercept cryptographic keys and reuse them later to either encrypt or decrypt messages.

  • Images Man-in-the-middle attack: This type of attack is carried out when attackers place themselves in the communications path between two users. From this position, the attackers may be able to intercept and modify communications.

  • Images Side-channel attack: This type of attack is based on side-channel information, such as timing, sound, or electromagnetic leaks.

    ExamAlert

    When comparing cryptographic algorithms, it is important to keep in mind that the larger the work factor, the stronger the cryptosystem. Cryptographers develop systems with high work factors to withstand attacks, not to be foolproof. All systems can be cracked with enough time and determination. Sometimes attackers simply look for vulnerabilities that have yet to be publicly discovered. These are known as zero-day vulnerabilities.

  • Images Rubber hose attack: When all else fails, this method might be used to extract a key value or other information. This type of attack might include threats, violence, extortion, or blackmail because humans are a bigger weakness than cryptosystems.
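
The birthday paradox mentioned in the list above is easy to quantify. The short Python calculation below shows that only 23 people give better-than-even odds of a shared birthday, which is why hash outputs must be large enough to make collisions impractical:

def collision_probability(n, space=365):
    """Probability that at least two of n samples collide in a space of equally likely values."""
    p_unique = 1.0
    for i in range(n):
        p_unique *= (space - i) / space
    return 1 - p_unique

print(round(collision_probability(23), 3))    # about 0.507 for 23 people
print(round(collision_probability(50), 3))    # about 0.970 for 50 people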

When attempting a cryptographic attack, the work factor must be considered. The work factor can be measured as the time and effort needed to perform a brute-force attack against an encryption system. The following are some examples of successful attacks against cryptosystems that have occurred in the recent past:

  • Images BEAST: BEAST exploited a weakness in the way CBC mode was used in TLS 1.0 and allowed an attacker to violate same-origin constraints.

  • Images CRIME and BREACH: CRIME targeted compression over TLS, and BREACH was an instance of CRIME used over HTTP.

  • Images Cryptolocker: This ransomware had the ability to encrypt local and network files using RSA encryption.

  • Images DROWN: DROWN exploited servers that still supported the obsolete SSL 2 protocol to attack connections made with newer protocols.

  • Images FREAK: FREAK used a man-in-the-middle position to force clients and servers to negotiate weak, export-grade keys.

  • Images Meltdown: Meltdown targeted hardware and Intel x86 processors to attempt a race condition and side-channel attack. It would allow a rogue process to read all memory, regardless of authorization.

  • Images POODLE: This cipher attack affected all block ciphers in SSL 3.0 and led to a migration from SSL to TLS. A POODLE variant also affected TLS 1.0 to 1.2.

  • Images Spectre: Spectre targeted hardware and microprocessors with branch prediction. It is an example of a side-channel and timing attack.

Site and Facility Security Controls

Keep in mind that good security requires multiple layers of defense, both logical and physical. Site and facility security controls are vital parts of strong facility security. They are covered in detail in Chapter 6, “Identity and Access Management,” and Chapter 8, “Security Operations,” but this section presents some common controls that are used for physical security:

  • Images Physical access controls: These controls include gates, fences, doors, guards, and locks. Fencing can be made from a range of components, such as steel, wood, brick, or concrete, but must be the correct design for the level of protection needed. Guards can also be used in multiple roles to monitor, greet, sign in, and escort visitors. Locks come in many types, sizes, and shapes; they are both some of the oldest theft-deterrent mechanisms and the most commonly used deterrents.

  • Images Controls in server rooms and data centers: Controls in these areas can include time restrictions on access, controls that specify who can enter specific areas, and where servers and data centers are placed. A well-placed data center should have limited accessibility and typically no more than two doors. A first-floor interior room is a good location for a data center. The ceilings should extend all the way up past the drop ceiling, access to the room should be controlled, and doors should be solid core with hinges to the inside.

  • Images Evidence storage controls: If you maintain a security operations center or deal with computer forensics, you might need to keep an evidence storage area. Typically, such storage is located in a secure area, with a locked secure cabinet or safe and a log to record activity related to chain of custody.

  • Images Restricted access and work area security: The goal of a security design should be to make it as hard as possible for unauthorized personnel to gain access to sensitive resources.

  • Images HVAC and environmental controls: Heat can be damaging to computer equipment, and most data centers are kept around 70°F. Security management should know who is in charge of the HVAC system, and the system must be controlled to protect the organization and its occupants from chemical and biological threats. Electrical power, like HVAC, is a resource that most of us take for granted. Even areas that have dependable power can be subject to outages, line noise, or electromagnetic interference (EMI). Businesses must be prepared to deal with all these factors. Uninterruptible power supplies (UPSs) are typically used to help with these issues.

  • Images Fire prevention, detection, and suppression controls: A big part of prevention is making sure people are trained and know how to prevent potential fire hazards. Policy must define how employees will be trained to deal with fires. Companies should make sure they have appropriate and functioning fire-detection equipment so that employees can be alerted to possible danger. Just being alerted to a fire is not enough. Employees need to know what to do and how to handle different types of fires.

Note

Physical security is covered in greater depth in Chapters 6 and 8.

Exam Prep Questions

1. Which of the following best describes a superscalar processor?

images A. A superscalar processor can execute only one instruction at a time.

images B. A superscalar processor has two large caches that are used as input and output buffers.

images C. A superscalar processor can execute multiple instructions at the same time.

images D. A superscalar processor has two large caches that are used as output buffers.

2. Which of the following are developed by programmers and used to allow the bypassing of normal processes during development but are left in the software when it ships to the customer?

images A. Backdoors

images B. Traps

images C. Buffer overflows

images D. Covert channels

3. Which of the following attacks occurs when an attacker can intercept session keys and reuse them at a later date?

images A. Known plaintext attack

images B. Ciphertext-only attack

images C. Man-in-the-middle attack

images D. Replay attack

4. Which of the following is a disadvantage of symmetric encryption?

images A. Key size

images B. Speed

images C. Key management

images D. Key strength

5. Which of the following is not an example of a symmetric algorithm?

images A. DES

images B. RC5

images C. AES

images D. RSA

6. Which of the following was the first model based on confidentiality that was developed?

images A. Bell-LaPadula

images B. Biba

images C. Clark-Wilson

images D. Take-Grant

7. Which of the following models is integrity based and was developed for commercial applications?

images A. Information flow model

images B. Clark-Wilson model

images C. Bell-LaPadula model

images D. Brewer and Nash model

8. Which of the following does the Biba model address?

images A. Focuses on internal threats

images B. Focuses on external threats

images C. Addresses confidentiality

images D. Addresses availability

9. Which model is also known as the Chinese Wall model?

images A. Biba model

images B. Take-Grant model

images C. Harrison-Ruzzo-Ullman model

images D. Brewer and Nash model

10. Which hashing algorithm produces 160-bit output?

images A. MD2

images B. MD4

images C. SHA-1

images D. El Gamal

11. What is the result of the * property in the Bell-LaPadula model?

images A. No read up

images B. No write up

images C. No read down

images D. No write down

12. What is the result of the simple integrity property of the Biba model?

images A. No read up

images B. No write up

images C. No read down

images D. No write down

13. Which of the following can be used to connect different MAC systems together?

images A. Labels

images B. Reference monitor

images C. Controls

images D. Guards

14. Which of the following security modes of operation best describes a user’s valid need to know all data?

images A. Dedicated

images B. System high

images C. Compartmented

images D. Multilevel

15. Which of the following security models makes use of the TLC concept?

images A. Biba model

images B. Clark-Wilson model

images C. Bell-LaPadula model

images D. Brewer and Nash model

16. Which of the following DES modes is considered the most vulnerable to attack?

images A. CBC

images B. ECB

images C. CFB

images D. OFB

17. Which of the following is the key size DES uses?

images A. 56 bits

images B. 64 bits

images C. 96 bits

images D. 128 bits

18. Which implementation of Triple DES uses the same key for the first and third iterations?

images A. DES-EEE3

images B. HAVAL

images C. DES-EEE2

images D. DES-X

19. Which of the following algorithms is used for key distribution and not encryption or digital signatures?

images A. El Gamal

images B. HAVAL

images C. Diffie-Hellman

images D. ECC

20. You are working with the file integrity program Tripwire and have been asked to review some recent issues with a cryptographic program. What is it called when two different keys generate the same ciphertext for the same message?

images A. Hashing

images B. Collision

images C. Key clustering

images D. Output verification

Answers to Exam Prep Questions

1. C. A superscalar processor can execute multiple instructions at the same time. Answer A describes a scalar processor; it can execute only one instruction at a time. Answer B does not describe a superscalar processor because it does not have two large caches that are used as input and output buffers. Answer D is incorrect because a superscalar processor does not have two large caches that are used as output buffers.

2. A. Programmers use backdoors, also referred to as maintenance hooks, during development to get easy access into a piece of software. Answer B is incorrect because a trap is a message used by Simple Network Management Protocol (SNMP) to report a serious condition to a management station. Answer C is incorrect because a buffer overflow occurs due to poor programming. Answer D is incorrect because a covert channel is a means of moving information in a manner that was not intended.

3. D. A replay attack occurs when the attacker can intercept session keys and reuse them at a later date. Answer A is incorrect because a known plaintext attack requires the attacker to have the plaintext and ciphertext of one or more messages. Answer B is incorrect because a ciphertext-only attack requires the attacker to obtain several messages encrypted using the same encryption algorithm. Answer C is incorrect because a man-in-the-middle attack is carried out when attackers place themselves in the communications path between two users.

4. C. Key management is a primary disadvantage of symmetric encryption. Answers A, B, and D are incorrect because encryption speed, key size, and key strength are not disadvantages of symmetric encryption.

5. D. RSA is an asymmetric algorithm. Answers A, B, and C are incorrect because DES, RC5, and AES are examples of symmetric algorithms.

6. A. Bell-LaPadula was the first model developed that is based on confidentiality. Answers B, C, and D are incorrect: The Biba and Clark-Wilson models both deal with integrity, whereas the Take-Grant model is based on four basic operations.

7. B. The Clark-Wilson model was developed for commercial activities. This model dictates that the separation of duties must be enforced, subjects must access data through an application, and auditing is required. Answers A, C, and D are incorrect. The information flow model addresses the flow of information and can be used to protect integrity or confidentiality. The Bell-LaPadula model is a confidentiality model, and the Brewer and Nash model was developed to prevent conflicts of interest.

8. B. The Biba model assumes that internal threats are being protected by good coding practices and, therefore, focuses on external threats. Answers A, C, and D are incorrect. The Biba model addresses only integrity and not availability or confidentiality.

9. D. The Brewer and Nash model is also known as the Chinese Wall model and was specifically developed to prevent conflicts of interest. Answers A, B, and C are incorrect because they do not fit the description. The Biba model is integrity-based, the Take-Grant model is based on four modes, and the Harrison-Ruzzo-Ullman model defines how access rights can be changed, created, or deleted.

10. C. SHA-1 produces a 160-bit message digest. Answers A, B, and D are incorrect because MD2 and MD4 both create a 128-bit message digest, and El Gamal is not a hashing algorithm.

11. D. The * property enforces “no write down” and is used to prevent someone with high clearance from writing data to a lower classification. Answers A, B, and C do not properly describe the Bell-LaPadula model’s star property.

12. C. The purpose of the simple integrity property of the Biba model is to prevent someone from reading an object of lower integrity. This helps protect the integrity of sensitive information.

13. D. A guard is used to connect various MAC systems together and allow for communication between these systems. Answer A is incorrect because labels are associated with MAC systems but are not used to connect them together. Answer B is incorrect because the reference monitor is associated with the TCB. Answer C is incorrect because the term controls here is simply a distractor.

14. A. Of the four modes listed, only the dedicated mode supports a valid need to know for all information on the system. Therefore, answers B, C, and D are incorrect.

15. B. The Clark-Wilson model was designed to support integrity and is focused on TLC, which stands for tampered, logged, and consistent. Answers A, C, and D are incorrect; the Biba, Bell-LaPadula, and Brewer and Nash models are not associated with TLC.

16. B. Electronic Code Book mode is susceptible to known plaintext attacks because the same plaintext always produces the same ciphertext. Answers A, C, and D are incorrect. Because CBC, CFB, and OFB all use some form of feedback, which helps randomize the encrypted data, they do not suffer from this deficiency and are considered more secure.

17. A. Each 64-bit plaintext block is separated into two 32-bit blocks and then processed by the 56-bit key. The total key size is 64 bits, but 8 bits are used for parity, thereby making 64, 96, and 128 bits incorrect.

18. C. DES-EEE2 performs the first and third encryption passes using the same key. Answers A, B, and D are incorrect: DES-EEE3 uses three different keys for encryption; HAVAL is used for hashing, and DES does not use it; and DES-X is a keying variant that strengthens single DES through key whitening and is not a 3DES implementation.

19. C. Diffie-Hellman is used for key distribution but not encryption or digital signatures. Answer A is incorrect because El Gamal is used for digital signatures, data encryption, and key exchange. Answer B is incorrect because HAVAL is used for hashing. Answer D is incorrect because ECC is used for digital signatures, data encryption, and key exchange.

20. C. Key clustering is said to occur when two different keys produce the same ciphertext for the same message. A good algorithm, using different keys on the same plaintext, should generate a different ciphertext. Answers A, B, and D are incorrect: Hashing is used for integrity verification; a collision occurs when two different messages are hashed and output the same message digest; and output verification is simply a distractor.

Need to Know More?

Microcode: https://www.techopedia.com/definition/8332/microcode

Trust and assurance: www.cs.clemson.edu/course/cpsc420/material/Assurance/Assurance%20and%20Trust.pdf

TPM binding and sealing: https://docs.microsoft.com/it-it/windows/iot-core/secure-your-device/tpm

Covert-timing-channel attacks: http://crypto.stanford.edu/~dabo/papers/ssl-timing.pdf

Digital rights management: https://digitalguardian.com/blog/what-digital-rights-management

HVAC and cybersecurity: https://www.propmodo.com/the-cyber-security-threats-lurking-in-your-hvac-system/

Restricted and work area security: https://info-savvy.com/cissp-restricted-and-work-area-security-bk1d3t11st6/

The Bell-LaPadula model: csrc.nist.gov/publications/secpubs/rainbow/std001.txt

ISO 17799: https://www.iso.org/standard/39612.html

Vulnerabilities in embedded devices: http://www.cse.psu.edu/~pdm12/cse597g-f15/readings/cse597g-embedded_systems.pdf

Five common vulnerabilities in industrial control systems: https://www.lanner-america.com/blog/5-common-vulnerabilities-industrial-control-systems/

Symmetric encryption: https://www.thesslstore.com/blog/symmetric-encryption-101-definition-how-it-works-when-its-used/

Ten types of vulnerabilities in web-based systems: https://www.terraats.com/2019/03/12/10-types-of-security-vulnerabilities-for-web-applications/

Site and facility security control checklist: http://www.mekabay.com/infosecmgmt/facilities_checklist.pdf

The BIBA security model: http://nathanbalon.com/projects/cis576/sBiba_Security.pdf
