Chapter 5. Security Engineering


Terms you’ll need to understand:

- Buffer overflows

- Security models

- Rings of protection

- Trusted Computer System Evaluation Criteria (TCSEC)

- Information Technology Security Evaluation Criteria (ITSEC)

- Common Criteria

- Reference monitor

- Trusted computing base

- Open and closed systems

- Emanations

- Mobile system vulnerabilities

Topics you’ll need to master:

- Engineering processes using secure design principles

- Understanding confidentiality models such as Bell-LaPadula

- Identifying integrity models such as Biba and Clark-Wilson

- Understanding common flaws and security issues associated with security architecture designs

- Distinguishing between certification and accreditation


Introduction

The security engineering domain deals with hardware, software, security controls, and documentation. When hardware is designed, it needs to be built to specific standards that should provide mechanisms to protect the confidentiality, integrity, and availability of the data. The operating systems (OS) that will run on the hardware must also be designed in such a way as to ensure security.

Building secure hardware and operating systems is just a start. Both vendors and customers need to have a way to verify that hardware and software perform as stated, to rate these systems, and to have some level of assurance that such systems will function in a known manner. This is the purpose of evaluation criteria. They allow the parties involved to have a level of assurance.

This chapter introduces the trusted computer base and the ways in which systems can be evaluated to assess their level of security. To pass the CISSP exam, you need to understand system hardware and software models and how models of security can be used to secure systems. Standards such as Common Criteria, Information Technology Security Evaluation Criteria (ITSEC), and Trusted Computer System Evaluation Criteria (TCSEC) are covered on the exam.

Fundamental Concepts of Security Models

Modern computer systems are built from physical hardware components. These components interact with software in the form of the OS, applications, and firmware to do the work we need done. At the core of every computer system is the central processing unit (CPU) and the hardware that makes it run. The CPU is just one of the items you can find on the motherboard, which serves as the base for most crucial system components. Let's start at the heart of the system and work our way out.

Central Processing Unit

The CPU is the heart of the computer system and serves as its brain. The CPU consists of the following:

- An arithmetic logic unit (ALU), which performs arithmetic and logical operations. The ALU is the brain of the CPU.

- A control unit, which manages the instructions it receives from memory. It decodes and executes the requested instructions and determines which instructions have priority for processing.

- Memory, which is used to hold instructions and data to be processed. This is not your typical memory; it is much faster than non-CPU memory.

The CPU executes a series of basic operations: fetch, decode, execute, and write. Pipelining overlaps these steps so that the CPU can begin fetching the next instruction while the current one is still being processed. The CPU can operate in one of four states:

- Supervisor state—Program can access the entire system

- Problem state—Only non-privileged instructions can be executed

- Ready state—Program is ready to resume processing

- Wait state—Program is waiting for an event to complete
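As a rough illustration of the fetch, decode, execute, and write steps, here is a toy interpreter in Python. The instruction names, the accumulator, and the one-slot data segment are invented for this sketch; real CPUs decode binary opcodes in hardware.

```python
# Toy fetch-decode-execute loop. Instructions live in "memory" as
# (opcode, operand) pairs; a real CPU would decode binary opcodes.
memory = [
    ("LOAD", 10),   # put the value 10 in the accumulator
    ("ADD", 32),    # add 32 to the accumulator
    ("STORE", 0),   # write the accumulator to data[0]
    ("HALT", None),
]
data = [0]          # a one-slot data segment
acc = 0             # accumulator register
pc = 0              # program counter

while True:
    opcode, operand = memory[pc]   # fetch
    pc += 1
    if opcode == "LOAD":           # decode + execute
        acc = operand
    elif opcode == "ADD":
        acc += operand
    elif opcode == "STORE":        # write the result back to memory
        data[operand] = acc
    elif opcode == "HALT":
        break

print(data[0])  # 42
```

A pipelined CPU would overlap these stages, fetching instruction n+1 while decoding instruction n.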

Because CPUs have very specific designs, the operating system and applications must be developed to work with the CPU. CPUs also have different types of registers to hold data and instructions. The base register contains the beginning address assigned to a process, whereas the limit register marks the end of the memory segment. Together, these components are responsible for the recall and execution of programs. CPUs have made great strides, as Table 5.1 documents. As the size of transistors has decreased, the number of transistors that can be placed on a CPU has increased. By increasing the total number of transistors and ramping up clock speed, the power of CPUs has increased exponentially. As an example, a 3.06 GHz Intel Core i7 can perform tens of thousands of millions of instructions per second (MIPS).


TABLE 5.1 CPU Advancements


Note

Processor speed is measured in MIPS (millions of instructions per second). This standard is used to indicate how fast a CPU can work.


Two basic designs of CPUs are manufactured for modern computer systems:

- Reduced Instruction Set Computing (RISC)—Uses simple instructions that require a reduced number of clock cycles.

- Complex Instruction Set Computing (CISC)—Performs multiple operations for a single instruction.

The CPU requires two inputs to accomplish its duties: instructions and data. The data is passed to the CPU for manipulation where it is typically worked on in either the problem or the supervisor state. In the problem state, the CPU works on the data with non-privileged instructions. In the supervisor state, the CPU executes privileged instructions.


ExamAlert

A superscalar processor is one that can execute multiple instructions at the same time, whereas a scalar processor can execute only one instruction at a time. You will need to know this distinction for the exam.


The CPU can be classified into one of several categories depending on its functionality. When the computer's CPU, motherboard, and operating system all support the functionality, the computer system is categorized as follows:

- Multiprogramming—Can interleave two or more programs for execution at any one time.

- Multitasking—Can perform two or more tasks or subtasks at a time.

- Multiprocessing—Supports two or more CPUs. Desktop operating systems support only a limited number of processors, whereas server operating systems such as Windows Server 2012 support many.

A multiprocessor system can work in symmetric or asymmetric mode. In symmetric mode, all processors are equal: any processor can handle any task, and all devices and resources are equally accessible to every processor. In asymmetric mode, one CPU schedules and coordinates tasks for the other processors and resources. The data that CPUs work with is usually part of an application or program. Each running program is tracked by a process ID (PID). Anyone who has ever looked at Task Manager in Windows or executed a ps command on a Linux machine has probably seen a PID number. You can manipulate the priority of these tasks as well as start and stop them. Fortunately, most programs do much more than the first C program you wrote that probably just said, "Hello World." Each independent stream of execution within a program is known as a thread.

A program that has the capability to carry out more than one thread at a time is known as multi-threaded. You can see an example of this in Figure 5.1.


FIGURE 5.1 Processes and threads.
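The relationship between a process (with its PID) and its threads can be demonstrated with a short Python sketch; the worker function and the number of threads are arbitrary choices for illustration.

```python
# One process, identified by a PID, running several threads concurrently.
import os
import threading

print("process ID:", os.getpid())   # the PID you would see in Task Manager or ps

results = [0] * 4                   # threads share the process's memory

def worker(i):
    results[i] = i * i              # each thread performs part of the work

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()                       # the program is now multi-threaded
for t in threads:
    t.join()                        # wait for all threads to finish

print(results)  # [0, 1, 4, 9]
```

Note that all four threads share one PID and one address space, which is exactly why process isolation between separate programs matters.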

Operating systems use process isolation to separate processes from one another. These techniques help ensure that each application receives adequate processor time to operate properly and that one process cannot corrupt another. The four process isolation techniques used are:

- Encapsulation of processes or objects—Other processes cannot interact with the internals of the application.

- Virtual mapping—The application is written in such a way that it believes it is the only application running.

- Time multiplexing—Allows applications or processes to share the computer's resources.

- Naming distinctions—Processes are assigned their own unique names.


ExamAlert

To get a good look at naming distinctions, run ps aux from the terminal of a Linux system and note the unique process ID (PID) values.


An interrupt is another key piece of a computer system. An interrupt line is an electrical connection between a device and the CPU; the device can place a signal on this line to get the attention of the CPU. The following are common input/output methods:

- Programmed I/O—Used to transfer data between the CPU and a peripheral device; the CPU moves each byte itself.

- Interrupt-driven I/O—A more efficient input/output method that requires more complex hardware.

- I/O using DMA—I/O based on direct memory access; the DMA controller can bypass the processor and write information directly into main memory.

- Memory-mapped I/O—Requires the CPU to reserve address space for I/O functions and to use the same addresses for both memory and I/O devices.

- Port-mapped I/O—Uses a special class of instructions that can read and write a single byte to an I/O device.


ExamAlert

Interrupts can be maskable and non-maskable. Maskable interrupts can be ignored by the application or the system, whereas non-maskable interrupts cannot be ignored by the system. An example of a non-maskable interrupt can be seen in Windows when you enter Ctrl-Alt-Delete.


There is a natural hierarchy to memory and, as such, there must be a way to manage memory and ensure that it does not become corrupted. That is the job of the memory management system. Memory management systems on multitasking operating systems are responsible for:

- Relocation—Moves processes within memory and swaps memory contents to secondary storage as needed.

- Protection—Controls access to memory segments and restricts which processes can write to memory.

- Sharing—Allows sharing of information based on a user's level of access control; for instance, Mike can read the object, whereas Shawn can read and write to the object.

- Logical organization—Provides for the sharing and support of dynamic link libraries.

- Physical organization—Provides for the physical organization of memory.

Let’s now look at storage media.

Storage Media

A computer is not just a CPU; memory is also an important component. The CPU uses memory to store instructions and data, which makes memory an important type of storage media. With the exception of DMA transfers, the CPU is the only component that directly accesses memory; systems are designed that way because the CPU has a high level of system trust. The CPU can use different addressing schemes to communicate with memory, including absolute addressing and relative addressing. Memory can be addressed either physically or logically. Physical addressing refers to the hard-coded address assigned to the memory. Applications and programmers writing code use logical addresses. Relative addresses apply an offset to a known base address. Not only can memory be addressed in different ways, but there are also different types of memory. Memory can be either nonvolatile or volatile. The sections that follow provide examples of both.
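A minimal sketch of relative addressing and the base/limit register pair described above; the memory size, segment location, and values are invented for illustration.

```python
# Simulating relative addressing: physical address = base register + offset.
memory = [0] * 16               # pretend physical memory
BASE = 8                        # base register: start of this process's segment
LIMIT = 4                       # limit register: length of the segment

def write(offset, value):
    # The bounds check is what "protected memory" provides: a process
    # cannot reach outside its own segment. Skipping this check is the
    # root cause of buffer overflow problems.
    if not 0 <= offset < LIMIT:
        raise MemoryError("segment violation")
    memory[BASE + offset] = value   # logical offset -> physical address

write(2, 99)                        # logical address 2
print(memory[10])                   # 99: mapped to physical address 8 + 2
```

A write to offset 4 or beyond would raise a segment violation rather than silently overwriting another process's memory.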


Tip

Two important security concepts associated with storage are protected memory and memory addressing. For the exam, you should understand that protected memory prevents other programs or processes from gaining access or modifying the contents of address space that has previously been assigned to another active program. Memory can be addressed either physically or logically. Memory addressing describes the method used by the CPU to access the contents of memory. This is especially important for understanding the root cause for buffer overflow attacks.


RAM

Random access memory (RAM) is volatile memory: if power is lost, the data is destroyed. RAM can be configured as Dynamic Random Access Memory (DRAM), which must be refreshed every few milliseconds, or Static Random Access Memory (SRAM), which uses circuit latches to represent binary data and needs no refresh.

SRAM doesn’t require a refresh signal as DRAM does. The chips are more complex and are thus more expensive. However, they are faster. DRAM access times come in at 60 nanoseconds (ns) or more; SRAM has access times as fast as 10 ns. SRAM is often used for cache memory.

DRAM chips are cheap to manufacture. Dynamic refers to the memory chips’ need for a constant update signal (also called a refresh signal) to retain the information that is written there. Currently, there are five popular implementations of DRAM:

- Synchronous DRAM (SDRAM)—Shares a common clock signal with the transmitter of the data. The computer's system bus clock provides the common signal that all SDRAM components use for each step to be performed.

- Double Data Rate (DDR)—Supports twice the transfer rate of ordinary SDRAM by transferring data on both the rising and falling edges of the clock.

- DDR2—Splits each clock pulse in two, doubling the number of operations it can perform relative to DDR.

- DDR3—A DRAM interface specification that transfers data at twice the rate of DDR2 (eight times the speed of its internal memory arrays), enabling higher bandwidth and peak data rates.

- DDR4—Offers higher speed than DDR2 or DDR3 and is one of the latest variants of dynamic random-access memory (DRAM). It is not compatible with any earlier type of random access memory (RAM).


ExamAlert

Memory leaks occur when programs use RAM but cannot release it. Programs that suffer from memory leaks will eventually use up all available memory and can cause the system to halt or crash.
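The leak pattern can be illustrated in Python, where "leaked" memory is simply memory still referenced by something the program forgot about; the request handler, the cache, and the sizes here are invented for the sketch.

```python
# A common leak pattern: a long-lived cache that only grows.
cache = {}

def handle_request(n):
    # Bug: results are cached forever and never evicted, so memory
    # use grows with every distinct request the program serves.
    if n not in cache:
        cache[n] = [0] * 1000   # simulate a large result held in RAM
    return cache[n]

for i in range(100):
    handle_request(i)

print(len(cache))  # 100 entries that will never be released
```

Left running, a program like this eventually exhausts available memory, which is exactly the failure mode the ExamAlert describes.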


ROM

Read-only memory (ROM) is nonvolatile memory that retains information even if power is removed. ROM is typically used to load and store firmware. Firmware is embedded software much like BIOS or UEFI.


Tip

Most newer systems use Unified Extensible Firmware Interface (UEFI) instead of BIOS. UEFI offers several advantages over BIOS, including support for remote diagnostics and repair of systems even if no OS is installed.


Some common types of ROM include:

- Erasable Programmable Read-Only Memory (EPROM)

- Electrically Erasable Programmable Read-Only Memory (EEPROM)

- Flash memory

- Programmable Logic Devices (PLD)

Secondary Storage

Although memory plays an important role in the world of storage, other long-term types of storage are also needed. One of these is sequential storage. Anyone who has owned an IBM PC with a tape drive knows what sequential storage is. Tape drives are a type of sequential storage that must be read sequentially from beginning to end. Another well-known type of secondary storage is direct-access storage. Direct-access storage devices do not have to be read sequentially; the system can identify the location of the information and go directly to it to read the data. A hard drive is an example of a direct-access storage device: A hard drive has a series of platters, read/write heads, motors, and drive electronics contained within a case designed to prevent contamination. Hard drives are used to hold data and software. Software is the operating system or an application that you’ve installed on a computer system.

Compact discs (CDs) are a type of optical media. They use a laser/opto-electronic sensor combination to read or write data. A CD can be read-only, write-once, or rewriteable. CDs can hold up to around 800MB on a single disc. A CD is manufactured by applying a thin layer of aluminum to what is primarily hard clear plastic. During manufacturing, or whenever a CD-R is burned, small bumps or pits are placed in the surface of the disc. These bumps or pits are converted into binary ones or zeros. Unlike the tracks and sectors of a floppy, a CD comprises one long spiral track that begins at the inside of the disc and continues toward the outer edge.

Digital video discs (DVDs) are very similar to CDs because both are optical media—DVDs just hold more data. The current version of optical storage is the Blu-ray disc. These optical disks can hold 50GB or more of data. More and more systems today are moving to solid-state drives (SSDs) and flash memory storage. Sizes up to 1 TB can now be found.

I/O Bus Standards

The data that the CPU is working with must have a way to move from the storage media to the CPU. This is accomplished by means of a bus. The bus is nothing more than lines of conductors that transmit data between the CPU, storage media, and other hardware devices. From the point of view of the CPU, the various adaptors plugged into the computer are external devices. These connectors and the bus architecture used to move data to the devices has changed over time. Some bus architectures are listed here:

- ISA—The Industry Standard Architecture (ISA) bus started as an 8-bit bus designed for IBM PCs. It is now obsolete.

- PCI—The Peripheral Component Interconnect (PCI) bus was developed by Intel and served as a replacement for ISA and other bus standards. PCI Express is now the current standard.

- PCIe—The Peripheral Component Interconnect Express (PCIe) bus was developed as an upgrade to PCI. It offers several advantages, such as greater bus throughput, a smaller physical footprint, better performance, and better error detection and reporting.

- SATA—The Serial ATA (SATA) standard is the current standard for connecting hard drives and solid-state drives to computers. It uses a serial design with a smaller cable, greater speeds, and better airflow inside the computer case.

- SCSI—The Small Computer Systems Interface (SCSI) bus allows a variety of devices to be daisy-chained off a single controller. Many servers use the SCSI bus for their preferred hard drive solution.

Two serial bus standards, Universal Serial Bus (USB) and FireWire, have also gained wide market share. USB overcame the limitations of traditional serial interfaces. USB 2.0 devices can communicate at speeds of up to 480 Mbps (60 MBps), whereas USB 3.0 devices have a maximum bandwidth of 5 Gbps (625 MBps). Up to 127 devices can be connected to a single host controller through tiered hubs, eliminating the need for expansion slots on the motherboard. The newest USB standard is 3.1; its biggest improvement is a boost in data transfer bandwidth to up to 10 gigabits per second.

USB is used for flash memory, cameras, printers, external hard drives, and even phones. Two of the fundamental advantages of USB are that it has broad product support and that many devices are immediately recognized when connected. Many Apple computers make use of the Thunderbolt interface, and some FireWire (IEEE 1394) interfaces are still found on digital audio and video equipment.

Virtual Memory and Virtual Machines

Modern computer systems have developed other ways in which to store and access information. One of these is virtual memory. Virtual memory is the combination of the computer’s primary memory (RAM) and secondary storage (the hard drive or SSD). By combining these two technologies, the OS can make the CPU believe that it has much more memory than it actually does. Examples of virtual memory include:

- Page file

- Swap space

- Swap partition

These virtual memory types are user-defined in terms of size, location, and so on. When RAM is nearly depleted, the operating system begins saving data onto the computer's hard drive. This process is called paging: it takes part of a program out of memory and uses the page file to save those parts of the program. If the system requires more memory than paging will free, it can write an entire process out to swap space. A paging file/swap file allows the data to be moved back and forth between the hard drive and RAM as needed. A specific partition can even be configured to hold such data, in which case it is called a swap partition. Individuals who have used a computer's hibernation function, or who have ever opened more programs than they had memory to support, are probably familiar with the operation of virtual memory.
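The paging behavior just described can be sketched as a miniature simulation. The two-frame RAM, the least-recently-used replacement policy, and the page names are assumptions made for illustration, not a description of any particular operating system.

```python
# A miniature demand-paging simulation: when RAM frames run out, the
# least-recently-used page is written to "swap" and its frame is reused.
from collections import OrderedDict

NUM_FRAMES = 2
ram = OrderedDict()   # resident pages, kept in least-recently-used order
swap = {}             # pages paged out to disk
page_faults = 0

def access(page):
    global page_faults
    if page in ram:
        ram.move_to_end(page)               # hit: mark as most recently used
        return
    page_faults += 1                        # page fault: not resident in RAM
    if len(ram) >= NUM_FRAMES:
        victim, contents = ram.popitem(last=False)
        swap[victim] = contents             # page the victim out to swap space
    ram[page] = swap.pop(page, f"data-{page}")  # page in (from swap if there)

for p in [1, 2, 1, 3, 2]:
    access(p)

print(page_faults)   # 4 faults: pages 1, 2, 3, then 2 again after eviction
print(sorted(ram))   # [2, 3] are resident; page 1 has been paged out
```

The cost of each fault is a disk access, which is why a system that is constantly paging ("thrashing") slows to a crawl.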

Closely related to virtual memory are virtual machines, created with products such as VMware Workstation and Oracle VM VirtualBox. VMware is one of the leading products in the machine virtualization market. A virtual machine enables the user to run a second OS within a virtual host. For example, a virtual machine will let you run another Windows OS, a Linux distribution, or any other OS that runs on an x86 processor and supports standard BIOS/UEFI booting. Virtual systems make use of a hypervisor to manage the virtualized hardware resources presented to a guest operating system. A Type 1 hypervisor runs directly on the hardware, with VM resources provided by the hypervisor, whereas a Type 2 hypervisor runs on a host operating system above the hardware. Virtual machines are a huge trend and can be used for development, system administration, and production, and to reduce the number of physical devices needed. Hypervisors are also being used to implement virtual switches, routers, and firewalls.


Tip

Virtualization is not the only thing that is changing in the workplace; cloud computing enables employees to work from many different locations. Because the applications and data can reside in the cloud, a user can access this content from any location that has connectivity. The potential disadvantage of cloud computing is security-related. Something to consider is who owns the cloud. Is it a private cloud (owned by the company) or a public cloud (owned by someone else)? In addition, what is the physical location of the cloud, who has access to it, and is it shared (co-tenancy)? Each of these items is critical to consider before placing any corporate assets in a cloud.


Computer Configurations

The following is a list of some of the most commonly used computer and device configurations:

- Print server—Print servers are usually located close to printers and allow many users to access the same printer and share its resources.

- File server—File servers give users a centralized site to store files. This provides an easy way to perform backups, because backups can be done on one server rather than on all the client computers. It also allows for group collaboration and multi-user access.

- Application server—Application servers allow users to run applications not installed on the end user's system. This is a very popular concept in thin client environments, where the clients depend on a central server for processing power. Licensing is an important consideration.

- Web server—Web servers provide web services to internal and external users via web pages. A sample web address, or URL (uniform resource locator), is www.thesolutionfirm.com.

- Database server—Database servers store and provide access to data such as product inventory, price lists, customer lists, and employee data. Because databases hold sensitive information, they require well-designed security controls; access requests are often brokered by middleware that sits between untrusted users and the database holding the data.

- Laptops and tablets—Mobile devices that are easily lost or stolen. Mobile devices have become much more powerful and must be properly secured.

- Smartphones—Gone are the cell phones of the past that simply placed calls and sent SMS texts. Today's smartphones are more like computers: they have a large amount of processing capability, can take photos, and have onboard storage, Internet connectivity, and the ability to run applications. These devices are of particular concern as more companies start to support bring your own device (BYOD), because such devices can easily fall outside of company policies and controls.

- Embedded devices—Include ATM machines, point-of-sale terminals, and even smart watches. More and more everyday products contain embedded technology, such as smart refrigerators and Bluetooth-enabled toilets. The security of embedded devices is a growing concern because these devices may not be patched or updated on a regular basis.


Note

Expect more and more devices to have embedded technology as the Internet of Things (IoT) grows. Several companies even sell toilets with Bluetooth and SD card technology built in, and like any other device they are not immune to hacking: www.extremetech.com/extreme/163119-smart-toilets-bidet-hacked-via-bluetooth-gives-new-meaning-to-backdoor-vulnerability.


Security Architecture

Although a robust functional architecture is a good start, real security requires that you have a security architecture in place to control processes and applications. Concepts related to security architecture include the following:

- Protection rings

- Trusted computer base (TCB)

- Open and closed systems

- Security modes of operation

- Operating states

- Recovery procedures

- Process isolation

Protection Rings

The operating system knows who and what to trust by relying on protection rings. Protection rings work much like your network of family, friends, co-workers, and acquaintances. The people who are closest to you, such as your spouse and family, have the highest level of trust. Those who are distant acquaintances or are unknown to you probably have a lower level of trust. It’s much like the guy you see in New York City on Canal Street trying to sell new Rolex watches for $100; you should have little trust in him and his relationship with the Rolex company!

Protection rings are largely conceptual; although x86 CPUs provide four hardware rings, most operating systems use only two of them. Figure 5.2 shows an illustration of the protection ring schema. The first implementation of such a system was in MIT's Multics time-shared operating system.


FIGURE 5.2 Protection rings.

The protection ring model provides the operating system with various levels at which to execute code or to restrict that code’s access. The idea is to use engineering design to build in layers of control using secure design principles. The rings provide much greater granularity than a system that just operates in user and privileged modes. As code moves toward the outer bounds of the model, the layer number increases and the level of trust decreases.

- Layer 0—The most trusted level. The operating system kernel resides at this level. Any process running at layer 0 is said to be operating in privileged mode.

- Layer 1—Contains non-privileged portions of the operating system.

- Layer 2—Where I/O drivers, low-level operations, and utilities reside.

- Layer 3—Where applications and processes operate. This is the level at which individuals usually interact with the operating system. Applications operating here are said to be working in user mode. User mode is often referred to as problem mode because the less-trusted applications run here; therefore, most problems occur here.

Not all systems use all rings. Most systems that are used today operate in two modes: user mode and supervisor (privileged) mode.

Items that need high security, such as the operating system security kernel, are located at the center ring. This ring is unique because it has access rights to all domains in that system. Protection rings are part of the trusted computing base concept.
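The ring model above can be sketched as a simple privilege check: a call succeeds only when the caller's ring number is low (trusted) enough for the requested operation. The operation names and required ring numbers are invented for this sketch, and real systems enforce this in CPU hardware rather than in application code.

```python
# Conceptual ring-based privilege checking: lower ring number = more trust.
REQUIRED_RING = {
    "write_page_table": 0,   # kernel only (privileged mode)
    "load_driver": 2,        # drivers and more-trusted code
    "open_file": 3,          # any application (user/problem mode)
}

def invoke(operation, caller_ring):
    # Code in an outer (higher-numbered) ring may not perform an
    # operation reserved for an inner ring.
    if caller_ring > REQUIRED_RING[operation]:
        raise PermissionError(f"ring {caller_ring} may not {operation}")
    return f"{operation} done"

print(invoke("open_file", 3))          # user mode succeeds
try:
    invoke("write_page_table", 3)      # user mode touching kernel structures
except PermissionError as e:
    print("denied:", e)                # the attempt is blocked
```

In a real OS, an application crosses this boundary only through a controlled system call, which transitions the CPU into supervisor mode.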

Trusted Computer Base

The trusted computer base (TCB) is the sum of all the protection mechanisms within a computer and is responsible for enforcing the security policy. This includes hardware, software, controls, and processes. The TCB is responsible for confidentiality and integrity. The TCB is the only portion of a system that operates at a high level of trust. It monitors four basic functions:

- Input/output operations—I/O operations are a security concern because operations from the outermost rings might need to interface with rings of greater protection. These cross-domain communications must be monitored.

- Execution domain switching—Applications running in one domain or level of protection often invoke applications or services in other domains. If these requests are for more sensitive data or services, their activity must be controlled.

- Memory protection—To truly be secure, the TCB must monitor memory references to verify confidentiality and integrity in storage.

- Process activation—Registers, process status information, and file access lists are vulnerable to loss of confidentiality in a multiprogramming environment. This type of potentially sensitive information must be protected.


ExamAlert

For the exam, you should understand not only that the TCB is tasked with enforcing security policy but also that the TCB is the sum of all protection mechanisms within a computer system that have also been evaluated for security assurance. This includes hardware, firmware, and software within the TCB.

Those components that have not been evaluated are said to fall outside the security perimeter.


The TCB monitors the functions in the preceding list to ensure that the system operates correctly and adheres to security policy. The TCB follows the reference monitor concept. The reference monitor is an abstract machine that is used to implement security. The reference monitor’s job is to validate access to objects by authorized subjects. The reference monitor operates at the boundary between the trusted and untrusted realm. The reference monitor has three properties:

- Cannot be bypassed; it controls all access and must be invoked for every access attempt

- Cannot be altered; it is protected from modification or change

- Must be small enough to be verified and tested correctly


ExamAlert

For the exam, you should understand that the reference monitor enforces the security requirement for the security kernel.


The reference monitor is much like the bouncer at a club because it stands between each subject and object. Its role is to verify that the subject meets the minimum requirements for access to an object, as illustrated in Figure 5.3.


FIGURE 5.3 Reference monitor.


Note

Subjects are active entities such as people, processes, or devices.



Note

Objects are passive entities that are designed to contain or receive information. Objects can be processes, software, or hardware.


The reference monitor can be designed to use tokens, capability lists, or labels.

- Tokens—Communicate security attributes before requesting access.

- Capability lists—Offer faster lookup than security tokens but are not as flexible.

- Security labels—Used by high-security systems because labels offer permanence; only security labels provide this.
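As a sketch of the reference monitor mediating every subject-to-object access, the following models a simple label-based (MAC-style) check. The subjects, objects, and sensitivity levels are invented for illustration.

```python
# A toy reference monitor using security labels: a subject may read an
# object only if the subject's clearance dominates the object's label.
LEVELS = {"unclassified": 0, "secret": 1, "top_secret": 2}

subjects = {"mike": "secret", "shawn": "top_secret"}   # active entities
objects = {"plans.doc": "top_secret"}                  # passive entities

def can_read(subject, obj):
    # Like the bouncer at the club: every access attempt is checked here,
    # and there is no way around this function to reach the object.
    return LEVELS[subjects[subject]] >= LEVELS[objects[obj]]

print(can_read("shawn", "plans.doc"))  # True: top_secret clearance
print(can_read("mike", "plans.doc"))   # False: secret < top_secret
```

Note how small the mediation function is; keeping it small is what makes the third reference monitor property (verifiability) achievable.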


Note

Note that each time the term security labels is listed, it is used to denote high-security MAC-based systems.


At the heart of the system is the security kernel, which handles all user/application requests for access to system resources. A small security kernel is easy to verify, test, and validate as secure. In practice, however, the kernel is often larger than necessary because code placed inside it runs faster and has privileged access. Vendors have taken different approaches to operating system design; DOS, for example, used a monolithic kernel. Several of these designs are shown in Figure 5.4 and described here:

- Monolithic architecture—All of the OS processes work in kernel mode.

- Layered OS design—Separates system functionality into hierarchical layers.

- Microkernel—A smaller kernel that supports only critical processes.

- Hybrid microkernel—Structured like a microkernel but implemented in terms of a monolithic design.


Source: http://upload.wikimedia.org/wikipedia/commons/d/d0/OS-structure2.svg

FIGURE 5.4 Operating System Architecture.

Although the reference monitor is conceptual, the security kernel can be found at the heart of every system. The security kernel is responsible for running the required controls used to enforce functionality and resist known attacks. As mentioned previously, the reference monitor operates at the security perimeter—the boundary between the trusted and untrusted realm. Components outside the security perimeter are not trusted. All trusted access control mechanisms are inside the security perimeter.

Open and Closed Systems

Open systems accept input from other vendors and are based on standards and practices that allow connection to different devices and interfaces. The goal is full interoperability, so that the system can be used to its full potential.

Closed systems are proprietary. They use devices that are not based on open standards and that are generally locked. They lack standard interfaces to allow connection to other devices and interfaces.

An example of this can be seen in the United States cell phone industry. AT&T and T-Mobile cell phones are based on the worldwide Global System for Mobile Communications (GSM) standard and can be used overseas easily on other networks by simply changing the subscriber identity module (SIM). These are open-system phones. Phones that are used on the Sprint network use Code Division Multiple Access (CDMA), which does not have worldwide support.


Note

The concept of open and closed can apply to more than just hardware. Open and closed software is about whether others can view and/or alter your source code. As an example, the Galaxy Nexus phone running Android is open source, whereas the Apple iPhone is closed source.


Security Modes of Operation

Several security modes of operation are based on Department of Defense (DoD 5220.22-M) classification levels. The DoD defines four modes of operation based on the classification of the information being processed on a system and the clearance levels of its authorized users (see Table 5.2):

Image Dedicated—A need-to-know is required to access all information stored or processed. Every user requires formal access with clearance and approval, and has executed a signed nondisclosure agreement for all the information stored and/or processed. This mode must also support enforced system access procedures. All hardcopy output and media removed will be handled at the level for which the system is accredited until reviewed by a knowledgeable individual. All users can access all data.

Image System High—All users have a security clearance; however, a need-to-know is only required for some of the information contained within the system. Every user requires access approval, and needs to have signed nondisclosure agreements for all the information stored and/or processed. Access to an object by users not already possessing access permission must only be assigned by authorized users of the object. This mode must be capable of providing an audit trail that records time, date, user ID, terminal ID (if applicable), and file name. All users can access some data based on their need to know.

Image Compartmented—Valid need-to-know is required for some of the information on the system. Every user has formal access approval for all information they will access on the system, and requires proper clearance for the highest level of data classification on the system. All users have signed NDAs for all information they will access on the system. All users can access some data based on their need to know and formal access approval.

Image Multi-level—Every user has a valid need-to-know for some of the information that is on the system, and more than one classification level can be processed at the same time. Users have formal access approval and have signed NDAs for all information they will access on the system. Mandatory access controls provide a means of restricting access to files based on their sensitivity label. All users can access some data based on their need to know, clearance, and formal access approval.
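The four modes differ along three questions: do all users have clearance for all information, do all users have formal access approval for all information, and do all users have a need to know for all information? That decision logic can be sketched as a small, hypothetical helper (a simplification of Table 5.2; the function name is our own):

```python
# Hypothetical classifier for DoD security modes of operation.
# Each argument answers: is this true for ALL users and ALL data?
def security_mode(clearance_for_all, approval_for_all, need_to_know_for_all):
    if clearance_for_all and approval_for_all and need_to_know_for_all:
        return "Dedicated"
    if clearance_for_all and approval_for_all:
        return "System High"   # need to know only for some data
    if clearance_for_all:
        return "Compartmented"  # formal approval only for some data
    return "Multilevel"         # clearance only for some data
```

Note that signed NDAs are required in every mode, so they do not distinguish one mode from another.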

Image

TABLE 5.2 Security Modes of Operation

Operating States

When systems are used to process and store sensitive information, there must be some agreed-on methods for how this will work. Generally, these concepts were developed to meet the requirements of handling sensitive government information with categories such as “sensitive,” “secret,” and “top secret.” The burden of handling this task can be placed on either administration or the system itself.

Generally, two designs are used: single-state and multistate systems.

Single-state systems are designed and implemented to handle one category of information. The burden of management falls on the administrator who must develop the policy and procedures to manage this system. The administrator must also determine who has access and what type of access the users have. These systems are dedicated to one mode of operation, so they are sometimes referred to as dedicated systems.

Multistate systems depend not on the administrator, but on the system itself. They are capable of having more than one user log in to the system and access various types of data depending on each user's level of clearance. As you would probably expect, these systems are not inexpensive. The XTS-400, which runs the Secure Trusted Operating Program (STOP) OS from BAE Systems, is an example of a multistate system. Multistate systems can operate as compartmentalized systems: Mike can log in with a secret clearance and access secret-level data, whereas Dwayne can log in with top-secret access and see a different level of data. Such systems are compartmentalized and can segment data on a need-to-know basis.


Tip

Security-Enhanced Linux and TrustedBSD are freely available implementations of operating systems with limited multistate capabilities. Security evaluation is a problem for these free MLS implementations because of the expense and time it would take to fully qualify these systems.


Recovery Procedures

Unfortunately, things don’t always operate normally; they sometimes go wrong and a system failure can occur. A system failure can compromise the system by corrupting data, opening security holes, or leaving it in an inconsistent state. Efficient designs have built-in recovery procedures to recover from potential problems:

Image Fail safe—If a failure is detected, the system is protected from compromise by termination of services.

Image Fail soft—A detected failure terminates the noncritical process. Systems in fail soft mode are still able to provide partial operational capability.

It is important to be able to recover when an issue arises. This requires taking a proactive approach and backing up all critical files on a regular schedule. The goal of recovery is to recover to a known state. Common issues that require recovery include:

Image System Reboot—A controlled shutdown and restart performed in response to a failure of a trusted process.

Image System Restart—Occurs automatically after an uncontrolled shutdown forces the system down; recovery takes place on reboot.

Image System Cold Start—Results from a major failure or component replacement, when normal recovery procedures cannot bring the system to a consistent state.

Image System Compromise—Caused by an attack or breach of security.

Process Isolation

Process isolation is required to maintain a high level of system trust. To be certified as a multilevel security system, process isolation must be supported. Without process isolation, there would be no way to prevent one process from spilling over into another process’s memory space, corrupting data, or possibly making the whole system unstable. Process isolation is performed by the operating system; its job is to enforce memory boundaries. Separation of processes is an important topic—otherwise the system could be designed in such a way that one flaw in the design or configuration could cause an entire system to stop operating. This is known as a single point of failure (SPOF).

For a system to be secure, the operating system must prevent unauthorized users from accessing areas of the system to which they should not have access, be robust, and have no single point of failure. Sometimes this is accomplished by means of a virtual machine. A virtual machine allows users to believe that they have the use of the entire system, but in reality, processes are completely isolated. To take this concept a step further, some systems that require truly robust security also implement hardware isolation. This means that the processes are segmented not only logically but also physically.


Note

Java uses a form of virtual machine: its sandbox contains code and allows it to function only in a controlled manner.


Common Formal Security Models

Security models are used to determine how security will be implemented, what subjects can access the system, and what objects they will have access to. Simply stated, they are a way to formalize security policy. Security models of control are typically implemented by enforcing integrity, confidentiality, or other controls. Keep in mind that each of these models lays out broad guidelines and is not specific in nature. It is up to the developer to decide how these models will be used and integrated into specific designs, as shown in Figure 5.5.

Image

FIGURE 5.5 Security model fundamental concepts used in the design of an OS.

The sections that follow discuss the different security models of control in greater detail. The first three models discussed are considered lower-level models.

State Machine Model

The state machine model is based on a finite state machine, as shown in Figure 5.6. State machines are used to model complex systems and deal with acceptors, recognizers, state variables, and transaction functions. The state machine defines the behavior of a finite number of states, the transitions between those states, and actions that can occur.

Image

FIGURE 5.6 Finite state model.

The most common representation of a state machine is a state machine table. For example, as Table 5.3 illustrates, if the state machine is in current state (B) with condition (2), the next state is (C) with condition (3) as we progress through the options.

Image

TABLE 5.3 State Machine Table

A state machine model monitors the status of the system to prevent it from slipping into an insecure state. Systems that support the state machine model must have all their possible states examined to verify that all processes are controlled in accordance with the system security policy. The state machine concept serves as the basis of many security models. The model is valued for knowing in what state the system will reside. As an example, if the system boots up in a secure state, and every transaction that occurs is secure, it must always be in a secure state and not fail open. (To fail open means that all traffic or actions would be allowed, not denied.)
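A state machine of this kind is straightforward to sketch in Python. The transition table below is an assumption for demonstration only (it is not taken from Table 5.3), but it shows the central security idea: every transition is either explicitly defined or treated as a failure:

```python
# Illustrative finite state machine with an explicit transition table.
TRANSITIONS = {
    ("A", 1): "B",
    ("B", 2): "C",
    ("C", 3): "A",
}

def next_state(state, condition):
    """Return the next state, or None for an undefined transition.
    A secure state machine treats an undefined transition as a
    failure and refuses to proceed (it fails closed, not open)."""
    return TRANSITIONS.get((state, condition))
```

Because every reachable state is enumerated, each one can be examined against the security policy, which is what the model requires.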

Information Flow Model

The information flow model is an extension of the state machine concept and serves as the basis of design for both the Biba and Bell-LaPadula models, which are discussed in the sections that follow. The information flow model consists of objects, state transitions, and lattice (flow policy) states. The real goal of the information flow model is to prevent unauthorized, insecure information flow in any direction. This model and others can make use of guards. Guards allow the exchange of data between various systems.

Noninterference Model

The Noninterference model, as defined by Goguen and Meseguer, was designed to ensure that objects and subjects at different levels don’t interfere with the objects and subjects at other levels. The model uses inputs and outputs of either low or high sensitivity. Each data access attempt is independent of all others, and data cannot cross security boundaries.

Confidentiality

Although the preceding models serve as a basis for many security models that were developed later, one major concern is confidentiality. Government entities such as the DoD are concerned about the confidentiality of information. The DoD divides information into categories to ease the burden of managing who has access to what levels of information. DoD information classifications are “sensitive but unclassified” (SBU), “confidential,” “secret,” and “top secret.” One of the first models to address the needs of the DoD was the Bell-LaPadula model.

Bell-LaPadula

The Bell-LaPadula state machine model enforces confidentiality. This model uses mandatory access control to enforce the DoD multilevel security policy. For subjects to access information, they must have a valid need to know, and their clearance must meet or exceed the information’s classification level.

The Bell-LaPadula model is defined by the following properties:

Image Simple security property (ss property)—This property states that a subject at one level of confidentiality is not allowed to read information at a higher level of confidentiality. This is sometimes referred to as “no read up.” An example is shown in Figure 5.7.

Image

FIGURE 5.7 Bell-LaPadula Simple Security Model.

Image Star * security property—This property states that a subject at one level of confidentiality is not allowed to write information to a lower level of confidentiality. This is also known as “no write down.” An example is shown in Figure 5.8.

Image

FIGURE 5.8 Bell-LaPadula Star * Property.

Image Strong star * property—This property states that a subject cannot read or write to an object of higher or lower sensitivity. An example is shown in Figure 5.9.

Image

FIGURE 5.9 Bell-LaPadula Strong Star Property.
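The simple security and star properties reduce to two comparisons. The sketch below is illustrative only (level names and numbers are our own, with a higher number meaning more sensitive):

```python
# Hypothetical Bell-LaPadula property checks.
LEVELS = {"confidential": 1, "secret": 2, "top_secret": 3}

def blp_read_ok(subject, obj):
    return LEVELS[subject] >= LEVELS[obj]   # simple security: no read up

def blp_write_ok(subject, obj):
    return LEVELS[subject] <= LEVELS[obj]   # star property: no write down
```

Notice the two comparison operators point in opposite directions: a secret-cleared subject may read confidential data but may not write to it, because writing down could leak secret information to a lower level.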


ExamAlert

Review the Bell-LaPadula simple security and Star * security models closely; they are easy to confuse with Biba’s two defining properties.



Tip

A fourth but rarely implemented property called the discretionary security property allows users to grant access to other users at the same clearance level by means of an access matrix.


Although the Bell-LaPadula model went a long way toward defining the operation of secure systems, it is not perfect. It does not address security issues such as covert channels. It was designed in an era when mainframes were the dominant platform, it was designed for multilevel security, and it takes only confidentiality into account.


Tip

Know that the Bell-LaPadula model deals with confidentiality. As such, reading information at a higher level than what is allowed would endanger confidentiality.


Integrity

Integrity is a good thing. It is one of the basic elements of the security triad, along with confidentiality and availability. Integrity plays an important role in security because it can verify that unauthorized users are not modifying data, authorized users don’t make unauthorized changes, and that databases balance and data remains internally and externally consistent. Although governmental entities are typically very concerned with confidentiality, other organizations might be more focused on the integrity of information. In general, integrity has four goals:

1. Prevent data modification by unauthorized parties

2. Prevent unauthorized data modification by authorized parties

3. Reflect the real world

4. Maintain internal and external consistency


Note

Some sources list only three goals of integrity by combining goals 3 and 4: the system must maintain internal and external consistency, and the data must reflect the real world.


Two security models that address system integrity are Biba and Clark-Wilson; they are addressed next. The Biba model addresses only the first integrity goal, whereas Clark-Wilson addresses all four.

Biba

The Biba model was the first model developed to address the concerns of integrity. Originally published in 1977, this lattice-based model has the following defining properties:

Image Simple integrity property—This property states that a subject at one level of integrity is not permitted to read an object of lower integrity. This is sometimes referred to as “no read down.”

Image Star * integrity property—This property states that a subject at one level of integrity is not permitted to write to an object of higher integrity. This is sometimes referred to as “no write up.”

Image Invocation property—This property prohibits a subject at one level of integrity from invoking a subject at a higher level of integrity.
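Biba’s first two properties are the mirror image of Bell-LaPadula’s, which a short illustrative sketch makes plain (integrity level names and numbers here are our own, with a higher number meaning higher integrity):

```python
# Hypothetical Biba property checks: "no read down, no write up."
LEVELS = {"low": 1, "medium": 2, "high": 3}

def biba_read_ok(subject, obj):
    return LEVELS[subject] <= LEVELS[obj]   # simple integrity: no read down

def biba_write_ok(subject, obj):
    return LEVELS[subject] >= LEVELS[obj]   # star integrity: no write up
```

Compared with the Bell-LaPadula sketch, the comparison operators are flipped: Biba protects high-integrity data from contamination by lower-integrity subjects, rather than protecting secrets from leaking downward.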


Tip

One easy way to help you remember these rules is to note that the Star property in both Biba and Bell-LaPadula deal with write. Just remember, “It’s written in the stars!”


Biba addresses only the first goal of integrity—preventing data modification by unauthorized users. Other concerns, such as confidentiality, are not examined. It also assumes that internal threats are handled by good coding practices, and therefore focuses on external threats.


Tip

To remember the purpose of the Biba model, just keep in mind that the “i” in Biba stands for integrity.



Tip

Remember that the Biba model deals with integrity and as such, writing to an object of a higher level might endanger the integrity of the system.


Clark-Wilson

The Clark-Wilson model was created in 1987. It differs from previous models because it was developed to be used for commercial activities. This model addresses all the goals of integrity. Clark-Wilson dictates that the separation of duties must be enforced, subjects must access data through an application, and auditing is required. Some terms associated with Clark-Wilson include:

Image User

Image Transformation procedure

Image Unconstrained data item

Image Constrained data item

Image Integrity verification procedure

Clark-Wilson features an access control triple, where subjects must access programs before accessing objects (subject-program-object). The access control triple is composed of the user, transformational procedure, and the constrained data item. It was designed to protect integrity and prevent fraud. Authorized users cannot change data in an inappropriate way. The Clark-Wilson model checks three attributes: tampered, logged, and consistent, or “TLC.”

It also differs from the Biba model in that subjects are restricted. This means that a subject at one level of access can read one set of data, whereas a subject at another level has access to a different set of data. Clark-Wilson controls the way in which subjects access objects so that the internal consistency of the system can be ensured, and that data can be manipulated only in ways that protect consistency. Integrity verification procedures (IVPs) ensure that a data item is in a valid state. Data cannot be tampered with while being changed and the integrity of the data must be consistent. Clark-Wilson requires that all changes must be logged. Clark-Wilson is made up of transformation procedures (TP). Constrained data items (CDI) are data for which integrity must be preserved. Items not covered under the model are considered unconstrained data items (UDIs).
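The access control triple and the logging requirement can be sketched together. Everything below is hypothetical (the user, TP, and CDI names are invented for illustration), but it shows the core rule: a user touches a CDI only through an authorized TP, and every attempt is logged:

```python
# Hypothetical Clark-Wilson sketch: (user, TP, CDI) triples.
ALLOWED_TRIPLES = {("alice", "post_payment", "accounts")}
AUDIT_LOG = []

def run_tp(user, tp, cdi):
    """Permit the transformation procedure only if this exact
    user/TP/CDI triple is authorized; log every attempt."""
    allowed = (user, tp, cdi) in ALLOWED_TRIPLES
    AUDIT_LOG.append((user, tp, cdi, allowed))
    return allowed
```

Because direct access to the CDI is never offered, well-formed transactions (the TPs) are the only way data can change state, which is how the model preserves internal consistency.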


Tip

Remember that the Clark-Wilson model requires that users be authorized to access and modify data, and that it deals with three key terms: tampered, logged, and consistent, or “TLC.”


Take-Grant Model

The Take-Grant model is another confidentiality-based model that supports four basic operations: take, grant, create, and revoke. A subject with the take right can take rights held by another subject, and a subject with the grant right can grant its rights to another subject. The create operation allows a subject to create new objects or rights, and the revoke operation allows a subject to revoke rights it has previously granted.
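The model is usually drawn as a directed graph whose edges carry rights. A toy sketch of the take operation (subject and object names are hypothetical) might look like this:

```python
# Toy take-grant sketch: rights are sets attached to (holder, target)
# edges in a directed graph.
rights = {("s1", "s2"): {"take"}, ("s2", "o1"): {"read"}}

def take(taker, victim, right, target):
    """If taker holds 'take' over victim, taker acquires the
    victim's right over target."""
    if "take" in rights.get((taker, victim), set()) and \
       right in rights.get((victim, target), set()):
        rights.setdefault((taker, target), set()).add(right)
        return True
    return False
```

The value of the graph formulation is that questions such as “can s1 ever obtain read access to o1?” become reachability problems that can be answered mechanically.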

Brewer and Nash Model

The Brewer and Nash model is similar to the Bell-LaPadula model and is also sometimes referred to as the Chinese Wall model. It was developed to prevent conflict of interest (COI) problems. As an example, imagine that your security firm does security work for many large firms. If one of your employees could access information about all the firms that your company has worked for, that person might be able to use this data in an unauthorized way. Therefore, the Brewer and Nash model is more context-oriented in that it prevents a worker consulting for one firm from accessing data belonging to another, thereby preventing any COI.

Other Models

A security model defines and describes what protection mechanisms are to be used and what these controls are designed to achieve. Although the previous section covered some of the more heavily tested models, you should have a basic understanding of a few more. These security models include:

Image Graham Denning model—This model uses a formal set of eight protection rules for which each object has an owner and a controller. These rules define what you can create, delete, read, grant, or transfer.

Image Harrison-Ruzzo-Ullman model—This model is similar to the Graham Denning model and details how subjects and objects can be created, deleted, accessed, or changed.

Image Lipner—This model combines elements of both Bell-LaPadula and Biba to guard both confidentiality and integrity.

Image Lattice model—This model is associated with MAC. Controls are applied to objects and the model uses security levels that are represented by a lattice structure. This structure governs information flow. Subjects of the lattice model are allowed to access an object only if the security level of the subject is equal to or greater than that of the object. Overall access limits are set by having a least upper bound and a greatest lower bound for each “security level.”
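The lattice model’s bounds can be illustrated with a small sketch. The level names, numbers, and default bounds below are assumptions chosen for demonstration:

```python
# Hypothetical lattice access check with an upper and lower bound.
LEVELS = {"public": 0, "internal": 1, "secret": 2, "top_secret": 3}

def lattice_access_ok(subject_level, object_level,
                      glb="internal", lub="secret"):
    """The subject may access the object only if the object lies
    within the subject's greatest lower / least upper bound window
    and the subject's level is at least the object's."""
    s, o = LEVELS[subject_level], LEVELS[object_level]
    return LEVELS[glb] <= o <= LEVELS[lub] and s >= o
```

The bounds are what distinguish a lattice from a simple ordering: a subject is confined to a window of levels rather than being granted everything beneath its clearance.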


ExamAlert

Spend some time reviewing all the models discussed in this section. Make sure you know which models are integrity-based, which are confidentiality-based, and the properties of each; you will need to know this distinction for the exam.



Tip

Although the security models listed in this section are the ones the exam will most likely focus on, there are many other models, such as Sutherland, Boebert and Kain, Karger, Gong, and Jueneman. Even though many security professionals may never have heard of these, those who develop systems most likely learned of them in college.


Product Security Evaluation Models

A set of evaluation standards is needed when evaluating the security capabilities of information systems. The following documents and guidelines were developed to help evaluate and establish system assurance. They are important to the CISSP candidate because they provide a level of trust and assurance that these systems will operate in a given and predictable manner. A trusted system has undergone testing and validation to a specific standard. Assurance is freedom from doubt: a level of confidence that a system will perform as required every time it is used.

Think of product evaluation models as being similar to EPA gas mileage ratings. These give the buyer and seller a way to evaluate different automotive brands and models. In the world of product security, such systems can be used by developers when preparing to sell a system. The same evaluation models can be used by the buyer when preparing to make a purchase, as they provide a way to measure the system’s effectiveness and benchmark its abilities. The following documents and guidelines facilitate these needs.

The Rainbow Series

The Rainbow Series is aptly named because each book in the series has a cover of a different color. This 6-foot-tall stack of books was developed by the National Computer Security Center (NCSC), an organization that is part of the National Security Agency (NSA). The guidelines were developed for the Trusted Product Evaluation Program (TPEP), which tested commercial products against a comprehensive set of security-related criteria. The first of these books was released in 1983 and is known as the Trusted Computer System Evaluation Criteria (TCSEC), or the Orange Book. Many of the other guides were likewise known by the color of their covers, such as the Purple Book and the Brown Book. These guidelines have all been replaced by the Common Criteria, discussed later in this chapter; although TCSEC is no longer used commercially, understanding it will help you see how product security evaluation models evolved into what is used today. Because the Orange Book addresses only standalone systems, other volumes were developed to increase the level of system assurance.

The Orange Book: Trusted Computer System Evaluation Criteria

The Orange Book’s official name is the Trusted Computer System Evaluation Criteria; it was developed to evaluate standalone systems. Its basis of measurement is confidentiality, so it is similar to the Bell-LaPadula model. It rates systems and places them into one of four categories:

Image A—Verified protection. An A-rated system is the highest security category.

Image B—Mandatory security. A B-rated system has mandatory protection of the TCB.

Image C—Discretionary protection. A C-rated system provides discretionary protection of the TCB.

Image D—Minimal protection. A D-rated system fails to meet any of the standards of A, B, or C and basically has no security controls.


Note

The Canadians had their own version of the Orange Book known as The Canadian Trusted Computer Product Evaluation Criteria (CTCPEC). It is seen as a more flexible version of TCSEC.


The Orange Book not only rates systems into one of four categories, but each category is also broken down further. For each of these categories, a higher number indicates a more secure system, as noted in the following:

Image A is the highest security division. An A1 rating means that the system has verified protection and supports mandatory access control (MAC).

Image A1 is the highest supported rating. Systems rated as such must adhere to formal methods and provide formal proof of integrity of the TCB. An A1 system must not only be developed under strict guidelines, but also must be installed and delivered securely. Examples of A1 systems include the Gemini Trusted Network Processor and the Honeywell SCOMP. The true nature of an A rating deals with the level of scrutiny the system receives during evaluation.

Image B is considered a mandatory protection design. Just as with an A-rated system, those that obtain a B rating must support MAC.

Image B1 (labeled security protection) systems require sensitivity labels for all subjects and storage objects. Examples of B1-rated systems include the Cray Research Trusted Unicos 8.0 and the Digital SEVMS.

Image For a B2 (structured protection) rating, the system must meet the requirements of B1 and support hierarchical device labels, trusted path communications between user and system, and covert storage analysis. An example of a B2 system is the Honeywell Multics.

Image Systems rated as B3 (security domains) must meet B2 standards and support trusted path access and authentication, automatic security analysis, and trusted recovery. B3 systems must address covert timing vulnerabilities. A B3 system must not only support security controls during operation but also be secure during startup. An example of a B3-rated system is the Federal XTS-300.

Image C is considered a discretionary protection rating. C-rated systems support discretionary access control (DAC).

Image Systems rated at C1 (discretionary security protection) don’t need to distinguish between individual users and types of access.

Image C2 (controlled access protection) systems must meet C1 requirements and they must distinguish between individual users and types of access by means of strict login controls. C2 systems must also support object reuse protection. A C2 rating is common; products such as Windows NT and Novell NetWare 4.11 have a C2 rating.

Image Any system that does not comply with any of the other categories or that fails to receive a higher classification is rated as a D-level (minimal protection) system. MS-DOS is a D-rated system.


ExamAlert

The CISSP exam will not expect you to know what systems meet the various Orange Book ratings. These are provided only as examples; however, the test will expect you to know which levels are MAC and DAC certified.


Although the Orange Book is no longer considered current, it was one of the first standards. It is reasonable to expect that the exam might ask you about Orange Book levels and functions at each level. Listed in Table 5.4 are important notes to keep in mind about Orange Book levels.

Image

TABLE 5.4 Orange Book Levels

The Red Book: Trusted Network Interpretation

The Red Book’s official name is the Trusted Network Interpretation (TNI). The purpose of the TNI is to examine security for networks and network components. Whereas the Orange Book addresses only confidentiality, the Red Book examines integrity and availability as well. It also examines the operation of networked devices. The three areas of review in the Red Book are:

Image DoS prevention—Management and continuity of operations.

Image Compromise protection—Data and traffic confidentiality, selective routing.

Image Communications integrity—Authentication, integrity, and nonrepudiation.

Information Technology Security Evaluation Criteria

ITSEC is a European standard developed in the 1980s to evaluate confidentiality, integrity, and availability of an entire system. ITSEC was unique in that it was the first standard to unify markets and bring all of Europe under one set of guidelines. ITSEC designates the target system as the Target of Evaluation (TOE). The evaluation is actually divided into two parts: One part evaluates functionality and the other evaluates assurance. There are 10 functionality (F) classes and 7 assurance (E) classes. Assurance classes rate the effectiveness and correctness of a system. Table 5.5 shows these ratings and how they correspond to the TCSEC ratings.

Image

TABLE 5.5 ITSEC Functionality Ratings and Comparison to TCSEC

Common Criteria

With all the standards we have discussed, it is easy to see how someone might have a hard time determining which one is the right choice. The International Organization for Standardization (ISO) had the same thought and decided that, given the various standards and rating systems in use, there should be a single global standard. Figure 5.10 illustrates the development of the Common Criteria.

Image

FIGURE 5.10 Common Criteria development.

In 1997, the ISO released the Common Criteria (ISO 15408), which is an amalgamated version of TCSEC, ITSEC, and the CTCPEC. Common Criteria is designed around TCB entities. These entities include physical and logical controls, startup and recovery, reference mediation, and privileged states. Common Criteria categorizes assurance into one of seven increasingly strict levels of assurance. These are referred to as Evaluation Assurance Levels (EALs). EALs provide a specific level of confidence in the security functions of the system being analyzed. A description of each of the seven levels of assurance follows:

Image EAL 1—Functionality tested

Image EAL 2—Structurally tested

Image EAL 3—Methodically checked and tested

Image EAL 4—Methodically designed, tested, and reviewed

Image EAL 5—Semi-formally designed and tested

Image EAL 6—Semi-formally verified, designed, and tested

Image EAL 7—Formally verified, designed, and tested


ExamAlert

If you are looking for an example of a high-level, EAL 6 operating system, look no further than INTEGRITY-178B by Green Hills Software. This secure OS is used in jet fighters and other critical devices. See www.informationweek.com/news/software/bi/229208909 for more details.


Like ITSEC, Common Criteria defines two types of security requirements: functional and assurance. Functional requirements define what a product or system does and what security capabilities it provides. The set of requirements and specifications used as the basis for evaluating a specific product is known as the Security Target (ST). A protection profile defines the security requirements and controls for a category of systems. The protection profile is divided into the following five sections:

Image Rationale

Image Evaluation assurance requirements

Image Descriptive elements

Image Functional requirements

Image Development assurance requirements

A Security Target consists of the following seven sections:

Image Introduction

Image Conformance Claims

Image Security Problem Definition

Image Security Objectives

Image Extended Components Definition

Image Security Requirements

Image TOE Security Specifications

Assurance requirements define how well a product is built. Assurance requirements give confidence in the product and show the correctness of its implementation.


ExamAlert

Common Criteria’s seven levels of assurance and its two security requirements are required test knowledge.


System Validation

No system or architecture will ever be completely secure; there will always be a certain level of risk. Security professionals must understand this risk and either accept it, mitigate it, or transfer it to a third party. All the documentation and guidelines already discussed deal with ways to measure and assess risk, and they can be a big help in ensuring that implemented systems meet our requirements. However, before we begin to use the systems, we must complete two additional steps: certification and accreditation.

Certification and Accreditation

Certification is the process of validating that implemented systems are configured and operating as expected. It also validates that the systems are connected to and communicate with other systems in a secure and controlled manner, and that they handle data in a secure and approved manner. The certification process is a technical evaluation of the system that can be carried out by independent security teams or by the existing staff. Its goal is to uncover any vulnerabilities or weaknesses in the implementation.

The results of the certification process are reported to the organization’s management for review and approval. If management agrees with the findings of the certification, the report is formally approved. This formal approval of the certification is the accreditation process. Management usually issues accreditation as a formal, written approval that the certified system is approved for use as specified in the certification documentation. If changes are made to the system or to the environment in which the system is used, the certification and accreditation process must be repeated. The entire process is also repeated periodically, at intervals that depend on the industry and the regulations with which the organization must comply. As an example, Section 404 of Sarbanes-Oxley requires an annual evaluation of internal systems that deal with financial controls and reporting.


ExamAlert

For the exam, you might want to remember that certification is seen as the technical aspect of validation, whereas accreditation is management’s approval.



Note

Nothing lasts forever—and that includes certification. The certification process should be repeated when systems change, items are modified, or on a periodic basis.


Security Guidelines and Governance

The Internet and global connectivity extend the company’s network far beyond its traditional border. This places new demands on information security and its governance. Attacks can originate not just from inside the organization, but anywhere in the world.

Information security governance requires more than certification and accreditation. Governance should focus on the availability of services, integrity of information, and protection of data confidentiality. Failure to adequately address this important concern can have serious consequences. This has led to the growth of governance frameworks such as the IT Infrastructure Library (ITIL). ITIL specifies a set of processes, procedures, and tasks that can be integrated with the organization’s strategy, delivering value and maintaining a minimum level of competency. ITIL can be used to create a baseline from which the organization can plan, implement, and measure its governance progress.

Security and governance can be enhanced by implementing an enterprise architecture (EA) plan. The EA is the practice within information technology of organizing and documenting a company’s IT assets to enhance planning, management, and expansion. The primary purpose of using EA is to ensure that business strategy and IT investments are aligned. The benefit of EA is that it provides a means of traceability that extends from the highest level of business strategy down to the fundamental technology. EA has grown since it was first developed in the 1980s; companies such as Intel and BP, as well as the United States government, now use this methodology. One early EA model is the Zachman Framework. It was designed to allow companies to structure policy documents for information systems so they focus on who, what, where, when, why, and how, as shown in Figure 5.11.

Image

FIGURE 5.11 Zachman model.

Enterprise Architecture

Federal law requires government agencies to set up EAs and a structure for their governance. This process is guided by the Federal Enterprise Architecture (FEA) reference model. The FEA is built around five reference models:

Image Performance reference model—A framework used to measure performance of major IT investments.

Image Business reference model—A framework used to provide an organized, hierarchical model for day-to-day business operations.

Image Service component reference model—A framework used to classify service components with respect to how they support business or performance objectives.

Image Technical reference model—A framework used to categorize the standards, specifications, and technologies that support and enable the delivery of service components and capabilities.

Image Data reference model—A framework used to provide a standard means by which data can be described, categorized, and shared.

An independently designed, but later integrated, subset of the Zachman Framework is the Sherwood Applied Business Security Architecture (SABSA). Like the Zachman Framework, this model and methodology was developed for risk-driven enterprise information security architectures. It asks what, why, how, who, where, and when. More information on the SABSA model is available at www.sabsa-institute.org.

The British Standard (BS) 7799 was developed in England to be used as a standard method to measure risk. Because the document found such a wide audience and was adopted by businesses and organizations, it evolved into ISO 17799 and then later was used in the development of ISO 27005.

ISO 17799 is a code of practice for information security. ISO 17799 is written for individuals responsible for initiating, implementing, or maintaining information security management systems. Its goal is to help protect confidentiality, integrity, and availability. Compliance with 17799 is an involved task and is far from trivial for even the most security-conscious organizations. ISO 17799 provides best-practice guidance on information security management and is divided into 12 main sections:

Image Risk assessment and treatment

Image Security policy

Image Organization of information security

Image Asset management

Image Human resources security

Image Physical and environmental security

Image Communications and operations management

Image Access control

Image Information systems acquisition, development, and maintenance

Image Information security incident management

Image Business continuity management

Image Compliance

The ISO 27000 series is part of a family of standards that can trace its origins back to BS 7799. Organizations can become ISO 27000 certified by verifying their compliance to an accredited testing entity. Some of the core ISO standards include the following:

Image 27001—This document describes requirements on how to establish, implement, operate, monitor, review, and maintain an information security management system (ISMS). It follows a Plan-Do-Check-Act model.

Image 27002—This document was originally the BS 7799 standard, then was republished as an ISO 17799 standard. It also describes ways to develop a security program within the organization.

Image 27003—This document focuses on implementation.

Image 27004—This document describes the ways to measure the effectiveness of the information security program.

Image 27005—This document describes information security risk management.

One final item worth mentioning is the Information Technology Infrastructure Library (ITIL), introduced earlier in this section. ITIL provides a framework for identifying, planning, delivering, and supporting IT services for the business. ITIL presents a service lifecycle that includes:

Image Continual service improvement

Image Service strategy

Image Service design

Image Service transition

Image Service operation

True security is a layered process. Each of the items discussed in this section can be used to build a more secure organization.

Regulatory Compliance and Process Control

One area of concern for the CISSP is the protection of sensitive information and the security of financial data. One such effort is the Payment Card Industry Data Security Standard (PCI DSS). This multinational standard, first released in 2004, was created to enforce strict controls for the protection of credit card, debit card, ATM card, and gift card numbers by mandating policies, security devices, controls, and network monitoring. PCI DSS also sets standards for the protection of personally identifiable information associated with the cardholder of the account. Participating card brands include American Express, MasterCard, Visa, and Discover.

While PCI DSS is used to protect financial data, Control Objectives for Information and Related Technology (COBIT) was developed to meet the requirements of business and IT processes. It is a standard used by auditors worldwide and was developed by the Information Systems Audit and Control Association (ISACA). COBIT is divided into four control areas:

Image Planning and Organization

Image Acquisition and Implementation

Image Delivery and Support

Image Monitoring

Vulnerabilities of Security Architectures

Just as in most other chapters of this book, this one also reviews potential threats and vulnerabilities. Any time a security professional makes the case for stronger security, there will be those who ask why such funds should be spent. It’s important to point out not only the benefits of good security, but also the potential risks of not implementing good practices and procedures.

We live in a world of risk. As security professionals, we need to be aware of these threats to security and understand how the various protection mechanisms discussed throughout this chapter can be used to raise the level of security.

Buffer Overflow

Buffer overflows occur because of poor coding techniques. A buffer is a temporary storage area that has been coded to hold a certain amount of data. If additional data is fed to the buffer, it can spill over, or overflow, into adjacent memory. This can corrupt adjacent buffers and cause the application to crash, or it can allow an attacker to execute code that he has loaded onto the stack. Ideally, programs should be written to check that 32 characters cannot be typed into a 24-character buffer; however, this type of error checking does not always occur. Error checking is really nothing more than making sure that buffers receive the correct type and amount of information. Here is an example of code containing a buffer overflow:

#include <stdio.h>
#include <string.h>

int abc(void)
{
 char buffer[8];               /* room for 7 characters plus a NUL */
 strcpy(buffer, "AAAAAAAAAA"); /* 10 bytes plus NUL overflow the 8-byte buffer */
 return 0;
}

int main(void)
{
 return abc();                 /* running this smashes the stack */
}

For example, in 2010, the Aurora Exploit was developed to cause a buffer overflow against Windows XP systems running Internet Explorer. As a result of the attack, attackers could take control of the client system and execute commands remotely. Due diligence is required to prevent buffer overflows. The programmer’s work should always be checked for good security practices.

OS vendors are also working to make buffer overflow attacks harder by using techniques such as data execution prevention (DEP) and address space layout randomization (ASLR). Buffer overflows are possible in part because attackers can predict which memory addresses will hold the malicious code they load onto the stack. DEP marks areas of memory as either executable or non-executable and can avert some attacks by preventing the execution of code placed in regions meant to hold only data. ASLR randomly rearranges the address space positions of key data areas. Think of the shell game in which a small pea is placed under one of three shells and then moved around; to win the game, you must guess which shell the pea is under. To defeat randomization, attackers must successfully guess the positions of all areas they wish to attack, and increasing the memory space and entropy makes it harder to guess all possible positions. Most modern OSs, such as Android, Windows, and FreeBSD, make use of ASLR.

Other defenses for buffer overflows include code reviews, using safe programming languages, and applying patches and updates in a timely manner. You should also consider the human element; continuous coder training can aid programmers in keeping abreast of ongoing threats and a changing landscape. Finally, since all data should be suspect by default, data being input, processed, or output should be checked to make sure that it matches the correct parameters.

Back Doors

Back doors are another potential threat to the security of systems and software. Back doors, which are also sometimes referred to as maintenance hooks, are used by programmers during development to allow easy access to a piece of software. Often these back doors are undocumented. A back door can be used when software is developed in sections and developers want a means of accessing certain parts of the program without having to run through all the code. If back doors are not removed before the release of the software, they can allow an attacker to bypass security mechanisms and access the program.

State Attacks

State attacks are a form of attack that typically targets timing. The objective is to exploit the delay between the time of check (TOC) and the time of use (TOU). These attacks are sometimes called asynchronous attacks or race conditions because the attacker races to change the object after it has been checked but before the system uses it.

As an example, if a program creates a data file to hold the amount a customer owes and the attacker can race to replace this value before the program reads it, he can successfully manipulate the program. In reality, exploiting a race condition can be difficult because an attacker might have to attempt the exploit many times before succeeding.

Covert Channels

Covert channels are a means of moving information in a manner in which it was not intended. Covert channels are a favorite of attackers because they know that you cannot deny what you must permit. The term was originally used in TCSEC documentation to refer to ways of transferring information from a higher classification to a lower classification. Covert channel attacks can be broadly separated into two types:

Image Covert timing channel attacks—Timing attacks are difficult to detect. They function by altering a component or by modifying resource timing.

Image Covert storage channel attacks—These attacks use one process to write data to a storage area and another process to read the data.

Here is an example of how covert channel attacks happen in real life. Your organization has decided to allow ping (Internet Control Message Protocol [ICMP]) traffic into and out of your network. Based on this knowledge, an attacker has planted the Loki program on your network. Loki uses the payload portion of the ping packet to move data into and out of your network. Therefore, the network administrator sees nothing but normal ping traffic and is not alerted, even though the attacker is busy stealing company secrets. Sadly, many programs can perform this type of attack.


ExamAlert

The CISSP exam expects you to understand the two types of covert channel attacks.


Incremental Attacks

The goal of an incremental attack is to make a change slowly over time. By keeping each change small and spreading the changes over a long period, an attacker hopes to remain undetected. Two primary incremental attacks are:

Image Data diddling—Possible if the attacker has access to the system; the attacker makes small, incremental changes to data or files.

Image Salami attack—Similar to data diddling, but the small changes are made to financial accounts or records; this is often referred to as “cooking the books.”


ExamAlert

The attacks discussed are items that you can expect to see on the exam.


Emanations

Anyone who has seen movies such as Enemy of the State or The Conversation knows something about surveillance technologies and conspiracy theories. If you ever thought that it was just fringe elements that were worried about such things, guess again. This might sound like science fiction, but the United States government was concerned enough about the possibility of emanation that the Department of Defense started a program to study emanation leakage.

Research actually began in the 1950s with the result being TEMPEST technology. The fear was that attackers might try to sniff the stray electrical signals that emanate from electronic devices. Devices that have been built to TEMPEST standards, such as cathode ray tube (CRT) monitors, have had TEMPEST-grade copper mesh, known as a Faraday cage, embedded in the case to prevent signal leakage. This costly technology is found only in very high-security environments.

TEMPEST is now considered somewhat dated; newer technologies such as white noise and control zones are now used to control emanation security. White noise uses special devices that send out a stream of frequencies that makes it impossible for an attacker to distinguish the real information. Control zones are facilities whose walls, floors, and ceilings are designed to block electrical signals from leaving the zone.

Another term associated with this category of technology is Van Eck phreaking, the name given to eavesdropping on the contents of a video display by capturing its emanation leakage. Although you might be wondering whether all this is really true, it’s worth noting that Cambridge University successfully demonstrated the technique against an LCD monitor in 2004.


ExamAlert

A CISSP candidate is expected to know the technologies and techniques implemented to prevent intruders from capturing and decoding information emanated through the airwaves. TEMPEST, white noise, and control zones are the three primary controls.


Web-based Vulnerabilities

Some attacks can occur on a client. As an example, an input validation attack occurs when client-side input is not properly validated. Application developers should never assume that users will input the correct data. A user bent on malicious activity will attempt to stretch the protocol or an application in an attempt to find possible vulnerabilities. Parameter problems are best solved by implementing pre-validation and post-validation controls. Pre-validation is implemented in the client but can be bypassed by using proxies and other injection techniques. Post-validation is performed to ensure the program’s output is correct. Other security issues directly related to a lack of input validation include the following:

Image Cross-site scripting (XSS)—An attack that exploits trust so that an attacker uses a web application to send malicious code to a web or application server.

Image Cross-site request forgery (CSRF)—An attack that exploits the trust a website has in a user’s browser, causing unauthorized commands to be transmitted from a user that the website trusts.

Image Direct OS commands—The unauthorized execution of OS commands.

Image Directory traversal attack—A technique that allows an attacker to move from one directory to another.

Image Unicode encoding—Used to bypass security filters. One famous example used the Unicode string “%c0%af..%c0%af..”.

Image URL encoding—Used by an attacker to hide or execute an invalid application command via an HTTP request. As an example, www.knowthetrade.com%2fmalicious.js%22%3e%3c%2fscript%3e.


Tip

XSS and CSRF are sometimes confused, so just keep in mind that one key difference is that XSS executes code in a trusted context.


One of the things that makes a programmer’s life difficult is that there is no such thing as trusted input; all input is potentially bad and must be verified. While the buffer overflow is the classic example of poor input validation, these attacks have become much more complex: instead of simply throwing “garbage” (random gibberish) at an application to crash it, attackers have learned to insert working malicious code into the overflowing buffer. Many tools are also available to launch these attacks, one of which is illustrated in Figure 5.12.

Image

FIGURE 5.12 Burp proxy.

Some of the other techniques attackers use to exploit poor input validation include the following:

Image XML injection

Image LDAP injection

Image SQL injection

All of these are the same type of attack; they just target different platforms.

Databases are another common target of malformed input. An attacker can attempt to insert database or SQL commands to disrupt the normal operation of the database, which could cause it to become unstable and leak information. This type of attack is known as SQL injection. The attacker searches for web pages into which SQL commands can be inserted, using input such as ' (a single quote) to test the database for vulnerabilities. Responses such as the one shown in the following code give the attacker the feedback needed to know that the database is vulnerable to attack:

Microsoft OLE DB Provider for ODBC Drivers error '80040e07'
[Microsoft][ODBC SQL Server Driver][SQL Server]Syntax error converting
the nvarchar value 'sa_login' to a column of data type int.
/index.asp, line 5

Although knowing the syntax and response used for a database attack is not required exam knowledge, they are useful to know as you attempt to secure your infrastructure.


Caution

SQL injection attacks are among the top attack vectors and responsible for a large number of attacks. CISSP candidates should understand their potential threat.


Injection attacks, such as SQL, LDAP, and others, can occur in many different programs and applications and share a common problem in that no separation exists between the application code and the input data. This makes it possible for attackers to run their code on the victim’s system. Injection attacks require the following:

Image Footprinting—Determine the technology on which the web application runs.

Image Identifying—Locate the points at which user input is accepted.

Image Testing—Probe each input point for susceptibility to the attack.

Image Exploiting—Place extra bits of code into the input to execute commands on the victim’s computer.

Mobile System Vulnerabilities

Mobile devices have increased in power, and now have the ability to handle many tasks that previously only desktops and laptops could perform. More and more employees are bringing their own mobile devices to work and using them on the corporate network. Some of the concerns the organization might have include:

Image Eavesdropping on voice calls

Image Mobile viruses and malware

Image Plaintext storage on mobile device

Image Ease of loss and theft of mobile device

Image Camera phones’ ability to photograph sensitive information

Image Large storage ability, which can lead to data theft or exfiltration

Image Software that exposes local device data such as names, email addresses, or phone numbers.

Bring your own technology (BYOT), a.k.a. bring your own device (BYOD), requires the organization to build in administrative and technical controls to govern how the devices can be used at work. Some of these basic controls might include:

Image Only allowing managed devices to access company resources

Image User profiles and policies

Image Mobile device management (MDM)

Image Mobile application management (MAM)

Image Remote wipe

Image Encryption

Image Password protection enforcement

Image Limited password attempts

Image Expanding endpoint security to mobile devices

Image Malware detection and mitigation technology

Exam Prep Questions

1. Which of the following best describes a superscalar processor?

Image A. A superscalar processor can execute only one instruction at a time.

Image B. A superscalar processor has two large caches that are used as input and output buffers.

Image C. A superscalar processor can execute multiple instructions at the same time.

Image D. A superscalar processor has two large caches that are used as output buffers.

2. Which of the following are developed by programmers and used to allow the bypassing of normal processes during development, but are left in the software when it ships to the customer?

Image A. Back doors

Image B. Traps

Image C. Buffer overflows

Image D. Covert channels

3. Carl has noticed a high level of TCP traffic in and out of the network. After running a packet sniffer, he discovered malformed TCP ACK packets with unauthorized data. What has Carl discovered?

Image A. Buffer overflow attack

Image B. Asynchronous attack

Image C. Covert channel attack

Image D. DoS attack

4. You have been promoted to CISO and have instructed the security staff to harden user systems. You are concerned about employee web browsing activity and active web pages they may visit. You have instructed the staff that browsers should be patched and updated, cookie control options should be set, the execution of active code should be controlled, security protocols such as HTTPS, TLS, SSL3, and so on should be used, and to control what can be executed locally. You have informed the CIO that functionality must be sacrificed. What type of attack are you attempting to prevent?

Image A. SYN flood attack

Image B. Buffer overflow attack

Image C. TOC/TOU attack

Image D. Client side attack

5. Which of the following standards evaluates functionality and assurance separately?

Image A. TCSEC

Image B. TNI

Image C. ITSEC

Image D. CTCPEC

6. Which of the following was the first model developed that was based on confidentiality?

Image A. Bell-LaPadula

Image B. Biba

Image C. Clark-Wilson

Image D. Take-Grant

7. Which of the following models is integrity-based and was developed for commercial applications?

Image A. Information Flow

Image B. Clark-Wilson

Image C. Bell-LaPadula

Image D. Brewer-Nash

8. Which of the following does the Biba model address?

Image A. Focuses on internal threats

Image B. Focuses on external threats

Image C. Addresses confidentiality

Image D. Addresses availability

9. Which model is also known as the Chinese Wall model?

Image A. Biba

Image B. Take-Grant

Image C. Harrison-Ruzzo-Ullman

Image D. Brewer-Nash

10. Which of the following examines integrity and availability?

Image A. Orange Book

Image B. Brown Book

Image C. Red Book

Image D. Purple Book

11. What is the purpose of the * property in the Bell-LaPadula model?

Image A. No read up

Image B. No write up

Image C. No read down

Image D. No write down

12. What is the purpose of the simple integrity property of the Biba model?

Image A. No read up

Image B. No write up

Image C. No read down

Image D. No write down

13. Which of the following can be used to connect different MAC systems together?

Image A. Labels

Image B. Reference monitor

Image C. Controls

Image D. Guards

14. Which of the following security modes of operation best describes when a user has a valid need to know all data?

Image A. Dedicated

Image B. System High

Image C. Compartmented

Image D. Multilevel

15. Which of the following security models makes use of the TLC concept?

Image A. Biba

Image B. Clark-Wilson

Image C. Bell-LaPadula

Image D. Brewer Nash

Answers to Exam Prep Questions

1. C. A superscalar processor can execute multiple instructions at the same time. Answer A describes a scalar processor; it can execute only one instruction at a time. Answer B does not describe a superscalar processor because it does not have two large caches that are used as input and output buffers. Answer D is incorrect because a superscalar processor does not have two large caches that are used as output buffers.

2. A. Back doors, also referred to as maintenance hooks, are used by programmers during development to give them easy access into a piece of software. Answer B is incorrect because a trap is a message used by the Simple Network Management Protocol (SNMP) to report a serious condition to a management station. Answer C is incorrect because a buffer overflow occurs due to poor programming. Answer D is incorrect because a covert channel is a means of moving information in a manner in which it was not intended.

3. C. A covert channel is a means of moving information in a manner in which it was not intended. A buffer overflow occurs because of poor programming and usually results in program failure or the attacker’s ability to execute his code; thus, answer A is incorrect. An asynchronous attack deals with performing an operation between the TOC and the TOU, so answer B is incorrect; whereas a DoS attack affects availability, not confidentiality, making answer D incorrect.

4. D. A client side attack is any attack carried out on the client device such as XSS. A is not correct because a SYN flood is when the three-way handshake is exploited; Answer B is incorrect as a buffer overflow is specifically where more data is placed into the buffer than what it can hold; and Answer C is not correct because a TOC/TOU is a timing attack.

5. C. ITSEC is a European standard that evaluates functionality and assurance separately. All other answers are incorrect because they do not separate the evaluation criteria. TCSEC is also known as the Orange Book, TNI is known as the Red Book, and CTCPEC is a Canadian assurance standard; therefore, answers A, B, and D are incorrect.

6. A. Bell-LaPadula was the first model developed that is based on confidentiality. Answers B, C, and D are incorrect: Biba and Clark-Wilson both deal with integrity, whereas the Take-Grant model is based on four basic operations.

7. B. Clark-Wilson was developed for commercial activities. This model dictates that the separation of duties must be enforced, subjects must access data through an application, and auditing is required. Answers A, C, and D are incorrect. The Information Flow model addresses the flow of information and can be used to protect integrity or confidentiality. Bell-LaPadula is a confidentiality model, and Brewer-Nash was developed to prevent conflict of interest.

8. B. The Biba model assumes that internal threats are being protected by good coding practices and, therefore, focuses on external threats. Answers A, C, and D are incorrect. Biba addresses only integrity, not availability or confidentiality.

9. D. The Brewer-Nash model is also known as the Chinese Wall model and was specifically developed to prevent conflicts of interest. Answers A, B, and C are incorrect because they do not fit the description. Biba is integrity-based, Take-Grant is based on four modes, and Harrison-Ruzzo-Ullman defines how access rights can be changed, created, or deleted.

10. C. The Red Book examines integrity and availability of networked components. Answer A is incorrect because the Orange Book deals with confidentiality. Answer B is incorrect because the Brown Book is a guide to understanding trusted facility management. Answer D is incorrect because the Purple Book deals with database management.

11. D. The * property enforces “no write down” and is used to prevent someone with high clearance from writing data to a lower classification. Answers A, B, and C do not properly describe the Bell-LaPadula model’s star property.

12. C. The purpose of the simple integrity property of the Biba model is to prevent someone from reading an object of lower integrity. This helps protect the integrity of sensitive information.

13. D. A guard is used to connect various MAC systems together and allow for communication between these systems. Answer A is incorrect because labels are associated with MAC systems but are not used to connect them together. Answer B is incorrect because the reference monitor is associated with the TCB. Answer C is incorrect because the term controls here is simply a distracter.

14. A. Out of the four modes listed, only the dedicated mode supports a valid need to know for all information on the system. Therefore, answers B, C, and D are incorrect.

15. B. The Clark-Wilson model was designed to support the goals of integrity and is focused on TLC, which stands for tampered, logged, and consistent. Answers A, C, and D are incorrect; Biba, Bell-LaPadula, and Brewer Nash are not associated with TLC.

Common Criteria: www.niap-ccevs.org/cc-scheme/

Smashing the stack for fun and profit: insecure.org/stf/smashstack.html

Covert-timing-channel attacks: www.owasp.org/index.php/Covert_timing_channel

Java security: java.sun.com/javase/technologies/security/

How Windows measures up to TCSEC standards: technet.microsoft.com/en-us/library/cc767092.aspx

The Rainbow Series: csrc.nist.gov/publications/secpubs/rainbow/std001.txt

The Bell-LaPadula model: csrc.nist.gov/publications/secpubs/rainbow/std001.txt

ISO 17799: iso-17799.safemode.org/
