Chapter 5. System Architecture and Models

Terms you'll need to understand:

  • Buffer overflows

  • Security modes

  • Rings of protection

  • Trusted Computer System Evaluation Criteria (TCSEC)

  • Information Technology Security Evaluation Criteria (ITSEC)

  • System vulnerabilities

  • Common Criteria

  • Reference monitor

  • Trusted computing base

  • Open and closed systems

Techniques you'll need to master:

  • Understanding confidentiality models, such as Bell-LaPadula

  • Identifying integrity models, such as Biba and Clark-Wilson

  • Understanding common flaws and security issues associated with system-architecture designs

  • Distinguishing between certification and accreditation

Introduction

The system architecture and models domain deals with system hardware and the software that interacts with it. This chapter discusses the standards for securing these systems and protecting confidentiality, integrity, and availability. You are introduced to the trusted computing base and the ways in which systems can be evaluated to assess their level of security.

To pass the CISSP exam, you need to understand system hardware and software models, and how models of security can be used to secure systems. Standards such as Common Criteria, Information Technology Security Evaluation Criteria (ITSEC), and Trusted Computer System Evaluation Criteria (TCSEC) are covered on the exam.

Common Flaws in the Security Architecture

Just as in other chapters of this book, this one starts by reviewing potential threats and vulnerabilities. The purpose of placing these sections at the beginning of each chapter is to drive home the point that we live in a world of risk. As security professionals, we need to be aware of these threats to security and understand how the various protection mechanisms discussed throughout the chapter can be used to raise the level of security. Doing this can help build real defense in depth.

Buffer Overflow

Buffer overflows occur because of poor coding techniques. A buffer is a temporary storage area that has been coded to hold a certain amount of data. If additional data is fed to the buffer, it can spill over or overflow to adjacent buffers. This can corrupt these buffers and cause the application to crash or possibly allow an attacker to execute his own code that he has loaded onto the stack.

As an example, eEye Digital Security discovered a vulnerability in Microsoft's ISAPI filter extension used for Web-based printing back in 2001. The vulnerability occurred when a buffer of approximately 420 bytes was sent to the HTTP host for a .printer ISAPI request. As a result, attackers could take control of the web server remotely and make themselves administrator.

The point here is that a programmer's work should always be checked for good security practices. Due diligence is required to prevent buffer overflows.

All data that is being passed to a program should be checked to make sure that it matches the correct parameters.
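To make this concrete, here is a short Python sketch of parameter checking. The function name and the 64-byte limit are hypothetical; the point is simply that input is validated against the buffer's declared capacity before it is ever copied in:

```python
# Hypothetical sketch: reject input before it can overflow a fixed-size buffer.
MAX_NAME_LEN = 64  # the capacity the receiving buffer was coded to hold

def safe_set_name(buffer: bytearray, name: bytes) -> None:
    """Copy name into buffer only if it fits; reject anything oversized."""
    if not isinstance(name, bytes):
        raise TypeError("name must be bytes")
    if len(name) > MAX_NAME_LEN:
        raise ValueError(
            f"input of {len(name)} bytes exceeds the {MAX_NAME_LEN}-byte buffer")
    buffer[:len(name)] = name

buf = bytearray(MAX_NAME_LEN)
safe_set_name(buf, b"alice")        # fits, so it is accepted
try:
    safe_set_name(buf, b"A" * 500)  # oversized, so it is rejected, not overflowed
except ValueError as err:
    print(err)
```

Languages such as C leave this length check entirely to the programmer, which is why unchecked string-copy calls are the classic source of overflows.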

Back Doors

Back doors are another potential threat to the security of systems and software. Back doors, which are also sometimes referred to as maintenance hooks, are used by programmers during development to allow easy access to a piece of software. A back door can be used when software is developed in sections and developers want a means of accessing certain parts of the program without having to run through all the code. If back doors are not removed before the release of the software, they can allow an attacker to bypass security mechanisms and hack the program.

Asynchronous Attacks

Asynchronous attacks are a form of attack that typically targets timing. The objective is to exploit the delay between the time of check (TOC) and the time of use (TOU). These attacks are sometimes called race conditions because the attacker races to make a change to the object after it has been checked but before the system uses it.

As an example, if a program creates a data file to hold the amount a customer owes and the attacker can race to replace this value before the program reads it, he can successfully manipulate the program. In reality, it can be difficult to exploit a race condition because a hacker might have to attempt to exploit it many times before succeeding.
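A TOC/TOU race can be simulated in a few lines of Python. This is a contrived sketch (the shared account object and the timings are made up): one thread checks a value, and before it can use that value, a second "attacker" thread swaps it out:

```python
# Contrived TOC/TOU simulation: the value checked is not the value used.
import threading
import time

account = {"owed": 100}   # shared object standing in for the data file

def billing_process(results):
    checked = account["owed"]        # time of check: sees 100
    time.sleep(0.05)                 # window between check and use
    results.append(account["owed"])  # time of use: value may have changed

def attacker():
    time.sleep(0.01)                 # slip into the check/use window
    account["owed"] = 1              # replace the amount before it is used

results = []
t1 = threading.Thread(target=billing_process, args=(results,))
t2 = threading.Thread(target=attacker)
t1.start(); t2.start()
t1.join(); t2.join()
print("amount billed:", results[0])  # the attacker's 1, not the checked 100
```

Real exploits work the same way, except that the window is usually microseconds wide, which is why the attacker may need many attempts.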

Covert Channels

A covert channel is a means of moving information in a manner in which it was not intended. Covert channels are a favorite of attackers because they know that you cannot deny what you must permit. The term was originally used in TCSEC documentation to refer to ways of transferring information from a higher classification to a lower classification. Covert channel attacks can be broadly separated into two types:

  • Covert timing channel attacks—Timing attacks are difficult to detect and function by altering a component or by modifying resource timing.

  • Covert storage channel attacks—These attacks use one process to write data to a storage area and another process to read the data.

Here is an example of how covert channel attacks happen in real life. Your organization has decided to allow ping traffic into and out of your network. Based on this knowledge, an attacker has planted the Loki program on your network. Loki uses the payload portion of the ping packet to move data into and out of your network. Therefore, the network administrator sees nothing but normal ping traffic and is not alerted, all while the attacker is busy stealing company secrets. Sadly, many programs can perform this type of attack.
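The Loki technique is easy to sketch. The following Python fragment builds ICMP-echo-style packets by hand (the header values, such as the identifier, are arbitrary, and nothing is actually transmitted); the covert data simply rides where ping padding normally goes:

```python
# Sketch of a Loki-style covert channel: data hidden in ping payloads.
import struct

def build_echo_payload(secret_chunk: bytes, seq: int) -> bytes:
    """Pack a chunk of covert data after an ICMP echo-request header."""
    icmp_type, code, checksum, ident = 8, 0, 0, 0x1234  # echo request fields
    header = struct.pack("!BBHHH", icmp_type, code, checksum, ident, seq)
    return header + secret_chunk    # the chunk looks like ordinary ping padding

def extract_payload(packet: bytes) -> bytes:
    return packet[8:]               # strip the 8-byte ICMP header

secret = b"company secrets"
packets = [build_echo_payload(secret[i:i + 8], seq)
           for seq, i in enumerate(range(0, len(secret), 8))]
recovered = b"".join(extract_payload(p) for p in packets)
print(recovered)                    # the receiver reassembles the covert data
```

To a monitor that only counts pings, these packets are indistinguishable from normal echo requests, which is exactly the point of the attack.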

Note

The CISSP exam expects you to understand the two types of covert channel attacks.

Incremental Attacks

The goal of an incremental attack is to make a change slowly over time. By making such a small change over such a long period of time, an attacker hopes to remain undetected. Two primary incremental attacks include data diddling, which is possible if the attacker has access to the system and can make small incremental changes to data or files, and a salami attack, which is similar to data diddling but involves making small changes to financial accounts or records.
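A salami attack is simple arithmetic. In the hedged sketch below (the balance, rate, and posting count are invented), the sub-cent remainder of each interest calculation is "sliced off" into the attacker's account; no single slice is noticeable, but they add up:

```python
# Invented salami-attack arithmetic: skim the sub-cent remainder of each posting.
from decimal import Decimal, ROUND_DOWN

def apply_interest(balance: Decimal, rate: Decimal):
    exact = balance * rate
    credited = exact.quantize(Decimal("0.01"), rounding=ROUND_DOWN)
    return credited, exact - credited   # (amount posted, fraction skimmed)

attacker_account = Decimal("0")
for _ in range(100_000):                # 100,000 postings, one thin slice each
    credited, skimmed = apply_interest(Decimal("123.45"), Decimal("0.0137"))
    attacker_account += skimmed
print(f"skimmed total: {attacker_account:.2f}")  # prints: skimmed total: 126.50
```

Each posting loses less than a tenth of a cent, which is why the attack is so hard for auditors to spot.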

Note

The attacks discussed are items that you can expect to see on the exam.

Computer System Architecture

At the core of every computer system is the CPU and hardware that make it run. These are the physical components that interact with the OS and applications to do the things we need done. Let's start at the heart of the system and work our way out.

Central Processing Unit (CPU)

The CPU is the heart of the computer system. The CPU consists of an arithmetic logic unit (ALU), which performs arithmetic and logical operations, and a control unit, which extracts instructions from memory and decodes and executes the requested instructions. Two basic designs of CPUs are manufactured for modern computer systems:

  • Reduced Instruction Set Computing (RISC)—Uses simple instructions that require a reduced number of clock cycles

  • Complex Instruction Set Computing (CISC)—Performs multiple operations for a single instruction

The CPU requires two inputs to accomplish its duties: instructions and data. The data is passed to the CPU for manipulation, where it is typically worked on in either supervisor or problem state. In problem state, the CPU works on the data with nonprivileged instructions. In supervisor state, the CPU executes privileged instructions.

Note

A superscalar processor is one that can execute multiple instructions at the same time, whereas a scalar processor can execute only one instruction at a time. You will need to know this distinction for the exam.

The CPU can be classified in one of several categories, depending on its functionality. Both the hardware and software must be supported to use these features. These categories include the following:

  • Multiprogramming—Can interleave two or more programs for execution at any one time.

  • Multitasking—Can perform two or more tasks or subtasks at a time.

  • Multiprocessing—Supports two or more CPUs. As an example, Windows 98 does not support multiprocessing, whereas Windows Server 2003 does.

The data that CPUs work with is usually part of an application or program. These programs are tracked by a process ID, or PID. Anyone who has ever looked at Task Manager in Windows or executed a ps command on a Linux machine has probably seen a PID number. Fortunately, most programs do much more than the first C code you probably wrote that just said, “Hello World.” Each separately schedulable unit of execution within a program is known as a thread.

The data that the CPU is working with must have a way to move from the storage media to the CPU. This is accomplished by means of the bus. The bus is nothing more than lines of conductors that transmit data between the CPU, storage media, and other hardware devices.

Storage Media

The CPU uses memory to store instructions and data. Therefore, memory is an important type of storage media. The CPU is the only device that can directly access memory. Systems are designed that way because the CPU has a high level of system trust. Memory can have either physical or logical addresses. Physical addressing refers to the hard-coded address assigned to the memory. Applications and programmers writing code use logical addresses. Not only can memory be addressed in different ways, but there are also different types of memory. Memory can be either nonvolatile or volatile. Examples of both are given here:

  • Read-only memory (ROM) is nonvolatile memory that retains information even if power is removed. Types of ROM include Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Flash memory, and programmable logic devices (PLD). ROM is typically used to load and store firmware.

  • Random access memory (RAM) is volatile memory. If power is lost, the data is destroyed. Types of RAM include static RAM, which uses circuit latches to represent binary data, and dynamic RAM, which must be refreshed every few milliseconds.

Secondary Storage

Although memory plays an important part in the world of storage, other long-term types of storage are also needed. One of these is sequential storage. Anyone who has owned an IBM PC with a tape drive knows what sequential storage is. Tape drives are a type of sequential storage that must be read sequentially from beginning to end. Another well-known type of secondary storage is direct-access storage. Direct-access storage devices do not have to be read sequentially; the system can identify the location of the information and go directly to it to read the data. Hard drives are an example of a direct-access storage device: They are used to hold data and software. Software is the operating system or an application that you've installed on a computer system.

Virtual Memory and Virtual Machines

Modern computer systems have developed other ways in which to store and access information. One of these is virtual memory. Virtual memory is the combination of the computer's primary memory, RAM, and secondary storage, the hard drive. By combining these two technologies, the OS can make it appear that the system has much more memory than it actually does. When RAM is depleted, the OS begins saving data onto the computer's hard drive. This information is saved in pages that can be swapped back and forth between the hard drive and RAM, as needed. Individuals who have opened more programs on their computers than they had memory to support are probably familiar with the operation of virtual memory.

Closely related to virtual memory are virtual machines. VMware and Virtual PC are the two leading contenders in this category. A virtual machine enables the user to run a second OS within a virtual host. For example, a virtual machine will let you run another Windows OS, Linux x86, or any other OS that runs on an x86 processor and supports standard BIOS booting. Virtual machines are used primarily for development and system administration, and to reduce the number of physical devices needed.

Security Mechanisms

Although a robust architecture is a good start, real security requires that you have security mechanisms in place to control processes and applications. Some good security mechanisms are described in the following sections.

Process Isolation

Process isolation is required to maintain a high level of system trust. For a system to be certified as a multilevel security system, it must support process isolation. Without process isolation, there would be no way to prevent one process from spilling over into another process's memory space, corrupting data or possibly making the whole system unstable. Process isolation is performed by the operating system; its job is to enforce memory boundaries.

For a system to be secure, the operating system must prevent unauthorized users from accessing areas of the system to which they should not have access. Sometimes this is done by means of a virtual machine. A virtual machine allows the user to believe that they have the use of the entire system, but in reality, processes are completely isolated. To take this concept a step further, some systems that require truly robust security also implement hardware isolation. This means that the processes are segmented not only logically, but also physically.

Note

Java uses a form of virtual machine because it uses a sandbox to contain code and allows it to function only in a controlled manner.

Operation States

When systems are used to process and store sensitive information, there must be some agreed-upon methods for how this will work. Generally, these concepts were developed to meet the requirements of handling sensitive government information with categories such as sensitive, secret, and top secret. The burden of handling this task can be placed upon either administration or the system itself.

Single-state systems are designed and implemented to handle one category of information. The burden of management falls upon the administrator, who must develop the policy and procedures to manage this system. The administrator must also determine who has access and what type of access the users have. These systems are dedicated to one mode of operation, so they are sometimes referred to as dedicated systems.

Multistate systems depend not on the administrator, but on the system itself. They are capable of having more than one person log in to the system and access various types of data, depending upon the level of clearance. As you would probably expect, these systems are not inexpensive. Multistate systems can operate as a compartmentalized system. This means that Mike can log into the system with a secret clearance and access secret-level data, while Carl can log in with top-secret level access and access a different level of data. These systems are compartmentalized and can segment data on a need-to-know basis.

Unfortunately, things don't always operate normally; they sometimes go wrong, and a system failure can occur. A system failure could potentially compromise the system. Efficient designs have built-in recovery procedures to recover from potential problems:

  • Fail-safe—If a failure is detected, the system is protected from compromise by termination of services.

  • Fail-soft—A detected failure terminates the noncritical process, and the system continues to function.

Protection Rings

So how does the operating system know who and what to trust? It relies on rings of protection. Rings of protection work much like your network of family, friends, coworkers, and acquaintances. The people who are closest to you, such as your spouse and family, have the highest level of trust. Those who are distant acquaintances or are unknown to you probably have a lower level of trust. It's much like the guy I met in Times Square trying to sell me a new Rolex for $100—I had little trust in him and the supposed Rolex!

In reality, the protection rings are conceptual. Figure 5.1 shows an illustration of the protection ring schema.

Figure 5.1. Rings of protection.

The protection ring model provides the operating system with various levels at which to execute code or restrict its access. It provides much greater granularity than a system that just operates in user and privileged mode. As you move toward the outer bounds of the model, the numbers increase and the level of trust decreases.

  • Layer 0 is the most trusted level. The operating system kernel resides at this level. Any process running at layer 0 is said to be operating in privileged mode.

  • Layer 1 contains nonprivileged portions of the operating system.

  • Layer 2 is where I/O drivers, low-level operations, and utilities reside.

  • Layer 3 is where applications and processes operate. Items such as FTP, DNS, Telnet, and HTTP all operate at this level. This is the level at which individuals usually interact with the operating system. Applications operating here are said to be working in user mode.

Trusted Computer Base

The trusted computing base (TCB) is the sum of all the protection mechanisms within a computer and is responsible for enforcing the security policy. This includes hardware, software, controls, and processes. The TCB is responsible for confidentiality and integrity. It monitors four basic functions:

  • Input/output operations—I/O operations are a security concern because operations from the outermost rings might need to interface with rings of greater protection. These cross-domain communications must be monitored.

  • Execution domain switching—Applications running in one domain or level of protection often invoke applications or services in other domains. If these requests are to obtain more sensitive data or services, their activity must be controlled.

  • Memory protection—To truly be secure, the TCB must monitor memory references to verify confidentiality and integrity in storage.

  • Process activation—Registers, process status information, and file access lists are vulnerable to loss of confidentiality in a multiprogramming environment. This type of potentially sensitive information must be protected.

An important component of the TCB is the reference monitor, an abstract machine that is used to implement security. The reference monitor's job is to validate access to objects by authorized subjects. The reference monitor is implemented by the security kernel, which is at the heart of the system and handles all user/application requests for access to system resources. A small security kernel is easy to verify, test, and validate as secure. However, in real life, the security kernel is usually not that small because processes located inside can function faster and have privileged access. To avoid these performance costs, Linux and Windows have fairly large security kernels and have opted to sacrifice small size in return for performance gains. No matter what the size is, the security kernel must

  • Control all access

  • Be protected from modification or change

  • Be verified and tested to be correct

Security Models of Control

Security models of control are used to determine how security will be implemented, what subjects can access the system, and what objects they will have access to. Simply stated, they are a way to formalize security policy. Security models of control are typically implemented by enforcing integrity or confidentiality.

Integrity

Integrity is a good thing. It is one of the basic elements of the security triad, along with confidentiality and availability. Integrity plays an important role in security because it can verify that unauthorized users are not modifying data, that authorized users don't make unauthorized changes, and that data remains internally and externally consistent. Two security models of control that address integrity include Biba and Clark-Wilson.

Biba

The Biba model was the first model developed to address the concerns of integrity. Originally published in 1977, this lattice-based model has two defining properties:

  • Simple Integrity Property—This property states that a subject at one level of integrity is not permitted to read an object of lower integrity.

  • Star * Integrity Property—This property states that a subject at one level of integrity is not permitted to write to an object of higher integrity.

Biba addresses integrity only, not availability or confidentiality. It also assumes that internal threats are being protected by good coding practices and, therefore, focuses on external threats.
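Biba's two properties reduce to a pair of comparisons. Here is a minimal Python sketch (the integrity labels and the function name are invented):

```python
# Minimal sketch of Biba's rules; the integrity labels here are invented.
INTEGRITY = {"low": 1, "medium": 2, "high": 3}

def biba_allows(subject: str, obj: str, operation: str) -> bool:
    s, o = INTEGRITY[subject], INTEGRITY[obj]
    if operation == "read":
        return o >= s   # Simple Integrity Property: no read down
    if operation == "write":
        return o <= s   # * Integrity Property: no write up
    raise ValueError(f"unknown operation: {operation}")

print(biba_allows("medium", "low", "read"))    # False: reading down is blocked
print(biba_allows("medium", "high", "write"))  # False: writing up is blocked
print(biba_allows("medium", "high", "read"))   # True: reading up is safe for integrity
```

Reading up and writing down are permitted because neither can contaminate higher-integrity data.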

Note

Remember that the Biba model deals with integrity. As such, writing to an object of a higher level might endanger the integrity of the system.

Clark-Wilson

The Clark-Wilson model was created in 1987. It differs from previous models because it was developed with the intention to be used for commercial activities. This model dictates that the separation of duties must be enforced, subjects must access data through an application, and auditing is required. It also differs from the Biba model in that subjects are restricted. This means a subject at one level of access can read one set of data, whereas a subject at another level of access has access to a different set of data.

Confidentiality

Although integrity is an important concept, confidentiality was actually the first to be addressed in a formal model. This is because the Department of Defense (DoD) was concerned about the confidentiality of information. The DoD divides information into categories, to ease the burden of managing who has access to what levels of information. DoD information classifications include confidential, secret, and top secret.

Bell-LaPadula

The Bell-LaPadula model was actually the first formal model developed to protect confidentiality. This is a state machine that enforces confidentiality. A state machine is a conceptual model that monitors the status of the system to prevent it from slipping into an insecure state. Systems that support the state machine model must have all their possible states examined to verify that all processes are controlled. The Bell-LaPadula model uses mandatory access control to enforce the DoD multilevel security policy. For a subject to access information, he must have a clear “need to know” and meet or exceed the information's classification level.

The Bell-LaPadula model is defined by the two following properties:

  • Simple Security Property (ss Property)—This property states that a subject at one level of confidentiality is not allowed to read information at a higher level of confidentiality. This is sometimes referred to as “no read up.”

  • Star * Security Property—This property states that a subject at one level of confidentiality is not allowed to write information to a lower level of confidentiality. This is also known as “no write down.”
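These two properties also reduce to a pair of comparisons. A minimal Python sketch (the clearance labels are invented) makes the rules concrete:

```python
# Minimal sketch of Bell-LaPadula's rules; the clearance labels are invented.
CLEARANCE = {"confidential": 1, "secret": 2, "top_secret": 3}

def blp_allows(subject: str, obj: str, operation: str) -> bool:
    s, o = CLEARANCE[subject], CLEARANCE[obj]
    if operation == "read":
        return o <= s   # Simple Security Property: no read up
    if operation == "write":
        return o >= s   # * Security Property: no write down
    raise ValueError(f"unknown operation: {operation}")

print(blp_allows("secret", "top_secret", "read"))     # False: no read up
print(blp_allows("secret", "confidential", "write"))  # False: no write down
print(blp_allows("secret", "secret", "read"))         # True: same level is fine
```

Note that the comparisons are exactly reversed from Biba's two properties: Bell-LaPadula permits reading down and writing up, whereas Biba permits the opposite.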

Note

Review the Bell-LaPadula Simple Security and Star * Security properties closely; they are easy to confuse with Biba's two defining properties.

Note

Know that the Bell-LaPadula model deals with confidentiality. As such, reading information at a higher level than what is allowed would endanger confidentiality.

Take-Grant Model

The Take-Grant model is another confidentiality-based model that supports four basic operations: take, grant, create, and revoke. Under this model, a subject with the take right can take rights that another subject holds, and a subject possessing the grant right can grant rights it holds to other subjects. The create operation allows a subject to create new objects, and the revoke operation allows a subject to revoke rights it has granted.
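A toy Python sketch shows how the take operation propagates rights through the system (the subjects, rights graph, and helper function are all hypothetical):

```python
# Toy Take-Grant rights graph; all names here are hypothetical.
rights = {                      # rights[holder][target] = set of rights held
    "alice": {"bob": {"take"}},
    "bob": {"file": {"read", "grant"}},
}

def take(taker: str, victim: str, target: str, right: str) -> None:
    """With the take right over victim, taker acquires victim's right on target."""
    if "take" not in rights.get(taker, {}).get(victim, set()):
        raise PermissionError(f"{taker} holds no take right over {victim}")
    if right not in rights.get(victim, {}).get(target, set()):
        raise PermissionError(f"{victim} holds no {right} right on {target}")
    rights.setdefault(taker, {}).setdefault(target, set()).add(right)

take("alice", "bob", "file", "read")    # alice takes bob's read right on file
print(sorted(rights["alice"]["file"]))  # prints: ['read']
```

Analyzing how rights can spread through such a graph is exactly what the model is used for: deciding whether a given subject can ever acquire a given right.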

Brewer and Nash Model

The Brewer and Nash model is similar to the Bell-LaPadula model and is also called the Chinese Wall model. It was developed to prevent conflict of interest (COI) problems. As an example, imagine that your security firm does security work for many large firms. If one of your employees could access information about all the firms that your company has worked for, he might be able to use this data in an unauthorized way. Therefore, the Chinese Wall model would prevent a worker consulting for one firm from accessing data belonging to another, thereby preventing any COI.

Other Models

Although not as popular, other security models of control exist:

  • Noninterference model—As its name states, this model's job is to make sure that objects and subjects of different levels don't interfere with the objects and subjects of other levels.

  • Information-flow model—This model is the basis of design of both the Biba and Bell-LaPadula models. Information-flow models are considered a type of state machine. The Biba model is designed to prevent information from flowing from a low security level to a high security level. This helps protect the integrity of sensitive information. The Bell-LaPadula model is designed to prevent information from flowing from a high security level to a lower one. This protects confidentiality. The real goal of any information-flow model is to prevent unauthorized, insecure information flow in any direction.

  • Graham-Denning model—This model uses a formal set of protection rules for which each object has an owner and a controller.

  • Harrison-Ruzzo-Ullman model—This model details how subjects and objects can be created, deleted, accessed, or changed.

Note

Spend some time reviewing all the models discussed in this section. Make sure you know which models are integrity based and which are confidentiality based; you will need to know this distinction for the exam.

Open and Closed Systems

Open systems accept input from other vendors and are based upon standards and practices that allow connection to different devices and interfaces. The goal is to promote full interoperability whereby the system can be fully utilized.

Closed systems are proprietary. They use devices that are not based on open standards and are generally locked. They lack standard interfaces to allow connection to other devices and interfaces.

An example of this can be seen in the U.S. cellphone industry. Cingular and T-Mobile cellphones are based on the worldwide Global System for Mobile Communications (GSM) standard and can be used overseas easily on other networks by simply changing the SIM module. These are open-system phones. Other carriers, such as Sprint, use Code Division Multiple Access (CDMA), which does not have worldwide support.

Documents and Guidelines

The documents and guidelines discussed in the following sections were developed to help evaluate and establish system assurance. These items are important to the CISSP candidate because they provide a level of trust and assurance that these systems will operate in a given and predictable manner. A trusted system has undergone testing and validation to a specific standard. Assurance is freedom from doubt and a level of confidence that a system will perform as required every time it is used.

The Rainbow Series

The rainbow series is aptly named because each book in the series has a different color of label. This 6-foot-tall stack of books was developed by the National Computer Security Center (NCSC), an organization that is part of the National Security Agency (NSA). These guidelines were developed for the Trusted Product Evaluation Program (TPEP), which tests commercial products against a comprehensive set of security-related criteria. The first of these books was released in 1983 and is known as the Orange Book. Because it addresses only standalone systems, other volumes were developed to increase the level of system assurance.

The Orange Book: Trusted Computer System Evaluation Criteria

The Orange Book's official name is the Trusted Computer System Evaluation Criteria (TCSEC). As noted, it was developed to evaluate standalone systems. Its basis of measurement is confidentiality, so it is similar to the Bell-LaPadula model. It is designed to rate systems and place them into one of four categories:

  • A—Verified protection. An A-rated system is the highest security division.

  • B—Mandatory protection. A B-rated system has mandatory protection of the TCB.

  • C—Discretionary protection. A C-rated system provides discretionary protection of the TCB.

  • D—Minimal protection. A D-rated system fails to meet any of the standards of A, B, or C, and basically has no security controls.

Note

The Canadians have their own version of the Orange Book, known as The Canadian Trusted Computer Product Evaluation Criteria (CTCPEC). It is seen as a more flexible version of TCSEC.

The Orange Book not only rates systems into one of four categories, but each category is also broken down further. For each of these categories, a higher number indicates a more secure system, as noted in the following:

  • A is the highest security division. An A1 rating means that the system has verified protection and supports mandatory access control (MAC).

    • A1 is the highest supported rating. Systems rated as such must meet formal methods and proof of integrity of the TCB. Examples of A1 systems include the Gemini Trusted Network Processor and the Honeywell SCOMP.

  • B is considered a mandatory protection design. Just as with an A-rated system, those that obtain a B rating must support MAC.

    • B1 (labeled security protection) systems require sensitivity labels for all subjects and storage objects. Examples of B1-rated systems include the Cray Research Trusted Unicos 8.0 and the Digital SEVMS.

    • For a B2 (structured protection) rating, the system must meet the requirements of B1 and support hierarchical device labels, trusted path communications between user and system, and covert channel analysis. An example of a B2 system is the Honeywell Multics.

    • Systems rated as B3 (security domains) must meet B2 standards and support trusted path access and authentication, automatic security analysis, and trusted recovery. An example of a B3-rated system is the Federal XTS-300.

  • C is considered a discretionary protection rating. C-rated systems support discretionary access control (DAC).

    • Systems rated at C1 (discretionary security protection) don't need to distinguish between individual users and types of access.

    • C2 (controlled access protection) systems must meet C1 requirements plus must distinguish between individual users and types of access.

      C2 systems must also support object reuse protection. A C2 rating is common; products such as Windows NT and Novell NetWare 4.11 have a C2 rating.

  • Any system that does not comply with any of the other categories or that fails to receive a higher classification is rated as a D-level (minimal protection) system. MS-DOS is a D-rated system.

Note

The CISSP exam will not expect you to know what systems meet the various Orange Book ratings; however, it will expect you to know where MAC and DAC are applied.

The Red Book: Trusted Network Interpretation

The Red Book's official name is the Trusted Network Interpretation. Its purpose is to address the deficiencies of the Orange Book. Although the Orange Book addresses only confidentiality, the Red Book examines integrity and availability. It also is tasked with examining the operation of networked devices.

Information Technology Security Evaluation Criteria (ITSEC)

ITSEC is a European standard that was developed in the 1980s to evaluate confidentiality, integrity, and availability of an entire system. ITSEC designates the target system as the Target of Evaluation (TOE). The evaluation is actually divided into two parts: One part evaluates functionality, and the other evaluates assurance. There are 10 functionality (F) classes and 7 assurance (E) classes. Assurance classes rate the effectiveness and correctness of the system. Table 5.1 shows these ratings and how they correspond to the TCSEC ratings.

Table 5.1. ITSEC Functionality Ratings and Comparison to TCSEC

  (F) Class   (E) Class   TCSEC Rating
  NA          E0          D
  F1          E1          C1
  F2          E2          C2
  F3          E3          B1
  F4          E4          B2
  F5          E5          B3
  F5          E6          A1
  F6          NA          TOEs with high integrity requirements
  F7          NA          TOEs with high availability requirements
  F8          NA          TOEs with high integrity requirements during data communications
  F9          NA          TOEs with high confidentiality requirements during data communications
  F10         NA          Networks with high confidentiality and integrity requirements
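The correspondence in Table 5.1 between ITSEC assurance (E) classes and TCSEC ratings can be captured as a simple lookup. This is an illustrative sketch only; the dictionary and helper names are not part of either standard:

```python
# ITSEC assurance (E) classes mapped to their roughly equivalent
# TCSEC ratings, per Table 5.1.
ITSEC_TO_TCSEC = {
    "E0": "D",
    "E1": "C1",
    "E2": "C2",
    "E3": "B1",
    "E4": "B2",
    "E5": "B3",
    "E6": "A1",
}

def tcsec_equivalent(e_class: str) -> str:
    """Return the TCSEC rating corresponding to an ITSEC E class."""
    return ITSEC_TO_TCSEC[e_class]

print(tcsec_equivalent("E3"))  # B1
```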

Common Criteria

With all the standards discussed so far, it is easy to see how someone might have a hard time determining which one is the right choice. The International Organization for Standardization (ISO) reached the same conclusion and decided that, given the various existing standards and ratings, there should be a single global standard.

In 1997, the ISO released the Common Criteria (ISO 15408), an amalgamation of TCSEC, ITSEC, and the CTCPEC. Common Criteria is designed around TCB entities, which include physical and logical controls, startup and recovery, reference mediation, and privileged states. Common Criteria categorizes assurance into seven increasingly strict levels, referred to as Evaluation Assurance Levels (EALs). Each EAL provides a specific level of confidence in the security functions of the system being analyzed, which is known as the Target of Evaluation (TOE). The set of security requirements and specifications used as the basis for evaluation is known as the Security Target (ST). The levels of assurance are as follows:

  • EAL 0: Inadequate assurance

  • EAL 1: Functionally tested

  • EAL 2: Structurally tested

  • EAL 3: Methodically tested and checked

  • EAL 4: Methodically designed, tested, and reviewed

  • EAL 5: Semiformally designed and tested

  • EAL 6: Semiformally verified design and tested

  • EAL 7: Formally verified design and tested

Common Criteria defines two types of security requirements: functional and assurance. Functional requirements define what a product or system does. They also define the security capabilities of the product. Assurance requirements define how well the product is built. Assurance requirements give confidence in the product and show the correctness of its implementation.

Note

EAL 7:

The Common Criteria's seven levels of assurance and its two types of security requirements are required test knowledge.

British Standard 7799

BS 7799 was developed in England as a standard method to measure risk. Because the document found such a wide audience and was adopted by businesses and organizations, it evolved into ISO 17799 in December 2000. This standard is comprehensive in its coverage of security issues and is divided into 10 sections:

  • Security Policy

  • Security Organization

  • Asset Control and Classification

  • Environmental and Physical Security

  • Employee Security

  • Computer and Network Management

  • Access Controls

  • System Development and Maintenance

  • Business Continuity Planning

  • Compliance

Compliance with BS 7799 is an involved task and is far from trivial, even for the most security-conscious organizations.

System Validation

No system or architecture will ever be completely secure; there will always be some level of risk. Security professionals must understand this risk and choose to accept it, mitigate it, or transfer it to a third party. The documentation and guidelines already discussed deal with ways to measure and assess risk, and they can be a big help in ensuring that implemented systems meet our requirements. However, before we begin to use these systems, we must complete two additional steps.

Certification and Accreditation

Certification is the process of validating that an implemented system is configured and operating as expected, that it connects to and communicates with other systems in a secure and controlled manner, and that it handles data in a secure and approved manner. The certification process is a technical evaluation of the system that can be carried out by an independent security team or by the existing staff. Its goal is to uncover any vulnerabilities or weaknesses in the implementation.

The results of the certification process are reported to the organization's management for review and approval. If management agrees with the findings of the certification, the report is formally approved; this formal approval is the accreditation process. Management usually issues it as a formal written statement that the certified system is approved for use as specified in the certification documentation. If changes are made to the system, if it is reconfigured, or if other changes occur in the environment, the certification and accreditation process must be repeated. The entire process is also repeated at periodic intervals that depend on the industry and the regulations the organization must comply with. As an example, Section 404 of Sarbanes-Oxley requires an annual evaluation of internal systems that deal with financial controls and reporting.

Exam Prep Questions

1:

Which of the following best describes a superscalar processor?

  • A. A superscalar processor can execute only one instruction at a time.

  • B. A superscalar processor has two large caches that are used as input and output buffers.

  • C. A superscalar processor can execute multiple instructions at the same time.

  • D. A superscalar processor has two large caches that are used as output buffers.

2:

Which of the following are developed by programmers and used to allow the bypassing of normal processes during development?

  • A. Back doors

  • B. Traps

  • C. Buffer overflows

  • D. Covert channels

3:

Carl has noticed a high level of TCP traffic in and out of the network. After running a packet sniffer, he discovered malformed TCP ACK packets with unauthorized data. What has Carl discovered?

  • A. Buffer-overflow attack

  • B. Asynchronous attack

  • C. Covert-channel attack

  • D. DoS attack

4:

Which of the following types of CPUs can perform multiple operations from a single instruction?

  • A. DITSCAP

  • B. RISC

  • C. NIACAP

  • D. CISC

5:

Which of the following standards evaluates functionality and assurance separately?

  • A. TCSEC

  • B. TNI

  • C. ITSEC

  • D. CTCPEC

6:

Which of the following was the first model developed that was based on confidentiality?

  • A. Bell-LaPadula

  • B. Biba

  • C. Clark-Wilson

  • D. Take-Grant

7:

Which of the following models is integrity based and was developed for commercial applications?

  • A. Information-flow

  • B. Clark-Wilson

  • C. Bell-LaPadula

  • D. Brewer-Nash

8:

Which of the following does the Biba model address?

  • A. Focuses on internal threats

  • B. Focuses on external threats

  • C. Addresses confidentiality

  • D. Addresses availability

9:

Which model is also known as the Chinese Wall model?

  • A. Biba

  • B. Take-Grant

  • C. Harrison-Ruzzo-Ullman

  • D. Brewer-Nash

10:

Which of the following examines integrity and availability?

  • A. Orange Book

  • B. Brown Book

  • C. Red Book

  • D. Purple Book

Answers to Exam Prep Questions

A1:

Answer: C. A superscalar processor can execute multiple instructions at the same time. Answer A describes a scalar processor; it can execute only one instruction at a time. Answer B does not describe a superscalar processor because it does not have two large caches that are used as input and output buffers. Answer D is incorrect because a superscalar processor does not have two large caches that are used as output buffers.

A2:

Answer: A. Back doors, also referred to as maintenance hooks, are used by programmers during development to give them easy access into a piece of software. Answer B is incorrect because a trap is a message used by the Simple Network Management Protocol (SNMP) to report a serious condition to a management station. Answer C is incorrect because a buffer overflow occurs because of poor programming. Answer D is incorrect because a covert channel is a means of moving information in a manner in which it was not intended.

A3:

Answer: C. A covert channel is a means of moving information in a manner in which it was not intended. A buffer overflow occurs because of poor programming and usually results in program failure or the attacker's ability to execute his code; thus, answer A is incorrect. An asynchronous attack deals with performing an operation between the TOC and the TOU (so answer B is incorrect), whereas a DoS attack affects availability, not confidentiality (making answer D incorrect).

A4:

Answer: D. The Complex Instruction Set Computing (CISC) CPU can perform multiple operations from a single instruction. Answer A is incorrect because DITSCAP is the Defense Information Technology Systems Certification and Accreditation Process. Answer B describes the Reduced Instruction Set Computing (RISC) CPU, which uses simple instructions that require a reduced number of clock cycles. Answer C is incorrect because NIACAP is the National Information Assurance Certification and Accreditation Process, an accreditation process.

A5:

Answer: C. ITSEC is a European standard that evaluates functionality and assurance separately. All other answers are incorrect because they do not separate the evaluation criteria. TCSEC is also known as the Orange Book, TNI is known as the Red Book, and CTCPEC is a Canadian assurance standard; therefore, answers A, B, and D are incorrect.

A6:

Answer: A. Bell-LaPadula was the first model developed that is based on confidentiality. It uses two main rules to enforce its operation. Answers B, C, and D are incorrect. Biba and Clark-Wilson both deal with integrity, whereas the Take-Grant model is based on four basic operations.

A7:

Answer: B. Clark-Wilson was developed for commercial activities. This model dictates that the separation of duties must be enforced, subjects must access data through an application, and auditing is required. Answers A, C, and D are incorrect. The information-flow model addresses the flow of information and can be used to protect integrity or confidentiality, Bell-LaPadula is a confidentiality model, and Brewer-Nash was developed to prevent conflicts of interest.

A8:

Answer: B. The Biba model assumes that internal threats are being protected by good coding practices and, therefore, focuses on external threats. Answers A, C, and D are incorrect. Biba addresses only integrity, not availability or confidentiality.

A9:

Answer: D. The Brewer-Nash model is also known as the Chinese Wall model and was specifically developed to prevent conflicts of interest. Answers A, B, and C are incorrect because they do not fit the description. Biba is integrity based, Take-Grant is based on four basic operations, and Harrison-Ruzzo-Ullman defines how access rights can be changed, created, or deleted.

A10:

Answer: C. The Red Book examines integrity, availability, and networked components. Answer A is incorrect because the Orange Book deals with confidentiality. Answer B is incorrect because the Brown Book is a guide to understanding trusted facility management. Answer D is incorrect because the Purple Book deals with database management.
