Chapter 13

Security Standards

Chapter Objectives

After reading this chapter and completing the exercises, you will be able to do the following:

images Apply the U.S. Department of Defense’s Orange Book computer security criteria.

images Understand industry standards like COBIT.

images Understand ISO standards.

images Use the Common Criteria computer security criteria.

images Employ other security models, including the Bell-LaPadula, Clark-Wilson, Biba Integrity, Chinese Wall, and State Machine models.

Introduction

Network security, as a field of study, has matured greatly in the past few decades. This means that there are a number of well-studied and widely accepted security standards already in place. There are also a variety of security models in place that you can use to assist in your approach to security. Understanding these standards and models is essential to developing a complete security strategy for your network. Through the preceding 12 chapters you have studied firewalls, proxy servers, antivirus software, defenses against DoS attacks, security policies, and more. Adding to that knowledge an understanding of security standards and models will give you a very solid understanding of network security.

COBIT

Control Objectives for Information and Related Technologies (COBIT) is a framework that provides a structure applicable to a diverse set of cyber security environments. COBIT was developed by ISACA (Information Systems Audit and Control Association) and first released in 1996. It was originally targeted at financial audits but has expanded over time, and it is designed to align with ISO standards such as ISO/IEC 17799:2005 (the predecessor of ISO/IEC 27002). The current version is COBIT 5, released in April 2012. The current version includes five components: framework, process descriptions, control objectives, management guidelines, and maturity models. Each of these components is an integral part of the framework and important to information security management.

The framework component of COBIT is one of the aspects of the standard that makes it relatively easy to integrate other standards. This component is rather general and requires that organizations develop good practices related to their business requirements. “Good practices” is a broadly defined term. In this component of COBIT, it would be appropriate to integrate any standards that are pertinent to the organization in question. For example, a company that processes credit cards would integrate the PCI DSS standard in the framework component of COBIT. Then the organization would develop practices based on the PCI DSS standard. This illustrates not only the fact that COBIT is flexible and can be integrated with many standards, but that those standards are not in and of themselves complete approaches to cyber security. The fact that any of these standards would accommodate only one part of the COBIT framework is indicative of the narrow focus of these IT standards.

The next component of COBIT is process descriptions. While this is applicable to any network environment, it goes beyond existing standards such as HIPAA and PCI DSS, both of which will be discussed later in this chapter. This component requires the organization to clearly describe all business processes. This is a critical early step, because one cannot effectively approach security for any organization until one has a firm grasp on the processes of that organization.

Process descriptions in COBIT need to be detailed. These descriptions will include all inputs to a given process as well as expected outputs. Every process within the organization must be described. This detailed description provides a guide to the security needs of that process. For example, if a given process is to process credit card information, understanding the inputs and outputs will help determine the security controls that would be appropriate.

The third component of COBIT is control objectives. This is another aspect of COBIT that goes beyond security standards and instead provides a framework for information assurance. This component requires the organization to establish clear objectives for each security control. Whether that control is administrative or technological in nature, there must be a clearly articulated objective for the control. Without such objectives, it is impossible to evaluate the efficacy of a security control.

The more specific and detailed the objectives are, the more effective they can be. One example of a control objective would be the implementation of an antivirus software solution. A generic objective would simply state that the goal is to mitigate the risk of malware. A more detailed objective would be to target a 20% reduction in either the frequency or the deleterious impact of malware outbreaks within the organization’s network. The more precise the objective is, the easier it will be to measure and improve performance.
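The following is a minimal Python sketch of how such an objective might be measured. It is purely illustrative: the incident counts and the 20% target are hypothetical values, not figures prescribed by COBIT.

```python
# Sketch: check whether a "reduce malware incidents by 20%" objective was met.
# The baseline and current counts are hypothetical measurements.

def objective_met(baseline_incidents: int, current_incidents: int,
                  target_reduction: float = 0.20) -> bool:
    """Return True if the observed reduction meets or exceeds the target."""
    if baseline_incidents == 0:
        return current_incidents == 0
    reduction = (baseline_incidents - current_incidents) / baseline_incidents
    return reduction >= target_reduction

# Example: 50 incidents last quarter, 38 this quarter -> 24% reduction.
print(objective_met(50, 38))   # True
print(objective_met(50, 45))   # False (only a 10% reduction)
```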

The control objectives lead naturally to management guidelines, the fourth component of COBIT. This component requires management to establish responsibility for achieving security goals and to implement methods for measuring the performance of security controls. It is noteworthy that management guidelines are fourth in the COBIT components. Only after addressing the three previous components is it possible to develop effective management guidelines. Without clear control objectives, an understanding of business processes, and similar information, it is difficult to manage security effectively.

Finally, COBIT includes maturity models. Maturity models examine any process from the point of view of how developed that process is. Essentially, each individual security process is first assessed to determine how mature that process is. Maturity is defined as how that control is performing against objectives. Then, over time, the security process is evaluated to determine if it is maturing and improving. As an example, a policy regarding passwords might initially be developed based on generic guidelines. Then later, the policy could be revised in light of events within the organization, published standards, or increasing understanding of the security personnel. This process would then be said to be maturing.

ISO Standards

The International Organization for Standardization creates standards for a wide range of topics. There are hundreds of such standards, and it would be impossible to cover them in a single chapter of a single book. In fact, each standard could be the subject of a book itself, or at least a few chapters. Some of the more important standards for network security are listed here:

images ISO/IEC 15408: The Common Criteria for Information Technology Security Evaluation

images ISO/IEC 25000: Systems and Software Engineering

images ISO/IEC 27000: Information technology — Security techniques (ISMS overview and vocabulary)

images ISO/IEC 27001: Information Security Management

images ISO/IEC 27005: Risk Management

images ISO/IEC 27006: Accredited Certification Standard

images ISO/IEC 28000: Specification for security management systems for the supply chain

images ISO 27002: Information Security Controls

images ISO 27003: ISMS Implementation

images ISO 27004: IS Metrics

images ISO 27005: Risk management

images ISO 27006: ISMS certification

images ISO 27007: Management System Auditing

images ISO 27008: Technical Auditing

images ISO 27010: Inter-organization communication

images ISO 27011: Telecommunications

images ISO 27033: Network security

images ISO 27034: Application security

images ISO 27035: Incident Management

images ISO 27036: Supply chain

images ISO 27037: Digital forensics

images ISO 27038: Document redaction

images ISO 27039: Intrusion prevention

images ISO 27040: Storage security

images ISO 27041: Investigation assurance

images ISO 27042: Analyzing digital evidence

images ISO 27043: Incident Investigation

NIST Standards

The U.S. National Institute of Standards and Technology establishes standards for a wide range of things. Some of the standards most important to network security are discussed in this section.

NIST SP 800-14

Special Publication 800-14, Generally Accepted Principles and Practices for Securing Information Technology Systems, describes common security principles that should be addressed within security policies. The document describes 8 principles and 14 practices that can be used to develop security policies. The 8 principles are:

1. Computer security supports the mission of the organization.

2. Computer security is an integral element of sound management.

3. Computer security should be cost-effective.

4. System owners have security responsibilities outside their own organizations.

5. Computer security responsibilities and accountability should be made explicit.

6. Computer security requires a comprehensive and integrated approach.

7. Computer security should be periodically reassessed.

8. Computer security is constrained by societal factors.

NIST SP 800-35

NIST SP 800-35, Guide to Information Technology Security Services, is an overview of information security. In this standard six phases of the IT security life cycle are defined:

images Phase 1: Initiation. At this point the organization is looking into implementing some IT security service, device, or process.

images Phase 2: Assessment. This phase involves determining and describing the organization’s current security posture. It is recommended that this phase use quantifiable metrics.

images Phase 3: Solution. This is where various solutions are evaluated and one or more are selected.

images Phase 4: Implementation. In this phase the IT security service, device, or process is implemented.

images Phase 5: Operations. Phase 5 is the ongoing operation and maintenance of the security service, device, or process that was implemented in phase 4.

images Phase 6: Closeout. At some point, whatever was implemented in phase 4 will be concluded. Often this is when a system is replaced by a newer and better system.

NIST SP 800-30 Rev. 1

NIST SP 800-30 Rev. 1, Guide for Conducting Risk Assessments, is a standard for conducting risk assessments. Risk assessments were discussed in Chapter 12, “Assessing System Security.” This standard provides guidance on how to conduct such an assessment. There are nine steps in the process (a brief worked example of steps 5 through 7 follows the list):

STEP 1. System Characterization

STEP 2. Threat Identification

STEP 3. Vulnerability Identification

STEP 4. Control Analysis

STEP 5. Likelihood Determination

STEP 6. Impact Analysis

STEP 7. Risk Determination

STEP 8. Control Recommendations

STEP 9. Results Documentation
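The following is a minimal Python sketch of steps 5 through 7 (likelihood determination, impact analysis, and risk determination) using a simple qualitative scoring scheme. The threats, scores, and thresholds are hypothetical examples, not values prescribed by NIST SP 800-30.

```python
# Sketch: combine likelihood and impact ratings into a coarse risk rating.

LIKELIHOOD = {"low": 1, "medium": 2, "high": 3}
IMPACT = {"low": 1, "medium": 2, "high": 3}

def risk_level(likelihood: str, impact: str) -> str:
    """Step 7: derive a risk rating from likelihood (step 5) and impact (step 6)."""
    score = LIKELIHOOD[likelihood] * IMPACT[impact]
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

threats = [
    ("Unpatched web server exploited", "high", "high"),
    ("Laptop theft", "medium", "medium"),
    ("Data center flood", "low", "high"),
]

for name, likelihood, impact in threats:
    print(f"{name}: {risk_level(likelihood, impact)} risk")
```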

U.S. DoD Standards

The Risk Management Framework (RMF) is the unified information security framework for the entire federal government. It replaces legacy certification and accreditation processes such as DIACAP within federal government departments and agencies, the Department of Defense (DoD), and the Intelligence Community (IC).

The DoD Information Assurance Certification and Accreditation Process (DIACAP) was the DoD procedure for identifying, implementing, validating, certifying, and managing information assurance (IA) capabilities and services, expressed as IA controls, and for authorizing the operation of DoD information systems (ISs). It also described the processes for configuration management of DoD IA controls and supporting implementation materials. DIACAP was replaced by RMF.

Using the Orange Book

The Orange Book is the common name of one of several books published by the United States Department of Defense (DoD). Because each book is color-coded, the entire series is referred to as The Rainbow Series. (We will look at the series as a whole in the next section of this chapter.) The full name of the Orange Book is the Department of Defense Trusted Computer System Evaluation Criteria (DOD-5200.28-STD). It is a cornerstone for computer security standards, and one cannot be a security professional without a good understanding of this book. Although the Orange Book has been supplanted, the concepts in the book are still worthy of study, as they provide significant guidance on security standards for networks.

The book outlines the criteria for rating various operating systems. In the chapters you have already read, we have primarily focused on Windows with some attention to Linux. For most settings these operating systems provide enough security. However, you need to be aware of the various security levels of secure operating systems available. If you are considering operating systems for key servers, you should consider the underlying security rating for that operating system. If your organization intends to do any work with any military, defense, or intelligence agencies, you may be required to have operating systems that reach a specified level of security.

Actual copies of the Orange Book are notoriously difficult to obtain for anyone not working for the U.S. government, which makes understanding the security ratings difficult. The book is not classified; it simply is not widely published. However, you can find excerpts, chapters, and standards from it at the following web addresses:

images The Orange Book Site: www.dynamoo.com/orange/

images Department of Defense Orange Book: http://csrc.nist.gov/publications/history/dod85.pdf

images The Department of Defense Standard: http://csrc.nist.gov/publications/history/dod85.pdf#search=’the%20orange%20book%20computer%20security

The DoD security categories are designated by a letter ranging from D (minimal protection) to A (verified protection). The Orange Book designations are generally used to evaluate the security level of operating systems rather than entire networks. However, your network will not be particularly secure if the operating systems running on your servers and workstations are not secure. We will take a moment to examine each of these categories.

D - Minimal Protection

This category is for any system that does not meet the specifications of any other category. Any system that fails to receive a higher classification gets a D classification. In other words, a D rating simply means the operating system has not been evaluated against, or could not meet, the requirements of any higher division. By default, any operating system that is not given another rating is given a D rating. It is very rare to find any widely used operating system with a D rating.

C - Discretionary Protection

Discretionary protection applies to Trusted Computing Bases (TCBs) with optional object (for example, file, directory, devices, etc.) protection. This simply means that there is some protection for the file structure and devices. This is a rather low level of protection. C is a general class where all of its members (C1, C2, etc.) have basic auditing capability. That means that security events are logged. If you have ever looked at the event viewer in Windows 2000 or Windows XP, then you have seen an example of security audit logs. Operating systems will actually fall into a subcategory such as C2, rather than the general class C.

FYI: What Is a Trusted Computing Base?

A trusted computing base, or TCB, is a term referring to the totality of protection mechanisms within a computer system, including hardware, firmware, and software, the combination of which is responsible for enforcing a security policy. The ability of a trusted computing base to enforce correctly a unified security policy depends on the correctness of the mechanisms within the trusted computing base and the correct input of parameters related to the security policy.

C1 - Discretionary Security Protection

C1, discretionary security protection, builds on the C class with additional requirements. The following list defines the features required to achieve C1-level protection. This level of security was common in the past, but for the past decade most operating system vendors have aimed for C2.

images Discretionary access control, for example access control lists (ACLs), user/group/world protection.

images Usually for users who are all on the same security level.

images Periodic checking of the integrity of the trusted computing base (TCB). (See the FYI box earlier in this section for a definition of the TCB.)

images Username and password protection and secure authorizations database.

images Protected operating system and system operations mode.

images Tested security mechanisms with no obvious bypasses.

images Documentation for user security.

images Documentation for systems administration security.

images Documentation for security testing.

This list may not be particularly clear to some readers. In order to clarify exactly what C1 security is, let’s look at a few actual excerpts from the Orange Book about C-level and then explain what these excerpts mean:

images “The TCB shall require users to identify themselves to it before beginning to perform any other actions that the TCB is expected to mediate. Furthermore, the TCB shall use a protected mechanism (for example, passwords) to authenticate the user’s identity. The TCB shall protect authentication data so that it cannot be accessed by any unauthorized user.”

This simply means that users must log in before they can do anything. That may sound obvious, but earlier versions of Windows (3.1 and before) did not require users to log in. This was true of many older desktop operating systems.

images “The security mechanisms of the ADP system shall be tested and found to work as claimed in the system documentation. Testing shall be done to assure that there are no obvious ways for an unauthorized user to bypass or otherwise defeat the security protection mechanisms of the TCB.”

That sounds pretty vague. It simply means that the operating system has been tested to ensure that it does what its own documentation claims it will do. It says nothing about what level of security the documentation should claim, merely that there must have been testing to ensure the operating system meets the claims made in the documentation. The reader may also wish to note that ADP stands for automatic data processing. It refers to any system that processes data without direct step-by-step human intervention. This may sound like a description of most computer systems, and it is. Remember that the Orange Book was first conceived many years ago.

C2 - Controlled Access Protection

C2, as the name suggests, is C1 with additional restrictions.

images Object protection can be on a single-user basis, for example, through an ACL or Trustee database.

images Authorization for access may be assigned only by authorized users.

images Mandatory identification and authorization procedures for users, for example, username/password.

images Full auditing of security events (the event, date, time, user, success/failure, terminal ID).

images Protected system mode of operation.

images Documentation as C1 plus information on examining audit information.

You will find this level of certification in IBM OS/400, Windows NT/2000/XP, and Novell NetWare. Most Windows systems in use today are rated C2. Again, it might be helpful to explain this level of security by examining what the Orange Book actually says and elaborating on that a bit.

images “The TCB shall define and control access between named users and named objects (for example, files and programs) in the ADP system. The enforcement mechanism (for example, self/group/public controls, access control lists) shall allow users to specify and control sharing of those objects by named individuals, or defined groups of individuals, or by both, and shall provide controls to limit propagation of access rights. The discretionary access control mechanism shall, either by explicit user action or by default, provide that objects are protected from unauthorized access. These access controls shall be capable of including or excluding access to the granularity of a single user. Access permission to an object by users not already possessing access permission shall only be assigned by authorized users.”

What this means in plain English is that once a user has logged on and has access to specific objects, that user cannot easily “promote” himself to a higher level of access. It also means that for an operating system to be rated C2, you must be able to assign security permissions to individual users rather than simply to entire groups.
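The following is a minimal Python sketch of discretionary access control with single-user granularity, in the spirit of the C2 requirement quoted above. The object, users, and permissions are hypothetical.

```python
# Sketch: per-user ACLs where only authorized users (here, the owner)
# may assign access, and access defaults to denied.

from dataclasses import dataclass, field

@dataclass
class ProtectedObject:
    name: str
    owner: str
    acl: dict = field(default_factory=dict)   # user -> set of permissions

    def grant(self, granting_user: str, target_user: str, permission: str) -> None:
        # Mirrors "assigned only by authorized users."
        if granting_user != self.owner:
            raise PermissionError(f"{granting_user} may not grant access to {self.name}")
        self.acl.setdefault(target_user, set()).add(permission)

    def check(self, user: str, permission: str) -> bool:
        return permission in self.acl.get(user, set())

payroll = ProtectedObject(name="payroll.db", owner="alice")
payroll.grant("alice", "bob", "read")

print(payroll.check("bob", "read"))    # True
print(payroll.check("bob", "write"))   # False: no access unless explicitly granted
```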

images “All authorizations to the information contained within a storage object shall be revoked prior to initial assignment, allocation or reallocation to a subject from the TCB’s pool of unused storage objects. No information, including encrypted representations of information, produced by a prior subject’s actions is to be available to any subject that obtains access to an object that has been released back to the system.”

This paragraph means that if one user logs on and uses some system object, all of its permissions are revoked before that object can be reused by another user. This prevents a user with lower security access from logging on immediately after a user with higher security access and perhaps reusing some system object the previous user left in memory. It is yet another way to prevent a user from accessing items that he may not be authorized to access.

images “The TCB shall require users to identify themselves to it before beginning to perform any other actions that the TCB is expected to mediate. Furthermore, the TCB shall use a protected mechanism (for example, passwords) to authenticate the user’s identity. The TCB shall protect authentication data so that it cannot be accessed by any unauthorized user. The TCB shall be able to enforce individual accountability by providing the capability to uniquely identify each individual ADP system user. The TCB shall also provide the capability of associating this identity with all auditable actions taken by that individual.”

In short this paragraph means that not only should security activities be able to be logged, but they should also be associated with a specific user. That way an administrator can tell which user did what activity. Again, if you have ever looked at a Windows Security log, you will see this. Figure 13-1 shows an event from a Windows event log. Note that the individual username is shown.
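The following is a minimal Python sketch of a C2-style audit record that ties each security event to an individual user. The field names are hypothetical, loosely modeled on the items listed earlier (event, date, time, user, success/failure, terminal ID).

```python
# Sketch: an audit record that associates every auditable action with
# the identity that performed it.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditRecord:
    timestamp: datetime
    user: str
    event: str
    success: bool
    terminal_id: str

def record_event(user: str, event: str, success: bool, terminal_id: str) -> AuditRecord:
    return AuditRecord(datetime.now(timezone.utc), user, event, success, terminal_id)

entry = record_event("jsmith", "logon", success=False, terminal_id="TTY04")
print(entry)
```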


FIGURE 13-1 Windows 8 event log

B - Mandatory Protection

Category B is a rather important category because it provides a higher level of security. It does this by specifying that the TCB protection systems should be mandatory, not discretionary. Like the C category this is a broad category containing several subcategories. You will not encounter an operating system that is simply rated B; it would be B1, B2, and so on.

B1 - Labeled Security Protection

This is just like B, only with a few added security features.

images Mandatory security and access labeling of all objects. The term objects, in this context, encompasses files, processes, devices, and so on.

images Auditing of labeled objects.

images Mandatory access control for all operations.

images Ability to specify security level printed on human-readable output (for example, printers).

images Ability to specify security level on any machine-readable output.

images Enhanced auditing.

images Enhanced protection of operating system.

images Improved documentation.

Let us again turn to what the Orange Book actually states about this security level and use that as a guide to better understanding this particular security rating.

images “Sensitivity labels associated with each subject and storage object under its control (for example, process, file, segment, device) shall be maintained by the TCB. These labels shall be used as the basis for mandatory access control decisions. In order to import non-labeled data, the TCB shall request and receive from an authorized user the security level of the data, and all such actions shall be auditable by the TCB.”

This paragraph tells us that in a B1-rated system there are security levels (labels) assigned to every single object (that would include any file and any device) and for every subject (user). No new subject or object can be added to the system without a security level. This means that unlike C1 and C2 systems where such access control is discretionary (i.e., optional), it is impossible to have any subject or object in a B1 system that does not have access control defined. Consider again the Windows operating system. Many items in that system have restricted access (often restricted only to administrators). This includes the control panel and various administrative utilities. However, some items (such as the accessories) have no access control. In a B1- (or higher) rated system, everything in that system has access control.

These security labels are the real key to B1 security ratings. Much of the Orange Book documentation regarding the B1 rating surrounds how such labels are imported or exported.
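The following is a minimal Python sketch of the B1 idea that no object can exist without a sensitivity label, and that importing unlabeled data requires a label supplied by an authorized user, with the action audited. The labels and names are hypothetical.

```python
# Sketch: mandatory labeling -- every object carries a sensitivity label,
# and imported unlabeled data must receive one from an authorized user.

LEVELS = ("unclassified", "confidential", "secret", "top secret")

class LabeledObject:
    def __init__(self, name: str, label: str):
        if label not in LEVELS:
            # An object cannot be created without a valid label.
            raise ValueError(f"{name} must carry one of {LEVELS}")
        self.name = name
        self.label = label

def import_unlabeled_data(name: str, label_from_authorized_user: str) -> LabeledObject:
    """Importing non-labeled data requires a label from an authorized user,
    and the action is audited, per the excerpt above."""
    print(f"AUDIT: label '{label_from_authorized_user}' assigned to imported data '{name}'")
    return LabeledObject(name, label_from_authorized_user)

doc = import_unlabeled_data("field-report.txt", "secret")
print(doc.name, doc.label)
```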

images “The TCB shall require users to identify themselves to it before beginning to perform any other actions that the TCB is expected to mediate. Furthermore, the TCB shall maintain authentication data that includes information for verifying the identity of individual users (for example, passwords) as well as information for determining the clearance and authorizations of individual users. This data shall be used by the TCB to authenticate the user’s identity and to ensure that the security level and authorizations of subjects external to the TCB that may be created to act on behalf of the individual user are dominated by the clearance and authorization of that user. The TCB shall protect authentication data so that it cannot be accessed by any unauthorized user. The TCB shall be able to enforce individual accountability by providing the capability to uniquely identify each individual ADP system user. The TCB shall also provide the capability of associating this identity with all auditable actions taken by that individual.”

Now this paragraph may sound like the same paragraph from the C category indicating that security activities should be audited. However, this goes a bit further. Every action is not only audited along with the user that performed that action, but the user’s access rights/security level are also noted. This provides a clear indication of any user attempting to perform some action that is beyond his security rights.

This level of operating system security can be found on several very high-end systems such as:

images HP-UX BLS (a highly secure version of Unix)

images Cray Research Trusted Unicos 8.0 (an operating system for the famous Cray research computers)

images Digital SEVMS (a highly secure VAX operating system)

B2 - Structured Protection

As the name suggests, this is an enhancement to the B category. It includes everything B1 does, plus a few added features.

images Notification of security level changes affecting interactive users

images Hierarchical device labels

images Mandatory access over all objects and devices

images Trusted path communications between user and system

images Tracking down of covert storage channels

images Tighter system operations mode into multilevel independent units

images Improved security testing

images Version, update, and patch analysis and auditing

This level of security is actually found in a few operating systems:

images Honeywell Multics: This is a highly secure mainframe operating system.

images Cryptek VSLAN: This is a very secure component to network operating systems. The Verdix Secure Local Area Network (VSLAN) is a network component that is capable of interconnecting host systems operating at different ranges of security levels allowing a multi-level secure (MLS) LAN operation.

images Trusted XENIX: This is a very secure Unix variant.

Examining the Orange Book will give us a better view of the differences between B2 and B1 levels of security. A few paragraphs seem to really illustrate the primary differences:

images “The TCB shall support a trusted communication path between itself and user for initial login and authentication. Communications via this path shall be initiated exclusively by a user.”

This paragraph tells us that not only must the user be authenticated before accessing any of the system’s resources, but that the communication used to authenticate must be secure. This is particularly important in client/server situations. A B2-rated server allows clients to log on only if their log-on process is secure. This means the log-on communication should be encrypted via a VPN or some other method that keeps the username and password secure. Notice that the first two B2-rated operating systems are for distributed environments.

images “The TCB shall immediately notify a terminal user of each change in the security level associated with that user during an interactive session. A terminal user shall be able to query the TCB as desired for a display of the subject’s complete sensitivity label.”

In this excerpt we see that if a user is logged on to the system and something should change in either his security level or in the security level of some object he is accessing, that the user will immediately be notified and, if necessary, his access will be changed. In many systems you are probably most familiar with (Windows, Unix, Linux), if a user’s permissions are changed, the changes do not take effect until the next time the user logs on. With a B2-rated system the changes take effect immediately.

B3 - Security Domains

Yes, this category is yet another enhancement to the B category.

images ACLs additionally based on groups and identifiers

images Trusted path access and authentication

images Automatic security analysis

images Auditing of security auditing events

images Trusted recovery after system down and relevant documentation

images Zero design flaws in the TCB and a minimum of implementation flaws

To the best of this author’s knowledge, there is only one B3-certified operating system, Getronics/Wang Federal XTS-300. This is a highly secure Unix-like operating system, complete with a graphical user interface. There are a couple of fascinating segments of the Orange Book’s description of the B3 security rating that help illustrate the differences between B2 and B3.

images “The TCB shall define and control access between named users and named objects (for example, files and programs) in the ADP system. The enforcement mechanism (for example, access control lists) shall allow users to specify and control sharing of those objects, and shall provide controls to limit propagation of access rights. The discretionary access control mechanism shall, either by explicit user action or by default, provide that objects are protected from unauthorized access. These access controls shall be capable of specifying, for each named object, a list of named individuals and a list of groups of named individuals with their respective modes of access to that object. Furthermore, for each such named object, it shall be possible to specify a list of named individuals and a list of groups of named individuals for which no access to the object is to be given. Access permission to an object by users not already possessing access permission shall only be assigned by authorized users.”

This paragraph says that access control is taken to a higher level with B3 systems. In such a system every single object must have a specific list of authorized users and may have a specific list of prohibited users. This goes beyond the C level, where an object may have a list of authorized users. It also goes beyond the lower B ratings with its list of specifically disallowed users.

images “The TCB shall be able to create, maintain, and protect from modification or unauthorized access or destruction an audit trail of accesses to the objects it protects. The audit data shall be protected by the TCB so that read access to it is limited to those who are authorized for audit data. The TCB shall be able to record the following types of events: use of identification and authentication mechanisms, introduction of objects into a user’s address space (for example, file open, program initiation), deletion of objects, and actions taken by computer operators and system administrators and/or system security officers and other security relevant events. The TCB shall also be able to audit any override of human-readable output markings. For each recorded event, the audit record shall identify: date and time of the event, user, type of event, and success or failure of the event. For identification/authentication events the origin of request (for example, terminal ID) shall be included in the audit record. For events that introduce an object into a user’s address space and for object deletion events the audit record shall include the name of the object and the object’s security level. The ADP system administrator shall be able to selectively audit the actions of any one or more users based on individual identity and/or object security level. The TCB shall be able to audit the identified events that may be used in the exploitation of covert storage channels. The TCB shall contain a mechanism that is able to monitor the occurrence or accumulation of security auditable events that may indicate an imminent violation of security policy. This mechanism shall be able to immediately notify the security administrator when thresholds are exceeded, and if the occurrence or accumulation of these security relevant events continues, the system shall take the least disruptive action to terminate the event.”

This paragraph tells us that auditing in a B3 system is taken to a higher level. In such a system not only are all security-related events audited, but any occurrence or accumulation of occurrences that might indicate a potential violation of a security policy will trigger an alert to the administrator. This is conceptually similar to an intrusion detection system. However, in this incident it is not simply signs of intrusions that are being monitored but any event or series of events that might lead to any compromise of any part of the operating system’s security.
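The following is a minimal Python sketch of that threshold-based alerting idea. The event type, threshold, and notification mechanism are hypothetical.

```python
# Sketch: monitor an accumulation of security-relevant events and notify
# the security administrator when a threshold is exceeded.

from collections import Counter

FAILED_LOGON_THRESHOLD = 5   # hypothetical threshold

def notify_security_administrator(user: str, count: int) -> None:
    print(f"ALERT: {count} failed logons for '{user}' -- possible policy violation")

def monitor(events):
    """events: iterable of (user, event_type) tuples."""
    failures = Counter()
    for user, event_type in events:
        if event_type == "logon_failure":
            failures[user] += 1
            if failures[user] >= FAILED_LOGON_THRESHOLD:
                notify_security_administrator(user, failures[user])

monitor([("mallory", "logon_failure")] * 6)
```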

A - Verified Protection

Division A is the highest security division. It is divided into A1 and A2 and beyond. A2 and above are simply theoretical categories for operating systems that might someday be developed. There are currently no such operating systems in existence.

A1 - Verified Protection

This level includes everything found in B3 with the addition of formal methods and proof of integrity of TCB. The biggest difference between A-rated and B-rated operating systems lies in the development process. For A-rated systems the Orange Book carefully delineates specific controls that must be in place during the development of the system and testing standards that must be adhered to. This basically means that an A-rated system has had every aspect of its security carefully verified during its development. Doing this requires a great deal of effort and expense. You will note that the only two A1 systems we list are for military use.

You can actually find a few A1-certified systems:

images Boeing MLS LAN: This is a highly secure and specialized network operating system.

images Honeywell SCOMP (Secure Communications Processor): This is a highly secure and specialized network operating system.

In Practice

The Orange Book in Your Organization

Many IT professionals select operating systems based on one of three factors:

images Cost

images What they are most familiar with

images What has the most software available for it

This means that in many businesses you will see Windows on the desktop and Windows, Linux, or Unix servers. However, as security becomes a greater concern, perhaps other criteria should be considered, at least for servers. Note that Windows systems are C2-rated; that means a Windows 2000 or Windows 2003 server is rated C2. For many businesses this is enough.

However, you may wish to consider a more secure solution, at least for your most critical servers; a B1-rated (or better) system would generally suffice. This would probably mean some version of Unix (though it is hoped that Microsoft will eventually release a more secure server version, perhaps one with a B1 or better rating). You could still have Windows workstations, and even use Windows for less critical servers such as web servers. But use the more secure Unix version for your major database servers that contain critical data such as credit card data.

There even has been a great deal of talk in the Linux community about someone making a much more secure version of this open source operating system specifically for use in highly secure settings. So far, to the best of this author’s knowledge, that product has not been released. However, given the history of the open source software community, it seems only a matter of time.

Using the Rainbow Series

As we mentioned, the Orange Book is only one part of the Rainbow Series. You will see the Orange Book mentioned most often, but there are other books you should be aware of. Each of these books is part of the U.S. Department of Defense guide to information security. You can view the series at the following website:

images FAS Rainbow Series page: https://fas.org/irp/nsa/rainbow.htm

Below is a list of the books in the series, along with a brief description of each. Some books are more applicable to your study of network defense than others. For those books that are less relevant to our study, the description is briefer. You may think that if you are not directly involved in systems related to defense or intelligence, you do not need to be familiar with these standards. However, consider that when you are trying to secure any network, would it not be useful to consider the security standards and requirements of the most secure systems? Many private companies have done just that and have adopted one or more of these standards for their own use.

images Tan Book—A Guide to Understanding Audit in Trusted Systems [Version 2 6/01/88]. This book describes recommended processes for auditing trusted systems. Recall that event auditing is a significant feature of several security classifications in the Orange Book. The Tan Book describes exactly how auditing should be done. This book is a worthwhile read for any security professional.

images Bright Blue Book—Trusted Product Evaluation - A Guide for Vendors [Version 1 3/1/88]. As the name indicates, this is a guide for vendors. This will be of use to you only if your company is attempting to market secure systems to the United States Department of Defense.

images Orange Book—Department of Defense Trusted Computer System Evaluation Criteria (TCSEC). This book has been examined in great detail in the first portion of this chapter.

images Aqua Book—Glossary of Computer Security Terms. Bookstores and the Internet are replete with computer security glossaries. The textbook you are reading right now includes such a glossary. The Aqua Book is the Department of Defense computer security glossary. It is worth at least a cursory examination.

images Burgundy Book—A Guide to Understanding Design Documentation in Trusted Systems. As the name suggests, this book examines what is required for documentation. As with most government agencies, the standard here is for a lengthy amount of documentation probably much more detailed than most organizations will require.

images Lavender Book—A Guide to Understanding Trusted Distribution in Trusted Systems. This book discusses standards for security in distributed systems. In this day of e-commerce it would be quite useful for any security professional to spend some time studying these standards.

images Venice Blue Book—Computer Security Subsystem Interpretation of the Trusted Computer System Evaluation Criteria. This book describes criteria for evaluating any hardware or software that is to be added to an existing secure system. While the specifics of this particular document are not critical to your study of network defense, the concept is. Recall in Chapter 12 we discussed change control processes. One reason this is so important is that even a very secure system can have its security compromised by the addition of a device or software that is not secure.

images Red Book—Trusted Network Interpretation Environments Guideline - Guidance for Applying the Trusted Network Interpretation. In this book you will find criteria for evaluating network security technologies. This is closely related to the material in the Lavender Book.

images Pink Book—Rating Maintenance Phase Program Document. In this document you will see the criteria for rating maintenance programs. This again relates back to change control processes discussed in Chapter 12 and is related to the Venice Blue Book. Routine maintenance of a secure system can either enhance or compromise system security, depending on how it is executed.

images Purple Book—Guidelines for Formal Verification Systems. For a vendor developing a system it wishes to be rated according to Department of Defense guidelines, this book outlines the process of verifying the security of that system.

images Brown Book—A Guide to Understanding Trusted Facility Management [6/89]. Because secure systems must reside in some building/facility, then the management of that facility is of concern to a security professional. This book details guidelines for the management of a trusted facility.

images Yellow-Green Book—Writing Trusted Facility Manuals. Anyone familiar with government documents of any type is accustomed to a great deal of paperwork and an excessive amount of manuals. This particular book is a guide to writing manuals.

images Light Blue Book—A Guide to Understanding Identification and Authentication in Trusted Systems. In this manual the process of authentication is explored in great detail. This information is critical to you only if you are attempting to create your own authentication process rather than using one of the many existing authentication protocols.

images Blue Book—Trusted Product Evaluation Questionnaire [Version-2 - 2 May 1992]. This document is closely related to the Orange Book, as it contains questions that must be answered in order to get an operating system rated according to Orange Book standards.

images Grey/Silver Book—Trusted UNIX Working Group (TRUSIX) Rationale for Selecting Access Control List Features for the UNIX System. For readers using Unix this book is of particular value. It examines the standards for choosing specific access control list options in a Unix operating system.

images Lavender/Purple Book—Trusted Database Management System Interpretation. As the name suggests, this book details the requirements for a secure database management system. Given that databases are at the heart of all business programming, the security of such database systems is an important issue.

images Yellow Book—A Guide to Understanding Trusted Recovery. Should any failure occur (hard drive crash, flood, fire, etc.), you must restore your systems. For secure systems, even such recovery must be done in accordance with security guidelines, which this book outlines.

images Forest Green Book—A Guide to Understanding Data Remanence in Automated Information Systems. This particular book covers requirements for the secure storage of data.

images Hot Peach Book—A Guide to Writing the Security Features User’s Guide for Trusted Systems. This book is yet another manual on how to write manuals.

images Turquoise Book—A Guide to Understanding Information System Security Officer Responsibilities for Automated Information Systems. In many government agencies or in defense contractor companies, there is a designated security officer with overall responsibilities for security. This book outlines the responsibilities of such an officer. It is not directly relevant to network defense but can provide background information when formulating organizational security policies.

images Violet Book—Assessing Controlled Access Protection. In this particular book the reader will find standards related to how to assess access control procedures. Most operating systems (at least C-rated or better) have some sort of access control (discretionary in C-rated systems, mandatory in B-rated systems).

images Blue Book—Introduction to Certification and Accreditation. This manual explains the process of achieving Department of Defense certification for a product.

images Light Pink Book—A Guide to Understanding Covert Channel Analysis of Trusted Systems [11/93]. One feature of some higher rated systems (B2 and above) is the handling of communication channels. This document discusses analyzing such channels in great detail.

Clearly no one can be expected to study, much less memorize, all of these books. The Orange Book is not used today, but it still is a valuable view of how systems security works, so you should certainly have a basic familiarity with it. Beyond that, simply select the one or two books that are most pertinent to your job role or to your personal research interests, and familiarize yourself with those. The most important thing to gather from this section is what the various books are responsible for. You should know which book to consult for a given purpose.

Using the Common Criteria

The Orange Book and the entire Rainbow Series are excellent guidelines for security. Several other organizations and other nations have also established their own security guidelines. These separate sets of security criteria overlap on some issues. Eventually, the organizations responsible for the existing security criteria in the United States, Canada, and Europe began a project to fuse their separate criteria into a single set of IT security criteria that became known as the Common Criteria (www.commoncriteriaportal.org/cc/). The first version was completed in January 1996.

The Common Criteria originated out of three standards:

images ITSEC (Information Technology Security Evaluation Criteria), a European standard used by the UK, France, the Netherlands, Germany, and Australia. You can learn more about ITSEC at https://www.bsi.bund.de/SharedDocs/Downloads/DE/BSI/Zertifizierung/ITSicherheitskriterien/itsec-en_pdf.pdf?__blob=publicationFile. However, remember that in most cases this has been supplanted by the Common Criteria.

images The United States Department of Defense Orange Book.

images CTCPEC (Canadian Trusted Computer Product Evaluation Criteria), the Canadian standard. This standard is roughly equivalent in purpose to the Orange Book.

The Common Criteria is essentially a fusion of these three standards. While they can now be applied to any product, the original intent was to outline standards for companies selling computer products for use in defense or intelligence organizations. The idea of the Common Criteria is, as the name suggests, to have common criteria for security: common, as in applicable to a wide range of organizations and industries.

As with most things in information technology, the Common Criteria was eventually revised. Version 2.0 of the Common Criteria was released in April 1998. This version of the Common Criteria was adopted as ISO International Standard 15408 in 1999. Subsequent minor revisions of the Common Criteria were also adopted by ISO. The Common Criteria was originally developed to supersede parts of the Rainbow Series and similar standards used in Europe and Canada. However, its use has gone well beyond defense-related applications. The Common Criteria are now often used in private organizational security settings. In fact, a basic knowledge of this standard is part of the CISSP (Certified Information Systems Security Professional) certification test.

Clearly the Common Criteria is important and widely used, but what exactly does it cover? The Common Criteria (often abbreviated as just CC) defines a common set of security requirements, which are divided into functional requirements and assurance requirements. The CC further defines two kinds of documents that can be built from this common set; these documents and the two requirement categories are described in the following list:

images Protection Profile: This is a document created by a user that identifies user security requirements.

images Security Target: This is a document created by the developer of a particular system that identifies the security capabilities of a particular product.

images Security Functional Requirements: Specify individual security functions that a particular product should provide.

images Security Assurance Requirements: Describe what measures are taken during the development (and eventual evaluation) of a product to ensure that it actually complies with its claimed security functionality.

Frequently, organizations ask for an independent evaluation of a product to show that the product does in fact meet the claims in a particular Security Target. The product or system being evaluated is referred to as the Target of Evaluation, or TOE. The Common Criteria has built-in mechanisms to support these independent evaluations.

The Common Criteria outlines some requirements/levels of security assurance. These levels are usually called Evaluation Assurance Levels (EALs). These EALs are numbered 1 to 7, with higher numbers representing more thoroughly evaluated security. The idea is to rate security products, operating systems, and security on a numeric scale. The criteria for each level are well established and are the same for all parties using the Common Criteria. Essentially the EALs are based on the security targets, security functional requirements, and security assurance requirements described earlier in this section.

Using Security Models

The Orange Book and the Common Criteria are designed to evaluate the security levels of operating systems, applications, and other products. Ensuring that the products your organization uses meet a certain security standard is certainly a key part of securing your network. This process of evaluating systems, as well as everything else we have discussed in this book, has been very direct and very practical. However, now it is time to delve into the theoretical aspects of computer security. In this section we will discuss various widely used models that form the underlying basis for an organization’s security strategy. Let me reiterate that a person can certainly secure a network using only the practical guidelines found in the preceding 12 chapters of this book. However, in some organizations, particularly larger organizations, you will find that a particular security model is first chosen and then the security strategy is built around it. In small to midsize organizations, a security model is generally not selected.

It must be stressed that these models are theoretical frameworks that can assist you in guiding your network defense strategy. You can certainly be successful at defending networks without them, but in this book our goal is to give you a well-rounded understanding of network defense.

FYI: Security Models and the CISSP

The CISSP exam covers several of these models, so familiarizing yourself with them now can be advantageous to you should you later take that exam.

Bell-LaPadula Model

The Bell-LaPadula model is a formal security model that describes various access control rules. This was one of the earliest computer security models. It was developed by two researchers named Bell and LaPadula in 1973. It was designed to enforce access control in government and military applications. The entire model is based on a principle it refers to as the basic security theorem. That theorem states that:

If a system starts in a secure state and every state transition is secure, then every subsequent state will also be secure, no matter what inputs occur.

In other words, if you start out with a secure system, and then every single transaction that occurs that might change the state of the system in any way is also secure, then the system will remain secure. Therefore the Bell-LaPadula model focuses on any transaction that changes the system’s state.

The model divides a system into a set of subjects and objects. A subject is any entity that is attempting to access a system or data. That usually refers to an application or system that is accessing another system or data within that system. For example, if a program is designed to perform data-mining operations, requiring it to access data, then that program is the subject, and the data it is trying to access is the object. An object, in this context, is literally any resource the subject may be trying to access.

The model defines the access control for these subjects and objects. All interactions between any subjects and objects are based on their individual security levels. There are usually four security levels:

images Unclassified

images Confidential

images Secret

images Top secret

It is no coincidence that these are the same four classifications the United States military uses. This particular model was originally designed with military applications in mind.

There are two properties that describe the mandatory access in this model. These are the simple-security property and the * property:

images Simple-security property (also referred to as the ss-property): This means that a subject can read an object only if the security level of the subject is higher than or equal to the security level of the object. This is often referred to as read down. What this means is that if the subject has a secret level of security, it can read only secret, confidential, and unclassified materials. That subject cannot read top secret material.

images * property (also referred to as the star property): A subject can write on an object only if the security level of the object is higher than or equal to the security level of the subject. This is often referred to as write up. It may seem odd to allow a system to write to a higher security level than itself; however, the key is to use a broad definition for the word write. What this means is that a system that is classified secret cannot output less than secret. This prevents a secret system from classifying its output as confidential or unclassified.

The Bell-LaPadula model also has a third rule that is applied to discretionary access control (DAC), called the discretionary security property. Discretionary access is defined as the policies that control access based on named users and named objects.

images Discretionary security property (also called ds-property): Each element of the set of current accesses, as well as the specific access mode (for example, read, write, or append), is included in the access matrix entry for the corresponding subject-object pair.
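Taken together, the ss-property and the * property can be illustrated with a minimal Python sketch. The four levels match the list above; the subjects and objects are hypothetical.

```python
# Sketch: Bell-LaPadula mandatory access checks -- no read up, no write down.

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    # ss-property: read down only.
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    # *-property: write up only.
    return LEVELS[object_level] >= LEVELS[subject_level]

print(can_read("secret", "confidential"))   # True  (read down)
print(can_read("secret", "top secret"))     # False (no read up)
print(can_write("secret", "top secret"))    # True  (write up)
print(can_write("secret", "unclassified"))  # False (no write down)
```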

Biba Integrity Model

The Biba Integrity Model is also an older model, having been first published in 1977. This model is similar to the Bell-LaPadula model in that it also uses subjects and objects; however, where Bell-LaPadula controls the disclosure of data, Biba controls the modification of objects in order to protect integrity.

The Biba Integrity model consists of three parts. The first two are very similar in wording and concept to the Bell-LaPadula model but with wider applications.

images A subject cannot execute objects that have a lower level of integrity than the subject.

images A subject cannot modify objects that have a higher level of integrity.

images A subject may not request service from objects that have a higher integrity level.

Essentially this last item means that a subject that has a confidential clearance cannot even request a service from any object with a secret or top secret clearance. The idea is to prevent subjects from even requesting data from objects with higher security levels.
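The following is a minimal Python sketch of the Biba rules, which effectively invert Bell-LaPadula: integrity is protected by forbidding reads from lower-integrity objects and writes to higher-integrity objects. The integrity levels are hypothetical.

```python
# Sketch: Biba integrity checks -- no read down, no write up.

INTEGRITY = {"low": 0, "medium": 1, "high": 2}

def can_read(subject: str, obj: str) -> bool:
    # Simple integrity rule: a subject may not read lower-integrity data.
    return INTEGRITY[obj] >= INTEGRITY[subject]

def can_write(subject: str, obj: str) -> bool:
    # * integrity rule: a subject may not modify higher-integrity objects.
    return INTEGRITY[subject] >= INTEGRITY[obj]

print(can_read("high", "low"))    # False: high-integrity subject cannot rely on low-integrity data
print(can_write("low", "high"))   # False: low-integrity subject cannot corrupt high-integrity data
```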

Clark-Wilson Model

The Clark-Wilson Model was first published in 1987. Like the Bell-LaPadula model it is a subject-object model. However, it introduces a new element, programs. In addition to considering subjects (systems accessing data) and objects (the data), it also considers subjects accessing programs. With the Clark-Wilson model there are two primary elements for achieving data integrity:

images Well-formed transaction

images Separation of duties

A well-formed transaction means that users can manipulate or change data only through carefully constrained operations. This prevents transactions from inadvertently altering secure data. Separation of duties prevents authorized users from making improper modifications, thus preserving the external consistency of data.

The Clark-Wilson model uses integrity verification and transformation procedures to maintain internal and external consistency of data. The verification procedures confirm that the data conforms to the integrity specifications at the time the verification is performed. What this means in simple terms is that this model explicitly calls for outside auditing to ensure that the security procedures are in place and effective. The model essentially encompasses three separate but related goals:

images Prevent unauthorized users from making modifications

images Prevent authorized users from making improper modifications

images Maintain internal and external consistency
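As a rough illustration only, the following hypothetical Python sketch shows how a well-formed transaction, separation of duties, and an integrity verification procedure might fit together. The account data, integrity condition, and function names are invented, not taken from the Clark-Wilson paper.

# Hypothetical illustration of Clark-Wilson concepts; data, names, and the
# integrity condition are invented for this example.
accounts = {"checking": 1000, "savings": 500}   # constrained data items
EXPECTED_TOTAL = 1500                           # the integrity specification

def ivp():
    # Integrity verification procedure: confirm the data meets its specification.
    return sum(accounts.values()) == EXPECTED_TOTAL

def transfer(user, certified_users, src, dst, amount):
    # Separation of duties: only users certified for this procedure may run it.
    if user not in certified_users:
        raise PermissionError(user + " is not certified for this transaction")
    # Well-formed transaction: the change is constrained so integrity is preserved.
    if amount <= 0 or accounts[src] < amount:
        raise ValueError("transaction would violate integrity constraints")
    accounts[src] -= amount
    accounts[dst] += amount
    assert ivp(), "integrity verification failed after the transaction"

transfer("alice", {"alice"}, "checking", "savings", 200)
print(accounts, ivp())   # {'checking': 800, 'savings': 700} True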

Chinese Wall Model

In business, the term Chinese wall denotes a complete separation between parts of a firm: a mechanism that keeps different segments of the firm apart so that information does not circulate between them. It is often used to prevent conflicts of interest.

The Chinese Wall Model was proposed by Brewer and Nash. This model seeks to prevent information flows that could cause a conflict of interest. For example, a subject is granted write access to an object only if the subject cannot read another object, in a different dataset, that contains unsanitized information. Unlike Bell-LaPadula, access to data is constrained not by attributes of the data in question but by what data the subject already holds access rights to. Also unlike Bell-LaPadula and Biba, this model originates from business concepts rather than military concepts. In this model, sets of data are grouped into "conflict of interest classes," and each subject is allowed access to at most one dataset belonging to each conflict of interest class.

The basis of this model is that users are allowed access only to information that does not conflict with information they already possess. From the perspective of the computer system, the information a user already possesses is simply the information that the user has previously accessed. A brief sketch follows.
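The following is a minimal, hypothetical Python sketch of the Brewer-Nash read rule: access history determines what a subject may read next. The conflict-of-interest classes and dataset names are invented for illustration.

# Hypothetical illustration of the Brewer-Nash read rule; the classes and
# dataset names are invented.
CONFLICT_CLASSES = {
    "banks": {"bank_a", "bank_b"},
    "oil":   {"oil_x", "oil_y"},
}
history = {}   # subject -> set of datasets already accessed

def can_read(subject, dataset):
    accessed = history.setdefault(subject, set())
    for members in CONFLICT_CLASSES.values():
        if dataset in members and accessed & (members - {dataset}):
            # The subject already holds a competing dataset in this class.
            return False
    accessed.add(dataset)
    return True

print(can_read("alice", "bank_a"))  # True: first dataset in the banks class
print(can_read("alice", "oil_x"))   # True: a different conflict of interest class
print(can_read("alice", "bank_b"))  # False: conflicts with bank_a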

State Machine Model

The State Machine Model looks at a system's transition from one state to another. It starts by capturing the current state of the system; at a later point, the system's new state is compared to the previously captured state to determine whether a security violation occurred in the interim. The model evaluates several elements:

images Users

images States

images Commands

images Output

A state machine model considers a system to be in a secure state when there is no instance of a security breach at the time of a state transition. In other words, a state transition should occur only by intent; any state transition that is not intentional is considered a security breach.
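The following hypothetical Python sketch illustrates the idea: state changes are acceptable only when they result from an intentional, authorized command, and any observed state that differs from the expected state is treated as a breach. The class, commands, and user names are invented for illustration.

# Hypothetical illustration of the state machine idea; the commands, users,
# and state contents are invented.
import copy

class SecureStateMachine:
    AUTHORIZED_USERS = {"admin"}
    ALLOWED_COMMANDS = {"add_user", "remove_user"}

    def __init__(self):
        self.state = {"users": set()}
        self.previous_state = copy.deepcopy(self.state)

    def execute(self, user, command, argument):
        # A transition is intentional only if an authorized user issues an allowed command.
        if user not in self.AUTHORIZED_USERS or command not in self.ALLOWED_COMMANDS:
            raise PermissionError("refused: not an intentional, authorized transition")
        self.previous_state = copy.deepcopy(self.state)
        if command == "add_user":
            self.state["users"].add(argument)
        else:
            self.state["users"].discard(argument)

    def check(self, observed_state):
        # Any observed state that differs from the state reached through execute()
        # reflects an unintended transition and is treated as a breach.
        return observed_state == self.state

machine = SecureStateMachine()
machine.execute("admin", "add_user", "alice")
print(machine.check({"users": {"alice"}}))             # True: intended transition
print(machine.check({"users": {"alice", "mallory"}}))  # False: unintended change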

U.S. Federal Regulations, Guidelines, and Standards

Several United States laws, regulations, and standards are important to network security. Although you are required to comply with them only if you are in a related business (for example, PCI DSS applies only to organizations that handle payment cards), they can provide insight into network security requirements.

The Health Insurance Portability & Accountability Act of 1996 (HIPAA)

The HIPAA Privacy Rule, also called the Standards for Privacy of Individually Identifiable Health Information, provided the first nationally recognized regulations for the use and disclosure of an individual's health information. Essentially, the Privacy Rule defines how covered entities may use individually identifiable health information, or protected health information (PHI). Covered entities is the term HIPAA uses for the organizations subject to these rules, such as health plans, health care clearinghouses, and health care providers.

HITECH

The Health Information Technology for Economic and Clinical Health Act (HITECH) was passed as part of the American Recovery and Reinvestment Act of 2009. HITECH makes several significant modifications to HIPAA. These changes include the following:

images Creating incentives for developing a meaningful use of electronic health records

images Changing the liability and responsibilities of business associates

images Redefining what a breach is

images Creating stricter notification standards

images Tightening enforcement

images Raising the penalties for a violation

images Creating new code and transaction sets

Sarbanes-Oxley (SOX)

The Sarbanes-Oxley legislation came into force in 2002 and introduced major changes to the regulation of financial practice and corporate governance. Named after Senator Paul Sarbanes and Representative Michael Oxley, who were its main architects, it also set a number of deadlines for compliance.

The legislation affects not only the financial side of corporations, but also the IT departments whose job it is to store a corporation’s electronic records. The Sarbanes-Oxley Act states that all business records, including electronic records and electronic messages, must be saved for “not less than five years.” The consequences for non-compliance are fines, imprisonment, or both.

Computer Fraud and Abuse Act (CFAA): 18 U.S. Code § 1030

This law is perhaps one of the most fundamental computer crime laws, and merits careful study by anyone interested in the field of computer crime. The primary reason to consider this legislation as pivotal is that it was the first significant federal legislation designed to provide some protection against computer-based crimes. Prior to this legislation, courts relied on common law definitions and adaptations of legislation concerning traditional, non-computer crimes in order to prosecute computer crimes.

Throughout the 1970s and early 1980s, the frequency and severity of computer crimes increased, as we have seen in the preceding two chapters. In response to this growing problem, the Comprehensive Crime Control Act of 1984 was amended to include provisions to specifically address the unauthorized access and use of computers and computer networks. These provisions made it a felony offense to access classified information in a computer without authorization. They also made it a misdemeanor offense to access financial records in a computer system.

However, these amendments were not considered in and of themselves to be adequate. Thus during 1985, both the House and the Senate held hearings on potential computer crime bills. These hearings eventually culminated in the Computer Fraud and Abuse Act (CFAA), enacted by Congress in 1986, which amended 18 U.S.C. § 1030. The original goal of this act was to provide legal protection for computers and computer systems that were in one of the following categories:

images Under direct control of some federal entity

images Part of a financial institution

images Involved in interstate or foreign commerce

As you can see, this law was aimed at protecting computer systems that came within the federal purview. This act made several activities explicitly criminal. First and foremost was accessing a computer without authorization in order to obtain any of the following types of information:

images National security information

images Financial records

images Information from a consumer reporting agency

images Information from any department or agency of the United States

Fraud and Related Activity in Connection with Access Devices: 18 U.S. Code § 1029

This law is closely related to 18 U.S.C. § 1030, but it covers access devices (cards, account numbers, PINs, codes, and other means of account access) rather than the computers themselves. Essentially, this law mirrors § 1030 but applies to the devices and credentials used to access systems and accounts. What is most fascinating about this law, in my opinion, is that it also covers "counterfeit access devices." Later in this book you will learn about rogue access devices and man-in-the-middle attacks; this law relates directly to such counterfeit devices.

General Data Protection Regulation (GDPR)

This is a European Union law adopted in 2016 and enforceable since May 2018. Its purpose is to protect personal data and privacy. It applies to any entity (business, government agency, and so on) that collects or processes the personal data of individuals in the EU. Even if an organization is not located within the EU, if it handles EU residents' personal data, the GDPR applies.

PCI DSS

The Payment Card Industry Data Security Standard (PCI DSS) is a proprietary information security standard for organizations that handle cardholder information for the major credit and debit card brands. This industry standard has several goals, and you can look up the specific requirements at https://www.pcisecuritystandards.org/documents/PCI_DSS_v3-2.pdf?agreement=true&time=1517432929990. The most important are listed (paraphrased from that document) here:

1.1 Requirement: All merchants must protect cardholder information by installing and maintaining a firewall and router configuration. A firewall provides control over who can access an organization's network; a router, as the device that connects networks, must also be configured securely to meet PCI requirements.

images Establish firewall and router configuration standards to:

1. Perform testing when configurations change

2. Identify all connections to cardholder information

3. Review configuration rules every six months

images Configure the firewall to prohibit unauthorized access from untrusted networks and hosts and to deny direct public access to any cardholder information. Additionally, install personal firewall software on all computers that access the organization's cardholder data environment.

1.2 Requirement: Change all vendor-supplied default passwords. The default passwords provided when software is first set up are well known and can easily be used by attackers to access sensitive information.

2.1 Requirement: Cardholder data is any personal information about the cardholder that is found on the payment card. Sensitive authentication data can never be retained by a merchant after authorization, even in encrypted form. Merchants may display at most the first six and last four digits of the primary account number (PAN); a brief sketch illustrating such masking appears after this list of requirements. If a merchant stores the PAN, the data must be rendered unreadable, for example by storing it in encrypted form.

2.2 Requirement: All cardholder data must be encrypted when transmitted across open, public networks such as the Internet, to prevent criminals from intercepting the information in transit.

3.1 Requirement: Computer viruses make their way onto computers in many ways, mainly through e-mail and other online activities. Because malware can compromise the security of cardholder information on a merchant's computer, anti-virus software must be present on all computers on the network.

3.2 Requirement: In addition to being exposed to malware, computers are also susceptible to vulnerabilities in the applications and systems installed on them. Merchants must install vendor-provided security patches within a month of their release to avoid exposing cardholder data. Security alert programs, scanning services, or software may be used to alert the merchant to newly discovered vulnerabilities.

4.1 Requirement: As a merchant, you must limit access to cardholder information. Use passwords and other security measures to limit employees' access to cardholder data. Only employees who must access the information to do their jobs should be allowed to access it.

4.2 Requirement: In order to trace employees' activities when they access sensitive information, assign each user a unique ID and a password (stored in unreadable form) for accessing cardholder data.

4.3 Requirement: Monitor physical access to cardholder data; secure printed as well as digital information so that unauthorized persons cannot retrieve it. Destroy all outdated cardholder information. Maintain a visitor log and retain it for at least three months.

5.1 Requirement: Keep system activity logs that trace all activity, and review them daily. The information stored in the logs is useful in the event of a security breach for tracing employee activities and locating the source of the violation. Log entries should record, at a minimum, the user, the event, the date and time, a success or failure indication, the origination of the event, and the affected data or system component.

5.2 Requirement: Each quarter, use a wireless analyzer to check for unauthorized wireless access points. Also, scan internal and external networks to identify any vulnerable areas in the system. Install change-detection software to recognize any unauthorized modification of critical files. Additionally, ensure that all IDS/IPS engines are up to date.

If you process credit cards, it is imperative that you be in compliance with this standard.
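As one small, concrete illustration of requirement 2.1 above, the following hypothetical Python sketch masks a primary account number so that only the first six and last four digits are displayed. The function name and masking character are invented and are not prescribed by PCI DSS.

# Hypothetical illustration of PAN display masking; names are invented.
def mask_pan(pan):
    digits = "".join(ch for ch in pan if ch.isdigit())
    if len(digits) < 13:
        raise ValueError("not a plausible primary account number")
    # Keep at most the first six and last four digits; mask everything else.
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

print(mask_pan("4111 1111 1111 1111"))   # 411111******1111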

Summary

Computer security has a theoretical foundation that should be studied in addition to hands-on practical techniques and procedures. The U.S. Department of Defense produced the Rainbow Series, a set of color-coded manuals covering many aspects of computer security. Although largely supplanted, it is still worthy of study. We also examined ISO standards and industry standards such as COBIT.

The Common Criteria is an evaluation standard formed by merging the criteria used by several different nations. It is used to evaluate the security of systems, particularly systems intended for use by defense- or intelligence-related organizations.

Security can also be viewed from the perspective of different models. The Bell-LaPadula, Clark-Wilson, and Biba Integrity models all view data access as a relationship between subjects and objects. The Bell-LaPadula and Biba models originated in the defense arena, while the Clark-Wilson model was developed with commercial integrity requirements in mind. The Chinese Wall model likewise originated in private business and views information security from a conflict of interest perspective. Finally, we examined the state machine model, which concerns itself with system transitions from one state to another.

Test Your Skills

MULTIPLE CHOICE QUESTIONS

1. COBIT was published as what standard?

A. NIST SP 800-14

B. ISO/IEC 15408

C. ISO/IEC 17799:2005

D. NIST SP 800-35

2. Which U.S. standard covers risk assessment?

A. ISO 27037

B. NIST SP 800-30

C. ISO 27007

D. NIST SP 800-14

3. Which standard defines six phases of the IT security life cycle?

A. ISO 27007

B. NIST SP 800-30

C. NIST SP 800-35

D. ISO 27004

4. The _____ component of COBIT is one aspect of the standard that makes it relatively easy to integrate other standards.

A. integration

B. control objectives

C. process description

D. framework

5. Which U.S. standard should you consult to guide you in developing security policies?

A. NIST SP 800-35

B. NIST SP 800-14

C. ISO 27004

D. ISO 27008

6. What international standard would you consult for managing incident response?

A. ISO 27035

B. NIST SP 800-35

C. NIST SP 800-14

D. ISO 27004

7. What Canadian standard was used as one basis for the Common Criteria?

A. ITSEC

B. Orange Book

C. CTCPEC

D. CanSec

8. The Common Criteria applies mostly to what types of system?

A. Home user systems

B. Military/intelligence systems

C. Business systems

D. Commercial systems

9. What is an EAL?

A. Evaluation authority level

B. Execution assurance load

C. Execution authority level

D. Evaluation assurance level

10. Which of the following models focuses on any transaction that changes the system's state?

A. Biba Integrity

B. ITSEC

C. Clark-Wilson

D. Bell-LaPadula

11. What does the concept of “write up” mean?

A. Writing files to a secure location

B. Sending data to an object at a higher security level

C. Documenting security flaws

D. Logging transactions

12. Which of the following subject-object models introduced the element of programs?

A. Bell-LaPadula

B. Chinese Wall

C. Clark-Wilson

D. Biba Integrity

13. What is a Chinese wall, in the context of business practices?

A. A barrier to information flow within an organization

B. A highly secure network perimeter

C. A barrier to information flow between organizations

D. An A2-rated network perimeter

14. Which of the following models is based on the concept of conflict of interest?

A. Biba Integrity

B. State Machine

C. Chinese Wall

D. Bell-LaPadula

15. Which of the following models considers a system to be in a secure state when there is no instance of security breach at the time of state transition?

A. Clark-Wilson

B. State Machine

C. Bell-LaPadula

D. Chinese Wall

EXERCISES

EXERCISE 13.1: Understanding COBIT

1. Read the COBIT description in this chapter, and use online resources.

2. Write a brief description of COBIT in your own words.

EXERCISE 13.2: Using NIST SP 800-30

1. Using NIST SP 800-30, outline how you would perform a risk assessment for a small network.

EXERCISE 13.3: Applying the Common Criteria

1. Using the web or other resources, find out what the Common Criteria's guiding philosophy is. (Hint: It is clearly stated as such in the CC documentation.)

2. Find some examples of organizations that use the Common Criteria.

3. What are some advantages and disadvantages of the Common Criteria?

4. What situations are most appropriate for the Common Criteria?

EXERCISE 13.4: The Biba Integrity Model

1. Using the web or other resources, identify the company that created the Biba Integrity model. (Hint: Web searches on Biba Integrity model will reveal websites that include this detail.)

2. What was the original purpose of the development of this model?

3. Identify companies or organizations that use this model today.

4. What are some advantages and disadvantages of this model?

5. What situations are most appropriate for this model?

PROJECTS

Note: These projects are meant to guide the student into exploring other security models and standards.

PROJECT 13.1: Applying ITSEC

Using various resources including websites listed below, find the following information about ITSEC:

images Is the system still being used?

images If so, where?

images On what areas of security does the system focus?

images What are some advantages and disadvantages of this system?

The following websites may help:

images IT Security Dictionary: www.rycombe.com/itsec.htm

images The Information Warfare Site: www.iwar.org.uk/comsec/resources/standards/itsec.htm

images ITSEC Criteria: www.boran.com/security/itsec.htm

PROJECT 13.2: CTCPEC

Using various resources including websites listed below, look up information on CTCPEC, and find answers to the following questions:

images Is the system still being used?

images If so, where?

images On what areas of security does the system focus?

images What are some advantages and disadvantages of this system?

The following websites may help:

images Computer Security Evaluation FAQ: www.opennet.ru/docs/FAQ/security/evaluations.html

images Canadian Communications Security: http://www.acronymfinder.com/Canadian-Trusted-Computer-Product-Evaluation-Criteria-(CTCPEC).html

PROJECT 13.3: The Common Criteria

Using the web and other resources, write a brief essay on the Common Criteria. Feel free to elaborate on areas that interest you, but your paper must address the following questions:

images What is the current version being used?

images When was it released?

images How does this version define the scope of security?

images What industry certifications use the Common Criteria?
