Chapter 21. Evaluating Systems

 

LEONATO: O, she tore the letter into a thousand halfpence; railed at herself, that she should be so immodest to write to one that she knew would flout her; 'I measure him,' says she, 'by my own spirit; for I should flout him, if he writ to me; yea, though I love him, I should.'

 
 --Much Ado About Nothing, II, iii, 150–161

Evaluation is a process in which the evidence for assurance is gathered and analyzed against criteria for functionality and assurance. It can result in a measure of trust that indicates how well a system meets particular criteria. The criteria used depend on the goals of the evaluation and the evaluation technology used. The Trusted Computer System Evaluation Criteria (TCSEC) was the first widely used formal evaluation methodology, and subsequent methodologies built and improved on it over time. This chapter explores several past and present evaluation methodologies, emphasizing the differences among them and the lessons learned from each methodology.

Goals of Formal Evaluation

Perfect security is an ultimate, but unachievable, goal for computer systems. As the complexity of computer systems increases, it becomes increasingly difficult to satisfy the reference validation mechanism requirement that the mechanism be simple enough to analyze. A trusted system is one that has been shown to meet specific security requirements under specific conditions. The trust is based on assurance evidence. Although a trusted system cannot guarantee perfect security, it does provide a basis for confidence in the system within the scope of the evaluation.

Formal security evaluation techniques were created to facilitate the development of trusted systems. Typically, an evaluation methodology provides the following features.

  • A set of requirements defining the security functionality for the system or product.

  • A set of assurance requirements that delineate the steps for establishing that the system or product meets its functional requirements. The requirements usually specify required evidence of assurance.

  • A methodology for determining that the product or system meets the functional requirements based on analysis of the assurance evidence.

  • A measure of the evaluation result (called a level of trust) that indicates how trustworthy the product or system is with respect to the security functional requirements defined for it.

  • Definition 21–1. A formal evaluation methodology is a technique used to provide measurements of trust based on specific security requirements and evidence of assurance.

Several evaluation standards have affected formal evaluation methodologies. Among the major standards have been the Trusted Computer System Evaluation Criteria (TCSEC) [285] and the Information Technology Security Evaluation Criteria (ITSEC) [210]. The Common Criteria (CC) [750, 751, 752] has supplanted these standards as a standard evaluation methodology. This chapter discusses components of each standard.

Even when a system is not formally evaluated, the security functional requirements and assurance requirements provide an excellent overview of the considerations that improve assurance. These considerations are invaluable to any development process.

Deciding to Evaluate

A decision to evaluate a system formally must take into consideration the many trade-offs between security and cost, such as time to market and the number of features. Groups seeking formal evaluation may have to pay the evaluator's charge as well as staffing costs for skilled experts to develop security documentation and assurance evidence. Interaction with the evaluator for training, clarification, or corrections takes development staff time and could affect development and delivery schedules. Unfortunately, security evaluation cannot prove that a system is invulnerable to attack. Most systems today must operate in hostile environments, and the systems must provide their own protections from attacks and inadvertent errors.

Security and trust are no longer the exclusive realm of the government and military, nor are they of concern only to financial institutions and online businesses. Computers are at the heart of the economy, medical processes and equipment, power infrastructures, and communications infrastructures. Systems having no security are unacceptable in most environments today. Systems providing some security are a step in the right direction, but a trusted system that reliably addresses specifically defined security issues engenders stronger confidence. Evaluation provides an independent assessment by experts and a measure of assurance, which can be used to compare products.

The independent assessment by experts of the effectiveness of security mechanisms and the correctness of their implementation and operation is invaluable in finding vulnerabilities and flaws in a product or system. An evaluated product has been scrutinized by security experts who did not design or implement the product and can bring a fresh eye to the analysis. Hence, the evaluated product is less likely to contain major flaws than a product that has not been evaluated. The analysis of such a system begins with an assessment of requirements. The requirements must be consistent, complete, technically sound, and sufficient to counter the threats to the system. Assessing how well the security features meet the requirements is another part of the evaluation. Evaluation programs require specific types of administrative, user, installation, and other system documentation, which provide the administrators and maintainers the information needed to configure and administer the system properly, so that the security mechanisms will work as intended.

The level of risk in the environment affects the level of trust required in the system. The measure of trust associated with an evaluated product helps find the optimum combination of trust in the product and in the environment to meet the security needs.

Historical Perspective of Evaluation Methodologies

Government and military establishments were the early drivers of computer security research. They also drove the creation of a security evaluation process. Before evaluation methodologies were available for commercial products, government and military establishments developed their own secure software and used internal methodologies to make decisions about their security. With the rapid expansion of technology, government and military establishments wanted to use commercial products for their systems rather than developing them. This drove the development of methodologies to address the security and trustworthiness of commercial products.

Evaluation methodologies provide functional requirements, assurance requirements, and levels of trust in different formats. Some list requirements and use them to build trust categories. Others list the requirements only within the description of a trust category. To help the reader compare the development of the methodologies, we present each methodology in a standard manner. We first present overview information about the methodology. Descriptions of functional requirements (when they exist), assurance requirements, and levels of trust follow. If the methodology was widely used to evaluate systems, we describe the evaluation process. The final discussion for each methodology addresses its strengths, its weaknesses, and the contributions it makes to the evaluation technology. Unfortunately, the methodologies use slightly different terminologies. In the discussion of each methodology, we will describe the terminology specific to that technique and relate it to the specific terminologies of previous methodologies.

TCSEC: 1983–1999

The Trusted Computer System Evaluation Criteria (TCSEC), also known as the Orange Book, was developed by the U.S. government and was the first major computer security evaluation methodology. It presented a set of criteria for evaluating the security of commercial computer products. The TCSEC defined criteria for six different evaluation classes identified by their rating scale of C1, C2, B1, B2, B3, and A1. Each evaluation class contains both functional and assurance requirements, which are cumulative and increasing throughout the evaluation classes. The six classes were grouped into three “divisions,” which are of lesser importance to our discussion than the individual evaluation classes. A fourth division, D, was provided for products that attempted evaluation but failed to meet all the requirements of any of the six classes. The vendor could select the level of trust to pursue by selecting an evaluation class but otherwise had no say in either the functional or assurance requirements to be met.

The reference monitor concept (see Section 19.1.2.2) and the Bell-LaPadula security policy model (see Section 5.2) heavily influenced the TCSEC criteria and approach. Recall that a trusted computing base (TCB) is a generalization of the reference validation mechanism (RVM). The TCB is not required to meet the RVM requirements (always invoked, tamperproof, and small enough to analyze) for all classes. In the TCSEC, the TCB need not be a full RVM until class B3.

The TCSEC emphasizes confidentiality, with a bias toward the protection of government classified information. Although there is no specific reference to data integrity in the TCSEC, it is indirectly addressed by the *-property of the embedded Bell-LaPadula Model.[1] However, this is not a complete data integrity solution, because it does not address the integrity of data outside the mandatory access control policy. System availability is not addressed.

During the first few years that the TCSEC was available, the National Computer Security Center published a large collection of documents that expanded on requirement areas from the TCSEC. These “Rainbow Series”[2] documents discussed the requirements in specific contexts such as networks, databases, and audit systems, and some are still applicable today.

The TCSEC provides seven levels of trust measurement called ratings, which are represented by the six evaluation classes C1, C2, B1, B2, B3, and A1, plus an additional class, D. An evaluated product is a rated product. Under the TCSEC, some requirements that this text considers to be functional in nature appear under headings that use the word assurance. These requirements are identified in the text below.

TCSEC Requirements

The TCSEC is organized by evaluation class and uses an outline structure to identify named requirement areas. It defines both functional and assurance requirements within the context of the evaluation classes. The actual requirements are embedded in a prose description of each named area. The divisions and subdivisions of the document are of lesser importance than the actual requirement areas found within them.

TCSEC Functional Requirements

Discretionary access control (DAC) requirements identify an access control mechanism that allows for controlled sharing of named objects by named individuals and/or groups. Requirements address propagation of access rights, granularity of control, and access control lists.
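
The sketch below illustrates the flavor of such a mechanism: an access check against per-object access control lists that hold rights for named users and groups. The ACL layout, names, and rights are assumptions made for illustration, not text from the TCSEC.

```python
# Hypothetical ACLs: for each named object, the rights held by named
# individuals and groups.
acls = {
    "payroll.db": {"alice": {"read", "write"}, "staff": {"read"}},
}

def dac_permits(subject, groups, obj, right):
    """Return True if the subject, or one of its groups, holds the right."""
    entries = acls.get(obj, {})
    if right in entries.get(subject, set()):
        return True
    return any(right in entries.get(g, set()) for g in groups)

print(dac_permits("bob", ["staff"], "payroll.db", "read"))   # True, via the group
print(dac_permits("bob", ["staff"], "payroll.db", "write"))  # False
```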

Object reuse requirements address the threat of an attacker gathering information from reusable objects such as memory or disk memory. The requirements address the revocation of access rights from a previous owner when the reusable object is released and the inability of a new user to read the previous contents of that reusable object.
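
As a minimal sketch of the idea, the hypothetical allocator below scrubs a block when it is released, so that a subsequent user cannot recover the previous contents. The class and sizes are assumptions made for illustration.

```python
class BlockPool:
    """Toy allocator that zeroes storage before it becomes reusable."""
    def __init__(self, block_size, count):
        self.free = [bytearray(block_size) for _ in range(count)]

    def allocate(self):
        return self.free.pop()

    def release(self, block):
        # Scrub residual data before the block can be reallocated.
        for i in range(len(block)):
            block[i] = 0
        self.free.append(block)

pool = BlockPool(16, 2)
b = pool.allocate()
b[:6] = b"secret"
pool.release(b)
print(pool.allocate())   # all zero bytes; the previous contents are gone
```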

Mandatory access control (MAC) requirements, not required until class B1, embody the simple security condition and the *-property from the Bell-LaPadula Model. These requirements include a description of the hierarchy of labels. Labels attached to subjects reflect the authorizations they have and are derived from approvals such as security clearances. Labels attached to objects reflect the protection requirements for objects. For example, a file labeled “secret” must be protected at that level by restricting access to subjects who have authorizations reflecting a secret (or higher) clearance.
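
For illustration only, the sketch below expresses the two rules over a purely hierarchical set of labels (real TCSEC labels also carry category sets, which are omitted here). The level names and functions are assumptions, not text from the TCSEC or the Bell-LaPadula Model.

```python
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def dominates(a, b):
    """True if label a is at or above label b in the hierarchy."""
    return LEVELS[a] >= LEVELS[b]

def may_read(subject_label, object_label):
    # Simple security condition: no "read up."
    return dominates(subject_label, object_label)

def may_write(subject_label, object_label):
    # *-property: no "write down."
    return dominates(object_label, subject_label)

print(may_read("secret", "confidential"))   # True: reading down is allowed
print(may_write("secret", "confidential"))  # False: writing down is denied
```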

Label requirements, also not required until class B1, enable enforcement of mandatory access controls. Both subjects and objects have labels. Other requirements address accurate representation of classifications and clearances, exporting of labeled information, and labeling of human-readable output and devices.

Identification and authentication (I&A) requirements specify that a user identify herself to the system and that the system authenticate that identity before allowing the user to use the system. These requirements also address the granularity of the authentication data (per group, per user, and so on), protecting authentication data, and associating identity with auditable actions.

Trusted path requirements, not required until class B2, provide a communications path that is guaranteed to be between the user and the TCB.

Audit requirements address the existence of an audit mechanism as well as protection of the audit data. They define what audit records must contain and what events the audit mechanism must record. As other requirements increase, the set of auditable events increases, causing the auditing requirements to expand as one moves to higher classes.
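
As a rough illustration of what such a record might carry, the sketch below assembles an audit entry containing a timestamp, the user, the event, the object, and the outcome. The field names and JSON encoding are assumptions made for illustration, not a format mandated by the TCSEC.

```python
import json
from datetime import datetime, timezone

def audit_record(user, event, obj, success):
    """Build one audit record; a real mechanism must also protect the log."""
    return json.dumps({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "event": event,
        "object": obj,
        "outcome": "success" if success else "failure",
    })

print(audit_record("alice", "open", "/etc/passwd", False))
```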

The TCSEC presents other requirements that it identifies as system architecture requirements. They are in fact functional requirements, and they include a tamperproof reference validation mechanism, process isolation, the principle of least privilege, and well-defined user interfaces.

TCSEC operational assurance requirements that are functional in nature include the following. Trusted facility management requires the separation of operator and administrator roles and is required starting at class B2. Trusted recovery procedure requirements ensure a secure recovery after a failure (or other discontinuity); these requirements are unique to class A1. Finally, a system integrity requirement mandates hardware diagnostics to validate the on-site hardware and firmware elements of the TCB.

TCSEC Assurance Requirements

Configuration management requirements for the TCSEC begin at class B2 and increase for higher classes. They require identification of configuration items, consistent mappings among all documentation and code, and tools for generating the TCB.

The trusted distribution requirement addresses the integrity of the mapping between masters and on-site versions as well as acceptance procedures for the customer. This requirement is unique to class A1.

TCSEC system architecture requirements mandate modularity, minimization of complexity, and other techniques for keeping the TCB as small and simple as possible. These requirements begin at class C1 and increase until class B3, where the TCB must be a full reference validation mechanism.

The design specification and verification requirements address a large number of individual requirements, which vary dramatically among the evaluation classes. Classes C1 and C2 have no requirements in this area. Class B1 requires an informal security policy model that is shown to be consistent with its axioms. Class B2 requires that the model be formal and be proven consistent with its axioms and that the system have a descriptive top level specification (DTLS). Class B3 requires that the DTLS be shown to be consistent with the security policy model. Finally, class A1 requires a formal top level specification (FTLS) and that approved formal methods be used to show that the FTLS is consistent with the security policy model. Class A1 also requires a mapping between the FTLS and the source code.

The testing requirements address conformance with claims, resistance to penetration, and correction of flaws followed by retesting. A requirement to search for covert channels includes the use of formal methods at higher evaluation classes.

Product documentation requirements are divided into a Security Features User's Guide (SFUG) and an administrator guide called a Trusted Facility Manual (TFM). The SFUG requirements include a description of the protection mechanisms, how they interact, and how to use them. The TFM addresses requirements for running the product securely, including generation, start-up, and other procedures. All classes require this documentation, and as the level of the class increases, the functional and assurance requirements increase.

Internal documentation includes design and test documentation. The design documentation requirements and the design specification and verification requirements overlap somewhat. Other documentation requirements include a statement of the philosophy of protection and a description of interfaces. Test documentation requirements specify test plans, procedures, tests, and test results. As with the user and administrator documentation, requirements for test and design documentation increase as the functional and assurance requirements increase as the classes increase.

The TCSEC Evaluation Classes

Class C1, called discretionary protection, has minimal functional requirements only for identification and authentication and for discretionary access controls. The assurance requirements are also minimal, covering testing and documentation only. This class was used only briefly, and no products were evaluated under this class after 1986.

Class C2, called controlled access protection, requires object reuse and auditing in addition to the class C1 functional requirements and contains somewhat more stringent security testing requirements. This was the most commonly used class for commercial products. Most operating system developers incorporated class C2 requirements into their primary product by the end of the lifetime of the TCSEC.

Class B1, called labeled security protection, requires mandatory access controls, but these controls can be restricted to a specified set of objects. Labeling supports the MAC implementation. Security testing requirements are more stringent. An informal model of the security policy, shown to be consistent with its axioms, completes class B1. Many operating system vendors offered a class B1 product in addition to their primary products. Unfortunately, the B1 products did not always receive the updates in technology that the main line received, and they often fell behind technically.

Class B2, called structured protection, is acceptable for some government applications. At class B2, mandatory access control is required for all objects. Labeling is expanded, and a trusted path for login is introduced. Class B2 requires the use of the principle of least privilege to restrict the assignment of privileges to the minimum required to perform specific tasks. Assurance requirements include covert channel analysis, configuration management, more stringent documentation, and a formal model of the security policy that has been proven to be consistent with its axioms.

Class B3, called security domains, implements the full reference validation mechanism. It increases the trusted path requirements and constrains how the code is developed in terms of modularity, simplicity, and use of techniques such as layering and data hiding. It has significant assurance requirements that include all the requirements of class B2 plus more stringent testing, more requirements on the DTLS, an administrator's guide, and design documentation.

Class A1, called verified protection, has the same functional requirements as class B3. The difference is in the assurance. Class A1 requires significant use of formal methods in covert channel analysis, design specification, and verification. It also requires trusted distribution and increases both test and design documentation requirements. A correspondence between the code and the FTLS is required.

The TCSEC Evaluation Process

Government-sponsored evaluators staffed and managed TCSEC evaluations at no fee to the vendor. The evaluation had three phases: application, preliminary technical review (PTR), and evaluation. If the government did not need a particular product, the application might be denied. The PTR was essentially a readiness review, including comprehensive discussions of the evaluation process, schedules, the development process, product technical content, requirement discussions, and the like. The PTR determined when an evaluation team would be provided, as well as the fundamental schedule for the evaluation.

The evaluation phase was divided into design analysis, test analysis, and a final review. In each part, the results obtained by the evaluation team were presented to a technical review board (TRB), which approved that part of the evaluation before the evaluation moved to the next step. The TRB consisted of senior evaluators who were not on the evaluation team being reviewed.

The design analysis consisted of a rigorous review of the system design based on the documentation provided. Because TCSEC evaluators did not read the source code, they imposed stringent requirements on the completeness and correctness of the documentation. Evaluators developed the initial product assessment report (IPAR) for this phase. Test analysis included a thorough test coverage assessment as well as an execution of the vendor-supplied tests. The evaluation team produced a final evaluation report (FER) after approval of the initial product assessment report and the test review. Once the technical review board had approved the final evaluation report, and the evaluators and vendor had closed all items, the rating was awarded.

The Ratings Maintenance Program (RAMP) maintained assurance for new versions of an evaluated product. The vendor took the responsibility for updating the assurance evidence to support product changes and enhancements. A technical review board reviewed the vendor's report and, when the report had been approved, the evaluation rating was assigned to the new version of the product. RAMP did not accept all enhancements. For example, structural changes and the addition of some new functions could require a new evaluation.

Impacts

The TCSEC created a new approach to identifying how secure a product is. The approach was based on the analysis of design, implementation, documentation, and procedures. The TCSEC was the first evaluation technology, and it set several precedents for future methodologies. The concepts of evaluation classes, assurance requirements, and assurance-based evaluations are fundamental to evaluation today. The TCSEC set high technical standards for evaluation. The technical depth of the TCSEC evaluation came from the strength of the foundation of requirements and classes, from the rigor of the evaluation process, and from the checks and balances provided by reviews from within the evaluation team and the technical review boards from outside the evaluation team.

However, the TCSEC was far from perfect. Its scope was limited. The evaluation process was difficult and often lacked needed resources. The TCSEC bound assurance and functionality together in the evaluation classes, which troubled some users. Finally, the TCSEC evaluations were recognized only in the United States, and evaluations from other countries were not valid in the United States.

Scope Limitations

The TCSEC was written for operating systems and does not translate well to other types of products or to systems. Also, the TCSEC focused on the security needs of the U.S. government and military establishments, who funded its development. All evaluation classes except C1 and C2 require mandatory access control, which most commercial environments do not use. Furthermore, the TCSEC did not address integrity, availability, or other requirements critical to business applications.

The National Computer Security Center (NCSC) tried to address the scope problems by providing criteria for other types of products. After an attempt to define a criteria document for networks, the NCSC chose to develop the Trusted Network Interpretation (TNI) of the TCSEC [286], released in 1987. The TNI offered two approaches: evaluation of networks and evaluation of network components. The TNI network approach addressed centralized networks with a single accreditation authority, policy, and Network TCB (NTCB). In the first part of the TNI, the TCSEC criteria were interpreted for networks, and one could evaluate a network at the same levels offered by the TCSEC. The second part of the TNI offered evaluation of network components. A network component may be designed to provide a subset of the security functions of the network as a whole. The TNI could provide an evaluation based on the specific functionality that the component offered.

In 1992, a Trusted Database Management System Interpretation (TDI) [288] of the TCSEC was released. In the early 1990s, IBM and Amdahl pushed for a Trusted Virtual Machine Monitor Interpretation [1001] of the TCSEC, but this project was eventually dropped. The interpretations had to address issues that were outside the scope of the TCSEC, and each had limitations that restricted their utility. Not many evaluations resulted from the TNI or the TDI.

Process Limitations

The TCSEC evaluation methodology had two fundamental problems. The first was “criteria creep,” or the gradual expansion of the requirements that defined the TCSEC evaluation classes. Evaluators found that they needed to interpret the criteria to apply them to specific products. Rather than publish frequent revisions of the TCSEC to address these requirement interpretations, the NCSC chose to develop a process for approval of interpretations and to publish them as an informal addendum to the TCSEC. The interpretations were sometimes clearer and more specific than the original requirement. Over time, the list became quite large and expanded the scope of the individual criteria in the TCSEC and its interpretations. The requirements of the classes became the union of the requirements in the TCSEC and the set of applicable interpretations. Thus, a class C2 operating system may have been required to meet stronger requirements than a system evaluated a few years before. This put an additional burden on the newer products under evaluation and meant that the minimum-security enforcement of all C2 operating systems was not the same. Although these differences caused many problems, they also led the security community to learn more about security and to create better security products.

The second problem with the evaluation process was that evaluations took too much time. Three factors contributed to this problem. Many vendors misunderstood the depth of the evaluation and the required interactions with the evaluation teams. The practices of the evaluation management caused misunderstandings and scheduling problems. Finally, the motivation to complete a free evaluation was often lacking. Typically, both vendors and evaluators caused delays in the schedule. Vendors often had to do additional unanticipated work. Evaluators were assigned to multiple evaluations, and the schedule of one evaluation could cause delays for another vendor. Many evaluations took so long to complete that the product was obsolete before the rating was awarded. Toward the end of the life of the TCSEC, commercial labs approved by the government were allowed to do TCSEC evaluations for a fee. Vendors had to be prepared for evaluation, and there was significantly less interaction between evaluators and vendors. This change addressed much of the timeliness problem, with labs completing evaluations in roughly a year.

A related problem was that RAMP cycles were as difficult as full evaluations and suffered from similar delays. Consequently, RAMP was not used very much.

Contributions

The TCSEC provided a process for security evaluation of commercial products. Its existence heightened the awareness of the commercial sector to the needs for computer security. This awareness would have arisen later if not for the influence of the TCSEC.

In the 1990s, new varieties of products emerged, including virus checkers, firewalls, virtual private networks, IPsec implementations, and cryptographic modules. The TCSEC remained centered on operating systems, and its interpretations were insufficient to evaluate all types of networks or the new varieties of products. The commercial sector was dissatisfied with the functional requirements of the evaluation classes. These inadequacies of the TCSEC stimulated a wave of new approaches to evaluation that significantly affected evaluation technology. Commercial organizations wrote their own criteria. Other commercial organizations offered a pass-fail “certification” based on testing. The Computer Security Act of 1987 gave the responsibility to the National Security Agency (NSA) for security of computer systems processing classified and national security–relevant information. The National Institute of Standards and Technology (NIST) received a charter for systems processing sensitive and unclassified information. In 1991, NIST and the NSA began working on new evaluation criteria called the Federal Criteria (FC). All these activities sprang from the impact of the TCSEC.

International Efforts and the ITSEC: 1991–2001

By 1990, several Western countries had developed their own security evaluation criteria. Canada released the first version of the Canadian Trusted Computer Product Evaluation Criteria (CTCPEC) [173] in 1989. The CTCPEC relied heavily on the TCSEC in the beginning but also incorporated some new ideas through successive releases. The CTCPEC espouses separation of assurance and functionality. It offers a catalogue of functional requirements in several categories. It introduces the concept of functionality “profiles” based on sets of well-defined requirements from the catalogue. It also addresses new functional requirement areas such as integrity and availability and new assurance areas such as the developer environment.

Some Western European countries—notably, France, Germany, the United Kingdom, and the Netherlands—also had security evaluation criteria by this time. The lack of reciprocity of evaluation among European nations created a move to harmonize the criteria of these countries, resulting in the Information Technology Security Evaluation Criteria (ITSEC), the European standard since 1991. The European Union officially endorsed the ITSEC as a Recommendation by the Council of the European Union in 1995. The ITSEC was widely used over a 10-year period until the Common Criteria (see Section 21.8) became available. The ITSEC took a different approach to evaluation than that of the TCSEC, and consequently it successfully addressed some of the shortcomings of the TCSEC. However, it created a new set of shortcomings of its own.

The ITSEC provided six levels of trust, called evaluation levels, E1, E2, E3, E4, E5, and E6. A seventh level, E0, was used for products that did not meet other levels. A product or system successfully evaluated under the ITSEC was called a certified product or certified system, and a certified product or system was said to have a certification. ITSEC did not provide functional criteria. It required the vendor to define the security functional criteria in a security target (ST). This effectively split functionality and assurance into distinct categories. Having vendor-defined or externally defined functional requirements permitted evaluation of any type of product or system. There was no equivalent to the concept of a TCB in the ITSEC. However, the following new term was introduced by the ITSEC:

  • Definition 21–2. A target of evaluation (TOE)[3] is a product or system, and its associated administrator and user documentation, that is the subject of an evaluation.

In this text we use the terms “TOE,” “product,” and “system” interchangeably, avoiding the use of “TOE” where appropriate.

The United Kingdom IT Security Evaluation and Certification Scheme Certification Body defined exemplary sets of functional requirements that were consistent with the functional requirements for TCSEC classes C1 through B3, as well as other fixed functionality classes. An evaluated product using these predefined sets of functional requirements received certification that had two components: one for the functional class (for example, FC2 was the U.K. functional requirement specification that mimicked TCSEC class C2) and one for the assurance class. Therefore, an operating system evaluated under the ITSEC could end up with a certification for “FC2-E3,” indicating that it met the assurance requirements stated in the E3 assurance class and the functional requirements stated in the FC2 functionality class.

ITSEC Assurance Requirements

The ITSEC assurance requirements were similar to those in the TCSEC, although there were substantial differences in terminology. As in the TCSEC, assurance requirements were defined within the constraints of the evaluation levels. ITSEC assurance was viewed in terms of correctness and effectiveness. The six effectiveness requirements applied equally to all levels of ITSEC evaluation. The first two effectiveness requirements were as follows.

  1. Suitability of requirements. This requirement addressed consistency and coverage of the security target by showing how the security requirements and environmental assumptions found in the security target were sufficient to counter the threats defined in the security target.

  2. Binding of requirements. This analysis investigated the security requirements and the mechanisms that implemented them. This ensured that the requirements and mechanisms were mutually supportive and provided an integrated and effective security system. The assessment took both the requirements and the implementing mechanisms into account.

These requirements applied to the security target and provided an analysis of the security target that contained the security requirements. There was no correspondence between the ITSEC and the TCSEC in this area because the corresponding analysis was done in defining the TCSEC evaluation classes.

This section discusses the remaining four effectiveness requirements along with the correctness requirements. The correctness requirements are subdivided, and, as with the TCSEC, the subdivisions are not as significant as the individual requirement areas. This section will identify the differences between the requirements of the ITSEC and those of the TCSEC.

Requirements in the TCSEC Not Found in the ITSEC

Some of the TCSEC system architecture requirements (notably, tamperproof reference validation mechanisms, process isolation, the principle of least privilege, well-defined user interfaces, and the requirement for system integrity) did not appear in the ITSEC as assurance requirements. Vendors supplied the functional requirements, so they might or might not have named these TCSEC requirements.

The TCSEC required approved formal techniques for design specification and verification at evaluation class A1. The ITSEC specified the use of formal methods but did not have a set of approved formal techniques.

Requirements in the ITSEC Not Found in the TCSEC

The ITSEC required an assessment of the security measures used for the developer environment during the development and maintenance of the product or system. The TCSEC had no such requirement.

Starting at level E2, the ITSEC required that a correspondence be defined between all levels of representation of the TOE (such as mappings of specifications to requirements, mappings between successive levels of specification, and mappings between the lowest specification and the code). The TCSEC required only a mapping from the top-level specification to the code, and only for higher evaluation classes. The ITSEC had requirements on compilers and languages that the TCSEC did not have. Finally, the ITSEC required the submission of source code at several levels and of object code at the highest level. The TCSEC evaluations were done without examining code.

The ITSEC requirements for delivery and generation procedures, and for approved distribution procedures, addressed many aspects of those procedures, whereas the TCSEC addressed only the use of masters in the distribution process. Furthermore, the distribution requirements began at the lowest level of the ITSEC, whereas the TCSEC required them only at the highest level.

Secure start and operation requirements in the ITSEC addressed more aspects than did the TCSEC requirements, which addressed only recovery after a discontinuity.

The effectiveness requirements of the ITSEC required several forms of vulnerability assessment that the TCSEC did not require. The design vulnerability analysis, which assessed vulnerabilities at the design level, had no equivalent in the TCSEC. The TCSEC had no equivalent to the ITSEC's ease of use analysis, which determined how the system could be misused based on a study of the system documentation. The ITSEC known vulnerabilities analysis was similar to its design vulnerability analysis but addressed the implemented system. The strength of mechanisms effectiveness requirement applied to each security mechanism whose strength could be measured. For example, it applied to cryptographic algorithms (the measure was based on key size) and passwords (the measure was based on the size of the password space). The TCSEC had no corresponding requirement.
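
As a simple illustration of such a measure, the sketch below computes the size of a password space; the alphabet size and password length are assumed values chosen for the example, not figures from the ITSEC.

```python
alphabet = 62        # upper- and lowercase letters plus digits (assumed policy)
length = 8           # assumed minimum password length
space = alphabet ** length
print(f"{space:.3e} possible passwords")   # about 2.183e+14
```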

The ITSEC Evaluation Levels

The ITSEC levels are described below, from lowest to highest. Each level included the requirements of the preceding level. If a product or system did not meet the requirements for any level, it was rated as level E0 (which corresponded to the TCSEC level D).

Level E1 required a security target against which to evaluate the product or system. It also required an informal description of the product or system architecture. The product or system had to be tested to demonstrate that it satisfied its security target.

Level E2 also required an informal description of the detailed design of the product or system TOE, as well as configuration control and a distribution control process. Evidence of testing had to be supplied.

Level E3 had more stringent requirements on the detailed design and also required a correspondence between the source code and the security requirements.

Level E4 also required a formal model of the security policy, a more rigorous, structured approach to architectural and detailed design, and a design level vulnerability analysis.

Level E5 also required a correspondence between the detailed design and the source code, and a source code level vulnerability analysis.

Level E6 also required extensive use of formal methods. For example, the architecture design had to be stated formally and shown to be consistent with the formal model of the security policy. Another requirement was the partial mapping of the executable code to the source code.

The ITSEC Evaluation Process

Each participating country had its own methodology for doing evaluations under the ITSEC. All were similar and followed well-defined guidelines. This discussion uses the U.K. methodology.

Certified, licensed evaluation facilities (CLEFs) performed evaluations for a fee. The U.K. government certified the CLEFs. Evaluations typically started much later in the development cycle than did TCSEC evaluations. CLEFs often had an evaluation division and a consulting division. Vendors sought guidance and support from the consulting division to prepare for the evaluation, and consequently the products and systems were better prepared before evaluation began. Because fees were involved, all parties were motivated to finish the evaluation quickly. The evaluation process was much more structured and did not have the lengthy (but technically sound) checks and balances that were provided by TCSEC technical review boards.

The process began with an evaluation of the security target, based on the suitability and binding of assurance requirements. When the security target was approved, the evaluators evaluated the product against the security target. The documentation required for the ITSEC followed a slightly more rigid structure than that for the TCSEC, making it easier in some ways for vendors to provide useful evidence to the evaluators. ITSEC evaluators read the code for clarification when documentation proved inadequate.

The U.K. Scheme for the ITSEC had a very straightforward and simplistic certificate maintenance scheme. It required a plan and evidence to support correct implementation of the plan. Like the evaluation process, it did not have technical reviews such as those of the technical review boards of the TCSEC.

Impacts

The ITSEC evaluation allowed flexibility in requirement definition and in mixtures of assurance and functionality. Commercial labs performed the ITSEC evaluations, which effectively reduced the length of the evaluation process. Additionally, the ITSEC methodology lent itself to any kind of products or systems. ITSEC provided guidance on what documentation was required. Reciprocity of evaluation existed within the European states. The four effectiveness requirements were a very useful addition to assurance requirements.

In spite of the somewhat stronger assurance requirements in some areas, the ITSEC evaluations were often viewed as technically inferior to the TCSEC evaluations for two reasons. The first was a fundamental potential weakness in the development of functional requirements. The second dealt with the evaluation process itself, which was somewhat lacking in rigor.

Another limit of the ITSEC was the lack of reciprocity of evaluation with Canada and the United States.

Vendor-Provided Security Targets

Unfortunately, vendors did not always have qualified security experts to develop appropriate security targets. This raised the concern that ITSEC evaluations did not determine whether a claim made sense; they merely verified that the product met the claim. In fact, security target evaluation was often the work of one or two individuals. No official review provided checks and balances. No board of experts (such as the TCSEC's technical review board) assessed the quality of the evaluators' work. The use of predefined functionality classes eased this limitation somewhat.

Process Limitations

Some considered using the same company for both evaluation preparation support and evaluation itself to be a conflict of interest. Different personnel provided the consulting and evaluation services, but their biases could be the same. Separation of these duties among different organizations may produce stronger results because this approach offers more diversity of opinion.

ITSEC product and system evaluations could have had one- or two-person teams. Usually, one or two people made all the decisions, and there was insufficient review of the decisions. One- or two-person teams cannot generate the rich set of opinions and internal review that a team of five or six security experts can provide.

Efficiency of process and ease of use are not substitutes for rigor or depth. There was no body of experts to approve evaluator design analysis or test coverage analysis. The small evaluation team made the decision to move to the next phase of the evaluation. There was no equivalent to a final review by experts. A government body provided the final approval for the evaluation, but that body usually took the recommendation of the evaluation team.

Commercial International Security Requirements: 1991

The Commercial International Security Requirements (CISR) [253] was a joint effort of individuals from American Express and Electronic Data Systems (EDS). They used the TCSEC, Germany's IT-Security Criteria [Criteria for the Evaluation of Trustworthiness of Information Technology (IT)], and the newly released ITSEC. Their approach was to develop a “C2+” security evaluation class that stressed the areas of importance to business. As before, the following discussion focuses on the differences between the requirements of the CISR and the TCSEC.

CISR Requirements

The CISR had its roots in the TCSEC evaluation class C2. Because one level of trust was involved, the functional and assurance requirements were stated directly and not embedded in the description of several levels of trust. The CISR functional and assurance requirements included only those requirement areas required by the TCSEC evaluation class C2. This effectively eliminated design specification and verification, labeling, mandatory access control, trusted path, trusted facilities management, and trusted recovery. Assurance requirements were identical to the TCSEC C2 requirements with one small addition. The administrator guide had to contain a threat analysis that identified the protection measures addressing each threat. CISR functional requirements for object reuse and system integrity were identical to the TCSEC class C2 requirements. The other C2 functional requirement areas were enhanced.

  1. CISR discretionary access control requirements included the B3 TCSEC requirements of access control lists and limiting of access by specific modes. Several new access modes were added.

  2. CISR I&A requirements included password management constraints, as identified in the Password Management Guide of the Rainbow Series [284]. The CISR offered one-time passwords as an alternative to fixed, stored passwords and required one-way encryption to protect stored passwords (a sketch of such protection follows this list).

  3. CISR made minor modifications to address the auditing of new discretionary access control attributes, added a few auditable events, and included small issues from TCSEC evaluation classes B1 and B3. The CISR added B1, B2, and B3 requirements to the system architecture requirements from the TCSEC.
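
The sketch below illustrates the one-way protection of stored passwords mentioned in item 2. The salted PBKDF2 construction is a modern stand-in chosen for illustration; the CISR did not specify a particular algorithm, and the parameters here are assumptions.

```python
import hashlib, hmac, os

def store(password):
    """Derive a one-way digest; only the salt and digest are stored."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = store("correct horse")
print(verify("correct horse", salt, digest))  # True
print(verify("wrong guess", salt, digest))    # False
```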

CISR added several new categories of requirements that were not found in the TCSEC. Session controls included login attempt thresholds, limits on multiple concurrent sessions, and keyboard locking. System entry constraints could be set to limit a user's access to the system based on time, location, and mode of access. CISR provided a set of workstation security requirements that included the use of encryption, virus deterrents, and restrictions on use of peripheral devices and operating commands. CISR network security requirements included the use of a centralized administrative interface as well as alternative user authentication methods such as tokens, challenge response techniques, and public key cryptography.
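
A minimal sketch of one such session control, a login attempt threshold that locks an account after repeated failures, appears below. The threshold value, account store, and plaintext comparison are simplifications made for illustration only.

```python
MAX_FAILURES = 3
accounts = {"alice": {"password": "correct horse", "failures": 0, "locked": False}}

def try_login(user, password):
    acct = accounts[user]
    if acct["locked"]:
        return "account locked"
    if password == acct["password"]:
        acct["failures"] = 0
        return "session opened"
    acct["failures"] += 1
    if acct["failures"] >= MAX_FAILURES:
        acct["locked"] = True
    return "login failed"

for guess in ["a", "b", "c", "correct horse"]:
    print(try_login("alice", guess))   # the third failure locks the account
```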

Impacts

Although the CISR never became a generally available evaluation methodology, it did contribute to the rapid growth of evaluation technology in the early 1990s. Perhaps the most significant contribution of this work was the awareness it brought to the U.S. federal government regarding the security evaluation needs of the commercial sector. The CISR influenced the Federal Criteria, which included many of the new requirements stated by the CISR.

Other Commercial Efforts: Early 1990s

In the late 1980s and early 1990s, private commercial companies in the United States and the United Kingdom began evaluating other types of products. These evaluations were oriented toward testing and did not include requirement analysis, design analysis, or other classical evaluation techniques. This approach offered no level of trust but rather used a “pass-or-fail” process. A product or system that passed the process was called certified, and a certified product received periodic recertification as part of the initial agreement. These companies evaluated products such as antivirus software, network firewalls, Internet filter software, cryptographic products, biometric products, and IPsec products with this technique. In the absence of U.S. government criteria, some of these evaluations were an effective stopgap measure for security evaluations of products that could not be addressed using the TCSEC. They are still available today, but they must compete with the lowest level of trust Common Criteria evaluations that provide similar services at similar costs but provide a government-validated assurance rating.

The Federal Criteria: 1992

The National Institute of Standards and Technology (NIST) and the National Security Agency (NSA) together developed the Federal Criteria (FC) [757] in 1992 to replace the TCSEC with a new evaluation approach. The FC attempted to address the shortcomings of the TCSEC and of the ITSEC and to address the concerns of the CISR authors. It was heavily influenced by the TCSEC technically but followed the lead of the ITSEC in its separation of assurance and functional requirements. The FC used a catalogue of functional requirements, as had been done in the CTCPEC. A new direction in the FC was the evaluation of products with respect to protection profiles, with each profile identifying requirements and other information particular to a family of products or systems.

  • Definition 21–3. A protection profile (PP) is an abstract specification of the security aspects of an IT product. A protection profile is product-independent, describing a range of products that could meet this same need. It contains both functional and assurance requirements that are bound together in a profile with a rationale describing the anticipated threats and intended method of use.

NIST and NSA planned to create a set of Federal Information Processing Standards (FIPS) for each protection profile. The Minimum Security Functionality Requirements for Multi-User Operating Systems (MISR) was an example of such a profile. Before the FC approach could come to fruition, the Canadian Security Establishment (CSE) and the ITSEC community approached the U.S. government to encourage it to use the FC as a basis for a new set of international criteria.

FC Requirements

The FC included a catalogue of all functional requirements of the TCSEC. New functional requirements adopted from the CISR included the system entry constraints based on time, mode of entry, and location, and other functional issues. Possibly for the first time, there appeared an availability policy based on requirements for resource allocation and fault tolerance. Security management requirements were identified, enhanced, and added to a new section of the functional requirements. Assurance requirements met both TCSEC and ITSEC requirements. The FC included a new assurance requirement for a life cycle process.

Impacts

The most significant contribution of the FC was the concept of an evaluated protection profile. This approach also appears in the 1993 CTCPEC. The functional requirements sections of protection profiles are similar to the ITSEC functionality classes, but the protection profile requirements were selected from the FC functional requirements catalogue. The FC methodology supported evaluation of protection profiles. In contrast, the ITSEC functionality classes were not included in the ITSEC evaluation methodology.

The FC protection profile included the information needed for identification and cross-referencing as well as a brief description of the nature of the problem that the profile addressed. The rationale portion included identification of threats, the environment, and assumptions and provided the justification for the profile. The subsequent sections of the protection profile contained the functional and assurance requirements as stated in the FC. The FC also introduced the concept of a product-dependent security target that implemented the requirements of an approved protection profile.

A second significant contribution was the development of a profile registry that made FC-approved protection profiles available for general use.

FIPS 140: 1994–Present

During the time of the TCSEC, the U.S. government had no mechanism for evaluating cryptographic modules. Evaluation of such modules was needed in order to ensure their quality and security enforcement. Evaluation of cryptographic modules outside the United States under the ITSEC or within the United States under the commercial pass-or-fail techniques did not meet these needs. In 1994, U.S. government agencies and the Canadian Security Establishment (CSE) jointly established FIPS 140-1 as an evaluation standard for cryptographic modules for both countries. This standard was updated in 2001 to FIPS 140-2 [753] to address changes in technology and process since 1994. The program is now sponsored jointly by NIST and CSE as the Cryptographic Module Validation (CMV) Program. Certification laboratories in Canada and the United States are accredited to perform the evaluations, which are then validated under the CMV Program. This scheme for evaluating cryptographic products has been highly successful and is actively used today. Currently, the United Kingdom is negotiating to enter the CMV program.

A cryptographic module is a set of hardware, firmware, or software, or some combination thereof, that implements cryptographic logic or processes. If the cryptographic logic is implemented in software, then the processor that executes the software is also a part of the cryptographic module. The evaluation of software cryptographic modules automatically includes the operating system.

FIPS 140 Requirements

FIPS 140-1 and FIPS 140-2 provide the security requirements for a cryptographic module implemented within federal computer systems. Each standard defines four increasing, qualitative levels of security (called security levels) intended to cover a wide range of potential environments. The requirements for FIPS 140-1 cover basic design and documentation, module interfaces, roles and services, physical security, software security, operating system security, key management, cryptographic algorithms, electromagnetic interference/electromagnetic compatibility, and self-testing. The requirements for FIPS 140-2 include areas related to the secure design and implementation of cryptographic modules: specification; ports and interfaces; roles, services, and authentication; a finite state model; physical security; the operational environment; cryptographic key management; electromagnetic interference/electromagnetic compatibility; self-testing; design assurance; and mitigation of other attacks.

FIPS 140-2 Security Levels

In this section we present an overview of the security levels of FIPS 140-2. Changes from those of FIPS 140-1 reflect changes in standards (particularly the move from the TCSEC to the Common Criteria), changes in technology, and comments from users of FIPS 140-1.

Security Level 1 provides the lowest level of security. It specifies that the encryption algorithm be a FIPS-approved algorithm but does not require physical security mechanisms in the module beyond the use of production-grade equipment. Security Level 1 allows the software and firmware components of a cryptographic module to be executed on a general-purpose computing system using an unevaluated operating system. An example of a Level 1 cryptographic module is a personal computer board that does encryption.

Security Level 2 dictates greater physical security than Security Level 1 by requiring tamper-evident coatings or seals, or pick-resistant locks. Level 2 provides for role-based authentication, in which a module must authenticate that an operator is authorized to assume a specific role and perform a corresponding set of services. Level 2 also allows software cryptography in multiuser timeshared systems when used in conjunction with an operating system evaluated at EAL2 or better under the Common Criteria (see Section 21.8) using one of a set of specifically identified Common Criteria protection profiles.

Security Level 3 requires enhanced physical security generally available in many existing commercial security products. Level 3 attempts to prevent potential intruders from gaining access to critical security parameters held within the module. It provides for identity-based authentication as well as stronger requirements for entering and outputting critical security parameters. Security Level 3 requirements on the underlying operating system include an EAL3 evaluation under specific Common Criteria protection profiles (see Section 21.8.1), a trusted path, and an informal security policy model. An equivalent evaluated trusted operating system may be used.

Security Level 4 provides the highest level of security. Level 4 physical security provides an envelope of protection around the cryptographic module with the intent of detecting and responding to all unauthorized attempts at physical access. Level 4 also protects a cryptographic module against a security compromise resulting from environmental conditions or fluctuations outside the module's normal operating ranges of voltage and temperature. Level 4 allows the software and firmware components of a cryptographic module to be executed on a general-purpose computing system using an operating system that meets the functional requirements specified for Security Level 3 and that is evaluated at the CC evaluation assurance level EAL4 (or higher). An equivalent evaluated trusted operating system may be used.

Impact

The CMV program has improved the quality and security of cryptographic modules. By 2002, 164 modules and 332 algorithms had been tested. Of the 164 modules, approximately half had security flaws and more than 95% had documentation errors. Of the 332 algorithms, approximately 25% had security flaws and more than 65% had documentation errors. Vendors were able to correct these problems before their modules and algorithms were deployed and used.

The Common Criteria: 1998–Present

The Common Criteria (CC) approach to security evaluation draws from the strengths of TCSEC, ITSEC, CTCPEC, and FC, as well as from commercial efforts. The original participants in the Common Criteria Project included Canada, NIST and the NSA from the United States, the United Kingdom, France, Germany, and the Netherlands. Although all participants had the common goal of developing a technically strong, easy-to-use, mutually recognized evaluation technology, each of the participants represented previous methodologies. The United Kingdom, France, Germany, and the Netherlands represented the ITSEC community. NIST and the NSA represented the work done for the Federal Criteria Project, and the NSA also represented the TCSEC and the interests of the U.S. military establishment for very high-assurance systems. Canada represented the CTCPEC. In 1998, the first signers of the Arrangement on the Recognition of the Common Criteria Certifications in the Field of Information Technology Security were the United States, the United Kingdom, France, Germany, and Canada. This arrangement is called the Common Criteria Recognition Arrangement (CCRA), and also the Mutual Recognition Arrangement (MRA), in the literature. As of May 2002, Australia, New Zealand, Finland, Greece, Israel, Italy, the Netherlands, Spain, Sweden, and Norway have signed the CCRA. Japan, Russia, India, and South Korea are working on developing appropriate evaluation schemes (see below), which is a requirement for any country signing the CCRA. To date, Canada, the United Kingdom, the United States, and Germany have been the most prolific in producing CC-evaluated products. The CC is also Standard 15408 of the International Organization for Standardization (ISO).

The CC became the de facto security evaluation standard in the United States in 1998. The TCSEC was retired in 2000, when the last TCSEC evaluation was completed. European countries that used the ITSEC similarly retired it, although remnants of the old evaluation programs still exist.

The Common Criteria evaluation methodology has three parts: the CC documents, the CC Evaluation Methodology (CEM), and a country-specific evaluation methodology called an Evaluation Scheme or National Scheme. The CC provides an overview of the methodology and identifies functional requirements, assurance requirements, and Evaluation Assurance Levels (EALs). The CEM provides detailed guidelines for the evaluation of products and systems at each EAL. This document is useful to developers and invaluable to evaluators. Currently the CEM is complete for only the first four EALs defined in the CC. The first four EALs address low and medium levels of trust, whereas the higher three levels are specific to what are called high-assurance products and systems. Individual country Evaluation Schemes provide the infrastructure necessary to implement CC evaluation. Each country implements the methodology in its own way. The CC documents and the CEM set the fundamental criteria, EALs, and evaluation strategy, but countries may have different methods of selecting evaluators, awarding certifications, structuring interactions between evaluators and vendors, and the like. In the United States, for example, the Evaluation Scheme is the Common Criteria Evaluation and Validation Scheme (CCEVS), which is implemented within NIST. Under this scheme, NIST accredits commercial evaluation laboratories, which then perform product and system or protection profile evaluations. The sponsoring agencies of NIST then validate the evaluation and award the appropriate EALs.

The CC uses the following terms.

  • Definition 21–4. A TOE Security Policy (TSP) is a set of rules that regulate how assets are managed, protected, and distributed within a product or system.

  • Definition 21–5. The TOE Security Functions (TSF) is a set consisting of all hardware, software, and firmware of the product or system that must be relied on for the correct enforcement of the TSP.

Notice that the TSF is a generalization of the TCSEC concept of a trusted computing base (TCB).

The following discussion is based on Version 2.1 of the Common Criteria.

Overview of the Methodology

The CC supports two kinds of evaluations: evaluations of protection profiles and evaluations of products or systems against security targets (STs). Product evaluations are awarded at one of seven predefined EALs or at another, user-defined EAL. All CC evaluations are recognized by all signatories of the CCRA.

The concept of a protection profile evolved from the Federal Criteria, the CTCPEC profiles, and the ITSEC functionality classes. The form, structure, and terminology of a CC protection profile differ from those of an FC protection profile, although the concepts are similar.

  • Definition 21–6. A CC protection profile (PP) is an implementation-independent set of security requirements for a category of products or systems that meet specific consumer needs.

The PP provides a thorough description of a family of products in terms of threats, environmental issues and assumptions, security objectives, and CC requirements. The requirements include both functional requirements, chosen from the CC functional requirements by the PP author, and assurance requirements, which include one of the seven EALs and may include additional assurance requirements as well. The final section of the PP provides the assurance evidence in the form of a rationale that the PP is complete, consistent, and technically sound. A PP need not be evaluated and validated, but a PP that is evaluated must undergo evaluation in accordance with the methodology outlined in the CC assurance class APE: Protection Profile Evaluation.

A PP consists of six sections; a minimal structural sketch follows the list.

  1. Introduction. This section contains

    1. the PP Identification, which is precise information used to identify, catalogue, register, and cross reference the PP; and

    2. the PP Overview, which is a narrative summary of the PP that should be acceptable as a stand-alone abstract for use in catalogues and registries.

  2. Product or System Family Description. This section includes a description of the type and the general IT features of the product or system. If the primary function of the product or system is security, this section may describe the wider application context into which the product or system will fit.

  3. Product or System Family Security Environment. This section presents

    1. assumptions about the intended usage and the environment of use;

    2. threats to the assets requiring protection, in terms of threat agents, types of attacks, and assets that are the targets of the attacks; and

    3. organizational security policies by which the product or system must abide.

  4. Security Objectives. There are two types of security objectives:

    1. the security objectives for the product or system must be traced back to aspects of identified threats and/or organizational security policies; and

    2. the security objectives for the environment must be traced back to threats not completely countered by the product or system and/or organizational policies or assumptions not completely met by the product or system.

  5. IT Security Requirements. This section covers functional and assurance requirements.

    1. The security functional requirements are drawn from the CC. If no CC requirements are appropriate, the PP author may supply other requirements explicitly without reference to the CC.

    2. The security assurance requirements are based on an EAL. The PP author may augment an EAL by adding extra security assurance requirements from the CC or may supply other requirements explicitly without reference to the CC. This includes security requirements for the environment, as applicable.

  6. Rationale. This section includes both objectives and requirements.

    1. The security objectives rationale demonstrates that the stated objectives are traceable to all of the assumptions, threats, and organizational policies.

    2. The security requirements rationale demonstrates that the requirements for the product or system and the requirements for the environment are traceable to the objectives and meet them.
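
As a reading aid, the six sections just listed can be pictured as a nested data structure. The sketch below is purely illustrative: the identifier, threats, objectives, and the choice of EAL3 augmented with flaw remediation are invented examples, not drawn from any actual protection profile.

    # Hypothetical skeleton of a protection profile, mirroring the six
    # sections described above. All concrete values are invented examples.
    pp_skeleton = {
        "introduction": {
            "pp_identification": "PP-EXAMPLE-0001, Version 1.0",
            "pp_overview": "Stand-alone abstract suitable for catalogues.",
        },
        "family_description": "Type and general IT features of the family.",
        "security_environment": {
            "assumptions": ["Administrators are non-hostile and trained."],
            "threats": ["An attacker gains unauthorized access to user data."],
            "organizational_policies": ["All access to user data is audited."],
        },
        "security_objectives": {
            "for_product": ["Enforce access control on user data."],
            "for_environment": ["Physical access to the platform is restricted."],
        },
        "it_security_requirements": {
            "functional": ["FDP_ACC.1", "FAU_GEN.1"],   # drawn from CC part 2
            "assurance": {"eal": 3, "augmentations": ["ALC_FLR.1"]},
        },
        "rationale": {
            "objectives": "Each objective traces to a threat, policy, or assumption.",
            "requirements": "Each requirement traces to one or more objectives.",
        },
    }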

The second form of evaluation offered by the CC is the evaluation of a product or system against a security target (ST). The results of the evaluation are recognized by all signatories to the CCRA. This type of evaluation has two parts. The first is the evaluation of the ST in accordance with assurance class ASE: Security Target Evaluation (see Section 21.8.4). The product or system itself is then evaluated against the ST.

Under the CC, the functional requirements for a specific product or system are defined in an ST, just as was done under the ITSEC. The concept of a security target evolved from the ITSEC, and the idea of evaluating a security target against an evaluated protection profile evolved from the FC.

  • Definition 21–7. A security target (ST) is a set of security requirements and specifications to be used as the basis for evaluation of an identified product or system.

There are two approaches to developing an ST. The first approach is to develop an ST based on a PP. The second approach is to develop an ST directly from the CC. If an evaluated PP is used, the ST process is generally simpler because much of the rationale in the ST can reference the PP directly. The ST addresses the same fundamental issues as the PP, with some notable differences. A significant difference is that the ST addresses the issues for the specific product or system, not for a family of potential products or systems.

An ST consists of eight sections.

  1. Introduction. This section has three parts.

    1. The ST Identification gives precise information that is used to control and identify the ST and the product or system to which it refers.

    2. The ST Overview is a narrative summary of the ST that should be acceptable as a stand-alone abstract for use in evaluated product lists.

    3. The CC Conformance Claim is a statement of conformance to the CC. An ST is part 2 conformant if it uses only functional requirements found in part 2 of the CC. If it uses extended requirements defined by the vendor, it is called part 2 extended. Part 3 conformant and part 3 extended are similarly defined. An ST is conformant to a PP only if it is compliant with all parts of the PP.

  2. Product or System Description. This section includes a description of the TOE as an aid to understanding its security requirements. It addresses the product or system type and the scope and boundaries of the TOE (both physically and logically).

  3. Product or System Family Security Environment. This section includes

    1. assumptions about the intended usage and about the environment of use;

    2. threats to the assets requiring protection, in terms of threat agents, types of attacks, and assets that are the targets of attacks; and

    3. organizational security policies by which the product or system must abide.

  4. Security Objectives. There are two types of security objectives:

    1. the security objectives for the product or system must be traced back to aspects of identified threats and/or organizational security policies; and

    2. the security objectives for the environment must be traced back to threats not completely countered by the product or system and/or organizational policies or assumptions not completely met by the product or system.

  5. IT Security Requirements. This section covers functional and assurance requirements.

    1. The security functional requirements are drawn from the CC. If no CC requirements are appropriate, the ST author may supply other requirements explicitly without reference to the CC.

    2. The security assurance requirements are based on an EAL. The ST author may augment an EAL by adding extra security assurance requirements from the CC or may supply other requirements explicitly without reference to the CC. This includes security requirements for the environment, as applicable.

  6. Product or System Summary Specification. This specification defines the instantiation of the security requirements for the product or system and includes

    1. a statement of security functions and a description of how these functions meet the functional requirements; and

    2. a statement of assurance measures specifying how the assurance requirements are met.

  7. PP Claims. This section makes claims of conformance with the requirements of one or more protection profiles.

  8. Rationale. This section explains various aspects of the ST.

    1. The security objectives rationale demonstrates that the stated objectives are traceable to all of the assumptions, threats, and organizational policies.

    2. The security requirements rationale demonstrates that the requirements for the product or system and the requirements for the environment are traceable to the objectives and meet them.

    3. The TOE summary specification rationale demonstrates how the TOE security functions and assurance measures meet the security requirements.

    4. A rationale for any dependencies that are not met.

    5. The PP claims rationale explains differences between the ST objectives and requirements and those of any PP to which conformance is claimed.

As shown in the list above, in addition to the PP issues, the ST includes a product or system summary specification that identifies specific security functions and mechanisms. It also describes the strength of the functional requirements and the assurance measures used to analyze those requirements. A PP claims section identifies claims made to PPs that the ST implements. The ST rationale section contains a summary specification rationale that shows how the security functional requirements are met, how any strength-of-function claims are met, and that the assurance measures are sufficient for the assurance requirements. An ST that claims to implement a PP must state those claims and justify them in the rationale.

The CC also has a scheme for assurance maintenance. The goal of such activities is to build confidence that assurance already established for a product or system will be maintained and that the product or system will continue to meet the security requirements through changes in the product or system or its environment.

CC Requirements

The heart of the CC is the requirements themselves. The CC defines both functional and assurance requirements and then builds EALs out of the assurance requirements. The requirements are organized into a somewhat elaborate naming and numbering scheme. However, this scheme is much easier to use than the textual descriptions of multiple requirements in a single section, as is done in other methodologies. Functional and assurance requirements are divided into classes based on common purpose. Classes are broken into smaller groups called families. Families contain components, which define the detailed requirements as well as their dependencies and their place in a hierarchy of requirements.
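
CC requirement identifiers encode this organization directly: for example, FDP_ACC.1 denotes component 1 of family ACC in class FDP, and FAU_GEN.1.2 names an individual functional element of a component. The fragment below, an illustration rather than a tool defined by the CC, splits such an identifier into its parts.

    # Illustrative decomposition of a CC requirement identifier of the form
    # CLASS_FAMILY.COMPONENT[.ELEMENT], e.g., "FDP_ACC.1" or "FAU_GEN.1.2".
    def parse_cc_identifier(identifier):
        name, *numbers = identifier.split(".")
        cc_class, family = name.split("_")
        parsed = {"class": cc_class, "family": family, "component": int(numbers[0])}
        if len(numbers) > 1:
            parsed["element"] = int(numbers[1])   # an individual functional element
        return parsed

    print(parse_cc_identifier("FDP_ACC.1"))
    # {'class': 'FDP', 'family': 'ACC', 'component': 1}
    print(parse_cc_identifier("FAU_GEN.1.2"))
    # {'class': 'FAU', 'family': 'GEN', 'component': 1, 'element': 2}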

CC Security Functional Requirements

There are 11 classes of security functional requirements, each having one or more families. Two of the security functional requirement classes are auditing and security management. The related requirements are unique in the sense that many requirements in other classes generate auditing and/or management requirements. A management section of each family overview provides specific information about management issues relevant to the subdivisions and requirements of the family. Similarly, the audit section of the family overview identifies relevant auditable events associated with the requirements of the family. Requirements may be hierarchical in nature. Requirement A is hierarchical to requirement B if the functional elements of requirement A contain the functional elements of requirement B along with some additions. Finally, nonhierarchical dependencies, which may cross classes, are also identified with each requirement. These four structural approaches (identification of management requirements, audit requirements, hierarchical issues, and nonhierarchical dependencies) help define a consistent and complete specification using the CC.
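
Viewed as sets of functional elements, the hierarchy relation is strict containment, and nonhierarchical dependencies form a separate relation between components. The sketch below illustrates this reading with invented element sets; the dependency of audit data generation (FAU_GEN.1) on reliable time stamps (FPT_STM.1), however, is a real CC dependency.

    # Illustrative reading of hierarchy and dependency. The element sets
    # here are invented; real components enumerate their elements in the CC.
    def is_hierarchical_to(elements_a, elements_b):
        """Component A is hierarchical to component B if A's functional
        elements include all of B's elements plus at least one more."""
        return elements_a > elements_b   # strict superset

    component_b = {"generate audit records", "record subject identity"}
    component_a = component_b | {"associate events with individual users"}
    print(is_hierarchical_to(component_a, component_b))   # True

    # Nonhierarchical dependencies form a separate relation that may cross
    # classes, e.g., an audit component depending on a timestamp component.
    dependencies = {"FAU_GEN.1": {"FPT_STM.1"}}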

Consider the security functional requirements of the CC by class and family. The class is indicated by the title, and the families are identified in the descriptive text. Except where a requirement is noted as new to the CC, the requirements are derived from previously discussed methodologies.

Class FAU: Security Audit. This class contains six families of requirements that address audit automatic response, audit data generation, audit analysis, audit review, audit event selection, and audit event storage.

Class FCO: Communication. This class contains two families that address nonrepudiation of origin and nonrepudiation of receipt. The CC is the first methodology to contain this requirement.

Class FCS: Cryptographic Support. This class contains two families that address cryptographic key management and cryptographic operation. Encryption algorithms and other implementation issues can be addressed using FIPS 140-2.

Class FDP: User Data Protection. This class has 13 families. It addresses two types of security policies, access control policies and information flow control policies; for each type there is one family that specifies the policy and another family that defines the functions for that type of policy. The difference between these two types of policies is essentially that an access control policy makes decisions based on discrete sets of information, such as access control lists or access permissions, whereas an information flow control policy addresses the flow of information from one repository to another. A discretionary access control policy is an access control policy, and a mandatory access control policy is an information flow control policy. These families are also represented in other methodologies, but they are generalized in the CC for flexibility.

The residual information protection family addresses the issues called “object reuse” in previous criteria. Other families address data authentication, rollback, stored data integrity, internal product or system user data transfer protection, inter-TSF user data confidentiality transfer protection, inter-TSF user data integrity transfer protection, exporting to outside the TSF control, and importing from outside the TSF control.

Class FIA: Identification and Authentication. This class has six families that include authentication failures, user attribute definition, specification of secrets, user authentication, user identification, and user/subject binding.

Class FMT: Security Management. This class contains six families: management of security attributes, management of TSF data, management roles, management of functions in TSF, security attribute expiration, and revocation.

Class FPR: Privacy. The CC is the first evaluation methodology to support this class. Its families address anonymity, pseudonymity, unlinkability, and unobservability.

Class FPT: Protection of Security Functions. This class has 16 families. TSF physical protection, reference mediation, and domain separation represent the reference monitor requirements. Other families address underlying abstract machine tests, TSF self-tests, fail-secure operation, trusted recovery, availability of exported TSF data, confidentiality of exported TSF data, integrity of exported TSF data, internal product or system TSF data transfer, replay detection, state synchrony protocol, timestamps, inter-TSF data consistency, and internal product or system TSF data replication consistency.

Class FRU: Resource Utilization. The three families in this class deal with fault tolerance, resource allocation, and priority of service (first used in the CC).

Class FTA: TOE Access. This class has six families. They include limitations on multiple concurrent sessions, session locking, access history, session establishment, product or system access banners, and limitations on the scope of selectable attributes (system entry constraints).

Class FTP: Trusted Path. This class has two families. The inter-TSF trusted channel family is new to the CC, but the trusted path family was in all previous criteria.

Assurance Requirements

There are ten security assurance classes. One assurance class relates to protection profiles, one to security targets, and one to the maintenance of assurance. The other seven directly address assurance for the product or system.

Class APE: Protection Profile Evaluation. This class has six families, one for each of the first five sections of the PP and one for non-CC requirements.

Class ASE: Security Target Evaluation. This class contains eight families, one for each of the eight sections of the ST. They are similar to the PP families and include families for product or system summary specification, PP claims, and non-CC requirements. Like the requirements of class APE, these requirements are unique to the CC.

Class ACM: Configuration Management (CM). This class has three families: CM automation, CM capabilities, and CM scope.

Class ADO: Delivery and Operation. This class has two families: delivery; and installation, generation, and start-up.

Class ADV: Development. This class contains seven families: functional specification, low-level design, implementation representation, TSF internals, high-level design, representation correspondence, and security policy modeling.

Class AGD: Guidance Documentation. The two families in this class are administrator guidance and user guidance.

Class ALC: Life Cycle. There are four families in this class: development security, flaw remediation, tools and techniques, and life cycle definition.

Class ATE: Tests. There are four families in this class: test coverage, test depth, functional tests, and independent testing.

Class AVA: Vulnerabilities Assessment. There are four families in this class: covert channel analysis, misuse, strength of functions, and vulnerability analysis.

Class AMA: Maintenance of Assurance. This class has four families: assurance maintenance plan, product or system component categorization report, evidence of assurance maintenance, and security impact analysis. These were not formal requirements in any of the previous methodologies, but the TCSEC Ratings Maintenance Program (RAMP) addressed all of them. The ITSEC had a similar program that included all these families.

Evaluation Assurance Levels

The CC has seven levels of assurance.

EAL1: Functionally Tested. This level is based on an analysis of security functions using the functional and interface specifications and the guidance documentation, and it is supported by independent testing. EAL1 is applicable to systems in which some confidence in correct operation is required but security threats are not serious.

EAL2: Structurally Tested. This level is based on an analysis of security functions, including the high-level design in the analysis. The analysis is supported by independent testing, as in EAL1, as well as by evidence of developer testing based on the functional specification, independent confirmation of developer test results, strength-of-functions analysis, and a vulnerability search for obvious flaws. EAL2 is applicable to systems for which a low to moderate level of independent assurance is required but the complete developmental record may not be available, such as legacy systems.

EAL3: Methodically Tested and Checked. At this level, the analysis of security functions is the same as at EAL2. The analysis is supported as in EAL2, with the addition of high-level design as a basis for developer testing and the use of development environment controls and configuration management.

EAL4: Methodically Designed, Tested, and Reviewed. This level adds low-level design, a complete interface description, and a subset of the implementation to the inputs for the security function analysis. An informal model of the product or system security policy is also required. Other assurance measures at EAL4 require additional configuration management including automation. This is the highest EAL that is likely to be feasible for retrofitting of an existing product line. It is applicable to systems for which a moderate to high level of independently assured security is required.

EAL5: Semiformally Designed and Tested. This level adds the full implementation to the inputs for the security function analysis for EAL4. A formal model, a semiformal functional specification, a semiformal high-level design, and a semiformal correspondence among the different levels of specification are all required. The product or system design must also be modular. The vulnerability search must address penetration attackers with moderate attack potential and must provide a covert channel analysis. Configuration management must be comprehensive. This level is the highest EAL at which rigorous commercial development practices supported by a moderate amount of specialist computer security engineering will suffice. This EAL is applicable to systems for which a high level of independently assured security is needed.

EAL6: Semiformally Verified Design and Tested. This level requires a structured presentation of the implementation in addition to the inputs for the security function analysis for EAL5. A semiformal low-level design must be included in the semiformal correspondence. The design must support layering as well as modularity. The vulnerability search at EAL6 addresses penetration attackers with high attack potential, and the covert channel analysis must be systematic. A structured development process must be used.

EAL7: Formally Verified Design and Tested. The final level requires a formal presentation of the functional specification and of the high-level design, and formal and semiformal demonstrations must be used in the correspondence, as appropriate. The product or system design must be simple. The analysis requires that the implementation representation be used as a basis for testing. Independent confirmation of the developer test results must be complete. EAL7 is applicable in extremely high-risk situations and requires substantial security engineering.

The following table gives a rough matching of the levels of trust of various methodologies. Although the correspondences are not exact, they are reasonably close. The table indicates that the CC offers a level that is lower than any previously offered level.

TCSEC           ITSEC           CC              Other
D               E0              No equivalent
No equivalent   No equivalent   EAL1            Private testing labs
C1              E1              EAL2            OS for FIPS 140-2 L2
C2              E2              EAL3            OS for FIPS 140-2 L3
B1              E3              EAL4            OS for FIPS 140-2 L4
B2              E4              EAL5
B3              E5              EAL6
A1              E6              EAL7

Evaluation Process

The CC evaluation process in the United States is controlled by the CC Evaluation Methodology (CEM) and NIST. Evaluations are performed by NIST-accredited commercial laboratories that do evaluations for a fee. Many of the evaluation laboratories have separate organizations or partner organizations that can support vendors in getting ready for evaluations. Teams of evaluators are provided to evaluate protection profiles as well as systems or products and their respective security targets. Typically the size of the team is close to the size of a TCSEC team (four to six individuals) but this may vary from laboratory to laboratory.

Typically, a vendor selects an accredited laboratory to evaluate a PP or a product or system. The laboratory performs the evaluation on a fee basis. Once negotiations are complete and a baseline schedule has been developed, the laboratory must coordinate with the validating body. Under the U.S. scheme, the evaluation laboratory must develop a work plan and must coordinate on the evaluation project with the validator and with an oversight board. The evaluation of a PP proceeds precisely as outlined in the CEM and according to schedules agreed to by the evaluation laboratory and the PP authors. When the PP evaluation is complete, the laboratory presents its findings to the validating agency, which decides whether or not to validate the PP evaluation and award the EAL rating.

Evaluation of a product or system is slightly more complex because there are more steps involved and more evaluation evidence deliverables. A draft of the product or system ST must be provided before the laboratory can coordinate the project with the validating organization. The vendor and the evaluation laboratory must coordinate schedules for deliverables of evaluation evidence, but otherwise the process is the same as described above for a PP. When the laboratory finishes the evaluation, it presents its findings to the validating agency, which decides whether or not to validate the product or system evaluation and award the EAL rating.

Impacts

The CC addresses many issues with which other evaluation criteria and methodologies have struggled. However, the CC is not perfect. At first glance, one might think that the protection profiles and security targets of the CC suffer the same weaknesses as those that plagued the security targets of the ITSEC. In some sense, this is true. A PP or ST may not be as strong as TCSEC classes because fewer security experts have reviewed it and it has not yet faced the test of time. Some of the CC requirements were derived from requirements of the previous methodologies. Such requirements may inherently have more credibility. Mature requirements and the CC process of identifying dependencies, audit requirements, and management requirements can contribute to the completeness, consistency, and technical correctness of a resulting PP or ST. The clarity of presentation of the requirements also helps, but ultimately the correctness of an ST lies in the hands of the vendor and the evaluation team.

The CC is much more complete than the functional requirements of most preceding technologies. However, it is not immune to “criteria creep.” A CC project board manages interpretations to support consistent evaluation results. Interpretations can be submitted by any national scheme for international review. The final interpretations agreed on become required on all subsequent evaluations under the CC and form the basis for future CC updates. Although this is a well-managed process, it does not address the fact that a newer evaluation may have more stringent requirements levied on it than an older evaluation of the same type.

Having a team member who is not financially motivated to complete the evaluation quickly lends support to the depth of the evaluation and, in some respects, serves the function of a technical review board by providing impartial review. The evaluation process itself is well defined and well monitored by the validating body. The process is less subjective than some of the preceding methodologies because every step is well-defined and carefully applied. Because many U.S. CC evaluators were part of the TCSEC evaluation system, the U.S. CC evaluations are probably close to the TCSEC evaluations in depth.

Future of the Common Criteria

The CC documentation and methodology continue to evolve. A new version of the CC is planned for release in mid-2003. The revision will include approved interpretations as well as other changes currently under consideration.

Interpretations

The Common Criteria Interpretation Management Board (CCIMB) is an international body responsible for maintaining the Common Criteria. Each signatory of the CCRA has a representative on the CCIMB. This group has the responsibility of accepting or rejecting interpretations of the CC submitted by national schemes or the general public. The charter of the CCIMB is to facilitate consistent evaluation results under the CCRA. Interpretations begin as Requests for Interpretation (RIs) that national schemes or the general public submit to the CCIMB for consideration. RIs fall into the following categories.

  • A perceived error in the CC or CEM that requires correction

  • An identified need for additional material in the CC or CEM

  • A proposed method for applying the CC or CEM in a specific circumstance for which endorsement is sought

  • A request for information to assist with understanding the CC or CEM

The CCIMB prioritizes the RIs, responds to each RI, and posts the RI on its Web site for approximately 3 months of public review. The CCIMB then reviews the feedback and finalizes the interpretation. Final interpretations agreed to by the CCIMB are posted and are levied on all subsequent evaluations certified by organizations party to the CCRA.

Assurance Class AMA and Family ALC_FLR

Class AMA is Maintenance of Assurance, which allows for assurance ratings to be applied to later releases of an evaluated product in specific cases. Family ALC_FLR is flaw remediation, which specifies the requirements for fixing flaws in a certified, released product. The combination of these activities creates a program along the lines of RAMP, which was initiated under the TCSEC. The updates to these areas will be released in a supplement prior to release 3.0 of the CC and will then be incorporated into that release.

Products Versus Systems

Although the CC has been used successfully for many computer products, evaluations of systems are less frequent and less well-defined. The process for systems is being refined, with the intention of having significantly more information available for system evaluation in the next release.

Protection Profiles and Security Targets

The requirements and content of a PP or an ST are defined in several locations within the three parts of Common Criteria Version 2.1. These sections are being consolidated into part 3 of the CC. Beyond consolidation, there are some contextual changes in these documents, and low-assurance PP and ST documents are to be substantially simplified. These changes are targeted for CC Version 3.0.

Assurance Class AVA

The assurance class AVA, Vulnerability Assessment, is currently under revision to make it better suited to the market and to ensure more consistent application across schemes. This class is the most common area for augmentation. Family AVA_VLA is being revised to address attack methods, vulnerability exploitation determination, and vulnerability identification. These changes are also targeted for CC Version 3.0.

EAL5

Currently the CEM defines the steps an evaluator must take for evaluations at levels EAL1 through EAL4. An effort is underway to increase the scope of the CEM to include the detailed evaluation methodology for level EAL5.

SSE-CMM: 1997–Present

The System Security Engineering Capability Maturity Model (SSE-CMM) [461, 462, 591, 985] is a process-oriented methodology for developing secure systems based on the Systems Engineering Capability Maturity Model (SE-CMM). SSE-CMM was developed by a team of security experts from the U.S. government and industry to advance security engineering as a defined, mature, and measurable discipline. It helps engineering organizations define practices and processes and focus their improvement efforts. The SSE-CMM became ISO Standard 21827 in 2002.

Taking a very abstract view, there is a similarity between evaluation of processes using a capability model and evaluation of security functionality using an assurance model. Capability models define requirements for processes, whereas methodologies such as the CC and its predecessors define requirements for security functionality. Capability models assess how mature a process is, whereas CC-type methodologies evaluate how much assurance is provided for the functionality. SSE-CMM provides maturity levels, whereas the other methodologies provide levels of trust. In each case, there are specific requirements for the process or functionality and different levels of maturity or trust that can be applied to each.

The SSE-CMM can be used to assess the capabilities of security engineering processes and provide guidance in designing and improving them, thereby improving an organization's security engineering capability. The SSE-CMM provides an evaluation technique for an organization's security engineering. Applying the SSE-CMM can support assurance evidence and increase confidence in the trustworthiness of a product or system.

The SSE-CMM Model

The SSE-CMM is organized into processes and maturity levels. Generally speaking, the processes define what needs to be accomplished by the security engineering process and the maturity levels categorize how well the process accomplishes its goals.

  • Definition 21–8. A process capability is the range of expected results that can be achieved by following the process. It is a predictor of future project outcomes.

  • Definition 21–9. Process performance is a measure of the actual results achieved.

  • Definition 21–10. Process maturity is the extent to which a process is explicitly defined, managed, measured, controlled, and effective.

The SSE-CMM contains 11 process areas.

  • Administer Security Controls

  • Assess Impact

  • Assess Security Risk

  • Assess Threat

  • Assess Vulnerability

  • Build Assurance Argument

  • Coordinate Security

  • Monitor System Security Posture

  • Provide Security Input

  • Specify Security Needs

  • Verify and Validate Security

The definition of each process area contains a goal for the process area and a set of base processes that support the process area. The SSE-CMM defines more than 60 base processes within the 11 process areas.

Eleven additional process areas related to project and organizational practices adapted from the SE-CMM are

  • Ensure Quality

  • Manage Configuration

  • Manage Project Risk

  • Monitor and Control Technical Effort

  • Plan Technical Effort

  • Define Organization's Systems Engineering Process

  • Improve Organization's Systems Engineering Process

  • Manage Product Line Evolution

  • Manage Systems Engineering Support Environment

  • Provide Ongoing Skills and Knowledge

  • Coordinate with Suppliers

The five Capability Maturity Levels that represent increasing process maturity are as follows.

  1. Performed Informally. Base processes are performed.

  2. Planned and Tracked. Project-level definition, planning, and performance verification issues are addressed.

  3. Well-Defined. The focus is on defining and refining a standard practice and coordinating it across the organization.

  4. Quantitatively Controlled. This level focuses on establishing measurable quality goals and objectively managing their performance.

  5. Continuously Improving. At this level, organizational capability and process effectiveness are improved.

Using the SSE-CMM

Application of the SSE-CMM is a straightforward analysis of existing processes to determine which base processes have been met and the maturity levels they have achieved. The same process can help an organization determine which security engineering processes it may need but does not currently have in practice.

This is accomplished using the well-defined base processes and capability maturity levels outlined in the preceding section. One starts with a process area, identifying the area goals and base processes that the SSE-CMM defines for that process area. If all the processes within a process area are present, then the next step of the analysis involves determining how mature the base processes are by assessing them against the Capability Maturity Levels. Such an analysis is not simple and may involve interactions with engineers who actually use the process. The analysis culminates in identification of the current level of maturity for each base process in the process area.

The analysis continues as described above for each process area. Processes within an area may have varying levels of maturity, and the level of maturity for the process area is the lowest level achieved among its base processes. A useful way of looking at the result of a complete SSE-CMM analysis is to use a Rating Profile, which is a tabular representation of process areas versus maturity levels. An example of such a profile is provided in Figure 21-1. In a similar fashion, process area rating profiles can be used to show the ratings of individual base processes within a process area.

Figure 21-1. Example of a rating profile for the 11 process areas of the SSE-CMM (from [347]).
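
Because a process area's maturity is the lowest level achieved among its base processes, a rating profile such as the one in Figure 21-1 can be derived mechanically from base-process ratings. The process area names below come from the SSE-CMM list above; the ratings themselves are invented for illustration.

    # Derive a rating profile: each process area's capability level is the
    # minimum of its base-process levels (0 marks a base process that is
    # not performed). All ratings below are invented.
    base_process_ratings = {
        "Assess Threat": [3, 2, 3],
        "Assess Vulnerability": [2, 2, 1],
        "Verify and Validate Security": [1, 0, 2],
    }

    def rating_profile(ratings):
        return {area: min(levels) for area, levels in ratings.items()}

    for area, level in rating_profile(base_process_ratings).items():
        print(area, "-> capability level", level)
    # Assess Threat -> capability level 2
    # Assess Vulnerability -> capability level 1
    # Verify and Validate Security -> capability level 0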

Summary

Since the early 1980s, the international computer security community has been developing criteria and methodologies for the security evaluation of IT products and systems. The first public and widely used technique was provided by the Trusted Computer System Evaluation Criteria (TCSEC), which was driven by the U.S. Department of Defense. Although the TCSEC was widely used for nearly two decades, criticisms of it inspired research and development of other approaches that addressed many areas of concern, including limitations of scope, problems with the evaluation process, binding of assurance and functionality, lack of recognition of evaluations in one country by the authorities of another, and inflexibility in selection of requirements, to name the most significant ones. New methodologies were developed to address these issues. Most notable of these were the Information Technology Security Evaluation Criteria (ITSEC) in Europe, the Canadian Trusted Computer Product Evaluation Criteria (CTCPEC), and the Federal Criteria (FC) in the United States. These foundational methodologies have culminated in the Common Criteria, which today has world-wide support.

Other evaluation techniques include a special-purpose evaluation of cryptographic modules, jointly managed by NIST and the Canadian CSE, and the process-oriented System Security Engineering Capability Maturity Model (SSE-CMM).

Research Issues

The Common Criteria (CC) methodology is the focus of much current research. Aside from the issues discussed in Section 21.8.8, mechanisms for spreading the use of the CC and other evaluation criteria are receiving much attention. Evaluations are expensive and time-consuming. Reducing both cost and time without diminishing the quality of the evaluation is a critical area of research.

Other questions involve the evaluation of high-assurance systems at levels EAL5 and higher. The evaluator guidelines for EAL1 through EAL4 are agreed on by all signatories of the CCRA and appear in the CEM. EAL5 evaluator guidelines are expected to be added to the CEM in the next CC release, but the international standards for evaluating products and systems at EAL6 and EAL7 are still undecided. Research into what is appropriate, and how to do it, is critical.

Another interesting research topic is reuse of evaluations in new environments or for systems composed of evaluated parts. Consumers of products and systems need to determine how effective those products and systems are in their current environments. Formal evaluation is suitable when one can determine precise security requirements and the environment in which the product or system is to be used and can provide appropriate evidence that the requirements are met in the defined environment. Today's evaluation techniques and approaches do not readily support reuse of evidence, for reasons of intellectual property ownership and proprietary information. Without detailed assurance evidence from the product or system developer, evaluation options for consumers may be limited. Current approaches that are alternatives to evaluation include various types of testing, such as penetration testing (see Section 23.2). Penetration testing is an excellent technique for identification of vulnerabilities but lacks the “total picture” view of formal evaluation. More complete and effective functional and structural testing is another alternative for finding problems. How can one make the testing as effective as possible, and what is the highest possible level of effectiveness?

Further Reading

The evaluation process of the TCSEC has been widely discussed and critiqued [38, 78, 193, 525, 771, 816, 903], and changes have been proposed for specific environments such as real-time embedded systems [17]. Several products and systems aimed at levels of the TCSEC have also been analyzed [143, 301, 331, 856, 994, 1053]. Pfleeger [806] compares the TCSEC with then-current European evaluation methodologies.

The results of ITSEC evaluations have also been presented [165, 497]. Straw [976] compares the ITSEC with the Federal Criteria, and Borrett [134] discusses the differences between evaluation under the TCSEC and under the U.K. ITSEC.

The basis for CC requirements arises in several papers, including one that describes the functional criteria for distributed systems [250]. Other papers discuss various aspects of CC ratings [128, 505] and protection profiles, including the use of SSE-CMM processes to develop those profiles [40, 1048]. Some evaluations have also been discussed [5, 449].

Hefner [461, 462] and Menk [696] discuss the origins and evaluation partnerships under the SSE-CMM. Some papers [566, 567] discuss the relationships between product-oriented evaluation and process-oriented evaluation. In particular, Ferraiolo [347] discusses the contribution of process capability to assurance and the definition of metrics to support process-based assurance arguments. Ferraiolo's tutorial [348] provides a good introduction to SSE-CMM.

Some systems have demanded their own specialized certification processes [358], as have some environments [183, 325].

Lipner [637] gives a short, interesting historical retrospective on evaluation, and Snow [941] briefly discusses the future.

Many organizations keep the most current information on evaluation standards and processes on the World Wide Web. For example, the FIPS 140-2 Web site [756] gives information about NIST's cryptographic module verification program. The Common Criteria Web site [211] contains copies of the Common Criteria and various national schemes, such as that of the United States [755]. It also offers historical information, information about current projects, registries of evaluated and unevaluated protection profiles, evaluated product and system listings (most of which include the security target for the product or system), products and PPs currently being evaluated, and information on testing laboratories and recognition agreements among the participating countries. Detailed information about SSE-CMM is also on the WWW [898].

Exercises

1:

The issue of binding assurance requirements to functional requirements versus treating them as mutually exclusive sets has been debated over the years. Which approach do you think is preferable, and why?

2:

What are the values of doing formal evaluation? What do you see as the drawbacks of evaluation?

3:

Recall that “criteria creep” is the process of refining evaluation requirements as the industry gains experience with them, making the evaluation criteria something of a moving target. (See Section 21.2.4.2.) This issue is not confined to the TCSEC, but rather is a problem universal to all evaluation technologies. Discuss the benefits and drawbacks of the CC methodology for handling criteria creep.

4:

What are the conceptual differences between a reference validation mechanism, a trusted computing base, and the TOE Security Functions?

5:

Choose a Common Criteria protection profile and a security target of a product that implements that profile (see the Common Criteria Web site [211]). Identify the differences between the PP and the ST that implements the PP.

6:

Identify the specific requirements in the Common Criteria that describe a reference validation mechanism. Hint: Look in both security functional classes and security assurance classes.

7:

Using the Common Criteria, identify and write the security functional and assurance requirements that define a security policy implementing the Bell-LaPadula Model.

8:

Map the assurance requirements of the TCSEC (as defined in this chapter) to the assurance requirements of the ITSEC and the CC. Map the ITSEC assurance requirements to the CC assurance requirements.

9:

Map the security functional requirements of the CC to the functional requirements of the TCSEC (as described in this chapter).

10:

Describe a family of security functional requirements that is not covered in the Common Criteria. Using the CC style and format, develop several requirements.



[1] Recall that the *-property addresses writing of data, which provides some controls on the unauthorized modification of information. See Section 5.2.1.

[2] Each document had a different color cover.

[3] This acronym is pronounced as three separate letters rather than as the word “toe.”
