CHAPTER 51

SECURITY STANDARDS FOR PRODUCTS

Paul Brusil and Noel Zakin

51.1 INTRODUCTION

51.1.1 Value of Standards

51.1.2 Purpose of Product Assessment

51.1.3 Sources of Standards

51.1.4 Classes of Security Standards

51.1.5 Products for Which Standards Apply

51.1.6 Breadth of Product-Oriented Standards

51.1.7 Focus of This Chapter

51.2 NONSTANDARD PRODUCT ASSESSMENT ALTERNATIVES

51.2.1 Vendor Self-Declarations

51.2.2 Proprietary In-House Assessments

51.2.3 Consortium-Based Assessment Approaches

51.2.4 Open Source Approach

51.2.5 Hacking

51.2.6 Trade Press

51.2.7 Initial Third-Party Commercial Assessment Approaches

51.3 SECURITY ASSESSMENT STANDARDS FOR PRODUCTS

51.4 STANDARDS FOR ASSESSING PRODUCT BUILDERS

51.4.1 Capability Maturity Model

51.4.2 Quality (ISO 9000)

51.5 COMBINED PRODUCT AND PRODUCT BUILDER ASSESSMENT

51.5.1 Competing National Criteria Standards

51.5.2 Emergence of Common Criteria Standard

51.6 COMMON CRITERIA PARADIGM OVERVIEW

51.6.1 CC Scheme

51.6.2 Common Criteria Paradigm Process

51.6.3 Standards that Shape the Common Criteria Paradigm

51.7 DETAILS ABOUT THE COMMON CRITERIA STANDARD

51.7.1 Models for Security Profiles

51.7.2 Security Functional Requirements Catalog

51.7.3 Security Assurance Requirements Catalog

51.7.4 Comprehensiveness of Requirements Catalogs

51.8 DEFINE SECURITY REQUIREMENTS AND SECURITY SOLUTIONS

51.8.1 Protection Profile Construction and Contents

51.8.2 Security Target Construction

51.8.3 Benefits of PPs and STs

51.8.4 Extant PPs and STs

51.9 COMMON TEST METHODOLOGY FOR CC TESTS AND EVALUATIONS

51.10 GLOBAL RECOGNITION OF CEM/CC-BASED ASSESSMENTS

51.11 EXAMPLE NATIONAL SCHEME: CCEVS

51.11.1 Maintaining the Testing Infrastructure

51.11.2 Using the Testing Infrastructure

51.11.3 Maintaining Certification in an Evolving Marketplace

51.12 VALIDATED PROFILES AND PRODUCTS

51.13 BENEFITS OF CC EVALUATION

51.13.1 Helping Manufacturers

51.13.2 Helping Consumers

51.14 CONCLUDING REMARKS

51.15 NOTES

51.1 INTRODUCTION.

Standards provide for uniformity of essential characteristics of products and product-related procedures. Standards allow consumers to have a better understanding of what they purchase. This section provides a general introduction to standards: who creates standards, what types of features and capabilities are standardized, why standards are important, and what types of standards apply to products.

In later sections, attention turns to standards associated with testing and evaluation of products. The nonstandard approaches confronting and befuddling consumers, as well as the issues arising from nonstandard approaches, are contrasted with the confidence obtained by using a universal, internationally accepted standard for product testing and evaluation. The common standard allows the consumer to understand with greater certainty the security and assurance features offered by a product. Increased software quality assurance became a top concern for U.S. Government agency chief information security officers (CISOs) as attention turned to the Federal Information Security Management Act (FISMA).1

51.1.1 Value of Standards.

Many parties benefit from standards: customers, vendors, testing houses, and more.

Customers find standards helpful in several ways. Standards help customers specify their needs for various security functionalities and the degrees of assurance they require in the products they buy. Standards help customers understand what security functionality and assurances a product builder claims to provide. Standards help consumers select commercial off-the-shelf products that they can trust to conform to their security and assurance requirements and that, as needed, interoperate with comparable products. Customers under the mandates of the security-relevant regulations imposed by the Health Insurance Portability and Accountability Act (HIPAA) and the Sarbanes-Oxley Act (SOX) often look to establish due diligence by leveraging products whose security and assurance functionality has been trusted in a standard way.

Vendors find standards helpful in several ways. Use of standards provides evidence that vendors have migrated their product development to a paradigm wherein security is built in from the start. Use of standards provides evidence that security is not some afterthought that is patched or bolted on. Use of standards shows that security is the foundation on which a vendor is building a product. Use of standards helps open global marketplaces to vendors. By using standard, third-party verification of security capabilities, vendors make their products either stand out from, or demonstrably comparable to, those of their competitors.

51.1.2 Purpose of Product Assessment.

Trust in the electronic processing, storage, and interactions among customers, business partners, suppliers, service organizations, and governments is key to electronic economy and electronic government models. The need for trust will only increase as new IT paradigms and technologies proliferate, mutate business and IT support models, and introduce new risks and vulnerabilities. For electronic business models and electronic government models to succeed, all e-business and e-government players need confidence in the IT products used by interacting players as well as in the intervening IT infrastructure. Recent federal government administrations have recognized the importance of establishing trust in IT. They have issued directives and guidance to elevate awareness of the central, critical nature of IT and to help to preserve trust in national IT infrastructures.2

IT “systems” are often composed of many individual products. Methods for establishing trust in products do not necessarily work for establishing trust in systems. Methods of establishing trust in IT systems do exist; they include, for example, the U.S. Department of Defense (DoD) DITSCAP and the NIST C&A methodology (see Section 51.1.4). Also, while the Common Criteria approach is basically focused on establishing trust in individual products, there is ongoing research into the potential use of the Common Criteria paradigm as a way to establish trust in systems of individual IT products.

Such systems-focused, trust-development methods tend to be employed only by large enterprises. Furthermore, the problem of establishing a quantitative measure of trust in a very large, heterogeneous system like a national IT infrastructure is not yet well understood. At a minimum, it requires coordination and cooperation among all who contribute to or use the infrastructure. Although still argued among security professionals, many believe a step in the right direction is to build IT systems with products and components that are individually assessed to be trustworthy, with some specified degree of confidence. The most difficult part of a product assessment, however, is testing its security aspects in an environment that perfectly mimics the environment in which the product will be used.

Another key notion besides trust is the notion of risk management. When electronic relationships are established between parties, there are quantifiable risks associated with such relationships.

Risks are quantifiable in many ways. For example, they can be quantified in terms of the types of possible adverse events. They can be quantified in terms of the likelihood of different types of adverse events and by the value of what is to be protected by IT security solutions during an adverse event. They can be quantified in terms of the consequences of adverse events, such as the liability that may be exposed via compromises, or the entities that may be hurt by compromises.

Risks then can be mitigated in a number of possible ways. For example, risks can be mitigated by using products that reduce the occurrence or impacts of the adverse events of most concern. When assets of increasing value need to be protected, risks can be reduced by using products that have increased assurance. Risks also can be mitigated by using products that decrease the specific, deleterious liabilities and undesired consequences of greatest concern. Being able to specify the risks of concern, and security solutions that mitigate those risks, is a powerful strategy uniquely held by the Common Criteria paradigm.
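
To make these notions of quantification and mitigation concrete, here is a minimal Python sketch using the classic single-loss-expectancy (SLE) and annualized-loss-expectancy (ALE) model. The model and all figures are illustrative assumptions, not part of the Common Criteria or of any other standard discussed in this chapter.

    # Illustrative sketch only: a classic annualized-loss risk model.
    # SLE = asset value x exposure factor; ALE = annual rate of occurrence x SLE.
    # These formulas are common risk-analysis conventions; every number below
    # is hypothetical.

    def single_loss_expectancy(asset_value, exposure_factor):
        """Loss per adverse event: value at risk times the fraction lost."""
        return asset_value * exposure_factor

    def annualized_loss_expectancy(annual_rate_of_occurrence, sle):
        """Expected yearly loss: event likelihood times loss per event."""
        return annual_rate_of_occurrence * sle

    sle = single_loss_expectancy(asset_value=1_000_000, exposure_factor=0.4)

    ale_unmitigated = annualized_loss_expectancy(0.5, sle)  # event every 2 years
    ale_mitigated = annualized_loss_expectancy(0.25, sle)   # product halves likelihood

    print(f"ALE without control: ${ale_unmitigated:,.0f}")  # $200,000
    print(f"ALE with control:    ${ale_mitigated:,.0f}")    # $100,000

In this simplified model, a product that reduces either the likelihood or the impact of an adverse event shows up directly as a lower expected loss, which is the sense of "mitigation" used throughout this chapter.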

Linking notions of trust together with notions of risk reduction makes for an even more powerful strategy. Establishing trust among interacting electronic parties that all parties are using products with appropriate security quality is akin to the role of face-to-face handshakes in older business models. In the e-business marketplace, trust, like the old handshake, is key to increased revenues via increased business transaction volumes. Furthermore, by mitigating risks, business losses and costs can be reduced. When trust enhancement is coupled with risk management, the resulting increased revenues, combined with decreased losses and lower costs, make for significant profit multiplication.

The Common Criteria paradigm is a standards-based product assessment strategy that provides for both establishing appropriate levels of trust in products and for specifying and mitigating risks of most concern. Indeed, the Common Criteria paradigm appears to be the only strategy available today for simultaneously establishing trust and managing risk in products.

Various ways have been developed and used over the years to build confidence about the quality of security implementations. The focus of such efforts includes establishing trust via one or both of two perspectives: (1) that a product performs its claimed security functionality completely and correctly and (2) that the product builder's processes (from design, to development, to delivery, to maintenance) are sound. Typically, trust is established either by testing just the implementation or by evaluating both the product and its implementer.

Not all approaches to establishing trust via product assessment necessarily address trust via assessment of implemented security functionality as well as by assurance of the soundness of the builder's abilities and processes. Furthermore, many of the testing and evaluation approaches are not based on standards.

51.1.3 Sources of Standards.

Formal standards relevant to information assurance (IA) are created, published, and maintained by “recognized” standards bodies. There are various recognized standards bodies, including technology-specific working groups associated with professional organizations like the IEEE (Institute of Electrical and Electronics Engineers) and the IETF (Internet Engineering Task Force). Standards are also created by working groups associated with recognized country-specific national standards bodies, such as ANSI (American National Standards Institute) in the United States or BSI (British Standards Institution) in the United Kingdom. Such national standards bodies create either country-specific standards, or they collaborate and harmonize with other peer national standards bodies to create international, globally applicable standards such as those associated with ISO (the International Organization for Standardization).

Recognized organizations within national governments also create standards. For example, the National Institute of Standards and Technology (NIST)—a U.S. federal agency under the U.S. Department of Commerce—creates and issues standards called FIPS (Federal Information Processing Standards). FIPS often, but not always, apply just to the U.S. federal government. Indeed, a 2005 amendment to the Federal Acquisition Regulation to implement the IT security provisions of FISMA requires all federal procurements to adhere to pertinent FIPS. NIST also publishes Special Publications that may recommend standard best practices for use government-wide. Along similar lines, the U.S. DoD publishes standard directives. Such directives, for example, specify what must be done by DoD entities to meet a need or what constraints DoD organizations must follow when purchasing IT products.

National governments also form bodies to collaborate and to harmonize international standards for use by governments within several nations. The Common Criteria Project is an example of such a cross-government standardization body. Standards developed by this particular cross-government standards body are also fed to recognized international standards bodies like the International Organization for Standardization (ISO) to create even broader-reaching standards.

Technology-specific consortia, such as the OMG (Object Management Group), and industry-specific consortia, such as the Smart Card Security Users Group and TeleManagement Forum, have large, broad-based, international membership. Often they are considered to be creators of informal standards for use within specific industries or technology sectors.

Smaller consortia or individual entities also claim to create “standards,” but such efforts are generally not accepted by broad constituencies as legitimate, recognized standards.

51.1.4 Classes of Security Standards.

Standards are developed to provide mutually recognized specifications for various reasons. For example, like the FIPS 140–related standards, they can specify security requirements, product evaluation methods, and product validation concepts. Like the Common Criteria (Sections 51.6 and 51.7), standards can specify the security functionality, or assurance characteristics, to be incorporated into products as well as test methods to use when evaluating products. Other standards can specify interoperable profiles of security capabilities. Standards can also specify how to become a product tester or how to assess the capabilities of vendors. Like ISO 17799,3 and NIST SP 800-53,4 standards can specify guidelines or best practices for users assembling and using secure components. Like the DITSCAP (replaced by DIACAP in November 2007)5 for the U.S. DoD, and the NIACAP6 and NIST Certification & Accreditation (C&A)7 methodologies for U.S. federal civilian departments and agencies, standards can specify how to certify and how to accredit systems composed of secure products and secure components. Or, like DoD's 8500.1,8 and 8500.2,9 they can be standard policy and policy implementation directives, respectively, for what types of products DoD must procure. Various standards are described throughout this book. This chapter focuses on those standards associated with assessing products.

51.1.5 Products for Which Standards Apply.

There are two classes of products of interest in this chapter: “security” products and “security-enabled” products. Security products directly provide security services or prevent penetrations. Security products include, for example, intrusion detection products and firewalls. Security-enabled products are secured products that do not exist solely to provide security services; instead they provide other services that are protected. Examples include operating systems, database management systems, and virtual private networking gear that incorporate security functionality such as identification and authentication or IPsec (IP security) to protect either the product or the services provided by the product.

51.1.6 Breadth of Product-Oriented Standards.

In today's heterogeneous, multidisciplinary, multitechnology, multiparty, interconnected information technology (IT) environments, different types of security standards for products are essential for a variety of reasons.

Standards exist to provide consistent ways to stipulate security needs and requirements in both security products and secured products. Other standards specify the security functionality appearing within products. Standards like IPsec not only specify security functionality, but they also foster interoperability of separately built security implementations. Yet other standards specify security-related software interfaces, naming conventions, and data structures such as for Common Object Request Broker Architecture (CORBA) middleware products. In aggregate, standards promote consistent security, end to end as well as across different business domains and computing environments.

Standards also exist to govern the testing of products. As security products become just as popular attack targets as secured products themselves, it is essential that vendors and buyers of security and security-enhanced products determine what security requirements a product addresses and how well it meets those requirements. Conformance testing demonstrates whether an implementation includes the functionality stipulated in a functional standard. However, conformance of different products to the same functional standard does not necessarily ensure that these products will interoperate. Interoperability testing can assure secure interoperation between comparable products built by autonomous vendors or products used between autonomous parties. Conformance implies only that interoperability is possible; interoperability itself must be verified by pair-wise testing. Standards exist to specify how to examine conformance of implementations to functional standards and how to assure the interoperability of implementations that must meet the same functional standard.
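
The practical difference between conformance and pair-wise interoperability testing can be sketched in a few lines of Python. The product names are hypothetical; the point is that a pool of conformant products still leaves a quadratically growing number of pair-wise interoperability tests to perform.

    # Illustrative sketch: N products conformant to one functional standard
    # still require N*(N-1)/2 pair-wise interoperability tests.
    # Product names are hypothetical.
    from itertools import combinations

    conformant_products = ["GatewayA", "GatewayB", "GatewayC", "GatewayD"]

    test_pairs = list(combinations(conformant_products, 2))
    for left, right in test_pairs:
        print(f"interoperability test: {left} <-> {right}")

    n = len(conformant_products)
    assert len(test_pairs) == n * (n - 1) // 2  # 4 products -> 6 pairs

This quadratic growth is one reason consortia define shared setup profiles, as in the VPN Consortium testing described in Section 51.2.3.1.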

Interoperating, conformant, secured products and security products cannot necessarily be trusted to provide or to support sound security, or to mitigate the risks of greatest concern. Key to developing trust is to build confidence that products mitigate the risks of concern, that products are properly built and behave according to specification, and that products do no more or no less than advertised. Standards like the Common Criteria exist for establishing and testing the degree to which risks and vulnerabilities are mitigated to a specified level of confidence. These standards exist to specify and to test that the quality of security associated with products used by interacting parties is comparable.

Furthermore, yet other standards (within the suite of Common Criteria standards) exist to accredit organizations to conduct conformance, trust, or interoperability testing in a standard way.

51.1.7 Focus of This Chapter.

A major goal of standardization efforts is to evolve toward an IT-driven economy where security products, and secured products, approach plug-and-play status. They should be comparably trusted, purchased from multiple competing vendors, and mixed, matched, and integrated to provide requisite secure, trusted IT infrastructures that reduce the risks of greatest concern.

Although several different categories of security-relevant standards should be considered, standards associated with developing product trust are especially important in the electronically interwoven world.

This chapter focuses on such trust-enhancing standards. It summarizes various historical and current approaches for developing product trust. It spotlights the internationally recognized Common Criteria paradigm, which provides a standard way for stipulating (1) the risks of concern, (2) the security functional requirements that must be met in order to mitigate stated risks, and (3) the security assurance requirements that must be met to provide confidence that products are built with desired quality. According to this paradigm, risks and protection requirements are stipulated in a standard way so that security solutions can be tested, by testing laboratories accredited in a standard way, to verify product compliance with the stipulated standard security functional and security assurance requirements. The limitations of this paradigm are also summarized.

51.2 NONSTANDARD PRODUCT ASSESSMENT ALTERNATIVES.

For completeness, this section surveys a variety of product assessment approaches that do not rely on standard, internationally recognized security testing and evaluation: vendor self-declarations, proprietary in-house assessments, consortium-based assessment approaches, the open source approach, hacking, trade press reviews, and initial third-party commercial assessment approaches (Sections 51.2.1 through 51.2.7).

There are shortcomings to most of these approaches, and many offer little value. A basic shortcoming is that the lack of reliance on standards makes it difficult to compare product assessment results:

  • of different products
  • for products tested and evaluated via different approaches
  • for products tested and evaluated by different testing facilities that all purportedly use the same product assessment approach

51.2.1 Vendor Self-Declarations.

One of the initial approaches to providing trust is based on the notion of vendor self-declarations. A vendor can unilaterally claim that a specific product meets the security needs of a class of customers and that an appropriate amount of customer-desired confidence can be placed in the product's implemented security features. In part, the confidence associated with this approach is implicitly tied to the past reputation of a vendor or to the customer's past experience in dealing with the vendor. If the vendor's reputation or customer's experiences are good, there is some sense that the vendor may have again done an adequate job of implementing security. This approach is prevalent when no independent testing and evaluation facilities exist. The problem with this approach is that it lacks measurable ways of quantifying the degree of trust that can be associated with a product. It also lacks measurable ways of comparing the relative degrees of trust that can be associated with different products.

With the typical torrent of upgrades and revisions to products, this approach may have some merit in establishing a degree of confidence in products that have changed since the version that underwent rigorous security testing and evaluation. If a vendor is known to have good security engineering capabilities—such as can be assessed, in part, by standard Capability Maturity Model approaches (see Section 51.4.1)—and if the vendor can provide reasonable evidence as to the nature of the upgrade or revision since the product version that underwent rigorous assessment, then there can be some qualitative (albeit quantitatively unknown) degree of confidence about the upgraded or revised product. Under these conditions, customers who have innate trust in the vendor can believe that the quality of the changed product is similar to the quality of the version of the product that was formally assessed.

51.2.2 Proprietary In-House Assessments.

Product consumers can develop the requisite substantial technical expertise in-house to test and to evaluate specific security-enhanced products directly. Alternatively, consumers can contract a private evaluator, such as one of the big consulting houses or systems integrators, to do such testing and evaluation. Often their approaches are unique and proprietary.

Some financial institutions have used the in-house assessment approach. Financial institutions as a whole are very careful to make sure that products they use are trustworthy. The security, integrity, and soundness of all products and systems supporting financial institutions must be consistent and verifiable. These institutions fear that any breach of IT security anywhere within their systems will result in a loss of confidence in the entire institution, not just in the specific, subverted IT product.

Many financial institutions developed their own internal security specifications and evaluation processes, as well as an evaluation methodology to quantify, compare, approve, and certify general security aspects of competing products. One consequence of this approach was substantial, costly duplication of testing infrastructure across the financial industry, as well as costly duplication of vendor testing efforts for those products that were candidates for purchase by multiple customers. With each financial entity funding the establishment of its own testing program, aggregate testing expenses rose across the entire financial industry.

Furthermore, as the volume of financial devices, such as credit card platforms, operating systems, and thousands of applications, continues to increase dramatically, in-house resources are finding it difficult to keep up. They have found this kind of do-it-yourself, in-house testing approach to be a tremendous undertaking in terms of development, implementation, legitimacy demonstration, maintenance, and rejustification. They have found it to be expensive, time consuming, resource intensive, hard to maintain, always open to interpretation and to debate, and always in need of justification to regulators and principals in new markets.

51.2.3 Consortium-Based Assessment Approaches.

Many consortium-directed approaches exist, or have existed, to demonstrate product interoperability, or conformance of a product to stated security features or to specific security technology standards.

In the Internet world, the notion of implementation bake-offs among trial (pre-product) implementations of emerging IETF standards has been a mainstay in the community for quite some time. For example, in 2005 the IETF initiated a series of IPsec VPN Interoperability Workshops upon culmination of the Internet Key Exchange Version 2 (IKEv2) standard. Vendors of IKEv2-based pre-products gathered in a common testing facility to test the functionality and interoperability of their pre-products against those of their competitors. Initial test scenarios focused on basic functionality and secure tunnel maintenance.

Other consortia use either their own or standard testing approaches. Such consortia include, for example, the VPN Consortium, SPOCK, and the Smart Card Security Users Group.

51.2.3.1 Virtual Private Network Consortium.

The Virtual Private Network (VPN) Consortium (www.vpnc.org/) developed an approach for demonstrating conformance of a product to a specific security standard. The VPN Consortium conducts testing of the IPsec (IP security) and Secure Sockets Layer (SSL) implementations built by its consortium members.

In the early 2000s, the consortium provided three specific profiles of conformance tests of VPN products implementing the IETF's IPsec standard. For each type of test profile, predefined tasks had to be performed successfully against two different reference test gateways. Because the set of tests was not exhaustive, passing a VPN Consortium conformance test provided only an indication that a tested product conforms, in limited part, to various parts of the IPsec standard. Such tests also provided indications that interoperability may be possible with other products that pass the same tests under the same environmental situations.

The consortium now focuses on interoperability testing. It conducts two classes of IPsec interoperability tests: Basic Interoperability and AES Interoperability. The types of capabilities being tested, and the profiles by which tested systems are set up for each class of interoperability testing, are specified. The tests help assure VPN users that IPsec systems set up according to the specified profile are generally interoperable with other IPsec systems also set up according to the same profile.

The consortium also provides interoperability testing for profiled uses of SSL implementations.
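
The notion of systems being set up “according to the same profile” can be sketched as follows; all parameter names and values are hypothetical stand-ins, not the VPN Consortium's actual test profiles or criteria.

    # Illustrative sketch: two IPsec endpoints checked against one shared
    # setup profile. All parameter names and values are hypothetical.
    AES_PROFILE = {"encryption": "AES-128", "integrity": "HMAC-SHA-1",
                   "key_exchange": "IKE", "dh_group": 2}

    endpoint_x = {"encryption": "AES-128", "integrity": "HMAC-SHA-1",
                  "key_exchange": "IKE", "dh_group": 2}
    endpoint_y = {"encryption": "3DES", "integrity": "HMAC-SHA-1",
                  "key_exchange": "IKE", "dh_group": 2}

    def set_up_per_profile(endpoint, profile):
        """True only if the endpoint is configured exactly per the profile."""
        return all(endpoint.get(key) == value for key, value in profile.items())

    # Interoperability is expected only between endpoints set up per the same
    # profile; endpoint_y deviates, so no interoperability assurance applies.
    for name, endpoint in [("X", endpoint_x), ("Y", endpoint_y)]:
        print(name, "matches AES profile:", set_up_per_profile(endpoint, AES_PROFILE))

Passing such a check only indicates that interoperability is plausible under the profile; actual interoperation between endpoint pairs must still be demonstrated by testing.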

Plans for other interoperability profiles and details on products that have passed the VPN Consortium's testing, and received conformance certification logos or interoperability certification logos, can be obtained at www.vpnc.org.

There is speculation that other telecommunications-oriented consortia (e.g., the WiMAX Forum) that are currently focusing on conformance and interoperability testing of communication technology may expand to include testing of security capabilities associated with telecommunication gear (wireless security capabilities in the case of the WiMAX Forum).

51.2.3.2 Security Proof of Concept Keystone (SPOCK).

The Security Proof of Concept Keystone (SPOCK) (http://csrc.nist.gov/nissc/1996/papers/NISSC96/paper068/seminar.PDF) was a joint government-industry consortium sponsored by the National Security Agency (NSA). It developed and conducted product-specific, loose conformance and performance verification tests in government environments to demonstrate a vendor's or system integrator's stated claims about implemented security features and performance. Recently, the SPOCK program consolidated with other NSA initiatives, such as the Capabilities Presentation Program (www.nsa.gov/ia/industry/indus00022.cfm?MenuID=10.2.2) and the Information Assurance Technical Framework Forum. These other initiatives are focused on presenting and discussing IA products, IA-enabled products and IA solutions among U.S. Government agencies, industry, and academia.

51.2.3.3 Smart Card Security Users Group.

Although proprietary, in-house approaches were used by individual financial institutions to assess thousands of financial IT products, components, and systems, the financial community banded together as a whole in the Smart Card Security Users Group. Through this group, financial institutions use a single standard—the Common Criteria (see Sections 51.6 and 51.7)—for product assessments and to avoid duplication of their individual product assessment efforts. Benefits of such an alliance include:

  • Financial institutions can replace their internal, custom product assessment approaches with a common, universally accepted approach.
  • They can pool their resources to address common security testing and evaluation needs by using standards-based, Common Criteria security specification and testing schemes recognized across all major financial players.
  • They can develop profiles of security requirements for the various common elements of smart cards (e.g., chips, operating systems, applications, crypto engines).
  • They can develop common test suites to unify the current hodgepodge of fragmented customer-specific and vendor-specific testing of smart cards.
  • They can outsource security testing and evaluation to competent, accredited testing laboratories whose expertise can be used by all financial institutions.

The alliance produced a Common Criteria profile of standard security functional requirements and assurance requirements for smart cards. This standard requirements profile was validated and certified by an independent, accredited security testing laboratory (www.cse-cst.gc.ca/services/ccs/scsug-v30-e.html), and it is now used across the financial community. Consumers within the financial industry reference the profile to state their requirements. Vendors reference the profile to indicate what they built.

Accredited testing laboratories use standard methods to test individual products once for the entire financial community (not once per financial institution). The standard tests assess a vendor's claim that its product meets the standard requirements profile.

With known confidence, a financial institution can then purchase any product from the pool of assessed products that have been independently certified to comply with the Smart Card Security Users Group requirements profile. Increasingly, smart cards and smart card components must be evaluated according to this basic Common Criteria paradigm as a necessary precursor to purchase by a financial institution.

51.2.4 Open Source Approach.

A widely used approach in software development is the Open Source model. According to this approach, software is made publicly available for inspection, for modification of flaws and inefficiencies, and for potential upgrading of capabilities and features. In theory, by the continuous and collective—but uncoordinated and seemingly semirandom—efforts of potentially thousands of autonomous software developers and testers, the public review will improve the quality of the software over time.

The downside of the open source approach is that the degree of trustworthiness achieved by the process is unmeasurable.

To help identify security issues in open source code, the U.S. Department of Homeland Security initiated the Vulnerability Discovery and Remediation Open Source Hardening Project. In this project, new approaches for finding critical defects in complex software code sets will be developed and used to test open source code to isolate defects and root causes.

However, the trustworthiness of a product is more than just improved code. Although the open source code will have some degree of trustworthiness developed by the open source process, the incorporation of such code into a product still leaves other factors that influence the trustworthiness of the product. Product trustworthiness also depends on vendor processes, such as the quality of design, the protection provided to security features during the delivery of a product from the vendor to the consumer, vendor strategies for maintaining or upgrading security in the face of new threats, and so on.

Vendors such as IBM, Red Hat, and Trusted Computer Solutions, as well as Novell, SuSE, and IBM teams, are relying on the standard Common Criteria testing and evaluation approach to assess their Linux software products running in Intel hardware environments. The benefit of tying an open source technology development such as Linux to a Common Criteria evaluation is that mass-market consumers in all industries can purchase a highly secure, trusted operating system at no additional cost over the previous unsecured Linux. (With much of the R&D associated with the security aspects of Linux provided by the aggregate of the open source community, the vendor has essentially no security R&D costs to amortize over the sales of products.)

51.2.5 Hacking.

De facto assurance of the underlying security in a product can arise from those who actively probe new products for security flaws. Such probing may arise from internally sanctioned security probing or from unsanctioned, unexpected probing by individuals of ill will. Hacking approaches (ethical or otherwise) do not necessarily follow a consistent or comprehensive approach to evaluating the quality of the security functions and services that are implemented. Hence, the level of assurance achieved is unknown and typically very low.

51.2.6 Trade Press.

Many trade press publications and magazines conduct reviews of products that pertain to security. Products are tested in ad hoc environments and against ad hoc criteria that vary from product to product and magazine to magazine. Such magazines may rely on consultants, staff, or private labs to review products. Some reviews may focus on examining quantitative product details other than security, such as performance or throughput of a product. Tests performed often fall short of assessing the real security aspects of a product. Some reviews rate qualitative parameters, such as product “innovativeness.” Because of the potential lack of quantified testing rigor and potential dissimilarity of evaluation metrics, comparisons of trade press reviews from different sources are difficult. Perhaps most important, no evaluations are made of the confidence (assurance) that can be associated with the soundness of the security implementation.

Examples of publications that provide reviews of security products include Information Security Magazine (www.infosecuritymag.com), Secure Computing Magazine (www.scmagazine.com), Network Computing (www.networkcomputing.com), and InfoWorld (www.infoworld.com).

51.2.7 Initial Third-Party Commercial Assessment Approaches.

Initial commercial approaches provided relatively low-confidence, so-called surface-level testing that generated a “brand mark” for vendors' product brochures and advertisements. Such commercial activities began at a time when there needed to be a lower-cost—albeit lower confidence—alternative to expensive, lengthy, economically inappropriate government evaluations such as the so-called Orange Book evaluations (see Section 51.5.1). These commercial assessment activities were also available to support trade press surveys and magazine reviews of products.

Such nonstandard, third-party approaches are still prevalent. There is a certain qualitative amount of risk reduction achievable by these approaches. They are typically based on simple, one-size-fits-all testing that usually provides minimal, cursory checks of some of the implemented security functions. Some of these tests focus on product details other than security, such as performance or throughput. No evaluation is made of the confidence (assurance) that can be associated with the soundness of the security implementation. Instead, these are “black box” approaches wherein products are examined based only on their outputs relative to stimuli. These approaches have no assessment capabilities based on the fundamental design of the product, or of the engineering principles used by the vendor to build the product.

Many vendors, nonetheless, undergo these types of commercial testing because of the pressures from their competitors' products being so tested. Testing costs are reasonable, but such testing provides no inputs (e.g., evaluation reports) to consumers that can be analyzed to differentiate products. More comprehensive products are not examined for any of their differentiating capabilities. Instead, such check-mark testing programs merely provide a common-denominator assessment floor for products.

Typical vendor reaction to these types of “branding” programs is that the check marks are often not very good and are often distracting nuisances.10 Vendors also indicate that unlike more rigorous testing paradigms, such as that based on the Common Criteria, these check-mark branding programs do not have processes to help improve the quality of the product under test. Unlike Common Criteria testing labs, many vendors do not see check-mark testing labs as strategic partners looking to improve the product under test.

Examples of these types of product assessment approaches include the West Coast Labs Check Mark Program, the ICSA Labs Certified Program, and the StopBadware.org initiative.

51.2.7.1 Check Mark Program.

The Check Mark program is a private testing service provided by West Coast Labs (www.westcoastlabs.org/). Although touted to use “standard” testing criteria in a “standard” testing approach, the Check Mark program—in truth—establishes private criteria and a private testing methodology that are not recognized by legitimate standards bodies such as ISO. West Coast's private criteria and testing approaches apply to certain types of computer security products, such as antivirus products, firewall products, and VPN products. The criteria are designed to achieve a very basic level of protection against a number of common hostile attacks. West Coast Labs tests products against the applicable Check Mark criteria and, if successfully tested, produces a certificate which shows that specific releases of products meet specific Check Mark criteria.

51.2.7.2 ICSA Labs Certified.

Another well-known commercial product branding service is the product-certification program conducted by ICSA Labs (www.icsalabs.com), which was part of Cybertrust and is currently an independent division of Verizon Business.11

The ICSA approach is similar to the West Coast Labs testing approach. Product performance is tested against specified criteria to assess whether the product can resist the types of common threats and risks specified in the testing criteria. Product testing is typically a checklist-oriented approach geared for nonexpert testers. Testing criteria are developed for a number of classes of products, such as firewalls and antivirus (AV) software. While Check Mark uses “private” testing criteria, ICSA uses so-called public criteria. These “public” criteria are, however, nonstandard like those of Check Mark since they are created outside the recognized national or international standards-development communities. Instead, ICSA's testing criteria are developed via “invited participation.”

Products that pass ICSA criteria are entitled to display the ICSA brand mark. Products that fail are reported to their vendors with detailed analysis of the criteria they failed.

Unlike the West Coast certificate, once products are awarded an ICSA certificate, vendors take on the obligation to self-check and to self-declare continued certification of evolutions of the specific version product that passed ICSA testing. Spot checks by ICSA are used to verify that currently shipping products still can pass the ICSA tests.

51.2.7.3 StopBadware.org Initiative.

In early 2006, a small number of antispyware product vendors, testing houses, and academics started an initiative to publicize malicious spyware and adware and to create common, third-party spyware testing methods, evaluation criteria, and common spyware samples.12 At the time of the writing of this chapter (June 2008), the Web site was still active (www.stopbadware.org/home) and included alerts and reports, a clearinghouse for researchers and users, information for end users and Webmasters, and ways to get involved in fighting “badware.” Time will tell if the initiative will be useful and will gain momentum.

51.3 SECURITY ASSESSMENT STANDARDS FOR PRODUCTS.

In contrast to the informal, nonstandard product assessment approaches just discussed, standards exist for assessing various aspects of security associated with products. Standards exist to assess the overall quality and soundness of the builder of a product (see Section 51.4). Security assessment approaches based on these standards can yield generalized conclusions that “good” vendors build “good” products. Other standards (see Section 51.5) are used to assess both the quality of product builders and the quality of their products. These standards can be used to quantify how well “good” vendors build “good” products (with identifiable and demonstrable assurance levels), how much “better” specific vendors can build even “better” products (with identifiable and demonstrable, generally higher assurance levels), and how comprehensive are the security functionalities within the specific classes of products these vendors build.

51.4 STANDARDS FOR ASSESSING PRODUCT BUILDERS.

Some standards apply specifically to developers of security products. These standards include the Capability Maturity Model (see Section 51.4.1) and ISO 9000 (see Section 51.4.2) standards.

51.4.1 Capability Maturity Model.

Standards exist to measure the software and security competency of any type of organization including vendors that build security-oriented products.

A standard for assessing the competency of software developers is the Capability Maturity Model (CMM),13 a family of Software, Systems Engineering, and Product Development Capability Maturity Models (CMMs).

The Systems Security Engineering Capability Maturity Model (SSE-CMM) approach uses such standards. It can be used as a way to assess the soundness of a security product builder's engineering practices during the many stages of product development, such as during:

  • Product requirements capture and analysis
  • Product concept definition including accurate translation of security requirements into product requirements
  • Product architecting
  • Product design
  • Product implementation

A security product developer can demonstrate competence in building products by means of recognized, so-called capability maturity assessments of the developer's software and security engineering processes. Security-enhanced products built by organizations with demonstrated expertise and maturity can merit greater trust than products built by organizations that do not demonstrate mature, competent software design and security engineering capabilities.

The SSE-CMM establishes a framework of generally accepted security engineering principles and a standard way of measuring (and improving) the effectiveness of an organization's security engineering practices. The SSE-CMM describes the essential characteristics of an organization's security engineering process that must exist to ensure good security engineering, and it provides tools for assessing those characteristics. These characteristics are graded by a set of security metrics that assess specific attributes of a vendor's processes and of the security effectiveness of the results of those processes.

When the level of the SSE-CMM security metrics associated with a specific builder shows the builder to have mature security engineering capabilities, and effective security engineering practices, then confidence is increased that the builder can build sound security products.
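
As a deliberately simplified sketch of how graded metrics can bound such confidence, consider the fragment below. The process-area names follow SSE-CMM terminology, but the capability levels and the weakest-link scoring rule are illustrative assumptions, not the actual SSE-CMM appraisal method.

    # Deliberately simplified sketch of SSE-CMM-style scoring: each process
    # area (PA) gets a capability level 0-5, and the weakest PA bounds the
    # overall view of the builder. Real SSE-CMM appraisals are far more
    # nuanced; the levels below are hypothetical.
    vendor_process_areas = {
        "Specify Security Needs": 3,
        "Assess Threat": 2,
        "Administer Security Controls": 3,
        "Verify and Validate Security": 1,
    }

    overall = min(vendor_process_areas.values())
    weakest = [pa for pa, lvl in vendor_process_areas.items() if lvl == overall]
    print(f"Capability bounded at level {overall} by: {', '.join(weakest)}")

Under such a weakest-link reading, a single immature process area caps the confidence one can place in the builder, no matter how mature the other areas are.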

Trust in, and assurance about, a product can be inferred to some degree for measurably competent vendors that use sound security engineering processes as assessed by the SSE-CMM. The quantitative comparability of assurance developed via the SSE-CMM approach to the assurance developed via other approaches, such as evaluation of assurance requirements stipulated from the Common Criteria (CC) paradigm, is currently unknown and is the subject of investigation. For now, it appears possible to assess the assurance of a vendor's capability to build quality products by both the SSE-CMM and CC approaches, but perhaps both approaches should be integrated to form a more comprehensive assurance assessment model.

Further information about how the SSE-CMM approach can be used is available at the International Systems Security Engineering Association Web site (www.sse-cmm.org).

51.4.2 Quality (ISO 9000).

The ISO 9000 standard is used as a guide to conduct a broad, high-level, horizontal assessment of the quality of systems and of the competence of an organization across all its facets.14 Although not specific to organizations that build security products, it does provide some amount of basic information about the potential for quality and repeatability in an organization's ability to meet its mission. In fact, derivative standards, inspired in part by ISO 9000 (such as those within the Common Criteria scheme), are used to accredit the quality associated with security testing laboratories.

51.5 COMBINED PRODUCT AND PRODUCT BUILDER ASSESSMENT STANDARDS.

Some standards evaluate both developers and their security products. The next sections review the history of sunsetted national standards (see Section 51.5.1) and the emergence of widely accepted, replacement, international standards (see Section 51.5.2) for evaluating both developers and their products.

51.5.1 Competing National Criteria Standards.

To introduce consistency in describing the security features and levels of trust of a limited set of security-enhanced products, and to facilitate comprehensive testing and evaluation of such products, the U.S. DoD developed the Trusted Computer System Evaluation Criteria (TCSEC).15 The TCSEC—often called the Orange Book—defined a small set of classes (C1 to A1) of increasing security functionality and increasing assurance applying to operating systems. The TCSEC was extended to networking devices16 and database management systems.17 Government in-house evaluations were offered first, followed by comparable, government-sponsored commercial evaluation services.

Partly because of large testing delays and costs, other countries developed other criteria that were more flexible and adaptable to accommodate rapidly evolving IT. The Information Technology Security Evaluation Criteria (ITSEC) arose from the combined inputs of earlier German criteria, French criteria, and U.K. confidence levels.18 The Canadian Trusted Computer Product Evaluation Criteria (CTCPEC) were then developed as a combination of the TCSEC and ITSEC approaches. The U.S. Federal Criteria development then attempted to combine the CTCPEC and ITSEC with the TCSEC.

With growth of the international market for trusted IT products, competing criteria had the potential to fracture the market. Efforts began to harmonize the various criteria into common criteria that would be standards-based and internationally accepted. The result was a single, wide-ranging, widely accepted Common Criteria. These standard criteria provide a fully flexible, highly tailorable approach to the standardization of security functionality, evaluation assurance requirements, specification, and testing of implementations. U.S. national government policy (DoD 8500.1) sunsetted the Orange Book TCSEC approach and now requires DoD (DoD 8500.2), and potentially civilian agencies (President Bush's “National Strategy”), to use the Common Criteria approach19 instead.

51.5.2 Emergence of Common Criteria Standard.

Out of the old, cumbersome, and costly Orange Book approach, and out of the experiences, lessons, and analyses of the other national criteria, a new, commercially driven strategy emerged for testing products and demonstrating confidence that their security features behave properly. This best-of-all-previous-breeds strategy is based on a new international standard referred to as the Common Criteria (CC). Version 1.0 of the CC was published in 1996, followed by Version 2 in 1998. Version 2 was subsequently standardized by ISO as International Standard 15408.20 The international community began transitioning to Version 3 in 2006 according to a stipulated transition schedule (www.niap-ccevs.org/cc-scheme/cc_docs/).

The CC strategy resolves differences among the earlier competing national criteria and accompanying certification and accreditation strategies, and integrates the best aspects of its predecessors. The CC strategy offers a single, internationally recognized approach for specification and evaluation of IT product security. This approach is widely useful within the entire international community.

In contrast to the preceding nonstandard approaches to establishing trust in products (see Section 51.2), the Common Criteria paradigm presents a fundamentally new strategy that overcomes the shortcomings of those other approaches. It provides a recognized, reliable, internationally maintained mechanism to develop trust that:

  1. Security requirements are specified correctly.
  2. Vendors do not misunderstand the requirements.
  3. Vendors design and manufacture products that address the requirements and mitigate the stipulated risks.

The CC provides an unprecedented, fully flexible process for specifying and testing security requirements for any and all classes, and specific instances, of existing or future IT products.

Unlike all the other approaches, the CC also provides a way to specify assurance requirements and to evaluate how well they are met. Assurance requirements are extremely important and are typically not considered in other product assessment approaches. Assurance requirements are the essential ingredients in establishing confidence in implementations of security and in providing the level of trust necessary for economies and governments to rely on new e-business and e-government models. Trust is established by gaining confidence that the security functionalities claimed to be implemented to address specific security functional requirements (1) are effective in satisfying specified security objectives and (2) are implemented correctly and completely. It is also established by ensuring that product developers have sound processes and take specified actions accompanying the life cycle of the product they build, test, deliver, and maintain.

The Common Criteria also establishes a method to develop common tests and evaluation methods and to use them to verify the security aspects of products in competent, accredited laboratories. Assessments of security products are composed of both analysis and testing of the product. Use of standard evaluation criteria and standard evaluation methodology leads to repeatable, objective test and evaluation results. Furthermore, independent review and validation of Common Criteria–based testing and evaluation is available to boost consumers' confidence even further. Well-respected consulting houses have concluded that Common Criteria evaluation provides a substantial improvement over the testing practices used by many vendors today that result in seriously undertested software.21 The Common Criteria specification and testing approach is equally applicable to any and all types of products, such as products that implement security technologies (e.g., crypto boxes, firewalls, or audit tools) as well as products that are either security-enabled (e.g., messaging or Web e-commerce packages) or security relevant (e.g., operating systems or network management systems).

The CC is today's unified choice for developing trust in products. The Common Criteria paradigm provides an extra level of due diligence. It improves and differentiates products and allows buyers to compare products objectively. It is accepted by mutual agreement in most of the world's largest IT-building and IT-buying countries.

51.6 COMMON CRITERIA PARADIGM OVERVIEW.

This section introduces the Common Criteria (CC) paradigm. The underlying scheme, the associated processes, and the underlying standards are summarized.

The remainder of this chapter goes into detail. Section 51.7 describes the standards associated with defining profiles of customer security requirements and vendor solutions, while Section 51.8 describes how these standards are used. Section 51.9 describes standards associated with defining testing and evaluation methodology,22 while Section 51.10 describes the agreement for allowing products to be tested in any country and sold within any other country with no further testing. Section 51.11 describes the scheme used within the United States to support this agreement fostering the recognition and sale of tested products across national borders, while Section 51.12 gives a sense of the numbers and types of products within the marketplace of CC-tested products.

51.6.1 CC Scheme.

The Common Criteria paradigm is a multipurpose scheme for:

  • Stipulating security requirements that can be used in product procurements
  • Specifying companion security solutions in products
  • Testing products according to product-tailored—but standard—criteria and testing methodologies using accredited, third-party, commercial testing laboratories
  • Independently validating test results
  • Providing certificates to tested and validated products that obviate any need for further product retesting in all countries that mutually recognize each other's commercial testing capabilities and testing results

The benefits of the CC paradigm are many. It forces consumers to be thorough in analyzing and understanding:

  • Their security requirements
  • The threats and vulnerabilities that products need to handle
  • The security policies—both organizational as well as legal and/or regulatory—that apply
  • The environment in which products are to work
  • Assumptions about the environment that are pertinent to the security offered or needed by a product

It forces vendors and consumers to understand thoroughly:

  • The products they build or buy
  • Whether and how products address customer security requirements and mitigate stipulated threats and vulnerabilities
  • How vendors validate their claims about meeting requirements and mitigating threats and vulnerabilities

51.6.2 Common Criteria Paradigm Process.

The path to trustworthy products begins by consumers using a standard methodology, standard language, and a catalog of standard security requirements to develop security profiles tailored to the types of products they want. The profiles stipulate the security functional needs. They also stipulate the confidence or assurance desired in products as well as in product builders' processes from product design through maintenance.

Next, product builders use the same standard methodology, language, and catalog to define their products' security specifications. They define product security specifications in terms of both security functionality and security assurance about the product and the builders' processes. The builders' specifications show how their products meet stated consumer security functionality and assurance needs. Builders' specifications may also show how their products meet any additional builder-claimed security features that go beyond the consumers' stated needs.
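
The interplay between consumer profiles and builder specifications can be sketched in miniature, as below. The security functional requirement (SFR) identifiers follow the CC catalog's naming style, but the particular selections, the Evaluation Assurance Level (EAL) values, and the simple subset check are illustrative assumptions, not the CC's actual evaluation method.

    # Miniature sketch of the PP/ST relationship. SFR identifiers follow the
    # CC catalog naming style, but this particular selection and the EAL
    # numbers are hypothetical examples.
    protection_profile = {                # consumer-written, product-neutral
        "sfrs": {"FIA_UAU.2", "FAU_GEN.1", "FDP_ACC.1"},  # functional needs
        "eal": 3,                                         # assurance need
    }
    security_target = {                   # builder-written, product-specific
        "sfrs": {"FIA_UAU.2", "FAU_GEN.1", "FDP_ACC.1", "FCS_COP.1"},
        "eal": 4,
    }

    def st_meets_pp(st, pp):
        """ST must claim every SFR in the PP and at least the PP's EAL."""
        return pp["sfrs"] <= st["sfrs"] and st["eal"] >= pp["eal"]

    print("ST satisfies PP:", st_meets_pp(security_target, protection_profile))

In practice, an evaluator's comparison of a builder's specification against a consumer profile is far richer than a set-containment test, but the direction of the check, vendor claims covering consumer needs, is the same.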

Then an international agreement on how such products should be tested comes into play. This so-called Mutual Recognition Arrangement (MRA) outlines in broad terms the scheme of processes and procedures that individual countries should follow in order to have Common Criteria–based testing performed once in any country and accepted internationally.

In the United States, the Common Criteria Evaluation and Validation Scheme (CCEVS) is just such a national scheme developed in accordance with the MRA. The MRA allows CCEVS-based testing results in the United States to be recognized by all the major IT-building and IT-buying countries in the world. The CCEVS is the only internationally recognized way available in the United States to demonstrate that products evaluated in the United States conform to security and assurance requirements (see Section 51.11 for CCEVS details).

51.6.3 Standards that Shape the Common Criteria Paradigm.

Several international standards shape the CC paradigm. These include formal standards that provide for a thorough, step-by-step discipline to define the security requirements of consumers and to define the security capabilities of products. There are also standards that define processes, procedures, rules, and infrastructure for testing products and validating test results.

The “Common Criteria for Information Technology Security Evaluation” is the ISO standard geared toward the part of the paradigm pertaining to profiling security requirements and product specifications.

The Common Evaluation Methodology is an informal international standard, maintained by a board of experts, that pertains to product testing methods and evaluation actions.23

The “Arrangement on the Mutual Recognition of Common Criteria Certificates in the Field of Information Technology Security” document, colloquially called the Mutual Recognition Arrangement (MRA) or the Common Criteria Recognition Arrangement (CCRA), is a treaty-level international agreement among a growing number of national bodies for the purpose of recognizing and accepting testing performed in other signatory nations.24

The Common Criteria Evaluation and Validation Scheme is organized, managed, and operated according to concepts specified in an informal standard25 developed by the U.S. MRA signatories.26 Other CCEVS documents detail CCEVS procedures and provide guidance to submitters of products to be tested, to testing labs, and to test validators (see www.niap-ccevs.org/cc-scheme/policy/ccevs/guidance_docs.cfm).

The CCEVS also uses standards that pertain to accrediting prospective testing laboratories. An ISO standard is used for stipulating testing laboratory competency.27 Another ISO standard further refines these general requirements for IT testing laboratories.28 Countries also define technology-specific refinements and extensions to lab competency requirements. In the United States, for example, an NIST standard stipulates more specific laboratory competency requirements pertaining to security assessment procedures and to CC security proficiency of lab staff.29

In aggregate, the CCEVS-specified and CCEVS-used standards define the processes and procedures used within the United States to test, to validate, and to certify products recognized under the terms of the MRA. Other similar national schemes for using the CC exist in other countries.

51.7 DETAILS ABOUT THE COMMON CRITERIA STANDARD.

As described in Section 51.8.1, the ISO 15408 CC standard identifies the models for two types of security profiles, the methodology for developing such profiles, and the structures for each of these types of profiles. The CC standard also provides catalogs of security and assurance requirements (discussed here in Sections 51.7.2 and 51.7.3, respectively) of known validity that can be used, mixed, and matched to express the security requirements of virtually any type of IT product or system.

51.7.1 Models for Security Profiles.

The CC standard defines the language and structures for two types of security profiles and the methodology to create such profiles.

One type of security profile is the Protection Profile (PP). A PP is used to stipulate the generic, product-neutral profile of security requirements for some class of IT product, such as a firewall or a telecommunications switch. Two types of security requirements can be stipulated: functional requirements and assurance requirements. Functional requirements define desired security behavior. Assurance requirements provide the basis for establishing trust. PPs are typically developed by prospective consumers or by consortia of IT buyers or sellers. PPs are developed to create standard sets of security requirements that meet the needs of prospective buyers.

The second type of profile structure is the Security Target (ST). An ST is a product-specific stipulation of the security requirements addressed by a specific product along with information as to how the implemented product meets the stated security requirements. As stipulated in its accompanying ST, a specific product may claim conformance to one or more Protection Profiles. For example, a turnkey healthcare information system product can claim it conforms to a DBMS PP, an operating system PP, a PKI PP, and a firewall PP. Alternatively, a specific product may claim conformance to product-specific security requirements enumerated solely within the ST. A specific product can alternatively claim conformance to both a PP (or several PPs) as well as to additional product-specific security requirements enumerated within the ST.

The purpose of testing and evaluation of a product is to confirm that the product meets the product-specific requirements and evaluation criteria contained in the product's ST.

51.7.2 Security Functional Requirements Catalog.

The CC standard contains a catalog of security functional requirements organized according to a taxonomy of several different classes of security functional requirements, such as the Audit class, the Identification and Authentication class, and the Security Management class.

Each class has several families of functional security requirements, each family of which differs in emphasis or rigor in the way the security objectives of the parent class are considered. For example, the Audit class includes families of requirements that pertain to different aspects of auditing, such as audit data generation and audit event storage.

Each family of security functional requirements typically contains several individual, more specialized, elemental security functionality requirements components. These components are the specific security functional requirements that can be stipulated within a PP or ST as a desired security functionality. For example, the Audit Data Generation family of security functional requirements contains a component that pertains to audit record generation and another component that pertains to the linkage between a user and an auditable event.
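The class-family-component hierarchy is, in effect, a shallow tree. The sketch below models it in Python using the actual CC identifiers for the Audit class (FAU), its Audit Data Generation family (FAU_GEN), and that family's two components; the data-structure design itself is purely illustrative and is not part of the standard.

    from dataclasses import dataclass, field

    @dataclass
    class Component:
        """An individual, selectable security functional requirement."""
        ident: str   # e.g., "FAU_GEN.1"
        title: str

    @dataclass
    class Family:
        """A family groups components that refine one security objective."""
        ident: str   # e.g., "FAU_GEN"
        title: str
        components: list = field(default_factory=list)

    @dataclass
    class SecurityClass:
        """A class groups families sharing a common security focus."""
        ident: str   # e.g., "FAU"
        title: str
        families: list = field(default_factory=list)

    # The Audit class, one of its families, and that family's components:
    audit = SecurityClass("FAU", "Security Audit", families=[
        Family("FAU_GEN", "Audit Data Generation", components=[
            Component("FAU_GEN.1", "Audit data generation"),
            Component("FAU_GEN.2", "User identity association"),
        ]),
    ])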

To foster versatility and evolution, the CC also permits any new security requirements that do not appear in the taxonomies of standard requirements to be created using the CC's standard language. If additional requirements become popular, future versions of the CC standard may incorporate such additional requirements directly into the CC catalogs.

51.7.3 Security Assurance Requirements Catalog.

The CC standard also contains a catalog of assurance requirements organized according to a taxonomy of several assurance classes, such as classes pertaining to assurance requirements associated with (a) configuration management of the product, (b) design, development, delivery, and operation of the product, and (c) maintaining assurance throughout the product's life cycle. Two assurance classes contain assurance requirements applicable to the evaluation of PP or ST profiles.

The taxonomy of assurance requirements is refined in terms of a number of families of requirements within each assurance requirements class. Furthermore, a number of hierarchical assurance requirements components are contained within each family of assurance requirements.

51.7.3.1 Packaged Levels of Assurance.

Assurance requirements are bundled into one of seven predefined levels of assurance called Evaluation Assurance Levels (EALs). Higher assurance level bundles (e.g., EAL4) contain more rigorous packages of component assurance requirements than lower level bundles (e.g., EAL1). EALs form a rising scale of objective measures of risk reduction.

The names associated with the EALs provide a sense of the increasing assurance:

  • EAL1: Functionally Tested
  • EAL2: Structurally Tested
  • EAL3: Methodically Tested and Checked
  • EAL4: Methodically Designed, Tested, and Reviewed
  • EAL5: Semi-Formally Designed and Tested
  • EAL6: Semi-Formally Verified Design and Tested
  • EAL7: Formally Verified Design and Tested

EALs provide a monotonically increasing scale that balances the increasing levels of confidence that can be obtained with the increasing cost and decreasing feasibility of conducting the testing and evaluation necessary to develop a specific, higher level of confidence. EALs range upward from the entry-level EAL1 to a stringent EAL7.

At the low end, EAL1 can be used to support the contention that baseline due care has been exercised with regard to protection of personal information and to establish some confidence in correct operation of a product in an environment where the threats to security are not considered very serious. In the middle range, EAL4 specifies more rigorous assurance requirements, such as automated configuration management, and development controls that are supported by a life-cycle model. At the highest extreme, EAL7 requires a formally verified design and extensive formal analysis. EAL7 may be applicable to certain highly specific, perhaps one-of-a-kind products, targeted for extremely high-risk situations or where the high value of the assets being protected justifies the extraordinary costs of an evaluation to this level of confidence.

Typical commercial products can be expected to fall in the range from EAL1 to EAL4.

Typically, an EAL package is stipulated within a PP or an ST. Alternatively, a PP or ST may, instead, stipulate either (a) individual component assurance requirements from the assurance requirements catalog, or (b) a specific EAL plus additional specific assurance requirements from the assurance requirements catalog.
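Because the EAL scale is strictly ordered, and because a profile may stipulate an EAL plus extra catalog components (a so-called augmented package), checking whether one assurance package satisfies another reduces to a comparison like the following minimal Python sketch. The function and data shapes are hypothetical illustrations, not part of the CC.

    def package_satisfies(claimed_eal: int, claimed_extras: set,
                          required_eal: int, required_extras: set) -> bool:
        """Hypothetical check: does a claimed assurance package meet a
        required one? A higher EAL subsumes a lower one; any components
        named as augmentations must also appear among the claimed extras."""
        if claimed_eal < required_eal:
            return False                          # the EAL scale is monotonic
        return required_extras <= claimed_extras  # cover all augmentations

    # Example: a requirement of EAL4 augmented with flaw remediation
    # (assurance component ALC_FLR.1) is met by an identical claim.
    print(package_satisfies(4, {"ALC_FLR.1"}, 4, {"ALC_FLR.1"}))  # True
    print(package_satisfies(3, set(), 4, set()))                  # False

A real comparison would also account for hierarchical relationships among assurance components (ALC_FLR.2 subsumes ALC_FLR.1, for example); the flat set check above elides that detail.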

51.7.4 Comprehensiveness of Requirements Catalogs.

The taxonomies of both types of security requirements—functional and assurance—are broad, and both catalogs of security requirements are deep: Hundreds of individual, selectable security requirements are defined. Furthermore, the CC was designed to accommodate evolution of its security requirements catalogs as technologies and security needs evolve. New security requirements that are specified according to the CC language, but that are outside the current CC catalogs, can become potential candidates for new, internationally agreed-on, security requirements to be incorporated into later versions of the standard CC catalogs.

Given the breadth, depth, and changeability of possible security requirements that can be stipulated within PPs or STs, the Common Criteria makes it possible to express a virtually unlimited number of existing and yet-to-be-conceived security product needs (as PPs) and security product solutions (as STs).

51.8 USING THE CC TO DEFINE SECURITY REQUIREMENTS AND SECURITY SOLUTIONS.

The CC provides a standard way to stipulate (a) the risks of concern, (b) the security functional requirements that must be met in order to mitigate stated risks, and (c) the security assurance requirements that must be met to provide the confidence desired that products are built with desired quality. Consumers develop PPs to specify their needs to their suppliers. Product developers create STs to specify how implemented security functions and assurance measures meet consumers' needs.

The CC defines the structures and content requirements for constructing PPs and STs. The topics of content are summarized in Sections 51.8.1 and 51.8.2, respectively, with examples drawn from a PP for a Private Branch Exchange (PBX) style of telecommunications switch (www.niap-ccevs.org/pp/draft_pps/archived/pbxpp.pdf).

Benefits of PPs and STs are given in Section 51.8.3. A summary of the state of profile development in the community is given in Section 51.8.4.

51.8.1 Protection Profile Construction and Contents.

PPs enumerate the IA security functional and assurance requirements that an organization considers appropriate and valuable for a specific type of product in a specific threat environment. Each PP states the security problem that a PP-compliant product is intended to solve and stipulates the security requirements that are known to be useful and effective in meeting specific security objectives.

PPs should be created to be realistic in their ability to be met by a variety of potential products, each of which can have its high-level security specifications documented in an ST. Care must be taken to make sure PPs are achievable, generally with commercially available technology.

Guidance for developing PPs is available.30 For PPs in the government sector, principles for developing U.S. Government PPs are available (www.niap-ccevs.org/cc-scheme/policy/ccevs/scheme-pub-3.pdf), along with manuals to help select security requirements appropriate for environments with specific classes of robustness needs (www.niap-ccevs.org/pp/ci_manuals.cfm).

The main contents of a PP include statements about the following (a structural sketch in code follows the list):

  1. The threats and vulnerabilities to which a product will be exposed
  2. The security environment within which a product is to reside
  3. The security objectives to be met by a product
  4. The security requirements to be addressed by a product
  5. The rationale that is provided to justify all decisions and choices made in developing the content within a PP
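To make that structure concrete, here is a minimal skeleton of a PP as a Python dataclass. The field names paraphrase the five content areas above; they are illustrative only and do not reproduce the normative ISO 15408 outline.

    from dataclasses import dataclass, field

    @dataclass
    class ProtectionProfile:
        """Illustrative skeleton mirroring the five PP content areas."""
        threats: list = field(default_factory=list)       # item 1
        environment: dict = field(default_factory=dict)   # item 2 (IT/non-IT)
        objectives: list = field(default_factory=list)    # item 3
        requirements: list = field(default_factory=list)  # item 4 (SFRs, SARs, EAL)
        rationale: dict = field(default_factory=dict)     # item 5 (traceability)

    # A fragment of a hypothetical PBX-style PP:
    pbx_pp = ProtectionProfile(
        threats=["theft of telecommunications services"],
        objectives=["audit all administrative actions"],
        requirements=["FAU_GEN.1", "EAL2"],
        rationale={"FAU_GEN.1": "counters undetected misuse of the switch"},
    )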

51.8.1.1 Threats and Vulnerabilities.

A PP must identify the expected threats and vulnerabilities against which a prospective product must be protected.

As needed, threat and vulnerability analyses should be conducted. In the PBX case, threats and vulnerabilities include, for example:

  • Theft of telecommunications services
  • Hijacking of resources within the PBX to commit cyber crime or cyber espionage
  • Unauthorized use and modification (or destruction) of processes, system software, applications, and databases embedded within the product

51.8.1.2 Security Environment.

The intended operational environment within which a product is to reside must be described in terms of both its IT and non-IT aspects. Assumptions about the product's usage, administration, and management should be enumerated. All policies to which the product must comply within the intended environment should be enumerated. Such policies include all applicable laws and regulations as well as any organizational policies or rules with which the product must comply. All natural-language expressions of the environment, threats, policies, and assumptions must be codified via CC terminology and put into the PP. In the PBX case, the environment includes the types of PBX users and administrators, the types of PBX interfaces, and the types of interconnectivity expected with other telecommunications gear.

51.8.1.3 Security Objectives.

The PP must identify the realistic and achievable security objectives that must be established for both the product and the environment within which the product is to reside. The security objectives should indicate which threats are to be countered by the product and which are to be countered by the IT and non-IT environments in which the product is to operate, and to be administered and managed. The security objectives should indicate with which of the organizational policies the product will comply and which organizational policies the environment will address. All stipulated security objectives should be traceable back to the underlying stipulations of threats, policies, and assumptions.

51.8.1.4 Security Requirements.

A compatible set of security functional requirements components should be selected and refined from the CC catalogs to meet the stipulated security objectives and to thwart specified threats. The security functional requirements also should be selected to support the stipulated security policies, under the stipulated assumptions pertaining to the operational and administrative environment. The security functional requirements should specify functionality that will meet each security objective stipulated for the product as well as functionality that will meet each security objective that applies to the IT part of the environment in which the product is to reside.

In the PBX case, 50 security functional requirements components were selected from seven classes of security functional requirements, including, for example, Audit (Class FAU), Cryptographic Support (Class FCS), Identification and Authentication (Class FIA), and Protection of Security Functions (Class FPT). The requirements components selected for the Audit Class include components that stipulate what the PBX must audit, what information must be logged, who can access logs, and what the rules are for monitoring, operating, and protecting logs.

Likewise, security assurance requirements should be selected from the catalog of assurance requirements. Specific assurance requirements components, or a specific EAL package, should be selected to provide the desired level of assurance that the security objectives within the PP have been met. The desired level of assurance and the cost to attain such assurance should be balanced against factors, such as (1) the value of the resources to be protected, (2) the risk and extent of possible losses, (3) the level of confidence desired, and (4) any reasonably expected cost and delay factors that may accompany the development, and any subsequent testing, evaluation, and validation of a product at a specific level of assurance.

In the PBX case, over 100 assurance requirements components were selected from seven assurance requirements classes to demonstrate that:

  • Product design and configuration are good.
  • Adequate protection is provided during the design and implementation of the product.
  • Vendor testing on the product is of a specified depth and breadth.
  • Security functionality is not compromised during product delivery.
  • There is a specified quality and appropriateness in the product manuals that pertain to product installation, maintenance, and use.

Standard CC requirements components may be used exactly as defined in the CC; or, when appropriate, they may be tailored through the use of permitted, standard operations in order to meet a specific security policy or to counter a specific threat. Through such operations, (1) options within the standard CC requirements may be selected; (2) quantifications within specific CC security requirements may be assigned; and (3) any refinements to CC requirements may be made. Any such tailoring that is not completed within a PP may be completed within an ST that claims conformance to the PP.
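An assignment operation can be seen in miniature by completing the placeholder inside a requirement template, roughly as a PP or ST author would. In the Python sketch below, the template wording paraphrases a FAU_GEN.1-style element, and the helper function is hypothetical.

    import re

    def complete_assignment(template: str, values: dict) -> str:
        """Hypothetical helper: fill [assignment: X] slots in a CC
        requirement template with profile-specific values."""
        return re.sub(
            r"\[assignment:\s*([^\]]+)\]",
            lambda m: values[m.group(1).strip()],
            template,
        )

    # Paraphrased requirement element with an open assignment:
    template = ("The TSF shall be able to generate an audit record of "
                "[assignment: other specifically defined auditable events].")
    print(complete_assignment(
        template,
        {"other specifically defined auditable events":
         "all PBX administrator configuration changes"},
    ))
    # -> "The TSF shall be able to generate an audit record of all PBX
    #     administrator configuration changes."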

51.8.1.5 Rationale.

Rationale should be provided for all decisions and choices made in developing all of the PP content. Rationale should justify how selected security functional and assurance requirements components are suitable to counter the enumerated threats. Rationale should justify how selected requirements components comply with the enumerated policies or how they handle the enumerated assumptions stipulated for the environment. The rationale should provide a mapping between stipulated requirements and stated security objectives and then back to the original, underlying, driving needs as stated in terms of threats, policies, assumptions, and the environment in which a product is to reside.

51.8.2 Security Target Construction.

Product developers create STs to document detailed information about the security aspects of products they build. An ST is required when a specific product is submitted for CC-based testing; the product-specific ST provides the basis and evaluation criteria against which testing and evaluation of the product are performed. STs also may be used for conveying detailed information to consumers about the security functionality of, and assurance associated with, a specific product and the configuration in which a product has been formally tested.

STs contain a description of the environment within which the product described via an ST is intended to operate. That is, an ST enumerates the (a) threats to the product, (b) policies, laws, and regulations with which the product is claimed to conform, and (c) assumptions about the security aspects of the IT and non-IT environment within which the product is intended to be used. STs delineate the security objectives that the builder of a specific product claims are addressed by the product. STs also enumerate the security requirements that the product builder claims are addressed by the product as well as those requirements to be addressed by the environment within which the product is intended to operate.

Part of an ST is very much like the PP to which the ST claims conformance. Some PP-compliant STs will merely reference the associated PP rather than repeat the PP contents within the ST.

However, the ST goes further than a PP because it also specifies the security functions offered by the product to meet each of the stated security requirements in the ST. It also specifies the assurance measures taken by the product builder to meet all the stated assurance requirements in the ST.

Furthermore, a product builder may claim in the ST that the specific product conforms to one or more PPs. If so, the ST can claim that the product it describes addresses the security functional and assurance requirements stipulated in each of the PPs to which the product is claimed to be conformant. As appropriate, the ST may refine PP-stipulated security requirements that are generic or product-neutral in the PP. The ST can thus specifically tailor to the product any general requirements that may appear in a PP. The ST also may add security requirements over and above any PPs to which the product claims to be conformant.

Rationale is provided for all decisions and choices made in developing the ST content. Rationale justifies how selected security functional and assurance components (a) are suitable to counter the enumerated threats, (b) comply with the enumerated policies, and (c) handle the enumerated assumptions stipulated for the environment. Rationale also justifies how implemented security functions meet stated security functional requirements and how the assurance measures used meet the stated security assurance requirements. Rationale justifies all claims made in an ST about the PPs with which the product conforms. In essence, the rationale provided demonstrates that the ST contains an effective and suitable set of countermeasures and that this set of countermeasures is consistent, complete, and cohesive.
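One piece of that demonstration, confirming that an ST addresses every requirement of each PP it claims, reduces to a set-coverage question. The following minimal Python sketch assumes, for simplicity, that requirements are flat sets of component identifiers; real evaluations also weigh refinements and hierarchically stronger components.

    def st_coverage_gaps(st_requirements: set, claimed_pps: dict) -> dict:
        """Hypothetical check: per claimed PP, which PP requirements
        does the ST fail to cover?"""
        return {
            pp_name: pp_reqs - st_requirements
            for pp_name, pp_reqs in claimed_pps.items()
        }

    gaps = st_coverage_gaps(
        st_requirements={"FAU_GEN.1", "FAU_GEN.2", "FIA_UID.1"},
        claimed_pps={
            "pbx_pp": {"FAU_GEN.1", "FIA_UID.1"},
            "os_pp": {"FAU_GEN.1", "FPT_STM.1"},
        },
    )
    print(gaps)  # {'pbx_pp': set(), 'os_pp': {'FPT_STM.1'}}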

51.8.3 Benefits of PPs and STs.

For potential consumers of security-relevant products, PPs provide a standard and flexible way to help organize requirements capture and analysis. PPs provide a standard and flexible way to transform security needs and policies into unambiguous, widely recognized, exactly tailored, product-neutral, security requirements for the desired security functional behavior and assurance levels for any class of desired IT product.

PPs provide a standards-based mechanism to convey clearly an organization's, or a market sector's, security needs to suppliers, vendors, and business partners. Indeed, PPs associated with broad aspects of critical infrastructure may be approved as recognized standards. PPs provide the bases for standard, requirements-based testing and evaluation in that stated assurance requirements impact the requisite depth and breadth of testing and evaluation.

As for STs, by examining an ST, consumers can verify completely and unambiguously whether the claimed security functionality of a product and the claimed assurance associated with the product are consistent with consumers' requirements. Comparing product-specific STs with PPs of interest fosters a common buyer/seller understanding of security needs and security capabilities and confidence in products. By comparing the STs of products, consumers are better able to compare the security features of competing products. Consumers are able to understand what types of tests and evaluations a specific product underwent. They also are able to determine whether the configuration in which a product was tested is consistent with the environment into which the product will be deployed. Through these types of benefits, consumers can shorten their acquisition cycles for security-relevant products.

Furthermore, vendors' and suppliers' marketing can improve by using a product's ST to show how a specific product matches customer security requirements.

Evaluators in CC testing laboratories also benefit by being able to understand the scope of the product-specific evaluation that needs to be performed on a specific product. That is, claims made within the ST are subject to evaluation by a testing laboratory.

51.8.4 Extant PPs and STs.

PPs have been created, and are being created, for many classes of products and technology. Of some note is the registry of U.S. Government PPs that have been validated by CC testing labs operating under the CCEVS. Over 25 classes of PPs are available and stipulate security needs for products and technology such as:

  • Antivirus
  • Biometrics
  • Certificate management
  • Role-based access control
  • Firewalls (packet-filtering firewalls and application-filtering firewalls)
  • Intrusion detection/prevention systems
  • DBMSs
  • Public key infrastructure/key management infrastructure
  • Operating systems

Also, PPs support specific needs of specific industries. For example, a PBX PP, telecom switch PP, and Voice over IP firewall PP apply to the telecommunication industry's needs. PPs for smart cards, smart card components (chips, operating systems, applications, crypto-engine, etc.), and Certificate Issuing and Management Systems apply to the financial industry. An Industrial Control System PP (www.isd.mel.nist.gov/projects/processcontrol/SPP-ICSvL0.pdf) expresses the needs of the 600 member enterprises within the Process Control Security Requirements Forum.

Since PP development is decentralized by its very nature of being customer specific or industry specific, no single registry of completed PPs (or PPs under development) exists. For example, in the United States, the several government PPs are listed at www.niap-ccevs.org/cc-scheme/pp/. There are comparable national registries in other countries. Commercial customers or vendors also develop PPs (such as for Windows and Linux), but knowledge of the existence of such PPs tends to arise in a haphazard fashion (often as products are entered into CCEVS testing and evaluation).

STs exist for at least all products that have undergone CC-based testing. Tested products that have further undergone testing validation by a CC Validation Body are listed on the Web sites of the several national bodies and national schemes associated with the CC paradigm. Thus, at least the products listed, for example, on the CCEVS Validated Products List (www.niap-ccevs.org/cc-scheme/vpl/) have associated STs. Many other products also have associated STs. But it is difficult to develop centralized listings of all such products since not all products that have undergone CC-based testing have necessarily been tested at accredited CC testing laboratories; and products that have been tested by such testing labs have not necessarily had their testing validated and listed by a CC national validation body.

51.9 COMMON TEST METHODOLOGY FOR CC TESTS AND EVALUATIONS.

According to the Common Criteria paradigm, testing and evaluation either of CC profiles or of products claiming conformance to CC profiles are conducted as structured and formal processes. That is, evaluators use a standard methodology to carry out a series of standard testing and evaluation activities. The standard methodology and assessment activities are defined in the Common Evaluation Methodology (CEM) standard.

The CEM standard provides a common base for independent, autonomous CC testing laboratories to assess CC profiles and vendor products in the same ways. Use of the CEM by all CC testing labs provides a common floor of confidence in similar products that may have been assessed by different CC testing labs.

Because the CEM is internationally recognized, the use of customer-unique or country-unique assessment is minimized, if not avoided altogether. Assessment costs are minimized, since vendors need only prepare for one testing campaign rather than for a battery of different testing campaigns conducted against different customer-specific or country-specific assessment criteria.

51.10 GLOBAL RECOGNITION OF CEM/CC-BASED ASSESSMENTS.

To fortify assessment consistency, and the use of standard CEM-based CC testing and evaluation in CC testing labs that may be in different countries, many countries have banded together to declare their mutual recognition of CEM-based assessment regardless of the country in which a CC testing lab is located. The goal is to allow CC profiles or products claiming conformance to CC profiles to be tested once in a competent, consistent, credible way by any CC testing lab anywhere in the world so that product suppliers can sell CEM/CC-assessed products everywhere in the world, without any further reassessments by different countries or different buyers.

The “Common Criteria Arrangement on the Mutual Recognition of the Common Criteria Certifications in the Field of Information Technology Security,” often called just the Mutual Recognition Arrangement (MRA), is the official multicountry declaration that different countries will recognize the CEM/CC-based assessments that may be conducted by CC testing labs in each others' jurisdictions.

The MRA identifies the several conditions necessary for mutual recognition of CEM/CC-based assessments by CC testing labs. The MRA requires the use of the CC and CEM as the basis for evaluation criteria and evaluation methods. The MRA stipulates that each country must have a national scheme for supporting CEM/CC-based assessments by accredited, quality-consistent CC testing labs and that each country must establish an accredited validation body to validate and to certify the results of CC testing labs in the country.

Each country's validation body is responsible for defining and maintaining that country's national scheme for mutually recognizing CEM/CC-based testing, evaluation, national validation, and national certification of CEM/CC-assessed CC profiles or vendor products. Each country's validation body is responsible for achieving correct, consistent, credible, and competent application of the CC and CEM within the country.

51.11 EXAMPLE NATIONAL SCHEME: CCEVS.

The U.S. national scheme is called the Common Criteria Evaluation and Validation Scheme (CCEVS). The CCEVS operates under the authority of the U.S. CCEVS Validation Body. The CCEVS Validation Body operates under the terms of the MRA as well as additional national requirements.

The purpose of the CCEVS is twofold. The CCEVS establishes, and maintains the quality of, the CC-based security testing, validation, and certification infrastructure in the U.S. (see Section 51.11.1). It also defines the policies, procedures, processes, and sequence of events for using this infrastructure for testing, evaluating, validating, and certifying products and profiles in the United States (see Section 51.11.2). More about the CCEVS is available at www.niap-ccevs.org/cc-scheme/defining-ccevs.cfm.

51.11.1 Maintaining the Testing Infrastructure.

The CCEVS defines the organizational entities associated with, and the processes and procedures for establishing and maintaining, an MRA-recognized, U.S. Government–overseen, commercial security testing infrastructure for conducting CC-based testing in the United States. Both products' security claims and the proper construction of customer security requirements profiles can be assessed via this infrastructure.

This infrastructure depends on the availability of multiple, accredited, independent, commercial, fee-for-service, security testing laboratories, a government oversight body, and government certificate issuing authorities.

The labs, called CC Testing Labs (CCTLs), are accredited by the U.S. Department of Commerce's National Voluntary Laboratory Accreditation Program (NVLAP). NVLAP uses standards and guidance established by the CCEVS. Information about the process and benefits of accrediting laboratories appears elsewhere.31 Accreditation fosters commonality of testing across all CCTLs. Lists of CCEVS-accredited CCTLs (www.niap-ccevs.org/cc-scheme/cctls/) or labs in the process of seeking accreditation (www.niap-ccevs.org/cc-scheme/cctls/7candidate) are maintained by the CCEVS Validation Body. Lists of official CCTLs around the world (www.commoncriteriaportal.org/labs.html) and accredited under different national schemes are available at the Common Criteria Portal (www.commoncriteriaportal.org/).

The CCEVS Validation Body provides the services of U.S. government oversight of the testing infrastructure and its operation. It also provides the service of validating the results of product or profile testing and issuing certificates to tested products or profiles that have been validated.

51.11.2 Using the Testing Infrastructure.

A discussion of the basic processes associated with using the testing infrastructure follows.

Organizations seeking analysis and assessment of PPs, STs, and products claiming conformance to such profiles select and contract with accredited labs (CCTLs) of their choice.

CCTLs provide impartial, standardized, third-party analysis and assessment of such profiles and products. CCTLs use appropriate, common CEM test methods that are standard across all test labs. These methods are tailored to security requirements or to security claims about a product. These common test methods are used to verify in a standard way that a product under test complies with the builders' claims about (1) the security features of a product and (2) the processes associated with the development and life cycle of the product.

The cost to conduct an evaluation varies dramatically, depending on the complexity of the product and the EAL claim to be assessed. CCTL fees to evaluate simple, low-EAL products can be on the order of tens of thousands of dollars, whereas fees to evaluate complex, moderately high-EAL products can run to a million dollars or more.

The costs to evaluate many products fall between these extremes. Similarly, the time to complete an evaluation may range from a couple of months to several months.

The CCEVS Validation Body provides independent, standardized, government oversight of the security analysis and assessment activities of the accredited labs. It ensures consistency across CCTLs and therefore promotes comparability of testing and evaluation results across all CCTLs.

The CCEVS Validation Body can be used as an independent body to validate the testing results generated by a CCTL. Such additional validation, if pursued, establishes even greater confidence and trust in the product under test. Under the authority of NSA and NIST, the CCEVS Validation Body issues MRA-recognized certificates to successfully assessed and validated products and CC profiles. These certificates confirm that the conclusions of the testing lab were consistent with the evidence presented and that there are no factors that would invalidate the evaluation. The certificate applies only to the product version that was evaluated, not to future versions of that product. However, an assurance maintenance scheme (discussed later) allows future versions of the certified product to maintain certification.

The CCEVS Validation Body also maintains lists of profiles and products that have been validated and certified in the United States (see Section 51.12).

Detailed information about preparing for, conducting, and validating testing appears elsewhere.32

51.11.3 Maintaining Certification in an Evolving Marketplace.

Validated products that claim conformance to the CC AMA Assurance Maintenance class of assurance requirements, and that have been altered or upgraded since the time of their original CC evaluation, can undergo a mutually recognized assurance maintenance scheme process33 to maintain their CC certification at the same level of assurance. Certification retention depends on (a) the vendor pursuing all the assurance maintenance activities stipulated in the CC AMA assurance requirements, (b) the vendor submitting a Security Impact Analysis (SIA) that in part provides an audit of the vendor's assurance maintenance activities, and (c) acceptance of the SIA by the CCEVS Validation Body. Otherwise, the revised product must undertake a new CC evaluation. Practical information about the assurance maintenance process is available elsewhere.34

51.12 VALIDATED PROFILES AND PRODUCTS.

CC profiles successfully validated under the CCEVS are placed on a national registry of PPs (www.niap-ccevs.org/cc-scheme/pp/). Profiles under development in the United States and not yet validated are also listed (www.niap-ccevs.org/pp/draft_pps/) in an attempt to help avoid duplication of effort by independent organizations that may wish to develop similar PPs. A complete list of profiles validated around the world by all national schemes is available at the Common Criteria Portal (www.commoncriteriaportal.org/products.html).

Products successfully validated under the CCEVS are placed on a Validated Products List (www.niap-ccevs.org/cc-scheme/vpl/). The list is sorted by product type, assurance level, product name, and vendor name. Common Criteria certificates issued for IT products apply only to the specific versions and releases of those products for which a validated CCTL assessment took place. An evaluated product will remain on the Validated Products List for three years, after which time it will be moved and indefinitely listed on a historical evaluated products list (www.niap-ccevs.org/cc-scheme/vpl/archived/). This archival list contains validated products that are no longer commercially available. A list of products currently undergoing evaluation within the CCEVS is also maintained (www.niap-ccevs.org/cc-scheme/in_evaluation/).

A complete list of the hundreds of products evaluated across all national schemes, along with their associated PPs, is available at the Common Criteria Portal (www.commoncriteriaportal.org/products.html). Similar to the CCEVS, other national schemes maintain lists of products currently under evaluation within each national scheme.

Many of the market leading vendors in many areas of technology have validated products, and many of the most prevalent areas of technology are supplied by validated products. Vendors include major players such as Apple, BMC, Cisco, CA, HP, IBM, Juniper, Microsoft, Oracle, Sun, Sybase, Symantec, Xerox, and many more. The technologies for which validated products are available include proprietary and open source operating systems, network and security management equipment, certificate management components, firewalls, IDSs/IPSs, copiers, web servers, messaging gear, AV gear, VPN gear, and many more.

Validated products are getting significant traction in the government marketplace. Certain commercial industries—for example, the process control and financial industries—also are attracted to validated products. Although there are still consumers and vendors who believe that the CC paradigm is complicated, inefficient, or costly, the many who do build and buy validated products see the CC paradigm as the best, proactive way to improve the security and assurance of products bought and sold in the marketplace.

51.13 BENEFITS OF CC EVALUATION.

CC-based testing under the aegis of a national scheme benefits both manufacturers and their customers.

51.13.1 Helping Manufacturers.

CC-based accredited testing directly helps product manufacturers, and thereby also indirectly benefits product consumers, in a number of ways.

Reliance on standards broadens potential markets. It is important that security evaluations of IT products be carried out in accordance with recognized standards and procedures. The use of standard IT security evaluation criteria and IT security evaluation methodology contributes to the repeatability and objectivity of the results. The use of standards also tends to increase a product's appeal to varied, otherwise unrelated customer constituencies.

CC-based testing under an MRA-recognized national scheme helps manufacturers penetrate global markets. It provides access to the MRA, and products and profiles so tested and validated can be recognized internationally within all countries, without needing to undergo any additional testing.

It helps manufacturers penetrate specific domestic markets. In the United States, for example, national government policy (NSTISSP #11 and DoD 8500) requires procurement and use of products that are CCEVS certified.35 According to this policy, U.S. DoD and civil agencies working for DoD must purchase only CCEVS-certified products whenever products are needed for environments pertaining to matters of national security. The civil sector government agencies also are encouraged to purchase such products for all other applications.

CC-based testing helps manufacturers lower costs by providing a pool of private, accredited, competitive security testing labs that have consistent testing quality and competence. Even when selecting the least expensive testing lab, manufacturers do not have to worry about sacrificing the quality of the testing services they receive.

By avoiding any further testing or retesting beyond that performed by the selected testing lab, manufacturers are spared enormous and costly country-specific or customer-specific testing campaigns. Lower testing costs can increase product profit margins or lower product prices or both.

An analysis of the CCEVS paradigm reveals some deficiencies to be addressed. The cost and time duration of evaluations may be addressed through tax credit incentives and development of so-called Common Criteria Lite concepts. Also, as vendors begin to leverage the lessons they learn from CC-based testing into their own development processes, it seems plausible that some sort of “self-certification program, subject to the same rigorous evaluation and audit requirements as the third-party evaluation program, may reduce the cost and time burdens of third-party evaluations.”36 There have also been criticisms that certified products can cost more than uncertified versions,37 but a consensus seems to be growing that the CC scheme is overwhelmingly positive.38

51.13.2 Helping Consumers.

CC-based testing directly benefits product consumers in a number of ways.

It provides consumers an impartial, high-quality assessment of a security profile, or a security-focused IT product, since it is an assessment that is conducted by an independent entity. The fact that testing by accredited testing labs is done according to standards gives consumers a strong sense that testing is objective and not slanted to benefit the product that was tested.

CC-based testing is a first step to helping consumers understand what features are offered by products, whether products comply with certain stated security requirements, and how trustworthy such products should be considered.

Because the CC paradigm has overcome many of the limitations of other approaches to developing trust in products, products tested by CCTLs are, arguably, among the most trustworthy products available. CCEVS lessons reveal that tested products tend to be more trustworthy. Indeed, statistics39 show that nearly a third of products tested were improved by identification and elimination of security flaws that could have been exploited by attackers. An even greater percentage of tested products were improved by adding or extending security functionality.

With products that can be trusted, customers will be able to reduce their product acquisition costs by eliminating acceptance testing that duplicates testing already performed on a product by an accredited testing lab.

Furthermore, as was the case with the Linux CCEVS certification, vendors of CC-tested products often choose not to increase the price of their product after it is evaluated and certified. Instead, such vendors look to increased sales to cover the costs of product certification.

51.14 CONCLUDING REMARKS.

Early efforts to test the security features of products were lacking in many respects. Few evaluated the assurance associated with products. Product comparisons typically were “apples-to-oranges” comparisons, and consumers had no common way to articulate their security and assurance needs. The Common Criteria changes these dynamics.

The experience gathered from ongoing security requirements profiling and security product testing efforts has consistently shown the Common Criteria paradigm to be a powerful, flexible, standards-based mechanism. It is a mechanism that facilitates defining IT security and assurance requirements tailored to users' specific needs. It facilitates stipulating IT product security specifications that are compliant with requirements. It facilitates testing and test verification of products to show they are correct, complete, well built, and compliant with their security specifications. It facilitates worldwide recognition of tested, CC-specified IT products and systems.

The CC paradigm is effective. It improves product quality by forcing a meticulous and clear focus on security, by forcing a rigorous security design and development discipline, and by forcing a testing campaign that is appropriate to the normal operation of the product or system, so as to verify that the implemented security is correct, complete, and compliant under normal operations.

The CC paradigm helps users understand what risks and vulnerabilities are important (via PPs) and what risks and vulnerabilities are addressed by products (via STs). It helps users understand what level of protection and confidence they want (via PPs) and what level of protection and confidence are provided by products (via STs). It also may help shorten users' product acquisition cycles since PPs can be used as procurement specifications, and since users can minimize their own acceptance testing efforts.

The CC paradigm can provide due diligence evidence. By providing a way to demonstrate the traceability of the security aspects of a product back to user requirements as well as to applicable policies, laws, and regulations, the CC paradigm can minimize users' exposure to potential penalties for noncompliance to security-relevant laws or regulations.

The CC paradigm helps vendors describe and demonstrate what level of security they designed and built into their products. It helps vendors show that they understand and meet consumer requirements and that they have nothing to hide. Consumers know exactly what they get in CC-specified and CC-validated products.

Even products that do not undergo evaluation benefit, as vendors that have undertaken CC-based evaluations have improved their overall product development, delivery, and support processes, and accordingly have reduced their likelihood of producing security defects and vulnerabilities. Products that are more secure because of the culture and environment in which they were built tend to increase the overall assurance in all kinds of IT infrastructures.

The high and consistent testing and verification standards that the CC paradigm provides worldwide allow vendors to outsource the security testing, validation, and certification of their products anywhere in the world. The standards allow testing costs to be capped because they limit the number of necessary security (re)evaluations to only one. The standards allow builders to implement security products anywhere in the world and to test them anywhere in the world; correspondingly, the use of these standards allows users to buy products with confidence from anywhere around the world.

The CC paradigm improves product quality and therefore helps to differentiate products in terms of verified quality. It facilitates expansion of a stable set of quality products from which to build integrated systems. It raises the level of confidence, credibility, and trust in products as well as in vendors and their product design, development, testing, and maintenance processes.

Most important, the CC paradigm assures that better-engineered, more acceptable products are available from vendors who prepare for and undergo rigorous, independent, CC-based evaluations.

51.15 NOTES

1. “Federal Information Security Management Act of 2002” (colloquially, Title III of the E-Government Act of 2002), U.S. Public Law 107-347, Section III, Dec 2002. (Available via http://csrc.nist.gov/sec-cert/ca-library.html)

2. Bill Clinton, “Critical Infrastructure Protection,” Presidential Decision Directive/NSC-63, PDD-63, The White House, Washington, DC, May 22, 1998, http://fas.org/irp/offdocs/pdd/pdd-63.htm. “Defending America's Cyberspace—National Plan for Information Systems Protection,” Version 1.0, The White House, 2000. George W. Bush, “The National Strategy to Secure Cyberspace,” The White House, Washington, DC, February 2003, www.whitehouse.gov/pcipb/.

3. “Information Technology—Security Techniques—Code of Practice for Information Security Management,” ISO/IEC 17799 (2005), http://webstore.ansi.org/ansidocstore/product.asp?sku=ISO%2FIEC+17799%3A2005.

4. Ron Ross et al., “Recommended Security Controls for Federal Information Systems,” Special Publication 800-53, National Institute of Standards and Technology, Department of Commerce (December 2007), http://csrc.nist.gov/publications/nistpubs/800-53-Rev2/sp800-53-rev2-final.pdf

5. Department of Defense Instruction, “DoD Information Technology Security Certification and Accreditation Process (DITSCAP),” 5200.40, December 30, 1997, http://iase.disa.mil/ditscap/i520040.pdf.

6. National Security Telecommunications and Information Systems Security Instruction (NSTISSI) No. 1000, “National Information Assurance Certification and Accreditation Process (NIACAP)” (April 2000), www.cnss.gov/Assets/pdf/nstissi_1000.pdf.

7. NIST Special Publication 800-37, “Guide for the Security Certification and Accreditation of Federal Information Systems” (May 2004), http://csrc.nist.gov/publications/nistpubs/800-37/SP800-37-final.pdf.

8. Department of Defense Directive 8500.1, “Information Assurance (IA),” October 24, 2002, www.acq.osd.mil/ie/bei/pm/ref-library/dodd/d85001p.pdf.

9. Department of Defense Instruction 8500.2, “Information Assurance (IA) Implementation,” February 6, 2003, www.niap-ccevs.org/cc-scheme/policy/dod/d85002p.pdf.

10. Personal communications with several keynote speakers at the First International Common Criteria Conference, National Information Assurance Partnership, Baltimore, MD, May 2000.

11. “Verizon Business Completes Cybertrust Acquisition” July 9, 2007, www.verizonbusiness.com/us/about/news/displaynews.xml?newsid=22913&mode=vzlong&lang=en&width=530&root=/us/about/news/releases/&subroot=release.xml&langlinks=off.

12. Greg Keizer, “Google, Sun, Lenovo, Others Out to Shame Spyware.” TechWeb Network, January 25, 2006; www.techweb.com/wire/ebiz/177103886.

13. Carnegie Mellon University Software Engineering Institute, “Capability Maturity Model for Software,” CMU/SEI-91-TR-24, 1991. The model is described at www.sei.cmu.edu/cmm/. See also www2.umassd.edu/SWPI/processframework/cmm/cmm.html.

14. “International Standards for Quality Management,” 2nd ed., ISO 9000 (1992).

15. U.S. Department of Defense, “Trusted Computer System Evaluation Criteria” (TCSEC), DOD5200.28-STD (December 1985).

16. National Computer Security Center, National Security Agency, “Trusted Network Interpretation of the Trusted Computer System Evaluation Criteria” (TNI), 9800 Savage Rd., Ft. Meade, MD 20755, July 31, 1987.

17. National Computer Security Center, National Security Agency, “Trusted Database Management System Interpretation of the Trusted Computer System Evaluation Criteria,” NCSC-TG-021, 9800 Savage Rd., Ft. Meade, MD 20755, April 1991.

18. Office for Official Publications of the European Communities, “Information Technology Security Evaluation Criteria” (ITSEC), Luxembourg (1991).

19. National Security Telecommunications and Information Systems Security Committee (NSTISSC), “Advisory Memorandum on the Transition from the Trusted Computer System Evaluation Criteria to the International Common Criteria for Information Technology Security Evaluation,” NSTISSAM Compusec/1-99, http://niap.nist.gov/cc-scheme/nstissam_compusec_1-99.pdf, NSTISSC Secretariat, National Security Agency, 9800 Savage Rd., Ste. 6716, Ft. Meade, MD 20755-6716, March 11, 1999.

20. “Common Criteria for Information Technology Security Evaluation,” Version 2, ISO/IEC International Standard (IS) 15408-1 through 15408-3, ISO/IEC JTC1 SC27 WG3, 1999. Updates and future drafts are available: www.niap-ccevs.org/cc-scheme/cc_docs/.

21. W. Rishel, “ISO 15408 for Security and Privacy in Healthcare,” Research Note, Technology, T-10-5507, Gartner Group, March 2, 2000.

22. The terms “testing” and “evaluation” are a source of ambiguity and discrepancy in the community. In this chapter the terms “testing,” “evaluation,” and “testing and evaluation” are used interchangeably.

23. Common Evaluation Methodology Editorial Board, “Common Methodology for Information Technology Security Evaluation: Evaluation Methodology,” Version 3.1 (September 2007). Updates and future drafts are available.

24. “Mutual Recognition of the Common Criteria Certifications in the Field of Information Technology Security,” initially signed October 5, 1998; updated as “The Arrangement on the Recognition of Common Criteria Certificates in the field of Information Technology Security” (May 2000), www.niap-ccevs.org/cc-scheme/cc-recarrange.pdf, and as updated periodically with new signatory countries: www.niap-ccevs.org/cc-scheme/ccra.cfm.

25. NIAP/Department of Commerce, National Institute of Standards and Technology, Computer Security Division, “Common Criteria Evaluation and Validation Scheme for Information Technology Security—Organization, Management and Concept of Operations,” Scheme Publication #1, Version 2. Available from NIAP/Department of Commerce, Room 426, NN, 100 Bureau Drive, Mail Stop 8930, Gaithersburg, MD 20899-8930 (May 1999). http://www.niap-ccevs.org/cc-scheme/policy/ccevs/scheme-pub-1.pdf

26. These signatories are the NSA and the NIST operating together in a joint program called the National Information Assurance Partnership (NIAP); www.niap-ccevs.org/.

27. ISO/IEC Guide 25, “General Requirements for the Competence of Calibration and Testing Laboratories” (1990). www.fasor.com/iso25/.

28. ISO/IEC Technical Report 13233, “Information Technology Interpretation of Accreditation Requirements in Guide 25 Accreditation of Information Technology and Telecommunications Testing Laboratories for Software and Protocol Testing Services.” Can be purchased online: www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=21468.

29. NIST Handbook 150-20 “Information Technology Security Testing—Common Criteria,” (www.niap-ccevs.org/cc-scheme/policy/ccevs/HB150-20.pdf), a technology-specific extension to J. L. Cigler and V. R. White, eds., NIST Handbook 150, “National Voluntary Laboratory Accreditation Program—Procedures and General Requirements,” U.S. Department of Commerce, Technology Administration, National Institute of Standards and Technology (Washington, DC: U.S. Government Printing Office, 1994); http://ts.nist.gov/Standards/Accreditation/upload/nist-handbook-150.pdf

30. Technical Report ISO/IEC TR 15446, “Information Technology—Security Techniques—Guide for the Production of Protection Profiles and Security Targets” (July 2004), (http://isotc.iso.org/livelink/livelink/fetch/2000/2489/IttLHome/PubliclyAvailableStandards.htm). Look under ISO/IEC TR 15446:2004.

31. Seymour Bosworth and M. E. Kabay, Computer Security Handbook, 4th ed. (New York: John Wiley & Sons, 2002), Chapter 27, Sections 27.4.7 and 27.9.7.

32. Bosworth and Kabay, Computer Security Handbook, 4th ed., Chapter 27, Section 27.9.8.

33. Common Criteria project, “Assurance Continuity: CCRA Requirements,” CCIMB-2004-02-009, Version 1.0 (February 2004), www.niap-ccevs.org/cc-scheme/cc_docs/assur_con_v1.pdf.

34. Soheila Amiri, “The Purpose and Value of Common Criteria Assurance Maintenance (AMA),” CyberGuard Corp., Fort Lauderdale, FL (May 2004) (www.cyberguard.com/download/white_paper/en_cg_AMA.pdf).

35. National Security Telecommunications and Information Systems Security Committee, “National Policy Governing the Acquisition of Information Assurance (IA) and IA-Enabled Information Technology (IT) Products,” National Security Telecommunications and Information Systems Security Policy (NSTISSP) No. 11, NSTISSC Secretariat, National Security Agency, 9800 Savage Rd., Suite 6716, Ft. Meade MD 20755-6716 (January 2000). (Updated as NSTISSP No. 11, Revised Fact Sheet, “National Information Assurance Acquisition Policy” (July 2003); www.cnss.gov/Assets/pdf/nstissp_11_fs.pdf.) Also, National Security Telecommunications and Information Systems Security Committee, “Advisory Memorandum for the Strategy for using the National Information Assurance Partnership (NIAP) for the Evaluation of Commercial Off-the-Shelf (COTS) Security Enabled Information Technology Products,” NSTISSAM Infosec/2-00, February 8, 2000, www.cnss.gov/Assets/pdf/nstissam_infosec_2-00.pdf. Available from NSTISSC Secretariat, National Security Agency, 9800 Savage Rd., Suite 6716, Ft. Meade MD 20755-6716.

36. National Cyber Security Partnership, Technical Standards and Common Criteria Task Force, “Recommendations Report” (April 2004), www.cyberpartnership.org/init-tech.html.

37. P. Wait, “Energy Contract Stirs Conflict: Deal Raises Questions about Evaluation Process, Common Criteria's Value,” Government Computer News, May 14, 2006; www.gcn.com/print/25_12/40754-1.html#.

38. W. Jackson, “Mary Ann Davidson: In Defense of Common Criteria,” Government Computer News, October 8, 2007; www.gcn.com/print/26_26/45166-1.html#.

39. Rutrell Yasin, “NIAP Chief Touts Common Criteria,” Federal Computer Week, October 27, 2004; http://archives.neohapsis.com/archives/isn/2004-q4/0097.html.
