Chapter 19. Building Systems with Assurance

 

LORD BARDOLPH: When we mean to build, We first survey the plot, then draw the model; And when we see the figure of the house, Then must we rate the cost of the erection; Which if we find outweighs ability, What do we then but draw anew the model In fewer offices, or at last desist To build at all?

 
 --King Henry IV, Part II, I, iii, 41–48.

Designing and implementing systems with assurance requires that every step of the process involve an appropriate level of assurance. This chapter discusses how to provide appropriate levels of assurance during each step of building a system. It emphasizes the documentation and methods required to obtain evidence supporting claims of assurance, and it provides the context for detailed discussions of methodologies such as formal program verification and testing.

Assurance in Requirements Definition and Analysis

Understanding the role of assurance in the development of requirements means understanding what requirements must provide. The set of requirements must be complete and correct in the context of security policy models. Defining requirements is an iterative process that normally begins with threat definition and culminates with the detailed requirements used in the design, implementation, and maintenance of the system.

Threats and Security Objectives

In building a secure or trusted system, it is a mistake to assume that threats to the system are obvious or well-defined. This section briefly discusses identification of the security threats to the system and development of high-level security requirements, or security objectives, to mitigate the threats. This approach parallels that of the Common Criteria (see Section 21.8).

  • Definition 19–1. A threat is a potential occurrence that can have an undesirable effect on the system assets or resources. It is a danger that can lead to undesirable consequences.

Threats are different from vulnerabilities.

  • Definition 19–2. A vulnerability is a weakness that makes it possible for a threat to occur.

At the highest layer of abstraction, security threats are breaches of confidentiality, disruptions of integrity, or denials of service. It is important to refine these threats in relation to the specific system and the environment in which it must operate. Threats may come from either outside or inside some boundary that defines the system. Threats can come from authorized users or from unauthorized users who masquerade as valid users or find ways to bypass security mechanisms. Threats can also come from human errors or from acts of God.

If the system is not connected to external networks, outside attackers may not be a threat. Elimination or mitigation of the threat of penetration does not, however, eliminate the threat of disclosure of secrets, breaches of integrity, or denials of service. Typically, inside users are trusted to use the system correctly, but there are many ways in which this trust can go wrong. One way is through intentional misuse of authorizations, whether for fun, profit, or revenge. Another is the so-called fat-finger error, whereby an authorized user makes a mistake or inadvertently corrupts or misuses the system. Other means of misusing systems include finding ways to defeat or bypass authorization and access controls or other security mechanisms to reach information that would ordinarily be denied the perpetrator.

Every identified threat must be addressed by some countermeasure that mitigates it. Security objectives are high-level requirements that can serve this purpose. For example, threats regarding unauthorized use of the system can be mitigated by an objective that requires user identification and authentication before a user is given access to any system resources. Objectives are requirements at the highest level, and they provide clues about the kinds of mechanisms that are needed to implement them. In addition, objectives reveal information that can help in the subsequent development of a detailed requirement specification. Objectives suggest models and other existing policies. Sometimes security objectives alone cannot address every threat; in such cases, assumptions about the operating environment, such as the presence of physical protection mechanisms, cover the remaining threats.

Mapping the security threats into the set of objectives and assumptions partially addresses the completeness of the system security requirements. Note that every threat must be addressed. A threat may be mitigated by a combination of assumptions and objectives, and a single objective or assumption often addresses multiple threats.
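The coverage requirement in this mapping can be checked mechanically. The sketch below, using hypothetical threat and countermeasure identifiers, verifies that every threat is addressed by at least one objective or assumption and flags any that are not:

```python
# Sketch: checking that every identified threat is covered by at least one
# security objective (O-*) or environmental assumption (A-*).
# All identifiers are illustrative, not from any real security target.

coverage = {
    "O-AUTH":     {"T-MASQUERADE", "T-UNAUTH-USE"},  # identification & authentication
    "O-AUDIT":    {"T-MISUSE"},                      # accountability for insiders
    "A-PHYSICAL": {"T-THEFT"},                       # assumption: machine room is locked
}

threats = {"T-MASQUERADE", "T-UNAUTH-USE", "T-MISUSE", "T-THEFT", "T-EAVESDROP"}

# Union of all threats addressed by some objective or assumption.
covered = set().union(*coverage.values())
uncovered = threats - covered

if uncovered:
    print("Unaddressed threats:", sorted(uncovered))  # here: ['T-EAVESDROP']
```

An uncovered threat signals that the requirements are incomplete: either a new objective must be added, a new environmental assumption must be stated, or the threat must be explicitly accepted.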

Architectural Considerations

An early architectural decision is to determine the primary focus of control of security enforcement mechanisms. Computer security centers on access to information, but the primary focus of control of security protection and enforcement mechanisms may be on user identity or on operations. In operating systems, for example, the focus of control is on the data. Access decisions are based on predefined permissions to data for processes acting on behalf of users. User-based mechanisms include mandatory access control mechanisms, discretionary access control mechanisms, and privileges assigned to users. In applications, the focus of control may be on the operations that a user is allowed to perform. A user may be restricted to certain operations, and those operations in turn control access to the data they need. Role-based access control mechanisms focus on operations.
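The two foci of control can be contrasted in a short sketch. Assuming illustrative names throughout, the first check is data-focused (permissions attached to an object, tested against the requesting user), while the second is operation-focused in the role-based style (users hold roles, roles permit operations):

```python
# Sketch contrasting the two foci of control; all names are illustrative.

# Data-focused (operating-system style): permissions are attached to the
# object and checked against the identity of the requesting user.
acl = {"payroll.db": {"alice": {"read"}, "bob": {"read", "write"}}}

def acl_allows(user, obj, access):
    return access in acl.get(obj, {}).get(user, set())

# Operation-focused (role-based, application style): users hold roles,
# roles permit operations, and each operation touches only the data it needs.
roles = {"alice": {"clerk"}, "bob": {"manager"}}
role_ops = {"clerk": {"view_paycheck"}, "manager": {"view_paycheck", "approve_raise"}}

def rbac_allows(user, operation):
    return any(operation in role_ops[r] for r in roles.get(user, set()))

assert acl_allows("bob", "payroll.db", "write")     # data-focused decision
assert not rbac_allows("alice", "approve_raise")    # operation-focused decision
```

In the operation-focused case, the policy never mentions the data directly; the operation mediates access to whatever data it requires.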

Another architectural decision is whether to centralize some security functions or to distribute them among systems or system components. There are trade-offs between a centralized security enforcement mechanism and a distributed mechanism. In distributed systems, a function may be spread across components or centralized in a single component. In a single-host system, a function may be distributed across modules or consolidated into a single module. An example in a distributed system is the collection of security audit information. The system could forward all auditing information to a central audit repository, or each component could do its own auditing. As another example, an operating system can use centralized or distributed mandatory access control checks. The mechanism may be centralized and called by other routines, or it may be distributed, with the checks duplicated within the operating system where needed.

Generally, it is easier to analyze and develop sound assurance evidence for centralized mechanisms. A mechanism that is in one place need only be analyzed once, and the remainder of the assurance steps simply argue that the routine is called appropriately. However, a centralized mechanism may be a bottleneck and may impact performance.
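The audit example above can be sketched in the centralized form. Assuming illustrative component names, every component forwards its records to a single repository, so the assurance argument about audit handling needs to analyze only one mechanism and then show that each component calls it:

```python
# Sketch of centralized audit collection: one mechanism, analyzed once,
# invoked by every component. Component names and events are illustrative.

class AuditRepository:
    """The single point of collection for audit records."""
    def __init__(self):
        self._log = []

    def record(self, component, event):
        self._log.append((component, event))

    def entries(self):
        return list(self._log)

repo = AuditRepository()

# Each component, instead of keeping its own log, calls the central mechanism.
repo.record("login-server", "user alice authenticated")
repo.record("file-server", "alice opened /etc/passwd")

assert len(repo.entries()) == 2
```

The distributed alternative would give each component its own log; the assurance analysis would then have to examine every component's audit handling separately, though the bottleneck at the single repository would disappear.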

Security Mechanisms and Layered Architecture

Computer architectures are layered, and security enforcement mechanisms may reside at any architectural layer. Systems designed and built using layers describe the functionality of each layer precisely.

When an application receives a request, it passes the request to the layer underneath the application. That layer processes the request and passes it to the next layer. This continues until the request reaches the layer that can fulfill the request. Successive layers simply follow the instructions they are given by the preceding layers. When the request is satisfied, the pertinent information is passed back up the layers to the user at the application layer.
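This descent and return through the layers can be sketched directly. The layer names and the request are illustrative; the point is that each layer does its part and hands the request to the layer below, and the result is passed back up the same path:

```python
# Sketch of a request descending through architectural layers and the
# result returning back up. Layer names and behavior are illustrative.

def hardware(req):
    # The layer that can finally fulfill the request.
    return f"blocks for {req}"

def operating_system(req):
    # Resolves the request, then passes it to the layer beneath.
    return hardware(req)

def application(req):
    # Receives the user's request and passes it downward.
    return operating_system(req)

result = application("read report.txt")
assert result == "blocks for read report.txt"
```

A security mechanism placed at any one of these layers sees the request only in the form that layer receives, which is why the choice of layer (discussed next) matters.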

An early architectural decision is selecting the correct layer for a mechanism. Designers must select the layer at which the mechanism will be the most efficient and the most effective. Security mechanisms for controlling user actions may be most effective at the application level, but security mechanisms for erasing data in freed disk blocks may be most effective at the operating system level.

Once a layer has been chosen for a security mechanism, one must consider how to protect the layers below that layer. For example, a secure operating system requires security mechanisms in the hardware layer as well as in the operating system itself. A secure application requires security mechanisms inside the application as well as at the services, operating system, and hardware layers.

The security mechanisms at the hardware layer may be a combination of physical security mechanisms that isolate the hardware in rooms requiring special access and administrative procedures that restrict access to them. Some computer manufacturers suggest that security mechanisms be built into the firmware and hardware [1000] as well as into the software.

It may not be possible to place a mechanism in the desired layer unless what is being developed includes all the pertinent architectural layers. For example, when developing an application, the builder may not be able to make changes in the operating system layer. Doing so would mean defining requirements for the operating system and acquiring an operating system that meets those requirements. If no such operating system exists, the mechanism must be placed at a less optimal layer, or the builders must consider a special-purpose operating system.

The security enforcement mechanisms of the application and the database management system can only control accesses to the underlying operating system that use the internal mechanisms of the application and the database management system. Application and database mechanisms cannot prevent a system user from accessing the operating system directly, bypassing the controls of the application or the database mechanism entirely. If a user can access application or database information by accessing the operating system directly, then the system is vulnerable. Regardless of the security mechanisms within the database management system, the operating system must also enforce security. For this reason, all evaluated and rated database management systems require the underlying operating system to provide specific security features and to be a rated and evaluated operating system.

Building Security In or Adding Security Later

Like performance, security is an integral part of a computer system. It should be integrated into the system from the beginning, rather than added on later.

Imagine trying to create a high-performance product out of one that has poor performance. If the poor performance is attributable to specific functions, those functions must be redesigned. However, the fundamental structure, design, and style of the system are probably at the heart of the performance problem. Fixing the underlying structure and system design is a much harder problem. It might be better to start over, redesigning the system to address performance as a primary goal. Creating a high-security system from one that previously did not address security is similar to creating a high-performance system. Products claiming security that are created from previous versions without security cannot achieve high trust because they lack the fundamental and structural concepts required for high assurance.

A basic concept in the design and development of secure computer systems is the concept of a reference monitor and its implementation—the reference validation mechanism.

  • Definition 19–3. [26] A reference monitor is an access control concept of an abstract machine that mediates all accesses to objects by subjects.

  • Definition 19–4. [26] A reference validation mechanism (RVM) is an implementation of the reference monitor concept. An RVM must be tamperproof, must always be invoked (and can never be bypassed), and must be small enough to be subject to analysis and testing, the completeness of which can be assured.

Any secure or trusted system must obviously meet the first two requirements. The “analysis and testing” of the reference monitor provides evidence of assurance. The third requirement engenders trust by providing assurance that the operational system meets its requirements.
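The "always invoked" property can be illustrated with a small sketch: a single mediation routine through which every access to an object must pass. The policy representation and names are assumptions made for the example, not part of any particular RVM:

```python
# Sketch of the "always invoked" property of a reference validation
# mechanism: one small routine mediates every access by a subject to an
# object. The policy and names are illustrative.

policy = {("alice", "report", "read")}  # allowed (subject, object, access) triples

def mediate(subject, obj, access):
    """The one routine that mediates every access; small enough to analyze."""
    if (subject, obj, access) not in policy:
        raise PermissionError(f"{subject} may not {access} {obj}")

def read_object(subject, obj):
    mediate(subject, obj, "read")   # no code path reaches the object without this call
    return f"contents of {obj}"

assert read_object("alice", "report") == "contents of report"
try:
    read_object("bob", "report")    # denied: not in the policy
except PermissionError:
    pass
```

Keeping `mediate` small and ensuring that every access path calls it are exactly the analysis obligations the RVM definition imposes; tamperproofness is the further requirement that no subject can alter `policy` or `mediate` itself.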

  • Definition 19–5. [26] A security kernel is a combination of hardware and software that implements a reference monitor.

Security kernels were early examples of reference validation mechanisms. The idea of a security kernel was later generalized by the definition of a trusted computing base, which applies the reference validation mechanism rules to additional security enforcement mechanisms.

  • Definition 19–6. [285] A trusted computing base (TCB) consists of all protection mechanisms within a computer system—including hardware, firmware, and software—that are responsible for enforcing a security policy.

A TCB consists of one or more components that together enforce the security policy of a system. The ability of a TCB to enforce a security policy depends solely on the mechanisms within the TCB and on the correct input of parameters (such as a user's clearance) related to the security policy.

If a system is designed and implemented so as to be “small enough to be subject to analysis and testing, the completeness of which can be assured,” it will be more amenable to assurance than a system that is not so designed and implemented. Design analysis is possible using a variety of formal and informal methods. More thorough testing is possible because what must be tested is clear from the structured, analyzed design. More and deeper assurance leads to a higher level of trust in the resulting system. However, trade-offs may occur between features and simplicity. Inclusion of many features often leads to complexity, which limits the ability to analyze the system, which in turn lowers the potential level of assurance.

Systems in which security mechanisms are added to a previous product are not as amenable to extensive analysis as those that are specifically built for security. Often the functions are spread throughout the system in such a way that a thorough design analysis must analyze the entire system. Rigorous analysis of large and complex designs is difficult. So, it may not be feasible to determine how well the design implements the requirements. Assurance may be limited to test results. Testing of conformance to a flawed design is similar to designing a system to meet inappropriate requirements. The gap in abstraction between security requirements and implementation code may prohibit complete requirements testing. Hence, systems with security mechanisms added after development has been completed are inherently less trustworthy.

Building a system with security as a significant goal may provide the best opportunity to create a truly secure system. In the future, this may be the norm. However, many products today, including many high-assurance products, are developed by rearchitecting existing products and reusing parts as much as possible while addressing fundamental structure as well as adding new security features.

Policy Definition and Requirements Specification

Recall from Section 18.1.2 that we can consider a security policy to be a set of specific statements or security requirements.

  • Definition 19–7. A specification is a description of characteristics of a computer system or program. A security specification specifies desired security properties.

Good specifications are as important as the properties of the systems or programs that they describe. Specifications can be written at many different levels of abstraction. For example, some specifications may describe the security requirements, whereas other specifications may describe an architectural view of the system. More detailed specifications may describe individual components. Even more detailed specifications may describe individual functions. As this example implies, there may be multiple levels of specifications at different layers of abstraction.

Specifications must be clear, unambiguous, and complete. This is difficult when using informal methods that rely on natural language because natural languages do not have precise syntax or semantics.

Precision in stating requirements can be difficult to achieve.

There are several different methods of defining policies or requirement specifications. One technique is to extract applicable requirements from existing security standards, such as the Common Criteria. These specifications tend to be semiformal because of the structure of the requirements and the mappings among them. Another method is to create a new policy by combining the results of a threat analysis with components of existing policies.

A third technique is to map the system to an existing model. If the model is appropriate for the goals of the system, creating a mapping between the model and the system may be simpler and cheaper than constructing a requirements specification by other methods. If the mapping is accurate, the proofs of the original model establish the correctness of the resulting policy.

The expression of the specification can be formal or informal in nature. Section 20.2 contains an example of a formal specification of the Bell-LaPadula Model in the specification language SPECIAL.

Justifying Requirements

Once the policy has been defined and specified, it must be shown to be complete and consistent. This section examines part of a security policy developed in accordance with the ITSEC [210] guidelines. It also provides a partial informal demonstration that the resulting security policy meets the threats defined for the system.

The ITSEC (see Section 21.3) is a harmonization of security evaluation criteria of several European countries. ITSEC introduced the concept of a security target (ST) that defines the security threats to the system and the functional requirements of the system under evaluation. An ITSEC suitability analysis justifies that the security functional requirements are sufficient to meet the threats to the system.

The suitability analysis maps threats to requirements and assumptions in tabular form. For each threat, a prose justification explains how the referenced requirements and assumptions address it.
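A fragment of such an analysis can be represented as data and printed in tabular form. The threat and reference identifiers below are invented for illustration, not taken from any real security target:

```python
# Sketch of a suitability-analysis table in the ITSEC style: each threat
# is mapped to the objectives/assumptions that address it, together with
# a prose justification. All identifiers are illustrative.

suitability = [
    ("T1: outsider masquerades as a valid user",
     ["O1 (identification and authentication)"],
     "O1 requires I&A before any resource access, so a masquerader "
     "without valid credentials is denied."),
    ("T2: physical tampering with the host",
     ["A1 (machine room access is controlled)"],
     "A1 places the host behind physical controls, so tampering "
     "requires defeating those controls first."),
]

for threat, references, justification in suitability:
    print(threat)
    print("  addressed by:", ", ".join(references))
    print(" ", justification)
    print()
```

The table structure makes the completeness argument easy to audit: a threat row with no references is an immediate gap.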

Assurance During System and Software Design

Design assurance is often neglected. Design flaws typically go undetected until testing produces numerous implementation flaws that cannot be fixed easily. Had the design been analyzed, the security flaws could have been corrected at that level, and the implementation flaws would then have been easier to fix. Hence, identifying and correcting security flaws at the design level not only enhances the trustworthiness of the system but also supports both implementation and operational assurance.

Design assurance is the process of establishing that the design of the system is sufficient to enforce the security requirements for the system. Design assurance techniques employ a specification of the requirements, a specification of the system design, and processes for examining how well the design (as specified) meets the requirements (as specified).

Design Techniques That Support Assurance

Modularity and layering are techniques of system design and implementation that can simplify a system, making it more amenable to security analysis. If a complex system has well-defined, independent modules, it may be amenable to a security analysis. Similarly, layering simplifies the design, supports a better understanding of the system, and therefore leads to more assurance. Layering can also support data hiding. Global variables, by contrast, span all layers and modules and may therefore make sensitive information available to functions that do not need it. This type of unnecessary interaction between layers or between modules should be eliminated; doing so reduces the risk that errors in one layer or module will contaminate another.
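Data hiding can be sketched briefly. In the illustrative example below, the sensitive value is confined to the module that needs it, behind an interface, rather than living in a global variable visible to every layer:

```python
# Sketch of data hiding: the sensitive value is private to the module that
# needs it, and other modules use only the interface. Names are illustrative.

class Authenticator:
    def __init__(self, secret_hash):
        self._secret_hash = secret_hash   # confined to this module

    def check(self, candidate_hash):
        return candidate_hash == self._secret_hash

auth = Authenticator(secret_hash="s3cr3t-hash")

# Other modules call the interface; they never see the stored hash itself.
assert auth.check("s3cr3t-hash")
assert not auth.check("wrong-hash")
```

Had the hash been a global, every function in every layer could read (or corrupt) it; confining it means an error elsewhere in the system cannot contaminate the authentication data.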

The reference validation mechanism suggests that functions not related to security be removed from modules supporting security functionality. This makes those modules smaller and thus easier to analyze. These design concepts must be carefully described in design documentation and in the implementations derived from them.

Large systems can be broken down into layers, making it easier to develop specifications at different levels of abstraction. The following terminology describes the different levels of a system.

  • Definition 19–8. A subsystem or component is a special-purpose division of a larger entity.

The subsystems or components of an operating system may include the memory management system or file systems, whereas a subsystem or component of a Web store may be the collection of credit-card processing activities. A component consists of data structures and subcomponents or modules. A system that does not have subsystems in the traditional sense may be subdivided by other means, such as layers or servers.

It may be easier to describe a large component if it is broken into smaller parts, each having a specific functionality or purpose.

  • Definition 19–9. A subcomponent is a part of a component.

For example, in an operating system, an I/O component may be broken down into I/O management and I/O drivers. It may be useful to break a subcomponent into still smaller parts, such as a subcomponent for each I/O driver. The lowest level of decomposition is made up of modules.

  • Definition 19–10. A module is a set of related functions and pertinent data structures.

A set of modules may be a subcomponent or component. The functions that may make up a module include commands, system calls, library routines, and other supporting routines. Functions have inputs, outputs, exception conditions, error conditions, and effects on data or other functions. Function descriptions may include internal logic and algorithms or just address interfaces.

Another design consideration is adherence to the principles of secure design (see Chapter 13). For example, consider the principle of least privilege (see Section 13.2.1). The modular structure of a design can support the use of this principle. Each level of the design should address privilege. At the time of implementation, it may be tempting to give more privilege than is required, because it is simpler, because the privilege may be needed again shortly, or for other reasons. This temptation should be resisted. Implementers should understand how to write programs and configure systems so that the assignment of privilege is tightly controlled and privileges are revoked when no longer needed.
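The grant-and-revoke discipline can be sketched as a helper that holds a privilege for exactly one operation and releases it afterward, even if the operation fails. The privilege set and privilege name here are hypothetical stand-ins, not real operating-system calls:

```python
# Sketch of tightly controlled privilege: acquire only for the operation
# that needs it, revoke when done, even on error. The privilege set and
# privilege name are hypothetical placeholders, not real OS interfaces.

from contextlib import contextmanager

held = set()  # stand-in for the process's current privilege set

@contextmanager
def with_privilege(priv):
    held.add(priv)          # acquire only when needed ...
    try:
        yield
    finally:
        held.discard(priv)  # ... and revoke when the operation completes

with with_privilege("CAP_NET_BIND"):
    assert "CAP_NET_BIND" in held   # privileged work happens here

assert "CAP_NET_BIND" not in held   # the privilege does not linger
```

The `try`/`finally` structure is the point: revocation happens on every exit path, so an error in the privileged operation cannot leave the extra privilege in place.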

Design Document Contents

Most life cycle models require design documentation, although the documentation requirements are not always sufficient for developing design assurance. A more rigorous specification may be necessary to establish that the system design is sufficient to enforce the security requirements. Design specifications can be informal, semiformal, or formal in style. Specifications that are more formal can be subjected to more rigorous security analysis and justification, providing a higher level of assurance. A significant benefit of writing specifications is the ability to correct a design as one defines it in writing. The more precise the descriptions, the more likely one can find and correct flaws.

For security analysis, documentation must specify three types of information.

  1. Security functions. High-level descriptions of the functions that enforce security on the system, such as identification and authentication, access controls, and auditing, provide an overview of the protection approach of the system.

  2. External interfaces. The interfaces visible to the users are the mechanisms through which users access system resources and information. The system security enforcement functions control these actions, and security enforcement depends on the constraints and effects that determine their behavior.

  3. Internal design. High-level design descriptions of the system address the architecture of the entity being described in terms of the next layer of decomposition. For example, system high-level designs describe the system architecture in terms of its major subsystems. The low-level or detail design is a description of the internal function of a module. The low-level description identifies and describes all the interfaces and data structures of the module.

The next three subsections expand on each of these types of information.

Security Functions Summary Specification

This is the highest level of specification of security enforcement and is significant to the development of all subsequent specifications and to the security analysis on which they depend.

  • Definition 19–11. A security functions summary specification identifies the high-level security functions that are defined for the system.

These functions are the protection mechanisms defined to meet the security functional requirements in a requirement specification. The content of the security functions summary specification should include the following information.

  1. Description of individual security functions. This description should be complete enough to show the intent of the function. The activities of each function relate to one or more security requirements and may specify behavior that is not explicitly a part of the security requirements.

  2. Overview of the set of security functions. This overview should describe how the security functions work together to satisfy security requirements.

  3. Mapping to requirements. This section should specify a mapping between the security functions and the security requirements. It is often presented as a table.

External Functional Specification

A description of the expected behavior of each external interface should include parameters, syntax, effects, security constraints, and security error conditions. Each of the security functions mentioned above may have several user-visible interfaces, which are of particular importance for a specification of a secure or trusted product or system.

  • Definition 19–12. An external functional specification, also called a functional specification, is a high-level description of external interfaces to a system, component, subcomponent, or module.

The interface descriptions provide details about parameters, effects, exceptions, and error conditions. An external functional specification can be written for an entire system, a component, a subcomponent, or even a module. The technical content of this specification should include the following information.

  1. Component overview. This overview identifies the component, its parent component, and how the component fits into the design structure of the parent component. It also identifies the substructures (such as modules) to be specified in this document as well as in related documents.

  2. Data descriptions. These descriptions identify and define data types and data structures that are necessary to support the external interface descriptions specific to this component. They provide references to definitions of data types and structures that are defined outside the scope of this component but that are used in this component. Finally, they identify security issues or protection requirements relevant to data structures.

  3. Interface descriptions. Interfaces include commands, system calls, library calls, functions, and application program interfaces. They may be visible to the user or may be application program interfaces to the particular component being specified. Interfaces visible to the user are of special interest from a security perspective, and each such interface should be explicitly identified as visible to the user. An interface description should follow a standard syntax and should identify parameters, exception conditions, effects, and error messages. The exception conditions and effects are especially important.

Internal Design Description

  • Definition 19–13. An internal design description describes the internal structures and functions of the components of the system.

The description of the internal structures and functions of the system consists of a set of one or more documents. The complexity of the system and its decomposition into components and subcomponents determine the decomposition of the high-level design documentation.

High-level design documents focus on subsystems or components and address their structures, functions, and the ways in which they are used. The architecture of the system, in terms of its major subsystems, is the most abstract layer of the high-level design. The high-level design documents of each major subsystem provide specific information about the subsystem design in terms of the subcomponents, regardless of the layer of the design decomposition.

The high-level design documents for each layer, from the system architecture through all intermediate layers, provide the same fundamental information and should follow the same structure. The technical content of a high-level design document includes the following information.

  1. Overview of the parent component. Only the highest level of the design or system architecture lacks a parent component. If there is a parent component, the high-level design identifies its high-level purpose and function. The description identifies all subsystems of the parent component, describes their purpose and function, and includes how they interact with each other to transfer data and control. A description of the security relevance of the parent component completes this overview.

  2. Detailed description of the component. This document expands on the purpose and functionality of the component. It includes a description of the features and functions that the component provides. The component structure is described in terms of the subcomponents, providing an overview of how the subcomponents support the component in accomplishing its purpose and functionality. Any underlying hardware, firmware, and software that the component or its subcomponents need are also identified. The document describes the data model for the component in terms of the variables and data structures that are global to it and describes the data flow model in terms of how subcomponents of the component interact and communicate to transfer information and control. The document identifies all interfaces to the component and explicitly notes which are externally visible. A more complete description includes a description of the interfaces in terms of effects, exceptions, and error messages, as appropriate.

  3. Security relevance of the component. This section identifies the relevance of the component and its subcomponents to the system security in terms of the security issues that the component and its subcomponents should address. It includes specific information on the protection needs of global variables, data structures, and other information under the control of the component. Other issues include correctness of particular routines and management of security attributes. The mechanisms supporting security in the underlying hardware, firmware, and software mechanisms must also be identified and described.

Low-level design documents focus on the internal design of modules, describing relevant data structures, interfaces, and logic flow. These documents include detailed descriptions of interface functions such as application program interface routines, system calls, library calls, and commands. This specification focuses on how a function is to be implemented and may include specific algorithms and pseudocode.

A low-level design description of a module should contain sufficient information for a developer to write the implementation code for the module. The design description includes the following information.

  1. Overview of the module being specified. This overview describes the purpose of the module and its interrelations with other modules—especially dependencies on other modules. The description of the module structure is in terms of interfaces and internal routines as well as global data structures and variables of the module and provides details of the logic and data flow throughout the module.

  2. Security relevance of the module. This section identifies how the module is relevant to system security in terms of the security issues that the module and its interfaces should address. It provides specific information on the protection needs of global variables, data structures, or other information under the control of the module.

  3. Individual module interfaces. This section identifies all interfaces to the module, explicitly naming those that are externally visible. It describes the purpose and method of use of each routine, function, command, system call, and other interface, in terms of effects, exceptions, and error messages. The documentation must provide details of the flow of control and of the algorithms used.

Internal Design Specification

The internal design specification is slightly more complex than either the security functions summary specification or the external functional specification. Previous sections discussed the content of the individual documents but not how to allocate the various designs to specific documents so that the resulting set is useful, readable, and complete. Developers may use an internal design specification document, which covers parts of both the low-level and high-level design documents. The internal design specification is most useful when specifying the lowest layer of decomposition of a system and the modules that make up that layer. However, the internal design specification is not always used to describe the higher levels of design, leaving those levels incomplete for security analysis or for developers who are new to the system.

The following two examples present an outline of an internal design specification and an approach for dividing the internal design documentation for the I/O system of Windows 2000, following the component decomposition described in the example in Section 19.2.1.

Building Documentation and Specifications

Considerations other than the kind of specification required to support design assurance affect the development of documentation. Time, cost, and efficiency issues may impact how a development organization creates a complete set of documents. For example, a time constraint may compel an organization to write informal rather than formal specifications. Other shortcuts can result in effective documentation if done carefully.

Modification Specifications

When a system or product is built from previous versions or components, the specification set may consist of specifications of previous versions or parent products, together with modification specifications that describe the required changes. Time and cost constraints may compel developers to write specifications that are restricted to changes in the existing parts. These modification specifications describe the changes in existing modules, functions, or components; the addition of new modules, functions, or components; and possibly the methods for deleting discarded modules, functions, or components.

The use of modification specifications is most effective in developing new (or maintenance) releases of existing products, where security requirements and specifications are well defined for the older releases. Creating modification specifications gives the developer the advantage of understanding the specifications, design, and implementation of the system on which the new release is built. However, it can create problems for security analysis. The security analysis must rest on the specification of the resulting product, not just the changes. If there are full specifications of the previous versions of the parts, it may be possible to do an informal analysis based on the two sets of specifications. Modifications of modifications make the analysis even more complex.

Problems arise when the modification specifications are the only specifications of the system. Because there are no specifications for the parts not being modified, security analysis must be based on incomplete specifications.

Security Specifications

When external and internal design specifications are adequate in every way except for security issues, a supplemental specification may be created to describe the missing functionality. One approach is to develop a document that starts with the security functions summary specification. It is expanded to address the security issues of components, subcomponents, modules, and functions. Depending on the size and organization of the existing documentation, the information can be organized in the same way as the existing documentation. It can also be organized by security function.

Formal Specifications

Any of the four specification types discussed above (requirements specifications, security functions summary specifications, functional specifications, and design specifications) can be informal, semiformal, or formal. Informal methods use natural language for specifications. Semiformal methods also use natural language for specifications but apply a specific overall method that imposes some rigor on the process. Formal methods use mathematics and machine-parsable languages. Formal specifications are written in formal languages based on well-defined syntax and sound semantics. The languages themselves are usually supported by parsers and other tools that help the author check the resulting specification for consistency and proper form. The semantics of the language may help catch some oversights in the specification, but in general the author determines the completeness and correctness of the specification. Some high-level formal languages are appropriate for requirements specifications or functional specifications. Other languages are more like programming languages and can easily describe algorithms and logic flow. Chapter 20, “Formal Methods,” describes a variety of formal languages and discusses their use.

Justifying That Design Meets Requirements

The nature of the specification limits the techniques that can validate the specified design. Informal specifications and semiformal specifications cannot be analyzed using formal methods because of the imprecision of the specification language. However, it is possible to do some informal security analysis. Analysis of an informal specification can justify the correct implementation of requirements or the consistency between two levels of specification. The most common informal techniques are requirements tracing, informal correspondence, and informal arguments. Review is an excellent technique for checking the results of any of these informal techniques. Other methods, producing higher assurance, are formal in nature, such as formal specifications and precise mathematical proofs of correctness. Chapter 20, “Formal Methods,” discusses these methods.

Requirements Tracing and Informal Correspondence

Two techniques help prevent requirements and functionality from being discarded, forgotten, or ignored at lower levels of design. They also highlight functionality that may creep into the design but does not meet specific requirements.

  • Definition 19–14. Requirements tracing is the process of identifying specific security requirements that are met by parts of a specification.

  • Definition 19–15. Informal correspondence (also called representation correspondence) is the process of showing that a specification is consistent with an adjacent level of specification.

Together, these two methods can provide confidence that the specifications constitute a complete and consistent implementation of the security requirements defined for the system.
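The two mappings can be sketched as a simple completeness check. The requirement IDs and specification elements below are hypothetical; a real trace would cover every element at each level of specification.

```python
# A minimal sketch of requirements tracing, using hypothetical
# requirement IDs and functional-specification elements.

# Security requirements defined for the system.
requirements = {"R1: DAC on files", "R2: audit of access decisions"}

# Trace: each specification element lists the requirements it
# helps satisfy (an empty set means it meets no requirement).
trace = {
    "open_file": {"R1: DAC on files", "R2: audit of access decisions"},
    "write_log": {"R2: audit of access decisions"},
    "draw_menu": set(),   # functionality with no security requirement
}

# Every requirement should be met by some part of the specification.
untraced = requirements - set().union(*trace.values())

# Elements meeting no requirement may indicate functionality that
# crept into the design without a corresponding requirement.
unmapped = [elem for elem, reqs in trace.items() if not reqs]

print("requirements with no trace:", untraced)
print("elements meeting no requirement:", unmapped)
```

In a full trace, the same check is repeated between each adjacent pair of specifications, which is the informal correspondence step.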

A typical set of design documentation for a system contains security requirements, external functional specifications, and internal design specifications, presented by one of the methods described in Section 19.2.2. The final level of decomposition of this design is the implementation code. Figure 19-2 shows the requirements tracing steps and the informal correspondence steps in such a design decomposition.


Figure 19-2. Requirements mapping and informal correspondence. Arrows 1, 2, and 3 indicate requirements tracing for each of the three levels of specification. Arrows 4 and 5 represent informal correspondence between adjacent levels of specification.

Identifying how a very high-level and abstract requirement applies to a very specific and concrete function in an external functional specification is not always straightforward. The difference in level of abstraction may obscure the relationship. Having an intermediate level between the very abstract and the very concrete often makes the process simpler. A security functions summary specification provides such an intermediate step between the requirements and the external functional specification. High-level design documentation can bridge the gap between functional specifications and low-level design specifications.

Requirements tracing and informal correspondence are most appropriate when all levels of specification or representation of the system have identified requirements and all adjacent pairs of specifications have been shown to be consistent. In addition to the security functions summary specification, the external functional specification, and the high- and low-level design specifications, the implementation (source) code is the final and lowest level. The adjacent pairs of specifications are as follows.

  • Security functions summary specification and functional specification

  • Functional specification and high-level design specification

  • High-level design specification and low-level design specification

  • Low-level design specification and implementation code

If requirements have been traced to the nth level of specification, developing an informal correspondence between level n and level n + 1 provides a straightforward path to identifying the specific requirements that apply to the descriptions at specification level n + 1.

The requirements trace and the informal correspondence information may be included in the design specifications described above by adding sections from the security functions summary specifications, functional specifications, high-level design documentation, and low-level design documentation. These sections describe the informal correspondence to the next-higher level of specification and identify the security requirements met by the lowest-level entities of the specification. Requirements tracing and correspondence mapping can also be written in a separate document, with high-level overviews or references to the relevant parts of the specifications themselves.

Informal Arguments

Requirements tracing identifies the components, modules, and functions that meet requirements, but this technique does not fully address how well the requirements are met. That assessment requires analysis beyond simple mappings. A technique called informal arguments uses an approach similar to that of mathematical proofs, but without their rigor.

Common Criteria protection profiles and security targets (see Section 21.8.8.4) provide examples of informal arguments. Protection profiles define threats to the system and security objectives for the system. The rationale section of the protection profile presents an argument justifying that the objectives are adequate to prevent the threats. A security target identifies the mechanisms used to implement the security requirements and justifies that the mechanisms are sufficient to meet the objectives. This technique helps the writer analyze the completeness and correctness of security objectives (in protection profiles) and of security mechanisms (in security targets).

Formal Methods: Proof Techniques

Producing a formal specification is expensive. Thus, the specifiers usually intend to process the specification using an automated tool such as a proof-based technology or a model checker. Requirements tracing for a formal specification will check that the specification satisfies the requirements. Creating informal justifications before applying formal methods provides intuition about the proofs. Chapter 20, “Formal Methods,” discusses formal proof technologies and model checking.

Formal proof mechanisms are general-purpose techniques. They are usually based on logic such as the predicate calculus. They are generally highly interactive and are sometimes called “proof checkers” to indicate that the user constructs the proof and the tool merely verifies the steps in the proof. Proof technologies are designed to allow one to show that a specification satisfies certain properties (such as security properties). An automated theorem prover processes the properties and the specification. There may be many intermediate steps, such as proving of supporting lemmata and splitting of cases. Some proof technologies use a separate tool to generate formulas that can be given to the prover. The formula generator takes the specification of the system and a specification of properties as input. The generator develops formulas claiming that the specification parts meet the properties.

Model checking, on the other hand, checks that a model satisfies a specification. A model checker is an automated tool that embodies a specific model (such as a security model) and processes a specification to determine whether the specification meets the constraints of the model. This type of checking is designed for systems, such as operating systems, that do not terminate. Model checkers are usually based on temporal logic. Chapter 20, “Formal Methods,” discusses them in detail.

Review

A mechanism for gaining consensus on the appropriateness of assurance evidence is especially important when the assurance technique used for the evidence is informal in nature. A formal review process can meet this need. Every meaningful review process has three critical parts: review guidelines, conflict resolution methods, and completion procedures.

The reviewers receive (or determine) guidelines on how to review the entity. These guidelines vary from general directions to specific instructions. For example, a review guideline might instruct a reviewer to focus on the correctness of a particular section of a document. It might request that the reviewer ensure that relevant requirements are described for each interface in an external functional specification.

Reviewers will have different strengths, opinions, and expertise. The review process must have a method for resolving any conflicts among the reviewers and authors.

Finally, the review must terminate, ensuring the completion of the entity being reviewed. This may include techniques for tracking and organizing feedback, ensuring the correct implementation of feedback, final approval procedures, and the like.

Assurance in Implementation and Integration

The most well-known technique for showing that an implementation meets its security requirements is testing. Section 19.3.3 discusses security testing methodologies, but other techniques also increase assurance in implementation and integration.

Implementation Considerations That Support Assurance

A system should be modular, with well-defined modules having a minimal number of well-defined interfaces. Whenever possible, functionality not relevant to security should be removed from modules that enforce security functionality.

The choice of the programming language for the implementation can affect the assurance of the implementation. Some languages strongly support security by providing built-in features that help to avoid commonly exploited flaws. Programs written in these languages are often more reliable. For example, the C programming language can produce programs with limited reliability, because C does not constrain pointers adequately and has only rudimentary error handling mechanisms. Implementations of C usually allow a program to write past the bounds of the program's memory and buffers. The extra data goes into the next contiguous piece of memory, overwriting what was already there. The C language does not provide checks to prevent this overwriting, leaving the responsibility for preventing this type of buffer overflow to the C programmer.

Languages that provide features supporting security will detect many implementation errors. Languages having features such as strong typing, built-in buffer overflow protections (such as array bounds handling), data hiding, modularity, domains and domain access protections, garbage collection, and error handling support the development of more secure, trustworthy, and reliable programs. For example, the programming language Java was designed to support the development of secure code as a primary goal. Other languages provide some support for security. Perl, a general-purpose programming language, provides a “taint mode,” which monitors input and warns when a program uses the information inappropriately.
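The bounds checking and strong typing described above can be illustrated with a short sketch; here Python stands in for a language with such built-in protections.

```python
# Sketch: built-in bounds checking and strong typing catch at run
# time what C would allow silently.

buf = [0] * 8           # fixed-size buffer of 8 slots

def store(i, value):
    """Write into buf; an out-of-range index raises an exception
    instead of overwriting adjacent memory as C would."""
    buf[i] = value

store(7, 42)            # in bounds: fine
try:
    store(8, 99)        # one past the end of the buffer
except IndexError as e:
    print("bounds violation caught:", e)

try:
    "length: " + 5      # strong typing: no silent coercion
except TypeError as e:
    print("type error caught:", e)
```

In C, the second `store` call would overwrite whatever lies beyond the buffer, with no error reported.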

Sometimes it is not feasible to use a high-level language because of efficiency constraints or the need to exploit system features that the high-level language cannot access. In such cases, coding standards can compensate for some of the security enforcement limitations. Although not as reliable as built-in features, coding standards help programmers avoid many errors. Another technique is to restrict the use of lower-level languages to specific situations in which high-level languages are inadequate.

Assurance Through Implementation Management

Teams of programmers often develop systems designed in modules. Each programmer develops modules independently of the others. Well-defined module interfaces are critical, especially when the work of the different programmers is integrated into a single system. Supporting tools and processes are important for small and large systems, whether developed by one programmer or a large team of programmers.

  • Definition 19–16. Configuration management is the control of changes made in the system's hardware, software, firmware, documentation, testing, test fixtures, and test documentation throughout the development and operational life of the system.

Configuration management tools and processes provide discipline and control of the refinement and modification of configuration items such as the source code and other information such as documentation. The configuration management system consists of several tools or manual processes and should perform the following functions.

  1. Version control and tracking. Most development organizations use a source code control system that stores code modules and subsequent versions of them. Other configuration items, such as documents or document sections, require similar version control and tracking, whether using the same or a different tool. These tools allow an individual to make a copy of a particular version of a configuration item under control of the system and to return a new version later.

  2. Change authorization. Version control and tracking tools do not always control who can make a change in a document. Typically, these tools allow anyone to have a copy of a version and to place the new version in the database. Hence, there must be a mechanism that allows only authorized individuals to “check in” versions. Consider the case in which two programmers each need to make changes in a module. They both request a copy of the module, make their changes, and return the changed module to the database. Without any change authorization controls, both versions will be kept, but the version from the first programmer will not include the changes made by the second programmer, and vice versa. Hence, some changes will be lost. Some tools require that a specific individual or gatekeeper check versions in. Other systems restrict check-in to the first person to check out the configuration item; other users may check out review copies but cannot check them back in. When the authorized first user checks in the new version, others can then check that item out and merge their changes.

  3. Integration procedures. Integration procedures define the steps that must be taken to select the appropriate versions of configuration items to generate the system. This ensures that the system generation tools process properly authorized versions.

  4. Tools for product generation. Product generation creates the current system from the properly authorized versions provided by the integration procedures. It may include various steps of compiling source code and linking binaries to create the full executable system.
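The lost-update scenario and the “first to check out” policy described under change authorization can be sketched as follows; the class and user names are hypothetical.

```python
# Sketch of a check-out lock: only the first user to check out a
# configuration item may check a new version back in.

class ConfigItem:
    def __init__(self, content):
        self.versions = [content]   # version history
        self.locked_by = None       # who holds the check-out lock

    def check_out(self, user):
        if self.locked_by is None:
            self.locked_by = user   # first to check out gets the lock
        return self.versions[-1]    # anyone may take a review copy

    def check_in(self, user, content):
        if user != self.locked_by:
            raise PermissionError(f"{user} does not hold the lock")
        self.versions.append(content)
        self.locked_by = None       # release for the next change

item = ConfigItem("v1 of module")
item.check_out("alice")             # alice acquires the lock
item.check_out("bob")               # bob gets a review copy only
try:
    item.check_in("bob", "bob's change")    # rejected: lost update avoided
except PermissionError as e:
    print(e)
item.check_in("alice", "alice's change")    # accepted
```

Bob must now check out the new version and merge his changes, so neither programmer's work is silently discarded.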

The development of coding standards is another implementation management tool that supports assurance. Coding standards support improved software development practices. They may require or recommend naming conventions, style considerations, and commenting guidelines. Although useful, these standards provide limited support for the development of good code that produces secure and trusted systems. No programming language solves all the security problems, and coding standards may address some issues not covered by the language itself. Other coding guidelines constrain the use of the language in ways that help prevent common security flaws. Still other guidelines may be specific to the handling of permissions or the processing of secret or sensitive information, or may address the specification of error handling or security exceptions.
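A coding standard that constrains the use of the language can be enforced mechanically. The sketch below scans source text for calls a standard might prohibit; the banned list and the advice strings are illustrative, not drawn from any particular standard.

```python
import re

# Sketch of a coding-standard check: flag calls to functions that a
# hypothetical standard bans in favor of safer alternatives.
BANNED = {"strcpy": "use a bounded copy instead",
          "gets":   "use a bounded read instead"}

def check_source(text):
    """Return (line number, function, advice) for each banned call."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for func, advice in BANNED.items():
            if re.search(rf"\b{func}\s*\(", line):
                findings.append((lineno, func, advice))
    return findings

sample = "strcpy(dst, src);\nn = compute();\ngets(buf);\n"
for lineno, func, advice in check_source(sample):
    print(f"line {lineno}: {func} banned; {advice}")
```

Such a check is far weaker than language-enforced safety, but it catches common violations before code review.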

Justifying That the Implementation Meets the Design

Code reviews, requirements tracing, informal correspondence, security testing, and formal proof techniques can be used to enhance assurance about the implementation. Code walk-throughs, or code reviews, take place at system implementation. Section 19.2.4.4 describes the review process. That description applies to code reviews. The review guidelines, however, will be specific to software development techniques rather than to documentation.

Requirements tracing and informal correspondence apply to the code. Comments in the code typically show the results of a requirement trace and a correspondence between the code and the lowest level of design documentation.

Security Testing

There are two types of testing techniques.

  • Definition 19–17. Functional testing, sometimes called black box testing, is testing of an entity to determine how well it meets its specification.

  • Definition 19–18. Structural testing, sometimes called white box testing, is testing based on an analysis of the code in order to develop test cases.

  • Testing occurs at different times during the engineering process.

  • Definition 19–19. Unit testing consists of testing by the developer on a code module before integration. Unit testing is usually structural.

  • Definition 19–20. System testing is functional testing performed by the integration team on the integrated modules of the system. It may include structural testing in some cases.

  • Definition 19–21. Third-party testing, sometimes called independent testing, is testing performed by a group outside the development organization, often an outside company.

  • Definition 19–22. Security testing is testing that addresses the product security.

  • Security testing consists of three components.

    1. Security functional testing is functional testing specific to the security issues described in the relevant specification.

    2. Security structural testing is structural testing specific to security implementation found in the relevant code.

    3. Security requirements testing is security functional testing specific to the security requirements found in the requirements specification. It may overlap significantly with security functional testing.

In general, security functional testing and security requirements testing are parts of unit testing and system testing. Third-party testing may include security functional testing or just security requirements testing. Security structural testing can be part of a unit test or a system test.
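The contrast between functional and structural testing can be made concrete with a small sketch. The access-check routine and its tests below are hypothetical.

```python
# Sketch contrasting functional and structural tests for a
# hypothetical access-check routine.

def may_write(owner, caller, mode):
    """Allow a write if the caller owns the object or the
    world-write bit (0o002) is set in mode."""
    if caller == owner:
        return True
    return bool(mode & 0o002)

# Functional (black box) tests: derived from the specification
# alone, without looking at the code.
assert may_write("alice", "alice", 0o600)        # owner writes
assert not may_write("alice", "bob", 0o600)      # others denied
assert may_write("alice", "bob", 0o602)          # world-writable

# Structural (white box) tests: chosen by inspecting the code so
# that each branch (owner test, mode test) is exercised.
assert may_write("alice", "alice", 0)            # first branch true
assert not may_write("alice", "bob", 0o640)      # both branches false
print("all tests passed")
```

Security testing applies both styles, but concentrates them on the least used paths of the security mechanisms.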

Security functional testing differs from ordinary functional testing in its focus, coverage, and depth. Normal testing focuses on the most commonly used functions. Security testing focuses on functions that invoke security mechanisms, particularly on the least used aspects of such mechanisms. The least used parts often contain the exploitable flaws. Security functional testing focuses on pathological cases, boundary value issues, and the like.

Test coverage describes how completely the entity has been tested against its functional specification. Security testing requires broader coverage than normal testing. Security testing covers system security functions more consistently than ordinary testing. A completed test coverage analysis provides a rigorous argument that all external interfaces have been completely tested. An interim test coverage analysis indicates additional test requirements.
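An interim test coverage analysis can be sketched as a simple computation over the external interfaces; the interface names here are hypothetical.

```python
# Sketch of an interim test coverage analysis: what fraction of the
# external interfaces has at least one test?
interfaces = ["open", "read", "write", "close", "chmod"]
tested = {"open", "read", "write"}

untested = [i for i in interfaces if i not in tested]
coverage = len(tested & set(interfaces)) / len(interfaces)

print(f"coverage: {coverage:.0%}")
print("additional tests required for:", untested)
```

A completed analysis would show the untested list empty, supporting the argument that all external interfaces have been tested.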

Finally, security testing against high-level and low-level specifications shows how well the testing covers the specifications of the subsystem, module, and routine. A completed test depth analysis provides a rigorous argument that testing at all levels is sufficient. An interim test depth analysis indicates additional test requirements that must be met.

During a unit test, the programmer should perform extensive security and requirements tests. A unit test should focus on the least used aspects, pathological cases, or boundary value issues. Most structural testing occurs during unit testing.

Most development organizations perform system testing on their systems. For the most part, security system testing takes place at the external interface level. In this context, an interface is a point at which processing crosses the security perimeter. Users access the system services through external interfaces. Therefore, violations of policy occur through external interfaces. Occasionally, noninterface tests are required. Typically, there are two parallel efforts, one by the programming team and the other by the test team. Figure 19-3 illustrates this.


Figure 19-3. Relationship between code development and testing. At the left are the development tasks, and at the right are the testing tasks. Reverse arrows indicate iterative steps.

Security test suites are very large. Automated test suites are essential, as are configuration management and documentation. The testers must also develop and document test plans, test specifications, test procedures, and test results.

Writing test plans, specifications, and procedures gives the author the ability to examine and correct approaches as the writing proceeds. This provides assurance about the test methodology. This documentation increases the assurance of the testing because it enables analysis of the test suite for completeness and correctness.

The reports of the results of security testing are the tangible evidence of the test effort. These reports identify which tests the entity has passed. Ideally, it will pass all tests. In practice, the entity will fail some tests, so unusual results must be examined. In particular, automated test suites can introduce some problems; the entity may fail a test when the test is part of an automated test suite but pass the test when it is run independently of the test suite. Also, the tester may demonstrate the desired result by means other than execution of the particular test.

Security Testing Using PGWG

PGWG, the PAT (Process Action Team) Guidance Working Group, presents a systematic approach to system and requirements test development using successive decomposition of the system and requirements tracing. This methodology works well in a system decomposed into successively smaller components, such as systems, components, modules, and functions, as described in previous sections of this chapter. Requirements are mapped to successively lower levels of design using test matrices. At the lowest level of decomposition, usually the individual function and interface level, test assertions claim that the interfaces meet the specifics of each requirement for those interfaces. These test assertions are used to develop test cases, which may be individual tests or families of tests. This strategy is accompanied by a documentation approach that fits nicely with traditional test planning and documentation.

Test Matrices

The PGWG methodology defines two levels of test matrices (high and low). Rows reflect the decomposition of the entity to be tested. If design decomposition is previously defined in the design documentation, identifying row headers is a simple task. The rows of the high-level matrix are the entity subsystems or major components. The columns in the high-level matrix reflect security areas to be considered. Selection of security areas should be a simple task, because security requirements should already be well-defined. The security areas focus on functional requirements (as opposed to assurance requirements or documentation requirements). Examples of security functional areas may be discretionary access controls, nondiscretionary access controls, audit, integrity controls, and the like. The cells of the high-level test matrix provide pointers to relevant documentation and to lower-level test matrices.

In a large and complex system with a multiple-layer design decomposition, it may be useful to create intermediate levels of test matrices to address components or even modules of components in individual matrices. If intermediate levels are used, there is one lower-level matrix for each row of the higher-level matrix until the lowest level is reached. At the intermediate levels, it may be useful to refine the security areas that define the columns. For example, discretionary access could be decomposed into protection–bit–based access controls and access–control–list–based controls. The cells of intermediate levels have contents similar to those of a high-level matrix.

At the lowest level, matrix rows are the interfaces to the subsystem or component. The columns could be represented as security areas, subdivisions of security areas, or even individual requirements. The size and complexity of the system are the determining factors. The cells of the lowest level are the heart of the decomposition methodology. They contain test assertions (or pointers to test assertion sets). Each assertion applies to a single interface and a single requirement, and all assertions relative to each cell in the low-level test matrix must be identified. Once the assertions have been developed, it is a simple matter to fill in the cells in the higher-level matrices.

When the low-level matrices are completed, any empty cells must be justified to ensure that coverage is complete. The cells should refer to a rationale that justifies why a particular requirement class does not apply to a particular interface.
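A low-level test matrix and its empty-cell check can be sketched as follows; the interfaces, security areas, and assertion labels are hypothetical.

```python
# Sketch of a low-level PGWG test matrix: rows are interfaces,
# columns are security areas; each cell holds test assertions or a
# rationale justifying why the cell is empty.
interfaces = ["create_file", "read_file"]
areas = ["DAC", "audit"]

matrix = {
    ("create_file", "DAC"):   {"assertions": ["A1", "A2"]},
    ("create_file", "audit"): {"assertions": ["A3"]},
    ("read_file",   "DAC"):   {"assertions": ["A4"]},
    ("read_file",   "audit"): {"rationale": "reads are not audited "
                                            "in this hypothetical policy"},
}

# Coverage check: every cell must contain assertions or a rationale.
for row in interfaces:
    for col in areas:
        cell = matrix.get((row, col))
        assert cell and ("assertions" in cell or "rationale" in cell), \
            f"unjustified empty cell: ({row}, {col})"
print("matrix coverage complete")
```

A cell that is neither tested nor justified fails the check, flagging a gap in coverage before testing begins.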

Test Assertions

Test assertions are created by reviewing design documentation and identifying conditions that are security-relevant, testable, and analyzable. If the documentation contains requirements tracing, creating test assertions is greatly simplified and developing assertions provides an excellent review of the existing requirement trace. Assertions are at a very fine level of granularity, and each assertion should generate one or more individual tests that illustrate that the assertion is met. In rare cases, an assertion will not be testable. It should then be verified by other means, such as analysis.

PGWG presents three methods for stating assertions. The first technique is to develop brief statements describing behavior that the tester must verify, such as “Verify that the calling process needs DAC write access permission to the parent directory of the file being created. Verify that if access is denied, the return error code is 2.” The second technique is very similar to the first, but the form of the statement is different, making claims that the tester must prove or disprove with tests. For example, an assertion might be “The calling process needs DAC write access permission to the parent directory of the file being created, and if access is denied, it returns error code 2.” The third method states assertions as claims that are embedded within a structured specification format.
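The first assertion style can be rendered as an executable test. The create_file routine and its permission model below are invented for illustration; the error code 2 is taken from the example assertions above.

```python
# A PGWG test assertion rendered as an executable test against a
# hypothetical create_file routine.
def create_file(caller, parent_dir):
    if caller not in parent_dir["writers"]:   # DAC write check
        return 2                              # access denied
    parent_dir["entries"].append("newfile")
    return 0                                  # success

parent = {"writers": {"alice"}, "entries": []}

# Verify that a caller with DAC write access to the parent
# directory can create the file.
assert create_file("alice", parent) == 0
# Verify that if access is denied, the return error code is 2.
assert create_file("bob", parent) == 2
print("assertion verified")
```

Each low-level matrix cell would point to one or more such tests, one per assertion.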

Test Specifications

One or more test cases are defined to verify the truth of each assertion for each interface. The test cases are documented in test specifications. PGWG suggests the use of high-level test specifications (HLTS) to describe the test cases for each interface, and low-level test specifications (LLTS) to provide specific information about each test case, such as setup conditions, cleanup conditions, and other environmental conditions.
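The relationship between the two specification levels can be modeled as simple records, with an HLTS holding its LLTS cases. The field names here are illustrative, not taken from PGWG.

```python
from dataclasses import dataclass, field

@dataclass
class LowLevelTestSpec:
    """LLTS: per-test-case detail — setup, steps, expected result, cleanup."""
    name: str
    setup: list
    steps: list
    expected: str
    cleanup: list = field(default_factory=list)

@dataclass
class HighLevelTestSpec:
    """HLTS: describes the test cases for one interface and assertion."""
    interface: str
    assertion: str
    cases: list  # list of LowLevelTestSpec

hlts = HighLevelTestSpec(
    interface="creat",
    assertion="caller needs DAC write access to the parent directory",
    cases=[
        LowLevelTestSpec(
            name="creat_denied_without_write",
            setup=["create directory /tmp/t with no write access for caller"],
            steps=["call creat('/tmp/t/f')"],
            expected="call fails with an access-denied error",
            cleanup=["remove /tmp/t"],
        ),
    ],
)
print(hlts.interface, len(hlts.cases))
```

Structuring the specifications this way keeps the traceability chain intact: an LLTS traces to its HLTS, the HLTS to an assertion, and the assertion to a cell in the test matrix.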

Formal Methods: Proving That Programs Are Correct

Just as there are formal methods for specification and for proving that a design specification is consistent with its security requirements, there are techniques for proving properties about programs. Used during the coding process, these techniques help avoid bugs. They work best on small parts of a program that perform a well-defined task, and they can be applied to some programs that enforce security functionality. Chapter 20, “Formal Methods,” covers these techniques.
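To give the flavor of such proofs without the machinery of Chapter 20, the sketch below annotates a small, well-defined function with its precondition, loop invariant, and postcondition, and checks them with runtime assertions. A real proof discharges these obligations statically, for all inputs, rather than sampling them at run time.

```python
# Hoare-style annotations on a small function. The assert statements
# check the proof obligations dynamically; a formal proof would
# establish them once, for every input satisfying the precondition.
def isqrt(n: int) -> int:
    """Return the integer square root of n.

    Precondition:  n >= 0
    Postcondition: r*r <= n < (r+1)*(r+1)
    """
    assert n >= 0                           # precondition
    r = 0
    while (r + 1) * (r + 1) <= n:
        assert r * r <= n                   # loop invariant
        r += 1
    assert r * r <= n < (r + 1) * (r + 1)   # postcondition
    return r

assert isqrt(0) == 0 and isqrt(15) == 3 and isqrt(16) == 4
print("postcondition holds on sampled inputs")
```

The discipline matters more than the example: stating the postcondition forces a precise claim about what the code must do, which is exactly the kind of claim assurance evidence is built from.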

Assurance During Operation and Maintenance

While a system is in operation, bugs will occur, requiring maintenance on the system. A hot fix addresses bugs that can immediately affect the security or operation of the system, and it is sent out as quickly as possible. A regular fix addresses less serious bugs or provides long-term solutions to bugs already addressed by hot fixes. Regular fixes are usually collected until some condition is met; then the vendor issues a maintenance release containing those fixes. If the system involved is not sold to others but instead is used internally, the problems addressed by hot fixes are usually not addressed again by regular fixes.

As part of the maintenance of a system, well-defined procedures track reported flaws. The information about each flaw should include a description of the flaw, remedial actions taken or planned, the flaw's priority and severity, and the progress in fixing the code, documentation, and other aspects of the flaw.
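A flaw-tracking record capturing the fields named above might look like the following sketch; the field names and the mapping of severities onto the hot-fix/regular-fix distinction are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from enum import Enum

class Severity(Enum):
    HOT_FIX = "hot fix"      # immediately affects security or operation
    REGULAR = "regular fix"  # less serious; deferred to a maintenance release

@dataclass
class FlawReport:
    """One tracked flaw, with the information named in the text."""
    flaw_id: str
    description: str
    severity: Severity
    priority: int
    remedial_actions: list = field(default_factory=list)
    # Progress per affected aspect, e.g. code, documentation, tests.
    progress: dict = field(default_factory=dict)

bug = FlawReport(
    flaw_id="F-1023",
    description="race condition in password-change audit logging",
    severity=Severity.HOT_FIX,
    priority=1,
    remedial_actions=["patch issued", "workaround documented"],
    progress={"code": "fixed", "documentation": "in review"},
)
print(bug.severity.value)  # → hot fix
```

Keeping the progress field per aspect (code, documentation, and so on) reflects the requirement that a flaw is not closed until every affected artifact, not just the code, has been updated.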

The action taken for a maintenance release or bug fix should follow the same security procedures used during the original development. Any new design should follow the same modularity considerations, design principles, documentation, and justifications as the first release. Furthermore, the vendor must apply to the bug fix or maintenance release all security considerations and assurance measures that were used in the implementation, integration, and security testing of the original product. The vendor must update the assurance evidence appropriately. For fixes, the vendor must rerun the pertinent parts of the security test suite. For maintenance releases, the vendor must rerun the security tests for the system.

Summary

Security assurance is an integral part of the life cycle of product or system development. Assurance measures are taken at every step of the process, from requirements development through design and development to testing and release, and must be supported during product or system operation.

The process begins with analyses of the goals of the system and the threats against which the system must be protected. These analyses guide the development of the architecture of the system and its security policy and mechanisms. As part of this process, the requirements for each of these elements are stated and justified.

System and software design must also include assurance. There are specific design goals that lead to the desired level of assurance. Documentation of decisions, designs, and the process through which these decisions and designs were developed provides information on which beliefs of assurance can be based. The design documents and software specification documents include both external and internal interfaces and functions and justify that the design meets the requirements. Formal (or informal) methods of implementation and testing provide assurance at the implementation level.

Research Issues

Research issues abound. One important issue is creating systems and products from commercial off-the-shelf (COTS) components and providing as high a level of assurance as possible that the resulting system meets its requirements. The problem lies in the difficulty of assessing composition and assessing the COTS components, few of which are constructed using high-assurance techniques.

Adding appropriate assurance measures at appropriate times in the software engineering life cycle is another critical issue. The process by which the system or product is developed affects the degree of assurance. Use of a methodology such as the SSE-CMM (see Section 21.9) imparts a certain level of assurance.

Requirements analysis is often overlooked in the security arena, and yet it forms the basis for the definition of security by guiding the development of a security policy. Expressing requirements unambiguously but in a way that is easily understood and analyzing requirements for feasibility in a particular environment and for consistency are difficult problems.

Testing of systems and products for security is another area of active research. Property-based testing abstracts security as a set of properties and then tests conformance to those properties. Other types of testing, notably software fault injection, assess the assurance of existing systems and products.

Further Reading

Yen and Paul [1063] present a short survey of six areas in which high assurance is critical. Assurance is also critical in safety-related software [135, 570, 777, 1017].

An early methodology for assertion-based testing and requirements correspondence in security is discussed by Bullough, Loomis, and Weiss [156].

Several papers consider the problem of providing assurance when components are assembled into a system [168, 440, 647]. Programming languages provide a foundation for assurance, and the design and integration of assurance into both languages and supporting subsystems such as libraries is critical [410].

Technologies that aid the processes described in this chapter include requirements analysis and checking [441, 651, 675, 1070], architectural description languages [1004], and documentation [106, 728]. Several authors have described methodologies and experiences [384, 571, 676, 1050].

Arbo, Johnson, and Sharp [36] present a network interface that allows System V/MLS to be used in a network MLS environment. Kang, Moore, and Moskowitz analyze the design of the NRL pump for assurance [545]. Smith [938] discusses the cost-benefit impacts of using formal methods for software assurance.

Property-based testing [349, 350, 394, 580] tests process conformance to a stated security policy. Software fault injection [390, 1019, 1021] tests how systems react to failures of components. Both methods can be adapted to testing for other, nonsecurity problems as well.

Exercises

1:

Distinguish between a policy requirement and a mechanism. Identify at least three specific security requirements for a system you know and describe at least two different mechanisms for implementing each one.

2:

Justify that the four security properties in System X (see the example that begins on page 506) are consistent with the Bell-LaPadula properties. Use the System X statements in this chapter. Identify any information you may need to complete the justification that you do not find in this material.

3:

In System Y (see the example on page 509), assumption A3 restricts the access to authentication data to administrators. Should this assumption have been used in the justification of threat T1? Why or why not? If yes, create the appropriate statements to add to the justification given above.

4:

Pick a life cycle development model not discussed in Chapter 18 and describe how useful it is for development of secure and trusted products.

5:

This exercise deals with the external specifications discussed in Section 19.2.4.1.

  1. Write the external functional specification for the login function in the example that begins on page 525.

  2. Write the external functional specification for the change_password function in the example that begins on page 525.



[1] The exact security requirements would be listed here. The requirements given are examples.

[2] This is not recommended.
