Chapter 18. Introduction to Assurance

 

BOTTOM: Not a whit: I have a device to make all well. Write me a prologue; and let the prologue seem to say, we will do no harm with our swords, and that Pyramus is not killed indeed; and, for the more better assurance, tell them that I, Pyramus, am not Pyramus, but Bottom the weaver: this will put them out of fear.

 
 --A Midsummer Night's Dream, III, i, 17–23.

This chapter presents an overview of the concepts of security assurance and trusted systems. Assurance for secure and trusted systems must be an integral part of the development process. The following chapters will elaborate on the concepts and ideas introduced here.

Assurance and Trust

In previous chapters we have used the terms trusted system and secure system without defining them precisely. When looked on as an absolute, creating a secure system is an ultimate, albeit unachievable, goal. As soon as we have figured out how to address one type of attack on a system, other types of attacks occur. In reality, we cannot yet build systems that are guaranteed to be secure or to remain secure over time. However, vendors frequently use the term “secure” in product names and product literature to refer to products and systems that have “some” security included in their design and implementation. The amount of security provided can vary from a few mechanisms to specific, well-defined security requirements and well-implemented security mechanisms to meet those requirements. However, providing security requirements and functionality may not be sufficient to engender trust in the system.

Intuitively, trust is a belief or desire that a computer entity will do what it should to protect resources and be safe from attack. However, in the realm of computer security, trust has a very specific meaning. We will define trust in terms of a related concept.

  • Definition 18–1. An entity is trustworthy if there is sufficient credible evidence leading one to believe that the system will meet a set of given requirements. Trust is a measure of trustworthiness, relying on the evidence provided.

These definitions emphasize that calling something “trusted” or “trustworthy” does not make it so. Trust and trustworthiness in computer systems must be backed by concrete evidence that the system meets its requirements, and any literature using these terms needs to be read with this qualification in mind. To determine trustworthiness, we focus on methodologies and metrics that allow us to measure the degree of confidence that we can place in the entity under consideration. A different term captures this notion.

  • Definition 18–2. Security assurance, or simply assurance, is confidence that an entity meets its security requirements, based on specific evidence provided by the application of assurance techniques.

Examples of assurance techniques include the use of a development methodology, formal methods for design analysis, and testing. Evidence specific to a particular technique may be simplistic or may be complex and fine-grained. For example, evidence that measures a development methodology may be a brief description of the methodology to be followed. Alternatively, development processes may be measured against standards under a technique such as the System Security Engineering Capability Maturity Model (SSE-CMM; see Section 21.9).

Assurance techniques can be categorized as informal, semiformal, or formal. Informal methods use natural languages for specifications and justifications of claims. Informal methods impose a minimum of rigor on the processes used. Semiformal methods also use natural languages for specifications and justifications but apply a specific overall method that imposes some rigor on the process. Often these methods mimic formal methods. Formal methods use mathematics and other machine-parsable languages with tools and rigorous techniques such as formal mathematical proofs.
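
As an illustration, consider the same hypothetical access rule stated at each level of rigor. In the Python sketch below, the informal and semiformal statements appear as comments and the formal statement is a machine-checkable property, verified here by exhaustive enumeration over a deliberately tiny model; the rule and all names are invented for this example.

    from itertools import product

    # Informal (natural language, minimal rigor):
    #   "A subject may read an object only if the subject's clearance is
    #    at least the object's classification."

    # Semiformal (structured natural language following a fixed template):
    #   FOR ALL subjects s, objects o:
    #     IF read(s, o) is granted THEN clearance(s) >= classification(o)

    # Formal (machine-checkable; verified here on a finite model):
    LEVELS = range(3)  # e.g., 0 = public, 1 = sensitive, 2 = secret

    def read_granted(clearance: int, classification: int) -> bool:
        """The access rule under analysis (hypothetical)."""
        return clearance >= classification

    # Check the formal property over every state of the small model.
    assert all(
        not read_granted(c, k) or c >= k
        for c, k in product(LEVELS, LEVELS)
    ), "formal property violated"
    print("formal property holds on the finite model")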

Security assurance is acquired by applying a variety of assurance techniques that provide justification and evidence that the mechanism, as implemented and operated, meets the security requirements described in the security policy for the mechanism (or collection of mechanisms). Figure 18-1 illustrates this process.

Figure 18-1. Assurance, policy, and mechanisms.

A related term, information assurance, refers to the ability to access information and preserve the quality and security of that information [761]. It differs from security assurance, because the focus is on the threats to information and the mechanisms used to protect information and not on the correctness, consistency, or completeness of the requirements and implementation of those mechanisms. However, we use the word “assurance” to mean “security assurance” unless explicitly stated otherwise.

We are now in a position to define a trusted system.

  • Definition 18–3. A trusted system is a system that has been shown to meet well-defined requirements under an evaluation by a credible body of experts who are certified to assign trust ratings to evaluated products and systems.

Specific methodologies aggregate evidence of assurance, and results are interpreted to assign levels of trustworthiness. The Trusted Computer System Evaluation Criteria [285] and the Information Technology Security Evaluation Criteria [210] are two standards that have been replaced by the Common Criteria [750, 751, 752]. These methodologies provide increasing “levels of trust,” each level having more stringent assurance requirements than the previous one. When experts evaluate and review the evidence of assurance, they provide a check that the evidence amassed by the vendor is credible to disinterested parties and that the evidence supports the claims of the security requirements. Certification by these experts signifies that they accept the evidence.

The Need for Assurance

Applying assurance techniques is time-consuming and expensive. Operating systems, critical applications, and computer systems are often marketed as “secure,” whereas in reality they have serious flaws that undermine their security features, or they are used in environments other than those for which their security features were developed. The marketing creates a false sense of well-being, which in turn encourages the users, system administrators, and organizations to act as though their systems were protected. So they fail to develop the defenses needed to protect critical information.

Accidental or unintentional failures of computer systems, as well as intentional compromises of security mechanisms, can lead to security failures. Neumann [772] describes nine types of problem sources in computer systems.

  1. Requirements definitions, omissions, and mistakes

  2. System design flaws

  3. Hardware implementation flaws, such as wiring and chip flaws

  4. Software implementation errors, program bugs, and compiler bugs

  5. System use and operation errors and inadvertent mistakes

  6. Willful system misuse

  7. Hardware, communication, or other equipment malfunction

  8. Environmental problems, natural causes, and acts of God

  9. Evolution, maintenance, faulty upgrades, and decommissions

Assurance addresses each of these problem sources (except for natural causes and acts of God). Design assurance techniques applied to requirements address items 1, 2, and 6. A specification of requirements must be rigorously analyzed, reviewed, and verified to address completeness, consistency, and correctness. If the security requirements are faulty, the definition of security for that system is faulty, so the system cannot be “secure.” Proper identification of threats and appropriate selection of countermeasures reduce the ability to misuse the system. Design assurance techniques can detect security design flaws, allowing their correction prior to costly development and deployment of flawed systems.

Implementation assurance deals with hardware and software implementation errors (items 3, 4, and 7), errors in maintenance and upgrades (item 9), willful misuse (item 6), and environmentally induced problems (item 8). Thorough security testing, as well as detailed and significant vulnerability assessment, finds flaws that can be corrected prior to deployment of the system.

Operational assurance can address system use and operational errors (item 5) as well as some willful misuse issues (item 6).

Neumann's list is not exclusive to security problems. It also addresses risks to safety, reliability, and privacy.

Sometimes safety and security measures can backfire. Assurance techniques highlight the consequences of these errors.

Even failures with less serious consequences can be costly. When a flaw was found in the floating-point division unit of the Intel Pentium processor, Intel's public reputation was damaged, and replacing the chips cost Intel time and money. As a result, Intel began using high-assurance methods to verify the correctness of requirements in its chip designs [819].

The Role of Requirements in Assurance

Although security policies define security for a particular system, the policies themselves are created to meet needs. These needs are the requirements.

  • Definition 18–4. A requirement is a statement of goals that must be satisfied.

A statement of goals can vary from generic, high-level goals to concrete, detailed design considerations. The term security objectives refers to the high-level security issues and business goals, and the term security requirements refers to the specific and concrete issues.

A brief review of definitions will prove helpful. Definition 4–1 states that a security policy is a statement that partitions the states of the system into a set of authorized or secure states and a set of unauthorized or nonsecure states. Equivalently, we can consider a security policy to be a set of specific statements that, when enforced, result in a secure system. The individual statements are the security requirements for the entity and describe what behavior must take place (or not take place) in order to define the authorized states of the system. Typically, requirements do not contain implementation details, which are the realm of the implementing mechanism (see Definition 4–7). On the other hand, a security model describes a family of policies, systems, or entities (see Definition 4–8) and is more abstract than a policy, which is specific to a particular entity or set of entities.
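
For readers who find code clearer than prose, the following minimal Python sketch models Definition 4–1: a policy is a predicate that partitions states into secure and nonsecure sets, and a requirement constrains the transitions. The states, the policy, and the transition function are all hypothetical.

    # A state here is the set of (subject, object) read rights in force.
    State = frozenset

    def secure(state: State) -> bool:
        """The policy predicate: ('guest', 'payroll') must never be granted."""
        return ("guest", "payroll") not in state

    def grant(state: State, subject: str, obj: str) -> State:
        """A transition that enforces the policy: grant a right only if
        the resulting state is still in the secure partition."""
        new_state = State(state | {(subject, obj)})
        return new_state if secure(new_state) else state

    s0 = State()                        # the empty initial state is secure
    s1 = grant(s0, "alice", "payroll")  # allowed; the new state is secure
    s2 = grant(s1, "guest", "payroll")  # refused; it would be nonsecure
    assert secure(s2) and ("guest", "payroll") not in s2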

Selecting the right security requirements for a computer entity requires an understanding of the intended use of that entity as well as of the environment in which it must function. One can then examine policy models to determine if any are appropriate. Part 3, “Policy,” describes several types of policies and models that have been used in the past. These models have been subjected to significant analysis and peer review, and most have had corrections during their life spans. This process of acceptance is like the acceptance of mathematical proofs over the centuries. Typically, mathematicians study a mathematical proof to find its flaws and weaknesses. Some proofs have survived this test of time, and others have not.

Assurance Throughout the Life Cycle

The goal of assurance is to show that an implemented and operational system meets its security requirements throughout its life cycle. Because of the difference in the levels of abstraction between high-level security requirements and low-level implementation details, the demonstration is usually done in stages. Different assurance techniques apply to different stages of system development. For this reason, it is convenient to classify assurance into policy assurance, design assurance, implementation assurance, and operational or administrative assurance.

  • Definition 18–5. Policy assurance is the evidence establishing that the set of security requirements in the policy is complete, consistent, and technically sound.

Policy assurance is based on a rigorous evaluation of the requirements. Completeness and consistency are demonstrated by identifying security threats and objectives and by showing that the requirements are sufficient to counter the threats or meet the objectives. If a security policy model is used, the justifications in the model can support the technical soundness of the requirements.
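
One completeness check lends itself to mechanization: verifying that every identified threat is countered by at least one requirement. The Python sketch below is purely illustrative; the threat and requirement identifiers are invented, and a real traceability analysis would be far richer.

    # Hypothetical threats identified for a system.
    threats = {"T1: eavesdropping", "T2: password guessing", "T3: tampering"}

    # Each requirement lists the threats it is claimed to counter.
    requirements = {
        "R1: encrypt all network traffic": {"T1: eavesdropping"},
        "R2: lock accounts after 5 failed logins": {"T2: password guessing"},
    }

    countered = set().union(*requirements.values())
    uncovered = threats - countered
    if uncovered:
        print("completeness gap; uncountered threats:", sorted(uncovered))
    else:
        print("every identified threat is countered by some requirement")

Running this sketch reports T3 as uncovered, the kind of gap that policy assurance is intended to surface before design begins.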

Once the proper requirements have been defined, justified, and approved for the system, the design and development process can begin with confidence. The developers create the system design to implement the security requirements and provide assurance evidence that the design meets the security requirements. The next step is to show that the system implements the design correctly. The design and development approach is illustrated in Figure 18-2. As that figure shows, following every design and implementation refinement step is an assurance justification step that shows that the requirements continue to be met at successive levels of development of the trusted system.

Figure 18-2. Development of a trusted system. There may be multiple levels of design and implementation. Note that the refinement steps alternate with the assurance steps.

This process is usually iterative, because assurance steps identify flaws that must be corrected. When this happens, the affected steps must be rechecked.

Assurance must continue throughout the life of the system. Because maintenance and patching usually affect the system design and implementation, the assurance requirements are similar to those described above.

  • Definition 18–6. Design assurance is the evidence establishing that a design is sufficient to meet the requirements of the security policy.

Design assurance includes the use of good security engineering practices to create an appropriate security design to implement the security requirements. It also includes an assessment of how well the system design meets the security requirements.

Design assessment techniques use a policy or model of the security requirements for the system as well as a description or specification of the system design. Claims are made about the correctness of the design with respect to security requirements. The design assurance techniques provide a justification or proof of such claims.

  • Definition 18–7. Implementation assurance is the evidence establishing that the implementation is consistent with the security requirements of the security policy.

In practice, implementation assurance shows that the implementation is consistent with the design, which design assurance showed was consistent with the security requirements found in the security policy. Implementation assurance includes the use of good security engineering practices to implement the design correctly, both during development and through the maintenance and repair cycles. It also includes an assessment of how well the system as implemented meets its security requirements through testing and proof of correctness techniques, as well as vulnerability assessment.
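
When the input space is small enough, the comparison of implementation with specification can even be exhaustive. The following Python sketch is illustrative only; the login rule and its specification are invented.

    from itertools import product

    def spec_can_login(active: bool, password_ok: bool, locked: bool) -> bool:
        """Security requirement: login succeeds only for an active,
        unlocked account presenting the correct password."""
        return active and password_ok and not locked

    def impl_can_login(active: bool, password_ok: bool, locked: bool) -> bool:
        """The implementation under analysis."""
        if locked:
            return False
        return active and password_ok

    # Exhaustive comparison, feasible because the input space is finite.
    for inputs in product([False, True], repeat=3):
        assert impl_can_login(*inputs) == spec_can_login(*inputs), inputs
    print("implementation matches specification on all inputs")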

Design assurance and implementation assurance verify that the security policy requirements are properly designed and built into the system. However, computer systems and applications must be delivered, installed, and operated as assumed during design and implementation. Typically, the vendor provides procedures and processes in the form of supporting automated tools and documentation. The customer is responsible for ensuring their correct use.

  • Definition 18–8. Operational or administrative assurance is the evidence establishing that the system sustains the security policy requirements during installation, configuration, and day-to-day operation.

One fundamental operational assurance technique is a thorough review of product or system documentation and procedures, to ensure that the system cannot accidentally be placed into a nonsecure state. This emphasizes the importance of proper and complete documentation for computer applications, systems, and other entities.
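
Part of such a review can be automated as a configuration audit. The sketch below is hypothetical: the file format, the setting names, and the required values are invented for illustration.

    import sys

    # Settings that must hold for the system to remain in a secure state.
    REQUIRED = {"telnet_enabled": "no", "remote_root_login": "no"}

    def audit(path: str) -> list[str]:
        """Return findings for settings that violate the required values."""
        settings = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#"):
                    key, _, value = line.partition("=")
                    settings[key.strip()] = value.strip()
        return [
            f"{key} should be {want!r}, found {settings.get(key)!r}"
            for key, want in REQUIRED.items()
            if settings.get(key) != want
        ]

    if __name__ == "__main__":
        path = sys.argv[1] if len(sys.argv) > 1 else "system.conf"
        for finding in audit(path):
            print("finding:", finding)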

Building Secure and Trusted Systems

Building secure and trusted systems depends on standard software engineering techniques augmented with specific technologies and methodologies. Hence, a review of the life cycles of systems will clarify much of what follows.

Life Cycle

The concept of a life cycle addresses security-relevant decisions that often are made outside the engineering disciplines in business situations. There is more to building a product or system than just the engineering steps. Security goals may impact both the life cycle and the engineering process used. Such processes establish both discipline and control and provide confidence in the consistency and quality of the resulting system. Assurance requires a life cycle model and engineering process in every situation, although the size and complexity of the project, the project team, and the organization guide selection of the appropriate model and process. In a small operation, where individuals play multiple roles, an informal structure of the life cycle process may work best. In a larger company with complex roles and interactions among several projects and project team members, a more rigorous and formal process might be more appropriate.

A life cycle starts when a system is considered for development and use. The life cycle ends when the system is no longer used. A life cycle includes a set of processes that define how to perform activities, as well as methods for managing activities. Examples of such activities are writing of marketing literature, sales training, and design and development of code. Management activities include planning, configuration management, and selection and use of standards. Both types of activities follow the system from its initial conception through the decision to create the system, the steps required to develop, sell, and deploy the system, the maintenance of the system, and the decommissioning and retirement of the system.

A typical life cycle process is defined in stages. Some stages depend on previous stages, whereas others do not. Each stage describes activities of all the involved disciplines and controls interdiscipline interactions. As work progresses, the project ideally transitions from one stage to the next. In practice, there is often some iteration of the stages—for example, when a more advanced stage uncovers flaws or omissions in the work of the previous stage.

Consider a very general life cycle “metamodel” to illustrate these concepts. This model captures the fundamental areas of system life for any type of project, although the focus is on software engineering projects. An actual, functioning life cycle process may be more detailed, but this metamodel addresses the needs of any business application. It incorporates the four stages of conception, manufacture, deployment, and fielded product life. Engineering processes tend to focus on manufacture and, to a lesser degree, on fielded product life, although engineering function responsibilities may exceed this typical view.

Conception

The conception stage starts with an idea. Ideas come from anywhere—for example, from customers, engineers, other disciplines, user groups, or others. The organization's decision makers may decide to

  • fund the idea and make it a project,

  • reject the idea, or

  • ask for further information or for a demonstration that the idea has merit.

How decisions are made varies. A decision may be rather spontaneous in a very small and self-contained organization, where communication is ubiquitous and informal. A larger company may have formalized processes for initiation of new projects requiring many layers of approval.

  • Definition 18–9. A proof of concept is a demonstration that an idea has merit.

The decision makers may ask for a proof of concept if they are unsure, or not convinced, that the idea is worth pursuing. Developing proofs of concept typically involves small projects. A request for a proof of concept may result in a rapid prototype, an analysis, or another type of proof. It need not involve the engineering staff, and it need not use steps in the engineering process.

The output of the conception stage must provide sufficient information for all disciplines to begin their tasks in the next stage. This information may be an overview of the project; high-level requirements that the project should meet; or schedule, budget, staffing, or planning information. The planning information could be a detailed project plan or more general high-level plans for each of the disciplines involved in the project. The exact nature of the information depends on the size and complexity of the project.

Security feasibility and high-level requirement analysis should begin during this stage of the life cycle. Before time and resources are invested in development or in proof of concept activities, the following questions should be considered.

  • What does “secure” mean for this concept?

  • Is it possible for this concept to meet this meaning of security?

  • Is the organization willing to support the additional resources required to make this concept meet this meaning of security?

Identification of threats comprises another important set of security issues. It is especially important to determine the threats that are visible at the conception stage. This allows those threats to be addressed in rapid prototypes and proofs of concept. It also helps develop realistic and meaningful requirements at later stages. It provides the basis for a detailed threat analysis that may be required in the manufacturing phase to refine requirements.

Development of assurance considerations is important at this stage. A decision to incorporate assurance, and to evaluate mechanisms and other evidence of assurance, will influence every subsequent step of development. Assurance decisions will affect schedules and time to market.

Manufacture

Once a project has been accepted, funded, approved, and staffed, the manufacturing stage begins. Each required discipline has a set of substages or steps determined in part by the size of, complexity of, and market for the system. For most disciplines, the manufacturing stage is the longest.

Manufacturing begins with the development of more detailed plans for each of the involved disciplines, which could include marketing plans, sales training plans, development plans, and test plans. These documents describe the specific tasks for this stage of the life cycle within each discipline. The actual work required by each discipline depends on the nature of the system. For example, a system designed for internal use would not have sales requirements, and marketing requirements might target internal groups who may use the completed entity. Alternatively, a product designed for commercial use could require massive marketing campaigns and significant effort on the part of the sales force.

The software development or engineering process lies in this stage. It includes procedures, tools, and techniques used to develop and maintain the system. Technical work may include design techniques, development standards and guidelines, and testing tools and methods. Management aspects may include planning, scheduling, review processes, documentation guidelines, metrics, and configuration management such as source code control mechanisms and documentation version controls.

The output of this stage from each discipline should be the materials necessary to determine whether to proceed. Marketing groups could complete marketing collateral such as white papers and data sheets. Sales groups could develop documented leads and sales channels, as well as training materials for the sales force. Engineering groups would develop a tested, debugged system that is ready for use. Documentation groups would complete manuals and guides. Service groups would add staffing to handle telephone calls, installation support, bug tracking, and the like. The focus of this book is on the engineering steps of this stage.

Deployment

Once the system has passed the acceptance criteria in the manufacturing stage, it is ready for deployment. This stage is the process of getting the system out to the customer. It is divided into two substages.

The first substage is the domain of production, distribution, and shipping. The role of the other disciplines (such as engineering and marketing) is to deliver masters to the production staff. That staff creates and packages the materials that are actually shipped. If there is no assurance that masters have been appropriately protected from modification, and that copies are replicas of the masters, then the painstaking assurance steps taken during manufacture may be for naught.

The distribution organization ships systems to customers and to other sales organizations. In the case of an internal system, this step may be small. Users of the system may require specific types of documentation. Security and assurance issues in this part of deployment include knowing that what was received is actually what was shipped.
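
A standard technique here is to publish a cryptographic digest of the master and verify the digest of each received copy. The Python sketch below assumes a hypothetical artifact name and uses a placeholder where the published digest would appear.

    import hashlib

    def sha256_of(path: str) -> str:
        """Compute the SHA-256 digest of a file, reading it in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    PUBLISHED_DIGEST = "<digest published for the master>"  # placeholder

    received = sha256_of("product-1.0.tar.gz")  # hypothetical file name
    if received == PUBLISHED_DIGEST:
        print("received copy matches the master digest")
    else:
        print("MISMATCH: this copy is not what was shipped")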

The second substage of deployment is proper installation and configuration of the system in its production setting. The developers must ensure that the system will work appropriately in this environment. The developers are also responsible for appropriate assurance measures for functionality, tools, and documentation. Service personnel must know appropriate security procedures as well as all other aspects of the system.

Fielded Product Life

The primary tasks of fielded product life are patching or fixing of bugs, maintenance, and customer service. Routine maintenance and emergency patching may be the responsibility of engineering in smaller organizations, or for systems in internal use only. Alternatively, maintenance and patching may be the responsibility of an organization entirely separate from the product development organization. Wherever this responsibility lies, an engineering process must track maintenance and patches, and a deployment process must distribute patches and new releases. Modifications and enhancements must meet the same level of assurance rigor as the original development.

Commercial systems often have separate customer service and support organizations and engineering organizations. The support organization tasks could include answering questions, recording bugs, and solving routine customer problems. The engineering organization handles maintenance and patching.

Product retirement, or the decision to take a product out of service, is a critical part of this stage of the life cycle. Vendors need to consider migration plans for customers, routine maintenance for retired products still in use, and other issues.

The Waterfall Life Cycle Model

We have discussed life cycles in terms of stages. The waterfall model captures this.

  • Definition 18–10. [851] The waterfall life cycle model is the model of building in stages, whereby one stage is completed before the next stage begins.

This model is not the only technique for building secure and trusted systems, but it is perhaps the most common. It consists of five stages, pictured in Figure 18-3. The solid arrows show the flow from each stage to the next.

Figure 18-3. The waterfall life cycle model. The solid arrows represent the flow of development in the model. The dashed arrows represent the paths along which information about errors may be sent.

Requirements Definition and Analysis

In this phase, the high-level requirements are expanded. Development of the overall architecture of the system may lead to more detailed requirements. It is likely that there will be some iteration between the requirements definition step and the architecture step before either can be completed.

Requirements may be functional requirements or nonfunctional requirements. Functional requirements describe interactions between the system and its environment. Nonfunctional requirements are constraints or restrictions on the system that limit design or implementation choices. Requirements describe what and not how. They should be implementation-independent.
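
As a hypothetical illustration, one requirement of each kind is paired below with a corresponding check; the requirement wording, the names, and the time limit are invented.

    import time

    # Functional requirement (an interaction with the environment):
    #   "The system shall reject authentication attempts with an empty
    #    password."
    def authenticate(user: str, password: str) -> bool:
        # Toy credential store; a real system would never hold plaintext.
        return bool(password) and (user, password) in {("alice", "s3cret")}

    assert authenticate("alice", "") is False

    # Nonfunctional requirement (a constraint on the system):
    #   "Authentication shall complete within 0.5 seconds."
    start = time.perf_counter()
    authenticate("alice", "s3cret")
    elapsed = time.perf_counter() - start
    assert elapsed < 0.5, f"nonfunctional constraint violated: {elapsed:.3f}s"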

Often, two sets of requirements are defined. A requirements definition of what the customer can expect the system to do is generally presented in natural language. A technical description of system characteristics, sometimes called a requirements specification, may be presented in a more precise form. The analysis of the requirements may include a feasibility study and may examine whether or not the requirements are correct, consistent, complete, realistic, verifiable, and traceable.

System and Software Design

System design includes the development of the overall system architecture by partitioning requirements into hardware and/or software systems. The nature of the overall architecture may place additional constraints or requirements on the system, thus creating the need for iteration between this step and the previous one. An architecture document may or may not be required. In projects that are revisions or new releases of previous products, the basic architecture may be already defined. The architecture and the requirements must be reconciled to be consistent—that is, the architecture must be able to support the requirements.

Software design further partitions the requirements into specific executable programs. Typically, at this stage, external functional specifications and internal design specifications are written. The external functional specifications describe the inputs, outputs, and constraints on functions that are external to the entity being specified, whereas the internal design specifications describe algorithms to be used, data structures, and required internal routines.

This stage is sometimes broken into the two phases system design, in which the system as a whole is designed, and program design, in which the programs of the system are individually designed.

Implementation and Unit Testing[1]

Implementation is the development of software programs based on the software design from the previous step. Typically, the work is divided into a set of programs or program units. Unit testing is the process of establishing that the unit as implemented meets its specifications. It is in this phase that many of the supporting processes described earlier come into play.
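
A minimal unit test might look like the following sketch, in which a hypothetical path-sanitizing unit is checked against its specification.

    import posixpath
    import unittest

    def sanitize(base: str, user_path: str) -> str:
        """Resolve user_path under base, refusing escapes via '..'."""
        joined = posixpath.normpath(posixpath.join(base, user_path))
        if not joined.startswith(base.rstrip("/") + "/"):
            raise ValueError("path escapes the base directory")
        return joined

    class SanitizeTest(unittest.TestCase):
        def test_normal_path(self):
            self.assertEqual(sanitize("/srv/data", "a/b.txt"),
                             "/srv/data/a/b.txt")

        def test_escape_is_refused(self):
            with self.assertRaises(ValueError):
                sanitize("/srv/data", "../etc/passwd")

    if __name__ == "__main__":
        unittest.main()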

Integration and System Testing

Integration is the process of combining all the unit-tested program units into a complete system. Automated tools and guidelines governing the integration process may be in place. System testing is the process of ensuring that the system as a whole meets the requirements. System testing is an iterative step because invariably bugs and errors are found that have to be corrected. Typically, the errors are sent back to the development team to be corrected. This requires iteration with the previous step. The corrected code is reintegrated into the system, and system testing is repeated.

Operation and Maintenance

Once the system is finished,[2] it is moved into production. This is called fielding the system. Maintenance involves correction of errors that have been reported from the field and that have not been corrected at earlier stages. This stage also involves routine maintenance and the release of new versions of the system. Finally, retirement of the system also falls under this phase.

Discussion

In reality, there is usually some iteration between the processes at each stage of the waterfall because a later process may uncover deficiencies in a previous stage, causing it to be revisited. For example, implementation errors in the fielded system may not become clear until the operation and maintenance stage. Correction of such a deficiency will "trickle down" through the waterfall of phases. For instance, if an error discovered in system testing is found to impact the software design, that change would feed into the system and software design phase, through implementation and unit testing to integration and system testing. An error found in the field may affect any stage from requirements to integration and system testing. Figure 18-3 shows the waterfall model, depicted by the solid arrows, and the potential error paths, represented by the dashed arrows.

Use of good system engineering practices provides discipline and process control during development and maintenance. Security analysis and development of assurance evidence on a regular basis, and as an integral part of the development and maintenance activities, increase confidence that the resulting system meets its security requirements. Use of a life cycle model and reliable supporting tools cannot ensure freedom from flaws or compliance with requirements. However, an appropriate process may help limit the number of flaws, especially those that can lead to security violations. Hence, building security into a product increases its trustworthiness. This demonstrates that the methods used to build a system are critical to the security of that system.

Other Models of Software Development

A few words on other life cycle models will illuminate the differences between those models and the waterfall model with respect to assurance [950].

Exploratory Programming

In exploratory programming approaches, a working system is developed quickly and then modified until it performs adequately. This approach is commonly used in artificial intelligence (AI) system development, in which users cannot formulate a detailed requirements specification and in which adequacy rather than correctness is the aim of the system designers. The key to using this approach successfully is to use techniques that allow for rapid system iterations. Using a very high-level programming language may facilitate rapid changes.

In this technique, there are no requirements or design specifications. Hence, assurance becomes difficult. A system subjected to continual modification suffers the same vulnerabilities that plague any add-on system. The focus on adequacy rather than correctness leaves the implementation potentially vulnerable to attack. Therefore, this model is not particularly useful for building secure and trusted systems because such systems need precise requirements and detailed verification that they meet those requirements as implemented.

Prototyping

Prototyping is similar to exploratory programming. The first phase of development involves rapid development of a working system. However, in this case, the objective of the rapid development is specifically to establish the system requirements. Then the software is reimplemented to create a production-quality system. The reimplementation can be done using another model that is more conducive to development of secure and trusted systems.

Formal Transformation

In the formal transformation model, developers create a formal specification of the software system. They transform this specification into a program using correctness-preserving transformations. The act of formal specification, if tied to well-formed security requirements, is beneficial to security and to design in general. The use of correctness-preserving transformations and automated methods can assist in developing a correct implementation. However, a system developed by such a method should be subjected to the same rigorous implementation testing and vulnerabilities analysis that are applied to any other methodology.
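
The flavor of the approach can be suggested in miniature: a clear specification-level program and a transformed, more efficient program whose claimed equivalence is then checked. The Python sketch below uses testing for that check and is illustrative only; genuine formal transformation discharges the equivalence obligation by machine-checked proof rather than by test.

    def sum_spec(n: int) -> int:
        """Specification: the sum 0 + 1 + ... + n, written directly."""
        return sum(range(n + 1))

    def sum_transformed(n: int) -> int:
        """Transformed program: the closed form derived from the
        specification by a correctness-preserving transformation."""
        return n * (n + 1) // 2

    # The transformation's correctness claim, checked over many inputs.
    assert all(sum_spec(n) == sum_transformed(n) for n in range(1000))
    print("transformed program agrees with the specification on 0..999")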

System Assembly from Reusable Components

This technique assumes that systems are made up mostly of components that already exist. The system development process becomes one of assembly rather than creation. Developing trusted systems out of trusted components is complex because of the need to reconcile the security models and requirements of each component, and developing trusted systems out of untrusted components is even more complex. However, this is a common approach to building secure and trusted systems.

Extreme Programming

Extreme programming is a development methodology based on rapid prototyping and best practices such as separate testing of components, frequent reviewing, frequent integration of components, and simple design. A project is driven by business decisions, not by project stakeholders, and requirements are open until the project is complete. The design evolves as needed to remove complexity and add flexibility. Programmers work in teams or pairs. Component testing procedures and mechanisms are developed before the components are developed. The components are integrated and tested several times a day. One objective of this model is to put a minimal system into production as quickly as possible and then enhance it as appropriate.

Use of this technique for security has several benefits and several drawbacks. The nature of an evolving design leaves the product vulnerable to the problems of an add-on product (see Section 19.1.2.2). Leaving requirements open does not ensure that security requirements will be properly implemented into the system. However, if threats were analyzed and appropriate security requirements developed before the system was designed, a secure or trusted system could result. However, evidence of trustworthiness would need to be adduced after the system was developed and implemented.

Summary

Assurance is the foundation for determining the trustworthiness of a computer system. Assurance techniques test the appropriateness of requirements and the effectiveness of specification, design, implementation, and maintenance. These techniques cannot guarantee system security or safety, but they can significantly increase the likelihood of finding security flaws during requirements definition, design, and implementation. Errors found early can be corrected early. A well-defined life cycle process provides rigorous, well-defined steps with checks and balances that contribute significantly to the quality of the software developed and also increases the credibility of the measures of assurance that are used.

Research Issues

Probably the most important area in assurance research is getting people to understand the importance and the value of assurance and trust. Assurance techniques are expensive and time-consuming, but they result in more reliable products. Moreover, assurance techniques support the identification of more clearly defined problems for products to solve and functions for them to perform. Most current systems are fragile—particularly systems used as infrastructure. Applying increasingly rigorous assurance techniques would strengthen these systems, not only in terms of security but also in terms of reliability and robustness. However, the level of assurance used with systems and products is driven by regulation and consumer demand as well as by the ability to hire people who know these techniques. Therefore, the problem of getting assurance techniques to be more widely used is in large part a problem of persuading consumers, developers, vendors, and regulators of their importance.

Part of the problem is cost; most assurance techniques are expensive. If assurance techniques were more effective, more efficient, less costly, and easier to use, would they be used more often? How can their cost be lowered? How can the use of these techniques, and the techniques themselves, be automated? In particular, formal methods require organizations not just to invest money but also to find qualified people who can use those methods effectively. Automating the less formal testing of software and systems, and providing better tools for evaluation methodologies such as those discussed in Chapter 21, “Evaluating Systems,” would help.

This leads to the issue of selecting appropriate assurance techniques. Some assurance technologies are appropriate in specific environments or for meeting specific goals. How does one determine which of the many techniques to use? Given specific environments and goals, how do the techniques compare?

One important area for research and standardization is the strength of security functionality. The effectiveness of a cryptographic algorithm has several measures (none of them perfect): the size of the key, the arrangement of elements in a substitution table, the size of the possible message space, and the strength of the cipher when used as a pseudorandom number generator. Other types of security functionality have more obscure, or more meaningless, measures. Of course, not all such functions lend themselves to computational measures, but there may be other methods that can be applied.
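
For example, the key-size measure supports a simple work-factor calculation, as in the Python sketch below; the attacker's guessing rate is an assumed figure.

    SECONDS_PER_YEAR = 60 * 60 * 24 * 365

    key_bits = 128
    keyspace = 2 ** key_bits
    guesses_per_second = 10 ** 12     # assumed attacker capability
    expected_guesses = keyspace // 2  # on average, half the space is searched

    years = expected_guesses / guesses_per_second / SECONDS_PER_YEAR
    print(f"expected brute-force time: {years:.3e} years")
    # About 5.4e18 years: by this measure, exhaustive search is infeasible,
    # though the measure says nothing about flaws in the algorithm or its use.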

Another important area is the investigation of new approaches to assurance. Assurance is generally measured by the performance of the resulting product or system rather than the process by which it was developed. Several methodologies, notably the SSE-CMM (see Section 21.9), deal with the process of development rather than its result. In practice, which approach produces systems with some level of assurance and with lowest cost? Would combining the two approaches improve the level of assurance, or would it make the development process more cumbersome with no added benefit?

Further Reading

Any serious student of assurance should read James Anderson's seminal paper [26]. This paper defines many key concepts on which assurance is based.

Assurance techniques have been developed for a variety of environments, including outer space [775, 846], systems that control trains [391, 664], telephone and electronic switching systems [33, 596], and aviation [93, 174].

Metrics have been used to measure assurance with respect to specific properties, such as failure tolerance [1020], abnormal system behavior [335], and test coverage [18]. The Visual Network Rating Methodology (VNRM) [795] helps users organize and document assurance arguments.

Pfleeger's book [807] presents an excellent description of software engineering. Berzins and Luqi [90] discuss applications of formal methods to software engineering. Brooks' description of the development of OS/360 [149] focuses on the human practices and problems as well as the technical ones. It is a classic in the field of software engineering.

Exercises

1:

Definition 18–2 defines assurance in terms of “confidence.” A vendor advertises that its system was connected to the Internet for three months, and no one was able to break into it. It claims that this means that the system cannot be broken into from any network.

  1. Do you share the vendor's confidence? Why or why not?

  2. If a commercial evaluation service had monitored the testing of this system and confirmed that, despite numerous attempts, no attacker had succeeded in breaking into it, would your confidence in the vendor's claim be increased, decreased, or left unchanged? Justify your answer.

2:

A computer security expert contends that most break-ins to computer systems today are attributable to flawed programming or incorrect configuration of systems and products. If this claim is true, do you think design assurance is as important as implementation and operational assurance? Why or why not?

3:

Suppose you are the developer of a computer product that can process critical data and will likely run in a hostile environment. You have an outstanding design and development team, and you are very confident in the quality of their work.

  1. Explain why you would add assurance steps to your development environment.

  2. What additional information (if any) would you need in order to decide whether or not the product should be formally evaluated?

4:

Requirements are often difficult to derive, especially when the environment in which the system will function, and the specific tasks it will perform, are unknown. Explain the problems that this causes during development of assurance.

5:

Why is the waterfall model of software engineering the most commonly used method for development of trusted systems?

6:

The goal of a researcher is to develop new ideas and then test them to see if they are feasible. Software developed to test a new idea is usually similar to software developed for proof of concept (see Definition 18–9). A commercial firm trying to market software that uses a new idea decides to use the software that the researchers developed.

  1. What are the problems with this decision from an assurance point of view?

  2. What should the company do to improve the software (and save its reputation)?

7:

A company develops a new security product using the extreme programming software development methodology. Programmers code, then test, then add more code, then test, and continue this iteration. Every day, they test the code base as a whole. The programmers work in pairs when writing code to ensure that at least two people review the code. The company does not adduce any additional evidence of assurance. How would you explain to the management of this company why their software is in fact not “high-assurance” software?



[1] Some authors break this phase into two parts: implementation testing and unit testing. In practice, the developer of a program is usually responsible for the unit testing of that program. Because the two are often done concurrently, it seems appropriate to treat them as a single phase.

[2] By “finished,” we mean that the system meets the criteria established to define when it has been completed.
