17
Secure Software Acceptance
In this chapter you will
•  Learn the fundamentals of assuring all forms of sourced software
•  Learn basic terminology of the acceptance process
•  Discover the basic activities involved in acceptance
•  Examine security validation and verification
•  Explore the importance and implications of the software acceptance process
Software acceptance is the portion of the secure development lifecycle where software is determined to meet the requirements specified earlier in the development process. Testing criteria are used to help determine whether the software is acceptable for use.
Introduction to Acceptance
The purpose of the acceptance phase of the lifecycle is to determine whether a purchased product or service has met the delivery criteria itemized in the contract. Thus, acceptance is the point in the process where a check gets written. As a result, all aspects of the acceptance process ought to be rigorous. That rigor is embodied by the suite of technical and managerial assessments that the customer organization will employ to ensure that each individual acceptance criterion is duly satisfied.
The evidence to support that judgment is gathered through a series of tests and structured audits that are, as a rule, planned at the beginning of the project and specified in the contract. As a group, those audits and tests seek to determine whether the product meets the functional requirements stipulated in the contract. Then, based on the results of that assessment, responsibility for the product is transitioned from the supplier organization to the customer. The general term for this step is “delivery,” and the processes that ensure its successful execution are called software “qualification,” or “acceptance.” Both of these processes involve testing.
Software Qualification Testing
Qualification or acceptance testing is the formal analysis that is done to determine whether a system or software product satisfies its acceptance criteria. Thus, in practical terms, the customer does qualification testing in order to determine whether or not to accept the product. The qualification testing process ensures that the customer’s requirements have been met and that all components are correctly integrated into a purchased product.
EXAM TIP  The formal analysis that is performed to determine whether a system or software product satisfies its acceptance criteria is called qualification or acceptance testing.
Software qualification testing provides evidence that the product is compliant with the requisite levels of design, performance, and assurance that are stipulated in the contract. Thus, the software qualification phase should be designed to prove that the system meets or exceeds the acquirer’s requirements. Qualification audit and testing procedures look for meaningful defects in the software’s design and execution that might cause the software to fail or that might be exploited in actual use. As a consequence, the scope of the software qualification audit and testing elements of this phase is tailored to specifically assess whether the design and development of the software are correct.
Given those purposes, the audits and tests that are part of the qualification phase have to be designed so that they not only evaluate compliance with the stipulations of the initial requirements document, but they also evaluate compliance with all pertinent contract, standard, and legal requirements. The tests involve a detailed assessment of the important elements of the code under both normal and abnormal conditions. The audits assure that all requisite documentation has been done correctly and that the product satisfies all performance and functional requirements stipulated in the contract.
Qualification Testing Plan
Qualification testing is always guided by a plan. That plan spells out the scope, approach, resources, and schedule of the testing activity. The plan estimates the number of test and assurance cases and their duration, and defines the test completion criteria. The plan also makes provisions for identifying risks and allocating resources.
The design and execution of the qualification tests themselves are normally dictated in the contract. That usually includes consideration of seven things:
•  The required features to be tested
•  Requisite load limits
•  Number and types of stress tests
•  All necessary risk mitigation and security tests
•  Requisite performance levels
•  Interfaces to be tested
•  The test cases to address each of the aforementioned items
Elements of a Qualification Testing Plan
The qualification testing plan should answer the following nine practical questions:
1.  Who’s responsible for generating the test designs/cases and procedures?
2.  Who’s responsible for executing the tests?
3.  Who’s responsible for building/maintaining the test bed?
4.  Who’s responsible for configuration management?
5.  What are the criteria for stopping the test effort?
6.  What are the criteria for restarting testing?
7.  When will source code be placed under change control?
8.  Which test designs/cases will be placed under configuration management?
9.  What level will anomaly reports be written for?
In particular, each test case must stipulate the actual input values and expected results. The general goal of the testing activity is to exercise the component’s logic in a way that will expose any latent defects that might produce unanticipated or undesirable outcomes. The overall aim of the testing process is to deploy the smallest number of cases possible that will still achieve sufficient understanding of the quality and security of the product. In that respect, each explicit testing procedure contributes to that outcome.
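As a simple illustration, the following minimal sketch (in Python) pairs explicit input values with expected results for a hypothetical calculate_discount component; the cases cover normal, boundary, and abnormal conditions. The function, the business rule, and the values are invented for the example, not drawn from any particular contract.

# Minimal illustration of test cases that pair explicit input values
# with expected results. The function under test, calculate_discount,
# is hypothetical and stands in for any contracted component.

def calculate_discount(order_total: float) -> float:
    """Return the discount owed on an order (hypothetical business rule)."""
    return order_total * 0.10 if order_total >= 100.0 else 0.0

# Each tuple is one test case: (test id, input value, expected result).
TEST_CASES = [
    ("TC-001", 250.00, 25.00),   # normal condition: discount applies
    ("TC-002", 99.99, 0.00),     # boundary condition: just below threshold
    ("TC-003", 100.00, 10.00),   # boundary condition: exactly at threshold
    ("TC-004", 0.00, 0.00),      # abnormal condition: empty order
]

def run_cases() -> None:
    for case_id, order_total, expected in TEST_CASES:
        actual = calculate_discount(order_total)
        verdict = "PASS" if abs(actual - expected) < 0.005 else "FAIL"
        print(f"{case_id}: input={order_total} expected={expected} actual={actual} -> {verdict}")

if __name__ == "__main__":
    run_cases()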
The Qualification Testing Hierarchy
Software testing itself begins at the component level in the development stage of the lifecycle and proceeds up through the hierarchy of testing levels to the fully integrated system that is assured at the acceptance phase. Targets of this top-level phase include the software architecture, components, interfaces, and data. Testing of the delivered product is normally an iterative process because software is built that way. Activities at the acceptance level include
•  Software design traceability analysis (e.g., trace for correctness)
•  Software design evaluation
•  Software design interface analysis
•  Test plan generation (by each level)
•  Test design generation (by each level)
Assurance concerns at the qualification stage are usually supported by analyzing how well the code adheres to design specifications and coding standards. This assessment is normally supported by such activities as source code and interface traceability analysis, and by evaluations of the documentation that is associated with each unit tested. The actual testing is supported by targeted test cases and test procedures that are exclusively generated for the particular object that is undergoing analysis.
Pre-release Activities
Pre-release activities consist of all of the actions that are undertaken to assess, analyze, evaluate, review, inspect, and test the product and its attendant processes prior to release. Done properly, these activities are all executed in parallel through the final stages of the software development lifecycle, not just at the end of the project. The goal is to provide an objective assessment of the accuracy, completeness, consistency, and testability of the evolving product. Although pre-release testing is normally done as a part of software acceptance, it is incorrect to assume that pre-release activities only take place there. Pre-release testing should be considered an ongoing activity that stays on top of meaningful changes to any product in the latter stages of its development.
Testing a new build for the purpose of determining whether it is ready for operational use is certainly the most visible part of the acceptance process. However, pre-release testing does not necessarily just apply to final acceptance. For contract purposes, there have to be explicit mechanisms built into the process to collect and preserve the results of the entire pre-release testing phase, from unit test to integration. That preservation is necessary because the presence and accuracy of the entire suite of test results provide the compiled evidence of overall system correctness. Thus, the pre-release testing process must ensure the proper execution of all pre-release tests as well as the integrity of all relevant data.
Pre-release testing is important because it assures that the product complies with its purpose. Pre-release testing facilitates early detection and correction of errors. It also enhances management’s understanding of the ongoing process and product risks. Pre-release testing warrants that the software does what it was contracted to do. Thus, pre-release testing activities do not simply warrant that the product performs as it was designed. Rather, this testing guarantees that the product will reliably perform its duties according to all of its specified requirements within the actual operating environment.
Pre-release testing is typically defined as part of the overall planning that takes place at the beginning of the project. This planning process involves the preparation of a comprehensive test plan and the protocols and data that will be used to actually perform the pre-release tests. Then, at the designated points in the development process, pre-release testing establishes that the product fully and correctly conforms to each functional and nonfunctional feature stipulated at that milestone. In that respect, pre-release testing is the meat and potatoes part of the software acceptance process. Normally, the involved parties, both customer and supplier, select and describe a valid set of test requirements and applicable test cases. The participants have to ensure that whatever test requirements, cases, and specifications they select truly reflect the intended environment and outcomes. This process also includes determining the procedures that will be used to interpret the test results. That is because in many respects, accurate interpretation is as important as the outcomes themselves.
The testing organization then conducts the tests based on the defined set of test requirements, cases, and specifications. The overall goal is to certify that the product satisfies the terms of its intended use within the target environment, not just on a test bench. Software is not unlike other physical processes where inputs are received and outputs are produced. Where software differs is in the manner in which it fails. Most physical systems fail in a fixed and reasonably predictable number of ways. By contrast, because of its virtuality and complexity, software can fail in many bizarre ways. So it is important to be able to test as many of the logic and syntactic features of the product as possible.
Nevertheless, detecting all of the different failure modes for software is generally not possible. That is because the key to identifying every failure mode is to exhaustively test the code on all possible inputs. For most commercial programs, this is computationally infeasible. To complicate matters further, the dynamic nature of computer code does not ensure that any change will have the intended effect.
For instance, if a failure occurs during preliminary testing and the code is changed, the software may now be able to pass that particular test case. But because there is a change to the dynamic of the software, there is no longer any guarantee that the code will be able to pass any of the test cases that it passed prior to that change. So to truly assure the correctness of a product after a fix, the testing of the entire product ought to be done all over again. Obviously, the expense of this kind of retesting would quickly become prohibitive. So in the end, total product assurance prior to release is a noble, but not necessarily achievable, goal.
Implementing the Pre-release Testing Process
Tests are always defined in advance by an individual test plan, which is developed as part of the initial project setup process. A software test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. That plan can subsequently be updated and refined as the downstream tests produce better information about the development.
The advantage of doing the test planning early in the development process is that it will help people outside the designated testing group understand the “why” and “how” of the pre-release testing for that particular project. Because it is a public document, the plan should be detailed enough to be useful to the testing group, but not so detailed that no one outside that group will be able to understand it.
Conducting a Test
The first step in the testing process involves deciding what has to be tested. That typically involves consulting all requirements, functional design, and internal design specifications and other relevant documents. From this the tester identifies the application’s higher-risk aspects, sets testing priorities, and determines the scope and limitations of the tests. Based on this understanding the tester then determines the specific test approaches and methods, as well as the explicit requirements of the test environment, which might include hardware platforms, associated software, and communication requirements.
In conjunction with this, the tester defines any necessary testware requirements, such as record/playback tools, coverage analyzers, test tracking, and problem/bug tracking utilities. Then, for each test target, the tester determines the test input data requirements and the specific task and responsibility requirements. The tester then prepares the test plan and gets the needed approvals. Once that approval is granted, the tester writes the specific test cases and gets them approved. The tester is then ready to do the testing based on these authorized documents.
The tester prepares the test environment and testware, and obtains the required documents and guides. The tester then sets up the test tracking processes and the logging and archiving processes. Finally, the tester sets up or establishes the actual test data inputs and performs the tests, evaluating and reporting the results and tracking problems and recommended fixes. The tester might retest as needed and maintain and update test plans, test cases, the test environment, and testware as more information is gained throughout the lifecycle.
Key Component: Test Cases
A test case is a document that describes an input, action, or event that is expected to produce a predictable response. The fundamental aim of all test cases is to find out if a specified feature in a system or software product is working properly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results. The process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it’s a useful habit to prepare test cases as early in the development process as possible.
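The particulars just listed can be captured in a structured record. The following sketch (in Python) is one hypothetical way to do so; the field names mirror the list above, and the sample login case is invented for the illustration.

# A sketch of how the particulars listed above might be captured as a
# structured record. The field names mirror the text; the sample case
# itself is hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    identifier: str                 # test case identifier
    name: str                       # test case name
    objective: str                  # what the case is meant to demonstrate
    conditions_setup: str           # test conditions/setup
    input_data: List[str]           # input data requirements
    steps: List[str] = field(default_factory=list)            # actions to perform
    expected_results: List[str] = field(default_factory=list) # predicted responses

login_case = TestCase(
    identifier="TC-LOGIN-001",
    name="Reject invalid password",
    objective="Confirm the login feature refuses a bad credential",
    conditions_setup="Test account 'alice' exists; application at build 1.4.2",
    input_data=["username=alice", "password=wrong-password"],
    steps=["Open the login screen", "Enter the credentials", "Submit the form"],
    expected_results=["Access is denied", "A failed-login event is logged"],
)

print(login_case.identifier, "-", login_case.name)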
Functionally, testing involves examining a system or application under controlled conditions and then evaluating the results. For example, the tester asks, “If the user is in interface A of the application while using hardware B and does C, then D should happen.” From a practical standpoint, the controlled conditions ought to include both normal and abnormal conditions. In that respect, testing should intentionally try to make things go wrong in the product in order to determine what undesirable events might occur when they shouldn’t or which desirable events don’t happen when they should.
There are numerous forms of testing, all with a slightly different purpose and focus. But generally, these fall into two categories: white-box and black-box. Testing that tries to exercise as much of the code as possible within some set of resource constraints is called white-box testing. Techniques that do not consider the code’s structure when test cases are selected are called black-box techniques. What we are going to do is examine the most common approaches within those categories, keeping in mind that even the very explicit methodologies we are going to discuss are implemented in different ways depending on the integrity level required for the software product being tested.
Black-Box Testing
Black-box testing approaches are not based on any knowledge of the details of the internal design or code. These tests are normally based on requirements and the functionality specified in them. Their purpose is to confirm that a given input reliably produces the anticipated output condition. Black-box testing is ideal for situations where the actual mechanism is not known—for instance, in design, or where it is irrelevant, or in reliability assessment.
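For example, the sketch below (in Python) exercises a hypothetical parse_quantity routine purely against its stated requirement, accepting quantities from 1 to 999, without ever looking at how the routine is written. The requirement and the function are assumptions made for the illustration.

# A black-box style check: only the specified behavior is exercised.
# The test code never inspects how parse_quantity is implemented.

def parse_quantity(text: str) -> int:
    """Hypothetical unit under test: convert user input to a quantity."""
    value = int(text)
    if value < 1 or value > 999:
        raise ValueError("quantity out of range")
    return value

def test_requirement_accepts_valid_quantity():
    # Requirement: quantities from 1 to 999 are accepted as integers.
    assert parse_quantity("42") == 42

def test_requirement_rejects_out_of_range_quantity():
    # Requirement: anything outside 1..999 is rejected.
    try:
        parse_quantity("1000")
    except ValueError:
        return
    raise AssertionError("out-of-range quantity was not rejected")

if __name__ == "__main__":
    test_requirement_accepts_valid_quantity()
    test_requirement_rejects_out_of_range_quantity()
    print("black-box requirement checks passed")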
White-Box Testing
White-box testing is based on and requires knowledge of the internal logic of an application’s code. The purpose of white-box testing is to confirm internal integrity and logic of the artifact, which is primarily the code. This is normally done using a targeted set of prearranged test cases. Tests are based on and examine coverage of code statements, branches, paths, and conditions. This is often carried out through desk checks, but it can be highly automated as well, which would allow tests to evaluate the reliability of branches under a wide range of given conditions.
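As an illustration, the cases in the following sketch (in Python) were chosen after reading the code of a hypothetical shipping_cost function so that each of its three branches is executed at least once.

# A white-box style check: the cases below are selected from knowledge
# of the code so that every branch of the (hypothetical) function runs.

def shipping_cost(weight_kg: float, express: bool) -> float:
    if weight_kg <= 0:            # branch 1: invalid input
        raise ValueError("weight must be positive")
    if express:                   # branch 2: express surcharge
        return 10.0 + weight_kg * 2.0
    return 5.0 + weight_kg * 1.0  # branch 3: standard rate

def test_all_branches():
    # branch 1: the guard clause
    try:
        shipping_cost(0, express=False)
        raise AssertionError("guard clause not taken")
    except ValueError:
        pass
    # branch 2: express path
    assert shipping_cost(2.0, express=True) == 14.0
    # branch 3: standard path
    assert shipping_cost(2.0, express=False) == 7.0

if __name__ == "__main__":
    test_all_branches()
    print("all three branches exercised")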
Load Testing
Load testing is frequently performed on an ad hoc basis during the normal development process. It is particularly useful because it can quickly and economically identify performance problems without hand checking. It amounts to testing an application under heavy loads, such as testing of a website under a range of loads to determine at what point the system’s response time degrades or fails. Load testing is almost always supported by some sort of software tool or driver, since it requires presenting the unit under test with conditions that would be hard to duplicate under normal use.
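A minimal sketch of such a driver appears below (in Python). It applies increasing numbers of concurrent "users" to a stand-in request handler and reports the mean time per request; a real load test would, of course, drive the deployed application or website rather than a simulated handler.

# A toy load-test driver: increasing levels of concurrency are applied
# to a stand-in handler and the mean time per request is reported.
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request() -> None:
    """Stand-in for one request against the system under test."""
    time.sleep(0.01)  # simulated service time

def mean_time_per_request(concurrent_users: int, requests_per_user: int = 5) -> float:
    total_requests = concurrent_users * requests_per_user
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(handle_request) for _ in range(total_requests)]
        for future in futures:
            future.result()   # propagate any failure
    return (time.perf_counter() - start) / total_requests

if __name__ == "__main__":
    for users in (1, 10, 50, 100):
        avg = mean_time_per_request(users)
        print(f"{users:>3} concurrent users: mean {avg * 1000:.2f} ms per request")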
Stress Testing
Stress testing is a term that is often used interchangeably with “load” and “performance” testing by professionals. It serves the same purpose as load testing in the sense that it is looking to predict failure thresholds. Stress tests normally test system functioning while under such conditions as unusually heavy loads, heavy repetition of certain actions or inputs, or input of large numerical values and large complex queries to a database system. Stress testing is normally supported by software and other forms of automation. It differs from load testing in the sense that any potential area of failure under stress is targeted, not just load.
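The following sketch (in Python) illustrates the idea on a deliberately fragile, hypothetical routine: the driver keeps enlarging the input until the routine fails and then reports the threshold at which the failure occurred.

# A toy stress driver: enlarge the input until the unit under test
# fails, then report the failure threshold. The recursive routine is a
# stand-in for any component with a breaking point.
import sys

def flatten(depth: int) -> int:
    """Hypothetical unit under test: deliberately recursive."""
    if depth == 0:
        return 0
    return 1 + flatten(depth - 1)

def find_failure_threshold(start: int = 100, step: int = 100) -> int:
    """Increase the input size until the unit fails; return the failing size."""
    size = start
    while True:
        try:
            flatten(size)
        except RecursionError:
            return size
        size += step

if __name__ == "__main__":
    sys.setrecursionlimit(2000)  # keep the experiment small and fast
    print("unit failed at input size:", find_failure_threshold())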
Performance Testing
Performance testing is a term that is also often used interchangeably with “stress” and “load” testing. It differs from the other two in that it normally references criteria that are established in advance, or benchmarks that are created in the testing process—for instance, if what is being examined is performance degradation over time. Ideally, “performance” testing criteria for every artifact are defined up front in the contract or the pre-release testing plan. Like the other two, performance testing is almost always supported by software and testing scenarios.
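The sketch below (in Python) shows the idea of measuring against a benchmark that was fixed in advance; the 50-millisecond budget and the workload are both invented for the illustration.

# A toy performance check against a benchmark agreed on in advance,
# in the spirit of criteria written into the contract or test plan.
import time

BENCHMARK_SECONDS = 0.050   # illustrative response-time budget

def workload() -> int:
    """Stand-in for the operation whose performance is being judged."""
    return sum(i * i for i in range(100_000))

def test_meets_benchmark(runs: int = 20) -> None:
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        timings.append(time.perf_counter() - start)
    worst = max(timings)
    assert worst <= BENCHMARK_SECONDS, f"worst run {worst:.3f}s exceeds budget"
    print(f"worst of {runs} runs: {worst * 1000:.1f} ms (budget {BENCHMARK_SECONDS * 1000:.0f} ms)")

if __name__ == "__main__":
    test_meets_benchmark()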
Usability Testing
Usability testing is another one of the more common general testing approaches. As its name implies, usability testing is testing for “user friendliness.” The problem with usability testing is that it is subjective and depends on the targeted end user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. It is generally not supported by software.
Alpha Testing
Although it is not as common as its brother method, alpha testing is still an important part of easing the product into the real world of use. Alpha testing is typically done by end users or others, definitely not by programmers or technical testers. This testing takes place when a product is nearing delivery. It is understood that minor design changes will still be made as a result of such testing. Thus, the product is not as widely circulated during an alpha test.
Beta Testing
On the other hand, beta testing is by far the most common method of pre-release testing for any new product in the real world. Beta testing takes place when development and testing are essentially completed and final bugs and problems need to be found before final release. It is typically done by end users or others, not by programmers or testers. To be maximally effective, there has to be a direct reporting link that hooks the testers to the people who are responsible for the final polishing prior to release.
Some Common Testing Tools
The most common type of automated tool is the “record/playback.” A tester clicks through all combinations of menu choices, dialog box choices, and buttons in an application and has the results of those accesses logged by a tool. Then if new buttons are added or some underlying code in the application is changed, the application can then be retested by just playing back the recorded actions and comparing the results in order to see the effects of the changes. Other automated tools can include
•  Code analyzers, which monitor code complexity and adherence to coding standards.
•  Coverage analyzers, which determine the parts of the code that are yet to be tested. Coverage analyzers can be oriented toward determining the statement coverage, the condition coverage, and the path coverage of tests.
•  Memory analyzers, which look at physical performance—for instance, boundary checkers.
•  Load/performance test tools, which test applications under various load levels.
•  Web test tools, which check that links are valid, that HTML code usage is correct, that client-side and server-side programs work, and that a website’s interactions are secure.
Completion Criteria
Completion criteria are always established by the project’s contract. The reason why such an explicit degree of specification is needed lies in the unique culture of software development. Software is an abstract and invisible entity. Therefore, it is hard to judge progress in its construction and to determine its eventual point of completion without some type of tangible landmark. Those landmarks are proxies for the achievement of the requisite goals, and they are essential to proper management of the project because they define the transition points in the process. In that respect then, the satisfaction of the proxy completion criteria that are specified in the contract serves as the requisite evidence of the product’s readiness for delivery.
Completion criteria have to characterize a tangible outcome or action that can be judged in objective terms. For instance, one completion criterion might be the delivery of a properly inspected set of user documents or the delivery of all specified modules of source code. These two things are tangible, so they can be assessed and judgments can be made about whether a specified milestone requirement has been achieved. The actual form of a lot of the completion criteria is generally project specific. However, standard criteria do exist. These criteria can be used to build a process for judging product suitability.
The criteria are specified in the ISO 9126 standard. ISO 9126 concerns itself with defining a common set of criteria that can be used to assess the general suitability of a software product or service. ISO 9126 defines six universal criteria, which are intended to be exhaustive. To aid application, the definition of each of these characteristics is provided in the standard. In addition, ISO 9126 provides an assessment process that can be utilized to tailor a specific set of completion criteria for a product. Criteria, in turn, can be decomposed into metrics. One or more actual measures can be characterized for each satisfaction criterion, with some of those measures overlapping several criteria.
ISO 9126 Criteria
The six generic criteria for judging the suitability of a product are
•  Functionality
•  Reliability
•  Usability
•  Efficiency
•  Maintainability
•  Portability
For the sake of actually measuring in objective terms whether the product has satisfied its requirements, these large criteria can then be decomposed into subcriteria, from which a meaningful set of objective metrics can be developed. For instance, for each area
•  Functionality can be decomposed into measures of: 1) suitability to requirements, 2) accuracy of output, 3) interoperability, 4) functional compliance, and 5) security.
•  Reliability can be decomposed into measures of: 1) product maturity, 2) fault tolerance, 3) frequency of failure, and 4) recoverability.
•  Usability, or ease of use, can be decomposed into measures of understandability, which simply involve the users’ ability to acquire a logical understanding of the software’s use and its applicability to a given situation. Usability can be evaluated by measuring the length of time that it takes to learn the product, as well as how long it takes for the user to operate it at an acceptable level of proficiency.
•  Efficiency can be measured in two terms: time behavior and resource behavior. Time behavior is characterized by factors like response and processing times and throughput rates. Resource behavior can be measured in terms of the amount of resources used and the duration of such use in performing a given function.
•  Maintainability can be measured in terms of the time it takes to analyze and address a problem report or change request and the ease of the change if a decision is made to alter it. This is particularly true where change to the structure of the product is required. In addition to the time it takes to analyze and change the product, maintainability can be expressed as a measure of the general stability (e.g., failure rate) and the operational testability of the product.
•  Portability can be judged in terms of the ability of the product to be adapted to a given situation, its ease of installation, and its conformance with the organization’s general requirements and any applicable regulations. Moreover, the ability to replace the product if an improved version is available also should be considered (e.g., the product’s extensibility).
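To illustrate how such subcriteria can be turned into numbers, the short sketch below (in Python) computes a few candidate measures, failure rate, mean time to repair, and mean time to learn, from invented sample data.

# An illustrative computation of measures derived from the subcriteria
# above. All data values are invented for the example.
from statistics import mean

# Reliability: frequency of failure over an observation period.
failures_observed = 4
operating_hours = 2_000
failure_rate_per_khr = failures_observed / operating_hours * 1_000

# Maintainability: mean time (hours) to analyze and resolve change requests.
resolution_hours = [3.5, 8.0, 2.0, 6.5]
mean_time_to_repair = mean(resolution_hours)

# Usability: mean time (hours) for new users to reach acceptable proficiency.
time_to_proficiency = [5.0, 7.5, 6.0]
mean_time_to_learn = mean(time_to_proficiency)

print(f"failure rate: {failure_rate_per_khr:.1f} per 1,000 operating hours")
print(f"mean time to repair: {mean_time_to_repair:.1f} hours")
print(f"mean time to learn:  {mean_time_to_learn:.1f} hours")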
These criteria obviously do not represent all of the measures that can be used to judge completion. However, they have the advantage of being standard and therefore commonly accepted as correct. An organization that is seeking to create and establish completion criteria can use this common set as a core for defining a broader range of situation-specific measures that will help it evaluate in objective terms whether the project has been sufficiently developed to be judged complete.
Risk Acceptance
Because of the number of threats in cyberspace and the complexity of the product, risks are a given in any delivered system or software product. As a result, formal procedures must be in place to ensure that risks are properly accounted for in the final acceptance. Risks comprise a range of properties that must be considered in order to stay secure within acceptable limits. Those properties include
•  Implicit and explicit safety requirements
•  Implicit and explicit security requirements
•  Degree of software complexity
•  Performance factors
•  Reliability factors
Mission-critical software, requiring a high degree of integrity, also requires a large number of rigorous tasks to assure these factors. These tasks are normally defined early in the development process and are enshrined in the contract. Making that assurance explicit is very important to the process because, in the normal project, there will be many more risks involved than there will be resources to address them. So it is critical to be able to document in the contract a process for identifying and prioritizing risks.
The evaluation process that underlies risk acceptance should always answer two questions: 1) what is the level of certainty of the risk, which is typically expressed as likelihood, and 2) what is the anticipated impact, which is normally an estimate of the loss, harm, failure, or danger if the event does happen. Traditionally, these are the two factors that determine which risks are acceptable and which ones aren’t. These two questions should be approached logically. Practically speaking, the first consideration is likelihood, since a highly unlikely event might not be worthy of further consideration. However, the estimate of the consequences is the activity that truly shapes the form of the response. That is because there is never enough money to secure against every conceivable risk, and so the potential harm that each risk represents always has to be balanced against the likelihood of its occurrence and prioritized for response.
Therefore, the basic goal of the risk assessment process is to maximize resource utilization by identifying those risks with the greatest probability of occurrence and that will cause the most harm. These options are then arrayed in descending order of priority and addressed based on the available resources. Since all of the decisions about the tangible form of the processes for software assurance will depend on getting these priorities absolutely correct, it should be easy to see why a rigorous and accurate risk assessment process is so critical to the overall goal of effective risk acceptance. However, software is an intangible and highly dynamic product, so without a well-defined process, it is difficult to assign priorities with absolute confidence.
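The following sketch (in Python) illustrates the prioritization just described: each identified risk is scored as likelihood times anticipated impact, sorted in descending order, and addressed until the available resources run out. The risks, probabilities, and dollar figures are invented for the example.

# A sketch of likelihood-times-impact prioritization. Risks are sorted
# by exposure in descending order and mitigated until the budget is spent.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: float        # probability of occurrence, 0.0 - 1.0
    impact: float            # estimated loss if it occurs, in dollars
    mitigation_cost: float   # cost to address it, in dollars

    @property
    def exposure(self) -> float:
        return self.likelihood * self.impact

risks = [
    Risk("SQL injection in order form", 0.30, 500_000, 40_000),
    Risk("Weak session expiration",     0.10, 100_000, 10_000),
    Risk("Unpatched third-party lib",   0.50, 200_000, 15_000),
]

budget = 50_000
for risk in sorted(risks, key=lambda r: r.exposure, reverse=True):
    if risk.mitigation_cost <= budget:
        budget -= risk.mitigation_cost
        decision = "mitigate"
    else:
        decision = "accept (no budget)"
    print(f"{risk.name}: exposure ${risk.exposure:,.0f} -> {decision}")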
It is easy to have confidence in priorities that are set in the physical world—a civil engineering problem, for instance. However, confidence is bound to diminish if the estimate is based on something as complex and hard to visualize as software. As a result, risk acceptance assessments for software have to be built around concrete evidence. That tangible proof is usually established through tests, reviews, and analysis of threats, as well as any other form of relevant technical or managerial assessment.
Because the threats to software are extremely diverse, the data collection process has to be systematic and well coordinated. As a result, risk acceptance assessments should always embody a commonly accepted and repeatable methodology for data collection that produces reliable and concrete evidence that can be independently verified as correct. The gathering, compilation, analysis, and verification of all data that is obtained from tests, reviews, and audits can be a time-consuming and resource-intensive process. In order to ensure the effectiveness and accuracy of any given risk analysis, the practical scope of the inquiry has to be precisely defined and should be limited to a particular identified threat.
Therefore, it is perfectly acceptable to approach the understanding of operational risk in a highly compartmentalized fashion, as long as the organization understands that the results of any particular risk assessment only characterize a part of the problem. In fact, the need for a detailed, accurate picture of all conceivable threats almost always implies a series of specifically focused, highly integrated risk assessments that take place over a defined period, rather than a single monolithic effort. The assessments typically target the various known threats to the electronic, communication, and human-interaction integrity of the product. The insight gained from each focused assessment is then aggregated into a single comprehensive understanding of the total impact of a given threat, which serves as the basis for judging how it will be mitigated or accepted.
EXAM TIP  Risk assessment requires detailed knowledge of the risks and consequences associated with the software under consideration. This information is contained in a properly executed threat model, which is created as part of the development process.
Nevertheless, targeted risk assessments only drive acceptance decisions about specific, known vulnerabilities at a precise point in time. That explicit understanding is called a threat picture, or situational assessment. However, because threats are constantly appearing on the horizon, that picture has to be continuously updated. Consequently, risk acceptance assessments are always an ongoing process. The assessment should maintain continuous knowledge of three critical factors:
•  The interrelationships among all of the system assets
•  The specific threats to each asset
•  The precise business and technological risks associated with each vulnerability
These factors are separate considerations, in the sense that conditions can change independently for any one of them. However, they are also highly interdependent in the sense that changes to one factor will most likely alter the situation for the other two. Moreover, the same careful investigative process has to be followed to track every risk after it has been identified. That is because of the potential for latent threats to become active. A latent threat might not have immediate consequences because the conditions that would make it harmful are not present yet. As a consequence, latent threats are normally disregarded in the real-world process of risk acceptance. This is understandable, since resources ought to be concentrated on the threats that are known to cause significant harm. However, a latent threat can become an active one if those conditions change. Therefore, all latent threats that could exploit a known vulnerability have to be tracked.
Post-release Activities
The goal of the post-release activities is to seamlessly place a completed and tested application into an existing operating environment. This placement is typically carried out by a unit other than the development organization. Because post-release activities can best be described as configuration management, they have to start with the establishment of a formal baseline for control. Therefore, the activities in this phase should always be initiated by a complete audit of the installed configuration.
The audit is aimed at determining whether all software products are installed and operating correctly. That includes analyzing and confirming that all site-dependent parameters have been met. Moreover, because change requests and anomaly reporting occur throughout the configuration management lifecycle, it is also important for the audit element to determine whether a practical change request and anomaly reporting process has been implemented. The results of this are all captured in the post-release plan.
The post-release plan should describe the procedures required to administer the post-release process. It should tell how the process will be structured to provide maximum effectiveness in configuration control, and it will describe the flow of information through that process, as well as the mechanisms that will be used to adjust it to ensure its continuing effectiveness. That includes specifications of the timing of the reports, the deviation policy, the control procedures for problem resolution, and any additional elements. Additional elements in the plan might include procedures for
•  Configuration audit
•  Baseline generation
•  Operation and maintenance problem reporting
•  Configuration and baseline management plan revision
•  Anomaly evaluation, reporting, and resolution
•  Proposed change assessment/reporting
•  General status reporting
•  Configuration management administration
•  Policy and procedure to guide the selection of standard practices and conventions
Validation and Verification
The validation and verification processes support all of the primary process activities of the organization as well as their management. Validation and verification (V&V) activities also apply to all stages of the software lifecycle. Those activities are established and documented in the software validation and verification plan (SVVP). That plan includes the specification of the information and facilities necessary to manage and perform V&V activities and the means for coordinating all relevant validation and verification activities with other related activities of the organization and its projects.
EXAM TIP  Validation means that the software meets the specified user requirements. Verification describes proper software construction. Barry Boehm clarifies the difference in this simple fashion: Validation: Are we building the right product? Verification: Are we building the product right?
Planners should assess the overall validation and verification effort in order to ensure that the software validation and verification plan remains effective and continues to actively monitor and evaluate all validation and verification outputs. This should be based on the adoption of a defined set of metrics. At a minimum, the validation and verification processes should support configuration baseline change assessment, management and technical reviews, the interfaces between development and all organizational and supporting processes, and all V&V documentation and reporting. Documentation and reporting includes all validation and verification task reports, activity summary reports, anomaly reports, and final V&V reports for project closeouts. To do this properly, the software validation and verification plan should specify the administrative requirements for
•  Anomaly resolution and reporting
•  Exception/deviation policy
•  Baseline and configuration control procedures
•  Standards, practices, and conventions adopted for guidance
•  Form of the relevant documentation, including plans, procedures, cases, and results
Management V&V
There are two generic forms of V&V: management and technical. Validation and verification activities for management fittingly examine management plans, schedules, requirements, and methods for the purpose of assessing their suitability to the project. That examination supports decisions about corrective actions, the allocation of resources, and project scoping. It is carried out for the purpose of supporting the management personnel who have direct responsibility for a system. Management reviews and other forms of V&V are meant to discover and report variations from plans and/or defined procedures. They might also recommend corrective action as required. Since management V&V is done to support the management process, it is likely to involve the following management roles:
•  Decision maker
•  Review leader
•  Review recorder
•  Management staff
•  Technical staff
•  Customer (or user) representative
Management reviews normally consider such things as the statement of project objectives, the status of the software product itself, the status of the project management plan with respect to milestones, any identified anomalies or threats, standard operating procedures, resource allocations, project activity status reports, and other pertinent regulations. Management reviews are scheduled as part of initial project planning and are usually tied to milestones and terminal phases. This does not exclude ad hoc management reviews, which can be scheduled and held for the purposes of risk or threat analysis, software quality management, operational/functional management, or at the request of the customer.
Technical V&V
Technical V&V evaluates the software product itself, including the requirements and design documentation and the code, test and user documentation and manuals and release notes, and the build and installation procedures. Technical reviews support decisions about whether the software product conforms to its specifications; adheres to regulations, standards, and plans; and has been correctly implemented or changed.
Technical reviews are carried out for the purpose of supporting the customer’s and supplier’s technical and management personnel who have direct responsibility for the system. They are meant to discover and report defects and anomalies in the software under construction and/or changes. The reviews primarily focus on the software product and its artifacts. Technical reviews are typically done by technical and technical management personnel. They potentially involve the following management roles:
•  Decision maker
•  Review leader
•  Review recorder
•  Technical staff and technical managers
•  Customer (or user) technical staff
Technical reviews normally consider the statements of objectives for the technical review, the software product itself, the project management plan, anomalies, defects, and security risks. Any relevant prior review reports and pertinent regulations also have to be considered.
Technical reviews should be scheduled as part of initial project planning. These can also be held to evaluate impacts of anomalies or defects. This does not exclude ad hoc technical reviews, which can be scheduled and held for the purposes of supporting functional or project management, system engineering, or software assurance.
Independent Testing
Independent testing can be carried out by third parties in order to ensure confidence in the delivered product. By involving a disinterested third party in the evaluation process, the customer and supplier can both ensure maximum trust in product integrity. The testing process is similar to the testing activities described earlier. The difference is that the third party carries out those tests. If a third party is involved in this part of the process, it is often termed “independent validation and verification,” or IV&V. The essential requirement of independent testing lies in the word “independence.”
The testing manager has to ensure that the testing agent has all necessary latitude to conduct tests and audits in a manner that they deem proper to achieve the assurance goals written into the contract. In general, what this means is that the testing agent maintains a reporting line that is not through the management of the product being evaluated. It also means that the testing agent should report to a person at a sufficient level in the organization to enforce findings from the testing process. The idea in IV&V is to ensure that the people whose product is undergoing evaluation have no influence over the findings of the testers.
Beyond classic testing, audits are perhaps the most popular mechanism for independent acceptance evaluations. Audits provide third-party certification of conformance to regulations and/or standards. Items that may be audited include a project’s
•  Plans
•  Contracts
•  Complaints
•  Procedures
•  Reports and other documentation
•  Source code
•  Deliverables
At the acceptance stage, audits are typically utilized to ensure confidence that the delivered product is correct, complete, and in compliance with all legal requirements. The process itself is normally conducted by a single person, who is termed the “lead auditor.” The lead auditor is responsible for the audit, including administrative tasks. Audits are normally required by the customer organization to verify compliance with requirements, or by the supplier organization to verify compliance with plans, regulations, and guidelines, or by a third party to verify compliance with standards or regulations. The auditor is always a third-party agency, and the initiator is usually not the producer.
Audits are initiated by planning activities. Plans and empirical methods have to be approved by all of the parties involved in the audit, particularly the audit initiator. All of the parties involved in the audit participate in an opening meeting in order to get all of the ground rules set. This meeting might sound like a bureaucratic exercise, but it is important since audits themselves are intrusive, and it is important to ensure that they are conducted as efficiently as possible. The auditors then carry out the examination and collect the evidence. Once all the evidence has been collected, it is analyzed and a report is prepared. Normally, this report is reviewed by the audited party prior to release in order to head off any misinformation or misinterpretation. After the initial meeting with the audited party, a closing meeting is held with all parties in attendance and a report is generated. That report includes
•  Preliminary conclusions
•  Problems experienced
•  Recommendations for remediation
Once the report is accepted, the auditors typically are also responsible for following up on the resolution of any problems identified. In that process, the auditor examines all of the target items in order to provide assurance that the rework has been properly performed for each item. On acceptance of the final problem resolutions, the auditor submits a final report that itemizes
•  The purpose and scope
•  The audited organization
•  The software product(s) audited
•  Any applicable regulations and standards
•  The audit evaluation criteria
•  An observation list classifying each anomaly detected as major or minor
•  The timing of audit follow-up activities
Chapter Review
This chapter covered the activities associated with the software acceptance process. It began with an examination of the topic of acceptance and qualification testing. As the final stage of the development lifecycle, acceptance activities are based on policies and implemented by plans. The chapter covered pre-release activities, including tests and inspections done prior to product delivery. The chapter discussed the use of verification and validation to assure that products properly reflect a customer’s specified requirements and that they comply with their purpose. Post-release activities were discussed with respect to how they ensure that product integrity is maintained throughout the product’s lifecycle in the enterprise. Connection to risk management via threat modeling was discussed, including the role of communicating and accepting the risk associated with software components or systems.
Quick Tips
•  Technical reviews are important tools in supporting and enforcing producer and customer understanding of the evolving product.
•  Pre-release activities should be planned as early as possible in order to be written into the contract. This is the only way they can be enforced.
•  Management reviews support the individuals who have direct responsibility for the system.
•  Audits are carried out by a team that is headed by a lead auditor. The lead auditor ensures coordination in the collection of evidence and the interpretation of results.
•  Black-box tests do not assume any knowledge of the internal mechanism of the product. Examples of these are load, stress, and performance tests.
•  White-box tests evaluate the internal workings of the product. Examples of these are static testing, code checking, and design reviews.
•  Alpha tests take place as the product is nearing completion but it is not considered to be a completed product yet.
•  Beta tests take place when it is assumed that the product is complete. Their aim is to exercise the product in its environment.
•  Completion criteria are critical to the documentation of the correctness of the product. They should be developed early and they should be refined as the product evolves.
•  The completion of the audit process does not signal delivery. Delivery happens when all nonconcurrences and anomalies identified by the audit have been certified as closed.
•  Pre-release testing should start when the system is initially integrated and end with beta testing of the delivered product.
Questions
To further help you prepare for the CSSLP exam, and to provide you with a feel for your level of preparedness, answer the following questions and then check your answers against the list of correct answers found at the end of the chapter.
  1.  Software testing provides evidence that the software complies with:
A.  The customer’s view of what they want
B.  Legal regulations
C.  The contract
D.  The configuration management plan
  2.  The degree of testing that is done is defined by:
A.  Personal abilities
B.  Available resources
C.  The number of threats
D.  The customer
  3.  Stress tests differ from load tests in that stress tests aim to:
A.  Exercise all components
B.  Exert maximum pressure
C.  Measure performance against benchmarks
D.  Measure where the test object fails
  4.  Pre-release activities start:
A.  As soon as possible after integration
B.  When the product is delivered
C.  After delivery
D.  When the first problem is encountered
  5.  Qualification testing is always guided by:
A.  Prior results
B.  The customer
C.  A plan
D.  A beta test
  6.  Changes to code can often:
A.  Cause other parts of the system to fail
B.  Identify latent defects
C.  Lead to product success
D.  Happen accidentally
  7.  The essential requirement for IV&V is:
A.  Testing
B.  Reviews
C.  Audits
D.  Independence
  8.  The aim of black-box testing is to confirm that a given input:
A.  Is correct
B.  Can be processed accurately
C.  Produces a predictable output
D.  Will not cause a defect
  9.  One subfactor of usability is:
A.  Integrity
B.  Rapid adoption
C.  Few failures
D.  Security
10.  The post-release plan should have a policy to allow:
A.  Rules
B.  Deviations
C.  Procedures
D.  Practices
11.  The foundation for post-release management is:
A.  The product testing
B.  The product performance
C.  The product release date
D.  The product baseline
12.  Management reviews recommend:
A.  Best practices
B.  Roles
C.  Accountability
D.  Corrective or remedial action
13.  A test case describes:
A.  An output that produces a predictable input
B.  An input that produces a predictable output
C.  An algorithm
D.  A result
14.  Which of the following terms is associated with operational risk?
A.  Act carefully
B.  Act expeditiously
C.  Compartmentalized
D.  Compounding
15.  Software fails in:
A.  Many cases
B.  Bizarre ways
C.  Predictable patterns
D.  The worst possible times
Answers
  1.  C. The contract defines every aspect of the deliverable.
  2.  B. Testing can only be done to the degree of the resources available.
  3.  D. Stress tests determine the point where the object will fail.
  4.  A. Pre-release activities begin as soon as possible and progress through delivery.
  5.  C. Qualification testing is established by a plan.
  6.  A. Changes can cause unintended consequences in other parts of the system.
  7.  D. IV&V has to have independence from development.
  8.  C. The outcome of any input should be known and predictable.
  9.  B. Ease of use is measured by the ability to rapidly adopt an application.
10.  B. Post-release planning should allow the ability to deviate based on new information.
11.  D. Post-release management requires an initial statement of the product baseline.
12.  D. The outcome of management reviews is corrective action.
13.  B. Test cases align inputs to outputs in a predictable fashion.
14.  C. Operational risk can be compartmentalized.
15.  B. Software is unpredictable because it is virtual and it fails in bizarre ways.