Chapter 7
Verification and Validation

After completing this chapter, you will be able to:

  • understand what is meant by verification and validation (V&V);
  • understand the benefits and costs of applying V&V techniques;
  • understand the traceability technique and its usefulness;
  • learn about the IEEE 1012 V&V standard and models used in industry;
  • understand the processes and activities of V&V;
  • understand the activities of the software validation phase;
  • learn how to develop and use a V&V checklist for your project;
  • understand how to write a V&V plan for your project;
  • learn about the V&V tools available;
  • understand the relationship between V&V and the software quality assurance plan.

7.1 Introduction

In an article about safety, Leveson [LEV 00] brilliantly explains the dangers of modern software-based systems. The following text box summarizes her thinking.

The introduction of innovative software-intensive products offering an increasing number of functions often means increasing the number of computer processing units and the size of the software. For example, in the automobile sector, over 60 small computers (called electronic control units (ECUs)) running more than 100 million lines of code from different suppliers are used in many car models [REI 04]. These networked ECUs control, among other things, ignition, braking, the entertainment system, and now autopilot and self-parking. Failure of some units will result in recalls, customer dissatisfaction, or degraded vehicle performance, whereas a failure of the autopilot, steering, acceleration, or braking system could cause an accident, injury, or death.

According to the IEEE 1012 standard for V&V [IEE 12], the goal of V&V in software projects is to help the organization build quality into the software throughout the software life cycle. The V&V process provides an objective evaluation of software products and processes. It is simply a question of addressing quality during development, as opposed to trying to add quality to a product after it has been built.

Often it is not possible, and perhaps seldom reasonable, to inspect all the fine details of all the software products created during the development and maintenance life cycle, particularly because of a lack of time and budget. Therefore, all organizations have to make certain compromises and this is where the software engineer is expected to follow a rigorous process of justification and selection. Software teams must establish V&V activities during the project planning phase, so as to choose the techniques and approaches that will allow the products to have a proper level of V&V. The choice of these activities and their priorities is based on assessing risk factors and their potential impact. These V&V activities need to be added to the development process of the project in order to reduce risks to an acceptable level.

In this chapter, we present V&V for software. First, we explain the concepts of V&V and the intended benefits and costs. We then present the international standards and models that define V&V activities or impose them in certain situations. We continue with the presentation of an inventory of the various V&V techniques available as well as their utility to address the particular concerns of the software engineer. We then present the typical contents of a V&V process. This is followed by a brief discussion on the importance of independent V&V (IV&V) in critical projects. We then provide more detail for one of the V&V activities often required for safety: traceability. Finally, we explain how to develop and use checklists.

Verification aims to show that an activity was done correctly (doing it right): that it was performed in accordance with its implementation plan and has not introduced defects into its output. It can be done on successive intermediate states of the product that is the outcome of an activity.

Validation is composed of a series of activities which start, early in the development life cycle, with the validation of customer requirements. End-users or their representatives will also evaluate the behavior of the software product in the target environment, either real, simulated, or on paper.

Validation helps minimize the risk of developing the wrong items by ensuring that the requirements are adequate and complete. Subsequently, it ensures that these validated requirements are correctly carried into the products of the following phase, notably the specifications. Validation also ensures that the software does not do what it should not do, meaning that no unintended behavior should arise from it.

If quality assurance is the poor cousin of software development, validation has the same relationship with V&V. While verification practices, such as testing, have a very important place in academia and industry, we cannot say the same for validation techniques. Validation techniques are often absent or ignored by developers and by the mandated development processes. Few organizations validate requirements early in a project; unfortunately, many carry out validation only at the very end. Occasionally, we find validation practices embedded at different stages of the development cycle (as shown in Figure 7.1). The lines at the top of this figure indicate the development cycle phases where validation activities can be performed.

[Flow diagram: requirements → business functions and performance → system requirements → system design → system testing, and so on.]

Figure 7.1 V&V activities in the software development life cycle.

Some organizations explicitly have a phase called software validation in their software development process, and if they produce software that is integrated into a system, they will also explicitly have a system validation phase. Figure 7.2 illustrates this with a software life cycle named the V development life cycle process. It illustrates, along the center arrow, that system and software validation plans that will be executed during the validation phase originate from the system and software specification phases. These plans will be updated throughout the development phases and will be used during the validation phases on the ascending line of activities shown on the right hand side of the figure.

[Flow diagram: system specification → SW requirements specification → preliminary design → detailed design, and so on.]

Figure 7.2 A V software life cycle describing when system and software validations are executed.

One reason for preparing these plans early in the development life cycle is that validation activities may require special equipment or environments to perform the validation of the system that includes software. It is possible that a testing environment also needs to be established. For example, the validation of an air traffic control system that needs to operate in conditions where dozens of aircraft are in the air at the same time may require validation of the system near a busy airport. This would enable the validation of several functional and non-functional requirements of the system.

Figure 7.2 shows that the system and software validation plans are developed during the descending branch of the V-shaped development cycle. These plans are then used in subsequent development phases to validate, among other things, the system and software requirements against the needs, and during the ascending branch of the V to validate the software during the validation phase. Since the software is a component of a system, it will be integrated with hardware or other software and subjected to system validation activities.

After identifying the risks and the required V&V techniques, the V&V activities need to be planned. In some projects, V&V activities are planned by a team drawn from different parts of the organization: for example, system engineers, software developers, supplier personnel, a risk manager, V&V or software quality assurance (SQA) specialists, software testers, a configuration manager, etc. The main objective of this activity is to develop a detailed V&V plan for the project.

7.2 Benefits and Costs of V&V

As we have mentioned above, the goal of V&V is to build quality into the software early during its construction, not to try to add it at the testing stage. Figure 7.3 shows, for an American company, the software process phases where defects are injected. The figure shows that a large percentage of defects, approximately 70%, are injected before a single line of code has been produced. It is therefore necessary to include techniques in the software life cycle that allow these defects to be detected and removed as close as possible to where they are created. Additionally, good detection techniques greatly reduce the costly rework associated with corrections, which is an important cause of schedule delays.

[Bar graph: percentage of defects injected, by system development phase from proposal to operations.]

Figure 7.3 Example of software process phases where defects are injected [SEL 07].

Figure 7.4 describes the defect detection effectiveness in an American company [SEL 07]. These results originate from Northrop Grumman, who collected data for 14 systems in which 3418 defects were detected during 731 reviews. These systems contained between 25,000 and 500,000 lines of code, and the corresponding teams ranged from 10 to 120 developers. This study shows that it is possible not only to detect errors, but also to eliminate them in the same phase where they were produced. For example, Figure 7.3 shows that 50% of the defects were injected in the requirements phase, and Figure 7.4 shows that 96% of these defects were eliminated in that same phase.

[Bar graph: defects detected as a percentage of defects injected, by system development phase.]

Figure 7.4 Percentage of the defects detected by the development process phase [SEL 07].

What can be learned from this figure is that it is possible to estimate the percentage of injected defects and to detect and correct a high percentage of them such that they will not propagate from one phase to another. This process has been named the error containment process. It is therefore possible to describe, in the quality plan of the project, quantitative quality criteria concerning defect removal objectives at each phase of the project.
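The error containment idea can be expressed as a simple metric. The sketch below (an illustration with hypothetical phase counts, not the Northrop Grumman data) computes the percentage of defects removed in the same phase where they were injected:

```python
def containment_rate(injected, detected_same_phase):
    """Percentage of injected defects removed in the phase where they were injected."""
    if injected == 0:
        return 0.0
    return 100.0 * detected_same_phase / injected

# Hypothetical per-phase counts for illustration only: (injected, detected in-phase).
phases = {
    "requirements": (200, 192),
    "design":       (120, 100),
    "coding":       (80, 60),
}

for phase, (injected, detected) in phases.items():
    print(f"{phase}: {containment_rate(injected, detected):.0f}% contained")
```

A quality plan could then state a quantitative criterion per phase, for example that the containment rate for requirements defects must exceed 90%.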

Why is it important to correct defects in the phase where they are injected? As shown in Chapter 2 (Figure 2.2), the cost of a defect compounds when it is not corrected in the phase where it is injected. For example, a defect found during the assembly phase will cost three times more to fix than one corrected during the previous phase (during which we should have been able to find it). It will cost seven times more to fix in the next phase (test and integration), 50 times more in the trial phase, 130 times more in the integration phase, and 100 times more when it becomes a failure for the client and has to be repaired during the operational phase of the product.

7.2.1 V&V and the Business Models

We recall here the main business models used by the software industry that were introduced in Chapter 1 [IBE 02]:

  • Custom systems written on contract: The organization makes profits by selling tailored software development services for clients (e.g., Accenture, TATA, and Infosys).
  • Custom software written in-house: The organization develops software to improve organizational efficiency (e.g., your current internal IT organization).
  • Commercial software: The company makes profits by developing and selling software to other organizations (e.g., Oracle and SAP).
  • Mass-market software: The company makes profits by developing and selling software to consumers (e.g., Microsoft and Adobe).
  • Commercial and mass-market firmware: The company makes profits by selling software in embedded hardware and systems (e.g., digital cameras, automobile braking system, airplane engines).

These business models help us understand the risks associated with each situation. V&V techniques can be used to detect defects and reduce these risks. The project manager, supported by SQA, will choose, budget, and plan V&V practices commensurate with the risks the project faces. The mass-market business model and embedded systems use these techniques extensively.

7.3 V&V Standards and Process Models

The most important standards and process models describing required V&V processes and practices are presented next: ISO 12207 [ISO 17], IEEE 1012 [IEE 12], and the CMMI®. Some standards go as far as recommending, for critical software, that certain programming constructs be avoided. For example, a railway control software standard forbids programmers from using the “GoTo” instruction and requires that “dead code” be removed before final delivery of the product. In the next section, the IEEE 1012 standard is explained; other standards are then briefly covered.

7.3.1 IEEE 1012 V&V Standard

The IEEE 1012—Standard for System and Software Verification and Validation [IEE 12] is applicable to the acquisition, supply, development, operation and maintenance of systems, software, and hardware. This standard is applicable to all types of life cycles.

7.3.1.1 Scope of IEEE 1012

IEEE 1012 addresses all the life cycle processes of systems and software. It is applicable to all types of systems. In this standard, the V&V processes determine whether the products completed by a specific development activity meet the requirements of their intended use and the corresponding end-user needs. This assessment can include analysis, evaluation, reviews, inspections, and testing of the products and the development activity.

The verification process provides objective evidence that the system, software, or hardware and its associated products [IEE 12]:

  • conform to requirements (e.g., for correctness, completeness, consistency, and accuracy) for all life cycle activities during each life cycle process (acquisition, supply, development, operation, and maintenance); refer to the quality characteristics of requirements listed in section 1.3.1 of Chapter 1;
  • satisfy standards, practices, and conventions during life cycle processes;
  • successfully complete each life cycle activity and satisfy all the criteria for initiating succeeding life cycle activities.

The validation process provides evidence that the system, software, or hardware and its associated products [IEE 12]:

  • satisfy requirements allocated to it at the end of each life cycle activity;
  • solve the right problem (e.g., correctly model physical laws, implement business rules, and use the proper system assumptions);
  • satisfy intended use and user needs.

7.3.1.2 Purpose of IEEE 1012

The intention of this standard is to perform the following [IEE 12]:

  • establish a common framework for all the V&V processes, activities and tasks in support of the system, software, and hardware life cycle processes;
  • define the V&V tasks, required inputs, and required outputs in each life cycle process;
  • identify the minimum V&V tasks corresponding to a four-level integrity scheme;
  • define the content of the V&V Plan.

7.3.1.3 Field of Application

IEEE 1012 applies to all types of systems. When executing V&V for a system, software, or hardware element, it is important to pay special attention to the interactions with the system.

A system provides the capacity to satisfy a need or an objective by combining one or more of the following elements: processes, hardware, software, facilities, and human resources. These relationships require that the V&V processes address interactions with all of the system elements. Since software interconnects all the key elements of a digital system, the V&V processes also examine the interactions with every key component of the system to determine the impact of each element on the software. The V&V processes take the following system interactions into account [IEE 12]:

  • environment: determines that the system correctly accounts for all conditions, natural phenomena, physical laws of nature, business rules, and physical properties and the full range of the system operating environment.
  • operators/users: determines that the system communicates the proper status/condition of the system to the operator/user and correctly processes all operator/user inputs to produce the required results. For incorrect operator/user inputs, assure that the system is protected from entering into a dangerous or uncontrolled state. Validate that operator/user policies and procedures (e.g., security, interface protocols, data representations, and system assumptions) are consistently applied and used across each component interface.
  • other software, hardware, and systems: determines that the software or hardware component interfaces correctly with other components in the system in accordance with requirements and that errors are not propagated between components of the system.

7.3.1.4 Expected Benefits of V&V

The expected benefits of V&V are [IEE 12]:

  • facilitate early detection and correction of anomalies;
  • enhance management insight into process and product risks;
  • support the life cycle processes to assure conformance to program performance, schedule, and budget;
  • provide an early assessment of performance;
  • provide objective evidence of conformance to support a formal certification process;
  • improve the products from the acquisition, supply, development, and maintenance processes;
  • support process improvement activities.

7.3.2 Integrity Levels

IEEE 1012 uses integrity levels to identify the V&V tasks that should be executed depending on the risk. High integrity level systems and software require more emphasis on V&V processes, as well as a more rigorous execution of the V&V tasks in the project.

Table 7.1 lists the IEEE 1012 definition of each of the four integrity levels and their expected consequences.

Table 7.1 Definition of Consequences [IEE 12]

Consequence Definition
Catastrophic Loss of human life, complete mission failure, loss of system security and safety, or extensive financial or social loss.
Critical Major and permanent injury, partial loss of mission, major system damage, or major financial or social loss.
Marginal Severe injury or illness, degradation of secondary mission, or some financial or social loss.
Negligible Minor injury or illness, minor impact on system performance, or operator inconvenience.

Table 7.2 presents an example of a four-level integrity framework that takes the notion of risk into account. It is based on the possible consequences and on risk mitigation.

Table 7.2 Integrity Levels and the Description of Consequences [IEE 12]

Software integrity level Description
4 An error to a function or system feature that causes the following:
  • catastrophic consequences to the system with reasonable, probable, or occasional likelihood of occurrence of an operating state that contributes to the error;
or
  • critical consequences with reasonable or probable likelihood of occurrence of an operating state that contributes to the error.
3 An error to a function or system feature that causes the following:
  • catastrophic consequences with occasional or infrequent likelihood of occurrence of an operating state that contributes to the error;
or
  • critical consequences with probable or occasional likelihood of occurrence of an operating state that contributes to the error;
or
  • marginal consequences with reasonable or probable likelihood of occurrence of an operating state that contributes to the error.
2 An error to a function or system feature that causes the following:
  • critical consequences with infrequent likelihood of occurrence of an operating state that contributes to the error;
or
  • marginal consequences with probable or occasional likelihood of occurrence of an operating state that contributes to the error;
or
  • negligible consequences with reasonable or probable likelihood of occurrence of an operating state that contributes to the error.
1 An error to a function or system feature that causes the following:
  • critical consequences with infrequent likelihood of occurrence of an operating state that contributes to the error;
or
  • marginal consequences with occasional or infrequent occurrence of an operating state that contributes to the error;
or
  • negligible consequences with probable, occasional, or infrequent likelihood of occurrence of an operating state that contributes to the error.

Table 7.3 illustrates the risk-based framework using the four levels of integrity and their potential consequences described in Tables 7.1 and 7.2. Each cell of Table 7.3 attributes an integrity level on the basis of the potential consequence of a defect and its probability of occurring in an operating state that contributes to the failure. Some of the cells in this table reflect more than one integrity level. This is an indication that the final assignment of the integrity level by a project team can be selected to reflect the system requirements and the need for risk mitigation.

Table 7.3 Example of Probability Combinations of Integrity Levels and Consequences [IEE 12]

Likelihood of occurrence of an operating state that contributes to the error (decreasing order of likelihood)
Consequence    Reasonable    Probable    Occasional    Infrequent
Catastrophic 4 4 4 or 3 3
Critical 4 4 or 3 3 2 or 1
Marginal 3 3 or 2 2 or 1 1
Negligible 2 2 or 1 1 1
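Table 7.3 can be encoded as a simple lookup. The sketch below (a minimal illustration, not part of the standard's text) returns the candidate integrity level(s) for a consequence/likelihood pair, leaving the final choice to the project team when more than one level applies:

```python
# Candidate integrity levels per (consequence, likelihood), transcribed from Table 7.3.
INTEGRITY_TABLE = {
    "catastrophic": {"reasonable": [4], "probable": [4],    "occasional": [4, 3], "infrequent": [3]},
    "critical":     {"reasonable": [4], "probable": [4, 3], "occasional": [3],    "infrequent": [2, 1]},
    "marginal":     {"reasonable": [3], "probable": [3, 2], "occasional": [2, 1], "infrequent": [1]},
    "negligible":   {"reasonable": [2], "probable": [2, 1], "occasional": [1],    "infrequent": [1]},
}

def candidate_levels(consequence, likelihood):
    """Return the integrity level(s) Table 7.3 allows for this combination."""
    return INTEGRITY_TABLE[consequence.lower()][likelihood.lower()]

print(candidate_levels("Critical", "Probable"))  # [4, 3]
```

When the lookup returns more than one level, the project team selects the final level to reflect the system requirements and the need for risk mitigation, as noted above.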

Tools that generate or translate source code (e.g., compilers, optimizers, code generators) are characterized by the same integrity level as the software they are used for. As a general rule, the integrity level assigned to a project should be the highest integrity level of any of the components of a system, even if there is only one critical component.

The integrity level assignment process should be consistent and should be reassessed throughout the project development life cycle. The rigor and intensity of the V&V and documentation activities in the project should be commensurate with its integrity level. As the integrity level of a project lowers, the rigor and intensity of V&V should be diminished accordingly. For example, a risk analysis conducted for a project at integrity level 4 will be formally documented and will investigate failures at the module level, while a risk analysis at integrity level 3 could assess only the important failure scenarios and be documented informally during a design review.

The four-level integrity framework is essentially used for the V&V practices recommended by IEEE 1012. The next section provides an example of V&V practices recommended for the software requirements activity.

7.3.3 Recommended V&V Activities for Software Requirements [IEE 12]

The recommended V&V activities for software requirements address functional and non-functional software requirements, interface requirements, system qualification requirements, security and safety, data definition, user documentation, installation, acceptance, operation, and ongoing maintenance of the software. The V&V test planning is initiated at the same time as V&V activities for software requirements and continues throughout many other V&V activities.

The objectives of the V&V activities for software requirements are to ensure that they are correct, complete, accurate, testable and consistent with the system software requirements. The V&V effort for software requirements, for any integrity level, shall perform:

  • requirements evaluation;
  • interface analysis;
  • traceability analysis;
  • criticality analysis;
  • software qualification test plan V&V;
  • software acceptance test plan V&V;
  • hazard analysis;
  • security analysis;
  • risk analysis.

The following table, from IEEE 1012, indicates the minimum V&V tasks that must be executed at each integrity level. For example, for the traceability analysis task, the standard indicates an “X” where the task is recommended (here, for the three integrity levels shown in Table 7.4). Security analysis, by contrast, is recommended for levels 3 and 4 only.

Table 7.4 Minimum V&V Tasks by Integrity Level

  Integrity level
Minimum V&V tasks 1 2 3 4
Traceability analysis   X X X
Security analysis     X X

Source: Adapted from IEEE (2012) [IEE 12].
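The Table 7.4 excerpt can likewise be turned into a lookup that lists which minimum tasks apply at a given integrity level. The mapping below covers only the two tasks shown in the excerpt; the full standard defines many more:

```python
# Integrity levels at which each task is recommended (from the Table 7.4 excerpt only).
MIN_TASKS = {
    "Traceability analysis": {2, 3, 4},
    "Security analysis": {3, 4},
}

def tasks_for(level):
    """Minimum V&V tasks recommended at the given integrity level."""
    return sorted(task for task, levels in MIN_TASKS.items() if level in levels)

print(tasks_for(2))  # ['Traceability analysis']
print(tasks_for(4))  # ['Security analysis', 'Traceability analysis']
```

A V&V planner could extend this mapping with the standard's full task table and use it to generate the task list for the project's assigned integrity level.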

Table 7.5 describes the V&V tasks recommended for the traceability analysis of software requirements.

Table 7.5 Description of the Traceability Task [IEE 12]

Requirements for V&V (Process: Development)

V&V task: Traceability analysis

Trace the software requirements (SRS and IRS) to the system requirements (concept documentation) and the system requirements to the software requirements. Analyze the identified relationships for correctness, consistency, completeness, and accuracy. The task criteria are as follows:

  • Correctness: Validate that the relationships between each software requirement and its system requirement are correct.
  • Consistency: Verify that the relationships between the software and system requirements are specified to a consistent level of detail.
  • Completeness: Verify that every software requirement is traceable to a system requirement with sufficient detail to show conformance to the system requirement, and that all system requirements related to software are traceable to software requirements.
  • Accuracy: Validate that the system performance and operating characteristics are accurately specified by the traced software requirements.

Required inputs: Concept documentation (system requirements); Software requirements specification (SRS); Interface requirements specification (IRS)

Required outputs: Task report(s): Traceability analysis; Anomaly report(s)
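The completeness criteria of Table 7.5, namely that every software requirement traces to a system requirement and vice versa, lend themselves to automation. The sketch below, using hypothetical requirement identifiers, flags untraced items in both directions:

```python
def completeness_gaps(trace, system_reqs):
    """Return (software reqs with no system trace, system reqs never traced).

    trace: dict mapping each software requirement id to the set of system
    requirement ids it traces to (an empty set means untraced).
    """
    untraced_sw = {sw for sw, targets in trace.items() if not targets}
    traced_sys = set().union(*trace.values()) if trace else set()
    untraced_sys = set(system_reqs) - traced_sys
    return untraced_sw, untraced_sys

# Hypothetical identifiers for illustration only.
trace = {"SRS-1": {"SYS-1"}, "SRS-2": {"SYS-1", "SYS-2"}, "SRS-3": set()}
system_reqs = {"SYS-1", "SYS-2", "SYS-3"}
print(completeness_gaps(trace, system_reqs))  # ({'SRS-3'}, {'SYS-3'})
```

Each item flagged by such a check would be recorded in an anomaly report, as required by the task outputs in Table 7.5.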

7.4 V&V According to ISO/IEC/IEEE 12207

The ISO 12207 [ISO 17] standard also presents requirements for V&V processes. We will not describe all the details here, but provide a high-level view of the V&V processes, their purpose, and their outcomes.

7.4.1 Verification Process

The purpose of the verification process is to provide objective evidence that a system or system element fulfills its specified requirements and characteristics.

The verification process identifies the anomalies (errors, defects, or faults) in any information item (e.g., system/software requirements or architecture description), implemented system elements, or life cycle processes using appropriate methods, techniques, standards, or rules. This process provides the necessary information to determine resolution of identified anomalies.

As a result of the successful implementation of the verification process [ISO 17]:

  • constraints of verification that influence the requirements, architecture, or design are identified;
  • any enabling systems or services needed for verification are available;
  • the system or system element is verified;
  • data providing information for corrective actions are reported;
  • objective evidence that the realized system fulfills the requirements, architecture, and design is provided;
  • verification results and anomalies are identified;
  • traceability of the verified system elements is established.

7.4.2 Validation Process

The purpose of the validation process is to provide objective evidence that the system, when in use, fulfills its business or mission objectives and stakeholder requirements, achieving its intended use in its intended operational environment.

The objective of validating a system or system element is to acquire confidence in its ability to achieve its intended mission, or use, under specific operational conditions. Validation should be approved by the stakeholders of the project. This process provides the necessary information so that identified anomalies can be resolved by the appropriate technical process where the anomaly was created.

As a result of the successful implementation of the validation process [ISO 17]:

  • validation criteria for stakeholder requirements are defined;
  • the availability of services required by stakeholders is confirmed;
  • constraints of validation that influence the requirements, architecture, or design are identified;
  • the system or system element is validated;
  • any enabling systems or services needed for validation are available;
  • validation results and anomalies are identified;
  • objective evidence that the realized system or system element satisfies stakeholder needs is provided;
  • traceability of the validated system elements is established.

7.5 V&V According to the CMMI Model

Another perspective on V&V can be found in process models like the CMMI. The staged representation of the CMMI for Development [SEI 10a] has two process areas, at maturity level 3, dedicated to V&V. Preparation for verification is the first step suggested by the CMMI. It consists of selecting the life cycle phase outputs and the verification methods for each product, in order to prepare the verification activity and environment according to the specific needs of the project. It also suggests that verification success criteria and an iterative procedure be put in place, in parallel with the product design activities.

The purpose of verification is to ensure that selected work products meet their specified requirements. The verification process area includes the following specific goals (SG) and specific practices (SP) [SEI 10a]:

  • SG 1 Prepare for verification
    • SP 1.1 Select work products for verification,
    • SP 1.2 Prepare the verification environment,
    • SP 1.3 Establish verification procedures and criteria;
  • SG 2 Perform peer reviews
    • SP 2.1 Prepare for peer reviews,
    • SP 2.2 Conduct peer reviews,
    • SP 2.3 Analyze peer review data;
  • SG 3 Verify selected work products
    • SP 3.1 Perform verification,
    • SP 3.2 Analyze verification results.

CMMI-DEV recommends inspections and walk-throughs for peer reviews as they have been described in a previous chapter.

The purpose of validation is to demonstrate that a product or product component fulfills its intended use when placed in its intended environment. The validation process area includes the following SG and SP [SEI 10a]:

  • SG 1 Prepare for validation
    • SP 1.1 Select products for validation,
    • SP 1.2 Establish the validation environment,
    • SP 1.3 Establish validation procedures and criteria;
  • SG 2 Validate product or product components
    • SP 2.1 Perform validation,
    • SP 2.2 Analyze validation results.

Validation can be applied to all aspects of the product within its target operational environment: operation, training, maintenance, and support. Validation should be executed in a real operational environment with actual data volumes.

V&V activities are often executed together and can use the same environment. End-users are usually invited to conduct the validation activities.

7.6 ISO/IEC 29110 and V&V

The ISO 29110 standard for very small entities has already been introduced. Elements of ISO 12207 V&V processes have been used to develop ISO 29110 standards and guides. This section shows how these very small organizations can conduct V&V using one of the four recommended profiles: the ISO 29110 Basic profile. This profile describes two processes: a project management (PM) process and a software implementation (SI) process.

One of the seven objectives of the PM process is to prepare a project plan describing the activities and tasks for the development of software for a specific customer. Required tasks and resources are sized and estimated early on. In this plan, V&V tasks are described, reviewed between the development team and the customer, and then approved.

One of the objectives of the SI process is that the V&V tasks for each identified work product be performed according to stated exit criteria, to ensure coherence between the outputs and inputs of each development task. Defects are identified and corrected, and the quality records are stored in the V&V report.

Table 7.6 lists the V&V tasks of the software implementation process. For each task, the table shows the roles of the people executing it, a brief description, and the input and output work products with their states (in brackets). The following acronyms are used for roles: TL for technical lead, AN for analyst, PR for programmer, CUS for customer, and DES for designer.

Table 7.6 V&V Task List of the Implementation Process of ISO 29110 [ISO 11e]

Role: AN, TL
Task: SI.2.3 Verify and obtain approval of the Requirements Specification.
Verify the correctness and testability of the requirements specification and its consistency with the product description. Additionally, review that the requirements are complete, unambiguous, and not contradictory. The results found are documented in the verification results and corrections are made until the document is approved by AN. If significant changes were needed, initiate a change request.
Input work products: Requirements specification; Project plan
Output work products: Verification results; Requirements specification [verified]; Change request [initiated]

Role: CUS, AN
Task: SI.2.4 Validate and obtain approval of the Requirements Specification.
Validate that the requirements specification satisfies needs and agreed-upon expectations, including user interface usability. The results found are documented in the validation results and corrections are made until the document is approved by the CUS.
Input work products: Requirements specification [verified]
Output work products: Validation results; Requirements specification [validated]

Role: AN, DES
Task: SI.3.4 Verify and obtain approval of the Software Design.
Verify the correctness of the software design documentation, its feasibility, and its consistency with the requirements specification. Verify that the traceability record contains the adequate relationships between the requirements and the software design elements. The results found are documented in the verification results and corrections are made until the document is approved by DES. If significant changes are needed, initiate a change request.
Input work products: Software design; Traceability record; Requirements specification [validated, baselined]
Output work products: Verification results; Software design [verified]; Traceability record [verified]; Change request [initiated]

Role: DES, AN
Task: SI.3.6 Verify and obtain approval of the Test Cases and Test Procedures.
Verify consistency among the requirements specification, the software design, and the test cases and test procedures. The results found are documented in the verification results and corrections are made until the document is approved by AN.
Input work products: Test cases and test procedures; Requirements specification [validated, baselined]; Software design [verified, baselined]
Output work products: Verification results; Test cases and test procedures [verified]

Role: PR, DES
Task: SI.5.8 Verify and obtain approval of the *Product Operation Guide. *(Optional)
Verify consistency of the product operation guide with the software. The results found are documented in the verification results and corrections are made until the document is approved by DES.
Input work products: *Product operation guide; Software [tested]
Output work products: Verification results; *Product operation guide [verified]

Role: AN, CUS
Task: SI.5.10 Verify and obtain approval of the *Software User Documentation. *(Optional)
Input work products: *Software user documentation; Software [tested]
Output work products: Verification results; *Software user documentation [verified]

Role: DES, TL
Task: SI.6.4 Verify and obtain approval of the Maintenance Documentation.
Verify consistency of the maintenance documentation with the software configuration. The results found are documented in the verification results and corrections are made until the document is approved by TL.
Input work products: Maintenance documentation; Software configuration
Output work products: Verification results; Maintenance documentation [verified]

The ISO 29110 Basic profile imposes a minimum number of V&V tasks to ensure that, even with a small budget, the end product will meet the requirements and needs of the customer.

ISO 29110 also suggests that a verification result file be updated in order to record the V&V activity results. Table 7.7 shows an example of the proposed format for this important project quality record.

Table 7.7 Example of a Verification Result File [ISO 11e]

Name Description
Verification results Documents the verification execution. It may include the record of:
  • participants
  • date
  • place
  • duration
  • verification check-list
  • passed items of verification
  • failed items of verification
  • pending items of verification
  • defects identified during verification
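
The fields of Table 7.7 can be sketched as a small data structure. This is only an illustration: the class, field names, and approval rule below are assumptions, not part of the ISO 29110 deliverable definitions.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class VerificationResults:
    # Fields mirror the record contents listed in Table 7.7.
    participants: list[str]
    held_on: date
    place: str
    duration_minutes: int
    checklist: str
    passed_items: list[str] = field(default_factory=list)
    failed_items: list[str] = field(default_factory=list)
    pending_items: list[str] = field(default_factory=list)
    defects: list[str] = field(default_factory=list)

    def is_approved(self) -> bool:
        # Assumed rule: a document is approvable only once no item
        # failed verification and none is still pending.
        return not self.failed_items and not self.pending_items

record = VerificationResults(
    participants=["AN", "TL"],
    held_on=date(2024, 3, 1),
    place="Room 12",
    duration_minutes=90,
    checklist="REQ v1.2",
    passed_items=["REQ-001", "REQ-002"],
    failed_items=["REQ-003"],
)
print(record.is_approved())  # False until REQ-003 is corrected and re-verified
```

Keeping the record in a structured form like this makes it easy to verify, at the end of an iteration, that no failed or pending item has been overlooked.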

7.7 Independent V&V

IV&V consists of V&V activities conducted by an independent organization. It can be used to supplement internal V&V and is often applied to highly critical software: medical devices, metro and railway control systems, and airplane navigation systems.

Technical independence requires that the V&V effort use personnel who are not involved in the development of the system or its elements. Managerial independence requires that the responsibility for the IV&V effort be vested in an organization separate from the development and program management organizations. Financial independence requires that control of the IV&V budget be vested in an organization independent of the development organization.

7.7.1 IV&V Advantages with Regard to SQA

SQA and V&V are the main organizational processes, that is, the “watchdogs,” put in place to ensure process, product, and service quality. Since software development is under constant pressure to deliver, a counterbalance is needed so that quality is not forgotten. Internal politics can interfere with these processes, which is why IV&V can be useful.

Given that SQA is part of the development organizational process, this function sometimes has very little influence when there are schedule and cost pressures. The IV&V process is like an external watchdog representing the client's interests and not those of the developers.

Figure 7.5 describes the relationships between customer, supplier, and IV&V.


Figure 7.5 Relationship between IV&V, supplier, and customer [EAS 96].

7.8 Traceability

Software traceability is a simple V&V technique that ensures that all the user requirements have been:

  • documented in specifications;
  • developed and documented in the design document;
  • implemented in the source code;
  • tested;
  • delivered.

Traceability facilitates the development of test plans and test cases. It ensures that the resulting tests have covered all the approved requirements. With traceability, we focus on detecting the following situations: a need without a specification, a specification without a design element, or a design element without source code or tests.

7.8.1 Traceability Matrix

A software traceability matrix is a simple tool that can be developed to facilitate traceability. This matrix is completed at each phase of the development life cycle, but for the matrix information to be useful, the user requirements must have been well defined, documented, and reviewed. During the project, requirements will evolve (e.g., requirements will be added, deleted, and modified). The organization must manage its processes to ensure the matrix is kept up to date, or it will become useless. Traceability of requirements is explained in the CMMI-DEV in two separate process areas: (1) requirements development and (2) requirements management. You can read more about traceability by referring to the CMMI-DEV [SEI 10a].

For small projects that have only 20 requirements, it is easy to develop such a matrix. For large projects, specialized tools like IBM Rational DOORS are available to support this functionality.

Table 7.8 presents an example of a basic traceability matrix with only four columns: (1) requirements; (2) source code; (3) tests; and (4) test success indicator.

Table 7.8 Example of a Simple Traceability Matrix

Requirement   Code       Test       Test success indicator
Ex 001        CODE 001   Test 001   Pass
                         Test 002   Pass
                         Test 003   Fail
Ex 001        CODE 002   Test 004   ……
                         Test 005   ……
Ex 002        CODE 003   Test 006   ……
                         Test 007   ……
                         Test 008   ……
Ex 003        CODE 004   Test 010   …..
                         Test 011   …..

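A matrix like Table 7.8 can also be kept in machine-readable form and audited automatically for the situations traceability is meant to detect: a requirement without code, without tests, or with failing tests. The data and names below are illustrative; this is a sketch, not a substitute for a tool such as IBM Rational DOORS.

```python
# Hypothetical traceability data, loosely modeled on Table 7.8.
matrix = {
    "Ex 001": {"code": ["CODE 001", "CODE 002"],
               "tests": {"Test 001": "Pass", "Test 002": "Pass", "Test 003": "Fail"}},
    "Ex 002": {"code": ["CODE 003"],
               "tests": {"Test 006": "Pass", "Test 007": "Pass", "Test 008": "Pass"}},
    "Ex 003": {"code": [], "tests": {}},  # a requirement not yet implemented
}

def audit(matrix):
    """Flag requirements with no traced code, no traced tests, or failing tests."""
    findings = []
    for req, links in matrix.items():
        if not links["code"]:
            findings.append(f"{req}: no source code traced")
        if not links["tests"]:
            findings.append(f"{req}: no tests traced")
        elif any(result == "Fail" for result in links["tests"].values()):
            findings.append(f"{req}: failing tests")
    return findings

for finding in audit(matrix):
    print(finding)
```

Run against the sample data, the audit reports the failing test of Ex 001 and the missing code and tests of Ex 003, exactly the gaps a manual review of the matrix would look for.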
To illustrate the importance of traceability, the failure of the Mars Polar Lander mission in 1999 is described in the following text box. The NASA failure report pointed to the premature shutdown of the propulsion engine 40 meters above the surface of Mars [JPL 00].

7.8.2 Implementing Traceability

The first step is to document the traceability process, indicating “who does what,” and to assign responsibility for documenting and updating the content of the matrix for the project. The matrix can then be created, as illustrated in this chapter, using each requirement's identification number. As other components pertaining to the requirements are produced, such as design, code, or tests, they are added to the matrix. This continues until all tests are successful.

Once the development team has accepted this new practice, then additional information can be added to the traceability matrix. For example, on the left of Table 7.8 we could add a column to paste the original text of the needs of the customer. Finally, at the far right we could add what technique was used to verify the requirement, that is, a test (T), a demonstration (D), a simulation (S), an analysis (A), or an inspection (I).

7.9 Validation Phase of Software Development

In some organizations, validation activities have been regrouped into a single development life cycle phase, often located at the end of the process. The objective of this last phase is to prove that the software meets the initial requirements, that is, that the right product was developed. The software is tested by the end-users to ensure it is fit for use in a real environment. The validation plan scenarios and test cases are developed and baselined during the integration and test phase.

Figure 7.6 presents a validation process using the Entry-Task-Verification-eXit (ETVX) notation presented earlier in this book. In certain situations, the validation phase will be split into many steps [CEG 90]:

  • testing in the presence of the customer or its representative;
  • installation of the software in the operational environment;
  • user acceptance testing, where the software is accepted as is, accepted on the condition that defects are corrected, or rejected. When the software is accepted conditionally, the errors detected during the tests must be corrected and the software tested again before the customer accepts it;
  • end-user trial testing: use, in production, of the software in trial mode;
  • warranty period, where the system is delivered and used, defects are corrected and change requests are processed;
  • software final acceptance.

Figure 7.6 Validation representation of a process using the ETVX process notation [CEG 90] (© 1990 - ALSTOM Transport SA).

The validation phase is very important for the organization. Indeed, the success of this phase will lead to the transfer of the software to the client and, more importantly, to payment to the supplier when a contract is involved. For the developer it is often followed by a final project review where lessons learned are compiled to be used for process improvement.

The end of the validation also leads to the use, in production, of the software and the start of the support phase. The transition to maintenance is also an important phase of the life cycle. Even if there are still minor defects, they will be addressed during the maintenance phase.

During the validation phase, a series of tests is performed. It is not uncommon for anomalies to be detected and minor changes to be required. When corrections or changes are made, the corrected components must be retested, regression testing must be performed, and the configuration management process must be used to ensure that the changes are reflected in all of the documentation. At this time, the traceability matrix is used to ensure that all documents in the process have been corrected.

Validation can also lead to product qualification or even external certification in certain domains. For example, the Food and Drug Administration (FDA) requires a pre-market submission to the FDA before the release of the software in some situations.

7.9.1 Validation Plan

A software validation plan, written by the project manager, lists the organization and resources required to validate software. It should be approved during the software specification review and it describes:

  • the validation activities planned as well as the roles, responsibilities, and resources assigned;
  • the grouping of the test iterations, steps, and objectives.

To develop this plan, the project manager can use the following source documents: contractual documents, project plan, specifications document, system validation plan (if applicable), software quality plan, and the organization template for the validation plan and the validation plan checklist. Figure 7.7 describes a typical table of contents for the validation plan.


Figure 7.7 Typical table of contents of a validation plan [CEG 90].

The many roles and responsibilities of the individuals involved in creating this plan can be summarized as follows (adapted from [CEG 90]):

  • the project manager:
    • write the validation plan;
    • get the approval of the client during a review that takes place at the end of the software specification phase;
    • update the plan as required during the subsequent phases;
    • supervise the execution of the validation plan;
  • the tester:
    • execute the validation plan;
    • organize and lead test iterations;
    • produce the test iteration report;
    • raise defect reports and agree on defect severity;
  • test execution support personnel:
    • prepare and configure the test environment;
    • get the testing documents from configuration management;
    • execute test procedures;
    • find defects;
    • correct defects;
    • correct any documentation impacted by the correction of defects;
  • the customer:
    • approve and sign the software validation plan;
    • approve and sign the test iteration minutes;
    • approve the defect correction list;
  • SQA personnel:
    • review the software validation plan and provide comments;
    • verify that the right versions of documents are used;
    • assist with testing iterations;
    • assist the project team during the lessons learned review;
  • configuration management personnel:
    • provide the latest approved versions of documents required for tests;
    • assist the testing team when an error is found or with a minor modification request;
    • prepare deliverables identified in the contract and project plan;
    • archive project artifacts according to guidelines.

The validation plan does not necessarily need to be a document of its own. The information presented here can also be a section within the SQA plan or of the project plan depending on the size of the project.

7.10 Tests

Tests are central to the V&V of software. There are four major categories of tests: development, qualification, acceptance, and operational tests. The following text box provides their definitions.

7.11 Checklists

A checklist is a tool that facilitates the verification of a software product and its documentation. It contains a list of criteria and questions to verify the quality of a process, product, or service. It also ensures the consistency and completeness of the execution of tasks.

An example of a checklist is one that helps detect and classify a defect (e.g., an oversight, a contradiction, or an omission). A checklist can also be used to ensure that a list of tasks to be accomplished was completed, like a “to do list.” Elements of a checklist are specific to the document, activity, or process. For example, a verification checklist to review a plan is different from a code review checklist. In this section, the following topics are presented:

  • how to develop a checklist;
  • how to use a checklist;
  • how to improve and manage a checklist.

We also provide examples of different types of checklists. The following text box describes an anecdote about the creation of the first checklist.


7.11.1 How to Develop a Checklist

There are two popular approaches to developing a checklist. The first is to take an existing list, such as the ones available in this book or found on the Internet, and adapt it to your needs. The second is to build a checklist from the errors, omissions, and problems already noted during document reviews and lessons learned reviews. We will see how to improve these lists below.

According to Gilb, checklists are developed according to some rules [GIL 93]:

  • a checklist must be derived, among other things, from process rules or from a standard;
  • a checklist should include a reference to the rule it is inspired from and that it is interpreting;
  • a checklist should not exceed one page because it is difficult to memorize and effectively use a list containing more than twenty items to be checked;
  • a keyword should describe each item in the list; this facilitates its retention;
  • a checklist must include a version number and the date of the last update;
  • the checklist items can be stated using a sentence structure that responds in the affirmative if the condition is satisfied. For example, regarding the clarity of a requirement: “the requirement is clear” and not “the requirement is not ambiguous”;
  • a checklist can contain a classification, for example, the severity of defects: major or minor;
  • a checklist should not contain all possible questions or details, as a concise list should focus on key issues and steps that need to be executed sequentially;
  • a checklist should be kept updated to reflect the experience gained by the organization and its developers.

Lastly, a checklist should be included in the training of the individual user. It does not replace the knowledge required to perform the tasks listed. Table 7.9 describes a checklist used to classify defects.

Table 7.9 Example of Defect Classification Scheme [CHI 02]

Defect class number Defect type Description
10 Documentation Comments, messages
20 Syntax Spelling, punctuation, instruction format
30 Build, package Change management, library, version management
40 Assignment Declaration, name duplication, scope, limits
50 Interface Procedure call, input/output (I/O), user format
60 Validation checks Error messages, inadequate validation
70 Data Structure, content
80 Function Logic, pointers, loops, recursion, calculations, function call defect
90 System Configuration, timing, memory
100 Environment Design, compilation, test, other system support problems
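
As a rough sketch, the classification scheme of Table 7.9 can be encoded so that defects logged during a review are tallied by type; the lookup helper and the sample defect list are assumptions added for illustration.

```python
from collections import Counter

# Defect classes from Table 7.9 [CHI 02].
DEFECT_CLASSES = {
    10: "Documentation", 20: "Syntax", 30: "Build, package",
    40: "Assignment", 50: "Interface", 60: "Validation checks",
    70: "Data", 80: "Function", 90: "System", 100: "Environment",
}

def classify(defect_class: int) -> str:
    # Map a recorded class number to its defect type.
    if defect_class not in DEFECT_CLASSES:
        raise ValueError(f"unknown defect class {defect_class}")
    return DEFECT_CLASSES[defect_class]

# Hypothetical defects logged during one review, tallied by type; the
# dominant type suggests where to strengthen the checklist.
found = [20, 20, 50, 80, 80, 80, 90]
tally = Counter(classify(c) for c in found)
print(tally.most_common(1))  # [('Function', 3)]
```

A tally like this feeds directly into the checklist improvement activity described later in this section: the most frequent defect types deserve their own checklist items.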

Figure 7.8 shows an example of a checklist to verify software requirements, hence the abbreviation REQ used for this list. Note that a keyword has been added for each item of the checklist. Keywords greatly facilitate the memorization of the items of the checklist.


Figure 7.8 A software requirements checklist [GIL 93].

7.11.2 How to Use a Checklist

We present two ways to use a checklist. The first way is to review a document while keeping in mind all the elements of the checklist. The second way is to review the entire document using only one element of the checklist at a time. This second approach is carried out as follows:

  • Use the first item in the checklist to review the document in full. When finished, check off that item on the checklist and move to the next;
  • Continue the review using the second element of the checklist and check off that item on the checklist when done;
  • Continue reviewing the document until all the items on the checklist are checked off;
  • During the review, note defects or errors with the document;
  • After completing the review of the entire document, correct all defects listed;
  • After completing the correction of defects, print the updated version of the document and check all the corrections to ensure none are forgotten;
  • If there were many important corrections, review the entire document again.
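
The “one element at a time” procedure above can be sketched as a nested loop. Everything in this sketch is illustrative: the checklist items, the section names, and the review stub that stands in for the reviewer's human judgment.

```python
# Checklist items as (keyword, criterion) pairs, following Gilb's advice
# that a keyword should describe each item.
checklist = [
    ("clear", "The requirement is clear"),
    ("complete", "The requirement is complete"),
    ("testable", "The requirement is testable"),
]
sections = ["Section 1", "Section 2", "Section 3"]

def review(section: str, criterion: str) -> bool:
    # Stand-in for the reviewer's judgment; flags one defect for the demo.
    return not (section == "Section 2" and "testable" in criterion)

defects = []
for keyword, criterion in checklist:   # take one checklist item...
    for section in sections:           # ...and review the entire document with it
        if not review(section, criterion):
            defects.append((section, keyword))
    # the item is now "checked off"; move to the next one

print(defects)  # [('Section 2', 'testable')]
```

The outer loop is what distinguishes this second approach: the whole document is traversed once per checklist item, and the exit criterion is simply that the outer loop has completed.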

Table 7.10 provides an example of the use of a checklist designed for code review using this approach. The columns on the right are used during the review of each section of the document. For example, for program source code, consider the first item on the checklist, that is, reviewing the initialization step of the program. Once this is done, check off the corresponding box and move on to the next item on the checklist.

Table 7.10 Example of the Use of a Checklist to Verify Component # 1

Name Description 1 2 3 4
Initialization
  • Variables and initialization values:
    • When the program starts
    • At the start of each loop
     
Interfaces
  • Internal interface (procedure call)
  • Input/Output (e.g., display, printout, communication)
  • User (e.g., format, content)
     
Pointers
  • Initialization of pointers to NULL
     

The typical exit or completion criterion for this approach is that all elements of the checklist have been checked.

For both approaches, unless the document to be reviewed is very short (i.e., less than one page), it is suggested not to review a document on screen, but use a hard copy in order to highlight the identified defects. During a review, a paper copy makes it easier to navigate from one page to another of the document and facilitates the identification of omissions and contradictions that may occur in large documents.

7.11.3 How to Improve and Manage a Checklist

Every professional, whether due to training, experience or writing style, makes mistakes. We must update checklists periodically as we learn from our mistakes.

The disadvantage of using a checklist is that the reviewer will focus his attention only on the items that are on the list, which may leave undetected those defects not covered by the checklist. It is therefore important to update the checklist based on the results obtained and not just to follow it blindly.

7.12 V&V Techniques

Tools and techniques can be used to help perform V&V activities. Using these tools is highly dependent on the integrity level of the applications, the maturity of the product, the corporate culture, and the type of development, modeling, and simulation paradigm of individual projects.

The degree to which verification activities can be automated directly influences the overall efficiency of the V&V effort. Since there is no formal process for selecting tools, care must be taken to choose the right ones. Ideally, the modeling and simulation tools used during the design and development phases should be integrated with the verification tools. Validation, however, rarely matches the modeling and simulation processes in detail.

The market for verification tools is large. It is easy to find a list of at least a hundred vendors on the Internet today. These tools fall into the following two categories:

  • generic tools supporting data results from validation:
    • database management systems;
    • data manipulation tools;
    • data modeling tools;
  • formal methods:
    • formal languages;
    • mechanized reasoning tools (automated theorem provers);
    • model checkers.

7.12.1 Introduction to V&V Techniques

Wallace et al. [WAL 96] wrote an excellent technical report presenting the different V&V techniques and it is still current today. First, we present three types of V&V techniques, then we briefly describe these techniques, and finally, we propose techniques for each of the development life cycle phases.

The V&V tasks are composed of three types of techniques: static analysis, dynamic analysis, and formal analysis [WAL 96]:

  • Static analysis techniques are those that directly analyze the content and structure of a product without executing the software. Reviews, inspections, audits, and data flow analysis are examples of static analysis techniques;
  • Dynamic analysis techniques involve the execution or simulation of a developed product looking for errors/defects by analyzing the outputs received following the entry of inputs. For these techniques, the output values or expected ranges of values must be known. Black box testing is the most widely used and well known dynamic V&V technique;
  • Formal analysis techniques use mathematics to analyze the algorithms executed in a product. Sometimes, the software requirements may be written in a formal specification language (e.g., VDM, Z), which can be verified using a formal analysis technique.
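
As a minimal illustration of dynamic analysis, a black box test compares outputs against expected values drawn from the specification, with no knowledge of the implementation. The unit under test below is invented for this example.

```python
def leap_year(year: int) -> bool:
    # Unit under test, invented for this example; its internal logic is
    # irrelevant to a black box test.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Black box cases: (input, expected output) pairs chosen from the
# specification, including boundary values such as century years.
cases = [(2024, True), (2023, False), (1900, False), (2000, True)]

for year, expected in cases:
    actual = leap_year(year)
    print(f"leap_year({year}) -> {actual}",
          "pass" if actual == expected else "FAIL")
```

Note that the cases exercise the specification's boundaries (1900 and 2000 are both century years but only one is a leap year), which is where black box testing earns its keep.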

7.12.2 Some V&V Techniques

7.12.2.1 Algorithms Analysis Technique [WAL 96]

The algorithms analysis technique examines the logic and accuracy of a software's algorithms by transcribing them into a structured language or format. The analysis involves re-deriving equations or evaluating whether specific numerical techniques apply. It checks that the algorithms are correct, appropriate, and stable, and that they meet the accuracy, timing, and sizing requirements. Among other things, the algorithms analysis technique examines the accuracy of equations and numerical techniques and the effects of rounding and truncation.
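
The rounding effects this technique looks for are easy to demonstrate: since 0.1 has no exact binary representation, naive repeated addition drifts, while a compensated summation does not. A minimal sketch:

```python
from math import fsum

# Ten additions of 0.1: naive summation accumulates rounding error.
naive = sum(0.1 for _ in range(10))
print(naive == 1.0)        # False
print(abs(naive - 1.0))    # tiny, but nonzero

# math.fsum tracks exact partial sums and recovers the intended result.
print(fsum(0.1 for _ in range(10)) == 1.0)  # True
```

An algorithm analysis would ask whether such drift, accumulated over millions of iterations, still meets the stated accuracy requirements, and whether a compensated technique is needed.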

7.12.2.2 Interface Analysis Technique [WAL 96]

Interface analysis is a technique used to demonstrate that program interfaces do not contain errors that can lead to failures. The types of interfaces analyzed include external interfaces to the software, internal interfaces between components, interfaces between the software and the system, between the software and the hardware, and between the software and a database.

7.12.2.3 Prototyping Technique [WAL 96]

Prototyping demonstrates the likely results of implementing the software requirements, especially the user interfaces. The review of a prototype can help identify incomplete or incorrect software requirements and can also reveal requirements that would result in undesirable system behavior. For large systems, prototyping can prevent inappropriate designs and development, which can be a costly waste.

7.12.2.4 Simulation Technique [WAL 96]

Simulation is a technique used to evaluate the interactions between large complex systems composed of hardware, software, and users. The simulation uses an “executable” model to examine the behavior of the software. Simulation can be used to test the operator's procedures and to isolate installation problems.

7.13 V&V Plan

The V&V plan essentially answers the following questions: What do we verify and validate? How, when, and by whom will the V&V activities be performed? What level of resources will be required?

IEEE 1012 specifies that the V&V effort starts with the production of a plan that addresses the following list of elements. If an item is not relevant and should not be covered by the project, it is better to state that “This section is not applicable to this project” than to remove the item from the plan. This allows SQA personnel to see clearly that the item was not forgotten by the project team. Of course, additional topics may be added. If elements of the plan are already documented in other documents, the plan should refer to them instead of repeating them. The plan must be maintained throughout the software life cycle. The V&V plan proposed by IEEE 1012 includes the following (without listing the system and hardware V&V elements) [IEE 12]:

  1. Purpose
  2. Referenced documents
  3. Definitions
  4. V&V overview
    4.1 Organization
    4.2 Master schedule
    4.3 Integrity level scheme
    4.4 Resources summary
    4.5 Responsibilities
    4.6 Tools, techniques, and methods
  5. V&V processes
    5.1 Common V&V processes, activities, and tasks
    5.2 System V&V processes, activities, and tasks
    5.3 Software V&V processes, activities, and tasks
      5.3.1 Software concept
      5.3.2 Software requirements
      5.3.3 Software design
      5.3.4 Software construction
      5.3.5 Software integration test
      5.3.6 Software qualification test
      5.3.7 Software acceptance test
      5.3.8 Software installation and checkout (Transition)
      5.3.9 Software operation
      5.3.10 Software maintenance
      5.3.11 Software disposal
    5.4 Hardware V&V processes, activities, and tasks
  6. V&V reporting requirements
    6.1 Task reports
    6.2 Anomaly reports
    6.3 V&V final report
    6.4 Special studies reports (optional)
    6.5 Other reports (optional)
  7. V&V administrative requirements
    7.1 Anomaly resolution and reporting
    7.2 Task iteration policy
    7.3 Deviation policy
    7.4 Control procedures
    7.5 Standards, practices, and conventions
  8. V&V test documentation requirements

7.14 Limitations of V&V

No technique can prevent all errors or defects. Regarding V&V, we note the following limitations [SCH 00]:

  • Impracticability of testing all the data: for most programs, it is virtually impossible to try to review the program with all possible inputs, due to the multitude of possible combinations;
  • Impracticability of testing all the branch conditions: for most programs, it is impractical to try to test all the possible execution paths of a software. This is also due to the multitude of possible combinations;
  • Impracticability of obtaining absolute proof: there is no absolute proof of correctness of a software based system unless formal specifications can prove it to be correct and accurately reflect user expectations.
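
The first limitation is easy to quantify. For a function taking just two 32-bit integers, exhaustive input testing is hopeless even at an optimistic, and here assumed, throughput of a billion tests per second:

```python
# Exhaustive testing of a function of two 32-bit integers: 2**64 inputs.
combinations = 2 ** 64
tests_per_second = 10 ** 9            # assumed, optimistic throughput
seconds_per_year = 60 * 60 * 24 * 365
years = combinations / (tests_per_second * seconds_per_year)
print(f"about {years:.0f} years")     # roughly 585 years of nonstop testing
```

This back-of-the-envelope calculation is why V&V relies on selected test cases, reviews, and analysis rather than exhaustive execution.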

It is not uncommon for test plans to be designed by the developer of the system and then approved by the V&V staff. This practice is far from ideal to guarantee a high level of quality. Although the V&V role should not be part of the development team, sometimes the developer becomes the evaluator of his own software. It is therefore important that the V&V role consist of people who have good knowledge and experience of systems in order to provide sound evaluations of the quality of the resulting product.

7.15 V&V in the SQA Plan

The IEEE 730 standard discusses V&V and starts with a statement that SQA activities need to be coordinated with the verification, validation, review, audit, and other life cycle processes needed to ensure the conformity and quality of the final product. There is no need to duplicate efforts here. The standard asks the project team to ensure that the V&V concerns have been well explained in the V&V or SQA plan.

For verification activities, the standard lists the following questions that the project team members should ask themselves [IEE 14]:

  • has verification between the system requirements and the system architecture been performed?
  • have verification criteria for software items been developed that ensure compliance with the software requirements allocated to the items?
  • has an effective validation strategy been developed and implemented?
  • have appropriate criteria for validation of all required work products been identified?
  • have verification criteria been defined for all software units against their requirements?
  • has verification of the software units compared with the requirements and the design been accomplished?
  • have adequate criteria for verification of all required software work products been identified?
  • have required verification activities been performed adequately?
  • have results of the verification activities been made available to the customer and other involved parties?

For the validation of the software, the standard recommends that the tools used for validation be chosen and assessed based on product risk, and that the project team evaluate whether these tools themselves need validation. If tools are validated, records of this validation must be kept. The standard also asks the team to answer each of the following questions [IEE 14]:

  • have all tools that require validation been validated before using them?
  • has an effective validation strategy been developed and implemented?
  • have appropriate criteria for validation of all required work products been identified?
  • have required validation activities been performed adequately?
  • have problems been identified, recorded, and resolved?
  • has evidence been provided that the software work products as developed are suitable for their intended use?
  • have results of the validation activities been provided to the customer and other involved parties?

Of particular interest in the SQA plan is the acceptance process: the project team will pay special attention to how defects are classified and tracked until an exit criterion is met. It is important to clarify the final testing stage of a project, as it is the last line of defense before going to production.
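As a purely illustrative sketch of such an exit criterion (neither IEEE 730 nor IEEE 1012 prescribes a particular rule; the severity labels and thresholds below are assumptions for the example), a team might check whether the acceptance stage can be exited as follows:

```python
# Hypothetical acceptance exit criterion: no open critical or major
# defects, and at most a small number of open minor defects.
from collections import Counter

def exit_criterion_met(open_defects, max_minor=5):
    """open_defects: list of severity labels for still-open defects."""
    counts = Counter(open_defects)
    return (counts["critical"] == 0
            and counts["major"] == 0
            and counts["minor"] <= max_minor)

print(exit_criterion_met(["minor", "minor"]))  # → True
print(exit_criterion_met(["major", "minor"]))  # → False
```

The value of writing the criterion down explicitly, whatever its exact form, is that it turns the "last line of defense" into an objective, auditable decision rather than a judgment call made under schedule pressure.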

7.16 Success Factors

The execution of V&V practices can be helped or hindered by a number of organizational factors. The following text box lists some of these factors.

7.17 Further Reading

  1. SCHULMEYER G. G. (ed.) Handbook of Software Quality Assurance, 4th edition. Artech House, Norwood, MA, 2008.
  2. WIEGERS K. Software Requirements, 3rd edition. Microsoft Press, Redmond, WA, 2013.

7.18 Exercises

  1. List the key activities of a procedure to verify requirements.

  2. Classify the V&V techniques listed in the following table according to three categories [WAL 96]:

    1. Static analysis: analysis of the structure and form of a product without executing it;
    2. Dynamic analysis: executing or simulating a developed product with the objective of detecting defects by analyzing its outputs based on input scenarios;
    3. Formal analysis: use of mathematical equations and techniques to rigorously analyze algorithms used in a product.

    Technique Static analysis Dynamic analysis Formal analysis
    Algorithm analysis      
    Boundary value analysis      
    Code reading      
    Coverage analysis      
    Control flow analysis      
    Database analysis      
    Data flow analysis      
    Decision (truth) tables      
    Desk-checking      
    Error seeding      
    Software fault tree analysis      
    Finite state machines      
    Functional testing      
    Inspections      
    Interface analysis      
    Interface testing      
    Performance testing      
    Petri-nets      
    Prototyping      
    Regression analysis and testing      
    Reviews      
    Simulation      
    Sizing and timing analysis      
    Software failure mode, effects, and criticality analysis      
    Stress testing      
    Structural testing      
    Symbolic execution      
    Test certification      
    Walk-throughs      
  3. Provide examples of selection criteria for an IV&V service supplier.

  4. Your manager asks you to develop a job description for the position of V&V engineer for critical software products. List the qualifications/experience, accountability, and responsibilities typically required for that position.

  5. For some critical systems, a standard requires the developer to demonstrate to the client that there is no dead code in the final product. Explain why this requirement is imposed by the customer.

  6. List three V&V techniques for each of the development life cycle phases.
