In this chapter, we will look at some of the project management aspects of testing. The Project Management Institute [PMI-2004] defines a project formally as a temporary endeavor to create a unique product or service. This means that every project has a definite beginning and a definite end and that the product or service is different in some distinguishing way from all similar products or services.
Testing is integrated into the endeavor of creating a given product or service; each phase and each type of testing has different characteristics, and what is tested in each version could be different. Hence, testing fully satisfies this definition of a project.
Given that testing can be considered as a project on its own, it has to be planned, executed, tracked, and periodically reported on. We will look at the test planning aspects in the next section. We will then look into the process that drives a testing project. Subsequently, we will look at the execution of tests and the various types of reporting that take place during a testing project. We will conclude this chapter by sharing some of the best practices in test management and execution.
Testing—like any project—should be driven by a plan. The test plan acts as the anchor for the execution, tracking, and reporting of the entire testing project and covers
Failing to plan is planning to fail.
As was explained in the earlier chapters, different testing teams carry out testing for the various phases of testing. A single test plan can be prepared to cover all phases and all teams, or there can be separate plans for each phase or for each type of testing. For example, there need to be plans for unit testing, integration testing, performance testing, acceptance testing, and so on. These can all be part of a single plan or be covered by multiple plans. In situations where there are multiple test plans, there should be one test plan that covers the activities common to all plans. This is called the master test plan.
Scope management pertains to specifying the scope of a project. For testing, scope management entails
It is always good to start from the end-goal or product-release perspective and get a holistic picture of the entire product to decide the scope and priority of testing. Usually, during the planning stages of a release, the features that constitute the release are identified. For example, a particular release of an inventory control system may introduce new features to automatically integrate with supply chain management and to provide the user with various options of costing. The testing teams should get involved early in the planning cycle and understand the features. Knowing the features and understanding them from the usage perspective will enable the testing team to prioritize the features for testing.
The following factors drive the choice and prioritization of features to be tested.
Features that are new and critical for the release The new features of a release set the expectations of the customers and must perform properly. These new features result in new program code and thus have a higher susceptibility and exposure to defects. Furthermore, these are likely to be areas where both the development and testing teams will have to go through a learning curve. Hence, it makes sense to put these features on top of the priority list to be tested. This will ensure that these key features get enough planning and learning time for testing and do not go out with inadequate testing. In order to get this prioritization right, the product marketing team and some select customers participate in identification of the features to be tested.
Features whose failures can be catastrophic Regardless of whether a feature is new or not, any feature the failure of which can be catastrophic or produce adverse business impact has to be high on the list of features to be tested. For example, recovery mechanisms in a database will always have to be among the most important features to be tested.
Features that are expected to be complex to test Early participation by the testing team can help identify features that are difficult to test. This can help in starting the work on these features early and line up appropriate resources in time.
Features which are extensions of earlier features that have been defect prone As we have seen in Chapter 8, Regression Testing, certain areas of code tend to be defect prone, and such areas need very thorough testing so that old defects do not creep in again. Such defect-prone features should be included ahead of more stable features for testing.
A product is not just a heterogeneous mixture of these features. These features work together in various combinations and depend on several environmental factors and execution conditions. The test plan should clearly identify these combinations that will be tested.
Given the limitations on resources and time, it is likely that it will not be possible to test all the combinations exhaustively. During planning time, a test manager should also consciously identify the features or combinations that will not be tested. This choice should balance the requirements of time and resources while not exposing the customers to any serious defects. Thus, the test plan should contain clear justifications of why certain combinations will not be tested and what are the risks that may be faced by doing so.
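The kind of prioritization discussed above can be sketched as a simple weighted scoring of features. The feature names, weights, and rating scales below are invented for this illustration; a real team would calibrate them against its own release.

```python
# Hypothetical illustration: ranking features for testing by weighted factors.
# Feature names, weights, and 1-5 scales are assumptions for this sketch.
features = [
    # (name, is_new, failure_impact 1-5, test_complexity 1-5, defect_prone)
    ("supply-chain integration", True,  4, 4, False),
    ("costing options",          True,  3, 3, False),
    ("database recovery",        False, 5, 4, True),
    ("report formatting",        False, 2, 1, False),
]

def priority(feature):
    """Higher score = test earlier. Weights are arbitrary illustrations."""
    name, is_new, impact, complexity, defect_prone = feature
    return 3 * impact + 2 * int(is_new) + complexity + 2 * int(defect_prone)

ranked = sorted(features, key=priority, reverse=True)
for f in ranked:
    print(f[0], priority(f))
```

Note how a stable but catastrophic-on-failure feature (database recovery) can outrank even the new features, consistent with the discussion above.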
Once we have this prioritized feature list, the next step is to drill down into some more details of what needs to be tested, to enable estimation of size, effort, and schedule. This includes identifying
We have discussed various types of tests in earlier chapters of this book. Each of these types has applicability and usefulness under certain conditions. The test approach/strategy part of the test plan identifies the right type of testing to effectively test a given feature or combination.
The test strategy or approach should result in identifying the right type of test for each of the features or combinations. There should also be objective criteria for measuring the success of a test. This is covered in the next sub-section.
As we have discussed in earlier chapters (especially the chapters on system and acceptance testing), there must be clear entry and exit criteria for different phases of testing. The test strategies for the various features and combinations determine how these features and combinations will be tested. Ideally, tests must be run as early as possible so that the last-minute pressure of running tests after development delays (see the section on Risk Management below) is minimized. However, it is futile to run certain tests too early. The entry criteria for a test specify threshold criteria for each phase or type of test. There may also be entry criteria for the entire testing activity to start. The completion/exit criteria specify when a test cycle or a testing activity can be deemed complete. Without objective exit criteria, it is possible for testing to continue beyond the point of diminishing returns.
A test cycle or a test activity will not be an isolated, continuous activity that can be carried out at one go. It may have to be suspended at various points of time because it is not possible to proceed further. When it is possible to proceed further, it will have to be resumed. Suspension criteria specify when a test cycle or a test activity can be suspended. Resumption criteria specify when the suspended tests can be resumed. Some of the typical suspension criteria include
When such conditions are addressed, the tests can resume.
Scope management identifies what needs to be tested. The test strategy outlines how to do it. The next aspect of planning is the who part of it. Identifying responsibilities, staffing, and training needs addresses this aspect.
A testing project requires different people to play different roles. As discussed in the previous two chapters, there are the roles of test engineers, test leads, and test managers. Roles may also be defined along the dimensions of the modules being tested or the types of testing. These different roles should complement each other. The different role definitions should
Role definitions should not only address technical roles, but also list the management and reporting responsibilities. This includes frequency, format, and recipients of status reports and other project-tracking mechanisms. In addition, responsibilities in terms of SLAs for responding to queries should also be addressed during the planning stage.
Staffing is done based on estimation of the effort involved and the availability of time for release. In order to ensure that the right tasks get executed, the features and tasks are prioritized on the basis of effort, time, and importance.
People are assigned to tasks that achieve the best possible fit between the requirements of the job and the skills and experience levels needed to perform that function. It may not always be possible to find the perfect fit between the requirements and the skills available. In case there are gaps between the requirements and the availability of skills, they should be addressed with appropriate training programs. It is important to plan for such training programs upfront, as they are usually de-prioritized under project pressures.
As a part of planning for a testing project, the project manager (or test manager) should provide estimates for the various hardware and software resources required. Some of the following factors need to be considered.
In addition to all of the above, there are also other implied environmental requirements that need to be satisfied. These include office space, support functions (like HR), and so on.
Underestimation of these resources can lead to considerable slowing down of the testing efforts and this can lead to delayed product release and to de-motivated testing teams. However, being overly conservative and “safe” in estimating these resources can prove to be unnecessarily expensive. Proper estimation of these resources requires co-operation and teamwork among different groups—product development team, testing team, system administration team, and senior management.
The test plan also identifies the deliverables that should come out of the test cycle/testing activity. The deliverables include the following, all reviewed and approved by the appropriate people.
As we will see in the next section, a defect repository gives the status of the defects reported in a product life cycle. Part of the deliverables of a test cycle is to ensure that the defect repository is kept current. This includes entering new defects in the repository and updating the status of defect fixes after verification. We will see the contents of some of these deliverables in the later parts of this chapter.
The scope identified above gives a broad overview of what needs to be tested. This understanding is quantified in the estimation step. Estimation happens broadly in three phases.
We will cover size estimation and effort estimation in this sub-section and address schedule estimation in the next sub-section.
Size estimate quantifies the actual amount of testing that needs to be done. Several factors contribute to the size estimate of a testing project.
Size of the product under test This obviously determines the amount of testing that needs to be done. In general, the larger the product, the greater is the size of testing to be done. Some of the measures of the size of the product under test are as follows.
This methodology of estimating size or complexity of an application is comprehensive in terms of taking into account realistic factors. The major challenge in this method is that it requires formal training and is not easy to use. Furthermore, this method is not directly suited to systems software type of projects.
Extent of automation required When automation is involved, the size of work to be done for testing increases. This is because, for automation, we should first perform the basic test case design (identifying input data and expected results by techniques like condition coverage, boundary value analysis, equivalence partitioning, and so on.) and then scripting them into the programming language of the test automation tool.
Number of platforms and inter-operability environments to be tested If a particular product is to be tested under several different platforms or under several different configurations, then the size of the testing task increases. In fact, as the number of platforms or touch points across different environments increases, the amount of testing increases almost exponentially.
All the above size estimates pertain to “regular” test case development. Estimation of size for regression testing (as discussed in Chapter 8) involves considering the changes in the product and other similar factors.
In order to have a better handle on the size estimate, the work to be done is broken down into smaller and more manageable parts called work breakdown structure (WBS) units. For a testing project, WBS units are typically test cases for a given module, test cases for a given platform, and so on. This decomposition breaks down the problem domain or the product into simpler parts and is likely to reduce the uncertainty and unknown factors.
Size estimate is expressed in terms of any of the following.
Size estimate provides an estimate of the actual ground to be covered for testing. This acts as a primary input for estimating effort. Estimating effort is important because often effort has a more direct influence on cost than size. The other factors that drive the effort estimate are as follows.
Productivity data Productivity refers to the speed at which the various activities of testing can be carried out. This is based on historical data available in the organization. Productivity data can be further classified into the number of test cases that can be developed per day (or some unit of time), the number of test cases that can be run per day, the number of pages of documentation that can be tested per day, and so on. Having these fine-grained productivity data enables better planning and increases the confidence level and accuracy of the estimates.
Reuse opportunities If the test architecture has been designed keeping reuse in mind, then the effort required to cover a given size of testing can come down. For example, if the tests are designed in such a way that some of the earlier tests can be reused, then the effort of test development decreases.
Robustness of processes Reuse is a specific example of process maturity of an organization. Existence of well-defined processes will go a long way in reducing the effort involved in any activity. For example, in an organization with higher levels of process maturity, there are likely to be
All these reduce the need to reinvent the wheel and thus enable reduction in the effort involved.
Effort estimate is derived from size estimate by taking the individual WBS units and classifying them as “reusable,” “modifications,” and “new development.” For example, if parts of a test case can be reused from existing test cases, then the effort involved in developing these would be close to zero. If, on the other hand, a given test case is to be developed fully from scratch, it is reasonable to assume that the effort would be the size of the test case divided by productivity.
Effort estimate is given in person days, person months, or person years. The effort estimate is then translated to a schedule estimate. We will address scheduling in the next sub-section.
Activity breakdown and schedule estimation entail translating the effort required into specific time frames. The following steps make up this translation.
During the effort estimation phase, we have identified the effort required for each of the WBS units, factoring in the effect of reuse. This effort was expressed in terms of person months. If the effort for a particular WBS unit is estimated as, say, 40 person months, it is not possible to trade the "persons" for "months," that is, we cannot indefinitely increase the number of people working on it, expecting the duration to come down proportionally. As stated in [BROO-74], adding more people to an already delayed project is a sure way of delaying the project even further! This is because, when new people are added to a project, it increases the communication overheads and it takes some time for the new members to gel with the rest of the team. Furthermore, these WBS units cannot be executed in any random order because there will be dependencies among the activities. These dependencies can be external dependencies or internal dependencies. External dependencies of an activity are beyond the control and purview of the manager/person performing the activity. Some of the common external dependencies are
Internal dependencies are fully within the control of the manager/person performing that activity. For example, some of the internal dependencies could be.
The testing activities will also face parallelism constraints that will further restrict the activities that can be done at a time. For example, certain tests cannot be run together because of conflicting conditions (for example, requiring different versions of a component for testing) or a high-end machine may have to be multiplexed across multiple tests.
Based on the dependencies and the parallelism possible, the test activities are scheduled in a sequence that helps accomplish the activities in the minimum possible time, while taking care of all the dependencies. This schedule is expressed in the form of a Gantt chart as shown in Figure 15.1. The coloured figure is available on Illustrations.
Communications management consists of evolving and following procedures for communication that ensure that everyone is kept in sync with the right level of detail. Since this is intimately connected with the test execution and progress of the testing project, we will take this up in more detail in Section 15.3 when we take up the various types of reports in a test cycle.
Just like every project, testing projects also face risks. Risks are events that could potentially affect a project's outcome. These events are normally beyond the control of the project manager. As shown in Figure 15.2, risk management entails
As some risks are identified and resolved, other risks may surface. Hence, as risks can happen at any time, risk management is essentially a cycle that goes through the above four steps repeatedly.
Risk identification consists of identifying the possible risks that may hit a project. Although there could potentially be many risks that can hit a project, the risk identification step should focus on those risks that are more likely to happen. The following are some of the common ways to identify risks in testing.
Risk quantification deals with expressing the risk in numerical terms. There are two components to the quantification of risk. One is the probability of the risk happening and the other is the impact of the risk, if the risk happens. For example, the occurrence of a low-priority defect may have a high probability, but a low impact. However, a show stopper may have (hopefully!) a low probability, but a very high impact (for both the customer and the vendor organization). To quantify both of these into one number, risk exposure is used. This is defined as the product of risk probability and risk impact. To make comparisons easy, risk impact is expressed in monetary terms (for example, in dollars).
Risk mitigation planning deals with identifying alternative strategies to combat a risk event, should that risk materialize. For example, a couple of mitigation strategies for the risk of attrition are to spread the knowledge to multiple people and to introduce organization-wide processes and standards. To be better prepared to handle the effects of a risk, it is advisable to have multiple mitigation strategies.
When the above three steps are carried out systematically and in a timely manner, the organization would be in a better position to respond to the risks, should the risks become a reality. When sufficient care is not given to these initial steps, a project may find itself under immense pressure to react to a risk. In such cases, the choices made may not be the most optimal or prudent, as the choices are made under pressure.
The following are some of the common risks encountered in testing projects and their characteristics.
Unclear requirements The success of testing depends a lot on knowing what the correct expected behavior of the product under test is. When the requirements to be satisfied by a product are not clearly documented, there is ambiguity in how to interpret the results of a test. This could result in wrong defects being reported or in the real defects being missed out. This will, in turn, result in unnecessary and wasted cycles of communication between the development and testing teams and consequent loss of time. One way to minimize the impact of this risk is to ensure upfront participation of the testing team during the requirements phase itself.
Schedule dependence The schedule of the testing team depends significantly on the schedules of the development team. Thus, it becomes difficult for the testing team to line up resources properly at the right time. The impact of this risk is especially severe in cases where a testing team is shared across multiple product groups or in a testing services organization (see Chapter 14). A possible mitigation strategy against this risk is to identify a backup project for a testing resource. Such a backup project may be one that could use an additional resource to speed up execution but would not be unduly affected if the resource were not available. An example of such a backup project is chipping in to speed up test automation.
Insufficient time for testing Throughout the book, we have stressed the different types of testing and the different phases of testing. Though some of these types of testing—such as white box testing—can happen early in the cycle, most of the tests tend to happen closer to the product release. For example, system testing and performance testing can happen only after the entire product is ready and close to the release date. Usually these tests are resource intensive for the testing team and, in addition, the defects that these tests uncover are challenging for the developers to fix. As discussed in the performance testing chapter, fixing some of these defects could lead to changes in architecture and design. Carrying out such changes late in the cycle may be expensive or even impossible. Once the developers fix the defects, the testing team has even less time to complete the testing and is under even greater pressure. The use of the V model to shift at least the test design part of the various test types to the earlier phases of the project can help in better anticipating the risks of tests failing at each level. This in turn could lead to a reduction in the last-minute crunch. The metric days needed for release (see Chapter 17), when captured and calculated properly, can help in better planning the time required for testing.
“Show stopper” defects When the testing team reports defects, the development team has to fix them. Certain defects which are show stoppers may prevent the testing team from proceeding further with testing, until development fixes such show stopper defects. Encountering such defects has a double impact on the testing team: Firstly, they will not be able to continue with the testing and hence end up with idle time. Secondly, when the defects do get fixed and the testing team restarts testing, they would have lost valuable time and will be under tremendous pressure with the deadline so much closer. This risk of show stopper defects can pose a big challenge to scheduling and resource utilization of the testing teams. The mitigation strategies for this risk are similar to those for the dependence on development schedules.
Availability of skilled and motivated people for testing As we saw in Chapter 13, People Issues in Testing, hiring and motivating people in testing is a major challenge. Hiring, retaining, and constant skill upgrade of testers in an organization is vital. This is especially important for testing functions because of the tendency of people to look for development positions.
Inability to get a test automation tool Manual testing is error prone and labor intensive. Test automation, as discussed in Chapter 16, alleviates some of these problems. However, test automation tools are expensive. An organization may face the risk of not being able to afford a test automation tool. This risk can in turn lead to less effective and efficient testing as well as more attrition. One of the ways in which organizations may try to reduce this risk is to develop in-house tools. However, this approach could lead to an even greater risk of having a poorly written or inadequately documented in-house tool.
These risks are not only potentially dangerous individually, but even more dangerous when they occur in tandem. Unfortunately, these risks often do happen in tandem! A testing group plans its schedules based on development schedules, development schedules slip, testing team resources become idle, pressure builds, schedules slip, and the vicious cycle starts all over again. It is important that these risks be caught early, before they create serious impact on the testing teams. Hence, we need to identify the symptoms for each of these risks. These symptoms and their impacts need to be tracked closely throughout the project.
Table 15.1 gives typical risks, their symptoms, impacts and mitigation/contingency plans.
In the previous section, we considered testing as a project in its own right and addressed some of the typical project management issues in testing. In this section, we will look at some of the aspects that should be taken care of in planning such a project. These planning aspects are proactive measures that can have an across-the-board influence on all testing projects.
Standards comprise an important part of planning in any organization. Standards are of two types—external standards and internal standards. External standards are standards that a product should comply with, are externally visible, and are usually stipulated by external consortia. From a testing perspective, these standards include standard tests supplied by external consortia and acceptance tests supplied by customers. Compliance to external standards is usually mandated by external parties.
Internal standards are standards formulated by a testing organization to bring in consistency and predictability. They standardize the processes and methods of working within the organization. Some of the internal standards include
Naming and storage conventions for test artifacts Every test artifact (test specification, test case, test results, and so on) have to be named appropriately and meaningfully. Such naming conventions should enable
As an example of using naming conventions, consider a product P, with modules M01, M02, and M03. The test suites can be named as PM01nnnn.<file type>. Here nnnn can be a running sequence number or any other string. For a given test, different files may be required. For example, a given test may use a test script (which provides details of the specific actions to be performed), a recorded keystroke capture file, and an expected results file. In addition, it may require other supporting files (for example, an SQL script for a database). All these related files can have the same file name (for example, PM01nnnn) and different file types (for example, .sh, .SQL, .KEY, .OUT). By such a naming convention, one can find
All files relating to a specific test (for example, by searching for all files with file name PM01nnnn), and
All tests relating to a given module (for example, those files starting with name PM01 will correspond to tests for module M01).
With this, when the functionality corresponding to module M01 changes, it becomes easy to locate tests that may have to be modified or deleted.
This two-way mapping between tests and product functionality through appropriate naming conventions will enable identification of appropriate tests to be modified and run when product functionality changes.
In addition to file-naming conventions, the standards may also stipulate the conventions for directory structures for tests. Such directory structures can group logically related tests together (along with the related product functionality). These directory structures are mapped into a configuration management repository (discussed later in the chapter).
Documentation standards Most of the discussion on documentation and coding standards pertains to automated testing. In the case of manual testing, documentation standards correspond to specifying the user and system responses at the right level of detail that is consistent with the skill level of the tester.
While naming and directory standards specify how a test entity is represented externally, documentation standards specify how to capture information about the tests within the test scripts themselves. Internal documentation of test scripts are similar to internal documentation of program code and should include the following.
Without such detailed documentation, a person maintaining the test scripts is forced to rely only on the actual test code or script to guess what the test is supposed to do or what changes happened to the test scripts. This may not give a true picture. Furthermore, it may place an undue dependence on the person who originally wrote the tests.
Test coding standards Test coding standards go one level deeper into the tests and enforce standards on how the tests themselves are written. The standards may
Test reporting standards Since testing is tightly interlinked with product quality, all the stakeholders must get a consistent and timely view of the progress of tests. Test reporting standards address this issue. They provide guidelines on the level of detail that should be present in the test reports, their standard formats and contents, recipients of the report, and so on. We will revisit this in more detail later in this chapter.
Internal standards provide a competitive edge to a testing organization and act as a first-level insurance against employee turnover and attrition. Internal standards help bring new test engineers up to speed rapidly. When such consistent processes and standards are followed across an organization, it brings about predictability and increases the confidence level one can have on the quality of the final product. In addition, any anomalies can be brought to light in a timely manner.
Testing requires a robust infrastructure to be planned upfront. This infrastructure is made up of three essential elements: a test case database (TCDB), a defect repository, and a software configuration management (SCM) repository.
A test case database captures all the relevant information about the test cases in an organization. Some of the entities and the attributes in each of the entities in such a TCDB are given in Table 15.2.
Table 15.2 Content of a test case database.
Entity | Purpose | Attributes
---|---|---
Test case | Records all the "static" information about the tests |
Test case - product cross-reference | Provides a mapping between the tests and the corresponding product features; enables identification of tests for a given feature |
Test case run history | Gives the history of when a test was run and what was the result; provides inputs on selection of tests for regression runs (see Chapter 8) |
Test case - defect cross-reference | Gives details of test cases introduced to test certain specific defects detected in the product; provides inputs on the selection of tests for regression runs |
A defect repository captures all the relevant details of defects reported for a product. The information that a defect repository includes is given in Table 15.3.
Table 15.3 Information in a defect repository.
| Entity | Purpose | Attributes |
|---|---|---|
| Defect details | Records all the "static" information about the defects | |
| Defect test details | Provides details of test cases for a given defect; cross-references the TCDB | |
| Fix details | Provides details of fixes for a given defect; cross-references the configuration management repository | |
| Communication | Captures all the details of the communication that transpired for this defect among the various stakeholders. These could include communication between the testing team and development team, customer communication, and so on. Provides insights into effectiveness of communication | |
The defect repository is an important vehicle of communication that influences the workflow within a software organization. It also provides the base data for arriving at several of the metrics we will discuss in Chapter 17, Metrics and Measurements. In particular, most of the metrics classified as testing defect metrics and development defect metrics are derived from the data in the defect repository.
Yet another infrastructure element required for a software product organization (and in particular for a testing team) is a software configuration management (SCM) repository. An SCM repository (also known as a CM repository) keeps track of change control and version control of all the files/entities that make up a software product. A particular case of such files/entities is test files. Change control ensures that
Version control ensures that the test scripts associated with a given release of a product are baselined along with the product files. Baselining is akin to taking a snapshot of the set of related files of a version and assigning a unique identifier to this set. In the future, when anyone wants to recreate the environment for the given release, this label enables him or her to do so.
TCDB, defect repository, and SCM repository should complement each other and work together in an integrated fashion as shown in Figure 15.3. For example, the defect repository links the defects, fixes, and tests. The files for all these will be in the SCM. The metadata about the modified test files will be in the TCDB. Thus, starting with a given defect, one can trace all the test cases that test the defect (from the TCDB) and then find the corresponding test case files and source files from the SCM repository.
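The traversal just described (from a defect to its tests via the TCDB, then to the corresponding files via the SCM repository) can be sketched with plain dictionaries standing in for the three repositories. All IDs, file names, and version labels below are made-up assumptions for the illustration.

```python
# Illustrative stand-ins for the three repositories; the keys, IDs, and
# file names are assumptions used only to show the traversal.
defect_repository = {
    "DEF-042": {"tests": ["TC-101", "TC-102"], "fix_files": ["auth.c"]},
}
tcdb = {  # metadata about test cases, including their script files
    "TC-101": {"script": "tests/test_auth_basic.py"},
    "TC-102": {"script": "tests/test_auth_lockout.py"},
}
scm = {  # maps each file under configuration management to its versions
    "tests/test_auth_basic.py": ["1.0", "1.1"],
    "tests/test_auth_lockout.py": ["1.0"],
    "auth.c": ["2.3", "2.4"],
}

def trace_defect(defect_id):
    """Starting from a defect, find its test cases (TCDB) and then the
    corresponding test-script and source files (SCM repository)."""
    defect = defect_repository[defect_id]
    scripts = [tcdb[tc]["script"] for tc in defect["tests"]]
    files = scripts + defect["fix_files"]
    return {f: scm[f] for f in files}

print(trace_defect("DEF-042"))
```

In a real organization these would be separate tools integrated through their APIs, but the linkage (defect to tests to baselined files) is the same.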
Similarly, in order to decide which tests to run for a given regression run,
Developer: These testing folks… they are always nitpicking!
Tester: Why don't these developers do anything right?!
Sales person: When will I get a product out that I can sell?!
People management is an integral part of any project management. Often, it is a difficult chasm for engineers-turned-managers to cross. As an individual contributor, a person relies only on his or her own skills to accomplish an assigned activity; the person is not necessarily trained on how to document what needs to be done so that it can be accomplished by someone else. Furthermore, people management also requires the ability to hire, motivate, and retain the right people. These skills are seldom formally taught (unlike technical skills). Project managers often learn these skills in a “sink or swim” mode, being thrown head-on into the task.
Most of the above gaps in people management apply to all types of projects. Testing projects present several additional challenges. We believe that the success of a testing organization (or an individual in a testing career) depends vitally on judicious people management skills. Since the people and team-building issues are significant enough to be considered in their own right, we have covered these in detail in Chapter 13, on People Issues in Testing, and in Chapter 14, on Organization Structures for Testing Teams. These chapters address issues relevant to building and managing a good global testing team that is effectively integrated into product development and release.
We would like to stress that these team-building exercises should be ongoing and sustained, rather than be done in one burst. The effects of these exercises tend to wear out under the pressure of deadlines of delivery and quality. Hence, they need to be periodically recharged. The important point is that the common goals and the spirit of teamwork have to be internalized by all the stakeholders. Once this internalization is achieved, then they are unlikely to be swayed by operational hurdles that crop up during project execution. Such an internalization and upfront team building has to be part of the planning process for the team to succeed.
Ultimately, the success of a product depends on the effectiveness of integration of the development and testing activities. These job functions have to work in tight unison between themselves and with other groups such as product support, product management, and so on. The schedules of testing have to be linked directly to product release. Thus, project planning for the entire product should be done in a holistic way, encompassing the project plan for testing and development. The following are some of the points to be decided for this planning.
The purpose of the testing team is to identify the defects in the product and the risks that could be faced by releasing the product with the existing defects. Ultimately, the decision to release or not is a management decision, dictated by market forces and weighing the business impact for the organization and the customers.
A test plan combines all the points discussed above into a single document that acts as an anchor point for the entire testing project. A template of a test plan is provided in Appendix B at the end of this chapter. Appendix A gives a checklist of questions that are useful in arriving at a test plan.
An organization normally arrives at a template that is to be used across the board. Each testing project puts together a test plan based on the template. Should any changes be required in the template, such a change is made only after careful deliberation (and with appropriate approvals). The test plan is reviewed by a designated set of competent people in the organization. It is then approved by a competent authority, who is independent of the project manager directly responsible for testing. After this, the test plan is baselined into the configuration management repository. From then on, the baselined test plan becomes the basis for running the testing project. Any significant changes in the testing project should thereafter be reflected in the test plan, and the changed test plan baselined again in the configuration management repository. In addition, any changes needed to the test plan template are periodically discussed among the different stakeholders, so that the template is kept current and applicable to the testing teams.
Using the test plan as the basis, the testing team designs test case specifications, which then becomes the basis for preparing individual test cases. We have been using the term test cases freely throughout this book. Formally, a test case is nothing but a series of steps executed on a product, using a pre-defined set of input data, expected to produce a pre-defined set of outputs, in a given environment. Hence, a test case specification should clearly identify
As we have discussed in Chapter 4, Black Box Testing, a requirements traceability matrix ensures that the requirements make it through the subsequent life cycle phases and do not get orphaned mid-course. In particular, the traceability matrix is a tool to validate that every requirement is tested. The traceability matrix is created during the requirements gathering phase itself by filling up the unique identifier for each requirement. Subsequently, as the project proceeds through the design and coding phases, the unique identifier for design features and the program file name is entered in the traceability matrix. When a test case specification is complete, the row corresponding to the requirement which is being tested by the test case is updated with the test case specification identifier. This ensures that there is a two-way mapping between requirements and test cases.
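The two-way mapping can be made mechanical: a toy traceability matrix, with each row filled in as the project moves through design, coding, and test-case specification, lets us check that no requirement is left untested. The requirement IDs and column names below are illustrative assumptions.

```python
# A toy requirements traceability matrix; IDs and columns are made up.
# Each row starts with only the requirement ID and is filled in as the
# project proceeds through design, coding, and test specification.
traceability_matrix = {
    "REQ-001": {"design": "DES-01", "file": "login.c",  "test_spec": "TS-001"},
    "REQ-002": {"design": "DES-02", "file": "report.c", "test_spec": None},
}

def untested_requirements(matrix):
    """Validate that every requirement is tested: return the requirements
    whose test-case-specification cell is still empty."""
    return [req for req, row in matrix.items() if row["test_spec"] is None]

print(untested_requirements(traceability_matrix))  # ['REQ-002']
```

Run at the end of test design, a check like this flags requirements that would otherwise get orphaned mid-course.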
The test case design forms the basis for writing the test cases. Before writing the test cases, a decision should be taken as to which tests are to be automated and which should be run manually. We have described test automation in detail in Chapter 16. Suffice to say here, some of the criteria that will be used in deciding which scripts to automate include
Based on the test case specifications and the choice of candidates for automation, test cases have to be developed. The development of test cases entails translating the test specifications to a form from which the tests can be executed. If a test case is a candidate for automation, then, this step requires writing test scripts in the automation language. If the test case is a manual test case, then test case writing maps to writing detailed step-by-step instructions for executing the test and validating the results. In addition, the test case should also capture the documentation for the changes made to the test case since the original development. Hence, the test cases should also have change history documentation, which specifies
All the artifacts of test cases—the test scripts, inputs, scripts, expected outputs, and so on—should be stored in the test case database and SCM, as described earlier. Since these artifacts enter the SCM, they have to be reviewed and approved by appropriate authorities before being baselined.
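The change history just mentioned can be captured as a list of dated entries attached to each test case record. A minimal sketch follows; the field names and the sample entry are assumptions, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class ChangeRecord:
    """One entry in a test case's change history documentation."""
    date: str
    author: str
    reason: str       # why the test case was modified

@dataclass
class TestCase:
    tc_id: str
    steps: list                                   # step-by-step instructions
    change_history: list = field(default_factory=list)

# Hypothetical manual test case with one recorded change.
tc = TestCase(
    "TC-001",
    steps=["Launch application", "Enter credentials", "Verify home page"],
)
tc.change_history.append(
    ChangeRecord("2005-03-01", "reviewer1", "Updated for new login flow")
)
print(len(tc.change_history))  # 1
```

Because these records enter the SCM along with the test case, reviewers can see at a glance why a baselined test differs from its original version.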
The prepared test cases have to be executed at the appropriate times during a project. For example, test cases corresponding to smoke tests may be run on a daily basis. System testing test cases will be run during system testing.
As the test cases are executed during a test cycle, the defect repository is updated with
The defect repository should be the primary vehicle of communication between the test team and the development team. As mentioned earlier, the defect repository contains all the information about defects uncovered by testing (and defects reported by customers). All the stakeholders should be referring to the defect repository for knowing the current status of all the defects. This communication can be augmented by other means like emails, conference calls, and so on.
As discussed in the test plan, a test may have to be suspended during its run because of certain show stopper defects. In this case, the suspended test case should wait till the resumption criteria are satisfied. Likewise, a test should be run only when the entry criteria for the test are satisfied and should be deemed complete only when the exit criteria are satisfied.
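The gating rules above (entry, suspension/resumption, and exit criteria) amount to a simple state check around the test run. The sketch below uses hypothetical predicate functions for the criteria; a real project would evaluate the criteria spelled out in its test plan.

```python
def run_test_cycle(entry_ok, tests, showstopper_open, exit_ok):
    """Toy driver for the gating rules: start only when entry criteria
    hold, suspend when a show-stopper defect is open (until resumption
    criteria are satisfied), and declare the cycle complete only when
    the exit criteria are met. The predicates are assumptions."""
    if not entry_ok():
        return "not started: entry criteria unmet"
    for test in tests:
        if showstopper_open():
            return "suspended: waiting for resumption criteria"
        test()
    return "complete" if exit_ok() else "incomplete: exit criteria unmet"

executed = []
status = run_test_cycle(
    entry_ok=lambda: True,
    tests=[lambda: executed.append("TC-001")],
    showstopper_open=lambda: False,
    exit_ok=lambda: True,
)
print(status, executed)
```

The same structure generalizes to real cycles: the predicates simply read the defect repository and the test plan's criteria instead of returning constants.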
During test design and execution, the traceability matrix should be kept current. As and when tests get designed and executed successfully, the traceability matrix should be updated. The traceability matrix itself should be subject to configuration management, that is, it should be subject to version control and change control.
When tests are executed, information about test execution gets collected in test logs and other files. The basic measurements from running the tests are then converted to meaningful metrics by the use of appropriate transformations and formulae, as described in Chapter 17, Metrics and Measurements.
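As an example of such a transformation, the raw counts from the test logs and defect repository can be turned into a metric like defects per 100 hours of testing (one of the metrics taken up in Chapter 17). The numbers below are made up for illustration.

```python
def defects_per_100_hours(defects_found, hours_of_testing):
    """Defects per 100 person-hours of testing. The inputs would come
    from the test logs and the defect repository; values here are
    hypothetical."""
    return 100.0 * defects_found / hours_of_testing

# Hypothetical totals from one test cycle's logs:
# 24 defects uncovered in 300 person-hours of testing.
print(defects_per_100_hours(defects_found=24, hours_of_testing=300))  # 8.0
```

Tracked cycle over cycle, a falling value of this metric is one signal (among others) that the product is stabilizing.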
At the completion of a test cycle, a test summary report is produced. This report gives insights to the senior management about the fitness of the product for release. We will see details of this report later in the chapter.
One of the purposes of testing is to decide the fitness of a product for release. As we have seen in Chapter 1, testing can never conclusively prove the absence of defects in a software product. What it provides is an evidence of what defects exist in the product, their severity, and impact. As we discussed earlier, the job of the testing team is to articulate to the senior management and the product release team
While under-testing a product could be a risk, over-testing a product in trying to remove "that last defect" could be as much of a risk!
The senior management can then take a meaningful business decision on whether to release a given version or not.
Testing requires constant communication between the test team and other teams (like the development team). Test reporting is a means of achieving this communication. There are two types of reports or communication that are required: test incident reports and test summary reports (also called test completion reports).
A test incident report is a communication that happens through the testing cycle as and when defects are encountered. Earlier, we described the defect repository. A test incident report is nothing but an entry made in the defect repository. Each defect has a unique ID and this is used to identify the incident. The high impact test incidents (defects) are highlighted in the test summary report.
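Since a test incident report is simply an entry in the defect repository, recording one amounts to appending a record under a freshly generated unique ID. The field names and ID format in this sketch are illustrative assumptions.

```python
import itertools

_defect_ids = itertools.count(1)   # generator for unique defect IDs
defect_repository = {}

def report_incident(summary, severity, test_case_id):
    """File a test incident report as a new defect-repository entry
    with a unique ID; the fields shown are assumptions for the sketch."""
    defect_id = f"DEF-{next(_defect_ids):04d}"
    defect_repository[defect_id] = {
        "summary": summary,
        "severity": severity,
        "test_case": test_case_id,   # cross-reference into the TCDB
        "status": "open",
    }
    return defect_id

did = report_incident("Crash on empty password", "high", "TC-101")
print(did)  # DEF-0001
```

High-severity entries filed this way are the ones later highlighted in the test summary report.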
As discussed, test projects take place in units of test cycles. A test cycle entails planning and running certain tests in cycles, each cycle using a different build of the product. As the product progresses through the various cycles, it is expected to stabilize. A test cycle report, produced at the end of each cycle, gives
The final step in a test cycle is to recommend the suitability of a product for release. A report that summarizes the results of a test cycle is the test summary report.
There are two types of test summary reports.
A summary report should present
Based on the test summary report, an organization can take a decision on whether to release the product or not.
Ideally, an organization would like to release a product with zero defects. However, market pressures may cause the product to be released with known defects, provided that the senior management is convinced that there is no major risk of customer dissatisfaction. If the remnant defects are of low priority/impact, or if the conditions under which the defects are manifested are not realistic, an organization may choose to release the product with these defects. Such a decision should be taken by the senior manager only after consultation with the customer support team, development team, and testing team, so that the overall workload for all parts of the organization can be evaluated.
Best practices in testing can be classified into three categories.
A strong process infrastructure and process culture is required to achieve better predictability and consistency. Process models such as CMMI can provide a framework to build such an infrastructure. Implementing a formal process model that makes business sense can provide consistent training to all the employees, and ensure consistency in the way the tasks are executed.
Ensuring people-friendly processes makes for process-friendly people.
Integrating processes with technology in an intelligent manner is a key to the success of an organization. A process database, a federation of information about the definition and execution of various processes can be a valuable addition to the repertoire of tools in an organization. When this process database is integrated with other tools such as defect repository, SCM tool, and TCDB, the organization can maximize the benefits.
Best practices in testing related to people management have been covered in detail in Chapter 13, People Issues in Testing. We will summarize here those best practices that pertain to test management.
The key to successful management is to ensure that the testing and development teams gel well. This gelling can be enhanced by creating a sense of ownership in the overarching product goals. While individual goals are required for the development and testing teams, it is very important to get to a common understanding of the overall goals that define the success of the product as a whole. The participation of the testing teams in this goal-setting process and their early involvement in the overall product-planning process can help enhance the required gelling. This gelling can be strengthened by keeping the testing teams in the loop for decision-making on product-release decisions and criteria used for release.
As discussed earlier in this chapter, job rotation among support, development, and testing can also increase the gelling among the teams. Such job rotation can help the different teams develop better empathy and appreciation of the challenges faced in each other's roles and thus result in better teamwork.
As we saw earlier, a fully integrated TCDB, SCM repository, and defect repository can help in better automation of testing activities. This can help choose those tests that are likely to uncover defects. Even if a full-scale test automation tool is not available, a tight integration among these three tools can greatly enhance the effectiveness of testing. In Chapter 17, Metrics and Measurements, we shall look at metrics like defects per 100 hours of testing, defect density, defect removal rate, and so on. The calculation of these metrics will be greatly simplified by a tight integration among these three tools.
As we will discuss in Chapter 16, Test Automation, automation tools take the boredom and routine out of testing functions and enhance the challenge and interest in the testing functions. Despite the high initial costs that may be incurred, test automation tends to yield significant long-term cost savings by reducing the manual labor required to run tests. There are also indirect benefits in terms of lower attrition of test engineers, since test automation not only reduces the routine but also brings some "programming" into the job, which engineers usually like.
Twenty-first century tools with nineteenth-century processes can only lead to nineteenth-century productivity!
When test automation tools are used, it is useful to integrate the tool with TCDB, defect repository, and an SCM tool. In fact, most of the test automation tools provide the features of TCDB and defect repository, while integrating with commercial SCM tools.
A final remark on best practices. The three dimensions of best practices cannot be carried out in isolation. A good technology infrastructure should be aptly supported by effective process infrastructure and be executed by competent people. These best practices are inter-dependent, self-supporting, and mutually enhancing. Thus, the organization needs to take a holistic view of these practices and keep a fine balance among the three dimensions.
1.1 Scope
4.1 Entry Criteria
4.2 Exit Criteria
4.3 Suspension Criteria
4.4 Resumption Criteria
5.1 Assumptions
5.2 Dependencies
5.3 Risks and Risk Management Plans
6.1 Size Estimate
6.2 Effort Estimate
6.3 Schedule Estimate
9.1 Hardware Resources
9.2 Software Resources
9.3 People Resources (number of people, skills, duration, etc.)
9.4 Other Resources
10.1 Details of Training Required
10.2 Possible Attendees
10.3 Any Constraints
[PMI-2004] is a comprehensive guide that covers all aspects of project management. [RAME-2002] discusses the activities of managing a globally distributed project, including topics like risk management, configuration management, etc. [BURN-2004] provides inputs on test planning. [FAIR-97] discusses methods of work breakdown structure applicable in general to projects. [HUMP-86] is the initial work that introduced the concept of process models for software development, which eventually led to the Capability Maturity Model (CMM). [CMMI-2002] covers the Capability Maturity Model Integration, a popular process model. [ALBE-79] and [ALBE-83] discuss function points, a method of size estimation for applications. [IEEE-829] presents the various templates and documentation required for testing, such as test plans, test case specifications, and test incident reports. [IEEE-1059] and [IEEE-1012] also contain useful information on these topics.