CHAPTER 14
Testing

The manager of the maintenance team is accountable for the quality of the system after it is transferred to the maintenance team. The initial quality is a reflection on the project team responsible for the implementation; but as time goes on, the quality will be a reflection of the maintenance team, and rightfully so. The maintenance team will continue to apply enhancements and defect fixes in order to improve the system. But there is also the possibility of inadvertently introducing new defects into production along with those enhancements and defect fixes. Though any given enhancement may be small, the quality and integrity of the entire system must be the concern of the maintenance team, and any coding change requires rigorous testing.

This chapter focuses on testing, but quality involves more than testing alone. These other aspects of quality are addressed throughout this book and include:

•   Accurate and usable documentation

•   Adherence to procedures

•   Effective tracking using appropriate metrics

Note that testing does not replace quality programming. Testing does not create the quality; testing confirms whether the quality is there or not. Quality must be built in at every phase and in every step. Your job is to establish the correct metrics and proper incentives for the team members to produce a quality product.

Test methods may be carried over from the project, or you may have to establish them yourself. The testing methods covered here provide a systematic way to approach testing. This approach applies to maintenance fixes, enhancements, small projects, and large projects. As with most aspects of project management, success is built on planning and replanning.

This chapter presents testing methods that are commonplace for IT development projects. These methods are included in this book because, even though commonplace, they are not always followed or understood by project teams or maintenance teams.

The definitions of testing terms are not consistent throughout the industry. For example, the terms test case, test procedure, and test script may be used interchangeably, depending on the individuals you talk to. You do not have to adopt the terms and definitions used in this chapter. What is important is that you adopt standard terms and definitions inside your own company.

Testing is not an activity that involves only IT. The customers or business should also be deeply involved in the test process, because they are the owners of the system and know how the software functions fit into their business processes. They should also be more astute about what needs to be tested. Some companies have a separate independent quality control team that performs testing. In these cases, testing will be the responsibility of that team with oversight from the customer.

Figure 14-1 shows the basic test life cycle. The steps go from developing the high-level test plan to executing test scripts until no defects are found. These are the steps that any new development project would have to complete. The job of the maintenance team is easier if the project team completes all the Preparation Phase steps and then turns over the documents to the maintenance team. When the maintenance team makes an enhancement or defect fix, it will just have to review and update the Preparation Phase documents as necessary. The Execution Phase steps would then follow.

Figure 14-1: Test Life Cycle


In Figure 14-1, the diamonds with the letter “A” represent approval control points that govern whether to proceed to the next step. Approval control points require the appropriate business process owner to sign off on the documents and confirm that the testing is appropriate.

If the Preparation Phase documents don’t exist, your team will have to develop them. Then the Execution Phase steps will have to be performed. The remainder of this chapter assumes that you are going to complete all the steps.

The steps in the Test Life Cycle shown in Figure 14-1 should take place at specific times in the Software Development Life Cycle (SDLC). It is important to make sure testing execution starts on time, is well documented, and is orderly. Figure 14-2 provides a table that maps the test life cycle steps into the software development phases.

Develop Test Plan

The primary goal of all testing is to ensure that the product performs all identified business functions correctly and at the quality level specified in the requirements. All the levels of testing should be specified in the Test Plan. Successful completion of all testing will signify that the product is ready to be deployed.

Figure 14-2: Mapping Test Life Cycle Steps


There is a cost to testing, but there also is a cost to not testing—debugging, fixing, customer dissatisfaction, and business interruptions, along with delays in obtaining the business benefits that are expected from the system. These costs result whether the work is in development or maintenance. The manager and team need to determine the appropriate amount of testing to balance the risks. This determination should appear in the Test Plan.

Whether for a project or maintenance, it is best to start planning your test activities early. Starting to plan your testing when all the programming is completed is too late; that is the time to execute your plan.

Figure 14-3 provides a Test Plan Template that can be used to develop your own test plan. After you have created your first test plan, you can modify it for the next project or next enhancement. Each of the sections in the template provides an explanation of what to consider and include in your test plan.

After the Test Plan is complete, you will be able to estimate the effort it will take your team members to perform all the preparation and execution tasks. Doing this may not be necessary for all enhancements, but will be necessary for major enhancements and projects. After estimating, you can assign the responsibilities for completing the tasks.

Figure 14-3: Test Plan Template

Types of Testing

It is important that your team has clear descriptions of the types of testing they will perform. We use the following types in this book to explain the testing process:

•   Unit Test (also called Module Test)

•   Assembly Test (also called String Test or sometimes Integration Test)

•   System Test

•   Integration Test

•   Performance Test (also includes Stress Test)

•   Usability Test

•   Acceptance Test

The terms used are suggested only, because there are no universal terms in testing.

Test Type Details

This section presents the details for each type of test.

Unit Test (also called Module Test)

Objective

Confirm that the newly coded module functions correctly without regard to any other module. This testing should eliminate basic errors. Review the requirements and specifications for this module.

Who Performs

Programmer of the module, who has knowledge of the internals of the code.

Test Script

Informal, not required to be a written script.
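
As a concrete illustration, here is a minimal unit test sketch in Python using the standard unittest module. The calculate_discount function is a hypothetical module under test, invented for this example; the programmer would substitute the module just coded.

import unittest

# Hypothetical module under test: a small pricing function.
def calculate_discount(order_total):
    """Return the discount amount: 10% on orders of 100.00 or more."""
    return round(order_total * 0.10, 2) if order_total >= 100.00 else 0.00

class CalculateDiscountUnitTest(unittest.TestCase):
    def test_discount_applied_at_threshold(self):
        self.assertEqual(calculate_discount(100.00), 10.00)

    def test_no_discount_below_threshold(self):
        self.assertEqual(calculate_discount(99.99), 0.00)

if __name__ == "__main__":
    unittest.main()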

Assembly Test (also called String or sometimes Integration Test)

Objective

Assemble modules together in a logical fashion and confirm that they interact appropriately with one another.

Who Performs

Programmer of the modules or a dedicated test person.

Test Script

Use a formal test script based on the system requirements, including steps to test:

•   All inputs and outputs

•   All interactions between modules

•   All module functional options

•   Try to break the assembly (a minimal code sketch follows this list)
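
As a sketch of what one assembly-level check might look like, the following Python example strings together two hypothetical modules (a pricing function and an invoicing function) and confirms that the output of one is consumed correctly by the other. The module names and figures are assumptions for illustration only; a real assembly test would follow the formal test script template described later in this chapter.

import unittest

# Hypothetical modules being assembled: pricing feeds invoicing.
def calculate_discount(order_total):
    return round(order_total * 0.10, 2) if order_total >= 100.00 else 0.00

def build_invoice(order_total):
    discount = calculate_discount(order_total)  # interaction between the modules
    return {"total": order_total, "discount": discount,
            "amount_due": round(order_total - discount, 2)}

class PricingInvoicingAssemblyTest(unittest.TestCase):
    def test_invoice_reflects_discount(self):
        # Confirms that the modules interact appropriately.
        invoice = build_invoice(200.00)
        self.assertEqual(invoice["discount"], 20.00)
        self.assertEqual(invoice["amount_due"], 180.00)

    def test_try_to_break_the_assembly(self):
        # A deliberately awkward input: a zero-value order should still
        # produce a consistent invoice rather than an error.
        invoice = build_invoice(0.00)
        self.assertEqual(invoice["amount_due"], 0.00)

if __name__ == "__main__":
    unittest.main()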

System Test

Objective

Confirm that the end-to-end functionality of the complete system works correctly. No knowledge of the program’s internal workings is needed for designing or executing System Tests.

Who Performs

Dedicated test person or person from the business.

Test Script

The System test cases and test scripts should be derived from the requirements and the user manual. All the program’s functions should be mapped to test cases. All the Assembly test cases should be included in the System test cases, and the security of the system should also be tested.

Integration Test

Objective

Confirm that all the systems and interfaces that will exist in production work correctly when combined (assuming that there is more than one system).

Who Performs

Dedicated test person or person from the business.

Test Script

The Integration test cases and test scripts should be derived from the test cases and test scripts of each individual system, and new ones should be created for the interfaces.

Performance Test (also includes Stress Test)

Objective

Confirm that the system’s performance meets all performance requirements, and determine the limits of the system’s performance. Meeting the performance requirements alone is not enough; the system’s limits should also be documented through testing. Execute the system with higher and higher volumes until it breaks, and then analyze the results. Run tests that include the actual maximum number of concurrent users and the actual level of network traffic.

Who Performs

Dedicated test person.

Test Script

Use a formal test script based on the performance requirements, including a progression that exceeds the requirement to determine the breaking point.
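
A minimal sketch of the "push the volume higher until it breaks" progression is shown below, using only the Python standard library. The transaction function, the user counts, and the two-second requirement are all assumptions; in practice the script would drive the real system and use the figures from the performance requirements.

import threading
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for one request into the system under test. The lock
# simulates a serialized resource, so response time degrades as load grows.
_shared_resource = threading.Lock()

def transaction():
    with _shared_resource:
        time.sleep(0.01)  # simulate the work of one request
    return True

REQUIREMENT_SECONDS = 2.0  # assumed requirement for completing the workload

def run_load(concurrent_users, requests_per_user=10):
    """Run the workload at a given concurrency and return elapsed seconds."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(transaction)
                   for _ in range(concurrent_users * requests_per_user)]
        for future in futures:
            future.result()
    return time.perf_counter() - start

if __name__ == "__main__":
    # Increase the volume step by step until the requirement is no longer met,
    # then record the breaking point for the test results.
    for users in (5, 10, 20, 50, 100):
        elapsed = run_load(users)
        print(f"{users:>3} concurrent users: {elapsed:.2f} seconds")
        if elapsed > REQUIREMENT_SECONDS:
            print(f"Breaking point reached at {users} concurrent users")
            break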

Usability Test

Objective

Verify the system’s effectiveness and its acceptance by the user, and determine how usable the system is to the end user after it has been in production for a period of time.

Who Performs

Dedicated test person.

Test Script

Use a formal test script based on the usability requirements.

Acceptance Test

Objective

Confirm that the deployment into production is complete and the system is ready for use. At this point in the testing life cycle, the system is expected to pass the Acceptance Test. The test cases should focus on typical conditions that the user would experience with the system.

The Acceptance Test should be planned out based on the Exit Criteria found in the Test Plan, and the objectives should be made clear to all participants. It’s not a time for the user to develop new system requirements. The Acceptance Test shows that the system is functioning according to requirements.

Test to obtain sign-off by the customer.

Who Performs

Person from the business.

Test Script

Reuse some of the System or Integration test scripts.

Figure 14-4 shows the progression of testing. If you are implementing a package system, the vendor will perform some of the testing.

Description of Testing Progression

Unit Test: Testing begins when a unit or module of code is completed. The programmer tests each unit. The Unit Test is not a formal test and has no test script to execute.

Assembly Test (also called String Test or sometimes Integration Test): After all the code modules and Unit Tests for a function are complete, they can be grouped together. Then an Assembly Test can be executed. Someone other than the programmer will typically perform this test. A formal test script is recommended.

Figure 14-4: Testing Progression


System Test: The System Test will be executed when the entire system is complete and all assembly tests have passed. For enhancements and defect fixes, the System Test should already have been executed once prior to the change. The results of the first system test and those of the system test after the change can thus be compared. The only difference between them should be attributable to the enhancement or defect fix.

Integration Test: The Integration Test includes testing the system and any interfaces with other systems.

Performance Test (also includes Stress Test)/Usability Test: Then there is a battery of specific tests to check the performance of the systems working together, the usability from a user’s point of view, and the ability to handle adverse stresses such as many users simultaneously accessing the systems.

If at any step a defect is encountered, the defect is logged and the programmers are instructed to provide a fix. Then the testing process starts over again. This is called regression testing.

Acceptance Test: When all tests are complete and the appropriate parties have approved the results, the system is ready to be moved into production. After the system is in production, an Acceptance Test is executed to confirm that all the pieces of the system made it into production. The Acceptance Test can be simply executing portions of the Integration Test.

Catch Defects Early

Each level of testing is designed to filter out certain types of defects. You can envision the Unit Test as a screen filter with big holes that will only filter out big “rocks.” The Assembly Test is a screen filter with slightly smaller holes that will filter out slightly smaller rocks. The testing progression continues until the last regression test filters out everything except pure business functionality.

It is important that the “rocks” be stopped by the appropriate filter and not allowed to progress to later filters. The later a defect is found, the more costly the fix will become. This is due to the amount of rework required. For example, when a defect is identified in a Unit Test, just the code must be fixed, then it can return to Unit Test. However, when a defect is found in a System Test, the defect must be logged and sent back to the programmer to fix. The programmer investigates the defect, fixes the code, and reruns the Unit Test. Then the Assembly Test is rerun, and only then can the System Test be rerun to confirm the fix.

To demonstrate the added cost of rework, a defect found in:

•   Unit Testing costs 1 minute to fix

•   Assembly Testing costs 10 minutes to fix

•   System Testing costs 100 minutes to fix

•   Integration Testing costs 1000 minutes to fix

•   After Deployment costs 10,000 minutes to fix

Test Case and Script Development

The test cases and test scripts are at the heart of the testing effort. Each enhancement or project requirement should map to a test case. Multiple requirements can map to the same test case. Test scripts will be written for each test case and will provide the procedure, or necessary steps to follow, in order to fully satisfy the given test case. Test scripts are executed, not test cases. The number of test cases and scripts to write, and the number of iterations for each test script, provide an ideal metric for tracking progress through both test preparation and test execution.

To attain total confidence in your system through testing, the team members who develop the test cases should have knowledge of the business, test experience, and insight into what the system is designed to accomplish.

To ensure that all the requirements and functionality will be tested, test case writers should review the following sources before beginning their work:

•   User Requirements—Use the Requirements Document and list all the requirements, without regard to the internal components of the program.

•   System Requirements—Use the technical requirements, including performance and usability requirements.

•   Program Functions—List the internal components of the program.

•   Data Structures Used—List the internal components of the program’s data structure design.

•   Boundary conditions—Test the limits of fields, such as the classic Year 2000 boundaries of 12/31/1999 and 1/1/2000 (a small test sketch follows this list).
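
For the boundary-condition item above, a small Python sketch shows how such limits might be exercised. The is_expired function and its cutoff date are hypothetical; the boundary values come straight from the list.

import unittest
from datetime import date

# Hypothetical function under test: is a record expired as of the cutoff date?
CUTOFF = date(2000, 1, 1)

def is_expired(record_date):
    return record_date < CUTOFF

class BoundaryConditionTest(unittest.TestCase):
    def test_day_before_boundary(self):
        self.assertTrue(is_expired(date(1999, 12, 31)))

    def test_on_boundary(self):
        self.assertFalse(is_expired(date(2000, 1, 1)))

if __name__ == "__main__":
    unittest.main()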

Figure 14-5 provides a sample test case. The test cases must be well documented and should be understandable by anyone with a basic understanding of software testing. For the life of the system in maintenance, you will not know who will be performing the testing.

Figure 14-5: Test Case Template

Figure 14-6 provides a matrix for mapping the requirements to the test cases. This matrix makes it easy to check that all requirements are covered by a test case, and it helps in the effort to minimize the overall number of test cases.
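
If you do not have a tool for building the matrix in Figure 14-6, even a simple mapping like the following Python sketch can confirm coverage. The requirement and test case identifiers are hypothetical.

# Hypothetical requirement-to-test-case mapping, in the spirit of Figure 14-6.
requirements = {"REQ-001", "REQ-002", "REQ-003", "REQ-004"}

test_case_coverage = {
    "TC-01": {"REQ-001", "REQ-002"},  # one test case can cover several requirements
    "TC-02": {"REQ-003"},
}

covered = set().union(*test_case_coverage.values())
uncovered = requirements - covered
print("Requirements without a test case:", sorted(uncovered))  # expect REQ-004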

The test scripts are written after the test cases are written. Figure 14-7 provides a test script template. The test case should already identify what test scripts will be required. The test script is the explicit procedure that the tester will follow. The tester may not have much knowledge about the system, so the script must be detailed enough for anyone with basic knowledge to follow. Each step of the test script must have these elements (a brief sketch of how they might be recorded follows the list):

•   The action to take

•   The expected results

•   The actual results, which will be documented upon execution
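
The following is a sketch of how those three elements might be captured if the scripts are recorded electronically. The field names and the sample step are assumptions, not a prescribed format.

from dataclasses import dataclass

@dataclass
class TestScriptStep:
    action: str               # the action the tester takes
    expected_result: str      # what should happen
    actual_result: str = ""   # documented upon execution

step = TestScriptStep(
    action="Enter an order of $200.00 and press Submit",
    expected_result="Invoice shows a $20.00 discount and $180.00 due",
)
step.actual_result = "As expected"  # filled in by the tester during execution
print(step)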

Test Execution and Control

Testing small to medium enhancements does not require much attention from the manager. Testing large enhancements and projects will require more attention. This section presents several recommendations for tracking these larger efforts.

Figure 14-6: Requirements to Test Case Matrix


Figure 14-7: Test Script Template

You will need to make sure that certain elements such as the following are tightly controlled:

•   Managing the test data

•   Managing versions of programs; do not permit code changes in the test environment

•   Documenting the test results and review

•   Tracking metrics from testing to monitor any problems

Keep a test log of which test scripts were executed, when they were executed, and to which version of the system they pertain. Closely track the information about identified defects, map them to requested fixes, and track them until they are closed. Include defect type and severity. Squeeze the data (analyze trends) to gather information from it so it “talks” to you and then “sings” to you. Determine where your troublesome areas are, apply management focus, and take appropriate action.

The following is a recommended severity scale (a defect-record sketch follows the list):

1.   Showstopper. Can’t deploy with defect.

2.   Software capabilities are adversely affected with no workaround.

3.   Software capabilities are adversely affected but there is a workaround.

4.   Just an annoyance to the user.
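
A sketch of a defect record that captures type and severity on that scale is shown below; the field names and sample values are hypothetical, and a spreadsheet or defect-tracking tool would hold the same information.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class DefectRecord:
    defect_id: str
    description: str
    defect_type: str       # for example: logic, interface, data, performance
    severity: int          # 1 = showstopper ... 4 = annoyance
    found_in: str          # which test level caught the defect
    system_version: str    # version of the system under test
    status: str = "Open"   # track the defect until it is Closed
    logged_on: date = field(default_factory=date.today)

defect = DefectRecord(
    defect_id="DEF-0042",
    description="Discount not applied at the $100.00 boundary",
    defect_type="logic",
    severity=2,            # capability adversely affected, no workaround
    found_in="System Test",
    system_version="3.2.1",
)
print(defect)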

Configuration management is critical when performing all levels of testing. See Chapter 16, “Configuration Management,” for details.

Earned Value

For major enhancements or projects, standard project management Earned Value can be used for tracking testing progress: Earned Value = number of test cases completed / total number of test cases. The testing tasks can be broken up into three categories for tracking earned value. The first is the development of the test cases themselves. The list of test cases should be available after the Design Phase. Tracking for Earned Value is straightforward.

The second is the planned execution of test cases. There may be multiple iterations of a single test case. The total number of iterations of all test cases can be tracked.

The third category is not so clear. Some test cases will be iterative; however, it will not be known how many times these cases will be executed until all testing is successfully completed. For this category, the number of iterations should be estimated. Be sure that your estimation is not overly optimistic.
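
A minimal sketch of the earned-value arithmetic for the three categories follows; the counts are invented purely for illustration.

# Hypothetical counts for the three tracking categories.
test_cases_written, test_cases_planned = 40, 50      # category 1: test case development
iterations_done, iterations_planned = 120, 200       # category 2: planned executions
rework_runs_done, rework_runs_estimated = 15, 60     # category 3: estimated re-runs

def earned_value(completed, total):
    """Earned Value = number of items completed / total number of items."""
    return completed / total

print(f"Test case development: {earned_value(test_cases_written, test_cases_planned):.0%}")
print(f"Planned execution:     {earned_value(iterations_done, iterations_planned):.0%}")
print(f"Estimated rework:      {earned_value(rework_runs_done, rework_runs_estimated):.0%}")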
