1.1 Context of Testing in Producing Software
1.10 The Policemen on the Bridge
2 Software Development Life Cycle Models
2.1 Phases of Software Project
2.1.1 Requirements Gathering and Analysis
2.1.6 Deployment and Maintenance
2.2 Quality, Quality Assurance, and Quality Control
2.3 Testing, Verification, and Validation
2.4 Process Model to Represent Different Phases
2.5.2 Prototyping and Rapid Application Development Models
2.5.3 Spiral or Iterative Model
2.5.6 Comparison of Various Life Cycle Models
3.1 What is White Box Testing?
3.2.1 Static Testing by Humans
3.3.1 Unit/Code Functional Testing
3.4 Challenges in White Box Testing
4.1 What is Black Box Testing?
4.3 When to do Black Box Testing?
4.4 How to do Black Box Testing?
4.4.1 Requirements Based Testing
4.4.2 Positive and Negative Testing
4.4.5 Equivalence Partitioning
4.4.6 State Based or Graph Based Testing
4.4.8 User Documentation Testing
5.1 What is Integration Testing?
5.2 Integration Testing as a Type of Testing
5.2.3 Bi-Directional Integration
5.2.5 Choosing Integration Method
5.3 Integration Testing as a Phase of Testing
5.5.1 Choosing the Frequency and Duration of Defect Bash
5.5.2 Selecting the Right Product Build
5.5.3 Communicating the Objective of Defect Bash
5.5.4 Setting up and Monitoring the Lab
5.5.5 Taking Actions and Fixing Issues
5.5.6 Optimizing the Effort Involved in Defect Bash
6 System and Acceptance Testing
6.2 Why is System Testing Done?
6.3 Functional Versus Non-Functional Testing
6.4.1 Design/Architecture Verification
6.4.2 Business Vertical Testing
6.4.5 Certification, Standards and Testing for Compliance
6.5.1 Setting up the Configuration
6.5.2 Coming up with Entry/Exit Criteria
6.5.7 Interoperability Testing
6.6.2 Selecting Test Cases for Acceptance Testing
6.6.3 Executing Acceptance Tests
6.7.1 Multiphase Testing Model
6.7.2 Working Across Multiple Releases
7.2 Factors Governing Performance Testing
7.3 Methodology for Performance Testing
7.3.3 Automating Performance Test Cases
7.3.4 Executing Performance Test Cases
7.3.5 Analyzing the Performance Test Results
7.3.7 Performance Benchmarking
7.4 Tools for Performance Testing
7.5 Process for Performance Testing
8.1 What is Regression Testing?
8.2 Types of Regression Testing
8.3 When to do Regression Testing?
8.4 How to do Regression Testing?
8.4.1 Performing an Initial “Smoke” or “Sanity” Test
8.4.2 Understanding the Criteria for Selecting the Test Cases
8.4.4 Methodology for Selecting Test Cases
8.4.5 Resetting the Test Cases for Regression Testing
8.4.6 Concluding the Results of Regression Testing
8.5 Best Practices in Regression Testing
9 Internationalization (I18n) Testing
9.2 Primer on Internationalization
9.2.4 Terms Used in This Chapter
9.3 Test Phases for Internationalization Testing
9.6 Internationalization Validation
9.10 Tools Used for Internationalization
10.1 Overview of Ad Hoc Testing
10.3.1 Situations When Pair Testing Becomes Ineffective
10.4.1 Exploratory Testing Techniques
10.6 Agile and Extreme Testing
10.6.2 Summary with an Example
Part III Select Topics in Specialized Testing
11 Testing of Object-Oriented Systems
11.2 Primer on Object-Oriented Software
11.3 Differences in OO Testing
11.3.1 Unit Testing a set of Classes
11.3.2 Putting Classes to Work Together—Integration Testing
11.3.3 System Testing and Interoperability of OO Systems
11.3.4 Regression Testing of OO Systems
11.3.5 Tools for Testing of OO Systems
12 Usability and Accessibility Testing
12.1 What is Usability Testing?
12.3 When to do Usability Testing?
12.4 How to Achieve Usability?
12.5 Quality Factors for Usability
12.10 Test Roles for Usability
Part IV People and Organizational Issues in Testing
13.1 Perceptions and Misconceptions About Testing
13.1.1 “Testing is not Technically Challenging”
13.1.2 “Testing Does Not Provide me a Career Path or Growth”
13.1.3 “I Am Put in Testing—What is Wrong With Me?!”
13.1.4 “These Folks Are My Adversaries”
13.1.5 “Testing is What I Can Do in the End if I Get Time”
13.1.6 “There is no Sense of Ownership in Testing”
13.1.7 “Testing is only Destructive”
13.2 Comparison between Testing and Development Functions
13.3 Providing Career Paths for Testing Professionals
13.4 The Role of the Ecosystem and a Call for Action
13.4.1 Role of Education System
13.4.2 Role of Senior Management
14 Organization Structures for Testing Teams
14.1 Dimensions of Organization Structures
14.2 Structures in Single-Product Companies
14.2.1 Testing Team Structures for Single-Product Companies
14.2.2 Component-Wise Testing Teams
14.3 Structures for Multi-Product Companies
14.3.1 Testing Teams as Part of “CTO's Office”
14.3.2 Single Test Team for All Products
14.3.3 Testing Teams Organized by Product
14.3.4 Separate Testing Teams for Different Phases of Testing
14.4 Effects of Globalization and Geographically Distributed Teams on Product Testing
14.4.1 Business Impact of Globalization
14.4.2 Round the Clock Development/Testing Model
14.4.3 Testing Competency Center Model
14.4.4 Challenges in Global Teams
14.5 Testing Services Organizations
14.5.1 Business Need for Testing Services
14.5.2 Differences between Testing as a Service and Product-Testing Organizations
14.5.3 Typical Roles and Responsibilities of Testing Services Organization
14.5.4 Challenges and Issues in Testing Services Organizations
14.6 Success Factors for Testing Organizations
Part V Test Management and Automation
15 Test Planning, Management, Execution, and Reporting
15.2.2 Scope Management: Deciding Features to be Tested/Not Tested
15.2.3 Deciding Test Approach/Strategy
15.2.4 Setting up Criteria for Testing
15.2.5 Identifying Responsibilities, Staffing, and Training Needs
15.2.6 Identifying Resource Requirements
15.2.7 Identifying Test Deliverables
15.2.8 Testing Tasks: Size and Effort Estimation
15.2.9 Activity Breakdown and Scheduling
15.2.10 Communications Management
15.3.2 Test Infrastructure Management
15.3.4 Integrating with Product Release
15.4.1 Putting Together and Baselining a Test Plan
15.4.2 Test Case Specification
15.4.3 Update of Traceability Matrix
15.4.4 Identifying Possible Candidates for Automation
15.4.5 Developing and Baselining Test Cases
15.4.6 Executing Test Cases and Keeping Traceability Matrix Current
15.4.7 Collecting and Analyzing Metrics
15.4.8 Preparing Test Summary Report
15.4.9 Recommending Product Release Criteria
15.5.1 Recommending Product Release
15.6.1 Process Related Best Practices
15.6.2 People Related Best Practices
15.6.3 Technology Related Best Practices
Appendix A: Test Planning Checklist
Appendix B: Test Plan Template
16.3 Skills Needed for Automation
16.4 What to Automate, Scope of Automation
16.4.1 Identifying the Types of Testing Amenable to Automation
16.4.2 Automating Areas Less Prone to Change
16.4.3 Automate Tests that Pertain to Standards
16.4.4 Management Aspects in Automation
16.5 Design and Architecture for Automation
16.5.2 Scenario and Configuration File Modules
16.5.3 Test Cases and Test Framework Modules
16.5.4 Tools and Results Modules
16.5.5 Report Generator and Reports/Metrics Modules
16.6 Generic Requirements for Test Tool/Framework
16.7 Process Model for Automation
16.8.1 Criteria for Selecting Test Tools
16.8.2 Steps for Tool Selection and Deployment
16.9 Automation for Extreme Programming Model
16.10 Challenges in Automation
17 Test Metrics and Measurements
17.1 What are Metrics and Measurements?
17.4.1 Effort Variance (Planned vs Actual)
17.4.2 Schedule Variance (Planned vs Actual)
17.4.3 Effort Distribution Across Phases
17.5.2 Development Defect Metrics
17.6.1 Defects per 100 Hours of Testing
17.6.2 Test Cases Executed per 100 Hours of Testing
17.6.3 Test Cases Developed per 100 Hours of Testing
17.6.4 Defects per 100 Test Cases
17.6.5 Defects per 100 Failed Test Cases
17.6.6 Test Phase Effectiveness