Summary of Volume 1

  • Dedication and Acknowledgments
  • Preface
  • Chapter 1. Introduction
    • 1.1. Definition
    • 1.2. Why and for whom are these books?
      • 1.2.1. Why?
      • 1.2.2. Who is this book for?
      • 1.2.3. Organization of this book
    • 1.3. Examples
    • 1.4. Limitations
    • 1.5. Why test?
    • 1.6. MOA and MOE
    • 1.7. Major challenges
      • 1.7.1. Increased complexity
      • 1.7.2. Significant failure rate
      • 1.7.3. Limited visibility
      • 1.7.4. Multi-sources and complexity
      • 1.7.5. Multi-enterprise politics
      • 1.7.6. Multiple test levels
      • 1.7.7. Contract follow-up, measures, reporting and penalties
      • 1.7.8. Integration and test environments
      • 1.7.9. Availability of components
      • 1.7.10. Combination and coverage
      • 1.7.11. Data quality
      • 1.7.12. Flows, pivots and data conversions
      • 1.7.13. Evolution and transition
      • 1.7.14. History and historization
      • 1.7.15. Impostors
  • Chapter 2. Software Development Life Cycle
    • 2.1. Sequential development cycles
      • 2.1.1. Waterfall
      • 2.1.2. V-cycle
      • 2.1.3. Spiral and prototyping
      • 2.1.4. Challenges of sequential developments
    • 2.2. Incremental development cycles
      • 2.2.1. Challenges of incremental development
    • 2.3. Agile development cycles
      • 2.3.1. Agile Manifesto
      • 2.3.2. eXtreme Programming
      • 2.3.3. Challenges of iterative cycles
      • 2.3.4. Lean
      • 2.3.5. DevOps and continuous delivery
      • 2.3.6. Agile development challenges
    • 2.4. Acquisition
    • 2.5. Maintenance
    • 2.6. OK, what about reality?
  • Chapter 3. Test Policy and Test Strategy
    • 3.1. Test policy
      • 3.1.1. Writing test policy
      • 3.1.2. Scope of the test policy
      • 3.1.3. Applicability of the test policy
    • 3.2. Test strategy
      • 3.2.1. Content of a test strategy
      • 3.2.2. Test strategies and Taylorism
      • 3.2.3. Types of test strategies
      • 3.2.4. Test strategy and environments
    • 3.3. Selecting a test strategy
      • 3.3.1. “Completeness” of the strategy
      • 3.3.2. Important points in the strategy
      • 3.3.3. Strategy monitoring
      • 3.3.4. Shift left, costs and time
      • 3.3.5. “Optimal” strategy
      • 3.3.6. Ensuring success
      • 3.3.7. Why multiple test iterations?
      • 3.3.8. Progress forecast
      • 3.3.9. Continuous improvements
  • Chapter 4. Testing Methodologies
    • 4.1. Risk-based tests (RBT)
      • 4.1.1. RBT hypothesis
      • 4.1.2. RBT methodology
      • 4.1.3. RBT versus RRBT
      • 4.1.4. Reactions to risks
      • 4.1.5. Risk computation
      • 4.1.6. RBT synthesis
      • 4.1.7. Additional references
    • 4.2. Requirement-based tests (TBX)
      • 4.2.1. TBX hypothesis
      • 4.2.2. TBX methodology
      • 4.2.3. TBX calculation
      • 4.2.4. TBX synthesis
    • 4.3. Standard-based (TBS) and systematic tests
      • 4.3.1. TBS hypothesis
      • 4.3.2. TBS calculation
      • 4.3.3. TBS synthesis
    • 4.4. Model-based testing (MBT)
      • 4.4.1. MBT hypothesis
      • 4.4.2. MBT calculation
      • 4.4.3. MBT synthesis
    • 4.5. Testing in Agile methodologies
      • 4.5.1. Agile “test” methodologies?
      • 4.5.2. Test coverage
      • 4.5.3. Hypothesis
      • 4.5.4. Calculation methods
      • 4.5.5. Synthesis
    • 4.6. Selecting a multi-level methodology
      • 4.6.1. Hypothesis
      • 4.6.2. Calculation
    • 4.7. From design to delivery
  • Chapter 5. Quality Characteristics
    • 5.1. Product quality characteristics
    • 5.2. Quality in use
    • 5.3. Quality for acquirers
    • 5.4. Quality for suppliers
    • 5.5. Quality for users
    • 5.6. Impact of quality on criticality and priority
    • 5.7. Quality characteristics demonstration
      • 5.7.1. Two schools
      • 5.7.2. IADT proofs
      • 5.7.3. Other thoughts
  • Chapter 6. Test Levels
    • 6.1. Generic elements of a test level
      • 6.1.1. Impacts on development cycles
      • 6.1.2. Methods and techniques
      • 6.1.3. Fundamental principles
    • 6.2. Unit testing
    • 6.3. Component integration testing
      • 6.3.1. Types of interfaces to integrate
      • 6.3.2. Integration challenges
      • 6.3.3. Integration models
      • 6.3.4. Hardware–software integration tests
    • 6.4. Component tests
    • 6.5. Component integration tests
    • 6.6. System tests
    • 6.7. Acceptance tests or functional acceptance
    • 6.8. Particularities of specific systems
      • 6.8.1. Safety critical systems
      • 6.8.2. Airborne systems
      • 6.8.3. Confidentiality and data security
  • Chapter 7. Test Documentation
    • 7.1. Objectives for documentation
    • 7.2. Conformity construction plan (CCP)
    • 7.3. Articulation of the test documentation
    • 7.4. Test policy
    • 7.5. Test strategy
    • 7.6. Master test plan (MTP)
    • 7.7. Level test plan
    • 7.8. Test design documents
    • 7.9. Test case specification
    • 7.10. Test procedure specification
    • 7.11. Test data specifications
    • 7.12. Test environment specification
    • 7.13. Reporting and progress reports
    • 7.14. Project documentation
    • 7.15. Other deliverables
  • Chapter 8. Reporting
    • 8.1. Introduction
    • 8.2. Stakeholders
    • 8.3. Product quality
    • 8.4. Cost of defects
    • 8.5. Frequency of reporting
    • 8.6. Test progress and interpretation
      • 8.6.1. Requirements coverage
      • 8.6.2. Risk coverage
      • 8.6.3. Component or functional coverage
    • 8.7. Progress and defects
      • 8.7.1. Defect identification
      • 8.7.2. Defects fixing
      • 8.7.3. Defect backlog
      • 8.7.4. Number of reopened defects
    • 8.8. Efficiency and effectiveness of test activities
    • 8.9. Continuous improvement
      • 8.9.1. Implementing continuous improvements
    • 8.10. Reporting attention points
      • 8.10.1. Audience
      • 8.10.2. Usage
      • 8.10.3. Impartiality
      • 8.10.4. Evolution of reporting
      • 8.10.5. Scrum reporting
      • 8.10.6. KANBAN reporting
      • 8.10.7. Test design reporting
      • 8.10.8. Test execution reporting
      • 8.10.9. Reporting software defects
      • 8.10.10. UAT progress reporting
      • 8.10.11. Reporting for stakeholders
  • Chapter 9. Testing Techniques
    • 9.1. Test typologies
      • 9.1.1. Static tests and reviews
      • 9.1.2. Technical tests
    • 9.2. Test techniques
    • 9.3. CRUD
    • 9.4. Paths (PATH)
      • 9.4.1. Operation
      • 9.4.2. Coverage
      • 9.4.3. Limitations and risks
    • 9.5. Equivalence partitions (EP)
      • 9.5.1. Objective
      • 9.5.2. Operation
      • 9.5.3. Coverage
      • 9.5.4. Limitations and risks
    • 9.6. Boundary value analysis (BVA)
      • 9.6.1. Objective
      • 9.6.2. Operation
      • 9.6.3. Coverage
      • 9.6.4. Limitations and risks
    • 9.7. Decision table testing (DTT)
      • 9.7.1. Objective
      • 9.7.2. Operation
      • 9.7.3. Coverage
      • 9.7.4. Limitations and risks
    • 9.8. Use case testing (UCT)
      • 9.8.1. Objective
      • 9.8.2. Operation
      • 9.8.3. Coverage
      • 9.8.4. Limitations and risks
    • 9.9. Data combination testing (DCOT)
      • 9.9.1. Objective
      • 9.9.2. Operation
      • 9.9.3. Coverage
      • 9.9.4. Challenge
    • 9.10. Data life cycle testing (DCYT)
      • 9.10.1. Objective
      • 9.10.2. Operation
      • 9.10.3. Coverage
      • 9.10.4. Challenge
    • 9.11. Exploratory testing (ET)
      • 9.11.1. Objective
      • 9.11.2. Operation
      • 9.11.3. Coverage
      • 9.11.4. Limitations and risks
    • 9.12. State transition testing (STT)
      • 9.12.1. Objective
      • 9.12.2. Operation
      • 9.12.3. Coverage
    • 9.13. Process cycle testing (PCT)
      • 9.13.1. Objective
      • 9.13.2. Operation
      • 9.13.3. Coverage
      • 9.13.4. Limitations and risks
    • 9.14. Real life testing (RLT)
      • 9.14.1. Objective
      • 9.14.2. Operation
      • 9.14.3. Coverage
      • 9.14.4. Limitations and risks
    • 9.15. Other types of tests
      • 9.15.1. Regression tests or non-regression tests (NRTs)
      • 9.15.2. Automated tests
      • 9.15.3. Performance tests
      • 9.15.4. Security tests
    • 9.16. Combinatorial explosion
      • 9.16.1. Orthogonal array testing (OAT)
      • 9.16.2. Classification tree testing (CTT)
      • 9.16.3. Domain testing (DOM)
      • 9.16.4. Built-in tests (BIT, IBIT, CBIT and PBIT)
  • Chapter 10. Static Tests, Reviews and Inspections
    • 10.1. What is static testing?
    • 10.2. Reviews or tests?
      • 10.2.1. What is a review?
      • 10.2.2. What can be subjected to reviews?
    • 10.3. Types and formalism of reviews
      • 10.3.1. Informal or ad hoc reviews
      • 10.3.2. Technical reviews
      • 10.3.3. Checklist-based reviews
      • 10.3.4. Scenario-based reviews
      • 10.3.5. Perspective-based reviews (PBRs)
      • 10.3.6. Role-based reviews
      • 10.3.7. Walkthrough
      • 10.3.8. Inspections
      • 10.3.9. Milestone review
      • 10.3.10. Peer review
    • 10.4. Implementing reviews
    • 10.5. Reviews checklists
      • 10.5.1. Reviews and viewpoint
      • 10.5.2. Checklist for specifications or requirements review
      • 10.5.3. Checklist for architecture review
      • 10.5.4. Checklist for high-level design review
      • 10.5.5. Checklist for critical design review (CDR)
      • 10.5.6. Checklist for code review
    • 10.6. Defects taxonomies
    • 10.7. Effectiveness of reviews
    • 10.8. Safety analysis
  • Terminology
  • References
  • Index