Preface

Software testing is expensive. Testing requires both up-front and recurring investment in labor and assets to reduce the risk of shipping a product that does not meet customer expectations. Good testers and engineers with knowledge and experience applying software-testing best practices are hard to find. By the end of a project, you’ve spent 30%–50% of your engineering budget on software-testing activities,1 including requirements and design testing, unit testing, user acceptance testing, performance testing, and security testing.

Not investing enough in testing can have an even worse impact on your organization’s bottom line. The cost of fixing a defect rises steeply the later in the development cycle it is discovered. Bugs found early, during requirements elicitation, are much cheaper to fix than those discovered while coding or, worse yet, after the software is released. Bugs that do escape to production directly impact customer satisfaction and can ultimately cost you your reputation and business.

The key to controlling quality cost-effectively is to find the right level of testing effort for your project based on the risks associated with the release. Ideally, you want to do just enough testing to remove the most harmful defects prior to the release. Of course, more test coverage is better, but if that coverage comes at too great a cost, you end up overtesting and not getting a good return on your investment (ROI). In other words, over time it becomes harder and harder to find the next defect, and eventually the cost of finding and fixing it outweighs the benefit.

One way to reduce testing costs and improve your ROI is through automation. By removing manual steps, organizations can scale operations faster, more easily, and more cost-effectively. Unfortunately, the current state of the art in test automation still requires significant manual effort. Humans must first understand the software requirements, design and specify test cases, and then manually translate them into machine-readable scripts. A software-testing tool or framework then executes the scripts against the system under test and logs the results. However, the moment the script ends, humans reenter the loop to verify the results and translate them into actionable items. The good news is that advances in artificial intelligence (AI) and machine learning (ML) are being used to bridge the gap between manual and automated software testing.

Who This Report Is For

If you are a technology leader responsible for delivering high-quality software, such as a CTO, VP of engineering, quality director, or engineering manager, and you want to learn how to scale quality efforts cost-effectively through test automation, then this report is for you. Other roles that may benefit from this report include test automation engineers, software engineers, architects, technical leads, and researchers who are using or developing automated software-testing solutions.

What You Will Learn

In this report, I will discuss the grand challenges and limitations of traditional automated testing tools and describe how AI-driven approaches are helping to overcome these problems. We will explore the application of AI/ML to functional, structural, performance, and user-design testing, and we’ll dive into techniques for automating graphically intensive applications such as video streaming and gaming. By the end of the report, you will have a broad view of the applications of AI-driven testing, an understanding of its current benefits and limitations, and insight into the future of this emerging discipline.

1 Capgemini, World Quality Report 2019–20, 11th edition, 2020.
