8.1. Performance Testing Basics

In Chapter 5 you learned about the many different types of functional testing; it was there that you first encountered the types of testing grouped under the "load testing" umbrella. Developers often lump each of these very specific types of testing into the generic term load testing, when in reality each is a strict discipline in itself. In this chapter, the term performance testing is used generically to refer to a test that determines the responsiveness of a web application.

Performance testing, volume testing, load testing, stress testing, scalability testing, and capacity testing all lead to the same goal: ensuring that users will be able to use your web application as it grows. This chapter covers each of these types of testing in detail. You'll learn how to collect performance requirements, how to create performance tests, and how to analyze the results collected through the process. After you've learned how to test a site for performance, you'll learn how to plan for the future, using capacity testing to get a good idea of when it's time to upgrade the web application's server architecture and hardware.

This chapter is not intended to be a guide to increasing the performance of your web application. Instead, you learn how to find the bottlenecks that cause performance problems in your site. But first you should take a moment to learn a few terms that will be used throughout this chapter:

  • Availability. This is the amount of time a web application is available to its users. Measuring availability is important for many applications because of the business cost of outages. Availability issues tend to appear as load on a web application increases.

  • Response Time. The amount of time it takes for a web application to respond to the user's request. Response time measures the time between the user requesting a response from the web application and the completed response arriving at the user's browser. Response time is normally measured as Time To First Byte (TTFB) and Time To Last Byte (TTLB).

  • Scalability. Scalability is the ability to maintain or improve performance (response time) as more resources are added to handle increasing load. A common measurement for scalability is requests per second.

  • Throughput. Throughput is the rate of successful message delivery through the network. An example of throughput would be the number of hits on a website in a given time range.

  • Utilization. Utilization is the percentage of the theoretical capacity of a resource that is being used. Examples include how much network bandwidth is being used and the amount of memory used on a server when 200 users are accessing the web application.
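The two response-time metrics above, TTFB and TTLB, can be approximated with nothing more than the standard library. The sketch below is illustrative only (the function name and the use of `http.client` are my own choices, not from the text); note that `getresponse()` has already parsed the status line and headers, so the "first byte" timed here is the first body byte:

```python
import http.client
import time

def measure_response(host, path="/"):
    """Approximate Time To First Byte (TTFB) and Time To Last Byte (TTLB)
    for a single HTTP GET request. Returns (ttfb, ttlb) in seconds."""
    conn = http.client.HTTPConnection(host, timeout=10)
    start = time.perf_counter()
    conn.request("GET", path)
    resp = conn.getresponse()   # status line and headers have arrived
    resp.read(1)                # first byte of the body arrives
    ttfb = time.perf_counter() - start
    resp.read()                 # drain the remainder of the body
    ttlb = time.perf_counter() - start
    conn.close()
    return ttfb, ttlb
```

A large gap between TTFB and TTLB usually points at payload size or network throughput, while a large TTFB points at server-side processing time.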

8.1.1. What to Expect

As with the other testing disciplines, you should expect an initial learning curve as you come to understand the terminology and how to collect effective performance requirements. You should also plan on spending a fairly large amount of time learning the tools used to implement a successful performance test.

You should expect that the majority of time spent performance testing will be spent capturing, validating, and implementing the performance requirements so you can build an effective performance test. If you don't know what you're looking for, how will you know the performance test passed?

An important consideration before you begin testing is to look into the licensing costs for automated performance testing tools before your budget is set in stone; some performance testing suites carry a hefty price tag. Along with software costs, there may also be a cost associated with the hardware needed for the performance test. The amount of hardware needed to test a small intranet for 200 users is much less than that needed for an external-facing service with 1.5 million users. Later in this chapter you will explore the different types of tools that can be used; some are free or low-cost, while others carry a high price tag.

8.1.1.1. Establishing a Baseline

Throughout the remainder of this chapter you will notice that we are stressing the fact that performance testing should not be something you wait until the very last minute to do; it should be happening as the application is being created. Establishing a baseline test early in the application development cycle will help you prevent performance issues from cropping up just before your release date. Performance baselines establish a point of comparison for future test runs.

One of the most useful baseline tests is measuring transaction response times. For instance, a web application that processes credit card transactions generally involves a third-party service whose API performs the credit card transaction. If you had established a baseline of how long this transaction takes, you would be able to spot performance issues, such as a junior developer who thought it would be a good idea to put a four-second sleep between calls to the third-party API for some strange reason.
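A baseline comparison like the one described above can be automated with a few lines of code. This is a minimal sketch under assumed conventions (the function name, the millisecond units, and the 20 percent tolerance are all illustrative choices, not prescribed by the text):

```python
import statistics

def check_against_baseline(baseline_ms, samples_ms, tolerance=0.20):
    """Compare response-time samples (in milliseconds) from a new test run
    against a previously recorded baseline. Flags a regression when the
    median of the new run exceeds the baseline by more than `tolerance`."""
    median = statistics.median(samples_ms)
    regressed = median > baseline_ms * (1 + tolerance)
    return median, regressed
```

With an 800 ms baseline, a run with a median of 820 ms passes, while a run whose calls suddenly take almost five seconds (the four-second-sleep scenario) is flagged immediately.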

8.1.2. What Are Good Performance Times?

"What sort of times should I expect?" and "How do I know the test passed?" are questions that are frequently asked by developers who are new to performance testing. If you are hoping to learn the generic industry standard for what good performance is, you are out of luck, because no such standards exist. The results collected from performance testing are highly subjective, and can mean something different from person to person. Performance is in the eye of the beholder. It's very common that what a developer may see as an acceptable response time may not be an acceptable response time to a user.

Later in this chapter you will learn requirement-collection techniques that help you find out exactly what acceptable response times are for a given project. This is one reason why testing for performance from the start is such an important concept.

Because so many different factors come into play with how web applications perform, server manufacturers cannot say for certain "this model will serve x requests per second," which may be the magic number a developer is looking for. It is for this reason that performance testing web applications is so important.

When it comes to response times, meaning how fast the page loads for the user, many studies have been conducted, with varying results, on how users react to different response times. Through the collection of performance requirements from many projects and multiple usability studies, the following response time ranges provide a representation of how users react to web applications:

  • Greater than 15 seconds. The user may be thinking "is the application still working?" The attention of the user has most likely been lost, and in many situations this long of a response time is unacceptable. In situations where long response times are expected, such as reporting tasks, a warning that the task is expected to run this long and some type of feedback indicator should be provided.

  • Greater than four seconds. The user's train of thought has most likely been derailed. Computer users tend to be able to retain information about the task they are working on in short-term memory for roughly four seconds. If a complex task has a response time of more than four seconds, you may want to look into increasing the performance of this task.

  • Two to four seconds. Many web applications respond within this time range for a majority of their tasks. The user still has the task in memory and can maintain focus.

  • Less than two seconds. This range is what many users expect to keep a steady workflow. Users will notice the delay, but no special feedback needs to be presented to the user. In data entry applications where data is being entered at a very rapid rate, response times of less than two seconds will help users stay focused on the data they are entering.

  • Sub-second response time. These are tasks that users expect to respond to instantly. Many of these tasks are accomplished on the client side with JavaScript, and do not make trips back to the server. A task such as expanding a menu is a task users expect to happen instantly.
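The ranges above are a rule of thumb rather than a formal standard, but encoding them makes it easy to label measured response times in a test report. A minimal sketch (the function name and band labels are illustrative, not from the text):

```python
def rate_response_time(seconds):
    """Map a measured response time onto the rough user-experience
    ranges described above. Boundaries are a rule of thumb only."""
    if seconds < 1:
        return "instant"
    if seconds < 2:
        return "steady workflow"
    if seconds <= 4:
        return "focus maintained"
    if seconds <= 15:
        return "train of thought lost"
    return "attention lost; provide progress feedback"
```

A report generated from a test run can then flag every transaction that falls into the two slowest bands for investigation.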

8.1.3. Automated Testing Tools

You can't effectively test a web application for performance without using some type of automated testing tool. Asking a few hundred of your closest friends to help you out on a weekend by accessing a website you developed will not satisfy the needs of performance testing, not to mention it may cost you a great deal of pizza! Besides having to correlate the responses from all the users, repeating the same test twice would be nearly impossible.
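To make the point concrete, here is roughly what the core of such a tool does: simulate many virtual users issuing requests concurrently and correlate the timings. This is a deliberately tiny sketch (all names and parameters are my own); real tools add ramp-up schedules, think time, scripting, and distributed agents:

```python
import concurrent.futures
import time
import urllib.request

def run_load_test(url, users=10, requests_per_user=5):
    """Simulate `users` virtual users, each issuing `requests_per_user`
    GET requests, and report simple response-time statistics."""
    def one_user():
        times = []
        for _ in range(requests_per_user):
            start = time.perf_counter()
            with urllib.request.urlopen(url) as resp:
                resp.read()
            times.append(time.perf_counter() - start)
        return times

    all_times = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=users) as pool:
        for result in pool.map(lambda _: one_user(), range(users)):
            all_times.extend(result)

    total = users * requests_per_user
    return {
        "requests": total,
        "avg_s": sum(all_times) / total,
        "max_s": max(all_times),
    }
```

Even this toy version shows why repeatability matters: the same script run twice produces directly comparable numbers, which a crowd of friends never could.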

8.1.3.1. Choosing a Testing Tool

Choosing a performance testing tool can be the most difficult part of the performance testing phase. The goal of all performance testing tools is to simplify the testing process. Many performance tools on the market today allow the user to record activity on a web application and save this activity as a script that can be replayed.

When many people hear the phrase performance testing, they immediately think stress testing. Many of the large testing tool suites on the market today include tools for stress testing web applications, and many of these suites cost a great deal of money. You don't have to sell your kidney to finance your performance testing tools.

  • Visual Studio Team Foundation Test Suite. VSTS includes tools for load and capacity testing. A license for the suite is required for the "controller" (where the results of the test are stored), but no license is required for agents that execute the test.

  • HP Load Runner (formerly Mercury Load Runner). Load Runner has been around for the better part of 10 years and has a very large customer following. Load Runner is a testing suite that allows you to create stress and capacity tests, and it provides tools to assist with creating performance tests from the start — the requirements phase. Licenses are needed for each machine that has a component of Load Runner installed, and each license carries a hefty cost.

  • Web Capacity Analysis Tool (WCAT). The Web Capacity Analysis Tool (WCAT) is an HTTP load generation tool designed to measure the performance of a web application. Like the other tools described, WCAT uses a controller/agent architecture that allows a distributed load to be generated. WCAT is a console application, which deters many managers, testers, and developers from considering it. Nevertheless, WCAT is a very powerful tool that is free and should be considered when selecting a tool for performance testing.

  • RedGate ANTS Performance Profiler. The ANTS Profiler is a code profiling tool that helps you identify bottlenecks and slow-running processes in code. When a load test cannot support the number of users you expected, the ANTS Profiler will help you drill into your code and find the issues.

  • RedGate ANTS Memory Profiler. Alongside the performance profiler, Red Gate also develops a memory profiler to help find memory leaks within your application and ASP.NET website. The ANTS Performance and Memory Profilers are licensed per user at a very affordable cost.

  • Compuware Dev Partner. Dev Partner is a suite of tools developed by Compuware that includes tools for code review, security scanning, code coverage analysis, error detection and diagnosis, memory analysis, and performance analysis. The Performance Analysis tool in Dev Partner includes a code profiler that will help you find slow-running code and memory leaks. Dev Partner is licensed per user and carries a large price tag.

Later in this chapter you will learn how to use a few of these performance testing tools in depth.
